Why do two different ways of doing the same thing differ so much?


    a = 1.0
    i = 0
    while a != 0:
        a = a/2
        i = i+1
        print(a,i)

The above program ended at i=1075 (I know that ideally the while loop shouldn't have ended; I assume the reason it stopped was the memory limit of the computer). Now, look at the next program.

    a = 1.0
    b = 1.0
    i = 0
    while a+b != b:
        a = a/2
        i = i+1
        print(a,i)

This program ended at i=53. Why is there such a big difference? Why does the first program run for 1075 iterations, while the second runs for only 53? Also, when I tried a+b+c != b+c it went to just 52, and a+b+c+d != b+c+d went to 51. Why isn't there a big difference between those?
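
For context, the two limits involved here can be inspected directly; a minimal Python sketch using only the standard library:

    import sys

    # Smallest positive subnormal double, 2**-1074: the first loop halves `a`
    # until it underflows past this value to 0.0, hence about 1075 iterations.
    print(sys.float_info.min * 2**-52)   # 5e-324

    # Machine epsilon, 2**-52: once a <= 2**-53, the sum 1.0 + a rounds back
    # to 1.0, so the second loop stops after about 53 iterations.
    print(sys.float_info.epsilon)         # 2.220446049250313e-16
    print(1.0 + 2**-53 == 1.0)            # True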

What is this function syntax doing? [duplicate]

I’m trying to work my way through another person’s notebook. I came across this function definition:

    H[cpl_] := H[cpl] =
      Function[y,
        Sqrt[(1. + y + RneuT[cpl][y] + RL[cpl]*y^4)/2./
          (1 + RneuT[cpl][1]/2. + RL[cpl]/2.)]/y]

What's going on here with the H[cpl_] := H[cpl] = … syntax? This appears to be explicitly assigning the return value – which is, itself, a function – to the return value of this function. I can't find any documentation on this syntax. Could anyone explain what's going on or give me a relevant link?
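
For comparison only, the idea resembles caching a computed function object per argument; a rough Python sketch of that caching pattern (the inner formula is a placeholder, not the notebook's expression):

    import functools

    @functools.lru_cache(maxsize=None)
    def H(cpl):
        # Runs once per distinct cpl; the returned closure is then cached, so
        # repeated calls with the same cpl skip the setup work entirely.
        def inner(y):
            return (1.0 + cpl * y) ** 0.5 / y   # placeholder expression
        return inner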

How does RFC 7636 (PKCE) stop a malicious app from doing the same code challenge and getting legitimate access to the API?

As per RFC 7636, it stops malicious apps that pretend to be legitimate apps from gaining access to OAuth 2.0 protected APIs.

The flow suggests having a runtime-level secret that is generated by the client and made known to the authorization server. This allows the token issuer to verify the secret with the authorization server and grant a proper access token.
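
A minimal sketch of that runtime secret, assuming the S256 method defined in the RFC (Python, standard library only):

    import base64, hashlib, secrets

    # The client generates a one-off, high-entropy code_verifier at runtime...
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

    # ...and sends only its SHA-256 hash, the code_challenge, with the authorization request.
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode()).digest()
    ).rstrip(b"=").decode()

    # The later token request must include the original code_verifier; the
    # authorization server hashes it and compares against the stored code_challenge.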

However, let's assume a malicious app, as the RFC suggests, with a correct client_id and client_secret; it could do the same PKCE process and gain access to protected resources.

Is this RFC not meant to protect against those kinds of attacks, or am I simply missing something here?

Would all melee PC attacks doing alignment damage be unbalanced at low levels?

For a campaign, I am currently discussing giving the PCs the ability to convert all melee and unarmed damage into alignment damage instead of normal physical, slashing, etc. damage.

One of the players then realized that this makes the attacks extremely powerful and unbalanced at 1st level and other low levels.

Now I am wondering: is there anything I have overlooked that means having this ability starting at lower levels or level 1 could unbalance things in a normal Golarion campaign?

How do we cross-verify if the device is doing exactly what it is supposed to do?

I am very sorry for the misleading and confusing title; it was the best I could think of.

What I meant to ask is: how do we know that any device is doing what it is supposed to do? For example, Android is an open source OS (ignore Google libraries for now), and they claim that all passwords are stored on the device only. But what if they are actually storing them on their servers, and that piece of code is not in the open source version but only in the pre-compiled libraries? How do we check that the code on the actual phone is the same as the open source version? The same goes for other devices: iPhones, routers, desktops, etc.

Also, most manufacturers nowadays have encryption enabled, which makes it impossible to monitor the actual content of the TCP/IP packets.

We can always remove the existing OS and install the open source version, but that is not possible in all cases: in some it might be really confusing and might even require a lot of extra equipment that people don't usually have.

So my general question is: how do we verify that the pre-compiled binaries contain the same code as the open source version? I can think of reverse engineering, but that would require knowledge and skills that most people don't have.
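
One relevant building block is a reproducible build: if building the published source yields a bit-for-bit identical binary to the one that ships, a hash comparison settles the question. A minimal sketch of that comparison (the file paths are hypothetical):

    import hashlib

    def sha256sum(path, chunk_size=1 << 20):
        # Hash the file in chunks so large firmware images fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    shipped = sha256sum("vendor_firmware.img")      # hypothetical path
    rebuilt = sha256sum("my_reproduced_build.img")  # hypothetical path
    print("identical" if shipped == rebuilt else "differs")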

Two processes doing extensive calculations – I want one to get ~100% of processor time – how?

I am running a basic Ubuntu server with two processes:

- process 1: performs calculations 100% of the uptime; I use it to share computing power with the community (it runs at priority 19)
- process 2: performs calculations for 5-10 minutes from time to time; I use it for my own computations (it runs at priority -19)

I want process 2 to be given 100% of the computing power (process 1 should at that moment get close to 0% of the CPU). But the best I get is 50% of the CPU for process 1 and 50% for process 2 (checked with htop).

I don't want to manually stop/start any process when I need computing power (both processes must be running all the time); process 2 must be given 100% of the CPU automatically.

What should I do to achieve my goal? Thanks.
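
One possible approach, assuming a Linux kernel and permission to re-schedule process 1: put it into the SCHED_IDLE class, so it only runs when nothing else wants the CPU. A minimal Python sketch (the PID is hypothetical):

    import os

    # Move the community worker (process 1) into SCHED_IDLE: it then gets CPU
    # time only when no normal-priority task is runnable, so process 2
    # effectively receives ~100% of the CPU whenever it is computing.
    pid_of_process_1 = 12345  # hypothetical PID
    os.sched_setscheduler(pid_of_process_1, os.SCHED_IDLE, os.sched_param(0))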