## Decrypting AES-128-CBC: the first block is correct, the rest corrupt

I’m currently investigating a piece of software which encrypts its files with AES-128-CBC.

From disassembly it is known for certain that this is the algorithm used (log messages plus calls to the BCrypt library).

The key and IV are static and stored within the executable as a blob of 96 bytes, which is split using a set of XOR loops into 2 blobs of 16 bytes — one for the key, and one for the IV.

I have been able to reproduce the same algorithm and acquire both the key and the IV.

However, when I try to use the recovered key and IV to decrypt the file, using either tiny-AES or the OpenSSL command-line tool, I get part of the correct decrypted header (containing human-readable text), followed by a run of zero bytes, and then what looks like the original encrypted data again.

In CBC mode, decrypting with an incorrect IV should corrupt only the first block of plaintext; subsequent plaintext blocks should still come out correct.

However, my case seems to be exactly the inverse. Moreover, even if I set the IV to all zeros during decryption, I still get the header, but not the data after it.
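To convince myself which blocks depend on the IV, I wrote a toy model of CBC chaining (this is NOT real AES; the “block cipher” here is just an XOR with a made-up 16-byte key, purely to illustrate the chaining):

```python
# Toy CBC model: the "block cipher" is XOR with a fixed 16-byte key.
# This is NOT AES; it only illustrates which blocks the IV affects.
KEY = bytes(range(16))

def toy_encrypt_block(block):
    return bytes(b ^ k for b, k in zip(block, KEY))

toy_decrypt_block = toy_encrypt_block  # XOR is its own inverse

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, iv):
    out, prev = [], iv
    for i in range(0, len(plaintext), 16):
        prev = toy_encrypt_block(xor(plaintext[i:i + 16], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ciphertext, iv):
    out, prev = [], iv
    for i in range(0, len(ciphertext), 16):
        block = ciphertext[i:i + 16]
        out.append(xor(toy_decrypt_block(block), prev))
        prev = block
    return b"".join(out)

pt = b"A" * 16 + b"B" * 16 + b"C" * 16
iv = b"\x01" * 16
ct = cbc_encrypt(pt, iv)

# With the right IV everything round-trips; with a wrong IV only the
# FIRST plaintext block is corrupted, because later blocks are chained
# off the previous ciphertext block, not off the IV.
wrong = cbc_decrypt(ct, b"\x00" * 16)
print(wrong[:16] == pt[:16], wrong[16:] == pt[16:])  # False True
```

So if the header decrypts but everything after it is garbage, the IV is probably not the culprit; I would suspect the key, the mode, or that the later blocks are not actually plain CBC.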

Am I missing a critical point in how to apply the algorithm properly? Or could it be that the BCrypt implementation on Windows differs from tiny-AES and OpenSSL on Linux?

## Is this general form of Von Neumann’s reduction postulate correct?

I have been looking at the book ‘Quantum Measurement’ by Braginsky and Khalili$^1$. It contains an equation that I would like confirmation of. The equation seems odd, in that it sets the probability of a measurement result equal to the trace of some matrix. The equation in question is (2.7) in Section 2.5 of the book, ‘von Neumann’s postulate of reduction’. I quote:

$$w_n=Tr(|q_n\rangle\langle q_n|\hat{\rho}_{init}) \tag{2.7}$$

My question is: is (2.7) correct?

Reference

1) Vladimir B. Braginsky and Farid Ya. Khalili, Quantum Measurement, ed. Kip S. Thorne, Cambridge University Press, first paperback edition (with corrections), 1995.

Other Info

In the book, the second part of the general form of the postulate is (2.8). It applies to the density matrix associated with the state of the system after the measurement:

$$\hat{\rho}_n = \frac{1}{w_n} |q_n\rangle\langle q_n| \hat{\rho}_{init} |q_n\rangle\langle q_n| \tag{2.8}$$

(2.7) is supposed to apply to a quantum system that is initially in a state with associated density operator $$\hat{\rho}_{init}$$. The system may consist of two particles (or, I assume, an arbitrary number). Here $$w_n$$ is the probability of obtaining the measurement result $$q_n$$ for an observable $$q$$ with associated operator $$\hat{q}$$; this operator has a discrete set of eigenvalues, and we have $$\hat{q} |q_n\rangle = q_n |q_n\rangle$$. The book also gives the one-degree-of-freedom version of von Neumann’s postulate, in which the probability of a measurement result is given by (2.6):

$$w_n=\langle q_n|\hat{\rho}_{init}|q_n\rangle \tag{2.6}$$
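As a sanity check (my own working, not from the book): expanding the trace in (2.7) in the orthonormal eigenbasis $$\{|q_m\rangle\}$$ of $$\hat{q}$$ reduces it to the one-degree-of-freedom form (2.6),

$$Tr(|q_n\rangle\langle q_n|\hat{\rho}_{init}) = \sum_m \langle q_m|q_n\rangle\langle q_n|\hat{\rho}_{init}|q_m\rangle = \langle q_n|\hat{\rho}_{init}|q_n\rangle,$$

since $$\langle q_m|q_n\rangle = \delta_{mn}$$, so the two statements are at least consistent with each other.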

## Change / correct desktop to display mapping

I have a MacBook laptop running Mojave. I work at two desks (in different locations), each with two Dell monitors that I connect to the laptop via DisplayPort.

This works well; however, when I switch between my desks, the desktops are mapped to different monitors. The windows and icons that were on my left monitor at the other desk are now on the right monitor, and conversely, the windows and icons that were on the right monitor are now on the left one.

Note that the display arrangement is correct: I can move my mouse from one screen to the other in the same way that they are physically set up (which is the same at both locations).

Hence my question: how do I pin each desktop (with its windows and icons) to a particular screen, so that the windows are placed the same way regardless of where I work?

## Correct Python script structure

I’ve been programming in Python for some time now, and I’ve always used the structure below to organize my code. Lately, I’ve been wondering whether this structure is an ideal and correct one to use in Python generally, both in terms of being Pythonic and performance-wise.

The structure I follow in my scripts is as below:

```python
import sys

def main(argv):
    """
    Main entry for execution of the program
    """
    try:
        call_func1()
        call_func2()
        call_func3()
    except some_specific_exception_type1 as e:
        do_something
        raise
    except some_specific_exception_type2 as e:
        do_something
        raise
    except Exception as e:  # catch all other exceptions that might occur
        do_something
        raise

def call_func1():
    try:
        do_something_in_here
    except some_specific_exception_type1 as e:
        do_something
        raise
    except some_specific_exception_type2 as e:
        do_something
        raise
    except Exception as e:  # catch all other exceptions that might occur
        do_something
        raise

def call_func2():
    try:
        do_something_else_in_here
    except some_specific_exception_type1 as e:
        do_something
        raise
    except some_specific_exception_type2 as e:
        do_something
        raise
    except Exception as e:  # catch all other exceptions that might occur
        do_something
        raise

if __name__ == '__main__':
    main(sys.argv[1:])
```

As you can see, I have a central point in the script where execution of the program starts (in this case the main() function), and from there I call other functions that perform some type of action.

Each of the functions (whether it is the central starting point or one of the others that do specific work) handles the exceptions that might be thrown by the actions it performs and then re-raises them.

This way, I have a central point where execution of the program starts, but also a central point where execution will finish, since all functions will return to main() whether they succeed or throw an exception (which they re-raise).
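To make the skeleton concrete, here is a minimal runnable version of the same structure (the function name load_config and the specific exception types are hypothetical examples of mine, not from any particular program):

```python
import sys

def load_config(path):
    """Worker function: handles its own expected exceptions, then re-raises."""
    try:
        with open(path) as fh:
            return fh.read()
    except FileNotFoundError:
        print(f"config not found: {path}", file=sys.stderr)
        raise
    except OSError:
        print(f"could not read config: {path}", file=sys.stderr)
        raise

def main(argv):
    """Central entry point: every helper re-raises, so all failures surface here."""
    try:
        load_config(argv[0] if argv else "app.conf")
    except FileNotFoundError:
        return 1  # translate the expected failure into an exit code
    except Exception:
        return 2  # anything unexpected
    return 0

if __name__ == '__main__':
    main(sys.argv[1:])
```

The single exit point in main() is then also the natural place where expected failures get translated into exit codes.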

I was wondering whether implementing this kind of structure is a correct thing to do: is it Pythonic, does it affect the performance of the program, and is it the right way to handle errors?

## Is “Unearthed Arcana: Players Make All Rolls” Correct?

In this Unearthed Arcana, they give rules so that players can make all the rolls, rather than the DM sometimes rolling for enemies. In one section, there are specifically rules for converting saving throw bonuses into an equivalent DC, which the players roll against, rather than the DM. However, after doing some math, I think they’ve made a mistake.

They say that you convert a saving throw into a Saving Throw Check by adding 11 to the defender’s saving throw modifiers and using that as the DC for the check. The player then rolls against this DC, adding their spellcasting ability modifier and proficiency bonus. However, according to the following math, this conversion does not produce the same results.

In this math, I compare the chance of a player succeeding on a Saving Throw Check, using the rules in the UA article, with the chance of a monster failing its saving throw under the standard PHB rules. If the conversion were correct, these two chances should be the same.

These are the formulas I used.

Chance to succeed a Saving Throw Check = (20 - DC + 1 + proficiency + casting mod) / 20

Chance to FAIL a Saving Throw = (DC - 1 - save proficiency - save mod) / 20

Assuming a proficiency bonus of +2, +0 for all ability mods, and no save proficiency:

Monster’s Saving Throw Check DC = 11 + save mod = 11

Chance to succeed a Saving Throw Check at DC 11: (20 - 11 + 1 + 2 + 0) / 20 = 12/20 = 60%

Player’s save DC = 8 + proficiency + spellcasting ability mod = 10

Chance for the monster to fail a saving throw at DC 10: (10 - 1 - 0 - 0) / 20 = 9/20 = 45%
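As a quick arithmetic check, here are the two formulas above re-evaluated in Python (the variable names are mine):

```python
# Plugging the question's numbers into both formulas:
# proficiency +2, all ability mods +0, no save proficiency.
prof, cast_mod, save_prof, save_mod = 2, 0, 0, 0

# UA variant: the player rolls against DC 11 + the defender's save mod.
ua_dc = 11 + save_mod
p_check_success = (20 - ua_dc + 1 + prof + cast_mod) / 20

# Standard PHB rules: the monster saves against DC 8 + prof + casting mod.
phb_dc = 8 + prof + cast_mod
p_save_fail = (phb_dc - 1 - save_prof - save_mod) / 20

print(p_check_success, p_save_fail)  # 0.6 0.45
```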

If this were a valid conversion, both formulas would produce the same number: the player’s chance of success and the monster’s chance of failure, respectively. However, they are off by 15 percentage points. This suggests that the Unearthed Arcana rules are not a valid conversion.

Are the Unearthed Arcana rules wrong, or is my math wrong?

## Is this correct about subnetting?

Say I’ve been issued the address 176.168.2.50/18

First I would need to find out the network part before I start subnetting. If the boundary falls in the third octet (since it’s a /18), why would they give me a .50 in the fourth octet, since I’m just going to get rid of it when I start subnetting and make my own subnet masks?

After I’ve found the network part I can start subnetting. But what if I subnet so much that I reach the network part? Do I just have to stop subnetting at that point?

I get how to do the calculations, but I’m not too sure how all this actually works in the real world.
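For what it’s worth, Python’s standard-library ipaddress module can show the split for the exact address in the question (the /20 below is just an arbitrary example of borrowing two more bits):

```python
import ipaddress

# The address from the question: /18 puts the network/host boundary
# 2 bits into the third octet.
iface = ipaddress.ip_interface("176.168.2.50/18")
print(iface.network)   # 176.168.0.0/18 (the network part; the 2.50 sits in the host bits)
print(iface.netmask)   # 255.255.192.0

# Borrowing 2 more bits (/20) splits the /18 into 4 subnets:
for net in iface.network.subnets(new_prefix=20):
    print(net)
```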

## Not receiving emails in Google Apps although MX records are correct

I am not sure what to do to troubleshoot further:

I did recently move my domain to CloudFlare (though I have always used CF DNS), but I did that on March 8th, and my last email arrived on April 9th.

I have used https://toolbox.googleapps.com/apps/checkmx/ which came back all fine except for DKIM / DMARC / SPF, which have never been set up properly. So Google sees the DNS pointing to it properly.

I’m not sure what to do or what to troubleshoot next.

## Why do Warlocks only have spells up to 5th level? What’s the correct progression for their slots?

Instead of a “Spell Slots for Spell Level” table, the warlock just has two columns: one for number of slots and one for slot level, because all of their slots are the same level.

However, the table on PHB p. 106 only goes up to slot level 5. It looks like the higher levels are misprinted: they should go up to 9th-level slots, since the warlock’s spell list has spells of 6th, 7th, 8th, and 9th level. What is the correct progression? Is a warlock’s slot level the same as the maximum slot level of a wizard of the same class level?

## What is the correct limit on PC “unanswered invitations” in EasyChair?

We would like to send more PC invitations in EasyChair, but we had reached the 100-invitation limit of the free license. We then purchased the Professional license, because the EasyChair help page said the Pro license has a 300-invitation limit.

Now, when we try to add more invitations, EasyChair still will not allow more than 100 unanswered invitations, and I cannot figure out why. It confirms that we have the Professional license, too.

## Is my proof that the monotone convergence theorem implies the Archimedean property correct?

I have proved that the monotone convergence theorem implies the nested interval property, and now I have come up with a “proof” that the monotone convergence theorem implies the Archimedean property.

\begin{proof} Assume towards a contradiction that $\mathbb{N}$ is bounded above. The sequence $a_{n} = n$ is monotonic and bounded, so by the monotone convergence theorem it converges to some $\alpha$. Now fix $\epsilon = 1$. Then there exists some natural number $N_{1}$ such that $n \ge N_{1}$ implies $|a_{n} - \alpha| < 1$. So look at the term $a_{N_{1}}$, which is really $N_{1}$: we have $|N_{1} - \alpha| < 1$, meaning $\alpha < 1 + N_{1}$. But $\alpha$ is the limit of an increasing sequence, so $\alpha \ge a_{n}$ for every $n$; in particular $\alpha \ge a_{N_{1}+1} = N_{1} + 1$, so $\alpha$ cannot be less than the natural number $1 + N_{1}$, a contradiction.

\end{proof}

I have a feeling this proof is wrong, but I think I am on the right track.