This is the ultimate noob question.
When reading discussions of cryptography, I often come across phrases like these:
…calculates a hash over the primary key…
…a key derivation function over a static string…
…an HMAC over the i-th derived key…
Is “over” in these examples just a hip way to say “of”?
More concretely, is there a real technical difference between the sentences above and their counterparts below?
…calculates a hash of the primary key…
…a key derivation function of a static string…
…an HMAC of the i-th derived key…
For authenticating against a remote API, I need to send a SHA-256 hash calculated from data previously entered in form fields.
That is:
- field 1: username
- field 2: variable data input by the user
- field 3: a hidden field that concatenates the input from fields 1 & 2 (I will use a merge tag for that)
- field 4 (named authhash): I need it to auto-generate the SHA-256 hash of field 3's data; it will be a read-only, hidden field
Can someone help with this case? Thanks a lot.
Is it possible to find the BIOS password hash in a BIOS dump and decrypt it?
For instance, a person could use a flashrom live CD to dump the BIOS with flashrom -r, but how would you then find the BIOS password hash in that dump?
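As a starting point, one common way to hunt for candidate data in a raw dump is to extract runs of printable ASCII, like the Unix strings utility does. A minimal Node.js sketch (the dump filename is an assumption about where flashrom wrote its output):

```javascript
// Scan a raw buffer for runs of printable ASCII of at least minLen bytes —
// a rough equivalent of the Unix `strings` utility.
function printableRuns(buf, minLen = 6) {
  const runs = [];
  let start = -1;
  for (let i = 0; i <= buf.length; i++) {
    const printable = i < buf.length && buf[i] >= 0x20 && buf[i] <= 0x7e;
    if (printable && start < 0) start = i;
    if (!printable && start >= 0) {
      if (i - start >= minLen) runs.push(buf.toString('ascii', start, i));
      start = -1;
    }
  }
  return runs;
}

// Usage on a real dump (filename is an assumption):
// const fs = require('fs');
// const dump = fs.readFileSync('bios_dump.bin'); // produced by flashrom -r
// printableRuns(dump).forEach(s => console.log(s));

console.log(printableRuns(Buffer.from('junk\u0000SETUP_PASSWORD\u0001x')));
// → [ 'SETUP_PASSWORD' ]
```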
I would love to understand at the most basic level what their differences are. When is each one used?
Any advantages and disadvantages?
Estou configurando o meu projeto para implementar o login com o google porem estou tendo algumas dificuldades ele pede a chave sha-1, e não sei quais comandos deve utilizar para gerar a chave.
o google passa este comando mas preciso saber o que inserir nele:
keytool -exportcert -keystore path-to-debug-or-production-keystore -list -v
When I deploy a SharePoint Online Add-in at the Fully Qualified Domain Name (FQDN) myTenant.sharepoint.com, SharePoint mangles the FQDN and instead puts the Add-in at myTenant-[App Hash].sharepoint.com.
This App Hash seems to confuse browsers into thinking I'm attempting Cross-origin Resource Sharing (CORS) or setting up clickjacking attacks. I found a post at  that allowed my client-side Representational State Transfer (REST) calls to go to the same FQDN the page originated from, but I still haven't found a way to safely mitigate the issue where my browsers think I'm possibly trying to set up clickjacking…
Is there a safe way to turn off the App Hash so deploying an Add-In named exampleAddInName to the URL
actually goes to
instead of some weird URL like
 – http://mundrisoft.com/tech-bytes/what-is-the-host-web-url-and-app-web-url-in-sharepoint-add-in/
 – A few posts on the internet such as
suggest I can get around the “This content can’t be shown in a frame” clickjacking safety feature errors I was getting by putting a
<WebPartPages:AllowFraming runat="server" />
line into the pages that I want to host inside “Client Web Part (Host Web)” items. Unfortunately, using AllowFraming in that manner seems dangerous and irresponsible to me…
I am hashing all requests to my server with a secret token known by only the server and my mobile app, to prevent malicious apps from using my servers.
Should I also do this on the responses from the server, to validate the server's identity in the app, or would that be useless?
I'm of course using HTTPS already, with pinned certificates.
I am a 10th grader and I need to do a research project. I am doing a science fair experiment on "The Effect of Different Cryptographic Hash Functions on Decryption Times and Singularity." To test this, I want to know what software/platform I should use to test different hash functions, and also which of the newest hash functions I should use. I only need 3–5 functions, and my control is SHA-256 because it is the most widely used one. What other, newer hash functions should I use, and what makes them special enough to be worth using?
Here’s a link to a good amount of data I found, but it is quite outdated and I wanted to know if there were newer functions: Which hashing algorithm is best for uniqueness and speed?
This can be a science fair experiment because I am testing 200k words against a handful of hash functions; the output is what I will put in my data table. I am sorry if that was confusing.
How do I set the HASH code when hashing in NodeJS?
I have a system written in another language whose passwords are encrypted with SHA-256.
The encryption function there looks like this:
#define HASH_CODE = 'WEASDSAEWEWAEAWEAWEWA';
SHA256_PassHash(HASH_CODE, password, 64);
The first parameter is the HASH code, the second is the value to be encrypted, and the third is the base64.
I managed to do the encryption in NodeJS, but I have no control over the HASH code, so the two systems do not produce the same HASH. How do I set the HASH code when registering in NodeJS so that it can communicate with this other system?
const code = 'WEASDSAEWEWAEAWEAWEWA';
const normal = 'anne';
const crypto = require('crypto');
const encryptado = crypto
  .createHash('sha256')
  .update(normal)
  .digest('base64');
console.log(encryptado);
I’d like to know if this data compression scheme would work or not, and why:
Suppose we have a file. If we treat the bits that make up the file as the binary representation of a number n, we have n (of course, if the first bit is zero we flip every bit so that n is unique). Now we have the number n, and a boolean that informs us whether to flip all the bits of the binary representation of n or not.
My idea was to approximate n from below (e.g. by finding a relatively large number raised to a relatively large power, such as 17^6038) and then compute arbitrary hashes for every number from this approximation up to the real n, counting the number of collisions. When we finally reach n, we have the "collision state" of the hashes, and we output the compressed file, which basically contains information about how to reach the approximation of n (e.g. 17^6038) and the "collision state" at n (note that this "collision state" must also occupy very few bits, so I'm not sure this is possible).
The decompression procedure would do a very similar process: it would approximate n (e.g. compute ~n as 17^6038) and then hash (i.e. apply a function and check the result) every single number (we could also check every 5 numbers, or another divisor of n − ~n) until the "collision state" is the same as the one specified in the compressed file. Once everything matches, we have n. Then it would just be a matter of flipping every bit or not (as specified in the compressed file) and writing the result out to a file.
Could this work? The only problem I can think of (besides the processing time required) is that the number of collisions could be extremely large.