## Hashes coupon collector

The story:

A sports card store manager has $$r$$ customers who together wish to assemble an $$n$$-card collection. Every day, a random customer arrives and buys his favorite card (that is, each customer is associated with a single card), even if he has purchased that same card before. How many days will pass before the customers complete their collection?

Formally, let $$n\le r$$ be integer parameters, and let $$h:[r]\to [n]$$ be a random projection from $$[r]$$ to $$[n]$$ (i.e., it maps every element uniformly and independently). How many random samples (with replacement) $$(x,h(x))$$ do we need to draw before we see all $$n$$ possible values of $$h$$?

Clearly, there is some chance that $$h$$ is not onto, in which case the collection can never be completed, so the expectation of the required number of samples is unbounded.

I’m interested in a bound of the form:

• After $$T(r,n,\delta)$$ samples, with probability at least $$1-\delta$$, we have seen all possible $$n$$ values.

For example, if $$r=3,n=2$$, there is a probability of $$1/4$$ that $$h$$ is not onto. If it is onto, then after collecting $$4$$ cards, the chance of not having seen both values of $$h$$ is $$(1/3)^4+(2/3)^4\le 0.21$$. By a union bound, $$1/4+0.21=0.46$$, so $$T(3,2,0.46)=4$$ is a correct upper bound.
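The bound above can be sanity-checked empirically. The sketch below simulates the process and reports the smallest $$t$$ whose empirical failure probability is at most $$\delta$$ (the function and parameter names are mine, not from any library):

```python
import random

def days_to_complete(r, n, rng):
    """Sample a random map h:[r]->[n]; if it is onto, return how many
    uniform draws from [r] it takes to see every value of h, else None."""
    h = [rng.randrange(n) for _ in range(r)]
    if len(set(h)) < n:          # h is not onto: the collection never completes
        return None
    seen, days = set(), 0
    while len(seen) < n:
        seen.add(h[rng.randrange(r)])   # a uniform customer buys their card
        days += 1
    return days

def estimate_T(r, n, delta, trials=10_000, seed=1):
    """Smallest t whose empirical failure probability is at most delta."""
    rng = random.Random(seed)
    results = [days_to_complete(r, n, rng) for _ in range(trials)]
    t = 1
    while sum(1 for d in results if d is None or d > t) / trials > delta:
        t += 1
    return t
```

For the worked example, `estimate_T(3, 2, 0.46)` lands at 4, matching the union-bound calculation.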

## Where are NTLM and LM hashes stored in a password-protected Microsoft presentation file?

I have a password-protected presentation file (MS Office 2003).

My assignment required me to either remove the password or find it.

In my research I found out that presentation files use NTLM and/or NT hashes. I also found out that office2john looks like the right tool for the job.

Now my questions: How does office2john extract the hashes? Where are they located in the file? Can you explain that to me, or point me to some documentation that explains it?

## What is the point of providing shaXXX hashes for downloads served over TLS?

What is the point of providing shaXXX hashes for downloads of software over, say, TLS, when any attacker who could change the download could just as easily have changed the hash? Isn't there enough information in the download itself to know that it is corrupt? It just seems brain-dead to me.
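For context, the published digest only adds security when it is obtained over a channel separate from the download itself (a signed release note, a different host, a key-signed checksum file). The local verification step it enables can be sketched as:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 and return the hex digest,
    to be compared against a digest published out-of-band."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

If the digest is served from the same page over the same TLS connection, it mainly detects accidental corruption or a broken mirror, not a capable attacker.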

## Can we implement custom algorithms to encode and decode wifi password hashes between Windows 10 and our Router?

Windows 10 has its own way of encrypting hashes, but these can be brute-forced by hashcat. One of our students created two password encryption algorithms of his own in Python (one to encode, one to decode). Is there a way we can implement his algorithms on our Windows 10 PC and router, such that the router encodes the Wi-Fi password, creates a hash, and sends it to the PC, which reads the hash and then uses the decode algorithm to recover the correct password?

## How to make John the Ripper output example hashes for a given hash type?

Is it possible to make John the Ripper output example hashes for a given hash type given by the --format= option?

This is possible using Hashcat, but currently I look in John the Ripper’s source code for example hashes, which is rather slow.

Any ideas?

## P2P getHeaders message – how to get block locator hashes

I’m trying to build my own block explorer, so I need to use the Bitcoin P2P protocol. For that I’m using the btcd (Golang) library.

According to the Bitcoin book, I have to send a version message (and receive a verack message back) to initialize the connection; then I will send getheaders messages in a cycle.

Now, imagine I am a new peer and want to download the whole blockchain (actually I want to receive block hashes with getheaders (2000 per tick), and for each block hash I want to fetch the block itself, process it, and put the block data in my MongoDB database).

If I am a new peer in the network, how should I send the first (and subsequent) getheaders messages? I don’t quite understand where I should get the block locator hashes and how to form the message.
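For background, a block locator is a list of block hashes from the peer's own best chain, densely spaced near the tip and exponentially sparser toward the genesis block; a brand-new peer that only knows the genesis block sends a locator containing just the genesis hash. The heights that go into the locator can be sketched as (a language-neutral illustration, not btcd's actual code; btcd's `blockchain` package exposes its own locator builder):

```python
def block_locator_heights(tip_height):
    """Heights whose block hashes go into a getheaders block locator:
    the ~10 most recent blocks one by one, then doubling steps back,
    always ending with the genesis block (height 0)."""
    heights, step, h = [], 1, tip_height
    while h > 0:
        heights.append(h)
        if len(heights) >= 10:   # past the recent window, space out exponentially
            step *= 2
        h -= step
    heights.append(0)            # genesis is always included
    return heights
```

The remote peer scans the locator front-to-back, finds the first hash it recognizes, and starts sending headers from there, so a stale or forked tip still resynchronizes correctly.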

## If a password hash algorithm is broken, is it best practice to re-hash the hashes with a new hash? [duplicate]

• Possible duplicate: Is there any recommended approach for “upgrading” MD5 hashes to something secure?

Suppose you store a bunch of hashed passwords, but your hashing algorithm gets broken. What is the best practice?

It seems like the only safe practice would be to take the old password hashes (hashed with the semi-broken algorithm Hash1()) and hash them again with a new algorithm not known to be broken (Hash2()). Then, when a user enters their password, you compute Hash2(Hash1(password)) to see if it matches.
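The wrapping scheme described above can be sketched as follows, with MD5 standing in for the legacy Hash1 and salted PBKDF2 for Hash2 (both choices are illustrative assumptions, not a recommendation):

```python
import hashlib, os

def hash1(pw: bytes) -> bytes:
    # stand-in for the legacy, semi-broken Hash1 (MD5 chosen for illustration)
    return hashlib.md5(pw).digest()

def hash2(data: bytes, salt: bytes) -> bytes:
    # stand-in for the new Hash2; a real deployment would use a dedicated
    # password hash such as bcrypt, scrypt, or Argon2
    return hashlib.pbkdf2_hmac("sha256", data, salt, 100_000)

# one-time migration: wrap every stored legacy hash, no plaintexts needed
salt = os.urandom(16)
stored = hash2(hash1(b"correct horse"), salt)

def verify(entered: bytes) -> bool:
    # at login, apply the same composition Hash2(Hash1(password))
    return hash2(hash1(entered), salt) == stored
```

The key property is that the migration runs over the stored hashes alone, so no user has to log in or reset a password for the upgrade to take effect.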

This seems like the only logical conclusion, but I’ve never heard this recommended anywhere as a best practice. Is this a known best practice that’s already documented somewhere? Or is there an error in this reasoning, or a simpler way to achieve the same thing?

## I need help cracking NTLM hashes

I need help cracking these NTLM hashes: 853889953055D66CF11F6BE1F261E50A and 8E507986CFC0085BB1D7B8A7C34A54AC (character set: mixalpha-numeric).
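Assuming “mixalpha-numeric” means mixed-case letters plus digits, a hashcat mask attack might look like the following (`-m 1000` is hashcat's NTLM mode, `-a 3` is mask attack; the maximum length of 8 is an assumption):

```
hashcat -m 1000 -a 3 -1 ?l?u?d --increment hashes.txt ?1?1?1?1?1?1?1?1
```

Here `-1 ?l?u?d` defines a custom charset of lowercase, uppercase, and digits, and `--increment` tries all lengths up to the mask length.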

## Is there a collision free Hash Function for dynamic data that preserves already created hashes?

I am familiar with the concept of hash functions and fingerprints, but I am new to implementing them and to the specific characteristics of all the hash functions out there.

## What I need

Say I have a table of employee data {FirstName, LastName, Birthday, …} which is dynamic: new employees join the company, old employees leave. I want to hash the employee data. I need to make sure that the data of a new employee is never hashed to the same value as that of any other employee who has ever been in the database.

In this sense the data set is append-only. While deleting the data of a retiring employee is mandatory, I can store the hash that was once linked to that employee. (Which is most likely of no use, because the stored hash of a past employee’s data will not evaluate to itself if hashed again 🙁)

Hashing does not need to be cryptographic, yet hashes must not be easily traced back to employee data. By this I mean there should be no efficient method to calculate employee data from hash values. Since information is lost in the process of hashing, I would assume that this requirement is easy to meet.

## The Goal

I want to reduce the size of the hash (meaning bit size) as much as possible.

I would assume that, without guaranteed collision freedom, I need to pick a fairly large hash space (2^32 or bigger) to ensure a tolerable risk of collision. Avoiding this is the main interest behind the question.

I must guarantee that a new employee’s data is never hashed to the same value as any prior employee’s data. I can make assumptions like: “Given infinite time, there will never be more than 1,000,000 people in total working at or retired from the company.” So the total number space of hashes is fixed.
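To put numbers on the size/risk trade-off: for random (non-engineered) hashes the collision risk follows the birthday bound, which for 1,000,000 records can be sketched as:

```python
import math

def collision_prob(n: int, bits: int) -> float:
    """Birthday bound: approximate probability that at least two of n
    uniformly random `bits`-bit hashes collide, 1 - exp(-n(n-1)/2^(bits+1))."""
    return 1.0 - math.exp(-n * (n - 1) / 2.0 ** (bits + 1))

p32 = collision_prob(1_000_000, 32)   # essentially certain to collide
p64 = collision_prob(1_000_000, 64)   # on the order of 10^-8
```

So 32 bits is far too small for a million records, while 64 bits already pushes the collision probability below one in ten million; an absolute guarantee, however, requires tracking assigned values (or assigning IDs) rather than hashing alone.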

Is this solvable? If not, what would be the best hashing procedure that assures maximum collision resistance (Rabin’s fingerprinting?)

## Are the SHA-1 hashes used by common SSH configurations insecure?

I got an automated PCI security test result that checked various server configurations. The automated test determined the server to be unsafe due to the use of the SHA-1 algorithm in some elements of the SSH configuration.

The configuration can be seen when running ssh -vvv, so here’s the relevant part of that output. I snipped out the other algorithms available on this particular server, but several are available.

```
debug2: KEX algorithms: ...snip...diffie-hellman-group14-sha1
debug2: MACs ctos: hmac-sha1...snip...
```

It’s the use of:

• diffie-hellman-group14-sha1 in the key exchange algorithms
• hmac-sha1 in the MACs from client to server

I’ve searched this site a bit and I don’t see much data about whether these algorithms are (1) still in common use and (2) considered insecure for a PCI-compliant site in 2019.
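For reference, scanners that flag these typically want the SHA-1-based algorithms removed from the offered lists. One common remediation, assuming a reasonably recent OpenSSH (7.3+ for `diffie-hellman-group14-sha256`), is to pin the allowed algorithms in the server configuration:

```
# /etc/ssh/sshd_config — offer only SHA-2-based KEX and MACs
KexAlgorithms curve25519-sha256,diffie-hellman-group14-sha256
MACs hmac-sha2-256,hmac-sha2-512
```

Restart sshd afterwards and re-run `ssh -vvv` to confirm the SHA-1 entries are gone from the negotiated lists.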