Determining how many sectors to sample in order to validate a data wipe procedure


I am doing some research into validating implementations of hardware-optimized data wipe procedures in solid-state storage devices, such as the ATA8 Security Mode Erase and NVMe Secure Erase procedures.

As I have attempted to define what "success" means in this context, I have established that a key measure would be: "it is possible to demonstrate a change in the value of a sector X of the storage medium between observations taken pre- and post-wipe."

The most rigorous approach would be to make a copy of every sector, conduct the wipe procedure, then compare each sector’s new value with the reference copy and confirm that it is different. However, this is extremely time-consuming and only really practical in a lab environment.
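Concretely, something like the following is what I mean by the exhaustive approach. This is only a minimal sketch, assuming a Linux raw block device and a 512-byte logical sector size (both assumptions; the device path is a placeholder):

```python
import hashlib

SECTOR_SIZE = 512  # assumed; many modern drives report 4096-byte logical sectors

def sector_hashes(device_path: str) -> list[bytes]:
    """Read the device end to end and return one SHA-256 digest per sector."""
    hashes = []
    with open(device_path, "rb") as dev:
        while True:
            sector = dev.read(SECTOR_SIZE)
            if not sector:
                break
            hashes.append(hashlib.sha256(sector).digest())
    return hashes

pre = sector_hashes("/dev/sdX")    # placeholder device name
# ... run the ATA/NVMe erase procedure here ...
post = sector_hashes("/dev/sdX")

unchanged = sum(1 for a, b in zip(pre, post) if a == b)
print(f"{unchanged} of {len(pre)} sectors unchanged across the wipe")
```

Even storing one 32-byte digest per sector amounts to tens of gigabytes of reference data for a modern drive, which is part of why I consider this lab-only.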

At the opposite extreme, simply checking that the initial sectors of the medium, where the file-system structures are held, are no longer valid is not sufficient, as the actual data remains easily recoverable in their absence.

The middle ground, then, appears to be to record observations of a number of sectors randomly selected from across the medium, conduct the wipe, and then compare. I believe the key is to determine, in some formal fashion, how many sectors to sample in order to have any confidence in the outcome.
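Here is a minimal sketch of what I have in mind, under the same assumptions as above (Linux block device, 512-byte sectors, placeholder device path). `N_SAMPLES` is an arbitrary placeholder, and choosing it defensibly is exactly my question:

```python
import hashlib
import os
import random

SECTOR_SIZE = 512       # assumed logical sector size
DEVICE = "/dev/sdX"     # placeholder device name
N_SAMPLES = 1000        # arbitrary placeholder: the number in question

def read_sector(dev, lba: int) -> bytes:
    dev.seek(lba * SECTOR_SIZE)
    return dev.read(SECTOR_SIZE)

with open(DEVICE, "rb") as dev:
    total_sectors = dev.seek(0, os.SEEK_END) // SECTOR_SIZE
    sample = random.sample(range(total_sectors), N_SAMPLES)  # without replacement
    pre = {lba: hashlib.sha256(read_sector(dev, lba)).digest() for lba in sample}

# ... run the ATA/NVMe erase procedure here ...

with open(DEVICE, "rb") as dev:
    unchanged = [lba for lba in sample
                 if hashlib.sha256(read_sector(dev, lba)).digest() == pre[lba]]

print(f"{len(unchanged)} of {N_SAMPLES} sampled sectors unchanged")
```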

My understanding of sampling theory from college is all based upon sampling human populations using established models and tables, which I don’t think apply here. Accordingly, I am looking for suggestions as to techniques that can be applied to determine an appropriate sample size, or an argument that, due to the nature of the population, it is not possible to construct such a sample with any useful meaning.

As I understand it, statistical inference relies on the ability to reason about members of a population you did not observe based upon those you did, and it is not clear to me that the state of unchecked sectors can be inferred from the state of the sectors you do check. If that is the case, then perhaps all you are left with is the arbitrary decision that X percent of sectors being wiped is sufficient according to some policy standard, which feels unsatisfactory to me.
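For what it’s worth, here is my naive attempt at such a calculation. It treats the drive as a population of sectors in which some unknown fraction p survived the wipe, and assumes sampled sectors are independent, which is precisely the assumption I am unsure holds. Under that model, the probability that n randomly sampled sectors all appear wiped when the true surviving fraction is p is (1 - p)^n; requiring this to be at most alpha gives n >= ln(alpha) / ln(1 - p).

```python
import math

def naive_sample_size(p_threshold: float, alpha: float) -> int:
    """Number of sectors to sample so that, *if* sectors survive a wipe
    independently at rate p_threshold, observing zero surviving sectors
    in the sample happens with probability at most alpha."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p_threshold))

# E.g. to claim, under the independence assumption, 95% confidence that
# fewer than 0.1% of sectors survived the wipe:
print(naive_sample_size(0.001, 0.05))  # -> 2995
```

My worry about this model is that wipe failures in real firmware presumably cluster (e.g. an entire range of LBAs skipped) rather than being scattered independently at random, so I do not know whether the independence assumption, and therefore the number it produces, means anything here.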

This might be a Statistics question rather than a Computer Science question, but I am more comfortable with CS terminology than stats, and I think an understanding of how storage devices work is important to understanding the question, so I decided to start here. If this would be better asked elsewhere, please let me know.