Is there a cross-platform way to compare data in one column on each side of replicated data, like a checksum or hash?

I have an Oracle 12 database with lots of tables, and I am replicating several of the tables (a subset of rows) into a SQL Server 2016 database. The subset of rows can be established with a WHERE clause on the Oracle side.

I have two web services that can expose anything I want to from that data, one on each side.

Do you have a suggestion for what I could expose, and how to compare it, to find out whether the data in the two systems matches?

For one table, which has a few million rows, I am currently exposing COUNT(*), which is a no-brainer since it is very efficient. So far, so good.

I’m also exposing the SUM of each of a few NUMBER(18,2) columns and comparing it to the corresponding SUM on the SQL Server side. However, this is problematic, as it has to scan the whole table in SQL Server; it is sometimes blocked, and sometimes might cause other processes to block. I imagine similar problems could occur on the Oracle side too.

Also, the SUM will not tell me whether the rows match; it will only tell me that the totals match. If an amount was improperly added to one row and subtracted from another, I wouldn't catch it.

I've pondered whether Oracle's STANDARD_HASH might help me, but it seems cumbersome and error-prone to try to generate the exact same hash on the SQL Server side, and it also doesn't help with the inefficient, blocking nature of the call.

So is there any way to have both databases keep track of a hash, checksum, CRC, or other summary of a column's data that is efficient to retrieve, and that I can then use to compare to get some idea of whether the data is the same on both sides? It need not be a perfect solution; for example, comparing SUMs is close but perhaps not quite good enough.

As a first stab, I created a "summary" indexed view, with columns derived from SUMs, on the SQL Server side. This makes querying the view very fast, but incurs an additional penalty on every write to the large table underneath. Still, I think it will work, but I'd like to improve on it. Other, better ideas?
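One portable option is an order-independent checksum both sides can compute identically: hash each row's compared columns in a canonical text form and sum the hashes modulo 2^64. Because addition commutes, the result is insensitive to row order, and it can be maintained incrementally (add a row's digest on insert, subtract on delete) rather than by re-scanning the table. A minimal Python sketch of the idea; the canonical row format here is an assumption, and both sides would have to render values (especially the NUMBER(18,2) columns) identically:

```python
import hashlib

MOD = 2**64

def row_digest(key, amount):
    """Hash a canonical string form of one row to a 64-bit integer.

    The canonical form ('key|amount with 2 decimals') is an assumption:
    Oracle and SQL Server must render rows identically for this to work.
    """
    canon = f"{key}|{amount:.2f}".encode("utf-8")
    return int.from_bytes(hashlib.md5(canon).digest()[:8], "big")

def table_checksum(rows):
    """Order-independent checksum: sum of per-row digests mod 2^64."""
    return sum(row_digest(k, a) for k, a in rows) % MOD

# Row order doesn't matter, so each side can scan in its own order:
oracle_side = [(1, 10.50), (2, 20.25), (3, 5.00)]
sqlserver_side = [(3, 5.00), (1, 10.50), (2, 20.25)]
assert table_checksum(oracle_side) == table_checksum(sqlserver_side)

# Unlike a plain SUM, moving an amount between rows changes the checksum:
tampered = [(1, 15.50), (2, 20.25), (3, 0.00)]  # total is still 35.75
assert table_checksum(tampered) != table_checksum(oracle_side)
```

The incremental-maintenance property is what the SUM-based indexed view gives you today, but with per-row sensitivity, at the cost of computing one hash per write.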

Does Oracle guarantee that ORA_HASH determines which hash partition a row is inserted into, and will this be honored in future versions?

I use hash partitioning for a few of my very large tables, and occasionally I have a use case where it would be convenient to have a mechanism that would return the partition name that a row would be inserted into, given a partition value.

This blog post shows that we can use the ORA_HASH function for this purpose. Incidentally, it appears to be the only page on the entire internet that explains this.

I've used it successfully, and it works in all cases I have tried. It appears that ORA_HASH is indeed what Oracle itself uses to pick the hash partition it inserts data into, and that, at least on the current version of Oracle, it is safe to use for this purpose.

However, there is no guarantee in the documentation that Oracle even uses it, or will continue to use it in the future. This makes me think that using ORA_HASH this way is not safe or future-proof. What if a DB is upgraded and ORA_HASH no longer behaves this way?

For reference, you can use the following SQL to return the hash partition for a given value:

SELECT partition_name
FROM all_tab_partitions
WHERE table_name = 'FOO'
  AND partition_position = ORA_HASH('bar', n - 1) + 1

Where 'bar' is the value you wish to analyze, and n is the number of partitions in your table. There are some edge cases when the number of partitions is not a power of 2, which are covered in the blog article linked above.
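For what it's worth, the non-power-of-2 behavior described in such write-ups is classic linear hashing: hash against the next power of 2, and if the resulting bucket doesn't exist, fall back to the previous power-of-2 mask. A Python sketch of just that selection logic, with a stand-in for ORA_HASH (the real ORA_HASH algorithm is undocumented and is not reimplemented here):

```python
def pick_partition(hash_value_fn, value, n):
    """Linear-hashing bucket selection for n hash partitions.

    hash_value_fn(value, max_bucket) stands in for ORA_HASH(value, max_bucket)
    and must return an integer in 0..max_bucket. This only illustrates the
    non-power-of-2 edge case; it is not Oracle's actual implementation.
    """
    m = 1
    while m < n:              # m = smallest power of 2 >= n
        m *= 2
    h = hash_value_fn(value, m - 1)
    if h >= n:                # that bucket doesn't exist yet, so fall
        h = hash_value_fn(value, m // 2 - 1)  # back to the smaller mask
    return h + 1              # partition positions are 1-based

def toy_hash(value, max_bucket):
    """Stand-in with the same contract as ORA_HASH(expr, max_bucket)."""
    return hash(value) & max_bucket  # max_bucket is 2^k - 1, so & == mod

for v in ["bar", "baz", 42]:
    for n in (3, 5, 6, 8):
        assert 1 <= pick_partition(toy_hash, v, n) <= n
```

When n is a power of 2 the fallback branch never fires, which is why the simple ORA_HASH('bar', n - 1) + 1 query works directly in that case.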

Can we say that CA produces the hash of TBSCertificate and then encrypt it instead of signing it? [duplicate]

The CA signs the TBSCertificate; this is a well-known fact.

Signing m is often described as producing the hash value of m and then encrypting that hash.

Does this apply to signing certificates?

Here the answerer says:

The most important is that both your encrypt boxes are wrong, they should say sign.
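The "encrypt the hash" description only even approximately fits textbook RSA, where signing is private-key modular exponentiation of a (padded) hash. A toy illustration with unrealistically small numbers; real schemes such as RSA-PSS or PKCS#1 v1.5 use distinct padding for signing versus encryption, and non-RSA schemes like ECDSA or EdDSA have no encryption operation at all:

```python
import hashlib

# Toy RSA parameters -- far too small for any real use
p, q = 61, 53
n = p * q                           # modulus: 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: 2753

message = b"TBSCertificate bytes"
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# "Signing": private-key exponentiation of the reduced hash
sig = pow(h, d, n)
# Verification: public-key exponentiation recovers the hash
assert pow(sig, e, n) == h

# In textbook RSA the sign/verify math happens to mirror decrypt/encrypt,
# which is where the loose "encrypt the hash" phrasing comes from -- but
# "sign" and "encrypt" are distinct operations in every real scheme.
```

This is why the linked answer insists the boxes should say "sign": the informal picture only coincides with one particular primitive's arithmetic, not with signing in general.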

Can’t open hash with John or Hashcat

I'm trying to crack a hash with John and Hashcat, but neither works.

NTLMv2 Response Captured from DOMAIN: DEV29-APP01
USER: testuser
LMHASH: Disabled
LM_CLIENT_CHALLENGE: Disabled
NTHASH: 3045e74dac0653865d353e93e8c5ca8c
NT_CLIENT_CHALLENGE: 0101000000000000c2af33072879d60195da2f228ded77b7000000000200120041004e004f004e0059004d004f00550053000100120041004e004f004e0059004d004f00550053000400120061006e006f006e0079006d006f00750073000300120061006e006f006e0079006d006f00750073000800300030000000000000000000000000200000feb33cee8c0f22d8b27a15278ee7fdfbb47b23655ada87d2da7b3a3b1db5450e0a00100000000000000000000000000000000000090038004d005300530051004c005300760063002f003100360038002e00360033002e003100310031002e003100300036003a0031003400330033000000000000000000

Manually rewritten to:

testuser::DEV29-APP01:3045e74dac0653865d353e93e8c5ca8c:0101000000000000c2af33072879d60195da2f228ded77b7000000000200120041004e004f004e0059004d004f00550053000100120041004e004f004e0059004d004f00550053000400120061006e006f006e0079006d006f00750073000300120061006e006f006e0079006d006f00750073000800300030000000000000000000000000200000feb33cee8c0f22d8b27a15278ee7fdfbb47b23655ada87d2da7b3a3b1db5450e0a00100000000000000000000000000000000000090038004d005300530051004c005300760063002f003100360038002e00360033002e003100310031002e003100300036003a0031003400330033000000000000000000

me> hashcat -m 5600 -a 3 testuser.txt --force
Hashfile 'testuser.txt' on line 1 (testus...31003400330033000000000000000000): Separator unmatched
No hashes loaded.

me> john --format=netntlmv2 testuser.txt
Using default input encoding: UTF-8
No password hashes loaded (see FAQ)

me> john --show --format=netntlmv2 testuser.txt
0 password hashes cracked, 0 left

What am I missing?
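For comparison, hashcat's published example hash for mode 5600 has the layout USER::DOMAIN:challenge:NTproof:blob, with a 16-hex-character server challenge between the domain and the 32-hex NT proof string. A rough structural checker modeled on that example (an illustration of the expected field layout, not an official parser):

```python
def check_netntlmv2(line):
    """Structural check against hashcat's -m 5600 example layout:
    USER::DOMAIN:challenge(16 hex):NTproof(32 hex):blob(hex).
    Based on the published example hash, not an official parser.
    """
    user_part, _, rest = line.partition("::")
    fields = rest.split(":")
    if not user_part or len(fields) != 4:
        return "expected USER::DOMAIN:challenge:NTproof:blob"
    domain, challenge, ntproof, blob = fields
    if len(challenge) != 16:
        return "server challenge should be 16 hex chars"
    if len(ntproof) != 32:
        return "NT proof string should be 32 hex chars"
    return "ok"

# The manually rewritten line from the question has no server-challenge
# field between the domain and the 32-hex NT proof string:
bad = "testuser::DEV29-APP01:3045e74dac0653865d353e93e8c5ca8c:0101000000"
assert check_netntlmv2(bad) != "ok"

# Hashcat's documented example hash for mode 5600 passes:
good = ("admin::N46iSNekpT:08ca45b7d7ea58ee:"
        "88dcbe4446168966a153a0064958dac6:5c783031")
assert check_netntlmv2(good) == "ok"
```

Under that assumed layout, the rewritten hash would need the server challenge inserted as its own colon-separated field, which is consistent with the "Separator unmatched" error.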

Double Hash Family Universality

In this problem here, I am given two hash families and need to prove the universality of the double hash, but I am stuck on how to prove this. I know the defining property of an epsilon-universal family is that the probability of collision is at most epsilon, but how can I use this to prove the universality of the double hash?
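Assuming "double hash" means the composition g ∘ f, where f is drawn from an ε₁-universal family and g from an ε₂-universal family (the problem statement isn't reproduced here, so this is a guess at the setup), the standard argument is a union bound over the two ways a collision can occur:

```latex
% For x \neq y, g(f(x)) = g(f(y)) requires either an f-collision,
% or g colliding on the two distinct values f(x) \neq f(y):
\Pr\bigl[g(f(x)) = g(f(y))\bigr]
  \le \Pr\bigl[f(x) = f(y)\bigr]
    + \Pr\bigl[g(f(x)) = g(f(y)) \mid f(x) \neq f(y)\bigr]
  \le \varepsilon_1 + \varepsilon_2,
% so the composed family is (\varepsilon_1 + \varepsilon_2)-universal.
```

The key step is conditioning: once f(x) ≠ f(y), the ε₂-universality of g's family applies directly to the pair of distinct intermediate values.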

Encrypting salted password hash before storing in the database

I have read here that instead of using a pepper, it is better to encrypt hashed/salted passwords before storing them in the database, especially with Java, as there's no library for salt-plus-pepper hashing, only for salted hashing, and I am not going to implement my own crypto in any way. I have some questions about this:

  1. Is it true? Will it add security if the DB server is on another physical computer and the encryption keys are stored on the app server's filesystem?
  2. If so, is it OK to use RSA to encrypt the hash?
  3. To check a password in this case, is it better to read the encrypted password from the DB, decrypt it, and compare it to the hashed/salted value the user entered, or to encrypt the user's hashed/salted password and compare it with the encrypted value in the database? In the latter case, since the encrypted hash is never decrypted, is that effectively the same as just applying another hash?

Thank you
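On question 3, whether you can compare without decrypting depends on whether the outer layer is deterministic. A stdlib-only Python sketch using an HMAC "pepper" as a deterministic outer layer; this illustrates the comparison flow rather than recommending it over a real cipher, and with randomized encryption such as AES-GCM (fresh nonce per encryption) you must decrypt the stored value and then compare, because re-encrypting never reproduces the stored ciphertext:

```python
import hashlib, hmac, os

SERVER_KEY = os.urandom(32)  # lives on the app server, never in the DB

def store(password):
    """Salted slow hash, then a deterministic keyed outer layer."""
    salt = os.urandom(16)
    inner = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Deterministic keyed wrap: because the same input always yields the
    # same output, verification can re-compute and compare -- no
    # decryption step exists or is needed.
    outer = hmac.new(SERVER_KEY, inner, hashlib.sha256).digest()
    return salt, outer

def verify(password, salt, stored_outer):
    inner = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    candidate = hmac.new(SERVER_KEY, inner, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, stored_outer)

salt, stored_value = store("hunter2")
assert verify("hunter2", salt, stored_value)
assert not verify("wrong", salt, stored_value)
```

This also answers the "is it the same as another hash?" intuition: a deterministic keyed wrap that is never decrypted behaves exactly like an extra keyed hash layer, which is the usual definition of a pepper.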

Forward secrecy in Merkle trees vs. hash chains

Say we have a one-time password authentication system that uses a Merkle tree. Assume that the secret keys are of the form {sk0, sk1, ..., sk7}, and at time t = 3 an attacker recovers sk6. Will he/she be able to recover any of the previous secret keys (i.e. sk3, sk4, sk5, and sk6)?

My guess would be no, since all the Merkle tree does is confirm whether the root value computed from sk7 is equal to the one stored on the server. Would the adversary somehow be able to recover any other key?

Follow-up question: what if a simple hash chain is used? I assume the answer to this would be yes, since to get to sk0 we would have to be able to calculate all the previous hashes (which include sk3, sk4, sk5, and sk6).
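The hash-chain intuition can be checked directly. Assuming the usual S/KEY-style construction where each earlier key is the hash of the next one (sk_i = H(sk_{i+1}); the chain's direction is an assumption about the setup), a short Python sketch:

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

# Build a chain of 8 keys with sk[i] = H(sk[i+1]): sk7 is the seed,
# and earlier-numbered keys lie deeper in the chain.
sk = [None] * 8
sk[7] = b"random seed material"
for i in range(6, -1, -1):
    sk[i] = H(sk[i + 1])

# An attacker who learns sk[6] can walk down the chain and recover
# every earlier key, including sk[3], sk[4], and sk[5]:
recovered = sk[6]
derived = {}
for i in range(5, -1, -1):
    recovered = H(recovered)
    derived[i] = recovered

assert derived[5] == sk[5] and derived[3] == sk[3] and derived[0] == sk[0]
# Going the other way (to sk[7]) would require inverting H, i.e. a
# preimage attack -- which is exactly what the chain relies on.
```

In the Merkle-tree scheme the leaf keys are independent random values related only through hashes up to the root, so knowing one leaf gives no arithmetic path to any other, matching the "no" guess above.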

Where do hash functions get their preimage resistance? [migrated]

I read through this answer and it seemed to make sense to me, but when I try to make a simpler answer to explain it to myself I lose something in the process.

Here is the much simpler hash function I wrote after reading the description of how MD5 works.

  1. Take in a single digit integer input as M
  2. Define A[0] to be some public constant
  3. for int i=1; i<=4; i++:
    A[i] = (A[i-1] + M) mod 10
  4. return A[4]

This hash function uses the message word in multiple rounds, which is what the answer says leads to preimage resistance. But with some algebra on modular addition, we can reduce this "hash function" to just A[i] = (A[0] + i*M) mod 10.

A[1] = (A[0] + M) mod 10
A[2] = (A[1] + M) mod 10                   // substitute A[1] in
     = ((A[0] + M) mod 10 + M) mod 10      // the outer mod 10 absorbs the inner one
     = (A[0] + M + M) mod 10
     = (A[0] + 2M) mod 10
// Repeat with A[3] to find A[3] = (A[0] + 3M) mod 10, and so on.

Because A[i] = (A[0] + i*M) mod 10 is not preimage resistant, I’m confused as to what action in a hash function gives it its preimage resistance. To phrase my question another way, if I wanted to write a super simple hash function, what would I need to include to be preimage resistant?
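The collapse can be demonstrated directly: because every round is linear in M, the whole function folds into one formula, and a preimage falls out of trivial algebra (or, with a one-digit domain, a 10-step search). A Python sketch, with a toy nonlinear round added to show what breaks the algebra; note that genuine preimage resistance also needs a large input domain, since both toys here are brute-forceable:

```python
A0 = 7  # public constant (arbitrary choice for illustration)

def toy_hash(M):
    """The question's toy hash: four rounds of (A + M) mod 10."""
    A = A0
    for _ in range(4):
        A = (A + M) % 10
    return A

# Linearity collapses the four rounds into one closed-form expression,
# so finding a preimage is just solving one modular equation:
assert all(toy_hash(M) == (A0 + 4 * M) % 10 for M in range(10))
preimages_of_5 = [M for M in range(10) if toy_hash(M) == 5]
assert preimages_of_5  # e.g. M = 2: (7 + 8) mod 10 == 5

# A toy nonlinear round resists this collapse: M now feeds into a
# squared term in every later round, so there is no single clean
# formula of the form (A0 + c*M) mod 10 to invert.
def toy_hash_nonlinear(M):
    A = A0
    for _ in range(4):
        A = (A * A + M) % 10
    return A
```

This matches the usual answer: the mixing steps in real designs (MD5's nonlinear boolean functions, rotations, additions across a large state) are there precisely so the rounds do not commute into invertible algebra; repetition of the message word alone is not enough.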