Campaign problem: finding the number of distinct valid campaigns

I encountered this problem as part of tech interview prep, but I am not able to work out how to solve it.


An email campaign is a sequence of email messages, where each email message belongs to one of two types, transactional or marketing.

For example, M T T M M T T M M T is a campaign of length 10.

Marketing messages are subject to a cooldown of length k: within a given campaign, there must be at least k transactional emails between any two marketing emails.

For example, if k=2, the campaign

M T T M T T T M T is valid because between any two marketing emails there are at least two transactional emails. However, the campaign

M T M T T T T M T is invalid because there is only one transactional email separating the first two marketing emails. Note that, consequently, a valid campaign can never contain two consecutive marketing emails, and with k=2 the pattern M T M is also forbidden.

Given the campaign length n and the cooldown length k, find the number of distinct valid campaigns. For example, there are 9 distinct campaigns of length 5 with k=2:

Sample solution:

T T T T T
M T T T T
T M T T T
T T M T T
T T T M T
T T T T M
M T T M T
M T T T M
T M T T M

My Approach:

My initial thought was to generate all the valid sequences for each scenario and then count them. However, I am not sure that's the way to go!
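For small n, the generate-and-count idea is workable as a baseline: enumerate all 2^n sequences and keep those that respect the cooldown. A minimal sketch (function name is my own):

```python
from itertools import product

def count_valid_campaigns(n, k):
    """Enumerate all 2**n campaigns of length n and count the valid ones."""
    count = 0
    for seq in product("TM", repeat=n):
        m_positions = [i for i, c in enumerate(seq) if c == "M"]
        # between M's at positions a < b there are b - a - 1 transactional
        # emails, so the campaign is valid when b - a - 1 >= k for each pair
        if all(b - a - 1 >= k for a, b in zip(m_positions, m_positions[1:])):
            count += 1
    return count

print(count_valid_campaigns(5, 2))  # 9
```

This is exponential in n, so it is only useful as a correctness check against a faster approach.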

Would appreciate any insights.
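One standard direction for counting problems like this (my suggestion, not part of the original question) is a recurrence: a valid campaign of length i either ends in T (preceded by any valid campaign of length i-1), or ends in M, in which case the k positions before the M must all be T and everything before those is a valid campaign of length i-k-1. With f(j) = 1 for all j <= 0, this gives f(i) = f(i-1) + f(i-k-1). A sketch:

```python
def count_valid_campaigns_dp(n, k):
    """Count valid campaigns via f(i) = f(i-1) + f(i-k-1), f(j) = 1 for j <= 0."""
    f = [1] * (n + 1)  # f[0] = 1: the empty campaign
    for i in range(1, n + 1):
        # campaigns ending in M: k forced T's precede it; if fewer than k
        # emails come before, they must all be T (exactly one way)
        prev = f[i - k - 1] if i - k - 1 >= 0 else 1
        f[i] = f[i - 1] + prev  # end in T, or end in M
    return f[n]

print(count_valid_campaigns_dp(5, 2))  # 9, matching the sample of length 5
```

For k=2 this yields 1, 2, 3, 4, 6, 9 for n = 0..5, in O(n) time.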

Delete rows or columns of a matrix containing invalid elements, such that the maximum number of valid elements is kept

Originally posted on Stack Overflow, but I was told to post it here.

Context: I am doing a PCA on an MxN (N >> M) matrix that has some invalid values in it. I cannot infer these values, so I need to remove all of them, which means deleting the whole corresponding row or column. Of course, I want to keep the maximum amount of data. The invalid entries represent ~30% of the data; most of them are concentrated in a few rows or columns, and the rest are scattered across the matrix.

Some possible approaches:

  • Similar to this problem, where I format my matrix so that valid entries equal 1 and invalid entries a huge negative number. However, all the solutions proposed there are of exponential complexity, and my problem is simpler.

  • Computing the ratio of invalid to valid data for each row and column, and deleting the row(s) or column(s) with the highest ratio. Then recomputing the ratios for the sub-matrix and again removing the highest ones (I am not sure how many rows or columns can safely be removed in one step), and so on until no invalid data is left. This seems like a reasonable heuristic, but I am unsure whether it always gives the optimal solution.
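The greedy ratio idea from the second bullet can be sketched as follows (names and the boolean-mask representation are my own; as noted, this is a heuristic and is not guaranteed to be optimal):

```python
import numpy as np

def greedy_clean(valid):
    """valid: boolean mask of the matrix, True where an entry is usable.
    Repeatedly drop the row or column with the highest invalid ratio
    until the remaining sub-matrix contains no invalid entries."""
    rows = list(range(valid.shape[0]))
    cols = list(range(valid.shape[1]))
    while rows and cols:
        sub = valid[np.ix_(rows, cols)]
        if sub.all():
            break
        row_bad = 1.0 - sub.mean(axis=1)  # invalid fraction per remaining row
        col_bad = 1.0 - sub.mean(axis=0)  # invalid fraction per remaining column
        ri, ci = int(row_bad.argmax()), int(col_bad.argmax())
        if row_bad[ri] >= col_bad[ci]:
            del rows[ri]
        else:
            del cols[ci]
    return rows, cols  # indices of the kept rows and columns
```

On a mask with one fully invalid row and one scattered invalid entry, this first drops the bad row (ratio 1.0), then drops the column containing the stray entry, keeping everything else.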

My guess is that it is a standard data analysis problem, but surprisingly I could not find a solution online.

[ Politics ] Open Question : Did Trump have a valid reason to scold WHO the way he just did?

He said they should have seen through China’s ‘lies’, and that they (WHO) should not have opposed any travel bans. Are Trump’s accusations in any way justified? Oh, and BTW, why did he not accept test kits available from the WHO, at a time when we REALLY needed to test people? From the NYT: “A top White House adviser starkly warned Trump administration officials in late JANUARY that the coronavirus crisis could cost the United States trillions of dollars and put millions of Americans at risk of illness or death.” @anonymous . PLEASE do not block me and answer my question. But I will say one thing: your answer is devious. It appears to be objective but leaves out critical ‘highlights’.

Is public exposure of a Google Safe Browsing API key a valid vulnerability?

While scanning a target mobile application, I found an exposed Google Safe Browsing API key. However, this is a free key that anyone can obtain. Is it a valid security issue? If so, what could an attacker achieve with it? Also, could you please help me determine the CVSS score, if this is an issue in the first place?

Thanks in advance.

Does HSTS prevent MITM attacks using a valid certificate?

Let’s consider this scenario:

An attacker has obtained a valid certificate for an HSTS-protected domain. Can they still perform a man-in-the-middle attack, even if the website is already in the browser’s HSTS list?

I remember using Burp Suite once and getting a strict-transport-security-related error for a valid certificate, so I would suppose the HSTS list also contains the certificate fingerprint, although I could not find anything about that in the RFC.
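For reference, HSTS (RFC 6797) is just a response header instructing the browser to reach the host only over HTTPS for a given period; the header carries no certificate fingerprint. A typical header looks like:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Pinning certificate keys was the job of a separate, now-deprecated mechanism (HPKP, RFC 7469), which may be the source of the confusion here.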

Is multiplying hashes a valid way to ensure two sets of data are identical (but in arbitrary order)?

Let’s say “User A” has a set of data like the one below. Each entry has been hashed (SHA-256) to ensure its integrity; you can’t modify an entry’s data without also modifying the corresponding hash:

[
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
]

And “User B” has the same data, but in a slightly different order. The hashes are the same, of course:

[
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
]

I want to allow both users to verify that they have the exact same set of data, ignoring sort order. If, as an extreme example, a hacker is able to replace User B’s files with otherwise valid-looking data, the users should be able to compare a hash of their entire datasets and detect a mismatch.

I was thinking to calculate a “total hash” which the users could compare to verify. It should be next to impossible to fabricate a valid looking dataset that results in the same “total hash”. But since the order can change, it’s a bit tricky.

I might have a possible solution, but I’m not sure if it’s secure enough. Is it, actually, secure at all?

My idea is to convert each SHA-256 hash to an integer (JavaScript BigInt) and multiply them together, modulo a large constant, to get a total hash of similar length:

var entries = [
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
];

var hashsize = BigInt("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
var totalhash = BigInt(1); // arbitrary starting point

for (var i = 0; i < entries.length; i++) {
  var entryhash = BigInt("0x" + entries[i].hash);
  totalhash = (totalhash * entryhash) % hashsize;
}

totalhash = totalhash.toString(16); // convert from BigInt back to a hex string

This should result in the same total hash for both User A and User B, unless one of them has tampered data, right? How hard would it be to create a slightly different but valid-looking dataset that results in the same total checksum? Or is there a better way to accomplish this (without sorting!)?
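As a sanity check of the order-independence part: multiplication mod m is commutative, so any permutation of the entries yields the same total. A Python sketch of the same scheme (names are my own; I assume the constant in the question is sixty-four f's, i.e. 2**256 - 1):

```python
import hashlib

MOD = 2**256 - 1  # the 0xff...ff constant from the question, assumed 64 f's

def total_hash(items):
    """Multiply the per-entry SHA-256 digests modulo MOD. The product is
    independent of the order in which the entries are processed."""
    total = 1
    for data in items:
        digest = hashlib.sha256(data.encode()).hexdigest()
        total = (total * int(digest, 16)) % MOD
    return format(total, "064x")

a = total_hash(["000000", "111111", "345345"])
b = total_hash(["345345", "111111", "000000"])
print(a == b)  # True: order does not matter
```

Note this only demonstrates the order-independence; whether the construction resists deliberately crafted collisions is exactly the open question. Constructions of this shape are usually analyzed under the heading of incremental or multiset hashing.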

Is it safe to assume that a JWT is valid if it can be decrypted with the server’s private key?

This is my first time using JWT. I’m using the jwcrypto library as directed, and the key I’m using is an RSA key I generated with OpenSSL.

My initial inclination was to store the JWT in the database with the user’s row, and then validate the token’s claims against the database on every request.

But then it occurred to me that the token payload is signed with my server’s private key.

Assuming my private key is never compromised, is it safe to assume that all data contained in the JWT, presented as a Bearer token by the user, is tamper-proof? I understand that it is potentially readable by third parties, since I’m not encrypting.

What I want to know is, is there anything I need to do to validate the user’s JWT besides decrypt it and let the library tell me if there’s anything wrong?
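For what it's worth, signature verification alone is usually not the whole story: even with an intact signature you still need to check the registered claims that apply to you (exp, nbf, aud, iss), and you need a revocation strategy if tokens must be invalidated before they expire. A stdlib-only sketch of the principle, using an HMAC (HS256-style) signature rather than your jwcrypto/RSA setup, with helper names of my own:

```python
import base64, hashlib, hmac, json, time

def b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    mac = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256)
    return f"{header}.{body}.{b64url(mac.digest())}"

def verify(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    mac = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256)
    if not hmac.compare_digest(sig, b64url(mac.digest())):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(body))
    # an intact signature is not enough: time-based claims still apply
    if "exp" in claims and claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

In your RSA case the same structure holds, except verification uses the public key; a library like jwcrypto handles the signature check, but confirming which claims it validates for you (and which you must check yourself) is worth reading in its documentation.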