How does BLE Secure Connections pairing ensure man-in-the-middle protection?

I understand that BLE Secure Connections pairing is an improvement over Legacy Pairing. The issue with Legacy Pairing was that the initial TK value can easily be brute-forced by an attacker.

In contrast, with Secure Connections, both devices start by generating an ECDH key pair and exchanging public keys.
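
For context, the key agreement itself is ordinary ECDH on the P-256 curve. A minimal Node.js sketch of that exchange (just to illustrate the math, not the actual BLE controller code) would look something like this:

// Illustration only: plain ECDH on P-256, the curve LE Secure Connections uses.
const crypto = require('crypto');

// Each device generates its own ephemeral P-256 key pair...
const deviceA = crypto.createECDH('prime256v1');
const deviceB = crypto.createECDH('prime256v1');
const publicA = deviceA.generateKeys();
const publicB = deviceB.generateKeys();

// ...the public keys are exchanged over the (unauthenticated) link,
// and both sides derive the same shared secret.
const secretA = deviceA.computeSecret(publicB);
const secretB = deviceB.computeSecret(publicA);
console.log(secretA.equals(secretB)); // true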

Since BLE doesn’t use certificates for public keys, how would a device know whether the public key actually belongs to the entity it wants to communicate with?

I know that later in pairing there is a confirmation check, but that is a similar idea to Legacy Pairing, just with the sequence changed.

How to ensure your own native app is talking to your own API

I’m developing an API and different apps that access it, each with different scopes, including a native mobile app, and I’m wondering what would be a good strategy to authenticate my own native app to my own API (or more specifically, my users).

I can’t find a recommended method to guarantee that it is really my client (in this case a native app) which is talking to my API.

For example, suppose I implement the Authorization Code flow to authenticate my users. Let’s say I have a server acting as the OAuth client at mobile.mydomain.com, so my mobile app makes requests only to mobile.mydomain.com, and mobile.mydomain.com is able to talk securely to api.mydomain.com since the client id / client secret is never exposed to the public.
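
Concretely, the part I have in mind is the server-side code exchange on mobile.mydomain.com (a rough sketch with hypothetical endpoint names; only this backend ever sees the client secret):

// On mobile.mydomain.com: exchange the authorization code the app handed us
// for tokens, using the confidential client credentials (hypothetical endpoints).
async function exchangeCode(code) {
  const response = await fetch('https://api.mydomain.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'authorization_code',
      code: code,
      redirect_uri: 'https://mobile.mydomain.com/callback',
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET, // never shipped inside the app
    }),
  });
  return response.json(); // access (and refresh) tokens stay server-side
}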

So far so good: api.mydomain.com is sure that calls come from mobile.mydomain.com. However, mobile.mydomain.com isn’t sure who is sending requests to it, and it’s still possible to impersonate my mobile app by making another app that just includes the same login button, goes through the same OAuth2 process, and finally gets a token to continue talking to mobile.mydomain.com.

How is that different from using the Password flow (which I know isn’t recommended) and embedding the client id / client secret in the app? (The client_secret is completely useless in that case.)

=> Basically, from the API's point of view, it just needs to know what the client id is.

How does Google make sure that a request is really from the Gmail app and not from another app doing the exact same thing with the same redirect URI, etc.? (Which wouldn’t be harmful anyway, as it requires a username / password.) I guess it can’t know for sure.

PS: I’m aware that OAuth2 isn’t for authentication but for authorization only

How to ensure JWT security in authentication?

I have implemented a backend as a REST API. To maintain statelessness in REST, I intend to use JWT to verify whether a user has logged in or not. (A user is logged in if a valid token is present in the headers, and not logged in if a token is not present.)
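
Concretely, the check I have in mind is something like this (a sketch assuming Express and the jsonwebtoken package, just for illustration):

const jwt = require('jsonwebtoken');

// Express-style middleware: the request counts as "logged in" only if it
// carries a token that verifies against the server secret and has not expired.
function requireLogin(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: 'not logged in' });
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET); // throws if invalid or expired
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or expired token' });
  }
}
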
But even with expiration times set, an attacker can access the REST API by simply copying the JWT from the web browser. What methods are available to stop this without killing the statelessness?

Is multiplying hashes a valid way to ensure two sets of data are identical (but in arbitrary order)?

Let’s say “User A” has a set of data like below. Each entry has been hashed (sha256) to ensure integrity within a single entry. You can’t modify the data of a single entry without also modifying the corresponding hash:

[
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
]

And “User B” has the same data but in a slightly different order. Hashes are the same of course:

[
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
]

I want to allow both users to verify that they have exactly the same set of data, ignoring sort order. If, as an extreme example, a hacker is able to replace User B’s files with otherwise valid-looking data, the users should be able to compare a hash of their entire datasets and detect a mismatch.

I was thinking of calculating a “total hash” which the users could compare to verify this. It should be next to impossible to fabricate a valid-looking dataset that results in the same “total hash”. But since the order can change, it’s a bit tricky.

I might have a possible solution, but I’m not sure if it’s secure enough. Is it, actually, secure at all?

My idea is to convert each sha256 hash to an integer (JavaScript BigInt) and multiply them together modulo a 256-bit value to get a total hash of similar length:

var entries = [
  { data: "345345", hash: "dbd3b3fcc3286d927ec214c5648fbb226353a239789750f51430b1e6e9d91f4f" },
  { data: "111111", hash: "bcb15f821479b4d5772bd0ca866c00ad5f926e3580720659cc80d39c9d09802a" },
  { data: "000000", hash: "91b4d142823f7d20c5f08df69122de43f35f057a988d9619f6d3138485c9a203" },
];

var hashsize = BigInt("0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");
var totalhash = BigInt(1); // arbitrary starting point

for (var i = 0; i < entries.length; i++) {
  var entryhash = BigInt("0x" + entries[i].hash);
  totalhash = (totalhash * entryhash) % hashsize;
}

totalhash = totalhash.toString(16); // convert from bigint back to hex string

This should result in the same hash for both User A and User B, unless one of them has tampered data, right? How hard would it be to create a slightly different, but valid-looking dataset that results in the same total checksum? Or is there a better way to accomplish this (without sorting!)?

What degree of physical destruction is sufficient to ensure an SSD is not readable?

My organization has upgraded a few printers and decommissioned the internal SSDs by passing the memory chips through a band saw, cutting each chip in half, and in some cases tearing whole sections loose from the circuit board.

These printers were used in such a way that they likely have PHI / HIPAA information on them.

I am looking for advice on whether or not this method of destruction was sufficient.

I do not believe it is, but would like additional resources.

I have posted what I have found so far as an answer, as it may be the answer to my question, but I am hoping for other input.

What is the correct way to ensure that a row is not deleted in a concurrent transaction?

I have 2 tables:

  • Parent (PK id)
  • Child (PK id, parent_id)

Note that parent_id is not defined as a foreign key (they are not actually parent/child; I use those names just as an example).

I need to handle the case where someone inserts a new Child linked to a Parent row while someone else deletes that Parent row. I would like to avoid the scenario where the DB contains a Child row linked to a non-existent Parent.

How it works now:

[I1] Insertion Thread: BEGIN
[I2] Insertion Thread: SELECT * FROM Parent WHERE id = 3
[D1] Deletion Thread:  BEGIN
[D2] Deletion Thread:  DELETE FROM Child WHERE parent_id = 3
[D3] Deletion Thread:  DELETE FROM Parent WHERE id = 3
[D4] Deletion Thread:  COMMIT
[I3] Insertion Thread: INSERT INTO Child VALUES (default, 3)
[I4] Insertion Thread: COMMIT

What I’ve tried to do:

  1. Use SELECT ... FOR UPDATE in insertion thread

    Disadvantage: that will block the Deletion Thread at line D3, but line D2 has already been executed at that moment. After the Insertion Thread is done, the Deletion Thread continues from D3 and will not delete the just-inserted Child row.

    Solution: swap D2 and D3, so deletion from Parent goes first.

    Disadvantage: a bit fragile. Those SQL statements are called from business logic code, in different places, so it is easy to swap their order back accidentally.

  2. Use REPEATABLE READ isolation level

    Disadvantage: it does not work as I expected. I thought it would abort the Insertion Thread’s transaction because the selected Parent row was deleted in another transaction, but it does not. I suppose that is because the Insertion Thread does not update the Parent row, it only reads it.

I’m kinda OK with solution 1 or explicit row locking at the beginning of the transaction, but I hope there is a more correct way to achieve the expected behavior. Maybe I just misused the isolation level?

DB is Postgres
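
For reference, here is roughly how solution 1 looks from the application side (just a sketch; it assumes node-postgres, which is not necessarily what the business logic code actually uses):

// Insertion side of solution 1: lock the Parent row before inserting the Child.
// If the Parent row is already gone, abort instead of inserting an orphan.
const { Pool } = require('pg');
const pool = new Pool();

async function insertChild(parentId) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const parent = await client.query(
      'SELECT id FROM Parent WHERE id = $1 FOR UPDATE', [parentId]);
    if (parent.rowCount === 0) {
      throw new Error('Parent row no longer exists');
    }
    await client.query('INSERT INTO Child VALUES (default, $1)', [parentId]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}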

Do Dekker's solutions to the critical section problem ensure progress?

I was reading the concurrency control section of the Operating Systems book by William Stallings. In this book, he gives three attempts by Dekker at a solution to the critical section problem:

Attempt 1

+------------------------+------------------------+
| //process 0            | //process 1            |
| while (flag[1]);       | while (flag[0]);       |
| flag[0] = true;        | flag[1] = true;        |
| /* critical section*/; | /* critical section*/; |
| flag[0] = false;       | flag[1] = false;       |
+------------------------+------------------------+

Attempt 2

+------------------------+------------------------+
| //process 0            | //process 1            |
| flag[0] = true;        | flag[1] = true;        |
| while (flag[1]);       | while (flag[0]);       |
| /* critical section*/; | /* critical section*/; |
| flag[0] = false;       | flag[1] = false;       |
+------------------------+------------------------+

Attempt 3

+-----------------------+-----------------------+
| //process 0           | //process 1           |
| flag[0] = true;       | flag[1] = true;       |
| while (flag[1])       | while (flag[0])       |
| {                     | {                     |
|    flag[0] = false;   |    flag[1] = false;   |
|    /* delay */        |    /* delay */        |
|    flag[0] = true;    |    flag[1] = true;    |
| }                     | }                     |
| /* critical section*/ | /* critical section*/ |
| flag[0] = false;      | flag[1] = false;      |
+-----------------------+-----------------------+

Which of the above attempts ensure the progress requirement of a solution to the critical section problem?

The progress requirement is stated as follows in the book by Galvin et al.:

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

This is what I feel:

  1. The remainder section is the code executing after the critical section. In attempt 1, if process 0 is busy-waiting in the while(), then it will be unblocked by process 1 setting flag[1] to false, which is in the remainder section. So this attempt does not seem to ensure progress.
  2. This attempt can cause deadlock. So it may indefinitely postpone the decision on which process enters its critical section. Hence, it does not ensure the progress requirement.
  3. This attempt can cause livelock. So, like attempt 2, it may indefinitely postpone the decision on which process enters its critical section and hence does not ensure the progress requirement.

Am I correct with these points?

How to ensure Windows 10 is safe from critical security hole reported by NSA on 2020-01-14?

All over the news today (2020-01-14) is the story that the NSA and Microsoft have reported a critical security vulnerability in Windows 10.

But I haven’t been able to find clear instructions about how to ensure that Windows Update has worked properly.

When I click the Start button, type “winver”, and click “Run command”, I see that I have Windows 10 Version 1803 (OS Build 17134.191).

When I go to Windows > Settings > “Update & Security” > “See what’s new in the latest update”, it bounces me to https://support.microsoft.com/en-us/help/4043948/windows-10-whats-new-in-recent-updates, which doesn’t seem to mention security at all.

The Windows Update feature itself seems flaky, confusing, and unreliable.

I’m the most tech-savvy in my large extended family, and I generally try to help others (especially older generations) keep their systems working well, but right now I’m struggling to find a set of steps I can walk them through to confirm that their systems are no longer vulnerable.

pg_restore: [archiver] did not find magic string in file header: please check the source URL and ensure it is publicly accessible

I have been trying to push a dump file from my local PostgreSQL DB (which I uploaded to my Google Drive and made publicly accessible) into my remote Heroku DB with the following command:

heroku pg:backups:restore 'https://drive.google.com/open?id=dump_id_link_here' DATABASE_URL 

I am already logged in to my Heroku app from the terminal in which I run the command, but I got the same error twice. I have been searching online and found threads such as “pg_restore: [archiver] did not find magic string in file header”, but I could not see the link between the two, since I am very new to PostgreSQL. I hope you can point me toward the issue. Very much appreciated.

Starting restore of https://drive.google.com/open?id=dump_id_link_here to postgresql-symmetrical-52186... done

Stop a running restore with heroku pg:backups:cancel.

Restoring... !
▸    An error occurred and the backup did not finish.
▸
▸    waiting for restore to complete
▸    pg_restore finished with errors
▸    waiting for download to complete
▸    download finished with errors
▸    please check the source URL and ensure it is publicly accessible
▸    Run heroku pg:backups:info r002 for more details.

=== Backup r002
Database:         BACKUP
Started at:       2019-09-14 21:14:26 +0000
Finished at:      2019-09-14 21:14:27 +0000
Status:           Failed
Type:             Manual
Backup Size:      0.00B (0% compression)

=== Backup Logs
2019-09-14 21:14:27 +0000 pg_restore: [archiver] did not find magic string in file header
2019-09-14 21:14:27 +0000 waiting for restore to complete
2019-09-14 21:14:27 +0000 pg_restore finished with errors
2019-09-14 21:14:27 +0000 waiting for download to complete
2019-09-14 21:14:27 +0000 download finished with errors
2019-09-14 21:14:27 +0000 please check the source URL and ensure it is publicly accessible