Code Change That Resulted in Database Fields and Values Exposed

At my company, we have a new development team that has been completely rewriting all of the code for different parts of the system.

I’ve noticed that with one of the recent changes, when a user is logged in you can now see the JSON data for every field and value that exists in our database for that particular section of the account. You can do so simply by using Developer Tools in Chrome.
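To illustrate the kind of thing I mean (the field names here are invented, not our actual schema), the Network tab shows responses shaped like:

    {
      "account_id": 1042,
      "email": "user@example.com",
      "is_admin": false,
      "internal_notes": "migrated from legacy system"
    }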

Is this a bad idea from an information security perspective? Why or why not?

Disclaimer: I am not part of any development team, but would like to make others aware so that this can be dealt with appropriately if it is a security concern.

Database Security – Encryption & Searching

I am completely new to cryptography but have been trying to make myself familiar with the concepts and applications. I have a project where I believe cryptography to be beneficial.

Project Info:

DB = MySQL 5.6+

Engine = InnoDB

My application will reside on an intranet web server behind a network firewall with a very small whitelist. A few users of this application will have the ability to add/remove values in the database; a larger number of users will be able to read those values. Values I would hope to encrypt include:

  • emails
  • account numbers
  • paths
  • dates
  • unique ids

The largest table(s) would have up to 150k entries, and total sessions are likely to remain under 100.

Being an intranet site, I assume (with limited security knowledge) that my primary threats will be malicious users, hardware theft, and persistent XSS from an internal or external source. I am doing my best to mitigate all of these.

Doing some research on how to encrypt my data while keeping it searchable leaves me with a few options (please correct any wrong information; see the sketches after this list):

  • CipherSweet blind indexing: requires a library, may be overkill, and false positives are possible
  • MySQL AES_ENCRYPT/AES_DECRYPT: if the query logs are compromised, the plaintext values (and the key) are compromised with them
  • Application side: a runtime “nightmare”, heavy load, and possible issues with multiple threads running
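
To make the AES option concrete, here is a sketch of the usual pattern (the table name and key are made up); note that the plaintext and the key travel inside the SQL statement itself, which is why the query logs are a risk:

    CREATE TABLE accounts (
        id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        email_enc VARBINARY(255)
    );

    -- Plaintext and key are part of the statement text,
    -- so the general/slow query log would capture them:
    INSERT INTO accounts (email_enc)
    VALUES (AES_ENCRYPT('user@example.com', 'app-secret-key'));

    SELECT CAST(AES_DECRYPT(email_enc, 'app-secret-key') AS CHAR) AS email
    FROM accounts;

And the blind-index idea behind CipherSweet, reduced to its core in Python (the key and truncation length are illustrative); the truncation is where the false positives asked about below come from:

    import hmac, hashlib

    INDEX_KEY = b"\x00" * 32  # placeholder; load a random key from a secret store

    def blind_index(value, bits=32):
        # Deterministic keyed hash of the plaintext, stored in its own
        # indexed column so equality searches never need decryption.
        # Truncating to `bits` keeps the index from leaking too much,
        # at the cost of occasional false positives to filter out.
        mac = hmac.new(INDEX_KEY, value.lower().encode(), hashlib.sha256).digest()
        return mac[:bits // 8]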

Questions

  1. While ugly and poor practice, would application-side encryption/decryption be acceptable for my environment?
  2. Would the likelihood of false positives with CipherSweet be negligible for my datasets?
  3. Given my environment, would letting MySQL handle the encryption/decryption be acceptable (neglecting hardware theft or server compromise)?
  4. Bonus – should I be worrying about external XSS given my environment?

I understand this question may fall into the category of discussion; if that is the case, please direct me to where I may find further information to narrow my questions.

Local MySQL database check for changes

My friend and I had a website. Before closing the website, he dumped the MySQL database from phpMyAdmin and stored it locally on his PC. Now he has sent me the .sql file. Is there a way to see whether he changed, added, or removed lines, tables, variables, or anything else in the *.sql file? And how could he modify the file so that I can't notice it has been modified?
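The only idea I have so far is comparing the file against a checksum taken before it left his machine — a minimal sketch in Python, assuming such an earlier hash exists:

    import hashlib, sys

    # Hash the dump so two copies (or a copy and an earlier,
    # trusted hash) can be compared byte-for-byte.
    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
        return h.hexdigest()

    print(sha256_of(sys.argv[1]))

But without a trusted hash or signature recorded earlier, I don't see anything in the file itself that would reveal an edit.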

Bidirectional grant between roles for a Postgres database

I have 2 roles: role_a and role_b.

If I grant everything from role_a to role_b with:

grant role_a to role_b;

Then any table, schema, etc. created by role_a can be queried by role_b.

But if I create a table/schema/… with role_b, then role_a cannot query it…

But I cannot for the life of me figure out how to configure the roles. Basically I have 3 roles:

  • role_dev
  • role_app
  • role_deployment

Dev only exists in the CD environment, and sometimes I connect with it and create things directly to test re-runnable scripts. This role is rather open.

App cannot create tables/schemas but can read/write everything that exists in the database.

Deploy can create all the tables/schemas.

At the moment I manage this by explicitly granting privileges on the things I create, but it’s a hassle.
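Roughly what I run by hand today (my_schema is just an example name):

    GRANT USAGE ON SCHEMA my_schema TO role_app;
    GRANT SELECT, INSERT, UPDATE, DELETE
        ON ALL TABLES IN SCHEMA my_schema TO role_app;

    -- Presumably default privileges could automate this for future
    -- tables created by role_deployment, but I haven't gotten the
    -- configuration right:
    ALTER DEFAULT PRIVILEGES FOR ROLE role_deployment IN SCHEMA my_schema
        GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO role_app;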

PostgreSQL group roles

This is the closest I can find, but it doesn’t appear to work for me.

Storing third party API tokens in a database

This is my first question, so if there are any mistakes with formatting or any standards I should follow, please let me know.

Currently I have a Node JS project that uses the Spotify API. The project displays the user’s top played artists and tracks. I am following the Authorization Code Flow to obtain the access token. This access token will be used to query certain endpoints for a JSON response containing the data used by my project. The token lasts an hour. I am currently storing this access token in a cookie and using this cookie to make new requests.

My question is: is this acceptable from a security standpoint? This token does not have the ability to change any of the user’s profile settings or read sensitive data. However, if another person were able to obtain this token, they could use it to see another user’s data. Or would it be more secure to store this access token in a database and query the database for access tokens whenever needed?
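For reference, a minimal sketch of how the cookie itself could be hardened, assuming an Express-style app (names are illustrative):

    // Limit what a stolen page script or network observer can do:
    res.cookie('spotify_access_token', accessToken, {
      httpOnly: true,        // not readable from client-side JS
      secure: true,          // only sent over HTTPS
      sameSite: 'lax',       // not sent on most cross-site requests
      maxAge: 3600 * 1000,   // match the token's one-hour lifetime
    });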

How can I preserve the uniqueness of a document without a database?

I want to create a system of transferable documents (each identified by its ID) whose author can transfer their ownership of that document to another person (identified by their ID).

For example:

  1. Alice is the owner of document 1.
  2. Alice transfers her ownership of that document to Bob.
  3. Now Bob is the owner of document 1.
  4. Alice claims she is the owner of document 1, but she fails.

(Item 4 is very important)

We can make sure that the document and its author remain untouched by using digital signatures. But if Alice kept a copy of the document signed while she was still the owner, there would be no way to prevent her from claiming she is still the owner.

So we would need some way to make a signature expire whenever the document is transferred.

IF I HAD A DATABASE: I would simply add that signature to a ban list.

Are there any solutions to preserve the uniqueness of this document?
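A minimal sketch of the signed-transfer idea in Python (Ed25519 via the cryptography package; the record format is my own invention, not an established protocol) — it shows exactly where the scheme breaks down without a database:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def sign_transfer(owner_key, doc_id, new_owner_pub):
        # The current owner signs (document ID + new owner's key),
        # so only the owner can extend the chain of ownership.
        return owner_key.sign(doc_id + new_owner_pub)

    alice = Ed25519PrivateKey.generate()
    bob = Ed25519PrivateKey.generate()
    bob_pub = bob.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    sig = sign_transfer(alice, b"document-1", bob_pub)
    # Anyone with Alice's public key can verify the hand-off
    # (verify() raises InvalidSignature on a forgery):
    alice.public_key().verify(sig, b"document-1" + bob_pub)
    # ...but an *old* copy of a signed chain still verifies, which is
    # exactly the gap a ban list in a database would close.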

What would you choose to run your database: 2 x 1920 GB NVMe in RAID 1 or 4 x 960 GB SSD in RAID 10?

I am trying to move my server from one provider to another. Currently I have 4 SSDs in RAID 10, but I am intrigued by NVMe drives. I would like more I/O, but I am not sure how long they last, and if one goes down, replacing it would mean more downtime. What is your recommendation? Also, are hourly backups enough, or do you recommend backing up more often?

This is a production environment, and at times of heavy load a few queries return a lot of data, but I don’t see the 4 SSDs struggling right now.

Storing Argon2 hash in database

Merry Christmas!!!!!

This is how I am using Argon2:

    import argon2

    step1 = argon2.PasswordHasher(time_cost=16, memory_cost=2**15, parallelism=2,
                                  hash_len=32, salt_len=16, encoding='utf-8')
    step2 = step1.hash('password1')
    print(step2)
    # $argon2id$v=19$m=32768,t=16,p=2$vruz5GwPq3vNO9SOlf1O4w$ahmCvQcgB+MqUrWdYGLbLB4G7ZOGP5bgcYxaDM/AaLo

I am storing the output obtained in step2 as a single unit in one column with character set utf8mb4 and collation utf8mb4_unicode_520_ci.
I have no separate column for the salt, since the hash string already contains it.

Is this a proper way to store Argon2 hash?
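
For comparison, a minimal sketch of an alternative column definition — the PHC-format string argon2 produces is pure ASCII, so a binary-collated ASCII column sidesteps collation and charset surprises (the table and column names are made up):

    CREATE TABLE users (
        id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        username VARCHAR(64) NOT NULL UNIQUE,
        -- ~100 ASCII chars today; extra headroom for future parameters
        password_hash VARCHAR(255) CHARACTER SET ascii COLLATE ascii_bin NOT NULL
    );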