Certificate Signed Using Weak Hashing Algorithm impact on a workstation

I did a vulnerability scan on some of our company workstations. These are workstations used by employees (dev, HR, accounting, etc.) to do their jobs. One of the common results I found is SSL/TLS Certificate Signed Using Weak Hashing Algorithm. Based on the vulnerability description, "An attacker can exploit this to generate another certificate with the same digital signature, allowing an attacker to masquerade as the affected service," I’m thinking this is more of a server-side issue.
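For reference, one way to check which hash algorithm signs the certificate a given workstation service presents (a rough sketch; the host, port, and use of the cryptography package are placeholder choices, not part of the scan itself):

# Sketch: inspect the signature hash algorithm of a certificate presented by a
# TLS service on a workstation. Host and port below are placeholders.
import ssl
from cryptography import x509

HOST, PORT = "workstation.example.local", 3389  # placeholder values

pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())
# "sha1" or "md5" here would match the scanner's weak-hash finding.
print(cert.signature_hash_algorithm.name)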

My question is: what could be the impact of this on an ordinary workstation?
What can an attacker/pentester do to the workstation with this vulnerability?

Replacing the hash function with asymmetric cryptography for password authentication

I would like to know if the following ideas are feasible:

A hash function is a one-way function.

Generating a public key from a private key is irreversible (asymmetric cryptography).

User password entry -> SHA (optionally with a salt added before hashing) -> hash value (used as ECC private key) -> generate public key from private key -> save public key (drop private key)

Password authentication:

User password entry -> SHA (optionally with a salt added before hashing) -> hash value (used as ECC private key) -> generate public key from private key -> verify the derived public key against the saved one.
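A rough sketch of what I have in mind (the curve, the salt handling, and the use of the cryptography package are only illustrative choices):

import hashlib
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

CURVE = ec.SECP256R1()
# Order of the P-256 group; the digest is reduced into [1, n-1] to be a valid scalar.
CURVE_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def password_to_public_key(password: str, salt: bytes) -> bytes:
    # Hash (salt + password) and interpret the digest as an ECC private scalar.
    digest = hashlib.sha256(salt + password.encode()).digest()
    private_value = int.from_bytes(digest, "big") % (CURVE_ORDER - 1) + 1
    private_key = ec.derive_private_key(private_value, CURVE)
    # Keep only the public key; the private key can be recomputed from the password.
    return private_key.public_key().public_bytes(
        serialization.Encoding.X962,
        serialization.PublicFormat.CompressedPoint,
    )

# Registration: store the salt and the public key, drop the private key.
salt = os.urandom(16)
stored_public_key = password_to_public_key("correct horse battery staple", salt)

# Authentication: re-derive from the entered password and compare.
attempt = password_to_public_key("correct horse battery staple", salt)
print(attempt == stored_public_key)  # True only for the right password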

Based on that:

a. The user or others can encrypt selected information (using the public key) so that only the user can decrypt it.

b. A system administrator can generate a public/private key pair, and then both the user and the administrator can encrypt/decrypt selected information (using a Diffie–Hellman key exchange).

I think a brute-force (exhaustive) attack can crack any password given enough time, but that should be another topic.

I am trying to prevent user information leaks and rainbow-table attacks even if the system is hacked.

I have searched and read the following information:

https://crypto.stackexchange.com/questions/9813/generate-elliptic-curve-private-key-from-user-passphrase

Handling user login using asymmetric cryptography

Asymmetric Cryptography as Hashing Function

Why salt hashing is better than just hashing?

I have recently read an article about keeping passwords safe and I have a few points of confusion. I found four ways of storing users’ passwords:

  1. Just keep them in plain text.
  2. Keep passwords in plain text, but encrypt the database.
  3. Hash the password.
  4. Use "salted hashing": generate a random string, concatenate it with the user’s entered password, run a hashing algorithm over the result, and also store the random string so the user can be let in later.
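To make sure I describe ways 3 and 4 correctly, here is a rough sketch (the hash function and salt length are only illustrative choices on my part):

# Way 3: unsalted hash. Way 4: per-user random salt stored next to the hash.
import hashlib
import os

def store_way3(password: str) -> str:
    # The same password always yields the same hash, so one precomputed table
    # (or one brute-force pass) attacks every user at once.
    return hashlib.sha256(password.encode()).hexdigest()

def store_way4(password: str) -> tuple[bytes, str]:
    # A fresh random salt per user; the salt is stored alongside the hash.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_way4(password: str, salt: bytes, stored_digest: str) -> bool:
    return hashlib.sha256(salt + password.encode()).hexdigest() == stored_digest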

The best way is the 4th. My main confusion is about what we want to protect against. As I understand it, the threat is an attacker gaining access to our server and obtaining the database and the server-side source code. In that case I, as a developer, jeopardize the accounts of users who use the same password on every website: the attacker recovers a password from my database and then knows that user’s password on other websites/apps. That is why I want to make passwords as secure as possible. Given that, I do not understand the difference between ways 3 and 4 (1 and 2 are obviously bad, I think).

To crack passwords stored in way 3, the attacker takes my hashing algorithm and hashes candidate passwords (either by brute force or from a database of common passwords), comparing the resulting strings to my database. To crack way 4, the attacker tries passwords just as in way 3, but when hashing, their program takes the stored random string from my database and includes it in the hash. That seems only slightly slower for the attacker, not much more (I think). So why should way 4 be much more secure than way 3, or what am I not understanding about the 4th way?

Does hashing client-side increase attack surface (assuming TLS and server-side salt+hash)?

This question asks whether one should hash on the client or the server. I want to know if there is any reason, aside from maybe having to handle one extra hashing library (if it’s not already in your security stack), why you wouldn’t want to hash both on the client and on the server. Extra code complexity is fine; you are just invoking one extra pure-functional method.

Workflow: the user submits a username/password. Assert the usual password-strength check. Submit over HTTPS username=username and password2=cryptohash(password). The backend generates salt := make_nonce() and stores username=username, salt=salt, key=cryptohash(password2 + salt).
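Here is a rough sketch of that workflow (SHA-256 and PBKDF2 are only stand-in choices for cryptohash; the exact functions are left open above):

import hashlib
import hmac
import os

def client_side(password: str) -> str:
    # password2 = cryptohash(password), sent over HTTPS instead of the raw password.
    return hashlib.sha256(password.encode()).hexdigest()

def server_side_store(password2: str) -> tuple[bytes, bytes]:
    # salt := make_nonce(); key := cryptohash(password2 + salt)
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password2.encode(), salt, 100_000)
    return salt, key

def server_side_verify(password2: str, salt: bytes, key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password2.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)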

I ask because I still see lots of websites which cap the password length at some obnoxiously small number, like 16, 14, 10, or even 8 (I’m fine if you want to cap at 64). Many also limit the types of characters you can input. Ostensibly, this is to protect against buffer overflows, escapes, injection attacks, etc., as well as to avoid under-defined internationalization behavior. But why not just take that field and run SomeHash.ComputeHash(Encoding.Unicode.GetBytes(value)), ideally a key-derivation function? That’ll take any trash you could put into that field and yield nice random bytes.

This question and this question are somewhat similar, but they mostly address whether you’d want to do only client-side hashing from a security point of view. I’m assuming the security would be at least as good as a regular password form submission.

If hashing should occur server-side, how do you protect the plaintext in transit?

I’ve been working on a full-stack project recently for my own amusement and would like to add authentication services to it, but I want to make sure I’m doing it the right way. And before anyone says it, yes, I know: a well-tested and trusted authentication framework is strongly recommended, as it’s likely you won’t get homegrown authentication just right. I’m well aware of this; this project is purely for amusement, as I said, and I won’t be using it in a production environment (although I want to develop it as though it will be used in one).

I believe I have a fairly adequate understanding of the basic security principles in the authentication realm, such as the difference between authentication and authorization, why you should never store plaintext passwords, why you should salt your passwords, why you should always prefer a TLS/SSL connection between the client and server, etc. My question is this:

Should passwords be:

a) hashed and salted once server-side,

b) hashed and salted once client-side and once server-side, or

c) encrypted client-side and decrypted and then hashed and salted once server-side?

Many different variations of this question have been asked before on Stack Exchange (such as this question, which links to many other good questions/answers), but I couldn’t find one that addresses all three of these options.

I’ll summarize what I’ve been able to glean from the various questions/answers here:

  1. Passwords should not be hashed only on the client side, as this effectively allows any attacker who gains access to the hash to impersonate users by submitting the hash back to the server. It would effectively be the same as simply storing the plaintext password; the only benefit is that the attacker wouldn’t be able to determine what the password actually is, meaning they wouldn’t be able to compromise the victim’s accounts on other services (assuming salting is being used).
  2. Technically speaking, if using TLS/SSL, passwords are encrypted client-side and decrypted server-side before being hashed and salted.
  3. Hashing and salting on the client side presents the unique issue that the salt would somehow need to be provided to the client beforehand (one way around this is shown in the sketch after this list).
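To make point 3 concrete, here is a rough sketch of option (b); the client-side salt derived from the username and the PBKDF2 parameters are only illustrative choices, not a recommendation:

# Option (b): hash+salt once on the client, once on the server.
# The client-side salt is derived from public data (site tag + username), so it
# never needs to be provisioned; the server-side salt is random and stays stored
# on the server only.
import hashlib
import hmac
import os

def client_hash(username: str, password: str) -> bytes:
    client_salt = ("example.com:" + username).encode()  # public, reproducible
    return hashlib.pbkdf2_hmac("sha256", password.encode(), client_salt, 100_000)

def server_store(client_digest: bytes) -> tuple[bytes, bytes]:
    server_salt = os.urandom(16)  # never leaves the server
    stored = hashlib.pbkdf2_hmac("sha256", client_digest, server_salt, 10_000)
    return server_salt, stored

def server_verify(client_digest: bytes, server_salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", client_digest, server_salt, 10_000)
    return hmac.compare_digest(candidate, stored)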

It’s my understanding that the most widely used method is to hash and salt the password server-side and to rely on TLS/SSL to protect the password in transit. The main question that remains unanswered for me is: what if TLS/SSL is breached? Many companies, schools, and government agencies that provide network services on site have provisioning profiles or other systems in place that allow them to decrypt network traffic on their own networks. Wouldn’t this mean that these institutions could potentially view the plaintext version of passwords on their networks? How do you prevent this?

Encryption (not hashing) of credentials in a Python connection string

I would like to know how to encrypt a database connection string in Python – ideally Python 3 – and store it in a secure wallet. I am happy to use something from pip. Since the connection string needs to be passed to the database connection code verbatim, no hashing is possible. This is motivated by:

  • a desire to avoid hard-coding the database connection credentials in a Python source file (bad for security and configurability);
  • a desire to avoid leaving them in plain text in a configuration file (not much better, for the same security concerns).

In a different universe, I have seen an equivalent procedure done in .NET using built-in machineKey / EncryptedData set up by aspnet_regiis -pe, but that is not portable.

Though this problem arises from an example where an OP is connecting via pymysql to a MySQL database,

  • the current question is specific neither to pymysql nor to MySQL, and
  • the content from that example is not applicable as a minimum reproducible example here.

The minimum reproducible example is literally

#!/usr/bin/env python3
PASSWORD='foo'

Searching for this on the internet is difficult because the results I get are about storing user passwords in a database, not storing connection passwords to a database in a separate wallet.

I would like to do better than a filesystem approach that relies on the user account of the service being the only user authorized to view what is otherwise a plain-text configuration file.
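One illustration of the "wallet" direction (the keyring package from pip delegates storage to the OS credential store; the service and account names below are placeholders, not from my actual setup):

# Sketch: fetch the database password from the OS credential store via keyring
# (pip install keyring). Keychain, Windows Credential Manager, Secret Service,
# etc. are used depending on the platform.
import keyring

SERVICE = "myapp-db"       # placeholder service label
ACCOUNT = "app_rw_user"    # placeholder database account

# One-time provisioning step, run by an operator rather than committed to source:
# keyring.set_password(SERVICE, ACCOUNT, "the-real-db-password")

def build_connection_args() -> dict:
    password = keyring.get_password(SERVICE, ACCOUNT)
    if password is None:
        raise RuntimeError("database credential not provisioned in the keyring")
    return {
        "host": "db.example.internal",  # placeholder host
        "user": ACCOUNT,
        "password": password,
    }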

Related questions

  • Securing connection credentials on a web server – but that requires manual intervention on every service start, which I want to avoid
  • Security while connecting to a MySQL database using PDO – which is PHP-specific and does not discuss encryption

Uniform Hashing. Understanding space occupancy and choice of functions

I’m having trouble understanding two things from some notes about Uniform Hashing. Here’s the copy-pasted part of the notes:

Let us first argue by a counting argument why the uniformity property, which we required of good hash functions, is computationally hard to guarantee. Recall that we are interested in hash functions which map keys in $U$ to integers in $\{0, 1, \ldots, m-1\}$. The total number of such hash functions is $m^{|U|}$, given that each key among the $|U|$ ones can be mapped into $m$ slots of the hash table. In order to guarantee uniform distribution of the keys and independence among them, our hash function should be any one of those. But, in this case, its representation would need $\Omega(\log_2 m^{|U|}) = \Omega(|U| \log_2 m)$ bits, which is really too much in terms of space occupancy and in terms of computing time (i.e. it would take at least $\Omega(\frac{|U| \log_2 m}{\log_2 |U|})$ time just to read the hash encoding).
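To get a feel for the magnitudes (the concrete numbers are my own illustration, not from the notes): with 64-bit keys, $|U| = 2^{64}$, and a table of $m = 2^{10}$ slots, writing down a truly arbitrary hash function takes $\log_2 m^{|U|} = |U| \log_2 m = 2^{64} \cdot 10$ bits $\approx 2.3 \times 10^{19}$ bytes, and reading that description in machine words of $\log_2 |U| = 64$ bits each takes roughly $\frac{2^{64} \cdot 10}{64} \approx 2.9 \times 10^{18}$ word operations.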

The claim that "our hash function should be any one of those" is the first thing that confuses me.

Why should the function be any one of those? Shouldn’t you avoid a good part of them, like the ones that send every element of the universe $U$ to the same number and thus do not distribute the elements?

The second thing is the last "$\Omega$". Why would it take $\Omega(\frac{|U| \log_2 m}{\log_2 |U|})$ time just to read the hash encoding?

The numerator is the number of bits needed to index every hash function in the space of such functions, and the denominator is the size in bits of a key. Why does this ratio give a lower bound on the time needed to read the encoding? And what hash encoding?