How critical is encryption at rest for public-cloud-hosted systems?

I work as a solutions architect for web-based systems on AWS, and as part of this role I often respond to information security questionnaires. Nearly all questionnaires request information about data encryption at rest and in transit. However, only a much smaller percentage ask about other security aspects, such as password policies or the common web application security issues published by OWASP.

I wonder how common or likely unauthorized access to client data actually is within a public cloud provider such as AWS, Azure, or GCP. It seems a very high barrier for an external party to clear; even the data centers of small local web hosting companies seem to have very good physical access security. And informal conversations with bank employees tell me that accessing someone’s bank account without reason leads to instant dismissal, so surely public cloud providers have similar controls in place?

This is not to challenge the value of encryption at rest: it is very cheap to enable, so there is no reason not to. But where does it sit in terms of priorities?
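To illustrate how cheap it is to enable (a sketch; the bucket name is hypothetical, and SSE-KMS is a one-line variation), default encryption at rest for an S3 bucket is a single call:

# Turn on default server-side encryption (SSE-S3) for one bucket:
aws s3api put-bucket-encryption \
    --bucket my-example-bucket \
    --server-side-encryption-configuration \
    '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'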

get_comments with post_status ‘public’ retrieves NULL result

This is my loop:

<?php
$comments = get_comments(array(
    'status'      => 'approve',
    'type'        => 'comment',
    'number'      => 10,
    'post_status' => 'public'
));
?>
<ul class="sidebar-comments">
    <?php foreach ($comments as $comment) { ?>
        <li>
            <div><?php echo get_avatar($comment, $size = '35'); ?></div>
            <em style="font-size:12px"><?php echo strip_tags($comment->comment_author); ?></em>
            (<a href="<?php echo get_option('home'); ?>/?p=<?php echo $comment->comment_post_ID; ?>/#comment-<?php echo $comment->comment_ID; ?>">link</a>)<br>
            <?php echo wp_html_excerpt($comment->comment_content, 35); ?>...
        </li>
    <?php } ?>
</ul>

This always gives an empty result (no errors). If I remove 'post_status' => 'public' from the get_comments arguments, the function works and comments load, but it also includes comments from private posts, which I don’t want.

Any ideas on why 'post_status' => 'public' is not working?
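One thing I have not ruled out: WordPress’s built-in status for published posts is 'publish', not 'public', so this is the variant I plan to test (a sketch of just the query part):

<?php
// Same query, but with WordPress's registered status for published
// posts; 'public' is not a post status WordPress defines by default,
// so the original query matches no posts and returns an empty array.
$comments = get_comments(array(
    'status'      => 'approve',
    'type'        => 'comment',
    'number'      => 10,
    'post_status' => 'publish'
));
?>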

Is it possible to export an expired GPG subkey’s public key without signatures?

Based on Is it possible to export a GPG subkey's public component? I got familiar with:

gpg --keyid-format long --with-fingerprint --list-key {e-mail}
gpg --export --armor --output public-key.asc 633DBBC0!   # for ssb1

and

gpg --export-options export-minimal {key-id} 

I also found the following which I added to my gpg.conf.

list-options show-unusable-subkeys 

In the context of a YubiKey, I sometimes need to transfer public key components to a new keyring on a new system in order to decrypt an old file. For some reason gpg --card-status is not enough to get the ball rolling: gpg keeps reporting that no key exists to decrypt the file. After importing the public key component, it works. I read somewhere on Stack that "the yubikey has not enough data on it to reconstruct the public key component." (Might add source later.)
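To make that concrete, the flow that ends up working for me looks roughly like this (file names are placeholders):

gpg --card-status              # creates secret-key stubs pointing at the card
gpg --decrypt old-file.gpg     # fails: gpg reports there is no secret key
gpg --import public-key.asc    # supply the public key component
gpg --decrypt old-file.gpg     # now works, prompting for the card PIN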

However, I don’t want to export all old subkeys (hence keyid!), only a select few and I don’t want to export any signatures (hence export-minimal).

So this is what I tried, but it did not produce the desired result:

gpg --armor --export --export-options export-minimal {subkeyid1}! {subkeyid2}!

or

gpg --armor --export --export-options export-minimal {subkeyid1}!
gpg --armor --export --export-options export-minimal {subkeyid2}!

If I pick a single {subkeyidx}!, the output is the same. The combination of export-minimal and pointing to a subkey is not working, as far as I can tell. I don’t know of any switch I can put in front of the keyid; do you?
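One mechanism I have seen mentioned but have not verified for this case is GnuPG’s --export-filter (GnuPG 2.1+). A sketch of the kind of invocation I mean, with placeholder fingerprints; the filter expression is my best reading of the man page, not something I have confirmed:

# Drop every subkey except the one I want, and let export-minimal
# strip the non-essential signatures:
gpg --armor \
    --export-options export-minimal \
    --export-filter 'drop-subkey=fpr <> {subkeyid1-fpr}' \
    --export {keyid}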

Then I tried the following and merged them later:

gpg --armor --export --output file1.asc {subkeyid1}!
gpg --armor --export --output file2.asc {subkeyid2}!

But these public key components contain unwanted signatures (along with the primary key’s public part and the uid, which is acceptable).

I used gpg --armor --export {subkeyid2}! | gpg to read the output. If I do this with unexpired subkeys, I get the expected list of keys, but if I do this with expired subkeys, the subkey is not listed.
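A lower-level way to inspect the same export, which also shows exactly which signature packets came along:

gpg --armor --export {subkeyid2}! | gpg --list-packets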

The question: so, how do I export two expired subkeys’ public key components without any signatures?


(Sidenote; meta question; alternative route):

gpg --card-status delivers:

[...]
General key info..: sub  {rsaX/eccX}/{keyid}  {date}  {name} {address}
sec#  {rsaX/eccX}/{keyid}  {created date}  {expires date}
[...]
ssb>  {rsaX/eccX}/{subkeyid1}  {created date}  {expires date}
      card-no: {nr}
ssb>  {rsaX/eccX}/{subkeyid2}  {created date}  {expires date}
      card-no: {nr}

And as we know from gpg -k and gpg -K: ‘sub’ means public subkey, ‘ssb’ means private subkey, and the ‘>’ indicator means the material is on a smartcard. So this all seems to confirm the public material is not on the card.

What prevents me from using some server’s public key and impersonating that server [duplicate]

I have read a lot about RSA encryption, DH key exchange, digital signatures, and the TLS protocol as a whole.

There’s something I am missing regarding public key signature validation.

Let’s say some website has a certificate signed with its private key; as a client, I have access to the public key.

But if the server only sends the public key to the client, what is preventing me as an attacker from taking this public key and returning it to whoever wants to communicate with me?

I mean, where does private-key authentication come into play?

I created this small C# program to demonstrate:

// Usings needed by this snippet:
using System;
using System.Net;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Authentication;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

private const int _port = 4455;

static void Main(string[] args)
{
    Task.Run(async () => { await TcpServerInit(); });
    Task.Run(async () => { await TcpClientInit(); });
    Console.ReadLine();
}

private static async Task TcpServerInit()
{
    var server = new TcpListener(IPAddress.Any, _port);
    server.Start();

    while (true)
    {
        TcpClient client = await server.AcceptTcpClientAsync();
        using (var netStream = client.GetStream())
        {
            ServicePointManager.ServerCertificateValidationCallback = ValidateCertificate;
            ServicePointManager.Expect100Continue = true;

            using (var ssl = new SslStream(netStream, false))
            {
                // MyPublicCert.cer holds only the public certificate. SslStream
                // needs the matching private key to act as a server, so the
                // handshake cannot complete with the public part alone.
                using (var cert = new X509Certificate2(@"MyPublicCert.cer"))
                {
                    await ssl.AuthenticateAsServerAsync(cert, false, SslProtocols.Tls12, true);
                }
            }
        }
    }
}

private static async Task TcpClientInit()
{
    using (TcpClient client = new TcpClient("localhost", _port))
    {
        using (SslStream sslStream = new SslStream(client.GetStream(), false,
            new RemoteCertificateValidationCallback(ValidateCertificate), null))
        {
            var servername = "CN=localhost";
            await sslStream.AuthenticateAsClientAsync(servername);
            byte[] message = Encoding.UTF8.GetBytes("Hello");
            sslStream.Write(message);
            sslStream.Flush();
        }
    }
}

private static bool ValidateCertificate(object sender, X509Certificate certificate,
    X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    // Certificate validation is stubbed out: accept everything.
    return true;
}

Why can’t we encrypt the message with the sender’s private key and the receiver’s public key when sending messages through a server?

I read about why we need E2EE and can’t rely only on HTTPS for sending messages through a messaging app. The reason, as I understood it, is that when the sender sends the message to the server, the TLS connection is associated with the server: TLS terminates at the server, and whoever controls the server can view the messages, since they are no longer encrypted. But in this process, when we send a message to the server, we are first encrypting the message with the sender’s private key and then with the server’s public key.

My question is: why can’t we encrypt the message with the sender’s private key and then the receiver’s public key? That way, even when it passes through the server, the server cannot view anything, since the message can only be decrypted using the receiver’s private key.
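To make sure we are talking about the same construction, here is a sketch of the scheme I mean using PyNaCl’s Box (the library choice is mine, purely for illustration). Box combines the sender’s private key and the receiver’s public key exactly as described; notably, it performs an X25519 Diffie-Hellman under the hood:

# pip install pynacl
from nacl.public import PrivateKey, Box

sender_private = PrivateKey.generate()
receiver_private = PrivateKey.generate()

# Sender encrypts with their private key + the receiver's public key.
sending_box = Box(sender_private, receiver_private.public_key)
ciphertext = sending_box.encrypt(b"hello through the server")

# The relaying server sees only ciphertext. The receiver decrypts
# with their private key + the sender's public key.
receiving_box = Box(receiver_private, sender_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"hello through the server"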

If this is possible, then why do we use methods like Diffie-Hellman key exchange?

What prevents someone from spoofing their public key when trying to establish an SSH connection?

Recently I’ve been trying to learn the mechanisms behind SSH keys, but I came across this question that I haven’t been able to find an answer to (I haven’t figured out how to word my question such that searching for it would give me the answer).

Basically, we add our local machine’s public key to the server’s authorized_keys file, which allows us to be authenticated automatically when we ssh into the server later on. My question is: what if someone takes my public key (it is public, after all) and presents it as their own? When the "attacker" tries to connect to the server, what part of the process lets the server know that they do not hold the corresponding private key?

I read somewhere that for RSA, it is possible for a user (let’s say user A) to encrypt/sign a message with their private key, and then for others to decrypt this message using A’s public key, thus proving that A is really who they claim to be. However, apparently this is not true for all cryptosystems, where it is not possible to sign with a private key (according to What happens when encrypting with private key?, feel free to correct this information if it is wrong). In those cases, how does the server make sure that the user is really who they claim to be?
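To illustrate the sign/verify half of what I mean (a sketch, not the actual SSH protocol; real SSH signs session-specific data, which I am modeling as a random challenge):

# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

client_key = Ed25519PrivateKey.generate()
server_copy_of_public_key = client_key.public_key()   # as in authorized_keys

challenge = os.urandom(32)               # stand-in for SSH session data
signature = client_key.sign(challenge)   # only the private key can do this

try:
    server_copy_of_public_key.verify(signature, challenge)
    print("client proved possession of the private key")
except InvalidSignature:
    print("rejected: the public key alone cannot forge this")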

Is using Argon2 with a public random value on the client side a good idea to protect passwords in transit?

Not sure if this belongs on Crypto SE or here, but anyway:

I’m building an app and I’m trying to decide whether it is secure to protect user passwords in transit this way, in addition to the TLS we already have.

On the server side, we already have bcrypt properly implemented: it takes the password as an opaque string, salts and peppers it, and compares it against / adds it to the database.

Even though SSL is deemed secure, I want to stay on the "server never sees plaintext" and "prevent MITM eavesdropping from sniffing plaintext passwords" side of things. I know this approach doesn’t change anything about authentication: anyone with whatever hash they sniff can still log in. My concern is to protect users’ plaintext passwords before they leave their device.

I think Argon2 is normally the go-to option here, but I can’t have a salt with this approach. If I use a random salt on the client side that changes every time I hash the plaintext password, I can’t authenticate, because my server just accepts the password as an opaque string. Because of my requirements, I can’t have a deterministic "salt" either (not sure that can even be called a salt in this case): if I used the user ID, I don’t have it during registration, and I can’t use the username or email either, because there are places where I don’t have access to them (while resetting a password, etc.). So my only option is a static key baked into the client; a sketch follows below. I’m not after security by obscurity by baking a key into the client; I’m just trying to make it harder for an attacker to use a precomputed hash table to recover plaintext passwords. I think it’s still better practice than sending the password in plaintext or using no "salt" at all, but I’m not sure.
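Concretely, the client-side step I have in mind (a sketch; the parameters and the baked-in constant are illustrative, not from a real deployment):

# pip install argon2-cffi
from argon2.low_level import hash_secret_raw, Type

STATIC_SALT = b"app-v1-static-salt"   # ships in the client; public by design

def client_side_hash(password: str) -> bytes:
    # Not a per-user salt: a public constant only blocks generic
    # precomputed tables, nothing more.
    return hash_secret_raw(
        secret=password.encode("utf-8"),
        salt=STATIC_SALT,
        time_cost=3,
        memory_cost=64 * 1024,   # 64 MiB
        parallelism=2,
        hash_len=32,
        type=Type.ID,
    )

# The server then treats client_side_hash(pw) as the opaque "password"
# and runs its existing bcrypt + salt + pepper over it.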

Bottom line: compared to sending passwords in plaintext (which go over TLS anyway, but I want to mitigate the server seeing plaintext passwords and MITM attacks with fake certificates), is it okay to use Argon2 with a public but fixed random value as the "salt" to protect user passwords in transit? Or am I doing something terribly wrong?

What are the most tolerable options for a general, non-technical user to avoid being victimized by malware?

I’ve talked with a new friend who is fairly bright and can do some interesting things programming Office applications, but whose technical abilities don’t extend to infosec. And he got bitten by nasty malware.

I’m wondering which options might be most productive to offer him. I’m not sure it’s realistic to repel every dedicated assault, but cybercriminals often look for someone who would be an easy kill, and (perhaps showing my ignorance here) I think it is realistic to make a system hardened enough not to be an easy kill.

Possibilities I’ve thought of include:

  1. Windows 10 with the screws turned down (how, if that is even possible?).

  2. Mint or another Linux distribution as the host OS for whatever can be done under Linux, plus a VMware or VirtualBox VM used for compatibility, which can be restored if the machine is trashed.

  3. Migrating to a used or new Mac, possibly with a Windows virtual machine, though most people using Macs don’t complain that they are missing things.

  4. Alongside one of these technical setups, pointing my friend to user education that says things like: "Don’t download software that you hadn’t set out to get. The $20 up-front price of Marine Aquarium is dwarfed by the hidden price tags of the adware and spyware behind a free aquarium screensaver."

This is not an exhaustive list, although it’s what I can think of now. I’ve had a pretty good track record of not engaging malicious software, and I think that skill can be learned (and that documentation on online safety would be taken very, very seriously).

What can I suggest to my friend for online safety?