How to keep encrypted files viewable on a Samsung S9/Android, but not outside of it

I lost a phone once, and I was more upset about the idea of a bad actor having access to my micro SD card and personal data than about losing the phone itself.

After getting a new phone, I found that the Samsung S9 has an encryption feature for SD cards. The problem is that it won’t let you view the files without going through the long process of decrypting them on the phone.

I want a way to still access the encrypted files on the phone itself, while requiring a password to decrypt/view them outside of the phone (so that if the memory card were removed and read on a computer, for example, the files would be inaccessible).

Any possibilities?

Can files be recovered from a new USB drive that was fully encrypted first, then formatted, with files later copied and deleted without encryption?

I have read advice recommending full-disk encryption (FDE) for a new USB drive before using it, to protect against the recovery of deleted data. That part is clear. Now assume I fully encrypt a new USB drive before copying any files to it, then format it, and then copy personal data onto it without using encryption. In this case, can a deleted file be recovered, whether it was deleted the normal way or securely erased?

How to use GPG to pass along encrypted content and double encrypt

Say I have a document. I want to encrypt it, then give it to my friend, without them being able to decrypt it. I want them to encrypt it again using their own GPG setup. Then, to decrypt it, my friend first removes their layer of encryption, sends the result to me, and I decrypt it.

How do I do that, roughly, with the GPG CLI? I am confused because there are three interrelated elements:

gpg --encrypt --sign --armor -r person@email.com -r foo@bar.com name_of_file 
  • encrypt
  • sign
  • recipients

I am not sure how they all relate or what each one does exactly. Do I just encrypt it without signing and without recipients, pass it to my friend, and have them encrypt it again without signing or adding recipients?
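The three flags are independent: `--encrypt` does the actual encryption, `-r` names who can decrypt (each layer here needs exactly one recipient, not zero), and `--sign` adds an optional proof of authorship that this flow does not need. A runnable sketch of the double-encryption flow, using two throwaway keys in a temporary keyring for demonstration (the addresses are placeholders; in reality each party holds only their own key and the files travel between machines):

```shell
set -e
# Throwaway keyring so this demo does not touch the real one
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "me@example.com" default default never
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "friend@example.com" default default never

echo "the document" > document.txt

# Layer 1: I encrypt to MY OWN key (no --sign), so my friend
# cannot read the plaintext even while holding the ciphertext.
gpg --batch --encrypt --armor -r me@example.com -o layer1.asc document.txt

# Layer 2: my friend encrypts my ciphertext to THEIR key.
gpg --batch --encrypt --armor -r friend@example.com -o layer2.asc layer1.asc

# Unwrapping runs in reverse: the friend strips the outer layer...
gpg --batch --quiet --decrypt -o layer1.dec layer2.asc
# ...sends layer1.dec back to me, and I strip the inner layer.
gpg --batch --quiet --decrypt -o document.dec layer1.dec
cmp document.txt document.dec
```

So: yes, you encrypt without signing, but each of you does name one recipient — yourself for the inner layer, your friend for the outer one.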

Does encrypted content in a database need to be signed?

If a user is logged into a website and they submit sensitive info over HTTPS, which is encrypted and stored in a database, does it matter if the info is not also signed?

Given that signing requires a private key, if a hacker had access to the server and wanted to tamper with the data, couldn’t they also re-sign the data with the same key?
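To make the concern concrete, here is a sketch using an HMAC via `openssl` (the key and record contents are made up): a MAC or signature only detects tampering by parties who do not hold the key.

```shell
set -e
KEY="server-side-hmac-key"   # hypothetical key, stored on the same server

printf 'balance=100' > record.txt
# Server computes a MAC over the stored record
openssl dgst -sha256 -hmac "$KEY" -r record.txt > record.sig

# An attacker who has compromised the server tampers with the record...
printf 'balance=999999' > record.txt
# ...and simply re-signs it with the very same key, so verification still passes.
openssl dgst -sha256 -hmac "$KEY" -r record.txt > record.sig
```

The usual mitigation is to keep the signing key off the web server, e.g. on a separate signing service or an HSM, so a web-server compromise alone is not enough to forge signatures.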

End-to-End Encrypted Traffic – Is a Web Application Firewall needed?

Background

We have a Java Spring Server, MongoDB and an Angular Frontend.

Some Application Security Configs:

  • TLS
  • Basic Authentication (Password Strength Policy)
  • XSS Protection (Content Security Policy)
  • CSRF Protection (Double Submit Cookie)

All important user data that users generate through chats, creating topics, etc. is end-to-end encrypted (E2EE).

Now our hosting company can add a WAF (Web Application Firewall) to the Server Setup.

From the hoster’s advertisement:

OWASP Top 10 Protection:

  • Bot-Access Control
  • Remote File Inclusion
  • Illegal Resource Access
  • SQL Injection
  • Blocking policy (block request, block IP, block session)

Question

Do we need a WAF if most of our data is E2EE? What would be the benefit of it in our setup?

Generate private key encrypted with password using openssl

I’m using openssl to sign files. It works, but I would like the private key file to be encrypted with a password. These are the commands I’m using; I would like to know the equivalent commands using a password:

  • Use the following command to generate your private key using the RSA algorithm: openssl genrsa -out private.key 2048
  • Use the following command to extract your public key: openssl rsa -in private.key -pubout -out public.key
  • Use the following command to sign the file: openssl dgst -sha512 -sign private.key -out signature.bin file.txt
  • To verify the signature: openssl dgst -sha512 -verify public.key -signature signature.bin file.txt
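For comparison, a sketch of the same sequence with a password-protected key might look like this. The passphrase is supplied inline via `-passout`/`-passin` purely for illustration; run interactively, openssl will prompt for it instead, which is safer than putting it on the command line.

```shell
set -e
echo "hello" > file.txt

# Generate the private key, encrypted with AES-256 under a passphrase
openssl genrsa -aes256 -passout pass:mysecret -out private.key 2048

# Extract the public key (the passphrase is needed to read the private key)
openssl rsa -in private.key -passin pass:mysecret -pubout -out public.key

# Sign the file: the passphrase unlocks the private key
openssl dgst -sha512 -sign private.key -passin pass:mysecret \
        -out signature.bin file.txt

# Verify: only the public key is needed, so no passphrase
openssl dgst -sha512 -verify public.key -signature signature.bin file.txt
```

The signing and verification commands are unchanged apart from `-passin`; only reading the private key requires the password.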

Storing encrypted tokens in LocalStorage

I am building a JavaScript application that will run in a web browser but also as a pseudo-native mobile application via Apache Cordova. It communicates via API to a separate backend service.

The app requires that the user be prompted for some kind of identifying information whenever it is launched, but for ease of use this need not be their full username and password. Password entry in particular, especially on mobile devices, would be too cumbersome for this to be feasible. Therefore, we are considering a one-off login procedure where the full credentials are supplied, followed by a “setup” step where the user creates a PIN for future access. This could, in future, be extended to allow a fingerprint/face scan to also “unlock” the app on supported devices.

We are also hoping to avoid the use of cookies. Doing so avoids CSRF concerns, and in any case cookie support in Cordova applications appears to be either non-existent or at best unreliable.

My initial thoughts on implementation are:

  • The user submits a valid username/password combination to a login/ endpoint of the API and receives an “ID token” in response.
  • During the “setup” phase, the PIN chosen by the user is used to encrypt the ID token.
  • The encrypted ID token is stored in LocalStorage.
  • A secondary request is made to an authorise/ endpoint of the API, including the plaintext ID token. Assuming the token validates, a second token is issued to the app.
  • This second token is what is used in all subsequent requests to the API to prove the user is trusted, has a relatively short expiry, and is not stored in any persistent manner by the application.
  • Upon returning to the app at a later date, the user need only provide their chosen PIN.
  • It can be used to decrypt the stored ID token, which is then used in the authorise/ request to generate and return a new short-lived session token.

The internet abounds with articles advising against the use of LocalStorage for anything sensitive, due to its exposure in the event of XSS attacks. The threat is that a token in LocalStorage could be stolen when the same token in an httpOnly cookie could not. It is worth noting that in both cases, a malicious script running within the app could successfully issue fraudulent requests to the backend API.

I believe the XSS threat of the ID token being stolen is mitigated by it being encrypted under the user’s PIN, and neither the decrypted value nor the PIN itself being stored or used beyond the authorise/ request.

The session token is also vulnerable to being stolen by XSS. It is only stored in memory, but is obviously still accessible to JavaScript and thus to a malicious script. These tokens would be given short expiry times to mitigate this threat. Not to mention we would do our best to harden against XSS in the first place.

I think the above sounds like a secure way to implement our requirements, but I am no security expert. Am I missing any obvious weaknesses here? Does this actually sound secure?