What is the safe way to update production data when it is found to be inconsistent?

I work at a small company and need to fix a batch of inconsistent data in production. I have written a script to handle the fix.

I understand that writing SQL to fix data is far riskier than fixing normal front-end or back-end code. For front-end or back-end code, we can write thousands of automated tests to verify the result, and even if a small bug slips through, the effect might not be as dramatic.

How do you normally fix data in production when a lot of it is inconsistent? Are there any methods or strategies that can reduce the risk?
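One common risk-reduction pattern is to snapshot the affected rows, run the fix inside a transaction, and verify the change count before committing. Here is a minimal sketch of that pattern using Python's built-in sqlite3 as a stand-in for the production database; the table, column, and expected row count are made up for illustration.

```python
# Sketch: transactional data fix with a backup and a pre-commit sanity check.
# The schema and the "fix" are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("PAID",), ("paid",), ("SHIPPED",)])

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        # 1. Snapshot the rows we are about to change, so they can be restored.
        conn.execute("CREATE TABLE orders_backup AS "
                     "SELECT * FROM orders WHERE status = 'paid'")
        # 2. Apply the fix.
        cur = conn.execute("UPDATE orders SET status = 'PAID' "
                           "WHERE status = 'paid'")
        # 3. Verify the change matches expectations before committing.
        if cur.rowcount != 1:
            raise RuntimeError(f"expected 1 row, changed {cur.rowcount}")
except Exception as exc:
    print("fix rolled back:", exc)

remaining = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'paid'").fetchone()[0]
```

If the row count check fails, the whole transaction rolls back and nothing is touched; if it succeeds, the backup table still lets you reverse the fix later. Dry-running the same script against a restored copy of the production database first adds another layer of safety.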

Is it safe to use third party OIDC ID Token as our APIs bearer token?

Practically speaking, we are outsourcing the authentication of our users to a third-party application that is, needless to say, external to our system. I am not sure if this is actually advisable, but from our perspective, since we don't really want to maintain security credentials ourselves, we thought it makes sense to leave that in the hands of a more capable party. For now we intend to use them mainly as an identity provider, because we find their authorisation support hard to use. To be clear, at the moment we do not require access to any resources on the identity provider's side beyond the user profile; the authorisation I'm referring to is for our own system. Because of this, acquiring an ID token from the trusted identity provider seems to be good enough for our purposes.

We intend to internally keep track of references to the user id provided through the ID token (e.g. the JWT sub claim) for the purpose of attaching our own authorisation details to them. Since the ID token gives us enough information to pull authorisation details about the user, I'm thinking we don't really need anything else. I'm not sure, however, whether this is a sound approach or whether there's a security risk in this kind of flow.

In this setup, for our own API we’d have to use the external IdP for authentication, but we’d probably need to be issuing the access tokens ourselves to our clients.

Is it safe to create a session from an auth token?

My server uses Django Rest Framework. My mobile app logs in using token authentication. However, I also have a webview in the mobile app where I need to be logged in. I can't inject the auth token into every request in the webview, so I use the auth token to authenticate this one endpoint and then create a session from it. This is the code:

class CreateSessionView(APIView):
    authentication_classes = [TokenAuthentication]
    permission_classes = (permissions.IsAuthenticated,)
    throttle_classes = [ScopedRateThrottle]
    throttle_scope = 'auth_token_verify'

    def get(self, request, format=None):
        login(request, request.user, backend='django.contrib.auth.backends.ModelBackend')
        return redirect(reverse('home'))

My questions are:

  1. Is there a vulnerability here? If so, how can I secure it?

  2. Do I need CSRF?

Is it safe to encrypt a user’s third party API key with their own password?

I’m running a node application which needs to make calls to a third party API, on behalf of my user, using their own API keys.

API calls only need to be made on behalf of the user while they are logged into my site.

Currently I use bcrypt to hash and compare their password:

bcrypt.hash(req.body.password, 12, function (err, hash) {...

bcrypt.compare(req.body.password, users[req.body.username]['password'], function (err, result) {...

I thought that when a user adds their API key to the website I could require their password again, and after validating the password, I could use the encryption method linked here to encrypt it (with their plaintext password as the key).

When a user logs in, I could validate their password, decrypt their API key using the method from the link above (and their password), and store the API key in plain text using express-sessions, ready for making calls on the user's request.

With this method, if the user loses their password they will have to reset their API keys. I'm happy to accept that trade-off.

Is this approach safe or is there something I’m overlooking?
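One detail worth noting: the plaintext password should not be used as the key directly; it should first go through a dedicated key-derivation function with a per-user salt. Here is a minimal sketch of that derivation step in Python (the same idea applies with node's `crypto.scryptSync`); the actual symmetric encryption of the API key (e.g. AES-GCM) is omitted, and the password and parameters are illustrative.

```python
# Sketch: derive an encryption key from the user's password, separately from
# the bcrypt login hash. Only the key-derivation step is shown; wrapping the
# API key with AES-GCM would use this derived key.
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # scrypt: memory-hard KDF; the salt is stored alongside the ciphertext.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1,
                          maxmem=2**26, dklen=32)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
```

The same password plus the same salt always yields the same key, so the API key can be re-decrypted at login; a different password yields a different key, which is exactly the trade-off you describe: after a password reset, the old ciphertext is unrecoverable and the API key must be re-entered.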

Is it safe to assume that a JWT is valid if it can be decrypted with the server’s private key?

This is my first time using JWT. I’m using the jwcrypto library as directed, and the key I’m using is an RSA key I generated with OpenSSL.

My initial inclination was to store the JWT in the database with the user’s row, and then validate the token’s claims against the database on every request.

But then it occurred to me that the token payload is signed with my server’s private key.

Assuming my private key is never compromised, is it safe to assume that all data contained in the JWT, presented as a Bearer token by the user, is tamper-proof? I understand that it is potentially readable by third parties, since I’m not encrypting.

What I want to know is: is there anything I need to do to validate the user's JWT besides decrypting it and letting the library tell me if anything is wrong?
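For context, signature verification only proves the token was minted by the holder of the private key; the standard registered claims still need checking separately. A minimal sketch of those checks (issuer and audience values here are hypothetical; jwcrypto can enforce some of these for you, but the logic is shown explicitly):

```python
# Sketch: claim checks that still matter after the signature verifies.
import time

def validate_claims(claims: dict, *, issuer: str, audience: str,
                    now: float = None) -> bool:
    now = time.time() if now is None else now
    if claims.get("iss") != issuer:
        return False   # minted by a different (even if trusted) issuer
    if claims.get("aud") != audience:
        return False   # token intended for a different service
    if "exp" in claims and now >= claims["exp"]:
        return False   # expired
    if "nbf" in claims and now < claims["nbf"]:
        return False   # not valid yet
    return True
```

A signed token with a valid signature can still be expired, replayed long after issuance, or issued for a different audience, so these checks are not optional extras.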

Is it safe to assume that my computer’s clock will always be synced with actual time within the second or a few seconds at the worst?

Years ago, I ran a service where the moderators were able to perform various actions with massive privacy implications, but only if the accounts or contributions in question were younger than a short period of time. I enforced this by checking the timestamp against the current Unix epoch, allowing for X hours/days. Normally, this worked well.

One day, the server this was hosted on was "knocked offline" in the data centre where I was renting it, according to the hosting company. When it came back up, its clock had been reset to the factory default, which was many years in the past.

This meant that all my moderators could potentially see every single account's history and contributions in my service until I came back, noticed the wrong time (which I might not even have done!), and re-synced it. After that, I hardcoded a timestamp into the code which the current time had to exceed, or else the service would enter an "offline mode", to avoid any potential disasters like this in the future. I also set up some kind of automatic timekeeping mechanism (in FreeBSD).
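The hardcoded-timestamp guard described above can be sketched in a few lines. The constant and the fallback action are illustrative: any "now" earlier than the release timestamp is physically impossible, so it must indicate a reset clock.

```python
# Sketch: refuse to trust the clock if it reports a time before a constant
# baked in at release time ("offline mode" guard). BUILD_TIME is a
# hypothetical release timestamp.
import time

BUILD_TIME = 1700000000  # set at release; any earlier "now" is impossible

def clock_is_sane(now: float = None) -> bool:
    now = time.time() if now is None else now
    return now >= BUILD_TIME

if not clock_is_sane():
    # e.g. disable privacy-sensitive moderator actions until re-synced
    pass
```

This catches only clocks that are wrong by more than the service's age, but that is exactly the factory-reset failure mode; finer drift is what NTP is for.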

You’d think that by now, not only would every single computer always be auto-synced by default, with tons of fallback mechanisms, so that the clock is never, ever out of sync with "actual time" by more than a second, if not less; it would also be impossible, or at least extremely difficult, to set the clock to anything but the current actual time, even if you went out of your way to do it.

I can’t remember my Windows computer ever having had the wrong time in the last "many years". However, I do important logging of events in a system running on it. Should I just assume that the OS can keep the time at all times? Or should I use some kind of time-syncing service myself, like some free HTTPS API where I make a lookup every minute and force the system clock to whatever it reports? Or should I just leave it be and assume that this is "taken care of"/solved?

How to report false positive to Google Safe Browsing without signing up with Google?

I was wondering how to report a false positive to Google Safe Browsing without having to create a Google account and feed their insatiable hunger for more data.

I have not found such a way as of yet. Google pretty much seems intent on preventing any contact in this matter or others.

Background

My domain – yes, the whole domain, including subdomains – was reported as (two examples):

Firefox blocked this page because it might try to trick you into installing programs that harm your browsing experience (for example, by changing your homepage or showing extra ads on sites you visit).

… and:

This site is unsafe

The site https://***********.net/ contains harmful content, including pages that:

Install unwanted or malicious software on visitors’ computers

I won’t disclose my domain here, but I have a list of digests for all the files on my (private) website, the list is signed with my PGP key, and I have verified both the hashes and the signature, all of which checked out. So I am sufficiently certain that this is a false positive. None of these files have changed in the last four years, because my current software development activities are going on elsewhere.

Unfortunately there is no useful information to be had from the “details” provided by Google Safe Browsing. A full URL to the alleged malicious content would have been helpful; heck even a file name or something like MIME-type plus cryptographic hash …

I have two pieces of content on my website where one could debate whether they are PUA/PUP (as it’s called these days). Both are executables inside a ZIP file, alongside the respective source code used to create them. So in no way would any of that attempt to install anything on a visitor’s computer, unless we imagine a fictitious browser hellbent on putting its user at risk by requesting to run at the highest privileges on start, then unpacking every download and running any executables it finds without user interaction. And even then, one of the two pieces of software would fail, and the other would be visible.

  1. One is a Proof of Concept for an exploit of Windows debug ports which has been patched for well over a decade and so will hardly be a danger to anyone.
  2. The other is a tutorial which includes a keylogger which – when run – is clearly visible to the user. So no shady dealings here either.

But since these two items came up in the past, I thought I should mention them.

Anyway, a cursory check on VirusTotal showed one out of seventy engines flagging my domain as "malicious". Given that Google bought VT some time ago, it stands to reason they use it for Google Safe Browsing.

The mysterious engine with the detection is listed as "CRDF", and I have still been unable to find out who or what that refers to. So obviously there is no way to appeal, request a review, or anything of the sort … it seems Google is judge, jury and executioner in this one.

So how do I “appeal”?

Is attribute-based encryption safe for production use?

I’m very interested in attribute-based encryption (ABE). I see various working examples online, and I want to know: has it been verified as production-ready? What does it mean to have a security audit, and how do I know if ABE is safe for use with real customer data?

I tried to create the tags attribute-based-encryption, abe, and cp-abe, but I don’t have enough reputation yet to make them. Since these don’t exist yet on security.stackexchange.com, I feel like I deserve the reputation points for this question đŸ˜‰