Freelancer has access to Firebase Database. What should I do?

Back in November, I put up a $100 bounty on a freelancer website for anybody who could debug a bug I had found in an app I was developing and couldn’t squash myself. It turned out the freelancer was in no position to work for me. He had lied about being Danish (he was actually from northeastern China and had such a poor internet connection that he could barely run my app). Furthermore, his English was far worse than that of any freelancer I had worked with previously; I couldn’t even hold a conversation with him.

Anyway, I want to retract the $100 I had staged for him, but I’m afraid of him vandalizing my database as an act of retaliation. He has cloned my project from GitHub, including the GoogleService-Info.plist file, which would allow him to make changes to my backend.

My project is still in beta but is slated to go public next month. Should I generate a new GoogleService-Info.plist file and force all current beta users to update their version (the previous version will become unusable), or should I leave things as they are and hope the freelancer doesn’t destroy everything I have?

PS: Sorry, this may not be the correct StackExchange site for this question. I am a seasoned Stack Overflow user and know it wouldn’t be appropriate there. If somebody points me to a better site, I will gladly move the question.

What do people mean when they talk about “hackers gaining access to our network” (at home)?

Have I fundamentally missed something between the time when I sat with my 486 IBM PC in the house, fully offline, and today? Do normal people actually set up complex local networks in their homes where they have some kind of “trust anyone with an internal IP address” security scheme going on?

What exactly do they mean by this? I get the feeling that either I am extremely ignorant and somehow have not understood basic concepts of networking in spite of dealing with this (and hating it) for 25 years, or they have no idea what they are talking about and have learned everything they know about computers from Hollywood blockbuster movies and crappy TV series…

What does “gaining access” to a home network actually mean? Is it, like, exploiting the NAT router (if such a thing is even used, which has not always been the case for me)? Even if they exploit the router, that doesn’t magically give them any “access” to the “network” (meaning the PCs connected to the router). At best, they can maybe read plaintext traffic, but how much of that is left these days? I’d hope close to 0% of all traffic.

And again, for a long time I didn’t even have any device “in between” the ISP and my single PC; it was a very “dumb” cable or ADSL modem with no control panel or NAT features. Right now I’m using a MikroTik NAT router, which I update maybe once a year at best, because it has the most user-hostile, idiotic mechanism for enabling “auto updates”, something you’d think would be not only dead simple but enabled by default. Nope. You have to follow their cryptic news and decide when to manually SSH into it (or use the extremely confusing and messy web interface) to apply updates. I guarantee that 99.99% of all people (including “geeks”) have no idea that they even need to do this, let alone actually bother to.

So what do people mean when they talk about “gaining access”? No updated version of Windows has ever just allowed somebody to randomly connect remotely and “gain access”, regardless of the presence or absence of a router/switch/whatever in between. Or, if it has, that was some kind of 0-day or not-yet-public exploit. The so-called “hackers” that people talk about more than likely never “gain access” like that at all; I bet it’s 100% social engineering, tricking people into running coolgame.exe sent to them as an e-mail attachment, and things like that.

Since apparently I always sound rude, I should point out that my intention with this question is to understand people and the world, and not an attempt to somehow sound “superior”. I’m genuinely wondering about this since not a day goes by without me feeling extremely paranoid about security and privacy, especially knowing how incredibly naive and stupid I used to be, and how naive and stupid people in general seem to perpetually be about these things.

Can my institution access my email (inbox/sent items/etc.) and edit it?

First of all, I have thoroughly read the answers to the following questions and none of them answer my queries:

  • Can Google Chrome read/scan my ProtonMail inbox page?
  • Can my IT department read my Google Hangouts chats while at work?
  • Does company email can access hangouts and private emails?

I am a student at one of the highest-ranking universities in the UK (even though I avoid giving much credence to rankings). A couple of days ago, someone found a way to send emails to every single student while masquerading as the Vice-Chancellor (the emails appeared to come from his own university email address).

The email body was basically a silly hoax, apparently written by teenagers (“Dear students, just got off the phone with the prime minister, the University of [censored] will close indefinitely, exams are cancelled, go out and party”, etc.).

The same day, a few hours later, all the received emails had been deleted from our inboxes; a few people had replied to that email, though, and still had the original ‘hoax’ email. The next day, the replies they had sent were deleted as well.

The University evidently has access to our email accounts, but:

  1. Are they allowed to access/view/edit our accounts?
  2. Is this legal?
  3. Do I have any say in it? That is, can I refuse to use the University email for any type of correspondence and demand that they contact me only through my personal email address? If not, is there any way I can preserve my privacy?
  4. If this was not disclosed clearly in any “Terms & Conditions” that I agreed to by studying here, then what are my rights?
  5. Does the GDPR have any effect here, or is the university allowed to do whatever it feels like?
  6. What else do they possibly have access to?

The university uses Office365 (Outlook).

Store passwords locally with plain-text access on WinPE

I have an application that needs to store network credentials for a network drive/share on disk. The user shouldn’t need to enter the password every time. The OS is WinPE, so the user cannot map the drive once and have it persist.

Limitations:

  • I need the password in plain text in order to map the drive.
  • The program should work without an additional password that the user has to enter.

Thoughts:

  • Hash + salt is not reversible, so I cannot get the password back in plain text.
  • An encrypted password is not safe, because the program has to store the key; anyone who looks inside the code will find the key and decrypt the password (see the sketch below).
  • I cannot use the “Protect Data” (DPAPI) interface of Windows, because I use WinPE. Protect Data Documentation

The program is written in C#. Maybe someone has a good idea for my problem. Thanks!
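
To make the second thought above concrete, here is a minimal sketch of the “encrypted password” approach in C#; the key, IV, file path, and class name are all made up for illustration. The weakness is visible right in the source: the key has to ship inside the binary, so anyone who decompiles the executable can recover it and decrypt the stored password.

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    // Sketch only: AES with a key baked into the executable. This merely
    // obfuscates the stored password; the key is recoverable by decompiling.
    static class CredentialStore
    {
        // Hypothetical hard-coded key/IV -- exactly the weakness described above.
        private static readonly byte[] Key = Encoding.UTF8.GetBytes("0123456789abcdef0123456789abcdef"); // 32 bytes
        private static readonly byte[] IV  = Encoding.UTF8.GetBytes("0123456789abcdef");                 // 16 bytes

        public static void Save(string path, string password)
        {
            byte[] plain = Encoding.UTF8.GetBytes(password);
            using (Aes aes = Aes.Create())
            using (ICryptoTransform enc = aes.CreateEncryptor(Key, IV))
            {
                File.WriteAllBytes(path, enc.TransformFinalBlock(plain, 0, plain.Length));
            }
        }

        public static string Load(string path)
        {
            byte[] cipher = File.ReadAllBytes(path);
            using (Aes aes = Aes.Create())
            using (ICryptoTransform dec = aes.CreateDecryptor(Key, IV))
            {
                return Encoding.UTF8.GetString(dec.TransformFinalBlock(cipher, 0, cipher.Length));
            }
        }
    }

The plain-text value returned by Load would then be handed to whatever maps the share (e.g. net use or WNetAddConnection2).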

Can you restrict the services a certificate can access on the client side?

I have a verification certificate signed by my organisation’s CA, which I can use to authenticate my user account on intranet web services.

Is there some way I can sign a new certificate that can only authenticate to one specific web service? Or some other way to give a script limited access to a single web service, without handing it full access to my verification certificate?

Unfortunately I don’t have access to modify the web service, which is running nginx.

Would a difficult-to-access “key” be an option to securely solve the Apple vs. FBI problem?

In recent times, there has been an escalating demand by legislators in the US and around the world to be able to decrypt phones that come pre-configured with strong encryption. Key escrow is commonly suggested as a solution, with the risk seeming to arise from the escrow agent misusing or not appropriately securing the keys, allowing remote, illegal, or surreptitious access to the secured data.

Could a system secure from remote attack be devised by adding an offline, tamper-evident key to the device? This could be an unconnected WLCSP flash chip, or a barcode inside the device, containing the plaintext of a decryption key.

I recognize the evil maid attack, but presume a tamper seal could be made sufficiently challenging to deter all but the most motivated attackers from gaining surreptitious access to the data.

What would be lost in this scheme relative to the current security afforded by a consumer-grade pre-encrypted device (cf. the iPhone)? Bitcoin, subpoena efficacy, and other scenarios that seem fine with “smash and grab” tactics come to mind.

How does RFC 7636 (PKCE) stop a malicious app from performing the same code challenge and getting legitimate access to the API?

As per RFC 7636, PKCE stops malicious apps that pretend to be legitimate apps from gaining access to OAuth 2.0-protected APIs.

The flow suggests having a runtime-level secret that is generated by the client, with the authorization server being told about it. This allows the token issuer to verify the secret against what the authorization server saw and grant a proper access token.
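
For reference, here is a minimal sketch (in C#, with names of my own choosing) of the client-side part of that flow: a fresh code_verifier is generated per authorization request, and only its SHA-256 hash, the code_challenge, is sent with the initial request; the verifier itself is revealed later, in the token request.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Sketch of the client side of PKCE (RFC 7636).
    static class PkceSketch
    {
        public static string NewCodeVerifier()
        {
            // A fresh random secret for each authorization request.
            byte[] bytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(bytes);
            }
            return Base64Url(bytes); // 43-character, URL-safe verifier
        }

        public static string CodeChallenge(string verifier)
        {
            // Sent with the authorization request as code_challenge_method=S256.
            using (SHA256 sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(verifier));
                return Base64Url(hash);
            }
        }

        private static string Base64Url(byte[] data)
        {
            return Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
        }
    }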

However, let’s assume a malicious app, as the RFC suggests, with a correct client_id and client_secret; it can do the same PKCE process and gain access to protected resources.

Is this RFC not meant to protect against that kind of attack, or am I simply missing something here?

Is this JWT access and refresh token logic/structure secure?

  1. User logs in
    1. The user gets a refresh token assigned and stored in the database (long-lived, 7 days).
    2. The client receives an access token (short-lived, 2 hours) and stores it as a cookie. The client also receives the userId, AES-encrypted, and stores it as a cookie.
    3. As long as the access token is not expired, the user keeps using it to navigate the website.
    4. The token expires.
    5. The expired access token gets sent to a refresh endpoint, along with the AES-encrypted userId; both are currently stored in our cookies.
    6. The server decrypts the userId and retrieves the refresh token that corresponds to the user by selecting it from the database using our userId.
    7. Now we have our refresh token and access token on the server, so we refresh the token and send back the new access token. We also generate a new refresh token and overwrite the old one in the database. (I guess I need to somehow blacklist my old refresh token at this point.) A sketch of this refresh step follows below.
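
To make steps 5–7 concrete, here is a rough sketch in C#. All class, method, and variable names are made up; token minting is faked with random opaque strings (a real service would issue signed JWTs), and the AES key/IV are hard-coded only to keep the example short.

    using System;
    using System.Collections.Generic;
    using System.Security.Cryptography;
    using System.Text;

    // Rough sketch of the refresh endpoint described in steps 5-7.
    class RefreshEndpointSketch
    {
        // userId -> currently valid refresh token (the store from step 1).
        private readonly Dictionary<string, string> _refreshStore = new Dictionary<string, string>();

        private static readonly byte[] AesKey = Encoding.UTF8.GetBytes("0123456789abcdef0123456789abcdef");
        private static readonly byte[] AesIV  = Encoding.UTF8.GetBytes("0123456789abcdef");

        public (string accessToken, string refreshToken) Refresh(string expiredAccessToken, string encryptedUserId)
        {
            // Step 6: recover the userId from the encrypted cookie value and
            // look up the refresh token stored for that user. (The expired
            // access token is also received, per step 5, but isn't needed to
            // find the refresh token; its own expiry check is elided here.)
            string userId = DecryptUserId(encryptedUserId);
            if (!_refreshStore.TryGetValue(userId, out string storedRefreshToken))
            {
                throw new UnauthorizedAccessException("No refresh token stored for this user");
            }

            // Step 7: issue a new access token and a new refresh token, then
            // overwrite the old refresh token. Because the lookup above only
            // ever returns the latest stored token, overwriting it is what
            // effectively "blacklists" the old one.
            string newAccessToken  = NewOpaqueToken(); // short-lived (2 h) in the real flow
            string newRefreshToken = NewOpaqueToken(); // long-lived (7 days) in the real flow
            _refreshStore[userId] = newRefreshToken;
            return (newAccessToken, newRefreshToken);
        }

        private static string DecryptUserId(string encrypted)
        {
            byte[] cipher = Convert.FromBase64String(encrypted);
            using (Aes aes = Aes.Create())
            using (ICryptoTransform dec = aes.CreateDecryptor(AesKey, AesIV))
            {
                return Encoding.UTF8.GetString(dec.TransformFinalBlock(cipher, 0, cipher.Length));
            }
        }

        private static string NewOpaqueToken()
        {
            byte[] bytes = new byte[32];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(bytes);
            }
            return Convert.ToBase64String(bytes);
        }
    }

The sketch assumes that only the most recently stored refresh token for a user is ever accepted, which is what overwriting it in the database relies on.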

BitLocker security with physical access?

I searched for how to make a 3D-printed weapon, the police came and seized all my stuff, and I’m now suspected of weapons manufacturing… This was out of curiosity; I haven’t made one, so hopefully they’ll be able to look into my printer’s log and clear me.

But they took my computer, which pisses me off for privacy reasons. It’s BitLocker-encrypted and powered off, with a very strong 16+ character password. What are the chances of them actually gaining access to it?