Securing API access: OAuth client credentials vs. client ID and secret

I have a REST API that will be called by external third-party servers over the internet and used only for machine-to-machine communication. I am looking for mechanisms to secure this API such that only servers that I designate are allowed to use it. I do not control the servers and cannot guarantee the IP addresses that these servers use.

I thought of using the OAuth client credentials flow to secure the API by giving each external server a client ID and a client secret. But that made me wonder why I should use OAuth at all rather than deal with the client ID and secret directly: when the external servers call my API they pass the client ID and client secret, and if a compromise happens I revoke the client secret and the communication is no longer allowed. For communication to start again, I issue them a new client ID and secret.

Is this approach correct? If not, what is the advantage of using the client credentials flow in OAuth if the external server has to store and pass the client ID and secret to get a token anyway?
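Whichever way you go, it may help to see what each scheme puts on the wire. A minimal sketch in plain Python (the client ID, secret, and token values here are made up): in the client credentials grant, the long-lived secret is sent only to the token endpoint, while every subsequent API call carries a short-lived bearer token.

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Authorization header sent to the token endpoint in the
    OAuth 2.0 client credentials grant (RFC 6749, section 4.4)."""
    raw = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

def bearer_header(access_token: str) -> str:
    """Header used on subsequent API calls; only the short-lived
    token travels, not the long-lived secret."""
    return "Bearer " + access_token

# Direct scheme: the secret accompanies every single request.
direct_call = {"Authorization": basic_auth_header("server-42", "s3cret")}

# OAuth scheme: the secret goes to the token endpoint once...
token_request = {"Authorization": basic_auth_header("server-42", "s3cret")}
# ...and all API calls carry a token that expires on its own.
api_call = {"Authorization": bearer_header("eyJhbGciOi.example.token")}
```

The practical difference is blast radius: with direct credentials, every request that leaks exposes the secret itself; with tokens, most leaks expose something that expires in minutes.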

Ways to transition SELinux domain / process context (securing SELinux boundaries)

(Apologies for the multi-part question. The theme is the same, but there are quite a few edge cases.)

Browsing the web, I came across the resources listed below, but they don’t make it quite clear what the situation really is, so this is my attempt to clarify things and gather the info I am missing.


Ways to transition

I gather there are at least three ways for a process to transition into another domain. I will list them as the rules displayed by sesearch:

  1. “type_transition <source> <file_label>:process <target>” – a process in the source domain that executes a file labeled file_label will end up in the target domain.
  2. “allow <source> <target>:process dyntransition” – a process in the source domain can write to /proc/self/attr/current to transition into the target domain.
  3. “allow <source> <target>:process transition” – a process in the source domain can write to /proc/self/attr/exec to transition into the target domain when exec is called.

Are there any other ways?


Protections for these transitions

Besides the above rules, transitions will also require:

  • “allow <source> <file_label>:file { execute read getattr }” (is getattr really required? read?) – for type_transition and probably transition
  • “allow <target> <file_label>:file entrypoint” – for type_transition and probably transition
  • “allow <source> <target>:process setexec” – for transition
  • “allow <source> <target>:process setcurrent” – for dyntransition
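As a rough illustration of the /proc interfaces mentioned above, here is a sketch in Python (assuming Linux; both helpers fail soft when SELinux is absent or the policy denies the operation):

```python
def current_selinux_context():
    """Return the calling process's security context, or None when the
    attribute is unavailable (non-Linux, or no LSM exposing it)."""
    try:
        with open("/proc/self/attr/current", "rb") as f:
            raw = f.read().rstrip(b"\x00\n")
        return raw.decode("utf-8") or None
    except OSError:
        return None

def request_exec_transition(target_context: str) -> bool:
    """Ask the kernel to switch to target_context at the next execve()
    (the setexec mechanism above). Returns False when the policy or
    the platform refuses the write."""
    try:
        with open("/proc/self/attr/exec", "wb") as f:
            f.write(target_context.encode("utf-8"))
        return True
    except OSError:
        return False
```

On a real system the write to /proc/self/attr/exec only takes effect if the policy grants the setexec permission plus the transition and entrypoint rules listed above.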

Other potential problems

  • In the case of memfd_create + exec(“/proc/self/fd/%d”), is the file_label the same as the “symlink” label? I assume that for normal /proc/self/fd/ entries the symlink would be followed, so that should be fine.
  • Can a ptraced process transition to another domain? Experiments tell me exec fails with EPERM in the case of type_transition, and a denial is logged because of a missing process ptrace permission from source to target. Would this work with dyntransition?

Resources:

  • https://selinuxproject.org/page/NB_Domain_and_Object_Transitions
  • https://selinuxproject.org/page/NB_ObjectClassesPermissions
  • https://wiki.gentoo.org/wiki/SELinux/Tutorials/How_does_a_process_get_into_a_certain_context

Securing Code Secrets – What is the relevance if the host gets compromised?

I’ve been researching and testing different approaches to securing code secrets, and am unsure what the best options are, and whether they have any relevance at all once a host gets compromised.

Some standard approaches I’ve read about for storing secrets are:

  • Compiled code
  • Environment variables on machine or through Docker
  • Files
  • Encrypted/decrypted through keys to a vault API/DB
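A minimal sketch of the first two options combined (environment variable with a per-secret file fallback, the convention Docker secrets uses under /run/secrets; the names and paths here are illustrative):

```python
import os
from pathlib import Path
from typing import Optional

def load_secret(name: str, file_dir: str = "/run/secrets") -> Optional[str]:
    """Look a secret up in the environment first, then as a file
    named after it (one file per secret, readable only by the app user)."""
    value = os.environ.get(name)
    if value:
        return value
    try:
        return (Path(file_dir) / name.lower()).read_text().strip()
    except OSError:
        return None
```

Note that neither location survives an attacker with admin access, which is exactly the question: at that point both the environment block and the file are readable.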

If a host gets compromised (admin access), secrets can be exposed via:

  • Decompiling code
  • Viewing env variables / files
  • Memory dumps
  • Viewing SSL traffic using private keys on host
  • Decompiling and modifying code to expose possible encryption/decryption keys and output secrets once fetched from a vault

Are there methods that will protect secrets once a host is compromised, or do they just make fetching secrets more complex, so that an intruder finds it more difficult to reach them?
If a host is secured and firewalled and admin access is tightly controlled, is there really any benefit to the added complexity of storing secrets elsewhere rather than on the host itself?

Securing Delphi application SSL traffic from decryption

I wrote a VCL app using Delphi 10.2. It has a simple activation setup: an encrypted key is stored in a Kinvey backend, and the key to decrypt the encrypted key is hidden in the source code. In order to establish HTTPS connections, the app needs Indy’s SSL files, libeay32.dll and ssleay32.dll, which ship in the same folder as the exe. My concern is: is it somehow possible to use the DLL files to extract the private key and decrypt the HTTPS traffic? And if attackers succeeded, would they be able to decrypt the encrypted key? Note: I’m talking about amateur crackers; my app is not published worldwide.
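One thing worth checking regardless of the DLL question: a key “hidden” in the source usually survives compilation as plain bytes in the exe, and a strings-style scan recovers it. A rough sketch in Python (the fake executable bytes and key are made up for illustration):

```python
import re

def printable_strings(blob: bytes, min_len: int = 6):
    """Rough equivalent of the Unix `strings` tool: find runs of
    printable ASCII at least min_len bytes long in a binary."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# A key embedded as a string literal in the source survives
# compilation as plain bytes inside the executable:
fake_exe = b"\x4d\x5a\x90\x00" + b"\x00" * 16 + b"MySecretAESKey123" + b"\xff" * 8
print(printable_strings(fake_exe))  # the "hidden" key shows up
```

This is why a hardcoded decryption key only deters someone who never opens the binary in a hex editor, amateur or not.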

Securing exec for executing shell commands

I have to execute my .c program (compiled with gcc), but I want to prevent other code not related to this from being executed at the same time.

<?php
define('program', './exec.out');
exec(program);
  1. Is there a way to make this function fully private, so it cannot be accessed by others?
  2. How do I sanitize its input?
  3. Is it safe to use a variable for this function’s argument, say $executable?
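On question 2, the usual advice is to never build a shell command string from user input; in PHP that means escapeshellarg() or, better, a fixed program path with arguments passed separately. The same idea sketched in Python for illustration (the program path and timeout are assumptions):

```python
import subprocess

ALLOWED_PROGRAM = "./exec.out"  # fixed path; never taken from user input

def run_program(user_arg: str, program: str = ALLOWED_PROGRAM) -> str:
    """Execute only the designated program. Arguments are passed as a
    list with shell=False (the default), so user_arg is never parsed
    by a shell and metacharacters like `;` arrive as literal text."""
    result = subprocess.run([program, user_arg],
                            capture_output=True, text=True, timeout=10)
    return result.stdout

# Demo with /bin/echo: the injection attempt comes back as plain text.
print(run_program("hello; rm -rf /", program="/bin/echo"))
```

The whitelist of allowed programs, rather than any escaping of a free-form command string, is what actually prevents unrelated code from running.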

Are the security basics of a non-wifi router different from securing your desktop?

I have studied a lot about securing a desktop, from enabling a firewall to browsing the internet safely, among other things. I also know that many steps can be taken to improve the security of wifi routers. But if I am using a non-wifi router, or a USB dongle with wifi turned off, are there any steps I can take to secure that router? Or is a non-wifi router secure?

I have read about webcams that are vulnerable and can be hacked, so what about routers? Can you give me an introduction? How can I find out whether my router has any vulnerabilities?

I am getting a message that this question appears subjective, so I will tell you that basically what I am asking is: how does router security work?

Securing DEV Environment

I have not been able to find an answer online, so I thought I would ask here. I have set up an Amazon Lightsail instance to act as a development platform for my website. I have set up SSH for accessing administrative tools such as phpMyAdmin. Would this be sufficient for securely accessing phpMyAdmin and uploading files, or should I be using SSL as well?

Why does CORS still secure an open API where all responses carry a wildcard (*)?

In case of an open API, the only possible value for Access-Control-Allow-Origin is a wildcard (*), since you can’t have a list of allowed domains.

Still, this seems not to bother developers and appears to keep the system secure. How is that possible? Isn’t allowing all domains to make every request the same as not having SOP or a CORS policy at all?

It might be that I don’t really get the security provided by CORS, but as I understood it, it prevents an unwanted domain from using a user’s session cookies without their consent. Still, I don’t see how it protects users from having their accounts used for unwanted purposes once a data-modifying route is open to every domain.
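A simplified model of the browser-side check (a sketch, not the full Fetch specification) shows why the wildcard is still safe for cookie-based sessions: browsers refuse to expose a response carrying Access-Control-Allow-Origin: * when the request was made with credentials.

```python
def browser_allows_read(allow_origin_header, request_origin, with_credentials):
    """Simplified model of the CORS check a browser applies to a
    cross-origin response before exposing it to the calling script."""
    if allow_origin_header is None:
        return False
    if allow_origin_header == "*":
        # The wildcard is rejected for credentialed requests, so an
        # attacker's page cannot ride a victim's session cookie.
        return not with_credentials
    return allow_origin_header == request_origin

assert browser_allows_read("*", "https://evil.example", with_credentials=False)
assert not browser_allows_read("*", "https://evil.example", with_credentials=True)
```

So the wildcard opens the API only to anonymous cross-origin reads, which anyone could perform with curl anyway; it never lets another origin act with a victim’s session attached.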

Securing multiple systems accessing the same data

I’m hitting a roadblock when it comes to security for managing scoped permissions for servers.

Right now I run a community which can create sub-servers. So community A can allow certain users to moderate it, change settings, invite users, read logs, etc. for their own sub-server, but not for others.

My current system has a global user, this user has permissions structured similar to this:

{
  "id": "their unique id",
  "username": "username",
  "globalRole": "user",
  "permissions": [
    {
      "resource": "guilds_id_here",
      "permissions": [
        {
          "resource": "guild.logs",
          "read": true,
          "write": false
        }
      ]
    }
  ]
}

A single user has access based on ‘resources’, and when they attempt to modify, read, or do anything through my API or socket, I check whether they have access to the resource they’re modifying.

This makes it pretty easy for me to manage permissions through the API: I intercept the request, grab the resource, check whether they’re permitted to perform the action they’re trying to do, such as reading a log or inviting a user, and reject the API call before it ever reaches the controller if they aren’t.

The main issue I’m now having is maintaining multiple means of access: I now have a REST API and a WebSocket which can reach the same kinds of data, depending on where the user is accessing the guild from.

So now the permission system has become significantly more complicated and isn’t as easy as intercepting a REST API request and blocking it; I now have two permission-checking systems, which feels wrong and breaks the DRY principle.

I’d like to learn whether there are any industry standards for multiple means of accessing data. Should I build a resource manager which always needs credentials and the target resource, and then have a system user for internal access? Or is there an easier standard for tight control over who can do what based on the permissions they have for a specific resource?

The end goal is to be able to grant permission to an object and properly filter out data that is authorised for the user requesting data.
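One common way to keep this DRY (a sketch against the permission document above; function names are assumptions) is a single transport-agnostic authorization function that both the REST middleware and the WebSocket handler delegate to:

```python
def has_permission(user: dict, guild_id: str, resource: str, action: str) -> bool:
    """Transport-agnostic check against the permission document above.
    Both the REST middleware and the WebSocket handler call this, so
    the rules live in exactly one place."""
    for scope in user.get("permissions", []):
        if scope["resource"] != guild_id:
            continue
        for perm in scope["permissions"]:
            if perm["resource"] == resource:
                return bool(perm.get(action, False))
    return False

user = {
    "id": "u1", "globalRole": "user",
    "permissions": [{
        "resource": "guild_123",
        "permissions": [{"resource": "guild.logs", "read": True, "write": False}],
    }],
}
assert has_permission(user, "guild_123", "guild.logs", "read")
assert not has_permission(user, "guild_123", "guild.logs", "write")
```

Each transport then stays a thin adapter: it extracts (user, guild, resource, action) from its own request shape and asks the same function, so there is only one permission-checking system to maintain.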

Securing Clipboard Memory

Sometimes sensitive data, such as (long) passwords and usernames, is copied between processes and external devices via the clipboard. People often don’t keep this in mind and forget to clear that type of memory (it is actually RAM), with the risk that another person extracts the information by reading it.

How can one secure this?
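Within your own process, the controllable part is how the secret is held: keep it in a mutable buffer and overwrite it after use. A minimal Python sketch (clearing the OS clipboard itself is platform-specific and not shown here):

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a sensitive buffer in place. This only works for
    mutable types like bytearray; str and bytes are immutable, and
    copies of them may linger in memory until garbage-collected."""
    for i in range(len(buf)):
        buf[i] = 0

password = bytearray(b"correct horse battery staple")
# ... use the password ...
wipe(password)
assert all(b == 0 for b in password)
```

This reduces the window during which a memory read exposes the secret, but it is best-effort: the OS clipboard, swap, and any copies the runtime made are outside the function’s reach.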