PKI Usage in Trusted Boot

I am trying to understand how PKI is used to boot an ARM board.

The following image relates to BL1:

[image: BL1 boot flow diagram]

The booting steps state:

[image: boot step description]

Also from:

[image]

The certificate used in step 1 appears to be a content certificate. The diagram suggests it contains the public key used to sign a hash, and the signed hash for BL2. Referring to the X.509 certificate format:

[image: X.509 certificate structure]

My question: based on the description above, is ARM not using the Subject Public Key Info field of X.509, and instead putting the public key used to verify the hash in an extension field and the signed hash in the digital signature field?

The diagram also indicates that the trusted key certificate contains 3 keys (ROTPK, TWpub, NWpub). Does that mean they put all 3 keys in extension fields, added the signed hash of (perhaps) TWpub + NWpub in the digital signature field, and again didn't use the Subject Public Key Info field (with the certificate later verified using the ROTPK from the extension field)?
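For reference, here is a minimal sketch of how the fields in question could be dumped from a certificate generated for this chain of trust, assuming the Python cryptography package and a DER-encoded certificate file (the file name below is a placeholder):

```python
# Sketch: inspect the fields of a content certificate.
# Assumes a DER-encoded certificate file; the file name is a placeholder.
from cryptography import x509

with open("bl2_content_cert.der", "rb") as f:
    cert = x509.load_der_x509_certificate(f.read())

# Whatever sits in the standard Subject Public Key Info field.
print("SubjectPublicKeyInfo key:", type(cert.public_key()).__name__)

# Image hashes and next-stage public keys are carried in custom v3
# extensions identified by Arm-defined OIDs, so listing the OIDs shows
# what extra payloads the certificate carries.
for ext in cert.extensions:
    print(ext.oid.dotted_string, "critical =", ext.critical)

# The signature over the TBSCertificate lives in the usual signature field.
print("signature algorithm:", cert.signature_algorithm_oid.dotted_string)
```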

Near-identical MySQL deployments behaving very differently – high CPU usage problem

So I have five identical websites, running on five machines provisioned in the same way. The only things that differ between these installations are the language files and the language of the text stored in the MySQL tables.

Four of them have no problems whatsoever. One is struggling a LOT under the same or somewhat less load than the other four.

I cannot understand why this is.

Things I’ve done so far:

  1. Checked slow queries. All queries use indexes and execute in the realm of 0.0008 s, i.e. very fast.
  2. I’ve noticed that the statements causing the most trouble for this MySQL instance are UPDATEs and INSERTs – so much so that I’ve turned off the UPDATEs that were there for this instance. Bear in mind that these UPDATEs don’t cause a blip on the other servers.
  3. Tried to eliminate external factors, e.g. noisy neighbours (moved hosts).

Worth noting is that the machines are deployed the same way, i.e. a vanilla Debian 10 installation with a LEMP stack; nothing out of the ordinary at all.

Still, the problem persists. I can see the load average of this machine struggling to stay under 1.00. The other machines are in the 0.10 – 0.20 range all the time.

Looking at CPU usage for the MySQL process on this machine (which has 2 CPU cores, as the other machines do), it is quite often above 100%. The other machines are never – EVER – over 60% for the MySQL process.
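In case it helps, here is a minimal sketch (assuming mysql-connector-python is installed; credentials are placeholders) of the kind of snapshot that could be taken on the struggling host and on a healthy one to compare write-related counters:

```python
# Sketch: snapshot a few write-related MySQL status counters so the struggling
# host can be compared against a healthy one. Connection details are placeholders.
import mysql.connector

COUNTERS = (
    "Com_insert", "Com_update", "Innodb_data_writes",
    "Innodb_row_lock_waits", "Innodb_buffer_pool_wait_free",
)

cnx = mysql.connector.connect(host="127.0.0.1", user="monitor", password="...")
cur = cnx.cursor()
cur.execute("SHOW GLOBAL STATUS")
status = dict(cur.fetchall())  # rows are (Variable_name, Value) pairs

for name in COUNTERS:
    print(name, status.get(name))

cur.close()
cnx.close()
```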

So, any help is much appreciated.

Please let me know if you need me to run a command and post its output in order to help.

Thanks.

EDIT: Spelling and clarifications

Resetting GSA SER while keeping projects because of high CPU and RAM usage

How do I reset GSA SER's memory usage while keeping only the general posting settings and the program itself?
My GSA SER has accumulated a lot of submitted and verified links and is now out of memory and using a lot of CPU. I would like to keep the projects but reduce the memory and CPU usage. I am currently running on a VPS with 5 cores and 150 GB of SSD.

Self-signed SSL certificate usage [duplicate]

My scenario is a closed system utilising a DB-driven website. It's not supposed to be open to the public, and the URL is hidden.

Currently I use Comodo certs for SSL, but since it's a closed system I'm wondering whether it makes sense to use a self-signed one. Is there any danger in this? I control all the end users' computers, so I could easily install the cert in their browsers.

This is very different from the question marked as a dupe; it's not a LAN environment.
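For reference, a minimal sketch of generating such a self-signed certificate with the Python cryptography package (the hostname and validity period are placeholders):

```python
# Sketch: generate a self-signed certificate for an internal site.
# The hostname and validity period are placeholders.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "internal.example")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("internal.example")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
# The resulting cert would then be exported and installed in the users' browsers.
```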

ntlmrelayx PowerShell usage

I would like to know about your experience regarding the use of ntlmrelayx.

When using ntlmrelayx.py -tf targets.txt -c "Powershell command", does this PowerShell command need to be an Empire launcher, or can it be any PowerShell command, such as a PowerShell one-liner?

Thanks

PostgreSQL: Refreshing a materialized view fails with “No space left on device” and a huge spike in disk usage

I have a materialized view that refreshes every five minutes. The SQL aggregates data across many tables with over 800k rows in each.

However, when using "REFRESH MATERIALIZED VIEW CONCURRENTLY tableName", the query runs for about an hour and then complains: ERROR: could not write block 39760692 of temporary file: No space left on device

It should be noted that this 39760692 changes every time I execute the query.

The disk size is about 960 GB and the database size is about 30 GB, so the disk has about 930 GB of free space.

I noticed that when running the refresh query, disk usage spikes by about 12 GB per minute until the query finally errors out with the no-space error when it hits the 960 GB mark. Immediately afterwards, disk usage drops back to 30 GB.

I even tried REFRESH MATERIALIZED VIEW tableName (without CONCURRENTLY) and saw the same behavior.

I’m not sure what can be done here to diagnose the problem.
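As a starting point, here is a minimal sketch (assuming psycopg2; the connection string is a placeholder) of sampling PostgreSQL's cumulative temporary-file counters before and after a refresh, to confirm where the space is going:

```python
# Sketch: sample PostgreSQL's cumulative temporary-file counters for the
# current database. Run before and after the refresh and compare the values.
# The connection string is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT datname, temp_files, pg_size_pretty(temp_bytes)
        FROM pg_stat_database
        WHERE datname = current_database();
        """
    )
    print(cur.fetchone())
conn.close()
```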

Who (Designer or User) Should Be Responsible for the Correct/Secure Usage of a Tool Intended for Developers/Admins?

There is a healthy debate around a series of Stack Overflow posts that refer to the "RunAs" command. Specifically, the discussion is in reference to a design decision that the folks at Microsoft made a long time ago: requiring users of this command to enter the user's password in one specific way. Raymond Chen summarizes one side of the argument quite clearly:

The RunAs program demands that you type the password manually. Why doesn’t it accept a password on the command line?

This was a conscious decision. If it were possible to pass the password on the command line, people would start embedding passwords into batch files and logon scripts, which is laughably insecure.

In other words, the feature is missing to remove the temptation to use the feature insecurely.

If this offends you and you want to be insecure and pass the password on the command line anyway (for everyone to see in the command window title bar), you can write your own program that calls the CreateProcessWithLogonW function.

I’m doing exactly what is being suggested in the last line of Raymond’s comment: implementing my own (C#) version of this application that completely circumvents this restriction. Many others have done this as well. I find this all quite irritating and agree with the sentiment expressed by @AndrejaDjokovic, who states:

Which is completely self-defeating. It is really tiresome that the idea of "security" is invoked by software designers who are trying to be smarter than the user. If the user wants to embed the password, then that is their prerogative. Instead, all of us coming across this link are going to go and search for other ways to utilize a SUDO equivalent in Windows through other unsavory means, bending the rules and wasting time. Instead of having one batch file vulnerable, I am going to end up reducing the overall security of the machine to get "sudo" to work. Design should never be smarter than the user. You fail!

Now, while I agree with the sentiment expressed by Microsoft and their concern with "embedding passwords into batch files" (I have personally seen this poor practice way too many times), what Microsoft has done here really does strike me as wrong. In my specific example I'm still following best practices and my script won't store credentials; however, I'm forced to resort to a workaround like everybody else.

This decision really follows a common pattern at Microsoft of applications acting in ways that are contrary to the needs of their specific users, with the intention of "helping" the users by preventing them from completing an action that is viewed as unfavorable, and then obfuscating or purposely making the implementation of workarounds more difficult.

This leads us to a broader question, extremely relevant to this issue: who is truly responsible when it comes to security around credentials, the user of the software or the designer of the software? Obviously both parties hold some responsibility, but where is the dividing line?

When you create tools for other developers, should you seek, to the best of your ability, to prevent them from using your application in an insecure manner, or do you only need to be concerned about whether the application itself is secure internally (regardless of how the user invokes it)? If you are concerned about "how" they are using your application, to what extent do you need to validate their usage (for example: should "RunAs" fail if the system is not fully up to date, i.e. insecure in another way)? If that example seems far-fetched, then define that line. In the case of "RunAs" the intention is quite clear: the developers who created it are not only concerned about managing credentials securely within their application, but also care deeply about the security implications of how you use it. Was their decision to validate usage correct in this case, and if so (or not), where should that dividing line be for applications created in the future?

Restricting website usage for Google Maps API doesn’t prevent it from being used in the browser?

Say I’ve restricted my Google Maps API key to the website abc.com/*. This would mean that no other website domains could use my API key to make requests to maps.googleapis.com.

However, using the API key through the browser URL bar to make requests to maps.googleapis.com still works fine. Calls made through Postman also work.

What’s the explanation for this and is there an elegant way to prevent this?

Btw, I’m using the Maps Static & JavaScript APIs. From my understanding, both are client-side Maps APIs and are called from the browser?
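To illustrate what I mean, here is a minimal sketch (Python requests; coordinates and key are placeholders) of calling the Static Maps endpoint outside the browser, once without a Referer header and once with one set to the allowed site:

```python
# Sketch: call the Static Maps endpoint outside the browser, once without a
# Referer header (as Postman or the address bar would) and once with a spoofed
# one. Coordinates and the API key are placeholders.
import requests

URL = "https://maps.googleapis.com/maps/api/staticmap"
params = {"center": "52.5,13.4", "zoom": "12", "size": "400x400", "key": "YOUR_API_KEY"}

r_plain = requests.get(URL, params=params)
r_spoofed = requests.get(URL, params=params, headers={"Referer": "https://abc.com/"})

# A referer-based restriction can only check this header, and a non-browser
# client is free to set or omit it.
print("no referer:", r_plain.status_code)
print("spoofed referer:", r_spoofed.status_code)
```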