At just 5 projects with 140 threads, CPU is at max; it's hitting 100% even at 120 threads. Earlier I was running 15-20 projects at 170 threads minimum.
This has been going on for the last 2 months on most versions of SER, except one version that worked fine, but I don't remember which one it was.
For example, let's say I have a SaaS application that helps users create Telegram bots. On the server, we process the bots and their APIs. Now, suppose some users try to use the application for bad purposes, like spamming, scamming, or illegal activity. Are we responsible for those users' usage? What kind of workarounds or disclaimers can we put in place to avoid this trouble?
Glad to be here, first post haha
Looking for ways to reduce data used by SB’s YellowPages scraper plugin.
The problem is that I use a proxy provider that charges per GB, and the scraper is blazing through data.
To get 50k leads I am probably paying about $50 USD.
PS: Any way to enable captcha solving for YellowPages scraper?
I am trying to understand how PKI is used to boot an ARM board.
The following image relates to BL1:
The booting steps state:
The certificate used in step 1 appears to be a content certificate. The diagram suggests it contains the public key used to sign a hash, plus the signed hash for BL2. Referring to the X.509 certificate format:
My question is: based on the description above, is ARM not using the Subject Public Key Info field of X.509, and is it instead putting the public key used to verify the hash in an extension field and the signed hash in the signature field?
The diagram also indicates that the trusted key certificate contains 3 keys (ROTPK, TWpub, NWpub). Does that mean all 3 keys are put in extension fields, with the signed hash of (perhaps) TWpub + NWpub in the signature field, again without using the Subject Public Key Info field (the certificate later being verified with the ROTPK from the extension field)?
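If it helps to experiment with the layout the question describes, OpenSSL can build and then dump a certificate that carries a hash in an arbitrary extension. A toy sketch, assuming OpenSSL 1.1.1 or later; the OID 1.2.3.4.5 and the hash-of-/dev/null payload are placeholders, not ARM's actual TBBR OIDs:

```shell
# Self-signed demo cert with a SHA-256 hash stuffed into a custom
# extension (OID 1.2.3.4.5 is made up for illustration).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out content.crt -subj '/CN=demo' \
  -addext "1.2.3.4.5=ASN1:FORMAT:HEX,OCT:$(sha256sum /dev/null | cut -d' ' -f1)"
# Dump it: the custom extension and the Subject Public Key Info field
# are both visible, so you can see exactly where each datum lives.
openssl x509 -in content.crt -noout -text
```

Running the same `openssl x509 -noout -text` dump against a real TF-A-generated certificate would show which fields ARM actually populates.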
So I have five identical websites, running on five machines provisioned in the same way. The only thing that differs between these installations are the language files and the languages of the text stored in MySQL tables.
Four of them have no problems whatsoever. One is struggling a LOT under the same or somewhat lower load than the other four.
I cannot understand why this is.
Things I’ve done so far:
- Checked slow queries. All queries use indexes and execute in around 0.0008 sec, i.e. very fast
- I've noticed that what causes the most trouble for this MySQL instance is UPDATE and INSERT, so much so that I've turned off the UPDATEs that were there for this instance. Bear in mind that these UPDATEs don't cause a blip on the other servers.
- Tried to eliminate external factors i.e. noisy neighbours (moved host) etc.
Worth noting is that the machines are deployed the same way, i.e. a vanilla Debian 10 installation with a LEMP stack, nothing out of the ordinary at all.
Still, the problem persists. I can see the load of the machine struggling to keep under 1.00. The other machines are in the 0.10 – 0.20 range all the time.
Looking at CPU for the MySQL process on this machine (with 2 CPU cores as the other machines have as well) it is quite often above 100%. The other machines are never – EVER – over 60% for the MySQL process.
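One way to narrow down an asymmetry like this is to dump the server variables on a healthy host and on the struggling one and diff them; a single differing setting (buffer pool size is a common culprit) can explain lopsided load. A hypothetical sketch — the real dump command is shown commented out, the `printf` lines simulate its output, and the values are made up:

```shell
# On each host you would run something like:
#   mysql -NBe "SHOW GLOBAL VARIABLES" > vars-$(hostname).txt
# The two lines below stand in for those dumps:
printf 'innodb_buffer_pool_size\t134217728\nmax_connections\t151\n' > vars-good.txt
printf 'innodb_buffer_pool_size\t8388608\nmax_connections\t151\n'  > vars-bad.txt
# Any lines diff prints are settings that differ between the hosts:
diff vars-good.txt vars-bad.txt || true
```

The same diff trick works on `SHOW GLOBAL STATUS` snapshots taken a minute apart, which would show whether the busy host is doing far more row operations than its siblings.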
So, any help is much appreciated.
Please do let me know if you need me to run a command that you need to see the output from in order to help.
EDIT Spelling and clarifications
How can I reset GSA SER's memory usage while keeping only the general posting settings and the projects?
My GSA SER has done a lot of submitting and verifying and is now out of memory, with a lot of CPU usage. I would like to keep the projects but reduce the memory and CPU usage. I am currently running on a VPS with 5 cores and 150 GB of SSD.
Suppose I have a list of thousands of ip addresses to block. Right now I know how to iterate through the list and for each one run:
iptables -A INPUT -s XX.XX.XX.XX -j DROP
But this means spawning thousands of iptables processes (and building a chain with thousands of rules)!
How can I do this more efficiently?
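Since each `iptables` invocation forks a new process and appends one more rule to a linearly-scanned chain, the usual answer is ipset: one kernel set holding all the addresses, loaded in a single `ipset restore` call, and matched by a single iptables rule. A minimal sketch — the set name `blocklist`, the file `ips.txt`, and the two sample addresses are placeholders:

```shell
# Stand-in for the real list of thousands of addresses, one per line:
printf '%s\n' 203.0.113.1 198.51.100.2 > ips.txt
# Build an ipset "restore" script: a create line plus one add line per IP.
{ echo 'create blocklist hash:ip'
  awk '{print "add blocklist", $1}' ips.txt
} > blocklist.restore
# Then, as root (not run here): load the whole set in ONE process and
# reference it from a single rule.
#   ipset restore < blocklist.restore
#   iptables -I INPUT -m set --match-set blocklist src -j DROP
```

If ipset isn't available, `iptables-restore` can batch-load thousands of plain DROP rules in one process too, but the set-based match also keeps per-packet filtering cost roughly constant as the list grows.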
My scenario is a closed system using a DB-driven website. It's not supposed to be open to the public, and the URL is hidden.
Currently I use Comodo certs for SSL, but since it's a closed system I'm wondering whether it makes sense to use a self-signed one. Is there any danger in this? I control all the end users' computers, so I could easily install the cert in their browsers.
This is very different from the question marked as a dupe, it’s not a LAN environment.
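For reference, generating a self-signed cert for a setup like this is a one-liner, assuming OpenSSL 1.1.1 or later; the hostname `internal.example` and the filenames are placeholders:

```shell
# Self-signed cert + key, valid for one year, with a SAN so modern
# browsers will accept it once it is installed as trusted.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj '/CN=internal.example' \
  -addext 'subjectAltName=DNS:internal.example'
# A client that trusts exactly this cert can verify it against itself:
openssl verify -CAfile server.crt server.crt
```

The operational cost moves from paying a CA to distributing and rotating the cert on every controlled client yourself.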
I would like to hear about your experience with ntlmrelayx.
When using ntlmrelayx.py -tf targets.txt -c "PowerShell command", does this PowerShell command need to be an Empire launcher, or can it be any PowerShell command, like a one-liner?
I have a materialized view that refreshes every five minutes. The SQL aggregates the data among many tables with over 800k rows in each.
However, when using "REFRESH MATERIALIZED VIEW CONCURRENTLY tableName", the query runs for about an hour and then complains:
ERROR: could not write block 39760692 of temporary file: No space left on device
It should be noted that this 39760692 changes every time I execute the query.
The disk size is about 960 GB and the database size is about 30 GB. So the disk has a free space of about 930 GB.
I noticed that when running the refresh query there is a huge spike in disk usage of about 12 GB per minute, and the query finally errors out with the no-space error when it hits the 960 GB mark. Immediately afterwards, disk usage drops back to 30 GB.
I even tried REFRESH MATERIALIZED VIEW tableName (without CONCURRENTLY) and I'm seeing the same behavior.
I’m not sure what can be done here to diagnose the problem.
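The spill-then-fail pattern suggests the refresh is writing enormous temporary files (sorts or hashes spilling to disk, possibly from a runaway join). Two postgresql.conf knobs can make that visible and bounded — the values below are illustrative examples, not recommendations:

```
log_temp_files = 10240      # log every temporary file larger than 10 MB
temp_file_limit = 100GB     # abort the query before it fills the disk
```

With those set, the server log shows which statement creates the temp files, and `pg_stat_database.temp_bytes` shows cumulative spill per database; `temp_tablespaces` can also point temporary files at a larger volume while you investigate.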