Best Practices to Use Identified URLs from GSA PI for SER

Hi 

I am scraping my own targets with ScrapeBox 24/7.
After I scrape 5–10 GB of data, I trim to root, de-duplicate, and send the resulting list to GSA PI.

GSA PI feeds the identified list to my GSA SER instance, which verifies the list and uses it for link building.

Now my questions are:

*Question 1*
Should I keep adding my new targets to the existing identified list, or should I occasionally wipe the identified list and start over with a fresh one from GSA PI?

I ask because, over time, all the targets in the identified list have already been tried by GSA SER and there is nothing left in it to verify.

If yes, how often? Or tell me how big your identified list gets before you wipe it.

I am running GSA SER on a dedicated server with 2000 threads and getting 100+ LPM, so that should give you an idea of when I need to wipe it.

*Question 2*
I am using the SB Link Extractor to scrape the initial targets, but I believe I am getting FEWER unique targets than I should.

How can I increase it?

Thanks in Advance 

Which practices should I follow when generating SMS codes for auth in my project?

Let’s imagine that we have SMS verification for auth, using random 4-digit codes, e.g. 1234, 5925, 1342, etc.

I’m using this inclusive random-integer algorithm on my Node.js server:

function getRandomIntInclusive(min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  // The maximum is inclusive and the minimum is inclusive
  return Math.floor(Math.random() * (max - min + 1) + min);
}

const another_one_code = getRandomIntInclusive(1000, 9999);

taken from https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random

So I have a range from 1000 to 9999, and I have some questions about security:

  1. Am I using a good algorithm? Maybe I need to use something better?
  2. Will it increase security if I check the codes sent in the last {$ n} minutes in the DB and regenerate the code if it matches one of them (to avoid sending the same code twice), so a user always gets a random sequence like 5941-2862-1873-3855-2987 and never 1023-1023-2525-2525-3733-3733? I understand that the chance is low, but anyway…

Thank you for answers!

B2B authentication best practices

I’m in the process of developing a B2B (business-to-business) application. I’ve implemented JWT auth, and it is working as expected. Right now the authentication functions as if it were a B2C (business-to-customer) app.

I’m trying to determine the best practices for B2B authentication.

Is having one authentication account bad practice in a B2B app? For example, every employee at Company A would use the same set of login credentials.

API open endpoint best practices

I am currently developing an API for my front-end React application. All my routes (besides the two I’ll mention below) are secured with JWTs. A token is generated once a user logs in and is then used for the remainder of the session. The app-to-API connection will be over HTTPS, so that should hinder MITM attacks.

The two endpoints (which you have probably guessed) are the login and register endpoints. I have come across this question that suggests using HMAC. If I understand it correctly, the front end creates a hash (using a shared secret) of the request body and sends it with the request; once the request arrives, the API generates a hash (with the same shared secret) based on the request and compares the two hash values. If they don’t match, the request was tampered with or is fraudulent.

So that obviously verifies the integrity of the requests. The other problem is that anyone can still spam the hell out of the endpoint and effectively DoS/DDoS it. Even though the requests are fraudulent, the API will still attempt to verify each one by calculating the hash, which takes compute power. So if I am getting a lot of requests very quickly, it will drag my API down.

Would it be right to say that I need to rate-limit the endpoint based on the request IP address, say, limiting calls to 10 per hour from a specific IP? I would appreciate any feedback on how to stop the spamming of these endpoints.
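As a rough illustration of that idea, here is a fixed-window in-memory limiter keyed by IP, using the 10-per-hour numbers from the question (a production setup would more likely use a maintained middleware such as express-rate-limit or a Redis-backed counter):

```javascript
// Fixed-window rate limiter: at most `limit` calls per `windowMs` per key (e.g. an IP).
const hits = new Map(); // key -> { count, windowStart }

function allowRequest(ip, limit = 10, windowMs = 60 * 60 * 1000) {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= windowMs) {
    // New window for this key: reset the counter.
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}

// The first 10 requests in an hour pass; the 11th is rejected.
for (let i = 0; i < 10; i++) allowRequest('203.0.113.5');
console.log(allowRequest('203.0.113.5')); // false
```

Checking the limit before computing the HMAC means fraudulent floods are rejected with a cheap map lookup instead of a hash computation.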

PostgreSQL WAL Archival Best Practices?

The PostgreSQL documentation gives an example archive_command of

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  # Unix 

but adds a disclaimer saying

This is an example, not a recommendation

As someone who’s about to set up a PostgreSQL database, I’m wondering what the best practice is for handling WAL archival. Forgive me if this is a question that’s been beaten to death, but my Stack Exchange search-fu is failing me. There *are* a few recommendations for using pgBarman. Is it still a good direction to go?
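For illustration, if you do go the Barman route, it ships a barman-wal-archive helper that can replace the cp-based command from the documentation — a sketch only, where the host name `backup` and the server name `pg` are placeholders for your own Barman configuration:

```ini
# postgresql.conf on the database host (assumes Barman's client utilities
# are installed; "backup" is the Barman host, "pg" the server name in Barman)
archive_mode = on
archive_command = 'barman-wal-archive backup pg %p'
```

Unlike the plain cp example, the helper hands the WAL file to the Barman server over the network rather than relying on a mounted directory.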

I’m coming from a setup where MSSQL backups were handled by IT, with a full backup taken daily in the morning and hourly transaction logs. What would be an equivalent setup in PostgreSQL to allow point-in-time recovery for the past week or two?

Mind-maps conventions, best practices and approaches

I encountered this issue when prepping for a big arc in my campaign; let me explain.

The players are entering a place where there are three main factions (A, B, C) and possibly minor ones (D, E). The factions have interconnections with each other (X1: if the dragon appears, A attacks B; X2: C tries to steal from A’s vault; etc.). On top of this, the actions of the characters influence what each faction does with the characters and with the other factions (X3: if the characters contact B first, A knows and resents them; etc.).

Now, I wrote this down in prose, with a bit of structure, but there’s a good chance of missing things when the list of X1, X2, … grows too large. For example, something in X1 may be influenced by something in X10 that is way down the prep page.

Question:

What are better practices for dealing with this kind of mind map? (I’m not really asking what has worked for people, but rather whether there are resources that explain how to make these mind maps and why to make them in certain ways.)

I can envision having arrows between Xi and Xj when Xj refers to Xi, but there may also be different kinds of relationships between different Xi and Xj (will differently coloured arrows suffice?). So I don’t know whether text plus arrows is the best or clearest approach.

Perhaps this is a known concern when writing story prep, and there are some guidelines?

Best practices for storing long-term access credentials locally in a desktop application?

I’m wondering how applications like Skype and Dropbox store access credentials securely on a user’s computer. I imagine the flow for doing this would look something like this:

  1. Prompt the user for a username/password if it’s the first time
  2. Acquire an access token using the user provided credentials
  3. Encrypt the token using a key which is really just a complex combination of static parameters that the desktop application can generate deterministically. For example, something like:
value = encrypt(data=token, key=[os_version]+[machine_uuid]+[username]+...) 
  4. Store value in the keychain on OSX or Credential Manager on Windows.
  5. Decrypt the token when the application needs it by regenerating the key

So two questions:

  1. Is what I described remotely close to what a typical desktop application that needs to store user access tokens long term does?
  2. How can a scheme like this be secure? Presumably, any combination of parameters we use to generate the key can also be generated by a piece of malware on the user’s computer. Do most applications just try to make this key as hard to generate as possible and keep their fingers crossed that no one guesses how it is generated?

Local variables in sums and tables – best practices?

I stumbled on Local variables when defining function in Mathematica on math.SE and decided to ask it here. Apologies if it is a duplicate; the only really relevant question with a detailed answer I could find here is How to avoid nested With[]?, but I find it somewhat too technical, and not really the same in essence.

Briefly, things like f[n_]:=Sum[Binomial[n,k],{k,0,n}] are very dangerous, since you never know when you will use a symbolic k: say, f[k-1] evaluates to 0. This was actually a big surprise to me: for some reason I thought that summation variables and the dummy variables in constructs like Table were automatically localized!

As discussed in the answers there, it is not entirely clear what to use here: Module is completely OK but would share variables across stack frames; Block does not solve the problem. There were also suggestions to use Unique or formal symbols.

What is the optimal solution? Is there an option to automatically localize dummy variables somehow?