How to generate random waves for a bullet hell game that feel balanced and natural

My game consists of ‘waves’ of objects called ‘spawners’, which, once every certain amount of time (their firetime), move to a new place on the screen and spawn an enemy. Each wave has 4 important properties:
1: the number of spawners the wave has
2: the interval of time between creating new spawners
3: the total length of the wave
4: the types of spawners that can appear in the wave (represented as a std::vector<std::pair<std::string, int>>, where the string is the spawner name and the integer is its spawn weight)

The game works by picking a random spawner from the possible spawner types (with a weighted RNG) every new-spawner interval. Currently, waves are predefined and loaded from a file at runtime.
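
To illustrate the mechanism, the weighted pick is essentially the following (a Python sketch with made-up spawner names; in the actual game it operates on the std::vector<std::pair<std::string, int>> described above, e.g. via std::discrete_distribution):

import random

# Hypothetical wave data: (spawner name, spawn weight) pairs, mirroring
# the std::vector<std::pair<std::string, int>> described above.
spawner_types = [("shooter", 5), ("bomber", 2), ("sniper", 1)]

names = [name for name, _ in spawner_types]
weights = [weight for _, weight in spawner_types]

# Pick one spawner type with probability proportional to its weight.
chosen = random.choices(names, weights=weights, k=1)[0]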

My problem is that I cannot find a good way to randomly generate waves that feel balanced and natural. Currently, I am trying to generate waves based on a difficulty value, mostly using weighted random number generation. However, this does not produce balanced waves that correspond well to the target difficulty. Even after trying several different techniques, I am unable to get a system that generates waves that feel balanced and natural (like the hand-made ones).

Is there any way to generate waves that feel natural, based on the difficulty value? How should I approach this problem?

Also, if it's of any help, each spawner also defines its own difficulty value.
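
For reference, the kind of difficulty-based generator I have been experimenting with looks roughly like this (a simplified sketch in Python; the names, weights, and per-spawner difficulty costs are all made up):

import random

# Hypothetical spawner table: name -> (spawn weight, difficulty cost).
SPAWNERS = {"shooter": (5, 1.0), "bomber": (2, 2.5), "sniper": (1, 4.0)}

def generate_wave(target_difficulty):
    """Spend the target difficulty as a budget on weighted-random picks."""
    wave = []
    budget = target_difficulty
    while budget > 0:
        # Only consider spawners that still fit in the remaining budget.
        affordable = [n for n, (_, cost) in SPAWNERS.items() if cost <= budget]
        if not affordable:
            break
        weights = [SPAWNERS[n][0] for n in affordable]
        name = random.choices(affordable, weights=weights, k=1)[0]
        wave.append(name)
        budget -= SPAWNERS[name][1]
    return wave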

Comparison of feature importance values in logistic regression and random forest in scikit-learn [closed]

I am trying to rank the features for binary classification based on their importance, using an ensemble method that combines the feature importances estimated by a random forest and a logistic regression. I know that logistic regression coefficients and random forest feature_importances_ are different kinds of values, and I'm looking for a method to make them comparable. Here is what I have in mind:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X = features
y = labels

rf = RandomForestClassifier()
rf.fit(X, y)
# Normalize the feature importances so that they sum to 1
RFfitIMP = rf.feature_importances_ / rf.feature_importances_.sum()

lr = LogisticRegression()
lr.fit(X, y)
# Take absolute values and normalize the coefficients so that they sum to 1
# (for binary classification lr.coef_ has shape (1, n_features), hence ravel())
lrfitIMP = np.absolute(lr.coef_.ravel()) / np.absolute(lr.coef_).sum()

# Average the two normalized importance vectors
ensembleFitIMP = np.mean([RFfitIMP, lrfitIMP], axis=0)

What I think the code does is take the relative importances from both models, normalize them, and return the importance of the features averaged over the two models. I was wondering whether or not this is a correct approach for this purpose?
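
For instance, I would then turn the averaged scores into a ranking like this (a small sketch; feature_names is a hypothetical list of column names):

import numpy as np

# Indices of the features, sorted from most to least important.
ranking = np.argsort(ensembleFitIMP)[::-1]
for idx in ranking:
    print(feature_names[idx], ensembleFitIMP[idx])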

Diameter of a random graph

I’m considering the standard Erdős–Rényi $G(n,p)$ model, where we have $n$ nodes and each possible edge is sampled independently with probability $p = \frac{1}{n^\epsilon}$.

It is relatively straightforward to show that, starting from any node $u$, the expected number of hops to reach every other node is $1/\epsilon$. However, this does not say much about the probability of having a diameter of $1/\epsilon$. (I could apply Markov’s inequality, but that gives a rather weak bound.)
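
(For context, the heuristic behind the first claim: each node has about $np = n^{1-\epsilon}$ neighbors, so after $k$ hops roughly $(np)^k = n^{k(1-\epsilon)}$ nodes are reachable, which covers all $n$ nodes once $k$ exceeds a constant depending only on $\epsilon$. The difficulty is turning this expectation argument into a high-probability bound on the diameter.)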

Thus I’m looking for a reference to a concentration result stating that the diameter of such a random graph is $O(1/\epsilon)$ with probability at least $1 - o(1)$.

Establish a symmetric key: KDF based on shared secret and random salt or key wrapping?

I am designing a basic KMS based on a simple HSM. I only have access to AES-256, SHA-256, PBKDF2, and HMAC (and combinations like AES256-HMAC-SHA256). The admin and the users of the system each have a personal HSM where the keys are stored, and it works like this:

  1. The administrator generates a key inside his HSM with PBKDF2 (random salt and random seed)
  2. The administrator's HSM encrypts the new key using AES-256 with a different symmetric key for each user (the key used for key wrapping was established during the physical initialization of the user's HSM) and sends it to every user that needs it, along with the key's metadata. The whole payload (encrypted key value + key metadata) is encrypted a second time with AES-256, using another key that is unique to each user (see the sketch after this list).
  3. The payload reaches the user who, thanks to the two symmetric keys previously shared with the admin (during the HSM physical initialization), is able to retrieve the requested key and metadata.
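
To make this concrete, the double encryption in step 2 looks roughly like the following (a Python sketch using the cryptography package's AES-GCM as a stand-in for whatever AES-256 mode the HSM actually exposes; all key names and the metadata are made up):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical per-user keys, established at HSM initialization.
wrap_key = AESGCM.generate_key(bit_length=256)       # inner key-wrapping key
transport_key = AESGCM.generate_key(bit_length=256)  # outer payload key

new_key = os.urandom(32)  # the key generated inside the admin's HSM
metadata = b'{"key_id": 1, "usage": "data-encryption"}'

# First encryption: wrap the new key for this specific user.
n1 = os.urandom(12)
wrapped = n1 + AESGCM(wrap_key).encrypt(n1, new_key, None)

# Second encryption: encrypt the whole payload (wrapped key + metadata).
n2 = os.urandom(12)
payload = n2 + AESGCM(transport_key).encrypt(n2, wrapped + metadata, None)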

I was thinking about another possible approach that could be better but I am not really sure about it:

  1. The administrator establishes a shared secret common to every user of the system. This secret is stored in every HSM belonging to the users or to the administrator.
  2. When a key must be generated, the administrator computes it with PBKDF2 using the common secret and a random salt.
  3. When a key must be sent to a user, only the salt that was used by the administrator is actually sent. The salt may be encrypted with a pre-shared symmetric key (as in the example above), and each user uses it, together with the shared secret, to regenerate the key (see the sketch below).
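
A minimal sketch of the derivation in steps 2-3 (PBKDF2-HMAC-SHA256, which matches the primitives listed above; the iteration count is arbitrary):

import hashlib
import os

shared_secret = os.urandom(32)  # provisioned into every HSM at initialization
salt = os.urandom(16)           # generated per key by the administrator

# Both the administrator and the user derive the same 256-bit key
# from the shared secret and the (transmitted) salt.
key = hashlib.pbkdf2_hmac("sha256", shared_secret, salt, 100_000, dklen=32)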

The first approach has the following problems: I need to send the actual key value; I have to perform two encryptions; and the HSM must offer an API to retrieve the actual value of a key from its internal flash memory (as cleartext or ciphertext, depending on the caller's choice; the API can be called only if the administrator is logged in to the HSM, and it can't be called if a user is logged in).

The second approach has the following problems: the secret is common to all users, so if an attacker finds the secret of a single user, he finds everyone's secret. Also, the HSM must offer an API to retrieve the secret as cleartext from its internal flash memory, because the secret must be the same for every user, even for users that are added to the system weeks or months later (again, this API is callable only if the administrator is logged in to the HSM).

I suppose that the second approach could, in principle, be better, because the keys are never actually sent from the administrator to the users. But the secret common to everybody is a problem. Moreover, I imagine that if an attacker finds out the value of a random salt, he may simply try to compute all possible keys for that salt using PBKDF2 and all possible seeds (the implementation is open source, so he knows that the secret is 32 bytes long and he also has access to the PBKDF2 code).

In conclusion, I think that in the real world the first approach is more secure, provided that administrator login to the HSM is protected by a very complex PIN and possibly by a second factor (e.g. a fingerprint). Do you agree? Any thoughts about other vulnerabilities in my approach?

What is the correct way of grabbing a RANDOM record from a PostgreSQL table that is neither painfully slow nor non-random?

I always used to do:

SELECT column FROM table ORDER BY random() LIMIT 1; 

For large tables, this was unbearably, impossibly slow, to the point of being useless in practice. That’s why I started hunting for more efficient methods. People recommended:

SELECT column FROM table TABLESAMPLE BERNOULLI(1) LIMIT 1; 

While fast, it provides worthless randomness: it appears to always pick the same records, so this is also useless.

I’ve also tried:

SELECT column FROM table TABLESAMPLE BERNOULLI(100) LIMIT 1; 

It gives even worse randomness: it picks the same few records every time. I need actual randomness.

Why is it apparently so difficult to just pick a random record? Why does the first query have to grab every record and then sort them all? And why do the TABLESAMPLE versions keep grabbing the same records, with no apparent randomness at all? Who would ever want to use this BERNOULLI sampling when it just picks the same few records over and over? I can't believe that, after all these years, I'm still asking how to grab a random record... it's one of the most basic possible queries.

What is the actual command to use for grabbing a random record from a table in PG which isn’t so slow that it takes several full seconds for a decent-sized table?

How does the random reading part of the divination spell work?

The last section of the divination spell reads:

If you cast the spell two or more times before finishing your next long rest, there is a cumulative 25 percent chance for each casting after the first that you get a random reading. The GM makes this roll in secret.

I’m uncertain what this means, specifically what the “random reading” part means.

Random disappearance of footer widgets

The footer widgets disappear on their own after a few days, even when the website is untouched. This has happened 4-5 times before, and I don't see any widget events in the security audit log, nor any sign of a malware infection. What could be the cause?

There is no solution from the premium theme's developers; they have just given the usual "Added to bug list" reply.

Normal footer: [screenshot]

Footer widgets disappeared: [screenshot]

I have a development site where the issue can be checked.

The latest security check: [screenshot]