Shoot 'em up! Vertical or horizontal "side scrolling"?

This question is about "shoot 'em up" style games (for example, Tyrian), also known as top-down shooters.

In the past, these games mostly used the top-down concept: the player object moves left and right, while enemies appear at the top of the screen and move downward. This concept was probably chosen because of the monitor aspect ratios common at the time, when 4:3 was the most widespread.

This concept is also very popular on mobile phones, because staying in portrait mode while playing is often more user-friendly than switching to landscape.

Very few titles have flipped the concept from top-down to right-to-left, where the player object sits on the left side of the screen and moves up and down, while enemies move from right to left. There are also almost no such titles on desktop, where current aspect ratios are mostly 16:9. Even when a new "modern" shoot 'em up is released, it usually picks the top-down concept.

My question is: is there any reason for that when the target platform is desktop?

I think that for current 16:9 monitor aspect ratios, the right-to-left concept fits the gameplay and user experience better. Or am I missing something?

How do I make my character attack on the side of the character that I click the LMB on (Unity)?

This is a top-down game. I should also mention that the character is not centred on the screen, and the attack should be in four different directions. The attack's direction should be based on which side of the character (up, down, left, right) you click on. This should also work anywhere on the screen; the only deciding factors should be the character and mouse positions. I really have no clue where to start on this problem.
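One common approach: convert the mouse position into the same space as the character (in Unity, something like `Camera.main.ScreenToWorldPoint(Input.mousePosition)` compared against `transform.position`), take the offset, and let whichever axis has the larger absolute offset decide the direction. Here is the math sketched in Python rather than Unity C#; the function name and the tie-breaking rule are my own choices:

```python
def attack_direction(char_x, char_y, mouse_x, mouse_y):
    """Return 'up', 'down', 'left' or 'right': whichever side of the
    character the mouse is on. Only the relative offset is used, so the
    result is independent of where the character sits on screen."""
    dx = mouse_x - char_x
    dy = mouse_y - char_y
    if abs(dx) > abs(dy):                  # horizontal offset dominates
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"      # ties fall through to vertical
```

Because both positions are taken in the same (world) space, the character being off-centre on screen does not matter; the diagonal lines through the character's position split the plane into the four attack quadrants.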

Pattern for access controlled client side encryption

How would you design a server/client system where a client is granted a key to encrypt/decrypt data, but the key can be revoked and redistributed by the server? Data encrypted before the rotation must still be readable with the new key.

A simple scenario:

  1. Client wants to send a document to a server
  2. Client encrypts the document with some client-side credentials and sends to server
  3. Server receives document and stores in database
  4. Client requests document, receives, then decrypts. The roundtrip is complete.

Now suppose the client's credentials are compromised and the key used to encrypt/decrypt data is stolen. The client changes their password, etc., but the stolen key can still decrypt incoming data, which remains an issue.

My question is about redistributing an encryption key without having to re-encrypt all of the client's data. Are there any patterns that can help me with this? It feels like a variation of symmetric encryption with a KEK and a DEK, but I'm having trouble figuring out how to encrypt something on the client side without exposing the DEK.
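This is essentially the envelope-encryption pattern: a random DEK encrypts the bulk data, and the client-held KEK only ever encrypts (wraps) the DEK. Revoking a compromised KEK means unwrapping and re-wrapping the small DEK under a new KEK; the bulk ciphertext never changes, and the server only ever stores ciphertext plus the wrapped DEK. A minimal sketch, using a toy XOR stream cipher purely for illustration (real code should use an AEAD such as AES-GCM; all names here are mine):

```python
import hashlib
import secrets

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256(key || counter). Illustration only."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR stream cipher; the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

# Envelope encryption: the DEK encrypts documents, the KEK wraps the DEK.
dek = secrets.token_bytes(32)
kek = secrets.token_bytes(32)               # held client-side only

document = b"confidential report"
ciphertext = xor_cipher(document, dek)      # stored on the server
wrapped_dek = xor_cipher(dek, kek)          # also stored on the server

# Rotation after compromise: only the small wrapped DEK is re-encrypted.
new_kek = secrets.token_bytes(32)
wrapped_dek = xor_cipher(xor_cipher(wrapped_dek, kek), new_kek)

# The bulk ciphertext is untouched and still decrypts via the new chain:
assert xor_cipher(ciphertext, xor_cipher(wrapped_dek, new_kek)) == document
```

The plaintext DEK exists only in client memory after unwrapping, so the server never sees it; the re-wrap step can even be done client-side so the server only ever handles opaque blobs.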

Is using Argon2 with a public random on client side a good idea to protect passwords in transit?

Not sure if this belongs on Crypto SE or here, but anyway:

I'm building an app and I'm trying to decide whether it is secure to protect user passwords in transit, in addition to the TLS we already have.

On the server side, we already have bcrypt properly implemented: it takes the password as an opaque string, salts and peppers it, and compares it against / adds it to the database.

Even though TLS is deemed secure, I want to stay on the "server never sees plaintext" and "prevent MiTM eavesdroppers from sniffing plaintext passwords" side of things. I know this approach doesn't change anything about authentication: anyone who sniffs the hash can still log in with it. My concern is protecting users' plaintext passwords before they leave the device.

I think Argon2 is normally the go-to option here, but I can't have a proper salt with this approach. If I use a random salt on the client side that changes every time I hash the password, my server, which just accepts the password as an opaque string, can't authenticate me.

Because of my requirements, I can't have a deterministic per-user "salt" either (not sure it can even be called a salt in this case): if I used the user ID, I don't have it at registration time, and I can't use the username or email either, because there are flows (e.g. password reset) where I don't have access to them. So my only option is a static value baked into the client.

I'm not after security by obscurity by baking a value into the client; I'm just trying to make it harder for an attacker to use a precomputed hash table against the plaintext passwords. I think it's still better practice than sending the password in plaintext or using no "salt" at all, but I'm not sure.

Bottom line: compared to sending passwords in plaintext (over TLS anyway, but to mitigate the server seeing plaintext passwords and MiTM attacks with fake certificates), is it okay to use Argon2 with a public, randomly generated but fixed value as the "salt" to hash passwords, to protect user passwords in transit? Or am I doing something terribly wrong?
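For concreteness, the scheme described above looks something like the sketch below. It uses `hashlib.scrypt` from the Python standard library as a stand-in for Argon2 (which needs a third-party package such as argon2-cffi); the constant, the function name, and the cost parameters are all hypothetical:

```python
import hashlib

# Hypothetical fixed, public value baked into the client: the static
# "salt" described above (generated randomly once, then shipped with the app).
APP_SALT = b"example-static-client-salt"

def client_side_hash(password: str) -> str:
    """Deterministic client-side KDF. The server keeps treating the result
    as an opaque password string and still runs bcrypt (with its own
    per-user salt and pepper) on it, so the server flow is unchanged."""
    derived = hashlib.scrypt(password.encode("utf-8"), salt=APP_SALT,
                             n=2**14, r=8, p=1, maxmem=2**26, dklen=32)
    return derived.hex()
```

Determinism is what keeps this compatible with the existing opaque-string server flow; the trade-off, as noted in the question, is that the fixed value acts as a site-wide pepper rather than a per-user salt, so identical passwords still produce identical client-side hashes across users.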

Context free languages invariant by “shuffling” right hand side

Given a grammar for a context-free language $L$, we can augment it by "shuffling" the right-hand side of each production, e.g.:

$A \to BCD$ is expanded to $A \to BCD \;|\; BDC \;|\; CBD \;|\; CDB \;|\; DBC \;|\; DCB$

It may happen that the resulting language $L'$ is equal to $L$.

For example:

Source:

S -> XA | YB
A -> YS | SY
B -> XS | SX
X -> 1
Y -> 0

Shuffled:

S -> XA | AX | YB | BY
A -> YS | SY
B -> XS | SX
X -> 1
Y -> 0

Is there a name for such a class of CF languages ($L = \text{shuffled}(L)$)?
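The shuffling construction itself is purely mechanical. A small sketch, with the grammar representation being my own choice (productions as a dict mapping each nonterminal to a list of right-hand sides, each a tuple of symbols):

```python
from itertools import permutations

def shuffle_grammar(grammar):
    """Return the augmented grammar in which every permutation of every
    right-hand side is added as an alternative production."""
    return {nt: sorted({p for rhs in alts for p in permutations(rhs)})
            for nt, alts in grammar.items()}

# The example grammar from above.
g = {"S": [("X", "A"), ("Y", "B")],
     "A": [("Y", "S"), ("S", "Y")],
     "B": [("X", "S"), ("S", "X")],
     "X": [("1",)],
     "Y": [("0",)]}

shuffled = shuffle_grammar(g)
# S gains the reversed alternatives AX and BY; A and B are already
# closed under swapping their two symbols, so they are unchanged.
```

Note that the interesting question, whether $L(\text{shuffled}(G)) = L(G)$, is a property of the language, not of this syntactic transformation, so two grammars for the same $L$ may behave differently under it.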

Does RLNC (Random Linear Network Coding) still need interaction from the other side to overcome packet loss reliably?

I'm looking into implementing RLNC as a project, and I understand the concept: the original data is encoded with random linear coefficients into a number of packets; those packets are sent; and the receiver can reconstruct the original data from a subset of the received packets, even if a few are lost, as long as enough encoded packets arrive to form a solvable equation system.

However, I have not seen any mention of the fact that, with a non-zero packet-loss rate, the receiver may still not get enough packets to reconstruct the original data. The only solution I can see is to add some kind of sequencing so the receiver can detect that it is missing packets needed to reconstruct the original data; in other words, interaction. Am I missing some part of the algorithm? If someone has already solved this problem, can you please point me to where, so I can read about it?
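As I understand it, practical RLNC deployments handle this either by sending redundancy proactively (a rateless, fountain-style stream of coded packets, with at most a single "decoded, stop sending" signal back) or with coarse per-generation feedback; per-packet acknowledgements are exactly what the coding is meant to avoid. A toy sketch of the encode/decode mechanics over GF(2), where a coefficient vector is just a bitmask (real systems typically use larger fields such as GF(2^8); all names here are my own):

```python
import random

def rlnc_encode(source, n_coded, rng):
    """Encode k source packets (ints standing in for payload bit vectors)
    into n_coded packets over GF(2). Each coded packet carries its
    coefficient vector as a k-bit mask plus the XOR of the selected
    source packets."""
    k = len(source)
    coded = []
    for _ in range(n_coded):
        mask = rng.randrange(1, 1 << k)      # random nonzero coefficients
        payload = 0
        for i in range(k):
            if mask >> i & 1:
                payload ^= source[i]
        coded.append((mask, payload))
    return coded

def rlnc_decode(received, k):
    """Gaussian elimination over GF(2). Returns the k source packets,
    or None while the received packets do not yet have full rank."""
    basis = {}                               # leading bit -> (mask, payload)
    for mask, payload in received:
        while mask:
            lead = mask.bit_length() - 1
            if lead not in basis:            # new independent equation
                basis[lead] = (mask, payload)
                break
            bmask, bpayload = basis[lead]
            mask ^= bmask                    # eliminate the leading bit
            payload ^= bpayload
    if len(basis) < k:
        return None                          # rank deficient: wait for more
    # Back-substitution, lowest leading bit first, so each basis row is
    # already reduced to a single bit before it is used.
    for lead in sorted(basis):
        mask, payload = basis[lead]
        for b in range(lead):
            if mask >> b & 1:
                bmask, bpayload = basis[b]
                mask ^= bmask
                payload ^= bpayload
        basis[lead] = (mask, payload)
    return [basis[i][1] for i in range(k)]

# Sender adds redundancy up front (here n = 2k) instead of waiting for
# acknowledgements; the receiver decodes once any k independent packets
# arrive, regardless of which ones were lost.
source = [0x1234, 0x5678, 0x9ABC, 0xDEF0]
coded = rlnc_encode(source, 8, random.Random())
survivors = coded[:3] + coded[5:]            # simulate two lost packets
result = rlnc_decode(survivors, len(source)) # None if rank is still < k
```

So yes: with truly nonzero loss and a fixed number of packets there is always some probability of failure, and the usual answer is to keep emitting fresh coded packets (each new random combination is useful, no sequencing needed) until one piece of feedback ends the generation.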

The Other Side of CAPTCHA [closed]

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart", but we usually only consider one half of it, i.e. a test that separates genuine humans from bots.

Can a program/test exist that can be only "solved" by computers/bots and not by humans?

If not, what prevents such a program from existing in the foreseeable future?

If it does, could you please cite one and explain how it works?

Note: The Voight-Kampff test by Philip K. Dick still has its drawbacks.