I’m making a 2.5D game, where 2D sprites live in a 3D environment. I’m using URP and I have a problem with lighting the sprites: they light up from behind, not from the front. I tried directional, spot, and point lights, but the result is the same. No matter which official shader I use, the sprites only light up when they receive light from behind; front light has no effect on them whatsoever.
I spent the entire day looking for a solution but found almost nothing. The only fix I saw anyone mention is making the game object with the Sprite Renderer a child of another game object and rotating it 180 degrees on Y. But that isn’t an option for me, because I’m already using custom scripts to rotate that game object.
So could a custom shader solve this? Could one be created with Shader Graph, maybe? I know others have faced the same problem, but did anyone actually solve it?
I’ve been looking for a similar question here and reading about the "Add Azure Replica Wizard", but I’ve heard it no longer works because it’s a deprecated feature.
I used to have a primary server and a secondary server on-premises in an Always On setup, but because of costs I had to delete the secondary replica.
I would like to know whether it’s possible to recreate this Always On environment with the primary server on-premises and a replicated secondary on a virtual machine in Azure. Then, if something happens to our primary, the secondary replica on Azure would automatically take over the work.
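If the wizard route is deprecated, the replica can in principle still be added manually with T-SQL on the primary. A rough sketch, where the availability group name, replica name, and endpoint URL are all placeholders for your own values:

```sql
-- Run on the on-premises primary; AG1, AZVM and the URL are placeholders.
ALTER AVAILABILITY GROUP [AG1]
  ADD REPLICA ON N'AZVM'
  WITH (
    ENDPOINT_URL       = N'TCP://azvm.example.com:5022',
    AVAILABILITY_MODE  = ASYNCHRONOUS_COMMIT,  -- typical across a WAN link
    FAILOVER_MODE      = MANUAL                -- automatic failover requires synchronous commit
  );
```

Note the trade-off in the comments: automatic failover (as described above) requires synchronous commit, which may be costly in latency over a WAN link to Azure.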
A trusted execution environment (TEE) provides a way to deploy tamper-proof programs on a device. The most prominent example of a TEE seems to be Intel SGX for PCs.
What I wonder is whether an equivalent solution exists for mobile devices. For example, I want to deploy an arbitrary application on a smartphone that even a malicious OS can’t tamper with. Is there such a solution at the moment?
As part of moving from a few on-premises monoliths to multiple on-premises microservices, I’m trying to improve the situation where database passwords and other credentials are stored in configuration files in /etc.
Regardless of the technology used, the consumer of the secrets needs to authenticate with a secret store somehow. How is this initial secret-consumer-authentication trust established?
It seems we have a chicken-and-egg problem. In order to get credentials from a server, we need to have a /etc/secretCredentials.yaml file with a cert, token or password. Then I might (almost) just as well stick with the configuration files as today.
If I wanted to use something like HashiCorp Vault (which seems to be the market leader) for this, there is a Secure Introduction of Vault Clients article. It outlines three methods:
- Platform Integration: Great if you’re on AliCloud, AWS, Azure, GCP. We’re not
- Trusted Orchestrator: Great if you’re using Terraform, Puppet, Chef. We’re not
- Vault Agent: The remaining candidate
When looking at the various Vault Auth Methods available to the Vault Agent, they all seem to boil down to having a cert, token or password stored locally. Vault’s AppRole Pull Authentication article describes the challenge perfectly, but then doesn’t describe how the app gets the SecretID in the first place.
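For concreteness, the AppRole pull flow that article describes can be sketched with the Vault CLI; the role name is a placeholder, and the whole point of the question remains how the SecretID half reaches the app securely:

```shell
# One-time setup by an operator or deployment tooling (not the app itself):
vault auth enable approle
vault write auth/approle/role/my-app token_ttl=1h token_max_ttl=4h
vault read auth/approle/role/my-app/role-id        # RoleID: baked into the app's config
vault write -f auth/approle/role/my-app/secret-id  # SecretID: must be delivered out of band

# At startup, the app logs in with both halves and receives a short-lived token:
vault write auth/approle/login role_id=<role_id> secret_id=<secret_id>
```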
The only thing I can think of is the IP address. But our servers all run in the same virtualization environment, and today they all get random IP addresses from the same DHCP pool, making it hard to create ACLs based on IP address. We could change that. But even then, is the request’s IP address/subnet sufficiently safe to use as a credential?
We can’t be the first in the universe to hit this. Are there alternatives to having a /etc/secretCredentials.yaml file or ACLs based on IP address, or is that the best we can do? What is the relevant terminology, and what are the best practices, so we don’t invent our own (insecure) solution?
I have a project that I have to present on a Zoom call for my AP Computer Science class. I have a Flask site that I’m running from my laptop through a port forward. When I run the server it says:
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off
I only plan to run this for a couple of hours, and it doesn’t need to be particularly efficient, but I don’t want to open my computer up to attack. (I know it’s very dangerous to run it in debug mode like this). The web app doesn’t have any sensitive data to be stolen, but I wanted to make sure I wasn’t opening my machine to remote code execution, or anything like that.
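One low-effort way to silence that warning is to put a minimal production WSGI server in front of the app for the demo. A sketch using Waitress, assuming the Flask app object is named `app` in a file `app.py` (adjust `app:app` to your module and variable names):

```shell
pip install waitress
waitress-serve --host=0.0.0.0 --port=5000 app:app
```

This only swaps the dev server for a hardened one; it does not audit the app’s own code, so keeping debug mode off still matters.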
I know the general differences between static and dynamic linking, but what are the requirements on the target environment? In fact, what do we even consider the “target environment” to be? Where is the program finally run by the operating system under dynamic linking?
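The distinction being asked about can be made concrete on a typical Linux box; this sketch assumes a trivial `hello.c` and an installed toolchain (the static build additionally needs a static libc on the system):

```shell
# Dynamic (the default): the binary only records which shared libraries it
# needs; the dynamic linker/loader must resolve them on the TARGET machine
# at load time, so matching .so files must exist there.
gcc -o hello hello.c
ldd ./hello            # lists the shared libraries the loader must find

# Static: library code is copied into the binary at link time, so the
# target machine needs no matching shared libraries.
gcc -static -o hello_static hello.c
file ./hello_static    # reports the binary as statically linked
```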
I am wondering: if we had a large array to sort (say 1,048,576 random integers, chosen because it is a perfect power of 2), and we keep dividing it into smaller and smaller half-size blocks, how would someone know (on a particular computer, using a particular language and compiler) what the ideal block size would be to get the best actual runtime using mergesort to put them all back together? For example, suppose someone had 1024 sorted blocks of size 1024; could that be beaten by some other combination? Is there any way to predict this, or does someone just have to code the variants, try them all, and pick the best? Perhaps for simplicity they would use a simple bubble sort on the 1024-element blocks, then merge them all together at the end using mergesort. Of course, the mergesort portion would only work on 2 sorted blocks at a time, merging them into 1 larger sorted block.
Also, what about the time complexity analysis of something like this? Would all divide-and-conquer variations of this have the same time complexity? The 2 extremes would be 2 sorted blocks (of size 524,288 each) or 1,048,576 “sorted” blocks of size 1, handed over to the merge process at that point.
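The “code it and try it” approach described above can be sketched directly: sort fixed-size blocks individually, then merge pairs of blocks until one list remains, and time the whole thing for several block sizes. This is only a sketch; `sorted()` stands in for the bubble sort mentioned above, and the sizes here are scaled down from 1,048,576 for a quick run:

```python
import random
import timeit

def merge(a, b):
    """Merge two sorted lists into one sorted list (the 2-blocks-at-a-time step)."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def block_mergesort(data, block_size):
    """Sort blocks of `block_size` individually, then merge pairs of blocks
    until a single sorted list remains."""
    blocks = [sorted(data[i:i + block_size])   # stand-in for bubble sort per block
              for i in range(0, len(data), block_size)]
    while len(blocks) > 1:
        blocks = [merge(blocks[i], blocks[i + 1]) if i + 1 < len(blocks) else blocks[i]
                  for i in range(0, len(blocks), 2)]
    return blocks[0] if blocks else []

# Empirically compare a few block sizes on the same data.
data = [random.randrange(10**6) for _ in range(2**14)]
for b in (16, 64, 256, 1024):
    t = timeit.timeit(lambda: block_mergesort(data, b), number=3)
    print(f"block size {b:5d}: {t:.3f} s")
```

Since each merge pass over all the data is O(n) and the number of passes depends on the starting block count, the asymptotic class stays O(n log n) across these variants; what the timing loop measures is only the constant-factor effect of the block size on a particular machine.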
Are there any mechanical consequences in the game for characters without darkvision when moving and taking actions in a completely dark environment (such as a pitch black forest)?
Are there penalties for moving in the dark when you cannot see?
Is it, for example, possible to use a weapon to shoot at a character standing in light if you are in darkness yourself, or would you suffer from not having light to see the path you are walking, to load your weapon, etc.?
I’ve been reading about System F Omega lately, and I keep stumbling across a construct in typing rules that I cannot find an explanation of:
Γ(x) = k. For example, in A Short Introduction to Systems F and F Omega:
Γ(a) = k
--------
Γ ⊢ a : k
I see the same construct in Hereditary Substitution for Stratified System F. I understand the bottom part fine. It would read something like: “In context Γ, a has kind k”. I’ve not been able to find an explanation of the top part, and the sources I referenced both assume familiarity with this construct. If I had to guess, I suspect that it means something like “In context Γ, running a kind-checking algorithm on a gives you kind k as the result”. Is that accurate? What online resources describe this construct?
I am studying Trusted Execution Environments (TEEs) on Android mobile phones. From my reading, I found there are 2 APIs in a TEE (the isolated OS):
Internal API: a programming and services API for Trusted Applications (TAs) in the TEE; it cannot be called by any application running in the rich OS (Android’s normal OS). E.g., the internal API provides cryptographic services.
External API, or client API: called by applications running in the rich OS in order to access TA and TEE services.
Assume I want to apply TEE in this way:
- I have an APP running in rich OS
- I want to securely store some cryptographic keys of my APP
- Hence, the keys are stored in TEE
- The APP in the rich OS retrieves the keys from the TEE when it needs them, and deletes them from rich OS memory after use
Please help explain:
- How should the internal & external APIs work in the above situation?
- Besides the APP in the rich OS, do I also need a TA running in the TEE to store & provide the keys?
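As a concrete shape for the rich-OS side, the “external/client API” is commonly the GlobalPlatform TEE Client API. The following is a non-compilable C sketch under that assumption; `CMD_GET_KEY` and the TA UUID are hypothetical, and a matching TA (using the internal API for trusted storage and crypto) would have to be installed in the TEE:

```c
/* Rich-OS (client) side, GlobalPlatform TEE Client API.
 * CMD_GET_KEY and the UUID passed in are invented placeholders:
 * a real TA implementing that command must exist in the TEE. */
#include <tee_client_api.h>

TEEC_Result fetch_key(const TEEC_UUID *ta_uuid, void *key_buf, size_t key_len)
{
    TEEC_Context   ctx;
    TEEC_Session   sess;
    TEEC_Operation op = { 0 };
    TEEC_Result    res;

    TEEC_InitializeContext(NULL, &ctx);           /* connect to the TEE driver */
    TEEC_OpenSession(&ctx, &sess, ta_uuid,
                     TEEC_LOGIN_PUBLIC, NULL, NULL, NULL);

    /* Ask the TA for the key; inside the TEE, the TA uses the *internal*
     * API (trusted storage, crypto services) to manage the key material. */
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_OUTPUT,
                                     TEEC_NONE, TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = key_buf;
    op.params[0].tmpref.size   = key_len;
    res = TEEC_InvokeCommand(&sess, CMD_GET_KEY, &op, NULL);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return res;            /* caller wipes key_buf from memory after use */
}
```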