How to bootstrap trust in an on-premise environment?

As part of moving from a few on-premise monoliths to multiple on-premise microservices, I’m trying to improve the situation where database passwords and other credentials are stored in configuration files in /etc.

Regardless of the technology used, the consumer of the secrets needs to authenticate with a secret store somehow. How is this initial secret-consumer-authentication trust established?

It seems we have a chicken-and-egg problem. In order to get credentials from a server, we need to have a /etc/secretCredentials.yaml file with a cert, token or password. Then I might (almost) as well stick with the configuration files we have today.

If I wanted to use something like HashiCorp Vault (which seems to be the market leader) for this, there is a Secure Introduction of Vault Clients article. It outlines three methods:

  • Platform Integration: Great if you’re on AliCloud, AWS, Azure, GCP. We’re not
  • Trusted Orchestrator: Great if you’re using Terraform, Puppet, Chef. We’re not
  • Vault Agent: The remaining candidate

When looking at the various Vault Auth Methods available to the Vault Agent, they all look like they boil down to having a cert, token or password stored locally. Vault’s AppRole Pull Authentication article describes the challenge perfectly, but then doesn’t describe how the app gets the SecretID 🙁
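
To make the chicken-and-egg concrete, here is a minimal sketch of an AppRole login from Python using the hvac client (the Vault address, secret path, and file locations are placeholders of my own). Even with AppRole, the process first has to read a RoleID and SecretID from somewhere on the host before it can talk to Vault, which is exactly the bootstrap step in question:

    import hvac

    # Something still has to place these files on the host; that delivery step
    # is the "secure introduction" problem this question is about.
    with open("/etc/vault/role_id") as f:
        role_id = f.read().strip()
    with open("/etc/vault/secret_id") as f:
        secret_id = f.read().strip()

    client = hvac.Client(url="https://vault.example.com:8200")  # placeholder address

    # Exchange RoleID + SecretID for a short-lived Vault token.
    client.auth.approle.login(role_id=role_id, secret_id=secret_id)

    # Read the database credentials from Vault instead of keeping them in /etc.
    secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")  # placeholder path
    db_password = secret["data"]["data"]["password"]

In the Secure Introduction article, delivering that SecretID (possibly response-wrapped) is supposed to be the job of the platform or trusted orchestrator, which is exactly the piece we’re missing.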

The only thing I can think of is IP address. But our servers are all running in the same virtualization environment, and so today, they all have random IP addresses from the same DHCP pool, making it hard to create ACLs based on IP address. We could change that. But even then, is request IP address/subnet sufficiently safe to use as a valid credential?

We can’t be the first in the universe to hit this. Are there alternatives to having a /etc/secretCredentials.yaml file or ACLs based on IP address, or is that the best we can do? What is the relevant terminology and what are the best-practices, so we don’t invent our own (insecure) solution?

Is it safe to run a Flask server in a development environment?

I have a project that I have to present on a Zoom call for my AP Computer Science class. I have a Flask site that I am running on my laptop and exposing through a port forward. When I run the server, it says:

    WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
     * Debug mode: off

I only plan to run this for a couple of hours, and it doesn’t need to be particularly efficient, but I don’t want to open my computer up to attack. (I know it’s very dangerous to run it in debug mode.) The web app doesn’t have any sensitive data to be stolen, but I wanted to make sure I wasn’t opening my machine to remote code execution or anything like that.
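
For reference, here is a minimal sketch of what that warning is suggesting, using the waitress WSGI server in place of app.run() (the module name app.py, the app object name, and the port are assumptions; adjust them to your project):

    # serve.py: run the Flask app with waitress instead of the built-in dev server.
    from waitress import serve

    from app import app  # assumes your Flask object is named "app" in app.py

    if __name__ == "__main__":
        # Bind on all interfaces so the port forward can reach it; debug mode
        # stays off because app.run() is never called.
        serve(app, host="0.0.0.0", port=8080)

Waitress is pure Python and pip-installable (pip install waitress), so swapping it in for a couple of hours of demoing is cheap.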

How can a divide-by-2 block-size bubble sort followed by a final merge sort be optimized in a particular environment?

I am wondering: if we had a large array to sort (let’s say 1,048,576 random integers, chosen because it is a perfect power of 2), and we could keep dividing it into smaller and smaller half-size blocks, how would someone know (on a particular computer, using a particular language and compiler) what the ideal block size would be to get the best actual runtime when using merge sort to put the blocks back together? For example, suppose someone had 1024 sorted blocks of size 1024; could that be beaten by some other combination? Is there any way to predict this, or does someone just have to code the variations, try them all, and pick the best? Perhaps for simplicity they would use a simple bubble sort on the 1024-element blocks, then merge them all together at the end using merge sort. Of course, the merge-sort portion would only work on 2 sorted blocks at a time, merging them into 1 larger sorted block.
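
Here is a minimal Python sketch of the scheme described above, with the block size as a parameter and a small timing loop to show how one might compare candidate sizes empirically on a given machine (the function names, array size, and candidate block sizes are my own choices for illustration; a pure-Python bubble sort gets slow quickly as blocks grow, which is part of what such an experiment reveals):

    import random
    import time

    def bubble_sort(block):
        # In-place bubble sort of one block (quadratic; only sensible for small blocks).
        n = len(block)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if block[j] > block[j + 1]:
                    block[j], block[j + 1] = block[j + 1], block[j]
                    swapped = True
            if not swapped:
                break
        return block

    def merge(left, right):
        # Merge two sorted lists into one sorted list.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    def hybrid_sort(data, block_size):
        # Bubble-sort fixed-size blocks, then merge adjacent pairs until one block remains.
        blocks = [bubble_sort(data[i:i + block_size]) for i in range(0, len(data), block_size)]
        while len(blocks) > 1:
            merged = []
            for i in range(0, len(blocks), 2):
                if i + 1 < len(blocks):
                    merged.append(merge(blocks[i], blocks[i + 1]))
                else:
                    merged.append(blocks[i])
            blocks = merged
        return blocks[0] if blocks else []

    if __name__ == "__main__":
        # 2**16 elements keeps the pure-Python demo quick; the question's 2**20 works the same way.
        data = [random.randrange(10**9) for _ in range(1 << 16)]
        expected = sorted(data)
        for block_size in (16, 64, 256, 1024):
            start = time.perf_counter()
            assert hybrid_sort(data, block_size) == expected
            print(f"block size {block_size:5d}: {time.perf_counter() - start:.2f} s")

In practice, timing a handful of candidates like this is usually how the sweet spot gets found, since it depends on the machine, language, and compiler.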

Also, what about the time-complexity analysis of something like this? Would all divide-and-conquer variations of this have the same time complexity? The two extremes would be 2 sorted blocks (each of size 524,288) or 1,048,576 “sorted” blocks of size 1 handed over to the merge process at that point.
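
As a rough starting point for the complexity question (treating the block-sorting step as plain bubble sort and the merge phase as standard pairwise merging; this is only a back-of-the-envelope sketch, not a full analysis):

    T(n, b) \approx \frac{n}{b} \cdot O(b^2) + O\!\left(n \log \frac{n}{b}\right) = O\!\left(n b + n \log \frac{n}{b}\right)

Under that count the variations are not all equivalent: the extreme of 2 blocks of size n/2 is dominated by the quadratic block sort and comes out to O(n^2), while blocks of size 1 reduce to ordinary merge sort at O(n log n).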

Mechanics of moving and acting in a completely dark environment

Are there any mechanical consequences in the game for characters without darkvision when moving and taking actions in a completely dark environment (such as a pitch black forest)?

Are there penalties for moving in the dark when you cannot see?

Is it, for example, possible to use a weapon to shoot at a character standing in light while you are in darkness yourself, or would you suffer penalties from not having light to see where you are walking, load your weapon, etc.?

Parentheses after Typing Environment

I’ve been reading about System F Omega lately, and I keep stumbling across a construct in typing rules that I cannot find an explanation of: Γ(x) = k. For example, in A Short Introduction to Systems F and F Omega:

Γ(a) = k
--------
Γ ⊢ a : k

I see the same construct in Hereditary Substitution for Stratified System F. I understand the bottom part fine; it would read something like: “In context Γ, a has kind k”. I’ve not been able to find an explanation of the top part, and the sources I referenced both assume familiarity with this construct. If I had to guess, I suspect it means something like “In context Γ, running a kind-checking algorithm on a gives you kind k as the result”. Is that accurate? What online resources describe this construct?

Trusted Execution Environment (TEE) internal API vs. external (client) API

I am studying the Trusted Execution Environment (TEE) on Android mobile phones. From my reading, I found there are two APIs in a TEE (the isolated OS):

  • Internal API: a programming and services API for Trusted Applications (TAs) in the TEE; it cannot be called by any application running in the rich OS (the normal Android OS). E.g., the internal API provides cryptographic services.

  • External API (client API): called by applications running in the rich OS in order to access TA and TEE services.

Assume I want to use the TEE in this way:

  • I have an app running in the rich OS
  • I want to securely store some of my app’s cryptographic keys
  • Hence, the keys are stored in the TEE
  • The app in the rich OS retrieves the keys from the TEE when it needs them, and deletes them from rich-OS memory after use

Please help explain the following:

  • How should the internal and external APIs work in the above situation?
  • Besides the app in the rich OS, do I also need a TA running in the TEE to store and provide the keys?

Password Policy for Digital Environment

When creating a security policy, such as a password policy, what are some of the typical assets that need to be protected?

And how does this affect employees, contractors, vendors, suppliers, and representatives who access organization-provided or organization-supported applications, programs, networks, systems, and devices?

I’m curious to learn more, and I’d also appreciate details from real-world expertise. In addition, if anyone has a good, valid password policy that has been written and implemented, I’d love to read it and learn from it.

PCI DSS 1.2.1 Restrict inbound and outbound traffic to that which is necessary for the cardholder data environment

A strict interpretation of that rule would seem to prohibit web browsing by PCs on the same LAN as a card processing PC. However, it appears that rule is interpreted in practice as though it says “Restrict inbound and outbound traffic to that which is necessary for the business environment.” Can anyone provide confirmation or clarification?

Is the memory typically used when working in a Windows environment a good guide to how much memory a new computer needs?

Let’s suppose I want to buy a new computer. My reasoning for the amount of RAM is the following.

Is it correct?

In a Windows environment, I look at how much memory I am using when a large number of the applications I typically use are active simultaneously. This gives me the amount of memory I need for the next computer.

No more than that is needed.