Value flow (and economics) in stacked reinforcement learning systems: agent as reinforcement environment for other agents?

There is an evolving notion of stacked reinforcement learning systems, e.g. https://www.ijcai.org/proceedings/2018/0103.pdf – where one RL system executes the actions of a second RL system, which in turn executes the actions of a third, and reward and value flow back down the stack.

So, one can consider an RL system RLn with:

  • S – space of states;
  • A={Ai, Ainner, Ao} – space of actions which can be divided into 3 subspaces:
    • Ai – actions that RLn exposes to other RL systems (like RLn-1, RLn-2, etc.) and for which RLn provides rewards to those systems;
    • Ainner – actions that RLn executes internally, of its own volition;
    • Ao – actions that RLn imports from other RL systems (like RLn+1, RLn+2, etc.), that RLn executes on those other systems, and from which RLn gets rewards that can in turn fund the rewards it provides for Ai actions.

So, one can consider a network (possibly hierarchical) of RL systems. My questions are:

  • Is there literature that considers such stacks/networks of reinforcement learning systems/environments?
  • Is there economic research about value flow, accumulation of wealth, starvation, survival processes, and evolution in such stacks/networks of RL systems/environments?

Essentially, my question is about RL environments that can function in the role of agents for some other environments, and about agents that can function in the role of environments for some other agents. What do computer science/machine learning/artificial intelligence call such systems, what methods are used to research them, and how are the concepts of economics/governance/preference/utility used in research on the evolution of such systems?
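To make the question concrete, here is a minimal sketch of one layer in such a stack: a node that is an environment toward the layer below (exposing Ai actions and paying rewards down) and an agent toward the layer above (executing imported Ao actions and collecting reward from it), plus its own Ainner behaviour. All class names, the fixed 50/50 reward split, and the trivial internal policy are my illustrative assumptions, not anything from the paper linked above.

```python
class RLNode:
    """One layer RL_n in a stack: an environment for the layer below
    and an agent toward the layer above."""

    def __init__(self, name, exposed_actions, upstream=None):
        self.name = name
        self.exposed = exposed_actions   # A_i: actions offered to lower layers
        self.upstream = upstream         # the RL_{n+1} whose actions form A_o
        self.wealth = 0.0                # accumulated reward ("value")

    def step(self, action):
        """Environment role: a lower layer executes one of our A_i actions.
        We may in turn act on our upstream (A_o) and pass part of the
        reward we earn back down as that layer's reward signal."""
        assert action in self.exposed
        upstream_reward = 0.0
        if self.upstream is not None:
            # A_o: forward an action to the layer above, collect its reward
            upstream_reward = self.upstream.step(self.upstream.exposed[0])
        internal_reward = self.internal_step()   # A_inner
        earned = upstream_reward + internal_reward
        self.wealth += earned * 0.5              # keep half as "wealth"
        return earned * 0.5                      # pay half down as reward

    def internal_step(self):
        # A_inner: placeholder for the node's own behaviour/policy
        return 1.0


top = RLNode("RL2", exposed_actions=["serve"])
mid = RLNode("RL1", exposed_actions=["query"], upstream=top)
reward = mid.step("query")   # RL0 (not modelled here) would call this
```

With this toy split rule, value "pools" at every layer the reward passes through, which is exactly the kind of accumulation/starvation dynamic the economic part of the question is about.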

Reliable data synchronisation between several systems

We have several systems that hold the same data in different databases. The requirement is to keep the data synced as reliably as possible. Performance isn’t a concern because we don’t have many users.

Currently, we’re using RabbitMQ to produce events on data changes so that consumers can track the updates and make the corresponding changes in their own data. We’ve set up a RabbitMQ cluster to increase reliability. However, my team leader is still concerned about insufficient reliability and data loss in case one of the systems fails to handle a message correctly.

Is there any better way to solve the task?
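One commonly recommended complement to a broker cluster is the transactional-outbox pattern: the data change and the event announcing it are written in the same local transaction, so an event can never be lost between "data updated" and "message published". A minimal sketch, using sqlite3 as a stand-in for the producing system's database (table names and the payload shape are illustrative; the actual RabbitMQ publish with publisher confirms is only indicated by a comment):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    payload TEXT NOT NULL,
    published INTEGER NOT NULL DEFAULT 0)""")

def update_item(item_id, name):
    # One atomic transaction for the data change AND the event row,
    # so a crash can never leave a change without its event.
    with db:
        db.execute("INSERT OR REPLACE INTO items (id, name) VALUES (?, ?)",
                   (item_id, name))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (f'{{"item_id": {item_id}, "name": "{name}"}}',))

def relay_pending():
    """Publish unpublished events; mark rows only after the broker
    confirms, so at worst an event is delivered twice, never zero times."""
    rows = db.execute(
        "SELECT id, payload FROM outbox WHERE published = 0 ORDER BY id"
    ).fetchall()
    for row_id, payload in rows:
        # ...channel.basic_publish(...) with publisher confirms goes here...
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()
    return [p for _, p in rows]

update_item(1, "widget")
events = relay_pending()
```

Because delivery becomes at-least-once, consumers must be idempotent (e.g. upsert keyed by item id) and should ack the message only after applying the change.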

What are the topological phases of quantum Hall systems?

(Fractional) quantum Hall systems are $2+1$-dimensional models which are said to possess topological order. One (maybe even complete) set of invariants of topological phases in $2+1$ dimensions is given by the anyon statistics, or in mathematical terms, the (2-extended) axiomatic TQFT of the phase. Both are labelled by modular fusion categories.

Question: What are the modular fusion categories associated to quantum Hall systems? Is there a table somewhere in the literature that associates to each value of the “filling fraction” (and whatever other parameters there are in quantum Hall models) a modular fusion category?

Also: Quantum Hall systems involve fermionic degrees of freedom. Are they proper fermionic phases? Or are the fermions somehow restricted to local islands of even parity, such that the resulting phases are still bosonic? In the former case, the anyon statistics are not described by modular fusion categories but their fermionic analogue, something like “modular super-fusion categories”.

I guess this is written down in a lot of places; I’m just having a hard time finding it without skimming through lots of condensed matter literature that I don’t understand.

How do I create a customized ISO of Ubuntu that can be installed on multiple systems, a la OEM configuration?

I am working on creating a customized version of Ubuntu (with certain NEEDED customizations) that will afterwards need to be installed on other computers. I know about the OEM install, but how do I customize the ISO with the packages and setup files I need?

How to init other systems in the enterprise

We currently have a REST API on top of a MongoDB cluster which works pretty well given the right indexes, as we are mostly benchmarking the MongoDB driver.

A few data points:

  • several thousands of requests per second
  • 100 GB of data
  • 1 billion items in the main collection

There are other entities in the enterprise that need some part of this data, in order to aggregate / rework / merge with other systems.

The question is: how do you handle an init of their systems in that case?

We have thought about a few solutions:

  • stream the data from the API
  • provide them with a dump of the data, in the same format as the API
  • provide them with a MongoDB read-only replica
  • … ?

How do bigger entities handle that kind of data transfer?
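For the "stream the data from the API" option, the usual way to make an initial load of a billion items survivable is keyset (cursor) pagination on an indexed, monotonically ordered field such as `_id`: the consumer only has to remember the last id it saw to resume after a failure. A sketch with the collection simulated as an in-memory list (the pymongo equivalent, shown in a comment, is an assumption about your schema, not tested code):

```python
def export_pages(collection, page_size):
    """Yield the whole collection in id-ordered pages.
    Resumable: only `last_id` must be remembered to continue."""
    last_id = None
    while True:
        # With pymongo this would be roughly:
        #   collection.find({"_id": {"$gt": last_id}}).sort("_id").limit(page_size)
        page = sorted(
            (doc for doc in collection
             if last_id is None or doc["_id"] > last_id),
            key=lambda d: d["_id"],
        )[:page_size]
        if not page:
            return
        yield page
        last_id = page[-1]["_id"]


docs = [{"_id": i, "value": i * i} for i in range(10)]
pages = list(export_pages(docs, page_size=4))
```

Unlike `skip`/`limit` pagination, this stays cheap on the last page of a huge collection, and it composes with the other options: the same cursor logic works against a read-only replica or when producing a dump in the API's format.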

What is the solution for designing two separate systems that need to share their data? [on hold]

For my dissertation I want to design two separate systems, one for a hospital and the other for a pharmacy. These two systems need to share their data: the hospital system needs access to the pharmacy system’s data. For example, a doctor using the hospital’s system sends a prescription to the pharmacy system using the pharmacy’s drug data, or the pharmacist retrieves a patient’s information from the hospital system on the pharmacy system. Note: the goal is just to design these systems, not to implement them. Questions:

  • Is data integration a good solution for these systems?
  • Can data integration be done during design (when there is no code)?
  • Should I use an architecture pattern for designing the systems?
  • If so, which architecture pattern suits this case?

Thank you for your help 🙂

How can I safely store application secrets/passwords in git and other version control systems?

When I saw this question: Why is storing passwords in version control a bad idea?

I immediately thought that question could be inverted to be: Why is storing passwords in version control a good idea?

  1. True Infrastructure as Code = App + Config + Secrets, all stored as code. (Having this allows results to be replicated reliably.)
  2. Consistency is the best friend of automation/CI/CD pipelines. Having App + Config in source control and Secrets in HashiCorp Vault makes your automation more complex. If all 3 are stored consistently in git, automation becomes much easier.
  3. It’s important to store your config in a version control system. The thing is .json or .yaml config files with secrets and other sensitive information embedded alongside the configuration are pretty common. Why not just put those in version control too?
  4. Allowing Secrets in git offers the following benefits:
    1. There’s a changelog of when the secret changed, and an audit trail of who changed it, this knowledge allows the scope of debugging to be narrowed.
    2. Sometimes a dev isn’t sure if their code is wrong, or if the secret is formatted in some weird and unexpected way. A dev being able to look at a dev version of the secret while working, and an ops person being able to compare the dev and pre-prod versions of a secret, helps debug quicker. (Example: maybe a .txt file was created on Mac/Linux by a dev, then created on Windows by an ops person, and the dev vs pre-prod versions of the secret ended up with two different character encodings, missing quote(s), `\r\n` vs `\n` line endings, an extra space, or all kinds of misspellings.)
    3. I’ve run into a scenario where an app was being rapidly developed and a new feature required a new secret. The secret was added to the dev environment; then a pre-prod version of the application was launched and it wasn’t working. It took a while to figure out that the newly added secret had never been created for (much less applied to) the higher environments. (If secrets were consistently stored in git, this would have been obvious by inspection.)

But then I realized there’s a better question beyond:
Why is storing passwords in version control a bad idea?
vs
Why is storing passwords in version control a good idea?

And that’s:
How can I safely store application secrets/passwords in git?
Challenges:

  • It’s obvious that the secret would need to be stored encrypted. But safely storing encrypted data in git requires that decryption keys cannot be leaked:
    If git users directly decrypt secrets using PGP or symmetric keys, then when the decryption keys get leaked there’s no way to revoke or invalidate them, and there’s no way to purge the git history because it’s decentralized.
  • Need a means to audit if a piece of data was decrypted or who decrypted it.
  • Need to be able to assign granular access rights for who can decrypt which secrets. Devs shouldn’t be able to decrypt prod secrets. An ops person who can decrypt Prod application A’s secrets shouldn’t necessarily be able to decrypt Prod application B’s secrets.
  • Need to be able to prevent footgun scenarios, like accidentally decrypting a previously encrypted secret and then committing the decrypted version of the secret back to the repo.
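The last footgun, at least, is mechanically checkable: a pre-commit hook can refuse anything under a secrets path that doesn’t look like an encrypted envelope. A minimal sketch of the check, assuming an `ENC[` ciphertext marker (mimicking the sops format) and a `secrets/` directory convention; both are assumptions you’d adjust to your actual tool and repo layout:

```python
ENCRYPTED_MARKER = "ENC["   # e.g. sops-style ciphertext; adjust to your tool

def staged_secret_is_encrypted(path, content):
    """Return True if a file under secrets/ looks encrypted.
    Heuristic only: every non-empty, non-comment line must carry the marker."""
    if not path.startswith("secrets/"):
        return True  # only police the secrets directory
    for line in content.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if ENCRYPTED_MARKER not in line:
            return False
    return True

def check_files(files):
    """files: dict of path -> content (what a pre-commit hook would read
    from the git index). Returns the list of offending paths."""
    return [p for p, c in files.items()
            if not staged_secret_is_encrypted(p, c)]

bad = check_files({
    "secrets/db.yaml": "password: ENC[AES256_GCM,data:...,iv:...]",
    "secrets/api.yaml": "token: hunter2",    # plaintext -> rejected
    "app/config.yaml": "log_level: debug",   # outside secrets/ -> ignored
})
```

A hook like this doesn’t solve key revocation or audit (those still need a KMS or a tool like sops/git-crypt doing the actual cryptography), but it cheaply blocks the commit-the-decrypted-file mistake before it enters immutable history.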