GitLab CI (self-hosted), Docker, access to other containers

Even if I’m not allowed to access a specific repo (or if I have low permissions and can’t see its CI/CD variables), I can still create a repo of my own and run something like:

    variables:
      USER: gitlab
    build:
      stage: build
      image: docker:latest
      script:
        - docker ps -a
        - docker images

Then, once I have what I need, I can:

    variables:
      USER: gitlab
    build:
      stage: build
      image: docker:latest
      script:
        - docker exec <container> /bin/cat /var/www/html/config.php
        - docker exec <container> /usr/bin/env

How do I prevent this kind of thing?

PS: This is on a self-hosted GitLab server.

PS2: Originally posted on Stack Overflow, but I’m asking here since I didn’t get any answer there.
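
For reference, the direction I have been looking at is isolating jobs with Docker-in-Docker instead of exposing the shared host daemon. The following is only a rough sketch, and it assumes the runner’s config.toml enables privileged dind and does not bind-mount /var/run/docker.sock (both are assumptions about the runner setup, not something I have verified):

    # Hypothetical .gitlab-ci.yml: the job talks to a throwaway per-job Docker
    # daemon provided by the docker:dind service instead of the host's daemon,
    # so `docker ps` can no longer see other projects' containers.
    # Assumes the runner allows privileged mode and does NOT mount
    # /var/run/docker.sock into job containers.
    build:
      stage: build
      image: docker:latest
      services:
        - docker:dind
      variables:
        DOCKER_HOST: tcp://docker:2375   # the dind service, not the host
        DOCKER_TLS_CERTDIR: ""           # TLS disabled for brevity; GitLab also
                                         # documents a TLS variant on port 2376
      script:
        - docker ps -a                   # lists only containers in the per-job daemon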

What is the difference between the certificate containers in Windows?

I’ve always wondered. When you import a PKI certificate on a Windows system, you have several choices of where to store it:

  • Personal
  • Trusted Root Certification Authorities
  • Enterprise Trust
  • Intermediate Certification Authorities
  • Trusted Publishers
  • Third Party Root Certification Authorities
  • Etc.

All of these stores are duplicated between the user account and the machine account, and I understand the difference between those.

In practice, however, it does not seem to matter which location I choose; certificates get trusted by applications regardless of where I put them.

Is there any functional difference between them? Why do we need so many?

How do I share secret key files with Docker containers following 12 Factor App?

I am building an API with Docker and trying to follow the 12 Factor App methodology, which says containers must be disposable.

Assuming the API will have high traffic, multiple Docker containers will be running the same app and connecting to the same database.

Certain fields in the database are encrypted and stored with a reference to the file containing the passphrase. This is done so the passphrase can be rotated while old data can still be decrypted.

With a Docker container and following 12 Factor App, how should I provide the key files to each of the containers?

Am I correct in assuming I would need a separate server to handle creating new key files and distributing them over the network?

Are there secure software packages, protocols, or services that already do this, or would I need a custom solution?
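
To make the question concrete, the kind of setup I am imagining would mount the passphrase files into containers at runtime rather than baking them into the image, for example with Docker (Swarm) secrets. This is only a sketch with made-up names, not something I have running:

    # Hypothetical stack file: passphrase files are created out-of-band with
    # `docker secret create` and mounted read-only into every replica at
    # /run/secrets/<name>. Service and secret names are placeholders.
    version: "3.7"
    services:
      api:
        image: example/my-api
        deploy:
          replicas: 4
        secrets:
          - db_passphrase_v1
          - db_passphrase_v2   # older keys stay available so old rows still decrypt
    secrets:
      db_passphrase_v1:
        external: true         # created beforehand with `docker secret create`
      db_passphrase_v2:
        external: true

Even then, something still has to generate and rotate those files, which is what my question about a separate key server is getting at.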

Start a proxy server in a Docker container for HTTP requests from the host

I have a Docker container connected to a VPN, but sometimes I need to open a URL in a browser for debugging.

I cannot run the VPN on my host machine for security reasons. Specifically, I want to open the URL on my host machine and intercept the request with Burp Suite. I have already tried some Python proxy servers from GitHub to start a proxy in my Docker container and connect my host to it, without success.
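
What I was trying to end up with is roughly the following (Compose-style sketch only; the image names are placeholders and this is not a working config):

    # Hypothetical sketch: "vpn" is the existing VPN client container and
    # "proxy" joins its network namespace, so anything sent to the proxy
    # leaves through the VPN. Ports have to be published on the service that
    # owns the network namespace, hence they sit on "vpn".
    services:
      vpn:
        image: example/vpn-client       # placeholder for the real VPN image
        cap_add:
          - NET_ADMIN
        ports:
          - "127.0.0.1:8888:8888"       # proxy port, reachable only from the host
      proxy:
        image: example/tinyproxy        # any small HTTP proxy image (assumption)
        network_mode: "service:vpn"     # share the VPN container's network stack
        depends_on:
          - vpn

The idea being that the host browser, or Burp as an upstream proxy, points at 127.0.0.1:8888.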

Has anyone done something similar? Any ideas?

PS: Sorry for my English. 🙂

I’m seeing strange names in my list of Docker containers; is someone at Docker having fun, or is this from hackers?

I’m trying to run a Docker container and it fails for various reasons. When I check my list of containers (docker ps -a), I see these names:

    pedantic_gauss
    recursing_feynman
    adoring_brattain
    suspicious_tesla
    gallant_gates
    competent_gates
    elated_davinci
    ecstatic_mahavira
    focused_mirzakhani

I use docker-compose and I’m sure we do not define such names in our setup files. Is that just something the Docker people thought would be fun to do?! I searched for some of those names and could not really find anything useful, although they appear on many sites, somewhat sporadically.
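
For comparison, the containers we define ourselves get predictable names from docker-compose; a simplified excerpt of the kind of setup file we use (service and image names are placeholders):

    # Hypothetical docker-compose.yml excerpt: containers started from here are
    # named after the project and service (e.g. myproject_web_1), or get an
    # explicit container_name, never random adjective_surname pairs.
    services:
      web:
        image: example/web
        container_name: myapp_web   # explicit, fixed name
      db:
        image: postgres:11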

OpenStack NovaLXD does not complete nested containers

We have (almost) deployed OpenStack NovaLXD with conjure-up on a single machine. In this setup, conjure-up uses Juju and LXD and creates nested LXC containers. All of them come up with IP addresses, but the nested containers all fail to complete their setup.

I can attach to the nested containers, and from there I can ping the Juju IP and reach the internet.

How do I troubleshoot further?

How do I fix “failed to start journal service” on Ubuntu 18 Server running docker containers?

I have an Ubuntu 18 Server. I created a small number of persistent Docker containers, but ever since I created them the server has started locking up with the error “failed to start journal service”.

I’m not a Linux expert so I’m not sure where to start in debugging and fixing this.

Rebooting works for a while.

Granularity of microservices and containers

Let’s imagine I have a single web app containing features which are topically related, but have highly differing implementation requirements.

Let’s imagine that this web app is about fruit, and contains the following features:

  • A fruit encyclopedia giving details about various kinds of fruit, including full 3D models that can be inspected.
  • A game where you can throw fruit at a wall and watch it smush, again using 3D models but also full physics.
  • A marketplace for purchasing different kinds of fruit.

This seems like a prime opportunity for a microservices-based architecture. As microservices are meant to be separated into bounded contexts each of which implements a cohesive set of functionality, it makes sense to have one microservice for each of the three above features, providing its backend. A fourth microservice is then added to provide the actual web app UI frontend, to be filled in with data from the other three services. Finally, we want some sort of API gateway / load balancer which handles all client requests and redirects them to the appropriate service.

This will look something like the following: [diagram: four services plus an API gateway]
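
To illustrate the gateway layer I have in mind, here is a rough Kubernetes-style sketch; every path and service name is made up for illustration:

    # Hypothetical Ingress fanning client requests out to the four services
    # described above. Paths, names, and ports are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: fruit-gateway
    spec:
      rules:
        - http:
            paths:
              - path: /api/encyclopedia
                pathType: Prefix
                backend:
                  service:
                    name: encyclopedia
                    port:
                      number: 80
              - path: /api/game
                pathType: Prefix
                backend:
                  service:
                    name: fruit-game
                    port:
                      number: 80
              - path: /api/market
                pathType: Prefix
                backend:
                  service:
                    name: marketplace
                    port:
                      number: 80
              - path: /              # everything else goes to the UI frontend
                pathType: Prefix
                backend:
                  service:
                    name: frontend
                    port:
                      number: 80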

However, these services aren’t as easy to separate as it first appears. I see two main issues:

Issue 1: code reuse

Both the encyclopedia and the game require 3D models of fruit, although the encyclopedia wants to add wider information, and the game wants to add physics simulation. According to traditional DRY, one would factor out the duplicated functionality as a shared library. However, in microservices this can lead to tight coupling as changing the shared library affects both services.

I can think of three solutions:

  1. Ignore DRY and just implement the same functionality in both. This keeps them nicely separated, but causes duplication of work if e.g. a bugfix in the common functionality needs to be applied to both.
  2. Embrace DRY and use a shared library. If a change is needed, upgrade the version used by each of the services separately as necessary. Accept that the services may end up running different versions of the same library, or you’ll often be making changes to both together.
  3. Embrace microservices and create yet another one to serve 3D models of fruit. If the implementation is hidden behind a generic API, then implementation changes and bugfixes shouldn’t affect either service using it, as the API contract is still fulfilled. However, depending on the technologies in use, generalising the 3D models in this way may not be feasible or performant, leading to tight coupling and effectively devolving into a slower, less flexible version of option (2). (A rough sketch of this option appears below.)

Under what situations would each of these methods be appropriate? Is there another method I have not thought of?
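
To make option (3) concrete, I imagine it would look something like this (Compose-style sketch; every image name and the MODELS_URL variable are made up):

    # Hypothetical sketch of option (3): a dedicated fruit-models service that
    # both the encyclopedia and the game call over HTTP, instead of each
    # linking a shared library. All names are placeholders.
    services:
      fruit-models:
        image: example/fruit-models-api
      encyclopedia:
        image: example/fruit-encyclopedia
        environment:
          MODELS_URL: http://fruit-models:8080   # assumed internal endpoint
      fruit-game:
        image: example/fruit-game
        environment:
          MODELS_URL: http://fruit-models:8080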

Issue 2: containers

Containers are an obvious and powerful tool for implementing microservices. However, they are not synonymous with microservices, and as far as I’ve been able to determine, the relationship between the two is hazy at best. From my research, best practices state both that one container should implement one microservice and that one container should only house a single process or responsibility.

A single microservice likely still contains several components; for example, the encyclopedia and marketplace likely both want some sort of database as well as their business logic.

If the logic and the database are placed in separate containers, then there is no longer a 1:1 mapping between containers and microservices. Proliferation of containers also leads to lots of inter-container communication, which slows things down. Containers cannot necessarily be guaranteed to be co-located, so requests between them must be encrypted in case they pass over the internet (I think; correct me if I’m wrong). The formation of requests, translation between different APIs, encryption, and the transmission time itself all add overhead.

If the logic and database are placed in the same container, then there is no longer a 1:1 mapping between containers and processes. This makes it harder to scale them independently, for example when the logic is very simple but requires enormous data storage, or vice versa. One must also build and deploy them together.

How should microservices be divided into containers under this scenario? Are there genuinely good alternatives to containers?
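
For reference, the “separate containers” variant from above is the one I picture roughly like this (sketch with placeholder names; the logic and the database scale independently):

    # Hypothetical sketch: one microservice (the marketplace) deployed as two
    # containers, application logic plus its database. Names are placeholders.
    services:
      marketplace-api:
        image: example/marketplace-api
        environment:
          DB_HOST: marketplace-db    # internal hostname on the compose network
        deploy:
          replicas: 3                # the stateless logic scales on its own
      marketplace-db:
        image: postgres:11
        volumes:
          - marketplace-data:/var/lib/postgresql/data
    volumes:
      marketplace-data: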

Magento 2: Any concerns with sharing the generated folder across containers?

We are running Magento on Kubernetes, so whenever we get more traffic, new pods/containers scale up and down, multiple times a day.

We are seeing some weird things happening, and it seems like the generated folder should be shared across containers. We build the image used to bring the containers up, but it only contains the generated/metadata folder, since we run di:compile as part of the Docker image build.

We run the setup:upgrade on a separate container that we call post-deploy.

We are thinking about sharing the generated folder on EFS; is there any concern with this? Thanks.
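
Concretely, what we have in mind is mounting generated/ as a shared ReadWriteMany volume, roughly like the sketch below; the storage class, the names, and the Magento path are assumptions about our setup:

    # Hypothetical sketch: an EFS-backed PersistentVolumeClaim that every
    # Magento pod would mount at generated/. Names are placeholders.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: magento-generated
    spec:
      accessModes:
        - ReadWriteMany            # EFS allows many pods to mount read-write
      storageClassName: efs-sc     # assumed EFS CSI storage class
      resources:
        requests:
          storage: 5Gi
    ---
    # In the Magento Deployment's pod spec (excerpt):
    #   volumes:
    #     - name: generated
    #       persistentVolumeClaim:
    #         claimName: magento-generated
    #   containers:
    #     - name: magento
    #       volumeMounts:
    #         - name: generated
    #           mountPath: /var/www/html/generated   # assumed web root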