Should a developer be able to create a docker artifact from a lerna monorepo in their development environment?

I’ve recently started using lerna to manage a monorepo, and in development it works fine.

Lerna creates symlinks between my various packages, so tools like tsc --watch or nodemon detect changes in the other packages without any extra setup.
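For reference, the symlinked layout after bootstrapping looks roughly like this (a sketch: the output is abbreviated and the exact paths depend on the lerna/npm versions in use):

$   npx lerna bootstrap
$   ls -l packages/backend/node_modules
# (abbreviated) the local packages show up as symlinks rather than registry installs:
common -> ../../common
utilities -> ../../utilities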

But I’ve run into a problem with creating docker images in this environment.

Let’s say we have a project with this structure:

root
  packages
    common → artifact is a private npm package, this depends on utilities, something-specific
    utilities → artifact is a public npm package
    something-specific → artifact is a public npm package
    frontend → artifact is a docker image, depends on common
    backend → artifact is a docker image, depends on common and utilities

In this scenario, development works fine: I run some kind of live-reload server, and the symlinks ensure the cross-package dependencies resolve.

Now let’s say I want to create a docker image from backend.

I’ll walk through some scenarios:

  1. I ADD package.json in my Dockerfile, and then run npm install.

    Doesn’t work, as the common and utilities packages are not published.

  2. I run my build command in backend, then ADD /build and /node_modules in the Dockerfile.

    Doesn’t work: my built backend contains require('common') and require('utilities') calls. These are in node_modules (symlinked), but Docker will just ignore these symlinked folders.

    Workaround: using cp --dereference to ‘unsymlink’ the node modules works (see this AskUbuntu question, and the sketch after this list).

  3. Step 1, but before I build my docker image, I publish the npm packages.

    This works OK, but it breaks down for anyone who checks out the code base and modifies common or utilities: they don’t have the privileges to publish the npm packages.

  4. I configure the build command of backend to not treat common or utilities as an external, and common to not treat something-specific as an external.

    I think the order would be: first build something-specific, then common, then utilities, and then backend.

    This way, when the build runs using this technique with webpack, the bundle will include all of the code from something-specific, common and utilities.

    But this is cumbersome to manage.
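To make the workaround from scenario 2 concrete, here is a minimal sketch of the ‘unsymlink’ step (the deploy/ staging directory is made up for illustration; the real paths depend on the build setup):

$   cd packages/backend
# --dereference follows the symlinks, so common and utilities become real directories in the copy
$   cp --recursive --dereference node_modules ../../deploy/node_modules
$   cp --recursive build ../../deploy/build
# the Dockerfile then ADDs the deploy/ directory instead of the symlinked originals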

It seems like quite a simple problem I’m trying to solve here: I want to take the code that is currently working on my machine and put it into a docker container.

Remember, the key thing we want to achieve here is for someone to be able to check out the code base, modify any of the packages, and then build a docker image, all from their development environment.

Is there an obvious lerna technique that I’m missing here, or otherwise a devops frame of reference I can use to think about solving this problem?

Cannot connect to docker registry on localhost

I’ve forwarded ports from the Nexus in k8s to localhost:

$   kubectl port-forward --namespace devops nexus-6cd979dbd-zdjff 5001
Forwarding from 127.0.0.1:5001 -> 5001
Forwarding from [::1]:5001 -> 5001
Handling connection for 5001
Handling connection for 5001
Handling connection for 5001

Forward works:

$   curl http://localhost:5001/v2/_catalog
{"repositories":["test/test"]}

But I cannot log in there:

$   docker login localhost:5001
Username: admin
Password:
Error response from daemon: Get http://localhost:5001/v2/: dial tcp [::1]:5001: connect: connection refused

What’s wrong?

Logging SFTP File Transfers in Docker container

I want to log SFTP file transfers. My sshd_config is:

Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

PermitRootLogin yes

AuthorizedKeysFile      .ssh/authorized_keys

PasswordAuthentication yes

UsePAM yes

AllowTcpForwarding no
X11Forwarding no
UseDNS no

Subsystem sftp /usr/libexec/openssh/sftp-server -l INFO -f AUTH

Host keys generated like this:

$   ssh-keygen -N "" -t rsa -f ssh_host_rsa_key
$   ssh-keygen -N "" -t ed25519 -f ssh_host_ed25519_key

The SFTP server runs in a Docker container created by this Dockerfile:

FROM centos:7

RUN yum install -y openssh-server
RUN echo 'root:123456' | chpasswd

COPY ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key
COPY ssh_host_ed25519_key /etc/ssh/ssh_host_ed25519_key
COPY sshd_config /etc/ssh/sshd_config

RUN chmod 400 /etc/ssh/*

EXPOSE 22

CMD ["/usr/sbin/sshd", "-D", "-e"]

And this is the only output I get after a successful login and file upload:

$   docker run --rm --name sftp-server -p "2222:22" test/sftp
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
Accepted keyboard-interactive/pam for root from 172.17.0.1 port 58556 ssh2

However, I would expect output like that described in the OpenSSH wiki:

Oct 22 11:59:50 server internal-sftp[4929]: open "/home/fred/foo" flags WRITE,CREATE,TRUNCATE mode 0664
Oct 22 11:59:50 server internal-sftp[4929]: close "/home/fred/foo" bytes read 0 written 928

What might be the problem with my setup?

“no such file or directory” when mounting, built using the golang:alpine Docker image

I created a fork of LinuxServer.io’s docker-transmission image, adding support for Google Cloud Storage.

I used Ernest (chiaen)’s docker-gcsfuse project to build gcsfuse, extracting parts of his Dockerfile and adding them to my own. gcsfuse is built using the golang:alpine image.
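The part I extracted follows roughly this multi-stage pattern (a sketch from memory rather than the exact Dockerfile; the gcsfuse install command, module path and base images are illustrative and may differ from chiaen’s project and from my fork):

# build stage: compile gcsfuse with the Go toolchain on Alpine
FROM golang:alpine AS gcsfuse-builder
RUN apk add --no-cache git
RUN go install github.com/googlecloudplatform/gcsfuse@latest

# final stage (my image is based on the LinuxServer transmission base; plain alpine shown for brevity)
FROM alpine:3
RUN apk add --no-cache fuse ca-certificates
COPY --from=gcsfuse-builder /go/bin/gcsfuse /usr/local/bin/gcsfuse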

The image builds successfully (including gcsfuse; the Dockerfile just copies the gcsfuse binary to /usr/local/bin). However, gcsfuse refuses to mount, and the log outputs Mount: stat /donwloads: no such file or directory, even though the directory /download actually exists and the right permissions were already set (in /etc/cont-init.d/20-config). I even tried running it from the shell, but it still fails.

Is there a missing package or parameter in order to get gcsfuse working in my (Alpine) Docker image?

If you want to reproduce this, you can build your own local copy of the image by following the instructions in the README.md in my repo (you need to upload your JSON key to the VM); amitie10g/transmission is also available on Docker Hub.

Logs are available here.

Thanks in advance.

docker.exe files disabled in Docker Toolbox

Docker Toolbox had been working for me for a while. I had been installing other things, and so I restarted my computer a few times. Now, when I go to Docker, it tells me docker.exe: command not found and docker-machine: command not found. When I ls in my c/Program Files/Docker Toolbox, the .exe files are there but have asterisks appended to them: docker-compose.exe*, docker.exe*, docker-machine.exe*, etc. They are right there, so why can’t I run them?

Can docker log levels be made useful by actually filtering by level?

mag 21 16:58:17 computer dockerd[13008]: time="2019-05-21T16:58:17.345353593+02:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}]" module=grpc

This is a log line from dockerd at startup, configured to use the journald logging driver.

As you can see, it embeds the time as text in the message, which is redundant since journald records the time itself.

Also, the log level is written as text inside the message instead of being set as the actual journald priority (which would show up red in the terminal for errors).

This means that I can’t filter logs by log level.
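For illustration, this is the setup and the kind of filtering involved: the daemon.json snippet selects the journald driver mentioned above, and the journalctl call is the filter I would like to be able to use (a sketch; today it matches nothing useful because every entry arrives at the same journald priority, with the level only embedded as text):

/etc/docker/daemon.json:

{
  "log-driver": "journald"
}

$   journalctl -u docker.service -p warning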

Is there a way to configure it to log properly, setting the actual log level instead of just writing into the message what the level would have been if it had used log levels?

Why does docker bypass the ufw config in one case but not in another?

I am using ufw to set up a firewall on my host system. It seems that certain ufw rules can be bypassed in some cases when ufw is combined with docker.

I am aware that docker by default alters iptables directly, which leads to certain problems, especially with ufw, but I have encountered an issue that seems very strange to me.
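(For reference, the iptables manipulation mentioned above is controlled by a daemon option; a minimal sketch of disabling it is below, though I am not doing that here since it also disables the NAT rules containers rely on.)

/etc/docker/daemon.json:

{
  "iptables": false
}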

Here is a breakdown of what I did.

  1. I want to deny all incoming traffic:

sudo ufw default deny incoming

  2. I want to allow ssh:

sudo ufw allow ssh

  3. I want to allow everything that goes from my host system back to my host system on port 8181 (context: this shall later be used to build an ssh tunnel to my host and access port 8181 from anywhere, but that is not important for now):

sudo ufw allow from 127.0.0.1 to 127.0.0.1 port 8181

  4. I enable my firewall settings:

sudo ufw enable

If I now have a look at the firewall status via sudo ufw status, it states the following:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
127.0.0.1 8181             ALLOW       127.0.0.1
22/tcp (v6)                ALLOW       Anywhere (v6)

Looks good to me, but now comes the strange part. I have an API that runs inside a docker container and listens on port 8080 internally.

If I now run the docker container with the following command and map port 8080 to port 8181 on my host system

docker run -d --name ufw_test -p 8181:8080 mytest/web-app:latest 

it seems to bypass the rule I set earlier to only allow traffic from 127.0.0.1 to 127.0.0.1 on port 8181: I was able to access my API from anywhere. I tried it with different PCs on the same network, and my API was accessible via 192.168.178.20:8181 from another PC. I figured a way to fix this would be starting my container like so:

docker run -d --name ufw_test -p 127.0.0.1:8181:8080 mytest/web-app:latest 

This restricts access to my API the way I intended, but I wonder: why does the second command work the way I want, while the first does not?

Docker process opens TCP port, but connection is refused

How can I run a simple server listening on a port, inside a Docker container?

(In case it matters, this is a MacOS X 10.13.6 host.)

When I run a simple HTTP Server:

python3 -m http.server 8001 

the process starts correctly, and it listens correctly on that port (confirmed with telnet localhost 8001).

When I run a Docker container, it also runs the process correctly, but now the connection is refused.

web-api/Dockerfile:

FROM python:3.7
CMD python3 -m http.server 8001

docker-compose.yaml:

version: '3'
services:
  web-api:
    hostname: web-api
    build:
      context: ./web-api/
      dockerfile: Dockerfile
    expose:
      - "8001"
    ports:
      - "8001:8001"
    networks:
      - api_network
networks:
  api_network:
    driver: bridge

When I start the container(s), the HTTP server is running:

$   docker-compose up --detach --build
Building web-api
[…]

$   docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
1db4cd1daaa4        ipsum_web-api        "/bin/sh -c 'python3…"   8 minutes ago       Up 8 minutes        0.0.0.0:8001->8001/tcp   ipsum_web-api_1

$   docker-machine ssh lorem

docker@lorem:~$   ps -ef | grep http.server
root     12883 12829  0 04:40 ?        00:00:00 python3 -m http.server 8001

docker@lorem:~$   telnet localhost 8001
[connects successfully]
^D

docker@lorem:~$   exit

$   telnet localhost 8001
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
telnet: Unable to connect to remote host
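Since the daemon here runs inside a docker-machine VM (lorem in the transcript above), another check would be to probe the published port on the VM’s own IP rather than on the Mac’s localhost; a sketch of that check (output not included):

$   docker-machine ip lorem
$   telnet $(docker-machine ip lorem) 8001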

What is configured incorrectly here, such that the server works inside the Docker container, but I get Connection refused on port 8001 when connecting from outside the container on its exposed port?

intellij connect to docker daemon throws cannot connect: java.io.FileNotFoundException

I have a problem setting up the Docker plugin for IntelliJ (I’m a Docker newbie).

Settings > Build, Execution, Deployment > Docker.

When I try to connect to the daemon via the unix socket, I get cannot connect: java.io.FileNotFoundException.

I have already rebooted, added my user to the docker group and logged back in, but nothing changed.

Does anyone here know how to fix this or what might have gone wrong? What file is the exception referring to?
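For reference, the socket the plugin points at can be checked from a shell like this (a sketch; the path shown is the default unix socket, which may differ from what the plugin is actually configured to use):

$   ls -l /var/run/docker.sock
$   docker -H unix:///var/run/docker.sock info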

I already set up Settings > Build, Execution, Deployment > Docker > Tools and checked that all three of them are installed.

OS: Arch Linux
