pam_radius_auth : DEBUG : getservbyname(radius,udp) returned -1217119556 on DEBIAN 9.6

I have two Debian servers connected. I set up FreeRADIUS on one Debian server (10.10.10.20), and I’d like the other Debian server (10.10.10.10) to allow local logins using the users listed on the RADIUS server. I use libpam-radius-auth, but when I try to log in locally as the RADIUS user, it keeps failing and the log says “pam_radius_auth : DEBUG : getservbyname(radius,udp) returned -1217119556”.

Can anyone tell me how to fix this problem?

OS : Debian 9.6

Here are my configs :

  1. RADIUS SERVER

    /etc/freeradius/3.0/users (at the last line I add):

        users2 Cleartext-Password := "user"

    /etc/freeradius/3.0/clients.conf (at the last line):

        client 10.10.10.10 {
            ipaddr  = 10.10.10.10
            nastype = other
            secret  = admin123
        }

  2. The other Debian server, which uses libpam-radius-auth

      /etc/pam_radius_auth.conf (I add):

      server[:port] SharedSecret Timeout(s)

      10.10.10.20 admin123 7

    /etc/pam.d/common-auth (I add on the last line):

    auth sufficient pam_radius_auth.so

    I also created a home directory and login shell for user2 with the command:

    adduser --disabled-password user2
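For reference, as far as I can tell pam_radius_auth falls back to getservbyname("radius", "udp") when no port is given after the server address, and that lookup reads /etc/services through NSS, so the error suggests the lookup came back empty. A minimal check on the client might look like the sketch below; the 1812/1813 ports are the conventional RADIUS ports, not something taken from this setup:

    # Does /etc/services define the "radius" service that getservbyname() asks for?
    grep -E '^radius' /etc/services

    # Standard entries would look like:
    #   radius          1812/udp
    #   radius-acct     1813/udp

    # Alternatively, giving the port explicitly in /etc/pam_radius_auth.conf
    # avoids the service-name lookup altogether:
    #   10.10.10.20:1812    admin123    7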

THANK YOU.

Debian stretch + BIND 9

I’m trying to set up a secondary authoritative name server, no recursion. I get the following when running named-checkconf.

  /etc/default# named-checkconf /etc/bind/named.conf 

/etc/bind/named.conf.default-zones:2: unknown option 'zone'
/etc/bind/named.conf.default-zones:10: unknown option 'zone'
/etc/bind/named.conf.default-zones:15: unknown option 'zone'
/etc/bind/named.conf.default-zones:20: unknown option 'zone'
/etc/bind/named.conf.default-zones:25: unknown option 'zone'

It is the default named.conf.default-zones file from the installation. I wonder if I even need that include at all, since there isn’t any recursion and this is just an authoritative name server.
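For reference, and hedged since the rest of the configuration isn’t shown here: an “unknown option 'zone'” error reported at the first zone statements of an included file is usually a follow-on error from an unbalanced brace or a missing semicolon in a file included earlier (often named.conf.options), not from named.conf.default-zones itself. One way to narrow it down is to run named-checkconf on each included file on its own:

    # Check each included file separately; the one that fails in isolation
    # (or the file included just before the one that only fails in
    # combination) is the likely culprit.
    named-checkconf /etc/bind/named.conf.options
    named-checkconf /etc/bind/named.conf.local
    named-checkconf /etc/bind/named.conf.default-zones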

Debian Stretch IPv6 prioritization

I have a Debian Stretch system with both IPv4 and IPv6 addresses and default gateways. IPv4 and IPv6 addresses on the internet are reachable. When I start a ping to a domain which has both an A and an AAAA DNS record, the system pings the IPv4 address from the A record. In the packet capture of the DNS request I can see that both A and AAAA are requested and answered. When I remove the IPv4 address from the system, obviously everything works as expected…

How does the prioritization work? I thought IPv6 would be preferred. If not, is there an option to change it?

A Google search has not really helped me out, because everyone else is asking how to disable IPv6 because they don’t want to use it…
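For reference, on glibc systems the order in which getaddrinfo() returns A and AAAA results follows the RFC 6724 policy table, which can be tuned in /etc/gai.conf; with the defaults, IPv6 destinations are preferred whenever a usable IPv6 source address exists. A small sketch of checking and flipping that preference (note that individual tools may still pick the address family themselves, e.g. via -4/-6 switches, independently of this policy):

    # Show the address ordering that getaddrinfo() hands to applications
    getent ahosts example.org

    # /etc/gai.conf: the file's own comments document that giving
    # ::ffff:0:0/96 (IPv4-mapped addresses) precedence 100 makes IPv4
    # preferred; the built-in default table prefers IPv6 instead.
    #   precedence ::ffff:0:0/96  100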

libvirt-lxc container on Debian buster with user namespacing not always startable

I have a bunch of libvirt-lxc containers whose configuration I migrated from Debian jessie to a fresh Debian buster host. I re-created the rootfs for each container using lxc-create -t debian -- --release buster.

The container configuration looks like this:

<domain type='lxc'>
  <name>some-container</name>
  <uuid>1dbc80cf-e287-43cb-97ad-d4bdb662ca43</uuid>
  <title>Some Container</title>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memtune>
    <swap_hard_limit unit='KiB'>2306867</swap_hard_limit>
  </memtune>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
  </os>
  <idmap>
    <uid start='0' target='200000' count='65535'/>
    <gid start='0' target='200000' count='65535'/>
  </idmap>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib/libvirt/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/var/lib/lxc/some-container/rootfs/'/>
      <target dir='/'/>
    </filesystem>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/var/www/some-container/static/'/>
      <target dir='/var/www/some-container/static/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='52:54:00:a1:98:03'/>
      <source bridge='guests0'/>
      <ip address='192.0.2.3' family='ipv4' prefix='24'/>
      <ip address='2001:db8::3' family='ipv6' prefix='112'/>
      <route family='ipv4' address='0.0.0.0' prefix='0' gateway='192.0.2.1'/>
      <route family='ipv6' address='2000::' prefix='3' gateway='fe80::1'/>
      <target dev='vcontainer0'/>
      <guest dev='eth0'/>
    </interface>
    <console type='pty' tty='/dev/pts/21'>
      <source path='/dev/pts/21'/>
      <target type='lxc' port='0'/>
      <alias name='console0'/>
    </console>
    <hostdev mode='capabilities' type='misc'>
      <source>
        <char>/dev/net/tun</char>
      </source>
    </hostdev>
  </devices>
</domain>

(IP addresses have been changed to use the documentation/example IPv4/IPv6 prefixes.) The mountpoints exist and are prepared. I have about 15 containers similar to this. The following things happen:

  • When the host is freshly booted, I can either:

    • start a container with user namespacing, and then only containers without user namespacing
    • start a container without user namespacing, and then no containers with user namespacing

When I run virsh -c lxc:/// start some-container after any other container is already started, libvirt claims to have started the container:

# virsh -c lxc:/// start some-container
Domain some-container started

It also shows as running in the virsh -c lxc:/// list output, but there is no process running under the root UID of the container. Running systemctl restart libvirtd makes libvirt recognize that the container is actually dead and mark it as shut off in virsh -c lxc:/// list again.
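A sketch of how the “running but dead” state can be confirmed, assuming the 200000+ UID mapping from the <idmap> above:

    # libvirt's view of the domain ...
    virsh -c lxc:/// list --all

    # ... versus processes actually running under the container's mapped
    # UID range (root inside the container is UID 200000 on the host):
    ps -eo uid,pid,cmd | awk '$1 >= 200000 && $1 < 265535'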

When looking into the libvirt logs, I can’t find anything useful:

2019-05-09 15:38:38.264+0000: starting up
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LIBVIRT_DEBUG=4 LIBVIRT_LOG_OUTPUTS=4:stderr /usr/lib/libvirt/libvirt_lxc --name some-container --console 25 --security=apparmor --handshake 52 --veth vnet0
PATH=/bin:/sbin TERM=linux container=lxc-libvirt HOME=/ container_uuid=1dbc80cf-e287-43cb-97ad-d4bdb662ca43 LIBVIRT_LXC_UUID=1dbc80cf-e287-43cb-97ad-d4bdb662ca43 LIBVIRT_LXC_NAME=some-container /sbin/init

(NB: I tried with and without apparmor)

I became quite desperate and attached strace to libvirtd with strace -ff -o somedir/foo -p, and then started a container. After a lot of digging, I found out that libvirt starts /sbin/init inside the container, which then quickly exits with status code 255. This happens after an EACCES when doing something with cgroups:

openat(AT_FDCWD, "/sys/fs/cgroup/systemd/system.slice/libvirtd.service/init.scope/cgroup.procs", O_WRONLY|O_NOCTTY|O_CLOEXEC) = -1 EACCES (Permission denied)
writev(3, [{iov_base="[0;1;31m", iov_len=9}, {iov_base="Failed to create /system.slice/l"..., iov_len=91}, {iov_base="[0m", iov_len=4}, {iov_base="\n", iov_len=1}], 4) = 105
epoll_ctl(4, EPOLL_CTL_DEL, 5, NULL)    = 0
close(5)                                = 0
close(4)                                = 0
writev(3, [{iov_base="[0;1;31m", iov_len=9}, {iov_base="Failed to allocate manager objec"..., iov_len=52}, {iov_base="[0m", iov_len=4}, {iov_base="\n", iov_len=1}], 4) = 66
openat(AT_FDCWD, "/dev/console", O_WRONLY|O_NOCTTY|O_CLOEXEC) = 4
ioctl(4, TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(4, TIOCGWINSZ, {ws_row=0, ws_col=0, ws_xpixel=0, ws_ypixel=0}) = 0
writev(4, [{iov_base="[", iov_len=1}, {iov_base="[0;1;31m!!!!!![0m", iov_len=19}, {iov_base="] ", iov_len=2}, {iov_base="Failed to allocate manager objec"..., iov_len=34}, {iov_base="\n", iov_len=1}], 5) = 57
close(4)                                = 0
writev(3, [{iov_base="[0;1;31m", iov_len=9}, {iov_base="Exiting PID 1...", iov_len=16}, {iov_base="[0m", iov_len=4}, {iov_base="\n", iov_len=1}], 4) = 30
exit_group(255)                         = ?
+++ exited with 255 +++

Digging further, I figured that libvirt is not creating a Cgroup namespace for the containers, and apparently they all try to use the same cgroup path. With that, the behaviour makes sense: If the first container which is started is user-namespaced, it takes ownership of the cgroup subtree and other user-namespaced containers cannot use it. The non-user-namespaced containers can simply take over the cgroup tree because they run as UID 0.
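A sketch of how that hypothesis might be checked (the cgroup path comes from the strace output above; <controller-pid> is a placeholder, not something from the original setup):

    # Who owns the cgroup directory that init inside the container tried to
    # write to, after the first container has started?
    ls -ld /sys/fs/cgroup/systemd/system.slice/libvirtd.service \
           /sys/fs/cgroup/systemd/system.slice/libvirtd.service/init.scope

    # Which cgroup is each container's libvirt_lxc controller placed in?
    # Separate per-machine paths (e.g. under /machine.slice) would be
    # expected; everything sharing libvirtd.service matches the hypothesis.
    pgrep -a libvirt_lxc
    cat /proc/<controller-pid>/cgroup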

The question is now: why are the cgroups configured incorrectly? Is it a libvirt bug? Is it a misconfiguration on my system?

How can I install MySQL on Debian 10 (testing) without resorting to a downgrade or a VM?

sudo apt-get install mysql-server  

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or, if you are using the unstable distribution, that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

The following packages have unmet dependencies:
 mysql-server : Depends: mysql-community-server (= 8.0.16-2debian9) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

I have tried to install MySQL 5.7 / 8.0 using:

mysql-apt-config_0.8.8-1_all.deb
mysql-apt-config_0.8.12-1_all.deb
mysql-apt-config_0.8.13-1_all.deb

I even partially succeeded: the packages installed in reverse dependency order and the server started, but… it did not work (not active).
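For reference, the -2debian9 suffix in the unmet dependency suggests the mysql-apt-config repository is still pointed at a Debian 9 (stretch) channel while the system is running buster/testing. A sketch of how to check and re-select the distribution (the mysql.list path is where mysql-apt-config normally writes its source entry):

    # Which Debian codename is the MySQL APT repository configured for?
    # A "stretch" entry here would explain the ...-2debian9 dependency.
    cat /etc/apt/sources.list.d/mysql.list
    apt-cache policy mysql-server mysql-community-server

    # Reconfiguring the repository package lets you pick the distribution again:
    sudo dpkg-reconfigure mysql-apt-config
    sudo apt-get update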

Error in docker run hello-world (Error response from daemon: OCI runtime create failed:) Debian Stretch

OS: Debian 9 Stretch
Docker version: 18.09.6, build 481bc77

When I create a Docker container using docker run hello-world (with or without sudo), I get the error below.

  docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\"proc\\" to rootfs \\"/var/lib/docker/vfs/dir/ea78c4ea761c4911f88243b517b764f8980c6d47e01eb08834dbd35855118dd8\\" at \\"/proc\\" caused \\"permission denied\\"\"": unknown.
  ERRO[0000] error waiting for container: context canceled

Output of Docker Status

* docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: e
   Active: active (running) since Wed 2019-05-08 05:50:21 UTC; 1min 0s ago
     Docs: https://docs.docker.com
 Main PID: 1508 (dockerd)
   CGroup: /system.slice/docker.service
           `-1508 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/contain

May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.00458
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.00591
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.00690
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.00775
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.21921
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.40344
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.46928
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.46947
May 08 05:50:21 hostnamehere dockerd[1508]: time="2019-05-08T05:50:21.48421
May 08 05:50:21 hostnamehere systemd[1]: Started Docker Application Contain

hostnamehere is not the actual hostname used

Steps taken to resolve the issue:

I have added the sudo user to the docker group.

Ran sudo systemctl restart docker.service, sudo systemctl status docker.service and docker info, then retried the command docker run hello-world.

Uninstalled all Docker packages and folders and re-installed them.

From all the research I’ve done, most people have come to the conclusion that it is related to LXC/LXD, but I did not have that installed at first. I tried adding LXC later to fix it, but it didn’t resolve the issue. LXC has since been fully removed and the system rebooted.
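For what it’s worth, the /var/lib/docker/vfs/dir/... path in the error shows Docker is using the vfs storage driver rather than overlay2, and a “permission denied” while mounting /proc is a pattern often seen when Docker itself runs inside another container (e.g. an OpenVZ or LXC-based VPS). A couple of environment checks along these lines might narrow things down (assuming systemd-detect-virt and docker info are available):

    # Is this Debian system itself a VM or a container? Nested containers
    # commonly refuse the /proc mount with a permission error.
    systemd-detect-virt

    # Which storage driver, kernel and security options is Docker using?
    docker info | grep -iE 'storage driver|security options|kernel version'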

Can someone please point me into the right direction? Thank you.

About “jbd2/md2-8”: MySQL server is disabled on a Debian server, but IO and load are still high when heavy RAID5 disk activity is present [on hold]

The MySQL server is disabled on this Debian server, yet IO and load are still high when heavy RAID5 disk activity is present. It is a Hetzner server with software RAID5 (4x 6 TB HGST 7200 rpm HDDs), an i7 and 32 GB of RAM. Watching iotop, jbd2/md2-8 pops up every few seconds whenever there is heavy HDD write activity; at around 50-90 MB/s of writes it takes 99% of IO time every few seconds, and the load is always 4-6. What can I tweak, given that MySQL is disabled and I don’t use it? Is jbd2/md2-8 a journal used by some internal version of MySQL?
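For context, jbd2 is the kernel’s journalling thread for an ext3/ext4 filesystem (here the one on /dev/md2), so it is unrelated to MySQL; seeing it near the top of iotop during heavy writes mostly reflects journal commits. A small sketch of what inspecting and tuning it might look like, assuming the filesystem on md2 is ext4:

    # Which filesystem lives on /dev/md2 and how is it mounted?
    findmnt -S /dev/md2

    # The journal flush frequency can be relaxed with the commit= mount
    # option (seconds between commits; trades a little crash safety for
    # fewer journal writes). Example /etc/fstab entry, assuming ext4:
    #   /dev/md2  /  ext4  defaults,commit=60  0  1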

Debian 9 server no sshd in auth.log

On one of my servers, running Debian 9, there is no output from sshd in /var/log/auth.log. If I do ag sshd in /var/log, it just doesn’t appear. The only thing in auth.log is systemd-logind; in fact, it’s suspicious that almost all log messages are from systemd, with only a sporadic few from something else.

This is my /etc/rsyslog.conf (minus comments) (it should be default):

module(load="imuxsock") # provides support for local system logging
module(load="imklog")   # provides kernel logging support

$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022

$WorkDirectory /var/spool/rsyslog

$IncludeConfig /etc/rsyslog.d/*.conf

auth,authpriv.*                 /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
daemon.*                        -/var/log/daemon.log
kern.*                          -/var/log/kern.log
lpr.*                           -/var/log/lpr.log
mail.*                          -/var/log/mail.log
user.*                          -/var/log/user.log

mail.info                       -/var/log/mail.info
mail.warn                       -/var/log/mail.warn
mail.err                        /var/log/mail.err

*.=debug;\
        auth,authpriv.none;\
        news.none;mail.none     -/var/log/debug
*.=info;*.=notice;*.=warn;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none          -/var/log/messages

*.emerg                         :omusrmsg:*

There’s nothing in /etc/rsyslog.d. I also tried copying the conf from an Ubuntu 18.04 machine, to no avail.
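For reference, a quick way to see whether the gap is on the rsyslog side or on the sshd side might be (assuming both rsyslog and systemd-journald are running, as the status output further down suggests):

    # If rsyslog handles the auth facilities, this test line should land in
    # /var/log/auth.log:
    logger -p authpriv.notice "auth facility test"
    tail -n 2 /var/log/auth.log

    # Independently of rsyslog, the journal shows whatever sshd emits:
    journalctl -u ssh --since "10 min ago" | tail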

SSH is 7.4p1-10+deb9u6. /etc/ssh/sshd_config is:

# cat sshd_config |grep -v '^#'|sed -e '/^$/d'
Port 22
PermitRootLogin yes
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem       sftp    /usr/lib/openssh/sftp-server

Rsyslog is running:

# systemctl status rsyslog
● rsyslog.service - System Logging Service
   Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2019-05-05 15:06:20 CEST; 34s ago
     Docs: man:rsyslogd(8)
           http://www.rsyslog.com/doc/
 Main PID: 3551 (rsyslogd)
    Tasks: 4 (limit: 4915)
   CGroup: /system.slice/rsyslog.service
           └─3551 /usr/sbin/rsyslogd -n

May 05 15:06:20 brick systemd[1]: Starting System Logging Service...
May 05 15:06:20 brick systemd[1]: Started System Logging Service.

I vaguely remember that when this problem started, I did see a very occasional message from sshd in auth.log, but I can’t prove that right now.
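For completeness, sshd’s logging destination is controlled by the SyslogFacility and LogLevel options (which default to AUTH and INFO when not set, as in the config above); dumping the effective configuration of the running daemon would confirm that those defaults really apply:

    # Print sshd's resolved configuration (requires root):
    sshd -T | grep -Ei '^(syslogfacility|loglevel)'

    # Expected, given the sshd_config shown above:
    #   syslogfacility AUTH
    #   loglevel INFO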