Ubuntu 16.04 stuck after decrypting drive, probably X not starting because of NVIDIA drivers and dist-upgrade (systemd-logind: failed to get session)

For a while now, I’ve had NVIDIA 418.56, CUDA 10.1, and a 4.4.0-148-generic kernel.

I might have caused this when I ran dist-upgrade (or something similar) recently; now the boot gets stuck right after the drive is decrypted. The decryption itself is not the issue: in recovery mode I can log into the root shell and work normally immediately after entering the decryption credentials.

Running startx from another tty did not work, so I thought I needed to reinstall the driver. I upgraded it to 418.67 (confirmed by nvidia-smi) and rebooted, but the GUI still would not start.

The Xorg logs show the following error:

(EE) systemd-logind: failed to get session: PID 1214 does not belong to any known session 

Where do I go from here? I’ve searched the topic, but the posts were mostly from a few years back and involved Arch Linux and Bumblebee.
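Not an answer, but for context, a sketch of the usual recovery sequence from a text console after a dist-upgrade breaks the NVIDIA driver stack (package names are assumptions for Ubuntu 16.04; adjust to your driver version):

```shell
# Hypothetical recovery from a TTY (Ctrl+Alt+F1) after a broken dist-upgrade:
#   sudo apt-get update && sudo apt-get -f install   # finish any half-applied upgrade
#   sudo apt-get purge 'nvidia-*'                    # remove the mismatched driver
#   sudo ubuntu-drivers autoinstall                  # reinstall the recommended driver
#   sudo reboot
```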

scp Pseudo-terminal will not be allocated because stdin is not a terminal

I’m trying to use scp to move files between my local computer and my university’s remote servers.

The flow is: you enter the username, then you are asked for an OTP password, and if it’s correct, you are asked for your own user password on the remote server.

For example, executing SSH:

$ ssh user@gw.cs.huji.ac.il
(OTP) Password:
...
(IDng) Password:
###################################################################
You are using river-01 running debian64-5779 Linux
Please report problems to <system@cs>.
###################################################################
Last login: Thu May 23 20:59:31 2019 from
The only time a dog gets complimented is when he doesn't do anything.
        -- C. Schulz
<1|0> user@river-01:~%

Note that the option to create an SSH key is disabled, so we have to follow this specific procedure.

Now I want to use scp to transfer ~/foo.txt on the remote server to ./foo.txt locally. I issue the command

scp user%river@gw.cs.huji.ac.il:~/foo.txt ./foo.txt 

But then I get a TTY-related error. Here is the output:

$ scp user%river@gw.cs.huji.ac.il:~/foo.txt ./foo.txt
(OTP) Password: 454583
Pseudo-terminal will not be allocated because stdin is not a terminal.

In other words, instead of asking for the second password, it shows the pseudo-terminal error.

I tried to set -o RequestTTY=force, but it didn’t work. Is there any other way to handle this?
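For background, the message can be reproduced locally: ssh only allocates a pseudo-terminal when its stdin is one, and interactive password prompts need that terminal. A minimal sketch of the condition, plus one possible direction (the ProxyJump line is an assumption: it requires OpenSSH 7.3+ and that the gateway can reach river-01 directly):

```shell
# scp's stdin is the file stream, not a terminal, so ssh refuses to allocate
# the pseudo-terminal that the interactive OTP/password prompts require.
tty < /dev/null || echo "no terminal on stdin"
# Hypothetical workaround: jump through the gateway so the prompts happen on
# a real terminal before the copy starts (host names from the question):
#   scp -o ProxyJump=user@gw.cs.huji.ac.il user@river-01:~/foo.txt ./foo.txt
```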

Thanks in advance!

MySql Error: Can’t update table in stored function/trigger because it is already used by statement which invoked this stored function/trigger

I want to create a trigger that deletes the record from a table called stock at the moment the estatus field in the cotizacion table changes to 1. I have the following code:


DELIMITER $$
CREATE TRIGGER elimina_maquina BEFORE INSERT ON cotizacion
FOR EACH ROW
  DELETE a1, a2 FROM cotizacion AS a1
  INNER JOIN stock AS a2
  WHERE a1.estatus = 1 AND a1.id_serie = a2.id_serie;
$$
DELIMITER ;

but the error it shows me is the one in the title:

MySql Error: Can’t update table in stored function/trigger because it is already used by statement which invoked this stored function/trigger

Here is a screenshot of the relationship between the tables.
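For reference, the restriction behind that error is that a trigger on cotizacion may not modify cotizacion itself, while deleting from stock alone is allowed. A sketch of a trigger that only touches stock (using AFTER UPDATE, on the assumption that estatus changes via an UPDATE rather than an INSERT):

```sql
DELIMITER $$
CREATE TRIGGER elimina_maquina AFTER UPDATE ON cotizacion
FOR EACH ROW
BEGIN
  -- delete only from stock; the invoking table (cotizacion) must not be modified
  IF NEW.estatus = 1 THEN
    DELETE FROM stock WHERE stock.id_serie = NEW.id_serie;
  END IF;
END$$
DELIMITER ;
```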

MacBook Pro ’18 – USB HDD can’t be opened because the original item can’t be found

If you have a similar problem like described here: USB HDD can’t be opened because the original item can’t be found

and if you can actually list the mounted drive from the terminal like so:

MyMac:~ root# df
Filesystem    512-blocks       Used  Available Capacity   iused               ifree %iused  Mounted on
/dev/disk1s1  1953595632  437825216 1486309400    23%   2609168 9223372036852166639    0%   /
devfs                413        413          0   100%       715                   0  100%   /dev
/dev/disk1s4  1953595632   27947072 1486309400     2%        13 9223372036854775794    0%   /private/var/vm
map -hosts             0          0          0   100%         0                   0  100%   /net
map auto_home          0          0          0   100%         0                   0  100%   /home
/dev/disk5s1  3907024000 1256704032 2650319968    33%    116175          1325160801    0%   /Volumes/SAMSUNG

Then try to list the contents of the mounted volume, like so:

MyMac:~ root# ls -lt /Volumes/SAMSUNG
total 28536
drwxr-xr-x@ 1 root  wheel     4096 31 Mar 20:12 $RECYCLE.BIN
drwxr-xr-x@ 1 root  wheel        0 14 Jun  2017 .fseventsd
drwxr-xr-x@ 1 root  wheel     4096 11 Jun  2017 Rocksmith 2014
drwxr-xr-x@ 1 root  wheel        0 21 Oct  2016 .Trashes
drwxr-xr-x  1 root  wheel        0 25 Jan  2016 .Trash-37044
drwxr-xr-x@ 1 root  wheel     4096 23 Jan  2016 System Volume Information
drwxr-xr-x  1 root  wheel        0 16 Jan  2016 .Trash-1000
-rwxr-xr-x  1 root  wheel  6160384 14 Jan  2016 test_write2.dvr
drwxr-xr-x  1 root  wheel        0 14 Jan  2016 ALIDVRS2
-rwxr-xr-x  1 root  wheel  6160384 14 Jan  2016 test_write1.dvr
drwxr-xr-x@ 1 root  wheel        0 27 Jul  2015 Samsung Software
drwxr-xr-x  1 root  wheel     4096  6 Jan  2015 User Manual
drwxr-xr-x  1 root  wheel     4096  6 Jan  2015 Samsung Drive Manager Manuals
drwxr-xr-x  1 root  wheel        0  6 Jan  2015 Samsung Drive Manager
drwxr-xr-x  1 root  wheel        0  6 Jan  2015 Macintosh Driver
-rwxr-xr-x  2 root  wheel   166488 17 Dec  2013 Samsung_Drive_Manager.exe
-rwxr-xr-x  2 root  wheel  1407568 17 Dec  2013 Portable SecretZone.exe
-rwxr-xr-x  2 root  wheel   712704 17 Dec  2013 Secure Unlock_win.exe
-rwxr-xr-x  1 root  wheel      120  6 Dec  2013 Autorun.inf

and if none of the methods described in the linked question works for you, try to access the mounted volume directly in Finder, like so:

[screenshot: the volume opened directly in Finder]

Updated static files do not get served with CDN after deployment because of cache

I recently started using AWS CloudFront to serve my static files from a CDN. Since then, when I deploy updated static files such as JS or CSS, the CDN doesn’t serve the new versions right away. Because of this, my pages (I’m using Django) render incorrectly, since the HTML expects the updated static files.

I found this documentation. It says I need to add an identifier to the static file names: for example, renaming functions.js to functions_v1.js on every deploy, so that CloudFront serves the updated file instead of the cached one. I renamed the updated files manually and it worked, but that’s a hassle, and there must be a better way than changing every file name by hand.
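The renaming can be automated by embedding a content hash in the file name, so every deploy of changed content produces a new URL. A minimal sketch of the idea (paths are made up; in Django, the built-in ManifestStaticFilesStorage does exactly this during collectstatic):

```shell
# Simulate one static file, then derive a cache-busting name from its content.
mkdir -p static/js
echo 'console.log("hi");' > static/js/functions.js

# First 8 hex chars of the MD5 of the file contents become the version tag.
hash=$(md5sum static/js/functions.js | cut -c1-8)

# functions.<hash>.js only changes when the file contents change,
# so CloudFront caches the old URL forever and fetches the new one on deploy.
cp static/js/functions.js "static/js/functions.${hash}.js"
ls static/js
```

The AWS CLI also offers `aws cloudfront create-invalidation` to purge cached paths after a deploy, but hashed file names avoid invalidation costs entirely.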

Can anyone point me in the right direction? I’m quite confused about this.

dkms failure because gcc version is newer than that used to compile kernel

I have a kernel module that was registered with dkms. When a recent upgrade bumped my kernel to 4.15.0-50 I started getting the below error from dkms. Apparently kernel 4.15.0-50 was compiled with gcc version 7.3.0, but part of the upgrade involved installing a new version of gcc (7.4.0), which is causing dkms to fail. gcc 7.3 is no longer available on my system. How do I install gcc 7.3 in addition to 7.4, or even downgrade 7.4 to 7.3?

DKMS make.log for nvidia-430.14 for kernel 4.15.0-50-generic (x86_64)
Tue May 14 17:08:12 CDT 2019
make[1]: Entering directory '/usr/src/linux-headers-4.15.0-50-generic'
Makefile:976: "Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel"
  SYMLINK /var/lib/dkms/nvidia/430.14/build/nvidia/nv-kernel.o
  SYMLINK /var/lib/dkms/nvidia/430.14/build/nvidia-modeset/nv-modeset-kernel.o

Compiler version check failed:

The major and minor number of the compiler used to compile the kernel:

gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

does not match the compiler used here:

cc (Ubuntu 7.4.0-1ubuntu1~18.04) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

It is recommended to set the CC environment variable to the compiler that was used to compile the kernel.

The compiler version check can be disabled by setting the IGNORE_CC_MISMATCH environment variable to "1". However, mixing compiler versions between the kernel and kernel modules can result in subtle bugs that are difficult to diagnose.

*** Failed CC version check. Bailing out! ***

/var/lib/dkms/nvidia/430.14/build/Kbuild:182: recipe for target 'cc_version_check' failed
make[2]: *** [cc_version_check] Error 1
make[2]: *** Waiting for unfinished jobs....
Makefile:1552: recipe for target '_module_/var/lib/dkms/nvidia/430.14/build' failed
make[1]: *** [_module_/var/lib/dkms/nvidia/430.14/build] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.15.0-50-generic'
Makefile:81: recipe for target 'modules' failed
make: *** [modules] Error 2
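The log itself suggests the two available directions. A command sketch of both (module name and kernel version taken from the log; whether a matching gcc binary exists on the system is exactly the open question):

```shell
# Option 1 (hypothetical): rebuild with the kernel's compiler, if one is installed
#   sudo env CC=/usr/bin/gcc-7.3 dkms build nvidia/430.14 -k 4.15.0-50-generic
# Option 2: bypass the version check; the log warns this risks subtle bugs
#   sudo env IGNORE_CC_MISMATCH=1 dkms install nvidia/430.14 -k 4.15.0-50-generic
```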

Commands don’t work because of the Address module and its dependencies

When I try to use a Drupal Console command, even a simple drupal list, I get an error.

Fatal error: Class ‘CommerceGuys\Addressing\AddressFormat\AddressFormatRepository’ not found in /path/to/my_project/web/modules/contrib/address/src/Repository/AddressFormatRepository.php on line 16

Of course, the module’s AddressFormatRepository extends the external one and has
use CommerceGuys\Addressing\AddressFormat\AddressFormatRepository as ExternalAddressFormatRepository;.

The namespace is good, classes exist, so I don’t understand why I get the error.

The composer.json file contains the following lines.

"drupal/address": "~1.0",
"drupal/console": "^1.0.2",
"drupal/core": "^8.6.0",

I tried updating the Address module and manually installing the dependencies with Composer, but nothing changed.
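One hedged thing to check is whether the external library actually made it into vendor/ and whether the Composer class map is current. A command sketch (the version constraint is an assumption; commerceguys/addressing is the upstream package drupal/address depends on):

```shell
# Hypothetical checks/repairs from the project root:
#   ls vendor/commerceguys/addressing/src/AddressFormat/   # is the library present?
#   composer require commerceguys/addressing               # pull it in explicitly
#   composer dump-autoload                                 # rebuild the class map
```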

Do you have any idea?

Tons of mail windows opened because of no preferred outgoing server

I don’t use Mail normally. So today when I opened the app, this caught me by surprise: [screenshot: many message windows, each asking to select an outgoing server]

I selected one of the servers and then closed the mail window, but there are just so many to go through.

My questions:

1) The emails seem to have been created from some of my calendar events. Why did Mail try to send them?

2) How can I efficiently close all these windows in one go, instead of selecting an outgoing server and then closing each window one at a time?

I have tried quitting Mail from its Dock context menu, but that just forces one of the message windows to the foreground.
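For question 2, one possibility (untested assumption: the pending dialogs don’t block the close) is to script Mail from the terminal instead of clicking through each window:

```shell
# Hypothetical: close every open Mail window in one go via AppleScript
#   osascript -e 'tell application "Mail" to close every window'
```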

JAX-WS – Error during call because a namespace has no prefix – How to add it?

I am facing an issue when I try to call a web service using JAX-WS. All my Java classes are generated with the CXF wsdl2java Maven plugin.

Below are my Maven wsdlOptions:

<wsdlOption>
  <wsdl>https://mywsdl</wsdl>
  <wsdlLocation>https://mywsdl</wsdlLocation>
  <extendedSoapHeaders>true</extendedSoapHeaders>
  <extraargs>
    <extraarg>-client</extraarg>
    <extraarg>-autoNameResolution</extraarg>
    <extraarg>-p</extraarg>
    <extraarg>http://myws=com.ws.myws</extraarg>
    <extraarg>-p</extraarg>
    <extraarg>http://namespace1=com.xsd.namespace1</extraarg>
    <extraarg>-p</extraarg>
    <extraarg>http://namespace2=com.xsd.namespace2</extraarg>
    <extraarg>-p</extraarg>
    <extraarg>http://namespace3=com.xsd.namespace3</extraarg>
  </extraargs>
</wsdlOption>

Here is the generated SOAP call:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <ns2:Options
        xmlns="http://namespace1"
        xmlns:ns2="http://namespace2"
        xmlns:ns3="http://namespace3" />
  </soap:Header>
  <soap:Body>
    <ns2:GetRequest
        xmlns="http://namespace1"
        xmlns:ns2="http://namespace2"
        xmlns:ns3="http://namespace3">
      <ns2:Key>
        <ns3:Code>AAA</ns3:Code>
        <ns3:CmpCode>BBB</ns3:CmpCode>
        <ns3:Level>CCC</ns3:Level>
      </ns2:Key>
    </ns2:GetRequest>
  </soap:Body>
</soap:Envelope>

Here is the call that works in SoapUI:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <ns2:Options
        xmlns:ns1="http://namespace1"
        xmlns:ns2="http://namespace2"
        xmlns:ns3="http://namespace3" />
  </soap:Header>
  <soap:Body>
    <ns2:GetRequest
        xmlns:ns1="http://namespace1"
        xmlns:ns2="http://namespace2"
        xmlns:ns3="http://namespace3">
      <ns2:Key>
        <ns3:Code>AAA</ns3:Code>
        <ns3:CmpCode>BBB</ns3:CmpCode>
        <ns3:Level>CCC</ns3:Level>
      </ns2:Key>
    </ns2:GetRequest>
  </soap:Body>
</soap:Envelope>

The only difference is that the declaration for namespace1 is a default namespace (xmlns with no prefix):

<ns2:Options xmlns="http://namespace1" [...] /> 

The error returned says that I am trying to call a method on “namespace1”, which I am not.

So I tried adding a SOAPHandler so I could parse the SOAP body and make some modifications. But when I inspect the variables accessible at runtime, I don’t even see this “namespace1”, so I can’t add or remove it. Maybe I should keep going down this path.

I have also tried changing the order in my wsdlOptions, but it is always “namespace1” that is broken.

I am looking for a global, elegant solution, because I have nearly 20 other web services to generate for the same provider, and I am pretty sure I will hit the same issue with them.

Any idea ?