Every time I start up my Lenovo IdeaPad, the GRUB menu launches by default. When I select ‘Ubuntu’, the computer displays a purple screen and never boots. I tried adding ‘nomodeset’ to the kernel line in the advanced boot options, and even then it never boots.
I was filming a video on my Xiaomi A1 phone for about 6 minutes. I did not notice any warning that storage was running out until I finished filming; only afterwards did I see the warning. When I tried to play the whole video back, it stopped showing what I filmed about 1 minute in and froze, although the playback timer kept running. Is there any way I can recover the whole video, or is the rest of it lost?
I recently upgraded to 19.04 (4.15.0-48-generic) from 18.04. Since then I can't upgrade, because apt cannot configure “libcupsfilters1”.
- tried to remove and/or reinstall the package
- apt-get install -f
- ran apt-get autoclean and apt-get autoremove
- tried to install the package with dpkg
Nothing has worked so far, and a Google search is not helping much, since it somehow doesn't find the package name.
Any help would be appreciated 🙂
For a while now, I’ve had NVIDIA 418.56, CUDA 10.1, and a 4.4.0-148-generic kernel.
I might have caused issues when I ran dist-upgrade (or similar) recently; now, after decrypting the drive, the boot gets stuck. This is not an issue with the decryption itself: I have also tried recovery mode and logging into the root shell, and the shell works fine right after it asks for the decryption credentials.
Running startx from another tty did not work, so I thought I needed to reinstall the driver. I upgraded it to 418.67 and rebooted (the upgrade is confirmed by nvidia-smi), but the GUI still would not start.
The Xorg logs are shown here; one error stands out:
(EE) systemd-logind: failed to get session: PID 1214 does not belong to any known session
Where do I go from here? I’ve searched about the topic, and the posts were mostly from a few years back, involving Arch Linux and Bumblebee.
I’m trying to perform an scp call to move files between a local computer and my university’s remote servers.
The flow is: you enter your username, then you are asked for an OTP password, and if that is correct you are asked for your own user password on the remote server.
The basic command I use is, for example, executing SSH:
$ ssh firstname.lastname@example.org
(OTP) Password: ...
(IDng) Password:
###################################################################
You are using river-01 running debian64-5779 Linux
Please report problems to <system@cs>.
###################################################################
Last login: Thu May 23 20:59:31 2019 from 188.8.131.52
The only time a dog gets complimented is when he doesn't do anything.
        -- C. Schulz
<1|0> user@river-01:~%
Note that the option to create an SSH key is disabled, so we have to follow this specific procedure.
Now I want to use scp to copy “~/foo.txt” from the remote server to “./foo.txt” locally. I issue the command
scp firstname.lastname@example.org:~/foo.txt ./foo.txt
But I then get a TTY-related error. Look at this output:
$ scp firstname.lastname@example.org:~/foo.txt ./foo.txt
(OTP) Password: 454583
Pseudo-terminal will not be allocated because stdin is not a terminal.
In other words, instead of asking for the second password, it shows the pseudo-terminal error.
I tried to set -o RequireTTY=force, but it didn’t work. Is there any other way to handle this?
Thanks in advance!
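For what it's worth, the valid ssh_config option is spelled RequestTTY, not RequireTTY. The kind of workaround I'm looking for might be OpenSSH connection multiplexing: authenticate once interactively, then let scp reuse the already-authenticated connection. A sketch of that idea (untested against this server; the ControlPath location is just an example):

```shell
# 1. Authenticate once, interactively, and keep a master connection open
#    in the background (-f) without running a remote command (-N).
ssh -f -N -o ControlMaster=yes -o ControlPath=~/.ssh/cm-%r@%h:%p \
    firstname.lastname@example.org

# 2. scp reuses the authenticated master connection, so no password
#    prompts are needed and no pseudo-terminal is required.
scp -o ControlPath=~/.ssh/cm-%r@%h:%p \
    firstname.lastname@example.org:~/foo.txt ./foo.txt

# 3. Tear the master connection down when finished.
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p -O exit firstname.lastname@example.org
```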
I want to create a trigger that deletes the record from a table called stock at the moment the estatus field in the cotizacion table changes to 1. I have the following code:
DELIMITER $$
CREATE TRIGGER elimina_maquina BEFORE INSERT ON cotizacion
FOR EACH ROW
  DELETE a1, a2 FROM cotizacion AS a1
  INNER JOIN stock AS a2
  WHERE a1.estatus = 1 AND a1.id_serie = a2.id_serie;
$$
DELIMITER ;
but the error it shows is the one in the title:
MySQL Error: Can’t update table in stored function/trigger because it is already used by statement which invoked this stored function/trigger
Here is a screenshot of the table relationship.
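For context, my understanding is that MySQL refuses to let a trigger modify the table the trigger fires on, so the DELETE must touch only stock. Something like this is what I am aiming at (assuming the trigger should fire when estatus is updated to 1, and that stock rows are matched by id_serie):

```sql
DELIMITER $$

CREATE TRIGGER elimina_maquina
AFTER UPDATE ON cotizacion
FOR EACH ROW
BEGIN
  -- Only act when this row's estatus has just changed to 1.
  IF NEW.estatus = 1 AND OLD.estatus <> 1 THEN
    -- Delete only from stock; cotizacion itself cannot be modified here.
    DELETE FROM stock WHERE stock.id_serie = NEW.id_serie;
  END IF;
END$$

DELIMITER ;
```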
If you have a problem similar to the one described here: USB HDD can’t be opened because the original item can’t be found
and if you can actually list the mounted drive from the terminal like so:
MyMac:~ root# df
Filesystem    512-blocks       Used  Available Capacity   iused               ifree %iused  Mounted on
/dev/disk1s1  1953595632  437825216 1486309400      23% 2609168 9223372036852166639     0%  /
devfs                413        413          0     100%     715                   0   100%  /dev
/dev/disk1s4  1953595632   27947072 1486309400       2%      13 9223372036854775794     0%  /private/var/vm
map -hosts             0          0          0     100%       0                   0   100%  /net
map auto_home          0          0          0     100%       0                   0   100%  /home
/dev/disk5s1  3907024000 1256704032 2650319968      33%  116175          1325160801     0%  /Volumes/SAMSUNG
Then try to list the contents of the mounted volume, like so:
MyMac:~ root# ls -lt /Volumes/SAMSUNG
total 28536
drwxr-xr-x@ 1 root wheel    4096 31 Mar 20:12 $RECYCLE.BIN
drwxr-xr-x@ 1 root wheel       0 14 Jun  2017 .fseventsd
drwxr-xr-x@ 1 root wheel    4096 11 Jun  2017 Rocksmith 2014
drwxr-xr-x@ 1 root wheel       0 21 Oct  2016 .Trashes
drwxr-xr-x  1 root wheel       0 25 Jan  2016 .Trash-37044
drwxr-xr-x@ 1 root wheel    4096 23 Jan  2016 System Volume Information
drwxr-xr-x  1 root wheel       0 16 Jan  2016 .Trash-1000
-rwxr-xr-x  1 root wheel 6160384 14 Jan  2016 test_write2.dvr
drwxr-xr-x  1 root wheel       0 14 Jan  2016 ALIDVRS2
-rwxr-xr-x  1 root wheel 6160384 14 Jan  2016 test_write1.dvr
drwxr-xr-x@ 1 root wheel       0 27 Jul  2015 Samsung Software
drwxr-xr-x  1 root wheel    4096  6 Jan  2015 User Manual
drwxr-xr-x  1 root wheel    4096  6 Jan  2015 Samsung Drive Manager Manuals
drwxr-xr-x  1 root wheel       0  6 Jan  2015 Samsung Drive Manager
drwxr-xr-x  1 root wheel       0  6 Jan  2015 Macintosh Driver
-rwxr-xr-x  2 root wheel  166488 17 Dec  2013 Samsung_Drive_Manager.exe
-rwxr-xr-x  2 root wheel 1407568 17 Dec  2013 Portable SecretZone.exe
-rwxr-xr-x  2 root wheel  712704 17 Dec  2013 Secure Unlock_win.exe
-rwxr-xr-x  1 root wheel     120  6 Dec  2013 Autorun.inf
and if none of the methods described in the linked issue work for you, try to access the mounted volume directly in Finder, like so
I recently started using AWS CloudFront to serve my static files over a CDN. Since then, when I deploy updated static files such as JS or CSS, the CDN doesn’t serve them right away. Because of this, my pages (I’m using Django) render incorrectly, since they were supposed to work with the updated static files.
I found this documentation. It says I need to add an identifier to the static file names. For example, I have to rename functions_v1.js every time I deploy, so that CloudFront serves the updated static files instead of the cached ones. I renamed the updated files manually, and that worked fine. However, it felt like a hassle, and there must be a better way than changing every updated file name by hand, one by one.
Can anyone point me in the right direction? I’m really confused about this.
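From more digging, the usual approach seems to be embedding a hash of the file's contents in its name, so the name changes automatically whenever the file does. A minimal sketch of the idea in plain Python (the file name and contents here are made up):

```python
import hashlib
from pathlib import Path

def hashed_name(path: Path) -> str:
    """Build a cache-busting name such as 'functions_v1.3b5d8a7c.js'
    by embedding the first 8 hex chars of the content's MD5 hash."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()[:8]
    return f"{path.stem}.{digest}{path.suffix}"

# Hypothetical file; a deploy step would copy each static file to its
# hashed name and reference that name from the templates.
example = Path("functions_v1.js")
example.write_text("console.log('hello');")
print(hashed_name(example))
```

As far as I can tell, Django ships this behaviour: setting STATICFILES_STORAGE to "django.contrib.staticfiles.storage.ManifestStaticFilesStorage" and running collectstatic produces hashed copies of every static file, so templates using the {% static %} tag pick up the new names with no manual renaming.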
OK, I did some things in the terminal and then the whole desktop except for the wallpaper disappeared. And because of my stupidity I ran startx as root. When that did not work, I shut the machine down. Now, when I start it again and enter my password, I get a black screen and am sent back to the login prompt. It is Lubuntu 18.04.
I have a kernel module that is registered with dkms. When a recent upgrade bumped my kernel to 4.15.0-50, I started getting the error below from dkms. Apparently kernel 4.15.0-50 was compiled with gcc 7.3.0, but part of the upgrade involved installing a new version of gcc (7.4.0), which is causing dkms to fail. gcc 7.3 is no longer available on my system. How do I install gcc 7.3 in addition to 7.4, or even downgrade 7.4 to 7.3?
DKMS make.log for nvidia-430.14 for kernel 4.15.0-50-generic (x86_64)
Tue May 14 17:08:12 CDT 2019
make: Entering directory '/usr/src/linux-headers-4.15.0-50-generic'
Makefile:976: "Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel"
  SYMLINK /var/lib/dkms/nvidia/430.14/build/nvidia/nv-kernel.o
  SYMLINK /var/lib/dkms/nvidia/430.14/build/nvidia-modeset/nv-modeset-kernel.o
Compiler version check failed:
The major and minor number of the compiler used to compile the kernel:
gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)
does not match the compiler used here:
cc (Ubuntu 7.4.0-1ubuntu1~18.04) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
It is recommended to set the CC environment variable to the compiler that
was used to compile the kernel.
The compiler version check can be disabled by setting the IGNORE_CC_MISMATCH
environment variable to "1". However, mixing compiler versions between the
kernel and kernel modules can result in subtle bugs that are difficult to
diagnose.
*** Failed CC version check. Bailing out! ***
/var/lib/dkms/nvidia/430.14/build/Kbuild:182: recipe for target 'cc_version_check' failed
make: *** [cc_version_check] Error 1
make: *** Waiting for unfinished jobs....
Makefile:1552: recipe for target '_module_/var/lib/dkms/nvidia/430.14/build' failed
make: *** [_module_/var/lib/dkms/nvidia/430.14/build] Error 2
make: Leaving directory '/usr/src/linux-headers-4.15.0-50-generic'
Makefile:81: recipe for target 'modules' failed
make: *** [modules] Error 2
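From the log itself, two routes seem plausible (untested; the module name and version are taken from the log, but the exact dkms invocations are my guesses):

```shell
# Route 1 (suggested by the log): skip the version check for this build.
# 7.3 -> 7.4 is only a minor-version bump, but the log warns that mixing
# compiler versions can cause subtle bugs.
sudo env IGNORE_CC_MISMATCH=1 dkms build nvidia/430.14 -k 4.15.0-50-generic
sudo env IGNORE_CC_MISMATCH=1 dkms install nvidia/430.14 -k 4.15.0-50-generic

# Route 2 (also suggested by the log): if an exact gcc 7.3 binary can be
# obtained (e.g. an older gcc-7 .deb; the path below is a placeholder),
# point CC at the compiler that was used to build the kernel.
sudo env CC=/path/to/gcc-7.3 dkms build nvidia/430.14 -k 4.15.0-50-generic
```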