Slow query in production: bad plan or stale index?

I’ve just fixed a production performance issue by dropping and recreating an index. I suspect that dropping the index also dropped the execution plans that used it, and that one of them happened to be bad.

Arguments in favor of a bad execution plan:

  • Before dropping the index, I looked up the last update date for the statistics on the given table, and they were up to date.
  • My DBA has put Ola Hallengren’s index and statistics maintenance solution in place.
  • The slow query was a select statement executed from sp_executesql with date parameters. Executing the same select statement without sp_executesql was fast, but it also didn’t use the same execution plan.

Arguments against a bad execution plan:

  • Before dropping the index, we went wild and ran the forbidden DBCC FREEPROCCACHE to clear any bad plans, but this didn’t fix or change the performance issue.

Notes:

  • The slow query happens to be using a table indexed by date. However, there are wide differences in the number of records per date. In other words, any given date holds anywhere from a few records to more than 100k, and it is fairly random.

  • The database is running under compatibility level 140 (SQL Server 2017)

Was the source of the problem a bad plan or a stale index?
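
The symptoms above (parameterized call slow, ad-hoc call fast with a different plan, heavily skewed row counts per date) are also consistent with parameter sniffing. A minimal sketch of how to test that hypothesis, using hypothetical table and column names since the original query isn’t shown:

    -- Hypothetical repro: dbo.Orders and OrderDate stand in for the real objects.
    -- Run the same parameterized statement but force a fresh compile for these
    -- exact values; if this is fast while the plain sp_executesql call is slow,
    -- a cached plan sniffed for an atypical date range is the likely culprit.
    EXEC sp_executesql
        N'SELECT *
          FROM dbo.Orders
          WHERE OrderDate BETWEEN @from AND @to
          OPTION (RECOMPILE);',
        N'@from date, @to date',
        @from = '2019-01-01',
        @to   = '2019-01-31';

Note that DBCC FREEPROCCACHE failing to help doesn’t rule sniffing out: the very next compile can sniff the same atypical parameter values and rebuild an equally bad plan.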

Slow Startup of Ubuntu

This is my systemd-analyze blame output:

         52.231s plymouth-quit-wait.service
         15.176s snapd.service
         15.051s dev-sda5.device
         12.565s networkd-dispatcher.service
         12.202s systemd-journal-flush.service
         11.897s gpu-manager.service
         10.950s ModemManager.service
         10.698s NetworkManager-wait-online.service
          9.294s udisks2.service
          8.115s dev-loop1.device
          6.849s accounts-daemon.service
          6.706s NetworkManager.service
          5.915s dev-loop11.device
          5.579s dev-loop18.device
          5.480s dev-loop2.device
          5.299s dev-loop12.device
          5.253s systemd-resolved.service
          5.156s dev-loop19.device
          5.029s dev-loop14.device
          5.020s dev-loop16.device
          4.924s dev-loop9.device
          4.575s thermald.service
          4.571s grub-common.service
          4.447s apport.service
          4.341s dev-loop7.device
          3.968s systemd-logind.service
          3.594s avahi-daemon.service
          3.531s bluetooth.service
          3.526s wpa_supplicant.service
          3.489s fwupd.service
          3.366s dev-loop8.device
          2.919s rsyslog.service
          2.760s dev-loop10.device
          2.731s dev-loop6.device
          2.656s dev-loop4.device
          2.644s dev-loop5.device
          2.390s systemd-fsck@dev-disk-by\x2duuid-90E0\x2d8818.service
          2.371s apparmor.service
          2.255s systemd-tmpfiles-setup.service
          2.108s polkit.service
          1.943s dev-loop3.device
          1.870s dev-loop13.device
          1.764s dev-loop0.device
          1.727s systemd-udevd.service
          1.641s dev-loop15.device
          1.405s dev-loop17.device
          1.391s systemd-sysctl.service
          1.298s gdm.service
          1.094s upower.service
           874ms snap-gnome\x2dcalculator-406.mount
           838ms snap-core18-1066.mount
           837ms snap-gtk\x2dcommon\x2dthemes-1198.mount
           836ms snap-gnome\x2dcharacters-254.mount
           832ms grub-initrd-fallback.service
           821ms snap-gnome\x2d3\x2d28\x2d1804-71.mount
           821ms snap-gnome\x2dcharacters-296.mount
           820ms snap-gnome\x2dsystem\x2dmonitor-100.mount
           783ms systemd-backlight@backlight:intel_backlight.service
           755ms snap-libreoffice-139.mount
           754ms snap-core18-1074.mount
           747ms systemd-modules-load.service
           673ms snap-vlc-1049.mount
           644ms pppd-dns.service
           639ms snap-gtk\x2dcommon\x2dthemes-1313.mount
           633ms snap-chromium-821.mount
           583ms systemd-timesyncd.service
           565ms snap-core-7270.mount
           552ms systemd-tmpfiles-setup-dev.service
           532ms systemd-sysusers.service
           525ms keyboard-setup.service
           497ms systemd-rfkill.service
           482ms systemd-journald.service
           466ms switcheroo-control.service
           423ms snapd.seeded.service
           383ms plymouth-start.service
           347ms systemd-udev-trigger.service
           332ms networking.service
           323ms snap-gnome\x2d3\x2d28\x2d1804-67.mount
           314ms colord.service
           308ms snap-core-7396.mount
           296ms systemd-user-sessions.service
           289ms openvpn.service
           278ms swapfile.swap
           206ms snap-gimp-189.mount
           182ms ifupdown-pre.service
           181ms snap-chromium-817.mount
           180ms nvidia-persistenced.service
           172ms dns-clean.service
           169ms sys-kernel-debug.mount
           169ms dev-mqueue.mount
           166ms dev-hugepages.mount
           163ms boot-efi.mount
           163ms plymouth-read-write.service
           162ms snap-gnome\x2dsystem\x2dmonitor-77.mount
           162ms snap-gnome\x2dlogs-61.mount
           154ms rtkit-daemon.service
           149ms snap-hw\x2dprobe-337.mount
           137ms systemd-random-seed.service
           132ms systemd-update-utmp.service
           125ms ufw.service
           121ms kmod-static-nodes.service
           119ms setvtrgb.service
           107ms kerneloops.service
            99ms console-setup.service
            97ms bolt.service
            91ms systemd-remount-fs.service
            65ms user@1000.service
            11ms user-runtime-dir@1000.service
             9ms systemd-update-utmp-runlevel.service
             3ms sys-fs-fuse-connections.mount
             1ms sys-kernel-config.mount
             1ms snapd.socket
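
The standout entry is plymouth-quit-wait.service at 52 s, though blame alone can mislead because units start in parallel. A sketch of the usual next steps (commands assume a stock systemd-based Ubuntu; masking NetworkManager-wait-online.service is a common mitigation and is easily undone with unmask):

    # Show the chain of units that actually gates reaching the default target,
    # rather than per-unit totals that overlap in parallel
    systemd-analyze critical-chain

    # plymouth-quit-wait.service mostly waits for the display manager, so see
    # what gdm.service and plymouth were doing during this boot
    journalctl -b -u plymouth-quit-wait.service -u gdm.service

    # If nothing here needs the network up before login, the 10.7s
    # NetworkManager-wait-online.service wait can be masked
    sudo systemctl mask NetworkManager-wait-online.service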

Slow transfer speeds on external USB SSD hard drive?

I recently bought a Samsung External SSD T5 for storing backups made with Ubuntu’s built-in Deja Dup backup tool, and I’m finding its performance is terrible. Samsung advertises that “the T5 provides transfer speeds of up to 540 MB/s*, that’s up to 4.9x faster than external HDDs”, but the real-world performance isn’t anywhere close to this.

Using the command provided in this answer, I’m monitoring the transfer progress of several large files. One file, called duplicity-full-signatures.20190720T075111Z.sigtar.gz, is 648 MB in size, and the tool says the ETA for transfer completion is 5 hours!

Am I missing something here? Shouldn’t a drive with transfer speeds of up to 540 MB/s be able to take a 648 MB file in 648 / 540 = 1.2 seconds? I realize they said “up to”, and other resource draws on my computer will push the actual transfer speed well below that… but not by 5 hours.

Other than Samsung being outright frauds, what would the reason be for these slow transfer times? I formatted the drive as ext4 with encryption. Is there a different format I should use to speed things up? Are there any other system-wide changes I could make to speed up the deja-dup/duplicity process without making my system unusable during the backup?
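
To separate raw drive throughput from backup-tool overhead, it’s worth benchmarking the drive directly. A sketch, where the mount point is a placeholder for wherever the T5 is mounted:

    # Sequential write straight to the drive; oflag=direct bypasses the page
    # cache and conv=fsync waits for the final flush, so the figure reflects
    # the disk rather than RAM
    dd if=/dev/zero of=/media/$USER/T5/testfile bs=1M count=1024 oflag=direct conv=fsync

    # Confirm the drive negotiated USB 3.x (5000M) rather than USB 2.0 (480M);
    # a USB 2.0 link or cable caps real-world throughput at roughly 40 MB/s
    lsusb -t

If dd reports speeds anywhere near the advertised figure, the bottleneck is likely duplicity itself, which compresses, encrypts, and hashes on the CPU rather than streaming raw bytes.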

Dramatically boost WordPress speed, fix slow admin in 24hr for $20

If you want your content to rank, it’s vital that your site loads quickly. The faster the better. Page loading speed has a direct impact on the position your site has in the search engine results. I will optimize and improve your WordPress site’s performance, speed, and loading time.

✓ Page Caching
✓ File Compression
✓ Enabling gZip Compression
✓ Ultra-fast Load Time
✓ Browser Caching
✓ Database Optimization
✓ Image Optimization
✓ Basic WordPress Security

A fast site means:

  • Better User Experience
  • Lower Bounce Rate
  • Higher ROI & Conversion Rates
  • Better Ranking in Google Search
  • Happy Visitors

Delivery includes:

  1. Service as promised!
  2. Before & after reports
  3. Answers to your questions

Still unsure? Let’s talk it over! Or you can order now. 100% satisfaction.

by: shanjutt11
Created: —
Category: WordPress
Viewed: 173


Computer running extremely slow on 19.04, any suggestions?

Computer running very slow. How can this be rectified? (19.04)

     PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    2880 root      20   0  262020  43240  34580 R  60.0   1.1  16:05.41 Xorg
    4034 sayne     20   0 3567880 147212  52868 S  40.0   3.8   3:40.92 gnome-shell
    3913 sayne     20   0  338420   6000   3168 S  20.0   0.2   0:00.92 ibus-extension-
    5630 sayne     20   0  976408  44224  32576 S  20.0   1.1   0:00.87 gnome-terminal-
    5709 sayne     20   0   12052   3752   3064 R  20.0   0.1   0:00.07 top
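
Xorg at 60% CPU with gnome-shell at 40% often means the desktop has fallen back to software rendering instead of using the GPU. A sketch of how to check (glxinfo comes from the mesa-utils package, which may need installing first):

    # "llvmpipe" or "softpipe" in the output means software rendering
    glxinfo | grep "OpenGL renderer"

    # List detected GPU hardware and the driver Ubuntu recommends for it
    ubuntu-drivers devices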

Why is my image convolution function so slow?

I wasn’t sure whether to post this on the machine learning board or this one, but I chose this one since my problem has more to do with optimization. I am trying to build a YOLO model from scratch in Python, but each convolution operation takes 10 seconds. Clearly I am doing something wrong, as YOLO is supposed to be super fast (able to produce results in real time). I don’t need the network to run in real time, but training it will be a nightmare if it takes several hours to run on one image. Please help me!

Here is my convolution function:

def convolve(image, filter, stride, modifier):
    new_image = np.zeros([image.shape[0],
                          _round((image.shape[1]-filter.shape[1])/stride)+1,
                          _round((image.shape[2]-filter.shape[2])/stride)+1], float)

    # convolve
    for channel in range(0, image.shape[0]):
        filterPositionX = 0
        filterPositionY = 0
        while filterPositionX < image.shape[1]-filter.shape[1]+1:
            while filterPositionY < image.shape[2]-filter.shape[2]+1:
                sum = 0
                for i in range(0, filter.shape[1]):
                    for j in range(0, filter.shape[2]):
                        if filterPositionX+i < image.shape[1] and filterPositionY+j < image.shape[2]:
                            sum += image[channel][filterPositionX+i][filterPositionY+j]*filter[channel][i][j]
                new_image[channel][int(filterPositionX/stride)][int(filterPositionY/stride)] = sum*modifier
                filterPositionY += stride
            filterPositionX += stride
            filterPositionY = 0

    # condense
    condensed_new_image = np.zeros([new_image.shape[1], new_image.shape[2]], float)
    for i in range(0, new_image.shape[1]):
        for j in range(0, new_image.shape[2]):
            sum = 0
            for channel in range(0, new_image.shape[0]):
                sum += new_image[channel][i][j]
            condensed_new_image[i][j] = sum

    condensed_new_image = np.clip(condensed_new_image, 0, 255)

    return condensed_new_image

Running the function on a 448×448 grayscale image with a 7×7 filter and a stride of 2 takes about 10 seconds. My computer has an i7 processor.
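
The quadruple Python loop is the bottleneck: 221 × 221 output positions × 49 filter taps is roughly 2.4 million interpreted multiply-adds per channel. For comparison, a sketch of the same operation vectorized with NumPy (sliding_window_view requires NumPy ≥ 1.20, and this assumes _round truncates like int(), so the output grid matches the original’s):

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def convolve_fast(image, filt, stride, modifier):
        # image: (channels, H, W) and filt: (channels, fh, fw), as in the original
        c, fh, fw = filt.shape
        # every (fh, fw) window of every channel: (c, H-fh+1, W-fw+1, fh, fw)
        windows = sliding_window_view(image, (fh, fw), axis=(1, 2))
        # keep every stride-th window, multiply by the filter, sum over each window
        out = np.einsum('chwij,cij->chw', windows[:, ::stride, ::stride], filt)
        # "condense": scale, sum across channels, and clip, as the original does
        return np.clip((out * modifier).sum(axis=0), 0, 255)

scipy.signal.correlate2d would do the same per-channel job; either way the inner loops run in compiled code instead of the interpreter, which typically turns seconds into milliseconds.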

SlowCheetah – Add Transform resurrects deleted config files

I’m working on some automated testing code, and I’ve installed the SlowCheetah extension and packages in order to be able to create multiple config files.

The idea is that I can then target these config files depending on where I want my code to run (e.g. locally, on a server, etc).

So I made a dummy file just to try the feature out, and when I was happy with it working, I deleted the file.

The feature allows you to generate the config files, and somehow, when I do that, it resurrects the deleted file.

So this is my issue: I’m haunted by a ghost file…

Things I’ve tried to exorcise it:

  1. deleted the file in Visual Studio
  2. deleted the local file manually
  3. deleted any trace of the file from the Configuration Manager

So yeah, I cleaned up all the slime, and every time I click on Add Transform to generate a new config file, the ghost one tags along…


…there is something strange in my neighborhood, who you gonna call?
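
One thing worth checking, as a guess: SlowCheetah records each transform as an item in the .csproj, and Add Transform will regenerate any file the project file still references, even if it’s gone from disk and from Configuration Manager. Unload the project, choose Edit on the .csproj, and delete any leftover entry for the ghost file; the file names below are hypothetical stand-ins:

    <ItemGroup>
      <!-- Hypothetical leftover item: if an entry like this survives for the
           deleted file, removing it stops Add Transform from re-creating it -->
      <None Include="App.Dummy.config">
        <DependentUpon>App.config</DependentUpon>
        <IsTransformFile>true</IsTransformFile>
      </None>
    </ItemGroup>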

add-apt-repository very slow

I just did a fresh install of Ubuntu 19.04 on a Razer Blade 15 (2019) and am trying to install OpenRazer drivers so I can control the keyboard backlighting.

I open a terminal and run sudo add-apt-repository ppa:openrazer/stable. When I hit Enter, it sits there for 5-10 minutes and finally asks whether I want to add the repository.

I hit Enter to confirm, wait another minute or so, and it says that it timed out trying to retrieve the GPG key.

I’ve tried:

  1. Restarting and running the command again
  2. Disabling IPv6
  3. Letting Software & Updates choose the best download mirror
  4. Using multiple different keyservers
  5. Removing all repositories with missing keys

Nothing seems to work. I get the same behaviour every time. Is there any way to fix this?
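
One common cause of exactly this pattern: the HKP keyserver protocol uses port 11371, which many routers and corporate firewalls block, so add-apt-repository hangs and then times out fetching the key. A sketch of a workaround that pulls the key over port 80 instead (KEY_FINGERPRINT is a placeholder for the signing key listed on the PPA’s Launchpad page; disco is the 19.04 codename):

    # Add the PPA's sources entry by hand instead of via add-apt-repository
    echo "deb http://ppa.launchpad.net/openrazer/stable/ubuntu disco main" | \
        sudo tee /etc/apt/sources.list.d/openrazer.list

    # Fetch the signing key over port 80, which firewalls rarely block
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys KEY_FINGERPRINT

    sudo apt update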