Slow performance of embedding Mathematica Demonstrations in webpages

When I embed a Mathematica Demonstration in my webpage, the performance is very slow and laggy.

For instance, if I follow the instruction video and embed the Radial Engine Demonstration in an HTML page, it takes about 5 seconds to load (that’s ok) and when I drag a slider, it takes about 2 seconds for the image to update (that’s a big problem). This is the case even for simpler demonstrations, such as this magnetic field demonstration.

Is there any way to improve performance?

When does setting the SORT_IN_TEMPDB option actually improve performance?

My question is related to index rebuilding, mainly the SORT_IN_TEMPDB option.

BOL states that:

Even if other users are using the database and accessing separate disk addresses, the overall pattern of reads and writes is more efficient when SORT_IN_TEMPDB is specified than when it is not.

On the other hand, one of the users states:

When rebuilding an index you need twice the space of the index, plus 20% for sorting. So in general, to rebuild every index in your DB you only need 120% of your biggest index. If you use SORT_IN_TEMPDB, you only win 20%; you still need an additional 100% in your data file. Furthermore, using sort in tempdb drastically increases your IO load, since instead of writing the index once to the data file, you now write it once to tempdb and then again to the data file. So it is not always ideal.

Would you like to share your own experience about this option? Have you ever had to use this option while rebuilding indexes? What was the performance result?
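To make the space claims in that quote concrete, here is a hedged back-of-the-envelope calculation in Python. The 100%/20% figures come from the quote above, not from official documentation, and the 10 GB index is just an example:

```python
# Back-of-the-envelope space math for rebuilding one index, using the
# figures quoted above: the rebuilt copy needs ~100% of the index size,
# and the sort workspace needs ~20% more.

def rebuild_space(index_gb, sort_in_tempdb):
    """Return (extra GB needed in the data file, extra GB needed in tempdb)."""
    new_copy = index_gb          # rebuilt index is written alongside the old one
    sort_area = 0.2 * index_gb   # ~20% temporary sort workspace (per the quote)
    if sort_in_tempdb:
        # the sort spills into tempdb; the data file still holds the new copy
        return new_copy, sort_area
    # everything happens in the data file
    return new_copy + sort_area, 0.0

# Example: a 10 GB index
print(rebuild_space(10, sort_in_tempdb=False))  # (12.0, 0.0)
print(rebuild_space(10, sort_in_tempdb=True))   # (10, 2.0)
```

So, as the quote argues, SORT_IN_TEMPDB only moves the ~20% sort workspace out of the data file; the ~100% for the new copy stays there either way.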

Fstrim ruins bcache’s performance

Since Ubuntu 18.04, two machines I use, a desktop and a notebook, have occasionally shown a really slow boot followed by really slow performance for everything after that boot. Both use a small SSD and a bigger HDD through bcache.

Apart from those occasions, they are fast, with no noticeable difference from other SSD-only PCs. Bcache is great. Usually a reboot after the slow boot makes things go back to normal. That's why I took so long to investigate it more deeply.

Following instructions I probably found here on Ask Ubuntu, I used systemd-analyze and discovered that fstrim was causing it:

$ sudo systemd-analyze blame
2min 9.448s fstrim.service
...

The package was found using this:

$ dpkg -S fstrim
util-linux: /sbin/fstrim
util-linux: /usr/share/man/man8/fstrim.8.gz
util-linux: /lib/systemd/system/fstrim.service
util-linux: /lib/systemd/system/fstrim.timer
util-linux: /usr/share/bash-completion/completions/fstrim

My guess is that fstrim ruins bcache's performance. It is scheduled to run once a week, which is consistent with the observed behaviour. It probably treats the bcache device as one huge SSD and does its thing, making the boot super slow; it also messes with the cache, so every access after that is a cache miss.

It's kind of fixed on my machines, since I disabled fstrim and its timer following the instructions here, and the slow boot hasn't occurred again.

rm /var/lib/systemd/timers/stamp-fstrim.timer
systemctl stop fstrim.service fstrim.timer
systemctl disable fstrim.service fstrim.timer
systemctl mask fstrim.service fstrim.timer

But there are probably better solutions. For example, there should be a way to disable fstrim for a single partition by editing fstab.
There is, maybe... I have just found it while reading the Arch Linux wiki and a link to kernel.org from there. You just add nodiscard to that bcache filesystem's line in fstab. I haven't tested it yet. In my case it would be:

...
# /home was on /dev/bcache0 during installation
UUID=0880deae-1eeb-4c07-af01-a3db9d2d6282 /home           ext4    defaults,nodiscard        0       2
...
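As an untested sketch, here is a small Python check that parses fstab-style lines and reports whether each mount point carries the nodiscard option. The sample text embeds the /home line from my snippet plus a second, made-up root entry for contrast; on a real system you would read /etc/fstab instead:

```python
# Minimal fstab parser: report which entries carry the "nodiscard"
# mount option. Works on a sample string here; point it at /etc/fstab
# on a real system. The second UUID below is invented for illustration.

SAMPLE_FSTAB = """\
# /home was on /dev/bcache0 during installation
UUID=0880deae-1eeb-4c07-af01-a3db9d2d6282 /home ext4 defaults,nodiscard 0 2
UUID=11112222-3333-4444-5555-666677778888 /     ext4 defaults           0 1
"""

def nodiscard_status(fstab_text):
    """Map each mount point to True if 'nodiscard' is among its options."""
    status = {}
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split()
        if len(fields) >= 4:
            mountpoint, options = fields[1], fields[3]
            status[mountpoint] = "nodiscard" in options.split(",")
    return status

print(nodiscard_status(SAMPLE_FSTAB))  # {'/home': True, '/': False}
```

A check like this would let me confirm the option actually landed on the bcache filesystem before the next weekly fstrim run.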

Even better would be for bcache to report to lsblk --discard that it has no trim support, or for fstrim to recognize a bcache partition and skip it.

Any suggestions? Should I file a bug? Where?

Would forcing a lawyer to turn on their client with a Glamour Bard’s Enthralling Performance feature be seen as an attack?

Say you are a bard, level 3+. You are being sued by an enemy, and you have got their lawyer tied up in a chair. The lawyer becomes charmed after failing their saving throw against your Enthralling Performance feature, which states:

Each target must succeed on a Wisdom saving throw against your spell save DC or be charmed by you. While charmed in this way, the target idolizes you, it speaks glowingly of you to anyone who speaks to it, and it hinders anyone who opposes you, avoiding violence unless it was already inclined to fight on your behalf. This effect ends on a target after 1 hour, if it takes any damage, if you attack it, or if it witnesses you attacking or damaging any of its allies.

Would charming the lawyer and making them throw out the case be seen as an attack against the lawyer's ally, your enemy?

cache performance (no. of hits)

Consider the following sequence of memory references, given as word addresses: 22, 10, 27, 21, 23, 30, 4, 22, 7, 35, 5, 31, 10, 27, and 21. Assume the cache is initially empty. How many of the above references are cache hits for:

1. A 64-byte 2-way set-associative cache with a block size of 8 bytes, a word size of 4 bytes, and an LRU replacement policy?
2. A 64-byte fully associative cache with a block size of 8 bytes, a word size of 4 bytes, and an LRU replacement policy?
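Not an answer sheet, but a short LRU simulator in Python is a handy way to check hand-traced counts. The geometry follows from the question: a 4-byte word and 8-byte blocks give 2 words per block, and a 64-byte cache holds 8 blocks, so case 1 is 4 sets of 2 ways and case 2 is 1 set of 8 ways:

```python
from collections import OrderedDict

def count_hits(word_addresses, num_sets, ways, words_per_block=2):
    """Count hits for a set-associative cache with LRU replacement."""
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for wa in word_addresses:
        block = wa // words_per_block      # word address -> block number
        s = sets[block % num_sets]         # block number -> set index
        if block in s:
            hits += 1
            s.move_to_end(block)           # refresh LRU position on a hit
        else:
            if len(s) == ways:
                s.popitem(last=False)      # evict the least recently used block
            s[block] = True
    return hits

refs = [22, 10, 27, 21, 23, 30, 4, 22, 7, 35, 5, 31, 10, 27, 21]

# Case 1: 64-byte cache / 8-byte blocks = 8 blocks; 2-way -> 4 sets
print(count_hits(refs, num_sets=4, ways=2))  # prints 4
# Case 2: fully associative -> 1 set of 8 ways
print(count_hits(refs, num_sets=1, ways=8))  # prints 7
```

Tracing by hand gives the same picture: in the 2-way cache the hits are on 23, the second 22, 5, and the final 21, while the fully associative cache keeps all eight distinct blocks resident, so the whole tail of repeats (plus both 11-block re-references) hits.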

How can I improve the performance of geom_histogram()?

I am plotting a histogram from a very large data set using geom_histogram(), and I have noticed that as I increase its "resolution" by raising the number of bins (bars), the result gets slower and slower. Compared with a base R histogram, the time ratio is at least 10 to 1. Example:

library("ggplot2")
library("microbenchmark")
set.seed(2019)
x <- rnorm(100000)
df <- data.frame(x = x)

ggplot_hist <- function(data, bins = 100000) {
  print(ggplot(data, aes(x = x)) + geom_histogram(bins = bins))
}

base_hist <- function(x, breaks = 100000) {
  print(hist(x, breaks = breaks))
}

microbenchmark(
  base_hist(x),
  ggplot_hist(df),
  times = 3L
)

Unit: seconds
            expr       min        lq      mean    median        uq       max neval
    base_hist(x)  4.503556  4.632358  4.680143  4.761159  4.768436  4.775713     3
 ggplot_hist(df) 56.330033 57.249490 60.182923 58.168946 62.109369 66.049791     3

Is there a way to optimize a histogram in ggplot?

Is there any performance issue if we install Ubuntu on a Windows-based laptop?

I mean, if we have a Windows-based laptop and I install Ubuntu on it, will it work fine, or will there be some performance issue? There are other laptops on the market that come with Ubuntu preinstalled, so what is the difference between these two types of laptops?

I wish to install Ubuntu on an Acer Predator Helios 300 with an i5 processor, 8 GB of RAM, a 128 GB SSD, and a 1 TB HDD.