Data Hazards and Stalls

I am studying for my exam tomorrow and I am having difficulty with the code below:

    sub $2, $1, $3
    and $12, $2, $5
    or  $13, $6, $2
    add $14, $2, $2
    sw  $15, 100($2)

Because of the ALU-ALU dependency on register $2 here, the sub instruction does not write its result until the fifth stage, meaning we would have to waste three clock cycles in the pipeline. My question is: why three clock cycles? This dependency can be solved by inserting two nops, which would mean we are wasting only two clock cycles, wouldn't it? Please clarify this for me; I am trying to relate the nops to the wasted cycles and I am sure I have a huge misunderstanding here.
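For reference, here is a minimal timing sketch of the chart I am trying to reason about, assuming the textbook 5-stage pipeline (IF, ID, EX, MEM, WB) with no forwarding; the script and its layout are my own illustration, not from any course material. One detail that often decides between a count of two and three is whether the register file is assumed to write in the first half of a clock cycle and be read in the second half of that same cycle. The sketch prints the stage chart for sub followed by and under both assumptions:

    # Stage-by-stage timing for `sub $2, $1, $3` followed by `and $12, $2, $5`
    # in a textbook 5-stage MIPS pipeline (IF ID EX MEM WB) with no forwarding.
    # Toggle: can the register file write in the first half of a cycle and be
    # read in the second half of the same cycle?

    STAGES = ["IF", "ID", "EX", "MEM", "WB"]

    def chart(split_cycle_regfile):
        write_cycle = 5                       # sub writes $2 in its WB stage (cycle 5)
        read_cycle = write_cycle if split_cycle_regfile else write_cycle + 1
        stalls = read_cycle - 3               # hazard-free, `and` would read $2 in ID (cycle 3)
        total_cycles = read_cycle + 3         # EX, MEM, WB of `and` follow its ID
        print(f"\nsplit-cycle register file = {split_cycle_regfile} -> {stalls} stall cycle(s)")

        sub_row = dict(zip(range(1, 6), STAGES))
        and_row = {2: "IF"}                   # `and` is fetched in cycle 2
        for c in range(3, read_cycle):        # bubbles while waiting for $2
            and_row[c] = "**"
        and_row.update(zip(range(read_cycle, read_cycle + 4), STAGES[1:]))

        print("cycle            " + " ".join(f"{c:>4}" for c in range(1, total_cycles + 1)))
        for name, row in (("sub $2,$1,$3", sub_row), ("and $12,$2,$5", and_row)):
            cells = " ".join(f"{row.get(c, ''):>4}" for c in range(1, total_cycles + 1))
            print(f"{name:<17}{cells}")

    for flag in (False, True):
        chart(flag)

Running it shows the and completing in cycle 9 (three bubbles) under the strict assumption, and in cycle 8 (two bubbles, i.e. two nops) when the write-then-read-in-one-cycle behavior is assumed, which seems to be exactly where the 3-versus-2 discrepancy comes from.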

Yeti Blue Stalls My USB Mouse

Any time I plug my Yeti Blue into my PC, my mouse (regardless of brand, but right now it's a G403) freezes up. I can try to scroll, but nothing happens until I unplug the mic, and then any actions I did with the mouse take place all at once. I figured it was because my PC tries to make the Yeti's headphone jack my default audio device, but now even changing that doesn't solve the problem. If anyone has any ideas it would be much appreciated. Thanks in advance.

Samsung S2 Network Stalls

My S2 tab's connection stalls. Sometimes when browsing, the progress bar gets to about 10% and then stops, and I get a "DNS probe failed" error. If I boot to recovery and wipe the cache, it works OK for a short while and then stops again. It also stalls when using the Play Store. I'm using LineageOS 16.0 and the Chrome browser.

Database stalls on innodb_buffer_pool_wait_free

I have 4 identical MariaDB 10.0.33 databases that regularly experience stalls in different situations. I'm trying to understand how to tune the InnoDB parameters to prevent these stalls. I am aware I can/should add more RAM and more disks to better support this workload. However, I really want to understand the implications innodb_lru_scan_depth has on the innodb_buffer_pool_pages_free metric, and what is causing the frequent stalls measured by the innodb_buffer_pool_wait_free metric.

First, some config:

  • RAID 10 x4 SSD, 64 GB RAM, zero swap
  • innodb_buffer_pool_size = 48G (yes, I know this can be increased a little)
  • innodb_buffer_pool_instances = 8
  • innodb_flush_log_at_trx_commit = 2
  • innodb_flush_method = O_DIRECT
  • innodb_lock_wait_timeout = 50
  • innodb_log_file_size = 4G
  • innodb_log_buffer_size = 8M
  • transaction-isolation = READ-COMMITTED
  • innodb_flush_neighbors = 0
  • innodb_io_capacity = 2000 # default 200
  • innodb_io_capacity_max = 4000 # default 2000
  • innodb_max_dirty_pages_pct_lwm = 50
  • innodb_max_dirty_pages_pct = 75
  • innodb_flushing_avg_loops = 90
  • innodb_lru_scan_depth = 16384 (this is the most important)

Second, some stats:

  • Disk reads are fairly stable throughout the day at ~3k–5k, with peaks to 10k
  • Disk writes are fairly stable throughout the day at ~1k, with peaks to 2k
  • Disk read latency less than ~0.5ms on average, some spikes to 1ms
  • Disk write latency less than 3ms on average, some spikes to 10ms
  • Disk utilization averaging ~80% some spikes to 95-100%
  • Between 1% and 2% buffer pool cache miss ratio
  • Consistent ~1k InnoDB pool pages flushed per second
  • Consistent 13-18% InnoDB pool bytes dirty
  • Consistent 350k–420k pages dirty
  • Frequent drops of innodb_buffer_pool_pages_free from 130k down to 0
  • Frequent spikes of innodb_buffer_pool_wait_free to as high as 25k

Increasing innodb_lru_scan_depth from its default of 1024 improved the situation: innodb_buffer_pool_pages_free immediately grows every time I increase it. In the attached dashboard, I changed innodb_lru_scan_depth from 16384 to 32768 at 11:40 on two of the four servers, and the pages free jumped from ~130k up to ~260k. There seems to be a strong correlation between a lack of free pages, the LRU scan depth, and the wait-free counter.
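As an aside, if it helps to see exactly which counters I am watching, a rough polling sketch like the following would capture them (purely illustrative; pymysql and the connection details are placeholders, not part of my actual setup):

    # Sample the free/dirty/wait-free counters so drops in
    # Innodb_buffer_pool_pages_free can be lined up against spikes in
    # Innodb_buffer_pool_wait_free. Host/user/password are placeholders.
    import time
    import pymysql

    COUNTERS = (
        "Innodb_buffer_pool_pages_free",
        "Innodb_buffer_pool_pages_dirty",
        "Innodb_buffer_pool_wait_free",
    )

    conn = pymysql.connect(host="127.0.0.1", user="monitor", password="secret")

    while True:
        with conn.cursor() as cur:
            placeholders = ", ".join(["%s"] * len(COUNTERS))
            cur.execute(
                "SHOW GLOBAL STATUS WHERE Variable_name IN (%s)" % placeholders,
                COUNTERS,
            )
            sample = dict(cur.fetchall())
        print(time.strftime("%H:%M:%S"), {name: sample.get(name) for name in COUNTERS})
        time.sleep(10)

Graphing those samples side by side should show whether the wait_free spikes coincide with pages_free hitting zero after each innodb_lru_scan_depth change.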

My primary question is: what is the relationship between this LRU parameter and the database stalls I am experiencing?

Second question: other than adding more RAM/storage bandwidth, what can I tune from the database perspective to throttle expensive workloads that may be running in parallel and saturating my disks? There may be a correlation with the innodb_buffer_pool_pages_old metric increasing during the periods when I'm seeing database stalls. This seems to indicate that something is doing a large full table scan and making the buffer pool less optimal.

[screenshot: MariaDB dashboard, last 3 hours]

When is it unacceptable to use accessible/disabled toilet stalls?

While traveling around the world, one will often stumble upon bathrooms for people who are physically disadvantaged.

Here in North America, I wouldn’t think twice before using these bathrooms, as I presume they’re meant to be accessibility-friendly, rather than accessibility-exclusive. Likewise I’ve always used them without issues in Central Europe.

But are there countries or particular locations where using such bathrooms is impolite or outright illegal, similar to how accessible parking works?

Question inspired by my answer to a related question.