What happens if the L1 cache has an address entry with the write-back attribute? Will that address be available in the L2 cache?

I have a TLB entry for a particular address. This address has write-back attributes in both the L1 cache and the L2 cache. My queries are: 1) If the L1 cache entry is write-back, can it also be write-back in L2? 2) If the L1 cache entry is write-back, then updated values will not be written to DDR until we apply a flush. Does the same behaviour apply to the L2 cache as it does to DDR?
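
To make the second query concrete, here is a toy model in Python (purely conceptual, nothing to do with any real CPU's cache-maintenance instructions) of what I mean by "updated values are not written to DDR until we apply a flush":

# Toy model of a single write-back cache line with a dirty bit.
# This only illustrates the semantics I am asking about; real hardware
# does this in silicon, not software.
class WriteBackLine:
    def __init__(self, memory, address):
        self.memory = memory            # backing store ("DDR")
        self.address = address
        self.data = memory[address]     # value brought in when the line is allocated
        self.dirty = False

    def write(self, value):
        self.data = value               # the update stays in the cache line...
        self.dirty = True               # ...and the line is marked dirty

    def flush(self):
        if self.dirty:                  # only dirty lines are written back to DDR
            self.memory[self.address] = self.data
            self.dirty = False

ddr = {0x1000: 0}
line = WriteBackLine(ddr, 0x1000)
line.write(42)
print(ddr[0x1000])   # still 0: DDR has not seen the write yet
line.flush()
print(ddr[0x1000])   # 42: visible only after the flush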

Can the cost factor of Windows' credential cache be changed?

I’m having a hard time finding relevant documentation from Microsoft (or any third party, for that matter) about any registry key that may change the cost factor of cached credentials.

One can control how many logons are stored through the registry key CachedLogonsCount, but what I would like to change is the cost parameter of PBKDF2. The default is a cost of 10240, which is quite low (we managed to crack an 11-character password of a domain admin using a large wordlist and hashcat on a ~$8/hour GPU VPS in about one hour).
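
For context on why 10240 iterations is so cheap, here is a quick timing sketch using Python's hashlib. This is generic PBKDF2-HMAC-SHA1, not a faithful reimplementation of the DCC2/MSCACHEV2 construction, and the timings are only illustrative:

# Rough illustration of how the PBKDF2 iteration count scales the cost of one guess.
import hashlib
import time

password = b"CorrectHorseBatteryStaple"
salt = b"someuser"   # illustrative only; as far as I know DCC2 salts with the lowercased username

for iterations in (10240, 100000, 1000000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha1", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>8} iterations: {elapsed * 1000:.1f} ms per guess on one CPU core")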

Of course, turning off the cache (and especially not logging in with a domain admin account on attacker-accessible workstations) is the proper solution for this kind of attack. However, our client does not want to turn off caching for availability reasons: if the network goes down, individual systems should still work. Since they refuse to turn it off altogether, and since rolling out disk encryption and physical access controls will take a while, it would be good to be able to recommend changing a simple setting in the meantime.

Impact of the instruction and data caches

I am working with a microprocessor that uses a VLIW architecture. Something particular about this processor (Trimedia-TM1300) is its cache memories.

Normally the data cache is larger than the instruction cache. What impact does it have that the instruction cache is larger than the data cache?

If anyone has an idea on the subject, please don't hesitate to tell me. Many thanks.

Blob cache fails to update image renditions

We're having a problem with image renditions and the blob cache. When we upload images to a site, the image renditions are generated properly. But if we change the crop of an image, that rendition does not update. We have to clear the blob cache to fix the problem, and the problem sometimes reoccurs within 10 minutes of clearing it. I don't think clearing the blob cache all the time is the solution. What might the problem be? Why is the blob cache out of sync most of the time?

How would I place the Chromium cache on tmpfs (RAM)?

Some background: today I realized that my system SSD is almost full (90% full), mostly in the /home directory, and that the main culprits are the 77 Chromium profiles I use on my computer. I manage 16 Google accounts and have created several profiles for some of the accounts as sub-departments (yes, I need the 77 profiles, I want them; please do not tell me to delete some, as I already know that).

I am a Xubuntu user, and this question relates to Ubuntu/Xubuntu. I have been searching the web for solutions; the most valuable resource regarding Chromium options has been the Arch Linux wiki page for Chromium: https://wiki.archlinux.org/index.php/Chromium/Tips_and_tricks

I currently launch my Chromium profiles like this:

chromium-browser --user-data-dir=/home/ThisComputerUsername/.config/chromium/GoogleUserProfileXSubjectY 

1) One option would be to limit the amount of cache for each profile by appending --disk-cache-size=50000000 to my Chromium launch command from before. Adding that would cap the cache at roughly 50 MB, and the total command would look like:

chromium-browser --user-data-dir=/home/ThisComputerUsername/.config/chromium/GoogleUserProfileXSubjectY --disk-cache-size=50000000 

Would this command be correct?

2) Another option would be to send the cache to /tmp so that it is deleted each time the computer is restarted. I am OK with that: more internet usage, but less SSD wear. I think I would achieve that by appending something like --disk-cache-dir=/tmp/cache to the launch command. The total command, also capping the cache at 50 MB as in option 1, would look like this:

chromium-browser --user-data-dir=/home/ThisComputerUsername/.config/chromium/GoogleUserProfileXSubjectY --disk-cache-size=50000000 --disk-cache-dir=/tmp/cache/GoogleUserProfileXSubjectY 

Would this command be correct?

3) The final option, since I always move between places where I have a very good internet connection, would be to place the cache on tmpfs, which, as I understand it (just from today), is something like /tmp in RAM. It would also be deleted when I restart the computer, with the added benefit of avoiding SSD wear because the cache is never written to disk (I don't mind having to reload all the content of every web page I visit; I prefer to preserve SSD life).

So how would that be achieved? I guess it will not be as easy as doing something like --disk-cache-dir=/tmpfs/cache/GoogleUserProfileXSubjectY, right? (Please note the fs in tmp.)

Would that be correct? If not, how could I do it? I am a regular Linux user, but not a systems wizard, so any help would be appreciated.

Cache performance (number of hits)

Consider the following sequence of memory references, given as word addresses: 22, 10, 27, 21, 23, 30, 4, 22, 7, 35, 5, 31, 10, 27, and 21. Assume the cache is initially empty. How many of the above references are cache hits for:

1. A 64-byte, 2-way set-associative cache with a block size of 8 bytes, a word size of 4 bytes, and an LRU replacement policy?

2. A 64-byte fully associative cache with a block size of 8 bytes, a word size of 4 bytes, and an LRU replacement policy?
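
Here is a small Python sketch (my own, not part of the assignment) that simulates both configurations and counts the hits; the address-to-set mapping and the LRU bookkeeping are the parts I want to check:

# Simulate a set-associative cache with LRU replacement over word addresses.
def count_hits(word_addresses, cache_bytes=64, block_bytes=8, word_bytes=4, ways=2):
    words_per_block = block_bytes // word_bytes        # 2 words per block with these parameters
    num_sets = (cache_bytes // block_bytes) // ways    # 4 sets for 2-way, 1 set for fully associative
    sets = [[] for _ in range(num_sets)]               # each set holds block numbers, LRU first, MRU last
    hits = 0
    for addr in word_addresses:
        block = addr // words_per_block                # block number in memory
        index = block % num_sets                       # set index (always 0 when fully associative)
        lru_list = sets[index]
        if block in lru_list:                          # hit: move the block to the MRU position
            hits += 1
            lru_list.remove(block)
        elif len(lru_list) == ways:                    # miss with a full set: evict the LRU block
            lru_list.pop(0)
        lru_list.append(block)
    return hits

refs = [22, 10, 27, 21, 23, 30, 4, 22, 7, 35, 5, 31, 10, 27, 21]
print("2-way set associative:", count_hits(refs, ways=2))
print("fully associative:    ", count_hits(refs, ways=8))   # 8 ways of 8-byte blocks = 64 bytes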

Finding Cache Miss Penalty in Memory with Banks


Consider a memory system with 4 GB of main memory and a 256 KB direct-mapped cache with 128-byte lines. The main memory system is organized as 4 interleaved memory banks with a 50-cycle access time, where each bank can concurrently deliver one word. The instruction miss rate is 3% and the data miss rate is 4%. We find that 40% of all instructions generate a data memory reference. a. What is the miss penalty in cycles?

Taken from here

I could not figure out how a miss penalty of 440 cycles is calculated here. The solution given just says:

Address cycles + access cycles + transfer time = 8 + 8 × 50 + 32 = 440 cycles

My understanding is: Miss Penalty (MP) = time for a successful access at the next level + (MR_next_level × MP_next_level).

Since the next level here is RAM itself, if we assume a 100% hit rate then MR_next_level = 0, so MP(cache) = access time of RAM = 50 cycles. Further, regarding the 4 banks: had they not been there, I presume the access time would have been roughly 50 × 4 cycles.
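
For reference, here is a small sketch that reproduces the quoted 440 cycles under the only reading of the parameters I could come up with; the 1 cycle per address sent and 1 cycle per word transferred are my assumptions, not something stated in the solution:

# Attempt to reproduce the quoted "8 + 8 x 50 + 32 = 440" figure.
# Assumptions (mine, not from the solution): sending an address costs 1 cycle per
# round of bank accesses, and transferring each word back costs 1 cycle.
line_bytes = 128
word_bytes = 4
banks = 4
bank_access_cycles = 50

words_per_line = line_bytes // word_bytes         # 32 words per cache line
rounds = words_per_line // banks                  # 8 rounds, 4 words delivered per round
address_cycles = rounds * 1                       # 8
access_cycles = rounds * bank_access_cycles       # 8 x 50 = 400
transfer_cycles = words_per_line * 1              # 32

print(address_cycles + access_cycles + transfer_cycles)   # 440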

Please help me understand what I'm missing.