Finding Cache Miss Penalty in Memory with Banks


Consider a memory system with 4 GB of main memory and a 256 KB direct-mapped cache with 128-byte lines. The main memory system is organized as 4 interleaved memory banks with a 50-cycle access time, where each bank can concurrently deliver one word. The instruction miss rate is 3% and the data miss rate is 4%. We find that 40% of all instructions generate a data memory reference. a. What is the miss penalty in cycles?

Taken from here

I could not figure out how a miss penalty of 440 cycles is calculated here. The solution given just says:

Address cycles + access cycles + transfer time = 8 + 8 × 50 + 32 = 440 cycles

My understanding is Miss Penalty (MP) = time for successful access at next level + (MR_next_level × MP_next_level).

Since the next level here is RAM itself, if we assume a 100% hit rate (HR) then MR_next_level = 0. So MP(cache) = RAM access time = 50 cycles. Further, about the 4 banks: had they not been there, I presume the access time would have been ~50 × 4 cycles.

Please help me understand what I’m missing.
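For reference, one plausible reading of the quoted formula (an interpretation, not taken from the solution): assuming 4-byte words, a 128-byte line is 32 words; with 4 banks delivering 4 words per 50-cycle round, filling a line takes 8 rounds, plus one address cycle per round and one transfer cycle per word:

$ \underbrace{8}_{\text{address cycles}} + \underbrace{8 \times 50}_{\text{8 bank-access rounds}} + \underbrace{32}_{\text{transfer 32 words}} = 440 \text{ cycles}$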

When Multiples are Allowed – Finding the IDs of the People Selected in a peoplePicker

On another page, I have a people picker saving to SharePoint (when only one person can be selected). I used this tutorial (https://www.c-sharpcorner.com/article/custom-people-picker-in-sharepoint-online/)

And I’m trying to adapt that to grab the IDs of people when more than one is allowed… I “thought” I did this correctly by putting them into an array, but I’m missing something. I’d appreciate any help/suggestions.

function ensureUser() {
    var peoplePickerTopDivId = $('#peoplePickerUsers').children().children().attr('id');
    var peoplePicker = this.SPClientPeoplePicker.SPClientPeoplePickerDict[peoplePickerTopDivId];
    var users = peoplePicker.GetAllUserInfo();
    var arryuser = users[0];
    if (arryuser) {
        var payload = { 'logonName': arryuser.Key };
        $.ajax({
            url: _spPageContextInfo.webAbsoluteUrl + "/_api/web/ensureuser",
            type: "POST",
            async: false,
            contentType: "application/json;odata=verbose",
            data: JSON.stringify(payload),
            headers: {
                "X-RequestDigest": $("#__REQUESTDIGEST").val(),
                "accept": "application/json;odata=verbose"
            },
            success: function (data, status, xhr) {
                UserId = data.d.Id;
            },
            error: function (xhr, status, error) {
            }
        });
    }
    else {
        UserId = 0;
    }
}

function addRequest() {
    var userID = [];
    $('.peoplePickerUsers:selected').each(function () {
        userID.push($(this).val());
    })

    var item = {
        "__metadata": { "type": "SP.Data.NewDepartmentListItem" },
        "EmployeeID": {
            "__metadata": { "type": "Collection(Edm.String)" },
            "results": UserId
        }
    };

    $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_api/Web/lists/getbytitle('" + listName + "')/Items",
        type: "POST",
        contentType: "application/json;odata=verbose",
        data: JSON.stringify(item),
        headers: {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        },
        success: function (data) {
        },
        error: function (data) {
        }
    });
    return false;
}

C++ finding the shortest path, reducing time complexity: Dijkstra vs Floyd–Warshall algorithm?

I have an algorithm that I am performing on a graph and I am looking to do an analysis of how to speed it up and would appreciate any comments.

The algorithm iterates over every edge in the graph.

For each edge it (1) finds the shortest path from the input node of the graph to the source node of the edge (Task 1).

It then (2) finds the shortest path from the sink node of the edge to the output node of the graph (Task 2).

Doing this process for every edge is what is causing it to be slow.

I am currently using Dijkstra’s algorithm, implemented with priority queues in C++, to find the shortest paths. According to this website, the complexity of this is O(E log V).

There are a couple of ways that I think I could improve this.

  1. There is a lot of redundant calculation going on. Once I’ve found the shortest path from a node to the output, for example (Task 2), I know the shortest path for every node along the path from the node where I started. I am wondering what an efficient way to implement this is, and which C++ STL containers to use (see the sketch after this list). Is there any way to estimate the decrease in complexity?

  2. A different approach would be to use the Floyd–Warshall algorithm, which finds the shortest distances between every pair of vertices in a graph. The complexity of this is O(V^3). Would it then be quicker to look this information up when computing the shortest paths? How could I quantify how much faster this approach is?
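Regarding point 1, a common trick (sketched below in Python rather than C++, assuming non-negative edge weights) is to run Dijkstra just twice: once from the input node, and once from the output node on the reversed graph. The best path through any edge (u, v) with weight w is then d_in[u] + w + d_out[v], a constant-time lookup per edge, giving O(E log V) total instead of one Dijkstra run per edge.

import heapq

def dijkstra(adj, src):
    # adj maps u -> list of (v, w) pairs; returns shortest distances from src
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def reversed_graph(adj):
    radj = {}
    for u, nbrs in adj.items():
        for v, w in nbrs:
            radj.setdefault(v, []).append((u, w))
    return radj

def best_path_through_each_edge(adj, source, sink):
    d_in = dijkstra(adj, source)                 # Task 1 for every node at once
    d_out = dijkstra(reversed_graph(adj), sink)  # Task 2 for every node at once
    inf = float('inf')
    return {(u, v): d_in.get(u, inf) + w + d_out.get(v, inf)
            for u, nbrs in adj.items() for v, w in nbrs}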

Finding partition with maximum number of edges between sets

Given a graph (say in adjacency list form), is there an algorithm to find a partition of vertices such that the number of edges between the two sets of the partition is the maximum possible?

For example, for the following set of edges of a graph with vertex set $ \{1, 2, 3, 4, 5, 6\}$ : $ \{(1, 2), (2, 3), (3, 1), (4, 5) , (5, 6), (6, 4)\}$ , one possible “maximum” partition is $ \{\{1, 3, 4, 6\}, \{2, 5\}\}$ with $ 4$ edges between the sets $ \{1, 3, 4, 6\}$ and $ \{2, 5\}$ .
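This is the Max-Cut problem, and for small instances like the one above it can be brute-forced. A minimal Python sketch that verifies the example by trying every bipartition of the vertices (exponential in the number of vertices, so only for sanity checking):

from itertools import product

edges = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 4)]
vertices = sorted({v for e in edges for v in e})

best_cut, best_side = -1, None
for colours in product([0, 1], repeat=len(vertices)):  # every 2-colouring
    side = dict(zip(vertices, colours))
    cut = sum(side[u] != side[v] for u, v in edges)    # edges crossing the cut
    if cut > best_cut:
        best_cut, best_side = cut, side

print(best_cut)  # prints 4, matching the partition {1, 3, 4, 6} / {2, 5}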

Finding return value in terms of $n$

#include <iostream>
using namespace std;

int coffee(int n) {
    int s = n * n;
    for (int q = 0; q < n; q++)
        s = s - q;
    for (int q = n; q > 0; q--)
        s = s - q;
    return s + 2;
}

int tea(int n) {
    int r = 0;
    for (int i = 1; i < n * n * n; i = i * 2)
        r++;
    return r * r;
}

int mocha(int n) {
    int r = 0;
    for (int i = 0; i <= n; i = i + 16)
        for (int j = 0; j < i; j++)
            r++;
    return r;
}

int espresso(int n) {
    int j = 0;
    for (int k = 16; coffee(k) * mocha(k) - k <= n; k += 16) {
        j++;
        cout << "I am having so much fun with asymptotics!" << endl;
    }
    return j;
}

I am trying to find the return value in terms of $ n$ for coffee, tea, and mocha, but I am stuck right now.

I know coffee will return 2, as follows:

$ s = n^2$

$ s = n^2 - \displaystyle\sum_{q=0}^{n-1} q = n^2 - \dfrac{n(n-1)}{2}$

$ s = n^2 - \dfrac{n(n-1)}{2} - \displaystyle\sum_{q=1}^{n} q = n^2 - \dfrac{n(n-1)}{2} - \dfrac{n(n+1)}{2} = 0$

Then $ s = 0 + 2 = 2$.

But I can’t seem to figure out tea, mocha, and espresso, because they don’t follow +1 increments. Could anyone help me out with how to compute the return values in terms of $ n$ ?
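As a quick sanity check of the derivation for coffee, a direct Python transcription of that function (a translation for testing, not part of the original code) does indeed return 2 for every n:

def coffee(n):
    s = n * n
    for q in range(n):           # subtracts 0 + 1 + ... + (n-1)
        s -= q
    for q in range(n, 0, -1):    # subtracts n + (n-1) + ... + 1
        s -= q
    return s + 2

assert all(coffee(n) == 2 for n in range(100))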

Prevent finding key in C++ application binary

I need to store a key (for symmetric-key cryptography) within my C++ application binary (based on OpenCV) so that the key is as unidentifiable as possible. Can someone help me choose the key so that it is secure and it will be difficult for an attacker to find it in the binary?

If I choose plain text (regardless of length), then this will be saved as text in my binary and will be easily identifiable, am I right? So I probably want to select a key that looks like generic binary content.

I guess having multiple keys that will be used in a predefined order may also make it more secure, since the attacker has to find all the keys (which may be stored in different places within the binary).
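One way to make that idea concrete is to never store the key contiguously at all, but as several random-looking shares whose XOR is the key. A minimal Python sketch of the splitting scheme (in a real build the shares would be embedded as byte arrays at different places in the C++ binary):

import os
from functools import reduce

def split_key(key, shares=3):
    # All but one share are pure random bytes; the last share is chosen so
    # that XOR-ing all shares together reconstructs the key. Each share on
    # its own is indistinguishable from random data.
    parts = [os.urandom(len(key)) for _ in range(shares - 1)]
    last = key
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))
    parts.append(last)
    return parts

def join_key(parts):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts)

key = os.urandom(32)
assert join_key(split_key(key)) == key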

I need to choose this type of protection because of the nature of my application. I accept the risk of an attacker finding the key – in such a case I will use a new key and a new application binary – but I do not want to do this very often (hopefully never).

I know it is impossible to have 100% key security this way, but I want a basic level of security, so that the attacker at least needs to have some knowledge.

Finding an efficient randomized algorithm

I’m doing a course on randomized algorithms and I’ve encountered a question that I’m struggling to solve.

Given a system of $ m$ linear equations in $ n$ variables over the finite field $ \mathbb{F}_2$ , where every equation is of the form $ a_1x_1 + \dots + a_nx_n = b \pmod 2$ with $ x_i, a_i, b \in \{0,1\}$ , and multiplication and addition are mod 2.

The first part of the question is to find an efficient randomized algorithm that finds an assignment to the variables over $ \{0,1\}$ so that the expected number of satisfied equations is $ \frac{m}{2}$ . This part is relatively easy: choosing a random assignment over $ \{0,1\}$ gives the desired result. Each equation is satisfied with probability $ \frac{1}{2}$ , so by linearity of expectation the expected number of satisfied equations among the $ m$ equations is $ \frac{m}{2}$ .
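As an illustration of the first part, a small Python simulation on a made-up random instance (each equation is forced to have at least one nonzero coefficient, since the probability-1/2 argument needs that):

import random

def random_equation(n):
    while True:
        coeffs = [random.randint(0, 1) for _ in range(n)]
        if any(coeffs):                       # need a nonzero coefficient
            return coeffs, random.randint(0, 1)

def satisfied(eqs, x):
    # count equations satisfied by assignment x over F_2
    return sum((sum(a * v for a, v in zip(coeffs, x)) % 2) == b
               for coeffs, b in eqs)

m, n = 40, 10
eqs = [random_equation(n) for _ in range(m)]
trials = 10000
avg = sum(satisfied(eqs, [random.randint(0, 1) for _ in range(n)])
          for _ in range(trials)) / trials
print(avg)   # close to m/2 = 20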

The second part is the one I struggle with. It asks to prove that there exist $ O(\log m)$ assignments to the variables such that every equation is satisfied by at least one of them, and to propose an efficient randomized algorithm that finds those $ O(\log m)$ assignments with probability at least $ \frac{1}{2}$ . There is also a hint: let $ E_i$ be the event that a random assignment satisfies equation $ i$ ; then the events $ E_i$ $ (1 \le i \le m)$ are pairwise independent.

I really don’t know how to approach this. Any help would be appreciated.

Finding a substring in an infinite sequence

I’m trying to find a substring in an infinite sequence of numbers (similar to Substring in a infinite sequence of numbers) and am a little stuck on improving my algorithm. I know there is already an answer given in the question I linked above, but I want to try to improve my brute-force algorithm.

Given the sequence $ S = 123456789101112\ldots$ formed by concatenating the positive integers, the algorithm computes the first index of a pattern $ P$ in the string. For instance, $ \mathrm{find}(P = 456) = 3$ , as the substring $ 456$ is located at index $ 3$ . I have a very simple algorithm that generates the sequence until the substring is found, and then goes through the sequence to return the index of the substring. This algorithm is very slow for large $ N$ and I want to improve it:

def find_position(string):
    # Initialize the window with the numbers 1 .. len(string)
    windowSize = len(string)
    window = [str(i) for i in range(1, windowSize + 1)]
    result = ''.join(window)

    # Loop until the pattern appears in the sequence built so far
    while string not in result:
        # Append the next integer (note: the front of the window is never
        # actually removed, so `window` and `result` keep growing)
        window.append(str(int(window[-1]) + 1))
        # Re-join the whole window and check again (this re-scan is the slow part)
        result = ''.join(window)

    return result.index(string)

This algorithm is like a very basic Rabin–Karp without any hashing, just regular string matching. I am not sure whether adding a rolling hash would speed up this algorithm, because the slow aspect is generating and appending to the sequence and then checking whether the string is contained in the window.
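One concrete improvement along those lines: instead of re-joining and re-scanning the whole sequence on every step, keep only the last len(pattern) - 1 characters plus whatever the newly appended integer contributes, together with a running offset into the full sequence. Each step then scans only O(len(pattern)) characters. A sketch (a variant for illustration, not a drop-in replacement):

def find_position_incremental(pattern):
    m = len(pattern)
    tail = ''      # the last few characters of the sequence built so far
    offset = 0     # index in the full sequence where `tail` starts
    n = 1
    while True:
        tail += str(n)            # append the next integer's digits
        n += 1
        pos = tail.find(pattern)
        if pos != -1:
            return offset + pos
        # A future match can only start inside the last m - 1 characters,
        # so everything before that can be discarded.
        keep = m - 1
        if len(tail) > keep:
            offset += len(tail) - keep
            tail = tail[len(tail) - keep:]

assert find_position_incremental('456') == 3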

Any ideas on how to improve this algorithm?

Is finding a minimal set of seed variables for a complete deduction of a system of equations NP-complete?

Suppose we have a set of variables $ V$ . We also have a set of equations $ E$ , which are sets of at least two variables. We don’t know anything about these equations, except that if we know all but one of the variables in an equation, we can deduce the missing one.

Does there exist a set of variables $ R \subseteq V$ with $ |R| \leq k$ such that revealing these variables allows us to deduce all variables?

Is this problem NP-complete? What if all equations have degree $ \leq d$ ?

I’m ultimately interested in the search version of this problem (where we actually need to find a minimal $ R$ ).
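For small instances, the search version can at least be brute-forced by combining the deduction rule with a search over subsets, which is handy for testing conjectures. A Python sketch (exponential in $ |V|$ ; the names are invented here):

from itertools import combinations

def closure(equations, revealed):
    # Propagate: any equation with exactly one unknown variable reveals it.
    known = set(revealed)
    changed = True
    while changed:
        changed = False
        for eq in equations:
            unknown = [v for v in eq if v not in known]
            if len(unknown) == 1:
                known.add(unknown[0])
                changed = True
    return known

def minimum_seed(variables, equations):
    variables = sorted(variables)
    for k in range(len(variables) + 1):
        for subset in combinations(variables, k):
            if closure(equations, subset) >= set(variables):
                return set(subset)

For example, minimum_seed({'x', 'y', 'z'}, [{'x', 'y', 'z'}]) returns a 2-element set, since revealing any two variables lets the single equation fill in the third.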