## SER consuming way too much memory on 100-200 threads

Once again I'm facing problems with SER when it comes to memory and CPU usage. The project is fairly light; it only posts to custom auto-approve lists.

I updated to version 14.70 and the issue persists.

I think there is no need to show what it looks like at 200 threads.

The project is not a heavy one; it is just posting to an auto-approve list of about 3k links.

## How do the address bus, memory controller, RAM and CPU interact?

I was taking a course and didn't quite understand what was happening, so I researched the material and found two different representations of the topic.

[Two diagrams: CPU getting data/instructions from RAM]

The first one shows the memory controller as part of the CPU, with the data flow:

CPU -> Memory controller -> Address bus -> RAM -> Data Bus -> CPU

The second shows:

CPU -> Address Bus -> Memory Controller -> RAM -> Memory Controller -> Data Bus -> CPU

I searched for the topic but couldn't find an answer, as most people don't show the interaction between the memory controller and the address bus.

So does anybody know which one is right?
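For what it's worth, the first representation matches modern CPUs, where the memory controller is integrated on the CPU die. Here is a toy Python model of that flow; it is purely conceptual, and the class and method names are illustrative, not real hardware interfaces:

```python
class RAM:
    """Simplified DRAM: an address arriving on the address bus selects a
    cell, whose value is driven back over the data bus."""
    def __init__(self, size):
        self.cells = [0] * size

    def read(self, address):
        return self.cells[address]

    def write(self, address, value):
        self.cells[address] = value


class MemoryController:
    """Integrated memory controller: sits on the CPU die and turns CPU
    requests into address-bus/data-bus transactions with the RAM."""
    def __init__(self, ram):
        self.ram = ram

    def load(self, address):
        return self.ram.read(address)    # address bus out, data bus back

    def store(self, address, value):
        self.ram.write(address, value)


class CPU:
    """The CPU talks to its on-die memory controller directly."""
    def __init__(self, controller):
        self.mc = controller

    def load(self, address):
        return self.mc.load(address)


ram = RAM(1024)
cpu = CPU(MemoryController(ram))
ram.write(0x10, 42)
assert cpu.load(0x10) == 42
```

In this picture the address and data buses sit between the controller and the RAM, which is exactly representation 1: CPU -> memory controller -> address bus -> RAM -> data bus -> CPU.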

## Set data structure for data too large to fit into memory

I’m trying to solve the following exercise:

Given N data items and memory that can hold M/B blocks of size B, describe a data structure that needs at most N/B blocks of external memory and allows us to answer queries of the form “$$s \in S$$?” using $$\log_B N$$ I/O operations.

We may assume a static set, i.e., we do not care about efficient insertion or deletion of elements, nor about efficient construction of the data structure. Furthermore, M and B are known.

If the data is ordered, I could sort it using, e.g., an external-memory MergeSort, partition the sorted data into N/B sections, and build a B-tree in which each node occupies one block of memory. Each element of a node holds a value of the set and an external memory location pointing to the block containing all elements greater than or equal to that value and smaller than the next value in the parent block. In that case we would need $$N/B$$ external memory blocks to store the data and $$\log_B N$$ I/O operations to search for a value.

My question is how one would handle unordered data. I don't see how one could encode the data as a numeric value and then use the B-tree construction.
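As a sanity check of the ordered case, here is a small Python sketch of a static B-tree whose nodes each occupy one block; it uses O(N/B) blocks and answers membership queries with O(log_B N) block reads. The block size B = 4, the `Node` class, and the explicit I/O counter are all illustrative choices of mine:

```python
import bisect

B = 4  # elements per block: each tree node occupies one "disk block"

class Node:
    def __init__(self, keys, children=None, values=None):
        self.keys = keys          # minimum key of each child (or of the leaf)
        self.children = children  # child nodes for internal nodes, else None
        self.values = values      # the sorted elements, for leaves only

def build(sorted_data):
    """Pack sorted data into leaf blocks, then add index levels of fanout B."""
    level = [Node(keys=[chunk[0]], values=chunk)
             for chunk in (sorted_data[i:i + B]
                           for i in range(0, len(sorted_data), B))]
    while len(level) > 1:
        level = [Node(keys=[child.keys[0] for child in group], children=group)
                 for group in (level[i:i + B]
                               for i in range(0, len(level), B))]
    return level[0]

def member(root, x):
    """Membership query; returns (found, blocks_read)."""
    node, ios = root, 0
    while True:
        ios += 1                                   # one I/O per block visited
        if node.children is None:
            return x in node.values, ios
        # descend into the last child whose minimum key is <= x
        i = max(bisect.bisect_right(node.keys, x) - 1, 0)
        node = node.children[i]

root = build(list(range(0, 200, 2)))   # N = 100 sorted elements
found, ios = member(root, 100)
assert found and ios == 4              # root + 2 internal levels + leaf
```

With N = 100 and B = 4 there are 25 leaf blocks plus 10 index blocks, and every query touches exactly one block per level of the tree.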

## Histogram Memory Allocation Failure

I have a list of 3256 pairs of data from which I plotted a scatter plot with no problem. But when I tried to take the ratio of the two elements of each pair and make a Histogram of the ratios, I got a Memory Allocation Failure. Please help.

ListPlot[ABC]

Histogram[MapThread[If[#2 == 0 || #1 == 0, 0, #1/#2] &, {ABC[[All, 1]], ABC[[All, 2]]}]]

## Forcing NDSolve not to store the full solution, in order to save memory

When using NDSolve to integrate a time-varying dynamical system, I often care only about the values of the dependent variables at the end of the integration. However, NDSolve returns interpolating functions, implying that it stores all intermediate values of the dynamical variables in memory.

Is there a way to save memory by forcing NDSolve to return only the final values of the dependent variables?
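Independently of any particular solver, the memory-saving idea is to step the integrator forward and keep only the current state rather than the whole trajectory. A minimal fixed-step RK4 sketch in Python illustrating this; `rk4_final` is a made-up helper of mine, not a library function:

```python
import math

def rk4_final(f, t0, y0, t1, steps):
    """Integrate y' = f(t, y) from t0 to t1 with fixed-step RK4,
    keeping only the current state: no trajectory is stored."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y  # final value only; memory use is O(1) in the number of steps

# y' = y with y(0) = 1 has the exact solution y(1) = e
assert abs(rk4_final(lambda t, y: y, 0.0, 1.0, 1.0, 1000) - math.e) < 1e-9
```

A solver that builds an interpolating function necessarily keeps every accepted step; stepping manually like this trades away the dense output for constant memory.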

## How to safely handle non-public data in memory

Alice needs to get non-public information from Bob, validate it (let's say check that the birth date is between 1900 and now), and forward it to Charlie. There is end-to-end encryption between Alice and Bob and between Alice and Charlie.

If the computer Alice uses is some remote machine, can Alice avoid leaking the non-public information she is handling to whoever has access to that machine?

My understanding is that the moment the data is decrypted in the machine's memory, it is at the mercy of whoever has physical access to that machine. Is that correct? If so, does that mean that for handling non-public information I should never use cloud solutions and should rely only on physical machines that I own?

I see there is “homomorphic encryption”, but I understand that if, as in my example, I have to validate that a number is between x and y, that is equivalent to the number being known?

There’s a somewhat similar question here: encrypting data while in memory

But it does not focus on these questions and is implementation-specific.

## Binary integer programming problem without exponential memory

Consider a binary integer programming problem with n variables. I believe the branch-and-cut algorithm takes exponential memory in the worst case. What existing algorithms get by with less memory? Please suggest some.
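One relevant observation: a depth-first branch and bound keeps only the current path of the search tree, so its memory use is linear in n even though its worst-case running time remains exponential. A small Python sketch for maximize c·x subject to Ax ≤ b, x ∈ {0,1}^n; the crude optimistic bound and all names here are illustrative choices of mine:

```python
def solve_bip(c, A, b):
    """Maximize c.x subject to A.x <= b, x in {0,1}^n, by depth-first
    branch and bound. DFS stores only the current path, so memory is
    O(n) even though the worst-case running time is exponential."""
    n = len(c)
    best = {"value": float("-inf"), "x": None}

    def feasible(x):
        return all(sum(row[i] * x[i] for i in range(n)) <= bi
                   for row, bi in zip(A, b))

    def bound(k, val):
        # crude optimistic bound: add every remaining positive coefficient
        return val + sum(ci for ci in c[k:] if ci > 0)

    def dfs(x, k, val):
        if bound(k, val) <= best["value"]:
            return                      # prune: cannot beat the incumbent
        if k == n:
            if feasible(x):
                best["value"], best["x"] = val, x[:]
            return
        for choice in (1, 0):           # branch on variable k
            x.append(choice)
            dfs(x, k + 1, val + c[k] * choice)
            x.pop()

    dfs([], 0, 0)
    return best["value"], best["x"]

# maximize 3x0 + 2x1 + 4x2 subject to x0 + x1 + x2 <= 2
assert solve_bip([3, 2, 4], [[1, 1, 1]], [2]) == (7, [1, 0, 1])
```

Commercial branch-and-cut solvers also explore the tree with bounded-memory strategies (e.g., depth-first dives); the exponential blow-up is in time, with memory growing only with the set of open nodes.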

## Operating systems memory management: help understanding a question related to segmentation

There is this question in my textbook (Operating Systems: Internals and Design Principles by William Stallings):

Write the binary translation of the logical address 0011000000110011 under the following hypothetical memory management schemes, and explain your answer:

a. A paging system with a 512-address page size, using a page table in which the frame number happens to be half of the page number.

b. A segmentation system with a 2K-address maximum segment size, using a segment table in which bases happen to be regularly placed at real addresses: segment# + 20 + offset + 4,096.

I am having trouble understanding part b (I am not a native English speaker). Initially, I assumed that “using a segment table in which bases happen to be regularly placed at real addresses” means that the segment number in the logical address is the number of the physical segment, but then I read “segment# + 20 + offset + 4,096”, and I am not sure what to make of it. Does this mean that the base entry in the segment table contains segment# (from the logical address) + 20 + offset (from the logical address) + 4,096?
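Part (a), at least, can be checked mechanically: a 512-address page means a 9-bit offset, leaving 7 bits of the 16-bit address for the page number. A quick Python sketch (the variable names are mine, not the textbook's):

```python
logical = 0b0011000000110011   # the 16-bit logical address from the question

OFFSET_BITS = 9                # 512-address pages -> 9-bit offset
page = logical >> OFFSET_BITS               # remaining 7 bits: page number
offset = logical & ((1 << OFFSET_BITS) - 1)
frame = page // 2              # "frame number happens to be half the page number"
physical = (frame << OFFSET_BITS) | offset

assert (page, offset, frame) == (24, 51, 12)
print(f"physical address: {physical:016b}")   # 0001100000110011
```

So for part (a) the page number 0011000 (24) maps to frame 0001100 (12), and the 9-bit offset 000110011 is carried over unchanged.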

## Optimal encoding scheme for semi-rewritable memory?

Let’s define a “semi-rewritable” memory device as having the following properties:

• The initial blank media is initialised with all zeroes.
• When writing to the media, individual zeroes can be turned into ones.
• Ones cannot be turned back into zeroes.

Making a physical interpretation of this is easy. Consider, for instance, a punch card where new holes can easily be made but old holes cannot be filled.

What makes this different from a “write once, read many” device is that a used device can be rewritten (multiple times), at the cost of reduced capacity for each rewrite.

### Implicit assumptions I would like to make explicit:

1. The memory reader has no information about what was previously written on the device. It can therefore not be relied upon to use a mechanism such as “which symbols have been changed?” to encode data on a device rewrite. That is, the reader is stateless.
2. On the other hand, different “generations” of the device may use different encoding schemes as the available capacity shrinks.
3. The data stored can be assumed to be random bits.

### Sample storage scheme, to demonstrate rewrite capability:

Information in this scheme is stored on the device as pairs of binary symbols, each pair encoding one of the three states of a ternary symbol, or [DISCARDED] in the case where both symbols have been written.

The first generation thus stores data at a density of $$\frac{\log_2(3)}{2} \approx 0.79$$ times that of simple binary encoding.

When the device is rewritten, the encoder considers each pair of binary symbols in sequence. If the existing state matches the one it wants to write, the encoder considers the data written. If the pair does not match but can be modified, it writes the necessary modification; where that is not possible, it writes the symbol [DISCARDED] and moves on to the next pair, repeating until the ternary symbol has been successfully written.

As such, every rewrite would discard $$\frac{4}{9}$$ of existing capacity.

For a large number of cycles, the device would in total have stored $$\frac{9\log_2(3)}{8} \approx 1.78$$ times the data of a simple one-time binary encoding.

(For a variation of the above, one could also encode the first generation in binary and then apply this scheme on every subsequent generation. The loss from the first generation to the second would be larger, and the total life time capacity reduced, but the initial capacity would be larger).
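The sample scheme above can be simulated directly. A small Python sketch; the symbol-to-pair assignment is one arbitrary choice of mine, with the pair state 11 reserved for [DISCARDED]:

```python
BLANK, DISCARDED = 0b00, 0b11
ENCODE = {0: 0b00, 1: 0b01, 2: 0b10}          # ternary symbol -> pair state
DECODE = {v: k for k, v in ENCODE.items()}

def can_write(existing, desired):
    # bits may only go 0 -> 1, and 11 is reserved for DISCARDED
    return desired != DISCARDED and (existing & ~desired) == 0

def write_symbols(device, symbols):
    """Write ternary symbols onto a list of pair states, marking pairs
    that cannot be updated as DISCARDED. Returns pairs consumed."""
    pos = 0
    for s in symbols:
        target = ENCODE[s]
        while not can_write(device[pos], target):
            device[pos] = DISCARDED
            pos += 1
        device[pos] = target
        pos += 1
    return pos

def read_symbols(device, n):
    """Stateless reader: skip DISCARDED pairs, decode the rest."""
    out = []
    for state in device:
        if len(out) == n:
            break
        if state != DISCARDED:
            out.append(DECODE[state])
    return out

device = [BLANK] * 8
write_symbols(device, [2, 0, 1])      # first generation
assert read_symbols(device, 3) == [2, 0, 1]
write_symbols(device, [1, 2, 0])      # rewrite: mismatched pairs discarded
assert read_symbols(device, 3) == [1, 2, 0]
```

Counting over many random rewrites reproduces the $$\frac{4}{9}$$ discard rate: of the nine (existing, desired) combinations over the three live states, only five are directly writable.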

### Question:

1. Is it possible to have a better lifetime capacity than $$\frac{9\log_2(3)}{8}$$? I suspect the real asymptotic capacity is 2.

2. Can a scheme do better than a $$\frac{4}{9}$$ capacity loss between rewrites?

## Thick Client memory encryption

I came across a thick-client application recently. The application temporarily stores sensitive data (such as the username and password) in clear text in memory, and the data is flushed once the user logs out or the application is closed.

In order to improve the security of the application, is there any technique by which I could encrypt the sensitive data stored in memory, so that the data is encrypted whenever the application's memory is dumped?

Please suggest some resources for implementation.
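As a starting point, one common mitigation is to keep the secret masked with an ephemeral per-process key and unmask it only transiently, zeroing the temporary buffer afterwards. A toy Python sketch of the idea; the SHA-256-based XOR keystream here is illustrative only, and a real implementation would use an authenticated cipher from a vetted library, or OS facilities such as Windows `CryptProtectMemory` or .NET `SecureString`:

```python
import hashlib
import secrets

class SealedSecret:
    """Keeps a secret XOR-masked with an ephemeral per-process key so the
    plaintext never sits in memory longer than needed. Toy sketch only."""

    def __init__(self, plaintext: bytes):
        self._key = secrets.token_bytes(32)
        stream = self._keystream(len(plaintext))
        self._masked = bytes(b ^ k for b, k in zip(plaintext, stream))

    def _keystream(self, n: int) -> bytes:
        # hash-based stream derived from the ephemeral key (illustrative)
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def use(self, fn):
        """Unmask the secret, pass it to fn, then zero the temporary buffer."""
        stream = self._keystream(len(self._masked))
        buf = bytearray(b ^ k for b, k in zip(self._masked, stream))
        try:
            return fn(bytes(buf))
        finally:
            for i in range(len(buf)):
                buf[i] = 0   # best effort; fn's own copy is out of our control

secret = SealedSecret(b"hunter2")
assert secret.use(lambda p: p) == b"hunter2"
```

Note the limits this illustrates rather than solves: the plaintext still exists briefly during `use`, Python cannot guarantee the interpreter made no other copies, and anyone who dumps both the masked buffer and the key wins. That is why memory-encryption APIs live at the OS or runtime level.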