Memory policy with ArrayPool.Shared

I’ve recently released Astron, a set of libraries written from scratch, and I’d like to get my memory-policy logic reviewed (you can find a short doc about it here).

My first goal was to provide an extensible API so that users can implement their own memory policy, but I’ve also provided some base implementations, in particular one that uses the ArrayPool<byte>.Shared instance to handle the IMemoryOwner<byte> logic. Thanks to M. Gravell and his implementation, I didn’t have much work to do on that part.

According to the specifications, I first defined my memory policy interface:

public interface IMemoryPolicy
{
    Memory<T> GetArray<T>(int size);
    IMemoryOwner<T> GetOwnedArray<T>(int size);
}

You may have noticed that this interface can return an IMemoryOwner<T>, which is an interface from the BCL, so I then had to implement it.

As the memory policy may be used in a multi-threaded context, its behavior must be thread-safe, and this is where M. Gravell’s implementation comes into play. He used the Interlocked class to implement thread safety, and I did the same. So I just had to abstract his implementation to fit the specifications:

public abstract class PoolOwner<T> : IMemoryOwner<T>
{
    private readonly int _length;
    private T[] _oversized;

    public Memory<T> Memory => new Memory<T>(GetArray(), 0, _length);

    protected PoolOwner(T[] oversized, int length)
    {
        if (length > oversized.Length) throw new ArgumentOutOfRangeException(nameof(length));

        _length = length;
        _oversized = oversized;
    }

    protected abstract void ReturnToPool(T[] array);

    protected T[] GetArray() =>
        Interlocked.CompareExchange(ref _oversized, null, null)
        ?? throw new ObjectDisposedException(ToString());

    public void Dispose()
    {
        var arr = Interlocked.Exchange(ref _oversized, null);
        if (arr != null) ReturnToPool(arr);
    }
}
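As an aside, the thread-safety argument can be made concrete with a small sketch. The following is my own Java analogue of the swap-once idea above (the class and names are invented for illustration, not Astron code): the buffer reference is cleared atomically, so even if two threads race on disposal, the buffer is handed back exactly once.

```java
import java.util.concurrent.atomic.AtomicReference;

// My own Java analogue of the Interlocked-based dispose pattern
// (illustrative only; class and method names are invented for this sketch).
class OwnerSketch {
    private final AtomicReference<byte[]> oversized;

    OwnerSketch(byte[] arr) {
        oversized = new AtomicReference<>(arr);
    }

    // Like GetArray(): throw once the owner has been disposed.
    byte[] getArray() {
        byte[] arr = oversized.get();
        if (arr == null) throw new IllegalStateException("disposed");
        return arr;
    }

    // Like Dispose(): getAndSet plays the role of Interlocked.Exchange,
    // so at most one caller ever observes the non-null array.
    boolean dispose() {
        byte[] arr = oversized.getAndSet(null);
        if (arr == null) return false;  // already disposed: nothing to return
        // a real implementation would hand arr back to the pool here
        return true;
    }
}
```

Calling dispose() twice is safe here: the second call sees null and does nothing, which mirrors the Interlocked.Exchange guard in the Dispose() above.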

Now the user only has to derive from PoolOwner<T> and implement the abstract void ReturnToPool(T[] array); method to get a thread-safe IMemoryOwner<T>. This class is unit-tested; you can find the tests on my repo here. Then I just had to implement it and create the corresponding policy, which makes use of my IMemoryOwner<T> implementation:

internal sealed class SharedPoolOwner<T> : PoolOwner<T>
{
    public SharedPoolOwner(T[] oversized, int length) : base(oversized, length)
    {
    }

    protected override void ReturnToPool(T[] array) => ArrayPool<T>.Shared.Return(array);
}
public class HeapAllocWithSharedPoolPolicy : IMemoryPolicy
{
    private static IMemoryOwner<T> EmptyOwner<T>() => SimpleMemoryOwner<T>.Empty;
    private static T[] Empty<T>() => Array.Empty<T>();

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public Memory<T> GetArray<T>(int size)
    {
        if (size == 0) return Empty<T>();
        if (size < 0) throw new ArgumentOutOfRangeException(nameof(size));

        return new T[size];
    }

    public IMemoryOwner<T> GetOwnedArray<T>(int size)
    {
        if (size == 0) return EmptyOwner<T>();
        if (size < 0) throw new ArgumentOutOfRangeException(nameof(size));

        var arr = ArrayPool<T>.Shared.Rent(size);
        return new SharedPoolOwner<T>(arr, size);
    }
}

(unit-tests here)

Finally, here are my questions:

  • How relevant is the use of the shared ArrayPool in a production context? What about implementing your own pool, or allocating your own instance of ArrayPool<T>?
  • Should I add more methods to the memory policy, with some constraints on T, in order to handle more behaviors? I assume this could also be used to pool objects in the future; I wrote it for a networking context to get buffers, but it could also pool Socket objects.
  • M. Gravell monitors the leak count in his implementation. I don’t see the point of doing that: if buffers are leaked they can’t be un-leaked, so who cares?

Any suggestions are welcome; thank you very much for reading.

Why do I have a lot of memory usage?

My Ubuntu 18.04 server has 32 GB of RAM, and I see a lot of memory usage from Java. I deployed two applications, on ports 8080 and 9095; the one on 9095 uses sockets, and 8080 is a Tomcat server integrated with it. The two applications work together, not separately. I’m not in production yet, but something is clearly wrong.

I ran a load test with around 2,500 socket connections, and there was no big difference in RAM usage. But if I leave the server running for around 15 days, usage grows from 5.7 GB to 14 GB. To check whether my code has a memory leak, I took a memory dump, but I understood nothing when I analyzed it with JXRay. Note also that after I shut down both ports, Java still showed the same RAM usage.

How can I find where the problem is? What more should I do? The code is legacy, so it’s practically impossible to find the leak by reading it. Is there any tool that can show me where the problem is?

C Shared Memory Reader-Writer Segmentation Fault

Here is a synchronized reader and writer pair. The goal is to pass data between these two processes via shared memory. The writer opens a shared-memory segment described by a struct and writes some data. I am getting a Segmentation fault (core dumped) error. The code is compiled with the following commands on Ubuntu:

g++ Writer.c -o Writer -lrt
g++ Reader.c -o Reader -lrt

And these two processes are run with:

./Writer
./Reader

The Writer.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/mman.h>

int main(void){
    struct MemData{
        char* FileName;
        int LastByteLength;
        int ReadPointer;
        int WritePointer;
        char Data[512000]; //MEMORY BLOCK SIZE: 500 KB
    };
    int SD;
    struct MemData *M;
    int NumberOfBuffers=10;
    int BufferSize=51200; //FILE BUFFER SIZE 50 KB

    SD= shm_open("/program.shared", O_RDWR|O_CREAT, S_IREAD|S_IWRITE);
    if(SD< 0){
        printf("\nshm_open() error \n");
        return EXIT_FAILURE;
    }
    fchmod(SD, S_IRWXU|S_IRWXG|S_IRWXO);

    if(ftruncate(SD, sizeof(MemData))< 0){
        printf("ftruncate() error \n");
        return EXIT_FAILURE;
    }
    //THE FOLLOWING TYPECASTING AVOIDS THE NEED TO ATTACH THROUGH shmat() in shm.h HEADER I GUESS.
    M=(struct MemData*)mmap(NULL, sizeof(MemData), PROT_READ|PROT_WRITE, MAP_SHARED, SD, 0);
    if(M== MAP_FAILED){
        printf("mmap() error");
        return EXIT_FAILURE;
    }else{
        M->FileName=(char*)"xaa";
        M->LastByteLength=0;
        M->ReadPointer=-1;
        M->WritePointer=-1;
        memset(M->Data, '\0', strlen(M->Data));
    }
    /*
    FILE *FP= fopen(FileName, "rb");
    if(FP!= NULL){
        unsigned long int FilePosition;
        fseek(FP, 0, SEEK_SET);
        FilePosition=ftell(FP);
        fclose(FP);
    }
    */
    close(SD);
    return 0;
}

The Reader.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/mman.h>

int main(void){
    struct MemData{
        char* FileName;
        int LastByteLength;
        int ReadPointer;
        int WritePointer;
        char Data[512000]; //MEMORY BLOCK SIZE: 500 KB
    };
    int SD;
    struct MemData *M;
    int NumberOfBuffers=10;
    int BufferSize=51200; //FILE BUFFER SIZE 50 KB

    SD= shm_open("/program.shared", O_RDWR|O_CREAT, S_IREAD|S_IWRITE);
    if(SD< 0){
        printf("\nshm_open() error \n");
        return EXIT_FAILURE;
    }
    fchmod(SD, S_IRWXU|S_IRWXG|S_IRWXO);

    if(ftruncate(SD, sizeof(MemData))< 0){
        printf("ftruncate() error \n");
        return EXIT_FAILURE;
    }
    //THE FOLLOWING TYPECASTING AVOIDS THE NEED TO ATTACH THROUGH shmat() in shm.h HEADER I GUESS.
    M=(struct MemData*)mmap(NULL, sizeof(MemData), PROT_READ|PROT_WRITE, MAP_SHARED, SD, 0);
    if(M== MAP_FAILED){
        printf("mmap() error");
        return EXIT_FAILURE;
    }else{
        printf("\n%s", M->FileName);
        printf("\n%d", M->LastByteLength);
        printf("\n%d", M->ReadPointer);
        printf("\n%d", M->WritePointer);
    }
    /*
    FILE *FP= fopen(FileName, "rb");
    if(FP!= NULL){
        unsigned long int FilePosition;
        fseek(FP, 0, SEEK_SET);
        FilePosition=ftell(FP);
        fclose(FP);
    }
    */
    munmap(M,sizeof(MemData));
    close(SD);
    return 0;
}

Why can’t I release memory cache by /proc/sys/vm/drop_caches

Today I found my server had only a little free memory left. I ran free -h, which showed 60 GB of memory used by cache, so I ran a command to release the cache. The result looks like this:

$ free -h; sudo sync; echo 3 > sudo /proc/sys/vm/drop_caches; free -h
             total       used       free     shared    buffers     cached
Mem:          126G       114G        11G       5.6M       465M        60G
-/+ buffers/cache:        53G        72G
Swap:          75G       607M        74G
             total       used       free     shared    buffers     cached
Mem:          126G       114G        11G       5.6M       465M        60G
-/+ buffers/cache:        53G        72G
Swap:          75G       607M        74G

It seems it didn’t release any cache at all, and this server doesn’t host any virtual machines. Why? What can I do to release the cache, short of rebooting the server? (My OS is Debian 8.)
Thank you!
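One likely reason the numbers didn’t move (my assumption from the command shown, not something the post confirms): the redirection > is processed by the invoking shell before sudo ever runs, so echo 3 > sudo /proc/sys/vm/drop_caches never touches /proc at all; it just creates a local file literally named sudo. A small demonstration:

```shell
# Run in a scratch directory; nothing here needs root.
cd "$(mktemp -d)"

# The command from the question: the shell strips '> sudo' out as a
# redirection, so echo writes its remaining arguments into a file
# named "sudo" in the current directory.
echo 3 > sudo /proc/sys/vm/drop_caches
cat sudo   # prints: 3 /proc/sys/vm/drop_caches

# A form that really writes to the proc file (requires root, so shown
# commented out here):
# sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
```

Note that even with the corrected form, drop_caches only frees clean, unreferenced page-cache pages; memory the kernel considers in use stays cached.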

How should the DM manage the discrepancy between the player’s memory and their PC’s memory?

It may happen that, during a session, players don’t remember the name of an NPC they met (or, more generally, information about something that happened) during the previous session. Obviously, their PC remembers that information. Conversely, a player may have taken notes about a not-so-important event that happened several years ago (in game). In this case, it is possible that the PC does not remember it.

How should the DM manage the discrepancy between the player’s memory and their PC’s memory? In the case of 5e, should the DM have the PCs make Intelligence checks?

What does it mean by “consistent view of memory” in lock-free dequeue implementation?

I am currently reading this paper by Chase and Lev. The paper explains an implementation of a work-stealing deque. The part I don’t understand is the implementation of the steal operation.

public Object steal() {
11    long t = this.top;
12    long b = this.bottom;
13    CircularArray a = this.activeArray;
14    long size = b - t;
15    if (size <= 0) return Empty;
16    Object o = a.get(t);
17    if (! casTop(t, t+1))
18        return Abort;
      return o;
}

I am confused by this sentence on page 3 regarding this implementation:

Note that because top is read before bottom, it is guaranteed that the values read represent a consistent view of the memory. Specifically, it implies that bottom and top indeed had their observed values when bottom was read at Line 12.

I have actually found an example where loading bottom before top would be problematic. Say one thread performs a steal, and between its load of bottom and its load of top, other threads keep doing pop operations until the queue is empty; only then does the stealer load top. Because bottom was loaded first, the stealer doesn’t know the queue is already empty, while top is up to date at that moment, so casTop() succeeds. But if top is loaded first, there is no way to make the queue empty without modifying top, so casTop() fails whenever another thread tries to empty the deque in the meantime.

But I still don’t understand the sentence above. What does “consistent view of memory” mean? Even though I have already found a reason why top must be loaded before bottom, I’m afraid there is a concept here that I don’t understand.
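For intuition, here is my own heavily simplified, single-threaded Java sketch of the steal() path (fixed-size array, AtomicLong for top, no growth and no popBottom; all names invented), keeping the paper’s top-before-bottom read order. It obviously can’t reproduce the race, but it shows what the CAS on top arbitrates:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the steal() path from the paper's pseudocode
// (my simplification: no circular-array growth, no owner-side operations).
class MiniDeque {
    final Object[] a = new Object[16];
    final AtomicLong top = new AtomicLong(0);
    volatile long bottom = 0;

    // Owner-side push, simplified to a plain store plus bottom increment.
    void push(Object o) {
        a[(int) (bottom % a.length)] = o;
        bottom++;
    }

    // Returns null when empty or when the CAS loses a race ("Abort");
    // note top is read FIRST, then bottom, as in the paper.
    Object steal() {
        long t = top.get();
        long b = bottom;
        if (b - t <= 0) return null;
        Object o = a[(int) (t % a.length)];
        if (!top.compareAndSet(t, t + 1)) return null;  // lost the race
        return o;
    }
}
```

In a concurrent run, a failed compareAndSet means some other thief (or the owner’s pop) claimed the element first, which is why the paper returns Abort at that point.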

Is it safe to deploy software without memory protections such as DEP/ASLR?

Several software packages on our SOE Windows machines come with DEP and ASLR memory protections off by default. More troubling is that these applications run as SYSTEM and load their own drivers.

Given that these are expected memory protections and these are modern applications, is there any reason why they would or should be disabled?

Am I, as an administrator, able to forcibly enable them? What is the risk of having such applications in the environment?

Problems with Memory Card Reader

I just added an internal card reader (an Icy Box IB-863a) to my computer and am having some problems with it. I am running Ubuntu 16.04.

All is fine when there is an SD card in the reader at boot time. The card is mounted correctly, unmounted when taken out, remounted when inserted again, and I can switch to another card.

Problems arise when I boot the computer with an empty card reader. The boot process hangs for quite a while with a message about not being able to determine the cache mode for sdc, sdd, sde and sdf (these seem to be the devices associated with the card reader). After a while booting finishes, but cards I insert into the reader don’t get mounted and I cannot access them. In addition, the Disks app crashes on start.

Any idea how I can get this to work?