Why is it possible to see running programs in memory dumps?

Following the tutorial described in this link, I was able to extract pictures from memory dumps with Volatility and GIMP. But I don’t understand why it is possible to see other running programs in some memory dumps. For example, I was able to see Excel images (or images from other running programs) in the dump of the mspaint.exe process.

How is it possible to see other applications in one program’s memory dump?

Do you have any explanation for this?

NB: I’m just a beginner in memory forensics.

How to find program’s Application Support folder

I am wondering if there is a way to get the name of a program’s Application Support folder, perhaps in a way similar to getting the id of an app:

osascript -e 'id of app "Sublime Text"' 

gives you com.sublimetext.3, which is the name of the Caches folder, but the name of the Application Support folder is Sublime Text 3.

Is there a way to do this with osascript or something else?
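For illustration, one heuristic (an assumption on my part, not a documented guarantee) is that an app’s Application Support folder often matches the bundle’s CFBundleName, while the Caches folder matches its CFBundleIdentifier. A minimal Python sketch, using a hypothetical Info.plist rather than a real one:

```python
import plistlib

# Hypothetical Info.plist content; a real one lives at
# /Applications/<App>.app/Contents/Info.plist
sample_plist = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>CFBundleIdentifier</key>
    <string>com.sublimetext.3</string>
    <key>CFBundleName</key>
    <string>Sublime Text 3</string>
</dict>
</plist>"""

info = plistlib.loads(sample_plist)

# Bundle id -> name of the Caches folder (~/Library/Caches/<id>)
print(info["CFBundleIdentifier"])  # com.sublimetext.3
# CFBundleName -> often (but not always) the Application Support folder name
print(info["CFBundleName"])        # Sublime Text 3
```

On a real system the same key could be read with `defaults read "/Applications/Sublime Text.app/Contents/Info" CFBundleName`, though apps are free to name their Application Support folder anything they like.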

I’m pulling data from Shopify through a C# program, but the program’s performance isn’t convincing me

As the title says: I’m extracting Shopify data through a C# program (calling the Shopify API). First of all, Shopify’s API has the following limitation: it uses the leaky bucket algorithm to handle incoming requests, with a bucket size of 40 requests and a leak rate of 2 requests per second. So if the bucket is full and a new request hits the API, the API responds with an HTTP 429 error (Too Many Requests). You can read more about this in Shopify’s documentation: https://help.shopify.com/en/api/getting-started/api-call-limit.
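The leaky bucket behaviour described above can be simulated with a short Python sketch (my own simulation of the documented parameters, not Shopify’s actual implementation): a bucket of capacity 40 that drains at 2 requests per second and rejects any request that would overflow it.

```python
class LeakyBucket:
    """Simulated leaky bucket: capacity 40 requests, leaks 2 requests/second."""

    def __init__(self, capacity=40, leak_rate=2.0):
        self.capacity = capacity
        self.leak_rate = leak_rate
        self.level = 0.0       # current number of requests in the bucket
        self.last_time = 0.0   # timestamp of the last request (simulated clock)

    def allow(self, now):
        """Return True if a request arriving at time `now` fits in the bucket."""
        # Drain the bucket for the time elapsed since the last request.
        self.level = max(0.0, self.level - (now - self.last_time) * self.leak_rate)
        self.last_time = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False  # bucket full -> the API would answer HTTP 429


bucket = LeakyBucket()
# Burst 45 requests at t=0: the first 40 fit, the last 5 are rejected.
first = [bucket.allow(0.0) for _ in range(45)]
print(first.count(True))   # 40
# Ten seconds later the bucket has leaked 20 slots, so 20 more requests fit.
later = [bucket.allow(10.0) for _ in range(25)]
print(later.count(True))   # 20
```

This is why bursting 40 calls and then pausing works: the pause lets the bucket drain at 2 requests per second before the next burst.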

I’m pulling some data from Shopify by hitting their API, and the amount of data is actually fairly small (around 80–90k order transactions), but due to the API limitation it’s tricky to do this in the smallest amount of time possible. So basically my program bursts 40 calls to the Shopify API, waits for them, sleeps for a couple of seconds (10 seconds), and then bursts the next 40 calls. Because I fire the next burst without waiting long enough to avoid HTTP 429 responses, I implemented the retry pattern for HTTP calls that fail with a transient error (HTTP 429, 503, and so on), to make sure I’m doing my best not to retrieve partial results.
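The retry pattern mentioned above can be sketched like this in Python (a generic sketch with hypothetical names, not my actual C# implementation): retry only on transient status codes, with an exponentially increasing delay between attempts.

```python
import time

TRANSIENT_STATUSES = {429, 503}

def call_with_retry(do_request, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `do_request` and retry on transient HTTP errors with exponential backoff."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in TRANSIENT_STATUSES:
            return status, body
        # Wait 1s, 2s, 4s, ... before the next attempt.
        sleep(base_delay * (2 ** attempt))
    return status, body  # give up after max_attempts


# Fake endpoint: answers 429 twice, then succeeds.
responses = iter([(429, None), (429, None), (200, "ok")])
status, body = call_with_retry(lambda: next(responses), sleep=lambda s: None)
print(status, body)  # 200 ok
```

Backoff that grows per attempt (instead of a fixed sleep) tends to waste less time than pausing the whole burst loop, since only the throttled calls wait.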

So that’s what my program does; it pulled 85k transactions in 11 hours (which feels pretty bad to me), and I’m trying to see where else I can improve to reduce the processing time. I know there’s a bottleneck on Shopify’s side, but that’s out of my control… do you guys think there’s some technique/approach to improve pulling data from an API? I’d like to hear your opinions/thoughts about this. I’m totally open to any suggestion, and I’d be very thankful.

Check the code snippet below to see one of my program’s functions implementing the logic explained above. I’d also appreciate any review of the code; for example, is the performance offered by the AsParallel method from the ParallelEnumerable class good enough for the situation I’m handling?

public void BulkInsertOrdersEvents(List<long> orders, IPersistence persistence)
{
    // Guard clause: bail out when there is nothing to process.
    if (orders == null || !orders.Any())
    {
        return;
    }

    short ordersPerBurst = 40;
    int totalOrders = orders.Count;
    int ordersProcessed = 0;

    while (ordersProcessed < totalOrders)
    {
        var ordersForProcess = orders.Skip(ordersProcessed).Take(ordersPerBurst);

        ordersForProcess.AsParallel().ForAll((orderId) =>
        {
            var httpCallParameters = new Dictionary<string, object>();
            httpCallParameters.Add("orderId", orderId);

            Console.WriteLine("Started processing the order {0}", orderId);

            // Calculates how many pages of data (events) there are for the current order.
            int pages = CalculatePages(ShopifyEntity.OrderEvent, httpCallParameters);

            if (pages == 0)
            {
                return;
            }

            string getOrderEventsEndpoint =
                string.Format(ShopifyApiEndpoints.GET_ORDER_EVENTS_BY_ORDER_ID, orderId)
                + $"?limit={ShopifyApiConstants.MAX_LIMIT_ORDER_EVENTS}";
            var orderEventsBag = new ConcurrentBag<string>();

            Parallel.For(1, pages + 1, (index) =>
            {
                // Create an HTTP client to call the Shopify API.
                var httpClient = GetHttpClient();
                var httpHeaders = GetHttpHeaders();

                // Call the Shopify API to fetch this page of the order's events.
                string orderEvents = httpClient.Get(getOrderEventsEndpoint + "&page=" + index, httpHeaders);

                Console.WriteLine("Obtained page {0} of Events from Order {1}", index, orderId);

                // Put the order's events for the current page into the concurrent bag.
                orderEventsBag.Add(orderEvents);
            });

            // Merge all events pages into a single JSON.
            var orderEventsJson = JsonHelper.MergeJsons(orderEventsBag.ToArray());

            persistence.Save(orderEventsJson);

            Console.WriteLine("Finished processing the order {0}", orderId);
        });

        Thread.Sleep(TimeSpan.FromSeconds(5));

        ordersProcessed += ordersPerBurst;
    }
}

I forgot to mention that I’m also storing these results in Azure Blob Storage, but that isn’t a problem at all! Where my program spends most of its time is pulling the data from Shopify.

Thanks so much, guys!

Designing a website based on Python programs outputs

I’m just after a bit of advice really. I have programmed a script in Python that downloads market data from an API, does various bits of analysis, and spits out some tables and charts displaying that analysis. There are a couple of user inputs required, such as the time frame or market they want to analyse, but generally the script itself is quite straightforward.

My question is how to set up a website, and what would be required for users to be able to use my script via the website. I actually only want the script itself to run every 15 minutes or so, to update the data, and then users can decide which bits they want to look at.
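The “run every 15 minutes, serve cached results” idea can be sketched framework-agnostically in Python (`fetch_market_data` is a hypothetical stand-in for the actual script): the web layer calls `get_data()` on every page view, but the expensive analysis only re-runs when the cache is stale.

```python
import time

REFRESH_SECONDS = 15 * 60  # re-run the analysis at most every 15 minutes

_cache = {"data": None, "fetched_at": None}

def fetch_market_data():
    # Hypothetical stand-in for the real script that hits the market API
    # and produces the tables/charts.
    return {"tables": ["..."], "charts": ["..."]}

def get_data(now=None):
    """Return cached results, refreshing them when older than REFRESH_SECONDS."""
    now = time.monotonic() if now is None else now
    stale = _cache["fetched_at"] is None or now - _cache["fetched_at"] >= REFRESH_SECONDS
    if stale:
        _cache["data"] = fetch_market_data()
        _cache["fetched_at"] = now
    return _cache["data"]
```

A web framework such as Flask or Django would then render `get_data()` into an HTML template, so visitors never trigger the expensive download themselves; alternatively a cron job could run the script every 15 minutes and the site could simply serve the latest output files.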

I’m not after a full-blown answer, more just guidance as to what would be required: any good tutorials if anyone knows of any, the right thing to start googling, or a service that could cater for this.

I have absolutely zero HTML/CSS knowledge and a decent understanding of Python, FWIW.

Many thanks

Making XFCE GUI programs

I would like to make an application that keeps the icons on the desktop grouped within certain application-defined “fences” on Linux. I have a general idea of how to interact with the Linux operating system by making system() calls, but I don’t know how to interact with the GUI component on top of Linux.

I am running Xubuntu, which I know uses XFCE, and from the Wikipedia page I know there are some Xfce libraries, but I haven’t been able to find any examples. I was hoping someone could point me in the right direction.