Expensive combinatorial optimization of choice of subset from a large finite space

I have a fairly general question — what’s a good (gradient-free) approach to optimizing an expensive function whose input is a choice of subset from a large finite population?

That is, I have a set $X$, an integer $n$, and a function $F: Y \to \mathbb{R}$, where $Y$ is the set of all $n$-element subsets of $X$ (and $|X| \gg n$). Knowing nothing about $F$ (in fact, assume it's expensive and free of any helpful structure; in the actual use case it's noisy as well, though I'm interested in a non-noisy answer too), what are good options for maximizing $F$ on $Y$?

I'm currently doing this with a random-search-style approach, where the next subset is chosen by fixing some $m < n$ and redrawing $m$ elements of the current subset at each step; I believe "number of elements not in common" constitutes a metric on $Y$, so this seems sound, but it also looks naive to me. This problem seems general and useful enough that I'd love to be pointed to other ideas.
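Concretely, here is a minimal Java sketch of that search loop (the population, $n$, $m$, the greedy accept rule, and the toy objective are all placeholder assumptions; the real $F$ is the expensive call):

```java
import java.util.*;

public class SubsetRandomSearch {
    // Placeholder standing in for the real, expensive black-box F.
    static double F(Set<Integer> subset) {
        double s = 0;
        for (int x : subset) s += Math.sqrt(x);
        return s;
    }

    public static void main(String[] args) {
        int populationSize = 10_000, n = 50, m = 5, steps = 1_000;
        Random rng = new Random(42);

        List<Integer> X = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) X.add(i);

        // Start from a uniformly random n-subset of X.
        Collections.shuffle(X, rng);
        Set<Integer> best = new HashSet<>(X.subList(0, n));
        double bestScore = F(best);

        for (int step = 0; step < steps; step++) {
            // Neighbour move: keep n - m random members, redraw the rest.
            List<Integer> members = new ArrayList<>(best);
            Collections.shuffle(members, rng);
            Set<Integer> candidate = new HashSet<>(members.subList(0, n - m));
            while (candidate.size() < n)
                candidate.add(X.get(rng.nextInt(X.size()))); // re-draws of kept members are no-ops

            double score = F(candidate);   // the expensive evaluation
            if (score > bestScore) {       // greedy hill-climbing acceptance
                best = candidate;
                bestScore = score;
            }
        }
        System.out.println("best F = " + bestScore);
    }
}
```

Since each move changes at most $m$ elements, this is local search of radius $m$ under that "elements not in common" metric; simulated-annealing-style acceptance, or shrinking $m$ over time, would be the obvious variations.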

Android / Java: Time-efficient large-scale image processing to create a median image

I am trying to create an Android app (using Android Studio). The app captures 200–500 pictures over a set period of time and then does some processing on all the images together. Each picture is around 960×720. The goal is to create a "median" image, in which each pixel is the median of the 200–500 pixels at the same position across all the pictures.

For example, for the output image's pixel at row 10, column 23: output_image[10][23] = median{image1[10][23], image2[10][23], …}

I wrote a big code segment which does this using about 4 nested loops, and the whole procedure took about 2 hours on a real device.
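Roughly, what I wrote looks like this reconstruction (not my exact code; it assumes a single channel with all frames already decoded into int arrays):

```java
import java.util.Arrays;

class NaiveMedian {
    // Naive per-pixel median over k frames of one channel:
    // images[frame][row][col]. For ~300 frames of 960x720 this does
    // ~700,000 sorts of a ~300-element array, which is where the time goes.
    static int[][] median(int[][][] images) {
        int k = images.length, rows = images[0].length, cols = images[0][0].length;
        int[][] out = new int[rows][cols];
        int[] values = new int[k];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                for (int f = 0; f < k; f++) values[f] = images[f][r][c];
                Arrays.sort(values);
                out[r][c] = values[k / 2];
            }
        }
        return out;
    }
}
```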

A friend of mine told me that if I used MATLAB, this would take just 1 second, and he showed it to me on his computer. He was right! Since MATLAB and Java both ultimately run compiled native code on the same hardware, I am sure there is a more efficient way to implement this algorithm.

This is a question for pros, and I would appreciate your support. If anyone knows the direction to take and can give me a few pointers, it would be great. Possible solution directions (for direction 2, see the sketch below):
1. OpenCV (not sure how to use it)
2. a very efficient algorithm
3. image manipulations
4. converting the images to matrices and doing algebraic matrix addition
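For direction 2, one sketch (my own assumption about the trick, not something taken from MATLAB): 8-bit channel values lie in 0..255, so each pixel's median can come from a counting pass with no sorting at all:

```java
import java.util.Arrays;

class HistogramMedian {
    // Counting-based median for one channel, values 0..255:
    // build a 256-bin histogram per pixel (one row of pixels at a time,
    // so memory stays bounded) and walk the bins until half the frames
    // have been counted. No sorting and no per-pixel allocation.
    static int[][] median(int[][][] images) { // images[frame][row][col]
        int k = images.length, rows = images[0].length, cols = images[0][0].length;
        int half = (k + 1) / 2;                // lower median if k is even
        int[][] out = new int[rows][cols];
        int[][] hist = new int[cols][256];     // reused for every row
        for (int r = 0; r < rows; r++) {
            for (int[] h : hist) Arrays.fill(h, 0);
            for (int f = 0; f < k; f++)        // frame-major pass over the row
                for (int c = 0; c < cols; c++)
                    hist[c][images[f][r][c] & 0xFF]++;
            for (int c = 0; c < cols; c++) {
                int seen = 0, v = -1;
                while (seen < half) seen += hist[c][++v];
                out[r][c] = v;
            }
        }
        return out;
    }
}
```

For RGB you would run this once per channel, and streaming the frames from storage one row band at a time avoids holding 200–500 bitmaps in memory at once, which matters on a phone.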

Thanks!

Dealing with state data in an incremental migration from a large legacy application

I have a very large monolithic legacy application that I am tasked with breaking into many context-bounded applications on a different architecture. My management is pushing for the old and new applications to work in tandem until all of the legacy functionality has been migrated to the current architecture.

Unfortunately, as is the case with many monolithic applications, this one maintains a very large set of state data for each user interaction, and that state must be preserved as the user progresses through the functionality.

My question is: what are some ways I can responsibly support a hybrid legacy/non-legacy architecture so that, in the future state, the new individual applications are not hopelessly dependent on this shared state model?

My initial thought is to write the state data to a cache of some sort that is accessible to both the legacy application and the new applications so that they may work in harmony until the new applications have the infrastructure necessary to operate independently. I’m very skeptical about this approach so I’d love some feedback or new ways of looking at the problem.
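To make the idea concrete, something like this minimal Java interface is what I'm picturing as the seam (the names and the idea of a Redis- or database-backed implementation are placeholders, not a design I've settled on):

```java
// A deliberately narrow, shared state store that both the legacy app and
// the new applications would talk to. Hiding the backing store (Redis, a
// shared DB table, etc.) behind the interface means each new application
// could later migrate its slice of state into private storage without
// the other side noticing.
public interface UserSessionStateStore {
    void put(String sessionId, String key, String serializedValue);
    String get(String sessionId, String key);  // null if absent
    void remove(String sessionId, String key);
    void evictSession(String sessionId);       // when the interaction ends
}
```

Part of my skepticism is that if every new application reads and writes every key, this just reproduces the shared-state monolith with extra network hops, so presumably the keys would need to be namespaced per bounded context.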

Software to transfer files from one large hard drive to multiple smaller hard / flash drives?

Any suggestions for software that would allow copying files from a large drive to multiple smaller drives, preserving all attributes of the files, such as created/modified dates?

I'm looking for something that would fill one drive up, then ask for another destination for the remaining files, fill that one up, and so on until all the files are copied.

FastCopy almost works, except it doesn't seem to allow changing the destination without resetting what has already been copied (I could be wrong, though).
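If nothing off-the-shelf does this, I suppose the behaviour is simple enough to script; here is a rough Java sketch of the fill-then-prompt loop I have in mind (assuming COPY_ATTRIBUTES covers the attributes that matter; preserving the created date is OS- and filesystem-dependent):

```java
import java.nio.file.*;
import java.util.*;
import java.util.stream.Stream;

public class SpilloverCopy {
    public static void main(String[] args) throws Exception {
        Path source = Paths.get(args[0]);          // root of the large drive
        Deque<Path> pending = new ArrayDeque<>();
        try (Stream<Path> files = Files.walk(source)) {
            files.filter(Files::isRegularFile).forEach(pending::add);
        }

        Scanner in = new Scanner(System.in);
        while (!pending.isEmpty()) {
            System.out.print("Next destination drive/folder: ");
            Path dest = Paths.get(in.nextLine().trim());

            while (!pending.isEmpty()) {
                Path file = pending.peek();
                // Move on to the next drive when this file won't fit
                // (keeping a 64 MB safety margin).
                long needed = Files.size(file) + (64L << 20);
                if (Files.getFileStore(dest).getUsableSpace() < needed) break;

                Path target = dest.resolve(source.relativize(file));
                Files.createDirectories(target.getParent());
                // COPY_ATTRIBUTES preserves the last-modified time.
                Files.copy(file, target, StandardCopyOption.COPY_ATTRIBUTES);
                pending.remove();
            }
        }
    }
}
```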

MQTT large delay between messages

I have a test setup consisting of a Raspberry Pi 3 B+ as the MQTT broker, an ESP32 with some LEDs, and a web page to toggle them (the web page and the ESP32 are both clients). On the ESP32 I am using the PubSubClient library for communication. Basically, when I toggle an LED, the page publishes to the "in1" topic, to which the ESP32 is subscribed; the ESP32 reads the payload ("on" or "off") and then publishes a message back to the browser reporting the output state, as confirmation. Keep in mind that this is all happening on a local network. After some tests, I saw that it takes at most 10 ms for the browser to publish a message. However, the complete cycle, from the first published message to the confirmation message (the message the ESP32 sends reporting the output state), takes 20 to 100 ms. I've found that it has nothing to do with my code, since only 1 ms passes between the subscribed topic receiving a new message and the ESP32 publishing the output state of the designated LED.

The Raspberry Pi is running Mosquitto v1.5.5 with WebSockets enabled. I don't think it has anything to do with the configuration, since the same thing happens when I test against the Mosquitto server on my laptop.

All messages and subscriptions use QoS 0.

I just think that is a ridiculous amount of time for communication on a local network, not to mention that it's very inconsistent (20 to 100 ms).
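For reference, this is the kind of round-trip probe I'd use to measure the full cycle from a desktop (Java with the Eclipse Paho client, which is just my choice of test client; "out1" stands in for whatever topic the ESP32 publishes its confirmation on):

```java
import org.eclipse.paho.client.mqttv3.*;

public class RoundTripProbe {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://raspberrypi.local:1883", "rtt-probe");
        client.connect();

        final long[] sentAt = new long[1];
        // Subscribe to the ESP32's confirmation topic (placeholder name).
        client.subscribe("out1", 0, (topic, msg) -> {
            long rttMs = (System.nanoTime() - sentAt[0]) / 1_000_000;
            System.out.println("round trip: " + rttMs + " ms");
        });

        for (int i = 0; i < 20; i++) {
            sentAt[0] = System.nanoTime();
            client.publish("in1", "on".getBytes(), 0, false); // QoS 0, not retained
            Thread.sleep(1000);
        }
        client.disconnect();
    }
}
```

Comparing this probe's timings with the browser's would at least separate broker/network latency from anything specific to the WebSocket path.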

Very grateful for any help.

Ubuntu server crashes while uploading large files

I'm trying to repurpose my old Dell Optiplex 745 as a media server. I loaded Ubuntu 16.04.5 LTS and set up the drives with ext4, snapRAID, and Samba. I'm trying to upload movie and TV files from my newer PC running Windows 10, and the server crashes on the upload of every 3rd to 8th file.

I get different messages. Some start with "BUG: unable to handle kernel NULL pointer dereference at 00000000000001", some with "general protection fault :0000 [#1] SMP", and sometimes the server PC just spontaneously reboots. The files are 1–5 GB in size and transfer very fast (approx. 100 MB/sec) when they work.

I've tried it with the Windows firewall on and off. I've tried it with 8 GB of memory, and with 2 DIMMs removed to give 4 GB. I've tried FTP using FileZilla in Windows, and dragging and dropping with the file manager. None of it makes any difference.

I have dmesg logs from 3 crashes, which I can provide if you can tell me how to attach them; they seem too long to just paste here. I'd appreciate it if anyone can help.

I want to create a holy book app for my religion, but have a problem with a large amount of text

I am trying to create a holy book app in Android Studio, but this requires a large amount of text. The thing is, when I paste that text into strings.xml, it shows an error like "UTF8 representation for string is too long for the constant pool". My question is: is there any way to do this? Many apps have already done it, but I don't know how. I want to show the text in my main activity, as my app will have a single activity.
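One direction I've seen suggested (the file and resource names below are placeholders I made up) is to ship the text as a raw resource instead of a strings.xml entry, since a file in res/raw is not subject to that constant-pool limit, and read it when the single activity starts:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;
import java.io.*;
import java.nio.charset.StandardCharsets;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView view = new TextView(this);
        view.setText(loadHolyText());
        setContentView(view);
    }

    // Reads app/src/main/res/raw/holy_text.txt into a String.
    private String loadHolyText() {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                getResources().openRawResource(R.raw.holy_text),
                StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) sb.append(line).append('\n');
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return sb.toString();
    }
}
```

For a very long book, wrapping the TextView in a ScrollView (or paging the text) would also be needed, but the raw-resource read is the part that sidesteps the error.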