## Choosing the best fit from given plots

I have some data:

x = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
y = {3.05, 21.05, 69.05, 162.05, 315.05, 543.05, 861.05, 1284.05, 1827.05, 2505.05}

which I plot with ListPlot. I have two functions, and I would like to check which of them best fits my data. The functions are:

f1[x_] := 0.5 x + (4/2) x^3
f2[x_] := 0.5 x + (5/2) x^3 + 80

When I plot them along with the data points, I get a plot:

Function f2 is plotted red, and function f1 is plotted blue. Function f2 seems to be the better fit to the data points, but is there a way to check this in Mathematica, not by using something like FindFit or NonlinearModelFit, but by calculating, say, the distances between the points and the curves, and seeing which function the data points lie closer to? Is this a correct way of thinking about fitting? Is there code that checks how close data points are to a model function, or to two of them, or even more?
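One common measure of closeness is the sum of squared residuals: evaluate each candidate function at the data's x values and sum the squared differences from the observed y values. A minimal sketch of that computation in Python (the question is about Mathematica; this just illustrates the arithmetic):

```python
# Data from the question.
xs = list(range(1, 11))
ys = [3.05, 21.05, 69.05, 162.05, 315.05, 543.05,
      861.05, 1284.05, 1827.05, 2505.05]

def f1(x):
    return 0.5 * x + (4 / 2) * x**3

def f2(x):
    return 0.5 * x + (5 / 2) * x**3 + 80

def sse(f, xs, ys):
    # Sum of squared residuals: smaller means the curve sits
    # closer to the data points overall.
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))

print(sse(f1, xs, ys))  # large: f1 drifts away from the data for big x
print(sse(f2, xs, ys))  # much smaller: f2 tracks the data closely
```

In Mathematica the analogous quantity would be something like `Total[(f1 /@ x - y)^2]`; comparing this number between candidate models is exactly the "distance to the data" idea the question describes.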

## At which specifications to look before choosing shared hosting

Which specifications are important to compare when choosing a shared hosting plan? I am not asking for provider recommendations this time. Thanks!

## choosing finite subsets of natural numbers

Let $$t>0$$ and $$\delta\in\big(0,\frac12\big)$$ be fixed. For any $$k\in\mathbb{N}$$ let $$I_k,J_k\subset\mathbb{N}$$ be finite subsets of the natural numbers, with cardinalities denoted $$|I_k|,|J_k|$$, respectively. Now define the numbers

$$\hspace{70pt}C_k:=\max\Big\{\max\limits_{i\in I_k}i^{2t}\sum\limits_{j\in J_k}j^{2t},\,\,\max\limits_{j\in J_k}j^{2t}\sum\limits_{i\in I_k}i^{2t}\Big\}$$

and

$$\hspace{70pt}D_k:=\max\Big\{|I_k|\max\limits_{j\in J_k}j^{4t+2\delta},\,\,|J_k|\max\limits_{i\in I_k}i^{4t+2\delta}\Big\}$$.

Aim: choose the subsets $$I_k,J_k$$ so that

$$\hspace{100pt}R_k:=\frac{C_k}{D_k}\to\infty\quad\text{as}\quad k\to\infty.$$
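As a sanity check (my own computation, not part of the question), the naive choice $$I_k=J_k=\{1,\dots,k\}$$ fails, which shows the subsets must be chosen more carefully:

```latex
% Sanity check: the "obvious" choice I_k = J_k = {1,...,k} does not work.
% Both entries of the max in C_k coincide by symmetry:
\[
C_k = k^{2t}\sum_{j=1}^{k} j^{2t} \sim \frac{k^{4t+1}}{2t+1},
\qquad
D_k = |I_k|\max_{j\le k} j^{4t+2\delta} = k\cdot k^{4t+2\delta} = k^{4t+2\delta+1},
\]
\[
R_k = \frac{C_k}{D_k} \sim \frac{1}{2t+1}\,k^{-2\delta} \longrightarrow 0
\quad\text{as } k\to\infty,
\]
% since \delta > 0, so R_k tends to 0 rather than to infinity.
```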

## Choosing design pattern/architect for my Python streaming/image processing project

I have a Python project that will receive a stream of image data (sent as MessagePack-RPC) from an embedded system. My application will then process these images and stream out information about the particles in the images. It also has other command/message communication channels.

I am learning Python for this project and hope to learn more by working on it. I have experience with image processing in LabVIEW but not much in Python. I understand that I need to create a thread to read in the image data and pass it to a queue. Another thread will read the data from the queue and process it. One more thread will stream the results out. There are also other threads that receive messages/commands, process them, and respond.

I would appreciate guidance from your experience on which design pattern or architecture I should use for my project.

Should I use multi-threading or multiprocessing in this case?
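The thread-plus-queue design described above is the classic producer–consumer pipeline. A minimal sketch using only the standard library (the stage names and the "processing" step are placeholders for the real msgpack-RPC reader and particle analysis):

```python
import queue
import threading

def acquire(frames, out_q):
    # Producer: stand-in for the thread reading image frames off the wire.
    for frame in frames:
        out_q.put(frame)
    out_q.put(None)  # sentinel: no more frames

def process(in_q, out_q):
    # Worker: stand-in for particle analysis (here it just doubles the value).
    while True:
        frame = in_q.get()
        if frame is None:
            out_q.put(None)  # propagate shutdown downstream
            break
        out_q.put(frame * 2)

def publish(in_q, results):
    # Final consumer: stand-in for the thread streaming results out.
    while True:
        item = in_q.get()
        if item is None:
            break
        results.append(item)

raw_q, processed_q, results = queue.Queue(), queue.Queue(), []
threads = [
    threading.Thread(target=acquire, args=(range(5), raw_q)),
    threading.Thread(target=process, args=(raw_q, processed_q)),
    threading.Thread(target=publish, args=(processed_q, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # [0, 2, 4, 6, 8]
```

On the threading-vs-multiprocessing question: the I/O stages (receiving and streaming) are fine as threads, but because image processing is CPU-bound and Python threads share the GIL, the processing stage can later be swapped to `multiprocessing` with `multiprocessing.Queue` while keeping the same pipeline shape.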

## Choosing algorithms and/or data structures at runtime based on input characteristics

I’ve been reading about Adaptive Computing, i.e. the idea of computer programs taking feedback from the environment at runtime to improve their output in some way. More precisely, my current focus is Self-Optimization: how to write programs that are able to choose the best algorithm/data structure in response to changes in the input profile. At the low end, simple heuristics are used to apply specific algorithms in special cases, e.g. Tim/Quick/MergeSort switching to Insertion Sort (which is $$O(n^2)$$) when the partition size is below a certain threshold. At the other extreme we have JIT compilers that optimize/deoptimize code at runtime according to certain metrics.

However, I haven’t found so far any examples of “high-level” decisions, like automatically choosing between two distinct implementations of an algorithm or a data structure at runtime. For example, think about an AdaptiveList object with the usual operations (add, remove, ...) and array-backed storage. If the program keeps inserting elements in the middle of the list (which requires moving a lot of data to make room for the new element), the AdaptiveList may choose to move the data out of the array into a linked list. If the usage pattern changes again, the AdaptiveList may decide to go back to the array storage.
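A toy sketch of that idea in Python (the class name, threshold, and heuristic are all invented for illustration; the switch here is between a Python list and a `collections.deque`, which is cheap to insert into at the front):

```python
from collections import deque

class AdaptiveList:
    """Toy self-optimizing container: starts array-backed (list) and
    switches to a deque once the usage pattern favors front insertion."""

    THRESHOLD = 10  # invented heuristic: switch after this many front inserts

    def __init__(self):
        self._data = []          # array-backed storage to start with
        self._front_inserts = 0

    def insert(self, index, value):
        if index == 0 and self._data:
            self._front_inserts += 1
            if isinstance(self._data, list) and self._front_inserts >= self.THRESHOLD:
                # Usage pattern observed at runtime: migrate storage to a
                # deque, where front insertion is O(1) instead of O(n).
                self._data = deque(self._data)
        self._data.insert(index, value)

    def backend(self):
        return type(self._data).__name__

    def to_list(self):
        return list(self._data)

al = AdaptiveList()
for i in range(12):
    al.insert(0, i)           # keep inserting at the front
print(al.backend())           # 'deque' once the threshold is crossed
print(al.to_list()[:3])       # [11, 10, 9]
```

A real implementation would also track when the pattern shifts back (e.g. mostly random access) and migrate to the array again, amortizing the migration cost against the observed savings.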

The closest thing I’ve been able to find (other than JIT compilers, of course) is projects like ATLAS and FFTW, where the code generation/algorithm selection is done a priori and never revisited. Maybe I’m the first one to entertain such fantasies, but I doubt it. Are you aware of other papers/projects that have investigated this idea?

## What are the benefits of choosing a higher wireless multicast transmission rate?

Without considering reliability (assume there is no packet loss), does a higher sending rate mean that more data can be sent per unit of time by the wireless router?

## General question on choosing an Assembly language based on my goal

So I know assembly is a big field, and it seems learning assembly is like learning high-level programming: you don’t need to learn them all; learning a couple is enough.

The thing is, a high-level language is easier to grasp and apply in practice, but I am having a hard time with assembly. I learned ARM at university, but I am not sure which version it was, and it wasn’t practical in real life. It was also too primitive, and it doesn’t even help me read other assembly dialects: when I read other assembly, I am lost among the many instructions I have never seen before.

But I want to learn a language that I can embed in my code or easily build stand-alone programs with, like C++ or C# on Windows; basically a practical language, and one with a live community rather than an abandoned one. I have learned some programming languages that turned out to be useless for me, like CLIPS, Ada, and Prolog.

So what are the useful and practical assembly languages to learn for Windows? Is FASM a good start? Also note that I do not yet know what I want to do with it; once I have a good command of it I will decide. I may use it for images, an AI project, or anywhere else my path leads. I already know C++ (I use it for UE4 and AI code implementations), C# (desktop/web apps), and Java (desktop).

## MongoDB query choosing the wrong index in the winning plan, even though executionTimeMillisEstimate is lower for the other index?

My MongoDB query chooses the wrong index in the winning plan. I have two indexes on the same field: one a single-field index, and one a compound index with another field.

E.g.:

• Field name: Field1, contains "Yes" or "No"
• Field name: Field2, contains 0, 1, 2, or 3

• Index 1: {'Field1': 1} (single-field index)
• Index 2: {'Field1': 1, 'Field2': 1} (compound index)

On the search query {'Field1': 'Yes'} it uses the compound index instead of the single-field index. The query execution plan is below.

```json
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "xxxx",
    "indexFilterSet" : false,
    "parsedQuery" : { "Field1" : { "$eq" : "Yes" } },
    "winningPlan" : {
      "stage" : "FETCH",
      "inputStage" : {
        "stage" : "IXSCAN",
        "keyPattern" : { "Field1" : 1, "Field2" : 1 },
        "indexName" : "Field1_Field2_1",
        "isMultiKey" : false,
        "multiKeyPaths" : { "Field1" : [], "Field2" : [] },
        "isUnique" : false,
        "isSparse" : false,
        "isPartial" : false,
        "indexVersion" : 2,
        "direction" : "forward",
        "indexBounds" : {
          "Field1" : [ "[\"Yes\", \"Yes\"]" ],
          "Field2" : [ "[MinKey, MaxKey]" ]
        }
      }
    },
    "rejectedPlans" : [
      {
        "stage" : "FETCH",
        "inputStage" : {
          "stage" : "IXSCAN",
          "keyPattern" : { "Field1" : 1 },
          "indexName" : "Field1_1",
          "isMultiKey" : false,
          "multiKeyPaths" : { "Field1" : [] },
          "isUnique" : false,
          "isSparse" : false,
          "isPartial" : false,
          "indexVersion" : 2,
          "direction" : "forward",
          "indexBounds" : { "Field1" : [ "[\"Yes\", \"Yes\"]" ] }
        }
      }
    ]
  },
  "executionStats" : {
    "executionSuccess" : true,
    "nReturned" : 762490,
    "executionTimeMillis" : 379131,
    "totalKeysExamined" : 762490,
    "totalDocsExamined" : 762490,
    "executionStages" : {
      "stage" : "FETCH",
      "nReturned" : 762490,
      "executionTimeMillisEstimate" : 377572,
      "works" : 762491,
      "advanced" : 762490,
      "needTime" : 0,
      "needYield" : 0,
      "saveState" : 16915,
      "restoreState" : 16915,
      "isEOF" : 1,
      "invalidates" : 0,
      "docsExamined" : 762490,
      "alreadyHasObj" : 0,
      "inputStage" : {
        "stage" : "IXSCAN",
        "nReturned" : 762490,
        "executionTimeMillisEstimate" : 1250,
        "works" : 762491,
        "advanced" : 762490,
        "needTime" : 0,
        "needYield" : 0,
        "saveState" : 16915,
        "restoreState" : 16915,
        "isEOF" : 1,
        "invalidates" : 0,
        "keyPattern" : { "Field1" : 1, "Field2" : 1 },
        "indexName" : "Field1_Field2_1",
        "isMultiKey" : false,
        "multiKeyPaths" : { "Field1" : [], "Field2" : [] },
        "isUnique" : false,
        "isSparse" : false,
        "isPartial" : false,
        "indexVersion" : 2,
        "direction" : "forward",
        "indexBounds" : {
          "Field1" : [ "[\"Yes\", \"Yes\"]" ],
          "Field2" : [ "[MinKey, MaxKey]" ]
        },
        "keysExamined" : 762490,
        "seeks" : 1,
        "dupsTested" : 0,
        "dupsDropped" : 0,
        "seenInvalidated" : 0
      }
    },
    "allPlansExecution" : [
      {
        "nReturned" : 101,
        "executionTimeMillisEstimate" : 0,
        "totalKeysExamined" : 101,
        "totalDocsExamined" : 101,
        "executionStages" : {
          "stage" : "FETCH",
          "nReturned" : 101,
          "executionTimeMillisEstimate" : 0,
          "works" : 101,
          "advanced" : 101,
          "needTime" : 0,
          "needYield" : 0,
          "saveState" : 10,
          "restoreState" : 10,
          "isEOF" : 0,
          "invalidates" : 0,
          "docsExamined" : 101,
          "alreadyHasObj" : 0,
          "inputStage" : {
            "stage" : "IXSCAN",
            "nReturned" : 101,
            "executionTimeMillisEstimate" : 0,
            "works" : 101,
            "advanced" : 101,
            "needTime" : 0,
            "needYield" : 0,
            "saveState" : 10,
            "restoreState" : 10,
            "isEOF" : 0,
            "invalidates" : 0,
            "keyPattern" : { "Field1" : 1 },
            "indexName" : "Field1_1",
            "isMultiKey" : false,
            "multiKeyPaths" : { "Field1" : [] },
            "isUnique" : false,
            "isSparse" : false,
            "isPartial" : false,
            "indexVersion" : 2,
            "direction" : "forward",
            "indexBounds" : { "Field1" : [ "[\"Yes\", \"Yes\"]" ] },
            "keysExamined" : 101,
            "seeks" : 1,
            "dupsTested" : 0,
            "dupsDropped" : 0,
            "seenInvalidated" : 0
          }
        }
      },
      {
        "nReturned" : 101,
        "executionTimeMillisEstimate" : 260,
        "totalKeysExamined" : 101,
        "totalDocsExamined" : 101,
        "executionStages" : {
          "stage" : "FETCH",
          "nReturned" : 101,
          "executionTimeMillisEstimate" : 260,
          "works" : 101,
          "advanced" : 101,
          "needTime" : 0,
          "needYield" : 0,
          "saveState" : 10,
          "restoreState" : 10,
          "isEOF" : 0,
          "invalidates" : 0,
          "docsExamined" : 101,
          "alreadyHasObj" : 0,
          "inputStage" : {
            "stage" : "IXSCAN",
            "nReturned" : 101,
            "executionTimeMillisEstimate" : 0,
            "works" : 101,
            "advanced" : 101,
            "needTime" : 0,
            "needYield" : 0,
            "saveState" : 10,
            "restoreState" : 10,
            "isEOF" : 0,
            "invalidates" : 0,
            "keyPattern" : { "Field1" : 1, "Field2" : 1 },
            "indexName" : "Field1_Field2_1",
            "isMultiKey" : false,
            "multiKeyPaths" : { "Field1" : [], "Field2" : [] },
            "isUnique" : false,
            "isSparse" : false,
            "isPartial" : false,
            "indexVersion" : 2,
            "direction" : "forward",
            "indexBounds" : {
              "Field1" : [ "[\"Yes\", \"Yes\"]" ],
              "Field2" : [ "[MinKey, MaxKey]" ]
            },
            "keysExamined" : 101,
            "seeks" : 1,
            "dupsTested" : 0,
            "dupsDropped" : 0,
            "seenInvalidated" : 0
          }
        }
      }
    ]
  },
  "serverInfo" : {
    "host" : "xxxxx",
    "port" : 27017,
    "version" : "3.6.0",
    "gitVersion" : "xxxxx"
  },
  "ok" : 1.0
}
```

The executionTimeMillisEstimate for the single-field index is 0, whereas the executionTimeMillisEstimate for the compound index is 260, so why does it still use the compound index in the winning plan? My query filters only on the single field that has its own index; why does it use the compound index?

## Choosing $$r$$ things from a set containing $$l$$ things of one kind, $$m$$ things of a different kind, $$n$$ things of a third kind, …

Here is a statement from a textbook that I’m referring to:

From a set containing $$l$$ things of one kind, $$m$$ things of a different kind, $$n$$ things of a third kind, and so on, the number of ways of choosing $$r$$ things out of this set of objects is the coefficient of $$x^r$$ in the expansion of $$(1+x+x^2+\cdots+x^l)(1+x+x^2+\cdots+x^m)(1+x+x^2+\cdots+x^n).$$

Can someone please explain the intuition behind this? How can it be derived?
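As a concrete sanity check (a small case I constructed, not from the textbook): the coefficient of $$x^r$$ should equal the number of triples $$(i,j,k)$$ with $$i+j+k=r$$, where $$i$$, $$j$$, $$k$$ are how many objects of each kind are taken. A short script verifies this numerically:

```python
from itertools import product

def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists (index = power).
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Small hypothetical case: l=2, m=1, n=3 identical copies of each kind.
counts = [2, 1, 3]
poly = [1]
for c in counts:
    poly = poly_mul(poly, [1] * (c + 1))  # factor 1 + x + ... + x^c

def brute(r, counts):
    # Count selections directly: pick 0..c of each kind, total exactly r.
    return sum(1 for picks in product(*(range(c + 1) for c in counts))
               if sum(picks) == r)

for r in range(len(poly)):
    assert poly[r] == brute(r, counts)
print(poly)  # [1, 3, 5, 6, 5, 3, 1]: coefficients of x^0 .. x^6
```

The intuition behind the agreement: choosing the $$x^i$$ term from the first factor means "take $$i$$ objects of the first kind", and multiplying the factors adds the exponents, so each way of forming $$x^r$$ corresponds to exactly one valid selection of $$r$$ objects.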

## Choosing compression method and settings for mp4 files

I’ve got a hard drive filled up with videos, and want them to take up as little space as possible.

Video encoding:

• Around 20000 kbps datarate and bitrate
• 30 frames/second
• H.264 AVC

Audio encoding:

• AAC
• 96 kbps
• Stereo channels
• 48 KHz sample rate

They’re separated into multiple folders, and here’s an example folder:

• 29 files
• Total duration of around 6 hours
• Total size of around 25 GB
• File size tends to vary between 650 MB and 1.25 GB

What I want to know is how to choose settings like dictionary size, word size, solid block size, etc. I’m assuming that 7z with LZMA2 is best for the archive format and compression method.
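For reference, those settings correspond to 7-Zip's `-m` switches. A hypothetical invocation (archive name and input path invented) with an enlarged dictionary, word size, and solid block size might look like:

```shell
# Hypothetical example, assuming the 7z command-line tool is installed.
# -md = dictionary size, -mfb = word size (fast bytes),
# -ms = solid block size; larger values trade RAM and time for ratio.
7z a -t7z -m0=lzma2 -mx=9 -md=256m -mfb=64 -ms=4g videos.7z ./videos/
```

Note that H.264/AAC streams are already compressed, so general-purpose compressors typically shave off only a few percent regardless of these settings.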