## Optimising tensor operations under memory constraints

Let `riem` be a free variable with the assumption `riem ∈ Arrays[{4, 4, 4, 4}]`. Let:

`val = TensorContract[TensorProduct[riem, riem, riem], {{4,5}}]`

Let `riemVals` be an actual `{4, 4, 4, 4}` tensor whose entries are symbolic values.

I’m interested in computing `val /. (riem -> riemVals)`. I’m guessing there are two ways Mathematica could do this internally:

1) Compute `v1 = TensorProduct[riemVals, riemVals, riemVals]`, then compute the result as `TensorContract[v1, {{4,5}}]`.

2) Note that `val` is equivalent to:

`TensorProduct[TensorContract[TensorProduct[riem, riem], {{4,5}}], riem]`.

Compute `v1 = TensorProduct[riemVals, riemVals]`, then `v2 = TensorContract[v1, {{4,5}}]`, then the result as `TensorProduct[v2, riemVals]`.

Now, what’s the difference between these two? Obviously they give the same result, but in the first approach we have to store a $$4^{12}$$ tensor in memory as an intermediate value, while in the second we never store anything larger than a $$4^{10}$$ tensor. The idea is that, when your maximum memory is constrained, it pays off to move `TensorContract` inwards in the expression, so that contractions happen as early as possible, before the full `TensorProduct` is formed.
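The memory trade-off is easy to demonstrate outside Mathematica. Here is a NumPy sketch of my own (an analogue, not a claim about Mathematica’s internals), using dimension 3 instead of 4 so the 12-index intermediate stays small:

```python
import numpy as np

n = 3  # small dimension so the n**12 intermediate stays tiny; the argument is the same for n = 4
rng = np.random.default_rng(0)
riem = rng.standard_normal((n, n, n, n))

# Approach 1: build the full 12-index outer product (n**12 entries),
# then contract slots 4 and 5 (0-based axes 3 and 4).
full = np.einsum('abcd,efgh,ijkl->abcdefghijkl', riem, riem, riem)
res1 = np.trace(full, axis1=3, axis2=4)

# Approach 2: contract the shared index first (peak intermediate n**6),
# then take the remaining outer product (final result n**10).
v2 = np.tensordot(riem, riem, axes=([3], [0]))  # n**6 entries
res2 = np.tensordot(v2, riem, axes=0)           # n**10 entries

print(np.allclose(res1, res2))  # True: same tensor, very different peak memory
```

For `n = 4` the first route materialises $4^{12} \approx 1.7 \times 10^7$ entries while the second never exceeds $4^{10}$; this is the same contraction-ordering question that `np.einsum`’s `optimize` option addresses in NumPy.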

My question is: does Mathematica take the memory-efficient approach for these kinds of operations? If not, is there any way to implement the evaluation/computation in a controlled manner, so that the result is calculated memory-efficiently (computing and prioritising forms where the `TensorContract` is applied early)?

## Optimising Longsword Wielding Character [closed]

Due to plot reasons, I’m entering a Curse of Strahd Campaign where I know in advance that my character will eventually be wielding the Sunsword.

The Sunsword is essentially a Sun Blade: a magic longsword with the Versatile and Finesse properties, and a wielder proficient with shortswords or longswords is proficient with it.

The current party comprises a front-lining Bear Totem Barbarian, a pacifist Cleric of the Life Domain, a Battlemaster Fighter (hand crossbow build), an Assassin Rogue, and a School of Divination Wizard (blasting/battlefield control, depending on the day).

How can I optimise my character for damage with the longsword/rapier, so that I can compete with the other min-maxers in my party? The resulting character should be able to take a solid few hits, but ultimately deal maximum single-target damage across five rounds.

Restrictions:

• Starting ability scores determined by point buy.

• All PHB races except Dragonborn and Tieflings are permitted (can use variants of PHB races from other books).

• The character must be a Monk, Paladin, or Warlock (no multiclassing).
• Feats are permitted.
• The character must primarily use a longsword or rapier.
• Max level is 10.

## 3D plotting and optimising the surface area of a spherical gyroid

This is the equation of a spherical gyroid:

`sin x cos y + sin y cos z + sin z cos x = 0`

I want to plot it in Mathematica and obtain shapes like those in the picture below. I also want to know what code to use, and how I can model it to obtain different surface areas. I am in urgent need of an answer.
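As a rough illustration of working with this implicit surface (a NumPy sketch of my own, not the Mathematica answer being asked for; in Mathematica the natural starting point would be `ContourPlot3D`), one samples the scalar field and draws its zero level set:

```python
import numpy as np

def gyroid(x, y, z):
    # Implicit gyroid field: the surface is where this expression equals zero
    return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

# Sample the field on a cubic grid; a contour plotter (Mathematica's
# ContourPlot3D, or marching cubes in scikit-image) then extracts the
# zero level set of F to produce the gyroid shapes in the picture.
t = np.linspace(-np.pi, np.pi, 64)
X, Y, Z = np.meshgrid(t, t, t, indexing='ij')
F = gyroid(X, Y, Z)

print(F.shape)  # (64, 64, 64)
```

The field changes sign across the grid, which is what guarantees the zero level surface exists inside the sampled cube; restricting the plot region to a ball gives the "spherical" variants.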

## Optimising a query for date-based selection

I am looking to speed up querying of records from a specific date range.

My table has millions of rows, and the dates are stored as Unix epoch timestamps in a text column, IIRC (I don't work weekends so can't be certain).

I am pulling in a single schedule record, which has a unique ID field.

This ID is referenced in a separate table called movement, which stores movement and timing data for the schedule.

Each schedule has its associated movement rows (usually…
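Not the asker's real schema, but the usual fix for this shape of problem is to store the epoch as an INTEGER column and index it, so a date-range filter becomes an index range scan instead of a full scan with text-to-number casts. A minimal sqlite3 sketch with hypothetical `schedule`/`movement` tables (all names are guesses):

```python
import sqlite3

# Hypothetical schema: table and column names are illustrative guesses.
con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE schedule (id INTEGER PRIMARY KEY, departs INTEGER)")  # epoch as INTEGER, not TEXT
con.execute("CREATE TABLE movement (schedule_id INTEGER, event_time INTEGER)")
con.execute("CREATE INDEX idx_schedule_departs ON schedule(departs)")  # makes range scans cheap

# One schedule per hour for ~1000 hours
con.executemany("INSERT INTO schedule VALUES (?, ?)",
                [(i, 1_600_000_000 + i * 3600) for i in range(1000)])

# A date-range query over the indexed integer column is an index range scan
rows = con.execute(
    "SELECT id FROM schedule WHERE departs >= ? AND departs < ?",
    (1_600_000_000, 1_600_000_000 + 24 * 3600),
).fetchall()
print(len(rows))  # 24 schedules fall in that 24-hour window
```

If the production column really is TEXT, comparisons either fail to use the index or compare lexicographically, so migrating the column type (or adding an indexed integer shadow column) is usually the first thing to try.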


## Optimising marketing budget using marketing and sales data

I have a dataset from a business where the variables include investment across different channels, and the dependent variable is sales.

I have developed a marketing mix model using linear regression to study the coefficients of each of these investment channels.

The next problem is to come up with an optimised distribution of the marketing budget across channels, given the total amount to be spent, with the objective of maximising sales. The data is weekly spend and sales figures covering 2 years.

Can someone please suggest the best approach to this problem?
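One caveat worth making concrete: with a purely linear model, predicted sales are a weighted sum of channel spends, so maximising under only a total-budget constraint puts the entire budget into the single highest-coefficient channel. A toy sketch (all coefficients and caps are made up) showing the greedy corner solution once per-channel caps are imposed; real marketing-mix work typically adds diminishing-returns (saturation/adstock) curves instead of caps:

```python
# Hypothetical regression coefficients and per-channel spend caps (made up).
betas = {'tv': 0.8, 'search': 1.4, 'social': 1.1}     # marginal sales per unit spend
caps  = {'tv': 60.0, 'search': 40.0, 'social': 50.0}  # max weekly spend per channel
budget = 100.0

allocation = {ch: 0.0 for ch in betas}
remaining = budget
# Greedy: with a linear objective, fund channels in decreasing order of
# marginal return until each hits its cap or the budget runs out.
for ch in sorted(betas, key=betas.get, reverse=True):
    allocation[ch] = min(caps[ch], remaining)
    remaining -= allocation[ch]

predicted_sales = sum(betas[ch] * allocation[ch] for ch in betas)
print(allocation)       # search filled first, then social, remainder to tv
print(predicted_sales)
```

The greedy fill is optimal here only because the objective is linear with box constraints; once saturation curves are introduced, a numerical optimiser (e.g. `scipy.optimize.minimize` with a budget constraint) is the standard tool.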

## Optimising the Magento 2 backend

In general, the loading times of pages in the backend are around 2 seconds. Can’t really argue with that. However, I have noticed that grid pages (such as Orders and Products) require additional time to finish rendering.

If you look at Chrome you can see that after the page loads, it takes an additional 2 seconds to render the grid with the order/product data.

Having looked at the loading assets, the following asset is the culprit. It appears to be responsible for retrieving data from the database; the waterfall data attributes the time to TTFB.

What I find interesting is that if you use any of the filters or searches after the page loads, the page renders the results almost instantly. It’s just the initial page load.

However, the reason for this post is an attempt to shed some more light on the following:

• As I have no other production environments to benchmark against, is the Load vs Finish time a reasonable expectation? i.e. should I even bother attempting to optimise this?
• If I were to optimise this, should I be looking at PHP or MySQL as the root of the TTFB delay?

Side Note: It also appears that limiting both the Number of Results and the Number of Columns has a small but positive effect on Finish time.

## Suggestions for optimising this front end code

I’ve stumbled across some UI (React.js) code that I’ve worked out has a time complexity of `O(n^4)` (I hope that’s right; if not, please do correct me). I was interested in how one might go about optimising something like this.

I know that `for()` loops perform slightly better than `array.forEach()`, and I’d assume `const` uses a bit less memory than `let` due to not needing to support re-assignment.

Would using functional operators like `map` and `filter` improve performance?

I’d be really interested to see some suggestions to this, and maybe the ideal optimisation for it?

The code:

```js
animate () {
  let movingScreens = {}
  if (this.animationsDisabled()) {
    movingScreens = {}
  } else {
    for (let i in scrollAnimationDefs.movingScreens) {
      if (i !== 'appsTop' && i !== 'appsBottom') movingScreens[i] = scrollAnimationDefs.movingScreens[i]
    }
  }

  for (let shape in movingScreens) {
    if (movingScreens[shape].scrolls) {
      for (let scroll of movingScreens[shape].scrolls) {
        const from = scroll.from * this.scrollMax
        const to = scroll.to * this.scrollMax
        if (from <= this.scrollTop && to >= this.scrollTop) {
          const styles = scroll.styles(
            (this.scrollTop - from) / (to - from)
          )

          for (let style in styles) {
            let newStyle = styles[style]

            const els = scrollAnimationDefs.movingScreens[shape].els.map(
              ref => {
                if (ref === 'screen1IfContainer') return this.animations.screen1IfContainer.current.container.current
                return this.animations[ref].current
              }
            )

            for (const i in els) {
              if (els[i]) {
                els[i].style[style] = newStyle
              }
            }
          }
        }
      }
    }
  }
}
```

## Optimising a list searching algorithm

I’ve created the following code to try and find the optimum “diet” from a game called Eco. The maximum number of calories you can have is 3000, as represented by MAXCALORIES.

Is there any way to make this code faster? The time predicted for it to compute 3000 calories is well over a few hundred years.

Note: I am trying to find the highest SP (skill points) you can get from a diet, i.e. the optimum diet. To find this, I go through every combination of diets and check how many skill points each one gives. The order of foods does not matter, and I feel this is something that is slowing the program down.

```python
import itertools
import sys
import time

sys.setrecursionlimit(10000000)

# ["Name/Carbs/Protein/Fat/Vitamins/Calories"]
available = ['Fiddleheads/3/1/0/3/80', 'Fireweed Shoots/3/0/0/4/150',
             'Prickly Pear Fruit/2/1/1/3/190', 'Huckleberries/2/0/0/6/80',
             'Rice/7/1/0/0/90', 'Camas Bulb/1/2/5/0/120', 'Beans/1/4/3/0/120',
             'Wheat/6/2/0/0/130', 'Crimini Mushrooms/3/3/1/1/200',
             'Corn/5/2/0/1/230', 'Beet/3/1/1/3/230', 'Tomato/4/1/0/3/240',
             'Raw Fish/0/3/7/0/200', 'Raw Meat/0/7/3/0/250', 'Tallow/0/0/8/0/200',
             'Scrap Meat/0/5/5/0/50', 'Prepared Meat/0/4/6/0/600',
             'Raw Roast/0/6/5/0/800', 'Raw Sausage/0/4/8/0/500',
             'Raw Bacon/0/3/9/0/600', 'Prime Cut/0/9/4/0/600',
             'Cereal Germ/5/0/7/3/20', 'Bean Paste/3/5/7/0/40', 'Flour/15/0/0/0/50',
             'Sugar/15/0/0/0/50', 'Camas Paste/3/2/10/0/60', 'Cornmeal/9/3/3/0/60',
             'Huckleberry Extract/0/0/0/15/60', 'Yeast/0/8/0/7/60',
             'Oil/0/0/15/0/120', 'Infused Oil/0/0/12/3/120',
             'Simple Syrup/12/0/3/0/400', 'Rice Sludge/10/1/0/2/450',
             'Charred Beet/3/0/3/7/470', 'Camas Mash/1/2/9/1/500',
             'Campfire Beans/1/9/3/0/500', 'Wilted Fiddleheads/4/1/0/8/500',
             'Boiled Shoots/3/0/1/9/510', 'Charred Camas Bulb/2/3/7/1/510',
             'Charred Tomato/8/1/0/4/510', 'Charred Corn/8/1/0/4/530',
             'Charred Fish/0/9/4/0/550', 'Charred Meat/0/10/10/0/550',
             'Wheat Porridge/10/4/0/10/510', 'Charred Sausage/0/11/15/0/500',
             'Fried Tomatoes/12/3/9/2/560', 'Bannock/15/3/8/0/600',
             'Fiddlehead Salad/6/6/0/14/970', 'Campfire Roast/0/16/12/0/1000',
             'Campfire Stew/5/12/9/4/1200', 'Wild Stew/8/5/5/12/1200',
             'Fruit Salad/8/2/2/10/900', 'Meat Stock/5/8/9/3/700',
             'Vegetable Stock/11/1/2/11/700', 'Camas Bulb Bake/12/7/5/4/400',
             'Flatbread/17/8/3/0/500', 'Huckleberry Muffin/10/5/4/11/450',
             'Baked Meat/0/13/17/0/600', 'Baked Roast/4/13/8/7/900',
             'Huckleberry Pie/9/5/4/16/1300', 'Meat Pie/7/11/11/5/1300',
             'Basic Salad/13/6/6/13/800', 'Simmered Meat/6/18/13/5/900',
             'Vegetable Medley/9/5/8/20/900', 'Vegetable Soup/12/4/7/19/1200',
             'Crispy Bacon/0/18/26/0/600', 'Stuffed Turkey/9/16/12/7/1500']

global AllSP, AllNames
AllSP = []
AllNames = []

def findcombs(totalNames, totalCarbs, totalProtein, totalFat,
              totalVitamins, totalNutrients, totalCalories, MAXCALORIES):
    doneit = False
    for each in available:
        each = each.split("/")
        name = each[0]
        carbs = float(each[1])
        protein = float(each[2])
        fat = float(each[3])
        vitamins = float(each[4])
        nutrients = carbs+protein+fat+vitamins
        calories = float(each[5])
#        print(totalNames, totalCalories, calories, each)
        if sum(totalCalories)+calories <= MAXCALORIES:
            doneit = True
            totalNames2 = totalNames[::]
            totalCarbs2 = totalCarbs[::]
            totalProtein2 = totalProtein[::]
            totalFat2 = totalFat[::]
            totalVitamins2 = totalVitamins[::]
            totalCalories2 = totalCalories[::]
            totalNutrients2 = totalNutrients[::]

            totalNames2.append(name)
            totalCarbs2.append(carbs)
            totalProtein2.append(protein)
            totalFat2.append(fat)
            totalVitamins2.append(vitamins)
            totalCalories2.append(calories)
            totalNutrients2.append(nutrients)
#            print("    ", totalNames2, totalCarbs2, totalProtein2, totalFat2, totalVitamins2, totalNutrients2, totalCalories2)
            findcombs(totalNames2, totalCarbs2, totalProtein2, totalFat2,
                      totalVitamins2, totalNutrients2, totalCalories2, MAXCALORIES)
        else:
            # find SP
            try:
                carbs    = sum([x * y for x, y in zip(totalCalories, totalCarbs)])    / sum(totalCalories)
                protein  = sum([x * y for x, y in zip(totalCalories, totalProtein)])  / sum(totalCalories)
                fat      = sum([x * y for x, y in zip(totalCalories, totalFat)])      / sum(totalCalories)
                vitamins = sum([x * y for x, y in zip(totalCalories, totalVitamins)]) / sum(totalCalories)
                balance  = (carbs+protein+fat+vitamins)/(2*max([carbs,protein,fat,vitamins]))
                thisSP   = sum([x * y for x, y in zip(totalCalories, totalNutrients)]) / sum(totalCalories) * balance + 12
            except:
                thisSP = 0
            # add SP and names to two lists
            AllSP.append(thisSP)
            AllNames.append(totalNames)

def main(MAXCALORIES):
    findcombs([], [], [], [], [], [], [], MAXCALORIES)
    index = AllSP.index(max(AllSP))
    print()
    print(AllSP[index], "  ", AllNames[index])

for i in range(100, 3000, 10):
    start = time.time()
    main(i)
    print("Calories:", i, ">>> Time:", time.time()-start)
```

Edit: On request, here is the formula for calculating the SP

```
Carbs = (amount1*calories1*carbs1 + ...) / (amount1*calories1 + ...)

       (N1*C1) + (N2*C2)
SP  =  -----------------  x  Balance + Base Gain
            C1 + C2

^^ Where N is the nutrients of the food (carbs+protein+fat+vitamins),
   and C is the calories of the food

Base Gain = 12 (always 12)

Balance = Sum of nutrients / (2 * highest nutrient)
```

## How would you go about revamping a navigation as long as this while keeping all the links visible?

[Screenshot: the navigation on desktop]

[Screenshot: the navigation on mobile]

## How to profile before optimising for readability

When optimising for speed, profiling first can help focus the effort on the parts of the code that bring most benefit. Since speed can be measured objectively, the benefit can also be measured objectively by profiling before and after making a change.

Is there any analogous approach when optimising for readability? I’m not expecting to find a method of saying objectively that one version of the code is more readable than another. Measuring by profiling before and after a change is therefore out of scope. Instead I’m looking for ways of identifying parts of the code that would most benefit from an improvement in readability. How to make any improvement will still be a human judgement call.

For example, even if a section of code is already more readable than other sections, it may still be the most beneficial place to make a further readability improvement if it is read more often.

In thinking this through I’ve considered the following as possibilities:

• Measuring how many other parts of the code depend on this part
• Measuring how frequently this part is edited
• Measuring how frequently parts of the code that depend on this part are edited
• Measuring how much time is spent per change to this part of the code

What I’m trying to indirectly measure is “How much time is spent reading this part of the code?”. I’m trying to think of indirect approaches because I can’t think of a currently practical way of directly measuring how long is spent reading code. I want ways of estimating this using data that is already being recorded, rather than something impractical/intrusive like setting up eye tracking for everyone who works on the codebase.

I’m assuming that code that is read often is worth making more readable, and that code that is read slowly is worth making more readable. Estimating the total length of time spent reading each part of the code seems like a good way of capturing both cases.

Measuring which parts of the code are dependent on which other parts requires knowledge of the specific programming language(s) being used. Measuring how frequently part of the code is changed or how much time is spent per change could potentially be done in a language agnostic way using version control data. Time spent per change probably can’t be measured in any meaningful way for individual changes, but I include it in case there is some way of estimating it as a long term average (perhaps from the average lifetime of raised issues that went on to involve a change to that part of the code?).

I can imagine version control based results being very tailored to the particular team that happened to be working on the code during the period of measurement, but if the team working on the code in future will be mostly the same people I can see this as being useful. In some cases tailoring to a specific team may even be preferable.

My guess is that building such a profiler will be more effort than it is worth for an individual team working on a specific project. Ideally I’d like to find an approach that can be as widely used as possible, making it worthwhile as an open source project to be used by anyone. For this reason I’d lean more towards language agnostic approaches, but I’m open to arguments that a language specific approach would be worth the effort.
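Of the candidate measurements above, edit frequency per file is the cheapest to extract in a language-agnostic way. A minimal sketch of my own (assuming the default `git log --name-only` output format, where file paths are the non-indented lines between commit headers):

```python
from collections import Counter

def edit_frequency(git_log_output):
    """Count how often each file appears in `git log --name-only` output.

    A crude proxy for reading effort: files edited often are presumably
    also read often. Assumes the default format, where file paths appear
    as non-indented, non-empty lines after each commit message.
    """
    counts = Counter()
    for line in git_log_output.splitlines():
        line = line.rstrip()
        if line and not line.startswith(('commit ', 'Author:', 'Date:', ' ')):
            counts[line] += 1
    return counts

# Canned example; real usage would pipe in
# subprocess.run(['git', 'log', '--name-only'], capture_output=True, text=True).stdout
sample = """\
commit abc123
Author: A Dev <a@example.com>
Date:   Mon Jan 1 00:00:00 2024

    Fix parser

src/parser.py
src/util.py

commit def456
Author: A Dev <a@example.com>
Date:   Tue Jan 2 00:00:00 2024

    Tweak parser again

src/parser.py
"""
print(edit_frequency(sample).most_common(1))  # [('src/parser.py', 2)]
```

Weighting these counts by the dependency measurements (files imported by frequently edited files) would approximate the "read often via its dependents" case, though that part is necessarily language-specific.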