Optimizing a complex list in C#

I need to improve the performance of a nested while loop. In my code I create a new instance on every iteration, and it takes around 25 seconds to complete. How can I boost the performance? Here is my code snippet:

while (i <= rowLen)  // rowLen = 10000, i = 0
{
    if (grid.Rows == null)
    {
        grid.Rows = new List<Row> { new Row() };
    }
    else
    {
        grid.Rows.Add(new Row());
    }

    if (i > -1)
    {
        int j = 0;
        while (j + 1 <= columnLen) // columnLen = 100
        {
            if (grid.Columns == null)
            {
                grid.Columns = new List<Column> { new Column() };
            }
            else if (grid.Columns.Count < columnLen)
            {
                grid.Columns.Add(new Column());
            }

            if (grid.Rows[i].Cells == null)
            {
                grid.Rows[i].Cells = new List<Cell> { new Cell() };
            }
            else
            {
                grid.Rows[i].Cells.Add(new Cell());
            }

            if (j > -1)
            {
                // process the grid cell data.
            }
            j++;
        }
    }
    i++;
}

This takes a lot of time. How can I reduce it? I need to initialize the grid cell by cell. Is it possible to initialize the lists with an initial capacity, like new List<int>(10), or is there any other way to reduce the time?


Advice needed with optimizing JavaScript For Loop function

I need help optimizing the following jQuery code. The below jQuery snippet has been partially optimized to use a native JavaScript for loop to make some alterations to the html instead of using the jQuery .each() method. What further techniques could be used to make the click event more performant?

$('.table__button').click(function() {
	for (var i = 0; i < $('table.table td').length; i++) {
		if (!$($('table.table td')[i]).hasClass('table__cell--disabled')) {
			$($('table.table td')[i]).css('background', $($('table.table td')[i]).attr('data-colour'));
			$($('table.table td')[i]).css('text-decoration', 'underline');
			$($('table.table td')[i]).css('font-weight', 'bold');
			$($('table.table td')[i]).css('text-align', 'center');
			$($('table.table td')[i]).addClass('is--coloured');
			$($('table.table td')[i]).html('I am now ' + $($('table.table td')[i]).attr('data-colour'));
		}
		$('.table__button').attr('disabled', 'disabled');
	}
});
/* do not change the css */

table {
	width: 100%;
	margin-bottom: 40px;
}

tr:nth-child(odd) {
	background: #f0f0f0;
}

td {
	padding: 10px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>

<!-- do not change the markup -->

<table class="table">
	<tr class="table__row">
		<td class="table__cell" data-colour="green">Make me green</td>
		<td class="table__cell" data-colour="yellow">Make me yellow</td>
	</tr>
	<tr class="table__row">
		<td class="table__cell" data-colour="blue">Make me blue</td>
		<td class="table__cell" data-colour="red">Make me red</td>
	</tr>
	<tr class="table__row">
		<td class="table__cell--disabled">Leave me alone</td>
		<td class="table__cell--disabled">Leave me alone</td>
	</tr>
</table>

<button class="table__button">Colourfy table cells</button>

Here is the codepen – https://codepen.io/kelborn/pen/EBNjKe

Optimizing Pact of the Blade’s ability to conjure any weapon

At level 3, warlocks gain the Pact Boon feature, and one of the options is Pact of the Blade. One of the benefits of the warlock’s Pact of the Blade is the ability to conjure any melee weapon the warlock likes, and for the warlock to be proficient in that weapon:

You can use your action to create a pact weapon in your empty hand. You can choose the form that this melee weapon takes each time you create it. You are proficient with it while you wield it. This weapon counts as magical for the purpose of overcoming resistance and immunity to nonmagical attacks and damage.

This received a lot of attention when discussing monster-only weapons like the ice devil’s spear, but developer commentary nixed that combo, barring perhaps if you get proficiency elsewhere and become legitimately Large yourself.

Without such weapons, though, this feature looks rather difficult to leverage: the game rewards specializing, but if, for example, you build around a high Dexterity, non-finesse weapons are basically useless to you. If you instead multiclass with fighter and take the great weapon fighting style and the Great Weapon Master feat, then non-great weapons aren’t worth your time. The Hexblade patron goes a long way towards solving the biggest problem here, multiple-ability dependency, but does nothing about the difficulty leveraging feats, and in any event the Hexblade may not be available in every campaign.

So this is my question: what is the best approach to getting the most from the ability to use any weapon I want? Ideally, a build that switches between weapons on the fly for different situations. Importantly, I want a character that has a reason for using so many weapons—if having just one weapon, or just relying on eldritch blast, is strictly-superior to a given approach, that isn’t an answer to the question—it’s a claim that the build simply is not supported by the system at all. Which may well be true, but be prepared to back that claim up.

Crucially, how having multiple weapons is advantageous is up to you: if eldritch blast cannot be beat for damage, for example, then a build that uses weapons for utility somehow would be great, where a build that goes for damage and just ends up worse than eldritch blast would not. But since I am not an expert in 5e, and don’t know the answer to my own question, I am explicitly looking for answerers’ expertise and judgment in how to best leverage this feature. I have offered my expertise and judgment on similar questions for D&D 3.5e many, many times, so I know this is a thing people are capable of doing.

Feats are allowed, and so is a limited amount of judicious multiclassing—but answers with less multiclassing are better. Ideally an answer considers a build’s progression from 1st to 20th, but an answer that focuses on a somewhat narrower range—explaining why it doesn’t work before that range or why it fails to grow beyond that range—is acceptable. For reference, but not as a restriction, my particular character is starting at 4th level.

Please be specific about what sources you use—nothing is completely off the table, including Unearthed Arcana, but answers that use fewer sources are better. In particular, anything that’s not in Player’s Handbook should note why it’s important and what, if any, substitutes might be available from Player’s Handbook-only play.

The reason I ask for those notes is that I am joining a game with mostly new players, and while the DM seems amenable to me making light usage of supplemental materials, I very much don’t want to push it or overburden him, or outshine my fellow players. Nonetheless, I worry that without the Hexblade, there just isn’t really a good way to do this. So I want to know what the options are, so I can make my own judgment about how much is worth asking for.

Optimizing the speed of the code using C++ instead of Python

I wrote the following code in Python:

import math
import time
from itertools import compress


def prime_number(n):
    '''Prime numbers up to n'''
    sieve = bytearray([True]) * (n//2+1)
    for i in range(1, int(n**0.5)//2+1):
        if sieve[i]:
            sieve[2*i*(i+1)::2*i+1] = bytearray((n//2-2*i*(i+1))//(2*i+1)+1)
    return {2, *compress(range(3, n, 2), sieve[1:])}  # Using a set to speed up membership searches


list_of_primes = prime_number(10**8)  # prime numbers up to 10**8
square_numbers = {i**2 for i in range(2, 10**4)}
for i in square_numbers:
    for j in list_of_primes:
        if (j-1) % i == 0 and j in list_of_primes:
            list_of_primes.remove(j)
        else:
            break

list_of_primes = list_of_primes - square_numbers
sophie_german_primes = prime_number(50 * 10**6)
sg_prime = {i for i in sophie_german_primes if 2*i + 1 in list_of_primes}


def test_condition(num):
    '''Test the d + num/d condition over all divisors d of num'''
    for i in range(1, int(num**0.5) + 1):
        if num % i == 0:
            if (i + num / i) not in list_of_primes:
                return False
    return True


start = time.perf_counter()

Sum = 0
for num in sg_prime:
    if num + 2 in list_of_primes:
        Sum += 2*num

new_list_primes = list_of_primes - {2*i+1 for i in sg_prime}

for num in new_list_primes:
    if (num - 1) / 2 + 2 in list_of_primes:
        if test_condition(num - 1):
            Sum += num - 1

end = time.perf_counter()
print("Time : ", end-start, "sec")
print(Sum+1)

The code works, and it runs in Python in 8.35 seconds. However, I see online that C++ is often much faster than Python (10-100x). Is it possible to speed up my code by more than 10 times just by rewriting it in C++? Is anyone willing to try that?

Optimizing a spellsword bard

I recently rolled a bard for a new game set in a low fantasy world. It’s a halfling bard with 17 CHA and 16 DEX, and +0s or +1s elsewhere. The party also includes a defensively minded Paladin, a gunship rogue with a crossbow, a necromantic offensive warlock, and a fighter inspired by 4e warlords.

The party lacks a bit in healing and utility spellcasting; I would like to fill that niche while still having a considerable presence in combat. To accomplish that, I wanted to embrace a Spellsword approach.

I would like to be able to combine spellcasting (buffs and debuffs, preferably as cantrips or low level spells) with DEX-based combat opportunities. I know that College of Valor has some options, but it seems like a huge waste (I have high Dex and therefore prefer light armor, I will not use non-finessable martial weapons, and Extra Attack doesn’t seem to fit well with eventual Battle Magic). I would like to have a “cast and slash” battlefield presence, but if I had opportunities to use my bonus action for added awesome I would consider it gladly.

Is it possible to optimize the character to accomplish those gameplay goals and if yes, how?

I am limited to the PHB, and I need very good arguments to be allowed to multiclass. I briefly mentioned a possible multiclass into Paladin and got a mixed response.

I was wondering if I could get some perspective on optimizing a function that has nested for-loops for feature selection.

I looked up a tutorial and created a function based on its script. It’s essentially used so I can select dependent variables that are a subset of a data frame. It runs, but it is very, very slow.

How would I flatten a nested for-loop such as this?

I tried implementing a version using enumerate, but it did not work. Ideally I’d like the complexity to be linear; currently it’s 2^n. I’m not sure how I can flatten the nested for loop so that I can append the results of a function to a list.

def BestSubsetSelection(X, Y, plot=True):
    k = len(X.columns)
    RSS_list = []
    R_squared_list = []
    feature_list = []
    numb_features = []

    # Loop over all possible combinations of k features
    for k in range(1, len(X.columns) + 1):
        # Looping over all possible combinations: from 11 choose k
        for combo in itertools.combinations(X.columns, k):
            # Store temporary results
            temp_results = fit_linear_reg(X[list(combo)], Y)
            # Append RSS to RSS list
            RSS_list.append(temp_results[0])
            # Append R-squared to R-squared list
            R_squared_list.append(temp_results[1])
            # Append feature/s to feature list
            feature_list.append(combo)
            # Append the number of features to the number-of-features list
            numb_features.append(len(combo))

A copy of the full implementation can be found here: https://github.com/melmaniwan/Elections-Analysis/blob/master/Implementing%20Subset%20Selections.ipynb
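The two loops above can at least be fused into a single flat loop with `itertools.chain.from_iterable`. A minimal self-contained sketch, where `score_fn` is a hypothetical stand-in for `fit_linear_reg`:

```python
import itertools

def best_subset_selection(columns, score_fn):
    """Score every non-empty subset of `columns` in a single flat loop."""
    # chain.from_iterable fuses the outer "for k" and inner "for combo"
    # loops into one iterable over all combinations.
    all_combos = itertools.chain.from_iterable(
        itertools.combinations(columns, k)
        for k in range(1, len(columns) + 1)
    )
    return [(combo, len(combo), score_fn(combo)) for combo in all_combos]

results = best_subset_selection(["a", "b", "c"], score_fn=len)
print(len(results))  # 7 subsets: 2**3 - 1
```

Note that this flattens the syntax, not the complexity: best subset selection inherently enumerates all 2^n - 1 subsets, so the number of calls to the scoring function stays exponential; only the scoring function itself (or switching to stepwise selection) can reduce the actual runtime.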

Optimizing Luhn check digit algorithm

The internet as a whole, and Code Review in particular, already provide a decent number of implementations of the Luhn check digit algorithm. They often follow a relatively “naive” strategy, in that they are mostly straightforward translations of the algorithm’s pseudo-code (as found e.g. on Wikipedia), like below:

class Luhn:

    @staticmethod
    def calculate_naive(input_):
        """Calculate the check digit using Luhn's algorithm"""
        sum_ = 0
        for i, digit in enumerate(reversed(input_)):
            digit = int(digit)
            if i % 2 == 0:
                digit *= 2
                if digit > 9:
                    digit -= 9
            sum_ += digit
        return str(10 - sum_ % 10)

I chose 6304900017740292441 (the final 1 is the actual check digit) from this site about credit card validation as an example to validate the coming changes. The mini-validation and timing of this implementation generated the following results:

assert Luhn.calculate_naive("630490001774029244") == "1"

%timeit -r 10 -n 100000 Luhn.calculate_naive("630490001774029244")
13.9 µs ± 1.3 µs per loop (mean ± std. dev. of 10 runs, 100000 loops each)

This algorithm IMHO lends itself to some optimizations. I came up with the following ones:

  1. Computing the double, and then subtracting 9 if the result is above 9, for every second digit seems to cry out for a lookup table.
  2. The string-to-int and int-to-string conversions also seem like low-hanging fruit for lookup tables, since the number of possible values is relatively limited.

This led to the following code:

class Luhn:

    DOUBLE_LUT = (0, 2, 4, 6, 8, 1, 3, 5, 7, 9)
    # CHECK_DIGIT_LUT = tuple(str(10 - i) for i in range(10))
    CHECK_DIGIT_LUT = ("0", "9", "8", "7", "6", "5", "4", "3", "2", "1")
    # STR_TO_INT_LUT = {str(i): i for i in range(10)}
    STR_TO_INT_LUT = {
        '0': 0, '1': 1, '2': 2, '3': 3, '4': 4,
        '5': 5, '6': 6, '7': 7, '8': 8, '9': 9
    }

    @classmethod
    def calculate_lut1(cls, input_):
        """Calculate the check digit using Luhn's algorithm"""
        sum_ = 0
        for i, digit in enumerate(reversed(input_)):
            digit = int(digit)
            sum_ += digit if i % 2 else cls.DOUBLE_LUT[digit]
        return str(10 - sum_ % 10)

    @classmethod
    def calculate_lut12(cls, input_):
        """Calculate the check digit using Luhn's algorithm"""
        sum_ = 0
        for i, digit in enumerate(reversed(input_)):
            digit = cls.STR_TO_INT_LUT[digit]
            sum_ += digit if i % 2 else cls.DOUBLE_LUT[digit]
        return cls.CHECK_DIGIT_LUT[sum_ % 10]

This piece of code was also validated and timed:

assert Luhn.calculate_lut1("630490001774029244") == "1"

%timeit -r 10 -n 100000 Luhn.calculate_lut1("630490001774029244")
11.9 µs ± 265 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each)

assert Luhn.calculate_lut12("630490001774029244") == "1"

%timeit -r 10 -n 100000 Luhn.calculate_lut12("630490001774029244")
7.28 µs ± 166 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each)

I found the second result especially surprising, decided to go full berserk, and went on to try to precompute as much as possible.

Since all digits of the sum apart from the last one are irrelevant, the possible intermediate results can all be pre-computed $\bmod\,10$.
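That pre-computation is safe because addition commutes with reduction mod 10; a quick brute-force check of the identity being relied on:

```python
# Keeping the running sum reduced mod 10 at every step never changes the
# final check digit, because (s + d) % 10 == ((s % 10) + d) % 10.
ok = all(
    (s + d) % 10 == ((s % 10) + d) % 10
    for s in range(100)
    for d in range(19)  # d covers a digit or its doubled value (0..18)
)
print(ok)  # True
```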

Enter this behemoth:

class Luhn:

    # ... other code from above, e.g. CHECK_DIGIT_LUT

    SUM_MOD10_LUT = {
        i: {str(j): (i + j) % 10 for j in range(10)}
        for i in range(10)
    }
    SUM_DOUBLE_MOD10_LUT = {
        i: {str(j): (i + (0, 2, 4, 6, 8, 1, 3, 5, 7, 9)[j]) % 10 for j in range(10)}
        #                ^ I don't like this. But it doesn't seem to work with DOUBLE_LUT
        for i in range(10)
    }

    @classmethod
    def calculate_lut_overkill(cls, input_):
        """Calculate the check digit using Luhn's algorithm"""
        sum_ = 0
        for i, digit in enumerate(reversed(input_)):
            if i % 2:
                sum_ = cls.SUM_MOD10_LUT[sum_][digit]
            else:
                sum_ = cls.SUM_DOUBLE_MOD10_LUT[sum_][digit]
        return cls.CHECK_DIGIT_LUT[sum_]
assert Luhn.calculate_lut_overkill("630490001774029244") == "1"

%timeit -r 10 -n 100000 Luhn.calculate_lut_overkill("630490001774029244")
5.63 µs ± 200 ns per loop (mean ± std. dev. of 10 runs, 100000 loops each)

This is where I stopped, shivered, and decided to go to The Happy Place.


Leaving aside the old wisdom on “premature optimization”: what I would like to know now is whether there are any aspects that might be optimized further that I haven’t thought of.

Would you let the later stages of the code pass in a code review? Especially the last one seems to be a good candidate for confusion. Should there be more explanation on how the lookup-tables came to be?

Of course all thoughts and feedback whatsoever are much appreciated.


This post is part of a (developing?) mini-series on check digit algorithms. You may also want to have a look at part 1 Verhoeff check digit algorithm.

Is there a canonical choice for multiclassing with the aim of optimizing an Arcane Trickster for melee?

First of all, let me say that I am new to this part of Stack Exchange and not yet completely familiar with the standards here. In particular, I hope this question is not viewed as too vague or too opinion-based. If it is, I apologize.


The general setting

  1. I want to play a rogue.
  2. My party needs me in melee.
  3. I want to choose the Arcane Trickster archetype.
  4. XGtE or SCAG are allowed.

I am aware that these basic assumptions conflict with the common consensus about strict optimization of a rogue, so we are talking about optimization under constraints, of course.

The main objective is to build this character in such a way that its most important mechanic, the sneak attack, can be exploited as efficiently as possible: maximize the opportunities to sneak attack and the likelihood to actually hit.

The main advantage a melee rogue has over a ranged rogue is that the former is more likely to trigger off-turn sneak attacks via opportunity attacks. This is why my character will be a Human (variant) and take Sentinel as his starting feat. I will also use dual-wielding so that in case of a miss, I can sacrifice my bonus action for a second chance to hit.


The general question

In view of my main objective, are there obvious ways to go from there in terms of multiclassing that are demonstrably optimal or at least superior to a singleclass character?


Doing three to five levels of Fighter Battlemaster, or two to six of Wizard Bladesinger (my DM is okay with relaxing the elf requirement), seem to me like good places to look. But maybe someone has already done the math and can give a more or less definite answer?

Thanks for reading this far and thanks even more for any helpful comment or answer.

Optimizing De Boor’s algorithm

According to De Boor’s algorithm, a B-Spline basis function can be evaluated using the formula:

$$ B_{i,0}(x) = \left\{ \begin{array}{ll} 1 & \mbox{if } t_i \le x < t_{i+1} \\ 0 & \mbox{otherwise} \end{array} \right. $$

$$ B_{i,p}(x) = \frac{x-t_i}{t_{i+p}-t_i}B_{i,p-1}(x) + \frac{t_{i+p+1}-x}{t_{i+p+1}-t_{i+1}}B_{i+1,p-1}(x) $$

where the basis function $B$ is defined for $n$ control points and a curve of degree $d$. The domain is subdivided by $n+d+1$ points called knots (the knot vector). To evaluate this, we can define a recursive function $B(i,p)$.

The B-Spline itself is represented as $S(x) = \sum_i{c_iB_{i,p}(x)}$.

To evaluate this, the algorithm on Wikipedia tells us to take the $p+1$ control points from $c_{k-p}$ to $c_k$, and then repeatedly take weighted averages of each consecutive pair, ultimately reducing them to one point.


I find this algorithm fine for one or two evaluations; however, when we draw a curve, we take hundreds of points from the curve and connect them to make it look smooth. The recursive evaluation still requires up to $p+(p-1)+(p-2)+\cdots+1 = p(p+1)/2$ weighted averages per point, right?

In my research, however, we need to evaluate only one polynomial per point, since the B-Spline is ultimately a piecewise polynomial with one polynomial piece per knot span (as I’ll show).

Suppose we take the knot vector $[0, .25, .5, .75, 1]$ and control points $[0, 1, 0]$ (degree $1$). Then we can represent the basis polynomials as:

$$ c_0B_{0,1} = 0 \ \mbox{if } 0 \le x < .25, \quad 0 \ \mbox{if } .25 \le x < .5 $$
$$ c_1B_{1,1} = (4x-1) \ \mbox{if } .25 \le x < .5, \quad (-4x+3) \ \mbox{if } .5 \le x < .75 $$
$$ c_2B_{2,1} = 0 \ \mbox{if } .5 \le x < .75, \quad 0 \ \mbox{if } .75 \le x < 1 $$

Now, we can flatten them to produce:
$$ S(x) = \sum{c_i B_{i,1}} = \left\{ \begin{array}{ll} 0 & \mbox{if } 0 \le x < .25 \\ 4x-1 & \mbox{if } .25 \le x < .5 \\ -4x+3 & \mbox{if } .5 \le x < .75 \\ 0 & \mbox{if } .75 \le x < 1 \end{array} \right. $$

Now, if we were to calculate $ S$ at any $ x$ , we can directly infer which polynomial to use and then calculate it in $ d$ multiplications and $ d+1$ additions.

I have implemented this calculation using explicit Polynomial objects in JavaScript. See https://cs-numerical-analysis.github.io/.

Source: https://github.com/cs-numerical-analysis/cs-numerical-analysis.github.io/blob/master/src/graphs/BSpline.js

I want to know why people don’t use the algorithm I described. If you calculate the polynomial representation of the B-Spline and then flatten it out, it is a one-time cost. Shouldn’t that one-time cost be offset by removing the unnecessary recursive averaging?
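To make the trade-off concrete, here is a minimal Python sketch (the linked implementation is in JavaScript; this is only an illustration) checking that the recursive Cox-de Boor evaluation and the flattened piecewise polynomial agree on the degree-1 example above:

```python
def basis(i, p, x, t):
    """Cox-de Boor recursion for the basis function B_{i,p}(x) over knots t."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * basis(i, p - 1, x, t)
    if t[i + p + 1] != t[i + 1]:
        right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * basis(i + 1, p - 1, x, t)
    return left + right

knots = [0, .25, .5, .75, 1]   # n + d + 1 = 3 + 1 + 1 = 5 knots
coeffs = [0, 1, 0]             # control points, degree 1

def s_recursive(x):
    """S(x) via the recursive basis functions."""
    return sum(c * basis(i, 1, x, knots) for i, c in enumerate(coeffs))

def s_flattened(x):
    """The same spline, pre-flattened into one polynomial per knot span."""
    if .25 <= x < .5:
        return 4 * x - 1
    if .5 <= x < .75:
        return -4 * x + 3
    return 0.0

assert all(abs(s_recursive(x) - s_flattened(x)) < 1e-12
           for x in [0, .1, .3, .45, .5, .6, .7, .9])
```

Each call to `s_flattened` is a constant number of comparisons plus one polynomial evaluation, while `s_recursive` redoes the weighted averaging every time, which is exactly the one-time-cost-versus-per-point-cost question posed above.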