Estimating the bit operations using big O notation

Using big-O notation, estimate, in terms of a simple function of $n$, the number of bit operations required to compute $3^n$ in binary.

I need some help with the above question. The number of bit operations required to multiply two $k$-bit numbers is $O(k^2)$. In the first step I am multiplying two 2-bit numbers, in the second step a 4-bit and a 2-bit number, and so on. So I feel the total number of bit operations will be $O(k^2) + O(k^2 \cdot k) + \dots + O(k^{n-1} \cdot k)$ with $k = 2$.

How will the above sum be estimated as a function of n?
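A quick way to sanity-check the sum is to tally the cost model directly. A minimal sketch in Python — the function name is mine, and charging $k \cdot m$ "bit operations" to multiply a $k$-bit by an $m$-bit number is an illustrative schoolbook cost model, not an exact bit-operation count:

```python
def bit_ops_to_compute_3_pow_n(n):
    """Tally a schoolbook-multiplication cost model for computing 3**n
    as 3 * 3 * ... * 3, one factor at a time: multiplying a k-bit number
    by an m-bit number is charged k * m "bit operations"."""
    total, acc = 0, 3
    for _ in range(n - 1):
        # Current partial product (acc) times the 2-bit factor 3.
        total += acc.bit_length() * (3).bit_length()
        acc *= 3
    return total
```

Since $3^i$ has roughly $i \log_2 3$ bits, step $i$ is charged $O(i)$ under this model, so the total grows like $\sum_{i=1}^{n-1} O(i) = O(n^2)$ rather than exponentially in $n$.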

Estimating a user’s standard deviation given avg, min, max for various tests

Given a series of tests, where we are given one user’s score, the overall minimum, the overall maximum, and the overall average, how would I estimate the user’s standard deviation on total score (i.e. the sum of all their tests)?

We cannot assume that the lowest scoring person from one test was the lowest scoring in the next test, but I think it is fair to assume that people generally stay within some score bands (although if this can be done without that assumption, that would be better).

My intuition tells me that this seems to be some sort of application of Monte Carlo, but I can’t seem to figure out how to actually do this.
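One way to operationalize that Monte Carlo intuition: assume each test’s scores follow some distribution consistent with the reported min/max/mean, simulate many totals, and take their standard deviation. The sketch below uses a triangular distribution and treats tests as independent — both strong, unverified assumptions, and the interface is hypothetical:

```python
import random
import statistics

def simulate_total_std(tests, num_trials=10000, seed=0):
    """Estimate the spread of total scores by Monte Carlo.

    tests: list of (min, max, mean) tuples, one per test.
    Assumes scores follow a triangular distribution matching those
    three summaries, and that the tests are independent."""
    rng = random.Random(seed)
    totals = []
    for _ in range(num_trials):
        total = 0.0
        for lo, hi, mean in tests:
            # The triangular mean is (lo + mode + hi) / 3; solve for the
            # mode that matches the reported mean and clamp it to [lo, hi].
            mode = min(hi, max(lo, 3 * mean - lo - hi))
            total += rng.triangular(lo, hi, mode)
        totals.append(total)
    return statistics.stdev(totals)
```

Whether this approximates the real spread of totals depends entirely on the distributional assumption; with only min/max/mean per test, the true standard deviation is not identifiable.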

Estimating P in Amdahl’s Law theoretically and in practice

In parallel computing, Amdahl’s law is mainly used to predict the theoretical maximum speedup of a program when it runs on multiple processors. If we denote the speedup by S, then Amdahl’s law is given by the formula:

S = 1 / ((1 - P) + (P / N))

where P is the proportion of a system or program that can be made parallel, and 1-P is the proportion that remains serial. My question is: how can we compute or estimate P for a given program?

More specifically, my question has two parts:

  1. How can we compute P theoretically?
  2. How can we compute P in practice?

I know my question may be easy, but I am learning.
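For the practical part, one common approach is to run the program on N processors, measure the achieved speedup S, and invert Amdahl’s law for P (this is essentially the experimentally determined serial fraction behind the Karp–Flatt metric). A minimal sketch, assuming the measured speedup is attributable to parallelism alone, with no memory or scheduling overheads:

```python
def amdahl_parallel_fraction(speedup, n_procs):
    """Invert Amdahl's law  S = 1 / ((1 - P) + P / N)  to estimate the
    parallel fraction P from a measured speedup S on N processors."""
    return (1.0 / speedup - 1.0) / (1.0 / n_procs - 1.0)
```

Theoretically, P can instead be estimated by profiling a serial run: time the portions of the program that could execute in parallel and divide by the total runtime.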

ref: https://www.techopedia.com/definition/17035/amdahls-law

Estimating number of points in 1D space

There are some arbitrarily chosen points in 1D space. What needs to be found is the approximate number of them, without counting all of them. It is possible to choose some coordinates (numbers), and for each one two numbers are returned – the distances to the closest points to the left and to the right.

I’m looking for sources on how to solve such a problem efficiently, so any papers, generalizations, or similar problems are welcome.
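One possible starting point: query uniformly random coordinates, and note that a random query lands in a gap between consecutive points with probability proportional to that gap’s length, so averaging the reciprocal gap lengths recovers the gap count. A sketch under the assumption that the oracle treats the interval endpoints as sentinel points (the `query` interface here is hypothetical):

```python
import random

def estimate_point_count(query, lo, hi, num_samples=2000, rng=None):
    """Estimate the number of points in [lo, hi].

    query(x) is assumed to return (d_left, d_right), the distances from x
    to the closest points on each side, with the endpoints lo and hi
    counted as sentinel points so both distances always exist.

    A uniform random query lands in a gap with probability proportional
    to the gap's length (size-biased sampling), so E[1 / gap] equals
    (#gaps) / (hi - lo); averaging reciprocal gap lengths and scaling by
    the interval length estimates the number of gaps, which is one more
    than the number of interior points."""
    rng = rng or random.Random(0)
    inv_gaps = []
    for _ in range(num_samples):
        x = rng.uniform(lo, hi)
        d_left, d_right = query(x)
        inv_gaps.append(1.0 / (d_left + d_right))
    num_gaps = (hi - lo) * sum(inv_gaps) / len(inv_gaps)
    return max(0, round(num_gaps) - 1)
```

The reciprocal-gap estimator can have high variance when gap lengths vary wildly; useful search keywords are size-biased sampling and spacings statistics.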

Are there fundamental problems with this guideline for estimating the rarity of home-brewed Magic Items?

There are a lot of questions about estimating the appropriate rarity level of a homebrewed Magic Item. In one of those, I came across this answer by Lino Frank Ciaralli suggesting a fairly general method:

When comparing rare and very rare items, I usually just start with the base weapons/armor as my guideline, and then add modifiers based on the number of magical effects. The formula I use is as follows:

  1. Base item comparison – so for instance +1 weapon = uncommon, +2 = rare, +3 = very rare. Armor/Shields start at rare.
  2. Does it require attunement? If yes, rarity drops one category.
  3. Is it cursed? If yes, rarity drops one category.
  4. Add one rarity level for every two magical effects it has. For example: A sunblade deals extra damage to undead, and sheds light. Add one rarity level.

Using this formula allows me to balance out homebrew weapons fairly easily and keep them on par with the weapons in the book for balance.
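For concreteness, the quoted guideline might be coded up roughly as follows — the rarity names, their ordering, and the clamping at both ends are my assumptions, not part of the quoted answer:

```python
RARITIES = ["common", "uncommon", "rare", "very rare", "legendary"]

def estimate_rarity(base_rarity, requires_attunement, is_cursed, num_effects):
    """Apply the quoted guideline: start from the base-item comparison,
    drop one category for attunement and one for a curse, and add one
    category per two magical effects."""
    level = RARITIES.index(base_rarity)
    if requires_attunement:
        level -= 1
    if is_cursed:
        level -= 1
    level += num_effects // 2
    return RARITIES[max(0, min(level, len(RARITIES) - 1))]
```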

Are there specific fundamental problems with this approach?

Examples of possible fundamental problems include

  • a similar guideline somewhere in the official rules that differs significantly from this one,
  • a large number of official Magic Items for which this formula produces different rarity levels,
  • a small number of official Magic Items for which it produces vastly different rarity levels.

Estimating the range of $1$’s in an array of $0$’s and $1$’s

I have a large array $A$ that contains something like $[0..1..0..]$. It has a contiguous range of $0$’s, followed by a range of $1$’s, and then another range of $0$’s.

This array is large and access is expensive, so I want to use a sampling algorithm to estimate the range $(i, j)$, where $A_i$ is the first $1$ and $A_j$ is the last $1$. Let’s say I want to approximate this within an error of $\epsilon n$, where $n$ is the size of $A$, so that I get a range $(i', j')$ where $|i' - i| \leq \epsilon n$ and $|j' - j| \leq \epsilon n$.

What is an algorithm I can use to achieve this?
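One sampling approach, assuming there is exactly one contiguous run of $1$’s: probe every $\epsilon n$-th position (so a run of length at least $\epsilon n$ is guaranteed to hit the grid), then binary-search the two $0/1$ transition points between adjacent grid probes. A sketch with a hypothetical `probe` oracle; when it finds the run at all, it recovers $(i, j)$ exactly, using $O(1/\epsilon + \log(\epsilon n))$ probes:

```python
def estimate_ones_range(probe, n, eps):
    """Locate the single contiguous run of 1s in an array of length n,
    where probe(k) returns A[k]."""
    step = max(1, int(eps * n))
    grid_ones = [k for k in range(0, n, step) if probe(k) == 1]
    if not grid_ones:
        # Run is shorter than eps*n, so any single index between the
        # neighbouring all-zero probes is within the error bound.
        return None
    first, last = grid_ones[0], grid_ones[-1]
    # Left boundary: the first 1 lies in (first - step, first].
    lo, hi = max(0, first - step), first
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid) == 1:
            hi = mid
        else:
            lo = mid + 1
    i = lo
    # Right boundary: the last 1 lies in [last, last + step).
    lo, hi = last, min(last + step, n - 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid) == 1:
            lo = mid
        else:
            hi = mid - 1
    return i, lo
```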

Battery always shows Estimating on Ubuntu 19.04

Been using Ubuntu for over a year now with no issues, but since a few days ago my battery status has been a bit weird. It always shows 100%, and when I plug in the charger it shows Estimating; it stays like this whether I plug the charger in or out.

The following command shows the result below:

    ~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
      native-path:          (null)
      power supply:         no
      updated:              Do 01 Jan 1970 01:00:00 CET (1561282273 seconds ago)
      has history:          no
      has statistics:       no
      unknown
        warning-level:      unknown
        battery-level:      unknown
        icon-name:          '(null)'

Running acpi -V shows the results below:

    ~$ acpi -V
    Battery 0: Discharging, 0%, 11:57:45 remaining
    Adapter 0: off-line
    Thermal 0: ok, 53.0 degrees C
    Thermal 0: trip point 0 switches to mode critical at temperature 127.0 degrees C
    Thermal 0: trip point 1 switches to mode hot at temperature 127.0 degrees C
    Cooling 0: Processor 0 of 10
    Cooling 1: Processor 0 of 10
    Cooling 2: x86_pkg_temp no state information available
    Cooling 3: Processor 0 of 10
    Cooling 4: intel_powerclamp no state information available
    Cooling 5: Processor 0 of 10

Any idea what is going on ?

Estimating Password Cracking Speed Based on GPU?

I was wondering if there is a calculator or formula I could use to get a rough estimate of the time it takes to crack hashes based on GPU. I am trying to assess how much performance I would lose or gain in different build cases. Specifically, 3-4 RTX 2080’s vs. 3-4 RTX 2060’s.

I’ve found a few threads and websites about it, but they seem more concerned with the actual algorithm and password length/complexity than with GPU performance (clock speed). Or they say to just run it in hashcat and find out, but obviously that’s impractical if I’m trying to assess which build to go with for business-justification purposes.

I’ll be using Hashcat and really don’t care about any variables other than the clock speed, so ideally we could hold password length, complexity, space, hash type, attack type, etc. constant, just so I can have a speed differential to compare GPU models/counts.
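The calculation itself is just keyspace divided by aggregate hash rate; the one input that can’t be derived on paper is the per-GPU rate for your hash mode, which has to come from benchmark figures (for example, published `hashcat -b` results for each card). A sketch — the rate numbers you plug in are placeholders to be replaced with real benchmark values:

```python
def crack_time_seconds(keyspace, hashes_per_second_per_gpu, num_gpus):
    """Worst-case brute-force time: candidate count divided by aggregate
    rate. Assumes the rate scales linearly with GPU count, which ignores
    PCIe, thermal, and host-side bottlenecks."""
    return keyspace / (hashes_per_second_per_gpu * num_gpus)

# Example with placeholder numbers (NOT real benchmark figures):
# an 8-character password over 95 printable ASCII characters.
keyspace = 95 ** 8
seconds = crack_time_seconds(keyspace, 1e9, 4)  # hypothetical 1 GH/s per GPU
```

With the password-space variables held constant, the ratio of two builds’ crack times reduces to the ratio of their aggregate benchmark rates.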

Estimating the number of functions which are at most $c$-to-$1$ for some constant $c \ge 2$

Notation: $[m] := \{1, 2, \dots, m\}$.

How many functions $f: [a] \to [b]$ are there? The answer is easily seen to be $b^a$.

How many $1$-to-$1$ functions $f: [a] \to [b]$ are there? Again the answer is well known, and it is sometimes called the falling factorial: $$b(b-1) \cdots (b-a+1).$$

How many functions $f: [a] \to [b]$ are there that are no more than $c$-to-$1$?

I don’t expect that there is an exact formula, and I am more interested in the asymptotics. For example, can we give “reasonable” upper and lower bounds, in the case that $c \ge 2$ and the ratio $a/b$ are fixed, and $a \to \infty$?

For a concrete example, roughly how many functions $[5n] \to [n]$ are there that are at most $8$-to-$1$? Call this quantity $g(n)$.

Clearly we have $$\frac{(5n)!}{(5!)^n} \le g(n) \le n^{5n}.$$ The function $(5n)!/(5!)^n$ counts functions that are exactly $5$-to-$1$ (which all satisfy the criterion that they are at most $8$-to-$1$), and the function $n^{5n}$ counts all functions.

Applying Stirling’s approximation to the first function gives something like $$\alpha^n n^{5n} \le g(n) \le n^{5n},$$ for some small constant $\alpha > 0$.

It seems like there is room for improvement. Is it true, for example, that $$\log g(n) = 5n \log n + C n + o(n)$$ for some constant $C$? (Note the upper bound $g(n) \le n^{5n}$ already forces $C \le 0$.)
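Exact values of $g(n)$ for small $n$ can be computed from the exponential generating function $a!\,[x^a]\bigl(\sum_{k=0}^{c} x^k/k!\bigr)^b$ and used to test conjectured asymptotics. A brute-force sketch (my own helper, not from the question):

```python
from fractions import Fraction
from math import factorial

def count_at_most_c_to_1(a, b, c):
    """Exact count of functions f: [a] -> [b] whose fibers all have size
    at most c, via the exponential generating function
        a! * [x^a] (sum_{k=0}^{c} x^k / k!)^b,
    computed with exact rational polynomial arithmetic truncated at x^a."""
    base = [Fraction(1, factorial(k)) for k in range(c + 1)]
    poly = [Fraction(1)]
    for _ in range(b):
        new = [Fraction(0)] * min(len(poly) + c, a + 1)
        for i, coeff in enumerate(poly):
            if coeff:
                for k in range(c + 1):
                    if i + k <= a:
                        new[i + k] += coeff * base[k]
        poly = new
    return int(poly[a] * factorial(a)) if a < len(poly) else 0
```

For instance, the $g(n)$ of the question is `count_at_most_c_to_1(5 * n, n, 8)`, which is feasible only for small $n$.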