Predicting the outcome of sporting events with multiplicative scoring

In the Olympic format for sport climbing, eight athletes compete in three rounds of climbing. An athlete's final score is the product of their rankings in each round. For example, an athlete who comes 1st in the first round, 5th in the second round, and 7th in the third will have a final score of $1\times5\times7=35$. The athlete with the lowest final score wins.

Assuming that the competition is already partly underway (possibly even mid-round), is there a computer algorithm to quickly compute the probabilities $P_{ar}$ of each athlete $a$ achieving each final ranking $r$, assuming the performance of the athletes is entirely random from here on? Even with 8 athletes, the brute force method seems too computationally intensive.

If this isn’t computationally possible in a reasonable time, is there an algorithm to get “close enough” to those probabilities?
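
To make the question concrete, the kind of “close enough” method I have in mind is plain Monte Carlo: sample random rankings for the rounds that have not happened yet, and count final placements. A minimal Ruby sketch (the input format, the names, and the handling of ties and mid-round state are all my own simplifying assumptions):

def estimate_probs(known_rounds, athletes: 8, rounds: 3, trials: 100_000)
  # known_rounds[k][a] = rank (1..athletes) of athlete a in completed round k
  counts = Array.new(athletes) { Array.new(athletes, 0) }
  trials.times do
    sim = known_rounds.dup
    (rounds - sim.size).times { sim << (1..athletes).to_a.shuffle }
    scores = (0...athletes).map { |a| sim.map { |rk| rk[a] }.reduce(:*) }
    # 0-based final rank = number of athletes with a strictly smaller score
    ranks = scores.map { |s| scores.count { |t| t < s } }
    (0...athletes).each { |a| counts[a][ranks[a]] += 1 }
  end
  counts.map { |row| row.map { |c| c.to_f / trials } }  # counts[a][r] ~ P_ar
end

With independent samples the error of each estimated probability shrinks like $1/\sqrt{\text{trials}}$, so this gets “close enough” quickly even though it never gives exact values.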

Does every multiplicative function have a logarithmic average?


Is it true that for every completely multiplicative function $f:\mathbb N\to\mathbb C$ with $|f(n)|=1$ for all $n$, the logarithmic average $$\lim_{N\to\infty}\frac1{\log N}\sum_{n=1}^N\frac1nf(n)$$ exists?

In Elliott’s book “Probabilistic number theory”, a theorem attributed to Delange, Wirsing, and Halász (Theorem 6.3) describes exactly when a multiplicative function taking values in the unit disk has a (Cesàro) mean, and what the value of the mean is when it exists. It seems that the main obstacles to the existence of the (Cesàro) mean are the multiplicative functions $n^{it}$. On the other hand, it is easy to check that the logarithmic average of any such function exists (and is $0$).
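
Indeed, for $f(n)=n^{it}$ with fixed $t\neq0$, the standard estimate $$\sum_{n\le N}\frac{n^{it}}{n}=\frac{N^{it}}{it}+\zeta(1-it)+O\!\left(\frac1N\right)$$ shows that the partial sums stay bounded, so dividing by $\log N$ gives $0$.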

If the series $\sum_{p\text{ prime}}\frac{1-f(p)}p$ converges, then the answer is yes (this follows from the aforementioned theorem). On the other hand, an application of the Turán–Kubilius inequality shows that if for some $\epsilon>0$ the set $S:=\{p\text{ prime}:|f(p)-1|>\epsilon\}$ is divergent (in the sense that $\sum_{p\in S}\frac1p=\infty$), then again the answer to the question is yes (and the limit is $0$). However, I don’t see how to handle the general case.

$\sum_{i=1}^x\sum_{j=1}^xf(i\cdot j)$ Double Summing a (Not Completely) Multiplicative Function

Let $f(n)$ be a multiplicative function that is not completely multiplicative, i.e., $f(m)\cdot f(n)=f(m\cdot n)$ is only guaranteed when $\gcd(m,n)=1$. Let $S(x)$ be the double sum over $f$, that is:

$$S(x)=\sum_{i=1}^x\sum_{j=1}^xf(i\cdot j)$$

It is not difficult to see that if $f(n)$ were completely multiplicative, then $S(x)$ could be simplified:

$$S(x)=\sum_{i=1}^x\sum_{j=1}^xf(i\cdot j)=\sum_{i=1}^xf(i)\sum_{j=1}^xf(j)=\biggl(\sum_{k=1}^xf(k)\biggr)^2$$

But since $f(n)$ is not completely multiplicative, this simplification no longer holds: it can fail whenever $\gcd(i,j)\neq1$. Hence, $S(x)$ can be written this way provided we add some additional error term, let’s call it $E$:

$$S(x)=\sum_{i=1}^x\sum_{j=1}^xf(i\cdot j)=\biggl(\sum_{k=1}^xf(k)\biggr)^2+E$$

I am not sure whether $E$ is positive or negative. Clearly, $E$ consists of all the small errors generated by the initial sum whenever $\gcd(i,j)\neq1$. I am mainly interested in the cases where $f(n)$ takes one of the following forms:

  1. Euler totient function: $$S_{\varphi}(x)=\sum_{i=1}^x\sum_{j=1}^x\varphi(i\cdot j)$$
  2. Sum-of-divisors function: $$S_{\sigma_1}(x)=\sum_{i=1}^x\sum_{j=1}^x\sigma_1(i\cdot j)$$
  3. Möbius function: $$S_{\mu}(x)=\sum_{i=1}^x\sum_{j=1}^x\mu(i\cdot j)$$

My question is: what exactly is this error term $E$, and how can I calculate it? How can I properly sum all those small errors to get a correct evaluation of $S(x)$? For clarification, I am concerned with evaluating $S(x)$, but I think I must evaluate $E$ first in order to do it. I am taking this approach because I can compute $\biggl(\sum_{k=1}^xf(k)\biggr)^2$ very efficiently, and so finding the error term $E$ will solve my question.
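
For small $x$ the error term can at least be tabulated by brute force. Here is a throwaway Ruby sketch of the kind of experiment I mean (the naive totient and all names here are just for illustration):

def phi(n)
  (1..n).count { |k| k.gcd(n) == 1 }  # naive Euler totient, fine for tiny n
end

def error_term(x)
  s  = (1..x).sum { |i| (1..x).sum { |j| phi(i * j) } }  # S(x) by brute force
  sq = ((1..x).sum { |k| phi(k) })**2                    # (sum of f)^2
  s - sq                                                 # E = S(x) - (sum of f)^2
end

(1..8).each { |x| puts "x=#{x}  E=#{error_term(x)}" }

Swapping in $\sigma_1$ or $\mu$ gives the other two cases.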

Modular multiplicative inverse in Ruby

I implemented an algorithm to find the modular multiplicative inverse of an integer. The code works, but it is too slow, and I don’t know why. I compared it with an algorithm I found on Rosetta Code, which is longer but way faster.

My implementation:

def modinv1(a, c)
  raise "#{a} and #{c} are not coprime" unless a.gcd(c) == 1
  # brute force: compute (a * b) % c for every b in 0...c, return the b giving 1
  0.upto(c - 1).map { |b| (a * b) % c }.index(1)
end

Rosetta Code’s implementation:

def modinv2(a, m) # compute a^-1 mod m if possible
  raise "NO INVERSE - #{a} and #{m} not coprime" unless a.gcd(m) == 1
  return m if m == 1
  m0, inv, x0 = m, 1, 0
  while a > 1
    inv -= (a / m) * x0
    a, m = m, a % m
    inv, x0 = x0, inv
  end
  inv += m0 if inv < 0
  inv
end

Benchmark results (using benchmark-ips):

Warming up --------------------------------------
        Rosetta Code   141.248k i/100ms
                Mine   462.000  i/100ms
Calculating -------------------------------------
        Rosetta Code      2.179M (± 6.5%) i/s -     10.876M in   5.022459s
                Mine      4.667k (± 3.7%) i/s -     23.562k in   5.055259s

Comparison:
        Rosetta Code:  2179237.4 i/s
                Mine:     4667.4 i/s - 466.90x  slower

Why is mine so slow? Should I use the one I found in Rosetta Code?

Identity involving Dirichlet series and totally multiplicative functions

This is Exercise 4.3.4(a) in Montgomery and Vaughan, “Multiplicative Number Theory…”.

Let $f_1$, $f_2$ be totally multiplicative functions with $|f_i(n)| \leq 1$. Show that for $\operatorname{Re}(s)>1$, $$\left(\sum_{n \geq 1} n^{-s} \left(\sum_{d \mid n} f_1(d)\right)\left(\sum_{d \mid n} f_2(d)\right)\right)\left(\sum_{n \geq 1} n^{-2s} f_1(n)f_2(n)\right) = \zeta(s)\left(\sum_{n \geq 1} n^{-s} f_1(n)\right)\left(\sum_{n \geq 1} n^{-s} f_2(n)\right)\left(\sum_{n \geq 1} n^{-s} f_1(n)f_2(n)\right).$$

This contains Ramanujan’s identity $\sum n^{-s} \sigma_a(n)\sigma_b(n) = \frac{\zeta(s)\zeta(s-a)\zeta(s-b)\zeta(s-a-b)}{\zeta(2s-a-b)}$ as a special case (take $f_1(n)=n^a$ and $f_2(n)=n^b$).

I would like to know if there’s a simpler proof of the above general result other than what I did.

After some manipulation, the LHS is $$\sum_{n \geq 1} n^{-s} \sum_{d^2 \mid n} \left(\sum_{d \mid e \mid \frac{n}{d}} f_1(e)\right)\left(\sum_{d \mid e \mid \frac{n}{d}} f_2(e)\right).$$

After some manipulation, the RHS is $$\sum_{n \geq 1} n^{-s} \sum_{d \mid n} f_2\left(\frac{n}{d}\right)\left(\sum_{e \mid d} f_1(e)\right)\left(\sum_{e \mid \frac{n}{d}} f_1(e)\right).$$

I was unable to show directly that these coefficients matched up, so, using the fact that both sides are multiplicative functions of $n$ (the convolution of (totally) multiplicative functions is multiplicative), I checked that both sides matched on prime powers. This was straightforward, but tedious.

Is there a simpler way?

I was also unable to get anywhere by looking at the Euler products.
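
For what it's worth, comparing Euler factors at a fixed prime $p$ (writing $a=f_1(p)$, $b=f_2(p)$, $t=p^{-s}$, so that $\sum_{d\mid p^n}f_i(d)$ becomes a geometric sum) reduces the claim to the one-variable identity $$\sum_{n\ge0}\frac{(1-a^{n+1})(1-b^{n+1})}{(1-a)(1-b)}\,t^n=\frac{1-abt^2}{(1-t)(1-at)(1-bt)(1-abt)},$$ so a slick proof of this rational-function identity would answer the question as well.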

Written with convolution notation, $(f*g)(n) := \sum_{d \mid n} f(d)g\left(\frac{n}{d}\right)$, the identity is the following complicated thing.

$$\left((f_1 * 1)(f_2 * 1)\right) * (f_1 \circ \sqrt{\cdot}\,)(f_2 \circ \sqrt{\cdot}\,)\,\mathbf{1}_{n\text{ square}} = 1 * f_1 * f_2 * f_1f_2.$$

Perhaps $f_i^{-1} = \mu f_i$ might be helpful here.

At the very least, it would be nice if someone could point me in the direction of simple proofs of Ramanujan’s identity above. (I verified that also by looking at prime powers, and it wasn’t very insightful.)

Does the multiplicative property apply to modern, non-textbook RSA?

I’m aware of the multiplicative property of textbook RSA and how it can be used to obtain a signature from a CA without the CA directly signing it.
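
For concreteness, the property I mean is that raw RSA signatures multiply: $m_1^d \cdot m_2^d \equiv (m_1 m_2)^d \pmod n$. A toy Ruby illustration with tiny textbook parameters (no padding, purely to pin down the property, not a realistic signature scheme):

p, q = 61, 53                  # toy primes (classic textbook example)
n    = p * q                   # 3233
d    = 2753                    # private exponent for e = 17 (e*d ≡ 1 mod 3120)
sign = ->(m) { m.pow(d, n) }   # "signature" = m^d mod n

m1, m2 = 42, 55
# multiplicativity: sig(m1) * sig(m2) ≡ sig(m1 * m2 mod n)  (mod n)
puts (sign.(m1) * sign.(m2)) % n == sign.((m1 * m2) % n)   # => true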

My question is – can this apply to the real world in modern implementations of RSA?

More specifically: assume I have a CA that will sign every CSR (Certificate Signing Request) I give it except a specific one, a “forbidden” one.

Theoretically, I would like to have this CA sign 2 (or more) “valid” CSRs and “multiply” their signatures to generate the desired “forbidden” signature (the one CSR the CA refused to sign).

Could this work?

Approximation algorithm for weighted set cover, using multiplicative weights

It is known that the fractional set cover problem can be rephrased as a linear program and approximated using the multiplicative weights method; this lecture note, for instance, shows how to do so.
The running time depends on the “width” of the problem, which equals the number of sets in the unweighted case. However, in the weighted case, the width depends on the weight function, so the running time can be exponential in the size of the problem’s representation. Is there a way to overcome this issue? Either a way to reduce it to polynomial running time, or a proof that it’s impossible (under plausible complexity assumptions)?
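
For reference, the update I have in mind is the generic multiplicative-weights rule. A minimal Ruby sketch (my own illustration; the losses are assumed pre-scaled into $[-1,1]$ by the width, which is exactly the scaling that degrades in the weighted case):

# One multiplicative-weights round: w_i <- w_i * (1 - eps * loss_i),
# then renormalize so the weights form a probability distribution.
def mw_round(weights, losses, eps)
  updated = weights.zip(losses).map { |w, l| w * (1 - eps * l) }
  total = updated.sum
  updated.map { |w| w / total }
end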

Additive and multiplicative convolution deeply related in modular forms

From the fact that spaces of modular forms are finite-dimensional, from the decomposition into Hecke eigenforms, and from the duplication formula for $\Gamma(s)$, there are a lot of identities mixing additive convolution $\oplus$ and multiplicative convolution $\otimes$, the basic example being $1_{\mathbb{Z}^2} \ast 1_{\mathbb{Z}^2} = 1 \otimes \chi_4$.
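
Spelled out (up to the normalizing factor $4$), this basic example is Jacobi's two-square formula $$r_2(n)=\#\{(a,b)\in\mathbb{Z}^2 : a^2+b^2=n\}=4\sum_{d\mid n}\chi_4(d),$$ with an additive convolution of theta-series coefficients on the left and a Dirichlet convolution on the right.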

That is, in the algebra $(M,\oplus)$ generated by modular forms for $\Gamma_0(N)$, there will be a lot of identities involving $\otimes$. How do the Hecke algebras and their representations come into play? Is there a more precise formulation, using higher objects like automorphic representations?

Do you have some heuristics making visible which property of the primes underlies this connection between additive and multiplicative convolution? (If the heuristic implies the RH, that is good too.)

Assuming the primes were wildly distributed, could we still expect such identities? The proofs of additive-multiplicative identities rely on the functional equation and Euler product of L-functions, so I’m asking whether it is plausible that the PNT and the RH still play a role here. What about the inverse convolutions, the convolution with modular functions like $1/j(z)$, or the additive convolution with $\mu(n)$?