## What is a good method for modelling combinatorial tree (sport competition results)?

Probably a newbie question here; please point me to relevant theories/tutorials if needed:

• let's say I want to evaluate the probabilities of the final rankings for a sport competition
• the competition involves 8 teams, but I can simplify the problem to 4 contestants (say A, B, C, D)
• the competition is split into n journeys, during which every team faces exactly one other team. So with 4 contestants, I have 2 matches per journey
• at the end of a match, 5 different point attributions are possible (depending on the score)

After one journey, there are 30 different possible combinations in terms of the teams' points. So the model looks like a tree with one journey at each level.

Even if I simplify the situation to 2 journeys left, I can't think of an elegant way to implement this problem (in Python, for example) other than "manually" creating the tree with the 30 combinations at each level and counting the leaves.

I’m not familiar with this kind of problem so I’m not sure about the path forward.
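For what it's worth, the tree never needs to be built explicitly: `itertools.product` enumerates every root-to-leaf path directly. Here is a minimal sketch; the five point attributions and the two-journey schedule below are placeholder assumptions, so swap in the competition's real values.

```python
from collections import Counter
from itertools import product

# Hypothetical point attributions per match as (home points, away points)
# pairs; replace with the competition's actual five scoring cases.
OUTCOMES = [(4, 0), (4, 1), (3, 2), (2, 3), (0, 4)]
SCHEDULE = [  # assumed fixed pairings, one list of matches per journey
    [("A", "B"), ("C", "D")],  # journey 1
    [("A", "C"), ("B", "D")],  # journey 2
]

def final_standings(schedule, outcomes):
    """Walk every leaf of the tree and count the final point tables."""
    leaves = Counter()
    matches = [m for journey in schedule for m in journey]
    # product() picks one outcome per match, i.e. one root-to-leaf path.
    for combo in product(outcomes, repeat=len(matches)):
        points = {"A": 0, "B": 0, "C": 0, "D": 0}
        for (home, away), (ph, pa) in zip(matches, combo):
            points[home] += ph
            points[away] += pa
        leaves[tuple(sorted(points.items()))] += 1
    return leaves

standings = final_standings(SCHEDULE, OUTCOMES)
total = sum(standings.values())
print(total)  # 5**4 = 625 leaves for 4 matches over 2 journeys
```

Dividing each count in `standings` by `total` gives the probability of each final point table under a uniform-outcomes assumption; a non-uniform model would weight each path by the product of its match-outcome probabilities instead.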

## Odds of rolling the same results twice on advantage rolls

I know that this has probably been answered before, but I couldn’t find any specific threads on it.

This happened in a tabletop game recently, where all dice rolls are made with advantage anyway because of how the VTT system works.

So what are the odds of getting the same results twice in a row like this? Is it 1/400 or 1/160000, depending on whether order matters?
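Assuming an advantage roll means rolling two d20s, both readings of "same results" can be checked by brute-force enumeration of the 400 × 400 equally likely pairs of rolls:

```python
from fractions import Fraction
from itertools import product

def repeat_probability(ordered: bool) -> Fraction:
    """P(two consecutive 2d20 advantage rolls show the same faces)."""
    pairs = list(product(range(1, 21), repeat=2))  # all 400 ordered (d1, d2) results
    if ordered:
        matches = sum(1 for a in pairs for b in pairs if a == b)
    else:
        matches = sum(1 for a in pairs for b in pairs if sorted(a) == sorted(b))
    return Fraction(matches, len(pairs) ** 2)

print(repeat_probability(ordered=True))   # 1/400
print(repeat_probability(ordered=False))  # 39/8000
```

With order mattering the answer is exactly 1/400 (the second pair must match one specific outcome out of 400); ignoring order it is slightly higher, 39/8000, because a non-double pair can repeat in either order.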

## Pagination of results from different sources merged by a unified scoring function

Assume a hotel reservation scenario: given $$m$$ ranked lists of attribute values such as distance, price, and amenities (normalized between $$0$$ and $$1$$), and a unifying linear score function $$F(\cdot)=\alpha_1 \cdot score_1 + \alpha_2 \cdot score_2 + \alpha_3 \cdot score_3$$, the Threshold Algorithm (TA) is optimal for finding the top-$$k$$ results with the highest $$F$$ values.

However, consider a pagination scenario with page index $$p$$ and page size $$k$$. That is, instead of asking for the top-$$k$$, which can be obtained from indices $$[0, k)$$ of the final ranked list, we ask for $$[pk, (p+1)k)$$. What is the best way to obtain this window of the results?

You can think of this problem as paginating merged results under a unified scoring function: there are multiple data sources that each contain a score value, and each merged result has a combined score that is a (linear) function of the individual score values.

Some solutions:

• Totally naive: given the multiple ranked lists, compute the unified score for every item, sort, and slice as needed.

• Potentially better, but inefficient for lower-ranked results (farther pages): execute the Threshold Algorithm asking for the top-$$(p+1)k$$, and return $$[pk, (p+1)k)$$ from it.
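For concreteness, the "totally naive" baseline looks like this; the hotel scores and weights below are made-up illustrations, and the TA-based variant would differ only in how the merged ranking is produced:

```python
# Naive baseline: compute the unified linear score for every candidate,
# sort once, and slice the requested window [pk, (p+1)k).
def page_of_results(scores_by_item, weights, p, k):
    """Return page p (size k) under F = sum(alpha_i * score_i)."""
    unified = {
        item: sum(a * s for a, s in zip(weights, scores))
        for item, scores in scores_by_item.items()
    }
    ranked = sorted(unified.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[p * k:(p + 1) * k]

hotels = {  # [distance, price, amenities] scores, each normalized to [0, 1]
    "h1": [0.9, 0.2, 0.5],
    "h2": [0.4, 0.8, 0.7],
    "h3": [0.6, 0.6, 0.6],
    "h4": [0.1, 0.9, 0.3],
}
page = page_of_results(hotels, [0.5, 0.3, 0.2], p=1, k=2)
print([item for item, _ in page])  # ['h2', 'h4']: the second page
```

The cost that motivates the question is visible here: this recomputes and re-sorts everything for every page, whereas TA stops early but still has to materialize all $$(p+1)k$$ top results just to discard the first $$pk$$.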

## What benefit does Amazon get from including search results in product URLs? [closed]

https://www.amazon.com/JOYO-Wireless-Infinite-Sustainer-Handheld/dp/B07WZL5ZDK/ref=mp_s_a_1_6?keywords=heet+ebow&qid=1580579507&sr=8-6

Above is a link to a product. You can see in the URL what I searched for to find it. Since Amazon made a deliberate decision to include this, it presumably benefits them. What are the possible benefits, if they're not already known? What are the risks?

## How can I have a roll macro show different results in Roll20?

I would like to roll for the damage type of Chaos Bolt (a 1st-level spell), which is variable.

In play, I have to roll 1d8, and the result gives me the damage type of the spell:

I guess it should be something like:

&{template:default} {{name=Damage Type}}  {{[[1d8]]|acid|cold|fire|force|lightning|poison|psychic|thunder}}

The result should be:

• Damage Type: cold

Nice to Have:

• if you hover over "cold", you also see the underlying roll result ('2')

## RSolveValue Result Is Strange

I executed:

sol = RSolveValue[{b[n + 1] - b[n] == 0.01 b[n] - 1000, b[0] == 80000}, b, n]

The result is:

Function[{n}, 0.990099^(-1. n) 10.^(-2. n) (80000. 10.^(2. n) + 100000. 0.990099^(1. n) 10.^(2. n) - 100000. 0.990099^(1. n) 101.^n)]

It seems strange that the result includes numbers like 0.990099. I'm a beginner with Mathematica; is there anything wrong with my code?
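Not Mathematica, but a quick Python sanity check of the recurrence. Rearranging gives b[n+1] = 1.01 b[n] - 1000, whose fixed point is 100000, so (assuming I read the recurrence correctly) the closed form is b[n] = 100000 - 20000 · 1.01^n. The 0.990099 in the output is just 1/1.01, which suggests the machine-precision input 0.01 is why the answer came back numeric rather than exact:

```python
# Iterate the recurrence b[n+1] = 1.01*b[n] - 1000 with b[0] = 80000 and
# compare against the closed form derived from the fixed point b* = 100000.
def b_iterative(n, b0=80000.0):
    x = b0
    for _ in range(n):
        x = 1.01 * x - 1000
    return x

def b_closed(n):
    return 100000 - 20000 * 1.01 ** n

for n in range(6):
    print(n, b_iterative(n), b_closed(n))  # the two columns agree
```

In Mathematica, writing the coefficient as an exact rational (1/100 instead of 0.01) should give a symbolic answer with exact powers instead of floats like 0.990099.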

## nmap different results when scanning from different sources

I get slightly different results when scanning an IP from 2 different hosts. Here is scan 1, from an internet server with a public IP:

Nmap 7.80 scan initiated Tue Jan 21 18:48:08 2020 as: nmap -Pn -sS -p25 -T 2 --reason -v 3.XXX.XXX.XXX
Nmap scan report for XXX.eu-central-1.compute.amazonaws.com
Host is up, received user-set.

PORT   STATE    SERVICE REASON
25/tcp filtered smtp    no-response

Read data files from: /usr/bin/../share/nmap
Nmap done at Tue Jan 21 18:48:22 2020 -- 1 IP address (1 host up) scanned in 13.49 seconds

And here scan 2 from a local network PC:

root@kali:/# nmap -Pn -sS -p25 -T 2 --reason -v 3.XXX.XXX.XXX
Starting Nmap 7.80 ( https://nmap.org ) at 2020-01-21 17:52 CET
[ ... ]
Scanning XXX.eu-central-1.compute.amazonaws.com [1 port]
Completed SYN Stealth Scan at 17:52, 0.40s elapsed (1 total ports)
Nmap scan report for XXX.eu-central-1.compute.amazonaws.com
Host is up, received user-set (0.0013s latency).

PORT   STATE  SERVICE REASON
25/tcp closed smtp    reset ttl 62

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds
           Raw packets sent: 1 (44B) | Rcvd: 1 (40B)

The nmap command line was exactly the same, but the port state differs. And since my local machine gets more information (it receives an actual response from the target), it shouldn't be a firewall-related problem or something similar.

Any idea why I get different results?

## Change Image in Mobile Search Results

When searching for my company, "Tattini Boots", on a mobile device, you will see that the following cut-off image is displayed next to the search result (see image below). This is not the image I have set in Yoast or as the Featured Image for Search Results on my index page. Can someone dive into the source and identify why this image is being chosen? The image is simply a slider overlay on the main page: www.TattiniBoots.com

## Why assume Turing machine can compute arbitrary results in Kraft-Chaitin theorem?

The Kraft-Chaitin theorem (aka the KC theorem in Downey & Hirschfeldt, the Machine Existence Theorem in Nies, or I2 in Chaitin's 1987 book) says, in part, that given a possibly infinite list of strings $$s_i$$, each paired with a desired program length $$n_i$$, there is a prefix-free Turing machine such that there is a program $$p_i$$ of length $$n_i$$ producing each $$s_i$$, as long as $$\sum_i 2^{-n_i} \leq 1$$. The list of $$\langle s_i, n_i \rangle$$ pairs is required to be computable (D&H, Chaitin) or computably enumerable (Nies).

The proofs in the above textbooks work, it seems, by showing how to choose program strings of the desired lengths while preserving prefix-freeness. The details differ, but that construction is the main part of the proof in each case. The proofs don't try to explain how to write a machine that can compute $$s_i$$ from $$p_i$$ (of course).
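To make the codeword-choosing step concrete, here is my own simplification (not the textbook proof): assigning lengths can be seen as carving dyadic subintervals out of [0, 1). Sorting the requests keeps every cut aligned; the real construction must instead handle requests arriving online, in arbitrary order.

```python
from fractions import Fraction

def assign_codewords(lengths):
    """Greedily yield a prefix-free binary codeword for each requested length."""
    cursor = Fraction(0)  # left end of the still-unused part of [0, 1)
    for n in sorted(lengths):
        width = Fraction(1, 2 ** n)
        assert cursor + width <= 1, "Kraft inequality violated"
        # The codeword is the n-bit binary expansion of the interval's left end.
        yield format(int(cursor / width), f"0{n}b")
        cursor += width

print(list(assign_codewords([1, 2, 3, 3])))  # ['0', '10', '110', '111']
```

Two codewords are prefix-comparable exactly when their dyadic intervals overlap, so disjoint intervals guarantee a prefix-free set, and the Kraft sum $$\sum_i 2^{-n_i} \leq 1$$ is just the requirement that the intervals fit inside [0, 1).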

What I'm puzzled about is why we can assume that an infinite list of results can be computed from an infinite list of arbitrarily chosen program strings by a Turing machine that, by definition, has a finite number of internal states. Of course, many infinite sequences of results require only finitely many internal states; consider a machine that simply adds 1 to any input. But in this case we're given a countably infinite list of arbitrary results, we match them to inputs that are arbitrary (except for their lengths and for not sharing prefixes), and it's assumed that there's no need to argue that a Turing machine can always be constructed to perform each of those calculations.

(My guess is that the fact that the list of pairs is computable or c.e. has something to do with this, but I am not seeing how that implies an answer to my question.)

## A survey on ranking keyword search results

To rank keyword search results, I'm trying to understand how the Airbnb algorithm or similar ones work. I'm not asking which features they use, since those differ depending on the business needs. What I'm asking is: where can I find a paper/survey/book on the various relevancy metrics beyond TF-IDF and PageRank, and on how to merge various metrics/algorithms into a single algorithm to sort the search results?

Input: a couple of keywords

Output: a ranked list of options relevant to the input keywords

Anything from links to surveys/books/algorithms/open-source software would work.
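One concrete, weight-free way to merge rankings produced by different relevancy metrics is Reciprocal Rank Fusion (RRF): each list votes 1/(k + rank) for its items. The example lists below are made up; k = 60 is the commonly used constant from the original RRF paper.

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Merge several ranked lists into one via Reciprocal Rank Fusion."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

by_tfidf = ["optionA", "optionB", "optionC"]      # made-up per-metric rankings
by_pagerank = ["optionB", "optionC", "optionA"]
print(rrf([by_tfidf, by_pagerank]))  # ['optionB', 'optionA', 'optionC']
```

RRF only needs ranks, not comparable raw scores, which is why it is popular for fusing heterogeneous metrics; the alternative is a learned or hand-tuned weighted sum of normalized scores.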

Best