## Does Barovia measure time in weeks or tendays?

Barovia has its own calendar and measures its year in 12 "Moons", each lasting a full lunar cycle.

The adventure text in Curse of Strahd mentions weeks. Does this mean the Moons are broken into 4 weeks of 7 days? Do Barovians call these weeks?

## How can one measure the time dependency of an RNN?

Most of the discussion about RNNs and LSTMs alludes to the varying ability of different RNNs to capture "long-term dependency". However, most demonstrations use generated text to show the absence of long-term dependency in vanilla RNNs.

Is there any way to explicitly measure the long-term dependency of a given trained RNN, much like the ACF and PACF of a given ARMA time series?

I am currently trying to look at the Frobenius norm of the gradients of the memories $$s_k$$ with respect to the inputs $$x_k$$, summed over the training examples $$\{x^i\}_{i=1}^N$$:

$$\sum_{i=1}^N \left\|\frac{\partial s_k}{\partial x_k}(x^i)\right\|_F$$

I would like to know if there are more refined or widely used alternatives.
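As a small self-contained sketch (my own, with a hand-rolled vanilla RNN rather than any particular framework; all names, sizes, and weights are illustrative), here is a slight generalization of that quantity: the Frobenius norm of $$\partial s_t / \partial x_{t-\tau}$$ as a function of the lag $$\tau$$, computed analytically through the recurrence $$s_t = \tanh(W s_{t-1} + U x_t)$$. Taking $$\tau = 0$$ and summing over sequences recovers the sum above.

```python
import numpy as np

# Hand-rolled vanilla RNN s_t = tanh(W s_{t-1} + U x_t); the sizes
# and random weights here are illustrative, not from a trained model.
rng = np.random.default_rng(0)
H, D, T = 5, 3, 12                      # hidden size, input size, steps
W = rng.normal(scale=0.2, size=(H, H))  # recurrent weights
U = rng.normal(scale=0.2, size=(H, D))  # input weights
x = rng.normal(size=(T, D))             # one input sequence

# Forward pass, storing every hidden state (s[t] = state after step t).
s = np.zeros((T + 1, H))
for t in range(T):
    s[t + 1] = np.tanh(W @ s[t] + U @ x[t])

def dep_norm(t, tau):
    """Frobenius norm of d s_t / d x_{t-tau}, via the chain rule."""
    J = np.diag(1.0 - s[t - tau] ** 2) @ U       # d s_{t-tau} / d x_{t-tau}
    for j in range(t - tau + 1, t + 1):
        J = np.diag(1.0 - s[j] ** 2) @ W @ J     # chain one recurrence step
    return np.linalg.norm(J, "fro")

# With small weights this typically decays rapidly in tau, which is
# one concrete picture of weak long-term dependency.
print([round(dep_norm(T, tau), 6) for tau in range(0, T, 3)])
```

Plotting `dep_norm` against the lag $$\tau$$ then plays a role loosely analogous to an ACF for the trained network.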

Thank you very much!

## How to measure on a grid the area of a sphere-shaped spell or effect centered on a creature?

Per the Wild Magic sorcerer’s Wild Magic Surge table (PHB, p. 104):

07-08: You cast fireball as a 3rd-level spell centered on yourself.

There are other questionable area spells in that list (grease covers a 10-foot square, so your square and which three others?), but I’ll focus on spheres like fireball (PHB, p. 241):

(…) Each creature in a 20-foot-radius sphere centered on the point must (…)

My problem is that the sources I’ve found so far – including the answers to similar questions here – talk about measuring spheres from grid-square intersections, whereas this clearly describes the sphere being centered on the unlucky Wild Magic sorcerer. Plus, since it’s a wild magic effect, there isn’t exactly a caster who can pick a point of origin. Is there an accepted way to measure an effect like this?

## In a data warehouse, should a measure be based on a fact or a dimension?

Let’s say there is a data warehouse created from shop data. A fact is a single purchase of a product. There is a dimension that describes a customer.

There is a need to create a measure that stores the number of distinct customers. Can this measure be based on the customer identifier in the dimension table, or does it need to be based on the fact table? In which cases is one solution better than the other?
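A toy illustration (plain Python, with made-up keys rather than the real AdventureWorks2016 schema) of why the two choices answer different questions:

```python
# Hypothetical rows: the customer dimension holds one key per known
# customer, the purchase fact holds one key per purchase line.
dim_customer_keys = [1, 2, 3, 4]        # four registered customers
fact_purchase_keys = [1, 1, 2, 2, 2]    # five purchases, by two of them

# Distinct count over the fact table: customers who actually purchased.
distinct_buyers = len(set(fact_purchase_keys))   # 2

# Distinct count over the dimension: every customer, buyer or not.
all_customers = len(set(dim_customer_keys))      # 4

print(distinct_buyers, all_customers)
```

A distinct count over the fact table only sees customers who appear in at least one fact row, while the same count over the dimension includes customers with no purchases at all; which of those two numbers the business actually wants is usually what decides between the designs.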

Below I post a visualization based on an AdventureWorks2016 database:

## Is a firewall enough of a security measure for an Ubuntu server that hosts a website?

I recently got a VPS with Ubuntu on it, and I’d like to start creating a very basic website. However, I don’t know what steps I should take to secure this server.

I’m new to Ubuntu, new to security, and new to creating websites (the website will probably be just HTML, CSS, Django/Python and some database).

My biggest concern is that some hacker could use it as a zombie without my knowing, or that bots could try to log in and snoop on whatever data I store on that machine without my knowing. Or who knows what else.

I found the firewall information page on the Ubuntu website, but will that be enough?

P.S.: If it’s impossible to give a complete answer, I’d also appreciate a book/website recommendation for complete beginners to Ubuntu and security.

## Why do we use the number of compares to measure the time complexity when compare is quite cheap?

I think one reason a comparison is regarded as quite costly is historical, as Knuth remarks: the research grew out of tennis tournaments, where the problem is to correctly find the second- or third-best player, assuming the players are not in a “rock paper scissors” situation but each has an absolute “combat power”.

If we have an array of size n = 1,000,000, we don’t usually mind comparing 2,000,000 times to find the second-largest number. In a tennis tournament, by contrast, having Player A play Player B is costly: a single match can take a whole afternoon.

With sorting or selection algorithms, what if the number of comparisons is O(n log n) or O(n), but the other operations have to be O(n²) or O(n log n)? Wouldn’t the higher order of growth still override the number of comparisons? (Maybe this hasn’t happened yet, or else we would have a case study about it.) So ultimately, shouldn’t the time complexity be determined by the order of growth of the number of atomic steps relative to n, rather than by the number of comparisons?
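To make the distinction concrete, here is a small sketch (my own, not from any textbook) that counts comparisons separately from all other work while finding the second-largest element; on this input it answers with 11 comparisons, close to the n + log n bound the tournament method achieves:

```python
def second_largest(a):
    """Return (second-largest element, number of element comparisons)."""
    comparisons = 0
    best, second = a[0], float("-inf")
    for x in a[1:]:
        comparisons += 1          # comparison: x > best
        if x > best:
            second, best = best, x
        else:
            comparisons += 1      # comparison: x > second
            if x > second:
                second = x
    return second, comparisons

print(second_largest([3, 1, 4, 1, 5, 9, 2, 6]))   # (6, 11)
```

Everything else the loop does (index bookkeeping, assignments) is O(1) per element, so here the comparison count and the total step count have the same order of growth; the question is precisely about cases where they would not.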

## Linear algorithm to measure how sorted an array is

I’ve just attended an algorithms course, in which I saw many sorting algorithms perform better or worse depending on how sorted the elements of the array already are. The typical examples are quicksort, which (with a naive pivot choice) degrades to $$O(n^2)$$ time on already-sorted arrays, and adaptive algorithms such as natural mergesort, which run in linear time on sorted arrays. Conversely, an array sorted from the highest to the lowest value is another bad case for naive quicksort.

My question is whether there is a way to measure, in linear time, how sorted the array is, and then decide which algorithm is better to use.
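One simple linear-time statistic (a sketch of my own; counting ascending runs, or approximating the inversion count, are other usual candidates) is the fraction of adjacent pairs that are already in order:

```python
def adjacent_sortedness(a):
    """Fraction of adjacent pairs in order: 1.0 for a sorted array,
    about 0.5 for random data, 0.0 for strictly descending data."""
    if len(a) < 2:
        return 1.0
    in_order = sum(1 for x, y in zip(a, a[1:]) if x <= y)
    return in_order / (len(a) - 1)

print(adjacent_sortedness([1, 2, 3, 4]))   # 1.0
print(adjacent_sortedness([4, 3, 2, 1]))   # 0.0
```

Note the trade-off: the exact inversion count is a finer measure of disorder but takes $$\Theta(n \log n)$$ to compute, whereas this adjacent-pair statistic is a single O(n) pass, cheap enough to run before choosing a sort.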

## Why do we need security measures like control flow integrity and buffer overflow guards if we have a good access control protocol in place?

Reading into information security, I noticed two branches: access control, i.e. protecting communication with external devices using some type of cryptographic authentication and encryption mechanism; and things like control flow integrity. My question is why we need the latter if the former is good enough. Are there examples of control-flow exploits against access control protocol implementations themselves? My focus is mainly on embedded devices.

## How to identify measure words in Chinese text?

Measure words (aka classifiers) are used in Chinese to “measure” things, e.g.

Three glasses of milk

That person

One crow

We don’t have an equivalent in English [they’re not collective nouns (e.g. a murder of crows)]. It would help when reading Chinese text to be able to highlight measure words in a distinct color.

Thus, I’m interested in the following problem:

Input: Chinese plaintext.
Output: Identification of which characters are measure words in that plaintext.

It’s a highly restricted version of Chinese text segmentation, and I expect it to be substantially simpler: e.g., there is only a short list of characters that can be measure words. It may even be considered “solved”.

However, there are some nuances that make this challenging, e.g. reduplicated measure words such as 个个 and 一根根, and characters that serve as measure words can also appear inside ordinary words, as in 个人. Nevertheless, there may only be a small number of such exceptions.
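A deliberately naive dictionary-based sketch of the "short list" idea (my own; the classifier and trigger lists below are tiny illustrative samples, nowhere near complete): flag a character as a measure word when it appears in the classifier list and directly follows a numeral or demonstrative.

```python
# Tiny illustrative lists, not exhaustive.
CLASSIFIERS = set("个只条杯根张本辆")            # common measure words
TRIGGERS = set("一二三四五六七八九十两几这那每")  # numerals / demonstratives

def find_measure_words(text):
    """Return indices of characters flagged as measure words."""
    hits = []
    for i, ch in enumerate(text):
        if ch in CLASSIFIERS and i > 0 and text[i - 1] in TRIGGERS:
            hits.append(i)
    return hits

print(find_measure_words("三杯牛奶"))   # [1] -> 杯
print(find_measure_words("那个人"))     # [1] -> 个
```

This already shows the nuances from the question: on 一根根 it flags only the first 根, and a bare 个人 with no preceding numeral is (correctly) not flagged. A serious solution would lean on a segmenter with POS tagging (e.g. jieba's POS mode or Stanford CoreNLP) rather than character lookup.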

Question: How to identify measure words in Chinese text?

Searching the ACM Digital Library for Chinese measure words didn’t turn up anything relevant, but this problem may have arisen as a sideline in other work on Chinese text segmentation.

## Blum complexity measure for lambda calculus

Is there a formal complexity measure for lambda expressions which satisfies the Blum axioms and measures the complexity of reducing the expression to its normal form?
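For reference, the two Blum axioms that a candidate measure $$\Phi$$ must satisfy, relative to an enumeration $$(\varphi_i)$$ of the partial computable functions, are:

```latex
\textbf{(B1)}\quad \operatorname{dom}(\Phi_i) = \operatorname{dom}(\varphi_i)
  \quad \text{for every } i;
\qquad
\textbf{(B2)}\quad \text{the predicate } \Phi_i(x) = y
  \text{ is decidable, uniformly in } i, x, y.
```

A natural first candidate to check against these axioms is the number of reduction steps to normal form under a fixed effective strategy (e.g. leftmost-outermost), with the measure undefined when the term has no normal form.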