How do I handle a group that does not understand the ‘assumption rule’?

1st rule of D&D (as of 3rd edition/pathfinder): The GM is the final arbiter on all rules.

I understand this, completely, and do not disagree. That said, I’m having trouble with a group that doesn’t seem to understand what I call the ‘assumption rule,’ which is as follows.

Assumption Rule: Unless and until the GM makes a ruling to the contrary, the rules of the game are assumed to be as they are stated in the book.

I have yet to find anywhere that it actually says this, but it just seems like common sense to me. If I cannot assume, at least for the majority of the time, that the rules of the game are as they are stated in the books we’re using to play the game, how can I expect to be able to make and use a character that I made with those rules in mind?

Also, I’m not referring to one instance. This is not me complaining about a GM denying me the ability to abuse one minor loophole in the system to break the game. As soon as anyone in this group says anything like ‘the book says,’ the rest all but yell ‘the GM trumps the book’ — whether or not the GM has actually made a ruling, or even disagreed with the book in the first place.

The biggest issue I’m having is that, for the current campaign, our GM is relatively new and isn’t an expert on the rules as they are written in the book. He is so used to playing with this same group that he assumed a common house rule was actually the rule as stated in the book. (The rule in question was re-rolling 1’s when rolling stats. He was surprised when I asked him whether we were doing that for his game.) This is becoming a frequent problem for me, as I play in multiple groups and house rules vary between them. So I’m forced to fall back on the ‘assumption rule’ more often than not, only to have it blow up in my face every time I use a completely legal tactic to gain an advantage in combat, or make a check the rules say I can make, and the group turns on me when I point that out because the GM did not specifically state that we weren’t using that rule.

UPDATE: To clarify something I’m not entirely sure everyone reading this is getting: I’m not being a ‘rules lawyer.’ I’m not quoting the rule book religiously, or trying to use it to argue with the GM, or anything like that. I’ll do something like try to change a random NPC’s opinion of my character with a Diplomacy check, only to be told that I have to role-play it out. It won’t be someone important to the plot, or even someone I could potentially gain some huge advantage from. The exact situation was me using a Diplomacy check to talk a bartender into giving my character a minor discount. And it was the group that told me I had to role-play it, not the GM.

Or, as another example, during one combat session I said that I was going to use the withdraw action, only to have the whole table look at me like I was stupid — except the GM, who just looked confused. When they (the group, not the GM) told me that I would take an AoO for it, they yelled their ‘GM trumps rules’ mantra at me just because I looked up what the withdraw action is.

This is not a rant. I’m not trying to vent about something. I’m trying to convey what happened, as it happened, and ask for advice on how best to deal with this. Answers from experience would be appreciated, but I will listen to any advice anyone has to offer.

I like the group, and I enjoy being part of the campaigns they play; it’s just this one issue that keeps coming up. I would hate for something like this to get me kicked out.

How to get a JSON object from a JSON array where one of its values fulfills an assumption

In MySQL I have a JSON array like this:

[{"amount": "53.00", "paid_at": "2019-05-10", "payment_date": "2019-05-16"}, {"amount": "53.00", "paid_at": false, "payment_date": "2019-06-16"}, {"amount": "53.00", "paid_at": false, "payment_date": "2019-07-16"}]

Now I would like to get the first “payment_date” value where the “paid_at” value is false, and compare it with today’s date. How can I do it?
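To make the intended comparison concrete, here is the selection logic sketched in Python (the array is the one from the question, with the unquoted date quoted so it parses; the function name is mine, and in MySQL itself one would typically use the built-in JSON functions instead):

```python
import json
from datetime import date

# The JSON array from the question.
raw = ('[{"amount": "53.00", "paid_at": "2019-05-10", "payment_date": "2019-05-16"},'
       ' {"amount": "53.00", "paid_at": false, "payment_date": "2019-06-16"},'
       ' {"amount": "53.00", "paid_at": false, "payment_date": "2019-07-16"}]')

def first_unpaid_date(json_array):
    """Return the payment_date of the first entry whose paid_at is false."""
    for entry in json.loads(json_array):
        if entry["paid_at"] is False:
            return date.fromisoformat(entry["payment_date"])
    return None

due = first_unpaid_date(raw)                     # -> date(2019, 6, 16)
is_due = due is not None and due <= date.today() # compare with today's date
```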

show that the following construction is CPA-secure under the DDH assumption

Let $ \mathbb G$ be a cyclic group, $ q$ a prime number, and $ g \in \mathbb G$ a generator of $ \mathbb G$ . The scheme $ \Pi$ is defined as follows:

$ Gen(1^n)$ samples $ x_0, x_1$ uniformly from $ \mathbb Z_q$ and sets $ sk = (x_0, x_1)$ and $ pk = (g^{x_0}, g^{x_1})$ .

The encryption algorithm, given a message $ b \in \{0,1\}$ , samples $ r$ uniformly from $ \mathbb Z_q$ and outputs $ Enc_{pk}(b) = (g^r, g^{x_b \cdot r}, g^{x_{1-b} \cdot r})$ .

Prove that under the DDH assumption this construction provides CPA-security.
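To make the construction concrete, here is a toy instantiation in Python over a small prime-order subgroup. The parameters and the candidate decryption rule are my own illustrative choices (the question does not specify decryption), and the group is far too small to be secure:

```python
import random

# Toy parameters (illustrative only, NOT secure):
# g = 4 generates the order-11 subgroup of Z_23^*.
p, q, g = 23, 11, 4

def gen():
    # Sample sk = (x0, x1) with x0 != x1 so the toy decryption below is unambiguous.
    x0, x1 = random.sample(range(q), 2)
    return (x0, x1), (pow(g, x0, p), pow(g, x1, p))   # sk, pk

def enc(pk, b):
    r = random.randrange(1, q)
    return (pow(g, r, p),
            pow(pk[b], r, p),        # g^{x_b * r}
            pow(pk[1 - b], r, p))    # g^{x_{1-b} * r}

def dec(sk, ct):
    # A natural decryption: check which exponent matches the second component.
    u, v, _ = ct
    return 0 if pow(u, sk[0], p) == v else 1
```

Since $ r \neq 0$ and $ x_0 \neq x_1$ in the toy sampler, the check $ u^{x_0} = v$ holds exactly when $ b = 0$, so decryption recovers the bit.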

Simple Uniform Hashing Assumption and worst-case complexity for hash tables

My question: Is the Simple Uniform Hashing Assumption (SUHA) sufficient to show that the worst-case amortized time complexity of hash table lookups is O(1)?

It says in the Wikipedia article that this assumption implies that the average length of a chain is $ \alpha = n / m$ , but…

  • …this is true even without this assumption, right? If the distribution is [4, 0, 0, 0] the average length is still 1.
  • …this is a probabilistic statement, which is of little use when discussing worst-case complexity, no?

It seems to me like a different assumption would be needed. Something like:

The difference between the largest and smallest bucket is bounded by a constant factor.

Maybe this is implied by SUHA? If so, I don’t see how.
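The first bullet can be checked directly. A tiny Python sketch (the function and names are mine) of the point that the average chain length is distribution-independent, while the worst-case lookup cost is governed by the longest chain:

```python
def chain_stats(buckets):
    """Given the chain length of each bucket, return (average chain length, longest chain)."""
    n = sum(buckets)    # total keys stored
    m = len(buckets)    # number of buckets
    return n / m, max(buckets)

# The skewed distribution from the question vs. a perfectly even one:
avg_skew, worst_skew = chain_stats([4, 0, 0, 0])
avg_even, worst_even = chain_stats([1, 1, 1, 1])
# both averages are 1.0, but a lookup in the skewed table can cost 4
```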

Assumption $d>2$ on Proposition 2.12 from Knapp’s Elliptic Curves

I’m going through Knapp’s book on elliptic curves and I got stuck in a minor detail.

This is a part of the proof of Proposition 2.12:

I could understand everything except for this little detail: Where are we making use of the assumption $ d>2$ ?

I will post some pictures about the references that the proof makes use of, in order for you to understand the whole argument.

Proposition 2.7 and identity (2.12):

Lemma 2.11:


Parametrized reduction from 3-SAT to Independent Set to lower bound running time under ETH assumption

I want to prove that, assuming the Exponential Time Hypothesis is true, there is no algorithm that solves Independent Set in $ 2^{o(|V|+|E|)}$ time. I want to apply the following strong parameterized many-one reduction $ f$ from 3-SAT to Independent Set. Let $ \psi$ be the input to 3-SAT with parameter $ \kappa_{3\text{-}SAT} = \#\text{variables} + \#\text{clauses}$ , and let $ (G=(V,E),k)$ be the input for Independent Set with parameter $ \kappa_{IS} = |V| + |E|$ .

For every clause in the input formula $ \psi$ , add three vertices to the graph, corresponding to the respective literals. Add an edge between two vertices if:

a) They correspond to literals in the same clause or

b) they correspond to a variable and its negation.

Then 3-SAT has a satisfying assignment if and only if the graph defined by this reduction has an independent set of size $ m$ , where $ m$ is the number of clauses in $ \psi$ .

I am now wondering whether this reduction suffices to show that (assuming ETH), Independent Set cannot be solved in $ 2^{o(|V|+|E|)}$ time. If I understand correctly, the number of vertices $ |V| = 3m$ and the number of edges $ |E| \leq 3m+nm$ , since for each clause, we have $ 3$ edges between the respective vertices and then for each variable we have at most $ m$ edges between a variable and its inverse. However, this is not linear in $ \kappa_{3-Sat}$ anymore.

Is my upper bound on the number of edges wrong, or do I need a different reduction to show the desired result?
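For what it’s worth, the reduction as described is easy to implement, which makes the vertex and edge counts checkable on small formulas. A Python sketch (the representation is mine: a clause is a triple of nonzero ints, with a negative int denoting a negated variable):

```python
def reduce_3sat(clauses):
    # One vertex per literal occurrence, tagged with its clause index.
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    for u in range(len(vertices)):
        for v in range(u + 1, len(vertices)):
            (ci, li), (cj, lj) = vertices[u], vertices[v]
            if ci == cj or li == -lj:   # same clause, or a variable and its negation
                edges.add((u, v))
    return vertices, edges, len(clauses)   # target set size k = m (number of clauses)

# (x1 or x2 or x3) and (not x1 or not x2 or x3)
V, E, k = reduce_3sat([(1, 2, 3), (-1, -2, 3)])
# |V| = 3m = 6; 3 intra-clause edges per clause plus the x/not-x cross edges
```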

Minimization with asymptotic assumption

Given the function

$ g(n,m)=\min\Big\{f(a,b)+f(n-a,c)+f(n,m-bc)\ \Big|\ a,b,c\ \text{with}\ a,\ b,\ n-a,\ c,\ m-bc \geq 0,\ b\leq a!,\ c\leq (n-a)! \Big\} $

Assuming that $ n,m\geq 0,\ ((\lceil n/2\rceil)!)^2\leq m\leq n!,\ f(n,m)=\Omega (n)$ ,

is it true that $ g(n,m) \geq 2f(\lfloor n/2\rfloor ,(\lfloor n/2\rfloor)!)+f(n,m-((\lceil n/2\rceil)!)^2)$ ?

I tried the KKT conditions, but can’t derive this (as it contains factorials).

Also, it seems that the condition $ f(n,m)=\Omega (n)$ implies that $ f$ is convex on our domain (and thus satisfies the regularity condition for using KKT), but I managed to prove this only when $ f$ is polynomial.

So I am fully stuck in this…

Any help would be highly appreciated!

Assumption of a generation of the dataset by a probability distribution

Consider the following paragraph from the deeplearningbook

The training and test data are generated by a probability distribution over datasets called the data-generating process. We typically make a set of assumptions known collectively as the i.i.d. assumptions. These assumptions are that the examples in each dataset are independent from each other, and that the training set and test set are identically distributed, drawn from the same probability distribution as each other. This assumption enables us to describe the data-generating process with a probability distribution over a single example. The same distribution is then used to generate every train example and every test example. We call that shared underlying distribution the data-generating distribution, denoted $ p_{data}$ . This probabilistic framework and the i.i.d. assumptions enable us to mathematically study the relationship between training error and test error.

The bolded area is difficult for me to comprehend. I have the following issues interpreting it.

1) How does a probability distribution generate a dataset?

2) Are the data-generating process and the probability distribution the same thing?

3) What are the sample space and the random experiment for the underlying probability distribution?
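As a reading aid for the quoted paragraph (not a full answer), the i.i.d. picture can be sketched in a few lines of Python; the distribution here is entirely made up for illustration:

```python
import random

random.seed(0)

def draw_example():
    # One draw from a made-up data-generating distribution p_data:
    # the "random experiment" is sampling x; the label is a function of x.
    x = random.random()          # x ~ Uniform(0, 1)
    return (x, int(x > 0.5))

# Train and test are both generated by the SAME distribution,
# one independent draw per example -- that is what the i.i.d. assumptions say.
train = [draw_example() for _ in range(100)]
test  = [draw_example() for _ in range(20)]
```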

Growth assumption and example of finite (arbitrarily small) time blow up for ODE

Consider the following ODE initial value problem \begin{align*} &\frac{d}{dt}\Phi(t,x) = \boldsymbol{F}(t,\Phi(t,x)), & t \in [0,T], \ \ x \in \mathbb{R}^N,\\ &\Phi(0,x) = x, & x \in \mathbb{R}^N. \end{align*}

We say that $ \Phi: [0,T] \times \mathbb{R}^N \to \mathbb{R}^N$ is the flow of the ODE (as in this paper) if it solves it in some sense.

We assume that the vector field $ \boldsymbol{F}:[0,T]\times \mathbb{R}^N \to \mathbb{R}^N$ is Sobolev and such that $$ (*) \qquad \frac{|\boldsymbol{F}|}{1+|x|} \in L^1\left([0,T]; L^1(\mathbb{R}^N) \right) + L^1\left([0,T]; L^\infty(\mathbb{R}^N) \right), $$ that is, there exist \begin{align*} &\boldsymbol{F}_1 \in L^1\left([0,T]; L^1(\mathbb{R}^N) \right),\\ &\boldsymbol{F}_2 \in L^1\left([0,T]; L^\infty(\mathbb{R}^N) \right) \end{align*} such that $$ \frac{|\boldsymbol{F}|}{1+|x|} = \boldsymbol{F}_1 + \boldsymbol{F}_2. $$

In an answer to Quantitative finite speed of propagation property for ODE (cone of dependence), it was remarked that the flow $ \Phi$ can blow up in finite (and arbitrarily small) time if $ \boldsymbol{F}_1 \neq 0$ .

  1. Can you provide an example of such flow that blows up in finite (and arbitrarily small) time?

  2. Why is this not in contrast with the fact that assumption (*) is used in the existence and uniqueness result of Theorem 30 (page 23) of this paper?

  3. In the theorem cited in the previous point, is assumption (*) key for existence or uniqueness?

Does Bitcoin security rely on the assumption of nearly continuous transactions? [duplicate]

This question already has an answer here:

  • What happens if there are no transactions in a block?

So, for an attacker not to overtake the honest nodes, I believe we rely on the fact that blocks are always being produced, so the attacker is essentially outpaced. But in the extreme and ridiculously unlikely case that no Bitcoin transactions occur within some sufficiently large time window, could an attacker gain control of the chain and do damage? If not, why not?