How do I model the fighter’s Great Weapon Fighting fighting style in AnyDice?

I was trying to create an AnyDice function to model the Great Weapon Fighting fighting style (which lets you reroll 1s and 2s on damage dice, keeping the new roll), but I couldn’t get it to work for arbitrary dice.

I’ve found this one:

function: reroll R:n under N:n {
  if R < N { result: d12 } else { result: R }
}
output [reroll 1d12 under 3] named "greataxe weapon fighting"

And it works fine. But I don’t know how to make the function generic, so I don’t need to change the d12 every time I want a different die to reroll.

I’ve tried

function: reroll R:n under N:n {
  if R < N { result: d{1..R} } else { result: R }
}
output [reroll 1d12 under 3] named "greataxe weapon fighting"

but it is not giving the right probabilities. Maybe if I could fetch the die size inside the function…

Is it possible to model these probabilities in AnyDice?
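
For reference, this is the distribution I’m trying to reproduce, computed outside AnyDice with a quick Python sketch (the function name and parameters are my own; it models a single reroll of any value below the threshold, keeping the new result):

from fractions import Fraction
from collections import defaultdict

def reroll_under(sides, threshold):
    """Distribution of one d<sides> where results below `threshold`
    are rerolled once and the new roll is kept."""
    dist = defaultdict(Fraction)
    p = Fraction(1, sides)
    for first in range(1, sides + 1):
        if first < threshold:
            # rerolled: the second roll is kept, whatever it is
            for second in range(1, sides + 1):
                dist[second] += p * p
        else:
            dist[first] += p
    return dict(dist)

# Example: a d12 with 1s and 2s rerolled (Great Weapon Fighting with a greataxe)
for face, prob in sorted(reroll_under(12, 3).items()):
    print(face, float(prob))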

I was asked by a pal to help them model a dice mechanic in AnyDice. I must admit I am an absolute neophyte with it, and I offered to solve it using software I’m better versed in. Still, I’d like to be able to help them do this in AnyDice.

The mechanic is as follows:

  • The player and their opponent are each assigned a pool of dice. This is done via other mechanics of the game, the details of which are not germane. Suffice to say, the player will have some set of dice (say 2D6, 1D8, and 1D12) that will face off against the opponent’s pool (which will generally be different from the player’s, say 3D6, 2D8, and 1D12).
  • The player and their opponent roll their pools.
  • The opponent notes their highest value die. This is the target.
  • The player counts the number of their dice that have a higher value than the target, if any.
  • The count of the dice exceeding the target, if any, is the number of success points (simulated in the sketch just below this list).
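
To make sure I have stated the mechanic correctly, here is a quick Monte Carlo sketch of it in Python, using the example pools above (the helper name and the trial count are arbitrary choices of mine):

import random

def successes(player_pool, opponent_pool):
    """One round: count player dice strictly above the opponent's highest die."""
    target = max(random.randint(1, sides) for sides in opponent_pool)
    return sum(random.randint(1, sides) > target for sides in player_pool)

# Example pools from above: player 2D6 + 1D8 + 1D12 vs opponent 3D6 + 2D8 + 1D12
player = [6, 6, 8, 12]
opponent = [6, 6, 6, 8, 8, 12]

trials = 100_000
counts = {}
for _ in range(trials):
    s = successes(player, opponent)
    counts[s] = counts.get(s, 0) + 1

for s in sorted(counts):
    print(s, counts[s] / trials)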

I searched the AnyDice tag here for questions that might be similar; the closest I found was "Modelling an opposed dice pool mechanic in AnyDice", specifically the answer by Ilmari Karonen.

That question and answer, however, deals with only a single die type.

Can a question like "What are the probabilities of N successes when rolling 4D6 and 6D20 as the player against 6D6 and 4D20 for the opponent?" be handled in AnyDice and produce a chart similar to that below?

[example chart of the probability of each success count]

Problem: Lotka–Volterra Model – Modelling & Plotting in Python

I urgently need your help. I’m currently conducting research on revenue calculation and the dynamics within it for my master’s. I thought of revenue/profit margin as a dynamical system – Lotka–Volterra differential equations – and of the contribution margin calculation as given by this simple formula:

https://www.accountingformanagement.org/wp-content/uploads/2012/07/contribution-margin-ratio-img4.png

Consequently, my idea was the following, but I receive an error:
[three screenshots showing the code and the resulting error]

  • Can anyone help me?
  • Do you think it’s a bad idea to use Lotka–Volterra equations for this nonlinear sales/cost/margin simulation?
  • How would you model it and why? Am I missing equations or does the simple system already fit the requirements?
  • How do I generate a valid plot out of these results (Phase Portrait etc.)?
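
For the last point, the kind of plot I have in mind would come from something like this minimal sketch of the classic Lotka–Volterra system (the parameter values and the sales/costs reading of x and y are placeholders of mine, not a finished model):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# Classic Lotka-Volterra system; x is read as "sales" and y as "costs"
# purely for illustration -- the parameters are arbitrary placeholders.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5

def lotka_volterra(state, t):
    x, y = state
    dxdt = alpha * x - beta * x * y
    dydt = delta * x * y - gamma * y
    return [dxdt, dydt]

t = np.linspace(0, 30, 1000)
sol = odeint(lotka_volterra, [10.0, 5.0], t)

# Time series of both variables
plt.plot(t, sol[:, 0], label="x (sales)")
plt.plot(t, sol[:, 1], label="y (costs)")
plt.legend()
plt.show()

# Phase portrait: x plotted against y
plt.plot(sol[:, 0], sol[:, 1])
plt.xlabel("x")
plt.ylabel("y")
plt.title("Phase portrait")
plt.show()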

Signal translation with Seq2Seq model

I’m currently doing some research on signal processing, and I got a dataset which includes the signal itself and its "translation".

[image: a signal and its translation]

So I want to use a Many-to-Many RNN to translate the first into the second.

After spending a week reading about the different options I have, I ended up learning about RNNs and Seq2Seq models. I believe this is the right solution for the problem (correct me if I’m wrong).

Now, as the input and the output are of the same length, I don’t need to add padding and thus I tried a simple LSTM layer and TimeDistributed Dense layer (Keras):

# assuming the TensorFlow Keras API
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

model = Sequential([
    LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2),
    TimeDistributed(Dense(units=1, activation="softmax"))
])

model.compile(optimizer='adam', loss='categorical_crossentropy')

But the model seems to learn nothing from the sequence, and when I plot the "prediction", it’s nothing but values between 0 and 1.

As you can see, I’m a beginner and the code I wrote might not make sense to you, but I need guidance on a few questions:

  • Does the model make sense for the problem I’m trying to solve?
  • Am I using the right loss/activation functions?
  • And finally, please correct/teach me.
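
For context, one variant I have seen mentioned for continuous targets (and am unsure whether it applies here – hence the question) is a linear output with MSE loss, something like the sketch below (SHAPE is just a placeholder here; my real shape comes from the dataset):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, TimeDistributed

SHAPE = (None, 1)  # placeholder: (timesteps, features)

# Same architecture as above, but with a linear output and a regression loss,
# since the target is a continuous signal rather than a set of class labels.
model = Sequential([
    LSTM(256, return_sequences=True, input_shape=SHAPE, dropout=0.2),
    TimeDistributed(Dense(units=1, activation="linear")),
])
model.compile(optimizer="adam", loss="mse")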

What happens if I swap the forget gate and update gate in an LSTM model?

Consider the following equations used in an LSTM (taken from Andrew Ng’s course on sequence models).

In an LSTM model, the LSTM cell has three inputs at any time step $t$:

  • Input: $(X_t, a^{(t-1)}, C^{(t-1)})$,

    Here $X_t$ is the input vector, $a^{(t-1)}$ is the previous hidden state, and $C^{(t-1)}$ is the previous cell state.

Now the new cell state $C^t$ is given by the following formula:

$$C^t = \text{forget\_gate} * C^{(t-1)} + \text{update\_gate} * \overline{C^t}$$
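
For reference, in that course’s notation the two gates and the candidate cell state $\overline{C^t}$ are themselves computed from the same inputs (this is the standard formulation, with $\sigma$ the sigmoid; the course writes the gates as $\Gamma_f$ and $\Gamma_u$):

$$\text{forget\_gate} = \sigma\left(W_f [a^{(t-1)}, X_t] + b_f\right), \quad \text{update\_gate} = \sigma\left(W_u [a^{(t-1)}, X_t] + b_u\right), \quad \overline{C^t} = \tanh\left(W_c [a^{(t-1)}, X_t] + b_c\right)$$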

Question:

If I swap the places of forget_gate and update_gate, I still get a valid $C^t$. So why do we multiply the previous cell state by the forget gate only and the candidate cell state by the update gate only? What happens if I multiply the previous cell state by the update gate?

Edit: After swapping, the formula would look like this:

$$C^t = \text{update\_gate} * C^{(t-1)} + \text{forget\_gate} * \overline{C^t}$$


Regression problem with trace regression model

I’m working on a regression task with a trace regression model that receives as input a matrix $X$. Its formula is as follows:

$$y = \mathrm{tr}(\beta_*^{T} X) + \epsilon$$

where $\mathrm{tr}(\cdot)$ denotes the trace, $\beta_*$ is some unknown matrix of regression coefficients, and $\epsilon$ is random noise.

Can someone with that knowledge provide me with the steps to achieve the regression task, where we must calculate the trace (the sum of the diagonal elements) in the regression phase? Also, I need to know how to generate the regression coefficients of such a matrix.

I found the description of the model here: https://arxiv.org/pdf/0912.5338.pdf
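
To make the model concrete, this is how I currently understand the data-generating process, written as a small Python sketch (the dimensions, noise level, and variable names are arbitrary placeholders of mine, and I may well be misreading the paper). The last lines use the fact that $\mathrm{tr}(\beta^T X) = \langle \mathrm{vec}(\beta), \mathrm{vec}(X) \rangle$, so estimating $\beta_*$ looks like ordinary linear regression on the flattened matrices – is that the right way to think about it?

import numpy as np

rng = np.random.default_rng(0)
p, q, n = 5, 4, 200                  # matrix dimensions and sample size (placeholders)

beta_star = rng.normal(size=(p, q))  # "unknown" coefficient matrix, drawn at random

# Each sample is a matrix X_i; the response is tr(beta_*^T X_i) plus noise.
X = rng.normal(size=(n, p, q))
noise = 0.1 * rng.normal(size=n)
y = np.einsum('ij,nij->n', beta_star, X) + noise

# Since tr(beta^T X) = <vec(beta), vec(X)>, the model is linear in the
# flattened matrices, so least squares on vec(X_i) recovers beta_*:
beta_hat_vec, *_ = np.linalg.lstsq(X.reshape(n, -1), y, rcond=None)
beta_hat = beta_hat_vec.reshape(p, q)
print(np.abs(beta_hat - beta_star).max())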

What motivates the RAM model?

It looks like most of today’s algorithm analysis is done in models with constant-time random access, such as the word RAM model. I don’t know much about physics, but from popular sources I’ve heard that there’s supposed to be a limit on information storage per volume and on information travel speed, so RAMs don’t seem to be physically realizable. Modern computers have all these cache levels, which I’d say make them not very RAM-like. So why should theoretical algorithms be set in the RAM model?

Show that if $\mathcal{H}$ is PAC learnable in the standard one-oracle model, then $\mathcal{H}$ is PAC learnable in the two-oracle model

This is question 9.1 from Understanding Machine Learning, Chapter 3. It goes like this:

Consider a variant of the PAC model in which there are two example oracles: one that generates positive examples and one that generates negative examples, both according to the underlying distribution $\mathcal{D}$ on $\mathcal{X}$. Formally, given a target function $f : \mathcal{X} \to \{0,1\}$, let $\mathcal{D}^+$ be the distribution over $\mathcal{X}^+ = \{x \in \mathcal{X}: f(x) = 1\}$ defined by $\mathcal{D}^+(A) = \frac{\mathcal{D}(A)}{\mathcal{D}(\mathcal{X}^+)}$, for every $A \subset \mathcal{X}^+$. Similarly, $\mathcal{D}^-$ is the distribution over $\mathcal{X}^{-}$ induced by $\mathcal{D}$.

The definition of PAC learnability in the two-oracle model is the same as the standard definition of PAC learnability, except that here the learner has access to $m^{+}_{\mathcal{H}}(\epsilon, \delta)$ i.i.d. examples from $\mathcal{D}^+$ and $m^{-}_{\mathcal{H}}(\epsilon, \delta)$ i.i.d. examples from $\mathcal{D}^{-}$. The learner’s goal is to output $h$ s.t. with probability at least $1-\delta$ (over the choice of the two training sets, and possibly over the nondeterministic decisions made by the learning algorithm), both $L_{(\mathcal{D}^+,f)}(h) \leq \epsilon$ and $L_{(\mathcal{D}^-,f)}(h) \leq \epsilon$.

I am trying to prove that if $\mathcal{H}$ is PAC learnable in the standard one-oracle model, then $\mathcal{H}$ is PAC learnable in the two-oracle model. My attempt so far:

Note that $$L_{(\mathcal{D},f)}(h) = \mathcal{D}(\mathcal{X}^+)\,L_{(\mathcal{D}^+,f)}(h) + \mathcal{D}(\mathcal{X}^{-})\,L_{(\mathcal{D}^-,f)}(h).$$ Let $d = \min\{\mathcal{D}(\mathcal{X}^+), \mathcal{D}(\mathcal{X}^-)\}$; then if $m \geq m_\mathcal{H}(\epsilon d, \delta)$, it is clear that $$\mathbb{P}[L_{(\mathcal{D},f)}(h)\leq \epsilon d] \geq 1-\delta \implies \mathbb{P}[L_{(\mathcal{D}^+,f)}(h)\leq \epsilon] \geq 1-\delta$$ and $$\mathbb{P}[L_{(\mathcal{D},f)}(h)\leq \epsilon d] \geq 1-\delta \implies \mathbb{P}[L_{(\mathcal{D}^-,f)}(h)\leq \epsilon] \geq 1-\delta.$$
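
(The step I am relying on here is that, since both terms in the decomposition are non-negative and $\mathcal{D}(\mathcal{X}^+) \geq d$,
$$L_{(\mathcal{D},f)}(h) \leq \epsilon d \implies L_{(\mathcal{D}^+,f)}(h) \leq \frac{L_{(\mathcal{D},f)}(h)}{\mathcal{D}(\mathcal{X}^+)} \leq \frac{\epsilon d}{\mathcal{D}(\mathcal{X}^+)} \leq \epsilon,$$
and symmetrically for $\mathcal{D}^-$.)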

So we know that if we have $m \geq m_{\mathcal{H}}(\epsilon d, \delta)$ samples drawn i.i.d. from $\mathcal{D}$, then we can guarantee $\mathbb{P}[L_{(\mathcal{D}^+,f)}(h)\leq \epsilon] \geq 1-\delta$ and $\mathbb{P}[L_{(\mathcal{D}^-,f)}(h)\leq \epsilon] \geq 1-\delta$.

How do I choose $m_{\mathcal{H}}^+(\epsilon, \delta)$ and $m_{\mathcal{H}}^-(\epsilon, \delta)$ such that if we have $m^+ \geq m_{\mathcal{H}}^+(\epsilon, \delta)$ samples drawn i.i.d. according to $\mathcal{D}^+$ and $m^- \geq m_{\mathcal{H}}^-(\epsilon, \delta)$ samples drawn i.i.d. according to $\mathcal{D}^{-}$, then we can guarantee $\mathbb{P}[L_{(\mathcal{D}^+,f)}(h)\leq \epsilon] \geq 1-\delta$ and $\mathbb{P}[L_{(\mathcal{D}^-,f)}(h)\leq \epsilon] \geq 1-\delta$?

When is drawing $m^+$ samples according to $\mathcal{D}^+$ and $m^{-}$ samples according to $\mathcal{D}^-$ the same as drawing $(m^+ + m^-)$ samples according to $\mathcal{D}$?

How do I model time passing differently for different characters?

I am the DM in this situation and a character has wandered into a small area which links to another plane. In this plane time passes differently, so 1 minute in that plane would be 1 hour in the real world.

Normally I would make sure the players just get equal screen time to ensure they all feel included, but this is complicated by the fact that the players in normal time are in combat, and the character inside the distorted time area is not.

If the player inside the time bubble were to immediately walk straight out, they would have been gone for a full minute, at which point combat would have likely finished.

How can I best play this at the table without making the player in the time bubble just sit around while everyone else fights?

I would also like to keep it vague, so the players outside of the time bubble don’t know whether it is safe or dangerous inside it (i.e. they worry for the character trapped inside), but the only way I can see to accomplish this is to deal with combat first and have anyone in the time bubble sit out of the game until either combat is over or everyone is in the bubble.

A good answer will draw from experience of similar situations and ensure that everyone at the table feels like they are getting to do something. My players can be pretty slow in combat (and even slower out of it!), so sitting out a full combat might mean not doing anything for over an hour (actually worse in this situation, because the nature of the combat in this area could well take the full session).

Addendum: The time distortion effect isn’t mandatory, and if I can’t figure out a way to deal with it properly I will just drop it for an effect that’s easier to work with, so frame challenges are not needed.

What is the appropriate way to represent the type attribute of an entity in a conceptual model?

I have a vehicle entity, and this vehicle can be of different types. How do I model this in a conceptual design? Do I place the type as an attribute of the vehicle entity? And if each vehicle type has its own set of attributes, how do I model each type when it is already represented as a vehicle attribute?