## Install and set up a WordPress theme exactly as the demo for \$5

by: angelnx
Created: —
Category: WordPress
Viewed: 158

## Must a warlock learn new spells of *exactly* their warlock slot level?

The Warlock Pact Magic feature says:

> The Spells Known column of the Warlock table shows when you learn more warlock spells of your choice of 1st level and higher. A spell you choose must be of a level no higher than what’s shown in the table’s Slot Level column for your level. When you reach 6th level, for example, you learn a new warlock spell, which can be 1st, 2nd, or 3rd level.
>
> Additionally, when you gain a level in this class, you can choose one of the warlock spells you know and replace it with another spell from the warlock spell list, which also must be of a level for which you have spell slots.

Consider a warlock leveling up from level 4 to level 5, where their Pact Magic spell slots change from 2nd level to 3rd level. They can replace a warlock spell they know with a spell from the warlock spell list which “must be of a level for which they have spell slots,” and they don’t precisely have 2nd level spell slots anymore. Can the warlock therefore only replace a warlock spell they know with a 3rd level warlock spell? Or can they learn a new 2nd level warlock spell instead?

(The word “also” suggests the same conditions apply as in the previous paragraph, but the first paragraph uses different wording — “a level no higher than what’s shown in the table’s Slot Level column,” and the second paragraph refers to “a level for which you have spell slots.”)

I am aware that this answer claims the interpretation that you can learn lower-level spells, but it doesn’t give any justification for that interpretation or discuss the specific wording.

## What information exactly does a MAC address provide?

For instance, if someone had my MAC address, would they have my name and home address, or would they just have the name of my device? Also, could they track me down with this MAC address, or would they need to go to court and subpoena that information? Sorry for this; I asked a question earlier but still want more clarity.
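For context, a MAC address by itself contains neither a name nor a street address: its first three octets are the OUI (Organizationally Unique Identifier), which identifies the hardware manufacturer, and the remaining three are a device-specific value. A minimal sketch (the example address is made up):

```python
def split_mac(mac: str):
    """Split a colon-separated MAC address into its OUI and device parts."""
    octets = mac.lower().split(":")
    oui = ":".join(octets[:3])       # manufacturer prefix (OUI)
    device = ":".join(octets[3:])    # device-specific portion
    return oui, device

# Hypothetical address for illustration only.
oui, device = split_mac("00:1A:2B:3C:4D:5E")
# Mapping the OUI to a company name requires the public IEEE OUI registry;
# nothing in the address itself encodes the owner's identity or location.
```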

## Error with Image.putpixel(). return self.im.putpixel(xy, value) TypeError: function takes exactly 1 argument (3 given)

Here’s the code:

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
#from scipy import misc
import cv2

i = Image.open("originalfit.jpg")
w = Image.open("originalfit_wiener.jpg")
r = Image.open("binary_img.png")

pre = Image.open('preprocessed.png')
rcv = cv2.imread("binary_img.png")

# get image properties.
h, w, bpp = np.shape(rcv)

# iterate over the entire image.
for py in range(0, h):
    for px in range(0, w):
        if r.getpixel((px,py)) == 0 and i.getpixel((px,py)) - w.getpixel((px,py)) > 0:
            pre.putpixel((px,py), w.getpixel((px,py)))
        else:
            pre.putpixel((px,py), i.getpixel((px,py)))
```

Here’s the error message:

```
Traceback (most recent call last):
  File "preprocessing.py", line 23, in <module>
    pre.putpixel((px,py),i.getpixel((px,py)))
  File "/usr/local/lib/python3.5/dist-packages/PIL/Image.py", line 1696, in putpixel
    return self.im.putpixel(xy, value)
TypeError: function takes exactly 1 argument (3 given)
```
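For reference, this error typically means `getpixel()` on a multi-band (e.g. RGB) image returned a 3-tuple while the destination image is single-band (mode `"L"`), whose `putpixel()` expects a single integer. A minimal sketch with in-memory stand-ins (assuming Pillow; the file names in the question would be converted with `.convert("L")`, and all sizes/values here are made up):

```python
from PIL import Image

# Stand-ins for the opened files; all share mode "L", so getpixel() returns
# an int and putpixel() accepts one, and the mode mismatch never arises.
i = Image.new("L", (4, 4), 100)   # e.g. Image.open("originalfit.jpg").convert("L")
w = Image.new("L", (4, 4), 40)    # e.g. the Wiener-filtered image
r = Image.new("L", (4, 4), 0)     # e.g. the binary mask
pre = Image.new("L", (4, 4), 0)

width, height = pre.size          # avoids clobbering the image `w` with a width
for py in range(height):
    for px in range(width):
        if r.getpixel((px, py)) == 0 and i.getpixel((px, py)) - w.getpixel((px, py)) > 0:
            pre.putpixel((px, py), w.getpixel((px, py)))
        else:
            pre.putpixel((px, py), i.getpixel((px, py)))
```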

## What exactly is a Cartan radius vector (and its role in Poincaré gauge theories)

I am studying approaches to gravity where the Poincaré group is “gauged”. The original motivation for this is to understand what is meant by the statement that “teleparallel gravity is a gauge theory of the translation group”. The standard references are highly confusing and imprecise.

The situation with Poincaré-based theories is also messy, with lots of papers using very “physicist-y” math where the geometric meaning, or even the validity, of the construction is questionable. I have also found some papers where more rigorous mathematics is employed; however, in that case I have difficulty translating between the two languages.

I suspect I can clarify a great deal of my (mis)understanding if I understand properly what a “Cartan radius vector” is.

• I am doing a bit of “translation work” here, so it is also possible I completely misunderstand my references, but it seems to me a “naive” approach to gauging the Poincaré group is to work (at least initially) in flat Minkowski spacetime (general curvilinear coordinates $$x^\mu$$ are allowed), where we are given four functions $$y^a$$ on the space, interpreted as flat/inertial/Cartesian coordinates. In this case a holonomic, orthonormal vielbein is given by $$\theta^a=\mathrm dy^a.$$ Under a Poincaré transformation with constant coefficients $$y^{\prime a}=\Lambda^a_{\ b}y^b+\tau^a$$, the vielbein transforms as $$\theta^{\prime a}=\Lambda^a_{\ b}\theta^b,$$ however under a Poincaré transformation with point-dependent coefficients this is not the case. We can save the day, however, by considering the inertial coordinates $$y^a$$ as some kind of section of an affine bundle, introducing the affine connection $$\mathscr Dy^a=\mathrm dy^a+\Gamma^a_{\ b}y^b+B^a,$$ and defining $$\theta^a=\mathscr Dy^a$$. From this point on, however, it gets fuzzy, because teleparallel gravitists (see e.g. Aldrovandi, Pereira) tend to use this expression to define the vielbein. But in, for example, *Metric-affine gauge theory of gravity* by Hehl et al., it is stated that $$y^a$$ is the “Cartan radius vector” if $$B^a=0$$, and also that in order to have $$(\theta^a,\Gamma^a_{\ b})$$ double as a Cartan connection, we must have (here apparently only the linear part of the connection is used) $$Dy^a=\mathrm dy^a+\Gamma^a_{\ b}y^b=0.$$
• A bit later in the same Hehl paper, it is stated that the Cartan radius vector is defined by the equation (they use the notation $$\xi$$ for what I called $$y$$ before) $$D\xi^a=\theta^a.$$ Here apparently it is a linear object, not an affine one, the covariant derivative $$D$$ is linear, and it is claimed that the above equation is not totally integrable in general, but if integrated along an infinitesimal loop, it gives essentially an affine holonomy of the form $$\Delta\xi^a=\frac{1}{2}\left(R^a_{\ b\mu\nu}\xi^b+T^a_{\ \mu\nu}\right)\mathrm dx^\mu\wedge\mathrm dx^\nu.$$
• Based on what I have read about Cartan connections, one can describe a Cartan connection modelled on $$G/H$$ by having a $$G$$-fiber bundle $$(E,\pi,M,G/H,G)$$ with typical fiber $$G/H$$, an Ehresmann $$G$$-connection on $$E$$ specified by a vertical projector $$\mathrm v:TE\rightarrow VE$$, and a section $$s:M\rightarrow E$$ such that the pullback $$s^\ast\mathrm v|_x:T_xM\rightarrow V_{s(x)}E\simeq\mathfrak g/\mathfrak h$$ is an isomorphism. Here the section $$s$$ has the interpretation of specifying the point of contact between the model geometry $$E_x$$ and the manifold $$M$$, and the last condition states that at the point of contact the tangent space of the model geometry must be isomorphic to the tangent space of the base geometry. In fibred coordinates $$(x^\mu,y^a)$$ for $$E$$, we can write the connection as $$\mathrm v=\partial_a\otimes\left( \mathrm dy^a+\Gamma^a(x,y) \right).$$ In case we have $$G=\text{ISO}(3,1)$$ and $$H=\text{SO}(3,1)$$, the model space $$G/H\simeq\mathbb R^4$$ is affine Minkowski space, and the connection is $$\mathrm v=\partial_a\otimes(\mathrm dy^a+\Gamma^a_{\ b}(x)y^b+B^a(x)),$$ since $$G$$ is an affine group, and the pullback condition is that $$s^\ast\mathrm v=\partial_a\otimes(\mathrm ds^a(x)+\Gamma^a_{\ b}(x)s^b(x)+B^a(x))$$ is nondegenerate. But this is basically the affine covariant derivative of $$s$$.

So my question is, how are the objects $$y^a$$, $$\xi^a$$, $$s$$ defined in my bullet points related? What is it we actually mean under a Cartan radius vector? What is its interpretation?

It is clear to me that my $$y^a$$ in the first bullet point is basically $$s$$ (in the last bullet point); however, confusingly, Hehl says that our affine connection is a Cartan connection if $$\mathrm dy^a+\Gamma^a_{\ b}y^b=0$$, which it seems to me i) cannot be integrated in general, and ii) is in conflict with the more abstract definition in the third bullet point, where for the connection to be Cartan it is enough that $$\mathrm dy^a+\Gamma^a_{\ b}y^b+B^a$$ is nondegenerate (which is consistent with the interpretation of $$\mathscr D y^a$$ as a vielbein).

But I also know that a Cartan connection is, from another point of view, basically a coframe and a linear connection together, and $$B^a$$ is not in general a coframe in terms of transformation properties, as elucidated by Hehl.

I would basically like to resolve this mess into something coherent. I would also welcome references to papers treating Poincaré gauge gravity with mathematical rigour, consistency, and geometric clarity.

## Execute a bit of Lua code at rewrite stage but exactly once per request, no matter how many times location matching restarts?

From an nginx config (OpenResty) I need to execute a bit of Lua code in the following way:

• the bit of code must be executed during the rewrite phase, NOT earlier, because certain ngx.var variables that the code needs are not yet set at earlier stages
• it must be executed at most once per externally incoming request, no matter how many times nginx internally restarts its location matching algorithm as a result of rewrite, try_files or index

How can this be achieved?
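One commonly used pattern, sketched below with illustrative names (untested against this exact setup): nginx variables, unlike `ngx.ctx`, survive internal redirects, and a server-level `set` runs only once per request, because internal redirects restart location matching but do not re-enter the server rewrite phase. Such a variable can therefore guard the location-level rewrite handler.

```nginx
server {
    # Initialized once per request in the server rewrite phase;
    # rewrite/try_files/index restarts do not re-run this.
    set $my_lua_ran "";

    location / {
        rewrite_by_lua_block {
            -- runs in the rewrite phase, so ngx.var.* values set
            -- earlier in that phase are already available here
            if ngx.var.my_lua_ran == "" then
                ngx.var.my_lua_ran = "1"
                -- the actual once-per-request work goes here
            end
        }
        try_files $uri $uri/ /index.html;  # may trigger internal redirects
    }
}
```

`$my_lua_ran` and the location are placeholders; the same guard applies wherever the handler is attached, since every re-matched location sees the same variable.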

## Defining a class that behaves exactly like an int

Am I missing anything?
Is there anything I could improve on?

Are there any tricks that I could learn from this?
How about style; does the style look OK?

```cpp
#include <iostream> // std::cout
#include <utility>  // std::move

class jd_int {
public:
    jd_int() = default;

    jd_int(int i)             : _i{i}                   { }
    jd_int(const jd_int& jdi) : _i{jdi._i}              { }
    jd_int(jd_int&& jdi)      : _i{std::move(jdi._i)}   { }

    jd_int operator= (int i)             { _i = i; return *this;                 }
    jd_int operator= (double d)          { _i = d; return *this;                 }
    jd_int operator= (const jd_int& jdi) { _i = jdi._i; return *this;            }
    jd_int operator= (jd_int&& jdi)      { _i = std::move(jdi._i); return *this; }

    ~jd_int() = default;

    operator bool()   { return !!_i;                    }
    operator int()    { return static_cast<int>(_i);    }
    operator double() { return static_cast<double>(_i); }

    jd_int operator+=(jd_int jdi) { return _i += jdi._i; }
    jd_int operator+ (jd_int jdi) { return _i +  jdi._i; }

    jd_int operator-=(jd_int jdi) { return _i -= jdi._i; }
    jd_int operator- (jd_int jdi) { return _i -  jdi._i; }

    jd_int operator*=(jd_int jdi) { return _i *= jdi._i; }
    jd_int operator* (jd_int jdi) { return _i *  jdi._i; }

    jd_int operator/=(jd_int jdi) { return _i /= jdi._i; }
    jd_int operator/ (jd_int jdi) { return _i /  jdi._i; }

    jd_int operator%=(jd_int jdi) { return _i %= jdi._i; }
    jd_int operator% (jd_int jdi) { return _i %  jdi._i; }

    jd_int operator++()    { return ++_i;                          }
    jd_int operator++(int) { jd_int tmp = *this; ++_i; return tmp; }

    jd_int operator--()    { return --_i;                          }
    jd_int operator--(int) { jd_int tmp = *this; --_i; return tmp; }

    friend bool operator< (jd_int lhs, jd_int rhs);
    friend bool operator> (jd_int lhs, jd_int rhs);
    friend bool operator<=(jd_int lhs, jd_int rhs);
    friend bool operator>=(jd_int lhs, jd_int rhs);
    friend bool operator==(jd_int lhs, jd_int rhs);
    friend bool operator!=(jd_int lhs, jd_int rhs);

private:
    int _i;

    friend std::ostream& operator<<(std::ostream& os, const jd_int jdi);
    friend std::istream& operator>>(std::istream& is, jd_int jdi);
};

bool operator< (jd_int lhs, jd_int rhs) { return (lhs._i <  rhs._i); }
bool operator> (jd_int lhs, jd_int rhs) { return (lhs._i >  rhs._i); }
bool operator<=(jd_int lhs, jd_int rhs) { return (lhs._i <= rhs._i); }
bool operator>=(jd_int lhs, jd_int rhs) { return (lhs._i >= rhs._i); }
bool operator==(jd_int lhs, jd_int rhs) { return (lhs._i == rhs._i); }
bool operator!=(jd_int lhs, jd_int rhs) { return (lhs._i != rhs._i); }

std::ostream& operator<<(std::ostream& os, const jd_int jdi)
{
    os << jdi._i;
    return os;
}

std::istream& operator>>(std::istream& is, jd_int jdi)
{
    is >> jdi._i;
    return is;
}
```

## How exactly does fullscreen work, and why do some fullscreen modes not exist over the desktop?

Recently I found an application for my Nanoleaf light panels which can read the colors off my computer’s screen (you specify a region; I “boxed” the entire screen) and then places those colors onto the panels accordingly, for game/video immersion. This works well in most applications that use fullscreen; however, I’ve encountered a strange issue with applications ranging from Freespace 2 (an ancient computer game) to Minecraft. When these applications enter fullscreen mode, the panels continue to show the colors from the desktop.

To prove this is not an issue with the software’s reading method: if I take a screenshot using Win+Print Screen, the screenshot shows the desktop, not the contents of the fullscreen window. However, if I take one of these apps, put it into windowed mode, and make the window as large as I want, the colors are synced. Why is this? What makes fullscreen exist somewhere else, and where does it go in relation to the desktop? I apologize for the broadness of the question, but I’m not sure how else to ask it.

## How exactly does combat work?

I am currently a complete stranger to D&D, so this question may come across as quite idiotic, but how do Armor Class and hit dice work? I’ve seen some sources say you have to roll a d20 and add your modifiers, but what about your weapon damage or hit die? Let’s say I’m playing a fighter with a longsword, so a weapon that deals 1d8 damage, with proficiency. Do I then add that to the d20 I roll? Or are those completely different matters?
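To make the two rolls concrete, here is a minimal sketch of the usual 5e attack sequence (an illustration with made-up modifier values, not a rules quotation; it ignores corner cases like natural 1s and 20s). The d20 attack roll plus to-hit modifiers is compared against the target’s AC, and only on a hit is the weapon’s damage die, e.g. a longsword’s 1d8, rolled separately:

```python
import random

def attack(attack_bonus, target_ac, damage_die_sides, damage_bonus):
    """Roll d20 + attack_bonus against AC; on a hit, roll the damage die."""
    attack_roll = random.randint(1, 20) + attack_bonus   # the d20 roll
    if attack_roll >= target_ac:                         # hit: roll damage separately
        return random.randint(1, damage_die_sides) + damage_bonus
    return 0                                             # miss: no damage

# Hypothetical fighter: +5 to hit, longsword (1d8) with +3 damage, vs. AC 15.
damage = attack(attack_bonus=5, target_ac=15, damage_die_sides=8, damage_bonus=3)
```

The weapon die never touches the d20: the d20 decides *whether* you hit, and the 1d8 (plus damage modifiers) decides *how much* damage a hit deals.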

## How exactly to adapt Brown’s collapse from monoids to algebras?

In The Geometry of Rewriting Systems, Brown describes a method to collapse the bar resolution of a monoid. Roughly:

• Given a simplicial set $$X$$ equipped with a collapsing scheme (a partition of the geometric realization $$\lvert X \rvert$$ into essential, redundant and collapsible cells, satisfying some properties), it is possible to collapse $$\lvert X \rvert$$ into a smaller CW-complex with a cell for each essential cell of $$\lvert X \rvert$$.
• A monoid $$M$$ presented by a complete rewriting system (the set of relations $$R$$ is terminating and Church-Rosser) induces a collapsing scheme on the simplicial set $$BM$$.
• The $$n$$-cells of $$\lvert BM \rvert$$ are in correspondence with the generators of $$B_n$$ in the normalized bar resolution of $$M$$. In particular, each $$B_n$$ can be collapsed in a similar way that $$\lvert BM \rvert$$ is.
• If $$M$$ has a good set of normal forms of finite type ($$M$$ has a finite presentation $$(S,R)$$ and $$R$$ is terminating and Church-Rosser), then the classifying space $$\lvert BM \rvert$$ can be collapsed into a finite CW-complex.
• Under the assumptions of the last bullet, we can also collapse the bar resolution of $$M$$ into one where all $$B_n$$ are finitely generated. In particular, $$M$$ is of type $$(FL)_{\infty}$$.

Brown states (emphasis mine):

> The method used in this section works, with no essential change, if the ring $$\mathbb{Z}[M]$$ is replaced by an arbitrary augmented $$k$$-algebra $$A$$ which comes equipped with a presentation satisfying the conditions of Bergman’s diamond lemma (*The diamond lemma for ring theory*, Theorem 1.2). Here $$k$$ can be any commutative ring. One starts with the normalized bar resolution $$C$$ of $$k$$ over $$A$$, and one obtains a quotient resolution $$D$$, with one generator for each “essential” generator of the bar resolution. In particular, we recover Anick’s theorem (*On the homology of associative algebras*, Theorem 1.4).

Let me state here the conditions of the diamond lemma:

Theorem: Let $$S$$ be a reduction system for a free associative algebra $$k\langle X \rangle$$ (a subset of $$\langle X \rangle \times k\langle X \rangle$$), and $$\leq$$ a semigroup partial ordering on $$\langle X \rangle$$, compatible with $$S$$ and having descending chain condition. Then…
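For orientation, restating Bergman’s standard setup (nothing specific to Brown’s paper): a reduction system is a set of pairs, each replacing a monomial by a linear combination of monomials, and the presented algebra is the quotient of the free algebra by the ideal those pairs generate:

```latex
S = \{\sigma = (w_\sigma, f_\sigma)\},
\qquad w_\sigma \in \langle X \rangle,\quad f_\sigma \in k\langle X \rangle,
\qquad
A \;\cong\; k\langle X \rangle / I,
\qquad I = \bigl(\, w_\sigma - f_\sigma \;:\; \sigma \in S \,\bigr).
```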

I can’t understand two aspects of this adaptation:

• The conditions of the diamond lemma are stated for the free associative algebra $$k\langle X \rangle$$. What does it mean for the presentation of $$A$$ to satisfy these conditions? If we assume $$A \cong k\langle X \rangle/I$$ we can interpret Brown’s sentence as $$k\langle X \rangle$$ satisfying the conditions, but what about the ideal $$I$$? I assume it is related somehow to the reduction system, but how exactly?
• What would be the essential generators of the normalized bar resolution? In the monoid setting, they originate from the essential cells of the classifying space of the monoid. In the algebra setting, we don’t have that tool anymore (unless we generalize classifying spaces for internal monoids over monoidal categories, which I don’t think is the case).