## A plan for coding a card game with a different concept

I want to design a multi-player card game that includes:

• two teams of multiple players;
• rounds, with a player kicked each round;
• a manager who can start/end each round but cannot see the cards;
• one of team A's cards can kick a player every round;
• one of team B's players can flip a card of any other player every round (once per round, of course);
• a voting system.

I need to know what I should learn to design this game (I'm a graphic designer).

Thanks

## Who first discovered the concept corresponding to the symbol of class comprehension?

Who first discovered the concept corresponding to the symbol of class comprehension $$\{x \mid \varphi\}$$ used today in set theory?

Gérard Lang

## How to create graphical concept for a game?

I don’t even know the right formulation for this, but basically, I have got a gameplay idea for a mobile game and now I need to create / develop / decide:

1. The theme of my game (e.g., space, fantasy, or abstract)
2. Color palette for my game.
3. The overall graphical style.

Where should I work on this stuff? Should I make a sample scene in some 3D software (e.g., Blender) or directly inside the game engine?

P.S. Is the stuff I've described called a graphical concept, an art concept, or something else?

## Is there a concept of a “complete” regular language?

Is there a formulation of the notion of a complete regular language, which any other regular language can be reduced to?

## What is the Bruss-Yor concept of no information?

A few years ago, a question related to a paper of Thomas Bruss and Marc Yor on the so-called last arrival problem received some attention on this forum.

What I’d like to know now is:

What are the assumptions under which Theorem 5.1 of the Bruss-Yor paper is derived?

In the last arrival problem as I first thought about it, an unknown number $$N$$ of items arrive at independent times chosen from the uniform distribution on the interval $$[0,1]$$, and a selector (who sees the arrival times in increasing order) tries to choose the very last one "online".

The way I first thought about the problem (in the context of a variation of the secretary problem), there is no a priori random distribution of $$N$$, the number of items. Instead, the task is to find a policy that maximizes the probability of success under a worst case (aka adversarial) assumption. In other words we want a policy that succeeds with (at least) some fixed positive probability for every $$N$$.

For instance, one such policy that admits a rather straightforward analysis is to wait until, for the first time, half of the remaining time passes without any new item arriving, and then to accept the next one. This policy succeeds for $$N$$ items with probability exactly $$\prod_{k=1}^N\left(1-\frac1{2^k}\right),$$ which is bounded from below by its limit of about $$0.288788$$.
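The success probability of this policy, and its convergence to the stated lower bound, is easy to check numerically; the sketch below just evaluates the product from the text:

```python
from functools import reduce

def success_probability(n: int) -> float:
    """Probability that the wait-half-the-remaining-time policy succeeds
    when exactly n items arrive: the product of (1 - 1/2^k) for k = 1..n."""
    return reduce(lambda p, k: p * (1 - 0.5 ** k), range(1, n + 1), 1.0)

# The product decreases in n and converges to about 0.288788.
for n in (1, 2, 5, 30):
    print(n, success_probability(n))
```

The factors approach 1 geometrically, so the product converges very fast; by $$n=30$$ it already agrees with the limit to many decimal places.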

In a paper from 2011, I showed (with a somewhat clumsy computation) that there is an optimal policy that wins for the selector with probability around 0.3529170002.

But the last arrival problem can also be considered in other settings. We might have just a point process on the interval $$[0,1]$$. One type of point process considered by Bruss and Yor is what they call a process of proportional increments, or p.i.: a process where, as soon as we have seen one item, the expected rate of future items is (at all times) equal to the frequency of items we have already seen.

Since the selector will not have to take any decision until they have seen the first item, the p.i. assumption seems natural in this context. When I first saw the Bruss-Yor paper I was excited to see that they seemed to have obtained an explicit optimal policy for the selector for a p.i. process. Their suggested policy (Theorem 5.1) is to accept the $$k$$-th item if it arrives after time $$1-\frac1{k+1}.$$ At least it appears from their Conclusion 3 of Section 2.2 that what they are considering, even in Section 5, are p.i. processes.

But when I looked more closely, it seemed that the Bruss-Yor policy can’t be optimal under the assumption of p.i.

Suppose for instance that we have seen exactly one item up to time $$1/2$$. Then the probability of no more items ought to be $$1/2$$: If we discretize by chopping the unit interval into $$2m$$ slots, then after seeing one item in the first $$m$$ time-slots, the probability of no new item in the next slot is about $$m/(m+1)$$, and conditioning on that, the probability of no new item in the following slot is about $$(m+1)/(m+2)$$, and so on; in total $$\frac{m}{m+1}\cdot \frac{m+1}{m+2}\cdots \frac{2m-1}{2m} = \frac12.$$
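The product telescopes to exactly $$m/(2m) = 1/2$$ for every $$m$$; a minimal numerical check of this discretization argument (not part of the paper):

```python
def no_new_item_probability(m: int) -> float:
    """Probability of no new item in slots m+1..2m, given one item in the
    first m of 2m slots: the telescoping product
    m/(m+1) * (m+1)/(m+2) * ... * (2m-1)/(2m), which equals m/(2m) = 1/2."""
    p = 1.0
    for j in range(m, 2 * m):
        p *= j / (j + 1)
    return p

print(no_new_item_probability(10))    # 0.5 (up to rounding)
print(no_new_item_probability(1000))  # 0.5 (up to rounding)
```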

So it seems that if the first item arrives at time $$1/2$$, the selector succeeds with probability $$1/2$$ if they accept it. But that also means that they can’t possibly succeed with probability $$1/2$$ if they don’t. If they reject, the probability of seeing another item is $$1/2$$, but there might be more than one and they still have the nontrivial task of picking the right one.

Therefore it seems that an optimal policy in the p.i. setting must accept the first item even if it arrives slightly before time $$1/2$$.

I haven’t been able to follow the derivation of Theorem 5.1, but what I’d like to know is:

What is the model under which the authors claim that the $$(1-\frac1{k+1})$$-policy is optimal?

Looking back again at their Section 2.2, it seems in Conclusion 1 and the discussion around it that there is also an assumption that the process, conditioned on giving rise to exactly $$N$$ items, should behave as if the arrival times were independent and uniform on $$[0,1]$$.

I haven't been able to figure out if this is what the authors are assuming later on, but it seems to me that this would be incompatible with the p.i. assumption already for $$N=1$$: If the first item arrives at time $$T_1$$, then assuming p.i., the probability of no more items is equal to $$T_1$$. In order for $$T_1$$ to be uniform on $$[0,1]$$ after conditioning on $$N=1$$, it would therefore have to have density at $$T_1=t$$ proportional to $$1/t$$, which is impossible since $$\int_0^1 dt/t$$ diverges.

## Is there any flaw in my API-to-API authentication concept I missed?

I've been thinking lately about how the authentication methods used in API-to-API communication (RESTful APIs) were mostly designed to be human-oriented first (tokens, passwords, etc.), and how for APIs this often means you need a secret store to hold those passwords and tokens.

My idea is that the requester API contacts the receiver API with a hashed token and a callback webhook URL, the callback URL effectively acting as the user identity. The receiver API then contacts that URL (in a new connection) and obtains the unhashed token from it. The receiver then compares the hashed and unhashed versions of the token, and if they match, it knows that the sender of the original request is in fact the API behind the webhook.
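To make the comparison step concrete, here is a minimal in-process sketch of the scheme as I understand it. All names are hypothetical, and the "callback fetch" is simulated rather than made over HTTPS; a real implementation would request the raw token from the callback URL:

```python
import hashlib
import secrets

def make_request():
    """Requester side: generate a one-time random token and send only its hash
    (together with the callback URL, omitted in this sketch)."""
    raw_token = secrets.token_hex(32)  # random, never reused
    hashed = hashlib.sha256(raw_token.encode()).hexdigest()
    return raw_token, hashed

def verify(hashed_token: str, raw_token_from_callback: str) -> bool:
    """Receiver side: hash the token fetched from the callback URL and compare
    it, in constant time, with the hash that arrived in the request."""
    recomputed = hashlib.sha256(raw_token_from_callback.encode()).hexdigest()
    return secrets.compare_digest(recomputed, hashed_token)

# Simulated flow: the "callback" simply hands back the raw token.
raw, hashed = make_request()
print(verify(hashed, raw))       # matching token -> True
print(verify(hashed, "forged"))  # wrong token -> False
```

The constant-time comparison (`secrets.compare_digest`) avoids leaking information through timing, though as the question notes, the interesting part is whether the overall protocol is sound, not this local check.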

A workflow of the auth process is described in full in the following diagram:

I've also created a docker-compose POC of the concept on my GitHub, which works as I expected (please note that while I'll gladly take notes on the POC, this question is only about whether the concept as a whole is secure in theory).

The question

Is this really secure? Is there any vulnerability I missed in this authentication method that would allow an attacker to trick the receiver API by impersonating another API?

If there isn't, is there any way to prove beyond doubt (mathematically or otherwise) that it is in fact secure?

Assume the following are given:

• HTTPS is used (to protect against MITM) on all requests
• The backend DB is secure
• Tokens are generated randomly and are not reused
• The hashing function is modern enough to provide good protection
• The DNS records of both APIs are secure
• This is an authentication-only method, not authorization or encryption.

## Magento 2 – Warm Cache Vs Cold Cache Concept (With Varnish & Redis)

I want to understand the conceptual difference between a warm cache and a cold cache, and how to use both techniques effectively with Varnish and Redis.
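For context on the terminology: a cold cache is one that has just been flushed, so the first visitor to each page pays the full generation cost, while a warm cache already holds the pages. A common way to warm a full-page cache such as Varnish after a flush is to crawl key URLs before real traffic arrives. A minimal, hypothetical sketch (the URL list and fetcher are placeholders, not Magento APIs; in production the fetcher would make real HTTP requests):

```python
from typing import Callable, Iterable

def warm_cache(urls: Iterable[str], fetch: Callable[[str], int]) -> dict:
    """Request each URL once so the full-page cache (e.g. Varnish) stores it.
    `fetch` takes a URL and returns an HTTP status code; in production it
    might wrap urllib.request.urlopen or a requests session."""
    results = {}
    for url in urls:
        results[url] = fetch(url)
    return results

# Example with a stub fetcher standing in for real HTTP requests:
pages = ["/", "/category/shoes", "/product/sku-123"]
report = warm_cache(pages, fetch=lambda url: 200)
print(report)
```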

Any help, knowledge and experience sharing would be appreciated.