Registration process with OAuth2

I have been analyzing the security process in the 3GPP specs, and the registration process in the spec is explained with the flow below. The flow represents the registration of a client with the server (from spec 2).

I am not a security specialist, and I do not understand why we send the OAuth2 token in step 1. Isn't the access token sent after registration? I know that the access token is used for authorization once the client is known. I know this is an odd topic and I have tried to explain it briefly; I hope you understand, and if you don't, you can always ask me.

Examples from Pathfinder lore of the process of destroying an artifact

So I was reading the description of mage's disjunction, trying to look for a way to make mechanics for Rule Breaker from FSN, and noticed it said that mage's disjunction has a 1% chance per CL to destroy an artifact, and that doing so may draw the attention of a "powerful being".

I want to know about examples from pathfinder lore that describe what that powerful being would be and what that "attention" would look like.

You can also use this spell to target a single item. The item gets a Will save at a -5 penalty to avoid being permanently destroyed. Even artifacts are subject to mage’s disjunction, though there is only a 1% chance per caster level of actually affecting such powerful items. If successful, the artifact’s power unravels, and it is destroyed (with no save). If an artifact is destroyed, you must make a DC 25 Will save or permanently lose all spellcasting abilities. These abilities cannot be recovered by mortal magic, not even miracle or wish. Destroying artifacts is a dangerous business, and it is 95% likely to attract the attention of some powerful being who has an interest in or connection with the device.

Obviously I'm going to lend more of an ear to the SRD than some random Redditor, but it makes me curious about how exactly that attention would play out.

How can one program a situation like the following, and generalize the process to other configurations?

Distribute the numbers from 1 to 10 (see the image) so that the sum of each row and each column is the same and a) the maximum possible, b) the minimum possible. (I used 1 to 10 for ease.)

I know it is a problem that could be solved with matrices or lists, but I can't think of how to start.
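Since the image with the configuration is missing, here is a hedged Python sketch for one hypothetical configuration: a cross made of a 6-cell row and a 5-cell column sharing one center cell (10 cells in total). The shape is an assumption for illustration only; the same pattern (derive the sum constraint, then search for a placement with itertools) generalizes to other configurations.

```python
from itertools import combinations

# Hypothetical configuration (the original image is unavailable): a cross made of
# one 6-cell row and one 5-cell column sharing a single center cell, 10 cells total.
# If both lines must sum to S, counting the shared center twice gives
# 2*S - center = sum(1..10) = 55, so S = (55 + center) / 2 and the center must be odd.

def feasible_sums():
    numbers = set(range(1, 11))
    total = sum(numbers)          # 55
    sums = []
    for center in numbers:
        if (total + center) % 2:  # S must be an integer, so skip even centers
            continue
        s = (total + center) // 2
        rest = numbers - {center}
        # The 5 non-center row cells must sum to S - center; check a placement exists.
        if any(sum(c) == s - center for c in combinations(rest, 5)):
            sums.append(s)
    return min(sums), max(sums)

print(feasible_sums())  # for this cross shape: (28, 32)
```

The key step that generalizes is writing the common sum as a function of the shared cells; the brute-force search then only has to check feasibility of each candidate sum.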


In general, how does a DFA know how to successfully process a string the intended way?

Suppose we have:

$$A := \{x, y, z\}$$

$$M := \text{some DFA over } A$$

$$S := xyzxyzxyz$$

Intuitively, one might say $S$ is fed to $M$ on a per-character basis.

This means that somehow we have an undisclosed mechanism that can tell where a symbol starts and ends.

One might say: simply use the maximum valid substring, similar to how lexers tokenise plaintext. To that I say: suppose instead that we defined $A$ as $$A := \{x, xx, xxx\}$$

Now we have 3 unique symbols that, as it so happens, make the maximum-valid-substring approach restrict what our $M$ can actually process, because any string longer than 2 characters will always be assumed to start with $xxx$ rather than, perhaps, $x$ and $xx$.

One way I see around this is to make a character synonymous with a symbol. That is, $x$ and $xxx$ (from $A$) are each a single character.
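To make the ambiguity concrete, here is a small Python sketch using the symbol set $\{x, xx, xxx\}$ from above. It counts every possible segmentation of a raw character string and contrasts that with the single segmentation a maximal-munch reader silently commits to:

```python
# With the "alphabet" A = {x, xx, xxx}, a raw character string can be split into
# symbols in more than one way, so a reader taking maximal matches ("maximal munch")
# silently commits to just one of the possible readings.

SYMBOLS = ("x", "xx", "xxx")

def segmentations(s):
    """Return every way to split s into symbols from SYMBOLS."""
    if not s:
        return [[]]
    result = []
    for sym in SYMBOLS:
        if s.startswith(sym):
            for rest in segmentations(s[len(sym):]):
                result.append([sym] + rest)
    return result

def maximal_munch(s):
    """Greedy split: always take the longest matching symbol."""
    out = []
    while s:
        for sym in sorted(SYMBOLS, key=len, reverse=True):
            if s.startswith(sym):
                out.append(sym)
                s = s[len(sym):]
                break
    return out

print(len(segmentations("xxxx")))  # 7 distinct readings of the same 4 characters
print(maximal_munch("xxxx"))       # ['xxx', 'x'] -- the only reading greedy matching sees
```

This is exactly the restriction described above: of the 7 readings of `xxxx`, maximal munch can never produce, say, `x` followed by `xxx`.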


MySQL source command resumes after the server process restarts

I am doing a large import (~300 GB, MyISAM) into a MySQL server, version 5.6.48, running inside a FreeBSD jail. Before starting I did:

SET GLOBAL bulk_insert_buffer_size = 1024 * 1024 * 1024;
SET GLOBAL net_buffer_length = 1000000;
SET GLOBAL max_allowed_packet = 1000000000;
SET foreign_key_checks = 0;

And then:

source file.sql 

The dump started to load, but after a while the server's RAM and swap filled up and the mysqld process exited.

What I noticed, and what surprised me, is that when the MySQL server came up again, the source command continued to load the dump, kind of "resuming" the process.

I haven't found much documentation about the source command, only the usage line from the MySQL client:

mysql> source
ERROR: Usage: \. <filename> | source <filename>

I am therefore wondering what keeps the state or tracks the inserted data, and how I could check or monitor the progress of source. I would also like to understand whether this "resume" behavior is expected and whether it can be configured.

Process and order of verification of an X.509 certificate chain

While going through the RFC 5280 Certificate Path Validation to understand how an X.509 certificate chain is validated, I found that the X.509 path processing algorithm processes the chain in order from the trust anchor to the end entity. After reading this, I am a bit confused as to how chains are validated. Is my understanding correct that the certification path validation described in the RFC is sufficient to validate an X.509 certificate chain?

The algorithm accepts nine parameters, one of which is:

 (d) trust anchor information, describing a CA that serves as a trust anchor for the certification path. The trust anchor information includes:
     (1) the trusted issuer name,
     (2) the trusted public key algorithm,
     (3) the trusted public key, and
     (4) optionally, the trusted public key parameters associated with the public key.

How does a verifier know which trust anchor to use before the path is validated?

The output of this algorithm, as stated by the RFC, is:

If path processing succeeds, the procedure terminates, returning a success indication together with final value of the valid_policy_tree, the working_public_key, the working_public_key_algorithm, and the working_public_key_parameters.

It returns the working_public_key* attributes and a success indication. Is that sufficient?

My understanding of X.509 chain validation is as follows.

For certificates supplied as Cert[1] signed by Cert[2] signed by Cert[3] where Cert[3] is a CA,

  1. Find the certificate responsible for Cert[1].AKID and extract its public key; if no such certificate can be found, exit with an invalid verdict.
  2. Verify Cert[1]'s signature and other attributes.
  3. Check whether Cert[2].SKID is in the system/loaded trust store of the client performing the operation.
  4. If it is, the chain is trusted; exit with a valid verdict.
  5. If it is not, repeat from step 1 with Cert[2].
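The steps above can be sketched as follows. This is a toy model only (dict-based certificates, stubbed signature checking, hypothetical names), not a real X.509 implementation; its point is just the leaf-to-root order, the reverse of RFC 5280's trust-anchor-to-leaf processing:

```python
# Minimal sketch of the leaf-to-root walk: certificates are plain dicts and
# cryptographic verification is a caller-supplied stub, since only the
# path-building order matters here.

def build_and_check_path(leaf, cert_store, trust_anchors, verify_signature):
    """Walk from the end-entity cert towards a trust anchor.

    cert_store maps subject -> certificate (untrusted intermediates),
    trust_anchors maps subject -> trusted public key,
    verify_signature(cert, issuer_key) -> bool is supplied by the caller.
    """
    cert = leaf
    path = [leaf]
    while True:
        issuer = cert["issuer"]
        if issuer in trust_anchors:                      # steps 3/4: anchor found
            return verify_signature(cert, trust_anchors[issuer]), path
        parent = cert_store.get(issuer)                  # step 1: find the issuer cert
        if parent is None:
            return False, path                           # no chain to any anchor
        if not verify_signature(cert, parent["key"]):    # step 2: check the signature
            return False, path
        cert, path = parent, path + [parent]             # step 5: repeat with the parent

# Toy run: a leaf signed by "Inter", which is signed by the trusted "Root".
fake_verify = lambda cert, key: True                     # stand-in for real crypto
leaf = {"subject": "leaf.example", "issuer": "Inter", "key": "k1"}
inter = {"subject": "Inter", "issuer": "Root", "key": "k2"}
ok, path = build_and_check_path(leaf, {"Inter": inter}, {"Root": "root-key"}, fake_verify)
print(ok, [c["subject"] for c in path])  # True ['leaf.example', 'Inter']
```

Note that this walk discovers the trust anchor as it goes, whereas the RFC's algorithm takes the anchor as an input parameter, which is exactly the discrepancy the question is about.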

If my understanding is correct, why is the RFC version written the way it is?

Reasons why I think the RFC mentions the chain is processed in the reverse order:

  1. The prospective certificate path section 6.1

    (a) for all x in {1, …, n-1}, the subject of certificate x is the issuer of certificate x+1;

    (b) certificate 1 is issued by the trust anchor;

    This states that the certificates are ordered from the trust anchor to the end entity, which is also supported by Subject(x) == Issuer(x + 1). Meaning, Cert[n] is the end-entity cert.

  2. Certificate processing in section 6.1.3.

     The basic path processing actions to be performed for certificate i (for all i in [1..n]) are listed below.

    This also processes the certificates starting from certificate 1, i.e. the one issued by the trust anchor.

  3. In the section for Preparation for Certificate i+1,

    To prepare for processing of certificate i+1, perform the following steps for certificate i:

    (c)  Assign the certificate subject name to working_issuer_name. 

The subject of Cert[i] becomes the expected issuer of Cert[i + 1].

  4. In the section for Wrap-Up Procedure

    To complete the processing of the target certificate, perform the following steps for certificate n:

This also mentions Cert[n] as the target certificate.

Scheduling of process manufacturing with setup times


Process manufacturing (in contrast to discrete manufacturing) focuses on the production of continuous goods such as oil. The planning is typically solvable by means of linear programming; some constraints can be introduced that turn it into a MILP.

Problem Formulation

The problem consists of

  • A sequence of consecutive time intervals $t\in\{1,\dots,n_t\}$, each with start and end $(s_t,e_t)$ and length $l_t=e_t-s_t$. Consecutive means $e_{t}=s_{t+1}$ for all $t\in\{1,\dots,n_t-1\}$.
  • A list of the types of goods being produced: $j\in \{1,\dots,n_j\}$
  • The demand for each type of good per time interval, $d_{j,t}$.
  • A list of production lines $i\in\{1,\dots,n_i\}$
  • The availability of production lines per time interval, $a_{i,t}$. $a_{i,t}$ is binary: available or not.
  • The manufacturing speed per production line per type of goods, $v_{i,j}$.
  • The setup time of a production line from one type of goods to another, $u_{i,j,j'}$.
  • The price for using a production line (leasing based), counted per hour, $c_{i}$

The goal is to plan the production lines so the demand is covered and the price for leasing is minimal.


Additional constraints:

  • The setup time can be shorter than, longer than, or equal to the length of the intervals.
  • It is acceptable for a production line not to work the whole time interval if the demand has been met sooner.
  • The setup for the production of another good can start at any time, not necessarily at the beginning of an interval.


Example

There are two production lines, i.e., $n_i = 2$, and there are two types of goods, i.e. $n_j=2$.

We have two intervals, i.e. $n_t=2$, each with a length of 1 hour. Say one starts at 1 pm, the second at 2 pm.

The demand is:

  • $ d_{1,1}=1.1$
  • $ d_{1,2}=1$
  • $ d_{2,1}=0.5$
  • $ d_{2,2}=1$

The costs of running the production lines are:

  • $c_{1} = c_{2} = 1$ USD/hour

All possible setup times are twelve minutes, i.e.:

  • $ u_{i,j,j’}=0.2$ for all $ i,j,j’$ where $ j\neq j’$ .

The speeds are:

  • $v_{1,1}=1.1$
  • $v_{1,2}=1.5$
  • $v_{2,1}=1$
  • $v_{2,2}=1$

Obviously, the demand is met for a total cost of $4$ if the first line produces the first type of goods in both intervals and the second line produces the second type.

However, it might be tempting to switch them after the first interval. If there were no setup time needed, the cost would be $1+1+1+0.5/1.5=3.33$, which is better. However, this is not possible because of the setup time of the second production line.


What is the algorithm to schedule this manufacturing process optimally?

An answer is welcome even if it only outlines the way and approach (MILP, SAT, CSP, …).

Ideas so far

  • If the length of the intervals were fixed, say 1 hour, and the setup time were defined in terms of these units, say 2 hours, then it might be solvable by SAT/CSP.
  • Another idea is to use an evolutionary algorithm: a plan would consist of a sequence of activities, with mutations (add an activity, delete an activity, prolong an activity) and crossover (mixing two plans in a random way).
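As a starting point for the MILP route, here is one possible simplified formulation. It assumes each line produces at most one type of goods per interval and that a setup is completed within the interval in which the new good is produced, which is a restriction of the problem as stated (setups spanning interval boundaries are not modeled). With binary $y_{i,j,t}$ (line $i$ is set up for good $j$ in interval $t$), switch indicators $z_{i,j,j',t}$, and production amounts $p_{i,j,t}\ge 0$:

$$
\begin{aligned}
\min\ & \sum_{i,t} c_i \sum_j \frac{p_{i,j,t}}{v_{i,j}} \\
\text{s.t.}\ & \sum_i p_{i,j,t} \ge d_{j,t} && \forall j,t \\
& p_{i,j,t} \le v_{i,j}\, l_t\, y_{i,j,t} && \forall i,j,t \\
& \sum_j y_{i,j,t} \le a_{i,t} && \forall i,t \\
& \sum_j \frac{p_{i,j,t}}{v_{i,j}} + \sum_{j \ne j'} u_{i,j,j'}\, z_{i,j,j',t} \le l_t && \forall i,t \\
& z_{i,j,j',t} \ge y_{i,j,t-1} + y_{i,j',t} - 1,\quad z_{i,j,j',t} \ge 0 && \forall i,\ t>1,\ j\ne j'
\end{aligned}
$$

The objective charges each line only for the hours it actually produces, the second-to-last constraint makes a setup consume capacity of the interval, and the last constraint forces the switch indicator on whenever the assigned good changes between consecutive intervals. Dropping the one-good-per-interval restriction and allowing setups to straddle interval boundaries would require continuous start-time variables and a richer (e.g. event-based) formulation.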