Why does the DFS in Dinic's algorithm find a blocking flow?

I came upon this implementation of the DFS in Dinic's algorithm, written in Python:

def dfs(c, f, current, capacity):
    tmp = capacity  # What's the purpose of that?
    # we want to get to the sink, but we want it to be a blocking flow path
    if current == NROW - 1:
        return capacity
    for i in range(NROW):
        is_next = levels[i] == levels[current] + 1
        residual_capacity = c[current][i] - f[current][i]
        has_more_capacity = residual_capacity > 0
        if is_next and has_more_capacity:
            min_capacity_so_far = min(tmp, residual_capacity)
            flow = dfs(c, f, i, min_capacity_so_far)
            f[current][i] += flow
            f[i][current] -= flow
            tmp -= flow  # Why do we do that
    return capacity - tmp  # Why do we return capacity - tmp

How do we know that this DFS finds a blocking flow? Also, I can't seem to understand the usage of the tmp variable.
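Here is my tentative reading of the same routine, written out as comments. These annotations are my guesses, not something I have verified, which is why I'm asking:

# My tentative annotation of the routine above (guesses, unverified):
def dfs(c, f, current, capacity):
    # capacity = bottleneck of the path from the source down to `current`;
    # tmp = how much of that bottleneck is still unused after the flow
    # already pushed through earlier neighbors of `current`.
    tmp = capacity
    if current == NROW - 1:  # reached the sink: the whole bottleneck fits
        return capacity
    for i in range(NROW):
        # only advance along the level graph built by the BFS phase
        if levels[i] == levels[current] + 1 and c[current][i] - f[current][i] > 0:
            # never push more than what is left of the bottleneck
            flow = dfs(c, f, i, min(tmp, c[current][i] - f[current][i]))
            f[current][i] += flow
            f[i][current] -= flow
            tmp -= flow  # that much of the bottleneck is now used up
    # total flow actually pushed out of `current` in this call
    return capacity - tmp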

Thanks in advance!

Cloudflare Full Strict HTTPS flow

I would like to understand Cloudflare's Full (strict) SSL flow. When a user types https://example.com, the request first goes to Cloudflare's web servers. So how does Cloudflare decrypt the HTTPS data before sending it on to the origin host, without triggering a browser warning? How does Cloudflare match the certificate data between the browser and Cloudflare's servers? After all, the main certificate sits on the origin web server, so I would expect only the origin server to be able to decrypt HTTPS data sent to it.
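For example, this is how I inspected which certificate the browser-facing connection actually presents, with example.com as a stand-in for a Cloudflare-proxied site:

import socket
import ssl

hostname = "example.com"  # stand-in for a Cloudflare-proxied site
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # For a site proxied by Cloudflare, the subject/issuer printed here
        # belong to a Cloudflare edge certificate, not the origin's cert.
        print("subject:", cert["subject"])
        print("issuer:", cert["issuer"])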

Difference between data flow and control flow

I am trying to read a paper, and I can't understand the difference between data flow and control flow. My guess is that control flow refers to the steps the OS or hardware takes to execute statements, whereas data flow refers to the passage of data that results from executing those statements. The paper is

https://www.usenix.org/system/files/osdi18-cui.pdf, titled “REPT: Reverse Debugging of Failures in Deployed Software”.

I have a problem with the following passage:

REPT reconstructs the execution history with high fidelity by combining online lightweight hardware tracing of a program’s control flow with offline binary analysis that recovers its data flow.

Kindly explain to me the difference between data flow and control flow.
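To make my confusion concrete, here is a small example with where I think each notion shows up (my own reading, which may be wrong):

def f(a, b):
    if a > 0:        # control flow: decides which statement runs next
        x = b + 1    # data flow: the value of b flows into x
    else:
        x = b - 1
    return x * 2     # data flow: the value of x flows into the result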

Zulfi.

Is an “inverted” Device Authorization Grant flow secure for authenticating a daemon/service native app to a web server?

I am working on a hobby project which will involve a web server (hosted and owned by me) and a native app (which will communicate with the web server periodically) that an end-user can install via a deb/rpm package. This native app has no traditional UI (aside from the command line) and can be installed in browser-less environments. Additionally, I'm trying to avoid registering custom URL schemes. As such, I do not wish to use redirect flows, if possible.

The web server and the native app will both be open source and the code will be visible to everyone, but I suppose it shouldn’t matter in the context of authentication. However, I wanted to point that out in case it matters.

So far, during my research, I’ve come across two mechanisms which seem suitable for what I am trying to achieve:

  • Resource Owner Password Credentials Grant
  • Device Authorization Grant

Unfortunately, I’ve come across a lot of articles and blogs stating that Resource Owner Password Credentials Grant should no longer be used. Not sure how much weight I should give these articles, but I’m leaning towards Device Authorization Grant for now.

From my understanding, one of the steps involved in this grant is that the client continuously polls the server to check whether the user has authorized the client. However, instead of having the client poll the server, why not flip the place where the code is entered?

In other words, instead of the client/device displaying a code to the user and the user then entering the code on the server, why not display the code on the server and have the user enter it into the client? That way the client doesn't have to needlessly poll the server. Does this not achieve the same thing? I'm really not sure, though, and I want to ensure I'm not missing something before I implement it.

This is how I envision the general flow for users using my project:

  1. The user would register an account on my site (i.e., the web server). This is just traditional username and password authentication.
  2. The user can then download and install the deb/rpm package which contains my native app. It should be noted that there's obviously nothing preventing the user from installing the package without registering an account on the server. The whole point of this authentication is to create a link between the account on the server and the native app.
  3. Prior to enabling the daemon/service functionality of the native app, the user will need to authenticate the native app to the server.
  4. To do so, the user can log into the server (using their regular username/password creds) and generate a temporary token.
  5. The user can then use the CLI functionality of the native app to use this temporary token. For example, the user may type my_app_executable authenticate, where my_app_executable is the binary executable and authenticate is the parameter.
  6. This will prompt the user to enter their username and the temporary token.
  7. The app will then send the entered username and temp token to the server, which will validate this combination. If it's valid, the server will send an access token back to the app.
  8. The app can then use this access token to communicate with the server. Authentication complete. (A rough sketch of steps 5-8 follows below.)
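Roughly, this is how I picture the client side of steps 5-8; the endpoint paths and the response shape are placeholders I made up, nothing final:

import getpass

import requests

SERVER = "https://example.com"  # placeholder for my web server

username = input("Username: ")
temp_token = getpass.getpass("Temporary token: ")

# Steps 6-7: send the username + temporary token; the server validates
# the pair and, if valid, returns an access token.
resp = requests.post(
    f"{SERVER}/api/authenticate",  # hypothetical endpoint
    json={"username": username, "temp_token": temp_token},
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # hypothetical response shape

# Step 8: every later request carries the access token.
headers = {"Authorization": f"Bearer {access_token}"}
resp = requests.get(f"{SERVER}/api/ping", headers=headers, timeout=10)  # hypothetical endpoint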

Based on this, I have a couple of questions:

  1. Does this flow seem secure? Is there an aspect of this that I’m overlooking?
  2. Is it okay to more or less permanently encrypt and persist this access token on the filesystem? If the user turns off the native app for months and then they turn it back on, I would like it to function normally without making the user authenticate again. I suppose I’ll need to implement a way to revoke an access token, and I’m thinking about tracking this in the database on the server side. This would mean that for each HTTP request from the app to the server, the server will need to make a DB check to ensure the access token hasn’t been revoked.
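For question 2, the per-request revocation check I'm imagining looks roughly like this; the DB helper is hypothetical:

def is_request_authorized(headers):
    # headers: a dict-like view of the incoming HTTP request headers
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    row = find_access_token(token)  # hypothetical DB lookup, runs on every request
    return row is not None and not row.revoked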

Should a refresh token be linked to a single access token, and what is the ideal refresh flow?

I've been reading about access tokens and refresh tokens, and am implementing them on my own site. Right now, based on an example codebase on GitHub, a refresh token of random characters is created, stored in the database with some details such as the user id and expiry time, and returned alongside the JWT access token. The refresh route accepts both the old access token and the refresh token, as well as some other request information (client id and IP). As long as the refresh token exists in the database and is not expired, it is assumed to be valid: the user is granted a new access token (generated from the payload of the old one), and the refresh token itself is then reissued.
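To make sure I'm describing that accurately, here is roughly what the refresh route does as I understand it; all function and helper names below are mine, not from the repo:

import datetime

def refresh_route(old_access_token, refresh_token, client_id, ip):
    # Look the refresh token up in the database (hypothetical helper).
    row = find_refresh_token(refresh_token)
    if row is None or row.expires_at < datetime.datetime.utcnow():
        raise PermissionError("invalid or expired refresh token")
    # The new access token is built from the OLD token's payload;
    # nothing ties this particular refresh token to that access token.
    payload = decode_jwt_ignoring_expiry(old_access_token)  # hypothetical helper
    new_access_token = sign_jwt(payload)                    # hypothetical helper
    new_refresh_token = reissue_refresh_token(row)          # rotation; hypothetical helper
    return new_access_token, new_refresh_token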

I then read this article, which notes among other things that the refresh route should not require the old token in the request payload (not being tied to a single access token).

Based on this, I have a few questions:

  1. Shouldn’t the refresh token be linked to a specific access token? If my access token has a jti, should I not store this in the database with the refresh token, so that a single refresh token can only be used for a single access token? If a refresh token is stolen, and it is not linked to an access token, this token can be used to generate a new access token regardless of what the old access token looks like. Sure, if it’s being rotated and an expired refresh token is used (i.e. real user attempts to refresh their expired access token), I can detect that the refresh token was breached and invalidate all of the user’s refresh tokens, but until then the attacker will be able to continue requesting new access tokens whenever they expire and have access to the user’s account.
  2. If an access token should not be sent to the refresh route (invalidating question 1), how is the payload for the new access token sourced? Is this retrieved fresh from the database? Should this happen anyway so that any changes made to the database are at most accessTokenTtl stale?
  3. What about the other information stored alongside the refresh token? In the example GitHub repo, the client id and user ip address are stored with the refresh token but not used for anything. Should a refresh token only be valid if the same ip address and client id are provided when a refresh is attempted? What if a user is on their mobile phone, for example, where their ip address can change quite frequently? This defeats the purpose of a long-lasting (weeks/months, not minutes or hours) refresh token.
  4. Is a refresh token of random characters sufficient? Can it be a JWT itself? I didn't want to store the refresh token in plain text in the database, but in order to find the right token it needs to be stored alongside a unique identifier which is also part of the payload. That is, I could make the refresh token a JWT with a jti that is the id of the corresponding database row, and a random payload. This is sent to the user as a normal JWT, but it is hashed using bcrypt before being stored in the database. Then, when the refresh token is used at the refresh route, I can validate the token provided by the user, grab the jti, find the hashed token in the database, and compare them using bcrypt as you would with a user's password. (A rough sketch of this idea follows below.)
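For question 4, this is roughly the scheme I have in mind, sketched with PyJWT and bcrypt. One wrinkle: bcrypt only looks at the first 72 bytes of its input and a JWT is longer than that, so I pre-hash the token with SHA-256 and base64 first, which I believe is the workaround the bcrypt docs suggest. The DB helpers are hypothetical:

import base64
import hashlib
import secrets

import bcrypt
import jwt  # PyJWT

SECRET = "refresh-token-signing-key"  # placeholder

def _prehash(token):
    # bcrypt truncates at 72 bytes; SHA-256 + base64 keeps the whole token
    # significant and avoids NUL bytes in the input.
    return base64.b64encode(hashlib.sha256(token.encode()).digest())

def issue_refresh_token(row_id):
    # jti = id of the corresponding database row; the payload is random
    # so the stored hash cannot be recreated from public information.
    token = jwt.encode(
        {"jti": str(row_id), "rnd": secrets.token_urlsafe(32)},
        SECRET,
        algorithm="HS256",
    )
    store_refresh_hash(row_id, bcrypt.hashpw(_prehash(token), bcrypt.gensalt()))  # hypothetical
    return token

def verify_refresh_token(token):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # validates signature
    stored_hash = load_refresh_hash(claims["jti"])  # hypothetical
    return bcrypt.checkpw(_prehash(token), stored_hash)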

Data carried during TCP flow control

I am learning about TCP flow control and came across a question about how much data is carried by the 4th segment in the exchange shown below. The answer is supposed to be 1200 (2230 − 1030), but I don't quite understand why.

By definition, I know that the acknowledgement number tells the other side what it expects next. Thus segment 5 is the server acknowledging everything before byte 2230 and telling the client that it expects byte 2230 next, while segment 3 is the client acknowledging everything before byte 3848 and telling the server that it expects byte 3848 next. But I still don't understand why we're considering segment 5 and segment 2. If an ACK is only about what is “expected”, how do we know how much data is being carried?
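My current reading of the arithmetic, which I'm not sure about: if segment 2's ACK of 1030 means the server had received everything before byte 1030, and segment 5's ACK of 2230 means that after segment 4 it had received everything before byte 2230, then segment 4 must have carried bytes 1030 through 2229, i.e. 2230 − 1030 = 1200 bytes.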

[figure: diagram of the TCP segment exchange with sequence and acknowledgement numbers]

Maximum edge-disjoint flow

Consider the case where you have two types of flow, let's say “red” flow and “blue” flow. You want to send $k_r$ red flow and $k_b$ blue flow through a DAG $G$ from a source $s$ to a sink $t$ in such a way that no edge carries both red and blue flow. Is there an efficient way to determine whether an assignment of $k_r$ red and $k_b$ blue flow exists? If so, can it be extended to multiple “colors”?

An extremely naive solution would be to take every subset of edges $E'$ and check whether or not you can send $k_r$ flow through $G[E']$ and $k_b$ flow through $G[E \setminus E']$.
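For concreteness, the brute force I have in mind would look something like this; it is exponential and only feasible for tiny graphs, uses networkx for the max-flow subcalls, and assumes every edge has a capacity attribute:

import itertools

import networkx as nx

def disjoint_flows_exist(G, s, t, k_r, k_b):
    # Try every way of splitting the edge set into red and blue parts.
    edges = list(G.edges)
    for r in range(len(edges) + 1):
        for red in itertools.combinations(edges, r):
            blue = set(edges) - set(red)
            if (max_flow_on(G, red, s, t) >= k_r
                    and max_flow_on(G, blue, s, t) >= k_b):
                return True
    return False

def max_flow_on(G, edge_subset, s, t):
    # Max s-t flow using only the given subset of G's edges.
    H = nx.DiGraph()
    H.add_nodes_from(G.nodes)  # keep s and t even if they become isolated
    for u, v in edge_subset:
        H.add_edge(u, v, capacity=G[u][v]["capacity"])
    return nx.maximum_flow_value(H, s, t)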

Combining User Context in Machine-to-Machine OAuth2 Client Credentials Flow

I have a REST API that is used by two separate applications, and it authenticates them via the M2M OAuth2 Client Credentials flow.

[figure: architecture diagram of the two client applications and the common REST API]

One of the two applications is an automation service without user context. The second is a REST API where users authenticate with the OAuth2 Implicit flow.

Now I need to include the user context in my common REST API too, since some information should only be shared with certain users.

What is a secure strategy to implement that scenario with OAuth2? I thought I could just include the user (or a fixed string in the case of the automation service) in the access token from the Client Credentials flow, but that doesn't seem possible.
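Concretely, this is roughly what I had in mind, sketched with PyJWT; the claim names are my own choice, not from any spec:

import time

import jwt  # PyJWT

SIGNING_KEY = "shared-secret"  # placeholder

def issue_access_token(client_id, user_id):
    # client_id identifies the calling application; "sub" would carry the
    # user context ("automation" as a fixed string for the service that
    # has no real user behind it).
    claims = {
        "client_id": client_id,
        "sub": user_id,
        "exp": int(time.time()) + 3600,
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")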