OpenID Connect with user bound roles and M2M access

I’m trying to get my head straight about how to properly design an OpenID Connect provider and the roles to use with it. I understand the basics of scopes, claims and the different flows one can use. However, I’m trying to get my head around how I should handle the cases where I want machine-to-machine (M2M) access to all resources, while an end user should only have access to his/her own data.

My question is more about how I should handle roles: is it overkill to have roles such as:

  • view_company_data
  • view_all_data

An example could be providing a public API to access all data, e.g. for collaborating companies, while also restricting specific users to only the data created by them. In my case that would be a government body that wants access to all data, whilst the business owners should only have access to their own data.

I have an authentication provider, along with several resource servers. The business owners access their data through our client with read/write permission only for their own entity, and the government body wants to access all the data through our APIs.

I wish to have all access control in a central entity, so generating access tokens on each separate resource server in addition to the default JWT tokens from the authentication server seems like a bad idea. I’d rather handle it all from the authentication server.

Also, a user should be able to generate these full-access tokens, given that they have a global administrator role.
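To make the intended behaviour concrete, here is a minimal sketch (Python, with the hypothetical role names from the list above carried in a `roles` claim) of the check I imagine each resource server doing against a centrally issued, already signature-verified token:

```python
# Minimal sketch: resource-server-side authorization, assuming the
# central authentication server mints a "roles" claim into the access
# token. Role names are the hypothetical ones from the question; the
# token is assumed to be signature-verified already.
VIEW_ALL = "view_all_data"          # M2M clients / government body
VIEW_COMPANY = "view_company_data"  # business owners: own data only

def can_read(claims: dict, resource_owner: str) -> bool:
    """Return True if the token's subject may read the resource."""
    roles = claims.get("roles", [])
    if VIEW_ALL in roles:
        return True
    if VIEW_COMPANY in roles:
        return claims.get("sub") == resource_owner
    return False

# Example claims as the authentication server might issue them:
m2m_claims = {"sub": "gov-client", "roles": ["view_all_data"]}
user_claims = {"sub": "company-42", "roles": ["view_company_data"]}

assert can_read(m2m_claims, "company-7")       # full access
assert can_read(user_claims, "company-42")     # own data
assert not can_read(user_claims, "company-7")  # someone else's data
```

This way the resource servers stay dumb: they only read claims that the central server decided on.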

So, what would be the right thing to do here?

Why the decision tree method for a lower bound on finding a minimum doesn’t work

(Motivated by this question. Also I suspect that my question is a bit too broad)

We know the $\Omega(n \log n)$ lower bound for sorting: we can build a decision tree where each inner node is a comparison and each leaf is a permutation. Since there are $n!$ leaves, the minimum tree height is $\Omega(\log (n!)) = \Omega(n \log n)$.
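(For completeness, the step $\log(n!) = \Omega(n \log n)$ can be seen without Stirling’s approximation:

$$\log(n!) = \sum_{k=1}^{n} \log k \;\ge\; \sum_{k=\lceil n/2 \rceil}^{n} \log k \;\ge\; \frac{n}{2} \log\frac{n}{2} = \Omega(n \log n).)$$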

However, this doesn’t work for the following problem: find the minimum in an array. For this problem, the results (the leaves) are just the indices of the minimum element. There are $n$ of them, and therefore the reasoning above gives an $\Omega(\log n)$ lower bound, which is obviously an understatement.

My question: why does this method work for sorting but not for finding the minimum? Is there some deeper intuition, or does "it just happen" that we were "lucky" that sorting has so many possible answers?

I guess the lower bound from the decision tree makes perfect sense: we really can ask yes/no questions such that we need only $O(\log n)$ answers: namely, we can binary-search for the desired index. My question still remains.
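To illustrate that last remark, here is a small simulation (Python; the oracle is simulated, which is exactly the catch, since a comparison-based algorithm cannot ask such questions) showing that $O(\log n)$ yes/no answers pin down the index of the minimum:

```python
def find_min_index(arr):
    """Locate the argmin with O(log n) yes/no questions.

    The "oracle" answers questions of the form "is the argmin < mid?".
    We simulate its secret with index(min(...)); the point is only that
    ceil(log2 n) such answers identify the index, matching the (too
    weak) Omega(log n) bound obtained by counting the n possible leaves.
    """
    target = arr.index(min(arr))   # the oracle's secret answer
    lo, hi = 0, len(arr)           # invariant: target lies in [lo, hi)
    questions = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        questions += 1             # one yes/no question asked
        if target < mid:           # the oracle's answer
            hi = mid
        else:
            lo = mid
    return lo, questions

print(find_min_index([7, 3, 9, 1, 4, 8, 2, 6]))  # (3, 3): 3 questions for n=8
```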

Scheduling jobs online on 3 identical machines – a lower bound of 5/3

Consider the online scheduling problem with $3$ identical machines. Jobs of arbitrary size arrive online one after another and need to be scheduled on one of the $3$ machines without ever moving them again.

How can I show that there can’t be any deterministic online algorithm which achieves a competitive ratio of $c < \frac{5}{3}$?

This should be solvable by just giving some instance $\sigma$ and arguing that no deterministic algorithm can do better. The same can easily be done for $2$ machines and $c < \frac{3}{2}$. Sadly, I can’t find any solution to this (more or less) textbook problem.
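For reference, here is the $2$-machine adversary argument alluded to above (my reconstruction; the $3$-machine bound of $\frac{5}{3}$ should come from a similar but longer adversary sequence):

  • Two jobs of size $1$ arrive. If the algorithm puts both on the same machine, the adversary stops: the makespan is $2$ while the optimum is $1$, giving ratio $2$.
  • If the algorithm puts them on different machines, a job of size $2$ arrives. The makespan is now at least $1 + 2 = 3$, while the optimum is $2$ (both unit jobs on one machine, the large job on the other), giving ratio $\frac{3}{2}$.

Either way, no deterministic algorithm beats $\frac{3}{2}$.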

Lower bound on comparison-based sorting

I have a question from one of the exercises in CLRS.

Show that there is no comparison sort whose running time is linear for at least half of the $n!$ inputs of length $n$. What about a fraction of $1/n$ of the inputs of length $n$? What about a fraction $1/2^n$?

I have arrived at the step where, for a linear-time sorter, there will be at most $2^n$ leaves in the decision tree, which is smaller than the $n!$ leaves required, so this is a contradiction. But I am unsure how to write the proof out formally and how to extend it to the other fractions. The exercise also says "for at least half of the $n!$ inputs of length $n$", and I do not quite understand how that affects the number of leaves in the decision tree, since any input of length $n$ still has $n!$ possible permutations.
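If I read the exercise correctly, the counting step would be formalized like this (under the usual reading that "linear" means at most $cn$ comparisons, so the relevant leaves sit at depth at most $cn$): a decision tree that sorts at least $n!/2$ of the inputs in linear time must have at least $n!/2$ leaves at depth at most $cn$, hence

$$2^{cn} \ge \frac{n!}{2} \quad\Longrightarrow\quad cn \ge \log(n!) - 1 = \Omega(n \log n),$$

which is false for large $n$. For the other fractions, replacing $n!/2$ by $n!/n$ or $n!/2^n$ only subtracts $\log n$ or $n$ from the right-hand side, which does not affect the $\Omega(n \log n)$ growth.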

Can an algorithm’s complexity be lower than its tight lower bound / higher than its tight upper bound?

The worst-case time complexity of a given algorithm is $\Theta(n^3 \log n)$.

  • Is it possible that the worst-case time complexity is $\Omega(n^2)$?
  • Is it possible that the worst-case time complexity is $O(n^4)$?
  • Is it possible that the average time complexity is $O(n^4)$?

IMO it is possible as long as you control the constant $c$, but then what’s the point of mentioning any bound other than the tight bounds?
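For what it’s worth, the containments themselves seem immediate from the definitions: for $n \ge 2$,

$$n^2 \le n^3 \log n \le n^4 \quad\Longrightarrow\quad T_{\text{worst}}(n) = \Omega(n^2) \text{ and } T_{\text{worst}}(n) = O(n^4),$$

and since the average case is bounded above by the worst case, it is $O(n^4)$ as well. The non-tight bounds would then simply be less informative, not wrong.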

Tight upper bound for forming an $n$-element Red-Black Tree from scratch

I learnt that in an order-statistic tree (an augmented Red-Black Tree in which each node $x$ contains an extra field denoting the number of nodes in the sub-tree rooted at $x$), finding the $i$th order statistic can be done in $O(\lg(n))$ time in the worst case. In the case of an array representing the dynamic set of elements, finding the $i$th order statistic can be achieved in $O(n)$ time in the worst case [where $n$ is the number of elements].

Now I felt like finding a tight upper bound for forming an $n$-element Red-Black Tree, so that I could say which alternative is better: "maintain the set of elements in an array and perform each query in $O(n)$ time", or "maintain the elements in a Red-Black Tree (the formation of which takes $O(f(n))$ time, say) and then perform each query in $O(\lg(n))$ time".


A very rough analysis is as follows: inserting an element into an $n$-element Red-Black Tree takes $O(\lg(n))$ time, and there are $n$ elements to insert, so it takes $O(n\lg(n))$ time. This analysis is quite loose, since when there are only a few elements in the Red-Black Tree, the height is quite small, and so is the time to insert into the tree.

I attempted a more detailed analysis as follows (but failed):

Suppose that while inserting the $j = i+1$st element, the height of the tree is at most $2\lg(i+1)+1$. For an appropriate constant $c$, the total running time satisfies

$$T(n)\leq \sum_{j=1}^{n}c\,(2\lg(i+1)+1)$$

$$= c\sum_{i=0}^{n-1}(2\lg(i+1)+1)$$

$$= c\left[\sum_{i=0}^{n-1}2\lg(i+1)+\sum_{i=0}^{n-1}1\right]$$

$$= 2c\sum_{i=0}^{n-1}\lg(i+1)+cn\tag{1}$$

Now

$$\sum_{i=0}^{n-1}\lg(i+1)=\lg(1)+\lg(2)+\lg(3)+\cdots+\lg(n)=\lg(1\cdot 2\cdot 3\cdots n)=\lg(n!)\tag{2}$$

Now $$\prod_{k=1}^{n}k\leq n^n, \text{ which is a very loose upper bound}\tag{3}$$

Using $(3)$ in $(2)$ and substituting the result into $(1)$, we get $T(n)=O(n\lg(n))$, which is the same as the rough analysis…

Can I do anything better than $(3)$?


All the nodes referred to are the internal nodes in the Red-Black Tree.
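Not an answer, but a quick numerical sanity check (Python) of how loose $(3)$ actually is, comparing the exact sum from $(2)$ with the $n\lg(n)$ bound it yields:

```python
import math

# Compare the exact sum in (2), i.e. lg(n!), with the n*lg(n) upper
# bound obtained from the loose estimate (3): prod k <= n^n.
for n in (10, 100, 1000, 10_000):
    exact = sum(math.log2(i + 1) for i in range(n))  # = lg(n!)
    loose = n * math.log2(n)                         # via (3)
    print(f"n={n:>6}  lg(n!)={exact:10.1f}  n*lg(n)={loose:10.1f}  "
          f"ratio={exact / loose:.3f}")
```

The ratio tends to $1$ as $n$ grows, which suggests $(3)$ is loose only in lower-order terms, so any improvement would not change the $O(n\lg(n))$ bound asymptotically.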

Intuition for the lower bound of $n-1$ comparisons for finding the minimum of $n$ (distinct) elements, as dealt with in CLRS

I was going through the text Introduction to Algorithms by Cormen et al., where there was a discussion of the fact that finding the minimum of a set of $n$ (distinct) elements with $n-1$ comparisons is optimal, as we cannot do better. This means we need to show that the running time of any algorithm which finds the minimum of a set of $n$ elements is $\Omega(n)$.

This is what the text says to justify the lower bound.

We can obtain a lower bound of $n-1$ comparisons for the problem of determining the minimum. Think of any algorithm that determines the minimum as a tournament among the elements. Each comparison is a match in the tournament in which the smaller of the two elements wins. Observing that every element except the winner must lose at least one match, we conclude that $n-1$ comparisons are necessary to determine the minimum.

Now I could make the thing out in my own way as follows:

[Figure: a top-down tournament of pairwise comparisons]

What I have done is a top-down comparison, but by their words "Observing that every element except the winner must lose at least one match, we conclude that $n-1$ comparisons are necessary to determine the minimum", the authors seem to be pointing to some bottom-up approach which unfortunately I cannot make out.

How does

"every element except the winner must lose at least one match" $\implies$ "$n-1$ comparisons are necessary to determine the minimum"?