Access Microsoft APIs using Azure AD-App (OAuth2)

I’m trying to use an Azure AD application (using OAuth2) to access another tenant’s Microsoft API data (Graph API, Storage API, etc.).

My question is: is it possible to use an app created in the global cloud to authorize and fetch data from a tenant that is in another national cloud (US Gov/Germany/China)?

I was able to successfully fetch another tenant’s data by setting the app to “multi-tenant”.

As a summary: I want to create an app in the global cloud and ask a user from a tenant in another national cloud to authorize my app, and then fetch data once he/she has authorized it.
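For context, each national cloud has its own Azure AD authorization endpoint. The following is a minimal sketch of how the authorize URL differs per cloud; the endpoint hostnames are the commonly documented ones, but verify them (and whether cross-cloud consent is allowed at all) against Microsoft’s national-cloud documentation before relying on this.

```python
from urllib.parse import urlencode

# Assumed Azure AD endpoint hosts per cloud -- verify against Microsoft's
# national-cloud documentation before relying on these values.
AUTHORITY_HOSTS = {
    "global": "login.microsoftonline.com",
    "usgov": "login.microsoftonline.us",
    "germany": "login.microsoftonline.de",
    "china": "login.partner.microsoftonline.cn",
}

def build_authorize_url(cloud: str, tenant: str, client_id: str, redirect_uri: str) -> str:
    """Build the OAuth2 authorization-code URL for a given cloud and tenant."""
    host = AUTHORITY_HOSTS[cloud]
    params = urlencode({
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        # Note: the Graph resource itself also differs per cloud
        # (e.g. graph.microsoft.us for US Gov); global shown here.
        "scope": "https://graph.microsoft.com/.default",
    })
    return f"https://{host}/{tenant}/oauth2/v2.0/authorize?{params}"
```

The sketch only shows that the endpoints differ per cloud; whether a global-cloud app registration can be consented to by a national-cloud tenant is a policy question answered by Microsoft’s documentation, not by the URL shape.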

Python and REST APIs for my Batch Job Executor software

I want to expose Python APIs for my Batch Job Executor software. Batch jobs run periodically; the periodicity could be, for example: run this job every 5 minutes, run this job every hour, or run a particular job only on weekdays at such-and-such a time. Each run of a job has a name and an ID associated with it, and each run writes its state to the file system at a path like “/dateddirectory/name/jobid/”. Now, I want to develop Python APIs for this Batch Job Executor software so that I can query information like

  • return jobs that ran in a given duration,
  • return jobs in the failed state,
  • return checkpoints/commands of a particular job and many more.

I have all this information available in the file system, hence I have written a simple “get_jobs(names, start_time = UNIX_START_TIME, end_time = TODAY)” method (at the persistence layer) that takes job names, start_time and end_time as input arguments (all optional), reads the corresponding jobs from the file system and returns the matched jobs.

I have a Job class as defined below. An instance of the Job class represents a particular job run. A job can have multiple checkpoints, and each checkpoint can have multiple commands.

Job
  - job_name
  - job_id
  - start_time
  - end_time
  - status
  - checkpoints
  - ...

Checkpoint
  - checkpoint_name
  - checkpoint_id
  - start_time
  - end_time
  - status
  - commands
  - ...

Command
  - command_name
  - command_id
  - start_time
  - end_time
  - status
  - exec_str
  - ...

I am thinking of providing the following method in my user API module, “get_jobs(name, state, start_time, end_time, category)”, which simply wraps the persistence-layer “get_jobs” method defined above and adds further filtering capabilities. I am creating this separate method so that I have the flexibility to change my persistence-level “get_jobs” method without worrying about breaking my clients’ code.
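Such a wrapper might look like the following sketch. The names and the persistence-layer signature are assumptions based on the description above; the persistence call is stubbed out and injected so the wrapper can evolve independently of it.

```python
from dataclasses import dataclass, field

UNIX_START_TIME = 0.0

@dataclass
class Job:
    job_name: str
    job_id: str
    start_time: float
    end_time: float
    status: str
    checkpoints: list = field(default_factory=list)

def _persistence_get_jobs(names=None, start_time=UNIX_START_TIME, end_time=None):
    """Persistence-layer stand-in: would read job directories from the file system."""
    raise NotImplementedError

def get_jobs(name=None, state=None, start_time=UNIX_START_TIME,
             end_time=None, category=None, _fetch=_persistence_get_jobs):
    """User-API wrapper: delegates to the persistence layer, then applies
    filters (e.g. state) that the persistence layer does not know about.
    `category` is left as a placeholder for further filtering."""
    names = [name] if name else None
    jobs = _fetch(names=names, start_time=start_time, end_time=end_time)
    if state is not None:
        jobs = [j for j in jobs if j.status == state]
    return jobs
```

Injecting the persistence function (here via `_fetch`) also makes the wrapper testable without touching the file system.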

I am also thinking of exposing REST endpoints for querying batch jobs using Django.

My questions:

  1. How should I reuse the get_jobs method of the user API module in the REST APIs?
  2. Performance: loading data from the file system every time a query is made would be too slow. How can I make these read calls faster?
  3. Any other suggestion?
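On question 2 above, one common approach (a framework-neutral sketch, not specific to this software) is a small time-based cache in front of the file-system reads, so repeated queries within a short window reuse the already-parsed result:

```python
import time

class TTLCache:
    """Cache query results for `ttl` seconds to avoid re-reading the file system."""
    def __init__(self, loader, ttl=30.0, clock=time.monotonic):
        self._loader = loader
        self._ttl = ttl
        self._clock = clock
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        now = self._clock()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # fresh cache hit: skip the loader
        value = self._loader(key)  # miss or expired: reload from disk
        self._store[key] = (now + self._ttl, value)
        return value
```

A further step, if queries outgrow this, would be maintaining an index (e.g. a small SQLite database) that is updated as jobs write their state, so queries never scan the directory tree at all.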

OData – Best practices for building REST Apis?

I am new to OData and I am reading documentation about it. The documentation says that OData defines a set of best practices for building and consuming RESTful APIs, but I don’t see which best practices OData actually defines. I do see the advantage of being able to filter generically via query options, without defining an object in the controller to filter the results.

Another thing is the URLs in OData. They follow OData’s own convention rather than the usual REST style (instead of localhost/person/17 you have the route localhost/people(17)).

Which other best practices does OData define?
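Much of what OData standardizes is exactly the generic filtering mentioned above: the system query options ($filter, $orderby, $top, $skip, $select), which let any client express a query without a per-endpoint filter object. A small sketch of building such a URL (the base URL and entity set name are invented for the example):

```python
from urllib.parse import urlencode, quote

def odata_query(base: str, entity_set: str, **options) -> str:
    """Build an OData query URL from system query options, e.g. filter='...'.
    Option names are prefixed with '$' per the OData URL conventions."""
    qs = urlencode({f"${k}": v for k, v in options.items()}, quote_via=quote)
    return f"{base}/{entity_set}?{qs}"
```

Note the `$` prefix is percent-encoded as `%24` here, which servers accept interchangeably with the literal `$` usually shown in OData documentation.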

Having one OIDC provider and multiple APIs from third parties, how can I federate logins?

I have an app which authenticates against one OIDC provider, e.g. Google, and then uses the provided ID and access tokens to make requests against (1) an app API and (2) a third-party API.

Is this possible? How does it work, and where can I learn more? I know about OpenID Connect, but only in a “single backend API” flow. I came across OpenID Federation but do not know if this is the standard to use. Can anybody help me out?

Last but not least, how do I manage roles in this type of setup? Someone mentioned custom claims for this, as a property of the token, but I could not really get a clue about that either.

In summary: how do I do enterprise authentication and access management with third-party APIs but only one place to sign up and log in?
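On the roles point: “custom claims” just means extra fields the provider embeds in the token payload; each API decodes the token and authorizes based on those fields. A minimal sketch of reading such a claim follows. Signature verification is deliberately omitted to keep it short, and the claim name “roles” is an assumption (providers differ); a real API must verify the signature against the provider’s published keys before trusting any claim.

```python
import base64
import json

def read_claims(jwt_token: str) -> dict:
    """Decode the payload segment of a JWT. Does NOT verify the signature --
    a real API must verify it against the provider's published keys."""
    payload_b64 = jwt_token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def has_role(jwt_token: str, role: str) -> bool:
    # "roles" as a claim name is an assumption; check your provider's docs.
    return role in read_claims(jwt_token).get("roles", [])
```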

Bringing data from 30 APIs based on data that came from the APIs

I have to write a server (.NET Core) that is going to read data from 30 different remote APIs. I’m supposed to run some sort of decision tree: if I find something in one API, go to another API, else to some other one, and so on.

It goes something like this:

if apiA.response.data has 'some info' {
    // read apiB
    if apiB.response.data has 'some info' {
        // read apiC
        ...
    } else {
        // read apiJ
        ...
    }
} else {
    // read apiQ
    ...
}

So each cycle can end up with between 20 and 30 API calls, depending on the result and the use case.

What I thought to do is to put some sort of queue management (RabbitMQ) in place and let it handle the flow.

So ApiAHandler will queue a task to run AnotherApiHandler based on the result.
AnotherApiHandler will queue a task to run SomeOtherApiHandler based on the result.

and so on.

Each handler will persist its result to the database, assuming that each handler knows what to do with it and how to persist it.
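The handler-per-API chain described above can be sketched language-agnostically; here a Python in-process queue stands in for RabbitMQ, and each handler records its result and decides which handler to enqueue next (handler names and the 'some info' check are placeholders from the pseudocode above, not a concrete design):

```python
import queue

results = []  # stand-in for the database each handler persists to

def handle_a(payload):
    results.append(("A", payload))
    return "B" if "some info" in payload else "Q"  # decision point from the pseudocode

def handle_b(payload):
    results.append(("B", payload))
    return None  # end of this branch of the chain

def handle_q(payload):
    results.append(("Q", payload))
    return None

HANDLERS = {"A": handle_a, "B": handle_b, "Q": handle_q}

def run(start_payload):
    tasks = queue.Queue()
    tasks.put(("A", start_payload))
    while not tasks.empty():
        name, payload = tasks.get()
        next_name = HANDLERS[name](payload)  # handler persists, then routes
        if next_name:
            tasks.put((next_name, payload))
```

With a real broker the routing stays the same; each handler simply publishes the next task instead of putting it on a local queue, which also gives retries and back-pressure for free.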

What do you think? Is it efficient to handle it like that?
(I hope you got me).
Thanks

Why point out implementation-specifics on native APIs? [on hold]

I’ve reviewed code, both on Stack Exchange and in person. A common thing I do is look for manually-written loops and replace them with native APIs where applicable. This way, the code is more concise, the developer avoids dealing with counters, and it takes advantage of native APIs which get optimized over time.

For example:

  • Using string.replace() for a global replace instead of manually looping and concatenating strings.
  • Using array.includes()/string.includes() for finding content, instead of manually looping, checking, and flipping a flag variable.
  • Using array.indexOf()/string.indexOf() for finding the index, instead of manually looping, checking, and returning the counter.
  • Using Web Sockets instead of manually looping and doing AJAX calls at every iteration.
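The same trade-off exists in any language; the third bullet, rendered in Python for illustration (the JavaScript versions are analogous):

```python
# Manual loop: counter, comparison, and early exit all managed by hand.
def index_of(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

# Native API: same behavior, intent stated directly.
def index_of_native(items, target):
    return items.index(target) if target in items else -1
```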

Now with all that done, I almost always get this kind of comment back:

…internally, it’s still looping.

True, these APIs probably are looping under the hood. But my question is: why do people point it out? It’s like they’re saying “all your loop-removal work was for nothing, because something somewhere still uses loops under the hood”.

Writing in a higher-level language, I shouldn’t care how the engine works (except in cases where performance down to the last drop matters, which is not the case for any of these reviews). If there’s a native API that does the same job with less code and headache, using it should be a good thing. If I managed to push the responsibility of looping to the engine, that’s still a good thing.

Is there something fundamental I’m missing? Is there some concept I’m not really grasping here? It can’t be a coincidence that multiple, unrelated individuals of various programming seniority are pointing this out to me.

How to handle dependencies in Web APIs

I’m struggling with a decision about how to design a web API where I create new “things”. We roughly follow the Zalando API guidelines, which provide a nice starting point for web APIs (https://opensource.zalando.com/restful-api-guidelines/). But there’s no guidance on how to handle creating new resources which might have dependencies.

To keep things simple, here is the beloved automotive example.

Assume the following API:

GET /vehicle – will get a list of vehicles
POST /vehicle – will create a new vehicle

The vehicle might look something like this:

class Vehicle {
  VehicleType Type { get; set; }
}

enum VehicleType {
  // This enum is an example - it might as well be some complex type.
  eCar,
  Car,
  Truck
}

Now, for the POST, I need to know the valid VehicleTypes.

Would I rather do:
GET /vehicle-type or
GET /vehicle/types or
GET /vehicle/dependencies/types or
GET /new-vehicle and include the dependencies?

Which approach is “well-known”? Are there other well-known approaches?
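Whichever route is chosen, the handler behind it can be trivial; a framework-neutral sketch (the route name GET /vehicle-type and the payload shape are choices for this example, not a standard) that returns the valid types so a client can populate its POST:

```python
from enum import Enum

class VehicleType(Enum):
    ECAR = "eCar"
    CAR = "Car"
    TRUCK = "Truck"

def get_vehicle_types() -> dict:
    """Handler body for e.g. GET /vehicle-type: list the valid enum values,
    so the client never hard-codes them."""
    return {"vehicleTypes": [t.value for t in VehicleType]}
```

Keeping the dependency as its own resource (the first option) has the practical advantage that the list can be cached and versioned independently of the vehicle resource itself.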

Preventing unauthorized access to APIs

I have an application which calls various services to load data(of different entities shown in the web page). Opening the network tab in Chrome can tell me which APIs are being called. Now this can be used by other users to get the data, say calling the same API endpoint in another tab. I do not want the users to be able to do that. I want some way where my API returns data only when my application calls it and not when directly called from browser or postman. Is it possible? If yes, how do I achieve the same?