JWT vs. custom encryption for REST APIs over HTTPS

For our REST API architecture, we are currently weighing two options:

  1. JSON Web Token (JWT) – pros are that it is an industry standard; we pass a key, which adds a layer of access control and which we can also use to add secondary authorisation restrictions at our backend; and session maintenance and related security features are provided by Django by default.

    Cons are that the params are open for anyone to see. It also seems (and correct me if I’m mistaken) that if someone got access to our link, they could alter a param that is not linked to the core authentication process and thus compromise the data (see the sketch after this list).

  2. An in-house encryption process we developed that encrypts all the params. Pros are that we are fairly certain it has never been compromised: even if the link got into someone’s hands, they wouldn’t know how to decrypt it to look at the params.

    Cons are that we have to manage all the session data in our own tables through backend code, so we aren’t able to use the Django features. Also, what we are doing isn’t an industry standard.
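To make the tampering concern from option 1 concrete, here is a minimal sketch using the PyJWT library (the secret and claim values are made up). The claims in a JWT are readable by anyone who holds the token, but altering them invalidates the signature:

import base64
import json

import jwt  # PyJWT

SECRET = "server-side-secret"  # made up for illustration

token = jwt.encode({"user_id": 42, "role": "viewer"}, SECRET, algorithm="HS256")

# Anyone can read the claims without knowing the secret...
print(jwt.decode(token, options={"verify_signature": False}))

# ...but a token whose claims were altered fails verification.
header, payload, signature = token.split(".")
claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
claims["role"] = "admin"  # the attacker's edit
forged = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
try:
    jwt.decode(".".join([header, forged, signature]), SECRET, algorithms=["HS256"])
except jwt.InvalidSignatureError:
    print("tampered token rejected")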

What is the right way to decide in this situation, and what factors should we take into account?

Mapping two APIs together

I have two services which I cannot modify, and I want to push data from service A to service B.

Both have REST APIs, but they do not use the same object models and JSON structure, so what I have done so far is write a service that calls API A every X hours and pushes data to API B.

I designed this service C as follows:

  • a wrapper class for querying API A
  • a wrapper class for querying API B
  • a Logger class

(I’m using Python, but I do not think that is relevant here.)

Now I feel that what I’ve got to do is just create a “Mapping” class with methods that call API A, get entities A.a, A.b, A.g from API A, generate the JSON to create object B.h of API B, and POST it to API B, repeating this until I’ve pushed all the different entities and data I want to push to service B.
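Roughly, I imagine something like this sketch (the endpoint paths and field names are made up):

# Rough sketch of the "Mapping" class idea; endpoint paths and field names are made up.
import requests

FIELD_MAP = {
    # field in B's JSON  <-  function extracting it from an A record
    "h_name":  lambda a: a["a"],
    "h_code":  lambda a: a["b"],
    "h_label": lambda a: a["g"],
}

def sync_once(api_a_url, api_b_url):
    records = requests.get(api_a_url + "/entities").json()
    for record in records:
        payload = {field: extract(record) for field, extract in FIELD_MAP.items()}
        requests.post(api_b_url + "/objects", json=payload).raise_for_status()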

It feels quite inefficient and rather boring to write. Is there a pattern or a smarter way to push data from one endpoint to another?

Thanks,

Legacy deep-inheritance XML schemas: how to design APIs that map them to and from flat schemas?

Consider a purely hypothetical legacy proprietary library for XML models, which has some really deep nested inheritance within its corresponding POJOs: 1-10 fields per class, lots of special instance classes that extend archetypal classes, types as wrappers of lists of type instances, etc. The resulting model looks really pretty with some dubious performance specs, but that’s beside the point.

I want to make this work with the ugly, high-performance flat models that kids these days, and people who claim not to have a drinking or substance abuse problem, prefer for some reason. So say my beautiful, shiny model is something like this:

<QueryRequestSubtypeObject>
  <QueryRequestHeaders>
    <QueryReqParams>
      <Param value="4"/>
      <ParamDescWrapper index="12">
        <WrappedParamDesc>Foobar</WrappedParamDesc>
  ...

And the corresponding object as modeled by the vendor instead looks like this:

{
    paramVal = 4
    paramTypeIndex = 12
    paramDesc = "Foobar"
}

There are also regular updates to this ivory Tower of Babylon as well as updates to the business logic as vendor specs change.

Now the part where I convert my ancient classic into a teen flick is straightforward enough, however ugly it might be. Something like the snippet below would be used by a query constructor, and that would be enough abstraction for all the business logic involved:

def extractParamVal(queryRequestSubtypeObject):
    return queryRequestSubtypeObject.getQueryRequestHeaders().getQueryReqParams().getParam().getValue()

Alas, it is not that simple. Now I want to convert whatever ugly, flat, subsecond-latency response comes back into our elegant, delicate model (with 5-10 second latency; patience is a virtue, after all!), with some code like this:

queryRequestSubtypeObject = new QueryRequestSubtypeObject()
queryRequestHeaders = new QueryRequestHeaders()
queryReqParams = new QueryReqParams()
queryReqParamList = new ArrayList()
param = new Param()
param.setValue(4)
queryReqParamList.add(param)
queryReqParams.setQueryReqParamList(queryReqParamList)
queryRequestSubtypeObject.setQueryRequestHeaders(queryRequestHeaders)
...

Code like this needs to live somewhere, somehow, for each and every field that is returned, if someone wants to convert data into this hypothetical format. Some solutions I have tried:

  • External libraries: Libraries like Dozer use reflection, which does not scale well for bulk-mapping massive objects like this. MapStruct et al. use code generation, which does not do well with deep nesting involving cases like the list wrappers I mentioned.

  • Factory approach: Generic response factories that take a set of transformation functions. The idea is to bury all model-specific implementation under business-logic-based abstractions. In reality this results in some FAT functions (see the sketch after this list).

  • Chain of responsibility: Methods that handle initialization of each field, other methods that handle what goes where from the vendor response, some other methods that handle creation of a portion of the mapping, and some other methods that handle a sub-group… loooooong chains of responsibility.
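For illustration, here is roughly what the factory approach looked like, boiled down to a toy Python version (the dataclasses stand in for the hypothetical vendor POJOs; all names are made up):

# Toy version of the factory approach; the nested classes stand in for the vendor model.
from dataclasses import dataclass, field

@dataclass
class Param:
    value: int = 0

@dataclass
class QueryReqParams:
    param: Param = field(default_factory=Param)

@dataclass
class QueryRequestHeaders:
    queryReqParams: QueryReqParams = field(default_factory=QueryReqParams)

@dataclass
class QueryRequestSubtypeObject:
    queryRequestHeaders: QueryRequestHeaders = field(default_factory=QueryRequestHeaders)

# Each transformation function grafts one flat vendor field onto the nested model.
def set_param_value(model, vendor):
    model.queryRequestHeaders.queryReqParams.param.value = vendor["paramVal"]

TRANSFORMS = [set_param_value]  # in reality, dozens of these per response type

def build_query_request(vendor_response):
    model = QueryRequestSubtypeObject()
    for transform in TRANSFORMS:
        transform(model, vendor_response)
    return model

print(build_query_request({"paramVal": 4}))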

Given that all of these approaches resulted in technical nightmares of some sort, is there an established way to handle cases like this specifically? Ideally it would involve minimal non-business-logic abstractions while providing enough granularity to implement updates, and be technically solid as well. Bonus points for the ability to isolate any given component for unit testing, wherever it might be in the model hierarchy, without null pointers getting thrown somewhere.

Confused about how APIs are called

I’m new to APIs. Conceptually, I understand what an API is, but I get confused when it comes to some of the technical details.

All of the tutorials I’ve read talk about URLs and endpoints, and they describe them as being the paths/addresses through which APIs can be accessed. This part is very clear; no confusion here.

However, what I don’t understand is how APIs are called. In other words, in a real-world scenario, people don’t actually type a URL into some input box to call an API, so I assume the calling is done behind the scenes, by some program, in response to some trigger event? If my assumption is true, is it also true that when using a client like Postman to test APIs, you are basically emulating the behavior of said program?
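(For reference, such a behind-the-scenes call can be as small as the Python snippet below; the URL is hypothetical, and it is essentially the same HTTP request a client like Postman would send.)

# A program calling an API "behind the scenes"; the URL is hypothetical.
import requests

response = requests.get("https://api.example.com/users/42")
print(response.status_code)  # e.g. 200
print(response.json())       # the JSON body the endpoint returned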

Lastly, where are APIs typically stored?

Is Kafka the right choice to decouple my APIs from the service implementation?

I want to decouple my exposed API services from their implementation. So I was planning to use Kafka as a mediation layer between my exposed API and the service implementation, going fully asynchronous and using a request/reply pattern with Kafka (a rough sketch of the request side follows the list below). Here is a schema of what I am planning to do:

[schema image]

I think that with this architecture I could easily:

  • separate the entry point of the system from my service implementation
  • allow the solution to scale up
  • allow a service to go offline for maintenance/upgrades
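For concreteness, the request side would look roughly like this with the kafka-python client (the topic names and the correlation-id header are my own choices, not an established convention):

import uuid

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
consumer = KafkaConsumer("service.replies", bootstrap_servers="localhost:9092")

def call_service(payload: bytes) -> bytes:
    # The API layer publishes the request, tagged with a correlation id...
    correlation_id = str(uuid.uuid4()).encode()
    producer.send("service.requests", value=payload,
                  headers=[("correlation-id", correlation_id)])
    producer.flush()
    # ...then waits for the service implementation to publish a matching reply.
    for message in consumer:
        if dict(message.headers or []).get("correlation-id") == correlation_id:
            return message.value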

But I am also wondering if this is the right pattern to choose, because I have never seen it used anywhere before. Do you see any possible drawbacks with my architecture?

Adding IP addresses in NetBox through the API

I used this query but am getting an error:

curl -X POST \
  -H "Authorization: Token fbdae03cf8cf549dcf79910eb0964f35939af098" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json; indent=4" \
  http://10.221.16.40:8000/api/ipam/ip-addresses/ \
  --data '{"address": "172.17.42.1/16", "interface": {"device": {"display_name": "lnl66a-4802"}, "name": "docker0"}}'

{
    "interface": [
        "Primary key must be an integer"
    ]
}