When to use a reserved word, null or undefined as a key in an object?

I have a case where I have a dictionary object plus one value that has no key of its own.

The object holds the system values plus a user value, and I have to store that user value somewhere.

I could use a reserved word or I could use null.

userValue = "enabled"; data = {version: 10, date: 10101010}; data.name = "Charleen";  // reserved word data["MyReservedWordIHopeNoOneStumblesUpon"] = userValue;  // or use null data[null] = userValues; 

If I use a reserved word, a user may stumble upon it one day (highly unlikely). That reserved word would also be stored or saved along with the data, where it may not make sense.

I could also use null or undefined as keys. But if I were iterating through the object and encountered null or undefined, I might dismiss real bugs as just coming from user values.

The code is not that complicated, so I could easily debug it and check for reserved words or null.

Have you ever used null or undefined as keys, and are there any pros, cons, or recommendations for doing so?

Note: TypeScript gives a warning:

Type ‘null’ cannot be used as an index

But it works in ES6.
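
One thing worth noting (and part of why I am unsure about null): in ES6, data[null] actually stores the value under the string key "null", since non-symbol property keys are coerced to strings. A collision-free alternative I have been considering is a Symbol key; here is a minimal sketch (the names are mine, for illustration):

// Sketch: a Symbol key can never collide with a user-supplied string key.
// Caveat: symbol keys are skipped by Object.keys, for...in and
// JSON.stringify, so the user value would not be serialized with the rest.
const USER_VALUE = Symbol("userValue");

const data: Record<string, unknown> & { [USER_VALUE]?: string } = {
  version: 10,
  date: 10101010,
};
data.name = "Charleen";
data[USER_VALUE] = "enabled";

console.log(Object.keys(data)); // ["version", "date", "name"]
console.log(data[USER_VALUE]);  // "enabled"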

What is a good technique to process an event stream to ensure sequential or transactional consistency

I am trying to improve an event-driven processing system, which is having a few problems because events are not guaranteed to arrive in the correct chronological sequence. This is due to batching and caching upstream, which is currently out of my control.

The sequence errors are not a complete disaster for my processor, mainly because it’s FSM-oriented and so copes with “weird” transitions, mostly by ignoring them. Also, the events are timestamped and expected to be delayed, so we are used to reconstructing history. However, there are a few cases which cause unwanted behaviour or unnecessary double-processing.

I’m looking for a technique which can either queue up, group, and sort the events before processing, or perhaps identify “transactions” in the stream.

An example correct event stream would be like this, noting that all events are timestamped:

10:01 User Session Start (user A)
10:02 Device event 1
10:03 Device event 2
10:10 User Session End (user A)
10:15 User Session Start (user A)
10:16 User Session End (user A)
10:32 User Session Start (user B)
10:34 Device event 3
10:35 Device event 4
10:50 User Session End (user B)

My downstream processor keeps track of who is using a device, but also needs to correlate the other device events with the users. It does that by keeping session state while receiving the other events.

Each event is in practice processed by a different message-queue worker, with a central database. So there are potential race hazards, but that’s not the focus of this question.

The problems arise when the same stream arrives like this, where the “…” lines indicate gaps between the three “batches”, which are received much later.

10:10 User Session End (user A)
10:01 User Session Start (user A)
10:02 Device event 1
10:03 Device event 2
...
10:16 User Session End (user A)
10:15 User Session Start (user A)
10:34 Device event 3
10:35 Device event 4
...
10:50 User Session End (user B)
10:32 User Session Start (user B)

I am particularly interested in “the final device event in a session”. So here I need the 10:10 session end plus the 10:03 Device event 2 to complete the picture. I know that any device event timeboxed between 10:01 and 10:10 is “owned” by user A, so when I receive device event 2, I can correlate it – OK. When I receive the 10:01 start event I can ignore it, as I already saw the corresponding end (just annoying). When I receive device event 1, I can’t tell whether it’s the final one or not, so I process it. Then I receive device event 2 immediately after and re-do the same work, updating the state to presume this is the last one. I cannot predict whether any more device events are coming, so the FSM has to just remain with that assumption – which in this case is correct.
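
For illustration, the timebox correlation I am doing is essentially this (a sketch with invented names, assuming both session endpoints are already known):

// A device event belongs to whichever known session window contains it.
interface Session {
  user: string;
  start: number; // epoch ms
  end: number;   // epoch ms
}

function ownerOf(eventTs: number, sessions: Session[]): string | undefined {
  return sessions.find((s) => s.start <= eventTs && eventTs <= s.end)?.user;
}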

The next batch is harder to deal with. I get a second “empty” session from user A – not a problem in itself. Then I get some device events out of sequence which belong to the user B session, which I’ve not received yet. This isn’t a critical problem; I can update the associated device model with this information, but I cannot complete the processing yet.

Eventually the user B events arrive and I can correlate back with the device events, again ignoring “older” events where possible.

You can hopefully see that this adds a lot of difficulty to the processing and probably leads to some missed cases.

What can I do to massage this event stream to make it more processable?

I have been thinking about:

  • event sourcing (but it requires a correct sequence)
  • re-buffering the queue for X minutes (but I still can’t be sure how long; see the sketch after this list)
  • implementing something like the Nagle algorithm for chunking and pause/gap detection
  • combining all the workers into one, with an FSM (mirroring the session-boxing) which then outputs the events once they’ve satisfied the inter-dependency sequence checks
  • not fixing the queue, and instead implementing a random-order-resilient processor
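
To make the re-buffering option concrete, this is roughly what I have in mind: a minimal sketch assuming a fixed allowed-lateness window (the X minutes above), with all names invented for illustration.

// Hold events until a watermark (the newest timestamp seen, minus an
// allowed-lateness window) passes them, then release in timestamp order.
interface StreamEvent {
  ts: number;   // event timestamp (epoch ms)
  kind: string; // e.g. "SessionStart" | "SessionEnd" | "DeviceEvent"
}

class ReorderBuffer {
  private pending: StreamEvent[] = [];
  private maxTsSeen = 0;

  constructor(
    private allowedLatenessMs: number,
    private emit: (e: StreamEvent) => void, // downstream FSM/processor
  ) {}

  push(e: StreamEvent): void {
    this.pending.push(e);
    this.maxTsSeen = Math.max(this.maxTsSeen, e.ts);
    const watermark = this.maxTsSeen - this.allowedLatenessMs;
    // Release everything the watermark has passed, oldest first.
    this.pending.sort((a, b) => a.ts - b.ts);
    while (this.pending.length > 0 && this.pending[0].ts <= watermark) {
      this.emit(this.pending.shift()!);
    }
  }
}

The obvious trade-off is latency versus correctness: anything arriving later than the window is still out of order, so this would presumably need a fallback path for stragglers.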

Because I can make some assumptions about the likely contents of the stream, I am considering a “transaction detector”; or, making no assumptions, just a more generalised “stream re-order” approach.

I know sequence numbers would solve it easily, but as mentioned I cannot presently modify the upstream publisher.

I am not looking for an entire solution – just pointers to algorithms or techniques used for this class of problem, so I can do further research.

Send local notification for desktop application

Is it possible to send a local notification from a desktop application? The application I’m building uses WinForms, not WPF.

I have searched Google, but I didn’t find anything for WinForms; everything I found was for WPF.

My scenario: my environment runs over a VPN (no internet access at all), and I need to send a notification from one office to another office (a different location, with a different address) using my WinForms C# application.

Clean architecture – how can a component become a microservice?

I’ve read and enjoyed the “Clean Architecture” book. So the first thing I tried to do is to implement my project with it.

Where I work we follow a design method called “IDesign”, in which the architecture is broken into:

  1. “clients” – encapsulate communication with consumers (e.g. UI)
  2. “managers” – manage business flows (drive use cases)
  3. “engines” – stateless computational components (not always needed)
  4. “resource accesses” – encapsulate access to data sources

A component of each type is logically a service, so I was trying to treat each one as a component in the “clean architecture” sense.

What I’m having a hard time with is this:

The RA (resource access) component is clearly low-level, so a manager, for instance, should not depend on it directly.

We can have multiple managers and engines using the same RA, so, according to my favourite uncle, each of them should define its own interface and the RA component should implement all of them.

What I cannot understand is this: if I want to turn the RA component into a microservice, what would its API look like?

I’m trying to follow this approach in a NodeJS project, which makes it even harder for me to grasp how to implement such a solution.

The best I could come up with so far is to have a single API for the RA, and for each engine/manager an internal onion-architecture approach in which a repository-layer class implements a “use case”-layer interface and calls the RA API (in memory at the first stage, potentially over the network in the future).
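
Sketched in TypeScript, that idea looks roughly like this (all names are mine, purely for illustration):

// The manager (use-case layer) owns its interface...
interface OrderReader {
  getOrder(id: string): Promise<{ id: string; total: number }>;
}

// ...while the RA exposes its own single API surface, which could later
// move behind a network boundary (HTTP/gRPC) without the manager noticing.
class OrdersResourceAccess {
  async fetchOrderRecord(id: string): Promise<{ id: string; total: number }> {
    // ...query the data source...
    return { id, total: 0 };
  }
}

// The repository-layer class adapts the manager's interface to the RA API.
class OrdersRepository implements OrderReader {
  constructor(private ra: OrdersResourceAccess) {}

  getOrder(id: string) {
    return this.ra.fetchOrderRecord(id); // in-memory today, RPC tomorrow
  }
}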

Another thing I struggled with is the entity objects: should the RA service map to them when returning a response? Again, this makes sense when everything is in the same process, but between microservices I’m not so sure.

Any thoughts?

Problem making a PUT request to add a @ManyToOne relation with JPA and Spring REST

I can’t update a record whose entity is related to another entity via a @ManyToOne relationship.

I have two entities, Customer and City; each Customer can have a City.

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    Long id;

    @DateTimeFormat(pattern = "yyyy-MM-dd")
    private Date bornDate;

    @ManyToOne
    private City cityEntity;

    private String fullName;
    private String sex;
    private Integer age;

    // setters and getters...
}

@Entity
public class City {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    Long id;

    private String name;
    private String state;

    // mappedBy must name the owning field on Customer, i.e. "cityEntity"
    @OneToMany(mappedBy = "cityEntity")
    private List<Customer> customers;

    // ...
}

I can use a POST request to create a new Customer, omitting the cityEntity parameter, but when I try a PUT request to update cityEntity an error occurs and the data isn’t updated:

{ "timestamp": "2019-03-25T01:33:06.252+0000", "status": 415, "error": "Unsupported Media Type", "message": "Content type 'text/uri-list;charset=UTF-8' not supported", "path": "/api/customers/4" }

RESTED request: [screenshot omitted]
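
Since the screenshot doesn’t carry over, the request was essentially equivalent to the following sketch (reconstructed from the error above; the exact city URI in the body is assumed):

// Reconstruction of the failing request, for reference only:
// the body is a city link sent as text/uri-list.
await fetch("/api/customers/4", {
  method: "PUT",
  headers: { "Content-Type": "text/uri-list;charset=UTF-8" },
  body: "http://localhost:8080/api/cities/1", // assumed city URI
});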

PUT method:

  @PutMapping("/customers/{id}")     ResponseEntity<?> replaceCustomer(@RequestBody Customer newCustomer, @PathVariable Long id) throws URISyntaxException {         Customer updatedCustomer = repository.findById(id).map( customer -> {             if (newCustomer.getFullName() != null) customer.setFullName(newCustomer.getFullName());             if (newCustomer.getBornDate() != null) customer.setBornDate(newCustomer.getBornDate());             if (newCustomer.getSex() != null) customer.setSex(newCustomer.getSex());             customer.setCityEntity(newCustomer.getCityEntity());             return repository.save(customer);         })         .orElseGet(() -> {             newCustomer.setId(id);             return repository.save(newCustomer);         });          Resource<Customer> resource = assembler.toResource(updatedCustomer);         return ResponseEntity.created(new URI(resource.getId().expand().getHref())).body(resource);     } 

The links used above are valid and resolve to their respective content.

Question: how do I create/update a record with this relationship?

Advantages of using a message broker for scaling websocket client-to-client communications

I’m designing a system where pairs of clients need to exchange messages, proxied by a backend service. My initial plan was to use websockets and have the clients connect to a single websocket server, but this obviously doesn’t scale. One popular design for scaling websocket servers is to use multiple websocket servers and broker messages on the back end with some sort of pub/sub message broker (Redis or similar).

Given that my requirement is only for pairs of clients to communicate (no chat-like behavior), and I only need at-most-once delivery, is there any advantage to using a message broker over just having the hosts communicate directly? Or phrased differently, is there any reason not to just have the servers communicate directly?

i.e., in this context, are there any advantages to this:

[diagram: brokered architecture]

over this:

[diagram: direct server-to-server architecture]

To clarify the question a bit more: my concern is that a message broker adds an extra piece of infrastructure without the expected gains in scalability in this scenario. In most cases where a message broker is recommended, there is one-to-many communication (chat apps, multiplayer gaming, etc.), where the broker adds pub/sub capabilities or message buffering/durability. In this case I have only pairs of clients, and fire-and-forget messaging is adequate. It would also be trivial to have the servers hosting each side of a conversation exchange messages directly. So I’m wondering if there are other advantages to message brokers that I am missing in this context.

For reference, I’m anticipating several thousand concurrent sessions each consisting of a pair of clients.
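
For concreteness, the “direct” option I have in mind is roughly the following (a sketch under my own naming; Redis is used here only as an example of a shared session registry, not as a broker):

import { createClient } from "redis";

// Shared registry mapping each session to the servers hosting its two clients.
const registry = createClient({ url: "redis://registry:6379" });
await registry.connect();

// On client connect: record that this server owns one side of the session.
async function registerClient(sessionId: string, selfAddr: string): Promise<void> {
  await registry.hSet(`session:${sessionId}`, selfAddr, "1");
}

// On message: look up the peer's server and forward to it directly.
async function forwardToPeer(sessionId: string, selfAddr: string, msg: string): Promise<void> {
  const hosts = await registry.hKeys(`session:${sessionId}`);
  const peer = hosts.find((h) => h !== selfAddr);
  if (peer) {
    // Fire-and-forget, at-most-once: no retries, no buffering.
    await fetch(`http://${peer}/relay/${sessionId}`, { method: "POST", body: msg });
  }
}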

What are the pros and cons of an if statement versus two different functions?

I was wondering what the pros and cons are of having one function with an if statement selecting between two sections of code, versus two discrete functions that are called separately.

In terms of pros and cons, I was thinking of:

  • computational complexity
  • readability
  • maintenance

and any other aspect I did not consider.

Here is an example:

// One function, with an if statement selecting between two parts of code.
function Foo1(data, flag) {
    if (flag === true) {
        // do code
    } else {
        // do code a little bit differently
    }
}

// Or two discrete functions.
function Foo1_True(data) {
    // do code
}

function Foo1_False(data) {
    // do code a little bit differently
}

Thanks for your help! I am just trying to learn best practices.

(I have a feeling the answer is going to be “neither”, and that I have to find a more elegant way to capture both pieces of functionality.)
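
If it helps, the kind of “more elegant way” I suspect exists might be passing the differing step in as a function, keeping the shared code in one place. A minimal sketch, with all names invented:

// The varying step becomes a parameter (a strategy), so there is
// one shared function instead of a flag or two near-duplicates.
function process(data: number[], step: (d: number) => number): number[] {
  // ...shared setup/validation would go here...
  return data.map(step); // the only part that differs
}

const doubled = process([1, 2, 3], (d) => d * 2);
const squared = process([1, 2, 3], (d) => d * d);
console.log(doubled, squared); // [2, 4, 6] [1, 4, 9]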

Network programming (boost::asio), architecture and communication protocols

I’m trying to build a simple network application using boost::asio. I think I understand the basics like io_context, async functions, etc., but I really don’t know how to deal with buffers.

To be clear: I know how to use boost::asio buffers, but I don’t know how to deal with them from an architectural point of view.

Let’s consider that 10 clients are connected to my server. The communication is bidirectional: from server to client and from client to server. I also have a thread group to which two threads were added, and each thread calls io_context.run().

As I understand it, that means that, for example, two connections can be processed (read or write) at the same time (of course, it does not mean that the maximum number of connections is equal to the thread count).

Based on the above assumptions, how should I deal with buffers? Should there be two buffers per connection (one for reading and one for writing)? If so, should there be two buffers times the number of active connections in my application?

Or should I use only two buffers in total, because with two worker threads there is no way a larger number of buffers could ever be in use at once?

I have found a lot of sources about socket programming in general, but it is hard to find sources that show how to design the architecture, how to work with communication protocols, etc. If you have any good sources, I would be grateful.