Technologies for a full-stack JavaScript project

I have some questions regarding the technologies I should or should not use for a new project. I have previously used Java for the back-end, primarily Hibernate for database management and JAX-RS as a REST API to communicate with my front-end. I would like to substitute those technologies with JavaScript frameworks and libraries, and since I haven’t used any previously, I would prefer to learn the more modern ones.

I am thinking about creating a SPA, using React for the front-end and MongoDB as my database. I would also prefer an architecture that separates the front-end and back-end.

So my questions, specifically, are: does it make sense to use a REST API when your back-end and front-end aren’t separated? And which frameworks/libraries would be the equivalent of Hibernate (database mapping) and fit a layered architecture with MongoDB and React?
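
From my research so far, an ODM such as Mongoose looks like the closest analogue to Hibernate for MongoDB. A minimal sketch of the kind of mapping layer I have in mind (the schema and all names are hypothetical, not from an existing project):

    import mongoose from 'mongoose';

    // Hypothetical entity: roughly what an @Entity class would be in Hibernate.
    const userSchema = new mongoose.Schema({
      name: { type: String, required: true },
      email: { type: String, unique: true },
      createdAt: { type: Date, default: Date.now },
    });

    // The model plays the role of a repository/DAO in a layered architecture.
    const User = mongoose.model('User', userSchema);

    async function main() {
      await mongoose.connect('mongodb://localhost:27017/myapp'); // assumed local DB
      const user = await User.create({ name: 'Ada', email: 'ada@example.com' });
      const found = await User.findById(user._id);
      console.log(found?.email);
      await mongoose.disconnect();
    }

    main().catch(console.error);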

Phenomenon related to emergent properties?

I’m looking for a particular word that I can’t properly define except as an “emergent property”, and perhaps there is a better alternative.

I am designing a software library and have two decisions to make. Each decision has several alternatives, each with its own drawbacks. In each decision I choose the seemingly best alternative. However, when I put the two decisions together in the final system, a flaw emerges from the combination.

This is like making locally optimal decisions that lead to a globally non-optimal outcome. Is there a known word for this?

How to integrate different “microservices” into a transaction?

We’re building a new web-based industrial application, and one of the questions that has been hammering our heads for the last few days is about the integration between the different “microservices” in this architecture.

I’m using “microservices” with a pinch of salt because we’re not totally embracing the concepts that define real microservices. One (and I think the biggest) difference lies in the fact that we’re using the same shared database across the different modules (which I’m calling “microservices”). A sort-of logical view of our system could be drawn as:

                      ╔══════════════╗
                      ║    Client    ║ ══╗
                      ╚══════════════╝   ║ (2)
                                         ║
                                         ▼
    ╔══════════════╗  (1)  ╔══════════════╗
    ║  Serv. Reg.  ║ <==>  ║  API Gatew.  ║
    ╚══════════════╝       ╚══════════════╝
            █  █  █████████████ (4)
            █   █             ████
    ╔══════════════╗  ╔══════════════╗  ╔══════════════╗
    ║   Module A   ║  ║   Module B   ║  ║   Module C   ║  <===== “Microservices”
    ╚══════════════╝  ╚══════════════╝  ╚══════════════╝
            ║║ (3)            ║║ (3)            ║║ (3)
            ║║                ║║                ║║
    ╔══════════════════════════════════════════════════╗
    ║                  Database Server                  ║
    ╚══════════════════════════════════════════════════╝

Some things that we’ve already figured out:

  • The clients (external systems, front-end applications) will access the different back-end modules using the discovery/routing pattern. We’re considering a mix of Netflix OSS Eureka and Zuul to provide this. The services (Modules A, B, C) register themselves (4) with the Service Registration module, and the API Gateway coordinates (1) with the registry to find service instances that fulfill the requests (2).
  • All the different modules use the same database (3). This is more of a client requirement than an architectural decision.

The point where we (or I, personally) are stuck is how to handle the communication between the different modules. I’ve read a ton of different patterns and anti-patterns for this, and almost every single one recommends API integration via RestTemplate or some specialized client like Feign or Ribbon.

I tend to dislike this approach for several reasons, mainly the synchronous and stateless nature of HTTP requests. The stateless nature of HTTP is my biggest issue, as the service layers of the different modules can have some strong bindings. For example, an action fired on Module A can have ramifications on Modules B and C, and everything needs to be coordinated from a “transaction” standpoint. I really don’t think HTTP would be the best way to control this!
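
To illustrate my concern with plain HTTP integration (sketched in TypeScript for brevity, our stack is Java; endpoints are hypothetical, not our real code): if the second call fails, the first module’s change is already committed and nothing rolls it back.

    // Module A reacting to an action by calling B and C over HTTP.
    async function handleActionOnA(payload: unknown): Promise<void> {
      const resB = await fetch('http://module-b/internal/apply', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      });
      if (!resB.ok) throw new Error(`Module B failed: ${resB.status}`);

      // Suppose this call times out: B is updated, C is not, and the shared
      // database is left in a state that no single module expects.
      const resC = await fetch('http://module-c/internal/apply', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      });
      if (!resC.ok) throw new Error(`Module C failed: ${resC.status}`);
    }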

The Java EE part inside of me screams to use some kind of service integration like EJB or RMI, or anything that does not use HTTP in the end. For me, it would be much more “natural” to wire a certain service from Module B inside Module A and be sure that they participate together in a transaction.

Another thing that needs to be emphasized is that paradigms like eventual consistency in the database are not enough for our client, as they’re dealing with some serious kinds of data. So the “I promise to do my best with the data” approach does not fit very well here.

Time for the question:

Is this “Service Integration” really a thing when dealing with “Microservices”? Or does “Resource Integration” win out?

It seems that Spring, for example, provides Spring Integration to enable messaging between services, much as a technology like EJB would. Is this the best way to integrate those services? Am I missing something?
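
For what it’s worth, my mental model of the messaging alternative is something like the following sketch (TypeScript with amqplib purely for illustration; the queue name is hypothetical). Durable queues with persistent messages give at-least-once delivery, but this alone is still not a transaction spanning the modules:

    import amqp from 'amqplib';

    // Module A publishes an event instead of calling B and C directly;
    // B and C consume it and apply their own changes. Consumers must be
    // idempotent, since at-least-once delivery can duplicate messages.
    async function publishActionEvent(payload: unknown): Promise<void> {
      const conn = await amqp.connect('amqp://localhost');
      const ch = await conn.createChannel();
      await ch.assertQueue('module-a.events', { durable: true });
      ch.sendToQueue('module-a.events', Buffer.from(JSON.stringify(payload)), {
        persistent: true,
      });
      await ch.close();
      await conn.close();
    }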

PS: You may call my “Microservices” “Microliths”, as we usually name them around here. 🙂

Method Inlining Considerations

Generally, I have read that method calls can benefit from some sort of inlining, and that the .NET JIT compiler applies this kind of micro-optimization automatically. I understand that if a method is called just once from an enclosing method, performance might improve when it is inlined, even if the method itself is large. However, if a large method is called in many places, inlining it could decrease performance, because it reduces locality of reference.

I know that every method call has a cost, such as pushing arguments onto the evaluation stack. So my question is: how do we determine the reduction in instruction count from inlining, and its performance impact, in order to decide whether a method would benefit from being inlined manually? The idea is to inline method calls selectively and manually for performance improvements. Any ideas and thoughts on this subject will be appreciated.
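
To make the idea concrete: by manual inlining I mean replacing a call with its body and timing both versions. I know C# offers [MethodImpl(MethodImplOptions.AggressiveInlining)] as a hint, but the measurement question remains. A minimal sketch of the measure-both-versions idea (in TypeScript/Node purely for illustration; the caveat that the JIT may inline the small call on its own applies in any runtime, which is itself part of the lesson):

    function add(a: number, b: number): number {
      return a + b;
    }

    // Version 1: the work goes through a method call.
    function viaCall(n: number): number {
      let sum = 0;
      for (let i = 0; i < n; i++) sum = add(sum, i);
      return sum;
    }

    // Version 2: the body of add() is pasted in by hand.
    function inlined(n: number): number {
      let sum = 0;
      for (let i = 0; i < n; i++) sum = sum + i;
      return sum;
    }

    for (const f of [viaCall, inlined]) {
      const t0 = process.hrtime.bigint();
      f(100_000_000);
      const t1 = process.hrtime.bigint();
      console.log(f.name, Number(t1 - t0) / 1e6, 'ms');
    }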

What are best practices for deploying different configurations per environment in Kubernetes/OpenShift?

Kubernetes provides a very elegant mechanism for managing pod configuration using ConfigMaps. What’s not clear from the documentation is the recommended practice for using ConfigMaps to manage different configurations for different environments, and for deploying configuration changes when they occur.

Assume I’m using a ConfigMap for my pod to set various environment variables or to inject configuration files into my container. Evidently some (or all) of those variables or files need to differ depending on which environment the pod is deployed to.
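
For concreteness, something like this (all names and values are hypothetical):

    # ConfigMap carrying environment-specific settings
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config
    data:
      LOG_LEVEL: debug
      BACKEND_URL: https://backend.staging.example.com
    ---
    # Pod spec consuming the ConfigMap as environment variables
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          envFrom:
            - configMapRef:
                name: my-app-config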

In an ideal world I can make configuration changes and deploy them to the pod without re-building or re-deploying the container image. The implication is that those configuration settings, and the ConfigMap, should probably be stored in a separate source-code repository (otherwise a build of the image would be triggered every time the configuration changed).

What are some recommended practices for:

  1. maintaining different configuration settings per environment (e.g. separate branch per environment)

  2. automatically deploying configuration changes when they are committed to source control, but only to the respective environment

Language and Technologies for a DataVis Application for a race car

My team and I need to develop a program for analyzing sensor data from a student race car. The problem is that we all come from more of a back-end/computer-graphics background.

We already built a prototype with plain JavaScript and Highcharts, but found that it does not scale.

[Screenshot: prototype of the application with JavaScript and Highcharts.]

Since no one on our team has experience with such applications, we have difficulty deciding which technologies we should use to develop it.

Requirements

  • Loading data from files (we already have a JSON format)
    • One file holds data from multiple sensors from the same test/race
  • Displaying data from multiple sensors at the same time
    • In different graphs
    • In the same graph
  • A universal synchronized timeline over all displayed graphs
  • Perform math operations on data
    • add, subtract, multiply, etc., sensor channels to create new data for new graphs (see the sketch after this list)
  • Graphs should be able to display ca. 500,000 data points without lagging
  • Graphs may later need to be able to display live data
  • It needs to look good and modern
  • The application needs to be user friendly
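
To clarify the math-operations requirement, here is roughly what we mean, against a simplified (hypothetical) version of our JSON format:

    // Simplified sensor file: one test/race, several named channels sampled
    // on a shared timeline (our real JSON format carries more metadata).
    interface SensorRun {
      time: number[];                     // seconds since the start of the run
      channels: Record<string, number[]>; // e.g. "rpm", "speed", "throttle"
    }

    // Derive a new channel point-by-point from two existing ones, e.g.
    // combine(run, 'power', 'torque', 'rpm', (t, r) => (t * r) / 9549).
    function combine(
      run: SensorRun,
      name: string,
      a: string,
      b: string,
      op: (x: number, y: number) => number,
    ): void {
      const ca = run.channels[a];
      const cb = run.channels[b];
      if (!ca || !cb || ca.length !== cb.length) {
        throw new Error(`channels ${a} and ${b} are missing or misaligned`);
      }
      run.channels[name] = ca.map((x, i) => op(x, cb[i]));
    }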

Our Ideas

Our research so far concluded that these technologies may do the job:

C++ & Qt

  • pros
    • C++ is a high-performance language
    • As computer-graphics people, we know the language well
    • Qt is a popular UI framework
    • Qt has a fast and customizable graph library
  • cons
    • Qt has an ugly API
    • The graph library lacks many features that other solutions already have

TypeScript & Electron & Highcharts & React & Redux

  • pros
    • Highcharts has an amazing API that offers nearly all features we need
    • Highcharts can handle a lot of data points
    • React and Redux seem to be very popular and have an active community
    • It is supposedly easy to build UIs with React
  • cons
    • Nobody in the team has experience with JS, TS or React
    • Performance may become an issue (see the downsampling sketch at the end of this question)

Do you think one of the above tech stacks can do the job? Or do you have an idea for a better one?
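
On the 500,000-point requirement: whichever stack we pick, we assume the data will need to be decimated before plotting. A minimal min/max downsampling sketch of the kind we have in mind (TypeScript, but the idea is language-agnostic):

    // Reduce a series to at most 2*buckets points while preserving spikes:
    // each bucket keeps its minimum and maximum sample, in time order.
    function downsampleMinMax(values: number[], buckets: number): number[] {
      if (values.length <= 2 * buckets) return values.slice();
      const out: number[] = [];
      const bucketSize = values.length / buckets;
      for (let b = 0; b < buckets; b++) {
        const start = Math.floor(b * bucketSize);
        const end = Math.min(values.length, Math.floor((b + 1) * bucketSize));
        let minI = start;
        let maxI = start;
        for (let i = start + 1; i < end; i++) {
          if (values[i] < values[minI]) minI = i;
          if (values[i] > values[maxI]) maxI = i;
        }
        const [first, second] = minI <= maxI ? [minI, maxI] : [maxI, minI];
        out.push(values[first]);
        if (second !== first) out.push(values[second]);
      }
      return out;
    }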

UDP socket error handling best practices

I have many UDP clients (on different endpoints/machines) that send data to a single server.
The current code is legacy; it basically uses the open-source SyslogUdpSender.

There is a wrapper class around SyslogUdpSender, but there is one wrapper instance per client endpoint and no error-handling mechanism: if the socket throws an exception, the wrapper logs it and then waits for the next call (the next execution of the .Send(..) method).
I need to fix this behavior: if the socket is no longer functioning, I want to reopen it and restore its state so I can continue sending data. The current code basically abuses the same socket over and over.

From what I understand, the client endpoint machine creates only one UDP socket and uses that same open socket over and over; the socket is opened only when the instance is created and is never closed. So when it gets into a faulted state, there is no attempt to reopen it.

I thought about adding the following logic:

  1. Call SyslogUdpSender.Send(); if a SocketException is caught:
    retry X times with a delay of Y between attempts, typically 3 times with a 100 ms delay.
    If this also fails, close the socket and reopen it. If reopening the socket fails X times in a row, halt the entire client from sending for Z seconds (maybe the server is down, or the OS is so overloaded it cannot even open a new UDP socket). A sketch of this logic follows below.

  2. Create a SyslogUdpSender pool: create 10 (configurable) clients per machine and send each message through the pool; this way we decrease the load on any single socket.
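
Our code is C#, but here is a minimal sketch of the retry-then-reopen policy from item 1 (TypeScript/Node with dgram, purely for illustration; the constants and names are hypothetical). Note that UDP gives no delivery guarantee, so this only handles local send failures:

    import dgram from 'node:dgram';

    const MAX_SEND_RETRIES = 3; // "X" above
    const RETRY_DELAY_MS = 100; // "Y" above

    const delay = (ms: number) => new Promise((r) => setTimeout(r, ms));

    function openSocket(): dgram.Socket {
      const socket = dgram.createSocket('udp4');
      // Without an 'error' listener, a socket error crashes the process.
      socket.on('error', (err) => console.error('socket error:', err.message));
      return socket;
    }

    function sendOnce(socket: dgram.Socket, msg: Buffer, port: number, host: string): Promise<void> {
      return new Promise((resolve, reject) =>
        socket.send(msg, port, host, (err) => (err ? reject(err) : resolve())),
      );
    }

    export async function sendWithRecovery(
      state: { socket: dgram.Socket },
      msg: Buffer,
      port: number,
      host: string,
    ): Promise<void> {
      for (let attempt = 1; attempt <= MAX_SEND_RETRIES; attempt++) {
        try {
          await sendOnce(state.socket, msg, port, host);
          return;
        } catch {
          await delay(RETRY_DELAY_MS);
        }
      }
      // Retries exhausted: assume the socket is faulted, replace it, try once more.
      state.socket.close();
      state.socket = openSocket();
      await sendOnce(state.socket, msg, port, host); // let a failure here propagate
    }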

Are there any best practices for working with sockets (especially UDP)?

Thanks

If passwords are stored hashed, how would a computer know that your password is similar to the last one if you try resetting your password?

If passwords are stored hashed, how would a computer know that your new password is similar to the last one when you try resetting your password? Wouldn’t the two passwords be totally different, since one is hashed and cannot be reversed?
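
To illustrate what I mean: even a one-character change produces a completely unrelated digest. A tiny sketch (plain SHA-256 purely for illustration; real systems use salted, slow hashes like bcrypt):

    import { createHash } from 'node:crypto';

    // Near-identical passwords hash to unrelated values (the avalanche effect),
    // so the stored hashes themselves cannot be compared for similarity.
    for (const pw of ['hunter2', 'hunter3']) {
      console.log(pw, '->', createHash('sha256').update(pw).digest('hex'));
    }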