Does the leader election problem only apply to process replication/redundancy?

The leader election problem is described as follows:

In distributed computing, leader election is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the “leader” (or coordinator) of the task, or unable to communicate with the current coordinator. After a leader election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task leader.

As far as the leader’s responsibility is concerned, are the candidates identical?

Does the leader election problem apply only to processes which are replicas, i.e. to process redundancy? In other words, does the leader election problem not apply to processes which are not replicas, i.e. processes without redundancy?

My confusions come from:

  • Designing Data-Intensive Applications by Kleppmann introduces the concept of a "leader" in Chapter 5 (Replication), but covers the election problem in Chapter 9 (Consistency and Consensus).

  • Distributed Systems by Coulouris introduces the concepts of "primary replica manager" and "backup replica manager" in Chapter 18 (Replication), but covers the election problem in Chapter 15 (Coordination and Agreement).

So I wonder: does the election problem apply only to replication (more specifically, process replication), or also to cases that don't involve replication?
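For concreteness, here is a minimal sketch of the non-replication case I have in mind: a few cooperating processes, each responsible for different work (so not replicas of each other), that still need to agree on a single coordinator. The file-lock mechanism and the path are just assumptions for illustration:

    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    // Several cooperating processes that are NOT replicas of one another
    // (each would do different work) still elect exactly one coordinator.
    // Whichever process acquires the lock first becomes the leader.
    public class NonReplicaElection {
        public static void main(String[] args) throws Exception {
            Path lockFile = Path.of("/tmp/coordinator.lock"); // assumed shared location
            FileChannel channel = FileChannel.open(lockFile,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);

            FileLock lock = channel.tryLock(); // non-blocking attempt to become leader
            if (lock != null) {
                System.out.println("Won the election: acting as coordinator.");
                // e.g. hand out work items and collect results from the other processes
            } else {
                System.out.println("Lost the election: acting as an ordinary worker.");
                // e.g. do this process's own, non-replicated share of the work
            }
            // A real system would hold the lock for the leader's lifetime and let the
            // others retry if the leader dies; omitted here for brevity.
        }
    }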

Thanks.

I thought being POTUS is a 24/7 job. Do you think any other world leader spends his mornings like this?

After waking up early, Mr. Trump typically watches news shows recorded the previous night on his “Super TiVo,” several DVRs connected to a single remote. (The devices are set to record “Lou Dobbs Tonight” on Fox Business Network; “Hannity,” “Tucker Carlson Tonight” and “The Story With Martha MacCallum” on Fox News; and “Anderson Cooper 360” on CNN.) He takes in those shows, and the “Fox & Friends” morning program, then flings out comments on his iPhone. Then he watches as his tweets reverberate on cable channels and news sites. Source: https://www.nytimes.com/interactive/2019/11/02/us/politics/trump-twitter-presidency.html

How to do distributed job scheduling after the leader is elected and the slaves are known (Bully algorithm)

I'm building a system that processes key-value pairs continuously in a loop, until terminated.

How do I schedule tasks on the leader and make the slaves execute them? This is the realm of distributed objects: it requires the leader to connect to every slave via TCP/IP. I've got that part.

Then the leader should deploy tasks to the slaves and be notified of their progress. This is known as job scheduling, distributed task scheduling, etc.

The computer science slides I've seen were too high-level, without a working Java example. Can you point me to an isolated, practical implementation? I don't want an all-in-one system like Spark; I need just the distributed job scheduling, as simple as possible: like showing me a lab-grown heart that lives and beats, without showing me a whole human.

Can you point to such a computer science example or Java code, or describe how the algorithm should look on the leader and on the slave? Some CS experts know this from distributed operating systems classes; I didn't take those.
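To make the question concrete, here is roughly the shape I imagine for the leader and a slave. The port and the line-based "key,value" protocol are my own assumptions, not something I've seen prescribed anywhere:

    import java.io.*;
    import java.net.*;
    import java.util.List;
    import java.util.concurrent.*;

    // Each slave listens on a port; after the election the leader connects to every
    // slave, sends one "key,value" task per line and reads back one result line.
    public class SlaveNode {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {   // assumed port
                while (true) {
                    try (Socket leader = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(leader.getInputStream()));
                         PrintWriter out = new PrintWriter(leader.getOutputStream(), true)) {
                        String task;
                        while ((task = in.readLine()) != null) {   // "key,value"
                            String[] kv = task.split(",", 2);
                            String result = kv[0] + "=" + kv[1].length(); // placeholder work
                            out.println(result);                   // result/progress report
                        }
                    }
                }
            }
        }
    }

    class LeaderNode {
        public static void main(String[] args) throws Exception {
            List<String> slaves = List.of("host1", "host2");       // known after election
            List<String> tasks = List.of("alpha,1", "beta,22", "gamma,333");
            ExecutorService pool = Executors.newFixedThreadPool(slaves.size());

            // Round-robin the tasks across slaves; one connection per slave.
            for (int i = 0; i < slaves.size(); i++) {
                final int slaveIndex = i;
                pool.submit(() -> {
                    try (Socket s = new Socket(slaves.get(slaveIndex), 9000);
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()))) {
                        for (int t = slaveIndex; t < tasks.size(); t += slaves.size()) {
                            out.println(tasks.get(t));             // dispatch task
                            System.out.println("slave " + slaveIndex + " -> " + in.readLine());
                        }
                    } catch (IOException e) {
                        e.printStackTrace();                       // would re-queue in a real system
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }

Failure handling (detecting a dead slave and re-queuing its tasks) is the part I am least sure about, so it is left out above.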

Multi-leader vs. single-leader replication performance

I’m trying to wrap my head around a hypothetical scenario.

Imagine we have a very write-heavy distributed database and sharding is not an option.

I wonder whether, and how, multi-leader replication (write-write) is more efficient than a single-leader setup (write-read), since write-write has the overhead of syncing the databases and propagating writes to the other master(s), so ultimately you end up with the same number of write operations.

In which cases is multi-leader replication considered more performant than single-leader for write-heavy applications, and in which cases is it not?

I understand the question is broad and nuanced, but I would be happy to read some thoughts on the subject.
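To spell out the reasoning behind my confusion, this is the back-of-envelope comparison I keep making (all numbers are made up):

    // Made-up numbers, just to state my own reasoning about write amplification.
    public class BackOfEnvelope {
        public static void main(String[] args) {
            long incomingWritesPerSec = 50_000;   // assumed client write load
            int nodes = 3;                        // every node eventually applies every write

            // Whether there is one leader or three, each node still applies all writes:
            long appliesPerNodePerSec = incomingWritesPerSec;
            System.out.println(nodes + " nodes, writes applied per node/sec: "
                    + appliesPerNodePerSec);

            // What does change is where the client-visible commit happens:
            double localCommitMs = 2;             // assumed commit on a nearby leader
            double crossRegionRttMs = 150;        // assumed WAN round trip to a remote leader
            System.out.println("single-leader, remote client latency ~ "
                    + (crossRegionRttMs + localCommitMs) + " ms");
            System.out.println("multi-leader, local client latency   ~ "
                    + localCommitMs + " ms (conflict handling deferred/async)");
        }
    }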

Service temporarily unavailable due to an ongoing leader election. Please refresh

I have three physical nodes with Docker installed on each of them. I configured Mesos, Flink, ZooKeeper, Marathon, and Hadoop in Docker containers on each node. Whenever I change a container, unfortunately, the Flink UI ends up with this error:

Service temporarily unavailable due to an ongoing leader election. Please refresh.

I have searched a lot but have not figured out what is wrong with the configuration.

Would you please guide me on how to solve this problem?

Thank you in advance.

Is there any way to elect a leader in my Kubernetes application using Node.js?

I am trying to implement a leader election algorithm in my distributed Node.js application. I am using Kubernetes as the container manager and deploying Docker containers. I do not have the host and port details of the other instances of the application running in different pods.

Kubernetes itself already uses a leader election algorithm to choose one leader at a time, but my application runs inside a container and pod, so it is not aware of whether it is the leader or not. There is also a sample project that shows how to find out which instance is the current leader:

https://github.com/kubernetes-retired/contrib/tree/master/election

This code helps to determine which container is the leader, but it requires special privileges to call the Kubernetes API.

The Bully or Ring algorithm could be a solution, but there is a restriction against using any queue or centralized service like Redis or ZooKeeper, since those require another process to run. I can, however, use a shared filesystem, because it does not require running and monitoring an extra process.

How can I implement this within the application itself, for example by pinging the other instances to check whether the leader is down and then promoting myself to master? A lot of race conditions can occur in this approach.

I have tried Kubernetes' own leader election to elect the leader, but due to restricted access to the API it is not possible to use.

I tried the Bully algorithm, but since the containers are in different pods they are not aware of each other.

I tried the Ring algorithm, but it has the same issue as the Bully algorithm.

I want to implement leader election using any algorithm, as long as it does not introduce any new processes.
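To make the shared-filesystem idea concrete, this is roughly the lease-style election I have in mind. It is sketched in Java-style code only to spell out the steps; in Node.js the atomic create would be fs.open with the 'wx' flag. The file path, lease timeout, and POD_NAME variable are assumptions:

    import java.io.IOException;
    import java.nio.file.*;
    import java.time.Instant;

    // Lease-style election on a shared filesystem: the leader "owns" a lease file
    // and refreshes its timestamp; anyone who finds the lease expired tries to
    // take over. File path, timeout, and env variable are assumptions.
    public class FileLeaseElection {
        static final Path LEASE = Path.of("/shared/leader.lease"); // assumed shared mount
        static final long LEASE_MS = 10_000;                       // assumed lease timeout
        static final String ME = System.getenv().getOrDefault("POD_NAME", "unknown-pod");

        static boolean tryBecomeLeader() {
            try {
                // Atomic create: fails if the file already exists (like fs.open(path, 'wx')).
                Files.write(LEASE, (ME + " " + Instant.now().toEpochMilli()).getBytes(),
                        StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE);
                return true;
            } catch (FileAlreadyExistsException e) {
                return false;                                       // someone else holds it
            } catch (IOException e) {
                return false;
            }
        }

        static void heartbeatOrSteal() throws IOException {
            String[] lease = new String(Files.readAllBytes(LEASE)).split(" ");
            long age = Instant.now().toEpochMilli() - Long.parseLong(lease[1]);
            if (lease[0].equals(ME) || age > LEASE_MS) {
                // Refresh my own lease, or take over an expired one.
                // NOTE: this takeover is not atomic; two followers can race here.
                Files.write(LEASE, (ME + " " + Instant.now().toEpochMilli()).getBytes(),
                        StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
            }
        }

        public static void main(String[] args) throws Exception {
            while (true) {
                if (!tryBecomeLeader()) heartbeatOrSteal();
                String[] lease = new String(Files.readAllBytes(LEASE)).split(" ");
                System.out.println(lease[0].equals(ME) ? "I am the leader" : "follower");
                Thread.sleep(LEASE_MS / 3);                         // heartbeat/election period
            }
        }
    }

The non-atomic takeover of an expired lease is exactly where the race conditions I mentioned show up, which is why I am asking whether there is a better way to do this without extra processes.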