Reduction between the independent set problem and the cluster (clique) problem

The independent set problem is: given a simple undirected graph, find a maximum set of vertices such that there is no edge between any two of them.

The cluster (clique) problem is: given a simple undirected graph, find a maximum set of vertices such that every two of them are adjacent (there is an edge between any two vertices).

How can I reduce the independent set problem to the cluster problem and vice versa?
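To check that I understand the definitions, here is a small Python sketch (using networkx; the example graph is made up) of the complement-graph connection I suspect is the key:

# A minimal sketch of the complement-graph idea; the graph is a made-up example.
import networkx as nx

G = nx.Graph([(1, 2), (2, 3), (3, 4)])   # simple undirected graph
H = nx.complement(G)                      # same vertices, edges flipped

# S is independent in G exactly when S is a clique (cluster) in H,
# because an edge is missing in G precisely when it is present in H.
S = {1, 3}
print(all(not G.has_edge(u, v) for u in S for v in S if u != v))  # True: independent in G
print(all(H.has_edge(u, v) for u in S for v in S if u != v))      # True: clique in H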

SQL Server Assembly on Failover cluster after Framework update

We have a database that relies on a CLR assembly, which in turn references other assemblies. After a Windows update to the .NET Framework, we need to drop and reinstall the assembly to get it working again.

The issue is that the database is part of a failover cluster. Yesterday the secondary server was updated, and with it came a Framework update. The assembly no longer works on the secondary, but we cannot reinstall it there because the secondary is read-only.

The primary server hasn't been updated, so there is no point in reinstalling the assembly yet.

Does anybody have experience dealing with this sort of issue? If so, is there a recommended way of updating assemblies?

We could fail over and then update the assembly. However, in that case the assembly would no longer match the Framework version on the other server. I guess we need to fail over, reinstall the assembly, and then update the other server to the same Framework version?
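For illustration, the kind of drop/reinstall step I mean looks roughly like the following (sketched with pyodbc; the server, assembly name, DLL path and permission set are all placeholders, and any dependent CLR functions would have to be dropped and recreated around it):

# Rough sketch of the manual drop/reinstall step. All names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql-primary;DATABASE=OurDb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Dependent CLR functions/procedures must be dropped first and recreated
# afterwards (omitted here). DROP ... IF EXISTS needs SQL Server 2016+.
cur.execute("DROP ASSEMBLY IF EXISTS OurClrAssembly;")
cur.execute(
    "CREATE ASSEMBLY OurClrAssembly "
    "FROM N'C:\\deploy\\OurClrAssembly.dll' "
    "WITH PERMISSION_SET = EXTERNAL_ACCESS;"
)
conn.close()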

Thanks!

MySQL InnoDB Cluster auto-rejoin failed

It's a 3-node cluster, single primary. Heavy read/write traffic was hitting the master node. I restarted the master node, and node 3 then became the master. After the restart, the old master stayed in the recovering state:

"recoveryStatusText": "Distributed recovery in progress",                  "role": "HA",                  "status": "RECOVERING"    select * from gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | NO               | YES       |                   0 |                 8401 | +------------------+-----------+---------------------+----------------------+ 

The transactions_to_cert count never decreased, even after 15 minutes.

Then I tried to reboot node 2, and it also went into recovery mode.

Finally I restarted node 3, and that finished things off.

The cluster now reports that there is no eligible primary, and I am not able to recover it.

ERROR LOG:

2020-06-03T15:24:19.735261Z 2 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-03T15:24:19.735271Z 2 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-03T15:24:19.735285Z 2 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1; member_uuid: "41add3fb-9abc-11ea-a59d-42010a00040b"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-03T15:24:19.748017Z 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:19.846752Z 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_applier.000002' position: 4
2020-06-03T15:24:19.846765Z 2 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-03T15:24:19.868161Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-03T15:24:19.868183Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 33061'
2020-06-03T15:24:21.722047Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address dev-mysql-03:3306.'
2020-06-03T15:24:21.722179Z 0 [ERROR] Plugin group_replication reported: 'Group contains 3 members which is greater than auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'
2020-06-03T15:24:21.722427Z 24 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2020-06-03T15:24:21.722550Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306, dev-mysql-02:3306 on view 15910200188085516:19.'
2020-06-03T15:24:21.803914Z 24 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dev-mysql-02', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:21.855802Z 24 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor bd472ec4-9abc-11ea-976d-42010a00040c at dev-mysql-02 port: 3306.'
2020-06-03T15:24:21.856155Z 26 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2020-06-03T15:24:21.862169Z 26 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'mysql_innodb_cluster_1@dev-mysql-02:3306', replication started in log 'FIRST' at position 4
2020-06-03T15:24:21.918855Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_recovery.000001' position: 4
2020-06-03T15:24:42.718769Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200603 15:24:42
2020-06-03T15:24:55.206155Z 41 [Note] Got packets out of order
2020-06-03T15:29:29.682585Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: dev-mysql-02:3306'
2020-06-03T15:29:29.682635Z 0 [Note] Plugin group_replication reported: 'The member with address dev-mysql-02:3306 has unexpectedly disappeared, killing the current group replication recovery connection'
2020-06-03T15:29:29.682729Z 27 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2020-06-03T15:29:29.682759Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306 on view 15910200188085516:20.'
2020-06-03T15:29:29.683116Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000009' at position 846668856
2020-06-03T15:29:29.689073Z 26 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2020-06-03T15:29:29.689089Z 26 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000009', position 846668856
2020-06-03T15:29:29.700329Z 24 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
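For reference, this is roughly the kind of recovery attempt I mean, run from MySQL Shell in Python mode (the admin account name is made up; I'm not sure these are the right calls for this state):

# MySQL Shell (mysqlsh) in Python mode; 'clusteradmin' is a made-up account.
# Connect to one of the members and inspect / try to reboot the cluster.
shell.connect('clusteradmin@dev-mysql-01:3306')

try:
    cluster = dba.get_cluster()
    print(cluster.status())
except Exception as e:
    print(e)  # e.g. no quorum / no eligible primary

# If the whole group is down, this is what I would expect to bring it back:
cluster = dba.reboot_cluster_from_complete_outage()
cluster.rejoin_instance('dev-mysql-02:3306')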

HPC cluster slows to a standstill; is it currently under attack?

I am part of a university with an HPC cluster, which has just slowed to an almost-standstill for no clear reason. The login nodes and the compute nodes both seem to be affected. I can connect and do basic things (cd, ls), but anything more just seems to hang. My internet connection is fine, and there is no scheduled maintenance.

Is this cluster under attack?

Is this a problem that needs to be solved urgently (as in “call people in out of hours”) to prevent some kind of damage?

RPI cluster performance related to network performance

I'm writing my thesis and have built a Raspberry Pi cluster of 10 nodes, each a Raspberry Pi 3 Model B. They are connected to two gigabit switches; I don't know the category of the cables. The nodes are not connected to the Internet, they just live in their own private network.

Furthermore, I've calculated the theoretical performance using the formula:

number of cores * average frequency * 16 FLOPs/cycle = x GFLOPS

(I got the formula from https://www.slothparadise.com/how-to-run-hpl-linpack-across-nodes/.)
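For concreteness, here is that formula written out in Python with the Pi 3B's nominal numbers (4 cores at 1.2 GHz per node; the 16 FLOPs/cycle figure is simply taken from that article and may not hold for the Cortex-A53):

# Plugging assumed Pi 3B numbers into the formula above.
# 4 cores and 1.2 GHz are the Pi 3B's published specs; 16 FLOPs/cycle
# is taken from the linked article and is an assumption.
cores_per_node = 4
frequency_ghz = 1.2
flops_per_cycle = 16
nodes = 10

per_node_gflops = cores_per_node * frequency_ghz * flops_per_cycle
cluster_gflops = nodes * per_node_gflops
print(f"{per_node_gflops:.1f} GFLOPS per node, {cluster_gflops:.1f} GFLOPS for {nodes} nodes")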

After I applied it, it turns out I should have 76.8 GFLOPS in theory, assuming the source is reliable and correct. When I benchmark with HPL 2.1, I've only reached about 7 GFLOPS at best, after trying out various parameter variations.

Now to my question: I've read up on the Raspberry Pi model, and it only has 10/100 Ethernet. Is that the source of my problem? I do get GFLOPS out of a single node, but the network transport is far slower than gigabit speed, which by my reasoning would mean 76.8 GFLOPS * 0.1 (100 Mbit instead of 1 Gbit) = 7.68 GFLOPS in theory. Or am I way off track in my thinking?

I'd really appreciate any help, so I know whether I have to keep working on the benchmark configuration or whether I can move on. Also, I'm sorry if I have posted in the wrong place.

Stay safe out there!

Destroy a Windows Server 2012 R2 failover cluster

I want to know how to delete a Windows Server failover cluster. My case is a bit special.

I encountered some problems while creating a failover cluster, so I stopped the cluster services, then did "evict nodes", then "destroy cluster".

The cluster then disappeared, and now I can no longer recreate a cluster because it tells me that the servers are already in a cluster. In this situation, how can I delete a cluster that is no longer visible in Server Manager?

It’s for SQL Server high availability.

Cluster 2D points into clusters of a particular group size?

Imagine I have a set of 2D points pts, and I want to partition them into groups by spatial adjacency, but limit the partition to particular group sizes. I thought NearestNeighborGraph[] could be a starting point, but the problem is that percolation soon leads to one big connected component:

pts = RandomReal[1, {1000, 2}];
NeighborsToCluster = 4;
NearestNeighborGraph[pts, NeighborsToCluster]

Gives:

[image: the resulting nearest-neighbor graph, which forms one large connected component]

How could I limit the clustering so that only n points belong to each group (while still using Euclidean distance as the basis)?

In fact I don't need an exact n (which would be impossible in many cases anyway); I would like some distribution of group sizes around n (say normal, with some arbitrary mean and standard deviation).
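To make clearer what I mean by a distribution of group sizes around n, here is a rough Python sketch (not Mathematica, sorry) of a greedy, size-targeted grouping; it ignores the overall quality of the partition and is only meant to illustrate the goal:

# Target group sizes drawn around n; groups grown greedily from a seed
# point by grabbing its nearest unassigned neighbours (Euclidean distance).
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))
n_mean, n_std = 5, 1.5

unassigned = set(range(len(pts)))
groups = []
while unassigned:
    target = max(2, int(round(rng.normal(n_mean, n_std))))  # group size ~ N(n_mean, n_std)
    seed = unassigned.pop()
    rest = np.array(sorted(unassigned))
    if len(rest) == 0:
        groups.append([seed])
        break
    # nearest unassigned neighbours of the seed point
    d = np.linalg.norm(pts[rest] - pts[seed], axis=1)
    members = [seed] + rest[np.argsort(d)[: target - 1]].tolist()
    groups.append(members)
    unassigned -= set(members)

print(len(groups), np.mean([len(g) for g in groups]))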

Any help towards this goal is appreciated!

Clustering algorithm with specific cluster grouping areas

I’m looking for a way to cluster points in a given space, where clusters form around specific closed, allowed zones of that initial space. Each allowed zone should be surrounded by points of its cluster.

However, it is not only necessary to match the points to the closest allowed zone, but also to take into consideration the distance to other points of the same cluster.

In the example image given, in the clustered picture, the green points at the top right belong to the green cluster despite being closest to the blue area, because their neighbours are all green.

Is there an algorithm available that can do this type of clustering? Otherwise, how would you proceed?

[image: the initial points with the allowed zones, and the resulting clusters]
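One direction I have considered, sketched below in Python with scikit-learn's LabelSpreading (the zone shapes are made-up circles), is to seed labels only near the allowed zones and let them propagate through neighbours; I don't know whether this is the right kind of algorithm for the problem:

# Seed each point inside an allowed zone with that zone's label, leave the
# rest unlabeled (-1), and let labels spread along nearest-neighbour links
# so that a point's final cluster also depends on its neighbours.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(1)
pts = rng.random((500, 2))

# hypothetical allowed zones: (center, radius)
zones = [((0.25, 0.25), 0.1), ((0.75, 0.7), 0.12)]

labels = np.full(len(pts), -1)
for k, (center, radius) in enumerate(zones):
    inside = np.linalg.norm(pts - np.array(center), axis=1) <= radius
    labels[inside] = k

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(pts, labels)
clusters = model.transduction_   # a zone label for every point
print(np.bincount(clusters))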

Is it a security issue that underlying infrastructure (e.g. a Kubernetes cluster) can easily be revealed?

I have recently found out that a very common Kubernetes setup, for certain kinds of access over TLS, returns an invalid certificate named Kubernetes Ingress Controller Fake Certificate, making it obvious to anyone that Kubernetes is in use.

So the question is not really about Kubernetes itself, but about whether disclosing this kind of information about the underlying infrastructure is considered undesirable, or whether it does not matter.

P.S. The Kubernetes details in brief:

A default installation of the NGINX ingress controller provides a "fallback" (called the default backend) that responds to any request it does not recognize. That sounds good, but the catch is that it has no real TLS configured, yet it still answers on port 443 and returns an (obviously) invalid certificate named Kubernetes Ingress Controller Fake Certificate.
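To make the question concrete, here is roughly how easily the information can be read off (a Python sketch; the hostname is a placeholder):

# Anyone can pull the certificate served on 443 and read its subject;
# "ingress.example.com" is a placeholder hostname.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("ingress.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())
print(cert.subject)   # CN reads "Kubernetes Ingress Controller Fake Certificate"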

Why is VACUUM FULL locking ALL databases on the cluster?

From everything I have read about VACUUM FULL, I would expect it to lock the database I'm running it in, but it renders every database in the cluster inaccessible. Is there perhaps something wrong in our configuration? I do have autovacuum on, but we have a few unwieldy databases I'd like to clean up thoroughly. This is PostgreSQL v10. Thanks for any insights.
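For what it's worth, this is roughly how I have been checking from another session which sessions are waiting while the VACUUM FULL runs (a psycopg2 sketch; connection details are placeholders):

# List waiting sessions and the VACUUM FULL itself across all databases.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres host=db-host")
cur = conn.cursor()
cur.execute("""
    SELECT datname, pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE wait_event_type IS NOT NULL OR query ILIKE 'VACUUM FULL%'
    ORDER BY datname;
""")
for row in cur.fetchall():
    print(row)
conn.close()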