Minimum Clique Cover – Mixed Integer Programming

I have a general (undirected) graph with a set of nodes, a set of edges, and a weight for each edge. I want to find a minimum clique cover of the graph, that is, a partition of the node set into the smallest number of cliques. Among such partitions, I also want to maximize the sum of the edge weights inside the cliques. I want to use an integer programming approach for this problem.

Can anyone give me some hints, or some references that use mixed-integer linear programming for the (maximum-weight) minimum clique partition problem?
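To make the question more concrete, here is the kind of node-to-clique assignment formulation I have been sketching; the variable names, the bound of at most $n$ cliques, and the single weighted objective are my own choices, so I may well be missing something:

$$
\begin{aligned}
\min \quad & \sum_{c=1}^{n} y_c \;-\; \lambda \sum_{\{i,j\} \in E} \sum_{c=1}^{n} w_{ij} z_{ijc} \\
\text{s.t.} \quad & \sum_{c=1}^{n} x_{ic} = 1 \quad \forall i \in V \\
& x_{ic} \le y_c \quad \forall i \in V,\ \forall c \\
& x_{ic} + x_{jc} \le 1 \quad \forall \{i,j\} \notin E,\ \forall c \\
& z_{ijc} \le x_{ic}, \quad z_{ijc} \le x_{jc} \quad \forall \{i,j\} \in E,\ \forall c \\
& x_{ic},\, y_c,\, z_{ijc} \in \{0,1\},
\end{aligned}
$$

where $x_{ic} = 1$ assigns node $i$ to clique $c$, $y_c = 1$ marks clique $c$ as used, $z_{ijc} = 1$ marks edge $\{i,j\}$ as lying inside clique $c$, and $\lambda > 0$ (chosen small enough that the clique count remains the primary objective) trades off the number of cliques against the covered edge weight. Is something along these lines reasonable, or are there stronger formulations?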

Thank you very much.

Forbidden Sequence Dynamic Programming

Given a finite set $\Omega$, I have the following problem. There is a list of forbidden subsequences $F \subset \bigcup_{i \ge 1} \Omega^i$. While we do not know the contents of the list beforehand, we can query any sequence $S \in \Omega^i$ to learn whether $\exists f \in F$ with $f \subseteq S$ (as a subsequence). I want to construct a sequence $S \in \Omega^n$ such that $f \not\subseteq S$ for all $f \in F$.

In fact, I want to construct all the sequences $S \in \Omega^n$ such that $f \not\subseteq S$ for all $f \in F$.

The approach I thought would be best is dynamic programming. We iteratively construct the valid sets $V_k := \{S \in \Omega^k : f \not\subseteq S,\ \forall f \in F\}$ by requiring that every proper subsequence $s \subsetneq S$ lies in $V_1 \cup \dots \cup V_{k-1}$ (which rules out every $f$ with $|f| < k$), and then removing the remaining candidates that belong to $F$ with queries. My question is: what is the most efficient way to construct $V_k$? One simple way would be to take $V_{k-1}$, try appending each element of $\Omega$ at the end, and then do some extra queries, but is there a better way?
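For reference, here is a sketch in C of the simple construction I just described, to make the question concrete; the flat-array representation and the query() oracle interface are stand-ins of my own, and only the length-$(k-1)$ subsequences are checked, since by induction that covers all shorter ones:

    /* Build V_k from V_{k-1}.  Symbols are the integers 0..m-1; sequences are
       stored back to back in flat int arrays.  query() stands in for the
       oracle: it returns 1 iff the given sequence contains some f in F. */
    #include <stdlib.h>
    #include <string.h>

    int query(const int *seq, int len);   /* hypothetical oracle */

    /* Is `seq` (of length `len`) one of the n sequences stored in V? */
    static int in_set(const int *V, int n, const int *seq, int len)
    {
        for (int i = 0; i < n; i++)
            if (memcmp(V + i * len, seq, len * sizeof(int)) == 0)
                return 1;
        return 0;
    }

    /* V_prev holds n_prev sequences of length k-1; V_out must have room for
       n_prev * m sequences of length k.  Returns the size of V_k. */
    int build_level(const int *V_prev, int n_prev, int k, int m, int *V_out)
    {
        int n_out = 0;
        int *cand = malloc(k * sizeof(int));
        int *sub  = malloc((k - 1) * sizeof(int));

        for (int i = 0; i < n_prev; i++) {
            for (int a = 0; a < m; a++) {
                /* candidate = i-th sequence of V_{k-1} with symbol a appended */
                memcpy(cand, V_prev + i * (k - 1), (k - 1) * sizeof(int));
                cand[k - 1] = a;

                /* every length-(k-1) subsequence (drop one position) must lie
                   in V_{k-1}; dropping the last position gives the parent
                   itself, so only positions 0..k-2 need checking */
                int ok = 1;
                for (int drop = 0; ok && drop < k - 1; drop++) {
                    for (int p = 0, q = 0; p < k; p++)
                        if (p != drop)
                            sub[q++] = cand[p];
                    if (!in_set(V_prev, n_prev, sub, k - 1))
                        ok = 0;
                }

                /* one oracle query on the surviving candidate itself */
                if (ok && !query(cand, k)) {
                    memcpy(V_out + n_out * k, cand, k * sizeof(int));
                    n_out++;
                }
            }
        }
        free(cand);
        free(sub);
        return n_out;
    }

The linear-scan membership test clearly dominates here; a hash set or trie over $V_{k-1}$ would speed up the lookups, but I am really asking whether there is a fundamentally better construction (fewer candidates or fewer queries), not just faster lookups.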

Additionally, are there elegant ways to use incomplete valid sets $I_k \subseteq V_k$? That is, if $I_{k+1} := \{S \in \Omega^{k+1} \setminus F : s \in I_1 \cup \dots \cup I_k,\ \forall s \subsetneq S\}$ turns out to be empty, can we retroactively expand the earlier sets without mostly starting from scratch?

Why do errors occur in programming?

I like to reduce the number of exceptions raised in my code, and I thought it might help to consider why these exceptions are raised in the first place, and whether they are really a fundamental part of code or just an artefact of not really finishing the program.

On a related note, I hear that OS programming requires avoiding errors where possible; perhaps there are some answers there.

Can a Man-in-the-Middle attack be prevented by programming when working with NFC?

I have done research on how to authenticate NFC tags. Seeing how you can use digital signatures, or a hidden key on newer NFC tags, it seems safe. However, none of that would prevent a man-in-the-middle attack in which a device reads and relays the commands an NFC reader/writer sends to an NFC tag, and uses this to corrupt the data that is sent to be written to the NFC tag (even if the data was originally sent encrypted, it could still be turned into fake data).

Convex quadratic approximation to binary linear programming

Munapo (2016, American Journal of Operations Research, http://dx.doi.org/10.4236/ajor.2016.61001) purports to have a proof that binary linear programming is solvable in polynomial time, and hence that P=NP.

Unsurprisingly, it does not really show this.

Its results are based on a convex quadratic approximation to the problem, with a penalty term whose weight $\ell$ needs to be infinitely large for the approximation to recover the true problem.
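For reference, my reading is that the construction is in the spirit of the classical penalty reformulation sketched below; this is my own notation and may not match the paper's exact form:

$$ \min_{x} \; c^{\top} x + \ell \sum_{i} x_i (1 - x_i) \quad \text{s.t.} \quad Ax \le b, \; 0 \le x \le 1, $$

where the penalty term is nonnegative on the box, vanishes exactly at binary points, and the reformulation recovers the original binary problem as $\ell$ grows.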

My questions are the following:

  1. Is this an approximation which already existed in the literature (I rather expect it did)?
  2. Is this approximation useful in practice? For example, could one solve a mixed integer linear programming problem by homotopy continuation, gradually increasing the weight $\ell$?

Note: After writing this question I discovered this related question: Time Complexity of Binary Linear Programming. The related question considers a specific binary linear programming problem, but mentions the paper above.

How to send packets at a 512 nanosecond delay using socket programming and a UDP socket

Using SOCK_DGRAM for UDP sockets

All packets are 22 bytes in length (i.e., 64 bytes including headers).

client.c

    ...
    no_of_packets--;
    sprintf(buf, "#:!0 rem");
    sprintf(buf, format, buf);        /* note: overlapping src/dst in sprintf is undefined behaviour */
    sprintf(buf_aux, "#: 0 rem");
    sprintf(buf_aux, format, buf_aux);
    buf[MAX_LINE - 1] = '\0';
    buf_aux[MAX_LINE - 1] = '\0';
    len = strlen(buf) + 1;

    send(s, buf, len, 0);             /* first packet */
    while (no_of_packets-- > 1) {
        nanosleep(&T, NULL);          /* T holds the requested inter-packet delay */
        send(s, buf, len, 0);
    }
    send(s, buf_aux, len, 0);         /* final packet */

server.c

    ...
    while (1) {
        if ((len = recv(s, buf, sizeof(buf), 0)) > 0) {
            /* do nothing; just drain the socket */
        }
    }

When I open Wireshark to check the average delay between the packets that are sent, I see the following:

  • MIN delay: 0.000006795 s (about 6.8 µs)

  • MAX delay: 0.000260952 s (about 261 µs)

But I want to send packets every 512 ns (i.e., 0.512 µs).

How can I achieve this speed?
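One idea I have been considering (I am not sure it is viable, and the kernel or NIC may still batch the packets) is to replace nanosleep() with a busy-wait on clock_gettime(), since nanosleep() typically cannot wake the process with sub-microsecond accuracy. A rough sketch:

    #include <time.h>

    /* Spin until delay_ns nanoseconds have elapsed on the monotonic clock. */
    static void spin_wait_ns(long long delay_ns)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000000000LL
                 + (now.tv_nsec - start.tv_nsec) < delay_ns);
    }

    /* in the client loop, instead of nanosleep(&T, NULL): */
    /* spin_wait_ns(512); */

Would something like this, perhaps combined with a real-time scheduling priority, get anywhere near 512 ns, or is the cost of the send() system call alone already larger than that?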

Introduction to the Python programming language

Overview

Python is a high-level, structured, open-source programming language that can be used for a wide variety of programming tasks.

Python was created by Guido van Rossum around 1990; he is a Dutch programmer best known as the author of the Python programming language. Its following has grown steadily, and interest has increased markedly in the last few years. It is named after the Monty Python's Flying Circus comedy program.

Python is used extensively for…
