Which files to scan for viruses (or how to reduce the cost of a full scan) on Linux (Debian)?

It may be irrelevant, but anyway: I'm using clamscan to check my system.

The problem is that a full system scan takes far more than a day. If you want to run one scan per day, that's not acceptable.
Additionally, the scan sometimes consumes a huge amount of CPU, degrading the performance of everything else running on the machine.

For me there are two possibilities to tackle this:

1. Don't scan the whole system.
2. Find a way to reduce the workload while still scanning the whole system (see the sketch at the end of this question).

The problem: I have no idea whether there is a set of directories, large enough to make a noticeable difference, that can safely be excluded, let alone how to scan the whole system more efficiently.

Are there any best practices to scan a system for viruses?
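For illustration, here is roughly what I have in mind for the second option combined with a few exclusions. It is only a sketch in Python; the specific excluded directories and the use of nice/ionice are my own assumptions, not an established best practice:

```python
# Sketch: run clamscan at low CPU/IO priority and skip virtual filesystems.
# The exclusion patterns below are my own guesses, not a vetted list.
import subprocess

EXCLUDES = ["^/proc", "^/sys", "^/dev", "^/run"]  # pseudo-filesystems, nothing on disk to scan

cmd = ["nice", "-n", "19", "ionice", "-c", "3",   # lowest CPU priority, idle IO class
       "clamscan", "--recursive", "--infected"]   # recurse, only report infected files
cmd += [f"--exclude-dir={pattern}" for pattern in EXCLUDES]
cmd.append("/")                                   # scan from the filesystem root

result = subprocess.run(cmd)
# clamscan exit codes: 0 = no virus found, 1 = virus(es) found, 2 = some error occurred
print("clamscan exit code:", result.returncode)
```

This lowers the impact on the rest of the system, but it does not shorten the scan itself by much, which is why I'm asking what can safely be skipped.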

Adding edges to a DAG to make it strongly connected with minimum cost

I have a weighted DAG and a function that computes the weight of any edge not present in the DAG. The weight from $u$ to $v$ equals the weight from $v$ to $u$.

I want to add edges so that the DAG becomes strongly connected, with minimum total weight of the added edges.

I know that the minimum number of added edges equals $\max(|\text{sources}|, |\text{sinks}|)$, but which vertex should be connected to which vertex so that the total weight is minimized?
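To make the setup concrete, here is a small sketch (my own illustration, not a solution attempt) that extracts the sources and sinks of a DAG and evaluates the $\max(|\text{sources}|, |\text{sinks}|)$ bound; the open part of my question is how to choose the actual pairs to connect so that the total weight of the added edges is minimized:

```python
# Sketch: find the sources and sinks of a DAG and the lower bound
# max(|sources|, |sinks|) on the number of edges needed for strong connectivity.
# It does not choose which edges to add, which is what my question is about.
def sources_and_sinks(n, edges):
    """n vertices labeled 0..n-1; edges is a list of (u, v) pairs of the DAG."""
    indeg = [0] * n
    outdeg = [0] * n
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    sources = [v for v in range(n) if indeg[v] == 0]
    sinks = [v for v in range(n) if outdeg[v] == 0]
    return sources, sinks

# Example DAG: 0 -> 1 -> 3, 2 -> 3
sources, sinks = sources_and_sinks(4, [(0, 1), (1, 3), (2, 3)])
print("sources:", sources, "sinks:", sinks)              # sources: [0, 2]  sinks: [3]
print("edges to add >=", max(len(sources), len(sinks)))  # edges to add >= 2
```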

About the logarithmic cost model and the bit complexity model?

So I heard that there are cost models for analyzing the time complexity of an algorithm called the logarithmic cost model and the bit complexity model. From the information I have found on the internet, I cannot figure out which operations are taken to be constant time; I only understood that arithmetic operations are not. My guess is that the constant-time operations are reading and writing individual bits, but it's just a guess. So which operations are taken to be constant time in the logarithmic cost model and in the bit complexity model? It may be that I am missing some details by assuming these models differ from others only in which operations are counted as constant time; if that is the case, I would be grateful for an explanation. It may also be that my question is not well posed, in which case I would be grateful for help improving it.
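To make the question more concrete, here is the kind of difference I have in mind (this is my own rough understanding and may be exactly the thing I am getting wrong): under the uniform cost model every arithmetic operation counts as one step, while under a logarithmic/bit cost model an operation is charged according to the bit lengths of its operands.

```python
# Rough illustration of my (possibly wrong) understanding:
# computing 2**n by repeated doubling performs n multiplications.
# Uniform cost model: n unit-cost steps.
# Charging each doubling by the bit length of its operand
# (one way I read the logarithmic/bit cost model): about n^2 / 2.
def doubling_costs(n):
    x = 1
    uniform_cost = 0   # one unit per arithmetic operation
    bit_cost = 0       # operation charged by the operand's bit length
    for _ in range(n):
        bit_cost += x.bit_length()
        x *= 2
        uniform_cost += 1
    return uniform_cost, bit_cost

print(doubling_costs(10))   # (10, 55): linear vs. roughly quadratic growth
```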
Thanks in advance.

Are there FPTASs for the min cost flow problem?

In the literature, one can find many approximation algorithms for the multicommodity min cost flow problem and for other variants of the standard single-commodity min cost flow problem. But are there FPTASs for the min cost flow problem itself?

Possibly there is no need for an FPTAS here, since an optimal solution can be computed very quickly (using double scaling or the enhanced capacity scaling algorithm, for example). But from a theoretical point of view, it would be interesting to know.
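For context, the exact computations I have in mind are of the following kind; a minimal sketch using networkx's off-the-shelf exact solver (not the scaling algorithms mentioned above), just to illustrate that exact optima are easy to obtain in practice:

```python
# Sketch: solving a tiny min cost flow instance exactly with networkx.
# This only illustrates the exact (non-approximate) side of the question.
import networkx as nx

G = nx.DiGraph()
# Node attribute 'demand': negative = supply, positive = demand.
G.add_node("s", demand=-4)
G.add_node("a", demand=0)
G.add_node("t", demand=4)
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "t", capacity=2, weight=4)
G.add_edge("a", "t", capacity=3, weight=1)

flow = nx.min_cost_flow(G)        # exact optimal flow as a dict of dicts
cost = nx.cost_of_flow(G, flow)   # its total cost
print(flow)   # {'s': {'a': 3, 't': 1}, 'a': {'t': 3}, 't': {}}
print(cost)   # 3*1 + 1*4 + 3*1 = 10
```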