Isn’t ZKP a reduction to a hard problem, rather than true zero knowledge?

Take, for example, “Hamiltonian cycle for a large graph”. The proof works by starting with a graph G that contains a Hamiltonian cycle, constructing an isomorphic graph H, and then either showing the mapping between G and H or revealing the cycle in H.
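
To make the structure concrete, here is a toy, single-round sketch in Java. This is entirely my own illustration (hard-coded 4-vertex graph, plain SHA-256 hashes standing in for a proper commitment scheme, one round instead of the many repetitions a real protocol needs), not a real implementation:

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.*;

    // Toy sketch of ONE round of the Hamiltonian-cycle protocol (illustration only).
    public class HamiltonianZkpRound {
        static final SecureRandom RNG = new SecureRandom();

        // Commit to a single bit with a random nonce (stand-in for a real commitment scheme).
        static String commit(int bit, byte[] nonce) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update((byte) bit);
            md.update(nonce);
            return Base64.getEncoder().encodeToString(md.digest());
        }

        public static void main(String[] args) throws Exception {
            // G: a small graph whose Hamiltonian cycle 0-1-2-3-0 is the prover's secret.
            int n = 4;
            int[][] g = {
                {0, 1, 1, 1},
                {1, 0, 1, 0},
                {1, 1, 0, 1},
                {1, 0, 1, 0}
            };
            int[] cycle = {0, 1, 2, 3};

            // Prover: pick a random permutation pi and build the isomorphic graph H = pi(G).
            List<Integer> perm = new ArrayList<>();
            for (int i = 0; i < n; i++) perm.add(i);
            Collections.shuffle(perm, RNG);
            int[] pi = perm.stream().mapToInt(Integer::intValue).toArray();
            int[][] h = new int[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    h[pi[i]][pi[j]] = g[i][j];

            // Prover: commit to every entry of H individually and send the commitments.
            String[][] com = new String[n][n];
            byte[][][] nonce = new byte[n][n][16];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) {
                    RNG.nextBytes(nonce[i][j]);
                    com[i][j] = commit(h[i][j], nonce[i][j]);
                }

            // Verifier: flip a coin and issue the challenge.
            int challenge = RNG.nextInt(2);
            boolean ok = true;

            if (challenge == 0) {
                // Prover reveals pi and all nonces; verifier checks that the commitments
                // open to exactly the permuted copy of the public graph G.
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        ok &= com[pi[i]][pi[j]].equals(commit(g[i][j], nonce[pi[i]][pi[j]]));
                System.out.println("challenge 0 (reveal the isomorphism): " + ok);
            } else {
                // Prover opens only the n cycle edges as they appear in H; verifier checks
                // each opened edge exists and that the opened edges touch every vertex.
                boolean[] visited = new boolean[n];
                for (int k = 0; k < n; k++) {
                    int a = pi[cycle[k]], b = pi[cycle[(k + 1) % n]];
                    ok &= com[a][b].equals(commit(1, nonce[a][b]));
                    visited[a] = true;
                }
                for (boolean v : visited) ok &= v;
                System.out.println("challenge 1 (reveal the cycle in H): " + ok);
            }
        }
    }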

It is said that we prove we know a Hamiltonian cycle in G without revealing it.

But this assumes the verifier does not have unlimited computational power. If he did, he could ask to have the cycle in H revealed and then use that unlimited power to work out the isomorphism himself. I understand that a verifier with unlimited power could find the cycle in G directly anyway, but that’s not my point. What I find strange is that we are relying on “hard problems” inside the proof itself.

Are there ZKP protocols that do not rely on hard problems? Hard problems are only hard according to the state of the art; it has not been proven that P ≠ NP. So, in my mind, this sounds like security through obscurity in some sense.

Should software engineers write code for people to read, rather than for a computer to read?

I have heard of:

“Programs must be written for people to read, and only incidentally for machines to execute.”

― Harold Abelson, Structure and Interpretation of Computer Programs

and Donald Knuth has said much the same.

But very often at work I read programs that go on for lines and lines without saying what they are trying to do. For example, I might be reading 17 lines of code, wondering what the programmer wanted, only to find out after a couple of minutes that it was just filtering some data into 2 sets and getting the intersection. The same thing could be done in 2 or 3 lines. Or, if there were a comment at the beginning of those 17 or 18 lines,

# put data into 2 sets and get the intersection 

then I wouldn’t have to follow it line by line to find out what it was trying to do.
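
To illustrate what I mean by 2 or 3 lines, a hypothetical condensed version in Java might look like this (the data and the two filter conditions are invented for the example):

    import java.util.*;
    import java.util.stream.Collectors;

    // Hypothetical condensed version: filter the data into 2 sets and get the intersection.
    class IntersectionExample {
        public static void main(String[] args) {
            List<String> data = List.of("apple", "banana", "cherry", "banana", "date");

            Set<String> longNames = data.stream().filter(s -> s.length() >= 5).collect(Collectors.toSet());
            Set<String> bNames = data.stream().filter(s -> s.startsWith("b")).collect(Collectors.toSet());

            Set<String> common = new HashSet<>(longNames);
            common.retainAll(bNames); // the intersection of the two sets
            System.out.println(common); // prints [banana]
        }
    }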

But at the same time, a coworker of mine might even say that he removed every single line of comments, because programmers should read code, not comments.

Should software engineers write code for people to read? I can think of only 2 arguments to the contrary:

  1. If you write code that is easy for humans to understand, the company can easily fire you. People who write code that is difficult to understand are people the company needs to keep.

  2. If other people can “read you like an open book”, they may think you are weak. They themselves may stay hard to understand in order to maintain their power.

Why do developers bother writing long naming schemes rather than using Unicode, Foreign Languages and Specialized Editors + Keyboards?

These tools can also greatly simplify the implementation of programming languages…

This is not completely unrelated to my last question.
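
For what it’s worth, Java itself already accepts many Unicode characters in identifiers, so the question is mostly about convention rather than capability; a toy illustration (the names here are invented):

    // Toy comparison: a Unicode identifier versus the conventional long descriptive name.
    class NamingDemo {
        public static void main(String[] args) {
            double σ = 2.5;                 // valid Java: Greek letters are legal in identifiers
            double standardDeviation = 2.5; // the usual long ASCII name
            System.out.println(σ == standardDeviation); // prints true
        }
    }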

How can I tell Ubuntu to display using my 1050 Ti rather than my 1080 Ti?

I built a Linux box with a GTX 1080 Ti, mainly for machine learning experiments, and it has been working fine for about a year.

I am now adding a 4K screen, so I will have a dual-screen setup. To leave the 1080 Ti unburdened by driving screens, I added a GTX 1050 Ti just for that purpose.

However, at the moment I can’t get Ubuntu to use the 1050 for the displays; the screens stay dark.

The driver seems to work fine, though; at least nvidia-smi lists both cards:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.40       Driver Version: 430.40       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:01:00.0  On |                  N/A |
|  0%   46C    P8    16W / 250W |    284MiB / 11169MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 105...  Off  | 00000000:02:00.0 Off |                  N/A |
| 29%   32C    P8    N/A /  75W |      2MiB /  4040MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1464      G   /usr/lib/xorg/Xorg                            18MiB |
|    0      1500      G   /usr/bin/gnome-shell                          49MiB |
|    0      1808      G   /usr/lib/xorg/Xorg                           108MiB |
|    0      1939      G   /usr/bin/gnome-shell                         100MiB |
|    0      4332      G   nvidia-settings                                4MiB |
+-----------------------------------------------------------------------------+
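
For reference, I assume the way to pin the X server to the 1050 Ti is an explicit BusID in /etc/X11/xorg.conf, something like the untested sketch below (the BusID corresponds to the 02:00.0 entry in the output above), but I may well be missing something:

    Section "Device"
        Identifier "GTX1050Ti"
        Driver     "nvidia"
        BusID      "PCI:2:0:0"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "GTX1050Ti"
    EndSection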

Any hints welcome!

iOS Settings Standards: Using Checkbox Rather Than Switch

If you are familiar with iOS devices, you know that the settings application uses the UISwitch control to show that a parameter is enabled or disabled. This control appears as a toggle switch and even has an animated switch motion that tracks the user’s finger as the switch slides from one side to the other.

On my iPad, I did notice one exception to this: a checkbox is used where I would normally expect a radio button.

The toggle switch takes up a significant amount of space on an iPhone, which I can put to better use.

My application provides a settings user interface inside the app itself (and only there), so I could depart from this practice of using the switch and replace it with a checkbox. (This would simply be a custom UIButton.) I am wondering whether there is a downside to doing this from a user-experience perspective.

How do I rank landing pages for my website in Google video SERPs rather than YouTube pages?

When I search for my keywords in Google Videos, the SERPs show the corresponding YouTube pages rather than my website’s landing page for that keyword. However, for other companies’ websites, the SERPs show their landing pages, not their YouTube pages. Why is that, and how can I get my landing pages to show?

Why does the TRACE level exist, and when should I use it rather than DEBUG?

In Log4J, Slf4J, and a couple of other Java logging frameworks, you have two “developer” levels for logging:

  • DEBUG
  • TRACE

I understand what DEBUG does, because the explanation is clear:

The DEBUG Level designates fine-grained informational events that are most useful to debug an application.

But the TRACE level is not very specific about its use case:

The TRACE Level designates finer-grained informational events than the DEBUG

(Source: the Log4J JavaDoc)

This does not tell me how or when to use TRACE. Interestingly, it is not a severity level defined in the syslog standard. Googling for the difference between TRACE and DEBUG only seems to return “use DEBUG, oh, and there is TRACE too”. I couldn’t find a specific use case for the TRACE level. The best I could find was this old wiki page debating the merits of the level’s existence.
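
The only concrete split I can imagine with Slf4J is something like the sketch below, but that is purely my own hypothetical convention, not anything the documentation prescribes:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    import java.util.List;

    // Hypothetical convention: DEBUG marks the milestones of an operation,
    // TRACE marks every step inside its inner loop.
    public class ImportJob {
        private static final Logger log = LoggerFactory.getLogger(ImportJob.class);

        void run(List<String> records) {
            log.debug("Starting import of {} records", records.size());
            for (String rec : records) {
                if (log.isTraceEnabled()) { // common idiom to avoid building arguments when TRACE is off
                    log.trace("Parsing record: {}", rec);
                }
                // ... parse and store the record ...
            }
            log.debug("Import finished");
        }
    }

But I have nothing authoritative to back that convention up.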

As an architect, I see a lot of red flags and questions here. If a young developer asked me to add TRACE to my architecture, I would bombard him with questions:

  • What are some examples of information that should be logged with TRACE and not with DEBUG?
  • What specific problem do I solve by logging that information?
  • In those examples, what properties of the logged information clearly discriminate between logging at the TRACE level and logging at the DEBUG level?
  • Why must that information go through the log infrastructure?
    • What are the benefits of persisting that information in a log journal rather than just using System.out.println?
    • Why is it better to use log for this rather than a debugger?
  • What would be a canonical example of logging at the TRACE level?
    • What are the specific gains that have been made by logging at the TRACE level instead of DEBUG in the example?
    • Why are those gains important?
    • In reverse: What problems did I avoid by logging it at TRACE instead of DEBUG?
    • How else could I solve those problems? Why is logging at the TRACE level better than those other solutions?
  • Should TRACE-level log statements be left in the production code? Why?

But given that it is present in most major frameworks, I am guessing it must be useful for something. So… what is TRACE for, and what distinguishes it from DEBUG?