Is it safe to assume that my computer’s clock will always be synced with actual time to within a second, or a few seconds at worst?

Years ago, I was running a service where the moderators were able to do various actions with massive privacy implications if the accounts or contributions in question were less than a certain age. I implemented this by checking the stored timestamp against the current Unix epoch time, allowing a window of X hours/days. Normally, this worked well.

One day, the server this was hosted on was “knocked offline” in the data centre where I was renting it, according to the hosting company. When it came back up, its clock had been reset to the factory default, which was many years in the past.

This resulted in all my moderators potentially being able to see every single account’s history and contributions in my service until I came back, noticed the wrong time (which I might not even have done!) and re-synced it. After that, I hardcoded a timestamp into the code that the current time had to exceed, or else the service would go into “offline mode”, to avoid any repeat of this disaster. I also set up some kind of automatic timekeeping mechanism (in FreeBSD).

You’d think that by now, every single computer would be auto-synced by default, with enough fallback mechanisms that its clock could never drift from “actual time” by more than a second, if not far less. You’d also think it would be impossible, or at least extremely difficult, to set the clock to anything but the current actual time, even if you went out of your way to do it.

I can’t remember my Windows computer ever having been out of sync in the last “many years”. However, I do important logging of events in a system running on it. Should I just assume that the OS can keep the time at all times? Or should I use some kind of time-syncing service myself? Like some free HTTPS API, where I make a lookup every minute and force the system clock to whatever it reports? Or should I just leave it be and assume that this is “taken care of”/solved?
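
If you do want a belt-and-braces check rather than trusting the OS, one option is to periodically compare the system clock against an NTP server and only alarm (or refuse time-sensitive operations) when the offset exceeds some threshold. A minimal sketch in Python, assuming the third-party ntplib package and the public pool.ntp.org servers are acceptable:

```python
import ntplib  # third-party package: pip install ntplib

MAX_OFFSET_SECONDS = 5.0  # tolerance before we stop trusting the local clock

def clock_offset_seconds(server: str = "pool.ntp.org") -> float:
    """Return the estimated offset between the local clock and the NTP server."""
    response = ntplib.NTPClient().request(server, version=3, timeout=5)
    return response.offset  # positive means the local clock is behind

if __name__ == "__main__":
    try:
        offset = clock_offset_seconds()
    except Exception as exc:  # network down, NTP blocked, etc.
        print(f"Could not reach NTP server: {exc}; consider entering 'offline mode'")
    else:
        if abs(offset) > MAX_OFFSET_SECONDS:
            print(f"Clock is off by {offset:.1f}s; refusing time-sensitive actions")
        else:
            print(f"Clock looks sane (offset {offset:.3f}s)")
```

Note that this is a detector, not a fix: actually stepping the clock is best left to the OS’s own time service (w32time on Windows, ntpd/chrony elsewhere).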

Basing average-case hardness on worst-case hardness

I am trying to understand the current state of knowledge about questions of the form: “If a language $L$ is difficult in the worst case, then there exists a distribution $D$ such that it is difficult to solve $L$ on instances drawn from $D$.”

  1. Random self-reducibility captures this concept: it asks whether it is possible to reduce solving an arbitrary instance $x \in L$ to solving a set of random instances. In AGGM it is shown that unless the polynomial-time hierarchy collapses, there is no $L \in NPC$ which is randomly self-reducible. This means it is unlikely that we can prove that some $L$ is hard on average assuming only that $L$ is hard in the worst case.
  2. For circuits, it seems that the situation is a bit different: Impagliazzo’s hardcore lemma (informally) states that if $L$ cannot be solved on more than a $1-\delta$ fraction of the inputs of size $n$ by any circuit of size $S$, then there exists a distribution on which no circuit of roughly the same size can solve more than a $1/2 + \epsilon$ fraction of the inputs (a formal statement is sketched below). Thus, if we had a worst-case hard language $L$, we could build a “difficult distribution”.
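
For reference, here is one common way the lemma is stated; the density bound and the exact size loss in $S'$ differ between versions, so treat the parameters as indicative rather than canonical:

```latex
% One common formulation of Impagliazzo's hardcore lemma (parameters indicative).
\textbf{Hardcore Lemma (Impagliazzo).}
Let $\delta, \epsilon > 0$ and let $f : \{0,1\}^n \to \{0,1\}$ be such that
every circuit of size $S$ errs on at least a $\delta$ fraction of inputs:
\[
  \forall C \text{ of size } S:\quad
  \Pr_{x \sim \{0,1\}^n}\bigl[C(x) = f(x)\bigr] \le 1 - \delta .
\]
Then there exists a ``hardcore'' set $H \subseteq \{0,1\}^n$ with
$|H| \ge \delta 2^n$ such that for every circuit $C'$ of some size $S'$
polynomially related to $S$ (the precise loss depends on the proof),
\[
  \Pr_{x \sim H}\bigl[C'(x) = f(x)\bigr] \le \tfrac{1}{2} + \epsilon .
\]
The uniform distribution on $H$ is the ``difficult distribution'' referred to above.
```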

First, I don’t understand how these results fit together: unless the polynomial hierarchy collapses, $SAT \notin P/poly$, which means that every polynomial-size circuit family $C_n$ can solve at most a $1-\delta$ fraction of the inputs, so by Impagliazzo’s hardcore lemma it should be possible to construct a distribution over $SAT$ instances which is hard for all polynomial-size circuits – assuming only a worst-case assumption.

Second, are there known results about hardcore sets for uniform computation (i.e. Turing machines)?

[Rock and Pop] Open Question: Who are the best and worst bass guitarists, in your opinion?

Best:

  * Flea (Red Hot Chili Peppers)
  * Geddy Lee (Rush)
  * John Entwistle (The Who)
  * Jack Bruce
  * Les Claypool
  * Tony Levin
  * Cliff Burton (Metallica)
  * John Deacon (Queen)
  * Dee Murray (Elton John’s band)
  * Tiran Porter (Doobie Brothers)
  * John Paul Jones (Led Zeppelin)
  * Stu Cook (CCR)
  * Duff McKagan (Guns N Roses)
  * Krist…whatever his name is (Nirvana)

Worst (All root notes. Who cares if they “fit the song?”):

  * Cliff Williams (AC/DC)
  * Adam Clayton (U2)
  * Nate Mendal (Foo Fighters)
  * Dusty Hill (ZZTOP)
  * Michael Anthony (Van Halen)
  * Gene Simmons (Kiss)
  * Sir Paul McCartney
  * Roger Waters
  * The Coldplay bassist
  * Howie Epstein (Tom Petty’s band)

None of the above are qualified to be bass players, unless they at least play some fills.

What’s the worst security issue that can happen from using eval() in an Android WebView?

I’ve come across a hybrid Android app – meaning most of its UI is implemented in a WebView using HTML and JavaScript technologies. The app connects to its server, and one of the possible responses can include an evaluate field, whose contents are then executed directly via JavaScript’s eval() function.

Is this a security issue? What kinds of attacks could an attacker carry out via this vector?

What is the worst I can do if I know the OpenID Connect client secret?

This plugin for kubectl helps set up OpenID Connect authentication for clients connecting to a Kubernetes cluster.

In order for this to work for me when authenticating against a cluster, the administrator has to hand me the oidc-issuer-url, oidc-client-id and oidc-client-secret so that the tool can do its job of authenticating me with OpenID Connect.

The administrator will also have to hand out the same data to all other users of the cluster.

If this secret is leaked, what’s the worst an attacker can do? If I understand this correctly, they should not be able to access the cluster, as the secret only allows them to authenticate as their own identity, which normally would not have any permissions. Anything else?
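
To make that reasoning concrete: in the standard authorization code flow, the client secret is only one of the inputs to the token endpoint; without a fresh authorization code (which requires a user actually logging in at the issuer), the secret by itself does not yield an ID token. A minimal sketch of the token exchange in Python, using the requests library and purely illustrative placeholder values:

```python
import requests

# All values below are illustrative placeholders, not real endpoints or secrets.
ISSUER_TOKEN_ENDPOINT = "https://issuer.example.com/oauth2/token"
CLIENT_ID = "kubernetes-cluster"
CLIENT_SECRET = "the-leaked-secret"

def exchange_code_for_tokens(auth_code: str, redirect_uri: str) -> dict:
    """Exchange an authorization code for tokens (standard OAuth2/OIDC flow).

    The client secret authenticates the *client application*; the
    authorization code proves that some *user* actually logged in at the
    issuer. Knowing only CLIENT_SECRET provides neither a code nor a user.
    """
    resp = requests.post(
        ISSUER_TOKEN_ENDPOINT,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": redirect_uri,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains id_token, access_token, etc.
```

The secret does let an attacker impersonate the client application itself (for example, to run a more convincing phishing flow, or to use other grant types if the issuer has enabled them), which is one reason such secrets are still worth protecting even for effectively-public clients.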

Analysing worst-case time complexity of quicksort for various pivot choices

I am trying to understand the worst-case time complexity of quicksort for various pivot choices. Here is what I came across:

  1. When the array is already sorted in either ascending or descending order and we select either the leftmost or rightmost element as pivot, this results in worst-case $O(n^2)$ time complexity (the recurrence behind this is sketched after the list).

  2. When the array is not already sorted and we select a random element as pivot, the worst-case “expected” time complexity is $O(n \log n)$, but the worst-case time complexity is still $O(n^2)$. [1]

  3. When we select the median of [2] the first, last and middle elements as pivot, it results in a worst-case time complexity of $O(n \log n)$. [1]
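
To make item 1 concrete: with the array already sorted and the leftmost (or rightmost) element as pivot, each partition splits off an empty side and a side of size $n-1$, giving the standard recurrence (this is a textbook derivation, not something specific to the linked articles):

```latex
T(n) = T(n-1) + \Theta(n), \qquad T(1) = \Theta(1)
\quad\Longrightarrow\quad
T(n) = \sum_{k=2}^{n} \Theta(k) = \Theta(n^2).
```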

I have the following doubts:

D1. Link 2 says that if all elements in the array are the same, then both the random pivot and the median pivot lead to $O(n^2)$ time complexity. However, link 1 says the median pivot yields $O(n \log n)$ worst-case time complexity. Which is correct?

D2. How can the median of the first, last and middle elements be the median of all elements?

D3. What do we do when the random pivot is the $i$-th element? Do we always have to swap it with either the leftmost or rightmost element before partitioning? Or is there an algorithm which does not require such a swap? (The common swap-based approach is sketched below.)
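
Regarding D3: with a Lomuto-style partition the usual approach is indeed to swap the chosen pivot (random or median-of-three) to one end first, simply so the partition loop has a fixed place to read it from; a Hoare-style partition can instead work with the pivot value alone. A minimal Python sketch of the swap-based version, written for illustration rather than performance:

```python
import random

def partition(a, lo, hi):
    """Lomuto partition: a[hi] is the pivot; returns its final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def choose_pivot_index(a, lo, hi, strategy="random"):
    """Pick a pivot index: uniformly at random, or median of first/middle/last."""
    if strategy == "random":
        return random.randint(lo, hi)
    mid = (lo + hi) // 2
    # median-of-three: order the three candidate indices by their values
    candidates = sorted([lo, mid, hi], key=lambda idx: a[idx])
    return candidates[1]

def quicksort(a, lo=0, hi=None, strategy="random"):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = choose_pivot_index(a, lo, hi, strategy)
        a[p], a[hi] = a[hi], a[p]   # swap the pivot to the end (the step D3 asks about)
        split = partition(a, lo, hi)
        quicksort(a, lo, split - 1, strategy)
        quicksort(a, split + 1, hi, strategy)

data = [5, 3, 8, 3, 9, 1]
quicksort(data, strategy="median3")
print(data)  # [1, 3, 3, 5, 8, 9]
```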

Worst Site Speed – Woocommerce

Hi Guys,

I'm currently running an ecommerce website and my site's speed is only 3.6 based on GTmetrix. There are a couple of opportunities it flags that need fixing:

1. Add Expires headers
2. Make fewer HTTP requests

I have already installed a CDN (MaxCDN), WP Rocket and Imagify. My target is a 1-2 second loading time, as I'm currently running a PPC campaign, and I'm pretty sure the slow speed will affect it.

I hope everyone can share their thoughts and experience.

Thank you!

Worst Case running time of the Minimum Vertex Cover Approximation algorithm

Consider this factor-$2$ minimum vertex cover approximation algorithm:

Repeat while there is an edge:

Arbitrarily pick an uncovered edge $e=(u,v)$ and add $u$ and $v$ to the solution. Delete $u$ and $v$ (and all edges incident to them) from the graph. Finally, output the candidate cover.
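
A minimal Python sketch of this greedy, matching-based 2-approximation, assuming an adjacency-set representation (so deleting a vertex means discarding it from its neighbours' sets):

```python
def vertex_cover_2approx(adj):
    """Greedy 2-approximation for minimum vertex cover.

    adj: dict mapping each vertex to a set of its neighbours
         (modified in place by this sketch).
    Returns a set of vertices covering every edge.
    """
    cover = set()
    # Repeat while there is an edge
    while True:
        # find any uncovered edge (u, v)
        edge = next(((u, v) for u, nbrs in adj.items() for v in nbrs), None)
        if edge is None:
            break
        u, v = edge
        cover.update((u, v))
        # delete u and v from the graph: remove them from all neighbour sets
        for w in adj[u]:
            adj[w].discard(u)
        for w in adj[v]:
            adj[w].discard(v)
        adj[u].clear()
        adj[v].clear()
    return cover

# toy usage: a triangle plus a pendant edge
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(vertex_cover_2approx(graph))
```

Note that this sketch re-scans for an uncovered edge on every iteration, so its total cost depends as much on how edges are found and deleted as on the number of loop iterations, which is exactly the part the running-time question below concerns.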

I want to find the worst-case running time of this algorithm. Since a fully connected graph has $O(n^2)$ edges, the loop will run at most $O(n^2)$ times, I guess. I am not sure what the maximal number of delete operations could be, or whether, for fewer than $O(n^2)$ edges, there is some scenario with a large number of delete operations.

Any insights appreciated.