I was living in a country where the Internet wasn’t that fast, so my younger self had the following idea for compressing data.
Let’s say two parties want to send data to each other. We know that any data, no matter its size, can be represented as a single giant number.
Now suppose the two parties share a perfectly synchronized and very fast clock, I mean a clock that can count to billions of billions (the equivalent of gigabits of data) in a short amount of time.
For party 1 to send data to party 2, it only needs to send two messages: “start counting” and “stop counting”.
So any data can be sent using only 2 bits, lol.
How hard is it to design that clock?
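One way to see the catch (this analysis and all the numbers in it are mine, not from the post): encoding a value as a tick count is unary encoding in time, so the worst-case transmission time grows exponentially with the number of bits, regardless of how fast the clock is. A quick sketch, assuming a hypothetical exahertz clock:

```python
# Back-of-envelope check of the "counting clock" scheme. The tick rate is
# a made-up, wildly optimistic assumption; the point is the exponential.

TICKS_PER_SECOND = 10**18  # assumed exahertz clock (not a real device)

def seconds_to_send(n_bits: int) -> float:
    """Worst-case time to count out an n-bit value, tick by tick."""
    return 2**n_bits / TICKS_PER_SECOND

for n in (32, 64, 128):
    print(f"{n:3d} bits -> {seconds_to_send(n):.3e} s")
```

Even at 10^18 ticks per second, 64 bits already takes about 18 seconds, and 128 bits takes around 10^20 seconds, far longer than the age of the universe. The “2 bits” only delimit the count; the information is carried by the counting time itself.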
What are the advantages and disadvantages when it comes to the clock replacement algorithm?
Actually, it differs from CPU to CPU, but it is possible to assume a mainstream CPU technology used in moderate servers or home computers.
How many clock cycles does it take to read a 50 KB file from an Ethernet socket and then resend it to another IP address without saving the file to disk? I assume this operation can be done entirely in RAM. If not, how many clock cycles are required if a copy of this 50 KB file also has to be stored on disk (7200 RPM)?
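A rough way to frame an answer (every number below is an assumption of mine, not a measurement: a 3 GHz core, ~1 cycle per byte copied, ~2,000 cycles per syscall, 4 KB buffers):

```python
# Assumption-laden estimate of the CPU cost of forwarding a 50 KB file
# from one socket to another. All constants are guesses for illustration.

CPU_HZ = 3_000_000_000          # assumed 3 GHz core
FILE_BYTES = 50 * 1024
CYCLES_PER_BYTE_COPY = 1        # assumed cost of copying one byte
SYSCALL_CYCLES = 2_000          # assumed cost of one recv/send syscall
CHUNK = 4096                    # assumed read/write buffer size

chunks = -(-FILE_BYTES // CHUNK)                # ceiling division
syscalls = 2 * chunks                           # one recv + one send per chunk
cycles = FILE_BYTES * 2 * CYCLES_PER_BYTE_COPY + syscalls * SYSCALL_CYCLES
print(f"~{cycles:,} cycles, ~{cycles / CPU_HZ * 1e6:.0f} µs of CPU time")
```

Under these assumptions the copy is on the order of 10^5 cycles (tens of microseconds). For the disk variant, the mechanics dominate: at 7200 RPM, the average rotational latency alone is about 4.2 ms, i.e. roughly 12 million cycles of waiting at 3 GHz, so the CPU cycle count is almost irrelevant next to the seek/rotation time.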
Years ago, I was running a service where moderators were able to take various actions with massive privacy implications, but only on accounts or contributions less than a short period of time old. I implemented this by checking the stored timestamp against the current Unix epoch, allowing X hours/days. Normally, this worked well.
One day, the server this was hosted on was “knocked offline” in the data centre where I was renting it, according to the hosting company. When it came back up, its clock had been reset to the factory default, which was many years in the past.
This meant that all my moderators could potentially see every single account’s history and contributions in my service until I came back, noticed the wrong time (which I might not even have done!), and re-synced it. After that, I hardcoded a timestamp into the code that the current time had to exceed, or else the service would trigger “offline mode”, to avoid any potential disasters like this in the future. I also set up some kind of automatic timekeeping mechanism (in FreeBSD).
You’d think that by now, not only would every single computer be auto-synced by default, with tons of fallback mechanisms so that the clock is never, ever out of sync with “actual time” (at least down to the second, if not more accurately); it would also be impossible, or extremely difficult, to set the clock to anything but the current actual time, even if you went out of your way to do it.
I can’t remember my Windows computer ever having had the wrong time in the last many years. However, I do important logging of events in a system running on it. Should I just assume that the OS can keep the time at all times? Or should I use some kind of time-syncing service myself, like some free HTTPS API that I query every minute, forcing the system clock to whatever it reports? Or should I just leave it be and assume that this is “taken care of”/solved?
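If you do want an independent cross-check rather than trusting the OS, you don’t necessarily need an HTTPS API: NTP’s simplified form (SNTP) is small enough to query directly. A minimal sketch (no error handling or retries; `pool.ntp.org` is the public NTP server pool):

```python
# Minimal SNTP client sketch: build a client request, read the server's
# transmit timestamp from the reply, convert to a Unix timestamp.
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def build_sntp_request() -> bytes:
    # First byte: LI=0, VN=3, Mode=3 (client); remaining 47 bytes zero.
    return b"\x1b" + 47 * b"\x00"

def parse_sntp_response(packet: bytes) -> float:
    # Transmit timestamp lives at bytes 40..47: 32-bit seconds + fraction.
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def query_ntp(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (server, 123))
        packet, _ = s.recvfrom(64)
    return parse_sntp_response(packet)
```

A monitoring script could compare `query_ntp()` against `time.time()` and alert (or step the clock) only when the drift exceeds some threshold, which is gentler than blindly forcing the clock every minute.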
A very long time ago, Dallas Semiconductor released the Java-Powered iButton:
These devices were somewhat similar in purpose to modern Java Card compliant smart cards, except for one detail: they had a built-in primary-cell battery and a secure real-time clock (RTC). The battery was estimated to be good for up to a decade.
Sadly, it seems that the Java-Powered iButton didn’t get market traction. However, a programmable token with an integrated primary battery and a secure real-time clock is extremely useful, since it enables the following features:
- Active countermeasures that instantly zeroize all contained secrets if the secure element is physically tampered with or the battery is disconnected
- Preventing operations after a certain date
- Preventing operations before a certain date
If there is a need to have a security token with an integrated real-time clock, are there any modern solutions that don’t require custom hardware engineering? Is there a modern equivalent to the “Java-Powered iButton”?
A player’s character is in possession of a bomb wired up to a clock. If that character casts Time Stop, would the bomb’s timer keep running while Time Stop is in effect, or would it stop? Or does the clock only stop once the bomb is no longer in the character’s possession?
I am trying to determine how Ubuntu 18.04 sets the system clock on a computer that has a broken RTC, no access to an NTP server, and systemd-timesyncd disabled. Upon boot, the time is always 2018-01-28 10:58:48 EST. This appears very similar to
Prevent clock from advancing to a system time after Ubuntu Server build time
where the time is reported as 2018-01-28 15:58. The only advice that asker got was to turn off timesyncd, which I already have disabled and which also didn’t solve his problem.
Normally, the application starts, gets a GPS signal, sets the clock, and starts running. But it doesn’t really need GPS to run. What it does need is for the clock to never go backwards in time. I thought I might be able to fix that if I knew how Ubuntu decides to set the time to 2018-01-28 10:58:48 EST.
One thing I did try was to enable systemd-timesyncd. While the computer isn’t normally connected to the Internet, I may connect it as a maintenance procedure. Then I get the correct time, and it touch(es) a file at /var/lib/private/systemd/timesync/clock. If I disconnect from the Internet and manually touch the file, the next boot will use that time. But even that approach, while better, can still set the clock backwards, since it effectively remembers the last time the computer was connected to the Internet.
That aside, it seems a mystery that Ubuntu would use the same time at every boot when it can’t determine a time, and that this time isn’t something like Jan 1 of some year. If I knew what Linux was doing, I might be able to craft a solution. So far, apart from the question above, all I find is a lot of “how to use NTP”, “using NTP is a good idea”, etc. I would if I could, but there is no Internet except in maintenance mode.
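The clock-file behaviour observed above can be replicated without systemd-timesyncd: keep your own state file, touch it periodically (and on clean shutdown), and at boot treat its mtime as a floor below which the clock is known to be wrong. A sketch, with a file path of my own invention; actually stepping the clock needs root (e.g. via `date -s`), so only the detection side is shown:

```python
# Sketch of a "clock never goes backwards" floor. A cron job or shutdown
# hook calls record_time(); at boot, clock_went_backwards() tells you the
# system time is before the newest time this machine has ever seen.
import os
import time

STATE_FILE = "/var/lib/myapp/last-known-time"  # hypothetical path

def record_time(path: str = STATE_FILE) -> None:
    """Stamp 'now' as the latest time this machine has ever seen."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a"):
        pass                   # create the file if it doesn't exist
    os.utime(path, None)       # set its mtime to the current time

def clock_floor(path: str = STATE_FILE) -> float:
    """The earliest believable value for the current time."""
    try:
        return os.path.getmtime(path)
    except FileNotFoundError:
        return 0.0             # no history yet: anything is believable

def clock_went_backwards(path: str = STATE_FILE) -> bool:
    return time.time() < clock_floor(path)
```

Unlike the timesyncd clock file, which is only refreshed on successful sync, touching this file on a timer means the floor keeps advancing even while offline, so a reboot can never jump further back than the last run.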
I’ve newly installed Ubuntu 18.04.3 on two hosts, desktop on one, server on another. The timezone on the server resets to UTC on each boot.
# cat /etc/timezone
America/Los_Angeles
# ls -la /etc/localtime
lrwxrwxrwx 1 root root 39 Sep 2 22:47 /etc/localtime -> /usr/share/zoneinfo/America/Los_Angeles
I’ve run “dpkg-reconfigure tzdata” several times. I’ve also tried “timedatectl set-timezone America/Los_Angeles”.
# timedatectl
                Local time: Mon 2019-09-02 23:00:54 America
            Universal time: Mon 2019-09-02 23:00:54 UTC
                  RTC time: Mon 2019-09-02 16:00:44
                 Time zone: America/Los_Angeles (America, +0000)
 System clock synchronized: no
systemd-timesyncd.service active: yes
           RTC in local TZ: yes
Why can’t I get the local time to show PDT like on the desktop? I do have ntp installed and configured.
I am doing an assignment on designing a very simple CPS, and in one of the problems I was asked to design an event-triggered combinational component. In this context, the system compares inputs during rounds in which a clock is present.
I can’t work out what “the clock being present” exactly means. Can anyone give me a clue?
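One common reading (my interpretation, not something stated in the assignment) is that the clock acts as an enable: the combinational comparison is only evaluated in rounds where a clock event arrives, and in clock-less rounds the output simply holds. A toy simulation of that interpretation:

```python
# Toy model: a comparator that is combinational in its inputs, but gated
# by a per-round "clock present" flag. Rounds without a clock hold the
# previous output instead of re-evaluating.

def compare_round(a: int, b: int, clock_present: bool, last):
    """Return a == b when the clock is present; otherwise hold `last`."""
    if clock_present:
        return a == b
    return last  # no clock this round: output unchanged

# Inputs arrive every round, but the clock is present only in some rounds.
out = None
trace = []
for a, b, clk in [(1, 1, True), (2, 3, False), (2, 3, True), (4, 4, False)]:
    out = compare_round(a, b, clk, out)
    trace.append(out)
print(trace)  # -> [True, True, False, False]
```

Note how rounds 2 and 4 ignore their (changed) inputs entirely: the component only “sees” the world when the clock is present, which is the essence of event-triggered evaluation.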
What happens to the log backups if I set the SQL Server clock back? In countries with daylight saving time this happens at least once a year. Will a point-in-time restore with the STOPAT option fail?
Using fn_dump_dblog, I see this after setting the clock back two hours.