I’m implementing a sync algorithm where multiple apps sync files with a data source. Syncing has been working for several years, so there’s no issue with that part.
Now I want to implement a way to lock the data source, to tell clients that they shouldn’t write to it anymore. This will be used to upgrade the data source – e.g. changing its structure, moving folders around, etc. – which needs to be done when nothing else is syncing.
So I came up with the following algorithm, inspired by the SQLite 3 locking mechanism but adapted to the fact that it’s network-based.
There are three types of locks, and a client requests a lock by POSTing a file to the data source. The locks are:
- SYNCING: The client is syncing – any other client can still read or write to the data source. There can be multiple SYNCING locks.
- PENDING: The client wants to acquire an exclusive lock on the data source – any other clients can still read or write to the data source, but no new SYNCING lock can be posted. There can be multiple PENDING locks.
- EXCLUSIVE: The client has locked the data source – no other client can read or write to it. There can be only one EXCLUSIVE lock.
And it would work like so:
- When a client starts syncing with the data source, it acquires a SYNCING lock. When it finishes syncing, it releases the SYNCING lock it created.
- When a client needs to lock the data source, it first posts a PENDING lock. While a PENDING lock is present, no new SYNCING or PENDING locks can be posted, but clients that are already syncing can complete the process. The client that acquired the PENDING lock then polls the data source and waits for all SYNCING locks to be released. Once they are all gone, the client checks the PENDING locks – if there are others and its own lock is not the oldest (by timestamp), it deletes its own lock and exits. Locking has failed, and it will need to try again later.
- If its PENDING lock is the oldest, the client posts an EXCLUSIVE lock. At this point, no other client can post any lock.
I’m wondering if I’ve overlooked something with this system – for example, could there be race conditions in some cases?
For now, I’m not dealing with clients that post a lock and then crash; there will be some cleanup logic for that. At this point, I just want to make sure that this system will only allow one client to acquire an EXCLUSIVE lock. Any ideas?
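To make the flow concrete, here is a minimal, single-process sketch of the acquisition logic described above. The `LockStore` class and its methods are stand-ins invented for illustration – in reality each lock would be a file POSTed to the data source over HTTP, with a server-assigned timestamp:

```python
import time

class LockStore:
    """In-memory stand-in for the remote data source (illustration only)."""
    def __init__(self):
        self.locks = []  # list of (lock_type, client_id, timestamp)

    def post(self, lock_type, client_id):
        self.locks.append((lock_type, client_id, time.monotonic()))

    def list(self, lock_type):
        return [l for l in self.locks if l[0] == lock_type]

    def delete(self, lock_type, client_id):
        self.locks = [l for l in self.locks
                      if not (l[0] == lock_type and l[1] == client_id)]

def acquire_exclusive(store, client_id, poll_interval=1.0):
    """Try to take an EXCLUSIVE lock following the algorithm above."""
    store.post("PENDING", client_id)
    # Wait for in-flight syncs to drain.
    while store.list("SYNCING"):
        time.sleep(poll_interval)
    # Oldest PENDING lock wins; everyone else backs off.
    pending = sorted(store.list("PENDING"), key=lambda l: l[2])
    if pending[0][1] != client_id:
        store.delete("PENDING", client_id)
        return False  # lost the race, retry later
    store.post("EXCLUSIVE", client_id)
    return True
```

Note that in the real networked version, the gap between checking the oldest PENDING lock and POSTing the EXCLUSIVE lock is exactly where a race could hide, so it is worth thinking about what the server does if two EXCLUSIVE posts arrive.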
Basically, when we execute a generate key command such as A0, we receive a key-under-LMK for future use. What if we have multiple HSMs in a high-availability configuration? How would we make sure that all keys-under-LMK mean the same thing to all HSM instances?
The documentation I have doesn’t cover this and I didn’t find anything online about that particular model.
We have a PostgreSQL 9.6 replica configured across two servers. We used the following configuration to create the replica:

postgresql.conf:
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = 'cp %p /archive/%f'

The problem is that the servers have been restarted due to some maintenance tasks and now they are out of sync.
Since the DB is very large, how can we restore the replica and then synchronize the data without having the application down for more than 5–10 minutes? Can it be done in the background while the application on the master site is being used?
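For what it’s worth, the usual way to rebuild a streaming replica without stopping the master is `pg_basebackup`. This is only a sketch – the host name, replication user, and data directory below are placeholders, so check the flags against your own 9.6 setup:

```shell
# Run on the standby; the master keeps serving the application.

# 1. Stop the out-of-sync standby and clear its old data directory.
pg_ctl -D /var/lib/postgresql/9.6/main stop -m fast
rm -rf /var/lib/postgresql/9.6/main/*

# 2. Take a fresh base backup from the live master.
#    -X stream ships WAL alongside the copy, -R writes a recovery.conf
#    pointing back at the master, -P shows progress.
pg_basebackup -h master-host -U replicator \
  -D /var/lib/postgresql/9.6/main -X stream -P -R

# 3. Start the standby; it replays WAL and catches up in the background.
pg_ctl -D /var/lib/postgresql/9.6/main start
```

The base backup runs while the master is online, so the copy itself should cause no application downtime on the master side.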
It seems every program that purports to record webcams fails to keep audio and video in sync. Moreover, there are probably many man-millennia being spent by people trying to correct for this during editing.
While it is understandable that OSes may have to buffer incoming raw audio and video prior to compression, if necessary taking gigabytes of RAM, it is less understandable that they can’t keep the two in sync during recording.
This points to an architectural flaw common to the major OSes. What is the problem?
- I have a live data feed
- Users load 50 items on the home page and click “load more” to scroll infinitely
- As new data comes in every minute, I also push this new data to each user via websockets
- The user could be disconnected temporarily, missing a few updates; when they reconnect, they would be OUT OF SYNC with the backend
- The way I thought I would fix this problem is to get the timestamp of the last item they have on their UI
- Whenever the connection opens, I would send ALL the items from that timestamp till current time to each user
- There is however a risk here
- My server could be down for maintenance for a few hours during upgrades, and on reconnect I would have to send tens of thousands of items to each user, potentially causing the server to crash
- The remedy I thought of is to keep a time limit, say 4 hours: if it has been more than 4 hours since the last item the user has on their UI, I would reset the UI completely
So the question is, how do I deal with this?
- Don’t do anything, and accept the risk of a server crash
- Clear the user’s UI completely without asking, even if they have scrolled down 1000 items on their screen
- Keep a limit of, say, 50 items and send those 50; if it has been more than 4 hours, tell the user they may be out of sync and need to refresh the page
- Any suggestions?
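As a sketch of the third option, here is how a reconnect handler could decide between appending missed items, capping the catch-up, or forcing a refresh. The function and parameter names (`catch_up`, `fetch_since`) are invented for illustration; the 4-hour window and the 50-item cap come from the post:

```python
from datetime import datetime, timedelta

MAX_CATCHUP = timedelta(hours=4)   # reset window from the post
PAGE_LIMIT = 50                    # cap on items pushed at once

def catch_up(last_seen, now, fetch_since):
    """Decide what to send a reconnecting client.

    last_seen   -- timestamp of the newest item already on the client's UI
    now         -- current server time
    fetch_since -- callable(ts, limit) returning up to `limit` items newer
                   than ts, oldest first (hypothetical data-access helper)
    """
    if now - last_seen > MAX_CATCHUP:
        # Too far behind: reset the UI rather than stream thousands of items.
        return {"action": "reset", "items": []}
    items = fetch_since(last_seen, PAGE_LIMIT + 1)
    if len(items) > PAGE_LIMIT:
        # More missed items than we are willing to push; cap the payload
        # and suggest a refresh instead of flooding the socket.
        return {"action": "refresh_suggested", "items": items[:PAGE_LIMIT]}
    return {"action": "append", "items": items}
```

Fetching `PAGE_LIMIT + 1` items is a cheap way to detect “more than the cap” without a separate count query.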
Hope someone here can help with an issue which is strange and new to me.
We have a hybrid environment, running both SharePoint on-premises and SharePoint Online, and a lot of services running in our organization. One of them is an employee directory where users have the option to change their profile picture. (This is the only place where they can change any profile-related information, including the profile picture.) We have blocked users from changing their profile picture anywhere else using Exchange policies. We have also modified the default setting for the “Picture” property in the SharePoint Online User Profile service and unchecked the option “Allow users to edit values for this property”.
As a result, if users go to Skype for Business, Exchange, or SharePoint, they do not see the option to change the profile picture.
When a user updates the profile picture in the Employee Directory application, the picture is updated in AAD. Once AAD has the latest image, every service in O365 gets the updated image except SharePoint Online.
Is this because we unchecked the “Picture” property in the User Profile service? As I understood it, that option only disables uploading a picture in Delve. But now the picture is also not syncing from AAD.
Does anyone have any ideas?
Thanks in advance
I have tens of millions of small files and directories I wish to migrate from Linux server A to B.
But, I wish to have minimal downtime and… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1784388&goto=newpost
I was working with the Sync feature of a document library. With this feature, a user can sync a document library to their local machine.
What I am looking for is whether there is a way to do the same programmatically (in C# or another language).
Happy coding 🙂
I recently got a new computer for use at the office, with the intent of taking my old one home to work from home. The new computer was set up by “cloning” the old one: same user account, same Microsoft account, same everything. When working at the office, everything in SharePoint works as expected. When I work on files at home and save them to SharePoint, everything looks good. But when I get back to work, the files I modified at home and saved to SharePoint get moved to the recycle bin and older versions restored, so I have to sort through the recycle bin to restore the files I modified. I even went to the extent of creating a new folder from the home computer and copying files into it, intending to move them to the correct location and update files manually once back in the office. When I started up my office PC, the files (and the folder I created from home) were all moved to the recycle bin. What is going on here? This is not how we want SharePoint to work!