Regarding data sharding:
Sharding based on UserID: We can try storing all the data of a user on one server. While storing, we can pass the UserID to our hash function, which will map the user to a database server where we will store all of the user’s tweets, favorites, follows, etc. While querying for the tweets/follows/favorites of a user, we can ask our hash function where we can find the user’s data and then read it from there. This approach has a couple of issues:
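The UserID-based routing described above can be sketched in a few lines (a minimal sketch; the SHA-1 choice and server count are assumptions, since any stable hash would do):

```python
import hashlib

NUM_DB_SERVERS = 8  # hypothetical cluster size

def server_for_user(user_id):
    # Stable hash of the UserID picks one database server; all of this
    # user's tweets, favorites, and follows are stored on that server.
    digest = hashlib.sha1(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_DB_SERVERS

# Reads use the same function: ask server_for_user(uid) for the user's data.
```

Note that this is exactly where the hot-user weakness comes from: every read and write for a popular user lands on the same single server.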
What if a user becomes hot? There could be a lot of queries on the server holding that user, and this high load will affect the performance of our service. Over time, some users can end up storing a lot of tweets or having a lot of follows compared to others. Maintaining a uniform distribution of growing user data is quite difficult. To recover from these situations, we either have to repartition/redistribute our data or use consistent hashing.

Sharding based on TweetID: Our hash function will map each TweetID to a random server where we will store that Tweet. To search for tweets, we have to query all servers, and each server will return a set of tweets. A centralized server will aggregate these results and return them to the user. Let’s look at a timeline generation example; here are the steps our system has to perform to generate a user’s timeline:
1. Our application (app) server will find all the people the user follows.
2. The app server will send the query to all database servers to find tweets from these people.
3. Each database server will find the tweets for each user, sort them by recency, and return the top tweets.
4. The app server will merge all the results and sort them again to return the top results to the user.

This approach solves the problem of hot users but, in contrast to sharding by UserID, we have to query all database partitions to find the tweets of a user, which can result in higher latencies.
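The scatter-gather flow above can be sketched as follows (a minimal in-memory sketch; the shard layout, tuple format, and function names are illustrative assumptions, not part of the original design):

```python
import heapq

NUM_SHARDS = 4  # hypothetical number of database partitions

def shard_for(tweet_id):
    return tweet_id % NUM_SHARDS  # stand-in for a real hash function

# Each shard maps tweet_id -> (timestamp, user_id, text).
shards = [dict() for _ in range(NUM_SHARDS)]

def store_tweet(tweet_id, ts, user_id, text):
    shards[shard_for(tweet_id)][tweet_id] = (ts, user_id, text)

def timeline(followed_users, k=10):
    per_shard = []
    for shard in shards:  # step 2: query every partition
        hits = [t for t in shard.values() if t[1] in followed_users]
        hits.sort(reverse=True)  # step 3: each shard sorts by recency
        per_shard.append(hits[:k])
    # Step 4: merge the pre-sorted shard results and keep the global top k.
    return list(heapq.merge(*per_shard, reverse=True))[:k]
```

Because each shard returns an already-sorted slice, the final merge is a cheap k-way merge rather than a full re-sort.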
Consistent Hashing can be used to overcome the issue of “Search Hot Words” (words that people search frequently) and “Status Hot Words” (words that people use frequently in statuses). My question is: how can the consistent hashing technique help here? In consistent hashing, if we hash the word to get an index on the ring, then, since the hash function does not change, multiple people searching for the same word will end up on the same server, unless we add some other attribute to the hash function. So how does the “Search Hot Words” problem get solved? Similarly, in a key-value store where the key is a word and the value is a list of statusIDs, any key that becomes popular in statuses will accumulate more statusIDs in its value. How does consistent hashing help?
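For what it’s worth, plain consistent hashing does map the same word to the same server every time; what spreads load is the usual pair of additions: virtual nodes for even key distribution, and replicating a hot key under salted variants so its copies land on several servers. A minimal sketch, assuming MD5 as the ring hash and made-up server names:

```python
import bisect
import hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` points on the ring,
        # which evens out the key distribution across nodes.
        self.points = sorted((h(f"{n}#{i}"), n)
                             for n in nodes for i in range(vnodes))
        self.hashes = [p for p, _ in self.points]

    def node_for(self, key):
        i = bisect.bisect(self.hashes, h(key)) % len(self.hashes)
        return self.points[i][1]

ring = Ring(["s1", "s2", "s3"])

# Spreading a hot key: append a replica suffix so copies of the key's
# data land on several nodes; readers then pick one replica at random.
def nodes_for_hot_key(ring, key, replicas=3):
    return {ring.node_for(f"{key}:{r}") for r in range(replicas)}
```

So the answer to the hot-word concern is not the ring itself but the salting step: reads and writes for a hot word are fanned out over `replicas` servers instead of one.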
I also had a look at “How do I deal with side effects in Event Sourcing?”, but the solution wasn’t clear to me.
If I store an “EmailSent” event in the event stream, I might still issue the external request to send the email again, thinking the previous attempt timed out, moments before the confirmation arrives that the first email was successfully sent.
However, if I never store that event, and instead have the email service persistently store all sent email IDs, I will never know to stop bothering the email service with my old requests to send a particular email, even though it answers “this email has already been sent successfully” every time.
Should I do both? That way the system will stop bothering the email service most of the time, and when it accidentally contacts it multiple times, only one email will be sent. (Yes, the email service can never be transactional, and I still have to choose “at most once” or “at least once”, but given its limited scope and data locality it can give practical results much closer to “exactly once”.)
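One common way to combine both ideas is to record the intent (an “EmailSendRequested” event) with a deterministic email ID before calling out, and let the email service deduplicate on that ID, so retries are harmless. A minimal sketch; the event names, `EmailService` class, and ID scheme are illustrative assumptions, not a prescribed design:

```python
import uuid

class EmailService:
    """Hypothetical downstream service that dedupes on a caller-supplied ID."""
    def __init__(self):
        self.sent_ids = set()  # persisted durably in a real system
        self.deliveries = 0    # actual emails that went out

    def send(self, email_id, to, body):
        if email_id in self.sent_ids:
            return "already-sent"  # duplicate request: no second delivery
        self.sent_ids.add(email_id)
        self.deliveries += 1
        return "sent"

event_stream = []  # stand-in for the event store

def request_email(to, body):
    # Deterministic ID: retries of the same logical email reuse the same ID.
    email_id = str(uuid.uuid5(uuid.NAMESPACE_URL, to + body))
    event_stream.append(("EmailSendRequested", email_id))  # record intent first
    return email_id

def deliver(service, email_id, to, body):
    status = service.send(email_id, to, body)  # safe to retry
    event_stream.append(("EmailSent", email_id, status))
    return status
```

With this split, the event stream records what you meant to do and what the service reported, while the at-most-once guarantee lives entirely in the service’s dedup set.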
I am installing NVIDIA drivers on Ubuntu 18.04. I completed the process successfully once and was able to run nvidia-smi to see my graphics card usage.
However, after reinstalling my operating system and the drivers, every time I run nvidia-smi it prints the system time and date and then nothing else. My system becomes unresponsive to all input, and I have to reboot.
I am not sure how to debug this issue. I have tried:
- Nouveau is disabled and I have run
- I have attempted to install other versions (410.xx and 418.xx)
- I have tried reinstalling the OS and doing a fresh driver install. The issue persists.
- I have installed using the scripts available on NVIDIA’s website as well as through apt-get by adding the graphics ppa repositories.
How should I try to get nvidia-smi running successfully?
P.S. I tried these commands, but the problem is not solved:

sudo apt-get remove gnome-control-center
sudo apt autoremove
sudo apt-get install gnome-control-center
[Xcode 10.1, MacOS 10.14.1]
I have a project that uses bmake (it could be any make, though), and the Makefile provides a number of targets. I would like to use Xcode to build host and to clean the build folder, but I’m having trouble working out how to configure Xcode to allow me to do this.
From the command line, I would build using bmake host and clean using bmake clean. The reason I’m using Xcode for this is that I like to use an IDE for debugging.
Under Project -> Info (External Build Tool Configuration), I have:

Build Tool : /usr/local/bin/bmake
Arguments  : host
Directory  : None    <- I'm using the current path
With these settings, Product -> Build builds my target, but Product -> Clean Build Folder does nothing, even though Xcode reports that the clean succeeded.
In order to actually do a clean, I either need to define another target with the Arguments field set to clean and then switch between targets when building/cleaning, or use a single target and change the Arguments field depending on whether I’m building or cleaning. (A really clumsy way of going about it.)
If I leave Arguments at its default value $(ACTION), all targets get built (except clean), and cleaning does nothing useful.
I’ve read https://stackoverflow.com/questions/15652316/setup-xcode-for-using-external-compiler but that question does not address this problem.
Is there a better way of doing this?
I am working for a non-profit organisation, Sangath, in Goa. Our project ToQuit aims to develop, and then preliminarily evaluate, a contextually appropriate intervention that can be delivered via mobile text messaging to large numbers of tobacco users, quickly and at low cost.
To deliver this intervention, we plan to develop SMS and IVR systems internally, just for Sangath, that can be used by various projects in the future.
I have past experience in software development. I am good at .NET MVC and SQL, and I am learning Angular (2/4/5).
I am trying to collect some information before starting to develop the IVR and SMS platform. Is it a good idea to go with .NET MVC, SQL Server, and Angular? How should I begin, and what steps should I follow?
Any help/information would be appreciated.
I’m curious: how does low storage space affect functionality on a device? Thanks!
I have had problems when importing; I get that error. I have attached the file I used at the time of import. Does anyone know how to solve the problem? My Magento version is 2.1.6.
I was attempting to run sudo apt upgrade and hit Ctrl+C when it gave me a prompt that I could not figure out how to get out of.
I tried to rerun sudo apt upgrade and got a message about some kind of lock being held by another process. Foolishly, I tried to restart the machine. This happens after selecting Ubuntu at boot: http://imgur.com/anZYAsd. Any ideas on how to fix this?
Suppose I have modeled my system as a deterministic finite automaton (DFA). How can I check that the traces generated by the system actually conform to the model I had in mind?
For example, say I have a DFA A that models a vending machine. To ensure I did not model any wrong transitions, is coming up with (safety/liveness) specifications and performing verification the only method I can follow to check that my model is correct? Many thanks.