Assume we have N kids who all like dogs. One day they go to the shelter, where there are M dogs of unique breeds. Each kid has a favorite dog breed and a second-favorite dog breed, and the kids are lined up in order. The dogs are selected as follows: the first kid in line takes their first-choice breed if it is available and leaves; otherwise, if the first choice is unavailable but the second is available, they take the second choice and leave; otherwise, they leave crying with no dog. For each 0 <= i <= N - 1, we want to determine how many kids would get a dog if the first i kids were removed from the line.
Example: 4 kids, 2 dog breeds (denoted 1 and 2). Say all 4 kids have breed 1 as their first choice and breed 2 as their second choice. The output should be 2, 2, 2, 1: if we remove the first 0 kids from the line, exactly 2 get dogs; if we remove the first kid, still 2 get dogs; if we remove the first 2 kids, 2 get dogs; and if we remove the first 3, only 1 gets a dog. This is a trivial case.
Obviously one solution would be to keep a list of the kids' preferences and a set of the available dog breeds, and then for each i, start from the ith index of the list, refresh the dog set, and simulate the process, but that takes O(N(N + M)) time overall. Is there a way to do this more cleverly and efficiently?
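For concreteness, the brute-force simulation described above can be sketched as follows (function and parameter names are my own, not part of the problem statement):

```python
# Brute-force simulation: for each starting index i, refresh the set of
# available breeds and walk the remaining kids in line order.

def kids_with_dogs(prefs, breeds):
    """prefs: list of (first_choice, second_choice) per kid, in line order.
    breeds: the available dog breeds, each unique.
    Returns ans where ans[i] = number of kids that get a dog if the
    first i kids are removed from the line."""
    n = len(prefs)
    ans = []
    for i in range(n):
        available = set(breeds)          # refresh the shelter
        got = 0
        for first, second in prefs[i:]:  # kids i .. n-1 in order
            if first in available:
                available.remove(first)
                got += 1
            elif second in available:
                available.remove(second)
                got += 1
            # else: this kid leaves with no dog
        ans.append(got)
    return ans

# The trivial example above: 4 kids, breeds 1 and 2, everyone prefers 1 then 2.
print(kids_with_dogs([(1, 2)] * 4, [1, 2]))  # -> [2, 2, 2, 1]
```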
In Adventurers League D&D, there is a monster stat block with the following ability:
At the start of his turn, every living creature within 100 feet must succeed on a DC 15 Constitution saving throw or they lose 5 hit points and [he] regains 5 hit points.
That monster also has an ability to spawn Swarms of Rot Grubs (Medium swarms of Tiny Beasts). I am thus wondering how those two abilities interact.
More generally: how are swarms considered vis-à-vis creatures?
- Considered as a single creature [here, would be a single CON save]
- Considered as many creatures (and if so, how do you determine how many?) [here, would be X CON saves]
- Considered as no creature [here, would be 0 CON saves]
Recently I started a new job, and I have been going through documentation and code to understand what the company is doing. While doing that, I noticed that the number of special characters in a user's password is logged.
Personally, I don't think it is a good idea to disclose any information about a password, especially for users who didn't use any special characters. On the other hand, this issue wasn't picked up by the pen testers.
I was wondering: am I being too paranoid and this is not a real issue, or is it an issue that was overlooked during pentesting?
I have the following models:
```python
class Order(models.Model):
    ...

class Component(models.Model):
    line = models.ForeignKey(
        Line,
        on_delete=models.CASCADE,
        blank=True,
        null=True,
        related_name="components",
    )
    ...

class Detail(models.Model):
    line = models.ForeignKey(
        "Line",
        on_delete=models.CASCADE,
        blank=True,
        null=True,
        related_name="details",
    )
    order = models.ForeignKey(Order, on_delete=models.CASCADE, related_name="details")
    ...

class Line(models.Model):
    ...
```

**Serializer**

```python
class ComponentSerializer(serializers.ModelSerializer):
    qty = serializers.SerializerMethodField(read_only=True)

    def get_qty(self, component):
        return component.qty - sum(
            map(
                some_calculation,
                Detail.objects.filter(line__components=component, order__active=True),
            )
        )
```
I have a list view using a ModelViewSet:
```python
def list(self, request):
    queryset = Order.objects.filter(order__user=request.user.id, active=True)
    serializer = OrderSerializer(queryset, many=True)
```
The ComponentSerializer is used inside the OrderSerializer. My question: the query inside ComponentSerializer hits the DB for every order record. If my understanding is correct, is there any way to reduce this?
I have noticed that when I google for jobs, for example 'plumber jobs in Melbourne', some results show a prepended piece of data, '407 jobs', before the normal meta description.
Does anyone know what Seek has done to get this data shown in Google search results?
What information does knowing the consensus number of a shared object give me, and how is it useful?
Donald Knuth demonstrated that the codebreaker in the board game Mastermind can solve the pattern in five moves or fewer using the following algorithm:
- Create a set S of remaining possibilities (at this point there are 1296). The first guess is aabb.
- Remove from S all possibilities that would not give the same score of colored and white pegs (against the guess just played) if they were the answer.
- For each possible guess (not necessarily in S) calculate how many possibilities from S would be eliminated for each possible colored/white score. The score of the guess is the least of such values. Play the guess with the highest score (minimax).
- Go back to step 2 until you have got it right.
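The steps above can be sketched on a scaled-down game (3 colours, 2 pegs rather than 6 and 4, so it runs quickly); function names here are my own, not from Knuth's paper:

```python
from itertools import product
from collections import Counter

def score(guess, answer):
    """Return (black, white): exact matches, then colour-only matches."""
    black = sum(g == a for g, a in zip(guess, answer))
    common = sum((Counter(guess) & Counter(answer)).values())
    return black, common - black

def knuth_guess(candidates, all_codes):
    """Minimax: pick the guess whose largest surviving bucket is smallest.
    Trying candidates first acts as a tie-break preferring guesses that
    could still be the answer (which also guarantees termination)."""
    ordered = candidates + [c for c in all_codes if c not in candidates]
    best, best_worst = None, None
    for guess in ordered:
        # partition the remaining candidates by the score they would give
        buckets = Counter(score(guess, ans) for ans in candidates)
        worst = max(buckets.values())
        if best_worst is None or worst < best_worst:
            best, best_worst = guess, worst
    return best

def solve(answer, colours=3, pegs=2):
    """Play the strategy against a known answer; return the guess count."""
    all_codes = list(product(range(colours), repeat=pegs))
    candidates = list(all_codes)
    guesses = 0
    while True:
        guess = knuth_guess(candidates, all_codes)
        guesses += 1
        result = score(guess, answer)
        if result == (pegs, 0):
            return guesses
        candidates = [c for c in candidates if score(guess, c) == result]

# worst case over every 3-colour, 2-peg code
print(max(solve(ans) for ans in product(range(3), repeat=2)))
```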
I am curious: what would be the maximum number of guesses necessary to win a Mastermind-like game with 5 pegs instead of 4? How about 1,000 pegs, or a million?
I have this question but have been unable to work out the solution. How should I proceed? Thanks in advance for any answer.
I think one reason a comparison is regarded as quite costly is historical, as remarked by Knuth: the problem came from tennis tournaments trying to correctly determine the second- or third-best player, assuming the players are not in a "rock paper scissors" situation (but each has an absolute "combat" power).
If we have an array of size 1,000,000, we don't usually mind comparing 2,000,000 times to find the second-largest number. With a tennis tournament, having Player A play Player B is costly: a single match can take a whole afternoon.
With sorting or selection algorithms, what if the number of comparisons is O(n log n) or O(n), but the other operations have to be O(n²) or O(n log n)? Wouldn't the higher order still dominate the number of comparisons? (Maybe this hasn't happened yet, or else we would have a case study about this situation.) So ultimately, shouldn't it be the number of atomic steps, measured by its order of growth in n, rather than the number of comparisons, that determines the time complexity?
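To make the "2,000,000 comparisons for 1,000,000 elements" figure concrete, here is a small instrumented scan for the second-largest element that counts every element comparison (names are my own):

```python
# One pass tracking the largest and second-largest values, counting each
# comparison between elements. Uses at most 2n - 3 comparisons for n items.

def second_largest(arr):
    """Return (second_largest_value, number_of_comparisons). Needs len >= 2."""
    comparisons = 1
    largest, second = (arr[0], arr[1]) if arr[0] >= arr[1] else (arr[1], arr[0])
    for x in arr[2:]:
        comparisons += 1
        if x > largest:
            second, largest = largest, x
        else:
            comparisons += 1          # second comparison only when needed
            if x > second:
                second = x
    return second, comparisons

value, comps = second_largest(list(range(1_000_000)))
print(value, comps)  # comps is at most 2 * 1_000_000 - 3
```

Whether those roughly 2n comparisons are the right thing to count, versus all the other constant-time bookkeeping around them, is exactly the question above.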
I need to analyse a directed graph but I don’t know the name of the algorithm I would need to use. The graph has many cycles.
My desired behaviour is: given a graph source and graph sink, find the longest path by number of edges, excluding cycles.
By graph source, I mean a vertex with one or more edges to other vertices and no incoming edges; a sink is the opposite, a vertex with incoming edges and no outgoing ones. If there's better terminology, please let me know.
By excluding cycles, I mean something like never traversing an edge the process has traversed previously, so the path cannot loop.
Do you recognise this algorithm and could you tell me the name, please?
Thanks in advance