List of equally spaced numbers in an interval $[a,b]$ that contains both $a$ and $b$

I’m writing a program that should split a given interval $[a,b]$ into a list of $N^{1/p}$ equidistant numbers:

n = 27; a = -1; b = 1; p = 3;
Range[a, b, RealAbs[b - a]/(n^(1/p) - 1)]
(* {-1, 0, 1} *)

The result should be a list that has $ N^\frac{1}{p}$ numbers, and that contains both $ a$ and $ b$ . The program works when $ N=x^p$ , where $ x$ is an integer, but fails to include $ b$ in the list when this condition is not met.

For example, when $ p=2$ and $ N$ is not a perfect square:

Np = 10; a = -1; b = 1; p = 2;
Range[a, b, RealAbs[b - a]/(Np^(1/p) - 1)] // N
(* {-1., -0.0750494, 0.849901} *)

Is there a way to specify that both endpoints, $a$ and $b$, must be part of the list, and then split the interval equally into a total of $N^{1/p}$ equidistant numbers?
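The underlying issue is that `Range[a, b, step]` stops at the last step that still fits inside the interval, and the computed step only lands exactly on $b$ when $N^{1/p}$ is an integer. Rounding the point count to a whole number first, and deriving the step from that count, fixes it. A small Python sketch of the idea (the function name is mine):

```python
def equidistant(a: float, b: float, n_points: int) -> list[float]:
    """Return n_points equally spaced values, including both endpoints."""
    step = (b - a) / (n_points - 1)
    return [a + i * step for i in range(n_points)]

# Round N**(1/p) to the nearest whole count first, then derive the step.
N, p, a, b = 10, 2, -1, 1
points = equidistant(a, b, round(N ** (1 / p)))  # 3 points: [-1.0, 0.0, 1.0]
```

In the Wolfram Language itself, `Subdivide[a, b, n - 1]` produces `n` equally spaced points including both endpoints, so `Subdivide[a, b, Round[Np^(1/p)] - 1]` should do the same job.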

Finding multiple paths through a grid such that every grid square is equally used


Here’s the setup: I have an $ N$ x $ N$ grid of tiles, and a list of $ M$ agents that need to move across the grid. Each agent has its own start tile $ S(a)$ , end tile $ E(a)$ , and an exact number of steps $ D(a)$ it must make. Each step consists of a one-tile move horizontally, vertically, or staying in place. For each agent, $ D(a)$ is usually much larger than the Manhattan distance between $ S(a)$ and $ E(a)$ , so the path the agent takes is not necessarily a straight line from $ S(a)$ to $ E(a)$ . Furthermore, the sum of all $ D(a)$ is expected to be much larger than $ N$ x $ N$ , so every tile will be used at least once. Agent paths are allowed to intersect with other paths and with themselves, and the number of agents on a tile at any given time doesn’t matter.

The Problem

I would like to find paths for each agent that begin at $ S(a)$ , end at $ E(a)$ , and are exactly $ D(a)$ steps long, with the goal of minimizing the maximum number of times any given tile is used. More formally, given an agent path $ P_0 \ldots P_n$ , let $ C(P, t)$ be the number of times tile $ t$ appears in $ P$ , and let $ A(t)$ be the sum of $ C(P, t)$ over all agent paths. I would like to find agent paths that minimize the maximum $ A(t)$ over all tiles $ t$ .

My intuition tells me that this problem is almost certainly NP-hard, so I’m looking for some kind of approximation or heuristic.

First Attempt

My first stab at solving this was to find each path sequentially. For each agent, I create a 3-dimensional $N \times N \times D(a)$ search space, then use A* search to find the min-cost path from $[S(a), 0]$ to $[E(a), D(a)]$. The cost of each node in the search is the number of times that tile has been used by previous paths. Once the path is found, I add to the cost of each tile used and proceed to the next agent. Of course, this leads to the problem that while the last agent path will be pretty good, the first agent path will be essentially random, because at that point the grid is totally unused. So I just loop this process a few times: once the last path is computed and the tile costs updated, I loop back to the first path, subtract from the grid the costs that agent contributed, then recompute that path and add the new costs in. After 3 or 4 loops, I converge on a pretty reasonable solution.
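To make one iteration of that sequential search concrete, here is a Python sketch (names and details are my own; since the per-step cost is just the prior usage of the entered tile and costs are non-negative, a zero heuristic makes A* degenerate into Dijkstra over the time-expanded grid, with a Manhattan-distance feasibility prune standing in for the heuristic):

```python
import heapq

def find_path(n, usage, start, end, steps):
    """Min-cost path of exactly `steps` moves from `start` to `end` on an
    n x n grid, where entering tile t costs usage.get(t, 0).  States are
    (tile, time); each move goes to one of the 4 neighbours or stays put."""
    def feasible(x, y, t):
        # Prune states that can no longer reach `end` in the remaining steps.
        return abs(x - end[0]) + abs(y - end[1]) <= steps - t

    best = {(start, 0): usage.get(start, 0)}
    frontier = [(usage.get(start, 0), start, 0, (start,))]
    while frontier:
        cost, (x, y), t, path = heapq.heappop(frontier)
        if t == steps:            # the feasibility prune guarantees (x, y) == end
            return cost, list(path)
        if best.get(((x, y), t), float("inf")) < cost:
            continue              # stale queue entry
        for nx, ny in ((x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < n and 0 <= ny < n and feasible(nx, ny, t + 1):
                ncost = cost + usage.get((nx, ny), 0)
                if ncost < best.get(((nx, ny), t + 1), float("inf")):
                    best[((nx, ny), t + 1)] = ncost
                    heapq.heappush(frontier,
                                   (ncost, (nx, ny), t + 1, path + ((nx, ny),)))
    return None                   # infeasible: Manhattan(start, end) > steps
```

Each agent’s returned path is added into `usage` before the next agent’s call, and the outer refinement loop subtracts and recomputes exactly as described above.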

But I’m hoping there’s a better idea out there. Any ideas or references to similar problems that I could read up on would be very welcome.

Is CTR really as secure as CBC?

Here is a typical cryptographic situation:
A secret key exists that is known only to a sender and a receiver of messages. Because replacing that key is hard (you need either a secure channel for transmission, or a way for the receiver to send something back to the sender to perform a key exchange, and both may be unavailable), a lot of different messages will all be encrypted with the same key. Note, however, that the messages exchanged will all be different. It’s not impossible that two messages start with the same few bytes, or contain the same byte sequences somewhere within them, but that would be pure coincidence and is not generally expected to happen frequently.

Now, when using CBC encryption, there is an IV, and that IV is randomly chosen for every message exchanged. With a 128-bit block cipher like AES, the IV has 128 bits as well, so the chance that two messages are encrypted with the same IV is only 1 in 2^128, which is rather tiny. And even if the same IV were used for two messages, does it really matter if the messages differ right at the beginning? After all, the IV is XORed with the first 128 bits of the message, so even for the same IV that operation produces a different result if the first 16 bytes of the message differ from those of the last message that used the same IV.

However, CBC is considered outdated by most people today; pretty much every paper about block cipher modes recommends using only CTR for new development, praising all its advantages. Sure, CTR has a couple of nice features, but is it really as secure as CBC in the situation described initially?

CTR also uses an IV, yet that IV is split into two parts: a nonce and a counter. Since the counter values certainly repeat across different messages (all counters start at zero for the first block of every new message), the only randomness comes from the nonce. Yet the nonce will be less than 128 bits, because there must be room for the counter. All papers say you must never use the same IV with the same key to encrypt two different data blocks, but the nonce space of CTR is necessarily smaller than the IV space of CBC, so the chances of a collision are much higher, aren’t they?

I’ve seen CTR implementations that split the IV in half, giving a 64-bit nonce and a 64-bit counter. In that case the chance of a nonce collision is 1 in 2^64, compared to 1 in 2^128 for the CBC case. While 2^64 is still a big number, it’s a whole lot smaller than 2^128.
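To make that 64/64 split concrete, here is a minimal Python sketch of how such an implementation typically builds each 128-bit block-cipher input (the function name and layout are my own illustration, not taken from any particular standard):

```python
def counter_block(nonce: bytes, counter: int) -> bytes:
    """Build the 128-bit block-cipher input for CTR mode by concatenating
    a 64-bit per-message nonce with a 64-bit big-endian per-block counter."""
    assert len(nonce) == 8, "this particular split assumes a 64-bit nonce"
    return nonce + counter.to_bytes(8, "big")

# Each keystream block is then E(key, counter_block(nonce, i)) for i = 0, 1, 2, ...
```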

Thus, won’t using CTR force you to replace the key much more frequently, unless you want to risk the security of your encrypted data exchange? Is CTR really a suitable replacement for CBC in a situation like the one described above?

Aside from that, CTR doesn’t seem compatible with itself. Every CBC implementation can correctly decrypt data that any other CBC implementation has encrypted, because there are no open questions about how CBC works; everything is standardized. The same cannot be said for CTR, as different CTR implementations can split the IV into nonce and counter in different ways. If I know that my messages will never have more than 2^20 blocks, I could use only a 20-bit counter and thus get a 108-bit nonce, yet this won’t work if the other side expects the nonce to be exactly 64 bits long.

To make things even more complicated, instead of splitting the IV into two parts, one can also form the counter block by adding or XORing the nonce and counter together, which avoids the reduction of the IV space, yet I have no idea what that means for the security of CTR. It also makes the implementation incompatible with most existing CTR implementations.

Do I need to enable Trace Flag 1117 for equally sized data files?

I was reading about the proportional fill algorithm in SQL Server when I recalled TF 1117. BOL states:

When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow. This trace flag affects all databases and is recommended only if it is safe to grow all files in a filegroup by the same amount.

What I can’t understand is this: if data files fill proportionally, won’t they also auto-grow proportionally? In that case, couldn’t we omit TF 1117?

Distribute values equally

Each ship docks at a port. If too many ships arrive at a single port, the crew gets overwhelmed.

How can I return a result set so that a port does not get overloaded?

declare @ships table ( shipnr int, portnr varchar(50) );

insert into @ships (shipnr, portnr) values
(1, 'A'), (2, 'A'), (3, 'A'), (4, 'A'), (5, 'A'),
(6, 'B'), (7, 'B'), (8, 'B'), (9, 'B'),
(10, 'C'),
(11, 'D'), (12, 'D'),
(13, 'E'), (14, 'E'), (15, 'E'),
(16, 'F'), (17, 'F'), (18, 'F'), (19, 'F'), (20, 'F'), (21, 'F'), (22, 'F');

One possible solution and set based result would be:

SELECT shipnr, portnr
FROM (
    SELECT shipnr, portnr,
           DENSE_RANK() OVER (PARTITION BY portnr ORDER BY shipnr) AS N
    FROM @ships
) AS a
ORDER BY N;

However, this is not quite what I am looking for, since at the end we get three ships in a row arriving at F.
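To show the kind of ordering I am after, here is a language-agnostic sketch in Python (the function is illustrative, not part of the question): give the i-th ship of an n-ship port the fractional slot (i + 0.5)/n and sort every ship by its slot, so a large group like F gets spread evenly through the whole sequence instead of bunching up at the end.

```python
def spread(ships_by_port):
    """Order ships so that each port's arrivals are spaced evenly.
    ships_by_port maps a port name to the list of its ship numbers."""
    slots = []
    for port, ships in ships_by_port.items():
        n = len(ships)
        for i, ship in enumerate(ships):
            # Place the i-th ship of this port at the centre of its 1/n slice.
            slots.append(((i + 0.5) / n, ship, port))
    slots.sort()
    return [(ship, port) for _, ship, port in slots]
```

With the sample data above, no two consecutive arrivals are at port F.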

If I seed a CSPRNG with a truly random number and take its output, does this make the output more, less or equally “random”?

If I have a JavaScript CSPRNG such as isaac.random(), and I seed it with a truly random number T, as in isaac.seed(T), does this make the output of the CSPRNG more, less, or equally random?
I would imagine that with a CSPRNG such as isaac (which has passed TestU01), the seed would not add any randomness, since isaac already produces numbers indistinguishable from truly random values.
My concern is whether this could result in less “randomness”.

Splitting an amount of money equally between a group of people

I’m building a clone of a banking app at the moment, and one of the things I’m trying to add is a Split Transaction feature (a transaction can be shared with a set of friends, each paying a given amount).

Initially, the transaction is split equally amongst friends (unless it doesn’t split equally, in which case the remainder gets added to one unlucky friend). The user can then manually adjust the amount each pays, which then updates the others. If the user has manually adjusted an amount for a friend, that friend’s split doesn’t get updated automatically when the user adjusts another friend’s amount (i.e. if the user says friend1 pays £12, it stays £12 until the user says otherwise).

I’ve been fiddling for a while trying to make the method as concise and Swifty as possible – but I’d really appreciate any feedback on my approach.

For the purposes here, I’m only trying to split the money equally between people (but I still wanted to explain the user-defined split so that the current code makes sense).

I’m using Money<GBP> to represent the transaction value, all within a Transaction class. I need to round quite a bit to ensure the split and remainder stick to 2 decimal places. Here’s the relevant code:

A struct to hold an amount along with if the user set it or not (needs to be a custom object for codable reasons):

struct SplitTransactionAmount: Codable {
    let amount: Money<GBP>
    let setByUser: Bool
}

A dictionary holds the friend names, along with their split and whether it was set by the user; there’s also a namesOfPeopleSplittingTransaction array for easy display.

var splitTransaction: [String: SplitTransactionAmount]
var namesOfPeopleSplittingTransaction = [String]()

And here’s the method to split the transaction:

private func splitTransaction(amount: Money<GBP>, with friends: [String]) -> [String: SplitTransactionAmount] {
    //First we remove any duplicate names.
    let uniqueFriends = friends.removingDuplicates()
    //Create an empty dictionary to hold the new values before returning.
    var newSplitTransaction = [String: SplitTransactionAmount]()

    let totalAmountToSplitRounded = amount.rounded.amount
    let numberOfSplitters = uniqueFriends.count

    let eachTotalRaw = totalAmountToSplitRounded / Decimal(numberOfSplitters)
    let eachTotalRounded = Money<GBP>(eachTotalRaw).rounded.amount

    let remainder = totalAmountToSplitRounded - (Decimal(numberOfSplitters) * eachTotalRounded)

    if remainder == 0 {
        //If the amount to split goes into the total with no remainder, everyone pays the same.
        for friend in uniqueFriends {
            newSplitTransaction[friend] = SplitTransactionAmount(amount: Money(eachTotalRounded), setByUser: false)
        }
    } else {
        for friend in uniqueFriends {
            if friend == uniqueFriends.first! {
                //Unlucky first friend has to pay a few pence more!
                newSplitTransaction[friend] = SplitTransactionAmount(amount: Money(eachTotalRounded + remainder), setByUser: false)
            } else {
                newSplitTransaction[friend] = SplitTransactionAmount(amount: Money(eachTotalRounded), setByUser: false)
            }
        }
    }
    return newSplitTransaction
}

I think the problem is that the code makes perfect sense to me, but I’m not sure how clear it is to an outside reader. Any thoughts on my approach would be much appreciated (and sorry for the long question!). I’d also love to know if there’s a way to write this more concisely!
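One way to avoid most of the rounding back and forth is to do the arithmetic in whole pence, where the split-plus-remainder logic collapses to a single divmod. A Python sketch of just that core idea (not the app’s actual Money<GBP> types):

```python
def split_equally(total_pence: int, names: list[str]) -> dict[str, int]:
    """Split an amount in whole pence between names; the first name
    absorbs the remainder, mirroring the 'unlucky first friend' rule."""
    each, remainder = divmod(total_pence, len(names))
    return {name: each + (remainder if i == 0 else 0)
            for i, name in enumerate(names)}
```

The same shape is expressible in Swift with integer division and `%`, converting to Decimal pounds only at the display boundary.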

I’ve also extended array to remove duplicates:

extension Array where Element: Hashable {
    func removingDuplicates() -> [Element] {
        var addedDict = [Element: Bool]()

        return filter {
            //updateValue(_:forKey:) returns nil if the key is new, so we can use it to find out which items are unique.
            addedDict.updateValue(true, forKey: $0) == nil
        }
    }

    //This will change self to remove duplicates.
    mutating func removeDuplicates() {
        self = self.removingDuplicates()
    }
}

Divide a gold bar into the minimum number of pieces so that it can be divided equally among 7, 8 or 9 people

One night nine gangsters stole a gold bar. When the time came for dividing the bar, they faced a problem: two of the criminals put guns to each other’s faces. Now it’s up to fate whether one of them lives, they both live or both die.

While these two are dealing with each other, the others decide to continue dividing the gold bar. What is the minimal amount of pieces they should divide the bar into, so that no matter how things pan out, everyone can be given an equal share?

Scenario 1: Both gangsters blow each other’s brains out. The gold must be divided evenly among the seven remaining gangsters.

Scenario 2: One gangster is quicker on the draw, and manages to take out his opponent. The gold must be divided evenly among the eight remaining gangsters.

Scenario 3: The duelling gangsters discuss their differences, come to a mutually beneficial agreement, and put away their guns. The gold must be divided evenly among all nine gangsters.

Notes:

1. One obvious solution is to divide the bar into 7×8×9 = 504 equal pieces, but this can be improved by using pieces of different sizes.

2. I am looking for all possible ways to solve it, be it through:

1) Dynamic programming

2) Network flow

3) Divide and conquer, etc.
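Whichever search strategy is used, the core subproblem is verifying that a candidate set of pieces, measured in units of 1/504 of the bar, can be dealt out into $k$ equal shares for $k = 7, 8$ and $9$. A brute-force backtracking checker, sketched in Python (an illustrative verifier, not an optimized solver):

```python
def can_split(pieces, k):
    """Return True if `pieces` (positive integers) can be partitioned
    into k groups of equal sum, by backtracking with symmetry pruning."""
    total = sum(pieces)
    if total % k:
        return False
    target = total // k
    groups = [0] * k
    pieces = sorted(pieces, reverse=True)  # place big pieces first

    def place(i):
        if i == len(pieces):
            return True
        seen = set()  # skip groups with a sum we already tried for this piece
        for g in range(k):
            if groups[g] + pieces[i] <= target and groups[g] not in seen:
                seen.add(groups[g])
                groups[g] += pieces[i]
                if place(i + 1):
                    return True
                groups[g] -= pieces[i]
        return False

    return place(0)
```

A candidate cutting of the bar is valid exactly when `can_split(pieces, k)` holds for all three values of k, so a search over small piece multisets can use this as its acceptance test.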