What steps could improve the throughput of this system?

I got this question in an interview (Codility) the other day.

Imagine we have a system responsible for counting the total number of pages on a web site. It’s a single process on a single machine, and it has a RESTful HTTP API with one endpoint. It looks like

POST /increment?n=N

Whenever the endpoint is hit, the system increments its internal count by N, opens its state file on disk, overwrites the state file with the new value of the count, and closes the file.

There are many webserver processes on many machines, all calling into this system. Every time a webserver sees a web request, it calls out to the counter system like this

POST /increment?n=1

The system is having trouble keeping up. What steps would you take to improve the throughput of the system?

I had a difficult time understanding the scenario they were describing. I answered that instead of having the counting system read from and write to disk on every request, it could push the work onto a queue (Redis or something similar), and the queue could be drained in due course.

I have no idea if that comes anywhere close and there were no other clarifications given in the question. Curious how other folks would approach this.
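Queue aside, another common answer is batching: keep the count in memory and flush it to disk on an interval instead of on every request, so no disk I/O happens on the request path. A minimal sketch (my own illustration, not the system from the question); the class name and flush policy are assumptions, and the HTTP wiring and crash-recovery are omitted:

```python
import os
import threading
import time

class BatchedCounter:
    """Accumulate increments in memory; persist periodically."""

    def __init__(self, path, flush_interval=1.0):
        self.path = path
        self.flush_interval = flush_interval
        self.count = 0
        self.lock = threading.Lock()

    def increment(self, n=1):
        # Called by the HTTP handler; no disk I/O here.
        with self.lock:
            self.count += n

    def flush(self):
        # Write the current total atomically: temp file, then rename.
        with self.lock:
            value = self.count
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(value))
        os.replace(tmp, self.path)

    def run_flusher(self, stop_event):
        # Background thread: flush every flush_interval seconds.
        while not stop_event.is_set():
            time.sleep(self.flush_interval)
            self.flush()
```

The trade-off is durability: a crash loses at most one flush interval's worth of increments, which is usually acceptable for a page counter.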

2018 MBP Network Throughput decreases over time


I was recently provided with a 2018 MBP by my employer. The network throughput slows gradually until I’m forced to reboot.


The following machines are connected via wired ethernet to a gigabit switch:

  • 2018 MBP (the problematic machine, 10.14.4)
  • 2015 MBP (the control, thunderbolt ethernet adapter, 10.14.4)
  • 2010 iMac (the test target, built-in ethernet, Ubuntu)

The 2018 MBP has a USB-C-to-Thunderbolt adapter, with a Thunderbolt Display attached and the ethernet cable connected to the display.

When the throughput drops, I’ve tried the following:

  • Switch to WiFi – doesn’t help; speed starts around 300 Mbit/s but drops to the level the wired connection is currently at.
  • Use a different switch port / ethernet cable – doesn’t help
  • Test speed on control MBP using 2018 MBP’s ethernet cable & switch port (it’s fine)
  • Connect thunderbolt monitor to control MBP and test throughput (also fine)
  • Disconnect the USB-C-to-Thunderbolt adapter in case it’s doing something odd (no difference)
  • Disconnect power from 2018 MBP (no difference)
  • Sleep/Wake 2018 MBP (no difference)

I set up a test to show the problem: I rebooted the 2018 MBP, then looped iPerf connections to the iMac and plotted the results. Sure enough, after a couple of days I’m down to 10 Mbit/s across a gigabit switch:

[Figure: throughput vs. time]
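The logging loop can be sketched like this: run iperf3 in JSON mode on a timer and record the received bitrate for plotting. The hostname and interval below are placeholders; the JSON field path matches iperf3's `-J` output for TCP runs:

```python
import json
import subprocess

def parse_bps(json_text):
    """Extract the end-of-run received bitrate from iperf3 -J output."""
    report = json.loads(json_text)
    return report["end"]["sum_received"]["bits_per_second"]

def measure_once(host):
    """Run one 5-second iperf3 test against host; needs iperf3 -s there."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-J", "-t", "5"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_bps(out)

# Intended use: call measure_once("imac.local") in a loop (e.g. every
# 5 minutes) and append the timestamped result to a CSV for plotting.
```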

The problem isn’t the iMac (the test target), as periodic tests from the control MBP to the iMac consistently showed ~940 Mbit/s.

I’ve been using the 2015 MBP and the Thunderbolt Display with its ethernet connection for years without issue, and have ruled them out as a cause. WiFi shows the same problem, which suggests it’s not tied to a specific interface. I suspect a software problem with the networking stack on the 2018.

Potentially suspicious software:

  • Symantec Endpoint Protection (mandated by employer)
  • Pulse Secure (required for VPN connection to employer, but not connected during these tests)
  • Docker (required for development; it does add bridges etc. to local networking, though this causes no problem on the 2015 MBP)


  1. Any idea what this might be about?
  2. Is there any way to reset the networking stack or some workaround that doesn’t involve a reboot?

Unfortunately removing SEP is not an option 🙁
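On question 2, a few standard macOS commands are sometimes worth trying short of a reboot. A sketch, assuming the wired interface is `en7` (substitute the real name from `ifconfig` or `networksetup -listallhardwareports`); all of these need sudo, and whether they actually clear this particular degradation is untested:

```shell
# Bounce the interface:
sudo ifconfig en7 down && sudo ifconfig en7 up

# Re-request a DHCP lease on that interface:
sudo ipconfig set en7 DHCP

# Flush the routing table (it is rebuilt automatically):
sudo route -n flush
```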

ALOHA – Throughput and probabilities

I have a few questions regarding slotted ALOHA. Assume a network has 25 users and a transmission request probability p = 0.25.

1) What is the throughput, and what is the probability that a user successfully transmits a frame after three unsuccessful attempts? I have managed to calculate the throughput as 0.00627, but the major problem is the probability of succeeding after 3 attempts. Should I use these two formulas?

$$ n_a = \sum_{n=0}^\infty n (1-p_a)^n p_a $$

$$ n_a = \frac{1-p_a}{p_a} $$

2) What is the average number of unsuccessful attempts before a user can transmit a frame in the above problem?

Can somebody assist me?

Best regards
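As a numeric check of the figures in the question: N and p are given, and treating the per-attempt success probability as p_a = (1 − p)^(N−1) (no other user transmits in the slot) is my assumption about the intended model. Under that reading, "success after three unsuccessful attempts" is the geometric probability of succeeding on the fourth try:

```python
N, p = 25, 0.25

S = N * p * (1 - p) ** (N - 1)       # throughput: ≈ 0.00627, as in the question
p_a = (1 - p) ** (N - 1)             # assumed per-attempt success probability
p_after_3 = (1 - p_a) ** 3 * p_a     # fail 3 times, then succeed
mean_failures = (1 - p_a) / p_a      # expected unsuccessful attempts (question 2)

print(f"S = {S:.5f}, p_a = {p_a:.6f}")
print(f"P(success after 3 failures) = {p_after_3:.6f}")
print(f"mean failed attempts = {mean_failures:.1f}")
```

This also shows the two formulas quoted above are the same quantity: the sum is just the closed form (1 − p_a)/p_a for the mean of a geometric count of failures.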

How to analyse the throughput of multithreaded client server programs?

I am practicing socket programming in C. I have two programs – a TCP server and a client – on two different laptops. The server forks a new process for every request. To simulate multiple simultaneous clients, I use the pthreads library in my client program. The server has a lot of files, each of a fixed size (2 MB).

I am measuring two things at the client – throughput and response time. Throughput is the average number of files downloaded per second, and response time is the average time taken to download a file.

I want to find the number of simultaneous users (threads) at which throughput saturates, but I am having trouble analyzing this because the network speed varies widely.

Can someone suggest a way to analyze this? Maybe some metric other than throughput and response time that does not depend on network speed, because I need to find the number N of simultaneous users at which the server is at maximum load.
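One standard cross-check on client-side measurements is Little's law for a closed system, N = X · R (threads ≈ throughput × response time): once throughput X stops growing as you add threads, the server is saturated and response time grows roughly linearly with N, so the knee in the throughput curve identifies N. A sketch with made-up numbers (the measurements and the knee at 8 threads are illustrative):

```python
# Hypothetical runs: thread count -> (files downloaded, wall-clock seconds).
runs = {
    1:  (50,  100.0),
    2:  (100, 100.0),
    4:  (190, 100.0),
    8:  (200, 100.0),   # throughput flattens: saturation
    16: (205, 100.0),
}

def throughput(files, secs):
    """Files downloaded per second."""
    return files / secs

def response_time(threads, x):
    """Little's law for a closed system: R = N / X."""
    return threads / x

for n, (files, secs) in sorted(runs.items()):
    x = throughput(files, secs)
    r = response_time(n, x)
    print(f"{n:2d} threads: {x:.2f} files/s, {r:.2f} s/file")
```

If the product X · R stays close to N but X has plateaued, added threads are only queueing at the bottleneck; measuring server CPU and disk utilization at that point tells you whether the bottleneck is the server or the network.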

If network bandwidth control is necessary, then is there a simple way to cap the maximum upload speed at the server? I am using Ubuntu 18.04. The limit may be imposed on the whole system (for simplicity), since no other foreground process runs in parallel. For example, my upload speed varies from 3–5 MBps; can it be restricted to some lower value like 2 MBps?

Or, since the analysis is done by the client program, should I instead restrict the download speed at the client, so that it is constant and always below the server’s upload speed?
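For the server-side cap, one standard Ubuntu tool is `tc` from iproute2 with a token-bucket filter on the egress interface. A sketch, assuming the interface is `eth0` (substitute the real name from `ip link`) and a ~2 MBps (16 Mbit/s) target; this shapes all outbound traffic on that interface and needs root:

```shell
# Cap egress on eth0 to roughly 16 Mbit/s (~2 MB/s):
sudo tc qdisc add dev eth0 root tbf rate 16mbit burst 32kbit latency 400ms

# Inspect the current qdisc:
tc qdisc show dev eth0

# Remove the limit when done:
sudo tc qdisc del dev eth0 root
```

Note that `tc` only shapes egress, which is exactly the server-upload direction you want; capping at the sender also tends to give more stable results than throttling at the receiver.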

Latency and Throughput Bounds

Say that I have a superscalar processor and I am given the latency, issue and capacity (in clock cycles) for different instructions.

What is the general formula for latency bound and throughput bound? (I will convert to cost per instruction and billion instructions per second)

This topic seems too niche to find information about online.

In other words, how can I find the latency bound and throughput bound for fdiv, given

           Latency   Issue   Capacity
     fdiv     L        I        C

where L is the number of cycles for the fdiv latency, I for issue and C for capacity?

Would $\text{latency bound} = \frac{L}{I} \times C$ and $\text{throughput bound} = \frac{L}{C}$?

This is not the homework problem itself, but a definition required in order to start it.
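For what it’s worth, if the course follows the CS:APP (Bryant & O’Hallaron) convention for latency/issue/capacity tables, the usual cycles-per-instruction lower bounds differ from the guess above. This is my reading of that convention, not something given in the problem:

$$ \text{latency bound} = L \quad \text{(a chain of dependent fdivs cannot beat the latency)} $$

$$ \text{throughput bound} = \frac{I}{C} \quad \text{(} C \text{ units, each accepting a new operation every } I \text{ cycles)} $$

Converting to instructions per second then just means dividing the clock rate by the relevant bound.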

Issues with 10 Gb Network Throughput

I’ve got a question about 10 Gb networking that has me very stumped…

Here is my situation:

I’m only getting transfer rates of ~150 MB/s between my PC and NAS on my 10 Gb home network.

My setup:

[PC] Has Intel X550-T2 & a Samsung EVO 970 NVMe drive.

[Server] Has Intel X540-T2 & 3x 250 GB WD Blue SSDs in RAID 5

[NAS] Synology DS1517+ with an Intel X550-T2.

[Switch] MikroTik CRS305-1G-4S+IN with four MikroTik 10 Gb SFP+ modules

NOTE: All cabling is Cat6a SFTP.

ISSUE: Copying test files from [Server] to [NAS] runs at normal/fast speed (about 700 MB/s average, with spikes up to 1 GB/s). Unfortunately, [PC] to [NAS] is ultra slow (~150 MB/s average).

Troubleshooting Info: I ran iperf on all devices and speed was always ok (from the NAS to PC and also from the NAS to Server). I tried swapping out cables, tested cables via Intel driver software, tried using both ports of the X550-T2 on [PC] and also tried alternate ports on the MikroTik Switch. No joy.

I also ran disk speed tests on [PC] with the Samsung Magician software. Sequential read speed is 3,525 MB/s and sequential write speed is 2,302 MB/s. “Real world” speed on [PC] copying files within the NVMe is also fast (over 1 GB/s for a random assortment of files totaling ~6 GB).

Supplemental Info: [PC] is running Windows 10 64bit. [Server] is running Windows Server 2016. [NAS] has newest version of DSM (6.2.1-23824 Update 6)

Any suggestions?

Hello, I have a project that requires me to calculate data throughput

This is what my project asks, but I am really stuck on how to calculate the actual throughput. The throughput measurement should be carried out for two scenarios: with and without a security protocol implemented. To investigate where the observed difference comes from, you need to look into the security mechanism you have implemented, find out what additional overhead it introduces, and how this overhead results in reduced throughput.
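The measurement itself is just application bytes transferred divided by elapsed time, measured once with the security protocol enabled and once without. The overhead side of the analysis can be sketched numerically: each packet carries fixed protocol bytes that don't count as application data, so goodput shrinks by the payload-to-frame ratio. The sizes below are illustrative assumptions, not values from any particular protocol:

```python
def goodput(link_rate_bps, payload_bytes, overhead_bytes):
    """Fraction of the link rate that carries application data."""
    frame = payload_bytes + overhead_bytes
    return link_rate_bps * payload_bytes / frame

# Illustrative numbers only: 100 Mbit/s link, 1400-byte payloads,
# 40 bytes of TCP/IP headers, plus an assumed 29 bytes per record
# of security-protocol overhead (header + MAC + padding).
plain   = goodput(100e6, 1400, 40)
secured = goodput(100e6, 1400, 40 + 29)

print(f"plain:   {plain / 1e6:.1f} Mbit/s")
print(f"secured: {secured / 1e6:.1f} Mbit/s")
```

For the project you would replace the assumed overhead with the real per-packet bytes of your mechanism (captured with Wireshark, for example), and also account for extra round trips such as handshakes, which reduce throughput further on short transfers.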