Stress-testing for `tcp_mem`

Linux has a `tcp_mem` setting that bounds the amount of memory the kernel will allocate to TCP connections across all running applications. As per the official documentation:

> tcp_mem – vector of 3 INTEGERs: min, pressure, max
>
> min: below this number of pages TCP is not bothered about its memory appetite.
>
> pressure: when amount of memory allocated by TCP exceeds this number of pages, TCP moderates its memory consumption and enters memory pressure mode, which is exited when memory consumption falls under “min”.
>
> max: number of pages allowed for queueing by all TCP sockets.
>
> Defaults are calculated at boot time from amount of available memory.
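For reference, the thresholds and the kernel's current TCP page count can be read from procfs; a minimal sketch (standard Linux paths, my own helper names) to watch how close an application gets to the limit:

```python
# Read the three tcp_mem thresholds (in pages) and the number of pages
# currently in use by TCP, from the standard procfs locations on Linux.

def parse_tcp_mem(text):
    """Parse 'min pressure max' from /proc/sys/net/ipv4/tcp_mem."""
    low, pressure, high = (int(v) for v in text.split())
    return low, pressure, high

def parse_sockstat_tcp_mem(text):
    """Extract the 'mem' field (pages in use) from the TCP line of
    /proc/net/sockstat, e.g. 'TCP: inuse 5 orphan 0 tw 0 alloc 6 mem 1'."""
    for line in text.splitlines():
        if line.startswith("TCP:"):
            fields = line.split()
            return int(fields[fields.index("mem") + 1])
    return None

if __name__ == "__main__":
    with open("/proc/sys/net/ipv4/tcp_mem") as f:
        low, pressure, high = parse_tcp_mem(f.read())
    with open("/proc/net/sockstat") as f:
        in_use = parse_sockstat_tcp_mem(f.read())
    print(f"tcp_mem: min={low} pressure={pressure} max={high}; in use: {in_use} pages")
```

Polling `/proc/net/sockstat` while the reproducer runs is how the "6000 vs 4400" kind of comparison can be made.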

One application is breaching this threshold in Prod, and its `dmesg` log contains a line like: `TCP: out of memory -- consider tuning tcp_mem`.

For various reasons, I can't re-run the same application locally, so I'd like to write a simple application that reproduces this error on my machine.

So far, I’ve tried:

  • Large network downloads (HTTP GETs for pre-signed S3 URLs using Node.js, and S3 `get_object` calls using Python's Boto3 SDK).
  • Python's `socket` library to do client-server transfers over TCP (both closing and not closing the client sockets).

In all cases, I can reach the `tcp_mem` upper limit but not breach it, whereas the Prod application significantly breaches it (e.g., 6000 pages in use vs. a max of 4400).
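For concreteness, my socket-based attempt looks roughly like the sketch below: a server that accepts connections but never calls `recv()`, so unread payloads accumulate in the kernel's receive queues and count against `tcp_mem` (the function names, port, and sizes here are my own, not from any library):

```python
import socket
import threading

def quiet_server(port, ready):
    """Accept connections but never read from them, so sent data piles
    up in the kernel receive queues and is charged to tcp_mem."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1024)
    ready.set()
    conns = []
    while True:
        conn, _ = srv.accept()
        conns.append(conn)  # keep the socket open; deliberately never recv()

def flood(port, n_conns=100, chunk=b"x" * 65536):
    """Open many connections and write on each until its send buffer is
    full. Returns (total bytes queued, list of open sockets)."""
    total = 0
    socks = []
    for _ in range(n_conns):
        s = socket.create_connection(("127.0.0.1", port))
        s.setblocking(False)
        socks.append(s)
        try:
            while True:
                total += s.send(chunk)
        except BlockingIOError:
            pass  # buffers full for this connection; keep it open, move on
    return total, socks

if __name__ == "__main__":
    ready = threading.Event()
    threading.Thread(target=quiet_server, args=(5555, ready), daemon=True).start()
    ready.wait()
    queued, _ = flood(5555)
    print(f"queued ~{queued} bytes in kernel socket buffers")
```

This fills per-socket send and receive buffers, which is presumably why I can approach the limit, but it still doesn't breach it the way the Prod application does.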

So, my question: what can I try in order to reproduce breaching the `tcp_mem` limit locally?