Oracle Network Bandwidth

While planning to move my Oracle-based application to cloud hosting, I’ve found that one important factor to consider is the volume of data transferred to and from the database, since cloud providers limit it. I haven’t found a convenient way to estimate this volume from usage statistics (e.g. number of queries and average row size), nor a way to measure current utilization.

Are there any statistics built into Oracle that I could use, or is it better to monitor this from the network side, measuring the data transferred to and from the port Oracle listens on?
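
For anyone in the same position: Oracle does keep cumulative SQL*Net byte counters in V$SYSSTAT. Below is a minimal sketch of reading them, assuming python-oracledb and SELECT access on the view; the connection details are placeholders, not values from the question.

# Minimal sketch: read cumulative SQL*Net traffic counters from V$SYSSTAT.
# Assumes python-oracledb is installed and the account can query V$SYSSTAT;
# the connection details below are placeholders.
import oracledb

conn = oracledb.connect(user="monitor", password="secret", dsn="dbhost/ORCLPDB1")

SQL = """
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('bytes sent via SQL*Net to client',
                    'bytes received via SQL*Net from client')
"""

with conn.cursor() as cur:
    cur.execute(SQL)
    for name, value in cur:
        # Values are cumulative since instance startup; sample twice and
        # take the difference to get bytes per interval.
        print(f"{name}: {value / 1024 / 1024:.1f} MB")

The same statistic names are available per session in V$SESSTAT joined to V$STATNAME if you need to attribute the traffic to particular applications; sampling periodically and differencing gives bytes per interval.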

What can be done to increase bandwidth from one Citrix XenServer guest to another on the same physical host?

Whether using UDP or TCP, guest-VM-to-guest-VM bandwidth on a single Citrix XenServer 7.6 host caps out at about 5-6 Gbit/s. Would roughly 5 Gbit/s be comparable to Amazon AWS Xen performance? I need this to be much faster for iSCSI performance reasons.

GOOD: iperf3 client to an iperf3 server on localhost (same VM):

root@ubuntu:~# iperf3 -c localhost
Connecting to host localhost, port 5201
[  4] local ::1 port 43350 connected to ::1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  3.10 GBytes  26.6 Gbits/sec    0   1.62 MBytes
[  4]   1.00-2.00   sec  3.11 GBytes  26.7 Gbits/sec    0   2.37 MBytes
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  36.2 GBytes  31.1 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  36.2 GBytes  31.1 Gbits/sec                  receiver
iperf Done.
root@ubuntu:~#

BAD: UDP guest-VM-to-guest-VM bandwidth: it does not even reach Gigabit Ethernet speed, yet traffic between guests on the same host should go over shared memory. I suppose the packet loss points to something.

PS C:\Users\Administrator> iperf3 -u -b 10000000000 -c 192.168.2.251
Connecting to host 192.168.2.251, port 5201
[  4] local 192.168.2.159 port 51835 connected to 192.168.2.251 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec  68.2 MBytes   572 Mbits/sec  8731
[  4]   1.00-2.00   sec  79.7 MBytes   669 Mbits/sec  10205
[  4]   2.00-3.00   sec  76.8 MBytes   644 Mbits/sec  9825
[  4]   3.00-4.00   sec  80.5 MBytes   675 Mbits/sec  10308
[  4]   4.00-5.00   sec  73.9 MBytes   620 Mbits/sec  9463
[  4]   5.00-6.00   sec  70.5 MBytes   591 Mbits/sec  9020
[  4]   6.00-7.00   sec  74.8 MBytes   628 Mbits/sec  9575
[  4]   7.00-8.00   sec  82.3 MBytes   691 Mbits/sec  10536
[  4]   8.00-9.00   sec  79.5 MBytes   667 Mbits/sec  10178
[  4]   9.00-10.00  sec  73.0 MBytes   613 Mbits/sec  9350
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-10.00  sec   759 MBytes   637 Mbits/sec  0.054 ms  166/97191 (0.17%)
[  4] Sent 97191 datagrams
iperf Done.
PS C:\Users\Administrator>

Still BAD: TCP guest-VM-to-guest-VM bandwidth: about as fast as an old hard drive.

PS C:\Users\Administrator> iperf3 -b 900000000000 -c 192.168.2.251
Connecting to host 192.168.2.251, port 5201
[  4] local 192.168.2.159 port 49187 connected to 192.168.2.251 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   659 MBytes  5.53 Gbits/sec
[  4]   1.00-2.00   sec   599 MBytes  5.02 Gbits/sec
[  4]   2.00-3.00   sec   610 MBytes  5.11 Gbits/sec
[  4]   3.00-4.00   sec   650 MBytes  5.45 Gbits/sec
[  4]   4.00-5.00   sec   600 MBytes  5.04 Gbits/sec
[  4]   5.00-6.00   sec   632 MBytes  5.31 Gbits/sec
[  4]   6.00-7.00   sec   602 MBytes  5.05 Gbits/sec
[  4]   7.00-8.00   sec   626 MBytes  5.26 Gbits/sec
[  4]   8.00-9.00   sec   625 MBytes  5.24 Gbits/sec
[  4]   9.00-10.00  sec   615 MBytes  5.16 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.07 GBytes  5.22 Gbits/sec                  sender
[  4]   0.00-10.00  sec  6.07 GBytes  5.22 Gbits/sec                  receiver
iperf Done.
PS C:\Users\Administrator>

VERY BAD: UDP, iperf3 server running on the Citrix XenServer host itself: it would not even reach Gigabit Ethernet speed.

iperf3 -u -b 10000000000 -c LocalXenHost

The result was only 650 Mbit/s.

BAD: TCP, iperf3 server running on the Citrix XenServer host:

iperf3 -b 10000000000 -c LocalXenHost
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  6.46 GBytes  5.54 Gbits/sec                  sender
[  4]   0.00-10.00  sec  6.46 GBytes  5.54 Gbits/sec                  receiver
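
One check worth making before blaming the datapath (a sketch, not something from the runs above): see whether a single TCP stream is the limit by running several parallel streams and comparing the aggregate. The server IP, stream count, and the JSON field names below are assumptions on my part.

# Sketch: run iperf3 with several parallel TCP streams and report the
# aggregate throughput, to see whether one stream (rather than the host
# datapath) is the bottleneck. Host IP and stream count are placeholders.
import json
import subprocess

HOST = "192.168.2.251"   # iperf3 server (placeholder)
STREAMS = 4

result = subprocess.run(
    ["iperf3", "-c", HOST, "-P", str(STREAMS), "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
# Field names as I recall them from iperf3's JSON report:
# "end" -> "sum_received" holds the aggregate for all parallel streams.
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"{STREAMS} streams: {bps / 1e9:.2f} Gbit/s aggregate")

If four streams scale well past a single stream, the limit is per-connection processing in the guests rather than the Xen network backend.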

SparkVPS – High Bandwidth KVM VPS! 2GB SSD KVM VPS with 5TB Bandwidth for $25/year and more in Dallas and New York (Free Month Offer)

Dylan from SparkVPS wrote in to share their new Spring specials with the community. We thought the offers looked pretty attractive for KVM, so here they are. We have been told that everything here is powered by pure SSDs for maximum performance.

They’ve received positive reviews on their previous offers here, and we hope you enjoy what they have to offer today! As always, we want to hear your feedback, so if you decide to buy one, please be sure to comment below!

Their WHOIS is public, and you can find their ToS/Legal Docs here. We have been informed that they are now accepting payments via Alipay. Additionally, they accept PayPal, Credit Cards, Debit Cards, and Cryptocurrency (Bitcoin, Litecoin, Ethereum, Monero, Bitcoin Cash) as available payment methods.

Here’s what they had to say:

“SparkVPS is a faster-than-SSD VPS provider. Here at SparkVPS, we specialize in providing VPS hosting. We are able to deliver unparalleled performance in the industry via our proprietary technology, which we call “MaxIO”; combined with our optimized local RAID-10 pure-SSD storage, it allows us to deliver VPS hosting that is twice as fast as traditional SSD VPS hosting.

Exclusive for the LowEndBox community, we have some KVM SSD VPS offers, which support Docker, Custom Kernel Builds, and so much more!”

Here are the offers:

****
Order any special below and get a 1 month free service extension. After your order has been completed, open a support ticket with our billing department and mention this post to claim your free month extension!
****

2GB RAM SSD KVM VPS

  • 2 CPU Cores
  • 2048MB RAM
  • 25GB SSD Storage
  • 5000GB Bandwidth
  • 1Gbps
  • KVM/SolusVM
  • Root Access Included
  • Docker, Kubernetes, ServerPilot, Custom Kernels, Custom OS 100% Supported!
  • $ 25/yr
  • [ORDER]

4GB RAM SSD KVM VPS

  • 4 CPU Cores
  • 4096MB RAM
  • 50GB SSD Storage
  • 15000GB Bandwidth
  • 1Gbps
  • KVM/SolusVM
  • Root Access Included
  • Docker, Kubernetes, ServerPilot, Custom Kernels, Custom OS 100% Supported!
  • $ 55/yr
  • [ORDER]

6GB RAM SSD KVM VPS

  • 6 CPU Cores
  • 6144MB RAM
  • 80GB SSD Storage
  • 20000GB Bandwidth
  • 1Gbps
  • KVM/SolusVM
  • Root Access Included
  • Docker, Kubernetes, ServerPilot, Custom Kernels, Custom OS 100% Supported!
  • $ 90/yr
  • [ORDER]

NETWORK INFO:

Dallas, TX, USA
Test IPv4: 192.3.237.150
Test file: http://192.3.237.150/1000MB.test

Buffalo, NY, USA
Test IPv4: 192.3.180.103
Test file: http://192.3.180.103/1000MB.test


KVM Host Nodes
– Intel Dual Xeon E5 Series Processors
– 128GB DDR3 RAM
– Samsung Enterprise SSDs
– Dual/Redundant Power Supply
– 1Gbps Network Uplink

Please let us know if you have any questions/comments and enjoy!

4k wallpapers app high bandwidth problem [on hold]

I have a wallpaper app with 10k+ wallpapers, each at least 1 MB. The app is in testing, and lately I noticed that just my own testing over a few days used around 15 GB of the 1 TB bandwidth included in my hosting plan. The app fetches wallpapers from an API backed by my database. What do you suggest I do about this? Sorry if this is not the right forum for my question, and apologies for my English. Thanks.

As Mr. Stephen Ostermiller asked: the website (server) is just a backend, nothing for users to see. It is only accessible by URL as an admin panel for uploading, deleting and managing wallpapers in general. I monitor traffic through my apache2 access.log and there are no unusual requests there. To be sure, the app cache is really big and I clear it every time after using the app, so yes, I assume it is only my own traffic.
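
One common way to cut this kind of transfer (just an illustration, not something already in the app) is to serve small thumbnails in the gallery view and only download the full-size file when a wallpaper is actually applied. A minimal Pillow sketch, with the directory names, file types and sizes as assumptions:

# Minimal sketch: pre-generate small thumbnails so the gallery view never
# has to download the >1 MB originals. Paths, formats and sizes are assumptions.
from pathlib import Path
from PIL import Image

SRC = Path("wallpapers")        # full-size originals (assumed layout)
DST = Path("thumbnails")
DST.mkdir(exist_ok=True)

for src in SRC.glob("*.jpg"):   # assuming JPEG originals
    with Image.open(src) as img:
        img.thumbnail((480, 480))                # fit within 480x480, keeps aspect ratio
        img.save(DST / (src.stem + ".webp"),     # WebP is typically much smaller than JPEG
                 "WEBP", quality=75)

Combined with long Cache-Control headers (or a CDN in front of the API), repeat views of the same thumbnails would then cost almost nothing.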

Why does the snapd service use so much bandwidth?

I am on Ubuntu 18.04 LTS, updated to the latest packages. Recently I noticed something unusual about my Internet connection on Ubuntu: my limited Internet quota was being used up quickly by something invisible. As a user coming from Windows this was odd, as Ubuntu had never done such a thing to me before. I installed nethogs and found that the culprit consuming my data was /usr/lib/snapd/snapd.

I found a somewhat similar question here, but it does not answer what I am going to ask. Removing snapd from startup did not help either.

  • Why does snapd use this much data?
  • Is there a way to stop those connections without disabling snap apps?


How do I calculate the runtime bandwidth of the PCIe socket using Processor Counter Monitor (PCM)?

I’m working in deep learning and I’m trying to identify a bottleneck in our GPU pipeline.

We’re running Ubuntu on an Intel Xeon motherboard with 4 NVIDIA Titan RTXs. GPU utilization as measured by nvidia-smi seems to be pretty low even though the GPU memory usage is around 97%.

So I’m trying to see if the bus is the bottleneck.

I’ve downloaded PCM and I’m running it to monitor the PCIe 3.0 x16 traffic.

Processor Counter Monitor: PCIe Bandwidth Monitoring Utility
 This utility measures PCIe bandwidth in real-time
 PCIe event definitions (each event counts as a transfer):
   PCIe read events (PCI devices reading from memory - application writes to disk/network/PCIe device):
     PCIeRdCur* - PCIe read current transfer (full cache line)
         On Haswell Server PCIeRdCur counts both full/partial cache lines
     RFO*      - Demand Data RFO
     CRd*      - Demand Code Read
     DRd       - Demand Data Read
   PCIe write events (PCI devices writing to memory - application reads from disk/network/PCIe device):
     ItoM      - PCIe write full cache line
     RFO       - PCIe partial Write
   CPU MMIO events (CPU reading/writing to PCIe devices):
     PRd       - MMIO Read [Haswell Server only] (Partial Cache Line)
     WiL       - MMIO Write (Full/Partial)
...
Socket 0: 2 memory controllers detected with total number of 6 channels. 3 QPI ports detected. 2 M2M (mesh to memory) blocks detected.
Socket 1: 2 memory controllers detected with total number of 6 channels. 3 QPI ports detected. 2 M2M (mesh to memory) blocks detected.
Trying to use Linux perf events... Successfully programmed on-core PMU using Linux perf
Link 3 is disabled
Link 3 is disabled
Socket 0
Max QPI link 0 speed: 23.3 GBytes/second (10.4 GT/second)
Max QPI link 1 speed: 23.3 GBytes/second (10.4 GT/second)
Socket 1
Max QPI link 0 speed: 23.3 GBytes/second (10.4 GT/second)
Max QPI link 1 speed: 23.3 GBytes/second (10.4 GT/second)
Detected Intel(R) Xeon(R) Gold 5122 CPU @ 3.60GHz "Intel(r) microarchitecture codename Skylake-SP" stepping 4 microcode level 0x200004d
Update every 1.0 seconds
delay_ms: 54
Skt | PCIeRdCur |  RFO  |  CRd  |  DRd  |  ItoM  |  PRd  |  WiL
 0      13 K        19 K     0       0      220 K    84     588
 1       0        3024       0       0        0       0     264
-----------------------------------------------------------------------
 *      13 K        22 K     0       0      220 K    84     852

Ignore the actual values for a moment. I have much bigger values. 🙂

How do I calculate the runtime bandwidth of the PCIe socket using Processor Counter Monitor (PCM)?

Why are there only two sockets listed?
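
My working assumption, based on the event definitions above ("each event counts as a transfer" of a cache line), is that every counted event moves one 64-byte line, so the conversion I have in mind looks like the sketch below; the counts and the 1-second interval are placeholders taken from the sample output.

# Rough PCIe bandwidth estimate from pcm-pcie style event counts.
# Assumption: each counted event transfers one 64-byte cache line and the
# counters were sampled over a 1.0 s interval (numbers are placeholders).
CACHE_LINE_BYTES = 64
INTERVAL_S = 1.0

# Device reads from memory (i.e. the application writing to the PCIe device).
read_events = {"PCIeRdCur": 13_000, "RFO": 19_000, "CRd": 0, "DRd": 0}
# Device writes to memory (i.e. the application reading from the PCIe device).
write_events = {"ItoM": 220_000}

read_bw = sum(read_events.values()) * CACHE_LINE_BYTES / INTERVAL_S
write_bw = sum(write_events.values()) * CACHE_LINE_BYTES / INTERVAL_S

print(f"device reads from memory : {read_bw / 1e6:6.1f} MB/s")
print(f"device writes to memory  : {write_bw / 1e6:6.1f} MB/s")

With a PCIe 3.0 x16 slot good for roughly 15-16 GB/s in each direction, numbers in this range would suggest the bus is nowhere near saturated.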

What happens when transaction bandwidth is exceeded?

A search for definitive information turned up literally nothing, so: what happens when transaction bandwidth is exceeded for an extended period?

After a period of two weeks (the default), transactions are dropped from the mempool. If this continues for three months, then all transactions from weeks one through eleven will need to be resubmitted either a) when the load on the mempool dies down a bit, or b) with higher fees, displacing some other transaction in their place.

I have seen the mempool hold over 50,000 transactions for many days in a row, perhaps weeks. So should this actually be called a quasi-payment system, where it is possible that any given transaction may eventually be forgotten if the user does not keep on top of it?

How to Define the Bandwidth in Mean Shift Clustering?

I am writing a Java program to do color quantization using the mean shift clustering algorithm. The image is RGB with a resolution of 512×512. I want to reduce the image file size by reducing the total number of colors in the input image.

I have a problem with defining the bandwidth used with the squared Euclidean distance in the mean shift algorithm.

How do you determine an appropriate bandwidth for the data? Is there any formula to define it?
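
The program in question is Java, but as an illustration of one common heuristic (deriving the bandwidth from nearest-neighbour distances at a chosen quantile), here is a sketch using scikit-learn's estimate_bandwidth on the image's RGB pixels; the quantile of 0.1 and the file names are assumptions to tune.

# Sketch: estimate a mean-shift bandwidth from the data itself, then cluster
# the RGB pixels of an image to quantize its colors. Note that MeanShift on a
# full 512x512 image is slow; subsampling pixels first is common in practice.
import numpy as np
from PIL import Image
from sklearn.cluster import MeanShift, estimate_bandwidth

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
pixels = img.reshape(-1, 3)                       # 512*512 x 3 feature matrix

# Bandwidth = average distance to neighbours at the given quantile of the data.
bandwidth = estimate_bandwidth(pixels, quantile=0.1, n_samples=2000)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(pixels)

# Replace every pixel by its cluster centre to get the quantized image.
quantized = ms.cluster_centers_[ms.labels_].reshape(img.shape).astype(np.uint8)
Image.fromarray(quantized).save("quantized.png")
print(f"bandwidth={bandwidth:.1f}, colors={len(ms.cluster_centers_)}")

A smaller quantile gives a smaller bandwidth and therefore more clusters (more colors); in practice you sweep it until the color count and visual quality meet your file-size target.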