SupremeVPS – Large Cloud SSD VPS Resource Pools – Various locations, starting at $69 a year!

SupremeVPS is back with some exclusive offers for the community. These are large VPS resource pools available in various locations.
An SSD VPS resource pool allows you to create multiple VPS servers within the resource limits of your plan. You can create one large VPS, or several smaller ones, drawing on the resources available in your pool.

Their WHOIS is public, and you can find their ToS/Legal Docs here. They accept PayPal, Credit Cards, Alipay, WeChat Pay, Bitcoin, Litecoin and Ethereum as payment methods.

Here’s what they had to say: 

“We are on a mission to make VPS hosting affordable, easy to use, and transparent. Since day one, we have been on a constant mission to change the VPS hosting industry. Having experienced it ourselves, we have found VPS hosting to be rather tedious with hidden fees, upsells, poor support, etc. – and SupremeVPS was born to change that and to set a new standard – a high standard, for that matter. Today we are successfully empowering over 1500 customers from all over the world!

Our pricing is simple, flat-rate, and easy to understand. No calculator needed, and there are absolutely zero hidden fees. SupremeVPS was born to be simple & easy to use – and our intuitive platform allows you to deploy in under 60 seconds.”

Here are the offers: 

8x SSD Cloud VPS Pool

  • Create Up To 8 VPS’s
  • 8 vCPU Cores
  • 8GB RAM
  • 100GB SSD Storage
  • 10TB Monthly Transfer
  • 1Gbps Port
  • 8x IPv4 Addresses
  • Linux OS Options
  • Resource Manager Panel
  • OpenVZ
  • Los Angeles, Chicago and New York Locations
  • $69/yr
  • [ORDER]

16x SSD Cloud VPS Pool

  • Create Up To 16 VPS’s
  • 16 vCPU Cores
  • 16GB RAM
  • 300GB SSD Storage
  • 30TB Monthly Transfer
  • 1Gbps Port
  • 16x IPv4 Addresses
  • Linux OS Options
  • Resource Manager Panel
  • OpenVZ
  • Los Angeles, Chicago and New York Locations
  • $110/yr
  • [ORDER]

NETWORK INFO:

Los Angeles, California (530 W. 6th St. Datacenter Facility)

Test IPv4: 107.175.180.6

Test file: http://107.175.180.6/1000MB.test

Chicago, Illinois (2200 Busse Rd., Elk Grove Village Facility)

Test IPv4: 172.245.240.34

Test file: http://172.245.240.34/1000MB.test

Buffalo, New York (325 Delaware Ave. Buffalo, NY Facility)

Test IPv4: 192.3.180.103

Test file: http://192.3.180.103/1000MB.test
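
For anyone who wants to sanity-check throughput from their own location before ordering, here is a minimal sketch in Python (standard library only, Python 3.8+); the URL is the Los Angeles test file from above, and you can swap in either of the other two:

import time, urllib.request

url = "http://107.175.180.6/1000MB.test"  # any of the test files above
start = time.time()
total = 0
with urllib.request.urlopen(url) as resp:
    while chunk := resp.read(1 << 20):  # stream in 1 MiB chunks
        total += len(chunk)
elapsed = time.time() - start
print(f"{total / 1e6:.0f} MB in {elapsed:.1f}s ({total / elapsed / 1e6:.1f} MB/s)")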

Host Node Specifications:

– Dual Intel Xeon E5-2660v2

– 128GB DDR3 RAM

– 4x Samsung 860 SSDs in RAID-10

– 1Gbps uplinks

Please let us know if you have any questions/comments and enjoy!

When a druid reverts to their normal form having wild shaped into a Large beast, can they choose which square they occupy?

When a druid reverts to their normal form, they appear in the space their beast shape occupied. But if the beast shape was Large and their normal form is Medium, can they choose which of the four squares they reappear in?

For example, if a druid wild-shaped into a Polar Bear (occupying 4 squares) were engaged in melee with several opponents standing in a line, when the beast shape drops to 0 hit points, can they choose for their normal form to appear in a square that isn’t adjacent to those opponents?

Depending on the placement of enemies, this allows a druid whose beast form is low on hit points to retreat without provoking opportunity attacks and still take an action on the same turn, rather than spending their action to Disengage.

Is it any different if they leave wild shape as a bonus action, or cease to concentrate on a polymorph spell?

Both R and Python get the arithmetic wrong with very large numbers

Background:

I was watching a YouTube video about a Diophantine equation. tl;dr – someone finally found the three numbers x, y, and z where:

x^3 + y^3 + z^3 = 33  

The three numbers are:

x = 8866128975287528

y = -8778405442862239

z = -2736111468807040

However, both R and Python provide the wrong answers when I sum the cubes of these three numbers, while Unix’s bc calculator provides the right answer.

R output:

> x <- 8866128975287528
> y <- -8778405442862239
> z <- -2736111468807040
> (x^3 + y^3 + z^3)
[1] -2535301200456458802993406410752

Python output (in Jupyter notebook):

x = 8866128975287528
y = -8778405442862239
z = -2736111468807040
x^3 + y^3 + z^3
27885068152614

bc (unix calculator) output:

x = 8866128975287528
y = -8778405442862239
z = -2736111468807040
x^3 + y^3 + z^3
33

I was just doing this out of boredom at work, but it has me worried. Only the Unix calculator gives the right answer, and the two wrong answers from R and Python even differ from each other! Does this mean they are unreliable when I need to use large numbers in a data science project?

Any help would be appreciated. Thank you!
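
For what it’s worth, the discrepancy is explainable: in Python, ^ is bitwise XOR rather than exponentiation, and R stores these values as double-precision floats, whose 53-bit mantissa cannot represent cubes of order 10^47 exactly. A minimal check with Python’s ** operator, which works on arbitrary-precision integers:

x = 8866128975287528
y = -8778405442862239
z = -2736111468807040

# Python ints are arbitrary precision; ** is exponentiation (^ is XOR)
print(x**3 + y**3 + z**3)  # 33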

Does the Lesser Beast Totem claw damage scale up to 1d8 for Large creatures?

I have a half-giant barbarian (built from the Advanced Races Guide). I’ve reached 2nd level and have taken the Lesser Beast Totem rage power.

I’ve noticed that the claws only mention scaling down from the Medium 1d6 damage to the Small 1d4 damage. Since I’m Large, does the damage scale up to 1d8?

Are there any rules or guidelines for Large character races?

I’m designing some custom races for my campaign and I’d like to allow Large-size characters. It seems that Large characters are discouraged for some reason; I’d like to know why, but what’s more important is finding official rules or guidelines on how to create Large character races like minotaurs, half-ogres, centaurs, etc.

How do I scale up a monster for a large party?

I am DMing a game for 6 PCs who are currently level 5.

During gameplay, even high-CR monsters have been easy pickings for them due to their advantage in numbers. They are starting to underestimate single-monster encounters.

I don’t always want to match their numbers with more monsters (sometimes it makes sense for a particular opponent to be alone).

Problem to solve

I don’t want to shower them with hordes of monsters at every encounter. How do I scale up some of the more story-driving monsters for a more memorable campaign?

Dealing with large code base quickly in agile

At my current company, the project I work on is coded in Java, at least for the systems/backend part. Whenever I get assigned a task dealing with the Java code, it takes me hours or even days to figure everything out and apply my solutions. The reasons are:

1) Very large Java EE code base

2) A lot of abstraction

3) I get lost tracing all the abstractions (methods calling other methods), then spiral down a hole where I start thinking about something else and forget my original solution.

My work environment is agile and I am expected to deliver quickly, but as a fairly new member of the company working in a huge code base built before I joined, it is difficult for me to meet “agile” timing.

How is one supposed to deal with such a huge code base in a timely manner, when nearly every single line/function leads to another abstraction, and within those are even more abstractions?

Edit: I know I can ask my peers, but at the same time I do not want to constantly bother them, since problem solving is part of my job.

Efficient algorithm to simulate dealing cards from a large deck of cards?

In shedding-type card games, the dealer starts by dealing a shuffled deck of cards to the players (if there are N players, card i goes to player i mod N). If the number of each type of card is known, is there a way to simulate dealing the cards and get a count of each type of card each player has?

For example, if there are N=3 players, and there are 5 card types and the count of each card is [3000000000, 3000000000, 3000000000, 3000000000, 2], one possible output is [1000000000, 1000000000, 1000000000, 1000000000, 1], [1000000000, 1000000000, 1000000000, 1000000000, 1], [1000000000, 1000000000, 1000000000, 1000000000, 0], which represents the number of cards of each type that each player has.

A naive algorithm is to create a large array of all the cards, Fisher–Yates shuffle it, and then walk through it, incrementing the current player’s count for each card and advancing to the next player.
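
For concreteness, the naive algorithm might look like this in Python; it is O(total cards) in time and memory, which is hopeless when the counts run into the billions as in the example above:

import random
from collections import Counter

def deal_naive(type_counts, num_players):
    # Materialize every card, shuffle uniformly, deal round-robin.
    deck = [t for t, count in enumerate(type_counts) for _ in range(count)]
    random.shuffle(deck)  # Fisher-Yates shuffle
    hands = [Counter() for _ in range(num_players)]
    for i, card in enumerate(deck):
        hands[i % num_players][card] += 1
    return [[hand[t] for t in range(len(type_counts))] for hand in hands]

print(deal_naive([3, 3, 3, 3, 2], 3))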

Is there a more efficient algorithm? It would have to produce outputs with the same probability distribution as the naive algorithm.
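
A sketch of one candidate approach: each player’s hand occupies a fixed set of positions in the shuffled deck, and the multiset of types landing on any fixed set of positions is multivariate hypergeometric, so player 1’s hand can be drawn from the full deck, player 2’s from what remains, and so on; the joint distribution should match the naive algorithm’s. Here it is with NumPy’s multivariate_hypergeometric (note that NumPy’s hypergeometric samplers limit the population size to around 10^9, so billion-scale counts like the example’s would need an exact big-integer sampler built along the same lines):

import numpy as np

def deal_counts(type_counts, num_players, rng=None):
    # Sequential multivariate hypergeometric draws: player p's hand is
    # a uniform subset of the cards left after players 0..p-1 are dealt.
    rng = rng or np.random.default_rng()
    remaining = np.array(type_counts, dtype=np.int64)
    total = int(remaining.sum())
    hands = []
    for p in range(num_players):
        # Round-robin dealing gives player p total//N cards,
        # plus one extra when p < total % N.
        hand_size = total // num_players + (1 if p < total % num_players else 0)
        hand = rng.multivariate_hypergeometric(remaining, hand_size)
        hands.append(hand.tolist())
        remaining -= hand
    return hands

print(deal_counts([30, 30, 30, 30, 2], 3))

This runs in O(num_players × num_types) hypergeometric draws instead of O(total cards).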