InfluxDB performance

Please, what is the current state of the art with respect to InfluxDB performance tuning? What are the limits of InfluxDB in terms of operations per second under different architectures? What are the recommended practices for setting up InfluxDB for a large-scale deployment? And what about interfacing with Zabbix?

What was bardic performance like in the D&D Next playtest?

My RPG group is made up of players who had all played either 2nd or 3rd edition before, where bards had a combat-bonus ability significantly different from what’s in D&D 5e. In both older editions, the bard was able to grant a combat bonus to all of its allies within earshot.

This was significantly changed for 5e, with bards instead being able to give their allies a “bonus die” which can only be used on a single roll. The bonus is so different from what everyone’s used to that, so far, no one’s actually used the bardic inspiration die to modify a roll.

Based on statements like this answer to “What changed between the playtest and 5e?”, it seems like for at least part of the playtest bards had a significantly different ability. While we wait for whatever optional rule is (hopefully) in the DMG, can someone describe in broad strokes what the playtest’s “bardic performance” was like?

How to set up SQL Server on VMware for best performance

We have a Dell R940 server with 64 CPUs.

Processor type: Intel(R) Xeon(R) Platinum 8253 CPU @ 2.20 GHz, 4 sockets × 16 cores per socket (128 logical processors). Memory: 767 GB.

We set the server up on a VMware 7.0 hypervisor. One of the virtual machines runs SQL Server 2019 Enterprise Edition on Windows Server 2019 Standard, where we have mission-critical workloads.

Can anyone suggest a virtual CPU (cores per socket) and RAM configuration that gets the best performance out of this machine?

Is there a performance loss with out-of-sequence inserted rows (MySQL InnoDB)?

I am trying to migrate from a larger MySQL AWS RDS instance to a smaller one, and data migration is the only method. There are four tables in the range of 330 GB-450 GB, and executing mysqldump in a single thread, piped directly to the target RDS instance, is estimated by pv to take about 24 hours (copying at 5 mbps).

I wrote a bash script that launches multiple mysqldump processes in the background (with ‘&’ at the end) and gives each a calculated --where parameter, to simulate multithreading. This works, and it currently takes less than an hour with 28 threads.
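For concreteness, the chunking approach described above can be sketched roughly as follows. This is a minimal dry-run sketch, not the poster’s actual script: the database name, table name, target host, id bounds, and thread count are all placeholder assumptions, and it echoes the commands instead of executing them (drop the echo and append ‘&’ to each command, then ‘wait’, to actually run the dumps in parallel).

```shell
#!/bin/sh
# Split a table's auto_increment id range into one --where slice per
# thread, so each mysqldump copies a disjoint chunk of rows.

plan_chunks() {
    min_id=$1; max_id=$2; threads=$3
    # Ceiling division so the last chunk reaches max_id.
    chunk=$(( (max_id - min_id + threads) / threads ))
    i=0
    while [ "$i" -lt "$threads" ]; do
        lo=$(( min_id + i * chunk ))
        hi=$(( lo + chunk - 1 ))
        [ "$hi" -gt "$max_id" ] && hi=$max_id
        # Dry run: print the command for one chunk.
        echo "mysqldump --single-transaction --no-create-info mydb big_table --where='id BETWEEN $lo AND $hi' | mysql -h target-rds mydb"
        i=$(( i + 1 ))
    done
}

# Hypothetical bounds: SELECT MIN(id), MAX(id) FROM big_table; 28 threads.
plan_chunks 1 28000000 28
```

One caveat with this pattern: each mysqldump process opens its own transaction, so the 28 slices are not a single consistent snapshot of the table; it is safest to stop writes to the source (or dump from a replica) while the copy runs.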

However, I am concerned about a potential loss of query performance in the future, since rows will not be inserted in the order of their auto_increment id columns.

Can someone confirm whether this would be the case, or whether I am being paranoid for no reason?

What solution did you use for a single table in the hundreds of GB? For a particular reason I want to avoid AWS DMS, and I definitely don’t want to use tools that haven’t been maintained in a while.

For every imperative function, is there a functional counterpart with identical performance or even instructions?

So far, I haven’t come across a functional language that can achieve the same performance as C/C++. I have also learned that some languages that favor functional programming over imperative programming, such as Scala and Rust, implement their library functions imperatively for better efficiency.

So here comes my question: on today’s computers, which execute imperative instructions, is this a limitation of the compiler or of functional programming itself? For every imperative function with no side effects, whether in a language without GC (such as C/C++/Rust/assembly) or with GC (such as Java), is there a pure functional counterpart in Haskell, Scala, etc. that can be compiled to run with identical time and space performance (not just asymptotically, but exactly the same), or even to the same instructions, given an optimal functional compiler that utilizes all modern and even undiscovered optimization techniques (tail-call elimination, laziness, static analysis, formal verification, and so on)?

I am aware of the equivalence between λ-computability and Turing computability, but I couldn’t find an answer to this question online. If there is one, please share a compiler example or a proof. If not, please explain why and show a counterexample. Or is this a non-trivial open question?

AWS RDS is showing very high wait/synch/mutex/sql/ values and EXPLAIN statements in Performance Insights

I’m running a CRON script which checks the database for work and executes anything that needs to be done. It does this across ~500 customers per minute. We are using AWS RDS with a 16 vCPU instance which, until recently, has been plenty to keep things happy (normally plugging along under 20% CPU).

This weekend we updated customers to the latest version of the code and implemented some tooling, and since then we’ve started seeing these huge waits.

Further, I’m seeing that about half of our busiest queries are EXPLAIN statements.

Nowhere in our code base is an "EXPLAIN" performed (though we are using AWS RDS Performance Insights, ProxySQL, and New Relic for monitoring). I did also notice that over the past week our number of DB connections went from a previous baseline of around 10 to closer to 90.

Any ideas on where I should be digging to find the cause of these waits and EXPLAIN statements? And could they account for the large number of open connections?

MySQL performance issue with ST_Contains not using spatial index

We are having what seems to be a fairly large MySQL performance issue when running a fairly simple UPDATE statement. We have a houses table (1.8 million rows) with a lat/long geometry point column (geo), and a schools table (6,000 rows) with a boundary geometry polygon column (boundary). We have spatial indexes on both, and the UPDATE is meant to set, on each house row, the id of the school whose boundary contains the house’s point. The UPDATE is taking 1 hour and 47 minutes to update 1.6 million records. In other systems I have used in my past experience, something like this would take just a few minutes. Any recommendations?

I have posted this same question in the GIS SE site as well, as it is very much a GIS & DBA question.

CREATE TABLE houses (
  ID int PRIMARY KEY NOT NULL,
  Latitude float DEFAULT NULL,
  Longitude float DEFAULT NULL,
  geo point GENERATED ALWAYS AS (st_srid(point(ifnull(`Longitude`,0),ifnull(`Latitude`,0)),4326)) STORED NOT NULL,
  SPATIAL INDEX spidx_houses(geo)
) ENGINE = INNODB, CHARACTER SET utf8mb4, COLLATE utf8mb4_0900_ai_ci;

CREATE TABLE schoolBound (
  ID int PRIMARY KEY NOT NULL,
  BOUNDARY GEOMETRY NOT NULL,
  reference VARCHAR(200) DEFAULT NULL,
  type bigint DEFAULT NULL,
  INDEX idx_reference(reference),
  INDEX idx_type(type),
  SPATIAL INDEX spidx_schoolBound(BOUNDARY)
) ENGINE = INNODB, CHARACTER SET utf8mb4, COLLATE utf8mb4_0900_ai_ci;
-- type 4 means it's an elementary school
UPDATE houses hs
  INNER JOIN schoolBound AS sb
    ON ST_Contains(sb.boundary, hs.geo) AND sb.type = 4
SET hs.elementary_nces_code = sb.reference;

The EXPLAIN output seems to show that it is not going to use the spatial index on schoolBound.

+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra                                          |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+
|  1 | SIMPLE      | sb    | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    6078 |    10.00 | Using where                                    |
|  1 | UPDATE      | hs    | NULL       | ALL  | spidx_houses  | NULL | NULL    | NULL | 1856567 |   100.00 | Range checked for each record (index map: 0x4) |
+----+-------------+-------+------------+------+---------------+------+---------+------+---------+----------+------------------------------------------------+

How to negate disadvantage on performance?

My bard will need to play for a long time. A long, long time. Long enough that they’ll basically collapse from exhaustion (exhaustion level 5, speed reduced to 0). I assume I will need to roll Performance at least once per exhaustion level, and I would very much like to do that without disadvantage.

My bard will be level 10, so using level 10 Magical Secrets is an option. This is a one-time event, but its duration will depend on the DM (a few days, maybe). My bard has the relevant instrument proficiency, if that helps. The solution can be, but does not have to be, somehow getting advantage on all these rolls. Getting a specific magic item might not be out of the question. Multiclassing is not an option. (Ask for more details and I’ll add them here.)

In this situation, what ways exist to negate the disadvantage that exhaustion level 1 imposes on Performance (Charisma) ability checks?

If I start a performance as part of the Countercharm feature, are my actions restricted on my next turn?

Bards gain the Countercharm feature at 6th level (emphasis added):

As an action, you can start a performance that lasts until the end of your next turn. During that time, you and any friendly creatures within 30 feet of you have advantage on saving throws against being frightened or charmed. A creature must be able to hear you to gain this benefit. The performance ends early if you are incapacitated or silenced or if you voluntarily end it (no action required).

If the performance has not ended (and I have not voluntarily ended it), am I able to take my movement, action, and bonus action on my next turn? Or do I lose some or all of them?

The performance lasts until the end of my next turn, but the feature doesn’t state whether or not I am able to take other actions while keeping the performance up.