Unity 2D Game: does transparency count as overdraw?

I’m working on a 16:9, 1920×1080, 2.5D point-and-click adventure game and I just today learned the word overdraw. I have a few questions, but first some details about the game:

Backgrounds

Backgrounds are hand drawn and broken up into layers. For instance, if there’s a room with a table, some chairs, and a lamp on the table, then I’ll have a sprite for each of those objects whenever I want the player to be able to walk in front of or behind that object.

To make the positioning of these objects consistent with what’s in Photoshop, I’ve exported each layer as a full HD image with huge swaths of transparency. These images have a very small memory footprint because only a small portion of each image is populated with non-transparent pixels.

Animation

Animation is hand drawn. To make character positions consistent across all characters, each animation is exported as full HD images with huge swaths of transparency. The characters reside at the bottom center of the frame. I call it the anchor point. In point of fact, it’s where I put the pivot.

I’ve got all animations and backgrounds in sprite atlases to save on video memory. So far, it’s low.

MY QUESTION:

Am I a huge idiot? Are these full-screen images with lots of transparency going to be a big problem for me? The game is for mobile, PC, and Switch.

Unexpected very slow user count in WordPress

I have a WordPress page that returns a timeout error every so often. When reviewing the MySQL slow query log, I noticed an extremely slow query coming from WordPress, but I don’t know exactly which module it comes from:

SELECT SQL_CALC_FOUND_ROWS wpe_users.*
FROM wpe_users
WHERE 1=1
  AND wpe_users.ID IN (
        SELECT DISTINCT wpe_posts.post_author
        FROM wpe_posts
        WHERE wpe_posts.post_status = 'publish'
          AND wpe_posts.post_type IN ('post', 'page', 'attachment', 'wp_block', 'products')
      )
ORDER BY display_name ASC
LIMIT 0, 10;

The query appears two or more times, with State "Copying to tmp table" and a Time of up to 53815.44 (seconds? that’s almost 15 hours, it’s crazy!). It also makes the MySQL process consume 557% CPU and 8.7% of RAM.

(Screenshots: server error and MySQL process list.)

The wpe_users table contains almost 3 million entries and the wpe_posts table contains 68,000 entries. The wpe_comments table is also starting to cause problems as it contains 361,000 entries.

I use pagination for comments and posts but not for users. The question is, how can I fix the problem without disabling paging? How can I identify where the user query is coming from if I don’t show users anywhere?

What have I tried?

  • I updated WordPress to the latest version available.
  • I tried to identify where these queries come from using the "Query Monitor" plugin but they don’t appear.
  • I checked the only 3 plugins that I have active, and none of them generates such queries.
  • I closed the administration panel and killed said processes to see if the queries were generated by the admin panel, but they start again…

I don’t have much experience with MySQL configuration. I am using MySQL 5.5.62 with the Plesk panel on a VPS with 5 GB RAM and 6 cores.
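
For reference, this is how I was planning to inspect the execution plan, plus an index idea I have not applied yet (the index name is my own):

    EXPLAIN SELECT SQL_CALC_FOUND_ROWS wpe_users.*
    FROM wpe_users
    WHERE 1=1
      AND wpe_users.ID IN (
            SELECT DISTINCT wpe_posts.post_author
            FROM wpe_posts
            WHERE wpe_posts.post_status = 'publish'
              AND wpe_posts.post_type IN ('post', 'page', 'attachment', 'wp_block', 'products')
          )
    ORDER BY display_name ASC
    LIMIT 0, 10;

    -- idea only, not applied yet: a covering index so the subquery
    -- does not have to scan all of wpe_posts
    ALTER TABLE wpe_posts
      ADD INDEX idx_status_type_author (post_status, post_type, post_author);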

Count number of times in SQL

I have two tables, ‘Checkpoint Movement’ and ‘Station’. I want to count the number of times a PNR (this is a column) has passed between 06:00 and 20:00 today (the DateTime Passed column), per Station Description. How can I do this using an inner join with COUNT?

-- Two tables
SELECT TOP 1000 [Plant Code]
      ,[Production Year]
      ,[PNR]
      ,[DateTime Passed]
  FROM [Tracking_Server_DB].[dbo].[Checkpoint Movement]

SELECT TOP 1000 [Station Code]
      ,[Station Description]
      ,[Tracking Client Name]
      ,[Previous Station Code]
  FROM [Tracking_Server_DB].[dbo].[TS_Station]
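
This is roughly the shape of query I have in mind, but I’m not sure it’s right. It assumes [Checkpoint Movement] also carries a [Station Code] column to join on (my sketch, not tested):

    SELECT cm.[PNR]
         , s.[Station Description]
         , COUNT(*) AS TimesPassed
      FROM [Tracking_Server_DB].[dbo].[Checkpoint Movement] AS cm
     INNER JOIN [Tracking_Server_DB].[dbo].[TS_Station] AS s
        ON s.[Station Code] = cm.[Station Code]   -- assumed join column
     WHERE cm.[DateTime Passed] >= DATEADD(HOUR, 6,  CAST(CAST(GETDATE() AS date) AS datetime))  -- 06:00 today
       AND cm.[DateTime Passed] <  DATEADD(HOUR, 20, CAST(CAST(GETDATE() AS date) AS datetime))  -- 20:00 today
     GROUP BY cm.[PNR]
            , s.[Station Description];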

How to get a count of an object through 3 different tables in Postgres, with IDs stored in each table

I’m currently using Postgres 9.6.16.

I am currently using 3 different tables to store a hypothetical user’s details.

The first table, called contact, contains:

ID, Preferred_Contact_Method 

The second table, called orders, contains:

ID, UserID, Contact_ID (the id of a row, in the contact table that relates to this order) 

The third table, called order_details, contains:

ID, Orders_ID (the id in the orders table that relates to this order details) 

The tables contain other data as well, but for minimal reproduction, these are the columns that are relevant to this question.

I am trying to return some data so that I can generate a graph. In this hypothetical store, there are only three ways we can contact a user: email, SMS, or physical mail.

The graph is supposed to show three numbers: how many mails, emails, and SMS messages we’ve sent to the user. In this hypothetical store, whenever you purchase something you get notified of the successful shipment, so these notifications are 1:1 with order_details rows: if there are 10 order_details rows for the same user, then we sent 10 tracking numbers. Since an order can contain multiple order_details (each item gets its own row), we can get the count by counting the order_details rows belonging to a single user/contact and attributing each one to the contact method that user preferred at the time of making that order.

To represent this better: a new user makes an order for 1 apple, 1 banana, and 1 orange. For the apple, the user set the preferred tracking-number delivery to SMS; for the banana, they set it to EMAIL; for the orange, they thought it would be funny to have the tracking number delivered via MAIL. Now I want to generate a graph of this user’s preferred delivery methods, so I’d like to query all those rows and obtain:

SMS,   1
EMAIL, 1
MAIL,  1

Here’s a SQL Fiddle link with the schema and test data: http://sqlfiddle.com/#!17/eb8c0

The response with the above dataset should look like this:

method | count
SMS,     4
EMAIL,   4
MAIL,    4
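
For what it’s worth, this is the shape of query I’ve been trying to write (column names here are taken from my description above; the naming in the fiddle may differ slightly):

    SELECT c.preferred_contact_method AS method
         , COUNT(od.id)               AS count
      FROM contact c
      JOIN orders o
        ON o.contact_id = c.id
      JOIN order_details od
        ON od.orders_id = o.id
     GROUP BY c.preferred_contact_method;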

Select maximum of a count in a grouped clause

I have the following tables:

Vehicles(vin, model, category)
Sales(saleID, staffID, customerID, date)
vehicleSold(saleID, vin, salePrice)

(The underlined attributes in the original are the primary keys: vin for Vehicles, saleID for Sales, and vin for vehicleSold.)

When I join these tables using:

select YEAR(Sales.saleDate)
     , Vehicles.model
     , count(Vehicles.model) 'Sold'
     , Vehicles.category
  from Vehicles
  JOIN vehicleSold
    on Vehicles.vin = vehicleSold.vin
  JOIN Sales
    on Sales.saleID = vehicleSold.saleID
 group
    by YEAR(Sales.saleDate)
     , Vehicles.model
     , Vehicles.category;

Result is:

+----------------------+-------------+------+----------------+
| YEAR(Sales.saleDate) | model       | Sold | category       |
+----------------------+-------------+------+----------------+
|                 2020 | Altima      |    1 | car            |
|                 2020 | Flying Spur |    2 | car            |
|                 2020 | Lifan E3    |    3 | Electric Moped |
|                 2020 | Ridgeline   |    2 | truck          |
|                 2020 | Shiver      |    4 | motorbike      |
+----------------------+-------------+------+----------------+

Out of this table I want to get the model that was most sold in each category. So, in this case I only want to return 2020, Flying Spur, car as the only row in the car category, because it was the most sold in 2020 in its category. I tried using a subquery with MAX(COUNT(*)), but I guess that is not supported in MySQL. If anyone could point out my mistake and has any idea how to do this, that would be a big help!
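
For reference, this is the direction I was thinking of going next (a sketch; it assumes MySQL 8+ because of the window function):

    SELECT saleYear, model, Sold, category
    FROM (
        SELECT YEAR(Sales.saleDate)  AS saleYear
             , Vehicles.model
             , COUNT(Vehicles.model) AS Sold
             , Vehicles.category
             , RANK() OVER (PARTITION BY Vehicles.category, YEAR(Sales.saleDate)
                            ORDER BY COUNT(Vehicles.model) DESC) AS rnk
          FROM Vehicles
          JOIN vehicleSold ON Vehicles.vin = vehicleSold.vin
          JOIN Sales       ON Sales.saleID = vehicleSold.saleID
         GROUP BY YEAR(Sales.saleDate), Vehicles.model, Vehicles.category
    ) AS ranked
    WHERE rnk = 1;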

Count deep nested tables

Given this SQL schema, I want to count the number of times a user is in a contest.

(Schema diagram: users, trials_has_users, trials.)

SELECT *
FROM users u
LEFT JOIN trials_has_users tu ON (tu.users_id = '1')
LEFT JOIN trials AS t ON (t.id = tu.trials_id)
WHERE u.id = '1';

With the query above, I get the expected number of rows, but now I want to turn it into a count:

SELECT contest_total
FROM users u
LEFT JOIN trials_has_users tu ON (tu.users_id = '1')
LEFT JOIN (
    SELECT
        id,
        COUNT(*) AS contest_total
    FROM
        trials
    WHERE deleted_at IS NULL
    GROUP BY
        id
) AS t ON (t.id = tu.trials_id)
WHERE u.id = '1';

The previous query gives me 6 rows, but I only want 1 (for the current user id). Do I need LIMIT 1?

SELECT contest_total
FROM users u
LEFT JOIN trials_has_users tu ON (tu.users_id = '1')
LEFT JOIN (
    SELECT
        id,
        COUNT(*) AS contest_total
    FROM
        trials
    WHERE deleted_at IS NULL
    GROUP BY
        id
) AS t ON (t.id = tu.trials_id)
WHERE u.id = '1'
LIMIT 1;

I would like to receive the number of contests in which the user has participated. Is my query right?
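
To check my understanding, this is the simpler form I think the count boils down to (my sketch, not tested):

    SELECT COUNT(*) AS contest_total
      FROM trials_has_users tu
      JOIN trials t
        ON t.id = tu.trials_id
     WHERE tu.users_id = 1
       AND t.deleted_at IS NULL;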

5e: Artificer: Does infused magic item count against maximum number of infusions?

I would assume the answer is ‘no’ but I wanted to check.

It feels very odd to be able to replicate (up to) 4 magic items at level 2 but only ever be able to use two of them at a time at that level, because it then just saves you a bit of money early on. For example, if you replicate a bag of holding, you can only ever have one other infusion active.

I have seen a few people claim that the Replicate Magic Item infusion does not count against the standard maximum of 2 infusions (at level 2); rather, you are just limited to having only 1 replica of your chosen magic item at any one time. The claim is that the reference (TCOE p12) to an ‘infusion ending on a bag of holding’ only applies if you attempt to make a new bag of holding, but I wanted to ask what others thought.

I like this second option better, as it means you can play more with your enhancement infusions, and if you ever want to change your other infusions, you don’t have to constantly pick up every item that pops out of your bag of holding (since the bag would be the oldest infusion every other time, you would have to re-create the infusion and put everything back into the bag).

Response time optimization – Getting record count based on Input Parameters

I’m trying to optimize the process of calculating the count of records based on variable input parameters. The whole process spans several queries, functions, and stored procedures.

1/ Basically, the front-end sends a request to the DB (it calls a stored procedure) with an input parameter (a DataTable). This DataTable (the input parameter collection) contains 1 to X records. Each record corresponds to one specific rule.

2/ The SP receives the collection of rules (as a custom table type) and iterates through them one by one. Apart from other metadata, each rule contains the name of a specific function that should be used to evaluate that rule.

For every rule, the SP prepares a dynamic query wherein it calls the mentioned function with 3 input parameters.

a/ a custom memory-optimized table type (hash index)
b/ a collection of lookup values (usually INTs) that the SELECT query uses to filter data, i.e. "get me all records that have fkKey in (x1, x2, x3)"
c/ a BIT determining whether this is the first rule in the whole process

Each function has an IF statement that determines, based on the c/ parameter, whether it should return "all" records that fulfill the input criteria (b/), or whether it should apply the criteria on top of joining against the result of the previous rule contained in the custom table (a/).
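
To make the structure concrete, here is a rough sketch of what one such rule function looks like; every object, type, and column name below is a placeholder rather than the real one, and I’ve collapsed the IF into an OR for brevity:

    CREATE FUNCTION dbo.fn_Rule_Color
    (
        @previousResult dbo.tvp_Result  READONLY,  -- a/ memory-optimized table type holding the previous rule's keys
        @lookupValues   dbo.tvp_IntList READONLY,  -- b/ collection of lookup values to filter on
        @isFirstRule    BIT                        -- c/ is this the first rule in the whole process?
    )
    RETURNS TABLE
    AS
    RETURN
        SELECT s.fkKey
          FROM dbo.SourceTable AS s
          JOIN @lookupValues   AS l
            ON l.Value = s.ColorId                 -- the rule's filter criteria (b/)
         WHERE @isFirstRule = 1                    -- first rule: keep everything matching the criteria
            OR EXISTS (SELECT 1                    -- later rules: also intersect with the previous result (a/)
                         FROM @previousResult AS p
                        WHERE p.fkKey = s.fkKey);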

3/ Once the function has run, its result is INSERTed into a table variable called @tmpResult. @result is then compared to @tmpResult, and records that are not in @tmpResult are DELETEd from @result.

  • @result is a table variable (a custom memory-optimized table type) that holds the intermediate result during the whole SP execution. It is fully filled on the first rule; every subsequent rule only removes records from it.

4/ The cycle repeats for every rule until all of the rules are done. At the end, COUNT is called on the records in @result and returned as the result of the SP.
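
Sketched out, steps 3 and 4 look roughly like this (variable and column names are mine, not the real ones):

    -- step 3: keep in @result only the keys the current rule also returned
    DELETE r
      FROM @result AS r
     WHERE NOT EXISTS (SELECT 1
                         FROM @tmpResult AS t
                        WHERE t.fkKey = r.fkKey);

    -- step 4: after the last rule, return the remaining count
    SELECT COUNT(*) AS MatchingRecords
      FROM @result;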

Few things to take into account:

  • There are dozens of different types of rules, and the list of rules only grows bigger over time. That’s why a dynamic query is used.
  • The most effective way to temporarily store records between individual rule executions has so far proved to be a custom memory-optimized table type. We tried a lot of things, but this one seems to be the fastest.
  • The number of records that are usually returned for one single rule is roughly somewhere between 100,000 and 3,000,000. That’s why a BUCKET_COUNT of 5,000,000 is used for the hash-indexed temporary tables. And even though we tried a nonclustered index, it was slower than the hash.
  • The input collection of rules can vary strongly. There can be anything from 1 rule up to dozens of rules used at once.
  • Almost every rule can be defined with a minimum of 2 lookup values and at most dozens, or in a few cases even hundreds, of values. For a better understanding of the rules, here are some examples:

Rule1Color, {1, 5, 7, 12}
Rule2Size, {100, 200, 300}
Rule3Material, {22, 23, 24}

Basically, every rule is specified by its Designation, which corresponds to a specific Function, and by its collection of Lookup values. The possible lookup values differ based on the designation.

What we have done to optimize the process so far:

  • Where a big number of records needs to be temporarily stored, we use memory-optimized table variables (we also tried temp tables, but performance was basically the same as with the memory-optimized variants); the table type itself is sketched after this list.
  • We strongly reduced and optimized the source tables the SELECT statements are run against.
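
The memory-optimized table type we pass between rules looks roughly like this (type and column names are placeholders; only the bucket count reflects our actual setup):

    CREATE TYPE dbo.tvp_Result AS TABLE
    (
        fkKey INT NOT NULL,
        INDEX ix_fkKey HASH (fkKey) WITH (BUCKET_COUNT = 5000000)
    )
    WITH (MEMORY_OPTIMIZED = ON);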

Currently, the overall load is balanced roughly 50/50 between the I/O costs of the SELECT statements and the shuffling of records between temporary tables. Which is frankly not so good: ideally the only bottleneck should be the I/O operations, but so far we have not been able to come up with a better solution, since the whole process has a lot of variability.

I will be happy for any idea you can throw my way. Of course feel free to ask questions if I failed to explain some part of the process adequately.

Thank you

Does the mandatory piloting check at the beginning of a helm phase count as the pilot “acting” for the purposes of a Taunt action?

In starship combat, a successful Taunt captain action imposes penalties on opposing crew members for a period of 1d4 rounds if they "act" during the phase in which the Taunt occurs. To quote the rules (emphasis added):

If you are successful, each enemy character acting during the selected phase takes a –2 penalty to all checks for 1d4 rounds

So if the captain of Ship A Taunts Ship B at the beginning of the helm phase, before Ship B’s pilot has taken their pilot action, then the pilot of Ship B has the option to decline to take an action for that round in an effort to avoid taking the penalty for 1d4 rounds. However, the pilot must roll a piloting check that round to determine which ship moves first. This mandatory piloting check is not a starship combat action, but does it count as "acting" for the purposes of the Taunt?