Responsive Tables: Adjusting row count to fit the available real estate

I’m designing a screen with tabular data, and trying to explain to the developers what is required. First off, I want to make sure that my design is feasible.

My goal is to…

  1. Have tabular data on the screen, including filters and pagination.
  2. The filters should sit across the top, near the header row.
  3. The header row should always be visible.
  4. The pagination control should always be visible, near the bottommost displayed row.
  5. Row height is fixed at 36px.

So for responsiveness, this leaves only the number of displayed rows as a variable: show however many 36px rows fit in the height left over after the filters, header row, and pagination control (for example, 700px of remaining height allows floor(700 / 36) = 19 rows). If there is room for only three rows while keeping all of the mandatory controls visible, then display only three rows. If the same screen is rendered on a larger display and there is room for 25 rows, then display 25. The user should always be able to expect the mandatory controls to be in roughly the same spot, regardless of the available real estate.

So do you folks have some examples of this I can send to the dev team?

Thanks

Using Temp Tables in Azure Data Studio Notebooks

tl;dr I want to use temp tables across multiple cells in a Jupyter Notebook to save CPU time on our SQL Server instances.

I’m trying to modernize a bunch of the monitoring queries that I run daily as a DBA. We use a real monitoring tool for almost all of our server-level stuff, but we’re a small shop, so monitoring the actual application logs falls on the DBA team as well (we’re trying to fix that). Currently we just have a pile of mostly undocumented stored procedures we run every morning, but I want something a little less arcane, so I am looking into Jupyter Notebooks in Azure Data Studio.

One of our standard practices is to take all of the logs from the past day and drop them into a temp table, filtering out all of the noise. After that we run a dozen or so aggregate queries on the filtered temp table to produce meaningful results. I want to do something like this (a rough sketch of the SQL follows the cell outline below):

Cell 1: Markdown description of the loading process, with details on available variables
Cell 2: T-SQL statements to populate the temp table(s)
Cell 3: Markdown description of the next aggregate
Cell 4: T-SQL to produce the aggregate
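For illustration, here is roughly what those SQL cells would contain (table and column names are placeholders, not our real schema):

-- Cell 2: load the last day of logs into a temp table, filtering out the noise
-- (dbo.AppLogs and its columns are made-up names for illustration)
SELECT LogTime, Severity, Source, Message
INTO #filtered_logs
FROM dbo.AppLogs
WHERE LogTime >= DATEADD(DAY, -1, SYSUTCDATETIME())
  AND Severity >= 3;

-- Cell 4: one of the dozen or so aggregates run against the temp table
SELECT Source, Severity, COUNT(*) AS error_count
FROM #filtered_logs
GROUP BY Source, Severity
ORDER BY error_count DESC;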

The problem is that each cell seems to run in an independent session, so the temp tables from cell 2 are all gone by the time I run any later cells (even if I use the “Run cells” button to run everything in order).

I could simply create staging tables in the user database and write my filtered logs there, but eventually I’d like to be able to pass off the notebooks to the dev teams and have them run the monitoring queries themselves. We don’t give write access on any prod reporting replicas, and it would not be feasible to create a separate schema which devs can write to (for several reasons, not the least of which being that I am nowhere near qualified to recreate tempdb in a user database).

Best way to handle Table headings and tables

I am trying to work out a way for a user to have multiple subheadings when building a table. For example, a user could work through any of the following workflows (a rough data sketch follows the examples):

Section header
  --Table Heading
    ---Table

OR

Section header
  --Table Heading
    --Table Subheading (optional)
      ---Table

OR

Section header
  --Table Heading
    ---Table Subheading (*optional*; if the user has more sub-sub headings, they might need one more subheading)
      ----Table Sub-subheading
        -----Table
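To put the hierarchy in data terms, here is a rough, purely illustrative sketch (table and column names are invented):

-- a heading can optionally nest under another heading, to any of the depths above
CREATE TABLE heading (
    heading_id        INT PRIMARY KEY,
    section_id        INT NOT NULL,            -- the section this heading belongs to
    parent_heading_id INT NULL,                -- NULL for a top-level table heading,
                                               -- set for an optional sub/sub-sub heading
    title             VARCHAR(200) NOT NULL
);

CREATE TABLE data_table (
    table_id   INT PRIMARY KEY,
    heading_id INT NOT NULL                    -- the deepest heading the table sits under
);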

What is the best way to determine if the user will require that subheading and then allow them to enter the information in a table?

Export SCHEMAS and TABLES at the same time

My task is to export schemas sch1 and sch2.

I also need to export tables ts88 and ts89 from schema sch3.

How can I do it as one operation? If I run two separate operations:

DIRECTORY=dir ESTIMATE_ONLY=Y LOGFILE=file.log SCHEMAS=sch1, sch2 PARALLEL=4 

and

DIRECTORY=dir ESTIMATE_ONLY=Y LOGFILE=file.log TABLES=sch3.ts88, sch3.ts89 PARALLEL=4 

they work fine.

But when I try something like this:

DIRECTORY=dir ESTIMATE_ONLY=Y LOGFILE=file.log SCHEMAS=sch1, sch2 TABLES=sch3.ts88, sch3.ts89 PARALLEL=4 

I got UDE-00010: multiple job modes requested, schema and tables.

As I understand it, it is not possible to run an export this way with both schemas and tables.

Can I run it with INCLUDE or some other way, or must it be two different operations?

Is right-click on tables bad UX?

We have a grid where users can select multiple rows and perform actions on them. When rows are selected, we show action buttons at the top of the grid. We’ve been thinking about also making these actions available in a context menu, so users can right-click on the grid and see the same actions. We think this is useful in cases where the selected rows are near the bottom, so the user won’t have to scroll all the way back up to click those actions.

Any thoughts on right-clicking on the grid from a UX perspective?

MariaDB inserts are getting slower and slower (7 tables, ~2.8M rows and ~200MB)

Each table has an auto-increment column and one unique ID consisting of 10 digits. Each table also has ~6 BIGINT columns (values are small, from 1 to 60k) and from 0 to 4 VARCHAR columns (up to ~500 characters, on average 5 to 50 characters).
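Roughly, each table looks something like this sketch (all names are invented for illustration; the real tables differ mainly in the number of VARCHAR columns):

-- hypothetical illustration of one of the seven tables
CREATE TABLE events_1 (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- auto-increment key
    unique_id CHAR(10)        NOT NULL,                 -- the 10-digit unique ID
    val_1     BIGINT UNSIGNED NOT NULL,                 -- values roughly 1-60k
    val_2     BIGINT UNSIGNED NOT NULL,
    val_3     BIGINT UNSIGNED NOT NULL,
    val_4     BIGINT UNSIGNED NOT NULL,
    val_5     BIGINT UNSIGNED NOT NULL,
    val_6     BIGINT UNSIGNED NOT NULL,
    note_1    VARCHAR(500)    NULL,                     -- 0 to 4 of these per table
    note_2    VARCHAR(500)    NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_unique_id (unique_id)
) ENGINE=InnoDB;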

I have been fighting with this for months and can’t get it to production stage :( Basically it drops from ~170 inserts (from the app’s perspective) to ~40 after just ~200-500k inserts.

This amount of data is nothing; I’ve worked with a DB that held trillions of rows, with auto-increment and huge VARCHARs (a paid solution, however :().

I have already tweaked the config so many times, but I still get to the point where the DB server is using ~950% CPU and the .NET Core app ~25% (of all cores).

The machine has an i9-9900K (8c/16t), 64GB RAM, and 2x 2TB NVMe drives.

I can’t even run a 5-minute API test, as it won’t be able to process all the data from the queue 🙁 (the API can accept ~20k/s).

Buffer, read IO, and other InnoDB tweaks for commits etc. were applied; nothing seems to be working.

It looks like it cannot, for some reason, handle even this little data, and I cannot figure out why (I’ve never had any real experience with free databases, so I only assume that it should be able to insert 300k records within 60 seconds and sustain this up to ~10TB).

Is it faster to split a large table into 12 rolling monthly tables & UNION them for reports, or keep the large table & delete rows older than 1 year?

My co-worker wants to split a large 158M-row stats table into stats_jan, stats_feb, … and use UNION to select from them for reports. Is that standard practice, and is it faster than just using the large table in place and deleting rows older than one year? The table consists of many small rows.

mysql> describe stats;
+----------------+---------------------+------+-----+---------+----------------+
| Field          | Type                | Null | Key | Default | Extra          |
+----------------+---------------------+------+-----+---------+----------------+
| id             | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| badge_id       | bigint(20) unsigned | NO   | MUL | NULL    |                |
| hit_date       | datetime            | YES  | MUL | NULL    |                |
| hit_type       | tinyint(4)          | YES  |     | NULL    |                |
| source_id      | bigint(20) unsigned | YES  | MUL | NULL    |                |
| fingerprint_id | bigint(20) unsigned | YES  |     | NULL    |                |
+----------------+---------------------+------+-----+---------+----------------+

I did manually split the table up, copied the rows into the appropriate month tables, and created a giant UNION query. The UNION query took 14s versus 4.5 minutes for the single-table query. Why would many smaller tables take significantly less time than one large table, when the total number of rows is the same?

create table stats_jan (...);
create table stats_feb (...);
...
create index stats_jan_hit_date_idx on stats_jan (hit_date);
...
insert into stats_jan select * from stats where hit_date >= '2019-01-01' and hit_date < '2019-02-01';
...
delete from stats where hit_date < '2018-09-01';
...

The monthly tables have from 1.7M rows to 35M rows.

select host as `key`, count(*) as value
from stats
join sources on source_id = sources.id
where hit_date >= '2019-08-21 19:43:19' and sources.host != 'NONE'
group by source_id
order by value desc
limit 10;
-- 4 min 30.39 sec

flush tables;
reset query cache;

select host as `key`, count(*) as value
from stats_jan
join sources on source_id = sources.id
where hit_date >= '2019-08-21 19:43:19' and sources.host != 'NONE'
group by source_id
UNION ...
order by value desc
limit 10;
-- 14.16 sec

Designing a database with filter tables

I am tasked with the redesign of an old database that is not performing very efficiently. I am nowhere near an experienced database designer, so I am hoping you guys can help me figure out some things.

First of all, the application has the user answer a few questions and performs some calculations based on the answers. This is the core of the application, and the database should handle it with the best possible performance. Products are shown based on the answers to the questions, but the questions themselves can also be “filtered” based on the answer(s) to previous question(s), as well as on the logged-in user and on whether the user is using the application from within the platform instead of a web module. So in the current structure, each entity that can be “filtered” has its own filter table: products have a product_filter table and questions have a question_filter table.

To explain the current structure further, here is a diagram:

product
-------------
product_id

product_filter
-------------------
product_filter_id
product_id // FK
filter_label
operator // i.e. equals, not equals, greater than, etc.
filter_value

question
-------------
question_id

question_product
----------------------
question_product_id
question_id // FK
product_id // FK

question_product_filter
-------------------
question_product_filter_id
question_product_id // FK
depenend_on_question_product_id
filter_label
operator // i.e. equals, not equals, greater than, etc.
filter_value
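To make the filtering mechanism concrete, a filter row might look something like this (IDs and values invented purely for illustration):

-- hypothetical example: product 42 is only shown when the answer labelled
-- 'age' is greater than 18 and the answer labelled 'region' equals 'EU'
INSERT INTO product_filter (product_id, filter_label, operator, filter_value)
VALUES (42, 'age',    'greater than', '18'),
       (42, 'region', 'equals',       'EU');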

For context, the application currently runs on Classic ASP, and the database is being redesigned because the system will be migrated to an ASP.NET application using Entity Framework. So the senior dev has asked me to take a code-first approach to redesigning the database.

Group results from two different tables on the same hours

I have two simple tables:

indoor

id | timestamp | temp | humi 

outdoor

id | timestamp | temp 

and two SELECTs which give me the time and the average temperature, grouped by the same hour, for the last 24 hours:

SELECT DATE_FORMAT(timestamp, '%H:00') AS time, round(avg(temp), 1) as avg_out_temp
FROM outdoor
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H')
ORDER BY timestamp ASC;

SELECT DATE_FORMAT(timestamp, '%H:00') AS time, round(avg(temp), 1) as avg_in_temp
FROM indoor
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H')
ORDER BY timestamp ASC;

and now what I need to do is combine those two results grouped by the same hour, allowing for the possibility that there are no records in the indoor or outdoor table for a whole hour, so I need to get:

time  | avg_out_temp | avg_in_temp
11:00 | 12.5         | 21.4
12:00 | 13.9         | null
13:00 | null         | 22.4
14:00 | 14.0         | 22.5
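Something along these lines is the shape of query I’m imagining: an untested sketch that emulates a full outer join on the hour using UNION ALL (MySQL/MariaDB have no FULL OUTER JOIN), so an hour missing from either table still shows up with a null:

-- untested sketch: combine both tables per hour, keeping hours that exist in only one of them
SELECT DATE_FORMAT(hr, '%H:00') AS time,
       MAX(avg_out_temp)        AS avg_out_temp,
       MAX(avg_in_temp)         AS avg_in_temp
FROM (
    SELECT DATE_FORMAT(timestamp, '%Y-%m-%d %H:00:00') AS hr,
           ROUND(AVG(temp), 1) AS avg_out_temp,
           NULL                AS avg_in_temp
    FROM outdoor
    WHERE timestamp >= NOW() - INTERVAL 1 DAY
    GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H:00:00')
    UNION ALL
    SELECT DATE_FORMAT(timestamp, '%Y-%m-%d %H:00:00'),
           NULL,
           ROUND(AVG(temp), 1)
    FROM indoor
    WHERE timestamp >= NOW() - INTERVAL 1 DAY
    GROUP BY DATE_FORMAT(timestamp, '%Y-%m-%d %H:00:00')
) AS combined
GROUP BY hr
ORDER BY hr;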