Should I use a temp table or a join?

I am creating a stored procedure in SQL Server 2019 that needs to use multiple SELECT statements to get a parent row and then its related data. I have the clustered primary key value for the parent, so the first query will return at most one row.

Should I select everything into a temp table with the first query and then join my subsequent queries to the temp table, or should I just keep joining to the original table?

I am not sure whether the overhead of creating the temp table will outweigh the overhead of joining to the actual table repeatedly.

I have looked at the execution plans and they come out the same, and the statistics for reads/scans and time are about the same as well.

I think what I am trying to figure out is whether using the temp table will relieve pressure on the original table, which is heavily read and written to.

I should note that these statements will be inside a stored procedure, so I may potentially get a boost from Temporary Object Caching (see the procedure sketch after the queries below).

Assume TableA has more than 1 million rows, and each TableA row has 0-10 related rows in TableB and 0-10 in TableC.

Simplistic Table Diagram

Queries without temp table

declare @taID bigint=123

select
    ta.*
from
    TableA ta
where
    ta.ID=@taID

select
    tb.*
from
    TableA ta
    inner join TableB tb on ta.ID=tb.TableAID
where
    ta.ID=@taID

select
    tc.*
from
    TableA ta
    inner join TableC tc on ta.ID=tc.TableAID
where
    ta.ID=@taID

Queries with temp table

declare @taID bigint=123

select *
into #tmpA
from
    TableA ta
where
    ta.ID=@taID

select * from #tmpA

select
    tb.*
from
    #tmpA ta
    inner join TableB tb on ta.ID=tb.TableAID

select
    tc.*
from
    #tmpA ta
    inner join TableC tc on ta.ID=tc.TableAID
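For context, here is a minimal sketch of how I picture the temp-table variant sitting inside the procedure (the procedure name is made up), since the procedure is where Temporary Object Caching would or would not kick in:

-- Sketch only: the temp-table statements wrapped in a procedure.
-- dbo.GetTableAData is an illustrative name; whether #tmpA actually gets
-- cached between executions is exactly what I am unsure about.
create procedure dbo.GetTableAData
    @taID bigint
as
begin
    set nocount on;

    select *
    into #tmpA
    from TableA ta
    where ta.ID = @taID;

    select * from #tmpA;

    select tb.*
    from #tmpA ta
    inner join TableB tb on ta.ID = tb.TableAID;

    select tc.*
    from #tmpA ta
    inner join TableC tc on ta.ID = tc.TableAID;
end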

Queueing MySQL record inserts to avoid over-subscription of a related resource … table locking?

Given a simplified hypothetical of seats in a lifeboat, if I have the following setup with a lifeboats table and a seats table where each record is one occupied seat in the given lifeboat:

CREATE TABLE lifeboats (
  id INT UNSIGNED NOT NULL,
  total_seats TINYINT UNSIGNED NOT NULL,
  PRIMARY KEY (id));

INSERT INTO lifeboats (id, total_seats) VALUES (1, 3);
INSERT INTO lifeboats (id, total_seats) VALUES (2, 5);

CREATE TABLE seats (
  lifeboat_id INT UNSIGNED NOT NULL);

INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (1);
INSERT INTO seats (lifeboat_id) VALUES (2);

I can find lifeboats with available seats by querying:

SELECT
    l.id, l.total_seats, COUNT(s.lifeboat_id) AS seats_taken
FROM
    lifeboats AS l
    LEFT JOIN seats AS s ON s.lifeboat_id = l.id
GROUP BY l.id
HAVING COUNT(s.lifeboat_id) < l.total_seats

What is the best way to ensure that two clients do not both grab the last seat in a lifeboat, without implementing some coordinating queue process?

My only idea (assuming I’m trying to grab a seat in lifeboat 2) is going LOCK TABLE rambo, like:

LOCK TABLE seats WRITE, lifeboats AS l READ, seats AS s READ;

INSERT INTO seats (lifeboat_id)
SELECT
    id
FROM
    (SELECT
         l.id, l.total_seats, COUNT(s.lifeboat_id) AS seats_taken
     FROM
         lifeboats AS l
     LEFT JOIN seats AS s ON s.lifeboat_id = l.id
     WHERE l.id = 2
     GROUP BY l.id
     HAVING COUNT(s.lifeboat_id) < l.total_seats) AS still_available;

UNLOCK TABLES;

but this is not very elegant, needless to say.

(My environment is MySQL8/InnoDB)
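One alternative I have been weighing is to rely on InnoDB row locks instead of LOCK TABLES: lock the lifeboat row with SELECT ... FOR UPDATE inside a transaction so that concurrent attempts on the same boat serialize. This is a sketch only, and it assumes every client grabbing a seat goes through the same locking read first:

START TRANSACTION;

-- Take an exclusive lock on lifeboat 2; another client attempting the same
-- boat blocks here until this transaction commits or rolls back.
SELECT total_seats FROM lifeboats WHERE id = 2 FOR UPDATE;

-- Re-check the seat count while holding the lock and insert only if a seat
-- is still free.
INSERT INTO seats (lifeboat_id)
SELECT l.id
FROM lifeboats AS l
LEFT JOIN seats AS s ON s.lifeboat_id = l.id
WHERE l.id = 2
GROUP BY l.id
HAVING COUNT(s.lifeboat_id) < l.total_seats;

COMMIT;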

Problem with the Table function when adding results together

I am a newbie and I have a problem with Table. Can anyone explain this to me? For example, I build the expressions Qk1 and Qk2 with the Table function, such as Qk1 = Table[A[[i+1,1]].B, {i,0,4}] and Qk2 = Table[C[[i+1,1]].D, {i,0,4}], and then make Qk = Qk1 + Qk2.

When I later use Qk to calculate another expression, Mk = Qk[[i+1,1]].X, it behaves as if Qk were only Qk1; if I change Qk[[i+1,1]] to Qk[[i+1,2]], it behaves as if Qk were Qk2.

To be clear: the results are Qk1 = {a,b,c,d} and Qk2 = {e,f,g,h}, so Qk = {a+e, b+f, c+g, d+h}. When calculating Mk = Qk[[i+1,1]].X the result is Mk = {ax, bx, cx, dx}, while with Mk = Qk[[i+1,2]].X the result is Mk = {ex, fx, gx, hx}. Logically, it should be Mk = {(a+e)x, (b+f)x, (c+g)x, (d+h)x}.

Referencing a Column in the same table

It keeps telling me that I’m referencing SubTotal, and I’m not sure how to fix that.

The error message I’m getting is "Column CHECK constraint for column ‘Total’ references another column, table ‘Job’."

Create Table Job(
    JobNumber int not null
        Identity (1,1)
        Constraint PK_Job primary key clustered,
    Date datetime not null,
    Address varchar(100) not null,
    City varchar(50) not null,
    Province char(2) not null
        Constraint CK_Province Check (Province like '[A-Z][A-Z]'),
    PostalCode char(7) not null
        Constraint CK_PostalCode Check (PostalCode like '[A-Z][0-9][A-Z][0-9][A-Z][0-9]'),
    SubTotal money not null,
    GST money not null,
    Total money not null
        Constraint CK_Total Check (Total>SubTotal),  /* COME BACK TO THIS */
    ClientID int not null
        Constraint FK_JobToClient references Client(ClientID),
    StaffID int not null
        Constraint FK_JobToStaff references Staff(StaffID),
)
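For reference, my current guess at a fix (untested): move the comparison to a table-level constraint, since a CHECK attached to a single column apparently may only reference that column. A minimal sketch with just the relevant columns and an illustrative table name:

-- Sketch: declaring the constraint after the column list makes it a
-- table-level CHECK, which is allowed to compare Total against SubTotal.
Create Table JobSketch(
    SubTotal money not null,
    GST money not null,
    Total money not null,
    Constraint CK_Total Check (Total > SubTotal)
)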

Multiple-choice questions – a TABLE for the category of a question?

I am working on a multiple choice question form. I am using Flask on the back-end and MySQL 5.7 as a database.

  • There will be more than one end-user. I already made a users table but haven’t worked on it yet.
  • There are multiple questions, but I only display one question at a time.
  • Every question has either 3 or 4 possible choices (A, B, C or A, B, C, D).
  • There is always exactly one correct answer.
  • The user can filter questions by category and get stats per category (e.g. % of questions answered for that category).

Should I create a new table called category which would look like category_id (int, primary_key), category_text (varchar(50))?

CREATE TABLE `questions` (
  `question_id` int NOT NULL AUTO_INCREMENT,
  `contributor_id` int NOT NULL,
  `question_text` varchar(1000) NOT NULL,
  `category` varchar(50) NOT NULL,
  `answer_a` varchar(200) NOT NULL,
  `answer_b` varchar(200) NOT NULL,
  `answer_c` varchar(200) NOT NULL,
  `answer_d` varchar(200) DEFAULT NULL,
  `correct_answer` varchar(20) NOT NULL,
  PRIMARY KEY (`question_id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
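For what it’s worth, here is a sketch of the normalized variant I have in mind, using only the column names proposed above (not tested): a separate category table, with questions.category replaced by a category_id foreign key.

-- Sketch only: category lookup table.
CREATE TABLE `category` (
  `category_id` int NOT NULL AUTO_INCREMENT,
  `category_text` varchar(50) NOT NULL,
  PRIMARY KEY (`category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- The `category` varchar(50) column in `questions` would then become:
--   `category_id` int NOT NULL,
--   FOREIGN KEY (`category_id`) REFERENCES `category` (`category_id`)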

MySQL: message “Incorrect key file for table” “try to repair it”

I’m using MySQL 5.7.10

I have a table like this, with 100M rows and a size of 16 GB.

CREATE TABLE `my_table` (
    `id` DOUBLE NOT NULL AUTO_INCREMENT,
    `entity_id` DOUBLE NOT NULL,
    `concept_id` VARCHAR(50) NOT NULL COLLATE 'utf8_spanish_ci',
    `value` DOUBLE(15,6) NOT NULL,
    `increment` DOUBLE(10,6) NULL DEFAULT NULL,
    PRIMARY KEY (`id`),
    INDEX `IDX_concept` (`concept_id`),
    INDEX `IDX_entity` (`entity_id`)
) COLLATE='utf8_general_ci' ENGINE=InnoDB ROW_FORMAT=DYNAMIC AUTO_INCREMENT=118166425

Once a month, I execute:

ALTER TABLE my_table ENGINE=InnoDB; 

My intention is to defragment the table, so the data are compacted and the size stays as low as possible.

This time, it failed and the failure message is: "Incorrect key file for table ‘my_table’; try to repair it".

I have made the following steps:

  1. Create a table with the same definition, named my_table2.
  2. Use mysqldump to dump the my_table data to a file.
  3. Edit the file so the CREATE and the INSERTs target "my_table2".
  4. Execute the file. my_table2 is created, and every row of my_table exists in my_table2.
  5. Execute ALTER TABLE my_table2 ENGINE=InnoDB;

And it failed too, with the same message "Incorrect key file for table ‘my_table2’; try to repair it".

How could I fix the error? Thank you.

EDIT 1: I have executed CHECK TABLE on both tables, and the result is status OK for both.
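For completeness, this is what I plan to check next. My working assumption (unconfirmed) is that the table rebuild writes temporary sort files under tmpdir, so a lack of free space there could produce this error even though both tables check out fine:

-- Where the server writes temporary files during the rebuild
SHOW VARIABLES LIKE 'tmpdir';

-- Integrity checks already run; both returned status OK
CHECK TABLE my_table;
CHECK TABLE my_table2;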

Build Table of Edge Weight Sums

Is there a convenient way to build the table generated by the code below with a 5th column (at the far right) added that holds the sum of the edge weights for the nodes in a given row, and then to sort that table from smallest edge-weight sum to largest?

Clear[edges, g]
edges = {N1 -> N2, N1 -> N3, N1 -> N4,
         N2 -> N5, N3 -> N5, N3 -> N6,
         N4 -> N6, N5 -> N7, N6 -> N7};

g = Graph[
      edges,
      VertexLabels -> "Name",
      EdgeWeight -> {1, 2, 3, 4, 5, 6, 7, 8, 9},
      EdgeLabels -> {"EdgeWeight"},
      EdgeLabelStyle -> Directive[Red, 20]
    ]

WeightedAdjacencyMatrix[g] // MatrixForm

TableForm[FindPath[g, N1, N7, Infinity, All]]

[Table of paths from N1 to N7 produced by the TableForm line above]

I’m looking for the sum of the edge weights of each row in the above table. For example, the last row would become N1 N2 N5 N7 13, where 13 is the sum of the edge weights: 1 (N1 -> N2) + 4 (N2 -> N5) + 8 (N5 -> N7) = 13. So 13 would be the 5th-column value computed for that row, and likewise for every other row of the table generated by the last line of code above.

Insert custom data into a custom table in the WordPress database

I need some help here.

I want to create a form for users to subscribe to our articles and store the data in the database. I have tried searching for this on Google, but nothing worked when I tried it. I know that some plugins can do this, but I don’t know how to create the validation so that once a user has already subscribed, the form never shows again.

How do I insert the data into my database? I hope you understand the problem; any help will be appreciated.

Thank you.

Here is my code.

The form:

<form action="<?php echo site_url() . '/insert-data.php'; ?>" method="POST" name="form-subscribe">
    <div class="row">
        <div class="col-12 col-lg-12 col-md-12">
            <p>Name*</p>
            <input type="text" name="subs_name" placeholder="Full Name*" required="">
        </div>
    </div>
    <div class="row mt-3">
        <div class="col-12 col-lg-12 col-md-12">
            <p>Email*</p>
            <input type="text" name="subs_email" placeholder="Email Address*" required="">
        </div>
    </div>
    <div class="row mt-3">
        <div class="col-12 col-lg-12 col-md-12">
            <p><button type="submit" class="btn btn-primary" name="submitForm">Submit</button></p>
        </div>
    </div>
</form>

The action file, aka insert-data.php:

<?php
    //setting up the form
    function insertuser() {
        $name  = $_POST['subs_name'];
        $email = $_POST['subs_email'];

        global $wpdb;

        $table_name = $wpdb->prefix . "subscriber";
        $wpdb->insert($table_name, array('subs_name' => $name, 'subs_email' => $email));
    }

    if (isset($_POST['submitForm'])) insertuser();
?>

Here is the wp_subscriber table: [screenshot]
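In case it helps, this is a guess at what the table definition might look like if it also had to reject duplicate sign-ups at the database level. The column names come from the $wpdb->insert call above; the id column, the types, and the UNIQUE key are my assumptions, not taken from the screenshot.

-- Hypothetical definition of the custom table: the UNIQUE key on subs_email
-- makes a second subscription with the same address fail, which could back
-- the "already subscribed" validation.
CREATE TABLE wp_subscriber (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  subs_name VARCHAR(100) NOT NULL,
  subs_email VARCHAR(200) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uq_subs_email (subs_email)
) ENGINE=InnoDB;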

Defining OLAP or OLTP at the table level in Oracle/Postgres/SQL Server

Companies usually have separate databases for OLTP and OLAP (row stores and column stores), with data loaded through ETL jobs into another database for analytical processing. I am particularly interested to know whether, in an RDBMS like Oracle or Postgres, there is any way to define which tables or objects are stored for OLAP and which for OLTP queries, at any level of granularity, within the same database.
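As an illustration of the granularity I mean, my understanding is that SQL Server allows this per table by creating a columnstore index on just the tables intended for analytical queries, while the other tables remain ordinary row-store tables. A sketch with made-up table names:

-- Orders stays a row-store (OLTP-style) table.
CREATE TABLE Orders (
    OrderID    int       NOT NULL PRIMARY KEY,
    CustomerID int       NOT NULL,
    OrderDate  datetime2 NOT NULL
);

-- OrderFacts is stored as a columnstore for analytical scans.
CREATE TABLE OrderFacts (
    OrderID    int       NOT NULL,
    CustomerID int       NOT NULL,
    OrderDate  datetime2 NOT NULL,
    Amount     money     NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX CCI_OrderFacts ON OrderFacts;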