Dropped tables but space not reclaimed in Postgres 12

I upgraded PostgreSQL 9.5 to PostgreSQL 12.4 a few days ago using the pg_upgrade utility with the link (-k) option.

So I now have two data directories: the old one (v9.5) and the current one that is actually running (v12.4).

Yesterday I dropped two tables of 700 MB and 300 MB.

After connecting with the psql utility I can see that the size of the database whose tables were dropped has decreased (with \l+), but what worries me is that only a few MB have been freed on the storage partition.

I have run vacuumdb on that database, but no luck. I have also checked at the OS level with lsof whether any deleted files are still held open, but there are none.
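
For reference, the \l+ check corresponds roughly to these queries run from psql (mydb is a placeholder database name):

-- database size as reported by \l+, and the total size of ordinary tables
SELECT pg_size_pretty(pg_database_size('mydb'));
SELECT pg_size_pretty(sum(pg_total_relation_size(oid)))
FROM pg_class
WHERE relkind = 'r';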

Looking for a solution.

Postgres Combine Summed Values from 2 Queries / Tables into Single Row

Say I had the following 2 queries, summing values from separate tables.

I would like the sum of recorded time

SELECT
    SUM(minutes) as recorded_minutes,
    SUM(hours) as recorded_hours
FROM recorded_time
WHERE
    project_id = 1

To be combined with the sum of budgeted time in a single row

SELECT
    SUM(minutes) as budgeted_minutes,
    SUM(hours) as budgeted_hours
FROM budgeted_time
WHERE
    project_id = 1

Is it possible to do this in a single query?
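
For illustration, one shape a combined query could take: a sketch that cross-joins the two single-row aggregates into one row (untested, assuming the tables are exactly as above):

-- each subquery returns exactly one row, so the cross join yields one combined row
SELECT r.recorded_minutes,
       r.recorded_hours,
       b.budgeted_minutes,
       b.budgeted_hours
FROM (SELECT SUM(minutes) AS recorded_minutes,
             SUM(hours)   AS recorded_hours
      FROM recorded_time
      WHERE project_id = 1) r
CROSS JOIN
     (SELECT SUM(minutes) AS budgeted_minutes,
             SUM(hours)   AS budgeted_hours
      FROM budgeted_time
      WHERE project_id = 1) b;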

Multiple Primary-Foreign Key connections between Tables Redundant?

I have two tables, Reviews and Critic

(screenshot of the Review and Critic table definitions)

I'm pretty new to RDBMSs and I was wondering whether the connection between Review.rID and Critic.Review is redundant.

My reasoning is that a single critic can have many reviews, but each review is unique, so I need a way to enforce the uniqueness of the reviews. I did this via Review.rID, which is unique.

However, I am already connecting the critic to the review by including a criticID within Review, so is it necessary for me to also connect a reviewID to a critic within Critic?
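
For reference, a minimal sketch of the one-way version being considered, with only the Review-to-Critic link (the column types and the extra columns are assumptions):

-- Critic holds no reference back to Review
CREATE TABLE Critic (
    cID  INTEGER PRIMARY KEY,
    name TEXT
);

-- each review is unique via rID and points at its critic via criticID
CREATE TABLE Review (
    rID      INTEGER PRIMARY KEY,
    criticID INTEGER REFERENCES Critic (cID),
    body     TEXT
);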

How to get multiple max values for 1 column from multiple tables

I am using sqlplus and I need to achieve this result, where the stipend is the maximum stipend for each faculty.

There are 3 tables: student (with name and surname), faculty (with the faculty name) and money (with the stipends); the connections are faculty to student and student to money.

This is my current code, which only returns the single overall maximum, from one faculty:

SELECT ROW_NUMBER() OVER (ORDER BY faculty_name DESC) AS "Nr.",
       faculty.faculty_name,
       student.surname,
       student.name,
       money.stipend AS "STIPEND"
FROM faculty
INNER JOIN student ON faculty.id_faculty = student.faculty_id
INNER JOIN money ON student.id_student = money.student_id
GROUP BY money.stipend, faculty.faculty_name, student.surname, student.name
HAVING max(money.stipend) = (
    SELECT max(stipend)
    FROM faculty
    INNER JOIN student ON faculty.id_faculty = student.faculty_id
    INNER JOIN money ON student.id_student = money.student_id
);

How would I get this end result?
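
For illustration, a correlated-subquery sketch of "maximum stipend per faculty", reusing the same table and column names (untested):

SELECT ROW_NUMBER() OVER (ORDER BY faculty.faculty_name DESC) AS "Nr.",
       faculty.faculty_name,
       student.surname,
       student.name,
       money.stipend AS "STIPEND"
FROM faculty
INNER JOIN student ON faculty.id_faculty = student.faculty_id
INNER JOIN money ON student.id_student = money.student_id
WHERE money.stipend = (
    -- the maximum stipend within this student's faculty
    SELECT MAX(m2.stipend)
    FROM student s2
    INNER JOIN money m2 ON s2.id_student = m2.student_id
    WHERE s2.faculty_id = faculty.id_faculty
);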

Benefits of not having a clustered index on tables (Heaps)

What are the benefits of not having a clustered index on a table in SQL Server? Will a

SELECT * INTO TABLE_A FROM TABLE_B 

be faster if TABLE_A is a heap? Which operations will benefit if the table is a heap? I am quite sure UPDATEs and DELETEs will benefit from a clustered index. What about INSERTs? My understanding is that an INSERT "might" benefit from the table being a heap, both in terms of speed and in terms of other resources and hardware (I/O, CPU, memory and storage…).

What is the scarcest resource in terms of hardware? In terms of storage, will a heap occupy less space? Is disk storage not the least expensive resource? If so, is it rational to keep a table as a heap in order to save disk space? How will a heap affect CPU and I/O for SELECT, INSERT, UPDATE and DELETE? Which costs go up when a table is a heap and we SELECT, UPDATE and DELETE from it?
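
For concreteness, the comparison I have in mind is roughly the following (T-SQL; the clustering column id is just an example):

-- SELECT ... INTO always creates TABLE_A as a heap
SELECT * INTO TABLE_A FROM TABLE_B;

-- versus turning the heap into a clustered table afterwards
CREATE CLUSTERED INDEX CX_TABLE_A ON TABLE_A (id);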

Thanks

Ensuring relationship between child tables exists prior to SQL insert

I have a situation where I have three tables: user, assignment and test. A user must have completed an assignment before he can take the test. This means the test table has both a user foreign key and an assignment foreign key on it.

I could write a SQL statement like this: insert into test (name, user_id, assignment_id) values ('final exam', 1, 1), which would check that the user and the assignment exist before doing the insert. However, it would not check whether the user and the assignment are related.

The easy way to solve this problem is to do a separate query before the insert to ensure the user has the assignment. I'm wondering, though, if I can accomplish both in one query. I'm not all that experienced with constraints or subqueries, both of which could be solutions. Looking for a best practice here, as it will be used throughout an application.
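
For illustration, a sketch of doing the check and the insert in one statement, assuming the assignment table records the completing user in a user_id column (adjust to the real schema):

-- inserts nothing if assignment 1 does not belong to user 1
INSERT INTO test (name, user_id, assignment_id)
SELECT 'final exam', a.user_id, a.id
FROM assignment a
WHERE a.id = 1
  AND a.user_id = 1;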

How to delete all records which are not referenced from other tables

I have a table to which a bunch of other tables have FK references. Is there any way of deleting records from this table only if they are not being referenced?

I know that I can left join the referencing tables and check for NULL, but I have about 10 tables (more will be added) with FKs referencing this table, so it would be cool to have a generic way of doing it.

There are usually no more than a handful of records I need to remove. I suppose I could loop and try to remove each record individually, protecting each deletion with a BEGIN/EXCEPTION block, but that is an ugly concept.
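
For illustration, a sketch of that loop in PL/pgSQL; the table name, key column and id list are placeholders:

-- delete-if-allowed: skip rows that are still referenced somewhere
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM parent_table WHERE id IN (1, 2, 3) LOOP
        BEGIN
            DELETE FROM parent_table WHERE id = r.id;
        EXCEPTION
            WHEN foreign_key_violation THEN
                NULL;  -- still referenced: leave this row alone
        END;
    END LOOP;
END;
$$;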

Does this kind of functionality exist in Postgres? Kind of a soft delete, or delete-if-allowed.

Is a relational database with a dynamic number of tables a good design?

I have a use case where I wish to create a table for each entity, to which the underlying application that owns the entity will publish records.

This table has a fixed structure, so if there are 5 such entities in my system, there will be 5 different tables with the same schema.

The schema is generic, with one of the columns being JSON for flexibility. I do not expect queries based on the fields in the JSON. I expect the following queries on each entity (a sketch follows the list):

  1. On the auto-increment id primary key column with LIMIT and OFFSET where I need to read X rows from the record with id Y.
  2. On the creation date column with LIMIT X.
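
For concreteness, a sketch of one such per-entity table and the two query patterns (PostgreSQL syntax; names and values are illustrative):

-- one table like this per entity; only the table name differs
CREATE TABLE entity_orders (
    id         bigserial PRIMARY KEY,                -- auto-increment id
    created_at timestamptz NOT NULL DEFAULT now(),   -- creation date
    payload    jsonb                                 -- flexible JSON column
);

-- 1. read X rows starting from the record with id Y (LIMIT/OFFSET also works)
SELECT * FROM entity_orders WHERE id >= 42 ORDER BY id LIMIT 100;

-- 2. latest rows by creation date with LIMIT X
SELECT * FROM entity_orders ORDER BY created_at DESC LIMIT 100;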

I expect thousands of such entities to be created on the fly, so in turn there will be thousands of tables in the database.

In the future, when one of these entities has fulfilled its purpose, its table will simply be dropped.

I expect most of these tables to have no more than 100 rows, while a few will reach at least 1M rows as time goes by. This design makes the data easy to query, as my application can determine the table name from the entity name.

Is this a bad design?

Keeping performance in mind, is there a limit to the number of tables in a database in an RDBMS (the above design is with PostgreSQL 11 in mind)?

Should I use a different datastore than an RDBMS to achieve this? Any suggestions?

Multi-level paging where the inner level page tables are split into pages with entries occupying half the page size

A processor uses $36$-bit physical addresses and $32$-bit virtual addresses, with a page frame size of $4$ Kbytes. Each page table entry is of size $4$ bytes. A three-level page table is used for virtual-to-physical address translation, where the virtual address is used as follows:

  • Bits $30-31$ are used to index into the first-level page table.
  • Bits $21-29$ are used to index into the second-level page table.
  • Bits $12-20$ are used to index into the third-level page table.
  • Bits $0-11$ are used as the offset within the page.

The number of bits required for addressing the next-level page table (or page frame) in the page table entries of the first, second and third level page tables is, respectively:

(a) $\text{20, 20, 20}$

(b) $\text{24, 24, 24}$

(c) $\text{24, 24, 20}$

(d) $\text{25, 25, 24}$

I got the answer as (b), since in each page table entry we are, after all, required to point to a frame number in main memory as the base address.
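
Written out, my reasoning is: the frame size is $4$ KB, so the page offset takes $\log_2 4096 = 12$ bits, and a physical frame number therefore needs $36 - 12 = 24$ bits; since the entries at all three levels each hold such a frame number, the answer should be $\text{24, 24, 24}$.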

But this site here says that the answer is (d), and the logic they use, of working in chunks of $2^{11}$ B (the second- and third-level page tables each have $2^9$ entries of $4$ bytes, i.e. $2^{11}$ B, only half a frame), feels like it ruins, or at least does not fit with, the entire concept of paging. Why would the system suddenly start storing data in main memory in chunks other than the granularity defined by the page or frame size? I do not get it.