Hash Table Testing in C++

I’m attempting to learn C++ right now, but I am struggling with this project from here: https://ucsb-cs32.github.io/f19/lab/lab04a/

I’m desperate for help right now, as I have no idea how to do it. If anybody would be so kind as to explain how to do this project in detail, that would be wonderful.

Accounting method analysis for table expansion by tripling instead of doubling an array

If we double the array every time it fills, we get an amortized cost of 3 per insertion (3n in total over n insertions, or $3 per insertion if you prefer).

I was wondering what it would be if we tripled the array size instead of doubling it.

The rationale behind the $3 cost for every insertion is as follows (a worked version of this accounting appears just after the list):

  • $1 pays for inserting the element itself.
  • $1 is saved for when the element will have to move itself to the new array of double the size.
  • $1 pays for moving an element other than itself (one whose credit has already been spent) when the transfer is required.
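
To make that rationale concrete, here is a sketch of the accounting written out for a general growth factor k (m and k are my notation, not part of the original argument). Just after an expansion, the table of capacity m holds m/k elements whose credit was already spent on the copy; the next m(k-1)/k insertions must bank enough to pay for copying all m elements at the next expansion, so each insertion is charged

$$
c(k) = \underbrace{1}_{\text{insert}} + \underbrace{1}_{\text{its own future copy}} + \underbrace{\frac{m/k}{m(k-1)/k}}_{\text{share of an old element's copy}} = 2 + \frac{1}{k-1}.
$$

Plugging in k = 2 gives c(2) = 3, which recovers the $3 charge described above; other growth factors go into the same formula.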

I can’t seem to find the cost for an array that triples each time.

Delete one row from a table matching two variables, with a limit

I’m developing a PHP script that stores user sessions in a database.

When a user logs out from the server, remove only one row (because the same user can be logged in more than once).

When the server reboots, remove all sessions of all users stored for that server’s IP.

Table structure:

  • Table name:
    • totalconcurrent
  • Columns in table:
    • ID [int, autoincrement, 11]
    • serverip [mediumint]
    • userid [text]

Running on:

mysql  Ver 15.1 Distrib 5.5.64-MariaDB, for Linux (x86_64) using readline 5.1 

Case: User logout

Query runs correctly but doesn’t delete anything.

    elseif ($_GET['status'] == "logout") {
        $sql = "DELETE FROM totalconcurrent WHERE (serverip,userid) IN ((INET_ATON('".get_server_ip()."'),'".$_GET['id']."')) LIMIT 1;";
        if ($conn->query($sql) === TRUE) {
            echo "1 Session of ".$_GET['id']." removed";
        } else {
            echo "Error: " . $sql . "<br>" . $conn->error;
        }
    }

Case: Server Reboot

Query runs correctly but doesn’t delete anything.

    elseif ($_GET['status'] == "reboot") {
        $sql = "DELETE FROM totalconcurrent WHERE serverip IN ((INET_ATON('".get_server_ip()."')));";
        if ($conn->query($sql) === TRUE) {
            echo "Server rebooted, removed all session stored in this server";
        } else {
            echo "Error: " . $sql . "<br>" . $conn->error;
        }
    }

Problem:

I’ve tried many times and many kinds of queries, but without finding the correct way to do this.

What queries do I need to do this?
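
For reference, here is a minimal sketch of the two deletes as plain SQL, with a hypothetical IP address and user id standing in for the values that get_server_ip() and $_GET['id'] supply. One thing worth double-checking: INET_ATON() can return values up to 4294967295, which does not fit in a MEDIUMINT column, so if serverip really is MEDIUMINT the stored values may not match what the WHERE clause computes.

    -- User logout: remove a single session row for this user on this server.
    DELETE FROM totalconcurrent
    WHERE serverip = INET_ATON('192.0.2.10')
      AND userid = 'some-user-id'
    LIMIT 1;

    -- Server reboot: remove every session stored for this server.
    DELETE FROM totalconcurrent
    WHERE serverip = INET_ATON('192.0.2.10');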

Return user ranking points by week, month, and year from multiple tables with MySQL

Here is the structure of my tables:

User

|--------|------------|
| id     | name       |
|--------|------------|
| 1      | Name1      |
| 2      | Name2      |
| 3      | Name3      |
|--------|------------|

Post

|--------|------------|-------------|--------------------|
| id     | content    | user_id     | created_at         |
|--------|------------|-------------|--------------------|
| 1      | Content1   | 1           | 2020-01-17 14:03:31|
| 2      | Content2   | 1           | 2020-01-17 16:18:23|
| 3      | Content3   | 2           | 2020-01-17 16:29:13|
|--------|------------|-------------|--------------------|

Comment

|--------|------------|-------------|----------|---------------------|
| id     | comment    | user_id     | post_id  | created_at          |
|--------|------------|-------------|----------|---------------------|
| 1      | Comment1   | 1           | 1        | 2020-01-20 18:29:19 |
| 2      | Comment2   | 1           | 1        | 2020-01-22 17:25:49 |
| 3      | Comment3   | 2           | 2        | 2020-01-28 11:37:59 |
|--------|------------|-------------|----------|---------------------|

Vote

|--------|-------------|----------|-----------------------|
| id     | user_id     | post_id  | created_at            |
|--------|-------------|----------|-----------------------|
| 1      | 1           | 1        | 2020-01-20 15:08:55.0 |
| 2      | 1           | 2        | 2020-01-20 15:13:29   |
| 3      | 2           | 2        | 2020-01-20 15:13:32   |
|--------|-------------|----------|-----------------------|

I want to find the top 10 users by score for the week, the month, and the year, according to the following formula:

  • Creating a post earns 10 points.
  • Commenting on a post earns 5 points.
  • Voting on a post earns 2 points.

Could anyone help me with this, please?

Thank you so much.
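
For what it’s worth, here is a sketch of how the weekly ranking could be written, assuming the table and column names shown above and MySQL’s date functions; swapping the YEARWEEK() condition for MONTH()/YEAR() comparisons would give the monthly and yearly versions. This is one possible approach, not a tested solution.

    -- Weekly top 10: sum weighted activity per user for the current ISO week.
    SELECT u.id, u.name, SUM(s.points) AS score
    FROM User u
    JOIN (
        SELECT user_id, created_at, 10 AS points FROM Post      -- 10 points per post
        UNION ALL
        SELECT user_id, created_at, 5 AS points FROM Comment    -- 5 points per comment
        UNION ALL
        SELECT user_id, created_at, 2 AS points FROM Vote       -- 2 points per vote
    ) s ON s.user_id = u.id
    WHERE YEARWEEK(s.created_at, 1) = YEARWEEK(CURDATE(), 1)
    GROUP BY u.id, u.name
    ORDER BY score DESC
    LIMIT 10;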

Find the nearest time from another table and add column

Please help. I need to create a view of Assembly details and when they have gone through a particular machine. I have two tables. Table 1 has the main assembly data and Table 2 has the machine data.

The final query should return all of Table 1’s fields plus the Machine field from Table 2, where BuildDate (Table 1) equals WaveDate (Table 2) and WaveTime (Table 2) is the latest time that is still less than BuildTime (Table 1).

Hopefully the pictures make more sense than my description :/ Any help is greatly appreciated. Many thanks

(Screenshots of Table 1 and Table 2 omitted.)
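
As a starting point, here is a sketch using a correlated subquery. The table names Assembly and MachineLog are placeholders for Table 1 and Table 2 (the real names aren’t given), and LIMIT 1 assumes a MySQL/PostgreSQL-style database; SQL Server would use SELECT TOP 1 instead.

    -- For each assembly row, pull the Machine whose WaveTime is the latest one
    -- still earlier than the BuildTime on the same date.
    SELECT a.*,
           (SELECT m.Machine
            FROM MachineLog m
            WHERE m.WaveDate = a.BuildDate
              AND m.WaveTime < a.BuildTime
            ORDER BY m.WaveTime DESC
            LIMIT 1) AS Machine
    FROM Assembly a;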

Another feasible way to implement a multi-level page table?

The advantage of a multi-level page table is that we can swap the inner-level page tables out to secondary storage. If, however, we want quick access to the whole address space, we have to keep all the page tables in memory, and then there are no savings.

However, imagine that the innermost page table did not point to the final frame, but that each level of the page table instead contributed some of the bits of the final address. In other words, we divide each virtual address into sections and map each section separately.

For example, say we have a virtual address 1011 that maps to 1110 using a 2-level page table. The outer-level page table maps 10 -> 11, and the 2nd-level page table with index 3 (from binary 11) maps 11 -> 10. Concatenated, these give the address 1110.
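
Written out as a formula (the notation here is mine, just restating the example): if a virtual address v is split into a high part v_h and a low part v_l of b bits, with T_0 the outer table and T_i the 2nd-level tables, the scheme computes

$$
\mathrm{phys}(v) = T_0[v_h]\cdot 2^{b} + T_{T_0[v_h]}[v_l].
$$

For v = 1011 with b = 2: T_0[10] = 11 and T_3[11] = 10, so phys = 11 · 2^2 + 10 = 1110 in binary.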

I was learning about multi-level page tables and they were quite confusing to me. This is the way I initially imagined they worked. Now obviously, this restricts how we can map the virtual address space to the physical address space, i.e. pages with the same prefix will have physical locations close to each other. However, I don’t see the problem with this approach.

Why is this approach not used if it can save memory? Or do I have some error in my thinking?

Is creating a joining/bridging table the most practical and efficient way of normalizing numerous M:M relationships in a database?

Let me start with an example:

Table users:

ID | Name
---------
1, Kirk
2, John

Table class:

ID | Class
----------
1, MATH
2, FIN

Now, based on what I’ve studied so far, in order to properly normalize this database, I’d create another table, a joining/bridging table:

Table class_enrollment:

UID | CID
---------
1     1
1     2
2     1
2     2
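
As a sketch, the bridging table could be declared like this (MySQL-style DDL; the composite primary key and the foreign keys are my additions for illustration, not something given above):

    CREATE TABLE class_enrollment (
        UID INT NOT NULL,
        CID INT NOT NULL,
        PRIMARY KEY (UID, CID),                   -- one row per (user, class) pair
        FOREIGN KEY (UID) REFERENCES users (ID),
        FOREIGN KEY (CID) REFERENCES class (ID)
    );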

Well, it works fine in these kinds of examples.

But what if my database has 35 or 50 M:M relationships? Is it really best to create another 35 or 50 joining tables?

What is the correct way of grabbing a RANDOM record from a PostgreSQL table that is neither painfully slow nor non-random?

I always used to do:

SELECT column FROM table ORDER BY random() LIMIT 1; 

For large tables, this was unbearably, impossibly slow, to the point of being useless in practice. That’s why I started hunting for more efficient methods. People recommended:

SELECT column FROM table TABLESAMPLE BERNOULLI(1) LIMIT 1; 

While fast, it also provides worthless randomness: it appears to always pick the same records every time.

I’ve also tried:

SELECT column FROM table TABLESAMPLE BERNOULLI(100) LIMIT 1; 

It gives even worse randomness: it picks the same few records every time. I need actual randomness.

Why is it apparently so difficult to just pick a random record? Why does the first approach have to read EVERY record and then sort them all? And why do the TABLESAMPLE versions keep returning the same records instead of anything random? I can’t believe I’m still, after all these years, asking how to grab a random record… it’s one of the most basic possible queries.

What is the actual command to use for grabbing a random record from a table in PG which isn’t so slow that it takes several full seconds for a decent-sized table?
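
For what it’s worth: TABLESAMPLE returns its sampled rows in physical storage order, so combined with LIMIT 1 you tend to get a row from near the start of the table every time, which is why it looks non-random. One common workaround, sketched below under the assumption that the table has a reasonably dense integer primary key (my_table, my_column, and id are placeholder names), is to pick a random key in the table’s range and take the first row at or after it:

    -- Not perfectly uniform if there are gaps in id, but it uses the primary
    -- key index instead of reading and sorting the whole table.
    SELECT my_column
    FROM my_table
    WHERE id >= (SELECT min(id) + floor(random() * (max(id) - min(id) + 1))
                 FROM my_table)
    ORDER BY id
    LIMIT 1;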