Storing dates and their history of changes and emailing – Does my database design pose any problems?

Background

I’m designing some tables that allow dates to be assigned to an object so that progress on that object can be tracked, along with functionality to store emails sent pertaining to these objects and their dates. For example, let's say that the object is a Contract (a contracted task to be completed).

This means we would have a contract table

[Diagram: ContractTable]

Now for this contract we can have date fields such as an obtained date, start date, review date, estimated finish date, finish date, etc. Right now most contracts will contain the same columns, but requirements change quite often, so we don't know whether more columns will be added or fewer will be used in the future.

Because of this, I figured I would separate the dates from the contract table. Each date in this ContractDate table would have a foreign key to a FieldHeading table that describes which aspect of the contract the date represents. Finally, an audit-trail style table called ContractDateHistory would keep track of changes to a date.

[Diagram: Contract and Date tables kept separate, with a history-of-dates table]

This way we can also selectively choose what dates go to a contract. One contract may have 4 dates and another may have 6.
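A minimal sketch of what I have in mind is below; table and column names are only illustrative, and I've assumed SQL Server-style syntax:

-- Rough sketch only; real names/types would differ
CREATE TABLE Contract (
    ContractId INT IDENTITY PRIMARY KEY,
    Name       NVARCHAR(200) NOT NULL
    -- other non-date contract columns...
);

CREATE TABLE FieldHeading (
    FieldHeadingId INT IDENTITY PRIMARY KEY,
    Description    NVARCHAR(100) NOT NULL   -- e.g. 'Obtained', 'Start', 'Review'
);

CREATE TABLE ContractDate (
    ContractDateId INT IDENTITY PRIMARY KEY,
    ContractId     INT NOT NULL REFERENCES Contract (ContractId),
    FieldHeadingId INT NOT NULL REFERENCES FieldHeading (FieldHeadingId),
    DateValue      DATE NOT NULL,
    UNIQUE (ContractId, FieldHeadingId)     -- one current value per date type per contract
);

CREATE TABLE ContractDateHistory (
    ContractDateHistoryId INT IDENTITY PRIMARY KEY,
    ContractDateId        INT NOT NULL REFERENCES ContractDate (ContractDateId),
    OldDateValue          DATE NOT NULL,
    ChangedAt             DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);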

These contracts can have emails sent to people informing them of how these dates are going or whether a date has changed. Because multiple contracts can be covered by one email, I have a ContractsForEmail table which links the ID of the email to the ID of each contract included in that email.

[Diagram: Contract tables with the emailing tables]

A row would be inserted for each email sent to a user. The needsResolution flag is set if the user has to take action on the email (this depends on the type of email), and resolved is set once the user has taken that action. A program will check whether the user has taken action after a set amount of time and, if not, send a reminder email (which could possibly also show up in this table).

The template can have a list of users that emails derived from the template are sent to (an email will be sent for each user assigned to the template's mailing list; additional emails could be sent too, but the users who always get an email are kept here).
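A rough sketch of the emailing side follows; the Email, EmailTemplate and TemplateRecipient tables and their columns are assumptions for illustration, not the final design:

CREATE TABLE EmailTemplate (
    EmailTemplateId INT IDENTITY PRIMARY KEY,
    Name            NVARCHAR(100) NOT NULL
);

CREATE TABLE TemplateRecipient (
    EmailTemplateId INT NOT NULL REFERENCES EmailTemplate (EmailTemplateId),
    UserId          INT NOT NULL,            -- FK to the omitted User table
    PRIMARY KEY (EmailTemplateId, UserId)
);

CREATE TABLE Email (
    EmailId         INT IDENTITY PRIMARY KEY,
    EmailTemplateId INT NOT NULL REFERENCES EmailTemplate (EmailTemplateId),
    UserId          INT NOT NULL,            -- recipient
    SentAt          DATETIME2 NOT NULL,
    NeedsResolution BIT NOT NULL DEFAULT 0,
    Resolved        BIT NOT NULL DEFAULT 0
);

CREATE TABLE ContractsForEmail (
    EmailId    INT NOT NULL REFERENCES Email (EmailId),
    ContractId INT NOT NULL REFERENCES Contract (ContractId),
    PRIMARY KEY (EmailId, ContractId)
);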

My Question

I have many questions about whether my design is okay, but specifically I'd like to know: is what I am doing with Contract and ContractDate okay?

Instead of having the Contract and Dates separate should I just add Date columns to the Contracts table?

If they get used then good, and if they don't for a specific contract then the column is just kept as NULL.

This could lead to columns in the contract table that get added and never used again (almost always NULL). Like so:

[Diagram: Contract table with the date columns inline]

AddedDateInFuture is there to illustrate that a column may be added in the future; it wouldn't literally be there.
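For comparison, the wide-table alternative would look something like this (again, names are only illustrative):

CREATE TABLE Contract (
    ContractId          INT IDENTITY PRIMARY KEY,
    Name                NVARCHAR(200) NOT NULL,
    ObtainedDate        DATE NULL,
    StartDate           DATE NULL,
    ReviewDate          DATE NULL,
    EstimatedFinishDate DATE NULL,
    FinishDate          DATE NULL
    -- any future date would need: ALTER TABLE Contract ADD SomeNewDate DATE NULL;
);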

  • What are the trade-offs between the two options?
  • Is there anything I’m missing with the emailing tables? Would this kind of design work?

Currently the system would cater to under 200 users, but I'd like it to be built a little future-proof in case many more use it.

As a last note, some database tables, such as the usual “User” table, have been left out.

Consequences of cascade drop on sequences/functions/views

I am migrating some data to another server while dropping all sequences/functions/views. When I simply try

DROP VIEW | SEQUENCE | FUNCTION | AGGREGATE without CASCADE, errors are thrown indicating dependencies. The goal is: if no data (i.e. no tables?) will be affected by using CASCADE, then I can use it.

Assumption: if no table is dependent on views/functions/sequences, then no table will be affected by using CASCADE.

Then I guess using CASCADE for views will be fine since they always depend on some base tables.

For functions and sequences I am not so sure. I haven't written that many SQL functions myself, and I don't know whether, if the data in a table is generated by some function, that means the table depends on the function.

For sequences, it seems that a sequence is created automatically by Postgres if I use serial as the column type. My first guess would be that in this case the table that uses serial depends on the sequence. However, testing on a dummy database on my laptop, the data seems to be unaffected (I just ran SELECT COUNT(*) FROM TABLE_NAME after dropping the sequence).
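This is roughly the kind of test I ran; the table and column names are just for illustration:

-- dummy table with a serial column; Postgres creates sequence t_id_seq
-- and sets the column default to nextval('t_id_seq')
CREATE TABLE t (id serial PRIMARY KEY, val text);
INSERT INTO t (val) VALUES ('a'), ('b');

DROP SEQUENCE t_id_seq;           -- fails: the default value of column t.id depends on it
DROP SEQUENCE t_id_seq CASCADE;   -- succeeds: drops the column default, existing rows untouched

SELECT count(*) FROM t;           -- still 2, but new inserts no longer get an id generated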

Create a row based on a condition (column value)

I have a table that contains the following data:

EmpNo | GivenName | PreferredName
---------------------------------
1     | Paris     | Paris
2     | Ashley    | Ash

I am trying to get an output where, if the GivenName and PreferredName values are the same, the row is returned as it is, but if they are different, the row is returned twice (if possible, with PreferredName set to NULL in the first occurrence and GivenName set to NULL in the second). Something like this:

EmpNo | GivenName | PreferredName
---------------------------------
1     | Paris     | Paris
2     | Ashley    | NULL
2     | NULL      | Ash

I have no idea where to begin. I tried using a CASE statement and a FULL OUTER JOIN to no avail. I have also tried the code below, which gives me the answer for the first part of my question:

SELECT [EmpNo]
      ,[GivenName]
      ,[PreferredName]
FROM [Emp] E
WHERE [GivenName] = [PreferredName]
UNION ALL
SELECT [EmpNo]
      ,[GivenName]
      ,[PreferredName]
FROM [Emp]
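For reference, I suspect the desired output could be built with a UNION ALL along these lines, though I am not sure this is the right approach (SQL Server syntax assumed):

SELECT [EmpNo], [GivenName], [PreferredName]
FROM [Emp]
WHERE [GivenName] = [PreferredName]
UNION ALL
SELECT [EmpNo], [GivenName], NULL        -- "different" rows, first occurrence
FROM [Emp]
WHERE [GivenName] <> [PreferredName]
UNION ALL
SELECT [EmpNo], NULL, [PreferredName]    -- "different" rows, second occurrence
FROM [Emp]
WHERE [GivenName] <> [PreferredName]
ORDER BY [EmpNo]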

Prevent updating table using procedure in different database

This is my scenario:

  • Windows User Gabe has db_owner access of database DB1.
  • Windows User Gabe has db_datareader on database DB2.
  • When the user executes UPDATE DB2.dbo.Tbl1 SET X = 1, they get “The UPDATE permission was denied..” as expected.
  • When the user executes a proc DB1.dbo.uspUpdateTable containing the same UPDATE statement above, it succeeds.

I thought this might be due to cross-database ownership chaining or the TRUSTWORTHY setting, but neither is enabled.
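For reference, this is how I double-checked both settings (a sketch; the database names are from the scenario above):

SELECT name, is_db_chaining_on, is_trustworthy_on
FROM sys.databases
WHERE name IN (N'DB1', N'DB2');

-- and the server-wide chaining option
SELECT value_in_use
FROM sys.configurations
WHERE name = N'cross db ownership chaining';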

Is this the expected behavior for the db_owner privilege? And is there a way to prevent this?

SQL Server 2017

Multi-leader replication vs single-leader performance

I’m trying to wrap my head around a hypothetical scenario.

Imagine we have a very write-heavy distributed database and sharding is not an option.

I wonder whether, and how, multi-leader replication (write-write) is more efficient than the single-leader scenario (write-read), since write-write has the overhead of syncing databases and propagating writes to the other master(s), ultimately ending up with the same number of write operations.

In which cases is multi-leader replication for write-heavy applications considered more performant than single-leader, and in which cases is it not?

I understand the question is broad and nuanced but would be happy to read some thoughts on the subject.

DynamoDB: How to model data that’s shared and queried by all users?

I am very intrigued by DynamoDB, and it works incredibly well when I model the data for my main use case for my application. That being said, there is one specific use case that I can’t wrap my head around.

Let's say I have users in a table, with the user id being the primary key. Most information is specific to the user. I want to be able to communicate with my users, so I want the ability to make announcements to them. These announcements are shared across all the users. I can store user-specific information about announcements in the users' own attributes, like read and unread announcements.

The problem (if it’s not clear already) is that there is only one set of announcements but they will be queried by every user frequently, leading to an anti-pattern of DynamoDB and potential throttling.

My initial thought is to make k copies of the announcements and label the keys announcement_copy_1, announcement_copy_2 … announcement_copy_k; then, on the query to check for new announcements, I would randomly pick an integer from 1 to k and query that copy. Each announcement copy would be the partition key, and I would have sort keys with the date of the specific announcement, and attributes with the text and type of announcement.

I'm not confident that this is the best approach to this problem, or whether I'm missing something. Also, I am looking at going serverless with AWS Lambda, if that affects anything.

Thank you in advance for any suggestions or advice!

Admin on phpMyAdmin does not have Super privileges

I have Plesk via GoDaddy and I am the sole admin and user. When I go to phpMyAdmin and try to execute the following:

SET GLOBAL event_scheduler = 'ON'

I get the error “#1227 – Access denied; you need (at least one of) the SUPER privilege(s) for this operation”

Considering I don't have a security/privileges tab, how can I change the SQL database privileges to allow this? It would be easy to do if it were locally hosted, but I can't find anything on how to do this with my specific setup.
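In case it helps, the grants the account actually has can be checked from the phpMyAdmin SQL tab with:

SHOW GRANTS FOR CURRENT_USER();
-- SUPER (or SYSTEM_VARIABLES_ADMIN on MySQL 8+) would need to appear here for SET GLOBAL to work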

Adding a database with Service Broker to an existing Availability Group on a new replica

  • Server 1 – SQL Server 2016 (Primary in existing AG)
  • Server 2 – SQL Server 2016 (Secondary in existing AG)
  • Server 3 – SQL Server 2016 (Offsite Secondary in existing AG)
  • Server 4 – SQL Server 2017 (New server, destined to become new primary)

An ~800GB database is happily running on the 2016 AG. It has Service Broker enabled and is running fine.

The plan is to restore the database onto Server 4, add it to the AG, then manually failover so Server 4 is the primary in order to upgrade Servers 1, 2 & 3 to SQL 2017.

Server 4 has a different drive configuration from the other servers, so in order to add the database to the AG I am using the dbatools Restore-DbaDatabase command. The database must be left in NORECOVERY in order to finally add it to the AG.

The problem here is that Service Broker must be enabled via ENABLE_BROKER on the database before it is added to the AG, but this is not possible because of the NORECOVERY state of the database. SET TRUSTWORTHY ON must also be set.
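Concretely, the steps that conflict look roughly like this (database name and backup path are placeholders, and Restore-DbaDatabase handles the actual file moves):

-- restored with NORECOVERY so it can later join the AG
RESTORE DATABASE [MyAGDatabase]
    FROM DISK = N'X:\Backups\MyAGDatabase.bak'   -- hypothetical path
    WITH NORECOVERY;

-- both of these fail while the database is in the restoring state
ALTER DATABASE [MyAGDatabase] SET ENABLE_BROKER;
ALTER DATABASE [MyAGDatabase] SET TRUSTWORTHY ON;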

Is there a way to achieve this without tearing down the whole AG?

I’m really trying to avoid this because it takes a long time to copy and restore backups onto the offsite server (Server 3) when initially joining the AG.

This guide has proved useful for the rest of the process.