MongoDB cross-datacenter replication without elections or high availability

I want to replicate a MongoDB database to another node, located in another datacenter. This is to help guard against data loss in the case of a hardware failure.

We don’t want/need high availability or elections; just a ‘close to live’ read-only copy of the database, located in another DC.

Everything I read says you need an odd number of nodes because of elections, but that isn’t something we need or want, and I can’t find anything about running just one primary and one secondary (I might be missing it).

Is this something we can achieve with MongoDB, and if so are there any ‘gotchas’ or serious downsides we should consider?
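For context, the kind of two-member setup I have in mind looks like this in the mongo shell (a sketch only; hostnames are placeholders). My understanding is that giving the remote member priority 0 and votes 0 keeps it from ever standing for election, while the primary keeps a voting majority on its own:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-node.example.com:27017" },   // primary in the main DC
    { _id: 1, host: "dc2-node.example.com:27017",     // ‘close to live’ read-only copy in the other DC
      priority: 0, votes: 0 }
  ]
})

As I understand it, a non-voting member must also have priority 0, which fits here since we never want it promoted.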

How to delete a specific range of data from a system queue data table?

Due to a sudden increase in the size of our database .mdf file, we checked and found that four internal queue message tables had grown unexpectedly, so we decided to delete a range of data from these tables.

We tried this code:

declare @c uniqueidentifier
while (1 = 1)
begin
    select top 1 @c = conversation_handle from dbo.queuename
    if (@@ROWCOUNT = 0)
        break
    end conversation @c with cleanup
end

But it tries to delete all of the data, and because of a space limitation the query cannot complete. My internal message queue tables:

(screenshots of the internal queue message tables)
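What I think we need is a version that ends only a limited batch of conversations per run, so the space usage stays bounded; something along these lines (a sketch: dbo.queuename and the batch size of 1000 are placeholders):

-- end at most 1000 conversations, then stop
declare @c uniqueidentifier
declare @done int = 0
while (@done < 1000)
begin
    select top 1 @c = conversation_handle from dbo.queuename
    if (@@ROWCOUNT = 0)
        break
    end conversation @c with cleanup
    set @done = @done + 1
end

The idea would be to run this repeatedly, with log backups between batches, rather than letting one open-ended loop delete everything at once.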

InfluxDB performance

Please, what is the current state of the art with respect to InfluxDB performance tuning? What are the limits of InfluxDB in terms of operations per second on different architectures? What are the recommended practices for setting up InfluxDB for a large-scale deployment? And what about interfacing with Zabbix?

Database role-based access control design for a survey app

I’m going to design the RBAC for a survey app. Each survey has the same roles, shown below, with exactly one manager, one leader, and multiple participants.

Role_ID|Role_Name  |
-------|-----------|
      1|admin      |
      2|manager    |
      3|leader     |
      4|participant|

Each user can have multiple roles, and a user who is the manager or leader of a survey can also choose whether or not to join that survey as a participant. I want each user to only be able to act on the surveys he owns: for example, if UserA is assigned as the leader of SurveyA, he should have the edit_survey capability on SurveyA but not on SurveyB, which he is not assigned to. How should I design the database for this?

I came up with two options. Can someone check which one is better, or whether there is a better solution altogether?

Option 1

I put manager and leader columns on the Survey table as FKs to User_ID in the User table, since each is a one-to-one relationship, and created a new participant table for the participants.

(schema diagram for Option 1)
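In SQL terms, what I mean by Option 1 is roughly this (a sketch; column names and types are placeholders, and "User" stands for my existing user table):

CREATE TABLE Survey (
    Survey_ID  INT PRIMARY KEY,
    Title      VARCHAR(200),
    Manager_ID INT NOT NULL REFERENCES "User"(User_ID),  -- the single manager
    Leader_ID  INT NOT NULL REFERENCES "User"(User_ID)   -- the single leader
);

CREATE TABLE Participant (
    Survey_ID INT REFERENCES Survey(Survey_ID),
    User_ID   INT REFERENCES "User"(User_ID),
    PRIMARY KEY (Survey_ID, User_ID)  -- a user joins a given survey at most once
);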

Option 2

I create a new user_role_in_survey table to store the manager and leader roles; this table replaces the old user_role table. Survey_ID is an FK to the Survey table, and there is again a participant table for the participants.

(schema diagram for Option 2)
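And Option 2 roughly like this (again a sketch; the Role_ID values are the ones from the role table above):

CREATE TABLE User_Role_In_Survey (
    User_ID   INT REFERENCES "User"(User_ID),
    Survey_ID INT REFERENCES Survey(Survey_ID),
    Role_ID   INT REFERENCES Role(Role_ID),
    PRIMARY KEY (User_ID, Survey_ID, Role_ID)  -- a user can hold several roles in one survey
);

The appeal of Option 2, as I see it, is that a capability check becomes a single lookup, e.g. ‘may this user edit this survey’:

SELECT 1
FROM User_Role_In_Survey
WHERE User_ID = :user_id
  AND Survey_ID = :survey_id
  AND Role_ID IN (2, 3);  -- manager or leader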

Enabling TLS and TLS-MA simultaneously for different clients of an Oracle DB

We have a requirement to enable TLS-MA for some of the clients connecting to a specific Oracle database, while the other clients continue to use TLS with just a server certificate.

  • We are using Oracle 12c in our environment.
  • Clients are connecting using the JDBC thin driver.

I am an Oracle noob and I am not able to understand the documentation here.

Will it work if I create two listeners, one with SSL_CLIENT_AUTHENTICATION=TRUE and another with SSL_CLIENT_AUTHENTICATION=FALSE?
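What I am picturing is two listener definitions along these lines, each run from its own TNS_ADMIN directory so that each gets its own SSL_CLIENT_AUTHENTICATION setting (a sketch only; hosts, ports, and wallet paths are placeholders):

# listener.ora for the TLS-only listener
LISTENER_TLS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost.example.com)(PORT = 2484)))
SSL_CLIENT_AUTHENTICATION = FALSE
WALLET_LOCATION = (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /path/to/wallet)))

# listener.ora for the TLS-MA listener (clients must present a certificate)
LISTENER_TLSMA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost.example.com)(PORT = 2485)))
SSL_CLIENT_AUTHENTICATION = TRUE
WALLET_LOCATION = (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /path/to/wallet)))

TLS-MA clients would then connect to port 2485 and plain-TLS clients to 2484, with the database registered against both listeners (e.g. via LOCAL_LISTENER).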

What can cause higher CPU time and duration for a given set of queries in traces run on two separate environments?

I’m troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. We analyzed traces captured in both environments with the same parameters/filters, on the same version of SQL Server (2016 SP2), and against the exact same database. Both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. The duration of all processes in our QA environment was around 18 seconds while the customer’s was over 80 seconds, and our CPU time was close to 10 seconds while theirs was also over 80 seconds. Also worth mentioning: both environments are currently configured with MAXDOP 1.

The customer has less memory (~100GB vs 120GB) and slower disks (10k HDD vs SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn’t match. I don’t have all the details on the CPU architecture they are using; I’m waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I’m currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn’t appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting is in High Performance or Balanced mode, although I’m not certain that CPU throttling would have the impact we’re seeing.
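For reference, the snapshot I have asked them for is along these lines (a common wait-stats query; the exclusion list is abbreviated here). A high signal_wait_time_ms relative to wait_time_ms would point at CPU pressure:

SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms  -- time spent waiting for a CPU after the resource wait ended
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;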

My question is: what factors can affect CPU time and, ultimately, total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both environments generate the same query plans, with all other things as close to equal as possible, makes me think it’s related to the hardware SQL Server is installed on.

Review of Log Messages Database Schema

So I have been tasked with designing a log database, and I would appreciate some feedback on my design.

I have an application that consists of three basic parts:

  • A frontend
  • A backend
  • A low level component

Each part can create a log message which needs to be stored in a database. The parts (frontend, backend, low level component) that create the log messages should be uniquely identified, and when looking at a message it should be possible to see which part created it.

Each message has a specific type assigned to it, and the type can be one of the following distinct values:

  • Error,
  • Warning,
  • Info,
  • Debug

The message itself should also be unique: it should have a text that states what the problem is, and possibly also a description with extra information about the problem and the circumstances under which it could have happened. In addition, the time the message was created is very important; because of the low level component we need microsecond accuracy.

Examples

Message: Pump Failure

Description: The pump is not pumping enough oil. Check the amount of oil and also check the temperature of the system.

Finally, there are some extra "requirements" that in my opinion could affect the design of the system. The low level component produces a lot of messages in a short amount of time; this could lead to the database reaching its storage limit relatively quickly. In that case the older messages should be deleted first. However, there are rules that need to be taken into consideration before deleting a message: an Info is less important than a Warning, and a Warning is less important than an Error. Another rule is that unless I have reached a specific threshold I am not allowed to delete messages of a given type, e.g. only if I have more than 500 errors am I allowed to start deleting the oldest errors.

My current design is the following:

Message
    Id (PK)
    Name varchar
    Description varchar

MessageType
    Id (PK)
    Name varchar

Sender
    Id (PK)
    Name varchar

MessagesLog
    Id (PK)
    MessageId (FK)
    SenderId (FK)
    MessageTypeId (FK)
    Date bigint
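For the deletion rules, what I had in mind against this design is one statement per type, something like the following (T-SQL syntax; ‘Error’ and the threshold of 500 are the example from above):

WITH ranked AS (
    SELECT ml.Id,
           ROW_NUMBER() OVER (ORDER BY ml.Date DESC) AS rn
    FROM MessagesLog ml
    JOIN MessageType mt ON mt.Id = ml.MessageTypeId
    WHERE mt.Name = 'Error'
)
DELETE FROM MessagesLog
WHERE Id IN (SELECT Id FROM ranked WHERE rn > 500);  -- keep only the 500 newest errors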

However, taking into consideration these extra requirements, and thinking that I will need to do a lot of checking at the application level to verify that certain criteria are fulfilled before I delete a record from the database, I thought about creating a separate table for each message type:

Message
    Id (PK)
    Name varchar
    Description

Sender
    Id (PK)
    Name varchar

MessagesLogError
    Id (PK)
    MessageId (FK)
    SenderId (FK)
    Date bigint

MessagesLogWarning
    Id (PK)
    MessageId (FK)
    SenderId (FK)
    Date bigint

MessagesLogInfo
    Id (PK)
    MessageId (FK)
    SenderId (FK)
    Date bigint

MessagesLogDebug
    Id (PK)
    MessageId (FK)
    SenderId (FK)
    Date bigint

What do you think?

Cluster is causing my query to run SLOWER than before

So I have been stuck on this for six hours now and I have no clue what to do. I am doing university homework that requires us to create an unoptimized SQL query (it does not have to make sense), then apply indexes and see if they make it faster (which they did for me, from 0.70 elapsed time to 0.66), and then apply clusters.

I applied clusters, and the query now takes almost double the time to finish, from 0.70 to 1.15. Below is how I specified my cluster:

CREATE CLUSTER customer2_custid25 (custid NUMBER(8))
    SIZE 270
    TABLESPACE student_ts;

I also retried everything with INITIAL and NEXT storage parameters, but that did not seem to make a difference. Below are the tables:

CREATE TABLE CUSTOMER18 (
    CustID       NUMBER(8) NOT NULL,
    FIRST_NAME   VARCHAR2(15),
    SURNAME      VARCHAR2(15),
    ADDRESS      VARCHAR2(20),
    PHONE_NUMBER NUMBER(12))
    CLUSTER customer2_custid25(CustID);

CREATE TABLE product18 (
    ProdID NUMBER(10) NOT NULL,
    PName  VARCHAR2(6),
    PDesc  VARCHAR2(15),
    Price  NUMBER(8),
    QOH    NUMBER(5));

CREATE TABLE sales18 (
    SaleID    NUMBER(10) NOT NULL,
    SaleDate  DATE,
    Qty       NUMBER(5),
    SellPrice NUMBER(10),
    CustID    NUMBER(8),
    ProdID    NUMBER(10))
    CLUSTER customer2_custid25(CustID);

CREATE INDEX customer2_custid_clusterindxqg ON CLUSTER customer2_custid25 TABLESPACE student_ts;

I also tried removing the TABLESPACE clause from the cluster index.

I followed this formula to help calculate the cluster size:

"Size of a cluster is the size of a parent row + (size of Child row * average number of children). "

This brought me to a size of 270. However, after testing sizes from 250 to 350 in steps of 20, I found 320 to be the fastest, at 1.15.


No matter what I try, I cannot for the love of me get it lower than my base query times.

Other students have done the same and halved their query time.

All help is really appreciated.