MS SQL running out of Disk Space – can FileGroups be used to free up some space?


Problem

We have a SQL Server 2014 instance with a 1 TB attached disk (on Azure) that's running out of disk space. We have about 20 GB left (maybe a few weeks of space). As such, we need to move some data off the CURRENT disk and onto a NEW disk.

Details

Server:

Microsoft SQL Server 2014 - 12.0.2548.0 (X64)
Jun  8 2015 11:08:03
Copyright (c) Microsoft Corporation
Web Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

MS SQL 2014 is installed on a classic Azure VM. The VM is in a classic VNET, and the storage disks are also classic. As such, we couldn't just expand the existing disk. MS Support said that we'd need to migrate all of these if we wish to leverage modern Azure Storage, which would allow disk resizing, the newer SSD disks, etc. TL;DR: we can't take this offline for hours, including all the other subsystems that communicate with the VM via its IP address. Now, don't turn this into a flame-fest ... this is what we've been given to work with, and we'll need to fix all of this up later.

So right now, the idea is to try to leverage FILEGROUPS: move one or more tables into a FILEGROUP and push that FILEGROUP onto another disk which we already have attached.
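
To make that concrete, here's a minimal sketch of what I have in mind (all names are made up, and it assumes the table has a clustered index to rebuild; we're on Web Edition, so ONLINE = ON isn't available to us):

-- All names here are hypothetical.
-- 1. Create a new filegroup with a file on the NEW disk (F: in this sketch).
ALTER DATABASE OurDb ADD FILEGROUP SecondaryFG;
ALTER DATABASE OurDb ADD FILE
(
    NAME = N'OurDb_Secondary',
    FILENAME = N'F:\SqlData\OurDb_Secondary.ndf',
    SIZE = 100GB,
    FILEGROWTH = 10GB
) TO FILEGROUP SecondaryFG;

-- 2. Rebuild the table's clustered index onto the new filegroup; this
--    physically moves the table's data pages to the NEW disk.
--    (ONLINE = ON needs Enterprise Edition, so on our Web Edition the
--    table would be unavailable for the duration of the rebuild.)
CREATE UNIQUE CLUSTERED INDEX PK_SomeBigTable
    ON dbo.SomeBigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON SecondaryFG;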

So the questions here are:

  • Is this a really, really crazy and lame idea, first of all?
  • If it's crap but workable, will using FILEGROUPS actually move the data from the CURRENT disk onto the NEW disk (which frees up some space on the nearly full CURRENT disk)?
  • If this is possible, does moving these tables mean the data is locked/unavailable ... which would put us right back at our initial problem 🙁
  • What about logs? Does moving this data mean the logs just get a copy of it? (We are doing hourly and weekly backups, I believe.)

It’s ok if the data is small, but here’s a quick look at some of our tables…

[screenshot: list of our tables with sizes]

That first table is massive relative to the rest of the data (around 750 GB).

I was thinking of moving maybe the 2nd, 3rd, or 4th table in the result image. Remember how I said the infrastructure is all on the OLD classic stuff? That means the HDs are old and slow, so copying data could take some time as well.

As an example, I just tried to copy the .mdf files (this DB has 1 main mdf and 2 other small ones) from the OLD disk over to the NEW disk. That came back with a quick ETA of 24 hours.

Having the site offline for a few hours is totally acceptable; we can take stuff offline while our customers are asleep. But ... 24 hours ... that hurts. The 24-hour figure was for a simple test of this plan:

  • Create a new 2TB disk (if possible).
  • Turn off SQL Server.
  • Copy the mdf + log files to the new disk (24 hours or so).
  • Re-point the database files from the old location to the new location (see the sketch after this list).
  • Start SQL Server again.
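
For that re-pointing step, my understanding is that you move the files (rather than the filegroups) with ALTER DATABASE ... MODIFY FILE while the database is offline. A sketch, with made-up logical and physical names (the real logical names come from sys.master_files); taking just this one database offline would also avoid stopping the whole SQL Server service:

ALTER DATABASE OurDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- (copy the physical files to the new disk here)

ALTER DATABASE OurDb MODIFY FILE
    (NAME = N'OurDb_Data', FILENAME = N'F:\SqlData\OurDb_Data.mdf');
ALTER DATABASE OurDb MODIFY FILE
    (NAME = N'OurDb_Log', FILENAME = N'F:\SqlLogs\OurDb_Log.ldf');

ALTER DATABASE OurDb SET ONLINE;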

Now, we’re open to ideas and I know that stack exchange is not a site for ‘opinions’ so I’m trying to keep this on target with a suggested answer and to get feedback on it … but we’re open to other solutions to reduce the offline time.

So – can anyone help please?

In what ways can the contents of RAM be (inadvertently) written to disk?

I’m working on improving the security of my own system by mitigating the chance sensitive information, (e.g. encryption keys) stored in RAM, are inadvertently written to disk. As of now I know of three common ways this can occur and how they could be mitigated:

  1. The contents of RAM are copied to hiberfil.sys when Windows hibernates
    • Solution: Disable Windows Hibernation
  2. Some contents of RAM are copied into the swap file.
    • Solution: Encrypt the swap file.
  3. Memory Dumps during Windows Blue-screens.
    • Solution: Disable memory dump file generation

Excluding these (as well as tools specifically designed to dump memory), are there any other ways RAM could unintentionally be written to disk by the operating system?

I would really appreciate any help I could get!

Disk encryption with automatic reboot and no-network

I am using cryptsetup with LUKS to encrypt a data drive, separate from the system drive, under Ubuntu 16.04. The issue I am facing is that this system will also be required to start itself up automatically after a power loss. The other constraint is that the system cannot be connected to a network.

Essentially, the only security risk with respect to the data is the case where someone physically steals the system with the HD on board. Of course I can provide an associated key for the drive, but given that I have no network access and yet still require unattended rebooting, I'm a bit lost on how to proceed.

I'm looking for general thoughts on how to handle such a situation. Perhaps there are physical security solutions (e.g. a self-destructing USB key, though who knows upon what condition it would trigger, given my requirements!) that might be helpful. Or really any comments from anyone who has faced the same constraints: an encrypted drive with unattended reboot and no network. Maybe I'm thinking about the problem the wrong way?

Thank you in advance.

How to manage disk space allocation for materialized views?

Summary: I have materialized views in Oracle 11g that seem to hog disk space, unlike normal tables, which mark rows as deleted and whose stats eventually show that space as free (still allocated to the table, allowing reuse). Tablespace usage only grows for the materialized views, unlike the stats for the origin tables. Tested in Oracle 12c with the same results. How do I ensure the MVs reuse the space from deleted rows?

What have I done? I have these partitioned materialized views set up in a separate schema and a separate tablespace from the origin tables (I know the partitions could be created dynamically; call it technical debt).

CREATE MATERIALIZED VIEW replication_schema.origin_table
PARTITION BY RANGE (tbl_timestamp)
(
    PARTITION tbl_before_2016 VALUES LESS THAN (TO_TIMESTAMP('2016-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
    PARTITION tbl_2016_01     VALUES LESS THAN (TO_TIMESTAMP('2016-02-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
    PARTITION tbl_2016_02     VALUES LESS THAN (TO_TIMESTAMP('2016-03-01 00:00:00','YYYY-MM-DD HH24:MI:SS')),
    ...
    PARTITION tbl_after_2025  VALUES LESS THAN (MAXVALUE)
)
REFRESH FORCE ON DEMAND
START WITH SYSDATE NEXT sysdate + 1/1440
AS SELECT * FROM origin_schema.table;

They have some indexes on them as well, some global and some local.

CREATE INDEX tbl_account_index ON replication_schema.origin_table (tbl_account DESC) LOCAL;
CREATE INDEX tbl_column1_index ON replication_schema.origin_table (tbl_column1 DESC) LOCAL;
CREATE INDEX tbl_column2_index ON replication_schema.origin_table (tbl_column2 DESC) LOCAL;
CREATE INDEX tbl_column3_index ON replication_schema.origin_table (tbl_column3 DESC);
CREATE INDEX tbl_column4_index ON replication_schema.origin_table (tbl_column4 DESC);

Most of the time they get new rows (about 4M/month), but users have set up a process that deletes old rows from the origin tables every two weeks. It can delete up to 500K to 1M rows from each replicated table each time.

There are seven materialized views in this schema. Each one extracts data from one origin table.

What we see is that, contrary to what happens with the origin tables, the space reported as free in the dba_* views does not change over time, and tablespace usage for these materialized views only grows.
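
For reference, this is the kind of per-segment check I've been watching (a sketch; dba_segments is a standard dictionary view, and the owner is the schema from the DDL above):

SELECT segment_name, partition_name,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'REPLICATION_SCHEMA'
ORDER  BY bytes DESC;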

If I wait a while after deleting rows and run this query:

select df.tablespace_name "Tablespace",
       totalusedspace "Used MB",
       (df.totalspace - tu.totalusedspace) "Free MB",
       df.totalspace "Total MB",
       round(100 * ((df.totalspace - tu.totalusedspace) / df.totalspace)) "Pct. Free"
from   (select tablespace_name, round(sum(bytes) / 1048576) TotalSpace
        from   dba_data_files
        group  by tablespace_name) df,
       (select round(sum(bytes) / (1024 * 1024)) totalusedspace, tablespace_name
        from   dba_segments
        group  by tablespace_name) tu
where  df.tablespace_name = tu.tablespace_name
and    df.totalspace <> 0;

It shows an increase in the Free MB column (space in dba_data_files minus the allocation reported in dba_segments) for the origin tablespace, but the Used MB for the replication tablespace never decreases; it only increases with new rows (over three years now):

Tablespace       Used MB   Free MB   Total MB   Pct. Free
SYSTEM           491       9         500        2
SYSAUX           1628      162       1790       9
UNDOTBS1         0         9645      9645       100
ORIGIN_DATA      2705      1391      4096       34
ORIGIN_REP_DATA  1975      2121      4096       52

That tablespace only holds these materialized views. There’s no other object there being used.

I tried the Segment Advisor to see what I could do:

variable id number;
begin
  declare
    name   varchar2(100);
    descr  varchar2(500);
    obj_id number;
  begin
    name  := 'REPCHECK';
    descr := 'Replication advisory';

    dbms_advisor.create_task (
      advisor_name => 'Segment Advisor',
      task_id      => :id,
      task_name    => name,
      task_desc    => descr);

    dbms_advisor.create_object (
      task_name    => name,
      object_type  => 'TABLE',
      attr1        => 'REPLICATION_SCHEMA',
      attr2        => 'ORIGIN_TABLE',
      attr3        => NULL,
      attr4        => NULL,
      attr5        => NULL,
      object_id    => obj_id);

    dbms_advisor.set_task_parameter (
      task_name    => name,
      parameter    => 'recommend_all',
      value        => 'TRUE');

    dbms_advisor.execute_task(name);
  end;
end;
/
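
To read back what the task found, I query the standard advisor views (a sketch; REPCHECK is the task name from the block above):

SELECT f.message
FROM   dba_advisor_findings f
WHERE  f.task_name = 'REPCHECK';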

And it says:

Perform re-org on the origin_table object, estimated savings is xxx bytes

If I try querying the recommendations through the procedure:

select tablespace_name,
       allocated_space,
       used_space reclaimable_space
from   table(dbms_space.asa_recommendations('TRUE', 'TRUE', 'ALL'))

It returns:

TABLESPACE_NAME   ALLOCATED_SPACE   RECLAIMABLE_SPACE
ORIGIN_REP_DATA   100663296         38419844

But I only get errors when trying to run the SHRINK SPACE or COMPRESS options:

ORA-10635: Invalid segment or tablespace type
10635. 00000 - "Invalid segment or tablespace type"
*Cause:  Cannot shrink the segment because it is not in an auto segment
         space managed tablespace or it is not a data, index or lob segment.
*Action: Check the tablespace and segment type and reissue the statement.
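
Since ORA-10635 apparently means the segment does not live in an ASSM tablespace (or isn't a data/index/lob segment), I suppose the first thing to check is the tablespace's segment space management setting; a sketch, using the standard dictionary view:

SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = 'ORIGIN_REP_DATA';
-- SHRINK SPACE requires this to be AUTO.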

Long story short: what can I do to avoid wasting disk space on these materialized views? How do I perform maintenance on them? Should I drop and recreate them? Datafile usage in the tablespace is growing by about 10 GB per month, and I'm running out of time (and space). Thanks.

What could be the cause for my high disk IO latency (from sys.dm_io_virtual_file_stats DMV)?

These are the results of querying the sys.dm_io_virtual_file_stats DMV (I added some calculated fields for clarity):

[screenshot: sys.dm_io_virtual_file_stats results with calculated latency fields]
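
(For context, the screenshot comes from a query along these lines; the average-latency columns are the calculated fields I added, everything else is straight from the DMV:)

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file_name,
       mf.type_desc,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;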

I’m finding certain basic operations in Database 7 are bottlenecking core operations I have running in Database 6.

For example, I have a table in Database 7 that isn't used anywhere and has a lot of rows (about 300 million). In Database 6, a bulk insert of hundreds of thousands of records happens every hour throughout the day. If I run a simple SELECT COUNT(1) FROM Database7.dbo.UnusedTableInDatabase, it takes about 10 minutes to return and appears to eat up a lot of the server's resources, IO in particular. My continuously running bulk-insert job on Database 6 slows to a crawl and a backlog starts to build up.

One thing I know is that Database 7 has its data and log files on the same logical drive, whereas Database 6 is set up appropriately, with its data and log files on separate drives.

Where should I start digging next to try to resolve these issues?

How do crypto coprocessors securely decrypt a disk without allowing bus sniffing?

I’m trying to understand how using a crypto co-processor chip can securely decrypt a disk without someone getting the decryption key by sniffing the bus it communicates on or loading the disk onto another computer and viewing the contents that way. Specifically, I’m trying to figure out how this works when someone has physical access to the device and the device needs to be decrypted without a password.

An example would be an ATM with an embedded device that only boots an OS if it is running on trusted hardware with a crypto chip. In this case it is passwordless (there is no login needed to use the ATM terminal). A malicious actor can get physical access to the board, but shouldn't be able to sniff the bus between the crypto coprocessor and the main processor, nor remove the SD card and view its contents on a separate computer.

Consider the following situation:

  • Embedded system running Linux with a crypto chip that communicates over I2C
  • The disk is removable media (such as an SD card) with full-disk encryption
  • The device is passwordless, but only runs on the trusted hardware
  • The removable media cannot be loaded into another device and analyzed
  • The device/keys can be provisioned in a secure environment

Questions:

  • How can the decryption key be transmitted across the I2C bus without being intercepted?
    • I assume it uses public-key encryption, but how is the private key on the disk side kept secret?
  • Can this setup work if the removable media is not paired with a specific crypto chip?
    • For example, the device gets a new SD card (which can still be decrypted with the crypto chip).

What is the PortableBaseLayer Disk?

I recently looked in the Disk Management tool, and I noticed a (virtual) disk that I didn’t recognize. The volume’s name is “PortableBaseLayer.”

I also noticed it while using the manage-bde -status command. It concerned me, as my whole disk is supposed to be encrypted by BitLocker. Please see the images below.

Can anyone tell me anything about this?

[screenshot: Disk Management showing the PortableBaseLayer volume]

[screenshot: manage-bde -status output]