How can I check index fragmentation in the quickest way possible?

Checking index fragmentation in my database seems unreasonably slow. Whether I use the DMV sys.dm_db_index_physical_stats (for a specific database, table, or even index) or the SSMS Index Properties window to look at Fragmentation for a specific index, it takes a really long time.

For example, using the Index Properties window will take upwards of 5 minutes to open up for a single index on my largest (~20 billion rows) table.

I do want to push to implement partitioning, but until then I have to support an existing index maintenance job, and I'm not sure how we can even check index fragmentation when one index on our heaviest table takes about 5 minutes to analyze. (Each of our tables has at least a few indexes.)

Here's a case where the Index Properties window took so long that I think it timed out and returned nothing:
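One way to make the check cheap enough for a maintenance job is to query the DMV in LIMITED mode, which scans only the parent-level pages of each B-tree rather than every leaf page, so it is typically far faster than the SAMPLED or DETAILED scans. A sketch (assuming SQL Server; the 1000-page filter is an arbitrary cutoff for illustration):

```sql
-- LIMITED mode scans only the parent-level pages of each B-tree,
-- so it is much faster than SAMPLED or DETAILED.
-- The page_count > 1000 filter is an arbitrary illustrative cutoff
-- to skip tiny indexes that are not worth rebuilding.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON  i.object_id = ips.object_id
  AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

avg_fragmentation_in_percent is still populated in LIMITED mode; only the page-density columns (e.g. avg_page_space_used_in_percent) come back NULL.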

Is IPv4 fragmentation done at the source, at routers, or both?

Is IPv4 fragmentation done at the source or at routers? According to what I read, fragmentation can be avoided at the source by segmenting wisely at the transport layer to match the MTU of the source network. We can also avoid fragmentation at routers by using PMTUD (Path MTU Discovery), where ICMP messages are used to determine the smallest MTU along the path from source to destination; segmentation is then done at the source to match that smallest MTU. So, if we don't use the methods above, fragmentation is done at both the source and at routers as required. Is that correct?
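The mechanics at any fragmenting hop (source or router) can be sketched in a few lines. This is illustrative Python, not real stack code; the key rules are that every fragment except the last carries a multiple of 8 data bytes, and the offset field counts 8-byte units:

```python
def fragment(payload_len, mtu, ip_header=20):
    """Split an IPv4 payload into (offset_units, data_len, more_fragments).

    Illustrative sketch: offsets are in 8-byte units, so the data in
    every fragment except the last is rounded down to a multiple of 8.
    """
    max_data = (mtu - ip_header) // 8 * 8  # largest legal fragment payload
    frags = []
    offset = 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = offset + size < payload_len  # MF flag: more fragments follow
        frags.append((offset // 8, size, more))
        offset += size
    return frags

print(fragment(4000, 1500))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```

A 4000-byte payload over a 1500-byte MTU link thus becomes three fragments; a router further down a smaller-MTU link may fragment those again, which is exactly what PMTUD tries to avoid.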

Rearrange items in order to reduce fragmentation and wasted space

I have a segment with some offsets at irregular intervals

There are items of various lengths inside. Items cannot be placed arbitrarily; instead, their left side must align with some offset.

Items are free to extend past offsets.

As you can see in this image, item number 3 is long enough that it goes through an offset, shown with a dotted line.

I also have some special type of offset that I’ll call “barrier” that items cannot go past and cannot be placed on:

One last constraint is that items cannot overlap each other:

Items can be moved one at a time. So I can pick up an item and place it somewhere else as long as no constraint is violated.

I'm trying to come up with an algorithm/solver/optimizer that finds a good-enough sequence of steps to compact these items and reduce fragmentation. It follows that this procedure will reduce the empty space between items and offsets:

Can you give some suggestions on how you would tackle a problem like this, point me in the right direction, share some ideas, name algorithms to take inspiration from, etc.?
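As a starting point, this resembles one-dimensional bin packing and memory compaction. A greedy "place each item at the leftmost legal offset" pass could be sketched as follows (all names illustrative; this computes a target layout, not the one-move-at-a-time sequence to reach it):

```python
def compact(items, offsets, barriers):
    """Greedy left-compaction sketch.

    items:    item lengths, processed in their current left-to-right order
    offsets:  sorted valid start positions (barriers are offsets too)
    barriers: positions items may neither start on nor extend across
    Returns the chosen start position for each item (None if no slot fits).
    """
    placed = []   # (start, end) intervals of already-placed items
    result = []
    for length in items:
        for start in offsets:
            end = start + length
            # an item may not sit on a barrier or span one
            if any(start <= b < end for b in barriers):
                continue
            # items may not overlap each other
            if any(start < pe and ps < end for ps, pe in placed):
                continue
            placed.append((start, end))
            result.append(start)
            break
        else:
            result.append(None)
    return result

# two items of lengths 3 and 5; offsets at 0, 4, 8, 12; barrier at 8
print(compact([3, 5], [0, 4, 8, 12], {8}))  # [0, 12]
```

For the move sequence itself (one item at a time, no constraint violated along the way), local search over such target layouts, or simulated annealing with "wasted space" as the cost, would be natural things to try; defragmentation heuristics from memory allocators are another source of inspiration.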

Fragmentation of page-loading status indicators across multiple design patterns – what is the best practice for page loading?

In one of the recent updates to Google Chrome, we have seen yet another method of dealing with page loading status with the introduction of the loading animation in the favicon area of the browser tab (by the way, the Firefox browser uses the side to side indeterminate loading state animation made notorious by LinkedIn).


As far as I can tell, this makes at least five or six different ways to indicate a loading status on a page, many of which occur simultaneously and make the current state of the page content rather confusing for users.

So the ones that I have seen include:

  • Browser tab favicon area loading indicator seen in image above (is there a name for this?)
  • Mouse cursor loading indicator
  • Page header loading progress indicator
  • Modal/pop-up page loading progress indicator
  • Call-to-action button progress indicator animation
  • Bottom of the page loading indicator (e.g. when infinite scrolling is implemented)

Assuming that there is a 'best practice' for communicating page content status, is there a reason why there need to be so many different ways of indicating to the user that the page has not completely loaded? Doesn't this create a very inconsistent user experience and add to user frustration?

Android fragmentation: What’s designer’s role? [on hold]

I'm working on research into Android designer-developer collaboration, and I'm particularly interested in learning whether and how designers and developers work together to solve Android fragmentation problems (e.g., when fragmentation problems are brought up and how these problems are communicated between designers and developers).

But before jumping into my questions, I want to first validate the assumption that designers are actively involved, together with developers, in dealing with fragmentation issues. Is that true?

  • Do designers even care about fragmentation?

  • Do developers think it’s necessary to involve designers?

  • Would designers' involvement help reduce some of the developers' workload and build a better user experience for their app?

It might differ among teams and products, so I want to open a discussion and see how everyone handles this in their work.

Thank you very much for your contribution!

Understanding paging and internal fragmentation

I am currently studying questions but am stuck on this one; I hope someone can help me understand it.

Question: Assume that we have paged virtual memory with a page size of 4 Ki bytes. Assume that each process has four segments (for example: code, data, stack, extra) and that these can be of arbitrary but given size. How much will the operating system lose to internal fragmentation?

The answer is: each segment will on average give rise to 2 Ki bytes of fragmentation. This means on average 8 Ki bytes per process. If we have, for example, 100 processes, this is a total loss of 800 Ki bytes.

My question: the 2 Ki bytes per segment is confusing, but I think that is just an average. Anyway, if we lose 8 Ki bytes per process, that would not even fit in a 4 Ki byte page; isn't that actually external fragmentation? Can someone explain the answer in a way that is easier to understand?
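The arithmetic behind the quoted answer can be laid out as a worked example. The key point (my reading of the answer, stated as an assumption): the 8 Ki bytes is not one contiguous block, but roughly 2 Ki bytes wasted inside the last page of each of the four segments, which is why it is internal fragmentation rather than external:

```python
PAGE = 4 * 1024   # 4 KiB page size, from the question
SEGMENTS = 4      # code, data, stack, extra
PROCESSES = 100

# A segment of arbitrary size fills whole pages except its last page,
# which on average is half full, wasting PAGE / 2 bytes *inside* an
# allocated page (internal fragmentation: paging leaves no gaps
# between pages, so there is no external fragmentation here).
waste_per_segment = PAGE // 2                     # 2 KiB
waste_per_process = SEGMENTS * waste_per_segment  # 8 KiB, spread over 4 pages
total_waste = PROCESSES * waste_per_process
print(total_waste // 1024)  # 800 (KiB)
```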

ZFS Heavy Write Amplification due to Free Space Fragmentation

I have set up a ZFS RAID0 pool for a PostgreSQL database. The storage and instances are AWS EC2 instances with EBS volumes.

NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pgpool   479G   289G   190G         -    70%    60%  1.00x  ONLINE  -
  xvdf  59.9G  36.6G  23.3G         -    71%    61%
  xvdg  59.9G  34.7G  25.2G         -    70%    57%
  xvdh  59.9G  35.7G  24.2G         -    71%    59%
  xvdi  59.9G  35.7G  24.2G         -    71%    59%
  xvdj  59.9G  36.3G  23.6G         -    71%    60%
  xvdk  59.9G  36.5G  23.4G         -    71%    60%
  xvdl  59.9G  36.6G  23.3G         -    71%    61%
  xvdm  59.9G  36.6G  23.2G         -    71%    61%

Previously, FRAG was at 80% on most of the devices and we suffered heavy write IOPS. As the pool was previously at 75% capacity utilization (of 400 GB), I provisioned an additional 10 GB to each device (400 GB + 80 GB). Now FRAG has dropped to 70%. One important metric is that write IOPS is much lower for the same workload.


As per the CloudWatch metrics, after the increase in EBS size, write IOPS drastically reduced from 4000 to 1200-1400 IOPS for the master PG, and from 3000 to 600 IOPS for the slave PG. I suspect this is due to how FRAG affects IO, as explained in this answer.

We have set recordsize=128K because the compressratio is much better than with recordsize=8K. I think that due to the larger recordsize, FRAG increases quickly, resulting in write amplification and heavy write IOPS. Would decreasing the record size prevent the write amplification, or is there another problem I am missing?
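One low-risk way to test a smaller recordsize without rewriting the whole pool is a sibling dataset; a sketch, where the dataset name pg16k and the 16K value are assumptions for illustration, not a recommendation:

```shell
# Illustrative sketch: recordsize only affects newly written blocks,
# so create a separate dataset and point a test PostgreSQL tablespace
# at it, rather than changing the property on the live dataset.
sudo zfs create -o recordsize=16K pgpool/pg16k
sudo zfs get recordsize,compressratio,logicalused pgpool/pg16k
```

Comparing write IOPS and compressratio between the two datasets under the same workload would show whether the smaller recordsize gives up enough compression to outweigh the reduced read-modify-write amplification.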


ubuntu@ip-10-0-1-59:~$ sudo zpool get all
NAME    PROPERTY                       VALUE                          SOURCE
pgpool  size                           479G                           -
pgpool  capacity                       60%                            -
pgpool  altroot                        -                              default
pgpool  health                         ONLINE                         -
pgpool  guid                           1565875598252756833            -
pgpool  version                        -                              default
pgpool  bootfs                         -                              default
pgpool  delegation                     on                             default
pgpool  autoreplace                    off                            default
pgpool  cachefile                      -                              default
pgpool  failmode                       wait                           default
pgpool  listsnapshots                  off                            default
pgpool  autoexpand                     on                             local
pgpool  dedupditto                     0                              default
pgpool  dedupratio                     1.00x                          -
pgpool  free                           190G                           -
pgpool  allocated                      289G                           -
pgpool  readonly                       off                            -
pgpool  ashift                         0                              default
pgpool  comment                        -                              default
pgpool  expandsize                     -                              -
pgpool  freeing                        0                              -
pgpool  fragmentation                  71%                            -
pgpool  leaked                         0                              -
pgpool  multihost                      off                            default
pgpool  feature@async_destroy          enabled                        local
pgpool  feature@empty_bpobj            enabled                        local
pgpool  feature@lz4_compress           active                         local
pgpool  feature@multi_vdev_crash_dump  enabled                        local
pgpool  feature@spacemap_histogram     active                         local
pgpool  feature@enabled_txg            active                         local
pgpool  feature@hole_birth             active                         local
pgpool  feature@extensible_dataset     active                         local
pgpool  feature@embedded_data          active                         local
pgpool  feature@bookmarks              enabled                        local
pgpool  feature@filesystem_limits      enabled                        local
pgpool  feature@large_blocks           enabled                        local
pgpool  feature@large_dnode            enabled                        local
pgpool  feature@sha512                 enabled                        local
pgpool  feature@skein                  enabled                        local
pgpool  feature@edonr                  enabled                        local
pgpool  feature@userobj_accounting     active                         local

ZFS Props

ubuntu@ip-10-0-1-59:~$ sudo zfs get all
NAME    PROPERTY              VALUE                  SOURCE
pgpool  type                  filesystem             -
pgpool  creation              Mon Oct  8 18:45 2018  -
pgpool  used                  289G                   -
pgpool  available             175G                   -
pgpool  referenced            288G                   -
pgpool  compressratio         5.06x                  -
pgpool  mounted               yes                    -
pgpool  quota                 none                   default
pgpool  reservation           none                   default
pgpool  recordsize            128K                   default
pgpool  mountpoint            /mnt/PGPOOL            local
pgpool  sharenfs              off                    default
pgpool  checksum              on                     default
pgpool  compression           lz4                    local
pgpool  atime                 off                    local
pgpool  devices               on                     default
pgpool  exec                  on                     default
pgpool  setuid                on                     default
pgpool  readonly              off                    default
pgpool  zoned                 off                    default
pgpool  snapdir               hidden                 default
pgpool  aclinherit            restricted             default
pgpool  createtxg             1                      -
pgpool  canmount              on                     default
pgpool  xattr                 sa                     local
pgpool  copies                1                      default
pgpool  version               5                      -
pgpool  utf8only              off                    -
pgpool  normalization         none                   -
pgpool  casesensitivity       sensitive              -
pgpool  vscan                 off                    default
pgpool  nbmand                off                    default
pgpool  sharesmb              off                    default
pgpool  refquota              none                   default
pgpool  refreservation        none                   default
pgpool  guid                  571000568545391306     -
pgpool  primarycache          all                    default
pgpool  secondarycache        all                    default
pgpool  usedbysnapshots       0B                     -
pgpool  usedbydataset         288G                   -
pgpool  usedbychildren        364M                   -
pgpool  usedbyrefreservation  0B                     -
pgpool  logbias               throughput             local
pgpool  dedup                 off                    default
pgpool  mlslabel              none                   default
pgpool  sync                  standard               default
pgpool  dnodesize             legacy                 default
pgpool  refcompressratio      5.07x                  -
pgpool  written               288G                   -
pgpool  logicalused           1.42T                  -
pgpool  logicalreferenced     1.42T                  -
pgpool  volmode               default                default
pgpool  filesystem_limit      none                   default
pgpool  snapshot_limit        none                   default
pgpool  filesystem_count      none                   default
pgpool  snapshot_count        none                   default
pgpool  snapdev               hidden                 default
pgpool  acltype               off                    default
pgpool  context               none                   default
pgpool  fscontext             none                   default
pgpool  defcontext            none                   default
pgpool  rootcontext           none                   default
pgpool  relatime              off                    default
pgpool  redundant_metadata    most                   local
pgpool  overlay               off                    default

“Fragmentation” of a distribution (from paper)

I’ve been reading a paper by Robert Morris (“Sets, Scales and Rhythmic Cycles; A Classification of Talas in Indian Music”) and came across a formula that I’ve found a bit tricky. He is referring to the “fragmentation” of a distribution and includes the formula below without derivation or reference. I’m pretty new to statistics, so this may be a standard formula that I’m just unaware of. However, I haven’t been able to find it in the same format online.

One feature for use in ordering talas is fragmentation. We have already grouped talas into partition classes. All talas in a particular partition class have the same fragmentation. We use the partition P as the input to a function that yields the fragmentation of the partition. Fragmentation varies between 0 and 1 and is a measure of the uniformity of a distribution—the higher the fragmentation, the more even the distribution. We calculate the fragmentation of a partition of the number N into z parts using the following formula…:

$FRAG(P) = 1 - \frac{\sum_{k=1}^{z} PAIRS(p_k)}{PAIRS(N)}$, where $PAIRS(s) = \frac{s^2 - s}{2}$, $P = \{p_1, p_2, p_3, \ldots, p_z\}$, $N = \operatorname{sum}(P)$, and $z = \operatorname{card}(P)$.

I found the formula to be much more readable in this format:

Let $P = \{p_1, p_2, p_3, \ldots, p_z\}$, $z = \operatorname{card}(P)$, and $N = \operatorname{sum}(P)$. Then
$FRAG(P) = 1 - \frac{\sum_{k=1}^{z} \frac{p_k^2 - p_k}{2}}{\frac{N^2 - N}{2}} = 1 - 2\,\frac{\sum_{k=1}^{z} \frac{p_k^2 - p_k}{2}}{N^2 - N}$

The author uses the formula with the example $ P=\{2, 2, 4\} \rightarrow N = 2 + 2 + 4 = 8$ and $ z = 3.$ This returns $ FRAG(P)=1-2(\frac{8}{56})=0.714285714…$
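Since $PAIRS(s)$ is just the binomial coefficient $\binom{s}{2}$, $FRAG(P)$ reads as the probability that two elements drawn without replacement from the $N$ fall in different parts of the partition; this form matches the finite-population version of the Gini-Simpson diversity index, which may help your search. A short sketch to check the paper's example (function names are mine):

```python
from fractions import Fraction

def pairs(s):
    # PAIRS(s) = (s^2 - s) / 2, the number of unordered pairs of s elements
    return Fraction(s * s - s, 2)

def frag(partition):
    # FRAG(P) = 1 - sum(PAIRS(p_k)) / PAIRS(N), with N = sum(P)
    n = sum(partition)
    return 1 - sum(pairs(p) for p in partition) / pairs(n)

print(frag([2, 2, 4]))  # 5/7 = 0.714285...
```

Exact arithmetic via Fraction reproduces the author's $0.714285\ldots$ for $P = \{2, 2, 4\}$.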

Does this formula (or a similar one) have a name? Are there any places where I can find some further information? More generally, what does this mean?

Thanks for the help!