Understanding UniData Hashed and Dynamic Files with Resize and Memresize

Jonathan Smith

March 13, 2019

UNIDATA HASHED FILES AND OVERFLOW

In this first section we discuss the two main types of UniData hashed file (static and dynamic) and the two types of overflow that can occur in these files.

The objective of file resizing is to minimize overflow, since overflow is a performance overhead.

32-BIT V 64-BIT DYNAMIC HASHED FILES

32-Bit

A UniData static file, or a sub-file of a dynamic file, is limited to 2GB in size.

Theoretically a UniData dynamic file can grow to 500GB, but the internal management of these multiple sub-files requires complicated code and multiple file I/O operations, and hence reduces the performance of the files.

The block size of the data file is limited to 16K, and as data records are getting larger and larger this is a further performance limitation of the 32-Bit model.

64-Bit

The 64-Bit data model introduced at UniData 8.1 resolves these issues, so that data files have virtually no restrictions on file size or block size. The new maximum physical file size for a single file is 8 Exabytes (2^63 bytes). For dynamic files this gives a maximum size between 8 and 16 Exabytes, depending on how the data is distributed between the primary sub-file and the first overflow sub-file.

  • A 64-Bit dynamic file has only one primary sub-file (dat001) and one overflow sub-file (over001).
  • The length of a single record is still limited to 2G-1.
  • The modulo is limited to 2G-1.
  • The block size is still restricted to 16K for 32-Bit files.
  • The block size is limited to 2G-1 for 64-Bit files.

The udtconfig parameters MAX_FLENGTH and STATIC_GROWTH_WARN_SIZE have no effect on 64-Bit Files.

UNIDATA STATIC HASHED FILES

A UniData Static Hashed file has a set modulo and set group size. These are used to determine the primary space available to a file. If the primary space available is exceeded, then overflow groups are added to the file as needed.

As more overflow groups are added, the performance of the file degrades. Even if all the records in an overflow group are subsequently deleted, the overflow group itself is not deleted.

To claim back unused overflow blocks or to change the primary space available to the file, the file will need to be resized.

UNIDATA DYNAMIC HASHED FILES

A UniData Dynamic File will increase and decrease (split and merge) its modulo to dramatically reduce the need for overflow groups should the file outgrow its original primary space.

As with a static file, if an overflow group is added and all the records in the overflow group are subsequently deleted then the overflow group itself is not deleted.

To claim back unused overflow blocks the file needs to go through the process of being resized.

Certain parameters of a dynamic file can be changed without resizing the file; however, they only take effect from the point at which they are changed and are not retrospectively applied. If you wish these parameters to be applied retrospectively, or if other file characteristics such as the hash type or split type are to be changed, then the file will need to be resized.

UNIDATA GROUP STRUCTURE

A UniData Group is divided into two parts. The first part of the group contains the keys to records including the offset information to access the data record itself. The second part of the group contains the data records.


Level 1 Overflow

This occurs when the space needed to store a data record in the group is exceeded. In this case the data record is moved to an overflow group, but the key remains in the primary group.

This is a performance overhead, but it may not be totally avoidable.

Level 2 Overflow

This occurs when the key to a record cannot be stored in the primary group and must be stored in an overflow group, resulting in some of the keys and all of the data residing in overflow blocks.

This carries a larger performance overhead than level 1 overflow.

Benefits of the Group Structure

Operations that require only the record key are optimized.

  • SELECT of a file based only on the record key
  • COUNT of records in a file
  • READ / WRITE operations
    • After hashing to the primary group, only that group needs to be read and searched (assuming no Level 2 overflow) to find the key
    • Only the keys need to be scanned, not the keys and data
    • Level 1 overflow requires a secondary read of the overflow file to return the record data, however the offset information stored with the record id results in a direct read of the overflow file.
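
To make the two-part layout concrete, here is a minimal UniBasic sketch that models a group as a dynamic array: field 1 stands in for the key part (record ids) and field 2 for the location information held alongside each key. The layout and values are illustrative assumptions, not UniData's actual on-disk format.

* Toy model of a group's two-part layout (an illustration only).
* Field 1 models the key part, field 2 the location information.
GROUP = ''
GROUP<1> = 'K1':@VM:'K2':@VM:'K3'
GROUP<2> = 'PRIMARY':@VM:'OVER001 BLOCK 42':@VM:'PRIMARY'
* A key-only operation (SELECT on ids, COUNT) scans field 1 only:
PRINT 'Record count = ':DCOUNT(GROUP<1>,@VM)
* A READ scans the keys, then follows the stored location directly,
* costing one extra read when the data sits in level 1 overflow:
FOR V = 1 TO DCOUNT(GROUP<1>,@VM)
   IF GROUP<1,V> = 'K2' THEN PRINT 'K2 data location: ':GROUP<2,V>
NEXT V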

HOW DO UNIDATA DYNAMIC HASHED FILES WORK?

As stated previously, a UniData Dynamic File will increase and decrease (split and merge) its modulo to dramatically reduce the requirement of overflow groups, should the file outgrow its original primary space. So how does this work?

HOW DOES UNIDATA DECIDE WHICH GROUP IS TO BE SPLIT?

When the decision is made to split a group, the group that is split is determined by the value of the split pointer. Almost invariably this will NOT be the group that is currently being updated.

How is the Split Pointer calculated?

When a Dynamic File is first created, the split pointer will point to the first group in the file. Each time a new group is added (i.e. a split occurs), the split pointer is moved to the next group. This continues until the Base Modulo changes, at which point the split pointer is reset to point at the first group again.

  • When a file is created, the Base Modulo will match the creation modulo.
  • When a group is added to the file, the current modulo increases by 1.
  • When the current modulo reaches twice the Base Modulo, the Base Modulo is changed to the current modulo and the split pointer is reset to the first group.

We can provide a program that will tell you which group will be split next by looking at the current file. The example below shows the split pointer progression over the first 15 splits of a dynamic file initially created with a modulo of 3.

Base Modulo   Current Modulo   Next Group to Split (Split Pointer)
     3              3                    1
     3              4                    2
     3              5                    3
     6              6                    1
     6              7                    2
     6              8                    3
     6              9                    4
     6             10                    5
     6             11                    6
    12             12                    1
    12             13                    2
    12             14                    3
    12             15                    4
    12             16                    5
    12             17                    6
    12             18                    7
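
This progression follows mechanically from the three rules above. A minimal UniBasic sketch (the variable names are mine, not UniData internals) reproduces the table:

* Reproduce the split pointer progression for a file created at modulo 3.
BASE.MOD = 3
CURR.MOD = 3
SPLIT.PTR = 1
FOR I = 1 TO 16
   PRINT BASE.MOD, CURR.MOD, SPLIT.PTR
   * Each split adds a group and advances the split pointer...
   CURR.MOD = CURR.MOD + 1
   SPLIT.PTR = SPLIT.PTR + 1
   * ...until the current modulo doubles the base modulo, at which
   * point the base modulo catches up and the pointer resets.
   IF CURR.MOD = BASE.MOD * 2 THEN
      BASE.MOD = CURR.MOD
      SPLIT.PTR = 1
   END
NEXT I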

WHAT HAPPENS WHEN A GROUP IS SPLIT?

An internal algorithm is applied to the record ids in the group being split; some records remain in that group and some are moved to the new group.
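
UniData does not publish the algorithm, but the split pointer mechanics above are characteristic of linear hashing, in which a record moves only if rehashing its id against twice the base modulo lands it in the new group. The UniBasic sketch below illustrates that idea under that assumption; H is a stand-in for the internal hash value of a record id, not a real UniBasic function.

* Sketch of a linear-hashing style split. This is an assumption about
* the internal algorithm, shown for illustration only.
H = 7              ;* stand-in for the internal hash value of an id
BASE.MOD = 3       ;* base modulo at the time of the split
OLD.GRP = MOD(H, BASE.MOD) + 1        ;* group the id hashed to (1-based)
NEW.GRP = MOD(H, BASE.MOD * 2) + 1    ;* group after the base doubles
IF NEW.GRP = OLD.GRP THEN
   PRINT 'Record stays in group ':OLD.GRP
END ELSE
   PRINT 'Record moves to group ':NEW.GRP
END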

HOW DOES UNIDATA DECIDE WHEN A GROUP IS SPLIT?

The decision to split (or merge) a group is based on the Split Policy and the associated Split and Merge load parameters. UniData provides three split types: KEYONLY, KEYDATA and, starting at UniData 8, WHOLEFILE.

KEYONLY

The decision to split is based upon the percentage loading of the Key Space used in the group that is being updated. The default values are defined in the udtconfig file as SPLIT_LOAD and MERGE_LOAD. These can be overridden when the file is created, resized or changed in flight. An aggressive (small) SPLIT_LOAD is recommended to minimize level one overflow.

KEYDATA

The decision to split is based upon the percentage loading of the whole group that is being updated, and is hence more intuitive than KEYONLY. The default values are defined in the udtconfig file as SPLIT_LOAD and MERGE_LOAD. These can be overridden when the file is created, resized or changed in flight. If the average record size is close to the group size, then excessive splitting may occur.

WHOLEFILE

Both KEYONLY and KEYDATA use the loading of the group that is being updated to decide if a split should occur. This, along with the fact that it is extremely unlikely that the group being updated will be the one to split, has led to cases where UniData files have a lot of empty space because several overloaded groups were being consistently updated.

The decision to split for WHOLEFILE is based on the loading of the file as a whole, not on the group that is being updated; the aim is to eliminate unnecessary splitting. The default values are defined in the udtconfig file as WHOLEFILE_SPLIT_LOAD and WHOLEFILE_MERGE_LOAD. These can be overridden when the file is created, resized or changed in flight.
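
The three policies differ only in which loading figure is compared with the configured split load. A schematic UniBasic sketch, where the three percentage variables are hypothetical inputs used purely for illustration:

* Schematic comparison of the three split policies.
SPLIT.TYPE = 'WHOLEFILE'
SPLIT.LOAD = 60       ;* configured split load percentage
KEY.PART.PCT = 75     ;* key-space loading of the group being updated
GROUP.PCT = 50        ;* whole-group loading of the group being updated
FILE.PCT = 55         ;* loading of the file as a whole
BEGIN CASE
   CASE SPLIT.TYPE = 'KEYONLY'
      DO.SPLIT = (KEY.PART.PCT > SPLIT.LOAD)
   CASE SPLIT.TYPE = 'KEYDATA'
      DO.SPLIT = (GROUP.PCT > SPLIT.LOAD)
   CASE SPLIT.TYPE = 'WHOLEFILE'
      DO.SPLIT = (FILE.PCT > SPLIT.LOAD)
END CASE
IF DO.SPLIT THEN PRINT 'Split the group at the split pointer'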

UDT_SPLIT_POLICY

Prior to the introduction of WHOLEFILE, the udtconfig parameter UDT_SPLIT_POLICY was added to reduce unnecessary splitting of KEYONLY and KEYDATA dynamic files. It determines whether a dynamic file splits when an existing record is rewritten to the file without any changes.

If the value of this parameter is set to 1, rewriting an existing record to an overloaded group only triggers a split if the record length changes. If the value of this parameter is set to 0, any update to an existing record in a dynamic file group that was already over the defined split load triggers a split for the file.
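
In outline, the check behaves like the following UniBasic sketch, where UDT.SPLIT.POLICY is an ordinary variable standing in for the udtconfig value and the record lengths are hypothetical:

* Sketch of the UDT_SPLIT_POLICY check for a rewrite of an existing
* record into a group that is already over the split load.
UDT.SPLIT.POLICY = 1
OLD.LEN = 100 ; NEW.LEN = 100        ;* hypothetical record lengths
IF UDT.SPLIT.POLICY = 1 THEN
   DO.SPLIT = (NEW.LEN # OLD.LEN)    ;* split only if the length changed
END ELSE
   DO.SPLIT = 1                      ;* any update triggers a split
END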

Default Split Type

The udtconfig parameter DEFAULT_SPLIT_TYPE is used to control which Split policy is used if none is specified when the file is created. In UniData 8 this is set to 3 for WHOLEFILE. (1 is KEYONLY and 2 is KEYDATA).

RESIZE V MEMRESIZE

UniData provides the two tools RESIZE and memresize to resize files. These are NOT interchangeable, and they are NOT replacements for each other.

The benefit of memresize over RESIZE is speed. This is partly achieved by using a configurable memory buffer to do as much of the rewriting of records to the new file in memory as possible, reducing the number of disk writes needed to populate the new file. The rest of the speed improvement comes from bypassing several of the other tasks that RESIZE performs.

The main aim of this document is to provide enough understanding both of how dynamic files work and the limitations of memresize to allow an informed decision as to which tool to use.

LIMITATIONS OF MEMRESIZE

Whereas RESIZE uses standard dynamic file updating algorithms when populating the new file, memresize uses a different methodology.

While RESIZE allows for splitting and overflow file creation as needed, memresize does not.

There are two phases to memresize or RESIZE. First the new RSZ file is created, using the file characteristics specified by the resizing command (or assumed where not specified). In the second phase, records are read from the original file and written to the newly created RSZ file. During the CREATE.FILE phase, all normal file creation rules are obeyed for both RESIZE and memresize. The udtconfig MAX_FLENGTH parameter is used to determine the size of the datnnn files for 32-Bit files.

Since memresize uses the memory buffer and performs block updates to the new file during the file population phase, the following dynamic file update activities cannot occur:

  • Group splitting: the modulo of the file does not change.
  • Creation of new overflow files.
  • Reallocation of a group's records from one overflow file to another.

All overflow blocks associated with a single group must be in the same overnnn file.

During normal updates (i.e. updates not carried out by memresize or RESIZE), if a group needs another overflow block and there are not any available blocks in the current overflow file it is using, all overflowed records for that group are re-written to another overnnn file that has enough blocks available.

MAX_FLENGTH is not observed as overnnn files are populated, but there is the hard limit at 2GB for 32-Bit Files. There is no problem with having overnnn files that are larger than MAX_FLENGTH.

How is the number of overnnn files for 32-Bit files determined when the memresize command is run? You can specify an OVERFLOW file count on the command line for memresize. This will be the specific number of overnnn files created. For example, to create 100 overnnn files:

memresize ORDER.HISTORY 18769901,4 OVERFLOW 100 MEMORY 1024000

If you don’t use the OVERFLOW keyword to specify the number of overnnn files, memresize will use the current file size of all overflow files and estimate how many overflow files are needed.

During the RSZ file population phase of memresize, the overflow files created are assigned to sets of groups in the primary file. They are divided up evenly. For example: if there are just 9 groups in a file and we created 3 overflow files, then the first 3 groups (0,1,2) will be assigned to the first overflow file. The next 3 groups (3,4,5) will be assigned to the 2nd overflow file, and the last 3 groups will be assigned to 3rd overflow file.

With a modulo of 18,769,901 and 32 overnnn files, the first 586,559 groups will all use over001 and ONLY over001 to store overflow associated with those groups.
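
That figure of 586,559 is simply the modulo divided across the 32 overflow files. A short UniBasic sketch of the mapping, assuming plain integer division (the exact rounding used internally is an assumption):

* Which overnnn file serves a given group during memresize?
MODULO = 18769901
NUM.OVER = 32
GROUPS.PER.OVER = INT(MODULO / NUM.OVER)       ;* 586,559
GRP = 600000                                   ;* sample group number (0-based)
OVER.NO = INT(GRP / GROUPS.PER.OVER) + 1
IF OVER.NO > NUM.OVER THEN OVER.NO = NUM.OVER  ;* remainder lands in the last file
PRINT 'Group ':GRP:' overflows to over':FMT(OVER.NO,'R%3')

The questions below work through a real case where this fixed per-file assignment caused a 32-Bit memresize to fail.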

Why did the memresize command fail?

An attempt to add a block to over010 would have breached the 2GB maximum file size for a 32-Bit dynamic file part file.

There was plenty of capacity on other overnnn files. Why wasn’t this used?

As noted above, during memresize a single overnnn file is assigned to a specific range of groups in the file. All overflow associated with these groups must be able to be contained within the 2GB limit for this overnnn file.

Why did over010 fill up when other overnnn files didn’t?

There were either more records hashed to the groups using over010, or the groups assigned to over010 happened to contain a larger number of 'large' records. A 'large' record is one that will not fit in the file's block size (4096 bytes in this case). All large records are stored in overflow blocks.

What can be changed to allow memresize to complete without filling up any overnnn files?

There are three main options:

  • Specify more overnnn files using the OVERFLOW keyword.
  • If the problem is large records in the groups associated with over010, use a larger file block size.
  • If the problem is a lumpy distribution of keys across the file, try a different hash type. You can review this using ANALYZE.FILE.

WORKED EXAMPLE TO DEMONSTRATE DIFFERENCES

I used a UniBasic program to populate three KEYONLY data files with almost 3,000,000 records, each 100 bytes long. The keys were of the form nnn!nnnnnn, ranging from 001!000001 to 003!999999, and hash type 0 was used. The default split and merge ratios were used, and each file was originally created with a modulo of 5003 and a separation of 1.
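
The load program was along these lines; this is a reconstruction from the description above rather than the original source, run once per test file (the filename SCP.TEST.KO is taken from the listings below):

* Reconstruction of the load: ~3,000,000 records of 100 bytes with
* keys 001!000001 through 003!999999.
OPEN 'SCP.TEST.KO' TO F.TEST ELSE STOP 'Cannot open SCP.TEST.KO'
REC = STR('X', 100)                  ;* a 100-byte record body
FOR I = 1 TO 3
   FOR J = 1 TO 999999
      ID = FMT(I,'R%3'):'!':FMT(J,'R%6')
      WRITE REC ON F.TEST, ID
   NEXT J
NEXT I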

Viewed at the directory level, the original file appeared as:

total 775008
drwxr-xr-x    2 root     system          256 Feb 25 07:03 .
drwxr-xr-x   16 root     users          4096 Feb 25 07:03 ..
-rw-r--r--    1 root     system    132252672 Feb 25 07:08 dat001
-rw-r--r--    1 root     system    264504320 Feb 25 07:08 over001

I then chose a new modulo of 14653 with a group/block size of 3K. To keep the files as compact as possible I used SPLIT = 9 and MERGE = 4 for KEYONLY, SPLIT = 95 and MERGE = 45 for KEYDATA, and SPLIT = 88 and MERGE = 44 for WHOLEFILE.

The results were as follows (KEYONLY = KO, KEYDATA = KD, WHOLEFILE = WF).

!ls -al SCP.TEST.KO
total 1289320
drwxr-xr-x    2 root     system          256 Feb 25 07:48 .
drwxr-xr-x   16 root     users          4096 Feb 25 07:49 ..
-rw-r--r--    1 root     system    330061824 Feb 25 07:48 dat001
-rw-r--r--    1 root     system    330061824 Feb 25 07:48 over001
!ls -al SCP.TEST.KD
total 1289344
drwxr-xr-x    2 root     system          256 Feb 25 07:49 .
drwxr-xr-x   16 root     users          4096 Feb 25 07:49 ..
-rw-r--r--    1 root     system    330061824 Feb 25 07:49 dat001
-rw-r--r--    1 root     system    330061824 Feb 25 07:49 over001
!ls -al SCP.TEST.WF
total 1289344
drwxr-xr-x    2 root     system          256 Feb 25 08:01 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:01 ..
-rw-r--r--    1 root     system    330061824 Feb 25 08:01 dat001
-rw-r--r--    1 root     system    330061824 Feb 25 08:01 over001

The example above has only one overnnn file, but the profile of the behaviour would be repeated if multiple overnnn files were present. All three results are also effectively identical, as you would expect given that memresize performs no splitting: the choice of split type has no effect during the resize itself.

I then repeated the test using RESIZE instead of memresize:

!ls -al SCP.TEST.KO
total 1289320
drwxr-xr-x    2 root     system          256 Feb 25 08:08 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:12 ..
-rw-r--r--    1 root     system    660120576 Feb 25 08:09 dat001
-rw-r--r--    1 root     system         3072 Feb 25 08:09 over001
!ls -al SCP.TEST.KD
total 1396248
drwxr-xr-x    2 root     system          256 Feb 25 08:09 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:12 ..
-rw-r--r--    1 root     system    655454208 Feb 25 08:10 dat001
-rw-r--r--    1 root     system     59354112 Feb 25 08:10 over001
!ls -al SCP.TEST.WF
total 1357400
drwxr-xr-x    2 root     system          256 Feb 25 08:10 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:12 ..
-rw-r--r--    1 root     system    413945856 Feb 25 08:12 dat001
-rw-r--r--    1 root     system    280971264 Feb 25 08:12 over001

I then ran RESIZE one more time to claim back unused overflow space, since once an overflow block has been added during a file conversion or normal running, it is not returned until the file is resized.

!ls -al SCP.TEST.KO
total 1289416
drwxr-xr-x    2 root     system          256 Feb 25 08:13 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:22 ..
-rw-r--r--    1 root     system    660123648 Feb 25 08:19 dat001
-rw-r--r--    1 root     system         3072 Feb 25 08:13 over001
!ls -al SCP.TEST.KD
total 1280320
drwxr-xr-x    2 root     system          256 Feb 25 08:20 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:22 ..
-rw-r--r--    1 root     system    655515648 Feb 25 08:21 dat001
-rw-r--r--    1 root     system         3072 Feb 25 08:20 over001
!ls -al SCP.TEST.WF
total 808640
drwxr-xr-x    2 root     system          256 Feb 25 08:21 .
drwxr-xr-x   16 root     users          4096 Feb 25 08:22 ..
-rw-r--r--    1 root     system    413964288 Feb 25 08:22 dat001
-rw-r--r--    1 root     system         3072 Feb 25 08:22 over001

I hope I’ve helped you understand the limits of memresize.