UniData 8.2.1 Replication Part 1 of 2

Jonathan Smith

January 18, 2018

UNIDATA REPLICATION

UniData Data Replication is increasingly important in today’s business environments. The enhancements we’ve added in UniData 8.2 are all designed to make UniData Data Replication work better and faster in your environment. We’ve enhanced several areas; we will cover a couple in today’s post and more in future posts:

  • Field Level Replication – Only the changed fields in a record are sent to the subscriber – this post
  • Intelligent Write Queue – Enhances the speed of repeated updating of the same record on the subscriber – this post
  • Delayed Standby Replication – Allows a hot-standby system to be a specified time behind the publisher – future post
  • Asynchronous Transactional Updates – Cross Group Transactions can now be handled in an asynchronous or synchronous mode – future post
  • Replication Pacing (introduced in UniData 8.1.2) – Allows the graceful slowdown of UniData Replication to avoid Replication Disablement – future post

FIELD LEVEL REPLICATION

Starting at UniData 8.2, UniData Replication supports field level replication. Prior to Field Level Replication, the entire record was sent from the publisher to the subscriber when a record was updated on the publisher. When an update to a large record changed only a small part of the record, the full record was still transmitted from the publisher to the subscriber. If a large volume of large record updates is processed in this manner, the performance of UniData Replication can suffer.

For example: if a customer application repeatedly updated a counter in a large parameter record, each update would result in the entire record being replicated. The large volume of data being sent from the publisher to the subscriber could then cause the subscriber to ‘fall behind’ the publisher. Additionally, the disk space used by the replication logs on both the publisher and subscriber would correspondingly increase and add to the growing backlog.

Field Level Replication ensures that only the changed or declared fields are passed to the subscriber, and this is achieved using one of two methods:

  • By using the new FIELDWRITE statement and the updated WRITEV statement in UniBasic
  • By utilizing the Automatic Field Level Updates mechanism
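As a sketch of the first method: WRITEV updates a single field of a record, so under field level replication only that field needs to travel to the subscriber. The file name, record id, and field number below are hypothetical, and FIELDWRITE (which covers multiple declared fields) follows a similar pattern — see the UniBasic reference for its exact syntax.

```
* Illustrative UniBasic sketch; PARAMS and COUNTERS are hypothetical names.
OPEN '', 'PARAMS' TO F.PARAMS ELSE STOP 'Cannot open PARAMS'
READU REC FROM F.PARAMS, 'COUNTERS' ELSE REC = ''
* Bump the counter held in field 3, then write back only that field.
WRITEV REC<3> + 1 ON F.PARAMS, 'COUNTERS', 3
```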

AUTOMATIC FIELD LEVEL UPDATES

Automatic Field Level Updates allow whole-record WRITE statements to store only the changed fields in the replication logs, letting you take advantage of field level replication without any changes to your existing application code.

Automatic Field Level Updates can be configured in two ways. The new FIELD_UPDATE_THRESHOLD configurable parameter in the udtconfig file can set a system-wide threshold. Additionally, the new FIELD_LEVEL configuration parameter in the repconfig FILE phrase can be used to set the threshold on a file-by-file basis. The value defined in the repconfig file for an individual file overrides the udtconfig setting.

The FIELD_UPDATE_THRESHOLD configurable parameter allows you to define the size, in kilobytes, at which a record can be considered for automatic field level updates. A value of 0 (the default) means the feature is turned off.
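For illustration, udtconfig entries are simple “parameter value” lines, so enabling the feature for records of 32 KB or more might look like the following (32 is an example value, not a recommendation):

```
FIELD_UPDATE_THRESHOLD 32
```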

The repconfig FIELD_LEVEL keyword in the FILE phrase allows you to define the size in kilobytes on a file-by-file basis. If no size is defined, then all whole-record writes to the file can be considered for automatic field level updates.
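As an illustrative fragment only — the exact layout of a FILE phrase is defined in the UniData Replication documentation, and the 64 KB value here is an example — the per-file keyword itself takes the form:

```
FIELD_LEVEL 64
```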

If an update qualifies as an automatic field level update and no data has changed, then no replication log is generated, as the record is NOT written back to the database.

INTELLIGENT WRITE QUEUE

We have all written or used applications in which programs repeatedly update the same record in quick succession, sometimes with no locking because the application ‘knows’ it owns the record. The record itself is also likely to stay in the disk cache, so the net result is very fast updates. When using UniData Replication, these updates are sent to the subscriber, where the repeated updates are applied by multiple processes that use record locking to keep them synchronized. The resulting lock contention on the record slows down the replication writer processes applying updates to the subscriber.

Enablement

The intelligent write queue mechanism was introduced into UniData Replication starting at UniData 8.2 to optimize the performance of repeated updates to the same record.

HOW DOES IT WORK?

When a replication log is picked up by a replication writer process, the record id, file id, and LSN (Log Sequence Number) are recorded. The replication writer process then checks its own queue, followed by the queues of the other replication writer processes, to see if the same record id and file id are already in a queue. If the details are in an existing queue, the new details are added to that queue and control of the log passes to the associated replication writer process.

When a replication writer process applies a log to the database, it checks the queue for the record id and file id before releasing the record lock. If the details are found, the subsequent updates for that record in the queue are applied without locking. If there is more than one log in the queue, updates in the middle of the queue may be skipped and only the last one applied to the database; we refer to this as Skip Logic.

The size of the queue for each replication writer process is defined on a group basis by the RW_QUEUESZ phrase in the repconfig file. It has a default value of 0, which means no queuing. A non-zero value defines the number of replication logs whose information can be stored in each queue.
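For example, allowing each replication writer’s queue to hold information for up to 64 replication logs might be configured with a line such as the following inside the relevant group’s definition (the placement and the value 64 are illustrative; see the Replication documentation for the exact repconfig layout):

```
RW_QUEUESZ 64
```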

Skip Logic

The queue contains logic that allows unnecessary whole-record updates to be skipped. Field level updates in the queue are not eligible for skipping but still benefit from the locking improvements. For example, if we have 100 whole-record updates of the same record in the queue, then only the 100th whole-record update needs to be applied on the subscriber.
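The behaviour above can be modelled in a few lines of Python. This is only a simulation of the described Skip Logic, not UniData code, and it conservatively assumes a whole-record update is skipped only when the next queued update to the same record is also a whole-record update:

```python
# Model of the intelligent write queue's Skip Logic (a simulation, not UniData code).
# Queue entries are (file_id, record_id, kind, data); kind is "whole" or "field".

def apply_queue(queue):
    """Return the updates actually applied to the subscriber.

    Repeated whole-record updates to the same (file_id, record_id) collapse
    so only the last one is applied; field-level updates are never skipped."""
    applied = []
    for i, (file_id, rec_id, kind, data) in enumerate(queue):
        if kind == "whole":
            # Look ahead: skip this whole-record write if the next queued
            # update to the same record is also a whole-record write.
            skip = False
            for later in queue[i + 1:]:
                if later[0] == file_id and later[1] == rec_id:
                    skip = later[2] == "whole"
                    break
            if skip:
                continue
        applied.append(queue[i])
    return applied

# 100 whole-record updates of the same record: only the 100th is applied.
logs = [("PARAMS", "COUNTERS", "whole", n) for n in range(100)]
print(len(apply_queue(logs)))  # 1
```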

Next week I’ll finish this two-part look at Replication with the second blog post.