...
Check whether any exports from the previous batch are still waiting or processing; if so, exit
Cutting: Check for all records which have been successfully written to the destination cluster groups (typically during the previous batch)
Logically delete the record and pool combinations belonging to the source cluster groups
See Garbage Collection /wiki/spaces/DEVELOPMENT/pages/2735079625 for the cascading effect
The previous batch is now complete
Exports: Create a number of new export jobs from the source cluster group(s) to the destination cluster groups
These exports form the new batch and can take hours to complete
Exit
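A minimal sketch of this batch cycle, assuming hypothetical names (Export, ExportState, logically_delete, records_to_export) that stand in for the real scheduler internals:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ExportState(Enum):
    WAITING = auto()
    PROCESSING = auto()
    SUCCESSFUL = auto()
    FAILED = auto()

@dataclass
class Export:
    record_id: str
    destination_group: str
    state: ExportState

def logically_delete(record_id: str, groups: list[str]) -> None:
    # Stand-in for logically deleting the record/pool combinations on the
    # source cluster groups; garbage collection cascades from this step.
    print(f"logically deleting {record_id} on {groups}")

def records_to_export(source_groups: list[str]) -> list[str]:
    # Stand-in: record ids on the source cluster groups that still need
    # to be exported.
    return []

def run_batch(exports: list[Export], source_groups: list[str],
              destination_groups: list[str]) -> None:
    # 1. Exports of the previous batch still waiting or processing? Exit.
    if any(e.state in (ExportState.WAITING, ExportState.PROCESSING)
           for e in exports):
        return
    # 2. Cutting: records successfully written to the destination cluster
    #    groups (typically during the previous batch) are logically
    #    deleted on the source cluster groups.
    written = {e.record_id for e in exports
               if e.state is ExportState.SUCCESSFUL}
    for record_id in written:
        logically_delete(record_id, source_groups)
    # 3. Exports: create the jobs of the new batch (these can take hours).
    for record_id in records_to_export(source_groups):
        for group in destination_groups:
            exports.append(Export(record_id, group, ExportState.WAITING))
```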
...
Controls the source and destination cluster groups
CLUSTER_GROUP_SOURCES
CLUSTER_GROUP_ARCHIVE
CLUSTER_GROUP_BACKUP
CLUSTER_GROUP_VAULT
Controls what to schedule
SCHEDULER_SKIP_ORGANISATIONS
Controls the conditions for when to schedule
SCHEDULER_SIZE_TO_FREE
SCHEDULER_MINIMUM_SIZE
SCHEDULER_FREE_SPACE_THRESHOLD
SCHEDULER_MAX_WAITING_PERIOD
Limits
SCHEDULER_MAXIMUM_FILES
SCHEDULER_CUTTING_MINIMUM_AGE
Controls whether to write export files in parallel
SCHEDULER_PARALLEL_ORGANISATIONS
SCHEDULER_PARALLEL_AMOUNT
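As a purely illustrative example, the properties above could be grouped and set roughly like this (every value below is made up; the real formats and defaults are defined by the application):

```python
# All values here are hypothetical examples, not recommended settings.
scheduler_config = {
    # Source and destination cluster groups
    "CLUSTER_GROUP_SOURCES": "gpfs-buffer",
    "CLUSTER_GROUP_ARCHIVE": "archive",
    "CLUSTER_GROUP_BACKUP": "backup",
    "CLUSTER_GROUP_VAULT": "vault",
    # What to schedule
    "SCHEDULER_SKIP_ORGANISATIONS": "org-a,org-b",
    # When to schedule
    "SCHEDULER_SIZE_TO_FREE": "2TB",
    "SCHEDULER_MINIMUM_SIZE": "500GB",
    "SCHEDULER_FREE_SPACE_THRESHOLD": "10%",
    "SCHEDULER_MAX_WAITING_PERIOD": "7d",
    # Limits
    "SCHEDULER_MAXIMUM_FILES": "100000",
    "SCHEDULER_CUTTING_MINIMUM_AGE": "30d",
    # Parallel writing
    "SCHEDULER_PARALLEL_ORGANISATIONS": "org-c",
    "SCHEDULER_PARALLEL_AMOUNT": "2",
}
```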
Is Online
In this mode the scheduler checks the tape databases belonging to its destination cluster groups to see which tapes are present. Any newly detected tapes are marked as online, while tapes that are no longer detected are marked as offline. See the property “is online” at Pools.
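A minimal sketch of this pass, assuming tape presence reduces to a set of detected tape ids (all names are illustrative):

```python
def update_online_flags(detected_tapes: set[str],
                        pool_online: dict[str, bool]) -> None:
    # pool_online maps a tape id to its "is online" flag at Pools.
    for tape_id in pool_online:
        # Newly detected tapes become online; tapes that are no longer
        # detected become offline.
        pool_online[tape_id] = tape_id in detected_tapes

# Example: B1 was online but is no longer detected, C2 is newly detected.
flags = {"A0": True, "B1": True, "C2": False}
update_online_flags(detected_tapes={"A0", "C2"}, pool_online=flags)
print(flags)  # {'A0': True, 'B1': False, 'C2': True}
```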
...
Healing is a procedure where an export job is created from another mirrored copy (not the one that led to the failed export) back to the source cluster group, and the mirrored copy that failed is marked as logically deleted. In the next batch the scheduler will pick up the file from the source cluster group and write it again to all destination cluster groups. For example, if a record was written to three tapes A0, B0 and C0 and the B0 copy was discovered to be corrupt, the B0 copy is logically deleted. After the next completed batch the record is written to new mirrored copies A1, B1 and C1, in addition to the already existing A0 and C0 copies.
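A sketch of the healing flow under the assumptions above (function and field names are hypothetical):

```python
def heal(record_id: str, failed_copy: str, healthy_copies: list[str],
         source_group: str) -> dict:
    # Export the record from any intact mirrored copy back to the source
    # cluster group.
    donor = healthy_copies[0]
    export_job = {"record": record_id, "from": donor, "to": source_group}
    # Mark the corrupt mirrored copy as logically deleted.
    print(f"logically deleting copy {failed_copy} of {record_id}")
    # The next batch picks the record up from the source cluster group and
    # writes it again to all destination cluster groups.
    return export_job

# Example from the text: B0 is corrupt, A0 and C0 are intact.
job = heal("record-42", failed_copy="B0",
           healthy_copies=["A0", "C0"], source_group="gpfs-buffer")
print(job)  # after the next batch, new copies A1, B1 and C1 exist as well
```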
Parallel scheduling
By default the scheduler will export files to the same storage pool(s) until the configured Storage allocation algorithm picks a different pool.
This can mean that when there are 2 tapes available, only one tape gets used until it’s full. As writing to tape is fairly slow, this is not optimal.
In 22.1 a new feature was introduced that, if activated, writes files to tape triplets in parallel:
SCHEDULER_PARALLEL_ORGANISATIONS
Configures the organisations this feature is activated for.
SCHEDULER_PARALLEL_AMOUNT
The number of tape triplets to use in parallel.
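To illustrate the effect, here is a sketch that distributes files round-robin over the configured number of tape triplets (the actual distribution is left to the storage allocation algorithm; the round-robin is only an assumption for illustration):

```python
def assign_to_triplets(files: list[str],
                       parallel_amount: int) -> dict[int, list[str]]:
    # Spread the files over SCHEDULER_PARALLEL_AMOUNT tape triplets, so
    # all triplets receive data at the same time instead of one tape
    # being filled completely before the next one is used.
    triplets: dict[int, list[str]] = {i: [] for i in range(parallel_amount)}
    for index, name in enumerate(files):
        triplets[index % parallel_amount].append(name)
    return triplets

print(assign_to_triplets(["f1", "f2", "f3", "f4"], parallel_amount=2))
# {0: ['f1', 'f3'], 1: ['f2', 'f4']}
```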
Future
In the future the scheduler should be rewritten to write from one super cluster to another super cluster, for example from the super cluster “gpfs-buffer” to the super cluster “tape LTO-8”.