IBM Global Mailbox - High availability properties and limitations

High availability of Global Mailbox is limited in some situations, for example, when a data center goes down or when one or more nodes in a data center are not operational.

Global Mailbox is divided into the following functional areas. Each area is designed to support a specific level of availability:
  • Protocol actions or trading partner actions (upload or download of files)
  • Message processing
  • Global Mailbox management node user interface
  • Payload replication
  • Command line utilities

Protocol actions or trading partner actions

A protocol action or trading partner action is the upload or download of a message or file to or from Global Mailbox.

Protocol actions can be completed by using one of the following methods:
  • File Transfer Protocol (FTP)
  • Secure File Transfer Protocol (SFTP)
  • Connect:Direct
  • myFileGateway

The availability of some Global Mailbox components affects the availability of protocol actions. The following sections explain these dependencies in detail:

How Cassandra impacts availability of protocol actions

With delayed (asynchronous) replication, which is the default setting, 50% + 1 of the nodes in the data center where the action is performed must be available. For example, if your data center has 3 Cassandra nodes, then 50% + 1 is 1.5 + 1 = 2.5, which rounds down to 2. Therefore, 2 Cassandra nodes must be available for the protocol action to complete. The nodes in other data centers are not needed for the request to succeed.

Consider another example: if your data center has 4 Cassandra nodes, then 50% + 1 is 2 + 1 = 3. Therefore, 3 Cassandra nodes must be available for the protocol action to complete.
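
The "50% + 1" rule for delayed replication can be expressed as a short helper. This is a minimal illustrative sketch in Python (the function names are ours, not part of the product); integer division reproduces the rounding described above:

  # Quorum for delayed (asynchronous) replication: computed over the
  # local data center only.
  def local_quorum(total_nodes: int) -> int:
      # 50% + 1, rounded down: 3 nodes -> 2, 4 nodes -> 3
      return total_nodes // 2 + 1

  def protocol_action_possible(total_nodes: int, available_nodes: int) -> bool:
      return available_nodes >= local_quorum(total_nodes)

  print(local_quorum(3))  # 2
  print(local_quorum(4))  # 3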

With immediate (synchronous) replication, 50% + 1 of the nodes across the data centers must be available, and two of the available nodes must be in the data center where the user is uploading or downloading. Consider an environment with 3 nodes in data center 1 and 3 nodes in data center 2, that is, 6 nodes in total. To meet the 50% + 1 requirement, 4 nodes are required, and 2 of those 4 nodes must be in the data center where the user is uploading or downloading a file.

Immediate replication requires that the file is replicated to at least one other data center. To achieve this, the other data center must have 50% + 1 of its nodes up and running. If those nodes are not available, the upload fails.

In both scenarios, 2 nodes are required in the data center where the user performs the action. To ensure that Global Mailbox can survive the loss of a node in that data center, an additional node must be added. Therefore, a minimum of 3 nodes is required in each data center.
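
For immediate replication, the availability check spans data centers. The following sketch encodes the three conditions described above, assuming a two-data-center topology; the function and parameter names are illustrative, not product APIs:

  # Availability check for immediate (synchronous) replication across
  # two data centers; all names here are illustrative.
  def immediate_upload_possible(local_up: int, local_total: int,
                                remote_up: int, remote_total: int) -> bool:
      total_quorum = (local_total + remote_total) // 2 + 1  # 50% + 1 overall
      return (local_up + remote_up >= total_quorum  # quorum across data centers
              and local_up >= 2                     # 2 nodes where the user acts
              and remote_up >= remote_total // 2 + 1)  # remote 50% + 1 for the
                                                       # replicated copy

  # 3 nodes per data center: the overall quorum is 4
  print(immediate_upload_possible(2, 3, 2, 3))  # True
  print(immediate_upload_possible(3, 3, 1, 3))  # False: remote quorum not met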

How ZooKeeper nodes impact availability of protocol actions

During normal functioning of Global Mailbox, a minimum of 2 ZooKeeper servers must be operational across the data centers for trading partner or protocol actions to complete. The ZooKeeper watchdog process must also be running on all ZooKeeper nodes so that ZooKeeper failures can be detected and recovery with a reduced ensemble can be initiated. While the watchdog is setting up the reduced ensemble, there is a window of time when trading partner actions fail. The actions proceed after the reduced ensemble is started. If the number of running ZooKeeper servers drops to 1 or 0, all trading partner actions fail until the number of ZooKeeper servers returns to 2 or more.
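
The ZooKeeper condition reduces to a simple threshold, sketched here with an illustrative helper name:

  # Trading partner actions require at least 2 running ZooKeeper servers
  # across all data centers (illustrative check, not a product API).
  def zookeeper_allows_actions(running_servers: int) -> bool:
      return running_servers >= 2

  print(zookeeper_allows_actions(2))  # True
  print(zookeeper_allows_actions(1))  # False: actions fail until 2 or more run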

How shared disk impacts availability of protocol actions

When the Sterling B2B Integrator Global Mailbox Client Adapter starts, it loads its configuration files from the shared disk. If the shared disk fails before the configuration files are loaded, protocol actions cannot be completed. If the shared disk fails after the configuration files are loaded, all transfers, regardless of file size, fail unless a session was already active before the shared disk became unavailable.

How replication server and payload replication type impact the availability of protocol actions

If your replication type is delayed (asynchronous), you can upload files of any size to a data center even when the replication servers in the local or other data centers are down. Files are replicated later, when the replication servers become operational.

If your replication type is immediate (synchronous), non-inline payloads must be replicated to at least one other data center for the upload to be successful. For example, if you have uploaded a file in data center 1, then the following components must be operational:
  • At least one Global Mailbox management node in data center 2
  • Shared disk in data center 2
  • At least one replication server in data center 1

These components are required so that the Global Mailbox management node in data center 2 can pull the file into the shared disk in data center 2 (local shared disk). The Global Mailbox management node pulls the file by using the replication server in data center 1.
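
These dependencies can be summarized as a readiness check. The following hedged sketch models an upload into data center 1; the function and parameter names are ours, not part of the product:

  # Components that must be operational for an immediate-replication
  # upload into data center 1 to succeed (illustrative).
  def immediate_upload_ready(dc2_mgmt_nodes_up: int,
                             dc2_shared_disk_available: bool,
                             dc1_replication_servers_up: int) -> bool:
      return (dc2_mgmt_nodes_up >= 1            # pulls the file to data center 2
              and dc2_shared_disk_available     # target for the pulled payload
              and dc1_replication_servers_up >= 1)  # serves the payload

  print(immediate_upload_ready(1, True, 1))  # True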

Mailboxes can still be created by using a protocol because mailbox creation is independent of payload replication.

How WebSphere MQ impacts availability of protocol actions

When a file is uploaded, a message is placed on WebSphere® MQ to indicate that an event must be created. If WebSphere MQ is down, the message is not created and the upload fails.

File downloads are not impacted by WebSphere MQ availability.

Important: It is recommended that you discuss implementation best practices with the IT services and business process teams in your organization.

Message processing

Messages are processed in the data center in which they are received. A message or file does not need to be replicated before the event is raised.

The following tasks are included in message processing after a message is uploaded (a sketch of this flow follows the list):
  • An event is raised on the queue in the data center in which the file is received.
  • WebSphere MQ accepts the event.
  • Sterling B2B Integrator pulls the event from the queue and triggers a business process or contract, based on the event rule configuration.
  • If processing is successful, the status of the message is updated in Global Mailbox as success; otherwise, it is updated as failed.
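
The flow can be sketched as follows. This is pseudocode-style Python against a hypothetical queue and adapter interface, not the actual Sterling B2B Integrator API:

  # Simplified message-processing loop (hypothetical interfaces).
  def process_uploaded_message(queue, event_rules, global_mailbox):
      event = queue.get()                 # event raised on the local queue,
                                          # accepted by WebSphere MQ
      rule = event_rules.match(event)     # event rule configuration
      try:
          rule.trigger(event)             # business process or contract
          global_mailbox.update_status(event.message_id, "success")
      except Exception:
          global_mailbox.update_status(event.message_id, "failed")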

Messages are processed by the Event Rule Adapter in Sterling B2B Integrator. The following sections outline the availability of the Event Rule Adapter in relation to the availability or unavailability of other Global Mailbox components:

How Cassandra impacts message processing

To process messages, the data center where a file is uploaded must have a majority of Cassandra nodes available. For example, if you have 3 nodes in data center 1, for messages to be processed in data center 1, you must have 2 nodes up in data center 1.

How ZooKeeper impacts message processing

During normal functioning of Global Mailbox, a minimum of 2 ZooKeeper servers must be operational across the data centers for message processing to complete. The ZooKeeper watchdog process must also be running on all ZooKeeper nodes so that ZooKeeper failures can be detected and recovery with a reduced ensemble can be initiated. While the watchdog is setting up the reduced ensemble, there is a window of time when message processing fails. Processing proceeds after the reduced ensemble is started. If the number of running ZooKeeper servers drops to 1 or 0, message processing fails until the number of ZooKeeper servers returns to 2 or more.

How shared disk impacts message processing

A file is processed in the data center in which it is received. The shared disk in which the file is received must be available to process the file in that data center.

How replication server and payload replication type impact message processing

Message processing is not dependent on the replication server.

How WebSphere MQ impacts message processing

A multi-instance (active, passive) queue manager is required to ensure that events can be sent from Global Mailbox to Sterling B2B Integrator for processing.

WebSphere® MQ must be available in the data center where a file is uploaded for Global Mailbox to publish events (producer side) to it and for Sterling B2B Integrator to consume events from it (consumer side). For a Sterling B2B Integrator node to process a message, it must connect to the local WebSphere MQ queue to get those messages.

Global Mailbox management node user interface

The Global Mailbox management node provides the Global Mailbox administration user interface. At least one Global Mailbox management node must be available in a data center for an administrator to use the administration user interface.

If no Global Mailbox management node is operational in a data center, the administrator must log in to the user interface from the other data center.
Important: The user interface can be opened in a web browser directly, or by clicking the Launch Global Mailbox Management Tool link on the Sterling B2B Integrator dashboard in that data center.

The Global Mailbox management node also performs non-inline payload replication. If no Global Mailbox management node is operational in a data center, that data center cannot replicate non-inline payloads for files that are uploaded to another data center. Therefore, not having a Global Mailbox management node operational in a data center is equivalent to not having any replication server operational in that data center.

The Global Mailbox management node also runs the scheduler. Some scheduled jobs operate on resources that belong to the local data center, for example, PayloadPurgeJob. If no Global Mailbox management nodes are running in a data center, the jobs that operate on local data center resources do not run until a node is started in that data center.

How Cassandra impacts availability of Global Mailbox management node

For Global Mailbox management node to be available, the local data center must have a majority of Cassandra nodes operational.

How ZooKeeper impacts availability of Global Mailbox management node

For Global Mailbox management node to be available, a minimum of 2 ZooKeeper servers must be operational across all data centers.

Payload replication

Payload replication is the creation of a copy of the message payload, which is uploaded to one data center, in all other data centers.

With immediate (synchronous) replication, for message processing to complete, a copy of the message payload must be created in at least one other data center in the Global Mailbox topology. The following tasks are involved in payload replication (a sketch of the inline-versus-non-inline decision follows the list):
  1. If the size of the payload is larger than the inline payload threshold, segments are created on the shared disk.
  2. If the size of the payload is less than the inline payload threshold, the payload is stored in Cassandra.
  3. Payload segments on the shared disk are replicated to another data center. Replication is triggered by the Global Mailbox management node in the other data center, and the payload is transferred by the replication server in the receiving data center.
  4. Payload blobs are replicated to another data center by Cassandra.
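
The inline-versus-non-inline decision can be sketched as follows. The threshold value, segment size, and names are illustrative; the actual threshold is a product configuration setting:

  # Route a payload to Cassandra (inline) or the shared disk (non-inline).
  INLINE_PAYLOAD_THRESHOLD = 100 * 1024  # example value, in bytes
  SEGMENT_SIZE = 64 * 1024               # example segment size

  def store_payload(payload: bytes):
      if len(payload) < INLINE_PAYLOAD_THRESHOLD:
          return ("cassandra", payload)  # inline: replicated by Cassandra
      segments = [payload[i:i + SEGMENT_SIZE]
                  for i in range(0, len(payload), SEGMENT_SIZE)]
      return ("shared_disk", segments)   # non-inline: pulled by the other data
                                         # center's management node through the
                                         # replication server

  print(store_payload(b"x" * 10)[0])  # cassandra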

The availability of some Global Mailbox components affects the availability of payload replication. The following sections explain these dependencies in detail:

How Cassandra nodes impact availability of payload replication

For inline payloads, replication is done by Cassandra. The payload does not appear in another data center unless the Cassandra nodes are up in that data center. If the Cassandra nodes in that data center are down when the file is uploaded, you must run a repair to replicate the payload to the nodes that were down.

For non-inline payloads, after a payload is received by a data center, the Global Mailbox management nodes in the other data centers detect the new payload and pull it into their own data centers. The Global Mailbox management nodes query the Cassandra database for new messages that have non-inline payloads. 50% + 1 of the nodes in the local data center must be up for the Global Mailbox management node to replicate a payload to the local data center.

How ZooKeeper nodes impact availability of payload replication

During normal functioning of Global Mailbox, a minimum of 2 ZooKeeper servers must be operational across the data centers for payload replication to complete. The ZooKeeper watchdog process must also be running on all ZooKeeper nodes so that ZooKeeper failures can be detected and recovery with a reduced ensemble can be initiated. While the watchdog is setting up the reduced ensemble, there is a window of time when replication fails. Payload replication proceeds after the reduced ensemble is started. If the number of running ZooKeeper servers drops to 1 or 0, payload replication fails until the number of ZooKeeper servers returns to 2 or more.

How shared disk impacts availability of payload replication

The shared disk is not used to store inline payloads; therefore, the replication of these payloads is not dependent on the shared disk.

Non-inline payloads are stored on the shared disk. The shared disk must be available for the Global Mailbox management node to pull a payload from another data center to the local data center.

How replication server impacts payload replication

For inline payloads, there is no dependency on the replication server because inline payloads are replicated by Cassandra.

For non-inline payloads, the Global Mailbox management node pulls payloads from another data center to the local data center. To do so, at least one replication server must be operational in both the local data center and the other data center.

How WebSphere MQ impacts payload replication

Payload replication is not dependent on WebSphere® MQ.

Command line utilities

Global Mailbox offers command line utilities to modify the default configuration with Sterling B2B Integrator, configure the Cassandra database, configure data centers, configure the scheduler, and manage the storage passphrase.

The command line utilities are as follows:
  • appConfigUtility
  • schedulerConfigUtility
  • storagePassphrase
  • dbConfigUtility
  • dcConfigUtility
  • masterPassphrase

The availability of some Global Mailbox components affects the availability of the command line utilities. The following sections explain these dependencies in detail:

How Cassandra impacts availability of command line utilities

For the command line utilities to function, 50% + 1 of the Cassandra nodes must be available in the data center where you run the command.

For masterPassphrase, 50% + 1 of the Cassandra nodes must be available in each data center in the Global Mailbox topology.

How ZooKeeper nodes impact availability of command line utilities

During normal functioning of Global Mailbox, a minimum of 2 ZooKeeper servers must be operational across the data centers for the command line utilities to be usable. The ZooKeeper watchdog process must also be running on all ZooKeeper nodes so that ZooKeeper failures can be detected and recovery with a reduced ensemble can be initiated. While the watchdog is setting up the reduced ensemble, there is a window of time when the utilities cannot be used. The utilities can be used again after the reduced ensemble is started. If the number of running ZooKeeper servers drops to 1 or 0, the command line utilities cannot be used until the number of ZooKeeper servers returns to 2 or more.

How shared disk impacts availability of command line utilities

The command line utilities use configuration files that are stored on the shared disk. Therefore, the shared disk must be available in the data center where you run the command.

How replication server impacts the availability of command line utilities

Command line utilities are not dependent on the replication server.

How WebSphere MQ impacts command line utilities

Command line utilities are not dependent on WebSphere® MQ.

Split-brain

In general terms, split-brain arises when two or more data centers cannot connect and communicate with each other. Split-brain is also called a network partition.

Important: It is recommended that data center administrators inform Global Mailbox administrators about data center connectivity issues and split-brain so that the administrators refrain from making updates in the Global Mailbox management node user interface.

In most cases, split-brain leads to availability and data consistency issues. In Global Mailbox, split-brain happens when the Cassandra nodes in one data center cannot communicate with the Cassandra nodes in another data center, primarily because of network issues.

Cassandra servers in all data centers work together to provide a consistent view of the data. They communicate over the network to replicate data as quickly as possible, keeping the data as current as possible on all servers. If the servers in one data center cannot communicate with the servers in the other data centers, operations and updates might be performed on out-of-date data, and conflicting operations might be performed on each side of the partition. For example, a user might remove a mailbox permission for a user in data center 1, while someone else adds permissions for that user in data center 2. Because of the network partition, the data is out of sync.

After the partition is resolved, Cassandra attempts to resolve any conflicts on the objects that were created, updated, or deleted during the partition. It keeps the change that has the latest time stamp.

General behavior observed during split-brain is as follows:
  • When data centers cannot communicate, transactions can happen on the same object in more than one data center. Such changes are called conflicting changes. When the connection is restored and the conflicting Cassandra changes are resolved, the record with the latest time stamp overwrites any previous version of the object (see the sketch after this list).
  • When you delete a mailbox, the deletion takes precedence over later changes. For example, if a mailbox /acme_mbx is deleted first in data center 1, and a submailbox /acme_mbx/acme_user is created in data center 2 after the deletion, both /acme_mbx and /acme_mbx/acme_user are deleted after the merge.
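
The last-write-wins resolution can be illustrated with a small sketch; the record layout and timestamps here are ours, not Cassandra internals:

  # Last-write-wins merge after a partition heals.
  def resolve_conflict(version_a: dict, version_b: dict) -> dict:
      # The change with the latest time stamp overwrites earlier versions.
      if version_a["timestamp"] >= version_b["timestamp"]:
          return version_a
      return version_b

  dc1 = {"permission": "removed", "timestamp": 1_700_000_100}
  dc2 = {"permission": "granted", "timestamp": 1_700_000_200}
  print(resolve_conflict(dc1, dc2))  # the later change, from data center 2, wins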

What you cannot do during split-brain

During a split-brain, you can only disable or enable an event rule.

To protect the integrity of event rule records, the system does not allow the following changes during split-brain:
  • Create new event rules
  • Rename event rules
  • Delete event rules
  • Update event rules
In addition, the system does not allow you to perform the following operations in Sterling File Gateway:
  • Create a routing rule with a Global Mailbox producer partner.
  • Convert a producer partner who has a routing channel associated with the producer role.
  • Delete a routing rule that has a Global Mailbox partner.
  • Delete a partner who has a routing channel associated with the producer role.
  • Update a routing rule that has a Global Mailbox producer partner.

Global Mailbox behavior during split-brain

During split-brain, the following exceptional behavior is observed in Global Mailbox:
Duplicate messages
During split-brain, duplicate messages might be uploaded across data centers even though allowDuplicates=false, because the system cannot reach the other data center to check for duplicates. However, uploading duplicate messages in the local data center is still not allowed when allowDuplicates=false.
Extraction count
The extraction count can be lost if the extraction criteria of a message are changed in another data center during split-brain. For example, if a trading partner extracts a message in data center 1, and an administrator later changes the extraction criteria for the same message in data center 2, the extraction count is lost after the data centers are merged. The administrator's later update takes precedence over the extraction count that was recorded when the trading partner extracted the message.
Extraction criteria
An update to the extraction criteria of a message can be lost if the message is extracted in another data center during split-brain, after the criteria changed. For example, if an administrator changes the extraction criteria for a message in data center 1, and a trading partner later extracts the same message in data center 2, the administrator's changes are lost after the split-brain is resolved. The extraction of the message by the trading partner is the latest change, so it takes precedence.
Orphaned mailboxes
Creating or deleting a mailbox during split-brain can leave behind orphaned mailboxes that cannot be cleaned up. If a submailbox is created in data center 1 and one of its parent mailboxes is deleted later in data center 2, then after the merge, the deleted mailbox and the submailboxes under it are not displayed. In addition, a user cannot create a submailbox with the same name.

Quorum of data centers

A quorum of data centers is used to ensure that synchronous replication succeeds even when replication is not completed across all data centers. A quorum is a majority of the data centers in a setup. The quorum is set automatically when you install or upgrade your setup.

The quorum is calculated by halving the number of data centers, rounding down, and adding 1:

quorum = floor(number of data centers / 2) + 1

Example

If your setup has 5 data centers, your quorum is 3: floor(5 / 2) + 1 = 2 + 1 = 3.
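
As a worked check of the formula, using an illustrative helper:

  # Quorum is a majority of the data centers.
  def dc_quorum(data_centers: int) -> int:
      return data_centers // 2 + 1  # floor(n / 2) + 1

  print(dc_quorum(5))  # 3
  print(dc_quorum(4))  # 3
  print(dc_quorum(3))  # 2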
