A TigerGraph system with High Availability (HA) is a cluster of server machines that uses replication to provide continuous service when one or more servers are unavailable or when some service components fail. The TigerGraph HA service provides load balancing when all components are operational, as well as automatic failover in the event of a service disruption. One TigerGraph server consists of several components (e.g., GSE, GPE, RESTPP). The default HA configuration has a replication factor of 2, meaning that a fully functioning system maintains two copies of the data, stored on separate machines. In an advanced HA setup, users can set a higher replication factor.
An HA cluster needs at least 3 server machines; the machines can be physical or virtual. This is true even if the system has only one graph partition.
For a distributed system with N partitions (where N > 1), the system must have at least 2N machines. For example, a graph split into 4 partitions with two copies of each partition requires at least 8 machines.
The same version of the TigerGraph software package must be installed on each machine.
HA configuration should be done immediately after system installation and before deploying the system for database use.
To convert a non-HA system to an HA system, the current version of TigerGraph requires that all the data and metadata be cleared, and all TigerGraph services be stopped. This limitation will be removed in a future release.
Starting with version 2.1, configuring an HA cluster is integrated into platform installation; see the TigerGraph Platform Installation Guide for details.
Follow the instructions in the document TigerGraph Platform Installation Guide to install the TigerGraph system in your cluster.
Be sure you are logged in as the tigergraph OS user on machine "m1". Before setting up HA or changing HA configuration, the current TigerGraph system must be fully stopped. If the system has any graph data, clear out the data (e.g., with "gsql DROP ALL").
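For example, a minimal sequence to clear the data (assuming the gsql client is on the tigergraph user's PATH; DROP ALL permanently erases all schema and data, so use it with care):

gsql 'DROP ALL'

Then stop all services: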
gadmin stop ts3 -fy
gadmin stop all -fy
gadmin stop admin -fy
After the cluster installation, create an HA configuration using the following command:
gadmin --enable ha
This command automatically generates a configuration for a distributed (partitioned) database with an HA system replication factor of 2. Some individual components may have a higher replication factor.
tigergraph@m1$ gadmin --enable ha
[FAB ][m3,m2] mkdir -p ~/.gium
[FAB ][m3,m2] scp -r -P 22 ~/.gium ~/
[FAB ][m3,m2] mkdir -p ~/.gsql
[FAB ][m3,m2] scp -r -P 22 ~/.gsql ~/
[FAB ][m3,m2] mkdir -p ~/.venv
[FAB ][m3,m2] scp -r -P 22 ~/.venv ~/
[FAB ][m3,m2] cd ~/.gium; ./add_to_path.sh
[RUN ] /home/tigergraph/.gsql/gpe_auto_start_add2cron.sh
[FAB ][m3,m2] mkdir -p /home/tigergraph/.gsql/
[FAB ][m3,m2] scp -r -P 22 /home/tigergraph/.gsql/all_log_cleanup /home/tigergraph/.gsql/
[FAB ][m3,m2] mkdir -p /home/tigergraph/.gsql/
[FAB ][m3,m2] scp -r -P 22 /home/tigergraph/.gsql/all_log_cleanup_add2cron.sh /home/tigergraph/.gsql/
[FAB ][m1,m3,m2] /home/tigergraph/.gsql/all_log_cleanup_add2cron.sh
[FAB ][m1,m3,m2] rm -rf /home/tigergraph/tigergraph_coredump
[FAB ][m1,m3,m2] mkdir -p /home/tigergraph/tigergraph/logs/coredump
[FAB ][m1,m3,m2] ln -s /home/tigergraph/tigergraph/logs/coredump /home/tigergraph/tigergraph_coredump
If the HA configuration fails, e.g., if the cluster doesn't satisfy the HA requirements, the command will stop with a warning.
tigergraph@m1$ gadmin --enable ha
Detect config change. Please run 'gadmin config-apply' to apply.
ERROR:root: To enable HA configuration, you need at least 3 machines.
Enable HA configuration failed.
In this optional step, advanced users can run several "gadmin --set" commands to control the replication factor and manually specify the host machines for each TigerGraph component. The table below shows the recommended settings for each component. See the examples later in this section for different configuration cases.
Component          Suggested Number of Hosts    Suggested Number of Replicas
ZK, Dict Server    3 or 5                       3 or 5
GSE, RESTPP        same as GPE                  same as GPE
Kafka              2 or 3                       2 or 3
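Most of these settings follow a common "servers plus replicas" pattern. As a sketch (the angle-bracket values are placeholders; note that Kafka uses the key kafka.num.replicas rather than kafka.replicas, as shown in the example below):

gadmin --set <component>.servers <host1>,<host2>,...
gadmin --set <component>.replicas <number_of_replicas>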
Example: Consider a 3-machine cluster with machines m1, m2, and m3. Kafka, GPE, GSE, and RESTPP are all on m1 and m2, with replication factor 2. This is a non-distributed graph HA setup.
# ZooKeeper and Dict Server on all three machines
gadmin --set zk.servers m1,m2,m3
gadmin --set dictserver.servers m1,m2,m3
gadmin --set dictserver.base_ports 17797,17797,17797
# Kafka, GSE, GPE, and RESTPP on m1 and m2, each with 2 replicas
gadmin --set kafka.servers m1,m2
gadmin --set kafka.num.replicas 2
gadmin --set gse.replicas 2
gadmin --set gpe.replicas 2
gadmin --set gse.servers m1,m2
gadmin --set gpe.servers m1,m2
gadmin --set restpp.servers m1,m2
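For comparison, a distributed HA setup extends the same settings across more machines. The following sketch is illustrative only, assuming a hypothetical 4-machine cluster m1-m4 holding 2 graph partitions with replication factor 2; verify the values against your own installation:

# ZK and Dict Server keep a 3-host quorum, per the suggested host counts above
gadmin --set zk.servers m1,m2,m3
gadmin --set dictserver.servers m1,m2,m3
gadmin --set dictserver.base_ports 17797,17797,17797
# Kafka, GSE, GPE, and RESTPP span all four machines, each keeping 2 replicas
gadmin --set kafka.servers m1,m2,m3,m4
gadmin --set kafka.num.replicas 2
gadmin --set gse.replicas 2
gadmin --set gpe.replicas 2
gadmin --set gse.servers m1,m2,m3,m4
gadmin --set gpe.servers m1,m2,m3,m4
gadmin --set restpp.servers m1,m2,m3,m4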
Once the HA configuration is done, proceed to install the package from the first machine (named “m1” in the cluster installation configuration).
gadmin pkg-install reset -fy
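After the package installation finishes, it is a good idea to confirm that all components are up on every machine, e.g., with the status command:

gadmin status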
The list below shows how to set up the most common configurations. Note that if you are converting the system from another configuration, you must first stop the old TigerGraph system.

Cluster Configuration (number of servers in cluster is X):

Non-distributed graph with HA: Each server machine holds the complete graph.
Distributed graph without HA: The graph is partitioned among all the cluster servers.
Distributed graph with HA: The graph is partitioned with replication factor N; the number of partitions Y equals X/N. For example, a 6-server cluster (X = 6) with replication factor N = 2 holds Y = 3 partitions.