1. Overview
Most distributed applications require some stateful component to be consistent and fault-tolerant. Atomix is an embeddable library that helps achieve fault tolerance and consistency for distributed resources.
It provides a rich set of APIs for managing distributed resources such as collections, groups, and concurrency tools.
To get started, we need to add the following Maven dependency into our pom:
<dependency>
    <groupId>io.atomix</groupId>
    <artifactId>atomix-all</artifactId>
    <version>1.0.8</version>
</dependency>
This dependency provides the Netty-based transport that the nodes need to communicate with each other.
2. Bootstrapping a Cluster
To get started with Atomix, we need to bootstrap a cluster first.
Atomix consists of a set of replicas that are used for creating stateful distributed resources. Each replica maintains a copy of the state of each resource in the cluster.
There are two types of replicas in a cluster: active and passive.
State changes of distributed resources are propagated through active replicas, while passive replicas are kept in sync to maintain fault tolerance.
2.1. Bootstrapping an Embedded Cluster
To bootstrap a single node cluster, we need to create an instance of AtomixReplica first:
AtomixReplica replica = AtomixReplica.builder(
    new Address("localhost", 8700))
  .withStorage(storage)
  .withTransport(new NettyTransport())
  .build();
Here, the replica is configured with Storage and Transport. Here's the code snippet to declare the storage:
Storage storage = Storage.builder()
  .withDirectory(new File("logs"))
  .withStorageLevel(StorageLevel.DISK)
  .build();
Once the replica is declared and configured with storage and transport, we can bootstrap it by simply calling bootstrap(). This returns a CompletableFuture, which can be used to block until the server is bootstrapped by calling the associated blocking join() method:
CompletableFuture<AtomixReplica> future = replica.bootstrap();
future.join();
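If we'd rather not block, the returned future can also be handled asynchronously. Here's a minimal sketch (the callback and its message are purely illustrative):

replica.bootstrap().thenRun(() ->
    // illustrative callback; runs once bootstrapping completes
    System.out.println("Replica bootstrapped"));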
So far we’ve constructed a single node cluster. Now we can add more nodes to it.
To do this, we need to create other replicas and join them to the existing cluster; since the calls block, we need to spawn a new thread for each call to the join(Address) method:
AtomixReplica replica2 = AtomixReplica.builder(
    new Address("localhost", 8701))
  .withStorage(storage)
  .withTransport(new NettyTransport())
  .build();

replica2.join(new Address("localhost", 8700)).join();

AtomixReplica replica3 = AtomixReplica.builder(
    new Address("localhost", 8702))
  .withStorage(storage)
  .withTransport(new NettyTransport())
  .build();

replica3.join(
    new Address("localhost", 8700),
    new Address("localhost", 8701))
  .join();
Now we have a three-node cluster bootstrapped. Alternatively, we can bootstrap a cluster by passing a List of addresses to the bootstrap(List<Address>) method:
List<Address> cluster = Arrays.asList(
  new Address("localhost", 8700),
  new Address("localhost", 8701),
  new Address("localhost", 8702));

AtomixReplica replica1 = AtomixReplica
  .builder(cluster.get(0))
  .build();
replica1.bootstrap(cluster).join();

AtomixReplica replica2 = AtomixReplica
  .builder(cluster.get(1))
  .build();
replica2.bootstrap(cluster).join();

AtomixReplica replica3 = AtomixReplica
  .builder(cluster.get(2))
  .build();
replica3.bootstrap(cluster).join();
Here, too, we need to spawn a new thread for each replica, since each bootstrap(cluster).join() call blocks until the cluster has formed; a sketch of the threading follows below.
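As a minimal sketch of that threading, assuming the cluster list from the snippet above (the executor is our own illustration, not part of the Atomix API):

ExecutorService executor = Executors.newFixedThreadPool(cluster.size());
for (Address address : cluster) {
    executor.submit(() -> {
        // Each replica bootstraps on its own thread, because
        // bootstrap(cluster).join() blocks until the cluster forms.
        AtomixReplica replica = AtomixReplica.builder(address).build();
        replica.bootstrap(cluster).join();
    });
}
executor.shutdown();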
2.2. Bootstrapping a Standalone Cluster
The Atomix server can also be run as a standalone server, which can be downloaded from Maven Central.
Simply put, it's a Java archive that can be run via the terminal by providing a host:port parameter in the address flag and using the -bootstrap flag.
Here’s the command to bootstrap a cluster:
java -jar atomix-standalone-server.jar -address 127.0.0.1:8700 -bootstrap -config atomix.properties
Here, atomix.properties is the configuration file for storage and transport (a sketch of such a file follows below). To make a multi-node cluster, we can add nodes to the existing cluster using the -join flag.
The format for it is:
java -jar atomix-standalone-server.jar -address 127.0.0.1:8701 -join 127.0.0.1:8700
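As for the configuration file, something along these lines could define disk-backed storage. The property keys below are only an assumption for illustration; the exact names should be checked against the reference configuration shipped with the Atomix release in use:

# Hypothetical atomix.properties sketch; verify key names for your version
storage.level=DISK
storage.directory=logs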
3. Working with a Client
Atomix supports creating a client to have remote access to its cluster, via the AtomixClient API.
Since clients don't need to be stateful, AtomixClient doesn't have any storage. We simply need to configure the transport when creating the client, since the transport will be used to communicate with the cluster.
Let’s create a client with a transport:
AtomixClient client = AtomixClient.builder()
  .withTransport(new NettyTransport())
  .build();
We now need to connect the client to the cluster.
We can declare a List of Address and pass it as an argument to the client's connect() method:
client.connect(cluster)
  .thenRun(() -> {
      System.out.println("Client is connected to the cluster!");
  });
4. Handling Resources
The true power of Atomix lies in its strong set of APIs for creating and managing distributed resources. Resources are replicated and persisted in the cluster, and are backed by a replicated state machine managed by the underlying implementation of the Raft consensus protocol.
Distributed resources can be created and managed with one of the get() methods. We can create a distributed resource instance from AtomixReplica.
Assuming replica is an instance of AtomixReplica, here's the code snippet to create a distributed map resource and set a value in it:
replica.getMap("map") .thenCompose(m -> m.put("bar", "Hello world!")) .thenRun(() -> System.out.println("Value is set in Distributed Map")) .join();
Here, the join() method blocks the program until the resource is created and the value is set. We can get the same object using AtomixClient and retrieve the value with the get("bar") method.
We can use the get() method at the end to wait for the result:
String value = client.getMap("map")
  .thenCompose(m -> m.get("bar"))
  .thenApply(a -> (String) a)
  .get();
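Other resources follow the same asynchronous pattern. As a brief sketch, a distributed value can be used in much the same way (assuming the getValue() resource getter and a resource name of our own choosing):

client.getValue("value")
  // set() returns a CompletableFuture, just like the map operations
  .thenCompose(v -> v.set("Hello world!"))
  .thenRun(() -> System.out.println("Value is set"))
  .join();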
5. Consistency and Fault Tolerance
Atomix is designed for mission-critical, small-scale data sets for which consistency is a much bigger concern than availability.
It provides strong, configurable consistency through linearizability for both reads and writes. Under linearizability, once a write is committed, all clients are guaranteed to see the resulting state.
Consistency in an Atomix cluster is guaranteed by the underlying Raft consensus algorithm, in which the elected leader contains all of the writes that were previously successful.
All new writes go through the cluster leader and are synchronously replicated to a majority of the servers before completion.
To maintain fault tolerance, a majority of the cluster's servers need to be alive. If a minority of the nodes fail, those nodes will be marked inactive and replaced by passive or standby nodes.
In case of leader failure, the remaining servers in the cluster will begin a new leader election. Meanwhile, the cluster will be unavailable.
In the case of a network partition, if the leader is on the non-quorum side of the partition, it steps down and a new leader is elected on the side with a quorum.
If the leader is on the majority side, it will continue with no change. When the partition is resolved, the nodes on the non-quorum side will rejoin the cluster and update their logs accordingly.
6. Conclusion
Like ZooKeeper, Atomix provides a robust set of libraries for dealing with distributed computing issues.
And, as always, the full source code for this task is available over on GitHub.