
1. Introduction
In today’s rapidly evolving tech landscape, organizations need reliable solutions to deploy and manage applications across diverse environments, including on-premises infrastructure and multi-cloud setups.
Unlike more complex orchestration tools such as Kubernetes, Apache Mesos, and Docker Swarm, Nomad provides a lightweight, straightforward approach to scheduling and running applications. It supports both containerized and non-containerized workloads.
In this article, we’ll explore the Nomad Cloud Platform, covering its installation, setup, and features to gain a comprehensive understanding of how to leverage this powerful platform.
2. What Is Nomad?
The Nomad Cloud Platform is an open-source workload orchestration solution developed by HashiCorp. It enables flexible deployment and management of applications across diverse computing environments, including multi-cloud and on-premises infrastructure.
Nomad simplifies application deployment through its lightweight, vendor-agnostic design, offering a scalable solution for containerized and non-containerized workloads. Moreover, its simple architecture and powerful scheduling capabilities help organizations streamline deployments, optimize resource utilization, and maintain operational consistency.
3. Installation and Setup
First, let’s use Homebrew to tap HashiCorp’s repository:
$ brew tap hashicorp/tap
Then, we’ll install Nomad from the tapped HashiCorp repository:
$ brew install hashicorp/tap/nomad
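On Linux, Nomad is also available from HashiCorp’s official package repositories, as well as precompiled binaries. For example, on a Debian-based system (assuming HashiCorp’s apt repository has already been added), the installation is a single command:
$ sudo apt-get install nomad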
Next, we can verify the installation by checking its version:
$ nomad -v
The output displays details such as the version, the build date, and the revision hash:
Nomad v1.9.3
BuildDate 2024-11-11T16:35:41Z
Revision d92bf1014886c0ff9f882f4a2691d5ae8ad8131c
Let’s start a cluster in dev mode using the agent command, with logging set to the INFO level:
$ nomad agent -dev -log-level=INFO
Then, we can observe the startup logs to get insights into the default setup of the Nomad environment:
==> No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:
Advertise Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
Bind Addrs: HTTP: [127.0.0.1:4646]; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
Client: true
Log Level: INFO
Node Id: 1845c6b5-d670-6518-2ca1-a1a4ec8e2088
Region: global (DC: dc1)
Server: true
Version: 1.9.3
==> Nomad agent started! Log data will stream in below:
2024-11-30T16:36:20.046+0530 [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d54ce4e9-055c-8c54-bdc1-aeae88a7998f Address:127.0.0.1:4647}]"
2024-11-30T16:36:20.046+0530 [INFO] nomad.raft: entering follower state: follower="Node at 127.0.0.1:4647 [Follower]" leader-address= leader-id=
2024-11-30T16:36:20.047+0530 [INFO] nomad: serf: EventMemberJoin: global 127.0.0.1
2024-11-30T16:36:20.047+0530 [INFO] nomad: starting scheduling worker(s): num_workers=10 schedulers=["service", "batch", "system", "sysbatch", "_core"]
2024-11-30T16:36:20.047+0530 [INFO] nomad: started scheduling worker(s): num_workers=10 schedulers=["service", "batch", "system", "sysbatch", "_core"]
2024-11-30T16:36:20.047+0530 [INFO] nomad: adding server: server="anshulbansal.global (Addr: 127.0.0.1:4647) (DC: dc1)"
2024-11-30T16:36:20.048+0530 [INFO] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
2024-11-30T16:36:20.048+0530 [INFO] agent: detected plugin: name=java type=driver plugin_version=0.1.0
2024-11-30T16:36:20.048+0530 [INFO] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
2024-11-30T16:36:20.048+0530 [INFO] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
2024-11-30T16:36:20.048+0530 [INFO] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
2024-11-30T16:36:20.049+0530 [INFO] client: using state directory: state_dir=/private/var/folders/w2/cks0zhmn5yz94f3r3nmwkvc80000gp/T/NomadClient1998098531
2024-11-30T16:36:20.049+0530 [INFO] client: using alloc directory: alloc_dir=/private/var/folders/w2/cks0zhmn5yz94f3r3nmwkvc80000gp/T/NomadClient161152373
2024-11-30T16:36:20.049+0530 [INFO] client: using dynamic ports: min=20000 max=32000 reserved=""
2024-11-30T16:36:20.149+0530 [INFO] client.plugin: starting plugin manager: plugin-type=csi
2024-11-30T16:36:20.149+0530 [INFO] client.plugin: starting plugin manager: plugin-type=driver
2024-11-30T16:36:20.149+0530 [INFO] client.plugin: starting plugin manager: plugin-type=device
2024-11-30T16:36:20.308+0530 [INFO] client: started client: node_id=91e820d7-c1f3-c8cc-34dd-06db519f9182
2024-11-30T16:36:21.143+0530 [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
2024-11-30T16:36:21.143+0530 [INFO] nomad.raft: election won: term=2 tally=1
2024-11-30T16:36:21.144+0530 [INFO] nomad.raft: entering leader state: leader="Node at 127.0.0.1:4647 [Leader]"
2024-11-30T16:36:21.145+0530 [INFO] nomad: cluster leadership acquired
2024-11-30T16:36:21.149+0530 [INFO] nomad.core: established cluster id: cluster_id=76d038da-5f9c-eb03-497a-4f25836b2ee8 create_time=1732964781148928000
2024-11-30T16:36:21.149+0530 [INFO] nomad: eval broker status modified: paused=false
2024-11-30T16:36:21.149+0530 [INFO] nomad: blocked evals status modified: paused=false
2024-11-30T16:36:21.219+0530 [INFO] nomad.keyring: initialized keyring: id=23aa3075-c53e-77fa-daa2-a51217475658
2024-11-30T16:36:21.348+0530 [INFO] client: node registration complete
The startup logs reveal critical infrastructure details, including server configuration, node initialization, available task drivers, and network settings:
- Single-node cluster initialization: Nomad automatically configures localhost addresses (ports 4646, 4647, and 4648)
- Task drivers: Nomad detects multiple drivers, such as Docker, Java, QEMU, exec, and raw_exec
- Dynamic ports: Nomad allocates ports dynamically within the 20000-32000 range
- Cluster leadership: Nomad quickly wins the election, acquires leadership, and starts scheduling workers
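The -dev flag is handy for local experiments because it runs a server and a client in a single agent with sensible defaults. Outside dev mode, we’d instead point the agent at a configuration file. Here’s a minimal sketch of such a file, where the data_dir path is just an example:
datacenter = "dc1"
data_dir   = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled = true
}
We’d then start the agent with nomad agent -config=<path-to-config-file>.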
Also, once the Nomad cluster starts, we can access the Nomad UI at http://localhost:4646.

The Nomad UI centralizes job management in its Jobs section, allowing us to view, search, and filter cluster workloads. Also, we can create and monitor job deployments with a simple Run Job button.
Similarly, the left-hand panel provides quick access to critical cluster components like Clients, Servers, and Topology, alongside essential management tools for Storage and Variables.
Furthermore, the Operations section, particularly Evaluations, provides detailed insights into job evaluation processes and their progression across the infrastructure.
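The same information is also available from the command line. For instance, we can list the cluster’s servers and client nodes:
$ nomad server members
$ nomad node status
In our dev setup, both commands report the same single agent, since it acts as server and client at once.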
4. Deploying Applications Using Nomad Jobs
To demonstrate Nomad’s capabilities, let’s deploy a PostgreSQL container step-by-step.
4.1. Create Nomad Job File
First, we’ll create a simple postgres.nomad file:
job "postgres" {
datacenters = ["dc1"]
type = "service"
group "database" {
network {
port "postgres" {
static = 5432
}
}
task "postgres" {
driver = "docker"
config {
image = "postgres:15"
ports = ["postgres"]
}
env {
POSTGRES_PASSWORD = "password"
POSTGRES_DB = "mydb"
}
resources {
cpu = 500
memory = 512
}
}
}
}
The file specifies a service job named postgres that runs in the dc1 datacenter. Within this job, there is a task in the database group.
Specifically, it uses the Docker driver to deploy PostgreSQL 15, expose port 5432, and set environment variables for the database password and name.
Furthermore, it allocates specific resources like 500 MHz of CPU and 512 MB of memory to the task.
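Before submitting the job, we can optionally validate its syntax and preview the scheduling decisions Nomad would make:
$ nomad job validate postgres.nomad
$ nomad job plan postgres.nomad
The plan command performs a dry run and reports what would change, without creating any allocations.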
4.2. Run Nomad Job
Then, let’s run the job using the nomad job run command:
$ nomad job run postgres.nomad
The output of the above command shows the progress of the deployment:
==> 2024-11-30T17:11:28+05:30: Monitoring evaluation "ccb7b29c"
2024-11-30T17:11:28+05:30: Evaluation triggered by job "postgres"
2024-11-30T17:11:29+05:30: Evaluation within deployment: "c8172895"
2024-11-30T17:11:29+05:30: Allocation "9085cb9c" created: node "91e820d7", group "database"
2024-11-30T17:11:29+05:30: Evaluation status changed: "pending" -> "complete"
==> 2024-11-30T17:11:29+05:30: Evaluation "ccb7b29c" finished with status "complete"
==> 2024-11-30T17:11:29+05:30: Monitoring deployment "c8172895"
⠧ Deployment "c8172895" in progress...
2024-11-30T17:15:03+05:30
ID = c8172895
Job ID = postgres
Job Version = 0
Status = running
Description = Deployment is running
Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
database    1        4       0        4          2024-11-30T17:21:28+05:30
An evaluation was triggered, creating an allocation (id 9085cb9c) on a specific node for the database task group. Although the job’s evaluation was marked complete, the deployment is still in progress.
Lastly, since the deployment is still running, Nomad is trying to satisfy the single desired instance of the database task group. So far, four allocations have been placed, none of them healthy yet, and the progress deadline marks how long the deployment has to make progress.
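We can also follow the deployment on its own by querying it directly with the deployment ID from the output above:
$ nomad deployment status c8172895
Re-running this command shows the deployment’s current state, including the same placement summary, as the allocations turn healthy.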
4.3. Nomad Job Status
Next, we’ll check the status of the postgres job:
$ nomad job status postgres
The output provides a detailed summary of the job’s state in the Nomad cluster:
ID = postgres
Name = postgres
Submit Date = 2024-11-30T17:32:11+05:30
Type = service
Priority = 50
Datacenters = dc1
Namespace = default
Node Pool = default
Status = running
Periodic = false
Parameterized = false
Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost  Unknown
database    0       0         1        0       0         0     0
Latest Deployment
ID = b1486482
Status = successful
Description = Deployment completed successfully
Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
database    1        1       1        0          2024-11-30T17:43:15+05:30
Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
95a0741c  f4834583  database    0        run      running  4m27s ago  3m24s ago
We can observe the following information from the status logs:
- a service type job named postgres, along with its submission date and time
- the postgres job in the running status in the dc1 datacenter
- the latest deployment (b1486482) completed successfully, with one desired instance placed and healthy
- allocation details for the database task group, with the running status
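In addition, we can review the job’s rollout history. The nomad job deployments command lists every deployment for the job, while nomad job history shows its version history:
$ nomad job deployments postgres
$ nomad job history postgres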
4.4. Nomad Allocation Status
Lastly, we can check the allocation status of the postgres job:
$ nomad alloc status 95a0741c
The output of the command reveals how Nomad orchestrates and manages workloads across the infrastructure:
ID = 95a0741c-aace-805c-c0b3-fedd8052db0b
Eval ID = 69f38b58
Name = postgres.database[0]
Node ID = f4834583
Node Name = anshulbansal
Job ID = postgres
Client Status = running
Client Description = Tasks are running
Desired Status = run
Created = 5m14s ago
Modified = 4m11s ago
Deployment ID = b1486482
Deployment Health = healthy
Allocation Addresses:
Label      Dynamic  Address
*postgres  yes      127.0.0.1:5432
Task "postgres" is "running"
Task Resources:
CPU        Memory       Disk     Addresses
0/500 MHz  0 B/512 MiB  300 MiB
Task Events:
Started At = 2024-11-30T12:03:05Z
Finished At = N/A
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time                       Type        Description
2024-11-30T17:33:05+05:30  Started     Task started by client
2024-11-30T17:32:11+05:30  Driver      Downloading image
2024-11-30T17:32:11+05:30  Task Setup  Building Task Directory
2024-11-30T17:32:11+05:30  Received    Task received by client
Here, the specific allocation for a PostgreSQL database demonstrates Nomad’s ability to dynamically assign resources, manage networking, and provide transparent visibility into the deployment lifecycle:
- allocation status contains the unique allocation ID (95a0741c), associated node (anshulbansal), and deployment health status (healthy)
- precise details about resource utilization, showing the task’s allocation of 500 MHz CPU and 512 MiB memory, with the database mapped to localhost:5432
- recent events showing the complete lifecycle of the task – from image download to running state
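From here, a few more commands round out the day-to-day workflow. We can stream the task’s logs, open an interactive session inside the running container (the psql invocation below assumes the credentials from our job file), and eventually stop the job:
$ nomad alloc logs 95a0741c postgres
$ nomad alloc exec -task postgres 95a0741c psql -U postgres -d mydb
$ nomad job stop postgres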
5. Features and Tools
As a versatile and efficient platform for workload orchestration, the Nomad Cloud Platform offers a robust set of features and tools to address modern infrastructure needs:
- Multi-Environment Orchestration: Nomad provides a flexible, vendor-agnostic platform for deploying and migrating applications seamlessly across cloud providers, on-premises data centers, and edge locations
- Workload Scheduling: The platform efficiently manages diverse workloads – from containerized to traditional applications and supports multiple job types like services, system tasks, and batch jobs
- Resource Management: Nomad provides dynamic resource allocation for CPU, memory, network, and storage. It offers fine-grained control and can schedule workloads requiring specialized hardware like GPUs
- High Availability and Fault Tolerance: The platform ensures system reliability and zero-downtime updates through leader election, automatic failover, and sophisticated deployment techniques like rolling and canary updates (see the sketch after this list)
- Security Features: Nomad enhances security through role-based access control, HashiCorp Vault integration for secret management, and end-to-end encryption to protect server-client communications
- Native Integrations: The platform integrates smoothly with HashiCorp tools like Consul and Terraform, featuring an API-driven architecture that facilitates workflow customization and automation
- Monitoring and Observability: Nomad offers real-time monitoring tools and integrates with platforms like Prometheus and Grafana, enabling comprehensive performance tracking and issue diagnosis
- Lightweight and User-Friendly: Unlike complex orchestrators, Nomad runs as a single binary, reducing setup complexity and operational overhead while providing powerful orchestration capabilities
- Extensibility: Custom plugins allow Nomad to adapt to unique infrastructure and application requirements, offering flexibility for specialized workloads
- Supporting Tools: Key tools include Nomad Pack for deployment templates, the Nomad CLI for cluster management, and the Nomad UI for web-based monitoring and job management
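For example, the rolling and canary updates mentioned above are configured declaratively through a job’s update stanza. Here’s a brief sketch with illustrative values:
job "postgres" {
  update {
    max_parallel     = 1      # replace one allocation at a time
    canary           = 1      # start one canary before replacing the old version
    min_healthy_time = "30s"  # how long an allocation must stay healthy
    auto_revert      = true   # roll back automatically if the deployment fails
  }
  # ...
}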
6. Conclusion
In this article, we explored the Nomad Cloud Platform – a lightweight open-source workload orchestration solution designed to meet the challenges of modern infrastructure management.
From straightforward installation to powerful scheduling and seamless integrations, Nomad empowers teams to manage containerized microservices, legacy applications, and everything in between. Also, its versatility and reliability make it an invaluable tool for addressing the demands of modern infrastructure.
First, we explored the steps for installation and setup. Then, we deployed the PostgreSQL database job on the Nomad cluster. Last, we familiarized ourselves with the available features and tools.