Integrate service discovery
Nomad integrates with Consul's service discovery features to help developers and operations teams automate and monitor application services at scale. Consul tracks services by registering them in a service catalog that you can access with Consul DNS names instead of hard-coded node IP addresses or hostnames.
The next step of this monolith migration is to integrate Consul and use its features to discover, secure, and monitor the HashiCups services.
In this tutorial, you deploy two versions of HashiCups with Consul service discovery integration. One version runs all of the services on a single publicly accessible node, and the other version splits the deployment between public and private nodes.
Infrastructure overview
At the beginning of the tutorial you have a Nomad and Consul cluster with three server nodes, three private client nodes, and one publicly accessible client node. Each node runs a Consul agent and a Nomad agent.
This infrastructure matches the end state of the previous tutorial.
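If you want to confirm the cluster state before continuing, the standard membership commands are enough; this is an optional check rather than a tutorial step.

$ consul members
$ nomad server members
$ nomad node status

consul members lists every Consul agent in the datacenter, nomad server members lists the three Nomad servers, and nomad node status lists the four client nodes.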
Prerequisites
This tutorial uses the infrastructure set up in a previous tutorial in this collection, Set up the cluster. Complete that tutorial to set up the infrastructure if you have not done so.
Deploy HashiCups on a single VM
In your terminal, navigate to the directory that contains the code from the repository.
Review the jobspec
Navigate to the jobs directory.
$ cd shared/jobs
Open the 02.hashicups.nomad.hcl jobspec file and view the contents.
This version adds a service block to each of the tasks and sets the provider attribute to consul. This configuration instructs Nomad to register the services in the Consul catalog. A check block inside each service block configures a health check for the service.
/shared/jobs/02.hashicups.nomad.hcl
# ...task "db" { driver = "docker" service { name = "database" provider = "consul" port = "db" address = attr.unique.platform.aws.local-ipv4 check { name = "database check" type = "script" command = "/usr/bin/pg_isready" args = ["-d", "${var.db_port}"] interval = "5s" timeout = "2s" on_update = "ignore_warnings" task = "db" } } # ...}
When services are registered in the Consul catalog, you can use Consul DNS instead of the client node addresses from Nomad for upstream service resolution.
/shared/jobs/02.hashicups.nomad.hcl
# ...task "public-api" { driver = "docker" service { name = "public-api" provider = "consul" port = "public-api" address = attr.unique.platform.aws.local-ipv4 check { type = "http" path = "/health" interval = "5s" timeout = "5s" } } meta { service = "public-api" } config { image = "hashicorpdemoapp/public-api:${var.public_api_version}" ports = ["public-api"] } env { BIND_ADDRESS = ":${var.public_api_port}" PRODUCT_API_URI = "http://product-api.service.dc1.global:${var.product_api_port}" PAYMENT_API_URI = "http://payments-api.service.dc1.global:${var.payments_api_port}" }} # ...
Run the job
Submit the job to Nomad. Note that if the previous version of the HashiCups job is running, this command redeploys the job with the new version.
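If you want to preview what Nomad will change before submitting, nomad job plan performs a dry run and prints the scheduler's proposed actions; this step is optional.

$ nomad job plan 02.hashicups.nomad.hcl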
$ nomad job run 02.hashicups.nomad.hcl
==> 2024-11-04T16:04:00-05:00: Monitoring evaluation "13c69d30"
    2024-11-04T16:04:00-05:00: Evaluation triggered by job "hashicups"
    2024-11-04T16:04:01-05:00: Evaluation within deployment: "3139009f"
    2024-11-04T16:04:01-05:00: Allocation "2cd0b07a" created: node "b12113ef", group "hashicups"
    2024-11-04T16:04:01-05:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-04T16:04:01-05:00: Evaluation "13c69d30" finished with status "complete"
==> 2024-11-04T16:04:01-05:00: Monitoring deployment "3139009f"
  ✓ Deployment "3139009f" successful

    2024-11-04T16:04:54-05:00
    ID          = 3139009f
    Job ID      = hashicups
    Job Version = 1
    Status      = successful
    Description = Deployment completed successfully

    Deployed
    Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
    hashicups   1        1       1        0          2024-11-04T21:14:52Z
Verify deployment
Verify that the HashiCups application deployed successfully.
Use the nomad job allocs command to retrieve information about the hashicups job.
$ nomad job allocs hashicups
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
819f186d  30b5f033  hashicups   0        run      running  3m38s ago  2m48s ago
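For a broader view than the allocation list, nomad job status summarizes the job, its task groups, the latest deployment, and recent allocations; this is an optional complement to the command above.

$ nomad job status hashicups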
Use the consul catalog services command to verify that the services are correctly registered in Consul's catalog.
$ consul catalog services
consul
database
frontend
nginx
nomad
nomad-client
payments-api
product-api
public-api
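To see which node a particular service instance registered from, you can also ask the catalog for the nodes behind a single service. The example below uses the database service name from the service block shown earlier.

$ consul catalog nodes -service=database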
Retrieve the HashiCups URL to verify deployment.
$ nomad node status -verbose \
    $(nomad job allocs hashicups | grep -i running | awk '{print $2}') | \
    grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
    awk '{print "http://"$1}'
Output from the above command.
http://3.15.17.40
Copy the IP address and open it in your browser to see the HashiCups application. You do not need to specify a port because nginx is running on port 80.
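If you prefer to verify from the terminal before opening the browser, a plain HTTP request against the address should return a response from nginx; substitute the IP address you retrieved above for the example one.

$ curl -I http://3.15.17.40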
Interact with the application in your browser and observe the changes in both Consul and Nomad.
Cleanup
Stop the deployment when you are ready to move on. The -purge flag removes the job from the system and the Nomad UI.
$ nomad job stop -purge hashicups
==> 2024-11-12T21:24:18+01:00: Monitoring evaluation "e2af4a1c"
    2024-11-12T21:24:18+01:00: Evaluation triggered by job "hashicups"
    2024-11-12T21:24:18+01:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-12T21:24:18+01:00: Evaluation "e2af4a1c" finished with status "complete"
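If you want to confirm the job was removed, query its status again; after a purge, Nomad should report that no job with that ID exists rather than showing it as stopped.

$ nomad job status hashicups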
Deploy HashiCups on multiple VMs
In this section, you run a jobspec that deploys the HashiCups application across separate client nodes.
Review the jobspec
Open the 03.hashicups.nomad.hcl jobspec file and review the contents.
In this jobspec, each service is part of its own group instead of being part of group "hashicups". As a result, Nomad can schedule each service on a different node as needed. Most groups include a constraint so that Nomad cannot schedule them on nodes with the "ingress" role. The nginx group instead has a constraint so that Nomad always schedules it on nodes with the "ingress" role.
/shared/jobs/03.hashicups.nomad.hcl
job "hashicups" { # ... group "db" { # ... service { # ... } task "db" { constraint { attribute = "${meta.nodeRole}" operator = "!=" value = "ingress" } # ... } } group "product-api" { # ... service { # ... } task "product-api" { # ... } } group "payments" { # ... service { # ... } task "payments-api" { # ... } } group "public-api" { # ... service { # ... } task "public-api" { # ... } } group "frontend" { # ... service { # ... } task "frontend" { # ... } } group "nginx" { # ... service { # ... } task "nginx" { constraint { attribute = "${meta.nodeRole}" operator = "=" value = "ingress" } # ... } }}
Run the job
Submit the job to Nomad. Note that the output now shows the status of each service group instead of the single group in previous versions.
Be aware that Nomad's scheduling algorithm may place all of the non-public services on the same private client node or spread them out over several nodes. There are three nodes that match the constraint rule of meta.nodeRole != ingress, so your output will differ from the example.
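If you want to check which role a given client node carries before the scheduler places anything, the node's metadata appears in its verbose status output; the grep filter below is just a convenience, and the nodeRole key is the one used by the constraints above.

$ nomad node status -verbose <node-id> | grep -i noderole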
$ nomad job run 03.hashicups.nomad.hcl
==> 2024-11-12T21:26:06+01:00: Monitoring evaluation "8f35df8a"
    2024-11-12T21:26:06+01:00: Evaluation triggered by job "hashicups"
    2024-11-12T21:26:06+01:00: Allocation "a67f6273" created: node "7fb20437", group "public-api"
    2024-11-12T21:26:06+01:00: Allocation "c83120cc" created: node "7fb20437", group "product-api"
    2024-11-12T21:26:06+01:00: Allocation "2f680e43" created: node "c131bce2", group "db"
    2024-11-12T21:26:06+01:00: Allocation "4a3f2e8b" created: node "30b5f033", group "nginx"
    2024-11-12T21:26:06+01:00: Allocation "6512bee8" created: node "7fb20437", group "payments"
    2024-11-12T21:26:06+01:00: Allocation "7190a16a" created: node "7fb20437", group "frontend"
    2024-11-12T21:26:07+01:00: Evaluation within deployment: "0e1114c1"
    2024-11-12T21:26:07+01:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-12T21:26:07+01:00: Evaluation "8f35df8a" finished with status "complete"
==> 2024-11-12T21:26:07+01:00: Monitoring deployment "0e1114c1"
  ✓ Deployment "0e1114c1" successful

    2024-11-12T21:26:51+01:00
    ID          = 0e1114c1
    Job ID      = hashicups
    Job Version = 0
    Status      = successful
    Description = Deployment completed successfully

    Deployed
    Task Group   Desired  Placed  Healthy  Unhealthy  Progress Deadline
    db           1        1       1        0          2024-11-12T20:36:31Z
    frontend     1        1       1        0          2024-11-12T20:36:36Z
    nginx        1        1       1        0          2024-11-12T20:36:49Z
    payments     1        1       1        0          2024-11-12T20:36:45Z
    product-api  1        1       1        0          2024-11-12T20:36:35Z
    public-api   1        1       1        0          2024-11-12T20:36:46Z
Verify deployment
Verify that the HashiCups application deployed successfully.
Use the nomad job allocs command to retrieve information about the hashicups job.
$ nomad job allocs hashicups
ID        Node ID   Task Group   Version  Desired  Status   Created  Modified
2f680e43  c131bce2  db           0        run      running  2m ago   1m35s ago
4a3f2e8b  30b5f033  nginx        0        run      running  2m ago   1m16s ago
6512bee8  7fb20437  payments     0        run      running  2m ago   1m19s ago
7190a16a  7fb20437  frontend     0        run      running  2m ago   1m28s ago
a67f6273  7fb20437  public-api   0        run      running  2m ago   1m19s ago
c83120cc  7fb20437  product-api  0        run      running  2m ago   1m30s ago
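To inspect any single allocation in more detail, for example to confirm which node and ports it received, you can use its ID from the list above. The ID shown here is the db allocation from the example output and will differ in your cluster.

$ nomad alloc status 2f680e43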
Use the consul catalog services command to verify that the services are correctly registered in Consul's catalog.
$ consul catalog services
consul
database
frontend
nginx
nomad
nomad-client
payments-api
product-api
public-api
Retrieve the HashiCups URL to verify deployment.
$ nomad node status -verbose \
    $(nomad job allocs hashicups | grep nginx | grep -i running | awk '{print $2}') | \
    grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
    awk '{print "http://"$1}'
Output from the above command.
http://3.15.17.40
Copy the IP address and open it in your browser to view the HashiCups application. You do not need to specify a port because nginx is running on port 80.
Before you proceed, interact with the Nomad and Consul UIs to explore their features and understand the relationships between Nomad jobs, allocations, and nodes and Consul services, instances, and nodes.
Cleanup
Stop the deployment when you are ready to move on.
$ nomad job stop -purge hashicups
==> 2024-11-12T22:24:56+01:00: Monitoring evaluation "13866796"
    2024-11-12T22:24:57+01:00: Evaluation triggered by job "hashicups"
    2024-11-12T22:24:57+01:00: Evaluation status changed: "pending" -> "complete"
==> 2024-11-12T22:24:57+01:00: Evaluation "13866796" finished with status "complete"
Next steps
In this tutorial, you deployed two versions of HashiCups with Consul integration. In the first deployment, all of the services ran on one client node, while in the second deployment the services ran on different nodes.
In the next tutorial, you will integrate Consul service mesh and use an API gateway for external public access.