In this tutorial, we are going to move from a single-node Elasticsearch setup to a real multi-node cluster consisting of three Elasticsearch nodes and one Kibana instance, all running locally using Docker Compose.
This is an extremely important milestone in learning Elasticsearch, because features such as sharding, replication, fault tolerance, node discovery, and master election only truly make sense when you see them working in a real multi-node cluster rather than in theory.
By the end of this tutorial, you will:
- Run a three-node Elasticsearch cluster on your machine.
- Understand why specific cluster settings are required.
- Verify that the cluster is correctly formed.
- Create multiple indices and observe how shards and replicas are distributed across nodes.
- Learn some very practical cluster inspection APIs.
The Docker Compose File
Create a file called:
docker-compose.yml
And paste exactly this content:
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    environment:
      - node.name=es01
      - cluster.name=my-cluster
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9201:9200
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    environment:
      - node.name=es02
      - cluster.name=my-cluster
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9202:9200
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:9.0.2
    environment:
      - node.name=es03
      - cluster.name=my-cluster
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9203:9200
  kibana:
    image: docker.elastic.co/kibana/kibana:9.0.2
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=["http://es01:9200","http://es02:9200","http://es03:9200"]
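Before starting anything, you can ask Docker Compose to parse and validate this file; this catches indentation mistakes early:
docker compose config
If the YAML is valid, this prints the fully resolved configuration; if not, it reports where the file is wrong.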
High-Level Architecture
We are running:
- Three Elasticsearch nodes: es01, es02, es03
- One Kibana instance
All three Elasticsearch services:
- Use the same Docker image and version.
- Belong to the same cluster called my-cluster.
- Discover each other using discovery settings.
- Automatically elect one node as the master.
Important Prerequisite: Docker Memory and CPU
Before you run this cluster, you must go to Docker Desktop settings and ensure that:
- Enough memory is allocated (at least 4 GB is recommended for comfort).
- Enough CPU cores are available.
If memory is insufficient:
- Containers will start and then crash.
- The cluster will never stabilize.
- You may think your configuration is wrong when the real problem is simply lack of memory.
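If you are unsure how much memory Docker currently has, you can check from a terminal before starting the cluster (a quick sanity check, assuming the docker CLI is installed; the exact output wording varies by Docker version):
docker info | grep -i "total memory"
# Example output: Total Memory: 7.654GiB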
Understanding the Configuration in Detail
Now let us carefully understand why each important setting exists.
1. node.name
Example:
- node.name=es01
Each node is given a unique name such as es01, es02, and es03.
This is important because:
- Nodes need unique identities inside the cluster.
- Cluster APIs and logs become readable and understandable.
- Master election and shard allocation reports clearly show which node is doing what.
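Once the cluster is running (we will start it shortly), you can confirm these names with the _cat/nodes API, asking for just the name column; this assumes port 9201 is mapped to es01 as in our compose file:
curl "localhost:9201/_cat/nodes?h=name"
# es01
# es02
# es03
(The order of the rows is not guaranteed.)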
2. cluster.name
- cluster.name=my-cluster
By default, Elasticsearch uses a cluster name like docker-cluster.
We change it to:
- Make the cluster identity explicit and intentional.
- Avoid accidental cross-connection with other clusters.
- Make debugging and verification easier.
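A simple way to verify the cluster name once a node is up is to query its root endpoint; the response below is trimmed for brevity:
curl localhost:9201
# {
#   "name" : "es01",
#   "cluster_name" : "my-cluster",
#   ...
# }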
3. Disabling Security (For Learning Only)
- xpack.security.enabled=false
- xpack.security.http.ssl.enabled=false
We are disabling:
- Authentication
- Authorization
- TLS
This is done only to simplify learning.
Later, in a dedicated security chapter, this will be enabled properly and configured correctly.
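For a sense of the difference: with security disabled, a plain anonymous HTTP request works, whereas a secured cluster would require HTTPS plus credentials, roughly like the second command below (illustrative only; the certificate path and password depend on your setup):
# Our setup (security disabled):
curl http://localhost:9201
# A secured cluster (recent-version defaults), roughly:
curl -u elastic:<password> --cacert http_ca.crt https://localhost:9201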
4. cluster.initial_master_nodes (Bootstrap Setting)
- cluster.initial_master_nodes=es01,es02,es03
This is one of the most confusing settings for beginners, so let us explain it clearly.
- This setting is used only once, when a brand-new cluster is created.
- It tells Elasticsearch: “These nodes are allowed to participate in the very first master election.”
In simple words:
- When the cluster starts for the first time, one of these nodes becomes the first master.
- After the cluster is formed, this setting is no longer used.
It is a bootstrap setting, not a runtime coordination setting.
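After the cluster forms, you can ask which node actually won the first election using the _cat/master API (any node can answer; the output below is illustrative):
curl "localhost:9201/_cat/master?v"
# id                     host       ip         node
# dKxG...                172.19.0.4 172.19.0.4 es03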
5. discovery.seed_hosts (How Nodes Find Each Other)
Example:
- discovery.seed_hosts=es02,es03
This means:
When a node starts, it does not know:
- Who the master is.
- What other nodes exist.
- What the cluster state is.
So it needs to contact at least one known node to learn all this information.
This setting tells the node:
“When you start, go and talk to one of these nodes to learn about the cluster.”
That is exactly how new nodes join an existing cluster.
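For comparison, if you configured nodes through elasticsearch.yml instead of environment variables, the same setting on es01 would look like this:
# elasticsearch.yml on es01
discovery.seed_hosts: ["es02", "es03"]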
6. Java Heap Settings
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
This allocates:
- 512 MB initial heap
- 512 MB max heap
This is:
- Enough for learning and demos.
- Small enough to run three nodes on a laptop.
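You can confirm what each node actually got by asking _cat/nodes for the heap columns (heap.max should report roughly 512mb; the current values below are illustrative):
curl "localhost:9201/_cat/nodes?v&h=name,heap.current,heap.max"
# name heap.current heap.max
# es01 231.2mb      512mb
# es02 198.7mb      512mb
# es03 264.9mb      512mb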
7. Port Mapping
9201:9200
9202:9200
9203:9200
This means:
- es01 is accessible at localhost:9201
- es02 is accessible at localhost:9202
- es03 is accessible at localhost:9203
This is not required for the cluster itself, but it is very useful for learning and debugging.
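You can confirm that each port really reaches a different node by checking the name field on each root endpoint (assuming the port mappings above):
curl -s localhost:9201 | grep '"name"'   # "name" : "es01"
curl -s localhost:9202 | grep '"name"'   # "name" : "es02"
curl -s localhost:9203 | grep '"name"'   # "name" : "es03"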
8. Kibana Configuration
ELASTICSEARCH_HOSTS=["http://es01:9200","http://es02:9200","http://es03:9200"]
Earlier, when we had only one node, we gave only one host.
Now:
- Since we have three nodes, we give all three.
- This allows Kibana to connect even if one node is temporarily unavailable.
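Once Kibana is up, you can check its health from the terminal as well; /api/status is Kibana's standard status endpoint. The full response is a large JSON document, so look for an overall level of "available" (exact field names can vary slightly between versions):
curl -s localhost:5601/api/status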
Starting the Cluster
Open a terminal in the folder containing docker-compose.yml and run:
docker compose up
Wait patiently and observe the logs.
After some time, you should see a message similar to:
cluster health status changed from yellow to green
This means:
- All three nodes have discovered each other.
- A master has been elected.
- All shards and replicas are properly allocated.
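Two small quality-of-life tips: you can also start everything in the background and still follow an individual node's logs:
docker compose up -d          # start all containers detached
docker compose logs -f es01   # follow one node's logs
docker compose ps             # confirm all four containers are running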
Checking Cluster Health
Run:
GET _cluster/health
You should see:
"cluster_name" : "my-cluster",
"status" : "green",
"number_of_nodes" : 3,
This confirms that all three nodes are part of the same cluster.
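The same check works from a terminal against any of the three mapped ports (?pretty just formats the JSON response):
curl "localhost:9201/_cluster/health?pretty"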
Creating an Index
Create an index:
PUT products
Now check health again:
GET _cluster/health
It should still be green, which means:
- The primary shard is allocated.
- The replica shard is also allocated on another node.
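In recent Elasticsearch versions a new index defaults to one primary shard and one replica, which is exactly why products goes green on a multi-node cluster. If you want these numbers to be explicit, or want to experiment with different values, you can set them at creation time; products-demo below is just a hypothetical name:
curl -X PUT "localhost:9201/products-demo" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'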
Checking Node Information
Run:
GET _cat/nodes?v
You will see:
- es01
- es02
- es03
One of them will be marked as master.
- In your case, it might be es01.
- In someone else's case, it might be es02 or es03.
This is completely normal and expected.
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.19.0.2 43           90          3   0.22    0.25    0.49     cdfhilmrstw -      es02
172.19.0.3 57           90          3   0.22    0.25    0.49     cdfhilmrstw -      es01
172.19.0.4 75           90          3   0.22    0.25    0.49     cdfhilmrstw *      es03
Checking Shards for the Products Index
Run:
GET _cat/shards/products?v
You will see:
- One primary shard
- One replica shard
They will be on different nodes, for example:
- Primary on es03
- Replica on es02
That is exactly why the cluster is green.
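If you want a compact view, every _cat API accepts an h parameter to select columns; filtered like this, the primary/replica placement is easy to read (node names will differ on your machine):
curl "localhost:9201/_cat/shards/products?v&h=index,shard,prirep,state,node"
# index    shard prirep state   node
# products 0     p      STARTED es03
# products 0     r      STARTED es02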
Creating More Indices
Create a few more:
PUT products1
PUT products2
PUT products3
PUT products4
Now list all shards:
GET _cat/shards/products*?v
You will observe that:
- For every index, both primary and replica shards exist.
- Across all indices, shards are distributed across all three nodes.
- No single node holds everything.
This is automatic load distribution and fault tolerance in action.
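To see the balance at a glance, the _cat/allocation API summarizes how many shards each node holds (illustrative output; with five indices at one primary and one replica each, the ten shards spread roughly evenly):
curl "localhost:9201/_cat/allocation?v&h=shards,node"
# shards node
#      4 es01
#      3 es02
#      3 es03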
