Some people start their Elasticsearch cluster with very few shards; some even start with just one.
Watch out! If you do this, you might run into trouble once your data grows much bigger, like slow query performance and endless service crashes with out-of-memory errors. Adding more VMs won't help!
And the cure? You have to re-index. But doing so comes with a cost that you may not wish to pay. Trust me, it hurts. Badly.
Note: There are tons of best practices about ES. In this post, we will just focus on shard count.
Naturally, you don't have much data at the very beginning. So, you just want to start with very few resources and save money.
Let's say you start three VMs and create indices with two replicas. This means all data will have three copies. Even if one or two VMs crash, you're still safe. Very reliable. As for the shards, you may think to start with a small number, as well. Quite reasonable, right?
When your indices grow, you can change the replica count to a bigger number. After adding more VMs, the ES cluster will automatically get more copies of your data. It works like a charm.
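The replica bump is a one-line settings update. Here's a minimal sketch in Python's standard library, assuming a cluster reachable at `ES_URL`; the index name `my-index` is just a placeholder:

```python
# Hedged sketch: raise the replica count of an existing index via the
# _settings API. ES_URL and "my-index" are placeholders for your setup.
import json
import urllib.request

ES_URL = "http://localhost:9200"

def replica_settings_body(replicas: int) -> bytes:
    """Build the JSON payload that updates index.number_of_replicas."""
    return json.dumps({"index": {"number_of_replicas": replicas}}).encode()

def set_replicas(index: str, replicas: int) -> None:
    """PUT the new replica count; Elasticsearch rebalances copies automatically."""
    req = urllib.request.Request(
        f"{ES_URL}/{index}/_settings",
        data=replica_settings_body(replicas),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

# Against a live cluster:
#   set_replicas("my-index", 2)   # every primary now has two replicas
```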
However, you cannot change the shard count of existing indices to a bigger number. (You can't restore from a backup with modified shard count, either.)
As your indices grow, your shards keep growing. Very soon, you will get giant shards.
In our case, we started with five shards and two replicas. Months later, one of our indices had grown to 1TB, so each primary shard was as big as 200GB, while our ES VMs have only around 28GB RAM each (25 ES VMs in total).
So, what's the impact of giant shards?
- We see more than three OOM (out-of-memory) incidents every single day. Pretty scary. Each shard holds 200GB of data, but the VMs have only 28GB RAM, with 14GB allocated to the ES heap. The shards are simply too big for the VMs.
- Adding more VMs won't help. In our example, we have 15 shards (primaries plus replicas) in total. Suppose each shard has already been placed on its own dedicated VM: only those 15 VMs can serve this index, and any VM beyond them is useless for it.
- Query performance is bad. Only 15 VMs can serve the giant index, and what's worse, the index keeps growing.
- Any maintenance of the giant shards takes hours or even days. Re-indexing this 1TB index took us ~17 hours; force-merging it took ~25 hours.
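To make the arithmetic behind the "only 15 VMs can help" point concrete, a quick back-of-the-envelope in Python using the numbers from our index:

```python
# Back-of-the-envelope math for the numbers above.
primaries = 5          # shard count chosen at index creation
replicas = 2           # copies of each primary shard
total_shards = primaries * (1 + replicas)  # primaries plus all replicas
index_bytes = 10**12                       # the index grew to 1TB
shard_bytes = index_bytes // primaries     # size of each primary shard

print(total_shards)          # 15 -> at most 15 VMs can carry this index
print(shard_bytes // 10**9)  # 200 (GB) -> far bigger than a 28GB VM's RAM
```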
As a general best practice, it's better to keep ES shards no bigger than 30GB.
Note: Both giant shards and giant indices should be avoided. Your application might have strong requirements that make giant indices unavoidable; here we only talk about giant shards.
But how many shards should you choose at the very beginning? Well, it depends on how much data your index will hold eventually and what types of queries your applications might perform.
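Since the shard count is fixed at creation time, whatever number you choose has to go into the index-creation request itself. A hedged sketch of what that looks like; the index name `logs` and the counts are illustrative only:

```python
# Hedged sketch: number_of_shards must be set when the index is created.
import json
import urllib.request

ES_URL = "http://localhost:9200"

def create_index_body(shards: int, replicas: int) -> dict:
    """Settings for a new index; only the replica count can change later."""
    return {
        "settings": {
            "index": {
                "number_of_shards": shards,      # fixed for the index's lifetime
                "number_of_replicas": replicas,  # adjustable at any time
            }
        }
    }

def create_index(name: str, shards: int, replicas: int) -> None:
    req = urllib.request.Request(
        f"{ES_URL}/{name}",
        data=json.dumps(create_index_body(shards, replicas)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

# Against a live cluster:
#   create_index("logs", shards=50, replicas=2)
```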
We definitely wish we had started with 50 shards instead of five. This would have saved us lots of time and effort — not to mention the pressure!
Even though I know little about your situation, the suggestions below will probably help:
- Keep watching shard sizes. Send an alert when any shard grows bigger than 60GB.
- Check the shard count for existing and new indices. Raise an alert if some indices have too few shards.
In our own practice, we have noticed that some indices mysteriously had only one shard. This is definitely not good; there is probably a bug somewhere.
I'm sure you can do better than me. You might never run into this unexpected issue, but double-checking won't hurt. You can't guarantee it won't happen in your environment, especially if you have teammates. It's better to automate the check.
Here's check_elasticsearch_shard.py:
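The original script isn't reproduced here, so below is a minimal sketch of what such a check might look like, assuming the standard `/_cat/shards?format=json&bytes=b` endpoint; the thresholds mirror the two suggestions above and are easy to tune:

```python
# A minimal sketch of a shard-health check (not the original script),
# assuming the /_cat/shards?format=json&bytes=b endpoint.
import json
import urllib.request

ES_URL = "http://localhost:9200"
MAX_SHARD_BYTES = 60 * 1024**3  # alert above 60GB, per the suggestion above
MIN_SHARD_COUNT = 5             # alert when an index has too few primaries

def find_problems(shards):
    """Return alert strings for oversized shards and under-sharded indices."""
    alerts = []
    primaries = {}
    for s in shards:
        size = int(s.get("store") or 0)  # "store" is the shard size in bytes
        if size > MAX_SHARD_BYTES:
            alerts.append(f"{s['index']} shard {s['shard']}: {size} bytes")
        if s["prirep"] == "p":  # count primary shards per index
            primaries[s["index"]] = primaries.get(s["index"], 0) + 1
    for index, count in primaries.items():
        if count < MIN_SHARD_COUNT:
            alerts.append(f"{index}: only {count} primary shard(s)")
    return alerts

def fetch_shards():
    with urllib.request.urlopen(f"{ES_URL}/_cat/shards?format=json&bytes=b") as r:
        return json.load(r)

# Against a live cluster:
#   for alert in find_problems(fetch_shards()):
#       print("ALERT:", alert)
```

Wire the output into whatever alerting channel your team already watches; a check that nobody sees is no check at all.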
I hope you can find more peace with your ES shards after reading this blog post. Please share it if you find it useful!