Adding a sample node to my test Elasticsearch server


I've downloaded ES and Kibana to my laptop, and have started ES.

I'm now trying to set up a node with indices, based on some sample data.

I'm getting a stream of messages like:

[2022-12-20T10:13:14,388][WARN ][o.e.c.r.a.DiskThresholdMonitor] [LAPTOP-xxx] high disk watermark [90%] exceeded on [xxx][LAPTOP-xxx][C:\Technical\Elasticsearch\elasticsearch-8.5.3\data] free: 35gb[7.3%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete

I also see:

C:\Users\peter>curl -k -X GET "http://localhost:9200/_cat/allocation?v=true"
shards disk.indices disk.used disk.avail disk.total disk.percent host      ip        node
     0           0b   440.8gb     34.9gb    475.8gb           92 127.0.0.1 127.0.0.1 LAPTOP

As I understand it, ES is trying to use the entire contents of my C: drive to create a node/index. The output that I posted above shows that there's clearly insufficient disk space for this.

I've worked out how to import a sample data file.

How can I create a sample node/index based on my sample data rather than on my entire C: drive? This seems to be blocking my progress.

Ok, so I tried to index a small JSON file (1kb) as follows:

curl -k -H "Content-Type: application/json" -X POST --data-binary @C:\path1\ES\SampleData.txt http://localhost:9200/test123/_doc

I get:

{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"[test123][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[test123][0]] containing [index {[test123][J_R2KoUBkVUKNPWLoLS_],

Surely this is because there is no shard available:

Ok, so I tried to index a small JSON file (1kb) as follows:

curl -k -H "Content-type:application/json" -X POST --data-binary @C:\path1\ES\SampleData.txt http://localhost:9200/test123/_doc

I get:

{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"[test123][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[test123][0]] containing [index {[test123][J_R2KoUBkVUKNPWLoLS_],

Surely this is because there is no shard available:

curl -k -X GET "http://localhost:9200/_cat/count?v=true"

{"error":{"root_cause":[{"type":"no_shard_available_action_exception","reason":null,"index_uuid":"RqBQkT1mTs6_hfe1loVYpw","shard":"0","index":"test123"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"test123","node":null,"reason":{"type":"no_shard_available_action_exception","reason":null,"index_uuid":"RqBQkT1mTs6_hfe1loVYpw","shard":"0","index":"test123"}}]},"status":503}
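To see exactly why the primary shard for `test123` stayed unassigned, the cluster allocation explain API reports each allocation decider's verdict. A minimal sketch (the index name matches the one above; with disk usage over the watermark, the disk threshold decider would be expected to show up with a NO decision):

```shell
# Ask Elasticsearch why the primary shard of test123 is unassigned.
curl -k -H "Content-Type: application/json" \
  -X GET "http://localhost:9200/_cluster/allocation/explain?pretty" \
  -d '{
    "index": "test123",
    "shard": 0,
    "primary": true
  }'
```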

I eventually got it working by freeing up a great deal of space on my C: drive.

CodePudding user response:

It is not creating an index from your entire C: drive. This warning appears because you don't have sufficient disk space available on the system where you are running Elasticsearch: your system's disk utilization is above 90% of the total disk.

You can see that only 34.9gb of the 475.8gb total is available, which is less than 10% of the total.

You can check the disk.indices value, which indicates how much space Elasticsearch is using to store index data (here, 0b).

If you are only going to index a few MB of data, well under the 34.9gb remaining, you can ignore these warnings.

I would suggest freeing up some disk space on your system by removing unnecessary files, then trying again.
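Alternatively, on a development laptop you can relax the disk watermarks through the cluster settings API instead of freeing space. A sketch; the percentage values below are illustrative examples, not recommendations:

```shell
# Raise the disk watermarks so a nearly full dev disk still allows
# shard allocation. "persistent" settings survive a cluster restart.
curl -k -H "Content-Type: application/json" \
  -X PUT "http://localhost:9200/_cluster/settings" \
  -d '{
    "persistent": {
      "cluster.routing.allocation.disk.watermark.low": "95%",
      "cluster.routing.allocation.disk.watermark.high": "97%",
      "cluster.routing.allocation.disk.watermark.flood_stage": "98%"
    }
  }'
```

Bear in mind that the watermarks exist to protect the node from running out of disk entirely, so this is only sensible for throwaway local testing.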
