Extremely high SSD write rate with multiple concurrent writers


I'm using QuestDB as the backend for storing collected data, using the same script for several different data sources. My problem is the extremely high disk (SSD) usage: over the last 4 days it has averaged 335 MB of writes per second.

What am I doing wrong?

I am inserting data using the ILP interface:

from questdb.ingress import Sender  # QuestDB ILP client for Python

with Sender('localhost', 9009) as sender:  # host/port are placeholders
    sender.row(
        metric,            # target table name
        symbols=symbols,   # symbol (tag) columns
        columns=data,      # remaining columns
        at=row['ts']       # designated timestamp for the row
    )

CodePudding user response:

I don't know how much data you are ingesting, so I am not sure whether 335 MB per second is a lot or not. But since you are surprised by it, I will assume your actual throughput is lower than that. It might be that your data is arriving out of order, especially if you are ingesting from multiple data sources.

QuestDB always keeps each table's data sorted by the designated timestamp. If data arrives out of order, the whole affected partition needs to be rewritten. This can lead to write amplification, where the same data is rewritten over and over.
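As a rough illustration of that failure mode (the table name, columns, and connection details below are my own placeholders, not from the question): two clients with slightly skewed clocks sharing one table can make nearly every commit land out of order.

# Hypothetical sketch: two sources writing to one table with skewed clocks.
from datetime import datetime, timedelta, timezone
from questdb.ingress import Sender

skew = timedelta(seconds=2)  # assume client A's clock runs 2 s behind

with Sender('localhost', 9009) as sender:  # host/port are placeholders
    now = datetime.now(timezone.utc)
    # Client B's row arrives first with the later timestamp...
    sender.row('metrics', symbols={'client': 'B'},
               columns={'value': 1.0}, at=now)
    # ...then client A's row arrives stamped *earlier*, so QuestDB must
    # perform an out-of-order write and rewrite part of the partition.
    sender.row('metrics', symbols={'client': 'A'},
               columns={'value': 2.0}, at=now - skew)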

Until literally a few days ago you would need to change the default configuration to fine-tune this, but as of version 6.6.1 it is adjusted dynamically.

You might want to give version 6.6.1 a try. Alternatively, if data from different sources arrives out of order relative to each other, you could create a separate table per source, so data is always in order within each table, as in the sketch below.
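A minimal sketch of that per-source layout (the table-naming scheme and the client_id field are assumptions for illustration):

# Sketch: one table per source, so each table receives monotonically
# increasing timestamps and no partition rewrites are needed.
for row in rows:                          # 'rows' assumed iterable of dicts
    sender.row(
        f"{metric}_{row['client_id']}",   # hypothetical per-client table name
        symbols=symbols,                  # the per-client symbol can be dropped
        columns=data,
        at=row['ts']
    )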

CodePudding user response:

I have been experimenting a lot, and it seems that you're absolutely right. I was ingesting 14 different clients into a single table. After splitting this into 14 tables, one per client, the problem disappeared. Another advantage is that I need one symbol fewer, as I no longer have to distinguish rows by client.

By the way - thank you and your team for this marvellous tool you gave us! It makes my work so much easier!!

Regards
