how to handle big relational data


I have a MySQL database. One of the tables has a JSON column named event. Each event value is an array of objects of the following shape:

{
 cam:{x:number,y:number,z:number},
 scn: {x:number,y:number,z:number},
 time: Date
}

It contains analytical data; one record may contain up to 20,000 elements inside the array, and for one product I might get up to 12,000 records. When I run a simple query like select * from ABC where product_id=3, it takes 2-3 minutes to fetch the data, and a GROUP BY or any other aggregate query takes much longer. My question is: how can I handle data this large? How do big companies manage it?

CodePudding user response:

If storing files is an option and you really want to use a relational DB, then store the JSON as a .json file and read it with a file stream when you need it. The path to the file can be stored in a column. Otherwise, consider something like MongoDB or SQLite3.
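A minimal sketch of that idea, using SQLite (which Python ships with) as a stand-in for MySQL. The table and column names (records, event_path) are illustrative, not from the question:

```python
import json
import sqlite3
from pathlib import Path

# Keep only a small path column in the database; the heavy JSON array
# lives on disk and is read back only when actually needed.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, product_id INTEGER, event_path TEXT)"
)

events_dir = Path("events")
events_dir.mkdir(exist_ok=True)

def save_record(record_id, product_id, events):
    # Write the large payload to a .json file next to the DB.
    path = events_dir / f"{record_id}.json"
    path.write_text(json.dumps(events))
    db.execute(
        "INSERT INTO records (id, product_id, event_path) VALUES (?, ?, ?)",
        (record_id, product_id, str(path)),
    )

def load_events(record_id):
    # Look up the path, then load the file lazily.
    (path,) = db.execute(
        "SELECT event_path FROM records WHERE id = ?", (record_id,)
    ).fetchone()
    return json.loads(Path(path).read_text())

save_record(1, 3, [
    {"cam": {"x": 0, "y": 1, "z": 2},
     "scn": {"x": 3, "y": 4, "z": 5},
     "time": "2022-05-18T00:00:00Z"},
])
```

With this layout, select * from ABC where product_id=3 only returns small rows of paths; the multi-megabyte arrays are loaded per record, on demand.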

CodePudding user response:

As someone has already pointed out in the comments, a relational database isn't best suited for such data (assuming your 'event' schema might change). However, MySQL does provide a workaround: generated columns, which can be indexed.

You can read up on generated columns in the MySQL documentation.
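For example, a generated column could expose the first timestamp of the array so queries can filter on it via an index. This is only a sketch: the table name ABC and the column event come from the question, while the path '$[0].time' and the column name first_time are assumptions.

```sql
-- MySQL: materialize a scalar out of the JSON array and index it.
ALTER TABLE ABC
  ADD COLUMN first_time DATETIME
    GENERATED ALWAYS AS (
      CAST(JSON_UNQUOTE(JSON_EXTRACT(event, '$[0].time')) AS DATETIME)
    ) STORED,
  ADD INDEX idx_first_time (product_id, first_time);
```

Queries that filter on product_id and first_time can then use the index instead of parsing every JSON blob.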

Apart from this, since your data does have a structure to it, you could store the event data in a separate table instead. Storing something as JSON doesn't mean the data has no structure. You need to understand your data to decide what is truly the best way to store and process it.
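A hypothetical normalized layout for the shape shown in the question: one row per array element rather than one giant JSON blob per record. The table and column names (abc_event, record_id) are illustrative:

```sql
-- One row per event element; record_id points at the existing ABC row.
CREATE TABLE abc_event (
  id        BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  record_id BIGINT UNSIGNED NOT NULL,
  cam_x DOUBLE, cam_y DOUBLE, cam_z DOUBLE,
  scn_x DOUBLE, scn_y DOUBLE, scn_z DOUBLE,
  time  DATETIME NOT NULL,
  KEY idx_record_time (record_id, time)
);
```

With fixed columns, grouping and filtering become ordinary indexed SQL operations, and you fetch only the columns and rows you need instead of deserializing 20,000-element arrays.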
