I am running an SQL query against a 600k-row table and want to pick out the rows that match a certain criterion (WHERE clause). Since I know that at most ~500 rows can match the same criterion, I want to avoid scanning the whole table for performance reasons. Also, the desired rows are most likely among the "most recent" rows (those with the highest counter value).
SELECT
timestamp AS "time",
TimeAxis,
WeightAxis
FROM ArrayLog
WHERE UNIX_TIMESTAMP(coffeeTimestamp) = $usedTimestamp
ORDER BY counter LIMIT 500
does not improve the speed of the query, I guess because the number of matching rows is almost always < 500? The query takes around 20 seconds on my Pi through Grafana.
Is there a way to do this neatly or is this done automatically by the query optimizer already and it is already "as good as it gets"?
CodePudding user response:
It sounds like you don't really care about the size of the result set, because you expect it to be reasonably small. Rather, your concern is the performance of the query. In that case, make the query sargable and add an index on the coffeeTimestamp column:
CREATE INDEX cIdx ON ArrayLog (coffeeTimestamp);
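If you also want the rows back in counter order (as in the original ORDER BY counter LIMIT 500), one possible refinement is a composite index — a sketch, assuming counter is the numeric column from the question:

```sql
-- Hypothetical composite index: lets MySQL filter on coffeeTimestamp
-- and read the matching rows in counter order without a separate sort.
CREATE INDEX cIdx2 ON ArrayLog (coffeeTimestamp, counter);
```

With WHERE coffeeTimestamp = ? pinning the first column to a constant, MySQL can satisfy ORDER BY counter directly from the index.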
Then, use this query:
SELECT
timestamp AS "time",
TimeAxis,
WeightAxis
FROM ArrayLog
WHERE coffeeTimestamp = FROM_UNIXTIME(?);
This assumes that the ? parameter (perhaps contained in the $usedTimestamp variable) is a timestamp in seconds since the epoch.
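Putting it together with the ordering and limit from your original query, a sketch (assuming the Grafana variable $usedTimestamp holds epoch seconds, and that "most recent" means highest counter, hence DESC):

```sql
SELECT
  timestamp AS "time",
  TimeAxis,
  WeightAxis
FROM ArrayLog
WHERE coffeeTimestamp = FROM_UNIXTIME($usedTimestamp)
ORDER BY counter DESC  -- most recent (highest counter) rows first
LIMIT 500;
```

You can check that the index is actually used by prefixing the statement with EXPLAIN: the "key" column should show the new index rather than NULL, and "type" should no longer be ALL (a full table scan).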