I have a PostgreSQL database with a table containing millions of records. The table has columns auto_id (backed by a sequence), id, schedule, date, and status.
auto_id is the primary key, with a sequence attached to it.
In my JPA repository I have a method with the following query:
@Query("delete from Rate r where r.id IN :rateIds")
void deleteByIds(@Param("rateIds") List<Integer> rateIds);
This query deletes by the id column, and the list holds around 100 ids. However, for each id there are thousands of rows in the table, so when I execute this code it takes a very long time to respond.
There is an index on this table combining the id, schedule, date, and status columns.
Can anyone give me a solution to speed up this deletion? Is it OK to add an index on the id column alone, and will that be faster?
Thank you.
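For concreteness, the single-column index I am asking about would look like this (the index name is just my placeholder; CONCURRENTLY is optional but avoids blocking writes while the index builds on a large table):

```sql
-- Plain version (blocks writes on the table while building):
CREATE INDEX idx_rate_id ON rate (id);

-- Or, to avoid blocking writes on a big table:
CREATE INDEX CONCURRENTLY idx_rate_id ON rate (id);
```

As I understand it, since the existing composite index already has id as its leading column, PostgreSQL's B-tree can usually serve id-only lookups from it as well.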
CodePudding user response:
Several solutions can be applied! Proposing the following to the best of my knowledge:

1. An index would certainly help, but indexes are most useful when data needs to be retrieved rather than deleted.
2. Try creating a stored procedure, since procedures are pre-compiled and faster in execution than native queries.
3. Give this delete operation an asynchronous touch if possible, i.e. accept the delete request, send the acknowledgement as soon as you receive the call, and let the deletion run in the background (your list may grow or shrink, and you cannot always rely on DB performance).
4. Make use of JdbcTemplate instead of JPA, as it is faster (this article has some related discussion: Spring Data JPA Is Too Slow).
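As a sketch of the JdbcTemplate/batching idea in point 4 (class and method names here are illustrative, not from the original code): splitting the id list into chunks and issuing one parameterized IN delete per chunk keeps each statement small. The helpers below only build the SQL string and the chunks; wiring them to Spring would be a call like `jdbcTemplate.update(sql, chunk.toArray())`.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDeleteSketch {

    // Build "DELETE FROM <table> WHERE <column> IN (?, ?, ...)" with one
    // placeholder per id, so values are bound as parameters, not concatenated.
    static String buildDeleteSql(String table, String column, int count) {
        StringBuilder sb = new StringBuilder("DELETE FROM " + table + " WHERE " + column + " IN (");
        for (int i = 0; i < count; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(')').toString();
    }

    // Split the id list into chunks of at most chunkSize elements.
    static List<List<Integer>> chunks(List<Integer> ids, int chunkSize) {
        List<List<Integer>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += chunkSize) {
            out.add(ids.subList(i, Math.min(i + chunkSize, ids.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3, 4, 5);
        for (List<Integer> chunk : chunks(ids, 2)) {
            String sql = buildDeleteSql("rate", "id", chunk.size());
            // With Spring this would be: jdbcTemplate.update(sql, chunk.toArray());
            System.out.println(sql + "  <- " + chunk);
        }
    }
}
```

Each chunked statement stays short enough for the planner to handle cheaply, and running the chunks in separate transactions also keeps locks and WAL volume per transaction bounded.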
CodePudding user response:
I have created a procedure like this,
CREATE OR REPLACE FUNCTION schema."fn_delete"(ids text) RETURNS boolean AS $BODY$
DECLARE
    query VARCHAR(8000);
BEGIN
    query := 'delete from table where id in (' || ids || ')';
    EXECUTE query;
    RETURN TRUE;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
In the repository I added this:
@Query(value = "SELECT schema.fn_delete(:ids);", nativeQuery = true)
boolean deleteFunc(@Param("ids") String ids);
and called it.
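For anyone reusing this: concatenating the ids string into the statement means dynamic SQL and an injection risk if the string ever comes from outside. A sketch of the same function without string concatenation (the table name `rate` is my assumption here, standing in for the `table` placeholder above) casts the comma-separated ids to an int array and uses = ANY:

```sql
CREATE OR REPLACE FUNCTION schema."fn_delete"(ids text) RETURNS boolean AS $BODY$
BEGIN
    -- string_to_array splits 'ids' on commas and ::int[] casts each element,
    -- so the values are never interpolated into the SQL text itself.
    DELETE FROM rate WHERE id = ANY (string_to_array(ids, ',')::int[]);
    RETURN TRUE;
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
```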
Also, I have added this property:
spring.datasource.tomcat.remove-abandoned-timeout=120
Thank you @Harsh for your answer.