I'm building a NestJS application that uses TypeORM to communicate with PostgreSQL.
My tables are created dynamically, and data is also inserted dynamically. That's why I use raw queries instead of entities.
The problem is that some of the data in the tables is related, and I can't insert new data until the previous insert query has finished.
How do I check whether query execution has finished? Here is an example of the workflow I use. It works with small data sets but fails with big ones (10 000 000 entries and more):
import { InternalServerErrorException } from '@nestjs/common';
import { Connection } from 'typeorm';

export class Test {
  constructor(
    private readonly connection: Connection,
  ) {}

  public async insertData(table1: string, table2: string, arr1: any[], arr2: any[]) {
    await this.insertInto(table1, arr1);
    // I want the second insertInto() to be executed only after I get
    // confirmation from the database that the insertInto() above is finished
    await this.insertInto(table2, arr2);
  }

  private async insertInto(table: string, data: any[]) {
    const queryRunner = this.connection.createQueryRunner();
    await queryRunner.connect();
    await queryRunner.startTransaction();
    const preparedData: string[] = [];
    // prepare data to be inserted as a raw query
    // ...
    try {
      await queryRunner.query(`INSERT INTO "${table}" VALUES ${preparedData}`);
      await queryRunner.commitTransaction();
    } catch (e) {
      await queryRunner.rollbackTransaction();
      throw new InternalServerErrorException(e, 'Error while executing custom query. Rollback transaction.');
    } finally {
      await queryRunner.release();
    }
  }
}
The desired result is to have some callback for queryRunner.query, like this: queryRunner.query('raw_sql', (err, res) => {})
Is that possible with TypeORM?
Thanks
CodePudding user response:
The way your code is written, the transaction commit only happens after the insert finishes, which means that at that point you can safely execute your next query. You don't need a callback: queryRunner.query returns a promise, so await already suspends until the database has responded.
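If you still want the callback shape from your question, a try/catch around an await (or a .then/.catch chain) is the promise equivalent. A minimal sketch, assuming you already have a connected QueryRunner:

import { QueryRunner } from 'typeorm';

// queryRunner.query returns a promise, so await/try-catch and
// .then/.catch both give you the "callback" the question asks for
async function runExample(queryRunner: QueryRunner) {
  try {
    const res = await queryRunner.query('SELECT 1');
    // reached only after Postgres has confirmed the statement
    console.log(res);
  } catch (err) {
    // plays the role of the callback's err argument
    console.error(err);
  }
}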
However, it seems that with very large inserts something goes wrong (some kind of query/connection timeout, or the server running out of resources). Try printing the caught error to see what really happened; the batching sketch below logs it before rethrowing for exactly that reason.
I suggest splitting the insert into multiple batches (of something like 1k records each), as in the sketch below.
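A minimal sketch of that batching idea, assuming preparedData is an array of row value lists already rendered as SQL strings; BATCH_SIZE and insertIntoBatched are illustrative names, not part of the TypeORM API:

import { InternalServerErrorException } from '@nestjs/common';
import { Connection } from 'typeorm';

const BATCH_SIZE = 1000;

async function insertIntoBatched(connection: Connection, table: string, preparedData: string[]) {
  const queryRunner = connection.createQueryRunner();
  await queryRunner.connect();
  await queryRunner.startTransaction();
  try {
    // insert in chunks of BATCH_SIZE rows; each statement stays small
    // enough for the server, while the surrounding transaction keeps
    // the whole load atomic
    for (let i = 0; i < preparedData.length; i += BATCH_SIZE) {
      const batch = preparedData.slice(i, i + BATCH_SIZE);
      await queryRunner.query(`INSERT INTO "${table}" VALUES ${batch.join(', ')}`);
    }
    await queryRunner.commitTransaction();
  } catch (e) {
    console.error(e); // see what actually failed before wrapping the error
    await queryRunner.rollbackTransaction();
    throw new InternalServerErrorException(e, 'Error while executing custom query. Rollback transaction.');
  } finally {
    await queryRunner.release();
  }
}

If you don't need the whole load to be atomic, you can also commit per batch instead, which keeps each transaction (and its memory footprint on the server) small.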