I pull data from an API. Each pull results in an array of structs containing 100 elements. The code I use to insert them into the DB is:
func (ACUpdater) InsertToTable(rawData *models.Contacts, table *gorm.DB) {
    for i := 0; i < len(rawData.Results); i++ {
        temp := models.ACQ_report{
            ID:           "", // to be created later by hashing the data fields below
            RecordID:     rawData.Results[i].ID,
            FirstName:    rawData.Results[i].Properties.Firstname,
            LastName:     rawData.Results[i].Properties.Lastname,
            Email:        rawData.Results[i].Properties.Email,
            PhoneNumber:  rawData.Results[i].Properties.Phone,
            ContactOwner: rawData.Results[i].Properties.HubspotOwnerID,
            CompanyName:  rawData.Results[i].Properties.Company,
        }
        temp.ID = hashContactRecord(&temp)
        table.Create(&temp)
        fmt.Println(&temp)
        fmt.Println(i)
    }
}
I use the hash of the data fields as the primary key for the table, so that if any of those data fields changes, the hash also changes. That way, I can append the changed record to the existing table without worrying about duplicate primary keys.
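For reference, hashContactRecord computes that field hash. Its implementation isn't shown above, but a minimal sketch (assuming SHA-256 over the concatenated string fields, with the struct fields as in the code above) could look like:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// ACQ_report mirrors the asker's models.ACQ_report (string fields assumed).
type ACQ_report struct {
	ID           string
	RecordID     string
	FirstName    string
	LastName     string
	Email        string
	PhoneNumber  string
	ContactOwner string
	CompanyName  string
}

// hashContactRecord derives the primary key from the data fields, so any
// field change produces a different ID.
func hashContactRecord(r *ACQ_report) string {
	h := sha256.New()
	for _, f := range []string{
		r.RecordID, r.FirstName, r.LastName, r.Email,
		r.PhoneNumber, r.ContactOwner, r.CompanyName,
	} {
		h.Write([]byte(f))
		h.Write([]byte{0}) // field separator so ("ab","c") != ("a","bc")
	}
	return hex.EncodeToString(h.Sum(nil))
}
```

The zero-byte separator keeps two records with different field boundaries from hashing to the same ID.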
The issue is that the above function totally "gives up" inserting after one duplicate-primary-key error from GORM. If the first record to be inserted into the database is a duplicate, then tx.Create(&temp)
still runs for the remaining records, but it does not insert the changed ones. It is as if tx.Create()
gives up after the first duplicate-primary-key error.
How do I fix this behavior?
CodePudding user response:
The first issue here, which may be covering up the actual root cause, is that you're not checking to see if the INSERT
is returning an error. You would do that with:
result := tx.Create(&temp)
if result.Error != nil {
// handle it somehow
}
If you do this and inspect the error, you might see a couple of things:
Option 1: ERROR: duplicate key value violates unique constraint
You will likely see this even if you also see the error below. In this case, your INSERT is executing but failing. If you do not see any other errors and this prints only once, then you probably passed in a gorm.DB handle that has been chained off a DB session and will fail at the first error.
For example, as mentioned in the comments, if you passed the result of db.Table("my_table")
to this method, you would be in the situation described above. To fix it, pass in either just db
or a fresh session via db.Session(&gorm.Session{NewDB: true})
, and update your method to specify the Table (or Model, to be more GORM-like):
result := db.Table("my_table_name").Create(&temp)
if result.Error != nil {
// ...
}
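Putting that together, a corrected version of the method might look like the sketch below. The models.ACQ_report fields and hashContactRecord helper are from the question; the table name "acq_reports" is an assumption, and the error handling is one reasonable choice, not the only one:

```go
// InsertToTable inserts each contact, checking the result of every
// Create call instead of discarding it. It expects a plain *gorm.DB
// (not one already chained with Table or other clauses).
func (ACUpdater) InsertToTable(rawData *models.Contacts, db *gorm.DB) {
	for i := range rawData.Results {
		temp := models.ACQ_report{
			RecordID:  rawData.Results[i].ID,
			FirstName: rawData.Results[i].Properties.Firstname,
			// ... remaining fields as in the original method ...
		}
		temp.ID = hashContactRecord(&temp)
		// Chaining Table off db inside the loop gives each Create a
		// fresh statement, so one failure cannot poison the rest.
		if result := db.Table("acq_reports").Create(&temp); result.Error != nil {
			fmt.Printf("row %d not inserted: %v\n", i, result.Error)
			continue // e.g. skip duplicates and keep going
		}
	}
}
```

Logging and continuing is appropriate here because a duplicate key simply means the record is unchanged since the last pull.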
Option 2: ERROR: current transaction is aborted, commands ignored until end of transaction block
If you see this, it means your method is running its inserts inside a transaction. This turned out not to be the case for you, but since this is a general forum, I'll leave this and the below explanation here: in Postgres, if any statement fails inside a transaction, you cannot execute any further statements except a ROLLBACK.
To resolve this, you have a few options:
Do more data validation before attempting the insert, to the point where you can reliably expect every insert to succeed. You could also use Gorm's batch insert functionality to optimize the insert with this approach.
Do not use a transaction. If you're okay with skipped rows, and not worried about duplicates, this is a reasonable option.
Use SAVEPOINTs. In Postgres, a SAVEPOINT is like a checkpoint in a transaction that you can roll back to instead of rolling back the entire transaction, which is pretty much what you want:
tx.SavePoint("sp1") // SAVEPOINT sp1;
result := tx.Create(&temp)
if result.Error != nil {
tx.RollbackTo("sp1") // ROLLBACK TO sp1;
}
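Applied to the insert loop, that savepoint pattern could look like the following sketch (again assuming the table name "acq_reports"; the per-row savepoint names are made unique so each row can be rolled back independently):

```go
tx := db.Begin()
for i := range rawData.Results {
	temp := models.ACQ_report{
		RecordID: rawData.Results[i].ID,
		// ... remaining fields as in the original method ...
	}
	temp.ID = hashContactRecord(&temp)

	sp := fmt.Sprintf("sp%d", i)
	tx.SavePoint(sp) // SAVEPOINT spN;
	if result := tx.Table("acq_reports").Create(&temp); result.Error != nil {
		// Roll back only this row; the transaction stays usable
		// and the next iteration's insert can still run.
		tx.RollbackTo(sp) // ROLLBACK TO spN;
	}
}
tx.Commit()
```

This keeps the all-in-one-transaction behavior while letting individual duplicate rows fail without aborting everything that follows.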