How to disable transaction logs in the SIMPLE recovery model?


I need to transfer a large amount of data between two tables in different databases, for example:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[MoveThenTruncateMyData]
AS
BEGIN
    -- Copy every row to the table in the destination database...
    INSERT INTO MyDB2_TEST.dbo.MyData (Column1, Column2, Column3, Column4)
        SELECT Column1, Column2, Column3, Column4
        FROM MyDB1_TEST.dbo.MyData;

    -- ...then empty the source table.
    TRUNCATE TABLE MyDB1_TEST.dbo.MyData;
END

I set MyDB2_TEST to the SIMPLE recovery model.

However, once the rows start to move, the log file size starts to increase.

Since the database data file is 80 GB, it looks like the log will grow to roughly three times that size while the transaction is running.

The problem is that I don't have enough space on that machine: the disk will fill up.

Is there any way to disable transaction logging entirely? If the operation fails, I can simply re-run it.

CodePudding user response:

You cannot disable transaction logging in any recovery model. The reason is that SQL Server must be able to roll back any transaction in full, so the log has to contain everything required to perform that rollback. If a single insert jams 1 billion rows into a table, the log for that database has to keep all of that information around until the transaction commits. You can offset this with minimally logged operations or by splitting the work into batches, as the other answers describe.
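If you want to see this happening while the procedure runs, log usage can be watched with a built-in report; this is just an observation aid, not a fix:

-- Report the size and percent-used of every database's transaction log.
DBCC SQLPERF (LOGSPACE);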

CodePudding user response:

When you want to perform a bulk operation, the usual recommendation is to set the recovery model to BULK_LOGGED, not SIMPLE. Under the BULK_LOGGED recovery model, eligible bulk operations are minimally logged.
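A minimal sketch of that switch, flipping the destination database over only for the load. Note that INSERT ... SELECT qualifies for minimal logging only under certain conditions, such as a TABLOCK hint on the destination:

-- Switch the destination to BULK_LOGGED for the duration of the load.
ALTER DATABASE MyDB2_TEST SET RECOVERY BULK_LOGGED;

-- The TABLOCK hint is one of the prerequisites for minimal logging.
INSERT INTO MyDB2_TEST.dbo.MyData WITH (TABLOCK) (Column1, Column2, Column3, Column4)
SELECT Column1, Column2, Column3, Column4
FROM MyDB1_TEST.dbo.MyData;

-- Switch back afterwards.
ALTER DATABASE MyDB2_TEST SET RECOVERY SIMPLE;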

Another point is to divide your insert into several batches, because log space used by an open transaction cannot be reclaimed until that transaction commits; a sketch follows.
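A sketch of that batching, assuming a hypothetical integer key column Id on MyData (the question doesn't show the table's keys, so adjust to your schema):

-- Copy in ranges of 100,000 rows; each INSERT commits on its own,
-- so the log can be reused between batches.
DECLARE @batch int = 100000, @maxId int, @fromId int = 0;

SELECT @maxId = MAX(Id) FROM MyDB1_TEST.dbo.MyData;

WHILE @fromId < @maxId
BEGIN
    INSERT INTO MyDB2_TEST.dbo.MyData (Column1, Column2, Column3, Column4)
    SELECT Column1, Column2, Column3, Column4
    FROM MyDB1_TEST.dbo.MyData
    WHERE Id > @fromId AND Id <= @fromId + @batch;  -- one committed transaction per range

    SET @fromId += @batch;
END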

CodePudding user response:

Instead of moving the data the way you're currently trying (i.e. all-or-nothing), I present an alternative:

-- Move one batch: delete rows from the source and route the deleted
-- rows into the destination table via the OUTPUT clause.
DELETE TOP (1000)
FROM MyDB1_TEST.dbo.MyData
OUTPUT deleted.Column1, deleted.Column2, deleted.Column3, deleted.Column4
INTO MyDB2_TEST.dbo.MyData (Column1, Column2, Column3, Column4);

I chose 1000 rows arbitrarily; pick a number that works for you. Either way, this lets you move a small number of rows at a time in a transactionally consistent manner. You will have to make sure that your application is aware of the state of things during the transition (maybe a view that spans both tables?) but this lets you keep the transaction log impact to one batch's worth.
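For completeness, a sketch of wrapping that statement in a loop until the source is empty (the CHECKPOINT is optional and only affects the database you're connected to):

DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM MyDB1_TEST.dbo.MyData
    OUTPUT deleted.Column1, deleted.Column2, deleted.Column3, deleted.Column4
    INTO MyDB2_TEST.dbo.MyData (Column1, Column2, Column3, Column4);

    -- @@ROWCOUNT is 0 once the source table has been drained.
    SET @rows = @@ROWCOUNT;

    -- Optional: under SIMPLE recovery, a checkpoint lets committed
    -- log space in the current database be reused.
    CHECKPOINT;
END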
