Virtual Machine SQL Log growing when adding fields


I realize everyone wants to just look the other way on this question, so I appreciate it if you continue to read. Of course the log grows when adding a field to a large table; let me explain to the best of my ability. We have a database upgrade utility that we deploy to our customers. In that utility we manipulate the database with changes that are specific to our version. Our testing department is seeing varying results between local physical machines and VMs, and even between one VM and another: some VMs show little log growth while others grow by 30 GB. The database recovery model is set to SIMPLE, so "technically" the transaction log shouldn't keep growing; my understanding is that the log only holds changes until a checkpoint writes them out to the data files. I know there is not much to be done on the SQL side, so we are stuck with some sort of shrink to handle the growth after the upgrade is complete.
I am curious why physical and virtual machines would act differently, and what to look for in a VM environment to tell whether this is going to be problematic. Do I look at something on the disk, MSINFO32, the CPU? I have made sure there is no compression on the VM. I have also run Profiler traces and looked at fn_dblog to watch the indexes on the specific table grow. I just can't figure out why some logs grow exponentially while others hardly grow at all. Also, if you know of any shrink-style commands that run at the db_owner permission level, I would appreciate it. Currently we are testing a CHECKPOINT, since DBCC SHRINKDATABASE will not be available to us due to permission level.

--The table usually has between 10 and 20 million records. It has 10 strictly typed fields.
IF col_length('[dbo].[foo]','Field1') IS NULL
BEGIN
    ALTER TABLE [dbo].[foo] ADD [Field1] smalldatetime NOT NULL CONSTRAINT [df_Field1] DEFAULT (GETDATE())
END
--This causes log growth that can get a bit out of control on VMs. It does not affect physical machines as much.

--Try 2, hoping GETDATE() was the issue
IF col_length('[dbo].[foo]','Field1') IS NULL
BEGIN
    ALTER TABLE [dbo].[foo] ADD [Field1] smalldatetime NULL 
END

DECLARE @TheDate smalldatetime = GETDATE()
UPDATE [dbo].[foo] SET [Field1] = @TheDate

--This was even more problematic for the log and took considerably longer
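
In hindsight that makes sense: the single UPDATE touches every one of the 10-20 million rows in one transaction, and even under SIMPLE recovery the log must retain the whole transaction until it commits. A batched version of that UPDATE, with a CHECKPOINT between batches, would keep any one transaction small so log space can be reused as it goes. A rough sketch against the same table (the 50,000 batch size is only a starting guess to tune):

DECLARE @TheDate smalldatetime = GETDATE()

WHILE 1 = 1
BEGIN
    -- Update a bounded slice of the rows that are still NULL
    UPDATE TOP (50000) [dbo].[foo]
    SET [Field1] = @TheDate
    WHERE [Field1] IS NULL

    IF @@ROWCOUNT = 0 BREAK

    -- Under SIMPLE recovery, committed log space can now be reused
    CHECKPOINT
END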

CodePudding user response:

Doing a CHECKPOINT after each column change would be my first try for the SIMPLE recovery model, honestly. SQL Server doesn't give us much control over how the transaction log behaves beyond the recovery model, backups, autogrowth settings, and checkpoints.
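
For example, something like this between schema changes, reusing the [dbo].[foo] example from the question (the added column here is just a placeholder for whatever your next change is):

ALTER TABLE [dbo].[foo] ADD [Field2] int NULL  -- placeholder for the next column change

-- Flush dirty pages; under SIMPLE recovery this lets committed log space be reused
CHECKPOINT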

Check how many VLFs exist and their sizes with sys.dm_db_log_info or DBCC LOGINFO.
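
For example (sys.dm_db_log_info needs SQL Server 2016 SP2 or later; fall back to DBCC LOGINFO on older builds):

-- VLF count and total log size for the current database
SELECT COUNT(*) AS vlf_count, SUM(vlf_size_mb) AS log_size_mb
FROM sys.dm_db_log_info(DB_ID())

A log that has autogrown in many small increments ends up with thousands of tiny VLFs, which itself slows log operations.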

Check the recovery model - maybe it's not SIMPLE, or not always SIMPLE, on some of the machines?
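
That is quick to verify on each machine:

-- Recovery model actually in effect
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME()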

Check indexes and index fragmentation; that difference might matter, and it will change after each column is added. Make sure the table has a clustered index rather than being a heap.
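
Something like this shows both, assuming the [dbo].[foo] table from the question (index_type_desc will read HEAP if there is no clustered index):

-- Fragmentation and size per index on the table
SELECT i.name AS index_name, ips.index_type_desc,
       ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.foo'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id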

Check total table size.
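
For example:

-- Reserved, data, and index sizes for the table in the question
EXEC sp_spaceused N'dbo.foo'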
