I have a database (recovery model = full) whose transaction log is full. I know that I can work around the issue by shrinking the transaction log.

But I would like to know the reason why it became full in the first place.

From what I have read, a large volume of inserts & deletes can increase the transaction log size. But I cannot replicate the issue to reproduce the error of a full transaction log (the log somehow manages to reuse its space).

The questions are: How do I simulate the "transaction log full" issue? And how do I check the physical transaction log?

I use the query below to trace the currently occupied log space, but it is probably not showing the actual transaction log, as the occupied space it reports fluctuates instead of increasing steadily during bulk inserts & deletes.

My assumption is that the log space should only ever grow as long as no backup is executed.

-- DBCC SQLPERF(LOGSPACE) returns one row per database:
-- database name, log size (MB), log space used (%), status.
DECLARE @TMPTBL AS TABLE (
    DatabaseName nvarchar(500),
    LogSize decimal(18,2),
    LogSpaceUsed decimal(18,2),
    [Status] int
);

INSERT INTO @TMPTBL
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT * FROM @TMPTBL WHERE DatabaseName LIKE '%MYDBNAME%'
GO
  • You can use this post to find long-running queries which may be increasing the log file: https://stackoverflow.com/questions/941763/list-the-queries-running-on-sql-server – DatabaseCoder Mar 25 '19 at 09:33
  • You can solve it for now by taking a log backup of your database. Note - shrinking the database is not the correct way to resolve this issue. – DatabaseCoder Mar 25 '19 at 09:35
  • What is the reason I need to find the long-running queries? Are they the cause of the transaction log filling up? I have tried a fairly long deletion & insertion (4 tables, each of them containing a trigger that produces 1 insert); it takes 4 minutes to complete, but the transaction log is fine afterward – SKLTFZ Mar 25 '19 at 09:36
  • I know the solution to fix the transaction-log-full issue (manually); I am just curious why it eventually became full :), and it would be great if you could provide a script for me to replicate the issue (make the transaction log become full) – SKLTFZ Mar 25 '19 at 09:38
  • I am just curious whether it has to do with transaction or deadlock issues – SKLTFZ Mar 25 '19 at 09:39
  • Under the FULL recovery model, the log file is *never* emptied until a *log backup* is taken. This is inherent to the FULL recovery model. So the reason that your log file is full is because you are not taking log backups. – TT. Mar 25 '19 at 11:15
  • If that is the case, what is the correct command to query the accumulating transaction log space? So far the SQL query I used to find the transaction log space does not reflect the behavior of "accumulating occupied space". Just a note: I am pretty sure no one is using the db, thus no explicit backup or checkpoint is run – SKLTFZ Mar 27 '19 at 02:55
  • In fact I think you are correct; the command I applied so far may show the occupied space of the virtual log only, instead of the physical one. Thus every time the virtual log goes over a threshold, it grows automatically, and the reported space is "trembling". The issue is that I still need a method to find the actual storage (I expect its occupied space to increase in proportion to the number of transactions I run, as long as no explicit backup operation is done). – SKLTFZ Mar 27 '19 at 02:58

1 Answer


There are multiple points to your question, so I will try to hit them individually:

Getting the size of the log: I prefer the following, as it includes max-size info. I am not saying never use SQLPERF; I am just adding another tool to the toolbox. Also realize there is a Disk Usage report in SSMS.

-- File sizes are stored in 8 KB pages; dividing by 128 converts to MB.
SELECT name, size/128 AS FileSizeInMB
    , size/128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128 AS EmptySpaceInMB
    , IIF(max_size = -1, NULL, max_size/128) AS MaxSizeInMB -- -1 means unlimited growth
    , IIF(max_size > 0, CAST(CAST((CAST(FILEPROPERTY(name, 'SpaceUsed') AS decimal) / CAST(max_size AS decimal) * 100) AS decimal(18,2)) AS varchar), '-') AS PctFilled
    , file_id, type_desc, physical_name
FROM sys.database_files;
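
If you only need totals for the current database rather than per-file detail, here is a minimal sketch using the sys.dm_db_log_space_usage DMV (assuming SQL Server 2012 or later):

-- Totals for the current database's transaction log;
-- dividing bytes by 1048576.0 converts to MB.
SELECT total_log_size_in_bytes / 1048576.0 AS LogSizeMB,
    used_log_space_in_bytes / 1048576.0 AS LogSpaceUsedMB,
    used_log_space_in_percent AS PctUsed
FROM sys.dm_db_log_space_usage;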

Checking what is in the transaction log: you should explore fn_dblog. Here is a link to a brief introduction: https://logicalread.com/sql-server-dbcc-log-command-tl01/#.YAnC39hKiUk The queries below should get you started. fn_dblog is an undocumented function, so use caution in production.

-- Dump every active log record for the current database.
SELECT *
FROM fn_dblog(NULL, NULL);

-- Aggregate log volume by operation type to see what fills the log.
SELECT COUNT(1) AS OperationCount,
    SUM(CAST([Log Record Length] AS decimal)) AS SumRecLen,
    [Operation]
FROM fn_dblog(NULL, NULL)
GROUP BY [Operation]
ORDER BY 2 DESC;

Possible causes: checking the contents of the log will help you see where the load is coming from. Look at the operations with a high Log Record Length and dig on in. If you see the operation LOP_SHRINK_NOOP near the top of your list, you probably need to turn off Auto Shrink; see the sketch below.
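
Two quick checks here, as a sketch (MYDBNAME is a placeholder; substitute your database name). sys.databases exposes log_reuse_wait_desc, which reports why the log cannot currently be truncated (LOG_BACKUP means a log backup is overdue, matching the comment from TT. above):

-- Why can't the log be truncated right now?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MYDBNAME';

-- Auto Shrink repeatedly shrinks and regrows the files, which itself
-- generates log activity; turn it off.
ALTER DATABASE MYDBNAME SET AUTO_SHRINK OFF;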

How to simulate? I am guessing you mean triggering the 9002 error. I am not sure what this accomplishes, but here is an approach (a runnable sketch follows the list):

  1. Turn off auto-growth on the log.
  2. Set the max size of your log to something small and just larger than the current size. You may need to backup/shrink your log file if a large amount of empty space is already allocated.
  3. Perform data manipulation until you hit the max size. Consider a loop or a manual transaction that moves a large amount of data. Example: make a new physical table, insert 1000 integers into it, then repeatedly update all values by 1.
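
Putting those steps together, here is a minimal sketch, assuming a disposable test database named MyTestDb whose log file has the logical name MyTestDb_log (both names are placeholders; do not run this against a database you care about):

-- Note: a database under FULL recovery behaves like SIMPLE until its
-- first full backup, so take one first; backing up to the NUL device
-- discards the backup (test use only).
BACKUP DATABASE MyTestDb TO DISK = 'NUL';
GO

-- Steps 1 & 2: cap the log at a small fixed size and disable
-- auto-growth (FILEGROWTH = 0 turns auto-growth off).
ALTER DATABASE MyTestDb
MODIFY FILE (NAME = MyTestDb_log, MAXSIZE = 64MB, FILEGROWTH = 0);
GO

USE MyTestDb;
GO

-- Step 3: create a small table and churn it until the log fills.
CREATE TABLE dbo.LogFiller (Id int PRIMARY KEY, Val int NOT NULL);

INSERT INTO dbo.LogFiller (Id, Val)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 0
FROM sys.all_objects;

-- Every update is fully logged; under the FULL recovery model the
-- records accumulate until a log backup runs, so this loop eventually
-- raises error 9002 (the transaction log is full).
WHILE 1 = 1
    UPDATE dbo.LogFiller SET Val = Val + 1;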