
I have a SQL Server 2008 table with approximately 1.5 million records, and I want to add one field and edit another. When I save the changes, it errors out with:

'Member' table
- Unable to modify table.  
The transaction log for database 'storeboard' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

Even if I empty the transaction log, I still get the same error. How can I avoid this problem and make the necessary changes to my table?

Pondlife
neojakey
  • You haven't explained exactly what you're doing, but I suspect that you're using SSMS to add a column in a specific position in the middle of the table. If so, that causes SSMS to [create a full copy of your table](http://stackoverflow.com/questions/769828/add-a-new-table-column-to-specific-ordinal-position), which may give the transaction log error. Is this the scenario you have? – Pondlife Dec 18 '12 at 17:55
  • Yes, this is what I am trying to do... – neojakey Dec 18 '12 at 18:04
  • You may want to review some existing questions, such as [this one](http://stackoverflow.com/questions/3968954/add-column-on-sql-server-on-specific-place); there is usually no point in adding a column in a specific position. And the [documentation](http://msdn.microsoft.com/en-us/library/ms175495(v=sql.100).aspx) describes possible actions to take if the transaction log fills up. – Pondlife Dec 18 '12 at 18:13

3 Answers


Try this one:

USE master ;
ALTER DATABASE YourDBName SET RECOVERY SIMPLE ;
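
If the log file has already grown very large, you can follow that with a one-off shrink of the log file. A minimal sketch, assuming the log file's logical name is storeboard_log (check sys.database_files for the real name):

```sql
USE storeboard;

-- Find the logical name of the log file first; storeboard_log below is an assumption.
SELECT name, type_desc, size FROM sys.database_files;

-- One-off shrink of the log file down to roughly 1 GB.
DBCC SHRINKFILE (storeboard_log, 1024);
```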
Nikita Nesterov
  • Thanks for your reply; unfortunately this didn't work either, and I still get the same error. Do I have to clear the log before trying to save the table again? If so, how? – neojakey Dec 18 '12 at 18:05
  • You can also try to shrink your DB, extend the log file, or free some extra space on the hard drive. I once ran into this situation when my hard drive was completely full. – Nikita Nesterov Dec 18 '12 at 18:16
  • Hm... I would avoid following the suggestion to just switch to the simple recovery model without an understanding of your scenario. – Interface Unknown Jun 21 '15 at 13:16

Like "Nikita" suggested, you could just change the recovery model to simple, so that changes are not tracked in the transaction log... However, if this is a production database, it is likely that you have log shipping in play, or ???, which requires the transaction log to be used to replicate changes to another server... If that's the case, then you need to understand what's causing the transaction log to fill up, and how to remedy it without disabling the trans log.

It sounds like you are using SSMS to do the changes, and the nature of the changes requires SSMS to:

  1. Delete all foreign keys, indexes, etc., from the changed table
  2. Copy all the data from the table to a #temp table
  3. Create a new table with the proposed changes
  4. Copy all the data from the #temp table to the newly created table
  5. Drop the old table
  6. Rename the new table to the old table's name
  7. Recreate all the indexes, foreign keys, etc.

The script SSMS generates inserts BEGIN TRANSACTION and COMMIT TRANSACTION throughout, so that if something fails your table and data will hopefully not be ruined. But since it also inserts GO between the batches, it is possible for an error to cause problems, especially since one of the last things it does is drop the original table and then rename the newly created table to the original table name.
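
To make those steps concrete, here is a heavily simplified schematic of that kind of script. The real script SSMS generates is much longer (it also handles the keys and indexes from steps 1 and 7), and the Member columns below (MemberID, JoinDate, Email) are placeholders, not your actual schema:

```sql
BEGIN TRANSACTION;
SELECT MemberID, Email INTO #tmp_Member FROM dbo.Member;        -- step 2: stage the data
COMMIT;
GO

BEGIN TRANSACTION;
CREATE TABLE dbo.Tmp_Member (                                   -- step 3: new table with the changes
    MemberID int          NOT NULL,
    JoinDate datetime     NULL,        -- the newly added column, in its desired position
    Email    varchar(100) NULL
);
COMMIT;
GO

BEGIN TRANSACTION;
INSERT INTO dbo.Tmp_Member (MemberID, Email)                    -- step 4: copy all the data back
SELECT MemberID, Email FROM #tmp_Member;
COMMIT;
GO

BEGIN TRANSACTION;
DROP TABLE dbo.Member;                                          -- step 5: drop the old table
EXECUTE sp_rename N'dbo.Tmp_Member', N'Member', 'OBJECT';       -- step 6: take over the old name
COMMIT;
GO
```

The single INSERT in step 4 moves all 1.5 million rows in one transaction, and that is typically the statement that fills the log.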

It sounds like the transaction log fills up before the script finishes running. If you can't switch your production database to the SIMPLE recovery model (e.g., because it would break log shipping), then you need to run the script in a way that won't fill up the transaction log.

I recommend:

  1. Build and test the script first on a dev / test server that doesn't have 1.5 million rows, to ensure that the script does exactly what you want it to do.
  2. Modify the script and add COMMIT in key places so that the space in the transaction log can be reclaimed as the script continues to run.
  3. Copy 100,000 records at a time from the #temp table back to the newly created table, with a COMMIT between each block (see the sketch after this list).
  4. Test the modified script again on a dev / test server before running it on production.
  5. Back up your production database before you run the script.
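
For item 3, a minimal sketch of the batched copy, reusing the placeholder names from the schematic above (the batch size, key column, and table names are assumptions to adapt):

```sql
-- Copy in blocks of ~100,000 rows with a commit per block.
DECLARE @batch int = 100000,
        @rows  int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO dbo.Tmp_Member (MemberID, Email)
    SELECT TOP (@batch) t.MemberID, t.Email
    FROM #tmp_Member AS t
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Tmp_Member AS d
                      WHERE d.MemberID = t.MemberID)    -- skip rows already copied
    ORDER BY t.MemberID;

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
    -- Under SIMPLE recovery the committed log space becomes reusable at the next checkpoint;
    -- under FULL recovery you also need log backups between batches.
    CHECKPOINT;
END
```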
James L.

Even if you use the SIMPLE recovery model for your database, SQL Server keeps writing everything to the transaction log.

Suppose you are adding a datetime column, which requires 8 bytes of storage per row. With a table of 1.5 million records, SQL Server will need roughly 8 bytes × 1,500,000 rows ≈ 12 MB (plus overhead) of log space to complete the transaction. All of those bits are written to the log, and there is no way to stop them.
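
If you want to see this for yourself, you can compare log usage before and after a change. A minimal sketch using a standard DBCC command:

```sql
-- Run before and after the change and compare the "Log Space Used (%)" column.
DBCC SQLPERF (LOGSPACE);
```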

If your scenario (and/or policy) doesn't require point-in-time recovery, then the SIMPLE recovery model is likely for you. Just enable auto-growth, set a fair limit for the maximum log size, and toggle auto-shrink on.
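
A minimal sketch of those settings, assuming the log file's logical name is storeboard_log (check sys.database_files for the real name):

```sql
ALTER DATABASE storeboard SET RECOVERY SIMPLE;

-- Auto-growth and an upper limit for the log file (values are examples).
ALTER DATABASE storeboard
    MODIFY FILE (NAME = storeboard_log, FILEGROWTH = 512MB, MAXSIZE = 8192MB);

-- As suggested above; note that many DBAs prefer to leave auto-shrink OFF.
ALTER DATABASE storeboard SET AUTO_SHRINK ON;
```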

Otherwise (i.e. a production DB, a highly business-critical app, etc.), you might want to work out a proper maintenance plan based on the FULL recovery model with log backups. There are many whitepapers and articles on that.
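
Under the FULL model, the key piece is taking transaction log backups on a schedule, since that is what allows log space to be truncated and reused. A minimal sketch (the backup path is a placeholder):

```sql
BACKUP LOG storeboard
TO DISK = N'D:\SQLBackups\storeboard_log.trn';
```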

Interface Unknown