
We have a lot of SQL Server databases in the simple recovery model. We need to move these databases to a different datacenter. These databases cannot be taken offline (they are serving 24x7 mission-critical services). We are thinking of using log shipping to create a remote standby database and then switching the remote copy to be the primary.

How can we estimate the amount of transaction log written daily by each of these databases? We need this figure to estimate the network bandwidth and the size of the transaction log backups.

Regards, Giovanni

Giova

2 Answers


You don't have to estimate: set the database to full recovery and start taking log backups. You'll have to do this anyway to do log shipping, and you really should be doing it anyway if the db is that important to you. Once you're taking log backups, you'll know how big they are over any given period of time, so you can estimate bandwidth based on that.
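
Something like the following sketch shows the idea (YourDb and the backup paths are placeholders; the log backup would normally be scheduled through SQL Server Agent):

```sql
-- Sketch only: YourDb and the backup paths are placeholders.
ALTER DATABASE YourDb SET RECOVERY FULL;

-- The log chain starts only after a full backup is taken post-switch.
BACKUP DATABASE YourDb TO DISK = N'X:\Backups\YourDb_full.bak';

-- Take log backups on a schedule (e.g. every 15 minutes via SQL Server Agent).
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_log.trn';

-- After a few days, sum the log backup sizes per day from msdb.
SELECT CAST(bs.backup_finish_date AS date) AS backup_day,
       SUM(bs.backup_size) / 1048576.0    AS total_log_backup_mb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = N'YourDb'
  AND  bs.type = 'L'                       -- 'L' = transaction log backup
GROUP  BY CAST(bs.backup_finish_date AS date)
ORDER  BY backup_day;
```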

Ben Thul
  • Hi, thank you, but I can't do it! This is a good empirical method, but in this case I have to know the total daily size of the transaction log backups. I have to be sure that the backup system is properly sized for this work! I must estimate! – Giova Jan 14 '15 at 11:08
  • If you can't switch the recovery model to full, you can't use log shipping. – Ben Thul Jan 14 '15 at 11:10
  • @Ben we can use log shipping in the bulk-logged recovery model as well. – Shanky Jan 14 '15 at 11:47
  • @Ben, I know that the full recovery model is a prerequisite. We are talking about terabytes of data modifications per day in this case. I have to make an estimate in advance, before switching the recovery model from simple to full and then configuring log shipping. – Giova Jan 14 '15 at 11:59
  • @Shanky: I know I can use the bulk-logged recovery model, but in any case I have to estimate the size of my future backups. In the Oracle case I simply count the redo log switches. – Giova Jan 14 '15 at 12:00
  • It sounds like you already have your answer: "terabytes of data modifications per day". What more are you looking for? I don't know the ins and outs of how Oracle manages its log, but simple recovery is completely different from the other recovery models. Simply changing the recovery model away from simple is likely to change the size of the log file itself, since the engine then needs to keep transactions until a log backup happens. In short, I don't know of a good way to estimate how large transaction log backups will be for a database that is in simple recovery. If it's important to you, switch. – Ben Thul Jan 14 '15 at 14:45

I wrote a stored procedure that does the following:

  • read num_of_bytes_written from sys.dm_io_virtual_file_stats
  • randomly insert and delete rows in some tables
  • read num_of_bytes_written from sys.dm_io_virtual_file_stats again
  • calculate the difference in megabytes between the two values of num_of_bytes_written
  • take a compressed transaction log backup
  • compare the log backup size to the number of bytes written

After 1,000 executions I'll calculate the ratio between the log bytes written and the compressed backup size.
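
A minimal sketch of the measurement part, assuming it runs in the context of the database under test and that only the log file(s) are summed (the random insert/delete workload is elided):

```sql
-- Sketch only: run in the context of the database under test.
DECLARE @before bigint, @after bigint;

-- Snapshot cumulative bytes written to the log file(s) (type 1 = LOG).
SELECT @before = SUM(vfs.num_of_bytes_written)
FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN   sys.database_files AS df ON df.file_id = vfs.file_id
WHERE  df.type = 1;

-- ... random insert/delete workload goes here ...

SELECT @after = SUM(vfs.num_of_bytes_written)
FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
JOIN   sys.database_files AS df ON df.file_id = vfs.file_id
WHERE  df.type = 1;

-- Difference in megabytes, to compare against the compressed log backup size.
SELECT (@after - @before) / 1048576.0 AS log_mb_written;
```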

Regards, Giova

Giova