
How can I run a migration to change the type of a field in Mongoid/MongoDB without losing any data?

In my case I'm trying to convert from a BigDecimal (stored as string) to an Integer to store some money. I need to convert the string decimal representation to cents for the integer. I don't want to lose the existing data.
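
To be concrete about the conversion itself, this is roughly what I mean by "convert to cents" (a minimal sketch, assuming the stored values are plain decimal strings like "12.34"; to_cents is just a hypothetical helper name):

require "bigdecimal"

# "12.34" -> 1234 (integer cents)
def to_cents(decimal_string)
  (BigDecimal(decimal_string) * 100).to_i
end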

I'm assuming the steps might be something like:

  1. create a new Integer field with a new name, say amount2 (see the model sketch after this list)
  2. deploy to production and run a migration (or rake task) that converts each amount to the right value for amount2
  3. (this whole time the existing code is still using amount and there is no downtime from the users' perspective)
  4. take the site down for maintenance, run the migration one more time to capture any amount fields that could have changed in the last few minutes
  5. delete amount and rename amount2 to amount
  6. deploy new code which expects amount to be an integer
  7. bring site back up
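
Roughly, I picture the model looking like this during steps 1-3 (a sketch only, using the field names from the plan above):

class Transaction
  include Mongoid::Document

  field :amount,  type: BigDecimal   # existing field, still used by the live code
  field :amount2, type: Integer      # new field that will hold the amount in cents
end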

It looks like Mongoid offers a rename method: http://mongoid.org/docs/persistence/atomic.html#rename

But I'm a little confused about how this is used. If you have a field named amount2 (and you've already deleted amount), do you just run Transaction.rename :amount2, :amount? Then I imagine this immediately breaks the underlying representation, so you have to restart your app server after that? What happens if you run it while amount still exists? Does it get overwritten, fail, or try to convert on its own?

Thanks!

Brian Armstrong
  • Does sound logical, but I'm not sure whether you need a restart or not. For one thing, you would need to update the field name in the model, right? As for overwriting, I'm not sure, but the upsert part of this link http://whyjava.wordpress.com/2012/02/07/how-to-rename-field-in-all-the-mongodb-documents/ does seem to suggest it will try to write, and optionally overwrite the field if it exists. So it's not like a regular SQL-based rename, but rather moving each document's value individually. Makes sense? – Bashar Abdullah Feb 22 '12 at 06:22
  • Hmm...not sure, would still like to get a working example from start to finish. I'll post one if mine goes successfully. – Brian Armstrong Feb 23 '12 at 05:03

1 Answer


OK, I made it through. I think there is a faster way using the mongo console, with something like this: MongoDB: How to change the type of a field?

But I couldn't get the conversion working, so I opted for this slower method in the Rails console, with more downtime. If anyone has a faster solution, please post it.

  • create a new Integer field with a new name, say amount2
  • convert each amount to the right value for amount2 in a console or rake task

# Turn off the identity map so the cursor doesn't cache every document in memory
Mongoid.identity_map_enabled = false

Transaction.all.each_with_index do |t, i|
  puts i if i % 1000 == 0         # progress indicator
  t.amount2 = t.amount.to_money   # convert the old decimal amount (to_money is from the money gem)
  break if !t.save                # stop on the first failed save
end

Note that .all.each works fine (you don't need to use .find_each or .find_in_batches like you would with regular ActiveRecord and MySQL) because of MongoDB cursors. It won't fill up memory as long as the identity map is off.

  • take the site down for maintenance, run the migration one more time to capture any amount fields that could have changed in the last few minutes (something like Transaction.where(:updated_at.gt => 1.hour.ago).each_with_index...)
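
That catch-up pass would be the same loop as above, just scoped to recently updated documents; a sketch (the one-hour window is only an example):

Mongoid.identity_map_enabled = false

Transaction.where(:updated_at.gt => 1.hour.ago).each_with_index do |t, i|
  puts i if i % 1000 == 0
  t.amount2 = t.amount.to_money
  break if !t.save
end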

  • comment out field :amount, type: BigDecimal in your model (you don't want Mongoid to know about this field anymore) and push this code
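
As a sketch, the model at this intermediate point would look something like this:

class Transaction
  include Mongoid::Document

  # field :amount, type: BigDecimal   # commented out so Mongoid no longer knows about it
  field :amount2, type: Integer       # temporary field holding the amount in cents
end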

  • now run another script to rename your field (it overwrites any old BigDecimal string values in the process). You might need to comment out any validations you have on the model that expect the old field.

Mongoid.identity_map_enabled = false

Transaction.all.each_with_index do |t, i|
  puts i if i % 1000 == 0       # progress indicator
  t.rename :amount2, :amount    # atomic rename ($rename), no save required
end

This is atomic and doesn't require a save on the model.

  • update your model to reflect the new field type: field :amount, type: Integer
  • deploy and bring the site back up
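
For completeness, a sketch of the final model:

class Transaction
  include Mongoid::Document

  field :amount, type: Integer   # amount in cents, renamed back from amount2
end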

As mentioned I think there is a better way, so if anyone has some tips please share. Thanks!

Brian Armstrong
  • Just a question: any chance your cursor will time out for large datasets? – Bashar Abdullah Feb 24 '12 at 00:09
  • Yes, I think Mongo's default cursor timeout is 10 minutes, so it can happen for sure. If you can write it idempotently (check if amount2 is nil) and run it multiple times, this might work. Definitely seems harder than it should be though. – Brian Armstrong Aug 16 '12 at 06:08
  • Update: I probably should have used this: https://github.com/nviennot/mongoid_lazy_migration (looks easier than what I did). – Brian Armstrong Aug 16 '12 at 06:38
  • Ohh... I mostly followed your previous suggestion, working around the timeout. Thanks for sharing the gem – Bashar Abdullah Aug 16 '12 at 14:39
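
For reference, a sketch of the idempotent variant discussed in the comments above: skip documents that already have amount2 set, so the script can simply be re-run if the cursor times out. The nil check is an assumption about how already-migrated documents are detected.

Mongoid.identity_map_enabled = false

Transaction.where(:amount2 => nil).each_with_index do |t, i|
  puts i if i % 1000 == 0
  t.amount2 = t.amount.to_money
  break if !t.save
end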