Is it okay to run Hibernate applications configured with hbm2ddl.auto=update
to update the database schema in a production environment?
-
We do it. Never had any problems. – cretzel Oct 21 '08 at 10:54
-
Very good question. I am facing it now. It's now 2018, 10 years later: what is your opinion? Is it safe to use Hibernate's update on an important client's production databases with complex schemas? – Kirill Ch Apr 26 '18 at 16:02
15 Answers
No, it's unsafe.
Despite the best efforts of the Hibernate team, you simply cannot rely on automatic updates in production. Write your own patches, review them with your DBA, test them, then apply them manually.
Theoretically, if hbm2ddl update worked in development, it should work in production too. But in reality, that's not always the case.
Even if it works OK, it may be sub-optimal. DBAs are paid that much for a reason.
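If you still want Hibernate's help, a safer pattern is to let it generate the patch and hand that to the DBA. A minimal sketch (assuming the classic Hibernate 3.x tooling API; the class name and config file name are just examples) that prints the would-be update DDL for review instead of applying it:

```java
import org.hibernate.cfg.Configuration;
import org.hibernate.tool.hbm2ddl.SchemaUpdate;

public class DumpUpdateDdl {
    public static void main(String[] args) {
        // Load the same mappings/dialect the application uses
        Configuration cfg = new Configuration().configure("hibernate.cfg.xml");

        // script = true  -> print the generated statements
        // doUpdate = false -> do NOT touch the database
        new SchemaUpdate(cfg).execute(true, false);
    }
}
```

Run it against a copy of the production schema, review the output with the DBA, then apply it manually.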

-
It's unsafe because the applied patches may have side effects which hbm2ddl can hardly predict (such as disabling triggers that were installed for a table being modified). For complex schemas the safest way is manual. Automatic with post-regression testing is a distant second. All IMHO. – Vladimir Dyuzhev Oct 21 '08 at 14:06
-
Also, updating a DB schema should be handled by the professionals (DBAs). Recovering from a bad DB change is difficult at best. Vova didn't mention it, but what happens if Hibernate's update decides to drop a column and re-add it because the type or size changed? And let's say the column is all your users' email addresses? :-) Bye, bye company... You want the DDL change generated automatically, but you absolutely want the change inspected by a human. – Pat May 18 '10 at 18:19
-
Restoring enterprise-size data is a huge pain. Now imagine you find an issue Hibernate's update introduced a week later. It's better to spend an extra hour letting the DBAs make the change carefully than to waste a couple of shifts (if not days) trying to bring the backup back AND keep the later updates... – Vladimir Dyuzhev Sep 03 '10 at 18:31
-
FWIW, at the moment Hibernate's schema update doesn't drop tables or columns. – Brian Deterling Nov 29 '10 at 19:04
-
And let's not forget: no one likes to be the `rm -rf /` guy :) ... or, in this case, the guy responsible for the DB havoc that could occur as a result of auto=update. – Scoobie Feb 22 '11 at 21:20
-
Okay, this is a noob question, but if I don't use hbm2ddl.auto=update, how do I create my tables? Do I need to apply all my changes manually every time I change something in my app? For someone working alone on a medium-sized site, that's quite a lot of work. – nilsi Mar 13 '13 at 20:12
-
I would say: if you only use update to create brand-new tables or add new columns, YES; otherwise NO, because it will not really update anything once tables are already filled with data. – user447586 May 10 '13 at 14:38
-
In my opinion, there seems to be a gap between update and validate: validate mode performs more restrictive checks that update cannot detect. My suggestion is to never use any option other than validate in a production environment. – Rugal Dec 17 '13 at 05:57
-
Rugal, I think validate doesn't look after referential integrity... so why use it in production if your backend is not safe? – Neerav Shah Feb 22 '14 at 07:27
-
@nilsi This question is about making changes to schemas in production servers. Switching this on in dev is fine, but you need to keep track of the changes being made so that production can be patched. There are tools for versioning schemas: http://schemasync.org/ (according to google) – Philip Couling Dec 02 '14 at 16:17
-
Update does not remove objects: it doesn't drop tables, triggers, or anything else. What it can do to cause problems is add a NOT NULL column to a table with data (failing the update) or new constraints that are not already satisfied (also failing the update). – dtortola Nov 20 '15 at 12:40
-
Triggers are disabled by some DB engines when changes are made to DDL. – Vladimir Dyuzhev Nov 22 '15 at 05:04
-
[Example where a mistaken constant declaration in an entity creates an unnecessary column](http://stackoverflow.com/questions/34467121/hibernate-jpa-storing-constants-in-entity-class) – gavenkoa Dec 25 '15 at 22:59
-
As others have mentioned, you can keep better track of your DB changes with Flyway or Liquibase, and a DBA can review them. – rvazquezglez Jul 07 '17 at 18:31
-
What about using a migration system so you don't have to apply them manually? – Pablo Fernandez Aug 26 '17 at 21:08
We do it in production albeit with an application that's not mission critical and with no highly paid DBAs on staff. It's just one less manual process that's subject to human error - the application can detect the difference and do the right thing, plus you've presumably tested it in various development and test environments.
One caveat - in a clustered environment you may want to avoid it because multiple apps can come up at the same time and try to modify the schema which could be bad. Or put in some mechanism where only one instance is allowed to update the schema.
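One possible guard (a sketch only; the SCHEMA_LOCK table and class name are inventions, not a Hibernate feature) is to serialize startup schema updates through a database row lock:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/**
 * Hypothetical startup guard: only the instance holding the lock row runs
 * the schema update; the others block here until it commits.
 * Assumes a one-row table SCHEMA_LOCK(ID INT PRIMARY KEY) created beforehand.
 */
public class SchemaUpdateGuard {
    public static void runExclusively(String url, String user, String pw,
                                      Runnable schemaUpdate) throws Exception {
        try (Connection con = DriverManager.getConnection(url, user, pw)) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // Row lock held until commit; serializes concurrent starters
                st.execute("SELECT ID FROM SCHEMA_LOCK WHERE ID = 1 FOR UPDATE");
                schemaUpdate.run(); // e.g. boot Hibernate with hbm2ddl.auto=update
            }
            con.commit(); // releases the lock for the next instance
        }
    }
}
```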

-
We use it in production also, similar use case: an analytics platform that's not mission critical. We have deployed 16K times over 4 environments (4 years) without so much as a hiccup from Hibernate. We are a small team, mostly SQL RDBMS beginners, and have more faith in Hibernate handling the schema than we do in ourselves. I wonder what the error rate is for having a DBA on staff managing migrations and schema changes; is it better than 0 in ~16K deploys? – abarraford Feb 27 '19 at 14:31
-
What would you say about this comment by Pat? https://stackoverflow.com/questions/221379/hibernate-hbm2ddl-auto-update-in-production#comment2903941_221422 – Shiva kumar Nov 23 '19 at 05:11
The creators of Hibernate discourage doing so in a production environment in their book "Java Persistence with Hibernate":
WARNING: We've seen Hibernate users trying to use SchemaUpdate to update the schema of a production database automatically. This can quickly end in disaster and won't be allowed by your DBA.

Hibernate has to put the disclaimer about not using auto updates in prod to cover themselves when people who don't know what they are doing use it in situations where it should not be used.
Granted the situations where it should not be used greatly outnumber the ones where it's OK.
I have used it for years on lots of different projects and have never had a single issue. That's not a lame answer, and it's not cowboy coding. It's a historical fact.
A person who says "never do it in production" is thinking of a specific set of production deployments, namely the ones he is familiar with (his company, his industry, etc).
The universe of "production deployments" is vast and varied.
An experienced Hibernate developer knows exactly what DDL is going to result from a given mapping configuration. As long as you test and validate that what you expect ends up in the DDL (in dev, qa, staging, etc), you are fine.
When you are adding lots of features, auto schema updates can be a real time saver.
The list of stuff auto updates won't handle is endless, but some examples are data migration, adding non-nullable columns, column name changes, etc, etc.
Also you need to take care in clustered environments.
But then again, if you knew all this stuff, you wouldn't be asking this question. Hmm . . . OK, if you are asking this question, you should wait until you have lots of experience with Hibernate and auto schema updates before you think about using it in prod.

Check out LiquiBase XML for keeping a changelog of updates. I had never used it until this year, but I found that it's very easy to learn and makes DB revision control/migration/change management very foolproof. I work on a Groovy/Grails project, and Grails uses Hibernate underneath for all its ORM (called "GORM"). We use Liquibase to manage all SQL schema changes, which we do fairly often as our app evolves with new features.
Basically, you keep an XML file of changesets that you continue to add to as your application evolves. This file is kept in git (or whatever you are using) with the rest of your project. When your app is deployed, Liquibase checks its changelog table in the DB you are connecting to so it knows what has already been applied, then it intelligently applies whatever changesets from the file have not been applied yet. It works absolutely great in practice, and if you use it for all your schema changes, then you can be 100% confident that the code you check out and deploy will always be able to connect to a fully compatible database schema.
The awesome thing is that I can take a totally blank slate mysql database on my laptop, fire up the app, and right away the schema is set up for me. It also makes it easy to test schema changes by applying these to a local-dev or staging db first.
The easiest way to get started with it would probably be to take your existing DB and then use Liquibase to generate an initial baseline.xml file. Then in the future you can just append to it and let liquibase take over managing schema changes.
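To give a feel for the changelog format, here is a minimal hypothetical changeset (table name, id, and author are made up):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

    <!-- Each changeSet runs exactly once, tracked in the DATABASECHANGELOG table -->
    <changeSet id="add-customer-table" author="jdoe">
        <createTable tableName="customer">
            <column name="id" type="bigint" autoIncrement="true">
                <constraints primaryKey="true" nullable="false"/>
            </column>
            <column name="email" type="varchar(255)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```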

-
Perfect, just what I was about to shift to. I feel the best thing, going one step further, would be to add `hbm2ddl.auto=validate` so that your class/DB mappings are validated and you have complete control of DB creation through Liquibase. What do you think? – gabbar0x Mar 07 '17 at 20:42
-
Liquibase is better at managing scripts, with include/import-style support, versioning support, and a "type" attribute for files that lets you have different SQL files for different environments in a parent-child relationship. In a nutshell: go with traditional SQL management in production. For development we need speed; for production we need guarantees, stability, and backups. – Karan Kaw Mar 12 '17 at 04:49
-
@jpswain Thanks. Mind looking into this SO? https://stackoverflow.com/questions/75826561/spring-boot-liquibase-versioned-database-migration-dealing-with-different-envi – pixel Mar 23 '23 at 18:20
It's not a good idea to use hbm2ddl.auto in production.
The only way to manage the database schema is to use incremental migration scripts because:
- the scripts will reside in VCS along with your codebase. When you check out a branch, you recreate the whole schema from scratch.
- the incremental scripts can be tested on a QA server before being applied in production
- there is no need for manual intervention since the scripts can be run by Flyway, hence it reduces the possibility of human error associated with running scripts manually.
Even the Hibernate User Guide advises you to avoid using the hbm2ddl tool for production environments.
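For instance, a hypothetical incremental migration (MySQL-flavored SQL; table and column names are made up) that Flyway would pick up from its migration folder:

```sql
-- V2__add_customer_email.sql (hypothetical)
-- Flyway applies pending versions in order and records each one in its
-- schema history table, so this runs exactly once per database.
ALTER TABLE customer ADD COLUMN email VARCHAR(255);
UPDATE customer SET email = '' WHERE email IS NULL;
-- Only after backfilling can the column be made NOT NULL
ALTER TABLE customer MODIFY COLUMN email VARCHAR(255) NOT NULL;
```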

-
This is a perfect answer and I agree with it. But I really find the idea of creating the first database script manually cumbersome (i.e. V1_0__initial_script.sql in the linked example). Is there a way I can create a script from the existing development DB that Hibernate created for me and store it in V1_0__initial_script.sql? – HopeKing Jan 26 '19 at 13:21
-
Use `SchemaExport` as demonstrated by this [test case](https://github.com/hibernate/hibernate-orm/blob/master/hibernate-core/src/test/java/org/hibernate/test/schemaupdate/SchemaExportTest.java). – Vlad Mihalcea Jan 26 '19 at 14:06
-
Thanks. I came across a single-line dump: "mysqldump -u root -p --no-data dbname > schema.sql". Is there any drawback to using the dump generated by this? – HopeKing Jan 26 '19 at 14:24
-
Apart from Flyway, you can use Liquibase; the good thing about Liquibase is that it can be configured to generate migration scripts for you. – Emmanuel Ogoma Oct 26 '20 at 20:05
I would vote no. Hibernate doesn't seem to understand when datatypes for columns have changed. Examples (using MySQL):
String with @Column(length=50) ==> varchar(50)
changed to
String with @Column(length=100) ==> still varchar(50), not changed to varchar(100)
@Temporal(TemporalType.TIMESTAMP,TIME,DATE) will not update the DB columns if changed
There are probably other examples as well, such as pushing the length of a String column up over 255 and seeing it convert to text, mediumtext, etc etc.
Granted, I don't think there is really a way to "convert datatypes" without creating a new column, copying the data, and blowing away the old column. But the minute your database has columns which don't reflect the current Hibernate mapping, you are living very dangerously...
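To illustrate the first case, a hypothetical mapping (entity and field names are made up):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Customer {
    @Id
    private Long id;

    // Originally @Column(length = 50), so VARCHAR(50) was generated.
    // After widening to 100, hbm2ddl.auto=update left the column
    // as VARCHAR(50) in the MySQL setup described above.
    @Column(length = 100)
    private String name;
}
```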
Flyway is a good option to deal with this problem.

-
I just tried the first part of your example - in my case changing `@Column(length = 45)` to `@Column(length = 255)`. Can verify that Hibernate 4.3.6.Final correctly updated the database schema using `hbm2ddl.auto=update`. (One thing to mention is the database doesn't currently have any data in it - only the structure.) – Steve Chambers Sep 17 '14 at 09:42
-
Very possible that they fixed that bug somewhere along the past ~6 years or so. However if you did have data in the schema and made a change that resulted in a decrease in column width, you're going to run into either errors or unmanaged data truncation. – cliff.meyers Dec 20 '14 at 13:50
-
Flyway needs a SQL script crafted manually; that should be better than what an automatic program can do, but who is going to write a big script by hand? That's the problem. – Bằng Rikimaru Jun 13 '17 at 17:52
I wouldn't risk it because you might end up losing data that should have been preserved. hbm2ddl.auto=update is purely an easy way to keep your dev database up to date.

-
Yes, of course I do, but it is a lot of work to restore from a backup. It's not worth the hassle of restoring backups when you can also update your database in an orderly fashion. – Jaap Coomans Sep 10 '10 at 08:57
We've been doing it in a project running in production for months now and have never had a problem so far. Keep in mind the two ingredients needed for this recipe:
Design your object model with a backwards-compatibility approach: deprecate objects and attributes rather than removing or altering them. If you need to change the name of an object or attribute, leave the old one as is, add the new one, and write some kind of migration script (as sketched below). If you need to change an association between objects when you already are in production, your design was wrong in the first place, so try to think of a new way of expressing the new relationship without affecting old data.
Always back up the database prior to deployment.
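A sketch of ingredient 1 (a hypothetical entity; column and field names are made up):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Account {
    @Id
    private Long id;

    /** @deprecated superseded by emailAddress; kept so old data survives */
    @Deprecated
    @Column(name = "mail")
    private String mail;

    // New attribute: the auto-update only ADDs this column; a separate
    // migration script copies mail -> email_address for existing rows.
    @Column(name = "email_address")
    private String emailAddress;
}
```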
My sense is - after reading this post - that 90% of the people taking part in this discussion are horrified just with the thought of using automations like this in a production environment. Some throw the ball at the DBA. Take a moment though to consider that not all production environments will provide a DBA and not many dev teams are able to afford one (at least for medium size projects). So, if we're talking about teams where everyone has to do everything, the ball is on them.
In this case, why not just try to have the best of both worlds? Tools like this are here to give a helping hand, which - with careful design and planning - can help in many situations. And believe me, administrators may initially be hard to convince, but if they know that the ball is not in their hands, they will love it.
Personally, I'd never go back to writing scripts by hand for extending any type of schema, but that's just my opinion. And after starting to adopt NoSQL schema-less databases recently, I can see that sooner rather than later, all these schema-based operations will belong to the past, so you'd better start changing your perspective and look ahead.

-
I disagree about the NoSQL comment. It definitely is on the rise and has its place, but there are many applications that absolutely depend on ACID compliance for data integrity with concurrency and transactions, which NoSQL simply can't provide. – jpswain Aug 13 '11 at 19:54
It's not safe, not recommended, but it's possible.
I have experience with an application that used the auto-update option in production.
Well, the main problems and risks found in this solution are:
- Deploying against the wrong database. If you make the mistake of running the application server with an old version of the application (EAR/WAR/etc.) against the wrong database, you will get a lot of new columns, tables, foreign keys, and errors. The same problem can occur with a simple mistake in the datasource file (copying the file and forgetting to change the database). In short, the result can be a disaster for your database.
- The application server takes too long to start. This occurs because Hibernate inspects all existing tables/columns/etc. every time you start the application; it needs to know what (table, column, etc.) has to be created. This problem only gets worse as the number of database tables grows.
- Database tools become almost impossible to use. To create DDL or DML scripts to run with a new version, you need to anticipate what the auto-update will create after you start the application server. For example, if you need to fill a new column with some data, you have to start the application server, wait for Hibernate to create the new column, and only then run the SQL script. As you can see, database migration tools (like Flyway, Liquibase, etc.) are almost impossible to use with auto-update enabled.
- Database changes are not centralized. With Hibernate able to create tables and everything else, it's hard to track the changes made to the database in each version of the application, because most of them are made automatically.
- It encourages garbage in the database. Because of the "easy" use of auto-update, there is a chance your team will neglect to drop old columns and old tables, since the auto-update can't do that.
- Imminent disaster. There is the ever-present risk of a disaster occurring in production (as people mentioned in other answers). Even with an application that has been running and updated for years, I don't consider it a safe choice. I never felt safe with this option in use.
So, I do not recommend using auto-update in production.
If you really want to use auto-update in production, I recommend:
- Separate networks. Your test environment must not be able to reach the homologation environment. This helps prevent a deployment that was supposed to go to the test environment from changing the homologation database.
- Manage script ordering. You need to organize your scripts to run before the deploy (structural table changes, dropping tables/columns) and after the deploy (filling in data for the new columns/tables).
And, unlike some other posts, I don't think enabling auto-update has anything to do with "very well paid" DBAs (as mentioned in other answers). DBAs have more important things to do than write SQL statements to create/change/delete tables and columns. These simple everyday tasks can be done and automated by developers and simply passed to the DBA team for review; there is no need for Hibernate or "very well paid" DBAs to write them.

In my case (Hibernate 3.5.2, Postgresql, Ubuntu), setting
hibernate.hbm2ddl.auto=update
only created new tables and added new columns to already existing tables. It neither dropped tables, nor dropped columns, nor altered columns. It can be called a safe option, but something like
hibernate.hbm2ddl.auto=create_tables add_columns
would be clearer.
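In that spirit, a common (hypothetical) per-environment split keeps the risky value out of production entirely:

```properties
# hibernate-dev.properties: let Hibernate add tables/columns while iterating
hibernate.hbm2ddl.auto=update

# hibernate-prod.properties: only check that mappings match the schema,
# failing fast at startup instead of altering anything
hibernate.hbm2ddl.auto=validate
```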

Typically, enterprise applications in large organizations run with reduced privileges. The database username may not have the DDL privilege for adding columns, which hbm2ddl.auto=update requires.
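For example, in a hypothetical MySQL setup (account names made up), the runtime account is often granted DML only, so an update at startup would simply fail:

```sql
-- Runtime account: DML only; hbm2ddl.auto=update would fail at startup
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app_runtime'@'%';

-- DDL rights stay with a separate account used only for reviewed migrations
GRANT CREATE, ALTER, DROP, INDEX ON appdb.* TO 'app_migration'@'10.0.0.%';
```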

-
This is a problem I frequently encounter. We try to use Hibernate for initial DB creation, but it is often not possible to do so. – Dan Aug 17 '10 at 14:59
-
Thanks, mind looking at these 2 SOs https://stackoverflow.com/questions/75826561/spring-boot-liquibase-versioned-database-migration-dealing-with-different-envi and https://stackoverflow.com/questions/75817730/spring-boot-flyway-dealing-with-different-environments-and-restrictions-in-enter – pixel Mar 23 '23 at 18:23
No, don't ever do it. Hibernate does not handle data migration. Yes, it will make your schema look correct, but it does not ensure that valuable production data is not lost in the process.

I agree with Vladimir. The administrators in my company would definitely not appreciate it if I even suggested such a course.
Further, creating an SQL script instead of blindly trusting Hibernate gives you the opportunity to remove fields which are no longer in use. Hibernate does not do that.
And I find that comparing the production schema with the new schema gives you even better insight into what you changed in the data model. You know, of course, because you made it, but now you see all the changes in one go. Even the ones which make you go "What the heck?!".
There are tools which can make a schema delta for you, so it isn't even hard work. And then you know exactly what's going to happen.

-
"There are tools which can make a schema delta for you": Could you point at some such tools? – Daniel Cassidy Apr 15 '10 at 13:26
-
I think http://www.apexsql.com/sql_tools_diff.asp does this, and possibly more apps. I usually do it by hand by dumping the schema and diffing (using diff). – extraneon Apr 15 '10 at 13:50
An application's schema may evolve over time; if you have several installations that may be at different versions, you should have some way to ensure that your application, or some kind of tool or script, is capable of migrating schema and data stepwise from one version to any following one.
Having all your persistence in Hibernate mappings (or annotations) is a very good way of keeping schema evolution under control.
You should consider that schema evolution has several aspects:
- evolution of the database schema, adding more columns and tables
- dropping of old columns, tables and relations
- filling new columns with defaults
Hibernate's tools are particularly important when (as in my experience) you have different versions of the same application on many different kinds of databases.
Point 3 is very sensitive when using Hibernate: if you introduce a new boolean-valued or numeric property and Hibernate finds any null value in such a column, it will raise an exception.
So what I would do is: indeed use the Hibernate tools' schema-update capability, but add alongside it some data and schema maintenance callback, e.g. for filling defaults, dropping no-longer-used columns, and similar. This way you get the advantages (database-independent schema update scripts and no duplicated coding of the updates, in persistence and in scripts) while still covering all aspects of the operation.
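A minimal sketch of such a callback (plain JDBC; table and column names are made up), run once at startup after Hibernate's schema update:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaMaintenance {
    /** Backfill defaults and drop retired columns that auto-update won't touch. */
    public static void afterSchemaUpdate(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // New non-null boolean mapping: old rows would otherwise be NULL
            st.executeUpdate("UPDATE account SET active = 1 WHERE active IS NULL");
            // hbm2ddl update never drops anything, so retire old columns here
            // (guard with a schema-version check in real code so it runs only once)
            st.executeUpdate("ALTER TABLE account DROP COLUMN legacy_flag");
        }
    }
}
```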
So, for example, if a version update simply consists of adding a varchar-valued property (hence column) that may default to null, auto-update will be enough. Where more complexity is necessary, more work will be necessary.
This assumes that the application, when updated, is capable of updating its own schema (it can be done), which also means it must have the user rights to do so on the schema. If the customer's policy prevents this (likely a Lizard Brain case), you will have to provide database-specific scripts.
