
I am trying to update from 4.0 to 4.5.1 but the process always fails at UpdateMeasuresDebtToMinutes. I am using MySQL 5.5.27 as a database with InnoDB as table engine.

Basically, the problem looks like this problem:

After the writeTimeout (600 seconds) is exceeded, there is an exception in the log:

    Caused by: java.io.EOFException: Can not read response from server. Expected to read 81 bytes, read 15 bytes before connection was unexpectedly lost.
        at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3166) ~[mysql-connector-java-5.1.27.jar:na]
        at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3676) ~[mysql-connector-java-5.1.27.jar:na]

Adding the indexes as proposed in the linked issue did not help.

Investigating further I noticed several things:

  • the migration step reads data from a table and wants to write back to the same table (project_measures)
  • project_measures contains more than 770000 rows
  • the process always hangs after 249 rows
  • the hanging happens in org.sonar.server.migrations.MassUpdate when calling update.addBatch(), which after BatchSession.MAX_BATCH_SIZE (250) rows forces an execute and a commit
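The batching behavior described in the last bullet can be sketched as follows. This is a paraphrase of the mechanism, not the actual SonarQube source: with a batch size of 250, the first forced execute-and-commit happens when the 250th row is added, which matches the hang right after row 249.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    static final int MAX_BATCH_SIZE = 250; // mirrors BatchSession.MAX_BATCH_SIZE

    // Returns the 1-based row numbers at which a flush (execute + commit)
    // would be forced while streaming totalRows rows through addBatch().
    static List<Integer> flushPoints(int totalRows) {
        List<Integer> flushes = new ArrayList<>();
        int pending = 0;
        for (int row = 1; row <= totalRows; row++) {
            pending++;                       // update.addBatch()
            if (pending >= MAX_BATCH_SIZE) { // forces execute + commit
                flushes.add(row);
                pending = 0;
            }
        }
        return flushes;
    }
}
```

With 770,000 rows the first flush lands at row 250, i.e. exactly where the migration hangs: the write has to commit while the read cursor over the same table is still open.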

Is there a way to configure the DB connection to allow this migration to proceed?


2 Answers


First of all, could you try to revert your DB to 4.0 and try again? Then, could you please give us the JDBC URL (sonar.jdbc.url) you're using?

Thanks

  • Reverting just leads to the same problem, again and again. As the JDBC URL I tried several variants with different timeout settings, but those only change the moment when the exception is logged. Right now I use this setting: `sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance` – Michael Grunert Jan 13 '15 at 14:33

As I need that Sonar server to run, I finally implemented a workaround.

It seems I cannot write to the database at all as long as a big result set is still open (I tried with a second table, but hit the same issue as before).

Therefore I changed all migrations that need to read from and write to the project_measures table (org.sonar.server.db.migrations.v43.TechnicalDebtMeasuresMigration, org.sonar.server.db.migrations.v43.RequirementMeasuresMigration, org.sonar.server.db.migrations.v44.MeasureDataMigration) to load the changed data into an in-memory structure and write it back only after closing the read result set. This is as hacky as it sounds and will not work for larger datasets, where you would need to page through the data or store everything in a secondary datastore.
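The shape of that workaround can be sketched generically. This is a sketch under assumptions: in the real migrations the source is a JDBC ResultSet over project_measures and the sink is a PreparedStatement; here both are abstracted so the two phases are visible.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

public class BufferedMigration {
    // Phase 1: drain the whole read side into memory (in the real migration:
    // iterate the ResultSet fully, then close it). Phase 2: only then write
    // the buffered rows back in commit-sized chunks of 250, mirroring
    // BatchSession.MAX_BATCH_SIZE, so no write ever hits the table while a
    // read cursor on it is still open.
    static <T> int migrateBuffered(Iterator<T> source,
                                   Consumer<List<T>> writeBatch,
                                   int batchSize) {
        List<T> buffered = new ArrayList<>();
        while (source.hasNext()) {
            buffered.add(source.next()); // read phase: nothing written yet
        }
        int batches = 0;
        for (int i = 0; i < buffered.size(); i += batchSize) {
            writeBatch.accept(buffered.subList(i, Math.min(i + batchSize, buffered.size())));
            batches++;
        }
        return batches;
    }
}
```

The obvious cost is the answer's own caveat: the whole result set lives in memory, so this only works while project_measures still fits on the heap.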

Furthermore I found that later on (in 546_inverse_rule_key_index.rb) an index needs to be created on the rules table which is larger than the maximum key length on MySQL (two varchar(255) columns in UTF-8 take more than 1000 bytes), so I had to limit the key length on that too.
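Limiting the key length means using MySQL's index prefix syntax. A sketch only: the column names below are assumed from the "inverse rule key" migration, and the prefix length of 100 is an illustrative choice (2 × 100 chars × 3 bytes/char in utf8 stays under MySQL 5.5's key-length limits); they are not taken from the actual migration file.

```sql
-- Illustrative: index only the first 100 characters of each varchar(255)
-- column so the total key stays under the MySQL key-length limit.
CREATE INDEX rules_key_inverse ON rules (plugin_rule_key(100), plugin_name(100));
```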

As I said, it is a workaround, and therefore I will not accept it as an answer.