138

How can I change the limit?

Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

Table:

id  int(11) No       
name    text    No       
date    date    No       
time    time    No       
schedule    int(11) No       
category    int(11) No       
top_a   varchar(255)    No       
top_b   varchar(255)    No       
top_c   varchar(255)    No       
top_d   varchar(255)    No       
top_e   varchar(255)    No       
top_f   varchar(255)    No       
top_g   varchar(255)    No       
top_h   varchar(255)    No       
top_i   varchar(255)    No       
top_j   varchar(255)    No       
top_title_a varchar(255)    No       
top_title_b varchar(255)    No       
top_title_c varchar(255)    No       
top_title_d varchar(255)    No       
top_title_e varchar(255)    No       
top_title_f varchar(255)    No       
top_title_g varchar(255)    No       
top_title_h varchar(255)    No       
top_title_i varchar(255)    No       
top_title_j varchar(255)    No       
top_desc_a  text    No       
top_desc_b  text    No       
top_desc_c  text    No       
top_desc_d  text    No       
top_desc_e  text    No       
top_desc_f  text    No       
top_desc_g  text    No       
top_desc_h  text    No       
top_desc_i  text    No       
top_desc_j  text    No       
status  int(11) No       
admin_id    int(11) No 
igorsantos07
Lasha Kurt

19 Answers

142

The question has been asked on serverfault too.

You may want to take a look at this article, which explains a lot about MySQL row sizes. It's important to note that even if you use TEXT or BLOB fields, your row size can still exceed 8K (the InnoDB limit), because InnoDB stores the first 768 bytes of each such column inline in the page.

The simplest way to fix this is to use the Barracuda file format with InnoDB. This basically gets rid of the problem altogether, because only a 20-byte pointer to the off-page data is stored instead of the first 768 bytes.


The method that worked for the OP there was:

  1. Add the following to the my.cnf file under [mysqld] section.

    innodb_file_per_table=1
    innodb_file_format = Barracuda
    
  2. ALTER the table to use ROW_FORMAT=COMPRESSED.

    ALTER TABLE nombre_tabla
        ENGINE=InnoDB
        ROW_FORMAT=COMPRESSED 
        KEY_BLOCK_SIZE=8;
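
To confirm that the table actually picked up the new row format after the ALTER, you can check information_schema (a quick sanity check; replace nombre_tabla with your table name):

-- The ROW_FORMAT column should now report "Compressed"
SELECT TABLE_NAME, ENGINE, ROW_FORMAT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'nombre_tabla';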
    

There is a possibility that the above still does not resolve your issue. It is a known (and verified) bug in the InnoDB engine, and a temporary workaround for now is to fall back to the MyISAM engine for internal temporary storage. So, in your my.cnf file:

internal_tmp_disk_storage_engine=MyISAM
hjpotter92
  • 15
    +1 -> The innodb_file_per_table part is crucial and goes hand-in-hand with changing the format to Barracuda. Without it, MySQL will silently continue to use Antelope. – Nick Sep 16 '14 at 18:59
  • I was getting the exact same error as the OP. I tried this but kept getting the same error, minus the suggestion to set the `ROW_FORMAT`. Changing one field that was `LONGTEXT` to `TEXT` worked for me. – Tim Malone Mar 12 '16 at 01:23
  • Hello, do you guys recommend this for production? I am a newbie and currently making a system that had this error and I am considering to do this in the production server to solve the problem. –  Sep 03 '16 at 03:55
  • 1
    @Eduardo I'd recommend backing up the data first. But yes, you can do this on production too! – hjpotter92 Sep 03 '16 at 09:18
  • @hjpotter92 THANK YOU! I thought this was a hack!!! I never knew this was an OK method to do for production. Thank you again!!! –  Sep 03 '16 at 09:45
  • 3
    Use `innodb_file_per_table=1` to activate this option. – Stijn Geukens Oct 09 '16 at 13:58
  • 3
    This isn't guaranteed to fix the problem. See [this post](https://bugs.mysql.com/bug.php?id=69336) for an example script that adds these settings but is still able to reproduce the error. – Cerin Mar 15 '17 at 00:08
  • Same here. I followed these steps and still have the same error. – PDoria Oct 01 '17 at 14:33
  • @Cerin updated the reply above with more recent findings. Apparently, the bug is still there in recent MySQL releases... – hjpotter92 Oct 01 '17 at 14:52
  • 1
    @user1183352 Updated my reply above. Thanks for reminding me again to dive deeper into this. :) – hjpotter92 Oct 01 '17 at 14:52
  • Since I use transactions in my application, rolling to myisam is not a desirable option. And I chose InnoDB since it is generally more stable as far as database concurrency goes... – PDoria Oct 01 '17 at 14:58
  • Even with transactions, @user1183352, you're only changing the temporary storage engine. The resulting tables would still be in InnoDB; just the procedure execution would use a MyISAM engine for computation etc. Though I would advise against a non-normalised DB structure. Try asking for design implementations on [dba.se]? – hjpotter92 Oct 01 '17 at 15:13
  • Normalising databases (or not) have their pros and cons - Personally - I think this discussion shouldn't be done here. In this particular case, my database isn't - but that was a design choice considering our system requirements (mainly, efficiency and speed above all else). Even if I normalise the table, this type of issue can happen. – PDoria Oct 01 '17 at 18:47
  • 1
    Update: I ended up solving my particular issue by modifying data types on my table, and thus not breaking the row size limit. – PDoria Oct 01 '17 at 19:02
  • `DYNAMIC` and `COMPRESSED` work the same way (as far as this Question is concerned): Some varchar/text columns will be moved to off-record storage, leaving behind a 20-byte pointer. Enough columns are moved (per row) to shrink the row to under 8KB. The result will be grossly inefficient; adding a table (see my answer) will be more efficient. – Rick James Oct 12 '17 at 16:32
  • Like @StijnGeukens stated use `innodb_file_per_table=1`, without the `=1` it didn't work, now it does. – Semicolon Nov 06 '17 at 15:12
  • 1
    @Nick In my case adding `innodb_strict_mode=0` was far more crucial, as described [here](https://stackoverflow.com/a/44343703/1883256) in MariaDB 10.2. My problem is caused by a Wordpress plugin called "PhotoGallery" as described [there](https://web-dorado.com/forum/photo-gallery/11797-wp-bwg-theme-table-on-mysql.html) – Pathros Mar 16 '18 at 17:16
  • Is this still a good recommendation? The answer is from 2013. I had good results changing to row format "DYNAMIC" as explained here: https://mariadb.com/kb/en/innodb-dynamic-row-format/#overflow-pages-with-the-dynamic-row-format MariaDB 10.4.12 – Sybille Peters Jun 12 '20 at 13:02
  • @SybillePeters Not sure if the bug is there in mysql 8+, but barracuda format is for the `DYNAMIC` row format. – hjpotter92 Jun 12 '20 at 13:13
  • 3
    Seems to me this answer is not relevant for modern versions of MySQL. At least not based on https://dba.stackexchange.com/a/125893/79826. – Corin Jan 15 '21 at 16:15
70

I ran into this problem recently and solved it a different way. If you are running MySQL version 5.6.20, there is a known bug. See the MySQL docs:

Important Due to Bug #69477, redo log writes for large, externally stored BLOB fields could overwrite the most recent checkpoint. To address this bug, a patch introduced in MySQL 5.6.20 limits the size of redo log BLOB writes to 10% of the redo log file size. As a result of this limit, innodb_log_file_size should be set to a value greater than 10 times the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields).

In my situation the offending BLOB data was around 16MB. Thus, the way I solved it was by adding a line to my.cnf that ensured the log file size was at least 10x that amount, and then some:

innodb_log_file_size = 256M
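
If you are not sure how large your biggest BLOB/TEXT values actually are, a quick way to check before picking a log size (the column and table names below are placeholders, not from the answer), and to confirm the new setting after the restart:

-- Find the largest stored value so innodb_log_file_size can be set to more than 10x that
SELECT MAX(LENGTH(my_blob_column)) AS largest_blob_bytes FROM my_table;

-- After editing my.cnf and restarting, confirm the new value (reported in bytes)
SHOW VARIABLES LIKE 'innodb_log_file_size';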

Epoc
Colin
  • Spot on. My "Row Size too large" error on a `longblob` field disappeared as soon as I increased the `innodb_log_file_size` parameter. Also was running 5.6.20. – adamup Sep 14 '14 at 14:06
  • Thanks. Was running homebrew 5.6.21, changing the param fixed the issue. – nanoman Nov 28 '14 at 13:58
  • Both this and @hjpotter92's answer were required to fix this error for me. – Cerin May 13 '15 at 00:25
  • 1
    Thanks. Was running 5.6.21 and this solution helped. – petiar Oct 21 '15 at 20:57
  • This did not solve my issue either. I'm considering replacing many of my VARCHAR fields to TEXTs, as I've seen on similar threads. – PDoria Oct 01 '17 at 15:11
  • @user1183352 - I _think_ `VARCHAR` and `TEXT` columns are handled the same when it comes to the error at hand. (Also `VARBINARY` and `BLOB`.) – Rick James Oct 12 '17 at 16:42
  • IMO Best answer, but if you can't change db config, I suggest have a read on the following link, that explains column size limit especially discussing utf8mb4 character set. In short change varchar/char to text or set size to max varchar(191) https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html – Abdul Rehman Jul 16 '21 at 08:03
62

Set the following in your my.cnf file and restart the MySQL server.

innodb_strict_mode=0
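
If editing my.cnf is inconvenient, the same setting can usually be applied at runtime as well; it will not survive a restart, so the config file entry is still the durable fix:

-- Runtime equivalent of the config change (not persistent across restarts)
SET GLOBAL innodb_strict_mode = 0;
SHOW VARIABLES LIKE 'innodb_strict_mode';
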
Karl
  • 2
    `innodb_strict_mode=0` -> no spaces. Otherwise, restarting mariaDB 10.2 results in error :-P – Pathros Mar 26 '18 at 04:30
  • I did not try with mariadb, for MySQL it does not need to remove spaces. – Karl Apr 19 '18 at 08:58
  • 7
    According to the documentation, disabling strict mode only prevents errors and warnings from appearing, so it doesn't seem to solve the problem! http://download.nust.na/pub6/mysql/doc/innodb-plugin/1.1/en/innodb-other-changes-strict-mode.html – João Teixeira Jan 10 '20 at 12:32
  • 2
    This seems to work and let me create the tables without issue and store data – Chris Muench Apr 13 '20 at 18:09
  • @João, it does more.. https://dev.mysql.com/doc/refman/8.0/en/innodb-parameters.html#sysvar_innodb_strict_mode – Karl Jul 23 '20 at 08:11
  • Amazon RDS MySQL 5.7 did everything else in this thread; e.g. row size = dynamic; barracuda etc.. In the end all that worked on RDS was to downgrade to MySQL 5.7; do everything else in this thread AND set strict_mode to 0 in the RDS parameter group – Reece Sep 03 '20 at 23:30
  • MySQL 8.0.27 innodb-strict-mode=OFF – Karl Dec 29 '21 at 10:35
24

If you can switch the ENGINE and use MyISAM instead of InnoDB, that should help:

ENGINE=MyISAM
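
For an existing table, the full statement would look something like this (the table name is just a placeholder):

-- Convert an existing table to MyISAM
ALTER TABLE my_table ENGINE=MyISAM;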

There are two caveats with MyISAM (arguably more):

  1. You can't use transactions.
  2. You can't use foreign key constraints.
Julian
phoenix
  • 10
    Fine if you don't mind going without transactions and foreign key constraints. But be aware that you lose these when switching to MyISAM. – wobbily_col Aug 10 '16 at 14:45
  • 4
    This advice is insane -- at least phrase it with all the caveats! Transactions and foreign key constraints are the least of the problems with this format! – Hassan Syed Oct 06 '16 at 19:13
  • 1
    Sorry downvoted this answer. Please update to mention all the caveats involved. Anyone just following this advice switches to a format that I would never use on any production system. – Remco Wendt Jan 18 '17 at 13:11
  • 3
    And MyISAM is going away. – Rick James Oct 12 '17 at 16:43
  • 2
    worked for me. My case I had over 300 fields about 10 in length each. I do not have any relationships in this table so switching to MyISAM was the fix. – Robert Saylor Feb 28 '19 at 12:28
21

I had the same issue, this solved it for me:

ALTER TABLE `my_table` ROW_FORMAT=DYNAMIC;
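
To verify that the change took effect, SHOW TABLE STATUS reports the row format (assuming the table is named my_table as above):

-- The Row_format column should now report "Dynamic"
SHOW TABLE STATUS WHERE Name = 'my_table';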

From the MySQL documentation:

The DYNAMIC row format maintains the efficiency of storing the entire row in the index node if it fits (as do the COMPACT and REDUNDANT formats), but this new format avoids the problem of filling B-tree nodes with a large number of data bytes of long columns. The DYNAMIC format is based on the idea that if a portion of a long data value is stored off-page, it is usually most efficient to store all of the value off-page. With DYNAMIC format, shorter columns are likely to remain in the B-tree node, minimizing the number of overflow pages needed for any given row.

multimediaxp
  • ... but you need to have a different file format, so u are already on Barracuda? Cause I get an error trying that command, saying `Warnings from last query: InnoDB: ROW_FORMAT=DYNAMIC requires innodb_file_format > Antelope` – Ted Aug 06 '18 at 15:36
  • When I let HeidiSQL run the same command via its built-in GUI, it somehow succeeded, but it didnt help at all, I get the same error, `Row size too large (>8126)` – Ted Aug 06 '18 at 15:38
  • Try to set innodb_file_format = Barracuda in my.ini and try ALTER TABLE `my_table` ROW_FORMAT=DYNAMIC; again – multimediaxp Aug 07 '18 at 18:23
  • Yeah, I think thats what I did in the end actually, and I think that resolved it. But I also changed a lot of columns to TEXT at the same time in desperation =) Thx though, I actually think that was it. – Ted Aug 07 '18 at 18:59
  • Good!, if you think this is a good answer, give it an up vote and accept as valid response, thanks! – multimediaxp Aug 16 '18 at 00:24
  • I cant accept, its not my question =) And, I _think_ it was also required to set the file system to Barracuda... – Ted Aug 16 '18 at 00:26
15

I would like to share an excellent answer that might be helpful. Credit to Bill Karwin; see https://dba.stackexchange.com/questions/6598/innodb-create-table-error-row-size-too-large

The limits vary by InnoDB file format. At present there are two formats, called Antelope and Barracuda.

The central tablespace file (ibdata1) is always in Antelope format. If you use file-per-table, you can make the individual files use Barracuda format by setting innodb_file_format=Barracuda in my.cnf.

Basic points:

  1. One 16KB page of InnoDB data must hold at least two rows of data. Plus each page has a header and a footer containing page checksums and log sequence number and so on. That's where you get your limit of a bit less than 8KB per row.

  2. Fixed-size data types like INTEGER, DATE, FLOAT, CHAR are stored on this primary data page and count toward the row size limit.

  3. Variable-sized data types like VARCHAR, TEXT, BLOB are stored on overflow pages, so they don't count fully toward the row size limit. In Antelope, up to 768 bytes of such columns are stored on the primary data page in addition to being stored on the overflow page. Barracuda supports a dynamic row format, so it may store only a 20-byte pointer on the primary data page.

  4. Variable-size data types are also prefixed with 1 or more bytes to encode the length. And InnoDB row format also has an array of field offsets. So there's an internal structure more or less documented in their wiki.

Barracuda also supports a ROW_FORMAT=COMPRESSED to gain further storage efficiency for overflow data.
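
To see which file format and tablespace layout the server is actually using, the relevant variables can be inspected directly; note that these variables exist up to MySQL 5.7 and older MariaDB versions and were removed in MySQL 8.0, where the Barracuda/DYNAMIC behaviour is simply the default:

SHOW VARIABLES LIKE 'innodb_file_format';
SHOW VARIABLES LIKE 'innodb_file_per_table';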

I also have to comment that I've never seen a well-designed table exceed the row size limit. It's a strong "code smell" that you're violating the repeating groups condition of First Normal Form.

Aman Chhabra
7

You need to make some changes to the my.ini file.

Add this under [mysqld]

innodb_strict_mode=0

Update these two lines:

innodb_log_file_size=256M
innodb_log_buffer_size=256M

innodb_strict_mode: When it is enabled, certain InnoDB warnings become errors instead.

Reference: https://mariadb.com/kb/en/innodb-strict-mode/

innodb_log_file_size & innodb_log_buffer_size need to be increased in size.

Soubhagya Kumar Barik
  • It solved my issue on local machine. I have innodb database and table has 119 fields and got this issue when I tried to create a new 120th field. Your solution solved my issue for now. Thanks Soubhagya, – Kamlesh Mar 18 '23 at 12:14
6

After spending hours I have found the solution: just run the following SQL in your MySQL admin to convert the table to MyISAM:

USE db_name;
ALTER TABLE table_name ENGINE=MYISAM;
Ivan Aracki
Iftikhar Khan
  • Doesn't help much if the issue is happening in a create table statement. – Mark Fraser Dec 07 '18 at 02:45
  • 3
    `create table` has the same option: `CREATE TABLE ... ENGINE=MyISAM ...` – xebeche Apr 16 '19 at 13:26
  • You are going to lose all the advantages `InnoDB` has compared to `MyISAM` (like transactions and constraints), except that `MyISAM` is maybe faster (you can still increase the key cache to overcome that). – Roland Jun 19 '22 at 20:49
6

The maximum row size for an InnoDB table, which applies to data stored locally within a database page, is slightly less than half a page for 4KB, 8KB, 16KB, and 32KB page sizes.

For 16KB pages (the default), we can calculate:

Slightly less than half a page (8126 bytes) / overflow threshold per column (767 bytes) ≈ 10.59, i.e. at most about 10-11 columns of 767 bytes each.

Basically, you could max out a row with:

  • 11 varchar fields > 767 characters (latin1 = 1 byte per char) or
  • 11 varchar fields > 255 characters (utf-8 on mysql = 3 bytes per char).

Remember, a column only overflows to an overflow page if it is longer than 767 bytes. If there are too many columns of up to 767 bytes each, the row will exceed the maximum row size. That is not usual with latin1, but very possible with utf-8 if the developers aren't careful.
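
As a hedged illustration of that last point (this example is not from the answer): with utf-8 and the COMPACT row format, a dozen VARCHAR(255) columns already have a potential inline size of roughly 12 x 767 bytes, well past 8126, so InnoDB rejects the table when innodb_strict_mode is ON, or accepts it with a warning and fails later on a sufficiently large INSERT when it is OFF:

-- Hypothetical reproduction: each utf8 VARCHAR(255) can occupy up to 765 bytes inline,
-- and none of them is long enough (> 767 bytes) to be pushed to an overflow page.
CREATE TABLE row_size_demo (
  c01 VARCHAR(255), c02 VARCHAR(255), c03 VARCHAR(255), c04 VARCHAR(255),
  c05 VARCHAR(255), c06 VARCHAR(255), c07 VARCHAR(255), c08 VARCHAR(255),
  c09 VARCHAR(255), c10 VARCHAR(255), c11 VARCHAR(255), c12 VARCHAR(255)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;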

For this case, I think you could possibly bump the innodb_page_size to 32KB. Note that innodb_page_size only takes effect when the data directory is first initialized; it cannot be changed on an existing instance without dumping and re-initializing.

in my.cnf:

innodb_page_size=32K


U0001
  • 1
    Very good info, I would suggest to add this link for first read, that explains column size limit especially discussing utf8mb4 character set https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html – Abdul Rehman Jul 16 '21 at 07:59
4

I am using MySQL 5.6 on AWS RDS. I updated the following in the parameter group:

innodb_file_per_table=1
innodb_file_format = Barracuda

I had to reboot the DB instance for the parameter group changes to take effect.

Also, ROW_FORMAT=COMPRESSED was not supported. I used DYNAMIC as below and it worked fine.

ALTER TABLE nombre_tabla ENGINE=InnoDB ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=8
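
Since RDS does not let you edit my.cnf directly, it is worth double-checking that the parameter group changes are actually in effect after the reboot:

-- Both should reflect the parameter group values once the instance is back up
SHOW VARIABLES LIKE 'innodb_file_per_table';
SHOW VARIABLES LIKE 'innodb_file_format';
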
Mihir Kagrana
3

I ran into this issue when I was trying to restore a backed-up MySQL database from a different server. What solved it for me was adding certain settings to my.cnf (as in the answers above) and additionally changing the SQL backup file:

Step 1: add or edit the following lines in my.cnf:

innodb_page_size=32K
innodb_file_format=Barracuda
innodb_file_per_table=1

Step 2: add ROW_FORMAT=DYNAMIC to the CREATE TABLE statement in the SQL backup file for the table that is causing this error:

DROP TABLE IF EXISTS `problematic_table`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `problematic_table` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
  ...
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 ROW_FORMAT=DYNAMIC;

The important change above is ROW_FORMAT=DYNAMIC (it was not included in the original SQL backup file).

source that helped me to resolve this issue: MariaDB and InnoDB MySQL Row size too large

matyas
  • 2
    Lines described in step1 should not have any space. Line `innodb_page_size=32K` didn't work, since it caused error to restart MariaDB 10.2 in my case. Instead, I added `innodb_strict_mode=0` line, as described [here](https://stackoverflow.com/a/44343703/1883256) – Pathros Mar 16 '18 at 17:12
  • 1
    thank you for the feedback Pathros, I updated my answer and removed the spaces from step1. – matyas Mar 24 '18 at 13:10
  • Note for MySQL <= 5.7.5: 32K is not valid option for innodb_page_size. See: https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_page_size – Dub Nazar Dec 11 '18 at 21:25
  • Also, you must recreate your whole MySQL server from scratch, as changing `innodb_page_size` requires `ibdata*` recreation, which is a destructive operation. Watch your syslog for more information. – Matija Nalis May 30 '19 at 17:52
  • seems like a copy of, or at least very similar to, the best rated answer – Dwza Sep 11 '20 at 14:21
2

The other answers address the question asked. I will address the underlying cause: poor schema design.

Do not splay an array across columns. Here you have 3*10 columns that should be turned into 10 rows of 3 columns in a new table (plus id, etc)

Your Main table would have only

id  int(11) No       
name    text    No       
date    date    No       
time    time    No       
schedule    int(11) No       
category    int(11) No       
status  int(11) No       
admin_id    int(11) No 

Your extra table (Top) would have

id  int(11) No          -- for joining to Main
seq TINYINT UNSIGNED    -- containing 1..10
img   varchar(255)    No       
title varchar(255)    No       
desc  text    No    
PRIMARY KEY(id, seq)    -- so you can easily find the 10 top_titles

There would be 10 (or fewer? or more?) rows in Top for each id.
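
A rough sketch of what that could look like as DDL, plus the join back to Main (the table and column names are illustrative, not a prescribed schema):

-- Illustrative DDL for the suggested child table
CREATE TABLE Top (
  id     INT              NOT NULL,   -- matches Main.id
  seq    TINYINT UNSIGNED NOT NULL,   -- 1..10
  img    VARCHAR(255)     NOT NULL,
  title  VARCHAR(255)     NOT NULL,
  `desc` TEXT             NOT NULL,   -- DESC is a reserved word, hence the backticks
  PRIMARY KEY (id, seq)
) ENGINE=InnoDB;

-- Fetch one Main row together with its (up to) 10 "top" entries
SELECT m.id, m.name, t.seq, t.img, t.title, t.`desc`
FROM Main AS m
JOIN Top AS t ON t.id = m.id
WHERE m.id = 123
ORDER BY t.seq;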

This eliminates your original problem, and cleans up the schema. (This is not "normalization", as debated in some of the Comments.)

Do not switch to MyISAM; it is going away.
Don't worry about ROW_FORMAT.

You will need to change your code to do the JOIN and to handle multiple rows instead of multiple columns.

Rick James
2

I keep running into this issue when I destroy my Laravel Homestead (Vagrant) box and start fresh.

  1. SSH into the box from the command line: `homestead ssh`

  2. Open the my.cnf file: `sudo vi /etc/mysql/my.cnf`

  3. Add the below lines to the bottom of the file (below the !includedir)

    [mysqld]

    innodb_log_file_size=512M

    innodb_strict_mode=0

  4. Save the changes to my.cnf, then restart MySQL: `sudo service mysql restart`

Jon
1

To solve this problem, you can change the table's storage engine from InnoDB to MyISAM in phpMyAdmin (for example via the table's Operations tab).


swdpankaj
1

For MariaDB, this issue can be solved by increasing innodb_page_size from 16K to 32K in the my.ini file:

innodb_page_size=32K

which in my case is located at D:\wamp\bin\mariadb\mariadb10.6.5.

Rana Nadeem
  • Note that this answer was published later than this one: https://stackoverflow.com/a/58654840/3451846 which is somewhat more complete as far as I can see – Valerio Bozz Jan 24 '23 at 18:35
0

I also encountered the same problem. I solved it by executing the following SQL:

ALTER TABLE ${table} ROW_FORMAT=COMPRESSED;

But I think you should know about row storage.
There are two kinds of columns: variable-length columns (such as the VARCHAR, VARBINARY, BLOB and TEXT types) and fixed-length columns. They are stored in different types of pages.

Variable-length columns are an exception to this rule. Columns such as BLOB and VARCHAR that are too long to fit on a B-tree page are stored on separately allocated disk pages called overflow pages. We call such columns off-page columns. The values of these columns are stored in singly-linked lists of overflow pages, and each such column has its own list of one or more overflow pages. In some cases, all or a prefix of the long column value is stored in the B-tree, to avoid wasting storage and eliminating the need to read a separate page.

and the purpose of setting ROW_FORMAT is:

When a table is created with ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED, InnoDB can store long variable-length column values (for VARCHAR, VARBINARY, and BLOB and TEXT types) fully off-page, with the clustered index record containing only a 20-byte pointer to the overflow page.

See the MySQL documentation if you want to know more about the DYNAMIC and COMPRESSED row formats.

Will
0

If this occurs on a SELECT with many columns, the cause can be that MySQL is creating a temporary table. If that table is too large to fit in memory, MySQL stores it on disk using its default on-disk temporary table engine, which is InnoDB, and in that case the InnoDB size limits apply.

You then have 4 options:

  1. change the InnoDB row size limit as stated in another post, which requires reinitialization of the server.
  2. change your query to include fewer columns, or avoid causing it to create a temporary table (e.g. by removing ORDER BY and LIMIT clauses).
  3. change max_heap_table_size (and tmp_table_size) so the result fits in memory and does not need to be written to disk (see the sketch below).
  4. change the default on-disk temp table engine to MyISAM; this is what I did. Change in my.cnf:

    internal_tmp_disk_storage_engine=MYISAM
    

Restart MySQL, and the query works.
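
For option 3, both in-memory limits can usually be raised at runtime, since the effective cap is the smaller of the two (the sizes below are arbitrary examples):

-- Option 3 sketch: let larger result sets stay in memory instead of spilling to disk
SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;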

bhelm
0

Here is a simple tip for anyone interested:

After upgrading from Debian 9 to Debian 10 with MariaDB 10.3.17, I got some errors from the Joomla databases:

[Warning] InnoDB: Cannot add field field in table database.table because after adding it, the row size is 8742 which is greater than maximum allowed size (8126) for a record on index leaf page.

Just in case, I set innodb_default_row_format = DYNAMIC in /etc/mysql/mariadb.conf.d/50-server.cnf (it was default anyway)

Then I used phpMyAdmin to run "Optimize table" on all the tables in the Joomla database. I think the table recreation phpMyAdmin does in the process is what helped. If you happen to have phpMyAdmin installed, it is just a few clicks.
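
The same rebuild can also be done from the SQL prompt: for an InnoDB table, OPTIMIZE TABLE is mapped to a recreate plus analyze, which is exactly the rebuild that makes the table pick up the current default row format (jos_content is just an example Joomla table name):

-- InnoDB replies "Table does not support optimize, doing recreate + analyze instead",
-- which is the rebuild we want
OPTIMIZE TABLE `jos_content`;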

0

On MySQL 5.6:

Execute the following SQL commands:

Mysql > SET GLOBAL innodb_file_format=Barracuda; 
Mysql > ALTER TABLE `name_of_my_table_here` ENGINE=InnoDB ROW_FORMAT=DYNAMIC KEY_BLOCK_SIZE=8; 
Claudio Silva
  • I tried this and got this error, "ERROR: Table storage engine 'InnoDB' does not support the create option 'KEY_BLOCK_SIZE'" – ttemple Jun 07 '22 at 16:24