
I can get a list of all the columns for which I want to verify the available range:

SELECT 
  TABLE_NAME, COLUMN_NAME, COLUMN_TYPE 
FROM 
  INFORMATION_SCHEMA.COLUMNS 
WHERE 
  COLUMN_TYPE = 'int(11)' AND 
  TABLE_NAME LIKE 'catalog_category_entity%';

Considering that a signed int(11) goes up to 2147483647 (2^31 − 1, not considering unsigned), I would like to calculate how much of this range I am using.
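As a quick sanity check (not part of the original queries), the maxima can be computed directly in MySQL; note that the display width in `int(11)` does not affect the storage range:

```sql
-- Signed vs. unsigned maxima for a 4-byte INT
SELECT POW(2, 31) - 1 AS signed_int_max,   -- 2147483647
       POW(2, 32) - 1 AS unsigned_int_max; -- 4294967295
```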

Individually, I could check one column like this:

select 
  max(value_id)/2147483648 as usage 
from 
  catalog_product_entity_int;

But I would like a clean way to do this for all the columns found by the first query.

I would like to know whether a recursive CTE is the right tool in this case and how to use it, or whether there is a more elegant way of checking.

I would like a nice, quick way of checking without any external tools.

I've found this solution for Postgres, but I was wondering if I really need the function: postgres: find all integer columns with its current max value in it

neisantos

1 Answer

I wrote a solution for this task, but I'm hardly the only person to have done something like this.

select concat('`', table_schema, '`.`', table_name, '`.`', column_name, '`') as `column`,
  auto_increment as `current_int`, max_int, round((auto_increment/max_int)*100, 2) as `pct_max`
from (select table_schema, table_name, column_name, auto_increment,
  pow(2, case data_type
    when 'tinyint'   then 7
    when 'smallint'  then 15
    when 'mediumint' then 23
    when 'int'       then 31
    when 'bigint'    then 63
    end+(column_type like '% unsigned'))-1 as max_int
  from information_schema.tables t
  join information_schema.columns c using (table_schema,table_name)
  join information_schema.key_column_usage k using (table_schema,table_name,column_name)
  where t.table_schema in ('test')
    and k.constraint_name = 'PRIMARY'
    and k.ordinal_position = 1
    and t.auto_increment is not null
) as dt;

https://github.com/billkarwin/bk-tools/blob/master/pk-full-ratio.sql

That query is hard-coded for the test schema, so you need to edit it for your own schema.

The short answer to the question of "is my primary key going to overflow?" is to just alter it to BIGINT UNSIGNED now. That will surely last until the collapse of civilization.
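A minimal sketch of that fix, assuming a hypothetical table `my_table` whose auto-increment primary key column is named `id`:

```sql
-- Widen the primary key before it overflows;
-- BIGINT UNSIGNED tops out at 18446744073709551615.
ALTER TABLE my_table
  MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Note that this ALTER rebuilds the table, so on a large table it can take a while and may need to be scheduled during a maintenance window.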

In the same git repo, I have another similar script to check all integer columns, not just auto-increment primary keys. But it's not as much of a concern for other columns.

Bill Karwin
  • @RaymondNijland `0.00` just means that you are at least 99.995% away from reaching the limit. But it should be `round((auto_increment/max_int)*100, 2)` to avoid `out of range` errors for BIGINT. – Paul Spiegel Mar 08 '19 at 14:56
  • Thanks @BillKarwin, that seems to work pretty well. The only issue I've noticed is with a compound primary key: the query then calculates for both key columns, one of which will be wrong. It returns some outliers of 5233% in this case. – neisantos Mar 09 '19 at 17:56
  • Okay, the auto-inc column has to be the first column (at least in InnoDB), so you can also filter for `k.ordinal_position=1`. MyISAM tables allow the auto-inc column to be in a second position, but [I recommend never using MyISAM](https://stackoverflow.com/a/17706717/20860). – Bill Karwin Mar 09 '19 at 19:03