If your table has one null value hiding among a billion rows, finding it will usually require a full scan of the table (stopping as soon as one is found). That's true for the answers using various SQL forms, for the answers based on gathering statistics, and for the answers based on trying to enforce a NOT NULL constraint on the column.
Once you're using `ROWNUM = 1` to make sure you stop after the first null value, here is what you can try:
- Use a `/*+ PARALLEL */` hint to make the full scan go faster.
- If you've gathered stats on the table using a 100% sample size, you can look at `ALL_TAB_COLUMNS.NUM_NULLS`. This doesn't get around the need for a full table scan; it just reuses the one that was performed when statistics were gathered. The downside of this approach is that the statistics may not be up to date.
- If there is an index on the column of interest AND an index on a `NOT NULL` column in the same table, you can `SELECT COUNT(indexed_column_name) FROM table_name` and `SELECT COUNT(indexed_not_null_column) FROM table_name` and subtract the results. This takes advantage of the fact that entirely null key values do not appear in a b-tree index, so the two counts will differ if any nulls exist. This is faster because the indexes are smaller than the full table and therefore quicker to read.
If this is something you will need to do frequently, you can create a function-based index to speed up this search in the future:
CREATE INDEX index_name ON table_name ( NVL(column_name, impossible_value_for_column) );
e.g.,
CREATE INDEX index_name ON table_name ( NVL(column_name, -99999999) );
Then
SELECT * FROM table_name
WHERE NVL(column_name, -99999999) = -99999999
AND column_name IS NULL; -- in case your "impossible value" really wasn't impossible
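With that index in place, checking for the existence of a null no longer needs a full scan at all; a sketch, reusing the hypothetical `-99999999` sentinel from above:

```sql
-- The NVL() expression in the WHERE clause matches the indexed
-- expression exactly, so Oracle can satisfy this with an index
-- range scan; ROWNUM = 1 stops at the first hit.
SELECT 1
FROM table_name
WHERE NVL(column_name, -99999999) = -99999999
  AND column_name IS NULL  -- guard against a real -99999999 value
  AND ROWNUM = 1;
```

The predicate must be written with the same `NVL(...)` expression used in the index definition, otherwise the function-based index will not be considered.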