There's really no need to gather row sizes like this yourself using VSIZE. Oracle has already done this for you with its own statistics gathering. Simply gather stats (dbms_stats.gather_table_stats) and examine dba_tables.avg_row_len. For total table size, the formatted blocks under the high water mark (HWM) are recorded in dba_tables.blocks (multiply by the block size to get bytes), and the total allocated space can always be obtained, without any stats gathering at all, from dba_segments.bytes.
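For example (a minimal sketch; MYTABLE and the current-user owner are placeholders):

BEGIN
  -- Gather optimizer statistics for the table
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MYTABLE');
END;
/

-- Average row length and formatted blocks below the HWM
SELECT t.avg_row_len,
       t.blocks * TO_NUMBER(p.value) AS hwm_bytes  -- blocks * db_block_size
  FROM dba_tables t,
       v$parameter p
 WHERE t.owner      = USER
   AND t.table_name = 'MYTABLE'
   AND p.name       = 'db_block_size';

-- Total allocated space; no stats gathering required
SELECT SUM(bytes) AS allocated_bytes
  FROM dba_segments
 WHERE owner        = USER
   AND segment_name = 'MYTABLE';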
I wrote a monitoring app that does what you're describing: it gathers object space utilization every day for trending/projection purposes. You really don't need anything other than dba_segments for this purpose (though you will have to join to various other views to get fully identifying information: dba_indexes, dba_lobs, dba_tables, dba_nested_tables, and the dba_*_partitions and dba_*_subpartitions varieties of the first three of these). This way you can gather sizing on as many objects as you'd like, as often as you'd like (I do 100% of the objects in every database, every day, which is a lot), because there's no need for expensive querying or statistics gathering.
If you have a user-defined date column in a special table (or set of tables) that you are absolutely certain records when each row was first inserted, and whose values are never out of sequence, never incorrect, never updated, and never deleted, then you could reconstruct a historical picture after the fact from those tables alone. But it's easier to simply take each date's percentage of the total table row count and multiply it by the segment size; row sizes aren't going to vary enough from day to day to matter in an average.
SELECT datecol,
       COUNT(*) / SUM(COUNT(*)) OVER () *
         (SELECT SUM(bytes)
            FROM dba_segments
           WHERE owner        = USER  -- adjust if the table lives in another schema
             AND segment_name = 'MYTABLE') AS estimated_bytes
  FROM mytable
 GROUP BY datecol;
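The SUM(COUNT(*)) OVER () analytic yields the total row count across all the date groups, and summing dba_segments.bytes in the subquery keeps partitioned tables honest, since each partition is its own segment.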
But any kind of serious trending should instead take a snapshot on a daily basis and store the results, rather than relying on specific column values. Think about how compression impacts things: a table can have a mixture of compressed and uncompressed rows, even a mixture of compression types. Yesterday's rows could occupy 2x-10x more space than a similar day last month with the same number of rows and the same VSIZE numbers (which report the uncompressed byte length). If you're interested in Oracle space usage, you really can't compute it on a row-by-row basis. It has to be done at the segment level by daily snapshotting, not by any after-the-fact query method.
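If you do go the snapshot route, the daily capture is easy to automate with DBMS_SCHEDULER (this sketch assumes the segment_size_hist table above and that the job owner can read dba_segments):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SEGMENT_SIZE_SNAPSHOT',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            INSERT INTO segment_size_hist
                              SELECT TRUNC(SYSDATE), owner, segment_name,
                                     segment_type, SUM(bytes)
                                FROM dba_segments
                               GROUP BY owner, segment_name, segment_type;
                            COMMIT;
                          END;]',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1',  -- run daily at 01:00
    enabled         => TRUE);
END;
/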