Is it true that with contemporary advanced SV RTL simulators, the simulation footprint may increase when using unpacked arrays vs packed arrays? If so, is this a problem, and do verification teams insist on rules to use packed arrays? TIA. Sanjay
It's very easy to do the experiment: using your simulator, create very big packed and unpacked arrays and assign values to them; many simulators can report memory usage. However, I recommend choosing packed vs unpacked arrays based on design intent. BTW, if you're using the simulator with 3rd-party tools via PLI/VPI, I think a packed array is better for a small array because it can be fetched at once, while an unpacked array might need to be iterated over element by element. For a big array, an unpacked array is better, because reading/writing a huge block when only a small portion of the data changes wastes CPU and memory. – jclin Jul 30 '13 at 10:16
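As a rough sketch of that experiment (the module name, array sizes, and values below are arbitrary assumptions; the memory-report mechanism depends on your simulator):

module mem_footprint_experiment;
  // Roughly 1 Mbit held as a single packed vector.
  logic [2**20-1:0] big_packed;
  // Roughly 1 M one-bit entries held as an unpacked array.
  logic big_unpacked [2**20];

  initial begin
    big_packed = '1;                // whole-vector assignment
    foreach (big_unpacked[i])
      big_unpacked[i] = 1'b1;       // element-by-element assignment
    // Compare memory usage here via the simulator's reporting command
    // or the operating system (e.g. top/ps); this differs per tool.
  end
endmodule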
1 Answer
"[Does] the simulation footprint may increase when using unpacked arrays vs the packed arrays?"
It depends on how the simulator allocates and accesses its memory. In most cases packed arrays will have a smaller memory footprint than unpacked arrays, but the difference in footprint size is usually not very significant. When the simulator accesses an array in memory, a packed array is handled as the whole vector, while an unpacked array can be accessed one element at a time. When the array is large and the whole array does not need to be accessed at once, unpacked arrays have better performance than packed arrays.
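A small illustration of that access difference (the module and signal names below are made up for this sketch):

module access_patterns (
  input  logic        clk,
  input  logic [9:0]  addr,
  input  logic [31:0] wr_data,
  output logic [7:0]  status_byte
);
  logic [31:0] status_word;
  logic [31:0] sample_mem [0:1023];

  // Packed array: the whole vector, or a part-select of it, is read in one operation.
  assign status_byte = status_word[15:8];

  // Unpacked array: only the indexed element is touched on each access.
  always_ff @(posedge clk) begin
    sample_mem[addr] <= wr_data;    // one element written per clock
    status_word      <= wr_data;    // whole packed word updated at once
  end
endmodule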
"is this a problem and do design teams insist on design rules to use packed arrays?"
If the machine running the simulator has sufficient memory to run the simulation, then it doesn't matter. Even so, memory footprint limitations should not be a design rule. Design rules should focus on quality, performance, silicon/FPGA limitations, and readability. If adjusting array structures helps meet real design rules, then the reduced memory footprint is a side benefit.
Test benches and non-synthesizable models are a different story when dealing with limited system memory (or very long simulation times). Choosing between packed and unpacked arrays is one of many factors to look into. Many commercial simulators come with documentation that gives guidelines for getting the best simulation results.
General array guidelines (a short usage sketch for the non-synthesizable types follows the list):
- packed array - synthesizable - best when the entire array is accessed in algorithmic operations; supports bit-select and part-select (LRM § 7.4.1)
- example:
reg [31:0] packed_array;
- unpacked array - synthesizable - best when the array is huge or each entry must be accessed individually (LRM § 7.4.2)
- example:
reg unpacked_array [31:0];
reg [31:0] unpacked_array_of_packed_arrays [15:0];
- associative array - not synthesizable - best when the ability to access any entry is needed but most entries are unlikely to be accessed during simulation (LRM § 7.8)
- example:
int associative_wildkey [*];
logic [127:0] associative_keytype [int];
- queue - not synthesizable - best when the number of entries is unknown and data access is pipeline-like (LRM § 7.10)
- example:
bit [7:0] queue [$];
- dynamic array - not synthesizable - best when an entire array needs to be created on the fly; it is good practice to delete the array once it is no longer needed, even before the simulation ends (LRM § 7.5)
- example:
int dynamic_array[];
initial dynamic_array = new[8];
- vectored net - (check synthesizer manual) - best when only the whole packed entry is accessed; bit-select and part-select are not allowed, which might give a smaller memory footprint. Limited to net types (LRM § 6.9.2)
- example:
wire vectored [7:0] vec;
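Pulling the non-synthesizable types together, here is a hedged, testbench-style sketch of typical operations (all names and values below are arbitrary examples):

module array_usage_sketch;
  bit   [7:0]   pipe_q [$];        // queue
  int           dyn [];            // dynamic array
  logic [127:0] lookup [int];      // associative array keyed by int

  initial begin
    // Queue: pipeline-like access with push/pop.
    pipe_q.push_back(8'hA5);
    pipe_q.push_back(8'h5A);
    $display("popped %0h", pipe_q.pop_front());

    // Dynamic array: created on the fly, deleted when no longer needed.
    dyn = new[8];
    foreach (dyn[i]) dyn[i] = i * i;
    dyn.delete();

    // Associative array: sparse storage; check existence before reading.
    lookup[42] = 128'hDEAD_BEEF;
    if (lookup.exists(42))
      $display("entry 42 = %0h", lookup[42]);
  end
endmodule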

Thank you Greg. I should've said verification design rules and verification teams. Edited above. I didn't follow what you meant by this sentence - "Usually the footprint size differences in not very sufficient." – shparekh Jul 31 '13 at 17:02