One possible approach is to attempt a direct translation to C. However, I now think that I should have used the built-in matrix type instead. This code relies on quite a bit of advanced functionality (I even left one unproven lemma as an exercise: it shows that N*sizeof(T) == sizeof(@[T][N])).
The loop to initialize a 2-dimensional array is implemented in the function:
extern
fun
multable_init (
mat: &(@[@[int][10]][10])? >> _
): void // end of [multable_init]
This function, in turn, uses two helper functions (each of which, essentially, initializes an array of elements). Also, the global variable multable
is allocated, and is then initialized using multable_init
(I thought it wouldn't work, but it did!).
Here is the code that initializes the global variable:
var multable : @[int?][100]
val p_multable = addr@multable
prval pf_multable = array_v_group (view@multable)
val () = multable_init (!p_multable)
prval pf_multable = array_v_ungroup (pf_multable)
prval pf_multable = array2matrix_v (pf_multable)
val theMultable = ref_make_viewptr {matrix (int, 10, 10)} (pf_multable | p_multable)
A mutable array is allocated on the stack; we then take its address (line 2) and turn its at-view from @[int?][100]
into @[@[int?][10]][10]
via grouping (line 3), then initialize it (line 4). After that, the grouped-array view is ungrouped back into a flat array view (line 5), that view is turned into a matrix view (line 6), and finally the result is put into a ref-cell (line 7).
The full code is at Glot.io.