data LayoutOrder : Type

We often deal with the 'logical' representation of tensors, but for
performance we need to be cognisant of how these tensors are stored in
physical memory, which is one-dimensional and linear.
There are two layout options: row-major and column-major.
NumPy, PyTorch, TensorFlow and JAX use row-major indexing.
The idea is that, once linearised:
- row-major: the last index of the array varies fastest
- column-major: the first index of the array varies fastest
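The two orders can be illustrated with a small sketch in Python (the library itself is Idris; `linearise` is a hypothetical helper, not part of the module):

```python
# Linearising a 2x3 matrix under each layout order.
# Row-major: the last (column) index varies fastest.
# Column-major: the first (row) index varies fastest.
def linearise(matrix, order="row-major"):
    rows, cols = len(matrix), len(matrix[0])
    if order == "row-major":
        # walk each row left to right
        return [matrix[i][j] for i in range(rows) for j in range(cols)]
    else:
        # walk each column top to bottom
        return [matrix[i][j] for j in range(cols) for i in range(rows)]

m = [[0, 1, 2],
     [3, 4, 5]]
print(linearise(m, "row-major"))     # [0, 1, 2, 3, 4, 5]
print(linearise(m, "column-major"))  # [0, 3, 1, 4, 2, 5]
```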
Eq LayoutOrder

Show LayoutOrder

DefaultLayoutOrder : LayoutOrder

Following the most popular conventions, we use row-major ordering by default.
splitFinProd : LayoutOrder -> Fin (m * n) -> (Fin m, Fin n)

Layout-aware version of splitProd from Data.Fin.Split.
Row-major: index k in an m×n matrix maps to (k/n, k%n)
- goes through all columns before moving to the next row
Column-major: index k maps to (k%m, k/m)
- goes through all rows before moving to the next column
For a 2×3 matrix:
Row-major order: (0,0), (0,1), (0,2), (1,0), (1,1), (1,2)
Column-major order: (0,0), (1,0), (0,1), (1,1), (0,2), (1,2)
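The arithmetic above can be sketched in Python (`split_fin_prod` is a hypothetical helper mirroring the Idris signature, with the dimensions passed explicitly):

```python
# Split linear index k of an m*n index space into a (row, col) pair
# under the given layout order.
def split_fin_prod(order, m, n, k):
    assert 0 <= k < m * n
    if order == "row-major":
        return (k // n, k % n)   # k/n full rows completed, k%n into current row
    else:
        return (k % m, k // m)   # k%m into current column, k/m columns completed

# Enumerating k = 0..5 for a 2x3 matrix reproduces the orders above:
print([split_fin_prod("row-major", 2, 3, k) for k in range(6)])
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print([split_fin_prod("column-major", 2, 3, k) for k in range(6)])
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```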
indexFinProd : LayoutOrder -> Fin m -> Fin n -> Fin (m * n)

Layout-aware version of indexProd from Data.Fin.Split.
Inverse of splitFinProd: given (row, col) indices, compute linear index.
Row-major: linear index = row * n + col
Column-major: linear index = col * m + row
For a 2×3 matrix with (row=1, col=2):
Row-major: 1 * 3 + 2 = 5
Column-major: 2 * 2 + 1 = 5
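The same arithmetic, and the round trip with the split direction, can be checked with a small Python sketch (`index_fin_prod` and `split_fin_prod` are hypothetical helpers mirroring the Idris signatures):

```python
# Recombine (row, col) into a linear index of an m*n index space.
def index_fin_prod(order, m, n, row, col):
    if order == "row-major":
        return row * n + col
    else:
        return col * m + row

# Inverse direction: split linear index k back into (row, col).
def split_fin_prod(order, m, n, k):
    return (k // n, k % n) if order == "row-major" else (k % m, k // m)

# The worked 2x3 example, row=1, col=2:
print(index_fin_prod("row-major", 2, 3, 1, 2))     # 1 * 3 + 2 = 5
print(index_fin_prod("column-major", 2, 3, 1, 2))  # 2 * 2 + 1 = 5

# index_fin_prod inverts split_fin_prod for every index and both orders:
for order in ("row-major", "column-major"):
    for k in range(2 * 3):
        row, col = split_fin_prod(order, 2, 3, k)
        assert index_fin_prod(order, 2, 3, row, col) == k
```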