Idris2Doc : Data.Layout

Data.Layout

(source)

Definitions

data LayoutOrder : Type
  We often deal with the 'logical' representation of tensors, but for
performance we need to be cognisant of how these tensors are stored in
physical memory, which is as a 1D linear sequence.
There are two options: row-major and column-major format.
NumPy, PyTorch, TensorFlow and JAX use row-major indexing.
The idea is that once linearised, with:
- row-major the last index of the array varies fastest
- column-major the first index of the array varies fastest
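The "varies fastest" distinction can be sketched with a small, hypothetical Python helper (not part of this library) that lists the multi-indices of a 2×2 array in the order their elements appear in linear memory:

```python
def linear_order(rows, cols, row_major=True):
    """Multi-indices (r, c) in the order their elements sit in memory."""
    if row_major:
        # row-major: the last index (column) varies fastest
        return [(r, c) for r in range(rows) for c in range(cols)]
    # column-major: the first index (row) varies fastest
    return [(r, c) for c in range(cols) for r in range(rows)]

print(linear_order(2, 2, row_major=True))   # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(linear_order(2, 2, row_major=False))  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```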

Totality: total
Visibility: public export
Constructors:
RowMajor : LayoutOrder
ColumnMajor : LayoutOrder

Hints:
EqLayoutOrder
ShowLayoutOrder
DefaultLayoutOrder : LayoutOrder
  Following most popular conventions, we use row-major ordering by default

Visibility: public export
splitFinProd : LayoutOrder -> Fin (m * n) -> (Fin m, Fin n)
  Layout-aware version of splitProd from Data.Fin.Split.

Row-major: index k in an m×n matrix maps to (k/n, k%n)
- goes through all columns before moving to the next row
Column-major: index k maps to (k%m, k/m)
- goes through all rows before moving to the next column

For a 2×3 matrix:
Row-major order: (0,0), (0,1), (0,2), (1,0), (1,1), (1,2)
Column-major order: (0,0), (1,0), (0,1), (1,1), (0,2), (1,2)
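The same arithmetic can be sketched in Python (a hypothetical helper, not the library's API); applied to every linear index of a 2×3 matrix it reproduces the two orderings above:

```python
def split_fin_prod(row_major, m, n, k):
    """Map linear index k of an m*n matrix back to a (row, col) pair."""
    assert 0 <= k < m * n
    if row_major:
        return (k // n, k % n)   # row-major: (k/n, k%n)
    return (k % m, k // m)       # column-major: (k%m, k/m)

print([split_fin_prod(True, 2, 3, k) for k in range(6)])
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print([split_fin_prod(False, 2, 3, k) for k in range(6)])
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```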

Visibility: public export
indexFinProd : LayoutOrder -> Fin m -> Fin n -> Fin (m * n)
  Layout-aware version of indexProd from Data.Fin.Split.
Inverse of splitFinProd: given (row, col) indices, compute linear index.

Row-major: linear index = row * n + col
Column-major: linear index = col * m + row

For a 2×3 matrix with (row=1, col=2):
Row-major: 1 * 3 + 2 = 5
Column-major: 2 * 2 + 1 = 5
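The worked example above can be checked with a minimal Python sketch of the same formulas (hypothetical names, not the library's API):

```python
def index_fin_prod(row_major, m, n, row, col):
    """Linear index of (row, col) in an m*n matrix under the given layout."""
    if row_major:
        return row * n + col   # row-major: row * n + col
    return col * m + row       # column-major: col * m + row

# 2x3 matrix with (row=1, col=2): both layouts happen to give 5 here
print(index_fin_prod(True, 2, 3, 1, 2))   # 1 * 3 + 2 = 5
print(index_fin_prod(False, 2, 3, 1, 2))  # 2 * 2 + 1 = 5
```

Note that the two layouts agreeing on 5 is a coincidence of this particular index; for (row=0, col=1) they give 1 and 2 respectively.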

Visibility: public export