%language ElabReflection

||| We often deal with the 'logical' representation of tensors, but for
||| performance we need to be aware of how these tensors are stored in
||| physical memory, where they are laid out in 1D linear order.
||| There are two options: row-major and column-major format.
||| NumPy, PyTorch, TensorFlow and JAX use row-major indexing.
||| The idea is that, once linearised:
||| - with row-major, the last index of the array varies fastest
||| - with column-major, the first index of the array varies fastest

||| Following the most popular convention, we use row-major ordering by default.
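
-- Illustrative sketch only (hypothetical names, not part of the module's API):
-- the index order of a 2x3 matrix under each layout, written as list
-- comprehensions. For row-major the column (last index) is the inner,
-- fastest-varying generator; for column-major the row (first index) is.
rowMajorPairs : List (Nat, Nat)
rowMajorPairs = [ (r, c) | r <- [0, 1], c <- [0, 1, 2] ]
-- = [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)]

colMajorPairs : List (Nat, Nat)
colMajorPairs = [ (r, c) | c <- [0, 1, 2], r <- [0, 1] ]
-- = [(0,0), (1,0), (0,1), (1,1), (0,2), (1,2)]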

||| Layout-aware version of splitProd from Data.Fin.Split.
|||
||| Row-major: index k in an m×n matrix maps to (k/n, k%n)
||| - goes through all columns before moving to the next row
||| Column-major: index k maps to (k%m, k/m)
||| - goes through all rows before moving to the next column
|||
||| For a 2×3 matrix:
||| Row-major order:    (0,0), (0,1), (0,2), (1,0), (1,1), (1,2)
||| Column-major order: (0,0), (1,0), (0,1), (1,1), (0,2), (1,2)
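
-- A minimal, proof-free sketch (hypothetical names; plain Integer instead of
-- Fin, so no bounds are enforced) of the arithmetic described above:
-- recover (row, col) from a linear offset into an m×n matrix.
splitRowMajor : (n : Integer) -> (k : Integer) -> (Integer, Integer)
splitRowMajor n k = (k `div` n, k `mod` n)

splitColMajor : (m : Integer) -> (k : Integer) -> (Integer, Integer)
splitColMajor m k = (k `mod` m, k `div` m)

-- For the 2×3 example: splitRowMajor 3 5 = (1, 2) and splitColMajor 2 5 = (1, 2).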

||| Layout-aware version of indexProd from Data.Fin.Split.
||| Inverse of splitFinProd: given (row, col) indices, compute the linear index.
|||
||| Row-major:    linear index = row * n + col
||| Column-major: linear index = col * m + row
|||
||| For a 2×3 matrix with (row=1, col=2):
||| Row-major:    1 * 3 + 2 = 5
||| Column-major: 2 * 2 + 1 = 5
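
-- The matching proof-free sketch (hypothetical names) for the inverse
-- direction: flatten (row, col) back into a linear offset.
indexRowMajor : (n : Integer) -> (row, col : Integer) -> Integer
indexRowMajor n row col = row * n + col

indexColMajor : (m : Integer) -> (row, col : Integer) -> Integer
indexColMajor m row col = col * m + row

-- indexRowMajor 3 1 2 = 5   -- 1 * 3 + 2
-- indexColMajor 2 1 2 = 5   -- 2 * 2 + 1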