data Tensor : Shape -> Type -> Type A scalar or array. Construct a `Tensor` with the function `tensor`.
@shape The `Tensor` shape.
@dtype The element type.
Totality: total
Visibility: export
Constructor: MkTensor : Value -> Tensor shape dtype
Hints:
Primitive dtype => Show (Tag (Tensor shape dtype))
Taggable (Tensor shape dtype)
data TagT : (Type -> Type) -> Type -> Type The effect of tagging nodes in a computational graph.
Totality: total
Visibility: export
Constructor: MkTagT : StateT Env m a -> TagT m a
Hints:
Monad m => Applicative (TagT m)
Functor m => Functor (TagT m)
Monad m => Monad (TagT m)
MonadTrans TagT
0 Tag : Type -> Type
Totality: total
Visibility: public export
interface Taggable : Type -> Type
Parameters: a
Methods:
tag : Monad m => a -> TagT m a Mark an expression to be efficiently reused. For example, in
```
bad : Tensor [9999999] F64
bad = let x = fill {shape = [9999999]} 1.0 in x + x
good : Tag $ Tensor [9999999] F64
good = do x <- tag $ fill {shape = [9999999]} 1.0
          pure (x + x)
```
the large vector `x` is calculated twice in `bad`, but once in `good`, as `tag` marks it for
sharing.
Types that implement this interface should `tag` the constituent components they deem worth sharing.
For example, see the implementation for tuples.
See tutorial _Nuisances in the Tensor API_ for details.
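For instance, the tuple implementation mentioned above can be sketched as follows (a hedged sketch, not the library's verbatim source):
```
-- Tag each component of a pair in turn, so both are marked for sharing.
(Taggable a, Taggable b) => Taggable (a, b) where
  tag (x, y) = do x <- tag x
                  y <- tag y
                  pure (x, y)
```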
Implementations:
Taggable (Dataset f t)
Taggable (Gaussian event dim)
Taggable Op
Taggable (Tensor shape dtype)
(Taggable a, Taggable b) => Taggable (a, b)
tag : Taggable a => Monad m => a -> TagT m a Mark an expression to be efficiently reused. For example, in
```
bad : Tensor [9999999] F64
bad = let x = fill {shape = [9999999]} 1.0 in x + x
good : Tag $ Tensor [9999999] F64
good = do x <- tag $ fill {shape = [9999999]} 1.0
          pure (x + x)
```
the large vector `x` is calculated twice in `bad`, but once in `good`, as `tag` marks it for
sharing.
Types that implement this interface should `tag` the constituent components they deem worth sharing.
For example, see the implementation for tuples.
See tutorial _Nuisances in the Tensor API_ for details.
Totality: total
Visibility: public export
tensor : PrimitiveRW dtype a => Literal shape a -> Tensor shape dtype Construct a `Tensor` from `Literal` data. For example
```
x : Tensor [2, 3] S32
x = tensor [[1, 2, 3],
            [4, 5, 6]]
```
Totality: total
Visibility: export
fromDouble : Double -> Tensor [] F64
Totality: total
Visibility: export
fromInteger : Integer -> Tensor [] S32
Totality: total
Visibility: export
data TensorList : List Shape -> List Type -> Type A list of `Tensor`s, along with the conversions needed to evaluate them to `Literal`s.
The list is parametrized by the shapes and types of the resulting `Literal`s.
Totality: total
Visibility: public export
Constructors:
Nil : TensorList [] []
(::) : PrimitiveRW dtype ty => Tensor shape dtype -> TensorList shapes tys -> TensorList (shape :: shapes) (ty :: tys)
eval : Device -> Tag (TensorList shapes tys) -> IO (All2 Literal shapes tys) Evaluate a list of `Tensor`s as a list of `Literal`s. Tensors in the list can have different
shapes and element types. For example,
```
main : Device -> IO ()
main device = do
  [x, y] <- eval device $ do
    let x = tensor {dtype = F64} [1.2, 3.4]
    y <- reduce @{Sum} [0] x
    pure [x, y]
  printLn x
  printLn y
```
In contrast to `Tensor.eval` when called on multiple tensors, this function constructs and
compiles the graph just once.
Visibility: export
eval : Device -> TensorList shapes tys -> IO (All2 Literal shapes tys) A convenience wrapper for `TensorList.Tag.eval`, for use with a bare `TensorList`.
Visibility: export
eval : Device -> PrimitiveRW dtype ty => Tag (Tensor shape dtype) -> IO (Literal shape ty) Evaluate a `Tensor`, returning its value as a `Literal`. This function builds and executes the
computational graph.
**Note:** Each call to `eval` will rebuild and execute the graph; multiple calls to `eval` on
different tensors, even if they are in the same computation, will be treated independently.
To efficiently evaluate multiple tensors at once, use `TensorList.Tag.eval`.
Visibility: export
eval : Device -> PrimitiveRW dtype ty => Tensor shape dtype -> IO (Literal shape ty) A convenience wrapper for `Tag.eval`, for use with a bare `Tensor`.
Visibility: export
inf : Tensor [] F64 Positive infinity.
Totality: total
Visibility: export
nan : Tensor [] F64 NaN (not a number).
Totality: total
Visibility: export
interface Ord : Type -> Type Ordered element types.
Parameters: dtype
Constraints: Eq dtype
Methods:
min : Tensor [] dtype Compares less than or equal to any other value (except NaN).
max : Tensor [] dtype Compares greater than or equal to any other value (except NaN).
Implementations:
Ord U32
Ord S32
Ord U64
Ord F64
min : Ord dtype => Tensor [] dtype Compares less than or equal to any other value (except NaN).
Totality: total
Visibility: public export
max : Ord dtype => Tensor [] dtype Compares greater than or equal to any other value (except NaN).
Totality: total
Visibility: public export
minFinite : Tensor [] F64 The most negative possible finite float, approx. -1.8e308
Totality: total
Visibility: export
maxFinite : Tensor [] F64 The most positive possible finite float, approx. 1.8e308
Totality: total
Visibility: export
castDtype : Integral a => Tensor shape a -> Tensor shape F64 Cast the element type. For example, `castDtype (tensor {dtype = S32} [1, -2])` is
`tensor {dtype = F64} [1.0, -2.0]`.
Totality: total
Visibility: export
grad : (Tensor shape F64 -> Tag (Tensor [] F64)) -> Tensor shape F64 -> Tag (Tensor shape F64) Reverse-mode automatic differentiation.
`grad` can be applied repeatedly to obtain higher derivatives, though we do not yet support
derivatives of vector-valued functions.
For example, for
```
f : Tensor [2] F64 -> Tag $ Tensor [] F64
f x = do
  x <- tag x
  let (x0, x1) = (slice [at 0] x, slice [at 1] x)
  pure $ x0 / x1
```
`grad f (tensor [3.0, 2.0])` produces `tensor [0.5, -0.75]`.
**Warning:** `grad` is experimental, and only implemented for a subset of the tensor API.
Visibility: export
reshape : Primitive dtype => {auto 0 _ : product from = product to} -> Tensor from dtype -> Tensor to dtype Reshape a `Tensor`. For example, `reshape {to = [2, 1]} (tensor [3, 4])` is
`tensor [[3], [4]]`. The output can have a different rank to the input.
Totality: total
Visibility: export
expand : Primitive dtype => (axis : Nat) -> {auto 0 inBounds : LTE axis (length shape)} -> Tensor shape dtype -> Tensor (insertAt axis 1 shape) dtype Add a dimension of length one at the specified `axis`. The new dimension will be at the
specified `axis` in the new `Tensor` (as opposed to the original `Tensor`). For example,
`expand 1 $ tensor [[1, 2], [3, 4], [5, 6]]` is `tensor [[[1, 2]], [[3, 4]], [[5, 6]]]`.
Totality: total
Visibility: export
data Squeezable : (0 _ : Shape) -> (0 _ : Shape) -> Type A `Squeezable from to` constitutes proof that the shape `from` can be squeezed to the
shape `to`. Squeezing is the process of removing any number of dimensions of length one.
Totality: total
Visibility: public export
Constructors:
Same : Squeezable x x Proof that a shape can be squeezed to itself. For example:
[] to []
[3, 4] to [3, 4]
Match : Squeezable from to -> Squeezable (x :: from) (x :: to) Proof that any dimensions (including those of length 1) can be preserved in the process of
squeezing. For example:
...
Nest : Squeezable from to -> Squeezable (1 :: from) to Proof that any dimensions of length one can be squeezed out. For example:
[1, 3, 1, 1, 4] to [3, 4]
squeeze : Primitive dtype => {auto 0 _ : Squeezable from to} -> Tensor from dtype -> Tensor to dtype Remove dimensions of length one from a `Tensor` such that it has the desired shape. For example:
```
x : Tensor [2, 1, 3, 1] S32
x = tensor [[[[4], [5], [6]]],
            [[[7], [8], [9]]]]
y : Tensor [2, 1, 3] S32
y = squeeze x
```
is
```
y : Tensor [2, 1, 3] S32
y = tensor [[[4, 5, 6]],
            [[7, 8, 9]]]
```
Totality: total
Visibility: export
data SliceOrIndex : Nat -> Type A `SliceOrIndex d` is a valid slice or index into a dimension of size `d`. See `slice` for
details.
Totality: total
Visibility: export
Constructors:
Slice : (from : Nat) -> (to : Nat) -> {auto 0 _ : from + size = to} -> {auto 0 _ : LTE to d} -> SliceOrIndex d
Index : (idx : Nat) -> {auto 0 _ : LT idx d} -> SliceOrIndex d
DynamicSlice : Tensor [] U64 -> (size : Nat) -> {auto 0 _ : LTE size d} -> SliceOrIndex d
DynamicIndex : Tensor [] U64 -> SliceOrIndex d
at : (idx : Nat) -> {auto 0 _ : LT idx d} -> SliceOrIndex d Index at `idx`. See `slice` for details.
Totality: total
Visibility: public export
at : Tensor [] U64 -> SliceOrIndex d Index at the specified scalar index. See `slice` for details.
Totality: total
Visibility: public export
.to : (from : Nat) -> (to : Nat) -> {auto 0 _ : from + size = to} -> {auto 0 _ : LTE to d} -> SliceOrIndex d Slice from `from` (inclusive) to `to` (exclusive). See `slice` for details.
Totality: total
Visibility: public export
.size : Tensor [] U64 -> (size : Nat) -> {auto 0 _ : LTE size d} -> SliceOrIndex d Slice `size` elements starting at the specified scalar `U64` index. See `slice` for details.
Totality: total
Visibility: public export
all : SliceOrIndex d Slice across all indices along an axis. See `slice` for details.
Totality: total
Visibility: public export
data MultiSlice : Shape -> Type A `MultiSlice shape` is a valid multi-dimensional slice into a tensor with shape `shape`.
See `slice` for details.
Totality: total
Visibility: public export
Constructors:
Nil : MultiSlice ds
(::) : SliceOrIndex d -> MultiSlice ds -> MultiSlice (d :: ds)
slice : MultiSlice shape -> Shape The shape of a tensor produced by slicing with the specified multi-dimensional slice. See
`Tensor.slice` for details.
Totality: total
Visibility: public export
slice : Primitive dtype => (at : MultiSlice shape) -> Tensor shape dtype -> Tensor (slice at) dtype Slice or index `Tensor` axes. Each axis can be sliced or indexed, and this can be done with
either static (`Nat`) or dynamic (scalar `U64`) indices.
**Static indices**
Static indices are `Nat`s. For example, for
```
x : Tensor [5, 6] S32
x = tensor [[ 0,  1,  2,  3,  4,  5],
            [ 6,  7,  8,  9, 10, 11],
            [12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23],
            [24, 25, 26, 27, 28, 29]]
```
we can index as `slice [at 1] x` to get
```
x : Tensor [6] S32
x = tensor [6, 7, 8, 9, 10, 11]
```
or we can slice as `slice [2.to 4] x` to get
```
x : Tensor [2, 6] S32
x = tensor [[12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23]]
```
Note that in `2.to 4`, the 2 is inclusive, and the 4 exclusive, so we return indices 2 and 3.
**Dynamic indices**
Dynamic indices are scalar `U64` values, and the API works slightly differently because we
can't know the value of dynamic indices until the graph is executed. For indexing, with scalar
`U64` index `i` in `slice [at i] x`, `i` is clamped to be a valid index into that dimension.
For example, for `i = tensor 1`, `slice [at i] x` is
```
x : Tensor [6] S32
x = tensor [6, 7, 8, 9, 10, 11]
```
as in the static case. However, for `i = tensor 10`, `slice [at i] x` returns the last row
```
x : Tensor [6] S32
x = tensor [24, 25, 26, 27, 28, 29]
```
We can also slice by specifying a scalar `U64` start index, and a static size, as
`slice [i.size 2] x` with `i = tensor 2` to get
```
x : Tensor [2, 6] S32
x = tensor [[12, 13, 14, 15, 16, 17],
            [18, 19, 20, 21, 22, 23]]
```
For a given slice `size`, the dynamic start index is clamped such that we always get `size`
elements along that axis. For example, `slice [i.size 2] x` with `i = tensor 4` is
```
x : Tensor [2, 6] S32
x = tensor [[18, 19, 20, 21, 22, 23],
            [24, 25, 26, 27, 28, 29]]
```
which starts at index 3 rather than index 4.
**Mixed static, dynamic, slicing and indexing**
Each axis can only be sliced or indexed, and must use only static or dynamic indices. However,
across axes, we can mix these four arbitrarily. For example, with `slice [2.to 4, at 1] x` to
get
```
x : Tensor [2] S32
x = tensor [13, 19]
```
or with `i = tensor 2` in `slice [at 1, i.size 2] x` to get
```
x : Tensor [2] S32
x = tensor [8, 9]
```
Slices and indices apply to the leading axes of the tensor. For trailing axes omitted from the
multi-dimensional slice, the whole of the axis is returned. If we want to slice or index over
later axes and retain all indices in a leading axis, we can use the convenience function `all`,
as `slice [all, at 3] x` to get
```
x : Tensor [5] S32
x = tensor [3, 9, 15, 21, 27]
```
This is exactly the same as the more manual `slice [0.to 5, at 3] x` and
`slice [(tensor 0).size 5, at 3] x`.
@at The multi-dimensional slices and indices at which to slice the tensor.
Totality: total
Visibility: export
concat : Primitive dtype => (axis : Nat) -> Tensor s dtype -> Tensor s' dtype -> {auto 0 inBounds : (InBounds axis s, InBounds axis s')} -> {auto 0 _ : deleteAt axis s = deleteAt axis s'} -> Tensor (replaceAt axis (index axis s + index axis s') s) dtype Concatenate two `Tensor`s along the specified `axis`. For example,
`concat 0 (tensor [[1, 2], [3, 4]]) (tensor [[5, 6]])` and
`concat 1 (tensor [[1], [3], [5]]) (tensor [[2], [4], [6]])` are both
`tensor [[1, 2], [3, 4], [5, 6]]`.
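The first example above, written out in full (types as documented here):
```
-- Concatenating along axis 0 adds the row counts: [2, 2] and [1, 2] give [3, 2].
xy : Tensor [3, 2] S32
xy = concat 0 (tensor [[1, 2], [3, 4]]) (tensor [[5, 6]])
```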
Totality: total
Visibility: export
.T : Tensor [m, n] dtype -> Tensor [n, m] dtype Transpose a matrix. For example, `(tensor [[1, 2], [3, 4]]).T` is `tensor [[1, 3], [2, 4]]`.
Totality: total
Visibility: export
transpose : (ordering : List Nat) -> Tensor shape dtype -> {auto 0 _ : length ordering = length shape} -> {auto 0 _ : unique ordering = True} -> {auto 0 inBounds : All (flip InBounds shape) ordering} -> Tensor (multiIndex ordering shape) dtype Transpose axes of a tensor. This is a more general version of `(.T)`, in which you can
transpose any number of axes in a tensor of arbitrary rank. The i'th axis in the resulting
tensor corresponds to the `index i ordering`'th axis in the input tensor. For example, for
```
x : Tensor [2, 3, 4] S32
x = tensor [[[ 0,  1,  2,  3],
             [ 4,  5,  6,  7],
             [ 8,  9, 10, 11]],
            [[12, 13, 14, 15],
             [16, 17, 18, 19],
             [20, 21, 22, 23]]]
```
`transpose [0, 2, 1] x` is
```
x : Tensor [2, 4, 3] S32
x = tensor [[[ 0,  4,  8],
             [ 1,  5,  9],
             [ 2,  6, 10],
             [ 3,  7, 11]],
            [[12, 16, 20],
             [13, 17, 21],
             [14, 18, 22],
             [15, 19, 23]]]
```
`transpose [2, 0, 1] x` is
```
x : Tensor [4, 2, 3] S32
x = tensor [[[ 0,  4,  8],
             [12, 16, 20]],
            [[ 1,  5,  9],
             [13, 17, 21]],
            [[ 2,  6, 10],
             [14, 18, 22]],
            [[ 3,  7, 11],
             [15, 19, 23]]]
```
In order to see what effect transposing a tensor has, it can help to bear in mind the following:
* if an element can be found with `slice [at 3, at 4, at 5] x` in the original tensor,
that same element can instead be found with `slice [at 5, at 3, at 4]` given a
`transpose [2, 0, 1]`. That is, transposing axes re-orders indices when indexing.
* with `transpose [2, 0, 1]`, traversing the first axis in the result is equivalent to
traversing the last axis in the input. Similarly, traversing the last axis in the result is
equivalent to traversing the second axis in the input.
Totality: total
Visibility: export
data DimBroadcastable : (0 _ : Nat) -> (0 _ : Nat) -> Type A `DimBroadcastable from to` proves that a dimension of size `from` can be broadcast to a
dimension of size `to`.
Totality: total
Visibility: public export
Constructors:
Same : DimBroadcastable x x Proof that any dimension can be broadcast to itself. For example in shapes `[2, 3]` to
`[2, 3]`.
Stack : DimBroadcastable 1 n Proof that a dimension of length one can be broadcast to any size. For example in shapes
`[2, 1]` to `[2, 3]`.
Zero : DimBroadcastable n 0 Proof that any dimension can be broadcast to zero. For example in shapes `[2, 3]` to `[2, 0]`.
data Broadcastable : (0 _ : Shape) -> (0 _ : Shape) -> Type A `Broadcastable from to` constitutes proof that the shape `from` can be broadcast to the
shape `to`.
Totality: total
Visibility: public export
Constructors:
Same : Broadcastable x x Proof that a shape can be broadcast to itself. For example:
[] to []
[3, 4] to [3, 4]
Implementation note: we could have used `Broadcast [] []`, which would have resulted in more
atomic constructors for `Broadcastable`, but the author guesses that this implementation helps
the type checker avoid applications of `Match`.
Match : {auto 0 _ : length from = length to} -> {auto 0 _ : DimBroadcastable f t} -> Broadcastable from to -> Broadcastable (f :: from) (t :: to) Proof that a dimension of size `f` can be broadcast to size `t` if these dimensions
are `DimBroadcastable f t`. For example:
[2, 3] to [2, 3]
[2, 1] to [2, 3]
[2, 1] to [2, 0]
Nest : Broadcastable f t -> Broadcastable f (d :: t) Proof that broadcasting can add outer dimensions i.e. nesting. For example:
[3] to [1, 3]
[3] to [5, 3]
Hint: (to : Shape) -> Broadcastable [] to
broadcastableByLeading : (leading : List Nat) -> Broadcastable shape (leading ++ shape) A shape can be extended with any number of leading dimensions.
@leading The leading dimensions.
Totality: total
Visibility: export
scalarToAnyOk : (to : Shape) -> Broadcastable [] to A scalar can be broadcast to any shape.
Totality: total
Visibility: export
broadcast : Primitive dtype => Broadcastable from to => Tensor from dtype -> Tensor to dtype Broadcast a `Tensor` to a new compatible shape. For example,
```
x : Tensor [2, 3] S32
x = broadcast (tensor [4, 5, 6])
```
is
```
x : Tensor [2, 3] S32
x = tensor [[4, 5, 6], [4, 5, 6]]
```
Totality: total
Visibility: export
fill : PrimitiveRW dtype ty => ty -> Tensor shape dtype A `Tensor` where every element has the specified value. For example,
```
fives : Tensor [2, 3] S32
fives = fill 5
```
is
```
fives : Tensor [2, 3] S32
fives = tensor [[5, 5, 5],
                [5, 5, 5]]
```
Totality: total
Visibility: export
iota : Num dtype => (axis : Nat) -> {auto 0 _ : InBounds axis shape} -> Tensor shape dtype A constant where values increment from zero along the specified `axis`. For example,
```
x : Tensor [3, 5] S32
x = iota 1
```
is the same as
```
x : Tensor [3, 5] S32
x = tensor [[0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4],
            [0, 1, 2, 3, 4]]
```
and
```
x : Tensor [3, 5] S32
x = iota 0
```
is the same as
```
x : Tensor [3, 5] S32
x = tensor [[0, 0, 0, 0, 0],
            [1, 1, 1, 1, 1],
            [2, 2, 2, 2, 2]]
```
Totality: total
Visibility: export
while1 : Primitive dtype => (Tensor shape dtype -> Tag (Tensor [] PRED)) -> (Tensor shape dtype -> Tag (Tensor shape dtype)) -> Tensor shape dtype -> Tag (Tensor shape dtype) A while-loop operating on a single tensor.
`while1` iteratively checks if the tensor satisfies `condition`, and if it does, updates it with
`body`.
**Note:** For the XLA compiler, there are scoping restrictions on functions passed to `while1`.
See the tutorial _Nuisances in the Tensor API_ for details.
@condition The guard condition for each iteration.
@body The update step.
@initial The initial tensor.
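For instance, a loop that repeatedly doubles a scalar until it reaches a threshold could be sketched as follows (a hedged sketch; the threshold and initial value are illustrative):
```
-- Double `x` until it is at least 100, starting from 3.
doubleUp : Tag $ Tensor [] S32
doubleUp = while1 (\x => pure (x < fill 100)) (\x => pure (x + x)) (fill 3)
```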
Visibility: export
while2 : Primitive a => Primitive a' => (Tensor s a -> Tensor s' a' -> Tag (Tensor [] PRED)) -> (Tensor s a -> Tensor s' a' -> Tag (Tensor s a, Tensor s' a')) -> Tensor s a -> Tensor s' a' -> Tag (Tensor s a, Tensor s' a') A while-loop operating on two tensors.
`while2` iteratively checks if the tensors together satisfy `condition`, and if so, updates them
with `body`.
**Note:** For the XLA compiler, there are scoping restrictions on functions passed to `while2`.
See the tutorial _Nuisances in the Tensor API_ for details.
@condition The guard condition for each iteration.
@body The update step.
@initial One initial tensor.
@initial' The other initial tensor.
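For instance, a counter running alongside an accumulator could be sketched as (hedged; the names and bound are illustrative):
```
-- Add the counter `i` to `acc` on each step, stopping once `i` reaches 5.
countAndSum : Tag (Tensor [] S32, Tensor [] S32)
countAndSum = while2 (\i, _ => pure (i < fill 5))
                     (\i, acc => pure (i + fill 1, acc + i))
                     (fill 0) (fill 0)
```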
Visibility: export
map : (Primitive a, Primitive b) => (Tensor [] a -> Tag (Tensor [] b)) -> Tensor shape a -> Tag (Tensor shape b) Lift a unary function on scalars to an element-wise function on `Tensor`s of arbitrary shape.
For example,
```
recip : Tensor [] F64 -> Tag $ Tensor [] F64
recip x = pure $ 1.0 / x
```
can be lifted to an element-wise reciprocal function as `map recip (tensor [-2, 0.4])`,
which produces `tensor [-0.5, 2.5]`.
**Note:** For the XLA compiler, there are scoping restrictions on the function passed to `map`.
See the tutorial _Nuisances in the Tensor API_ for details.
Totality: total
Visibility: export
map2 : (Primitive a, (Primitive b, Primitive c)) => (Tensor [] a -> Tensor [] b -> Tag (Tensor [] c)) -> Tensor shape a -> Tensor shape b -> Tag (Tensor shape c) Lift a binary function on scalars to an element-wise function on `Tensor`s of arbitrary shape.
For example,
```
addRecip : Tensor [] F64 -> Tensor [] F64 -> Tag $ Tensor [] F64
addRecip x y = pure $ x + 1.0 / y
```
can be lifted to an element-wise function as
`map2 addRecip (tensor [3.0, -3.0]) (tensor [-2.0, 0.4])`, which produces `tensor [2.5, -0.5]`.
**Note:** For the XLA compiler, there are scoping restrictions on the function passed to `map2`.
See the tutorial _Nuisances in the Tensor API_ for details.
Totality: total
Visibility: export
reduce : Monoid (Tensor [] dtype) => Primitive dtype => (axes : List Nat) -> {auto 0 _ : Sorted LT axes} -> {auto 0 axesInBounds : All (flip InBounds shape) axes} -> Tensor shape dtype -> Tag (Tensor (deleteAt axes shape) dtype) Reduce elements along the given `axes` of a `Tensor` according to a specified `reducer` `Monoid`.
For example, if `x = tensor [[0, 1, 2], [3, 4, 5]]`, then `reduce @{Sum} [0] x` produces
`tensor [3, 5, 7]`, and `reduce @{Sum} [1] x` produces `tensor [3, 12]`.
**Note:** `Semigroup` doesn't use `Tag`, which limits the functions that can be used in
`reduce`. However, the most commonly used semigroups don't need `Tag`, including `Sum`,
`Prod`, `Min` and `Max`, so for ergonomics, we have opted to use `Monoid` as is. We can
provide an overloaded variant if requested.
**Note:** For the XLA compiler, there are scoping restrictions on the monoid's semigroup.
See the tutorial _Nuisances in the Tensor API_ for details.
@reducer How to reduce elements along the given `axes`.
@axes The axes along which to reduce elements.
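The prose example above, written out as a definition (a sketch using the API as documented):
```
-- Sum over axis 0 of a [2, 3] matrix, leaving shape [3].
rowTotals : Tag $ Tensor [3] S32
rowTotals = reduce @{Sum} [0] (tensor [[0, 1, 2], [3, 4, 5]])
```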
Totality: total
Visibility: export
sort : Primitive dtype => (Tensor [] dtype -> Tensor [] dtype -> Tensor [] PRED) -> (dimension : Nat) -> Tensor shape dtype -> {auto 0 _ : InBounds dimension shape} -> Tag (Tensor shape dtype) Sort the elements of a `Tensor` along a specified `dimension` according to a scalar-wise
ordering. For sorting function `f`, elements are sorted such that for consecutive sorted
elements `a` and `b`, either `f a b` is true, or `f a b` *and* `f b a` are false.
**Note:** Sorting is not stable, meaning elements that compare equal according to the ordering may
be sorted in a different order than they appear in the input.
**Note:** `sort` is limited to comparison functions without `Tag`. However, since the most
commonly-used functions, including (>), (<), (>=), and (<=), don't use `Tag`, we have opted to
omit it for ergonomics. We can trivially provide an overloaded variant if requested.
For example, for `x = tensor [[1, 6, 4], [3, 2, 5]]`, `sort (<) 0 x` produces
`tensor [[1, 2, 4], [3, 6, 5]]`, while `sort (<) 1 x` produces
`tensor [[1, 4, 6], [2, 3, 5]]`.
**Note:** For the XLA compiler, there are scoping restrictions on the function passed to `sort`.
See the tutorial _Nuisances in the Tensor API_ for details.
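The first prose example, written as a definition (a sketch using the API as documented):
```
-- Sort each column (dimension 0) in ascending order.
sortedCols : Tag $ Tensor [2, 3] S32
sortedCols = sort (<) 0 (tensor [[1, 6, 4], [3, 2, 5]])
```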
Totality: total
Visibility: export
reverse : (axes : List Nat) -> {auto 0 _ : Sorted LT axes} -> {auto 0 _ : All (flip InBounds shape) axes} -> Tensor shape dtype -> Tensor shape dtype Reverse elements along the specified axes. For example, for
```
x : Tensor [2, 3] S32
x = tensor [[-2, -1, 0],
            [ 1,  2, 3]]
```
`reverse [0] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 1,  2, 3],
            [-2, -1, 0]]
```
`reverse [1] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 0, -1, -2],
            [ 3,  2,  1]]
```
and `reverse [0, 1] x` is
```
x : Tensor [2, 3] S32
x = tensor [[ 3,  2,  1],
            [ 0, -1, -2]]
```
**Note:** This function requires `axes` is ordered simply so that elements are unique.
The ordering itself is irrelevant to the implementation, but ensures uniqueness without using
proofs of contradiction that can be difficult for Idris to construct.
Totality: total
Visibility: export
(==) : Eq dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise equality. For example, `tensor [1, 2] == tensor [1, 3]` is
`tensor [True, False]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(/=) : Eq dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise inequality. For example, `tensor [1, 2] /= tensor [1, 3]` is
`tensor [False, True]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(<) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise less than. For example, `tensor [1, 2, 3] < tensor [2, 2, 2]` is
`tensor [True, False, False]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(>) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise greater than. For example, `tensor [1, 2, 3] > tensor [2, 2, 2]` is
`tensor [False, False, True]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(<=) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise less than or equal. For example, `tensor [1, 2, 3] <= tensor [2, 2, 2]`
is `tensor [True, True, False]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(>=) : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape PRED Element-wise greater than or equal. For example,
`tensor [1, 2, 3] >= tensor [2, 2, 2]` is `tensor [False, True, True]`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 6
(&&) : Tensor shape PRED -> Tensor shape PRED -> Tensor shape PRED Element-wise boolean and. For example,
`tensor [True, True, False, False] && tensor [True, False, True, False]` is
`tensor [True, False, False, False]`.
Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 5
(||) : Tensor shape PRED -> Tensor shape PRED -> Tensor shape PRED Element-wise boolean or. For example,
`tensor [True, True, False, False] || tensor [True, False, True, False]` is
`tensor [True, True, True, False]`.
Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 4
not : Tensor shape PRED -> Tensor shape PRED Element-wise boolean negation. For example, `not (tensor [True, False])` is
`tensor [False, True]`.
Totality: total
Visibility: export
select : Primitive dtype => Tensor shape PRED -> Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype Choose elements from two `Tensor`s based on a `Tensor` of predicates. For each element in the
predicates, the output will use the corresponding element from `onTrue` if the element is
truthy, else the element from `onFalse`. For example, for
```
preds : Tensor [3] PRED
preds = tensor [False, True, False]
onTrue : Tensor [3] S32
onTrue = tensor [1, 2, 3]
onFalse : Tensor [3] S32
onFalse = tensor [4, 5, 6]
```
`select preds onTrue onFalse` is `tensor [4, 2, 6]`.
@onTrue The elements to choose where the predicate elements are truthy.
@onFalse The elements to choose where the predicate elements are falsy.
Totality: total
Visibility: export
if_ : Primitive dtype => Tensor [] PRED -> Tag (Tensor shape dtype) -> Tag (Tensor shape dtype) -> Tag (Tensor shape dtype) Use a scalar predicate to evaluate one of two branches. If the predicate is truthy,
evaluate `onTrue`, else `onFalse`. Each branch is evaluated lazily; only one will be
evaluated.
For example, for
```
f : Tensor [] F64 -> Tag $ Tensor [] F64
f x = if_ (x < 1.0) (pure $ cos x) (do x <- tag x; pure (x * x))
```
`f 0.0` produces `1.0`, and `f 4.0` produces `16.0`.
**Note:** For the XLA compiler, there are scoping restrictions on branches used in `if_`, as
StableHLO gives branches their own scope. See the tutorial _Nuisances in the Tensor API_
for details.
@onTrue The branch to evaluate if the predicate is truthy.
@onFalse The branch to evaluate if the predicate is falsy.
Totality: total
Visibility: export
identity : Num dtype => Tensor [n, n] dtype The identity tensor, with inferred shape and element type. For example,
```
x : Tensor [2, 2] S32
x = identity
```
is
```
x : Tensor [2, 2] S32
x = tensor [[1, 0],
            [0, 1]]
```
Totality: total
Visibility: export
(@@) : Num dtype => Tensor [S m] dtype -> Tensor [S m] dtype -> Tensor [] dtype Vector dot product. For example, `tensor [0, 1, 2] @@ tensor [-1, -3, -1]` is
`-5`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9
(@@) : (Primitive dtype, Num dtype) => Tensor [n, S m] dtype -> Tensor (S m :: tl) dtype -> {auto 0 _ : LTE (length tl) 1} -> Tensor (n :: tl) dtype Matrix multiplication with a matrix or vector. Contraction is along the last axis of the first
and the first axis of the second. For example,
```
x : Tensor [2, 3] S32
x = tensor [[-1, -2, -3],
            [ 0,  1,  2]]
y : Tensor [3, 1] S32
y = tensor [[4], [0], [5]]
z : Tensor [2, 1] S32
z = x @@ y
```
is
```
z : Tensor [2, 1] S32
z = tensor [[-19], [10]]
```
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9contract : (lBatch : List Nat) -> (rBatch : List Nat) -> (lContract : List Nat) -> (rContract : List Nat) -> (ls : Shape) -> (rs : Shape) -> {auto 0 _ : All (flip InBounds ls) lBatch} -> {auto 0 _ : All (flip InBounds rs) rBatch} -> {auto 0 _ : All (flip InBounds ls) lContract} -> {auto 0 _ : All (flip InBounds rs) rContract} -> Shape The output shape of a `dotGeneral` operation.
Totality: total
Visibility: public export
dotGeneral : Num dtype => (lBatch : List Nat) -> (rBatch : List Nat) -> (lContract : List Nat) -> (rContract : List Nat) -> {auto 0 _ : unique (lBatch ++ lContract) = True} -> {auto 0 _ : unique (rBatch ++ rContract) = True} -> {auto 0 lInBoundsBatch : All (flip InBounds ls) lBatch} -> {auto 0 rInBoundsBatch : All (flip InBounds rs) rBatch} -> {auto 0 lInBoundsContract : All (flip InBounds ls) lContract} -> {auto 0 rInBoundsContract : All (flip InBounds rs) rContract} -> {auto 0 _ : multiIndex lBatch ls = multiIndex rBatch rs} -> {auto 0 _ : multiIndex lContract ls = multiIndex rContract rs} -> Tensor ls dtype -> Tensor rs dtype -> Tensor (contract lBatch rBatch lContract rContract ls rs) dtype Matrix multiplication.
This is a much more general version of `(@@)`, in which you can specify any number of batch
and contracting axes. Matrix multiplication is done over each contracting axis.
The operation is vectorized over batch axes. For each contracting axis on the left-hand
operand, there is one contracting axis on the right-hand operand. These can be different axes
in each operand. The same is true for each batch axis.
For example, we can vectorize over a typical rank-two matrix multiplication as follows: given
two inputs tensors
```
let x : Tensor [3, 4, 5, 6] F64
    y : Tensor [3, 4, 6, 7] F64
```
we do
```
let z : Tensor [3, 4, 5, 7] F64 = dotGeneral [0, 1] [0, 1] [3] [2] x y
```
Here, we vectorize over the first two axes `[0, 1]`, and do standard matrix multiplication
over the remaining axes by specifying axes 3 and 2 respectively as contracting axes. Notice
how the batch axes appear once each at the start of the output shape, and the contracting axis
disappears. Remaining axes appear in order from left to right.
Note this API is somewhat of a quick fix to bring general matrix multiplication to the tensor
API. It is not thoroughly tested. Expect it to change in the future.
Totality: total
Visibility: export
(+) : Num dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype Element-wise addition. For example, `tensor [1, 2] + tensor [3, 4]` is
`tensor [4, 6]`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 8
negate : Neg dtype => Tensor shape dtype -> Tensor shape dtype Element-wise negation. For example, `- tensor [1, -2]` is `tensor [-1, 2]`.
Totality: total
Visibility: export
Fixity Declaration: prefix operator, level 10
negate : Neg ty => ty -> ty The underlying implementation of unary minus. `-5` desugars to `negate (fromInteger 5)`.
Totality: total
Visibility: public export
Fixity Declaration: prefix operator, level 10

(-) : Neg dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype

Element-wise subtraction. For example, `tensor [3, 4] - tensor [4, 2]` is
`tensor [-1, 2]`.
Totality: total
Visibility: export
Fixity Declarations:
infixl operator, level 8
prefix operator, level 10

(*) : Num dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype

Element-wise multiplication. For example, `tensor [2, 3] * tensor [4, 5]` is
`tensor [8, 15]`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9

(*) : Num dtype => Tensor [] dtype -> Tensor (d :: ds) dtype -> Tensor (d :: ds) dtype

Multiplication by a scalar. For example, `tensor 2 * tensor [3, 5]` is
`tensor [6, 10]`.
The RHS is required to be non-scalar simply to avoid ambiguities with element-wise `(*)`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9

(/) : Fractional dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype

Element-wise floating point division. For example, `tensor [2, 3] / tensor [4, 5]` is
`tensor [0.5, 0.6]`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9

(/) : Fractional dtype => Tensor (d :: ds) dtype -> Tensor [] dtype -> Tensor (d :: ds) dtype

Floating point division by a scalar. For example, `tensor [3.4, -5.6] / tensor 2` is
`tensor [1.7, -2.8]`.
The LHS is required to be non-scalar simply to avoid ambiguities with element-wise `(/)`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9

div : Tensor shape U64 -> (denom : Literal shape Nat) -> {auto 0 _ : All IsSucc denom} -> Tensor shape U64

Element-wise division of natural numbers. For example,
`div (tensor [Scalar 13, Scalar 8]) [3, 4]` is `tensor [4, 2]`.
Totality: total
Visibility: export
Fixity Declaration: infixl operator, level 9

rem : Tensor shape U64 -> (denom : Literal shape Nat) -> {auto 0 _ : All IsSucc denom} -> Tensor shape U64

Element-wise remainder for natural numbers. For example,
`rem (tensor [Scalar 13, Scalar 8]) [3, 4]` is `tensor [1, 0]`.
Totality: total
Visibility: export

(^) : Tensor shape F64 -> Tensor shape F64 -> Tensor shape F64

Each element in `base` raised to the power of the corresponding element in `exponent`. For
example, `tensor [2, 25, -9] ^ tensor [3, -0.5, 0.5]` is `tensor [8, 0.2, nan]`.
Note: The behaviour of this function is not well-defined at negative or positive infinity, or
NaN.
Note: The first root is used.
Totality: total
Visibility: export
Fixity Declaration: infixr operator, level 9

abs : Abs dtype => Tensor shape dtype -> Tensor shape dtype

Element-wise absolute value. For example, `abs (tensor [-2, 3])` is `tensor [2, 3]`.
Totality: total
Visibility: export

exp : Tensor shape F64 -> Tensor shape F64

The element-wise natural exponential. For example, `exp (tensor [-1, 0, 2])` is
`tensor [1 / euler, 1, pow euler 2]`.
Totality: total
Visibility: export

floor : Tensor shape F64 -> Tensor shape F64

The element-wise floor function. For example,
`floor (tensor [-1.6, -1.5, -1.4, -1.0, 1.0, 1.4, 1.5, 1.6])` is
`tensor [-2.0, -2.0, -2.0, -1.0, 1.0, 1.0, 1.0, 1.0]`.
Totality: total
Visibility: export

ceil : Tensor shape F64 -> Tensor shape F64

The element-wise ceiling function. For example,
`ceil (tensor [-1.6, -1.5, -1.4, -1.0, 1.0, 1.4, 1.5, 1.6])` is
`tensor [-1.0, -1.0, -1.0, -1.0, 1.0, 2.0, 2.0, 2.0]`.
Totality: total
Visibility: export

log : Tensor shape F64 -> Tensor shape F64

The element-wise natural logarithm. Negative inputs yield NaN output. For example,
`log (tensor [1 / euler, 1, euler * euler])` is `tensor [-1, 0, 2]`.
Totality: total
Visibility: export

logistic : Tensor shape F64 -> Tensor shape F64

The element-wise logistic function, equivalent to `1 / (1 + exp (-x))`.
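As an illustrative sketch (not the library implementation), the logistic function `1 / (1 + exp (-x))` can be written with the primitives above, assuming `fill` with its shape inferred from context:
```
-- a sketch: logistic x = 1 / (1 + exp (-x)), using `fill` to
-- build constant tensors of the matching shape
logistic' : Tensor shape F64 -> Tensor shape F64
logistic' x = fill 1.0 / (fill 1.0 + exp (- x))
```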
Totality: total
Visibility: export

sin : Tensor shape F64 -> Tensor shape F64

The element-wise sine.
Totality: total
Visibility: export

cos : Tensor shape F64 -> Tensor shape F64

The element-wise cosine.
Totality: total
Visibility: export

tan : Tensor shape F64 -> Tensor shape F64

The element-wise tangent.
Totality: total
Visibility: export

asin : Tensor shape F64 -> Tensor shape F64

The element-wise inverse sine.
Totality: total
Visibility: export

acos : Tensor shape F64 -> Tensor shape F64

The element-wise inverse cosine.
Totality: total
Visibility: export

atan : Tensor shape F64 -> Tensor shape F64

The element-wise inverse tangent.
Totality: total
Visibility: export

sinh : Tensor shape F64 -> Tensor shape F64

The element-wise hyperbolic sine.
Totality: total
Visibility: export

cosh : Tensor shape F64 -> Tensor shape F64

The element-wise hyperbolic cosine.
Totality: total
Visibility: export

tanh : Tensor shape F64 -> Tensor shape F64

The element-wise hyperbolic tangent.
Totality: total
Visibility: export

asinh : Tensor shape F64 -> Tensor shape F64

The element-wise inverse hyperbolic sine.
Totality: total
Visibility: export

acosh : Tensor shape F64 -> Tensor shape F64

The element-wise inverse hyperbolic cosine.
Totality: total
Visibility: export

atanh : Tensor shape F64 -> Tensor shape F64

The element-wise inverse hyperbolic tangent.
Totality: total
Visibility: export

erf : Tensor shape F64 -> Tensor shape F64

An approximation to the element-wise error function.
Totality: total
Visibility: export

square : Tensor shape F64 -> Tensor shape F64

The element-wise square. For example, `square (tensor [-2, 0, 3])`
is `tensor [4, 0, 9]`.
Totality: total
Visibility: export

sqrt : Tensor shape F64 -> Tensor shape F64

The element-wise square root. The first root is used. Negative inputs yield NaN output.
For example, `sqrt (tensor [0, 9])` is `tensor [0, 3]`.
Totality: total
Visibility: export

min : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype

The element-wise minimum of the first argument compared to the second. For example,
`min (tensor [-3, -1, 3]) (tensor [-1, 0, 1])` is `tensor [-3, -1, 1]`.
Totality: total
Visibility: export

max : Ord dtype => Tensor shape dtype -> Tensor shape dtype -> Tensor shape dtype

The element-wise maximum of the first argument compared to the second. For example,
`max (tensor [-3, -1, 3]) (tensor [-1, 0, 1])` is `tensor [-1, 0, 3]`.
Totality: total
Visibility: export

diag : Num dtype => PrimitiveRW dtype ty => Num ty => Tensor [n, n] dtype -> Tensor [n] dtype

The diagonal of a matrix as a vector. For example, for
```
x : Tensor [3, 3] S32
x = tensor [[0, 1, 2],
            [3, 4, 5],
            [6, 7, 8]]
```
`diag x` is `tensor [0, 4, 8]`.
Totality: total
Visibility: export

argmax : Ord dtype => Tensor [S n] dtype -> Tag (Tensor [] U64)

The first index of the maximum value in a vector. For example,
`argmax (tensor [-1, 3, -2, -2, 3])` produces `tensor 1`. If the vector contains NaN values,
`argmax` returns the index of the first NaN.
Totality: total
Visibility: export

argmin : Ord dtype => Tensor [S n] dtype -> Tag (Tensor [] U64)

The first index of the minimum value in a vector. For example,
`argmin (tensor [-1, 3, -2, -2, 3])` produces `tensor 2`. If the vector contains NaN values,
`argmin` returns the index of the first NaN.
Totality: total
Visibility: export

data Triangle : Type

Represents the upper- or lower-triangular component of a matrix.
Totality: total
Visibility: public export
Constructors:
Upper : Triangle
Lower : Triangle

triangle : PrimitiveRW dtype ty => Num ty => Triangle -> Tensor [n, n] dtype -> Tag (Tensor [n, n] dtype)

Get the upper- or lower-triangular component of a matrix, always including the matrix diagonal.
Remaining elements will be zero. For example, for
```
x : Tensor [3, 3] S32
x = tensor [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
```
`triangle Lower x` produces
```
tensor [[1, 0, 0],
        [4, 5, 0],
        [7, 8, 9]]
```
Totality: total
Visibility: export

cholesky : Tensor [S n, S n] F64 -> Tag (Tensor [S n, S n] F64)

Cholesky decomposition. Computes the lower triangular matrix `L` from the symmetric, positive
semi-definite matrix `X` s.t. `X = L @@ L.T`. Values will be NaN if the input matrix is not
positive semi-definite. The remaining matrix components - those not in the lower triangle or
diagonal - will always be zero.
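Combined with the triangular solves below, `cholesky` can be used to solve `a @@ x = b` for positive semi-definite `a`. This is a sketch only; it assumes a transpose operation `(.T)`, which is not shown in this section:
```
-- a sketch: with a = l @@ l.T, solve a @@ x = b by forward
-- substitution (l |\ b), then backward substitution with l.T,
-- whose upper triangle is used by (\|)
solve : Tensor [S n, S n] F64 -> Tensor [S n] F64 -> Tag (Tensor [S n] F64)
solve a b = do l <- cholesky a
               pure (l.T \| (l |\ b))
```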
Totality: total
Visibility: export

(|\) : Tensor [m, m] F64 -> Tensor [m, n] F64 -> Tensor [m, n] F64

Solve the set of linear equations `a @@ x = b` for `x` where `a` is a lower-triangular matrix.
`a` is given by the lower-triangular elements of the first argument. Values in the
upper-triangular part are ignored. If `a` is lower-triangular already,
this is written `a |\ b`.
The operator is shaped like the lower-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(\|)`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9

(\|) : Tensor [m, m] F64 -> Tensor [m, n] F64 -> Tensor [m, n] F64

Solve the set of linear equations `a @@ x = b` for `x` where `a` is an upper-triangular
matrix. `a` is given by the upper-triangular elements of the first argument. Values in the
lower-triangular part are ignored. If `a` is upper-triangular already, this is written
`a \| b`.
The operator is shaped like the upper-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(|\)`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9

(|\) : Tensor [m, m] F64 -> Tensor [m] F64 -> Tensor [m] F64

Solve the set of linear equations `a @@ x = b` for `x` where `a` is a lower-triangular matrix.
`a` is given by the lower-triangular elements of the first argument. Values in the
upper-triangular part are ignored. If `a` is lower-triangular already,
this is written `a |\ b`.
The operator is shaped like the lower-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(\|)`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9

(\|) : Tensor [m, m] F64 -> Tensor [m] F64 -> Tensor [m] F64

Solve the set of linear equations `a @@ x = b` for `x` where `a` is an upper-triangular
matrix. `a` is given by the upper-triangular elements of the first argument. Values in the
lower-triangular part are ignored. If `a` is upper-triangular already, this is written
`a \| b`.
The operator is shaped like the upper-triangular portion of a matrix to signal that it uses
this portion of its argument. This is in contrast to `(|\)`.
Totality: total
Visibility: export
Fixity Declaration: infix operator, level 9

trace : (Num dtype, Num a) => PrimitiveRW dtype a => Tensor [S n, S n] dtype -> Tag (Tensor [] dtype)

Sum the elements along the diagonal of the input. For example,
`trace (tensor [[-1, 5], [1, 4]])` produces `tensor 3`.
Totality: total
Visibility: export

0 Rand : Type -> Type

A `Rand a` produces a pseudo-random value of type `a` from a `Tensor [2] U64` state.
The state is updated every time a new value is generated.
Totality: total
Visibility: public export

rng : Rand (Tensor shape U64)

Generate independent and identically distributed (IID) uniform samples.
The generated samples are a deterministic function of the input key and state, but may vary
between PJRT plugin and library version.
Example usage, multiplying two uniform samples
```
x : Tag $ Tensor [3] U64
x = let seed = tensor [1, 1] in evalStateT seed [| rng * rng |]
```
Totality: total
Visibility: export

uniform : Rand (Tensor shape F64)

Generate independent and identically distributed (IID) samples from the uniform distribution U(0, 1).
The generated samples are a deterministic function of the input key and state, but may vary
between PJRT plugin and library version.
Example usage, multiplying two uniform samples
```
x : Rand $ Tensor [3] F64
x = [| uniform * uniform |]
```
Totality: total
Visibility: export

normal : Rand (Tensor shape F64)

Generate independent and identically distributed (IID) samples from the standard normal
distribution N(0, 1).
The generated samples are a deterministic function of the input key and state, but may vary
between PJRT plugin and library version.
Example usage, multiplying two normal samples
```
x : Rand $ Tensor [3] F64
x = [| normal * normal |]
```
Totality: total
Visibility: export