simpletensor.tensor#

class simpletensor.tensor.Tensor(values, dtype=None, copy=True, device: Literal['cpu', 'cuda'] = 'cpu', name: str | None = None)[source]#

Bases: object

Tensor class. This is a box around a numpy array, but with support for reverse mode automatic differentiation.

Parameters:
values : array-like or scalar

Values to put into Tensor

dtype : data type, optional

numpy data type for underlying numpy array, by default None

copy : bool, optional

Whether to copy the input values into a new array, by default True

device : {'cpu', 'cuda'}, optional

Device on which to store the tensor's data, by default 'cpu'

name : str, optional

Name of tensor. If not specified, or if the name is taken, a random 10-letter name is given, by default None

property T#
__add__(other)[source]#

Adds two tensor-like objects together.

Returns:
Tensor

Sum of both inputs

__getitem__(index)[source]#

Index the tensor, using numpy’s indexing semantics

Parameters:
index : Iterable or scalar

index used to index the tensor

Returns:
Tensor

Result of Tensor indexing

__matmul__(other)[source]#

Matrix multiplication, mimicking numpy’s @ behavior.

There are 4 cases:

  • Both arguments are 2D (normal matrix multiplication)

  • Either argument is >2D (broadcasting is done accordingly)

  • First Tensor is 1D (1 is added to the beginning of the tensor’s shape, matrix multiplication is applied, then the added 1 is removed from the output tensor’s shape)

  • Second Tensor is 1D (1 is added to the end of the tensor’s shape, matrix multiplication is applied, then the added 1 is removed from the output tensor’s shape)

Parameters:
other : array-like

Right side of matrix multiplication

Returns:
Tensor

Matrix product of two input arrays
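Since this operator mimics numpy’s @ semantics, the four cases above can be illustrated with plain numpy shapes:

```python
import numpy as np

A = np.ones((3, 4))
B = np.ones((4, 5))

# Case 1: both arguments 2D -> ordinary matrix multiplication.
print((A @ B).shape)            # (3, 5)

# Case 2: a >2D argument -> leading batch dimensions broadcast.
batched = np.ones((7, 3, 4))
print((batched @ B).shape)      # (7, 3, 5)

# Case 3: 1D left operand -> a leading 1 is added, then removed.
v = np.ones(4)
print((v @ B).shape)            # (5,)

# Case 4: 1D right operand -> a trailing 1 is added, then removed.
print((A @ v).shape)            # (3,)
```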

__mul__(other)[source]#

Multiplies (element-wise) two tensor-like objects together.

Returns:
Tensor

Multiplication (Hadamard product) of two tensors

__pow__(other)[source]#

One tensor raised to the power of another tensor.

Parameters:
other : Tensor

Exponent tensor

Returns:
Tensor

Result tensor of A**B

__sub__(other)[source]#

Subtracts two tensor-like objects.

Returns:
Tensor

Subtraction of both inputs

__truediv__(other)[source]#

Division between two Tensors. A division A / B is defined as A * B**-1.

Parameters:
other : array-like

Divisor tensor

Returns:
Tensor

Result tensor of A / B
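The identity A / B = A * B**-1 is easy to check on plain numpy arrays, which Tensor wraps:

```python
import numpy as np

A = np.array([2.0, 9.0, 8.0])
B = np.array([4.0, 3.0, 2.0])

# Division and multiply-by-reciprocal agree element-wise.
lhs = A / B
rhs = A * B**-1.0
print(np.allclose(lhs, rhs))    # True
```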

backward(zero_grad=True)[source]#

Backpropagation method. If called on a size-1 tensor L, the gradient of L with respect to each tensor X in its creation graph (\(\frac{\partial L}{\partial X}\)) is computed and stored in that tensor’s .grad attribute.

Parameters:
zero_grad : bool, optional

Option to set all gradients in the autodiff graph to 0 before accumulating new ones. If False, new gradients are added to the previous ones, by default True

Raises:
NotImplementedError

Raised if the tensor which backward() is called on does not have a size of 1.
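This is not simpletensor’s implementation, but a minimal sketch of the reverse-mode scheme backward() performs: seed a gradient of 1 at the size-1 output, then multiply by each operation’s local gradient and accumulate into every parent’s .grad. (The sketch assumes a tree-shaped graph; a real implementation traverses in topological order so shared nodes are finished before their parents.)

```python
import numpy as np

class Node:
    """Illustrative only -- not simpletensor's Tensor class."""

    def __init__(self, value, parents=()):
        self.value = np.asarray(value, dtype=float)
        self.parents = parents                  # (parent, local_gradient) pairs
        self.grad = np.zeros_like(self.value)

    def __mul__(self, other):
        # d(a*b)/da = b and d(a*b)/db = a (element-wise).
        return Node(self.value * other.value,
                    parents=[(self, other.value), (other, self.value)])

    def sum(self):
        # d(sum(a))/da is an array of ones.
        return Node(self.value.sum(),
                    parents=[(self, np.ones_like(self.value))])

    def backward(self):
        # Seed dL/dL = 1 on the size-1 output, then push local
        # gradients down the graph, accumulating into each .grad.
        assert self.value.size == 1
        self.grad = np.ones_like(self.value)
        stack = [self]
        while stack:
            node = stack.pop()
            for parent, local in node.parents:
                parent.grad = parent.grad + local * node.grad
                stack.append(parent)

x = Node([1.0, 2.0, 3.0])
y = Node([4.0, 5.0, 6.0])
L = (x * y).sum()
L.backward()
print(x.grad)    # dL/dx = y.value -> [4. 5. 6.]
print(y.grad)    # dL/dy = x.value -> [1. 2. 3.]
```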

convolve(kernel)[source]#

N-dimensional valid convolution over a batch, using the fast Fourier transform (FFT) method

Parameters:
self : Tensor

Tensor to be convolved. Shape: (batch_size, channels, …)

kernel : Tensor

Kernel in convolution operation. Shape: (num_filters, channels, …)

Returns:
Tensor

Output. Shape: (batch_size, num_filters, …)
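The FFT method computes the full linear convolution in the frequency domain, then slices out the valid region. A 1D numpy sketch of the idea (the method generalizes this to batches, channels, and N dimensions):

```python
import numpy as np

def fft_valid_convolve(x, k):
    """Valid 1D convolution of x with k via the FFT (O(N log N))."""
    n, m = len(x), len(k)
    size = n + m - 1                      # length of the full convolution
    X = np.fft.rfft(x, size)
    K = np.fft.rfft(k, size)
    full = np.fft.irfft(X * K, size)      # full linear convolution
    return full[m - 1 : n]                # keep only the valid part

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([1.0, 0.0, -1.0])
print(fft_valid_convolve(x, k))           # matches np.convolve(x, k, 'valid')
```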

property dtype#
exp()[source]#

Element-wise exponential function

Returns:
Tensor

Result of exponentiation

expand_dims(axis=None)[source]#

Expand dimensions function

Parameters:
axis : tuple of ints

Axes to expand

Returns:
Tensor

Result of expansion

flip(axis=None)[source]#

Flip function

Parameters:
axis : tuple of ints

Axes to flip

Returns:
Tensor

Result of flip

grad_enabled = True#
logn(n=2.718281828459045)[source]#

Element-wise log base n function

Parameters:
n : float

log base

Returns:
Tensor

Result of log operation
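logn follows the change-of-base identity log_n(x) = ln(x) / ln(n), which is simple to check with numpy:

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0])
n = 10.0

# Log base n via the change-of-base identity.
result = np.log(x) / np.log(n)
print(result)                    # [0. 1. 2.]
```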

max(axis=None, keepdims=False)[source]#

Max over given axes.

Parameters:
axis : tuple, optional

Axes over which to perform the max, by default None

mean(axis=None, keepdims=False)[source]#

Mean of an array over given axes

Parameters:
axis : tuple, optional

Axes over which to compute the mean, by default None

Returns:
Tensor

Result of mean operation over axes

min(axis=None, keepdims=False)[source]#

Min over given axes.

Parameters:
axis : tuple, optional

Axes over which to perform the min, by default None
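The axis and keepdims parameters of the reduction methods (max, min, mean, sum, std, var) follow numpy’s semantics:

```python
import numpy as np

x = np.arange(24.0).reshape(2, 3, 4)

# axis=None reduces over every axis, down to a single value.
print(x.max())                       # 23.0

# A tuple of axes reduces over just those axes.
print(x.max(axis=(0, 2)).shape)      # (3,)

# keepdims=True keeps reduced axes as size-1 dimensions,
# so the result still broadcasts against the input.
print(x.max(axis=(0, 2), keepdims=True).shape)   # (1, 3, 1)
```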

property ndim#
relu()[source]#

Element-wise relu function

Returns:
Tensor

Result of relu

reshape(shape)[source]#

Reshape function

Parameters:
shape : tuple of ints

New shape for tensor

Returns:
Tensor

Result of reshape

property shape#
property size#
squeeze(axis=None)[source]#

Squeeze function

Parameters:
axis : tuple of ints

Axes to squeeze

Returns:
Tensor

Result of squeeze

std(axis=None, ddof=0, keepdims=False)[source]#

Standard deviation over given axes.

Parameters:
axis : tuple, optional

Axes over which to compute the standard deviation, by default None

ddof : int, optional

Delta degrees of freedom; the divisor is N - ddof. ddof = 1 gives the sample standard deviation. By default, ddof = 0

Returns:
Tensor

Standard deviation of input tensor along axes
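The ddof parameter matches numpy’s: the divisor is N - ddof, so ddof=1 gives the unbiased sample estimate rather than the population one:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
N = x.size

# Population std (ddof=0): divisor is N.
print(x.std(ddof=0))                             # 2.0

# Sample std (ddof=1): divisor is N - 1.
manual = np.sqrt(((x - x.mean()) ** 2).sum() / (N - 1))
print(np.isclose(x.std(ddof=1), manual))         # True
```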

sum(axis=None, keepdims=False)[source]#

Sums an array over given axes

Parameters:
axis : tuple, optional

Axes over which to sum, by default None

Returns:
Tensor

Result of sum operation over axes

to(device: Literal['cpu', 'cuda']) Tensor[source]#
transpose(axis=None)[source]#

Transpose function

Parameters:
axis : tuple of ints

Order of transpose

Returns:
Tensor

Result of transpose

var(axis=None, ddof=0, keepdims=False)[source]#

Variance over given axes.

Parameters:
axis : tuple, optional

Axes over which to compute the variance, by default None

ddof : int, optional

Delta degrees of freedom; the divisor is N - ddof. ddof = 1 gives the sample variance. By default, ddof = 0

Returns:
Tensor

Variance of input tensor along axes

zero_grad()[source]#

Sets the gradient of the tensor to a 0 array with the same shape as the tensor.

simpletensor.tensor.astensor(a, dtype=None, device: Literal['cpu', 'cuda'] = 'cpu')[source]#

Converts input to a Tensor

Parameters:
a : array-like

Iterable or scalar quantity (not jagged)

dtype : numpy datatype, optional

Datatype for the underlying numpy array, by default None

Returns:
Tensor

Tensor version of input

simpletensor.tensor.sequence_to_array(seq, dtype, device: Literal['cpu', 'cuda'], copy: bool) ndarray[source]#