simpletensor.functions#

simpletensor.functions.add(a1, a2)[source]#

Element-wise addition of two tensors. Helper function for __add__

simpletensor.functions.categorical_cross_entropy(y_true, y_pred)[source]#

Log loss / cross entropy loss over a batch. Defined as:

\[\text{categorical_cross_entropy}(Y, \hat Y) = -\frac{1}{\text{batch_size}}\sum_{b=1}^{\text{batch_size}} \sum_{i=1}^n Y[b, i] \log(\hat Y[b, i])\]
Parameters:
y_true : Tensor

2D Tensor of shape (batch_size, # features) holding the true class probabilities (e.g. one-hot labels)

y_pred : Tensor

2D Tensor of shape (batch_size, # features) holding the predicted class probabilities

Returns:
Tensor

Mean cross entropy loss over the batch, as a scalar (size-1) Tensor
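
As a sanity check of the definition above, here is a minimal NumPy sketch of the same computation (plain ndarrays standing in for Tensors; the values are hypothetical):

    import numpy as np

    def cce_reference(y_true, y_pred):
        # Mean over the batch of -sum(Y * log(Y_hat))
        batch_size = y_true.shape[0]
        return -np.sum(y_true * np.log(y_pred)) / batch_size

    y_true = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot labels
    y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])  # predicted distributions
    print(cce_reference(y_true, y_pred))         # ~0.1643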

simpletensor.functions.divide(a1, a2)[source]#

Element-wise division of two tensors. Helper function for __truediv__

simpletensor.functions.exp(a)[source]#

Element-wise natural exponential. Helper function for exp

simpletensor.functions.expand_dims(a, axis=None)[source]#

Inserts new size-1 axes into the tensor's shape at the given positions. Helper function for expand_dims

simpletensor.functions.flip(a, axis=None)[source]#

Reverses the order of elements along the given axes. Helper function for flip

simpletensor.functions.logn(a, n=2.718281828459045)[source]#

Element-wise base-n logarithm; the default base is e, giving the natural logarithm. Helper function for logn
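
For illustration, the base-n logarithm follows the usual change-of-base identity log_n(a) = ln(a) / ln(n); a minimal NumPy sketch with hypothetical values:

    import numpy as np

    x = np.array([1.0, np.e, 8.0])
    print(np.log(x))              # default base e (natural log): [0. 1. 2.0794...]
    print(np.log(x) / np.log(2))  # base 2, i.e. logn(a, n=2): [0. 1.4426... 3.]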

simpletensor.functions.matmul(a1, a2)[source]#

Matrix multiplication of two tensors. Helper function for __matmul__

simpletensor.functions.max(a, axis=None, keepdims=False)[source]#

Maximum of tensor elements over the given axes. Helper function for max

simpletensor.functions.mean(a, axis=None, keepdims=False)[source]#

Arithmetic mean over the given axes. Helper function for mean

simpletensor.functions.min(a, axis=None, keepdims=False)[source]#

Minimum of tensor elements over the given axes. Helper function for min

simpletensor.functions.multiply(a1, a2)[source]#

Element-wise multiplication of two tensors. Helper function for __mul__

simpletensor.functions.power(a1, a2)[source]#

Element-wise power (a1 raised to a2). Helper function for __pow__

simpletensor.functions.relu(a)[source]#

Element-wise rectified linear unit, max(a, 0). Helper function for relu

simpletensor.functions.reshape(a, shape=None)[source]#

Gives the tensor a new shape without changing its data. Helper function for reshape

simpletensor.functions.show_graph(root)[source]#

Returns a graphviz visualization of the computation graph rooted at the given Tensor. It shows every Tensor in the graph with its name, shape, and dtype, along with the operations used to create each one. If a Tensor has size 1, its scalar value is printed inside the node.

Parameters:
root : Tensor

Root tensor for the computation graph. Anything beyond the root is not shown.

Returns:
Digraph

GraphViz Digraph object representing the computation graph
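
A hypothetical usage sketch is shown below. The Tensor import path and constructor are assumptions (they are not documented on this page); the render call is the standard graphviz.Digraph API:

    from simpletensor import Tensor  # import path assumed
    from simpletensor.functions import relu, show_graph

    a = Tensor([[1.0, 2.0], [3.0, 4.0]])  # constructor signature assumed
    b = relu(a @ a)                       # builds a small computation graph
    g = show_graph(b)                     # graphviz Digraph
    g.render("graph", format="png")       # standard graphviz API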

simpletensor.functions.softmax(a: Tensor, axis=None)[source]#

Numerically stable n-dimensional softmax computed over any set of axes. Converts the input into valid probability distributions.

Softmax is defined as:

\[\text{softmax}(\vec x) = \frac{e^{\vec x}}{\sum_{i=1}^n e^{x_i}}\]

However, the numerically stable version of softmax in this implementation is defined as:

\[\begin{split}\text{x_stable}(\vec x) &= \vec x - \max(\vec x) \\ \text{softmax_stable}(\vec x) &= \frac{e^{\text{x_stable}(\vec x)}}{\sum_{i=1}^n e^{\text{x_stable}(\vec x)_i}}\end{split}\]
Parameters:
a : Tensor

Input tensor

axis : tuple of ints, optional

Axes over which values should be turned into a valid probability distribution, by default None

Returns:
Tensor

Output of softmax
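
The stabilization trick is easy to verify directly; a minimal NumPy sketch (not the library's implementation):

    import numpy as np

    def softmax_stable(x, axis=-1):
        # Subtracting the max leaves the result unchanged
        # but keeps exp() from overflowing
        x_stable = x - np.max(x, axis=axis, keepdims=True)
        e = np.exp(x_stable)
        return e / np.sum(e, axis=axis, keepdims=True)

    x = np.array([1000.0, 1001.0, 1002.0])  # naive exp(x) would overflow
    print(softmax_stable(x))                # [0.09003057 0.24472847 0.66524096]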

simpletensor.functions.squeeze(a, axis=None)[source]#

Removes axes of length 1 from the tensor's shape. Helper function for squeeze

simpletensor.functions.std(a, axis=None, ddof=0, keepdims=False)[source]#

Standard deviation over the given axes. Helper function for std

simpletensor.functions.subtract(a1, a2)[source]#

Element-wise subtraction of two tensors. Helper function for __sub__

simpletensor.functions.sum(a, axis=None, keepdims=False)[source]#

Sum of tensor elements over the given axes. Helper function for sum

simpletensor.functions.transpose(a, axis=None)[source]#

Permutes the axes of the tensor. Helper function for transpose

simpletensor.functions.var(a, axis=None, ddof=0, keepdims=False)[source]#

Variance over the given axes. Helper function for var