simpletensor.functions#
- simpletensor.functions.categorical_cross_entropy(y_true, y_pred)[source]#
Log loss / cross-entropy loss over a batch (a NumPy sketch of this formula follows the entry). Defined as:
\[\text{categorical_cross_entropy}(Y, \hat Y) = -\frac{1}{\text{batch_size}}\sum_{b=1}^{\text{batch_size}} \sum_{i=1}^n Y[b, i] \log(\hat Y[b, i])\]- Parameters:
- y_true : Tensor
2D Tensor of true label distributions (e.g. one-hot), shape (batch_size, # features)
- y_pred : Tensor
2D Tensor of predicted probability distributions, shape (batch_size, # features)
- Returns:
- Tensor
Size-1 Tensor containing the mean cross-entropy loss over the batch
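The following is a minimal NumPy sketch of the formula above, illustrative only and not the library's implementation; the example labels and probabilities are made up.

```python
import numpy as np

# Illustrative NumPy sketch of the formula above (not the library's implementation).
y_true = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])   # one-hot labels, shape (batch_size=2, features=3)
y_pred = np.array([[0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1]])   # predicted probabilities, same shape

batch_size = y_true.shape[0]
loss = -np.sum(y_true * np.log(y_pred)) / batch_size
print(loss)  # (-log 0.8 - log 0.7) / 2 ≈ 0.29
```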
- simpletensor.functions.divide(a1, a2)[source]#
Helper function for
__truediv__
- simpletensor.functions.expand_dims(a, axis=None)[source]#
Helper function for
expand_dims
- simpletensor.functions.matmul(a1, a2)[source]#
Helper function for
__matmul__
- simpletensor.functions.show_graph(root)[source]#
Returns a graphviz visualization of the computation graph, starting at this root Tensor. It shows every Tensor in the graph along with its name, shape, and dtype, as well as every operation used to create those Tensors. If a Tensor has size 1, its scalar value is printed as well. A hedged usage sketch follows this entry.
- Parameters:
- root : Tensor
Root tensor for the computation graph. Anything beyond the root is not shown.
- Returns:
- Digraph
GraphViz Digraph object representing the computation graph
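A hedged usage sketch: only show_graph and its return value are taken from this page; the Tensor import path, constructor from a NumPy array, and the arithmetic operators are assumptions made for illustration.

```python
import numpy as np
from simpletensor import Tensor              # assumed import path for the Tensor class
from simpletensor.functions import show_graph

# Assumed: Tensor wraps a NumPy array and arithmetic on Tensors records the graph.
a = Tensor(np.ones((2, 3)))
b = Tensor(np.full((2, 3), 2.0))
c = a * b + a                                # assumed operators

graph = show_graph(c)                        # graphviz Digraph rooted at c
graph.render("graph", format="png")          # write the rendered visualization to disk
```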
- simpletensor.functions.softmax(a: Tensor, axis=None)[source]#
Numerically stable n-dimensional softmax computation over any axes. Converts the input into valid probability distributions (a NumPy sketch of the stable computation follows this entry).
Softmax is defined as:
\[\text{softmax}(\vec x) = \frac{e^{\vec x}}{\sum_{i=1}^n e^{x_i}}\]However, the numerically stable version of softmax in this implementation is defined as:
\[\begin{split}\text{x_stable}(\vec x) &= \vec x - \max(\vec x) \\ \text{softmax_stable}(\vec x) &= \frac{e^{\text{x_stable}(\vec x)}}{\sum_{i=1}^n e^{\text{x_stable}(\vec x)_i}}\end{split}\]- Parameters:
- a : Tensor
Input tensor
- axis : tuple of ints, optional
Axis or axes over which the values are normalized into a valid probability distribution, by default None
- Returns:
- Tensor
Output of softmax
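Below is a NumPy sketch of the numerically stable computation described above, illustrative only; the function name and the default axis=-1 are choices made for this example, not part of the documented API.

```python
import numpy as np

def softmax_stable(x, axis=-1):
    """Illustrative NumPy version of the stable softmax described above."""
    x_stable = x - np.max(x, axis=axis, keepdims=True)  # shift so the largest value is 0
    e = np.exp(x_stable)
    return e / np.sum(e, axis=axis, keepdims=True)

logits = np.array([[1000.0, 1001.0, 1002.0]])  # naive exp() would overflow here
print(softmax_stable(logits))                   # ≈ [[0.090, 0.245, 0.665]]
```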