neural_networks#
- class datacheese.neural_networks.BaseLayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: object
Base class for neural network layer.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
seed (int or None, default None) – Random seed for reproducible results.
**kwargs (dict) – Layer specific parameters.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- predict(x)#
Predict output of given inputs using current weights.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
y_pred – Output values.
- Return type:
numpy.ndarray
- feed_forward(x)#
Feed forward given training input example.
- Parameters:
x (numpy.ndarray) – Given training input.
- Returns:
out – Output values.
- Return type:
numpy.ndarray
- back_propagate(e, lr)#
Perform backpropagation using forward layer errors.
- Parameters:
e (numpy.ndarray) – Errors propagated from forward layer.
lr (float) – Learning rate for weight updates.
- Returns:
e_next – Errors to be propagated to backward layer.
- Return type:
numpy.ndarray
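The concrete layers below all appear to follow the same pattern: they subclass BaseLayer and override only activation and activation_derivative. As a minimal sketch under that assumption, a custom softplus layer might look like the following (the SoftplusLayer name is hypothetical and not part of the library):
>>> import numpy as np
>>> from datacheese.neural_networks import BaseLayer
>>> class SoftplusLayer(BaseLayer):
...     # Hypothetical layer with softplus activation, f(x) = ln(1 + exp(x))
...     def activation(self, x):
...         return np.log1p(np.exp(x))
...     def activation_derivative(self, x=None, f=None):
...         if x is not None:
...             return 1.0 / (1.0 + np.exp(-x))  # the softplus derivative is the sigmoid
...         return 1.0 - np.exp(-f)  # same derivative, recovered from the softplus output f
Such a layer could then be passed to MultiLayerPerceptron.add_layer in the same way as the built-in layers.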
- class datacheese.neural_networks.LinearLayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: BaseLayer
Layer with linear activation function.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- class datacheese.neural_networks.SigmoidLayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: BaseLayer
Layer with sigmoid activation function.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
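The sigmoid derivative can be recovered from the activation output alone, since σ'(x) = σ(x)(1 - σ(x)) = f(1 - f), which is why activation_derivative accepts f. A minimal NumPy sketch of that identity, illustrating the underlying math rather than a verified call into this class:
>>> import numpy as np
>>> x = np.array([-2.0, 0.0, 3.0])
>>> f = 1.0 / (1.0 + np.exp(-x))  # sigmoid output
>>> df = f * (1.0 - f)            # derivative recovered from f alone
>>> np.allclose(df, np.exp(-x) / (1.0 + np.exp(-x)) ** 2)
True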
- class datacheese.neural_networks.TanhLayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: BaseLayer
Layer with hyperbolic tangent activation function.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
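Likewise, the hyperbolic tangent derivative can be recovered from the output, since tanh'(x) = 1 - tanh(x)^2 = 1 - f^2. A minimal NumPy sketch of that identity, again only illustrating the math:
>>> import numpy as np
>>> x = np.array([-2.0, 0.0, 3.0])
>>> f = np.tanh(x)       # tanh output
>>> df = 1.0 - f ** 2    # derivative recovered from f alone
>>> np.allclose(df, 1.0 / np.cosh(x) ** 2)
True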
- class datacheese.neural_networks.ReLULayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: BaseLayer
Layer with rectified linear unit activation function.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
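The rectified linear unit is f(x) = max(x, 0), with derivative 1 for positive inputs and 0 otherwise, so the derivative is cheap to compute from either x or f. A minimal NumPy sketch of the math, not a verified call into this class:
>>> import numpy as np
>>> x = np.array([-2.0, 0.0, 3.0])
>>> np.maximum(x, 0.0)      # ReLU output
array([0., 0., 3.])
>>> (x > 0).astype(float)   # derivative: 1 where x > 0, else 0
array([0., 0., 1.])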
- class datacheese.neural_networks.LeakyReLULayer(n_inputs, n_nodes, seed=None, **kwargs)#
Bases: BaseLayer
Layer with leaky rectified linear unit activation function.
- Parameters:
n_inputs (int) – Number of inputs, usually equal to the number of nodes in the previous layer.
n_nodes (int) – Number of neuron nodes.
seed (int or None, default None) – Random seed for reproducible results.
alpha (float, default 0.01) – Negative slope α.
- activation(x)#
Activation function.
- Parameters:
x (numpy.ndarray) – Input values.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
- activation_derivative(x=None, f=None)#
Activation derivative function.
If x is provided, the output is computed directly. If the output of the antiderivative, f, is provided, the output is computed using the antiderivative.
- Parameters:
x (numpy.ndarray, default None) – Input values.
f (numpy.ndarray, default None) – Output values of antiderivative function.
- Returns:
f – Output values.
- Return type:
numpy.ndarray
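Unlike the other layers, LeakyReLULayer accepts an extra alpha keyword for the negative slope: f(x) = x for x > 0 and f(x) = alpha * x otherwise, with derivative 1 or alpha respectively. A minimal sketch, assuming alpha is forwarded through **kwargs as the parameter list above indicates; the NumPy lines only illustrate the math:
>>> import numpy as np
>>> from datacheese.neural_networks import LeakyReLULayer
>>> layer = LeakyReLULayer(3, 4, seed=0, alpha=0.1)   # negative slope passed as a keyword argument
>>> x = np.array([-2.0, 0.0, 3.0])
>>> f = np.where(x > 0, x, 0.1 * x)    # leaky ReLU: x for positive inputs, alpha * x otherwise
>>> df = np.where(x > 0, 1.0, 0.1)     # derivative: 1 for positive inputs, alpha otherwise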
- class datacheese.neural_networks.MultiLayerPerceptron(lr=0.5)#
Bases: object
Multi-layer perceptron feed-forward neural network that implements backpropagation.
- Parameters:
lr (float, default 0.5) – Learning rate.
Examples
>>> import numpy as np
>>> from datacheese.neural_networks import (
...     MultiLayerPerceptron,
...     SigmoidLayer,
...     ReLULayer,
... )
Generate training data:
>>> n_patterns = 5
>>> n_dimensions = 3
>>> n_classes = 2
>>> rng = np.random.default_rng()
>>> X = rng.random(size=(n_patterns, n_dimensions))
>>> Y = rng.random(size=(n_patterns, n_classes))
Initialize model with 2 hidden layers with ReLU and Sigmoid activations respectively:
>>> model = MultiLayerPerceptron(lr=0.5)
>>> model.add_layer(ReLULayer(n_dimensions, 4))
>>> model.add_layer(SigmoidLayer(4, n_classes))
Train model over 20 epochs:
>>> model.fit(X, Y, epochs=20, verbose=1)
Epoch: 0, Loss: 0.15181599599950849
Epoch: 4, Loss: 0.13701115369406147
Epoch: 8, Loss: 0.11337662383705667
Epoch: 12, Loss: 0.10121139637335393
Epoch: 16, Loss: 0.09388681525946835
Use model to make predictions:
>>> Y_pred = model.predict(X)
>>> np.mean((Y_pred - Y) ** 2)
0.05310463606057757
- add_layer(layer)#
Add layer to network.
- Parameters:
layer (BaseLayer) – Layer object to add to network.
- feed_forward(x)#
Feed forward given training input example through each layer.
- Parameters:
x (numpy.ndarray) – Given training input.
- Returns:
out – Output values.
- Return type:
numpy.ndarray
- back_propagate(y, lr)#
Perform backpropagation through layers in the network.
- Parameters:
y (numpy.ndarray) – Actual output of last forwarded training input.
lr (float) – Learning rate for weight updates.
- fit(X, Y, epochs, verbose=0)#
Train network weights using training data over given number of epochs.
- Parameters:
X (numpy.ndarray) – 2D array of input patterns of shape n x d, where n is the number of training examples and d is the number of dimensions.
Y (numpy.ndarray) – 2D array of output patterns of shape n x c, where n is the number of training examples and c is the number of classes.
epochs (int) – Number of epochs to train over.
verbose (int, default 0) – Logging verbosity.
- predict(X)#
Predict output of given inputs using current network layers.
- Parameters:
X (numpy.ndarray) – 2D array of input patterns of shape m x d, where m is the number of testing examples and d is the number of dimensions.
- Returns:
Y_pred – 2D array of predicted output values.
- Return type:
numpy.ndarray
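For example, continuing the class example above (and assuming the two-layer model, rng, and n_dimensions defined there), prediction on new patterns might look like this, with one output row per input pattern:
>>> X_new = rng.random(size=(10, n_dimensions))
>>> Y_pred = model.predict(X_new)   # expected shape: (10, n_classes), one row per input pattern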