source link: https://dannjs.org/
Dann.js
Neural Network library for Javascript
let nn = new Dann(784,10);
// Add layers
nn.addHiddenLayer(256,'leakyReLU');
nn.addHiddenLayer(128,'leakyReLU');
nn.addHiddenLayer(64,'leakyReLU');
nn.outputActivation('sigmoid');
nn.makeWeights();
// Set other values
nn.lr = 0.001;
nn.setLossFunction('bce');
// Neural Network's info
nn.log();
CDN:
https://cdn.jsdelivr.net/gh/matiasvlevi/dann@master/dann.min.js
npm:
npm install dannjs
Documentation
Dann Object Properties
Dann.arch
This value represents the architecture of the model as an array of layer sizes (e.g. [784, 256, 128, 64, 10] for the model above).
Dann.lr
This defines the learning rate of the model.
Dann.lossfunc
This is the loss function the model uses; it can be set with Dann.setLossFunction(lossfunc).
Dann.loss
This is the most recent loss value of the model. If the model has never been trained before, this value will be set to 0.
new Dann(input_size, output_size);
input_size
The number of neurons in the input layer.
output_size
The number of neurons in the output layer.
Dann.feedForward(input, options);
This function feeds data through the model to obtain an output.
input
Takes an array of inputs to feed forward through the network.
options
optional
An object including specific properties.
returns
An array of outputs.
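Conceptually, each layer in the forward pass computes a weighted sum of its inputs plus a bias, then applies the activation function. A minimal plain-JS sketch of one dense-layer pass (an illustration, not dannjs internals; the weights below are made up):

```javascript
// One dense layer: output[j] = activation(sum_i(weights[j][i] * input[i]) + bias[j]).
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

function denseForward(input, weights, biases, activation) {
  return weights.map((row, j) => {
    const sum = row.reduce((acc, w, i) => acc + w * input[i], biases[j]);
    return activation(sum);
  });
}

// Hypothetical 2-input layer with 2 neurons.
const out = denseForward([1, 0], [[0.5, -0.3], [0.8, 0.2]], [0, 0], sigmoid);
console.log(out); // two values in (0, 1)
```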
Dann.backpropagate(input, target, options);
Trains the model's weights on one input/target pair.
input
Takes an array of inputs.
target
Takes an array of desired outputs.
options
optional
An object including specific properties.
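Backpropagation nudges each weight against the gradient of the loss, scaled by the learning rate (Dann.lr). The idea reduced to a single weight with MSE loss, as a plain-JS sketch (not the dannjs implementation):

```javascript
// One gradient-descent step for loss = (w * x - target)^2.
function sgdStep(w, x, target, lr) {
  const pred = w * x;
  const grad = 2 * (pred - target) * x; // d(loss)/dw
  return w - lr * grad;
}

let w = 0;
for (let i = 0; i < 200; i++) {
  w = sgdStep(w, 1, 0.5, 0.1); // repeatedly train toward target 0.5 for input 1
}
console.log(w); // approaches 0.5
```

This is why a small Dann.lr trains slowly but stably, while a large one can overshoot.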
Dann.mutateRandom(range, probability);
This function randomly mutates each weight. This is for neuroevolution tasks.
range
A random value between -range and range is added to each affected weight.
probability
optional
The probability of a weight being affected by a random mutation, ranging from 0 to 1. Setting this value to 1 mutates all of the model's weights.
Dann.mutateAdd(percent);
This function mutates every weight. This is for neuroevolution tasks.
percent
Each weight is multiplied by percent and the result is added to that weight (w += w * percent).
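The two mutation styles described above can be sketched in plain JS over a flat array of weights (an illustration of the idea, not the dannjs source):

```javascript
// mutateRandom-style: each weight has `probability` chance of getting a random
// value in (-range, range) added to it.
function mutateRandomly(weights, range, probability = 1) {
  return weights.map((w) =>
    Math.random() < probability ? w + (Math.random() * 2 - 1) * range : w
  );
}

// mutateAdd-style: each weight is multiplied by `percent` and the result is
// added back to the weight.
function mutateByPercent(weights, percent) {
  return weights.map((w) => w + w * percent);
}

console.log(mutateByPercent([1, -2, 0.5], 0.1));
console.log(mutateRandomly([0, 0, 0], 0.5));
```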
Dann.addHiddenLayer(size, activation);
Adds one hidden layer.
size
The layer size, i.e. the number of neurons in the layer.
activation
Takes a string of the activation function's name. If left empty, the activation function defaults to 'sigmoid'. See the available activation functions below.
Dann.makeWeights();
Creates the weights. This function should be called after all the hidden layers have been added.
Dann.outputActivation(activation);
Sets the activation function of the output layer.
activation
Takes a string of the activation function's name. If this function is not called, the activation function defaults to 'sigmoid'. See the available activation functions below.
Dann.setLossFunction(lossfunc);
Sets the loss function of the model.
lossfunc
Takes a string of the loss function's name. If this function is not called, the loss function defaults to 'mse'. See the available loss functions below.
Dann.log(options);
Displays information about the model in the console.
options
optional
An object including specific properties.
Dann.save(name);
CDN version
Saves a name.json file containing information about the network and its current state. When the function is called, a local file dialogue is opened by the browser.
Dann.save(name);
Node version
Saves the model as a json file under ./savedDanns/.
Dann.load();
CDN version
When this function is called, an input tag requesting a file appears on screen. Clicking it opens a local file dialogue; once the appropriate file is selected, the Dann data is loaded automatically. The filename argument is not required for this version since the browser dialogue takes care of it.
Dann.load(name);
Node version
Loads a previously saved json file from ./savedDanns/. If the saved network's architecture is not the same, it overwrites the current Dann object.
Activation Functions
The activation functions below are provided by default; more can be added.
sigmoid
Sigmoid is the default activation function. This function outputs a value in the range [0,1].
Definition:
σ(x) = 1 / (1 + e^(−x))
reLU
The reLU activation function is easy to compute since it does not require heavy calculations. This function outputs values in the range [0, ∞).
Definition:
R(x) = { x, x > 0; 0, x ≤ 0 }
leakyReLU
Similar to reLU, this activation function is easy to compute. It also allows the output to be negative, which solves the "dying reLU neuron" problem. This function outputs values in the range (−∞, ∞).
Definition:
R(x) = { x, x > 0; 0.01x, x ≤ 0 }
tanh
This activation function shares a lot of similarities with sigmoid. Unlike sigmoid, tanh(x) outputs a value in the range [−1, 1].
Definition:
tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x))
siLU
This activation function is the sigmoid's output multiplied by x. SiLU shares a lot of similarities with leakyReLU, except the function outputs a negative value for x ∈ [−9.85, 0] instead of x ∈ (−∞, 0].
Definition:
S(x) = x·σ(x) = x / (1 + e^(−x))
leakySigmoid
This is an experimental function; it is very similar to arctan(x). Unlike arctan(x), this function outputs a value in the range [~0, ~1].
ς(x) = (100 + x(e^(−x) + 1)) / (100(e^(−x) + 1))
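The activation definitions above can be written directly in JavaScript. This is a reference sketch of the math, not the dannjs implementations:

```javascript
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
const reLU = (x) => (x > 0 ? x : 0);
const leakyReLU = (x) => (x > 0 ? x : 0.01 * x);
const tanh = (x) => (Math.exp(x) - Math.exp(-x)) / (Math.exp(x) + Math.exp(-x));
const siLU = (x) => x / (1 + Math.exp(-x)); // x * sigmoid(x)
const leakySigmoid = (x) => 1 / (1 + Math.exp(-x)) + x / 100; // equals the formula above

console.log(sigmoid(0)); // 0.5
```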
Loss Functions
The loss functions below are provided by dannjs as default; more can be added.
These functions are represented below with ŷ being the dannjs model's predictions and y being the target values. The value n represents the length of the target array.
bce
Binary Cross Entropy loss. This function is common in machine learning, especially for classification tasks.
Definition:
H(y, ŷ) = −(1/n) Σ_{i=0}^{n} (y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i))
mse
Mean Squared Error, one of the most commonly used loss functions in deep learning. This function determines a loss value by averaging the square of the difference between the predicted and desired output. It is also the default loss function for a Dannjs model.
Definition:
H(y, ŷ) = (1/n) Σ_{i=0}^{n} (y_i − ŷ_i)²
mce
Mean Cubed Error, an experimental function. The aim is to give the loss value more gradient near 0; since cubing a number can output a negative value, the absolute value |x| is used.
Definition:
H(y, ŷ) = (1/n) Σ_{i=0}^{n} |y_i − ŷ_i|³
rmse
Root Mean Squared Error, the square root of an mse output.
Definition:
H(y, ŷ) = √( (1/n) Σ_{i=0}^{n} (y_i − ŷ_i)² )
mae
Mean Absolute Error, this function determines the loss value by averaging the absolute difference between the predicted and desired output.
Definition:
H(y, ŷ) = (1/n) Σ_{i=0}^{n} |y_i − ŷ_i|
mbe
Mean Bias Error, this function determines a loss value by averaging the raw difference between the predicted and desired output. The output of this function can be negative, which makes it less preferable than other loss functions.
Definition:
H(y, ŷ) = (1/n) Σ_{i=0}^{n} (y_i − ŷ_i)
lcl
Log Cosh Loss, this function determines a loss value by averaging log(cosh(x)) of the difference between the predicted and desired output.
Definition:
H(y, ŷ) = (1/n) Σ_{i=0}^{n} log(cosh(y_i − ŷ_i))
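The loss definitions above can be written directly in JavaScript. This is a reference sketch of the math, not the dannjs source; `pred` is ŷ and `target` is y:

```javascript
const mean = (arr) => arr.reduce((a, b) => a + b, 0) / arr.length;

const mse = (pred, target) => mean(target.map((y, i) => (y - pred[i]) ** 2));
const rmse = (pred, target) => Math.sqrt(mse(pred, target));
const mae = (pred, target) => mean(target.map((y, i) => Math.abs(y - pred[i])));
const mce = (pred, target) => mean(target.map((y, i) => Math.abs(y - pred[i]) ** 3));
const mbe = (pred, target) => mean(target.map((y, i) => y - pred[i]));
const lcl = (pred, target) =>
  mean(target.map((y, i) => Math.log(Math.cosh(y - pred[i]))));
const bce = (pred, target) =>
  -mean(target.map((y, i) => y * Math.log(pred[i]) + (1 - y) * Math.log(1 - pred[i])));

console.log(mse([0.5, 0.5], [1, 0])); // 0.25
```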
See a more advanced live demo here.
XOR Neural Network example
const dn = require('dannjs'); //nodejs only
let Dann = dn.dann; //nodejs only
// xor dataset
let dataset = [
{
inputs: [0,0],
target: [0]
},
{
inputs: [1,1],
target: [0]
},
{
inputs: [1,0],
target: [1]
},
{
inputs: [0,1],
target: [1]
}
];
// creating the model
let nn = new Dann(2,1);
nn.addHiddenLayer(4,'leakyReLU');
nn.makeWeights();
nn.setLossFunction('mae');
nn.lr = 0.1;
// printing the output before training
console.log('Before training:');
for (const data of dataset) {
nn.feedForward(data.inputs,{log:true});
}
// training the model (5000 epochs)
for (let i = 0; i < 5000; i++) {
for (const data of dataset) {
nn.backpropagate(data.inputs,data.target,{mode:'cpu'});
}
}
// printing the output after training
console.log('After training:');
for (const data of dataset) {
nn.feedForward(data.inputs,{log:true});
}