Since a neural network is missing in the codebank
here is my version.
It's very simple.
It initializes with "CREATE":
Code:
NN.CREATE Array (2, 2, 1), 0.25, 4
- The first argument is an array giving the NN topology:
Array (N of Inputs, Hidden layer neurons, ..., Hidden layer neurons, N of outputs)
- The second argument is the learning rate.
- The third is the initial range of the connection weights. (This value, together with the learning rate, is very important and can drastically change the learning outcome.)
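For illustration only, here is a rough Python sketch of the kind of structure such a CREATE call might set up (the names `create`, `topology`, and `weight_range` are mine, not part of the actual code):

```python
import random

def create(topology, learning_rate, weight_range):
    # topology, e.g. [2, 2, 1]: inputs, hidden layer sizes..., outputs.
    # weights[l][i][j] is the connection from neuron i of layer l
    # (index 0 being the bias neuron) to neuron j of layer l + 1,
    # drawn uniformly from [-weight_range, +weight_range].
    weights = [
        [[random.uniform(-weight_range, weight_range)
          for _ in range(topology[l + 1])]
         for _ in range(topology[l] + 1)]      # +1 for the bias neuron
        for l in range(len(topology) - 1)
    ]
    return {"lr": learning_rate, "weights": weights}

# Same arguments as the NN.CREATE call above
net = create([2, 2, 1], 0.25, 4)
```

With topology [2, 2, 1] this gives two weight matrices: 3x2 (bias + 2 inputs into 2 hidden neurons) and 3x1 (bias + 2 hidden neurons into 1 output).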
To get the output, just call RUN with an array of inputs as the argument; it returns the array of outputs.
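As a sketch of what a RUN call computes (a plain feed-forward pass; the function name `run`, the weight layout, and the hand-picked weight values are all my own assumptions):

```python
import math

def run(weights, inputs):
    # Slot 0 of every layer's activation vector is the bias neuron,
    # clamped to 1; weights[l][0][j] therefore acts as the bias of
    # neuron j in the next layer.
    acts = [1.0] + list(inputs)
    for w in weights:
        acts = [1.0] + [
            math.tanh(sum(acts[i] * w[i][j] for i in range(len(acts))))
            for j in range(len(w[0]))
        ]
    return acts[1:]                  # drop the final bias slot

# A 2-2-1 net with hand-picked weights (rows: bias, input 1, input 2)
weights = [
    [[0.1, -0.2], [0.5, 0.4], [-0.3, 0.8]],   # input  -> hidden
    [[0.2], [0.7], [-0.6]],                   # hidden -> output
]
out = run(weights, [0.5, -0.5])
```

Because the activation is TANH, every output lands in (-1, 1), matching the -1..1 range described below.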
For learning (which is supervised), just call TRAIN.
The arguments are an array of inputs and an array of expected outputs.
The learning process is done by backpropagation; the code was adapted (with modifications) from an article by Paras Chopra.
Neuron index zero [0] of each layer is used for the bias and is always 1 (the biases are the weights of the connections from the index-0 neurons to the next layer's neurons). [Still not sure this is the correct way, though.]
Inputs and outputs range from -1 to 1.
The activation function used is TANH.
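This is not the actual code, but for anyone curious how the pieces fit together (index-0 bias neuron, TANH, backpropagation with the derivative tanh'(x) = 1 - tanh(x)^2), here is a self-contained Python sketch of the whole scheme; all the names and parameter values in it are mine:

```python
import math
import random

random.seed(1)

def create(topology, lr, weight_range):
    # weights[l][i][j]: connection from neuron i of layer l (0 = bias)
    # to neuron j of layer l + 1, drawn from [-weight_range, +weight_range]
    return {"lr": lr, "weights": [
        [[random.uniform(-weight_range, weight_range)
          for _ in range(topology[l + 1])]
         for _ in range(topology[l] + 1)]
        for l in range(len(topology) - 1)]}

def forward(net, inputs):
    # Returns every layer's activations; slot 0 is the always-1 bias
    acts = [[1.0] + list(inputs)]
    for w in net["weights"]:
        acts.append([1.0] + [
            math.tanh(sum(acts[-1][i] * w[i][j]
                          for i in range(len(acts[-1]))))
            for j in range(len(w[0]))])
    return acts

def train(net, inputs, targets):
    acts = forward(net, inputs)
    # Output-layer error terms; tanh'(x) = 1 - tanh(x)^2
    deltas = [(t - o) * (1.0 - o * o)
              for t, o in zip(targets, acts[-1][1:])]
    for l in range(len(net["weights"]) - 1, -1, -1):
        w, prev = net["weights"][l], acts[l]
        # Propagate the error backwards before touching the weights
        nxt = [sum(w[i][j] * deltas[j] for j in range(len(deltas)))
               * (1.0 - prev[i] * prev[i])
               for i in range(1, len(prev))]   # skip the bias slot
        for i in range(len(prev)):
            for j in range(len(deltas)):
                w[i][j] += net["lr"] * deltas[j] * prev[i]
        deltas = nxt

# Repeatedly training on a single sample pushes the output toward
# its target, which is an easy way to sanity-check the gradients
net = create([2, 2, 1], 0.1, 1.0)
sample, target = [0.5, -0.5], [0.8]
for _ in range(200):
    train(net, sample, target)
out = forward(net, sample)[-1][1:]
```

As the post notes, the initial weight range and the learning rate interact strongly: large initial weights can saturate TANH, making the 1 - tanh(x)^2 factor (and hence learning) nearly vanish.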
I'll probably put it on GitHub.
Enjoy!
And, as always, anyone who has ideas to improve it is welcome.