Parameter          | Description                                                              | Default         | Models
-------------------+--------------------------------------------------------------------------+-----------------+---------------------------------
Data               | Fundamental type on which calculations are performed                     | double          | Floating point type
LayerContainerS    | Selector for the type of the layers container                            | vecS            | Container Selector
NodesContainerS    | Selector for the type of the neurons container                           | vecS            | Container Selector
ConnectionS        | Selector that indicates the type of connectivity for the Neural Network | full_connectedS | Connectivity Selector
TypeS              | Selector that indicates the type of the Neural Network                   | feedforwardS    | NN Type Selector
InContainerS       | Selector for the type of the neuron's input connections container        | vecS            | Container Selector
OutContainerS      | Selector for the type of the neuron's output connections container      | vecS            | Container Selector
WeightsContainerS  | Selector for the type of the neuron's weights container                  | vecS            | Container Selector
InputsContainerS   | Selector for the type of the neuron's inputs container                   | vecS            | Container Selector
TFType             | Selector that specifies the transfer function for the Neural Network     | sigmoidS        | Transfer function Selector
PoinType           | Selector that specifies the type of the pointer to be used               | BoostSmrtptrS   | Pointer Selector
RandFuncType       | Selector that specifies the type of randomization function               | std_randS       | Randomization function Selector
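The default transfer function selector, sigmoidS, corresponds to the logistic sigmoid. As a library-independent sketch of what that default computes (the names sigmoid_example and sigmoid_derivative_example are ours, not part of the library):

```cpp
#include <cmath>

// Logistic sigmoid: squashes any real input into (0, 1).
double sigmoid_example(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}

// Its derivative expressed in terms of the output value y = sigmoid(x),
// which is the form back-propagation uses.
double sigmoid_derivative_example(double y) {
    return y * (1.0 - y);
}
```

Because the sigmoid saturates near 0 and 1, inputs and targets are usually rescaled into a range such as [0.1, 0.9], as done later in this tutorial.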
- Include the headers:
#include <neural_network.hpp>
#include <data.hpp>
#include <training.hpp>
- Typedef the nn, the training algorithm, and the data classes as needed:
typedef network::neural_network<> net_test;
typedef data_ns::data<double> data_class_type;
typedef training::back_propagation<std::vector<std::vector<double> >::iterator, net_test> bp;
- Create the variables:
net_test network_test;
data_class_type all_data; // this matrix will be separated into the different data sets
data_class_type train_data, control_data, test_data; // the sub-sets filled below
- Create a container specifying the structure of the NN:
std::vector<unsigned int> ns;
ns.push_back(7);  // 6 inputs (+ one bias)
ns.push_back(15); // 14 hidden neurons (+ one bias)
ns.push_back(1);  // one output
- Initialize the network:
network_test.initialize_nn(ns.begin(), ns.end(), -1.0, network_test); // -1.0 is the bias value
- If needed, normalize the data and separate it into training, control and test sets.
The function select_data is overloaded, so you can skip selecting a test set.
First, normalize the data. new_bounds and old_bounds are containers of std::pair,
where first holds the minimum and second the maximum:
std::vector<std::pair<double,double> > new_bounds(7, std::make_pair(0.1, 0.9)); // bounds for the inputs + output
std::vector<std::pair<double,double> > old_bounds(7); // sized so it can receive the original bounds
all_data.arr_linear_norm(all_data.sets.begin(), all_data.sets.end(),
                         new_bounds.begin(), old_bounds.begin());
all_data.select_data(3, 4, all_data.sets.begin(), all_data.sets.end(),
                     train_data.sets, control_data.sets, test_data.sets);
all_data.select_data() parameters:
- first: the period for selecting control sets from all the sets: out of every 3 sets
  kept for training, one is selected for control.
- second: the period for selecting test sets from the remaining sets: out of every 4 sets
  kept for training and control, one is selected for test. After this, the counter restarts.
- the remaining parameters are the begin and end iterators over all the sets, followed by
  the training, control and test matrices.
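The periodic split described above can be sketched independently of the library. This is one plausible reading of the rule, for intuition only; it is not select_data's actual implementation, and the function name is ours:

```cpp
#include <vector>

// One plausible reading of the periodic split: a counter runs over the
// incoming sets; every `control_period`-th set goes to control, every
// `test_period`-th of the remaining sets goes to test, the rest train.
void select_data_example(const std::vector<int>& all,
                         unsigned control_period, unsigned test_period,
                         std::vector<int>& train, std::vector<int>& control,
                         std::vector<int>& test) {
    unsigned c = 0, t = 0;
    for (std::size_t i = 0; i < all.size(); ++i) {
        if (++c == control_period) {      // every Nth set -> control
            control.push_back(all[i]);
            c = 0;
        } else if (++t == test_period) {  // every Mth remaining -> test
            test.push_back(all[i]);
            t = 0;
        } else {
            train.push_back(all[i]);
        }
    }
}
```

With periods 3 and 4 over 12 sets, this reading yields 6 training, 4 control and 2 test sets; check the library's output sizes to confirm its exact semantics.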
- Create the training class and initialize the containers for the learning algorithm:
bp backprop(network_test);
backprop.initialize_containers(backprop);
- Call the training algorithm for one epoch:
backprop.train(train_data.sets.begin(), train_data.sets.end(),
               control_data.sets.begin(), control_data.sets.end(), backprop,
               network_test, 0.0833, 0.5); // learning rate 0.0833, momentum 0.5
- If desired, make predictions with the nn:
std::vector<std::vector<double> > predicted_values(test_data.sets.size());
for (std::size_t i = 0; i < predicted_values.size(); ++i) {
    predicted_values[i].resize(ns[2]);
}
std::vector<std::vector<double> >::iterator pi = predicted_values.begin();
for (std::size_t i = 0; i < test_data.sets.size(); ++i) {
    // If the inputs and outputs are merged in each set:
    network_test.predict(test_data.sets[i].begin(), (test_data.sets[i].end() - 1), pi->begin(), network_test);
    // If the sets contain only inputs, call instead:
    // network_test.predict(test_data.sets[i].begin(), test_data.sets[i].end(), pi->begin(), network_test);
    ++pi;
}
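predict performs a forward pass through the network. For intuition, here is a minimal forward pass through one fully connected layer with the default sigmoid activation (our own sketch, not the library's code; biases are assumed folded into the weights, matching the "+ one bias" convention above):

```cpp
#include <cmath>
#include <vector>

// Forward pass through one fully connected layer:
//   out[j] = sigmoid( sum_i w[j][i] * in[i] )
// The bias is treated as an extra input fixed at the bias value,
// so it needs no special handling here.
std::vector<double> forward_layer_example(
        const std::vector<double>& in,
        const std::vector<std::vector<double> >& w) {
    std::vector<double> out(w.size());
    for (std::size_t j = 0; j < w.size(); ++j) {
        double sum = 0.0;
        for (std::size_t i = 0; i < in.size(); ++i)
            sum += w[j][i] * in[i];           // weighted sum of inputs
        out[j] = 1.0 / (1.0 + std::exp(-sum)); // logistic activation
    }
    return out;
}
```

A full prediction chains one such pass per layer, feeding each layer's output (plus the bias) into the next.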
- Denormalize the predicted values:
all_data.arr_linear_norm(predicted_values.begin(), predicted_values.end(),
                         (new_bounds.begin() + (ns[0] - 1)), (old_bounds.begin() + (ns[0] - 1)), 0);
The iterator offsets skip the bounds of the inputs so that the bounds of the output are used.
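arr_linear_norm appears to perform a linear (min-max) rescaling and, given the saved old bounds, its inverse. A self-contained sketch of that idea, operating on a single column of values (the function names are ours, not the library's):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Rescale a column from its observed [min, max] into [lo, hi] and
// return the old bounds so the mapping can be reversed later.
std::pair<double, double> linear_norm_example(std::vector<double>& column,
                                              double lo, double hi) {
    double old_min = *std::min_element(column.begin(), column.end());
    double old_max = *std::max_element(column.begin(), column.end());
    double scale = (hi - lo) / (old_max - old_min);
    for (std::size_t i = 0; i < column.size(); ++i)
        column[i] = lo + (column[i] - old_min) * scale;
    return std::make_pair(old_min, old_max);
}

// Reverse the mapping using the saved bounds (denormalization).
void linear_denorm_example(std::vector<double>& column,
                           double lo, double hi,
                           const std::pair<double, double>& old_bounds) {
    double scale = (old_bounds.second - old_bounds.first) / (hi - lo);
    for (std::size_t i = 0; i < column.size(); ++i)
        column[i] = old_bounds.first + (column[i] - lo) * scale;
}
```

This is why old_bounds had to be kept around: without the original minima and maxima the normalization cannot be inverted.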
Neat!
Now you can compare the predicted values in the predicted_values container
with the real values you had at the beginning.