```
neural_network class  <----------  neuron class
        ^                              ^
        |                              |
        |                              |
network_detail class           neuron_detail class
```

The neural_network<> class is the principal object to be instantiated by the end user.
```cpp
neural_network<Data, LayerContainerS, NodesContainerS, ConnectionS, TypeS,
               InContainerS, OutContainerS, WeightsContainerS,
               InputsContainerS, TFType, PoinType, RandFuncType>
```
Parameter | Description | Default | Models |
---|---|---|---|
Data | Fundamental type on which calculations are performed | double | Floating point type |
LayerContainerS | Selector for the type of the layers container | vecS | Container Selector |
NodesContainerS | Selector for the type of the neurons container | vecS | Container Selector |
ConnectionS | Selector that indicates the type of connectivity for the Neural Network | full_connectedS | Connectivity Selector |
TypeS | Selector that indicates the type of the Neural Network | feedforwardS | NN Type Selector |
InContainerS | Selector for the type of the neuron's input connections container | vecS | Container Selector |
OutContainerS | Selector for the type of the neuron's output connections container | vecS | Container Selector |
WeightsContainerS | Selector for the type of the neuron's weights container | vecS | Container Selector |
InputsContainerS | Selector for the type of the neuron's inputs container | vecS | Container Selector |
TFType | Selector that specifies the transfer function for the Neural Network | sigmoidS | Transfer function Selector |
PoinType | Selector that specifies the type of the pointer to be used | BoostSmrtptrS | Pointer Selector |
RandFuncType | Selector that specifies the type of randomization function | std_randS | Randomization function Selector |
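All of these parameters have defaults, so only the ones being changed need to be supplied. The sketch below spells out the default instantiation explicitly and then overrides just the data type; placing the selector types (vecS, full_connectedS, feedforwardS, sigmoidS, BoostSmrtptrS, std_randS) in the network namespace is an assumption here and may differ in the actual headers.

```cpp
#include <neural_network.hpp>

// Equivalent to network::neural_network<> with every default from the table
// written out (the network:: qualification of the selectors is assumed):
typedef network::neural_network<
    double,                     // Data
    network::vecS,              // LayerContainerS
    network::vecS,              // NodesContainerS
    network::full_connectedS,   // ConnectionS
    network::feedforwardS,      // TypeS
    network::vecS,              // InContainerS
    network::vecS,              // OutContainerS
    network::vecS,              // WeightsContainerS
    network::vecS,              // InputsContainerS
    network::sigmoidS,          // TFType
    network::BoostSmrtptrS,     // PoinType
    network::std_randS          // RandFuncType
> explicit_default_net;

// Overriding only the leading parameters is enough for simple changes,
// e.g. a network that performs all calculations in float:
typedef network::neural_network<float> float_net;
```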
- morpho/nntl/neural_network.hpp
- morpho/nntl/data.hpp
- morpho/nntl/training.hpp
```cpp
#include <neural_network.hpp>
#include <data.hpp>
#include <training.hpp>

typedef network::neural_network<> net_test;
typedef data_ns::data<double> data_class_type;
typedef training::back_propagation<std::vector<std::vector<double> >::iterator, net_test> bp;

net_test network_test;
data_class_type all_data;   // this matrix will be separated into different data-sets

// Network topology: one entry per layer
std::vector<unsigned int> ns;
ns.push_back(7);    // 6 inputs (+ one bias)
ns.push_back(15);   // 14 hidden neurons (+ one bias)
ns.push_back(1);    // one output

network_test.initialize_nn(ns.begin(), ns.end(), -1.0, network_test);   // -1.0 is the bias value
```
```cpp
// Bounds for the inputs + outputs (6 inputs + 1 output = 7 pairs)
std::vector<std::pair<double,double> > new_bounds(7, std::make_pair(0.1, 0.9));
std::vector<std::pair<double,double> > old_bounds(7);   // receives the original bounds of each column

all_data.arr_linear_norm(all_data.sets.begin(), all_data.sets.end(),
                         new_bounds.begin(), old_bounds.begin());

// Split the normalized matrix into the three data-sets
data_class_type train_data, control_data, test_data;
all_data.select_data(3, 4, all_data.sets.begin(), all_data.sets.end(),
                     train_data.sets, control_data.sets, test_data.sets);
```

`all_data.select_data()` parameters:
```cpp
bp backprop(network_test);
backprop.initialize_containers(backprop);

backprop.train(train_data.sets.begin(), train_data.sets.end(),
               control_data.sets.begin(), control_data.sets.end(),
               backprop, network_test,
               0.0833,   // learning rate value
               0.5);     // momentum value
```
```cpp
std::vector<std::vector<double> > predicted_values(test_data.sets.size());
for (std::size_t i = 0; i < predicted_values.size(); ++i) {
    predicted_values[i].resize(ns[2]);   // room for the output values
}

std::vector<std::vector<double> >::iterator pi = predicted_values.begin();
for (std::size_t i = 0; i < test_data.sets.size(); ++i) {
    // Use exactly one of the following calls:
    // if the inputs and outputs are merged in each row (the last element is the target):
    network_test.predict(test_data.sets[i].begin(), test_data.sets[i].end() - 1,
                         pi->begin(), network_test);
    // if the rows contain only inputs:
    // network_test.predict(test_data.sets[i].begin(), test_data.sets[i].end(),
    //                      pi->begin(), network_test);
    ++pi;
}
```
```cpp
// Denormalize the predicted outputs back to their original range
// (the output bounds start at offset ns[0]-1 in the bounds vectors)
all_data.arr_linear_norm(predicted_values.begin(), predicted_values.end(),
                         new_bounds.begin() + (ns[0] - 1),
                         old_bounds.begin() + (ns[0] - 1), 0);
```