What are the optimal data structures for implementing a hidden layer neural network with backpropagation in C?


Apologies if this seems like a duplicate post, but I am wondering what the optimal data structures are for implementing and storing a simple hidden layer neural network, with weights, biases, and backpropagation, in C.

Off the top of my head I was thinking about the following:

  • Linked list
  • Pointer array

These two seem mostly equivalent to me for this purpose. I also often see people using 3D arrays/vectors to store the weights and biases, but that seems wasteful to me: either you are limited to a network with the same number of nodes in each layer, or you are storing a lot of zero entries in the 3D array for node connections that don't exist.
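
To make the waste concrete, here is roughly what I mean; the names and sizes are made up for illustration:

#include <cstddef>
#include <vector>

constexpr std::size_t kLayers = 3;     // illustrative sizes only
constexpr std::size_t kMaxNodes = 128;

// Fixed 3D array: every layer is padded to the widest one, so
// connections that don't exist still occupy (zeroed) slots.
float fixedWeights[kLayers - 1][kMaxNodes][kMaxNodes];

// Ragged alternative: one exactly-sized matrix per pair of layers;
// raggedWeights[l][i][j] is the weight from node i in layer l to
// node j in layer l+1.
std::vector<std::vector<std::vector<float>>> raggedWeights;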

CodePudding user response:

One option I see is doing it like this: have one linear array for the nodes and one for all the edges. A sketch:

#include <array>
#include <cstddef>
#include <vector>

struct Node {
  // Half-open range [edgeBegin, edgeEnd) into Network::edges,
  // holding this node's outgoing edges.
  std::size_t edgeBegin;
  std::size_t edgeEnd;
};
struct Edge {
  std::size_t to;  // index of the target node in Network::nodes
  float weight;
};
struct Layer {
  // Half-open range [layerBegin, layerEnd) into Network::nodes.
  std::size_t layerBegin;
  std::size_t layerEnd;
};
struct Network {
  std::vector<Node> nodes;
  std::vector<Edge> edges;
  std::array<Layer, 3> layers;  // input, hidden, output
};

After populating this structure, it might look like this:

nodes:  [n0, n1, n2, n3, n4, n5, n6, n7]
layers: [(0, 2), (2, 5), (5, 8)]
-> input layer has two nodes, hidden has three, output layer has three

where each node points to a section in edges, holding the edges of that particular node.
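
For illustration, a minimal sketch of how such a fully connected network could be populated, building on the structs above; the buildNetwork helper and the zero-initialised weights are my own additions, not part of the original sketch:

// Builds a fully connected network from layer sizes, e.g. {2, 3, 3}.
// sizes.size() must match Network::layers (3 here). Reserving both
// vectors up front keeps it to two dynamic allocations in total.
Network buildNetwork(const std::vector<std::size_t>& sizes) {
  Network net;
  std::size_t nodeCount = 0, edgeCount = 0;
  for (std::size_t l = 0; l < sizes.size(); ++l) {
    nodeCount += sizes[l];
    if (l + 1 < sizes.size()) edgeCount += sizes[l] * sizes[l + 1];
  }
  net.nodes.reserve(nodeCount);
  net.edges.reserve(edgeCount);

  std::size_t firstOfLayer = 0;
  for (std::size_t l = 0; l < sizes.size(); ++l) {
    const std::size_t nextFirst = firstOfLayer + sizes[l];
    net.layers[l] = {firstOfLayer, nextFirst};
    for (std::size_t i = 0; i < sizes[l]; ++i) {
      Node n;
      n.edgeBegin = net.edges.size();
      // Connect this node to every node of the next layer (if any).
      if (l + 1 < sizes.size())
        for (std::size_t j = 0; j < sizes[l + 1]; ++j)
          net.edges.push_back({nextFirst + j, 0.0f});
      n.edgeEnd = net.edges.size();
      net.nodes.push_back(n);
    }
    firstOfLayer = nextFirst;
  }
  return net;
}

Calling buildNetwork({2, 3, 3}) reproduces the node and layer layout shown above, with the output nodes holding empty edge ranges.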

By laying the data out like this, you have a good chance of staying cache-local, and if you set up the initialisation of the network correctly, you only have to request dynamic memory twice (once for the node vector, once for the edge vector).
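
As an example of how this layout is traversed, here is a hedged forward-pass sketch. The structs above carry no biases or activations, so I keep those in separate per-node arrays of my own, and the sigmoid activation is an arbitrary choice:

#include <cmath>

// Propagates activations layer by layer. act and bias are indexed by
// node id, parallel to net.nodes; act must already hold the input
// values at the positions of the input layer's nodes.
void forward(const Network& net, std::vector<float>& act,
             const std::vector<float>& bias) {
  for (std::size_t l = 0; l + 1 < net.layers.size(); ++l) {
    const Layer& next = net.layers[l + 1];
    // Start every downstream node at its bias.
    for (std::size_t n = next.layerBegin; n < next.layerEnd; ++n)
      act[n] = bias[n];
    // Accumulate weighted contributions. The edges of one layer's
    // nodes sit next to each other, so this streams through the edge
    // array in order.
    for (std::size_t n = net.layers[l].layerBegin;
         n < net.layers[l].layerEnd; ++n) {
      const Node& node = net.nodes[n];
      for (std::size_t e = node.edgeBegin; e < node.edgeEnd; ++e)
        act[net.edges[e].to] += net.edges[e].weight * act[n];
    }
    // Apply the activation function (sigmoid here) to the next layer.
    for (std::size_t n = next.layerBegin; n < next.layerEnd; ++n)
      act[n] = 1.0f / (1.0f + std::exp(-act[n]));
  }
}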

This assumes that the network does not change while it is in use (i.e. no new nodes or edges are created).
