From Nodes to Models#

While the Node class alone provides the tools to define recurrent functions, such as the equations of recurrent neural networks, most users will need to combine nodes to build more powerful models.

Now that you have learned about nodes in the Node functional API guide, read this guide to understand how to combine them into complex computational graphs using the Model class.

Definition#

Models are objects storing nodes and the relations between them, allowing functional composition. Nodes combined within a Model instance combine their forward functions into one complex forward function \(F\). A Model's forward function \(F\) can therefore be seen as a specific composition of the forward functions of all the nodes it contains, used to update all of their internal states at once (4):

(4)#\[\begin{split} S[t+1] &= F(S[t], x[t]) \\ &= f_n(s_n[t], \dots f_2(s_2[t], f_1(s_1[t], x[t]))) \\ &= (f_n \circ \dots \circ f_2 \circ f_1)(\{s_n[t], \dots, s_2[t], s_1[t]\}, x[t])\end{split}\]

where \(S\) is the set of all internal states of the \(n\) nodes in the Model, and the \(f_n, \dots, f_2, f_1\) are the forward functions of these nodes.

We can represent this model as a computational graph, where each composition of forward functions is represented as an edge. For instance, if node A is equipped with function \(f_A\), and node B with function \(f_B\), then \((f_B \circ f_A)(\{s_A[t], s_B[t]\}, x[t]) = f_B(s_B[t], f_A(s_A[t], x[t]))\) can be represented as the graph in Fig. 8.

A simple graph with two nodes.

Fig. 8 A graph connecting node A to node B. In that case, applying the model to some data point \(x[t]\) will first update the internal state of node A using function \(f_A\) on \(x[t]\), before updating the internal state of node B using function \(f_B\) on A’s new internal state. This is equivalent to composing \(f_A\) with \(f_B\).#
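To make this composition concrete, here is a minimal pure-Python sketch (not the ReservoirPy API) of two forward functions chained as in Fig. 8; f_A, f_B and their tanh update rules are made-up placeholders:

import numpy as np

def f_A(s_A, x):
    # hypothetical forward function of node A
    return np.tanh(s_A + x)

def f_B(s_B, x):
    # hypothetical forward function of node B
    return np.tanh(s_B + 2.0 * x)

s_A, s_B = np.zeros(5), np.zeros(5)  # internal states at time t
x_t = np.ones(5)                     # input at time t

# F(S[t], x[t]) = f_B(s_B[t], f_A(s_A[t], x[t]))
s_A = f_A(s_A, x_t)  # update A's state from the input
s_B = f_B(s_B, s_A)  # update B's state from A's new state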

Create a Model#

The simplest way to define a Model is to use the >> operator between nodes:

model = nodeA >> nodeB

This will create a very simple model storing the graph in Fig. 8.
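Assuming nodeA and nodeB are any two ReservoirPy nodes, the resulting model now stores both of them (see Access nodes attributes below):

model = nodeA >> nodeB

assert nodeA in model.nodes
assert nodeB in model.nodes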

The >> operator between nodes and models relies on ReservoirPy's link() function. You can use either the function or the operator to define models:

from reservoirpy import link

model = link(nodeA, nodeB)

As Model is essentially a subclass of Node, it is also possible to link models together, or nodes to models. This allows chaining the >> operator:

model = nodeA >> nodeB >> nodeC

This model's forward function \(F\) is defined as \(f_C \circ f_B \circ f_A\).

Call and run a Model#

Models expose the same interface as nodes. They can be called on single data points, or run on a timeseries using the run() method. Consider the very simple model defined by:

model = nodeA >> nodeB

We can call or run this model:

# using 'call' on a single timestep of data x_t0
s_t1 = model.call(x_t0)
# using model as a function
s_t1 = model(x_t0)
# running on a sequence X
S = model.run(X)

In that case, the variable s_t1 (or S when using run()) stores the internal state of the model's output node. Taking the model in Fig. 8 as an example, the variable would contain the state of node B, as it is the last node visited in the graph.
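Regarding shapes, here is a minimal sketch, assuming the model above receives NumPy arrays with 5 input features and that its output node produces output_dim features: a single call takes one timestep of shape (1, 5) and returns a state of shape (1, output_dim), while run() takes a timeseries of shape (timesteps, 5) and returns the corresponding sequence of states:

import numpy as np

x_t0 = np.ones((1, 5))  # one timestep with 5 features
X = np.ones((10, 5))    # a 10-timestep series

s_t1 = model(x_t0)  # shape (1, output_dim)
S = model.run(X)    # shape (10, output_dim)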

These operations update the states of all nodes within the graph. The updated states can still be accessed through the node instances themselves:

import numpy as np

s_t1 = model.call(x_t0)
# now that we have called the model, nodeB's state is updated
assert np.all(nodeB.state() == s_t1)

Access nodes attributes#

A list of all nodes in a model can be retrieved using the Model.nodes attribute. You can also retrieve a node by its name (see Naming nodes) with the get_node() method:

nodeA = Node(..., name="A")
nodeB = Node(..., name="B")
model = nodeA >> nodeB
assert id(model.get_node("A")) == id(nodeA)

Node parameters and hyperparameters can be accessed this way inside a model. They are also stored in the Model.params and Model.hypers attributes as nested dictionaries:

assert model.params["A"]["param1"] == nodeA.param1
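For example, with named Reservoir and Ridge nodes (assuming the reservoir's leak rate is exposed as the lr hyperparameter, as in ReservoirPy's Reservoir node):

from reservoirpy.nodes import Reservoir, Ridge

reservoir = Reservoir(100, lr=0.5, name="res")
readout = Ridge(1, name="out")
model = reservoir >> readout

# hyperparameters are nested under each node's name
assert model.hypers["res"]["lr"] == reservoir.lr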

An example: building a simple Echo State Network#

Models allow us to create our first Echo State Network (ESN), a well-known neural network architecture in the Reservoir Computing field. An ESN is composed of a reservoir, a recurrent neural network of randomly connected neurons, and a readout, a simple feed-forward neural network connected to the reservoir. The connections between the reservoir and the readout layer can be learned (see Learning rules to learn how to train an ESN). For now, these connections are kept constant.

In ReservoirPy, a reservoir can be built using a Reservoir node. A readout equipped with a simple linear regression mechanism for learning connection weights can be created using the Ridge node. We start by creating a reservoir and a readout node. The reservoir contains 100 neurons, while the readout is a layer of a single neuron.

In [1]: from reservoirpy.nodes import Reservoir, Ridge

In [2]: reservoir = Reservoir(100)

In [3]: readout = Ridge(1)

Next, we can link these two nodes together to create our first ESN:

In [4]: esn = reservoir >> readout

This ESN can then be called and run over timeseries.

In [5]: import numpy as np

In [6]: X = np.sin(np.arange(0, 10))[:, np.newaxis]

In [7]: S = esn.run(X)

In [8]: print(S)
[[0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]
 [0.]]

Because we have not trained the connections between the reservoir and the readout yet, the output is just a null vector. See Learning rules to learn more about how to train these connections to perform tasks on data.
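As a preview of what the Learning rules guide covers, a Ridge readout can be trained offline with the model's fit() method. A minimal sketch, assuming the X series defined above and a one-timestep-ahead forecasting task:

# train the ESN to predict the sine wave one step ahead
X_train, Y_train = X[:-1], X[1:]
esn = esn.fit(X_train, Y_train)

# after fitting, the readout produces non-null predictions
S = esn.run(X_train)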

Multi inputs models#

In some cases, models need to be connected to several sources of data simultaneously, or to output several values. For instance, imagine that we need node B to receive two different inputs, from node A1 and node A2 (Fig. 9):

A graph with two inputs.

Fig. 9 A graph connecting node A1 and node A2 to node B.#

To create this graph, we can apply >> on a list of nodes:

model = [nodeA1, nodeA2] >> nodeB

This model will feed inputs to node A1 and node A2, concatenate their internal states, and feed the concatenated states to node B.

Note

The concatenation of A1’s and A2’s states is handled by a Concat node, automatically inserted between nodes A1, A2 and B in that case.
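This implicit concatenation is equivalent to inserting the Concat node explicitly:

from reservoirpy.nodes import Concat

# explicit version of [nodeA1, nodeA2] >> nodeB
model = [nodeA1, nodeA2] >> Concat() >> nodeB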

To run this model, we can either give a single data point that will be used by both A1 and A2, or give a different input to each node by passing a dictionary to the call or run method. In this dictionary, each key must be the name of a model input node, and the corresponding value a data point (or a timeseries) to feed to that node:

# same input for A1 and A2
s = model(x)
# different inputs for A1 and A2
s = model({"A1": x1, "A2": x2})

Note

Naming your nodes makes this easier. We assume above that the nodes have been named “A1”, “A2” and “B” at instantiation.
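For instance, a minimal sketch of this naming, assuming the nodes are Reservoir instances fed with 5-dimensional inputs:

import numpy as np
from reservoirpy.nodes import Reservoir

nodeA1 = Reservoir(100, name="A1")
nodeA2 = Reservoir(100, name="A2")
nodeB = Reservoir(100, name="B")

model = [nodeA1, nodeA2] >> nodeB

x1 = np.ones((1, 5))  # input for A1
x2 = np.ones((1, 5))  # input for A2
s = model({"A1": x1, "A2": x2})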

Multi outputs models#

Similarly, imagine that we need node A to be connected to both node B1 and node B2 (Fig. 10):

A graph with two outputs.

Fig. 10 A graph connecting node A to node B1 and node B2.#

We can still use >> and a list of nodes:

model = nodeA >> [nodeB1, nodeB2]

This model will feed inputs to node A, and then feed A’s internal state to both node B1 and node B2.

In that case, calling or running the model returns a dictionary of output internal states. In this dictionary, the keys are the names of the model’s output nodes, and the values their respective internal states:

s = model(x)
assert s["B1"] == nodeB1.state()
assert s["B2"] == nodeB2.state()

Note

Naming your nodes makes this easier. We assume above that the nodes have been named “A”, “B1” and “B2” at instantiation.

Merge models and build complex graphs#

Models can reach any level of complexity. While most reservoir computing models can be seen as a simple chain of operations, as is the case for an ESN, some models, like deep echo state networks, require combining nodes in more elaborate ways.

Imagine now that we want to create the model defined by the complicated graph in Fig. 11:

A complicated graph.

Fig. 11 A complicated model.#

To create this model, we must decompose it into several paths of connections between nodes, or several sub-models. All sub-models can then be merged using the & operator, or the merge() function.

First, let’s connect inputs to nodes A, B and C. To do this, we can use the Input node to indicate to the model where inputs should be fed.

path1 = Input() >> [A, B, C]

Now, we can create the main chain of connections going from node A to node F. To ensure that only node F is used as the output of the Model, we can use the Output node.

path2 = A >> B >> C >> D >> E >> F >> Output()

Only two more connections to create! We can now connect A to F and B to E:

path3 = A >> F
path4 = B >> E

To create the final model, we will use the merge() function, triggered by the & operator between models. This operation gathers all nodes and connections defined in the path1 to path4 sub-models into one single model.

model = path1 & path2 & path3 & path4
# or using "merge"
from reservoirpy import merge
model = merge(path1, path2, path3, path4)

The model variable now contains all the nodes and connections defined by the graph in Fig. 11.
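To check the result, we can list the names of all the nodes gathered in the merged model, using the Model.nodes attribute introduced above (note that the Input and Output nodes will appear in the list as well):

print([node.name for node in model.nodes])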

Learn more#

Now that you are more familiar with the basic concepts of models, you can read the Learning rules guide to learn how to train them.

References#

The ReservoirPy Node API was heavily inspired by Explosion.ai’s Thinc functional deep learning library [1] and by the Nengo core API [2]. It also follows some scikit-learn schemes and guidelines [3].