reservoirpy.compat.ESN#

class reservoirpy.compat.ESN(lr, W, Win, input_bias=True, reg_model=None, ridge=0.0, Wfb=None, fbfunc=<function ESN.<lambda>>, noise_in=0.0, noise_rc=0.0, noise_out=0.0, activation=<ufunc 'tanh'>, seed=None, typefloat=<class 'numpy.float64'>)[source]#

Base class of Echo State Networks.

Simple, fast, parallelizable and object-oriented implementation of Echo State Networks [1] [2], using offline learning methods.

Warning

The v0.2 model compat.ESN is deprecated. Consider using the new Node API introduced in v0.3 (see Node functional API).

The compat.ESN class is the cornerstone of ReservoirPy offline learning methods using reservoir computing. The Echo State Network allows one to:

  • quickly build ESNs, using the reservoirpy.mat_gen module to initialize weights,

  • train and test ESNs on the task of your choice,

  • use the trained ESNs on the task of your choice, either in predictive mode or generative mode.
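The parameters above describe, in essence, the standard leaky-integrator reservoir update x(t+1) = (1 - lr) * x(t) + lr * activation(W x(t) + Win u(t)). A minimal NumPy sketch of that dynamic (ignoring the noise, bias and feedback terms that the class also supports; dimensions and weight scalings are arbitrary choices for illustration, where reservoirpy.mat_gen would normally be used):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dimensions: 100 reservoir units, 1-dimensional input.
N, dim_in = 100, 1
lr = 0.3  # leaking rate

# Random weight matrices; in ReservoirPy these would come from
# reservoirpy.mat_gen, with control over spectral radius and sparsity.
W = rng.uniform(-0.5, 0.5, (N, N))
Win = rng.uniform(-1.0, 1.0, (N, dim_in))

def next_state(x, u):
    """One leaky-integrator update: x <- (1 - lr) * x + lr * tanh(W x + Win u)."""
    return (1 - lr) * x + lr * np.tanh(W @ x + Win @ u)

# Drive the reservoir with a short sine input and collect the states.
x = np.zeros(N)
states = []
for t in range(50):
    u = np.array([np.sin(0.1 * t)])
    x = next_state(x, u)
    states.append(x)

states = np.array(states)  # shape (50, 100): time on rows, units on columns
```

With tanh activation the states stay in [-1, 1], since each update is a convex combination of the previous state and a tanh output.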

Parameters:
  • lr (float) – Leaking rate

  • W (np.ndarray) – Reservoir weights matrix

  • Win (np.ndarray) – Input weights matrix

  • input_bias (bool, optional) – If True, will add a constant bias to the input vector. By default, True.

  • reg_model (Callable, optional) – A scikit-learn linear model function to use for regression. Should be None if ridge is used.

  • ridge (float, optional) – Ridge regularization coefficient for Tikhonov regression. Should be None if reg_model is used. By default, pseudo-inversion of internal states and teacher signals is used.

  • Wfb (np.array, optional) – Feedback weights matrix.

  • fbfunc (Callable, optional) – Feedback activation function.

  • typefloat (numpy.dtype, optional) – Data type used for all matrices and states. By default, numpy.float64.

lr#

Leaking rate.

Type:

float

activation#

Reservoir activation function.

Type:

Callable

fbfunc#

Feedback activation function.

Type:

Callable

noise_in#

Input noise gain.

Type:

float

noise_rc#

Reservoir states noise gain.

Type:

float

noise_out#

Feedback noise gain.

Type:

float

seed#

Random state seed.

Type:

int

typefloat#

Data type used for all matrices and states.

Type:

numpy.dtype

References

Methods

__init__(lr, W, Win[, input_bias, ...])

compute_all_states(inputs[, ...])

Compute all states generated from sequences of inputs.

compute_outputs(states[, verbose])

Compute all readouts of a given sequence of states, when a readout matrix is available (i.e. after training).

fit_readout(states, teachers[, reg_model, ...])

Compute a readout matrix by fitting the states computed by the ESN to the expected values, using the regression model defined in the ESN.

generate(nb_timesteps[, warming_inputs, ...])

Run the ESN on generative mode.

run(inputs[, init_state, init_fb, workers, ...])

Run the model on a sequence of inputs, and return the states and readout vectors.

save(directory)

Save the ESN to disk.

train(inputs, teachers[, wash_nr_time_step, ...])

Train the ESN model on a set of input sequences.

zero_feedback()

Returns a zero feedback vector.

zero_state()

Returns a zero state vector.

Attributes

N

Number of units.

W

Recurrent weight matrix.

Wfb

Feedback weight matrix.

Win

Input weight matrix.

Wout

Readout weight matrix.

dim_in

Input dimension.

dim_out

Output (readout) dimension.

input_bias

If True, constant bias is added to inputs.

ridge

L2 regularization coefficient for readout fitting.

use_raw_input

If True, raw inputs are concatenated to states before readout.

property N#

Number of units.

property W#

Recurrent weight matrix.

property Wfb#

Feedback weight matrix.

property Win#

Input weight matrix.

property Wout#

Readout weight matrix.

compute_all_states(inputs, forced_teachers=None, init_state=None, init_fb=None, workers=-1, seed=None, verbose=False)[source]#

Compute all states generated from sequences of inputs.

Parameters:
  • inputs (list or array of numpy.array) – All sequences of inputs used for internal state computation. Note that it should always be a list of sequences, i.e. if only one sequence of inputs is used, it should be alone in a list.

  • forced_teachers (list or array of numpy.array, optional) – Sequence of ground truths, for computation with feedback without any trained readout. Note that it should always be a list of sequences of the same length as the inputs, i.e. if only one sequence of inputs is used, it should be alone in a list.

  • init_state (np.ndarray, optional) – State initialization vector for all inputs. By default, state is initialized at 0.

  • init_fb (np.ndarray, optional) – Feedback initialization vector for all inputs, if feedback is enabled. By default, feedback is initialized at 0.

  • workers (int, optional) – If n >= 1, will enable parallelization of states computation with n threads/processes, if possible. If n = -1, will use all available resources for parallelization. By default, -1.

  • verbose (bool, optional) –

Returns:

All computed states.

Return type:

list of np.ndarray

compute_outputs(states, verbose=False)[source]#

Compute all readouts of a given sequence of states, when a readout matrix is available (i.e. after training).

Parameters:
  • states (list of numpy.array) – All sequences of states used for readout.

  • verbose (bool, optional) –

Raises:
  • RuntimeError – If no readout matrix Wout is available. Consider training the model first, or loading an existing matrix.

Returns:

All outputs of readout matrix.

Return type:

list of numpy.ndarray

property dim_in#

Input dimension.

property dim_out#

Output (readout) dimension.

fit_readout(states, teachers, reg_model=None, ridge=None, force_pinv=False, verbose=False)[source]#

Compute a readout matrix by fitting the states computed by the ESN to the expected values, using the regression model defined in the ESN.

Parameters:
  • states (list of numpy.ndarray) – All states computed.

  • teachers (list of numpy.ndarray) – All ground truth vectors.

  • reg_model (scikit-learn regression model, optional) – A scikit-learn regression model to use for readout weights computation.

  • ridge (float, optional) – Use Tikhonov regression for readout weights computation and set regularization parameter to the parameter value.

  • force_pinv (bool, optional) – Overwrite all previous parameters and force computation of readout using pseudo-inversion.

  • verbose (bool, optional) –

Returns:

Readout matrix.

Return type:

numpy.ndarray
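The ridge option corresponds to standard Tikhonov regression of the teachers on the states. A minimal NumPy sketch of that computation, on synthetic data (the actual method also accepts lists of sequences and scikit-learn models; the matrix shapes here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical collected data: 200 timesteps of 50-unit states,
# and a 1-dimensional teacher signal that is a noisy linear map of them.
X = rng.standard_normal((200, 50))  # states, time on rows
Y = X @ rng.standard_normal((50, 1)) + 0.01 * rng.standard_normal((200, 1))

ridge = 1e-6  # Tikhonov regularization coefficient

# Ridge (Tikhonov) regression: Wout = (X^T X + ridge * I)^-1 X^T Y
Wout = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

predictions = X @ Wout  # the readout applied to the collected states
```

When neither ridge nor reg_model is set, the class falls back on pseudo-inversion, i.e. essentially `Wout = np.linalg.pinv(X) @ Y`.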

generate(nb_timesteps, warming_inputs=None, init_state=None, init_fb=None, verbose=False, init_inputs=None, seed=None, return_init=None)[source]#

Run the ESN on generative mode.

After the warming_inputs are consumed, new outputs are used as inputs for the next nb_timesteps, i.e. the ESN feeds itself with its own outputs.

Note that this mode can only work if the ESN is trained on a regression task. The outputs of the ESN must be the same kind of data as its input.

To train an ESN for generative mode, use the ESN.train() method to train the ESN on a regression task (for instance, predicting the data point at time t+1 of a timeseries given the data at time t).

Parameters:
  • nb_timesteps (int) – Number of timesteps of data to generate from the initial input.

  • warming_inputs (numpy.ndarray) – Input data used to initiate generative mode. This data is meant to “seed” the ESN internal states with some real information, before it runs on its own created outputs.

  • init_state (numpy.ndarray, optional) – State initialization vector for the reservoir. By default, internal state of the reservoir is initialized to 0.

  • init_fb (numpy.ndarray, optional) – Feedback initialization vector for the reservoir, if feedback is enabled. By default, feedback is initialized to 0.

  • verbose (bool, optional) –

  • init_inputs (list of numpy.ndarray, optional) – Same as warming_inputs. Kept for compatibility with previous version. Deprecated since 0.2.2, will be removed soon.

  • return_init (bool, optional) – Kept for compatibility with previous version. Deprecated since 0.2.2, will be removed soon.

Returns:

Generated outputs, generated states, warming outputs, warming states

Generated outputs are the timeseries predicted by the ESN from its own predictions over time. Generated states are the corresponding internal states.

Warming outputs are the predictions made by the ESN based on the warming inputs passed as parameters. These predictions are prior to the generated outputs. Warming states are the corresponding internal states. In the case no warming inputs are provided, warming outputs and warming states are None.

Return type:

tuple of numpy.ndarray
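The warming-then-generation scheme can be sketched in plain NumPy: run the reservoir on real data first, then close the loop by feeding each readout back as the next input. All weights below are random stand-ins for trained matrices, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained pieces for a 1-dimensional autoregressive task.
N, lr = 80, 0.5
W = rng.uniform(-0.1, 0.1, (N, N))
Win = rng.uniform(-0.5, 0.5, (N, 1))
Wout = rng.uniform(-0.1, 0.1, (1, N))  # would normally come from training

def step(x, u):
    # Leaky-integrator update (noise and feedback omitted).
    return (1 - lr) * x + lr * np.tanh(W @ x + Win @ u)

# Warming phase: seed the internal state with real data.
x = np.zeros(N)
warming = np.sin(0.2 * np.arange(30)).reshape(-1, 1)
for u in warming:
    x = step(x, u)

# Generative phase: each output becomes the next input.
outputs = []
u = Wout @ x  # first self-generated input
for _ in range(20):
    x = step(x, u)
    u = Wout @ x
    outputs.append(u.copy())

generated = np.array(outputs).reshape(20, 1)
```

This only makes sense when outputs and inputs live in the same space, which is why the docstring restricts generative mode to regression tasks of that kind.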

property input_bias#

If True, constant bias is added to inputs.

property ridge#

L2 regularization coefficient for readout fitting.

run(inputs, init_state=None, init_fb=None, workers=-1, return_states=False, backend=None, seed=None, verbose=False)[source]#

Run the model on a sequence of inputs, and return the states and readout vectors.

Parameters:
  • inputs (list of numpy.ndarray) – List of inputs. Note that it should always be a list of sequences, i.e. if only one sequence (array with rows representing time axis) of inputs is used, it should be alone in a list.

  • init_state (numpy.ndarray) – State initialization vector for all inputs. By default, internal state of the reservoir is initialized to 0.

  • init_fb – Feedback initialization vector for all inputs, if feedback is enabled. By default, feedback is initialized to 0.

Returns:

All outputs computed from readout and all corresponding internal states, for all inputs.

Return type:

list of numpy.ndarray, list of numpy.ndarray

Note

If only one input sequence is provided (“continuous time” inputs), workers should be 1, because parallelization is impossible. In other cases, if using large NumPy arrays during computation (which is often the case), prefer the threading backend to avoid huge overhead. Multiprocessing is a good idea only in very specific cases, and this code is not (yet) well suited for it.

save(directory)[source]#

Save the ESN to disk.

Parameters:

directory (str or Path) – Directory where to save the model.

train(inputs, teachers, wash_nr_time_step=0, workers=-1, seed=None, verbose=False, backend=None, use_memmap=None, return_states=False)[source]#

Train the ESN model on a set of input sequences.

Parameters:
  • inputs (list of numpy.ndarray) – List of inputs. Note that it should always be a list of sequences, i.e. if only one sequence (array with rows representing time axis) of inputs is used, it should be alone in a list.

  • teachers (list of numpy.ndarray) – List of ground truths. Note that it should always be a list of sequences of the same length as the inputs, i.e. if only one sequence of inputs is used, it should be alone in a list.

  • wash_nr_time_step (int) – Number of states to consider as transient during training. Transient states will be discarded when computing the readout matrix. By default, no states are removed.

  • workers (int, optional) – If n >= 1, will enable parallelization of states computation with n threads/processes, if possible. If n = -1, will use all available resources for parallelization. By default, -1.

  • return_states (bool, False by default) – If True, the function will return all the internal states computed during the training. Be warned that this may be too heavy for the memory of your computer.

  • backend – kept for compatibility with previous versions.

  • use_memmap – kept for compatibility with previous versions.

  • verbose (bool, optional) –

Returns:

All states computed, for all inputs.

Return type:

list of numpy.ndarray

Note

If only one input sequence is provided (“continuous time” inputs), workers should be 1, because parallelization is impossible. In other cases, if using large NumPy arrays during computation (which is often the case), prefer the threading backend to avoid huge overhead. Multiprocessing is a good idea only in very specific cases, and this code is not (yet) well suited for it.
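The washout step amounts to dropping the first wash_nr_time_step states and teachers of every sequence before fitting the readout. A NumPy sketch under that assumption, using the default pseudo-inverse fit (shapes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)

wash_nr_time_step = 10  # transient states to discard per sequence

# Hypothetical states/teachers for two sequences of different lengths,
# with 30 reservoir units and a 1-dimensional output.
states = [rng.standard_normal((100, 30)), rng.standard_normal((80, 30))]
teachers = [rng.standard_normal((100, 1)), rng.standard_normal((80, 1))]

# Discard the transient, then stack everything for a single regression.
X = np.vstack([s[wash_nr_time_step:] for s in states])
Y = np.vstack([y[wash_nr_time_step:] for y in teachers])

# Pseudo-inverse readout fit (the default when no ridge/reg_model is set).
Wout = np.linalg.pinv(X) @ Y
```

Discarding the transient matters because the early states still reflect the arbitrary zero initialization rather than the input-driven dynamics.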

property use_raw_input#

If True, raw inputs are concatenated to states before readout.

zero_feedback()[source]#

Returns a zero feedback vector.

zero_state()[source]#

Returns a zero state vector.