Networks are an abstraction of a grouping of Nengo objects (i.e., Node, Ensemble, Connection, and Network instances, though usually not Probe instances). Like most abstractions, this helps with code reuse and maintainability.
You'll find the documentation for the reusable networks included with Nengo below.
You may also want to build your own reusable networks. Doing so can help encapsulate parts of your model, making your code easier to understand, easier to re-use, and easier to share. The following examples will help you build your own reusable networks:
You may also find the config system documentation useful.
nengo.networks.EnsembleArray | An array of ensembles.
nengo.networks.BasalGanglia | Winner-take-all network, typically used for action selection.
nengo.networks.Thalamus | Inhibits non-selected actions.
nengo.networks.AssociativeMemory | Associative memory network.
nengo.networks.CircularConvolution | Compute the circular convolution of two vectors.
nengo.networks.Integrator | An ensemble that accumulates input and maintains state.
nengo.networks.Oscillator | A two-dimensional ensemble with interacting recurrent connections.
nengo.networks.Product | Computes the element-wise product of two equally sized vectors.
nengo.networks.InputGatedMemory | Stores a given vector in memory, with input controlled by a gate.
nengo.networks.EnsembleArray(n_neurons, n_ensembles, ens_dimensions=1, neuron_nodes=False, label=None, seed=None, add_to_container=None, **ens_kwargs)

An array of ensembles.
This acts, in some ways, like a single high-dimensional ensemble, but actually consists of many sub-ensembles, each one representing a separate dimension. This tends to be much faster to create and can be more accurate than having one huge high-dimensional ensemble. However, since the neurons represent different dimensions separately, we cannot compute nonlinear interactions between those dimensions.
Note that in addition to the parameters below, parameters affecting all of the sub-ensembles can be passed to the ensemble array. For example:
ea = nengo.networks.EnsembleArray(20, 2, radius=1.5)
creates an ensemble array with 2 sub-ensembles, each with 20 neurons, and a radius of 1.5.
Attributes:
    dimensions (int): Dimensionality of the ensemble array.
add_neuron_input()

Adds a node that provides input to the neurons of all ensembles. Direct neuron input is useful for inhibiting the activity of all neurons in the ensemble array. This node is accessible through the neuron_input attribute of this ensemble array.
add_neuron_output()

Adds a node that collects the neural output of all ensembles. Direct neuron output is useful for plotting the spike raster of all neurons in the ensemble array. This node is accessible through the neuron_output attribute of this ensemble array.
add_output(name, function, synapse=None, **conn_kwargs)

Adds a node that collects the decoded output of all ensembles. By default, this is called once in __init__ with function=None. However, this can be called multiple times with different functions, similar to the way in which an ensemble can be connected to many downstream ensembles with different functions.
Note that in addition to the parameters below, parameters affecting all of the connections from the sub-ensembles to the new node can be passed to this function. For example:

ea.add_output('output', None, solver=nengo.solvers.Lstsq())

creates a new output with the decoders of each connection solved for with the Lstsq solver.
nengo.networks.BasalGanglia(dimensions, n_neurons_per_ensemble=100, output_weight=-3.0, input_bias=0.0, ampa_config=None, gaba_config=None, net=None)

Winner-take-all network, typically used for action selection.
The basal ganglia network outputs approximately 0 at the dimension with the largest value, and is negative elsewhere.
While the basal ganglia is primarily defined by its winner-take-all function, it is also organized to match the organization of the human basal ganglia. It consists of five ensembles:
- strD1
- strD2
- stn
- gpi
- gpe

Interconnections between these areas are also based on known neuroanatomical connections. See [1] for more details, and [2] for the original non-spiking basal ganglia model by Gurney, Prescott & Redgrave that this model is based on.
Note: The default Solver for the basal ganglia is NnlsL2nz, which requires SciPy. If SciPy is not installed, the global default solver will be used instead.
References
[1] Stewart, T. C., Choo, X., & Eliasmith, C. (2010). Dynamic behaviour of a spiking model of action selection in the basal ganglia. In Proceedings of the 10th international conference on cognitive modeling (pp. 235-240).
[2] Gurney, K., Prescott, T., & Redgrave, P. (2001). A computational model of action selection in the basal ganglia. Biological Cybernetics, 84, 401-423.
nengo.networks.Thalamus(dimensions, n_neurons_per_ensemble=50, mutual_inhib=1.0, threshold=0.0, net=None)

Inhibits non-selected actions.
The thalamus is intended to work in tandem with a basal ganglia network. It converts basal ganglia output into a signal with (approximately) 1 for the selected action and 0 elsewhere.
In order to suppress low responses and strengthen high responses, a constant bias is added to each dimension (i.e., action), and dimensions mutually inhibit each other. Additionally, the ensemble representing each dimension is created with positive encoders and can be assigned positive x-intercepts to threshold low responses.
nengo.networks.AssociativeMemory(input_vectors, output_vectors=None, n_neurons=50, threshold=0.3, input_scales=1.0, inhibitable=False, label=None, seed=None, add_to_container=None)

Associative memory network.
Attributes:
    am_ens_config (Config): Defaults for associative memory ensemble creation.
    default_ens_config (Config): Defaults for other ensemble creation.
    thresh_ens_config (Config): Defaults for threshold ensemble creation.
add_input_mapping(name, input_vectors, input_scales=1.0)

Adds a set of input vectors to the associative memory network. Creates a transform with the given input vectors between a named input node and the associative memory element input, enabling the inputs to be mapped onto the ensembles of the associative memory.
add_output_mapping(name, output_vectors)

Adds another output to the associative memory network. Creates a transform with the given output vectors between the associative memory element output and a named output node, enabling the selection of output vectors by the associative memory.
add_default_output_vector(output_vector, output_name='output', n_neurons=50, min_activation_value=0.5)

Adds a default output vector to the associative memory network. The default output vector is chosen if the input matches none of the given input vectors.
add_wta_network(inhibit_scale=1.5, inhibit_synapse=0.005)

Add a winner-take-all (WTA) network to the associative memory output.
add_threshold_to_outputs(n_neurons=50, inhibit_scale=10)

Adds a thresholded output to the associative memory.
nengo.networks.CircularConvolution(n_neurons, dimensions, invert_a=False, invert_b=False, input_magnitude=1.0, net=None, **kwargs)

Compute the circular convolution of two vectors.
The circular convolution \(c\) of vectors \(a\) and \(b\) is given by

\[c[i] = \sum_j a[j] \, b[i - j]\]

where negative indices on \(b\) wrap around to the end of the vector.
This computation can also be done in the Fourier domain,

\[c = DFT^{-1}(DFT(a) \odot DFT(b))\]

where \(DFT\) is the Discrete Fourier Transform operator, \(DFT^{-1}\) is its inverse, and \(\odot\) denotes element-wise multiplication. This network uses this method.
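The equivalence of the two formulations can be checked directly with NumPy (a verification sketch of the math, not part of the network itself):

```python
import numpy as np

def circconv(a, b):
    # Direct summation: c[i] = sum_j a[j] * b[(i - j) mod n],
    # with negative indices wrapping around to the end of b
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

rng = np.random.RandomState(0)
a = rng.randn(10)
b = rng.randn(10)

# Fourier-domain method: c = DFT^-1(DFT(a) * DFT(b)); the result of the
# inverse FFT is real up to floating-point error for real inputs
c_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

assert np.allclose(circconv(a, b), c_fft)
```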
Notes
The network maps the input vectors \(a\) and \(b\) of length N into the Fourier domain and aligns them for complex multiplication. Letting \(F = DFT(a)\) and \(G = DFT(b)\), this is given by:
[ F[i].real ] [ G[i].real ] [ w[i] ]
[ F[i].imag ] * [ G[i].imag ] = [ x[i] ]
[ F[i].real ] [ G[i].imag ] [ y[i] ]
[ F[i].imag ] [ G[i].real ] [ z[i] ]
where \(i\) only ranges over the lower half of the spectrum, since the upper half of the spectrum is the flipped complex conjugate of the lower half, and therefore redundant. The input transforms are used to perform the DFT on the inputs and align them correctly for complex multiplication.
The complex product \(H = F * G\) is then

\[H[i] = (w[i] - x[i]) + (y[i] + z[i]) \, I\]

where \(I = \sqrt{-1}\). We can perform this addition along with the inverse DFT \(c = DFT^{-1}(H)\) in a single output transform, finding only the real part of \(c\) since the imaginary part is analytically zero.
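The identity behind the four element-wise products \(w, x, y, z\) can be verified numerically (a sketch of the arithmetic, independent of the network):

```python
import numpy as np

rng = np.random.RandomState(1)
F = rng.randn(8) + 1j * rng.randn(8)
G = rng.randn(8) + 1j * rng.randn(8)

# The four real-valued element-wise products the network computes
w = F.real * G.real
x = F.imag * G.imag
y = F.real * G.imag
z = F.imag * G.real

# H = (w - x) + (y + z)I reproduces the complex product F * G
H = (w - x) + 1j * (y + z)
assert np.allclose(H, F * G)
```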
Examples
A basic example computing the circular convolution of two 10-dimensional vectors represented by ensemble arrays:

import nengo
from nengo.networks import EnsembleArray

with nengo.Network() as model:
    A = EnsembleArray(50, n_ensembles=10)
    B = EnsembleArray(50, n_ensembles=10)
    C = EnsembleArray(50, n_ensembles=10)
    cconv = nengo.networks.CircularConvolution(50, dimensions=10)
    nengo.Connection(A.output, cconv.input_a)
    nengo.Connection(B.output, cconv.input_b)
    nengo.Connection(cconv.output, C.input)
nengo.networks.Integrator(recurrent_tau, n_neurons, dimensions, net=None, **kwargs)

An ensemble that accumulates input and maintains state.
This is accomplished through scaling the input signal and recurrently connecting an ensemble to itself to maintain state.
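The scaling described above follows the standard NEF recipe: with a lowpass recurrent synapse of time constant recurrent_tau, a feedback transform of 1 and an input transform of recurrent_tau yield dx/dt = u. A discrete-time Euler sketch of these dynamics (an illustration of the principle, not nengo's own simulator):

```python
tau = 0.1    # recurrent_tau: time constant of the recurrent synapse
dt = 0.001
steps = 1000
u = 0.5      # constant input signal

# Lowpass synapse dynamics: tau * dx/dt = -x + drive, where the drive is
# the recurrent feedback (transform 1) plus the scaled input (transform tau).
x = 0.0
for _ in range(steps):
    drive = 1.0 * x + tau * u
    x += (dt / tau) * (drive - x)  # first-order lowpass filter step

# Since drive - x = tau * u, this reduces to dx/dt = u: after 1 second of
# integrating u = 0.5, the state is approximately 0.5.
```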
nengo.networks.Oscillator(recurrent_tau, frequency, n_neurons, net=None, **kwargs)

A two-dimensional ensemble with interacting recurrent connections.
The ensemble connects to itself in a manner similar to the integrator; however, here the two dimensions interact with each other to implement a cyclic oscillator.
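The cross-dimensional interaction can be sketched with the same Euler approximation as the integrator, using the recurrent transform [[1, -freq*tau], [freq*tau, 1]] (an illustration of the dynamics under that assumed transform, not nengo's own simulator):

```python
import numpy as np

tau = 0.1            # recurrent_tau
freq = 2 * np.pi     # frequency in radians per second: one cycle per second
dt = 0.001

# Recurrent transform: the diagonal maintains state (as in the integrator);
# the off-diagonal terms rotate the state, producing oscillation.
A = np.array([[1.0, -freq * tau],
              [freq * tau, 1.0]])

x = np.array([1.0, 0.0])  # kick-start the oscillation away from the origin
trace = []
for _ in range(1000):
    drive = A @ x
    x = x + (dt / tau) * (drive - x)  # lowpass synapse dynamics
    trace.append(x.copy())
trace = np.array(trace)

# Over one full cycle the first dimension swings both negative and positive
assert trace[:, 0].min() < 0 < trace[:, 0].max()
```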
nengo.networks.Product(n_neurons, dimensions, input_magnitude=1.0, net=None, **kwargs)

Computes the element-wise product of two equally sized vectors.
The network used to calculate the product is described in Gosmann, 2015. A simpler version of this network can be found in the Multiplication example.
Note that this network is optimized under the assumption that both input values (or both values for each dimension of the input vectors) are uniformly and independently distributed. Visualized in a joint 2D space, this gives a square of equal probability for pairs of input values. This assumption is violated by non-uniform input distributions (for example, if the input values follow a Gaussian or cosine similarity distribution): instead of a square of equal probability, the probability landscape has circular equi-probability lines. In that case, to obtain optimal network accuracy, scale input_magnitude by a factor of 1 / sqrt(2).
nengo.networks.InputGatedMemory(n_neurons, dimensions, feedback=1.0, difference_gain=1.0, recurrent_synapse=0.1, difference_synapse=None, net=None, **kwargs)

Stores a given vector in memory, with input controlled by a gate.