Routed Sequencing
This model introduces routing into the sequencing model. The addition of routing allows the system to choose between two different actions: whether to step through the sequence, or to be driven by the visual input, as explained in the book. For instance, if the visual input has its value set to 0.8 * START + D, the model will begin cycling through the sequence starting at D (D -> E, and so on). Because the visual input drives the state only while the dot(vision, START) rule is selected, in this model the input does not prevent the activation of the second and subsequent rules in the sequence.
[1]:
# Set up the environment
from nengo import spa  # Import SPA-related packages
Create the Model
The parameters used in the model are as described in the book, with 16 dimensions for all semantic pointers. In Nengo 1.4, the buffer element representing the vision was created with the Buffer() object, as described in the book. In Nengo 2.0, you instead use the State() object with the feedback parameter set to 0, which is its default value.
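For illustration only (this is not one of the notebook cells, and the variable names here are ours), the two Nengo 2.0 usages look like this:

# A State with the default feedback=0 plays the role of the Nengo 1.4 Buffer()
vision_buffer = spa.State(dimensions=16)
# Setting feedback=1 adds a recurrent connection so the State holds its value
memory_state = spa.State(dimensions=16, feedback=1, feedback_synapse=0.01)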
[2]:
# Number of dimensions for the semantic pointers
dim = 16

# Create the spa.SPA network to which we can add SPA objects
model = spa.SPA(label="Routed_Sequence", seed=20)
with model:
    # Specify the modules to be used
    model.state = spa.State(dimensions=dim, feedback=1, feedback_synapse=0.01)
    model.vision = spa.State(dimensions=dim)

    # Specify the action mapping
    actions = spa.Actions(
        "dot(vision, START) --> state = vision",
        "dot(state, A) --> state = B",
        "dot(state, B) --> state = C",
        "dot(state, C) --> state = D",
        "dot(state, D) --> state = E",
        "dot(state, E) --> state = A",
    )

    # Create the basal ganglia and thalamus components that conform to the
    # specified rules
    model.bg = spa.BasalGanglia(actions=actions)
    model.thal = spa.Thalamus(model.bg)

    # Function that provides the model with an initial input semantic pointer
    def start(t):
        if t < 0.4:
            return "0.8*START+D"
        return "0"

    # Input
    model.input = spa.Input(vision=start)
Run the Model
[ ]:
# Import the nengo_gui visualizer
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "ch7-spa-sequence-routed.py.cfg")
Press the play button in the visualizer to run the simulation. You should see the graphs as shown in the figure below.
The graph on the bottom left shows the visual input received by the model, and the state graph in the middle shows the semantic pointer representation of the values stored in the state ensemble. The actions plot on the bottom right shows the current transition or action being executed, and the state plot on the top right shows the utility (similarity) of the current basal ganglia input (i.e., state) with the possible vocabulary vectors.
You can see that in this case, even though the input is applied for 400 ms, it does not prevent the activation of the second and subsequent rules in the sequence.
[3]:
from IPython.display import Image
Image(filename="ch7-spa-sequence-routed.png")
[3]:
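If nengo_gui is not available, you can run the same model offline with the core nengo.Simulator and reproduce similar data. The sketch below is not part of the book's notebook; the probe targets, synapse values, and run time are illustrative assumptions.

import matplotlib.pyplot as plt
import nengo

with model:
    # Probe the state representation and the thalamus action outputs
    p_state = nengo.Probe(model.state.output, synapse=0.03)
    p_actions = nengo.Probe(model.thal.actions.output, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# Plot the similarity of the state to each vocabulary item, and the actions
vocab = model.get_output_vocab("state")
plt.figure(figsize=(8, 6))
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), spa.similarity(sim.data[p_state], vocab))
plt.legend(vocab.keys, loc="best")
plt.ylabel("State similarity")
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), sim.data[p_actions])
plt.ylabel("Actions")
plt.xlabel("Time (s)")
plt.show()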