NengoLoihi
A backend for running Nengo models on Intel’s Loihi architecture.
To use the backend, simply replace:

    nengo.Simulator(model)

with:

    nengo_loihi.Simulator(model)
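For example, a minimal sketch of running a small model on NengoLoihi (the network here is purely illustrative; when NxSDK and Loihi hardware are not available, the simulator runs on the included emulator):

    import nengo
    import nengo_loihi

    # Build an ordinary Nengo model; nothing Loihi-specific is required here.
    with nengo.Network() as model:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(100, dimensions=1)
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=0.01)

    # Drop-in replacement for nengo.Simulator.
    with nengo_loihi.Simulator(model) as sim:
        sim.run(0.1)

    print(sim.data[probe].shape)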
Release history
1.1.0 (January 25, 2022)
Compatible with Nengo 3.1.0 - 3.2.0
Compatible with NxSDK 0.9.0 - 1.0.0
Added
Added support for NxSDK 1.0.0. (#317)
Added Simulator.clear_probes to clear probe histories. This can help reduce memory usage during long runs, by running for a segment of the full run time, recording the relevant outputs, calling clear_probes, and resuming the run (see the sketch after this list). (#303)
Added support for padding='same' on nengo.Convolution transforms. (#297)
Added support for nengo.transforms.ConvolutionTranspose. (#300)
Block and Compartment now have .discretize_info attributes that store parameters used for discretizing that block and compartment. (#309)
Model now has a connection_decode_neurons attribute that maps Connection objects that require decode neurons to the corresponding Ensemble objects implementing them. (#309)
Added the GreedyInterchip allocator, which reduces inter-chip communication, speeding up networks with high traffic between chips. (#309)
Added the PartitionInterchip allocator, which reduces inter-chip communication with better partitioning than GreedyInterchip. Requires the nxmetis package. (#309)
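A rough sketch of the clear_probes pattern described above; the model, probe, and segment length are placeholders for your own:

    import numpy as np
    import nengo
    import nengo_loihi

    with nengo.Network() as model:
        ens = nengo.Ensemble(50, 1)
        probe = nengo.Probe(ens, synapse=0.01)

    segment_time = 1.0  # seconds simulated per segment (illustrative)
    outputs = []
    with nengo_loihi.Simulator(model) as sim:
        for _ in range(10):
            sim.run(segment_time)
            # Copy out the data recorded during this segment...
            outputs.append(np.array(sim.data[probe]))
            # ...then free the probe history before resuming the run.
            sim.clear_probes()
    data = np.concatenate(outputs, axis=0)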
Changed
Build errors specify the associated objects, making them easier to debug. (#289)
Deobfuscated NxSDK API calls. (#320)
The builder now respects the precision.bits attribute in nengorc files, allowing for reduced-precision builds to save memory. (#309)
The new GreedyInterchip allocator is now the default allocator. (#309)
NeuronOutputNoise classes and NengoDL builders for LoihiLIF and LoihiSpikingRectifiedLinear have been moved to NengoExtras. (#325)
Examples have been moved to NengoExamples. (#325)
NengoLoihi is now licensed with the Apache 2.0 license. (#325)
Fixed
Fixed several issues to ensure that memory is freed when a Simulator is deleted. (#312)
Fixed probe filters such that multiple Simulator.run calls now result in the same probe data as a single call of equivalent length. (#271, #303)
Convolution with 1 x 1 kernels now works as expected. (#297)
Fixed a bug that prevented targeting Loihi even when NxSDK is installed. (#300)
Fixed how DecodeNeurons handles dt != 0.001. (#309)
1.0.0 (January 20, 2021)
Compatible with Nengo 3.1.0
Compatible with NxSDK 0.9.0 - 0.9.9
Added
Added Legendre Memory Unit example. (#267)
Added a timers attribute to Simulator that tracks the wall time taken by various parts of the model, including build time and run time. (#260)
Added the pop_type configuration option to the Connection config. See nengo_loihi.add_params for details. (#261)
Added the block_shape configuration option to the Ensemble config, and added the nengo_loihi.BlockShape class to set that option. See nengo_loihi.add_params for details, and the configuration sketch after this list. (#264)
Added the Greedy allocator, which uses all cores on one chip before proceeding to the next chip. Greedy is now the default allocator. (#266)
Added the n_chips parameter to HardwareInterface for specifying the number of chips on the board. (#266)
Added Model.utilization_summary to provide a summary of how much of the various resources of each block are utilized. (#279)
Added some new documentation focused on mapping large models onto Loihi. (#279)
Added a new example showing how to map larger convolutional networks to Loihi (applied to CIFAR-10 dataset). (#282)
Added a Keras example showing how to directly convert a Keras convolutional network to run on Loihi using the NengoDL Keras Converter. (#281)
Added support for NxSDK 0.9.8 and 0.9.9. (#296)
Added support for the nengo.RegularSpiking neuron type, when using LIFRate or RectifiedLinear as the base type (these are equivalent to LIF and SpikingRectifiedLinear, respectively). (#296)
Added nengo_loihi.dvs.DVSFileChipProcess, for getting input from a pre-recorded DVS file and sending it to the Loihi board. (#306)
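A sketch of the new configuration options; ensemble sizes and option values are illustrative, not recommendations:

    import nengo
    import nengo_loihi

    with nengo.Network() as net:
        nengo_loihi.add_params(net)  # registers the Loihi-specific config options

        a = nengo.Ensemble(1024, 1)
        b = nengo.Ensemble(512, 1)
        conn = nengo.Connection(a, b)

        # Split the large ensemble into Loihi blocks of at most 512 compartments.
        net.config[a].block_shape = nengo_loihi.BlockShape((512,), (1024,))

        # Axon format for connections that use population spikes
        # (e.g. convolutional connections).
        net.config[conn].pop_type = 16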
Changed
We improved performance when precompute=False through better spike packing, larger packets, and communicating to the host over a socket. (#260)
The precompute argument of Simulator now defaults to None and will be automatically set to True if the model can be precomputed (see the sketch after this list). (#260)
Added the add_to_container argument to DecodeNeurons.get_ensemble, which makes it easier to add a decode neurons ensemble to a network. (#260)
Convolution transforms with channels_last=True now work with outputs up to 1024 neurons. (#261)
The Probe has been renamed to LoihiProbe to mirror the LoihiBlock and LoihiInput classes, which are conceptually very similar. It has also been moved from nengo_loihi.block to nengo_loihi.probe. (#264)
We now raise a more informative error if connecting to Loihi hardware fails. (#264)
It is now possible to build models with larger ensembles because the builder can now split large Loihi blocks into smaller ones. (#264)
Modules for discretizing and validating models have been moved to the builder directory. (#264)
It is now possible to use multi-chip allocators with all models, including those that cannot be precomputed. (#266)
Allocators like RoundRobin no longer accept the n_chips parameter. Instead, the __call__ method accepts n_chips. (#266)
NengoLoihi now supports NxSDK version 0.9.5.rc1. (#272)
NengoLoihi now supports Nengo version 3.1. Support for Nengo 3.0 has been dropped. (#296)
Minimum NengoDL version is now 3.4.0. (#296)
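A brief sketch tying a few of these options together, assuming model is a Nengo network built as in the usage example at the top of this page (values are illustrative):

    import nengo_loihi

    # precompute now defaults to None, letting the simulator decide whether the
    # model's inputs can be precomputed; hardware options such as n_chips are
    # passed through to HardwareInterface.
    with nengo_loihi.Simulator(
        model,
        precompute=None,
        hardware_options={"n_chips": 2},
    ) as sim:
        sim.run(0.5)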
Removed
Removed the OneToOne allocator, which only worked for one chip. The Greedy allocator is identical for models that fit on one chip. (#266)
Fixed
We no longer create a spike generator if we are communicating through Snips. (#260)
Fixed an issue in which ignored axons were still having an effect in convolutional networks where not all input pixels are used in the output. (#261)
Fixed an issue that prevented population spikes from being sent to the chip when precompute=True. (#261)
Fixed a bug that prevented making sparse connections to an ensemble. (#245, #246)
We now ignore TensorFlow and NengoDL if an incompatible version is installed rather than exiting with an exception. (#264)
We now shut down the connection to the board more reliably, which should reduce the number of cases in which a model hangs indefinitely. (#266)
LoihiLIF neurons now round tau_rc to mimic the discretization that occurs on Loihi, for more accurate simulation in Nengo (this was already done in the rate equation and NengoDL implementation of this neuron). (#275)
LoihiLIF and LoihiSpikingRectifiedLinear now add the appropriate NengoDL builders when instantiated, so they work properly if used in NengoDL without making a NengoLoihi simulator. (#248, #275)
Fixed a bug when probing sliced objects. (#284)
Fixed a bug when connecting to a single-neuron ensemble with a single scalar weight. (#287)
Added an error if more than 32 “populations” (e.g. convolutional filters) are used with pop_type=16 axons, since this is not yet supported by NxSDK. (#286)
0.10.0 (November 25, 2019)
Compatible with Nengo 3.0.0
Compatible with NxSDK 0.8.7 - 0.9.0
Changed
Nengo Loihi now requires NxSDK version 0.8.7 and supports NxSDK version 0.9.0. (#255)
0.9.0 (November 20, 2019)
Compatible with Nengo 3.0.0
Compatible with NxSDK 0.8.5
Added
It is now possible to slice the pre neurons in a neuron->neuron connection. (#226)
Connections now support Sparse transforms (see the sketch after this list). (#240)
A more informative error message is raised if any encoders contain NaNs. (#251)
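A small sketch showing both additions together; the connection pattern and weights are illustrative:

    import numpy as np
    import nengo

    with nengo.Network() as net:
        a = nengo.Ensemble(100, 1)
        b = nengo.Ensemble(50, 1)

        # Slice the pre neurons and connect them with a Sparse transform
        # (here, one-to-one connections with unit weights).
        indices = [(i, i) for i in range(50)]
        transform = nengo.transforms.Sparse((50, 50), indices=indices, init=np.ones(50))
        nengo.Connection(a.neurons[:50], b.neurons, transform=transform)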
Changed
Connections from neurons with scalar transforms are now sparse internally. This allows much larger neuron->neuron connections with scalar transforms. (#226)
The scipy package is now required to run Nengo Loihi. (#240)
Increased minimum NengoDL version to 3.0 (and this transitively increases the minimum TensorFlow version to 2.0). (#259)
Nengo Loihi is now compatible with Nengo version 3.0.0. (#259)
Fixed
0.8.0 (June 23, 2019)
Compatible with Nengo 2.8.0
Compatible with NxSDK 0.8.5
Changed
Nengo Loihi now requires NxSDK version 0.8.5. (#225)
0.7.0 (June 21, 2019)
Compatible with Nengo 2.8.0
Compatible with NxSDK 0.8.0 - 0.8.1
Added
Added RoundRobin allocator, which allows networks to be run across multiple chips (multi-chip) by assigning each ensemble to a different chip in a round-robin format. This allocator can be selected using the hardware_options argument when creating nengo_loihi.Simulator (see the sketch after this list). (#197)
Added support for Ensemble.neurons -> Ensemble connections. (#156)
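A sketch of selecting the allocator, assuming model is an existing Nengo network; the import path is an assumption about the allocators module, and note that in this release the allocator took n_chips at construction (later releases moved n_chips to __call__, as noted in the 1.0.0 changes above):

    import nengo_loihi
    from nengo_loihi.hardware.allocators import RoundRobin

    # Distribute ensembles across two chips in round-robin order.
    with nengo_loihi.Simulator(
        model,
        hardware_options={"allocator": RoundRobin(n_chips=2)},
    ) as sim:
        sim.run(1.0)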
Changed
Switched to nengo-bones templating system for TravisCI config/scripts. (#204)
It is no longer possible to pass network=None to Simulator. Previously this was possible, but unlikely to work as expected. (#202)
Better error messages are raised when attempting to simulate networks in which certain objects participating in a learning rule are on-chip. (#202, #208, #209)
Nengo Loihi now requires at least NxSDK version 0.8.0. (#218)
The default intercept range set by nengo_loihi.set_defaults() is now (-1, 0.5), instead of (-0.5, 0.5). (#126)
Obfuscated non-public information related to Intel’s NxSDK. (#228)
Fixed
The splitting and passthrough removal procedures were significantly refactored, which fixed an issue in which networks could be modified in the splitting process. (#202, #211)
It is now possible to make connections and probes with object slices (e.g., nengo.Probe(my_ensemble[0])). (#202, #205, #206)
We no longer disable the Nengo decoder cache for all models. (#202, #207)
Transforms to on-chip neurons are now applied on-chip, which avoids scaling issues and large off-chip transforms. (#126)
0.6.0 (February 22, 2019)
Compatible with NxSDK 0.7.0 - 0.8.0
Changed
New Nengo transforms are supported, including nengo.Convolution. Many of the classes previously in conv.py have been moved to Nengo as part of this transition. The MNIST convnet example demonstrates the new syntax. (#142)
Emulator now fails for any cx_base < 0, except -1 which indicates an unused axon. (#185)
Noise now works correctly with small exponents on both the chip and emulator. Previously, the emulator did not allow very small exponents, and such exponents produced noise with the wrong magnitude on the chip. (#185)
Models trained using NengoDL use tuning curves more similar to those of neurons on the chip, improving the accuracy of these models. (#140)
Removed
Removed the NIF and NIFRate neuron types. These types were only used for encoding node values in spikes to send to the chip, which can be done just as well with nengo.SpikingRectifiedLinear neurons. (#185)
Removed the unused/untested Synapse.set_diagonal_weights. (#185)
Fixed
0.5.0 (February 12, 2019)
Compatible with NxSDK 0.7.0 - 0.8.0
Added
Allow LIF.min_voltage to have effect. The exact minimum voltage on the chip is highly affected by discretization (since the chip only allows minimum voltages in powers of two), but this will at least provide something in the ballpark. (#169)
Population spikes can now be used to send information more efficiently to the chip. Population spikes are necessary for larger models like those using CIFAR-10 data. (#161)
Changed
PES learning in Nengo Loihi more closely matches learning in core Nengo. (#139)
Learning in the emulator more closely matches learning on hardware. (#139)
The neurons used to transmit decoded values on-chip can be configured. By default, we use ten pairs of heterogeneous neurons per dimension. (#132)
Internal classes and functions have been reorganized and refactored. See the pull request for more details. (#159)
Simulator now gives a warning, instead of an error, if the user requests a progress bar. This avoids potential problems in nengo_gui and elsewhere. (#187)
Nengo Loihi now supports NxSDK version 0.8.0. Versions 0.7.0 and 0.7.5 are still supported. (#188)
Fixed
We integrate current (U) and voltage (V) more accurately now by accounting for rounding during the decay process. This integral is used when discretizing weights and firing thresholds. This change significantly improves accuracy for many networks, but in particular dynamical systems like integrators. (#124, #114)
Build and execution steps now happen in a consistent order from one build/run to the next (by using OrderedDict, which is deterministic, instead of dict, which is not). This makes debugging easier and seeding consistent. (#151)
Probes that use snips on the chip (when running with precompute=False) now deal with negative values correctly. (#169, #141)
Filtering for probes on the chip is now guaranteed to use floating-point (so that the filtered output is correct, even if the underlying values are integers). (#169, #141)
Neuron (spike) probes can now be filtered with synapse objects (see the example below). (#182, #183)
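For example (synapse value illustrative):

    import nengo

    with nengo.Network() as model:
        ens = nengo.Ensemble(30, 1)
        # Spike (neuron) probes can now be filtered with a synapse object.
        filtered_spikes = nengo.Probe(ens.neurons, synapse=nengo.Lowpass(0.01))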
0.4.0 (December 6, 2018)
Compatible with NxSDK 0.7.0
Added
Added version tracking to documentation.
Changed
An error is now raised if a learning rule is applied to a non-decoded connection. (#103)
Switched documentation to new nengo-sphinx-theme. (#143)
Fixed
0.3.0 (September 28, 2018)
Compatible with NxSDK 0.7.0
Added
Models can now use the nengo.SpikingRectifiedLinear neuron model on both the emulator and hardware backends.
Models can now run with different dt values (the default is 0.001, or 1 millisecond); see the sketch after this list.
Added support for Distributions on Connection transforms.
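For instance, a non-default timestep can be passed straight to the simulator, assuming model is an existing Nengo network (value illustrative):

    import nengo_loihi

    with nengo_loihi.Simulator(model, dt=0.002) as sim:
        sim.run(0.1)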
Changed
Now compatible with NxSDK 0.7. We are currently not supporting older versions of NxSDK, but may in the future.
Models will not be precomputed by default. To precompute models, you must explicitly pass precompute=True to nengo_loihi.Simulator.
Models that do not run any objects on Loihi will raise an error.
Ensemble intercept values are capped to 0.95 to fix issues with the current discretization method.
Fixed
Tuning curves now take into account the Loihi discretization, improving accuracy on most models.
PES learning can now be done with multidimensional error signals.
Manually reset spike probes when Simulator is initialized.
Several fixes to filtering and connecting between objects on and off chip.
0.2.0 (August 27, 2018)
First public alpha release of Nengo Loihi! If you have any questions, please ask on our forum, and let us know if you run into any issues.
0.1.0 (July 4, 2018)
Pre-alpha release of Nengo Loihi for testing at the 2018 Telluride neuromorphic engineering conference. Thanks to all participants who tried out this early version of Nengo Loihi and provided feedback.