Release history
Changelog
3.6.0 (January 26, 2023)
Compatible with Nengo 3.0 - 3.2
Compatible with TensorFlow 2.3 - 2.11
Added
Included tensorflow-macos in the alternative tensorflow package names checked during installation. (#228)
Added support for the groups parameter to ConvertConv; see the sketch below. (#223)
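As a rough illustration, this means Keras models containing grouped convolutions can now be converted directly. A minimal sketch (the layer shapes are arbitrary assumptions):

```python
import tensorflow as tf
import nengo_dl

# a toy Keras model with a grouped convolution: groups=2 splits the
# 4 input channels into two groups, each convolved independently
inp = tf.keras.Input(shape=(28, 28, 4))
out = tf.keras.layers.Conv2D(filters=8, kernel_size=3, groups=2)(inp)
model = tf.keras.Model(inputs=inp, outputs=out)

converter = nengo_dl.Converter(model)  # grouped Conv layers now convert
```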
Changed
Pinned TensorFlow version to <2.11 on Windows. As of 2.11 the TensorFlow package for Windows is maintained by a third party (Intel), and there are currently bugs in that package affecting functionality that is required by NengoDL. (#229)
Removed
Removed support for "graph mode" (i.e., running with tf.compat.v1.disable_eager_execution()). TensorFlow no longer supports this functionality, and it is increasingly buggy. Graph mode may still be faster for some models; if you need this functionality, try using a previous version of NengoDL. (#229)
Dropped support for TensorFlow 2.2. The minimum supported version is now 2.3.4 (earlier 2.3.x versions should work as well, but TensorFlow may install an incompatible protobuf version that the user will need to manually correct). (#228)
3.5.0 (May 18, 2022)
Compatible with Nengo 3.0 - 3.2
Compatible with TensorFlow 2.2 - 2.9
Changed
Dropped support for Python 3.6 and added support for 3.9 and 3.10. (#224)
3.4.4 (February 10, 2022)
Compatible with Nengo 3.0 - 3.2
Compatible with TensorFlow 2.2 - 2.8
Added
Added support for nengo.transforms.ConvolutionTranspose; see the sketch below. (#183)
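A minimal sketch of the newly supported transform (the shapes and use of Nodes here are arbitrary assumptions):

```python
import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    # an 8x8, 2-channel input, flattened to a vector as Nengo expects
    inp = nengo.Node(np.zeros(8 * 8 * 2))
    tconv = nengo.transforms.ConvolutionTranspose(
        n_filters=4, input_shape=(8, 8, 2), kernel_size=(3, 3)
    )
    out = nengo.Node(size_in=tconv.size_out)
    nengo.Connection(inp, out, transform=tconv)

with nengo_dl.Simulator(net) as sim:
    sim.run_steps(10)
```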
3.4.3 (November 9, 2021)
Compatible with Nengo 3.0.0 - 3.1.0
Compatible with TensorFlow 2.2.0 - 2.7.0
Added
Added support for TensorFlow 2.7.0. (#218)
Changed
Increased minimum keras-spiking version to 0.3.0. (#219)
3.4.2 (August 12, 2021)
Compatible with Nengo 3.0.0 - 3.1.0
Compatible with TensorFlow 2.2.0 - 2.6.0
Added
Added support for TensorFlow 2.6.0. (#216)
3.4.1 (May 28, 2021)
Compatible with Nengo 3.0.0 - 3.1.0
Compatible with TensorFlow 2.2.0 - 2.5.0
Added
Added support for TensorFlow 2.5.0. (#212)
Fixed
A more informative error message will be raised if a custom neuron build function returns the wrong number of values. (#199)
Removed
Dropped support for Python 3.5 (which reached its end of life in September 2020). (#184)
3.4.0 (November 26, 2020)
Compatible with Nengo 3.0.0 - 3.1.0
Compatible with TensorFlow 2.2.0 - 2.4.0
Added
Added support for KerasSpiking layers in the Converter. (#182)
Added support for tf.keras.layers.TimeDistributed in the Converter. (#182)
Added support for TensorFlow 2.4. (#185)
Added support for Nengo 3.1. (#187)
Changed
Minor improvements to build speed by building constants outside of TensorFlow. (#173)
Support for PES implementation changes in Nengo core (see #1627 and #1640). (#181)
Fixed
Global default Keras dtype will now be reset correctly when an exception occurs in a Simulator method outside the with Simulator context. (#173)
Support the new LinearFilter step type introduced in Nengo core (see #1629). (#173)
Fixed a bug when slicing multi-dimensional Signals (e.g. Ensemble encoders). (#181)
Fixed a bug when loading weights saved in a different Python version. (#187)
3.3.0 (August 14, 2020)
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.2.0 - 2.3.0
Added
Added support for the new Nengo core NeuronType state implementation. (#159)
Compatible with TensorFlow 2.3.0. (#159)
Added support for the nengo.Tanh, nengo.RegularSpiking, nengo.StochasticSpiking, and nengo.PoissonSpiking neuron types. (#159)
Added the nengo_dl.configure_settings(learning_phase=True/False) configuration option. This mimics the previous behaviour of tf.keras.backend.learning_phase_scope (which was deprecated by TensorFlow). That is, if you would like to override the default behaviour so that, e.g., sim.predict runs in training mode, set nengo_dl.configure_settings(learning_phase=True); a sketch follows this list. (#163)
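A minimal sketch of the new option (the network contents are elided):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # run inference functions such as sim.predict in training mode
    # (mimics the deprecated tf.keras.backend.learning_phase_scope)
    nengo_dl.configure_settings(learning_phase=True)
    # ... build the rest of the model here ...
```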
Changed
Simulator.evaluate no longer prints any information to stdout in TensorFlow 2.2 in graph mode (due to a TensorFlow issue, see https://github.com/tensorflow/tensorflow/issues/39456). Loss/metric values will still be returned from the function as normal. (#153)
A warning will now be raised if activation types are passed to Converter.swap_activations that aren't actually in the model. (#168)
Updated TensorFlow installation instructions in the documentation. (#170)
NengoDL will now use TensorFlow's eager mode by default. The previous graph-mode behaviour can be restored by calling tf.compat.v1.disable_eager_execution(), but we cannot guarantee that that behaviour will be supported in the future. (#163)
NengoDL will now use TensorFlow's "control flow v2" by default. The previous behaviour can be restored by calling tf.compat.v1.disable_control_flow_v2(), but we cannot guarantee that that behaviour will be supported in the future. (#163)
NengoDL will now default to allowing TensorFlow's "soft placement" logic, meaning that even if you specify an explicit device like "/gpu:0", TensorFlow may not allocate an op to that device if there isn't a compatible implementation available. The previous behaviour can be restored by calling tf.config.set_soft_device_placement(False). (#163)
Internal NengoDL OpBuilder classes now separate the "pre build" stage from OpBuilder.__init__ (so that the same OpBuilder class can be re-used across multiple calls, rather than instantiating a new OpBuilder each time). Note that this has no impact on front-end users; it is only relevant to anyone who has implemented a custom build class. The logic that would previously have gone in OpBuilder.__init__ should now go in OpBuilder.build_pre. In addition, the ops argument has been removed from OpBuilder.build_pre; it will be passed to OpBuilder.__init__ instead (and will be available in build_pre as self.ops). Similarly, the ops and config arguments have been removed from build_post, and can instead be accessed through self.ops/config. (#163)
Minimum TensorFlow version is now 2.2.0. (#163)
Fixed
Support Sparse transforms in Simulator.get_nengo_params. (#149)
Fixed bug in TensorGraph log message when logging was enabled. (#151)
Updated the KerasWrapper class in the tensorflow-models example to fix a compatibility issue in TensorFlow 2.2. (#153)
Handle Nodes that are not connected to anything else, but are probed (this only occurs in Nengo>=3.1.0). (#159)
More robust support for converting nested Keras models in TensorFlow 2.3. (#161)
Fix bug when probing slices of certain probeable attributes (those that are directly targeting a Signal in the model). (#164)
Removed
Removed nengo_dl.utils.print_op (use tf.print instead). (#163)
3.2.0 (April 2, 2020)
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.0.0 - 2.2.0
Added
Added nengo_dl.LeakyReLU and nengo_dl.SpikingLeakyReLU neuron models. (#126)
Added support for leaky ReLU Keras layers to nengo_dl.Converter. (#126)
Added a new remove_reset_incs graph simplification step. (#129)
Added support for UpSampling layers to nengo_dl.Converter. (#130)
Added tolerance parameters to nengo_dl.Converter.verify. (#130)
Added the scale_firing_rates option to nengo_dl.Converter. (#134)
Added the Converter.layers attribute, which maps Keras layers/tensors to the converted Nengo objects, making it easier to access converted components. (#134)
Compatible with TensorFlow 2.2.0. (#140)
Added a new synapse argument to the Converter, which can be used to automatically add synaptic filters on the output of neural layers during the conversion process; a sketch follows this list. (#141)
Added a new example demonstrating how to use the NengoDL Converter to convert a Keras model to a spiking Nengo network. (#141)
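A minimal sketch combining the new scale_firing_rates and synapse options (model is assumed to be an existing Keras model, and the activation swap shown is just one common usage, not part of these changes):

```python
import nengo
import tensorflow as tf
import nengo_dl

converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,  # scale activities up (outputs scaled back down)
    synapse=0.005,  # low-pass filter the output of each neural layer
)
```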
Changed
Re-enabled the remove_constant_copies graph simplification by default. (#129)
Reduced the amount of state that needs to be stored in the simulation. (#129)
Added more information to the error message when loading saved parameters that don't match the current model. (#129)
More efficient implementation of convolutional biases in the Converter. (#130)
Saved simulator state will no longer be included in Simulator.keras_model.weights. This means that Simulator.keras_model.save/load_weights will not include the saved simulator state, making it easier to reuse weights between models (as long as the models have the same weights, they do not need to have the same state variables). Simulator.save/load_params(..., include_state=True) can be used to explicitly save the simulator state, if desired. (#140)
Model parameters (e.g., connection weights) that are not trainable (because they have been marked non-trainable by the user or targeted by an online learning rule) will now be treated separately from simulator state. For example, Simulator.save_params(..., include_state=False) will still include those parameters, and the results of any online learning will persist between calls even with stateful=False. (#140)
Added include_probes, include_trainable, and include_processes arguments to Simulator.reset, to provide more fine-grained control over Simulator resetting. This replicates the previous functionality in Simulator.soft_reset; a sketch follows this list. (#139)
More informative error messages when accessing invalid Simulator functionality after the Simulator has been closed. (#139)
A warning is now raised when the number of input data items passed to the simulator does not match the number of input nodes, to help avoid unintentionally passing data to the wrong input node. This warning can be avoided by passing data for all nodes, or by using the dictionary input style if you want to pass data for only a specific node. (#139)
Dictionaries returned by sim.predict/evaluate will now be ordered. (#141)
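A sketch of the new arguments, based on the names above (net is assumed to be an existing network; the exact defaults are described in the Simulator.reset documentation):

```python
import nengo_dl

with nengo_dl.Simulator(net) as sim:
    sim.run_steps(50)
    # reset internal process state, but keep probe data and any
    # trained parameter values (roughly the old soft_reset behaviour)
    sim.reset(
        include_probes=False, include_trainable=False, include_processes=True
    )
```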
Fixed
Fixed bug in error message when passing data with batch size less than the Simulator minibatch size. (#139)
More informative error message when validation_split does not result in batch sizes evenly divisible by the minibatch size. (#139)
Added tensorflow-cpu distributions to the installation checks (so NengoDL will not attempt to reinstall TensorFlow if tensorflow-cpu is already installed). (#142)
Fixed bug when applying the Converter to Keras models that re-use intermediate layers as output layers. (#137)
Fixed bug in conversion of Keras Dense layers with non-native activation functions. (#144)
3.1.0 (March 4, 2020)
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.0.0 - 2.1.0
Added
Added the inference_only=True option to the Converter, which allows some Layers/parameters that cannot be fully converted to native Nengo objects to be converted in a way that only matches the inference behaviour of the source Keras model (not the training behaviour); see the sketch below. (#119)
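A minimal sketch (model is assumed to be an existing Keras model):

```python
import nengo_dl

converter = nengo_dl.Converter(model, inference_only=True)
net = converter.net  # converted network; matches inference behaviour only
```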
Changed
Improved build time of networks containing many TensorNodes. (#119)
Improved memory usage of the build process. (#119)
Saved simulation state may now be placed on GPU (this should improve the speed of state updates, but may slightly increase GPU memory usage). (#119)
Changed the Converter freeze_batchnorm=True option to inference_only=True (the effect of the parameter is the same on BatchNormalization layers, but it also has broader effects). (#119)
The precision of the Nengo core build process will now be set based on the nengo_dl.configure_settings(dtype=...) config option. Note that this will override the default precision set in nengo.rc. (#119)
Minimum NumPy version is now 1.16.0 (required by TensorFlow). (#119)
Added support for the new transform=None default in Nengo connections (see Nengo#1591). Note that this may change the number of trainable parameters in a network, as the scalar default transform=1 weights on non-Ensemble connections will no longer be present. (#128)
Fixed
Provide a more informative error message if Layer shape_in/shape_out contains undefined (None) elements. (#119)
Fixed bug in the Converter when the source model contains duplicate nodes. (#119)
Fixed bug in the Converter for Concatenate layers with axis != 1. (#119)
Fixed bug in the Converter for models containing passthrough Input layers inside submodels. (#119)
Keras Layers inside TensorNodes will be called with the training argument set correctly (previously it was always set to the default value). (#119)
Fixed compatibility with progressbar2 version 3.50.0. (#136)
3.0.0 (December 17, 2019)
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.0.0
There are a lot of breaking changes in NengoDL 3.0. See the migration guide for all the details.
Added
Keras Layer classes can now be used with nengo_dl.Layer/tensor_layer.
TensorGraph can now be used as a Keras Layer.
Added the Simulator.predict/evaluate/fit functions, which implement the Keras Model API; a sketch follows this list.
Added a warning that changing the TensorFlow seed (e.g. on Simulator.reset) will not affect any existing TensorFlow operations (this was always true in TensorFlow; the warning is just to help avoid confusion).
Added TensorGraph.build_inputs, which will return a set of Keras Input layers that can be used as input to the TensorGraph layer itself.
Added nengo_dl.callbacks.TensorBoard. This is identical to tf.keras.callbacks.TensorBoard, except it will also perform profiling during inference (rather than only during training).
Added a stateful option to Simulator.run, which can be set to False to avoid updating the saved simulation state at the end of a run.
Added the nengo_dl.configure_settings(stateful=False) option to avoid building the parts of the model responsible for preserving state between executions (this will override any stateful=True arguments in individual functions).
Added the nengo_dl.configure_settings(use_loop=False) option to avoid building the simulation inside a symbolic TensorFlow loop. This may improve simulation speed, but the simulation can only run for exactly unroll_simulation timesteps.
NengoDL now requires jinja2 (used to template some of the docstrings).
Added an inputs argument to Simulator.check_gradients, which can be used to control the initial value of input Nodes during the gradient calculations.
Added nengo_dl.Converter for automatically converting Keras models to native Nengo networks. See the documentation for more details.
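A minimal sketch of the new Keras-style API (net, out_probe, and the data arrays are placeholder assumptions):

```python
import tensorflow as tf
import nengo_dl

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    sim.compile(
        optimizer=tf.optimizers.Adam(0.001),
        loss={out_probe: tf.losses.mse},
    )
    sim.fit(train_inputs, train_targets, epochs=10)
    print(sim.evaluate(test_inputs, test_targets))
```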
Changed
Minimum TensorFlow version is now 2.0.0.
Simulator.save/load_params now uses a single include_non_trainable=True/False argument (equivalent to the previous include_local). Trainable parameters will always be saved, so the include_global argument has been removed.
Standardized all signals/operations in a simulation to be batch-first.
The dtype option is now specified as a string (e.g. "float32" rather than tf.float32).
If the requested number of simulation steps is not evenly divisible by Simulator.unroll_simulation, then probe values and sim.time/n_steps will be updated based on the number of steps actually run (rather than the requested number of steps). Note that these extra steps were also run previously, but their results were hidden from the user.
Renamed TensorGraph.input_ph to TensorGraph.node_inputs.
Simulator.time/n_steps are now read-only.
Simulator.n_steps/time are now managed as part of the op graph, rather than manually in the Simulator.
Renamed nengo_dl.objectives to nengo_dl.losses (to align with tf.losses).
nengo_dl.objectives.Regularize now takes two arguments (y_true and y_pred) in order to be compatible with the tf.losses.Loss API (y_true is ignored).
The remove_constant_copies simplification step is now disabled by default. In certain situations this could be an unsafe manipulation (specifically, when using Simulator.save/load_params it could change which parameters are saved). It can be manually re-enabled through the simplifications configuration option.
Simulator.check_gradients now only accepts an optional list of Probes (it no longer accepts arbitrary Tensors).
Eager execution is no longer disabled on import (it is still disabled within the Simulator context, for performance reasons; see https://github.com/tensorflow/tensorflow/issues/33052).
nengo_dl.tensor_layer(x, func, ...) now passes any extra kwargs to the nengo_dl.TensorNode constructor (rather than to func). If you need to pass information to func, consider using a partial function (e.g. tensor_layer(x, functools.partial(func, arg=5), ...)) or a callable class (e.g. tensor_layer(x, MyFunc(arg=5), ...)). When using Keras Layers with nengo_dl.tensor_layer, a fully instantiated Layer object should be passed rather than a Layer class (e.g., use tensor_layer(x, tf.keras.layers.Dense(units=10), ...) instead of tensor_layer(x, tf.keras.layers.Dense, units=10)).
benchmarks.run_profile now uses the TensorBoard format when profiling; see the documentation for instructions on how to view this information (the information is the same, it is just accessed through TensorBoard rather than requiring that it be loaded directly in a Chrome browser).
nengo_dl.TensorNode now takes shape_in and shape_out arguments (which specify a possibly multidimensional shape), rather than the scalar size_in and size_out.
TensorNode functions no longer use the pre_build/post_build functionality. If you need to implement more complex behaviour in a TensorNode, use a custom Keras Layer subclass instead. For example, TensorNode Layers can create new parameter Variables inside the Layer build method.
TensorNode now has an optional pass_time parameter, which can be set to False to disable passing the current simulation time to the TensorNode function.
Added nengo_dl.Layer. Similar to the old nengo_dl.tensor_layer, this is a wrapper for constructing TensorNodes, but it mimics the new tf.keras.layers.Layer API rather than the old tf.layers.
TensorFlow's "control flow v2" is disabled on import, for performance reasons; see https://github.com/tensorflow/tensorflow/issues/33052.
Renamed nengo_dl.objectives.mse to nengo_dl.losses.nan_mse (to emphasize the special logic it provides for nan targets).
Connections created by nengo_dl.Layer/tensor_layer will be marked as non-trainable by default.
Updated all documentation and examples for the new syntax (in particular, see the updated Coming from TensorFlow tutorial and TensorFlow/Keras integration example, and the new Tips and tricks page).
The training/inference build logic (e.g., swapping spiking neurons with rate implementations) can be overridden by setting the global Keras learning phase (tf.keras.backend.set_learning_phase) before the Simulator is constructed.
Increased minimum Nengo core version to 3.0.0.
Reduced size of TensorFlow constants created by Reset ops.
DotInc operators with different signal sizes will no longer be merged (these merged operators had to use a less efficient sparse matrix multiplication, and in general this cost outweighed the benefit of merging).
Trainability can now be configured in the config of subnetworks. This replaces the ability to mark Networks as (non)trainable. See the updated documentation for details.
Training/evaluation target data can now have a different number of timesteps than input data (as long as it aligns with the number of timesteps expected by the loss function).
Whether or not to display progress bars in Simulator.run and Simulator.run_steps now defaults to the value of Simulator(..., progress_bar=x).
Fixed
Fixed bug due to non-determinism of Process state ordering in Python 3.5.
Nested Keras layers passed to TensorNode will be rebuilt correctly if necessary.
Deprecated
nengo_dl.tensor_layer has been deprecated. Use nengo_dl.Layer instead; tensor_layer(x, func, **kwargs) is equivalent to Layer(func)(x, **kwargs), as shown below.
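For example (a sketch; x is assumed to be an existing Nengo object to use as input):

```python
import tensorflow as tf
import nengo_dl

# deprecated: x = nengo_dl.tensor_layer(x, tf.keras.layers.Dense(units=10))
x = nengo_dl.Layer(tf.keras.layers.Dense(units=10))(x)
```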
Removed
Removed the session_config configuration option. Use the updated TensorFlow config system instead.
Removed the deprecated nengo_dl.Simulator(..., dtype=...) argument. Use nengo_dl.configure_settings(dtype=...) instead.
Removed the deprecated Simulator.run(..., input_feeds=...) argument. Use Simulator.run(..., data=...) instead.
Removed the Simulator.sess attribute (Sessions are no longer used in TensorFlow 2.0). The underlying Keras model (Simulator.keras_model) should be used as the entry point into the engine underlying a Simulator instead.
Removed the Simulator.loss function (use Simulator.compile and Simulator.evaluate to compute loss values instead).
Removed the Simulator.train function (use Simulator.compile and Simulator.fit to optimize a network instead).
Removed the nengo_dl.objectives.Regularize(weight=x, ...) argument. Use the Simulator.compile(loss_weights=...) functionality instead.
Removed the Simulator.run(..., extra_feeds=...) argument. TensorFlow 2.0 no longer uses the Session/feed execution model.
Removed Simulator.run_batch. This functionality is now managed by the underlying Simulator.keras_model.
Removed TensorGraph.training_step. The training step is now managed by Keras.
Removed TensorGraph.build_outputs and TensorGraph.build_optimizer_func. Building loss functions/optimizers is now managed by Keras.
Removed nengo_dl.utils.find_non_differentiable (this no longer works in TF2.0's eager mode).
Removed the Simulator(..., tensorboard=...) argument. Use the Keras TensorBoard callback approach for TensorBoard logging instead (see tf.keras.callbacks.TensorBoard or nengo_dl.callbacks.NengoSummaries).
NengoDL will no longer monkeypatch-fix the tf.dynamic_stitch gradients on import. The gradients are still incorrect (see https://github.com/tensorflow/tensorflow/issues/7397), but we no longer use this operation within NengoDL, so we leave it up to the user to fix it in their own code if needed.
Removed benchmarks.matmul_vs_reduce. We use matmul for everything now, so this comparison is no longer necessary.
Removed utils.minibatch_generator (training/inference loops are now managed by Keras).
2.2.2 (November 20, 2019)
Compatible with Nengo 2.8.0 - 3.0.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Fixed
Compatibility with Nengo 3.0 release
2.2.1 (October 2, 2019)
Compatible with Nengo 2.8.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Changed
Updated the testing framework to use the new Nengo pytest ecosystem (pytest-rng, pytest-allclose, and pytest-nengo).
Disabled TensorFlow 2.0 behaviour (e.g. control flow v2) by default. This will be re-enabled when full TensorFlow 2.0 support is added.
Fixed
Fixed the tensorflow-gpu installation check in pep517-style isolated build environments.
2.2.0 (July 24, 2019)
Compatible with Nengo 2.8.0
Compatible with TensorFlow 1.4.0 - 2.0.0
Added
Added a new example demonstrating how to integrate a Keras model with NengoDL (thanks to new contributor @NickleDave).
Added support for TensorFlow 2.0 (pre-release).
Added support for sparse transforms (see https://github.com/nengo/nengo/pull/1532).
Added support for stateful Processes (see https://github.com/nengo/nengo/pull/1387).
Changed
The default session will now be set to the NengoDL session before calling TensorNodes' post_build function.
Renamed the pytest unroll_simulation argument to unroll-simulation.
Switched to the nengo-bones templating system for TravisCI config/scripts.
NengoDL will disable eager execution on import (and will probably not work properly if it is manually re-enabled).
Increased minimum NumPy version to 1.14.5 (required by TensorFlow 1.14).
Minimum Nengo version is now 2.8.0.
Updated the LinearFilter synapse implementation to match recent changes in Nengo core (see https://github.com/nengo/nengo/pull/1535).
Fixed
Fixed TensorFlow seeding so that randomness can be reliably controlled by setting the Simulator seed.
Improved robustness of the tensorflow-gpu installation check (in particular, it will now correctly detect GPU dists installed through conda).
Fixed inspection of TensorNode.tensor_func arguments for partial functions.
Simulator seed will now be deterministic for a given top-level Network seed.
Raise a more informative error if the user attempts to pickle a Simulator (this is not possible to do with TensorFlow sessions; see the documentation for other methods of saving/loading a NengoDL model).
Removed
NengoDL no longer supports Python 3.4 (official support for 3.4 ended in March 2019).
2.1.1 (January 11, 2019)
Added
Added nengo_dl.obj as a shortcut alias for nengo_dl.objectives.
Added a tutorial for Nengo users coming to NengoDL.
Added a tutorial for TensorFlow users coming to NengoDL.
Changed
Increased minimum progressbar2 version to 3.39.0.
We now only provide sdist releases, not bdist_wheel. Due to the way the TensorFlow packages are organized, bdist_wheel forces any existing TensorFlow installations (e.g. tensorflow-gpu or tf-nightly) to be overwritten by tensorflow, which we don't want to do.
Removed
Removed the nef-init tutorial (replaced by the new from-nengo tutorial).
2.1.0 (December 5, 2018)
Added
Added a built-in objective to assist in applying regularization during training.
Added the keep_history config option, which can be set to False on Probes if only the data from the most recent simulation step is desired (as opposed to the default behaviour of keeping the data from all steps); a sketch follows this list.
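A sketch of how this might be set through the config system (assuming the usual configure_settings pattern for per-object options; see the config documentation):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # expose the keep_history option in the network config
    nengo_dl.configure_settings(keep_history=True)
    ens = nengo.Ensemble(10, 1)
    probe = nengo.Probe(ens)
    # keep only the most recent timestep of data for this probe
    net.config[probe].keep_history = False
```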
Changed
Moved utils.mse to objectives.mse.
sim.loss will now apply nengo_dl.objectives.mse to all probes in data if no explicit objective is given (mirroring the default behaviour in sim.train).
The Spaun benchmark network will now be installed through pip rather than by manually cloning and importing the repo.
Fixed
Fixed objective argument parsing when the objective is a callable class or method.
Fixed bug in the sim.train one-step synapse warning when explicitly specifying n_steps (rather than passing in data).
Deprecated
Passing "mse" as the objective in sim.train/sim.loss is no longer supported. Use the function nengo_dl.objectives.mse instead.
2.0.0 (November 23, 2018)
Breaking API changes
sim.train and sim.loss now accept a single data argument, which combines the previous inputs and targets arguments. For example, sim.train({my_node: x}, {my_probe: y}, ...) is now equivalent to sim.train({my_node: x, my_probe: y}, ...). The motivation for this change is that not all objective functions require target values. Switching to the more generic data argument simplifies the API and makes it more flexible, allowing users to specify whatever training/loss data is actually required.
The objective argument in sim.train/sim.loss is now always specified as a dictionary mapping probes to objective functions. Note that this was available but optional previously; it was also possible to pass a single value for the objective function, which would be applied to all probes in targets. The latter is no longer supported. For example, sim.train(..., objective="mse") must now be explicitly specified as sim.train(..., objective={my_probe: "mse"}). The motivation for this change is that, especially with the other new features introduced in the 2.0 update, there were a lot of different ways to specify the objective argument. This made it somewhat unclear how exactly this argument worked, and the automatic "broadcasting" was also ambiguous (e.g., should the single objective be applied to each probe individually, or to all of them together?). Making the argument explicit helps clarify the mental model. A combined sketch of the new call style follows below.
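Putting the two changes together, a training call in the new style might look like this (a sketch; net, my_node, my_probe, and the arrays x/y are placeholder assumptions):

```python
import tensorflow as tf
import nengo_dl

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    sim.train(
        {my_node: x, my_probe: y},  # single combined data argument
        tf.train.AdamOptimizer(0.001),
        objective={my_probe: "mse"},  # explicit per-probe objective
        n_epochs=10,
    )
```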
Added
An integer number of steps can now be passed for the sim.loss/sim.train data argument, if no input/target data is required.
The objective dict in sim.train/sim.loss can now contain tuples of probes as the keys, in which case the objective function will be called with a corresponding tuple of probe/target values as each argument.
Added the sim.run_batch function. This exposes all the functionality that the sim.run/sim.train/sim.loss functions are based on, allowing advanced users full control over how to run a NengoDL simulation.
Added an option to disable the progress bar in sim.train and sim.loss.
Added a training argument to sim.loss to control whether the loss is evaluated in training or inference mode.
Added support for the new Nengo Transform API (see https://github.com/nengo/nengo/pull/1481).
Changed
Custom objective functions passed to sim.train/sim.loss can now accept a single argument (my_objective(outputs): ... instead of my_objective(outputs, targets): ...) if no target values are required.
utils.minibatch_generator now accepts a single data argument rather than inputs and targets (see discussion in "Breaking API changes").
sim.training_step is now the same as tf.train.get_or_create_global_step().
Switched documentation to the new nengo-sphinx-theme.
Reorganized documentation into "User guide" and "API reference" sections.
Improved build speed of models with large constants. (#69)
Moved op-specific merge logic into the OpBuilder classes.
Fixed
Ensure that training step is always updated before TensorBoard events are added (previously it could update before or after depending on the platform).
Deprecated
The sim.run input_feeds argument has been renamed to data (for consistency with other simulator functions).
Removed
NengoDL no longer supports Python 2 (see https://python3statement.org/ for more information)
1.2.1 (November 2, 2018)
Added
Added a warning if users run one-timestep training with a network containing synaptic filters.
Changed
Test Simulator parameters are now controlled through pytest arguments, rather than environment variables.
Disable INFO-level TensorFlow logging (from C side) on import. Added a NengoDL log message indicating the device the simulation will run on, as a more concise replacement.
Boolean signals are now supported (#61)
Fixed
Avoid backpropagating NaN gradients from spiking neurons.
Fixed an error that was thrown when calling get_tensor on a Signal that was first initialized inside the simulation while loop. (#56)
Allow TensorNodes to run in Nengo GUI.
Avoid a bug in TensorFlow 1.11.0 that prevents certain models from running (see https://github.com/tensorflow/tensorflow/issues/23383). Note that this doesn't prevent the bug from occurring in user models, as we cannot control the model structure there. If your model hangs indefinitely when you call sim.train, try downgrading to TensorFlow 1.10.0.
Ensure that sim.training_step is always updated after the optimization step (in certain race conditions it would sometimes update part-way through the optimization step).
1.2.0 (September 5, 2018)
Added
NengoDL will now automatically use a rate-based approximation to compute the gradient for spiking neuron types, if one is known (no more need to manually swap neuron types for training and inference).
Added the nengo_dl.configure_settings(inference_only=True) option, which will build the network in inference-only mode. This will slightly improve the inference speed of the simulation, but the network will not be trainable; a sketch follows this list.
Added the nengo_dl.configure_settings(lif_smoothing=x) option, which will control how much smoothing is applied to the LIF function during gradient calculations (if any).
Added documentation on the various NengoDL config options.
Added better validation for TensorNode output when size_out != None. (#51)
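A minimal sketch of the new option:

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # build in inference-only mode: slightly faster, but not trainable
    nengo_dl.configure_settings(inference_only=True)
    # ... build the rest of the model here ...
```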
Changed
More informative error message if the user tries to pass target values for a probe that isn't used in the objective function.
Switched to ADD_N gradient accumulation (from TREE); this will increase the memory usage during training, but improve performance.
Reverted to the Timeline profiling method. tf.profiler can produce incorrect output, and isn't maintained any more (https://github.com/tensorflow/tensorflow/issues/15214#issuecomment-382442357).
Reduced memory usage during training by caching temporary variables used when computing the ScatterUpdate gradient.
Increased minimum TensorFlow version to 1.4.0.
Increased minimum NumPy version to 1.12.1 (required by TensorFlow).
Sort write signals as well as reads during graph optimization (encourages tighter partitioning, which can improve training/inference speed).
Moved configure_settings from utils.py to config.py.
Fixed
Fixed a bug where nengo_dl.dists.VarianceScaling(..., distribution="normal") did not respect the seed if one was given.
Deprecated
The Simulator(dtype=...) argument has been deprecated; use nengo_dl.configure_settings(dtype=...) instead. It will be removed in 1.3.0.
1.1.0 (July 24, 2018)
Added
The default TensorFlow Session is now set to the underlying Simulator session within the Simulator context.
Added a CLI for benchmarks.py.
Added the sim.freeze_params tool, to more easily extract model parameters for reuse in different Simulators.
Added a Spaun example in benchmarks.py.
Changed
Moved the tensorflow-gpu installation check to Simulator init; it is only applied if device=None.
Switched to pylint for style checks.
TensorFlow INFO-level log messages are now disabled by default on import.
All previous releases are now tracked in the documentation.
Updated the spiking MNIST example to simplify it and improve performance.
Passing unknown configuration options to nengo_dl.configure_settings will now give a more explicit error message.
Improved speed of parameter fetching through get_nengo_params.
Raise a warning if the user tries to train a network with non-differentiable elements (requires tensorflow>=1.9.0).
Improved accuracy of the SoftLIFRate implementation for small values. (#45)
Simplified how TensorSignals are loaded into the TensorFlow graph.
Fixed
Better handling of Simulator errors not associated with a specific op (fixes #41).
Fixed node outputs changing after the simulator is built (fixes #4).
Fixed some broken cross-references in the documentation.
Fixed several edge cases for get_nengo_params: trained gains are no longer used for direct neuron connections, and an error is raised if get_nengo_params is applied to an Ensemble with Direct neurons.
Compatible with the tensorflow==1.9.0 release.
Fixed bug in nengo_dl.configure_settings(session_config=...) when passing a pre-built model to the Simulator instead of a Network.
Fixed TensorFlow version comparisons for 1.10.0.
Deprecated
The Simulator.trange argument dt has been deprecated (replaced with sample_every; see https://github.com/nengo/nengo/pull/1384).
Removed
Removed the nengo_dl.DATA_DIR constant.
Removed benchmarks.compare_backends (use whitepaper2018_plots.py:compare_backends instead).
Removed the ghp-import dependency.
1.0.0 (May 30, 2018)
Added
Users can now directly specify the output error gradient, rather than using targets/objective (useful when you have some external process for computing error that is not easy to implement as an objective function). See the documentation for details.
Added the NengoDL white paper.
Changed
Extra requirements for documentation/testing are now stored in setup.py's extras_require instead of requirements-*.txt. For example, instead of doing pip install -r requirements-test.txt, use pip install nengo-dl[tests] (or pip install -e .[tests] for a developer installation).
Improved efficiency of the PES implementation.
Removed
Removed the sphinxcontrib-versioning dependency for building documentation.
0.6.2 (May 4, 2018)
Added
Added the sim.get_nengo_params function to more easily extract model parameters for reuse when building different models.
Added the Simulator(..., progress_bar=False) option to disable the progress information printed to console when the network is building.
TensorFlow session config options can now be set using nengo_dl.configure_settings (e.g., nengo_dl.configure_settings(session_config={"gpu_options.allow_growth": True})); a sketch follows this list.
The signal sorting/graph simplification functions can now be configured through nengo_dl.configure_settings.
Added the extra_feeds parameter to sim.run/train/loss, which can be used to feed Tensor values directly into the TensorFlow session.
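In context, the session-config call from the entry above (network contents elided):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # let TensorFlow allocate GPU memory as needed, rather than all at once
    nengo_dl.configure_settings(
        session_config={"gpu_options.allow_growth": True}
    )
    # ... build the rest of the model here ...
```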
Changed
Improved speed of the PES implementation by adding a custom operator.
Renamed the project from nengo_dl to nengo-dl (to be more consistent with standard conventions). This only affects the display name of the project on PyPI/GitHub, and the documentation now resides at https://www.nengo.ai/nengo-dl/; there are no functional changes to user code.
Minor efficiency improvements to the graph planner.
Avoid using tf.constant, to get around TensorFlow's 2GB limit on graph size when building large models.
Fixed
Checking the nengo_dl version without nengo installed will no longer result in an error.
Updated the progress bar to work with progressbar2>=3.37.0.
Updated the PES implementation to work with generic synapse types (see https://github.com/nengo/nengo/pull/1095).
Fixed installation to work with pip>=10.0.
Fixed bug when using a TensorNode with a pre_build function and size_in==0.
0.6.1 (March 7, 2018)
Added
Added a TensorFlow implementation for the nengo.SpikingRectifiedLinear neuron type.
Changed
Optimizer variables (e.g., momentum values) will only be initialized the first time that optimizer is passed to sim.train. Subsequent calls to sim.train will resume with the values from the previous call.
Low-level simulation input/output formats have been reworked to make them slightly easier to use (for users who want to bypass sim.run or sim.train and access the TensorFlow session directly).
The batch dimension will always be first (if present) when checking model parameters via sim.data.
TensorFlow ops created within the Simulator context will now default to the same device as the Simulator.
Updated minimum Nengo version to 2.7.0.
Fixed
Better error message if training data has incorrect rank.
Avoid reinstalling TensorFlow if one of the nightly build packages is already installed.
Lowpass synapse can now be applied to multidimensional inputs.
TensorNodes will no longer be built into the default graph when checking their output dimensionality.
Removed
Removed the utils.cast_dtype function.
0.6.0 (December 13, 2017)
Added
The SoftLIFRate neuron type now has an amplitude parameter, which scales the output in the same way as the new amplitude parameter in LIF/LIFRate (see Nengo PR #1325).
Added the progress_bar=False option to sim.run, which will disable the information about the simulation status printed to standard output. (#17)
Added progress bars for the build/simulation process.
Added a truncated backpropagation option to sim.train (useful for reducing memory usage during training). See the documentation for details.
Changed
Changed the default tensorboard argument in Simulator from False to None.
Use the new tf.profiler tool to collect profiling data in sim.run_steps and sim.train when profile=True.
Minor improvements to efficiency of the build process.
Minor improvements to simulation efficiency targeting small ops (tf.reshape/identity/constant).
Process inputs are now reseeded for each input when batch processing (if the seed is not manually set).
Users can pass a dict of config options for the profile argument in run_steps/train, which will be passed on to the TensorFlow profiler; see the tf.profiler documentation for the available options.
Removed
Removed the backports.print_function dependency.
0.5.2 (October 11, 2017)
Added
TensorNode outputs can now define a post_build function that will be executed after the simulation is initialized (see the TensorNode documentation for details).
Added functionality for outputting summary data during the training process that can be viewed in TensorBoard (see the sim.train documentation).
Added some examples demonstrating how to use NengoDL in a more complicated task using semantic pointers to encode/retrieve information.
Added the sim.training_step variable, which will track the current training iteration (can be used, e.g., for TensorFlow's variable learning rate operations).
Users can manually create tf.summary ops and pass them to sim.train summaries.
The Simulator context will now also set the default TensorFlow graph to the one associated with the Simulator (so any TensorFlow ops created within the Simulator context will automatically be added to the correct graph).
Users can now specify a different objective for each output probe during training/loss calculation (see the sim.train documentation).
Changed
Resetting the simulator now only rebuilds the necessary components in the graph (as opposed to rebuilding the whole graph).
The default "mse" loss implementation will now automatically convert np.nan values in the target to zero error.
If there are multiple target probes given to sim.train/sim.loss, the total error will now be summed across probes (instead of averaged).
Fixed
sim.data now implements the full collections.Mapping interface.
Fixed bug where signal order was non-deterministic for Networks containing objects with duplicate names. (#9)
Fixed bug where non-slot optimizer variables were not initialized. (#11)
Implemented a modified PES builder in order to avoid slicing encoders on non-decoded PES connections.
TensorBoard output directory will now be automatically created if it doesn't exist.
0.5.1 (August 28, 2017)
Changed
sim.data[obj] will now return live parameter values from the simulation, rather than initial values from the build process. That means it can be used to get the values of object parameters after training, e.g. sim.data[my_conn].weights.
Increased minimum Nengo version to 2.5.0.
Increased minimum TensorFlow version to 1.3.0.
0.5.0 (July 11, 2017)
Added
Added nengo_dl.tensor_layer to help with the construction of layer-style TensorNodes (see the TensorNode documentation).
Added an example demonstrating how to train a neural network that can run in spiking neurons.
Added some distributions for weight initialization to nengo_dl.dists.
Added the sim.train(..., profile=True) option to collect profiling information during training.
Added new methods to simplify the Nengo operation graph, resulting in faster simulation/training speed.
The default graph planner can now be modified by setting the planner attribute on the top-level Network config.
Added a TensorFlow implementation for general linear synapses.
Added backports.tempfile and backports.print_function requirements for Python 2.7 systems.
Changed
Increased minimum TensorFlow version to 1.2.0.
Improved error checking for input/target data.
Improved efficiency of stateful gradient operations, resulting in faster training speed.
The functionality of nengo_dl.configure_trainable has been subsumed into the more general nengo_dl.configure_settings(trainable=x). This has resulted in some small changes to how trainability is controlled within subnetworks; see the updated documentation for details, and the sketch following this list.
Calling Simulator.train/Simulator.loss no longer resets the internal state of the simulation (so they can be safely intermixed with calls to Simulator.run).
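A sketch of the new pattern (assuming per-object overrides through the network config, as described in the documentation):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # make model elements non-trainable by default
    nengo_dl.configure_settings(trainable=False)
    a = nengo.Ensemble(10, 1)
    b = nengo.Ensemble(10, 1)
    conn = nengo.Connection(a, b)
    # override for one specific object
    net.config[conn].trainable = True
```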
Deprecated
The old step_blocks/unroll_simulation syntax has been fully deprecated, and will result in errors if used.
Fixed
Fixed bug related to changing the output of a Node after the model is constructed. (#4)
Order of variable creation is now deterministic (helps make saving/loading parameters more reliable).
Configuring whether or not a model element is trainable does not affect whether or not that element is minibatched.
Correctly reuse variables created inside a TensorNode when unroll_simulation > 1.
Correctly handle probes that aren't connected to any ops.
Swapped fan_in/fan_out in dists.VarianceScaling to align with the standard definitions.
Temporary patch to fix memory leak in TensorFlow (see #11273).
Fixed bug related to nodes that had matching output functions but different size_out.
Fixed bug related to probes that do not contain any data yet.
0.4.0 (June 8, 2017)
Added
Added the ability to manually specify which parts of a model are trainable (see the sim.train documentation).
Added some code examples (see the docs/examples directory, or the pre-built examples in the documentation).
Added the SoftLIFRate neuron type for training LIF networks (based on this paper).
Changed
Updated TensorFuncParam to the new Nengo Param syntax.
The interface for the Simulator step_blocks/unroll_simulation arguments has been changed. Now unroll_simulation takes an integer argument, which is equivalent to the old step_blocks value, and unroll_simulation=1 is equivalent to the old unroll_simulation=False. For example, Simulator(..., unroll_simulation=True, step_blocks=10) is now equivalent to Simulator(..., unroll_simulation=10).
Simulator.train/Simulator.loss no longer require step_blocks (or the new unroll_simulation) to be specified; the number of steps to train across will now be inferred from the input data.
0.3.1 (May 12, 2017)
Added
Added more documentation on Simulator arguments.
Changed
Improved efficiency of tree_planner, and made it the new default planner.
Fixed
Correctly handle input feeds when n_steps > step_blocks.
Detect cycles in the transitive planner.
Fix bug in uneven step_blocks rounding.
Fix bug in Simulator.print_params.
Fix bug related to merging of learning rules with different dimensionality.
Use tf.Session instead of tf.InteractiveSession, to avoid strange side effects if the simulator isn't closed properly.
0.3.0 (April 25, 2017)
Added
Use logger for debug/builder output.
Implemented TensorFlow gradients for sparse Variable update Ops, to allow models with those elements to be trained.
Added tutorial/examples on using Simulator.train.
Added support for training models when unroll_simulation=False.
Compatibility changes for Nengo 2.4.0.
Added a new graph planner algorithm, which can improve simulation speed at the cost of build time.
Changed
Significant improvements to simulation speed.
Use sparse Variable updates for signals.scatter/gather.
Improved graph optimizer memory organization.
Implemented a sparse matrix multiplication op, to allow more aggressive merging of DotInc operators.
Significant improvements to build speed.
Added early termination to graph optimization.
Algorithmic improvements to graph optimization functions.
Reorganized documentation to more clearly direct new users to relevant material.
Fixed
Fix bug where passing a built model to the Simulator more than once would result in an error.
Cache the result of calls to tensor_graph.build_loss/build_optimizer, so that we don't unnecessarily create duplicate elements in the graph on repeated calls.
Fix support for Variables on GPU when unroll_simulation=False.
SimPyFunc operators will always be assigned to the CPU, even when device="/gpu:0", since there is no GPU kernel.
Fix bug where Simulator.loss was not being computed correctly for models with internal state.
Data/targets passed to Simulator.train will be truncated if not evenly divisible by the specified minibatch size.
Fixed bug where in some cases Nodes with side effects would not be run if their output was not used in the simulation.
Fixed bug where strided reads that cover a full array would be interpreted as non-strided reads of the full array.
0.2.0 (March 13, 2017)
Initial release of TensorFlow-based NengoDL
0.1.0 (June 12, 2016)
Initial release of Lasagne-based NengoDL