Added

- Added the keep_history config option, which can be set to False on Probes
  if only the data from the most recent simulation step is desired (as
  opposed to the default behaviour of keeping the data from all steps).

Changed

- Moved utils.mse to objectives.mse.
- sim.loss will now apply nengo_dl.objectives.mse to all probes in data if
  no explicit objective is given (mirroring the default behaviour in
  sim.train).

Fixed

- Fixed the sim.train 1-step synapse warning when explicitly specifying
  n_steps (rather than passing in data).

Deprecated

- Passing "mse" as the objective in sim.train/sim.loss is no longer
  supported. Use the function nengo_dl.objectives.mse instead.

Breaking API changes
- sim.train and sim.loss now accept a single data argument, which combines
  the previous inputs and targets arguments. For example,

      sim.train({my_node: x}, {my_probe: y}, ...)

  is now equivalent to

      sim.train({my_node: x, my_probe: y}, ...)

  The motivation for this change is that not all objective functions require
  target values. Switching to the more generic data argument simplifies the
  API and makes it more flexible, allowing users to specify whatever
  training/loss data is actually required.
- The objective argument in sim.train/sim.loss is now always specified as a
  dictionary mapping probes to objective functions. Note that this was
  available but optional previously; it was also possible to pass a single
  value for the objective function, which would be applied to all probes in
  targets. The latter is no longer supported. For example,

      sim.train(..., objective="mse")

  must now be explicitly specified as

      sim.train(..., objective={my_probe: "mse"})

  The motivation for this change is that, especially with the other new
  features introduced in the 2.0 update, there were a lot of different ways
  to specify the objective argument. This made it somewhat unclear how
  exactly this argument worked, and the automatic "broadcasting" was also
  ambiguous (e.g., should the single objective be applied to each probe
  individually, or to all of them together?). Making the argument explicit
  helps clarify the mental model.
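Putting both changes together, a minimal sketch of the new calling
convention (the network, names, and data shapes here are illustrative, not
taken from the changelog):

    import nengo
    import numpy as np

    import nengo_dl

    with nengo.Network() as net:
        my_node = nengo.Node(np.zeros(1))
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(my_node, ens)
        my_probe = nengo.Probe(ens)

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        x = np.ones((4, 5, 1))   # inputs: (minibatch, steps, dims)
        y = np.zeros((4, 5, 1))  # targets for my_probe
        # inputs and targets are combined in one data dict, and the
        # objective is given explicitly per probe
        loss = sim.loss({my_node: x, my_probe: y},
                        objective={my_probe: nengo_dl.objectives.mse})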
Added

- An integer number of steps can now be passed for the sim.loss/sim.train
  data argument, if no input/target data is required.
- The objective dict in sim.train/sim.loss can now contain tuples of probes
  as the keys, in which case the objective function will be called with a
  corresponding tuple of probe/target values as each argument (see the
  sketch after this list).
- Added the sim.run_batch function. This exposes all the functionality that
  the sim.run/sim.train/sim.loss functions are based on, allowing advanced
  users full control over how to run a NengoDL simulation.
- … sim.train and sim.loss.
- Added a training argument to sim.loss to control whether the loss is
  evaluated in training or inference mode.
- Added support for the new Nengo Transform API (see
  https://github.com/nengo/nengo/pull/1481).
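A sketch of the tuple-keyed objective described above (the model, probe
names, and optimizer settings are placeholders; it assumes the TensorFlow
1.x API that NengoDL used at the time):

    import nengo
    import numpy as np
    import tensorflow as tf

    import nengo_dl

    with nengo.Network() as net:
        my_node = nengo.Node(np.zeros(1))
        a = nengo.Ensemble(10, 1)
        b = nengo.Ensemble(10, 1)
        nengo.Connection(my_node, a)
        nengo.Connection(a, b)
        probe_a = nengo.Probe(a)
        probe_b = nengo.Probe(b)

    def joint_mse(outputs, targets):
        # outputs and targets are tuples, one element per probe in the key
        return sum(tf.reduce_mean(tf.square(t - o))
                   for o, t in zip(outputs, targets))

    with nengo_dl.Simulator(net, minibatch_size=4) as sim:
        x = np.ones((4, 5, 1))
        y = np.zeros((4, 5, 1))
        sim.train({my_node: x, probe_a: y, probe_b: y},
                  tf.train.AdamOptimizer(learning_rate=0.01),
                  objective={(probe_a, probe_b): joint_mse},
                  n_epochs=2)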
Changed

- Custom objective functions passed to sim.train/sim.loss can now accept a
  single argument (my_objective(outputs): ... instead of
  my_objective(outputs, targets): ...) if no target values are required
  (see the sketch after this list).
- utils.minibatch_generator now accepts a single data argument rather than
  inputs and targets (see discussion in "Breaking API changes").
- sim.training_step is now the same as
  tf.train.get_or_create_global_step().
- … OpBuilder classes.
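For instance, a regularization-style objective that needs no targets could
look like this (a sketch, reusing the sim and my_probe placeholders from
the earlier sketch; it also uses the new integer-steps data argument from
the list above):

    import tensorflow as tf

    def activity_reg(outputs):
        # single-argument objective: no targets parameter required
        return tf.reduce_mean(outputs ** 2)

    # run 10 steps; no input/target data is needed
    loss = sim.loss(10, objective={my_probe: activity_reg})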
Fixed

Deprecated

- The sim.run input_feeds argument has been renamed to data (for
  consistency with other simulator functions).

Removed

Added

Changed

Fixed
- Fixed an error when calling get_tensor on a Signal that was first
  initialized inside the Simulation while loop (#56)
- … if you are experiencing errors with sim.train, try downgrading to
  TensorFlow 1.10.0.
- Ensured that sim.training_step is always updated after the optimization
  step (in certain race conditions it would sometimes update part-way
  through the optimization step).

Added
- Added the nengo_dl.configure_settings(inference_only=True) option, which
  will build the network in inference-only mode. This will slightly improve
  the inference speed of the simulation, but the network will not be
  trainable (see the sketch after this list).
- Added the nengo_dl.configure_settings(lif_smoothing=x) option, which will
  control how much smoothing is applied to the LIF function during gradient
  calculations (if any).
- … size_out != None (#51)
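A minimal sketch of the inference-only option in use (configure_settings is
applied inside the network's context; lif_smoothing would be set the same
way):

    import nengo

    import nengo_dl

    with nengo.Network() as net:
        # build the network in inference-only mode
        nengo_dl.configure_settings(inference_only=True)
        a = nengo.Ensemble(10, 1, neuron_type=nengo.LIF())
        p = nengo.Probe(a)

    with nengo_dl.Simulator(net) as sim:
        sim.run_steps(50)  # inference works; sim.train would raise an error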
Changed

- Switched back to the Timeline profiling method; tf.profiler can produce
  incorrect output, and isn't maintained any more
  (https://github.com/tensorflow/tensorflow/issues/15214#issuecomment-382442357)
- … ScatterUpdate gradient.
- Moved configure_settings from utils.py to config.py.

Fixed
- Fixed a bug where nengo_dl.dists.VarianceScaling(...,
  distribution="normal") did not respect the seed if one was given.

Deprecated

- The Simulator(dtype=...) argument has been deprecated; use
  nengo_dl.configure_settings(dtype=...) instead. It will be removed in
  1.3.0.

Added
- Added the sim.freeze_params tool, to more easily extract model parameters
  for reuse in different Simulators (see the sketch after this list).
- Added Spaun example in benchmarks.py
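A sketch of the freeze_params workflow (it assumes a network net that has
already been defined, and omits the actual training calls):

    import nengo

    import nengo_dl

    with nengo_dl.Simulator(net) as sim:
        # ... train the model here ...
        sim.freeze_params(net)  # write trained values back into net

    # the trained parameters are now used whenever net is rebuilt,
    # including in a different Simulator
    with nengo.Simulator(net) as sim2:
        sim2.run(1.0)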
Changed

- Moved the tensorflow-gpu installation check to Simulator init, and only
  apply it if device=None.
- Switched to pylint for style checks.
- Passing invalid arguments to nengo_dl.configure_settings will now give a
  more explicit error message.
- … get_nengo_params
- … (tensorflow>=1.9.0)
- Improved the accuracy of the SoftLIFRate implementation for small values
  (#45)
- … how TensorSignals are loaded into the TensorFlow graph

Fixed
- Fixed bugs in get_nengo_params (trained gains are no longer used for
  direct neuron connections, and an error is raised if get_nengo_params is
  applied to an Ensemble with Direct neurons)
- Fixed compatibility with the tensorflow==1.9.0 release
- Fixed a bug in nengo_dl.configure_settings(session_config=...) when
  passing a pre-built model to the Simulator instead of a Network

Deprecated

- The Simulator.trange argument dt has been deprecated (replaced with
  sample_every, see https://github.com/nengo/nengo/pull/1384)

Removed
- Removed the nengo_dl.DATA_DIR constant
- Removed benchmarks.compare_backends (use
  whitepaper2018_plots.py:compare_backends instead)
- Removed the ghp-import dependency

Added

Changed
- Optional dependencies are now specified in setup.py's extras_require
  setting instead of requirements-*.txt files. For example, instead of
  doing pip install -r requirements-test.txt, use
  pip install nengo-dl[tests] (or pip install -e .[tests] for a developer
  installation).

Removed

- Removed the sphinxcontrib-versioning dependency for building
  documentation

Added
- Added the sim.get_nengo_params function to more easily extract model
  parameters for reuse when building different models (see the sketch after
  this list).
- Added the Simulator(..., progress_bar=False) option to disable the
  progress information printed to the console when the network is building.
- TensorFlow session config options can now be set using
  nengo_dl.configure_settings (e.g.,
  nengo_dl.configure_settings(session_config={"gpu_options.allow_growth": True}))
- … nengo_dl.configure_settings
- Added the extra_feeds parameter to sim.run/train/loss, which can be used
  to feed Tensor values directly into the TensorFlow session
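A sketch of parameter reuse via get_nengo_params (it assumes a trained
Simulator sim for a model containing my_ens and my_conn, with sizes
matching the new objects below):

    import nengo

    # returns one dict of constructor kwargs per requested object
    ens_params, conn_params = sim.get_nengo_params([my_ens, my_conn])

    with nengo.Network():
        # rebuild equivalent, pre-trained objects in a new model
        a = nengo.Ensemble(10, 1, **ens_params)
        b = nengo.Ensemble(10, 1)
        nengo.Connection(a, b, **conn_params)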
Changed

- Renamed the project from nengo_dl to nengo-dl (to be more consistent with
  standard conventions). This only affects the display name of the project
  on PyPI/GitHub, and the documentation now resides at
  https://www.nengo.ai/nengo-dl/; there are no functional changes to user
  code.
- Reduced the use of tf.constant, to get around TensorFlow's 2GB limit on
  graph size when building large models

Fixed
- Checking the nengo_dl version without nengo installed will no longer
  result in an error.
- … progressbar2>=3.37.0
- … pip>=10.0
- … pre_build function and size_in==0

Added
- Added support for the new nengo.SpikingRectifiedLinear neuron type.

Changed

- Optimizer state will only be initialized the first time an optimizer is
  passed to sim.train. Subsequent calls to sim.train will resume with the
  values from the previous call.
- … sim.run or sim.train and access the TensorFlow session directly).
- … sim.data.

Fixed

Removed

- Removed the utils.cast_dtype function

Added
- The SoftLIFRate neuron type now has an amplitude parameter, which scales
  the output in the same way as the new amplitude parameter in LIF/LIFRate
  (see Nengo PR #1325).
- Added the progress_bar=False option to sim.run, which will disable the
  information about the simulation status printed to standard output (#17).
- Added a truncated backpropagation option to sim.train (useful for
  reducing memory usage during training). See the documentation for
  details.
Changed

- Changed the default value of the tensorboard argument in Simulator from
  False to None
- … sim.run_steps and sim.train when profile=True.
- … tf.reshape/identity/constant).
- … profile argument in run_steps/train, which will be passed on to the
  TensorFlow profiler; see the tf.profiler documentation for the available
  options.

Removed

- Removed the backports.print_function dependency

Fixed
Added

- TensorNodes can now define a post_build function that will be executed
  after the simulation is initialized (see the TensorNode documentation for
  details).
- Added the sim.training_step variable, which will track the current
  training iteration (can be used, e.g., for TensorFlow's variable learning
  rate operations).
- … tf.summary ops and pass them to sim.train summaries

Changed

- The "mse" loss implementation will now automatically convert np.nan
  values in the target to zero error
- If there are multiple probes in sim.train/sim.loss, the total error will
  now be summed across probes (instead of averaged)

Fixed

- sim.data now implements the full collections.Mapping interface

Changed
- sim.data[obj] will now return live parameter values from the simulation,
  rather than initial values from the build process. That means that it can
  be used to get the values of object parameters after training, e.g.
  sim.data[my_conn].weights.

Added
- Added nengo_dl.tensor_layer to help with the construction of layer-style
  TensorNodes (see the TensorNode documentation, and the sketch after this
  list)
- … nengo_dl.dists
- Added the sim.train(..., profile=True) option to collect profiling
  information during training
- … planner attribute on the top-level Network config
- Added the backports.tempfile and backports.print_function requirement for
  Python 2.7 systems
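A sketch of layer-style construction with tensor_layer (the sizes and layer
choices are illustrative, and it assumes the TensorFlow 1.x tf.layers API
that NengoDL used at the time):

    import nengo
    import numpy as np
    import tensorflow as tf

    import nengo_dl

    with nengo.Network():
        inp = nengo.Node(np.zeros(784))
        # each call wraps a TensorFlow layer function in a TensorNode
        x = nengo_dl.tensor_layer(inp, tf.layers.dense, units=128)
        x = nengo_dl.tensor_layer(x, tf.nn.relu)
        out = nengo_dl.tensor_layer(x, tf.layers.dense, units=10)
        p = nengo.Probe(out)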
Changed

- nengo_dl.configure_trainable has been subsumed into the more general
  nengo_dl.configure_settings(trainable=x) (see the sketch after this
  list). This has resulted in some small changes to how trainability is
  controlled within subnetworks; see the updated documentation for details.
- Simulator.train/Simulator.loss no longer reset the internal state of the
  simulation (so they can be safely intermixed with calls to Simulator.run)
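A sketch of the new trainability configuration (it assumes the documented
pattern of overriding the default per object via the network config):

    import nengo

    import nengo_dl

    with nengo.Network() as net:
        # default all objects in this network to non-trainable
        nengo_dl.configure_settings(trainable=False)
        a = nengo.Ensemble(10, 1)
        b = nengo.Ensemble(10, 1)
        conn = nengo.Connection(a, b)
        # re-enable training for this one connection
        net.config[conn].trainable = True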
Deprecated

- The old step_blocks/unroll_simulation syntax has been fully deprecated,
  and will result in errors if used

Fixed

- … unroll_simulation > 1
- Fixed fan_in/fan_out in dists.VarianceScaling to align with the standard
  definitions

Added
- … (see the docs/examples directory, or the pre-built examples in the
  documentation)

Changed

- The step_blocks/unroll_simulation syntax has been changed. Now
  unroll_simulation takes an integer as its argument, which is equivalent
  to the old step_blocks value, and unroll_simulation=1 is equivalent to
  the old unroll_simulation=False. For example,

      Simulator(..., unroll_simulation=True, step_blocks=10)

  is now equivalent to

      Simulator(..., unroll_simulation=10)

- … no longer requires step_blocks (or the new unroll_simulation) to be
  specified; the number of steps to train across will now be inferred from
  the input data.

Added

Changed

Fixed
Added

- … Simulator.train
- … unroll_simulation=False

Changed

Fixed

- Added caching to tensor_graph.build_loss/build_optimizer, so that we
  don't unnecessarily create duplicate elements in the graph on repeated
  calls
- … unroll_simulation=False
- … device="/gpu:0", since there is no GPU kernel
- Fixed a bug where Simulator.loss was not being computed correctly for
  models with internal state
- Data passed to Simulator.train will be truncated if not evenly divisible
  by the specified minibatch size

Initial release of TensorFlow-based NengoDL
Initial release of Lasagne-based NengoDL
Issues and pull requests are always welcome! We appreciate help from the community to make NengoDL better.
If you find a bug in NengoDL, or think that a certain feature is missing, please consider filing an issue. Please search the currently open issues first to see if your bug or feature request already exists. If so, feel free to add a comment to the issue so that we know that multiple people are affected.
If you want to fix a bug or add a feature to NengoDL, we welcome pull requests. We try to maintain 100% test coverage, so any new features should also include unit tests to cover that change. If you fix a bug, it's also a good idea to add a unit test, so that the bug doesn't get un-fixed in the future!
We require that all contributions be covered under our contributor assignment agreement. Please see the agreement for instructions on how to sign.
Copyright (c) 2015-2018 Applied Brain Research Inc.
NengoDL is made available under a proprietary license that permits using, copying, sharing, and making derivative works from NengoDL and its source code for any non-commercial purpose, as long as the above copyright notice and this permission notice are included in all copies or substantial portions of the software.
If you would like to use NengoDL commercially, licenses can be purchased from Applied Brain Research, Inc. Please contact info@appliedbrainresearch.com for more information.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
NengoDL imports several open source libraries:
To build the documentation, NengoDL uses:
To run the unit tests, NengoDL uses:
If you would like to cite NengoDL in your research, please cite the white paper:
Rasmussen, D. (2018). NengoDL: Combining deep learning and neuromorphic
modelling methods. arXiv:1805.11144, 1–22.
@article{
Rasmussen2018,
archivePrefix = {arXiv},
arxivId = {1805.11144},
author = {Rasmussen, Daniel},
journal = {arXiv},
pages = {1--22},
title = {{NengoDL}: Combining deep learning and neuromorphic modelling
methods},
url = {http://arxiv.org/abs/1805.11144},
volume = {1805.11144},
year = {2018}
}