Release history
Changelog
0.3.1 (unreleased)
Compatible with TensorFlow 2.3 - 2.11
0.3.0 (November 8, 2021)
Compatible with TensorFlow 2.1.0 - 2.7.0
Added
- LowpassCell, Lowpass, AlphaCell, and Alpha layers now accept both initial_level_constraint and tau_constraint to customize how their respective parameters are constrained during training. (#21)
Changed
- The tau time constants for LowpassCell, Lowpass, AlphaCell, and Alpha are now always clipped to be positive in the forward pass, rather than constraining the underlying trainable weights between gradient updates. (#21)
- Renamed the Lowpass/Alpha tau parameter to tau_initializer; it now accepts tf.keras.initializers.Initializer objects (in addition to floats, as before). Renamed the tau_var weight attribute to tau. (#21)
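The forward-pass clipping can be sketched in plain Python. This is a minimal illustration, not the library's implementation: the clip threshold and the exact filter form are assumptions based on a standard first-order lowpass.

```python
import math

def lowpass_step(x, state, tau, dt=0.001):
    """One step of a first-order lowpass filter.

    tau is clipped to be positive in the forward pass (clipping to ``dt``
    is a hypothetical choice), so the raw trainable parameter can drift
    negative during gradient updates without destabilizing the filter.
    """
    tau = max(tau, dt)
    smoothing = math.exp(-dt / tau)
    return smoothing * state + (1 - smoothing) * x

# Even with a negative raw tau, the filtered state stays well-behaved and
# converges toward the constant input.
state = 0.0
for _ in range(100):
    state = lowpass_step(1.0, state, tau=-0.5)
```

The advantage over constraining the stored weight between updates is that the optimizer sees a smooth, unconstrained parameter while the forward computation is always valid.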
Fixed
- SpikingActivation, Lowpass, and Alpha layers will now correctly use keras_spiking.default.dt. (#20)
0.2.0 (February 18, 2021)
Compatible with TensorFlow 2.1.0 - 2.4.0
Added
- Added the keras_spiking.Alpha filter, which provides second-order lowpass filtering for better noise removal on spiking layers. (#4)
- Added keras_spiking.callbacks.DtScheduler, which can be used to update layer dt parameters during training. (#5)
- Added keras_spiking.default.dt, which can be used to set the default dt for all layers that don't directly specify dt. (#5)
- Added keras_spiking.regularizers.RangedRegularizer, which can be used to apply some other regularizer (e.g. tf.keras.regularizers.L2) with respect to a non-zero target point or a range of acceptable values. This functionality has also been added to keras_spiking.regularizers.L1L2/L1/L2 (so they can now be applied with respect to a single reference point or a range). (#6)
- Added keras_spiking.regularizers.Percentile, which computes a percentile across a number of examples and regularizes that statistic. (#6)
- Added keras_spiking.ModelEnergy to estimate energy usage for Keras models. (#7)
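The default-dt mechanism can be used as follows. A brief sketch, assuming keras_spiking >= 0.2 is installed; the dt value here is arbitrary.

```python
import keras_spiking

# Layers that don't pass dt explicitly pick up this global default
# (in seconds per timestep).
keras_spiking.default.dt = 0.01
```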
Changed
- keras_spiking.SpikingActivation and keras_spiking.Lowpass now return sequences by default, so their outputs have the same number of timesteps as their inputs. This makes it easier to create multi-layer spiking networks in which time is preserved throughout the network. The spiking fashion-MNIST example has been updated accordingly. (#3)
- Layers now support multi-dimensional inputs (e.g., the output of Conv2D layers). (#5)
Fixed