Miscellaneous utilities

nengo_dl.utils.sanitize_name(name)
Remove illegal TensorFlow name characters from string.
Valid TensorFlow name characters are [A-Za-z0-9_.\-/].
Parameters:
- name : str
  Name to be sanitized.
Returns:
- str
  Sanitized name.
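
A minimal usage sketch (the input string is just an illustration; the function guarantees only that the result contains characters from the set above):

    from nengo_dl.utils import sanitize_name

    # characters outside [A-Za-z0-9_.\-/] will not appear in the result
    safe = sanitize_name("my ensemble: #1")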

nengo_dl.utils.function_name(func, sanitize=True)
Get the name of the callable object func.
Parameters:
- func : callable
  Callable object (e.g., function, callable class).
- sanitize : bool, optional
  If True, remove any illegal TensorFlow name characters from the name.
Returns:
- str
  Name of func (optionally sanitized).
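
For example (a sketch; stim_func is a hypothetical callable):

    from nengo_dl.utils import function_name

    def stim_func(t):
        return t + 1

    name = function_name(stim_func)                 # the callable's name, e.g. "stim_func"
    raw = function_name(stim_func, sanitize=False)  # skip the character sanitization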

nengo_dl.utils.align_func(output_shape, output_dtype)
Decorator that ensures the output of func is an ndarray with the given shape and dtype.
Parameters:
- output_shape : tuple of int
  Desired shape for function output (must have the same size as the actual function output).
- output_dtype : tf.DType or numpy dtype
  Desired dtype of the function output.
Raises:
- nengo.exceptions.SimulationError
  If the function returns None or a non-finite value.
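
A sketch of how this might be applied, assuming align_func is used as a decorator factory as its signature suggests (my_func and its output are illustrative):

    import numpy as np
    from nengo_dl.utils import align_func

    @align_func(output_shape=(2,), output_dtype=np.float32)
    def my_func(t):
        # the list output should come back as a float32 ndarray of shape (2,)
        return [t, t + 1]

    out = my_func(0.0)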

nengo_dl.utils.print_op(input, message)
Inserts a print statement into the TensorFlow graph.
Parameters:
- input : tf.Tensor
  The value of this tensor will be printed whenever it is computed in the graph.
- message : str
  String prepended to the value of input, to help with logging.
Returns:
- tf.Tensor
  New tensor representing the print operation applied to input.
Notes
This is what tf.Print is supposed to do, but it doesn't seem to work consistently.
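
A sketch of inserting a print into a (TF1-style) graph; the returned tensor should be used in place of input so that the print actually executes when the graph runs:

    import tensorflow as tf
    from nengo_dl.utils import print_op

    x = tf.constant([1.0, 2.0])
    x = print_op(x, "value of x:")  # printed whenever x is computed in the graph
    y = x * 2.0                     # downstream ops use the printed tensor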

nengo_dl.utils.cast_dtype(dtype, target)
Changes float dtypes to the target dtype, leaves others unchanged.
Used to map all float values to a target precision. Also casts numpy dtypes to TensorFlow dtypes.
Parameters:
- dtype : tf.DType or numpy dtype
  Input dtype to be converted.
- target : tf.DType
  Floating point dtype to which all floating types should be converted.
Returns:
- tf.DType
  Input dtype, converted to the target type if necessary.
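
For example (a sketch):

    import numpy as np
    import tensorflow as tf
    from nengo_dl.utils import cast_dtype

    d0 = cast_dtype(np.float64, tf.float32)  # float dtype -> target (tf.float32)
    d1 = cast_dtype(np.int32, tf.float32)    # non-float dtype left unchanged (as a tf.DType)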

nengo_dl.utils.find_non_differentiable(inputs, outputs)
Searches through a TensorFlow graph to find non-differentiable elements between inputs and outputs (elements that would prevent us from computing d_outputs / d_inputs).
Parameters:
- inputs : list of tf.Tensor
  Input tensors.
- outputs : list of tf.Tensor
  Output tensors.
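
A rough sketch of how this might be invoked (how problems are reported, e.g. via an exception, is not specified above):

    import tensorflow as tf
    from nengo_dl.utils import find_non_differentiable

    x = tf.constant([1.0, 2.0, 3.0])
    y = tf.cast(tf.argmax(x), tf.float32)  # argmax blocks gradient flow
    find_non_differentiable([x], [y])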

class nengo_dl.utils.MessageBar(msg='', finish_msg='', **kwargs)
ProgressBar widget for progress bars with possibly unknown duration.
Parameters:
- msg : str, optional
  A message to be displayed in the middle of the progress bar.
- finish_msg : str, optional
  A message to be displayed when the progress bar is finished.
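
A minimal construction sketch (how the widget is combined with other progress bar widgets is left to the caller):

    from nengo_dl.utils import MessageBar

    widget = MessageBar(msg="Building network", finish_msg="Build finished")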

class nengo_dl.utils.ProgressBar(present, past=None, max_value=1, vars=None, **kwargs)
Handles progress bar display for some tracked process.
Parameters:
- present : str
  Description of process in present tense (e.g., "Simulating").
- past : str, optional
  Description of process in past tense (e.g., "Simulation").
- max_value : int or None, optional
  The maximum number of steps in the tracked process (or None if the maximum number of steps is unknown).
- vars : list of str, optional
  Extra variables that will be displayed at the end of the progress bar.
Notes
Launches a separate thread to handle the progress bar display updates.

step(**vars)
Advance the progress bar one step.
Parameters:
- vars : dict of {str: str}
  Values for the extra variables displayed at the end of the progress bar (defined in ProgressBar.__init__()).

sub(msg=None, **kwargs)
Creates a new progress bar for tracking a sub-process.
Parameters:
- msg : str, optional
  Description of the sub-process.

next()
Wraps an iterable using this progress bar.
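
A usage sketch pulling the pieces above together, assuming the bar can be driven as a context manager via the underlying progressbar machinery (the loss values are placeholders):

    from nengo_dl.utils import ProgressBar

    with ProgressBar("Simulating", "Simulation", max_value=10,
                     vars=["loss"]) as progress:
        for i in range(10):
            # ... do one step of work ...
            progress.step(loss="%.2f" % (1.0 / (i + 1)))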

class nengo_dl.utils.NullProgressBar(min_value=0, max_value=None, widgets=None, left_justify=True, initial_value=0, poll_interval=None, widget_kwargs=None, custom_len=<built-in function len>, max_error=True, **kwargs)
A progress bar that does nothing.
Used to replace ProgressBar when we want to disable output.
Initializes a progress bar with sane defaults.
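
A sketch of swapping in the no-op bar when output should be suppressed:

    from nengo_dl.utils import NullProgressBar, ProgressBar

    show_progress = False
    progress = (ProgressBar("Simulating", max_value=10) if show_progress
                else NullProgressBar())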

nengo_dl.utils.minibatch_generator(inputs, targets, minibatch_size, shuffle=True, truncation=None, rng=None)
Generator to yield minibatch_size subsets from inputs and targets.
Parameters:
- inputs : dict of {Node: ndarray}
  Input values for Nodes in the network.
- targets : dict of {Probe: ndarray}
  Desired output values at Probes, corresponding to each value in inputs.
- minibatch_size : int
  The number of items in each minibatch.
- shuffle : bool, optional
  If True, the division of items into minibatches will be randomized each time the generator is created.
- truncation : int, optional
  If not None, divide the data up into sequences of truncation timesteps.
- rng : RandomState, optional
  Seeded random number generator.
Yields:
- offset : int
  The simulation step at which the returned data begins (will only be nonzero if truncation is not None).
- inputs : dict of {Node: ndarray}
  The same structure as inputs, but with each array reduced to minibatch_size elements along the first dimension.
- targets : dict of {Probe: ndarray}
  The same structure as targets, but with each array reduced to minibatch_size elements along the first dimension.
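
A sketch of iterating over minibatches; the network, shapes, and (n_items, n_steps, dimensions) array layout here are illustrative assumptions:

    import numpy as np
    import nengo
    from nengo_dl.utils import minibatch_generator

    with nengo.Network() as net:
        node = nengo.Node([0])
        ens = nengo.Ensemble(10, 1)
        nengo.Connection(node, ens)
        probe = nengo.Probe(ens)

    inputs = {node: np.zeros((64, 5, 1))}   # 64 items in the dataset
    targets = {probe: np.zeros((64, 5, 1))}

    for offset, inp, tar in minibatch_generator(inputs, targets,
                                                minibatch_size=16):
        # inp/tar have the same structure, with 16 items along the first axis
        pass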

nengo_dl.utils.configure_settings(**kwargs)
Pass settings to nengo_dl by setting them as parameters on the top-level Network config.
The settings are passed as keyword arguments to configure_settings; e.g., to set trainable use configure_settings(trainable=True).
Parameters:
- trainable : bool or None
  Adds a parameter to Nengo Ensembles/Connections/Networks that controls whether or not they will be optimized by Simulator.train(). Passing None will use the default nengo_dl trainable settings, or True/False will override the default for all objects. In either case trainability can be further configured on a per-object basis (e.g., net.config[my_ensemble].trainable = True). See the documentation for more details.
- planner : graph planning algorithm
  Pass one of the graph planners to change the default planner.
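
For example, building on the description above (assuming configure_settings is called inside the network's with block, since it attaches the settings to the top-level Network config; my_ensemble is a placeholder):

    import nengo
    from nengo_dl.utils import configure_settings

    with nengo.Network() as net:
        # freeze everything by default...
        configure_settings(trainable=False)
        my_ensemble = nengo.Ensemble(10, 1)
        # ...then re-enable training for one object
        net.config[my_ensemble].trainable = True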