evaluate
- FullyConnected.evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)
Returns the loss value & metrics values for the model in test mode.
Computation is done in batches (see the batch_size arg).
- Parameters:
x –
Input data. It could be:
- A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
- A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
- A dict mapping input names to the corresponding arrays/tensors, if the model has named inputs.
- A tf.data dataset, returning tuples of either (inputs, targets) or (inputs, targets, sample_weights).
- A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit. A short usage sketch covering array and dataset inputs follows the parameter list below.
y – Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).
batch_size – Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).
verbose – “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” defaults to 1 for most cases, and to 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to ‘auto’.
sample_weight –
Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or, in the case of temporal data, a 2D array with shape (samples, sequence_length) to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead, pass sample weights as the third element of x.
steps – Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, ‘evaluate’ will run until the dataset is exhausted. This argument is not supported with array inputs.
callbacks – List of keras.callbacks.Callback instances to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).
max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can’t be passed easily to child processes.
return_dict – If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
**kwargs – Unused at this time.
See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.
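The sketch below illustrates the main input forms described above, assuming FullyConnected follows the standard tf.keras Model.evaluate API; the variable `model` is taken to be an already-compiled FullyConnected instance, and the data shapes are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Assumption: `model` is a compiled FullyConnected instance that expects
# 20-dimensional inputs with a single scalar target (hypothetical shapes).
x_test = np.random.rand(128, 20).astype("float32")
y_test = np.random.rand(128, 1).astype("float32")

# 1) NumPy arrays: targets passed via `y`, batching controlled by `batch_size`.
results = model.evaluate(x_test, y_test, batch_size=32, verbose=2)

# 2) Per-sample weights: a flat 1D array with one weight per test sample.
weights = np.ones(128, dtype="float32")
results = model.evaluate(x_test, y_test, sample_weight=weights, verbose=2)

# 3) tf.data dataset: yields (inputs, targets[, sample_weights]) tuples, so
#    `y`, `batch_size`, and `sample_weight` must not be passed separately.
ds = tf.data.Dataset.from_tensor_slices((x_test, y_test, weights)).batch(32)
results = model.evaluate(ds, verbose=2)
```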
- Returns:
Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs. A short example of both return formats follows the Raises section below.
- Raises:
RuntimeError – If model.evaluate is wrapped in a tf.function.
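As a sketch of the two return formats, again assuming a compiled FullyConnected instance `model` that tracks one loss and one metric (names and values below are illustrative):

```python
# Default (return_dict=False): a list of scalars, ordered as described by
# model.metrics_names, e.g. ['loss', 'accuracy'].
scores = model.evaluate(x_test, y_test, batch_size=32, verbose=0)
for name, value in zip(model.metrics_names, scores):
    print(f"{name}: {value:.4f}")

# With return_dict=True, the same values come back keyed by metric name,
# e.g. {'loss': 0.31, 'accuracy': 0.89}.
scores = model.evaluate(x_test, y_test, batch_size=32, verbose=0, return_dict=True)
print(scores["loss"])
```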