predict

FullyConnected.predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.
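For example, here is a minimal sketch of this direct-call pattern (the model below is hypothetical, built only for illustration):

```python
import numpy as np
import tensorflow as tf

# Hypothetical small model, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# Wrap the direct call in a tf.function for faster repeated execution
# inside an inner loop. training=False ensures inference behavior for
# layers such as BatchNormalization and Dropout.
@tf.function
def fast_predict(x):
    return model(x, training=False)

x = np.random.rand(1, 8).astype("float32")
y = fast_predict(x)   # an eager tensor
y_np = y.numpy()      # NumPy array value, if you need one
```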

Also, note that test loss is not affected by regularization layers such as noise and dropout.

Note: See [this FAQ entry](https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Parameters:
  • x

    Input samples. It could be:

    • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).

    • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

    • A tf.data dataset.

    • A generator or keras.utils.Sequence instance.

    A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.

  • batch_size – Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches); see the usage sketch after this parameter list.

  • verbose – “auto”, 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. “auto” becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to “auto”.

  • steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.

  • callbacks – List of keras.callbacks.Callback instances. List of callbacks to apply during prediction. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).

  • max_queue_size – Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.

  • workers – Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.

  • use_multiprocessing – Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can’t be passed easily to child processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.
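As a usage sketch tying these parameters together (the model and data below are illustrative assumptions, not part of the API):

```python
import numpy as np
import tensorflow as tf

# Hypothetical model, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])

# Numpy array input: batch_size controls how many samples go into each batch.
x = np.random.rand(100, 8).astype("float32")
preds = model.predict(x, batch_size=32, verbose=0)  # shape (100, 1)

# tf.data input: do not pass batch_size (the dataset already yields batches).
# steps caps how many batches are drawn; with steps=None the dataset is run
# until it is exhausted.
ds = tf.data.Dataset.from_tensor_slices(x).batch(32)
preds_ds = model.predict(ds, steps=2, verbose=0)    # 2 batches -> shape (64, 1)
```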

Returns:

Numpy array(s) of predictions.

Raises:
  • RuntimeError – If model.predict is wrapped in a tf.function.

  • ValueError – In case of mismatch between the provided input data and the model’s expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.
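To illustrate the RuntimeError case (a sketch; the supported tf.function pattern is shown near the top of this entry):

```python
import numpy as np
import tensorflow as tf

# Hypothetical model, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
x = np.random.rand(8, 4).astype("float32")

# Wrapping predict() itself in a tf.function is unsupported:
wrapped = tf.function(model.predict)
try:
    wrapped(x)
except RuntimeError:
    # predict() manages its own tf.function; wrap the direct model
    # call (model(x, training=False)) instead.
    pass
```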