class jina.executors.clients.BaseClientExecutor(host, port, timeout=-1, *args, **kwargs)[source]

Bases: jina.executors.BaseExecutor

BaseClientExecutor is the base class for the executors that wrap up a client to another server.

  • host (str) – the host address of the server

  • port (str) – the host port of the server

  • timeout (int) – the waiting time in seconds before a request is dropped, 200 by default
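The constructor pattern above can be sketched as follows; the `BaseExecutor` stand-in and the example address are illustrative assumptions, not jina's actual implementation:

```python
class BaseExecutor:
    """Stand-in for jina.executors.BaseExecutor, for illustration only."""
    def __init__(self, *args, **kwargs):
        pass


class BaseClientExecutor(BaseExecutor):
    """Sketch of the documented constructor: it stores the server
    address, port, and timeout for the wrapped client."""

    def __init__(self, host, port, timeout=-1, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.host = host
        self.port = port
        self.timeout = timeout


# hypothetical address and port, for illustration only
client = BaseClientExecutor('localhost', '8500', timeout=10)
print(f'{client.host}:{client.port}')  # -> localhost:8500
```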

class jina.executors.clients.BaseTFServingClientExecutor(model_name, signature_name='serving_default', method_name='Predict', *args, **kwargs)[source]

Bases: jina.executors.clients.BaseClientExecutor

BaseTFServingClientExecutor is the base class for the executors that wrap up a tf serving client. For the sake of generality, this implementation depends on tensorflow_serving.

Assuming that the tf server is running with the Predict method, one can implement an executor with a tf serving client as follows:

class MyAwesomeTFServingClientEncoder(BaseTFServingClientExecutor, BaseEncoder):
    def encode(self, data: Any, *args, **kwargs) -> Any:
        _req = self.get_request(data)
        return self.get_response(_req)

    def get_input(self, data):
        input_1 = data[:, 0]
        input_2 = data[:, 1:]
        return {
            'my_input_1': input_1.reshape(-1, 1).astype(np.float32),
            'my_input_2': input_2.astype(np.float32),
        }

    def get_output(self, response):
        return np.array(response.result().outputs['output_feature'].float_val)
  • model_name (str) – the name of the tf serving model. It must match the MODEL_NAME parameter when starting the tf server.

  • signature_name (str) – the name of the tf serving signature. It must match the key in the signature_def_map when exporting the tf serving model.

  • method_name (str) – the name of the tf serving method. This parameter corresponds to the method_name parameter when building the signature map with build_signature_def(). Currently, only Predict is supported. The other methods, including Classify and Regression, require users to implement _fill_classify_request and _fill_regression_request, correspondingly. For the details of signature_defs, please refer to https://www.tensorflow.org/tfx/serving/signature_defs.
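To make the roles of these parameters concrete, here is a hedged sketch of where model_name and signature_name land in a Predict-style request. A plain dict stands in for the tensorflow_serving PredictRequest proto so the sketch stays self-contained, and the tensor names and values are made up:

```python
def build_predict_request(model_name, signature_name, input_dict):
    # In the real client these fields go into a tensorflow_serving
    # PredictRequest proto; a dict stands in here for illustration.
    return {
        'model_spec': {
            'name': model_name,                # must match MODEL_NAME of the tf server
            'signature_name': signature_name,  # must match the exported signature_def_map key
        },
        'inputs': dict(input_dict),            # feature name -> tensor values
    }


req = build_predict_request('my_model', 'serving_default',
                            {'my_input_1': [[1.0], [2.0]]})
print(req['model_spec']['name'])  # -> my_model
```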


Initialize the channel and stub for the gRPC client


Construct the gRPC request to the tf server.

fill_request(request, input_dict)[source]
Convert the input data into a dict with the model's input feature names as the keys and the input tensors as the values.



Get the response from the tf server and postprocess it.


Postprocess the response from the tf server


Construct the default gRPC request to the tf server.



Predict(request, data_dict)[source]

Fill in the PredictRequest with the data dict

Return type

PredictRequest

Classify(request, data_dict)[source]

Fill in the ClassificationRequest with the data dict

Return type

ClassificationRequest

Regression(request, data_dict)[source]

Fill in the RegressionRequest with the data dict

Return type

RegressionRequest
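The Predict, Classify, and Regression fillers above amount to a dispatch on the configured method_name. A hedged, self-contained sketch of that pattern follows; plain dicts stand in for the request protos, and the unimplemented fillers raise, mirroring the note that only Predict is currently supported:

```python
def fill_predict_request(request, data_dict):
    # PredictRequest-style: named input tensors keyed by feature name
    request['inputs'] = dict(data_dict)
    return request


def fill_classify_request(request, data_dict):
    # placeholder mirroring _fill_classify_request, left to the user
    raise NotImplementedError('implement _fill_classify_request')


def fill_regression_request(request, data_dict):
    # placeholder mirroring _fill_regression_request, left to the user
    raise NotImplementedError('implement _fill_regression_request')


_FILLERS = {
    'Predict': fill_predict_request,
    'Classify': fill_classify_request,
    'Regression': fill_regression_request,
}


def fill(method_name, data_dict):
    # dispatch on the configured method_name, as the docs describe
    request = {'model_spec': {}}
    return _FILLERS[method_name](request, data_dict)


req = fill('Predict', {'my_input_1': [1.0, 2.0]})
print(req['inputs'])  # -> {'my_input_1': [1.0, 2.0]}
```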