jina.executors.clients

class jina.executors.clients.BaseClientExecutor(host, port, timeout=-1, *args, **kwargs)[source]

Bases: jina.executors.BaseExecutor

BaseClientExecutor is the base class for executors that wrap up a client to another server.

Parameters
  • host (str) – the host address of the server

  • port (str) – the port of the server

  • timeout (int) – waiting time in seconds before the request is dropped, by default 200

class jina.executors.clients.BaseTFServingClientExecutor(model_name, signature_name='serving_default', method_name='Predict', *args, **kwargs)[source]

Bases: jina.executors.clients.BaseClientExecutor

BaseTFServingClientExecutor is the base class for executors that wrap up a tf serving client. For the sake of generality, this implementation depends on tensorflow_serving.

Assuming that the tf server is running with the Predict method, one can implement an executor with a tf serving client as follows:

class MyAwesomeTFServingClientEncoder(BaseTFServingClientExecutor, BaseEncoder):
    def encode(self, data: Any, *args, **kwargs) -> Any:
        _req = self.get_request(data)
        return self.get_response(_req)

    def get_input(self, data):
        input_1 = data[:, 0]
        input_2 = data[:, 1:]
        return {
            'my_input_1': input_1.reshape(-1, 1).astype(np.float32),
            'my_input_2': input_2.astype(np.float32)
            }

    def get_output(self, response):
        return np.array(response.result().outputs['output_feature'].float_val)

Parameters
  • model_name (str) – the name of the tf serving model. It must match the MODEL_NAME parameter when starting the tf server.

  • signature_name (str) – the name of the tf serving signature. It must match the key in the signature_def_map when exporting the tf serving model.

  • method_name (str) – the name of the tf serving method. This parameter corresponds to the method_name parameter when building the signature map with build_signature_def(). Currently, only Predict is supported. The other methods, Classify and Regression, require users to implement _fill_classify_request and _fill_regression_request, respectively. For the details of signature_defs, please refer to https://www.tensorflow.org/tfx/serving/signature_defs.
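
The method_name-based dispatch described above can be sketched in plain Python. This is a minimal illustration of the pattern only (class and method bodies are hypothetical, not jina's actual implementation; a real Predict would copy each tensor into request.inputs via tf.make_tensor_proto):

```python
class TFServingClientSketch:
    """Hypothetical sketch of dispatching on method_name (Predict/Classify/Regression)."""

    def __init__(self, method_name='Predict'):
        self.method_name = method_name

    def fill_request(self, request, input_dict):
        # Look up the method named by ``method_name`` and let it fill the request.
        fill = getattr(self, self.method_name)
        return fill(request, input_dict)

    def Predict(self, request, input_dict):
        # Stand-in for filling a PredictRequest: here ``request`` is a plain
        # dict; a real client would write into request.inputs[key] instead.
        request.update(input_dict)
        return request

    def Classify(self, request, input_dict):
        raise NotImplementedError('implement _fill_classify_request')

    def Regression(self, request, input_dict):
        raise NotImplementedError('implement _fill_regression_request')


client = TFServingClientSketch()
req = client.fill_request({}, {'my_input_1': [1.0, 2.0]})
```

Only Predict does useful work here, mirroring the documented behavior; selecting Classify or Regression raises NotImplementedError until the corresponding fill method is provided.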

post_init()[source]

Initialize the channel and stub for the gRPC client

get_request(data)[source]

Construct the gRPC request to the tf server.

fill_request(request, input_dict)[source]

get_input(data)[source]

Convert the input data into a dict with the model's input feature names as the keys and the input tensors as the values.

Return type

Dict
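
As a concrete illustration of the conversion get_input performs, the snippet below turns a toy numpy batch into such a feature dict. The feature names my_input_1/my_input_2 are placeholders taken from the example above, not names any real model requires:

```python
import numpy as np


def to_input_dict(data):
    # Split a (batch, n) array into two named features, mirroring the
    # get_input() example above (feature names are illustrative).
    return {
        'my_input_1': data[:, 0].reshape(-1, 1).astype(np.float32),
        'my_input_2': data[:, 1:].astype(np.float32),
    }


batch = np.arange(6, dtype=np.float64).reshape(2, 3)
inputs = to_input_dict(batch)
# inputs['my_input_1'] has shape (2, 1); inputs['my_input_2'] has shape (2, 2)
```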

get_response(request)[source]

Get the response from the tf server and postprocess it

get_output(response)[source]

Postprocess the response from the tf server
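
In the example above, get_output's postprocessing amounts to turning a repeated float field of the gRPC response into a numpy array. A standalone sketch, where SimpleNamespace objects stand in for the real response protobuf:

```python
import numpy as np
from types import SimpleNamespace

# Stand-in for the gRPC response: outputs['output_feature'].float_val
# holds the flat list of floats returned by the tf server.
fake_output = SimpleNamespace(float_val=[0.1, 0.2, 0.3])
fake_response = SimpleNamespace(outputs={'output_feature': fake_output})

embedding = np.array(fake_response.outputs['output_feature'].float_val)
# embedding is a 1-D float array of length 3
```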

get_default_request()[source]

Construct the default gRPC request to the tf server.

Return type

predict_pb2.PredictRequest

Predict(request, data_dict)[source]

Fill in the PredictRequest with the data dict

Return type

predict_pb2.PredictRequest

Classify(request, data_dict)[source]

Fill in the ClassificationRequest with the data dict

Return type

classification_pb2.ClassificationRequest

Regression(request, data_dict)[source]

Fill in the RegressionRequest with the data dict

Return type

regression_pb2.RegressionRequest