jina.executors.devices

class jina.executors.devices.BaseDevice[source]

Bases: object

BaseDevice is the base class for executors that use other frameworks internally, including TensorFlow, PyTorch, ONNX, Faiss and PaddlePaddle.

device

The device to be used by the executor; computed once and cached.

abstract to_device(*args, **kwargs)[source]

Move the computation from GPU to CPU or vice versa.
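The contract above can be sketched without any framework. Everything below (`ToyDevice`, `ToyModel`, the string device names and the caching scheme) is a hypothetical illustration of the pattern, not Jina's implementation:

```python
# Hypothetical, framework-free sketch of the BaseDevice pattern.
# 'ToyDevice' and 'ToyModel' are illustrative stand-ins, not Jina classes.

class BaseDeviceSketch:
    @property
    def device(self):
        # resolve once, then cache (the real class uses a caching decorator)
        if not hasattr(self, '_device'):
            self._device = 'cuda' if getattr(self, 'on_gpu', False) else 'cpu'
        return self._device

    def to_device(self, *args, **kwargs):
        # concrete subclasses move the computation between CPU and GPU
        raise NotImplementedError


class ToyModel:
    placed_on = None


class ToyDevice(BaseDeviceSketch):
    def to_device(self, model):
        # place the model on the resolved device
        model.placed_on = self.device
        return model
```

A CPU-bound instance places the model on `'cpu'`; setting `on_gpu` before the first `device` access switches it to `'cuda'`.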

class jina.executors.devices.TorchDevice[source]

Bases: jina.executors.devices.BaseDevice

TorchDevice implements the base class for executors using the torch library. The common setups go into this class.

To implement your own executor with the torch library,

class MyAwesomeTorchEncoder(BaseEncoder, TorchDevice):
    def post_init(self):
        # load your awesome model
        import torchvision.models as models
        self.model = models.mobilenet_v2().features.eval()
        self.to_device(self.model)

    def encode(self, data, *args, **kwargs):
        # use your awesome model to encode/craft/score
        import torch
        torch.set_grad_enabled(False)

        _input = torch.as_tensor(data, device=self.device)
        _output = self.model(_input).cpu()

        return _output.numpy()
device

The device to be used by the executor; computed once and cached.

to_device(model, *args, **kwargs)[source]

Load the model onto the device.

class jina.executors.devices.PaddleDevice[source]

Bases: jina.executors.devices.BaseDevice

PaddleDevice implements the base class for executors using the paddlepaddle library. The common setups go into this class.

To implement your own executor with the paddlepaddle library,

class MyAwesomePaddleEncoder(BaseEncoder, PaddleDevice):
    def post_init(self):
        # load your awesome model
        import paddlehub as hub
        module = hub.Module(name='mobilenet_v2_imagenet')
        inputs, outputs, self.model = module.context(trainable=False)
        self.inputs_name = inputs['image'].name
        self.outputs_name = outputs['feature_map'].name
        self.exe = self.to_device()

    def encode(self, data, *args, **kwargs):
        # use your awesome model to encode/craft/score
        _output, *_ = self.exe.run(
            program=self.model,
            fetch_list=[self.outputs_name],
            feed={self.inputs_name: data},
            return_numpy=True
        )
        return _output
device

The device to be used by the executor; computed once and cached.

to_device()[source]

Load the model onto the device.

class jina.executors.devices.TFDevice[source]

Bases: jina.executors.devices.BaseDevice

TFDevice implements the base class for executors using the tensorflow library. The common setups go into this class.

To implement your own executor with the tensorflow library,

class MyAwesomeTFEncoder(BaseEncoder, TFDevice):
    def post_init(self):
        # load your awesome model
        self.to_device()
        import tensorflow as tf
        model = tf.keras.applications.MobileNetV2(
            input_shape=(self.img_shape, self.img_shape, 3),
            include_top=False,
            pooling=self.pool_strategy,
            weights='imagenet')
        model.trainable = False
        self.model = model

    def encode(self, data, *args, **kwargs):
        # use your awesome model to encode/craft/score
        return self.model(data).numpy()
device

The device to be used by the executor; computed once and cached.

to_device()[source]

Load the model onto the device.

class jina.executors.devices.OnnxDevice[source]

Bases: jina.executors.devices.BaseDevice

OnnxDevice implements the base class for executors using the onnxruntime library. The common setups go into this class.

To implement your own executor with the onnxruntime library,

class MyAwesomeOnnxEncoder(BaseEncoder, OnnxDevice):
    def __init__(self, output_feature, model_path, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.outputs_name = output_feature
        self.model_path = model_path

    def post_init(self):
        import onnxruntime
        self.model = onnxruntime.InferenceSession(self.model_path, None)
        self.inputs_name = self.model.get_inputs()[0].name
        self.to_device(self.model)

    def encode(self, data, *args, **kwargs):
        # use your awesome model to encode/craft/score
        import numpy as np
        results = []
        for _input in data:
            data_encoded, *_ = self.model.run(
                [self.outputs_name, ], {self.inputs_name: _input})
            results.append(data_encoded)
        return np.concatenate(results, axis=0)
device

The device to be used by the executor; computed once and cached.

to_device(model, *args, **kwargs)[source]

Load the model onto the device.

class jina.executors.devices.FaissDevice[source]

Bases: jina.executors.devices.BaseDevice

FaissDevice implements the base class for executors using the faiss library. The common setups go into this class.

device

The device to be used by the executor; computed once and cached.

to_device(index, *args, **kwargs)[source]

Load the index onto the device.
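FaissDevice ships without a usage example here. A minimal sketch in the same style as the classes above; BaseIndexer, num_dim and the query signature are assumptions for illustration, not part of this module:

```python
class MyAwesomeFaissIndexer(BaseIndexer, FaissDevice):
    def post_init(self):
        # build a flat L2 index and move it to the configured device
        import faiss
        self.index = self.to_device(faiss.IndexFlatL2(self.num_dim))

    def query(self, keys, top_k, *args, **kwargs):
        # return ids and distances of the top_k nearest vectors
        dist, ids = self.index.search(keys, top_k)
        return ids, dist
```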

class jina.executors.devices.MindsporeDevice[source]

Bases: jina.executors.devices.BaseDevice

MindsporeDevice implements the base class for executors using the mindspore library. The common setups go into this class.

device

The device to be used by the executor; computed once and cached.

to_device()[source]

Load the model onto the device.
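MindsporeDevice likewise has no example. A hedged sketch in the same style; BaseEncoder is assumed, and nn.Dense stands in for a real network:

```python
class MyAwesomeMindsporeEncoder(BaseEncoder, MindsporeDevice):
    def post_init(self):
        # bind mindspore to the configured device before building the model
        self.to_device()
        import mindspore.nn as nn
        self.model = nn.Dense(784, 64)  # toy stand-in for a real network

    def encode(self, data, *args, **kwargs):
        # use your awesome model to encode/craft/score
        import mindspore
        _input = mindspore.Tensor(data)
        return self.model(_input).asnumpy()
```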