Command-Line Interface#

usage: jina [-h] [-v] [-vf]
            {executor, flow, ping, export ... 7 more choices} ...

Positional Arguments#

cli

Possible choices: executor, flow, ping, export, new, gateway, hub, help, pod, deployment, client

Named Arguments#

-v, --version

Show Jina version

-vf, --version-full

Show Jina and all dependencies’ versions

Sub-commands:#

executor#

Start an Executor. An Executor is how Jina processes Documents.

jina executor [-h] [--name] [--workspace] [--log-config] [--quiet]
              [--quiet-error] [--timeout-ctrl] [--polling] [--uses]
              [--uses-with [KEY: VALUE [KEY: VALUE ...]]]
              [--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
              [--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
              [--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
              [--native] [--output-array-type] [--entrypoint]
              [--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]]
              [--volumes [DIR [DIR ...]]] [--gpus] [--disable-auto-volume]
              [--host] [--quiet-remote-logs]
              [--upload-files [FILE [FILE ...]]] [--runtime-cls]
              [--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]]
              [--shards] [--replicas] [--port] [--monitoring]
              [--port-monitoring] [--retries] [--floating]
              [--install-requirements] [--force-update]
              [--compression {NoCompression, Deflate, Gzip}]
              [--uses-before-address] [--uses-after-address]
              [--connection-list] [--disable-reduce] [--timeout-send]
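As an illustrative sketch (the YAML filename, image name, and port below are hypothetical):

```shell
# Start an Executor from a local YAML config on a fixed port
jina executor --uses my-executor.yml --name encoder --port 54321

# Start it from a Docker image with two replicas instead
jina executor --uses docker://my-org/my-executor --replicas 2
```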

Essential arguments#

--name

The name of this object.

This will be used in the following places:

- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …

When not given, then the default naming strategy will apply.

--workspace

The working directory for any IO operations in this object. If not set, then derive from its parent workspace.

--log-config

The YAML config of the logger used in this object.

Default: “default”

--quiet

If set, then no log will be emitted from this object.

Default: False

--quiet-error

If set, then exception stack information will not be added to the log.

Default: False

Base Deployment arguments#

--timeout-ctrl

The timeout in milliseconds of the control request, -1 for waiting forever

Default: 60

--polling

The polling strategy of the Deployment and its endpoints (when shards>1). It can be defined for all endpoints of a Deployment or per endpoint.

Define per Deployment:

- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)

Define per Endpoint: a JSON dict of {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}

Default: “ANY”
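Both forms can be sketched on the command line; the YAML filename and endpoint names below are illustrative:

```shell
# Broadcast every request to all three shards
jina executor --uses my-executor.yml --shards 3 --polling ALL

# Per-endpoint polling via a JSON dict
jina executor --uses my-executor.yml --shards 3 \
  --polling '{"/custom": "ALL", "/search": "ANY", "*": "ANY"}'
```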

WorkerRuntime arguments#

--uses

The config of the Executor. It can be one of the following:

- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config

When used in Python, the following values are additionally accepted:

- a Python dict that represents the config
- a text file stream that has a .read() interface

Default: “BaseExecutor”

--uses-with

Dictionary of keyword arguments that will override the with configuration in uses

--uses-metas

Dictionary of keyword arguments that will override the metas configuration in uses

--uses-requests

Dictionary of keyword arguments that will override the requests configuration in uses
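The KEY: VALUE pairs for these override flags are passed as quoted tokens; a hedged sketch, where the YAML filename, keys, and values are all hypothetical:

```shell
# Override fields of the `with` and `metas` sections of the Executor config
jina executor --uses my-executor.yml \
  --uses-with 'top_k: 10' 'model_name: base' \
  --uses-metas 'name: my-encoder'
```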

--py-modules

The custom Python modules to import before loading the Executor.

Note that the recommended way is to import only a single module: a simple Python file, if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.

--port-in

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 58103

--host-in

The host address for binding to, by default it is 0.0.0.0

Default: “0.0.0.0”

--native

If set, only native Executors are allowed, and the Executor always runs inside a WorkerRuntime.

Default: False

--output-array-type

The type that array `tensor` and `embedding` will be serialized to.

Supports the same types as docarray.to_protobuf(..., ndarray_type=…), which can be found here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>. Defaults to retaining whatever type is returned by the Executor.

ContainerRuntime arguments#

--entrypoint

The entrypoint command overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.

--docker-kwargs

Dictionary of kwargs arguments that will be passed to the Docker SDK when starting the Docker container.

More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/

--volumes

The path on the host to be mounted inside the container.

Note:

- If separated by :, the first part is the local host path and the second part is the path inside the container.
- If no separator is provided, the basename of that directory will be mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" will be mounted into /my-workspace inside the container.
- All volumes are mounted in read-write mode.

--gpus

This argument allows a dockerized Jina Executor to discover local GPU devices.

Note:

- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID].
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2].
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display.
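Combining the two container flags above, a hedged sketch (the image name and paths are hypothetical):

```shell
# Run a dockerized Executor with all GPUs and an explicit volume mapping
jina executor --uses docker://my-org/my-executor --gpus all \
  --volumes "/user/test/my-workspace:/workspace"
```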

--disable-auto-volume

Do not automatically mount a volume for dockerized Executors.

Default: False

RemoteRuntime arguments#

--host

The host address of the runtime, by default it is 0.0.0.0.

Default: “0.0.0.0”

Distributed arguments#

--quiet-remote-logs

Do not display the streaming of remote logs on local console

Default: False

--upload-files

The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has file dependencies beyond a single YAML file, e.g. Python files or data files.

Note:

- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will be put under the _same_ workspace on the remote, losing all hierarchy.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are by default isolated across runs. To ensure files are submitted to the same workspace across different runs, use --workspace-id to specify the workspace.

Pod arguments#

--runtime-cls

The runtime class to run inside the Pod

Default: “WorkerRuntime”

--timeout-ready

The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever

Default: 600000

--env

The map of environment variables that are available inside runtime

--shards

The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies

Default: 1

--replicas

The number of replicas in the deployment

Default: 1

--port

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 58464

--monitoring

If set, spawn an http server with a prometheus endpoint to expose metrics

Default: False

--port-monitoring

The port on which the prometheus server is exposed, default is a random port between [49152, 65535]

Default: 50804

--retries

Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)

Default: -1

--floating

If set, the current Pod/Deployment cannot be further chained, and the next .add() will chain after the last Pod/Deployment, not this current one.

Default: False

Pull arguments#

--install-requirements

If set, install the requirements.txt from the Hub Executor bundle locally

Default: False

--force-update, --force

If set, always pull the latest Hub Executor bundle even if it exists locally

Default: False

Head arguments#

--compression

Possible choices: NoCompression, Deflate, Gzip

The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.

--uses-before-address

The address of the uses-before runtime

--uses-after-address

The address of the uses-after runtime

--connection-list

A JSON dictionary with a list of connections to configure

--disable-reduce

Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head

Default: False

--timeout-send

The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default

flow#

Start a Flow. Flow is how Jina streamlines and distributes Executors.

jina flow [-h] [--name] [--workspace] [--log-config] [--quiet] [--quiet-error]
          [--uses] [--env [KEY: VALUE [KEY: VALUE ...]]]
          [--inspect {HANG, REMOVE, COLLECT}]
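As an illustrative sketch (the filename, URL, and environment variable are hypothetical):

```shell
# Start a Flow from a local YAML file
jina flow --uses flow.yml

# Or from a URL, with an environment variable set inside the runtime
jina flow --uses https://example.com/flow.yml --env 'JINA_LOG_LEVEL: DEBUG'
```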

Essential arguments#

--name

The name of this object.

This will be used in the following places:

- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …

When not given, then the default naming strategy will apply.

--workspace

The working directory for any IO operations in this object. If not set, then derive from its parent workspace.

--log-config

The YAML config of the logger used in this object.

Default: “default”

--quiet

If set, then no log will be emitted from this object.

Default: False

--quiet-error

If set, then exception stack information will not be added to the log.

Default: False

Flow Feature arguments#

--uses

The YAML file path that represents a Flow. It can be either a local file path or a URL.

--env

The map of environment variables that are available inside runtime

--inspect

Possible choices: HANG, REMOVE, COLLECT

The strategy for handling inspect deployments in the Flow.

If REMOVE is given, all inspect deployments are removed when building the Flow.

Default: COLLECT

ping#

Ping a Deployment and check its network connectivity.

jina ping [-h] [--timeout] [--retries] host port

Positional Arguments#

host

The host address of the target Pod, e.g. 0.0.0.0

port

The control port of the target deployment/pod

Named Arguments#

--timeout

Timeout in milliseconds of one check, -1 for waiting forever

Default: 3000

--retries

The maximum number of health-check attempts before exiting with exit code 1

Default: 3
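For example (the host and port are illustrative):

```shell
# Check connectivity of a locally running Pod
jina ping 0.0.0.0 54321 --timeout 5000 --retries 5
```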

export#

Export Jina API and Flow to JSONSchema, Kubernetes YAML, or SVG flowchart.

jina export [-h] {flowchart, kubernetes, docker-compose, schema} ...

subcommands#

Use jina export [sub-command] --help to get detailed information about each sub-command.

export

Possible choices: flowchart, kubernetes, docker-compose, schema

Sub-commands:#

flowchart#

Export a Flow YAML file to a flowchart

jina export flowchart [-h] [--vertical-layout] INPUT OUTPUT
Positional Arguments#
INPUT

The input file path of a Flow YAML

OUTPUT

The output path

Named Arguments#
--vertical-layout

If set, then the flowchart is rendered vertically from top to bottom.

Default: False

kubernetes#

Export a Flow YAML file to a Kubernetes YAML bundle

jina export kubernetes [-h] [--k8s-namespace] INPUT OUTPUT
Positional Arguments#
INPUT

The input file path of a Flow YAML

OUTPUT

The output path

Named Arguments#
--k8s-namespace

The name of the k8s namespace to set for the configurations. If None, the name of the Flow will be used.
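As an illustrative sketch (the filename, output path, and namespace are hypothetical):

```shell
# Export a Flow YAML into a Kubernetes YAML bundle
jina export kubernetes flow.yml ./k8s --k8s-namespace my-namespace
```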

docker-compose#

Export a Flow YAML file to a Docker Compose YAML file

jina export docker-compose [-h] [--network_name] INPUT OUTPUT
Positional Arguments#
INPUT

The input file path of a Flow YAML

OUTPUT

The output path

Named Arguments#
--network_name

The name of the network that will be used by the deployment.

schema#

Export Jina Executor & Flow API to JSONSchema files

jina export schema [-h] [--yaml-path [PATH [PATH ...]]]
                   [--json-path [PATH [PATH ...]]]
                   [--schema-path [PATH [PATH ...]]]
Named Arguments#
--yaml-path

The YAML file path for storing the exported API

--json-path

The JSON file path for storing the exported API

--schema-path

The JSONSchema file path for storing the exported API

new#

Create a new Jina toy project with the predefined template.

jina new [-h] name

Positional Arguments#

name

The name of the project

Default: “hello-jina”

gateway#

Start a Gateway that receives client Requests via gRPC/REST interface

jina gateway [-h] [--name] [--workspace] [--log-config] [--quiet]
             [--quiet-error] [--timeout-ctrl] [--polling] [--uses]
             [--uses-with [KEY: VALUE [KEY: VALUE ...]]]
             [--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
             [--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
             [--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
             [--native] [--output-array-type] [--prefetch] [--title]
             [--description] [--cors] [--no-debug-endpoints]
             [--no-crud-endpoints] [--expose-endpoints]
             [--uvicorn-kwargs [KEY: VALUE [KEY: VALUE ...]]]
             [--grpc-server-kwargs [KEY: VALUE [KEY: VALUE ...]]]
             [--ssl-certfile] [--ssl-keyfile] [--expose-graphql-endpoint]
             [--protocol {GRPC, HTTP, WEBSOCKET}] [--host] [--proxy]
             [--port-expose] [--graph-description] [--graph-conditions]
             [--deployments-addresses] [--deployments-disable-reduce]
             [--compression {NoCompression, Deflate, Gzip}] [--timeout-send]
             [--runtime-cls] [--timeout-ready]
             [--env [KEY: VALUE [KEY: VALUE ...]]] [--shards] [--replicas]
             [--port] [--monitoring] [--port-monitoring] [--retries]
             [--floating]
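As an illustrative sketch (the port is hypothetical; --graph-description and --deployments-addresses keep their empty-JSON defaults here):

```shell
# Start a standalone HTTP gateway on a fixed port
jina gateway --protocol HTTP --port 12345
```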

Named Arguments#

--protocol

Possible choices: GRPC, HTTP, WEBSOCKET

Communication protocol between server and client.

Default: GRPC

--graph-description

Routing graph for the gateway

Default: “{}”

--graph-conditions

Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.

Default: “{}”

--deployments-addresses

A JSON dictionary with the input addresses of each Deployment

Default: “{}”

--deployments-disable-reduce

A JSON list disabling the built-in merging mechanism for each listed Deployment

Default: “[]”

Essential arguments#

--name

The name of this object.

This will be used in the following places:

- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …

When not given, then the default naming strategy will apply.

Default: “gateway”

--workspace

The working directory for any IO operations in this object. If not set, then derive from its parent workspace.

--log-config

The YAML config of the logger used in this object.

Default: “default”

--quiet

If set, then no log will be emitted from this object.

Default: False

--quiet-error

If set, then exception stack information will not be added to the log.

Default: False

Base Deployment arguments#

--timeout-ctrl

The timeout in milliseconds of the control request, -1 for waiting forever

Default: 60

--polling

The polling strategy of the Deployment and its endpoints (when shards>1). It can be defined for all endpoints of a Deployment or per endpoint.

Define per Deployment:

- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)

Define per Endpoint: a JSON dict of {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}

Default: “ANY”

WorkerRuntime arguments#

--uses

The config of the Executor. It can be one of the following:

- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config

When used in Python, the following values are additionally accepted:

- a Python dict that represents the config
- a text file stream that has a .read() interface

Default: “BaseExecutor”

--uses-with

Dictionary of keyword arguments that will override the with configuration in uses

--uses-metas

Dictionary of keyword arguments that will override the metas configuration in uses

--uses-requests

Dictionary of keyword arguments that will override the requests configuration in uses

--py-modules

The custom Python modules to import before loading the Executor.

Note that the recommended way is to import only a single module: a simple Python file, if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.

--port-in

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 50443

--host-in

The host address for binding to, by default it is 0.0.0.0

Default: “0.0.0.0”

--native

If set, only native Executors are allowed, and the Executor always runs inside a WorkerRuntime.

Default: False

--output-array-type

The type that array `tensor` and `embedding` will be serialized to.

Supports the same types as docarray.to_protobuf(..., ndarray_type=…), which can be found here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>. Defaults to retaining whatever type is returned by the Executor.

Prefetch arguments#

--prefetch

Number of requests fetched from the client before feeding into the first Executor.

Used to control the speed of data input into a Flow. 0 disables prefetch (disabled by default)

Default: 0

HTTP Gateway arguments#

--title

The title of this HTTP server. It will be used in automatic docs such as Swagger UI.

--description

The description of this HTTP server. It will be used in automatic docs such as Swagger UI.

--cors

If set, a CORS middleware is added to FastAPI frontend to allow cross-origin access.

Default: False

--no-debug-endpoints

If set, the /status and /post endpoints are removed from the HTTP interface.

Default: False

--no-crud-endpoints

If set, the /index, /search, /update, /delete endpoints are removed from the HTTP interface.

Any Executor that has @requests(on=…) bound to those values will receive data requests.

Default: False

--expose-endpoints

A JSON string that represents a map from executor endpoints (@requests(on=…)) to HTTP endpoints.

--uvicorn-kwargs

Dictionary of kwargs arguments that will be passed to Uvicorn server when starting the server

More details can be found in Uvicorn docs: https://www.uvicorn.org/settings/

--grpc-server-kwargs

Dictionary of kwargs arguments that will be passed to the gRPC server when starting the server

--ssl-certfile

The path to the certificate file

--ssl-keyfile

The path to the key file

GraphQL arguments#

--expose-graphql-endpoint

If set, /graphql endpoint is added to HTTP interface.

Default: False

Gateway arguments#

--host

The host address of the runtime, by default it is 0.0.0.0.

Default: “0.0.0.0”

--proxy

If set, respect the http_proxy and https_proxy environment variables; otherwise, unset these proxy variables before starting. gRPC seems to prefer no proxy.

Default: False

--port-expose

The port that the gateway exposes for clients for GRPC connections.

Default: 61898

--compression

Possible choices: NoCompression, Deflate, Gzip

The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.

--timeout-send

The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default

Pod arguments#

--runtime-cls

The runtime class to run inside the Pod

Default: “GRPCGatewayRuntime”

--timeout-ready

The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever

Default: 600000

--env

The map of environment variables that are available inside runtime

--shards

The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies

Default: 1

--replicas

The number of replicas in the deployment

Default: 1

--port

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 50203

--monitoring

If set, spawn an http server with a prometheus endpoint to expose metrics

Default: False

--port-monitoring

The port on which the prometheus server is exposed, default is a random port between [49152, 65535]

Default: 49665

--retries

Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)

Default: -1

--floating

If set, the current Pod/Deployment cannot be further chained, and the next .add() will chain after the last Pod/Deployment, not this current one.

Default: False

hub#

Push/Pull an Executor to/from Jina Hub

jina hub [-h] {new, push, pull} ...

subcommands#

Use jina hub [sub-command] --help to get detailed information about each sub-command.

hub

Possible choices: new, push, pull

Sub-commands:#

new#

Create a new executor using the template

jina hub new [-h] [--name] [--path] [--advance-configuration] [--description]
             [--keywords] [--url] [--add-dockerfile]
Create Executor arguments#
--name

the name of the Executor

--path

the path to store the Executor

--advance-configuration

If set, always set up advanced configuration such as description, keywords and url

Default: False

--description

the short description of the Executor

--keywords

some keywords to help people search for your Executor (comma-separated)

--url

the URL of your GitHub repo

--add-dockerfile

If set, add a Dockerfile to the created Executor bundle

Default: False

push#

Push an executor package to Jina hub

jina hub push [-h] [--no-usage] [--verbose] [-f DOCKERFILE] [-t]
              [--protected-tag] [--force-update] [--secret] [--no-cache]
              [--public | --private]
              path
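As an illustrative sketch (the folder name and tag are hypothetical):

```shell
# Push a local Executor folder with a tag, as a private Executor
jina hub push ./my-executor -t gpu --private
```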
Named Arguments#
--no-usage

If set, Hub executor usage will not be printed.

Default: False

--verbose

If set, more information will be printed.

Default: False

Push arguments#
path

The Executor folder to be pushed to Jina Hub

-f, --dockerfile

The file path to the Dockerfile (default is ${cwd}/Dockerfile)

-t, --tag

A list of tags. One can use it to distinguish architecture (e.g. cpu, gpu) or versions (e.g. v1, v2).

One can later fetch a tagged Executor via jinahub[+docker]://MyExecutor/gpu

--protected-tag

A list of protected tags. Like --tag but protected against updates after the first push.

--force-update, --force

If set, push will overwrite the Executor on the Hub that shares the same NAME or UUID8 identifier

--secret

The secret for overwriting a Hub Executor

--no-cache

If set, the "--no-cache" option will be added to the Docker build.

Default: False

Visibility arguments#
--public

If set, the pushed Executor is visible to the public

--private

If set, the pushed Executor is invisible to the public

pull#

Download an executor image/package from Jina hub

jina hub pull [-h] [--no-usage] [--install-requirements] [--force-update] uri
Positional Arguments#
uri

The URI of the executor to pull (e.g., jinahub[+docker]://NAME)
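For example (the Executor name is hypothetical):

```shell
# Pull the source package of a Hub Executor
jina hub pull jinahub://MyExecutor

# Pull the containerized version instead
jina hub pull jinahub+docker://MyExecutor
```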

Named Arguments#
--no-usage

If set, Hub executor usage will not be printed.

Default: False

Pull arguments#
--install-requirements

If set, install the requirements.txt from the Hub Executor bundle locally

Default: False

--force-update, --force

If set, always pull the latest Hub Executor bundle even if it exists locally

Default: False

help#

Show help text of a CLI argument

jina help [-h] query

Positional Arguments#

query

Look up the usage & mention of the argument name in the Jina API. The name can be fuzzy

pod#

Start a Pod. You should rarely use this directly unless you are doing low-level orchestration

jina pod [-h] [--name] [--workspace] [--log-config] [--quiet] [--quiet-error]
         [--timeout-ctrl] [--polling] [--uses]
         [--uses-with [KEY: VALUE [KEY: VALUE ...]]]
         [--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
         [--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
         [--py-modules [PATH [PATH ...]]] [--port-in] [--host-in] [--native]
         [--output-array-type] [--entrypoint]
         [--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]]
         [--volumes [DIR [DIR ...]]] [--gpus] [--disable-auto-volume] [--host]
         [--quiet-remote-logs] [--upload-files [FILE [FILE ...]]]
         [--runtime-cls] [--timeout-ready]
         [--env [KEY: VALUE [KEY: VALUE ...]]] [--shards] [--replicas]
         [--port] [--monitoring] [--port-monitoring] [--retries] [--floating]
         [--install-requirements] [--force-update]
         [--compression {NoCompression, Deflate, Gzip}]
         [--uses-before-address] [--uses-after-address] [--connection-list]
         [--disable-reduce] [--timeout-send]

Essential arguments#

--name

The name of this object.

This will be used in the following places:

- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …

When not given, then the default naming strategy will apply.

--workspace

The working directory for any IO operations in this object. If not set, then derive from its parent workspace.

--log-config

The YAML config of the logger used in this object.

Default: “default”

--quiet

If set, then no log will be emitted from this object.

Default: False

--quiet-error

If set, then exception stack information will not be added to the log.

Default: False

Base Deployment arguments#

--timeout-ctrl

The timeout in milliseconds of the control request, -1 for waiting forever

Default: 60

--polling

The polling strategy of the Deployment and its endpoints (when shards>1). It can be defined for all endpoints of a Deployment or per endpoint.

Define per Deployment:

- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)

Define per Endpoint: a JSON dict of {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}

Default: “ANY”

WorkerRuntime arguments#

--uses

The config of the Executor. It can be one of the following:

- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config

When used in Python, the following values are additionally accepted:

- a Python dict that represents the config
- a text file stream that has a .read() interface

Default: “BaseExecutor”

--uses-with

Dictionary of keyword arguments that will override the with configuration in uses

--uses-metas

Dictionary of keyword arguments that will override the metas configuration in uses

--uses-requests

Dictionary of keyword arguments that will override the requests configuration in uses

--py-modules

The custom Python modules to import before loading the Executor.

Note that the recommended way is to import only a single module: a simple Python file, if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.

--port-in

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 49861

--host-in

The host address for binding to, by default it is 0.0.0.0

Default: “0.0.0.0”

--native

If set, only native Executors are allowed, and the Executor always runs inside a WorkerRuntime.

Default: False

--output-array-type

The type that array `tensor` and `embedding` will be serialized to.

Supports the same types as docarray.to_protobuf(..., ndarray_type=…), which can be found here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>. Defaults to retaining whatever type is returned by the Executor.

ContainerRuntime arguments#

--entrypoint

The entrypoint command overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.

--docker-kwargs

Dictionary of kwargs arguments that will be passed to the Docker SDK when starting the Docker container.

More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/

--volumes

The path on the host to be mounted inside the container.

Note:

- If separated by :, the first part is the local host path and the second part is the path inside the container.
- If no separator is provided, the basename of that directory will be mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" will be mounted into /my-workspace inside the container.
- All volumes are mounted in read-write mode.

--gpus

This argument allows a dockerized Jina Executor to discover local GPU devices.

Note:

- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID].
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2].
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display.

--disable-auto-volume

Do not automatically mount a volume for dockerized Executors.

Default: False

RemoteRuntime arguments#

--host

The host address of the runtime, by default it is 0.0.0.0.

Default: “0.0.0.0”

Distributed arguments#

--quiet-remote-logs

Do not display the streaming of remote logs on local console

Default: False

--upload-files

The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has file dependencies beyond a single YAML file, e.g. Python files or data files.

Note:

- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will be put under the _same_ workspace on the remote, losing all hierarchy.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are by default isolated across runs. To ensure files are submitted to the same workspace across different runs, use --workspace-id to specify the workspace.

Pod arguments#

--runtime-cls

The runtime class to run inside the Pod

Default: “WorkerRuntime”

--timeout-ready

The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever

Default: 600000

--env

The map of environment variables that are available inside runtime

--shards

The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies

Default: 1

--replicas

The number of replicas in the deployment

Default: 1

--port

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 51635

--monitoring

If set, spawn an http server with a prometheus endpoint to expose metrics

Default: False

--port-monitoring

The port on which the prometheus server is exposed, default is a random port between [49152, 65535]

Default: 55842

--retries

Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)

Default: -1

--floating

If set, the current Pod/Deployment cannot be further chained, and the next .add() will chain after the last Pod/Deployment, not this current one.

Default: False

Pull arguments#

--install-requirements

If set, install the requirements.txt from the Hub Executor bundle locally

Default: False

--force-update, --force

If set, always pull the latest Hub Executor bundle even if it already exists locally

Default: False

Head arguments#

--compression

Possible choices: NoCompression, Deflate, Gzip

The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.

--uses-before-address

The address of the uses-before runtime

--uses-after-address

The address of the uses-after runtime

--connection-list

A JSON dictionary with a list of connections to configure

--disable-reduce

Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head

Default: False

--timeout-send

The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default

deployment#

Start a Deployment. You should rarely use this directly unless you are doing low-level orchestration

jina deployment [-h] [--name] [--workspace] [--log-config] [--quiet]
                [--quiet-error] [--timeout-ctrl] [--polling] [--uses]
                [--uses-with [KEY: VALUE [KEY: VALUE ...]]]
                [--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
                [--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
                [--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
                [--native] [--output-array-type] [--entrypoint]
                [--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]]
                [--volumes [DIR [DIR ...]]] [--gpus] [--disable-auto-volume]
                [--host] [--quiet-remote-logs]
                [--upload-files [FILE [FILE ...]]] [--runtime-cls]
                [--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]]
                [--shards] [--replicas] [--port] [--monitoring]
                [--port-monitoring] [--retries] [--floating]
                [--install-requirements] [--force-update]
                [--compression {NoCompression, Deflate, Gzip}]
                [--uses-before-address] [--uses-after-address]
                [--connection-list] [--disable-reduce] [--timeout-send]
                [--uses-before] [--uses-after]
                [--when [KEY: VALUE [KEY: VALUE ...]]] [--external] [--tls]

Essential arguments#

--name

The name of this object.

This will be used in the following places:

- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …

When not given, then the default naming strategy will apply.

--workspace

The working directory for any IO operations in this object. If not set, then derive from its parent workspace.

--log-config

The YAML config of the logger used in this object.

Default: “default”

--quiet

If set, then no log will be emitted from this object.

Default: False

--quiet-error

If set, then exception stack information will not be added to the log

Default: False

Base Deployment arguments#

--timeout-ctrl

The timeout in milliseconds of the control request, -1 for waiting forever

Default: 60

--polling

The polling strategy of the Deployment and its endpoints (when shards>1). It can be defined for all endpoints of a Deployment or per endpoint.

Define per Deployment:

- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)

Define per Endpoint: a JSON dict of {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}

Default: “ANY”
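
For the per-endpoint form, the JSON map can be prepared and sanity-checked in the shell before it is passed to the CLI; the endpoint names and the Executor name below are illustrative:

```shell
# Per-endpoint polling map as a JSON string (endpoint names are examples)
polling='{"/custom": "ALL", "/search": "ANY", "*": "ANY"}'

# Check that the string is well-formed JSON before handing it to the CLI
echo "$polling" | python3 -m json.tool >/dev/null && echo valid

# Then, assuming an Executor named MyExec:
#   jina deployment --uses MyExec --polling "$polling"
```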

WorkerRuntime arguments#

--uses

The config of the Executor; it can be one of the following:

- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config

When used in Python, the following values are additionally accepted:

- a Python dict that represents the config
- a text file stream that has a .read() interface

Default: “BaseExecutor”
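
A few of these forms can be sketched as CLI invocations (the file, Hub, and image names below are hypothetical), and the string-literal JSON variant can be checked for well-formedness locally:

```shell
# Hypothetical examples of the accepted --uses forms:
#   jina executor --uses my-executor.yml
#   jina executor --uses jinahub://MyExecutor
#   jina executor --uses docker://my-org/my-executor

# The JSON string-literal form can be validated before use
config='{"jtype": "BaseExecutor"}'
echo "$config" | python3 -m json.tool >/dev/null && echo valid
```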

--uses-with

Dictionary of keyword arguments that will override the with configuration in uses

--uses-metas

Dictionary of keyword arguments that will override the metas configuration in uses

--uses-requests

Dictionary of keyword arguments that will override the requests configuration in uses

--py-modules

The custom Python modules that need to be imported before loading the Executor.

Note that the recommended way is to import only a single module: a simple Python file if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.

--port-in

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 64791

--host-in

The host address to bind to, by default it is 0.0.0.0

Default: “0.0.0.0”

--native

If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.

Default: False

--output-array-type

The array type that tensor and embedding will be serialized to.

Supports the same types as docarray.to_protobuf(.., ndarray_type=…), which can be found here <https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf>. Defaults to retaining whatever type is returned by the Executor.

ContainerRuntime arguments#

--entrypoint

The entrypoint command overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.

--docker-kwargs

Dictionary of kwargs arguments that will be passed to the Docker SDK when starting the Docker container.

More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/

--volumes

The path on the host to be mounted inside the container.

Note:

- If separated by :, the first part is treated as the local host path and the second part as the path inside the container.
- If no : is provided, the basename of that directory is mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" is mounted as /my-workspace inside the container.
- All volumes are mounted in read-write mode.
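
The host:container split and the basename fallback can be previewed with plain shell string operations; the paths below are illustrative:

```shell
# Explicit host:container mapping — split on the first colon
v="/user/test/my-workspace:/data"
host_path=${v%%:*}       # /user/test/my-workspace
container_path=${v#*:}   # /data

# No colon: the directory is mounted under / by its basename
v2="/user/test/my-workspace"
echo "/$(basename "$v2")"   # prints /my-workspace
```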

--gpus

This argument allows a dockerized Jina Executor to discover local GPU devices.

Note:

- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access a specific GPU by device id, use --gpus device=[YOUR-GPU-DEVICE-ID]
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display

--disable-auto-volume

Do not automatically mount a volume for dockerized Executors.

Default: False

RemoteRuntime arguments#

--host

The host address of the runtime, by default it is 0.0.0.0.

Default: “0.0.0.0”

Distributed arguments#

--quiet-remote-logs

Do not display the streaming of remote logs on local console

Default: False

--upload-files

The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has more file dependencies beyond a single YAML file, e.g. Python files, data files.

Note:

- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will be put under the _same_ workspace on the remote, losing all hierarchy.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are isolated across runs by default. To submit files to the same workspace across different runs, use --workspace-id to specify the workspace.

Pod arguments#

--runtime-cls

The runtime class to run inside the Pod

Default: “WorkerRuntime”

--timeout-ready

The timeout in milliseconds that a Pod waits for the runtime to be ready; -1 to wait forever

Default: 600000

--env

The map of environment variables that are available inside the runtime

--shards

The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies

Default: 1

--replicas

The number of replicas in the deployment

Default: 1

--port

The port for input data to bind to, default is a random port between [49152, 65535]

Default: 53318

--monitoring

If set, spawn an HTTP server with a Prometheus endpoint to expose metrics

Default: False

--port-monitoring

The port on which the Prometheus server is exposed, default is a random port between [49152, 65535]

Default: 64652

--retries

Number of retries per gRPC call. If <0 it defaults to max(3, num_replicas)

Default: -1

--floating

If set, the current Pod/Deployment cannot be further chained, and the next .add() will chain after the last Pod/Deployment, not this one.

Default: False

Pull arguments#

--install-requirements

If set, install the requirements.txt included in the Hub Executor bundle locally

Default: False

--force-update, --force

If set, always pull the latest Hub Executor bundle even if it already exists locally

Default: False

Head arguments#

--compression

Possible choices: NoCompression, Deflate, Gzip

The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.

--uses-before-address

The address of the uses-before runtime

--uses-after-address

The address of the uses-after runtime

--connection-list

A JSON dictionary with a list of connections to configure
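
As a sketch, the JSON can be validated locally before being passed to the CLI; the key names and address format below are assumptions for illustration, not a documented schema:

```shell
# Hypothetical connection list: key and address format are assumptions
connections='{"executor0": ["0.0.0.0:12345", "0.0.0.0:12346"]}'
echo "$connections" | python3 -m json.tool >/dev/null && echo valid
# Then, for example:
#   jina executor --connection-list "$connections"
```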

--disable-reduce

Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head

Default: False

--timeout-send

The timeout in milliseconds used when sending data requests to Executors, -1 means no timeout, disabled by default

Deployment arguments#

--uses-before

The executor attached before the Pods described by –uses, typically before sending to all shards, accepted type follows –uses. This argument only applies for sharded Deployments (shards > 1).

--uses-after

The executor attached after the Pods described by –uses, typically used for receiving from all shards, accepted type follows –uses. This argument only applies for sharded Deployments (shards > 1).

--when

The condition that the documents need to fulfill before reaching the Executor. The condition can be defined in the form of a DocArray query condition <https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions>
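
As a sketch, a DocArray-style query condition can be prepared as a JSON string; the field name below is illustrative:

```shell
# A DocArray-style query condition (the tags field name is an example)
when='{"tags__category": {"$eq": "news"}}'
echo "$when" | python3 -m json.tool >/dev/null && echo valid
# Then, assuming an Executor named MyExec:
#   jina deployment --uses MyExec --when "$when"
```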

--external

The Deployment will be considered an external Deployment that has been started independently from the Flow. This Deployment will not be context-managed by the Flow.

Default: False

--tls

If set, connect to the deployment using TLS encryption

Default: False

client#

Start a Python client that connects to a Jina gateway

jina client [-h] [--host] [--proxy] [--port] [--tls] [--asyncio]
            [--return-responses] [--protocol {GRPC, HTTP, WEBSOCKET}]

Named Arguments#

--asyncio

If set, then the input and output of this Client work in an asynchronous manner.

Default: False

--return-responses

If set, return results as a List of Requests instead of a reduced DocArray.

Default: False

--protocol

Possible choices: GRPC, HTTP, WEBSOCKET

Communication protocol between server and client.

Default: GRPC

ClientGateway arguments#

--host

The host address of the runtime, by default it is 0.0.0.0.

Default: “0.0.0.0”

--proxy

If set, respect the http_proxy and https_proxy environment variables; otherwise, unset these proxy variables before starting. gRPC seems to prefer no proxy.

Default: False

--port

The port of the Gateway, which the client should connect to.

--tls

If set, connect to the gateway using TLS encryption

Default: False