Command-Line Interface#
usage: jina [-h] [-v] [-vf]
{hello, executor, flow, ping ... 8 more choices} ...
Named Arguments#
- -v, --version
Show Jina version
- -vf, --version-full
Show Jina and all dependencies’ versions
subcommands#
Use jina [sub-command] --help to get detailed information about each sub-command.
To show all commands, run JINA_FULL_CLI=1 jina --help.
- cli
Possible choices: hello, executor, flow, ping, new, gateway, hub, help, pod, deployment, client, export-api
Sub-commands:#
hello#
Start hello world demos.
jina hello [-h] {fashion, chatbot, multimodal, fork} ...
subcommands#
Use jina hello [sub-command] --help to get detailed information about each sub-command.
- hello
Possible choices: fashion, chatbot, multimodal, fork
Sub-commands:#
fashion#
Run a fashion search demo
jina hello fashion [-h] [--workdir] [--download-proxy] [--index-data-url]
[--index-labels-url] [--query-data-url]
[--query-labels-url] [--num-query] [--top-k]
General arguments#
- --workdir
The workdir for the hello-world demo; all data, indices, shards and outputs will be saved there
Default: “cbe6f1b9f8874466b31234b169ade3ca”
- --download-proxy
The proxy when downloading sample data
Index arguments#
- --index-data-url
The url of index data (should be in idx3-ubyte.gz format)
Default: “http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz”
- --index-labels-url
The url of index labels data (should be in idx1-ubyte.gz format)
Default: “http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz”
Search arguments#
- --query-data-url
The url of query data (should be in idx3-ubyte.gz format)
Default: “http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz”
- --query-labels-url
The url of query labels data (should be in idx1-ubyte.gz format)
Default: “http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz”
- --num-query
The number of queries to visualize
Default: 128
- --top-k
Top-k results to retrieve and visualize
Default: 50
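For instance, the arguments above can be combined into a single invocation; a minimal sketch (assuming Jina is installed and the default data URLs are reachable, with an illustrative workdir path):

```shell
# Run the fashion demo in a named workdir, visualizing 32 queries with top-10 results
jina hello fashion \
  --workdir ./fashion-demo \
  --num-query 32 \
  --top-k 10
```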
chatbot#
Run a chatbot QA demo
jina hello chatbot [-h] [--workdir] [--download-proxy] [--index-data-url]
[--port] [--replicas]
Named Arguments#
- --index-data-url
The url of index csv data
Default: “https://static.jina.ai/chatbot/dataset.csv”
- --port
The port of the host exposed to the public
Default: 8080
- --replicas
The number of replicas used when indexing and querying
Default: 2
General arguments#
- --workdir
The workdir for the hello-world demo; all data, indices, shards and outputs will be saved there
Default: “a9ca770a7d904821aaf339ba7d216ec9”
- --download-proxy
The proxy when downloading sample data
multimodal#
Run a multimodal search demo
jina hello multimodal [-h] [--workdir] [--download-proxy] [--index-data-url]
[--port]
Named Arguments#
- --index-data-url
The url of index csv data
- --port
The port of the host exposed to the public
Default: 8080
General arguments#
- --workdir
The workdir for the hello-world demo; all data, indices, shards and outputs will be saved there
Default: “c034fc8a34e549c7bd98d95d00026e0e”
- --download-proxy
The proxy when downloading sample data
fork#
Fork a hello world project to a local directory.
jina hello fork [-h] {fashion, chatbot, multimodal} destination
Positional Arguments#
- project
Possible choices: fashion, chatbot, multimodal
The hello world project to fork
- destination
The destination directory of the forked project. Note: it cannot be an existing path.
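To modify a demo's source, fork it into a fresh directory first; for example (the destination path is illustrative and must not already exist):

```shell
# Copy the chatbot demo sources into ./my-chatbot for local editing
jina hello fork chatbot ./my-chatbot
```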
executor#
Start an Executor. An Executor is how Jina processes Documents.
jina executor [-h] [--name] [--workspace] [--log-config] [--quiet]
[--quiet-error] [--timeout-ctrl] [--polling] [--uses]
[--uses-with [KEY: VALUE [KEY: VALUE ...]]]
[--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
[--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
[--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
[--native] [--output-array-type] [--entrypoint]
[--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]] [--pull-latest]
[--volumes [DIR [DIR ...]]] [--gpus] [--host] [--port-jinad]
[--quiet-remote-logs] [--upload-files [FILE [FILE ...]]]
[--runtime-backend {THREAD, PROCESS}] [--runtime-cls]
[--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]]
[--shards] [--replicas] [--port] [--install-requirements]
[--force-update] [--uses-before-address] [--uses-after-address]
[--connection-list] [--disable-reduce]
Essential arguments#
- --name
The name of this object.
This will be used in the following places:
- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- ...
When not given, then the default naming strategy will apply.
- --workspace
The working directory for any IO operations in this object. If not set, then derive from its parent workspace.
- --log-config
The YAML config of the logger used in this object.
Default: “/home/runner/work/jina/jina/jina/resources/logging.default.yml”
- --quiet
If set, then no log will be emitted from this object.
Default: False
- --quiet-error
If set, then exception stack information will not be added to the log
Default: False
- --timeout-ctrl
The timeout in milliseconds of the control request, -1 for waiting forever
Default: 60
- --polling
The polling strategy of the Deployment and its endpoints (when shards>1). Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint: JSON dict, {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
Default: “ANY”
WorkerRuntime arguments#
- --uses
The config of the Executor. It can be one of the following:
- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config
When used in Python, the following values are additionally accepted:
- a Python dict that represents the config
- a text file stream that has a .read() interface
Default: “BaseExecutor”
- --uses-with
Dictionary of keyword arguments that will override the with configuration in uses
- --uses-metas
Dictionary of keyword arguments that will override the metas configuration in uses
- --uses-requests
Dictionary of keyword arguments that will override the requests configuration in uses
- --py-modules
The customized Python modules that need to be imported before loading the Executor.
Note that the recommended way is to import only a single module: a simple Python file if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.
- --port-in
The port for input data to bind to; defaults to a random port between [49152, 65535]
Default: 64594
- --host-in
The host address for binding to, by default it is 0.0.0.0
Default: “0.0.0.0”
- --native
If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.
Default: False
- --output-array-type
The type that the tensor and embedding arrays will be serialized to.
Supports the same types as docarray.to_protobuf(.., ndarray_type=...), documented at https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf. Defaults to retaining whatever type is returned by the Executor.
ContainerRuntime arguments#
- --entrypoint
The entrypoint command that overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.
- --docker-kwargs
Dictionary of kwargs arguments that will be passed to the Docker SDK when starting the Docker container.
More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/
- --pull-latest
Pull the latest image before running
Default: False
- --volumes
The path on the host to be mounted inside the container.
Note:
- If separated by :, the first part is treated as the local host path and the second part as the path inside the container.
- If no split is provided, the basename of that directory is mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" is mounted into /my-workspace inside the container.
- All volumes are mounted with read-write mode.
- --gpus
This argument allows a dockerized Jina Executor to discover local GPU devices.
Note:
- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID]
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display
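Putting --volumes and --gpus together, a hypothetical containerized run might look like this (the image name and paths are placeholders, not real artifacts):

```shell
# Run a containerized Executor with a host directory mounted and two GPUs visible
jina executor \
  --uses docker://my-executor-image \
  --volumes "/user/test/my-workspace:/workspace" \
  --gpus 2
```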
RemoteRuntime arguments#
- --host
The host address of the runtime, by default it is 0.0.0.0.
Default: “0.0.0.0”
- --port-jinad
The port of the remote machine for usage with JinaD.
Default: 8000
Distributed arguments#
- --quiet-remote-logs
Do not display the streaming of remote logs on local console
Default: False
- --upload-files
The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has more file dependencies beyond a single YAML file, e.g. Python files, data files.
Note:
- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will be put under the _same_ workspace on the remote, losing all hierarchies.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are by default isolated across runs. To ensure files are submitted to the same workspace across different runs, use --workspace-id to specify the workspace.
Pod arguments#
- --runtime-backend, --runtime
Possible choices: THREAD, PROCESS
The parallel backend of the runtime inside the Pod
Default: 1
- --runtime-cls
The runtime class to run inside the Pod
Default: “WorkerRuntime”
- --timeout-ready
The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever
Default: 600000
- --env
The map of environment variables that are available inside runtime
- --shards
The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies
Default: 1
- --replicas
The number of replicas in the deployment
Default: 1
- --port
The port for input data to bind to, default is a random port between [49152, 65535]
Default: 60164
Pull arguments#
- --install-requirements
If set, install the requirements.txt in the Hub Executor bundle locally
Default: False
- --force-update, --force
If set, always pull the latest Hub Executor bundle even if it exists locally
Default: False
Head arguments#
- --uses-before-address
The address of the uses-before runtime
- --uses-after-address
The address of the uses-after runtime
- --connection-list
JSON dictionary with a list of connections to configure
- --disable-reduce
Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head
Default: False
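As a minimal sketch of the command, an Executor can be started from a local YAML config with a few of the arguments above (the file name and values are illustrative):

```shell
# Start an Executor from a YAML config on a fixed port with two replicas
jina executor --uses my-executor.yml --port 51000 --replicas 2
```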
flow#
Start a Flow. Flow is how Jina streamlines and distributes Executors.
jina flow [-h] [--name] [--workspace] [--log-config] [--quiet] [--quiet-error]
[--timeout-ctrl] [--polling] [--expose-graphql-endpoint] [--uses]
[--env [KEY: VALUE [KEY: VALUE ...]]]
[--inspect {HANG, REMOVE, COLLECT}]
Essential arguments#
- --name
The name of this object.
This will be used in the following places:
- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- ...
When not given, then the default naming strategy will apply.
- --workspace
The working directory for any IO operations in this object. If not set, then derive from its parent workspace.
- --log-config
The YAML config of the logger used in this object.
Default: “/home/runner/work/jina/jina/jina/resources/logging.default.yml”
- --quiet
If set, then no log will be emitted from this object.
Default: False
- --quiet-error
If set, then exception stack information will not be added to the log
Default: False
- --timeout-ctrl
The timeout in milliseconds of the control request, -1 for waiting forever
Default: 60
- --polling
The polling strategy of the Deployment and its endpoints (when shards>1). Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint: JSON dict, {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
Default: “ANY”
GraphQL arguments#
- --expose-graphql-endpoint
If set, /graphql endpoint is added to HTTP interface.
Default: False
Flow Feature arguments#
- --uses
The YAML file that represents a Flow
- --env
The map of environment variables that are available inside runtime
- --inspect
Possible choices: HANG, REMOVE, COLLECT
The strategy applied to inspect deployments in the Flow.
If REMOVE is given, all inspect deployments are removed when building the Flow.
Default: 2
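Here --uses points at a Flow YAML rather than an Executor config; a minimal illustration (the file name and contents are assumptions):

```shell
# flow.yml is assumed to contain a Flow definition, e.g.:
#   jtype: Flow
#   executors:
#     - name: encoder
#       uses: BaseExecutor
jina flow --uses flow.yml --name my-flow
```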
ping#
Ping a Deployment and check its network connectivity.
jina ping [-h] [--timeout] [--retries] host port
Positional Arguments#
- host
The host address of the target Pod, e.g. 0.0.0.0
- port
The control port of the target deployment/pod
Named Arguments#
- --timeout
Timeout in milliseconds of one check, -1 for waiting forever
Default: 3000
- --retries
The maximum number of health checks to try before exiting with exit code 1
Default: 3
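For example, to probe a Pod listening locally with a shorter timeout and more retries:

```shell
# Exits with code 1 if the Pod on port 12345 fails 5 checks of 1s each
jina ping 0.0.0.0 12345 --timeout 1000 --retries 5
```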
new#
Create a new Jina toy project with the predefined template.
jina new [-h] name
Positional Arguments#
- name
The name of the project
Default: “hello-jina”
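For example:

```shell
# Scaffold a toy project named "my-first-jina-app" from the predefined template
jina new my-first-jina-app
```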
gateway#
Start a Gateway that receives client Requests via gRPC/REST interface
jina gateway [-h] [--name] [--workspace] [--log-config] [--quiet]
[--quiet-error] [--timeout-ctrl] [--polling] [--uses]
[--uses-with [KEY: VALUE [KEY: VALUE ...]]]
[--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
[--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
[--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
[--native] [--output-array-type] [--prefetch] [--title]
[--description] [--cors] [--default-swagger-ui]
[--no-debug-endpoints] [--no-crud-endpoints] [--expose-endpoints]
[--uvicorn-kwargs [KEY: VALUE [KEY: VALUE ...]]]
[--expose-graphql-endpoint] [--protocol {GRPC, HTTP, WEBSOCKET}]
[--host] [--proxy] [--port-expose] [--graph-description]
[--graph-conditions] [--deployments-addresses]
[--runtime-backend {THREAD, PROCESS}] [--runtime-cls]
[--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]]
[--shards] [--replicas] [--port] [--uses-before-address]
[--uses-after-address] [--connection-list] [--disable-reduce]
Named Arguments#
- --protocol
Possible choices: GRPC, HTTP, WEBSOCKET
Communication protocol between server and client.
Default: 0
- --graph-description
Routing graph for the gateway
Default: “{}”
- --graph-conditions
Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.
Default: “{}”
- --deployments-addresses
JSON dictionary with the input addresses of each Deployment
Default: “{}”
Essential arguments#
- --name
The name of this object.
This will be used in the following places:
- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- ...
When not given, then the default naming strategy will apply.
Default: “gateway”
- --workspace
The working directory for any IO operations in this object. If not set, then derive from its parent workspace.
- --log-config
The YAML config of the logger used in this object.
Default: “/home/runner/work/jina/jina/jina/resources/logging.default.yml”
- --quiet
If set, then no log will be emitted from this object.
Default: False
- --quiet-error
If set, then exception stack information will not be added to the log
Default: False
- --timeout-ctrl
The timeout in milliseconds of the control request, -1 for waiting forever
Default: 60
- --polling
The polling strategy of the Deployment and its endpoints (when shards>1). Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint: JSON dict, {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
Default: “ANY”
WorkerRuntime arguments#
- --uses
The config of the Executor. It can be one of the following:
- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config
When used in Python, the following values are additionally accepted:
- a Python dict that represents the config
- a text file stream that has a .read() interface
Default: “BaseExecutor”
- --uses-with
Dictionary of keyword arguments that will override the with configuration in uses
- --uses-metas
Dictionary of keyword arguments that will override the metas configuration in uses
- --uses-requests
Dictionary of keyword arguments that will override the requests configuration in uses
- --py-modules
The customized Python modules that need to be imported before loading the Executor.
Note that the recommended way is to import only a single module: a simple Python file if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.
- --port-in
The port for input data to bind to; defaults to a random port between [49152, 65535]
Default: 58192
- --host-in
The host address for binding to, by default it is 0.0.0.0
Default: “0.0.0.0”
- --native
If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.
Default: False
- --output-array-type
The type that the tensor and embedding arrays will be serialized to.
Supports the same types as docarray.to_protobuf(.., ndarray_type=...), documented at https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf. Defaults to retaining whatever type is returned by the Executor.
Prefetch arguments#
- --prefetch
Number of requests fetched from the client before feeding into the first Executor.
Used to control the speed of data input into a Flow. 0 disables prefetch (disabled by default)
Default: 0
HTTP Gateway arguments#
- --title
The title of this HTTP server. It will be used in automatic docs such as Swagger UI.
- --description
The description of this HTTP server. It will be used in automatic docs such as Swagger UI.
- --cors
If set, a CORS middleware is added to FastAPI frontend to allow cross-origin access.
Default: False
- --default-swagger-ui
If set, the default swagger ui is used for /docs endpoint.
Default: False
- --no-debug-endpoints
If set, /status /post endpoints are removed from HTTP interface.
Default: False
- --no-crud-endpoints
If set, /index, /search, /update, /delete endpoints are removed from HTTP interface.
Any Executor that has @requests(on=...) bound to those values will receive data requests.
Default: False
- --expose-endpoints
A JSON string that represents a map from Executor endpoints (@requests(on=...)) to HTTP endpoints.
- --uvicorn-kwargs
Dictionary of kwargs arguments that will be passed to Uvicorn server when starting the server
More details can be found in Uvicorn docs: https://www.uvicorn.org/settings/
GraphQL arguments#
- --expose-graphql-endpoint
If set, /graphql endpoint is added to HTTP interface.
Default: False
Gateway arguments#
- --host
The host address of the runtime, by default it is 0.0.0.0.
Default: “0.0.0.0”
- --proxy
If set, respect the http_proxy and https_proxy environment variables; otherwise, unset these proxy variables before starting. gRPC seems to prefer no proxy.
Default: False
- --port-expose
The port that the gateway exposes for clients for GRPC connections.
Default: 50891
Pod arguments#
- --runtime-backend, --runtime
Possible choices: THREAD, PROCESS
The parallel backend of the runtime inside the Pod
Default: 1
- --runtime-cls
The runtime class to run inside the Pod
Default: “GRPCGatewayRuntime”
- --timeout-ready
The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever
Default: 600000
- --env
The map of environment variables that are available inside runtime
- --shards
The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies
Default: 1
- --replicas
The number of replicas in the deployment
Default: 1
- --port
The port for input data to bind to, default is a random port between [49152, 65535]
Default: 51337
Head arguments#
- --uses-before-address
The address of the uses-before runtime
- --uses-after-address
The address of the uses-after runtime
- --connection-list
JSON dictionary with a list of connections to configure
- --disable-reduce
Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head
Default: False
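Tying a few of the arguments above together, a hypothetical HTTP gateway start could look like this (port and flags chosen purely for illustration):

```shell
# Serve the gateway over HTTP on port 8080 with CORS enabled
jina gateway --protocol HTTP --port-expose 8080 --cors
```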
hub#
Push/Pull an Executor to/from Jina Hub
jina hub [-h] {new, push, pull} ...
subcommands#
Use jina hub [sub-command] --help to get detailed information about each sub-command.
- hub
Possible choices: new, push, pull
Sub-commands:#
new#
Create a new executor using the template
jina hub new [-h] [--name] [--path] [--advance-configuration] [--description]
[--keywords] [--url] [--add-dockerfile]
Create Executor arguments#
- --name
the name of the Executor
- --path
the path to store the Executor
- --advance-configuration
If set, always set up advanced configuration such as description, keywords and url
Default: False
- --description
the short description of the Executor
- --keywords
some keywords to help people search your Executor (separated by comma)
- --url
the URL of your GitHub repo
- --add-dockerfile
If set, add a Dockerfile to the created Executor bundle
Default: False
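For example, to scaffold a new Executor bundle non-interactively (the name and path are placeholders):

```shell
# Create an Executor template at ./MyExecutor, including a Dockerfile
jina hub new --name MyExecutor --path ./MyExecutor --add-dockerfile
```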
push#
Push an executor package to Jina hub
jina hub push [-h] [--no-usage] [--verbose] [-f DOCKERFILE] [-t]
[--force-update] [--secret] [--public | --private]
path
Named Arguments#
- --no-usage
If set, Hub executor usage will not be printed.
Default: False
- --verbose
If set, more information will be printed.
Default: False
Push arguments#
- path
The Executor folder to be pushed to Jina Hub
- -f, --dockerfile
The file path to the Dockerfile (default is ${cwd}/Dockerfile)
- -t, --tag
A list of tags. One can use it to distinguish architecture (e.g. cpu, gpu) or versions (e.g. v1, v2).
One can later fetch a tagged Executor via jinahub[+docker]://MyExecutor/gpu
- --force-update, --force
If set, push will overwrite the Executor on the Hub that shares the same NAME or UUID8 identifier
- --secret
The secret for overwriting a Hub executor
Visibility arguments#
- --public
If set, the pushed Executor is visible to the public
- --private
If set, the pushed Executor is not visible to the public
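Combining the arguments above, a hypothetical private push with a tag (the bundle path and Executor name are placeholders):

```shell
# Push the local bundle at ./MyExecutor privately, tagged "gpu"
jina hub push --private -t gpu ./MyExecutor
```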
pull#
Download an executor image/package from Jina hub
jina hub pull [-h] [--no-usage] [--install-requirements] [--force-update] uri
Positional Arguments#
- uri
The URI of the executor to pull (e.g., jinahub[+docker]://NAME)
Named Arguments#
- --no-usage
If set, Hub executor usage will not be printed.
Default: False
Pull arguments#
- --install-requirements
If set, install the requirements.txt in the Hub Executor bundle locally
Default: False
- --force-update, --force
If set, always pull the latest Hub Executor bundle even if it exists locally
Default: False
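The URI scheme decides whether a source package or a Docker image is fetched; with an illustrative Executor name:

```shell
# Pull the source package of a Hub Executor and install its requirements.txt
jina hub pull --install-requirements jinahub://MyExecutor
# Pull the same Executor as a Docker image instead
jina hub pull jinahub+docker://MyExecutor
```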
help#
Show help text of a CLI argument
jina help [-h] query
Positional Arguments#
- query
Look up the usage & mentions of the argument name in the Jina API. The name can be fuzzy.
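For example, since the lookup is fuzzy, a partial argument name is enough:

```shell
# Show usage and mentions of every argument whose name matches "port"
jina help port
```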
pod#
Start a Pod. You should rarely use this directly unless you are doing low-level orchestration
jina pod [-h] [--name] [--workspace] [--log-config] [--quiet] [--quiet-error]
[--timeout-ctrl] [--polling] [--uses]
[--uses-with [KEY: VALUE [KEY: VALUE ...]]]
[--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
[--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
[--py-modules [PATH [PATH ...]]] [--port-in] [--host-in] [--native]
[--output-array-type] [--entrypoint]
[--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]] [--pull-latest]
[--volumes [DIR [DIR ...]]] [--gpus] [--host] [--port-jinad]
[--quiet-remote-logs] [--upload-files [FILE [FILE ...]]]
[--runtime-backend {THREAD, PROCESS}] [--runtime-cls]
[--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]] [--shards]
[--replicas] [--port] [--install-requirements] [--force-update]
[--uses-before-address] [--uses-after-address] [--connection-list]
[--disable-reduce]
Essential arguments#
- --name
The name of this object.
This will be used in the following places:
- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- ...
When not given, then the default naming strategy will apply.
- --workspace
The working directory for any IO operations in this object. If not set, then derive from its parent workspace.
- --log-config
The YAML config of the logger used in this object.
Default: “/home/runner/work/jina/jina/jina/resources/logging.default.yml”
- --quiet
If set, then no log will be emitted from this object.
Default: False
- --quiet-error
If set, then exception stack information will not be added to the log
Default: False
- --timeout-ctrl
The timeout in milliseconds of the control request, -1 for waiting forever
Default: 60
- --polling
The polling strategy of the Deployment and its endpoints (when shards>1). Can be defined for all endpoints of a Deployment or by endpoint.
Define per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Define per Endpoint: JSON dict, {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
Default: “ANY”
WorkerRuntime arguments#
- --uses
The config of the Executor. It can be one of the following:
- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config
When used in Python, the following values are additionally accepted:
- a Python dict that represents the config
- a text file stream that has a .read() interface
Default: “BaseExecutor”
- --uses-with
Dictionary of keyword arguments that will override the with configuration in uses
- --uses-metas
Dictionary of keyword arguments that will override the metas configuration in uses
- --uses-requests
Dictionary of keyword arguments that will override the requests configuration in uses
- --py-modules
The customized Python modules that need to be imported before loading the Executor.
Note that the recommended way is to import only a single module: a simple Python file if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.
- --port-in
The port for input data to bind to; defaults to a random port between [49152, 65535]
Default: 50076
- --host-in
The host address for binding to, by default it is 0.0.0.0
Default: “0.0.0.0”
- --native
If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.
Default: False
- --output-array-type
The type that the tensor and embedding arrays will be serialized to.
Supports the same types as docarray.to_protobuf(.., ndarray_type=...), documented at https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf. Defaults to retaining whatever type is returned by the Executor.
ContainerRuntime arguments#
- --entrypoint
The entrypoint command that overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.
- --docker-kwargs
Dictionary of kwargs arguments that will be passed to the Docker SDK when starting the Docker container.
More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/
- --pull-latest
Pull the latest image before running
Default: False
- --volumes
The path on the host to be mounted inside the container.
Note:
- If separated by :, the first part is treated as the local host path and the second part as the path inside the container.
- If no split is provided, the basename of that directory is mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" is mounted into /my-workspace inside the container.
- All volumes are mounted with read-write mode.
- --gpus
This argument allows a dockerized Jina Executor to discover local GPU devices.
Note:
- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID]
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display
RemoteRuntime arguments#
- --host
The host address of the runtime, by default it is 0.0.0.0.
Default: “0.0.0.0”
- --port-jinad
The port of the remote machine for usage with JinaD.
Default: 8000
Distributed arguments#
- --quiet-remote-logs
Do not display the streaming of remote logs on local console
Default: False
- --upload-files
The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has more file dependencies beyond a single YAML file, e.g. Python files, data files.
Note:
- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will be put under the _same_ workspace on the remote, losing all hierarchies.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are by default isolated across runs. To ensure files are submitted to the same workspace across different runs, use --workspace-id to specify the workspace.
Pod arguments#
- --runtime-backend, --runtime
Possible choices: THREAD, PROCESS
The parallel backend of the runtime inside the Pod
Default: 1
- --runtime-cls
The runtime class to run inside the Pod
Default: “WorkerRuntime”
- --timeout-ready
The timeout in milliseconds that a Pod waits for the runtime to be ready, -1 for waiting forever
Default: 600000
- --env
The map of environment variables that are available inside runtime
- --shards
The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies
Default: 1
- --replicas
The number of replicas in the deployment
Default: 1
- --port
The port for input data to bind to, default is a random port between [49152, 65535]
Default: 51555
Pull arguments#
- --install-requirements
If set, install the requirements.txt in the Hub Executor bundle locally
Default: False
- --force-update, --force
If set, always pull the latest Hub Executor bundle even if it exists locally
Default: False
Head arguments#
- --uses-before-address
The address of the uses-before runtime
- --uses-after-address
The address of the uses-after runtime
- --connection-list
JSON dictionary with a list of connections to configure
- --disable-reduce
Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head
Default: False
deployment#
Start a Deployment. You should rarely use this directly unless you are doing low-level orchestration
jina deployment [-h] [--name] [--workspace] [--log-config] [--quiet]
[--quiet-error] [--timeout-ctrl] [--polling] [--uses]
[--uses-with [KEY: VALUE [KEY: VALUE ...]]]
[--uses-metas [KEY: VALUE [KEY: VALUE ...]]]
[--uses-requests [KEY: VALUE [KEY: VALUE ...]]]
[--py-modules [PATH [PATH ...]]] [--port-in] [--host-in]
[--native] [--output-array-type] [--entrypoint]
[--docker-kwargs [KEY: VALUE [KEY: VALUE ...]]]
[--pull-latest] [--volumes [DIR [DIR ...]]] [--gpus] [--host]
[--port-jinad] [--quiet-remote-logs]
[--upload-files [FILE [FILE ...]]]
[--runtime-backend {THREAD, PROCESS}] [--runtime-cls]
[--timeout-ready] [--env [KEY: VALUE [KEY: VALUE ...]]]
[--shards] [--replicas] [--port] [--install-requirements]
[--force-update] [--uses-before-address]
[--uses-after-address] [--connection-list] [--disable-reduce]
[--uses-before] [--uses-after]
[--input-condition [KEY: VALUE [KEY: VALUE ...]]] [--external]
Essential arguments#
- --name
The name of this object.
This will be used in the following places:
- how you refer to this object in Python/YAML/CLI
- visualization
- log message header
- …
When not given, the default naming strategy applies.
- --workspace
The working directory for any IO operations in this object. If not set, then derive from its parent workspace.
- --log-config
The YAML config of the logger used in this object.
Default: “/home/runner/work/jina/jina/jina/resources/logging.default.yml”
- --quiet
If set, then no log will be emitted from this object.
Default: False
- --quiet-error
If set, then exception stack information will not be added to the log
Default: False
- --timeout-ctrl
The timeout in milliseconds of the control request; -1 means wait forever
Default: 60
- --polling
The polling strategy of the Deployment and its endpoints (when shards>1). Can be defined for all endpoints of a Deployment or per endpoint.
Defined per Deployment:
- ANY: only one (whoever is idle) Pod polls the message
- ALL: all Pods poll the message (like a broadcast)
Defined per endpoint: a JSON dict of {endpoint: PollingType}, e.g. {'/custom': 'ALL', '/search': 'ANY', '*': 'ANY'}
Default: “ANY”
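The per-endpoint form of --polling takes a JSON dict on the command line. A minimal sketch of assembling such an invocation from Python (the endpoint paths here are illustrative, not required by Jina):

```python
import json

# Per-endpoint polling map as described above; '/custom' and '/search'
# are example endpoints, '*' is the fallback for all other endpoints.
polling = {"/custom": "ALL", "/search": "ANY", "*": "ANY"}

# The CLI expects the JSON literal as the argument value.
polling_arg = json.dumps(polling)
cmd = ["jina", "deployment", "--polling", polling_arg]
```

Passing the dict through `json.dumps` avoids quoting mistakes that are easy to make when typing the JSON by hand in a shell.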
WorkerRuntime arguments#
- --uses
The config of the executor; it can be one of the following:
- an Executor YAML file (.yml, .yaml, .jaml)
- a Jina Hub Executor (must start with jinahub:// or jinahub+docker://)
- a Docker image (must start with docker://)
- the string literal of a YAML config (must start with ! or `jtype: `)
- the string literal of a JSON config
When used in Python, the following values are also accepted:
- a Python dict that represents the config
- a text file stream that has a .read() interface
Default: “BaseExecutor”
- --uses-with
Dictionary of keyword arguments that will override the with configuration in uses
- --uses-metas
Dictionary of keyword arguments that will override the metas configuration in uses
- --uses-requests
Dictionary of keyword arguments that will override the requests configuration in uses
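The three --uses-* override flags all take KEY: VALUE pairs. A hedged sketch of a deployment invocation using them; the YAML filename and the keys `param1` and `name` are hypothetical placeholders, not part of Jina:

```python
# Override the `with` and `metas` sections of the Executor config
# referenced by --uses. Each KEY: VALUE pair is one argument.
cmd = [
    "jina", "deployment",
    "--uses", "my_executor.yml",      # hypothetical Executor YAML
    "--uses-with", "param1: 42",      # overrides `with.param1`
    "--uses-metas", "name: my-exec",  # overrides `metas.name`
]
```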
- --py-modules
The customized Python modules that need to be imported before loading the Executor.
Note that the recommended way is to import only a single module: a simple Python file if your Executor can be defined in a single file, or an __init__.py file if you have multiple files, which should be structured as a Python package. For more details, please see the Executor cookbook.
- --port-in
The port for input data to bind to, default is a random port between [49152, 65535]
Default: 58512
- --host-in
The host address for binding to, by default it is 0.0.0.0
Default: “0.0.0.0”
- --native
If set, only native Executors are allowed, and the Executor is always run inside WorkerRuntime.
Default: False
- --output-array-type
The type of array that tensor and embedding will be serialized to.
Supports the same types as docarray.to_protobuf(.., ndarray_type=…), which can be found at https://docarray.jina.ai/fundamentals/document/serialization/#from-to-protobuf. Defaults to retaining whatever type is returned by the Executor.
ContainerRuntime arguments#
- --entrypoint
The entrypoint command overrides the ENTRYPOINT in the Docker image. When not set, the Docker image's ENTRYPOINT takes effect.
- --docker-kwargs
Dictionary of kwargs that will be passed to the Docker SDK when starting the Docker container.
More details can be found in the Docker SDK docs: https://docker-py.readthedocs.io/en/stable/
- --pull-latest
Pull the latest image before running
Default: False
- --volumes
The path on the host to be mounted inside the container.
Note:
- If separated by :, the first part is the local host path and the second part is the path inside the container.
- If no : is provided, the basename of that directory will be mounted into the container's root path, e.g. --volumes="/user/test/my-workspace" will be mounted into /my-workspace inside the container.
- All volumes are mounted in read-write mode.
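The mapping rule for --volumes values can be sketched as follows. This helper only illustrates the documented behavior; it is not Jina's actual implementation:

```python
import os

def volume_mapping(volume: str):
    """Illustrates the --volumes mapping rule described above:
    'host:container' splits into the two paths; a bare path is
    mounted at /<basename> inside the container."""
    if ":" in volume:
        host, container = volume.split(":", 1)
        return host, container
    return volume, "/" + os.path.basename(volume.rstrip("/"))

# A bare directory lands under the container root:
host, container = volume_mapping("/user/test/my-workspace")
```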
- --gpus
This argument allows a dockerized Jina Executor to discover local GPU devices.
Note:
- To access all GPUs, use --gpus all.
- To access multiple GPUs, e.g. 2 GPUs, use --gpus 2.
- To access a specific GPU by device id, use --gpus device=[YOUR-GPU-DEVICE-ID]
- To access multiple specific GPUs by device id, use --gpus device=[YOUR-GPU-DEVICE-ID1],device=[YOUR-GPU-DEVICE-ID2]
- To specify more parameters, use --gpus device=[YOUR-GPU-DEVICE-ID],runtime=nvidia,capabilities=display
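When selecting several GPUs by device id, the --gpus value is a comma-joined list of device= entries. A small sketch of building that string (the device ids are placeholders, and this helper is illustrative, not part of Jina):

```python
def gpus_arg(device_ids):
    """Builds a --gpus value for specific device ids, following the
    device=...,device=... syntax described above."""
    return ",".join(f"device={d}" for d in device_ids)

cmd = ["jina", "deployment", "--gpus", gpus_arg(["GPU-0", "GPU-1"])]
```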
RemoteRuntime arguments#
- --host
The host address of the runtime, by default it is 0.0.0.0.
Default: “0.0.0.0”
- --port-jinad
The port of the remote machine for usage with JinaD.
Default: 8000
Distributed arguments#
- --quiet-remote-logs
Do not display the streaming of remote logs on local console
Default: False
- --upload-files
The files on the host to be uploaded to the remote workspace. This can be useful when your Deployment has more file dependencies beyond a single YAML file, e.g. Python files, data files.
Note:
- Currently only a flat structure is supported: if you upload [./foo/a.py, ./foo/b.py, ./bar/c.yml], they will all be put under the _same_ workspace on the remote, losing the directory hierarchy.
- By default, the --uses YAML file is always uploaded.
- Uploaded files are isolated across runs by default. To ensure files are submitted to the same workspace across different runs, use --workspace-id to specify the workspace.
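The flat-structure rule means every uploaded file is keyed only by its basename in the remote workspace, as this small illustration shows (not Jina's actual upload code; file names are the example ones from above):

```python
import os

# All uploaded files land in the same remote workspace directory,
# so only the basename survives; ./foo/ and ./bar/ are lost.
uploads = ["./foo/a.py", "./foo/b.py", "./bar/c.yml"]
remote_names = [os.path.basename(p) for p in uploads]
```

A consequence worth noting: two files with the same basename in different local directories would collide in the remote workspace.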
Pod arguments#
- --runtime-backend, --runtime
Possible choices: THREAD, PROCESS
The parallel backend of the runtime inside the Pod
Default: 1
- --runtime-cls
The runtime class to run inside the Pod
Default: “WorkerRuntime”
- --timeout-ready
The timeout in milliseconds for a Pod to wait for the runtime to be ready; -1 means wait forever
Default: 600000
- --env
The map of environment variables that are available inside runtime
- --shards
The number of shards in the deployment running at the same time. For more details check https://docs.jina.ai/fundamentals/flow/create-flow/#complex-flow-topologies
Default: 1
- --replicas
The number of replicas in the deployment
Default: 1
- --port
The port for input data to bind to, default is a random port between [49152, 65535]
Default: 63510
Pull arguments#
- --install-requirements
If set, install the requirements.txt included in the Hub Executor bundle locally
Default: False
- --force-update, --force
If set, always pull the latest Hub Executor bundle even if it exists locally
Default: False
Head arguments#
- --uses-before-address
The address of the uses-before runtime
- --uses-after-address
The address of the uses-after runtime
- --connection-list
A JSON dictionary with a list of connections to configure
- --disable-reduce
Disable the built-in reduce mechanism, set this if the reduction is to be handled by the Executor connected to this Head
Default: False
Deployment arguments#
- --uses-before
The executor attached before the Pods described by --uses, typically used before sending to all shards; accepted type follows --uses
- --uses-after
The executor attached after the Pods described by --uses, typically used for receiving from all shards; accepted type follows --uses
- --input-condition
The condition that the documents need to fulfill before reaching the Executor. The condition can be defined in the form of a DocArray query condition (https://docarray.jina.ai/fundamentals/documentarray/find/#query-by-conditions)
- --external
The Deployment will be considered an external Deployment that has been started independently from the Flow. This Deployment will not be context-managed by the Flow.
Default: False
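Putting several of the flags above together, a sketch of a sharded Deployment with pre- and post-processing Executors; every Executor reference here is a placeholder, not a real image or file:

```python
# A sharded, replicated Deployment with uses-before/uses-after hooks.
cmd = [
    "jina", "deployment",
    "--uses", "docker://my-encoder",        # placeholder Docker image
    "--shards", "3",
    "--replicas", "2",
    "--polling", "ANY",
    "--uses-before", "my_preprocessor.yml", # placeholder YAML
    "--uses-after", "my_merger.yml",        # placeholder YAML
]
```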
client#
Start a Python client that connects to a remote Jina gateway
jina client [-h] [--host] [--proxy] [--port] [--https] [--asyncio]
[--return-responses] [--protocol {GRPC, HTTP, WEBSOCKET}]
Named Arguments#
- --asyncio
If set, then the input and output of this Client work in an asynchronous manner.
Default: False
- --return-responses
If set, return results as List of Requests instead of a reduced DocArray.
Default: False
- --protocol
Possible choices: GRPC, HTTP, WEBSOCKET
Communication protocol between server and client.
Default: 0
ClientGateway arguments#
- --host
The host address of the runtime, by default it is 0.0.0.0.
Default: “0.0.0.0”
- --proxy
If set, respect the http_proxy and https_proxy environment variables; otherwise, unset these proxy variables before starting. gRPC seems to prefer no proxy.
Default: False
- --port
The port of the Gateway, which the client should connect to.
Default: 59863
- --https
If set, connect to gateway using https
Default: False
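A sketch of assembling a client invocation against a remote gateway over HTTP; the host and port are placeholders for your own gateway:

```python
# Connect an async client to a remote gateway over HTTP.
cmd = [
    "jina", "client",
    "--host", "my-gateway.example.com",  # placeholder host
    "--port", "12345",                   # placeholder port
    "--protocol", "HTTP",
    "--asyncio",
]
```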
export-api#
Export Jina API to JSON/YAML file for 3rd party applications
jina export-api [-h] [--yaml-path [PATH [PATH ...]]]
[--json-path [PATH [PATH ...]]]
[--schema-path [PATH [PATH ...]]]
Named Arguments#
- --yaml-path
The YAML file path for storing the exported API
- --json-path
The JSON file path for storing the exported API
- --schema-path
The JSONSchema file path for storing the exported API