jina.serve.bff module#
- class jina.serve.bff.GatewayBFF(graph_representation, executor_addresses, graph_conditions={}, deployments_disable_reduce=[], timeout_send=None, retries=0, compression=None, runtime_name='gateway_bff', prefetch=0, logger=None, metrics_registry=None)[source]#
Bases: object
Wrapper object to be used in a BFF (backend-for-frontend) or in the Gateway. Naming to be defined.
- Parameters:
  - graph_representation (Dict) – A dictionary describing the topology of the Deployments. Two special nodes are expected, named start-gateway and end-gateway, which mark the nodes that receive the very first request and those whose responses are sent back to the client. All nodes with no outgoing edges are considered floating and are flagged so that the user can ignore their tasks and not await them.
  - executor_addresses (Dict[str, Union[str, List[str]]]) – JSON dictionary with the input addresses of each Deployment. Each Executor can have a single address or a list of addresses.
  - graph_conditions (Dict) – Dictionary stating which filtering conditions each Executor in the graph requires to receive Documents.
  - deployments_disable_reduce (List[str]) – List of Executors for which the built-in merging mechanism is disabled.
  - timeout_send (Optional[float]) – Timeout applied when sending requests to Executors.
  - retries (int) – Number of retries when sending requests to Executors.
  - compression (Optional[str]) – The compression mechanism used when sending requests from the Head to the WorkerRuntimes. For more details, check https://grpc.github.io/grpc/python/grpc.html#compression.
  - runtime_name (str) – Name to be used for monitoring.
  - prefetch (int) – How many requests from the Client are processed at the same time.
  - logger (Optional[JinaLogger]) – Optional logger that can be used for logging.
  - metrics_registry (Optional[CollectorRegistry]) – Optional Prometheus metrics registry, used if metrics need to be exposed.
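A minimal sketch of the two topology arguments, assuming two hypothetical Deployments named `encoder` and `indexer` behind illustrative local ports (the names and ports are not taken from the docs above):

```python
# Hypothetical topology: Client -> encoder -> indexer -> Client.
# The special nodes 'start-gateway' and 'end-gateway' mark entry and exit.
graph_representation = {
    'start-gateway': ['encoder'],   # receives the very first request
    'encoder': ['indexer'],
    'indexer': ['end-gateway'],     # its response goes back to the client
}

# Each Deployment maps to one address or a list of addresses.
executor_addresses = {
    'encoder': '0.0.0.0:12345',
    'indexer': ['0.0.0.0:12346', '0.0.0.0:12347'],
}

# gateway_bff = GatewayBFF(
#     graph_representation=graph_representation,
#     executor_addresses=executor_addresses,
#     retries=3,
#     timeout_send=5.0,
# )
```

Any Deployment key absent from the outgoing lists of other nodes (other than via start-gateway) would be treated as floating, per the graph_representation description above.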
- stream(*args, **kwargs)[source]#
Stream requests from the client iterator and stream responses back.
- Parameters:
args – positional arguments to be passed to inner RequestStreamer
kwargs – keyword arguments to be passed to inner RequestStreamer
- Returns:
An iterator over the responses from the Executors
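A hedged sketch of how stream() might be wired into an async request handler; the handler name and `request_iterator` parameter are assumptions for illustration, not part of this API:

```python
# Hypothetical gRPC-style handler: forward the client's request iterator
# to stream() and yield each response back to the client as it arrives.
async def handle_requests(gateway_bff, request_iterator):
    async for response in gateway_bff.stream(request_iterator):
        yield response
```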
- async stream_docs(docs, request_size, return_results=False, exec_endpoint=None, target_executor=None, parameters=None)[source]#
Stream documents and stream responses back.
- Parameters:
  - docs (DocumentArray) – The Documents to be sent to all the Executors.
  - request_size (int) – The number of Documents to put inside a single request.
  - return_results (bool) – If set to True, the generator yields Responses instead of DocumentArrays.
  - exec_endpoint (Optional[str]) – The Executor endpoint to which to send the Documents.
  - target_executor (Optional[str]) – A regex expression indicating the Executors that should receive the Request.
  - parameters (Optional[Dict]) – Parameters to be attached to the Requests.
- Yield:
Yields DocumentArrays or Responses from the Executors
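A sketch of consuming stream_docs() from a coroutine, assuming `gateway_bff` is an already-constructed GatewayBFF and `docs` a DocumentArray; the helper name and the `/index` endpoint are hypothetical:

```python
# Hypothetical helper: send `docs` through the graph in batches and
# collect whatever the generator yields (DocumentArrays by default).
async def index_docs(gateway_bff, docs):
    results = []
    async for resp in gateway_bff.stream_docs(
        docs=docs,
        request_size=10,          # up to 10 Documents per request
        exec_endpoint='/index',   # hypothetical endpoint name
        return_results=False,     # yield DocumentArrays, not Responses
    ):
        results.append(resp)
    return results
```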
- async close()[source]#
Gracefully closes the object, making sure all floating requests are taken care of and all connections are shut down cleanly.
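Because close() waits for floating requests, a natural pattern is to call it in a finally block. A sketch, with `gateway_bff` and `docs` assumed available as above:

```python
# Hypothetical wrapper: always close the gateway, even if streaming fails.
async def run_then_close(gateway_bff, docs):
    try:
        async for _ in gateway_bff.stream_docs(docs=docs, request_size=1):
            pass
    finally:
        # ensures floating requests complete and connections are released
        await gateway_bff.close()
```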
- Call(*args, **kwargs)#
Stream requests from the client iterator and stream responses back.
- Parameters:
args – positional arguments to be passed to inner RequestStreamer
kwargs – keyword arguments to be passed to inner RequestStreamer
- Returns:
An iterator over the responses from the Executors