The pipeline orchestrator is responsible for:
- When a StartWorkflowRequest is received, the request body is first translated into a graph model (see ToGraph).
- The resulting graph is then translated into an Argo Workflow (see the graph package). The CWL of each task and the supplied parameters are used to create Argo Templates and containers. Between Argo containers, inputs and outputs are passed through an S3 bucket defined in the StartWorkflowRequest (see the addS3Bucket function). The result is an executable Argo Workflow.
- Admiralty-specific annotations are added to this workflow if multi-cluster execution is needed (see the addAnnotation function in the run controller).
- The compute provider is called to ensure that the local cluster is ready to execute the workflow and that it is properly connected to all the other clusters involved in the execution (see compute_provider_client.go).
- Starting the workflow is handled by argo_client.go.
helm
skaffold
kubectl
You’ll need an active kubectl context to run this API with skaffold.
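As a quick sanity check, you can verify that the required tools are installed and that a kubectl context is active (these are standard helm, skaffold, and kubectl subcommands; the context name shown will depend on your cluster):

```shell
# Confirm the required tools are available
helm version
skaffold version

# Confirm kubectl has an active context pointing at your target cluster
kubectl config current-context
```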
skaffold run
The `--default-repo` option must be added to the skaffold command to specify the container registry to use.
skaffold dev
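For example, with `--default-repo` pointing at your registry (the registry address below is a placeholder, not a project value):

```shell
# One-off build and deploy, pushing images to your registry
skaffold run --default-repo=registry.example.com/my-team

# Or continuous build/deploy with file watching during development
skaffold dev --default-repo=registry.example.com/my-team
```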
From the project root, run:
go test ./... -cover
Name | Type | Default |
---|---|---|
listenAddress | string | ":9092" |
apiPrefix | string | "/api/pipeline-orchestrator" |
Name | Type | Default |
---|---|---|
MONGO_DATABASE | string | "service-provider" |
COMPUTE_PROVIDER_API | string | "http://compute-provider-service:8080" |
LOG_LEVEL | string | "INFO" |
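These variables are typically set on the deployment, but for a local run you can export them in your shell before starting the service (the values below are illustrative overrides, not project defaults):

```shell
# Example local overrides for the environment variables above
export MONGO_DATABASE=service-provider
export COMPUTE_PROVIDER_API=http://localhost:8080
export LOG_LEVEL=DEBUG
```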
URL: /api/pipeline-orchestrator/start
METHOD: POST
Path Params: None
Query Params: None
Request Body: StartWorkflowRequest
Code: 200 OK
Content: StartWorkflowResponse
Code: 500 Internal Server Error
Or
Code: 422 Unprocessable Entity
Returned when the request body cannot be parsed.
Or
Code: 400 Bad Request
Returned when the request body has been parsed but is invalid.
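Combining the default `listenAddress` and `apiPrefix` above, the endpoint can be invoked as follows. The exact fields of StartWorkflowRequest are defined in the source; the JSON file referenced here is a placeholder for a valid request body, not the real schema:

```shell
# POST a StartWorkflowRequest to the orchestrator
# (start-workflow-request.json is a placeholder file containing
# a body matching the StartWorkflowRequest schema from the source)
curl -X POST http://localhost:9092/api/pipeline-orchestrator/start \
  -H "Content-Type: application/json" \
  -d @start-workflow-request.json
```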