Pipelines
- Overview
- Configuring a Pipeline
Overview
A Pipeline is a collection of Tasks that you define and arrange in a specific order
of execution as part of your continuous integration flow. Each Task in a Pipeline
executes as a Pod on your Kubernetes cluster. You can configure various execution
conditions to fit your business needs.
Configuring a Pipeline
A Pipeline definition supports the following fields:
- Required:
  - apiVersion - Specifies the API version, for example tekton.dev/v1beta1.
  - kind - Identifies this resource object as a Pipeline object.
  - metadata - Specifies metadata that uniquely identifies the Pipeline object. For example, a name.
  - spec - Specifies the configuration information for this Pipeline object. This must include:
    - tasks - Specifies the Tasks that comprise the Pipeline and the details of their execution.
- Optional:
  - resources - alpha only Specifies PipelineResources needed or created by the Tasks comprising the Pipeline.
  - tasks:
    - resources.inputs / resources.outputs
      - from - Indicates the data for a PipelineResource originates from the output of a previous Task.
    - runAfter - Indicates that a Task should execute after one or more other Tasks without output linking.
    - retries - Specifies the number of times to retry the execution of a Task after a failure. Does not apply to execution cancellations.
    - conditions - Specifies Conditions that only allow a Task to execute if they successfully evaluate.
    - timeout - Specifies the timeout before a Task fails.
  - results - Specifies the location to which the Pipeline emits its execution results.
  - description - Holds an informative description of the Pipeline object.
  - finally - Specifies one or more Tasks to be executed in parallel after all other tasks have completed.
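Putting the required fields together, a minimal Pipeline might look like the following sketch (the say-hello Task reference is hypothetical and would need to exist in your cluster):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: minimal-pipeline
spec:
  tasks:
    - name: say-hello        # hypothetical Task reference, for illustration only
      taskRef:
        name: say-hello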
Specifying Resources
A Pipeline requires PipelineResources to provide inputs and store outputs
for the Tasks that comprise it. You can declare those in the resources field in the spec
section of the Pipeline definition. Each entry requires a unique name and a type. For example:
spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
Specifying Workspaces
Workspaces allow you to specify one or more volumes that each Task in the Pipeline
requires during execution. You specify one or more Workspaces in the workspaces field.
For example:
spec:
  workspaces:
    - name: pipeline-ws1 # The name of the workspace in the Pipeline
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: pipeline-ws1
    - name: use-ws-again
      taskRef:
        name: commit # commit expects a workspace with name "src"
      runAfter:
        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
      workspaces:
        - name: src
          workspace: pipeline-ws1
For more information, see:
- Using Workspaces in Pipelines
- The Workspaces in a PipelineRun code example
- The variables available in a PipelineRun, including workspaces.<name>.bound.
Specifying Parameters
You can specify global parameters, such as compilation flags or artifact names, that you want to supply
to the Pipeline at execution time. Parameters are passed to the Pipeline from its corresponding
PipelineRun and can replace template values specified within each Task in the Pipeline.
Parameter names:
- Must only contain alphanumeric characters, hyphens (-), and underscores (_).
- Must begin with a letter or an underscore (_).
For example, fooIs-Bar_ is a valid parameter name, but barIsBa$ or 0banana are not.
Each declared parameter has a type field, which can be set to either array or string.
array is useful in cases where the number of compilation flags being supplied to the Pipeline
varies throughout its execution. If no value is specified, the type field defaults to string.
When the actual parameter value is supplied, its parsed type is validated against the type field.
The description and default fields for a Parameter are optional.
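For instance, a parameter declared with type: array might look like the following sketch (the build-flags name is hypothetical):
params:
  - name: build-flags
    type: array                  # the number of flags supplied can vary per run
    description: Extra flags passed to the build
    default: []                  # used if the PipelineRun supplies no value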
The following example illustrates the use of Parameters in a Pipeline.
The following Pipeline declares an input parameter called context and passes its
value to the Task to set the value of the pathToContext parameter within the Task.
If you specify a value for the default field and invoke this Pipeline in a PipelineRun
without specifying a value for context, that value will be used.
Note: Input parameter values can be used as variables throughout the Pipeline
by using variable substitution.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to context
      default: /some/where/or/other
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "$(params.context)"
The following PipelineRun supplies a value for context:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
Adding Tasks to the Pipeline
Your Pipeline definition must reference at least one Task.
Each Task within a Pipeline must have a valid
name and a taskRef. For example:
tasks:
  - name: build-the-image
    taskRef:
      name: build-push
You can use PipelineResources as inputs and outputs for Tasks
in the Pipeline. For example:
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image
You can also provide Parameters:
spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web
Tekton Bundles
Note: This is only allowed if enable-tekton-oci-bundles is set to
"true" in the feature-flags configmap, see install.md
You may also specify your Task reference using a Tekton Bundle. A Tekton Bundle is an OCI artifact that
contains Tekton resources like Tasks which can be referenced within a taskRef.
spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.com/myrepo/mycatalog
Here, the bundle field is the full reference URL of the artifact. The name is the metadata.name field of the Task.
You may also specify a tag as you would with a Docker image which will give you a fixed,
repeatable reference to a Task.
spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.com/myrepo/mycatalog:v1.0.1
You may also specify a fixed digest instead of a tag.
spec:
  tasks:
    - name: hello-world
      taskRef:
        name: echo-task
        bundle: docker.io/myrepo/mycatalog@sha256:abc123
Any of the above options will fetch the image using the ImagePullSecrets attached to the
ServiceAccount specified in the PipelineRun. See the Service Account section for details on how to configure a ServiceAccount
on a PipelineRun. The PipelineRun will then run that Task without registering it in
the cluster, allowing multiple versions of the same named Task to be run at once.
Tekton Bundles may be constructed with any toolsets that produce valid OCI image artifacts
so long as the artifact adheres to the contract.
Using the from parameter
If a Task in your Pipeline needs to use the output of a previous Task
as its input, use the optional from parameter to specify a list of Tasks
that must execute before the Task that requires their outputs as its
input. When your target Task executes, only the version of the desired
PipelineResource produced by the last Task in this list is used. The
name of this output PipelineResource must match the name of the
input PipelineResource specified in the Task that ingests it.
In the example below, the deploy-app Task ingests the output of the build-app
Task named my-image as its input. Therefore, the build-app Task will
execute before the deploy-app Task regardless of the order in which those
Tasks are declared in the Pipeline.
- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: image
        resource: my-image
        from:
          - build-app
Using the runAfter parameter
If you need your Tasks to execute in a specific order within the Pipeline
but they don’t have resource dependencies that require the from parameter,
use the runAfter parameter to indicate that a Task must execute after
one or more other Tasks.
In the example below, we want to test the code before we build it. Since there
is no output from the test-app Task, the build-app Task uses runAfter
to indicate that test-app must run before it, regardless of the order in which
they are referenced in the Pipeline definition.
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
Using the retries parameter
For each Task in the Pipeline, you can specify the number of times Tekton
should retry its execution when it fails. When a Task fails, the corresponding
TaskRun sets its Succeeded Condition to False. The retries parameter
instructs Tekton to retry executing the Task when this happens.
If you expect a Task to encounter problems during execution (for example,
you know that there will be issues with network connectivity or missing
dependencies), set its retries parameter to a suitable value greater than 0.
If you don’t explicitly specify a value, Tekton does not attempt to execute
the failed Task again.
In the example below, the execution of the build-the-image Task will be
retried once after a failure; if the retried execution fails, too, the Task
execution fails as a whole.
tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push
Guard Task execution using WhenExpressions
To run a Task only when certain conditions are met, it is possible to guard task execution using the when field. The when field allows you to list a series of references to WhenExpressions.
The components of WhenExpressions are Input, Operator and Values:
- Input is the input for the WhenExpression, which can be static inputs or variables (Parameters or Results). If the Input is not provided, it defaults to an empty string.
- Operator represents an Input's relationship to a set of Values. A valid Operator must be provided, which can be either in or notin.
- Values is an array of string values. The Values array must be provided and be non-empty. It can contain static values or variables (Parameters, Results or a Workspace's bound state).
The Parameters are read from the Pipeline and Results are read directly from previous Tasks. Using Results in a WhenExpression in a guarded Task introduces a resource dependency on the previous Task that produced the Result.
The declared WhenExpressions are evaluated before the Task is run. If all the WhenExpressions evaluate to True, the Task is run. If any of the WhenExpressions evaluate to False, the Task is not run and the Task is listed in the Skipped Tasks section of the PipelineRunStatus.
In the examples below, the first-create-file task is executed only if the path parameter is README.md, the echo-file-exists task is executed only if the exists result from the check-file task is yes, and the run-lint task is executed only if the lint-config optional workspace has been provided by a PipelineRun.
tasks:
  - name: first-create-file
    when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
    taskRef:
      name: first-create-file
---
tasks:
  - name: echo-file-exists
    when:
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
    taskRef:
      name: echo-file-exists
---
tasks:
  - name: run-lint
    when:
      - input: "$(workspaces.lint-config.bound)"
        operator: in
        values: ["true"]
    taskRef:
      name: lint-source
For an end-to-end example, see PipelineRun with WhenExpressions.
When WhenExpressions are specified in a Task, Conditions should not be specified in the same Task. The Pipeline will be rejected as invalid if both WhenExpressions and Conditions are included.
There are a lot of scenarios where WhenExpressions can be really useful. Some of these are:
- Checking if the name of a git branch matches
- Checking if the Result of a previous Task is as expected
- Checking if a git file has changed in the previous commits
- Checking if an image exists in the registry
- Checking if the name of a CI job matches
- Checking if an optional Workspace has been provided
Guard Task execution using Conditions
Note: Conditions are deprecated, use WhenExpressions instead.
To run a Task only when certain conditions are met, it is possible to guard task execution using
the conditions field. The conditions field allows you to list a series of references to
Condition resources. The declared Conditions are run before the Task is run.
If all of the conditions successfully evaluate, the Task is run. If any of the conditions fails,
the Task is not run and the TaskRun status field ConditionSucceeded is set to False with the
reason set to ConditionCheckFailed.
In this example, is-master-branch refers to a Condition resource. The deploy
task will only be executed if the condition successfully evaluates.
tasks:
  - name: deploy-if-branch-is-master
    conditions:
      - conditionRef: is-master-branch
        params:
          - name: branch-name
            value: my-value
    taskRef:
      name: deploy
Unlike regular task failures, condition failures do not automatically fail the entire PipelineRun –
other tasks that are not dependent on the Task (via from or runAfter) are still run.
In this example, (task C) has a condition set to guard its execution. If the condition
is not successfully evaluated, task (task D) will not be run, but all other tasks in the pipeline
that do not depend on (task C) will be executed and the PipelineRun will successfully complete.
         (task B) — (task E)
        /
(task A)
        \
         (guarded task C) — (task D)
Resources in conditions can also use the from field to indicate that they
expect the output of a previous task as input. As with regular Pipeline Tasks, using from
implies ordering – if a task has a condition that takes in an output resource from
another task, the task producing the output resource will run first:
tasks:
  - name: first-create-file
    taskRef:
      name: create-file
    resources:
      outputs:
        - name: workspace
          resource: source-repo
  - name: then-check
    conditions:
      - conditionRef: "file-exists"
        resources:
          - name: workspace
            resource: source-repo
            from: [first-create-file]
    taskRef:
      name: echo-hello
Configuring the failure timeout
You can use the Timeout field in the Task spec within the Pipeline to set the timeout
of the TaskRun that executes that Task within the PipelineRun that executes your Pipeline.
The Timeout value is a duration conforming to Go’s ParseDuration
format. For example, valid values are 1h30m, 1h, 1m, and 60s.
Note: If you do not specify a Timeout value, Tekton instead honors the timeout for the PipelineRun.
In the example below, the build-the-image Task is configured to time out after 90 seconds:
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"
Using variable substitution
Tekton provides variables to inject values into the contents of certain fields. The values you can inject come from a range of sources including other fields in the Pipeline, context-sensitive information that Tekton provides, and runtime information received from a PipelineRun.
The mechanism of variable substitution is quite simple - string replacement is performed by the Tekton Controller when a PipelineRun is executed.
See the complete list of variable substitutions for Pipelines and the list of fields that accept substitutions.
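As an illustrative sketch (the parameter, Task, and registry names are hypothetical), a value supplied by the PipelineRun and context information provided by Tekton can both be injected into a Task's parameters:
spec:
  params:
    - name: image-tag
      type: string
  tasks:
    - name: build
      taskRef:
        name: build-push                                          # hypothetical Task
      params:
        - name: image
          value: "registry.example.com/app:$(params.image-tag)"   # substituted from the PipelineRun
        - name: run-name
          value: "$(context.pipelineRun.name)"                    # context-sensitive value provided by Tekton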
Using Results
Tasks can emit Results when they execute. A Pipeline can use these
Results for two different purposes:
- A Pipeline can pass the Result of a Task into the Parameters or WhenExpressions of another.
- A Pipeline can itself emit Results and include data from the Results of its Tasks.
Passing one Task’s Results into the Parameters or WhenExpressions of another
Sharing Results between Tasks in a Pipeline happens via
variable substitution - one Task emits
a Result and another receives it as a Parameter with a variable such as
$(tasks.<task-name>.results.<result-name>).
When one Task receives the Results of another, there is a dependency created between those
two Tasks. In order for the receiving Task to get data from another Task's Result,
the Task producing the Result must run first. Tekton enforces this Task ordering
by ensuring that the Task emitting the Result executes before any Task that uses it.
In the snippet below, a param is provided its value from the commit Result emitted by the
checkout-source Task. Tekton will make sure that the checkout-source Task runs
before this one.
params:
  - name: foo
    value: "$(tasks.checkout-source.results.commit)"
Note: If checkout-source exits successfully without initializing the commit Result,
the receiving Task fails and causes the Pipeline to fail with InvalidTaskResultReference:
unable to find result referenced by param 'foo' in 'task';: Could not find result with name 'commit' for task run 'checkout-source'
In the snippet below, a WhenExpression is provided its value from the exists Result emitted by the
check-file Task. Tekton will make sure that the check-file Task runs before this one.
when:
  - input: "$(tasks.check-file.results.exists)"
    operator: in
    values: ["yes"]
For an end-to-end example, see Task Results in a PipelineRun.
Emitting Results from a Pipeline
A Pipeline can emit Results of its own for a variety of reasons - an external
system may need to read them when the Pipeline is complete, they might summarise
the most important Results from the Pipeline's Tasks, or they might simply
be used to expose non-critical messages generated during the execution of the Pipeline.
A Pipeline's Results can be composed of one or many Task Results emitted during
the course of the Pipeline's execution. A Pipeline Result can refer to its Tasks'
Results using a variable of the form $(tasks.<task-name>.results.<result-name>).
After a Pipeline has executed the PipelineRun will be populated with the Results
emitted by the Pipeline. These will be written to the PipelineRun's
status.pipelineResults field.
In the example below, the Pipeline specifies a results entry with the name sum that
references the outputValue Result emitted by the calculate-sum Task.
results:
  - name: sum
    description: the sum of all three operands
    value: $(tasks.calculate-sum.results.outputValue)
For an end-to-end example, see Results in a PipelineRun.
A Pipeline Result is not emitted if any of the following are true:
- A PipelineTask referenced by the Pipeline Result failed. The PipelineRun will also have failed.
- A PipelineTask referenced by the Pipeline Result was skipped.
- A PipelineTask referenced by the Pipeline Result didn't emit the referenced Task Result. This should be considered a bug in the Task and may fail a PipelineTask in future.
- The Pipeline Result uses a variable that doesn't point to an actual PipelineTask. This will result in an InvalidTaskResultReference validation error during PipelineRun execution.
- The Pipeline Result uses a variable that doesn't point to an actual result in a PipelineTask. This will cause an InvalidTaskResultReference validation error during PipelineRun execution.
Note: Since a Pipeline Result can contain references to multiple Task Results, if any of those
Task Result references are invalid the entire Pipeline Result is not emitted.
Configuring the Task execution order
You can connect Tasks in a Pipeline so that they execute in a Directed Acyclic Graph (DAG).
Each Task in the Pipeline becomes a node on the graph that can be connected with an edge
so that one will run before another and the execution of the Pipeline progresses to completion
without getting stuck in an infinite loop.
This is done using:
- from clauses on the PipelineResources used by each Task
- runAfter clauses on the corresponding Tasks
- By linking the results of one Task to the params of another
For example, the Pipeline defined as follows
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        resource: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        resource: my-frontend-image
        from:
          - build-frontend
executes according to the following graph:
        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
    \        /
     v      v
    deploy-all
In particular:
- The lint-repo and test-app Tasks have no from or runAfter clauses and start executing simultaneously.
- Once test-app completes, both build-app and build-frontend start executing simultaneously since they both runAfter the test-app Task.
- The deploy-all Task executes once both build-app and build-frontend complete, since it ingests PipelineResources from both.
- The entire Pipeline completes execution once both lint-repo and deploy-all complete execution.
Adding a description
The description field is optional and can be used to provide a description of the Pipeline.
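For example, a minimal sketch (the build-push Task reference is hypothetical):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-description
spec:
  description: |
    Builds and deploys the example application.
  tasks:
    - name: build
      taskRef:
        name: build-push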
Adding Finally to the Pipeline
You can specify a list of one or more final tasks under the finally section. Final tasks are guaranteed to be executed
in parallel after all PipelineTasks under tasks have completed, regardless of success or error. Final tasks are very
similar to PipelineTasks under the tasks section and follow the same syntax. Each final task must have a
valid name and a taskRef or taskSpec. For example:
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: cleanup-test
      taskRef:
        name: cleanup
Specifying Workspaces in Final Tasks
Final tasks can specify workspaces which PipelineTasks might have utilized, e.g. a mount point for credentials
held in Secrets. To support that requirement, you can specify one or more Workspaces in the workspaces field
for the final tasks, similar to tasks.
spec:
  resources:
    - name: app-git
      type: git
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-app-source
      taskRef:
        name: clone-app-repo-to-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
      resources:
        inputs:
          - name: app-git
            resource: app-git
  finally:
    - name: cleanup-workspace
      taskRef:
        name: cleanup-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
Specifying Parameters in Final Tasks
Similar to tasks, you can specify Parameters in final tasks:
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
PipelineRun Status with finally
With finally, the PipelineRun status is calculated based on the PipelineTasks under the tasks section and the final tasks.
Without finally:
| PipelineTasks under tasks | PipelineRun status | Reason |
|---|---|---|
| all PipelineTasks successful | true | Succeeded |
| one or more PipelineTasks skipped and rest successful | true | Completed |
| single failure of PipelineTask | false | failed |
With finally:
| PipelineTasks under tasks | Final Tasks | PipelineRun status | Reason |
|---|---|---|---|
| all PipelineTasks successful | all final tasks successful | true | Succeeded |
| all PipelineTasks successful | one or more failure of final tasks | false | Failed |
| one or more PipelineTasks skipped and rest successful | all final tasks successful | true | Completed |
| one or more PipelineTasks skipped and rest successful | one or more failure of final tasks | false | Failed |
| single failure of PipelineTask | all final tasks successful | false | failed |
| single failure of PipelineTask | one or more failure of final tasks | false | failed |
Overall, the PipelineRun state transitions for the respective scenarios are:
- All PipelineTasks and final tasks are successful: Started -> Running -> Succeeded
- At least one PipelineTask skipped and the rest successful: Started -> Running -> Completed
- One PipelineTask failed / one or more final tasks failed: Started -> Running -> Failed
Please refer to the table under Monitoring Execution Status to learn about
the kinds of events that are triggered based on the PipelineRun status.
Using Execution Status of pipelineTask
A final task can utilize the execution status of any of the pipelineTasks under the tasks section using a param:
finally:
  - name: finaltask
    params:
      - name: task1Status
        value: "$(tasks.task1.status)"
    taskSpec:
      params:
        - name: task1Status
      steps:
        - image: ubuntu
          name: print-task-status
          script: |
            if [ $(params.task1Status) == "Failed" ]
            then
              echo "Task1 has failed, continue processing the failure"
            fi
This kind of variable can have any one of the values from the following table:
| Status | Description |
|---|---|
| Succeeded | taskRun for the pipelineTask completed successfully |
| Failed | taskRun for the pipelineTask completed with a failure or cancelled by the user |
| None | the pipelineTask has been skipped or no execution information available for the pipelineTask |
For an end-to-end example, see status in a PipelineRun.
Known Limitations
Specifying Resources in Final Tasks
Similar to tasks, you can use PipelineResources as inputs and outputs for
final tasks in the Pipeline. The only difference is that final tasks with an input resource cannot have a from clause,
unlike a PipelineTask from the tasks section. For example:
spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
      resources:
        inputs:
          - name: source
            resource: tektoncd-pipeline-repo
        outputs:
          - name: workspace
            resource: my-repo
  finally:
    - name: clear-workspace
      taskRef:
        name: clear-workspace
      resources:
        inputs:
          - name: workspace
            resource: my-repo
            from: #invalid
              - tests
Cannot configure the Final Task execution order
It’s not possible to configure or modify the execution order of the final tasks. Unlike Tasks in a Pipeline,
all final tasks run simultaneously and start executing once all PipelineTasks under tasks have settled, which means
no runAfter can be specified in final tasks.
Cannot specify execution Conditions in Final Tasks
Tasks in a Pipeline can be configured to run only if some conditions are satisfied using conditions. But the
final tasks are guaranteed to be executed after all PipelineTasks, and therefore no conditions can be specified in
final tasks.
Cannot configure Task execution results with finally
Final tasks cannot be configured to consume Results of a PipelineTask from the tasks section, i.e. the following
example is not supported right now, but we are working on adding support for it (tracked in issue
#2557).
spec:
  tasks:
    - name: count-comments-before
      taskRef:
        name: count-comments
    - name: add-comment
      taskRef:
        name: add-comment
    - name: count-comments-after
      taskRef:
        name: count-comments
  finally:
    - name: check-count
      taskRef:
        name: check-count
      params:
        - name: before-count
          value: $(tasks.count-comments-before.results.count) #invalid
        - name: after-count
          value: $(tasks.count-comments-after.results.count) #invalid
Cannot configure Pipeline result with finally
Final tasks can emit Results, but Results emitted from the final tasks cannot be configured in the
Pipeline Results. We are working on adding support for this
(tracked in issue #2710).
results:
  - name: comment-count-validate
    value: $(finally.check-count.results.comment-count-validate)
In this example, PipelineResults is set to:
"pipelineResults": [
{
"name": "comment-count-validate",
"value": "$(finally.check-count.results.comment-count-validate)"
}
],
Using Custom Tasks
Note: This is only allowed if enable-custom-tasks is set to
"true" in the feature-flags configmap, see install.md
Custom Tasks
can implement behavior that doesn’t correspond directly to running a workload in a Pod on the cluster.
For example, a custom task might execute some operation outside of the cluster and wait for its execution to complete.
A PipelineRun starts a custom task by creating a Run instead of a TaskRun.
In order for a custom task to execute, there must be a custom task controller running on the cluster
that is responsible for watching and updating Runs which reference their type.
If no such controller is running, those Runs will never complete and Pipelines using them will time out.
Custom tasks are an experimental alpha feature and should be expected to change in breaking ways or even be removed.
Specifying the target Custom Task
To specify the custom task type you want to execute, the taskRef field
must include the custom task’s apiVersion and kind as shown below:
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
This creates a Run of a custom task of type Example in the example.dev API group with the version v1alpha1.
You can also specify the name of a custom task resource object previously defined in the cluster.
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
If the taskRef specifies a name, the custom task controller should look up the
Example resource with that name and use that object to configure the execution.
If the taskRef does not specify a name, the custom task controller might support
some default behavior for executing unnamed tasks.
Specifying parameters
If a custom task supports parameters, you can use the
params field to specify their values:
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      params:
        - name: foo
          value: bah
Specifying workspaces
If the custom task supports it, you can provide Workspaces to share data with the custom task.
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
      workspaces:
        - name: my-workspace
Consult the documentation of the custom task that you are using to determine whether it supports workspaces and how to name them.
Using Results
If the custom task produces results, you can reference them in a Pipeline using the normal syntax,
$(tasks.<task-name>.results.<result-name>).
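For example, a sketch (the print-result Task and the output result name are hypothetical) that passes a custom task's result into a later Task's parameters:
spec:
  tasks:
    - name: run-custom-task
      taskRef:
        apiVersion: example.dev/v1alpha1
        kind: Example
        name: myexample
    - name: print-output
      taskRef:
        name: print-result                                       # hypothetical Task
      params:
        - name: input
          value: "$(tasks.run-custom-task.results.output)"       # "output" is a hypothetical result name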
Limitations
Pipelines do not support the following items with custom tasks:
- Pipeline Resources
- retries
- timeout
- Conditions (Conditions are deprecated. Use WhenExpressions instead.)
Code examples
For a better understanding of Pipelines, study our code examples.