See also the Kubernetes deployment guide.
This library contains utilities for running Dagster with Kubernetes. This includes a Python API allowing Dagit to launch runs as Kubernetes Jobs, as well as a Helm chart you can use as the basis for a Dagster deployment on a Kubernetes cluster.
dagster_k8s.K8sRunLauncher(service_account_name, instance_config_map, postgres_password_secret=None, dagster_home=None, job_image=None, image_pull_policy='Always', image_pull_secrets=None, load_incluster_config=True, kubeconfig_file=None, inst_data=None, job_namespace='default', env_config_maps=None, env_secrets=None, k8s_client_batch_api=None, k8s_client_core_api=None)

RunLauncher that starts a Kubernetes Job for each pipeline run. Encapsulates each pipeline run in a separate, isolated invocation of dagster-graphql.
You may configure a Dagster instance to use this RunLauncher by adding a section to your dagster.yaml like the following:
run_launcher:
  module: dagster_k8s.launcher
  class: K8sRunLauncher
  config:
    service_account_name: pipeline_run_service_account
    job_image: my_project/dagster_image:latest
    instance_config_map: dagster-instance
    postgres_password_secret: dagster-postgresql-secret
As always when using a ConfigurableClass, the values under the config key of this YAML block will be passed to the constructor. The full list of acceptable values is given below by the constructor arguments.
service_account_name (str) – The name of the Kubernetes service account under which to run the Job.
job_image (Optional[str]) – The name of the image to use for the Job’s Dagster container. This image will be run with the command dagster api execute_run. When using user code deployments, the image should not be specified.
instance_config_map (str) – The name of an existing Volume to mount into the pod in order to provide a ConfigMap for the Dagster instance. This Volume should contain a dagster.yaml with appropriate values for run storage, event log storage, etc.
postgres_password_secret (Optional[str]) – The name of the Kubernetes Secret where the postgres password can be retrieved. Will be mounted and supplied as an environment variable to the Job Pod.
dagster_home (str) – The location of DAGSTER_HOME in the Job container; this is where the dagster.yaml file will be mounted from the instance ConfigMap specified above.
load_incluster_config (Optional[bool]) – Set this value if you are running the launcher within a k8s cluster. If True, we assume the launcher is running within the target cluster and load config using kubernetes.config.load_incluster_config. Otherwise, we will use the k8s config specified in kubeconfig_file (using kubernetes.config.load_kube_config) or fall back to the default kubeconfig. Default: True.
kubeconfig_file (Optional[str]) – The kubeconfig file from which to load config. Defaults to None (using the default kubeconfig).
image_pull_secrets (Optional[List[Dict[str, str]]]) – Optionally, a list of dicts, each of which corresponds to a Kubernetes LocalObjectReference (e.g., {'name': 'myRegistryName'}). This allows you to specify imagePullSecrets on a per-pod basis. Typically, these will be provided through the service account, when needed, and you will not need to pass this argument. See: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod and https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#podspec-v1-core.
image_pull_policy (Optional[str]) – Allows the image pull policy to be overridden, e.g. to facilitate local testing with kind. Default: "Always". See: https://kubernetes.io/docs/concepts/containers/images/#updating-images.
job_namespace (Optional[str]) – The namespace into which to launch new jobs. Note that any other Kubernetes resources the Job requires (such as the service account) must be present in this namespace. Default: "default".
env_config_maps (Optional[List[str]]) – A list of custom ConfigMapEnvSource names from which to draw environment variables (using envFrom) for the Job. Default: []. See: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container
env_secrets (Optional[List[str]]) – A list of custom Secret names from which to draw environment variables (using envFrom) for the Job. Default: []. See: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables
dagster_k8s.K8sScheduler(dagster_home, service_account_name, instance_config_map, postgres_password_secret, job_image, load_incluster_config=True, scheduler_namespace='default', image_pull_policy='Always', image_pull_secrets=None, kubeconfig_file=None, inst_data=None, env_config_maps=None, env_secrets=None)

Scheduler implementation on top of Kubernetes CronJob.
Enable this scheduler by adding it to your dagster.yaml, or by configuring the scheduler section of the Helm chart https://github.com/dagster-io/dagster/tree/master/helm/dagster
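For example, a dagster.yaml section along the following lines enables the scheduler. This is a minimal sketch only: the module path (dagster_k8s.scheduler) and the example values are assumptions to adapt to your deployment; the config keys mirror the constructor arguments listed above.
# Sketch only: module path and example values are assumptions; the config keys mirror the constructor args above.
cat >> "$DAGSTER_HOME/dagster.yaml" <<'EOF'
scheduler:
  module: dagster_k8s.scheduler
  class: K8sScheduler
  config:
    dagster_home: "/opt/dagster/dagster_home"
    service_account_name: dagster
    instance_config_map: dagster-instance
    postgres_password_secret: dagster-postgresql-secret
    job_image: "my-company.com/image:latest"
    scheduler_namespace: dagster
EOF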
The K8sRunLauncher allows Dagit instances to be configured to launch new runs by starting per-run Kubernetes Jobs. To configure the K8sRunLauncher, your dagster.yaml should include a section like:
run_launcher:
  module: dagster_k8s.launcher
  class: K8sRunLauncher
  config:
    image_pull_secrets:
    service_account_name: dagster
    job_image: "my-company.com/image:latest"
    dagster_home: "/opt/dagster/dagster_home"
    postgres_password_secret: "dagster-postgresql-secret"
    image_pull_policy: "IfNotPresent"
    job_namespace: "dagster"
    instance_config_map: "dagster-instance"
    env_config_maps:
      - "dagster-k8s-job-runner-env"
    env_secrets:
      - "dagster-k8s-some-secret"
For local dev (e.g., on kind or minikube):
helm install \
  --set dagit.image.repository="dagster.io/buildkite-test-image" \
  --set dagit.image.tag="py37-latest" \
  --set job_runner.image.repository="dagster.io/buildkite-test-image" \
  --set job_runner.image.tag="py37-latest" \
  --set imagePullPolicy="IfNotPresent" \
  dagster \
  helm/dagster/
Upon installation, the Helm chart will provide instructions for port forwarding Dagit and Flower (if configured).
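If you want to reach Dagit and Flower before reading the chart output, port-forward commands along these lines usually work. The service names and ports below are assumptions that depend on your release name and chart version; prefer the exact commands printed by helm install, or check kubectl get svc.
# Assumed service names/ports for a release named "dagster"; verify with `kubectl get svc`
kubectl port-forward svc/dagster-dagit 8080:80
kubectl port-forward svc/dagster-flower 5555:5555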
To run the unit tests:
pytest -m "not integration"
To run the integration tests, you must have Docker, kind, and helm installed.
On macOS:
brew install kind
brew install helm
Docker must be running.
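A quick, optional way to confirm the Docker daemon is reachable before kicking off the integration tests (not part of the original instructions):
# Exits non-zero if the Docker daemon is not reachable
docker info > /dev/null && echo "Docker is running"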
You may experience slow first test runs thanks to image pulls (run pytest -svv --fulltrace for visibility). Building images and loading them to the kind cluster is slow, and there is no visibility into the progress of the load.
NOTE: This process is quite slow, as it requires bootstrapping a local kind cluster with Docker images and the dagster-k8s Helm chart. For faster development, you can either:
Keep a warm kind cluster
Use a remote K8s cluster, e.g. via AWS EKS or GCP GKE
Instructions are below.
You may find that the loop of kind cluster creation, image loading, and Helm chart installation is too slow for effective local dev.
You may bypass cluster creation and image loading in the following way. First, add the --no-cleanup flag to your pytest invocation:
pytest --no-cleanup -s -vvv -m "not integration"
The tests will run as before, but the kind cluster will be left running after the tests are completed.
For subsequent test runs, you can run:
pytest --kind-cluster="cluster-d9971c84d44d47f382a2928c8c161faa" --existing-helm-namespace="dagster-test-95590a" -s -vvv -m "not integration"
This will bypass cluster creation, image loading, and Helm chart installation, for much faster tests.
The kind cluster name and Helm namespace for this command can be found in the logs, or retrieved via the respective CLIs, using kind get clusters and kubectl get namespaces. Note that for kubectl and helm to work correctly with a kind cluster, you should override your kubeconfig file location with:
kind get kubeconfig --name kind-test > /tmp/kubeconfig
export KUBECONFIG=/tmp/kubeconfig
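To confirm that kubectl is now pointed at the kind cluster, an optional check (not part of the original steps) is:
# Should report the control plane address of the kind cluster
kubectl cluster-info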
The test fixtures provided by dagster-k8s automate the process described below, but sometimes it’s useful to manually configure a kind cluster and load images onto it.
First, ensure you have a Docker image appropriate for your Python version. Run, from the root of the repo:
./python_modules/dagster-test/dagster_test/test_project/build.sh 3.7.6 \
  dagster.io.priv/buildkite-test-image:py37-latest
In the above invocation, the Python major/minor version should be appropriate for your desired tests.
Then run the following commands to create the cluster and load the image. Note that there is no feedback from the loading process.
kind create cluster --name kind-test
kind load docker-image --name kind-test dagster.io/dagster-docker-buildkite:py37-latest
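Since the load step prints nothing, one way to confirm the image actually landed on the node is to list images inside the kind node container. The node name below assumes kind's default <cluster-name>-control-plane naming convention:
# List container images on the kind node; node name assumes kind's default naming convention
docker exec kind-test-control-plane crictl images | grep dagster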
If you are deploying the Helm chart with an in-cluster Postgres (rather than an external database), and/or with dagster-celery workers (and a RabbitMQ), you’ll also want to have images present for rabbitmq and postgresql:
docker pull docker.io/bitnami/rabbitmq
docker pull docker.io/bitnami/postgresql
kind load docker-image --name kind-test docker.io/bitnami/rabbitmq:latest
kind load docker-image --name kind-test docker.io/bitnami/postgresql:latest
Then you can run pytest as follows:
pytest --kind-cluster=kind-test
If you already have a development K8s cluster available, you can run tests on that cluster rather than running locally in kind.
For this to work, first build and deploy the test image to a registry available to your cluster. For example, with a private ECR repository:
./python_modules/dagster-test/dagster_test/test_project/build.sh 3.7.6
docker tag dagster-docker-buildkite:latest $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/dagster-k8s-tests:2020-04-21T21-04-06
aws ecr get-login --no-include-email --region us-west-2 | sh
docker push $AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com/dagster-k8s-tests:2020-04-21T21-04-06
Then, you can run tests on EKS with:
export DAGSTER_DOCKER_IMAGE_TAG="2020-04-21T21-04-06"
export DAGSTER_DOCKER_REPOSITORY="$AWS_ACCOUNT_ID.dkr.ecr.us-west-2.amazonaws.com"
export DAGSTER_DOCKER_IMAGE="dagster-k8s-tests"
# First run with --no-cleanup to leave Helm chart in place
pytest --cluster-provider="kubeconfig" --no-cleanup -s -vvv
# Subsequent runs against existing Helm chart
pytest --cluster-provider="kubeconfig" --existing-helm-namespace="dagster-test-<some id>" -s -vvv
To test / validate Helm charts, you can run:
helm install dagster --dry-run --debug helm/dagster
helm lint helm/dagster
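Another sanity check (not from the original docs) is to render the chart templates locally without touching the cluster, which surfaces templating errors quickly:
# Render all templates for a release named "dagster"; fails on templating errors
helm template dagster helm/dagster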
To enable GCR access from Minikube:
kubectl create secret docker-registry element-dev-key \
  --docker-server=https://gcr.io \
  --docker-username=oauth2accesstoken \
  --docker-password="$(gcloud auth print-access-token)" \
  --docker-email=my@email.com
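The secret created above still needs to be referenced by the pods that pull from GCR. One common approach, shown here as an assumption rather than a documented step, is to attach it to the service account those pods run under:
# Assumes the pods use the "default" service account; substitute the service account your
# deployment actually uses (e.g. the one passed as service_account_name)
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "element-dev-key"}]}'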
Both the Postgres and the RabbitMQ Helm charts will store credentials using Persistent Volume Claims, which will outlive test invocations and calls to helm uninstall. These must be deleted if you want to change credentials. To view your PVCs, run:
kubectl get pvc
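To reset the stored credentials, delete the relevant PVCs. The names below are illustrative only; use the names reported by kubectl get pvc:
# Illustrative PVC names; substitute the names from `kubectl get pvc`
kubectl delete pvc data-dagster-postgresql-0 data-dagster-rabbitmq-0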
The Redis Helm chart installs with a randomly generated password by default; to turn this off:
helm install dagredis stable/redis --set usePassword=false
Then, to connect to your database from outside the cluster, execute the following commands:
kubectl port-forward --namespace default svc/dagredis-master 6379:6379
redis-cli -h 127.0.0.1 -p 6379