page_content | parent_section | url | token_count
---|---|---|---
stringlengths 74–2.86k | stringclasses 7 values | stringlengths 21–129 | int64 17–755
Reuse Docker builds to speed up Docker build times
Avoid building Docker images each time a pipeline runs
When using containerized components in your stack, ZenML needs to build Docker images to remotely execute your code. Building Docker images without connecting a git repository includes your step code in the built Docker image. This, however, means that new Docker images will be built and pushed whenever you make changes to any of your source files.
One way of skipping Docker builds each time is to pass in the ID of a build as you run the pipeline:
my_pipeline = my_pipeline.with_options(build=<BUILD_ID>)
or when running a pipeline from the CLI:
zenml pipeline run <PIPELINE_NAME> --build=<BUILD_ID>
Please note that specifying a custom build when running a pipeline will not run the code on your client machine; instead, the pipeline uses the code included in the Docker images of the build. Even if you make local code changes, reusing a build will always execute the code bundled in the Docker image. If you would like to reuse a Docker build AND make sure your local code changes are also downloaded into the image, you need to connect a git repository.
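To make the trade-off concrete, the decision can be sketched as a content-hash check in plain Python. This is illustrative only — it is not ZenML's actual build-matching logic, and the function names are made up:

```python
import hashlib

def source_digest(files):
    """Hash a mapping of {filename: bytes} into one stable digest."""
    h = hashlib.sha256()
    for name in sorted(files):
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()

def safe_to_reuse(build_digest, current_files):
    # Reusing a build is only safe if the code bundled at build time
    # matches the code you have locally right now.
    return build_digest == source_digest(current_files)

v1 = {"pipeline.py": b"print('v1')"}
build_digest = source_digest(v1)
```

If the local sources have changed since the build, `safe_to_reuse` returns False — exactly the situation in which reusing a build would silently ignore your changes.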
PreviousWhich files are built into the image
NextUse code repositories to automate Docker build reuse
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/customize-docker-builds/reuse-docker-builds | 273 |
Run on AWS
A simple guide to create an AWS stack to run your ZenML pipelines
This page aims to quickly set up a minimal production stack on AWS. With just a few simple steps, you will set up an IAM role with specifically-scoped permissions that ZenML can use to authenticate with the relevant AWS resources.
1) Set up credentials and local environment
To follow this guide, you need:
An active AWS account with necessary permissions for AWS S3, SageMaker, ECR, and ECS.
ZenML installed
AWS CLI installed and configured with your AWS credentials. You can follow the instructions here.
Once ready, navigate to the AWS console:
Choose an AWS region In the AWS console, choose the region where you want to deploy your ZenML stack resources. Make note of the region name (e.g., us-east-1, eu-west-2, etc.) as you will need it in subsequent steps.
Create an IAM role
For this, you'll need to find out your AWS account ID. You can find this by running:
aws sts get-caller-identity --query Account --output text
This will output your AWS account ID. Make a note of this as you will need it in the next steps. (If you're doing anything more esoteric with your AWS account and IAM roles, this might not work for you. The account ID here that we're trying to get is the root account ID that you use to log in to the AWS console.)
Then create a file named assume-role-policy.json with the following content:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<YOUR_ACCOUNT_ID>:root",
                "Service": "sagemaker.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Make sure to replace the placeholder <YOUR_ACCOUNT_ID> with your actual AWS account ID that we found earlier.
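If you prefer to generate the file programmatically, the same trust policy can be produced with a short illustrative Python snippet (the account ID below is a placeholder):

```python
import json

def assume_role_policy(account_id: str) -> dict:
    # Trust policy allowing your own account (root) and SageMaker
    # to assume the role via sts:AssumeRole.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::{account_id}:root",
                    "Service": "sagemaker.amazonaws.com",
                },
                "Action": "sts:AssumeRole",
            }
        ],
    }

policy_json = json.dumps(assume_role_policy("123456789012"), indent=2)
```

Writing `policy_json` to assume-role-policy.json is equivalent to creating the file by hand.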
Now create a new IAM role that ZenML will use to access AWS resources. We'll use zenml-role as the role name in this example, but feel free to choose something else if you prefer. Run the following command to create the role: | how-to | https://docs.zenml.io/how-to/popular-integrations/aws-guide | 450 |
arameters
from zenml import step, pipeline
@step
def my_custom_block_step(block_message: str) -> SlackAlerterParameters:
    my_custom_block = [
        {
            "type": "header",
            "text": {
                "type": "plain_text",
                "text": f":tada: {block_message}",
                "emoji": True,
            },
        }
    ]
    return SlackAlerterParameters(blocks=my_custom_block)
@pipeline
def my_pipeline(...):
...
message_blocks = my_custom_block_step("my custom block!")
post_message = slack_alerter_post_step(params = message_blocks)
return post_message
if __name__ == "__main__":
my_pipeline()
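Independent of ZenML and the Slack SDK, the block payload built by the step above is just a list of plain dicts following Slack's Block Kit schema; a minimal self-contained sketch:

```python
def header_block(message: str) -> dict:
    # Plain-dict Slack Block Kit "header" payload, mirroring the step above.
    return {
        "type": "header",
        "text": {"type": "plain_text", "text": f":tada: {message}", "emoji": True},
    }

blocks = [header_block("my custom block!")]
```

You can unit-test block construction like this without talking to Slack at all.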
For more information and a full list of configurable attributes of the Slack alerter, check out the SDK Docs.
PreviousDiscord Alerter
NextDevelop a Custom Alerter
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/alerters/slack | 169 |
i-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME>
Running this command will create a new API key and invalidate the old one. The new API key is displayed as part of the command output and cannot be retrieved later. You can then use the new API key to connect your ZenML client to the server just as described above.
When rotating an API key, you can also configure a retention period for the old API key. This is useful if you need to keep the old API key for a while to ensure that all your workloads have been updated to use the new API key. You can do this with the --retain flag. For example, to rotate an API key and keep the old one for 60 minutes, you can run the following command:
zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> \
--retain 60
For increased security, you can deactivate a service account or an API key using one of the following commands:
zenml service-account update <SERVICE_ACCOUNT_NAME> --active false
zenml service-account api-key <SERVICE_ACCOUNT_NAME> update <API_KEY_NAME> \
--active false
Deactivating a service account or an API key will prevent it from being used to authenticate and has immediate effect on all workloads that use it.
To keep things simple, we can summarize the steps:
Use the zenml service-account create command to create a service account and an API key.
Use the zenml connect --url <url> --api-key <api-key> command to connect your ZenML client to the server using the API key.
Check configured service accounts with zenml service-account list.
Check configured API keys with zenml service-account api-key <SERVICE_ACCOUNT_NAME> list.
Regularly rotate API keys with zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate.
Deactivate service accounts or API keys with zenml service-account update or zenml service-account api-key <SERVICE_ACCOUNT_NAME> update.
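To illustrate how rotate-with-retention behaves, here is a hypothetical in-memory model — not ZenML's implementation — in which the old key keeps working only inside the retention window:

```python
from datetime import datetime, timedelta

class ApiKey:
    """Hypothetical in-memory model of API-key rotation with retention."""

    def __init__(self, value: str):
        self.value = value
        self.previous = None
        self.previous_expires = None

    def rotate(self, new_value: str, retain_minutes: int = 0) -> None:
        # Keep the old key alive for `retain_minutes` so running workloads
        # have time to switch over to the new key.
        self.previous = self.value
        self.previous_expires = datetime.now() + timedelta(minutes=retain_minutes)
        self.value = new_value

    def is_valid(self, candidate: str, now: datetime = None) -> bool:
        now = now or datetime.now()
        if candidate == self.value:
            return True
        return (
            candidate == self.previous
            and self.previous_expires is not None
            and now < self.previous_expires
        )

key = ApiKey("old-secret")
key.rotate("new-secret", retain_minutes=60)
```

Within the 60-minute window both keys authenticate; afterwards only the new one does.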
Important notice | how-to | https://docs.zenml.io/how-to/connecting-to-zenml/connect-with-a-service-account | 402 |
fer. Run the following command to create the role:
aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json
Be sure to take note of the information that is output to the terminal, as you will need it in the next steps, especially the Role ARN.
Attach policies to the role
Attach the following policies to the role to grant access to the necessary AWS services:
AmazonS3FullAccess
AmazonEC2ContainerRegistryFullAccess
AmazonSageMakerFullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess
aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
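The three commands above follow one template; as an illustrative sketch, they can be generated from a list of policy names:

```python
ROLE = "zenml-role"
POLICIES = [
    "AmazonS3FullAccess",
    "AmazonEC2ContainerRegistryFullAccess",
    "AmazonSageMakerFullAccess",
]

def attach_commands(role: str, policies) -> list:
    # One `aws iam attach-role-policy` invocation per AWS-managed policy.
    return [
        f"aws iam attach-role-policy --role-name {role} "
        f"--policy-arn arn:aws:iam::aws:policy/{p}"
        for p in policies
    ]

for cmd in attach_commands(ROLE, POLICIES):
    print(cmd)
```

Generating commands this way keeps the role name and policy list in one place if you script the setup.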
If you have not already, install the AWS and S3 ZenML integrations:
zenml integration install aws s3 -y
2) Create a Service Connector within ZenML
Create an AWS Service Connector within ZenML. The service connector will allow ZenML and its stack components to authenticate with AWS using the IAM role.
zenml service-connector register aws_connector \
--type aws \
--auth-method iam-role \
--role_arn=<ROLE_ARN> \
--region=<YOUR_REGION> \
--aws_access_key_id=<YOUR_ACCESS_KEY_ID> \
--aws_secret_access_key=<YOUR_SECRET_ACCESS_KEY>
Replace <ROLE_ARN> with the ARN of the IAM role you created in the previous step and <YOUR_REGION> with your chosen region, and use the AWS access key ID and secret access key configured for your AWS CLI.
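As a quick sanity check before registering the connector, you can verify that the value you paste for <ROLE_ARN> actually looks like an IAM role ARN; an illustrative helper:

```python
import re

ARN_PATTERN = re.compile(r"^arn:aws:iam::(\d{12}):role/([\w+=,.@-]+)$")

def parse_role_arn(arn: str):
    """Return (account_id, role_name), or raise ValueError for a malformed ARN."""
    m = ARN_PATTERN.match(arn)
    if not m:
        raise ValueError(f"not an IAM role ARN: {arn!r}")
    return m.group(1), m.group(2)

account, role = parse_role_arn("arn:aws:iam::123456789012:role/zenml-role")
```

Catching a mangled copy-paste here is cheaper than debugging a failed connector registration.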
3) Create Stack Components
Artifact Store (S3)
An artifact store is used for storing and versioning data flowing through your pipelines.
Before you run anything within the ZenML CLI, create an AWS S3 bucket. If you already have one, you can skip this step. (Note: the bucket name should be unique, so you might need to try a few times to find a unique name.)
aws s3api create-bucket --bucket your-bucket-name | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations/aws-guide | 474 |
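Since bucket names must be globally unique, a common trick is to append a random suffix. The sketch below does that and applies a simplified subset of the S3 naming rules (real S3 has a few more restrictions, e.g. no consecutive dots):

```python
import re
import uuid

# Simplified rule set: 3-63 chars, lowercase letters, digits, dots, hyphens,
# starting and ending with a letter or digit.
BUCKET_RULES = re.compile(r"^[a-z0-9](?:[a-z0-9.-]{1,61})[a-z0-9]$")

def unique_bucket_name(prefix: str) -> str:
    """Append a random suffix so the unique-name requirement is easier to satisfy."""
    name = f"{prefix}-{uuid.uuid4().hex[:8]}"
    if not BUCKET_RULES.match(name):
        raise ValueError(f"invalid S3 bucket name: {name}")
    return name

print(unique_bucket_name("zenml-artifacts"))
```

Pass the generated name to `aws s3api create-bucket --bucket <name>` instead of guessing repeatedly.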
tored in the
secrets store.
"""
@abstractmethod
def delete_secret_values(self, secret_id: UUID) -> None:
"""Deletes secret values for an existing secret.
Args:
secret_id: The ID of the secret.
Raises:
KeyError: if no secret values for the given ID are stored in the
secrets store.
"""
This is a slimmed-down version of the real interface, which aims to highlight the abstraction layer. To see the full definition and the complete docstrings, please check the SDK docs.
Build your own custom secrets store
If you want to create your own custom secrets store implementation, you can follow the following steps:
Create a class that inherits from the zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore base class and implements the abstract methods shown in the interface above. Use SecretsStoreType.CUSTOM as the TYPE value for your secrets store class.
If you need to provide any configuration, create a class that inherits from the SecretsStoreConfiguration class and add your configuration parameters there. Use that as the CONFIG_TYPE value for your secrets store class.
To configure the ZenML server to use your custom secrets store, make sure your code is available in the container image that is used to run the ZenML server. Then, use environment variables or helm chart values to configure the ZenML server to use your custom secrets store, as covered in the deployment guide.
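To make the interface concrete, here is a hypothetical in-memory secrets store sketch. A real implementation must inherit from BaseSecretsStore and set TYPE = SecretsStoreType.CUSTOM; both are omitted here to keep the example self-contained:

```python
from uuid import UUID, uuid4

class InMemorySecretsStore:
    """Illustrative stand-in for a custom secrets store (not for production)."""

    def __init__(self):
        self._values = {}

    def store_secret_values(self, secret_id: UUID, values: dict) -> None:
        self._values[secret_id] = dict(values)

    def get_secret_values(self, secret_id: UUID) -> dict:
        if secret_id not in self._values:
            raise KeyError(f"no secret values for ID {secret_id}")
        return self._values[secret_id]

    def delete_secret_values(self, secret_id: UUID) -> None:
        # Mirrors the abstract interface: KeyError if the ID is unknown.
        if secret_id not in self._values:
            raise KeyError(f"no secret values for ID {secret_id}")
        del self._values[secret_id]

store = InMemorySecretsStore()
sid = uuid4()
store.store_secret_values(sid, {"api_token": "s3cr3t"})
```

The KeyError contract matches the docstring of delete_secret_values shown above.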
PreviousTroubleshoot stack components
NextSecret management
Last updated 15 days ago | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/custom-secret-stores | 306 |
re a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
The 'dockerhub' Docker Service Connector connector was used to successfully configure the local Docker/OCI container registry client/SDK.
Stack Components use
The Docker Service Connector can be used by all Container Registry stack component flavors to authenticate to a remote Docker/OCI container registry. This allows container images to be built and published to private container registries without the need to configure explicit Docker credentials in the target environment or the Stack Component.
ZenML does not yet support automatically configuring Docker credentials in container runtimes such as Kubernetes clusters (i.e. via imagePullSecrets) to allow container images to be pulled from the private container registries. This will be added in a future release.
PreviousSecurity best practices
NextKubernetes Service Connector
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/auth-management/docker-service-connector | 177 |
(Truncated stack list output: the zenbytes_local_with_mlflow stack, ID c2acd029-8eed-4b6e-ad19-91c419ce91d4, owned by user default, with the zenbytes_mlflow / zenbytes_mlflow_tracker components.)
Example of migrating a profile into a new project:
$ zenml profile migrate /home/stefan/.config/zenml/profiles/zenprojects --project zenprojects
Unable to find ZenML repository in your current working directory (/home/stefan/aspyre/src/zenml) or any parent directories. If you want to use an existing repository which is in a different location, set the environment variable 'ZENML_REPOSITORY_PATH'. If you want to create a new repository, run zenml init.
Running without an active repository root.
Creating project zenprojects
Creating default stack for user 'default' in project zenprojects...
No component flavors to migrate from /home/stefan/.config/zenml/profiles/zenprojects/stacks.yaml...
Migrating stack components from /home/stefan/.config/zenml/profiles/zenprojects/stacks.yaml...
Created artifact_store 'cloud_artifact_store' with flavor 's3'.
Created container_registry 'cloud_registry' with flavor 'aws'.
Created container_registry 'local_registry' with flavor 'default'.
Created model_deployer 'eks_seldon' with flavor 'seldon'.
Created orchestrator 'cloud_orchestrator' with flavor 'kubeflow'.
Created orchestrator 'kubeflow_orchestrator' with flavor 'kubeflow'.
Created secrets_manager 'aws_secret_manager' with flavor 'aws'.
Migrating stacks from /home/stefan/.config/zenml/profiles/zenprojects/stacks.yaml... | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 547 |
Local Artifact Store
Storing artifacts on your local filesystem.
The local Artifact Store is a built-in ZenML Artifact Store flavor that uses a folder on your local filesystem to store artifacts.
When would you want to use it?
The local Artifact Store is a great way to get started with ZenML, as it doesn't require you to provision additional local resources or to interact with managed object-store services like Amazon S3 and Google Cloud Storage. All you need is the local filesystem. You should use the local Artifact Store if you're just evaluating or getting started with ZenML, or if you are still in the experimental phase and don't need to share your pipeline artifacts (dataset, models, etc.) with others.
The local Artifact Store is not meant to be utilized in production. The local filesystem cannot be shared across your team and the artifacts stored in it cannot be accessed from other machines. This also means that artifact visualizations will not be available when using a local Artifact Store through a ZenML instance deployed in the cloud.
Furthermore, the local Artifact Store doesn't cover services like high availability, scalability, backup and restore, and other features that are expected from a production-grade MLOps system.
The fact that it stores artifacts on your local filesystem also means that not all stack components can be used in the same stack as a local Artifact Store:
only Orchestrators running on the local machine, such as the local Orchestrator, a local Kubeflow Orchestrator, or a local Kubernetes Orchestrator can be combined with a local Artifact Store
only Model Deployers that are running locally, such as the MLflow Model Deployer, can be used in combination with a local Artifact Store
Step Operators: none of the Step Operators can be used in the same stack as a local Artifact Store, given that their very purpose is to run ZenML steps in remote specialized environments | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/local | 378 |
figuration option to true in the ZenML deployment.
This authentication method doesn't require any credentials to be explicitly configured. It automatically discovers and uses credentials from one of the following sources:
environment variables (GOOGLE_APPLICATION_CREDENTIALS)
local ADC credential files set up by running gcloud auth application-default login (e.g. ~/.config/gcloud/application_default_credentials.json).
a GCP service account attached to the resource where the ZenML server is running. Only works when running the ZenML server on a GCP resource with a service account attached to it or when using Workload Identity (e.g. GKE cluster).
This is the quickest and easiest way to authenticate to GCP services. However, the results depend on how ZenML is deployed and on the environment where it is used, and are thus not fully reproducible:
when used with the default local ZenML deployment or a local ZenML server, the credentials are those set up on your machine (i.e. by running gcloud auth application-default login or setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to a service account key JSON file).
when connected to a ZenML server, this method only works if the ZenML server is deployed in GCP and will use the service account attached to the GCP resource where the ZenML server is running (e.g. a GKE cluster). The service account permissions may need to be adjusted to allow listing and accessing/describing the GCP resources that the connector is configured to access.
Note that the discovered credentials inherit the full set of permissions of the local GCP CLI credentials or service account attached to the ZenML server GCP workload. Depending on the extent of those permissions, this authentication method might not be suitable for production use, as it can lead to accidental privilege escalation. Instead, it is recommended to use the Service Account Key or Service Account Impersonation authentication methods to restrict the permissions that are granted to the connector clients. | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 390 |
uter vision that expect a single dataset as input.
Model drift checks require two datasets and a mandatory model as input. This list includes a subset of the model evaluation checks provided by Deepchecks for tabular data and for computer vision that expect two datasets as input: target and reference.
This structure is directly reflected in how Deepchecks can be used with ZenML: there are four different Deepchecks standard steps and four different ZenML enums for Deepchecks checks. The Deepchecks Data Validator API is also modeled to reflect this same structure.
A notable characteristic of Deepchecks is that you don't need to customize the set of Deepchecks tests that are part of a test suite. Both ZenML and Deepchecks provide sane defaults that will run all available Deepchecks tests in a given category with their default conditions if a custom list of tests and conditions is not provided.
There are three ways you can use Deepchecks in your ZenML pipelines that allow different levels of flexibility:
instantiate, configure and insert one or more of the standard Deepchecks steps shipped with ZenML into your pipelines. This is the easiest way and the recommended approach, but can only be customized through the supported step configuration parameters.
call the data validation methods provided by the Deepchecks Data Validator in your custom step implementation. This method allows for more flexibility concerning what can happen in the pipeline step, but you are still limited to the functionality implemented in the Data Validator.
use the Deepchecks library directly in your custom step implementation. This gives you complete freedom in how you are using Deepchecks' features.
You can visualize Deepchecks results in Jupyter notebooks or view them directly in the ZenML dashboard.
Warning! Usage in remote orchestrators | stack-components | https://docs.zenml.io/stack-components/data-validators/deepchecks | 337 |
y custom tool? How can I extend or build on ZenML?
This depends on the tool and its respective MLOps category. We have a full guide on this over here!
How can I contribute?
We develop ZenML together with our community! To get involved, the best way to get started is to select any issue from the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.
How can I speak with the community?
The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond.
Which license does ZenML use?
ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.
PreviousCommunity & content
Last updated 15 days ago | reference | https://docs.zenml.io/reference/faq | 199 |
Production guide
Level up your skills in a production setting.
The ZenML production guide builds upon the Starter guide and is the next step in the MLOps Engineer journey with ZenML. If you're an ML practitioner hoping to implement a proof of concept within your workplace to showcase the importance of MLOps, this is the place for you.
This guide will focus on shifting gears from running pipelines locally on your machine, to running them in production in the cloud. We'll cover:
Deploying ZenML
Understanding stacks
Connecting remote storage
Orchestrating on the cloud
Configuring the pipeline to scale compute
Configuring a code repository
As in the starter guide, make sure you have a Python environment ready and virtualenv installed to follow along with ease. Since we are now dealing with cloud infrastructure, you'll also want to pick one of the major cloud providers (AWS, GCP, Azure) and make sure the respective CLI is installed and authorized.
By the end, you will have completed an end-to-end MLOps project that you can use as inspiration for your own work. Let's get right into it!
PreviousA starter project
NextDeploying ZenML
Last updated 18 days ago | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide | 250 |
server                │ https://35.175.95.223
insecure              │ False
cluster_name          │ 35.175.95.223
token                 │ [HIDDEN]
certificate_authority │ [HIDDEN]
Credentials auto-discovered and lifted through the Kubernetes Service Connector might have a limited lifetime, especially if the target Kubernetes cluster is managed through a 3rd-party authentication provider such as GCP or AWS. Using short-lived credentials with your Service Connectors could lead to loss of connectivity and other unexpected errors in your pipeline.
Local client provisioning
This Service Connector allows configuring the local Kubernetes client (i.e. kubectl) with credentials:
zenml service-connector login kube-auto
Example Command Output
Attempting to configure local client using service connector 'kube-auto'...
Cluster "35.185.95.223" set.
Updated local kubeconfig with the cluster details. The current kubectl context was set to '35.185.95.223'.
The 'kube-auto' Kubernetes Service Connector connector was used to successfully configure the local Kubernetes cluster client/SDK.
Stack Components use
The Kubernetes Service Connector can be used in Orchestrator and Model Deployer stack component flavors that rely on Kubernetes clusters to manage their workloads. This allows Kubernetes container workloads to be managed without the need to configure and maintain explicit Kubernetes kubectl configuration contexts and credentials in the target environment and in the Stack Component.
PreviousDocker Service Connector | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/kubernetes-service-connector | 453 |
NAME                   │ ID                                   │ TYPE │ RESOURCE TYPES           │ RESOURCE NAME           │ SHARED │ OWNER   │ EXPIRES IN │ LABELS
gcp-multi              │ 9d953320-3560-4a78-817c-926a3898064d │ gcp  │ gcp-generic, gcs-bucket, │ <multiple>              │        │ default │            │
                       │                                      │      │ kubernetes-cluster,      │                         │        │         │            │
                       │                                      │      │ docker-registry          │                         │        │         │            │
gcs-multi              │ ff9c0723-7451-46b7-93ef-fcf3efde30fa │ gcp  │ gcs-bucket               │ <multiple>              │        │ default │            │
gcs-langchain-slackbot │ cf3953e9-414c-4875-ba00-24c62a0dc0c5 │ gcp  │ gcs-bucket               │ gs://langchain-slackbot │        │ default │            │
Local and remote availability | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 558 |
Find out which configuration was used for a run
Sometimes you might want to extract the used configuration from a pipeline that has already run. You can do this simply by loading the pipeline run and accessing its config attribute.
from zenml.client import Client
pipeline_run = Client().get_pipeline_run("<PIPELINE_RUN_NAME>")
configuration = pipeline_run.config
PreviousConfiguration hierarchy
NextAutogenerate a template yaml file
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/use-configuration-files/retrieve-used-configuration-of-a-run | 90 |
Reranking for better retrieval
Add reranking to your RAG inference for better retrieval performance.
Rerankers are a crucial component of retrieval systems that use LLMs. They help improve the quality of the retrieved documents by reordering them based on additional features or scores. In this section, we'll explore how to add a reranker to your RAG inference pipeline in ZenML.
In previous sections, we set up the overall workflow, from data ingestion and preprocessing to embeddings generation and retrieval. We then set up some basic evaluation metrics to assess the performance of our retrieval system. A reranker is a way to squeeze a bit of extra performance out of the system by reordering the retrieved documents based on additional features or scores.
As you can see, reranking is an optional addition we make to what we've already set up. It's not strictly necessary, but it can help improve the relevance and quality of the retrieved documents, which in turn can lead to better responses from the LLM. Let's dive in!
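The core mechanic is simple to sketch: a reranker is a scoring function applied to the retrieved documents, followed by a sort. The keyword-overlap scorer below is a deliberately naive stand-in for a real reranking model:

```python
def rerank(query: str, documents: list, score_fn) -> list:
    """Reorder retrieved documents by relevance score, best first."""
    return sorted(documents, key=lambda doc: score_fn(query, doc), reverse=True)

def keyword_overlap(query: str, doc: str) -> float:
    # Naive scorer: fraction of query words that appear in the document.
    words = query.lower().split()
    return sum(w in doc.lower() for w in words) / len(words)

docs = [
    "ZenML stacks bundle infrastructure components.",
    "Rerankers reorder retrieved documents for relevance.",
    "Bananas are rich in potassium.",
]
ranked = rerank("how do rerankers reorder documents", docs, keyword_overlap)
```

In a real pipeline you would swap `keyword_overlap` for a cross-encoder or a hosted reranking model; the surrounding score-then-sort structure stays the same.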
PreviousEvaluation in practice
NextUnderstanding reranking
Last updated 15 days ago | user-guide | https://docs.zenml.io/user-guide/llmops-guide/reranking | 224 |
uto-configuration is used (see the example below).
The following assumes the local AWS CLI has an AWS CLI profile named connectors configured with an AWS Secret Key. We need to force the ZenML CLI to use Secret Key authentication by passing the --auth-method secret-key option; otherwise it would automatically use the AWS Session Token authentication method as an extra precaution:
AWS_PROFILE=connectors zenml service-connector register aws-secret-key --type aws --auth-method secret-key --auto-configure
Example Command Output
Registering service connector 'aws-secret-key'...
Successfully registered service connector `aws-secret-key` with access to the following resources:
RESOURCE TYPE      │ RESOURCE NAMES
aws-generic        │ us-east-1
s3-bucket          │ s3://zenfiles
                   │ s3://zenml-demos
                   │ s3://zenml-generative-chat
kubernetes-cluster │ zenhacks-cluster
docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com
The AWS Secret Key was lifted up from the local host:
zenml service-connector describe aws-secret-key
Example Command Output
Service connector 'aws-secret-key' of type 'aws' with id 'a1b07c5a-13af-4571-8e63-57a809c85790' is owned by user 'default' and is 'private'.
'aws-secret-key' aws Service Connector Details | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 505 |
inheriting from the BaseArtifactStoreFlavor class.
Once you are done with the implementation, you can register it through the CLI. Please ensure you point to the flavor class via dot notation:
zenml artifact-store flavor register <path.to.MyArtifactStoreFlavor>
For example, if your flavor class MyArtifactStoreFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually, it's better to not have to rely on this mechanism and initialize zenml at the root.
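The dot-notation path has a fixed shape — a module path followed by a class name. A small, illustrative helper to split and sanity-check it (the real resolution additionally depends on the zenml init root described above):

```python
def split_flavor_path(dotted: str):
    """Split 'flavors.my_flavor.MyArtifactStoreFlavor' into module and class name."""
    module, _, cls = dotted.rpartition(".")
    # Expect at least one module segment and a CamelCase class name.
    if not module or not cls[:1].isupper():
        raise ValueError(f"expected '<module.path>.<ClassName>', got {dotted!r}")
    return module, cls

module, cls = split_flavor_path("flavors.my_flavor.MyArtifactStoreFlavor")
```

Checking the path shape locally catches typos before the CLI attempts an import.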
Afterward, you should see the new custom artifact store flavor in the list of available artifact store flavors:
zenml artifact-store flavor list
It is important to draw attention to when and how these base abstractions are coming into play in a ZenML workflow.
The CustomArtifactStoreFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomArtifactStoreConfig class is imported when someone tries to register/update a stack component with this custom flavor. Especially, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The CustomArtifactStore only comes into play when the component is ultimately in use. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/custom | 350 |
res, and the thought process behind them.
Podcast
We also have a Podcast series that brings you interviews and discussions with industry leaders, top technology professionals, and others. We discuss the latest developments in machine learning, deep learning, and artificial intelligence, with a particular focus on MLOps, or how trained models are used in production.
Newsletter
You can also subscribe to our Newsletter where we share what we learn as we develop open-source tooling for production machine learning. You will also get all the exciting news about ZenML in general.
PreviousMigration guide 0.58.2 β 0.60.0
NextFAQ
Last updated 19 days ago | reference | https://docs.zenml.io/v/docs/reference/community-and-content | 135 |
otated[int, "remainder"]
]:
    return a // b, a % b
If you do not give your outputs custom names, the created artifacts will be named {pipeline_name}::{step_name}::output or {pipeline_name}::{step_name}::output_{i} in the dashboard. See the documentation on artifact versioning and configuration for more information.
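Stripped of the @step decorator, the same output-annotation pattern can be exercised in plain Python; the "quotient" name is assumed here, since the snippet above is truncated:

```python
from typing import Annotated, Tuple, get_type_hints

def divide(a: int, b: int) -> Tuple[
    Annotated[int, "quotient"],
    Annotated[int, "remainder"],
]:
    return a // b, a % b

# The Annotated metadata is what ZenML-style frameworks read as output names.
hints = get_type_hints(divide, include_extras=True)
names = [t.__metadata__[0] for t in hints["return"].__args__]
```

`include_extras=True` is required — without it, get_type_hints strips the Annotated metadata.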
See Also:
Learn more about output annotation here
For custom data types you should check these docs out
PreviousUse pipeline/step parameters
NextControl caching behavior
Last updated 16 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/step-output-typing-and-annotation | 112 |
se Python <3.11 together with the GCP integration.
The GCS Artifact Store flavor is provided by the GCP ZenML integration; you need to install it on your local machine to be able to register a GCS Artifact Store and add it to your stack:
zenml integration install gcp -y
The only configuration parameter mandatory for registering a GCS Artifact Store is the root path URI, which needs to point to a GCS bucket and take the form gs://bucket-name. Please read the Google Cloud Storage documentation on how to configure a GCS bucket.
With the URI to your GCS bucket known, registering a GCS Artifact Store can be done as follows:
# Register the GCS artifact store
zenml artifact-store register gs_store -f gcp --path=gs://bucket-name
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a gs_store ... --set
Depending on your use case, however, you may also need to provide additional configuration parameters pertaining to authentication to match your deployment scenario.
Infrastructure Deployment
A GCS Artifact Store can be deployed directly from the ZenML CLI:
zenml artifact-store deploy gcs_artifact_store --flavor=gcp --provider=gcp ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
Authentication Methods
Integrating and using a GCS Artifact Store in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Implicit Authentication method. However, the recommended way to authenticate to the GCP cloud platform is through a GCP Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the GCS Artifact Store with other remote stack components also running in GCP. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/gcp | 403 |
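If you take the Service Connector route, a minimal sketch of the flow looks like the following. The connector and store names here are placeholder examples:

```sh
# Register a GCP Service Connector scoped to GCS buckets, auto-configured
# from the gcloud credentials on your local machine
zenml service-connector register gcp-generic --type gcp --resource-type gcs-bucket --auto-configure

# Connect the GCS Artifact Store registered earlier to the connector
zenml artifact-store connect gs_store --connector gcp-generic
```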
gistry or even more than one type of AWS resource:
zenml service-connector register --type aws -i
A non-interactive CLI example that leverages the AWS CLI configuration on your local machine to auto-configure an AWS Service Connector targeting an ECR registry is:
zenml service-connector register <CONNECTOR_NAME> --type aws --resource-type docker-registry --auto-configure
Example Command Output
$ zenml service-connector register aws-us-east-1 --type aws --resource-type docker-registry --auto-configure
⠸ Registering service connector 'aws-us-east-1'...
Successfully registered service connector `aws-us-east-1` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE      ┃ RESOURCE NAMES                               ┃
┠────────────────────┼──────────────────────────────────────────────┨
┃ 🐳 docker-registry ┃ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┗━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Note: Please remember to grant the entity associated with your AWS credentials permissions to read and write to one or more ECR repositories as well as to list accessible ECR repositories. For a full list of permissions required to use an AWS Service Connector to access an ECR registry, please refer to the AWS Service Connector ECR registry resource type documentation or read the documentation available in the interactive CLI commands and dashboard. The AWS Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more AWS Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the ECR registry you want to use for your AWS Container Registry by running e.g.:
zenml service-connector list-resources --connector-type aws --resource-type docker-registry
Example Command Output | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/aws | 460 |
┠───────────────────┼───────────────┼───────────────┼───────────────────────┼──────────────────────────────┨
┃                   ┃               ┃               ┃ 📦 gcs-bucket         ┃ gs://annotation-gcp-store    ┃
┃                   ┃               ┃               ┃                       ┃ gs://zenml-bucket-sl         ┃
┃                   ┃               ┃               ┃                       ┃ gs://zenml-core.appspot.com  ┃
┃                   ┃               ┃               ┃                       ┃ gs://zenml-core_cloudbuild   ┃
┃                   ┃               ┃               ┃                       ┃ gs://zenml-datasets          ┃
┠───────────────────┼───────────────┼───────────────┼───────────────────────┼──────────────────────────────┨
┃                   ┃               ┃               ┃ 🌀 kubernetes-cluster ┃ zenml-test-cluster           ┃
┠───────────────────┼───────────────┼───────────────┼───────────────────────┼──────────────────────────────┨
┃                   ┃               ┃               ┃ 🐳 docker-registry    ┃ gcr.io/zenml-core            ┃ | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 262 |
Control execution order of steps
By default, ZenML uses the data flowing between steps of your pipeline to determine the order in which steps get executed.
from zenml import pipeline
@pipeline
def example_pipeline():
step_1_output = step_1()
step_2_output = step_2()
step_3(step_1_output, step_2_output)
If you have additional constraints on the order in which steps get executed, you can specify non-data dependencies by passing the invocation IDs of steps that should run before your step like this: my_step(after="other_step"). If you want to define multiple upstream steps, you can also pass a list for the after argument when calling your step: my_step(after=["other_step", "other_step_2"]).
Check out the documentation here to learn about the invocation ID and how to use a custom one for your steps.
from zenml import pipeline
@pipeline
def example_pipeline():
step_1_output = step_1(after="step_2")
step_2_output = step_2()
step_3(step_1_output, step_2_output)
This pipeline is similar to the one explained above, but this time ZenML will make sure to only start step_1 after step_2 has finished.
PreviousRun pipelines asynchronously
NextUsing a custom step invocation ID
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/control-execution-order-of-steps | 276 |
s that the task requires at least 16 GB of memory.
accelerators: The accelerators required. If a string, must be a string of the form 'V100' or 'V100:2', where the :2 indicates that the task requires 2 V100 GPUs. If a dict, must be a dict of the form {'V100': 2} or {'tpu-v2-8': 1}.
accelerator_args: Accelerator-specific arguments. For example, {'tpu_vm': True, 'runtime_version': 'tpu-vm-base'} for TPUs.
use_spot: Whether to use spot instances. If None, defaults to False.
spot_recovery: The spot recovery strategy to use for the managed spot to recover the cluster from preemption. Read more about the available strategies here
region: The cloud region to use.
zone: The cloud zone to use within the region.
image_id: The image ID to use. If a string, must be a string of the image id from the cloud, such as AWS: 'ami-1234567890abcdef0', GCP: 'projects/my-project-id/global/images/my-image-name'; Or, a image tag provided by SkyPilot, such as AWS: 'skypilot:gpu-ubuntu-2004'. If a dict, must be a dict mapping from region to image ID.
disk_size: The size of the OS disk in GiB.
disk_tier: The disk performance tier to use. If None, defaults to 'medium'.
cluster_name: Name of the cluster to create/reuse. If None, auto-generate a name. SkyPilot uses term cluster to refer to a group or a single VM that are provisioned to execute the task. The cluster name is used to identify the cluster and to determine whether to reuse an existing cluster or create a new one.
retry_until_up: Whether to retry launching the cluster until it is up.
idle_minutes_to_autostop: Automatically stop the cluster after this many minutes of idleness, i.e., no running or pending jobs in the cluster's job queue. Idleness gets reset whenever setting-up/running/pending jobs are found in the job queue. Setting this flag is equivalent to running sky.launch(..., detach_run=True, ...) and then sky.autostop(idle_minutes=<minutes>). If not set, the cluster will not be autostopped. | stack-components | https://docs.zenml.io/stack-components/orchestrators/skypilot-vm | 492 |
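These attributes can also be supplied through a pipeline run configuration file. The fragment below is an illustrative sketch only — it assumes the AWS flavor of the orchestrator (settings key orchestrator.vm_aws), and the values are examples, not defaults:

```yaml
settings:
  orchestrator.vm_aws:
    cpus: "2"
    memory: "16"
    accelerators: "V100:2"
    use_spot: true
    region: us-east-1
    disk_size: 100
    disk_tier: medium
    idle_minutes_to_autostop: 30
    retry_until_up: true
```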
to access the Artifact Store to load served models
To enable these use cases, it is recommended to use an Azure Service Connector to link your Azure Artifact Store to the remote Azure Blob storage container.
To set up the Azure Artifact Store to authenticate to Azure and access an Azure Blob storage container, it is recommended to leverage the many features provided by the Azure Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.
If you don't already have an Azure Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure an Azure Service Connector that can be used to access more than one Azure blob storage container or even more than one type of Azure resource:
zenml service-connector register --type azure -i
A non-interactive CLI example that uses Azure Service Principal credentials to configure an Azure Service Connector targeting a single Azure Blob storage container is:
zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type blob-container --resource-id <BLOB_CONTAINER_NAME>
Example Command Output
$ zenml service-connector register azure-blob-demo --type azure --auth-method service-principal --tenant_id=a79f3633-8f45-4a74-a42e-68871c17b7fb --client_id=8926254a-8c3f-430a-a2fd-bdab234d491e --client_secret=AzureSuperSecret --resource-type blob-container --resource-id az://demo-zenmlartifactstore
Successfully registered service connector `azure-blob-demo` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE     ┃ RESOURCE NAMES               ┃
┠───────────────────┼──────────────────────────────┨
┃ 📦 blob-container ┃ az://demo-zenmlartifactstore ┃ | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 454 |
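With the connector registered, the usual next step is to register the Azure Artifact Store and connect it to the connector. A sketch reusing the names from the example above:

```sh
# Register the Azure artifact store pointing at the same blob container
zenml artifact-store register az_store -f azure --path=az://demo-zenmlartifactstore

# Connect it to the Azure Service Connector registered above
zenml artifact-store connect az_store --connector azure-blob-demo
```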
authentication.
ED25519 key based authentication.
SSH private keys configured in the connector will be distributed to all clients that use them to run pipelines with the HyperAI orchestrator. SSH keys are long-lived credentials that give unrestricted access to HyperAI instances.
When configuring the Service Connector, it is required to provide at least one hostname via hostnames and the username with which to login. Optionally, it is possible to provide an ssh_passphrase if applicable. This way, it is possible to use the HyperAI service connector in multiple ways:
Create one service connector per HyperAI instance with different SSH keys.
Configure a reused SSH key just once for multiple HyperAI instances, then select the individual instance when creating the HyperAI orchestrator component.
Auto-configuration
This Service Connector does not support auto-discovery and extraction of authentication credentials from HyperAI instances. If this feature is useful to you or your organization, please let us know by messaging us in Slack or creating an issue on GitHub.
Stack Components use
The HyperAI Service Connector can be used by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances.
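A non-interactive registration might look like the sketch below. The hostnames, username and passphrase are placeholders, and the exact auth-method and key flags are assumptions — verify them with the interactive command before use:

```sh
# Sketch only: parameter names follow the configuration fields described
# above; confirm flags via `zenml service-connector register --type hyperai -i`
zenml service-connector register hyperai-conn --type hyperai \
    --auth-method ed25519-key \
    --hostnames=<IP_1>,<IP_2> \
    --username=<USERNAME> \
    --ssh_passphrase=<OPTIONAL_PASSPHRASE>
```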
PreviousAzure Service Connector
NextManage stacks
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/hyperai-service-connector | 240 |
Docker settings on a pipeline
Using Docker images to run your pipeline.
When a pipeline is run with a remote orchestrator, a Dockerfile is dynamically generated at runtime. It is then used to build the Docker image using the image builder component of your stack. The Dockerfile consists of the following steps:
Starts from a parent image that has ZenML installed. By default, this will use the official ZenML image for the Python and ZenML version that you're using in the active Python environment. If you want to use a different image as the base for the following steps, check out this guide.
Installs additional pip dependencies. ZenML will automatically detect which integrations are used in your stack and install the required dependencies. If your pipeline needs any additional requirements, check out our guide on including custom dependencies.
Optionally copies your source files. Your source files need to be available inside the Docker container so ZenML can execute your step code. Check out this section for more information on how you can customize how ZenML handles your source files in Docker images.
Sets user-defined environment variables.
The process described above is automated by ZenML and covers the most basic use cases. This section covers various ways to customize the Docker build process to fit your needs.
For a full list of configuration options, check out the DockerSettings object on the SDKDocs.
How to configure settings for a pipeline
Customizing the Docker builds for your pipelines and steps is done using the DockerSettings class which you can import like this:
from zenml.config import DockerSettings
There are many ways in which you can supply these settings:
Configuring them on a pipeline applies the settings to all steps of that pipeline:
from zenml.config import DockerSettings
docker_settings = DockerSettings()
# Either add it to the decorator
@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
my_step() | how-to | https://docs.zenml.io/v/docs/how-to/customize-docker-builds/docker-settings-on-a-pipeline | 381 |
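Besides attaching DockerSettings in code, the same options can be supplied through a YAML run configuration file. A minimal illustrative fragment — the keys mirror DockerSettings attributes, and the requirements and environment values are examples, not defaults:

```yaml
settings:
  docker:
    requirements:
      - torch>=2.0
    required_integrations:
      - sklearn
    environment:
      MY_ENV_VAR: "value"
```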
to install and configure the AWS CLI on your host
you don't need to care about enabling your other stack components (orchestrators, step operators, and model deployers) to have access to the artifact store through IAM roles and policies
you can combine the S3 artifact store with other stack components that are not running in AWS
Note: When you create the IAM user for your AWS access key, please remember to grant the created IAM user permissions to read and write to your S3 bucket (i.e. at a minimum: s3:PutObject, s3:GetObject, s3:ListBucket, s3:DeleteObject)
After having set up the IAM user and generated the access key, as described in the AWS documentation, you can register the S3 Artifact Store as follows:
# Store the AWS access key in a ZenML secret
zenml secret create s3_secret \
--aws_access_key_id='<YOUR_S3_ACCESS_KEY_ID>' \
--aws_secret_access_key='<YOUR_S3_SECRET_KEY>'
# Register the S3 artifact-store and reference the ZenML secret
zenml artifact-store register s3_store -f s3 \
--path='s3://your-bucket' \
--authentication_secret=s3_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a s3_store ... --set
Advanced Configuration
The S3 Artifact Store accepts a range of advanced configuration options that can be used to further customize how ZenML connects to the S3 storage service that you are using. These are accessible via the client_kwargs, config_kwargs and s3_additional_kwargs configuration attributes and are passed transparently to the underlying S3Fs library:
client_kwargs: arguments that will be transparently passed to the botocore client . You can use it to configure parameters like endpoint_url and region_name when connecting to an S3-compatible endpoint (e.g. Minio).
config_kwargs: advanced parameters passed to botocore.client.Config.
s3_additional_kwargs: advanced parameters that are used when calling S3 API, typically used for things like ServerSideEncryption and ACL. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/s3 | 432 |
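As an illustration, registering the store against a MinIO deployment could look like the sketch below. The endpoint URL is a placeholder, and note that the CLI accepts these nested attributes as JSON strings:

```sh
zenml artifact-store register minio_store -f s3 \
    --path='s3://minio_bucket' \
    --authentication_secret=s3_secret \
    --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}'
```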
┃ impersonation          ┃                ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━┷━━━━━━━┷━━━━━━━━┛
For this example we will configure a service connector using the user-account auth method. But before we can do that, we need to login to GCP using the following command:
gcloud auth application-default login
This will open a browser window and ask you to login to your GCP account. Once you have logged in, you can register a new service connector using the following command:
# We want to use --auto-configure to automatically configure the service connector with the appropriate credentials and permissions to provision VMs on GCP.
zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure
# using generic resource type requires disabling the generation of temporary tokens
zenml service-connector update gcp-skypilot-vm --generate_temporary_tokens=False
This will automatically configure the service connector with the appropriate credentials and permissions to provision VMs on GCP. You can then use the service connector to configure your registered VM Orchestrator stack component using the following commands:
# Register the orchestrator
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor vm_gcp
# Connect the orchestrator to the service connector
zenml orchestrator connect <ORCHESTRATOR_NAME> --connector gcp-skypilot-vm
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
We need first to install the SkyPilot integration for Azure and the Azure extra for ZenML, using the following two commands
pip install "zenml[connectors-azure]"
zenml integration install azure skypilot_azure
To provision VMs on Azure, your VM Orchestrator stack component needs to be configured to authenticate with Azure Service Connector | stack-components | https://docs.zenml.io/stack-components/orchestrators/skypilot-vm | 443 |
d:
zenml step-operator flavor list
How to use it
You don't need to directly interact with any ZenML step operator in your code. As long as the step operator that you want to use is part of your active ZenML stack, you can simply specify it in the @step decorator of your step.
from zenml import step
@step(step_operator=<STEP_OPERATOR_NAME>)
def my_step(...) -> ...:
    ...
Specifying per-step resources
If your steps require additional hardware resources, you can specify them on your steps as described here.
Enabling CUDA for GPU-backed hardware
Note that if you wish to use step operators to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
PreviousDevelop a Custom Model Deployer
NextAmazon SageMaker
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/step-operators | 195 |
ing it easy to scale your RAG pipelines as needed.
ZenML allows you to track all the artifacts associated with your RAG pipeline, from hyperparameters and model weights to metadata and performance metrics, as well as all the RAG- or LLM-specific artifacts like chains, agents, tokenizers and vector stores. These can all be tracked in the Model Control Plane and thus visualized in the ZenML Pro dashboard.
By bringing all of the above into a simple ZenML pipeline we achieve a clearly delineated set of steps that can be run and rerun to set up our basic RAG pipeline. This is a great starting point for building out more complex RAG pipelines, and it's a great way to get started with LLMs in a sensible way.
A summary of some of the advantages that ZenML brings to the table here includes:
Reproducibility: You can rerun the pipeline to update the index store with new documents or to change the parameters of the chunking process and so on. Previous versions of the artifacts will be preserved, and you can compare the performance of different versions of the pipeline.
Scalability: You can easily scale the pipeline to handle larger corpora of documents by deploying it on a cloud provider and using a more scalable vector store.
Tracking artifacts and associating them with metadata: You can track the artifacts generated by the pipeline and associate them with metadata that provides additional context and insights into the pipeline. This metadata and these artifacts are then visible in the ZenML dashboard, allowing you to monitor the performance of the pipeline and debug any issues that arise.
Maintainability: Having your pipeline in a clear, modular format makes it easier to maintain and update. You can easily add new steps, change the parameters of existing steps, and experiment with different configurations to see how they affect the performance of the pipeline.
io ┃
┗━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┛
If you already have one or more Docker Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the container registry you want to use for your Default Container Registry by running e.g.:
zenml service-connector list-resources --connector-type docker --resource-id <REGISTRY_URI>
Example Command Output
$ zenml service-connector list-resources --connector-type docker --resource-id docker.io
The resource with name 'docker.io' can be accessed by 'docker' service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         ┃ CONNECTOR NAME ┃ CONNECTOR TYPE ┃ RESOURCE TYPE      ┃ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼────────────────────┼────────────────┨
┃ cf55339f-dbc8-4ee6-862e-c25aff411292 ┃ dockerhub      ┃ 🐳 docker      ┃ 🐳 docker-registry ┃ docker.io      ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━━━━┛
After having set up or decided on a Docker Service Connector to use to connect to the target container registry, you can register the Docker Container Registry as follows:
# Register the container registry and reference the target registry URI
zenml container-registry register <CONTAINER_REGISTRY_NAME> -f default \
--uri=<REGISTRY_URL>
# Connect the container registry to the target registry via a Docker Service Connector
zenml container-registry connect <CONTAINER_REGISTRY_NAME> -i
A non-interactive version that connects the Default Container Registry to a target registry through a Docker Service Connector:
zenml container-registry connect <CONTAINER_REGISTRY_NAME> --connector <CONNECTOR_ID>
Example Command Output
$ zenml container-registry connect dockerhub --connector dockerhub | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/default | 523 |
k update annotation -a <YOUR_CLOUD_ARTIFACT_STORE>
# this must be done separately so that the other required stack components are first registered
zenml stack update annotation -an <YOUR_LABEL_STUDIO_ANNOTATOR>
zenml stack set annotation
# optionally also
zenml stack describe
Now if you run a simple CLI command like zenml annotator dataset list, this should work without any errors. You're ready to use your annotator in your ML workflow!
How do you use it?
ZenML assumes that users have registered a cloud artifact store and an annotator as described above. ZenML currently only supports this setup, but we will add a fully local stack option in the future.
ZenML supports access to your data and annotations via the zenml annotator ... CLI command.
You can access information about the datasets you're using with the zenml annotator dataset list command. To work on annotation for a particular dataset, you can run zenml annotator dataset annotate <dataset_name>.
Our computer vision end-to-end example is the best place to see how all the pieces of making this integration work fit together. What follows is an overview of some key components of the Label Studio integration and how they can be used.
Label Studio Annotator Stack Component
Our Label Studio annotator component inherits from the BaseAnnotator class. There are some methods that are core methods that must be defined, like being able to register or get a dataset. Most annotators handle things like the storage of state and have their own custom features, so there are quite a few extra methods specific to Label Studio. | stack-components | https://docs.zenml.io/v/docs/stack-components/annotators/label-studio | 325 |
pose -f /path/to/docker-compose.yml -p zenml up -d
You need to visit the ZenML dashboard at http://localhost:8080 to activate the server by creating an initial admin account. You can then connect your client to the server with the web login flow:
zenml connect --url http://localhost:8080
Tearing down the installation is as simple as running:
docker-compose -p zenml down
Database backup and recovery
An automated database backup and recovery feature is enabled by default for all Docker deployments. The ZenML server will automatically back up the database in-memory before every database schema migration and restore it if the migration fails.
The database backup automatically created by the ZenML server is only temporary and only used as an immediate recovery in case of database migration failures. It is not meant to be used as a long-term backup solution. If you need to back up your database for long-term storage, you should use a dedicated backup solution.
Several database backup strategies are supported, depending on where and how the backup is stored. The strategy can be configured by means of the ZENML_STORE_BACKUP_STRATEGY environment variable:
disabled - no backup is performed
in-memory - the database schema and data are stored in memory. This is the fastest backup strategy, but the backup is not persisted across container restarts, so no manual intervention is possible in case the automatic DB recovery fails after a failed DB migration. Adequate memory resources should be allocated to the ZenML server container when using this backup strategy with larger databases. This is the default backup strategy. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-docker | 319 |
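In a Docker Compose deployment, the strategy is set like any other server environment variable. An illustrative fragment, assuming the standard zenmldocker/zenml-server image and service layout:

```yaml
services:
  zenml:
    image: zenmldocker/zenml-server
    ports:
      - "8080:8080"
    environment:
      - ZENML_STORE_BACKUP_STRATEGY=in-memory
```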
RAG with ZenML
RAG is a sensible way to get started with LLMs.
Retrieval-Augmented Generation (RAG) is a powerful technique that combines the strengths of retrieval-based and generation-based models. In this guide, we'll explore how to set up RAG pipelines with ZenML, including data ingestion, index store management, and tracking RAG-associated artifacts.
LLMs are a powerful tool, as they can generate human-like responses to a wide variety of prompts. However, they can also be prone to generating incorrect or inappropriate responses, especially when the input prompt is ambiguous or misleading. They are also (currently) limited in the amount of text they can understand and/or generate. While there are some LLMs like Google's Gemini 1.5 Pro that can consistently handle 1 million tokens (small units of text), the vast majority (particularly the open-source ones currently available) handle far less.
The first part of this guide to RAG pipelines with ZenML is about understanding the basic components and how they work together. We'll cover the following topics:
why RAG exists and what problem it solves
how to ingest and preprocess data that we'll use in our RAG pipeline
how to leverage embeddings to represent our data; this will be the basis for our retrieval mechanism
how to store these embeddings in a vector database
how to track RAG-associated artifacts with ZenML
At the end, we'll bring it all together and show all the components working together to perform basic RAG inference.
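To preview how retrieval works before wiring up real embeddings and a vector database, here is a deliberately tiny, self-contained sketch that uses bag-of-words counts in place of learned embeddings — a real RAG pipeline would use an embedding model and a vector store instead:

```python
import math
from collections import Counter

docs = [
    "zenml orchestrates ml pipelines",
    "bananas are a yellow fruit",
    "pipelines move data between steps",
]

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

print(retrieve("how do pipelines work"))  # ['zenml orchestrates ml pipelines']
```

The retrieved text would then be stuffed into the LLM prompt — that is the "augmented generation" half of RAG.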
PreviousLLMOps guide
NextRAG in 85 lines of code
Last updated 15 days ago | user-guide | https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml | 333 |
@pipeline(model=model)
def my_pipeline(...):
    ...
You can also assign tags when creating or updating models with the Python SDK:
from zenml.models import Model
from zenml.client import Client
# Create or register a new model with tags
Client().create_model(
    name="iris_logistic_regression",
    tags=["classification", "iris-dataset"],
)

# Create or register a new model version also with tags
Client().create_model_version(
    model_name_or_id="iris_logistic_regression",
    name="2",
    tags=["version-1", "experiment-42"],
)
To add tags to existing models and their versions using the ZenML CLI, you can use the following commands:
# Tag an existing model
zenml model update iris_logistic_regression --tag "classification"
# Tag a specific model version
zenml model version update iris_logistic_regression 2 --tag "experiment3"
PreviousDelete an artifact
NextGet arbitrary artifacts in a step
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/handle-data-artifacts/tagging | 200 |
A starter project
Put your new knowledge into action with a simple starter project
By now, you have understood some of the basic pillars of an MLOps system:
Pipelines and steps
Artifacts
Models
We will now put this into action with a simple starter project.
Get started
Start with a fresh virtual environment with no dependencies. Then let's install our dependencies:
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
We will then use ZenML templates to help us get the code we need for the project:
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
The starter template is the same as the ZenML quickstart. You can clone it like so:
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/quickstart
pip install -r requirements.txt
zenml init
What you'll learn
You can either follow along in the accompanying Jupyter notebook, or just keep reading the README file for more instructions.
Either way, at the end you would run three pipelines that are exemplary:
A feature engineering pipeline that loads data and prepares it for training.
A training pipeline that loads the preprocessed dataset and trains a model.
A batch inference pipeline that runs predictions on the trained model with new data.
And voilà! You're now well on your way to becoming an MLOps expert. As a next step, try introducing the ZenML starter template to your colleagues and see the benefits of a standard MLOps framework in action!
Conclusion and next steps
This marks the end of the first chapter of your MLOps journey with ZenML. Make sure you do your own experimentation with ZenML to master the basics. When ready, move on to the production guide, which is the next part of the series.
PreviousTrack ML models
NextProduction guide
Last updated 19 days ago | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/starter-project | 422 |
use for the database connection.
database_ssl_ca:
# The path to the client SSL certificate to use for the database connection.
database_ssl_cert:
# The path to the client SSL key to use for the database connection.
database_ssl_key:
# Whether to verify the database server SSL certificate.
database_ssl_verify_server_cert:
Run the deploy command and pass the config file above to it.
zenml deploy --config=/PATH/TO/FILE
Note: To be able to run the deploy command, you should have your cloud provider's CLI configured locally with permissions to create resources like MySQL databases and networks.
Configuration file templates
Base configuration file
Below is the general structure of a config file. Use this as a base and then add any cloud-specific parameters from the sections below.
# Name of the server deployment.
name:
# The server provider type, one of aws, gcp or azure.
provider:
# The path to the kubectl config file to use for deployment.
kubectl_config_path:
# The Kubernetes namespace to deploy the ZenML server to.
namespace: zenmlserver
# The path to the ZenML server helm chart to use for deployment.
helm_chart:
# The repository and tag to use for the ZenML server Docker image.
zenmlserver_image_repo: zenmldocker/zenml
zenmlserver_image_tag: latest
# Whether to deploy an nginx ingress controller as part of the deployment.
create_ingress_controller: true
# Whether to use TLS for the ingress.
ingress_tls: true
# Whether to generate self-signed TLS certificates for the ingress.
ingress_tls_generate_certs: true
# The name of the Kubernetes secret to use for the ingress.
ingress_tls_secret_name: zenml-tls-certs
# The ingress controller's IP address. The ZenML server will be exposed on a subdomain of this IP. For AWS, if you have a hostname instead, use the following command to get the IP address: `dig +short <hostname>`.
ingress_controller_ip:
# Whether to create a SQL database service as part of the recipe.
deploy_db: true
# The username and password for the database. | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-zenml-cli | 437 |
onnect your client to it by running zenml connect. Use the zenml profile list and zenml profile migrate CLI commands to import the Stacks and Stack Components from your Profiles into your new ZenML deployment. If you have multiple Profiles that you would like to migrate, you can either use a prefix for the names of your imported Stacks and Stack Components, or you can use a different ZenML Project for each Profile.
The ZenML Dashboard is currently limited to showing only information that is available in the default Project. If you wish to migrate your Profiles to a different Project, you will not be able to visualize the migrated Stacks and Stack Components in the Dashboard. This will be fixed in a future release.
Once you've migrated all your Profiles, you can delete the old YAML files.
Example of migrating a default profile into the default project:
$ zenml profile list
ZenML profiles have been deprecated and removed in this version of ZenML. All
stacks, stack components, flavors etc. are now stored and managed globally,
either in a local database or on a remote ZenML server (see the `zenml up` and
`zenml connect` commands). As an alternative to profiles, you can use projects
as a scoping mechanism for stacks, stack components and other ZenML objects.
The information stored in legacy profiles is not automatically migrated. You can
do so manually by using the `zenml profile list` and `zenml profile migrate` commands.
Found profile with 1 stacks, 3 components and 0 flavors at: /home/stefan/.config/zenml/profiles/default
Found profile with 3 stacks, 6 components and 0 flavors at: /home/stefan/.config/zenml/profiles/zenprojects
Found profile with 3 stacks, 7 components and 0 flavors at: /home/stefan/.config/zenml/profiles/zenbytes
$ zenml profile migrate /home/stefan/.config/zenml/profiles/default
No component flavors to migrate from /home/stefan/.config/zenml/profiles/default/stacks.yaml...
Migrating stack components from /home/stefan/.config/zenml/profiles/default/stacks.yaml... | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 461 |
eline_step_name="huggingface_model_deployer_step",model_name="LLAMA-7B",
model_uri="s3://zenprojects/huggingface_model_deployer_step/output/884/huggingface",
revision="main",
task="text-classification",
region="us-east-1",
vendor="aws",
token="huggingface_token",
namespace="zenml-workloads",
endpoint_type="public",
)
print(f"Model server {service.config['model_name']} is deployed at {service.status['prediction_url']}")
How to Interact with a model deployer after deployment?
When a Model Deployer is part of the active ZenML Stack, it is also possible to interact with it from the CLI to list, start, stop, or delete the model servers that it manages:
$ zenml model-deployer models list
ββββββββββ―βββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββ―βββββββββββββββββββββββββββββ
β STATUS β UUID β PIPELINE_NAME β PIPELINE_STEP_NAME β
β βββββββββΌβββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββ¨
β β
 β 8cbe671b-9fce-4394-a051-68e001f92765 β seldon_deployment_pipeline β seldon_model_deployer_step β
ββββββββββ·βββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββ·βββββββββββββββββββββββββββββ
$ zenml model-deployer models describe 8cbe671b-9fce-4394-a051-68e001f92765
Properties of Served Model 8cbe671b-9fce-4394-a051-68e001f92765
ββββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β MODEL SERVICE PROPERTY β VALUE β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β MODEL_NAME β mnist β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | stack-components | https://docs.zenml.io/v/docs/stack-components/model-deployers | 572 |
w location. Some examples of such changes include:
switching Secrets Store back-end types (e.g. from internal SQL database to AWS Secrets Manager or Azure Key Vault)
switching back-end locations (e.g. changing the AWS Secrets Manager account or region, GCP Secret Manager project or Azure Key Vault's vault).
In such cases, it is not sufficient to simply reconfigure and redeploy the ZenML server with the new Secrets Store configuration. This is because the ZenML server will not automatically migrate existing secrets to the new location. Instead, you should follow a specific migration strategy to ensure that existing secrets are also properly migrated to the new location with minimal, even zero downtime.
The secrets migration process makes use of the fact that a secondary Secrets Store can be configured for the ZenML server for backup purposes. This secondary Secrets Store is used as an intermediate step in the migration process. The migration process is as follows (we'll refer to the Secrets Store that is currently in use as Secrets Store A and the Secrets Store that will be used after the migration as Secrets Store B):
Re-configure the ZenML server to use Secrets Store B as the secondary Secrets Store.
Re-deploy the ZenML server.
Use the zenml secret backup CLI command to back up all secrets from Secrets Store A to Secrets Store B. You don't have to worry about secrets that are created or updated by users during or after this process, as they will be automatically backed up to Secrets Store B. If you also wish to delete secrets from Secrets Store A after they are successfully backed up to Secrets Store B, you should run zenml secret backup --delete-secrets instead.
Re-configure the ZenML server to use Secrets Store B as the primary Secrets Store and remove Secrets Store A as the secondary Secrets Store.
Re-deploy the ZenML server.
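The effect of the zenml secret backup step above can be illustrated with a small, framework-free sketch. This is only an illustration of the semantics, not ZenML's implementation; the dict-based stores stand in for the real Secrets Store back-ends.

```python
# Illustrative sketch of what `zenml secret backup` achieves: every secret
# in Secrets Store A is copied to Secrets Store B, and with the
# `--delete-secrets` flag it is also removed from A afterwards.
def backup_secrets(store_a: dict, store_b: dict, delete_secrets: bool = False) -> None:
    for name, value in list(store_a.items()):
        store_b[name] = value  # back up to the secondary store
        if delete_secrets:
            del store_a[name]  # optionally clean up the old store

store_a = {"db-password": "hunter2", "api-key": "abc123"}
store_b = {}
backup_secrets(store_a, store_b, delete_secrets=True)
# store_b now holds both secrets; store_a is empty
```

Secrets created or updated while the migration is in progress are covered by the automatic backup to the secondary store, as described above.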
This migration strategy is not necessary if the actual location of the secrets values in the Secrets Store back-end does not change. For example: | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/manage-the-deployed-services/secret-management | 396 |
π¨βπ€Popular integrations
Use your favorite tools with ZenML.
ZenML is designed to work seamlessly with your favorite tools. This guide will show you how to integrate ZenML with some of the most popular tools in the data science and machine learning ecosystem.
PreviousFetch metadata during pipeline composition
NextRun on AWS
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/popular-integrations | 76 |
r managed solutions like Vertex.
How to deploy it
The Kubernetes orchestrator requires a Kubernetes cluster in order to run. There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure, and we can't possibly cover all of them, but you can check out our cloud guide.
If the above Kubernetes cluster is deployed remotely on the cloud, then another pre-requisite to use this orchestrator would be to deploy and connect to a remote ZenML server.
Infrastructure Deployment
A Kubernetes orchestrator can be deployed directly from the ZenML CLI:
zenml orchestrator deploy k8s_orchestrator --flavor=kubernetes --provider=<YOUR_PROVIDER> ...
You can pass other configurations specific to the stack components as key-value arguments. If you don't provide a name, a random one is generated for you. For more information about how to use the CLI for this, please refer to the dedicated documentation section.
How to use it
To use the Kubernetes orchestrator, we need:
The ZenML kubernetes integration installed. If you haven't done so, run zenml integration install kubernetes
Docker installed and running.
kubectl installed.
A remote artifact store as part of your stack.
A remote container registry as part of your stack.
A Kubernetes cluster deployed
kubectl installed and the name of the Kubernetes configuration context which points to the target cluster (i.e. run kubectl config get-contexts to see a list of available contexts). This is optional (see below).
It is recommended that you set up a Service Connector and use it to connect ZenML Stack Components to the remote Kubernetes cluster, especially if you are using a Kubernetes cluster managed by a cloud provider like AWS, GCP, or Azure. This guarantees that your Stack is fully portable to other environments and your pipelines are fully reproducible.
We can then register the orchestrator and use it in our active stack. This can be done in two ways: | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubernetes | 392 |
Deploy using HuggingFace Spaces
Deploying ZenML to Huggingface Spaces.
A quick way to deploy ZenML and get started is to use HuggingFace Spaces. HuggingFace Spaces is a platform for hosting and sharing ML projects and workflows, and it can also be used to deploy ZenML. You can be up and running in minutes (for free) with a hosted ZenML server, so it's a good option if you want to try out ZenML without any infrastructure overhead.
Note that it is not recommended to use HuggingFace Spaces for production use as by default the data stored there is non-persistent and the underlying machine is not as available to you as a dedicated machine. See our other deployment options if you want to use ZenML in production.
In this diagram, you can see what the default deployment of ZenML on HuggingFace looks like.
Deploying ZenML on HuggingFace Spaces
You can deploy ZenML on HuggingFace Spaces with just a few clicks:
To set up your ZenML app, you need to specify three main components: the Owner (either your personal account or an organization), a Space name, and the Visibility (a bit lower down the page). Note that the space visibility needs to be set to 'Public' if you wish to connect to the ZenML server from your local machine.
You have the option here to select a higher-tier machine to use for your server. The advantage of selecting a paid CPU instance is that it is not subject to auto-shutdown policies and thus will stay up as long as you leave it up. In order to make use of a persistent CPU, you'll likely want to create and set up a MySQL database to connect to (see below).
To personalize your Space's appearance, such as the title, emojis, and colors, navigate to "Files and Versions" and modify the metadata in your README.md file. Full information on Spaces configuration parameters can be found on the HuggingFace documentation reference guide. | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-using-huggingface-spaces | 401 |
Skypilot VM Orchestrator
Orchestrating your pipelines to run on VMs using SkyPilot.
The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the SkyPilot framework. This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution. We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
SkyPilot VM Orchestrator is currently supported only for Python 3.8 and 3.9.
When to use it
You should use the SkyPilot VM Orchestrator if:
you want to maximize cost savings by leveraging spot VMs and auto-picking the cheapest VM/zone/region/cloud.
you want to ensure high GPU availability by provisioning VMs in all zones/regions/clouds you have access to.
you don't need a built-in UI of the orchestrator. (You can still use ZenML's Dashboard to view and monitor your pipelines/artifacts.)
you're not willing to maintain Kubernetes-based solutions or pay for managed solutions like SageMaker.
How it works
The orchestrator leverages the SkyPilot framework to handle the provisioning and scaling of VMs. It automatically manages the process of launching VMs for your pipelines, with support for both on-demand and managed spot VMs. While you can select the VM type you want to use, the orchestrator also includes an optimizer that automatically selects the cheapest VM/zone/region/cloud for your workloads. Finally, the orchestrator includes an autostop feature that cleans up idle clusters, preventing unnecessary cloud costs. | stack-components | https://docs.zenml.io/stack-components/orchestrators/skypilot-vm | 410 |
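The optimizer's core idea can be sketched in a few lines. The offers and hourly prices below are made up for illustration; the real optimizer queries live cloud catalogs and considers zones, spot availability, and more.

```python
# Toy sketch of the cheapest-resource selection the SkyPilot optimizer
# performs: among all candidate (cloud, region, price) offers that
# satisfy the resource request, pick the lowest hourly price.
offers = [
    {"cloud": "aws", "region": "us-east-1", "gpus": 1, "price_per_hour": 3.06},
    {"cloud": "gcp", "region": "us-central1", "gpus": 1, "price_per_hour": 2.48},
    {"cloud": "azure", "region": "eastus", "gpus": 1, "price_per_hour": 3.40},
]

def pick_cheapest(offers, min_gpus=1):
    candidates = [o for o in offers if o["gpus"] >= min_gpus]
    return min(candidates, key=lambda o: o["price_per_hour"])

choice = pick_cheapest(offers)
```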
Slack Alerter
Sending automated alerts to a Slack channel.
The SlackAlerter enables you to send messages to a dedicated Slack channel directly from within your ZenML pipelines.
The slack integration contains the following two standard steps:
slack_alerter_post_step takes a string message or a custom Slack block, posts it to a Slack channel, and returns whether the operation was successful.
slack_alerter_ask_step also posts a message or a custom Slack block to a Slack channel, but waits for user feedback, and only returns True if a user explicitly approved the operation from within Slack (e.g., by sending "approve" / "reject" to the bot in response).
Interacting with Slack from within your pipelines can be very useful in practice:
The slack_alerter_post_step allows you to get notified immediately when failures happen (e.g., model performance degradation, data drift, ...),
The slack_alerter_ask_step allows you to integrate a human-in-the-loop into your pipelines before executing critical steps, such as deploying new models.
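The gating behavior of the ask step can be sketched independently of Slack. The helper names below are hypothetical; in the real slack_alerter_ask_step, the response comes from a user replying to the bot in the channel.

```python
# Framework-free sketch of the human-in-the-loop pattern enabled by
# slack_alerter_ask_step: a critical action runs only if the (simulated)
# user response is an explicit approval.
def ask_step(message: str, user_response: str) -> bool:
    # The real step posts `message` to Slack and waits for a reply;
    # here the reply is passed in directly for illustration.
    return user_response.strip().lower() == "approve"

def deploy_new_model(user_response: str) -> str:
    if ask_step("Deploy the new model to production?", user_response):
        return "model deployed"
    return "deployment skipped"
```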
How to use it
Requirements
Before you can use the SlackAlerter, you first need to install ZenML's slack integration:
zenml integration install slack -y
See the Integrations page for more details on ZenML integrations and how to install and use them.
Setting Up a Slack Bot
In order to use the SlackAlerter, you first need to have a Slack workspace set up with a channel that you want your pipelines to post to.
Then, you need to create a Slack App with a bot in your workspace.
Make sure to give your Slack bot chat:write and chat:write.public permissions in the OAuth & Permissions tab under Scopes.
Registering a Slack Alerter in ZenML
Next, you need to register a slack alerter in ZenML and link it to the bot you just created. You can do this with the following command:
zenml alerter register slack_alerter \
--flavor=slack \
--slack_token=<SLACK_TOKEN> \
--default_slack_channel_id=<SLACK_CHANNEL_ID> | stack-components | https://docs.zenml.io/stack-components/alerters/slack | 430 |
to the container registry.
Authentication Methods
Integrating and using an Azure Container Registry in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the Azure cloud platform is through an Azure Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the Azure Container Registry with other remote stack components also running in Azure.
This method uses the Docker client authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an Azure Container Registry. You don't need to supply credentials explicitly when you register the Azure Container Registry, as it leverages the local credentials and configuration that the Azure CLI and Docker client store on your local machine. However, you will need to install and set up the Azure CLI on your machine as a prerequisite, as covered in the Azure CLI documentation, before you register the Azure Container Registry.
With the Azure CLI installed and set up with credentials, you need to login to the container registry so Docker can pull and push images:
# Fill your REGISTRY_NAME in the placeholder in the following command.
# You can find the REGISTRY_NAME as part of your registry URI: `<REGISTRY_NAME>.azurecr.io`
az acr login --name=<REGISTRY_NAME>
Stacks using the Azure Container Registry set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use an Azure Service Connector to link your Azure Container Registry to the remote ACR registry. | stack-components | https://docs.zenml.io/stack-components/container-registries/azure | 330 |
HyperAI Service Connector
Configuring HyperAI Connectors to connect ZenML to HyperAI instances.
The ZenML HyperAI Service Connector allows authenticating with a HyperAI instance for deployment of pipeline runs. This connector provides pre-authenticated Paramiko SSH clients to Stack Components that are linked to it.
$ zenml service-connector list-types --type hyperai
βββββββββββββββββββββββββββββ―βββββββββββββ―βββββββββββββββββββββ―βββββββββββββββ―ββββββββ―βββββββββ
β NAME β TYPE β RESOURCE TYPES β AUTH METHODS β LOCAL β REMOTE β
β ββββββββββββββββββββββββββββΌβββββββββββββΌβββββββββββββββββββββΌβββββββββββββββΌββββββββΌβββββββββ¨
β HyperAI Service Connector β π€ hyperai β π€ hyperai-instance β rsa-key β β
 β β
 β
β β β β dsa-key β β β
β β β β ecdsa-key β β β
β β β β ed25519-key β β β
βββββββββββββββββββββββββββββ·βββββββββββββ·βββββββββββββββββββββ·βββββββββββββββ·ββββββββ·βββββββββ
Prerequisites
The HyperAI Service Connector is part of the HyperAI integration. It is necessary to install the integration in order to use this Service Connector:
zenml integration install hyperai installs the HyperAI integration
Resource Types
The HyperAI Service Connector supports HyperAI instances.
Authentication Methods
ZenML creates an SSH connection to the HyperAI instance in the background when using this Service Connector. It then provides these connections to stack components requiring them, such as the HyperAI Orchestrator. Multiple authentication methods are supported:
RSA key based authentication.
DSA (DSS) key based authentication.
ECDSA key based authentication.
ED25519 key based authentication. | how-to | https://docs.zenml.io/how-to/auth-management/hyperai-service-connector | 473 |
πTrack metrics and metadata
Tracking metrics and metadata
Logging metrics and metadata is standardized in ZenML. The most common pattern is to use the log_xxx methods, e.g.:
Log metadata to a model: log_model_metadata
Log metadata to an artifact: log_artifact_metadata
Log metadata to a step: log_step_metadata
PreviousLoad artifacts from Model
NextAttach metadata to a model
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/track-metrics-metadata | 90 |
tion_name) ββ 511 β
β β
β /home/stefan/aspyre/src/zenml/.venv/lib/python3.8/site-packages/botocore/client.py:915 in β
β _make_api_call β
β β
β 912 β β if http.status_code >= 300: β
β 913 β β β error_code = parsed_response.get("Error", {}).get("Code") β
β 914 β β β error_class = self.exceptions.from_code(error_code) β
β β± 915 β β β raise error_class(parsed_response, operation_name) β
β 916 β β else: β
β 917 β β β return parsed_response β
β 918 β
β°βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ―
ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
Impersonating accounts and assuming roles
These types of authentication methods require more work to set up because multiple permission-bearing accounts and roles need to be provisioned in advance depending on the target audience. On the other hand, they also provide the most flexibility and control. Despite their operational cost, if you are a platform engineer and have the infrastructure know-how necessary to understand and set up the authentication resources, this is for you. | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 318 |
`aws-auto` with access to the following resources:
βββββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β πΆ aws-generic β us-east-1 β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ s3-bucket β s3://zenbytes-bucket β
β β s3://zenfiles β
β β s3://zenml-demos β
β β s3://zenml-generative-chat β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π kubernetes-cluster β zenhacks-cluster β
β ββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ¨
β π³ docker-registry β 715803424590.dkr.ecr.us-east-1.amazonaws.com β
βββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββββββββββ
The Service Connector configuration shows how credentials have automatically been fetched from the local AWS CLI configuration:
zenml service-connector describe aws-auto
Example Command Output
Service connector 'aws-auto' of type 'aws' with id '9f3139fd-4726-421a-bc07-312d83f0c89e' is owned by user 'default' and is 'private'.
'aws-auto' aws Service Connector Details
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β 9cdc926e-55d7-49f0-838e-db5ac34bb7dc β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-auto β | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 548 |
Whylogs
How to collect and visualize statistics to track changes in your pipelines' data with whylogs/WhyLabs profiling.
The whylogs/WhyLabs Data Validator flavor provided with the ZenML integration uses whylogs and WhyLabs to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
When would you want to use it?
Whylogs is an open-source library that analyzes your data and creates statistical summaries called whylogs profiles. Whylogs profiles can be processed in your pipelines and visualized locally or uploaded to the WhyLabs platform, where more in-depth analysis can be carried out. Even though whylogs also supports other data types, the ZenML whylogs integration currently only works with tabular data in pandas.DataFrame format.
You should use the whylogs/WhyLabs Data Validator when you need the following data validation features that are possible with whylogs and WhyLabs:
Data Quality: validate data quality in model inputs or in a data pipeline
Data Drift: detect data drift in model input features
Model Drift: Detect training-serving skew, concept drift, and model performance degradation
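As a toy illustration of the drift idea behind these features (this is not whylogs; a whylogs profile tracks far richer statistics than a mean and standard deviation, and the numbers here are made up):

```python
import statistics

# Summarize a reference dataset into a small "profile", then flag new
# batches whose summary statistic strays too far from it.
reference = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0]
profile = {
    "mean": statistics.mean(reference),
    "stdev": statistics.stdev(reference),
}

def has_drifted(batch, profile, threshold=3.0):
    # Flag the batch if its mean is more than `threshold` reference
    # standard deviations away from the reference mean.
    return abs(statistics.mean(batch) - profile["mean"]) > threshold * profile["stdev"]
```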
You should consider one of the other Data Validator flavors if you need a different set of data validation features.
How do you deploy it?
The whylogs Data Validator flavor is included in the whylogs ZenML integration, you need to install it on your local machine to be able to register a whylogs Data Validator and add it to your stack:
zenml integration install whylogs -y
If you don't need to connect to the WhyLabs platform to upload and store the generated whylogs data profiles, the Data Validator stack component does not require any configuration parameters. Adding it to a stack is as simple as running e.g.: | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/whylogs | 382 |
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β 96a92154-4ec7-4722-bc18-21eeeadb8a4f β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-s3 (s3-bucket | s3://zenfiles client) β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β sts-token β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ s3-bucket β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β s3://zenfiles β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β 11h59m56s β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-06-15 18:56:33.880081 β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββ¨
β UPDATED_AT β 2023-06-15 18:56:33.880082 β
ββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββ
Configuration | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 555 |
] = None,
model_source_uri: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[RegistryModelVersion]:
"""Lists all model versions for a registered model."""
@abstractmethod
def get_model_version(self, name: str, version: str) -> RegistryModelVersion:
"""Gets a model version for a registered model."""
@abstractmethod
def load_model_version(
self,
name: str,
version: str,
**kwargs: Any,
) -> Any:
"""Loads a model version from the model registry."""
@abstractmethod
def get_model_uri_artifact_store(
self,
model_version: RegistryModelVersion,
) -> str:
"""Gets the URI artifact store for a model version."""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. To see the full implementation and get the complete docstrings, please check the source code on GitHub .
Build your own custom model registry
If you want to create your own custom flavor for a model registry, you can follow the following steps:
Learn more about the core concepts for the model registry here. Your custom model registry will be built on top of these concepts so it helps to be aware of them.
Create a class that inherits from BaseModelRegistry and implements the abstract methods.
Create a ModelRegistryConfig class that inherits from BaseModelRegistryConfig and adds any additional configuration parameters that you need.
Bring the implementation and the configuration together by inheriting from the BaseModelRegistryFlavor class. Make sure that you give a name to the flavor through its abstract property.
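The inheritance structure described in the steps above can be sketched framework-free. All class names here are hypothetical stand-ins for ZenML's Base* classes; the real base classes define many more abstract methods, as shown earlier.

```python
from abc import ABC, abstractmethod

# A config class holding parameters, an abstract base class defining the
# contract, and a concrete implementation filling in the abstract methods.
class BaseRegistryConfig:
    def __init__(self, registry_uri: str):
        self.registry_uri = registry_uri

class BaseRegistry(ABC):
    def __init__(self, config: BaseRegistryConfig):
        self.config = config

    @abstractmethod
    def get_model_uri_artifact_store(self, name: str, version: str) -> str:
        """Return where the artifact store keeps this model version."""

class MyRegistryConfig(BaseRegistryConfig):
    def __init__(self, registry_uri: str, timeout_s: int = 30):
        super().__init__(registry_uri)
        self.timeout_s = timeout_s  # extra flavor-specific parameter

class MyRegistry(BaseRegistry):
    def get_model_uri_artifact_store(self, name: str, version: str) -> str:
        return f"{self.config.registry_uri}/models/{name}/{version}"

registry = MyRegistry(MyRegistryConfig(registry_uri="s3://example-bucket"))
uri = registry.get_model_uri_artifact_store("mnist", "3")
```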
Once you are done with the implementation, you can register it through the CLI with the following command:
zenml model-registry flavor register <MODEL-REGISTRY-FLAVOR-SOURCE-PATH>
It is important to draw attention to how and when these base abstractions are coming into play in a ZenML workflow.
The CustomModelRegistryFlavor class is imported and utilized upon the creation of the custom flavor through the CLI. | stack-components | https://docs.zenml.io/stack-components/model-registries/custom | 407 |
ct store `s3-zenfiles` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββ―βββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββΌβββββββββββββββββ¨
β 19edc05b-92db-49de-bc84-aa9b3fb8261a β aws-s3-zenfiles β πΆ aws β π¦ s3-bucket β s3://zenfiles β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·βββββββββββββββββ
End-to-end examples
To get an idea of what a complete end-to-end journey looks like, from registering Service Connector all the way to configuring Stacks and Stack Components and running pipelines that access remote resources through Service Connectors, take a look at the following full-fledged examples:
the AWS Service Connector end-to-end examples
the GCP Service Connector end-to-end examples
the Azure Service Connector end-to-end examples
PreviousConnect services (AWS, GCP, Azure, K8s etc)
NextSecurity best practices
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 368 |
ed if the corresponding ZenML pipeline step fails. Consult the documentation for the particular Experiment Tracker flavor that you plan on using or are using in your stack for detailed information about how to use it in your ZenML pipelines.
PreviousDevelop a custom data validator
NextComet
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers | 61 |
other remote stack components also running in AWS.
This method uses the implicit AWS authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an S3 Artifact Store. You don't need to supply credentials explicitly when you register the S3 Artifact Store, as it leverages the local credentials and configuration that the AWS CLI stores on your local machine. However, you will need to install and set up the AWS CLI on your machine as a prerequisite, as covered in the AWS CLI documentation, before you register the S3 Artifact Store.
Certain dashboard functionality, such as visualizing or deleting artifacts, is not available when using an implicitly authenticated artifact store together with a deployed ZenML server because the ZenML server will not have permission to access the filesystem.
The implicit authentication method also needs to be coordinated with other stack components that are highly dependent on the Artifact Store and need to interact with it directly to work. If these components are not running on your machine, they do not have access to the local AWS CLI configuration and will encounter authentication failures while trying to access the S3 Artifact Store:
Orchestrators need to access the Artifact Store to manage pipeline artifacts
Step Operators need to access the Artifact Store to manage step-level artifacts
Model Deployers need to access the Artifact Store to load served models
To enable these use-cases, it is recommended to use an AWS Service Connector to link your S3 Artifact Store to the remote S3 bucket.
To set up the S3 Artifact Store to authenticate to AWS and access an S3 bucket, it is recommended to leverage the many features provided by the AWS Service Connector such as auto-configuration, best security practices regarding long-lived credentials and fine-grained access control and reusing the same credentials across multiple stack components. | stack-components | https://docs.zenml.io/stack-components/artifact-stores/s3 | 364 |
sing the 'default' username and an empty password.
The Dashboard will be available at http://localhost:8237 by default:
For more details on other possible deployment options, see the ZenML deployment documentation, and/or follow the starter guide to learn more.
Removal of Profiles and the local YAML database
Prior to 0.20.0, ZenML used a set of local YAML files to store information about the Stacks and Stack Components that were registered on your machine. In addition to that, these Stacks could be grouped together and organized under individual Profiles.
Profiles and the local YAML database have both been deprecated and removed in ZenML 0.20.0. Stacks, Stack Components, as well as all other information that ZenML tracks, such as Pipelines and Pipeline Runs, are now stored in a single SQL database. These entities are no longer organized into Profiles, but they can be scoped into different Projects instead.
Since the local YAML database is no longer used by ZenML 0.20.0, you will lose all the Stacks and Stack Components that you currently have configured when you update to ZenML 0.20.0. If you still want to use these Stacks, you will need to manually migrate them after the update.
π£ How to migrate your Profiles
If you're already using ZenML, you can migrate your existing Profiles to the new ZenML 0.20.0 paradigm by following these steps:
first, update ZenML to 0.20.0. This will automatically invalidate all your existing Profiles.
decide the ZenML deployment model that you want to follow for your projects. See the ZenML deployment documentation for available deployment scenarios. If you decide on using a local or remote ZenML server to manage your pipelines, make sure that you first connect your client to it by running zenml connect. | reference | https://docs.zenml.io/v/docs/reference/migration-guide/migration-zero-twenty | 378 |
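For example (the server URL here is a placeholder), connecting your client to a deployed ZenML server looks like:

```shell
zenml connect --url=https://<your-zenml-server>:8080
```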
Access secrets in a step
Fetching secret values in a step
ZenML secrets are groupings of key-value pairs which are securely stored in the ZenML secrets store. Additionally, a secret always has a name that allows you to fetch or reference them in your pipelines and stacks. In order to learn more about how to configure and create secrets, please refer to the platform guide on secrets.
You can access secrets directly from within your steps through the ZenML Client API. This allows you to use your secrets for querying APIs from within your step without hard-coding your access keys:
from zenml import step
from zenml.client import Client
from somewhere import authenticate_to_some_api
@step
def secret_loader() -> None:
    """Load the example secret from the server."""
    # Fetch the secret from ZenML.
    secret = Client().get_secret("<SECRET_NAME>")

    # `secret.secret_values` will contain a dictionary with all key-value
    # pairs within your secret.
    authenticate_to_some_api(
        username=secret.secret_values["username"],
        password=secret.secret_values["password"],
        ...
    )
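The secret_values attribute behaves like a plain dictionary. As an illustration (using an ordinary dict to stand in for a fetched secret, since this sketch does not talk to a ZenML server), you may want to fail fast with a clear error when a required key is missing:

```python
# Stand-in for `secret.secret_values` as fetched from the ZenML server.
secret_values = {"username": "admin", "password": "abc123"}


def read_credential(values: dict, key: str) -> str:
    """Return a required secret key, raising a descriptive error if absent."""
    try:
        return values[key]
    except KeyError:
        raise KeyError(f"Secret is missing required key '{key}'") from None


username = read_credential(secret_values, "username")
```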
See Also:
Learn how to create and manage secrets
Find out more about the secrets backend in ZenML
PreviousVersion pipelines
NextFetching pipelines
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/build-pipelines/access-secrets-in-a-step | 256 |
e how they affect the performance of the pipeline.Collaboration - You can share the pipeline with your team and collaborate on it together. You can also use the ZenML dashboard to share insights and findings with your team, making it easier to work together on the pipeline.
In the next section, we'll showcase the components of a basic RAG pipeline. This will give you a taste of how you can leverage the power of LLMs in your MLOps workflows using ZenML. Subsequent sections will cover more advanced topics like reranking retrieved documents, finetuning embeddings, and finetuning the LLM itself.
PreviousRAG in 85 lines of code
NextData ingestion and preprocessing
Last updated 15 days ago | user-guide | https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/understanding-rag | 149 |
Evaluation in practice
Learn how to evaluate the performance of your RAG system in practice.
Now that we've seen individually how to evaluate the retrieval and generation components of our pipeline, it's worth taking a step back to think through how all of this works in practice.
Our example project includes the evaluation as a separate pipeline that optionally runs after the main pipeline that generates and populates the embeddings. This is a good practice to follow, as it allows you to separate the concerns of generating the embeddings and evaluating them. Depending on the specific use case, the evaluations could be included as part of the main pipeline and used as a gating mechanism to determine whether the embeddings are good enough to be used in production.
Given some of the performance constraints of the LLM judge, it might be worth experimenting with using a local LLM judge for evaluation during the course of the development process and then running the full evaluation using a cloud LLM like Anthropic's Claude or OpenAI's GPT-3.5 or 4. This can help you iterate faster and get a sense of how well your embeddings are performing before committing to the cost of running the full evaluation.
Automated evaluation isn't a silver bullet
While automating the evaluation process can save you time and effort, it's important to remember that it doesn't replace the need for a human to review the results. The LLM judge is expensive to run, and it takes time to get the results back. Automating the evaluation process can help you focus on the details and the data, but it doesn't replace the need for a human to review the results and make sure that the embeddings (and the RAG system as a whole) are performing as expected.
When and how much to evaluate | user-guide | https://docs.zenml.io/user-guide/llmops-guide/evaluation/evaluation-in-practice | 350 |
GCP Service Connector
Configuring GCP Service Connectors to connect ZenML to GCP resources such as GCS buckets, GKE Kubernetes clusters, and GCR container registries.
The ZenML GCP Service Connector facilitates the authentication and access to managed GCP services and resources. These encompass a range of resources, including GCS buckets, GCR container repositories, and GKE clusters. The connector provides support for various authentication methods, including GCP user accounts, service accounts, short-lived OAuth 2.0 tokens, and implicit authentication.
To ensure heightened security measures, this connector always issues short-lived OAuth 2.0 tokens to clients instead of long-lived credentials unless explicitly configured to do otherwise. Furthermore, it includes automatic configuration and detection of credentials locally configured through the GCP CLI.
This connector serves as a general means of accessing any GCP service by issuing OAuth 2.0 credential objects to clients. Additionally, the connector can handle specialized authentication for GCS, Docker, and Kubernetes Python clients. It also allows for the configuration of local Docker and Kubernetes CLIs.
$ zenml service-connector list-types --type gcp
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━┯━━━━━━━━┓
┃ NAME                  │ TYPE   │ RESOURCE TYPES        │ AUTH METHODS     │ LOCAL │ REMOTE ┃
┠───────────────────────┼────────┼───────────────────────┼──────────────────┼───────┼────────┨
┃ GCP Service Connector │ 🔵 gcp │ 🔵 gcp-generic        │ implicit         │ ✅    │ ✅     ┃
┃                       │        │ 📦 gcs-bucket         │ user-account     │       │        ┃
┃                       │        │ 🌀 kubernetes-cluster │ service-account  │       │        ┃
┃                       │        │ 🐳 docker-registry    │ external-account │       │        ┃
┃                       │        │                       │ oauth2-token     │       │        ┃
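For instance (a sketch; the connector name is arbitrary), you can let the connector auto-configure itself from the credentials already set up through your local gcloud CLI:

```shell
zenml service-connector register gcp-auto --type gcp --auto-configure
```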
memory: The amount of memory required. If a string, must be of the form '16' or '16+', where the + indicates that the task requires at least 16 GB of memory.
accelerator_args: Accelerator-specific arguments. For example, {'tpu_vm': True, 'runtime_version': 'tpu-vm-base'} for TPUs.
use_spot: Whether to use spot instances. If None, defaults to False.
spot_recovery: The spot recovery strategy used by managed spot jobs to recover the cluster from preemption. Read more about the available strategies here
region: The cloud region to use.
zone: The cloud zone to use within the region.
image_id: The image ID to use. If a string, must be the image ID from the cloud, such as AWS: 'ami-1234567890abcdef0', GCP: 'projects/my-project-id/global/images/my-image-name'; or an image tag provided by SkyPilot, such as AWS: 'skypilot:gpu-ubuntu-2004'. If a dict, must map from region to image ID.
disk_size: The size of the OS disk in GiB.
disk_tier: The disk performance tier to use. If None, defaults to 'medium'.
cluster_name: Name of the cluster to create/reuse. If None, auto-generate a name. SkyPilot uses the term cluster to refer to a group of VMs or a single VM provisioned to execute the task. The cluster name is used to identify the cluster and to determine whether to reuse an existing cluster or create a new one.
retry_until_up: Whether to retry launching the cluster until it is up.
idle_minutes_to_autostop: Automatically stop the cluster after this many minutes of idleness, i.e., no running or pending jobs in the cluster's job queue. Idleness gets reset whenever setting-up/running/pending jobs are found in the job queue. Setting this flag is equivalent to running sky.launch(..., detach_run=True, ...) and then sky.autostop(idle_minutes=<minutes>). If not set, the cluster will not be autostopped. | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/skypilot-vm | 492 |
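The notation rules above ('16+' for memory, 'V100:2' for accelerators) can be sketched as a small parser. This is purely illustrative and not part of SkyPilot or ZenML:

```python
def parse_memory(spec: str) -> tuple:
    """Parse a memory spec like '16' or '16+'; return (gigabytes, at_least)."""
    at_least = spec.endswith("+")
    return float(spec.rstrip("+")), at_least


def parse_accelerators(spec) -> dict:
    """Normalize 'V100', 'V100:2', or {'V100': 2} into a name -> count dict."""
    if isinstance(spec, dict):
        return spec
    name, _, count = spec.partition(":")
    return {name: int(count) if count else 1}
```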
…github.com/your-username/your-template.git your-project

Replace https://github.com/your-username/your-template.git with the URL of your template repository, and your-project with the name of the new project you want to create.
Use your template with ZenML. Once your template is ready, you can use it with the zenml init command:
zenml init --template https://github.com/your-username/your-template.git
Replace https://github.com/your-username/your-template.git with the URL of your template repository.
If you want to use a specific version of your template, you can use the --template-tag option to specify the git tag of the version you want to use:
zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0
Replace v1.0.0 with the git tag of the version you want to use.
That's it! Now you have your own ZenML project template that you can use to quickly set up new ML projects. Remember to keep your template up-to-date with the latest best practices and changes in your ML workflows.
Our Production Guide documentation is built around the code of the E2E Batch project template. Most examples are based on it, so we highly recommend you install the e2e_batch template with the --template-with-defaults flag before diving deeper into this documentation section, so you can follow along with this guide using your own local environment.
mkdir e2e_batch
cd e2e_batch
zenml init --template e2e_batch --template-with-defaults
PreviousConnect your git repository
NextBest practices
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/setting-up-a-project-repository/using-project-templates | 343 |
…a comprehensive list of available information.

Artifacts

Each step of a pipeline run can have multiple output and input artifacts that we can inspect via the outputs and inputs properties.
To inspect the output artifacts of a step, you can use the outputs attribute, which is a dictionary that can be indexed using the name of an output. Alternatively, if your step only has a single output, you can use the output property as a shortcut directly:
# The outputs of a step are accessible by name
output = step.outputs["output_name"]
# If there is only one output, you can use the `.output` property instead
output = step.output
# use the `.load()` method to load the artifact into memory
my_pytorch_model = output.load()
Similarly, you can use the inputs and input properties to get the input artifacts of a step instead.
Check out this page to see what the output names of your steps are and how to customize them.
Note that the output of a step corresponds to a specific artifact version.
Fetching artifacts directly
If you'd like to fetch an artifact or an artifact version directly, it is easy to do so with the Client:
from zenml.client import Client
# Get artifact
artifact = Client().get_artifact('iris_dataset')
artifact.versions # Contains all the versions of the artifact
output = artifact.versions['2022'] # Get version name "2022"
# Get artifact version directly:
# Using version name:
output = Client().get_artifact_version('iris_dataset', '2022')
# Using UUID
output = Client().get_artifact_version('f429f94c-fb15-43b5-961d-dbea287507c5')
loaded_artifact = output.load()
Artifact information
Regardless of how one fetches it, each artifact contains a lot of general information about the artifact as well as datatype-specific metadata and visualizations.
Metadata | how-to | https://docs.zenml.io/how-to/build-pipelines/fetching-pipelines | 388 |
the Tekton orchestrator, check out the SDK Docs .Enabling CUDA for GPU-backed hardware
Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow the instructions on this page to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
PreviousAWS Sagemaker Orchestrator
NextAirflow Orchestrator
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/tekton | 97 |
res, and the thought process behind them.
Podcast

We also have a Podcast series that brings you interviews and discussions with industry leaders, top technology professionals, and others. We discuss the latest developments in machine learning, deep learning, and artificial intelligence, with a particular focus on MLOps, or how trained models are used in production.
Newsletter
You can also subscribe to our Newsletter where we share what we learn as we develop open-source tooling for production machine learning. You will also get all the exciting news about ZenML in general.
PreviousMigration guide 0.39.1 β 0.41.0
NextFAQ
Last updated 15 days ago | reference | https://docs.zenml.io/reference/community-and-content | 135 |
A starter project
Put your new knowledge into action with a simple starter project
By now, you have understood some of the basic pillars of a MLOps system:
Pipelines and steps
Artifacts
Models
We will now put this into action with a simple starter project.
Get started
Start with a fresh virtual environment with no dependencies. Then let's install our dependencies:
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
We will then use ZenML templates to help us get the code we need for the project:
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
The starter template is the same as the ZenML quickstart. You can clone it like so:
git clone --depth 1 git@github.com:zenml-io/zenml.git
cd zenml/examples/quickstart
pip install -r requirements.txt
zenml init
What you'll learn
You can either follow along in the accompanying Jupyter notebook, or just keep reading the README file for more instructions.
Either way, at the end you would run three pipelines that are exemplary:
A feature engineering pipeline that loads data and prepares it for training.
A training pipeline that loads the preprocessed dataset and trains a model.
A batch inference pipeline that runs predictions on the trained model with new data.
And voilà! You're now well on your way to be an MLOps expert. As a next step, try introducing the ZenML starter template to your colleagues and see the benefits of a standard MLOps framework in action!
Conclusion and next steps
This marks the end of the first chapter of your MLOps journey with ZenML. Make sure you do your own experimentation with ZenML to master the basics. When ready, move on to the production guide, which is the next part of the series.
PreviousTrack ML models
NextProduction guide
Last updated 15 days ago | user-guide | https://docs.zenml.io/user-guide/starter-guide/starter-project | 422 |
Connect services (AWS, GCP, Azure, K8s etc)
Connect your ZenML deployment to a cloud provider and other infrastructure services and resources.
A production-grade MLOps platform involves interactions between a diverse combination of third-party libraries and external services sourced from various different vendors. One of the most daunting hurdles in building and operating an MLOps platform composed of multiple components is configuring and maintaining uninterrupted and secured access to the infrastructure resources and services that it consumes.
In layman's terms, your pipeline code needs to "connect" to a handful of different services to run successfully and do what it's designed to do. For example, it might need to connect to a private AWS S3 bucket to read and store artifacts, a Kubernetes cluster to execute steps with Kubeflow or Tekton, and a private GCR container registry to build and store container images. ZenML makes this possible by allowing you to configure authentication information and credentials embedded directly into your Stack Components, but this doesn't scale well when you have more than a few Stack Components and has many other disadvantages related to usability and security.
Gaining access to infrastructure resources and services requires knowledge about the different authentication and authorization mechanisms and involves configuring and maintaining valid credentials. It gets even more complicated when these different services need to access each other. For instance, the Kubernetes container running your pipeline step needs access to the S3 bucket to store artifacts or needs to access a cloud service like AWS SageMaker, VertexAI, or AzureML to run a CPU/GPU intensive task like training a model. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 316 |
Azure Container Registry
Storing container images in Azure.
The Azure container registry is a container registry flavor that comes built-in with ZenML and uses the Azure Container Registry to store container images.
When to use it
You should use the Azure container registry if:
one or more components of your stack need to pull or push container images.
you have access to Azure. If you're not using Azure, take a look at the other container registry flavors.
How to deploy it
Go here and choose a subscription, resource group, location, and registry name. Then click on Review + Create to create your container registry.
How to find the registry URI
The Azure container registry URI should have the following format:
<REGISTRY_NAME>.azurecr.io
# Examples:
zenmlregistry.azurecr.io
myregistry.azurecr.io
To figure out the URI for your registry:
Go to the Azure portal.
In the search bar, enter container registries and select the container registry you want to use. If you don't have any container registries yet, check out the deployment section on how to create one.
Use the name of your registry to fill the template <REGISTRY_NAME>.azurecr.io and get your URI.
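The URI template above is simple enough to assemble programmatically. The helper below is only an illustration of the format (ACR registry names are alphanumeric), not part of ZenML:

```python
def acr_uri(registry_name: str) -> str:
    """Build an Azure Container Registry URI of the form <REGISTRY_NAME>.azurecr.io."""
    if not registry_name.isalnum():
        raise ValueError("ACR registry names contain only letters and digits")
    return f"{registry_name}.azurecr.io"
```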
How to use it
To use the Azure container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=azure \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
You also need to set up authentication required to log in to the container registry.
Authentication Methods | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/azure | 365 |
Create a custom class that will hold the data that you want to visualize.
Return your custom class from any of your ZenML steps.
Example: Facets Data Skew Visualization
As an example, have a look at the models, materializers, and steps of the Facets Integration, which can be used to visualize the data skew between multiple Pandas DataFrames:
1. Custom Class The FacetsComparison is the custom class that holds the data required for the visualization.
class FacetsComparison(BaseModel):
    datasets: List[Dict[str, Union[str, pd.DataFrame]]]
2. Materializer The FacetsMaterializer is a custom materializer that only handles this custom class and contains the corresponding visualization logic.
class FacetsMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (FacetsComparison,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA_ANALYSIS

    def save_visualizations(
        self, data: FacetsComparison
    ) -> Dict[str, VisualizationType]:
        html = ...  # Create a visualization for the custom type
        visualization_path = os.path.join(self.uri, VISUALIZATION_FILENAME)
        with fileio.open(visualization_path, "w") as f:
            f.write(html)
        return {visualization_path: VisualizationType.HTML}
3. Step There are three different steps in the facets integration that can be used to create FacetsComparisons for different sets of inputs. E.g., the facets_visualization_step below takes two DataFrames as inputs and builds a FacetsComparison object out of them:
@step
def facets_visualization_step(
    reference: pd.DataFrame, comparison: pd.DataFrame
) -> FacetsComparison:  # Return the custom type from your step
    return FacetsComparison(
        datasets=[
            {"name": "reference", "table": reference},
            {"name": "comparison", "table": comparison},
        ]
    )
This is what happens now under the hood when you add the facets_visualization_step into your pipeline:
The step creates and returns a FacetsComparison. | how-to | https://docs.zenml.io/v/docs/how-to/visualize-artifacts/creating-custom-visualizations | 418 |
Credentials are always stored on the customer side. Even though they are stored customer-side, access to ZenML secrets is fully managed by ZenML Pro. The individually deployed ZenML Servers can also be allowed to use some of those credentials to connect directly to customer infrastructure services to implement control-plane features such as artifact visualization or triggering pipelines. This implies that the secret values are allowed to leave the customer environment so that their access can be managed centrally by ZenML Pro and access control policies can be enforced, but ZenML users and pipelines never have direct access to the secret store.
All access to customer secrets is, of course, regulated through authentication and RBAC, so that only authorized users can access the secrets. This deployment scenario is meant for customers who want to use the ZenML Pro but want to keep their secrets on their own infrastructure.
Scenario 3: Fully On-prem
In this scenario, all services, data, and secrets are deployed on the customer cloud. This is the opposite of Scenario 1, and is meant for customers who require completely airgapped deployments, for the tightest security standards. Reach out to us if you want to set this up.
Are you interested in ZenML Pro? Sign up and get access to Scenario 1. with a free 14 day trial now!
PreviousZenML Pro
NextZenML SaaS
Last updated 12 days ago | getting-started | https://docs.zenml.io/v/docs/getting-started/zenml-pro/system-architectures | 279 |
Finally, you can ask for a particular resource, if you know its Resource Name beforehand:
zenml service-connector list-resources --resource-type s3-bucket --resource-id zenfiles
Example Command Output
The 's3-bucket' resource with name 'zenfiles' can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME        │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────┼────────────────┨
┃ 373a73c2-8295-45d4-a768-45f5a0f744ea │ aws-multi-type        │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────┼────────────────┨
┃ fa9325ab-ce01-4404-aec3-61a3af395d48 │ aws-s3-multi-instance │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────┼────────────────┨
┃ 19edc05b-92db-49de-bc84-aa9b3fb8261a │ aws-s3-zenfiles       │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
Connect Stack Components to resources
Service Connectors and the resources and services that they can authenticate to and grant access to are only useful because they are a means of providing Stack Components a better and easier way of accessing external resources.
If you are looking for a quick, assisted tour, we recommend using the interactive CLI mode to connect a Stack Component to a compatible Service Connector, especially if this is your first time doing it, e.g.:
zenml artifact-store connect <component-name> -i
zenml orchestrator connect <component-name> -i
zenml container-registry connect <component-name> -i | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 641 |
io_utils.read_file_contents_as_string(artifact_uri)

using a temporary local file/folder to serialize and copy in-memory objects to/from the artifact store (heavily used in Materializers to transfer information between the Artifact Store and external libraries that don't support writing/reading directly to/from the artifact store backend):
import os
import tempfile

import external_lib

from zenml.io import fileio
from zenml.repository import Repository

root_path = Repository().active_stack.artifact_store.path

artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.json")
fileio.makedirs(artifact_path)

with tempfile.NamedTemporaryFile(
    mode="w", suffix=".json", delete=True
) as f:
    external_lib.external_object.save_to_file(f.name)
    # Copy it into artifact store
    fileio.copy(f.name, artifact_uri)
import os
import tempfile

import external_lib

from zenml.io import fileio
from zenml.repository import Repository

root_path = Repository().active_stack.artifact_store.path

artifact_path = os.path.join(root_path, "artifacts", "examples")
artifact_uri = os.path.join(artifact_path, "test.json")

with tempfile.NamedTemporaryFile(
    mode="w", suffix=".json", delete=True
) as f:
    # Copy the serialized object from the artifact store
    fileio.copy(artifact_uri, f.name)
    external_lib.external_object.load_from_file(f.name)
PreviousDevelop a custom orchestrator
NextLocal Artifact Store
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/artifact-stores | 290 |
Google Cloud Image Builder
Building container images with Google Cloud Build
The Google Cloud image builder is an image builder flavor provided by the ZenML gcp integration that uses Google Cloud Build to build container images.
When to use it
You should use the Google Cloud image builder if:
you're unable to install or use Docker on your client machine.
you're already using GCP.
your stack is mainly composed of other Google Cloud components such as the GCS Artifact Store or the Vertex Orchestrator.
How to deploy it
In order to use the ZenML Google Cloud image builder you need to enable Google Cloud Build relevant APIs on the Google Cloud project.
How to use it
The GCP image builder (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
To use the Google Cloud image builder, we need:
The ZenML gcp integration installed. If you haven't done so, run: zenml integration install gcp
A GCP Artifact Store where the build context will be uploaded, so Google Cloud Build can access it.
A GCP container registry where the built image will be pushed.
Optionally, the GCP project ID in which you want to run the build and a service account with the needed permissions to run the build. If not provided, then the project ID and credentials will be inferred from the environment.
Optionally, you can change:
the Docker image used by Google Cloud Build to execute the steps to build and push the Docker image. By default, the builder image will be 'gcr.io/cloud-builders/docker'.
the network to which the container used to build the ZenML pipeline Docker image will be attached. More information: Cloud build network.
the build timeout for the build, and for the blocking operation waiting for the build to finish. More information: Build Timeout.
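Putting it together, registering the image builder might look like the following sketch (all values in angle brackets are placeholders, and the optional flags correspond to the settings listed above):

```shell
zenml image-builder register <IMAGE_BUILDER_NAME> \
    --flavor=gcp \
    --cloud_builder_image=<BUILDER_IMAGE_NAME> \
    --network=<DOCKER_NETWORK> \
    --build_timeout=<BUILD_TIMEOUT_IN_SECONDS>

# Add the image builder to the active stack
zenml stack update -i <IMAGE_BUILDER_NAME>
```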
You can inspect your artifacts and their versions via one of the following interfaces:

To list artifacts: zenml artifact list

The ZenML Pro dashboard offers advanced visualization features for artifact exploration.
To prevent visual clutter, make sure to assign names to your most important artifacts that you would like to explore visually.
Versioning artifacts manually
ZenML automatically versions all created artifacts using auto-incremented numbering. I.e., if you have defined a step creating an artifact named iris_dataset as shown above, the first execution of the step will create an artifact with this name and version "1", the second execution will create version "2", and so on.
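Conceptually, auto-incrementing behaves like taking the highest numeric version seen so far and adding one, while non-numeric (custom) versions are left alone. The toy sketch below is an illustration only, not ZenML's actual implementation:

```python
def next_auto_version(existing_versions: list) -> str:
    """Return the next auto-incremented version given existing version strings."""
    numeric = [int(v) for v in existing_versions if str(v).isdigit()]
    return str(max(numeric, default=0) + 1)
```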
While ZenML handles artifact versioning automatically, you have the option to specify custom versions using the ArtifactConfig. This may come into play during critical runs like production releases.
from zenml import step, ArtifactConfig

@step
def training_data_loader() -> (
    Annotated[
        pd.DataFrame,
        # Add `ArtifactConfig` to control more properties of your artifact
        ArtifactConfig(
            name="iris_dataset",
            version="raw_2023",
        ),
    ]
):
    ...
The next execution of this step will then create an artifact with the name iris_dataset and version raw_2023. This is primarily useful if you are making a particularly important pipeline run (such as a release) whose artifacts you want to distinguish at a glance later.
Since custom versions cannot be duplicated, the above step can only be run once successfully. To avoid altering your code frequently, consider using a YAML config for artifact versioning.
After execution, iris_dataset and its version raw_2023 can be seen using:
To list versions: zenml artifact version list
The Cloud dashboard visualizes version history for your review.
Add metadata and tags to artifacts
If you would like to extend your artifacts with extra metadata or tags you can do so by following the patterns demonstrated below:
from zenml import step, get_step_context, ArtifactConfig
from typing_extensions import Annotated
# below we annotate output with `ArtifactConfig` giving it a name, | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/manage-artifacts | 400 |
This authentication method requires an Azure service principal to be created and a client secret to be generated. The following assumes an Azure service principal was configured with a client secret and has permissions to access an Azure blob storage container, an AKS Kubernetes cluster and an ACR container registry. The service principal client ID, tenant ID and client secret are then used to configure the Azure Service Connector.
zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id=a79f3633-8f45-4a74-a42e-68871c17b7fb --client_id=8926254a-8c3f-430a-a2fd-bdab234d491e --client_secret=AzureSuperSecret
Example Command Output
Registering service connector 'azure-service-principal'...
Successfully registered service connector `azure-service-principal` with access to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE         │ RESOURCE NAMES                                ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🇦 azure-generic       │ ZenML Subscription                            ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 📦 blob-container     │ az://demo-zenmlartifactstore                  ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster │ demo-zenml-demos/demo-zenml-terraform-cluster ┃
┠───────────────────────┼───────────────────────────────────────────────┨
┃ 🐳 docker-registry    │ demozenmlcontainerregistry.azurecr.io         ┃
┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
The Service Connector configuration shows that the connector is configured with service principal credentials:
zenml service-connector describe azure-service-principal
Example Command Output
ββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β | how-to | https://docs.zenml.io/how-to/auth-management/azure-service-connector | 542 |
values={
    "username": "admin",
    "password": "abc123"
    }
)

Other Client methods used for secrets management include get_secret to fetch a secret by name or id, update_secret to update an existing secret, list_secrets to query the secrets store using a variety of filtering and sorting criteria, and delete_secret to delete a secret. The full Client API reference is available here.
Set scope for secrets
ZenML secrets can be scoped to a workspace or a user. This allows you to create secrets that are only accessible within a specific workspace or to one user.
By default, all created secrets are scoped to the active workspace. To create a secret and scope it to your active user instead, you can pass the --scope argument to the CLI command:
zenml secret create <SECRET_NAME> \
--scope user \
--<KEY_1>=<VALUE_1> \
--<KEY_2>=<VALUE_2>
Scopes also act as individual namespaces. When you are referencing a secret by name in your pipelines and stacks, ZenML will first look for a secret with that name scoped to the active user, and if it doesn't find one, it will look for one in the active workspace.
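The user-then-workspace lookup order described above can be sketched in a few lines of plain Python. This mimics the documented behavior only; it is not ZenML's actual implementation:

```python
def resolve_secret(name, user_secrets, workspace_secrets):
    """Look up a secret by name: user scope first, then the active workspace."""
    if name in user_secrets:
        return user_secrets[name]
    if name in workspace_secrets:
        return workspace_secrets[name]
    raise KeyError(f"No secret named {name!r} in either scope")


user_secrets = {"api_token": {"token": "user-level"}}
workspace_secrets = {"api_token": {"token": "workspace-level"}, "db": {"password": "pw"}}

print(resolve_secret("api_token", user_secrets, workspace_secrets))  # user scope wins
print(resolve_secret("db", user_secrets, workspace_secrets))         # falls back to workspace
```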
Accessing registered secrets
Reference secrets in stack component attributes and settings
Some of the components in your stack require you to configure them with sensitive information like passwords or tokens, so they can connect to the underlying infrastructure. Secret references allow you to configure these components in a secure way by not specifying the value directly but instead referencing a secret by providing the secret name and key. Referencing a secret for the value of any string attribute of your stack components, simply specify the attribute using the following syntax: {{<SECRET_NAME>.<SECRET_KEY>}}
For example:
# Register a secret called `mlflow_secret` with key-value pairs for the
# username and password to authenticate with the MLflow tracking server
# Using central secrets management
zenml secret create mlflow_secret \
--username=admin \
--password=abc123 | how-to | https://docs.zenml.io/how-to/interact-with-secrets | 411 |
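To make the `{{<SECRET_NAME>.<SECRET_KEY>}}` syntax concrete, here is a minimal resolver sketch in plain Python. ZenML performs this substitution internally; the code below is only an illustration of the reference format:

```python
import re

SECRET_REF = re.compile(r"\{\{\s*(\w+)\.(\w+)\s*\}\}")


def resolve_references(value, secrets):
    """Replace {{secret_name.key}} references with values from a secrets mapping."""
    def _sub(match):
        secret_name, key = match.group(1), match.group(2)
        return secrets[secret_name][key]
    return SECRET_REF.sub(_sub, value)


secrets = {"mlflow_secret": {"username": "admin", "password": "abc123"}}
print(resolve_references("{{mlflow_secret.username}}", secrets))  # -> admin
```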
configuration files, workload or managed identities.

This method may constitute a security risk, because it can give users access to the same cloud resources and services that the ZenML Server itself is configured to access. For this reason, all implicit authentication methods are disabled by default and need to be explicitly enabled by setting the ZENML_ENABLE_IMPLICIT_AUTH_METHODS environment variable or the helm chart enableImplicitAuthMethods configuration option to true in the ZenML deployment.
This authentication method doesn't require any credentials to be explicitly configured. It automatically discovers and uses credentials from one of the following sources:
environment variables
workload identity - if the application is deployed to an Azure Kubernetes Service with Managed Identity enabled. This option can only be used when running the ZenML server on an AKS cluster.
managed identity - if the application is deployed to an Azure host with Managed Identity enabled. This option can only be used when running the ZenML client or server on an Azure host.
Azure CLI - if a user has signed in via the Azure CLI az login command.
This is the quickest and easiest way to authenticate to Azure services. However, the results depend on how ZenML is deployed and the environment where it is used and is thus not fully reproducible:
when used with the default local ZenML deployment or a local ZenML server, the credentials are the same as those used by the Azure CLI or extracted from local environment variables.
when connected to a ZenML server, this method only works if the ZenML server is deployed in Azure and will use the workload identity attached to the Azure resource where the ZenML server is running (e.g. an AKS cluster). The permissions of the managed identity may need to be adjusted to allows listing and accessing/describing the Azure resources that the connector is configured to access. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/azure-service-connector | 362 |
Configuration at pipeline or step level

When running your ZenML pipeline with the Sagemaker orchestrator, the configuration set when configuring the orchestrator as a ZenML component will be used by default. However, it is possible to provide additional configuration at the pipeline or step level. This allows you to run whole pipelines or individual steps with alternative configurations. For example, this allows you to run the training process with a heavier, GPU-enabled instance type, while running other steps with lighter instances.
Additional configuration for the Sagemaker orchestrator can be passed via SagemakerOrchestratorSettings. Here, it is possible to configure processor_args, which is a dictionary of arguments for the Processor. For available arguments, see the Sagemaker documentation . Currently, it is not possible to provide custom configuration for the following attributes:
image_uri
instance_count
sagemaker_session
entrypoint
base_job_name
env
For example, settings can be provided in the following way:
from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import SagemakerOrchestratorSettings

sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    processor_args={
        "instance_type": "ml.t3.medium",
        "volume_size_in_gb": 30
    }
)
They can then be applied to a step as follows:
@step(settings={"orchestrator.sagemaker": sagemaker_orchestrator_settings})
For example, if your ZenML component is configured to use ml.c5.xlarge with 400GB additional storage by default, all steps will use it except for the step above, which will use ml.t3.medium with 30GB additional storage.
Check out this docs page for more information on how to specify settings in general.
For more information and a full list of configurable attributes of the Sagemaker orchestrator, check out the SDK Docs .
S3 data access in ZenML steps | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/sagemaker | 371 |
bucket name: {bucket-name}
EKS Kubernetes cluster

Allows users to access an EKS cluster as a standard Kubernetes cluster resource. When used by Stack Components, they are provided a pre-authenticated Python Kubernetes client instance.
The configured credentials must have at least the following AWS IAM permissions associated with the ARNs of EKS clusters that the connector will be allowed to access (e.g. arn:aws:eks:{region_id}:{project_id}:cluster/* represents all the EKS clusters available in the target AWS region).
eks:ListClusters
eks:DescribeCluster
If you are using the AWS IAM role, Session Token or Federation Token authentication methods, you don't have to worry too much about restricting the permissions of the AWS credentials that you use to access the AWS cloud resources. These authentication methods already support automatically generating temporary tokens with permissions down-scoped to the minimum required to access the target resource.
In addition to the above permissions, if the credentials are not associated with the same IAM user or role that created the EKS cluster, the IAM principal must be manually added to the EKS cluster's aws-auth ConfigMap, otherwise the Kubernetes client will not be allowed to access the cluster's resources. This makes it more challenging to use the AWS Implicit and AWS Federation Token authentication methods for this resource. For more information, see this documentation.
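For orientation, manually granting such an IAM principal access usually means appending an entry to the mapRoles section of the aws-auth ConfigMap in the kube-system namespace. The snippet below is a generic sketch with placeholder values — not a ZenML-specific configuration — so consult the linked AWS documentation for the exact mapping your setup needs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<ACCOUNT_ID>:role/<CONNECTOR_ROLE_NAME>
      username: zenml-connector
      groups:
        - system:masters  # grant a narrower RBAC group in production
```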
If set, the resource name must identify an EKS cluster using one of the following formats:
EKS cluster name (canonical resource name): {cluster-name}
EKS cluster ARN: arn:aws:eks:{region}:{account-id}:cluster/{cluster-name}
EKS cluster names are region scoped. The connector can only be used to access EKS clusters in the AWS region that it is configured to use.
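The two accepted resource name formats can be illustrated with a small parsing sketch in plain Python (for illustration only; this is not ZenML code):

```python
import re

EKS_ARN = re.compile(
    r"^arn:aws:eks:(?P<region>[\w-]+):(?P<account>\d+):cluster/(?P<name>.+)$"
)


def canonical_eks_name(resource_name):
    """Reduce either accepted format to the canonical EKS cluster name."""
    match = EKS_ARN.match(resource_name)
    if match:
        return match.group("name")
    return resource_name  # already a plain cluster name


print(canonical_eks_name("arn:aws:eks:us-east-1:123456789012:cluster/zenhacks-cluster"))
print(canonical_eks_name("zenhacks-cluster"))
```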
ECR container registry
Allows Stack Components to access one or more ECR repositories as a standard Docker registry resource. When used by Stack Components, they are provided a pre-authenticated python-docker client instance. | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 397 |
π€Deploying ZenML
Why do we need to deploy ZenML?
Moving your ZenML Server to a production environment offers several benefits over staying local:
Scalability: Production environments are designed to handle large-scale workloads, allowing your models to process more data and deliver faster results.
Reliability: Production-grade infrastructure ensures high availability and fault tolerance, minimizing downtime and ensuring consistent performance.
Collaboration: A shared production environment enables seamless collaboration between team members, making it easier to iterate on models and share insights.
Despite these advantages, transitioning to production can be challenging due to the complexities involved in setting up the needed infrastructure.
ZenML Server
When you first get started with ZenML, it relies with the following architecture on your machine.
The SQLite database that you can see in this diagram is used to store information about pipelines, pipeline runs, stacks, and other configurations. Users can run the zenml up command to spin up a local REST server to serve the dashboard. The diagram for this looks as follows:
In Scenario 2, the zenml up command implicitly connects the client to the server.
In order to move into production, the ZenML server needs to be deployed somewhere centrally so that the different cloud stack components can read from and write to the server. Additionally, this also allows all your team members to connect to it and share stacks and pipelines.
Deploying a ZenML Server
Deploying the ZenML Server is a crucial step towards transitioning to a production-grade environment for your machine learning projects. By setting up a deployed ZenML Server instance, you gain access to powerful features, allowing you to use stacks with remote components, centrally track progress, collaborate effectively, and achieve reproducible results.
Currently, there are two main options to access a deployed ZenML server: | getting-started | https://docs.zenml.io/getting-started/deploying-zenml | 360 |
Fetching model versions by stage

A common pattern is to assign a special stage to a model version, i.e. production, staging, development etc. This marks this version especially, and can be used to fetch it using a particular semantic meaning, disconnected from the concrete model version. A model version can be assigned a particular stage in the dashboard or by executing the following command in the CLI:
zenml model version update MODEL_NAME --stage=STAGE
These stages can then be passed in as a version to fetch the right model version at a later point:
from zenml import Model, step, pipeline
model = Model(
    name="my_model",
    version="production",
)
# The step configuration will take precedence over the pipeline
@step(model=model)
def svc_trainer(...) -> ...:
...
# This configures it for all steps within the pipeline
@pipeline(model=model)
def training_pipeline( ... ):
# training happens here
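Conceptually, a stage acts as a movable pointer from a semantic label to a concrete version. A pure-Python sketch of that resolution (illustrative only, not ZenML's implementation):

```python
def resolve_version(model_registry, model_name, version_or_stage):
    """Resolve a version name or stage label to a concrete version name."""
    versions = model_registry[model_name]          # {version_name: {"stage": ...}}
    if version_or_stage in versions:
        return version_or_stage                    # an explicit version name
    for name, info in versions.items():
        if info.get("stage") == version_or_stage:  # a stage label like "production"
            return name
    raise KeyError(f"No version or stage {version_or_stage!r} for {model_name!r}")


registry = {"my_model": {"v1": {"stage": "archived"}, "v2": {"stage": "production"}}}
print(resolve_version(registry, "my_model", "production"))  # -> v2
print(resolve_version(registry, "my_model", "v1"))          # -> v1
```

Re-assigning the stage (as the CLI command above does) simply moves the pointer to a different concrete version.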
Autonumbering of versions
ZenML automatically numbers your model versions for you. If you don't specify a version number, or if you pass None into the version argument of the Model object, ZenML will automatically generate a version number (or a new version, if you already have a version) for you. For example if we had a model version really_good_version for model my_model and we wanted to create a new version of this model, we could do so as follows:
from zenml import Model, step
model = Model(
    name="my_model",
    version="even_better_version",
)
@step(model=model)
def svc_trainer(...) -> ...:
...
A new model version will be created and ZenML will track that this is the next in the iteration sequence of the models using the number property. If really_good_version was the 5th version of my_model, then even_better_version will be the 6th version of my_model.
from zenml import Model
earlier_version = Model(
    name="my_model",
    version="really_good_version",
).number  # == 5

updated_version = Model(
    name="my_model",
    version="even_better_version",
).number # == 6 | how-to | https://docs.zenml.io/how-to/use-the-model-control-plane/model-versions | 440 |
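The numbering behavior can be sketched in plain Python (illustrative only — ZenML tracks this server-side):

```python
def next_version_number(existing_numbers):
    """New versions continue the iteration sequence: one past the highest number."""
    return max(existing_numbers, default=0) + 1


# really_good_version was the 5th version, so the next one becomes the 6th:
existing = [1, 2, 3, 4, 5]
print(next_version_number(existing))  # -> 6
```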
D Kubernetes clusters and local Docker containers.

when used with a remote ZenML server, the implicit authentication method only works if your ZenML server is deployed in the same cloud as the one supported by the Service Connector Type that you are using. For instance, if you're using the AWS Service Connector Type, then the ZenML server must also be deployed in AWS (e.g. in an EKS Kubernetes cluster). You may also need to manually adjust the cloud configuration of the remote cloud workload where the ZenML server is running to allow access to resources (e.g. add permissions to the AWS IAM role attached to the EC2 or EKS node, add roles to the GCP service account attached to the GKE cluster nodes).
The following is an example of using the GCP Service Connector's implicit authentication method to gain immediate access to all the GCP resources that the ZenML server also has access to. Note that this is only possible because the ZenML server is also deployed in GCP, in a GKE cluster, and the cluster is attached to a GCP service account with permissions to access the project resources:
zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core
Example Command Output
Successfully registered service connector `gcp-implicit` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π΅ gcp-generic β zenml-core β
β ββββββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β π¦ gcs-bucket β gs://annotation-gcp-store β
β β gs://zenml-bucket-sl β
β β gs://zenml-core.appspot.com β
β β gs://zenml-core_cloudbuild β | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 452 |
β β β ββ β β β β π³ docker-registry β β β β β β
β βββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββΌββββββββββββββββββββββ¨
β β kube-auto β da497715-7502-4cdd-81ed-289e β π kubernetes β π kubernetes-cluster β A5F8F4142FB12DDCDE9F21F6E9B0 β β β default β β β
β β β 70664597 β β β 7A18.gr7.us-east-1.eks.amazo β β β β β
β β β β β β naws.com β β β β β
ββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββ·ββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββ·ββββββββββββββββββββββ
If a Resource Type is used to identify a class of resources, we also need some way to uniquely identify each resource instance belonging to that class that a Service Connector can provide access to. For example, an AWS Service Connector can be configured to provide access to multiple S3 buckets identifiable by their bucket names or their s3://bucket-name formatted URIs. Similarly, an AWS Service Connector can be configured to provide access to multiple EKS Kubernetes clusters in the same AWS region, each uniquely identifiable by their EKS cluster name. This is what we call Resource Names. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 462 |
the command:
zenml container-registry flavor list

PreviousDevelop a custom artifact store
NextDefault Container Registry
Last updated 10 months ago | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries | 28 |
ββββββββββ·ββββββββββββββββββββββββββββββββββββββββ

Note: Please remember to grant the entity associated with your Azure credentials permissions to read and write to your ACR registry as well as to list accessible ACR registries. For a full list of permissions required to use an Azure Service Connector to access an ACR registry, please refer to the Azure Service Connector ACR registry resource type documentation or read the documentation available in the interactive CLI commands and dashboard.

The Azure Service Connector supports many different authentication methods with different levels of security and convenience. You should pick the one that best fits your use case.
If you already have one or more Azure Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the ACR registry you want to use for your Azure Container Registry by running e.g.:
zenml service-connector list-resources --connector-type azure --resource-type docker-registry
Example Command Output
The following 'docker-registry' resources can be accessed by 'azure' service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββ¨
β db5821d0-a658-4504-ae96-04c3302d8f85 β azure-demo β π¦ azure β π³ docker-registry β demozenmlcontainerregistry.azurecr.io β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββ
After having set up or decided on an Azure Service Connector to use to connect to the target ACR registry, you can register the Azure Container Registry as follows: | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/azure | 530 |
Configuration hierarchy
When the same setting can be configured on both the pipeline and the step level, the step configuration overrides the pipeline configuration.
There are a few general rules when it comes to settings and configurations that are applied in multiple places. Generally the following is true:
Configurations in code override configurations made inside of the yaml file
Configurations at the step level override those made at the pipeline level
In case of attributes the dictionaries are merged
from zenml import pipeline, step
from zenml.config import ResourceSettings
@step
def load_data(parameter: int) -> dict:
    ...

@step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")})
def train_model(data: dict) -> None:
    ...

@pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")})
def simple_ml_pipeline(parameter: int):
    ...

# ZenML merges the two configurations and uses the step configuration to override
# values defined on the pipeline level
train_model.configuration.settings["resources"]
# -> cpu_count: 2, gpu_count=1, memory="2GB"

simple_ml_pipeline.configuration.settings["resources"]
# -> cpu_count: 2, memory="1GB"
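The merge-and-override behavior described above can be reproduced with a plain dictionary merge. This is a conceptual sketch, not ZenML's actual merge code:

```python
def merge_settings(pipeline_settings, step_settings):
    """Step-level keys override pipeline-level keys; unmatched keys are merged."""
    merged = dict(pipeline_settings)
    merged.update(step_settings)
    return merged


pipeline_resources = {"cpu_count": 2, "memory": "1GB"}
step_resources = {"gpu_count": 1, "memory": "2GB"}

print(merge_settings(pipeline_resources, step_resources))
# -> {'cpu_count': 2, 'memory': '2GB', 'gpu_count': 1}
```

Note how memory comes from the step, gpu_count only exists on the step, and cpu_count survives from the pipeline — exactly the result shown for train_model above.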
PreviousRuntime settings for Docker, resources, and stack components
NextFind out which configuration was used for a run
Last updated 15 days ago | how-to | https://docs.zenml.io/how-to/use-configuration-files/configuration-hierarchy | 272 |
"ColumnRegExpMetric",
columns=["Review_Text", "Title"],
reg_exp=r"[A-Z][A-Za-z0-9 ]*",
),
],
column_mapping = ColumnMapping(
target="Rating",
numerical_features=["Age", "Positive_Feedback_Count"],
categorical_features=[
"Division_Name",
"Department_Name",
"Class_Name",
],
text_features=["Review_Text", "Title"],
),
download_nltk_data=True,
)

# post-processing (e.g. interpret results, take actions) can happen here
return report.json(), HTMLString(report.show(mode="inline").data)
@step
def data_validation(
reference_dataset: pd.DataFrame,
comparison_dataset: pd.DataFrame,
) -> Tuple[
Annotated[str, "test_json"],
Annotated[HTMLString, "test_html"]
]:
"""Custom data validation step with Evidently.
Args:
reference_dataset: a Pandas DataFrame
comparison_dataset: a Pandas DataFrame of new data you wish to
compare against the reference data
Returns:
The Evidently test suite results rendered in JSON and HTML formats.
"""
# pre-processing (e.g. dataset preparation) can take place here
data_validator = EvidentlyDataValidator.get_active_data_validator()
test_suite = data_validator.data_validation(
dataset=reference_dataset,
comparison_dataset=comparison_dataset,
check_list=[
EvidentlyTestConfig.test("DataQualityTestPreset"),
EvidentlyTestConfig.test_generator(
"TestColumnRegExp",
columns=["Review_Text", "Title"],
reg_exp=r"[A-Z][A-Za-z0-9 ]*",
),
],
column_mapping = ColumnMapping(
target="Rating",
numerical_features=["Age", "Positive_Feedback_Count"],
categorical_features=[
"Division_Name",
"Department_Name",
"Class_Name",
],
text_features=["Review_Text", "Title"],
),
download_nltk_data=True,
)

# post-processing (e.g. interpret results, take actions) can happen here
return test_suite.json(), HTMLString(test_suite.show(mode="inline").data)
Have a look at the complete list of methods and parameters available in the EvidentlyDataValidator API in the SDK docs.
Call Evidently directly
You can use the Evidently library directly in your custom pipeline steps, e.g.: | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/evidently | 468 |
run ZenML steps in remote specialized environments

As you transition to a team setting or a production setting, you can replace the local Artifact Store in your stack with one of the other flavors that are better suited for these purposes, with no changes required in your code.
How do you deploy it?
The default stack that comes pre-configured with ZenML already contains a local Artifact Store:
$ zenml stack list
Running without an active repository root.
Using the default local database.
ββββββββββ―βββββββββββββ―βββββββββββββββββ―βββββββββββββββ
β ACTIVE β STACK NAME β ARTIFACT_STORE β ORCHESTRATOR β
β βββββββββΌβββββββββββββΌβββββββββββββββββΌβββββββββββββββ¨
β π β default β default β default β
ββββββββββ·βββββββββββββ·βββββββββββββββββ·βββββββββββββββ
$ zenml artifact-store describe
Running without an active repository root.
Using the default local database.
Running with active stack: 'default'
No component name given; using `default` from active stack.
ARTIFACT_STORE Component Configuration (ACTIVE)
ββββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β COMPONENT_PROPERTY β VALUE β
β βββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β artifact_store β
β βββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β FLAVOR β local β
β βββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β default β
β βββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | stack-components | https://docs.zenml.io/stack-components/artifact-stores/local | 454 |
zenml stack register <STACK_NAME> -a <AZURE_STORE_NAME> ... --set

When you register the Azure Artifact Store, you can create a ZenML Secret to store a variety of Azure credentials and then reference it in the Artifact Store configuration:
to use an Azure storage account key , set account_name to your account name and one of account_key or sas_token to the Azure key or SAS token value as attributes in the ZenML secret
to use an Azure storage account key connection string , configure the connection_string attribute in the ZenML secret to your Azure Storage Key connection string
to use Azure Service Principal credentials , create an Azure Service Principal and then set account_name to your account name and client_id, client_secret and tenant_id to the client ID, secret and tenant ID of your service principal in the ZenML secret
This method has some advantages over the implicit authentication method:
you don't need to install and configure the Azure CLI on your host
you don't need to care about enabling your other stack components (orchestrators, step operators and model deployers) to have access to the artifact store through Azure Managed Identities
you can combine the Azure artifact store with other stack components that are not running in Azure
Configuring Azure credentials in a ZenML secret and then referencing them in the Artifact Store configuration could look like this:
# Store the Azure storage account key in a ZenML secret
zenml secret create az_secret \
--account_name='<YOUR_AZURE_ACCOUNT_NAME>' \
--account_key='<YOUR_AZURE_ACCOUNT_KEY>'
# or if you want to use a connection string
zenml secret create az_secret \
--connection_string='<YOUR_AZURE_CONNECTION_STRING>'
# or if you want to use Azure ServicePrincipal credentials
zenml secret create az_secret \
--account_name='<YOUR_AZURE_ACCOUNT_NAME>' \
--tenant_id='<YOUR_AZURE_TENANT_ID>' \
--client_id='<YOUR_AZURE_CLIENT_ID>' \
--client_secret='<YOUR_AZURE_CLIENT_SECRET>' | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/azure | 411 |
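The three credential combinations above are mutually exclusive sets of keys. A small sketch that classifies which authentication method a secret's keys imply — for illustration only; ZenML's own validation logic may differ:

```python
def azure_auth_method(secret_keys):
    """Classify an Azure credentials secret by the keys it contains."""
    keys = set(secret_keys)
    if "connection_string" in keys:
        return "connection-string"
    if {"account_name", "tenant_id", "client_id", "client_secret"} <= keys:
        return "service-principal"
    if "account_name" in keys and ({"account_key", "sas_token"} & keys):
        return "account-key"
    raise ValueError("Secret does not match any supported Azure credential set")


print(azure_auth_method({"account_name": "me", "account_key": "key"}))  # -> account-key
print(azure_auth_method({"connection_string": "cs"}))                   # -> connection-string
```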
Load a Model in code
There are a few different ways to load a ZenML Model in code:
Load the active model in a pipeline
You can also use the active model to get the model metadata, or the associated artifacts directly as described in the starter guide:
from zenml import step, pipeline, get_step_context, Model
@pipeline(model=Model(name="my_model"))
def my_pipeline():
...
@step
def my_step():
    # Get model from active step context
    mv = get_step_context().model

    # Get metadata
    print(mv.run_metadata["metadata_key"].value)

    # Directly fetch an artifact that is attached to the model
    output = mv.get_artifact("my_dataset", "my_version")
    output.run_metadata["accuracy"].value
Load any model via the Client
Alternatively, you can use the Client:
from zenml import step
from zenml.client import Client
from zenml.enums import ModelStages
@step
def model_evaluator_step():
    ...
    # Get staging model version
    try:
        staging_zenml_model = Client().get_model_version(
            model_name_or_id="<INSERT_MODEL_NAME>",
            model_version_name_or_number_or_id=ModelStages.STAGING,
        )
    except KeyError:
        staging_zenml_model = None
    ...
PreviousControlling Model versions
NextPromote a Model
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/use-the-model-control-plane/load-a-model-in-code | 281 |
r": "#c292a1",
"reference_data_color": "#017b92",),
],
),
You can view the complete list of configuration parameters in the SDK docs.
The Evidently Data Validator
The Evidently Data Validator implements the same interface as do all Data Validators, so this method forces you to maintain some level of compatibility with the overall Data Validator abstraction, which guarantees an easier migration in case you decide to switch to another Data Validator.
All you have to do is call the Evidently Data Validator methods when you need to interact with Evidently to generate data reports or to run test suites, e.g.:
from typing_extensions import Annotated  # or `from typing import Annotated` on Python 3.9+
from typing import Tuple
import pandas as pd
from evidently.pipeline.column_mapping import ColumnMapping
from zenml.integrations.evidently.data_validators import EvidentlyDataValidator
from zenml.integrations.evidently.metrics import EvidentlyMetricConfig
from zenml.integrations.evidently.tests import EvidentlyTestConfig
from zenml.types import HTMLString
from zenml import step
@step
def data_profiling(
reference_dataset: pd.DataFrame,
comparison_dataset: pd.DataFrame,
) -> Tuple[
Annotated[str, "report_json"],
Annotated[HTMLString, "report_html"]
]:
"""Custom data profiling step with Evidently.
Args:
reference_dataset: a Pandas DataFrame
comparison_dataset: a Pandas DataFrame of new data you wish to
compare against the reference data
Returns:
The Evidently report rendered in JSON and HTML formats.
"""
# pre-processing (e.g. dataset preparation) can take place here
data_validator = EvidentlyDataValidator.get_active_data_validator()
report = data_validator.data_profiling(
dataset=reference_dataset,
comparison_dataset=comparison_dataset,
profile_list=[
EvidentlyMetricConfig.metric("DataQualityPreset"),
EvidentlyMetricConfig.metric(
"TextOverviewPreset", column_name="Review_Text"
),
EvidentlyMetricConfig.metric_generator(
"ColumnRegExpMetric",
columns=["Review_Text", "Title"], | stack-components | https://docs.zenml.io/v/docs/stack-components/data-validators/evidently | 441 |
other remote stack components also running in AWS.

This method uses the Docker client authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an AWS Container Registry. You don't need to supply credentials explicitly when you register the AWS Container Registry, as it leverages the local credentials and configuration that the AWS CLI and Docker client store on your local machine. However, you will need to install and set up the AWS CLI on your machine as a prerequisite, as covered in the AWS CLI documentation, before you register the AWS Container Registry.
With the AWS CLI installed and set up with credentials, we'll need to log in to the container registry so Docker can pull and push images:
# Fill your REGISTRY_URI and REGION in the placeholders in the following command.
# You can find the REGION as part of your REGISTRY_URI: `<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com`
aws ecr get-login-password --region <REGION> | docker login --username AWS --password-stdin <REGISTRY_URI>
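Since the region is embedded in the registry URI, a small helper sketch can extract it (plain-Python illustration, not part of ZenML):

```python
import re


def ecr_region(registry_uri):
    """Extract the AWS region from an ECR registry URI like
    123456789012.dkr.ecr.us-east-1.amazonaws.com."""
    match = re.match(r"^\d+\.dkr\.ecr\.([\w-]+)\.amazonaws\.com$", registry_uri)
    if not match:
        raise ValueError(f"Not an ECR registry URI: {registry_uri!r}")
    return match.group(1)


print(ecr_region("123456789012.dkr.ecr.us-east-1.amazonaws.com"))  # -> us-east-1
```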
Stacks using the AWS Container Registry set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use an AWS Service Connector to link your AWS Container Registry to the remote ECR registry.
To set up the AWS Container Registry to authenticate to AWS and access an ECR registry, it is recommended to leverage the many features provided by the AWS Service Connector such as auto-configuration, local login, best security practices regarding long-lived credentials and fine-grained access control and reusing the same credentials across multiple stack components.
If you don't already have an AWS Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure an AWS Service Connector that can be used to access an ECR registry or even more than one type of AWS resource: | stack-components | https://docs.zenml.io/stack-components/container-registries/aws | 388 |
Google Cloud VertexAI
Executing individual steps in Vertex AI.
Vertex AI offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
When to use it
You should use the Vertex step operator if:
one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator.
you have access to Vertex AI. If you're using a different cloud provider, take a look at the SageMaker or AzureML step operators.
How to deploy it
Enable Vertex AI here.
Create a service account with the right permissions to create Vertex AI jobs (roles/aiplatform.admin) and push to the container registry (roles/storage.admin).
How to use it
The GCP step operator (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
To use the Vertex step operator, we need:
The ZenML gcp integration installed. If you haven't done so, run zenml integration install gcp
Docker installed and running.
Vertex AI enabled and a service account file. See the deployment section for detailed instructions.
A GCR container registry as part of our stack.
(Optional) A machine type that we want to execute our steps on (this defaults to n1-standard-4). See here for a list of available machine types.
A remote artifact store as part of your stack. This is needed so that both your orchestration environment and VertexAI can read and write step artifacts. Check out the documentation page of the artifact store you want to use for more information on how to set that up and configure authentication for it.
You have three different options to provide GCP credentials to the step operator: | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/vertex | 408 |
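As an illustration of one of those options — pointing the step operator at an explicit service account key file — registration could be sketched as follows (the project, region, key path, and the `--service_account_path` flag name are assumptions based on the Vertex step operator's configuration):

```shell
# Register the Vertex step operator with an explicit service account key
zenml step-operator register vertex_step_op \
    --flavor=vertex \
    --project=my-gcp-project \
    --region=europe-west1 \
    --service_account_path=/path/to/vertex-sa-key.json

# Add it to the active stack
zenml stack update -s vertex_step_op
```

A step can then opt into it with `@step(step_operator="vertex_step_op")`.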
Using a custom secrets store backend implementation

You have the option of using a custom implementation of the secrets store API as your secrets store back-end. This must come in the form of a class derived from zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore. This class must be importable from within the ZenML server container, which most likely means you need to build a custom container image that contains the class. Then, you can configure the Helm values to use your custom secrets store as follows:
zenml:
  # ...
  # Secrets store settings. This is used to store centralized secrets.
  secretsStore:
    # Set to false to disable the secrets store.
    enabled: true
    # The type of the secrets store
    type: custom
    # Configuration for the custom secrets store
    custom:
      # The class path of the custom secrets store implementation. This should
      # point to a full Python class that extends the
      # `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore`
      # base class. The class should be importable from the container image
      # that you are using for the ZenML server.
      class_path: my.custom.secrets.store.MyCustomSecretsStore
      # Extra environment variables used to configure the custom secrets store.
      environment:
        ZENML_SECRETS_STORE_OPTION_1: value1
        ZENML_SECRETS_STORE_OPTION_2: value2
      # Extra environment variables to set in the ZenML server container that
      # should be kept secret and are used to configure the custom secrets store.
      secretEnvironment:
        ZENML_SECRETS_STORE_SECRET_OPTION_3: value3
        ZENML_SECRETS_STORE_SECRET_OPTION_4: value4
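To illustrate the naming convention above, here is a small schematic sketch (an assumption about the mapping, not ZenML's actual internals) of how `ZENML_SECRETS_STORE_*` environment variables set via `environment` and `secretEnvironment` could be collected into configuration options for a custom secrets store:

```python
import os

# The prefix matches the variable names used in the Helm values above;
# the collection function itself is illustrative only.
PREFIX = "ZENML_SECRETS_STORE_"


def collect_secrets_store_options(environ=None):
    """Collect custom secrets store options from prefixed env variables."""
    environ = os.environ if environ is None else environ
    return {
        key[len(PREFIX):].lower(): value
        for key, value in environ.items()
        if key.startswith(PREFIX)
    }
```

With the Helm values above, the custom store would receive options such as `option_1` and `secret_option_3`.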
Backup secrets store
A backup secrets store back-end may be configured for high-availability and backup purposes, or as an intermediate step in the process of migrating secrets to a different external location or secrets manager provider.
Experiment Trackers
Logging and visualizing ML experiments.
Experiment trackers let you track your ML experiments by logging extended information about your models, datasets, metrics, and other parameters and allowing you to browse them, visualize them and compare them between runs. In the ZenML world, every pipeline run is considered an experiment, and ZenML facilitates the storage of experiment results through Experiment Tracker stack components. This establishes a clear link between pipeline runs and experiments.
Related concepts:
the Experiment Tracker is an optional type of Stack Component that needs to be registered as part of your ZenML Stack.
ZenML already provides versioning and tracking for the pipeline artifacts by storing artifacts in the Artifact Store.
When to use it
ZenML already records information about the artifacts circulated through your pipelines by means of the mandatory Artifact Store.
However, these ZenML mechanisms are meant to be used programmatically and can be more difficult to work with without a visual interface.
Experiment Trackers, on the other hand, are tools designed with usability in mind. They include extensive UIs that provide users with an interactive and intuitive interface for browsing and visualizing the information logged during ML pipeline runs.
You should add an Experiment Tracker to your ZenML stack and use it when you want to augment ZenML with the visual features provided by experiment tracking tools.
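For example, a step can opt into the experiment tracker configured in the active stack like this (a sketch using ZenML's decorator API; the tracker-specific logging calls inside the step depend on the flavor you choose):

```python
from zenml import step
from zenml.client import Client

# Look up the experiment tracker registered in the active stack
experiment_tracker = Client().active_stack.experiment_tracker


@step(experiment_tracker=experiment_tracker.name)
def train_model() -> None:
    # Inside the step, use the tracker's native client
    # (e.g. mlflow, wandb, neptune) to log metrics and parameters.
    ...
```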
How experiment trackers slot into the stack
Here is an architecture diagram that shows how experiment trackers fit into the overall story of a remote stack.
Experiment Tracker Flavors
Experiment Trackers are optional stack components provided by integrations: | stack-components | https://docs.zenml.io/stack-components/experiment-trackers | 310 |
initializing ZenML at the root of your repository. If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory. Usually, though, it is better not to rely on this fallback and to initialize ZenML at the root instead.
Afterward, you should see the new custom alerter flavor in the list of available alerter flavors:
zenml alerter flavor list
It is important to draw attention to when and how these abstractions come into play in a ZenML workflow.
The MyAlerterFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The MyAlerterConfig class is imported when someone tries to register/update a stack component with the my_alerter flavor. In particular, during the registration of the stack component, the config is used to validate the values provided by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The MyAlerter only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way, we can register flavors and components even when the major dependencies behind their implementation are not installed in our local environment (assuming the MyAlerterFlavor and the MyAlerterConfig are implemented in a module/path separate from the actual MyAlerter).
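The separation can be sketched schematically like this (stand-in classes, not the real ZenML base classes — the point is that the flavor only resolves its implementation class on demand, so it can be registered without the implementation's dependencies installed):

```python
class MyAlerterConfig:
    """Stands in for a pydantic-based alerter config with validation."""

    def __init__(self, webhook_url: str):
        # Custom validation runs at registration time
        if not webhook_url.startswith("https://"):
            raise ValueError("webhook_url must be an https:// URL")
        self.webhook_url = webhook_url


class MyAlerter:
    """The actual component, only needed when the alerter is in use."""

    def __init__(self, config: MyAlerterConfig):
        self.config = config

    def post(self, message: str) -> bool:
        print(f"Posting {message!r} to {self.config.webhook_url}")
        return True


class MyAlerterFlavor:
    """Ties the flavor name, config class, and implementation together."""

    name = "my_alerter"

    @property
    def config_class(self):
        return MyAlerterConfig

    @property
    def implementation_class(self):
        # In a real flavor, this import would happen lazily here, so that
        # registering the flavor does not require the implementation's
        # dependencies to be installed.
        return MyAlerter
```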