┃               │ s3://zenml-generative-chat ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
zenml service-connector verify aws-s3-multi-instance --resource-id s3://zenml-demos
Example Command Output
Service connector 'aws-s3-multi-instance' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES   ┃
┠───────────────┼──────────────────┨
┃ 📦 s3-bucket  │ s3://zenml-demos ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
Finally, verifying the single-instance Service Connector is straightforward and requires no further explanation:
zenml service-connector verify aws-s3-zenfiles
Example Command Output
Service connector 'aws-s3-zenfiles' is correctly configured with valid credentials and has access to the following resources:
┏━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE │ RESOURCE NAMES ┃
┠───────────────┼────────────────┨
┃ 📦 s3-bucket  │ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
Configure local clients
Yet another neat feature built into some Service Connector Types, the opposite of Service Connector auto-configuration, is the ability to configure local CLI and SDK utilities installed on your host, like the Docker or Kubernetes CLI (kubectl), with credentials issued by a compatible Service Connector.
You may need to use this feature to get direct CLI access to a remote service in order to manually manage some configurations or resources, to debug some workloads, or simply to verify that the Service Connector credentials are actually working.
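For example, a Service Connector that grants access to a Kubernetes cluster can be used to configure your local kubectl client. A minimal sketch, reusing the connector and cluster names from the earlier examples as placeholders:

```sh
# Configure the local kubectl CLI with credentials issued by a Service Connector
zenml service-connector login aws-demo-multi \
    --resource-type kubernetes-cluster \
    --resource-id zenhacks-cluster
```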
zenml integration install great_expectations -y

Depending on how you configure the Great Expectations Data Validator, it can reduce or even completely eliminate the complexity associated with setting up the store backends for Great Expectations. If you're only looking for a quick and easy way of adding Great Expectations to your stack and are not concerned with the configuration details, you can simply run:
# Register the Great Expectations data validator
zenml data-validator register ge_data_validator --flavor=great_expectations
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv ge_data_validator ... --set
If you already have a Great Expectations deployment, you can configure the Great Expectations Data Validator to reuse or even replace your current configuration. You should consider the pros and cons of every deployment use-case and choose the one that best fits your needs:
let ZenML initialize and manage the Great Expectations configuration. The Artifact Store will serve as a storage backend for all the information that Great Expectations needs to persist (e.g. Expectation Suites, Validation Results). However, you will not be able to set up new Data Sources, Metadata Stores, or Data Docs sites. Any changes you try to make to the configuration through code will not be persisted and will be lost when your pipeline completes or your local process exits.
use ZenML with your existing Great Expectations configuration. You can tell ZenML to replace your existing Metadata Stores with the active ZenML Artifact Store by setting the configure_zenml_stores attribute in the Data Validator. The downside is that you will only be able to run pipelines locally with this setup, given that the Great Expectations configuration is a file on your local machine.
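For example, reusing an existing on-disk Great Expectations project might look like the following sketch (the context_root_dir attribute name is an assumption here; double-check it against the flavor's configuration reference):

```sh
# Point the Data Validator at an existing Great Expectations configuration
zenml data-validator register ge_data_validator \
    --flavor=great_expectations \
    --context_root_dir=/path/to/great_expectations
```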
You might want to use torch.save() and torch.load() here.

(Optional) How to Visualize the Artifact
Optionally, you can override the save_visualizations() method to automatically save visualizations for all artifacts saved by your materializer. These visualizations are then shown next to your artifacts in the dashboard:
Currently, artifacts can be visualized either as a CSV table, embedded HTML, an image, or Markdown. For more information, see zenml.enums.VisualizationType.
To create visualizations, you need to:
Compute the visualizations based on the artifact
Save all visualizations to paths inside self.uri
Return a dictionary mapping visualization paths to visualization types.
As an example, check out the implementation of the zenml.materializers.NumpyMaterializer, which uses matplotlib to automatically save or plot certain arrays.
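A minimal sketch of such an override (the histogram logic and file name are illustrative, not the actual NumpyMaterializer implementation):

```python
import os
from typing import Dict

import matplotlib.pyplot as plt
import numpy as np

from zenml.enums import VisualizationType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer


class MyNumpyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (np.ndarray,)

    def save_visualizations(
        self, arr: np.ndarray
    ) -> Dict[str, VisualizationType]:
        # 1. Compute the visualization based on the artifact
        plt.hist(arr.flatten(), bins=20)

        # 2. Save it to a path inside `self.uri`
        visualization_path = os.path.join(self.uri, "histogram.png")
        with fileio.open(visualization_path, "wb") as f:
            plt.savefig(f, format="png")

        # 3. Return a mapping from visualization paths to types
        return {visualization_path: VisualizationType.IMAGE}
```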
Read more about visualizations here.
(Optional) Which Metadata to Extract for the Artifact
Optionally, you can override the extract_metadata() method to track custom metadata for all artifacts saved by your materializer. Anything you extract here will be displayed in the dashboard next to your artifacts.
You can also use the special types defined in zenml.metadata.metadata_types, which are displayed in a dedicated way in the dashboard. See zenml.metadata.metadata_types.MetadataType for more details.
By default, this method will only extract the storage size of an artifact, but you can override it to track anything you wish. E.g., the zenml.materializers.NumpyMaterializer overrides this method to track the shape, dtype, and some statistical properties of each np.ndarray that it saves.
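A minimal sketch of such an override (the metadata keys are illustrative):

```python
from typing import Dict

import numpy as np

from zenml.materializers.base_materializer import BaseMaterializer
from zenml.metadata.metadata_types import DType, MetadataType


class MyNumpyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (np.ndarray,)

    def extract_metadata(self, arr: np.ndarray) -> Dict[str, "MetadataType"]:
        # Track the shape and dtype next to the default storage size
        return {
            "shape": tuple(arr.shape),
            "dtype": DType(arr.dtype.type),
        }
```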
If you would like to disable artifact visualization altogether, you can set enable_artifact_visualization at either pipeline or step level via @pipeline(enable_artifact_visualization=False) or @step(enable_artifact_visualization=False).
Name your pipeline runs
In the output logs of a pipeline run you will see the name of the run:
Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s.
This name is automatically generated based on the current date and time. To change the name for a run, pass run_name as a parameter to the with_options() method:
training_pipeline = training_pipeline.with_options(
    run_name="custom_pipeline_run_name"
)
training_pipeline()
Pipeline run names must be unique, so if you plan to run your pipelines multiple times or run them on a schedule, make sure to either compute the run name dynamically or include one of the following placeholders that ZenML will replace:
{{date}} will resolve to the current date, e.g. 2023_02_19
{{time}} will resolve to the current time, e.g. 11_07_09_326492
training_pipeline = training_pipeline.with_options(
    run_name=f"custom_pipeline_run_name_{{date}}_{{time}}"
)
training_pipeline()
Be sure to include the f-string prefix so that the placeholders can be replaced, as shown in the example above. Without the f prefix, the placeholders will not be replaced.
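If you prefer to compute the run name dynamically instead, something like the following sketch also works:

```python
from datetime import datetime

training_pipeline = training_pipeline.with_options(
    run_name=f"custom_run_{datetime.now().strftime('%Y_%m_%d_%H_%M_%S')}"
)
training_pipeline()
```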
These are used for things like ServerSideEncryption and ACL. To include these advanced parameters in your Artifact Store configuration, pass them using JSON format during registration, e.g.:
zenml artifact-store register minio_store -f s3 \
--path='s3://minio_bucket' \
--authentication_secret=s3_secret \
--client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}'
For more up-to-date information on the S3 Artifact Store implementation and its configuration, you can have a look at the SDK docs.
How do you use it?
Aside from the fact that the artifacts are stored in an S3 compatible backend, using the S3 Artifact Store is no different than using any other flavor of Artifact Store.
import numpy as np

from zenml import pipeline, step
from zenml.artifacts.external_artifact import ExternalArtifact

@step
def print_data(data: np.ndarray):
    print(data)

@pipeline
def printing_pipeline():
    # One can also pass data directly into the ExternalArtifact
    # to create a new artifact on the fly
    data = ExternalArtifact(value=np.array([0]))
    print_data(data=data)

if __name__ == "__main__":
    printing_pipeline()
Optionally, you can configure the ExternalArtifact to use a custom materializer for your data or disable artifact metadata and visualizations. Check out the SDK docs for all available options.
Using an ExternalArtifact for your step automatically disables caching for the step.
Consuming artifacts produced by other pipelines
It is also common to consume an artifact downstream after producing it in an upstream pipeline or step. As we have learned in the previous section, the Client can be used to fetch artifacts directly inside the pipeline code:
from uuid import UUID

import pandas as pd

from zenml import step, pipeline
from zenml.client import Client

@step
def trainer(dataset: pd.DataFrame):
    ...

@pipeline
def training_pipeline():
    client = Client()

    # Fetch by ID
    dataset_artifact = client.get_artifact_version(
        name_id_or_prefix=UUID("3a92ae32-a764-4420-98ba-07da8f742b76")
    )

    # Fetch by name alone - uses the latest version of this artifact
    dataset_artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")

    # Fetch by name and version
    dataset_artifact = client.get_artifact_version(
        name_id_or_prefix="iris_dataset", version="raw_2023"
    )

    # Pass into any step
    trainer(dataset=dataset_artifact)

if __name__ == "__main__":
    training_pipeline()
Calling Client methods like get_artifact_version directly inside the pipeline code makes use of ZenML's late materialization behind the scenes.
If you would like to bypass materialization entirely and just download the data or files associated with a particular artifact version, you can use the .download_files method:
from zenml.client import Client

client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
# Illustrative target path: download_files expects a .zip archive path
artifact.download_files("path/to/output.zip")
Make sure you point to the flavor class via dot notation:

zenml data-validator flavor register <path.to.MyDataValidatorFlavor>
For example, if your flavor class MyDataValidatorFlavor is defined in flavors/my_flavor.py, you'd register it by doing:
zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor
ZenML resolves the flavor class by taking the path where you initialized zenml (via zenml init) as the starting point of resolution. Therefore, please ensure you follow the best practice of initializing zenml at the root of your repository.
If ZenML does not find an initialized ZenML repository in any parent directory, it will default to the current working directory, but usually it's better to not have to rely on this mechanism, and initialize zenml at the root.
Afterwards, you should see the new flavor in the list of available flavors:
zenml data-validator flavor list
It is important to draw attention to when and how these base abstractions come into play in a ZenML workflow.
The CustomDataValidatorFlavor class is imported and utilized upon the creation of the custom flavor through the CLI.
The CustomDataValidatorConfig class is imported when someone tries to register/update a stack component with this custom flavor. In particular, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are inherently pydantic objects, you can also add your own custom validators here.
The CustomDataValidator only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomDataValidatorFlavor and the CustomDataValidatorConfig are implemented in a different module/path than the actual CustomDataValidator).
Running with active stack: 'default' (repository)
Successfully connected artifact store `s3-zenfiles` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────┼────────────────┨
┃ bf073e06-28ce-4a4a-8100-32e7cb99dced │ aws-demo-multi │ 🔶 aws         │ 📦 s3-bucket  │ s3://zenfiles  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛
```
register and connect a Kubernetes Orchestrator Stack Component to an EKS cluster:

```sh
zenml orchestrator register eks-zenml-zenhacks --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `eks-zenml-zenhacks`.
```
```sh
zenml orchestrator connect eks-zenml-zenhacks --connector aws-demo-multi
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected orchestrator `eks-zenml-zenhacks` to the following resources:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES   ┃
┠──────────────────────────────────────┼────────────────┼────────────────┼───────────────────────┼──────────────────┨
┃ bf073e06-28ce-4a4a-8100-32e7cb99dced │ aws-demo-multi │ 🔶 aws         │ 🌀 kubernetes-cluster │ zenhacks-cluster ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┛
```
Deploying ZenML
Deploying ZenML is the first step to production.
When you first get started with ZenML, it is based on the following architecture on your machine:
The SQLite database that you can see in this diagram is used to store all the metadata we produced in the previous guide (pipelines, models, artifacts, etc).
In order to move into production, you will need to deploy this server somewhere central, outside of your machine. This allows different infrastructure components to interact with it and enables you to collaborate with your team members:
Choosing how to deploy ZenML
While there are many options on how to deploy ZenML, the two simplest ones are:
Option 1: Sign up for a free ZenML Cloud Trial
ZenML Cloud is a managed SaaS solution that offers a one-click deployment for your ZenML server. Click here to start a free trial.
On top of the one-click experience, ZenML Cloud also comes built-in with additional features and a new dashboard that might be beneficial to follow for this guide. You can always go back to self-hosting after your learning journey is complete.
Option 2: Self-host ZenML on your cloud provider
As ZenML is open source, it is easy to self-host it. There is even a ZenML CLI one-liner that deploys ZenML on a Kubernetes cluster, abstracting away all the infrastructure complexity. If you don't have an existing Kubernetes cluster, you can create it manually using the documentation for your cloud provider. For convenience, here are links for AWS, Azure, and GCP.
Once you have created your cluster, make sure that you configure your kubectl client to connect to it.
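For an EKS cluster, for example, one way to do this is with the AWS CLI (cluster name and region are placeholders):

```sh
aws eks update-kubeconfig --name zenml-cluster --region us-east-1
```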
You're now ready to deploy ZenML! Run the following command:
zenml deploy
        target_repository: str
        user: Optional[str]
      resources:
        cpu_count: Optional[PositiveFloat]
        gpu_count: Optional[NonNegativeInt]
        memory: Optional[ConstrainedStrValue]
    step_operator: Optional[str]
    success_hook_source:
      attribute: Optional[str]
      module: str
      type: SourceType
  train_model:
    enable_artifact_metadata: Optional[bool]
    enable_artifact_visualization: Optional[bool]
    enable_cache: Optional[bool]
    enable_step_logs: Optional[bool]
    experiment_tracker: Optional[str]
    extra: Mapping[str, Any]
    failure_hook_source:
      attribute: Optional[str]
      module: str
      type: SourceType
    model:
      audience: Optional[str]
      description: Optional[str]
      ethics: Optional[str]
      license: Optional[str]
      limitations: Optional[str]
      name: str
      save_models_to_registry: bool
      suppress_class_validation_warnings: bool
      tags: Optional[List[str]]
      trade_offs: Optional[str]
      use_cases: Optional[str]
      version: Union[ModelStages, int, str, NoneType]
      was_created_in_this_run: bool
    name: Optional[str]
    outputs: {}
    parameters: {}
    settings:
      docker:
        apt_packages: List[str]
        build_context_root: Optional[str]
        build_options: Mapping[str, Any]
        copy_files: bool
        copy_global_config: bool
        dockerfile: Optional[str]
        dockerignore: Optional[str]
        environment: Mapping[str, Any]
        install_stack_requirements: bool
        parent_image: Optional[str]
        python_package_installer: PythonPackageInstaller
        replicate_local_python_environment: Union[List[str], PythonEnvironmentExportMethod, NoneType]
        required_hub_plugins: List[str]
        required_integrations: List[str]
        requirements: Union[NoneType, str, List[str]]
        skip_build: bool
        source_files: SourceFileMode
        target_repository: str
        user: Optional[str]
      resources:
        cpu_count: Optional[PositiveFloat]
        gpu_count: Optional[NonNegativeInt]
        memory: Optional[ConstrainedStrValue]
    step_operator: Optional[str]
    success_hook_source:
      attribute: Optional[str]
      module: str
      type: SourceType
Evaluation and metrics
Track how your RAG pipeline improves using evaluation and metrics.
In this section, we'll explore how to evaluate the performance of your RAG pipeline using metrics and visualizations. Evaluating your RAG pipeline is crucial to understanding how well it performs and identifying areas for improvement. With language models in particular, it's hard to evaluate their performance using traditional metrics like accuracy, precision, and recall. This is because language models generate text, which is inherently subjective and difficult to evaluate quantitatively.
Moreover, our RAG pipeline is a whole system, not just a model, and evaluating it requires a holistic approach. We'll look at various ways to evaluate the performance of your RAG pipeline, but the two main areas we'll focus on are:
Retrieval evaluation, so checking that the retrieved documents or document chunks are relevant to the query.
Generation evaluation, so checking that the generated text is coherent and helpful for our specific use case.
In the previous section we built out a basic RAG pipeline for our documentation question-and-answer use case. We'll use this pipeline to demonstrate how to evaluate the performance of your RAG pipeline.
If you were running this in a production setting, you might want to set up evaluation to check the performance of a raw LLM model (i.e. without any retrieval / RAG components) as a baseline, and then compare this to the performance of your RAG pipeline. This will help you understand how much value the retrieval and generation components are adding to your system. We won't cover this here, but it's a good practice to keep in mind.
What are we evaluating?
When evaluating the performance of your RAG pipeline, your specific use case and the extent to which you can tolerate errors or lower performance will determine what you need to evaluate. For instance, if you're building a user-facing chatbot, you might need to evaluate the following:
You can interact with the pods using kubectl commands. For more advanced configuration options and additional details, refer to the full Kubernetes Orchestrator documentation.
a SageMaker Pipelines orchestrator, and an ECR container registry. Registering the stack components and creating a ZenML stack.
By following these steps, you can leverage the power of AWS services, such as S3 for artifact storage, SageMaker Pipelines for orchestration, and ECR for container management, all within the ZenML framework. This setup allows you to build, deploy, and manage machine learning pipelines efficiently and scale your workloads based on your requirements.
The benefits of using an AWS stack with ZenML include:
Scalability: Leverage the scalability of AWS services to handle large-scale machine learning workloads.
Reproducibility: Ensure reproducibility of your pipelines with versioned artifacts and containerized environments.
Collaboration: Enable collaboration among team members by using a centralized stack and shared resources.
Flexibility: Customize and extend your stack components based on your specific needs and preferences.
Now that you have a functional AWS stack set up with ZenML, you can explore more advanced features and capabilities offered by ZenML. Some next steps to consider:
Dive deeper into ZenML's production guide to learn best practices for deploying and managing production-ready pipelines.
Explore ZenML's integrations with other popular tools and frameworks in the machine learning ecosystem.
Join the ZenML community to connect with other users, ask questions, and get support.
By leveraging the power of AWS and ZenML, you can streamline your machine learning workflows, improve collaboration, and deploy production-ready pipelines with ease. Happy experimenting and building!
Create a custom class that will hold the data that you want to visualize.
Build a custom materializer for this custom class with the visualization logic implemented in the save_visualizations() method.
Return your custom class from any of your ZenML steps.
Example: Facets Data Skew Visualization
As an example, have a look at the models, materializers, and steps of the Facets Integration, which can be used to visualize the data skew between multiple Pandas DataFrames:
1. Custom Class The FacetsComparison is the custom class that holds the data required for the visualization.
from typing import Dict, List, Union

import pandas as pd
from pydantic import BaseModel

class FacetsComparison(BaseModel):
    datasets: List[Dict[str, Union[str, pd.DataFrame]]]
2. Materializer The FacetsMaterializer is a custom materializer that only handles this custom class and contains the corresponding visualization logic.
import os
from typing import Dict

from zenml.enums import ArtifactType, VisualizationType
from zenml.io import fileio
from zenml.materializers.base_materializer import BaseMaterializer

class FacetsMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (FacetsComparison,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA_ANALYSIS

    def save_visualizations(
        self, data: FacetsComparison
    ) -> Dict[str, VisualizationType]:
        html = ...  # Create a visualization for the custom type
        visualization_path = os.path.join(self.uri, VISUALIZATION_FILENAME)
        with fileio.open(visualization_path, "w") as f:
            f.write(html)
        return {visualization_path: VisualizationType.HTML}
3. Step There are three different steps in the facets integration that can be used to create FacetsComparisons for different sets of inputs. E.g., the facets_visualization_step below takes two DataFrames as inputs and builds a FacetsComparison object out of them:
@step
def facets_visualization_step(
    reference: pd.DataFrame, comparison: pd.DataFrame
) -> FacetsComparison:  # Return the custom type from your step
    return FacetsComparison(
        datasets=[
            {"name": "reference", "table": reference},
            {"name": "comparison", "table": comparison},
        ]
    )
This is what happens now under the hood when you add the facets_visualization_step into your pipeline:
The step creates and returns a FacetsComparison.
┃ 🔶 aws-generic         │ us-east-1                                     ┃
┠────────────────────────┼───────────────────────────────────────────────┨
┃ 📦 s3-bucket           │ s3://aws-ia-mwaa-715803424590                 ┃
┃                        │ s3://zenfiles                                 ┃
┃                        │ s3://zenml-demos                              ┃
┃                        │ s3://zenml-generative-chat                    ┃
┠────────────────────────┼───────────────────────────────────────────────┨
┃ 🌀 kubernetes-cluster  │ zenhacks-cluster                              ┃
┠────────────────────────┼───────────────────────────────────────────────┨
┃ 🐳 docker-registry     │ 715803424590.dkr.ecr.us-east-1.amazonaws.com  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
zenml service-connector register gcp-auto --type gcp --auto-configure
Example Command Output
Successfully registered service connector `gcp-auto` with access to the following resources:
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE  │ RESOURCE NAMES                     ┃
┠────────────────┼────────────────────────────────────┨
┃ 🔵 gcp-generic │ zenml-core                         ┃
┠────────────────┼────────────────────────────────────┨
┃ 📦 gcs-bucket  │ gs://annotation-gcp-store          ┃
┃                │ gs://zenml-bucket-sl               ┃
┃                │ gs://zenml-core.appspot.com        ┃
┃                │ gs://zenml-core_cloudbuild         ┃
┃                │ gs://zenml-datasets                ┃
┃                │ gs://zenml-internal-artifact-store ┃
┃                │ gs://zenml-kubeflow-artifact-store ┃
┠────────────────┼────────────────────────────────────┨
Google Cloud VertexAI
Executing individual steps in Vertex AI.
Vertex AI offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
When to use it
You should use the Vertex step operator if:
one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator.
you have access to Vertex AI. If you're using a different cloud provider, take a look at the SageMaker or AzureML step operators.
How to deploy it
Enable Vertex AI here.
Create a service account with the right permissions to create Vertex AI jobs (roles/aiplatform.admin) and push to the container registry (roles/storage.admin).
How to use it
The GCP step operator (and GCP integration in general) currently only works for Python versions <3.11. The ZenML team is aware of this dependency clash/issue and is working on a fix. For now, please use Python <3.11 together with the GCP integration.
To use the Vertex step operator, we need:
The ZenML gcp integration installed. If you haven't done so, run:
zenml integration install gcp
Docker installed and running.
Vertex AI enabled and a service account file. See the deployment section for detailed instructions.
A GCR container registry as part of our stack.
(Optional) A machine type that we want to execute our steps on (this defaults to n1-standard-4). See here for a list of available machine types.
A remote artifact store as part of your stack. This is needed so that both your orchestration environment and VertexAI can read and write step artifacts. Check out the documentation page of the artifact store you want to use for more information on how to set that up and configure authentication for it.
You have three different options to provide GCP credentials to the step operator: | stack-components | https://docs.zenml.io/stack-components/step-operators/vertex | 408 |
The step creates and returns a FacetsComparison.
When the step finishes, ZenML will search for a materializer class that can handle this type, finds the FacetsMaterializer, and calls the save_visualizations() method, which creates the visualization and saves it into your artifact store as an HTML file.
When you open your dashboard and click on the artifact inside the run DAG, the visualization HTML file is loaded from the artifact store and displayed.
┃ container-registry │ demozenmlcontainerregistry.azurecr.io ┃
┗━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
```
Combine all Stack Components together into a Stack and set it as active (also throw in a local Image Builder for completion):

```sh
zenml image-builder register local --flavor local
```
Example Command Output
```
Running with active workspace: 'default' (global)
Running with active stack: 'default' (global)
Successfully registered image_builder `local`.
```
```sh
zenml stack register gcp-demo -a azure-demo -o aks-demo-cluster -c acr-demo-registry -i local --set
```
Example Command Output
```
Stack 'gcp-demo' successfully registered!
Active repository stack set to:'gcp-demo'
```
Finally, run a simple pipeline to prove that everything works as expected. We'll use the simplest pipeline possible for this example:

```python
from zenml import pipeline, step


@step
def step_1() -> str:
    """Returns the `world` string."""
    return "world"


@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> None:
    """Combines the two strings at its input and prints them."""
    combined_str = f"{input_one} {input_two}"
    print(combined_str)


@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)


if __name__ == "__main__":
    my_pipeline()
```

Saving that to a run.py file and running it gives us:
Example Command Output
```
$ python run.py
Registered pipeline simple_pipeline (version 1).
Building Docker image(s) for pipeline simple_pipeline.
Building Docker image demozenmlcontainerregistry.azurecr.io/zenml:simple_pipeline-orchestrator.
Including integration requirements: adlfs==2021.10.0, azure-identity==1.10.0, azure-keyvault-keys, azure-keyvault-secrets, azure-mgmt-containerservice>=20.0.0, azureml-core==1.48.0, kubernetes, kubernetes==18.20.0
No .dockerignore found, including all files inside build context.
# only. Leave blank if using account/key or MSI.
--rclone_config_az_use_msi="" \ # use a managed service identity to
# authenticate (only works in Azure).
--rclone_config_az_client_id="" \ # client ID of the service principal
# to use for authentication.
--rclone_config_az_client_secret="" \ # client secret of the service
# principal to use for authentication.
--rclone_config_az_tenant="" \ # tenant ID of the service principal
# to use for authentication.
# Alternatively for providing key-value pairs, you can utilize the '--values' option by specifying a file path containing
# key-value pairs in either JSON or YAML format.
# File content example: {"rclone_config_az_type":"azureblob",...}
zenml secret create az-seldon-secret \
--values=@path/to/file.json
How do you use it?
Requirements
To run pipelines that deploy models to Seldon, you need the following tools installed locally:
Docker
K3D (can be installed by running curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash).
Stack Component Registration
For registering the model deployer, we need the URL of the Istio Ingress Gateway deployed on the Kubernetes cluster. We can get this URL by running the following command (assuming that the service name is istio-ingressgateway, deployed in the istio-system namespace):
# For GKE clusters, the host is the GKE cluster IP address.
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# For EKS clusters, the host is the EKS cluster IP hostname.
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Now register the model deployer:
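A sketch of that registration, using the ingress URL exported above (the option names follow the Seldon Model Deployer flavor; verify them against the flavor reference before use):

```sh
zenml model-deployer register seldon_deployer --flavor=seldon \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --kubernetes_namespace=<KUBERNETES_NAMESPACE> \
    --base_url=http://$INGRESS_HOST
```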
No .dockerignore found, including all files inside build context.
Step 1/10 : FROM zenmldocker/zenml:0.40.0-py3.8
Step 2/10 : WORKDIR /app
Step 3/10 : COPY .zenml_user_requirements .
Step 4/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_user_requirements
Step 5/10 : COPY .zenml_integration_requirements .
Step 6/10 : RUN pip install --default-timeout=60 --no-cache-dir -r .zenml_integration_requirements
Step 7/10 : ENV ZENML_ENABLE_REPO_INIT_WARNINGS=False
Step 8/10 : ENV ZENML_CONFIG_PATH=/app/.zenconfig
Step 9/10 : COPY . .
Step 10/10 : RUN chmod -R a+rw .
Pushing Docker image demozenmlcontainerregistry.azurecr.io/zenml:simple_pipeline-orchestrator.
Finished pushing Docker image.
Finished building Docker image(s).
Running pipeline simple_pipeline on stack gcp-demo (caching disabled)
Waiting for Kubernetes orchestrator pod...
Kubernetes orchestrator pod started.
Waiting for pod of step simple_step_one to start...
Step simple_step_one has started.
INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
INFO:azure.identity.aio._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
Step simple_step_one has finished in 0.396s.
Pod of step simple_step_one completed.
Waiting for pod of step simple_step_two to start...
Step simple_step_two has started.
INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
INFO:azure.identity._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
INFO:azure.identity.aio._internal.get_token_mixin:ClientSecretCredential.get_token succeeded
Hello World!
Step simple_step_two has finished in 3.203s.
Pod of step simple_step_two completed.
Orchestration pod completed.
zenml service-connector describe azure-implicit

Example Command Output

Service connector 'azure-implicit' of type 'azure' with id 'ad645002-0cd4-4d4f-ae20-499ce888a00a' is owned by user 'default' and is 'private'.
'azure-implicit' azure Service Connector Details
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ PROPERTY       │ VALUE                                                                           ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ ID             │ ad645002-0cd4-4d4f-ae20-499ce888a00a                                            ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ NAME           │ azure-implicit                                                                  ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ TYPE           │ 🇦 azure                                                                        ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ AUTH METHOD    │ implicit                                                                        ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE TYPES │ 🇦 azure-generic, 📦 blob-container, 🌀 kubernetes-cluster, 🐳 docker-registry ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ RESOURCE NAME  │ <multiple>                                                                      ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
┃ SECRET ID      │                                                                                 ┃
┠────────────────┼─────────────────────────────────────────────────────────────────────────────────┨
…to the query.

import litellm

def get_completion_from_messages(
    messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000
):
    """Generates a completion response from the given messages using the specified model."""
    model = MODEL_NAME_MAP.get(model, model)
    completion_response = litellm.completion(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return completion_response.choices[0].message.content
We're using litellm because it makes sense not to have to implement separate functions for each LLM we might want to use. The pace of development in the field is such that you will want to experiment with new LLMs as they come out, and litellm gives you the flexibility to do that without having to rewrite your code.
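A hypothetical call, assuming the constants OPENAI_MODEL and MODEL_NAME_MAP are defined elsewhere in llm_utils.py:

```python
messages = [
    {"role": "system", "content": "You are a documentation assistant."},
    {"role": "user", "content": "How do I register an S3 artifact store?"},
]
answer = get_completion_from_messages(messages)
print(answer)
```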
We've now completed a basic RAG inference pipeline that uses the embeddings generated by the pipeline to retrieve the most relevant chunks of text based on a given query. We can inspect the various components of the pipeline to see how they work together to provide a response to the query. This gives us a solid foundation to move onto more complex RAG pipelines and to look into how we might improve this. The next section will cover how to improve retrieval by finetuning the embeddings generated by the pipeline. This will boost our performance in situations where we have a large volume of documents and also when the documents are potentially very different from the training data that was used for the embeddings.
Code Example
To explore the full code, visit the Complete Guide repository and for this section, particularly the llm_utils.py file.
Develop a Custom Annotator
Learning how to develop a custom annotator.
Before diving into the specifics of this component type, it is beneficial to familiarize yourself with our general guide to writing custom component flavors in ZenML. This guide provides an essential understanding of ZenML's component flavor concepts.
Annotators are a stack component that enables the use of data annotation as part of your ZenML stack and pipelines. You can use the associated CLI command to launch annotation, configure your datasets and get stats on how many labeled tasks you have ready for use.
Base abstraction in progress!
We are actively working on the base abstraction for the annotators, which will be available soon. As a result, their extension is not possible at the moment. If you would like to use an annotator in your stack, please check the list of already available annotators below.
Take a look here for a guide on how to set that up.
A remote artifact store as part of your stack. This is needed so that both your orchestration environment and SageMaker can read and write step artifacts. Check out the documentation page of the artifact store you want to use for more information on how to set that up and configure authentication for it.
An instance type that we want to execute our steps on. See here for a list of available instance types.
(Optional) An experiment that is used to group SageMaker runs. Check this guide to see how to create an experiment.
There are two ways you can authenticate your orchestrator to AWS to be able to run steps on SageMaker:
The recommended way to authenticate your SageMaker step operator is by registering or using an existing AWS Service Connector and connecting it to your SageMaker step operator. The credentials configured for the connector must have permissions to create and manage SageMaker runs (e.g. the AmazonSageMakerFullAccess managed policy permissions). The SageMaker step operator uses the aws-generic resource type, so make sure to configure the connector accordingly:
zenml service-connector register <CONNECTOR_NAME> --type aws -i
zenml step-operator register <STEP_OPERATOR_NAME> \
--flavor=sagemaker \
--role=<SAGEMAKER_ROLE> \
--instance_type=<INSTANCE_TYPE> \
# --experiment_name=<EXPERIMENT_NAME> # optionally specify an experiment to assign this run to
zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME>
zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set
If you don't connect your step operator to a service connector:
If using a local orchestrator: ZenML will try to implicitly authenticate to AWS via the default profile in your local AWS configuration file. Make sure this profile has permissions to create and manage SageMaker runs (e.g. the AmazonSageMakerFullAccess managed policy permissions).
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.

$ zenml service-connector list-resources --resource-type kubernetes-cluster -e
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME   │ CONNECTOR TYPE │ RESOURCE TYPE         │ RESOURCE NAMES      ┃
┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼─────────────────────┨
┃ e33c9fac-5daa-48b2-87bb-0187d3782cde │ aws-iam-multi-eu │ 🔶 aws         │ 🌀 kubernetes-cluster │ kubeflowmultitenant ┃
┃                                      │                  │                │                       │ zenbox              ┃
┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼─────────────────────┨
┃ ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 │ aws-iam-multi-us │ 🔶 aws         │ 🌀 kubernetes-cluster │ zenhacks-cluster    ┃
┠──────────────────────────────────────┼──────────────────┼────────────────┼───────────────────────┼─────────────────────┨
┃ 1c54b32a-4889-4417-abbd-42d3ace3d03a │ gcp-sa-multi     │ 🔵 gcp         │ 🌀 kubernetes-cluster │ zenml-test-cluster  ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┛
API key, run:
...
Set up your secrets in Github

For our Github Actions we will need to set up some secrets for our repository. Specifically, you should use GitHub secrets to store the ZENML_API_KEY that you created above.
The other values that are loaded from secrets into the environment here can also be set explicitly or as variables.
(Optional) Set up different stacks for Staging and Production
You might not necessarily want to use the same stack with the same resources for your staging and production use.
This step is optional, all you'll need for certain is a stack that runs remotely (remote orchestration and artifact storage). The rest is up to you. You might for example want to parametrize your pipeline to use different data sources for the respective environments. You can also use different configuration files for the different environments to configure the Model, the DockerSettings, the ResourceSettings like accelerators differently for the different environments.
Trigger a pipeline on a Pull Request (Merge Request)
To ensure that only fully working code makes it into production, you should use a staging environment to test all the changes made to your code base and verify they work as intended. To do so automatically, you should set up a GitHub Actions workflow that runs your pipeline for you when you make changes to it. Here is an example that you can use.
To only run the Github Action on a PR, you can configure the yaml like this
on:
pull_request:
branches: [ staging, main ]
When the workflow starts we want to set some important values. Here is a simplified version that you can use.
jobs:
run-staging-workflow:
runs-on: run-zenml-pipeline
env:
ZENML_HOST: ${{ secrets.ZENML_HOST }} # Put your server url here
ZENML_API_KEY: ${{ secrets.ZENML_API_KEY }} # Retrieves the api key for use
ZENML_STACK: stack_name # Use this to decide which stack is used for staging
ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }}
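The remaining job steps then typically check out the code, install dependencies, connect to the ZenML server, and run the pipeline. A sketch of such steps (action versions and the connect invocation are illustrative; adapt them to your setup):

```yaml
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install requirements
        run: pip install -r requirements.txt
      - name: Connect to ZenML server and run the pipeline
        run: |
          zenml connect --url $ZENML_HOST --api-key $ZENML_API_KEY
          zenml stack set $ZENML_STACK
          python run.py
```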
Automated evaluation using synthetic generated queries

For a broader evaluation, we can examine a larger number of queries to check the retrieval component's performance. We do this by using an LLM to generate synthetic data. In our case, we take the text of each document chunk and pass it to an LLM, telling it to generate a question.
For example, given the text:
zenml orchestrator connect ${ORCHESTRATOR\_NAME} -iHead on over to our docs to
learn more about orchestrators and how to configure them. Container Registry export
CONTAINER\_REGISTRY\_NAME=gcp\_container\_registry zenml container-registry register $
{CONTAINER\_REGISTRY\_NAME} --flavor=gcp --uri=<GCR-URI> # Connect the GCS
orchestrator to the target gcp project via a GCP Service Connector zenml
container-registry connect ${CONTAINER\_REGISTRY\_NAME} -i Head on over to our docs to
learn more about container registries and how to configure them. 7) Create Stack
export STACK\_NAME=gcp\_stack zenml stack register ${STACK\_NAME} -o $
{ORCHESTRATOR\_NAME} \\ a ${ARTIFACT\_STORE\_NAME} -c ${CONTAINER\_REGISTRY\_NAME}
--set In case you want to also add any other stack components to this stack, feel free
to do so. And you're already done! Just like that, you now have a fully working GCP
stack ready to go. Feel free to take it for a spin by running a pipeline on it.
Cleanup If you do not want to use any of the created resources in the future, simply
delete the project you created. gcloud project delete <PROJECT\_ID\_OR\_NUMBER> <!--
For scarf --> <figure><img alt="ZenML Scarf"
referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?
x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" /></figure> PreviousScale compute to the
cloud NextConfiguring ZenML Last updated 2 days ago
we might get the question:
How do I create and configure a GCP stack in ZenML using an
orchestrator, container registry, and stack components, and how
do I delete the resources when they are no longer needed?
GCP OAuth 2.0 token

Uses temporary OAuth 2.0 tokens explicitly configured by the user.
This method has the major limitation that the user must regularly generate new tokens and update the connector configuration as OAuth 2.0 tokens expire. On the other hand, this method is ideal in cases where the connector only needs to be used for a short period of time, such as sharing access temporarily with someone else in your team.
Using any of the other authentication methods will automatically generate and refresh OAuth 2.0 tokens for clients upon request.
A GCP project is required and the connector may only be used to access GCP resources in the specified project.
Fetching OAuth 2.0 tokens from the local GCP CLI is possible if the GCP CLI is already configured with valid credentials (i.e. by running gcloud auth application-default login). We need to force the ZenML CLI to use the OAuth 2.0 token authentication by passing the --auth-method oauth2-token option, otherwise, it would automatically pick up long-term credentials:
zenml service-connector register gcp-oauth2-token --type gcp --auto-configure --auth-method oauth2-token
Example Command Output
Successfully registered service connector `gcp-oauth2-token` with access to the following resources:
┏━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ RESOURCE TYPE  │ RESOURCE NAMES              ┃
┠────────────────┼─────────────────────────────┨
┃ 🔵 gcp-generic │ zenml-core                  ┃
┠────────────────┼─────────────────────────────┨
┃ 📦 gcs-bucket  │ gs://zenml-bucket-sl        ┃
┃                │ gs://zenml-core.appspot.com ┃
┃                │ gs://zenml-core_cloudbuild  ┃
┃                │ gs://zenml-datasets         ┃
┗━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Implement a custom stack component
How to write a custom stack component flavor
When building a sophisticated MLOps Platform, you will often need to come up with custom-tailored solutions for your infrastructure or tooling. ZenML is built around the values of composability and reusability which is why the stack component flavors in ZenML are designed to be modular and straightforward to extend.
This guide will help you understand what a flavor is, and how you can develop and use your own custom flavors in ZenML.
Understanding component flavors
In ZenML, a component type is a broad category that defines the functionality of a stack component. Each type can have multiple flavors, which are specific implementations of the component type. For instance, the type artifact_store can have flavors like local, s3, etc. Each flavor defines a unique implementation of functionality that an artifact store brings to a stack.
Base Abstractions
Before we get into the topic of creating custom stack component flavors, let us briefly discuss the three core abstractions related to stack components: the StackComponent, the StackComponentConfig, and the Flavor.
Base Abstraction 1: StackComponent
The StackComponent is the abstraction that defines the core functionality. As an example, check out the BaseArtifactStore definition below: The BaseArtifactStore inherits from StackComponent and establishes the public interface of all artifact stores. Any artifact store flavor needs to follow the standards set by this base class.
from abc import abstractmethod

from zenml.stack import StackComponent

class BaseArtifactStore(StackComponent):
    """Base class for all ZenML artifact stores."""

    # --- public interface ---
    @abstractmethod
    def open(self, path, mode="r"):
        """Open a file at the given path."""

    @abstractmethod
    def exists(self, path):
        """Checks if a path exists."""

    ...
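The StackComponentConfig and Flavor abstractions follow the same pattern. As a rough sketch of how the three pieces fit together (the class names and config fields are illustrative, not ZenML's actual artifact store classes):

```python
from typing import Optional, Type

from zenml.enums import StackComponentType
from zenml.stack import Flavor, StackComponent, StackComponentConfig


class MyArtifactStoreConfig(StackComponentConfig):
    """Holds the user-supplied, pydantic-validated settings."""

    path: str
    token: Optional[str] = None


class MyArtifactStoreFlavor(Flavor):
    """Ties the config class and the implementation class together."""

    @property
    def name(self) -> str:
        return "my_artifact_store"

    @property
    def type(self) -> StackComponentType:
        return StackComponentType.ARTIFACT_STORE

    @property
    def config_class(self) -> Type[StackComponentConfig]:
        return MyArtifactStoreConfig

    @property
    def implementation_class(self) -> Type[StackComponent]:
        # Imported lazily so the flavor can be registered even when the
        # implementation's dependencies are not installed locally
        from my_artifact_store import MyArtifactStore

        return MyArtifactStore
```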
should pick the one that best fits your use case. If you already have one or more GCP Service Connectors configured in your ZenML deployment, you can check which of them can be used to access the GCS bucket you want to use for your GCS Artifact Store by running e.g.:
zenml service-connector list-resources --resource-type gcs-bucket
Example Command Output
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ CONNECTOR ID                         │ CONNECTOR NAME      │ CONNECTOR TYPE │ RESOURCE TYPE │ RESOURCE NAMES                     ┃
┠──────────────────────────────────────┼─────────────────────┼────────────────┼───────────────┼────────────────────────────────────┨
┃ 7f0c69ba-9424-40ae-8ea6-04f35c2eba9d │ gcp-user-account    │ 🔵 gcp         │ 📦 gcs-bucket │ gs://zenml-bucket-sl               ┃
┃                                      │                     │                │               │ gs://zenml-core.appspot.com        ┃
┃                                      │                     │                │               │ gs://zenml-core_cloudbuild         ┃
┃                                      │                     │                │               │ gs://zenml-datasets                ┃
┃                                      │                     │                │               │ gs://zenml-internal-artifact-store ┃
┃                                      │                     │                │               │ gs://zenml-kubeflow-artifact-store ┃
┠──────────────────────────────────────┼─────────────────────┼────────────────┼───────────────┼────────────────────────────────────┨
┃ 2a0bec1b-9787-4bd7-8d4a-9a47b6f61643 │ gcs-zenml-bucket-sl │ 🔵 gcp         │ 📦 gcs-bucket │ gs://zenml-bucket-sl               ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Embeddings generation
Generate embeddings to improve retrieval performance.
In this section, we'll explore how to generate embeddings for your data to improve retrieval performance in your RAG pipeline. Embeddings are a crucial part of the retrieval mechanism in RAG, as they represent the data in a high-dimensional space where similar items are closer together. By generating embeddings for your data, you can enhance the retrieval capabilities of your RAG pipeline and provide more accurate and relevant responses to user queries.
Embeddings are vector representations of data that capture the semantic meaning and context of the data in a high-dimensional space. They are generated using machine learning models, such as word embeddings or sentence embeddings, that learn to encode the data in a way that preserves its underlying structure and relationships. Embeddings are commonly used in natural language processing (NLP) tasks, such as text classification, sentiment analysis, and information retrieval, to represent textual data in a format that is suitable for computational processing.
The whole purpose of the embeddings is to allow us to quickly find the small chunks that are most relevant to our input query at inference time. An even simpler way of doing this would be to just search for some keywords in the query and hope that they're also represented in the chunks. However, this approach is not very robust and may not work well for more complex queries or longer documents. By using embeddings, we can capture the semantic meaning and context of the data and retrieve the most relevant chunks based on their similarity to the query.
We're using the sentence-transformers library to generate embeddings for our data. This library provides pre-trained models for generating sentence embeddings that capture the semantic meaning of the text. It's an open-source library that is easy to use and provides high-quality embeddings for a wide range of NLP tasks.
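A minimal sketch of what this looks like in practice (the model name and chunks are illustrative):

```python
from sentence_transformers import SentenceTransformer

# Load a pre-trained sentence embedding model
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

chunks = [
    "ZenML is an extensible, open-source MLOps framework.",
    "Artifact stores persist the outputs of pipeline steps.",
]

# Each chunk becomes a fixed-size vector; semantically similar
# chunks end up close together in the embedding space
embeddings = model.encode(chunks)
print(embeddings.shape)  # (2, 384) for this model
```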
Configure the server environment
How to configure the server environment
The ZenML server environment is configured using environment variables. You will need to set these before deploying your server instance. Please refer to the full list of environment variables available to you here.
┃                                      │                       │                │ 🌀 kubernetes-cluster │ zenhacks-cluster                             ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼──────────────────────────────────────────────┨
┃                                      │                       │                │ 🐳 docker-registry    │ 715803424590.dkr.ecr.us-east-1.amazonaws.com ┃
┠──────────────────────────────────────┼───────────────────────┼────────────────┼───────────────────────┼──────────────────────────────────────────────┨
┃ fa9325ab-ce01-4404-aec3-61a3af395d48 │ aws-s3-multi-instance │ 🔶 aws         │ 📦 s3-bucket          │ s3://aws-ia-mwaa-715803424590                ┃
┃                                      │                       │                │                       │ s3://zenfiles                                ┃
┃                                      │                       │                │                       │ s3://zenml-demos                             ┃
┃                                      │                       │                │                       │ s3://zenml-generative-chat                   ┃
┃                                      │                       │                │                       │ s3://zenml-public-datasets                   ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Register a SageMaker Pipelines orchestrator stack component:

You'll need the IAM role ARN that we noted down earlier to register the orchestrator. This is the 'execution role' ARN you need to pass to the orchestrator.
zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region=<YOUR_REGION> --execution_role=<ROLE_ARN>
Note: The SageMaker orchestrator utilizes the AWS configuration for operation and does not require direct connection via a service connector for authentication, as it relies on your AWS CLI configurations or environment variables.
More details here.
Container Registry (ECR)
A container registry is used to store Docker images for your pipelines.
You'll need to create a repository in ECR. If you already have one, you can skip this step.
aws ecr create-repository --repository-name zenml --region <YOUR_REGION>
Once this is done, you can create the ZenML stack component as follows:
Register an ECR container registry stack component:
zenml container-registry register ecr-registry --flavor=aws --uri=<ACCOUNT_ID>.dkr.ecr.<YOUR_REGION>.amazonaws.com --connector aws-connector
More details here.
4) Create stack
export STACK_NAME=aws_stack
zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \
    -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
In case you want to also add any other stack components to this stack, feel free to do so.
5) And you're already done!
Just like that, you now have a fully working AWS stack ready to go. Feel free to take it for a spin by running a pipeline on it.
Define a ZenML pipeline:
from zenml import pipeline, step

@step
def hello_world() -> str:
    return "Hello from SageMaker!"

@pipeline
def aws_sagemaker_pipeline():
    hello_world()

if __name__ == "__main__":
    aws_sagemaker_pipeline()
Save this code to run.py and execute it. The pipeline will use AWS S3 for artifact storage, Amazon SageMaker Pipelines for orchestration, and Amazon ECR for container registry.
python run.py
Read more in the production guide.
Cleanup
There are three ways to provide GCP credentials to the step operator:
use the gcloud CLI to authenticate locally with GCP. This only works in combination with the local orchestrator.
gcloud auth login
zenml step-operator register <STEP_OPERATOR_NAME> \
--flavor=vertex \
--project=<GCP_PROJECT> \
--region=<REGION> \
# --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on
configure the step operator to use a service account key file to authenticate with GCP by setting the service_account_path parameter in the step operator configuration to point to a service account key file. This also works only in combination with the local orchestrator.
zenml step-operator register <STEP_OPERATOR_NAME> \
--flavor=vertex \
--project=<GCP_PROJECT> \
--region=<REGION> \
--service_account_path=<SERVICE_ACCOUNT_PATH> \
# --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on
(recommended) configure a GCP Service Connector with GCP credentials coming from a service account key file or the local gcloud CLI set up with user account credentials and then link the Vertex AI Step Operator stack component to the Service Connector. This option works with any orchestrator.
zenml service-connector register <CONNECTOR_NAME> --type gcp --auth-method=service-account --project_id=<PROJECT_ID> --service_account_json=@<SERVICE_ACCOUNT_PATH> --resource-type gcp-generic
# Or, as an alternative, you could use the GCP user account locally set up with gcloud
# zenml service-connector register <CONNECTOR_NAME> --type gcp --resource-type gcp-generic --auto-configure
zenml step-operator register <STEP_OPERATOR_NAME> \
--flavor=vertex \
--region=<REGION> \
# --machine_type=<MACHINE_TYPE> # optionally specify the type of machine to run on
zenml step-operator connect <STEP_OPERATOR_NAME> --connector <CONNECTOR_NAME>
We can then use the registered step operator in our active stack:
# Add the step operator to the active stack
zenml stack update -s <NAME> | stack-components | https://docs.zenml.io/stack-components/step-operators/vertex | 453 |
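Once the stack is updated, individual steps can be routed to Vertex AI via the step decorator. A minimal sketch:

from zenml import step

# This step will be executed on Vertex AI instead of the orchestrator
@step(step_operator="<STEP_OPERATOR_NAME>")
def trainer() -> None:
    """Train a model on Vertex AI-provisioned hardware."""
    ...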
The credentials should include permissions to connect to and use the GKE cluster (i.e. some or all permissions in the
Kubernetes Engine Developer role).
If set, the resource name must identify an GKE cluster using one of the
following formats:
GKE cluster name: {cluster-name}
GKE cluster names are project scoped. The connector can only be used to access
GKE clusters in the GCP project that it is configured to use.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Displaying information about the service-account GCP authentication method:
zenml service-connector describe-type gcp --auth-method service-account
Example Command Output
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β π GCP Service Account (auth method: service-account) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Supports issuing temporary credentials: False
Use a GCP service account and its credentials to authenticate to GCP services.
This method requires a GCP service account and a service account key JSON
created for it.
The GCP connector generates temporary OAuth 2.0 tokens from the service account
credentials and distributes them to clients. The tokens have a limited lifetime
of 1 hour.
A GCP project is required and the connector may only be used to access GCP
resources in the specified project.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable
configured to point to a service account key JSON file, it will be automatically
picked up when auto-configuration is used.
Attributes:
service_account_json {string, secret, required}: GCP Service Account Key JSON
project_id {string, required}: GCP Project ID where the target resource is
located.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Basic Service Connector Types | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 436 |
tored in the
secrets store.
"""
@abstractmethod
def delete_secret_values(self, secret_id: UUID) -> None:
"""Deletes secret values for an existing secret.
Args:
secret_id: The ID of the secret.
Raises:
KeyError: if no secret values for the given ID are stored in the
secrets store.
"""
This is a slimmed-down version of the real interface which aims to highlight the abstraction layer. In order to see the full definition and get the complete docstrings, please check the SDK docs .
Build your own custom secrets store
If you want to create your own custom secrets store implementation, you can follow the following steps:
Create a class that inherits from the zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore base class and implements the abstract methods shown in the interface above. Use SecretsStoreType.CUSTOM as the TYPE value for your secrets store class.
If you need to provide any configuration, create a class that inherits from the SecretsStoreConfiguration class and add your configuration parameters there. Use that as the CONFIG_TYPE value for your secrets store class.
To configure the ZenML server to use your custom secrets store, make sure your code is available in the container image that is used to run the ZenML server. Then, use environment variables or helm chart values to configure the ZenML server to use your custom secrets store, as covered in the deployment guide.
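Putting these steps together, a skeleton might look like the following. This is a sketch only: the import path of SecretsStoreConfiguration, the vault_url parameter, and the exact set of abstract methods are assumptions to be checked against the SDK docs for your version:

from typing import ClassVar, Dict, Type
from uuid import UUID

from zenml.enums import SecretsStoreType
from zenml.zen_stores.secrets_stores.base_secrets_store import BaseSecretsStore

# NOTE: the import path for SecretsStoreConfiguration is an assumption
from zenml.config.secrets_store_config import SecretsStoreConfiguration


class MyVaultSecretsStoreConfiguration(SecretsStoreConfiguration):
    """Configuration for the custom secrets store."""

    vault_url: str  # hypothetical configuration parameter


class MyVaultSecretsStore(BaseSecretsStore):
    """Custom secrets store backed by a hypothetical external vault."""

    TYPE: ClassVar[SecretsStoreType] = SecretsStoreType.CUSTOM
    CONFIG_TYPE: ClassVar[Type[SecretsStoreConfiguration]] = (
        MyVaultSecretsStoreConfiguration
    )

    def delete_secret_values(self, secret_id: UUID) -> None:
        # Remove the values for this secret from the external vault and
        # raise KeyError if no values are stored for the given ID.
        ...

    # ... implement the remaining abstract methods of the interface here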
PreviousTroubleshoot stack components
NextSecret management
Last updated 19 days ago | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/manage-the-deployed-services/custom-secret-stores | 306 |
Authentication methods:
π password
Resource types:
π³ docker-registry
Supports auto-configuration: False
Available locally: True
Available remotely: True
The ZenML Docker Service Connector allows authenticating with a Docker or OCI
container registry and managing Docker clients for the registry.
This connector provides pre-authenticated python-docker Python clients to Stack
Components that are linked to it.
No Python packages are required for this Service Connector. All prerequisites
are included in the base ZenML Python package. Docker needs to be installed on
environments where container images are built and pushed to the target container
registry.
[...]
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Please select a service connector type (kubernetes, docker, azure, aws, gcp): gcp
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Available resource types β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
π΅ Generic GCP resource (resource type: gcp-generic)
Authentication methods: implicit, user-account, service-account, oauth2-token,
impersonation
Supports resource instances: False
Authentication methods:
π implicit
π user-account
π service-account
π oauth2-token
π impersonation
This resource type allows Stack Components to use the GCP Service Connector to
connect to any GCP service or resource. When used by Stack Components, they are
provided a Python google-auth credentials object populated with a GCP OAuth 2.0
token. This credentials object can then be used to create GCP Python clients for
any particular GCP service.
This generic GCP resource type is meant to be used with Stack Components that
are not represented by other, more specific resource type, like GCS buckets,
Kubernetes clusters or Docker registries. For example, it can be used with the
Google Cloud Builder Image Builder stack component, or the Vertex AI | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 465 |
βββββββββββ·βββββββββββββββββ·βββββββββββββββββ
```

register and connect a Vertex AI Orchestrator Stack Component to the target GCP project

NOTE: If we do not specify a workload service account, the Vertex AI Pipelines Orchestrator uses the Compute Engine default service account in the target project to run pipelines. You must grant this account the Vertex AI Service Agent role, otherwise the pipelines will fail. More information on other configurations possible for the Vertex AI Orchestrator can be found here.

```sh
zenml orchestrator register vertex-ai-zenml-core --flavor=vertex --location=europe-west1 --synchronous=true
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `vertex-ai-zenml-core`.
```
```sh
zenml orchestrator connect vertex-ai-zenml-core --connector vertex-ai-zenml-core
```
Example Command Output
```text
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully connected orchestrator `vertex-ai-zenml-core` to the following resources:
ββββββββββββββββββββββββββββββββββββββββ―βββββββββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ―βββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌβββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββ¨
β f97671b9-8c73-412b-bf5e-4b7c48596f5f β vertex-ai-zenml-core β π΅ gcp β π΅ gcp-generic β zenml-core β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ
```
Register and connect a GCP Container Registry Stack Component to a GCR container registry:

```sh
zenml container-registry register gcr-zenml-core --flavor gcp --uri=gcr.io/zenml-core
```
Example Command Output
```text
Running with active workspace: 'default' (repository) | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 573 |
πEnvironment Variables
How to control ZenML behavior with environmental variables.
There are a few pre-defined environmental variables that can be used to control the behavior of ZenML. See the list below with default values and options:
Logging verbosity
export ZENML_LOGGING_VERBOSITY=INFO
Choose from INFO, WARN, ERROR, CRITICAL, DEBUG.
Disable step logs
Usually, ZenML stores step logs in the artifact store, but this can sometimes cause performance bottlenecks, especially if the code utilizes progress bars.
If you want to configure whether logged output from steps is stored or not, set the ZENML_DISABLE_STEP_LOGS_STORAGE environment variable to true. Note that this will mean that logs from your steps will no longer be stored and thus won't be visible on the dashboard anymore.
export ZENML_DISABLE_STEP_LOGS_STORAGE=false
ZenML repository path
To configure where ZenML will install and look for its repository, set the environment variable ZENML_REPOSITORY_PATH.
export ZENML_REPOSITORY_PATH=/path/to/somewhere
Analytics
Please see our full page on what analytics are tracked and how you can opt out, but the quick summary is that you can set this to false if you want to opt out of analytics.
export ZENML_ANALYTICS_OPT_IN=false
Debug mode
Setting to true switches to developer mode:
export ZENML_DEBUG=true
Active stack
Setting the ZENML_ACTIVE_STACK_ID to a specific UUID will make the corresponding stack the active stack:
export ZENML_ACTIVE_STACK_ID=<UUID-OF-YOUR-STACK>
Prevent pipeline execution
When true, this prevents a pipeline from executing:
export ZENML_PREVENT_PIPELINE_EXECUTION=false
Disable rich traceback
Set to false to disable the rich traceback:
export ZENML_ENABLE_RICH_TRACEBACK=true
Disable colourful logging
If you wish to disable colourful logging, set the following environment variable:
export ZENML_LOGGING_COLORS_DISABLED=true
he need to rerun unchanged parts of your pipeline.With ZenML, you can easily trace an artifact back to its origins and understand the exact sequence of executions that led to its creation, such as a trained model. This feature enables you to gain insights into the entire lineage of your artifacts, providing a clear understanding of how your data has been processed and transformed throughout your machine-learning pipelines. With ZenML, you can ensure the reproducibility of your results, and identify potential issues or bottlenecks in your pipelines. This level of transparency and traceability is essential for maintaining the reliability and trustworthiness of machine learning projects, especially when working in a team or across different environments.
For more details on how to adjust the names or versions assigned to your artifacts, assign tags to them, or adjust other artifact properties, see the documentation on artifact versioning and configuration.
By tracking the lineage of artifacts across environments and stacks, ZenML enables ML engineers to reproduce results and understand the exact steps taken to create a model. This is crucial for ensuring the reliability and reproducibility of machine learning models, especially when working in a team or across different environments.
Saving and Loading Artifacts with Materializers
Materializers play a crucial role in ZenML's artifact management system. They are responsible for handling the serialization and deserialization of artifacts, ensuring that data is consistently stored and retrieved from the artifact store. Each materializer stores data flowing through a pipeline in one or more files within a unique directory in the artifact store: | how-to | https://docs.zenml.io/v/docs/how-to/handle-data-artifacts/artifact-versioning | 303 |
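For instance, a custom materializer for a toy MyObj class might look like this (a sketch, assuming the current BaseMaterializer API with load/save methods and the self.artifact_store attribute):

import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.materializers.base_materializer import BaseMaterializer


class MyObj:
    def __init__(self, name: str):
        self.name = name


class MyObjMaterializer(BaseMaterializer):
    """Stores MyObj instances as a text file in the artifact store."""

    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        # Read the persisted file back from this artifact's unique directory
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "r") as f:
            return MyObj(name=f.read())

    def save(self, my_obj: MyObj) -> None:
        # Write the object to a file inside this artifact's unique directory
        with self.artifact_store.open(os.path.join(self.uri, "data.txt"), "w") as f:
            f.write(my_obj.name)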
y on your machine or running remotely as a server.All metadata is now stored, tracked, and managed by ZenML itself. The Metadata Store stack component type and all its implementations have been deprecated and removed. It is no longer possible to register them or include them in ZenML stacks. This is a key architectural change in ZenML 0.20.0 that further improves usability, reproducibility and makes it possible to visualize and manage all your pipelines and pipeline runs in the new ZenML Dashboard.
The architecture changes for the local case are shown in the diagram below:
The architecture changes for the remote case are shown in the diagram below:
If you're already using ZenML, aside from the above limitation, this change will impact you differently, depending on the flavor of Metadata Stores you have in your stacks:
if you're using the default sqlite Metadata Store flavor in your stacks, you don't need to do anything. ZenML will automatically switch to using its local database instead of your sqlite Metadata Stores when you update to 0.20.0 (also see how to migrate your stacks).
if you're using the kubeflow Metadata Store flavor only as a way to connect to the local Kubeflow Metadata Service (i.e. the one installed by the kubeflow Orchestrator in a local k3d Kubernetes cluster), you also don't need to do anything explicitly. When you migrate your stacks to ZenML 0.20.0, ZenML will automatically switch to using its local database.
if you're using the kubeflow Metadata Store flavor to connect to a remote Kubeflow Metadata Service such as those provided by a Kubeflow installation running in AWS, Google or Azure, there is currently no equivalent in ZenML 0.20.0. You'll need to deploy a ZenML Server instance close to where your Kubeflow service is running (e.g. in the same cloud region).
if you're using the mysql Metadata Store flavor to connect to a remote MySQL database service (e.g. a managed AWS, GCP or Azure MySQL service), you'll have to deploy a ZenML Server instance connected to that same database. | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-twenty | 440 |
python file_that_runs_a_zenml_pipeline.py
Tekton UI
Tekton comes with its own UI that you can use to find further details about your pipeline runs, such as the logs of your steps.
To find the Tekton UI endpoint, we can use the following command:
kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}'
Additional configuration
For additional configuration of the Tekton orchestrator, you can pass TektonOrchestratorSettings which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings
from kubernetes.client.models import V1Toleration
tekton_settings = TektonOrchestratorSettings(
    pod_settings={
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [
                        {
                            "matchExpressions": [
                                {
                                    "key": "node.kubernetes.io/name",
                                    "operator": "In",
                                    "values": ["my_powerful_node_group"],
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ],
    }
)
If your pipeline steps have certain hardware requirements, you can specify them as ResourceSettings:
from zenml.config import ResourceSettings

resource_settings = ResourceSettings(cpu_count=8, memory="16GB")
These settings can then be specified on either pipeline-level or step-level:
# Either specify on pipeline-level
@pipeline(
    settings={
        "orchestrator.tekton": tekton_settings,
        "resources": resource_settings,
    }
)
def my_pipeline():
    ...
# OR specify settings on step-level
@step(
    settings={
        "orchestrator.tekton": tekton_settings,
        "resources": resource_settings,
    }
)
def my_step():
    ...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
For more information and a full list of configurable attributes of the Tekton orchestrator, check out the SDK Docs . | stack-components | https://docs.zenml.io/stack-components/orchestrators/tekton | 462 |
iple> β β β default β 40m58s β ββ β β β β π¦ blob-container β β β β β β
β β β β β π kubernetes-cluster β β β β β β
β β β β β π³ docker-registry β β β β β β
ββββββββββ·ββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ·βββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββ·βββββββββ
Auto-configuration
The Azure Service Connector allows auto-discovering and fetching credentials and configuration set up by the Azure CLI on your local host.
The Azure service connector auto-configuration comes with two limitations:
it can only pick up temporary Azure access tokens and therefore cannot be used for long-term authentication scenarios
it doesn't support authenticating to the Azure blob storage service. The Azure service principal authentication method can be used instead.
For an auto-configuration example, please refer to the section about Azure access tokens.
Local client provisioning
The local Azure CLI, Kubernetes kubectl CLI and the Docker CLI can be configured with credentials extracted from or generated by a compatible Azure Service Connector.
Note that the Azure local CLI can only be configured with credentials issued by the Azure Service Connector if the connector is configured with the service principal authentication method.
The following shows an example of configuring the local Kubernetes CLI to access an AKS cluster reachable through an Azure Service Connector:
zenml service-connector list --name azure-service-principal
Example Command Output | how-to | https://docs.zenml.io/how-to/auth-management/azure-service-connector | 412 |
GitHub Container Registry
Storing container images in GitHub.
The GitHub container registry is a container registry flavor that comes built-in with ZenML and uses the GitHub Container Registry to store container images.
When to use it
You should use the GitHub container registry if:
one or more components of your stack need to pull or push container images.
you're using GitHub for your projects. If you're not using GitHub, take a look at the other container registry flavors.
How to deploy it
The GitHub container registry is enabled by default when you create a GitHub account.
How to find the registry URI
The GitHub container registry URI should have the following format:
ghcr.io/<USER_OR_ORGANIZATION_NAME>
# Examples:
ghcr.io/zenml
ghcr.io/my-username
ghcr.io/my-organization
To figure out the URI for your registry:
Use the GitHub user or organization name to fill the template ghcr.io/<USER_OR_ORGANIZATION_NAME> and get your URI.
How to use it
To use the GitHub container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
Our Docker client configured, so it can pull and push images. Follow this guide to create a personal access token and login to the container registry.
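For reference, the login step typically looks like this (assuming your personal access token is stored in a CR_PAT environment variable, as in GitHub's own guide):

echo $CR_PAT | docker login ghcr.io -u <USERNAME> --password-stdin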
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=github \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME>
For more information and a full list of configurable attributes of the GitHub container registry, check out the SDK Docs .
PreviousAzure Container Registry
NextDevelop a custom container registry
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/container-registries/github | 371 |
Local Docker Orchestrator
Orchestrating your pipelines to run in Docker.
The local Docker orchestrator is an orchestrator flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
When to use it
You should use the local Docker orchestrator if:
you want the steps of your pipeline to run locally in isolated environments.
you want to debug issues that happen when running your pipeline in Docker containers without waiting and paying for remote infrastructure.
How to deploy it
To use the local Docker orchestrator, you only need to have Docker installed and running.
How to use it
To use the local Docker orchestrator, we can register it and use it in our active stack:
zenml orchestrator register <ORCHESTRATOR_NAME> --flavor=local_docker
# Register and activate a stack with the new orchestrator
zenml stack register <STACK_NAME> -o <ORCHESTRATOR_NAME> ... --set
You can now run any ZenML pipeline using the local Docker orchestrator:
python file_that_runs_a_zenml_pipeline.py
Additional configuration
For additional configuration of the Local Docker orchestrator, you can pass LocalDockerOrchestratorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings. A full list of what can be passed in via the run_args can be found in the Docker Python SDK documentation.
For more information and a full list of configurable attributes of the local Docker orchestrator, check out the SDK Docs .
For example, if you wanted to specify the CPU count available for the Docker image (note: only configurable for Windows), you could write a simple pipeline like the following:
from zenml import step, pipeline
from zenml.orchestrators.local_docker.local_docker_orchestrator import (
    LocalDockerOrchestratorSettings,
)
@step
def return_one() -> int:
return 1
settings = {
    "orchestrator.local_docker": LocalDockerOrchestratorSettings(
        run_args={"cpu_count": 3}  # limit the container's CPU count (Windows only)
    )
}

@pipeline(settings=settings)
def simple_pipeline():
    return_one()
] = None,
model_source_uri: Optional[str] = None,
tags: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[RegistryModelVersion]:
"""Lists all model versions for a registered model."""
@abstractmethod
def get_model_version(self, name: str, version: str) -> RegistryModelVersion:
"""Gets a model version for a registered model."""
@abstractmethod
def load_model_version(
self,
name: str,
version: str,
**kwargs: Any,
) -> Any:
"""Loads a model version from the model registry."""
@abstractmethod
def get_model_uri_artifact_store(
self,
model_version: RegistryModelVersion,
) -> str:
"""Gets the URI artifact store for a model version."""
This is a slimmed-down version of the base implementation which aims to highlight the abstraction layer. To see the full implementation and get the complete docstrings, please check the source code on GitHub .
Build your own custom model registry
If you want to create your own custom flavor for a model registry, you can follow the following steps:
Learn more about the core concepts for the model registry here. Your custom model registry will be built on top of these concepts so it helps to be aware of them.
Create a class that inherits from BaseModelRegistry and implements the abstract methods.
Create a ModelRegistryConfig class that inherits from BaseModelRegistryConfig and adds any additional configuration parameters that you need.
Bring the implementation and the configuration together by inheriting from the BaseModelRegistryFlavor class. Make sure that you give a name to the flavor through its abstract property.
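In outline, the pieces fit together as in the sketch below; the import paths, the endpoint_url parameter, and the full set of abstract methods are assumptions to be verified against the source code linked above:

# NOTE: import paths are assumptions; check the linked source code
from zenml.model_registries.base_model_registry import (
    BaseModelRegistry,
    BaseModelRegistryConfig,
)


class MyModelRegistryConfig(BaseModelRegistryConfig):
    """Configuration for the custom model registry."""

    endpoint_url: str  # hypothetical extra parameter


class MyModelRegistry(BaseModelRegistry):
    """Custom model registry backed by a hypothetical external service."""

    def get_model_uri_artifact_store(self, model_version) -> str:
        # Map a registered model version to the artifact store URI of its model
        ...

    # ... implement the remaining abstract methods shown above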
Once you are done with the implementation, you can register it through the CLI with the following command:
zenml model-registry flavor register <MODEL-REGISTRY-FLAVOR-SOURCE-PATH>
It is important to draw attention to how and when these base abstractions are coming into play in a ZenML workflow.
The CustomModelRegistryFlavor class is imported and utilized upon the creation of the custom flavor through the CLI. | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/custom | 407 |
AmazonSageMakerFullAccess managed policy permissions). If using a remote orchestrator: the remote environment in which the orchestrator runs needs to be able to implicitly authenticate to AWS and assume the IAM role specified when registering the SageMaker step operator. This is only possible if the orchestrator is also running in AWS and uses a form of implicit workload authentication like the IAM role of an EC2 instance. If this is not the case, you will need to use a service connector.
zenml step-operator register <NAME> \
--flavor=sagemaker \
--role=<SAGEMAKER_ROLE> \
--instance_type=<INSTANCE_TYPE> \
# --experiment_name=<EXPERIMENT_NAME> # optionally specify an experiment to assign this run to
zenml stack register <STACK_NAME> -s <STEP_OPERATOR_NAME> ... --set
python run.py # Authenticates with `default` profile in `~/.aws/config`
Once you added the step operator to your active stack, you can use it to execute individual steps of your pipeline by specifying it in the @step decorator as follows:
from zenml import step
@step(step_operator="<NAME>")
def trainer(...) -> ...:
"""Train a model."""
# This step will be executed in SageMaker.
ZenML will build a Docker image called <CONTAINER_REGISTRY_URI>/zenml:<PIPELINE_NAME> which includes your code and use it to run your steps in SageMaker. Check out this page if you want to learn more about how ZenML builds these images and how you can customize them.
Additional configuration
For additional configuration of the SageMaker step operator, you can pass SagemakerStepOperatorSettings when defining or running your pipeline. Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
For more information and a full list of configurable attributes of the SageMaker step operator, check out the SDK Docs .
Enabling CUDA for GPU-backed hardware | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/sagemaker | 403 |
> \
--default_slack_channel_id=<SLACK_CHANNEL_ID>
Here is where you can find the required parameters:
<SLACK_CHANNEL_ID>: Open your desired Slack channel in a browser, and copy out the last part of the URL starting with C.....
<SLACK_TOKEN>: This is the Slack token of your bot. You can find it in the Slack app settings under OAuth & Permissions. IMPORTANT: Please make sure that the token is the Bot User OAuth Token not the User OAuth Token.
After you have registered the slack_alerter, you can add it to your stack like this:
zenml stack register ... -al slack_alerter
How to Use the Slack Alerter
After you have a SlackAlerter configured in your stack, you can directly import the slack_alerter_post_step and slack_alerter_ask_step steps and use them in your pipelines.
Since these steps expect a string message as input (which needs to be the output of another step), you typically also need to define a dedicated formatter step that takes whatever data you want to communicate and generates the string message that the alerter should post.
As an example, adding slack_alerter_ask_step() to your pipeline could look like this:
from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_ask_step
from zenml import step, pipeline
@step
def my_formatter_step(artifact_to_be_communicated) -> str:
return f"Here is my artifact {artifact_to_be_communicated}!"
@pipeline
def my_pipeline(...):
...
artifact_to_be_communicated = ...
message = my_formatter_step(artifact_to_be_communicated)
approved = slack_alerter_ask_step(message)
... # Potentially have different behavior in subsequent steps if `approved`
if __name__ == "__main__":
my_pipeline()
An example of adding a custom Slack block as part of any alerter logic for your pipeline could look like this:
from typing import List, Dict
from zenml.integrations.slack.steps.slack_alerter_ask_step import slack_alerter_post_step
from zenml.integrations.slack.alerters.slack_alerter import SlackAlerterParameters
from zenml import step, pipeline
@step | stack-components | https://docs.zenml.io/v/docs/stack-components/alerters/slack | 470 |
Cache previous executions
Iterating quickly with ZenML through caching.
Developing machine learning pipelines is iterative in nature. ZenML speeds up development in this work with step caching.
In the logs of your previous runs, you might have noticed at this point that rerunning the pipeline a second time will use caching on the first step:
Step training_data_loader has started.
Using cached version of training_data_loader.
Step svc_trainer has started.
Train accuracy: 0.3416666666666667
Step svc_trainer has finished in 0.932s.
ZenML understands that nothing has changed between subsequent runs, so it re-uses the output of the previous run (the outputs are persisted in the artifact store). This behavior is known as caching.
In ZenML, caching is enabled by default. Since ZenML automatically tracks and versions all inputs, outputs, and parameters of steps and pipelines, steps will not be re-executed within the same pipeline on subsequent pipeline runs as long as there is no change in the inputs, parameters, or code of a step.
The caching does not automatically detect changes within the file system or on external APIs. Make sure to manually set caching to False on steps that depend on external inputs, file-system changes, or if the step should run regardless of caching.
@step(enable_cache=False)
def load_data_from_external_system(...) -> ...:
# This step will always be run
Configuring the caching behavior of your pipelines
With caching as the default behavior, there will be times when you need to disable it.
There are levels at which you can take control of when and where caching is used.
Caching at the pipeline level
On a pipeline level, the caching policy can be set as a parameter within the @pipeline decorator as shown below:
@pipeline(enable_cache=False)
def first_pipeline(....):
"""Pipeline with cache disabled""" | user-guide | https://docs.zenml.io/v/docs/user-guide/starter-guide/cache-previous-executions | 378 |
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`.
$ zenml service-connector list-resources --resource-type kubernetes-cluster -e
The following 'kubernetes-cluster' resources can be accessed by service connectors configured in your workspace:
ββββββββββββββββββββββββββββββββββββββββ―ββββββββββββββββββββββββ―βββββββββββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββββββββ
β CONNECTOR ID β CONNECTOR NAME β CONNECTOR TYPE β RESOURCE TYPE β RESOURCE NAMES β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β e33c9fac-5daa-48b2-87bb-0187d3782cde β aws-iam-multi-eu β πΆ aws β π kubernetes-cluster β kubeflowmultitenant β
β β β β β zenbox β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β ed528d5a-d6cb-4fc4-bc52-c3d2d01643e5 β aws-iam-multi-us β πΆ aws β π kubernetes-cluster β zenhacks-cluster β
β βββββββββββββββββββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌββββββββββββββββββββββββΌββββββββββββββββββββββ¨
β 1c54b32a-4889-4417-abbd-42d3ace3d03a β gcp-sa-multi β π΅ gcp β π kubernetes-cluster β zenml-test-cluster β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββ | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubernetes | 508 |
β πΆ aws β π¦ s3-bucket β s3://zenfiles β
ββββββββββββββββββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββ·βββββββββββββββββ
The ZenML CLI provides an even easier and more interactive way of connecting a stack component to an external resource. Just pass the -i command line argument and follow the interactive guide:
zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles
zenml artifact-store connect s3-zenfiles -i
The S3 Artifact Store Stack Component we just connected to the infrastructure is now ready to be used in a stack to run a pipeline:
zenml stack register s3-zenfiles -o default -a s3-zenfiles --set
A simple pipeline could look like this:
from zenml import step, pipeline
@step
def simple_step_one() -> str:
"""Simple step one."""
return "Hello World!"
@step
def simple_step_two(msg: str) -> None:
"""Simple step two."""
print(msg)
@pipeline
def simple_pipeline() -> None:
"""Define single step pipeline."""
message = simple_step_one()
simple_step_two(msg=message)
if __name__ == "__main__":
simple_pipeline()
Save this as run.py and run it with the following command:
python run.py
Example Command Output
Registered pipeline simple_pipeline (version 1).
Running pipeline simple_pipeline on stack s3-zenfiles (caching enabled)
Step simple_step_one has started.
Step simple_step_one has finished in 1.065s.
Step simple_step_two has started.
Hello World!
Step simple_step_two has finished in 5.681s.
Pipeline run simple_pipeline-2023_06_15-19_29_42_159831 has finished in 12.522s.
Dashboard URL: http://127.0.0.1:8237/workspaces/default/pipelines/8267b0bc-9cbd-42ac-9b56-4d18275bdbb4/runs | how-to | https://docs.zenml.io/v/docs/how-to/auth-management | 471 |
e.g. feature drift, label drift, new labels etc.).Model Performance Checks for tabular or computer vision data: evaluate a model and detect problems with its performance (e.g. confusion matrix, boosting overfit, model error analysis)
You should consider one of the other Data Validator flavors if you need a different set of data validation features.
How do you deploy it?
The Deepchecks Data Validator flavor is included in the Deepchecks ZenML integration, you need to install it on your local machine to be able to register a Deepchecks Data Validator and add it to your stack:
zenml integration install deepchecks -y
The Data Validator stack component does not have any configuration parameters. Adding it to a stack is as simple as running e.g.:
# Register the Deepchecks data validator
zenml data-validator register deepchecks_data_validator --flavor=deepchecks
# Register and set a stack with the new data validator
zenml stack register custom_stack -dv deepchecks_data_validator ... --set
How do you use it?
The ZenML integration restructures the way Deepchecks validation checks are organized in four categories, based on the type and number of input parameters that they expect as input. This makes it easier to reason about them when you decide which tests to use in your pipeline steps:
data integrity checks expect a single dataset as input. These correspond one-to-one to the set of Deepchecks data integrity checks for tabular and computer vision data
data drift checks require two datasets as input: target and reference. These correspond one-to-one to the set of Deepchecks train-test checks for tabular data and for computer vision.
model validation checks require a single dataset and a mandatory model as input. This list includes a subset of the model evaluation checks provided by Deepchecks for tabular data and for computer vision that expect a single dataset as input. | stack-components | https://docs.zenml.io/stack-components/data-validators/deepchecks | 375 |
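For example, wiring the builtin data integrity check into a pipeline can be as simple as the following sketch (step and argument names are taken from the Deepchecks integration; verify them against the SDK docs for your version):

from zenml import pipeline
from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step

@pipeline
def data_validation_pipeline():
    df_train = data_loader()  # hypothetical upstream step producing a DataFrame
    deepchecks_data_integrity_check_step(dataset=df_train)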
the following configuration values in custom-values.yaml:
the database configuration, if you mean to use an external database:
the database URL, formatted as mysql://<username>:<password>@<hostname>:<port>/<database>
CA and/or client TLS certificates, if you're using SSL to secure the connection to the database
the Ingress configuration, if enabled:
enabling TLS
enabling self-signed certificates
configuring the hostname that will be used to access the ZenML server, if different from the IP address or hostname associated with the Ingress service installed in your cluster
Note All the file paths that you use in your helm chart (e.g. for certificates like database.sslCa) must be relative to the ./zenml helm chart directory, meaning that you also have to copy these files there.
Install the Helm chart
Once everything is configured, you can run the following command in the ./zenml folder to install the Helm chart.
helm -n <namespace> install zenml-server . --create-namespace --values custom-values.yaml
Connect to the deployed ZenML server
Immediately after deployment, the ZenML server needs to be activated before it can be used. The activation process includes creating an initial admin user account and configuring some server settings. You can do this only by visiting the ZenML server URL in your browser and following the on-screen instructions. Connecting your local ZenML client to the server is not possible until the server is properly initialized.
The Helm chart should print out a message with the URL of the deployed ZenML server. You can use the URL to open the ZenML UI in your browser.
To connect your local client to the ZenML server, you can either pass the configuration as command line arguments or as a YAML file:
zenml connect --url=https://zenml.example.com:8080 --no-verify-ssl
or
zenml connect --config=/path/to/zenml_server_config.yaml
The YAML file should have the following structure when connecting to a ZenML server:
url: <The URL of the ZenML server>
verify_ssl: | | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-helm | 421 |
How to configure a pipeline with a YAML
Specify a configuration file
All configuration that can be specified in a YAML file can also be specified in code itself. However, it is best practice to use a YAML file to separate config from code.
You can use the with_options(config_path=<PATH_TO_CONFIG>) pattern to apply your configuration to a pipeline. Here is a minimal example of using a file based configuration yaml.
enable_cache: False
# Configure the pipeline parameters
parameters:
dataset_name: "best_dataset"
steps:
load_data: # Use the step name here
enable_cache: False # same as @step(enable_cache=False)
from zenml import step, pipeline
@step
def load_data(dataset_name: str) -> dict:
...
@pipeline # This function combines steps together
def simple_ml_pipeline(dataset_name: str):
load_data(dataset_name)
if __name__=="__main__":
simple_ml_pipeline.with_options(config_path="<INSERT_PATH_TO_CONFIG_YAML>")()
The above would run the simple_ml_pipeline with cache disabled for load_data and the parameter dataset_name set to best_dataset.
PreviousUse configuration files
NextWhat can be configured
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/use-configuration-files/how-to-use-config | 244 |
in our active stack. This can be done in two ways:
If you have a Service Connector configured to access the remote Kubernetes cluster, you no longer need to set the kubernetes_context attribute to a local kubectl context. In fact, you don't need the local Kubernetes CLI at all. You can connect the stack component to the Service Connector instead:
$ zenml orchestrator register <ORCHESTRATOR_NAME> --flavor kubernetes
Running with active workspace: 'default' (repository)
Running with active stack: 'default' (repository)
Successfully registered orchestrator `<ORCHESTRATOR_NAME>`. | stack-components | https://docs.zenml.io/stack-components/orchestrators/kubernetes | 125 |
the GCP Secrets Manager as a secrets store backendThe GCP Secrets Store uses the ZenML GCP Service Connector under the hood to authenticate with the GCP Secrets Manager API. This means that you can use any of the authentication methods supported by the GCP Service Connector to authenticate with the GCP Secrets Manager API.
The minimum set of permissions that must be attached to the implicit or configured GCP credentials are as follows:
secretmanager.secrets.create for the target GCP project (i.e. no condition on the name prefix)
secretmanager.secrets.get, secretmanager.secrets.update, secretmanager.versions.access, secretmanager.versions.add and secretmanager.secrets.delete for the target GCP project and for secrets that have a name starting with zenml-
This can be achieved by creating two custom IAM roles and attaching them to the principal (e.g. user or service account) that will be used to access the GCP Secrets Manager API with a condition configured when attaching the second role to limit access to secrets with a name prefix of zenml-. The following gcloud CLI command examples can be used as a starting point:
gcloud iam roles create ZenMLServerSecretsStoreCreator \
--project <your GCP project ID> \
--title "ZenML Server Secrets Store Creator" \
--description "Allow the ZenML Server to create new secrets" \
--stage GA \
--permissions "secretmanager.secrets.create"
gcloud iam roles create ZenMLServerSecretsStoreEditor \
--project <your GCP project ID> \
--title "ZenML Server Secrets Store Editor" \
--description "Allow the ZenML Server to manage its secrets" \
--stage GA \
--permissions "secretmanager.secrets.get,secretmanager.secrets.update,secretmanager.versions.access,secretmanager.versions.add,secretmanager.secrets.delete"
gcloud projects add-iam-policy-binding <your GCP project ID> \
--member serviceAccount:<your GCP service account email> \
--role projects/<your GCP project ID>/roles/ZenMLServerSecretsStoreCreator \
--condition None | getting-started | https://docs.zenml.io/getting-started/deploying-zenml/deploy-with-helm | 426 |
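A binding for the editor role restricted to the zenml- name prefix could then look something like the following sketch; the condition title, the expression, and the use of the project number are assumptions to adapt to your setup:

gcloud projects add-iam-policy-binding <your GCP project ID> \
    --member serviceAccount:<your GCP service account email> \
    --role projects/<your GCP project ID>/roles/ZenMLServerSecretsStoreEditor \
    --condition 'title=zenml-secrets-only,expression=resource.name.startsWith("projects/<PROJECT_NUMBER>/secrets/zenml-")'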
AzureML
Executing individual steps in AzureML.
AzureML offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
When to use it
You should use the AzureML step operator if:
one or more steps of your pipeline require computing resources (CPU, GPU, memory) that are not provided by your orchestrator.
you have access to AzureML. If you're using a different cloud provider, take a look at the SageMaker or Vertex step operators.
How to deploy it
Create a Machine learning resource on Azure .
Once your resource is created, you can head over to the Azure Machine Learning Studio and create a compute cluster to run your pipelines.
Create an environment for your pipelines. Follow this guide to set one up.
(Optional) Create a Service Principal for authentication. This is required if you intend to run your pipelines with a remote orchestrator.
How to use it
To use the AzureML step operator, we need:
The ZenML azure integration installed. If you haven't done so, run:
zenml integration install azure
An AzureML compute cluster and environment. See the deployment section for detailed instructions.
A remote artifact store as part of your stack. This is needed so that both your orchestration environment and AzureML can read and write step artifacts. Check out the documentation page of the artifact store you want to use for more information on how to set that up and configure authentication for it.
We can then register the step operator and use it in our active stack:
zenml step-operator register <NAME> \
--flavor=azureml \
--subscription_id=<AZURE_SUBSCRIPTION_ID> \
--resource_group=<AZURE_RESOURCE_GROUP> \
--workspace_name=<AZURE_WORKSPACE_NAME> \
--compute_target_name=<AZURE_COMPUTE_TARGET_NAME> \
--environment_name=<AZURE_ENVIRONMENT_NAME> \ | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/azureml | 404 |
ββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββ¨
β β cloud_kubeflow_stack β b94df4d2-5b65-4201-945a-61436c9c5384 β β default β cloud_artifact_store β cloud_orchestrator β eks_seldon β cloud_registry β aws_secret_manager β
β βββββββββΌβββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββ¨
β β local_kubeflow_stack β 8d9343ac-d405-43bd-ab9c-85637e479efe β β default β default β kubeflow_orchestrator β β local_registry β β
ββββββββββ·βββββββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββ·βββββββββββββββββββββ·βββββββββββββββββββββ
The zenml profile migrate CLI command also provides command line flags for cases in which the user wants to overwrite existing components or stacks, or ignore errors.
Decoupling Stack Component configuration from implementation
Stack components can now be registered without having the required integrations installed. As part of this change, we split all existing stack component definitions into three classes: an implementation class that defines the logic of the stack component, a config class that defines the attributes and performs input validations, and a flavor class that links implementation and config classes together. See component flavor models #895 for more details.
If you are only using stack component flavors that are shipped with the zenml Python distribution, this change has no impact on the configuration of your existing stacks. However, if you are currently using custom stack component implementations, you will need to update them to the new format. See the documentation on writing custom stack component flavors for updated information on how to do this.
Shared ZenML Stacks and Stack Components | reference | https://docs.zenml.io/reference/migration-guide/migration-zero-twenty | 507 |
hestrator_url"].value
Run pipelines on a scheduleThe Vertex Pipelines orchestrator supports running pipelines on a schedule using its native scheduling capability.
How to schedule a pipeline
import datetime

from zenml.config.schedule import Schedule

# Run a pipeline every 5th minute
pipeline_instance.run(
    schedule=Schedule(
        cron_expression="*/5 * * * *"
    )
)

# Run a pipeline every hour
# starting in one day from now and ending in three days from now
pipeline_instance.run(
    schedule=Schedule(
        cron_expression="0 * * * *",
        start_time=datetime.datetime.now() + datetime.timedelta(days=1),
        end_time=datetime.datetime.now() + datetime.timedelta(days=3),
    )
)
The Vertex orchestrator only supports the cron_expression, start_time (optional) and end_time (optional) parameters in the Schedule object, and will ignore all other parameters supplied to define the schedule.
The start_time and end_time timestamp parameters are both optional and are to be specified in local time. They define the time window in which the pipeline runs will be triggered. If they are not specified, the pipeline will run indefinitely.
The cron_expression parameter supports timezones. For example, the expression TZ=Europe/Paris 0 10 * * * will trigger runs at 10:00 in the Europe/Paris timezone.
How to delete a scheduled pipeline
Note that ZenML only gets involved to schedule a run, but maintaining the lifecycle of the schedule is the responsibility of the user.
In order to cancel a scheduled Vertex pipeline, you need to manually delete the schedule in VertexAI (via the UI or the CLI).
Additional configuration
For additional configuration of the Vertex orchestrator, you can pass VertexOrchestratorSettings which allows you to configure node selectors, affinity, and tolerations to apply to the Kubernetes Pods running your pipeline. These can be either specified using the Kubernetes model objects or as dictionaries.
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings
from kubernetes.client.models import V1Toleration | stack-components | https://docs.zenml.io/stack-components/orchestrators/vertex | 416 |
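A sketch of how such settings are typically assembled, mirroring the Tekton example earlier in this document (adapt the toleration values to your cluster):

vertex_settings = VertexOrchestratorSettings(
    pod_settings={
        "tolerations": [
            V1Toleration(
                key="node.kubernetes.io/name",
                operator="Equal",
                value="",
                effect="NoSchedule",
            )
        ]
    }
)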
Run pipelines asynchronously
The best way to trigger a pipeline run so that it runs in the background
By default your pipelines will run synchronously. This means your terminal will follow along the logs as the pipeline is being built/runs.
This behavior can be changed in multiple ways. Either the orchestrator can be configured to always run asynchronously by setting synchronous=False. The other option is to temporarily set this at the pipeline configuration level during runtime.
from zenml import pipeline
@pipeline(settings = {"orchestrator.<STACK_NAME>": {"synchronous": False}})
def my_pipeline():
...
or in a yaml config file:
settings:
orchestrator.<STACK_NAME>:
synchronous: false
Learn more about orchestrators here
PreviousAutomatically retry steps
NextControl execution order of steps
Last updated 14 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/run-pipelines-asynchronously | 167 |
Deploy a stack using mlstacks
Deploying an entire stack with mlstacks.
mlstacks is a Python package that allows you to quickly spin up MLOps infrastructure using Terraform. It is designed to be used with ZenML, but can be used with any MLOps tool or platform. You can deploy a modular MLOps stack for AWS, GCP or K3D using mlstacks. Each deployment type is designed to offer a great deal of flexibility in configuring the resources while preserving the ease of application through the use of sensible defaults.
Check out the full documentation for the mlstacks package for more information.
When should I deploy something using mlstacks?
To answer this question, here are some pros and cons in comparison to the stack-component deploy method which can help you choose what works best for you!
Pros:
Offers a lot of flexibility in what you deploy.
Deploying with mlstacks gives you a full MLOps stack as the output. Your components and stack are automatically imported to ZenML. This saves you the effort of manually registering all the components.
Cons:
Currently only supports AWS, GCP, and K3D as providers.
Most stack deployments are Kubernetes-based which might be too heavy for your needs.
Not all stack components are supported yet.
Installing the mlstacks extra
To install mlstacks, either run pip install mlstacks or pip install "zenml[mlstacks]" to install it along with ZenML.
MLStacks uses Terraform on the backend to manage infrastructure. You will need to have Terraform installed. Please visit the Terraform docs for installation instructions.
MLStacks also uses Helm to deploy Kubernetes resources. You will need to have Helm installed. Please visit the Helm docs for installation instructions.
Deploying a stack
A simple stack deployment can be done using the following command:
zenml stack deploy -p aws -a -n basic -r eu-north-1 -x bucket_name=my_bucket -o sagemaker | how-to | https://docs.zenml.io/v/docs/how-to/stack-deployment/deploy-a-stack-using-mlstacks | 405 |
p a custom alerter as described on the Feast page, and where can I find the 'How to use it?' guide?". Expected URL ending: feature-stores.
Got: ['https://docs.zenml.io/stacks-and-components/component-guide/alerters/custom',
'https://docs.zenml.io/v/docs/stacks-and-components/component-guide/alerters/custom',
'https://docs.zenml.io/v/docs/reference/how-do-i', 'https://docs.zenml.io/stacks-and-components/component-guide/alerters',
'https://docs.zenml.io/stacks-and-components/component-guide/alerters/slack']
Loading default flashrank model for language en
Default Model: ms-marco-MiniLM-L-12-v2
Loading FlashRankRanker model ms-marco-MiniLM-L-12-v2
Loading model FlashRank model ms-marco-MiniLM-L-12-v2...
Running pairwise ranking..
Step retrieval_evaluation_full_with_reranking has finished in 4m20s.
We can see here a specific example of a failure in the reranking evaluation. It's quite a good one because we can see that the question asked was actually an anomaly in the sense that the LLM has generated two questions and included its meta-discussion of the two questions it generated. Obviously this is not a representative question for the dataset, and if we saw a lot of these we might want to take some time to both understand why the LLM is generating these questions and how we can filter them out.
Visualising our reranking performance
Since ZenML can display visualizations in its dashboard, we can showcase the results of our experiments in a visual format. For example, we can plot the failure rates of the retrieval system with and without reranking to see the impact of reranking on the performance.
Our documentation explains how to set up your outputs so that they appear as visualizations in the ZenML dashboard. You can find more information here. There are lots of options, but we've chosen to plot our failure rates as a bar chart and export them as a PIL.Image object. We also plotted the other evaluation scores so as to get a quick global overview of our performance. | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/reranking/evaluating-reranking-performance | 446 |
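In outline, producing such a chart as a step output might look like the sketch below; this is not the project's exact code, but returning a PIL.Image from a step lets ZenML render it in the dashboard:

import io

import matplotlib.pyplot as plt
from PIL import Image
from zenml import step

@step
def plot_failure_rates(without_rerank: float, with_rerank: float) -> Image.Image:
    """Render retrieval failure rates with and without reranking as a bar chart."""
    fig, ax = plt.subplots()
    ax.bar(["no reranking", "reranking"], [without_rerank, with_rerank])
    ax.set_ylabel("failure rate (%)")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    buf.seek(0)
    return Image.open(buf)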
to the container registry.
Authentication Methods
Integrating and using an Azure Container Registry in your pipelines is not possible without employing some form of authentication. If you're looking for a quick way to get started locally, you can use the Local Authentication method. However, the recommended way to authenticate to the Azure cloud platform is through an Azure Service Connector. This is particularly useful if you are configuring ZenML stacks that combine the Azure Container Registry with other remote stack components also running in Azure.
This method uses the Docker client authentication available in the environment where the ZenML code is running. On your local machine, this is the quickest way to configure an Azure Container Registry. You don't need to supply credentials explicitly when you register the Azure Container Registry, as it leverages the local credentials and configuration that the Azure CLI and Docker client store on your local machine. However, you will need to install and set up the Azure CLI on your machine as a prerequisite, as covered in the Azure CLI documentation, before you register the Azure Container Registry.
With the Azure CLI installed and set up with credentials, you need to login to the container registry so Docker can pull and push images:
# Fill your REGISTRY_NAME in the placeholder in the following command.
# You can find the REGISTRY_NAME as part of your registry URI: `<REGISTRY_NAME>.azurecr.io`
az acr login --name=<REGISTRY_NAME>
Stacks using the Azure Container Registry set up with local authentication are not portable across environments. To make ZenML pipelines fully portable, it is recommended to use an Azure Service Connector to link your Azure Container Registry to the remote ACR registry. | stack-components | https://docs.zenml.io/v/docs/stack-components/container-registries/azure | 330 |
our active stack:
from zenml.client import Client

experiment_tracker = Client().active_stack.experiment_tracker
@step(experiment_tracker=experiment_tracker.name)
def tf_trainer(...):
...
MLflow UI
MLflow comes with its own UI that you can use to find further details about your tracked experiments.
You can find the URL of the MLflow experiment linked to a specific ZenML run via the metadata of the step in which the experiment tracker was used:
from zenml.client import Client

client = Client()
last_run = client.get_pipeline("<PIPELINE_NAME>").last_run
trainer_step = last_run.get_step("<STEP_NAME>")
tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value
print(tracking_url)
This will be the URL of the corresponding experiment in your deployed MLflow instance, or a link to the corresponding mlflow experiment file if you are using local MLflow.
If you are using local MLflow, you can use the mlflow ui command to start MLflow at localhost:5000 where you can then explore the UI in your browser.
mlflow ui --backend-store-uri <TRACKING_URL>
Additional configuration
For additional configuration of the MLflow experiment tracker, you can pass MLFlowExperimentTrackerSettings to create nested runs or add additional tags to your MLflow runs:
import mlflow
import numpy as np

from zenml import step
from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings

mlflow_settings = MLFlowExperimentTrackerSettings(
    nested=True,
    tags={"key": "value"}
)

@step(
    experiment_tracker="<MLFLOW_TRACKER_STACK_COMPONENT_NAME>",
    settings={
        "experiment_tracker.mlflow": mlflow_settings
    }
)
def step_one(
    data: np.ndarray,
) -> np.ndarray:
    ...
Check out the SDK docs for a full list of available attributes and this docs page for more information on how to specify settings.
PreviousComet
NextNeptune
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/experiment-trackers/mlflow | 391 |
ssions, the credentials should include permissions to connect to and use the GKE cluster (i.e. some or all permissions in the
Kubernetes Engine Developer role).
If set, the resource name must identify a GKE cluster using one of the
following formats:
GKE cluster name: {cluster-name}
GKE cluster names are project scoped. The connector can only be used to access
GKE clusters in the GCP project that it is configured to use.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Displaying information about the service-account GCP authentication method:
zenml service-connector describe-type gcp --auth-method service-account
Example Command Output
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β π GCP Service Account (auth method: service-account) β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Supports issuing temporary credentials: False
Use a GCP service account and its credentials to authenticate to GCP services.
This method requires a GCP service account and a service account key JSON
created for it.
The GCP connector generates temporary OAuth 2.0 tokens from the user account
credentials and distributes them to clients. The tokens have a limited lifetime
of 1 hour.
A GCP project is required and the connector may only be used to access GCP
resources in the specified project.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable
configured to point to a service account key JSON file, it will be automatically
picked up when auto-configuration is used.
Attributes:
service_account_json {string, secret, required}: GCP Service Account Key JSON
project_id {string, required}: GCP Project ID where the target resource is
located.
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Basic Service Connector Types | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/service-connectors-guide | 436 |
ml service-connector describe eks-zenhacks-cluster
Example Command Output
Service connector 'eks-zenhacks-cluster' of type 'aws' with id 'be53166a-b39c-4e39-8e31-84658e50eec4' is owned by user 'default' and is 'private'.
'eks-zenhacks-cluster' aws Service Connector Details
ββββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β ID β be53166a-b39c-4e39-8e31-84658e50eec4 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β NAME β eks-zenhacks-cluster β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β session-token β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π kubernetes-cluster β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β zenhacks-cluster β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β fa42ab38-3c93-4765-a4c6-9ce0b548a86c β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β 43200s β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-06-16 10:15:26.393769 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/how-to/auth-management/best-security-practices | 567 |
DockerHub
Storing container images in DockerHub.
The DockerHub container registry is a container registry flavor that comes built-in with ZenML and uses DockerHub to store container images.
When to use it
You should use the DockerHub container registry if:
one or more components of your stack need to pull or push container images.
you have a DockerHub account. If you're not using DockerHub, take a look at the other container registry flavors.
How to deploy it
To use the DockerHub container registry, all you need to do is create a DockerHub account.
When this container registry is used in a ZenML stack, the Docker images that are built will be published in a public repository and everyone will be able to pull your images. If you want to use a private repository instead, you'll have to create a private repository on the website before running the pipeline. The repository name depends on the remote orchestrator or step operator that you're using in your stack.
How to find the registry URI
The DockerHub container registry URI should have one of the two following formats:
<ACCOUNT_NAME>
# or
docker.io/<ACCOUNT_NAME>
# Examples:
zenml
my-username
docker.io/zenml
docker.io/my-username
To figure out the URI for your registry:
Find out the account name of your DockerHub account.
Use the account name to fill the template docker.io/<ACCOUNT_NAME> and get your URI.
How to use it
To use the DockerHub container registry, we need:
Docker installed and running.
The registry URI. Check out the previous section on the URI format and how to get the URI for your registry.
We can then register the container registry and use it in our active stack:
zenml container-registry register <NAME> \
--flavor=dockerhub \
--uri=<REGISTRY_URI>
# Add the container registry to the active stack
zenml stack update -c <NAME> | stack-components | https://docs.zenml.io/stack-components/container-registries/dockerhub | 399 |
to access the Artifact Store to load served models
To enable these use cases, it is recommended to use an Azure Service Connector to link your Azure Artifact Store to the remote Azure Blob storage container.
To set up the Azure Artifact Store to authenticate to Azure and access an Azure Blob storage container, it is recommended to leverage the many features provided by the Azure Service Connector such as auto-configuration, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.
If you don't already have an Azure Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure an Azure Service Connector that can be used to access more than one Azure blob storage container or even more than one type of Azure resource:
zenml service-connector register --type azure -i
A non-interactive CLI example that uses Azure Service Principal credentials to configure an Azure Service Connector targeting a single Azure Blob storage container is:
zenml service-connector register <CONNECTOR_NAME> --type azure --auth-method service-principal --tenant_id=<AZURE_TENANT_ID> --client_id=<AZURE_CLIENT_ID> --client_secret=<AZURE_CLIENT_SECRET> --resource-type blob-container --resource-id <BLOB_CONTAINER_NAME>
Example Command Output
$ zenml service-connector register azure-blob-demo --type azure --auth-method service-principal --tenant_id=a79f3633-8f45-4a74-a42e-68871c17b7fb --client_id=8926254a-8c3f-430a-a2fd-bdab234d491e --client_secret=AzureSuperSecret --resource-type blob-container --resource-id az://demo-zenmlartifactstore
Successfully registered service connector `azure-blob-demo` with access to the following resources:
βββββββββββββββββββββ―βββββββββββββββββββββββββββββββ
β RESOURCE TYPE β RESOURCE NAMES β
β ββββββββββββββββββββΌβββββββββββββββββββββββββββββββ¨
β π¦ blob-container β az://demo-zenmlartifactstore β | stack-components | https://docs.zenml.io/stack-components/artifact-stores/azure | 454 |
User Management
In ZenML Pro, there is a slightly different entity hierarchy as compared to the open-source ZenML framework. This document walks you through the key differences and new concepts that are pro-only.
Organizations, Tenants, and Roles
ZenML Pro arranges various aspects of your work experience around the concept of an Organization. This is the top-most level structure within the ZenML Cloud environment. Generally, an organization contains a group of users and one or more tenants. Tenants are individual, isolated deployments of the ZenML server.
Every user in an organization has a distinct role. Each role configures what they can view, modify, and their level of involvement in collaborative tasks. A role thus helps determine the level of access that a user has within an organization.
The admin has all permissions on an organization. They are allowed to add members, adjust the billing information and assign roles. The editor can still fully manage tenants and members but is not allowed to access the subscription information or delete the organization. The viewer role grants users view-only access to the tenants within the organization.
Inviting Team Members
Inviting users to your organization to work on the organization's tenants is easy. Simply click Add Member in the Organization settings, and give them an initial Role. The User will be sent an invitation email. If a user is part of an organization, they can utilize their login on all tenants they have authority to access.
PreviousZenML SaaS
NextStarter guide
Last updated 12 days ago | getting-started | https://docs.zenml.io/v/docs/getting-started/zenml-pro/user-management | 310 |
306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0
The ZenML client on the host machine can then be configured to connect directly to the database with a slightly different zenml connect command:
zenml connect --url mysql://127.0.0.1/zenml --username root --password password
Note The localhost hostname will not work with MySQL databases. You need to use the 127.0.0.1 IP address instead.
ZenML server with docker-compose
Docker compose offers a simpler way of managing multi-container setups on your local machine, which is the case for instance if you are looking to deploy the ZenML server container and connect it to a MySQL database service also running in a Docker container.
To use Docker Compose, you need to install the docker-compose plugin on your machine first.
A docker-compose.yml file like the one below can be used to start and manage the ZenML server container and the MySQL database service all at once:
version: "3.9"

services:
  mysql:
    image: mysql:8.0
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=password
  zenml:
    image: zenmldocker/zenml-server
    ports:
      - "8080:8080"
    environment:
      - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml
    links:
      - mysql
    depends_on:
      - mysql
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: on-failure
Note the following:
ZENML_STORE_URL is set to the special Docker host.docker.internal hostname to instruct the server to connect to the database over the Docker network.
The extra_hosts section is needed on Linux to make the host.docker.internal hostname resolvable from the ZenML server container.
To start the containers, run the following command from the directory where the docker-compose.yml file is located:
docker-compose -p zenml up -d
or, if you need to use a different filename or path:
docker-compose -f /path/to/docker-compose.yml -p zenml up -d | getting-started | https://docs.zenml.io/v/docs/getting-started/deploying-zenml/deploy-with-docker | 421 |
a_new_local_stack -o default -a my_artifact_store
stack: This is the CLI group that enables interactions with the stacks
register: Here we want to register a new stack. Explore other operations with zenml stack --help.
a_new_local_stack : This is the unique name that the stack will have.
--orchestrator or -o are used to specify which orchestrator to use for the stack
--artifact-store or -a are used to specify which artifact store to use for the stack
The output for the command should look something like this:
Using the default local database.
Running with active workspace: 'default' (repository)
Stack 'a_new_local_stack' successfully registered!
You can inspect the stack with the following command:
zenml stack describe a_new_local_stack
Which will give you an output like this:
Stack Configuration
ββββββββββββββββββ―ββββββββββββββββββββ
β COMPONENT_TYPE β COMPONENT_NAME β
β βββββββββββββββββΌββββββββββββββββββββ¨
β ORCHESTRATOR β default β
β βββββββββββββββββΌββββββββββββββββββββ¨
β ARTIFACT_STORE β my_artifact_store β
ββββββββββββββββββ·ββββββββββββββββββββ
'a_new_local_stack' stack
Stack 'a_new_local_stack' with id '...' is owned by user default and is 'private'.
Switch stacks with our VS Code extension
If you are using our VS Code extension, you can easily view and switch your stacks by opening the sidebar (click on the ZenML icon). You can then click on the stack you want to switch to as well as view the stack components it's made up of.
Run a pipeline on the new local stack
Let's use the pipeline in our starter project from the previous guide to see it in action.
If you have not already, clone the starter template:
pip install "zenml[templates,server]" notebook
zenml integration install sklearn -y
mkdir zenml_starter
cd zenml_starter
zenml init --template starter --template-with-defaults
# Just in case, we install the requirements again
pip install -r requirements.txt
The starter template is the same as the ZenML quickstart. You can clone it like so: | user-guide | https://docs.zenml.io/v/docs/user-guide/production-guide/understand-stacks | 497 |
K_NAME> -i <IMAGE_BUILDER_NAME> ... --set
Caveats

As described in this Google Cloud Build documentation page, Google Cloud Build uses containers to execute the build steps, which are automatically attached to a network called cloudbuild that provides some Application Default Credentials (ADC) allowing the containers to authenticate and therefore use other GCP services.
By default, the GCP Image Builder executes the build command of the ZenML pipeline Docker image with the option --network=cloudbuild, so the ADC provided by the cloudbuild network can also be used in the build. This is useful if you want to install a private dependency from a GCP Artifact Registry, but you will also need to use a custom base parent image with keyrings.google-artifactregistry-auth installed, so that pip can connect to and authenticate with the private artifact registry to download the dependency.
FROM zenmldocker/zenml:latest
RUN pip install keyrings.google-artifactregistry-auth
The above Dockerfile uses zenmldocker/zenml:latest as a base image, but it is recommended to change the tag to pin a specific ZenML and Python version, e.g. 0.33.0-py3.10.
PreviousKaniko Image Builder
NextDevelop a Custom Image Builder
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/image-builders/gcp | 273 |
Automatically retry steps
Automatically configure your steps to retry if they fail.
ZenML provides a built-in retry mechanism that allows you to configure automatic retries for your steps in case of failures. This is useful when dealing with intermittent issues or transient errors. A common pattern when running a step on GPU-backed hardware is that the provider temporarily has no resources available, in which case you can let ZenML handle the retries until resources free up. You can configure three parameters for step retries:
max_retries: The maximum number of times the step should be retried in case of failure.
delay: The initial delay in seconds before the first retry attempt.
backoff: The factor by which the delay should be multiplied after each retry attempt.
Using the @step decorator:
You can specify the retry configuration directly in the definition of your step as follows:
from zenml import step
from zenml.config.retry_config import StepRetryConfig

@step(
    retry=StepRetryConfig(
        max_retries=3,
        delay=10,
        backoff=2,
    )
)
def my_step() -> None:
    raise Exception("This is a test exception")
Alternatively, you can set the retry configuration for specific steps in a YAML config file:

steps:
  my_step:
    retry:
      max_retries: 3
      delay: 10
      backoff: 2
Note that infinite retries are not supported at the moment. If you set max_retries to a very large value or do not specify it at all, ZenML will still enforce an internal maximum number of retries to prevent infinite loops. We recommend setting a reasonable max_retries value based on your use case and the expected frequency of transient failures.
See Also:
Failure/Success Hooks
Configure pipelines
PreviousTrigger a pipeline from another
NextRun pipelines asynchronously
Last updated 14 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines/retry-steps | 346 |
vel of permissions required to function correctly.
Using long-lived credentials on their own still isn't ideal, because if leaked, they pose a security risk, even when they have limited permissions attached. The good news is that ZenML Service Connectors include additional mechanisms that, when used in combination with long-lived credentials, make it even safer to share long-lived credentials with other ZenML users and automated workloads:
automatically generating temporary credentials from long-lived credentials and even downgrading their permission scope to enforce the least-privilege principle
implementing authentication schemes that impersonate accounts and assume roles
Generating temporary and down-scoped credentials
Most authentication methods that utilize long-lived credentials also implement additional mechanisms that further reduce the risk of accidental credential exposure and security incidents, making them ideal for production.
Issuing temporary credentials: this authentication strategy keeps long-lived credentials safely stored on the ZenML server and away from the eyes of actual API clients and people that need to authenticate to the remote resources. Instead, clients are issued API tokens that have a limited lifetime and expire after a given amount of time. The Service Connector is able to generate these API tokens from long-lived credentials on a need-to-have basis. For example, the AWS Service Connector's "Session Token", "Federation Token" and "IAM Role" authentication methods and basically all authentication methods supported by the GCP Service Connector support this feature.
The following example shows the difference between the long-lived AWS credentials configured for an AWS Service Connector and kept on the ZenML server and the temporary Kubernetes API token credentials that the client receives and uses to access the resource.
First, showing the long-lived AWS credentials configured for the AWS Service Connector:
zenml service-connector describe eks-zenhacks-cluster | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/best-security-practices | 350 |
in the stack.
zenml integration install mlflow -y
Once the MLflow integration is installed, you can register an MLflow model registry component in your stack:
zenml model-registry register mlflow_model_registry --flavor=mlflow
# Register and set a stack with the new model registry as the active stack
zenml stack register custom_stack -r mlflow_model_registry ... --set
The MLFlow model registry will automatically use the same configuration as the MLFlow Experiment Tracker. So if you have a remote MLFlow tracking server configured in your stack, the MLFlow model registry will also use the same configuration.
Due to a critical severity vulnerability found in older versions of MLflow, we recommend using MLflow version 2.2.1 or higher.
How do you use it?
There are different ways to use the MLflow model registry. You can use it in your ZenML pipelines with the built-in step, or you can use the ZenML CLI to register your model manually or call the model registry API within a custom step in your pipeline. The following sections show you how to use the MLflow model registry in your ZenML pipelines and with the ZenML CLI:
Register models inside a pipeline
ZenML provides a predefined mlflow_register_model_step that you can use to register a model in the MLflow model registry which you have previously logged to MLflow:
from zenml import pipeline
from zenml.integrations.mlflow.steps.mlflow_registry import (
    mlflow_register_model_step,
)

@pipeline
def mlflow_registry_training_pipeline():
    model = ...
    mlflow_register_model_step(
        model=model,
        name="tensorflow-mnist-model",
    )
The mlflow_register_model_step expects that the model it receives has already been logged to MLflow in a previous step. E.g., for a scikit-learn model, you would need to have used mlflow.sklearn.autolog() or mlflow.sklearn.log_model(model) in a previous step. See the MLflow experiment tracker documentation for more information on how to log models to MLflow from your ZenML steps.
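For instance, a training step that satisfies this requirement might look like the following sketch (the tracker name, model, and data shapes are placeholders, not the project's exact code):

import mlflow
import numpy as np
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC
from zenml import step

@step(experiment_tracker="<EXPERIMENT_TRACKER_NAME>")
def svc_trainer(X_train: np.ndarray, y_train: np.ndarray) -> ClassifierMixin:
    """Trains an SVC classifier and auto-logs it to MLflow."""
    mlflow.sklearn.autolog()  # logs the fitted model and its metrics to MLflow
    model = SVC(gamma=0.001)
    model.fit(X_train, y_train)
    return model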
List of available parameters | stack-components | https://docs.zenml.io/stack-components/model-registries/mlflow | 426 |
ral options are presented.
hyperparameter tuning?
Our dedicated documentation guide on implementing this is the place to learn more.
reset things when something goes wrong?
To reset your ZenML client, you can run zenml clean which will wipe your local metadata database and reset your client. Note that this is a destructive action, so feel free to reach out to us on Slack before doing this if you are unsure.
steps that create other steps AKA dynamic pipelines and steps?
Please read our general information on how to compose steps + pipelines together to start with. You might also find the code examples in our guide to implementing hyperparameter tuning which is related to this topic.
templates: using starter code with ZenML?
Project templates allow you to get going quickly with ZenML. We recommend the Starter template (starter) for most use cases which gives you a basic scaffold and structure around which you can write your own code. You can also build templates for others inside a Git repository and use them with ZenML's templates functionality.
upgrade my ZenML client and/or server?
Upgrading your ZenML client package is as simple as running pip install --upgrade zenml in your terminal. For upgrading your ZenML server, please refer to the dedicated documentation section which covers most of the ways you might do this as well as common troubleshooting steps.
use a <YOUR_COMPONENT_GOES_HERE> stack component?
For information on how to use a specific stack component, please refer to the component guide which contains all our tips and advice on how to use each integration and component with ZenML.
PreviousAPI reference
NextMigration guide
Last updated 18 days ago | reference | https://docs.zenml.io/v/docs/reference/how-do-i | 327 |
t into the step
    print(0.01)
    trainer(gamma=gamma)
Important note: in the above case, the value used by the step would be the one defined in the steps key (i.e. 0.001). So the YAML config always takes precedence over pipeline parameters that are passed down to steps in code. Read this section for more details.
Normally, parameters defined at the pipeline level are used in multiple steps, and then no step-level configuration is defined.
Note that parameters are different from artifacts. Parameters are JSON-serializable values that are passed in the runtime configuration of a pipeline. Artifacts are inputs and outputs of a step, and need not always be JSON-serializable (materializers handle their persistence in the artifact store).
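To make the distinction concrete, here is a tiny sketch: the float is a parameter supplied via the run configuration, while the returned array is an artifact persisted by a materializer:

import numpy as np
from zenml import step

@step
def make_data(gamma: float) -> np.ndarray:
    # `gamma` is a JSON-serializable parameter; the returned array is an
    # artifact that a materializer writes to the artifact store.
    return np.random.rand(10) * gamma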
Setting the run_name
To change the name for a run, pass run_name as a parameter. This can be a dynamic value as well.
run_name: <INSERT_RUN_NAME_HERE>
You will not be able to run with the same run_name twice. Do not set this statically when running on a schedule; include some auto-incrementing value or a timestamp in the name instead.
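One way to do this is to derive the run name from the current timestamp when triggering the pipeline in code, as in this minimal sketch:

from datetime import datetime

from zenml import pipeline, step

@step
def say_hello() -> None:
    print("hello")

@pipeline
def training_pipeline():
    say_hello()

# Each invocation gets a unique, timestamped run name.
training_pipeline.with_options(
    run_name=f"training_run_{datetime.now().strftime('%Y_%m_%d_%H_%M_%S')}"
)()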
Stack Component Runtime settings
Settings are special runtime configurations of a pipeline or a step that require a dedicated section. In short, they define a bunch of execution configuration such as Docker building and resource settings.
Docker Settings
Docker Settings can be passed in directly as objects, or a dictionary representation of the object. For example, the Docker configuration can be set in configuration files as follows:
settings:
  docker:
    requirements:
      - pandas
Find a complete list of all Docker Settings here. To learn more about pipeline containerization consult our documentation on this here.
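The same Docker configuration can also be set in code; a rough equivalent of the YAML above would be:

from zenml import pipeline
from zenml.config import DockerSettings

docker_settings = DockerSettings(requirements=["pandas"])

@pipeline(settings={"docker": docker_settings})
def my_pipeline() -> None:
    ...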
Resource Settings
Some stacks allow setting the resource settings using these settings.
resources:
  cpu_count: 2
  gpu_count: 1
  memory: "4Gb"
Note that this may not work for all types of stack components. To learn which components support this, please refer to the specific orchestrator docs.
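For components that do support it, an in-code equivalent of the YAML above looks roughly like this:

from zenml import step
from zenml.config import ResourceSettings

@step(settings={"resources": ResourceSettings(cpu_count=2, gpu_count=1, memory="4GB")})
def training_step() -> None:
    ...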
failure_hook_source and success_hook_source | how-to | https://docs.zenml.io/how-to/use-configuration-files/what-can-be-configured | 411 |
πͺCore concepts
Discovering the core concepts behind ZenML.
ZenML is an extensible, open-source MLOps framework for creating portable, production-ready MLOps pipelines. It's built for data scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. In order to achieve this goal, ZenML introduces various concepts for different aspects of an ML workflow and we can categorize these concepts under three different threads:
1. Development: As a developer, how do I design my machine learning workflows?
2. Execution: While executing, how do my workflows utilize the large landscape of MLOps tooling/infrastructure?
3. Management: How do I establish and maintain a production-grade and efficient solution?
1. Development
First, let's look at the main concepts which play a role during the development stage of an ML workflow with ZenML.
Step
Steps are functions annotated with the @step decorator. The easiest one could look like this.
from zenml import step

@step
def step_1() -> str:
    """Returns a string."""
    return "world"
These functions can also have inputs and outputs. For ZenML to work properly, these should preferably be typed.
@step(enable_cache=False)
def step_2(input_one: str, input_two: str) -> str:
    """Combines the two strings passed in."""
    combined_str = f"{input_one} {input_two}"
    return combined_str
Pipelines
At its core, ZenML follows a pipeline-based workflow for your projects. A pipeline consists of a series of steps, organized in any order that makes sense for your use case.
As seen in the image, a step might use the outputs from a previous step and thus must wait until the previous step is completed before starting. This is something you can keep in mind when organizing your steps.
Pipelines and steps are defined in code using Python decorators or classes. This is where the core business logic and value of your work lives, and you will spend most of your time defining these two things. | getting-started | https://docs.zenml.io/v/docs/getting-started/core-concepts | 410 |
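For example, a minimal pipeline chaining the two steps defined above could look like this:

from zenml import pipeline

@pipeline
def my_pipeline():
    output_step_one = step_1()
    step_2(input_one="hello", input_two=output_step_one)

# Executing the pipeline is as simple as calling the function.
my_pipeline()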
Fetching pipelines
Inspecting a finished pipeline run and its outputs.
Once a pipeline run has been completed, we can access the corresponding information in code, which enables the following:
Loading artifacts like models or datasets saved by previous runs
Accessing metadata or configurations of previous runs
Programmatically inspecting the lineage of pipeline runs and their artifacts
The hierarchy of pipelines, runs, steps, and artifacts is as follows:
As you can see from the diagram, there are many layers of 1-to-N relationships.
Let us investigate how to traverse this hierarchy level by level:
Pipelines
Get a pipeline via the client
After you have run a pipeline at least once, you can also fetch the pipeline via the Client.get_pipeline() method.
from zenml.client import Client
pipeline_model = Client().get_pipeline("first_pipeline")
Check out the ZenML Client Documentation for more information on the Client class and its purpose.
Discover and list all pipelines
If you're not sure which pipeline you need to fetch, you can find a list of all registered pipelines in the ZenML dashboard, or list them programmatically either via the Client or the CLI.
You can use the Client.list_pipelines() method to get a list of all pipelines registered in ZenML:
from zenml.client import Client
pipelines = Client().list_pipelines()
Alternatively, you can also list pipelines with the following CLI command:
zenml pipeline list
Runs
Each pipeline can be executed many times, resulting in several Runs.
Get all runs of a pipeline
You can get a list of all runs of a pipeline using the runs property of the pipeline:
runs = pipeline_model.runs
The result will be a list of the most recent runs of this pipeline, ordered from newest to oldest.
Alternatively, you can also use the pipeline_model.get_runs() method which allows you to specify detailed parameters for filtering or pagination. See the ZenML SDK Docs for more information.
Get the last run of a pipeline | how-to | https://docs.zenml.io/how-to/build-pipelines/fetching-pipelines | 398 |
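The most recent run is exposed directly on the pipeline model:

last_run = pipeline_model.last_run  # equivalent to pipeline_model.runs[0]
print(last_run.id)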
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD       β user-account                                              β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π΅ gcp-generic, π¦ gcs-bucket, π kubernetes-cluster, π³ docker-registry β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β <multiple> β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β 17692951-614f-404f-a13a-4abb25bfa758 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β N/A β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β OWNER β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β WORKSPACE β default β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SHARED β β β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β CREATED_AT β 2023-05-19 08:09:44.102934 β
β βββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 383 |
AWS Sagemaker Orchestrator
Orchestrating your pipelines to run on Amazon Sagemaker.
Sagemaker Pipelines is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
This component is only meant to be used within the context of a remote ZenML deployment scenario. Usage with a local ZenML deployment may lead to unexpected behavior!
When to use it
You should use the Sagemaker orchestrator if:
you're already using AWS.
you're looking for a proven production-grade orchestrator.
you're looking for a UI in which you can track your pipeline runs.
you're looking for a managed solution for running your pipelines.
you're looking for a serverless solution for running your pipelines.
How it works
The ZenML Sagemaker orchestrator works with Sagemaker Pipelines, which can be used to construct machine learning pipelines. Under the hood, for each ZenML pipeline step, it creates a SageMaker PipelineStep, which contains a Sagemaker Processing job. Currently, other step types are not supported.
How to deploy it
In order to use the Sagemaker orchestrator, you need to first deploy ZenML to the cloud. It is recommended to deploy ZenML in the same region that you plan to use for Sagemaker, but this is not necessary. You must ensure that you are connected to the remote ZenML server before using this stack component.
The only other thing necessary to use the ZenML Sagemaker orchestrator is enabling the relevant permissions for your particular role.
In order to quickly enable APIs and create the other resources necessary to use this integration, we will soon provide a Sagemaker stack recipe via our mlstacks repository, which will help you set up the infrastructure with one click.
Infrastructure Deployment
A Sagemaker orchestrator can be deployed directly from the ZenML CLI: | stack-components | https://docs.zenml.io/stack-components/orchestrators/sagemaker | 411 |
our pipeline runs, such as the logs of your steps.
To access the Sagemaker Pipelines UI, you will have to launch Sagemaker Studio via the AWS Sagemaker UI. Make sure that you are launching it from within your desired AWS region.
Once the Studio UI has launched, click on the 'Pipeline' button on the left side. From there you can view the pipelines that have been launched via ZenML:
Debugging SageMaker Pipelines
If your SageMaker pipeline encounters an error before the first ZenML step starts, the ZenML run will not appear in the ZenML dashboard. In such cases, use the SageMaker UI to review the error message and logs. Here's how:
Open the corresponding pipeline in the SageMaker UI as shown in the SageMaker UI Section,
Open the execution,
Click on the failed step in the pipeline graph,
Go to the 'Output' tab to see the error message or to 'Logs' to see the logs.
Alternatively, for a more detailed view of log messages during SageMaker pipeline executions, consider using Amazon CloudWatch:
Search for 'CloudWatch' in the AWS console search bar.
Navigate to 'Logs > Log groups.'
Open the '/aws/sagemaker/ProcessingJobs' log group.
Here, you can find log streams for each step of your SageMaker pipeline executions.
Run pipelines on a schedule
The ZenML Sagemaker orchestrator doesn't currently support running pipelines on a schedule. We maintain a public roadmap for ZenML, which you can find here. We welcome community contributions (see more here) so if you want to enable scheduling for Sagemaker, please do let us know!
Configuration at pipeline or step level | stack-components | https://docs.zenml.io/v/docs/stack-components/orchestrators/sagemaker | 343 |
total_tests) * 100
return round(failure_rate, 2)
This function takes a sample size and a flag indicating whether to use reranking, and evaluates the retrieval performance based on the generated questions and relevant documents. It queries similar documents for each question and checks whether the expected URL ending is present in the retrieved URLs. The failure rate is calculated as the percentage of failed tests over the total number of tests.
This function is then called in two separate evaluation steps: one for the retrieval system without reranking and one for the retrieval system with reranking.
@step
def retrieval_evaluation_full(
    sample_size: int = 100,
) -> Annotated[float, "full_failure_rate_retrieval"]:
    """Executes the retrieval evaluation step without reranking."""
    failure_rate = perform_retrieval_evaluation(
        sample_size, use_reranking=False
    )
    logging.info(f"Retrieval failure rate: {failure_rate}%")
    return failure_rate

@step
def retrieval_evaluation_full_with_reranking(
    sample_size: int = 100,
) -> Annotated[float, "full_failure_rate_retrieval_reranking"]:
    """Executes the retrieval evaluation step with reranking."""
    failure_rate = perform_retrieval_evaluation(
        sample_size, use_reranking=True
    )
    logging.info(f"Retrieval failure rate with reranking: {failure_rate}%")
    return failure_rate
Both of these steps return the failure rate of the respective retrieval systems. If we want, we can look into the logs of those steps (either on the dashboard or in the terminal) to see specific examples that failed. For example:
...
Loading default flashrank model for language en
Default Model: ms-marco-MiniLM-L-12-v2
Loading FlashRankRanker model ms-marco-MiniLM-L-12-v2
Loading model FlashRank model ms-marco-MiniLM-L-12-v2...
Running pairwise ranking..
Failed for question: Based on the provided ZenML documentation text, here's a question
that can be asked: "How do I develop a custom alerter as described on the Feast page, | user-guide | https://docs.zenml.io/user-guide/llmops-guide/reranking/evaluating-reranking-performance | 433 |
'default' ...
Creating default user 'default' ...
Creating default stack for user 'default' in workspace default...
Active workspace not set. Setting it to the default.
The active stack is not set. Setting the active stack to the default workspace stack.
Using the default store for the global config.
Unable to find ZenML repository in your current working directory (/tmp/folder) or any parent directories. If you want to use an existing repository which is in a different location, set the environment variable 'ZENML_REPOSITORY_PATH'. If you want to create a new repository, run zenml init.
Running without an active repository root.
Using the default local database.
Running with active workspace: 'default' (global)
ββββββββββ―βββββββββββββ―βββββββββ―ββββββββββ―βββββββββββββββββ―βββββββββββββββ
β ACTIVE β STACK NAME β SHARED β OWNER β ARTIFACT_STORE β ORCHESTRATOR β
β βββββββββΌβββββββββββββΌβββββββββΌββββββββββΌβββββββββββββββββΌβββββββββββββββ¨
β π β default β β β default β default β default β
ββββββββββ·βββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββββββ·βββββββββββββββ
The following is an example of the layout of the global config directory immediately after initialization:
/home/stefan/.config/zenml <- Global Config Directory
βββ config.yaml <- Global Configuration Settings
βββ local_stores <- Every Stack component that stores information
| locally will have its own subdirectory here.
βββ a1a0d3d0-d552-4a80-be09-67e5e29be8ee <- e.g. Local Store path for the
| `default` local Artifact Store
βββ default_zen_store
βββ zenml.db <- SQLite database where ZenML data (stacks,
components, etc) are stored by default.
As shown above, the global config directory stores the following information: | reference | https://docs.zenml.io/v/docs/reference/global-settings | 484 |
Configuration
ββββββββββββββ―βββββββββββββ
β PROPERTY    β VALUE      β
β βββββββββββββΌβββββββββββββ¨
β project_id β zenml-core β
β βββββββββββββΌβββββββββββββ¨
β token β [HIDDEN] β
ββββββββββββββ·βββββββββββββ
Note the temporary nature of the Service Connector. It will expire and become unusable in 1 hour:
zenml service-connector list --name gcp-oauth2-token
Example Command Output
ββββββββββ―βββββββββββββββββββ―βββββββββββββββββββββββββββββββββββββββ―βββββββββ―ββββββββββββββββββββββββ―ββββββββββββββββ―βββββββββ―ββββββββββ―βββββββββββββ―βββββββββ
β ACTIVE β NAME β ID β TYPE β RESOURCE TYPES β RESOURCE NAME β SHARED β OWNER β EXPIRES IN β LABELS β
β βββββββββΌβββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββΌβββββββββΌββββββββββββββββββββββββΌββββββββββββββββΌβββββββββΌββββββββββΌβββββββββββββΌβββββββββ¨
β β gcp-oauth2-token β ec4d7d85-c71c-476b-aa76-95bf772c90da β π΅ gcp β π΅ gcp-generic β <multiple> β β β default β 59m35s β β
β β β β β π¦ gcs-bucket β β β β β β
β β β β β π kubernetes-cluster β β β β β β
β β β β β π³ docker-registry β β β β β β
ββββββββββ·βββββββββββββββββββ·βββββββββββββββββββββββββββββββββββββββ·βββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββ·βββββββββ·ββββββββββ·βββββββββββββ·βββββββββ
Auto-configuration
The GCP Service Connector allows auto-discovering and fetching credentials and configuration set up by the GCP CLI on your local host. | how-to | https://docs.zenml.io/v/docs/how-to/auth-management/gcp-service-connector | 609 |
ββ β β β β HTTP response headers: HTTPHeaderDict({'Audit-Id': '72558f83-e050-4fe3-93e5-9f7e66988a4c', 'Cache-Control': β
β β β β β 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 09 Jun 2023 18:59:02 GMT', β
β β β β β 'Content-Length': '129'}) β
β β β β β HTTP response body: β
β β β β β {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauth β
β β β β β orized","code":401} β
β β β β β β
β β β β β β
ββββββββββββββββββββββββββββββββββββββββ·ββββββββββββββββββ·βββββββββββββββββ·ββββββββββββββββββββββββ·ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | how-to | https://docs.zenml.io/how-to/auth-management/service-connectors-guide | 356 |
ry_similar_docs(
    question: str,
    url_ending: str,
    use_reranking: bool = False,
    returned_sample_size: int = 5,
) -> Tuple[str, str, List[str]]:
    """Query similar documents for a given question and URL ending."""
    embedded_question = get_embeddings(question)
    db_conn = get_db_conn()
    num_docs = 20 if use_reranking else returned_sample_size
    # get (content, url) tuples for the top n similar documents
    top_similar_docs = get_topn_similar_docs(
        embedded_question, db_conn, n=num_docs, include_metadata=True
    )

    if use_reranking:
        reranked_docs_and_urls = rerank_documents(question, top_similar_docs)[
            :returned_sample_size
        ]
        urls = [doc[1] for doc in reranked_docs_and_urls]
    else:
        urls = [doc[1] for doc in top_similar_docs]  # Unpacking URLs

    return (question, url_ending, urls)
We get the embeddings for the question being passed into the function and connect to our PostgreSQL database. If we're using reranking, we get the top 20 documents similar to our query and rerank them using the rerank_documents helper function. We then extract the URLs from the reranked documents and return them. Note that we only return 5 URLs, but in the case of reranking we get a larger number of documents and URLs back from the database to pass to our reranker, but in the end we always choose the top five reranked documents to return.
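As a quick illustration (the question and URL ending below are made up), comparing the two retrieval modes then takes just two calls:

question = "How do I register an S3 artifact store?"

# Plain retrieval vs. retrieval with the reranking pass enabled
_, _, urls_plain = query_similar_docs(question, "artifact-stores/s3")
_, _, urls_reranked = query_similar_docs(
    question, "artifact-stores/s3", use_reranking=True
)
print("Without reranking:", urls_plain)
print("With reranking:", urls_reranked)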
Now that we've added reranking to our pipeline, we can evaluate the performance of our reranker and see how it affects the quality of the retrieved documents.
Code Example
To explore the full code, visit the Complete Guide repository and for this section, particularly the eval_retrieval.py file.
PreviousUnderstanding reranking
NextEvaluating reranking performance
Last updated 1 month ago | user-guide | https://docs.zenml.io/v/docs/user-guide/llmops-guide/reranking/implementing-reranking | 397 |
π²Control logging
Configuring ZenML's default logging behavior
ZenML produces various kinds of logs:
The ZenML Server produces server logs (like any FastAPI server).
The Client or Runner environment produces logs, for example after running a pipeline. These are steps that are typically before, after, and during the creation of a pipeline run.
The Execution environment (on the orchestrator level) produces logs when it executes each step of a pipeline. These are logs that are typically written in your steps using the python logging module.
This section talks about how users can control logging behavior in these various environments.
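For example, any standard Python logging call inside a step ends up in that step's logs in the execution environment (a minimal sketch):

import logging

from zenml import step

@step
def my_step() -> None:
    # Captured by the orchestrator and shown in the step's logs.
    logging.info("This message shows up in the step's logs.")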
PreviousTrain with GPUs
NextView logs on the dashboard
Last updated 19 days ago | how-to | https://docs.zenml.io/v/docs/how-to/control-logging | 141 |
into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomArtifactStoreFlavor and the CustomArtifactStoreConfig are implemented in a different module/path than the actual CustomArtifactStore).
Enabling Artifact Visualizations with Custom Artifact Stores
ZenML automatically saves visualizations for many common data types and allows you to view these visualizations in the ZenML dashboard. Under the hood, this works by saving the visualizations together with the artifacts in the artifact store.
In order to load and display these visualizations, ZenML needs to be able to load and access the corresponding artifact store. This means that your custom artifact store needs to be configured in a way that allows authenticating to the back-end without relying on the local environment, e.g., by embedding the authentication credentials in the stack component configuration or by referencing a secret.
Furthermore, for deployed ZenML instances, you need to install the package dependencies of your artifact store implementation in the environment where you have deployed ZenML. See the Documentation on deploying ZenML with custom Docker images for more information on how to do that.
PreviousAzure Blob Storage
NextContainer Registries
Last updated 15 days ago | stack-components | https://docs.zenml.io/stack-components/artifact-stores/custom | 272 |
-registry β iam-role β β ββ β β β session-token β β β
β β β β federation-token β β β
β βββββββββββββββββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββΌββββββββΌβββββββββ¨
β GCP Service Connector β π΅ gcp β π΅ gcp-generic β implicit β β
β β
β
β β β π¦ gcs-bucket β user-account β β β
β β β π kubernetes-cluster β service-account β β β
β β β π³ docker-registry β oauth2-token β β β
β β β β impersonation β β β
β βββββββββββββββββββββββββββββββΌββββββββββββββββΌββββββββββββββββββββββββΌβββββββββββββββββββΌββββββββΌβββββββββ¨
β HyperAI Service Connector β π€ hyperai β π€ hyperai-instance β rsa-key β β
β β
β
β β β β dsa-key β β β
β β β β ecdsa-key β β β
β β β β ed25519-key β β β
ββββββββββββββββββββββββββββββββ·ββββββββββββββββ·ββββββββββββββββββββββββ·βββββββββββββββββββ·ββββββββ·βββββββββ
Service Connector Types are also displayed in the dashboard during the configuration of a new Service Connector:
The cloud provider of choice for our example is AWS and we're looking to hook up an S3 bucket to an S3 Artifact Store Stack Component. We'll use the AWS Service Connector Type. | how-to | https://docs.zenml.io/how-to/auth-management | 466 |
GCP Python clients for any particular GCP service.
This generic GCP resource type is meant to be used with Stack Components that are not represented by one of the other, more specific resource types like GCS buckets, Kubernetes clusters, or Docker registries. For example, it can be used with the Google Cloud Image Builder stack component, or the Vertex AI Orchestrator and Step Operator. It should be accompanied by a matching set of GCP permissions that allow access to the set of remote resources required by the client and Stack Component (see the documentation of each Stack Component for more details).
The resource name represents the GCP project that the connector is authorized to access.
GCS bucket
Allows Stack Components to connect to GCS buckets. When used by Stack Components, they are provided a pre-configured GCS Python client instance.
The configured credentials must have at least the following GCP permissions associated with the GCS buckets that it can access:
storage.buckets.list
storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list
storage.objects.update
For example, the GCP Storage Admin role includes all of the required permissions, but it also includes additional permissions that are not required by the connector.
If set, the resource name must identify a GCS bucket using one of the following formats:
GCS bucket URI (canonical resource name): gs://{bucket-name}
GCS bucket name: {bucket-name}
GKE Kubernetes cluster
Allows Stack Components to access a GKE cluster as a standard Kubernetes cluster resource. When used by Stack Components, they are provided a pre-authenticated Python Kubernetes client instance.
The configured credentials must have at least the following GCP permissions associated with the GKE clusters that it can access:
container.clusters.list
container.clusters.get | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 363 |
ource-id zenfiles --client
Example Command Output
Service connector 'aws-federation-token (s3-bucket | s3://zenfiles client)' of type 'aws' with id '868b17d4-b950-4d89-a6c4-12e520e66610' is owned by user 'default' and is 'private'.
'aws-federation-token (s3-bucket | s3://zenfiles client)' aws Service
Connector Details
ββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β PROPERTY β VALUE β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β ID β e28c403e-8503-4cce-9226-8a7cd7934763 β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β NAME β aws-federation-token (s3-bucket | s3://zenfiles client) β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β TYPE β πΆ aws β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β AUTH METHOD β sts-token β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE TYPES β π¦ s3-bucket β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β RESOURCE NAME β s3://zenfiles β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SECRET ID β β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β SESSION DURATION β N/A β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨
β EXPIRES IN β 11h59m56s β
β βββββββββββββββββββΌββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¨ | how-to | https://docs.zenml.io/how-to/auth-management/aws-service-connector | 484 |
βοΈBuild a pipeline
Building pipelines is as simple as adding the `@step` and `@pipeline` decorators to your code.
from zenml import pipeline, step

@step  # Just add this decorator
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}

@step
def train_model(data: dict) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])

    # Train some model here

    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")

@pipeline  # This function combines steps together
def simple_ml_pipeline():
    dataset = load_data()
    train_model(dataset)
You can now run this pipeline by simply calling the function:
simple_ml_pipeline()
When this pipeline is executed, the run of the pipeline gets logged to the ZenML dashboard where you can now go to look at its DAG and all the associated metadata. To access the dashboard you need to have a ZenML server either running locally or remotely. See our documentation on this here.
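Once the run has finished, you can also fetch it programmatically. A short sketch using the client, with the pipeline name matching the function above:

from zenml.client import Client

run = Client().get_pipeline("simple_ml_pipeline").last_run
print(run.status)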
Check below for more advanced ways to build and interact with your pipeline.
Configure pipeline/step parameters
Name and annotate step outputs
Control caching behavior
Run pipeline from a pipeline
Control the execution order of steps
Customize the step invocation ids
Name your pipeline runs
Use failure/success hooks
Hyperparameter tuning
Attach metadata to steps
Fetch metadata within steps
Fetch metadata during pipeline composition
Version pipelines
Enable or disable logs storing
Special Metadata Types
Access secrets in a step
PreviousBest practices
NextUse pipeline/step parameters
Last updated 16 days ago | how-to | https://docs.zenml.io/v/docs/how-to/build-pipelines | 378 |
the creation of the custom flavor through the CLI.
The CustomModelRegistryConfig class is imported when someone tries to register/update a stack component with this custom flavor. Most of all, during the registration process of the stack component, the config will be used to validate the values given by the user. As Config objects are pydantic objects under the hood, you can also add your own custom validators here.
The CustomModelRegistry only comes into play when the component is ultimately in use.
The design behind this interaction lets us separate the configuration of the flavor from its implementation. This way we can register flavors and components even when the major dependencies behind their implementation are not installed in our local setting (assuming the CustomModelRegistryFlavor and the CustomModelRegistryConfig are implemented in a different module/path than the actual CustomModelRegistry).
For a full implementation example, please check out the MLFlowModelRegistry
PreviousMLflow Model Registry
NextFeature Stores
Last updated 19 days ago | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/custom | 195 |
entials JSON to clients instead (not recommended).
A GCP project is required and the connector may only be used to access GCP resources in the specified project. This project must be the same as the one for which the external account was configured.
If you already have the GOOGLE_APPLICATION_CREDENTIALS environment variable configured to point to an external account key JSON file, it will be automatically picked up when auto-configuration is used.
The following assumes the following prerequisites are met, as covered in the GCP documentation on how to configure workload identity federation with AWS:
the ZenML server is deployed in AWS in an EKS cluster (or any other AWS compute environment)
the ZenML server EKS pods are associated with an AWS IAM role by means of an IAM OIDC provider, as covered in the AWS documentation on how to associate an IAM role with a service account. Alternatively, the IAM role associated with the EKS/EC2 nodes can be used instead. This AWS IAM role provides the implicit AWS IAM identity and credentials that will be used to authenticate to GCP services.
a GCP workload identity pool and AWS provider are configured for the GCP project where the target resources are located, as covered in the GCP documentation on how to configure workload identity federation with AWS.
a GCP service account is configured with permissions to access the target resources and granted the roles/iam.workloadIdentityUser role for the workload identity pool and AWS provider
a GCP external account JSON file is generated for the GCP service account. This is used to configure the GCP connector.
zenml service-connector register gcp-workload-identity --type gcp \
--auth-method external-account --project_id=zenml-core \
[email protected]
Example Command Output
Successfully registered service connector `gcp-workload-identity` with access to the following resources:
βββββββββββββββββββββββββ―ββββββββββββββββββββββββββββββββββββββββββββββββββ | how-to | https://docs.zenml.io/how-to/auth-management/gcp-service-connector | 427 |
pipeline again:
python run.py --training-pipeline
Now you should notice that the machine provisioned on your cloud provider has a different configuration compared to last time. As easy as that!
Bear in mind that not every orchestrator supports ResourceSettings directly. To learn more, you can read about ResourceSettings here, including the ability to attach a GPU.
PreviousOrchestrate on the cloud
NextConfigure a code repository
Last updated 15 days ago | user-guide | https://docs.zenml.io/user-guide/production-guide/configure-pipeline | 93 |
MLflow Model Registry
Managing MLFlow logged models and artifacts
MLflow is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides an MLflow Experiment Tracker that you can use to track your experiments, and an MLflow Model Deployer that you can use to deploy your models locally.
The MLflow model registry uses the MLflow model registry service to manage and track ML models and their artifacts and provides a user interface to browse them:
When would you want to use it?
You can use the MLflow model registry throughout your experimentation, QA, and production phases to manage and track machine learning model versions. It is designed to help teams collaborate on model development and deployment, and keep track of which models are being used in which environments. With the MLflow model registry, you can store and manage models, deploy them to different environments, and track their performance over time.
This is particularly useful in the following scenarios:
If you are working on a machine learning project and want to keep track of different model versions as they are developed and deployed.
If you need to deploy machine learning models to different environments and want to keep track of which version is being used in each environment.
If you want to monitor and compare the performance of different model versions over time and make data-driven decisions about which models to use in production.
If you want to simplify the process of deploying models either to a production environment or to a staging environment for testing.
How do you deploy it?
The MLflow Experiment Tracker flavor is provided by the MLflow ZenML integration, so you need to install it on your local machine to be able to register an MLflow model registry component. Note that the MLFlow model registry requires MLFlow Experiment Tracker to be present in the stack.
zenml integration install mlflow -y | stack-components | https://docs.zenml.io/v/docs/stack-components/model-registries/mlflow | 369 |
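Once the integration is installed, registering the component and adding it to a stack is typically straightforward. A minimal sketch, assuming illustrative component names and an MLflow experiment tracker already registered as mlflow_tracker:

# Register the MLflow model registry (names are illustrative)
zenml model-registry register mlflow_model_registry --flavor=mlflow

# Add it to a stack together with the required MLflow experiment tracker
zenml stack register custom_stack -r mlflow_model_registry -e mlflow_tracker ... --set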
Some additional important configuration parameters:
namespace is the namespace under which the driver and executor pods will run.
service_account is the service account that will be used by various Spark components (to create and watch the pods).
Additionally, the _backend_configuration method is adjusted to handle the Kubernetes-specific configuration.
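To make this concrete, here is a hedged sketch of how these parameters might be passed when registering the step operator; the values are placeholders that should be adapted to your cluster:

# Register the Spark step operator (all values below are placeholders)
zenml step-operator register spark_step_operator \
    --flavor=spark-kubernetes \
    --master=k8s://<API_SERVER_ENDPOINT> \
    --namespace=<KUBERNETES_NAMESPACE> \
    --service_account=<SERVICE_ACCOUNT_NAME>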
When to use it
You should use the Spark step operator:
when you are dealing with large amounts of data.
when you are designing a step that can benefit from distributed computing paradigms in terms of time and resources.
How to deploy it
To use the KubernetesSparkStepOperator you will need to set up a few things first:
Remote ZenML server: See the deployment guide for more information.
Kubernetes cluster: There are many ways to deploy a Kubernetes cluster using different cloud providers or on your custom infrastructure. For AWS, you can follow the Spark EKS Setup Guide below.
Spark EKS Setup Guide
The following guide will walk you through how to spin up and configure an Amazon Elastic Kubernetes Service cluster with Spark on it:
EKS Kubernetes Cluster
Follow this guide to create an Amazon EKS cluster role.
Follow this guide to create an Amazon EC2 node role.
Go to the IAM website, and select Roles to edit both roles.
Attach the AmazonRDSFullAccess and AmazonS3FullAccess policies to both roles.
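If you prefer the command line over the console, the same policies can be attached with the AWS CLI. A minimal sketch, assuming placeholder role names:

# Attach the required policies to a role (the role name is a placeholder)
aws iam attach-role-policy --role-name <ROLE_NAME> \
    --policy-arn arn:aws:iam::aws:policy/AmazonRDSFullAccess
aws iam attach-role-policy --role-name <ROLE_NAME> \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Repeat both commands for the second role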
Go to the EKS website.
Make sure the correct region is selected on the top right.
Click on Add cluster and select Create.
Enter a name and select the cluster role for Cluster service role.
Keep the default values for the networking and logging steps and create the cluster.
Note down the cluster name and the API server endpoint:
EKS_CLUSTER_NAME=<EKS_CLUSTER_NAME>
EKS_API_SERVER_ENDPOINT=<API_SERVER_ENDPOINT>
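As an aside, once these values are noted down, you can point kubectl at the new cluster using the AWS CLI; a minimal sketch, with the region as a placeholder:

# Update the local kubeconfig for the new cluster
aws eks update-kubeconfig --region <AWS_REGION> --name $EKS_CLUSTER_NAME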
After the cluster is created, select it and click on Add node group in the Compute tab.
Enter a name and select the node role.
For the instance type, we recommend t3a.xlarge, as it provides up to 4 vCPUs and 16 GB of memory. | stack-components | https://docs.zenml.io/v/docs/stack-components/step-operators/spark-kubernetes | 409 |
to install and configure the AWS CLI on your host
you don't need to care about enabling your other stack components (orchestrators, step operators, and model deployers) to have access to the artifact store through IAM roles and policies
you can combine the S3 artifact store with other stack components that are not running in AWS
Note: When you create the IAM user for your AWS access key, please remember to grant the created IAM user permissions to read and write to your S3 bucket (i.e. at a minimum: s3:PutObject, s3:GetObject, s3:ListBucket, s3:DeleteObject)
After having set up the IAM user and generated the access key, as described in the AWS documentation, you can register the S3 Artifact Store as follows:
# Store the AWS access key in a ZenML secret
zenml secret create s3_secret \
--aws_access_key_id='<YOUR_S3_ACCESS_KEY_ID>' \
--aws_secret_access_key='<YOUR_S3_SECRET_KEY>'
# Register the S3 artifact-store and reference the ZenML secret
zenml artifact-store register s3_store -f s3 \
--path='s3://your-bucket' \
--authentication_secret=s3_secret
# Register and set a stack with the new artifact store
zenml stack register custom_stack -a s3_store ... --set
Advanced Configuration
The S3 Artifact Store accepts a range of advanced configuration options that can be used to further customize how ZenML connects to the S3 storage service that you are using. These are accessible via the client_kwargs, config_kwargs and s3_additional_kwargs configuration attributes and are passed transparently to the underlying S3Fs library:
client_kwargs: arguments that will be transparently passed to the botocore client. You can use it to configure parameters like endpoint_url and region_name when connecting to an S3-compatible endpoint (e.g. MinIO); see the sketch after this list.
config_kwargs: advanced parameters passed to botocore.client.Config.
s3_additional_kwargs: advanced parameters that are used when calling S3 API, typically used for things like ServerSideEncryption and ACL. | stack-components | https://docs.zenml.io/v/docs/stack-components/artifact-stores/s3 | 432 |
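For example, here is a hedged sketch of pointing the S3 Artifact Store at a MinIO deployment through client_kwargs; the endpoint URL, bucket and secret names are illustrative:

# Register an S3 artifact store backed by a MinIO endpoint (values are placeholders)
zenml artifact-store register minio_store -f s3 \
    --path='s3://minio_bucket' \
    --authentication_secret=s3_secret \
    --client_kwargs='{"endpoint_url": "http://minio.example.com:9000", "region_name": "us-east-1"}'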