As machine learning finds its way into the mainstream, practices are maturing around automatically testing models, validating training data and observing model performance in production. Increasingly, these automated checks are incorporated into continuous delivery pipelines or run against production models to detect drift and performance degradation. A number of tools with similar or overlapping capabilities have emerged to handle various steps in this process (Giskard and Evidently are also covered in this volume). Deepchecks is another of these tools, available as an open-source Python library that can be invoked from pipeline code through an extensive set of APIs. One notable feature is its ability to handle either tabular or image data, with a module for natural language data currently in alpha release. At the moment, no single tool can handle the variety of tests and guardrails needed across the entire ML pipeline. We recommend assessing Deepchecks for your particular application niche.
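As a rough illustration of how such checks slot into a pipeline, the sketch below runs Deepchecks' built-in tabular data-integrity suite against a training set and saves an HTML report as a build artifact. The data file, label column, categorical features and output path are hypothetical placeholders, not part of the original text:

```python
import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import data_integrity

# Hypothetical training data; in a real pipeline this would come from
# your feature store or data warehouse.
df = pd.read_csv("train.csv")

# Wrap the DataFrame so Deepchecks knows the label column and which
# features are categorical.
ds = Dataset(df, label="target", cat_features=["region", "plan"])

# Run the data-integrity suite and persist an HTML report that the
# delivery pipeline can archive alongside other build artifacts.
result = data_integrity().run(ds)
result.save_as_html("data_integrity_report.html")
```

Deepchecks ships comparable prebuilt suites for train/test validation and model evaluation, so the same pattern extends to the other pipeline stages mentioned above.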