The ValidMind Library streamlines the documentation of many types of models, automating the process and ensuring that your model documentation and testing align with regulatory and compliance standards.
The ValidMind Library
The ValidMind Library is a Python library and documentation engine designed to streamline the process of documenting various types of models, including traditional statistical models, legacy systems, artificial intelligence/machine learning models, and large language models (LLMs).
It offers model developers a systematic approach to documenting and testing risk models with repeatability and consistency, ensuring alignment with regulatory and compliance standards.
Figure: The two main components of ValidMind: the ValidMind Library, which integrates with your existing developer environment, and the ValidMind Platform.
The ValidMind Library consists of a client-side library, a Python API integration for models and testing, and validation tests that streamline the model development process. Implemented as a series of independent libraries in Python and R, it ensures compatibility and flexibility with a diverse set of developer environments and requirements.
With the ValidMind Library, you can:
Automate documentation: Capture comprehensive documentation as metadata while you build your models, then share it with model validators to streamline and speed up the review process.
Run test suites: Identify potential risks across a diverse range of statistical and AI/LLM/ML models by assessing data quality, model outcomes, robustness, and explainability.
Integrate with your development environment: Seamlessly incorporate the ValidMind Library into your existing model development environment, connecting it to your existing model code and datasets (see the sketch after this list).
Upload documentation data: Send qualitative and quantitative test data to the ValidMind Platform to generate the model documentation for review and approval, fostering effective collaboration with model reviewers and validators.
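To make the integration concrete, here is a minimal sketch of connecting the library to the platform and registering a dataset and model for testing. The credentials, model identifier, and target column are placeholders, the toy dataset stands in for your existing code and data, and parameter names reflect recent versions of the library and may vary:

```python
# Install the library first: pip install validmind
import pandas as pd
from sklearn.linear_model import LogisticRegression
import validmind as vm

# Connect to the ValidMind Platform; these values are placeholders taken
# from your model's registration page in the platform.
vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="<your-api-key>",
    api_secret="<your-api-secret>",
    model="<your-model-identifier>",
)

# A toy dataset and model standing in for your existing code and data.
df = pd.DataFrame({"x1": [0, 1, 2, 3], "x2": [1, 0, 1, 0], "default": [0, 0, 1, 1]})
clf = LogisticRegression().fit(df[["x1", "x2"]], df["default"])

# Register the dataset and model with the library so tests can run against them.
vm_dataset = vm.init_dataset(dataset=df, target_column="default", input_id="train_dataset")
vm_model = vm.init_model(clf, input_id="model")
```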
The tests and functions are executed automatically, following pre-configured templates tailored for specific model use cases. This ensures that minimum documentation requirements are consistently fulfilled.
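Assuming the objects registered in the previous sketch, a single call runs the tests pre-configured in the model's documentation template and uploads the results; the entry point below reflects recent versions of the library:

```python
# Run every test required by the model's documentation template and send the
# results to the platform; the inputs dict maps registered objects to the
# input names the template's tests expect.
results = vm.run_documentation_tests(
    inputs={"dataset": vm_dataset, "model": vm_model},
)
```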
The library integrates with ETL and data processing pipelines through connector interfaces. This makes it possible to extract the relationships between raw data sources and their post-processed datasets, such as preloaded session instances received from platforms like Spark and Snowflake.
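The exact connector interfaces depend on your deployment; as a simple illustration, a post-processed dataset materialized from a Spark session can be handed to the library as a pandas DataFrame. The table and column names here are placeholders:

```python
from pyspark.sql import SparkSession
import validmind as vm

# Illustrative only: materialize a post-processed dataset from a Spark session
# and register it with ValidMind.
spark = SparkSession.builder.appName("etl-example").getOrCreate()
processed = spark.sql("SELECT * FROM processed_loans")  # placeholder table

vm_dataset = vm.init_dataset(
    dataset=processed.toPandas(),   # convert the Spark DataFrame to pandas
    target_column="default",        # placeholder target column
    input_id="processed_loans",
)
```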
Extensible by design
ValidMind supports various model types, including:
Traditional machine learning (ML) models, such as tree-based models and neural networks.
Natural language processing (NLP) models for text analysis and understanding.
Large language models (LLMs), currently in beta, offering advanced language capabilities.
Traditional statistical models such as ordinary least squares (OLS) regression, logistic regression, time series models, and more.
ValidMind is designed to be highly extensible to cater to our customers’ specific requirements. You can expand its functionality in the following ways:
You can easily add support for new models and data types by defining new classes within the ValidMind Library; we provide templates to guide you through this process.
To include custom tests in the library, you can define new functions; we offer templates to help you create these custom tests (see the first sketch after this list).
You can seamlessly integrate third-party test libraries, hosted either locally within your infrastructure or remotely, for example on GitHub, to leverage additional testing capabilities and resources as needed (see the second sketch after this list).
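As an illustration of defining custom tests, recent versions of the library let you register a test function with the @vm.test decorator; the test ID and the test logic below are hypothetical:

```python
import matplotlib.pyplot as plt
import validmind as vm

@vm.test("my_custom_tests.ClassImbalance")  # hypothetical test ID
def class_imbalance(dataset):
    """Plot the class distribution of the dataset's target column."""
    counts = dataset.df[dataset.target_column].value_counts()
    fig, ax = plt.subplots()
    counts.plot.bar(ax=ax)
    ax.set_title("Class distribution")
    plt.close(fig)
    return fig  # returned figures are captured as documentation artifacts
```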
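And as an illustration of third-party test integration, a folder of test modules can be registered under a namespace via a test provider; LocalTestProvider is the interface recent library versions expose for locally hosted tests, and the path and namespace here are placeholders:

```python
import validmind as vm
from validmind.tests import LocalTestProvider

# Register a folder of test modules (local, or cloned from, e.g., GitHub)
# under a namespace so its tests can be referenced like built-in ones.
provider = LocalTestProvider("/path/to/third_party_tests")  # placeholder path
vm.tests.register_test_provider(namespace="my_tests", test_provider=provider)

# Tests are then addressable as "<namespace>.<TestName>", for example:
# vm.tests.run_test("my_tests.SomeThirdPartyTest", inputs={...})
```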
ValidMind imports the following artifacts into the documentation via our ValidMind Library Python API integration:
Metadata about datasets and models, used to look up programmatic documentation content. For example, when a logistic regression model is passed to a ValidMind test plan, the stored definition of common logistic regression limitations is retrieved.
Quality and performance metrics collected from datasets and models.
Output from tests and test suites that have been run.
Images, plots, and other visuals generated while extracting metrics and running tests.
Figure: Artifacts imported into the documentation via our Python API.
ValidMind does NOT:
Send any personally identifiable information (PII) when generating documentation reports.