ValidMind for model validation 2 — Start the model validation process

Learn how to use ValidMind for your end-to-end model validation process with our series of four introductory notebooks. In this second notebook, independently verify the data quality tests performed on the dataset used to train the champion model.

You'll learn how to run relevant validation tests with ValidMind, log the results of those tests to the ValidMind Platform, and insert your logged test results as evidence into your validation report. You'll become familiar with the tests available in ValidMind, as well as how to run them. Running tests during model validation is crucial to the effective challenge process, as we want to independently evaluate the evidence and assessments provided by the model development team.

While running our tests in this notebook, we'll focus on data quality assessments for the dataset used to train the champion model.

For a full list of out-of-the-box tests, refer to our Test descriptions or try the interactive Test sandbox.

Learn by doing

Our course tailor-made for validators new to ValidMind combines this series of notebooks with a more in-depth introduction to the ValidMind Platform — Validator Fundamentals

Prerequisites

To independently assess the quality of your datasets with this notebook, you'll first need to have:

Need help with the above steps?

Refer to the first notebook in this series: 1 — Set up the ValidMind Library for validation

Setting up

Initialize the ValidMind Library

First, let's connect the ValidMind Library to the model we previously registered in the ValidMind Platform:

  1. In a browser, log in to ValidMind.

  2. In the left sidebar, navigate to Inventory and select the model you registered for this "ValidMind for model validation" series of notebooks.

  3. Go to Getting Started and click Copy snippet to clipboard.

Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:

# Make sure the ValidMind Library is installed

%pip install -q validmind

# Load your model identifier credentials from an `.env` file

%load_ext dotenv
%dotenv .env

# Or replace with your code snippet

import validmind as vm

vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
)
Note: you may need to restart the kernel to use updated packages.
2026-01-28 18:09:34,679 - INFO(validmind.api_client): 🎉 Connected to ValidMind!
📊 Model: [ValidMind Academy] Model validation (ID: cmalguc9y02ok199q2db381ib)
📁 Document Type: validation_report

Load the sample dataset

Let's first import the public Bank Customer Churn Prediction dataset from Kaggle, which was used to develop the dummy champion model.

We'll use this dataset to review steps that should have been conducted during the initial development and documentation of the model to ensure that the model was built correctly. By independently performing steps taken by the model development team, we can confirm whether the model was built using appropriate and properly processed data.

In the example below, note that:

  • The target column, Exited, has a value of 1 when a customer has churned and 0 otherwise.
  • The ValidMind Library provides a wrapper to automatically load the dataset as a Pandas DataFrame object. A Pandas DataFrame is a two-dimensional tabular data structure that makes use of rows and columns.
from validmind.datasets.classification import customer_churn as demo_dataset

print(
    f"Loaded demo dataset with: \n\n\t• Target column: '{demo_dataset.target_column}' \n\t• Class labels: {demo_dataset.class_labels}"
)

raw_df = demo_dataset.load_data()
raw_df.head()
Loaded demo dataset with: 

    • Target column: 'Exited' 
    • Class labels: {'0': 'Did not exit', '1': 'Exited'}
CreditScore Geography Gender Age Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
0 619 France Female 42 2 0.00 1 1 1 101348.88 1
1 608 Spain Female 41 1 83807.86 1 0 1 112542.58 0
2 502 France Female 42 8 159660.80 3 1 0 113931.57 1
3 699 France Female 39 1 0.00 2 0 0 93826.63 0
4 850 Spain Female 43 2 125510.82 1 1 1 79084.10 0

Verifying data quality adjustments

Let's say that thanks to the documentation submitted by the model development team (Learn more ...), we know that the sample dataset was first modified before being used to train the champion model. After performing some data quality assessments on the raw dataset, it was determined that the dataset required rebalancing, and highly correlated features were also removed.

Identify qualitative tests

During model validation, we use the same data processing logic and training procedure to confirm that the model's results can be reproduced independently. Let's start with some data quality assessments by running a few individual tests, just as the development team did.

Use the vm.tests.list_tests() function introduced by the first notebook in this series in combination with vm.tests.list_tags() and vm.tests.list_tasks() to find which prebuilt tests are relevant for data quality assessment:

  • tasks represent the kind of modeling task associated with a test. Here we'll focus on classification tasks.
  • tags are free-form descriptions providing more details about the test, for example, what category the test falls into. Here we'll focus on the data_quality tag.
# Get the list of available task types
sorted(vm.tests.list_tasks())
['classification',
 'clustering',
 'data_validation',
 'feature_extraction',
 'monitoring',
 'nlp',
 'regression',
 'residual_analysis',
 'text_classification',
 'text_generation',
 'text_qa',
 'text_summarization',
 'time_series_forecasting',
 'visualization']
# Get the list of available tags
sorted(vm.tests.list_tags())
['AUC',
 'analysis',
 'anomaly_detection',
 'bias_and_fairness',
 'binary_classification',
 'calibration',
 'categorical_data',
 'classification',
 'classification_metrics',
 'clustering',
 'correlation',
 'credit_risk',
 'data_analysis',
 'data_distribution',
 'data_quality',
 'data_validation',
 'descriptive_statistics',
 'dimensionality_reduction',
 'distribution',
 'embeddings',
 'feature_importance',
 'feature_selection',
 'few_shot',
 'forecasting',
 'frequency_analysis',
 'kmeans',
 'linear_regression',
 'llm',
 'logistic_regression',
 'metadata',
 'model_comparison',
 'model_diagnosis',
 'model_explainability',
 'model_interpretation',
 'model_performance',
 'model_predictions',
 'model_selection',
 'model_training',
 'model_validation',
 'multiclass_classification',
 'nlp',
 'normality',
 'numerical_data',
 'outliers',
 'qualitative',
 'rag_performance',
 'ragas',
 'regression',
 'retrieval_performance',
 'scorecard',
 'seasonality',
 'senstivity_analysis',
 'sklearn',
 'stationarity',
 'statistical_test',
 'statistics',
 'statsmodels',
 'tabular_data',
 'text_data',
 'threshold_optimization',
 'time_series_data',
 'unit_root_test',
 'visualization',
 'zero_shot']

You can pass tags and tasks as parameters to the vm.tests.list_tests() function to filter the tests based on the tags and task types.

For example, to find tests related to tabular data quality for classification models, you can call list_tests() like this:

vm.tests.list_tests(task="classification", tags=["tabular_data", "data_quality"])
ID Name Description Has Figure Has Table Required Inputs Params Tags Tasks
validmind.data_validation.ClassImbalance Class Imbalance Evaluates and quantifies class distribution imbalance in a dataset used by a machine learning model.... True True ['dataset'] {'min_percent_threshold': {'type': 'int', 'default': 10}} ['tabular_data', 'binary_classification', 'multiclass_classification', 'data_quality'] ['classification']
validmind.data_validation.DescriptiveStatistics Descriptive Statistics Performs a detailed descriptive statistical analysis of both numerical and categorical data within a model's... False True ['dataset'] {} ['tabular_data', 'time_series_data', 'data_quality'] ['classification', 'regression']
validmind.data_validation.Duplicates Duplicates Tests dataset for duplicate entries, ensuring model reliability via data quality verification.... False True ['dataset'] {'min_threshold': {'type': '_empty', 'default': 1}} ['tabular_data', 'data_quality', 'text_data'] ['classification', 'regression']
validmind.data_validation.HighCardinality High Cardinality Assesses the number of unique values in categorical columns to detect high cardinality and potential overfitting.... False True ['dataset'] {'num_threshold': {'type': 'int', 'default': 100}, 'percent_threshold': {'type': 'float', 'default': 0.1}, 'threshold_type': {'type': 'str', 'default': 'percent'}} ['tabular_data', 'data_quality', 'categorical_data'] ['classification', 'regression']
validmind.data_validation.HighPearsonCorrelation High Pearson Correlation Identifies highly correlated feature pairs in a dataset suggesting feature redundancy or multicollinearity.... False True ['dataset'] {'max_threshold': {'type': 'float', 'default': 0.3}, 'top_n_correlations': {'type': 'int', 'default': 10}, 'feature_columns': {'type': 'list', 'default': None}} ['tabular_data', 'data_quality', 'correlation'] ['classification', 'regression']
validmind.data_validation.MissingValues Missing Values Evaluates dataset quality by ensuring missing value ratio across all features does not exceed a set threshold.... False True ['dataset'] {'min_threshold': {'type': 'int', 'default': 1}} ['tabular_data', 'data_quality'] ['classification', 'regression']
validmind.data_validation.MissingValuesBarPlot Missing Values Bar Plot Assesses the percentage and distribution of missing values in the dataset via a bar plot, with emphasis on... True False ['dataset'] {'threshold': {'type': 'int', 'default': 80}, 'fig_height': {'type': 'int', 'default': 600}} ['tabular_data', 'data_quality', 'visualization'] ['classification', 'regression']
validmind.data_validation.Skewness Skewness Evaluates the skewness of numerical data in a dataset to check against a defined threshold, aiming to ensure data... False True ['dataset'] {'max_threshold': {'type': '_empty', 'default': 1}} ['data_quality', 'tabular_data'] ['classification', 'regression']
validmind.plots.BoxPlot Box Plot Generates customizable box plots for numerical features in a dataset with optional grouping using Plotly.... True False ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'group_by': {'type': 'Optional', 'default': None}, 'width': {'type': 'int', 'default': 1800}, 'height': {'type': 'int', 'default': 1200}, 'colors': {'type': 'Optional', 'default': None}, 'show_outliers': {'type': 'bool', 'default': True}, 'title_prefix': {'type': 'str', 'default': 'Box Plot of'}} ['tabular_data', 'visualization', 'data_quality'] ['classification', 'regression', 'clustering']
validmind.plots.HistogramPlot Histogram Plot Generates customizable histogram plots for numerical features in a dataset using Plotly.... True False ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'bins': {'type': 'Union', 'default': 30}, 'color': {'type': 'str', 'default': 'steelblue'}, 'opacity': {'type': 'float', 'default': 0.7}, 'show_kde': {'type': 'bool', 'default': True}, 'normalize': {'type': 'bool', 'default': False}, 'log_scale': {'type': 'bool', 'default': False}, 'title_prefix': {'type': 'str', 'default': 'Histogram of'}, 'width': {'type': 'int', 'default': 1200}, 'height': {'type': 'int', 'default': 800}, 'n_cols': {'type': 'int', 'default': 2}, 'vertical_spacing': {'type': 'float', 'default': 0.15}, 'horizontal_spacing': {'type': 'float', 'default': 0.1}} ['tabular_data', 'visualization', 'data_quality'] ['classification', 'regression', 'clustering']
validmind.stats.DescriptiveStats Descriptive Stats Provides comprehensive descriptive statistics for numerical features in a dataset.... False True ['dataset'] {'columns': {'type': 'Optional', 'default': None}, 'include_advanced': {'type': 'bool', 'default': True}, 'confidence_level': {'type': 'float', 'default': 0.95}} ['tabular_data', 'statistics', 'data_quality'] ['classification', 'regression', 'clustering']
Want to learn more about navigating ValidMind tests?

Refer to our notebook outlining the utilities available for viewing and understanding available ValidMind tests: Explore tests

Initialize the ValidMind datasets

With the individual tests we want to run identified, the next step is to connect your data with a ValidMind Dataset object. This is required whenever you want to connect a dataset to documentation and produce test results through ValidMind, but you only need to do it once per dataset.

Initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module. For this example, we'll pass in the following arguments:

  • dataset — The raw dataset that you want to provide as input to tests.
  • input_id — A unique identifier that allows tracking what inputs are used when running each individual test.
  • target_column — A required argument if tests require access to true values. This is the name of the target column in the dataset.
# vm_raw_dataset is now a VMDataset object that you can pass to any ValidMind test
vm_raw_dataset = vm.init_dataset(
    dataset=raw_df,
    input_id="raw_dataset",
    target_column="Exited",
)

Run data quality tests

Now that we know how to initialize a ValidMind dataset object, we're ready to run some tests!

You run individual tests by calling the run_test function provided by the validmind.tests module. For the examples below, we'll pass in the following arguments:

  • test_id — The ID of the test to run, as seen in the ID column when you run list_tests.
  • params — A dictionary of parameters for the test. These will override any default_params set in the test definition.

Run tabular data tests

The inputs expected by a test can also be found in the test definition — let's take validmind.data_validation.DescriptiveStatistics as an example.

Note that the output of the describe_test() function below shows that this test expects a dataset as input:

vm.tests.describe_test("validmind.data_validation.DescriptiveStatistics")
Test: Descriptive Statistics ('validmind.data_validation.DescriptiveStatistics')

Now, let's run a few tests to assess the quality of the dataset:

result2 = vm.tests.run_test(
    test_id="validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_raw_dataset},
    params={"min_percent_threshold": 30},
)

❌ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes in the dataset to identify potential imbalances that could affect model performance. The results table presents the percentage of records for each class in the "Exited" target variable, alongside a pass/fail outcome based on a minimum percentage threshold of 30%. The accompanying bar plot visually depicts the proportion of each class, highlighting the relative representation of the majority and minority classes.

Key insights:

  • Majority class exceeds threshold: The "Exited = 0" class constitutes 79.80% of the dataset and passes the 30% minimum threshold.
  • Minority class below threshold: The "Exited = 1" class represents 20.20% of the dataset and fails the 30% minimum threshold, indicating under-representation.
  • Visual confirmation of imbalance: The bar plot demonstrates a pronounced disparity between the two classes, with the majority class substantially outnumbering the minority class.

The results indicate a notable class imbalance in the dataset, with the minority class ("Exited = 1") falling below the specified 30% threshold. This distribution suggests that the dataset is dominated by the majority class, which may have implications for model training and predictive performance, particularly in accurately identifying the minority class.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

Exited Class Imbalance

Exited Percentage of Rows (%) Pass/Fail
0 79.80% Pass
1 20.20% Fail

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:f89d

The output above shows that the class imbalance test did not pass according to the value we set for min_percent_threshold — great, this matches what was reported by the model development team.

To address this issue, we'll re-run the test on some processed data. In this case, let's apply a very simple rebalancing technique to the dataset:

import pandas as pd

raw_copy_df = raw_df.sample(frac=1)  # Create a shuffled copy of the raw dataset

# Create a balanced dataset with the same number of exited and not exited customers
exited_df = raw_copy_df.loc[raw_copy_df["Exited"] == 1]
not_exited_df = raw_copy_df.loc[raw_copy_df["Exited"] == 0].sample(n=exited_df.shape[0])

balanced_raw_df = pd.concat([exited_df, not_exited_df])
balanced_raw_df = balanced_raw_df.sample(frac=1, random_state=42)

With this new balanced dataset, you can re-run the individual test to see if it now passes the class imbalance test requirement.

As this is technically a different dataset, remember to first initialize a new ValidMind Dataset object to pass as input to run_test():

# Register new data and now 'balanced_raw_dataset' is the new dataset object of interest
vm_balanced_raw_dataset = vm.init_dataset(
    dataset=balanced_raw_df,
    input_id="balanced_raw_dataset",
    target_column="Exited",
)
# Pass the initialized `balanced_raw_dataset` as input into the test run
result = vm.tests.run_test(
    test_id="validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_balanced_raw_dataset},
    params={"min_percent_threshold": 30},
)

✅ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes within the dataset to identify potential imbalances that could impact model performance. The results table presents the percentage representation of each class in the "Exited" target variable, alongside a pass/fail assessment based on a minimum threshold of 30%. The accompanying bar plot visually displays the proportion of each class, facilitating interpretation of class distribution.

Key insights:

  • Equal class representation: Both classes (Exited = 0 and Exited = 1) each constitute 50.00% of the dataset, indicating a perfectly balanced class distribution.
  • All classes meet threshold: Each class exceeds the minimum percentage threshold of 30%, resulting in a "Pass" outcome for both classes.
  • No evidence of class imbalance: The visual plot confirms the tabular results, with both classes displaying identical bar heights at the 0.5 mark.

The dataset demonstrates a balanced distribution across the target classes, with both classes equally represented and surpassing the specified minimum threshold. No class imbalance is observed, and all test criteria are satisfied based on the current configuration.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

Exited Class Imbalance

Exited Percentage of Rows (%) Pass/Fail
0 50.00% Pass
1 50.00% Pass

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:8afd

Remove highly correlated features

Next, let's also remove highly correlated features from our dataset as outlined by the development team. Removing highly correlated features helps make the model simpler, more stable, and easier to understand.

You can reuse the output of a ValidMind test for further analysis. In the example below, we'll retrieve the list of features with the highest correlation coefficients and use it to reduce the final set of features for modeling.

First, we'll run validmind.data_validation.HighPearsonCorrelation on the previously initialized balanced_raw_dataset as is, to establish a baseline for comparison with later runs:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_balanced_raw_dataset},
)

❌ High Pearson Correlation

The High Pearson Correlation test evaluates the linear relationships between feature pairs to identify potential redundancy or multicollinearity. The results table presents the top ten strongest absolute Pearson correlation coefficients among feature pairs, along with their Pass/Fail status based on a threshold of 0.3. Only one feature pair exceeds the threshold, while the remaining pairs show lower correlation magnitudes.

Key insights:

  • Single feature pair exceeds threshold: The (Age, Exited) pair has a correlation coefficient of 0.3549, surpassing the 0.3 threshold and resulting in a Fail status.
  • All other correlations below threshold: The remaining nine feature pairs have absolute correlation coefficients ranging from 0.0309 to 0.1935, all classified as Pass.
  • No evidence of widespread multicollinearity: Only one out of the top ten feature pairs indicates a correlation above the threshold, with the rest displaying low to moderate linear relationships.

The results indicate that the dataset contains minimal evidence of high linear correlation among most feature pairs, with only the (Age, Exited) pair exceeding the specified threshold. The overall correlation structure suggests low risk of feature redundancy or multicollinearity based on the tested pairs.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(Age, Exited) 0.3549 Fail
(IsActiveMember, Exited) -0.1935 Pass
(Balance, NumOfProducts) -0.1742 Pass
(Balance, Exited) 0.1473 Pass
(NumOfProducts, Exited) -0.0548 Pass
(NumOfProducts, IsActiveMember) 0.0531 Pass
(CreditScore, Exited) -0.0471 Pass
(Tenure, IsActiveMember) -0.0339 Pass
(CreditScore, EstimatedSalary) -0.0309 Pass
(CreditScore, IsActiveMember) 0.0309 Pass

The output above shows that the test did not pass according to the value we set for max_threshold — as reported and expected.

corr_result is an object of type TestResult. We can inspect the result object to see what the test has produced:

print(type(corr_result))
print("Result ID: ", corr_result.result_id)
print("Params: ", corr_result.params)
print("Passed: ", corr_result.passed)
print("Tables: ", corr_result.tables)
<class 'validmind.vm_models.result.result.TestResult'>
Result ID:  validmind.data_validation.HighPearsonCorrelation
Params:  {'max_threshold': 0.3}
Passed:  False
Tables:  [ResultTable]

Let's remove the highly correlated features and create a new VM dataset object.

We'll begin by checking out the table in the result and extracting a list of features that failed the test:

# Extract table from `corr_result.tables`
features_df = corr_result.tables[0].data
features_df
Columns Coefficient Pass/Fail
0 (Age, Exited) 0.3549 Fail
1 (IsActiveMember, Exited) -0.1935 Pass
2 (Balance, NumOfProducts) -0.1742 Pass
3 (Balance, Exited) 0.1473 Pass
4 (NumOfProducts, Exited) -0.0548 Pass
5 (NumOfProducts, IsActiveMember) 0.0531 Pass
6 (CreditScore, Exited) -0.0471 Pass
7 (Tenure, IsActiveMember) -0.0339 Pass
8 (CreditScore, EstimatedSalary) -0.0309 Pass
9 (CreditScore, IsActiveMember) 0.0309 Pass
# Extract list of features that failed the test
high_correlation_features = features_df[features_df["Pass/Fail"] == "Fail"]["Columns"].tolist()
high_correlation_features
['(Age, Exited)']

Next, extract the feature names from the list of strings (example: (Age, Exited) > Age):

high_correlation_features = [feature.split(",")[0].strip("()") for feature in high_correlation_features]
high_correlation_features
['Age']

Now, it's time to re-initialize the dataset with the highly correlated features removed.

Note the use of a different input_id. This allows tracking the inputs used when running each individual test.

# Remove the highly correlated features from the dataset
balanced_raw_no_age_df = balanced_raw_df.drop(columns=high_correlation_features)

# Re-initialize the dataset object
vm_raw_dataset_preprocessed = vm.init_dataset(
    dataset=balanced_raw_no_age_df,
    input_id="raw_dataset_preprocessed",
    target_column="Exited",
)

Re-running the test with the reduced feature set should now pass:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_raw_dataset_preprocessed},
)

✅ High Pearson Correlation

The High Pearson Correlation test evaluates the linear relationships between feature pairs to identify potential redundancy or multicollinearity. The results table presents the top ten absolute Pearson correlation coefficients among feature pairs, along with their Pass/Fail status based on a threshold of 0.3. All reported coefficients are below the threshold, and each feature pair is marked as Pass.

Key insights:

  • No high correlations detected: All absolute Pearson correlation coefficients are below the 0.3 threshold, with the highest magnitude observed at 0.1935 between IsActiveMember and Exited.
  • Consistent Pass status across feature pairs: Every evaluated feature pair received a Pass status, indicating no evidence of strong linear relationships among the top correlations.
  • Low to moderate relationships observed: The reported coefficients range from -0.1935 to 0.0309, reflecting only weak linear associations among the examined features.

The test results indicate an absence of strong linear dependencies among the evaluated feature pairs. The observed correlation structure suggests low risk of feature redundancy or multicollinearity within the dataset based on the current threshold. All feature pairs meet the test criteria, supporting the interpretability and stability of subsequent modeling efforts.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(IsActiveMember, Exited) -0.1935 Pass
(Balance, NumOfProducts) -0.1742 Pass
(Balance, Exited) 0.1473 Pass
(NumOfProducts, Exited) -0.0548 Pass
(NumOfProducts, IsActiveMember) 0.0531 Pass
(CreditScore, Exited) -0.0471 Pass
(Tenure, IsActiveMember) -0.0339 Pass
(CreditScore, EstimatedSalary) -0.0309 Pass
(CreditScore, IsActiveMember) 0.0309 Pass
(HasCrCard, EstimatedSalary) -0.0251 Pass

You can also plot the correlation matrix to visualize the correlations between the remaining features:

corr_result = vm.tests.run_test(
    test_id="validmind.data_validation.PearsonCorrelationMatrix",
    inputs={"dataset": vm_raw_dataset_preprocessed},
)

Pearson Correlation Matrix

The Pearson Correlation Matrix test evaluates the extent of linear dependency between all pairs of numerical variables in the dataset. The resulting heat map displays the Pearson correlation coefficients, with values ranging from -1 to 1, where the color intensity indicates the strength and direction of the relationship. No coefficients exceed the ±0.7 threshold, and the matrix reveals generally low to moderate correlations among the variables.

Key insights:

  • No high correlations detected: All pairwise correlation coefficients fall well below the ±0.7 threshold, indicating an absence of strong linear relationships between variables.
  • Weak to moderate relationships observed: The highest observed correlation is 0.19 (negative) between Exited and IsActiveMember, and 0.15 (positive) between Exited and Balance, with most other coefficients close to zero.
  • No evidence of multicollinearity: The lack of high correlations suggests that the variables are not redundant and are likely to contribute distinct information to the model.

The correlation structure demonstrates that the dataset's numerical variables are largely independent, with no evidence of strong linear dependencies or redundancy. This supports the suitability of the variables for inclusion in modeling without risk of multicollinearity affecting model interpretability or stability.

Figures

ValidMind Figure validmind.data_validation.PearsonCorrelationMatrix:6023

Documenting test results

Now that we've analyzed two different datasets, we can use ValidMind to document why certain adjustments were made to our raw data, with test results as supporting evidence. Every test result returned by the run_test() function has a .log() method that sends the result to the ValidMind Platform.

When logging validation test results to the platform, you'll need to manually add those results to the desired section of your validation report. To demonstrate this, we'll log our data quality tests and then insert the results via the ValidMind Platform.
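For example, a minimal sketch of logging a single result directly (here, the Class Imbalance result returned for the balanced dataset earlier in this notebook) looks like this:

# Send an individual test result to the ValidMind Platform as validator evidence
# `result` is the TestResult returned by the earlier ClassImbalance run_test() call
result.log()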

Configure and run comparison tests

Below, we'll perform comparison tests between the original raw dataset (raw_dataset) and the final preprocessed (raw_dataset_preprocessed) dataset, again logging the results to the ValidMind Platform.

We can specify all the tests we'd like to run in a dictionary called test_config, and we'll pass in the following arguments for each test:

  • params: Individual test parameters.
  • input_grid: Individual test inputs to compare. In this case, we'll input our two datasets for comparison.

Note here that the input_grid expects the input_id of the dataset as the value rather than the variable name we specified:

# Individual test config with inputs specified
test_config = {
    "validmind.data_validation.ClassImbalance": {
        "input_grid": {"dataset": ["raw_dataset", "raw_dataset_preprocessed"]},
        "params": {"min_percent_threshold": 30}
    },
    "validmind.data_validation.HighPearsonCorrelation": {
        "input_grid": {"dataset": ["raw_dataset", "raw_dataset_preprocessed"]},
        "params": {"max_threshold": 0.3}
    },
}

Then batch run and log our tests in test_config:

for t in test_config:
    print(t)
    try:
        # Check if test has input_grid
        if 'input_grid' in test_config[t]:
            # For tests with input_grid, pass the input_grid configuration
            if 'params' in test_config[t]:
                vm.tests.run_test(t, input_grid=test_config[t]['input_grid'], params=test_config[t]['params']).log()
            else:
                vm.tests.run_test(t, input_grid=test_config[t]['input_grid']).log()
        else:
            # Original logic for regular inputs
            if 'params' in test_config[t]:
                vm.tests.run_test(t, inputs=test_config[t]['inputs'], params=test_config[t]['params']).log()
            else:
                vm.tests.run_test(t, inputs=test_config[t]['inputs']).log()
    except Exception as e:
        print(f"Error running test {t}: {str(e)}")
validmind.data_validation.ClassImbalance

❌ Class Imbalance

The Class Imbalance test evaluates the distribution of target classes within the dataset to identify potential imbalances that could impact model performance. The results present the proportion of each class in both the raw and preprocessed datasets, with a minimum percentage threshold for each class set at 30%. The test outcomes are shown for each class, indicating whether the class distribution meets the specified threshold.

Key insights:

  • Significant imbalance in raw dataset: In the raw dataset, class 0 constitutes 79.80% of records, while class 1 accounts for only 20.20%. Class 1 fails the minimum threshold criterion, indicating under-representation.
  • Balanced distribution after preprocessing: In the preprocessed dataset, both classes are equally represented at 50.00% each, and both pass the minimum threshold requirement.
  • Visual confirmation of class proportions: The accompanying bar plots visually confirm the numerical findings, with a pronounced skew in the raw dataset and equal class proportions in the preprocessed dataset.

The results indicate that the raw dataset exhibits a pronounced class imbalance, with class 1 falling below the 30% minimum threshold. Preprocessing steps have effectively addressed this issue, resulting in a balanced class distribution that meets the test criteria for both classes. This transition from imbalance to balance is clearly reflected in both the tabular and visual outputs.

Parameters:

{
  "min_percent_threshold": 30
}
            

Tables

dataset Exited Percentage of Rows (%) Pass/Fail
raw_dataset 0 79.80% Pass
raw_dataset 1 20.20% Fail
raw_dataset_preprocessed 0 50.00% Pass
raw_dataset_preprocessed 1 50.00% Pass

Figures

ValidMind Figure validmind.data_validation.ClassImbalance:eeb3
ValidMind Figure validmind.data_validation.ClassImbalance:8719
2026-01-28 18:10:28,533 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.ClassImbalance does not exist in model's document
validmind.data_validation.HighPearsonCorrelation

❌ High Pearson Correlation

The High Pearson Correlation test evaluates the linear relationships between feature pairs to identify potential redundancy or multicollinearity. The results table presents the top pairwise Pearson correlation coefficients for both the raw and preprocessed datasets, indicating whether each pair exceeds the specified threshold of 0.3. Each entry includes the feature pair, the calculated coefficient, and a Pass/Fail status based on the threshold.

Key insights:

  • One feature pair exceeds the correlation threshold: The pair (Balance, NumOfProducts) in the raw dataset shows a correlation coefficient of -0.3045, resulting in a Fail status.
  • All other correlations remain below threshold: All other feature pairs in both the raw and preprocessed datasets have absolute correlation coefficients below 0.3 and are marked as Pass.
  • Preprocessing reduces maximum observed correlation: In the preprocessed dataset, the highest absolute correlation is -0.1935 for (IsActiveMember, Exited), which is below the threshold.

The results indicate that, with the exception of the (Balance, NumOfProducts) pair in the raw dataset, all examined feature pairs exhibit low linear correlation, suggesting limited risk of feature redundancy or multicollinearity. Preprocessing further reduces the magnitude of observed correlations, supporting the independence of features in the processed data.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

dataset Columns Coefficient Pass/Fail
raw_dataset (Balance, NumOfProducts) -0.3045 Fail
raw_dataset (Age, Exited) 0.2810 Pass
raw_dataset (IsActiveMember, Exited) -0.1515 Pass
raw_dataset (Balance, Exited) 0.1174 Pass
raw_dataset (Age, IsActiveMember) 0.0873 Pass
raw_dataset (NumOfProducts, Exited) -0.0523 Pass
raw_dataset (Age, NumOfProducts) -0.0306 Pass
raw_dataset (CreditScore, IsActiveMember) 0.0306 Pass
raw_dataset (Tenure, IsActiveMember) -0.0293 Pass
raw_dataset (Age, Balance) 0.0290 Pass
raw_dataset_preprocessed (IsActiveMember, Exited) -0.1935 Pass
raw_dataset_preprocessed (Balance, NumOfProducts) -0.1742 Pass
raw_dataset_preprocessed (Balance, Exited) 0.1473 Pass
raw_dataset_preprocessed (NumOfProducts, Exited) -0.0548 Pass
raw_dataset_preprocessed (NumOfProducts, IsActiveMember) 0.0531 Pass
raw_dataset_preprocessed (CreditScore, Exited) -0.0471 Pass
raw_dataset_preprocessed (Tenure, IsActiveMember) -0.0339 Pass
raw_dataset_preprocessed (CreditScore, EstimatedSalary) -0.0309 Pass
raw_dataset_preprocessed (CreditScore, IsActiveMember) 0.0309 Pass
raw_dataset_preprocessed (HasCrCard, EstimatedSalary) -0.0251 Pass
2026-01-28 18:10:37,441 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.HighPearsonCorrelation does not exist in model's document
Note the output returned indicating that a test-driven block doesn't currently exist in your model's documentation for some test IDs.

That's expected: when we run validation tests, the logged results need to be manually added to your report as part of your compliance assessment process within the ValidMind Platform.

Log tests with unique identifiers

Next, we'll use the previously initialized vm_balanced_raw_dataset (that still has a highly correlated Age column) as input to run an individual test, then log the result to the ValidMind Platform.

When running individual tests, you can use a custom result_id to tag the individual result with a unique identifier:

  • This result_id can be appended to test_id with a : separator.
  • The balanced_raw_dataset result identifier will correspond to the balanced_raw_dataset input, the dataset that still has the Age column.
result = vm.tests.run_test(
    test_id="validmind.data_validation.HighPearsonCorrelation:balanced_raw_dataset",
    params={"max_threshold": 0.3},
    inputs={"dataset": vm_balanced_raw_dataset},
)
result.log()

❌ High Pearson Correlation Balanced Raw Dataset

The High Pearson Correlation test identifies pairs of features in the dataset that exhibit strong linear relationships, with the aim of detecting potential feature redundancy or multicollinearity. The results table lists the top ten feature pairs ranked by the absolute value of their Pearson correlation coefficients, along with a Pass or Fail status based on a threshold of 0.3. Only one feature pair exceeds the threshold, while the remaining pairs display lower correlation values and pass the test criteria.

Key insights:

  • One feature pair exceeds correlation threshold: The pair (Age, Exited) shows a Pearson correlation coefficient of 0.3549, surpassing the 0.3 threshold and resulting in a Fail status.
  • All other feature pairs below threshold: The remaining nine feature pairs have absolute correlation coefficients ranging from 0.0309 to 0.1935, all below the 0.3 threshold and marked as Pass.
  • No evidence of widespread multicollinearity: Only a single pair among the top correlations fails the threshold, indicating limited linear redundancy among most features.

The test results indicate that the dataset contains minimal evidence of high linear correlation among most feature pairs, with only the (Age, Exited) pair exceeding the specified threshold. The overall correlation structure suggests low risk of multicollinearity, supporting the interpretability and stability of subsequent modeling efforts.

Parameters:

{
  "max_threshold": 0.3
}
            

Tables

Columns Coefficient Pass/Fail
(Age, Exited) 0.3549 Fail
(IsActiveMember, Exited) -0.1935 Pass
(Balance, NumOfProducts) -0.1742 Pass
(Balance, Exited) 0.1473 Pass
(NumOfProducts, Exited) -0.0548 Pass
(NumOfProducts, IsActiveMember) 0.0531 Pass
(CreditScore, Exited) -0.0471 Pass
(Tenure, IsActiveMember) -0.0339 Pass
(CreditScore, EstimatedSalary) -0.0309 Pass
(CreditScore, IsActiveMember) 0.0309 Pass
2026-01-28 18:10:47,934 - INFO(validmind.vm_models.result.result): Test driven block with result_id validmind.data_validation.HighPearsonCorrelation:balanced_raw_dataset does not exist in model's document

Add test results to reporting

With some test results logged, let's head to the model we connected to at the beginning of this notebook and learn how to insert a test result into our validation report (Need more help?).

While the example below focuses on a specific test result, you can follow the same general procedure for your other results:

  1. From the Inventory in the ValidMind Platform, go to the model you connected to earlier.

  2. In the left sidebar that appears for your model, click Validation Report under Documents.

  3. Locate the Data Preparation section and click on 2.2.1. Data Quality to expand that section.

  4. Under the Class Imbalance Assessment section, locate Validator Evidence then click Link Evidence to Report:

    Screenshot showing the validation report with the link validator evidence to report option highlighted

  5. Select the Class Imbalance test results we logged: ValidMind Data Validation Class Imbalance

    Screenshot showing the ClassImbalance test selected

  6. Click Update Linked Evidence to add the test results to the validation report.

    Confirm that the results for the Class Imbalance test have been correctly inserted into section 2.2.1. Data Quality of the report:

    Screenshot showing the ClassImbalance test inserted into the validation report

  7. Note that these test results are flagged as Requires Attention — as they include comparative results from our initial raw dataset.

    Click See evidence details to review the LLM-generated description summarizing the test results, confirming that our final preprocessed dataset now passes the test:

    Screenshot showing the ClassImbalance test generated description in the text editor

In this text editor, you can make qualitative edits to the draft description that ValidMind generated to finalize the test results.

Learn more: Work with content blocks

Split the preprocessed dataset

With our raw dataset rebalanced and highly correlated features removed, let's now split our dataset into train and test sets in preparation for model evaluation testing.

To start, let's grab the first few rows from the balanced_raw_no_age_df dataset we initialized earlier:

balanced_raw_no_age_df.head()
CreditScore Geography Gender Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited
7691 655 France Male 10 0.00 2 1 0 51620.94 0
567 621 France Female 5 0.00 1 1 1 47578.45 0
7612 667 France Male 6 0.00 2 0 0 167181.77 0
737 684 France Female 3 73309.38 1 0 0 21228.34 1
3086 645 Spain Male 4 0.00 1 0 1 174916.85 1

Before training the model, we need to encode the categorical features in the dataset:

  • Use the pandas get_dummies() function to one-hot encode the categorical features (a roughly equivalent scikit-learn OneHotEncoder sketch follows the output below).
  • The categorical features in the dataset are Geography and Gender.
balanced_raw_no_age_df = pd.get_dummies(
    balanced_raw_no_age_df, columns=["Geography", "Gender"], drop_first=True
)
balanced_raw_no_age_df.head()
CreditScore Tenure Balance NumOfProducts HasCrCard IsActiveMember EstimatedSalary Exited Geography_Germany Geography_Spain Gender_Male
7691 655 10 0.00 2 1 0 51620.94 0 False False True
567 621 5 0.00 1 1 1 47578.45 0 False False False
7612 667 6 0.00 2 0 0 167181.77 0 False False True
737 684 3 73309.38 1 0 0 21228.34 1 False False False
3086 645 4 0.00 1 0 1 174916.85 1 False True True
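As an aside, if you prefer scikit-learn for this step, a roughly equivalent sketch using the OneHotEncoder class is shown below. It assumes a hypothetical DataFrame df that still contains the raw Geography and Gender columns (that is, a copy taken before the get_dummies() call above); note that the sparse_output parameter is named sparse in scikit-learn versions earlier than 1.2:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Assumes `df` still contains the raw "Geography" and "Gender" columns,
# i.e. a copy of the dataset taken before the pd.get_dummies() call above
categorical_cols = ["Geography", "Gender"]

encoder = OneHotEncoder(drop="first", sparse_output=False)
encoded = encoder.fit_transform(df[categorical_cols])

# Rebuild a DataFrame with the encoded columns and drop the originals
encoded_df = pd.DataFrame(
    encoded,
    columns=encoder.get_feature_names_out(categorical_cols),
    index=df.index,
)
df_encoded = pd.concat([df.drop(columns=categorical_cols), encoded_df], axis=1)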

Splitting our dataset into training and testing is essential for proper validation testing, as this helps assess how well the model generalizes to unseen data:

  • We start by dividing our balanced_raw_no_age_df dataset into training and test subsets using train_test_split, with 80% of the data allocated to training (train_df) and 20% to testing (test_df).
  • From each subset, we separate the features (all columns except "Exited") into X_train and X_test, and the target column ("Exited") into y_train and y_test.
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(balanced_raw_no_age_df, test_size=0.20)

X_train = train_df.drop("Exited", axis=1)
y_train = train_df["Exited"]
X_test = test_df.drop("Exited", axis=1)
y_test = test_df["Exited"]

Initialize the split datasets

Next, let's initialize the training and testing datasets so they are available for use:

vm_train_ds = vm.init_dataset(
    input_id="train_dataset_final",
    dataset=train_df,
    target_column="Exited",
)

vm_test_ds = vm.init_dataset(
    input_id="test_dataset_final",
    dataset=test_df,
    target_column="Exited",
)

In summary

In this second notebook, you learned how to:

  • Independently verify the quality of the dataset used to train the champion model
  • Identify and run relevant ValidMind data quality tests, both individually and as comparisons between datasets
  • Log test results to the ValidMind Platform
  • Insert logged test results as evidence into your validation report

Next steps

Develop potential challenger models

Now that you're familiar with the basics of using the ValidMind Library, let's use it to develop a challenger model: 3 — Developing a potential challenger model


Copyright © 2023-2026 ValidMind Inc. All rights reserved.
Refer to LICENSE for details.
SPDX-License-Identifier: AGPL-3.0 AND ValidMind Commercial