Quickstart for Heston option pricing model using QuantLib

Welcome! Let's get you started with the basic process of documenting models with ValidMind.

The Heston option pricing model is a popular stochastic volatility model used to price options. Developed by Steven Heston in 1993, the model assumes that the asset's volatility follows a mean-reverting square-root process, allowing it to capture the empirical observation of volatility "clustering" in financial markets. This model is particularly useful for assets where volatility is not constant, making it a favored approach in quantitative finance for pricing complex derivatives.

Here’s an overview of the Heston model as implemented in QuantLib, a powerful library for quantitative finance:

Model Assumptions and Characteristics

  1. Stochastic Volatility: The volatility is modeled as a stochastic process, following a mean-reverting square-root process (Cox-Ingersoll-Ross process).
  2. Correlated Asset and Volatility Processes: The asset price and volatility are assumed to be correlated, allowing the model to capture the "smile" effect observed in implied volatilities.
  3. Risk-Neutral Dynamics: The Heston model is typically calibrated under a risk-neutral measure, which allows for direct application to pricing.

Heston Model Parameters

The model is governed by a set of key parameters:

  • S0: Initial stock price
  • v0: Initial variance of the asset price
  • kappa: Speed of mean reversion of the variance
  • theta: Long-term mean level of variance
  • sigma: Volatility of volatility (vol of vol)
  • rho: Correlation between the asset price and variance processes

The dynamics of the asset price \(S_t\) and the variance \(v_t\) under the Heston model are given by:

\[ dS_t = r S_t \, dt + \sqrt{v_t} S_t \, dW^S_t \]

\[ dv_t = \kappa (\theta - v_t) \, dt + \sigma \sqrt{v_t} \, dW^v_t \]

where \(dW^S_t\) and \(dW^v_t\) are Wiener processes with correlation \(\rho\).
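To make these dynamics concrete, here is a minimal, illustrative Euler-Maruyama simulation of a single Heston path. It is only a sketch of the equations above; the notebook below prices options analytically and with finite differences via QuantLib rather than by simulation.

import numpy as np

def simulate_heston_path(S0, v0, r, kappa, theta, sigma, rho, T=1.0, steps=252, seed=0):
    """Illustrative Euler-Maruyama discretization of the Heston SDEs above."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    S, v = S0, v0
    for _ in range(steps):
        z_s = rng.standard_normal()
        z_v = rho * z_s + np.sqrt(1.0 - rho**2) * rng.standard_normal()  # correlated Brownian shocks
        sqrt_v_dt = np.sqrt(max(v, 0.0) * dt)
        S *= np.exp((r - 0.5 * max(v, 0.0)) * dt + sqrt_v_dt * z_s)  # log-Euler step for the asset price
        v = max(v + kappa * (theta - v) * dt + sigma * sqrt_v_dt * z_v, 0.0)  # full-truncation step for the variance
    return S, v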

Advantages and Limitations

  • Advantages:
    • Ability to capture volatility smiles and skews.
    • More realistic pricing for options on assets with stochastic volatility.
  • Limitations:
    • Calibration can be complex due to the number of parameters.
    • Computationally intensive compared to simpler models like Black-Scholes.

This setup provides a robust framework for pricing and analyzing options with stochastic volatility dynamics. QuantLib’s implementation makes it easy to experiment with different parameter configurations and observe their effects on pricing.

You will learn how to initialize the ValidMind Library, develop an option pricing model, and then write custom tests for sensitivity and stress testing that quickly generate documentation about your model.

Contents

  • About ValidMind
    • Before you begin
    • New to ValidMind?
    • Key concepts
  • Install the ValidMind Library
  • Initialize the ValidMind Library
    • Get your code snippet
  • Initialize the Python environment
    • Preview the documentation template
  • Data preparation
    • Data quality
  • Model development
    • Model Calibration
  • Model Evaluation
    • Benchmark Testing
    • Sensitivity Testing
    • Stress Testing
  • Next steps
    • Work with your model documentation
    • Discover more learning resources

About ValidMind

ValidMind is a suite of tools for managing model risk, including risk associated with AI and statistical models.

You use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.

Before you begin

This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook but we recommend further familiarizing yourself with the language.

If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.

New to ValidMind?

If you haven't already seen our documentation on the ValidMind Library, we recommend you begin by exploring the available resources in this section. There, you can learn more about documenting models and running tests, as well as find code samples and our Python Library API reference.

For access to all features available in this notebook, create a free ValidMind account.

Signing up is FREE — Register with ValidMind

Key concepts

Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.

Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.

Tests: A function contained in the ValidMind Library, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.

Custom tests: Custom tests are functions that you define to evaluate your model or dataset. These functions can be registered via the ValidMind Library to be used with the ValidMind Platform.

Inputs: Objects to be evaluated and documented in the ValidMind Library. They can be any of the following:

  • model: A single model that has been initialized in ValidMind with vm.init_model().
  • dataset: Single dataset that has been initialized in ValidMind with vm.init_dataset().
  • models: A list of ValidMind models - usually this is used when you want to compare multiple models in your custom test.
  • datasets: A list of ValidMind datasets - usually this is used when you want to compare multiple datasets in your custom test. See this example for more information.

Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a test, customize its behavior, or provide additional context.

Outputs: Custom tests can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.

Test suites: Collections of tests designed to run together to automate and generate model documentation end-to-end for specific use-cases.

Example: the classifier_full_suite test suite runs tests from the tabular_dataset and classifier test suites to fully document the data and model sections for binary classification model use-cases.
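To tie these concepts together, here is a minimal sketch of a custom test that takes a dataset input and returns a table output. The test ID and logic are hypothetical, and it assumes the ValidMind dataset object exposes its underlying DataFrame as df:

import pandas as pd
import validmind as vm

@vm.test("my_custom_tests.MissingValueSummary")  # hypothetical test ID
def missing_value_summary(dataset):
    """Return a small table of missing-value counts per column (illustrative only)."""
    df = dataset.df  # assumption: the VM dataset object exposes its underlying DataFrame
    return pd.DataFrame({"column": df.columns, "missing_values": df.isna().sum().values})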

Install the ValidMind Library

To install the library:

%pip install -q validmind

To install the QuantLib library:

%pip install -q QuantLib
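This notebook also pulls market data with yfinance; if it is not already installed in your environment, you may need to add it as well:

%pip install -q yfinance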

Initialize the ValidMind Library

ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.

Get your code snippet

  1. In a browser, log in to ValidMind.

  2. In the left sidebar, navigate to Model Inventory and click + Register Model.

  3. Enter the model details and click Continue. (Need more help?)

    For example, to register a model for use with this notebook, select:

    • Documentation template: Capital markets

    You can fill in other options according to your preference.

  4. Go to Getting Started and click Copy snippet to clipboard.

Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:

# Load your model identifier credentials from an `.env` file

%load_ext dotenv
%dotenv .env

# Or replace with your code snippet

import validmind as vm

vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
)
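If you prefer the .env route, the file simply holds the values that vm.init() would otherwise take as arguments. The variable names below are an assumption for illustration; use whatever names your copied code snippet or the credentials guide specifies:

# Example .env contents (placeholder values, not real credentials)
VM_API_HOST=...
VM_API_KEY=...
VM_API_SECRET=...
VM_API_MODEL=...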

Initialize the Python environment

Next, let's import the necessary libraries and set up your Python environment for data analysis:

%matplotlib inline

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
import yfinance as yf
import QuantLib as ql
from validmind.tests import run_test

Preview the documentation template

A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.

You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:

vm.preview_template()

Data Preparation

Market Data Sources

Helper functions

Let's define a helper function to retrieve option data from Yahoo Finance.

def get_market_data(ticker, expiration_date_str):
    """
    Fetch option market data from Yahoo Finance for the given ticker and expiration date.
    Returns a list of tuples: (strike, maturity, option_price).
    """
    # Create a Ticker object for the specified stock
    stock = yf.Ticker(ticker)

    # Get all available expiration dates for options
    option_dates = stock.options

    # Check if the requested expiration date is available
    if expiration_date_str not in option_dates:
        raise ValueError(f"Expiration date {expiration_date_str} not available for {ticker}. Available dates: {option_dates}")

    # Get the option chain for the specified expiration date
    option_chain = stock.option_chain(expiration_date_str)

    # Get call options (or you can use puts as well based on your requirement)
    calls = option_chain.calls

    # Convert expiration_date_str to QuantLib Date
    expiry_date_parts = list(map(int, expiration_date_str.split('-')))  # Split YYYY-MM-DD
    maturity_date = ql.Date(expiry_date_parts[2], expiry_date_parts[1], expiry_date_parts[0])  # Convert to QuantLib Date

    # Create a list to store strike prices, maturity dates, and option prices
    market_data = []
    for index, row in calls.iterrows():
        strike = row['strike']
        option_price = row['lastPrice']  # You can also use 'bid', 'ask', 'mid', etc.
        market_data.append((strike, maturity_date, option_price))
    df = pd.DataFrame(market_data, columns = ['strike', 'maturity_date', 'option_price'])
    return df

Let's define a helper function to retrieve stock data from Yahoo Finance. This helper calculates the spot price, dividend yield, volatility, and risk-free rate from the underlying stock data.

def get_option_parameters(ticker):
    # Fetch historical data for the stock
    stock_data = yf.Ticker(ticker)
    
    # Get the current spot price
    spot_price = stock_data.history(period="1d")['Close'].iloc[-1]
    
    # Get dividend yield
    dividend_rate = stock_data.dividends.mean() / spot_price if not stock_data.dividends.empty else 0.0
    
    # Estimate volatility (standard deviation of log returns)
    hist_data = stock_data.history(period="1y")['Close']
    log_returns = np.log(hist_data / hist_data.shift(1)).dropna()
    volatility = np.std(log_returns) * np.sqrt(252)  # Annualized volatility
    
    # Assume a risk-free rate from some known data (can be fetched from market data, here we use 0.001)
    risk_free_rate = 0.001
    
    # Return the calculated parameters
    return {
        "spot_price": spot_price,
        "volatility": volatility,
        "dividend_rate": dividend_rate,
        "risk_free_rate": risk_free_rate
    }

Market Data Quality and Availability

Next, let's specify the ticker and expiration date used to fetch the market data.

ticker = "MSFT"
expiration_date = "2024-12-13"  # Example expiration date in 'YYYY-MM-DD' format

market_data = get_market_data(ticker=ticker, expiration_date_str=expiration_date)

Initialize the ValidMind datasets

Before you can run tests, you must first initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module.

vm_market_data = vm.init_dataset(
    dataset=market_data,
    input_id="market_data",
)

Data Quality

Let's check the quality of the data using outlier and missing-data tests.

Isolation Forest Outliers Test

Let's detect anomalies in the dataset using the Isolation Forest algorithm, visualized through scatter plots.

result = run_test(
    "validmind.data_validation.IsolationForestOutliers",
    inputs={
        "dataset": vm_market_data,
    },
    title="Outliers detection using Isolation Forest",
)

Missing Values Test

Let's evaluate dataset quality by ensuring the missing-value ratio across all features does not exceed a set threshold.

result = run_test(
    "validmind.data_validation.MissingValues",
    inputs={
        "dataset": vm_market_data,
    },
    title="Missing Values detection",
)

Model parameters

Let's calculate the model input parameters from the stock data.

option_params = get_option_parameters(ticker=ticker)

Model Development: Heston Option Pricing

Let's define a HestonModel class that prices European and American call options with QuantLib and calibrates the Heston parameters (v0, theta, kappa, sigma, rho) to observed market option prices.

class HestonModel:

    def __init__(self, ticker, expiration_date_str, calculation_date, spot_price, dividend_rate, risk_free_rate):
        self.ticker = ticker
        self.expiration_date_str = expiration_date_str
        self.calculation_date = calculation_date
        self.spot_price = spot_price
        self.dividend_rate = dividend_rate
        self.risk_free_rate = risk_free_rate
    
    def predict_option_price(self, strike, maturity_date, spot_price, v0=None, theta=None, kappa=None, sigma=None, rho=None):
        # Set the evaluation date
        ql.Settings.instance().evaluationDate = self.calculation_date

        # Construct the European Option
        payoff = ql.PlainVanillaPayoff(ql.Option.Call, strike)
        exercise = ql.EuropeanExercise(maturity_date)
        european_option = ql.VanillaOption(payoff, exercise)

        # Yield term structures for risk-free rate and dividend
        riskFreeTS = ql.YieldTermStructureHandle(ql.FlatForward(self.calculation_date, self.risk_free_rate, ql.Actual365Fixed()))
        dividendTS = ql.YieldTermStructureHandle(ql.FlatForward(self.calculation_date, self.dividend_rate, ql.Actual365Fixed()))

        # Initial stock price
        initialValue = ql.QuoteHandle(ql.SimpleQuote(spot_price))

        # Heston process parameters
        heston_process = ql.HestonProcess(riskFreeTS, dividendTS, initialValue, v0, kappa, theta, sigma, rho)
        hestonModel = ql.HestonModel(heston_process)

        # Use the Heston analytic engine
        engine = ql.AnalyticHestonEngine(hestonModel)
        european_option.setPricingEngine(engine)

        # Calculate the Heston model price
        h_price = european_option.NPV()

        return h_price

    def predict_american_option_price(self, strike, maturity_date, spot_price, v0=None, theta=None, kappa=None, sigma=None, rho=None):
        # Set the evaluation date
        ql.Settings.instance().evaluationDate = self.calculation_date

        # Construct the American Option
        payoff = ql.PlainVanillaPayoff(ql.Option.Call, strike)
        exercise = ql.AmericanExercise(self.calculation_date, maturity_date)
        american_option = ql.VanillaOption(payoff, exercise)

        # Yield term structures for risk-free rate and dividend
        riskFreeTS = ql.YieldTermStructureHandle(ql.FlatForward(self.calculation_date, self.risk_free_rate, ql.Actual365Fixed()))
        dividendTS = ql.YieldTermStructureHandle(ql.FlatForward(self.calculation_date, self.dividend_rate, ql.Actual365Fixed()))

        # Initial stock price
        initialValue = ql.QuoteHandle(ql.SimpleQuote(spot_price))

        # Heston process parameters
        heston_process = ql.HestonProcess(riskFreeTS, dividendTS, initialValue, v0, kappa, theta, sigma, rho)
        heston_model = ql.HestonModel(heston_process)

        # Price with a finite-difference Heston engine, which supports early exercise
        heston_fd_engine = ql.FdHestonVanillaEngine(heston_model)
        american_option.setPricingEngine(heston_fd_engine)
        option_price = american_option.NPV()

        return option_price

    def objective_function(self, params, market_data, spot_price, dividend_rate, risk_free_rate):
        v0, theta, kappa, sigma, rho = params

        # Sum of squared differences between market prices and model prices
        error = 0.0
        for i, row in market_data.iterrows():
            model_price = self.predict_option_price(row['strike'], row['maturity_date'], spot_price, 
                                            v0, theta, kappa, sigma, rho)
            error += (model_price - row['option_price']) ** 2
        
        return error

    def calibrate_model(self, ticker, expiration_date_str):
        # Get the option market data dynamically from Yahoo Finance
        market_data = get_market_data(ticker, expiration_date_str)

        # Initial guesses for Heston parameters
        initial_params = [0.04, 0.04, 0.1, 0.1, -0.75]

        # Bounds for the parameters to ensure realistic values
        bounds = [(0.0001, 1.0),  # v0
                (0.0001, 1.0),  # theta
                (0.001, 2.0),   # kappa
                (0.001, 1.0),   # sigma
                (-0.75, 0.0)]    # rho

        # Optimize the parameters to minimize the error between model and market prices
        result = minimize(self.objective_function, initial_params, args=(market_data, self.spot_price, self.dividend_rate, self.risk_free_rate),
                        bounds=bounds, method='L-BFGS-B')

        # Optimized Heston parameters
        v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt = result.x

        return v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt

Model Calibration

  • The calibration process aims to optimize the Heston model parameters (v0, theta, kappa, sigma, rho) by minimizing the difference between model-predicted option prices and observed market prices.
  • In this implementation, the model is calibrated to current market data, specifically using option prices from the selected ticker and expiration date.

Let's specify calculation_date and strike_price as input parameters for the model to verify its functionality and confirm it operates as expected.

calculation_date = ql.Date(26, 11, 2024)
# Convert expiration date string to QuantLib.Date
expiry_date_parts = list(map(int, expiration_date.split('-')))
maturity_date = ql.Date(expiry_date_parts[2], expiry_date_parts[1], expiry_date_parts[0])
strike_price = 460.0

hm = HestonModel(
    ticker=ticker,
    expiration_date_str=expiration_date,
    calculation_date=calculation_date,
    spot_price=option_params['spot_price'],
    dividend_rate=option_params['dividend_rate'],
    risk_free_rate=option_params['risk_free_rate'],
)

# Let's calibrate the model
v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt = hm.calibrate_model(ticker, expiration_date)
print(f"Optimized Heston parameters: v0={v0_opt}, theta={theta_opt}, kappa={kappa_opt}, sigma={sigma_opt}, rho={rho_opt}")


# option price
h_price = hm.predict_option_price(strike_price, maturity_date, option_params['spot_price'], v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt)
print("The Heston model price for the option is:", h_price)

Model Evaluation

Benchmark Testing

The benchmark testing framework provides a robust way to validate the Heston model implementation and understand the relationships between European and American option prices under stochastic volatility conditions. Let's compare European and American option prices using the Heston model.

@vm.test("my_custom_tests.BenchmarkTest")
def benchmark_test(hm_model, strikes, maturity_date, spot_price, v0=None, theta=None, kappa=None, sigma=None, rho=None):
    """
    Compares European and American option prices using the Heston model.

    This test evaluates the price differences between European and American options
    across multiple strike prices while keeping other parameters constant. The comparison
    helps understand the early exercise premium of American options over their European
    counterparts under stochastic volatility conditions.

    Args:
        hm_model: HestonModel instance for option pricing calculations
        strikes (list[float]): List of strike prices to test
        maturity_date (ql.Date): Option expiration date in QuantLib format
        spot_price (float): Current price of the underlying asset
        v0 (float, optional): Initial variance. Defaults to None.
        theta (float, optional): Long-term variance. Defaults to None.
        kappa (float, optional): Mean reversion rate. Defaults to None.
        sigma (float, optional): Volatility of variance. Defaults to None.
        rho (float, optional): Correlation between asset and variance. Defaults to None.

    Returns:
        dict: Contains a DataFrame with the following columns:
            - Strike: Strike prices tested
            - Maturity date: Expiration date for all options
            - Spot price: Current underlying price
            - european model price: Prices for European options
            - american model price: Prices for American options
"""
    american_derived_prices = []
    european_derived_prices = []
    for K in strikes:
        european_derived_prices.append(hm_model.predict_option_price(K, maturity_date, spot_price, v0, theta, kappa, sigma, rho))
        american_derived_prices.append(hm_model.predict_american_option_price(K, maturity_date, spot_price, v0, theta, kappa, sigma, rho))

    data = {
        "Strike": strikes,
        "Maturity date": [maturity_date] * len(strikes),
        "Spot price": [spot_price] * len(strikes),
        "european model price": european_derived_prices,
        "american model price": american_derived_prices,

    }
    df1 = pd.DataFrame(data)
    return {"strikes variation benchmarking": df1}

result = run_test(
    "my_custom_tests.BenchmarkTest",
    params={
        "hm_model": hm,
        "strikes": [400, 425, 460, 495, 520],
        "maturity_date": maturity_date,
        "spot_price": option_params['spot_price'],
        "v0":v0_opt,
        "theta": theta_opt,
        "kappa":kappa_opt ,
        "sigma": sigma_opt,
        "rho":rho_opt
    },
).log()
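If you also want to see the early-exercise premium mentioned in the test's docstring, a small follow-up loop (outside the logged test, using the same strikes and calibrated parameters) can print it per strike:

# Early-exercise premium = American price minus European price at each strike
for K in [400, 425, 460, 495, 520]:
    eu = hm.predict_option_price(K, maturity_date, option_params["spot_price"],
                                 v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt)
    am = hm.predict_american_option_price(K, maturity_date, option_params["spot_price"],
                                          v0_opt, theta_opt, kappa_opt, sigma_opt, rho_opt)
    print(f"Strike {K}: early-exercise premium = {am - eu:.4f}")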

Sensitivity Testing

The sensitivity testing framework provides a systematic approach to understanding how the Heston model responds to parameter changes, which is crucial for both model validation and practical application in trading and risk management.

@vm.test("my_test_provider.Sensitivity")
def SensitivityTest(
    model,
    strike_price,
    maturity_date,
    spot_price,
    v0_opt,
    theta_opt,
    kappa_opt,
    sigma_opt,
    rho_opt,
):
    """
    Evaluates the sensitivity of American option prices to changes in model parameters.

    This test calculates option prices using the Heston model with optimized parameters.
    It's designed to analyze how changes in various model inputs affect the option price,
    which is crucial for understanding model behavior and risk management.

    Args:
        model (HestonModel): Initialized Heston model instance wrapped in ValidMind model object
        strike_price (float): Strike price of the option
        maturity_date (ql.Date): Expiration date of the option in QuantLib format
        spot_price (float): Current price of the underlying asset
        v0_opt (float): Optimized initial variance parameter
        theta_opt (float): Optimized long-term variance parameter
        kappa_opt (float): Optimized mean reversion rate parameter
        sigma_opt (float): Optimized volatility of variance parameter
        rho_opt (float): Optimized correlation parameter between asset price and variance
    """
    price = model.model.predict_american_option_price(
        strike_price,
        maturity_date,
        spot_price,
        v0_opt,
        theta_opt,
        kappa_opt,
        sigma_opt,
        rho_opt,
    )

    return price

Common plot function

Let's define a small helper to plot one column of a test's result table against another.

def plot_results(df, params: dict = None):
    plt.figure(figsize=(10, 6))
    plt.plot(df[params["x"]], df[params["y"]], label=params["label"])
    plt.xlabel(params["xlabel"])
    plt.ylabel(params["ylabel"])
    plt.title(params["title"])
    plt.legend()
    plt.grid(True)
    plt.show()  # display the plot

Let's create a ValidMind model object from the calibrated Heston model.

hm_model = vm.init_model(model=hm, input_id="HestonModel")

Strike sensitivity

Let's analyze how option prices change as the strike price varies. We create a range of strike prices around the current strike (460) and observe the impact on option prices while keeping all other parameters constant.

result = run_test(
    "my_test_provider.Sensitivity:ToStrike",
    inputs = {
        "model": hm_model
    },
    param_grid={
        "strike_price": list(np.linspace(460-50, 460+50, 10)),
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": [theta_opt],
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt":[rho_opt]
    },
)
result.log()
# Visualize how option prices change with different strike prices
plot_results(
    pd.DataFrame(result.tables[0].data),
    params={
        "x": "strike_price",
        "y":"Value",
        "label":"Strike price",
        "xlabel":"Strike price",
        "ylabel":"option price",
        "title":"Heston option - Strike price Sensitivity",
    }
)
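The registered sensitivity test can be reused for other inputs as well. For example, a spot-price sweep (a sketch; the plus or minus 10% range is an arbitrary choice) follows the same pattern:

spot = option_params["spot_price"]
result = run_test(
    "my_test_provider.Sensitivity:ToSpot",
    inputs={"model": hm_model},
    param_grid={
        "strike_price": [strike_price],
        "maturity_date": [maturity_date],
        "spot_price": list(np.linspace(spot * 0.9, spot * 1.1, 10)),
        "v0_opt": [v0_opt],
        "theta_opt": [theta_opt],
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt": [rho_opt],
    },
)
result.log()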

Stress Testing

This stress testing framework provides a comprehensive view of how the Heston model behaves under different market conditions and helps identify potential risks in option pricing.

@vm.test("my_custom_tests.Stressing")
def StressTest(
    model,
    strike_price,
    maturity_date,
    spot_price,
    v0_opt,
    theta_opt,
    kappa_opt,
    sigma_opt,
    rho_opt,
):
    """
    Performs stress testing on Heston model parameters to evaluate option price sensitivity.

    This test evaluates how the American option price responds to stressed market conditions
    by varying key model parameters. It's designed to:
    1. Identify potential model vulnerabilities
    2. Understand price behavior under extreme scenarios
    3. Support risk management decisions
    4. Validate model stability across parameter ranges

    Args:
        model (HestonModel): Initialized Heston model instance wrapped in ValidMind model object
        strike_price (float): Option strike price
        maturity_date (ql.Date): Option expiration date in QuantLib format
        spot_price (float): Current price of the underlying asset
        v0_opt (float): Initial variance parameter under stress testing
        theta_opt (float): Long-term variance parameter under stress testing
        kappa_opt (float): Mean reversion rate parameter under stress testing
        sigma_opt (float): Volatility of variance parameter under stress testing
        rho_opt (float): Correlation parameter under stress testing
    """
    price = model.model.predict_american_option_price(
        strike_price,
        maturity_date,
        spot_price,
        v0_opt,
        theta_opt,
        kappa_opt,
        sigma_opt,
        rho_opt,
    )

    return price

Rho (correlation) and Theta (long-term variance) stress test

Next, let's evaluate the sensitivity of the model's output to changes in the correlation parameter (rho) and the long-term variance parameter (theta) within a stochastic volatility framework.

result = run_test(
    "my_custom_tests.Stressing:TheRhoAndThetaParameters",
    inputs = {
        "model": hm_model,
    },
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": list(np.linspace(0.1, theta_opt+0.4, 5)),
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt":list(np.linspace(rho_opt-0.2, rho_opt+0.2, 5))
    },
).log()

Sigma stress test

Let's evaluate the sensitivity of the model's output to changes in the volatility-of-volatility parameter, sigma. This test is crucial for understanding how variations in market volatility impact the model's valuation of financial instruments, particularly options.

result = run_test(
    "my_custom_tests.Stressing:TheSigmaParameter",
    inputs = {
        "model": hm_model,
    },
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": [theta_opt],
        "kappa_opt": [kappa_opt],
        "sigma_opt": list(np.linspace(0.1, sigma_opt+0.6, 5)),
        "rho_opt": [rho_opt]
    },
).log()

Kappa stress test

Let's evaluate the sensitivity of the model's output to changes in the kappa parameter, the mean-reversion rate in stochastic volatility models.

result = run_test(
    "my_custom_tests.Stressing:TheKappaParameter",
    inputs = {
        "model": hm_model,
    },
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": [theta_opt],
        "kappa_opt": list(np.linspace(kappa_opt, kappa_opt+0.2, 5)),
        "sigma_opt": [sigma_opt],
        "rho_opt": [rho_opt]
    },
).log()

Theta stress test

Let's evaluate the sensitivity of the model's output to changes in the theta parameter, which represents the long-term variance in a stochastic volatility model.

result = run_test(
    "my_custom_tests.Stressing:TheThetaParameter",
    inputs = {
        "model": hm_model,
    },
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": list(np.linspace(0.1, theta_opt+0.9, 5)),
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt": [rho_opt]
    },
).log()

Rho stress test

Let's evaluate the sensitivity of the model's output to changes in the correlation parameter, rho, within the stochastic volatility framework. This test is crucial for understanding how variations in rho, the correlation between the asset price and its volatility, impact the model's valuation output.

result = run_test(
    "my_custom_tests.Stressing:TheRhoParameter",
    inputs = {
        "model": hm_model,
    },
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": [v0_opt],
        "theta_opt": [theta_opt],
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt": list(np.linspace(rho_opt-0.2, rho_opt+0.2, 5))
    },
).log()
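V0 stress test

The initial variance v0 is the only calibrated parameter not stressed individually above. If you want to complete the set, a sketch following the same pattern (the parameter range below is an arbitrary choice) would be:

result = run_test(
    "my_custom_tests.Stressing:TheV0Parameter",
    inputs={"model": hm_model},
    param_grid={
        "strike_price": [460],
        "maturity_date": [maturity_date],
        "spot_price": [option_params["spot_price"]],
        "v0_opt": list(np.linspace(0.01, v0_opt + 0.1, 5)),
        "theta_opt": [theta_opt],
        "kappa_opt": [kappa_opt],
        "sigma_opt": [sigma_opt],
        "rho_opt": [rho_opt],
    },
).log()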

Next steps

You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way — use the ValidMind Platform to work with your model documentation.

Work with your model documentation

  1. From the Model Inventory in the ValidMind Platform, go to the model you registered earlier. (Need more help?)

  2. Click and expand the Model Development section.

What you see is the full draft of your model documentation in a more easily consumable version. From here, you can make qualitative edits to model documentation, view guidelines, collaborate with validators, and submit your model documentation for approval when it's ready. Learn more ...

Discover more learning resources

We offer many interactive notebooks to help you document models:

  • Run tests & test suites
  • Code samples

Or, visit our documentation to learn more about ValidMind.
