Summarization of financial data using a large language model (LLM)
Document a large language model (LLM) using the ValidMind Library. The use case is summarization of financial news, based on a dataset containing just over 300k unique news articles written by journalists at CNN and the Daily Mail.
This interactive notebook shows you how to set up the ValidMind Library, initialize the library, and load the dataset, followed by running the model validation tests provided by the ValidMind Library to quickly generate documentation about the data and model.
Before you begin
Register with ValidMind
This notebook requires an OpenAI API secret key to run. If you don't have one, visit API keys on OpenAI's site to create a new key for yourself. Note that API usage charges may apply.
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
About ValidMind
ValidMind's suite of tools enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
If this is your first time trying out ValidMind, we recommend going through the following resources first:
- Get started — The basics, including key concepts, and how our products work
- ValidMind Library — The path for developers, more code samples, and our developer reference
Setting up
Install the ValidMind Library
To install the library:
%pip install -q validmind
Initialize the ValidMind Library
Register sample model
Let's first register a sample model for use with this notebook:
In a browser, log in to ValidMind.
In the left sidebar, navigate to Inventory and click + Register Model.
Enter the model details and click Next > to continue to assignment of model stakeholders. (Need more help?)
Select your own name under the MODEL OWNER drop-down.
Click Register Model to add the model to your inventory.
Apply documentation template
Once you've registered your model, let's select a documentation template. A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
In the left sidebar that appears for your model, click Documents and select Documentation.
Under TEMPLATE, select LLM-based Text Summarization.
Click Use Template to apply the template.
Get your code snippet
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
- On the left sidebar that appears for your model, select Getting Started and click Copy snippet to clipboard.
- Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from an `.env` file
%load_ext dotenv
%dotenv .env
# Or replace with your code snippet
import validmind as vm
vm.init(
# api_host="...",
# api_key="...",
# api_secret="...",
# model="...",
)
Preview the documentation template
Let's verify that you have connected the ValidMind Library to the ValidMind Platform and that the appropriate template is selected for your model.
You will upload documentation and test results unique to your model based on this template later on. For now, take a look at the default structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:
vm.preview_template()
Helper functions
Let's define the following functions to help visualize datasets with long text fields:
import textwrap
from IPython.display import display, HTML
from tabulate import tabulate

def _format_cell_text(text, width=50):
    """Private function to format a cell's text."""
    return "\n".join([textwrap.fill(line, width=width) for line in text.split("\n")])

def _format_dataframe_for_tabulate(df):
    """Private function to format the entire DataFrame for tabulation."""
    df_out = df.copy()
    # Format all string columns
    for column in df_out.columns:
        # Check if column is of type object (likely strings)
        if df_out[column].dtype == object:
            df_out[column] = df_out[column].apply(_format_cell_text)
    return df_out

def _dataframe_to_html_table(df):
    """Private function to convert a DataFrame to an HTML table."""
    headers = df.columns.tolist()
    table_data = df.values.tolist()
    return tabulate(table_data, headers=headers, tablefmt="html")

def display_formatted_dataframe(df, num_rows=None):
    """Primary function to format and display a DataFrame."""
    if num_rows is not None:
        df = df.head(num_rows)
    formatted_df = _format_dataframe_for_tabulate(df)
    html_table = _dataframe_to_html_table(formatted_df)
    display(HTML(html_table))
Load the dataset
The CNN Dailymail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail (https://huggingface.co/datasets/cnn_dailymail). The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.
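The cells that follow assume two columns: article (the full news text) and highlights (the journalist-written reference summary). A small guard like the one below catches a malformed export early; this is a sketch, with a hypothetical helper name and a toy frame standing in for the real CSV:

```python
import pandas as pd

def check_summarization_columns(df, text_col="article", target_col="highlights"):
    """Raise early if the columns the rest of the notebook relies on are missing."""
    missing = [c for c in (text_col, target_col) if c not in df.columns]
    if missing:
        raise KeyError(f"dataset is missing expected column(s): {missing}")
    return True

# Toy frame standing in for the real CSV
toy = pd.DataFrame(
    {
        "article": ["Acme Corp reported record third-quarter revenue on Tuesday."],
        "highlights": ["Acme posts record Q3 revenue."],
    }
)
check_summarization_columns(toy)  # passes silently; raises KeyError otherwise
```

The same call can be run on the real DataFrame right after `pd.read_csv` as a cheap sanity check.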
import pandas as pd
df = pd.read_csv("./datasets/cnn_dailymail_100_with_predictions.csv")
display_formatted_dataframe(df, num_rows=5)
Get ready to run the analysis
Import the ValidMind FoundationModel and Prompt classes needed for the summarization task later on:
from validmind.models import FoundationModel, Prompt
Check your access to the OpenAI API:
import os
import dotenv
import nltk

dotenv.load_dotenv()
nltk.download("stopwords")

if os.getenv("OPENAI_API_KEY") is None:
    raise Exception("OPENAI_API_KEY not found")
from openai import OpenAI

model = OpenAI()

def call_model(prompt):
    return (
        model.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": prompt},
            ],
        )
        .choices[0]
        .message.content
    )
Set the prompt guidelines for the summarization task:
prompt_template = """
You are an AI with expertise in summarizing financial news.
Your task is to provide a concise summary of the specific news article provided below.
Before proceeding, take a moment to understand the context and nuances of the financial terminology used in the article.
Article to Summarize:
```
{article}
```
Please respond with a concise summary of the article's main points.
Ensure that your summary is based on the content of the article and not on external information or assumptions.
""".strip()
prompt_variables = ["article"]
vm_test_ds = vm.init_dataset(
dataset=df,
input_id="test_dataset",
text_column="article",
target_column="highlights",
)
vm_model = vm.init_model(
model=FoundationModel(
predict_fn=call_model,
prompt=Prompt(
template=prompt_template,
variables=prompt_variables,
),
),
input_id="gpt_35_model",
)
# Assign model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="gpt_35_prediction")
Run model validation tests
It's possible to run a subset of tests on the documentation template by passing a section parameter to run_documentation_tests(). Let's run the tests that evaluate the model's overall performance (including summarization metrics), by selecting the "model development" section of the template:
summarization_results = vm.run_documentation_tests(
section="model_development",
inputs={
"dataset": vm_test_ds,
"model": vm_model,
},
)
Next steps
You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the model validation test results as part of your model documentation in the ValidMind Platform:
In the ValidMind Platform, click Documentation under Documents for the model you registered earlier. (Need more help?)
Expand 2. Data Preparation or 3. Model Development to review all test results.
What you see now is a more easily consumable version of the model validation testing you just performed, along with other parts of your model documentation that still need to be completed.
If you want to learn more about where you are in the model documentation process, take a look at our documentation on the ValidMind Library.
Upgrade ValidMind
Retrieve the information for the currently installed version of ValidMind:
%pip show validmind
If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:
%pip install --upgrade validmind
You may need to restart your kernel after upgrading the package for the changes to be applied.
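The installed version can also be checked programmatically with the standard library, which is handy in automation; a sketch using importlib.metadata (available in Python 3.8+), with a hypothetical helper name:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package):
    """Return the installed version string, or None if the package is not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Prints the version string, or None if validmind is not installed in this environment
print(installed_version("validmind"))
```

Comparing the returned string against a minimum required version lets a setup cell decide whether to trigger the upgrade automatically.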