AutoEvaluator: An LLM-based LLM Evaluator

AutoEvaluator is a Python library that speeds up quality-control (QC) work on large language model (LLM) outputs. It provides a simple, transparent, and user-friendly API that identifies True Positive (TP), False Positive (FP), and False Negative (FN) statements by comparing a generated statement against the ground truth you provide. Get ready to turbocharge your LLM evaluations!


Features:

  • Evaluate LLM outputs against a reference dataset or human judgement.
  • Generate TP, FP, and FN sentences based on the ground truth provided.
  • Calculate Precision, Recall, and F1 score.

Installation

Autoevaluator requires Python 3.9 and several dependencies. You can install it with pip:

pip install autoevaluator

Usage

  1. Prepare your data:

    • Create a dataset containing LLM outputs and their corresponding ground truth labels.
    • The format of the data can be customized depending on the evaluation task.
    • Example: A CSV file with columns for "prompt," "llm_output," and "ground_truth" (a batch-evaluation sketch over this format follows the steps below).
  2. Set up environment variables:

import os

# API keys and endpoint for OpenAI / Azure OpenAI
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_API_KEY"] = "<AZURE_OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "<AZURE_OPENAI_ENDPOINT>"

# Tells setup_client() which backend to use: Azure OpenAI or standard OpenAI
os.environ["DEPLOYMENT"] = "<azure>/<not-azure>"
  3. Run autoevaluator:
# Import the evaluate function and the client setup helper from autoevaluator
from autoevaluator import evaluate, setup_client

# Set up the OpenAI (or Azure OpenAI) client and model name
client, model = setup_client()

# Define the claim to be evaluated
claim = 'Feynman was born in 1918 in Malaysia'

# Define the ground truth statement
ground_truth = 'Feynman was born in 1918 in America.'

# Evaluate the claim against the ground truth
evaluate(claim, ground_truth, client=client, model_name=model)

# Output
{'TP': ['Feynman was born in 1918.'],
 'FP': ['Feynman was born in Malaysia.'],
 'FN': ['Feynman was born in America.'],
 'recall': 0.5,
 'precision': 0.5,
 'f1_score': 0.5}
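The scores in this output appear to follow the standard precision/recall/F1 definitions over the counts of TP, FP, and FN sentences (here 1 each). A minimal sketch of that arithmetic, for reference only; the helper below is illustrative and not part of the autoevaluator API:

def prf1(tp, fp, fn):
    # Precision: share of generated statements that are supported by the ground truth
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: share of ground-truth statements that are covered by the generation
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1_score": f1}

prf1(tp=1, fp=1, fn=1)  # {'precision': 0.5, 'recall': 0.5, 'f1_score': 0.5}, matching the output above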

  4. Output:
    • The script will generate a dictionary with the following information:
      • TP, FP, and FN sentences
      • Precision, Recall, and F1 score
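For the CSV layout described in step 1, one way to evaluate a whole dataset is to call evaluate once per row and aggregate the returned scores. A minimal sketch assuming pandas, a hypothetical eval_data.csv file, and the column names "llm_output" and "ground_truth"; the loop and averaging are illustrative, not part of the autoevaluator API:

import pandas as pd
from autoevaluator import evaluate, setup_client

client, model = setup_client()

# Hypothetical dataset with "prompt", "llm_output", and "ground_truth" columns
df = pd.read_csv("eval_data.csv")

results = []
for _, row in df.iterrows():
    # Evaluate each generated answer against its ground truth
    results.append(evaluate(row["llm_output"], row["ground_truth"], client=client, model_name=model))

# Average the per-row scores into a dataset-level summary
mean_f1 = sum(r["f1_score"] for r in results) / len(results)
print(f"Mean F1 over {len(results)} rows: {mean_f1:.3f}")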

License:

This project is licensed under the MIT License. See the LICENSE file for details.