Introduction#
Kor is a thin wrapper on top of LLMs that helps extract structured data from text.
To use Kor, specify the schema of what should be extracted and provide some extraction examples.
As you’re looking through this tutorial, examine 👀 the outputs carefully to understand what errors are being made.
Extraction isn’t perfect! Understand the limitations before adopting it for your use case.
from kor.extraction import create_extraction_chain
from kor.nodes import Object, Text, Number
from langchain_openai import ChatOpenAI
Schema#
Kor requires that you specify the schema
of what you want parsed with some optional examples.
We’ll start off by specifying a very simple schema.
schema = Object(
    id="person",
    description="Personal information",
    examples=[
        ("Alice and Bob are friends", [{"first_name": "Alice"}, {"first_name": "Bob"}])
    ],
    attributes=[
        Text(
            id="first_name",
            description="The first name of a person.",
        )
    ],
    many=True,
)
The schema above consists of a single object node which contains a single text attribute called first_name.
The object can be repeated many times, so if the text contains multiple first names, multiple objects will be extracted.
As part of the schema, we specified a description
of what we’re extracting, as well as an example.
Including both a description
and examples
will likely improve performance.
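Note the shape of each entry in examples: an (input_text, extracted_records) tuple, where the records are plain dicts keyed by the attribute id. A quick stdlib-only illustration of that structure (no LLM call involved):

```python
# Each Kor example pairs an input string with the records that should be
# extracted from it -- plain Python data, keyed by the attribute id.
example = (
    "Alice and Bob are friends",
    [{"first_name": "Alice"}, {"first_name": "Bob"}],
)

text, records = example
print(text)                                 # Alice and Bob are friends
print([r["first_name"] for r in records])   # ['Alice', 'Bob']
```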
Langchain#
Instantiate a LangChain LLM and create a chain.
https://langchain.readthedocs.io/en/latest/modules/llms.html
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    max_tokens=2000,
)
chain = create_extraction_chain(llm, schema)
Extract#
With a chain
and a schema
defined, we’re ready to extract data.
chain.invoke("My name is Bobby. My brother's name Joe.")["data"]
{'person': [{'first_name': 'Bobby'}, {'first_name': 'Joe'}]}
We got back a list of people (under the person
key).
The Full Response#
The full response contains the raw output from the LLM, and a list of any errors that occurred while parsing the LLM result.
chain.invoke("My name is Bobby. My brother's name Joe.")
{'data': {'person': [{'first_name': 'Bobby'}, {'first_name': 'Joe'}]},
'raw': 'first_name\nBobby\nJoe',
'errors': [],
'validated_data': {}}
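Since parsing can fail, it’s worth checking errors before trusting data. A minimal sketch of that defensive pattern, using a hypothetical response dict shaped like the one above (no LLM call needed):

```python
# A hypothetical full response, shaped like the one shown above.
response = {
    "data": {"person": [{"first_name": "Bobby"}, {"first_name": "Joe"}]},
    "raw": "first_name\nBobby\nJoe",
    "errors": [],
    "validated_data": {},
}

# Only trust "data" once we've confirmed no parsing errors were recorded.
if response["errors"]:
    raise ValueError(f"Extraction failed: {response['errors']}")

people = response["data"].get("person", [])
print(people)  # [{'first_name': 'Bobby'}, {'first_name': 'Joe'}]
```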
The Prompt#
And here’s the actual prompt that was sent to the LLM.
print(chain.get_prompts()[0].format_prompt(text="[user input]").to_string())
Your goal is to extract structured information from the user's input that matches the form described below. When extracting information please make sure it matches the type information exactly. Do not add any attributes that do not appear in the schema shown below.
```TypeScript
person: Array<{ // Personal information
first_name: string // The first name of a person.
}>
```
Please output the extracted information in CSV format in Excel dialect. Please use a | as the delimiter.
Do NOT add any clarifying information. Output MUST follow the schema above. Do NOT add any additional columns that do not appear in the schema.
Input: Alice and Bob are friends
Output: first_name
Alice
Bob
Input: [user input]
Output:
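The prompt asks the model to reply in pipe-delimited CSV, and Kor parses that reply internally. The idea can be sketched with the stdlib csv module (the raw string below is the one the chain returned above; with a single column, no | delimiter actually appears):

```python
import csv
import io

# Raw output like the LLM returned above (single column, so no "|" appears).
raw = "first_name\nBobby\nJoe"

# Parse the pipe-delimited CSV back into a list of dicts. This is only an
# illustration of the idea, not Kor's actual parsing code.
reader = csv.DictReader(io.StringIO(raw), delimiter="|")
rows = [dict(row) for row in reader]
print(rows)  # [{'first_name': 'Bobby'}, {'first_name': 'Joe'}]
```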
With pydantic#
from kor import from_pydantic
from typing import List, Optional
from pydantic import BaseModel, Field
class Person(BaseModel):
    first_name: str = Field(description="The first name of a person")
schema, validator = from_pydantic(
    Person,
    description="Personal Information",  # <-- Description
    examples=[  # <-- Object level examples
        ("Alice and Bob are friends", [{"first_name": "Alice"}, {"first_name": "Bob"}])
    ],
    many=True,  # <-- Note many=True
)
chain = create_extraction_chain(llm, schema, validator=validator)
chain.invoke("My name is Bobby. My brother's name Joe.")
{'data': {'person': [{'first_name': 'Bobby'}, {'first_name': 'Joe'}]},
'raw': 'first_name\nBobby\nJoe',
'errors': [],
'validated_data': [Person(first_name='Bobby'), Person(first_name='Joe')]}
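Unlike the plain-dict data field, validated_data holds Person instances, so downstream code gets typed attribute access and pydantic validation. A minimal sketch of what that buys you, constructing the instances directly rather than via an LLM call:

```python
from pydantic import BaseModel, Field

class Person(BaseModel):
    first_name: str = Field(description="The first name of a person")

# What validated_data held above, built by hand for illustration.
people = [Person(first_name="Bobby"), Person(first_name="Joe")]

# Typed attribute access instead of dict lookups.
print([p.first_name for p in people])  # ['Bobby', 'Joe']
```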