
Use Guardrails from LangChain

You can use Guardrails to add a layer of security and validation around LangChain components. This page walks through wiring a Guardrails output parser into a LangChain prompt and LLM call.

Installing dependencies

Make sure you have both langchain and guardrails installed. If you don't, run the following commands:

!pip install guardrails-ai
!pip install langchain
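
To confirm the install worked, here is a quick optional sanity check (assuming both packages are importable in the current environment):

# Optional: confirm both packages are installed and report their versions.
from importlib.metadata import version

print("guardrails-ai:", version("guardrails-ai"))
print("langchain:", version("langchain"))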

Create a RAIL spec

rail_spec = """
<rail version="0.1">

<output>
    <object name="patient_info">
        <string name="gender" description="Patient's gender" />
        <integer name="age" format="valid-range: 0 100" />
        <string name="symptoms" description="Symptoms that the patient is currently experiencing" />
    </object>
</output>

<prompt>

Given the following doctor's notes about a patient, please extract a dictionary that contains the patient's information.

{{doctors_notes}}

@complete_json_suffix_v2
</prompt>
</rail>
"""

Create a GuardrailsOutputParser

from rich import print

from langchain.llms import OpenAI
from langchain.output_parsers import GuardrailsOutputParser
from langchain.prompts import PromptTemplate

output_parser = GuardrailsOutputParser.from_rail_string(rail_spec)

The GuardrailsOutputParser wraps a Guard object, which gives you access to the compiled prompt and the output schema. For example, here is the compiled prompt stored in the GuardrailsOutputParser (the prompt's input variables are shown just after it):

print(output_parser.guard.base_prompt)

Given the following doctor's notes about a patient, please extract a dictionary that contains the patient's 
information.

{doctors_notes}


Given below is XML that describes the information to extract from this document and the tags to extract it into.

<output>
    <object name="patient_info">
        <string name="gender" description="Patient's gender"/>
        <integer name="age" format="valid-range: 0 100"/>
        <string name="symptoms" description="Symptoms that the patient is currently experiencing"/>
    </object>
</output>

ONLY return a valid JSON object (no other text is necessary), where the key of the field in JSON is the `name` 
attribute of the corresponding XML, and the value is of the type specified by the corresponding XML's tag. The JSON
MUST conform to the XML format, including any types and format requests e.g. requests for lists, objects and 
specific types. Be correct and concise.

Here are examples of simple (XML, JSON) pairs that show the expected behavior:
- `<string name='foo' format='two-words lower-case' />` => `{{'foo': 'example one'}}`
- `<list name='bar'><string format='upper-case' /></list>` => `{{"bar": ['STRING ONE', 'STRING TWO', etc.]}}`
- `<object name='baz'><string name="foo" format="capitalize two-words" /><integer name="index" format="1-indexed" 
/></object>` => `{{'baz': {{'foo': 'Some String', 'index': 1}}}}`

JSON Object:

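Besides the compiled prompt, the Guard also exposes the input variables declared in the RAIL prompt, which we will pass to the PromptTemplate below (the printed value is illustrative):

# The RAIL prompt declared a single input variable.
print(output_parser.guard.prompt.variable_names)
# ['doctors_notes']
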
We can now create a LangChain PromptTemplate from this output parser.

Create a LangChain PromptTemplate

prompt = PromptTemplate(
    template=output_parser.guard.base_prompt,
    input_variables=output_parser.guard.prompt.variable_names,
)
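
As an optional quick check, you can render the template with a placeholder value to inspect the exact text that will be sent to the LLM:

# Render the template with a placeholder to preview the final prompt text.
print(prompt.format(doctors_notes="<doctor's notes go here>"))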

Query the LLM and get formatted, validated and corrected output

model = OpenAI(temperature=0)


doctors_notes = """
49 y/o Male with chronic macular rash to face & hair, worse in beard, eyebrows & nares.
Itchy, flaky, slightly scaly. Moderate response to OTC steroid cream
"""
output = model(prompt.format_prompt(doctors_notes=doctors_notes).to_string())
print(output_parser.parse(output))

{
    'gender': 'Male',
    'age': 49,
    'symptoms': 'Chronic macular rash to face & hair, worse in beard, eyebrows & nares. Itchy, flaky, slightly scaly. Moderate response to OTC steroid cream'
}