The configuration for the evaluator.

criteria?: Criteria | Record<string, string> (Optional)
The criteria to use for the evaluator: the "criteria" to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/labeled-criteria for more information.

llm?: Toolkit (Optional)
The language model to use as the evaluator. Defaults to GPT-4.
@example
```ts
// Use a prebuilt criterion by name.
const evalConfig = {
  evaluators: [LabeledCriteria("correctness")],
};
```

@example
```ts
// Define a custom criterion as a { name: description } record.
const evalConfig = {
  evaluators: [
    LabeledCriteria({
      mentionsAllFacts:
        "Does the submission include all facts provided in the reference?",
    }),
  ],
};
```

@example
```ts
// Equivalent long-form configuration.
const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: "correctness",
    formatEvaluatorInputs: ...
  }],
};
```

@example
```ts
const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: {
      mentionsAllFacts:
        "Does the submission include all facts provided in the reference?",
    },
    formatEvaluatorInputs: ...
  }],
};
```
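The `formatEvaluatorInputs` callback (elided as `...` above) maps a run and its reference example onto the fields the labeled-criteria prompt expects: the original input, the model's prediction, and the ground-truth reference. A minimal sketch follows; the `Run`/`Example` shapes and the `question`/`answer` field names here are simplified assumptions for illustration, not the library's exact types.

```typescript
// Hypothetical, simplified shapes; the real langsmith types carry more fields.
interface Run {
  inputs: Record<string, string>;
  outputs?: Record<string, string>;
}
interface Example {
  outputs?: Record<string, string>;
}

// Pull the input, prediction, and ground-truth reference out of a run/example
// pair so the criteria prompt can compare prediction against reference.
function formatEvaluatorInputs(run: Run, example?: Example) {
  return {
    input: run.inputs.question,
    prediction: run.outputs?.answer,
    reference: example?.outputs?.answer,
  };
}

console.log(
  formatEvaluatorInputs(
    { inputs: { question: "Capital of France?" }, outputs: { answer: "Paris" } },
    { outputs: { answer: "Paris" } },
  ),
);
```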
Configuration to load a "LabeledCriteriaEvalChain" evaluator, which prompts an LLM to determine whether the model's prediction complies with the provided criteria, and supplies a "ground truth" reference label for the evaluator to incorporate in its evaluation.