Use an LLM to assess the quality of another LLM's response for a given prompt
LLMResourceFunction["LLMPromptAssessment"]["prompt"] generate a response from an LLM for the given prompt, then have another LLM assess the quality of that response. | |
LLMResourceFunction["LLMPromptAssessment"]["prompt",config] use the LLMConfiguration specified by config to generate the initial response. | |
LLMResourceFunction["LLMPromptAssessment"]["prompt",config,"extra"] includes the given extra instructions in the prompting for the assessment LLM. |
"inner" | the model that generates the initial response for the given "prompt" |
"outer" | the model that looks at the response and evaluates how well it followed instructions specified in "prompt" |