Basic Examples (2)
Find the answer to a question based on a given text:
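A minimal sketch (the text and question are illustrative, and a connected LLM service is assumed):

    text = "The Eiffel Tower is 330 meters tall and was opened on 31 March 1889.";
    ResourceFunction["LLMTextualAnswer"][text, "How tall is the Eiffel Tower?"]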
Find the parameters of a computation workflow specification:
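For example (the workflow specification and questions here are illustrative):

    specClassifier = "Make a classifier with the method RandomForest over the data dfTitanic; show precision and recall.";
    ResourceFunction["LLMTextualAnswer"][specClassifier, {"Which method?", "Which dataset?", "Which metrics?"}]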
Scope (2)
If a single question is asked and the third argument is String, then a string is returned as the answer (note that date results are given in the ISO-8601 standard, per the default prompt of LLMTextualAnswer):
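For example, reusing the text from the basic examples (per the note above, the date comes back as an ISO-8601 string):

    ResourceFunction["LLMTextualAnswer"][text, "When was the Eiffel Tower opened?", String]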
Here is a recommender system pipeline specification:
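An illustrative specification:

    specRecommender = "Create a recommender over the data dfOrders; recommend by the profile \"country:France\" and \"status:shipped\"; show the top 5 recommendations.";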
Here is a list of questions for getting the parameters of the pipeline, together with the corresponding answers:
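A sketch (the question wording is illustrative):

    questionsRecommender = {"Which dataset is used?", "What is the recommendation profile?", "How many top recommendations are requested?"};
    ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender]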
Here we get a question-answer association:
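Presumably via a form specification; Association as the third argument is an assumption of this sketch:

    ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender, Association]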
Here is the LLM function constructed by LLMTextualAnswer:
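A sketch; the form specification LLMFunction as the third argument is an assumption:

    ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender, LLMFunction]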
Alternatively, just the string template can be obtained:
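A sketch; the form specification StringTemplate is likewise an assumption:

    ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender, StringTemplate]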
Options (12)
Prelude (5)
Here is the default prelude:
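One way to inspect it, assuming the default option values can be read off with Options:

    Lookup[Options[ResourceFunction["LLMTextualAnswer"]], "Prelude"]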
It can be beneficial to change the prelude if the first argument has a more specific form (like XML, JSON, WL, etc.). For example, here is a JSON string:
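An illustrative JSON string:

    jsonText = "{\"item\": \"shirt\", \"color\": \"red\", \"size\": \"XL\", \"price\": 19.99}";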
Here we change the prelude:
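A sketch (the prelude string is illustrative):

    ResourceFunction["LLMTextualAnswer"][jsonText, {"What is the color?", "What is the price?"}, "Prelude" -> "Given the JSON object:"]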
Without the prelude change, some LLM models have a hard time finding the answers:
But some models do succeed:
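For example, selecting a model with the standard LLMEvaluator mechanism (assuming LLMTextualAnswer passes that option through; the model name is illustrative):

    ResourceFunction["LLMTextualAnswer"][jsonText, {"What is the color?", "What is the price?"}, LLMEvaluator -> LLMConfiguration[<|"Model" -> "gpt-4o"|>]]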
Prompt (4)
The default prompt is crafted so that the result can be obtained as a list of question-answer pairs in JSON format. Here is the default prompt:
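As with the prelude, assuming the default is readable via Options:

    Lookup[Options[ResourceFunction["LLMTextualAnswer"]], "Prompt"]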
Here the results are easy to process further in a programmatic way:
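For instance, reusing the recommender spec and questions from above (the Association form specification is assumed):

    aAns = ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender, Association];
    Values[aAns]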
Using a different prompt does not guarantee actionable results:
Here no prompt is used -- the answers are correct, but further processing is needed in order to use them in computations:
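A sketch:

    ResourceFunction["LLMTextualAnswer"][specRecommender, questionsRecommender, "Prompt" -> ""]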
Request (3)
For the default value of the "Request" option, Automatic, LLMTextualAnswer uses the request "list the shortest answers of the questions:":
On the other hand, for a single question, "Request"→Automatic uses the request "give the shortest answer of the question:":
Here the request is changed to "give a complete sentence as an answer to the question":
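For example, reusing the text from the basic examples:

    ResourceFunction["LLMTextualAnswer"][text, "When was the Eiffel Tower opened?", String, "Request" -> "give a complete sentence as an answer to the question:"]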
Applications (6)
Random mandala creation by verbal commands (6)
ResourceFunction["RandomMandala"] takes its arguments -- like, rotational symmetry order, or symmetry -- through option specifications. Here we make a list of the options we want to specify:
Here we create corresponding "extraction" questions and display them:
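A sketch (the question wording is illustrative):

    aQuestions = AssociationThread[opts, {"What is the rotational symmetry order?", "What is the radius?"}];
    Values[aQuestions]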
Here we define rules that make the LLM responses (more) acceptable to WL:
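Illustrative normalization rules (for example, turning enumerations like "4 and 6" into comma-separated values):

    postRules = {" and " -> ",", "automatic" -> "Automatic"};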
Here we define a function that converts natural language commands into images of random mandalas:
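A sketch, assuming the Association form specification of LLMTextualAnswer and that the answers come back keyed by question:

    ClearAll[interpret, RandomMandalaCommand];
    (* Convert an answer string like "6" or "4 and 6" into a WL value or list of values. *)
    interpret[s_String] := Replace[ToExpression["{" <> StringReplace[ToLowerCase[s], postRules] <> "}"], {x_} :> x];
    RandomMandalaCommand[cmd_String] :=
      Module[{aAns, aOpts},
        (* Question -> answer association extracted from the natural language command. *)
        aAns = ResourceFunction["LLMTextualAnswer"][cmd, Values[aQuestions], Association];
        (* Re-key by option name and convert the answer strings to WL values. *)
        aOpts = AssociationThread[Keys[aQuestions], interpret /@ Lookup[aAns, Values[aQuestions]]];
        ResourceFunction["RandomMandala"][Normal[aOpts]]
      ];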
Here is an example application:
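For instance (the actual image depends on the LLM's answers and the random seeding):

    RandomMandalaCommand["Make a mandala with rotational symmetry order 6 and radius 8."]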
Here is an application with multi-symmetry and multi-radius specifications:
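For instance (this relies on the normalization above turning enumerations into lists; whether RandomMandala accepts list-valued options directly is an assumption of this sketch):

    RandomMandalaCommand["Make a mandala with rotational symmetry orders 4 and 6 and radii 8 and 12."]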
Properties and Relations (2)
The built-in function FindTextualAnswer has the same purpose and goal. LLMTextualAnswer tends to be more precise and powerful, while FindTextualAnswer is often faster:
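A sketch of the comparison (the text is illustrative, and the timings depend on the LLM service):

    text2 = "The distance from Earth to the Sun is about 150 million kilometers.";
    FindTextualAnswer[text2, "How far is the Sun from Earth?"] // AbsoluteTiming
    ResourceFunction["LLMTextualAnswer"][text2, "How far is the Sun from Earth?", String] // AbsoluteTiming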
A main motivation for making this function is to have a more powerful and precise alternative to FindTextualAnswer for use in the paclet NLPTemplateEngine. Here is the Wolfram Language pipeline built by the function Concretize of NLPTemplateEngine for a classifier specification (similar to the one used above):
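A sketch, assuming the paclet AntonAntonov/NLPTemplateEngine and its Concretize function are available:

    PacletInstall["AntonAntonov/NLPTemplateEngine"];
    Needs["AntonAntonov`NLPTemplateEngine`"];
    Concretize["Make a classifier with the method RandomForest over the data dfTitanic."]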
Possible Issues (4)
Some combinations of texts, questions, preludes and requests might be incompatible with the default prompt (which is aimed at getting JSON dictionaries). For example, consider this recipe:
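An illustrative recipe text:

    recipe = "Mix 2 cups of flour with 1 cup of sugar, add 3 eggs and a pinch of salt, then bake for 30 minutes at 180 degrees Celsius.";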
Here are related food questions:
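For example:

    foodQuestions = {"How much flour is needed?", "How many eggs are used?", "How long is the baking time?"};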
Here LLMTextualAnswer is invoked and fails (with default settings):
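A sketch (whether this actually fails depends on the model used):

    ResourceFunction["LLMTextualAnswer"][recipe, foodQuestions]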
Here a result is obtained after replacing the default prompt with an empty string:
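A sketch:

    ResourceFunction["LLMTextualAnswer"][recipe, foodQuestions, "Prompt" -> ""]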
Neat Examples (6)
Make a "universal" function for invoking functions from Wolfram Function Repository:
Request a binary tree:
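For instance (whether the command resolves to an existing repository function is decided by the LLM at run time):

    WFRInvoke["Make a random binary tree with 20 nodes."]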
Request a Mondrian:
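Similarly:

    WFRInvoke["Make a random Mondrian with 7 rectangles."]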
Request a Rorschach (inkblot) pattern:
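Similarly:

    WFRInvoke["Make a random Rorschach inkblot pattern."]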
Request a maze:
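Similarly:

    WFRInvoke["Make a random maze with 15 rows and 15 columns."]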
Request a random English haiku and specify an LLMEvaluator to use:
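A sketch; here the option is passed through to LLMTextualAnswer (assuming it accepts LLMEvaluator), and the model name is illustrative:

    WFRInvoke["Generate a random English haiku.", LLMEvaluator -> LLMConfiguration[<|"Model" -> "gpt-4o"|>]]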