Function Repository Resource:

LLMTextualAnswer (1.0.0; current version: 1.1.0)

Find textual answers via an LLM

Contributed by: Anton Antonov

ResourceFunction["LLMTextualAnswer"][txt,qs]

gives answers to the questions qs found in the text txt.

ResourceFunction["LLMTextualAnswer"][txt,qs,frm]

returns the result in the form frm.

Details and Options

ResourceFunction["LLMTextualAnswer"] uses Large Language Model (LLM) services like OpenAI's ChatGPT or Google's PaLM.
The default elements of ResourceFunction["LLMTextualAnswer"] aim to give results that are amenable to further computational processing.
All elements of the answer-finding process are tunable. The options "Prelude", "Prompt", and "Request" are used for tuning:
"Prelude"    Automatic    phrase to construct the query with
"Prompt"     Automatic    prompt to condition the LLM with
"Request"    Automatic    request for the questions in the query
The format frm of the answer results can be one of String, Association, List, Automatic, LLMFunction, or StringTemplate.
ResourceFunction["LLMTextualAnswer"] takes all options of LLMFunction, in addition to the tuning options mentioned above.
For given text and questions, the quality and computability of the result varies from model to model.
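
For example, a fully tuned invocation might look like the following sketch (the option values shown are illustrative, not the defaults):

ResourceFunction["LLMTextualAnswer"][txt, qs, Association,
 "Prelude" -> "Given the text:",
 "Request" -> "list the shortest answers of the questions:",
 LLMEvaluator -> LLMConfiguration["Model" -> "gpt-4o"]]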

Examples

Basic Examples (2) 

Find an answer to a question based on a given text:

In[1]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"]["Born and raised in the Austrian Empire, Tesla studied engineering and physics in the 1870s without receiving a degree,
gaining practical experience in the early 1880s working in telephony and at Continental Edison in the new electric power industry.",
 "Where born?"
 ]
Out[1]=

Find the parameters of a computational workflow specification:

In[2]:=
command = "Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy.";
questions = {
   "What is the dataset?",
   "What is the method?",
   "Which metrics to show?",
   "Which ROC functions to plot?"};
ResourceFunction["LLMTextualAnswer"][command, questions]
Out[4]=

Scope (2) 

If one question is asked and the third argument is String, then a string is returned as the answer (note that dates are given in the ISO-8601 format, per the default prompt of LLMTextualAnswer):

In[5]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"]["Our trip started on July 3d 2019.", "When it started?", String]
Out[5]=

Here is a recommender system pipeline specification:

In[6]:=
command2 = "Make a recommender system over the data dfTitanic. Give the top 12 recommendations for the profile 1st, male, and survived.";

Here is a list of questions and the corresponding list of answers that gives the parameters of the pipeline:

In[7]:=
questions2 = {"What is the dataset?", "What is the profile?", "How many recommendations?", "Should matrix factorization be applied?"};
ResourceFunction["LLMTextualAnswer"][command2, questions2, List]
Out[8]=

Here we get a question-answer association:

In[9]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][command2, questions2, Association]
Out[9]=
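
Since the association maps each question to its answer, the result can be used directly for programmatic dispatch. A minimal sketch (variable names are illustrative; the exact answer strings depend on the model used):

params = ResourceFunction["LLMTextualAnswer"][command2, questions2, Association];
dataset = params["What is the dataset?"]
numberOfRecommendations = ToExpression@params["How many recommendations?"]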

Here is the LLM function constructed by LLMTextualAnswer:

In[10]:=
f = ResourceFunction["LLMTextualAnswer"][command2, questions2, LLMFunction, LLMEvaluator -> LLMConfiguration["Model" -> "gpt-3.5-turbo"]]
Out[10]=

Alternatively, just the string template can be obtained:

In[11]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][command2, questions2, StringTemplate]
Out[11]=
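
The string template can also be used to drive an LLM call directly. A minimal sketch, assuming the template has no open slots (that is, the text and the questions are baked in):

tpl = ResourceFunction["LLMTextualAnswer"][command2, questions2, StringTemplate];
LLMSynthesize[TemplateApply[tpl]]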

Options (12) 

Prelude (5) 

Here is the default prelude:

In[12]:=
"Prelude" /. Options[
ResourceFunction["LLMTextualAnswer"]]
Out[12]=

It can be beneficial to change the prelude if the first argument has a more specific form (like XML, JSON, or WL). For example, here is a JSON string:

In[13]:=
SeedRandom[3];
json = ExportString[
  Association@Thread[RandomWord[5] -> RandomInteger[{2, 10}, 5]], "JSON"]
Out[14]=

Here we change the prelude:

In[15]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][json, {"What is the longest key?", "Which is the largest value"}, "Prelude" -> "Given the JSON dictionary:"]
Out[15]=

Without the prelude change, some LLM models have a hard time finding the answers:

In[16]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][json, {"What is the longest key?", "Which is the largest value"}, LLMEvaluator -> LLMConfiguration["Model" -> "gpt-4o"]]
Out[16]=

But some models do succeed:

In[17]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][json, {"What is the longest key?", "Which is the largest value"}, LLMEvaluator -> LLMConfiguration["Model" -> "gpt-3.5-turbo"]]
Out[17]=

Prompt (4) 

The default prompt is crafted so that the result is a list of question-answer pairs in JSON format. Here is the default prompt:

In[18]:=
"Prompt" /. Options[
ResourceFunction["LLMTextualAnswer"]]
Out[18]=

Here the results are easy to process further in a programmatic way:

In[19]:=
command = "Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy.";
questions = {
   "What is the dataset?",
   "What is the method?",
   "Which metrics to show?",
   "Which ROC functions to plot?"};
ResourceFunction["LLMTextualAnswer"][command, questions]
Out[15]=

Using a different prompt that still requests JSON also gives actionable results:

In[20]:=
ResourceFunction["LLMTextualAnswer"][command, questions, String, "Prompt" -> "Return JSON results always."]
Out[20]=
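
Since the result above is a JSON string, it can be converted into Wolfram Language data for further processing. A minimal sketch, assuming the model complied with the JSON instruction:

res = ResourceFunction["LLMTextualAnswer"][command, questions, String, "Prompt" -> "Return JSON results always."];
ImportString[res, "RawJSON"]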

Here no prompt is used; the answers are correct, but further processing is needed in order to use them in computations:

In[21]:=
ResourceFunction["LLMTextualAnswer"][command, questions, String, "Prompt" -> ""]
Out[21]=

Request (3) 

For the default value of the "Request" option, Automatic, LLMTextualAnswer uses the request "list the shortest answers of the questions:":

In[22]:=
command = "Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy.";
questions = {
   "What is the dataset?",
   "What is the method?",
   "Which metrics to show?",
   "Which ROC functions to plot?"};
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"][command, questions, StringTemplate]
Out[23]=

On the other hand, for a single question, "Request" -> Automatic uses the request "give the shortest answer of the question:":

In[24]:=
ResourceFunction["LLMTextualAnswer"][command, First@questions, StringTemplate]
Out[24]=

Here the request is changed to "give a complete sentence as an answer to the question":

In[25]:=
ResourceFunction["LLMTextualAnswer", ResourceVersion->"1.0.0"]["Born and raised in the Austrian Empire, Tesla studied engineering and physics in the 1870s without receiving a degree,
gaining practical experience in the early 1880s working in telephony and at Continental Edison in the new electric power industry.",
 "Where born?",
 String,
 "Prompt" -> "",
 "Request" -> "give a complete sentence as an answer to the question"
 ]
Out[25]=

Applications (6) 

Random mandala creation by verbal commands (6) 

ResourceFunction["RandomMandala"] takes its arguments -- like, rotational symmetry order, or symmetry -- through option specifications. Here we make a list of the options we want to specify:

In[26]:=
mandalaOptKeys = Join[Keys[Options[ResourceFunction["RandomMandala"]]][[1 ;; 7]], {"ImageSize", "Background"}]
Out[26]=

Here we create corresponding "extraction" questions and display them:

In[27]:=
mandalaQuestions = Association@
   Map["What is the " <> ToLowerCase@
        StringReplace[ToString[#], RegularExpression["(?<=\\S)(\\p{Lu})"] :> " $1"] <> "?" -> # &, mandalaOptKeys];
mandalaQuestions = Append[mandalaQuestions, "Should symmetric seed be used or not? True or False" -> "SymmetricSeed"];
ResourceFunction["GridTableForm"][List @@@ Normal[mandalaQuestions], TableHeadings -> {"Extracting question", "Option name"}]
Out[24]=

Here we define rules to make the LLM responses (more) acceptable to WL:

In[28]:=
numberRules = Map[IntegerName[#, {"English", "Words"}] -> ToString[#] &, Range[100]];
curveRules = {"bezier curve" -> "BezierCurve", "bezier function" -> "BezierCurve", "filled curve" -> "FilledCurve@*BezierCurve", "filled bezier curve" -> "FilledCurve@*BezierCurve"};
boolRules = {"true" -> "True", "yes" -> "True", "false" -> "False", "no" -> "False"};

Here we define a function that converts natural language commands into images of random mandalas:

In[29]:=
Clear[LLMRandomMandala];
Options[LLMRandomMandala] = Join[{"Echo" -> False}, Options[ResourceFunction["LLMTextualAnswer"]]];
LLMRandomMandala[cmd_String, questions_Association : mandalaQuestions, opts : OptionsPattern[]] :=
  Block[{echoQ = TrueQ[OptionValue[LLMRandomMandala, {opts}, "Echo"]], params, args, t},
   (* Answer the extraction questions over the natural language command, forwarding any LLMTextualAnswer options: *)
   params = ResourceFunction["LLMTextualAnswer"][cmd, Keys@questions, Association, FilterRules[{opts}, Options[ResourceFunction["LLMTextualAnswer"]]]];
   If[echoQ, Echo[params, "LLMTextualAnswer result:"]];
   (* Drop unanswered questions: *)
   params = Select[params, # != "N/A" &];
   (* Replace the extraction questions with the corresponding option names: *)
   args = KeyMap[# /. questions &, params];
   (* Normalize the answers using the number, curve, and Boolean rules: *)
   args = Map[StringReplace[#, Join[numberRules, curveRules, boolRules], IgnoreCase -> True] &, args];
   (* Wrap bare sequences of numbers in list braces: *)
   args = Map[StringReplace[#, StartOfString ~~ x : ((DigitCharacter | "," | "{" | "}" | "(" | ")" | WhitespaceCharacter) ..) ~~ EndOfString :> "{" <> x <> "}"] &, args];
   (* Interpret the strings, unwrapping singleton numeric lists and keeping gradient names as strings: *)
   args = Map[(
       t = ToExpression[#];
       Which[
        VectorQ[t, NumericQ] && Length[t] == 1, t[[1]],
        MemberQ[ColorData["Gradients"], #], #,
        True, t
        ]
       ) &, args];
   If[echoQ, Echo[args, "Processed arguments:"]];
   ResourceFunction["RandomMandala"][Sequence @@ Normal[args]]
   ];

Here is an example application:

In[30]:=
SeedRandom[33]; LLMRandomMandala["Make a random mandala with rotational symmetry seven, connecting function is a filled curve, and the number of seed elements is six. Use asymmetric seed.",
 "Echo" -> True
 ]
Out[30]=

Here is an application with multi-symmetry and multi-radius specifications:

In[31]:=
SeedRandom[775]; LLMRandomMandala["Make a random mandala with rotational symmetry 12, 6, and 3, radiuses 10, 6, 4, (keep that order), the connecting function is a filled curve, and the number of seed elements is seven. Use symmetric seed and color function SunsetColors.",
 "Echo" -> True,
 LLMEvaluator -> LLMConfiguration["Model" -> "gpt-3.5-turbo"]
 ]
Out[31]=

Properties and Relations (2) 

The built-in function FindTextualAnswer has the same purpose and goal. LLMTextualAnswer seems to be more precise and powerful; FindTextualAnswer is often faster:

In[32]:=
Thread[questions -> FindTextualAnswer[command, questions, PerformanceGoal -> "Quality"]] // ColumnForm
Out[32]=

A main motivation for making this function is to have a more powerful and precise alternative to FindTextualAnswer for use in the paclet NLPTemplateEngine. Here is the Wolfram Language pipeline built by the function Concretize of NLPTemplateEngine for a classifier specification (similar to the one used above):

In[33]:=
spec = "Make a classifier with the method 'RandomForest' over the dataset dfTitanic; use the split ratio 0.82; show precision and accuracy.";
PacletSymbol["AntonAntonov/NLPTemplateEngine", "AntonAntonov`NLPTemplateEngine`Concretize"][spec]
Out[34]=

Possible Issues (4) 

Some combinations of texts, questions, preludes, and requests might be incompatible with the default prompt (which is aimed at getting JSON dictionaries). For example, consider this recipe:

In[35]:=
recipe = "\nA comfort food classic, this Greek casserole is really delicious the day after, and believe it or not, it's great straight out of the fridge for breakfast. Don't ask us how we know this, but if you like cold pizza, you'll like cold moussaka.\n\nIngredients\n\n8 SERVINGS\n\nEGGPLANT AND LAMB\n8 garlic cloves, finely grated, divided\n½ cup plus 2 tablespoons extra-virgin olive oil\n2 tablespoons chopped mint, divided\n2 tablespoons chopped oregano, divided\n3 medium eggplants (about 3½ pounds total), sliced crosswise into ½-inch-thick rounds\n2½ teaspoons kosher salt, plus more\n½ teaspoon freshly ground black pepper, plus more\n2 pounds ground lamb\n2 medium onions, chopped\n1 3-inch cinnamon stick\n2 Fresno chiles, finely chopped\n1 tablespoon paprika\n1 tablespoon tomato paste\n¾ cup dry white wine\n1 28-ounce can whole peeled tomatoes\n\nBÉCHAMEL AND ASSEMBLY\n6 tablespoons unsalted butter\n½ cup all-purpose flour\n2½ cups whole milk, warmed\n¾ teaspoon kosher salt\n4 ounces farmer cheese, crumbled (about 1 cup)\n2 ounces Pecorino or Parmesan, finely grated (about 1¾ cups), divided\n3 large egg yolks, beaten to blend\n\nPreparation\n\nEGGPLANT AND LAMB\n\nStep 1\nPlace a rack in middle of oven; preheat to 475°. Whisk half of the garlic, ½ cup oil, 1 Tbsp. mint, and 1 Tbsp. oregano in a small bowl. Brush both sides of eggplant rounds with herb oil, making sure to get all the herbs and garlic onto eggplant; season with salt and pepper. Transfer eggplant to a rimmed baking sheet (it's okay to pile the rounds on top of each other) and roast until tender and browned, 35\[Dash]45 minutes. Reduce oven temperature to 400°.\n\nStep 2\nMeanwhile, heat remaining 2 Tbsp. oil in a large wide pot over high. Cook lamb, breaking up with a spoon, until browned on all sides and cooked through and liquid from meat is evaporated (there will be a lot of rendered fat), 12\[Dash]16 minutes. Strain fat through a fine-mesh sieve into a clean small bowl and transfer lamb to a medium bowl. Reserve 3 Tbsp. lamb fat; discard remaining fat.\n\nStep 3\nHeat 2 Tbsp. lamb fat in same pot over medium-high (reserve remaining 1 Tbsp. lamb fat for assembling the moussaka). Add onion, cinnamon, 2½ tsp. salt, and ½ tsp. pepper and cook, stirring occasionally, until onion is tender and translucent, 8\[Dash]10 minutes. Add chiles and remaining garlic and cook, scraping up any browned bits from the bottom of the pot, until onion is golden brown, about 5 minutes. Add paprika and tomato paste and cook until brick red in color, about 1 minute. Add wine and cook, stirring occasionally, until slightly reduced and no longer smells of alcohol, about 3 minutes. Add tomatoes and break up with a wooden spoon into small pieces (the seeds will shoot out at you if you're too aggressive, so start slowly\[LongDash]puncture the tomato, then get your smash and break on!). Add lamb and remaining 1 Tbsp. mint and 1 Tbsp. oregano and cook, stirring occasionally, until most of the liquid is evaporated and mixture looks like a thick meat sauce, 5\[Dash]7 minutes. Pluck out and discard cinnamon stick.\n\nBÉCHAMEL AND ASSEMBLY\n\nStep 4\nHeat butter in a medium saucepan over medium until foaming. Add flour and cook, whisking constantly, until combined, about 1 minute. Whisk in warm milk and bring sauce to a boil. Cook béchamel, whisking often, until very thick (it should have the consistency of pudding), about 5 minutes; stir in salt. Remove from heat and whisk in farmer cheese and half of the Pecorino. 
Let sit 10 minutes for cheese to melt, then add egg yolks and vigorously whisk until combined and béchamel is golden yellow.\n\nStep 5\nBrush a 13x9'' baking pan with remaining 1 Tbsp. lamb fat. Layer half of eggplant in pan, covering the bottom entirely. Spread half of lamb mixture over eggplant in an even layer. Repeat with remaining eggplant and lamb to make another layer of each. Top with béchamel and smooth surface; sprinkle with remaining Pecorino.\n\nStep 6\nBake moussaka until bubbling vigorously and béchamel is browned in spots, 30\[Dash]45 minutes. Let cool 30 minutes before serving.\nStep 7\n\nDo Ahead: Moussaka can be baked 1 day ahead. Let cool, then cover and chill, or freeze up to 3 months. Thaw before reheating in a 250° oven until warmed through, about 1 hour.\n";
Panel[Pane[recipe, ImageSize -> {500, 150}, Scrollbars -> {True, True}]]
Out[36]=

Here are related food questions:

In[37]:=
foodQuestions = {"How long to boil?", "How many eggs?", "How much water?", "What are the ingredients?", "Which spices?", "What temperature to cook with?", "What temperature to preheat to?"}
Out[37]=

Here LLMTextualAnswer is invoked and fails (with default settings):

In[38]:=
ResourceFunction["LLMTextualAnswer"][recipe, foodQuestions]
Out[38]=

Here a result is obtained after replacing the default prompt with an empty string:

In[39]:=
ResourceFunction["LLMTextualAnswer"][recipe, foodQuestions, String, "Prompt" -> ""]
Out[39]=

Neat Examples (6) 

Make a "universal" function for invoking functions from Wolfram Function Repository:

In[40]:=
Clear[LLMInvoker];
LLMInvoker[command_String, opts : OptionsPattern[]] :=
  Block[{questions, ans},
   questions = {"Which function?", "Which arguments?"};
   (* Extract the resource function name and its arguments from the command: *)
   ans = ResourceFunction["LLMTextualAnswer"][
     command,
     questions,
     "Request" -> "use camel case to answer the questions, do not add prefixes like 'make', start with a capital letter, give the arguments as a list:",
     opts];
   (* Invoke the named resource function, with no arguments if none were found: *)
   ResourceFunction[ans["Which function?"]][
    Sequence @@ If[MemberQ[{None, "None", "N/A", "n/a"}, ans["Which arguments?"]], {}, ToExpression@ans["Which arguments?"]]]
   ];

Request a binary tree:

In[41]:=
SeedRandom[8728];
LLMInvoker["Make a random binary tree with arguments 5 and 3.", LLMEvaluator -> LLMConfiguration["Model" -> "gpt-4-turbo"]]
Out[42]=

Request a Mondrian:

In[43]:=
LLMInvoker["Run of the function random mondrian."]
Out[43]=

Request a Rorschach (inkblot) pattern:

In[44]:=
LLMInvoker["Random rorschach.", LLMEvaluator -> LLMConfiguration["Model" -> "gpt-4"]]
Out[44]=

Request a maze:

In[45]:=
LLMInvoker["random maze with argument 20."]
Out[45]=

Request a random English haiku and specify an LLMEvaluator to use:

In[46]:=
LLMInvoker["Make a random english haiku.", LLMEvaluator -> LLMConfiguration["Model" -> "gpt-4-turbo"]]
Out[46]=

Publisher

Anton Antonov

Requirements

Wolfram Language 13.0 (December 2021) or above

Version History

  • 1.1.0 – 27 September 2024
  • 1.0.0 – 14 August 2024
