LLM
An LLM node in a workflow can insert outputs from previous nodes (along with custom text) as parameters of its input question, set a default system prompt for the node, and have the model generate an intelligent response based on the question and system prompt.
1. Overview
- Supports selecting any model available in LinkAI applications (OpenAI series, Claude series, Gemini series, DeepSeek series, Wenxin (ERNIE Bot) series, Qwen (Tongyi Qianwen) series, Xunfei (iFLYTEK Spark), ChatGLM (Zhipu), Kimi (Moonshot) series, Doubao (ByteDance) series)
- Supports setting a system prompt (the default prompt sent to the model whenever the workflow passes through this node)
- Supports adding outputs from other nodes as inputs to the LLM node (defaults to the previous node's output)
- Supports setting the model temperature (precision level)
- Supports carrying workflow input-output records as contextual memory for this node
2. Configuration
- Node Input: The question sent to the LLM. It defaults to the previous node's output; you can change it to the output of any other preceding node, or insert several preceding nodes' outputs combined with custom text to form the question (see the sketch after this list). Note: this node must be directly or indirectly connected to the start node before preceding nodes' output parameters can be selected.
- System Prompt: The system prompt is crucial: it determines the bot's persona, capabilities, and working style. Describe it in detailed natural language; the more specific the prompt, the better the results.
- Temperature: A higher temperature produces more creative and varied responses; a lower temperature produces more precise, deterministic responses.
- Model Selection: Choose any large language model supported across the LinkAI platform based on your needs.
- Memory: Use workflow input-output records as context so the model can reason and respond based on the historical conversation. You can choose whether to enable memory and set how many rounds of workflow memory this node retains, up to 10 rounds (see the memory sketch below).
One round of memory is the start-node input and end-node output produced by one complete workflow run (intermediate node content is excluded).
Because workflow memory may have passed through multiple nodes unrelated to this one, enable memory only where it fits your actual needs.
Memory is currently retained for 30 minutes by default.
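To make the configuration concrete, here is a minimal sketch of what an LLM node's settings might look like if expressed as data. All field names, the `{{node.output}}` placeholder syntax, and the model identifier are illustrative assumptions, not LinkAI's actual schema:

```python
# Hypothetical LLM node configuration; field names and the {{...}}
# placeholder syntax are illustrative, not LinkAI's actual schema.
llm_node = {
    # Question sent to the model: preceding node outputs mixed with custom text
    "input": "Answer based on this search result:\n{{search_node.output}}\n"
             "User question: {{start_node.input}}",
    # Default prompt that shapes the bot's persona and behavior
    "system_prompt": "You are a concise customer-support assistant. "
                     "Answer only from the provided search result.",
    "model": "gpt-4o",       # any model supported on the LinkAI platform
    "temperature": 0.2,      # lower = more precise, higher = more creative
    "memory": {
        "enabled": True,
        "rounds": 5,         # up to 10 rounds supported
    },
}
```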
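And a sketch of how retained rounds could be flattened into chat context for the model, under the same assumptions (each round pairs one start-node input with one end-node output):

```python
# Hypothetical helper: flatten the last N workflow rounds into chat messages.
# Each round = one start-node input plus one end-node output; intermediate
# node content is excluded, matching the memory semantics described above.
def build_memory_context(rounds: list, max_rounds: int = 10) -> list:
    messages = []
    for r in rounds[-max_rounds:]:
        messages.append({"role": "user", "content": r["start_input"]})
        messages.append({"role": "assistant", "content": r["end_output"]})
    return messages
```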

3. Advanced Usage: Structured Output
Workflow LLM nodes support defining output variables, allowing the model to intelligently extract structured parameter values from a natural-language input question:
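For instance, a node might define variables like these (a hypothetical schema; the variable names and descriptions are illustrative):

```python
# Hypothetical output-variable definitions for an LLM node.
# Each variable pairs a name with a description the model uses for extraction.
output_variables = {
    "city":   "destination city mentioned by the user",
    "date":   "travel date, formatted YYYY-MM-DD",
    "guests": "number of travelers, as an integer",
}
```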

LLM nodes will output in JSON format, and subsequent nodes can directly reference the output variables:
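Continuing the hypothetical example above, the node's reply might look like this, with each defined variable appearing as a JSON field:

```python
import json

# Hypothetical structured reply from the LLM node for the variables above.
raw_reply = '{"city": "Tokyo", "date": "2025-03-14", "guests": 2}'
extracted = json.loads(raw_reply)

# A downstream node could then reference extracted["city"],
# extracted["date"], and extracted["guests"] individually.
print(extracted["city"])   # -> Tokyo
```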

In structured-output mode, LLM nodes can be used more flexibly for parameter extraction, decision making, and similar scenarios, passing information to subsequent nodes accurately in variable form. For example, structured information extracted from a conversation can be stored in a database, processed further in a code block, or sent to business systems through a custom plugin, further unleashing the potential of large language models.
Example: Use an LLM to extract content from user input, enable the LLM's structured-output feature, define each extracted item as a variable, and map these variables one-to-one onto the parameters of a subsequent custom plugin node (sketched below).
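A sketch of that mapping, reusing the hypothetical variables from above (the plugin parameter names and the `{{llm_node.*}}` reference syntax are assumptions):

```python
# Hypothetical one-to-one mapping from LLM output variables to the
# parameters of a downstream custom plugin node (e.g. a hotel-search plugin).
plugin_params = {
    "city":        "{{llm_node.city}}",
    "checkin":     "{{llm_node.date}}",
    "guest_count": "{{llm_node.guests}}",
}
```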