The AI Prompt node lets you send a prompt to a large language model (LLM) and use the result in your module's workflow. Use it to extract data points, summarize texts, classify content, or perform any non-legal reasoning task directly within a module.
The AI Prompt node replaces the previous AI Connector. Existing AI Connector nodes in your modules are automatically upgraded to the new AI Prompt node. No manual migration is needed.
When to use the AI Prompt node vs. BEAMON
BRYTER offers two ways to work with AI:
- BEAMON is purpose-built for legal reasoning, e.g., contract review, compliance checks, and legal analysis.
- AI Prompt node is for everything else: extracting data points from documents, summarizing texts, classifying content, generating drafts, and general-purpose reasoning.
If your use case involves legal analysis, start with BEAMON. For all other AI tasks within a module, use the AI Prompt node.
How to use the AI Prompt node
Add an AI Prompt node to your module from the action node menu. The node supports two modes:
Simple text mode
Enter a prompt and optionally attach one or more files (DOCX, PDF, or ZIP). The model processes your prompt and returns the result as a single text value that you can reference in subsequent nodes.
Structured output mode
Define the output data points you expect and specify a type for each one (e.g., text, number, date). The model returns structured values that you can use in type-specific operations downstream, for example, using a returned number in a calculation or a returned date in a date comparison.
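Conceptually, structured output works like coercing a model's JSON reply into the types you declared for each data point. This is a hypothetical sketch in Python, not BRYTER's actual implementation; the data point names, the `parse_structured_output` helper, and the JSON reply shape are all illustrative assumptions.

```python
import json
from datetime import date

# Illustrative only: map the declared data point types from the node
# editor to Python coercers (names here are assumptions, not a real API).
TYPE_COERCERS = {
    "text": str,
    "number": float,
    "date": date.fromisoformat,
}

def parse_structured_output(reply_json, expected):
    """expected maps each data point name to its declared type
    ("text", "number", or "date")."""
    raw = json.loads(reply_json)
    return {name: TYPE_COERCERS[t](raw[name]) for name, t in expected.items()}

# A hypothetical model reply for two declared data points:
reply = '{"invoice_total": "1999.50", "due_date": "2025-03-31"}'
points = parse_structured_output(
    reply, {"invoice_total": "number", "due_date": "date"}
)
# points["invoice_total"] is now a number usable in calculations,
# and points["due_date"] a date usable in date comparisons.
```

This mirrors the benefit described above: once values are typed, downstream operations such as calculations and date comparisons can use them directly.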
Additional options
Accuracy and thoroughness
Depending on the model selected, you can adjust how the LLM processes your prompt:
- Temperature (available with GPT-4o) – controls how creative or deterministic the response is. Lower values produce more consistent results.
- Reasoning (available with GPT-5.1 and higher) – controls how thoroughly the model reasons through your prompt. Higher reasoning produces more accurate results but increases processing time.
Available models may change over time. The node editor shows which options apply to the model you have selected.
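One way to picture "which options apply to the model you have selected" is a request builder that rejects options the chosen model does not support. This is a hedged sketch under assumptions: the model identifiers, the `build_request` helper, and the option names are taken from the text above, not from a real BRYTER or provider API.

```python
# Assumed mapping from the text: GPT-4o exposes temperature,
# GPT-5.1 exposes reasoning. Real availability may differ over time.
MODEL_OPTIONS = {
    "gpt-4o": {"temperature"},
    "gpt-5.1": {"reasoning"},
}

def build_request(model, prompt, temperature=None, reasoning=None):
    """Build a request dict, allowing only options the model supports."""
    supported = MODEL_OPTIONS.get(model, set())
    req = {"model": model, "prompt": prompt}
    if temperature is not None:
        if "temperature" not in supported:
            raise ValueError(f"{model} does not support temperature")
        req["temperature"] = temperature  # lower = more deterministic
    if reasoning is not None:
        if "reasoning" not in supported:
            raise ValueError(f"{model} does not support reasoning")
        req["reasoning"] = reasoning  # higher = more thorough, slower
    return req

req = build_request("gpt-4o", "Summarize this clause.", temperature=0.2)
```

The design point is the same one the node editor makes visually: an option is only configurable when the selected model actually supports it.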
Web search
You can enable web search to let the model retrieve current information from the web as part of its response. Higher reasoning settings improve the quality of web search results but may also increase processing time.
Custom processing messages
You can configure custom messages that are displayed to end users while the AI Prompt node is processing. This is useful for longer-running prompts where you want to set expectations.
Limitations
- Maximum outputs: 30 per node (in structured output mode).
- Context window: Depends on the selected model.
- Total attachment size: Up to 100 MB.
- Scanned PDFs (without a text layer, i.e., documents not processed with OCR): Up to approximately 25 MB, depending on the model's context window.
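The attachment limits above can be checked before submitting a prompt. The following is a minimal sketch, assuming you can inspect each file's size and whether a PDF has a text layer; the `check_attachments` helper and its tuple format are hypothetical, and the 100 MB and ~25 MB thresholds come directly from the list above.

```python
# Limits taken from the documentation above.
MAX_TOTAL_BYTES = 100 * 1024 * 1024        # 100 MB across all attachments
MAX_SCANNED_PDF_BYTES = 25 * 1024 * 1024   # ~25 MB, model-dependent

def check_attachments(files):
    """files: list of (name, size_bytes, has_text_layer) tuples.
    Returns (ok, reason)."""
    total = sum(size for _, size, _ in files)
    if total > MAX_TOTAL_BYTES:
        return False, "total attachment size exceeds 100 MB"
    for name, size, has_text_layer in files:
        if name.lower().endswith(".pdf") and not has_text_layer \
                and size > MAX_SCANNED_PDF_BYTES:
            return False, f"{name}: scanned PDF exceeds ~25 MB"
    return True, "ok"
```

Running a check like this before the node executes avoids failed runs on oversized uploads.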
Trial mode
In trial mode, the AI Prompt node is available with the following restrictions:
- Modules using the AI Prompt node cannot be published to the Live environment.
- You have a one-time token cap of 10 million tokens across all applications. This cap is not renewed.
Timeouts
Because LLM operations can take longer than typical module actions, timeouts may occur. The following factors influence how long a prompt takes to process:
- Reasoning level: Higher reasoning settings mean longer processing times.
- Prompt complexity: Complex prompts or large attached files require more processing time.
If you experience timeouts, consider reducing the reasoning level, simplifying the prompt, or splitting large documents into smaller parts.
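Of the mitigations above, splitting a large document is the least obvious, so here is a minimal sketch of one way to do it: breaking text into parts at paragraph boundaries so each part stays under a size budget. The `split_text` helper and the character limit are illustrative assumptions, not a BRYTER feature.

```python
def split_text(text, max_chars=20_000):
    """Split a long document into parts at paragraph boundaries.
    A single paragraph longer than max_chars is kept whole."""
    parts, current = [], ""
    for para in text.split("\n\n"):
        # Start a new part if adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            parts.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        parts.append(current)
    return parts
```

Each part can then be sent through its own AI Prompt node run, keeping individual processing times short.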
Good to know
We offer various options in the US, the UK, and the EU to comply with your data processing requirements, notably the GDPR.
Your data is never used for training purposes: BRYTER does not use your Customer Data, or permit others to use it, to train the machine learning models used to provide the AI Prompt node. Your data is also kept fully separate from any other customer environment.
BRYTER currently uses large language models (LLMs) provided by OpenAI and Anthropic. We continuously evaluate LLM providers and their models to provide the highest-quality experience to our users while enforcing our privacy and security standards. Learn more about data, privacy, and security architecture for BRYTER.