InvokeLLM

Purpose of the Node

The InvokeLLM node invokes a language model with the provided model and chat history. It handles streaming output by triggering connected nodes on each chunk, and emits the final result once the model invocation completes.

Pins

| Pin Name  | Pin Description                               | Pin Type  | Value Type    |
|-----------|-----------------------------------------------|-----------|---------------|
| Start     | Triggers the model invocation                 | Execution | Normal        |
| Model     | The model to be used for the invocation       | Struct    | Bit           |
| History   | The chat history to be used for the invocation| Struct    | History       |
| On Stream | Triggers on streaming output                  | Execution | Normal        |
| Chunk     | The current streaming chunk                   | Struct    | ResponseChunk |
| Done      | Triggers when the model invocation is complete| Execution | Normal        |
| Result    | The resulting model output                    | Struct    | Response      |
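The pin flow above can be sketched in code. This is a minimal illustration, not the project's actual implementation: the `ResponseChunk` and `Response` stand-ins, the `invoke_llm` function, and its callback signatures are all assumptions made for the example. It only shows the execution order — `On Stream` fires once per chunk with the current `Chunk`, then `Done` fires once with the accumulated `Result`.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

# Hypothetical stand-ins for the node's Struct value types
# (assumptions for illustration, not the real classes).
@dataclass
class ResponseChunk:
    text: str

@dataclass
class Response:
    text: str

def invoke_llm(
    stream: Iterable[str],
    on_stream: Callable[[ResponseChunk], None],
    on_done: Callable[[Response], None],
) -> None:
    """Mimic the pin flow: fire On Stream per chunk, then Done with Result."""
    parts: List[str] = []
    for piece in stream:                   # streaming output from the model
        parts.append(piece)
        on_stream(ResponseChunk(piece))    # On Stream pin + Chunk value
    on_done(Response("".join(parts)))      # Done pin + final Result value

# Nodes wired to On Stream and Done would receive these calls:
chunks_seen: List[str] = []
final: List[str] = []
invoke_llm(
    ["Hel", "lo"],
    lambda c: chunks_seen.append(c.text),
    lambda r: final.append(r.text),
)
print(chunks_seen, final[0])
```

Here the two lambdas play the role of nodes connected to the On Stream and Done execution pins: the first runs twice (once per chunk), the second runs once with the concatenated result.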