# For LangChain Users
Coming from LangChain (Python)? This guide maps your AI orchestration knowledge to Flow-Like’s visual approach. You’ll find familiar patterns, but with a drag-and-drop interface instead of code.
## Quick Concept Mapping

| LangChain Concept | Flow-Like Equivalent |
|---|---|
| Chain | Flow (sequence of nodes) |
| Agent | Agent node + Tools |
| Tool | Quick Action / Callable flow |
| Memory | Variables + History arrays |
| Prompt Template | Prompt node |
| LLM | Model Provider + Invoke |
| Retriever | Vector Search nodes |
| VectorStore | LanceDB + Embeddings |
| Document Loader | Read nodes + Parse |
| Output Parser | Extract Knowledge |
| Runnable | Node or subflow |
| Callbacks | Console Log (debug mode) |
## Core Patterns Compared

### Chains → Flows

In LangChain, you build chains by composing components:
LangChain:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import OpenAI

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=OpenAI(), prompt=prompt)
result = chain.run("eco-friendly water bottles")
```

Flow-Like:
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Quick Action   │     │     Prompt      │     │   Invoke LLM    │
│   (product)     ├────▶│  "What is..."   ├────▶│     OpenAI      │
└─────────────────┘     └─────────────────┘     └────────┬────────┘
                                                         │
                                                         ▼
                                                  [company_name]
```

The visual flow is the chain: each node is a step in the pipeline.
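The node-as-pipeline-step idea can be sketched in plain Python (illustrative only, no Flow-Like API involved): each node is a function, and each wire is function composition. The three stand-in functions below are hypothetical.

```python
from functools import reduce

def make_flow(*nodes):
    """Compose node functions left-to-right, like wiring nodes in a flow."""
    return lambda value: reduce(lambda acc, node: node(acc), nodes, value)

# Hypothetical stand-ins for the three nodes in the diagram above.
quick_action = lambda product: {"product": product}
prompt = lambda data: f"What is a good name for a company that makes {data['product']}?"
invoke_llm = lambda text: f"<LLM answer to: {text!r}>"

flow = make_flow(quick_action, prompt, invoke_llm)
result = flow("eco-friendly water bottles")
```

Swapping a node or inserting a new one is just re-wiring the composition, which is exactly what dragging a wire does in the editor.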
### Agents → Agent Nodes

LangChain Agents make decisions about tool usage. Flow-Like has dedicated Agent nodes:
LangChain:

```python
from langchain.agents import initialize_agent, Tool, AgentType

tools = [
    Tool(name="Calculator", func=calculator_func, description="..."),
    Tool(name="Search", func=search_func, description="..."),
]

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
result = agent.run("What is 25 * 48, then search for that number")
```

Flow-Like:
```
┌─────────────────────────────────────────────────────────┐
│ Board: CalculatorTool                                   │
│ Quick Action Event ──▶ Calculate ──▶ Return Result      │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│ Board: SearchTool                                       │
│ Quick Action Event ──▶ Web Search ──▶ Return Result     │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│ Main Flow:                                              │
│                                                         │
│ Chat Event ──▶ Make Agent ──▶ Run Agent ──▶ Response    │
│                    │                                    │
│                    ├── Tool: CalculatorTool             │
│                    └── Tool: SearchTool                 │
└─────────────────────────────────────────────────────────┘
```

Each Tool is a separate Board with a Quick Action Event; the agent can call it when needed.
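Under the hood, an agent run is a loop: ask the model, dispatch any tool call it requests, feed the result back, repeat until a final answer. A stdlib-only sketch of that loop (the model here is a scripted stub standing in for a real LLM, and all names are illustrative):

```python
def run_agent(model, tools, question):
    """Minimal tool-calling loop: the model either requests a tool or answers."""
    transcript = [question]
    while True:
        step = model(transcript)                # the "Invoke LLM" step
        if step["type"] == "tool_call":
            tool = tools[step["name"]]          # each tool is its own Board
            transcript.append(tool(step["input"]))
        else:
            return step["answer"]

# Scripted stub standing in for a real model.
def scripted_model(transcript):
    if len(transcript) == 1:
        return {"type": "tool_call", "name": "Calculator", "input": "25 * 48"}
    return {"type": "final", "answer": f"The result is {transcript[-1]}"}

tools = {"Calculator": lambda expr: str(eval(expr))}  # toy calculator tool, demo only
answer = run_agent(scripted_model, tools, "What is 25 * 48?")
```

Both LangChain's agent executors and Flow-Like's Run Agent node implement a loop of this shape; the difference is whether you configure it in code or by wiring tool Boards onto the node.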
### Prompts → Prompt Nodes

LangChain PromptTemplate:
```python
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="""Answer based on this context: {context}

Question: {question}
Answer:""",
)
```

Flow-Like Prompt Node:
```
┌─────────────────────────────────────────────────────────┐
│ Prompt Node                                             │
│                                                         │
│ Template:                                               │
│   "Answer based on this context:                        │
│    {context}                                            │
│                                                         │
│    Question: {question}                                 │
│    Answer:"                                             │
│                                                         │
│ Inputs:                                                 │
│   ◀── context                                           │
│   ◀── question                                          │
│                                                         │
│ Outputs:                                                │
│   ──▶ formatted_prompt                                  │
└─────────────────────────────────────────────────────────┘
```

Variables are auto-extracted from `{variable}` placeholders in your template.
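The auto-extraction is easy to picture: scan the template for `{name}` placeholders and each one becomes an input pin. A minimal sketch of the idea (not Flow-Like's actual implementation):

```python
import re

def extract_placeholders(template: str) -> list[str]:
    """Collect {name} placeholders in first-seen order, without duplicates."""
    return list(dict.fromkeys(re.findall(r"\{(\w+)\}", template)))

template = "Answer based on this context: {context}\n\nQuestion: {question}\nAnswer:"
inputs = extract_placeholders(template)  # ['context', 'question']
```

This mirrors what LangChain does when you omit `input_variables` and let `PromptTemplate.from_template` infer them.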
### Memory → Variables + Arrays

LangChain Memory:
```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)
```

Flow-Like:
```
Variables Panel:
├── chat_history: Array<Message>
└── user_context: String
```

```
Chat Event
    │
    ▼
Get Variable: chat_history
    │
    ▼
Build Messages (system + history + new)
    │
    ▼
Invoke LLM
    │
    ▼
Append to Variable: chat_history
```

Memory persists in Board Variables. Use arrays for conversation history.
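The history-array pattern is easy to mirror in plain Python; the shape below (a list of role/content dicts) is the same one most chat APIs expect. Function and variable names are illustrative:

```python
chat_history = []  # plays the role of the chat_history Board Variable

def build_messages(system_prompt, history, user_message):
    """system + full history + the new user turn, in chat-API order."""
    return [{"role": "system", "content": system_prompt}, *history,
            {"role": "user", "content": user_message}]

def record_turn(history, user_message, assistant_reply):
    """The 'Append to Variable' step: store both sides of the exchange."""
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": assistant_reply})

messages = build_messages("You are helpful.", chat_history, "Hi!")
record_turn(chat_history, "Hi!", "Hello! How can I help?")
```

`ConversationBufferMemory` maintains the same growing list for you; in Flow-Like you own the array and decide when to append, trim, or summarize it.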
### RAG Retrievers → Vector Search

LangChain RAG:
```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQA

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(embedding_function=embeddings)

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
)
```

Flow-Like RAG:
Ingest Pipeline:

```
┌─────────────────────────────────────────────────────────┐
│ Read Documents ──▶ Chunk ──▶ Embed ──▶ Insert to LanceDB│
└─────────────────────────────────────────────────────────┘
```

Query Pipeline:

```
┌─────────────────────────────────────────────────────────┐
│ Chat Event                                              │
│     │                                                   │
│     ▼                                                   │
│ Embed Query                                             │
│     │                                                   │
│     ▼                                                   │
│ Vector Search (LanceDB, k=5)                            │
│     │                                                   │
│     ▼                                                   │
│ Build Context Prompt                                    │
│     │                                                   │
│     ▼                                                   │
│ Invoke LLM ──▶ Response                                 │
└─────────────────────────────────────────────────────────┘
```

### Document Loaders → Read + Parse Nodes

| LangChain Loader | Flow-Like Nodes |
|---|---|
| TextLoader | Read to String |
| PyPDFLoader | Read to String (PDF) |
| CSVLoader | Buffered CSV Reader |
| JSONLoader | Read to String + Parse JSON |
| DirectoryLoader | List Paths + For Each + Read |
| WebBaseLoader | HTTP Request |
| UnstructuredLoader | Read + Chunk |
Example PDF loading:

```
List Paths (*.pdf)
    │
    ▼
For Each path
    │
    ▼
Read to String (path)
    │
    ▼
Chunk Document
    │
    ▼
Embed Document ──▶ Insert to LanceDB
```

### Output Parsers → Extract Knowledge
LangChain Structured Output:
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int
    occupation: str

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate(
    template="Extract person info:\n{text}\n{format_instructions}",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
```

Flow-Like Extract Knowledge:
```
┌─────────────────────────────────────────────────────────┐
│ Extract Knowledge Node                                  │
│                                                         │
│ Schema:                                                 │
│   {                                                     │
│     "name": "string",                                   │
│     "age": "number",                                    │
│     "occupation": "string"                              │
│   }                                                     │
│                                                         │
│ Input:  ◀── document_text                               │
│ Output: ──▶ Person (typed struct)                       │
└─────────────────────────────────────────────────────────┘
```

The node handles prompting, parsing, and validation automatically.
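Conceptually the node does what the Pydantic parser setup above does: turn a schema into format instructions for the model, then parse and validate the reply. A stdlib-only sketch of those two halves (not the node's real implementation; names are illustrative):

```python
import json

schema = {"name": "string", "age": "number", "occupation": "string"}

def format_instructions(schema):
    """The 'prompting' half: ask the model for JSON matching the schema."""
    return "Reply with JSON only, using exactly these fields: " + json.dumps(schema)

def parse_and_validate(reply, schema):
    """The 'parsing + validation' half: decode and check every field exists."""
    data = json.loads(reply)
    missing = [field for field in schema if field not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data

# A model reply we pretend to have received:
person = parse_and_validate('{"name": "Ada", "age": 36, "occupation": "engineer"}', schema)
```

Real implementations also retry on invalid output and coerce types, but the schema-in, typed-struct-out contract is the same.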
### LCEL → Visual Pipelines

LangChain Expression Language (LCEL) chains look like:
```python
chain = prompt | llm | parser
result = chain.invoke({"topic": "AI"})
```

Flow-Like:

```
Input ──▶ Prompt ──▶ LLM ──▶ Parser ──▶ Output
```

The pipe (`|`) becomes a visual wire. Parallel execution uses multiple branches:
LCEL Parallel:

```python
from langchain_core.runnables import RunnableParallel

chain = RunnableParallel(summary=summarize_chain, translation=translate_chain)
```

Flow-Like Parallel:

```
                 ┌──▶ Summarize ──┐
Input ──▶ Split ─┤                ├──▶ Merge ──▶ Output
                 └──▶ Translate ──┘
```

## Common Patterns

### Conversational RAG

LangChain:
```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

memory = ConversationBufferMemory()
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
)
```

Flow-Like:
```
Variables:
├── chat_history: Array<Message>
└── active_context: String
```

```
Chat Event (user_message)
    │
    ├──▶ Embed Query ──▶ Vector Search
    │                         │
    │                         ▼
    │                  Retrieve Context
    │                         │
    └─────────────────────────┤
                              ▼
    Build Messages: [system + context + history + query]
                              │
                              ▼
                         Invoke LLM
                              │
                              ├──▶ Append to history
                              │
                              └──▶ Response
```

### Function Calling
LangChain Tools with OpenAI:

```python
tools = [get_weather_tool, search_tool]
llm_with_tools = llm.bind_tools(tools)
result = llm_with_tools.invoke("What's the weather in Paris?")
```

Flow-Like:

```
Make Agent
    │
    ├── Tool: GetWeather (Board with Quick Action)
    ├── Tool: Search (Board with Quick Action)
    │
    ▼
Run Agent (handles tool calling loop)
    │
    ▼
Final Response
```

### Map-Reduce
LangChain:

```python
from langchain.chains import MapReduceDocumentsChain

map_reduce_chain = MapReduceDocumentsChain(
    llm_chain=map_chain,
    reduce_documents_chain=reduce_chain,
)
```

Flow-Like:

```
Split Documents
    │
    ▼
For Each document
    │
    ▼
Map: Summarize ──▶ Collect Summaries
    │
    ▼
Reduce: Final Summary
```

## Feature Comparison
| Feature | LangChain | Flow-Like |
|---|---|---|
| Interface | Python code | Visual drag-and-drop |
| Learning curve | Python required | Lower barrier |
| Flexibility | Very flexible | Visual constraints |
| Debugging | Print statements | Visual execution trace |
| Versioning | Git | Built-in + Git |
| Deployment | Custom infrastructure | Desktop/Cloud included |
| RAG | Many vector store options | LanceDB native |
| Agents | Multiple implementations | Unified Agent nodes |
| Streaming | Callback-based | Native streaming |
## What Flow-Like Adds

### Visual Debugging

- Watch data flow in real-time
- Inspect any wire’s value
- Step through execution
### Data Processing

- Native DataFusion SQL engine
- Chart visualizations
- ML models (no Python needed)
### Full Application Stack

- UI pages (A2UI)
- Event-driven architecture
- Built-in deployment
### Type Safety

- Strongly typed pins
- Compile-time validation
- Schema enforcement
## Migration Tips

### 1. Think in Nodes, Not Functions

Each LangChain function call becomes a node. Chain composition becomes wiring.
### 2. Use Extract Knowledge Instead of Parsers

The Extract Knowledge node is your Pydantic output parser: just define the schema.
### 3. Boards Are Your Modules

Each Python module can become a Board. Import/export via Quick Actions.
### 4. Variables Replace State

Where you'd use class attributes or memory, use Board Variables.
### 5. Embrace Visual Loops

For Each nodes with visual branches often work better than Python list comprehensions.
## Example Migration

### LangChain: Q&A Bot

Original Python:
```python
from langchain.chains import RetrievalQA
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(
    persist_directory="./db",
    embedding_function=embeddings,
)

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4"),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
    return_source_documents=True,
)

def answer(question: str):
    result = qa({"query": question})
    return result["result"], result["source_documents"]
```

Flow-Like Equivalent:
```
Board: QABot
├── Variables:
│   └── db: LanceDB connection
│
└── Events:
    └── Chat Event (question)
            │
            ▼
        Embed Query (OpenAI)
            │
            ▼
        Vector Search (db, k=5)
            │
            ├──▶ sources: Get Metadata
            │
            ▼
        Build Context Prompt
            │
            ▼
        Invoke LLM (GPT-4)
            │
            ▼
        Return: {answer, sources}
```

Deployment:
- LangChain: Set up FastAPI, Docker, hosting
- Flow-Like: Click “Publish” → Done
### Can I import my existing chains?

Not directly. You'll rebuild them visually, which often simplifies the logic.
### What about custom LLM providers?

Flow-Like supports OpenAI, Anthropic, Google, Ollama, and any OpenAI-compatible API.
### Is performance comparable?

Yes. Flow-Like's runtime is Rust-based and often faster than Python.
### Can I use my existing vector database?

Flow-Like uses LanceDB natively. You can re-embed your documents or connect external databases via SQL.
### What about LangSmith?

Flow-Like has built-in execution tracing. View logs, timing, and data at each node.
## Next Steps

- GenAI Overview – Full AI capabilities guide
- RAG Setup – Vector search and retrieval
- Agents – Building AI agents
- Extraction – Structured data extraction