# AI Models & Setup
Before building AI-powered flows, you need to configure at least one AI model. Flow-Like supports a wide range of providers—from cloud APIs like OpenAI and Anthropic to local models via Ollama.
## Supported Providers

Flow-Like supports these AI providers out of the box:
| Provider | Type | Best For |
|---|---|---|
| OpenAI | Cloud | GPT-4, GPT-4o, o1, o3 models |
| Azure OpenAI | Cloud | Enterprise deployments |
| Anthropic | Cloud | Claude 3.5, Claude 4 models |
| Google | Cloud | Gemini models |
| Ollama | Local | Self-hosted, privacy-first |
| Groq | Cloud | Ultra-fast inference |
| DeepSeek | Cloud | Reasoning models |
| Mistral | Cloud | European AI provider |
| Together AI | Cloud | Open-source models |
| OpenRouter | Cloud | Model aggregator |
| Perplexity | Cloud | Search-enhanced AI |
| Cohere | Cloud | Enterprise NLP |
| HuggingFace | Cloud | Open-source models |
| xAI (Grok) | Cloud | X’s AI models |
| VoyageAI | Cloud | Embedding models |
| Hyperbolic | Cloud | High-performance inference |
| Moonshot | Cloud | Multilingual models |
## Adding a Model to Flow-Like

### Step 1: Open Profiles

Go to Settings → Profiles in Flow-Like Desktop:
- Click your profile avatar in the sidebar
- Select Profiles from the menu
- Click on your active profile
### Step 2: Add a Provider

In the Models section of your profile:
- Click Add Provider
- Select your provider from the list
- Enter your API key (or connection details for local models)
- Click Save
### Step 3: Select Models

After adding a provider, Flow-Like will fetch the available models. Choose which ones you want to use:
- Browse the available models
- Toggle on the models you want active
- Optionally, set a default model for quick access
## Using Models in Your Flows

Once configured, you can use AI models in your flows with these nodes:
### Find Model Node

The Find Model node automatically selects the best available model based on your preferences:

```
┌──────────────────┐      ┌──────────────────┐
│    Make Model    │─────▶│    Invoke LLM    │
│   Preferences    │      │                  │
└──────────────────┘      └──────────────────┘
```

How it works:
- Add a Make Preferences node
- Set preferences like speed, cost, or capability requirements
- Connect to your LLM node
- Flow-Like automatically picks the best matching model (conceptually, as in the sketch below)
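Conceptually, that last step filters out models that miss hard requirements and ranks whatever remains. The sketch below is purely illustrative; the `ModelInfo` shape, its field names, and the scoring weights are assumptions for the example, not Flow-Like's actual data model:

```ts
// Illustrative preference matching: NOT Flow-Like's actual implementation.
interface ModelInfo {
  id: string;
  costPerMTok: number;    // USD per million tokens (assumed field)
  latencyMs: number;      // typical time-to-first-token (assumed field)
  contextSize: number;    // context window in tokens
  capabilities: string[]; // e.g. ["vision", "tools"]
}

interface Preferences {
  minContextSize?: number;         // hard requirement
  requiredCapabilities?: string[]; // hard requirement
  weights: { speed: number; cost: number; quality: number };
}

function pickModel(models: ModelInfo[], prefs: Preferences): ModelInfo | undefined {
  // Hard requirements first: drop models that can't do the job at all.
  const eligible = models.filter(
    (m) =>
      m.contextSize >= (prefs.minContextSize ?? 0) &&
      (prefs.requiredCapabilities ?? []).every((c) => m.capabilities.includes(c)),
  );
  // Then rank: faster and cheaper score higher; context size stands in
  // as a crude proxy for quality.
  const score = (m: ModelInfo) =>
    prefs.weights.speed * (1000 / m.latencyMs) +
    prefs.weights.cost * (1 / m.costPerMTok) +
    prefs.weights.quality * Math.log2(m.contextSize);
  return eligible.sort((a, b) => score(b) - score(a))[0];
}
```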
### Direct Model Selection

Alternatively, use a specific model by its identifier:
- Add your provider’s Prepare Model node (e.g., “Prepare OpenAI”)
- Select the specific model from the dropdown
- Connect to your LLM invocation node
## Model Preferences

The Model Preferences system helps you choose the right model dynamically:
| Preference | Description |
|---|---|
| Speed | Prioritize fast response times |
| Cost | Prefer cheaper models |
| Quality | Prefer more capable models |
| Context Size | Require specific context window |
| Capabilities | Require features like vision or tools |
This is especially useful when you want your flows to:
- Automatically fall back to alternatives if a model is unavailable
- Balance cost vs. quality based on the task
- Use different models in development vs. production (see the fallback sketch below)
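The first and last of these combine naturally: keep an ordered candidate list per environment and fall through to the next model when one fails. A minimal sketch, assuming Node 18+ and a local Ollama server; the model IDs and the `invokeWithFallback` helper are illustrative, not part of Flow-Like:

```ts
// Stand-in provider call; here it hits a local Ollama server, but any
// client would do. The endpoint and payload follow Ollama's HTTP API.
async function invokeLLM(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`model ${model} unavailable`);
  return (await res.json()).response;
}

// Quality-first candidates in production, a small local model in dev.
const candidates =
  process.env.NODE_ENV === "production"
    ? ["deepseek-r1:32b", "mistral", "llama3.2"]
    : ["llama3.2"];

async function invokeWithFallback(prompt: string): Promise<string> {
  for (const model of candidates) {
    try {
      return await invokeLLM(model, prompt);
    } catch {
      // Model missing, overloaded, or rate-limited: try the next one.
    }
  }
  throw new Error("no candidate model responded");
}
```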
## Local Models with Ollama

For privacy-sensitive applications or offline use, you can run models locally with Ollama:
### Setting Up Ollama

- Download Ollama from ollama.ai
- Install and run Ollama on your machine
- Pull a model: `ollama pull llama3.2` or `ollama pull mistral`
- In Flow-Like, add Ollama as a provider (usually auto-detected; see the reachability check below)
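Before adding the provider, you can confirm Ollama is up by querying its HTTP API directly (it listens on port 11434 by default). A quick check, assuming Node 18+ for the built-in `fetch`:

```ts
// List the models your local Ollama server has installed.
// GET /api/tags is part of Ollama's standard HTTP API.
const res = await fetch("http://localhost:11434/api/tags");
if (!res.ok) throw new Error("Ollama is not reachable; is `ollama serve` running?");
const { models } = (await res.json()) as { models: { name: string }[] };
console.log(models.map((m) => m.name)); // e.g. ["llama3.2:latest", "mistral:latest"]
```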
### Recommended Local Models

| Model | Size | Good For |
|---|---|---|
| llama3.2 | 1B/3B | General purpose, fast |
| mistral | 7B | Coding, reasoning |
| phi-4 | 14B | High quality, balanced |
| deepseek-r1 | 7B/32B | Complex reasoning |
| nomic-embed-text | - | Text embeddings |
## Model Configuration Tips

### For Chat Applications

- Use models with large context windows (32K+ tokens)
- Enable streaming for a better user experience (see the sketch below)
- Consider cost for high-volume applications
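As a concrete example of the streaming tip above, here is a minimal sketch against Ollama's `/api/chat` endpoint (assumes Node 18+ and a pulled `llama3.2`; for brevity it treats every network chunk as containing whole JSON lines):

```ts
// Stream a chat completion from a local Ollama server, printing tokens
// as they arrive instead of waiting for the full response.
const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  body: JSON.stringify({
    model: "llama3.2",
    messages: [{ role: "user", content: "Say hello in one sentence." }],
    stream: true, // each line of the response body is one JSON chunk
  }),
});
const decoder = new TextDecoder();
for await (const chunk of res.body!) {
  for (const line of decoder.decode(chunk, { stream: true }).split("\n")) {
    if (line.trim()) process.stdout.write(JSON.parse(line).message.content);
  }
}
```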
### For RAG & Knowledge Retrieval

- Pair a generative model with an embedding model
- Use the same embedding model for indexing and querying
- Consider specialized models like `nomic-embed-text` (see the example below)
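For example, routing both indexing and querying through the same `nomic-embed-text` model via Ollama's embeddings endpoint (assumes Node 18+ and `ollama pull nomic-embed-text`; the `embed` helper is illustrative):

```ts
// Embed text with nomic-embed-text via Ollama's /api/embeddings endpoint.
// Index-time and query-time vectors must come from the same model,
// otherwise similarity scores between them are meaningless.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  return (await res.json()).embedding;
}

const docVec = await embed("Flow-Like supports local models via Ollama.");
const queryVec = await embed("Which providers can run offline?");
```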
### For Agents & Tools

- Use models explicitly designed for tool use (GPT-4, Claude 3.5+)
- Larger models generally perform better with complex tools
- Test with simpler tasks first
## Troubleshooting

### “No models available”

- Check your API key is valid and has credits
- Verify your internet connection (for cloud providers)
- For Ollama, ensure it’s running (`ollama serve`)
### “Model not responding”

- Check rate limits on your API key (the backoff sketch below helps with intermittent failures)
- Try a different model from the same provider
- For local models, check system resources
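If rate limits are the culprit, wrapping the call in exponential backoff usually smooths over intermittent 429 errors. A generic sketch; `callModel` stands in for whichever client you use:

```ts
// Retry a model call with exponentially increasing delays.
// Usage: const reply = await withBackoff(() => client.chat(prompt));
async function withBackoff<T>(callModel: () => Promise<T>, retries = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callModel();
    } catch (err) {
      if (attempt >= retries) throw err;
      const delayMs = 500 * 2 ** attempt; // 0.5s, 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```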
### “Unexpected responses”

- Verify the model supports your use case (e.g., vision, tools)
- Check your system prompt and temperature settings
- Try a more capable model
## Next Steps

With your models configured, you’re ready to build:
- Simple chat: Continue to Chat & Conversations
- Knowledge retrieval: Jump to RAG & Knowledge Bases
- Autonomous agents: See AI Agents