AI Models & Setup

Before building AI-powered flows, you need to configure at least one AI model. Flow-Like supports a wide range of providers—from cloud APIs like OpenAI and Anthropic to local models via Ollama.

Flow-Like supports these AI providers out of the box:

| Provider | Type | Best For |
| --- | --- | --- |
| OpenAI | Cloud | GPT-4, GPT-4o, o1, o3 models |
| Azure OpenAI | Cloud | Enterprise deployments |
| Anthropic | Cloud | Claude 3.5, Claude 4 models |
| Google | Cloud | Gemini models |
| Ollama | Local | Self-hosted, privacy-first |
| Groq | Cloud | Ultra-fast inference |
| DeepSeek | Cloud | Reasoning models |
| Mistral | Cloud | European AI provider |
| Together AI | Cloud | Open-source models |
| OpenRouter | Cloud | Model aggregator |
| Perplexity | Cloud | Search-enhanced AI |
| Cohere | Cloud | Enterprise NLP |
| HuggingFace | Cloud | Open-source models |
| xAI (Grok) | Cloud | X’s AI models |
| VoyageAI | Cloud | Embedding models |
| Hyperbolic | Cloud | High-performance inference |
| Moonshot | Cloud | Multilingual models |

To add a provider, go to Settings → Profiles in Flow-Like Desktop:

  1. Click your profile avatar in the sidebar
  2. Select Profiles from the menu
  3. Click on your active profile

In the Models section of your profile:

  1. Click Add Provider
  2. Select your provider from the list
  3. Enter your API key (or connection details for local models)
  4. Click Save

After adding a provider, Flow-Like will fetch the available models. Choose which ones you want to use:

  1. Browse the available models
  2. Toggle on the models you want active
  3. Optionally, set a default model for quick access

Once configured, you can use AI models in your flows.

The Find Model node automatically selects the best available model based on your preferences:

┌─────────────────┐     ┌─────────────────┐
│   Make Model    │────▶│   Invoke LLM    │
│   Preferences   │     │                 │
└─────────────────┘     └─────────────────┘

How it works:

  1. Add a Make Preferences node
  2. Set preferences like speed, cost, or capability requirements
  3. Connect to your LLM node
  4. Flow-Like automatically picks the best matching model (see the sketch below)
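
To make the matching concrete, here is a minimal sketch of what preference-based selection could look like. The types and the pickModel helper are hypothetical illustrations, not Flow-Like’s actual API; inside Flow-Like this logic is handled by the Find Model node.

```typescript
// Hypothetical types for illustration only -- not Flow-Like's real API.
interface ModelInfo {
  id: string;
  costPer1kTokens: number; // USD per 1K tokens
  contextSize: number;     // context window in tokens
  speedScore: number;      // 0..1, higher = faster
  qualityScore: number;    // 0..1, higher = more capable
  capabilities: string[];  // e.g. ["vision", "tools"]
}

interface Preferences {
  speed?: number;                  // soft weight, 0..1
  cost?: number;                   // soft weight, 0..1
  quality?: number;                // soft weight, 0..1
  minContextSize?: number;         // hard requirement
  requiredCapabilities?: string[]; // hard requirement
}

// Drop models that miss the hard requirements, then score the rest
// against the soft preferences and return the best match.
function pickModel(models: ModelInfo[], prefs: Preferences): ModelInfo | undefined {
  const candidates = models.filter(
    (m) =>
      m.contextSize >= (prefs.minContextSize ?? 0) &&
      (prefs.requiredCapabilities ?? []).every((c) => m.capabilities.includes(c)),
  );

  let best: ModelInfo | undefined;
  let bestScore = -Infinity;
  for (const m of candidates) {
    const score =
      (prefs.speed ?? 0) * m.speedScore +
      (prefs.quality ?? 0) * m.qualityScore -
      (prefs.cost ?? 0) * m.costPer1kTokens; // cheaper models score higher
    if (score > bestScore) {
      bestScore = score;
      best = m;
    }
  }
  return best;
}
```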

Alternatively, use a specific model by its identifier:

  1. Add your provider’s Prepare Model node (e.g., “Prepare OpenAI”)
  2. Select the specific model from the dropdown
  3. Connect to your LLM invocation node

The Model Preferences system helps you choose the right model dynamically:

| Preference | Description |
| --- | --- |
| Speed | Prioritize fast response times |
| Cost | Prefer cheaper models |
| Quality | Prefer more capable models |
| Context Size | Require a specific context window |
| Capabilities | Require features like vision or tools |

This is especially useful when you want your flows to:

  • Automatically fall back to alternatives if a model is unavailable (see the sketch after this list)
  • Balance cost vs. quality based on the task
  • Use different models in development vs. production
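
As an illustration of the fallback pattern, the sketch below tries an ordered list of model identifiers and moves on when one fails. The invokeLLM stub and the model lists are hypothetical; in Flow-Like itself, fallback behavior comes from the Find Model node and your preferences.

```typescript
// Stub invocation for illustration; in a real flow this is the Invoke LLM node.
const online = new Set(["llama3.2"]); // pretend only llama3.2 is reachable
async function invokeLLM(modelId: string, prompt: string): Promise<string> {
  if (!online.has(modelId)) throw new Error(`${modelId} unavailable`);
  return `response from ${modelId} to "${prompt}"`;
}

// Try each model in order of preference; fall back to the next on failure.
async function invokeWithFallback(modelIds: string[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const id of modelIds) {
    try {
      return await invokeLLM(id, prompt);
    } catch (err) {
      lastError = err; // e.g. rate-limited or offline; try the next model
    }
  }
  throw lastError;
}

// Hypothetical orderings: cheap local models while developing,
// more capable cloud models (with a local fallback) in production.
const devModels = ["llama3.2", "mistral"];
const prodModels = ["gpt-4o", "claude-3-5-sonnet", "llama3.2"];

// Pass devModels instead while developing.
console.log(await invokeWithFallback(prodModels, "Hello"));
```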

For privacy-sensitive applications or offline use, you can run models locally with Ollama:

  1. Download Ollama from ollama.ai
  2. Install and run Ollama on your machine
  3. Pull a model: `ollama pull llama3.2` or `ollama pull mistral`
  4. In Flow-Like, add Ollama as a provider (usually auto-detected; see the check below)
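
If you are unsure whether Ollama is reachable, you can query its local REST API directly. GET /api/tags on the default port 11434 is Ollama’s standard endpoint for listing pulled models; the snippet itself is just a standalone check, not part of Flow-Like.

```typescript
// Check that the local Ollama server is up and list the pulled models.
const res = await fetch("http://localhost:11434/api/tags");
if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);

const { models } = (await res.json()) as {
  models: { name: string; size: number }[];
};

for (const m of models) {
  console.log(`${m.name} (${(m.size / 1e9).toFixed(1)} GB)`);
}
```

If this fails, start the server with `ollama serve` and try again.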

Recommended models:

| Model | Size | Good For |
| --- | --- | --- |
| llama3.2 | 3B/8B | General purpose, fast |
| mistral | 7B | Coding, reasoning |
| phi-4 | 14B | High quality, balanced |
| deepseek-r1 | 7B/32B | Complex reasoning |
| nomic-embed-text | - | Text embeddings |

Choosing a Model

For chat and text generation:

  • Use models with large context windows (32K+ tokens)
  • Enable streaming for a better user experience
  • Consider cost for high-volume applications

For embeddings and semantic search:

  • Pair a generative model with an embedding model
  • Use the same embedding model for indexing and querying (see the sketch below)
  • Consider specialized models like nomic-embed-text
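
The sketch below shows why the same model matters on both sides: vectors from different embedding models live in different spaces, so similarity scores are only meaningful when indexing and querying share one model. The embed helper is a hypothetical stub, not a Flow-Like API; a real one would call your configured embedding provider.

```typescript
const EMBED_MODEL = "nomic-embed-text"; // one model for indexing AND querying

async function embed(model: string, text: string): Promise<number[]> {
  // A real implementation would call the provider for `model`;
  // this stub just maps characters to numbers so the example runs standalone.
  return Array.from(text.slice(0, 8)).map((c) => c.charCodeAt(0) / 128);
}

// Cosine similarity between two vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * (b[i] ?? 0), 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Index and query with the SAME model so the vectors are comparable.
const docVec = await embed(EMBED_MODEL, "Ollama runs models locally.");
const queryVec = await embed(EMBED_MODEL, "Which models run locally?");
console.log(cosine(docVec, queryVec).toFixed(3));
```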

For tool calling:

  • Use models explicitly designed for tool use (GPT-4, Claude 3.5+)
  • Larger models generally perform better with complex tools
  • Test with simpler tasks first

Troubleshooting

If a model isn’t responding:

  • Check that your API key is valid and has credits
  • Verify your internet connection (for cloud providers)
  • For Ollama, ensure the server is running (`ollama serve`)
  • Check the rate limits on your API key

If invocation fails:

  • Try a different model from the same provider
  • For local models, check system resources
  • Verify the model supports your use case (e.g., vision or tools)

If output quality is poor:

  • Check your system prompt and temperature settings
  • Try a more capable model

With your models configured, you’re ready to start building AI-powered flows.