
Business Intelligence

Flow-Like provides a complete Business Intelligence toolkit—connect to any data source, query with SQL, build interactive dashboards, and share insights across your organization.

| Solution | Description |
| --- | --- |
| Executive Dashboards | KPIs, trends, real-time metrics |
| Operational Reports | Daily/weekly/monthly business reports |
| Self-Service Analytics | Let users explore data themselves |
| Embedded Analytics | Add BI to your existing apps |
| Automated Reports | Scheduled delivery via email/Slack |
| Ad-Hoc Analysis | Quick data exploration and insights |
┌─────────────────────────────────────────────────────────┐
│ Data Sources │
├─────────────────────────────────────────────────────────┤
│ Databases │ Files │ APIs │ Data Lakes │
│ PostgreSQL │ CSV │ REST │ Delta Lake │
│ MySQL │ Excel │ GraphQL │ Iceberg │
│ ClickHouse │ Parquet │ Webhooks │ S3/Azure │
└───────────────┴────────────┴────────────┴──────────────┘
┌─────────────────────────────────────────────────────────┐
│ DataFusion SQL Engine │
│ • Federated queries across sources │
│ • Real-time aggregations │
│ • Window functions & analytics │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│ Visualization Layer │
│ • 25+ chart types (Nivo) │
│ • Interactive tables │
│ • Filters & drill-downs │
│ • Export to PDF/CSV │
└─────────────────────────────────────────────────────────┘

Connect to production databases or data warehouses:

Create DataFusion Session
Register PostgreSQL ("sales_db", connection_string)
Register ClickHouse ("analytics_dw", connection_string)
Ready to query both with SQL

Supported databases:

  • PostgreSQL, MySQL, SQLite
  • ClickHouse, DuckDB
  • Oracle, SQL Server (via connectors)

Query files directly as tables:

Register CSV ("monthly_targets", /data/targets.csv)
Register Parquet ("transactions", s3://bucket/transactions/)
SQL: "SELECT * FROM monthly_targets t
      JOIN transactions tx ON t.month = tx.month"
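The file-as-table pattern above can be sketched with Python's stdlib `sqlite3` standing in for DataFusion (Flow-Like registers the file directly; here the CSV is loaded into an in-memory table first, and the sample data is illustrative):

```python
# Sketch: "register a file, then query it as a table", emulated with
# stdlib sqlite3 + csv. Table/column names mirror the example above;
# the data itself is made up.
import csv, io, sqlite3

targets_csv = "month,target\n2024-01,100000\n2024-02,120000\n"
tx_rows = [("2024-01", 95000), ("2024-01", 8000), ("2024-02", 110000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_targets (month TEXT, target REAL)")
conn.executemany("INSERT INTO monthly_targets VALUES (?, ?)",
                 [tuple(r) for r in csv.reader(io.StringIO(targets_csv))][1:])
conn.execute("CREATE TABLE transactions (month TEXT, amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)", tx_rows)

# Once registered, the file behaves like any other table in a join.
rows = conn.execute("""
    SELECT t.month, t.target, SUM(tx.amount) AS actual
    FROM monthly_targets t
    JOIN transactions tx ON t.month = tx.month
    GROUP BY t.month, t.target
    ORDER BY t.month
""").fetchall()
print(rows)  # [('2024-01', 100000.0, 103000.0), ('2024-02', 120000.0, 110000.0)]
```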

Connect to modern data platforms:

Register Delta Lake ("customers", s3://lake/customers)
Register Iceberg ("orders", s3://lake/orders)
Query with time travel, partitioning, schema evolution

Pull real-time data from APIs:

HTTP Request (CRM API) ──▶ Parse JSON ──▶ Register as Table
Join with database data in single query
-- Aggregation: revenue by region
SELECT
  region,
  COUNT(*) as order_count,
  SUM(amount) as total_revenue,
  AVG(amount) as avg_order_value
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY region
ORDER BY total_revenue DESC

-- Trend: week-over-week revenue change
SELECT
  DATE_TRUNC('week', order_date) as week,
  SUM(amount) as weekly_revenue,
  SUM(amount) - LAG(SUM(amount)) OVER (ORDER BY DATE_TRUNC('week', order_date)) as wow_change
FROM orders
GROUP BY DATE_TRUNC('week', order_date)
ORDER BY week

-- Ranking: product rank and share within category
SELECT
  product_name,
  category,
  revenue,
  RANK() OVER (PARTITION BY category ORDER BY revenue DESC) as category_rank,
  revenue / SUM(revenue) OVER (PARTITION BY category) * 100 as pct_of_category
FROM product_sales
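The ranking query runs unchanged against any SQL engine with window functions; a runnable sketch using stdlib `sqlite3` (SQLite 3.25+ ships window functions, and the product rows here are made up):

```python
# Sketch: RANK() and percent-of-category, as in the query above,
# executed against sample data in stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_sales (product_name TEXT, category TEXT, revenue REAL)")
conn.executemany("INSERT INTO product_sales VALUES (?, ?, ?)", [
    ("Widget", "Hardware", 300.0),
    ("Gadget", "Hardware", 100.0),
    ("Suite",  "Software", 500.0),
])

rows = conn.execute("""
    SELECT product_name,
           category,
           revenue,
           RANK() OVER (PARTITION BY category ORDER BY revenue DESC) AS category_rank,
           revenue / SUM(revenue) OVER (PARTITION BY category) * 100 AS pct_of_category
    FROM product_sales
    ORDER BY category, category_rank
""").fetchall()
for r in rows:
    print(r)
# ('Widget', 'Hardware', 300.0, 1, 75.0)
# ('Gadget', 'Hardware', 100.0, 2, 25.0)
# ('Suite', 'Software', 500.0, 1, 100.0)
```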

Query across different databases in one statement:

SELECT
  c.name as customer,
  c.segment,
  SUM(o.amount) as total_spent,
  COUNT(DISTINCT o.id) as order_count
FROM postgres_crm.customers c
JOIN clickhouse_dw.orders o ON c.id = o.customer_id
WHERE o.order_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY c.name, c.segment
ORDER BY total_spent DESC
LIMIT 100
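The "one query, two databases" idea can be emulated locally with SQLite's `ATTACH`: one connection, two databases, a single join. This is only an analogy for DataFusion's federation (the schema names and data are illustrative):

```python
# Sketch: a cross-database join, emulated with SQLite ATTACH.
# In Flow-Like/DataFusion the same shape joins e.g. Postgres + ClickHouse.
import sqlite3

conn = sqlite3.connect(":memory:")       # stands in for postgres_crm
conn.execute("ATTACH ':memory:' AS dw")  # stands in for clickhouse_dw
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, segment TEXT)")
conn.execute("CREATE TABLE dw.orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Acme", "enterprise"), (2, "Bit Co", "smb")])
conn.executemany("INSERT INTO dw.orders VALUES (?, ?, ?)",
                 [(1, 1, 900.0), (2, 1, 100.0), (3, 2, 50.0)])

rows = conn.execute("""
    SELECT c.name, c.segment, SUM(o.amount) AS total_spent,
           COUNT(DISTINCT o.id) AS order_count
    FROM customers c
    JOIN dw.orders o ON c.id = o.customer_id
    GROUP BY c.name, c.segment
    ORDER BY total_spent DESC
""").fetchall()
print(rows)  # [('Acme', 'enterprise', 1000.0, 2), ('Bit Co', 'smb', 50.0, 1)]
```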
WITH first_purchase AS (
  SELECT
    customer_id,
    DATE_TRUNC('month', MIN(order_date)) as cohort_month
  FROM orders
  GROUP BY customer_id
),
monthly_activity AS (
  SELECT
    customer_id,
    DATE_TRUNC('month', order_date) as activity_month
  FROM orders
)
SELECT
  fp.cohort_month,
  DATE_DIFF('month', fp.cohort_month, ma.activity_month) as months_since_first,
  COUNT(DISTINCT ma.customer_id) as active_customers
FROM first_purchase fp
JOIN monthly_activity ma ON fp.customer_id = ma.customer_id
GROUP BY fp.cohort_month, DATE_DIFF('month', fp.cohort_month, ma.activity_month)
ORDER BY fp.cohort_month, months_since_first
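The same cohort CTE can be exercised in stdlib `sqlite3`; since SQLite lacks `DATE_TRUNC`/`DATE_DIFF`, the sketch substitutes `strftime` and explicit month arithmetic (sample orders are made up):

```python
# Sketch: the cohort-retention CTE above, adapted to SQLite's date functions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2024-01-05"), (1, "2024-02-10"), (1, "2024-03-01"),
    (2, "2024-02-20"), (2, "2024-02-25"),
])

rows = conn.execute("""
    WITH first_purchase AS (
        SELECT customer_id, strftime('%Y-%m', MIN(order_date)) AS cohort_month
        FROM orders GROUP BY customer_id
    ),
    monthly_activity AS (
        SELECT DISTINCT customer_id, strftime('%Y-%m', order_date) AS activity_month
        FROM orders
    )
    SELECT fp.cohort_month,
           -- month difference computed by hand: (year delta) * 12 + (month delta)
           (CAST(substr(ma.activity_month, 1, 4) AS INTEGER) -
            CAST(substr(fp.cohort_month, 1, 4) AS INTEGER)) * 12 +
           (CAST(substr(ma.activity_month, 6, 2) AS INTEGER) -
            CAST(substr(fp.cohort_month, 6, 2) AS INTEGER)) AS months_since_first,
           COUNT(DISTINCT ma.customer_id) AS active_customers
    FROM first_purchase fp
    JOIN monthly_activity ma ON fp.customer_id = ma.customer_id
    GROUP BY fp.cohort_month, months_since_first
    ORDER BY fp.cohort_month, months_since_first
""").fetchall()
print(rows)  # [('2024-01', 0, 1), ('2024-01', 1, 1), ('2024-01', 2, 1), ('2024-02', 0, 1)]
```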
Page: /analytics/sales
├── Grid (3 columns)
│ ├── KPI Card: Total Revenue
│ ├── KPI Card: Orders Today
│ └── KPI Card: Conversion Rate
├── Row
│ ├── Line Chart: Revenue Trend (60%)
│ └── Pie Chart: Revenue by Region (40%)
├── Row
│ ├── Bar Chart: Top Products
│ └── Heatmap: Sales by Day/Hour
└── Table: Recent Orders (full width)
Card Component
├── Text (label): "Total Revenue"
├── Text (value): {$metrics.total_revenue | currency}
├── Text (change): "↑ {$metrics.revenue_change}% vs last month"
├── Sparkline: {$metrics.revenue_trend}
└── Style: Conditional color based on change
Row (filters)
├── Select: Date Range
│ └── Options: Today, 7 days, 30 days, 90 days, YTD, Custom
├── Select: Region
│ └── Options: All, North, South, East, West
├── Select: Product Category
│ └── Options: (dynamic from data)
└── Button: Apply Filters
└── onClick: Refresh dashboard data

Filter flow:

Filter Change
Update filter variables
Re-run SQL queries with filters
Update all visualizations
Bar Chart: Revenue by Category
└── onClick (category)
Navigate to /analytics/category/{category_id}
Category detail page with:
├── Products in category
├── Trend over time
└── Customer breakdown
**Comparison**

| Chart | Use Case |
| --- | --- |
| Bar | Compare categories |
| Grouped Bar | Compare categories across dimensions |
| Stacked Bar | Show composition within categories |
| Bullet | Actual vs target comparison |

**Trends**

| Chart | Use Case |
| --- | --- |
| Line | Time series trends |
| Area | Cumulative trends |
| Stream | Category trends over time |
| Bump | Ranking changes over time |

**Distribution**

| Chart | Use Case |
| --- | --- |
| Histogram | Value distribution |
| Box Plot | Statistical distribution |
| Scatter | Correlation analysis |
| Swarm Plot | Dense distributions |

**Proportions**

| Chart | Use Case |
| --- | --- |
| Pie | Simple proportions |
| Donut | Proportions with center metric |
| Treemap | Hierarchical proportions |
| Sunburst | Multi-level hierarchy |
| Waffle | Discrete proportions |

**Flow & Relationships**

| Chart | Use Case |
| --- | --- |
| Sankey | Flow between categories |
| Chord | Relationships between groups |
| Network | Connection networks |

**Specialized**

| Chart | Use Case |
| --- | --- |
| Heatmap | 2D value density |
| Calendar | Daily patterns |
| Radar | Multi-variable comparison |
| Funnel | Conversion funnels |
| Gauge | Single metric progress |
Table Component
├── data: {$query_results}
├── columns:
│ ├── customer (sortable, filterable)
│ ├── region (sortable, filterable, groupable)
│ ├── revenue (sortable, format: currency)
│ ├── orders (sortable)
│ └── last_order (sortable, format: date)
├── Features:
│ ├── Sorting (click headers)
│ ├── Filtering (column filters)
│ ├── Pagination (50 per page)
│ ├── Column resizing
│ ├── Column reordering
│ └── Export (CSV, Excel)
└── Actions:
└── Row click → Customer detail

Create pivot-like views with SQL:

SELECT
  region,
  SUM(CASE WHEN month = 'Jan' THEN revenue END) as jan,
  SUM(CASE WHEN month = 'Feb' THEN revenue END) as feb,
  SUM(CASE WHEN month = 'Mar' THEN revenue END) as mar,
  SUM(revenue) as total
FROM monthly_sales
GROUP BY region
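The conditional-aggregation pivot above is portable SQL; here it is run end to end with stdlib `sqlite3` (sample rows are made up, and months absent for a region come back as `NULL`):

```python
# Sketch: pivot via SUM(CASE WHEN ...), as in the query above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany("INSERT INTO monthly_sales VALUES (?, ?, ?)", [
    ("North", "Jan", 100.0), ("North", "Feb", 150.0),
    ("South", "Jan", 80.0),  ("South", "Mar", 120.0),
])

rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN month = 'Jan' THEN revenue END) AS jan,
           SUM(CASE WHEN month = 'Feb' THEN revenue END) AS feb,
           SUM(CASE WHEN month = 'Mar' THEN revenue END) AS mar,
           SUM(revenue) AS total
    FROM monthly_sales
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)  # [('North', 100.0, 150.0, None, 250.0), ('South', 80.0, None, 120.0, 200.0)]
```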
Scheduled Event (Monday 8am)
Run analytics queries
Generate visualizations
Render report template
Convert to PDF
├──▶ Email to stakeholders
├──▶ Upload to SharePoint
└──▶ Post to Slack (#reports)
# Weekly Sales Report
**Period:** {start_date} to {end_date}
**Generated:** {generated_at}
## Executive Summary
- Total Revenue: {total_revenue}
- Orders: {order_count}
- New Customers: {new_customers}
## Revenue Trend
{revenue_chart}
## Top Performing Products
{products_table}
## Regional Breakdown
{region_chart}
## Key Insights
{ai_generated_insights}
Query results
Invoke LLM
├── prompt: "Analyze this sales data and provide 3-5 key insights:
│ {data_summary}
│ Focus on: trends, anomalies, opportunities"
└── model: GPT-4
Formatted insights for report

Let business users build queries visually:

Page: /analytics/explorer
├── Data Source Selector
│ └── Available tables and columns
├── Query Builder
│ ├── Select columns (drag & drop)
│ ├── Add filters (visual builder)
│ ├── Group by (drag columns)
│ └── Sort order
├── Preview
│ └── Live SQL preview
├── Results
│ ├── Table view
│ └── Chart view (select type)
└── Actions
├── Save as Report
├── Export Data
└── Schedule
Board Variables:
├── saved_reports: Array<Report>
└── shared_reports: Array<Report>
Quick Action: Save Report
├── name: user_input
├── query: current_query
├── filters: current_filters
├── visualization: current_chart_config
└── permissions: private/team/public
Report Sharing
├── Private (only creator)
├── Team (specific team members)
├── Public (anyone with link)
└── Embedded (iframe in other apps)
Page Load
Initial data fetch
Set up polling (every 30 seconds)
On poll:
├── Fetch updated metrics
├── Compare with previous
├── Update visualizations
└── Highlight changes (animations)
Scheduled Event (every 10 seconds)
Query real-time metrics:
├── Active users
├── Orders in progress
├── Current revenue
└── System health
Update dashboard variables
UI reactively updates
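The "compare with previous, highlight changes" step of the polling loop reduces to a small diff over the metrics dictionary. A minimal sketch (the metric names and the `diff_metrics` helper are illustrative, not part of Flow-Like's API):

```python
# Sketch: change detection between two polls. The dashboard would animate
# only the keys returned here; fetching is left out of the sketch.
def diff_metrics(previous: dict, current: dict) -> dict:
    """Return only the metrics whose values changed since the last poll."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

last = {"active_users": 42, "orders_in_progress": 7, "current_revenue": 1000.0}
now  = {"active_users": 45, "orders_in_progress": 7, "current_revenue": 1250.0}
changed = diff_metrics(last, now)
print(changed)  # {'active_users': 45, 'current_revenue': 1250.0}
```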

Define business metrics once, use everywhere:

Board: MetricsDefinitions
├── Variables:
│ └── metric_definitions: {
│ "revenue": "SUM(order_amount)",
│ "aov": "AVG(order_amount)",
│ "conversion_rate": "COUNT(orders) / COUNT(visits) * 100",
│ "customer_lifetime_value": "SUM(amount) / COUNT(DISTINCT customer_id)"
│ }
└── Quick Action: Calculate Metric (metric_name, filters)
Build SQL with metric definition
Execute and return result
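A "Calculate Metric" action of this shape can be sketched as a helper that expands a shared definition into SQL. The function name, definitions, and table are illustrative (matching the board above), and it runs here against stdlib `sqlite3`:

```python
# Sketch: one metric definition, reused with different filters.
import sqlite3

METRIC_DEFINITIONS = {
    "revenue": "SUM(order_amount)",
    "aov": "AVG(order_amount)",
}

def calculate_metric(conn, metric_name: str, where: str = "1=1") -> float:
    # Definitions come from a trusted board variable; in production,
    # user-supplied filter values should be parameterized, not interpolated.
    expr = METRIC_DEFINITIONS[metric_name]
    return conn.execute(f"SELECT {expr} FROM orders WHERE {where}").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_amount REAL, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(100.0, "North"), (300.0, "North"), (50.0, "South")])

print(calculate_metric(conn, "revenue"))                  # 450.0
print(calculate_metric(conn, "aov", "region = 'North'"))  # 200.0
```

Defining the expression once means every dashboard, report, and scheduled job computes "revenue" identically.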
-- Revenue with returns adjustment
SELECT
  date,
  gross_revenue,
  returns,
  gross_revenue - returns as net_revenue,
  (gross_revenue - returns) / gross_revenue * 100 as net_revenue_pct
FROM daily_sales

-- Customer health score
SELECT
  customer_id,
  recency_score * 0.3 +
  frequency_score * 0.3 +
  monetary_score * 0.4 as health_score
FROM customer_rfm
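The health-score query is just a weighted sum; the same formula as a plain function (the weights mirror the query above, and the sample RFM scores are made up):

```python
# Sketch: the weighted RFM health score from the query above.
def health_score(recency: float, frequency: float, monetary: float) -> float:
    return recency * 0.3 + frequency * 0.3 + monetary * 0.4

print(health_score(80, 60, 90))  # 78.0
```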
Board: SalesDashboard
├── Variables:
│ ├── date_range: { start, end }
│ ├── region_filter: "all"
│ ├── metrics: {}
│ └── chart_data: {}
└── Init Event
Create DataFusion Session
Register data sources
├──▶ Query: KPI Metrics
│ │
│ ▼
│ Set metrics variable
├──▶ Query: Revenue Trend
│ │
│ ▼
│ Set chart_data.trend
├──▶ Query: Revenue by Region
│ │
│ ▼
│ Set chart_data.regions
└──▶ Query: Top Products
Set chart_data.products
Page Layout:
├── Header
│ ├── Title: "Sales Analytics"
│ ├── Date Range Picker
│ └── Region Filter
├── KPI Row
│ ├── Card: Revenue ({metrics.revenue})
│ ├── Card: Orders ({metrics.orders})
│ ├── Card: AOV ({metrics.aov})
│ └── Card: Conversion ({metrics.conversion})
├── Charts Row
│ ├── LineChart: Revenue Trend (chart_data.trend)
│ └── PieChart: By Region (chart_data.regions)
└── Table: Top Products (chart_data.products)
  • Use aggregations in the database, not in Flow-Like
  • Filter early, aggregate late
  • Use appropriate indexes
  • Cache frequently-used data
  • Start with KPIs (what matters most)
  • Progressive disclosure (summary → detail)
  • Consistent formatting (dates, currencies)
  • Clear labels (no jargon)
  • Limit data returned (pagination, top N)
  • Pre-aggregate when possible
  • Cache slowly-changing data
  • Use incremental updates
  • Document metric definitions
  • Version control reports
  • Access control on sensitive data
  • Audit trail for changes
<iframe
  src="https://your-flow-like-app/embed/dashboard/sales"
  width="100%"
  height="600"
></iframe>
Dashboard data
Export as:
├── CSV (for Excel)
├── Parquet (for data tools)
├── JSON (for APIs)
└── PDF (for sharing)

Flow-Like can feed data to traditional BI tools:

Scheduled Event
Run analytics queries
Write to PostgreSQL (analytics schema)
Tableau/PowerBI reads from PostgreSQL

**Can Flow-Like replace our BI tool?**

For many use cases, yes. Flow-Like excels at custom dashboards and automated reporting. Traditional BI tools may still be better for very complex ad-hoc exploration.

**Is DataFusion fast enough?**

DataFusion is highly optimized. For most queries, performance is excellent. Very large datasets may benefit from a dedicated data warehouse.

**Can non-technical users build reports?**

Yes. The visual query builder and pre-built templates make it accessible, and power users can write SQL directly.

**How do we handle large data volumes?**

Use aggregations, partitioning, and caching. Connect to a data warehouse for very large volumes.