Explore every path, find the motherlode

Branching AI chat for discovery and comparison

Branch from any message onto a canvas, compare prompts and models side by side, and collaborate in real time—so you can strike insights faster and avoid dead ends.

4.9/5
Trusted by 200+ teams

Free while in beta. No credit card required.

See it in action
No credit card. Works with your favorite providers.
Works with your stack
OpenAI
Anthropic
Google
Azure
Mistral
Ollama
Slack
GitHub
How it works

Start. Branch. Compare.

Try alternatives in parallel and see them side by side—all in one canvas.

  1. Lode — New canvas

    Start a canvas

    Kick off a chat from any question or doc. Your context stays put.

  2. Lode — Branching

    Branch anywhere

    Fork any message to try different prompts or models—no copy‑paste, no mess.

  3. Lode — Compare

    Compare side by side

    Scan results next to each other to spot the best direction fast.

  • Run options in parallel, not sequentially
  • Keep paths focused and on‑topic
  • Swap models per branch
  • Save time and tokens
Results

Real teams find better answers faster

Outcomes from using branching and side‑by‑side comparison in Lode

Avg. 2–3x options tested per question
60% faster iteration
35% fewer tokens
60% faster prompt iteration

“Branching let us try three approaches at once and move forward in minutes.”

Jamie K.
PM
ARCADIA
35% fewer tokens per project

“We compare models side by side and drop bad directions early.”

Priya S.
Content Lead
NORTHBEAM
Cleaner threads, fewer wrong turns

“Focused paths kept us on topic. No more overwriting the main chat.”

Leo M.
Data Ops
HELIX
How a product team cut iteration time by 60%

They branched from key messages, tried alternate prompts and models in parallel, and scanned results side by side.

  • Branched 4 variations in under 2 minutes
  • Compared by model and prompt; continued from the strongest path
  • Reduced back‑and‑forth rewrites; shipped same day
Use cases

Where Lode shines

The same motions—branch, compare, co‑prompt—adapt to your workflow. Pick a lane and see it click.

Lode — Prompt variations
Prompt engineering

Find prompts that actually work

Run 3–5 variations in parallel and converge on the one that holds up.

  • Branch alternatives from any message
  • A/B prompts with the same context
  • Pick a winner and keep going
2–4x faster iteration • Fewer wrong turns
Core features

The essentials that keep you moving

Scroll to see how Lode helps you branch, compare, and collaborate—without breaking flow.

Lode — Branching on canvas
Canvas · Step 1

Branch from any message

Explore tangents without polluting the main thread. Shared context keeps each path focused.

  • Fork quickly
  • Scope prompts per branch
  • Tag for traceability
No copy‑paste • No drift
Compare · Step 2

See options side by side

Pin branches into a split view. Scan clarity, depth, and gaps at a glance.

  • Compare by prompt/model
  • Pick a winner and continue
Same context • A/B made easy
Multiplayer · Step 3

Prompt together in real time

See presence and cursors as teammates co‑prompt. No overwrites, just momentum.

  • Live cursors & presence
  • Comment and mention
Live sync
Focus · Step 4

Scoped prompts per branch

Keep each line of exploration on‑topic with branch‑level system prompts and tags.

  • Branch‑level system prompts
  • Tags and naming
Stay on topic
Models · Step 5

Bring your own keys

Swap providers freely. Evaluate models per branch and keep control over cost and data.

  • OpenAI, Anthropic, Google, Mistral
  • Local via Ollama
Model‑agnostic • Cost control
Models & providers

Works with the leaders

Choose providers and models per branch. Keep comparisons fair with the same context.

OpenAI
Anthropic
Google Gemini
Meta Llama
xAI
DeepSeek
Mistral AI
Qwen
NVIDIA
GPT‑4o
Claude 3.5
Gemini 1.5
Llama 3.1
Grok‑2
DeepSeek‑V3
Mistral Large
Command R+
2–4x
Faster iteration
25–50%
Tokens saved
+30%
On‑topic accuracy
What teams say

Momentum you can feel

“We found a winning prompt in one morning instead of a week. The side‑by‑side view is a cheat code.”
PM, SaaS
“Branches keep experiments tidy. It finally feels safe to go down rabbit holes.”
Head of Research, Fintech
“Multiplayer prompting made our workshops twice as productive.”
Director, Innovation Lab
“Dropping weak paths early saved us 40% on tokens last quarter.”
Ops Lead, Support
FAQ

Answers to common questions

Do you store my prompts and model responses?

Yes, to power history, branches, compare views, and audit logs. You control retention; for sensitive work, you can export or clear data on a schedule.

How do API keys work?

Use your own provider keys. Keys are encrypted at rest and never shared with teammates unless explicitly configured.

Which models are supported?

Most major providers: OpenAI, Anthropic, Google, Azure OpenAI, Mistral, and local via Ollama. The list grows over time.

Can I export my branches?

Yes. Export threads and branches as JSON or Markdown. API access for automation is planned.
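The export structure isn't specified beyond "JSON or Markdown", but as a rough illustration, a branch export might take a shape like the following TypeScript sketch. All names here are hypothetical, not Lode's actual schema:

```typescript
// Hypothetical shape of an exported branch; field names are illustrative only.
interface ExportedMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  model?: string; // model that produced this message, if any
}

interface ExportedBranch {
  id: string;
  parentMessageId: string | null; // null for the root thread
  systemPrompt?: string;          // branch-level scoped prompt
  tags: string[];
  messages: ExportedMessage[];
}

// A minimal branch serialized to JSON for downstream tooling.
const branch: ExportedBranch = {
  id: "b1",
  parentMessageId: null,
  tags: ["prompt-v2"],
  messages: [{ id: "m1", role: "user", content: "Summarize the doc" }],
};

console.log(JSON.stringify(branch, null, 2));
```

Because the export is plain JSON, it can be diffed, archived, or fed into evaluation scripts without any Lode-specific tooling.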

Does Lode support teams and permissions?

Yes. Role‑based access, project‑level permissions, and audit history help keep work compliant.

Ready to strike gold?

Start branching, comparing, and collaborating today. Find better prompts with fewer tokens.

  • Unlimited branches
  • Real‑time multiplayer
  • Model agnostic
Get started free
No credit card • Cancel anytime