
Simple, transparent pricing

Start free. Scale as you grow. Self-host for full control.

Free

$0 forever

For individual developers exploring AI context.

  • 500 requests/month
  • 1M tokens stored
  • 1 seat
  • Community support
  • Self-host available

Most Popular

Pro

$29/month

For power users who need more capacity.

  • 10,000 requests/month
  • 20M tokens stored
  • 1 seat
  • Priority support
  • Usage analytics dashboard
  • API key management

Team

$99/month

For engineering teams building together.

  • 100,000 requests/month
  • 200M tokens stored
  • Up to 20 seats
  • SSO & SAML
  • Dedicated support
  • Shared memory across team
  • Organization management

Enterprise

Custom

For organizations with advanced requirements.

  • Unlimited requests
  • Unlimited tokens
  • Unlimited seats
  • Self-hosted deployment
  • SLA & on-call support
  • Custom integrations
  • Dedicated infrastructure
  • SOC 2 compliance

Frequently asked questions

Can I self-host Memly?

Yes. Memly is BSL-licensed, and every plan includes a license key. Personal self-hosted instances are free forever; team licenses require a paid key.

What happens if I exceed my tier limits?

Memly gracefully degrades to passthrough mode — your LLM calls still work, they just won't include context injection. No hard stops, ever.
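The degradation above can be pictured with a minimal sketch. The function and field names are illustrative assumptions, not Memly's actual implementation; the point is only that an exhausted quota changes *what* is forwarded, never *whether* it is forwarded:

```python
from dataclasses import dataclass

@dataclass
class Quota:
    used: int
    limit: int  # e.g. 500 requests/month on the Free tier

    def exceeded(self) -> bool:
        return self.used >= self.limit

def handle_request(messages: list[dict], quota: Quota) -> list[dict]:
    """Inject stored context while under quota; otherwise pass through unchanged."""
    if quota.exceeded():
        # Passthrough mode: forward the request as-is -- no hard stop.
        return messages
    # Under quota: prepend retrieved context as a system message (placeholder here).
    context = {"role": "system", "content": "<retrieved project context>"}
    return [context] + messages

msgs = [{"role": "user", "content": "Explain this function"}]
print(len(handle_request(msgs, Quota(used=10, limit=500))))   # -> 2 (context injected)
print(len(handle_request(msgs, Quota(used=500, limit=500))))  # -> 1 (passthrough)
```

Either way the call reaches the provider; only the context injection is dropped.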

Which LLM providers are supported?

OpenAI, Anthropic, and Google Gemini. Any provider with an OpenAI-compatible API works out of the box. SSE streaming, function calling, and vision are fully supported.
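Because an OpenAI-compatible proxy speaks the standard chat-completions wire format, switching to it is typically just a base-URL change in your client. A hedged sketch using only Python's standard library; the localhost URL, port, and model name are assumptions for illustration, not Memly's documented endpoint:

```python
import json
import urllib.request

# Hypothetical local endpoint; any OpenAI-compatible base URL is used the same way.
BASE_URL = "http://localhost:8080/v1"

payload = {
    "model": "gpt-4o-mini",  # forwarded to whichever provider is configured
    "stream": True,          # SSE streaming uses the standard `stream` flag
    "messages": [{"role": "user", "content": "Summarize my last session"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer sk-..."},
)
# urllib.request.urlopen(req) would send it; omitted here since no server is running.
```

Official SDKs that accept a custom base URL can point at the same endpoint without further changes.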

Is my code stored on your servers?

In cloud mode, code chunks are stored encrypted in Supabase with row-level security (RLS). For full data sovereignty, use self-hosted mode: your data never leaves your infrastructure.