Letta Code: A Memory-First Coding Agent
Letta Code is a memory-first coding agent, designed for working with agents that learn over time. With today's coding agents, each interaction happens in an independent session. Letta Code is instead built around long-lived agents that persist across sessions and improve with use: each session is tied to a persisted agent that learns. Letta Code is also the #1 model-agnostic OSS harness on Terminal-Bench, achieving performance comparable to the harnesses built by LLM providers (Claude Code, Gemini CLI, Codex CLI) on their own models.
Continual Learning & Memory for Coding Agents
Coding agents accumulate valuable experience as they work: they receive the user's preferences and feedback, read significant parts of the codebase, and observe the outcomes of actions such as running scripts or commands. Today, that experience is largely wasted. Letta agents learn from experience through:
- Agentic context engineering
- Long-term memory
- Skill learning
The more you work with an agent, the more context and memory it accumulates, and the better it becomes.
Memory Initialization
When you get started with Letta Code, you can run the /init command to have your agent learn about your existing project. This triggers the agent to run deep research on your local codebase, forming memories and rewriting its own system prompt (through memory blocks) as it learns.
Your agent continues to learn automatically, but you can also explicitly trigger it to reflect and learn with the /remember command.
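To make the idea concrete, here is a minimal conceptual sketch (TypeScript, with illustrative field names rather than the actual Letta schema) of the kind of memory block /init might leave behind after researching a codebase:

```typescript
// Conceptual sketch (not the actual Letta schema): a memory block is a
// labeled, size-bounded section of the agent's system prompt that the
// agent can rewrite as it learns.
interface MemoryBlock {
  label: string;   // e.g. "project", "user_preferences", "conventions"
  value: string;   // the current learned content, compiled into the prompt
  limit: number;   // character budget the agent must stay within
}

// What /init might leave behind after researching a codebase (illustrative):
const projectBlock: MemoryBlock = {
  label: "project",
  value: [
    "Monorepo with pnpm workspaces; API server lives in apps/api.",
    "Tests run with `pnpm test`; migrations live in packages/db/migrations.",
  ].join("\n"),
  limit: 4000,
};

// Because the block's value is compiled into the system prompt, the agent
// starts every future session already knowing these facts.
```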
Skill Learning
Many of the tasks we work on with coding agents are repeated or follow similar patterns - for example, making API changes or running DB migrations. Once you've worked with an agent to coach it through a complex task, you can trigger it to learn a skill from that experience, so the agent itself (or other agents) can reference the skill for similar tasks in the future.
Skill learning can dramatically improve performance on similar tasks in the future, as we showed with recent results on Terminal-Bench.
On our team, some skills that agents have contributed (with the help of human engineers) are:
- Generating DB migrations on schema changes
- Creating PostHog dashboards with the PostHog CLI
- Best practices for API changes
Since skills are simply .md files, they can be managed in git repositories for versioning - or even used by other coding agents that support skills.
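Because skills live on disk as ordinary Markdown, any tooling can discover and reuse them. As a rough illustration (the skills/ directory name and loader are hypothetical, not Letta Code's actual implementation), a few lines of TypeScript are enough to collect skill files so their contents can be put in front of an agent:

```typescript
// Hypothetical sketch: gather skill files from a skills/ directory so their
// contents can be surfaced to an agent. Directory layout and naming are
// illustrative, not Letta Code's actual implementation.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface Skill {
  name: string;    // derived from the file name, e.g. "db-migrations"
  content: string; // the markdown body the agent reads when the skill applies
}

function loadSkills(dir: string): Skill[] {
  return readdirSync(dir)
    .filter((file) => file.endsWith(".md"))
    .map((file) => ({
      name: file.replace(/\.md$/, ""),
      content: readFileSync(join(dir, file), "utf8"),
    }));
}

// Usage: skills checked into the repo are versioned with git like any other file.
const skills = loadSkills("skills");
console.log(skills.map((s) => s.name)); // e.g. ["db-migrations", "posthog-dashboards"]
```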
Persisted State
Agents can also look up past conversations (or even other agents' conversations) through the Letta API. The built-in /search command lets you easily search through messages, so you can find the agent you worked with on a given task. The Letta API supports vector, full-text, and hybrid search over messages and available tools.
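As a rough sketch of what programmatic access could look like, the snippet below issues a hybrid search over messages; the endpoint path, request fields, and environment variables are placeholders rather than the documented Letta API, so check the API reference for the real route:

```typescript
// Rough illustration only: the endpoint path and request/response shapes are
// placeholders, not the documented Letta API. See the API reference for the
// real message-search route and parameters.
const LETTA_BASE_URL = process.env.LETTA_BASE_URL ?? "http://localhost:8283";

async function searchMessages(query: string): Promise<unknown> {
  const res = await fetch(`${LETTA_BASE_URL}/v1/messages/search`, { // placeholder path
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LETTA_API_KEY ?? ""}`,
    },
    // Hypothetical body: ask for hybrid (vector + full-text) search.
    body: JSON.stringify({ query, search_mode: "hybrid", limit: 10 }),
  });
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}

// e.g. find which agent you debugged the auth flow with:
searchMessages("auth flow bug").then(console.log);
```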
Letta Code is the #1 model-agnostic OSS coding harness
Letta Code adds statefulness and learning to coding agents, but it is also the #1 model-agnostic OSS harness on Terminal-Bench. Letta Code's performance is comparable to provider-specific harnesses (Gemini CLI, Claude Code, Codex CLI) across model providers, and it significantly outperforms the previous leading model-agnostic harness, Terminus 2.
This means that even without memory, you can expect Letta Code agents to work just as well with a frontier model as they would with the harness built by that model's provider.
Getting Started with Letta Code
To try out Letta Code, you can install it with:
npm install -g @letta-ai/letta-code
Or install from source (see the full documentation).
Letta Code can be used with the Letta Developer Platform, or with a self-hosted Letta server.