Research

Pre-doctoral researcher investigating episodic and semantic memory in AI systems, grounded in cognitive neuroscience and Complementary Learning Systems theory.

Pre-Doctoral Research Direction

My research direction centers on a fundamental question: how can AI systems develop and maintain memory architectures that support persistent identity, continuous learning, and genuine episodic recall?

The field is advancing rapidly. Systems like Letta (formerly MemGPT) have introduced multi-tier memory architectures with core, recall, and archival memory. Open-source agents like Moltbot demonstrate persistent memory across sessions. The December 2025 survey "Memory in the Age of AI Agents" became Hugging Face's top daily paper, signaling unprecedented attention to this problem space.

Yet significant gaps remain. Current approaches largely implement semantic memory and document retrieval. True episodic memory, the ability to recall specific experiences grounded in time and space, to remember not just what happened but what it was like, remains an open challenge. My work explores how insights from cognitive neuroscience can inform architectures that bridge this gap.

The Current Landscape

Understanding what exists helps identify where the open problems lie.

What Exists Now

  • Virtual context management (MemGPT/Letta) with hierarchical memory tiers; see the sketch after this list
  • Persistent agent memory enabling cross-session continuity (Moltbot, Letta)
  • RAG systems for retrieval-augmented generation from external knowledge
  • Self-directed memory editing where LLMs manage their own memory via tool use
  • MemRL framework (January 2026) for self-evolving agents via RL on episodic memory
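
To make the tier idea concrete, here is a minimal Python sketch. This is not Letta's actual API; the class and method names (TieredMemory, remember_turn) are mine, and the archive search is a deliberate stub.

    from collections import deque

    class TieredMemory:
        """Illustrative three-tier store: a small always-in-context core,
        a bounded recall buffer of recent turns, and an archive for overflow."""

        def __init__(self, core_limit=8, recall_limit=64):
            self.core_limit = core_limit
            self.core = {}                            # key facts kept in every prompt
            self.recall = deque(maxlen=recall_limit)  # recent conversational turns
            self.archive = []                         # overflow, searched on demand

        def set_core(self, key, value):
            if key not in self.core and len(self.core) >= self.core_limit:
                raise ValueError("core is full; compress or evict before adding")
            self.core[key] = value

        def remember_turn(self, text):
            if len(self.recall) == self.recall.maxlen:
                self.archive.append(self.recall[0])   # evict oldest turn to archive
            self.recall.append(text)

        def search_archive(self, query):
            # Naive substring match; a real system would use embedding similarity.
            return [t for t in self.archive if query.lower() in t.lower()]

The design point is the eviction path: nothing is deleted, it just moves to a cheaper, slower tier.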

Open Research Questions

  • True episodic recall: remembering experiences, not just retrieving documents
  • Catastrophic forgetting in continuous learning without replay mechanisms
  • Memory consolidation: how to integrate new experiences without destabilizing existing knowledge
  • Temporal encoding: representing when events occurred, not just what happened
  • Identity persistence: maintaining a coherent self across experiences and time

Neuroscience Foundations

The brain solves the continuous learning problem that AI systems struggle with. Complementary Learning Systems (CLS) theory, originally proposed by McClelland, McNaughton, and O'Reilly, offers a framework: two specialized systems working in concert.

The hippocampus acts as a fast-learning system for episodic memories: sparse, pattern-separated representations of specific experiences. The neocortex serves as a slow-learning system that gradually extracts statistical regularities to form semantic knowledge. The hippocampus functions as an index that reactivates distributed memory traces, and through systems consolidation, memories become sustained by neocortical networks.
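
A computational caricature of the two systems, assuming nothing beyond numpy and toy 8-dimensional inputs: the "hippocampus" stores every episode verbatim after a crude stand-in for pattern separation (a fixed random projection plus top-k sparsification), while the "neocortex" only nudges per-label prototypes by a small learning rate.

    import numpy as np

    rng = np.random.default_rng(0)
    PROJ = rng.standard_normal((32, 8))    # fixed random projection: crude pattern separation

    class Hippocampus:
        """Fast learner: one-shot storage of sparse, separated episode codes."""
        def __init__(self):
            self.traces = []               # (code, payload) pairs, stored verbatim

        def encode(self, x):
            code = PROJ @ x                # project 8-d input into a 32-d space
            threshold = np.sort(code)[-4]  # keep only the top 4 units: sparse code
            return np.where(code >= threshold, code, 0.0)

        def store(self, x, payload):
            self.traces.append((self.encode(x), payload))

        def recall(self, cue):
            code = self.encode(cue)        # best-matching trace wins, after one exposure
            sims = [float(code @ stored) for stored, _ in self.traces]
            return self.traces[int(np.argmax(sims))][1]

    class Neocortex:
        """Slow learner: drifts per-label prototypes toward each example."""
        def __init__(self, lr=0.01):
            self.prototypes = {}
            self.lr = lr

        def learn(self, x, label):
            p = self.prototypes.setdefault(label, np.zeros_like(x, dtype=float))
            p += self.lr * (x - p)         # tiny steps: regularities accumulate slowly

One-shot storage versus slow statistical drift is the entire contrast that CLS theory turns on.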

Critical to this process is sleep replay. Research from UC San Diego and UC Irvine (2025) demonstrated that slow-wave sleep interleaves replay of familiar and novel memory traces within individual slow waves, allowing new memories to integrate without catastrophic interference. This "interleaved replay" principle has direct implications for AI architectures.
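
The replay principle translates almost directly into a training loop. A sketch under assumptions: model.train_step is a hypothetical interface, and the replay buffer is a plain list of past items.

    import random

    def interleaved_replay(model, new_items, replay_buffer,
                           batch_size=32, replay_fraction=0.5):
        """Every batch mixes novel items with replayed familiar ones, so no
        gradient step reflects the new experience alone."""
        n_replay = int(batch_size * replay_fraction)
        n_novel = batch_size - n_replay
        random.shuffle(new_items)
        for i in range(0, len(new_items), n_novel):
            novel = new_items[i:i + n_novel]
            familiar = random.sample(replay_buffer,
                                     min(n_replay, len(replay_buffer)))
            model.train_step(novel + familiar)   # hypothetical training interface
        replay_buffer.extend(new_items)          # today's novelty is tomorrow's replay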

Memory Types in Cognitive Systems

Understanding how biological memory systems work informs how we might design computational analogues.

Episodic Memory

Autobiographical memory of specific experiences grounded in time and space. When you remember your first day at a job, you use episodic memory: you recall not just facts but the experience itself.

For AI: remembering specific interactions, the context of decisions, how understanding evolved through particular experiences.
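
Concretely, an episodic trace needs fields a document does not have. A minimal illustration; the field names are mine, not from any published schema:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Episode:
        """One experience, grounded in time and context, not just content."""
        content: str                                   # what happened
        occurred_at: datetime                          # when it happened
        context: dict = field(default_factory=dict)    # where, with whom, why
        outcome: str = ""                              # how it resolved

    ep = Episode(
        content="User asked for a refactor of the auth module; I proposed JWT.",
        occurred_at=datetime.now(timezone.utc),
        context={"session": "a-113", "preceded_by": "a failed OAuth attempt"},
        outcome="tests passed; user approved",
    )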

Semantic Memory

General knowledge divorced from specific experiences. You know Paris is in France without remembering when you learned it. Facts, concepts, meanings.

For AI: LLMs are strong here; they encode vast factual knowledge. The gap is episodic: they cannot trace how understanding developed.
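
One hedged way to picture the episodic-to-semantic hand-off: an assertion observed across enough distinct episodes gets promoted into a semantic store, shedding its episodic grounding. The threshold and the fact representation below are placeholders.

    from collections import Counter

    def consolidate(episodes, threshold=3):
        """Promote assertions that recur across episodes into semantic facts."""
        counts = Counter(fact for ep in episodes for fact in ep["facts"])
        return {fact for fact, n in counts.items() if n >= threshold}
        # Regularities survive; the specific occasions do not.

    episodes = [
        {"facts": {("Paris", "capital_of", "France")}},
        {"facts": {("Paris", "capital_of", "France"),
                   ("Rhine", "flows_through", "Basel")}},
        {"facts": {("Paris", "capital_of", "France")}},
    ]
    print(consolidate(episodes))   # {('Paris', 'capital_of', 'France')}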

Procedural Memory

Expertise developed through practice: patterns of successful action. How to ride a bike, play an instrument, or navigate a familiar route.

For AI: captured in model weights through training, but difficult to update incrementally without full retraining.

Bio-Inspired Approaches

Several recent frameworks attempt to bridge neuroscience and AI memory:

Hippocampal-Augmented Memory Integration (HAMI), introduced in 2025, uses symbolic indexing, hierarchical memory refinement, and structured episodic retrieval, directly inspired by how the hippocampus operates.

Sleep Replay Consolidation (SRC) algorithms implement a sleep-like phase where neural networks replay and consolidate memories, enabling continual learning without catastrophic forgetting. Spiking neural networks show particular promise here, as local learning rules combined with spike-based communication allow spontaneous reactivation without interference.

Nested Learning (Google Research, NeurIPS 2025) treats models as interconnected, multi-level learning systems optimized simultaneously: a "continuum memory system" where different modules update at different frequencies, analogous to hippocampal-neocortical interaction.
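
The frequency idea fits in a few lines. This is not Google's implementation, only the scheduling skeleton: module i applies its gradient every 2**i steps, so shallow modules absorb the moment while deep modules change slowly.

    def continuum_update(modules, grads, step, base_lr=0.1):
        """Module i updates only when step is divisible by 2**i: fast modules
        track novelty, slow modules protect consolidated knowledge."""
        for i, (params, grad) in enumerate(zip(modules, grads)):
            if step % (2 ** i) != 0:
                continue
            for name, g in grad.items():
                params[name] -= base_lr * g

    fast, slow = {"w": 1.0}, {"w": 1.0}
    for step in range(4):
        continuum_update([fast, slow], [{"w": 0.5}, {"w": 0.5}], step)
    # fast "w" stepped 4 times; slow "w" only at steps 0 and 2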

Products as Living Experiments

My products are not separate from the research: they are experiments in the questions I am pursuing.

Ajax Studio explores creative persistence. How does a digital artist maintain a consistent style across projects? How does it build on past work? How does it develop an identity over time? These are research questions about memory and continuity, implemented in a working system.

VoiceGuard approaches identity from the security side: what happens when synthetic voices can impersonate anyone? This is fundamentally a question about authenticity verification in a world where the markers of identity can be synthesized.

Both products generate data and insights that inform the underlying research. Building real systems reveals constraints that theory alone cannot anticipate.

Specific Research Questions

These are the questions I am exploring in my pre-doctoral research direction.

CLS-Inspired Architectures

How can Complementary Learning Systems theory inform practical AI architectures? What is the computational analogue of hippocampal-neocortical interaction?

Replay and Consolidation

Can sleep-inspired replay mechanisms enable continuous learning in AI without catastrophic forgetting? What is the optimal interleaving strategy?

Temporal Encoding

How can AI systems represent when experiences occurred, not just what happened? What role does temporal context play in episodic recall?
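
One candidate encoding, borrowed from positional encodings in transformers: represent a timestamp as sinusoids over several periods, so events close in the daily or yearly cycle land near each other and "around this time yesterday" becomes a nearest-neighbor query. The period set here is an arbitrary choice of mine.

    import math

    PERIODS = [3600, 86400, 604800, 31557600]   # hour, day, week, year, in seconds

    def time_code(t):
        """Map a unix timestamp to a vector where nearby phases of each
        cycle (time of day, day of week, season) land close together."""
        feats = []
        for p in PERIODS:
            angle = 2 * math.pi * (t % p) / p
            feats += [math.sin(angle), math.cos(angle)]
        return feats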

Identity Persistence

What computational mechanisms support coherent identity across time? How does accumulated experience shape a consistent self?

Beyond RAG

How can retrieval systems support genuine episodic recall rather than document lookup? What distinguishes remembering from searching?
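
A toy scoring rule makes the distinction explicit: a searcher ranks by content similarity alone, while a rememberer also weighs how recently the episode happened and whether it is bound to the current context. The weights and half-life below are placeholders.

    import math

    def recall_score(content_sim, age_seconds, shares_context,
                     w_sim=1.0, w_recency=0.5, w_context=0.3,
                     half_life=30 * 24 * 3600):
        """Searching uses content_sim alone; remembering adds recency decay
        and a bonus for episodes bound to the current context."""
        recency = math.exp(-age_seconds * math.log(2) / half_life)
        return (w_sim * content_sim
                + w_recency * recency
                + w_context * (1.0 if shares_context else 0.0))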

Authenticity Verification

As synthesis improves, how do we verify that content came from the claimed source? What markers of identity can resist synthesis?
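
The cryptographic baseline this question starts from is provenance tagging: bind content to a key holder so that synthesis without the key cannot produce a valid tag. This sketch uses a shared-secret HMAC from Python's standard library; a deployed system would use public-key signatures so verifiers need no secret.

    import hmac, hashlib

    def tag(content: bytes, key: bytes) -> str:
        """Bind content to a key holder; any alteration invalidates the tag."""
        return hmac.new(key, content, hashlib.sha256).hexdigest()

    def verify(content: bytes, claimed: str, key: bytes) -> bool:
        return hmac.compare_digest(tag(content, key), claimed)

    key = b"placeholder-speaker-secret"
    audio = b"...voice sample bytes..."
    t = tag(audio, key)
    assert verify(audio, t, key)
    assert not verify(audio + b"tampered", t, key)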

Interested in collaboration?

I welcome conversations with researchers, institutions, and organizations interested in AI memory systems, Complementary Learning Systems, or related areas.