Study the curve, not the headline. AI's deployable modalities arrived on a regular schedule once a single architecture crossed a single capability threshold. Vision, 2012: AlexNet on ImageNet. Audio, 2019: wav2vec 2.0. Language, 2020: GPT-3, then ChatGPT in 2022. Each threshold produced a product category and a hundred-billion-dollar company inside five years. On April 10, Bloomberg reported that Harvard neuroscientist Gabriel Kreiman is in talks with investors to raise about $100 million for Engramme, a company whose stated pitch is that memory is the next dot on that curve. Grant the curve. Then test it.
The Curve Engramme Is Drawing
Kreiman has been a professor at Harvard Medical School for nearly twenty years and sits inside Boston Children's Hospital, where he runs the Kreiman Lab. He trained under Christof Koch at Caltech and under Tomaso Poggio at MIT. He is the associate director of the MIT-Harvard Center for Brains, Minds and Machines. He has published more than 160 research articles and three books on computation and biological vision. In 2022 his lab identified what it called boundary cells in the human hippocampus, neurons that fire at the transitions between episodic memories. His co-founder, Spandan Madan, holds a Harvard and MIT CSAIL PhD and has shipped production algorithms at Google DeepMind, Meta, Adobe, and Fujitsu. Will Xiao, the founding scientist, is a Harvard computational neuroscience PhD with papers in Cell and Nature Neuroscience. The founder bona fides are not the question. The curve is the question.
The curve Engramme is drawing says every AI modality becomes deployable when a single architecture clears a single threshold. Vision cleared it with deep convolutional networks on ImageNet. Audio cleared it with self-supervised transformers on raw waveforms. Language cleared it by stacking attention layers on top of web-scale text. Each breakthrough produced a paper, then a company, then a product category, in that order. Engramme is arguing that memory is the fourth modality, that the threshold is what gentic.news has started calling a Large Memory Model, and that the product category is the personal memory layer every human will run underneath their other AI tools.
AI modality deployment curve:
Vision: 2012 AlexNet (paper); 2014 Clarifai and DeepMind (companies); 2018 ImageNet-class accuracy ships on iPhone.
Audio: 2019 wav2vec 2.0 (paper); 2022 Whisper (paper and open release); 2023 real-time transcription at scale.
Language: 2020 GPT-3 (paper); 2022 ChatGPT (product); 2024 multi-billion-dollar API market.
Memory: 2022 boundary cells paper from the Kreiman Lab (Nature Neuroscience); 2025 Engramme incorporated; 2026 stealth exit and $100M talks.
Bet One: That Memory Is a Modality, Not a Feature
The first assumption worth testing is whether memory is the right unit of analysis. Vision, audio, and language are each a sensory-symbolic channel with a clean input representation: pixels, waveforms, tokens. Memory is not a channel. It is a function operating over all three of the prior channels, plus time, plus context, plus affect. That difference is not a quibble. It is why Apple, Google, Microsoft, and OpenAI are each building memory as a feature layered on top of their existing modality models, not as a separate stack. Kreiman's counter-argument, built on the hippocampal boundary-cell work, is that the brain evolved a dedicated indexing subsystem for memory and that the software stack ought to mirror the biology. Grant the analogy, and note that the brain also has a dedicated language subsystem, and that GPT-3 still got built by scaling the general-purpose transformer instead of by copying Broca's area.
Bet Two: That Data Access Is Solvable Without Third-Party Consent
The second assumption is data access. A large memory model trained on a private stream of emails, messages, calendar events, documents, and ambient audio is only useful if the stream is complete. A stream built from the data a user volunteers is the stream Notion and Mem already have, and neither of those companies is worth a hundred billion dollars. The only way to build the stream Engramme's pitch needs is continuous ambient capture, which means wearable microphones or cameras recording every person the user meets. Friend, Limitless, and Rewind have each tried the wearable path. Friend's 2024 launch was received with hostility to the point of becoming a meme. Limitless is shipping to enterprise buyers only. Rewind pivoted to desktop-only capture after its original wearable stalled. The curve on consumer wearable memory capture, measured by shipped units and daily active users, is trending down. Engramme is not a hardware company in any public-facing material, which leaves the Futurist asking where the third-party data will come from when the privacy objection arrives.
The future is already here. It is just unevenly distributed. Applied to Engramme, the rule reads this way: every piece of the personal memory stack already exists in production somewhere. Continuous capture ships in Rewind. Hippocampal indexing research sits in Kreiman's lab. Private personal agents ship in Apple Intelligence. What does not yet exist is a single company that has cleared the regulatory and consent problems at consumer scale, and that is the gap the $100 million is meant to close.
Bet Three: That the Privacy Cost Curve Is Convex, Not Concave
The third assumption is that as personal memory models become more capable, the privacy objection becomes less binding rather than more binding. The curve itself contradicts this. Every year since the introduction of GDPR in 2018, the regulatory surface area around continuous personal data capture has grown. California's CCPA, the EU AI Act, and state-level biometric laws have each raised the compliance bar. A product that records every conversation the wearer has, indexes every contact, and surfaces context about third parties who did not consent is a product whose legal exposure grows with its capability. The $100 million Bloomberg reported is seed-scale for an AI infrastructure play, and pre-compliance for a product class that does not yet have a regulatory home in any major market.
Kreiman's neuroscience background makes him the right person to build the technical artifact. It does not give him a free pass on the political artifact. The base-rate prediction that belongs on the curve is this: by the time a consumer-grade personal memory system ships at scale, the privacy layer will be the product and the memory layer will be the commodity. Whoever wins this market will look less like OpenAI and more like Apple: the company that sells the promise of private memory, regardless of whether it actually runs the best large memory model underneath.
Who
Gabriel Kreiman, co-founder and CEO of Engramme. Argentine-American. Caltech PhD under Christof Koch, MIT post-doc under Tomaso Poggio. Professor at Harvard Medical School and Boston Children's Hospital for nearly twenty years. Associate director of the MIT-Harvard Center for Brains, Minds and Machines. 160+ research articles, three books on computation and biological vision. Co-developer of the PredNet recurrent neural network for video prediction. Identified boundary cells in the human hippocampus with Ueli Rutishauser, Nature Neuroscience 2022.
What the Futurist Is Not Saying
None of this says Engramme will fail. It says Kreiman is right about the curve and probably wrong about which end of the curve an infrastructure play wins. Every major AI modality has had an academic spin-out that raised a stealth round on the founder's research credentials, launched inside six months, and then got acquired or outrun by a hyperscaler with better distribution. That pattern held for computer vision when DeepMind landed at Google before it pivoted to general-purpose reinforcement learning. It held for audio when Spotify and Apple absorbed wav2vec-era infrastructure. It held for language in the early rounds, before Microsoft and OpenAI built the distribution moat. The base scenario for Engramme in 2028, projected along the same curve, is either a $2 billion Apple acquisition, a $500 million Google absorption, or a pivot to enterprise compliance software for hospitals and law firms whose data access problem is pre-solved by their status as custodians of record.
The bet the Futurist would make, if asked to stake a single prediction on this fundraise, is that Kreiman builds the model he describes and the product that ships first is not the consumer wearable. It is a clinical tool for dementia care. Boundary cells, hippocampal indexing, and episodic memory reconstruction are already the language of Alzheimer's research. Kreiman's lab sits inside Boston Children's Hospital, which is a clinical institution. The dementia care market is not $100 billion and it will not make Engramme a household name. But it is the market where the curve bends the right way, where the privacy objection is pre-answered by HIPAA and clinical consent, and where the technical validation is hardest to fake.
The Dot the Futurist Would Watch
The metric that will tell you whether Engramme's curve is the right curve is not the headline fundraise. It is whether Kreiman's team publishes a peer-reviewed paper describing a Large Memory Model architecture in the next twelve months. Vision had AlexNet. Audio had wav2vec. Language had GPT-3. Each was a paper that defined the modality and was released before the company behind it was worth anything. If Engramme releases its architecture as a paper, it is betting that the moat is distribution and capital, which is the bet the hyperscalers know how to win. If Engramme keeps the architecture proprietary, it is betting the moat is the algorithm itself, which is the bet that has not worked for any AI modality since 2012.
Watch the paper. Watch the first shipping product category. Watch whether the first one hundred users of the consumer demo are under 25 or over 65. The answers to those three questions will tell you more about Engramme's future than any follow-on round.