Your AI forgets everything when the session ends. XHawk is fixing that.
Puneet Singh spent years watching engineering knowledge evaporate between coding sessions. He built a system to stop it.
There’s a quiet tax on every AI-native engineering team.
It doesn’t show up in sprint velocity. It doesn’t get flagged in retros. But it’s there, compounding every single day.
Every time a new coding session starts, AI forgets everything.
The architectural decision made last Tuesday. The three approaches the team tried before landing on the right one. The “never do this again” lesson that cost two days. All of it vanishes the moment the session closes. And somewhere, someone has to reconstruct it from scratch. Every. Single. Time.
That’s not a tooling problem. That’s a structural problem. And it only gets more expensive as a codebase grows.
The gap between code and context
Developers have spent years making code easier to write. AI coding assistants have compressed what used to take hours into minutes. That’s real progress.
But here’s what hasn’t kept pace: the reasoning behind the code.
Why this architecture and not that one. What the agent tried before it got it right. What guardrails the team built to keep the next agent from making the same mistake. That layer, the intent layer, has no home. It lives in Slack threads, tribal memory, and the minds of people who might not be on the team six months from now.
The code ships. The context evaporates.
What Puneet built
I recently got the chance to meet Puneet Singh, who is focused on this vital problem. He struck me as one of those founders who can’t let go of a problem until it’s actually solved, not papered over. That’s usually a good sign.
What he and his co-founder built is called XHawk. The pitch is clean: a System of Context.
XHawk CLI captures AI coding sessions automatically on every git push. It maps the agent’s reasoning directly to commits, indexes it, and serves it back through an MCP server so the next session starts with full context instead of a blank slate.
It also does something I find particularly interesting: it captures negative knowledge. Not just what the team did, but what they tried and explicitly decided not to do. For any AI agent working in a codebase, that’s the difference between confident execution and expensive hallucination.
The output is a living knowledge graph. Code, docs, decisions, all linked and searchable. It auto-generates context files for every relevant folder, so any developer or agent that touches the codebase hits the ground running instead of starting from scratch.
Why this matters now
The default mental model for AI in software development is still the session. Open a session, write some code, close the session. Each one isolated.
But the teams moving fastest aren’t treating AI sessions as disposable. They’re treating every session as a permanent asset, something that compounds. The reasoning gets captured. The intent lives alongside the code. The knowledge doesn’t walk out the door.
XHawk is one of the clearest expressions of that shift I’ve seen. It’s not a better chat interface. It’s the memory layer that AI-native engineering has been missing.
That’s a different kind of product. And I think it’s the right kind for where this is all heading.
Try It Yourself
Today, I hunted XHawk 0.99 on Product Hunt. Puneet and the team are live in the comments all day answering questions.
If you’re building with AI coding agents, this one is worth 10 minutes of your time. Go check it out and show them some love.
👉 Check out their PH launch here.