
The Full Cognitive Cycle And Why Lumis Starts With Understanding
Published on 11/20/2025
For the last few years, I’ve been wrestling with a problem that’s surprisingly hard to pin down.
It’s not just that I’m "busy." It’s that my brain feels constantly fractured.
My day is a blur of context switching: Slack channels stacked on top of Telegram groups, with emails sneaking into every crack in between. It’s dozens, sometimes hundreds, of messages a day.
This isn't because I'm popular, and it’s certainly not because I enjoy the noise. It’s simply the reality of modern work. Every role I hold connects me to a different network of people, expectations, and decisions. Whether I'm boarding a flight or trying to wind down at night, the stream never stops.
At some point, I realized the issue wasn't the volume of messaging. It was the architecture of how we process it.
We are trying to fix a cognitive problem with communication tools. And that’s why I started Lumis.
The Reality of How We Communicate
Most software treats communication as a simple game of catch: receive message, send reply. But when I analyze my own workflow, the actual process is much messier.
I see it as a five-stage cycle that we all go through, mostly on autopilot:
- Sensing (The Input): The world hits us in fragments. A vague Slack DM, a frantic Telegram voice note, the tension in a meeting.
- Understanding (The Meaning): This is the hard part. Before I can reply, I have to compute: Who is this? What role am I playing right now? What’s the history here? Is this actually urgent, or just loud?
- Expressing: Only then do I write the update or answer the question.
- Acting: I update the Jira ticket, nudge the designer, or move the project forward.
- Agency: In a perfect world, the system knows me well enough to handle some of this for me.
Here is the crux of the problem: We are currently stitching these layers together manually. The fatigue doesn't come from typing; it comes from the constant mental gymnastics required to maintain context across all these apps.
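To make the cycle concrete, here is a toy sketch of those five stages as a pipeline. The `Stage` enum, `Message` class, and `advance` helper are illustrative assumptions of mine, not anything from a real product:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    # The five stages, in the order described above (illustrative only)
    SENSING = auto()        # raw input: a DM, a voice note, an email
    UNDERSTANDING = auto()  # who is this, what role am I in, what's the history
    EXPRESSING = auto()     # drafting the reply or update
    ACTING = auto()         # moving the work forward
    AGENCY = auto()         # the system handles some of this for you

@dataclass
class Message:
    sender: str
    channel: str
    body: str
    stage: Stage = Stage.SENSING

def advance(msg: Message) -> Message:
    """Move a message one step through the cycle; today, a human does this manually."""
    order = list(Stage)  # Enum iteration preserves definition order
    idx = order.index(msg.stage)
    if idx + 1 < len(order):
        msg.stage = order[idx + 1]
    return msg
```

The point of the sketch is where the cost lives: the jump from SENSING to UNDERSTANDING is the expensive step, and it is the one we currently perform by hand for every message.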
Why Current AI Tools Feel "Hollow"
Over the past two years, I’ve subscribed to nearly every AI tool out there.
Don't get me wrong, they are useful. They help me search faster, polish my English, or summarize a long email chain. But they all share a frustrating limitation: They work on the edges, not the core.
ChatGPT can write a beautiful email, but it doesn't know who I'm emailing. It doesn't know that this investor prefers brief updates, or that this project has been stalled for three weeks due to a specific bug. It lacks the "connective tissue" of my work life.
So, I’m still stuck doing the heavy lifting: reading, processing, remembering context, and then asking the AI to fix the grammar. The AI solved the easy part (expression) but ignored the hard part (cognition).
When Lumis Finally Clicked
We recently started using an internal build of Lumis, and for the first time, something felt different.
It didn't just dump a list of unread messages on me. It showed me the state of my world:
- "Here is what actually moved forward today."
- "Here is a new risk you need to see."
- "These threads are noise; ignore them."
I wasn't scrolling through chat logs trying to reconstruct my day. It felt like someone had pre-processed reality for me.
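The triage above can be sketched as a simple bucketing function. Everything here — the `Thread` shape, the two flags, the bucket names — is a hypothetical illustration of the idea, not Lumis internals:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Thread:
    subject: str
    new_activity: bool  # did anything actually change since I last looked?
    blocking: bool      # is someone waiting, or is something at risk?

def digest(threads: List[Thread]) -> Dict[str, List[str]]:
    """Toy triage: sort threads into the three buckets described above."""
    moved = [t.subject for t in threads if t.new_activity and not t.blocking]
    risks = [t.subject for t in threads if t.blocking]
    noise = [t.subject for t in threads if not t.new_activity and not t.blocking]
    return {"moved": moved, "risks": risks, "noise": noise}
```

The value isn't the sorting itself; it's that deciding which bucket a thread belongs in requires the context (people, history, urgency) that raw chat logs don't carry.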
And because Lumis has long-term memory, when it drafted a reply, it didn't sound like a generic bot. It knew the history. It knew the tone. It felt like me—just a less overwhelmed version of me.
Starting With "Understanding"
If you look at that cognitive cycle again, the market is crowded at the edges.
- Sensing? Every app does this.
- Expressing? LLMs have largely solved this.
- Acting? Automation tools are everywhere.
But Understanding? That’s the gap.
We chose to start here for a simple reason: If the AI doesn't understand you, it can't meaningfully help you. Without deep context, automation is dangerous and summaries are shallow.
Our goal isn't just another productivity hack. We are trying to rebuild the cognitive layer underneath modern work. We want to build a system that captures your world, models your context, and eventually acts as a trusted extension of your mind.
We’re not there yet, and there is a long road ahead. But honestly? I’m building this because I need it myself.
— Ethan, Founder, Meland Labs