AI is getting smarter every day.
But somehow, it still feels… a little empty.
And yet, when you’re having a rough day and ask something personal, the response often feels slightly off. Technically correct, emotionally distant.
So I started wondering. What if the problem isn’t the model — but how we train it?
The Problem: Intelligence Isn't Equal to Empathy
Modern AI is trained on massive datasets and refined through techniques like reinforcement learning from human feedback.
But there’s a subtle issue.
Most feedback systems are optimized for correctness, accuracy, and helpfulness.
Not necessarily for how understood the answer makes someone feel.
In other words, we’re training AI to be right, not to be understood.
And those are very different things.
A Different Thought
Instead of asking “Is this answer correct?”
What if we asked “Does this answer feel right to people?”
And more importantly: “What if people could directly rewrite AI responses, not just rate them?”
The Experiment: Letting Humans Edit AI
So I built a small experiment called Crowdians.
Here’s the idea:
Instead of just collecting feedback, we collect better answers.
Not “this is bad” — but “this is better.”
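To make that concrete, here is a minimal sketch of what “this is better” feedback could look like as data. The names are hypothetical, not Crowdians’ actual schema; the idea is that a human rewrite naturally becomes a (chosen, rejected) preference pair, the same shape that preference-tuning methods such as DPO or RLHF reward modeling already consume.

```python
from dataclasses import dataclass

@dataclass
class EditFeedback:
    """One unit of 'this is better' feedback: a human rewrite of an AI response."""
    prompt: str
    model_response: str  # what the model originally said
    human_edit: str      # the user's improved version

def to_preference_pair(fb: EditFeedback) -> dict:
    """Convert an edit into a (chosen, rejected) pair, the common
    input format for preference-tuning methods."""
    return {
        "prompt": fb.prompt,
        "chosen": fb.human_edit,        # the human rewrite is preferred...
        "rejected": fb.model_response,  # ...over the original model output
    }

# Hypothetical example of an empathy-oriented rewrite:
fb = EditFeedback(
    prompt="I had a rough day at work.",
    model_response="You should optimize your schedule to reduce stress.",
    human_edit="That sounds exhausting. Do you want to talk about what happened?",
)
pair = to_preference_pair(fb)
```

The appeal of this framing is that rewrites carry strictly more signal than ratings: a thumbs-down says only that something was wrong, while an edit shows what right would have looked like.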
Why This Might Matter
Most AI systems learn from passive signals: ratings, rankings, thumbs up or down.
But Crowdians explores something slightly different: active co-creation of responses.
It treats users not just as evaluators but as contributors.
Almost like Wikipedia, but for AI responses. Or GitHub, but for human empathy.
The assumption is simple. Good answers aren’t just generated. They’re refined.
Why I’m Sharing This
I don’t know if this is the right approach.
But it feels like something worth exploring.
Because right now, AI is incredibly capable — but still lacks something deeply human.
And maybe that missing piece can’t just be trained.
Maybe it has to be collaborated on.
I’d Love Your Thoughts
I’m curious what you think about this idea.
If you want to try it out and break it (please do), here it is:
Crowdians
Any feedback, criticism, or wild ideas are more than welcome.