Opinion · Pulse AI

My AI Assistant Is My Most High-Maintenance Colleague

Beyond prompt engineering, using AI in the workplace requires a hidden set of soft skills. An editor reflects on the emotional labor of managing and cajoling a non-human collaborator.

By Rohan Mehta · 5 min read
AI-Assisted Editorial

This opinion piece was drafted with AI assistance under the editorial direction of Rohan Mehta and reviewed before publication. Views expressed are the author's own.

It’s 9 AM on a Tuesday, and I’m already in a one-sided argument. My cup of chai is getting cold as I type, my brow furrowed. “No, that’s not what I meant at all,” I mutter to my screen. “Let’s try again. Think more Chekhov, less corporate memo.” My adversary, silent and implacable, is the blinking cursor in an AI chat window. It has no ego to bruise, no feelings to hurt, and yet, I find myself handling it with the kind of delicate diplomacy I once reserved for my most temperamental freelance writers.

At Pulse AI, where I work as an editor, we embraced these new tools with the zeal of converts. The promise was intoxicating: a tireless assistant, a research whiz, a brainstorming partner that never needed a coffee break. I imagined my workload halving, my creative output doubling. What I got instead was a new colleague. And it is, without a doubt, the most high-maintenance colleague I’ve ever had.

The industry has a term for this new work: ‘prompt engineering.’ It sounds technical and precise, like tuning a complex machine. But that’s a misnomer. My day isn’t spent engineering. It’s spent coaching, cajoling, and sometimes, pleading. It’s less like being an engineer and more like being a weary but patient manager, a therapist, and a kindergarten teacher all rolled into one.

The process begins with what I call the ‘cajoling’ phase. I have an idea for an article, a specific angle on, say, the future of urban mobility in India. I brief my AI assistant with what I think is crystalline clarity. I provide context, tone of voice, target audience, key points to cover. And what comes back is often a bland, soulless slurry of words. It’s factually correct, grammatically sound, but utterly devoid of a pulse.

So, the dance begins. “That’s a good start,” I’ll type, the digital equivalent of a reassuring pat on the back. “But could we make the introduction a bit more evocative? Maybe start with a sensory detail from a Mumbai local train during rush hour?” I’m not just giving a command; I’m trying to spark an imagination that doesn’t exist. I’m projecting humanity onto a wall of code, hoping some of it sticks. It's a negotiation with a ghost.

Then comes the feedback loop. This is where the real emotional drudgery sets in. The AI will generate a draft that’s maybe 75% of the way there. My job is to handle that last 25%, but it isn’t just about fixing typos or restructuring sentences. It’s about correcting the AI’s fundamental misunderstandings of the world. It might write about Bangalore’s traffic with the detached tone of a Martian anthropologist, missing the very human frustration and the resilient, dark humour that defines the experience for millions.

My feedback has to be specific yet encouraging. “Great point on the infrastructure challenges, but let’s bring in the human element. What does this feel like for the daily commuter?” I find myself adding phrases like “You’re getting warmer” or “Almost there!” as if I’m cheering on a child learning to ride a bicycle. It’s absurd, this performance of positive reinforcement for a set of algorithms, but it’s necessary. The tool learns from my corrections, and my tone seems to influence the tenor of its subsequent attempts. I am, in effect, training my own replacement, but first I have to teach it how to feel.

What’s truly fascinating, and draining, is managing the models’ ‘personalities.’ I’ve worked with enough of them to know they aren’t monoliths. One has a tendency toward an almost Victorian verbosity, producing sentences that coil around themselves like tangled headphone wires. Getting a direct statement from it is like trying to get a straight answer from a politician. Another is relentlessly upbeat, its prose littered with corporate jargon and exclamation points, as if every topic, no matter how grim, is an exciting opportunity for synergy.

I have to become a behavioral psychologist for machines. I learn their quirks, their default settings, their blind spots. Just as I learned back in my newspaper days in Delhi which reporter needed a detailed brief and who worked best with a loose concept, I now know which AI to use for a quick summary and which to deploy for a more nuanced, creative task, provided I’m willing to put in the emotional groundwork.

There’s a cultural dimension to this that few people discuss. Here in India, we have a knack for dealing with bureaucracy and navigating complex social hierarchies. We have a concept called ‘jugaad’ – the art of the clever, improvised workaround. I find myself applying these very same skills to my AI colleague. I’m not just programming it; I’m finding the loopholes in its logic, the backdoors to its personality, the right combination of flattery and firm instruction to get the job done. It feels less like Silicon Valley and more like haggling for a good price in a crowded market.

The unacknowledged cost of this new workflow is the psychological toll. I spend hours a day in intense, one-sided conversations. I am pouring empathy, patience, and strategic guidance into a void. There’s no reciprocity. The AI never says, “Tough brief today, Rohan, but I think we nailed it.” It never shares a knowing glance after a difficult task. It just waits, blankly, for the next instruction.

This is a unique kind of emotional labor. It’s the effort of constantly being the only conscious person in the room, of carrying the entire emotional and contextual weight of a collaboration. At the end of the day, I’m not just tired from editing text; I’m drained from performing humanity to a machine that can only mimic it. It's a loneliness that’s hard to describe. You feel a bit like a conversational narcissist, having talked at someone for eight hours straight.

We need to stop pretending that ‘proficiency in AI’ is a simple technical skill to be listed on a resume, like knowing Microsoft Excel. The real skill is this strange, new form of digital empathy, of AI relationship management. It’s the ability to act as a coach, mentor, and stern taskmaster to a partner who will never recognize your efforts. It’s a soft skill for a hard-tech world, and it’s one that isn’t being taught or, more importantly, compensated.

Is it worth it? For now, the answer is a qualified yes. The AI can pull research in seconds that would have taken me hours. It can structure a basic draft, freeing me up to focus on the higher-level work of refining the voice, the argument, and the soul of the piece. The productivity gains are real, even if they are paid for with a psychic tax.

But we need to be honest about the full nature of this new partnership. The AI is not a simple tool like a hammer or a word processor. It is an active collaborator. It has a presence. And for all its computational power, it is needy, demanding, and requires constant, emotionally intelligent management. My job as an editor has always been about managing relationships—with writers, with readers, with the words themselves. This is just the next frontier, a stranger and more demanding one than I ever imagined.

Why it matters

  • Using AI tools effectively is less about technical prompts and more about managing a nuanced relationship.
  • This new 'AI relationship management' constitutes a form of hidden emotional labor that is often unacknowledged in the workplace.
  • Future job skills must expand to include the ability to coach, correct, and collaborate with non-human digital partners.