“Good job Sherlock! Go ahead,” I typed, responding to GitHub Copilot’s meticulous analysis of my code issue.
“Okay, Watson! Now that we’ve definitively identified the ID mismatch as the culprit, let’s proceed with the fix,” Copilot replied, seamlessly adopting the role of the famous detective to my faithful companion.
This playful Conan Doyle-inspired exchange wasn’t planned or programmed—it emerged organically as we debugged a JavaScript ID reference problem. Somehow, without explicit instruction, we’d fallen into fictional personas that made the troubleshooting process feel more like collaborative detective work than coding.
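The bug itself was the mundane kind where markup and script silently disagree about an element’s ID. The original code isn’t shown here, so this is a hypothetical reconstruction (the IDs and the tiny DOM stand-in are invented for illustration, so the sketch runs anywhere):

```javascript
// Hypothetical reconstruction of an ID-mismatch bug.
// A tiny object stands in for the DOM so the sketch is self-contained.
const dom = { "submit-btn": { tag: "button" } }; // markup defines id="submit-btn"
const getElementById = (id) => dom[id] ?? null;

// The script queries a subtly different ID and gets null back —
// the silent failure that kicked off the whole investigation.
const broken = getElementById("submitBtn");
console.log(broken); // null

// The fix: reference the ID exactly as the markup spells it.
const fixed = getElementById("submit-btn");
console.log(fixed !== null); // true
```

In a real page, that `null` surfaces one line later as a `TypeError` when you call something like `addEventListener` on it, which is exactly the sort of clue a detective persona enjoys tracing back to its source.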
For nearly two years now, I’ve been coding alongside AI assistants, long before “vibe coding” entered the tech lexicon. What began as occasional prompting has evolved into genuine collaboration, where the lines between tool and teammate increasingly blur.
What fascinates me most isn’t just the growth in technical capability – though that’s certainly impressive – but the emerging “personalities” of different models. Some are overly cautious, prefacing every suggestion with multiple disclaimers. Others display a surprising stubbornness, standing firm on their analysis even when presented with contradicting evidence.
My Sherlock/Watson exchange with Copilot perfectly illustrates this personality dimension. The playful literary role play emerged naturally from our debugging session, with Copilot identifying the culprit (“the ID mismatch”) and proposing a solution with the dramatic flair of Baker Street’s finest detective.
This evolution in how we interact with AI feels significant. I’ve caught myself saying “please” and “thank you” to chatbots, feeling genuinely frustrated when they misunderstand my intentions (“I will not accept ‘most likely’ or ‘most probable’. I want definite findings.”), and even experiencing satisfaction when they rapidly grasp complex problems.
What’s particularly interesting is how these “personalities” shift with each update. The Copilot that channels Sherlock Holmes today might adopt a different persona tomorrow. They retain their core functions while subtly transforming how they communicate and reason.
As these tools become more sophisticated, our relationships with them naturally evolve. When you work with an AI assistant daily, you develop expectations, preferences, and even communication patterns tailored to getting the best results – just as you would with human colleagues.
Six years ago, the idea of role playing literary characters with an AI assistant would have seemed absurd. Today, it’s a spontaneous part of my workflow. Tomorrow? Perhaps we’ll have developed intricate best practices for managing these AI relationships—complete with guidance on how to leverage the narrative frameworks that make human-AI collaboration not just productive, but genuinely enjoyable.
The future isn’t coming – it’s already here, evolving in that space between prompt and response, in those moments when debugging code feels less like work and more like solving a mystery on Baker Street.