Most engineering leaders are still running interviews like it's 2004.
Multiple coding rounds. Brain teasers on whiteboards. Engineers throwing their favorite puzzle at candidates. The whole process optimized for showing off interviewer cleverness rather than predicting job performance.
Ashwin Baskaran figured this out early. He's the VP of Engineering at Mercury, the fintech that more than 200,000 companies trust with their finances. Over his 20+ years in engineering leadership—from startups to Citrix to scaling Mercury—he's watched the industry slowly realize that technical depth alone doesn't predict success.
The companies still doing leetcode marathons are missing the point entirely.
🎧 Subscribe and Listen Now →
The Product Sense Revolution
"I would say interviews in the early part of when I became a manager tended to be very technical," Ashwin told me. "Multiple coding rounds. But simply doing more technical interviews doesn't necessarily give you more signal."
Here's what's interesting: a single well-designed coding interview gives you more signal than four or five ad hoc brain teasers. But that's not even the biggest shift.
The real evolution is recognizing that engineers need product sense. "I think the typical product company has more of an expectation that engineers will have ideas on the product and have product sense and not outsource their thinking."
At Mercury, Ashwin literally asks candidates: "Define the product." He leaves it vague on purpose. Whether it's infrastructure for databases or tools for general contractors, he's looking for people who understand boundaries: who built it, who experiences it, and how they're experiencing it.
"I'm looking for people who have the sense like, there's a boundary and this boundary is being experienced by somebody."
The AI Testing Ground
But here's where it gets really interesting for the AI era. While everyone's debating productivity gains, Ashwin's team has been quietly experimenting across Mercury's organization of 250+ engineers.
"Our general hypothesis is that it's gonna be a net positive," he said. "But it is going to be a bit of a journey."
The key insight: different types of problems benefit differently from AI assistance. Greenfield applications? AI shines. Legacy systems with complex context? Much harder. Python and TypeScript seem to get better results than other stacks, though the data is still anecdotal.
This creates a new interview challenge: "Do we change our interview structure and measure for proficiency with AI tools?"
Mercury is exploring screening processes that actually watch candidates use coding assistants in real time. Not just prompting, but sophisticated use of the tools integrated into VS Code. The question isn't whether someone can code, but how effectively they can collaborate with AI.
The Architecture Advantage
Ashwin's prediction for the future cuts through the hype: "I like this concept that this will actually drive extreme modularity."
His reasoning is compelling. Today, creating microservices means dealing with schemas, protocol buffers, gRPC boilerplate—all the toil that makes developers avoid proper boundaries. But AI excels at generating exactly this kind of repetitive infrastructure code.
"All of those things that are toil associated with building something that way could vanish. Those could be some of the early things that vanish."
The result? Systems with much better boundaries. Code that's easier for both humans and AI to understand and modify. The less context an AI needs to incorporate, the better outcomes you can expect.
Even more provocatively: "Prompts, or some variation of the prompt plus other contextual information could actually be the new code. And your code itself is like assembly code or machine code—the thing you produce whenever you need it, not the thing you version control."
What This Means for You
First, audit your current interview process. If you're still doing multiple coding rounds or whiteboard brain teasers, you're optimizing for the wrong signal. Design one well-calibrated coding interview and invest the saved time in assessing product sense and communication skills.
Second, start experimenting with AI-augmented interviews for appropriate roles. Not every position needs this, but for roles involving rapid prototyping or greenfield development, understanding how candidates collaborate with AI tools is becoming critical.
Third, prioritize modular architecture now. Whether AI reaches its full potential or not, better boundaries between systems will benefit your team. And if AI does deliver on code generation, you'll be perfectly positioned to take advantage.
Fourth, recognize that technical interviews are moving toward assessing judgment, not just implementation ability. In a world where AI can generate code, the premium is on engineers who understand what to build and how it fits into the larger context.
The question for your next hire: Are you screening for someone who can write algorithms on a whiteboard, or someone who can define product boundaries and collaborate effectively with both humans and AI?
The leaders who evolve their hiring practices now will build significantly stronger teams than those still stuck in 2004.
High Output is brought to you by Maestro AI. Ashwin talked about how "simply doing more technical interviews doesn't necessarily give you more signal"—the same principle applies to measuring your engineering team's performance. While most leaders rely on outdated metrics like story points, the real signals about velocity and blockers are hidden in your team's daily work. Maestro reveals which practices actually drive results and which just create noise.
Ready to move beyond surface-level metrics? Schedule a chat with our team → https://cal.com/team/maestro-ai/chat-with-maestro