
“Digital native” is NOT an AI skill! Chatting with ChatGPT daily shouldn’t be a hiring metric.
71% of leaders say they’d hire for AI skills over experience. The research says they’re measuring the wrong thing.
After this episode, you’ll know what to measure instead (and how to defend experience): https://www.2ndorderthinkers.com/
- If you're measuring AI adoption → you're counting ChatGPT opens, not judgment → measure who catches errors instead
- If you're hiring "AI natives" → you're paying for fluency, not accuracy → test evaluation skills, not usage frequency
- If juniors ship 40% faster → they're also shipping 10× more security findings → ask: productive at what?
- If you can't articulate your AI value → you're underselling pattern recognition → use the self-assessment questions below
📖 Full write-up + sources
The full article lays out the studies, the arguments, and a checklist of questions you can use in performance reviews.
🔗 Links
Newsletter: https://www.2ndorderthinkers.com/
LinkedIn: https://www.linkedin.com/in/jing--hu/
Full article: https://www.2ndorderthinkers.com/p/why-your-20-something-colleague-is
⏱️ Timestamps
00:00 Performance reviews meet “AI integration”
01:03 AI use ≠ work quality
04:56 “Digital native” has weak evidence
07:48 Older participants write better with AI
09:22 Vendor productivity claims, missing quality
09:57 Coding assistants and security mistakes
12:21 AI boosts novice confidence, not novelty
14:26 The review questions that prove value
18:22 What to do next in 2026
💬 Question
Which matters more in your org right now: AI adoption rate, or AI error rate?
❤️ Read the full article; membership unlocks the full write-up, sources, and the framework/checklist. ❤️
Comment with one example where your experience caught an AI mistake (or where it didn’t).
By Jing Hu