The source critically examines recent research claiming that AI systems may be developing a capacity for "scheming," defined as covertly and strategically pursuing misaligned goals. It draws a parallel between current AI scheming research and past attempts to teach apes human language, arguing that both fields suffer from the same methodological pitfalls: overattribution of human traits, heavy reliance on anecdote, and weak theoretical grounding. On this basis, the source systematically critiques the methods currently used to assess AI scheming, identifying deficiencies such as anecdotal evidence, missing control conditions, poorly motivated hypotheses, and exaggerated interpretations of results. It concludes by advocating more rigorous scientific practice, including quantitative analysis, explicit hypothesis testing, and cautious use of mentalistic language, so that claims about AI scheming become defensible and the field develops into a more productive research program.