This episode unpacks a recent article from AIModels.fyi on the potential for "scheming" in frontier AI models. We delve into Google DeepMind's framework for evaluating AI stealth and situational awareness, two capabilities central to assessing scheming risk and AI safety.
• Can current AI models exhibit "scheming" behavior?
• What are the key elements of "stealth" in AI systems?
• How does "situational awareness" impact AI risk?
• What are the potential threat models of AI scheming?
• How can the Claims, Arguments, Evidence (CAE) framework be used to assess AI safety?
• What kinds of AI actions are considered "code sabotage?"
• What kinds of AI actions are considered "research sabotage?"
• What kinds of AI actions are considered "decision sabotage?"
• What does 'power-seeking behavior' in AI look like?