AI agents, powered by foundation models, are emerging as assistants that perceive and act within an environment to complete user-defined tasks. An agent interacts with its environment through tools and decides which actions to take through model-driven planning, so its effectiveness hinges on both: failures can stem from inadequate planning, from tool malfunctions, or from inefficiency even when the task succeeds.

Planning improves with reflection and error correction: a plan is generated, evaluated before execution, and revised when steps fail, potentially with human oversight at each stage. Tool selection is equally critical and usually requires experimentation to balance added capability against added complexity. Planning granularity can be tuned with hierarchical plans, and plans can be generated in natural language and then translated into executable tool calls.

Evaluating an agent therefore focuses on detecting failures in planning, tool usage, and efficiency, and feeding those findings back to improve the agent's overall performance and reliability.
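The generate-evaluate-execute loop with error correction described above can be sketched as follows. This is a minimal illustration, not a real agent framework: the `plan`, `evaluate_plan`, and `run_agent` functions and the tool registry are hypothetical stand-ins for what would normally be calls to a foundation model and a real tool inventory.

```python
from typing import Callable

def plan(task: str) -> list[str]:
    """Plan generation. A real agent would prompt a foundation model;
    here we return a fixed sequence of tool names for illustration."""
    return ["lookup", "compute", "respond"]

def evaluate_plan(steps: list[str], tools: dict) -> list[str]:
    """Reflection step: return the plan steps that reference tools the
    agent does not actually have, so they can be corrected."""
    return [s for s in steps if s not in tools]

def run_agent(task: str, tools: dict[str, Callable[[str], str]],
              max_attempts: int = 2) -> list[str]:
    """Generate a plan, evaluate it, correct errors, then execute it."""
    for _ in range(max_attempts):
        steps = plan(task)
        invalid = evaluate_plan(steps, tools)
        if invalid:
            # Error correction (simplified): drop steps that can't run.
            # A real agent would replan or ask a human for oversight.
            steps = [s for s in steps if s not in invalid]
        if steps:
            # Execution: each step invokes a tool on the task.
            return [tools[step](task) for step in steps]
    raise RuntimeError("no executable plan found")
```

Because evaluation happens before execution, an invalid step (here, the missing `compute` tool) is caught and removed instead of crashing the agent mid-run:

```python
tools = {"lookup": lambda t: f"looked up {t}",
         "respond": lambda t: "done"}
run_agent("weather", tools)  # executes only the valid steps
```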