This research paper provides a comprehensive survey of small language models (SLMs), compact counterparts to large language models (LLMs) designed for deployment on resource-constrained devices such as smartphones. The study analyzes the architectures, training datasets, and training algorithms used in SLMs, as well as their capabilities across domains including commonsense reasoning, problem-solving, and mathematics. The researchers also evaluate the runtime cost of SLMs on edge devices, examining factors such as inference latency and memory footprint. The paper concludes with insights and future directions for SLM research, emphasizing the importance of co-designing SLM architectures with device processors, building high-quality synthetic training datasets, and exploring continual on-device learning for personalization.
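
To make the runtime-cost dimension concrete, below is a minimal sketch of how one might measure inference latency and memory footprint for an SLM on a CPU-only device. This is not the paper's benchmark harness; it assumes the Hugging Face `transformers` library, `torch`, and `psutil` are available, and the model id is a placeholder to be swapped for whichever SLM checkpoint is under test.

```python
# Minimal sketch (not the survey's harness): rough latency and memory
# measurement for a small language model running on CPU.
import time

import psutil
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2-0.5B-Instruct"  # placeholder SLM checkpoint (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float32)
model.eval()

prompt = "Explain why the sky is blue in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
process = psutil.Process()

# Prefill latency: time to process the prompt and emit the first token.
start = time.perf_counter()
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=1, do_sample=False)
prefill_s = time.perf_counter() - start

# Decode cost: time for a longer greedy continuation, then estimate
# the average per-token latency by subtracting the prefill time.
n_new = 64
start = time.perf_counter()
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=n_new, min_new_tokens=n_new, do_sample=False)
decode_s = time.perf_counter() - start

print(f"prefill latency:   {prefill_s * 1e3:.1f} ms")
print(f"per-token latency: {(decode_s - prefill_s) / (n_new - 1) * 1e3:.1f} ms")
print(f"resident memory:   {process.memory_info().rss / 2**20:.0f} MiB")
```

A fuller evaluation along the lines the survey describes would repeat such measurements across prompt lengths, batch sizes, quantization settings, and device processors rather than a single run.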