This paper provides a comprehensive survey of small language models (SLMs) in the context of large language models (LLMs). The authors discuss the advantages of SLMs over LLMs, including lower inference latency, cost-effectiveness, and ease of customization. They also survey the techniques used to build and enhance SLMs, including architecture design, training methods, and model compression; a small illustration of one such compression technique is sketched below. The paper then analyzes applications of SLMs across NLP tasks such as question answering, coding, and web search. Finally, the authors examine the trustworthiness of SLMs and identify several promising directions for future research.
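To make the "model compression" theme concrete, here is a minimal, illustrative sketch of post-training weight quantization, one common way to shrink a model for SLM-style deployment. This example is not taken from the surveyed paper; the function names (quantize_int8, dequantize) and the per-tensor symmetric scheme are assumptions chosen for brevity.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float weights into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and check reconstruction error and size.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.abs(w - w_hat).max())
print("memory: fp32 =", w.nbytes, "bytes; int8 =", q.nbytes, "bytes")
```

Real SLM compression pipelines typically go further (per-channel scales, quantization-aware training, pruning, or distillation), but the sketch shows the basic trade-off the survey discusses: roughly 4x smaller weights at the cost of a small approximation error.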