AI Post Transformers

Small Versus Large Models for Requirements Classification



On October 24, 2025, a multi-university collaboration published a paper comparing the performance of Large Language Models (LLMs) and Small Language Models (SLMs) on requirements classification tasks in software engineering. The researchers conducted a preliminary study using eight models across three datasets to address concerns about the high computational cost and privacy risks of proprietary LLMs. The results indicate that while LLMs achieved an average F1 score only 2% higher than SLMs, the difference was not statistically significant, suggesting that SLMs are a valid and highly competitive alternative. The study concludes that SLMs offer substantial benefits in privacy, cost efficiency, and local deployability, and found that dataset characteristics played a more significant role in performance than model size. Source: https://arxiv.org/pdf/2510.21443
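The comparison above hinges on the F1 score and the average gap between model families. As a minimal sketch (the per-dataset scores below are hypothetical, not the paper's actual numbers), this is how F1 is computed and how a mean F1 gap like the reported ~2% arises:

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall,
    computed from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 8 correct positives, 2 false alarms, 2 misses -> F1 = 0.8
print(f"F1: {f1_score(8, 2, 2):.2f}")

# Hypothetical per-dataset F1 scores for one LLM and one SLM
# (illustrative values only; see the paper for real results).
llm_f1 = [0.82, 0.78, 0.90]
slm_f1 = [0.80, 0.77, 0.87]

# Mean paired difference, analogous to the ~2% average gap the study reports.
gap = sum(l - s for l, s in zip(llm_f1, slm_f1)) / len(llm_f1)
print(f"mean F1 gap: {gap:.3f}")
```

A gap this small on only a few datasets is exactly the kind of difference a paired significance test can fail to distinguish from noise, which is why the study's statistical-significance check matters as much as the raw averages.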

By mcgrof