

Qwen-1, also known as QWEN, is a series of large language models comprising base pretrained models, chat models, and specialized models for coding and math. The models are trained on roughly 3 trillion tokens, tokenized with byte pair encoding (BPE), and use a modified Transformer architecture with untied input/output embeddings and rotary positional embeddings (RoPE). The chat models (QWEN-CHAT) are aligned to human preferences with Supervised Fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). QWEN models perform strongly, outperforming many open-source models, though they generally lag behind proprietary models such as GPT-4.
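The rotary positional embeddings mentioned above encode each token's position by rotating pairs of channels in the query and key vectors, so relative offsets show up in attention dot products. Below is a minimal NumPy sketch of RoPE as commonly defined, not Qwen's exact implementation; the function name and `base` default are illustrative assumptions.

```python
import numpy as np

def rotary_embed(x, positions, base=10000.0):
    """Apply rotary positional embeddings (RoPE) to x of shape (seq, dim).

    Each pair of channels (x1_i, x2_i) is rotated by an angle
    position * base**(-i/half), so the rotation is a pure change of
    direction: norms are preserved, and relative positions are encoded
    in dot products between rotated vectors.
    """
    seq, dim = x.shape
    half = dim // 2
    # Per-pair frequencies, decreasing geometrically across channel pairs.
    freqs = base ** (-np.arange(half) / half)          # (half,)
    angles = positions[:, None] * freqs[None, :]       # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2D rotation applied independently to each channel pair.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Example: rotate a small batch of query vectors by their positions.
q = np.ones((4, 8))
pos = np.arange(4.0)
out = rotary_embed(q, pos)
```

Because the transform is a rotation, the vector at position 0 is left unchanged and every vector keeps its norm, which is a quick sanity check when wiring this into an attention layer.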
By AI-Talk4
44 ratings