


This research paper presents a safety evaluation of the DeepSeek-R1 and DeepSeek-V3 models in Chinese language contexts, an area that was previously underexplored. It notes that while the DeepSeek models possess strong reasoning capabilities, earlier studies, conducted primarily in English, revealed significant safety flaws. To address the gap in Chinese safety assessment, the authors introduce CHiSafetyBench, a new benchmark designed to systematically test these models across safety categories such as discrimination and violation of values. The experimental results quantitatively demonstrate the deficiencies of the DeepSeek models in Chinese safety performance, particularly in identifying and refusing harmful content, and offer insights for future improvements. The authors acknowledge potential biases in their evaluation and plan to continually optimize the benchmark.
Source: https://arxiv.org/pdf/2502.11137
By mcgrof