Perplexity, an AI startup, has been accused of plagiarizing content from news outlets such as Forbes and CNBC without proper credit, raising concerns about the erosion of trust in media and the impact of AI on journalism.
The article "TechScape: How cheap, outsourced labor in Africa is shaping AI English" from The Guardian highlights the impact of outsourcing AI training to anglophonic knowledge workers in parts of the global south, and raises questions about the impact on language, culture, and identity.
The paper "Show, Don't Tell: Aligning Language Models with Demonstrated Feedback" from Stanford University introduces a method called DITTO that uses a small number of demonstrations to customize language models, showing promising results in fine-grained style and task alignment.
"WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild" from the Allen Institute for AI and the University of Washington introduces an automated evaluation framework designed to benchmark large language models on challenging real-world user queries, providing a more reliable and interpretable evaluation of models' performance.
Contact: [email protected]
01:36 AI startup Perplexity accused of ‘directly ripping off’ news outlets like Forbes, CNBC without proper credit
03:32 TechScape: How cheap, outsourced labour in Africa is shaping AI English
04:34 Thread: an AI Jupyter notebook
07:34 Show, Don't Tell: Aligning Language Models with Demonstrated Feedback
08:56 WildBench: Benchmarking LLMs with Challenging Tasks from Real Users in the Wild
10:46 Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?