Paper: On the Trustworthiness of Generative Foundation Models
This paper presents an in-depth investigation into the trustworthiness of generative AI models spanning text-to-image, large language, and vision-language modalities. It outlines the challenges and potential risks associated with these models, including safety, fairness, privacy, and ethical considerations, and introduces TrustGen, a dynamic evaluation framework designed to assess and enhance the trustworthiness of these systems. The paper also analyses vulnerabilities such as jailbreak attacks, bias, and hallucinations across different models, emphasises the importance of interdisciplinary collaboration, and explores the broad societal impacts of these technologies, offering a roadmap for future research and development in the field.