


Large text-to-image models can generate "novel" images, but it is difficult to determine which training images are responsible for the appearance of a generated image. This paper proposes a method to evaluate data attribution in these models by customizing them towards exemplar objects or styles. The authors create a dataset of exemplar-influenced images to evaluate different attribution algorithms and feature spaces. They also show that training on this dataset can generalize to larger exemplar sets and assign soft attribution scores.
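The paper's core idea, ranking training images by their influence on a generated image and assigning soft attribution scores, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' method: it scores candidate training images by cosine similarity in some feature space and converts the similarities to a soft attribution distribution with a softmax. The feature extractor, the `temperature` parameter, and the random toy data are all hypothetical stand-ins.

```python
import numpy as np

def soft_attribution_scores(query_feat, train_feats, temperature=0.05):
    """Toy attribution: cosine similarity in a feature space,
    turned into a probability distribution over training images."""
    # Normalize to unit length so the dot product is cosine similarity
    q = query_feat / np.linalg.norm(query_feat)
    T = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = T @ q
    # Softmax with a temperature; lower temperature sharpens attribution
    z = (sims - sims.max()) / temperature
    probs = np.exp(z)
    return probs / probs.sum()

# Toy example: 4 "training images" with 8-dimensional features
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
query = feats[2] + 0.1 * rng.normal(size=8)  # generated image near image 2
scores = soft_attribution_scores(query, feats)
```

In practice the feature space matters a great deal; the paper evaluates which feature spaces and attribution algorithms best recover the exemplars a customized model was tuned on.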
By Igor Melnyk