
Large text-to-image models can generate seemingly "novel" images, but it is difficult to determine which training images are responsible for the appearance of a given generated image. This paper proposes a way to evaluate data attribution in these models: customize a pretrained model toward an exemplar object or style, so that images it generates are, by construction, influenced by that exemplar. The authors build a dataset of such exemplar-influenced images and use it to benchmark different attribution algorithms and feature spaces. They also show that attribution models trained on this dataset generalize to larger exemplar sets and can assign soft attribution scores rather than binary matches.
By Igor Melnyk
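To make the idea of soft attribution concrete, here is a minimal sketch of one common baseline the paper's benchmark evaluates against: rank candidate training images by similarity to a generated image in some feature space, then convert the similarities into a soft score distribution. This is an illustrative assumption, not the paper's exact method; the feature extractor, the similarity measure, and the `temperature` parameter are all hypothetical choices.

```python
import numpy as np

def soft_attribution_scores(query_feat, train_feats, temperature=0.1):
    """Assign soft attribution scores to candidate training images.

    query_feat:  (d,) feature vector of the generated image
    train_feats: (n, d) feature vectors of candidate training images
    Returns an (n,) array of non-negative scores summing to 1.
    """
    # Cosine similarity between the generated image and each candidate
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = t @ q
    # A temperature-scaled softmax turns similarities into soft scores:
    # lower temperature concentrates mass on the closest candidates
    z = (sims - sims.max()) / temperature
    w = np.exp(z)
    return w / w.sum()

# Toy usage: 3 candidate training images in a 4-d feature space
feats = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
scores = soft_attribution_scores(query, feats)
```

The choice of feature space matters here: the same softmax over raw-pixel similarities versus similarities from a learned embedding can yield very different rankings, which is exactly what the paper's exemplar-influenced dataset is designed to measure.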
