
The paper presents evidence for a neural mechanism in autoregressive transformer language models that represents input-output functions as vectors. These vectors, called function vectors (FVs), are robust to changes in context and can be used to trigger execution of a task in new settings. The study also explores the internal structure of FVs and their potential for semantic vector composition.
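For a concrete picture of the intervention the episode describes, the sketch below shows where such a vector would be injected: a forward hook adds a vector to one layer's hidden states during generation. This is a minimal illustration, not the authors' code; GPT-2, the layer index, and the random placeholder vector are all assumptions here (the paper derives real FVs from averaged attention-head outputs over in-context-learning prompts).

```python
# Minimal sketch (assumptions: Hugging Face transformers, GPT-2 as a stand-in
# model, a random placeholder vector in place of a real function vector).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

layer = 6                                    # a middle layer, chosen arbitrarily
fv = torch.randn(model.config.n_embd) * 0.1  # placeholder for a real FV

def inject_fv(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # add the vector to the hidden state at the last token position.
    hidden = output[0]
    hidden[:, -1, :] = hidden[:, -1, :] + fv.to(hidden.dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(inject_fv)
ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=5, pad_token_id=tok.eos_token_id)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

With a real FV extracted as in the paper, the same injection point is what lets the model carry out the demonstrated task even on prompts unrelated to the original in-context examples.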
https://arxiv.org/abs/2310.15213
YouTube: https://www.youtube.com/@ArxivPapers
TikTok: https://www.tiktok.com/@arxiv_papers
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
