

LongNet is a Transformer variant that scales to sequences of more than 1 billion tokens without sacrificing performance on shorter sequences. It introduces dilated attention, which expands the attentive field exponentially as the distance between tokens grows. LongNet has linear computational complexity, can serve as a distributed trainer for extremely long sequences, and shows strong experimental results on both long-sequence modeling and general language tasks.
https://arxiv.org/abs/2307.02486
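
As a rough illustration of the mechanism, here is a minimal dilated-attention sketch in Python/NumPy. It is not the paper's implementation: the segment lengths, dilation rates, and the plain averaging of branches are illustrative assumptions (LongNet itself mixes multiple heads and weights the branches rather than averaging them uniformly).

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(q, k, v, segment_len, dilation):
    # One branch: split the sequence into segments of length segment_len,
    # keep every dilation-th token inside each segment, and attend only
    # among the kept tokens. Per-branch cost is O(n * segment_len / dilation),
    # i.e. linear in the sequence length n.
    n, d = q.shape
    out = np.zeros_like(v)
    for start in range(0, n, segment_len):
        idx = np.arange(start, min(start + segment_len, n))[::dilation]
        scores = softmax(q[idx] @ k[idx].T / np.sqrt(d))
        out[idx] = scores @ v[idx]
    return out

# Mixing branches whose (segment_len, dilation) pairs grow geometrically is
# what lets the attentive field expand exponentially with token distance.
rng = np.random.default_rng(0)
n, d = 64, 8
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
branches = [(8, 1), (16, 2), (32, 4)]  # illustrative sizes, not the paper's
out = sum(dilated_attention(q, k, v, w, r) for w, r in branches) / len(branches)
print(out.shape)  # (64, 8)
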
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
