


A sequence to sequence (or seq2seq) model is a neural architecture, consisting of an encoder and a decoder, used for translation and other tasks.
The encoder/decoder architecture has obvious promise for machine translation, and has been applied successfully in that setting. Encoding an input into a small number of hidden nodes that can then be decoded back into a matching string forces the model to learn an efficient representation of the essence of the string.
Beyond translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
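To make the encoder/decoder idea concrete, here is a minimal sketch in NumPy. It assumes a single-layer vanilla RNN for both encoder and decoder with randomly initialized (untrained) weights; all names and sizes are illustrative, not taken from the episode. The point is the shape of the computation: the encoder compresses the whole input sequence into one fixed-size hidden vector, and the decoder unrolls from that vector to emit an output sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, vocab_size = 8, 5  # illustrative sizes

# Randomly initialized weights stand in for trained parameters.
W_xh = rng.normal(size=(vocab_size, hidden_size)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden
W_hy = rng.normal(size=(hidden_size, vocab_size)) * 0.1   # hidden -> output

def encode(token_ids):
    """Compress an input token sequence into one fixed-size hidden vector."""
    h = np.zeros(hidden_size)
    for t in token_ids:
        x = np.eye(vocab_size)[t]         # one-hot input token
        h = np.tanh(x @ W_xh + h @ W_hh)  # recurrent update
    return h                              # the "thought vector"

def decode(h, max_len=4, start_token=0):
    """Unroll the decoder from the encoder's final state, feeding each
    greedily predicted token back in as the next input."""
    out, tok = [], start_token
    for _ in range(max_len):
        x = np.eye(vocab_size)[tok]
        h = np.tanh(x @ W_xh + h @ W_hh)
        tok = int(np.argmax(h @ W_hy))    # greedy choice of next token
        out.append(tok)
    return out

context = encode([1, 2, 3])  # whole input squeezed into hidden_size numbers
print(context.shape)         # (8,)
print(decode(context))       # an output sequence (meaningless until trained)
```

In a real system the weights would be learned end to end (typically with LSTM or GRU cells and attention), but the bottleneck shown here, an entire sequence passing through one fixed-size vector, is what makes the learned representation interesting.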
Related Links
tf-seq2seq
Describing Multimedia Content using Attention-based Encoder--Decoder Networks
Show and Tell: A Neural Image Caption Generator
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
By Kyle Polich · 4.4 (475 ratings)
