A sequence to sequence (or seq2seq) model is a neural architecture, used for translation and other tasks, which consists of an encoder and a decoder.
The encoder/decoder architecture has obvious promise for machine translation, and has been applied to it successfully. Encoding an input into a small number of hidden nodes, from which a matching output string can be decoded, forces the model to learn an efficient representation of the essence of the input string.
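To make the encoder/decoder idea concrete, here is a minimal sketch in PyTorch (a different framework from the tf-seq2seq library linked below); the vocabulary sizes, dimensions, and random token ids are illustrative assumptions, not values from the episode.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Encoder compresses the source sequence into a fixed-size hidden state.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder unrolls that hidden state into the target sequence.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # hidden is the "small number of hidden nodes" the prose describes.
        _, hidden = self.encoder(self.src_emb(src_ids))
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), hidden)
        return self.out(dec_out)  # logits over the target vocabulary

# Toy usage (hypothetical sizes): 2 source sentences of length 5,
# decoded against target prefixes of length 6.
model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 5))
tgt = torch.randint(0, 1200, (2, 6))
logits = model(src, tgt)  # shape: (2, 6, 1200)

At training time the decoder is typically fed the ground-truth target tokens (teacher forcing), while at inference it consumes its own previous predictions one step at a time.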
In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
Related Links
tf-seq2seq
Describing Multimedia Content using Attention-based Encoder-Decoder Networks
Show and Tell: A Neural Image Caption Generator
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks