Misreading Chat

#113: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models



Morita covers tips for getting LLMs to solve arithmetic word problems. Please send comments and feedback to our suggestion box or Reddit. iTunes reviews and stars are also welcome.
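The core trick discussed in the episode's paper is to include worked, step-by-step solutions in the few-shot exemplars, so the model imitates the reasoning before stating an answer. A minimal sketch (the helper name is illustrative; the exemplar is in the style of the paper's examples, and no actual model API is called):

```python
# Few-shot chain-of-thought prompting: each in-context example contains the
# intermediate reasoning steps, not just the final answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model produces step-by-step reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. They used 20 to make lunch and bought "
    "6 more. How many apples do they have?"
)
print(prompt)
```

Sending this prompt to a sufficiently large model tends to elicit a reasoning chain ending in a final answer, which the paper shows improves accuracy on arithmetic word problems compared with answer-only exemplars.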

  • Improving Language Understanding by Generative Pre-Training
  • Language Models are Unsupervised Multitask Learners
  • [2005.14165] Language Models are Few-Shot Learners
  • [2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  • NLP’s ImageNet moment has arrived
  • In-Context Learning, In Context
  • ChatGPT Prompt Engineering for Developers – DeepLearning.AI
  • ...more

Misreading Chat by Hajime Morrita, Jun Mukai


