


In this week’s In-Ear Insights, Katie and Chris answer the big question that people are afraid to ask for fear of looking silly: what IS a large language model? Learn what an LLM is, why LLMs like GPT-4 can do what they do, and how to use them best. Tune in to learn more!
[podcastsponsor]
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
In this week’s In-Ear Insights, today, we are talking all about large language models.
Now, the ones you’ve probably heard of most are models like GPT-3.5 and GPT-4, which are from OpenAI.
But there are many of these things.
There’s the GPT-NeoX series from EleutherAI, there’s StableLM from Stability AI, there’s PaLM from Google.
So there’s many, many, many of these language models out there.
And today, we figured we’d talk about what they are, why you should probably know what they are, and maybe a little bit about how they work.
So Katie, where would you like to start?
I think we need to start with some basic definitions, because you just said a bunch of words that basically made my eyes glaze over.
And so I think where we need to start first is: what is a large language model?
And before you give me an overcomplicated definition, let me see if I can take a stab at this in a non-technical way, to see if I’m even understanding it.
So my basic understanding of a large language model is that, if you think of a box or a bucket, and you put all of your content, your papers, your writing, your text, your data, whatever, into that bucket, then that’s sort of the house, the container, for all of the language that you want to train the model on.
And the more stuff you put in it, the bigger the bucket you need, hence the “large,” because you can’t just have a tiny little handheld bucket with two documents in it and say, okay, that’s my large language model.
That’s not enough.
That’s not enough examples for the model to train on; you need to keep giving it more information.
And the more information you put in the bucket, the more refined you can make the model.
Am I even close?
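The bucket analogy maps loosely onto how statistical language models work: they estimate which words tend to follow other words, and those estimates get better as the training “bucket” grows. As a toy illustration only (a real LLM like GPT-4 uses a neural network, not simple word counts), here is a minimal bigram sketch in Python; the corpus string is made up for the example:

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it.
    A real LLM learns far richer patterns than adjacent-word counts,
    but the principle is the same: more example text means
    better-estimated probabilities."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def most_likely_next(model, word):
    """Predict the follower seen most often in the training data."""
    followers = model.get(word.lower())
    if not followers:
        return None  # word never seen: the "bucket" was too small
    return followers.most_common(1)[0][0]

# A tiny "bucket" of training text (illustrative only)
corpus = "the cat sat on the mat and the cat ate"
model = train_bigram_model(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" most often
print(most_likely_next(model, "dog"))  # None: never seen in training
```

With only ten words of training text, the model can answer almost nothing; fill the bucket with more text and its predictions cover more words and get more reliable, which is the intuition behind “large” in large language model.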
By Trust Insights
