


Since its release in November 2022, ChatGPT has been lauded as a groundbreaking AI chatbot. However, recent research indicates its capabilities may be declining over time.
In this episode of the Trust Insights podcast In-Ear Insights, hosts Christopher Penn and Katie Robbert ask the question: is ChatGPT getting dumber? They discuss findings that ChatGPT appears to be getting worse at certain tasks it previously handled well. This includes mathematical reasoning, code generation, and visual puzzles.
There is speculation that the declines are due to OpenAI opening up access to ChatGPT’s premium GPT-4 model, which overwhelmed their systems. The increased demand likely required reducing capabilities to manage traffic.
Whatever the cause, the changes have big implications for businesses relying on ChatGPT’s API in their products and services. When the AI model drifts substantially in just a few months, it can break assumptions made during development.
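One way to catch the kind of drift described here before it breaks a product is a small regression harness: re-run a fixed set of prompts with known answers on a schedule and alert when accuracy falls below the baseline measured at launch. This is a minimal sketch, not anything from the episode; `call_model`, `eval_set`, and the `0.95` baseline are illustrative stand-ins for whatever client and evaluation data you actually use.

```python
# Hypothetical drift-detection harness for a third-party model API.
# `call_model` is a stand-in for your real API client (e.g. a function
# that sends a prompt and returns the model's text response).

def drift_score(call_model, eval_set):
    """Return the fraction of evaluation prompts the model still
    answers as expected. eval_set is a list of
    (prompt, expected_answer) pairs chosen when the integration
    was built."""
    correct = sum(
        1 for prompt, expected in eval_set
        if call_model(prompt).strip() == expected
    )
    return correct / len(eval_set)

def check_for_drift(call_model, eval_set, baseline=0.95):
    """Compare current accuracy against the baseline recorded at
    launch. Returns (still_ok, current_score) so a scheduler can
    alert when still_ok is False."""
    score = drift_score(call_model, eval_set)
    return score >= baseline, score
```

Run on a cron schedule, this turns silent model drift into an explicit alert instead of a surprise bug report from customers.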
Penn and Robbert emphasize the importance of clearly defining your purpose and requirements before implementing AI. Which matters more to you: convenience or reliability? What happens if the system goes down or its capabilities change?
For mission-critical uses, they recommend exploring open-source AI models you can run on your own servers. This provides more control and avoids being at the mercy of vendors altering their public APIs.
The key takeaway is to carefully weigh the tradeoffs and have backup plans in place when utilizing third-party AI services. Model drift may not matter for minor uses but could seriously impact products dependent on certain functionality. Do your due diligence upfront to prevent disruptive surprises down the road.
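The backup-plan advice can be sketched as a simple fallback wrapper: try the third-party service first, and route to a self-hosted model if the call fails. This is an illustrative pattern, not something specified in the episode; `primary` and `fallback` are hypothetical stand-ins for a hosted API client and a local model.

```python
# Hypothetical fallback pattern for third-party AI dependencies.
# `primary` and `fallback` are any callables that take a prompt and
# return generated text (e.g. a hosted API client and a self-hosted
# open-source model).

def generate_with_fallback(prompt, primary, fallback):
    """Try the primary (third-party) model first; on any failure,
    use the self-hosted fallback so the product keeps working.
    Returns (text, source) so callers can log which path was used."""
    try:
        return primary(prompt), "primary"
    except Exception:
        return fallback(prompt), "fallback"
```

Logging the `source` value also gives you a running measure of how often the third-party service is actually failing.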
[podcastsponsor]
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
In this week’s In-Ear Insights: ChatGPT has been the darling of everyone’s everything since November of 2022, when it first came out, and since then it’s gone through a number of evolutions from the original version to a new model.
And then to the big model that OpenAI came out with earlier this year, the GPT-4 model, which is supposedly the best in class, biggest, fanciest. It’s the Porsche 911 of large language models.
However, new research has come out, corroborated in many ways by many people’s experiences, suggesting that ChatGPT is getting worse over time.
It seems to not be as smart, it seems to not be as clever, and it seems to be running into more and more difficulties.
And a research paper came out recently that summarize
