
In this week’s SlatorPod, we are joined by Nikola Nikolov, an experienced researcher, engineer, YouTuber, and consultant in natural language processing (NLP) and machine learning.
Nikola talks about the evolution of large language models (LLMs), where the core technology remains the same, but the number of parameters has grown exponentially and the capacity to fine-tune models on human data via reinforcement learning from human feedback has turbocharged the models’ capabilities.
Nikola unpacks the rapid increase in front-end use cases, with companies like Google and Microsoft already integrating LLMs into their products. At the same time, he speculates about the fate of the hundreds of startups using APIs to build similar tools, such as writing assistants or summarization services.
Nikola shares the limitations of an API-only approach: the underlying model is limited to the data it was trained on from the internet and is not fine-tuned to a specific domain or use case.
He discusses how LLMs perform when it comes to machine translation (MT). Although GPT is trained on large amounts of multilingual data, it’s not specialized in translation, so machine translation providers will retain their edge over ChatGPT for now.
Nikola predicts two different scenarios for the future of LLMs. In the first, large corporations quickly integrate LLMs into their products, competing with startups and putting many of them out of business. In the second, startups create novel use cases and integrate multimodal technology to build something completely new, distinct from what the big companies offer.