
📓 NOTE: Please forgive the audio quality. I was out on a walk and had already been interrupted by leaf blowers, hedge trimmers, and skateboarders. At some point you have to ship, especially when the universe has made it clear you're destined not to get ideal audio.
How consistent is generative AI?
That’s a question I’ve been asking since I tried Perplexity.ai, a Google replacement I’d been meaning to check out for a while but had never quite found the time or motivation to try.
Seth Godin recently pushed Perplexity in one of his blog posts in which he claimed that he hadn’t performed a Google search in months because he’d been using Perplexity.
So I finally went over to Perplexity.ai and—one, because I’m vain and two, because I think you should search things you know about to test generative AI—I decided to search my name.
The answer it came back with was great and accurate.
It said that I was a writer and blogger at jakelacaze.com and then it summarized some of my views:
Jake LaCaze is a writer and blogger who runs the website jakelacaze.com. Based on the search results, here are some key points about him:
Jake LaCaze values serendipity and embracing uncertainty over relying solely on algorithms and data to curate experiences. He advocates for taking control of one’s online experience rather than being beholden to social media algorithms.
LaCaze is an advocate for Seth Godin’s philosophy of doing work that matters. He appreciates companies like iA Writer that prioritize serving their customers' true needs over pushing unnecessary features.
LaCaze encourages being thoughtful about the digital tools we use daily and bending technology to work for our individual needs, rather than the other way around. He believes processes and workflows should be optimized before adopting new tech solutions.
LaCaze has a critical view of aspects of the tech industry, coining the term “SaaShole” to describe tech bros who prioritize selling solutions over solving customer problems. He is concerned about the motivations and potential negative impacts of tech billionaires.
Overall, the search results portray LaCaze as a writer focused on finding balance and meaning in our relationship with technology, while maintaining a skeptical eye towards the profit motives and potential downsides of the tech industry.
I didn’t disagree with any of what Perplexity said. I’ve added links to Perplexity’s answers to give an idea of how it reached its conclusions.
Next, I downloaded the Perplexity app on my iPhone. Curious how the same search would look there, I searched my name again.
This time it focused on my oil and gas experience.
Two-thirds of the answer was correct, but at the end it said that I have a real estate license and work for a certain company in Louisiana.
That’s not true.
I’m from Louisiana, but that’s another Jake LaCaze, one who isn’t hard to find with a quick Google search.
This brought up an interesting question about consistency and context, because Perplexity gave me two very different kinds of answers.
The first one leaned on my writing and my views on tech.
The second answer emphasized my oil and gas experience and then went down the avenue of real estate, which is technically related. So I understand why Perplexity confused me and the other Jake LaCaze.
This question about context is really important, because if someone in oil and gas were to search for Jake LaCaze, that person is probably looking for me with my oil and gas experience, but someone else on the internet might just want to know about my tech writing.
Perplexity could give different answers depending on who is asking. But if the AI is just generating a new answer every time, what answer is any one person going to get? Does the AI really learn? Would it say: this person with this Perplexity account works in oil and gas, so I should give the oil and gas answer? Would it look at someone else and say: oh, this person is a tech critic, so maybe they want to know about Jake’s writing on tech?
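To be clear, I have no idea how Perplexity actually works under the hood. But here’s a toy sketch of the two ingredients that could explain what I saw: language models sample their output with some randomness, and a system could also weight its answer toward whatever it knows about the asker. The list of facts and the user_context parameter below are made up purely for illustration.

```python
import random

# A hypothetical pool of facts a search engine might retrieve for "Jake LaCaze".
# In a real system these would come from web pages, not a hard-coded list.
CANDIDATE_FACTS = [
    ("tech", "Jake LaCaze writes about technology at jakelacaze.com."),
    ("tech", "He coined the term 'SaaShole' for tech bros who sell solutions instead of solving problems."),
    ("energy", "Jake LaCaze has a background in oil and gas."),
    ("energy", "A Jake LaCaze in Louisiana holds a real estate license."),  # the other Jake
]

def answer(query: str, user_context: str = "", temperature: float = 1.0) -> str:
    """Toy 'generative' answer: sample facts, optionally biased toward the asker's context.

    The query itself is ignored here; only randomness and context matter in this sketch.
    """
    weights = []
    for topic, _ in CANDIDATE_FACTS:
        bias = 3.0 if topic in user_context else 1.0            # personalization
        weights.append(bias ** (1.0 / max(temperature, 0.01)))  # lower temperature sharpens the bias
    # Sampling (with replacement, which is fine for a sketch) means the same
    # question can come back different every single time.
    picked = random.choices(CANDIDATE_FACTS, weights=weights, k=2)
    return " ".join(fact for _, fact in picked)

# Same question, no context about the asker: two runs can emphasize different Jakes.
print(answer("Who is Jake LaCaze?"))
print(answer("Who is Jake LaCaze?"))

# An account whose context mentions energy is more likely to get the oil and gas angle.
print(answer("Who is Jake LaCaze?", user_context="works in energy"))
```

Again, that’s only a sketch of how randomness and personalization could pull answers in different directions, not a claim about what Perplexity actually does.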
These are just interesting questions that I wish we had more answers to.
How exactly does this technology work?
Why did I get two radically different answers?
And will there ever be a way to reconcile these questions of context and consistency with generative AI?
By Jake LaCaze