


In this bonus episode, recorded live at our San Francisco office, security-startup founders Dean De Beer (Command Zero), Kevin Tian (Doppel), and Travis McPeak (Resourcely) share their thoughts on generative AI, as well as their experiences building with LLMs and dealing with LLM-based threats.
Here's a sample of what Dean had to say about the myriad considerations that go into choosing and operating a large language model:
"The more advanced your use case is, the more requirements you have, the more data you attach to it, the more complex your prompts — ll this is going to change your inference time.
"I liken this to perceived waiting time for an elevator. There's data scientists at places like Otis that actually work on that problem. You know, no one wants to wait 45 seconds for an elevator, but taking the stairs will take them half an hour if they're going to the top floor of . . . something. Same thing here: If I can generate an outcome in 90 seconds, it's still too long from the user's perspective, even if them building out and figuring out the data and building that report [would have] took them four hours . . . two days."
Follow everyone:
Dean De Beer
Kevin Tian
Travis McPeak
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.