
LLMs fail because they're prediction machines, not fact machines. Michelle Robbins, Manager of Strategic Initiatives and Intelligence at LinkedIn, explains why AI models produce inconsistent responses and require critical evaluation. She advocates treating LLMs as thought partners rather than absolute authorities, and emphasizes the importance of AI alignment to prevent harmful outputs and ensure good human outcomes as we progress toward artificial general intelligence.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
4.3 (6,262 ratings)