
As businesses look to deploy artificial intelligence, many are concerned about making sure the systems are 100% accurate in their responses, and that ‘AI hallucinations’, where the system seems to make up answers, are eliminated. However, there are cases where AI hallucinations can be good for a business. Keith chats with Ryan Welsh, Field CTO for Generative AI at Qlik, about how companies can determine the right level of accuracy for their AI needs, and whether hallucinations are OK in certain situations.