
As businesses look to deploy artificial intelligence, many are focused on ensuring the systems are 100% accurate in their responses and that 'AI hallucinations', where the system appears to make up answers, are eliminated. However, there are cases where AI hallucinations can actually benefit a business. Keith chats with Ryan Welsh, Field CTO for Generative AI at Qlik, about how companies can determine the right level of accuracy for their AI needs, and whether hallucinations are acceptable in certain situations.