
As businesses look to deploy artificial intelligence, many are concerned about making sure the systems are 100% accurate in their responses, and that ‘AI hallucinations’, where the system seems to make up answers, are eliminated. However, there are cases where AI hallucinations can be good for a business. Keith chats with Ryan Welsh, Field CTO for Generative AI at Qlik, about how companies can determine the right level of accuracy for their AI needs, and whether hallucinations are OK in certain situations.