
As businesses look to deploy artificial intelligence, many are concerned about making sure the systems are 100% accurate in their responses, and that ‘AI hallucinations’, where the system seems to make up answers, are eliminated. However, there are cases where AI hallucinations can be good for a business. Keith chats with Ryan Welsh, Field CTO for Generative AI at Qlik, about how companies can determine the right level of accuracy for their AI needs, and whether hallucinations are OK in certain situations.
By Foundry · 3.4 (1,010 ratings)