AI chatbots on mobile devices vary significantly in their data collection practices, raising important questions about user privacy. Recent analysis of popular applications shows that many collect a wide array of personal information, often far beyond what basic functionality requires. One high-profile chatbot was found to gather 23 distinct types of data, including sensitive information such as precise location, which only a small group of competitors collect. Beyond location, these apps frequently harvest contact details like names, email addresses, and phone numbers, as well as user-generated content, contact lists, and both search and browsing histories.

A common misconception is that switching to a paid subscription will reduce the amount of data being tracked. However, evidence suggests that upgrading to a premium plan does not lead to less data collection: users are often still treated as the product rather than the customer. A major privacy risk is the use of user interactions to train the underlying AI models, meaning conversations should generally not be considered private. As more data points are gathered, the likelihood increases that specific chat sessions can be identified and linked back to an individual.

For those seeking greater privacy, certain integrations offer a safer path by relying on negotiated contracts that ensure user queries are anonymized and not used as training data. Meanwhile, recent software updates for mobile and desktop operating systems, such as versions 26.4, continue to introduce new features and changes to the digital ecosystem.
Become a supporter of this podcast: https://www.spreaker.com/podcast/tech-talk-daily--6886557/support.