
Scientists have recently released an application that lets very different categories of users draw their own conclusions about AI.
What did the app let us uncover, why did it prove so interesting to scientists, and why did it cause a public outcry?
While most users are busy testing the application and trying to prove it wrong, the scientists are trying to draw public attention to the ethical side of the issue and spark public debate. In this way the researchers hope to expose the reality of emotion recognition systems. Critics, in turn, argue that the technology violates privacy and is even racist.
To raise awareness of the technology and promote conversation about its use, the researchers built the emojify.info website, where anyone can try an emotion recognition system using a PC or mobile camera.
Dr Alexa Hagerty, a researcher at the University of Cambridge and project lead at the Leverhulme Centre for the Future of Intelligence, commented: "These developments are based on one form of facial recognition. But the technology has gone beyond that: it does not just identify people, it claims to be able to accurately read internal experiences and emotions from our faces."
Let’s go directly to the emojify.info website and see what functionality the application provides. In one of the games, users are asked to make a series of facial expressions at the camera and see whether they can fool the technology.
"The developers of this technology claim to read emotions," said Hagerty. In fact, the system can only count the movements of the face and combine with the assumption of what emotions may be behind it (for example, a smile = happiness).
Facial expressions do not always reflect true emotions, and people do not always adopt a particular expression for the same reason. Smirks, sarcasm, and other ambiguous states, for example, cannot be recognized by a machine (indeed, not even all people are able to recognize them).
There is already scientifically sound evidence that the expression of internal states is not nearly as simple as the creators of such systems would like it to be.
Which one of us hasn’t faked a smile trying to look cheerful at a holiday gathering, huh?
Some scientists have already voiced the view that it is time to pause the development of emotion recognition systems in this growing market.
What do you think - should AI specialists stop playing with fire now?
Our opinion here at Zfort Group is that there could be potential dangers in almost any technology. If we as humanity stopped exploring the world and science out of fear of something going wrong, we would be stuck in the Middle Ages forever. We are where we are only because we tame technologies to serve us.
Let us know if you disagree - open to being wrong, as always.