


Artificial Intelligence has just pushed the envelope again.
Not content with beating humans at single-player games like Chess and Go, and not resting on its poker-playing and Jeopardy-winning laurels, AI has now proven it can perform as a team and trounce its opposition.
Researchers at OpenAI, a not-for-profit based in California, designed a team of five algorithms to play Dota 2. It's a popular computer strategy game that needs co-operation and team-play to win. And it's reportedly very difficult.
OpenAI set up five algorithms to harness a neural network to learn not only how to play the game, but also how to co-operate with its AI teammates.
And it is this teamwork and co-operation that are of most interest to me as a business communicator. The algorithms co-operated so well that when a human player was added to the team as a trial, he reported feeling very well supported.
So the AI bots we build to help us with administration will, if the algorithms are right, provide good support for us in our working days. They will be able to co-operate with other humans and AI algorithms to support us and our clients in our endeavours.
Equally, the AI assistants we use at home, like Siri, Google Assistant, and Amazon's Alexa, will get smarter and smarter as they learn more about us.
This is exciting!
Source: https://www.technologyreview.com/s/611536/a-team-of-ai-algorithms-just-crushed-expert-humans-in-a-complex-computer-game/
But now a message from the dark side of AI: what happens when decisions are made that can't be reversed?
Ibrahim Diallo is a computer programmer in Los Angeles who was reportedly fired by AI and whose managers and directors were simply unable to counter the decision. And it appears that he was terminated because someone, somewhere, forgot to code in a renewal of his employment contract.
What was worrying was that despite the best efforts of his immediate manager and her director, the system accepted no input from them. It sent repeated emails to various departments authorising the termination of his employment contract, the disablement of his security passes, and his lockout from computers and programs, with no correspondence to be entered into. Two members of the security department escorted him out of the building, and the poor contractor found that his colleagues, who knew nothing except that Security had walked him out, became distant with him.
He lost three weeks' pay and took a job at another company while still waiting for his case to be resolved.
As Shel Holtz and Neville Hobson said in episode #143 of their industry podcast, For Immediate Release, it's a pity that AI will get the blame for what is surely a bureaucratic failing. The system was designed with no supervisor input: there was no failsafe that allowed a human to review the case and make the final decision. Whoever designed it didn't predict that a disgruntled former employee would fail to carry out their final tasks for the organisation and renew employment contracts. The result was that a contractor who had been doing really well was fired, and AI got a black eye in the media.
Sources:
https://idiallo.com/blog/when-a-machine-fired-me
https://www.linkedin.com/pulse/fir-podcast-143-fired-mistake-artificial-intelligence-shel-holtz/
And speaking of Shel and Neville, in their latest episode they also talk about IBM's debating AI, Project Debater, and how such an unbiased system could look at a company's business plans and point out problems and challenges that might otherwise go unseen.
Project Debater is a six-year-old algorithm that has been fed millions of articles on a wide range of topics. It was only two years ago that it was first able to debate with people, pitting its algorithms against skilled human debaters. In one of its most recent outings, it beat the president of Israel's International Debate Society in a structured, formal debate. The audience of 40 preferred the human debaters when it came to humour, but preferred the AI when it came to marshalling and deploying information and knowledge.
And it's this ability to take a plethora of facts and information and dispassionately build an argument, and to point out flaws in others' arguments, that Shel Holtz suggests makes it an extremely plausible piece of AI for organisations.
As business communicators, it is often our job to point out the flaws in others' plans in order to protect the reputation of the organisation. Politically, wouldn't it be better if that dissenting voice were delivered through the impartial voice of AI, rather than our own?
Source: https://venturebeat.com/2018/06/18/ibm-debuts-project-debater-experimental-ai-that-argues-with-humans/
By Lee Hopkins