AI with AI: Artificial Intelligence with Andy Ilachinski
By CNA
Rated 5 · 4,646 ratings
The podcast currently has 242 episodes available.
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to its AI ethics and governance standards. Reported in February but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin's X-62A VISTA, an F-16 variant. Andy provides a rundown of a large number of recent ChatGPT-related stories. Wolfram "explains" how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle: we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a "not super-difficult" method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
Andy and Dave discuss the latest in AI news and research, including the update of Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, detailing its plans for a national research infrastructure and its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, Kristen Fletcher and Marina Lesse of the Naval Postgraduate School's Energy Academic Group join to discuss their research and efforts in autonomous systems and maritime law and policy, including a discussion of the DoDD 3000.09 update and the high-altitude balloon incident.
https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, starting with an education program that teaches US Air Force personnel the fundamentals of AI, with tracks for three types of personnel: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring "additional technology policy expertise, diplomatic leadership, and strategic direction to the Department's approach to critical and emerging technologies." Google calls in its founders, Larry Page and Sergey Brin, to help address the potential threat that ChatGPT and other AI technology pose to its business. Researchers from Northwestern University demonstrate that ChatGPT can write fake research-paper abstracts that pass plagiarism checkers; human reviewers were able to correctly identify only 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. CheckPoint Research demonstrates how cybercriminals, including those without any experience in creating malicious tools, can use ChatGPT for nefarious exploits. Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone's voice from only three seconds of sample audio. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI- and autonomy-related information from Russia and Ukraine.
Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Stanford's Institute for Human-Centered AI that assesses progress (or lack thereof) in implementing the three pillars of America's strategy for AI innovation. The Department of Energy is offering a total of $33M for research on leveraging AI/ML for nuclear fusion. China's Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation of "deepfakes," requiring users to give consent and prohibiting the technology for fake news, among many other things. Xiamen University and other researchers publish a "multidisciplinary open peer review dataset" (MOPRD), aiming to provide ways to automate the peer-review process. Google executives issue a "code red" for Google's search business over the success of OpenAI's ChatGPT. New York City schools have blocked student and teacher access to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to "polish" writing). In February, an AI from DoNotPay will likely become the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, which show a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in large language models with a sufficient number of parameters and a large enough dataset. And finally, John H. Miller publishes Ex Machina: Coevolving Machines and the Origins of the Social Universe through the Santa Fe Institute Press. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of "AI" and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but to four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board of Supervisors voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate response from a "No Killer Robots" campaign, in early December the board passed a revised version of the policy that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a "drone-based loitering munition" that can carry lethal or non-lethal payloads and appears to have many functions similar to the "slaughterbots," except for autonomous targeting. Neuralink shows the latest updates on its research into putting a brain-chip interface into humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story that we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily bans users from sharing responses generated by ChatGPT, because the output of the algorithm might look good but has "a high rate of being incorrect." Researchers at the Weizmann Institute of Science demonstrate that, for a simple neural network, it is possible to reconstruct a "large portion" of the actual training samples. NOMIC provides an interactive map for exploring over 6M images from Stable Diffusion. Steve Coulson creates "AI-assisted comics" using Midjourney. Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week, from Ricard Solé at the Santa Fe Institute, explores mapping the cognition space of liquid and solid brains. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI news and research, including a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of the biometric data of Texans without their consent. DARPA flies the final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael's DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines large language models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts the structures of some 600 million proteins, "mostly unknown." And Meta also releases (then takes down due to misuse) Galactica, a 120B-parameter language model for scientific papers. In a similar but less turbulent vein, Explainpaper lets users upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes Data Science and Machine Learning: Mathematical and Statistical Methods free online, a thorough text for upper-level undergraduate or graduate study. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, discussing the cybernetic brain and his book of the same name, published in 2011. https://www.cna.org/our-media/podcasts/ai-with-ai
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that would make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of Science and Technology Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Various top robot companies pledge not to add weapons to their technologies and to work actively to keep their robots from being used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available to everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey, conducted in conjunction with New York University, providing AI researchers' views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research, politics, and safety to specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week, from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining the discussion at a US Naval Institute symposium.
Dr. Anya Fink from CNA’s Russia Studies program joins the podcast to discuss the impacts of global sanctions on Russia’s technology and AI sector.
Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK's National Cyber Security Centre providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its "AI Hype Cycle," which qualitatively plots the position of various AI efforts along the hype cycle. PromptBase opens its doors, promising to provide users with better "prompts" for text-to-image generators (such as DALL-E) to generate "optimal images." Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. Meta AI announces a nascent ability to decode speech from a person's brain activity without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its Aliengo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples, for example generating "pixel-perfect" predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer by controlling them at the level of joint torques, combining that low-level control with longer-term goal-directed behavior; the agents demonstrate jostling for the ball and other behaviors. Researchers at the University of Illinois Urbana-Champaign and MIT demonstrate a composable diffusion model to tweak and improve the output of text-to-image transformers. Google Research publishes results on AudioLM, which generates "natural and coherent continuations" given short audio prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend. https://www.cna.org/our-media/podcasts/ai-with-ai