CERIAS Weekly Security Seminar - Purdue University
By CERIAS <[email protected]>
Rating: 4.1 (77 ratings)
The podcast currently has 1,022 episodes available.
Recorded: 09/18/2024 CERIAS Security Seminar at Purdue University. Exploiting Vulnerabilities in AI-Enabled UAV: Attacks and Defense Mechanisms. Ashok Vardhan Raja, Purdue University Northwest. In recent years, UAVs have seen significant growth in both military and civilian applications, thanks to their high mobility and advanced sensing capabilities. This expansion has been further accelerated by rapid advancements in AI algorithms and hardware. While AI integration enhances the intelligence and efficiency of UAVs, it also introduces new security and safety concerns due to potential vulnerabilities in the underlying AI models. These vulnerabilities can be exploited by malicious actors, leading to severe security risks and operational failures. This talk will focus on securing the integration of AI into UAVs to ensure their resilience in adversarial environments. We will begin by analyzing the data sensing and processing pipeline of key sensors used in AI-enabled UAV operations, identifying areas where vulnerabilities may exist. Following this, we will explore how to develop defense mechanisms to strengthen the robustness of these AI-driven UAV systems against potential threats. AI-enabled anomaly detection and AI-enabled UAV infrastructure inspection will be leveraged as case studies in this talk. The talk will also cover the use of Large Language Models to improve the security of this integration. About the speaker: Ashok Vardhan Raja is an Assistant Professor of Cybersecurity in the Department of Computer Information Technology and Graphics in the College of Technology at Purdue University Northwest. His research is on the secure integration of Artificial Intelligence (AI) and Cyber-Physical Systems (CPS), such as UAVs, for robust operations. He is expanding his current work by using swarms of UAVs to address security issues and by extending it to other domains in the integration of AI and CPS.
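As a toy illustration of the kind of AI-model vulnerability the abstract describes (this sketch is ours, not the speaker's), the snippet below applies a fast-gradient-sign-style perturbation to a hypothetical linear "perception" model: a small, bounded change to every input element is enough to flip the model's decision. The 64-element frame, the linear classifier, and the perturbation budget are all invented for the example.

```python
import random

# Toy stand-in for an AI-enabled UAV perception model: a linear
# classifier over a flattened 64-element sensor frame. (Real systems
# use deep networks; this only illustrates the attack principle.)
random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # model weights (known to attacker)
x = [random.gauss(0, 1) for _ in range(64)]   # a benign sensor frame

def score(frame):
    return sum(wi * fi for wi, fi in zip(w, frame))

def predict(frame):
    return 1 if score(frame) > 0 else 0

# Fast-gradient-sign-style perturbation: the gradient of score() with
# respect to the input is just w, so nudging each element by
# eps * sign(w_i) moves the score as far as possible per unit of change.
eps = 0.5
direction = -1.0 if predict(x) == 1 else 1.0  # push across the decision boundary
x_adv = [xi + direction * eps * (1.0 if wi > 0 else -1.0)
         for xi, wi in zip(x, w)]

print("original:", predict(x), "adversarial:", predict(x_adv))
```

No element of the input changes by more than 0.5, yet the classification flips, which is exactly why perturbation-robust defenses for sensing pipelines matter.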
The Information Design Assurance Red Team (IDART) methodology is optimized to evaluate system designs and identify vulnerabilities by adopting, in detail, the varying perspectives of a system's most likely adversaries. The results provide system owners with an attacker's-eye view of their system's strengths and weaknesses. IDART can be applied to a diversity of complex networks, systems, and applications, including those that mix cyber technology with industrial machinery or other equipment. The methodology can be used throughout a system's lifecycle, but the assessments are less expensive and more beneficial during design and development, when weaknesses can be found and mitigated more easily. Developed at Sandia National Laboratories in the mid-1990s and updated frequently, the IDART framework is NIST-recognized and designed for repeatability and measurable results. A typical assessment includes the following high-level activities:
- Characterizing the target system and its architecture
- Identifying nightmare consequences
- Analyzing the system for security strengths and weaknesses
- Identifying potential vulnerabilities that could lead to nightmare consequences
- Documenting results and providing prioritized mitigation strategies
IDART assessors think like adversaries. To do this, they first develop a range of categorical profiles or "models" of a system's most likely attackers. Factors include an adversary's specific capabilities (i.e., domain knowledge, access, resources) as well as intangibles such as motivation and risk tolerance. The assessment team then uses this adversarial lens to measure the risks posed by system weaknesses and to prioritize mitigations. For efficiency and thoroughness, IDART relies on a free exchange of information. System personnel share documentation and participate in discussions that help assessors efficiently find as many attack paths as possible.
In turn, the IDART team is transparent in conducting its assessment activities, giving system owners greater confidence in the work and the resulting analysis. All of these traits combine to make IDART a highly flexible tool. The methodology helps system owners identify critical vulnerabilities, understand adversary threats, and weigh appropriate strategies for delivering components, systems, and plans that are both effective and secure. About the speaker: Russel Waymire is a manager at Sandia National Laboratories in the area of Cyber-Physical Security. Mr. Waymire has over 25 years of experience in the design, implementation, testing, reverse engineering, and securing of software and hardware systems in IT and OT environments. He began his career as a software developer at Honeywell Defense Avionic Systems in Albuquerque, New Mexico, where he developed the requirements, design, implementation, and testing of software for a variety of platforms, including the F-15, C-27J, KC-10, C-130, and C-5 aircraft. He then moved to Sandia National Laboratories in Albuquerque, New Mexico, where he has worked on a wide range of projects, including algorithms in combinatorial optimization, software development for mod-sim force-on-force interactions and cognition/AI development, satellite software for operational systems in orbit, cyber vulnerability assessments for various US government agencies, and cyber-physical assessments for numerous foreign partners, including physical and cyber upgrades at nuclear power plants and research reactors worldwide. Russel currently uses his experience and insights to lead a team researching innovative ways to protect critical infrastructure, space systems, and other high-consequence operational technologies.
At Purdue University, Ms. Kubecka will discuss how technologists, especially the next generation of digital defenders, can be empowered to consider ethics in cybersecurity, privacy, and emerging technologies, and how they can use their power for good in tech. About the speaker: Ms. Chris Kubecka is a globally recognized cybersecurity expert with over two decades of experience, known for her pivotal role in digital defense and her commitment to ethical technology practices. She has established a formidable reputation for protecting both national and international cybersecurity interests, often at the highest levels of government and industry. Ms. Kubecka's career began with a strong technical foundation, rapidly advancing into leadership roles that demand both tactical acumen and strategic foresight. Her expertise spans cyber warfare, digital intelligence, artificial intelligence, and the development of robust cybersecurity frameworks, including those addressing the challenges of post-quantum computing. A thought leader in cybersecurity, Ms. Kubecka frequently contributes to international conferences, policy discussions, and academic forums. She is the author of several influential books, including Hack The World With OSINT, and has published numerous research papers on platforms like ResearchGate. Her work often explores the ethical implications of emerging technologies and the critical role of privacy in cybersecurity. Ms. Kubecka serves as the CEO and Founder of HypaSec NL, Senior Cybersecurity Advisor for Elemental Concept, and Chief Hacktress for Unit6 Technologies. Her significant contributions to the field have been recognized with numerous awards, including The Order of Thor. She is also a former Distinguished Chair for the Middle East Institute's Cyber Security and Emerging Technology Program. Throughout her career, Ms. Kubecka has led critical operations that highlight the intersection of cybersecurity and human rights.
During the conflict in Ukraine, she used her expertise to facilitate the evacuation of civilians, applying digital intelligence to support these missions. In Venezuela, her investigations uncovered the weaponization of government-backed applications, such as the Ven App and Patrica App, which are used for surveillance and repression of dissent. Her research revealed how these apps are being exploited to target citizens, leading to arrests, disappearances, and even deaths, underscoring the dire consequences of unethical technology use. Ms. Kubecka's background as a USAF aviator and former member of the USAF Space Command highlights her extensive commitment to defense in both the physical and digital realms. Her journey began at a young age, with her early technical skills leading to her first major hacking achievement at age ten.
Students: this is a hybrid event. You are strongly encouraged to attend in person. Location: STEW G52 (Suite 050B), WL Campus. The rapid commercialization of GenAI products and services has significantly broadened the landscape of potential attack vectors targeting enterprise infrastructure, operations, and processes. This evolution poses substantial risks to enterprise assets and operations, requiring continuous risk, attack, and threat surface analysis. This exploratory study delineates critical findings across three key dimensions:
- an analysis of current market trends related to AI-driven cyber and information security risks;
- an overview of emerging regulatory requirements and compliance efforts specific to AI technologies; and
- strategic initiatives for identifying and mitigating these risks, informed by insights from both industry and academia.
The presentation provides a roadmap for technology practitioners navigating the complex intersection of AI innovation and cybersecurity. About the speaker: David is an Assistant Director in Ernst & Young's Americas Technology Risk Management practice. He focuses on Americas and Global technology risk assessments, supports IT and data regulatory efforts, and coordinates IT risk management processes for member firms. He brings over eight years of external and internal experience in information security consulting, technology, IT audit, and GRC across public and private industries. He previously served as an adjunct instructor and lecturer for undergraduate programs at Purdue University Northwest. David is pivotal in supporting EY's strategic technology, information security, and compliance projects.
His specialties include continuous risk identification and analysis, GRC strategy development, security control testing analysis (e.g., NIST, ISO), and solutions development to manage enterprise risks across various IT domains and emerging technologies (e.g., AI). David is a passionate and dedicated professional who embodies the mindset of a continuous learner in IT, information security, emerging technologies, and data privacy. He proactively expands his knowledge and skill sets by pursuing advanced degrees, obtaining professional certifications, and conducting domestic and international speaking engagements.
The increased use of machine learning (ML) technologies on proprietary and sensitive datasets has led to increased privacy breaches in many sectors, including healthcare and personalized medicine. Although federated learning (FL) systems allow multiple parties to train ML models collaboratively without sharing their raw data with third-party entities, security concerns arise from the involvement of potentially malicious FL clients aiming to disrupt the learning process. In this talk, I will present how my research addresses these challenges by developing frameworks to analyze and improve the privacy and security aspects of ML. First, I will talk about model inversion attacks that allow an adversary to infer part of the sensitive training data with only black-box access to a vulnerable classification model. I will then present FLShield, a novel FL framework that utilizes benign data from FL participants to validate the local models before taking them into account for generating the global model. I will conclude with a discussion of challenges in building practical data-driven systems that take into account data privacy and security while keeping the intended functionality of the system unimpaired. About the speaker: Shagufta Mehnaz is an Assistant Professor in the Computer Science and Engineering Department at The Pennsylvania State University. She is broadly interested in the areas of privacy, security, and machine learning. Her research focuses on enhancing the privacy and security of machine learning techniques and models themselves, as well as developing novel machine learning techniques to protect data security and privacy. She directs the PRIvacy, Security, and Machine Learning lab (PRISMLab) at Penn State. She obtained her Ph.D. in Computer Science from Purdue University in 2020. She also received the Bilsland Dissertation Fellowship at Purdue.
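To make the validate-before-aggregate idea concrete, here is a deliberately minimal sketch; it is our illustration of the general principle, not the actual FLShield algorithm or its API. The server scores each client's submitted model on held-out benign validation data and drops any update whose loss is far above the best one before averaging. The single-parameter "model," the sample values, and the tolerance threshold are all invented for the example.

```python
# Each "model" is a single scalar parameter fit to scalar data;
# real FL systems aggregate full weight vectors the same way.

def loss(param, data):
    """Mean squared error of a candidate parameter on validation data."""
    return sum((param - d) ** 2 for d in data) / len(data)

def aggregate(updates, val_data, tolerance=2.0):
    """Average only the updates that validate well on benign data."""
    baseline = min(loss(u, val_data) for u in updates)
    kept = [u for u in updates if loss(u, val_data) <= tolerance * baseline]
    return sum(kept) / len(kept)

val_data = [0.9, 1.0, 1.1]           # held-out benign validation samples
updates = [1.02, 0.97, 1.05, 8.0]    # last client submits a poisoned model
print(aggregate(updates, val_data))  # the poisoned update is filtered out
```

A plain average of all four updates would be pulled far off target by the single poisoned client; validation-based filtering keeps the global model near the benign consensus.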
She was one of the 100 Computer Science Young Researchers selected worldwide for the Heidelberg Laureate Forum (HLF) in 2018.
For the past four years, Sandia National Laboratories has been conducting a focused research effort on Trusted AI for national security problems. The goal is to develop the fundamental insights required to use AI methods in high-consequence national security applications while also improving the practical deployment of AI. This talk looks at key properties of many national security problems along with Sandia's ongoing effort to develop a certification process for AI-based solutions. Along the way, we will examine several recent and ongoing research projects, including how they contribute to the larger goals of Trusted AI. The talk concludes with a forward-looking discussion of remaining research gaps. About the speaker: David manages the Machine Intelligence and Visualization department, which conducts cutting-edge research in machine learning and artificial intelligence for national security applications, including the advanced visualization of data and results. David has been studying machine learning in the broader context of artificial intelligence for over 15 years. His research focuses on applying machine learning methods to a wide variety of domains with an emphasis on estimating the uncertainty in model predictions to support decision making. He also leads the Trusted AI Strategic Initiative at Sandia, which seeks to develop fundamental insights into AI algorithms, their performance and reliability, and how people use them in national security contexts. Prior to joining Sandia, David spent three years as research faculty at Arizona State University and one year as a postdoc at Stanford University developing intelligent agent architectures. He received his doctorate in 2006 and MS in 2002 from the University of Massachusetts at Amherst for his work in machine learning. 
David earned his Bachelor of Science from Clarkson University in 1998. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
In this presentation, I provide a thorough exploration of how dataflow analysis serves as a formidable method for discovering and addressing cybersecurity threats across a wide spectrum of vulnerability types. For instance, I'll illustrate how we can employ dynamic information flow tracking to automatically detect "blind spots"—sections of a program's input that can be changed without influencing its output. These blind spots are almost always indicative of an underlying bug. Furthermore, I will demonstrate how the use of hybrid control- and dataflow information in differential analysis can aid in uncovering variability bugs, commonly known as "heisenbugs." By delving into these practical applications of dataflow analysis and introducing open-source tools designed to implement these strategies, the goal is to present practical steps for pinpointing, debugging, and managing a diverse array of software bugs. About the speaker: Dr. Evan Sultanik is a principal computer security researcher at Trail of Bits. His recent research covers language-theoretic security, program analysis, detecting variability bugs via taint analysis, dependency analysis via program instrumentation, and consensus protocols for distributed ledgers. He is an editor of and frequent contributor to the offensive computer security journal "Proof of Concept or GTFO." Prior to joining Trail of Bits, Dr. Sultanik was the Chief Scientist at Digital Operatives and, prior to that, a Senior Research Scientist at The Johns Hopkins Applied Physics Laboratory. His dissertation was on the discovery of a family of combinatorial optimization problems whose solutions can be approximated to within a constant factor of optimal in polylogarithmic time on a parallel computer or distributed system. This was a surprising result, since many of the problems in the family are NP-Hard. In a life prior to academia, Evan was a professional software engineer.
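The "blind spot" notion lends itself to a compact illustration. The talk's tooling detects blind spots with dynamic information flow tracking; the sketch below (ours, with a toy parser invented for the example) instead brute-forces the same property: any input byte that can take all 256 values without changing the program's output is never really consumed, which here exposes a validation the parser forgot.

```python
def parse_header(data: bytes) -> str:
    # Buggy toy parser: reads a 2-byte magic and a 1-byte version at
    # offset 3, but never inspects byte 2 (a flag it should validate).
    if data[:2] != b"OK":
        return "bad-magic"
    return f"v{data[3]}"

def blind_spots(program, sample: bytes):
    """Return input offsets whose value never influences the output."""
    base = program(sample)
    spots = []
    for i in range(len(sample)):
        if all(program(sample[:i] + bytes([v]) + sample[i + 1:]) == base
               for v in range(256)):
            spots.append(i)
    return spots

print(blind_spots(parse_header, b"OK\x00\x07"))  # byte 2 is a blind spot
```

Brute-forcing costs 256 executions per byte and only works for tiny inputs; dynamic information flow tracking finds the same untainted-by-output regions in a single instrumented run, which is why it scales to real programs.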
The aim of this discussion is to publicize both the challenge and a potential solution for the integration of secure supply chain risk management content into conventional software engineering programs. The discipline of software engineering typically does not teach students how to ensure that the code produced and sold in commercial off-the-shelf (COTS) products hasn't been compromised during the sourcing process. We propose a comprehensive and standard process, based on established best-practice principles, that can provide the basis to address the secure sourcing of COTS products. About the speaker: Dr. Dan Shoemaker received a doctorate from the University of Michigan in 1978. He taught at Michigan State University and then moved to the Business School at the University of Detroit Mercy to chair their Department of Computer Information Systems (CIS). He attended the organizational roll-out of the discipline of software engineering at the Carnegie-Mellon University Software Engineering Institute in the fall of 1987. From that, he developed and taught an SEI-based software engineering curriculum as a degree program separate from the MBA within the College. During that time, Dr. Shoemaker's specific areas of scholarship, publication, and teaching centered on the processes of the SWEBOK, specifically specification, SQA, and SCM/sustainment. Dr. Shoemaker's transition into cybersecurity came after UDM was designated the 39th Center of Academic Excellence by the NSA/DHS at West Point in 2004. His research concentrated on the strategic architectural aspects of cybersecurity system design and implementation, as well as software assurance. He was the Chair of Workforce Training and Education for the DHS/DoD Software Assurance initiative (2007-2010), and he was one of the three authors of the Common Body of Knowledge to Produce, Acquire, and Sustain Software (2006). He was also a subject matter expert for NICE (2009) and NICE II (2010-11). Dr.
Shoemaker was also an SME for the CSEC 2017 (Human Security). This exposure led to a grant to develop curricula for software assurance and the founding of the Center for Cybersecurity and Intelligence Studies, where he currently resides. Dr. Shoemaker's final significant grant was from the DoD to develop curriculum, teaching, and course materials for Secure Acquisition (in conjunction with the Institute for Defense Analysis and the National Defense University). He has published 14 books in the field, ranging from Cyber Resilience (CRC Press) to the CSSLP All-In-One (McGraw-Hill). His latest book, "Teaching Cyber Security" (Taylor and Francis), is aimed at K-12 teachers.
This talk examines how cybersecurity relates to various fields of business and industry: how it works in these fields and the different risks and vulnerabilities that exist, which explains why designing cybersecurity into a product or service from the start is so imperative. In companies today, budget managers, business managers, and engineers are making decisions on their cybersecurity options without including cybersecurity experts in the process. Without input from cybersecurity experts, some cybersecurity decisions are made with cost savings as the primary goal, and cutting corners in cybersecurity can be a very bad idea.
Reputation systems are crucial to online platforms' health. They are prevalent across online marketplaces and social media platforms, either visibly (e.g., as star ratings and badges) or invisibly as signals that feed into recommendation engines. In theory, good behavior (e.g., honest, accurate, high-quality) begets high reputation, while poor behavior is deterred and pushed off the platform. In this talk, I will discuss how these systems seem to fulfill this mission only coarsely. On one platform, we were able to predict twice as many suspensions as the reputation system in place by using other public signals. In another study, we found that users with high reputation signals were suspended at significantly lower rates (up to 3 times less) for the same number of offenses and the same behavior as regular users, which suggests these signals may be impairing content moderation efforts. I will provide some hypotheses to explain these results and offer preliminary findings from current work. About the speaker: Alejandro is a 5th-year PhD student in Societal Computing at Carnegie Mellon University, advised by Prof. Nicolas Christin. He is interested in measuring social influence in online communities adjacent to underground economies. His recent work focuses on how reputation is leveraged in anonymous marketplaces, p2p marketplaces, and cryptocurrency communities. He is a recipient of a CMU CyLab Presidential Fellowship, as well as an IEEE S&P Distinguished Paper Award. Prior to CMU, he obtained a B.S. from The Pennsylvania State University, where he worked with Prof. Peng Liu and Prof. Xinyu Xing on a variety of systems security projects. A Paraguayan native, Alejandro has been invited to talk about his work at the Paraguayan Central Bank and the Paraguayan National Police.