
In today’s interview, Gaurav “GP” Pal of stackArmor sounds a warning: a sudden jump into any technology can bring unintended consequences. He offers suggestions for making AI meaningful, safe, and dependable.
Everyone remembers Bill Murray in Groundhog Day, living the same day over and over. After a few years of watching federal technology, we are seeing something similar.
First, the commercial sector is dazzled. Next, federal technology leaders feel they must get in the same boat. A few years ago, many agencies jumped headfirst into the cloud, and FedRAMP had to come along to put guardrails on the cloud experience.
Fast forward to 2024, and artificial intelligence is falling into the same pattern. Federal technology leaders feel left out and rush to adopt; only later does guidance emerge to rectify some of the unintended consequences of artificial intelligence.
Researchers are seeing attack vectors that are unique to AI. What security principles should be considered when building a Large Language Model? Can bias be introduced?
What controls do you have in place today that you can apply to your agency’s AI journey?
To bring a diverse set of opinions to bear on this guidance, GP discusses his company’s development of an AI Risk Intelligence Center of Excellence. stackArmor has assembled a high-powered group of leaders with federal experience to provide training models and actionable steps for making a safe and secure transition to AI.