BrakeSec Education Podcast

Bronwen Aker - Harnessing AI to improve your workflows



Guest Info:

  • Name:       Bronwen Aker

  • Contact Information: https://br0nw3n.com/ 

  • Time Zone(s): Pacific, Central, Eastern

 

–Copy begins–

 

Disclaimer: The views, information, or opinions expressed on this program are solely the views of the individuals involved and by no means represent absolute facts. Opinions expressed by the host and guests can change at any time based on new information and experiences, and do not represent views of past, present, or future employers.

 

Recorded: https://youtube.com/live/guhM8v8Irmo?feature=share 

 

Show Topic Summary: By harnessing AI, we can be proactive in discovering evolving threats, safeguarding sensitive data, analyzing data, and creating smarter defenses. This week, we’ll be joined by Bronwen Aker, who will share invaluable insights on creating a local AI tailored to your unique needs. Get ready to embrace innovation, transform your work life, and contribute to a safer digital world with the power of artificial intelligence! (heh, I wrote this with the help of AI…)

Questions and topics: (please feel free to update or make comments for clarifications)

  1. Things that concern Bronwen about AI (https://br0nw3n.com/2023/12/why-i-am-and-am-not-afraid-of-ai/). Data Amplification: Generative AI models require vast amounts of data for training, leading to increased data collection and storage. This amplifies the risk of unauthorized access or data breaches, further compromising personal information.

  2. Data Inference: LLMs can deduce sensitive information even when not explicitly provided. They may inadvertently disclose private details by generating contextually relevant content, infringing on individuals’ privacy.

  3. Deepfakes and Misinformation: Generative AI can generate convincing deepfake content, such as videos or audio recordings, which can be used maliciously to manipulate public perception or deceive individuals. (Elections, anyone?)

  4. Bias and Discrimination: LLMs may inherit biases present in their training data, perpetuating discrimination and privacy violations when generating content that reflects societal biases.

  5. Surveillance and Profiling: The utilization of LLMs for surveillance purposes, combined with big data analytics, can lead to extensive profiling of individuals, impacting their privacy and civil liberties.

  6. Setting up a local LLM? CPU vs. GPU models: pros, cons, and benefits? (see the local-inference sketch after this list)

  7. What can people do if they lack local resources? Cloud instances (EC2, DigitalOcean)? Or use a smaller model?

  8. https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/ 

  • AI coding assistants are hallucinating package names

  • 5.2 percent of package suggestions from commercial models didn't exist, compared to 21.7 percent from open source or openly available models

  • Attackers can then publish malicious packages under the invented names; some are quite convincing, with READMEs, fake GitHub repos, and even blog posts (a quick pre-install existence check is sketched after this list)

  • An evolution of typosquatting dubbed “slopsquatting” by Seth Michael Larson of the Python Software Foundation

  • Threat actor "_Iain" posted instructions and videos on using AI to mass-generate fake packages, from creation through exploitation
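
For topics 6 and 7 above, here is a minimal sketch (an illustration, not Bronwen's setup) of querying a locally hosted model from Python. It assumes Ollama is installed and serving on its default port, and that a model has already been pulled; the model name "llama3" and the prompt are placeholders. The same request works against a cloud VM (EC2, DigitalOcean) by changing the host, and smaller quantized models are the usual fallback on CPU-only hardware.

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is serving on its default port (11434) and that a model
# (here "llama3", a placeholder) has been pulled with `ollama pull`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local model and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Why does running an LLM locally help protect sensitive data?"))
```

Because nothing leaves localhost, prompts and documents never reach a third-party API, which is the main privacy argument for local inference.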
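
For the package-hallucination item (8) above, one low-effort defense is to confirm that an AI-suggested dependency actually exists before installing it. The sketch below checks the public PyPI JSON API; the package names are placeholders, and existence alone does not prove a package is trustworthy (slopsquatters register real packages), it only filters out names nobody has published.

```python
# Minimal sketch: check whether AI-suggested package names exist on PyPI
# before running pip install. A 404 from the JSON API means the name is
# unregistered and may be a hallucination; a 200 means it exists, which
# by itself says nothing about whether it is safe.
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI has a project registered under this name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    # Placeholder names standing in for whatever an assistant suggested.
    for name in ["requests", "definitely-not-a-real-pkg-12345"]:
        print(f"{name}: {'found' if exists_on_pypi(name) else 'NOT on PyPI'}")
```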

 

Additional information / pertinent links (Would you like to know more?):

  1. https://www.reddit.com/r/machinelearningnews/s/HDHlwHtK7U

  2. https://br0nw3n.com/2024/06/llms-and-prompt-engineering/ - Prompt Engineering talk

  3. https://br0nw3n.com/wp-content/uploads/LLM-Prompt-Engineering-LayerOne-May-2024.pdf (slides)

  4. Daniel Miessler’s ‘Fabric’ - https://github.com/danielmiessler/fabric

  5. https://www.reddit.com/r/LocalLLaMA/comments/16y95hk/a_starter_guide_for_playing_with_your_own_local_ai/ 

  6. Ollama tutorials from Matt Williams (Ollama co-founder): https://www.youtube.com/@technovangelist

  7. https://mhtntimes.com/articles/altman-please-thanks-chatgpt 

  8. https://www.whiterabbitneo.com/ - AI for DevSecOps, Security

  9. https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/ - Retrieval-Augmented Generation (RAG) explainer (a toy RAG sketch follows this list)

  10. https://www.youtube.com/watch?v=OuF3Q7jNAEc - neverending story using an LLM

  11. https://science.nasa.gov/venus/venus-facts 
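
Related to the RAG link in item 9 above, here is a toy retrieval-augmented generation loop against a local Ollama server. It assumes an embedding model ("nomic-embed-text") and a chat model ("llama3") have already been pulled; both model names, the example documents, and the question are placeholders, and the endpoint/field names follow Ollama's /api/embeddings and /api/generate APIs (adjust if your version differs).

```python
# Toy RAG sketch: embed a few notes, retrieve the one closest to the
# question, and ground the model's answer in it. Assumes a local Ollama
# server with "nomic-embed-text" and "llama3" pulled (placeholder names).
import json
import math
import urllib.request

OLLAMA = "http://localhost:11434"

def _post(path, payload):
    """POST a JSON payload to the local Ollama server and decode the reply."""
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text):
    return _post("/api/embeddings",
                 {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "knowledge base" -- in practice, chunks of your own documents.
docs = [
    "Quantized 7B models run acceptably on CPU-only laptops.",
    "GPU inference needs enough VRAM to hold the model weights.",
]
doc_vecs = [(d, embed(d)) for d in docs]

# Retrieve the most relevant chunk, then answer using only that context.
question = "Can I run a small model without a GPU?"
q_vec = embed(question)
best_doc = max(doc_vecs, key=lambda dv: cosine(q_vec, dv[1]))[0]

answer = _post("/api/generate", {
    "model": "llama3",
    "prompt": f"Context: {best_doc}\n\nQuestion: {question}\n"
              "Answer using only the context above.",
    "stream": False,
})["response"]
print(answer)
```

The retrieval step is what keeps the model grounded: instead of answering from (possibly stale or hallucinated) training data, it is handed the specific local text most relevant to the question.
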

Show points of Contact:

Amanda Berlin: https://www.linkedin.com/in/amandaberlin/

Brian Boettcher: https://www.linkedin.com/in/bboettcher96/ 

Bryan Brake: https://linkedin.com/in/brakeb 

Brakesec Website: https://www.brakeingsecurity.com

Youtube channel: https://youtube.com/@brakeseced

Twitch Channel: https://twitch.tv/brakesec

 
