Welcome to CyberCode Academy — your audio classroom for Programming and Cybersecurity. 🎧 Each course is divided into a series of short, focused episodes that take you from beginner to advanced.
FAQs about CyberCode Academy:
How many episodes does CyberCode Academy have?
The podcast currently has 159 episodes available.
February 25, 2026
Course 25 - API Python Hacking | Episode 4: Structures, Process Spawning, and Undocumented Calls (22 min)
In this lesson, you’ll learn about:
Defining Windows Internal Structures in Python
- Representing structures like PROCESS_INFORMATION and STARTUPINFO using ctypes.Structure
- Mapping Windows data types (HANDLE, DWORD, LPWSTR) with the _fields_ attribute
- Instantiating structures for API calls to configure or retrieve process information
Spawning System Processes
- Using CreateProcessW from kernel32.dll
- Setting application paths (e.g., cmd.exe) and command-line arguments
- Managing creation flags like CREATE_NEW_CONSOLE (0x10)
- Passing structures by reference with ctypes.byref to receive process and thread IDs
Accessing Undocumented APIs and Memory Casting
- Leveraging DnsGetCacheDataTable from dnsapi.dll for reconnaissance
- Navigating linked lists via pNext pointers in structures like DNS_CACHE_ENTRY
- Using ctypes.cast to transform raw memory addresses into Python-readable structures
- Extracting DNS cache information, such as record names and types, through loops and error handling
Key Outcome
- Ability to build custom security tools that interact directly with Windows internals
- Mastery of low-level API calls, memory traversal, and structure manipulation for forensic or security applications
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
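As a rough sketch of the structure-definition and process-spawning steps described above: the field layouts below follow the documented Win32 STARTUPINFOW and PROCESS_INFORMATION layouts, with the Windows type names mapped onto portable ctypes primitives so the definitions can be inspected on any platform. The cmd.exe path and the spawn_cmd helper are illustrative; the CreateProcessW call itself only works on Windows.

```python
import ctypes

# Windows type names mapped to portable ctypes primitives.
DWORD, HANDLE, LPWSTR = ctypes.c_uint32, ctypes.c_void_p, ctypes.c_wchar_p
WORD, LPBYTE = ctypes.c_uint16, ctypes.c_char_p

class STARTUPINFOW(ctypes.Structure):
    _fields_ = [("cb", DWORD), ("lpReserved", LPWSTR),
                ("lpDesktop", LPWSTR), ("lpTitle", LPWSTR),
                ("dwX", DWORD), ("dwY", DWORD),
                ("dwXSize", DWORD), ("dwYSize", DWORD),
                ("dwXCountChars", DWORD), ("dwYCountChars", DWORD),
                ("dwFillAttribute", DWORD), ("dwFlags", DWORD),
                ("wShowWindow", WORD), ("cbReserved2", WORD),
                ("lpReserved2", LPBYTE), ("hStdInput", HANDLE),
                ("hStdOutput", HANDLE), ("hStdError", HANDLE)]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", HANDLE), ("hThread", HANDLE),
                ("dwProcessId", DWORD), ("dwThreadId", DWORD)]

CREATE_NEW_CONSOLE = 0x10

def spawn_cmd():
    """Spawn cmd.exe in a new console and return its PID (Windows only)."""
    si, pi = STARTUPINFOW(), PROCESS_INFORMATION()
    si.cb = ctypes.sizeof(si)  # the API requires cb to be filled in
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    ok = kernel32.CreateProcessW(
        r"C:\Windows\System32\cmd.exe", None, None, None, False,
        CREATE_NEW_CONSOLE, None, None,
        ctypes.byref(si), ctypes.byref(pi))  # structures passed by reference
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())
    return pi.dwProcessId
```

The structures are passed with ctypes.byref so CreateProcessW can write the new process and thread handles and IDs back into PROCESS_INFORMATION.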
February 24, 2026
Course 25 - API Python Hacking | Episode 3: From ctypes Basics to Building a Process Killer (21 min)
In this lesson, you’ll learn about:
Interfacing Python with Windows API using ctypes
- Loading core DLLs: user32.dll and kernel32.dll
- Executing basic functions like MessageBoxW
- Mapping C-style data types (e.g., LPCWSTR, DWORD) to Python equivalents
Error Handling and Privileges
- Using GetLastError to debug API failures
- Common errors such as "Access Denied" (error code 5)
- Understanding how token privileges and administrative rights affect process interactions
ProcKiller Project Workflow
- Find Window Handle: FindWindowA
- Retrieve Process ID: GetWindowThreadProcessId with ctypes.byref
- Open Process with Privileges: OpenProcess using PROCESS_ALL_ACCESS
- Terminate Process: TerminateProcess
Professional Practices
- Documenting code thoroughly
- Uploading projects to GitHub to build a professional portfolio
Key Outcome
- Mastery of Python-to-Windows API integration, robust error handling, and creating scripts that can manipulate processes programmatically
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
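The four-step ProcKiller workflow above can be sketched as a single function. This is an illustrative, Windows-only sketch (the window title and error handling are assumptions, not the course's exact code); it follows the FindWindowA → GetWindowThreadProcessId → OpenProcess → TerminateProcess chain and surfaces the "Access Denied" (error 5) case explicitly.

```python
import ctypes

PROCESS_ALL_ACCESS = 0x001F0FFF  # full rights; needs adequate privileges
ERROR_ACCESS_DENIED = 5

def kill_by_window_title(title: str) -> int:
    """Find a window by title, resolve its owning process, terminate it.
    Returns the terminated PID. Windows only."""
    user32 = ctypes.WinDLL("user32", use_last_error=True)
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    hwnd = user32.FindWindowA(None, title.encode())  # "A" variant: bytes
    if not hwnd:
        raise RuntimeError("window not found")

    pid = ctypes.c_uint32(0)
    # Output parameter: the API writes the PID through the reference.
    user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))

    hproc = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid.value)
    if not hproc:
        err = ctypes.get_last_error()
        if err == ERROR_ACCESS_DENIED:
            raise PermissionError("Access denied (error 5): run elevated")
        raise ctypes.WinError(err)
    try:
        if not kernel32.TerminateProcess(hproc, 0):
            raise ctypes.WinError(ctypes.get_last_error())
    finally:
        kernel32.CloseHandle(hproc)
    return pid.value
```

Note how GetLastError (via ctypes.get_last_error) turns a bare failure return into a diagnosable error code.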
February 23, 2026
Course 25 - API Python Hacking | Episode 2: Foundations of Windows Internals and API Mechanisms (22 min)
In this lesson, you’ll learn about:
Fundamentals of Windows Processes and Threads
- A process is a running program with its own virtual memory space
- Threads are units of execution inside processes, allocated CPU time to perform tasks
- Access tokens manage privileges and access rights; privileges can be enabled, disabled, or removed but cannot be added to an existing token
Key System Programming Terminology
- Handles: Objects that act as pointers to memory locations or system resources
- Structures: Memory formats used to store and pass data during API calls
Windows API Mechanics
- How applications interact with the OS via user space → kernel space transitions
- Anatomy of an API call, including parameters and naming conventions:
  - "A" → ANSI version
  - "W" → Unicode (wide-character) version
  - "Ex" → Extended or newer version
Core Dynamically Linked Libraries (DLLs)
- kernel32.dll: Process and memory management
- user32.dll: Graphical interface and user interaction
- Researching functions using Windows documentation and tools like Dependency Walker to identify both documented and undocumented API calls
Key Outcome
- Understanding of how Windows manages processes, threads, and privileges, along with the workflow for interacting with the operating system through APIs and DLLs
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
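The "A"/"W" suffix convention maps directly onto ctypes string types, which can be illustrated without calling any Windows function at all (the "cmd.exe" value is just a placeholder string): an ...A function expects ANSI bytes (c_char_p), while a ...W function expects a wide/Unicode str (c_wchar_p).

```python
import ctypes

# What an ...A (ANSI) function would receive: a byte string.
ansi_arg = ctypes.c_char_p(b"cmd.exe")
# What a ...W (wide/Unicode) function would receive: a Python str,
# marshalled as UTF-16 on Windows.
wide_arg = ctypes.c_wchar_p("cmd.exe")
```

Passing the wrong variant (bytes to a W function, or str to an A function) is one of the most common ctypes mistakes, and it typically fails with a ctypes.ArgumentError rather than a Windows error code.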
February 22, 2026
Course 25 - API Python Hacking | Episode 1: GitHub Portfolio Building and Environment Setup (19 min)
In this lesson, you’ll learn about:
Building a Professional Portfolio
- Creating a GitHub account and configuring it for public repositories
- Initializing repositories specifically for Python projects
- Uploading and organizing files to showcase practical work for employers
Setting Up a Windows-Based Technical Workspace
- Installing Python 3 and verifying it is correctly added to the system PATH
- Installing Notepad++ for code editing and pinning it for quick access
- Preparing essential analysis tools:
  - Process Explorer (system monitoring)
  - PsExec (remote execution and administrative tasks)
  - Dependency Walker (PE file structure and reverse engineering)
Integrating Online and Local Resources
- Combining GitHub portfolio with local analysis tools for a fully functional workflow
- Ensuring readiness for practical scripting and system analysis exercises
Key Outcome
- A professional online presence plus a configured virtual workspace ready for the course’s technical exercises
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
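The "verify Python is on PATH" step can be scripted rather than checked by hand. A minimal sketch (the helper names are mine, not the course's): shutil.which performs the same lookup the shell does, and asking the interpreter for its own version confirms the install actually runs.

```python
import shutil
import subprocess
import sys

def tool_on_path(name: str) -> bool:
    """Return True if an executable named `name` is reachable via PATH,
    using the same search the shell performs."""
    return shutil.which(name) is not None

def python_version(exe: str = sys.executable) -> str:
    """Ask an interpreter for its version string, e.g. 'Python 3.12.1'."""
    out = subprocess.run([exe, "--version"], capture_output=True, text=True)
    # Older interpreters printed the version to stderr.
    return (out.stdout or out.stderr).strip()
```

The same tool_on_path check works for the other workspace tools (e.g. tool_on_path("procexp") on Windows once Process Explorer's folder is added to PATH).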
February 21, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 6: Security Vulnerabilities in Machine Learning (17 min)
In this lesson, you’ll learn about:
- The major security threat categories in machine learning: model stealing, inversion, poisoning, and backdoors
- How model stealing attacks replicate black-box models through API querying
- Why attackers may clone models to reduce costs, bypass licensing, or craft offline adversarial examples
- The concept of model inversion, where sensitive training data (e.g., faces or private attributes) can be partially reconstructed from learned weights
- Why deterministic model parameters can unintentionally leak information
- How data poisoning attacks manipulate training datasets to degrade accuracy or shift decision boundaries
- The difference between availability attacks (general performance drop) and targeted poisoning (specific misclassification goals)
- Why some architectures—such as CNN-based systems—can appear statistically robust yet remain strategically vulnerable
- How backdoor (trojan) attacks embed hidden triggers during training or model updates
- Why backdoors are difficult to detect due to normal performance under standard conditions
Defensive & Mitigation Strategies
This episode also highlights why ML systems must be secured across their lifecycle:
- Restrict and monitor API query rates to reduce model extraction risk
- Apply differential privacy and regularization to limit inversion leakage
- Validate training datasets with integrity checks and anomaly detection
- Use robust training techniques and adversarial testing to evaluate resilience
- Perform model auditing and trigger scanning to detect backdoors
- Secure the supply chain for datasets, pretrained models, and updates
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
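Model stealing through API querying can be shown in miniature. This toy sketch (the "victim", its secret threshold, and the query budget are all invented for illustration) treats a one-feature classifier as a black box and recovers its decision boundary purely from predict() answers, via binary search, which is the simplest possible extraction attack and also shows why query-rate monitoring is a meaningful defense.

```python
# Toy black-box "victim": classifies x >= 0.62 as positive.
# An attacker sees only victim_predict(), never the threshold.
SECRET_THRESHOLD = 0.62

def victim_predict(x: float) -> int:
    return int(x >= SECRET_THRESHOLD)

def steal_threshold(queries: int = 40) -> float:
    """Recover the decision boundary by binary search over API queries:
    each query halves the interval that can contain the boundary."""
    lo, hi = 0.0, 1.0
    for _ in range(queries):
        mid = (lo + hi) / 2
        if victim_predict(mid):
            hi = mid   # boundary is at or below mid
        else:
            lo = mid   # boundary is above mid
    return (lo + hi) / 2

stolen = steal_threshold()
```

Forty queries pin the boundary to within 2**-40; real extraction attacks against multi-dimensional models need far more queries, which is exactly what rate limiting and query-pattern monitoring exploit.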
February 20, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 5: The Complete Guide to Deepfake Creation (14 min)
In this lesson, you’ll learn about:
- What deepfakes are and how neural networks enable face, voice, and style transfer
- The standard face swap pipeline: extraction → preprocessing → training → prediction
- Why conducting a local dry run helps validate datasets before scaling to expensive GPU environments
- The importance of face alignment, sorting, and dataset cleaning to reduce false positives
- How lightweight models are used for parameter tuning before full-scale training
- The role of GPU acceleration in deep learning workflows
- Why cloud platforms like Google Cloud are used for large-scale model training
- The importance of compatible drivers (e.g., NVIDIA drivers) in deep learning setups
- How frameworks such as TensorFlow power neural network training
- How frame rendering and encoding tools like FFmpeg compile processed frames into video
- How training previews help visualize model convergence from noise to structured outputs
Ethical & Professional Considerations
- Always obtain explicit consent from anyone whose likeness is used
- Understand laws regarding impersonation, fraud, and non-consensual synthetic media
- Consider watermarking or disclosure when creating synthetic content
- Be aware that deepfake techniques are actively studied in media forensics and detection research
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
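The final frames-to-video step with FFmpeg can be sketched as command construction (the directory layout and frame-numbering pattern are assumptions; the flags are standard FFmpeg options for encoding an image sequence). The function only builds the argument list, so the pipeline step is visible without needing FFmpeg installed.

```python
def ffmpeg_encode_cmd(frames_dir: str, out_path: str, fps: int = 30) -> list:
    """Build (but do not run) an FFmpeg command that compiles numbered
    frames like frame_00001.png into an H.264 video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),                  # input frame rate
        "-i", f"{frames_dir}/frame_%05d.png",    # numbered frame pattern
        "-c:v", "libx264",                       # H.264 encoder
        "-pix_fmt", "yuv420p",                   # widely playable pixel format
        str(out_path),
    ]

cmd = ffmpeg_encode_cmd("converted_frames", "result.mp4")
# To actually render, one would pass cmd to subprocess.run(cmd, check=True).
```

yuv420p is worth calling out: without it, many players refuse frames rendered from RGB PNGs.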
February 19, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 4: Mastering White-Box and Black-Box Attacks (16 min)
In this lesson, you’ll learn about:
- The difference between white-box and black-box threat models in machine learning security
- Why gradient-based models are vulnerable to carefully crafted input perturbations
- The core intuition behind the Fast Gradient Sign Method (FGSM) as a sensitivity-analysis technique
- How adversarial perturbations exploit a model’s local linearity and gradient structure
- The purpose of adversarial ML frameworks like Foolbox in controlled research environments
- How pretrained architectures such as ResNet are evaluated for robustness
- Why datasets like MNIST are commonly used for benchmarking security experiments
- The security risks of exposing prediction APIs in black-box services
- Why production ML systems must assume adversarial interaction
Defensive Takeaways for ML Engineers
Rather than attacking models in the wild, security teams use adversarial research to:
- Measure model robustness before deployment
- Implement adversarial training to improve resilience
- Apply input preprocessing defenses and anomaly detection
- Limit prediction confidence exposure in public APIs
- Monitor query patterns to detect probing behavior
- Use ensemble methods and hybrid ML + rule-based detection systems
Why This Matters
Adversarial machine learning highlights that high accuracy ≠ high security. Models that perform well on clean data may fail under minimal, human-imperceptible perturbations. Robustness must be treated as a first-class engineering requirement, especially in:
- Autonomous systems
- Biometric authentication
- Malware detection
- Financial fraud systems
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
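The FGSM intuition fits in a few lines. As a stand-in for a neural network, this sketch attacks a tiny logistic-regression model (the weights and inputs are made up for illustration): the gradient of the cross-entropy loss with respect to the input is computed analytically instead of by autodiff, and the input is nudged by epsilon in the sign of that gradient, which is exactly the FGSM update x_adv = x + eps * sign(∇x loss).

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against p = sigmoid(w.x + b) with cross-entropy loss.
    d(loss)/dx = (p - y) * w, so we step eps in its sign."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A clean input the model classifies correctly as class 1 ...
w, b = [2.0, -1.0], 0.0
x, y = [0.8, 0.2], 1
p_clean = sigmoid(w[0] * x[0] + w[1] * x[1] + b)   # ~0.80, class 1

# ... flips to class 0 after a small signed perturbation.
x_adv = fgsm(x, y, w, b, eps=0.5)
p_adv = sigmoid(w[0] * x_adv[0] + w[1] * x_adv[1] + b)  # < 0.5
```

The attack works because the model is locally linear: a fixed-size step in the worst-case direction moves the output far more than a random step of the same size would.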
February 18, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 3: Evading Machine Learning Malware Classifiers (17 min)
In this lesson, you’ll learn about:
- What adversarial machine learning is and why ML-based malware classifiers are vulnerable to manipulation
- The difference between feature-engineered models like Ember and end-to-end neural approaches like MalConv
- Why handling real malware (e.g., Jigsaw ransomware) requires a properly isolated virtual machine lab
- How libraries such as LIEF and pefile are used to safely parse and analyze Portable Executable (PE) structures
- The concept of model decision boundaries and detection thresholds
- Why “benign signal injection” works conceptually (model blind spots and over-reliance on superficial features)
- The security risk of overlay data and section manipulation in static analysis pipelines
- The difference between gradient boosting models and deep neural networks in robustness and feature sensitivity
- How adversarial examples reveal weaknesses in ML-based security products
- Defensive strategies for improving robustness against evasion attempts
Defensive Takeaways for Security Teams
Instead of bypassing detection, professionals use these insights to:
- Strengthen feature engineering to reduce manipulation opportunities
- Normalize or strip non-executable overlay data before classification
- Incorporate adversarial training to improve model resilience
- Combine static and dynamic analysis to detect functionality, not just file structure
- Monitor for abnormal file padding and suspicious section anomalies
- Implement ensemble detection strategies rather than relying on a single model
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
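Why overlay data matters to a static classifier can be shown without LIEF, pefile, or any real malware. This stdlib-only toy (the "payload" and overlay contents are synthetic) computes a normalized byte histogram, one of the classic static features in Ember-style models, and shows that appending inert bytes past the end of a file shifts the feature vector even though no executable code changed, which is precisely why "normalize or strip non-executable overlay data" appears in the defensive list above.

```python
from collections import Counter

def byte_histogram(data: bytes) -> list:
    """Normalized 256-bin byte histogram, a classic static PE feature."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

# Synthetic "executable" bytes vs the same bytes plus a benign overlay.
payload = bytes(range(256)) * 4
benign_overlay = b"\x00" * 4096          # inert padding after the last section
hist_before = byte_histogram(payload)
hist_after = byte_histogram(payload + benign_overlay)

# The overlay drags the distribution toward byte 0x00: the classifier's
# input changes although the program's behavior does not.
shift = sum(abs(a - b) for a, b in zip(hist_before, hist_after))
```

A defender who strips or caps overlay data before featurization removes this entire degree of freedom from the attacker.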
February 17, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 2: Building and Implementing Evolutionary Testing Tools (17 min)
In this lesson, you’ll learn about:
- What fuzzing is and why it’s a powerful technique for discovering software vulnerabilities
- The difference between basic randomized fuzzing and more advanced, coverage-guided approaches
- How code coverage helps measure which parts of a program are exercised during testing
- Why naive random input generation is inefficient for complex formats like PDFs
- The concept of mutation-based fuzzing, including byte-level modifications such as insertion, deletion, swapping, and randomization
- How evolutionary fuzzing applies principles from genetic algorithms to improve input effectiveness
- The role of a fitness function in selecting high-value test cases
- How recombination and mutation evolve a population of inputs to reach deeper code paths
- How professional tools like American Fuzzy Lop instrument compiled programs to detect unique crashes and segmentation faults
- Why fuzzing is critical for secure software development and proactive vulnerability discovery
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
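The mutation operators and the evolutionary loop described above can be sketched in a few dozen lines. This is a deliberately minimal toy (the fitness function, population sizes, and seed are invented; a real fuzzer like AFL uses code coverage, not a string-matching score, as fitness): mutate applies one of the four byte-level operations, and evolve keeps only the fittest inputs each generation.

```python
import random

def mutate(data: bytes) -> bytes:
    """Apply one random byte-level mutation:
    insertion, deletion, swapping, or randomization."""
    buf = bytearray(data)
    op = random.choice(("insert", "delete", "swap", "randomize"))
    i = random.randrange(len(buf)) if buf else 0
    if op == "insert" or not buf:
        buf.insert(i, random.randrange(256))
    elif op == "delete" and len(buf) > 1:
        del buf[i]
    elif op == "swap" and len(buf) > 1:
        j = random.randrange(len(buf))
        buf[i], buf[j] = buf[j], buf[i]
    else:
        buf[i] = random.randrange(256)   # randomize one byte
    return bytes(buf)

def evolve(seed: bytes, fitness, generations=50, pop_size=20, keep=5):
    """Minimal evolutionary loop: mutate survivors, keep the fittest.
    Elitism guarantees the best fitness never decreases."""
    population = [seed]
    for _ in range(generations):
        population += [mutate(random.choice(population))
                       for _ in range(pop_size)]
        population.sort(key=fitness, reverse=True)
        population = population[:keep]
    return population[0]

# Toy fitness: how many leading bytes match the "%PDF" magic number.
target = b"%PDF"
fitness = lambda d: sum(1 for a, b in zip(d, target) if a == b)
best = evolve(b"\x00\x00\x00\x00", fitness)
```

In a real coverage-guided fuzzer, fitness would be "number of new basic blocks executed by the instrumented target", and recombination (splicing two parents) would supplement mutation.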
February 16, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 1: Building an Automated CAPTCHA-Breaking Bot (17 min)
In this lesson, you’ll learn about:
- How CAPTCHA systems (like Really Simple CAPTCHA for WordPress) are designed to prevent automated abuse
- The role of reconnaissance in identifying security mechanisms on web applications (for defensive testing with permission)
- How OpenCV is used in computer vision for:
  - Grayscale conversion
  - Image thresholding
  - Noise reduction and morphological operations (e.g., dilation)
  - Contour detection and character segmentation
- The fundamentals of building a Convolutional Neural Network (CNN) using frameworks like Keras
- Why preprocessing (normalization, resizing, padding) is critical for image-based ML accuracy
- How browser automation tools such as Selenium function in legitimate contexts (e.g., QA testing, regression testing, accessibility testing)
- Why CAPTCHA systems can be vulnerable to ML advances—and how modern defenses evolve in response
Defensive & Ethical Takeaway
Instead of bypassing CAPTCHAs, security professionals use this knowledge to:
- Strengthen bot mitigation strategies
- Implement more resilient human verification systems
- Detect automated abuse patterns
- Transition toward modern solutions like behavioral analysis and risk-based authentication
You can listen and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy