Welcome to CyberCode Academy, your audio classroom for Programming and Cybersecurity. 🎧 Each course is divided into a series of short, focused episodes that take you from beginner to advanced.
FAQs about CyberCode Academy
How many episodes does CyberCode Academy have?
The podcast currently has 211 episodes available.
February 27, 2026
Course 25 - API Python Hacking | Episode 6: Privilege Modification and User Impersonation

In this lesson, you’ll learn about:

Programmatic Privilege Modification
- Using the AdjustTokenPrivileges API to enable or disable specific privileges
- Understanding the TOKEN_PRIVILEGES structure and how privilege attributes are modified
- Enabling critical privileges like SeDebugPrivilege to allow advanced system access

Preparing for Token Manipulation
- Identifying a target process or user through window handles or process IDs (PIDs)
- Elevating your script’s permissions to allow interaction with protected system processes
- Understanding why privilege elevation is required before duplicating tokens

Token Duplication Process
- Using DuplicateTokenEx to create a new primary token from an existing process
- Understanding how duplicated tokens inherit the identity and permissions of the original user
- Preparing duplicated tokens for use in launching new processes

Launching Processes Under a Different Identity
- Using CreateProcessWithTokenW to start applications (e.g., cmd.exe) under another user’s context
- Understanding how impersonation allows execution at different privilege levels
- Observing how processes can run with the security context of another active user or system account

Key Outcome
- Understanding how Windows tokens can be modified, duplicated, and used for impersonation
- Building the foundation for tools that perform privilege escalation, impersonation, and advanced system interaction

You can listen to and download our episodes for free on more than 10 different platforms: https://linktr.ee/cybercode_academy
Duration: 18 min
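As a rough sketch of the TOKEN_PRIVILEGES layout this episode covers, the structure can be mirrored in Python with ctypes. Fixed-width ctypes are used here (DWORD → c_uint32, LONG → c_int32) so the layout matches the Win32 definition on any platform; actually passing the structure to AdjustTokenPrivileges of course requires Windows and an opened token handle, and the helper name below is ours, not from the episode:

```python
import ctypes

# ctypes mirror of the Win32 LUID, LUID_AND_ATTRIBUTES, and
# TOKEN_PRIVILEGES structures consumed by AdjustTokenPrivileges.
class LUID(ctypes.Structure):
    _fields_ = [("LowPart", ctypes.c_uint32),   # DWORD
                ("HighPart", ctypes.c_int32)]   # LONG

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID),
                ("Attributes", ctypes.c_uint32)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", ctypes.c_uint32),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

SE_PRIVILEGE_ENABLED = 0x00000002

def build_token_privileges(luid, enable=True):
    """Fill a one-entry TOKEN_PRIVILEGES for AdjustTokenPrivileges."""
    tp = TOKEN_PRIVILEGES()
    tp.PrivilegeCount = 1
    tp.Privileges[0].Luid = luid
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED if enable else 0
    return tp
```

Setting Attributes to SE_PRIVILEGE_ENABLED enables the privilege; setting it to 0 disables it without removing it from the token.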
February 26, 2026
Course 25 - API Python Hacking | Episode 5: Managing and Verifying Process Privileges

In this lesson, you’ll learn about:

Fundamentals of Windows Access Tokens
- Tokens define a process’s privileges, such as shutting down the system or debugging memory
- Tokens are static: you can enable or disable existing privileges but cannot add new ones
- The difference between default tokens (limited rights, e.g., SeChangeNotifyPrivilege) and administrative tokens (powerful rights, e.g., SeDebugPrivilege)

Programmatic Access to Tokens
- Using Python’s ctypes to interface with kernel32.dll and advapi32.dll
- Obtaining a privileged handle with OpenProcess
- Accessing a process token via OpenProcessToken with TOKEN_ALL_ACCESS
- Administrative elevation is required to manipulate high-privilege tokens

Verifying Privilege Status
- Defining C-compatible structures in Python: LUID, LUID_AND_ATTRIBUTES, PRIVILEGE_SET
- Using LookupPrivilegeValue to convert a privilege name (e.g., SeDebugPrivilege) into a locally unique identifier (LUID)
- Checking whether a privilege is enabled with the PrivilegeCheck API

Key Outcome
- Understanding how to inspect, enable, or disable privileges for a process
- Laying the groundwork for advanced topics like token impersonation and privilege removal

Duration: 17 min
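The verification flow described above can be sketched end to end: look up the privilege’s LUID with LookupPrivilegeValueW, open the current process token, and ask PrivilegeCheck whether it is enabled. This is a minimal sketch, assuming you are checking your own process; it is Windows-only, so the function returns None on other platforms, and the function name is our choice:

```python
import ctypes
import sys

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", ctypes.c_uint32),   # DWORD
                ("HighPart", ctypes.c_int32)]   # LONG

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID),
                ("Attributes", ctypes.c_uint32)]

class PRIVILEGE_SET(ctypes.Structure):
    _fields_ = [("PrivilegeCount", ctypes.c_uint32),
                ("Control", ctypes.c_uint32),
                ("Privilege", LUID_AND_ATTRIBUTES * 1)]

TOKEN_QUERY = 0x0008
PRIVILEGE_SET_ALL_NECESSARY = 1

def is_privilege_enabled(name="SeDebugPrivilege"):
    """Return True/False on Windows, None elsewhere."""
    if sys.platform != "win32":
        return None
    advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    # privilege name -> LUID
    luid = LUID()
    if not advapi32.LookupPrivilegeValueW(None, name, ctypes.byref(luid)):
        raise ctypes.WinError(ctypes.get_last_error())
    # open our own token with query rights
    token = ctypes.c_void_p()
    if not advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                                     TOKEN_QUERY, ctypes.byref(token)):
        raise ctypes.WinError(ctypes.get_last_error())
    # ask whether the privilege is currently enabled
    ps = PRIVILEGE_SET()
    ps.PrivilegeCount = 1
    ps.Control = PRIVILEGE_SET_ALL_NECESSARY
    ps.Privilege[0].Luid = luid
    result = ctypes.c_int32(0)
    if not advapi32.PrivilegeCheck(token, ctypes.byref(ps),
                                   ctypes.byref(result)):
        raise ctypes.WinError(ctypes.get_last_error())
    kernel32.CloseHandle(token)
    return bool(result.value)
```

Note that TOKEN_QUERY suffices for checking; TOKEN_ALL_ACCESS (and elevation) is only needed when you go on to modify the token.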
February 25, 2026
Course 25 - API Python Hacking | Episode 4: Structures, Process Spawning, and Undocumented Calls

In this lesson, you’ll learn about:

Defining Windows Internal Structures in Python
- Representing structures like PROCESS_INFORMATION and STARTUPINFO using ctypes.Structure
- Mapping Windows data types (HANDLE, DWORD, LPWSTR) with the _fields_ attribute
- Instantiating structures for API calls to configure or retrieve process information

Spawning System Processes
- Using CreateProcessW from kernel32.dll
- Setting application paths (e.g., cmd.exe) and command-line arguments
- Managing creation flags like CREATE_NEW_CONSOLE (0x10)
- Passing structures by reference with ctypes.byref to receive process and thread IDs

Accessing Undocumented APIs and Memory Casting
- Leveraging DnsGetCacheDataTable from dnsapi.dll for reconnaissance
- Navigating linked lists via pNext pointers in structures like DNS_CACHE_ENTRY
- Using ctypes.cast to transform raw memory addresses into Python-readable structures
- Extracting DNS cache information, such as record names and types, through loops and error handling

Key Outcome
- Ability to build custom security tools that interact directly with Windows internals
- Mastery of low-level API calls, memory traversal, and structure manipulation for forensic or security applications

Duration: 22 min
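The structure definitions and the CreateProcessW call described above can be sketched as follows. The field layouts mirror the Win32 STARTUPINFOW and PROCESS_INFORMATION definitions (HANDLE/pointer fields as c_void_p, LPWSTR as c_wchar_p); the spawn helper is a guarded sketch that only runs on Windows and returns None elsewhere:

```python
import ctypes
import sys

class STARTUPINFO(ctypes.Structure):
    # mirrors STARTUPINFOW; cb must be set to sizeof(STARTUPINFO)
    _fields_ = [
        ("cb", ctypes.c_uint32),
        ("lpReserved", ctypes.c_wchar_p),
        ("lpDesktop", ctypes.c_wchar_p),
        ("lpTitle", ctypes.c_wchar_p),
        ("dwX", ctypes.c_uint32), ("dwY", ctypes.c_uint32),
        ("dwXSize", ctypes.c_uint32), ("dwYSize", ctypes.c_uint32),
        ("dwXCountChars", ctypes.c_uint32),
        ("dwYCountChars", ctypes.c_uint32),
        ("dwFillAttribute", ctypes.c_uint32),
        ("dwFlags", ctypes.c_uint32),
        ("wShowWindow", ctypes.c_uint16),
        ("cbReserved2", ctypes.c_uint16),
        ("lpReserved2", ctypes.c_void_p),
        ("hStdInput", ctypes.c_void_p),
        ("hStdOutput", ctypes.c_void_p),
        ("hStdError", ctypes.c_void_p),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", ctypes.c_void_p),
                ("hThread", ctypes.c_void_p),
                ("dwProcessId", ctypes.c_uint32),
                ("dwThreadId", ctypes.c_uint32)]

CREATE_NEW_CONSOLE = 0x10

def spawn_cmd():
    """Spawn cmd.exe in a new console; return its PID (None off Windows)."""
    if sys.platform != "win32":
        return None
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    si = STARTUPINFO()
    si.cb = ctypes.sizeof(si)
    pi = PROCESS_INFORMATION()
    ok = kernel32.CreateProcessW(
        r"C:\Windows\System32\cmd.exe",  # lpApplicationName
        None, None, None, False,
        CREATE_NEW_CONSOLE,
        None, None,
        ctypes.byref(si), ctypes.byref(pi))  # filled in by the call
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())
    return pi.dwProcessId
```

CreateProcessW writes the new process and thread handles and IDs into the PROCESS_INFORMATION structure passed via ctypes.byref, which is why the structure must be defined before the call.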
February 24, 2026
Course 25 - API Python Hacking | Episode 3: From ctypes Basics to Building a Process Killer

In this lesson, you’ll learn about:

Interfacing Python with the Windows API Using ctypes
- Loading core DLLs: user32.dll and kernel32.dll
- Executing basic functions like MessageBoxW
- Mapping C-style data types (e.g., LPCWSTR, DWORD) to Python equivalents

Error Handling and Privileges
- Using GetLastError to debug API failures
- Common errors such as "Access Denied" (error code 5)
- Understanding how token privileges and administrative rights affect process interactions

ProcKiller Project Workflow
- Find the window handle: FindWindowA
- Retrieve the process ID: GetWindowThreadProcessId with ctypes.byref
- Open the process with privileges: OpenProcess using PROCESS_ALL_ACCESS
- Terminate the process: TerminateProcess

Professional Practices
- Documenting code thoroughly
- Uploading projects to GitHub to build a professional portfolio

Key Outcome
- Mastery of Python-to-Windows API integration, robust error handling, and creating scripts that can manipulate processes programmatically

Duration: 21 min
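The four workflow steps above can be sketched as one guarded function. This is an illustrative sketch, not the course’s exact script: it uses FindWindowW (the Unicode sibling of the FindWindowA call named in the episode), runs only on Windows, and returns False elsewhere:

```python
import ctypes
import sys

PROCESS_ALL_ACCESS = 0x1F0FFF

def kill_by_window_title(title):
    """FindWindowW -> GetWindowThreadProcessId -> OpenProcess -> TerminateProcess."""
    if sys.platform != "win32":
        return False
    user32 = ctypes.WinDLL("user32", use_last_error=True)
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    # 1) find the window handle by its exact title
    hwnd = user32.FindWindowW(None, title)
    if not hwnd:
        return False  # no window with that title
    # 2) resolve the window to its owning process ID
    pid = ctypes.c_uint32(0)
    user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
    # 3) open the process with full access rights
    handle = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid.value)
    if not handle:
        # GetLastError() == 5 here means "Access Denied"
        # (insufficient privileges for the target process)
        raise ctypes.WinError(ctypes.get_last_error())
    # 4) terminate it and clean up the handle
    ok = kernel32.TerminateProcess(handle, 0)
    kernel32.CloseHandle(handle)
    return bool(ok)
```

The error-handling point from the episode shows up at step 3: OpenProcess on a protected process fails with error code 5 unless the script holds sufficient privileges.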
February 23, 2026
Course 25 - API Python Hacking | Episode 2: Foundations of Windows Internals and API Mechanisms

In this lesson, you’ll learn about:

Fundamentals of Windows Processes and Threads
- A process is a running program with its own virtual memory space
- Threads are units of execution inside processes, allocated CPU time to perform tasks
- Access tokens manage privileges and access rights; privileges can be enabled, disabled, or removed, but cannot be added to an existing token

Key System Programming Terminology
- Handles: opaque references through which programs access system objects and resources
- Structures: memory layouts used to store and pass data during API calls

Windows API Mechanics
- How applications interact with the OS via user space → kernel space transitions
- Anatomy of an API call, including parameters and naming conventions:
  - "A" → ANSI version
  - "W" → wide (Unicode) version
  - "Ex" → extended or newer version

Core Dynamic-Link Libraries (DLLs)
- kernel32.dll: process and memory management
- user32.dll: graphical interface and user interaction
- Researching functions using the Windows documentation and tools like Dependency Walker to identify both documented and undocumented API calls

Key Outcome
- Understanding of how Windows manages processes, threads, and privileges, along with the workflow for interacting with the operating system through APIs and DLLs

Duration: 22 min
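The suffix convention above can be made concrete with a tiny helper. The A/W/Ex suffixes are the real Win32 convention; the resolver function itself is purely illustrative, ours rather than part of any API:

```python
def api_variant(base, unicode=True, extended=False):
    """Resolve a base Win32 API name to its A/W (and optional Ex) variant."""
    name = base + ("Ex" if extended else "")
    return name + ("W" if unicode else "A")

print(api_variant("MessageBox"))                   # MessageBoxW
print(api_variant("MessageBox", unicode=False))    # MessageBoxA
print(api_variant("CreateWindow", extended=True))  # CreateWindowExW
```

In practice you pick the "W" variant from Python, since ctypes passes str arguments as wide (UTF-16) strings.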
February 22, 2026
Course 25 - API Python Hacking | Episode 1: GitHub Portfolio Building and Environment Setup

In this lesson, you’ll learn about:

Building a Professional Portfolio
- Creating a GitHub account and configuring it for public repositories
- Initializing repositories specifically for Python projects
- Uploading and organizing files to showcase practical work for employers

Setting Up a Windows-Based Technical Workspace
- Installing Python 3 and verifying it is correctly added to the system PATH
- Installing Notepad++ for code editing and pinning it for quick access
- Preparing essential analysis tools:
  - Process Explorer (system monitoring)
  - PsExec (remote execution and administrative tasks)
  - Dependency Walker (PE file structure and reverse engineering)

Integrating Online and Local Resources
- Combining a GitHub portfolio with local analysis tools for a fully functional workflow
- Ensuring readiness for practical scripting and system analysis exercises

Key Outcome
- A professional online presence plus a configured Windows workspace ready for the course’s technical exercises

Duration: 19 min
February 21, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 6: Security Vulnerabilities in Machine Learning

In this lesson, you’ll learn about:

- The major security threat categories in machine learning: model stealing, inversion, poisoning, and backdoors
- How model stealing attacks replicate black-box models through API querying
- Why attackers may clone models to reduce costs, bypass licensing, or craft offline adversarial examples
- The concept of model inversion, where sensitive training data (e.g., faces or private attributes) can be partially reconstructed from learned weights
- Why deterministic model parameters can unintentionally leak information
- How data poisoning attacks manipulate training datasets to degrade accuracy or shift decision boundaries
- The difference between availability attacks (general performance drop) and targeted poisoning (specific misclassification goals)
- Why some architectures, such as CNN-based systems, can appear statistically robust yet remain strategically vulnerable
- How backdoor (trojan) attacks embed hidden triggers during training or model updates
- Why backdoors are difficult to detect due to normal performance under standard conditions

Defensive & Mitigation Strategies
This episode also highlights why ML systems must be secured across their lifecycle:
- Restrict and monitor API query rates to reduce model extraction risk
- Apply differential privacy and regularization to limit inversion leakage
- Validate training datasets with integrity checks and anomaly detection
- Use robust training techniques and adversarial testing to evaluate resilience
- Perform model auditing and trigger scanning to detect backdoors
- Secure the supply chain for datasets, pretrained models, and updates

Duration: 17 min
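One of the mitigations listed above can be sketched concretely: perturbing a model’s output scores with Laplace noise (output perturbation in the spirit of differential privacy, with scale b = sensitivity / epsilon) so that exact score values leak less to an extraction or inversion adversary. This is an illustrative stdlib-only sketch; the function names and parameters are our choices, not from the episode:

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw from Laplace(0, scale) by inverse-CDF sampling of a uniform."""
    u = rng.random() - 0.5
    # x = -scale * sign(u) * ln(1 - 2|u|)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_scores(scores, sensitivity=1.0, epsilon=1.0, seed=None):
    """Return scores with i.i.d. Laplace(sensitivity/epsilon) noise added."""
    rng = random.Random(seed)
    b = sensitivity / epsilon
    return [s + laplace_sample(b, rng) for s in scores]
```

Smaller epsilon means more noise and stronger protection, at the cost of less useful outputs; real deployments combine this with query-rate limits rather than relying on noise alone.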
February 20, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 5: The Complete Guide to Deepfake Creation

In this lesson, you’ll learn about:

- What deepfakes are and how neural networks enable face, voice, and style transfer
- The standard face swap pipeline: extraction → preprocessing → training → prediction
- Why conducting a local dry run helps validate datasets before scaling to expensive GPU environments
- The importance of face alignment, sorting, and dataset cleaning to reduce false positives
- How lightweight models are used for parameter tuning before full-scale training
- The role of GPU acceleration in deep learning workflows
- Why cloud platforms like Google Cloud are used for large-scale model training
- The importance of compatible drivers (e.g., NVIDIA drivers) in deep learning setups
- How frameworks such as TensorFlow power neural network training
- How frame rendering and encoding tools like FFmpeg compile processed frames into video
- How training previews help visualize model convergence from noise to structured outputs

Ethical & Professional Considerations
- Always obtain explicit consent from anyone whose likeness is used
- Understand laws regarding impersonation, fraud, and non-consensual synthetic media
- Consider watermarking or disclosure when creating synthetic content
- Be aware that deepfake techniques are actively studied in media forensics and detection research

Duration: 14 min
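For the final encoding step mentioned above, a common FFmpeg invocation compiles numbered frames into an H.264 video. The helper below just builds the command list for subprocess; the frame pattern and output name are our example values, not the episode’s:

```python
import subprocess

def ffmpeg_frames_to_video(pattern="frame_%04d.png",
                           out="output.mp4", fps=30):
    """Build an ffmpeg command that encodes numbered frames into a video."""
    cmd = [
        "ffmpeg",
        "-framerate", str(fps),   # input frame rate
        "-i", pattern,            # numbered-frame input pattern
        "-c:v", "libx264",        # H.264 encoding
        "-pix_fmt", "yuv420p",    # broad player compatibility
        out,
    ]
    return cmd  # run with subprocess.run(cmd, check=True)
```

Matching the `-framerate` value to the source video’s frame rate keeps the reassembled output in sync with any separately handled audio track.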
February 19, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 4: Mastering White-Box and Black-Box Attacks

In this lesson, you’ll learn about:

- The difference between white-box and black-box threat models in machine learning security
- Why gradient-based models are vulnerable to carefully crafted input perturbations
- The core intuition behind the Fast Gradient Sign Method (FGSM) as a sensitivity-analysis technique
- How adversarial perturbations exploit a model’s local linearity and gradient structure
- The purpose of adversarial ML frameworks like Foolbox in controlled research environments
- How pretrained architectures such as ResNet are evaluated for robustness
- Why datasets like MNIST are commonly used for benchmarking security experiments
- The security risks of exposing prediction APIs in black-box services
- Why production ML systems must assume adversarial interaction

Defensive Takeaways for ML Engineers
Rather than attacking models in the wild, security teams use adversarial research to:
- Measure model robustness before deployment
- Implement adversarial training to improve resilience
- Apply input preprocessing defenses and anomaly detection
- Limit prediction confidence exposure in public APIs
- Monitor query patterns to detect probing behavior
- Use ensemble methods and hybrid ML + rule-based detection systems

Why This Matters
Adversarial machine learning highlights that high accuracy ≠ high security. Models that perform well on clean data may fail under minimal, human-imperceptible perturbations. Robustness must be treated as a first-class engineering requirement, especially in:
- Autonomous systems
- Biometric authentication
- Malware detection
- Financial fraud systems

Duration: 16 min
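The FGSM intuition above can be made concrete on a hand-rolled logistic model: perturb the input by epsilon times the sign of the loss gradient with respect to the input. For logistic regression with weights w, the cross-entropy loss gradient w.r.t. input x is (sigmoid(w·x) − y)·w. The model and numbers below are toy values for illustration, not from the episode:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step: x_adv = x + eps * sign(dLoss/dx)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]          # dLoss/dx for logistic loss
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -3.0]
x = [1.0, -1.0]          # model score w.x = 5 -> confident class 1
x_adv = fgsm(x, y=1, w=w, eps=0.5)
score = sum(wi * xi for wi, xi in zip(w, x_adv))
# the perturbation lowers the class-1 score from 5.0 to 2.5
```

The same sign-of-gradient step, applied per pixel with a gradient obtained by backpropagation, is what frameworks like Foolbox automate against deep models.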
February 18, 2026
Course 24 - Machine Learning for Red Team Hackers | Episode 3: Evading Machine Learning Malware Classifiers

In this lesson, you’ll learn about:

- What adversarial machine learning is and why ML-based malware classifiers are vulnerable to manipulation
- The difference between feature-engineered models like EMBER and end-to-end neural approaches like MalConv
- Why handling real malware (e.g., Jigsaw ransomware) requires a properly isolated virtual machine lab
- How libraries such as LIEF and pefile are used to safely parse and analyze Portable Executable (PE) structures
- The concept of model decision boundaries and detection thresholds
- Why “benign signal injection” works conceptually (model blind spots and over-reliance on superficial features)
- The security risk of overlay data and section manipulation in static analysis pipelines
- The difference between gradient boosting models and deep neural networks in robustness and feature sensitivity
- How adversarial examples reveal weaknesses in ML-based security products
- Defensive strategies for improving robustness against evasion attempts

Defensive Takeaways for Security Teams
Instead of bypassing detection, professionals use these insights to:
- Strengthen feature engineering to reduce manipulation opportunities
- Normalize or strip non-executable overlay data before classification
- Incorporate adversarial training to improve model resilience
- Combine static and dynamic analysis to detect functionality, not just file structure
- Monitor for abnormal file padding and suspicious section anomalies
- Implement ensemble detection strategies rather than relying on a single model

Duration: 17 min
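One defensive check from the list above, monitoring for abnormal file padding, can be sketched with a simple heuristic: flag files whose trailing bytes are one long run of a pad byte, since appending inert data is a common way to shift a static classifier’s features. The threshold and function names are our example choices, not a production detector:

```python
def trailing_padding_ratio(data: bytes, pad_byte: int = 0) -> float:
    """Fraction of the file occupied by a run of pad_byte at the end."""
    if not data:
        return 0.0
    n = 0
    for b in reversed(data):
        if b != pad_byte:
            break
        n += 1
    return n / len(data)

def looks_padded(data: bytes, threshold: float = 0.25) -> bool:
    """Flag files whose trailing pad run exceeds the given fraction."""
    return trailing_padding_ratio(data) >= threshold
```

A real pipeline would instead compute the PE overlay boundary from the section table (e.g., with pefile) and normalize or strip everything past it before classification; this byte-run heuristic just illustrates the idea.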