Web scraping continues to be a cornerstone of OSINT operations, particularly during Red Team engagements and external attack surface reconnaissance. Yet, as anti-bot technologies grow more sophisticated, traditional scraping methods based on direct HTTP requests are increasingly ineffective.
This talk takes a technical dive into browser-based scraping techniques that closely mimic real user behavior to evade detection, inspired by real-world mechanisms observed across major web platforms.
In Red Team operations and external attack surface assessments, open-source intelligence (OSINT) is a critical step for identifying internet-exposed assets and assessing the associated risks. One of the most common techniques in this phase is web scraping, which automates the collection of publicly available data—often without relying on official APIs that are frequently rate-limited, monitored, or entirely unavailable.
In previous conferences, such as Fabien Vauchelles’s talk "Cracking the Code: Decoding Anti-Bot Systems", the focus was on detecting scraping activities at the network layer using TCP/IP fingerprinting and IP intelligence. This presentation builds on that work by shifting the focus to client-side techniques—specifically, browser-based approaches that mimic legitimate user behavior to evade detection.
The objective of this session is to explore modern strategies for conducting stealthy web scraping by avoiding API usage and minimizing anomalies detectable at both the network and application layers. Based on real-world use cases, the talk aims to provide actionable insights for security professionals involved in scraping—whether performing it or defending against it.

The talk will present concrete methods for data collection, including:
- Making direct HTTP/HTTPS requests to web servers—such as websites or HTTP-based services—using libraries that handle protocol-level communication. This method allows efficient data retrieval by bypassing the need to render the page or load additional resources like images, videos, stylesheets, or scripts. It’s fast and lightweight, especially suited for static or partially dynamic content.
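As a minimal sketch of this first approach, the snippet below issues a direct request with headers that resemble a desktop browser profile rather than a library's bare defaults. It assumes Node 18+ with the built-in `fetch`; the exact header strings are illustrative, not canonical.

```javascript
// Build request headers that resemble a real desktop browser, instead
// of the minimal defaults most HTTP libraries send (which are an easy
// detection signal). Values are illustrative.
function browserLikeHeaders() {
  return {
    "User-Agent":
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " +
      "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Accept":
      "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Connection": "keep-alive",
  };
}

// Fetch a page without rendering it: fast and lightweight, but it only
// sees the raw HTML, not content produced by client-side JavaScript.
async function fetchPage(url) {
  const res = await fetch(url, { headers: browserLikeHeaders() });
  return res.text();
}
```

Because nothing is rendered, this stays efficient for static or partially dynamic content, as the bullet above notes.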
- Leveraging headless browsers to simulate real browser behavior without a graphical interface. These tools embed full HTML, CSS, and JavaScript engines, enabling interaction with modern, dynamic web applications. This technique is essential when scraping content that relies on client-side rendering or asynchronous JavaScript operations.
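A sketch of how a headless session can be hardened so it looks closer to a real one, assuming a Chromium-based browser driven by Playwright. The helper names are our own, and the flag and init script shown are common mitigations rather than a complete stealth profile.

```javascript
// Launch options that disable the most obvious automation marker that
// Chromium exposes when driven by an automation framework.
function stealthLaunchOptions() {
  return {
    headless: true,
    args: ["--disable-blink-features=AutomationControlled"],
  };
}

// Script injected before any page script runs, masking the
// navigator.webdriver property that automation frameworks set to true.
function webdriverMaskScript() {
  return "Object.defineProperty(navigator, 'webdriver', " +
         "{ get: () => undefined });";
}

// Usage with Playwright (assumed installed), inside an async context:
//   const { chromium } = require("playwright");
//   const browser = await chromium.launch(stealthLaunchOptions());
//   const page = await browser.newPage();
//   await page.addInitScript(webdriverMaskScript());
//   await page.goto("https://example.org");
```

Injecting the mask before page scripts run matters: anti-bot code typically probes these properties as early as possible.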
- Using browser-side scripting tools, such as Tampermonkey, within standard browsers. These tools allow custom JavaScript code to be injected and executed directly on the page, offering a practical and discreet way to automate data collection from within the browsing environment itself. This technique has been successfully applied in large-scale scraping operations, including on major social networks where traditional approaches are often ineffective due to advanced client-side defenses.
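The skeleton below sketches what such a userscript might look like. The `@match` pattern, the CSS selector, and the record shape are placeholders; the extraction logic is kept as a pure function so it can be exercised outside the browser.

```javascript
// ==UserScript==
// @name         OSINT collector (sketch)
// @match        https://example.org/*
// @grant        none
// ==/UserScript==

// Pure helper: turn a list of element-like objects into records.
// Kept DOM-free so it can be tested outside the browser.
function extractRecords(elements) {
  return elements.map((el) => ({
    text: el.textContent.trim(),
    link: el.href || null,
  }));
}

// Browser-only part: query the page and hand results to the helper.
// Guarded so the script is inert outside a browser environment.
if (typeof document !== "undefined") {
  const records = extractRecords(
    Array.from(document.querySelectorAll("a.profile-link")) // placeholder
  );
  console.log(records);
}
```

Because the code runs inside a normal, logged-in browser session, it inherits the user's cookies, TLS stack, and rendering engine, which is what makes this approach hard to distinguish from organic browsing.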
Beyond the scraping techniques themselves, the presentation will also cover the current detection methods employed by websites to identify automated behavior and how these can be bypassed, including:
- Detection of automation environments via specific JavaScript variables (e.g., navigator.webdriver) or discrepancies in the DOM.
- Behavioral detection mechanisms such as mouse movements, keyboard activity, or interaction timing.
- Identification of scraping-specific browser extensions or content injection tools.
- Detection of headless execution environments using debugging interfaces or timing-based heuristics.
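To make the first and last bullet points concrete, here is a sketch of the kind of client-side probe an anti-bot script might run. The object passed in stands in for the real `navigator`, and the signal labels are our own; real systems combine many more checks.

```javascript
// Scan a navigator-like object for common automation giveaways and
// return the list of signals that fired.
function automationSignals(env) {
  const signals = [];
  // Automation frameworks set navigator.webdriver to true.
  if (env.webdriver) signals.push("navigator.webdriver");
  // Headless Chrome historically exposed an empty plugin list.
  if (env.plugins !== undefined && env.plugins.length === 0) {
    signals.push("empty-plugins");
  }
  // "HeadlessChrome" appears in the default headless user agent.
  if (/HeadlessChrome/.test(env.userAgent || "")) {
    signals.push("headless-user-agent");
  }
  return signals;
}
```

Each of these probes has a corresponding evasion (masking the property, populating the plugin list, overriding the user agent), which is why detection and bypass evolve together.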
This talk will provide a technically grounded exploration of the current capabilities and limitations of stealth web scraping from both offensive and defensive perspectives.
Licensed to the public under https://creativecommons.org/licenses/by/4.0/
about this event: https://program.why2025.org/why2025/talk/7DMBVR/