Have you ever been asked to create a resilient, petabyte-scale data collection and distribution architecture? Do you need to transform data before it is indexed, removing unnecessary or sensitive fields, or even enriching events with a lookup, before writing them to your index? Do you need to detect specific patterns to identify event line breaks, extract event timestamps, or assign the appropriate sourcetype? Do you need to control where the data is sent, including specific Splunk index(es) or even a non-Splunk sink?

If so, we will show you how Splunk’s Data Stream Processor (DSP) can address these requirements and meet both current and future demands. We will walk through the scenarios that customers are dealing with today. Finally, we will discuss how the Universal Forwarder, Heavy Forwarder, and HTTP Event Collector fit into this new data ingestion architecture.
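For context on the requirements listed above, here is a minimal sketch of how event line breaking, timestamp extraction, sensitive-data masking, and index routing are expressed today with classic parse-time configuration on an indexer or Heavy Forwarder; DSP moves this kind of logic into stream pipelines, which is what the session demonstrates. The sourcetype acme:web, the index web_events, and the timestamp format are assumptions chosen for illustration, not settings from the talk:

    # props.conf (parse-time settings on an indexer or Heavy Forwarder)
    # "acme:web" is a hypothetical sourcetype used for illustration.
    [acme:web]
    # One event per line: break on newlines, do not merge lines.
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # Timestamp at the start of each event,
    # e.g. 2019-10-22T09:15:00.123-0700 (assumed format).
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
    MAX_TIMESTAMP_LOOKAHEAD = 30
    # Mask SSN-like patterns before the data is written to the index.
    SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g
    # Route these events to a specific index via transforms.conf.
    TRANSFORMS-route_web = route_acme_web

    # transforms.conf
    # Send every event with this sourcetype to the "web_events" index
    # (a hypothetical index name).
    [route_acme_web]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = web_events

DSP performs these same kinds of transformations in a pipeline before the data ever reaches the indexers, and can also deliver the results to non-Splunk sinks.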
Slides PDF link - https://conf.splunk.com/files/2019/slides/FN2062.pdf?podcast=1577146201