Splunk [AI/ML, Splunk Machine Learning Toolkit] 2019 .conf Videos w/ Slides

Teaching Splunk to Hear: Audio Spectrum Analysis in Splunk for Event Classification and Anomaly Detection [Splunk Enterprise, Splunk Machine Learning Toolkit, AI/ML]

12.23.2019 - By SplunkPlay


If we hear a nearby gunshot, we instinctively react. A mechanic often knows a machine so well that they can diagnose issues by sound alone. While machines can be given analytical capabilities with machine learning (ML), sensing human inputs - like auditory or other sensory data - in a form that machines can understand is challenging. At Splunk, we have been all about making machine data accessible to humans, but what if we flip that and make human data accessible to machines? I take audio captured from live and recorded sources, transform it with the Fast Fourier transform (FFT), and feed the results into Splunk's Machine Learning Toolkit (MLTK) for classification and anomaly detection. Can we use Splunk to detect gunshots? Can we learn a machine's normal sounds to detect pending failures? This presentation uses Splunk to apply superhuman ML detection and learning capabilities to human data, showing that the MLTK contains accessible tools you can apply to your IT and security problems.
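As a rough illustration of the kind of preprocessing described above (not the speaker's actual pipeline), the sketch below uses NumPy and SciPy to slice a WAV recording into fixed-length frames, compute each frame's FFT magnitude spectrum, and summarize it as per-band energies written to CSV - a numeric, timestamped form that Splunk could index. The file name, frame length, and band edges are illustrative assumptions.

import csv
import numpy as np
from scipy.io import wavfile

FRAME_SEC = 0.5                      # analysis window length (assumed)
BANDS = [0, 250, 1000, 4000, 12000]  # hypothetical frequency band edges (Hz)

# Hypothetical input file; mix stereo down to mono if needed
rate, samples = wavfile.read("machine_audio.wav")
if samples.ndim > 1:
    samples = samples.mean(axis=1)

frame_len = int(rate * FRAME_SEC)
with open("audio_features.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["frame_start_sec"] +
                    [f"band_{lo}_{hi}Hz" for lo, hi in zip(BANDS, BANDS[1:])])
    for start in range(0, len(samples) - frame_len, frame_len):
        frame = samples[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame))            # magnitude spectrum
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / rate) # bin frequencies
        # Total energy per band -> one Splunk-friendly numeric feature each
        row = [round(start / rate, 2)]
        for lo, hi in zip(BANDS, BANDS[1:]):
            mask = (freqs >= lo) & (freqs < hi)
            row.append(float(np.sum(spectrum[mask] ** 2)))
        writer.writerow(row)

Once indexed in Splunk, per-frame features like these could then be passed to MLTK algorithms (for example via the MLTK `fit` and `apply` SPL commands) for classification or anomaly detection, which is the kind of workflow the talk walks through.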

Speaker(s)

Joshua Marsh, Senior Sales Engineer, Splunk

Slides PDF link - https://conf.splunk.com/files/2019/slides/IoT1560.pdf?podcast=1577146258
