

Matar Haller is the VP of Data & AI at ActiveFence, where her teams own the end-to-end automated detection of harmful content at scale, regardless of the abuse area or media type. The work they do here is engaging, impactful, and tough, and Matar is grateful for the people she gets to do it with.
AI For Good - Detecting Harmful Content at Scale // MLOps Podcast #246 with Matar Haller, VP of Data & AI at ActiveFence.
// Abstract
One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often represents a blurred line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors are continuously changing their tactics and developing new ways to avoid detection?
// Bio
Matar Haller leads the Data & AI Group at ActiveFence, where her teams are responsible for the data, algorithms, and infrastructure that fuel ActiveFence’s ability to ingest, detect, and analyze harmful activity and malicious content at scale in an ever-changing, complex online landscape. Matar holds a Ph.D. in Neuroscience from the University of California at Berkeley, where she recorded and analyzed signals from electrodes surgically implanted in human brains. Matar is passionate about expanding leadership opportunities for women in STEM fields and has three children who surprise and inspire her every day.
// MLOps Jobs board
jobs.mlops.community
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
activefence.com
https://www.youtube.com/@ActiveFence
--------------- ✌️Connect With Us ✌️ -------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Matar on LinkedIn: https://www.linkedin.com/company/11682234/admin/feed/posts/
Timestamps:
[00:00] Matar's preferred coffee
[00:13] Takeaways
[01:39] The talk that stood out
[06:15] Online hate speech challenges
[08:13] Evaluate harmful media API
[09:58] Content moderation: AI models
[11:36] Optimizing speed and accuracy
[13:36] Cultural reference AI training
[15:55] Functional Tests
[20:05] Continuous adaptation of AI
[26:43] AI detection concerns
[29:12] Fine-Tuned vs Off-the-Shelf
[32:04] Monitoring Transformer Model Hallucinations
[34:08] Auditing process ensures accuracy
[38:38] Testing strategies for ML
[40:05] Modeling hate speech deployment
[42:19] Improving production code quality
[43:52] Finding balance in Moderation
[47:23] Model's expertise: Cultural Sensitivity
[50:26] Wrap up
By Demetrios