By The Red Hat X podcast series
The podcast currently has 175 episodes available.
Machine learning is transforming the tech sector and other industries like retail, manufacturing, supply chain, banking, healthcare, education, and insurance. The problem is that bringing machine learning into these fields requires not only experts who can train models, but also the ability to deploy and maintain ML models in production. This is a common pain point in many organizations. This is where MLOps is useful. MLOps is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently.
Link to academic paper discussed in this episode:
https://arxiv.org/pdf/2209.09125.pdf
You are about to explore computer science and start developing the first applications. What should be your first programming language? Should it be adorable JavaScript, glorious Python, legendary Java, or... something else? Well, as always, it depends.
Join Brian and Denis Magda, Head of Developer Relations at Yugabyte, in reflecting on their experiences in an attempt to find that mysterious programming language X for beginners.
Borderline paranoia: the robot of the 1950s has become something we could hardly recognize. Today's robots are more compact, and the computers inside them are tinier than ever. Consider the 2018 reports of a hardware hack from China, or the 2022 Starlink hack: https://threatpost.com/starlink-hack/180389/
Aronetics welcomes Jerod Brennen of Brennen Consulting to join our ongoing conversation about the black boxes in your home or business and the complex implicit trust we place in them.
Modern cloud-native environments using Kubernetes or OpenShift are driving innovation and speed for development teams but these technologies do not come with a framework or set of rules for implementing container security. Choices for security tooling are often down to what development teams and operations teams regard as best practices. In this session, the Jetstack team will cover why machine identity management is fundamental to delivering container security and discuss what organizations can do to improve best-practice container security.
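For context, machine identity management in Kubernetes is often handled with cert-manager, the open-source project originated by Jetstack. A minimal sketch of a workload certificate might look like the following; the names (`service-identity`, `internal-ca`, the DNS name) are illustrative, not from the episode:

```yaml
# Hypothetical cert-manager Certificate sketch: request a TLS identity
# for a workload from an in-cluster CA issuer.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: service-identity
spec:
  secretName: service-identity-tls
  dnsNames:
    - payments.example.svc.cluster.local
  issuerRef:
    name: internal-ca
    kind: ClusterIssuer
```

cert-manager would then issue and renew the certificate automatically, storing it in the named Secret for the workload to mount.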
Enterprises are building and delivering containers and Kubernetes-based applications to their customers. In a distributed architecture, microservices communicate with each other and with third-party APIs to exchange information and present it to customers. Because this communication travels over the internet, these applications are vulnerable to external network-based attacks.
In this podcast, we will discuss how traditional runtime threat defense solutions fall short of preventing such attacks, and why a new approach is required.
Building on our previous discussion on First Mile Observability (Red Hat X Episode 123 - April 26, 2022), we’ll focus on tools and methods organizations can use to optimize their Enterprise Observability Pipelines. Specifically, we’ll discuss 1) Fluentd and Fluent Bit - the evolution of these open-source projects for data collection and transport that have now been deployed over one billion times, 2) Strategies for multi-output distribution to send data from anywhere to anywhere and 3) How Calyptia Core can aggregate your observability data to easily define and manage your pipelines, no matter how complex your environment.
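As a taste of the multi-output strategies mentioned above, a Fluent Bit pipeline can fan one input out to several destinations by matching the same tag in multiple outputs. This is a minimal sketch; the paths and hostname are illustrative:

```ini
# Hypothetical Fluent Bit classic-mode config: tail application logs
# and send the same stream to both Elasticsearch and stdout.
[INPUT]
    Name   tail
    Path   /var/log/app/*.log
    Tag    app.*

[OUTPUT]
    Name   es
    Match  app.*
    Host   es.example.internal
    Port   9200

[OUTPUT]
    Name   stdout
    Match  app.*
```

Each `[OUTPUT]` section with a matching `Match` pattern receives its own copy of the records, which is the basic building block for sending data "from anywhere to anywhere."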
Threat detection at runtime is a critical component of securing containers and the cloud. How can you spot malicious activity in a dynamic, orchestrated environment based on Kubernetes? Today we will discuss runtime security practices using Red Hat open source tools.
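To make runtime detection concrete, one common open-source approach (Falco, a CNCF project, not necessarily the tool discussed in this episode) expresses detections as rules over syscall events. A sketch of such a rule:

```yaml
# Hypothetical Falco-style rule sketch: flag an interactive shell
# spawned inside a running container, a classic post-exploitation signal.
- rule: Shell Spawned in Container
  desc: Detect a shell started inside a container at runtime
  condition: spawned_process and container and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.name)"
  priority: WARNING
```

The point is the shape of the technique: declarative conditions evaluated against live kernel events, rather than scanning images before deployment.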
Have you ever wondered how a geo-distributed app such as a Slack-like corporate messenger is architected and functions? How hundreds of microservices are deployed and communicate across distant geographies? How thousands of user messages and events flow in real time across countries? How petabytes of data are stored and accessed across continents?
By taking a Slack-like corporate messenger as an example, we'll discuss the fundamental design principles for geo-distributed apps that are born to work across geographies.
Making a data pipeline fit for machine learning use cases requires more than just additional data monitoring. Furthermore, bringing machine learning into production has traditionally required a lot of manual setup and configuration, even for toy ML pipelines. These manual methods are not reproducible, don’t autoscale, require significant technical expertise, and are error-prone. Among other things, this episode will go over MLOps, a set of practices aiming to deploy and maintain machine learning models in production reliably and efficiently.
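To illustrate one of the reproducibility practices MLOps aims for, here is a minimal, stdlib-only sketch of training a model and content-addressing the saved artifact so that any deployment can be traced back to an exact version. The "model" and all names are illustrative, not from the episode:

```python
# Minimal sketch of an MLOps-style workflow: train, version, and reload a model.
import hashlib
import os
import pickle
import tempfile

def train_model(data):
    # Stand-in "model": just the mean of the training data.
    return {"mean": sum(data) / len(data)}

def save_versioned(model, directory):
    # Content-address the artifact: the version is a hash of its bytes,
    # so identical models always get the same, reproducible identifier.
    blob = pickle.dumps(model)
    version = hashlib.sha256(blob).hexdigest()[:12]
    path = os.path.join(directory, f"model-{version}.pkl")
    with open(path, "wb") as f:
        f.write(blob)
    return path, version

def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)

directory = tempfile.mkdtemp()
model = train_model([1.0, 2.0, 3.0])
path, version = save_versioned(model, directory)
restored = load_model(path)
print(restored["mean"])  # 2.0
```

Real pipelines replace the toy pieces with a training framework, an artifact store, and automation, but the core idea — every deployed model maps to one immutable, identifiable artifact — is the same.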