For individuals who struggle with mental health, the intensity of the challenges often evolves over their lifetime due to triggers or phases of life.
In this podcast we speak with Dr. Karen Swartz, a psychiatrist and professor at the Johns Hopkins University School of Medicine, about the continuum of mental health and common triggers, including those less often addressed, like regret and loss. Dr. Swartz also shares insight into the youth mental health crisis, the “dangerous levels” of stress that 41% of parents are experiencing, and more.
Tune in to learn how Dr. Swartz’s experience and guidance can help employers more deeply understand the mental health challenges their employees and families are experiencing and provide more meaningful support.
Guest: Dr. Karen Swartz, Director of Clinical and Educational Programs at the Johns Hopkins Mood Disorders Center, Professor in the Department of Psychiatry and Behavioral Sciences at the Johns Hopkins School of Medicine, Founder and Director of the Adolescent Depression Awareness Program (ADAP)
Thousands of Intel employees connect to the corporate Wi-Fi network every day. When connectivity issues occur, both productivity and user experience (UX) suffer. Intel IT makes every effort to optimize the network’s performance. Still, until recently, we had little visibility into client behavior: traditional network monitoring solutions report issues only from the network infrastructure’s perspective, while endpoint IT tools are limited to operating system data.
Based on its existing market presence with PC Wi-Fi products, Intel’s Client Computing product group developed Intel Connectivity Analytics technology. The product group collaborated with Cisco—a well-known enterprise Wi-Fi network infrastructure supplier—and Intel IT to develop compelling use cases and solutions that utilize Intel Connectivity Analytics. Intel IT provided product feedback and practical applications of the technology in our enterprise environment and helped optimize the use cases over Intel’s network infrastructure. Our input helped to better define the data and analytics needed to achieve the desired business outcomes, such as faster troubleshooting, reduced mean time to repair (MTTR), lower network total cost of ownership, and a better UX.
Intel IT now uses Intel Connectivity Analytics—delivered through the Cisco Catalyst Center (previously Cisco DNA Center) and cloud dashboard—to improve the management of Intel’s Wi-Fi network. We are also working with the overall ecosystem to expand the use cases for Intel Connectivity Analytics and further enhance AI for IT Operations (AIOps). We have reduced some client-side troubleshooting from upwards of 15 minutes to 10–15 seconds. Similarly, finding the root cause of widespread network issues—which previously could take days—now often takes seconds as well.
We encourage other IT departments to consider deploying IT tools powered by Intel Connectivity Analytics for Wi-Fi, which is built into Intel Wi-Fi adapters and requires no software installation or maintenance. It can be utilized by Cisco and other Intel Connectivity Analytics Program members.
In addition, we are now working with the Intel Client Computing product group to develop an additional Intel Connectivity Analytics offering that shares data through a PC agent to the cloud and expands the available connectivity analytics beyond Wi-Fi to include Thunderbolt technology, Bluetooth, Ethernet, and more.
In this Tech Barometer podcast segment, Steve McDowell, chief analyst at NAND Research, shares insights from his report “Taming the AI-Enabled Edge with HCI-Based Cloud Architectures” and explores the impact of extending IT resources to the edge and the driving force of AI.
Find more enterprise cloud news, feature stories and profiles at The Forecast.
Transcription:
Steve McDowell: The reason we push AI to the edge is because that’s where the data is, you know, we want to do the processing close to where the data is so that we don’t have latency. And in a lot of environments, if we’re ever disconnected, it’s going to shut down my business.
Jason Lopez: The question is, how do you deploy edge resources in real time? In this Tech Barometer podcast, Steve McDowell, chief analyst at NAND Research, talks about his paper “Taming the AI-Enabled Edge with HCI-Based Cloud Architectures.” I’m Jason Lopez. Our aim in the next several minutes is to discuss how AI impacts edge computing.
Steve McDowell: We’ve always defined edge as any resources that live outside the confines of your data center. And there’s some definitions that say the extension of data center resources to a location where there are no data center personnel. It’s remote.
Jason Lopez: But AI, of course, adds complexity. One example McDowell cites is automated train car switching. The sides of train cars have barcodes, which are scanned, and a local stack of servers processes where the cars are and where they need to be.
Steve McDowell: I can do this in real time. I can partition my workloads so that, you know, computationally expensive stuff or maybe batch stuff can still live in the core. And I don’t have to do that at the edge all the time. So I can really fine tune what it is I’m deploying and managing.
[Related: Slew of Changes Drive VMware Customers to Consider Alternatives]
Jason Lopez: This is important when you consider that AI at the edge differs from traditional edge deployments primarily due to its need for greater computational power.
Steve McDowell: Once we start putting AI in, then suddenly we have to have the ability to process that AI, which often means the use of GPUs or other kinds of AI accelerators. Ten years ago, if we talked about edge, we’re talking largely about embedded systems or compute systems that we treat as embedded. Embedded is a special word in IT. It means it’s fairly locked down. It doesn’t get updated very often. When we look at things like AI, on the other hand, that’s a very living workflow. If I’m doing image processing for manufacturing, for example, for quality assurance, I want to update those models continuously to make sure I’ve got the latest and the greatest.
Jason Lopez: And along with managing fleets of hardware and software in AI deployments at the edge, there’s also the issue of security.
Steve McDowell: By treating edge systems as connected and part of my infrastructure, and not as we historically have treating them as kind of embedded systems, if you will, it also allows me to, in real time, manage patches, look at vulnerabilities, surface alerts back up to my security operations center, my SOC. It makes the edge look like it’s part of my data center.
Jason Lopez: Tools like Nutanix allow for this approach, applying a consistent management practice across both core and edge environments. This involves deciding what tasks to perform at the edge versus the core due to constraints like cost, security, and physical space.
Steve McDowell: A key part of the conversation becomes what lives where? And that’s not a tool problem, right? That’s kind of a system architecture problem. But once you start partitioning your workloads and say, this certain kind of AI really needs to be done in the core, Nutanix gives me that ability and cloud native technologies give me that ability to say, well, I’ll just put this kind of inference in the cloud and I’ll keep this part local.
[Related: Pivot Past the Enterprise AI and Cloud Native Hype]
Jason Lopez: McDowell’s thinking springs from the flexibility afforded by hyper-converged infrastructure. The idea of AI at the edge is part of the whole architecture of storage, network and compute.
Steve McDowell: That can be as disaggregated as it needs to be. So if I need a whole lot of compute in the cloud, I can do that and then put the little bit at the edge and I can manage all of that through that single pane of glass, very, very powerful.
Jason Lopez: Treating edge computing as part of the data center is especially interesting because of how the data center itself is being transformed by AI and machine learning.
Steve McDowell: Once we abstract the workload away from the hardware, I’ve broken a key dependency. I don’t have to physically touch a machine to manage it, to update it, to do whatever.
Jason Lopez: The point McDowell makes is that management is simplified not just for configuring a single node but across an entire fleet, which enhances efficiency and scalability.
Steve McDowell: We’re taking technology that evolved to solve problems in cloud, but they apply equally to the edge, I think. It turns out, it’s a fantastic way to manage edge.
[Related: More Reasons for HCI at the Edge]
Jason Lopez: AI at the edge is increasingly adopting cloud-native technologies like virtualization and containers. The shift is to container-based deployments for AI models, sharing GPUs and managing them remotely.
Steve McDowell: If you look at how, you know, NVIDIA, for example, suggests pushing out models and managing workloads on GPUs, it’s very container-driven.
Jason Lopez: And McDowell explains why this simplifies edge management.
Steve McDowell: A GPU in a training environment is a very expensive piece of hardware. And giving users bare metal access to that, you know, requires managing that as a separate box. Using Cloud-native technologies, I can now share that GPU among multiple users, very, very simply. That same flexibility now allows me to manage GPUs at the edge with the level of abstraction that works. So I can sit in my data center, push a button and manage that box without actually worrying about what that box looks like necessarily. So I don’t need that expertise kind of onsite, right? Which is a key enabler for edge. If you have to have trained IT specialists wherever you’re deploying, that doesn’t scale. And edge is all about scalability.
[Related: The Future of AI Computing Resides at the Edge]
Jason Lopez: GPUs typically power AI, but they are not commonly found at the edge. Inference, however, is a facet of AI that many technologists see value in at the edge. GPUs would be the right fit if generative AI is needed at the edge, but what’s needed now are inference engines, especially around vision and natural language processing.
Steve McDowell: Take, for example, a retail environment where they have intelligent cameras that are positioned all up and down the aisles of the grocery store. And the only job that these cameras have is to monitor the inventory on the shelf across from the camera. And when they’ve sold out of Chex mix and there’s a gap there, it sends an alert, come restock. I mean, it’s very kind of data intensive and you don’t want to send that to the cloud necessarily.
Jason Lopez: Technology is moving toward managing infrastructure environments seamlessly, such as edge, data centers, and cloud, without changing tools or management models.
Steve McDowell: Nutanix has capabilities for managing AI in your workflow, kind of period, full stop. A good example of this is GPT in a box. Where it’s a technology stack and I plug a GPU in and I can do natural language processing. If I want to push that out to the edge. I don’t have to change my tools. I mean, the beautiful thing, and the reason that we use tools like Nutanix is that it gives me kind of a consistent control plane across my infrastructure. Now, infrastructure used to mean data center, and then it meant data center and cloud. And now with edge, it means data center and cloud and edge. The power of Nutanix though, is it allows me to extend outside of my traditional kind of infrastructure into the edge without changing my management models. So, as AI goes to the edge, I think the things that already make Nutanix great for AI in the data center are equally applicable at the edge.
Jason Lopez: Steve McDowell is founder and chief analyst at NAND Research. This is the Tech Barometer podcast, I’m Jason Lopez. Tech Barometer is a production of The Forecast, where you can find more articles, podcasts and videos on tech and the people behind the innovations. It’s at theforecastbynutanix dot com.
In this Tech Barometer podcast, Nick Mahlitz, digital infrastructure manager at Forestry and Land Scotland, takes listeners to his homeland, where he helps the government use data and cloud technologies to manage natural resources and meet sustainability goals.
Find more enterprise cloud news, feature stories and profiles at The Forecast.
Transcript:
Nick Mahlitz: I learn new things every day as I talk to our staff members. Just recently, we’re using drones with lasers to map out the land. So the drone footage stuff is really, really, really good. They can then tailor what they do with the land around what they found with these drones and with the lasers, they can go through forest layers, they can analyze the types of land that is, and oh, there’s a ridge there that we perhaps need to avoid, and just making better decisions.
Jason Lopez: We’re starting this podcast interview by parachuting right into the heart of the work of Forestry and Land Scotland, the government agency that manages the country’s public land and wilderness. A year ago we talked with the manager of the organization’s data centers, Nick Mahlitz, and posed the question:
Ken Kaplan: The forest needs technology?
Nick Mahlitz: Yes. The forests do, yes, to manage your forests well and good, to use technology in a challenging environment like Scotland, where it’s very remote and the weather can be quite extreme sometimes. Technology and exploring all realms of technology will only help us better manage Scotland’s forests and our land.
[Related: Forestry and Land Scotland Trailblazes Private-Public Shift to Cloud]
Jason Lopez: So, we circled back to follow up on that interview. Forestry and Land Scotland is utilizing technologies like drones with laser mapping capabilities. As we’ll learn, the work gets very data intensive, and overseeing that data is Nick’s job. He supports the organization’s mission to balance natural resources. One of those balancing acts is to better track wildlife, particularly deer, to protect young trees.
Nick Mahlitz: Scotland has a lot of deer, so we have to cull a fair amount of them. And the challenges around Scotland being a very remote piece of land and identifying and culling enough deer in a small frame of time can be very challenging. So with technology of tagging deer and using drones to manage where they are, a ranger can go from culling a couple of deer and say half a day or a day to 8, 9, 10 deer within the same timeframe.
Jason Lopez: Scotland’s public land serves many interests. Trees enhance carbon sequestration. Some of its forests are grown for timber, and some of its land is used for renewable energy projects — such as wind and hydroelectric power. Scotland’s goal is to be net zero by 2045. One of the most important uses of the land is for the public’s enjoyment of the outdoors. Nick notes that it’s good for people behind the scenes of his own organization to experience the places their work supports.
Nick Mahlitz: You know, I don’t want to be the person in the basement. You know, the data is there, but it’s also good out and enjoy it. And we try and do that. We encourage non-forester staff to go out with a forester for the day. Pick what you want to do. Do you want to go and see a piece of bog peatland be restored? Do you want to go and plant a tree? Do you want to go and uplift trees? You know, we offer these activities to people like myself or back office staff, HR procurement, except anybody who wants an interest in it, because it’s so important. If you understand that you’ll understand your role in the organization and how better you can play a part.
[Related: How a Top University in Scotland Expanded Remote Teaching Tech During a Crisis]
Jason Lopez: This is why the forest needs technology. The earth’s landscape has been so altered by human development over the past 30,000 years that it requires human conservation to prevent further decline or to restore land to a balanced state. Intervention is critical.
Nick Mahlitz: Peatland restorations, the soil gets degraded, you know, so we’re trying to restore the balance in the soil so that it can keep more carbon, et cetera, et cetera. So there’s that ongoing. We’ve got big plans for a big nursery that’s coming up where we can plant 19 million trees a year. We’re planting the trees in a special way. The seeds are planted into biodegradable paper that can just be streamlined, planted, and we can plant far more than we could normally. And then as we cultivate them in a nursery, we have to plant them outside in fields. And we’ve re-engineered equipment to then do that planting for us rather than manually. And, you know, just that kind of approach just means far more efficiencies meeting our targets and making our staff more efficient, which is fantastic.
Jason Lopez: What scientists are learning today about the natural world and what technologists are innovating is accelerating.
Nick Mahlitz: And that is unlocked by cloud, by edge, by modern approach. Lots to be done though. Lots to be done still.
Jason Lopez: Nick’s current goal, as manager of the data centers of Forestry and Land Scotland, is to integrate AI and automation to improve operations and better understand the organization’s data.
Nick Mahlitz: And that’s what we’re actively working on now. And the reality is I can know about AI and automation, me and my team and others in the digital landscape. We know about that for ourselves, but to translate that into somebody with a chainsaw who’s cutting down trees or somebody who’s managing wildlife management as in deer or our nurseries where we’re planting our seedlings to then grow, what does AI and automation do for these ones? And that’s the core of our business.
[Related: Do Forests Really Need Technology?]
Jason Lopez: He reminds us that AI is not the core of the business. AI is a tool. His team’s core business is about data.
Nick Mahlitz: How much data do we have? How can we better understand it? How can it better help us make more informed decisions on our future for sustainability, for a net zero, for generating revenue, et cetera, that we do? So there is now a big piece to understand how we capture data, how we store that data, how we report on that data, how we integrate that data, how we use AI and automation with that data. So that really is a big focus for our organisation over the next year or two.
Jason Lopez: And this exemplifies how Forestry and Land’s data centers operate, with a passion for sustainability. Migrating to the cloud improved efficiency and prompted a shift toward reducing energy and CPU usage, as well as modernizing.
Nick Mahlitz: So, you know, we have, we have legacy systems and they have legacy interactions and, but they’re now in the cloud. So we can replace some of those interactions with more modern solutions, which gets rid of, you know, having to spend so much energy or CPU, I get rid of that inefficiency, even in code, even right down to the lower levels. And that really can make a big difference as well. And that’s something that’s, FLS is passionate about sustainability, but even, you know, filtering that down to our digital teams, they appreciate that too. So that’s where we can look at and make those changes that just makes everything run better in a more sustainable way.
Jason Lopez: The transition to a full public cloud has made management easier, simplified data center operations, and allowed for easy integration of other cloud products and solutions.
Nick Mahlitz: So we have no on-prem environment to administer or manage. So that is quite a unique position that we find ourselves in. And we could not be more delighted with the results. The actual transition to the cloud using Nutanix was an experience that made our journey so much simpler and has bought us the time that we need to modernise and transform our solutions into that next-gen approach. So what we did with Nutanix really was, we saw it as groundbreaking. It’s delivered what we aspire to and it had benefits that perhaps we never really realised at the time that we now enjoy.
Jason Lopez: Nick says the migration has resulted in something else they didn’t originally factor in: time savings. He and his colleagues now have the time and resources to invest in new technologies, which help toward the goal of reaching that net zero target by 2045.
Nick Mahlitz: And comparing our footprint of our on-prem environment compared to what we have now through the metrics that we receive from Microsoft, given that we’re in the Azure platform, it’s really heartening to see that that sustainability piece and net zero is being reached in some way or added to our targets that we have as an organization. And then equally that full cloud integration means that we can now tap into other cloud products, cloud solutions, and very easily integrate them into what we have, which before we didn’t fully appreciate that we could do that. And as Nutanix expand their products and services in the cloud, we’re only going to enjoy that more and more.
[Related: Easy Alternative for Migrating or Extending to Public Cloud]
Jason Lopez: The transition to the cloud, again, enabled the organization to adopt other cloud technologies.
Nick Mahlitz: And now that we’re in there on NC2 in the cloud, what we had to do as part of that journey was unlock other cloud technologies to help us realize that. So for instance, identity management, access management, VPN, et cetera, et cetera. We’re now using all the cloud variants of such so that we have no dependency on really local on-prem infrastructure. So what that entails is a very much a kind of zero trust model that’s highly secure within line with modern approach. So that really can then unlock capabilities that we’re just starting to tap into. But given my team, the ability to now manage that in a completely different way to what we had. But it unlocks really future capabilities at which the IT person’s appetite and desire and helps for recruitment and retain of staff and investment and development on staff when what we’ve done can excite people. It can excite the IT people for sure and other people in the organization. So that’s been really, really a good thing for FLS.
Jason Lopez: Astrophysicists remind us what it is to look back, from space, at planet Earth, and viscerally understand how vital Earth is to life as we know it, yet how small and fragile. People across political boundaries, economies and cultures are galvanizing efforts to preserve oceans, land and forests… and doing this whether it’s reducing pollution, conserving farmland topsoil, or establishing more efficient data centers.
Nick Mahlitz: And it really just lines up to understanding where we are in technology and our timelines and our lives, working in that agile manner, having the growth mindset, embracing technology, all the attitudes that really permeate behind a good digital team. We took that novel approach. We did our due diligence, but we did something new and exciting. And really that’s the refreshment I have from working for 20 years in this career, so to speak, that still there’s the ability to do novel, new things. We Scots are a passionate people. And as I meet other digital and IT teams in other government areas, there’s a similar aspiration and enthusiasm for understanding technology in their, in their areas too.
Jason Lopez: Nick Mahlitz is the senior digital infrastructure manager for Forestry and Land Scotland. This is the Tech Barometer podcast, I’m Jason Lopez. You might want to check out the original video we did with Nick entitled “Do the Forests Really Need Technology?” You can find that and more stories and podcasts at theforecastbynutanix dot com.
Imagine having the foresight to prepare your organization for a flu outbreak days before it hits, just as you would for an impending storm. The Center for Forecasting and Outbreak Analytics (CFA) at the CDC is pioneering this exact capability.
In our latest podcast, we talk with Dr. Dylan George, Director of the CFA, who shares the parallels between disease forecasting and weather predictions and why the former could help employers to better safeguard their workforce in the future. Listen in to learn how the CFA’s advanced modeling tools can help to anticipate and mitigate health risks, and for real-world examples that illustrate how data-driven decisions can enhance employee safety and maintain productivity.
Guest: Dylan George, Ph.D., Director for the Center for Forecasting and Outbreak Analytics at the Centers for Disease Control and Prevention
In this Tech Barometer podcast segment, NAND Research Chief Analyst Steve McDowell describes how CIOs manage change and mitigate risk in the first year after Broadcom acquired VMware.
Find more enterprise cloud news, feature stories and profiles at The Forecast.
Transcript:
Steve McDowell: I think this year is a lot of, you know, resetting how we think about VMware and I think a little bit of resetting everything about Broadcom and what they do. There’s a lot of uncertainty. I don’t think my position is changed.
Jason Lopez: Steve McDowell is the chief analyst of NAND Research. Welcome to another Tech Barometer podcast from The Forecast, I’m Jason Lopez. When McDowell talks about the Broadcom purchase of VMware these days, he’s quick to point out how the discussion has shifted from what Broadcom is going to do to what customers are going to do.
[Related: Slew of Changes Drive VMware Customers to Consider Alternatives]
Steve McDowell: IT is all about managing risk. As long as there’s uncertainty, as an IT guy, I need a plan. I need to know how to mitigate against that uncertainty. Even if it’s not wholesale replacement, have a plan B. And a big part of this is second source. Start mixing in as new projects come up, other technologies and balance the risk, right? You mitigate risk by balancing the options.
Jason Lopez: He says the way to look at next steps from an IT perspective is that customers have a lot of unplanned stuff on their plates. There’s a challenge to ensure there are no hiccups in the data center, especially if IT has to find alternatives to VMware.
Steve McDowell: There’s nothing that is a hundred percent drop in replacement for all the overlap Nutanix has with VMware, for example. It’s still a big effort. It’s still a big effort. And you’re asking me to do this effort, well, if I’m going to switch, right, while I’m also trying to figure out this AI thing and solve all my cybersecurity problems. If I’m doing a new project, I’m going to look at cloud native, I’m going to look at Nutanix. And it’s really the only two alternatives. I’m either going OpenShift or I’m going AHV.
Jason Lopez: What IT wants is predictability and consistency.
Steve McDowell: That’s all any IT guy wants. He wants not to have to think about this. IT plans way ahead, and there’s so many digital transformation products on their plates. And this is a distraction. And that’s what they hate.
Jason Lopez: And McDowell’s advice to the players who make IT solutions.
Steve McDowell: You know where that pain threshold is, and you need to build your programs around that. You got to make the switching costs come down, whatever that means, rebates, technical assistance, training, whatever, professional services. There’s ways for competitors to come in there and leverage the situation that does bring relief to these IT guys.
Jason Lopez: Steve McDowell is Chief Analyst for NAND Research. This is the Tech Barometer podcast. I’m Jason Lopez. Tech Barometer is produced by The Forecast. You can find us and more tech stories at theforecastbynutanix.com. All one word, theforecastbynutanix.com.
Business Group on Health’s 2025 Employer Health Care Strategy survey reveals 89% of large employers intend to implement programs or strategies to support LGBTQ+ employees in their health and well-being initiatives. This focus is crucial, given the significant disparities faced by LGBTQ+ adults.
In this podcast episode, Dr. Mitchell Lunn, a Stanford University physician and professor as well as the director of The PRIDE Study, a research initiative that assesses the health of over 29,000 sexual and gender minority adults in the U.S., shares the disparities in health and health care facing the LGBTQ+ community. This episode covers the implications for employers and how inclusive employee surveys can foster a more supportive work environment.
Guest: Mitchell Lunn, MD, Associate Professor of Medicine (Nephrology) and of Epidemiology and Population Health at Stanford University School of Medicine, Co-director of The PRIDE Study
Thank you to the episode sponsor, Aon.
Increasing Electronic Design Automation (EDA) performance and throughput is critical to Intel’s silicon design engineers.
Silicon chip design engineers at Intel face ongoing challenges: integrating more features into ever-shrinking silicon chips, bringing products to market faster, and keeping design engineering and manufacturing costs low. Design engineers run more than 273 million compute-intensive batch jobs every week. Each job takes from a few seconds to several days to complete.
As design complexity increases, so do the requirements for compute capacity, so refreshing servers and workstations with higher-performing systems is cost-effective and offers a competitive advantage by enabling faster chip design. Refreshing older servers also enables us to realize data center cost savings. By taking advantage of the performance and power-efficiency improvements in new server generations, we can increase computing capacity within the same data center footprint, helping to avoid expensive data center construction and reduce operational costs due to reduced power consumption.
To meet design engineers’ computing capacity requirements, Intel IT conducts ongoing throughput performance tests using real-world Intel silicon design workloads. These tests measure EDA workload throughput and help us analyze the performance improvements—and, in turn, the business benefits—offered by newer generations of Intel® processors.
We recently tested two-socket servers based on the Intel® Xeon® Platinum 8400 and 8500 processor series, along with the Gold 6400 processor series. The tests ran single- and multi-threaded EDA applications on Intel silicon design workloads for more than four days. Select results include the following:
• Higher frequency for per-core performance. For critical-path EDA workloads, selecting a high-frequency CPU like the Intel Xeon Gold 6444Y processor (32 cores per server) can deliver up to 1.14x higher per-core performance compared to lower-frequency, higher-core-count CPUs in the same generation of processors.
• Higher core counts for throughput. For volume validation runs, selecting a higher-core-count CPU at optimal frequency like the Intel Xeon Platinum 8462Y+ processor (64 cores per server) can deliver up to 1.75x higher Register Transfer Level (RTL) Simulation throughput per server when compared to a lower-core-count CPU (32 cores per server) in the same generation of processors. The Intel Xeon Platinum 8462Y+ processor (64 cores per server) completed workloads up to 2.17x faster than a previous-generation Intel Xeon Gold 6346 processor-based server, which has only 32 cores. Compared to a 2nd Gen Intel Xeon Gold 6246R processor (32 cores per server), the server with the newer processor outperformed the older processor by up to 2.40x in throughput.
• Additional benefits from 5th generation processors. At the same Thermal Design Power (TDP) of 350W, the Intel Xeon Platinum 8580 processor completed workloads up to 1.27x faster than a previous-generation Intel Xeon Platinum 8468 processor-based server.
Based on our performance assessment and our refresh cycle, we are deploying servers based on the 4th and 5th Gen Intel Xeon Scalable processor family in our data centers. By doing so, we have significantly increased EDA throughput performance to improve the overall EDA design cycles and optimize time to market of Intel chips.
Learn how Intel’s manufacturing engineers are using natural language processing (NLP) to streamline failure mode and effects analysis (FMEA).
Intel Manufacturing Automation has developed an innovative methodology for performing FMEA in manufacturing by using artificial intelligence (AI) techniques to analyze the emotional tone—positive, negative, or neutral—of users’ comments about manufacturing equipment.
Failure mode and effects analysis (FMEA) is a critical process in manufacturing that is used to identify potential failures, assess their impact, and prioritize preventive actions. The FMEA process involves breaking down the system into its individual components; analyzing each component for potential failure modes; determining the effects of these failures; and assigning a risk priority number based on severity, occurrence, and detectability.
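To make the scoring step concrete: a risk priority number (RPN) is conventionally the product of the severity, occurrence, and detectability ratings, and failure modes are ranked by that product. The sketch below is a minimal, hypothetical illustration of this ranking; the components, failure modes, and ratings are invented for the example and are not drawn from Intel’s FMEA data.

```python
# Minimal sketch: ranking failure modes by risk priority number (RPN).
# RPN = severity x occurrence x detectability (each typically rated 1-10).
# The components, failure modes, and ratings below are hypothetical examples.

failure_modes = [
    {"component": "wafer handler", "mode": "misalignment",  "severity": 7, "occurrence": 4, "detectability": 3},
    {"component": "vacuum pump",   "mode": "pressure loss", "severity": 9, "occurrence": 2, "detectability": 6},
    {"component": "tool log agent","mode": "missed abort",  "severity": 6, "occurrence": 5, "detectability": 8},
]

# Compute the RPN for each failure mode.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detectability"]

# Highest-RPN failure modes get preventive action first.
for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
    print(f'{fm["component"]:<15} {fm["mode"]:<15} RPN={fm["rpn"]}')
```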
Traditionally, FMEA involves time-consuming manual extraction and analysis of large text data from tool logs and other sources, leading to weeks of engineering effort per analysis. This approach must be repeated across large fleets of tools by the engineers who support these tools.
Our new approach uses natural language processing (NLP) and sentiment analysis (SA) to reduce the labor required for FMEA from weeks to seconds. When we compared the traditional FMEA approach to our SA-based system, the software discovered everything the engineers had found with their manual approach—plus issues they had missed. Our system performed FMEA on six months’ worth of data in under one minute, saving weeks of engineering time.
Our SA analysis extracts keywords from comments, notes, charts, and quality systems data to find words with negative connotations about a tool, such as “abort” or “fail.” We’ve customized the analysis for domain-specific language, such as words and phrases that are specific to the semiconductor fab environment (e.g., “defect” and “excursion”). We also implemented custom replacement keywords (“ABORT” vs. “ABT” vs. “ABRT”) to accommodate various abbreviations and spelling errors commonly found in tool logs and other data sources.
The system analyzes sentence structure and filters out brackets for HTML and other special characters used in programming languages, allowing for a wide variety of inputs. While Intel’s manufacturing logs are predominantly in English, the libraries can be extended to handle multilingual input.
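To illustrate the kind of processing described above—normalizing domain-specific abbreviations, stripping markup, and flagging keywords with negative connotations—here is a minimal sketch. The abbreviation map, keyword list, and sample log entry are illustrative assumptions, not Intel’s production rules or data.

```python
import re

# Minimal sketch of domain-specific keyword normalization and
# negative-sentiment flagging for tool-log text.

ABBREVIATIONS = {"ABT": "ABORT", "ABRT": "ABORT", "FLT": "FAULT"}   # assumed mappings
NEGATIVE_KEYWORDS = {"ABORT", "FAIL", "FAULT", "DEFECT", "EXCURSION"}  # assumed keyword list

def clean(text: str) -> str:
    """Strip HTML-style tags and special characters from a log entry."""
    text = re.sub(r"<[^>]+>", " ", text)          # drop HTML/XML tags
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)   # drop remaining special characters
    return text.upper()

def negative_hits(text: str) -> list[str]:
    """Return the negative-connotation keywords found in a log entry."""
    tokens = [ABBREVIATIONS.get(tok, tok) for tok in clean(text).split()]
    return [tok for tok in tokens if tok in NEGATIVE_KEYWORDS]

log_entry = "<b>Chamber 3</b>: run ABRT at step 12, possible defect excursion"
print(negative_hits(log_entry))   # ['ABORT', 'DEFECT', 'EXCURSION']
```

In a fuller system, the flagged keywords would be aggregated per tool and per failure mode to feed the severity and occurrence estimates used in the FMEA ranking.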
Integration with our in-house factory data analytics platform, Data on the Spot (DOTS), further democratizes FMEA results by enabling users to easily access this data and use it to identify root causes from the original sources. The system is now deployed across all Intel semiconductor factories, with a focus on front-end manufacturing.
The Intel® architecture-based components of our new FMEA system enable us to deploy new features rapidly. The FMEA system is powered by Intel® Xeon® processors, which provide the computational power and performance needed for real-time analysis of large datasets. The system uses high-speed SSDs and advanced memory configurations to ensure rapid data access and processing. Fast Intel® Optane™ persistent memory is employed to enhance caching capabilities and accelerate data retrieval.
This transformative approach promises to dramatically streamline the FMEA process, freeing up engineers’ time to focus on developing innovative solutions to further enhance Intel’s manufacturing tools and processes.
Once rare in younger adults, colorectal cancer is increasingly affecting adults under 50, with the biggest increase in those aged 20-29. With many searching for answers, this growing health concern brings to light the unique challenges and comprehensive care needs of younger cancer patients.
In this episode of the Business Group on Health podcast, we speak with Dr. Robin Mendelsohn, Co-Director of the Center for Young Onset Colorectal and Gastrointestinal Cancer at Memorial Sloan Kettering Cancer Center. Dr. Mendelsohn explores lifestyle factors, genetic predispositions, and the ongoing research to uncover the potential reasons for this global trend. The discussion also emphasizes the critical role of early detection, coordinated care, and what employers need to know about prevention and treatment.
Guest: Dr. Robin Mendelsohn, Co-Director of the Center for Young Onset Colorectal and Gastrointestinal Cancer at Memorial Sloan Kettering Cancer Center
Thank you to the episode sponsor, Aon.