Excellent process control requires accurate measurements, a robust control strategy, and precise final control elements such as control valves, actuators, and regulators. Measurement instrumentation is an essential consideration.
In this Emerson Automation Experts podcast, Emerson’s John van Gorsel joins me to discuss these considerations in pressure and other measurements. Technology advancements have enabled new capabilities and simplified configuration, operation, and ongoing maintenance.
Give the podcast a listen and visit the Pressure Measurement and Measurement Instrumentation sections on Emerson.com.
Jim: Hi, everyone. I’m Jim Cahill with another “Emerson Automation Experts” podcast. Accurate and reliable pressure measurement is crucial in a broad range of chemical industry applications, and safety is truly a life-or-death issue due to the presence of toxic, flammable, and potentially explosive products.
I’m joined today by Emerson’s John van Gorsel to discuss some of these challenges. John is the product manager for Pressure Solutions in Europe and will share how the latest instrumentation functionality minimizes installation and operating costs and increases reliability, efficiency, and safety in chemical industry applications. Welcome, John.
John: Well, thank you, Jim. It’s a pleasure to be here.
Jim: Well, I’m so thankful that you’re here. Well, why don’t we start out by asking you if you can share a bit of your background with our listeners?
John: Yes, my background is electrical engineering, and I started working for Emerson in 1986 as an inside sales engineer for the measurement products. I did that for a couple of years, then moved into an outside sales role. And in the early ’90s, I moved into a product support role for various measurement products: pressure, temperature, wireless, level.
The key thing in that period, which I did for almost 25 years, was helping customers solve problems. In 2015, I moved to the European role as a product manager, and I would almost say I’m still heavily involved in helping customers solve applications and challenges. Now I work with a larger team of technical specialists and product managers in the countries.
Jim: Now here I thought I was a long-timer with Emerson, starting in ’88, but I guess you got me beat there. Can you highlight some of the changes and challenges in the chemical industry?
John: Yes, absolutely. I think at the moment, one of the key things we see is that the chemical industry is moving towards a life cycle approach. We’ve seen the discussions on plastic soup, microplastics, and PFAS. Chemical companies are now focusing on the full cycle: in their design of chemicals, they take the raw materials into consideration, the energy used, but also the recycling. And that poses some challenges for the technology that we have.
As an example, the advanced recycling that we now see happening really pushes the envelope of what we can do with instrumentation in terms of high temperatures, high pressures, and vacuum pressures. The chemical industry had already started to spend a lot of effort on energy efficiency, reducing its energy use. We now see it moving towards electrification of processes and the use of hydrogen, biomass, and geothermal energy, and all of that will require new technologies to adapt to those new circumstances.
Now, if I had to name the biggest challenge that we see in the chemical industry, it is finding skilled personnel. This is partly due to the aging, the graying, of the workforce, but we also see that in some European countries there is still a reduced number of people moving into technical studies. The challenge is to get that personnel into the chemical industry.
I probably should mention as well that there is still a bit of an assumption that working in the chemical industry means getting your hands dirty, and that stops people from going in that direction.
One thing the chemical industry is doing at the moment is making itself a better place to work by making tasks easier. A very important example is making sure that people don’t need to spend too much time inside the actual installation. People don’t always realize that to work safely in a chemical plant you need personal protective equipment, PPE. And believe me, I’ve been there: working inside a plant wearing flame-retardant overalls, safety shoes, safety gloves, a safety helmet, hearing protection, and safety goggles is pretty tough.
So if you want to make a chemical plant a better place to work, the one thing you should try is to reduce the time that people actually need to work inside that plant. As a result of that, we see requests for simpler instrumentation that is easier to understand, reducing the time you need in the plant, in the field, to configure these instruments.
Jim: Well, that sounds like a number of changes and challenges, from the ever-evolving regulations to, yes, that demographic challenge of a lot of us reaching the end of our working lives, and technology can play a role in that. So I guess, how does the new technology meet these challenges in chemical processes?
John: Yeah, let me explain that. I think it’s best to split it up into two categories. On one hand, we added several features and technologies to address specific issues that we know of. And on the other hand, we make the everyday tasks that you need to do with instrumentation easier. If I start with the latter, making things easier: for instance, we gave our 3051 series of pressure transmitters an upgrade, and it can now have a graphical backlit display.
And this is a big step forward from the legacy segmented display, because a backlit graphical display is easier to read under all circumstances, but it also allows us to show more information, for example the icons we may know from NAMUR NE 107, an icon indicating the condition of an instrument. That was never possible on the legacy displays.
In addition, the display also supports multiple languages, so we can set it to Spanish, German, or Italian. So that is the outside of the instrument. But we also made some improvements in the configuration user interface. One example: a lot of pressure transmitters are used for a secondary function, think of flow, think of level. And to configure an instrument for flow, you need to set certain parameters: the square root function, and maybe the units in cubic meters per hour or liters per second.
To make that easier, we grouped all that information onto a single page. That reduces the risk of making mistakes or forgetting certain parameters, and it makes it much easier for the person configuring the instrument, saving a lot of time.
What also should be mentioned is that we added Bluetooth communication to the instrument. Bluetooth is a short-range communication that allows a maintenance person or an engineer to look at the condition of an instrument, configure it, and make changes to the configuration without having to physically open the instrument or climb up to it. It also makes it possible to do maintenance tasks without opening the instrument, so without having to power it down. In general, it reduces the time that people need to be in the plant and the time that people need to work at height. That is a big step forward.
In terms of additional functionality to address specific problems, we added some diagnostics to the instrument. Plugged line diagnostics is one example, and we also have loop integrity diagnostics. Both address specific problems that we see in the chemical industry. Plugged line diagnostics addresses the clogging of impulse lines: you lose your measurement because the impulse line is freezing up or clogging with dust. Loop integrity diagnostics monitors the condition of the power supply to the instrument, so faulty wiring, corrosion of terminals, etc., are detected. And we added guided proof-testing to the instrument to help maintain a safety system.
Jim: Yeah, so it sounds like the improvements in the local display with the backlighting, and organizing the tasks people need to do through the user interface, are steps along the way to making these instruments easier to work with and addressing the skills challenges that you brought up earlier.
Now, you mentioned proof-testing these instruments, which I know is required when they are deployed in safety instrumented systems or SIS. These can be quite a complicated, laborious, and time-consuming task. Can you elaborate a little more on why this is so?
John: Yes, it’s probably good to explain a little about how the instruments are selected, or how a safety instrumented system is designed. It’s all about the risk that you have in an installation. You try to mitigate or reduce that risk in the engineering or design phase by, for instance, changing the hardware of the design, changing the rating of certain components, or keeping people away. That all reduces the risk. But a certain risk remains, and you need to mitigate that with a safety function, a safety instrumented function.
Now, the way safety works is that, based on the level of risk that the process has, there is a certain requirement for the safety function. The safety function is allowed to fail only very rarely; for a given process risk, we might allow the safety function to fail, on average, once every 10,000 years. That gives you an idea of the order of magnitude. And that is based on the reliability of the components that you use in that safety function.
Now, when you design that safety function, you can look up all the failure rates for the instruments, and you have the failure rate on that day. But how can you maintain that failure rate? How can you know that after a year in bad weather, changing conditions, a vibrating environment, etc., it still has the same reliability and there are no hidden errors somewhere in the system? That is why you need to do proof-testing on the instruments.
If you do a proof test, what you actually do is test the instrument for hidden failures. Hidden failures are failures that may affect the output of the transmitter, so the functioning of the transmitter, but are not detected by the normal safety system; they are not detected while the instrument is in use. So you do an additional test, and that additional test will reveal a certain portion of the possible failures in the instrument.
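To give a feel for the arithmetic behind this, here is a rough, hedged sketch in Python. The failure rate, coverage, and intervals are invented for illustration and are not published figures for any Emerson product; the formula is a common single-channel textbook approximation of average probability of failure on demand (PFDavg) with periodic proof testing.

```python
# Illustrative arithmetic only -- not published failure-rate data for any
# Emerson product. Failures the proof test can reveal (coverage C) are
# "reset" at every test interval; the remainder accumulates over the
# whole mission time.

lambda_du = 1e-7     # assumed dangerous undetected failure rate, per hour
t_proof = 8760.0     # proof-test interval: one year, in hours
t_mission = 87600.0  # mission time: ten years, in hours

def pfd_avg(coverage: float) -> float:
    return (coverage * lambda_du * t_proof / 2
            + (1.0 - coverage) * lambda_du * t_mission / 2)

print(f"No proof testing (C=0.0): {pfd_avg(0.0):.1e}")  # ~4.4e-03
print(f"90% test coverage (C=0.9): {pfd_avg(0.9):.1e}")  # ~8.3e-04
```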
Now, we at Emerson created the procedures for our instruments, and we used a third party to help us with that. So we tell you exactly what you need to do to test the instrument. Now, imagine what happens when you have a lot of instruments installed, and all of these instruments may have a different proof-testing procedure. You need to follow the proof test recommended by the supplier, because otherwise you don’t know which failures the test will reveal.
So we have a documented procedure. And now when you do the proof test on a certain instrument, you have to find that procedure. You need to find the right manual. You need to verify that that manual is actually valid for that revision of the instrument. So we thought, well, this could be easier if we store that information in the instrument or in the device description, the device driver. So now if someone wants to do a proof test or needs to do a proof test, the tool that he or she uses will tell exactly what steps to take.
And when the test is done, so you know exactly that you did the right test, it will also store the result of that test, pass or fail, in the transmitter. So you can always see when the test was done and what the result was.
So the 3051 with guided proof-testing saves you a lot of hassle in finding the right documentation. It reduces the potential errors of using the wrong procedure, and it makes it easier to keep your safety system within the tolerable limits.
Jim: Yeah, so instead of hunting down manuals to see what you need to do, that’s stored in the instrument, and it basically guides you through the process. I can see that being a huge time savings when it comes time to proof-test all those instruments.
Now you had mentioned diagnostics, the built-in diagnostics. I’ve heard some about the changes in diagnostics. So can you tell us some more about this?
John: Yes, absolutely. I think there were several changes that we’ve seen with diagnostics. In the initial versions of instruments, when instruments became smart, the diagnostics were all around the health of the instrument. So the diagnostics would check, is the instrument still working within its parameters? Is it still good, or did it fail?
At a certain moment, we realized that we could do a lot more with the instruments, that we could actually help customers identify issues with the process rather than only with the instruments. The current 3051 has a couple of these diagnostics. It does have the self-diagnostics that tell you the instrument is still functioning well.
But it also has what we call process alerts. Process alerts are alerts that an operator or an engineer can set in the instrument, completely separate from the primary 4 to 20-milliamp signal, so they don’t interfere with the control loop at all. You can set alerts to monitor whether the pressure goes above or below certain thresholds, and when the pressure crosses a threshold, it is written to a log. So afterwards, you can look in the instrument and see, hey, the pressure was above the alert limit I set five times, or it was below the low limit I set. It gives you an opportunity to do some root cause analysis when there is an issue with the installation.
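For intuition, here is a toy model of how such a threshold-alert log behaves. The class, field names, and units are invented for illustration; the real 3051 process alerts are configured through HART or Bluetooth with device-specific parameters.

```python
# Hypothetical sketch of threshold-alert logging, for intuition only.
from dataclasses import dataclass, field

@dataclass
class ProcessAlert:
    high_limit: float          # log when pressure rises above this
    low_limit: float           # log when pressure falls below this
    events: list = field(default_factory=list)

    def sample(self, t: int, pressure: float) -> None:
        """Log excursions without touching the 4-20 mA control signal."""
        if pressure > self.high_limit:
            self.events.append((t, "HIGH", pressure))
        elif pressure < self.low_limit:
            self.events.append((t, "LOW", pressure))

alert = ProcessAlert(high_limit=5.0, low_limit=1.0)  # bar, assumed units
for t, p in enumerate([2.0, 5.4, 3.1, 0.8, 5.2]):
    alert.sample(t, p)

print(f"{len(alert.events)} excursions logged:", alert.events)
```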
And we can do this not only on the pressure; we can also do it with the temperature sensor that is inside the pressure transmitter. We have a temperature sensor in there that we use for compensation, but you can use it for alerts as well. There you can see that maybe the temperature of the transmitter was high for a certain amount of time, or it was low, with the risk of the impulse lines freezing, for instance. So that is an added diagnostic that tells you a little about the surroundings of the transmitter rather than the functioning of the transmitter itself.
Some other diagnostics that we have: one that has already been available for a while in the 3051 series is the loop integrity diagnostics. The critical thing with 4 to 20-milliamp instruments is that the signal is a current flowing through the whole loop. The advantage is that if there is a voltage drop somewhere because of issues in the line, it doesn’t affect the milliamp signal going around that loop, and that is a big advantage.
Now, the challenge is that when there is too much resistance in the loop, or there is a leakage current, at a certain moment you may see issues with the measurements. A transmitter needs a certain voltage at its terminals to operate, typically between 10 and 12 volts. Once the voltage comes below that minimum, the transmitter will stop working and you lose your measurement. The diagnostics will tell you well in advance that the voltage is becoming too low, and the voltage becomes lower when there is, for instance, corrosion or moisture at the terminals. So it warns of a situation that could potentially make you lose your measurement.
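The arithmetic behind that warning is simple Ohm’s-law bookkeeping. A hedged sketch with assumed values follows; check the transmitter datasheet for the actual minimum lift-off voltage.

```python
# Back-of-the-envelope terminal-voltage check for a 4-20 mA loop.
# All values are illustrative assumptions.
supply_v = 24.0        # power supply voltage
loop_resistance = 250  # ohms: sense resistor + wiring + corroded terminals
min_lift_off_v = 10.5  # assumed minimum operating voltage at terminals

for current_ma in (4.0, 12.0, 20.0):
    terminal_v = supply_v - (current_ma / 1000.0) * loop_resistance
    ok = "OK" if terminal_v >= min_lift_off_v else "LOW -- loop at risk"
    print(f"{current_ma:5.1f} mA -> {terminal_v:5.2f} V at terminals [{ok}]")

# Worst case here is 20 mA: 24 V - 0.02 A * 250 ohm = 19 V of margin.
# Rising loop resistance from corrosion or moisture erodes that margin,
# which is what the loop integrity diagnostic watches for.
```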
And the second of the advanced diagnostics is the plugged line diagnostics. This is completely new for the 3051; it’s a technology that we have already used for something like 15 or 16 years in a different transmitter, and we have now implemented it in the 3051. What we do, if I simplify it, is use the pressure sensor as a microphone.
And I once made a comparison that I said, “Well, we’re actually mimicking what the older engineers did.” They listened to the process and they said, “Well, hey, I listened and I hear that there is something wrong in the process. There is a valve cavitating,” or, “A pump is losing its bearings.” And we do the same with the pressure sensor. We listen to the noise of the process and that noise of the process will change if something happens in the process.
So the process has a certain noise, almost like a fingerprint. And when that changes, we know, for instance, that the impulse lines are clogging, because that puts a damping on the noise. It’s a very powerful type of diagnostic because it warns you that you’re about to get a problem with your measurement, before the actual problem occurs. That makes it a very useful addition for situations where there is a risk of clogging and losing the measurement.
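As a simplified picture of the fingerprint idea, one could compare the short-term noise level of the pressure signal against a learned healthy baseline and flag when it collapses. The statistical processing in the actual transmitter is more sophisticated; this toy sketch only illustrates the principle.

```python
# Toy version of plugged-line detection: a clogging impulse line damps
# process noise, so the rolling standard deviation of the pressure
# signal drops well below its learned baseline.
import random
import statistics

random.seed(1)
healthy = [10.0 + random.gauss(0, 0.20) for _ in range(200)]  # normal noise
plugged = [10.0 + random.gauss(0, 0.02) for _ in range(200)]  # damped noise

baseline = statistics.stdev(healthy)  # learned "fingerprint" noise level

def check(window: list, threshold_ratio: float = 0.3) -> str:
    ratio = statistics.stdev(window) / baseline
    return f"noise at {ratio:.0%} of baseline -> " + (
        "possible plugged line" if ratio < threshold_ratio else "normal")

print(check(healthy[-50:]))
print(check(plugged[-50:]))
```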
Jim: That’s so interesting how the technology has advanced: it’s not just looking internally at itself, at how all the components in the transmitter are doing, but looking externally at the process, like some of those examples you gave, and at the electrical side of things, the power going to it, all the things that could spot problems. Maybe the device itself is fine, but there are issues outside. So that’s very powerful diagnostics.
John: Yes, it is.
Jim: Now, I know differential pressure level measurement is used in many chemical process applications. What are some of the benefits of using electronic remote sensor-based DP level measurement?
John: Yeah, that’s really a very good question, Jim. And I do need to go back in time a little to explain this, because this is a development that took decades. Differential pressure transmitters have always been used for level measurements, and the reasons are simple: everyone understands how a pressure transmitter works, it’s easy to calibrate, etc. And the weight of any liquid in a vertical column generates pressure, so when you measure that pressure, you can find out what the level is. It’s a simple, straightforward measurement that, with a few exceptions, will always work.
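The relationship John describes is plain hydrostatics: a liquid column of height h and density ρ exerts a pressure of ρgh at its base, so level follows directly from the measured pressure. A quick worked example with assumed numbers:

```python
# Hydrostatic level measurement: p = rho * g * h, so h = p / (rho * g).
# Numbers below are assumed purely for illustration.
rho = 1000.0     # liquid density, kg/m^3 (water)
g = 9.81         # gravitational acceleration, m/s^2

p_pa = 29_430.0  # hydrostatic pressure measured at the bottom tap, Pa
level_m = p_pa / (rho * g)
print(f"Level: {level_m:.2f} m")  # -> 3.00 m
```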
Now, there are some measurement challenges, and one of them is when you have a tank that is pressurized, so the vapor space is not connected to the atmosphere; you need to compensate for the pressure that is present in the tank. The easiest way to do that is to mount a differential pressure transmitter at the bottom of the tank and connect the reference side, the low-pressure side, all the way to the top with an impulse pipe.
Now, in some applications that works, but in a lot of applications, what happens is that you still have some evaporation from the process, it condenses in the impulse line, and over time you get liquid in that impulse line, which gives you a measurement error. So you need to drain it, which means you have additional maintenance work on that tank.
Then someone very clever came up with the idea: what if we fill up that impulse line in advance, so we already have liquid in it and don’t have an issue with the condensation? That works, but you need to make sure that the liquid is always at the same level. So you need to control that level, and that’s another maintenance task. Someone needs to climb up to the top of the tank and check the level in that impulse line.
And there are some other issues as well, apart from the fact that making an installation with impulse lines is very expensive because it’s a lot of work, and sometimes complicated work that requires a specialist. The fill fluid used in those wet leg systems, as we call them, is usually glycol, which is slightly hygroscopic, so over time it absorbs water, the density changes, etc.
The most common solution to all of this is the use of remote seals. A remote seal is a capillary connection of a certain length connected to the transmitter, with an additional external diaphragm. The transmitter is mounted at the bottom of the tank, the capillary runs all the way to the top of the tank, and there we connect the reference side. Now, a capillary filled with a certain fluid will create issues when the temperature changes: the fill fluid will expand and contract, and that changes the pressure.
One bright idea was: what if we have exactly the same capillary on the high and the low-pressure side? That will cancel out the effects on both sides. It works to a certain level, but there is still an effect called the head effect, which has to do with the vertical distance on the low-pressure side. So if the temperature changes, you still have an effect. And just to give you a number, in a typical level application that effect will probably give you a 5% error on the measurement.
And the other thing I haven’t even mentioned: when you have a transmitter with two remote seals, and I know a lot of people will recognize this, a standard transmitter is something like 2 or 3 kilograms; one person can pick it up, go into the plant, and install it. A transmitter with two remote seals is a different story. The remote seals are supposed to be flexible, but they’re bendable at most, so it’s really a tough construction, and it can be 25 to 35 kilograms depending on the types of remote seals. You need two people to carry and install it.
Can you imagine how it is if you need to climb up a tank to install such an instrument? So it’s a lot of work. You need a lot of safety measures in addition to what you normally do. You probably need scaffolding.
So we got this question: is there a possibility to do these measurements, but electronically? Have the two seals, so we don’t have the issues with the dry and wet leg systems, and have the measurement at the top and the bottom of the tank, but without all the disadvantages of the capillaries. So we designed what we call the Rosemount 3051S Electronic Remote Sensors. Instead of having a capillary, you now have two separate sensors, where one is the master and one is the slave; the master calculates the differential pressure, and we connect the two sensors with only a four-wire electrical connection.
So one person can install it: install one sensor, run the cable down, connect it to the second sensor at the bottom of the tank, and you have a very accurate measurement not affected by temperature changes. And the main thing is that it’s much, much easier to install, with less time needed at height; you don’t need to climb with a very awkward construction. So the electronic remote sensors really solved big problems that we saw in the chemical industry. As for applications, anywhere with more than 3 meters of height there is already an advantage.
Now, think of distillation columns; they can be 30 or 40 meters high. If you want to cover that distance with a standard transmitter with capillaries, in reality you cannot make the capillaries longer than about 10 meters; that’s a practical limit. So you need to find a different solution. With electronic remote sensors, we could put them 100 meters apart if needed; we can have more than 100 meters of electrical cable between the sensors. So we’re not limited at all there. It’s a big change, and it solves a lot of the disadvantages traditionally connected to differential pressure level measurements.
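Conceptually, the ERS arrangement replaces the mechanical capillary link with a subtraction done in the electronics. The sketch below shows only the arithmetic of the idea, with invented readings; real 3051S ERS devices handle synchronization, temperature compensation, and diagnostics internally.

```python
# Conceptual ERS level measurement: two independent pressure sensors,
# top and bottom, with the differential computed digitally.
rho_g = 1000.0 * 9.81    # density * gravity, Pa per metre of liquid

p_bottom_pa = 180_000.0  # reading from the sensor at the tank bottom
p_top_pa = 150_570.0     # reading from the sensor in the vapor space

dp_pa = p_bottom_pa - p_top_pa  # subtraction done in the "master" unit
level_m = dp_pa / rho_g
print(f"Electronic DP: {dp_pa:.0f} Pa -> level {level_m:.2f} m")  # 3.00 m
```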
Jim: Yeah, it sounds like the installation challenges were overcome, and with the accuracy of that electronic connection between the sensors, it sounds like a great solution for many applications in chemical processes.
Now, I know pressure gauge applications are another thing commonly seen for local visual display within the process. Can you share some of the innovations in this area, such as Rosemount Wireless Pressure Gauges?
John: Yes, I can. The Wireless Pressure Gauge all started with customers coming to us and explaining some of the challenges they saw with gauges. A gauge is usually a mechanical device based on what is called a Bourdon tube. A Bourdon tube is really a hollow, elliptical tube, slightly bent, and when you put pressure on it, it stretches, and that movement is used to drive the needle on the dial.
And the big disadvantage is that it’s a mechanical device, so it gets damaged pretty easily, and it is not protected against overpressure. Now, the big risk that customers also see is when there is a leak in that Bourdon tube. The process is right behind the dial, meaning that the operator looking at the dial of a mechanical pressure gauge is almost looking at the process.
So right behind the dial is the process pressure. Whenever there is a risk of these gauges failing, you need to take other measures to protect the gauge, by adding a remote seal, for instance. By the way, all mechanical gauges have a plug on the backside so that if there is a pressure buildup, it can blow out in the direction away from the operator.
So gauges fail quite often: metal fatigue, and they’re not very good at dealing with overpressure or vibration, for instance. And one thing I remember is a customer who was really concerned. He said, “Well, if that gauge says 0 and I want to do an operation, start a pump, how do I know it’s really 0, that there’s really no pressure, and that the gauge hasn’t failed?”
When we heard that, we thought, “Well, we need to design something else. We need to go back to the drawing board and not try to make a better gauge, but use a different design.” So we started with a pressure sensor, one we already knew, because a lot of what we do is based on pressure sensors. And we made the gauge electronic, in a way that it’s battery-powered but will work for 10 years or more. Because we use a separate pressure sensor, it’s protected against overpressure; the pressure sensor we use can deal with up to 150 times the process pressure. So there’s no risk of leakage or an overpressure incident.
The other thing we solved with this solution is that the needle on our dial is driven by a stepper motor. When you see a traditional gauge in a vibrating application, the needle is moving up and down and there’s just a gray area where the needle is somewhere. With our gauge, the needle is perfectly still, indicating the right pressure. And if there is a failure, you will see it, because there is an LED indicating the condition of the instrument. So we solved a number of the big problems that customers have.
And in addition, we added WirelessHART communication to the Wireless Pressure Gauge. That gives the additional possibility of reading both the measured value and the condition of that gauge in a central location. Or you can use it to strengthen a WirelessHART network if one is already there.
Jim: Well, that sounds like it solves issues around potential safety problems. And then adding the WirelessHART component, you’d be able to get that remote reading in addition to the local one you’d get through the display there. So that’s really, I think, valuable for a lot of the applications there.
Well, John, this has been a really great discussion. Can you kind of summarize for our listeners some of the key takeaways from our conversation?
John: Yes, I can try to do that. We heard that the chemical industry has several challenges, and these challenges are all around how do I keep my plant safe, and how can I reduce the amount of time and effort needed to keep a plant safe. We also heard that the chemical industry sometimes struggles with the complexity of instruments and needs simpler, more user-friendly instruments. We designed and improved some of our instruments to address specifically these challenges that the chemical industry has. And we also have advanced solutions, like the Wireless Pressure Gauge and electronic remote sensors, that address specific challenges in the chemical industry.
Jim: That’s a great summary of things. And I guess to close things out, where can our listeners go to learn more about some of these things we’ve discussed?
John: Well, I would suggest using our website, emerson.com. You could even use rosemount.com, and you can use the search function for 3051; it takes you to multiple pages with information, including an interactive 3D animation of the instrument where you can test all the features it has. Or you can search for the 3051S ERS or Wireless Pressure Gauge to go to the respective pages for these products. You’ll find a lot of information: documentation, manuals, user experiences with these technologies, information about applications, etc.
Jim: And for some of the things you mentioned along the way, I’ll add hyperlinks in the transcript to make it easier to get to them. Well, John, thank you so much for sharing your expertise with our listeners today.
John: It was a pleasure. Thank you, Jim.
-End of transcript-
Technology transfer for manufacturers in the Life Sciences industry has traditionally been a slow and cumbersome process of scaling up from research and development to commercial production. Capturing the process knowledge along the way and making changes can slow down the overall effort.
In this Emerson Automation Experts podcast, Bob Lenich joins me to discuss how the Process Knowledge Management (PKM) digital collaboration platform can help streamline the process and minimize errors, resulting in faster time-to-market.
Give the podcast a listen and visit the DeltaV Process Knowledge Management section on Emerson.com for more on how to drive performance improvements across your pharmaceutical and biopharmaceutical development lifecycle.
Jim: Hi everyone, I’m Jim Cahill with another Emerson Automation Experts podcast. Digital technology transfer is a growing opportunity to reduce time to market for new therapies. The current approach for developing and transferring a manufacturing process to different product lines across the globe is difficult. The development phase within a global supply chain is very cumbersome, inefficient, and time-consuming. I’m joined here today by Bob Lenich, business director for Emerson’s Process Knowledge Management digital collaboration platform to discuss what is being done to improve this key activity and reduce time to market. Welcome Bob.
Bob: Hey Jim, happy to be here.
Jim: Well, I’m glad that you’re here to share some of your knowledge with our listeners. So Bob, I’ve been around here at Emerson a very long time and I know you have too. Can you share a little bit about your background with our listeners?
Bob: Sure. I hate to talk about how long it’s been, because it has been a while, but I’ve been at Emerson for several decades, and most of that has been in the Life Sciences, process management, and operations space. I’ve worked on projects, in product development, and in marketing and consulting, so I have a really broad background in all things related to Life Sciences manufacturing, operations, and the overall supply chain.
I’m very excited about understanding both the opportunities that are coming and what Emerson can do to help realize them, especially given everything that’s going on in Life Sciences right now. It is such a dynamic space, and a lot of the things we’re talking about doing are really important to help the industry get better and better.
Jim: Well, that sounds like a wealth of background that you bring to our conversation today. So let’s dive into it. What is driving this change in, I guess, digitizing tech transfer? What is technology transfer, and what are the problems today?
Bob: Sure. So the idea is that today there is a growing number of new therapies being developed. Some of the more recent data we have shows that there are more than 40,000 clinical trials currently underway here in 2024. For a development to happen, it starts in early-stage development, goes to later-stage development, goes to clinical, and then goes to commercial, assuming it’s successful. As it goes through each of those phases, someone has to define, here’s how we want to make this particular thing, and then as it moves from phase to phase, okay, here’s the next thing that we want to do. So capturing all of the information, defining all of those things, and making sure that what was done in an early stage is effectively passed on to the subsequent stages is the opportunity in front of us.
Today, doing things in an individual stage can represent thousands of parameters, and parameters include things like equipment definition and specification for capabilities, material information, operating parameters, quality parameters, just all kinds of things around that. And so if you say, great, there’s 40,000 clinical trials happening, there are thousands of pieces of information to manage around each of these, you’re going to go through multiple phases. You can just see the growth in terms of the amount of information to be managed and how to do that.
Today, all of that is typically done through a paper process or spreadsheets or things like that, which works because it reflects how things have been done in the past, but it’s really not very effective. And so the challenge is people want to accelerate what they’re doing. They have a large and growing volume of all of this and they really would like to apply technology to start doing it. There are two other things that have kind of really impacted this. One, during COVID, everybody saw that if you throw lots of people and bodies at things, you could actually get things done much more quickly. I mean, everybody saw that vaccines were developed in 18 months versus what normally takes 10 years. But again, that was because they threw lots of people at it.
Also, there have been some incentive changes; recent laws in the U.S., like the Inflation Reduction Act, have combined to reduce the amount of time patents are in play for development that’s done. So the combination of all these new therapies coming, the fact that regulatory changes are reducing the amount of time you’ve got patent protection, and the growing challenge of managing all this information as the number of these things grows says that we really need to figure out how to apply technology to this process to make it more efficient and effective.
Jim: Wow, my brain hurts from multiplying that 40,000 by the thousands of parameters for each of these. That’s really a lot of information to manage, especially with all the compliance and validation requirements in the industry. So I guess, what tools or procedures exist today for this?
Bob: Right. Today, like I was saying earlier, it reflects the way things have historically been done, which is a combination of people and paper. Traditionally, it’s a paper thing. You basically have to write things down in documents, create reports, people sign those reports, and then they manage those documents through a paper process. Today, a lot of that has been converted from paper to paper-on-glass, but it’s still very much a paper process. And the thought here is that just because it’s in a digital document and it’s digitally available doesn’t mean that all of the information in that particular report is easily available, because getting information out of a Word document can be challenging. Getting information out of a spreadsheet can be challenging.
And just to give you a quick example: even though things are digital today, when someone is working in a spreadsheet to characterize how to make something, or how to scale a piece of processing equipment up, a scientist will sit down, start working on their calculations, and start typing them in. And when they figure it out, in order to meet the regulatory and validation requirements, a second person has to sit with them, verify that they typed in the calculation correctly, and sign off on the fact that, oh yeah, you typed it correctly. Guess what? That’s not what those people got their degrees for, to watch somebody else type in Excel. So if there are things we can do to eliminate that kind of step, it’s a huge savings for those people, because it gets them away from the tedium of just managing the process and back to actually doing the kind of work they want.
And then by the same token, when you take it from site one to site two and you go from early stage to late stage to clinical, every time you do a transfer, two people have to get involved in that exact same kind of a process. The source site has to say, here’s what we did. The destination site says, okay, we got it. Here’s how we’re updating it. Everybody signs off on that. And so again, just the tedium of managing the exchange, setting up the meetings to do that, making sure that people are available, you know, all of those kinds of things, just make it a real headache to try and do it with, you know, today’s typical tools.
Once you have those things, even though you do have those documents, in a Word document or a spreadsheet, they are typically managed on SharePoint. SharePoint has a lot of benefits in terms of making things easily available, but finding stuff on SharePoint is a real headache. And if you’re trying to find a particular parameter in a particular file on SharePoint, and you want to find out, oh, where was that pH value referenced in five different places across five different SharePoint sites, that’s a major headache. So while things work today, they don’t work very efficiently or effectively, and there’s a lot of opportunity to start converting them.
So what we’re really talking about doing is trying to take what had been this, you know, traditionally paper process, even though it’s somewhat digital, and convert it from a paper process to more of a data process. So instead of having a report or a spreadsheet, all of the elements that are going into that document or into that spreadsheet are now pieces of data that we want to manage independently through some kind of a centralized application. And then we want to do collaboration, change management and things around that so that you’re converting it from the way you’ve done things to something that takes advantage of new technology.
Jim: Yeah, I think you described that challenge really well: one step just takes you from pieces of paper to, really, pieces of paper behind a computer monitor, which is a step in the right direction, but it certainly doesn’t give you the benefit of what could be possible. So I guess, what are some of the new approaches to replace and improve this traditional approach?
Bob: Sure. So one of the things out there is this idea of digital process knowledge management; it’s a growing opportunity, and there are growing solutions out there. About three or four years ago, we had been working in this space and were trying to figure out good ways of doing it. There was a company out there called Fluxa with a package called Process Knowledge Management. Emerson acquired that company, and we’ve been working to embed that product platform into the overall Emerson portfolio.
The idea here again is that instead of having SharePoint manage all these individual documents and spreadsheets, we extract the information out of those sources and put it into an enterprise-wide database application, so that all of those things are available, changeable, traceable, etc., and you can do your job much more effectively. This matters particularly when you’re defining a manufacturing process and you need to say things like: okay, what are the step sequences required to make this? What are all the associated operating parameters, and the attributes around them? If it’s a pH for making sure the bioreactor is operating at the right level for growth, what’s the appropriate operating range? What’s the appropriate calibration range? Just making sure you’ve got all of those things captured.
Similarly, when we talk about the equipment, that bioreactor: what are the characteristics and capabilities of that bioreactor or other things in the process, so that you can really say, yep, I’ve got a good understanding of how that piece of equipment is going to work. And then as I go from phase to phase and development line to development line, I can confirm that the new piece of equipment I’m going to use reflects what the original one had, and therefore it’s going to work and you’re going to get the same kind of result.
So this idea of converting all of this information from something captured in a spreadsheet or a document into individual elements, individual recipe objects, is one of the first things that happens in PKM. And the key thought is that when you have everything broken down to this level, you can provide traceability, versioning, and automation. So when somebody says, great, we defined the pH value, but after running some tests we found that instead of running at 3.7, we want to run at 3.9, you’ve got to go through and do change management. Doing that in SharePoint with spreadsheets is a real headache. With something like PKM, though, you make the change in one place, and every place that particular value is referenced is automatically updated. That’s taking advantage of the propagation characteristics of new technology and of this new model.
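As a mental model of the recipe-object idea Bob describes, think of a parameter as a single versioned object that recipes reference rather than copy. The class and names below are invented for illustration; PKM’s actual data model is proprietary and far richer.

```python
# Invented illustration of "change once, propagate everywhere": recipes
# hold references to one versioned parameter object, not copies of it.
class Parameter:
    def __init__(self, name: str, value: float):
        self.name, self.value, self.version = name, value, 1
        self.history = [(1, value)]

    def update(self, value: float) -> None:
        self.version += 1
        self.value = value
        self.history.append((self.version, value))  # full audit trail

ph = Parameter("bioreactor_pH_setpoint", 3.7)

# Both site recipes reference the same object -- nothing to reconcile.
site_recipe_a = {"pH": ph}
site_recipe_b = {"pH": ph}

ph.update(3.9)  # one change after test runs...
print(site_recipe_a["pH"].value, site_recipe_b["pH"].value)  # 3.9 3.9
print(ph.history)  # [(1, 3.7), (2, 3.9)] -- traceable versions
```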
Similarly, once you start combining things together and you’ve got all those building blocks in, then you can start building things like templates to say, okay, let’s define how a bioreactor works. What are the key operating activities, key step sequences, and then all of the associated parameters with that so that you’ve got a standard approach to say, you know, this is how bioreactors generally work. And then you can customize that for each individual product that you do.
So having this template then gives you the ability to be much more efficient. And then again, as people make changes to the template, any place that template was used, you can reference that, you can propagate the updates, and you can ensure that any changes are done are appropriately reflected everywhere it was used. And you have confidence because you’re using digital technology to track all of that versioning, all of the updates and everything else.
Then once you’ve got the building blocks done, you can start writing recipes. And there has historically been something called ISA-88, a standard that the industry has used for a number of years. What it really does is talk about having general recipes, which are scale-agnostic but define in general how things work. Then you have site recipes, which are more specific and start getting very scale-specific, as in a benchtop scale with a five-liter bioreactor versus a commercial scale with a 20,000-liter bioreactor; you get into those kinds of distinctions. And then you go to master recipes and control recipes.
Historically, we’ve never really been able to do the general and site recipes in a very efficient manner, so that is where PKM really focuses. Once you’ve got general and site recipes defined, PKM allows you to very easily export that information and integrate it with the execution systems that run the master recipes and control recipes to actually do things. So this combination of defining things across the full scope of the recipe hierarchy and then being able to integrate them is another one of the major elements you have.
And then the last part is that when you look at all of this, you also want to be able to go through and do risk assessments, because again, it’s great to be able to say, here’s the definition of everything and here’s how it should work. But in order to ensure that the pharmaceutical manufacturing is meeting all of the safety requirements that are out there, then you want to be able to say, great, what are all the key risk areas at each stage against each piece of equipment, against each parameter that I’m talking about, and be able to again capture that and go, Okay, yep, this is a real significant risk. Let’s identify that. Let’s identify how we’re going to deal with that and turn that into something that we’re going to actually manage. And so, again, capturing that definition, capturing that information, and then being able to reference that as you’re actually doing things to address those risks is something that’s captured by PKM.
So a lot of stuff there. It’s a really powerful package and it really takes advantage of new technology so that you’ve got everything in a centralized database. And then you can have multiple people work on common areas, you know, collectively. They don’t have to be in the same time zone. They don’t have to be in the same location. They can see changes that are done. You can add comments and annotations. You know, you can have chats back and forth. There are just a lot of things that make this a much more effective way of taking advantage of digital technologies for people to share information about how to do something.
Jim: Yeah, it sounds like it’s not just organizing it better in the database, so that an update made here is reflected in all the places where it needs to be; it’s that collaborative element of people being able to work together and see what each other is doing that really fosters that. So, Bob, can you share some details, like where this has been done, or what kind of results and benefits have been achieved so far?
Bob: Sure. Like I said, this is a really growing opportunity and a number of leading-edge life science companies are actually applying this and have been doing this for a number of years. So it’s really interesting what they’ve been able to accomplish.
The first thing they’ve done is go through a standardization exercise. One of the things companies have found is that when they started out, they had a lot of minor differences, and they’d always kind of agreed, okay, the way we do it in site one versus site two is close, but not identical, and that’s okay; we’re not saying you need to make them identical. But what’s been interesting is that when you start with this combination of the general and site recipe approach, getting a common naming convention and standardization really provides a lot of benefits, because it helps you eliminate redundancies and makes communication much clearer, both within your organization and when you talk to the regulatory groups.
So when you’re doing a regulatory submission, a filing for a new indication or a new therapy, having common names for things, so that people understand, oh yeah, I called it a bioreactor and I called it titre, and knowing exactly what that means and that it’s the same thing across the different sites and different products you’re talking about, is a really valuable thing. People didn’t necessarily realize that when they started, but it’s become very evident as they’ve been using it. The thought here is that people were doing the same things but calling them slightly different names, and aligning on that is a really big benefit.
The second thing that’s been going on is the opportunity to really reduce the amount of time. Look at what I talked about earlier: people are using spreadsheets, two people typing into a spreadsheet, people reviewing it, transferring it to another group that has to do more typing. Then, when you connect it to an automation system, somebody has to extract the information from the spreadsheet and put it into the automation system, somebody has to verify they did that correctly, and you’ve got to go through all this testing. There’s a whole bunch of data integration and management activity that doesn’t provide a lot of value but that you have to do, just because today all of those separate systems have to work together to make this happen.
So, as a great example, Roche recently won the ISPE Facility of the Year award in 2023 for their clinical supply chain facility in San Francisco. It’s one of the best applications of taking advantage of Process Knowledge Management and connecting it into execution systems. The idea is that they’ve established “general recipes”: they have process families defined for general classifications of things like monoclonal antibodies, and they have a general way of approaching those. Then, when they’re doing an individual product, they will define that product, and as they go from an initial development site to a launch site to a full-blown commercial site, they will have site versions of that one general recipe for that particular product. And they’ll be able to automatically scale things.
So they can take advantage of the information: they can embed calculations that refer to the scale of the site and automatically scale things up, so that if you’re at a 2K site versus a 20K site, the calculations for how to size something, how to figure out the amount of materials you need, things like that, can be done automatically and then tracked as part of what you’re doing. Roche has embedded that. They’ve also integrated it with a lot of their existing systems; they’ve tied their knowledge management system into their materials system, so they know that the standardized material names and suppliers that exist in their overall enterprise planning system are the ones being referenced by the scientists. So, again, nobody has to go through and cross-check those kinds of things.
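To give a flavor of the embedded scale-up calculations Bob mentions, here is the simplest possible case: quantities that scale linearly with working volume. Real site recipes encode process-specific rules, and the function and numbers below are assumptions for illustration only.

```python
# Simplest-case scale-up: quantities that scale linearly with working
# volume. Real recipes use process-specific rules (and some parameters,
# like pH setpoints, don't scale at all).
def scale_linear(base_qty: float, base_volume_l: float,
                 site_volume_l: float) -> float:
    return base_qty * (site_volume_l / base_volume_l)

base_volume = 2_000.0    # "2K" development site, litres
site_volume = 20_000.0   # "20K" commercial site, litres

media_kg = scale_linear(50.0, base_volume, site_volume)
feed_l = scale_linear(120.0, base_volume, site_volume)
print(f"Media: {media_kg:.0f} kg, feed: {feed_l:.0f} L at the 20K site")
```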
So the idea is that all the information required, from material definitions, to equipment characterizations, to telling the automation system what to do, is connected together and integrated. And the real benefit is, one, they’re able to do things on a much faster basis. Roche will tell you they have been able to dramatically reduce the amount of time and effort it takes them to create a new product. In the clinical supply center, they have their digital knowledge management system integrated with their execution systems, and they will tell you that they have eliminated the need for transcriptions, so that people are not involved at all in moving something from system one to system two, because it’s all integrated. Therefore, all the transcription errors have gone away, and all of the work to review and sign off on that has gone away. They will clearly document that they have been able to save significant amounts of calendar time and significant costs as they go through and do this. This is now a core part of what they’re doing at that facility, and they have certainly been working to extend it to other sites.
So the real plus here is that you can significantly reduce the calendar time it has historically taken to go from site one to site two, or from version one to version two, eliminating a lot of the testing and verification and confirming there aren’t any errors, because you’re doing all of this with digital integration. You can confirm that things transferred correctly, and you’ve got the traceability, the data integrity, all of the elements around that. It just significantly reduces the amount of time and effort it takes.
It does take a chunk of work. You do have to do the standardization exercise up front, and you do have to do the work to make the integration happen. So we’re also working on eliminating those kinds of barriers, making it easier to start and easier to do the integration, so that you can get to this end result: dramatically reducing the calendar time and effort it takes to bring a new product to a new facility. Those things are coming. But at the moment, you can certainly go out and look; there are a number of papers and presentations out there that document this. And like I said, that Facility of the Year award from ISPE for 2023 is one of the best examples we’ve seen.
Jim: You can just see the workflow, the whole process, getting way more efficient, with far fewer errors involved. And I liked what you said at the start about bringing a common tribal language to facilities all across the world. Everyone’s doing things a little bit differently, and this gets away from that, so you get a really common language of communication, which is a huge benefit, I think. So where can a life sciences manufacturer get started to learn more about the Process Knowledge Management solution?
Bob: You know, there are some really good places to go. Obviously, we’ve got a lot of information on the Emerson websites. If you go to Emerson and search for Process Knowledge Management, it’ll take you to the pages on the Emerson portals where we describe what we’re doing and show examples of what has been done. You can see videos of things people have done, we can show you the kinds of capabilities we have, and obviously we are ready to talk to you about anything else we can do to help you understand things and potentially give you demonstrations of this.
The other place you can go, and hopefully people in the life science world are familiar with it, is BioPhorum. BioPhorum is an industry consortium that consists of a large number of end users, technology providers like Emerson, equipment providers like Cytiva and Sartorius, regulatory groups, academic groups, etc. We work collaboratively to capture what the industry needs, and we do this from, I’ll say, an agnostic perspective, so that we’re capturing the needs without doing anything proprietary there. If you go to the BioPhorum site, you can get a lot of good information about what’s going on relative to tech transfer and, more generally, what’s happening. But if you want to look at specific examples, start targeting it, and make it real for you, then I would say you need to talk to someone like us who is actually working and doing it with people. You need both: you need to understand, in general, what’s coming and how it’s being looked at by the industry and the regulatory groups, and then you can look at places that show you specific examples and the real benefits that have been achieved.
Jim: Yeah, and I know we’ve covered a number of things that BioPhorum has put out in separate blog posts. I’ll provide a link to where we have some of those located here on the blog, along with some of the other things you mentioned. So there’s a whole lot of information, videos, and other documents to learn more, or you can connect with the experts we have around the world. Bob, thank you so much for joining us today and sharing your expertise with our listeners.
Bob: Happy to do so. Again, this is a really exciting time in life sciences, because making these kinds of changes is something everybody’s been interested in for a long time. The reality is that it’s happening now, and people are really excited about actually making these kinds of things real and then getting the benefits from it. It’s a really exciting place to be. Thank you.
-End of transcript-
In this episode, expert Darianyela Cedeño explains the importance of dust collectors across different industries and shares new automation technologies for these assets that help maximize their efficiency and minimize their environmental impact.
We invite you to watch or listen to the full episode and subscribe to our channel to learn more about innovations in industrial automation.
Personalized healthcare, time-to-market pressures, and accurate, timely data collection and dissemination are all significant trends for manufacturers in the Life Sciences industry. Knowing that digital transformation efforts are required is one thing, but determining how best to proceed is another.
In this podcast, Bill Carey joins me to discuss the Life Sciences Showroom facility in the Austin, Texas area. The showroom demonstrates how to address the challenges in developing a path forward. He discusses how technology has simplified tech transfer processes and how real-time data collection and predictive diagnostics can keep production on track to avoid delays caused by rework and information gathering.
Give the podcast a listen and visit the Life Sciences & Medical section on Emerson.com for more on the technology and solutions to help drive improved performance and stay ahead of the trends.
Jim: Hi, everybody, this is Jim Cahill with another Emerson Automation Experts podcast. Emerson is helping our customers accelerate the personalized healthcare revolution and facilitate the end of treatable disease-caused deaths in our lifetime. In this podcast, I’m joined by Emerson’s Bill Carey to discuss the acceleration of technology adoption with demonstrable manufacturing operations management solutions. We’ll talk about the evolution of Emerson Life Sciences’ digital ecosystem with innovative technology. Welcome to the podcast, Bill.
Bill: Thanks, Jim. Great to be here.
Jim: Well, it’s great that you’re here sharing your expertise with our listeners. Bill, I moved over from the building you’re in, the other Austin-area Texas office building, to make room for the Austin Life Sciences showroom. Can you share with our listeners what’s available in that space?
Bill: Yeah, the Austin Life Sciences showroom, it’s a new space that we just built out in the Emerson Innovation Center in Round Rock, Texas. So here we’re able to show off our standardized Emerson Life Sciences digital ecosystem. And this space is now another example that highlights Emerson’s commitment to the life sciences industry. We built this beautiful glass-enclosed conference room with two connected clean room demonstration labs.
Jim: Yeah, it’s really a sight to see over there, how beautifully it all came together. So why is it important for customers to be able to see the depth and breadth of this digital ecosystem?
Bill: Emerson’s here to help our customers realize significant, tangible benefits like faster new product introductions, shorter lead times, improved quality, and maximized productivity. But seeing is believing. By showcasing the Emerson Life Sciences digital ecosystem, we expect to accelerate technology adoption with demonstrable manufacturing operations management solutions. We’re also propagating best practices and current technology, and making the audience’s initial impression a view of an interconnected ecosystem.
Jim: That’s right. It’s one thing, I guess, to talk about it, but it’s a whole other thing to really see some of this in action. So with that seeing-is-believing idea in mind, can you tell us about some of the demonstrations available today?
Bill: Yeah. So as I mentioned, we have two labs. In lab number two, we demonstrate a one-click tech transfer to scale up a benchtop process to clinical or commercial production. It’s built around a cell culture process for traditional biotech, and we’ve been presenting this demo to customers since January. But of course, we’re continually adding new capabilities. When we finalize the equipment going into lab one, we intend to build out a demo that also demonstrates one-click tech transfer, but in this case a scale-out of a process from one to many production lines. The intention there is to base it on a CAR-T process for advanced therapy medicinal products, or ATMPs.
Jim: Yeah, I think that scale-up process is the thing that can really shrink the timelines between discovery and usable therapeutics and medicines for our customers. So it’s interesting that you are integrating Emerson software solutions with the physical equipment that our customers would actually use. Can you tell us how you choose what gets included in the showroom?
Bill: We’ve been working with some companies in our OEM partner program. Ideally, the equipment is given to us on a loan or consignment basis, maybe for two to three years, and then we’ll periodically refresh it so everything stays evergreen and we’re always showing some of the latest equipment. So far, we’ve been able to utilize OEM partners’ DeltaV-specific libraries and graphics. Currently we have an Applikon three-liter bioreactor, a Thermo Fisher Rotea centrifuge, a Kaiser Raman analyzer for spectral PAT, and a Biostat STR 50-liter bioreactor from Sartorius.
Jim: Wow, it sounds like we could begin manufacturing some things with all that equipment in there. That’s very impressive.
Bill: Yeah, we’ve been talking about whether we could do any of that after hours.
Jim: Now, that would be fun. So what Emerson products are you currently highlighting in the showroom?
Bill: So we’ve pulled a bunch of things from our DeltaV platform. We have DeltaV DCS, DeltaV MES (manufacturing execution system), DeltaV Real-Time Scheduling, and DeltaV process specification management with spectral PAT. We’re using both SIMCA and AspenTech models, and our latest offering, DeltaV Workflow Management.
Jim: And for listeners, there’ll be a transcript that comes along with the podcast and I’ll make sure to add hyperlinks to all of those solutions and products Bill just mentioned there. You mentioned you’re always adding new capability. Can you give us a sneak peek of some coming attractions?
Bill: Well, I’d like to be able to show more about where contextual data fits and how to do monitoring, analysis, and reporting on it. To serve that, I intend to add DeltaV Edge, Inmation, and Seeq.
Jim: Wow, the management of data has historically been one of the huge challenges, so those should be some exciting additions once you have all that in place. So when you’re speaking with customers who come into the facility, what are some of the key challenges you hear about from them?
Bill: Well, most often it’s needing to take advantage of all the data that’s been collected from the islands of automation. Customers want to move forward on their digitalization journey toward a predictive or adaptive plan. But of course, there are also the startups that come in that are just working on their first batch records, and they need help deciding what even to automate.
Jim: Yeah, I can see that. Data is everything, and as we move forward and technology collects more and more data everywhere, managing it, making sense of it, and getting it to provide useful, actionable information back is really important. So, as we wind things down a little bit, if a team from one of the life sciences companies wants to come visit us, how should they go about getting a visit scheduled?
Bill: They should reach out to their local Emerson business development team. That could be Emerson or an Emerson Impact Partner. From there, we’re going to follow the same process we’ve been using for years when we have folks visiting our IOP [Integrated Operations] Center here in Austin.
Jim: Well, that sounds easy enough. Reach out to one of our Impact Partners or people within the Emerson organization, and come here to Austin to check it out. Well, Bill, I want to thank you so much for joining us today and discussing what’s available in our Austin Life Sciences showroom. Thanks a lot.
Bill: Thanks for having me. It’s exciting talking about this new space and getting more folks in here to see it.
-End of transcript-
Manufacturers and producers across all industries know the challenges in keeping their operations cyber-secure. Industries such as pipeline transportation and electrical & gas distribution networks face additional challenges in the wide geographic spread of their operations and the need for reliance on public communications networks.
In this podcast, I’m joined by Emerson cybersecurity expert Steve Hill to discuss these additional challenges and ways the companies in these industries, suppliers, and federal regulators are collaborating to develop and implement best practices for strong cyber resiliency.
Give the podcast a listen and visit the SCADA Solutions & Software for Energy Logistics section on Emerson.com and the AspenTech Digital Grid Management page for methods and solutions to improve your cybersecurity defenses and ongoing programs.
Jim: Hi, everyone. This is Jim Cahill with another “Emerson Automation Experts” podcast. Pipelines cover a wide geographic area and require continuous monitoring for safe, efficient, and reliable operations. Today, I’m joined by Steve Hill to discuss the challenges pipeline operators face in keeping their pipeline networks cybersecure. Welcome to the podcast, Steve.
Steve: Thanks, Jim. Pleasure to be here.
Jim: Well, it’s great to have you. I guess, let’s get started by asking you to share your background and path to your current role here with us at Emerson.
Steve: Thanks, yeah. I’ve been in the automation and SCADA industry for about 40 years; I started on hardware design and communications, then moved over to software. And it’s nearly 20 years I’ve been with Emerson; I joined as part of the Bristol Babcock acquisition. My main focus now is working in wide-area SCADA as the director of SCADA Solutions for Emerson. Most of that is in the oil and gas industry, working with Emerson sales, the engineering teams, and our customers as they design systems and products for the industry.
And also, alongside that, for the last few years, I’ve been collaborating with CISA. That’s the U.S. government Cybersecurity and Infrastructure Security Agency as part of the Joint Cyber Defense Collaborative.
Jim: Okay. That’s a nice, varied background. That’s really good for our discussion. So, what exactly do you mean by wide-area SCADA?
Steve: That’s a great question. It’s a SCADA system where the software is monitoring equipment across a very wide area. That might be a very large geographic area, like a pipeline, a gas or water distribution network, or perhaps a well field. For example, I was speaking to a customer last week who is monitoring an entire pipeline across Peru, and yet their control centers are actually in Mexico. To do that kind of thing, the equipment is usually connected via public networks. Private networks don’t extend that far, and even the control centers may be widely distributed.
And as part of that, compared to in-plant control, there’s an assumption that your communications are clearly not gonna be 100% perfect. You’re gonna lose communications either momentarily, as with cellular networks, or for hours at a time when natural events, like the hurricane we’ve got in Texas this week, cut communications. But because these systems are all critical infrastructure, such as pipelines or electrical distribution, the actual operations, the process, must never be interrupted. Today, we’re talking about cybersecurity, and that same sensitivity is why these systems are now the target of some of the most sophisticated cyberattacks.
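To make that communications tolerance concrete, here is a minimal store-and-forward sketch in Python. It is illustrative only: real RTUs implement this in firmware with persistent local storage, and the send_to_scada_host callable here is a hypothetical stand-in for whatever transport (cellular, satellite, VPN) a real system would use.

```python
import collections
import time

class StoreAndForwardBuffer:
    """Queue readings locally while the link is down; flush in order later."""

    def __init__(self, send_to_scada_host, max_readings=10_000):
        self._send = send_to_scada_host
        # Bounded deque: if an outage outlasts local storage, the oldest
        # readings are dropped first rather than halting the controller.
        self._backlog = collections.deque(maxlen=max_readings)

    def record(self, tag, value):
        """Timestamp a reading and attempt immediate delivery."""
        self._backlog.append((time.time(), tag, value))
        self.flush()

    def flush(self):
        """Drain the backlog in arrival order; stop on the first failure."""
        while self._backlog:
            reading = self._backlog[0]
            try:
                self._send(reading)
            except ConnectionError:
                return  # Link still down; keep buffering and retry later.
            self._backlog.popleft()
```

The key design point is that loss of communications never blocks the control side: recording always succeeds locally, and delivery catches up whenever the network returns.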
Jim: Okay, that gives a picture of the breadth of these types of SCADA systems. You had mentioned you work with CISA, the Cybersecurity and Infrastructure Security Agency, and the Joint Cyber Defense Collaborative, which I’ll just call JCDC for short. Can you give some more examples of that work?
Steve: Yeah, let me give you a bit of background. Probably many of our listeners know that there have been several successful cyberattacks against critical infrastructure over the last few years. The most famous in the pipeline industry was what’s referred to as the Colonial Pipeline attack. That was actually a criminal ransomware attack that resulted in gasoline and jet fuel shortages across the Eastern U.S. for several days. That was criminals basically trying to get money, and it was almost a random attack, not a targeted one.
However, there have been actual state-sponsored attacks. Probably the most successful came prior to the Russian military attack against Ukraine, when several cyberattacks were successfully carried out against the Ukrainian power grid. And very concerning is that, in recent months, U.S. infrastructure, including pipelines, has been successfully infiltrated by a group called Volt Typhoon, thought to be from the People’s Republic of China. So JCDC and CISA are working hard to counter and protect against these threats.
Jim: Wow. Well, that’s clearly a huge concern. What is the JCDC doing to address these challenges?
Steve: Well, in 2023, so last year, JCDC facilitated the development of something called the Pipeline Reference Architecture. Basically, Emerson, alongside other industry vendors and also pipeline operators, participated in the development of this Pipeline Reference Architecture, which I’ll refer to as the PRA. It’s a fairly short document that outlines the design and operating principles for SCADA systems in the pipeline industry. And one thing the government is keen to point out, it’s not a regulatory document, but it does set out the best principles and is intended as guidance for the industry. Really, they want to work with the industry to come up with best practices.
Jim: Well, it sounds like this PRA is another set of standards to address cybersecurity. Why is another document needed in an industry where a bunch of standards already exist?
Steve: Yeah, that’s a question I and other members get asked quite a lot. The main reason is that wide-area SCADA presents a very different set of challenges from traditional SCADA, which we refer to as inside the wire: for example, a refinery or a manufacturing plant where everything is in one location. As I mentioned before, wide-area SCADA is very widely displaced physically. It also has a lot of remote field workers; there may be folks working on that system hundreds of miles from base. And you’re using communications networks that are not even owned or operated by the owners of the pipeline. Though this PRA is really intended for the pipeline industry, it’s clearly applicable to almost any wide-area SCADA, in the water or electrical industries as well.
Jim: Okay, that makes sense. So those are definitely challenges that don’t exist for most automation systems, as you say, inside the wire. Tell us more about how the PRA addresses these.
Steve: Well, the big thing is segmentation, basically, taking the network and splitting it into different levels that represent different areas of the operation. For example, the internet would be what’s referred to as level zero, and moving all the way down to the bottom of the network, that’s level nine. And the levels in between that represent different levels of trust. Now, those who are familiar with cybersecurity and SCADA are probably familiar with something that is called the Purdue model, which I think first came out in the late 1980s, and that also splits up SCADA and control networks and actually business networks into different levels. However, when that came out, the internet was in its infancy. No one would ever have used the internet or even really public IP networks for their connectivity. So it doesn’t really take into account many of the things we take for granted today in these systems.
So the PRA is intended to expand and take into account the reality that, for example, some of this critical data will actually be transiting across a public network, right? And in order to achieve that with this segmentation, we’re using a concept called Defense in Depth, right? And as you go down the different levels of the network, the assumption is you can trust each item on that network better. So, for example, on the internet, you don’t trust anything, but when you get down, let’s say, to the communications between an RTU [remote terminal unit] and a gas chromatograph on a local serial link, you might completely trust that. Now, it’s interesting, although that’s part of the PRA model, that does actually conflict with a security concept called Zero Trust, which is something that Emerson has really based our products on. But both zero trust and defense in depth are valid.
Jim: Now, you had mentioned a couple of concepts I’d like to explore a little bit more in there, and let’s start with zero trust. Can you explain that concept to us?
Steve: Oh, yeah. Zero trust is a concept where any piece of equipment or software should trust nothing: don’t trust anything else on the network, don’t trust the network to be safe, and don’t rely on anything else for protection. Historically, SCADA was protected, for example, by firewalls. You would take products that were known to be insecure, because they were developed perhaps 20 or 30 years ago, and hide them behind firewalls, and that’s really how we’ve handled security until today. But there’s a realization that you can’t keep doing that. So we now need to design products so that they don’t trust anything.
But the reality is many of our customers, Emerson’s customers and pipeline operators, have devices that were installed perhaps 30 years ago. That’s the typical lifespan of some RTUs and controllers in this industry. So as a result, when you get down to the lower levels of the network, zero trust doesn’t work. So you do have to have levels of additional protection. So for example, if you had a Modbus link, which is basically insecure almost by design, that should be protected by additional levels of firewalls and so on. But if you’re designing a modern product, it should be designed so it doesn’t rely on anything else. And that’s the concept of zero trust.
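As a rough illustration of that design principle, here is what a zero-trust connection can look like at the transport level, sketched with Python’s standard-library TLS support. Both sides present certificates and both verify the peer (mutual TLS). The file names and host are hypothetical placeholders, not part of any Emerson product.

```python
import socket
import ssl

# Trust nothing by default: verify the peer against a private CA rather
# than assuming the network is safe, and prove our own identity as well.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True
context.load_verify_locations("trusted_scada_ca.pem")          # private CA bundle
context.load_cert_chain("rtu_client.pem", "rtu_client.key")    # our certificate

with socket.create_connection(("scada-host.example.net", 8443)) as raw:
    with context.wrap_socket(raw, server_hostname="scada-host.example.net") as tls:
        tls.sendall(b"authenticated telemetry goes here")
```

The design choice worth noting is that authentication lives in the endpoints themselves, so security no longer depends on a firewall somewhere upstream behaving correctly.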
Jim: Okay, got it. So don’t trust anything. Everything must be proven out. And the other concept you talked about was defense in depth. So, what does that mean?
Steve: Well, the phrase is most commonly used where we’re talking about a network with multiple levels in. So when you come from, for example, the internet into your business network, you would have a set of firewalls and what’s called the demilitarized zone. And then when you go from your business network down to your controls network, you’d have another set of firewalls. So it’s multiple levels of protection. However, that same concept should be used actually within products as well. And, in fact, Emerson takes that very seriously with our secure development lifecycle certifications, IEC 62443, and how we design those products.
Jim: Well, that’s good. As you combine those two and put in more modern technology, it complies and has that cybersecurity built in from the design stage. So, can you give us an example of how it’s built in?
Steve: Yeah, that’s a great one. If I take, for example, the Emerson FB3000 RTU, that’s a flow computer and controller device designed specifically for the oil and gas industry, especially for pipelines, an obvious concern is that it may be attacked externally to modify the firmware. Now, at the first level, the RTU itself has secure protocols. It uses something called DNP3, which would, in theory, control access to the device. Then there’s the firmware: when we issue new firmware, we put it on a website, so we have protection of the website, and we also publish a hash, which is basically a unique key that the customer downloading the firmware can check to confirm it hasn’t been modified by anyone attacking the website. Then, when they actually put it into the RTU to update the firmware, the RTU will check that the firmware was developed by Emerson and was intended for that device. It does that by verifying certificates during the load.
Now, once it’s in the device and it’s running in the field, you might say, “Well, the task is done,” but there’s an additional level of protection. It will continually, and on boot, check that firmware to make sure the certificate still matches and it hasn’t been changed. And if it has been changed, it will actually revert to a known-good factory firmware that’s embedded in the device. So you can see there are really five or six different things all checking and ensuring that the firmware in that device is not compromised. Basically, there are multiple levels within the device, and in addition, multiple levels on the network. The bad guys have to get through a lot of different levels to damage or compromise the device. And we’re trying to do that with everything we design today.
Jim: Yeah. And with modern cryptography, any change at all completely alters that hash, making it impossible to slip something in without it being noticed. So that’s really a nice thing.
Steve: Yeah. And even if it does detect a change, it goes back to the factory firmware, which may be a slightly older version, but your operation will keep running. It will keep controlling, which is a very nice feature.
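For readers who want to see the shape of those checks, here is a simplified sketch of just the hash layer with a boot-time fallback to factory firmware. It is an assumption-laden illustration, not the FB3000 implementation: the real device also verifies signing certificates, and every name below is hypothetical.

```python
import hashlib

# Placeholder for the hash the vendor publishes alongside the download.
PUBLISHED_SHA256 = "0123abc..."

def firmware_is_authentic(image: bytes, expected_sha256: str) -> bool:
    """True only if the image digest matches the published hash."""
    return hashlib.sha256(image).hexdigest() == expected_sha256

def select_boot_image(candidate: bytes, factory: bytes) -> bytes:
    """Re-run the check on every boot; revert to factory firmware on failure."""
    if firmware_is_authentic(candidate, PUBLISHED_SHA256):
        return candidate
    # Tampered or corrupted firmware: keep the operation running on a
    # known-good, if slightly older, embedded factory version.
    return factory
```

The fallback is the defense-in-depth point: a failed check degrades the device to an older known-good state instead of stopping the process.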
Jim: Yeah, that’s a great example there. I guess, going back to the PRA, what else does it include other than the segmentation that you discussed?
Steve: There are about 10 high-level principles that cover aspects of the design and operation of the SCADA system. And for each of these, there are various examples and guidance on how to actually follow the principle in a real-world system. For example, there’s a whole section on how to manage third-party devices and contractors, because on a pipeline system you’re almost certainly gonna have, for example, engineers from third parties like Emerson coming in. So it gives examples on the real-world aspects of operating the system.
Jim: Are there other examples from it you can share?
Steve: Yeah. One important one is that when you’re designing the system, you should identify and document all of the different data flows that occur. When I say data flow, I mean a communication or conversation between different pieces of equipment. So, for example, this RTU may communicate with that SCADA platform on this particular machine, and may communicate with a measurement system on another machine. Document all of those data flows, and then deny all other data flows by default. Then, after the system is running, continually monitor it passively. And if you see an additional communication, say, between two pieces of equipment that never normally communicated, or that didn’t communicate on a particular IP socket, flag that immediately, because it may be something going on that was unexpected. It certainly was outside the original design of the system.
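That default-deny principle can be sketched in a few lines of Python. The device names and sockets below are hypothetical; 20000 and 502 are simply the conventional DNP3 and Modbus/TCP ports.

```python
# Data flows documented at design time: (source, destination, port).
ALLOWED_FLOWS = {
    ("rtu-01", "scada-host", 20000),    # RTU to SCADA platform (DNP3)
    ("rtu-01", "measurement-01", 502),  # RTU to measurement system (Modbus/TCP)
}

def flow_is_expected(src: str, dst: str, port: int) -> bool:
    """True only if this conversation was part of the documented design."""
    return (src, dst, port) in ALLOWED_FLOWS

def on_observed_flow(src: str, dst: str, port: int) -> None:
    """Called by a passive monitor for every conversation it sees."""
    if not flow_is_expected(src, dst, port):
        # Anything outside the original design is flagged immediately.
        print(f"ALERT: unexpected flow {src} -> {dst}:{port}")
```

In a real deployment the allowlist would also feed firewall rules (deny by default), with the passive monitor acting as a second, detective layer.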
Jim: This has been very educational. Thank you so much, Steve. Where can our listeners go to learn more?
Steve: Well, really a couple of places. If you go to the CISA blog at www.cisa.gov/news-events, there are details there. The actual PRA was published on March 26th of this year. And if you want to discover more about Emerson’s involvement in wide-area SCADA and the cybersecurity associated with it, go to Emerson.com/SCADAforEnergy and you’ll find some information there.
Jim: Okay, great. And I’ll add some links to that and to some of the other things we discussed in the transcript. Well, thank you so much for joining us today, Steve.
Steve: Not a problem. It’s a pleasure.
-End of transcript-
Due to the rapid advancement of technology, we face the issue of technology obsolescence in our daily lives. Control systems that have been operating for decades face this challenge. While incremental improvements can be made over time, sometimes a more comprehensive modernization project must be initiated to meet the control challenges and improvement opportunities of the production operations.
Emerson’s Scott Campbell joins me in this podcast to discuss the planning, execution, and follow-up of a successful modernization project.
Give it a listen and visit the Modernization section on Emerson.com for more information on the technologies and services to help you navigate your facility through the project and toward more efficient, reliable, and sustainable operations.
Jim: Hi, everyone. I’m Jim Cahill with another “Emerson Automation Experts” podcast. Control systems are the lifeblood of production processes to keep them running safely, efficiently, and reliably. Like all technology, they can come to the end of their lives and require modernization. We shared some ways to simplify the control system modernization process in earlier podcasts, such as Simplifying Control System Modernization with DeltaV IO.Connect and Simplifying Your Path to Optimal Control with REVAMP. Today I’m joined by Scott Campbell, an experienced modernization consultant, to discuss the challenges surrounding modernization projects, the value that can be generated from these projects, and Emerson’s approach in helping customers achieve their business objectives and project goals. Welcome, Scott.
Scott: Thanks, Jim. Nice to be talking with you.
Jim: Well, it’s great having you sharing your expertise with our listeners today. Well, let’s jump right into things. What pains are our customers facing when we start interacting with them?
Scott: Well, like you mentioned, usually it starts with obsolescence. So, there are existing technologies at end of life, and a lot of times at that point they are buying parts off of eBay, trying to source them from other plants in their area, and generally just struggling to keep parts on hand. But in our conversations, we’ve learned that a lot of times, that’s not really enough to justify a project at that point. You know, these plants have lots of projects going on, lots of things they’re trying to fund, and lots of people fighting over capital. So, we’ve learned that obsolescence and fixing obsolescence for its own sake a lot of times isn’t enough to justify modernization projects anymore. So, we kind of have to look at the downstream effects. And we can see a lot of effects that having an aging control system has on a plant besides just that obsolescence. So that failing hardware can cause process shutdowns, can cause process disruptions, you know, loss of product when that happens, and obviously loss of the money-making part of their facility.
So that’s one of the justifications we see with the older technologies within these plants: production quality issues, limitations to throughput, just not getting everything they could out of their system and equipment, are all drivers to move these projects forward. And we’re seeing much more of a resource crunch in a lot of these facilities, where staffing has been dwindling over the years and engineering resources are cut. And to be honest, when they have a control system in the plant that’s 30, 40 years old or more, they struggle to get and retain resources. If you’re a 22-year-old controls engineer coming out of college, you don’t really wanna go work on something that’s already obsolete. So all of those are drivers to get customers having these conversations.
Jim: Yeah, it sounds like a number of things, from production and its financial impact to just getting people who can work on it, can all be part of the justification process. So, with all that customers are facing, what’s their best path forward?
Scott: So, their path forward at that point is really to take on some kind of obsolescence project. We used to call these projects migrations, but when we use the word “migration,” we’re really talking about a replace-in-kind. The customer gets the obsolescence issue fixed, maybe gets some weight off their shoulders so it’s not looming over them, but it doesn’t necessarily move them forward technology-wise, and they don’t get a lot of value out of the new system. So we now tend to talk about modernization projects, where it’s not just a replace-in-kind but a refresh of the control system overall, and a chance for them to look at their control system, and really their plant, holistically. In some of these projects, when we go in to do a modernization, we’ll touch on instrumentation, networking infrastructure, cybersecurity, documentation, a lot of things that go wider than the control system itself.
Jim: Yeah, that makes sense. That replace-in-kind seems like it limits you to the thinking of what you did 20, 30, 40 years ago, so yes, it is an opportunity to take advantage of what modern technology offers, too. Now, I’d assume, with these drivers and the possibility of unplanned shutdowns and process disruptions, that customers would be knocking down our doors for modernization projects. Is this actually happening?
Scott: Well, actually, no. A lot of times, there’s hesitancy from the customer’s standpoint. Even when they know they need to do something, they start looking at this as a possibility and kind of draw back from it. These automation engineers and automation departments within the plants spend a lot of their time today just trying to keep the lights on. We’re talking about aging technology that may be having failures and needs a lot more hands-on work day to day. So they have a day job, and their day job is running the plant with the system they have in place. And we talked about the staffing shortages at some of these plants; they just don’t have the bandwidth to take on a whole other project.
And most of them have never done this. We’re talking about systems, again, that have been in the plants for 30, 40 years. So, it’s not something that a plant does every 10 or 15 years that they have experience doing. So, when an automation manager or a plant manager at one of these facilities starts looking at this, a lot of times there’s a real hesitancy to get started just because it is so overwhelming.
Jim: Yeah, I can see that. If you’ve got your hands full with just the day to day, that would seem like a huge mountain in front of you, and you still have to figure out how to go forward, even if, logically, you know that’s what needs to be done. So, I guess once you break through and say, “Yep, we gotta do this,” what are the real challenges in executing a modernization project?
Scott: Well, there are a lot of them, and they span most of the things in the plant. But we bucket them, to start with, into time, money, and space, because a lot of times those are the constraints that drive how we execute these projects or start planning them. When we’re talking about time, we’re talking about a couple of different things. One is just the timeline of the project. Depending on how big the facility or the process area is, it could take a year or two to do the project overall, so they have to start planning ahead. With this idea of obsolescence and unplanned shutdowns looming over them, they have to get started early. But on the project itself, we also talk about time in the sense of a plant shutdown, what their timeline is, and what their production schedule is. Those two go hand in hand: how long it takes to execute, and how you can schedule around an actual planned shutdown. And that’s one of the main drivers for how we need to execute the project.
We also talk about space constraints. You know how it is: if you buy a bigger house, you tend to fill it as you live there for a little while, and these plants kind of expand with projects to fit the space they’ve got. So by the time we get in there, most of the square footage is taken up with expansions or other projects they’ve put in over time. Maybe their I/O room is full, or the control room doesn’t have any extra space. So you have to come up with a way to work alongside your existing infrastructure, or plan around it, to be able to get a new control system in there at all.
And of course, everything’s going to come down eventually to money. Most of these things have a tradeoff against the cost of the project itself. In general, we all wanna pay as little as we can for what we need. But just like in real life, there are strategic investments we can help customers make to save money in the long run. It may be that spending a little more money on a project has a payoff at install, or that doing a little more engineering work makes the plant cheaper to run eventually. So we have to talk about those tradeoffs.
And then, as related challenges, we talk about risk and complexity, which are really cost, time, and space all bundled into their own metric. How do we manage the risk? You can pay more money to manage risk up front, or some customers just say, “We’re willing to live with a certain level of unknown when we get into startup and manage it there.” So we’re always taking into account all of those factors and what a project should look like for that specific customer.
Jim: Okay. Now we’ve really hit on the challenges around time, space, and money, and those have to be factored in as well as the risk and complexity of everything. Now, I know those have to be weighed against the real benefits that you can get when doing a modernization project. Can you talk about what some of the benefits could be?
Scott: Yeah, absolutely. When you’re talking about moving technology forward 30 or 40 years, there’s a lot that comes with that. And luckily for a lot of our customers, just by installing DeltaV, a lot of those extra benefits come along for the ride. Built into DeltaV, we have the ability to do everything from basic loop tuning all the way up to advanced process control and multivariate control, all things that can improve the way their plants are running today. Even if they’re not changing the actual structure of the controls, it can help them run tighter to baseline so they can increase throughput and maintain the quality they’re looking for. They can avoid putting as many expensive ingredients into the system as they may be doing today if they can run a little tighter to their planned process variables. So a lot of that goes into really increasing what the process itself can do, even with the existing instrumentation.
Big data has been kind of a buzzword the last couple of years, maybe fallen off a little bit, but it still gets a lot of run in the automation space. We talk about getting more data out of things, but getting it, contextualizing it, and using it appropriately is still something we get asked about a lot, and it’s not as straightforward in a lot of systems. Just doing a DeltaV upgrade and some of the basics of what Emerson has to offer can help pull that data out of the silos it’s in today, where it’s inaccessible with the existing systems, contextualize it, put it in front of an operator or maintenance personnel, and give them access to make those decisions a lot more easily. So it increases the ability of the existing personnel on site to run their plant better, smarter, and a little bit faster.
We talk about cybersecurity; it’s in the news a lot, and cybersecurity attacks on process plants have gone up. Putting in a modern control system comes with some baked-in cybersecurity. The base DeltaV system holds a lot of cybersecurity certifications. Just by installing it, you get a lot more control over who has access into your plant, and you have a buffer against the outside world. A lot of that just comes along for the ride, and that’s before talking about the more advanced things you can buy and bolt onto your DeltaV system. So it really moves people a long way forward, and there’s a lot you get out of just doing one of these projects.
Jim: Yeah, it seems like it. You talked about the data and having it contextualized. People are trying to operate more efficiently and sustainably, and data is such a key part of seeing that and driving it beyond what we imagined decades ago, beyond just keeping the plant running safely. So you’ve talked about a lot of benefits, and we discussed the challenges. Do we get involved and help customers weigh the challenges against the benefits, or what’s that process about?
Scott: Yeah, absolutely. Emerson has realized, one, how important these projects are to our business, how important they are to our installed base and our customers, and also how difficult they are. So we’ve really invested in making these projects better, cheaper, and faster, and in reducing risk. I’m a modernization consultant in the modernization solutions group led by Aaron Crews, so this is what I do all day, every day. Our group’s aim is to make those projects run better. In some cases, that means going out to customer sites and coming up with a plan for their project, what it should look like in 2 years, 5 years, 10 years, and working with our engineering centers and Impact Partners on estimates, lots of consulting-level work to come up with solutions for our customers and partners.
We also develop technologies outside the normal DeltaV product lifecycle specifically to enable these projects. We have product lines out of the modernization group that are modernization-specific, built to get the barriers to modernization projects out of the way. Because of the emphasis Emerson puts on these projects, we’ve now done over 7,500 modernizations in 88 different countries. One of the biggest values we see is that while a customer may do this once in a career, once every 30 years or so, our group has done these projects again and again, so we have a lot of experience we can bring to the table for customers who haven’t done this very often or have never done one before. And our job is to constantly innovate to bring these projects to fruition.
Jim: That’s a really important point: a customer may go through it once in their entire career, but we have a team of consultants behind those 7,500 projects you mentioned. That brings a lot of experience across the whole project, from the planning up front to seeing it through, and being able to take advantage of what some of these technologies can bring. Can you shed a little more light on the project approach if somebody’s moving forward with one of these modernization projects?
Scott: Sure. So, in coordination with whatever the customer’s normal project execution model is, whether it’s, like, a stage gate or a phased process, we have our own processes and standards behind the scenes that try to make these projects really standardized and remove some of the doubt around how they get executed. And it also helps us take advantage of the efficiencies built into having a set of resources as wide and as knowledgeable as we do. So, the global project management office helps to put together those standard deliverables, and intakes, and processes. And then, you know, across the U.S. and across the globe, we have Emerson engineering centers that a lot of times are doing these projects all the time as well.
So, they’re actually executing modernizations every day. It’s always great to hear about a brand-new multi-billion-dollar facility being built, and those are great projects for us to be invested in, and we love working on them, but it’s only so often that someone builds a facility like that. Meanwhile, whether a plant has 100 I/O or 100,000 I/O, at some point the facility is gonna need an upgrade. So we’re in those facilities all over the country and all over the world constantly doing these projects. The engineering centers have a lot of experience and a lot of people who have done these day in and day out as well.
Jim: I know from some of the earlier modernization-related podcasts that I mentioned at the start of the podcast that we have a broad offering of modernization solutions. Can you expand a little bit on how and where these are used and their impact on a project?
Scott: Yeah, absolutely. So, we have a product portfolio that we call DeltaV Connect. The portfolio is built in such a way that whatever timeline the customer is on, wherever they’re seeing obsolescence across their plants, and whatever their different installations look like today, we have a solution that can help with whatever it is they’re facing. We have products meant to address each step along that path. For example, Console.CONNECT is an HMI operator interface solution. On the obsolescence path, HMIs and computer hardware have the shortest timeline; you may replace a computer in your office or your laptop every three, four, or five years, and a lot of the automation hardware runs on similar platforms.
So, we have Console.CONNECT to come in and address that obsolescence at the HMI and computer hardware solution level. And what that allows us to do is put in a brand new DeltaV interface with some of the benefits that we see in technologies like DeltaV Live. And we can get those operator efficiency benefits right up front in a project without having to go through the three, four, five-year project lifecycle before they start seeing some benefits of modernization. We have a suite of wiring solutions that we call Flex.CONNECT. So, customers who wanna get the most value out of one of these projects, they may want to remove as much of the existing system as possible. So, get rid of everything, the controllers, the hardware, the I/O, and kind of start from the ground up to really refresh the entire system. Unfortunately, it’s pretty labor-intensive, time-intensive, and costs a lot of money.
With Flex.CONNECT, we can address part of that with wiring solutions meant to reduce the amount of time it takes to rewire the I/O. What we do is go in and put in prefabricated connectors and cables, and reduce the individual number of screws and wires that an electrician touches on shutdown. So we’re taking the shutdown time down by quite a bit, 10-fold or so. For some customers, though, even that reduction isn’t enough. Flex.CONNECT isn’t an option because they just can’t be down that long, or they have a really large facility where, even with the advantages Flex.CONNECT brings, it still leads to multiple days of downtime, and they just can’t afford that in their production cycle.
One of those products that you’ve talked about previously is IO.Connect. IO.Connect is a software solution that allows DeltaV to control the native I/O of other systems without lifting any of those wires and without doing any of that fieldwork. The project then becomes essentially a software configuration project with a little networking on top, and the downtime goes from weeks or days down to hours, potentially. All those solutions help with cutover planning, hardware strategy, and how we go about getting the actual physical DeltaV system into the plant. But that’s only about half of the project, right? There’s a whole programming side that we still have to convert from the existing control system into DeltaV.
For that, we have a product that we call Revamp. Revamp is a cloud-hosted machine learning and AI tool that analyzes and documents the legacy control system code we get from customers’ running systems. More than just documenting, Revamp goes an extra step and converts that, through pattern matching, into actual DeltaV code. So, in the most efficient cases, we can go directly from legacy control system code into DeltaV, all digitally and automated.
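To give a feel for what pattern-matching conversion means, here is a toy illustration in Python; this is not the Revamp product, and the legacy syntax shown is invented for the example. A recognizer matches one legacy construct and emits a structured record that downstream tooling could map onto a target-system module.

```python
import re

# Invented legacy syntax: "PID <tag> PV=<input> OUT=<output> SP=<setpoint>"
LEGACY_PID = re.compile(
    r"PID\s+(?P<tag>\w+)\s+PV=(?P<pv>\S+)\s+OUT=(?P<out>\S+)\s+SP=(?P<sp>\S+)"
)

def convert_line(line: str) -> dict | None:
    """Map one legacy control statement onto a target-system template."""
    match = LEGACY_PID.match(line.strip())
    if match is None:
        return None  # Unrecognized patterns are left for an engineer to review.
    return {
        "module_class": "PID_LOOP",
        "tag": match["tag"],
        "process_variable": match["pv"],
        "output": match["out"],
        "setpoint": match["sp"],
    }

print(convert_line("PID TIC101 PV=AI-101 OUT=AO-101 SP=REMOTE"))
```

The hard part of real conversion tools is the breadth of patterns and the handling of everything that does not match, which is where the machine learning and human review come in.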
Jim: Yeah. That part is impressive. We did a podcast on that earlier, and with those machine learning and AI characteristics in there, it seems like it improves with every project we do, because it sees new configurations it hadn’t encountered before. So that seems like a really powerful tool. Well, this has been a really great discussion, and I hope our listeners get a keen sense of the value you can get from these modernization projects, and a path forward to plan and execute a successful project and reap some of these benefits. Where can they go to get a little more information?
Scott: Yeah. You can go out to Emerson.com/modernization, and from there it links to all of the different offerings that we talked about, all of the different Connect products. And you can reach out to your local Emerson contact or Impact Partner if you have a project that you wanna talk about or to get one of us out to look through your system, and we’d be happy to talk to you.
Jim: Well, that sounds great. You can learn about the technology offerings and connect in to get some of the expertise to help get you started. I also recommend listening to the IO.Connect podcast and the Revamp podcast, and I’ll have links here in the transcript as we post this. But for our listeners, you can also just do a search on the IO.Connect podcast or the Revamp podcast, and you’ll find them that way. Well, Scott, thank you so much for joining me today to share your expertise with our listeners.
Scott: Yeah. Thanks a lot for having me on, Jim.
-End of transcript-
The Life Sciences industry has undergone significant changes over the past several years. From responses to a global pandemic to the advancement of personal therapies, the need for automation and effective data management has never been greater.
In this Emerson Automation Experts podcast, Dave Imming joins me to highlight some of these trends, the challenges they have introduced, and technology’s role in addressing them.
Give the podcast a listen and visit the Life Sciences & Medical section on Emerson.com for more information on the technologies and solutions to help you drive greater performance.
Jim: Hi, everyone. I’m Jim Cahill with another “Emerson Automation Experts” podcast. The life sciences industry has been undergoing a significant amount of change in recent years. I’m joined today by Dave Imming to talk about some of these trends. Dave is the vice president of marketing and sales enablement for Emerson’s life sciences business and can comment on how these trends are shaping automation and data management strategies. Welcome, Dave.
Dave: Thanks, Jim. It’s great to be here.
Jim: Well, it is great having you, so let’s dive into it a little bit. Dave, what are some of the recent trends in life sciences?
Dave: Well, Jim, I think the biggest change or trend in the life sciences industry is the increasing number of technology breakthroughs and new modalities for biopharma now and in the coming years. This includes both the expanding biotech capabilities as well as these advanced therapy medicinal products, frequently shortened to ATMP. These technologies include cell therapy, gene therapy, and tissue engineering and hold incredible potential for new ways to cure diseases and also to improve our health.
In addition to that, while data has always been very important to the life sciences industry, there’s a huge drive to better capture and manage that data. Part of this is to streamline the release process, which is tied to collecting and verifying the appropriate data, but also to have that data available for additional analysis. And specifically, companies are beginning to work out how they can harness artificial intelligence, or AI, to pore through that data, bring out new insights, and identify ways of doing things even better.
Jim: Wow. That does sound like a lot is going on here. So how are the new ATMP processes similar to the traditional biotech and, in other ways, how are they different?
Dave: Well, Jim, there are a lot of ways where they are very similar to traditional biotech. In both, you know, ATMP and biotech, the process development activity has lots of flexibility. But then this flexibility gets narrowed down as you move towards commercial manufacturing. Of course, in commercial manufacturing, there has to be very strict adherence to process requirements and almost no flexibility for obvious reasons of quality and consistency. But likewise, the recipe management, batch records, equipment and material tracking, all those things are quite similar across the various types of life science modalities. But there are also many differences.
When we talk about advanced therapy medicinal products, this space tends to have many more but smaller batches. Each batch is created for a specific patient population, and the therapy is highly targeted, which is great, but it does lead to lots of smaller batches. In the extreme, it is referred to as personalized medicine or autologous, which is one batch for one patient. And by increasing the number of batches, this has a huge impact on the data collection and management requirements. Each data set, with all of the information on materials, equipment, and process conditions, has to be captured and managed. For personalized medicine, it would also include tracking the patient samples together with the data for chain of identity, chain of custody, chain of environment, and all the parameters that might affect a sample. The reality is that these new therapies are amazing, but they are making the supply chain for life sciences much more complex than in the past. In addition, many of these new ATMP processes are more manual and less automated than in the past, due in part to patient variability that must be accounted for and adjusted for in the processing.
Jim: Well, that’s no small set of challenges there. So, I imagine automation can play a role to help with that. But, you know, it’s what we’ve been doing for years, isn’t automation automation?
Dave: Well, in some ways, you’re right, it is more of the same in some cases. But as I mentioned, many of these new processes are done more manually, with a fair amount of human intervention, and again, as I said a moment ago, accounting for patient variability. This can be especially true of the personalized medicine or autologous therapies. In those cases, the automation, if you will, tends to be more about providing a guided or prompted path for the operator, where the automation lays out, “Here are the next steps that need to be completed and the data that needs to be collected for them.” And then the operator would adjust these steps as necessary based on the patient sample characteristics.
Another difference is that while traditional pharma and biotech certainly utilize skids, this is done to a much greater degree in the ATMP market, combining specialized skids or creating a work cell for that particular therapy. Scheduling can be very challenging in this space, in part due to the bidirectional flow of samples and therapeutic products in personalized medicine, allocating capacity against patient demand, and ensuring that the round trip from collecting the patient sample, processing it, and getting it back to the patient is all done within the allowable timeframe for that particular therapy.
Advanced control techniques are another area that can be leveraged in some cases, and analytical capabilities like PAT [process analytical technology] or spectral PAT data can be used as soft sensors, which allows operators to have real-time information on some of their metrics where they would normally have to wait for a lab sample to be processed. So having that real-time metric using spectral PAT technology can be a real benefit in some cases.
And another thing: with many more of these batches, each smaller in size, the automation is similar, but the sheer number of batches they’re dealing with increases significantly. So it’s important that your automation system is up to the task of that higher capacity. Some of the customers we work with are increasing their capacity in multiples of 10x each year as they build out these new therapies. It also dramatically increases the demands on your data management system, as you might imagine, to capture all of that data from these mini batches and set the data in context so that it can be easily retrieved, consolidated, compared, and analyzed in the future for new learnings.
Jim: Wow, that 10x growth made my eyes widen. With that kind of capacity growth, it sounds like scheduling all these batches, particularly for those autologous therapies (and I think I just learned a new word today), would be challenging, wouldn’t it?
Dave: Oh, yes, it absolutely can be. As I mentioned earlier, there’s the challenge of moving the patient samples to the operational area and then back, all while tracking the environmental conditions in both directions and getting it completed within the allowable time for that particular therapy. So it makes it especially important that things run as smoothly as possible in the operations process. For example, Emerson’s DeltaV Real-Time Scheduling has been used in this space, in cell therapy offerings, to manage the schedule and keep things on track for the production process.
DeltaV Real-Time Scheduling is extremely dynamic and can respond and adjust or alter the overall planned schedule based on real-time conditions. This might include variability in processing due to cell growth, or unplanned events, like a specific skid or piece of equipment having a fault or perhaps being unavailable for some reason. So now what? Well, the “now what” is handled by DeltaV Real-Time Scheduling, which takes the event into account and then adjusts the schedule automatically with the least impact on production. There’s also a companion product [DeltaV Discrete Event Simulation] to the scheduler for capacity planning and debottlenecking. This product can be extremely helpful in letting users see and understand where the dwell times are in the process and what opportunities they might have to reduce the cycle time and, therefore, increase their production capabilities.
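As a minimal sketch of that event-driven rescheduling idea, here is an illustrative Python fragment; the skid and batch names are invented, and this is not the DeltaV Real-Time Scheduling product. A fault simply triggers reassignment of the queued steps to the least loaded remaining unit.

```python
from collections import defaultdict

# Each skid carries a queue of pending batch steps.
schedule = defaultdict(list)
schedule["skid-A"] = ["batch-1:expand", "batch-2:expand"]
schedule["skid-B"] = ["batch-3:expand"]

def on_equipment_fault(failed: str) -> None:
    """Move the failed skid's queued work to the least loaded remaining skid."""
    orphaned = schedule.pop(failed, [])
    for step in orphaned:
        target = min(schedule, key=lambda s: len(schedule[s]))
        schedule[target].append(step)

on_equipment_fault("skid-A")
print(dict(schedule))
# {'skid-B': ['batch-3:expand', 'batch-1:expand', 'batch-2:expand']}
```

A production scheduler would also weigh shelf-life deadlines, changeover times, and patient round-trip windows, which is exactly why doing this in a spreadsheet breaks down.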
Jim: Well, those technologies sound like… I just envisioned the big spreadsheet and trying to keep up with it manually, optimizing as best you can. So to have that online and be able to say, “Okay, now here’s the schedule,” it seems like that can go a long way for the efficiency of the operation. Now, Dave, you had mentioned data management and real-time release. Can you elaborate a little more on those?
Dave: Sure. What I’m talking about there is that when they do the production of these drugs or therapy products, before they’re able to release them and get them shipped out, they have to go through a quality review, or release, process. And, of course, a lot of what drives that process is data. They have to make sure they can sync up all the data associated with that particular production batch. In the case of personalized medicine, that would include the patient samples being synced to the processing, the materials used, and the equipment used, putting all of those in context with the batch information. And then any exceptions that occurred during the production process need to be captured and resolved prior to release.
One of the offerings we have is called Quality Review Manager, which helps customers review and resolve captured exceptions so that they can go through those, figure out what’s happened, and then resolve them and release the material more quickly, which again, that helps a company be much more financially successful. And of course, for these autologous or personalized medicine processes, the timing is often crucial, which relates directly to capturing the right data in real-time and getting it released and shipped because the shelf life of these treatments can be quite limited.
Data is also critical in both directions. When a sponsor company uses a contract development and manufacturing organization, or CDMO, the data on how to do the therapy must be transferred to the CDMO. And then when a batch is executed, the CDMO typically transfers the batch data, either some part or all of it, back to the sponsor organization for them to confirm the release. So again, the supply chain is much more complicated in these new therapies. And everyone in the industry is moving towards more sophisticated data stores, sometimes replacing, but more often augmenting, their time series historian with something that can accommodate many more data types and can set the data in context, which makes the post-processing analysis that much more valuable. And as mentioned earlier, people are beginning to explore what artificial intelligence can do for them to gain more insights from their data or help them do their operations work.
Jim: Yeah. It just sounds like the complexity is all around that and trying to manage it and make sense of it and all is just a growing challenge. So, what product offerings do Emerson and AspenTech offer to help with all this?
Dave: Well, there are a number of things; we have a very comprehensive portfolio for our life science customers. In the area of data management, some of the ones that come to mind immediately are the Quality Review Manager that I mentioned, which is part of our DeltaV MES [Manufacturing Execution System], formerly known as the Syncade MES offering. We have IP21 [Aspen InfoPlus.21], which is a time series historian. And then we have [AspenTech] Inmation, a data management product from our AspenTech partner. Inmation is an extremely flexible database that can handle all sorts of data types and put them in context, which helps people utilize, find, and retrieve the data much more easily than without that context. And knowing the relationships between the data is what later lets AI analysis create new conclusions and insights; that’s all based on the context Inmation provides. So those are probably the three top offerings that come into play in this data management space.
Jim: Yeah. It sounds like having that glue to take these different sources and methods, make sense of them, and then add the AI on top to point you down the right path, that does seem helpful. So, does Emerson have anything to help life science companies manage how they transfer data, or data management products to help get these products to market faster?
Dave: Yeah. Absolutely. That is a huge area of interest. Jim, as bad as the pandemic was, one of the good things that came out of that experience for everyone in the life sciences industry was seeing what is possible in terms of accelerating the pipeline, moving something from laboratory into production, and getting all the checks done very quickly and efficiently. And so, that’s really started a lot of wheels turning in terms of how we can do that better.
Emerson has a number of things going on in that area. One is process knowledge management. In that category, we have an offering called [DeltaV] Process Specification Management, or PSM. Process Specification Management helps a life science company gather, collect, store, and organize all the information as they’re doing drug development. It helps pull all that specification information together so that it can be submitted for approvals to the regulatory bodies, and also so that it can be passed downstream when they need to run pilots for clinicals or transition to commercial manufacturing. And this is about making that tech transfer much faster than it has been historically.
In the past, it has taken years to move something from the lab into commercial production. Emerson has created a consortium of companies to work together with us to tackle this problem and come up with some really game-changing technologies to make that tech transfer much, much faster than it’s been. It’s a combination of some of the current offerings, like Process Specification Management, and [DeltaV] Process Risk Assessment to help you look at the risk associated with various techniques as you’re doing your development, together with the tech transfer effort we’re doing with the consortium, to see if we can cut that time by, like, an order of magnitude as we go into the future.
Jim: Now, you talked a little bit about collaboration in that area, but I understand there’s also collaborative efforts going on in the life sciences industry. Can you talk a little bit more about that?
Dave: Yeah. Absolutely. There are many different organizations and industry groups in the life sciences industry that help companies, both end users and some of the vendors serving this market, share ideas and tackle some of these problems together. BioPhorum is one whose meetings we spend time at; I’ve been to a number of their sessions. Really, really good collaboration with other folks in the industry, talking about some of these problems and how we can make things better. NIIMBL is another one that we work with very, very closely. ISPE is another great organization that helps move the industry forward. And there are others, but those are the big three that Emerson and AspenTech work closely with.
They work on things like standards and ontology, the nomenclature and data structures, so that we have a common way of talking and passing information back and forth, which can make things much easier. We work on pilot programs and proofs of concept to take some of these new technologies, find a place to try them out, and then publish the results so that others can benefit from that research. So, a lot of great work is being done by those organizations, and we’re very happy to be a part of those efforts.
Jim: And I invite our listeners, if you’re on the blog, to do a search on, like, BioPhorum, because I know our subject matter experts get involved in a lot of what gets developed and then published as part of it. We’ve recapped several of the things they’ve put out over the years, so I invite our listeners to go find those. Well, this has been a very educational discussion and one I really appreciate you sharing, Dave, but where can our listeners go for more?
Dave: Well, yeah, we have quite a bit of information up on the website, as you might imagine. If you go to Google and search “Emerson” or “Emerson AspenTech and life sciences,” you’ll very quickly find our Life Sciences and Medical page, and from there you can find more about some of these ATMP therapies and markets and the different products and solutions that both Emerson and AspenTech have to help these customers and our life science partners deliver them to patients.
Jim: That’s great. And some of the products you mentioned along the way, I’ll make sure to include hyperlinks to where there’s more information available on all of those. Well, Dave, this has been great and a lot of fun. Thank you so much for joining us today.
Dave: Absolutely. Happy to do it, Jim.
-End of transcript-
You may have heard of various color adjectives used to describe hydrogen production based on the amount of CO2 produced as a byproduct. Green hydrogen comes from renewable energy sources, powering an electrolyzer to split water into hydrogen and oxygen.
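For reference, the overall electrolysis reaction behind green hydrogen (standard chemistry, independent of the electrolyzer technology):

```latex
\mathrm{2\,H_2O \longrightarrow 2\,H_2 + O_2}
```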
Blue hydrogen is produced from typical sources like natural gas, but the CO2 produced as a byproduct is captured and stored away. This decarbonization process is one of the reasons why companies like oil refineries and chemical plants choose blue hydrogen as a fuel source and feedstock.
In this podcast, Scot Bauder describes the important role that valves, actuators, and regulators play in optimizing and scaling these production operations in applications like steam methane reformers (SMR) and autothermal reformers (ATR).
Give the podcast a listen and visit the Blue Hydrogen Valve, Regulator & Actuator Solutions section on Emerson.com. Here, you’ll find advanced blue hydrogen production valve solutions that are not just safer and smarter but also more reliable, ensuring your operations are always in good hands.
Jim: Hi, this is Jim Cahill with another “Emerson Automation Experts Podcast.” You likely have heard about hydrogen’s role as an energy carrier in global efforts in an energy transition to non-carbon sources. Today, I’m joined by Scot Bauder, sales director for Flow Control solutions for the sustainable industries, to discuss hydrogen energy, specifically the production of blue hydrogen, with a focus on end users like chemical plants and oil refineries. We’ll be discussing the crucial role valves, regulators, and actuators play in blue hydrogen production for safe and efficient operations. I’ll start by letting Scot explain what blue hydrogen is all about. Welcome, Scot.
Scot: Thanks, Jim. Just as you promised our listeners, I’ll talk a little bit about what exactly blue hydrogen is, but I’ll also talk about the different colors of hydrogen. Hydrogen comes in many different colors, and really, the colors depend on the carbon intensity of the different hydrogen types. The two most common are probably green and blue hydrogen. Green hydrogen comes from energy sources that are non-polluting, like wind, solar, hydroelectric, things of that nature. Blue hydrogen is where typically natural gas is used, and the CO2 emissions are removed through the process of making the hydrogen. This is important to large industrial customers that have a large demand for hydrogen, since the blue hydrogen processes tend to support that a little better.
Jim: Okay. Got it. So, why are companies in the downstream hydrocarbon industries like oil refineries and chemical plants using blue hydrogen as a feedstock for their operations?
Scot: That’s a good question. These industries, petrochemicals and refining, have been using hydrogen to finish and make products for decades now. As the world has moved toward reducing its carbon footprint, many of these large companies have adopted ESG and sustainability goals to cut greenhouse gas emissions from their own processes. When you’re making a large quantity of hydrogen, blue hydrogen is a great method of doing that: it meets their demand for their products while at the same time reducing or removing the CO2 emissions that come with it.
There are also many countries with decarbonization goals that have been legislated or imposed by stockholders or shareholders of the companies. With that, there are either carbon taxes from the government, or a lot of governments also incentivize customers to remove CO2 emissions from their processes. In North America and around the world, there are a lot of areas that have natural gas. It’s relatively inexpensive, and it makes a great feedstock for blue hydrogen. And right now, it is cheaper than green hydrogen.
Many companies are investing in blue hydrogen, like Linde, they have a Nederland project in the Gulf Coast here in the United States, and then Exxon is also working on a project for their Texas Baytown site to add a blue hydrogen unit for that facility also.
Jim: Yeah. I see and hear a lot about carbon capture projects going on in different spots, so there really is a lot of activity there. Well, blue hydrogen is produced via steam methane reforming, or SMR, and autothermal reforming, called ATR. What are the critical aspects of blue hydrogen production using the SMR and ATR processes?
Scot: Both processes have benefits. The steam methane reforming route has been around for a very long time. It’s commonly used today at petrochem and refining sites to make gray hydrogen; by adding CO2 removal, they can make blue hydrogen and, again, reduce the greenhouse gas emissions from those processes. Newer in this area is the autothermal reforming technology. The big benefit it brings is efficiency. It has only one CO2 source in the process, which increases efficiency and reduces the complexity of capturing the CO2, whereas steam methane reforming has a CO2 component in the reaction when making the hydrogen itself and another CO2 component from the combustion process.
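For readers who want the chemistry behind that CO2 accounting, these are the standard textbook reforming reactions (not specific to any vendor’s process):

```latex
% Steam methane reforming (heat comes from a fired furnace, which adds a
% second, dilute CO2 stream from fuel combustion):
\mathrm{CH_4 + H_2O \longrightarrow CO + 3\,H_2}
% Water-gas shift (the CO2 that appears in the process gas itself):
\mathrm{CO + H_2O \longrightarrow CO_2 + H_2}
% Autothermal reforming adds partial oxidation inside the same vessel, so
% the heat is generated internally and CO2 leaves in one process stream:
\mathrm{CH_4 + \tfrac{1}{2}\,O_2 \longrightarrow CO + 2\,H_2}
```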
Jim: That’s interesting. I was familiar with SMR, but I hadn’t heard about ATR. So, I’ve learned something here today, and hopefully so have some of our listeners. I know safety is a paramount concern for the industry. What are some safety concerns related to blue hydrogen, and how do valves play a role in helping to ensure safe operations?
Scot: Yeah. There are several areas in the blue hydrogen process where safety comes into play. One is the mitigation of leaks. We know that anytime we’re in a process environment, valves and pipe flanges and those things can have leaks. When we’re working with fuel gases, natural gas in this case, and a product of hydrogen coming out of that, it’s always important to have the right products so you can isolate the process when you’re shutting it down. And if there is a need to shut down the unit in an emergency, isolation products like KTM ball valves can play a critical role in that safe operation.
Pressure relief valves [PRVs] are always important to protect the system from overpressure during operation. Anderson Greenwood has some nice pilot-operated PRVs with a monitoring component: if there is an emission or a relief action, they provide data so you can measure how long it lasted and learn from the event. Those valves also have an inherent ability to reseat themselves after an event, which is important. Hopefully, any event is short, and those valves can help reduce emissions to the environment.
And the last thing spans many of the valves used in process plants, Emerson isolation valves and control valves: we have an extensive line of packing systems, fitting that extensive line of valves, that have been tested to ISO 15848, which has become the global standard for packing testing. These are tight, non-emitting packing systems, and third-party certifications and testing to the ISO standard are available with our products.
Jim: Well, that sounds like a number of things to address those safety concerns. And on the monitoring of pressure relief valves, I was recently at an environmental health and safety conference, and that was a large topic from a regulatory standpoint: really tracking what the emissions were, for how long, and in what amount. So that’s really important technology, and I think it’s growing in importance. Now, what are the challenges related to SMR blue hydrogen units? And how can valves help address these challenges?
Scot: Yeah. With an SMR, probably the biggest efficiency challenge is the fuel-gas-to-steam ratio, the efficiency of energy use, and the emissions that come from that. It’s important to have control valves that have been sized correctly; those are the components that control that process and its output. Fisher has a wide range of easy-e globe valves that fit well within that process and can be sized with the correct trim & actuation to meet those needs. Teamed with a DVC6200 positioner that gives accurate control, they will help promote safe operation of the unit. In the end, it also improves the yield across the unit, increases chemical and catalyst lifespan, and brings that efficiency across the entire process unit.
Jim: Yeah. So, it sounds like energy savings, better product yields, some ways that the technology can help there. That’s great. And I guess similarly, what are the challenges with ATR in blue hydrogen units, and how can valves address those?
Scot: Right. That’s a good question. With the ATR units, the temperatures are much higher than the SMR because of the way the process is designed around the reactor unit. That brings challenges in selecting the right materials and the valve products that can control and then isolate those units when they’re in operation.
A good valve we’ve seen there is the triple offset valve from Vanessa. It’s capable of high-temperature applications and will seal tight, based on the inherent triple offset design and the way that seal works in the product. On the flip side, the ATR, as I said earlier, has a single CO2 emissions stream. So having the right products and being able to control the process just continues to add to the efficiency we see with the ATR technology itself.
In fact, the blue hydrogen units that Linde and Exxon have built, or are in the process of working on, are both going to use the ATR process. So I think we’ll see more of this in the future. These are large process units running large quantities of hydrogen, so the efficiency definitely will have a payback in the long term.
Jim: Yeah. That makes sense, that the more efficient process should be able to win out over time. And I know that hydrogen is a tiny little molecule, and so given that, how does that impact the valve material selection?
Scot: Yeah. Hydrogen brings some challenges with it at the molecular level. One of the biggest challenges from a metallic standpoint is the potential embrittlement of those materials. Emerson has invested in quite a bit of hydrogen testing in the last couple of years. We’ve done metallic testing where we’ve saturated test samples in an outside lab at a given pressure and for a certain duration, brought them back, and done mechanical testing on them, looking for embrittlement.
We have found that if the right materials are used, typically 316 stainless steel, or carbon steel in the right condition, we’re not seeing any mechanical impacts from hydrogen embrittlement, or really any absorption of hydrogen that would lead to embrittlement. Beyond that, we’ve been supplying hydrogen valves in the refining industry for probably 50 years now, and we’re not seeing failures or other issues with those products in refining and petrochemical applications, again, for decades.
If we look at soft parts, packing, and gaskets, again, those materials, we’re not seeing absorption issues with the hydrogen. So, again, a lot of standard materials that we’ve used for many years fit well within these applications in the hydrogen space.
Jim: I’ve always heard that concern that hydrogen embrittles metals, but it’s good to hear through the course of that testing that you’re just not seeing it, that you’re not seeing failures from it. So that’s great news. Now, blue hydrogen is a cleaner gas because the production process captures the carbon dioxide, or CO2, byproduct. What are some of the common challenges associated with the CO2 capture process, and how can valves help solve them?
Scot: Yeah. The process we’re seeing companies use is the chemical amine process, which has been used by the industry for a long time to remove CO2 and other gases from different flue streams. This technology is proven, it’s understood, and with that, we have experience. We know there are some inherent applications within the chemical amine process that bring noise and cavitation. An example would be a lean amine feed control valve; we see those valves cavitate under typical process conditions. Fisher has Cavitrol trim that will address that cavitation, remove that risk, and really protect the valve and the system from any cavitation damage.
Another application that’s common in the amine process is the rich amine letdown valve, where outgassing is common, depending on the process conditions. In those applications, we tend to use a noise attenuation trim; in the case of Fisher control valves, that would be a Whisper III trim set. That will handle the outgassing and reduce the noise that would otherwise come with it.
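One common way to screen for the cavitation risk Scot describes is the cavitation index; this is a generic textbook form, not Fisher’s specific sizing method:

```latex
\sigma = \frac{P_1 - P_\mathrm{v}}{P_1 - P_2}
```

Here P1 is the upstream pressure, P2 the downstream pressure, and Pv the liquid’s vapor pressure at flowing temperature. Lower values of sigma mean the internal pressure dips closer to the vapor pressure, so vapor bubbles form and then collapse on pressure recovery; staged anti-cavitation trims take the pressure drop in steps so the pressure never falls below Pv.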
Jim: Okay. That sounds great. After CO2 is captured, what is the next challenge in the process?
Scot: Yeah. So, we’ve walked through the process of how we make the blue hydrogen, and the goal is to capture that CO2. Now there has to be a path to store it or use it in a different way. Right now, we’re seeing a lot of projects around carbon capture and also storage. When we get into the storage piece of it, compression becomes an element in those processes. And anytime we have compression, we’ve got a compressor, and anti-surge is always a concern on those compressors.
We have technology with [anti-surge] control valves, as well as our optimized digital valve using a Fisher DVC6200, that can protect that compressor and ensure that there’s not a process upset if something does not go to plan. And then from there, once it’s compressed, you know, typically it will be put in a pipeline and it will be moved to another location for storage, whether that’s sequestration or, again, to be used in a different type of product.
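As a rough illustration of the anti-surge idea, here is a toy proportional scheme in Python with made-up numbers; real anti-surge controllers work from the full compressor map, and this is not how the DVC6200-based solution is implemented:

```python
# Toy anti-surge logic: open the recycle valve as compressor flow approaches
# the surge line. All names and numbers are hypothetical.
def recycle_valve_opening(flow: float, surge_flow: float,
                          margin: float = 0.10) -> float:
    """Return recycle valve opening (0..1) based on the distance between
    the current flow and a control line set `margin` above surge flow."""
    control_line = surge_flow * (1.0 + margin)
    if flow >= control_line:
        return 0.0                      # comfortably away from surge
    # Open proportionally as flow falls from the control line toward surge:
    return min(1.0, (control_line - flow) / (margin * surge_flow))

# Example: surge at 100 units; a flow of 105 sits halfway into the margin.
print(recycle_valve_opening(flow=105.0, surge_flow=100.0))  # -> 0.5
```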
When we get into the pipeline side of things, those tend to be more remote. We have some nice actuators from Bettis, including electric actuators that work well for modulating and on-off service. Bettis has also developed a new product in the last couple of years called ECAT. It’s a pipeline actuator with zero emissions. A lot of times, pipeline actuation uses the pipeline gas as the motive force for the actuator; the ECAT actuator can use CO2 or natural gas and not emit any of it to the atmosphere. So, another nice piece: with the goal of removing CO2 as a greenhouse gas emission, this fits well with the scheme of not putting it back into the atmosphere.
Jim: Yeah. That’s an important technology as everyone’s trying to drive their methane emissions to negligible amounts. To wrap up, Scot, can you summarize some of the key challenges related to blue hydrogen production and the role that valves, regulators, and actuators play in ensuring reliable and safe operations?
Scot: Yes, Jim. Many of the challenges we talked about in blue hydrogen production center around high temperatures, potential embrittlement of metallics, cavitation, noise, and high pressure drops. Correct valve selection has a direct impact on a plant’s reliability and profitability. Remote applications are best served with electric actuation that focuses on zero emissions. We also talked about the extensive lab testing Emerson has completed in the hydrogen space to ensure that the materials selected resist or prevent embrittlement in hydrogen applications. And it’s always important to select equipment that promotes safe and efficient plant operations.
Jim: Well, that sounds pretty comprehensive of how we can help the hydrogen producers and, I guess, on the consuming side of the hydrogen coming in, as we’ve done for decades. And I guess, finally, where can our listeners go to learn more about valve solutions for blue hydrogen applications?
Scot: That’s a good question. For more information, you can go to emerson.com/blueh2valves, and that will take you to our page, that will take you deeper into the hydrogen applications specific to blue hydrogen and the products that support those challenges that come with that process.
Jim: And I’ll make sure to put a link to that, the emerson.com/blueh2valves, as well as some of the other products you mentioned through the course of the podcast so our listeners can go learn more about each of those. Scot, I wanna thank you so much for sharing your expertise today with our listeners.
Scot: Thanks, Jim. It’s been great to be on the podcast with you, and hopefully, everybody learned something along the way.
-End of Transcript-
Process Analytical Technology, or PAT, was introduced by the U.S. Food & Drug Administration back in 2004. The purpose of this guidance was:
…to describe a regulatory framework (Process Analytical Technology, PAT) that will encourage the voluntary development and implementation of innovative pharmaceutical development, manufacturing, and quality assurance.
Raman spectrometers perform spectral analysis to enable consistent product quality, process intensification, and real-time control. The challenge with their use is to integrate their wealth of multivariate data into the control strategies to drive these performance improvements.
Boundless Automation liberates data to unleash the power of software to drive world-class performance. One example is Emerson’s solutions for PAT-based manufacturing, which helps streamline manufacturing processes by integrating multivariate data and real-time process models to manage critical quality attributes and predict final product quality during production. DeltaV Spectral PAT brings in real-time multivariate analysis to help enable fully automated manufacturing for improved speed-to-market.
In this podcast, Emerson’s Jorge Costa and Bruce Greenwald join me to discuss the application of PAT technologies not only within the Life Sciences but across many of the process and hybrid manufacturing industries.
Give the podcast a listen and visit the Process Analytical Technology and DeltaV Spectral PAT sections on Emerson.com for ways to help you drive innovation and improved performance.
Jim: Hi everyone. I’m Jim Cahill with another Emerson Automation Experts Podcast. Process Analytical Technology, or PAT, is a known solution for in-line, real-time quality monitoring in manufacturing. Yet, one can debate to what extent it has been adopted across industries. Emerson has recently released DeltaV Spectral PAT to help the industry further progress in their PAT journey.
This podcast will revisit PAT’s value proposition, highlight the technology, and recommend a strategy to realize the full potential of PAT in manufacturing. Bruce Greenwald and Jorge Costa have been collaborating on PAT business development, namely in the Life Sciences industry. Bruce Greenwald is the DeltaV platform business development sales director based in Austin, Texas.
And Jorge Costa is a Life Sciences industry consultant based in Lisbon, Portugal. Welcome, Bruce and Jorge.
Jorge: Hi, Jim. Thanks for having me.
Bruce: Thanks, Jim.
Jim: Hey, it’s great to have you both. Why don’t we get started? Bruce, can you share a little bit of your background with our listeners?
Bruce: Sure. I’m a 1979 graduate of the University of Kansas, where I majored in chemical engineering with an emphasis in process control, and I’ve spent the decades since in the automation space, working originally on our Provox platform and then moving to the DeltaV platform back in the early 2000s.
Jim: Well, that sounds like a great backdrop and background there. Jorge, can you share your background as well?
Jorge: Sure. So, as you said, I’m based in Lisbon. I studied here and built my career here, earning a degree in chemical engineering in the nineties.
I joined the Life Sciences industry in ’99. I worked as an automation engineer for a few years, then went into automation management and corporate management. I spent a lot of time on process digital transformation strategies. And in the last two years, I’ve been with Emerson as a Life Sciences consultant, as you said, helping bring the value message about the AspenTech and Emerson solutions to our customers.
Jim: Well, that’s great. Thank you both for that background. Well, let’s get started and dive into the topic of PAT. Jorge, let me start with you, what is the value of PAT to the industry? And I guess what industries can it fit?
Jorge: Well, thank you for this question. So yeah, there is definitely big value, and there are different dimensions to this value.

I would say performance, compliance, throughput, and safety are what come to my mind. Let me break it down and give you some more context. When we’re talking about PAT, it’s about bringing the capabilities of the quality control lab onto the production shop floor. It’s the result of a data modeling process that starts in the development space and statistically correlates product quality with raw data from analytical equipment.

If such modeling work is developed and then transferred to the shop floor, it enables inline, real-time quality monitoring in the process. That’s the first perspective. First, it’s a huge boost to performance because it eliminates all the manual work we’re used to seeing in the industry around managing samples between production facilities and the quality control lab.

All the associated steps are also eliminated, along with the time and cost, from extracting the sample to the moment the sample is actually analyzed, the results produced and checked, and then eventually transferred back to production records that will potentially drive decisions to make process adjustments.

This whole cycle takes time and involves lots of labor from different key roles in operations, so it comes at a significant cost. And because it’s so manual, it has some inherent variability. All of that is streamlined by using PAT, so you get productivity, batch-to-batch consistency, and consistent quality.

Also, of course, compliance comes to my mind: less transcription of data and less variability. The fact that there is less variability also allows you to operate closer to quality limits, which is why I mention throughput as well; you can challenge the process further. And safety I’ll leave to last, because it’s a minor contribution, I would say: there’s less exposure of people to the product since we’re eliminating all the sample management.
Regarding what industries it can fit: definitely Life Sciences, which is where I’m coming from and is closest to my heart, but really any process that involves chemical transformation and requires chemical analysis. That spans from Life Sciences through oil and gas, chemicals, pulp and paper, and metals and mining, I would say.

Oh, and by the way, food and beverage definitely as well. So let me summarize: Life Sciences, oil & gas, chemicals, pulp & paper, metals & mining, and food & beverage.
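To make the modeling Jorge describes concrete, here is a minimal calibration sketch in Python using scikit-learn and synthetic data. Real PAT models are typically built in dedicated chemometrics packages (SIMCA and Unscrambler come up later in this conversation); this only shows the shape of the problem: spectra in, lab-measured quality attribute out.

```python
# Illustrative PAT-style calibration: fit a partial least squares (PLS)
# model that predicts a quality attribute from spectra. Synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 400

concentration = rng.uniform(1.0, 10.0, n_samples)     # lab reference values
peak = np.exp(-0.5 * ((np.arange(n_wavelengths) - 180) / 12.0) ** 2)
spectra = (concentration[:, None] * peak              # analyte signal
           + rng.normal(0.0, 0.05, (n_samples, n_wavelengths)))  # noise

X_train, X_test, y_train, y_test = train_test_split(
    spectra, concentration, random_state=0)

pls = PLSRegression(n_components=3).fit(X_train, y_train)
print(f"R^2 on held-out samples: {pls.score(X_test, y_test):.3f}")
```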
Jim: Which pretty much sounds like most of the process and hybrid industries. If there’s sampling involved, I think you hit on those challenges really well: it’s manual, so there are time lags in getting the information back to make whatever adjustments need to be made. If you can automate more of that process, it drives a lot of really good things, from quality to availability to the throughput you mentioned.
So let me stick with you a minute here, Jorge. What strategic and operational aspects should an organization consider in introducing PAT?
Jorge: As with all things related to digital transformation, I would start with the people, then review the work processes, and finally the technology. Starting with people, from the standpoint of strategic actions, it’s very important to assess whether the organization has the right people. I’m talking about PAT scientists; data scientists are key to correctly preparing the introduction of this kind of technology.

In terms of the actual work process or strategy, it’s very important, first and foremost, to prepare the introduction of PAT in the development space, but always, in my opinion, with a mindset of bringing it over to manufacturing operations. The reason I insist on this is that there are plenty of ways to do PAT, and there are plenty of mathematical packages, statistical packages if you will, and PAT platforms available off the shelf in the market, and not all of them have the best features for operations.

Namely, the level of customization and the level of effort in software validation come to my mind. So it’s very important to have a comprehensive, deep analysis of the solutions out there and of how and where the organization wants to fit them for its particular use case.

And finally, in terms of the operational aspects, the obvious one is to consider the organization’s products and identify the relevant quality parameters, then make the choice of the PAT instruments that are required, get them in place, and integrate them with the production assets.

The last point to consider is the integration not only with the control systems but also with the manufacturing execution systems, to reach the maximum value of the technology, which would be getting to real-time release.
Jim: Yeah, it sounds like oftentimes people want to start with the technology, but what you just described is really starting with the people and work processes and then working your way forward from there. Looking at something built for purpose seems like a very important thing.
Bruce, let me turn it over to you. What is the technology behind DeltaV Spectral PAT?
Bruce: So, you know, as Jorge mentioned, there are a lot of folks trying to put this together. With the advent of these really smart spectral analyzers, we’ve seen a lot of them deployed in the process, but it’s a very complex architecture, because there are computers up at the L3 portion of the network, there’s the L2 portion, and there are the data exchanges between those disparate systems.
And so, when we looked at all that, we said, well, what could we do to collapse that architecture and make it easier to deploy? And, that’s how we evolved into our DeltaV Spectral PAT.
So, we now bring the spectral analyzers directly into DeltaV. We bring the chemometric models that the scientists have developed into DeltaV as well. We then execute the models against the spectral array directly in an Application Station on the DeltaV side and produce the predicted results. So now we don’t have to worry about multiple computers, multiple validation touchpoints, or fragility of the architecture, because it’s all solidly done at the DeltaV level.
Jim: Well, yeah, that sounds like it simplifies things: moving data around, making sure it’s in the right format and delivered in a timely way, keeping the network connections good between systems. It sounds like it’s simplified quite a bit. So Bruce, what PAT instruments can it handle?
Bruce: That’s a great question, because that was one of the focuses we had when we developed DeltaV Spectral PAT: to make use of OPC UA. So really, any spectral analyzer that can put out its spectral array as an OPC UA floating-point array, we can consume with DeltaV. We are not at all specific to the analyzer that’s used.
We can talk to any analyzer that supports OPC UA.
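As a sketch of what “putting out a spectral array over OPC UA” looks like from a client’s point of view, here is a minimal read using the open-source python-opcua library. The endpoint URL and node ID are hypothetical placeholders; DeltaV’s own OPC UA integration is configured rather than coded.

```python
# Minimal OPC UA client read of an analyzer's spectral array.
# Endpoint and node ID are hypothetical placeholders.
from opcua import Client

client = Client("opc.tcp://analyzer.example.com:4840")  # hypothetical endpoint
client.connect()
try:
    # Hypothetical node holding the analyzer's float array of intensities:
    node = client.get_node("ns=2;s=Spectrometer.SpectralArray")
    spectrum = node.get_value()          # list of floats, one per wavelength
    print(f"received {len(spectrum)} spectral points")
finally:
    client.disconnect()
```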
Jim: Well, that’s good. So as long as the supplier supports the OPC UA standard, we can bring it in, process it, and do all the things we can do with it. Jorge, there are various PAT software platforms in the market. What is the unique value of this solution to the industry?
Jorge: Yeah, this question links back to what Bruce said when he introduced the DeltaV Spectral PAT solution. It’s an embedded solution, so it’s naturally simpler and more robust, and it enables automated process adjustments and end-point decisions. Both of those bring additional value to our customers, generating a higher return on investment and less risk to operations.
Jim: Interesting. And I guess with AspenTech being our partner and one of the PAT software vendors, is there value or does it make sense in adopting both Aspen Process Pulse and DeltaV Spectral PAT?
Jorge: Absolutely. Absolutely. I normally say that these two products independently produce value for the end user, in different ways and for different purposes. I would say Aspen Process Pulse is all about predicting the near future.

When you’re running the operation using Process Pulse, you have an additional monitoring capability that gives you a near-future prediction of your process trajectory, using a statistical model built on past data from a number of batches, or segments of data if you’re talking about continuous manufacturing. It allows that model to be generated and plotted in an operator display and then projects the current operating conditions against that representation of the population of past data.
So it gives the operators an additional pair of eyes, let’s say, to help detect or predict any kind of quality excursion and take preventive measures to avoid a process deviation.
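A minimal sketch of that “additional pair of eyes” style of multivariate monitoring, using PCA and a Hotelling’s T² limit in Python with synthetic data; this is a generic illustration, not a representation of Aspen Process Pulse’s actual models:

```python
# Generic multivariate statistical process monitoring sketch: fit PCA on
# historical in-spec data, then flag observations whose Hotelling's T^2
# exceeds an empirical control limit. Synthetic data for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
history = rng.normal(0.0, 1.0, (500, 10))      # past in-spec process data
pca = PCA(n_components=3).fit(history)

def hotelling_t2(x):
    """T^2 = sum of squared PCA scores scaled by component variances."""
    scores = pca.transform(x.reshape(1, -1))[0]
    return float(np.sum(scores**2 / pca.explained_variance_))

limit = np.percentile([hotelling_t2(row) for row in history], 99)

new_obs = rng.normal(0.5, 1.0, 10)             # a drifting observation
status = "alarm" if hotelling_t2(new_obs) > limit else "normal"
print(f"T^2 = {hotelling_t2(new_obs):.1f}, limit = {limit:.1f}: {status}")
```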
DeltaV Spectral PAT is more about the here and now. It’s about real-time monitoring and immediate response to operating conditions, and it enables closing the loop between quality control and the control of critical process parameters.
Jim: Well, that does make it sound that they’re complementary and can provide value in doing that.
Bruce, I guess back to the Spectral PAT technology: what are the solution requirements and limits?
Bruce: So, we have launched DeltaV Spectral PAT with version 15.LTS of DeltaV, and we support both the Sartorius SIMCA modeling application and AspenTech’s Unscrambler application for generating the models.
And then, as I mentioned, we transfer those models to an Application Station on the DeltaV side, where we also tie in the spectral analyzers. Then, using standard function blocks in DeltaV, we process that spectral array against the model we’ve transferred in, in order to generate the desired value.
So, it runs in an Application Station on version 15 and above of DeltaV. And from a licensing perspective, it’s done on a per-function-block basis: for each Spectral PAT function block, we license it to run on that Application Station.
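By way of analogy, the runtime job of such a function block boils down to applying stored model coefficients to each incoming spectral array. The coefficients below are made up; in practice they come from SIMCA or Unscrambler model files transferred to the Application Station.

```python
# Illustrative analogue of a spectral function block: hold model
# coefficients in memory and score each incoming array. Values hypothetical.
import numpy as np

N_WAVELENGTHS = 400
rng = np.random.default_rng(2)
coeffs = rng.normal(0.0, 0.01, N_WAVELENGTHS)   # stand-in PLS coefficients
intercept = 1.5                                 # stand-in model intercept

def score(spectrum: np.ndarray) -> float:
    """Linear chemometric prediction: y = x . b + c."""
    return float(spectrum @ coeffs + intercept)

incoming = rng.normal(1.0, 0.1, N_WAVELENGTHS)  # one OPC UA float array
print(f"predicted quality value: {score(incoming):.3f}")
```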
Jim: Okay, that sounds easy enough. So for a listener that’s been listening to us for this long and wants to see it in action, can it be demonstrated?
Bruce: Yeah, Jim, we’ve got a couple of ways we can demonstrate. First off, a plug for our new Life Sciences showroom here at our headquarters in Round Rock. We have a Raman analyzer connected to a two-liter bioreactor, where we’re measuring the glucose in the bioreactor. We can do that live in front of folks and show how the models run against the spectral array.

We also have a very similar setup with a virtual Raman analyzer, where we do all of this in a virtual session. You can see the models run against the spectral arrays and even do closed-loop control back into DeltaV; the example there is a perfusion bioreactor.
Jim: Well, that seems like something worth making the trip to the Austin area to take a look at. That sounds…
Bruce: Absolutely.
Jim: And Jorge, I guess from a financial perspective, do you have any recommendation to justify such an investment?
Jorge: Yeah, absolutely. I would suggest starting by defining a strategy, which means figuring out the company’s key goals.

What are the operational pains to eliminate or mitigate? What are the targets for the affected metrics that are expected to change? Then estimate the costs and the benefits of the selected solutions, and finally calculate the return on investment based on the quantification of both the costs and the benefits.
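As a simple illustration of that final calculation (hypothetical numbers):

```latex
\mathrm{ROI} = \frac{\text{benefits} - \text{costs}}{\text{costs}},
\qquad \text{e.g.}\quad \frac{500{,}000 - 200{,}000}{200{,}000} = 1.5 = 150\%.
```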
And in terms of a proof-of-concept approach, we definitely recommend starting with a sandbox environment that you can use as a platform to learn, develop standards, and validate those standards, generating the templates and all the associated compliance documentation. Then use that to instantiate and create the replicas that will be rolled out to production live systems.
Jim: Yeah, that proof of concept is an important step in the process, after you’ve looked at all the benefits you can deliver and done that ROI calculation. And I guess to wrap things up, Bruce, where can our listeners go to find more information on this important topic of PAT?
Bruce: So you can head right over to the Emerson.com website. We have a dedicated page for DeltaV Spectral PAT on the Emerson.com site. There are several videos you can watch, and we’ve got our product data sheet available there. There’s also an eBook that can be downloaded, as well as the option to connect with the proper sales office to get more information and kick the tires on our offering.
Jim: Well, that sounds straightforward enough. And I’ll add a link in the transcript on exactly where the page is. Well, Bruce, Jorge, I want to thank you both so much for joining us today and sharing your expertise with our listeners. So thank you.
Bruce: Thank you. Appreciate the opportunity.
Jorge: Thank you. Same here. It was a pleasure.