
Dynamics Corner
About Dynamics Corner Podcast "Unraveling the World of Microsoft Dynamics 365 and Beyond" Welcome to the Dynamics Corner Podcast, where we explore the fascinating world of Microsoft Dynamics 365 Business Central and related technologies. Co-hosted by industry veterans Kris Ruyeras and Brad Prendergast, this engaging podcast keeps you updated on the latest trends, innovations, and best practices in the Microsoft Dynamics 365 ecosystem. In each episode we dive deep into various topics, including Microsoft Dynamics 365 Business Central, Power Platform, Azure, and more. Our conversations aim to provide valuable insights, practical tips, and expert advice to help businesses of all sizes unlock their full potential through the power of technology. The podcast features in-depth discussions, interviews with thought leaders, real-world case studies, and helpful tips and tricks, providing a unique blend of perspectives and experiences. Join us on this exciting journey as we uncover the secrets to digital transformation, operational efficiency, and seamless system integration with Microsoft Dynamics 365 and beyond. Whether you're a business owner, IT professional, consultant, or just curious about the Microsoft Dynamics 365 world, the Dynamics Corner Podcast is the perfect platform to stay informed and inspired.
Episode 429: Transforming Manufacturing: AI's Role in the Modern Workplace
In this episode of the Dynamics Corner Podcast, hosts Kris and Brad speak with Bryan DeBois, Director, Industrial AI at RoviSys, about the transformative impact of AI in the manufacturing sector. Bryan shares insights on how AI is bridging the skills gap, enhancing efficiency, and reshaping the modern workforce. From autonomous AI agents to predictive maintenance, discover how AI is not just a tool but a pivotal player in the future of manufacturing.
#MSDyn365BC #BusinessCentral #BC #DynamicsCorner
Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/
Welcome everyone to another episode of Dynamics Corner. Brad, can AI do more than just put words together? I'm your co-host, Chris.
Speaker 2:And this is Brad. This episode was recorded on July 25th, 2025. Chris, Chris, Chris, can AI do more than put words together? AI can do a lot, and AI is all over the place these days. I know we often focus on and talk about how AI can help you within your ERP software, but AI can help you outside of the ERP software as well. Today we had the opportunity to speak with Bryan DeBois about AI and manufacturing. Good afternoon, sir. How are you doing? Hello, hello, doing well. How are you? Doing very well, thank you. Thank you for taking the time to speak with us.
Speaker 2:I was just talking with Chris, I have two new obsessions and I don't know how I got into these obsessions. Someone told me I'm a year late on one of them. I'm into making sourdough bread.
Speaker 3:Okay, yeah, you're way behind, man. I'm way behind, so I made my own starter this week.
Speaker 2:I think one day I made three loaves, I don't know.
Speaker 1:And the funny thing is, hey, you got to keep it alive, man. You got to keep this starter alive.
Speaker 2:I do. I have the starter, I keep it and I get scientific. Like I measure the flour, I measure the water.
Speaker 1:I measure the water, I stir it, I track it. When you keep going, you stop caring about measuring that stuff. You just do it, man.
Speaker 2:No, it's going well. And now I'm practicing the designs. I think I finally have the recipe that I like. I don't know if it's the flour, the air, or what, but I fed the starter this morning, and four hours later it had already doubled in size. It's like a ferocious flour-eating thing.
Speaker 1:Yeah, they're alive, man, they're alive.
Speaker 2:I know they're alive, but I'm not going to feed this thing frigging four times a day.
Speaker 1:You have to, no, I can't.
Speaker 3:You don't have to feed it four times a day, do you?
Speaker 1:No, you kind of have to keep an eye on it. There's a point. It's not really about a set time; it's more like when you see it rise, you know you've got to remove some of the stuff. Yeah, it's when it doubles in size. I don't know.
Speaker 2:Everyone says when it doubles in size you have to take half of it out. So this thing doubled in size in four hours. It started, you know, you put it in, you wait 24 hours. Well, it takes seven to ten days for it to finally grow, or to become alive. Okay, and the first day I did it, it was a little slow. And then I'm like, dang, this is fast. So it's like 12 hours and it doubled. Then I started baking, and then I'm doing this, and now it's like four hours and it's doubling. It's a full-time job.
Speaker 2:The other obsession that I have is I've been messing with Raspberry Pis.
Speaker 2:You're like a decade behind, sir. Listen, I am an old man, so I'm a little behind the times, but at least I'm behind the times and able to follow, because now I am a vibe-coding, AI, Pi-app-creating person. I am creating so many things. I bought all these HATs to put on it: the Sense HAT, the e-paper display. I knew nothing about Python. I still know nothing about Python, but you should see the stuff that I'm doing. You'll pick it up. No, I am picking it up, but AI does everything for you, right? That's true.
Speaker 1:You see, Python is such a popular language that AI knows a lot about it. You can literally, like you said, just vibe code with Python.
Speaker 2:Yes, yes. I mean, I have the sensor, I'm tracking the temperature. I didn't know anything. I said, okay, write something to track the temperature from the Sense HAT. Okay, now save it to a CSV file. Okay, I need to display a web page. So I went through and installed Apache on the Pi in a Docker container. I have JavaScript that reads a flat file, our CSV file of temperatures, and graphs it, which is all AI.
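As an aside for readers: the Pi logger Brad describes can be sketched in a few lines of Python. The `read_environment` function below is a stand-in for the real Sense HAT calls (`get_temperature()`, `get_humidity()`, `get_pressure()` from the `sense_hat` library), since the real values depend on the hardware; everything here is illustrative, not Brad's actual script.

```python
import csv
import io
from datetime import datetime, timezone

def read_environment():
    """Stand-in for the Sense HAT. On a real Pi this would be roughly:
        from sense_hat import SenseHat
        sense = SenseHat()
        return sense.get_temperature(), sense.get_humidity(), sense.get_pressure()
    """
    return 21.5, 48.0, 1013.2  # dummy Celsius, %RH, millibars

def append_reading(csvfile):
    """Append one timestamped reading, writing a header if the file is empty."""
    writer = csv.writer(csvfile)
    if csvfile.tell() == 0:
        writer.writerow(["timestamp", "temp_c", "humidity_pct", "pressure_mbar"])
    temp, humidity, pressure = read_environment()
    writer.writerow([datetime.now(timezone.utc).isoformat(),
                     round(temp, 2), round(humidity, 2), round(pressure, 2)])

# In-memory file for demonstration; on the Pi you'd open a real CSV in append mode.
buf = io.StringIO()
append_reading(buf)
append_reading(buf)
```

A front end (the JavaScript Brad mentions) can then read that flat CSV and graph it.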
Speaker 3:And what are the temperatures? Oh, is it the sourdough? Is that the temperature? No, I shifted.
Speaker 2:No, I track the temperature, humidity, and pressure of my house, as well as the temperature of the system. So I made little graphs for it; it was all AI-driven. And with that, Mr. Bryan, sir, would you mind telling us a little bit about yourself?
Speaker 3:Sure can. So, Bryan DeBois, I'm the Director of Industrial AI for a company called RoviSys. We are a system integrator, and we are focused exclusively on manufacturing and industrial customers. I love AI, I love talking about this stuff, but I do like to narrow the scope somewhat, so we can go anywhere with this conversation about AI. But I don't know anything about AI in fintech, I don't know anything about AI in healthcare. AI in the industrial space, though, I am an expert in, both in terms of what's possible and how we can apply AI in the industrial space today, and also where things seem to be going in leveraging AI in manufacturing. So yeah, happy to be here today.
Speaker 2:Thank you. Thank you for taking the time to speak with us. AI is showing up everywhere, and AI is one of those terms that is like, oh, I know AI, or you know AI. It's so broad. It's like what people used to say: oh, you're an IT guy. They didn't understand that IT, information technology, has so many different areas, and AI is the same. As we've been going through this journey of talking with individuals about AI, I've also learned that AI has many different facets; there are many pieces that comprise it. And as you mentioned, it's not just helpful for using the tools to create emails. We can talk about generative AI towards the end. I know that's a favorite topic of yours, so we'll jump on that. But AI in the manufacturing space: can you tell us a little bit about some of the advances, and even what we should call it, within the manufacturing space?
Speaker 3:Yeah, and actually what to call it is an interesting question. Typically when I present on this, I talk about three categories of AI in the manufacturing space. The first one I have dubbed traditional AI. Now, it sounds kind of funny to talk about traditional AI in an industry that is really just now starting to adopt AI, but the reality is that this category contains algorithms and ML models that have actually been around in the manufacturing space for quite a while, 10 to 15 years. This comprises things like anomaly detection. If you guys are familiar with that, that's where you hook up a model to the process. It needs no a priori knowledge, and it just starts monitoring the process, and it can start to tell you when things go abnormal. Now, importantly, it can't tell you why things are going abnormal. All it can say is: based on everything I'm seeing today, it doesn't look like it did yesterday. So that's anomaly detection.
Speaker 2:When you're looking at anomalies, are you looking at anomalies in time, anomalies in output? What are you measuring, or what is a good measure?
Speaker 3:Yeah, an anomaly detection algorithm can really do any of those things. It could be maintenance, so it could be looking at the RPMs or the current draw of a drive or something like that to determine whether or not it's the same as it was yesterday. It could be looking at temperatures and pressures of the process, saying things are not going the way they did yesterday. And it can typically pick up on those trends faster than a human operator, even an experienced human operator, can, because it's looking for very nuanced correlations between a lot of different variables. So that's anomaly detection.
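For readers who want the idea in code, here is a minimal sketch of "does today look like yesterday": learn per-variable statistics from a window of normal operation, then score how far a new reading deviates. The tags, values, and z-score approach are purely illustrative; production anomaly detection models are far more sophisticated.

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Learn per-variable mean and standard deviation from normal operation ("yesterday")."""
    variables = history[0].keys()
    return {v: (mean([r[v] for r in history]), stdev([r[v] for r in history]))
            for v in variables}

def anomaly_score(baseline, reading):
    """Average absolute z-score across variables: how unlike yesterday is this reading?"""
    zs = [abs(reading[v] - mu) / sigma
          for v, (mu, sigma) in baseline.items() if sigma > 0]
    return sum(zs) / len(zs)

# "Yesterday": normal RPM and current draw on a drive (invented values)
history = [{"rpm": 1480 + i % 5, "amps": 12.0 + (i % 3) * 0.1} for i in range(50)]
baseline = fit_baseline(history)

normal = anomaly_score(baseline, {"rpm": 1482, "amps": 12.1})   # looks like yesterday
weird = anomaly_score(baseline, {"rpm": 1300, "amps": 18.0})    # does not
```

Note the model can say the second reading is abnormal, but, exactly as described above, it cannot say why.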
Speaker 1:Are these IoT devices that collect all that data? And then that's what it's doing: it's just collecting all this information and seeing if there are any anomalies?
Speaker 3:It is, yeah. And it's interesting, because five to seven years ago there was that big push around IoT, you guys remember that, and everyone was talking about IoT.
Speaker 2:What's IoT? Internet of Things.
Speaker 3:Internet of Things, and then there was an industrial Internet of Things, so there was IIoT that was being marketed to my industry. And the interesting thing is that now, fast forward five years, we are not seeing the adoption of IoT that everyone thought was going to happen. Part of the reason why is that we already have sensors, we already have instrumentation, we already have all of these things, and they flow through what's called the control system, which is the system that actually makes everything move and work inside the plant, and that's all been around for 30-plus years. So IoT was just an add-on on top of that. There were some very specific cases where it was an interesting choice. Actually, Brad, you brought up humidity and temperature; those are the types of things where we could slap an IoT sensor in. It was cheaper than trying to network all of that through the control system, and great, now we've got a couple of extra data points that we can use. But the vast majority, probably over 85%, of the data coming from the plant floor comes from the existing instrumentation and sensors that we already have. So, to get back to your question: for anomaly detection, we can oftentimes just use the data sources and the data trends that exist on the plant floor today, and we can send those into the anomaly detection model without having to add a whole lot of extra sensors and instrumentation.
Speaker 3:Another area where IoT got a lot of play was around predictive maintenance. This was the idea that I can attach one of these IoT modules to a drive, to any kind of rotating equipment, and it would measure vibration, it would measure temperature, and so on. There's been some adoption of that, but not the uptick that I think a lot of people anticipated around IoT. So that's anomaly detection; we're still talking about traditional AI. Another category under that would be the predictive models. Anytime you hear the word predictive, they're pretty much all the same. Predictive quality, predictive maintenance, predictive set point: the idea is that you're going to take large volumes of very clean, very correlated data, you're going to send them into a model, and ultimately the model is going to learn how to predict a single value. Now, that's all it will ever be able to do, right? So in the case of predictive quality: what's the quality of this batch going to be before I complete it? In the case of predictive maintenance: how many days until this piece of equipment goes down? You're going to be able to predict a single value. Now, importantly, built into that prediction is that someone has to know what to do with it, right? So if I can predict that the quality of this batch is going to be low, that it's going to be off spec, somebody has to know what to do to fix that. What additives do we need? Do we increase the temperature, reduce the pressure? They have to know what to do with that information. But that's the predictive category, and then there's one final subcategory.
Speaker 3:Under traditional AI, I typically lump in all the vision stuff. Vision's been around for a long time in the industrial space. We've been doing vision, we've been doing object detection, for a long, long time; even defect detection we've been doing for 15 years. I will say that vision systems have advanced quite a bit in the last couple of years, so we can do more with them.
Speaker 3:But the other thing that I emphasize with the clients I talk to about this is that vision systems should really be another source of signal. With a typical vision system, you should be able to get four to ten new signals coming off of it. Not just "I detect that there's an object here," but how many of those objects, where they are placed, maybe heat signatures, the angle of a certain thing coming through a conveyor. Let's get all of that data and send it back to the control system so that we can make better decisions with it. So again, that's traditional AI, and then I can talk about the other two here in a second, unless you guys have any questions about that.
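A toy illustration of that "vision as a source of several signals" point: given hypothetical detection results, derive counts, positions, spacing, and skew, plus an alarm bit to feed back to the control system. The detection format, values, and threshold are all made up for the sketch.

```python
# Hypothetical output of a vision system, one tuple per object on the conveyor,
# ordered left to right: (x, y, width, height, rotation_degrees).
detections = [
    (120, 40, 30, 30, 2.0),
    (200, 45, 30, 30, -1.5),
    (283, 38, 30, 30, 12.0),   # a noticeably skewed part
]

def signals_from_detections(dets, max_angle=5.0):
    """Turn raw detections into several plant-floor signals, not just 'object present'."""
    count = len(dets)
    mean_x = sum(d[0] for d in dets) / count
    worst_skew = max(abs(d[4]) for d in dets)
    spacing = [b[0] - a[0] for a, b in zip(dets, dets[1:])]  # assumes left-to-right order
    return {
        "object_count": count,
        "mean_position_x": mean_x,
        "worst_skew_deg": worst_skew,
        "min_spacing": min(spacing),
        "skew_alarm": worst_skew > max_angle,  # a bit the control system can act on
    }

sig = signals_from_detections(detections)
```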
Speaker 2:I have lots of questions about all of this, because I can see the savings a facility could have by incorporating AI, and you said a lot of this could be done with the existing controls that you have on the floor. How would one know, or begin to go down this road, to see how they could incorporate AI into their existing structure? And then I'll ask another question. I like to stack on the questions because I get excited about this.
Speaker 2:I hear you mention the word model. Right, this is another one of those. The two biggest words I hear when I hear about AI are AI itself and model, or a phrase, I guess you could say. So what is a model, how does one know which is the most appropriate model to use, and where do these models come from?
Speaker 3:Yeah, okay, so there's a lot there to unpack. Let's address the model thing first. We use these terms, even I use these terms, kind of sloppily. We talk about AI very broadly, but I don't know that anyone has really categorically said here's everything that's in AI and here's everything that's not, right? So you could throw in things like decision trees and rules-based systems. When I was in college, many, many years ago now, I took a course on AI, and at the time neural networks were out of vogue and rules-based systems were what it was all about. So everything we learned in that AI class was about rules-based systems, and we barely talked about neural networks. Now, fast forward many decades later, and neural networks have come back very strong and are at the heart of a lot of these models. But ultimately, if I had to look at it very broadly, a machine learning model, an ML model, is something that you can put inputs into, and it's going to leverage those algorithms and give you some kind of output back. Typically that's in the form of a prediction. In the case we've seen with GPT models, it's in the form of effectively guessing what the next word is and completing phrases and sentences with those next words. In the case of autonomous AI, which is the next category of AI that we're going to talk about, it's perceiving the current state of the system and then making a decision about the next best move that you can make. So we'll talk about that here in a second.
Speaker 3:I want to address another question you had, though, about where we're getting all this data from and where people start. Most people underestimate the amount of data that they already have in their plant, in their facility. They are typically collecting orders of magnitude more data than they realize. If they're not collecting any data at all, then they've kind of missed the boat, and they missed the messaging over the last two decades. I've been with RoviSys for 25 years now, so when I started my career in the early 2000s, we were still educating everyone as to why they should collect this data. I'll never forget, I had a customer who said, well, we just throw out the data. Why would we keep this data? Now, 25 years later, that sounds kind of quaint, but that was the mindset back then: why are we going to spend this money to collect this data? What in the world would we ever do with it? So the good news is that most manufacturing customers drank the Kool-Aid, and in the last two decades they have been collecting all of that data from the plant floor.
Speaker 3:So they typically have lots and lots of data. And again, everything on the plant floor, especially within the last 10 to 15 years, is smart, right? Smart equipment, smart assets. When I started my career, we were still dealing with very, very old equipment that could maybe give you 10 data points. Now every piece of equipment can give you 100 data points. It can self-monitor; it can give you all kinds of information about how it's performing.
Speaker 3:So we've actually got lots of data. It's rare, it does happen sometimes, but it's rare, that we have to add more instrumentation or sensors. We typically have all the data we need to do what we want to do. Having that data, though, is necessary but not sufficient. We've got all that data from the plant floor, and we've captured it all in these time series databases that we call historians.
Speaker 3:When I started in this career I knew nothing about historians, but they are dominant in this space.
Speaker 3:They're time series databases and we use them all over the place and we can gather that data from those time series databases.
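For readers unfamiliar with historians: at heart they are time-series stores queried by tag name. A minimal sketch of the kind of roll-up you might pull from one when building a data set (the tag name, timestamps, and values are invented):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical raw historian samples: (timestamp, tag, value),
# here a reactor temperature sampled every 10 seconds.
t0 = datetime(2025, 7, 25, 8, 0, 0)
samples = [(t0 + timedelta(seconds=10 * i), "reactor.temp_c", 180 + i * 0.5)
           for i in range(36)]

def minute_averages(samples, tag):
    """Query one tag and roll raw samples up into per-minute averages,
    the kind of trend you'd extract from a historian to build a data set."""
    buckets = defaultdict(list)
    for ts, name, value in samples:
        if name == tag:
            buckets[ts.replace(second=0, microsecond=0)].append(value)
    return {minute: sum(vals) / len(vals) for minute, vals in sorted(buckets.items())}

trend = minute_averages(samples, "reactor.temp_c")
```

Real historians (and their query APIs) handle compression, interpolation, and millions of tags; this only shows the shape of the data.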
Speaker 3:But we have to build data sets out of them, and oftentimes we're taking data from the plant floor and mixing in IT data. So think of data from the systems you guys typically deal with: the Dynamics systems, the ERP systems, your supply chain. We're mixing that data in to build a complete data set, a complete picture of what it took to make that particular product. With that data set, we can start to train those ML models we were talking about to make predictions. So I'll give you an example of that. We had a customer who does supplements, powder supplements, so they fill these plastic tubs with powder, right? And they came to us and they said, look, we've got an issue when we fill these containers.
Speaker 2:You mentioned supplements. I just have to interrupt. I'm interested in hearing this, but when you talk to them, can you tell them, when they make those containers, not to put that little lip around the top so that you can't pull the last of it out? Because you fill that up, and you can scoop it out, but at the very end you have to dump it out.
Speaker 2:Nor can you scoop it out, because the scoop that they put inside is not small enough to get into the little rounded edge.
Speaker 3:See, everybody knows what I'm talking about. I will feed all this back.
Speaker 1:Well, they should use AI to be more efficient with that stuff.
Speaker 2:Come on. Yes, yes, yes, so I don't waste my supplement because I can't get it out. And oftentimes I flip it upside down.
Speaker 3:I bang it because I try to pour it into the other container, because I don't have enough left, and you're getting it all over the counter, and I've wasted it, right?
Speaker 2:So if you could just pass that along.
Speaker 3:I'll pass that on.
Speaker 3:So they're filling these containers, and they're running a whole batch of the powder, and no problems, it fills fine. Same batch, same SKU, same everything. Then they start filling the next batch into the containers, and the filler jams up. When that happens, it's hours and hours of downtime for them to clear the line, clear the fillers, get everything reset, and restart the process. And they're banging their heads against the wall. They're like, we don't understand what's going on, why these fillers are constantly jamming. Again, all indications are that we're running the exact same product here.
Speaker 3:So what is the problem? That's the kind of problem where all the easy solutions have already been tried. They've tried all the easy stuff. Now they're looking to AI to try to tease apart that problem. But to be able to understand what the actual problems are and get to that root cause, we've got to look at a lot of different data sources.
Speaker 3:So we've got to build data sets, like I said, that pull together data. And in the end, I don't remember exactly what it was, I'm paraphrasing, but it was basically: when the raw material was from this supplier, and it sat in the warehouse for this long, and the humidity was this, and this particular filler had not been maintained in X number of days, or whatever.
Speaker 3:That's when we see this, right? That's when we get this perfect storm where the filler clogs up. But all those data points I just talked about: you can imagine all the different places we've got to go to actually build a data set that captures that. So now we can leverage data science to tease that apart and find out what the root causes are. We give them that report, and they're like, oh, thank God, this is it, this is what we needed. But we don't stop there. We take it to the next level, and now we build an ML model that we can operationalize: we hook it up to the plant floor so that it can monitor when that perfect storm is coming, so that they have some early warning. Hey, hang on, all the pieces are falling into place where you're going to start to get jams on the filler.
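That "perfect storm" early warning can be caricatured in a few lines: join fields pulled from the ERP, warehouse, plant-floor, and maintenance systems for one production run, and score how many of the discovered preconditions are present. The supplier name, field names, and thresholds are invented; Bryan was explicitly paraphrasing the real root causes, and a real implementation would be a trained model, not hand-written rules.

```python
# Hypothetical fields joined from different systems for one production run:
# ERP (supplier), warehouse (raw-material dwell time), plant floor (humidity),
# maintenance system (days since the filler was serviced).
run = {
    "supplier": "SupplierB",
    "days_in_warehouse": 45,
    "humidity_pct": 68.0,
    "days_since_maintenance": 30,
}

def perfect_storm_risk(run):
    """Count how many invented jam preconditions are present; warn early if most are."""
    conditions = [
        run["supplier"] == "SupplierB",
        run["days_in_warehouse"] > 30,
        run["humidity_pct"] > 60.0,
        run["days_since_maintenance"] > 21,
    ]
    met = sum(conditions)
    return met / len(conditions), met >= 3  # (risk score, early-warning flag)

risk, warn = perfect_storm_risk(run)
```

The point is the operationalization: this check runs continuously against live data, so operators get the warning before the filler jams.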
Speaker 1:You made a comment about there already being a lot of data, and about using that existing data. I think that's a big problem right now, at least in my experience, where they want to put more data in, but they fail to understand what they already have. So let's start with what you already have and get the most out of that, and then, like you said, slowly bring in other data points if you are trying to solve other things, or maybe you realize, okay, we've used all this data, we need a little bit more. How can we do that, or what other data points can we add? So that's a good call-out, because I get this all the time: we need more data, we need more data, when you already have plenty. Let's tackle that first.
Speaker 1:Otherwise you're causing more variables, and then you're not getting the answer that you may be looking for, or the right questions you should be asking. So I appreciate you calling that out, because it's still a big problem.
Speaker 3:And let me build on that. I typically tell customers: there's no limit to the amount of data I can give you. Like I said, everything on the plant floor is smart. I could overwhelm your systems with the amount of data coming from the plant floor. That's not actually what you want. So what we end up doing is we start with use cases. We're big believers here that use cases should lead and technology should follow, and so we identify what the use cases are, like in the case of that filler problem.
Speaker 2:I'm sorry, but I like that, because you have to identify the problem that you're trying to solve before you solve it with technology. A lot of times, people think, I can use technology because it exists, therefore I have to use it, versus using the right technology to solve the problem at hand.
Speaker 1:Just like you, Brad, when you're developing. They're like, hey, can you add this field so we can collect this data, but it's a calculation of other areas. Why do you have to create that field? Just calculate it from the back-end side of things if you just want that data. So it's pretty wild. And then you work backwards.
Speaker 3:Once you've identified that use case, like the filler problem, now we can work backwards and say, okay, we're going to need information from your warehousing system, we're going to need information about your raw materials from your ERP system. So then we start to build that data set. Now, the good news is that these data sets typically can answer a lot of questions. It's not like you're building a data set and you can only ever use it for one use case. Typically you're identifying two, three, four use cases that are all going to use a very similar data set. So let's build that very clean, very correlated data set, and now we can start to attack these different use cases with it.
Speaker 2:My mind goes in a million different directions with this. It's so practical in how it can be helpful. And also, as you said, you have to figure out your use cases, which people oftentimes have a hard time identifying, right? It's, I know what I need to do, but okay, let's just throw technology at it and hope that it's solved. So, wow. So now you have the data points, you've determined the use cases, and then you can utilize technology. You take the data sources from the different systems, from your ERP system, from your control systems, and now you have this model that you made, or this model that exists. How do you tell it what to do? How does it know what to do?
Speaker 3:Well, again, we're still in this realm of traditional AI, right? Traditional AI effectively just answers questions. It's just going to make a prediction. You're going to send it inputs, and it's going to predict an output. That's all it's ever going to do, and importantly, it can only ever predict one output, right? So, again, in the case of that filler system: are we trending towards that issue where the fillers clog up?
Speaker 3:But you're hitting on an important point there, and that's that most of my clients, when they start to think about this, even if they've got an internal AI team or data science team that has started to build some models, the problem is that no one sitting around the table knows how to actually operationalize those models on the plant floor. That's one of the key points I make to my customers: until that model is put in operation, until somebody is making decisions based on the predictions of that model, you have not seen a lick of ROI.
Speaker 3:Everything that went into it was a big, expensive science experiment until it's actually on the plant floor and people are taking action based on the prediction of that model. What that also touches on, though, is organizational change management, because now you're talking about trying to get operators and supervisors, folks who maybe have spent a lot of time on the plant floor and have learned to trust their ears and how the machines sound, their smell, and how things look, and you're saying, forget all of that, put all of your trust into this AI model. That's a big lift.
Speaker 1:Really quick on that, because that has always been a problem, and I don't think it'll ever go away. We know that there are a lot of forecasting tools out there for predicting when to order, so demand forecasting, right? It's always that problem. Anytime that I've implemented demand forecasting, there are always one or two people in the company who are like, well, I've done it this way for this long, I don't trust it, and I'll never trust it. So you spend all this time and money and effort, and you're always going to have that one person who's like, I will not trust it, I'll make some adjustments, and stuff like that. But people need to give it a chance. Hey, let's give it three months, six months, or a year, right?
Speaker 1:And Brad and I had a conversation about AI in general. When do you trust AI's responses and results enough? If it makes one mistake, all of a sudden we're like, oh, we don't trust it at all. But if a person makes a mistake, it's okay for us. Ah, you made a mistake, you're human, blah, blah, blah. You can make more mistakes.
Speaker 3:So the bar is higher for AI, and I'm sympathetic to those folks. I really have a ton of respect for the folks who run these plants. And I think we have a good approach there, because as a system integrator we've been around for 36 years now, so we've been implementing new technology on the plant floor for a long, long time.
Speaker 3:But part of that is getting those folks engaged right from the beginning, making sure they feel like they're part of the project and that this is a solution they helped implement. That's a key aspect of getting over those objections and making sure there's buy-in from all of them. But to your point, Chris, I'm also sympathetic to the fact that, yeah, it's a higher bar for AI than it is for humans. You're investing this money, it's new technology, and the expectation is that it's going to be right pretty much all the time. Whether that's fair or not, that's just how humans are, so we've got to work within those bounds.
Speaker 1:Yeah, yeah. But I think there's a way to get around that. Brad and I have had conversations with industry experts where, you know, there's that trust relationship you have with someone that's human, and you understand that they can make a mistake and all that stuff. But from the AI side, right now, as it is, we kind of just take it for what it's giving you. I don't know how accurate it is. But if you have a little bit of visibility, like, okay, what's the probability of the accuracy? If it's telling you, hey, this is 99% accurate, then I'll have a little bit of trust. But if it's coming back saying, I'm 60% accurate, okay, maybe I need to add more data points to make it more accurate. Right now there's no system out there that would give me that. Currently you're just kind of like, oh, that's the result. There are some calculations done in the background.
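Chris's wish, a prediction that reports its own confidence, can be sketched with a toy k-nearest-neighbour classifier whose vote split doubles as a confidence number, so the user sees "jam, 67% confident" rather than a bare answer. The data is invented, and real systems would use properly calibrated probabilities rather than a raw vote fraction.

```python
from collections import Counter

# Toy labeled history: (vibration reading, outcome) pairs.
history = [(1.0, "ok"), (1.2, "ok"), (1.1, "ok"),
           (3.9, "jam"), (4.1, "jam"), (4.0, "jam"), (3.8, "ok")]

def predict_with_confidence(x, k=3):
    """Vote among the k nearest historical readings and report the agreement
    as a confidence alongside the prediction."""
    nearest = sorted(history, key=lambda p: abs(p[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k

label, confidence = predict_with_confidence(3.85)
```

A low confidence is itself useful information: exactly as Chris says, it signals that the model may need more or better data before it should be trusted.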
Speaker 2:I don't know. It's a mathematical calculation. That goes to your whole person conversation. Yeah, somebody comes to do service at your house, you're going to trust that they know what they're doing and they're going to be able to fix the problem. You don't know what's behind the box, I guess you could say. I understand the point of it. How does one build trust in anything? How do you build trust in driving a vehicle? Forget AI. How do you build trust in walking over a bridge? How do you build trust in anything? That's the question, and that's what we need to come up with in this case.
Speaker 2:This is, I think, a little more... I don't want to take away from your time, but you're going to send me down a different path here for a minute. It's new technology. It's scary technology to many. Because, to be honest with you, look at what you can do with coding, and again, we've had the conversations, Chris and I, with others, about vibe coding. What's vibe coding?
Speaker 2:Listen, people have been taking code and reading samples and putting code together for years, so that's not a new concept. The concept is you can do this a little bit quicker, and it's coming back and it's almost like magic. I tell people it's magic because sometimes I start typing something and it fills out lines and lines of code that I was just thinking about. So I think there's a little bit of fear in that, because nobody really understands it or knows what it is. But then how do you come to have trust? Chris, to your point, what do you need to have trust? Because no one's going to tell you that something can be 100% correct. There's too many variables on this planet where you can't have something 100% correct. Even if something's level, just a slight shift in the earth and now you're not level, and something will go off. So how do you gain trust in a system that you use?
Speaker 3:Yeah, and I think the answer is it's the same. It comes back to organizational change management, which has been around for a long time, right? So we understand how to get folks to adopt new processes, how to get them to adopt new systems. None of that is new, right? Yes, it's a new tool with AI, but one of the things that I feel is part of my job, and the role that I have working for an independent system integrator, is to demystify what AI is about. So it's a new tool in our toolbox, yes, but it's not like we have to throw out the whole playbook, the whole rule book, of how to implement new systems and new processes within an organization. We know how to do that. Consultants have been doing that for decades now. So it's just about leveraging that organizational change management to build trust in this new system. This new system happens to be very capable, it's called AI, but it is just another system that we're implementing.
Speaker 3:I will say that there is one other aspect, though, to AI, and researchers are working on this, but it'll be a ways out still, and that's called explainable AI. I don't know if you guys have looked into that much. One of the challenges of AI right now is that it is a black box, so it is very difficult to understand how it came to the answer that it came to, right? And that's where, Chris, you were talking about, like, I don't know if I can trust this answer or not. How did you even get to this? At least with a human, if they make a bad decision, you can go back to the human and say, well, why did you say that? Well, I thought we were in this state, but it turns out we were in this state, right? So explainable AI. There's a lot of research there, and what that will give you is the ability for the AI to go back and say, here's why I said what I said. Oh yeah, the reasoning.
Speaker 3:It's the reasoning of how I got to this point. It's actually a really hard problem, a surprisingly hard problem, for AI to do, but that's what they're working on, so that we can at least get that, and I think that will help with building some of that trust. I did want to talk about... so we talked about traditional AI. Let's look at the next category. There's three of these categories. The next one is called autonomous AI, and to me, this is the future of manufacturing. This is where we want to get to.
Speaker 3:Autonomous AI does, frankly, what most of my clients want AI to be able to do, and that's that it makes a decision. So it actually looks at the state of the system and it says, here's your next best move. So, unlike the predictive models, which can recognize, maybe, that there's a problem but don't know how to fix it, autonomous AI can actually work its way out of a problem. What it leverages is an underlying algorithm called deep reinforcement learning, which has been around now for almost a decade, and it allows the autonomous AI to make decisions like a human can. It can actually build long-term strategy. This all came out, I don't know if you guys remember, around 2016. There was DeepMind, a Google spinoff, and they built a program to be able to play Go. AlphaGo, yes.
Speaker 2:I remember that.
Speaker 3:Yes, okay, well, that didn't go away. I mean, it made a lot of press back then, but then it kind of went underground or whatever. That didn't go away. That algorithm has made huge impacts on a lot of different industries, including the manufacturing industry, so we have actually been adopting it and leveraging it. So I've got DRL models that are running in plants right now that are acting like expert operators, and it is like magic. It's wild to see these autonomous AI models, these agents, what they're able to do and how well they're able to perform, in a lot of cases outperforming even the experts who helped train them. So autonomous AI I'm very bullish on.
Speaker 3:I do feel like that is the inflection point in the history of manufacturing. This is going to be the next big thing, this autonomous AI. I know everyone thinks it's going to be generative AI, and like I said, we can talk about that here in a minute. That's my third category, generative AI. But I really believe that autonomous AI is going to be the thing that, when we look backwards, propelled us forward in this manufacturing journey.
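[Editor's note: Bryan doesn't walk through the algorithm itself, but the decide-act-learn loop underneath deep reinforcement learning can be illustrated with its simplest tabular ancestor, Q-learning. This is a toy sketch only; the six-state "process," actions, and reward numbers are invented for illustration and bear no relation to RoviSys's actual models.]

```python
import random

# Minimal tabular Q-learning sketch: a 1-D "process" with 6 states.
# The agent must learn to drive the process to state 5 (the setpoint).
N_STATES = 6
ACTIONS = [-1, +1]          # nudge the process down / up
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.1   # small cost per step
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy should be "move up" in every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Deep RL replaces the lookup table with a neural network so it can handle real sensor spaces, but the same loop of perceptions in, decisions out, rewarded over time, is what lets these agents build the long-term strategy Bryan describes.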
Speaker 1:I think that's the case, because the big focus right now is agentic AI, right? So now there's a term, and I read this somewhere, I don't remember where: back in the day, when you got your cell phone, it was "there's an app for that." Now the idea is, in the next year or five years, it's going to be "there's an agent for that." So it'd be agentic AI, where it's going to be autonomous, where it's going to do things for you, removing the tedious component. I think that's great for all that tedious work, but I would love to have a little bit more predictability, you know, not so much generating responses. It's more like, I want it to predict what my life's going to be, or if I do this, what would happen. More so than just having a conversation. I can have a conversation with anybody.
Speaker 3:Well, and so you're starting to hit on some of the limitations of generative AI. Forbes, I think, said 2025 is the year of the AI agent, right? And so there's all this focus on agentic AI and agents. The problem is that the underlying technology that they're looking at is generative AI. Well, the big problem with generative AI is that, and Apple proved this once and for all in a study released in October of last year, generative AI can't reason. That's a pretty big limitation of any AI system. It can't reason. It does not understand causal effects. They were giving it simple math problems and it could not reason through these simple math problems, let alone the types of problems like you're talking about, Chris, these big, complex problems. Think about political problems, think about big legal problems, think about these big problems that humans face, and generative AI struggled to reason about the simplest math problems. So all of the intelligence that we attribute to generative AI is actually us just projecting intelligence onto it. It is very, very dumb. Generative AI is good at one thing, and that's guessing what the next word is. It is effectively an autocomplete on steroids, and I hate to pull the curtain back for those folks who don't realize that, but that's all that it is. And the challenge is that with agentic AI, what they're doing is they're having generative AI, effectively similar to writing a program, lay out a script of what it should do to try to accomplish that task, and then you can hand that script off to something else to actually run the code, right? Well, the problem is that it's so limited in what it can generate, and it really can only ever generate things that it's seen in the past. It has to have seen something like that. Now, it's been trained on vast volumes of human information, right?
So it's seen a lot, but it still is not going to be able to get creative. It's not going to be able to work around certain problems. And then you mix into that the problem that it has with what are called hallucinations. For your listeners who are not familiar with hallucinations, that's where the generative AI just makes stuff up. It makes it up out of whole cloth, and you can't tell the difference between what's made up and what's real. And I'll give you a couple examples of that.
Speaker 3:I was using it the other day. My daughter and I were going to generate a playlist together. So I go into ChatGPT, I'm like, generate a playlist of, I don't remember what it was, beach songs or something like that. And so it generated like 25 songs, and we're starting to program these into a Spotify playlist. And we get to this one, and we're searching for the song, and I'm like, I am not finding this song anywhere. So I go back to the generative AI. I go, this song here, did you make that up? And it's like, yeah, thanks for pointing that out. I actually did make that up. And this happens way more than people realize.
Speaker 3:A more potent example happened in May of this year. The Chicago Sun-Times published, in one of their Sunday circulars, kind of a fluff piece: the summer reading list for 2025. I don't know if you guys saw this. Well, someone recognized pretty quickly, and posted on Twitter or X, that out of the 15 books in that list, only five of those books actually existed. The other 10 books were completely made up.
Speaker 3:Now remember, this was an article in the Chicago Sun-Times Sunday circular. So of course the paper was embarrassed. They went back and did research into what happened there. They interviewed the author, and the author of the article did admit he had used ChatGPT to generate it, and he had not bothered to double-check that any of those books actually existed. His editor didn't bother to check either. So this is a real problem. And when people talk about agentic AI and they're really stoked about it, I'm like, oh boy, that is very, very dangerous. Giving this technology, which can just make stuff up and go down these really weird tangents, the ability to actually execute that code itself makes me very, very nervous.
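[Editor's note: Bryan's "autocomplete on steroids" framing can be made concrete with a toy next-word model. The bigram sketch below is a deliberately crude illustration, nothing like a real transformer, and the tiny corpus is invented; it just shows that "guess the most likely next word" is the whole objective.]

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always emit the most frequent successor of the current word.
corpus = ("the plant floor is where the work happens and "
          "the plant floor is where the data lives").split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))   # -> "the plant floor is where the"
```

The output is fluent-looking but the model has no idea what a plant floor is; it is pattern continuation all the way down, which is also why it can emit confident nonsense (hallucinations) when the pattern leads somewhere false.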
Speaker 2:I'll throw a twist in this, though, just for the conversation. How is that different than a person? Because it goes back to what I was saying about trust. You talked about how it doesn't have the ability to reason. To be honest, with that question, I question a lot of people.
Speaker 2:I was going to say the same thing. And also, you think of creativity, human creativity, and what humans put together. A lot of times we put together things based on what we think we know, or what we remember of what we've experienced. So if you're feeding this information to AI, is it its lack of reasoning, or is it a lack of being able to experiment and gauge the results? Because if you're coding, you can say, create a script, give me a script. I can say that to Chris, and Chris can give me something, and it may not work.
Speaker 2:It may work, it may be whatever. And the only way we know is we have to test it to make sure that it works properly, based upon the requirements that we were given. So I'll always say AI is a tool like any other tool, and people need to realize that. With certain things it may do better, I guess you can say, and some other things it may not. But just as I'm not going to have a bunch of people build a jet without having somebody do an inspection to make sure that the jet was put together properly, there are things that someone should do when it comes to AI to make sure that whatever they're using is sound.
Speaker 3:Well, so I'll say there's two issues there that I think make it distinct from just going to a human assistant and saying, can you do this for me, can you book this for me, can you write this code for me? One is the perception of performance. Right now, because we're on that hype cycle, all these folks that you're talking about, who would use this tool, are being told by the media that AI is here, the future is here, it's going to do everything you want it to do, we've got these agents now that do these amazing things. And the vendors, honestly, are incentivized to say that, right? These AI vendors are incentivized to create that perception. So we've got this incorrect perception being pushed down to the masses. And it's funny, I thought this was so well known, and I'm shocked, more and more, when I talk to people, that they did not realize it. I was talking to my mom, this was last month, and I was talking to her about ChatGPT, and I'm like, now, you know that it can make stuff up, right? She's like, what do you mean, it can make stuff up? I'm like, it can just make stuff up. It'll make facts up, and it will give the appearance that those are correct, and it will not be correct. And she's like, I didn't know it could do that. So that's a problem, when you have the masses leveraging these tools without understanding what the limitations are. All they get is this little thing at the bottom that says, AI can make mistakes, double-check its work. That's it. That's all we get, not some deep analysis of, no, it can make really big mistakes and say really, really misleading things. That's the first problem. The other problem, and this is really a big issue, is the hubris of the models themselves.
Speaker 3:I'm a heavy user of ChatGPT, so I run up against its limitations pretty frequently. It will frequently tell me it can do things it can't do. If you had a personal assistant that you hired, and they came in through the interview process saying, yeah, I know how to do this, and I can do this, I've done that a million times, it's not going to take you very long to realize if they were just full of it. If it was fake it till you make it, and they get in there and they can't do any of those things, you're like, you don't even know how to use Excel. You don't know how to use any of these tools. You said in your interview process you could do it. That's the level of hubris that these models have, so frequently.
Speaker 3:With ChatGPT, and I mean this was just from a couple days ago: I had an existing PowerPoint presentation, it's a training I do, and I didn't like the layout, the flow of the presentation. So I fed it the presentation and I said, can you help me reorganize this so it flows better? And so it did. It gave me this nice outline and said, okay, what if you move this slide up here? And it was great. Okay, awesome.
Speaker 3:Then it says would you like me to reorder that PowerPoint presentation for you and get all these slides? And I can do all that for you. I'm like, really, you can? Okay, sure, and it just corrupted the PowerPoint completely. It can't do it, but it, with all the confidence in the world, said yeah, I could absolutely do that For sure, give it to me, I can do whatever you need me to do with it. Right, that's a problem. That's a problem when the AI itself doesn't understand its own limitations. That's how we get ourselves into some really bad places. So it's twofold. It's yes, it's the education side. The masses are on that hype cycle and we'll get to what is it? The trough of disillusionment, eventually, and people start to realize the limitations.
Speaker 1:But the other big problems. The AI itself is overpromising and underdelivering. Over and over again. I think, from a personal use, yes, I see that being a big problem, but maybe and again, this is just my opinion from an enterprise level, from a business level is from my, just my opinion from an enterprise level. From a business level, you can ground those models to know like here, here's your limit, just work within these bounds of these information that you're given so you can minimize the risk.
Speaker 1:I'm not saying eliminate the risk entirely. And you could do the same thing with ChatGPT. You can create parameters, like, this is what you are, only work here in this space, feed it whatever you want to feed it. So you can limit some of those risks by creating those limits for that AI model. But it requires a little bit of work, and it requires people to understand that. Unfortunately, a lot of people were sold, like you said, by the media on it solving all your problems. Well, that's not the case, because you still have to understand the tool, as Brad mentioned. You have to understand that it is a tool, and a tool can be used incorrectly. Can you still use it? Absolutely. But you can certainly use it incorrectly.
Speaker 3:And it's not like the LLM researchers, the generative AI researchers, don't know about this problem, right? They are actively looking into it.
Speaker 3:And, to your point, Chris, there's a technology to ground it. It's called RAG, retrieval-augmented generation, where, basically, it forces the GPT to cite its source. You have to give me where, in whatever manual or whatever you read, you found that. You've got to give me chapter and verse, you've got to point to where. And you've seen this now, ChatGPT has incorporated this, where it will occasionally give you sources, so it can give you an idea of where it found that information. But there are limitations to RAG. One of the big limitations in the enterprise setting is that RAG relies on comprehensive enterprise search, which is a problem that we've been trying to solve for 20-plus years now, and nobody really has a good handle on enterprise search, and RAG relies on that to be able to find the sources of its information. So there's limitations there. But to your other point, Chris... so let's take this now to the plant floor.
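[Editor's note: the RAG pattern Bryan describes, retrieve relevant passages first, then force the model to answer from them and cite them, can be sketched in a few lines. This is a toy: the document names and texts are invented, retrieval is naive keyword overlap where a real system would use a vector index or enterprise search, and the final LLM call is deliberately left out.]

```python
# Minimal retrieval-augmented generation (RAG) sketch: build a prompt that
# pins the model to retrieved sources and demands citations by name.
docs = {
    "pump-manual-ch3": "To reset the pump, close valve A and hold the reset button.",
    "hr-handbook-ch1": "Vacation requests must be submitted two weeks in advance.",
    "boiler-manual-ch7": "Boiler pressure above 30 psi requires immediate shutdown.",
}

def retrieve(question, k=1):
    # Naive retrieval: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question):
    sources = retrieve(question)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    return ("Answer ONLY from the sources below and cite them by name.\n"
            f"{context}\nQuestion: {question}")

print(build_prompt("How do I reset the pump?"))
```

The prompt that comes out cites `pump-manual-ch3`, which is the whole point: the model is asked to ground its answer in a findable source. And as Bryan notes, the quality of the `retrieve` step is exactly where enterprise search becomes the bottleneck.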
Speaker 2:Before you go back to the plant floor, can you explain what you mean by enterprise search?
Speaker 3:Yeah, when's the last time you used enterprise search to try to find something on your network? Did you find it the first time you searched for it? The second time? Did you finally give up trying to find that document and just recreate the thing from scratch? Enterprise search is one of those really hard problems that we really still have not solved, and we don't have good answers for it. People just live with the fact that enterprise search kind of, sort of works. And RAG, that technology where you're forcing the generative AI to cite its source, relies on good, solid enterprise search. I've seen these AI vendors' architectures, and they'll lay out the whole thing, and in order for it to cite its sources, there's a box there that says enterprise search. I'm like, no, wait a minute here. That's not a solved problem by any stretch of the imagination. Enterprise search is still not very good, 20-plus years later. So yes, there are things that we can do, and it's going to continue to get better, I know that. But as of today, let's take it back to the plant floor. This is why I'm not a big proponent of leveraging generative AI on the plant floor.
Speaker 3:Yet. Because of these major limitations. And to your point, Chris, the one person who's going to be using this, say you're trying to solve a maintenance problem on the plant floor, is the one person who knows the least about it. Because, obviously, if this was the expert, they would just go fix the problem, right? The AI vendors are selling this vision of taking your least experienced maintenance person, who's been there for three weeks, and giving him or her a chatbot they can ask questions about how to fix this piece of equipment. And it could just make stuff up. Whether or not it knows how to fix that particular piece of equipment, it'll say, yeah, I know exactly how to fix that. Here's what you're going to do: you're going to torque this bolt, you're going to rev this thing, you're going to add this additive in. And you could blow up the plant. You could kill somebody.
Speaker 1:Yeah. What I always share about using AI is that it's like having an autopilot flying a plane, right? You trust that it's going to take you from one destination to another and adjust accordingly and all that stuff. But if that doesn't work, you, as a pilot, should know how to fly manually. You should still learn to do that. No different than a developer that uses AI to help develop software. You can use AI to generate all the stuff, but you should still understand what it's doing. That should be the core of any AI use in your business. That's an important component.
Speaker 3:And that autonomous AI that I was talking about, that's like building an autopilot for that part of the process. It will look over your shoulder and make recommendations and say, you should do this, you should make this change. But, importantly, like a real autopilot, it will kick off when it recognizes that it's outside of its operating parameters. Autonomous AI, unlike generative AI, can say, I wasn't trained to do this. You're back in control, I'm kicking off. You need to take back control of the process. So it can hand operations back to the operator when it recognizes it's over its skis. Generative AI doesn't. It just makes stuff up.
Speaker 1:So from your world then, Bryan, when you're working in manufacturing, the warehouse, you know, machine learning and IoT, things like that, what specific AI model are you using? If you can share that. I don't know if you can, if it's a proprietary thing.
Speaker 3:No, no, it's not at all. It's a frequent question I get, and it's a misunderstanding of how we're approaching these problems. We have no a priori models that we're bringing to the table. We are building these models for each customer, and there's a couple of reasons why we do that. Every customer's data is different, every customer's equipment is different, their processes are all different, so it's really kind of impossible for us to build a library of models that are going to apply in all these different situations. It's not even possible, really. So we're always starting from the data, building those data sets and then generating models off of them. As far as what the underlying algorithm is underneath the model, honestly, in this day and age, you don't really have to worry about that, whether it's a decision tree under the covers, or a neural network, or, I mean, there's all kinds of crazy different algorithms under the covers. You don't have to worry about that. The systems that train these models will pick the right one. They'll run tests against each other and find the one that's the most performant. All of that happens automatically under the covers. So now we've got a trained model that, again, is a black box: inputs come in, and predictions come out. Or, in the case of autonomous AI, perceptions come in, decisions come out.
Speaker 3:The other aspect of that, though, is that one of the most common questions I get is, are you going to use my data to make your models better and then hand it over to one of my competitors? And the answer to that is no. We're starting from scratch. We're building these models for you, Mr. Customer. You get to keep that model when it's all done. You have all the IP. We don't ever leverage any of your data to train other models.
Speaker 3:Now, hilariously, they typically follow up and ask that question that you just asked, Chris: do you have a bunch of models, then? I'm like, no, I don't have a bunch of models. It's the same rules for you as for everyone else. You don't want me taking your model and giving it to your competitors; it's the same thing back to you. So no, we don't do anything like that. We always start from scratch and build these models for each customer, and that's not a huge lift. It's not as big of a lift as it sounds. In fact, on these projects, the data science, getting that data into a usable state, is typically 75, 80% of the effort. It's the cleansing of the data. It's the unsexy part of AI, right? You're cleansing the data, you're getting rid of bad data, you're eliminating rows with empty cells, things like that. So getting all of that data right, that's where the effort is. Actually training the model doesn't take that long at all.
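[Editor's note: a tiny taste of the "unsexy" cleansing step Bryan describes, dropping rows with empty cells, dropping unparseable readings, and de-duplicating. The CSV snippet and column names are invented for illustration; real projects do this with pandas or Spark over millions of rows.]

```python
import csv
import io

# Toy sensor log: one empty cell, one exact duplicate, one non-numeric reading.
raw = """timestamp,temp_c,pressure_psi
2024-01-01 00:00,72.4,28.1
2024-01-01 00:01,,28.3
2024-01-01 00:02,73.0,28.2
2024-01-01 00:02,73.0,28.2
2024-01-01 00:03,bad,28.4
"""

def clean(text):
    rows, seen = [], set()
    for row in csv.DictReader(io.StringIO(text)):
        if any(v.strip() == "" for v in row.values()):
            continue                      # empty cell -> drop the row
        try:
            rec = (row["timestamp"], float(row["temp_c"]),
                   float(row["pressure_psi"]))
        except ValueError:
            continue                      # unparseable reading -> drop
        if rec in seen:
            continue                      # exact duplicate -> drop
        seen.add(rec)
        rows.append(rec)
    return rows

print(clean(raw))   # only 2 of the 5 rows survive
```

Three of five rows get thrown away here, which mirrors Bryan's point: most of the project is deciding, row by row and rule by rule, what counts as trustworthy data before any model ever sees it.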
Speaker 1:So is this kind of a big data world, then? If you're pulling all this data, it's big data. So what database does it go into? And certainly I hope it's not SQL.
Speaker 3:Well, I don't know how familiar you guys are with the world of data science. It's a pretty established world right now, and they have tools that they're using already. Typically, yes, you're working with hundreds of thousands, maybe even millions, of rows, but you're typically working with it in Python types of environments, in what are called Jupyter notebooks. These are established tools. You're using pandas, you're using some of these established data science tools. And then, yeah, it is a big data type of problem, so there's tools like Microsoft Fabric, and there's Databricks, and there's Snowflake, tools behind the scenes that can help when you're working with that volume of data. The other thing that's really interesting, that most people do not talk about, and I like Databricks and, by extension, Microsoft Fabric because they leverage the Delta format, is their approach to versioning these datasets. Just like you version code, when we're building these datasets, we build them, we test them, we do some preliminary modeling, we run some algorithms on them to determine, are they predictive, are there gaps in the data? We run some heuristics and things like that, and then we'll go and do some more data science and modify that dataset. There's a history, an evolution of that set over time.
Speaker 3:You want to version that, right? You don't want to get back to the old days of versioning source code, where you're putting the date and the time in the file name and, as you're changing that file, losing track of which one was which. You want to version that dataset, and tools like Databricks and Fabric have a really nice method of versioning those datasets over time as they evolve. And that's what you want. Because if you find out that you introduced bad data, or somehow, in your effort to cleanse this data, you screwed up the data, you need to be able to go back a couple of versions and say, oh, here's what we did wrong, and then be able to play forward from there. And again, we haven't even gotten to AI yet, right? This is just all the data science work that it takes to get to a good ML model.
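[Editor's note: the dataset-versioning idea behind Delta-format tools, every write creates a new immutable version you can "time travel" back to, can be sketched in miniature. Databricks and Fabric do this at scale with transaction logs on storage; this toy class, with invented data, just snapshots in memory to show the workflow Bryan describes.]

```python
import copy

class VersionedDataset:
    """Toy append-only version history for a dataset."""
    def __init__(self):
        self._versions = []

    def commit(self, rows, note=""):
        # Deep-copy so later mutations can't corrupt the history.
        self._versions.append((copy.deepcopy(rows), note))
        return len(self._versions) - 1      # version number of this commit

    def read(self, version=-1):
        return copy.deepcopy(self._versions[version][0])

    def history(self):
        return [(i, note) for i, (_, note) in enumerate(self._versions)]

ds = VersionedDataset()
ds.commit([{"temp": 72.4}, {"temp": None}], note="raw load")
v1 = ds.commit([{"temp": 72.4}], note="dropped empty cells")
ds.commit([], note="bad cleansing pass - wiped everything!")

# Oops: roll back to the known-good version and play forward from there.
good = ds.read(v1)
print(good)
print(ds.history())
```

The point is exactly Bryan's: when a cleansing pass goes wrong, you read an older version and replay from there, instead of hunting through `dataset_final_v2_OLD.csv` files by timestamp.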
Speaker 2:See, this is what I was saying. People think of AI, and they just think of, like you said, oh, write me an email to say I'm sorry for missing your party on Friday, and it will generate a response for you. A nicer response.
Speaker 1:Or that they're not sorry they missed your stupid party, sometimes.
Speaker 2:Listen, I've been doing this for a while. I told you I've moved to using Grok. I think Grok is my friend now, and AI with Grammarly seems to have solved a lot of my problems.
Speaker 3:Well, and this is the blessing and the curse for me of generative AI. We started this industrial AI division in 2019. No one had even heard of LLMs in 2019, right? And so we were doing this AI stuff with customers, and we were talking about building big datasets and building models and whatever. Then, November of 2022, suddenly the world gets introduced to ChatGPT. And now my phone's ringing off the hook, because everyone wants to talk about AI, but it's the wrong AI. They want to talk about this generative AI, and I'm like, that's great, but let's talk about AI that we can actually implement today on the plant floor and have a big impact. So it's my blessing and my curse: it gets me in the door, but then I typically pivot pretty quickly to other types of AI.
Speaker 2:Yeah, there's a lot to it. I don't want to take away from the topic that we're having, but I think of this from the business point of view. When we deal with AI, and it's even myself sometimes, AI can do so much. Chris started talking about the agents, you talked about autonomous, you talked about predictive, we talked about all these different things, and sometimes, how does someone even come up with where they can apply this?
Speaker 2:What are some things that somebody could do to make their business a little more efficient? You talked about industrial manufacturing: you have the tools to help with that, they can see predictive maintenance. The other example that you had was a really good one, where you could see predictive failures of output due to external conditions and such. But how does that journey even begin? Because there's so much to this. To be honest, it sounds overwhelming to me, and it's not this conversation, it's just all of this. Oh, you can use agents that can do all this, and then you have an agent that manages all the other agents, and it's like, what do I do?
Speaker 3:Yeah, well, we'll bring it full circle back to the beginning of our conversation. We talked about use cases, so it's still the same answer: use cases lead, technology follows. It's the same thing with AI. So what we do is we typically sit down with the customer and we talk about where they can leverage AI and what use cases we can pursue. Now, it's kind of a chicken-and-egg thing, because they don't always know what the state of the art is. So we prime the pump, we teach them just enough to be dangerous about AI. Typically it's a couple-hour-long workshop that we do, and now the wheels are turning and they're starting to say, oh, I get where that could apply. And we're showing them use cases that other customers have done: here's where other customers have found success with AI. So now they're starting to say, okay, I see how that could apply here, and they start saying, what about if we used AI here? What about if we used AI there? And they're sitting down with experts in this, so we can very quickly say, yes, that would be a perfect use of AI, or, that's more like Skynet and we're not there. We can very quickly separate the wheat from the chaff, and we get to, here are 5, 10, 15 really high-value projects that we could tackle right now, leveraging AI that exists today, that would have a huge impact. Then we work with them to figure out the ROI of those projects, and of course we've got to get justification and all that. And then finally we can actually attack that project and make it a success. And then it's just rinse and repeat: you grab the next project off the list and you go through that process again.
Speaker 3:One of the things that I always tell customers, and it's actually how I wrap up when I'm speaking at trade shows, is: take that first step, right? I know that not everyone feels like they're ready to take on an AI project, but whether you feel like you're ready or not, one of your competitors is probably taking that first step. So you need to take that first step. And even if you don't want to go full AI, then at least do some AI readiness, right? How does your infrastructure look? How's your data infrastructure? How does your networking infrastructure look? What's your data story? Do you have all of these things in place, so that when your organization is ready to take on AI, you've got all the building blocks ready? So even if you just want to take on AI readiness, that's fine, but take that first step.
Speaker 3:But the second thing I tell them, and I know it sounds self-serving, is: get an independent system integrator involved, because it is complicated and there are so many ways to skin this cat and so many different paths you can go down. If you just go to the AI vendor, they have a hammer and everything's got to look like a nail, right? They've got one tool that they can use to solve every problem. Whereas with an independent SI, I've got all the tools in the world, right? And sometimes we have these conversations and we identify a use case, and it turns out maybe that's not even really an AI use case. Maybe we can solve it with just some visualization and some existing off-the-shelf analytics, without having to go full AI to solve that problem. So we can go anywhere the conversation needs to go, anywhere those use cases lead us. So yeah, have that initial conversation with someone who's an independent expert in this field and have them help you build a roadmap on how to get to where you wanna go.
Speaker 3:The other thing is that once we've built that roadmap, we can start to bring vendors in, because ultimately it is gonna be running on some platform. So now we can start to bring vendors in and look at what the options are for tools that can solve that problem. In our world we've got tools like, there's a company called Cognite, and they do IT/OT, building those data sets that combine IT and OT data together. We've got tools like Composable that does autonomous AI.
Speaker 3:That's all they do: autonomous AI in the industrial world. Now, these are not vendors that you're going to stumble on, but these are all vendors in my toolbox that I can pull from. We also work with Rockwell, which is a big player in this space, and we work with folks like Ignition. So we work with all these major vendors in this OT space, this operational technology space, the world we live in, and we can bring them to the table and build those solutions as necessary.
Speaker 2:That's good, because it does sound overwhelming, and the perception that many may have is that it sounds complicated. Some people think it sounds easy: I can just do it. This is even what we deal with with software implementations. Everybody always thinks, it's so easy, I can do it, but then you don't know what you don't know, and sometimes you put yourself into a position where maybe you made the wrong decision because you don't have that experienced reasoning to understand the cause and effect of what you're doing. So, from the manufacturing point of view, what are some other efficiencies that you've seen organizations gain by implementing AI?
Speaker 3:Yeah, so it's as varied as the customers that we work with, and I'll give you a couple of examples. We worked with a life science manufacturer, and they had certain published sustainability goals about how they were trying to reduce their greenhouse gas emissions by X percentage by a target year. And that's one of those things where, when people talk about sustainability, I don't know what other technology you're going to grab other than AI to meet those goals. Everyone's got these really aggressive sustainability goals, and it's not like there's some technology in the wings that's about to come in and revolutionize energy; we know about all the technologies that exist here. So AI is the technology that we can leverage to hit those sustainability goals. Now, in life science, if you're not familiar, you're trying to control the environmental systems very tightly. You're trying to control for humidity and pressure, and temperature of course, but those systems are oftentimes set-it-and-forget-it. It's nobody's full-time job to sit there and turn the knobs to keep it within spec while also minimizing energy usage. So we trained an autonomous AI agent to actually do that. It does not set and forget; it sits there and makes micro-adjustments 24 hours a day. The constraint is it's got to stay in spec, but it's always working to minimize energy usage, and it can make intelligent decisions based on all kinds of signals: past demand, outside temperature, outside humidity and pressure, what the cost of energy is at that moment. It can leverage all of those signals in the exact same way that a full-time human could, if that was their job.
It can do that and control that. They were able to get double-digit percentage decreases in energy usage, which is more than enough to get them to the sustainability goals they were trying to reach. And that was really just one site; we're now working with the customer to expand that out to many, many more sites. Now, from energy, same technologies.
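The pattern Bryan describes here, continuous micro-adjustments that treat the spec as a hard constraint and energy as the thing to minimize, can be sketched as a toy simulation. Everything below (the adjustment rule, the room model, the numbers) is invented for illustration; a real autonomous agent learns its policy from data rather than following a hand-written rule like this:

```python
def micro_adjust(temp, spec_low, spec_high, power, step=0.05):
    """One micro-adjustment of cooling power: constraint first (stay in
    spec), objective second (trim energy whenever it is safe to do so)."""
    if temp > spec_high:               # drifting out of spec: more cooling
        return min(1.0, power + step)
    if temp < spec_low:                # overcooled: cut power
        return max(0.0, power - step)
    return max(0.0, power - step / 2)  # in spec: shave energy gradually

def simulate(hours=200, spec=(20.0, 22.0)):
    """Crude room model: a constant heat load pushes the temperature up,
    cooling power pulls it down. Returns final temp and total energy."""
    temp, power, energy = 21.0, 0.8, 0.0
    for _ in range(hours):
        power = micro_adjust(temp, spec[0], spec[1], power)
        temp += 0.3 - 0.5 * power      # heat load minus cooling effect
        energy += power
    return temp, energy

temp, energy = simulate()
# The controller cycles around the spec band while using noticeably less
# total energy than holding power at the initial 0.8 for all 200 steps.
```

The point of the sketch is the shape of the objective, not the numbers: in-spec operation is non-negotiable, and every adjustment that respects it tries to shave energy, which is where the double-digit savings come from.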
Speaker 3:But let's go to another example: we've got a customer that makes glass bottles. The glass bottle process, if you're not familiar with it: there's a gob of molten glass, it falls into a mold, air is blown in, and that's how you make glass bottles. Well, that process is very finicky, so it takes an operator with a light touch. They've got to be really tuned in, and the customer told us, we've got two expert operators who are really good at this and everyone else is just okay at operating this process. And it's a very drifty process, so once it starts drifting, you're making bad bottles for at least the next 20 minutes, maybe half an hour, before you can finally get the knobs turned to bring it back to where you're making on-spec bottles again. We were able to train an autonomous AI agent to get back to making on-spec bottles in less than five minutes. Consistently, it never took longer than five minutes, and typically it took about two minutes to get back to making on-spec bottles.
Speaker 3:And one of the challenges with this and why autonomous AI works so well for this, is there's a lot of compensating moves.
Speaker 3:So if I increase what's called the orifice, the opening that the molten glass goes through, or I decrease the temperature on the orifice, or I increase the pressure on the plunger behind it.
Speaker 3:When I'm making these moves, I've got to make a compensating move somewhere else, right? So there's just a lot of knobs to turn, and it's one of those things where it's almost too much for a human to keep track of. So what humans end up doing is a lot of test-and-check: they'll make a change, then see if it makes an improvement, then make another change. The autonomous AI doesn't do that. It's got in mind where it's going to go, and it can turn all of those knobs at the same time to coalesce on making on-spec bottles again. And when you look at getting back to making on-spec bottles in less than five minutes versus 20 or 30 minutes, that's huge savings, that's huge throughput, that's millions of dollars.
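That difference, turning every knob at once toward a target versus a human's one-knob-at-a-time test-and-check, can be illustrated with a toy convergence count. This is purely a numerical illustration with made-up knob values and rates, not how the glass line's controller actually works:

```python
def steps_to_converge(knobs, target, simultaneous, rate=0.5, tol=0.01):
    """Count control steps until every knob is within tol of its target.
    simultaneous=True moves all knobs each step (the autonomous-AI style);
    False adjusts one knob per step, like a human's test-and-check."""
    knobs = list(knobs)
    steps, i = 0, 0
    while max(abs(k - t) for k, t in zip(knobs, target)) > tol:
        if simultaneous:
            knobs = [k + rate * (t - k) for k, t in zip(knobs, target)]
        else:
            knobs[i] += rate * (target[i] - knobs[i])
            i = (i + 1) % len(knobs)  # round-robin through the knobs
        steps += 1
    return steps

fast = steps_to_converge([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], simultaneous=True)
slow = steps_to_converge([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], simultaneous=False)
# fast is 7 steps, slow is 21: three knobs adjusted together converge in
# roughly a third of the steps of one-at-a-time test-and-check.
```

With n interacting knobs, the one-at-a-time approach scales its recovery time by roughly n, which is the intuition behind the five-minute versus 20-to-30-minute recovery Bryan describes.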
Speaker 2:You eliminate waste as well. It all comes down to, I think, AI helping you increase accuracy to eliminate waste. And that autonomous AI turning all those knobs sounds like a person to me, and in some cases I wonder if it could do it more reliably than a person.
Speaker 3:Well, I mean, it never calls off, it never takes a break, it never goes on vacation. But the reality is that, with most of these, we're not building this to replace a person. The problem is that the manufacturing world has had massive losses of expertise. As the baby boomers retire, they have lost decades and decades of experience. There was a report from LNS Research; this is for the US. In 2019, the average tenure in a given position in the manufacturing workforce was 20 years.
Speaker 3:By 2023, it had dropped to three years. Three years of tenure. That's insane. And when I talk to my clients, they're all seeing this. They're saying, I've got high turnover, it's a generational thing. Nobody wants to do these jobs, nobody wants to work these factory jobs, and we can wish that it wasn't the case, and we can certainly hope that it changes, or we can just live with the fact that this is the reality my clients are facing. So they're not looking to replace people; they're looking to take that person with two weeks of training, who's probably going to quit in six months, and at least get them to where they can make good bottles.
Speaker 2:Yes, see, this goes back to something I always say: AI is not going to replace you, but someone using AI will. That's right, because it's another tool that you're using. And if you look at the industrial revolution, the advances in civilization, we've always done things to make it easier for people to do more, in a sense. So again, you go back to where someone may not want to do those specific positions, or there may not be enough within the talent pool for those positions. If you can have something reliable to help, then you can still continue to prosper and be successful.
Speaker 3:And that's how we stay competitive. And when I say we, I mean the US; I'm a US citizen, I was born and raised in Ohio, love this country, love manufacturing. This is how we stay competitive, this is how we stay ahead: leveraging these technologies to look over the shoulder of our least experienced operators and get them to where they can run these lines in an expert fashion. That's how we're going to do it. AI is not the only way to do it, but it is one mechanism that we're looking at. And again, in the role that I'm in, Director of Industrial AI, obviously it's one of the most common questions I get: are you putting people out of work? And again, the answer is no. My clients don't have enough people to do this.
Speaker 3:But there is a certain aspect of embracing automation and realizing that the jobs aren't going away. Certain job titles may go away, but it's just gonna create new job titles in the future. And I've got a couple of examples of that. Lamplighter was a job; someone's full-time job was to light the lamps in the town at the end of the day, and of course, with Edison and the adoption of electricity, that went away. Elevator operator was a job, a real job, where someone went to work every day as an elevator operator, and no one's lamenting the loss of elevator operating. We're not walking down the street and seeing out-of-work elevator operators sitting on the streets.
Speaker 3:The jobs change. That's what's happening and that's what's going to continue to happen as my clients, as these manufacturers, continue to find challenges in finding this talent. As that continues, the jobs will change and they'll start to adopt more automation. And look, I've been out in these plants for my whole career. They're not the sexiest jobs. Some of these plants are very dirty, the work is very repetitive; they're not the jobs that humans want to do anyway. So let's get those humans into jobs that are rewarding, jobs they're excited to come to every day, and let's let AI do some of these jobs that are menial and that are dangerous. That's the other thing.
Speaker 1:There's a lot of these jobs that are very dangerous, that we don't want humans doing in the future anyway. Yeah, for sure, and I think a lot of the AI is filling those gaps where you are lacking the skill. So you're right, you have to have a positive outlook on the tools that we're creating, and just like Brad mentioned, it's part of human civilization improving our lives. It doesn't mean it's going to eliminate a person's value; that value can shift to other areas that are much more productive, maybe more creative, with more strategy around it, and then not worry about that dangerous, tedious work, and have a system do that for you, or have a robot do that for you. Just like the horse, right? Now the horse is used for some other things.
Speaker 3:Yeah, and that's been the story of industrial progress. That's why I keep coming back to: AI is not new in that regard. It's just the next tool, the next evolution, the next step. But this has been the story as we continue to move from manual processes. You've seen the old pictures from the industrial revolution of people lined up, working with their hands and making stuff, to where we are now, where all of that is done with a machine, like the loom.
Speaker 2:I can think of so many jobs. They had children in the mills; they used to run the looms, they used to run the yarn quickly through the wires, and they used to get injured and hurt, oh my gosh. So there are some benefits, I think, with AI.
Speaker 2:I think more of it is the mystical, magical black box and the magic that it does. When you're sitting there creating code, or you're doing some process, it just seems to know what you're thinking. So I think some of the apprehension is the fear of it, and I think you've had that with the adoption of any tech, any advancement that was made. You look back to the automobile, some of these larger advancements, and even some other tools that were created. There's so much to this. I could talk about AI for hours and days, days and hours. I don't even know anymore.
Speaker 1:Now you have more time, Brad, to talk about it, because AI will do the rest of your tedious work.
Speaker 2:I have so many things that I wish AI could do for me. I just need to figure out how to apply it to get it done. And the one thing I keep saying I'm still looking for the ability to manage multiple calendars in one place easily, without having to pull everything into one calendar. Yeah, yeah, that's such a simple thing too.
Speaker 3:You talked about enterprise search.
Speaker 2:I was thinking, the first thing that came to my mind was, do you know how difficult it is to find an email? Oh my God. And you go back to, we have all these tools and all these wonderful things, and it's, how do I find this email? I'm with you on that; that's something so simple. With my old email box, it is very difficult for me to find an email without having the exact match that I'm looking for.
Speaker 1:But you know, I've got to tell you, I think that's one of the use cases for how people can get into utilizing these tools, because I get that all the time. Like, I'm coming up to a meeting and I don't remember the conversation, or maybe I don't have an idea of what the meeting was about; maybe I got pulled in because I've got to make a decision. So I ask AI, I ask Copilot, for the Microsoft product: hey, I'm coming up to this meeting, I'm going to be talking to these people, and here's the topic we're going to talk about. Give me everything I need to know and all the communication about this. And it does a wonderful job. So I come in and I don't look like I was unorganized; Copilot organized it for me. And it saves you time, right? Even something as simple as that. Start with that.
Speaker 3:Oh, for sure, yeah. And back to this idea of time: there's so much fear and uncertainty and doubt around AI, but one of the things that we're going to see is an increase in leisure time. We just take for granted that there's a 40-hour work week, but that was not always the case, right? The reason we're able to have a 40-hour work week is because of the advancements in technology and automation that made it possible to be more productive with less actual time. We're going to continue to see that, and AI is going to accelerate it. We're already starting to see some rumblings in Europe about moving to a 32-hour work week.
Speaker 3:I'm all for it. That's important. That leisure time, that's what makes life worthwhile: when you can spend that time with your friends and your family, be creative, pursue hobbies and things that you're passionate about, and not just spend your entire life working. So I am all for that, and I'm ready for it, and AI is going to be one of the tools that brings it.
Speaker 2:It will be helpful to go along with that, just to add to it a little bit more. We've spoken about it before: we need to change how we value time. We need to value productivity and output, not time, because some are fearful that, well, if with AI I can do something twice as fast, will I have to do twice as much? We have to come up with a fair way to measure productivity and output, to get back to what you were talking about, where maybe you don't have to work the 40, 50, 60, 70 hours a week, where 32 hours of solid time is enough.
Speaker 2:And then also, I've read studies. I'm not a scientist or a doctor or any of those, but I do a lot of reading, and I've also experienced it myself: sometimes you put something down, forget about it for a little while, and when you come back, you're more creative, you're more energized and you're more productive.
Speaker 2:I worked at a place where they used to force you to go out for lunch. The reason why is because the owner of the place, and he would buy people lunch and stuff, wanted you to go outside; he realized that individuals were more productive if they got up from the desk and didn't continue working through lunch. It wasn't about forcing people to go out to eat. His concept was, we want you to take a break during the day, go out, walk around the building, do something so that you're not sitting at your desk all day, and you can be a little more productive. And he was right. A little fresh air did some wonders, because you come back after lunch and you don't have that afternoon need for a nap, I guess you could say.
Speaker 2:Bryan, we appreciate you taking the time to speak with us today. As I always say, time truly is the currency of life: once you spend it, you can't get it back. So anybody who spends time with us, we greatly appreciate it. We enjoyed hearing your insights, and I'd love to talk to you again in the future and get a little deeper into some of these areas. But if someone would like to talk with you more about AI and learn a little bit more about what you do to help manufacturing organizations gain efficiency from AI, what's the best way to contact you?
Speaker 3:Yeah, the easiest way is to find me on LinkedIn: Bryan, B-R-Y-A-N, DeBois, D-E-B-O-I-S. If you just search for that on LinkedIn, you'll find me; reach out to me there. Or the RoviSys website is a good way to get in contact with me as well. It's RoviSys, R-O-V (as in Victor) I-S-Y-S, dot com, slash A-I. That will take you right to my landing page on the website. But yeah, happy to talk to anyone about this. Please reach out, and I appreciate the time.
Speaker 2:Great, thank you. Look forward to talking with you soon. All right, sounds good. Thanks, guys. Thank you, Chris, for your time for another episode in the Dynamics Corner chair, and thank you to our guests for participating.
Speaker 1:Thank you, Brad, for your time. It was a wonderful episode of Dynamics Corner chair. I would also like to thank our guests for joining us, and thank all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16. You can find those links down below in the show notes. Again, thank you everyone. Thank you and take care.