Dynamics Corner
About Dynamics Corner Podcast "Unraveling the World of Microsoft Dynamics 365 and Beyond" Welcome to the Dynamics Corner Podcast, where we explore the fascinating world of Microsoft Dynamics 365 Business Central and related technologies. Co-hosted by industry veterans Kris Ruyeras and Brad Prendergast, this engaging podcast keeps you updated on the latest trends, innovations, and best practices in the Microsoft Dynamics 365 ecosystem. We dive deep into various topics in each episode, including Microsoft Dynamics 365 Business Central, Power Platform, Azure, and more. Our conversations aim to provide valuable insights, practical tips, and expert advice to help businesses of all sizes unlock their full potential through the power of technology. The podcast features in-depth discussions, interviews with thought leaders, real-world case studies, and helpful tips and tricks, providing a unique blend of perspectives and experiences. Join us on this exciting journey as we uncover the secrets to digital transformation, operational efficiency, and seamless system integration with Microsoft Dynamics 365 and beyond. Whether you're a business owner, IT professional, consultant, or just curious about the Microsoft Dynamics 365 world, the Dynamics Corner Podcast is the perfect platform to stay informed and inspired.
Episode 353: In the Dynamics Corner Chair: The Role of AI: Ethics, Insights, and a Path Forward
The Role of AI in Business Processes: Ethics, Insights, and a Path Forward
💻+ 🙋 How is AI shaking up business processes, especially in ERP systems like Business Central? In the latest episode of the Dynamics Corner podcast, Kris and Brad are joined by experts Søren Friis Alexandersen and Christian Lenz as they delve into critical issues of the day. We talk about using AI to meet business goals, why it's crucial to be clear about how AI makes decisions, and the ethics of using AI. We also look at balancing human touch with AI automation, the risk of de-skilling due to reliance on AI, and the limits of what AI can do. The big takeaway? We need to be smart about how we bring AI into business.
Here are some other topics we covered:
🪟 Transparency is key when working with AI: Always know how AI makes its decisions.
🤖 AI doesn't stand on its own: Humans + AI = better results.
🍀Ethics matter: We need to continually reflect on the moral aspects of AI use.
👾Know the limits: We need to be clear on what AI can and can't do and define its boundaries to keep it effective.
🫶 Societal benefits: Yes, we have concerns, but how can AI benefit social progress?
#MSDyn365BC #BusinessCentral #BC #DynamicsCorner
Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/
Welcome everyone to another episode of Dynamics Corner. Is AI a necessity for the survival of humanity? That's my question. I'm your co-host, Chris, and this is Brad.
Speaker 2:This episode was recorded on December 18th, 2024. Chris, Chris, Chris. Is AI required for the survival of humanity? Is humanity creating the requirement for AI for survival? That's a good question. When it comes to AI, I have so many different questions, and there are so many points that I want to discuss about it. With us today, we had the opportunity to speak with Søren Friis Alexandersen and Christian Lenz about some of those topics. Good morning, good afternoon. How are you doing?
Speaker 3:There we go. Good day, good afternoon over the pond.
Speaker 2:How are you doing? Good morning. Well, good, good, good. I'll tell you, Søren, I love the video. What did you do? You have the nice blurred background, the soft lighting.
Speaker 3:Yeah, you can see great things with a great camera.
Speaker 2:It looks nice, it looks really nice, christian. How are you doing?
Speaker 4:Fine, thank you very much.
Speaker 2:Your background's good too, I like it, it's real.
Speaker 1:Back to the future.
Speaker 2:It is good, it is good. But thank you both for joining us this afternoon, this morning, this evening, whatever it may be. I've been looking forward to this conversation. I was talking with Chris prior to this: this is probably the most prepared I've ever been for a discussion. How well prepared I am, we'll see, because I have a lot of things that I would like to bring up based on some individual conversations we had via either voice or text. And before we jump into that famous topic, can you tell everybody a little bit about yourself, Søren?
Speaker 3:Yes, so my name is Søren Alexandersen. I'm a product manager in the Business Central engineering team, working on finance features, basically rethinking finance with Copilot and AI.
Speaker 2:Excellent, excellent Christian.
Speaker 4:Yeah, I'm Christian. I'm a development facilitator at CDM. We're a Microsoft Business Central partner, and I'm responsible for the education of my colleagues in all the new topics, all the new stuff. I've been a developer in the past and a project manager, and now I take care of taking all the information in so that it leads to good solutions for our customers.
Speaker 2:Excellent, excellent, and thank you both for joining us again. You're both veterans, and I appreciate you both taking the time to speak with us, as well as your support for the podcast over the years. And just to get into this: I know, Søren, you work with AI and with the agent portion of Business Central, I'm simplifying some of the terms, for the product group, and in our conversations you've turned me on to many things. One thing you've turned me on to was a podcast called The Only Constant, and I was pleased, I think it was maybe a week or so ago at this point, maybe a little bit longer, to see that there was an episode where you were a guest on that podcast, talking about AI and, you know, Business Central, ERP in particular.
Speaker 2:I mean, I think you referenced Business Central, but I think the conversation you had was more around ERP software, and that got me thinking a lot about AI. And I know, Christian, you have a lot of comments on AI as well. But the way you ended that episode, with, you know, nobody wants to do the dishes, is wonderful. It got my mind thinking about AI in detail: what AI is doing, how AI is shaping, you know, business, how AI is shaping how we interact socially, how AI is shaping the world. So I was hoping we could talk a little bit about AI with everyone today. So with that, what are your thoughts on AI? And maybe, Christian, what do you think of when you hear of AI or artificial intelligence?
Speaker 4:I would say it's mostly a tool for me. Getting a little bit deeper into what it is: I'm not an AI expert, but I'm talking to people who try to elaborate how to use AI for the good of people. For example, I had a conversation with one of those experts from Germany just a few weeks before Directions, and he told me how to make use of custom GPTs, and I got the concept and tried it a little bit. And when I got to Directions EMEA in Vienna at the beginning of November, the agents topic was everywhere, so it was Copilot and agents, and that prepared me a lot for how this concept is evolving, and how fast it is evolving. So I'm not able to catch up on everything, but I have good connections to people who are experts in this and focus on this, and the conversations with those people, not only on the technical side but also on how to make use of it and what to keep in mind when using AI, are very crucial for me to make my own assumptions and decide on the direction where we should go as users, as partners for our customers, and to consult our customers.
Speaker 4:With the evolving possibilities and capabilities of AI, generating whole new interactions with people, it gets much harder to keep this barrier in mind: this is a machine doing something that I receive, not a human being or a living being interacting with me. It's really hard to keep a bird's-eye view of what is really happening here, because the interaction we have with AI is so like human interaction that it's hard not to react as a human to it and then still have an outside view of it: how can I use it, and where is it good or bad, that moral conversation we're trying to have. But having conversations about it and thinking about it helps a lot, I think.
Speaker 2:Yeah, it does. Søren, you have quite a bit of insight into the agents and working with AI. What are your comments on AI?
Speaker 3:I think I'll start from the same perspective as Christian: for me, AI is also a tool, in the sense that when looking at this from a business perspective, you have your business desires, your business goal, your business strategy, and whatever lever you can pull to get closer to that business goal, AI might be a tool you can utilize for that. It's not a hammer to hit all of the nails. I mean, it's not the tool to fix them all. In some cases it's not at all the right tool. In many cases it can be a fantastic tool. So that depends a lot on the scenario. It depends a lot on the goal.
Speaker 3:I will say that I'm fortunate in the way that I don't need to know the intricate details of every new GPT model that comes out and stuff like that. That's too far for me to go, and I could do nothing else. And to your point, Christian, you said you're not an AI expert. But by modern standards, with the AI that we typically talk about these days, well, LLMs, it's only been out there for such a short while. Who can actually be an AI expert yet? Right? I mean, it's been out there for a couple of years.
Speaker 3:In this modern incarnation, no one is an expert at this point. I mean, you have people who know more than me and us, maybe, in this audience here, but we all try to just learn every day. I think that's how I would describe it. There are some interesting things. From my perspective as a product manager, what I'm placed in this world to do is basically to rank customer opportunities and problems. That's my primary job. Whether or not AI can help solve some of those opportunities or problems, great. So that's what I'm about to do: reassess all those things I know about our customers, our joint customers and partners, and how AI can help those.
Speaker 1:Yeah, when you started speaking about the dishwasher, it made me chuckle and ask, how can you relate that to why AI was invented? And I had to look it up. I looked up, you know, why was the dishwasher invented? I thought it was pretty interesting to share with the listeners. It was Josephine Cochrane who invented the dishwasher, and her reasoning was to protect her china dishes, avoid hand washing, and free up time. She wanted to create a machine that could wash dishes faster and more carefully than she could. And how relatable is that to AI? We want to free up our time to do other things. So, in a sense, with AI, you kind of want a tool, in this case an AI tool, to do things for you, maybe better than you can, and maybe more carefully in feeding you information. I don't know, but I thought that was pretty interesting.
Speaker 3:The relatable component there makes total sense to me. It makes sense in that AI is very good at paying attention to detail that a human might overlook if we're tired, or it's the end of the day, or early morning. There are so many relatable things in what you just said that apply to AI, or even just technology and automation. It's not just AI, because IT is about automating stuff. AI just brings another level of automation.
Speaker 2:You could say it is a beneficial tool. But, Chris, to go back to your point about the invention of the dishwasher, and maybe even the invention of AI: I don't know the history of AI, and I'm not certain if you do. I'm sure you could use AI to find the history of AI. But is AI one of those tools? I have so many thoughts around AI, and it's tough to find a way to unpack all of the comments that I have on it. But a lot of tools get created or invented without the intention of them being invented.
Speaker 2:You know it's sometimes you create a tool or you create a process or something comes of it and you're trying to solve one problem. Then you realize that you can solve many other problems by either implementing it slightly different, you know, working on it with another invention or a tool that was created. So where does it end? And with AI, I think we're just I don't know if we'll ever or we can even understand where it will go or where it will end. We see how individuals are using it now, such as creating pictures.
Speaker 2:Right, I'm looking at some of the common uses of it outside of the analytical side. People creating pictures. You know, a lot of your search engines now will primarily give you the AI results, which are a summary of the sources they cite. AI gets used that way, from the language-model point of view, but then AI also gets used from a technical point of view. I also started reading, a few weeks ago, a book, Moral AI and How We Get There, from Pelican Books, and I think it's by Borg, Sinnott-Armstrong and Conitzer, I'm so bad with names, which also opened up my eyes to AI and how AI impacts everybody in the world.
Speaker 1:I think it creates different iterations, right, with AI. You know, clearly, you see AI practically anywhere. You had mentioned, you know, creating images, it started with that, and then followed creating videos now, and so much more. And then, Søren, I was listening to your episode: where does AI come into play in ERP, and where does it go from there? I'm sure a lot of people are going to create different iterations of AI and Copilot and Business Central, and that is what I'm excited about.
Speaker 1:We're kind of scratching the surface in the ERP: what else can it do for you in the business sense? Of course, there are different AIs with M365 and all the other Microsoft ecosystem product lines. What's next for businesses, especially in the SMB space? I think it's going to create a level playing field for SMBs to compete better, where they can focus more on strategy and be more tactical in the way they do business. So that's what I'm excited about. And I think a lot of us here on this call are the, I guess, curators, and that's where we become more like business consultants, in the sense of how you would run your business utilizing all these Microsoft tools and AI.
Speaker 4:I think yeah.
Speaker 1:I think, Go ahead.
Speaker 3:Christian.
Speaker 4:Okay, I think that we see some processes done by AI or agents which we never thought would be possible without a human doing them. What was presented is really mind-blowing: what level of steps and pre-decisions AI can make, offering a better result in the process before a human needs to step in. And I think that will go further and further. What I'm wondering is, where is the point where the human says, okay, here is where I have the feeling that I have to step into this process because the AI is not good enough, and that point, that frontier, keeps being pushed on and on. To have this feeling, to keep in mind, this is the thing AI cannot do, I have to be conscious and cautious. On one side, with AI we can run more processes and make more decisions easily; on the other side, the temptation is high to just accept whatever the AI is prompting or offering us.
Speaker 4:I like the concept of the human in the loop. So at least the human at some point in this process has to say, yes, I accept what the AI is suggesting. But having time to actually process, rather than just clicking yes, okay, okay, okay, is critical. I think we should implement processes where we say, okay, let's look at how we use AI here, take a step back and say, wow, look at the number of steps AI can take for us, but also ask where it just goes too far.
Speaker 3:I think that's an interesting line of thinking, Christian. Before we go deeper, let me just say that for the stuff we talk about in this episode, if nothing else is mentioned, these are my personal opinions and may not reflect the opinions of Microsoft. But I would like to take sort of a product's-eye view on what you just said. When we look at agents these days: what can a given agent do, what should be its scope, what should be its name? We've now released some information about the sales order agent and described how it works, actually being fairly transparent about what it intends to do and how it works, which I think is great. We actually start by drawing up the process as it is today, before the agent: how would this process look, where are the human interactions, between which parties? Now bring in the agent.
Speaker 3:Now, how does that human-in-the-loop flow look? Are there places where the human actually doesn't need to be in the loop? That's the idea: don't bring in the human unless it's really necessary or adds value. So that's the way we think about it. If AI can automate a piece of that A-to-Z process, well, we've been trying to automate stuff for many years; if AI can do that better now, let's do it. But of course, whenever there's a risk, or a situation where the human can add value to a decision, by all means, let's bring the human into the loop. So that's the way we think about agents and the tasks they should perform in whatever business process.
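[Editor's note: The routing rule Søren describes, bringing a person in only where review is necessary or adds value, can be sketched in a few lines of Python. Everything below, the `Draft` fields, the thresholds, the function name, is an illustrative assumption for this episode page, not Microsoft's actual agent logic.]

```python
# A minimal sketch of the human-in-the-loop routing idea discussed above.
# Field names and thresholds are illustrative assumptions, not Business
# Central's actual implementation.
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-drafted sales order awaiting a routing decision."""
    order_id: str
    amount: float
    confidence: float  # the agent's self-reported confidence, 0.0 to 1.0

def route_draft(draft: Draft, auto_limit: float = 1000.0,
                min_confidence: float = 0.9) -> str:
    """Bring in the human only when it's necessary or adds value."""
    if draft.confidence < min_confidence:
        return "human_review"  # the AI is unsure: escalate to a person
    if draft.amount > auto_limit:
        return "human_review"  # high stakes: a human adds value here
    return "auto_approve"      # low risk, high confidence: stay automated
```

Under these made-up thresholds, `route_draft(Draft("A-1", 200.0, 0.95))` returns `"auto_approve"`, while a low-confidence or high-value draft is routed to a person, which is the "don't bring in the human unless it's really necessary" rule in miniature.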
Speaker 3:And to your point, chris, I think that the cool thing about AI in ERP, as in Business Central these days, is that it becomes super concrete.
Speaker 3:Like, we take AI from something that is very fluffy, marketing and buzzwords that we all see online, and we make it into something very concrete. So the philosophy is that in BC, unless of course you're an ISV that needs to build something on top of it, or a partner or customer who wants to add more features, AI should be ready to use out of the box. You don't have to create a new AI project for your business, for your enterprise, to start leveraging AI. No, you just use AI features that are already there, immersed in the UI among all the other feature functions in Business Central. Because many small and medium businesses don't even have the budget for their own AI project, to hire data scientists and what have you, and create their own models. No, they should have AI ready to use. So that's another piece of our philosophy.
Speaker 2:I look at that more as AI as a function, because if you have AI as a function, you can get the efficiencies. I think, to some of the comments from the conversations that we've had and that I've heard, you look for efficiencies so that you can do something else, something that people feel is more productive, and let automation or AI or robots, quote unquote, do the tasks that are mundane, or that some would consider boring or repetitive. And we do use AI on a daily basis in a lot of the tools that we have. To your point, Søren, it's just embedded within the application. If you buy a newer vehicle now, it has lane avoidance, collision avoidance, all of these AI tools that just come with your vehicle. You either turn them on or off, depending upon how you'd like to drive, and they work, and the function is there for you. But let me take a step back from AI in that respect.
Speaker 2:A couple of things come to mind with AI. We talked about the vehicle. I'll admit I have a Tesla. I love the FSD, and I've used it a lot, and it just seems to improve and improve, to the point where I think sometimes it can see things, I use the word see, or detect things faster than I can as a human. Now, AI may not be perfect, and AI makes mistakes. Humans make mistakes. Humans get into car crashes and have accidents for some reason, and we have accepted that. But if AI has an accident, we find fault or blame in that process, instead of understanding that, in essence, nothing is perfect. Humans make mistakes too, and we accept it. Why don't we accept it when AI may be a little off?
Speaker 3:That's such a great question, and the fact is, I think, that right now we don't accept it. We don't give machines the same benefit of the doubt: if they don't work, they're crap and we throw them out. But with humans we're much more forgiving; we give them a second chance.
Speaker 3:And, oh, maybe I didn't teach you well enough how to do it. But that's a good point, and I love your example with the Tesla. I also drive a Tesla, but I'm not in the US, so I can't use the full self-driving capability. So I use the, what do you call it, the semi-autonomous mode, so it can keep me within the lane. It reacts in an instant if something pulls out in front of me, much faster than I can. So I love that mix of me being in control but being assisted by these great features that make me drive in a much safer way. I'm not sure I'm a proponent of full self-driving; I don't know, I'm still torn about that. But that could lead us into a good discussion as well.
Speaker 1:I think you have that trust because I'm the same way as Brad. You know, I love it as I continue to use it. But in the very beginning I could not trust that thing. I had my hands on the steering wheel, you know, white knuckles on the steering wheel. But eventually I came to accept it, and I was like, oh, it does a pretty good job getting me around. Am I still cautious? Absolutely. I still want to make sure I can quickly take control if I don't believe it's doing the right thing.
Speaker 3:So I think, actually, my reason for not being a full believer in full self-driving, complete autonomy with cars, is not so much that I don't trust the technology; I actually do, to a large extent. It's more because of many of the reasons in that book I pitched to all of you, Moral AI: who's accountable if something goes wrong? There's this example in the book where they test an Uber car, I think it was a Volvo, with some self-driving capabilities in some state, and it accidentally runs over a woman who's crossing the street in an unexpected place, and it was dark, and things of that nature, and the driver wasn't paying attention. And there were all these questions about who, at the end of the day, has the responsibility for that. Was it the software? Was it the driver who wasn't paying attention? Was it the government that allowed that car to be on that road in the first place?
Speaker 3:All of these things need to be figured out first, before you let a technology loose like that, right? And I wonder if we can do that; we don't have a good track record of doing that. I'm fairly sure the technology will get us there, if we can live with what happens when it doesn't work well. So what happens if self-driving cars kill 20 people per year? Can we live with that? What if 20 people is a lot better than 3,000 people killed by human drivers?
Speaker 2:I think I heard, with all these conversations about self-driving and, you know, the Moral AI book and some other sources, that there are about 1.3 million fatalities due to automobiles a year worldwide; don't quote me on the statistics, and I forget if it's a specific type, but it's a lot. So, to get to your point, and not to focus only on the driving portion, because there are a lot of topics we want to talk about: is it safer, in a sense, because you may tragically lose 20 individuals in accidents per year rather than far more, because of AI? You know, I joke, and I've had conversations with Chris about the Tesla: driving around here in particular, I trust the FSD a lot more than I trust other people. And to your point about someone tragically losing their life, crossing in the evening at an unusual place and having a collision with a vehicle, that could happen with a person driving as well. And I've driven around, and the Tesla detected something before I saw it, so the reaction time is a little bit quicker. That brings up a couple of points I want to get to: too much trust and de-skilling, and then, if we're looking at analytics, you know, harm and bias as well. And then, to Christian's point, and even your point, where the humans are involved: are the humans even capable of monitoring the AI, with the de-skilling, because you don't have to do those tasks anymore? If you look back, I'm going to go on a little tear in a moment.
Speaker 2:In education, when I was growing up, we learned a lot of math and we did not, you know, use calculators. I don't even know when the calculator was invented, but we weren't allowed to use them. They taught us how to use a slide rule. They even taught us, believe it or not, when I was really young, an abacus. And back then I could do math really, really well. Now, with the ease of using calculators, the ease of using your phone, or even using AI to do math equations,
Speaker 2:can you even do math as quickly as you used to? So how can you monitor a tool that's supposed to be calculating math, for example?
Speaker 3:I think you have very good points about that. Just coming back to the car for a second, because technology will speak for itself and what it's capable of: I think where we have to take some decisions that we haven't had to before is when we dial up the autonomy to 100% and the car drives completely on its own. Because then you need to be able to question how it makes decisions, and get insight into what it bases those decisions on. Who determines how large an object has to be before the car will stop rather than run it over?
Speaker 3:Back in the old days in Denmark, I think insurance companies wouldn't cover you if the object you ran over was smaller than a small dog, something like that. So who set those rules? And the same thing for the technology: should I just run that pheasant over, or should I stop for the pheasant? Those kinds of decisions. If it's a human driving, in control, we can always point to the human and say, you need to follow the rules, and here they are. But if it's a machine, and eventually the machine fails, or we end up in some dilemma, who's responsible, who's accountable? Those become very hard questions. I don't have the answers, but I think when we dial up the autonomy to that level, we need to talk about what level of transparency I can demand as a user, or as a bystander, or whatever. There are just so many questions that this opens up, I think.
Speaker 4:And if you are allowed to turn off AI assistance, will you, at some point in time, when a failure occurs, be held responsible for having turned that assistance off?
Speaker 2:That's a very good point.
Speaker 4:Someone could say so. You have to keep in mind that with assistance, you're better. Like in the podcast episode you mentioned: a human together with a machine is better than the machine alone. Or, put another way, a human with a machine is better than another human, or just a human. And I think at some point in time, companies looking at accountability and responsibility will increase the pressure: you have to turn on AI assistance.
Speaker 4:You could imagine getting into a car that recognizes you as a driver, your facial expression or something like that, so it can tell whether you're able to drive or not. And then the question is, will it allow you to drive, or will it decide: no, don't touch the wheel, I will drive. Or something pops up: you're not able to drive, I've decided that for you, and I won't start the engine. Will you override it or not? Those are the scenarios that pop up in my mind. And how will you decide as a human when something urgent is happening, when you have to drive someone to the hospital? You will override, but will the system ask: is it really an emergency? You say, I just want to do this. How do you react in that moment?
Speaker 3:I think that's super interesting. And coming back to the transparency thing, one of my favorite examples is going to the bank to borrow some money. For many years, even before AI, there's been some algorithm that the bank person probably doesn't even know the inner workings of; they just see a red or green light after I ask. Okay, how much money do you want to borrow? Oh, I want to borrow 100K. No, you can't do that, sorry. The machine says no, right? And even before AI, if something is complex enough, it doesn't really matter whether it's AI or not.
Speaker 3:But in these sort of life impacting situations, do I have a right for transparency? Do I have a right to know why they say no to lend me money, for example? The same if I get rejected for a job interview based on some decision made by an algorithm or AI. These are very serious situations where that will impact my life and of course, they don't go.
Speaker 3:You can't claim transparency everywhere, but I think there are some of these situations where, as humans, we do have a right for transparency and to know how do these things know? And there is a problem if the person who's conveying the information to us. The bank bank person doesn't even have that insight, doesn't even know how it works. They just push the button and then the light turns red or green. So that's yeah, but again, so many questions, and that's why I'm actually happy that today I don't know if you saw it we released a documentation article for BC about the sales audit agent that, in very detailed way, describes what this agent does, what it tries to do, what kind of data it has access to, what kind of permissions it has, all these things. I think that's a very, very transparent way of describing a piece of AI and I'm actually very, very proud of that. We're doing that.
Speaker 3:Yeah, I just wanted to make that segue.
Speaker 4:Yeah, it's filling the need of humans to know how the system works, or how the system makes decisions to proceed to the next step. Because I think there's a need to have a view on whether what has happened before, and has an influence on me as a human, is judged in a way that is doing good for me or not. Like your example, what is evaluated when you ask for a bank credit or something like that. And having this transparency brings us back to: yes, I have an influence where it is needed, because I can override the AI, because I can see where it makes a wrong decision or a wrong step or something like that. Like I would do when I talk to my bank account manager and say, hey, does it have the old address? I moved already. Oh no, it's not in the system. Let's change that and then make another evaluation or something like that.
Speaker 4:And I think this autonomy for us as users, to keep this in play, that we can override it or we can add new information in some kind of way, we can only do that when we know where this information is taken from, how old it is and how it is processed. So I like that approach very much. I don't think every user is looking at it, but an ERP system owner, like I am in our company as well, needs to have answers to those questions from our users when we use these features.
Speaker 3:Yeah, just to come back to the banking example again. The bank person probably doesn't know if their AI or algorithm takes into account how many pictures they can find of me on Facebook where I hold a beer. Would that be an influencing factor on whether they want to lend me money? All these things. We just don't have that insight, and I think that's a problem in many cases. You could argue I don't know how the Tesla autopilot does its thing, you know, whatever influences it to take decisions, but that's why I like the semi-autonomous piece of work right now.
Speaker 2:No, it is, I think. But listening to what you're saying, I do like the transparency, or at least the understanding. I like the agent approach because you have specific functions. I do like the transparency so that you understand what it does, so you know what it's making a decision on. If you're going to trust it, in a sense, or you want to use the information, you have to know where it came from. AI, or computers in general, can process data much faster than humans. So, to go back to your bank credit check example, it can process much more information than a person can. I mean, a person could come up with the same results, but it may not be as quick as a computer, as long as that information is available to it. But I do think for certain functions the transparency needs to be there, because in the case of bank credit, how can you improve your credit if you don't know what's being evaluated, to maybe work on or correct that? Or, to Christian's point, there may be some misinformation in there that, for whatever reason, is in there and is impacting the result, so you need to correct it.
Speaker 2:Some other things, to the point that Christian also made. You know, a human with a machine is better than a human alone, potentially, in some cases, because the machine can be the tool to help you do something, whatever it may be. You referenced the hammer before, and I use that example a lot. You have hammers, you have screwdrivers, you have air guns. Which tool do you use to do the job? Well, it depends on what you're trying to put together. Are you doing some rough work on a house where you need to put up the frame? Maybe a hammer or an air gun will work. And if you're doing some finish work, maybe you need a screwdriver, you know, with a small screw, to do something. So there does have to be a decision made. And at what point can AI make that decision versus a human? And, to your point, where do you have that human interaction? But I want to go with the human interaction of de-skilling, because we have all these tools that we rely on.
Speaker 2:To go back to the calculator, you know, I think we all read the same book and I think we all listened to some of the same episodes. But you look at pilots and planes with autopilots, right? Same thing with someone driving a vehicle. Do you lose the skill when AI does so much of the flying of a plane? I didn't even really think about that.
Speaker 2:You know, the most difficult, the most dangerous parts are what? The taking off and landing of a plane, and that's where AI gets used the most. And then a human is in there to take over in the event that AI fails. But if the human isn't doing it often, right, even with the reaction time, okay, well, how quickly can a human react? A defense system, same thing. If you look at the Patriot missile examples, the Patriot missile detects a threat in a moment and then will go up and try to, you know, disarm the threat. So at what point do we as humans lose a skill because we've become dependent upon these tools, and we may not know what to do in a situation because we lost that skill?
Speaker 1:That's a good point. Sorry, go ahead.
Speaker 3:No, it's a really good point. I like that example, I think it was from the Moral AI book as well, where there's this example of some military people, you know, they sit in their bunker somewhere and handle these drones day in and day out, and because the drones are so autonomous, everything happens without them. You know, they don't need to be involved. But then suddenly a situation occurs, they need to react in a split second and take a decision. And I think one of the outcomes was, you know, their manager says, well, who can blame them if they take a wrong decision at that point? Because it's three hours of boredom and then it's three seconds of action. So they're just not feeling it.
Speaker 3:Where, to your point, right, they're being de-skilled for two hours and 57 minutes, and then there's three minutes of action where everything happens. Who can expect that they keep up the level of, you know, skills and what have you, if they're just not involved? So it's a super interesting point. Yeah, so many questions that it raises.
Speaker 2:It goes on and on. And it is in that Moral AI book, and it was the Patriot missile example, because the Patriot missile had two failures, one with a British jet and one with an American jet shortly thereafter. And that's what they were talking about: how do you put human intervention in there, you know, to reconfirm a launch? Because in the event that it's a threat, I'll use the word threat, how much time do you have to immobilize that threat? Right, you may only have a second or two. I mean, things move quickly. In the case of the Patriot missile, again, it was intended to disarm, you know, missiles that are coming at you, that are being launched, you know, over the pond, as they say, so they can take them down. And that's the point with that.
Speaker 1:And if I could step back for a second. You know, when we're having a conversation about the usefulness of AI, it's based upon the sources it has access to, and, you know, understanding where it's getting its sources from and what access it has. If you're limiting the sources it can consume to be a better tool, are we potentially limiting its capabilities as well? Because we want to control it so much, in a sense, so that it's more focused, but are we also limiting its potential, right? Yes, so yeah, go ahead, sorry.
Speaker 3:Yeah, no, I think that's very well put, and I think that's a consequence, and I think that's fine. I mean, just take the sales order agent again as an example. We have guardrailed it very hard. We put up many constraints for it, so it can only do a certain number of tasks. It can only do tasks A, B and C; D, E, F it cannot do. We had to set some guardrails for what it can do.
Speaker 3:It's not just, and I think this is a misconception, sometimes people think about agents and say, here's an agent, here are the keys to my kingdom. Now, agent, you can just do anything in this business, in this system, and the user will tell you what to do, or we've given you a task. That's not our approach to agents in BC. We basically said, here's an end-to-end process, or a process that has sort of a natural beginning and a natural ending. Within that process you can trigger the agent in various places, but the agent has a set instruction.
Speaker 3:You receive inquiries for products and eventually you'll create a sales order. Everything in between, there could be all kinds of, you know, human in the loop and discussions back and forth, but that's the limit of what that agent can do, and that's totally fine. It's not fully autonomous. You can't just go and say, oh, by the way, buy more inventory for our stock. That's out of scope for it, and I think that's totally fine. It's about finding those good use cases where there is a process to be automated, where the agent can play a part, and not about creating, let's call it a super agent that can do anything. So I think it's a very natural development.
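The guardrail idea described here can be sketched in a few lines of code: the agent holds an explicit allow-list of actions inside one scoped process, and anything outside that list fails closed. This is only a minimal illustration of the pattern; the action names and classes are invented for the example and are not the actual Business Central implementation.

```python
# Illustrative sketch of a guardrailed agent: it may only perform an
# explicit allow-list of actions within one scoped process. Anything
# outside that list (e.g. buying inventory) is refused, no matter what
# the user or model asks for. All names here are hypothetical.

ALLOWED_ACTIONS = {"read_inquiry", "draft_quote", "request_clarification", "create_sales_order"}

class ScopedAgent:
    def __init__(self, allowed=ALLOWED_ACTIONS):
        self.allowed = set(allowed)
        self.log = []  # audit trail: every attempt is recorded

    def perform(self, action, payload=None):
        if action not in self.allowed:
            # Out-of-scope requests fail closed instead of escalating.
            self.log.append(("refused", action))
            return {"ok": False, "reason": f"'{action}' is outside this agent's scope"}
        self.log.append(("done", action))
        return {"ok": True, "action": action, "payload": payload}

agent = ScopedAgent()
print(agent.perform("draft_quote", {"item": "bicycle"}))  # permitted: in the allow-list
print(agent.perform("buy_inventory", {"qty": 100}))       # refused: out of scope
```

The point of the audit log mirrors the transparency discussion above: a human can inspect exactly what the agent did and what it refused to do.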
Speaker 4:So you don't aim for a T-shaped profile agent, like in many job descriptions now, where you want a T-shaped profile employee with broad and deep knowledge. We as humans can develop this, but the agent approach is different. I would rather say it's not about limiting the agent or the AI's input or capabilities. It is more like going deep, having deep knowledge in the specific functionality the AI agent is assisting with. It can hold more information, and it can go deeper than a human can.
Speaker 4:For example, I was very impressed by one AI function I had in my future leadership education.
Speaker 4:We had an alumni meeting in September, and the company set up an AI agent that behaves like a conventional business manager. Because we learn how to set up businesses differently, and when you have something new you want to introduce to an organization, often you are hit by the cultural barriers. Just to train for that without humans, they built an AI model where you can put your ideas in and have a conversation with someone who has traditional Tayloristic business thinking. So you can train how you present your ideas to such a person and what the reactions will be, just to train your ability to be better when you bring these new ideas to a real person in a traditional organization. And it had such deep knowledge about all these methodologies and ways of thinking. I don't know who I could find who is so deep in this knowledge and has exactly this profile, this deep profile, that I needed to train myself on.
Speaker 1:That is a really interesting use case.
Speaker 1:I think then it comes to continuing the conversation about maybe there's a misconception or misunderstanding in the business space. Because right now, you know, I've had several conversations where AI is going to solve their problems, AI is going to solve their business challenges. But from a lot of people's perspective, it's just this one entity that's going to solve all my business problems, whereas for us engineers, we understand that you can have a specific AI tool that solves a specific problem or a specific process in your business. But right now a lot of people believe, I'm just going to install it, it's going to solve everything for me, not realizing that there are different categories for that, you know, different areas. And I think having these kinds of conversations helps, in hopes that people know it's not just a one-size-fits-all kind of solution out there. Yeah, and indeed, when you see how industrial work developed in its first phases, it's like going back to having one person just fitting in a bolt or a screw or something like that.
Speaker 4:That is the agent at the moment, just one single task it can do. But it can do many, many things within this task at the moment. And what I think will take some time is developing this T-shape from the base of the T, to have broad knowledge and broad capabilities out of one agent, or the development of a network of agents. In some sessions in Vienna the team of agents was presented. You have a coordinator that coordinates the agents and then brings the proposals from the agents back to the user or something like that. To the user it will look like one agent can do all of these capabilities. But in the deep functionality there is a team of agents, a variety of agents doing very specific things.
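The coordinator pattern described here can be sketched as a simple dispatcher: the user talks to one coordinator, which routes each request to a narrow specialist agent and relays the proposal back. The specialist names and functions below are invented stand-ins for real model-backed agents, purely to illustrate the structure.

```python
# Illustrative sketch of the "team of agents" pattern: a coordinator
# routes each request to a narrow specialist and returns the proposal,
# so the user appears to talk to a single capable agent. The
# specialists here are hypothetical placeholder functions.

def sales_agent(request):
    # Stand-in for a deep, sales-only agent.
    return f"sales proposal for {request}"

def purchasing_agent(request):
    # Stand-in for a deep, purchasing-only agent.
    return f"purchase proposal for {request}"

SPECIALISTS = {"sales": sales_agent, "purchasing": purchasing_agent}

def coordinator(topic, request):
    specialist = SPECIALISTS.get(topic)
    if specialist is None:
        return "no specialist available for this topic"
    # The coordinator relays the specialist's proposal back to the user.
    return specialist(request)

print(coordinator("sales", "20 bicycles"))
```

From the outside the coordinator looks like one broad agent; inside, each specialist stays as narrowly scoped as the single-task agent discussed earlier.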
Speaker 2:I like that case. It goes, Chris, to your point that sometimes it's just a misunderstanding of what AI is, because there are so many different levels of AI, and we talked about that before. You know, what is machine learning, what are large language models? That all falls under AI. A lot of things can fall under AI. But to the point of the agents going into ERP software, even, Christian, to your point, maybe even in an assembly line or manufacturing, I like the idea of having a team of agents together in the business so they all do specific functions.
Speaker 2:To Søren's point of where you have some repetitive tasks or some precision tasks, or even, in some cases, some skilled tasks that need to be done, and then you can chain them together. Because even if you look at an automobile, we talked about an automobile, there isn't an automobile that just appears. You have tires, you have engines, you have batteries, right? The battery provides the power, the wheels provide, you know, the ability to easily move, the engine gives you the force to push. Putting that all together, and this is how I start to look at it, now gives you a vehicle. The same thing if you're looking at ERP software. That's why, when I first heard about the agent approach when we talked some months ago, Søren, I liked that having an agent for sales orders, or an agent for finance, or an agent for purchase orders, a specific task, means you can put them all together, use the ones you need, and then have somebody administer those agents, so you have like an agent administrator.
Speaker 4:That is where the human comes back into the loop, because at some point you have to put these pieces together. I think at the moment this is the user that needs to do this, but this will develop further in the future. So you have another point where you come in, or where you need ideas or something like that. Because that is also what I learned and found very interesting: when you see an AI suggesting something to you, the feeling that this is a fit for my problem is inside your body, and at the moment you cannot put this into a machine. So to decide if the suggestion is right, and whether to take it and use it, you need a human to make this decision, because you need the human body, the brain and everything together, seeing and perceiving this, to decide whether it is wrong or good for this use case.
Speaker 3:I think that depends a bit, Christian, if I may. So there are places where, let's say, you could give one AI a problem to tackle and it will come up with some outcomes. And there could then be another AI, and now I use the term loosely, another process, that is only tasked with assessing the output of the first one within some criteria, within some aspects. That one has been, say loosely, trained, but its only purpose is to say, okay, give me the outcome here, and then assess it with completely fresh eyes, like it was a different person. Of course it's not a person, and we should never make it look like it's a person, but one machine can assess the other.
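The generate-then-assess idea described here is a common pattern, and it can be sketched in miniature: one component proposes candidate outcomes, and a second, independent component scores each candidate against fixed criteria, knowing nothing about how the candidates were produced. Both functions below are hypothetical stand-ins for real model calls.

```python
# Hedged sketch of the generate-then-assess pattern: a "generator"
# proposes outcomes, and a separate "assessor" scores each one against
# fixed criteria with fresh eyes. Both are placeholder functions
# standing in for calls to two independently operated AI systems.

def generate_outcomes(problem):
    # Stand-in for the first AI producing candidate answers.
    return [f"{problem}: option {i}" for i in range(1, 4)]

def assess(outcome, criteria):
    # Stand-in for the second AI: scores an outcome per criterion,
    # without any knowledge of how the outcome was generated.
    return sum(1 for criterion in criteria if criterion in outcome)

def best_outcome(problem, criteria):
    candidates = generate_outcomes(problem)
    # Pick the candidate the independent assessor rates highest.
    return max(candidates, key=lambda o: assess(o, criteria))
```

The design choice worth noting is the separation: because the assessor sees only the outcome and the criteria, it cannot inherit the generator's blind spots, which is exactly the "different person" framing used above.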
Speaker 1:Basically, that's what I'd say, to a certain degree, right, if we can frame the problem right. Yeah, and you had mentioned the human aspect, to take over and say, you know, that's wrong. Right, like, oh, it's wrong, I know it's wrong, I'm going to take over. It reminds me of a story from a NAV implementation I did a while back, where we had demand forecasting. When we introduced that to the organization, it does tons of calculations and gives you a really good output of what you need, based upon the information and data that you have. And there was this individual working for the organization who said, that's not right, that's wrong. And I would ask, can you tell me why it's wrong? I'd love to know. What made you feel like it was wrong?
Speaker 1:Do you have any calculations? No, I just know it's wrong, because typically it's this number, right? But they couldn't prove it. So that's also a dangerous component, where a person could take over, and whatever they feel is wrong, where they think it's wrong, they can also be wrong. Right? It's just the human aspect of it.
Speaker 3:Yes, but they can, yeah. And I think, so the first time I learned more about AI, before these recent years, was some eight, nine years ago, when we did some of the classic machine learning stuff for some customers, and what was an eye-opener for me was that it didn't have to be a black box. So back then, let's say you had a data set. I think the specific customer wanted to predict which of their subscribers would churn, and there was a machine learning model on Azure that they could use for that, I don't know the specific name of it. And the data guy that helped us, one of my colleagues from Microsoft back then, showed them the data, because they had their own ideas on what the influencing factors were that made consumers churn. These were magazines that they were subscribing to. And when he showed them the data and said, these are the influencing factors, actually determined from the data you see here, and he had validated it against their historic data, they were just mind-blown.
Speaker 3:So it turned out, I'm just paraphrasing now, that people in the western part of the country were the ones who churned the most. The geography was the predominant influencing factor for predicting churn. They were just mind-blown because they had never seen that data. They had other ideas of what made people churn, like, to your point, Chris. But it was just so cool that we could bring that kind of transparency and say, this is how the model calculates, these are the influencing factors that it has found by looking at the data. So I just thought that was a great example of bringing transparency when humans, like you say, are just being stubborn and saying, no, it doesn't work, it's not right.
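A toy version of that churn analysis can make the "influencing factor" idea concrete: for each candidate factor, compare churn rates across its values, and rank factors by how big the spread is. The data set, field names, and the spread heuristic below are all invented for illustration; real tooling would use a proper model and feature-importance method.

```python
# Toy sketch of finding the predominant influencing factor for churn:
# for each factor, compute the churn rate per value and take the
# spread (max minus min). A large spread means the factor separates
# churners from non-churners strongly. Data is made up.

subscribers = [
    {"region": "west", "age_group": "young", "churned": True},
    {"region": "west", "age_group": "old",   "churned": True},
    {"region": "east", "age_group": "young", "churned": False},
    {"region": "east", "age_group": "old",   "churned": False},
    {"region": "west", "age_group": "old",   "churned": True},
    {"region": "east", "age_group": "young", "churned": False},
]

def churn_rate_spread(rows, factor):
    rates = {}
    for value in {r[factor] for r in rows}:
        group = [r for r in rows if r[factor] == value]
        rates[value] = sum(r["churned"] for r in group) / len(group)
    return max(rates.values()) - min(rates.values())

def rank_factors(rows, factors):
    # Strongest influencing factor first.
    return sorted(factors, key=lambda f: churn_rate_spread(rows, f), reverse=True)

print(rank_factors(subscribers, ["region", "age_group"]))  # → ['region', 'age_group']
```

In this toy data, geography dominates, echoing the anecdote: every "west" subscriber churned and no "east" subscriber did, while age group separates much more weakly.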
Speaker 2:That's definitely another factor, because we've all run into those situations where something just doesn't feel right, and in some cases the feeling could be correct.
Speaker 1:But it depends on the skills. That's what I want to go back to is the skills. It's the skills.
Speaker 2:If we're going to keep creating AI tools to help us do tasks, okay, I'm going to go off on a tangent a little bit. One, how do we ensure we have the skills to monitor the AI? How do we ensure that we have the skills to perform a task? Now I understand, the dishwasher you talked about, Chris, was invented so we don't have to wash dishes manually all the time, to save us time to do other things. We're always building these tools to make things easier for us and, in essence, raise the required skill to do a function, saying we need to work on more valuable things. Right, we shouldn't have to be clicking post all day long. Let's have the system do a few checks on a sales order; if it meets those checks, let the system post it.
Speaker 2:But is there a point where we lose the ability, the skill, to progress forward? And then, with all of these tools that help us do so much, because now that we have efficiency with tools, oftentimes it brings a reduction of personnel. I'm not trying to say people are losing their jobs, but it takes fewer people to do a task, therefore relieving the dependency on others. Humans are communal. Are we getting to the point where we're going to lose skill and not be able to do some complex tasks because we rely on these tools?
Speaker 2:And if the tools are going to get more complex, and we need the skill to handle that complexity, but we miss that middle layer of all the mundane building-block stuff, how do we gain the skill to do something? And two, I now see AI images and AI videos being created all the time, and it does a great job. Before, we used to rely on artists, publishers, other individuals to create that content, the videos, brochures, pictures, images, the B-roll type stuff, we'll call it. If we don't need any of that and we're doing it all ourselves, what are we doing to our ability to work together as a species, if now I can do all of this myself with fewer people?
Speaker 2:So I have a few points there. One, it's the complexity of the skill, and how do we get that skill if we immediately cut out the need for it? We no longer need someone to put the screw on that bolt. As you pointed out, Christian, we need someone to come in and be able to analyze these complex results of AI. But if nobody can learn that by doing all those tasks, what does that give us? So those are my two points.
Speaker 3:Yeah, no, those are great questions. So what you're saying is, how do we determine if this car is built right if there are no drivers left to test it, if no one has the skill to drive anymore? How can they determine if this car is built to a certain quality standard and what have you? Well, one answer would be, you don't have to, because it will drive itself. But until we get to that point, in that time in between, you need someone to still be able to validate, and probably, for some realms of our work and jobs and society, you will always need some people to validate. So what do you do? I think those are great questions, and I certainly don't have the answer.
Speaker 1:I would say I've had this conversation with Brad for a couple of years. You know, we love where AI is going, and I posed the question about whether AI becomes a necessity for the survival of humanity. Because, as you all pointed out, eventually you'll lose some of those skills because you're so dependent. Eventually you'll lose them. And I've had tons of conversations. Right now we don't need AI, we don't need AI for the survival of humanity. But as we become more dependent, as we lose some of those skills, because we're giving tedious tasks to AI, sometimes in the medical field or whatnot, it becomes a necessity. It will eventually become a necessity for humanity's survival, but we're forcing it. Right now we don't need it.
Speaker 2:We are forcing the dependency by losing this. I'm not saying it's right or wrong, but I'm listening to what you're saying, that we are going to be dependent on machines for the survival of the human race. I mean, humans have been around for how long?
Speaker 3:But we're already dependent on machines, right? We've been there for a long time.
Speaker 2:That's why I use the word machine, because we force ourselves to be dependent upon it, right? We force ourselves to lose the skill, or use something so much that it becomes something we must have to continue moving forward.
Speaker 3:Yeah, my point was that that's not new. I mean, we've done that for 50 years, forced dependency on machines, right? Without them we wouldn't even know where to begin with some tasks. So AI is probably just accelerating that in some realms now, I think.
Speaker 1:Yeah, it is. Because, you know, humans' desire is to improve quality of life, expand our knowledge and mitigate risk.
Speaker 2:It's to be lazy, I hate to tell you. Humans take the path of least resistance, and there's a little levity in that comment. But why do we create the tools to do the things that we do? Right?
Speaker 2:We create tools to harvest fruits and vegetables from the farm, right, so we can do it quicker and easier and with fewer people. So we don't necessarily do it to make things better. We do it because, well, we don't want someone to have to go to the field and pick the cucumbers from the cucumber vine, right? They shouldn't have to do that, they should do something else. We're kind of, in my opinion, forcing ourselves to go that way. It is necessary to harvest the fruits and the vegetables and the nuts to eat, but is it necessary to have a machine do it? Well, no, we just said it would be easier, because I don't want to go out in the hot sun all day long and, you know, harvest.
Speaker 3:You can do the dishes by hand if you like, right yeah?
Speaker 1:Yeah, if you choose to. No one wants to do the dishes, trust me.
Speaker 3:trust me I will never live in a place without a dishwasher. I mean, it's the worst that can happen.
Speaker 2:It is, and the pots and the pans forget it right.
Speaker 4:If you take this further, at some point in time, if you have a new colleague and you have to educate him or her, do you teach them to do the steps the sales order agent is doing themselves, just so they have the skill and know what they're doing? Or do you just say, push the button?
Speaker 1:Yeah, but I think, eventually, as you continue to build upon these copilots and AI, eventually you just have two ERP systems talking to each other. And then what? Where are we then?
Speaker 3:Yeah, super interesting. I mean, who knows? I think it's so hard to predict where we'll be even just in 10 years.
Speaker 2:I don't think we'll be able to predict where we'll be in two years, I think it's.
Speaker 2:Will we ever be able to just press a button? Like right now, I can create video images and still images. I'm using that example because a lot of people relate to it, but I can create content, create things. I've also worked with AI for programming, in a sense, to create things.
Speaker 2:I was listening to a podcast the other day, and in the podcast they said that within 10 years, the most common programming language is going to be the human language. Because it's getting to the point where you can say, create me this, it needs to do this, this and this, and an application will create it, run the tests and produce it. You wake up in the morning and now you have an app. So it's going to get to that point. Let's fast forward a little bit, because you even look at GitHub Copilot for coding, right? You look at the sales agents. To Chris's point, ERP systems can just talk to each other. What do you need to do? Is there going to be a point, and that's what I was getting at, where we don't need other people because we can do everything for ourselves? And then how do we survive if we don't know how to work together, because we're not going to need to?
Speaker 3:That is so... yeah, I'm sorry, go ahead.
Speaker 2:Sorry. So, to go to your point, how is AI going to help progress the human civilization, right, or the species, if we're going to get to the point where we're not going to need to do anything? We're all just going to sit in my house, because I can say, make me a computer, and click a button, and it will be, you know, there. And that's, you know, where I come from with it. And in that other podcast show that you mentioned, you quote James Burke, who says that we will have these nanofabricators, and that in 60 years everyone will have everything they need and just produce it from air, water and dirt.
Speaker 3:Basically, right. So that's the end of scarcity. So all the stuff that we're thinking about right now are just temporary issues that we won't need to worry about in 100 years. That's just impossible to even imagine, because, as one of you said just before, we'll probably always just move the needle and figure out something else to desire, something else to do. But I think it is a good question to ask: what will we do with this productivity that we gain from AI? Where will we spend it? Say you're a company, and now you have saved 20% in cost because you're more efficient in some processes due to AI, or IT in general. What will you do with that 20%? Do you want to give your employees more time off? Do you want to buy a new private jet? I don't know.
Speaker 3:You have choices, right. But as a humanity, my personal opinion is, I mean, I would welcome a future where we could work less, where we could have machines do things for us. But it requires that we have a conversation, start thinking about how we will interact in such a world where we don't have to work the same way we do today. What will our social lives look like? Why do we need each other? Do we need each other? We are social creatures, we are communal creatures. So, yes, I think we do. But what will that world look like? This keeps me up at night sometimes.
Speaker 2:I can't imagine, nor did I imagine, there'd be full self-driving vehicles within such a short period of time. I mean, I think, as you made a great point, Søren, I don't think anyone can know what tomorrow will be or what tomorrow will bring with this, because it's advancing so rapidly. And to go back to the points I mentioned, you talked about the podcast with James Burke, which was a great podcast as well. That was the You Are Not So Smart episode, I think it was 118, on connections, which talked a lot about that.
Speaker 2:And yes, it was a great episode. That's another great podcast. And a lot of this stuff is going to be building blocks, and we can't even envision what it's going to build. You know, look at the history of the engine. Look at the history of any number of inventions. They were all made of small little pieces. So we're building those pieces now. But also, our minds are going to need to be, I'll use the word stimulated. If we get to the point where we don't have to do anything, how are we going to entertain ourselves? We're always going to find something else to have to do, right, but is there going to be a point where there is nothing else, because it's all done for us?
Speaker 3:Yeah, I just want to comment on one thing you said there, that no one just imagined the car. People did stuff, invented stuff, and suddenly other people could build on that and invent other stuff, and eventually you had a car, right? Or anything else that we know in our lives. And I think James Burke also says that innovation is what happens between the disciplines, and I really love that. Just look at agents today. Four years ago, before LLMs were such a big thing, I know they existed in a very niche community, but no one said, let's invent LLMs so we could do agents. No, LLMs were invented, and now that we have LLMs, we think, oh, now we can do this thing called agents. And what else comes to mind in six months, right? So it just proves that no one has this sort of five-year plan of, oh, in five years let's do this and this. No, because in six months someone will have invented something, and oh, we can use that, and now we can build this entirely new thing.
Speaker 3:It's both super exciting, but it's also a bit scary. I can speak as a product developer: it's definitely challenged me to rethink my whole existence as a product person, because now I don't actually know my toolbox anymore. Two years ago I knew what AL could do. Great. I knew the confines of what we could build, I knew the page types in BC and so on. So if I had a use case, I could visualize it and see how we could probably build something. If we needed a new piece from the client, we could talk to them about it and figure that out. But now I don't even know if we can build it until we're very close to having built it. There's so much experimentation that we're building the airplane while we're flying it, in that sense. And that also challenges our whole testing approach, testability and frameworks. Which is super exciting in itself, so it's just a mindset change. But it definitely challenges your product people.
Speaker 2:Oh, it definitely does. I think AI is definitely changing things, and it's here to stay, I guess you could say. I'm just wondering. I think back to a movie, was it from the 80s, called Idiocracy. If you haven't watched it, it's a mindless movie, but it's the same type of thing, where a man from the past goes into the future and you see what happens to the human species in the future and how they are. It's pretty comical. It's funny how some of these movies circle back. Yeah, they circle back, you know.
Speaker 4:You know, Star Trek, Star Wars. I'm wondering when we will be there.
Speaker 3:That already happened. I just hope we won't get to the state you see in that cartoon, that animated movie Wall-E, where the people are just lying back all day and eating, and their bones are deteriorating because they don't use their bones and muscles anymore. The skeleton sort of turns into something else, and they just become like wobbly creatures that just lie there, like, I don't know, seals, consuming.
Speaker 4:What was really interesting with Back to the Future is this thing, because Doc Brown made this time machine that could run on a banana peel to get the energy of 1.21 gigawatts or something like that. You didn't have to wait for a thunderstorm to travel in time. This idea was mind-blowing back then, and I'm dreaming of using our free time as humans to make these leaps. Because we have this scarcity in resources, and even if this goes further and further and further, I assume that we don't have enough resources, enough machine computing power, to fulfill all that. I think there will be limitations at some point in time, and most of what AI is freeing us up for is to have ideas on how we can use our resources in a way that is sustainable.
Speaker 3:I like that. I have no way to say whether what you fear will become true or not, but I like the idea of using whatever productivity we gain for more sort of humanity-wide purposes. And I also hope that whatever we do with technology and AI will reach a far audience and also help the people who today don't even have access to clean drinking water and things like that. So I hope AI will benefit most people and, yeah, let's see how that goes.
Speaker 1:Yeah, I think it's going to redefine human identity. Yeah.
Speaker 2:I'd like to take it further and say the planet. With AI, I hope we gain some efficiencies, to go to your point, Christian, so we can have it all be sustainable and not be so destructive. Because, you know, the whole circle of life, as they say: it's important to have all of the species of animals, plants, water, everything else on the planet. It's an entire ecosystem that needs to work together. So I'm hoping that something we get out of this AI is how to become less destructive, more efficient and more sustainable, so that everything benefits, not just humans, because we are heavily dependent upon everyone else.
Speaker 4:That's the moral aspect of it.
Speaker 4:So if we use it to use all of the resources, then it is moral aspects bad because it is not sustainable for us as a society and as human beings on this planet.
Speaker 4:So, as I see, moral is a function of keeping the system alive, because we use the distinction between good and bad in that way that it is not morally good to use all the resources. So if we could extend anything that we can do with AI using all of the resources, that is not really good and that what we can use with our brains is think ahead when will this point in time will be and label it as bad behavior. So the discussion we are having now and I'm very glad that you brought this point, sorin is that we have this discussion now to think ahead. Where will the use of AI be bad for us as a society and as human beings and for the planet? Because now is the time we can think ahead what we have to watch out in the next month or years or something like that, and that is the moral aspect I think we should keep in mind when we are going further with AI.
Speaker 3:I think there are so many aspects there, to your point, Christian. One is, of course, as we all know, the energy consumption of AI in itself. But there's also the other side, the flip side, where AI could maybe help us spotlight, shine a bright light on, where we can save on energy in companies, and where AI can help us, let's say, calibrate our moral compasses by shining a light on where we don't behave as well today as a species. So I think there's a flip side, and I'm hoping we will make some good decisions along the way to have AI help us in that.
Speaker 2:There are so many things I could talk about with AI, and I think we'll have to schedule another discussion to have you on, because I had a whole list of notes of things that I wanted to talk about, not just from the ERP point of view but from the AI point of view. After getting into the Moral AI book and listening to several podcasts about AI and humanity, there are a lot of things I wanted to jump into. We talked about the de-skilling, we talked about too much trust. I'd like to get into harm and bias, and also how AI can analyze data.
Speaker 2:Data that everyone thinks is anonymous. Reading that Moral AI book, some statistics they put in there fascinated me. Just to throw it out there: 87% of the United States population can be identified by their birth date, gender and zip code. That was mind-blowing. And then 99.98% of people can be identified with 15 data points. So with all the data sharing that's going on, it's very easy to make many pieces of anonymous data no longer anonymous; that's what I got from it. The birth date, gender and five-digit US zip code one, again, that's in the United States, was the one that shocked me, and now I understand why those questions get asked the most, because with a high probability, 87 percent, it's going to give who you are.
Speaker 3:Maybe just for the audience watching this or listening to this: the book that we're talking about is this one, Moral AI. I don't know if you can see it. Does it get into focus? I don't know if it does.
Speaker 1:Yeah now it does.
Speaker 3:So it's this one, mole, ai and how we Get there. It's really a great book that goes across fairness, privacy, responsibility, accountability, bias, safety, all kinds of and it tries to take sort of a pro-con approach. You know, because I think maybe this is a good way to end the discussion, because I have to go. I think one cannot just say AI is all good or AI is all bad, like it depends on what you use it for and how we, how we use it and how we let it be biased or not, or how we implement fairness into algorithms, and so there's just so many things that we could talk about for an hour. But that's what this book is all about and that's what triggered me to to share a month back. So just thank you for the, for the chance to talk about some of these things, and I'd be happy to jump on another one.
Speaker 2:Absolutely, We'll have to schedule one up, but thank you for the book recommendation. I did start reading the Moral AI book that you just mentioned. Again, it's Pelican Books. Anyone's looking for it. It's a great book. Thank you, both Soren and Christian, for taking the time to speak with us this afternoon, this morning, this evening, whatever it may be anywhere. I know where I have the time zones and we'll definitely have to schedule to talk a little bit more about AI and some of the other aspects of AI. But if you would, before we depart, how can anyone get in contact with you to learn a little bit more about AI, learn a little bit more about AI, learn a little bit more about what you do and learn a little bit more about all the great things that you're doing?
Speaker 3:Soren, so the best place to find me is probably on LinkedIn. That is my only media that I participate in these days. I deleted all the other accounts and that's a topic for another discussion.
Speaker 2:It's so cleansing to do that too.
Speaker 4:Yeah, and for me it's also on LinkedIn and on Blue Sky. It's Curate Ideas excellent, great.
Speaker 2:Thank you both. Look forward to talking with both of you again soon.
Speaker 4:Ciao, ciao thanks for having us. Thank you so much bye, thank you guys.
Speaker 2:Thank you, chris, for your time for another episode of In the Dynamics Corner Chair and thank you to our guests for participating. Thank you for your time for another episode of In the Dynamics Corner Chair and thank you to our guests for participating.
Speaker 1:Thank you, brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us. Thank you for all of our listeners tuning in as well. You can find Brad at developerlifecom, that is D-V-L-P-R-L-I-F-Ecom, and you can interact with them via Twitter D-V-L-P-R-L-I-F-E. You can also find me at matalinoio, m-a-t-a-l-i-n-oi-o L I N O, dot I O, and my Twitter handle is Mattelino16. And see, you can see those links down below in their show notes. Again, thank you everyone. Thank you and take care.