Dynamics Corner

Episode 331: In the Dynamics Corner Chair: AI is a Complex Concept and There Is Confusion About What It Can Do

July 30, 2024 Kamil Karbowiak Season 3 Episode 331

Send us a text

This discussion focuses on AI and Copilot, aiming to address the confusion and misconceptions surrounding AI. Our guest, Kamil Karbowiak, emphasizes that AI is a complex concept and that more clarity is needed about its capabilities. The conversation also covers various copilots designed for specific tasks and domains. Kamil delves into the limitations and future potential of AI in Business Central.
 
 
Connect with Kamil on LinkedIn (https://www.linkedin.com/in/karbowiakkamil/)

#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

YouTube:
https://www.youtube.com/channel/UCiC0ZMYcrfBCUIicN1DwbJQ

Website:
https://dynamicscorner.com

Disclaimer: This podcast episode may contain affiliate links, which means we may receive a small commission, at no cost to you, if you make a purchase through a link. This helps support our podcast.

Speaker 1:

Welcome everyone to another episode of Dynamics Corner, the podcast where we dive deep into all things Microsoft Dynamics. Whether you're a seasoned expert or just starting your journey into the world of Dynamics 365, this is your place to gain insights, learn new tricks, and explore the possibilities of Copilot AI in your life and business. I'm your co-host, Chris.

Speaker 2:

This is Brad. This episode was recorded on July 10th, 2024. Chris, Chris, Chris. If I had a dime for every time I heard the word Copilot, I would be retired. Yeah, right, we would just be doing this podcast for the rest of our lives.

Speaker 2:

My kids' kids' kids, which would be what, great-grandchildren? They could probably live off of that. This world of AI is mystifying, and each time we talk to somebody about it, it becomes a little more demystified for me, so I have to keep having these conversations. Today, we had a great conversation with Kamil Karbowiak from Data Courage.

Speaker 1:

Good afternoon, Kamil.

Speaker 3:

Hi, good afternoon Chris, good afternoon Brad.

Speaker 2:

Good afternoon, Kamil. How are you doing? It's nice to talk with you again.

Speaker 3:

It's very, very nice to talk to you guys. How's the weather and everything else wherever you are?

Speaker 2:

It's hot everywhere. Over here we're baking. How is it over there for you?

Speaker 3:

Same here, same here. Well, in Celsius it's 35 degrees today.

Speaker 2:

Quite unusual. Yeah, it seems like it's hot everywhere. Everyone's cooking, and everybody has a different interpretation of hot, but it seems like everybody's up in the heat. You've been running around, I see, so I appreciate you taking the time to talk with us. I've been wanting to catch up with you for a long time about a topic that we started to see a lot more traction on at Directions North America back in April, and since then it seems like it's taking over the world, and that is AI, Copilot, and anything else related to that.

Speaker 2:

Whether at Directions, or if we go back to then, or even in between, it's almost difficult to go a day, an hour, or maybe even a half hour without hearing some mention of the words Copilot and AI.

Speaker 3:

Everyone is getting a little bit tired.

Speaker 2:

I'm getting a little tired of it too, and I'm getting tired of it for a number of reasons. But to try to get some clarification on this whole thing, I was wondering if you could, since it's in your space and within your realm of work and the services that you offer, hopefully shed some light and talk with us about some of the confusion around AI and Copilot, and about what everybody thinks, or what's considered over-promising, for AI and what it can do. So before we get into that, can you tell everyone a little bit about yourself?

Speaker 3:

Yeah, so Data Courage, Kamil Karbowiak. I'm the managing director of a company that is headquartered in Poland, but we have offices and employees in different areas of the world, including the US. We focus on data analytics in a very wide space, including Fabric and everything that is data warehousing, and on trying to get Power BI into the hands of people at various organizations, from the smaller ones to the ones that are in 17 or 20 countries, and trying to bring that up to speed for them.

Speaker 3:

And AI has been with us since we met, I think it was about five, six, seven years ago, something like that, at the first conference, when Copilot was not existent yet and AI was taking shape. There was more machine learning at the time, and I think we've got a different, I would say, reiteration of that, with generative AI now adding another piece of complexity, or actually something that was supposed to answer, in natural language, any question that we might have about anything that happens around the data that we collect in our companies. And I think that is what we're also doing now. We're helping partners with arranging their story around AI. We're helping Business Central partners to deliver things with tools such as Copilot Studio, for example. So we do that as well, and we have our own products that we ship to AppSource that integrate directly with Business Central. So that is really what we do at Data Courage.

Speaker 2:

Oh, great, great. Thank you for that. You guys are doing a lot of great things. I see what you're doing, and I also enjoyed the conversation we had and learning a little bit more about AI. Basically put, in your terms, what is artificial intelligence, and is it really intelligent?

Speaker 3:

Complicated question. It is a very complicated question, and then you need to unravel those different pieces. As you mentioned, there's so much confusion around it, and the expectation was that what is being put in front of us, in a system or in an environment, would be able to answer all these questions that we have. This is where I would like it to finally be: no matter what task you have for it, it will actually assist you. I think we're not there yet, and that's where the confusion came from, with a lot of the marketing coming from different channels and different companies claiming that it would be able to do that, or that it will be able to do that to an extent. When people see certain videos or certain functionalities, they think: oh, it can help me with so much more than what is on that video, and I wasn't able to do that before. What we're seeing and experiencing is that it's not there yet in many of those spaces. So, if you think of the revelation of generative AI, there are four major aspects that generative AI is able to help us with. One of them is content generation. There is content summarization as well. There are also things like question and answer; an incarnation of that would be Copilot chat, for example. And then there is the other thing, and I think this is the broadest subject that has been picked up by a lot of the partners who are trying to get into this journey, and that is automation. So trying to automate certain tasks that people want to accomplish. They can be AI-assisted, but still it's domain specific.
It might be generic, such as bank reconciliation, but if you think about the bank reconciliation aspect, it is a specific concept around a specific functionality, and generative AI is going to enable us, with specific prompting, to accomplish certain things and put some things into Business Central, into tables, and then have a human accept them, reject them, or somehow correct them before posting. So it saves a lot of time, and we cannot take that away from the Copilot functionality: it actually saves a lot of time. Even things like the aspects of Teams, trying to get notes from the meetings, Copilot for Teams, for example, accomplishes really well. There are obviously other solutions with a different approach to that subject. But those are really the four domains where we think that generative AI in our Business Central space, I would say, will help us, and of course, if you think about coding and things like that, it can do that as well.

Speaker 3:

Where the problem is, and I've heard it many times over the conversations we had at Directions with different people, and at different Microsoft sessions as well, is: is it one copilot or is it many copilots? And we know, right, we know that there are many of them, but this is just one of the aspects. Really, the limitation of how this works now is that they do not communicate together yet. So one copilot cannot communicate with another one, pass some results to another one, and, of course, this is the journey that we're going through now. So one of the key takeaways, I guess, from this conversation, but also from Directions, is that there are many copilots, and they're specific to different tasks rather than one copilot for everything.

Speaker 2:

That's what I wanted to ask. I have a lot of questions for you based on what you had just mentioned, and hopefully I can, you know, I need a copilot for my brain to help me go back and ask all these questions. You said there are many copilots. What is the difference between these copilots? Is it the data, I guess, is that even the appropriate term, the data that they work with? Is it the function that they perform, or is it something else? Again, it's like people saying I have a car or a vehicle, right? You have many different types of vehicles. I can differentiate between a sedan, a truck, a tractor, a semi-tractor. There are a number of different things within there. When dealing with a copilot, what differentiates them from each other?

Speaker 3:

To answer this question, I think we need to go back to the whole large language model aspect. When you think about that, you have in front of you, when you're using, I don't know, ChatGPT or whatever you're using, the whole knowledge that it aggregated, and then you have to put it into a direction and give it more context about what you want to do and achieve with it. And the same goes with those different copilots. So if you have a copilot that helps you with marketing text generation, or a copilot that helps you with bank reconciliation, you're giving it a different context and giving it different prompts, right? You need to tailor it to what you want it to do. It will answer questions in different ways when the domain changes. And that is really what, in human nature, we would do too: we would answer the same question differently to a five-year-old and to someone who has a PhD, if we have the knowledge to talk at that level. So that's why these copilots, or these agents or these assistants, need to be taught, or I would say their broad knowledge has to be tailored to the specific task that we want them to accomplish.

Speaker 3:

At first we had this confusion, and I think that we're on the road to explaining it better, and that we can actually plug it into different functionalities and start doing that.

Speaker 3:

And then there's obviously the stuff that I've heard from Microsoft events and Directions as well. In one of the sessions with Evgeny Korovin, they were putting things out there, saying they will be introducing connected copilots as well: something that will perform a certain action and then, you know, it's just like a function. If you think about it, it's a function that has a certain task, it completes it, and then it passes the parameters to a different one that is tailored to do something else. And we're doing this as well in one of our apps, for example, where we're passing certain things that one AI element is doing for us, and then it passes them on to a different element, a different model that has been pre-trained and fine-tuned to perform a different task, and then, with the iterations of that, you have a specific functionality that you can accomplish and work with, and work with the results of it.
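The "connected copilots" idea Kamil describes, where one tailored step passes its result to the next, can be sketched as a plain pipeline of functions. Everything below is illustrative: the step names and the fake logic stand in for pre-trained, fine-tuned models, not any real Copilot API.

```python
# Sketch of "connected copilots": each step is a narrow function; the output
# of one becomes the input of the next, just as one model would hand its
# result to another model tuned for a different task.

def extract_line_items(document_text: str) -> list[str]:
    # Step 1: a model fine-tuned for extraction would run here;
    # we fake it by splitting non-empty lines.
    return [line.strip() for line in document_text.splitlines() if line.strip()]

def categorize(items: list[str]) -> dict[str, list[str]]:
    # Step 2: a second "model", tuned for classification, consumes step 1's output.
    buckets: dict[str, list[str]] = {"bank": [], "other": []}
    for item in items:
        buckets["bank" if "bank" in item.lower() else "other"].append(item)
    return buckets

# The pipeline: step 1's result is passed directly as step 2's parameter.
result = categorize(extract_line_items("Bank fee 12.00\nOffice chairs 300.00"))
```

The point of the sketch is the shape, not the logic: chaining works only when each step's output format is something the next step is prepared to consume.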

Speaker 2:

I hear these terms, and I do want to jump into more of the functionality that you can use with Business Central, which I'll mention in a minute. But I do appreciate that AI, or Copilot, could be used, as you had mentioned, for content generation or idea generation, which is helpful. I can do summarization of stuff, and we can talk about the stuff, and also you can ask questions and get answers. You had just also mentioned that you can train it, right? I hear this word train almost like I'm teaching or training a person to do a job. So again, now we go into some of the misconceptions and confusion, and this might even take it a little bit deeper, it might put us down another tangent: what is it to train AI?

Speaker 3:

I wouldn't say that you would train a new model for yourself; I think we don't want to go down that path. And I'm limiting ourselves here, because there are some companies that have started training their own models, but it takes a lot of CPU power, GPU power, to do that. What we're doing is we are using a trained model, and then we're using pre-prompting techniques, something that Microsoft also puts on their slides as grounding. So something that tells the model how it should behave now: which domain we are in now, which domain we now want to focus on, so that its answers are specific to that domain. So when we're in finance, we're going to be talking about finance.

Speaker 3:

Whether it's finance for, I don't know, an NGO, or finance for a company that sells electrical equipment, the answers should be tailored to that. So we're giving it more context. We are not training the model; we are just adding context. To actually train it based on our own data, we would pretty much need to put machinery in place and start building an LLM for a specific purpose, and I don't think that we should. It's a concept for large companies that need to solve specific problems.
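The grounding technique Kamil describes, pre-prompting rather than retraining, can be sketched as follows. The message format (role/content dicts, as used by common chat-completion APIs) and the function name are assumptions for illustration; no real model is called here.

```python
# Grounding sketch: the model is never retrained. We only prepend domain
# context to every request, so its answers stay within that domain.

def build_grounded_messages(domain_context: str, user_question: str) -> list[dict]:
    """Wrap a user question with a system prompt that pins the domain."""
    system_prompt = (
        "You are a finance assistant for Business Central. "
        f"Answer only within this domain: {domain_context}. "
        "If a question falls outside it, say you cannot help."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# Same model, different grounding: swap the domain_context and the answers
# are tailored to an NGO, an electrical-equipment seller, and so on.
messages = build_grounded_messages(
    "P&L analysis for an electrical-equipment retailer",
    "Why did gross margin drop in Q2?",
)
```

This is why Kamil says "we are just adding context": the list of messages is all that changes between one grounded copilot and another.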

Speaker 2:

So is training, in essence, limiting? I'm trying to simplify this for my knowledge and my limited knowledge of AI. I know how to ask questions to Copilot and get some ideas from it and create pictures and stuff. But is training, in essence, limiting the data that it reads, I guess you could say, or loads or associates or links or puts together, to limit what it can do? Or is training it like teaching it a new set of functions? For example, if I have a picture and say: AI, this is a banana, then going forward it can see that picture and it knows it's a banana. Or is it that it takes all the data in the world, of all the fruits and vegetables, and it can see that someone already identified it as a banana, and now I can say: what is this? And it will already?

Speaker 3:

Know it's a banana, yeah. So training, is that?

Speaker 1:

Yeah, training, is that machine learning? Yeah, it is. It's a different concept.

Speaker 2:

Yeah, exactly.

Speaker 3:

But you know that now LLMs are multimodal, so they also use those algorithms. That's even more complex. The term multimodal means that you can show it, I don't know, some picture, and it's going to generate text. From text, it's going to generate a picture or a video or something else. From a picture, it's going to generate a video. We've seen that. I've seen that, really great.

Speaker 3:

Yes, you've seen Sora, you've probably seen some other models that are out there, and it is amazing how this is being done. And, of course, you know, the problems with hands and so on and so forth. So, yes, we definitely have to go down a certain path in this conversation so that we are not getting lost in all of those concepts. So, yes, training an LLM with a specific data set is when you have a wide database of different concepts, and then the LLM is trained, with a vector database, to identify which word is in proximity to another word. And I'm not saying it for no purpose. I'm saying it because I think the next element that people have a misconception about, and I think we also discussed it with a lot of people, is why it doesn't handle numerical data as it should.

Speaker 3:

Why is it hallucinating all the time? It can give you different answers to very simple questions such as multiplication, or variances; it's not going to calculate them correctly. And why? The reason is: how would you identify that one is greater than two? You need to somehow train a different type of model to be able to understand that. For a language model, it really is a very difficult concept to apprehend when you have a number somewhere.

Speaker 2:

How do you understand whether it is small or large? We would need a math model, basically, to explain how the numbers are related. I did learn about a vector, what it is and how it works, in a previous episode. It took me a long time to wrap my head around that whole process. By the way, so now I understand why you use the word hallucination.

Speaker 2:

I understand why it hallucinates now: because a language model references information based on how it relates in space, not necessarily the function or use of it. See, I'm picking this up from what you're saying; tell me if I'm wrong. That's exactly right. And then, if we wanted to use that information, such as for math, we would need to teach it differently, because we're not using these numbers as they relate in space; we're using them as a function. Man, I'm learning so much with these conversations.
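The "relates in space" intuition just described can be made concrete with toy embedding vectors: similarity is the angle between vectors (cosine), so words that appear in similar contexts sit close together, while nothing in the geometry encodes that one number is greater than another. The three-dimensional vectors below are hand-made for illustration, not real embeddings.

```python
# Toy illustration of "word proximity" in a vector space.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hand-made 3-dimensional "embeddings" (purely illustrative):
emb = {
    "invoice": [0.9, 0.1, 0.0],
    "payment": [0.8, 0.2, 0.1],
    "banana":  [0.0, 0.1, 0.9],
}

sim_related = cosine(emb["invoice"], emb["payment"])    # close in space
sim_unrelated = cosine(emb["invoice"], emb["banana"])   # far apart
```

"invoice" and "payment" score high because their vectors point the same way; the model retrieves by this proximity, which is why it can fluently relate words yet have no built-in notion that 2 > 1.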

Speaker 1:

So yeah, it's conversational at the moment.

Speaker 2:

Yes, yeah, yes, exactly. That is a big thing with this. I have conversations, and it's almost like with anything new: the negative bias that we have as humans is that we come out and say, it did this wrong, look, I can confuse it, look, I can have it go this way, when the reality is, as I say over and over again, it's just a tool that you use.

Speaker 2:

So even if you're using it for content generation, as you mentioned, it may give you an idea, and then you can use the content in your brain to expand on that idea to create content. If you ask it questions, it can give you some ideas. You still should use it just as a tool, and validate the answers. It will make you become, I say, more efficient, because it will give you ideas faster than if you had to do it on your own. Just like a carpenter with a hammer: it's a tool. The hammer's not going to do anything on its own. It's all a matter of how you use it, right? See, I'm getting there.

Speaker 1:

Yeah, it's going to give you a baseline. That's where it's helpful, giving you a baseline, and then you take that information from a conversational standpoint, and the creativity still comes from you, because you would still have to figure out how you would use the result that AI gives you.

Speaker 2:

So with that, let's take it back now within the space of Business Central and data, and how a business, a partner, or someone can use AI within their application, whether it be a partner creating an application or an extension for Business Central that utilizes AI, or whether it's the AI and Copilot functionality that Microsoft has added, and is planning to add, to Business Central, and then this crazy thing, Copilot Studio, now, which again used to have another name.

Speaker 2:

Everything seems to have a different name, and I'm just going to load this all on there because it's on my mind. Also, one of the concerns that I hear, and I have as well, with AI in a business space, not necessarily just Business Central, is the data. So one, we have to train it to process our data, use our data, understand our data, which in an ERP system can be different because there are different modifications. But also, I may have access to the entire system for data such as finance. Chris could work in another office in the company, and he does not have access to finance data; he only has access to inventory. How can we ensure that if Chris uses the AI tool to ask questions, he doesn't get the finance data that I have access to, whereas if I ask a question, I get everything: item, finance, and such?

Speaker 3:

So we need to break it down a little bit, and I'll start with the last one, because I think it's the concept that is in the mind of many. With one of the apps, actually all of the apps, with Business Central it's quite easy, right? Just like you don't give access to the chart of accounts, you are not giving access to the functionality. And the way that Copilot works, I think, and I don't know how to say this, but with all due respect to Copilot and Microsoft, there is a missing element: you should be able to pick and choose where you want your Copilot to actually show up on your screen, and have specific permissions for that person attached to it. So the way we have approached it, with financial intelligence or any other app, is based on the end-user permission.

Speaker 3:

So if the user has access to the chart of accounts, he's going to be able to ask questions and use the AI financial intelligence, rather than having a copilot plugged in on top of Business Central where you just start chatting about different things. And this is where I think this has to change, so that people know what they're enabling when they enable Copilot for finance, Copilot for bank reconciliation, or anything else it will be attached to. I know where this is coming from. You don't want a person from, for example, a French subsidiary to have access to the German database or knowledge base. This has to be separated, and if you ask a question, you need to have permission to only ask questions about your P&L data, in the company that you have access to.
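The permission-based approach described here, filtering by the end user's Business Central permissions before any question ever reaches the model, might be sketched like this. The user names, permission sets, and helper functions are all hypothetical, standing in for real permission checks.

```python
# Sketch of permission-aware answering: check the user's permissions first,
# and only pass the data he is allowed to see to the model.

USER_PERMISSIONS = {
    "brad": {"finance", "inventory"},
    "chris": {"inventory"},
}

DATA_DOMAINS = {
    "finance": ["chart of accounts", "P&L"],
    "inventory": ["items", "stock levels"],
}

def visible_data(user: str) -> list[str]:
    """Everything the model may ground its answers on for this user."""
    allowed = USER_PERMISSIONS.get(user, set())
    return [d for domain, data in DATA_DOMAINS.items() if domain in allowed for d in data]

def ask(user: str, domain: str, question: str) -> str:
    if domain not in USER_PERMISSIONS.get(user, set()):
        return "Access denied: no permission for this data."
    return f"(model answers {question!r} using only {DATA_DOMAINS[domain]})"

denied = ask("chris", "finance", "Show me the P&L")
```

The filtering happens before prompting, so the assistant cannot leak finance figures to a user who only has inventory permissions, regardless of how the question is phrased.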

Speaker 2:

Yes, and that's where I see a big challenge, because someone could have access to both, so they want to be able to analyze across both, as you're building this vector of data, or teaching the system, AI, whatever you'd like to call it, how to analyze or process this data within its model.

Speaker 2:

See, I'm trying to learn how it knows to put up the walls between people without changing the efficiency, because to me it would be a filtered data set for a person if they don't have permission at some sort of dimension, and then that dimension needs to be factored into the processing to eliminate a chunk of data. But it could be multi-dimensional. So it's a big challenge, and I think somewhat you just assume, well, you just have to not try to understand it. But these are just some of the questions or concerns that I've heard about Copilot. Yes, it's great to create pictures, great to analyze information on the web, but now we want to transfer it to be a practical business solution, not executive-wide but business-wide, to gain the efficiencies.

Speaker 3:

So, again, coming back to some of the customers that we have on financial intelligence, it's not that we were challenged by these questions, but people were just asking how it's going to behave. The way we have it now, we don't have, let's say, the consolidated version; that is ahead of us, so we're not yet doing consolidation. But in terms of dimensions, whatever the user has access to is what he can actually use within the analysis, right? So he's not going to be able to compare with a different division, or something else that is captured as a dimension, if he doesn't have access to it. Yes, it will not give you those answers on that level, because you don't have access and permissions to do that. But at the level of a user who has access to all of these things, it will pass on much more data that is relevant.

Speaker 3:

To answer a specific question about that data set, yeah, it starts with the data set. Then you give it some information, some prompting, and then it returns results based on the data set that it was given. How you do that, and how you prevent it from hallucinating, is another aspect. But to have the permissions set and to control this, we were seeing that Business Central actually helps, with permissions, to send the data.

Speaker 3:

And then there's another aspect of it. People were asking: oh, are you reading our prompts? Is Microsoft reading our prompts? Does Microsoft know what's going out and in, given that everything is in the cloud? And why would you read the prompts, and why would you make a deal out of it? Just the AI concept brings those questions.

Speaker 3:

You know, there were people who, let's say, didn't want to go to cloud, for the same reason they don't want to go to AI now: it's the privacy, right, and somebody reads what we're asking. And what Microsoft has announced several times is that they do not see any of that, because it passes through OpenAI and goes through that. So that's the area of security, besides the fact that there are terms and conditions that you accept with Copilot. I don't know if there's any way to check whether or not somebody is reading that. But we already have customers using our tools in production, and they just want to have that ability to ask these questions of those numbers, and I think that's more valuable. Yes, it's a very difficult aspect to unravel without the people from the Microsoft side who would say: yes, we know, and we don't do that.

Speaker 3:

We don't, because that's the thing.

Speaker 1:

I think that they call it responsible AI. It's well written by them; I think it's been out for quite some time. I think it's what, four rules or policies? Fairness, reliability and safety, privacy and security, which covers that, and then inclusiveness. That is their goal.

Speaker 3:

Exactly. And then there are other aspects, because, I think it was today or the day before, the Copilot chat will not be available outside of the US until October. If you read something like that, is it based on some regulations that were put in by, for example, the European Union, or is it the technology behind it? That's another thing. But if you cannot trust these kinds of big companies to be reliable in terms of your data, then we're not going to get anywhere. Yeah.

Speaker 1:

And that's true.

Speaker 3:

And then, you know, if you think of deploying your own model, it's going to take you maybe, I don't know, two or three years' time, and you're going to be doing something for yourself. Yes, there are smaller LLMs that you can train and do it yourself, but then there's the whole cost of maintenance, trying to retrain it with new data, and so on and so forth. It takes a lot of time and money to put it out there.

Speaker 1:

That's true. I want to point out, too: I said four; there are actually six principles. The two missing ones are transparency and accountability. Okay, yeah.

Speaker 2:

Just not to derail us, but I have a question for you that came into my mind as I was listening to the two of you speak. It's outside of practice, it's an offshoot, but it jumped into my head. We talked about AI being trained by data. Okay, the data can be informational data within your company that you're collecting, so it's transactional data, so that you can have information for your products, finances, sales, and everything like that, and you can teach it to calculate sales. The other area that I see it being widely used for is content creation, and then the AI gets trained based on content. So let's just say it creates a bunch of website data. Somebody may say, okay, teach me about this; there's a lot of content on the internet; it generates an article; someone publishes something that was created by AI. Now somebody goes and creates content trained by all this information, including the AI content. Is there ever a point where it just doesn't know anything anymore, because AI is learning on AI content, whether it's images or text?

Speaker 3:

That is also a widely discussed subject, because you cannot train when you don't have that additional knowledge that it could be trained on. So if it's going to be training on AI-generated elements, yes, it's a kind of constant loop. And we need to say one thing, and we need to emphasize this really, for copilots: from what we're hearing from Microsoft, your proprietary data does not get exposed to train the models. It doesn't additionally train the models with the data that you're exposing. Just like with the Teams conversations: you will see with Copilot that it does not store them once you leave the conversation. If we were doing this on Teams, we would be able to have a conversation with Copilot about our conversation. It would be able to summarize, it would be able to suggest some questions, it would be able to add some ideas around it. But as soon as you close the conversation, it's not going to give you that ability, because it does not store it, and it is not used for training. So I think this is also part of responsible AI: that it's not being trained on any of the proprietary data. And then there are other programs where you can enroll, and I think Microsoft was actually talking about it as well; I can't remember what it was. But there are some, you know, like when you have an industry expert in a certain area who would like to read the answers generated by AI and influence it and say: well, this is wrong, this is not where it should be, it should be something else. And this kind of feedback then gets uploaded into the training, so the next time it's going to do better.

Speaker 3:

But this is separate, and it's done on artificial data, on data that is generated, with nothing to do with the real data coming from any of the companies or anything like that. And this goes back to the concept you touched on just a minute ago: yes, it would lose the more meaningful aspects if you don't train it further, and you need to give it a score. So you're seeing those flags in the chat, like or dislike, whether you liked the conversation or not. That's where it feeds that in.

Speaker 3:

The idea for our customers is that they will also be able to flag the answers coming from AI on their financials, whether it's something they like, or maybe they would like to add something else, and then you have a different, separate model that gets trained on those answers. These are some of the concepts we're working on right now.

Speaker 2:

It's challenging if you really start to think about all that's involved, because with human thought, we just have all this information we've learned, we pull it out naturally and we understand what we want to do. But now we're trying to almost replicate how humans store information in their brains, how they recall it and then use what they recall. It's crazy. And I saw recently some demonstrations with AI where you can talk to it and it will talk back.

Speaker 2:

Talk about creating more and more seclusion, because, Chris, we could have podcasts with AI and just have conversations, or I could just have a conversation with myself and never have to leave the house or talk to anybody else.

Speaker 3:

True. But if you think about it from a different perspective, what I've seen recently from Google, and also from OpenAI, I think, is what happens when you have impaired vision, and I have that in the family.

Speaker 3:

So I'm actually looking into this area very closely, because I want to help someone in my family orient herself around the house, and it is amazing how this technology can help with just a phone in her hand, telling her where the salt is, where the pepper is, what to put in a jar, things like that. So if you have that, you don't need an implant or anything else; you can use your phone. She was already using a lot of technology before, reading out messages and so on, but now it's another level. So it's very, very inclusive for some of the people who have impaired vision.

Speaker 2:

Those types of things.

Speaker 2:

You know, a lot of information about AI is all about the business use or the functional use, creating pictures and so on, but here it knows to identify where the salt and pepper are by where they're actually placed, not by where they should be, so that somebody with an impairment knows exactly where to go without needing assistance.

Speaker 2:

That stuff right there, those are the pieces of this technology that I wish had more visibility. It's not that it isn't visible in some areas, but on a wider market, I think it would help promote and give more understanding of how this technology can be used, other than just creating pictures: helping people function who may have, like you mentioned, visual impairment or other challenges, so they can have more independence. It's amazing, and it's helpful, and that's the type of stuff I like to hear about, how this technology is used. I didn't know it was being used that way in some places.

Speaker 2:

I'm very happy about that.

Speaker 1:

I think with the recent GPT-4o, with its whole vision component, it's going to skyrocket us to the next steps, which would lead to using your smartphone or smart glasses for it to have an interaction with you, like, hey, this is what you could do based on what's in front of you. For example, I enjoy cooking. If I have all these recipes, instead of me looking through them, I can have glasses, or I can take a picture, and it'll say, hey, you can make all these different things, and this is how you make them, and this is what you need to do to prepare. That's impressive, because it's right at the tips of your fingers, without having to do what we used to do: Google things.

Speaker 3:

Yeah, yeah.

Speaker 2:

It's a different form of Google, yeah. So, to go back to the business function of this for a moment, with Business Central and Copilot Studio: how does Copilot Studio fit into this AI, Business Central, business-function ecosystem?

Speaker 3:

In many ways, actually, because what Copilot Studio brings is this: it pulls down the level at which you need to understand the parts of the stack and the technology. It pulls it down to where someone who understands a little bit of development can write prompts, join pieces together and accomplish a task without writing complex Python code or trying to use different components to bring it, let's say, back to Business Central. It's the same concept as Power Apps and low code. So I would say it's low-code AI, which gives more people accessibility to it. And if you combine that with something I remember from your conversation with Vincent, where he was describing different types of prompting, that knowledge will be really useful in order to accomplish certain things with Copilot Studio and bring them to the end user. And I see a lot of excitement in the channel that they can actually do something. Yes, Microsoft does not share their prompts, because a prompt is proprietary knowledge.

Speaker 3:

So however you're going to direct your AI, and you're not going to train it, just direct it, ground it: that grounding and that task, the way you create those tasks and how generic or specific they are, that's where your power is going to be, because you'll be able to accomplish certain tasks much faster in your Business Central implementations. That's why, when somebody says from the stage that AI is going to be everywhere and you need to jump on that train, it's literally because of this. If you use it and create those functionalities, you will win over the competition, even in the Business Central space, because these things will speed up your implementation and configuration process. You can imagine generating not only text but also tons of master data components, filling those tables automatically with Copilot Studio, even generating demo data or something like that. So there's a use case, Copilot Studio, and off you go with the new functionality. That's really how it works.
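The grounding Kamil describes, directing a model with your own data rather than retraining it, can be sketched in a few lines. Everything below is illustrative: the record fields and the helper function are invented for the example, Copilot Studio configures this declaratively, and Microsoft's actual prompts are proprietary.

```python
# Sketch: "grounding" a task by injecting selected Business Central records
# into the prompt instead of training the model on them. All names here are
# invented for illustration, not Copilot Studio's real configuration.

def build_grounded_prompt(task: str, records: list) -> str:
    """Combine a task instruction with grounding data into one prompt."""
    grounding = "\n".join(
        ", ".join(f"{field}: {value}" for field, value in rec.items())
        for rec in records
    )
    return (
        "You are an assistant inside a Business Central implementation.\n"
        f"Task: {task}\n"
        "Answer only from the records below; say 'unknown' otherwise.\n"
        f"Records:\n{grounding}"
    )

items = [
    {"No.": "1000", "Description": "Bicycle", "Inventory": 12},
    {"No.": "1001", "Description": "Touring Bicycle", "Inventory": 0},
]
print(build_grounded_prompt("List items that are out of stock.", items))
```

The point of the sketch is that the iteration happens on the task wording and the record selection, not on the model itself, which is exactly the low-code appeal described above.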

Speaker 3:

I think it's a really nice step forward in what has been brought out to the partner ecosystem. And I don't know if they finally did it, but I've heard on numerous occasions that there is an instance of Azure OpenAI you can use within Copilot Studio that will be free of charge. This has to be confirmed, because I don't know if it actually happened, but I remember there were discussions about it, because when you're prompting, when you're changing and adjusting your prompt, you're investing money, right? Those workloads need to go to Azure OpenAI, something needs to calculate it for you and bring back the relevant information, and by tweaking the prompts you get to the point where it actually fits your purpose. And for that ongoing work of refining the prompts, partners would not be charged. But this has to be reconfirmed, because that's just what I've heard.

Speaker 2:

Yeah, it's all too much for me, so I don't even know. I try to keep up with so much; it is really challenging. And sometimes you just have to trust it, in a sense. I guess you could say trust, but I'd say trust but verify, trust with caution, and hopefully we get the benefits and the efficiencies of it without some of the damage that could come with it too. As with anything else, it can be exploited.

Speaker 1:

I have a question really quick. We talked about so many possibilities of what Copilot and OpenAI or ChatGPT can do for you, primarily around gathering information and then summarizing it in a natural-language way that I can understand and interpret, and then getting the result. Maybe the question should be: what are its current limitations?

Speaker 3:

Yeah, data. Numerical data is where it's challenged. But that's why I was delaying this answer for so long, because I knew something has happened in this area and I thought it was going to be revealed, but that's probably going to be revealed later this year, and I think there's going to be a breakthrough in that space as well. It's about how the data is stored, how it's interpreted, how it's given certain context, and once you have all of that, you can join the LLM forces with the numerical side and do the stuff that people usually ask for. When we have demonstrations of our apps, the question is always: can it do this, can it do that?

Speaker 3:

And Copilot, at its current stage, has certain limitations, and I think these limitations will go away sooner rather than later. So we're going to have the ability to ask questions like: show me all the records where my receipt date is such and such, or not even that, you'll be able to say, when was I late with my shipments, or something like that, and it's going to be able to answer those questions with data, with charts and things like that. That's the current limitation, and that's where people thought that when you put Copilot on top of it, it would unravel all those things and off you go with everything.
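A common workaround for the numeric weakness described here is to keep the arithmetic out of the model entirely: the language model only chooses a tool, and ordinary code computes the answer. The shipment data, the tool names, and the keyword-based tool choice below are all stand-ins invented for the example, not any product's actual mechanism.

```python
# Sketch of tool-calling as a workaround for weak LLM arithmetic:
# the model picks a tool, and deterministic code does the numbers.
from datetime import date

# Invented shipment rows; "promised" vs "shipped" dates decide lateness.
shipments = [
    {"order": "SO-101", "promised": date(2024, 6, 1), "shipped": date(2024, 6, 3)},
    {"order": "SO-102", "promised": date(2024, 6, 5), "shipped": date(2024, 6, 5)},
    {"order": "SO-103", "promised": date(2024, 6, 9), "shipped": date(2024, 6, 15)},
]

def late_shipments(rows):
    """Deterministic aggregation: the model never does this arithmetic."""
    return [r["order"] for r in rows if r["shipped"] > r["promised"]]

TOOLS = {"late_shipments": late_shipments}

def answer(question: str):
    # A real system would let the LLM pick the tool; a keyword match
    # stands in for that choice here.
    tool = "late_shipments" if "late" in question.lower() else None
    return TOOLS[tool](shipments) if tool else []

print(answer("When was I late with my shipments?"))  # -> ['SO-101', 'SO-103']
```

The design choice is that the model's job shrinks to routing and phrasing, while anything numeric stays verifiable code.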

Speaker 1:

Yeah, you made a great point, because I think it confused a lot of people when Copilot came out. To your point, people expected to ask all the business questions they may have within Business Central, within their data, and then they realized, well, that's not what it does; that's the limitation, conversation-wise. So you're saying that within maybe a year it will start doing the job of crunching data and interpreting it into natural language in a way that makes sense for everyday people. Do you see that happening in addition to the tools that are already out? For example, Power BI had the ask-a-question Q&A feature. Do you see it being available there first, or in a certain product?

Speaker 3:

Across the multitude of different Copilots? Yeah, I think it's going to go into Power BI for sure. It might go to, well, it's definitely going to Project Sophia, if that ends up in production. And I think Business Central is a natural place where this should be put, because the demand is already there. The demand from the customers is already there.

Speaker 3:

When we're having those conversations about the product we have inside Business Central, from AppSource, there are certain questions we cannot answer yet. But the questions are there; the questions they would like answered are already there. So they're waiting for this iteration to happen, when they will finally be able to answer those questions in natural language, because an ERP of any sort is there to collect data and help you with operations and everything else, but in the end, you want to be able to ask the question and know what's going on.

Speaker 3:

And at the moment, what we need to do is somehow tailor things to answer specific questions: tailor the answers, tailor the prompts or the data coming into those prompts. Whereas at a certain point, and it's going to be sooner rather than later, it will actually start understanding what's there, what the data set really means, what it represents, what it can do with it, what kind of insights it can give you on that data set and collate with additional information. That's where I think the customer is already there with the demand, but the technology is not necessarily there yet, available in a Copilot window or chat window.

Speaker 1:

Yeah, I think that's an exciting path we're on. It's really putting the word intelligence into practice when you're talking about business intelligence, right? You're referencing your data, specific to your business, not just information you can randomly feed in. And I think that's very exciting, because it removes the human interpretation. I remember early in my career, you'd build your data model, do all your calculations and have some form of forecast, and then I'd have to interpret it and create stories behind the data I'd been given, do my formulas and say, hey, this is what I think the business should be doing, and that's my interpretation.

Speaker 1:

Now, with Copilot, it gives you much more power, where it could also answer questions we maybe didn't think of asking. So I think that, combined with natural language, is really going to be a game changer, and it's going to give a lot of businesses a leg up, especially small businesses that may not be able to afford big data or an analyst. They can just use Copilot and say, hey, what can I do differently to increase my revenue? And it's going to look at all that information, and of course your knowledge base, maybe all your marketing materials, everything about your business, and it'll tell you, hey, you should do it this way, and here's the data to support that. Super exciting.

Speaker 2:

It's all too much for me still.

Speaker 1:

It's going to change.

Speaker 2:

It is. It's changing the way individuals do business, just as the automobile changed the way people work, and every few years something new comes out and changes the way we work. We just have to learn how to use it as a tool, I think that's the important thing, and truly understand its limitations to set the expectations for how it can be used in a business application and even in everyday life. And then also know the importance of verifying the information it gives back to you, not just taking a run with it. It can give you the efficiencies of presenting the data, but you still need to analyze and review the data.

Speaker 1:

One of the wild things, Kamil, that you mentioned is connected Copilots, having one Copilot speak to another because it does a different function. What's wild to me, looking way ahead, is where you have two ERP systems with all the data about your companies, and a Copilot that knows about that communicates with maybe a vendor or a partner and their Copilot. Just have them talk to each other and say, hey, this is how we can work together and increase our efficiency and profitability, and have Copilot basically run the whole thing, and your job is just to make a decision at that point.

Speaker 3:

Yeah yeah, yeah.

Speaker 1:

Not wild.

Speaker 3:

Run with it, yeah. I don't know what shape it's going to take. Obviously there are obstacles: when you give insights, whether or not somebody's going to use them, there has to be trust.

Speaker 3:

Everyone needs to go back to the numbers. When something is revealed to the user, he needs to go back and check a lot of different areas before he commits to placing an order. We've seen some chatbots that were deployed too prematurely, and some orders happening, like a BMW for $1 or something like that. Things like that happen when someone knows how to break something, and we know there are things that can be broken this way. It's interesting, but maybe let's not go too far with it, because it's going to frighten people.

Speaker 2:

Oh yeah, I think so too. Kamil, you're doing some great things with AI and data within the space of Business Central and other areas. What are you working on now?

Speaker 3:

So we are working on a new addition as well, and we're probably going to reveal it at Directions EMEA. I cannot tell you what it is, because it's going through the process of approvals and patents and everything else, but it should be exciting. It's something that hasn't yet been addressed in Business Central, so hopefully that's going to be an interesting point. At the moment, we have three apps on the marketplace, on AppSource: Customer, Item and Financial Intelligence. We see that it's a new space, a new category, where you're delivering insights based on data, and I think the most difficult aspect of it is that it's not a reporting tool, it's not BI.

Speaker 3:

It's something that goes ahead and says to you: well, listen, this is what we're spotting here, what are you going to do about it? It's not about, oh, let me go back to the numbers, let me calculate everything, and then we'll make a decision. It actually tells you certain things and then waits for your decision, right?

Speaker 2:

So is it something where you can predict? One of the big challenges is forecasting, or prediction. A lot of businesses want to make decisions based on trends, and there are always variable factors that need to be accounted for, so having a tool like that would be amazing, a huge time saver, and would also help businesses run a little more efficiently.

Speaker 3:

Yeah, exactly. At the moment, if you think of even collating the information about your historical sales, when are you going to spot the difference in a trend? When you're reviewing your reports, when you're reviewing your dashboards, somebody has to spot it. Or you just have it right away inside Business Central: it tells you that 10 SKUs in your inventory have jumped from one category to another and you should probably review what to do with them, because it's something that hasn't been seen before, or it's not following the regular pattern. So adding machine learning, and then demystifying all of that information as well, that's one of the products.
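The alert sketched here, a SKU suddenly jumping out of its usual pattern, can be approximated with nothing more than a z-score against each SKU's own history. The sales figures and the threshold below are invented for illustration; Data Courage's actual model is certainly richer than this.

```python
# Sketch: flag SKUs whose latest sales deviate sharply from their own
# history, a minimal stand-in for the category-jump alert described above.
from statistics import mean, stdev

# Invented monthly sales per SKU; the last value is the latest period.
history = {
    "SKU-A": [10, 11, 9, 10, 12, 30],   # sudden jump
    "SKU-B": [50, 52, 49, 51, 50, 50],  # stable
}

def flag_unusual(history, threshold=3.0):
    """Flag SKUs whose latest value is a statistical outlier vs. their past."""
    flagged = []
    for sku, series in history.items():
        past, latest = series[:-1], series[-1]
        z = (latest - mean(past)) / stdev(past)
        if abs(z) >= threshold:
            flagged.append(sku)
    return flagged

print(flag_unusual(history))  # -> ['SKU-A']
```

The appeal of surfacing this inside the ERP, rather than in a dashboard, is that nobody has to be looking at the right report on the right day to notice the jump.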

Speaker 2:

No, that is a challenge. It's a challenge because you can analyze the data, but with that forecasting or predictive type of model, take fashion: they have seasonality, but in any business, it's also about what was happening at the time. If you think back, if you remember COVID, at one point that's all you heard; I heard the word COVID just as often as I hear the word Copilot today, and now I can go weeks without hearing it. In that case, a particular company's sales information may be different because of those outside factors. And depending on the type of business you have, storms or other weather-related events could impact your sales. So being able to analyze that information with the external forces, along with your internal data, is a dream of mine.

Speaker 3:

That aspect we don't cover in the app itself, but we do have projects where we're doing exactly that. Demand forecasting is one of the aspects we cover, and then we talk with the business and unravel what has an impact: what are the other aspects or factors we need to put into the data set to be able to create a model that helps them make those decisions? So this is part of it; we already have it.
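A minimal sketch of the idea of folding an external factor into a demand forecast, assuming a binary "event" flag (a promotion, a storm) is recorded alongside each period's sales. Real demand-forecasting projects use far richer models; the numbers here are invented, and this only shows how an outside signal enters the data set at all.

```python
# Sketch: a demand forecast that conditions on one external factor.
from statistics import mean

# Invented (units sold, external event occurred?) pairs, one per period.
observations = [
    (100, False), (96, False), (104, False),
    (150, True), (158, True), (98, False),
]

def forecast(observations, event_expected):
    """Forecast the next period from event-period or normal-period averages."""
    base = mean(units for units, event in observations if not event)
    uplift = mean(units for units, event in observations if event)
    return uplift if event_expected else base

print(forecast(observations, event_expected=True))   # -> 154
print(forecast(observations, event_expected=False))  # -> 99.5
```

Even this toy version shows why it can't be done "at scale" generically: someone has to identify which external flags matter for this business before they can be put in the data set.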

Speaker 2:

My dreams have been answered right here today.

Speaker 3:

Yeah, but it's not that simple. I had this question before, after my session. Somebody said, well, can we scale it, can we do it at scale? No, you cannot do it at scale, because you need to figure out what the other factors are that affect this, and it goes down to regions, seasonality, as you said, weather and other aspects. And you can't always be 100 percent right; there's a whole pattern behind it.

Speaker 1:

If I can add something funny to that: you look at all those different aspects, but I also want to add the emotional component, because I'll tell you, you can give them all the data you can and forecast, and there's always going to be somebody in the company, somebody who's been there maybe 20-plus years, who says, that's not how I feel, I always have to manipulate the data because I think this or that. That is typically the challenge: you can have the greatest tool, but if the behavior stays the same, it's almost like it means nothing.

Speaker 2:

You can never forget to account for that human behavior factor, and the behavioral changes that are sometimes necessary with humans are an aspect of it. Well, Kamil, it's always a pleasure. Thank you for spending the time with us today. Everybody's time is truly valuable, and I appreciate you taking that time with us, because it truly is the currency of life: once you spend it, you never get it back. I look forward to talking with you about this more in person. Where will you be? Will you be going to Summit in October, Days of Knowledge in September? Are you coming over to the

Speaker 3:

States anytime soon. So yeah, yeah, yeah. So all of that.

Speaker 2:

All of that, all of that.

Speaker 3:

So we're going, for sure, to Directions EMEA. We have a meeting at IAMCP, because we just started a Polish chapter; I'm one of the board members of the Polish chapter of IAMCP. And we're definitely going to Directions North America and Summit.

Speaker 2:

Excellent, excellent. I look forward to seeing you there. We'll have to have a drink and catch up and don't forget what you forgot last time.

Speaker 3:

The krówki. I will bring them, the whole package.

Speaker 2:

Yes, I was disappointed. I was all excited.

Speaker 3:

I know. I even had you on my session just because of that.

Speaker 2:

Yes, yes, Just to say I'll bring it.

Speaker 3:

Yes, I hope the customs and the Homeland Security will not, you know, take some for themselves.

Speaker 2:

I wouldn't be surprised. You could just send me a box if you want.

Speaker 3:

Exactly.

Speaker 2:

And then we'll just be square, and I'll bring it with me and act like I got it for myself. But again, thank you for taking the time to speak with us. If somebody would like to contact you to learn more about the great apps you're creating, the great services you're providing with AI within the Business Central world, and see some of the other great things you're doing, what is the best way for someone to get in contact with you?

Speaker 3:

LinkedIn is always very direct; I'm always on LinkedIn. Also datacourage.com, to learn about what we do. And email is a good option: Kamil at datacourage.com.

Speaker 2:

Do you have Copilot read your email?

Speaker 3:

And summarize it? I do. I do have some summarization tools.

Speaker 2:

I need to get that.

Speaker 3:

I need to use that, and also for the meetings. Honestly, I love the meetings part of it. It's just a revelation.

Speaker 2:

You don't have to pay attention as much. No, it's great. I appreciate it. Thank you again, and I look forward to talking with you further. Every time I talk with someone about AI, I learn just a little bit more, and hopefully I'll get it all soon. Thank you again.

Speaker 3:

Thank you so much. Take care All the best. Have a good one. Bye, ciao, ciao.

Speaker 2:

Thank you, chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.

Speaker 1:

Thank you, Brad, for your time. It was a wonderful episode of Dynamics Corner Chair. I would also like to thank our guest for joining us, and thank all of our listeners for tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16. You can see those links down below in the show notes. Again, thank you everyone. Thank you and take care.
