Dynamics Corner

Episode 421: Business Central Buzz: Agents, Page Scripting and Fire Fighting

Vincent Nicolas Season 4 Episode 421

Get ready for a mind-blowing episode of Dynamics Corner as hosts Kris and Brad sit down with Vincent Nicolas to dive into the whirlwind of AI and Microsoft Dynamics 365 Business Central innovation! Fresh from a conference, Vincent spills the tea on how AI transforms everything from page scripting to tenant communication. From automating testing with smart test data to wrestling with security and human oversight, this convo is packed with insights on orchestrating AI agents and building trust in faceless tech. Plus, they tackle the big question: how will AI reshape jobs without replacing you? With plenty of “mind-blowing” moments and practical takeaways, this episode is a must-listen for anyone ready to ride the AI wave in Business Central!

Send us a text

Support the show

#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

Speaker 1:

Welcome everyone to another episode of Dynamics Corner. My mind is blown. I don't even have a line for it. I'm your co-host, Chris.

Speaker 2:

And this is Brad. This episode was recorded on April 22nd, 2025. Chris, Chris, Chris, another mind-blowing episode, and we were able to talk about one of my favorite topics, page scripting. We also talked a lot about Business Central with the AI agents, and we also talked a lot about MCP servers. With us today, we had the opportunity to speak with Vincent Nicolas. All right, good afternoon, how are you doing?

Speaker 3:

That's better. I can hear you and see you guys now. Oh, excellent.

Speaker 2:

That's cool.

Speaker 3:

Excellent.

Speaker 2:

Excellent. But I don't know what's worse. No, you go.

Speaker 3:

Excellent, excellent, excellent.

Speaker 2:

No, that's good. Good afternoon, how are you doing? Hope you recovered from the wonderful trip to Vegas.

Speaker 3:

Yeah, actually I'm just back. I'm still in the jet lag. We took a vacation and went to Costa Rica after Vegas.

Speaker 2:

Oh, very cool, nice, nice we just landed on Sunday.

Speaker 3:

So yeah, still recovering a little bit from the jet lag.

Speaker 2:

Oh, the time difference. I have a three-hour time difference, and I only went to Vegas for the conference and back, and it took me, I think, a few days to recover just from the three-hour time difference and being there for a few days. So you had an opportunity to get used to the time difference.

Speaker 2:

Then having to go back must be a little more challenging. Three hours is nothing, man. That was my point: three hours isn't a lot. Someone like yourself, I think you had eight hours, I believe, for Vegas, or nine hours, excuse me. Yeah, it's almost a full day. It makes for a challenge to go through it too. So how was your experience at the conference? I thought that was one of the best conferences that I had attended.

Speaker 3:

Yeah, it was great to interact with partners and the community, as usual. We were discussing internally how much we come with specific intentions. We want to drive a message; we have an agenda. We want the channel to basically ramp up on this whole AI thing. That was one of the big things at the conference, and we really want people to start doing things with that fantastic new technology. So that's the message we were trying to deliver at that conference, and I hope we succeeded. I mean, you guys tell me. My personal impression is that the channel is on a whole spectrum. It goes from people who realize there's something there which probably is going to be interesting and valuable for their businesses, but they haven't started yet.

Speaker 3:

Up to people who are deep into it and already doing prompt engineering and have their own AI feature. And then there's everything in between. But I think it's partly also because this thing is going so fast that even we have a hard time keeping up with it. So I imagine that it's hard for everybody to figure out: what should I do? There are some challenges with it.

Speaker 2:

I think the big point that you made is it's going too fast and it's faster than I think users can adopt.

Speaker 2:

And the conversations I typically have is, if you look at the advances in civilization, it took us 50,000 years to make an axe, and then, as time progressed, the amount of time between new technologies shrank. Now it's almost like Moore's law with the chips: you can only go so fast, and the window is getting smaller and smaller, where it almost feels like every day there's a new feature, and I don't know if our brains can keep up with it.

Speaker 2:

But it's also, from the customer's point of view, how can they change their business to adopt these technologies so quickly as well? So it's not only the partner point of view, being able to accept the technology and work with customers to implement it. It's also customers asking: how can I, in some instances, make a radical shift in my business to take advantage of these new features and functionality? And at the rate it's coming out, it's almost like trying to change the wheels on the bus as the bus is driving down the highway, in some cases.

Speaker 1:

And I think that's a big challenge, brad, because I know it's a partner-focused event with ISVs right, and the challenge is two parts. It's the way I see it. One, you know, considering there's different sizes of partners too, so there's small partners, large partners, medium partners. How do you utilize the co-pilot within the organization first, and then you're expected to implement it for a client. So not only you're trying to figure out how would you use this day-to-day within your organization and then also, at the same time, convince your customers, your clients, how do you use co-pilot when, in fact, you're also still trying to figure out yourself? So can you build a solution for a small partner? That's a little tough. You're like building a solution around Copilot At the same time. You're a small shop and then, at the same time, trying to convince your client to use Copilot. That's a big challenge.

Speaker 2:

Yes. I do have one big takeaway from the conference, and I think it's not narrowed to the AI, something that I realized from conversations with partners in some of the sessions. But before we jump into that, would you mind telling us a little bit about yourself?

Speaker 3:

Yeah, you mean, like, in general, introduce myself?

Speaker 3:

Yes, yes, yes. So yeah, my name is Vincent Nicolas. I am the chief architect for Business Central. I've been working with Business Central for, I think, more than 10 years now. We have a relatively small team working for me, and what we look into is mostly innovation, new technology. So obviously we do a lot of things with AI at the moment, because that's the new thing, and we're trying to do the groundbreaking stuff: what's the next big thing?

Speaker 3:

So some of the things we're working on will never make it to the product, but some of them you might see in two, three, four releases from now. We try to be a little bit ahead of the game. And at the same time, if there's a fire burning, like a major performance issue or something, then it's all hands on deck. Some of our folks are among the best folks we have in our organization, so they go in and try to fix whatever needs to be fixed. That's kind of the charter of what I do.

Speaker 2:

You're the innovative firefighters. So you're working on keeping things moving forward, but when there's a fire, you're the special reaction team that jumps in to work on it. That's a big responsibility, and I'm interested to hear, with some of the things we just talked about with innovation and how fast it's coming out, how you plan three or four releases ahead, because the way technology is moving, we don't know what tomorrow will bring, in a sense. But to go back to the one big thing, and I had mentioned it in the previous episode: I think what came out of it is we have a lot of features and functionality. I think this is across the board, and it stemmed from conversations during some of the sessions and also some of the networking events.

Speaker 2:

We have a lot of features and functionality within the application, and it's great to say that we have them. A lot of us work with it from the partner point of view, and this is the responsibility of the community, the partners; it's not a fault of Business Central, because Business Central, to me, has been my life and I'm a super fan of it. What I realized in conversation is we have all these features and functionalities, and I think what's being missed in some cases is how to apply them in the real world. So we have features and functionality that are added, but what is a good user story for how to use them and, more importantly, when to use them? Because in some cases, you could set something up one way or another, or use a different part of the application, to make for a better implementation. So I think that was a big takeaway that I had from some of the sessions, because the sessions themselves did have good user stories for when to use the features and functionality. And that's what I realized; it was an aha moment for me. It's not just "we can do this."

Speaker 2:

Now we have an AI agent, we have project orders, we have this. Okay, well, now you have an AI agent: this would be a good case for when to use the AI agent, this would be a good case for when to use a production order, this would be a good case to use an assembly order, and so on.

Speaker 3:

So I'm glad to hear that. From the Microsoft side, we will ship some AI-based features; we'll keep doing that. But personally, I think very quickly we'll be somewhat, I don't want to say limited, because there are tons of things we can do. But the real value, back to your point, Chris, from before, the real value will come from the partner channel, because we will never implement an AI feature which is industry-specific. And that's, I think, where the real value is.

Speaker 3:

If you're working, I don't know, in the construction business or wineries or whatever, I'm sure there are tons of scenarios there that can leverage AI, and that will need to come from you guys; that will need to come from the community and the channel. Our hope is that at a conference like this one, when you see what others are doing, you get inspired. It might not be your own domain or your own industry, but it can inspire you to say, okay, they are doing that; maybe that could somehow translate to my domain, to my customers, and the solutions I'm working on. I think that's one of the great things with these conferences: if that type of osmosis can happen, then I think that's great, and that's how we will be successful, all together.

Speaker 2:

No, absolutely, and that was my takeaway from that, right along with what you were saying: it is on us to have that good user story, like you said. Because you have the features, you have the functionality, and it's up to the community to figure out how to apply it, or to come up with the ways to apply it for specific functions. Because we all know, even with what we do downstream from the product, it's a challenge. You can't have something that solves everybody's problem without a little change, maybe a configuration. Some people use variants, some people don't, right? But the functionality is in there, and how you use it is all part of the implementation. You had mentioned firefighting; we can talk about some firefights afterwards. But with technology moving so fast and technology changing, how do you plan three or four releases out? Do you have more detail for the next release, and then, three or four releases out, sort of a broad-stroke desire?

Speaker 3:

Yeah, so, as you know, we have a six-month cadence, so we plan at minimum six months ahead. But still, I would say we have quite an agile process. We need to make some kind of plan because we need to tell everybody what's going to be in the next release. But, that said, we still have a little bit of wiggle room to modify the plan within the six-month period, because, again, we have a pretty successful, I would say, agile process. So that's for what's already planned.

Speaker 3:

But we don't do detailed planning more than six months ahead, because this is the big learning, I think, in our industry: these waterfall models just don't work, right? Whatever you plan beyond six months probably is not going to happen, or it's going to look totally different. And the world is changing so fast now that there's literally no point in doing that. So that's, overall, from a BC engineering team standpoint. In particular in my team, we do a lot of experimentation and prototyping, as I was mentioning before. Some of it doesn't make it to the product, or maybe not yet.

Speaker 3:

We have some things in a drawer that we haven't shipped for a while. We have tons of ideas and things we could do, but we choose not to because we prioritize other things. And I sometimes have some interesting conversations with my peers: okay, I have this thing which I really want to get in the product, and, depending on what other priorities there are, we can have interesting discussions about it, right? But in the end it's all a matter of prioritizing things against each other. And there are always things that come out on top, things like security; they're like priority zero. Any security work always takes precedence.

Speaker 3:

Then there are things like compliance that we also need to do. So these are the things that are not negotiable; they're always at the top of our backlog. And after that, everything is a question of what we want to prioritize. What we try to do is, at the leadership team level, we try to set some strategic goals for the product, and then we try to align the backlog so we execute on that strategy afterwards. So that's more the detailed planning of it.

Speaker 1:

You must have the best job, to be able to create some ideas and say, maybe we should do this next, or we have a little bit of time, let's try to scoot this in there, and be as creative as possible.

Speaker 3:

It's a really great job in that sense. But, okay, I make it sound maybe a little more romantic than it is, because it's not like we just spend all our time prototyping and having fun with whatever never makes it to the product. I can't go and spend all my resources on things which are not relevant for the product, so these things need to be, to some extent, relevant to the product. So right now we are doing a lot of things around AI, because that's our number one strategic goal at the moment. I can share a little bit of what we're working on. Some of it is confidential, but I can share the part which is not. So we're doing some more experiments with the page scripting thing.

Speaker 2:

Right there, you just lost me, because I am, I don't want to say the biggest, but one of the biggest fans of page scripting.

Speaker 2:

So let's talk about some of the things you have. I will tell you what I love about page scripting. I saw the roadmap, and I wish you could fast-forward everything that was listed on it, because I did two regional sessions on it. And I don't even want to interrupt you on this, but I had to, because you hit a trigger for me. I was shocked that no one in the room knew of page scripting. But after the session, both of those sessions ran over, and everyone had so many visions of how they could apply page scripting from the conversation. And then also at Directions, we did a lengthy page scripting session that encompassed a little bit more than page scripting's use for user acceptance tests, some other things that you could do with it.

Speaker 1:

So keep going with your list, but we can definitely. That's a long-term, that's a long-term favorite man.

Speaker 3:

You know, it's like what I usually say when we discuss priorities: there's hardly any feature like it. So I showed it early, I think at Tech Days, even before it was shipped, when it was the prototype, and at that time I said, well, this is something that might or might not make it to the product. Man, the feedback I got was totally overwhelming, and people would literally stop me in the hallway at conferences like Directions and ask me, when are you going to ship the page scripting? I never experienced that with any other thing we put in the product. So that's clearly something that people see a lot of potential in. So the first thing we need to do is remove this preview tag. It still has a preview tag, so we need to remove that. And that's not because it's not production quality; there are just a few things around compliance, accessibility, documentation, things like that, to get in order.

Speaker 2:

Is that what the preview tag is? I've had the question of what preview means for a feature in the product, because there have been other features that had the preview tag, and I was asked in a session what the preview tag means. So maybe, before we continue with that, not to interrupt you again, which, like I said, you have me super excited now, so I'll probably interrupt you on the page scripting, can you explain the significance of the preview tag and what it means for a feature within the product?

Speaker 3:

Good question. So the preview tag essentially means several things. First of all, there's limited to no support. Meaning, you can use the feature, but when we put a feature out there with a preview tag, it means we want to get feedback on that feature before we actually put it in production. But there's no guarantee that we'll ever ship the feature. If the feedback is very negative, or we realize there's too much work, or it's just not good enough, we can always pull it back. I don't recall any example where we've ever done it, but it is a possibility. We could potentially pull the whole thing back and say, okay, we're not going to do it after all. So that's one of the meanings of the preview tag: limited to no support.

Speaker 3:

And there might be some things which are not quite finished and polished, although we tend to have a pretty high bar anyway, even when we put the preview tag on. But there is such a thing as what we call, internally, the Microsoft tax: all the features we release need to be accessible; they need to go through security review, which we have to do anyway because, obviously, it has to be secure even though it's preview; and we need it to be fully documented. And then there are things like versioning. I'll give you an example.

Speaker 3:

With page scripting, we're discussing what happens if we make it production-ready and remove the preview tag. We have this YAML format behind it, and so far we have been free to make changes to it. Tomorrow we can decide to change whatever format we save the scripts in, and if we break your scripts, we don't care, because it's preview.

Speaker 3:

You might have tons of scripts, and they won't work because we changed the format. That's just the way it is when it's preview. Whereas when we put it in production, we can't do that anymore, meaning we'll have to, at minimum, either document the format, which is part of the feature, or be backward compatible with the previous versions of the format, right? Or have some kind of migration path, one way or the other. You might have to reload the script and save it again, or things like that, to update it to the new format. So all these things need to be in place when things are no longer in preview.

Speaker 1:

Yeah, I think page scripting is one of the favorites across the board, certainly for me. It's a solution that solved a problem that spans all different areas of business, and spans across partners as well as clients. Although it doesn't seem like it's AI-driven or Copilot-driven yet, it's such a great feature that anybody can pick it up and use it to solve a problem, or to simplify a process, or for testing, things like that. And, again, it's usable across all business aspects. That's probably one of my favorites, for sure.

Speaker 2:

Oh, you can't stop with me. It was 2023; I mentioned it at Directions. Duilio and I did a session together on page scripting, and we did mention when you introduced it, I think it was June 2023 at BC Tech Days. Ever since I saw that, I've been on board with it, and the uses of it are creative, from documentation to training materials, outside of the user acceptance testing. And I've also talked with some who got creative with ways to enter data, because you can change the YAML file, so they automated doing some tasks.

Speaker 3:

This is another reason why we have kept it in preview for so long: we knew from the very beginning that there was a wide range of possible usages of it. And it's both a blessing and a curse, because if you start saying this is something you can use for documentation or troubleshooting, then you also need to support those scenarios and test for them. So all of a sudden the range of potential utilization expands, which means more testing for us, more support. Whereas if we say, okay, the scope of this is end-user testing, it's a much more limited scope, and it allows us to say, okay, if you use it for anything else, that's at your own risk, right? We don't prevent anybody from using it for other things, which I can hear you've been diligently doing, which is fine. But if you come to us with, hey, I have this scenario where I do documentation or record troubleshooting or whatever, we might say this is not something we support at this stage.

Speaker 3:

So this is another reason why we have the preview tag. But I can share a bit of news with you. We've been very focused on the end-user testing persona; that's basically how we presented that feature. But I think we're shifting the focus a little bit now to be more partner-oriented, because we realize partners are also using it a lot. So we are taking a broader perspective on page scripting than just the end user.

Speaker 2:

I'm so happy to hear that. And, again, I agree with what you're saying; it's the use, as people do get creative, but the ability to test. Is there anything you can share from the roadmap that will be out soon, or is it all...

Speaker 3:

confidential? I can give you my top-10 wish list, if you want. I want that, okay.

Speaker 2:

Well, you don't have to say anything, but if you can sneak in the editing of it...

Speaker 3:

I can tell you what we're experimenting with, and, again, the disclaimer: it might not make it to the product at all. But one of the ideas we have is, imagine when you do your steps, you could somehow involve an LLM at some stage.

Speaker 1:

And I'm not going to say any more. You and I would be talking about that. I think we had spoken to others as well about our wish lists, and I think it was around that. So, yes, that would be fun.

Speaker 2:

Mine, to start: you can edit the YAML file and load it, but it would be to be able to edit the script while it's there. To delete a step, to add a step, to reorganize a step. Because sometimes I'll go through recording a script, then I want to put in a conditional or a validation, and with the conditional I forget to put the end in, so I have to either edit the YAML file or record it over again. And also merging two scripts. I understand that the concept of the scripts is to have small use cases for users, but the ability to maybe put two of them together. I know we can use replay, which I think was a great addition as well, to be able to do a wide range of testing, and now with AL-Go you can also use page script tests. I have to slow down my excitement, I'm sorry. I feel like a kid. I lose focus here.
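The wish-list items above (delete, insert, reorder, merge steps) are easy to prototype outside the product today if you model a recording in memory. A minimal sketch in Python, where the step fields ("type", "target", and so on) are purely illustrative and NOT the real preview YAML schema, which is Microsoft's and still subject to change:

```python
# Toy model of a page scripting recording: a name plus an ordered list
# of step dicts. The field names below are illustrative only.

def delete_step(steps, index):
    """Return a copy of the step list with the step at `index` removed."""
    return steps[:index] + steps[index + 1:]

def insert_step(steps, index, step):
    """Return a copy of the step list with `step` inserted at `index`."""
    return steps[:index] + [step] + steps[index:]

def merge_scripts(first, second):
    """Concatenate two recordings into one longer script."""
    return {
        "name": f"{first['name']}+{second['name']}",
        "steps": first["steps"] + second["steps"],
    }

create = {
    "name": "create-customer",
    "steps": [
        {"type": "navigate", "target": "Customer List"},
        {"type": "invoke", "target": "New"},
        {"type": "input", "target": "Name", "value": "Test Inc."},
    ],
}
post = {"name": "post-order", "steps": [{"type": "invoke", "target": "Post"}]}

print(len(delete_step(create["steps"], 1)))       # prints 2
print(len(merge_scripts(create, post)["steps"]))  # prints 4
```

In practice you would load the recorded YAML, apply edits like these, and save it back; until the format is documented, any such tooling can break between releases.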

Speaker 3:

All of these things are on the backlog. All of these features are obvious, so to speak; everybody wants them, so I'm pretty confident they will make it to the product. I can't tell you exactly when and how we'll prioritize them, even for the next upcoming six months. I think we're still discussing what's going to be in, but there's going to be something, depending on how we allocate resources. It might be a little bit disappointing in the sense that the only thing we do is remove the preview tag, so that's like the most boring version of it. But that means that in the next release, all the resources we'll be using on page scripting

Speaker 3:

will be for new features, right? We'll be able to also add some new stuff. We have Peter Borring, who, as you know, is the PM responsible for page scripting, and he's managing that backlog and doing a great job at it.

Speaker 2:

No, I understand it's not easy. There's a lot that goes into the product.

Speaker 3:

There's so much demand and so much usage of it, so we'll keep working on it. I'm very happy to hear that.

Speaker 2:

Thank you. So I interrupted you when you were talking about some of the things that you were working on; we jumped on the page scripting, but it's something that just took me for a side rail here because of my passion for that. I could talk about it every day at every conference with everyone. So what are some of the other things that you have on the back burner, in the pipeline, some of the cool things? And AI within page scripting would be cool too, for it to be able to dynamically change. Yeah, real-time response.

Speaker 3:

Talking about it, just a hint, right? You can try some stuff with page scripting and AI today. Let me tell you how. I've done some experiments with it, and there's actually some potential there. So here's the thing: LLMs are really, really good at generating structured output, especially if you use few-shot prompting and those kinds of techniques, where I'm telling them, this is the kind of output I want. So you can do things like take a YAML recording, put it into the prompt, ask the LLM to generate more tests doing other things, and output that particular format, and give examples of recordings you have. And actually that works pretty well. So you can do cool stuff.
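The technique Vincent describes, putting an existing recording into the prompt as a few-shot example and asking for more tests in the same format, is just prompt assembly. A sketch, where the instruction text and YAML shape are illustrative, not the product's actual schema:

```python
import textwrap

def build_fewshot_prompt(instruction, example_recordings):
    """Assemble a few-shot prompt: an instruction followed by example
    recordings, so the LLM sees the exact output format before being
    asked to generate new variations."""
    parts = [instruction]
    for i, recording in enumerate(example_recordings, start=1):
        parts.append(f"### Example {i}\n{recording.strip()}")
    parts.append("### Now generate one new test recording in the same format:")
    return "\n\n".join(parts)

# Illustrative recording; the real page scripting YAML looks different.
example = textwrap.dedent("""\
    name: open-customer-list
    steps:
      - type: navigate
        target: Customer List
""")

prompt = build_fewshot_prompt(
    "You generate UI test recordings for Business Central pages. "
    "Output only YAML matching the examples.",
    [example],
)
print(prompt)
```

You would then send `prompt` to whatever model you use and save the YAML it returns as a new recording to replay.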

Speaker 2:

I'm trying that after this conversation, I assure you, because I've used LLMs going the other way, to create user documentation, not to create tests. I can honestly say I didn't try that.

Speaker 3:

So go experiment with it. First of all, LLMs are really, really good at generating structured output, especially things they have seen a lot of, and there's a lot of YAML on the internet.

Speaker 3:

There's a lot of JSON, and JSON is even better, because these days, I'm sure you know, these GPT models are even trained specifically to be good at generating JSON. So they're really good at it, but they do a pretty good job with YAML, which is not too far from it anyway. So try to play around with that. That's fun.

Speaker 2:

I will try that, and if I have some issues, I'll use the new feature for converting JSON to YAML and vice versa. So I will do some tests with, as you had mentioned, if there's not a lot of YAML with the structure, maybe taking a script in YAML, converting it to JSON, telling it to make some changes, then converting it back to YAML, to see. What do you mean, what do I use?

Speaker 3:

Use a chat. Gpt Copilot.

Speaker 2:

It depends what I'm working on. I use Copilot for coding and such; I'll use GitHub Copilot. I'll have files open in the editor, and then I'll choose the different models. I found Claude Sonnet with AL seems to work better, yeah, and then the other models work, depending on what I'm going to work with. But I think with the YAML I'll try to see what Claude Sonnet does with creating the tests.

Speaker 3:

You can try Azure AI Foundry, if you haven't tried it. Azure AI Foundry? I haven't worked with it that much.

Speaker 2:

There's only so much I can work with. I've been working with some local language models primarily, other than GitHub Copilot.

Speaker 1:

Marcel wrote a blog about how he used Azure AI Foundry.

Speaker 3:

It's just a UI, a front end, but the thing that's good about it is that you can tweak some of the parameters in the UI, which is one thing. But it also has a drop-down box where you can specifically ask it to generate results in JSON, and that leverages the OpenAI API, because it's supported at the API level. So what it does, when it sends the request, is set a flag in the request that tells the LLM to return the response as structured JSON, and you will get JSON back.

Speaker 3:

Funny enough, in the prompt you still have to say it has to be in JSON, even though you've set the drop-down, otherwise you'd get an error. When you use it directly from the API, you don't have to do that, but in Azure AI Foundry somehow you have to. It works really, really well with so-called few-shot examples. There's also a UI for that in Azure AI Foundry where you can enter some examples of the format you want as a response, and that works extremely well if you want to play around with it.
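The JSON flag Vincent describes can also be set when calling the chat completions API directly. A minimal sketch of what that request body looks like; it only builds the payload rather than sending it, and the model name and message wording are my own assumptions, not anything from the show:

```python
import json

# Sketch of the request body accepted by an OpenAI-style chat
# completions API when you ask for structured JSON output.
# "response_format" is the API-level flag Vincent mentions; note the
# prompt itself must still mention JSON, or the request is rejected.
def build_json_request(user_prompt: str) -> dict:
    return {
        "model": "gpt-4o-mini",  # assumption: any chat model works here
        "messages": [
            {"role": "system",
             "content": "Reply in JSON with keys 'steps' and 'summary'."},
            {"role": "user", "content": user_prompt},
        ],
        # The flag: the model is constrained to emit valid JSON.
        "response_format": {"type": "json_object"},
    }

payload = build_json_request("Convert this page script to JSON.")
print(json.dumps(payload, indent=2))
```

The same dictionary could be passed to an HTTP client or an SDK; the point is just that structured output is a request-level switch, not prompt magic alone.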

Speaker 2:

I'm putting that on the list. I will use Azure AI Foundry.

Speaker 3:

It's a mouthful for me.

Speaker 2:

But I will try that as well. I'm super excited, and I appreciate the suggestions for testing page scripting with some LLMs to help create additional tests. That's exciting. So what is...?

Speaker 3:

LLMs are very good at generating large datasets. If you need to generate a large dataset for whatever purpose, an LLM is really good at that. Test data, that's the other experiment I wanted to try with it.

Speaker 2:

Again, it's unfortunate. We have all this great technology, and there's so much I want to play with and so much I want to do that it's a challenge getting to all of it. So I also want to play with my idea, as you said, about test data, because maybe a year or so ago I had a conversation on one of the podcasts about having open-source test data. We have the Contoso data now, what used to be the Cronus data, for the applications, but it's not good for every scenario. Now, if we could have LLMs create test data following the structure of Business Central, and use page scripting to load the data, or configuration packages, whatever is a quick and easy way to do it; configuration packages would be a little more standard. I think it would be a great experiment to create test data for customers, sales orders, items. That way you could get volume, and, you know, I'm an expert at making bicycles and coffee machines.
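A sketch of that open-test-data idea: ask an LLM for rows, then flatten them into something a configuration package could import. Everything here is hypothetical; the field names mimic BC's Customer table, and the "LLM response" is hard-coded for illustration rather than fetched from a real model:

```python
import csv
import io
import json

# Pretend this JSON came back from an LLM prompted with something like:
# "Generate 3 customers for a toy-shop demo company as a JSON array."
llm_response = json.dumps([
    {"No.": "C0001", "Name": "Tiny Trains Ltd.", "City": "Lyon"},
    {"No.": "C0002", "Name": "Puzzle Planet", "City": "Porto"},
    {"No.": "C0003", "Name": "Kite & Co.", "City": "Aarhus"},
])

def to_config_package_csv(raw: str) -> str:
    """Flatten LLM-generated rows into a CSV that a configuration
    package (or a page-scripting loop) could then import."""
    rows = json.loads(raw)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_config_package_csv(llm_response))
```

The interesting part is the division of labor: the LLM invents plausible volume data, and deterministic code handles the structure BC actually expects.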

Speaker 3:

So I'll tell you another interesting anecdote about that. You've seen our sales order agent, right? We demoed it at Directions. When we started testing it, we got pretty good accuracy in some of the tests we had. The scenario was: send a mail and order some chairs, some furniture, and a coffee machine. Basically, the demo data we have in BC, right?

Speaker 1:

And it was working actually pretty well.

Speaker 3:

And then some people tried some other scenarios with some other data, and all of a sudden accuracy dropped significantly, and at the start we couldn't really understand why. How come ordering chairs and such works fine, but ordering something else doesn't? Then we went and did some experiments totally outside of BC, just with ChatGPT, and we realized our test data in BC is out there. It's on the internet, and LLMs have seen it. They know about the Amsterdam chair; they know about all the things we have in the BC database, because it has been around for so many years now.

Speaker 3:

And LLMs are trained on the internet, so they have seen that data, and they're pretty good at recognizing it. That's one of the interesting side effects of training on the content of the internet. So what we had to do, back to your point, Brad, was create a test set which we don't publish. We created new test data, also using an LLM, actually. Basically, we created a toy shop with children's toys and all those things, which is totally different from what we had, and that's what we're using to test our AI feature, and we make sure we don't put it out there.

Speaker 2:

So LLMs don't come by this data and get biased on it. It does know it, because I did a blog post on bulk image import, and I went and searched and used Copilot and said, create me an image of an Amsterdam chair. And it looked like the blue, or whatever color, I forget the colors of the chairs, but it created chairs similar to the pictures in the sample data.

Speaker 3:

I'm sure you can ask. I didn't try it, but go to ChatGPT and ask: what are the chairs in Business Central? I'm sure it will come up with a list. I'm pretty sure that works.

Speaker 2:

I will try that, but I'm pretty sure that works. This is all just amazing. We've gone, just in my lifetime, from pen and paper to a computer, a system, an LLM, I don't even know what to refer to it as. I feel like I'm talking to a human sometimes, with the information it gives back in conversations, even now with a lot of the voice chats you can have with some of the applications. I just can't believe what has changed in my lifetime, and the generations now are growing up with it.

Speaker 1:

It's all they know; it will be all they know, too. That would be the day, Brad: using Copilot voice in Business Central one day. That would be fascinating. When it talks back to you, that's going to be amazing. That's not really hard, that's not far off.

Speaker 3:

I'm sure you probably can do that today somehow. You could write an add-in that does it. It's just scaffolding, because these services have text-to-speech and speech-to-text models, which are not LLM models, and it's just wiring them together. It's just plumbing in the end: you wire them to the next thing, and it works.

Speaker 3:

Actually, we did. I didn't manage to show that when I showed page scripting, and now we're back to page scripting. You said it was in 2023, so I'll take your word for it. Michael from my team implemented it, not me. We actually had a prototype where we wired Whisper, which was back then the speech-to-text model, into it, and you could basically talk to BC and say, hey, do this and that, and it would generate a page scripting script and execute it.
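The prototype Vincent describes is essentially three stages wired together: speech to text, text to a scripted step, and a player that executes it. A toy sketch with stubbed stages; the real prototype used Whisper for the first step, and nothing here is the actual implementation, just the shape of the plumbing:

```python
# Stage stubs standing in for real services: a speech-to-text model,
# an LLM that maps free text to a page-scripting step, and a player.
def transcribe(audio: bytes) -> str:
    # In the prototype this was Whisper; here the result is faked.
    return "create a sales order for customer C0001"

def text_to_script(command: str) -> dict:
    # An LLM would map the command to a page-scripting step; faked here.
    if "sales order" in command:
        return {"type": "invoke", "target": "New Sales Order"}
    return {"type": "noop"}

def run_pipeline(audio: bytes) -> dict:
    """Wire the stages: speech -> text -> script step to execute."""
    return text_to_script(transcribe(audio))

print(run_pipeline(b"...fake audio bytes..."))
```

Swapping either stub for a real model call doesn't change the wiring, which is Vincent's point: the hard parts are prebuilt models, and the add-in is plumbing.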

Speaker 2:

Dude, that's awesome. Can I ask you this question? You don't have to answer it; obviously, if it's confidential, don't answer it, but if it's a theory, you can talk.

Speaker 2:

I see ERP software becoming faceless, and what I mean by faceless is: you have the UI, the business logic, and the data layer, and you can go to an implementation and, in essence, have the UI created from a core set of actions, even to the point where you're using voice. I know you can do it now with email; the sales order agent is getting there, where some of the data input will be just specifically what you need for an implementation, from a core set of actions. Even if a salesperson is in a car, they had a conversation, and they could say: create a sales order for Chris for an Amsterdam chair and a sport bicycle with these rims, and give him a 20% discount. And automatically the sales order is created and put into the process and the system.

Speaker 2:

I have seen AI have the ability, someone demonstrated it for me on Business Central, to find the actions on the page, and it's outside of Business Central, not something within it. This is how creative people are getting. It actually was able to find the actions and the buttons on the page and do something. So my thought was: we won't have a standard UI in that sense. Is that the vision?

Speaker 3:

Yeah, for sure. What you're saying is: everything is an agent, basically, right?

Speaker 1:

Yes.

Speaker 3:

So there's a lot of discussion around that, and there will surely be a future where we'll have AI everywhere. Right now, agents are a big thing. You've seen our sales order agent demo, right? We keep the human in the loop a lot: you have to approve what the agent is doing at many of the steps. When you get an email, you have to look at the email and say, okay, proceed. And when the agent creates a sales quote, you get to look at the quote, and at the email it actually crafted, before it gets sent to the customer. But we program those steps; it's not in the nature of LLMs or agents to have the human in the loop. We implement it that way, and we could have skipped it. But there are tons of reasons why, at this stage and maturity of the technology, and given how people psychologically respond to this level of automation, we want humans to be in the loop.

Speaker 3:

But Jacob, who is our UX expert on the BC team, has a great way to talk about this. He talks about a dial where, on one end, the human is very involved at basically all the steps, and the agent only progresses and executes each step as long as you approve it. That's the very conservative approach. Then you can potentially dial all the way to no human in the loop, where the agents just do what they're supposed to do and there's no human intervention. There's also another way to look at it: people use the analogy of self-driving cars. There are these levels, 1 through 4, in the self-driving world, and people apply the same analogy to agents, where level 4 is the fully autonomous version. I have no doubt we'll get there.

Speaker 3:

I think it would be presumptuous to try to tell you what the future will look like, because so many things are happening, and it can go in so many directions. But if I should venture a guess, there are a few things emerging. There will be a certain level of autonomous agents, a certain number of them, and how autonomous they are will hopefully depend on the scenario and how critical it is. You don't want an agent to run, I don't know, a nuclear power plant, probably.

Speaker 2:

I have a full self-driving vehicle and the car drives better than other people on the road.

Speaker 3:

There are things like that which are still a bit sensitive. And then you might agree that some AI decides when to pull down the shades in your home, just to take a rather harmless example. I think there'll be a whole bunch of things in between. Again, it's not going to be either-or: there will be things you trust the AI to do, and some where you will require more control, for sure.

Speaker 2:

Yeah, I like the human in the loop, and I like the point you made about being able to dial it up, because you do have to build trust, for lack of better terms. Again, it's almost like everyone's interacting with AI as if it's a person, but you do have to build up trust so you can see how it's going to react.

Speaker 2:

And I can understand and see a point, in some cases. As I mentioned, I have a full self-driving vehicle, and my daughter was with me and told me the car drives better than I do. It's a progression. So take it from the business point of view, with the sales agent and its human-in-the-loop gates: maybe as the product, the feature, or the agents mature, for lack of better terms, it can become "I want to see all emails" or "I don't want to see emails unless this happens". The user in the implementation, depending on what they're working on, can gate it where they feel appropriate, versus every step of the way. Yeah, you still have the ability to pull the levers, right?

Speaker 1:

I mean, there's a sense of control, but at the same time you have an option. I'm at the point now where I really don't need to interact; I can turn off the human interaction from my side. I can still turn it back on, but I don't need it to prompt me to decide about something before it takes action. If you're a business owner running Business Central, you get to a point where you say: okay, I trust it enough not to have to ask me for approval, because it has a 99.9% success rate. Why do I need to go and interact with it, knowing it'll do it successfully 99.9% of the time? Then I can just turn that off.

Speaker 3:

So in the early previews, it's very conservative. It asks you at every step along the way, and then we see that it doesn't take very long, a few minutes of usage, before people go: approve, approve, approve. But you're totally right; the whole point of it is also to build trust.

Speaker 3:

There are psychological things around letting go of what you were in control of, and it's important. As technologists and software developers, we need to build that trust in the systems with our users, so they can see it's working as expected. Another thing we do, I don't know if you've noticed, is that any sales quote or document created by the agent is marked afterwards. So even if you turned off the reviewing part, you can go afterwards, look at the records, and see they were created by AI. You can review it even after the fact: even if the sales quote has been created and the mail has been sent, you can go and say, okay, this was created by AI, I still want to take a look at it.

Speaker 1:

Yeah, and you can go back and say, okay, maybe I don't trust it anymore, and turn the checks and balances back on, just to make sure I'm comfortable. If you have that control, then I'm okay letting it do its thing and not worrying about it. Maybe I'll check here and there and do quick audits, but I'm comfortable enough to let it run autonomously.

Speaker 2:

With the agents, as we talked about, I think everything can be an agent, and I had some conversations at Directions about agents having specific tasks they do well, just like some individuals are specialized. What I start to think about is cross-stack agents being able to work together. Right now we set up an email inbox and, to go back to the sales order agent, the sales order agent works. But is there a way to trigger and communicate with other agents, maybe within Outlook, Word, Excel, some of the other Microsoft products where Copilot is being embedded as agents? So now we have one flat stack: you could have an agent manager that manages the agents for Business Central and other Microsoft products.

Speaker 3:

That's funny you mention that, because I just got out of a series of meetings on exactly that subject. There's this new standard, if you haven't heard of it, you should check it out, which comes originally from Anthropic. It's called the Model Context Protocol, abbreviated MCP. There's a lot of hype around it right now. In essence, it's more or less what you described.

Speaker 3:

It is interesting because it is a standard. The way it works, if you're not familiar with it, is really simple, but it is a standard, and that's why it's interesting. Basically, it has a discovery component, meaning an agent or a service can publish what it can do and be discovered by a client. And it's also dynamic; it can change as you go.
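That discovery component runs over JSON-RPC: a client asks a server what tools it offers and gets back names, descriptions, and input schemas. A toy sketch of the handshake, written from my reading of the MCP spec rather than with the real SDK; the tool itself is invented for illustration:

```python
import json

# What an MCP server might advertise: a name, a description, and an
# input schema that a client's LLM can read to decide how to call it.
TOOLS = [{
    "name": "bc_top_customers",
    "description": "Return the top 10 customers from Business Central.",
    "inputSchema": {"type": "object", "properties": {}},
}]

def handle(request_json: str) -> str:
    """Answer a JSON-RPC 'tools/list' discovery request."""
    req = json.loads(request_json)
    if req.get("method") == "tools/list":
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "result": {"tools": TOOLS}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "not found"}})

reply = handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
print(reply)
```

Because the client rediscovers this list at connection time, the server can add or change tools and a well-behaved client keeps working, which is the "dynamic" property Vincent highlights.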

Speaker 3:

So, for example, let's say you have a web API you like, and then you change the API. It will still work, because the magic is that, as a client, you inject an LLM into the mix, and LLMs are pretty good at figuring it out: okay, here's the API description, so I can call that, and if you change it tomorrow, they'll figure that out too. So there's this notion of an MCP server. Think of it this way: your BC, Business Central, could be an MCP server, and then you could have another MCP server alongside it.

Speaker 3:

So I'll give you an example, and it's a bit contrived, because I was playing around with it and that's basically the prototype I built just to get my head wrapped around this. We have this API for the top 10 customers, so I exposed that as an MCP server. Then Google Maps also has an MCP server, and so you can write an application. So you need a client.

Speaker 3:

I used Visual Studio Code, because it has a built-in MCP client, but there's also this thing called Claude Desktop that has one, or you can build your own. What it allows you to do is register your tools: I have BC as one tool, I have Google Maps as another tool, and you can have a weather tool as well, which is also an MCP server.

Speaker 3:

And then you can go into the chat and say: look up the address for that customer and give me driving directions. What it does is orchestrate. It figures out: okay, what tools do I have at my disposal? I can look up the address for that particular customer in BC, using the BC API, and then, with that address information, it figures out how to feed it into the Google Maps API, or the Bing API, and gives you the driving directions. Then you can ask, hey, what's the weather there, and it will call the weather MCP server. What's interesting is that you can have a whole range of tools at your disposal, and your client will figure out how to use them. And OpenAI will support that standard very soon in ChatGPT, meaning in ChatGPT you can use any MCP server at your disposal, and there's a bunch of these out there already, so you can ask things about BC and do things like that.
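The orchestration step, deciding which tool answers which part of the question and chaining the results, can be caricatured as a dispatcher. In reality the LLM does the planning from the tool descriptions; here a stub plans, and both tools are invented stand-ins, not real BC or Maps endpoints:

```python
# Registered tools, each a plain function the client can call.
def bc_customer_address(name):        # stand-in for the BC customer API
    return {"customer": name, "address": "1 Canal St, Amsterdam"}

def maps_directions(address):         # stand-in for a maps MCP server
    return f"Route to {address}"

TOOLS = {"bc_customer_address": bc_customer_address,
         "maps_directions": maps_directions}

def fake_llm_plan(question):
    """Stand-in for the LLM that reads tool descriptions and plans
    calls; None means 'feed in the previous tool's result'."""
    return [("bc_customer_address", "Adatum"),
            ("maps_directions", None)]

def orchestrate(question):
    result = None
    for tool, arg in fake_llm_plan(question):
        # Chain outputs: the address from BC feeds the maps tool.
        arg = arg if arg is not None else result["address"]
        result = TOOLS[tool](arg)
    return result

print(orchestrate("Directions to customer Adatum?"))
```

Adding a weather tool is just another entry in `TOOLS` plus another planned step; no glue code between the services themselves.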

Speaker 1:

So would the idea be, eventually, as a business... I'm putting myself in the shoes of a business owner, and I have all these Microsoft products. Is the idea, in the near future, and again we can't tell if this is going to happen, to have an agent that orchestrates within your tenant, pulling in all these different agents based on your prompt, based on your need, and just letting it do its own orchestration of all these different API calls? And each agent does its one thing, and that's all it does. That would be amazing, because then you have a specific agent.

Speaker 3:

So you need a client, and that client can be many things. In the scenario you're describing, you work from a client; imagine, for example, it could be Teams. You could do a chat in Teams and go, hey, give me the latest top 10 customers and the driving directions, and Teams would have these MCP servers: BC as an MCP server, Google Maps as another, and other servers at its disposal to answer your question. But you can also imagine that Business Central itself, in its UI, could act as a client, and if we choose to implement that, it could support other MCP servers, like the weather, or another Microsoft product, Microsoft Sales for example, which may be a little more interesting in terms of business scenarios. That's exactly the idea behind this whole MCP thing.

Speaker 1:

That is wild.

Speaker 2:

I love it. It's mind-blowing, because you can take it back to what I was saying about it becoming faceless, because now the user of a product, and sometimes I get drawn to terms that aren't quite appropriate, is dynamically using the application. You have a bunch of services together, and the agent orchestrates the services from the servers to give you the results. And I like that: the top 10 customers, driving directions, even where they are geographically, and, as you mentioned, bringing in the weather. I could see so much information, and it's going to come down to creating these servers, these endpoints, and having access to the business logic and the data. That will give us the interface we need.

Speaker 1:

Here's the wild thing about this too I don't even know what's going to happen to us as a civilization.

Speaker 3:

The new thing about it is that it is a standard, and as with all standards, it's only going to work if it actually takes hold, if people adopt it. But there's a pretty good chance, because Microsoft is a big player in it, OpenAI is a big player, and Anthropic; they're all frontrunners in AI. The big companies, the big actors in the AI world, are behind this, so there's a pretty good chance it will take hold, and there's already a lot of interest in this technology. It's interesting because it's a standard, and if it really takes hold, a lot of cool things can happen, for sure.

Speaker 1:

Brad, on the faceless component of this, here's the thing. And again, I'm just dreaming at this point; it's going to require some effort. But imagine you have your own central agent for your tenant, and you do business with somebody that happens to have the same capability, maybe another tenant using Business Central. At some point, all you have to do is connect the two tenants together, with those two agents talking, and they'll do their business on their own, based on what my agent knows about my business and what theirs knows about their business. Just have the two tenants talk to each other, and I'm hands-off at that point. That's amazing.

Speaker 3:

So what's really new in that? Because we've had standards before in our industry; I'm sure you're aware there have been a lot of attempts. A lot of the things we're doing today are API-based. We live in a service world: there's a web API for this and a web API for that, and you can do a lot of things. You can query the weather; Airbnb probably has an API as well. Everybody has APIs.

Speaker 3:

But if you want to write an application that aggregates this data and makes different calls to these APIs, that's a lot of work. First of all, you need to learn how to use each API, because they're not necessarily self-describing, and your code will break as soon as somebody changes something in an API. What's new here is: first of all, you describe the API; there's a standard for describing what it does. And then the magic comes from the LLM, because LLMs are really good at handling these changes: okay, here's this API, it looks like this, so they can figure out how to call it. And if, two seconds later, somebody publishes an update to the service and changes the API, the LLM will figure it out. That's the new thing. That's basically the magic of it.

Speaker 3:

So whatever client you are, as you described, leveraging multiple of these MCP servers, it will keep working. Moreover, you can add more servers, more services, in a seamless fashion. That's pretty exciting. There are a lot of interesting applications of that, for sure.

Speaker 2:

I am super thrilled about this, but I have to ground it a little bit, again from the conversations I have with individuals. We're talking about AI creating sales orders. Chris, your idea of having it work across tenants would be extremely beneficial and helpful, but now we have a lot of information within a tenant. We have documents, we have data, we have a lot of points.

Speaker 2:

How can we manage security from the context of the individual making the call, to be able to differentiate when the AI is creating this data, when it creates the vectors? For example, Vincent, you're responsible for the product group and may have access to finance information and human resource information. I work in a different function within your group and shouldn't have access to that. What prevents it, and what measures can you put in place when AI is trained on this information, to use some of the terms? What considerations do you put in place to make sure I don't have access to the payroll files when I'm doing a query or a call, and you do? And also, Chris, if we're crossing tenants, somebody coming in, how do we scope what they can do in this world of all this automation? Because security is a big concern to a lot of individuals.

Speaker 3:

That's a very good question, and yes, that's something you should be concerned about when you develop an AI feature, for sure. But that goes for any automated piece of software you write, whether it's an agent, whether there's AI involved or not. If something is acting in an automated way, you need to be concerned about security, about what it does, and that it doesn't access data it shouldn't be accessing. So I can tell you a little bit.

Speaker 3:

This is the way we addressed it in Business Central. Our agent is executing with the credentials you give it. We have a security model, this permission system in Business Central; I'm sure you're familiar with it. You grant the agent the permissions it needs to have and nothing more, and the agent acts with that set of permissions. You cannot grant it permissions wider than what you have as a user, obviously. And moreover, we took an extra step. The way our sales agent works is that it's a true agent: it works like a human. It's not calling APIs; it's actually manipulating the product through the UI.

Speaker 3:

So it's looking at the page. It's not faceless; basically, what happens is we take the page you are on, we show it to the LLM and say: hey, here's the page, here's how it looks, here's what you need to do. In our case that could be "create a sales quote, what's next?" And the LLM says: okay, click that button. We do that, we get to a new page, and we show the new page to the LLM again: hey, here's the new page, what do I do now? The LLM goes: okay, fill in that value. We do that, and so on and so forth.
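That loop, show the page, ask the model for one action, apply it, show the resulting page, can be sketched with a scripted stand-in for the LLM. No real BC or model is involved here; the action vocabulary and page descriptions are invented for illustration:

```python
def make_scripted_llm():
    """Stand-in for the LLM: a fixed sequence of UI actions, where the
    real agent would derive each action from the rendered page."""
    actions = iter([
        {"do": "click", "target": "New Sales Quote"},
        {"do": "fill", "target": "Customer Name", "value": "Adatum"},
        {"do": "done"},
    ])
    return lambda page_description: next(actions)

def run_agent(start_page, ask_llm):
    """Show page -> get one action -> execute it -> show the next page."""
    page, log = start_page, []
    while True:
        action = ask_llm(page)
        if action["do"] == "done":
            return log
        log.append(action)  # "executing" is just logging in this sketch
        page = f"page after {action['do']} on {action['target']}"

print(run_agent("Sales Quotes list", make_scripted_llm()))
```

One consequence of this design, which the conversation returns to, is that the agent can only ever touch what the page exposes, so permissions and page layout become the security boundary.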

Speaker 2:

So that's how our agent works, basically. Okay, so it is working in the context of the UI. Therefore, if it's running, by default it gets the user's permissions overlaid, and again, I'm being loose with the terms.

Speaker 3:

We also have a permission set specific to the agent, which is even more restricted than the user's. And another thing we're doing: there's a profile, like a regular BC profile, that we've made specifically for that agent, and it serves several purposes. First of all, we have simplified versions of the pages targeting the sales order scenario, a sales order agent profile. We found it's a lot better to show reduced, simplified versions of the pages, so as not to confuse the agent too much; we get better accuracy that way. There are tons of fields, on the sales quote for example, that you will never use in the scenarios we're supporting, so there's no need to show them to the agent, because it will only get confused. So we show a much reduced version of these pages, and we remove things like the discount field.

Speaker 3:

We don't want the agent to have it, because here's what could potentially happen. I don't know if you can call it prompt injection, but given the thing is entirely autonomous, it's going to do what you ask it to do. So if, in a sales order agent scenario, the customer writes a mail saying, hey, I want a sales quote for these items, two chairs and a desk, and I want a 20% discount, the agent is going to go: here's the page, there's a discount field, what's in the prompt? A 20% discount. Okay, I'm just going to put 20% here.

Speaker 3:

It's going to do what you asked it to do. There are things like that we've purposely disabled, so the agent, for example, doesn't see the discount field.
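Beyond hiding the field from the prompt, the same guardrail can be thought of as an action-time check: even if an email smuggles in a discount request, writes to blocked fields never go through. The following is a hypothetical sketch — the function name, field names, and validation approach are invented to illustrate the concept, not how the agent platform actually enforces it.

```python
# Hypothetical action-time guardrail: even if a customer's mail asks for
# a discount, the agent's proposed write to that field is rejected.
# All names here are illustrative, not real platform APIs.

BLOCKED_FIELDS = {"Line Discount %", "Invoice Discount Amount"}

def validate_agent_writes(proposed_writes):
    """Split the agent's proposed field writes into allowed and rejected."""
    allowed = {f: v for f, v in proposed_writes.items() if f not in BLOCKED_FIELDS}
    rejected = sorted(set(proposed_writes) - set(allowed))
    return allowed, rejected

# Customer mail: "two chairs and a desk, and I want a 20% discount"
writes = {"Item No.": "1000-CHAIR", "Quantity": 2, "Line Discount %": 20}
allowed, rejected = validate_agent_writes(writes)
```

Here the quantity and item survive, while the injected discount is dropped — the "do what you're asked" failure mode is contained by never letting the sensitive write happen.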

Speaker 2:

It can't do things of that nature. No, it's great to hear, because, again, when you see these new features and you see the demonstration, it's great to see, to use your wonderful show Under the Hood, how it really works, and to understand some of these considerations. Because, as you mentioned, we want to build trust in the technology and how it can be used, but also identify that guardrails have been put up so that, if it does run and execute, it's not going to go wild and, as you mentioned, start giving products away for free.

Speaker 3:

So you know, we announced at Directions that we will release this agent platform soon, which will allow you to build your own agents, and you'll need to think about these things. You'll need to create the right permission set for your agent for your particular scenario, and you'll need to create these profiles. Because we can only cover the things that we know about in the BC base app. If your solution has customizations that are sensitive, whatever sensitive data you don't want the agent to manipulate, you'll need to protect it. But that's just business as usual, I would say.
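The layering described earlier — the agent running in the user's context but with an even more restrictive agent permission set on top — behaves like an intersection of the two. This is a hypothetical sketch with made-up permission names; it only illustrates the "agent can never do more than either layer allows" property, not real BC entitlements.

```python
# Hypothetical sketch of layered permissions: the agent's effective rights
# are the intersection of the invoking user's permissions and the more
# restrictive agent permission set. Permission names are illustrative.

def effective_permissions(user_perms, agent_perms):
    """An agent can only do what BOTH the user and its own set allow."""
    return user_perms & agent_perms

user = {"SalesOrder.Read", "SalesOrder.Write", "Customer.Read", "GL.Post"}
agent = {"SalesOrder.Read", "SalesOrder.Write", "Customer.Read"}

# Even though the user may post to the general ledger, the agent cannot:
# the agent permission set acts as a ceiling on top of the user's rights.
effective = effective_permissions(user, agent)
```

This is the property a partner would need to reproduce when building a custom agent for their own scenario: design the agent permission set as the tightest set the scenario can live with.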

Speaker 2:

Yes, I still can't wrap my head around the security context of it. Again, I'll just have to understand and trust. I understand it for the sales order agent, but I get hung up on retrieval of information that is sometimes outside the context of Business Central. I think more of a Copilot in an Office setting or a SharePoint setting, where you have a lot of files on your server that it's indexing, so that you can have employee self-service within an organization, for example, or even customer self-service, which is just amazing. It's the efficiencies that we gain from a lot of these tasks. And I'm still just looking for a good photo AI agent that will organize all of my photos.

Speaker 1:

I think the security component is going to be a hurdle for a long time, because we move so fast in processes and being efficient, and typically security has to catch up, you know. And so it's a fine balance, right? You want to make this huge progress, but at the same time you've got to, like, hold on, pull the ropes back a little bit and make sure that the security is in place before you let it do its thing. I think it's going to be like that for quite some time, unless somebody comes up with some simplification of security where it just learns how to do it on its own.

Speaker 3:

Security is always something you need to take seriously, for sure. I believe we're in pretty good shape with our agent platform, in the sense that we've built the necessary tools and guardrails you can put in place to prevent the agent from doing things you don't want it to do. That said, you always need to be careful. We've done our due diligence with BC, right? We do a lot of testing around security, and we do a lot of what we call responsible AI as well, and try to prevent all the prompt injections, all the misuse, all these things. But in the end, again, when you implement your own agent, you'll need to go through that work as well. So there's still work. You know, we're not going to get unemployed, contrary to what everybody says. We're not going to get unemployed anytime soon, I guarantee you that.

Speaker 2:

No, I don't see it. They've been saying that for years. You know, if you go back to the horse and buggy versus the motor vehicle, or the automobile, the nature of the work might change.

Speaker 3:

You know, I find myself these days, when I do code, I don't do a lot of coding, but when I do, I approach it very differently than I used to. You know, all the boilerplate code we used to write for basic stuff.

Speaker 2:

You don't do that anymore, you just have it. Yeah. No, and one thing, Chris, I know when to jump in, but one thing, as I say, is AI isn't going to replace you; someone with AI is going to replace you. That's what it is. It's not the AI that's going to replace you. It's people who are using AI to become more efficient and understand where to use it, when to use it, and how to apply it. It's not the AI that's replacing you. It's somebody that has skills, just like any other profession and job that you have to continue to advance with. Yeah, and I wanted to kind of finish on that.

Speaker 1:

You know, using AI and the security component, I think of my dream of it being tenant-specific and having two tenants communicating with each other. It's going to be like you described, Vincent: for you to build that agent, you have to set security parameters specific to that agent, and if you have it talk to all the different agents, whoever's building them will also have to consider all the security. So I think it's going to take some time, but I think we'll get there eventually, unless someone comes up with a simplification or standard of security across the board. That will be the hurdle, in my opinion. Of course, at the same time, when you set the security, you have to build that trust too, with the business owners, the people that are running the business itself. So, exciting times, I think. In my opinion, super exciting.

Speaker 2:

No, there is. There's a lot of excitement. I'm super excited about all the new features in 2025 wave one, and I can't wait for 2025 wave two to see the new features. I'm super excited about the page scripting feature, even, like you said, if it's just that small removal of the preview tag, because at least we know that now there's a little gas behind it.

Speaker 3:

I mean, this is not going anywhere. I can tell you that?

Speaker 2:

No, I hope not. I'll be the biggest opponent of its removal, and I'll refuse to allow it to be removed. No, it's a great tool, and I'm super excited to see where Business Central goes with the use of agents, the responsible use of agents, and the responsibility of humans in the loop to build trust within the application, and, to Chris's point, being able to communicate with other tenants or other servers for automating tasks, or letting AI run and go with it. We do appreciate you taking the time to speak with us today. I'm super excited about the conversation; it took me a long time to calm down from it. It was great seeing you at Directions North America in Las Vegas. If anyone would like to get in contact with you, what's the best way to reach you?

Speaker 3:

They can reach me through LinkedIn. It's probably the best way.

Speaker 2:

Great, great, great. All right, thank you. Thank you again for your time. We really appreciate it and look forward to speaking with you again soon. Thank you, Vincent. All right, thank you. Ciao, ciao. Bye-bye. Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.

Speaker 1:

Thank you, brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us. Thank you for all of our listeners tuning in as well. You can find Brad at developerlifecom, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with them via Twitter D-V-L-P-R-L-I-F-E. You can also find me at Mattalinoio, m-a-t-a-l-i-n-o dot I-O, and my Twitter handle is Mattalino16. And you can see those links down below in the show notes. Again, thank you everyone. Thank you and take care.
