
Dynamics Corner
About Dynamics Corner Podcast "Unraveling the World of Microsoft Dynamics 365 and Beyond" Welcome to the Dynamics Corner Podcast, where we explore the fascinating world of Microsoft Dynamics 365 Business Central and related technologies. Co-hosted by industry veterans Kris Ruyeras and Brad Prendergast, this engaging podcast keeps you updated on the latest trends, innovations, and best practices in the Microsoft Dynamics 365 ecosystem. We dive deep into various topics in each episode, including Microsoft Dynamics 365 Business Central, Power Platform, Azure, and more. Our conversations aim to provide valuable insights, practical tips, and expert advice to help businesses of all sizes unlock their full potential through the power of technology. The podcast features in-depth discussions, interviews with thought leaders, real-world case studies, and helpful tips and tricks, providing a unique blend of perspectives and experiences. Join us on this exciting journey as we uncover the secrets to digital transformation, operational efficiency, and seamless system integration with Microsoft Dynamics 365 and beyond. Whether you're a business owner, IT professional, consultant, or just curious about the Microsoft Dynamics 365 world, the Dynamics Corner Podcast is the perfect platform to stay informed and inspired.
Episode 414: Business Central is happy, and so are you: CentralQ turns 2!
In this episode of Dynamics Corner, Kris and Brad are joined by Dmitry Katson, a 20-year veteran in the Business Central ecosystem. Listen in as Dmitry shares his experience developing CentralQ, an AI-powered tool designed to enhance Business Central by leveraging a robust knowledge base. He emphasizes the tool's ability to automatically update its knowledge daily, drawing from sources like blogs and YouTube, and its role in streamlining processes, improving user experience, and supporting AL development. Dmitry highlights challenges such as creating deterministic AI solutions and the importance of source referencing for credibility. Looking ahead, he discusses plans for CentralQ, including reasoning models, agent coordination, deep search capabilities, local language models, and page scripting for automated documentation. The conversation underscores AI's transformative impact on development roles, shifting them toward management and architecture, and the need for AI agents to access live data while addressing user permissions.
#MSDyn365BC #BusinessCentral #BC #DynamicsCorner
Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/
Welcome everyone to another episode of Dynamics Corner. It's someone's birthday and someone's turning two. I don't know who, because we're rhyming. I'm your co-host, Chris.
Speaker 2:And this is Brad. This episode was recorded on March 5th and March 6th, 2025. Chris, Chris, Chris, I liked your rhyme. Someone is turning two. Are they blue? I wonder who? With us today, we had the opportunity to learn who is turning two, as well as a wonderful conversation about the place for AI within Business Central. We had the opportunity to speak with Dmitry Katson about CentralQ turning two. Good morning sir.
Speaker 3:Good morning. Hey guys, how are you doing?
Speaker 2:Doing great, good, good. You look like you just woke up.
Speaker 3:Yes, thank you.
Speaker 2:And I've been waiting a very long time to say happy birthday to you. Well, not to you, but to your child. Yeah, which one of them? CentralQ turns two.
Speaker 3:I've been waiting to say that for months now. Yes, yes, thank you very much. It's coming, the birthday is coming. When is the exact birthday?
Speaker 2:I know we spoke with you shortly after it was out some years ago.
Speaker 3:Well, it seems like just yesterday. Yeah, I need to double-check when I first tweeted that, but it was the beginning of March, maybe the seventh or something. Oh, wow, so we're right.
Speaker 2:We are right there. We scheduled this on purpose. Yes, yes, to be there for the birthday of your child, as I call it. And it's great. And before we talk about your child and many other things around it, I like calling it your child because I think it's wonderful, can you tell us a little bit about yourself?
Speaker 3:Yeah. So I'm Dmitry. I've been in the Business Central world for like 20 years. I'm passionate about Business Central and artificial intelligence. I started my journey in ML, or machine learning, or AI, whatever you call it nowadays. I started in 2016.
Speaker 3:So it was almost like eight years ago, right, when I headed the AI department at a big partner, and I didn't know anything about that, so that's where my journey started. And then I was passionate about combining AI with Business Central for years, and I think now my mission is accomplished.
Speaker 2:Your mission is accomplished.
Speaker 1:Mission accomplished.
Speaker 2:That's great, and you've been doing a lot of great things. You've been doing a lot of speaking sessions and presentations, and I see you all over the place. You're very busy, not only with Business Central and CentralQ, but sometimes you seem like a world traveler to me.
Speaker 3:Yeah, well. There are two seasons when I travel, and it's definitely Directions. Usually it's Directions Asia, as it's not far away from me, just one hour of flight. That's nice. Sometimes using a bike.
Speaker 1:That's even better. That's good.
Speaker 2:I think I saw a picture of you last year.
Speaker 3:You took your motorbike? That's right, yeah. But to be honest, it's still 800 kilometers, so we prefer to use the bike to go to the airport.
Speaker 2:Yeah, it would be a long ride.
Speaker 3:A long ride, yeah. And then BC TechDays and Directions EMEA. Those are the three conferences that I usually attend as a speaker, and that's where we can meet. I really hope to go this year to Directions North America, but it seems that my visa is not ready yet, so I don't think they will issue it on time.
Speaker 2:I'm hoping that they issue it on time, because I would enjoy meeting you in person in Las Vegas this year.
Speaker 3:I know it's a long trip for you too. Yes, but it's already been two months of visa processing and, you know, just waiting.
Speaker 2:Okay, you've got a little over three weeks left, four weeks left, so you still have time. When's your cutoff day? Where if you don't have a visa by a certain day, then you definitely won't be attending?
Speaker 3:I think that it's already passed.
Speaker 1:Oh man, we've got to make sure that you make it next year, then.
Speaker 2:I'm hopeful to run into you somewhere then. So you've been doing a lot of great things, and for those that do not know about CentralQ, can you tell us a little bit about it briefly? And then I have a whole list of questions for you.
Speaker 3:Right, yes. So I had been doing different machine learning things before, and I was speaking at conferences about how we can implement machine learning in Business Central. I remember the first time I talked about this was in 2018, I think, at Directions EMEA, and I was the only weird person talking about this at the conference. Even Microsoft didn't talk about that. And then at Directions EMEA last year, I found that like 60 to 70% of all the content, everyone speaks about Copilot and AI. So that's where we are. That's why I think my mission was accomplished. But let me return back a little more than two years, when the first ChatGPT appeared, and we were all mind-blown by the power of large language models. We all saw them for the first time, and what I did, and I think many people did the same, I thought, hey, great, now I can use it to help myself with Business Central.
Speaker 3:And just after some quick queries, I figured out that, no, that doesn't work. It just suggested features that don't exist, suggested code that doesn't compile. It just hallucinated a lot. But I still thought that, yeah, this could be a good framework to build around, to help our community use it for Business Central problems. The problem with Business Central is that it's still very narrow, you know, compared to the whole internet. Our AL development is, you know, several GitHub repos compared to millions of others. Our documentation for Business Central is still small compared to all other products. So probably at that point in time, and it was GPT-3.5, maybe it knew something. But you know, the main goal of large language models is to answer all the questions, no matter if it's correct or not. So it would just imagine the answer.
Speaker 3:However, I found, and in that period of time it was very hard, that there was still a way to make it better. If we just make a big knowledge base about everything that we know about Business Central in one place, and then not ask the large language model directly, but first query our knowledge base, find the potential answers, some text that will potentially answer the user's question, and then send this to the language model together with the user's question, this increases the correct answers a lot. That's what we call fact grounding, or knowledge grounding. So that's where the idea was born: hey, I think that will work.
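The fact-grounding flow Dmitry describes, query a knowledge base first and then hand the retrieved snippets to the model together with the question, can be sketched roughly like this in Python. The knowledge-base entries, the naive word-overlap scoring, and the prompt wording are illustrative stand-ins, not CentralQ's actual implementation:

```python
# Minimal fact-grounding (RAG) sketch: retrieve relevant snippets first,
# then assemble a prompt that pins the model to those facts.
KNOWLEDGE_BASE = [
    {"source": "https://learn.microsoft.com/example",
     "text": "Posting groups map entities to G/L accounts in Business Central."},
    {"source": "https://example-blog.dev/al-publish",
     "text": "AL extensions are published to a sandbox with the AL: Publish command."},
]

def retrieve(question: str, top_k: int = 2) -> list[dict]:
    """Rank snippets by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc["text"].lower().split())), doc)
              for doc in KNOWLEDGE_BASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(question: str) -> str:
    """Combine retrieved facts with the question before calling the LLM."""
    snippets = retrieve(question)
    context = "\n".join(f"- {d['text']} (source: {d['source']})" for d in snippets)
    return (f"Answer using ONLY the facts below; cite the sources.\n"
            f"Facts:\n{context}\n\nQuestion: {question}")

prompt = build_grounded_prompt("How do posting groups work in Business Central?")
print(prompt)
```

A real system would use embedding similarity instead of word overlap and send the prompt to a chat-completion endpoint; the structure of the flow is the same.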
Speaker 3:The next problem was that I needed to find a way to build it, because there was no exact documentation, there was nothing. Actually, my only source of knowledge at that point in time was Twitter. I followed some guys that also did some experimenting, chatted with them, and so I built a knowledge base. I took first the blogs and Microsoft Learn, then at some point I added YouTube, then Twitter also as a source of knowledge. And yeah, it took like two months of building, I remember, and CentralQ was born.
Speaker 2:So CentralQ, in essence, is a large language model that's grounded, that has its knowledge based upon popular blogs from community members of Business Central, from the development point of view as well as from the functional point of view, the Microsoft Learn documents, which keep getting better and better, Twitter, and YouTube videos. So anybody who uses CentralQ, similar to ChatGPT, which you mentioned a lot of people use, will pull the knowledge from those sources to return the result.
Speaker 3:Yes. And also the problem with a pure large language model was, and still is, that it's trained with a knowledge cutoff date. For the OpenAI models it's usually about one year behind. The current models, I think, have a cutoff date somewhere in 2024, maybe autumn, maybe summer. But when we ask about Business Central, this area is growing fast.
Speaker 3:The new features appears every day oh yes no, like, oh yeah, not every day, okay, but we have uh, waves, uh, and they are much appears, much quicker than this, that large models are trained based on that it does seem like every day, by the way yes, every month we have new features. So it's, it's just like every day by the way?
Speaker 2:Yes, exactly Every month we have new features, so it's just like every day is a holiday.
Speaker 3:I guess you could say. Yeah, so this was the second problem that I wanted to solve: CentralQ doesn't just have a knowledge base that was trained once and used, it updates automatically every day.
Speaker 3:We search the web for new information regarding Business Central and update this knowledge base. And you know, it is very exciting to see that, for example, when Microsoft releases the launch videos before the wave and they are published on YouTube, then the next morning CentralQ knows everything from all the videos. You can just go and ask what the new features are and how they work, and the tool answers based on what was just published. I think that's very useful.
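The daily refresh he describes, polling each source and folding anything newer than the last run into the knowledge base, might look roughly like this. The fetchers below return canned data; a real pipeline would crawl blog feeds, Microsoft Learn, and YouTube transcripts:

```python
# Sketch of a daily knowledge-base refresh: only items published after
# the previous run are ingested. The canned items are illustrative.
from datetime import date

def fetch_new_items(source: str, since: date) -> list[dict]:
    """Stand-in for a real crawler; returns posts published after `since`."""
    canned = {
        "blogs": [{"published": date(2025, 3, 5), "title": "New wave features"}],
        "youtube": [{"published": date(2025, 3, 1), "title": "Launch video"}],
    }
    return [item for item in canned.get(source, []) if item["published"] > since]

def daily_update(knowledge_base: list, last_run: date) -> int:
    """Pull anything published since the last run into the knowledge base."""
    added = 0
    for source in ("blogs", "youtube"):
        for item in fetch_new_items(source, last_run):
            knowledge_base.append({"source": source, **item})
            added += 1
    return added

kb: list = []
added = daily_update(kb, last_run=date(2025, 3, 3))
print(added)  # only the blog post is newer than the last run
```

In production this would run on a scheduler (a nightly cron or Azure Function), and new items would be chunked and embedded before landing in the retrieval index.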
Speaker 2:I think it's extremely useful because, as you mentioned, there aren't a lot of sources, or a collection, even with those other language models. Business Central has a large number of users using the application, and we have a lot of members in the community, but it's still small compared to other languages and other pieces of information on the internet. So it's a great tool for anybody that uses Business Central, and it's not just development and it's not just functional, it's a combination of both. So whether you're a developer, a user, or somebody working to consult others with Business Central, it's a good tool to have.
Speaker 3:Yes exactly.
Speaker 3:And the second thing that I thought should be really mandatory, and it has now become a standard in all these Copilot things, is to reference the source. In the pure ChatGPT of that period of time, you got the answer, but you know, you didn't know if it was correct or not, so you needed to double-check that, and there were no sources where you could double-check. So that was my initial design from the beginning: you not only need to get the answer, but also the links to the sources where this answer was pulled from. And I found this to be a very widely used flow. When you ask a question in CentralQ, it gives you the answer, and then, if you want to go deeper, you just click on the link. It opens the blog, where there is more detailed information, and you can just read it. And I found that around, I think, 30 or 40% of all redirects to my website now come from CentralQ, which is also interesting.
Speaker 2:Well, I like that. I do like that because, as we all hear, if you haven't heard of AI, then I don't know where you are, and if you haven't heard of AI within the last hour, I don't know where you are either, because I don't think you can go an hour without hearing AI, Copilot, large language model, machine learning, no matter where you are on the planet. You could be using it too.
Speaker 2:You just don't know. The ability for users of tools such as this to validate the information matters, because everyone talks about how this hallucinates. Hallucinations, where, as you mentioned, large language models will always give you an answer; they never return "I don't know," so it could be an incorrect answer. So knowing that individuals are following those links to learn more about the answers, or validate the answers, is nice to hear, instead of everybody just saying give me the answer, it creating something that may or may not even exist, and then people spreading that information. So, with CentralQ, when we started talking about planning this, because we planned this a long time ago with CentralQ turning two, you said you may have a lot of new things in store for CentralQ.
Speaker 3:Yeah. So I hoped that I would release the second version before we talked, but it's still in development mode because, well, there are some other projects that I'm doing. Oh, I understand. But I think the most important reason for me to postpone a little bit was that many new things have appeared in the AI world since my first planning. The most important of them: there is now a new type of model, called reasoning models. They don't give you the answer directly, they think about the answer first and then produce the answer. That's a little bit different type of model that I want to also implement in CentralQ. And also, the other thing is the concept of agents, which I think you also hear a lot about. I started experimenting with agents, I think, in August or September last year, and the first agents that I showed were at Directions EMEA, and I was really mind-blown by this concept and how it works. The example that I showed at Directions was a team of agents whose goal was to take any question in natural language and convert it to API calls to Business Central, do the calls, grab the data, and provide the answer to the user. And the problem, if I do it the classical way, is that if I just ask in a simple call to the large language model, hey, take this query and convert it to the API, this API in most cases will not work.
But if I make a team of agents, there will be one agent responsible for generating the API call, another agent responsible for calling this API, and another agent responsible for providing the final answer, and they actually communicate with each other.
Speaker 3:So the first one generated the API call, the second one called it, and it didn't work. It returned back to the first one and said, hey, this didn't work, you need to do this job better. It generated something and once again sent it to the other agent, and the other agent once again said, hey, this didn't work. So the first agent actually went to the knowledge base that I had also connected and searched for the information. Actually, I connected Jeremy's book, the whole book about the API. So it went, read the book, found the exact endpoint that would potentially work, and then generated a good API call. The second agent executed this API, and that worked. The other agent produced the answer. And it was all live, you could see their internal communication.
Speaker 2:That is all amazing to me. It's the whole agentification. We talk about this a lot now, because everybody's in it, but it's almost like having a staff that's working for you, and each one of them does a different task.
Speaker 1:So you have two features coming in. One is the reasoning, right, so it's going to reason itself. It sounds like a kind of new feature. And in the second one, you're almost adding an agent coordinator. It sounds like, I just want to talk to this one thing, and then it's going to pull in whatever agent I need to accomplish this task.
Speaker 3:Yes. So, actually, what I'm thinking of: there are simple questions, like how does this feature work? It will go to my knowledge base, find this feature, and produce the answer. That's how it works nowadays. But let's say you want to ask something like: hey, please find me the apps on AppSource that do this, compare them by something, produce an output table, maybe with some feedback from the users, and suggest the best one I can use. That's a multi-step process, and it currently will not work using the current version of CentralQ. It will work at some point, but the answer will be limited. So I want to serve more advanced queries with CentralQ, which I call CentralQ 2.0, which I'm working on. That's why CentralQ turns two, not only in years, in age, but also in version. But yeah, I want it to be agentic, I want it to use reasoning models, and also the new thing that appears in many AI areas nowadays.
Speaker 2:It's called deep search or also deep research.
Speaker 3:Because now, most of these, ChatGPT, Perplexity, other copilots, in a simple mode use a maximum of maybe 10 different sources, because that's usually the limitation of one call to the large language model. But a deep search is different.
Speaker 3:It's also multi-step. You can ask a complex query, and it will break this query down into multiple queries. It will search them one by one, then find maybe 50 to 70 different sources. It will understand which sources it should go and read, depending on different evaluations. It will go read, it will find the trusted sources, and then produce the answer. So usually this process takes longer. A simple question and answer in CentralQ takes about 10 seconds to the first token. The deep search, according to my experiments, is nowadays around one minute, one minute and a half, but it will go really deep, find more information, and produce a more advanced answer.
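The deep-search flow he outlines, decompose the question, search each sub-query, keep only trusted sources, then synthesize, reduces to a skeleton like this. The decomposition and the trust scores would be LLM calls and rankers in practice; here they are stubs with made-up values:

```python
# Skeleton of a deep-search pipeline: sub-queries, candidate gathering,
# trust filtering. All scoring values here are illustrative.
def decompose(query: str) -> list[str]:
    """Stand-in for an LLM breaking a complex query into steps."""
    return [f"find apps that {query}", f"compare apps that {query}",
            f"summarize user feedback on apps that {query}"]

def search(sub_query: str) -> list[dict]:
    """Stand-in search: returns candidate sources with a trust score."""
    return [{"url": f"https://example.com/{hash(sub_query) % 100}", "trust": 0.9},
            {"url": "https://sketchy.example/post", "trust": 0.2}]

def deep_search(query: str, trust_threshold: float = 0.5) -> dict:
    sources = []
    for sub in decompose(query):          # multi-step, not a single LLM call
        sources += [s for s in search(sub) if s["trust"] >= trust_threshold]
    return {"query": query, "sources_read": len(sources)}

result = deep_search("handle EDI in Business Central")
print(result["sources_read"])  # 3 sub-queries, 1 trusted source kept from each
```

Each stage being a separate call is exactly why deep search takes a minute or more instead of seconds: latency accumulates per sub-query and per source read.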
Speaker 3:So, yeah, those are the three things that I want to combine together, and it's not very obvious how to do this.
Speaker 2:It sounds logical, it sounds wonderful, but how does a large language model, or the deep research, know which source to read based upon the content? And that goes back to the reasoning. I mean, I know how the human mind works with reasoning, reasoning based upon history and understanding. I still have difficulty understanding how these language models really put this information together. It's, to me, mind-blowing. Everything you said sounds great.
Speaker 2:And if I had 10 people sitting in the room that were humans working with me, I could say, okay, let's go through these sources, find the ones that are relevant for the question; okay, let's take the pieces back and put them together, because you know that humans have reasoning in how the mind thinks. But getting a computer to do this, or getting a piece of software to do this, which is in essence what it is, right? It is software, if I stand correct.
Speaker 1:Hold on. Can I recommend a fourth one as a wish? Maybe text-to-audio, or audio-to-text. That'd be really cool to add, something someone could just have a conversation with. That would be awesome. I'm not trying to add more work for you.
Speaker 3:Yeah, so actually, audio-to-text is a great one. I'm personally using this with external software, because I know that in Windows maybe it's already implemented by default, but I'm using a Mac, and there is no such feature. So I'm using, what's it called? It's called Flow. You can just talk to it, and it will automatically transcribe, and then I use that in the query.
Speaker 3:Yeah, but I would also want to add, okay, the fifth feature: multimodal support. That means that now I'm pulling just text from the sources. From the blogs it's just text; from the YouTube videos it's a transcript. And in many cases that's not enough. Especially in the blogs, I found that very often people just paste screenshots inside the blog, and they don't describe these screenshots. It's "that's how this feature works," and then there is an image with different arrows.
Speaker 3:Yes, there is. And right now I don't get this information, which is very important information, so I want to grab it as well. But that's the back end, that's how to improve my knowledge base. On the other side, the user side, I want you to be able to just copy-paste a screenshot, send it directly to CentralQ, and ask about, you know, the error, for example. This really will help to improve the answers. So yeah, those are the five pillars that I'm working on right now, the area that I'm focusing on.
Speaker 2:That's a lot. And you're doing this all on your own now, correct, and in your free time?
Speaker 3:Yes. When I say free time…
Speaker 2:You still work with Business Central. You do all the stuff that we talked about. So when do you sleep?
Speaker 3:You see that I already woke up, so it's 6am here. Yes, yes. Once again, thank you.
Speaker 2:My day starts very early.
Speaker 3:I have more time to work on CentralQ after that.
Speaker 2:No, that's good. That's why we said we could do this. But, as we talked about last time, you're in the future for us, so it's six in the morning, or zero six hundred, where you are.
Speaker 2:Thursday there, tomorrow for us. Yeah, so I like to talk with you, because I get to know what will happen tomorrow. You're doing a lot of great things with CentralQ, and another thing that has come out, again with these deep research models, is local large language models. Do you see a place for those with CentralQ, to maybe help with some of the processing, or offloading some of the resources or knowledge for CentralQ?
Speaker 3:Yeah, I thought about that, but I didn't find where it can fit with the CentralQ architecture and the users right now, because I don't have an app for the phone, for example. Maybe we need to do it at some point, but let's see. And still, it's a web service which works on the web, which communicates with Azure OpenAI nowadays, and the whole infrastructure is in Azure. There is one thing where these local language models can maybe be useful: using what I call private data with CentralQ. So, maybe you know or not, after our previous call, when we discussed the web version of CentralQ, I released the Business Central version of CentralQ. This is the AppSource app, which is actually a paid version, which costs like $12 per user per month, which is not a lot, I think.
Speaker 3:But with this, you have CentralQ inside of Business Central, and you can upload your own documentation there. So you can upload the documentation about how your Business Central works: the instructions about your processes, the instructions about your per-tenant extensions, and all of that. And one of the nice features there is that you can use page scripting, the basic Business Central page script, to record the steps. It will take the URL of this page script, or a YAML file that you can export and upload to the CentralQ app inside of Business Central, and it will automatically produce a user manual from that and use it as internal knowledge about how your Business Central works. And you can just ask a question.
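Turning a recorded page script into a user manual, as described here, is essentially a transform over the recorded steps. Business Central page scripts export as YAML; the simplified step records and the template-based prose below are stand-ins for that format and for the generation step CentralQ presumably does with an LLM:

```python
# Toy page-script-to-manual transform. The step shapes are a simplified
# stand-in for the exported YAML; field and page names are invented.
steps = [
    {"type": "navigate", "target": "Sales Orders"},
    {"type": "invoke",   "target": "New"},
    {"type": "input",    "target": "Customer No.", "value": "10000"},
    {"type": "invoke",   "target": "Post"},
]

def step_to_sentence(i: int, step: dict) -> str:
    """Render one recorded step as a numbered instruction."""
    templates = {
        "navigate": "Open the **{target}** page.",
        "invoke":   "Choose the **{target}** action.",
        "input":    "Enter `{value}` in the **{target}** field.",
    }
    return f"{i}. " + templates[step["type"]].format(**step)

def build_manual(title: str, steps: list[dict]) -> str:
    """Produce a markdown user manual from the recorded steps."""
    lines = [f"# {title}", ""]
    lines += [step_to_sentence(i, s) for i, s in enumerate(steps, start=1)]
    return "\n".join(lines)

manual = build_manual("Create and post a sales order", steps)
print(manual)
```

The output is exactly the kind of document that can then be indexed into the private knowledge base, so the recording does double duty as a test and as documentation.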
Speaker 1:That's gold. We were just talking about this, right, Brad? Like, we were just talking about taking a page script result and then turning that into a usable guide or documentation. Especially for someone that is maybe in the middle of an implementation, documentation is usually the last thing people create. But if you can make it easy with this tool, that's incredible.
Speaker 2:That's going to save a ton of time. I'm on the page scripting kick because I've always been big into testing, and page scripting is, in essence, a way that you can enhance your user acceptance testing, but with the way that it records, as you had just mentioned, you can create user documentation. So now, with the CentralQ app for Business Central, not only do you get the ability to use the CentralQ knowledge base, which you update daily with information from Business Central, you have the ability to upload your own private documentation, right? And that stays separate from everything else.
Speaker 3:That's right, yeah. So this is a separate knowledge base that is per environment or per tenant, depending on your choice. You have a dedicated knowledge ID, all the knowledge that you upload is linked to this ID, and only you can use it. It is all very secure. You can upload PDF files, Word documents, TXT files, and page scripts. And there is a chat window; it's not just question and answer, it's a chat, so you can go and chat about it. And when you ask a question, you can decide what sources it can use. You can decide if it can use only your private documentation and nothing else, or, in addition, the whole CentralQ knowledge, or, in addition to that, Microsoft Learn. So there are three big buckets of knowledge, and you decide what it can use. And yeah, you can go and install it.
Speaker 2:Chris, to your point, that's where it is. Nobody wants to document a process, and everybody relies on someone in the office to have that process. But something may happen where they're out: one day they go on vacation, or they, for personal reasons, make a change in their career, and all that information is lost. But now, with this, to be able to use page scripting to have it generate documentation, and then have that documentation searchable, is a huge time saving. And it's gold, because you can record as somebody's working and say this is that process.
Speaker 1:Just going on that too, Brad: for example, if your business process changes, you want to go update that documentation. You just do a page script and have it change that in your document, and then there's your updated document. Because a lot of the time your business process changes and then nobody ever updates the original document. So with this…
Speaker 2:It would make it easy, to be honest with you. And again, as you had mentioned, it's a relatively low price for what you get, because the ability to keep business continuity there is extremely valuable and important. CentralQ, this whole AI stuff, is such a huge time savings if it's used appropriately.
Speaker 3:Yes. And you know, the cherry on top of this process is that when you have the answer, you also have the links to the sources, and if the source was a page script, you can just click it. It will open Business Central in a new window and will replay it.
Speaker 2:Is that right there? See, these are what I would call, I don't want to say hidden features, but things I know about CentralQ. I was chatting with you a few months back about the app as well, because I had questions about the page scripting and creating documentation. But these are things that I don't think a lot of individuals may know about CentralQ and the power that it has. Because from a user point of view, that is a huge time savings for them, and from any business point of view, I think there's some huge value in having that. You have so many things in this application. I still can't believe you did it all by yourself.
Speaker 3:Yes, it was just me. And this is actually the sixth pillar that I want to also embed in the web version of CentralQ. CentralQ 2.0 will also have, at least in my plans, I really want to do this, a login feature where you can log in, and it will be your space where you can upload your own documentation. I want to combine these two worlds together, so you can use it externally on the web or internally inside of Business Central. That's my goal.
Speaker 2:I think that's a great goal and I hope you get to it. With CentralQ, if you can share, and if you're not comfortable sharing any of this, please feel free to let me know, I understand: how many searches do you get per day, per month, per quarter? What sort of metrics do you have on it?
Speaker 3:Yeah, so over the last two years, almost, there were more than 300,000 questions and answers generated.
Speaker 2:That's a lot of questions.
Speaker 3:And that's produced around 1 billion tokens.
Speaker 2:1 billion tokens, 300,000 questions. What is a token?
Speaker 3:Yeah, so a token is one word or part of a word. One word can be split into one or more tokens. That's actually how large language models see the world and generate their answers. So it's around 500 queries per day nowadays, depending on the time of the year. The lowest number of queries is on the 25th of December.
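Dmitry's description of tokens can be made concrete with a toy splitter. This is only an illustration of the idea that one word may become one or several tokens; real models use learned byte-pair-encoding merges, not a hand-written vocabulary like the one below:

```python
# Toy illustration of subword tokenization: one word may map to several
# tokens. Real tokenizers use learned merge rules; this greedy
# longest-match over a tiny invented vocabulary is only a sketch.
VOCAB = {"business", "central", "token", "ize", "r"}

def tokenize(word: str) -> list[str]:
    """Greedy longest-prefix match against the vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest piece first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:                               # unknown character: emit as-is
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("business"))   # a whole word can be a single token
print(tokenize("tokenizer"))  # or split into several: ['token', 'ize', 'r']
```

Production tokenizers (for example OpenAI's tiktoken) work on bytes with learned merges, but the word-to-pieces behavior is the same in spirit.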
Speaker 2:I wonder why. Yes.
Speaker 1:That is interesting.
Speaker 3:Yeah, but still there are questions on that date to CentralQ. Well, some people don't want the break.
Speaker 2:Yes. Do you keep track of, or classify, the questions? What I mean by classifying is: is it a finance question, a purchases and payables question, order to cash?
Speaker 3:Yes, I'm also classifying all the questions using the large language models, so I have the statistics. Around 20% of all the questions are what I call general, different questions about different features of Business Central, but about 20% is around AL development. And then it breaks down by the modules: about 11% is about finance, about 9% about inventory, and so on. But that's categorization by category. I also classify the questions by type, and about 50% of all questions are about how to do things, like "how can I do this?" And that's very interesting, because that's where the power of the CentralQ knowledge comes from.
Speaker 3:Because if you rely only on Microsoft Learn: the Microsoft Learn documentation structure is about features, about 90% of it, I would say. This is the feature, this is how this feature works, this is the AL type, this is what it's about. It's not about how the process works. There is not enough knowledge in the documentation about how the process works, and usually that's what people ask about: how to make this process happen. That's where blogs come into play and YouTube videos come into play, because many blogs are not about the features but about the process, about how you can actually do something using multiple features. That's where the power of this comes from. I also have telemetry showing that 86% of all answers draw on blogs nowadays, about 65% on YouTube videos, and about 50% or 60% on Microsoft Learn. So it's a combination, of course.
Speaker 2:So one question can have sources from multiple different categories, but the blogs play a crucial role in the answers. In general, I'm fascinated by statistics and I'm happy that you shared that, because I was curious. Using it, I thought the number one question, Chris, would be about what's the best podcast about Business Central.
Speaker 1:I don't know. We should give that to Dmitry as a link to our website, because there are transcripts in there.
Speaker 3:I don't know if I can find it very quickly, how many answers drew on the Dynamics Corner podcast.
Speaker 2:No, no, it's okay, you can look at that afterwards. It was just a little fun. I appreciate those statistics and all that you're doing with that. We always have some side conversations and such, too. So, where do you see AI within Business Central and, most importantly, AL development?
Speaker 3:So I would start with AL development, because that's where I use AI for Business Central every day. I'm not a user of Business Central, so I don't very often consume the AI features inside of Business Central myself. But as an AL developer, there is nowadays a choice of IDEs that we can use for AL development. We all started from VS Code, and we started with GitHub Copilot. That's where I started, and many people use that.
Speaker 3:I then switched to Cursor, about eight months or so ago. At that period of time, VS Code supported the OpenAI model, I think GPT-4o, which is not so good at AL development, to be honest, for whatever reason. That's the fact. But there is another model, from Anthropic, which is called Claude Sonnet 3.5, and I found that it knows AL pretty well, and the only IDE that supported it was Cursor. So I switched to Cursor. Cursor is actually a clone, a fork, of VS Code, so it supports all the features from VS Code, plus more.
Speaker 3:The guys did a huge job. I mean, my CentralQ stuff is maybe, I don't know, 5-10% of what they did with Cursor, but they have a big team, they have millions in investment, and they're located in San Francisco. So Cursor supported the so-called Composer mode. Composer mode is not just a chat, it's not "hey, how can I develop these things in AL?" You actually ask it to produce the feature in natural language, it develops the feature for you, and then you check.
Speaker 3:And it worked pretty well if you know how to use it. It also requires some change of your mindset, how you deal with AL code, how to set things up. But if you know what you're doing, it really increases the productivity of your development, and the quality as well. And now a new model has appeared from Anthropic, Claude Sonnet 3.7, also with reasoning capabilities, and in parallel Cursor introduced the agentic mode. Combining these two things together in AL development, it's the next level nowadays.
Speaker 2:So should I put my resume together? Is that what you're telling me? Chris, you hear that? He's subtly telling AL developers to just look for another job.
Speaker 1:Actually, yeah, I updated my resume as well. He knows.
Speaker 2:I did see the update on VS Code today, and it was funny. We'll go back to your story in a moment, and I do have another question about the sources, not to disrupt you. All of the features in this month's update of VS Code are primarily the features that you talked about from Cursor, and it's all AI-related. They added the agent mode, the Copilot edits, and the next edit suggestion, which was a big one. So it seems like a lot of that is coming back to it. To go back to the stats: I believe from memory, and once I play it back and hear it I'll know for certain, but I think you said 20% is AL development, and you mentioned your sources were blogs, Learn, YouTube and Twitter. Did you ever think of GitHub repositories?
Speaker 3:Yes, GitHub repositories also appeared as a source, I think a year and a half ago. At that point in time it was an experiment, so I didn't pull all the source code from Business Central, just the System App repo. And in addition to that, there appeared a special toggle, an option in CentralQ, saying I'm asking specifically about the System App. What this option does is use just GitHub as the knowledge base, and you can ask, for example, "how can I create an email?" It will produce the AL code by looking into the System App, and it works pretty well. I had an experiment which was fun. I was sitting at BC Tech Days, and there was a session about how to use the System App. There was code on the screen, and in parallel I just asked CentralQ "how can I do this?" and it produced the same code. It was great to see that it really works.
Speaker 3:And yeah, the one thing that doesn't work well nowadays, even with the Claude 3.7 model, so for now we still have jobs as developers, is this: it's good at AL syntax. It knows the AL syntax. But AL development is not just about the syntax. It's about using existing libraries, using existing codeunits. We don't want to duplicate code, we don't want to reinvent the wheel and so on, and that's something it only partly knows.
Speaker 2:Well then, we're safe for a long time, because if it's trying to analyze some of those codeunits, I don't even think people can do it, so AI will have quite a bit of challenge. But if the language changes to be more contemporary, down the C# road like it has been going, then maybe there is no hope for us. But until all those codeunits get cleaned up, we're safe, I think.
Speaker 3:Yes, and that's something that I'm also looking at as a CentralQ opportunity, because recently the so-called MCP protocol appeared in Cursor.
Speaker 3:Actually, you can think of it as an API to external tools. Inside of Cursor, in this code generation mode, edit mode or chat mode, you can mention the tool and ask it, "hey, how can I do this?" And what I'm also thinking of is putting some effort into making a knowledge graph from the whole Business Central AL codebase. Knowledge graphs are a new area in this world. So it's not just taking the code. Let's say we take Codeunit 12. It's the biggest one, more or less, and it has a lot of different functions that are difficult to understand.
Speaker 2:Is 12 bigger than 80? I just had to ask. I'm just kidding.
Speaker 1:You know, the old-timers know the numbers, right?
Speaker 2:The old-timers know the numbers, so we refer to everything as number 12, 80.
Speaker 3:But it's more than 1,000 lines of code.
Speaker 2:Oh yeah, I understand, it's huge.
Speaker 3:It's huge, and actually that's the problem with it. You can't just take this one codeunit, paste it into the LLM in one call, and ask it, "can you just produce my code based on that?" and so on. We still have the limitation of the context window. But there is an area which is called knowledge graphs. You can create a knowledge graph, also using large language models, that gives you the higher level: this is the flow, this is how things are connected to each other, and these are the internal pieces, the functions, for example.
Speaker 2:How do you get that graph? Is it part of the new language models, or do you have to use another tool for that?
Speaker 3:No, there are some open-source libraries for how you can do this, but behind the scenes they still use large language models to produce these knowledge graphs. At the end, this is a database of how things are connected to each other. When you have this database and a user asks a question, it can first go to this knowledge graph instead of the raw knowledge base, understand which pieces of knowledge are valuable for that question, and then go deeper into the raw knowledge. That really increases the quality of the answer. So my idea is that I also want to put effort into making this CentralQ API for Cursor, or VS Code, I think they will also support that.
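The graph-first retrieval Dmitry describes can be sketched as follows. The object names, edges and document snippets below are hypothetical stand-ins for a real code knowledge graph, not CentralQ's actual data:

```python
# Sketch of graph-first retrieval: consult a knowledge graph of how
# objects relate, then pull raw knowledge only for the relevant objects.

# Edges: object -> objects it depends on (a toy slice of a codebase graph)
GRAPH = {
    "Sales-Post": ["Gen. Jnl.-Post Line", "Inventory Posting"],
    "Gen. Jnl.-Post Line": ["G/L Entry"],
    "Inventory Posting": ["Item Ledger Entry"],
}

# Raw knowledge, keyed by object name (stand-in for doc/source chunks)
RAW_DOCS = {
    "Sales-Post": "Posts a sales document.",
    "Gen. Jnl.-Post Line": "Writes general ledger entries.",
    "Inventory Posting": "Writes inventory entries.",
    "G/L Entry": "The G/L entry table.",
    "Item Ledger Entry": "The item ledger table.",
}

def related_objects(start: str, depth: int = 2) -> list[str]:
    """Walk the graph from a starting object to find related neighbours."""
    seen, frontier = [start], [start]
    for _ in range(depth):
        frontier = [n for obj in frontier
                    for n in GRAPH.get(obj, []) if n not in seen]
        seen.extend(frontier)
    return seen

def retrieve(query_object: str) -> list[str]:
    """Graph first, raw knowledge second: fetch docs for related objects."""
    return [RAW_DOCS[obj] for obj in related_objects(query_object)
            if obj in RAW_DOCS]

print(retrieve("Sales-Post"))
```

In a real system the graph edges would be extracted by a language model and the raw docs fetched from a vector store, but the two-stage shape, graph lookup then targeted retrieval, is the point.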
Speaker 2:I think so. I think VS Code will follow Cursor wherever it goes.
Speaker 3:And then you can use Claude 3.7 but also the CentralQ tool. The flow would be like: "hey, I want to do this feature, and please use CentralQ for the best quality," or something like this. It will first go to CentralQ, find the existing libraries that can handle this, then provide this information to Claude Sonnet, and Claude will produce the final output. I think this will really increase the final quality of the features. So maybe I will build something that will make me obsolete as an AL developer.
Speaker 2:Dmitry goes down as the father of CentralQ and also the same one that killed.
Speaker 1:AL development.
Speaker 2:But it's not only him.
Speaker 1:It's everyone else, everyone else.
Speaker 3:Yeah, but I think it's really changing the way we do development right now. For the last AL development task that I got, I decided to do an experiment: I decided not to write any code at all. I used just Cursor, in this chat mode with edits, to produce the final code. And it turns out, first, that it's doable. The feature was there, and the first draft was very quick, much quicker than I would have done it.
Speaker 3:However, the follow-ups, asking it to refactor something, to change something and so on, resulted in additional time. So the total time from zero to hero turned out to be more or less the same as my estimate for doing it myself. But in the final solution there were many things that I hadn't thought about from the beginning. The task was to connect to an external API, and it actually looked into the documentation of this API and found something that I didn't find myself when I looked at the documentation, and it implemented those things in my solution.
Speaker 3:Things like error log management, with nice features to discover more. It results in a more user-friendly flow at the end. So I found that these tools really help you produce a better solution. And we actually won't go anywhere. We'll just make this transition from, you know, AL programmer to something more managerial.
Speaker 2:Yeah, I think you're correct. You'll be more managerial, more architect- and function-based.
Speaker 1:So to go back to your experiment.
Speaker 2:You had a task to consume an API, and you didn't want to write any code, so you wanted AI to create the entire code for you, including refactoring. And the amount of time that it took, you say, was about the same amount of time you thought it would have taken you to do it.
Speaker 1:Yes.
Speaker 2:But it produced better quality code, in a sense, because it had additional user-friendly and error handling features within it that you didn't consider as part of your first estimate. That is amazing. I think that would be a great test, but I'd like to see that test done differently. See, this is a good session. I like to see these types of sessions. So, if you ever do one: AL developer versus AI. You could do it not as an estimate of what you think, but as a live event. Find an AL developer. I don't know if you can do it live, depending on how much time it takes, but find an AL developer, volunteer someone to write something, give them a task. You do it with AI, see how long it takes them and the end result, and see how long it took your AI, again chaining it together with the refactoring and the code completion, to see the reality of who wins. AL versus AI, I would call it.
Speaker 3:Yes, and the good news is that two days ago I got an email from Luc, the organizer of BC Tech Days, that this kind of session was approved. So there will be a session at BC Tech Days, and as with all sessions at BC Tech Days, it will be recorded and then published on YouTube. We'll actually do the session with AJ. He will be the one doing the classical, old-school AL development, and I will be doing the same by just typing prompts.
Speaker 2:That session alone is worth the price of admission for BC Tech Days, to see that experiment. And if AJ is in there, you're doing it. I don't even want to ask what it's about. If AJ is doing it, I have some ideas of what type of experiment it will be. Or do you want some ideas for experiments?
Speaker 3:I'm open to ideas.
Speaker 2:I'll have to send you both some ideas for this session, because I think that would be a great session. The world is changing so fast. A little side topic now, and I'm trying not to jump around too much: Business Central is adding a lot of AI functionality within the application, with the agents and a few other features. Where do you see that going within the application itself, outside of development? What is the future of Business Central and ERP software with AI?
Speaker 3:So Microsoft is working on this now and released the first agent, though the flow seems to me not quite real. I mean, the flow of this agent is that the user gets an email and then, based on this email, the Sales Order Agent reads it and generates the sales quote. I talked with Microsoft, and they told me that they did research and found this is a pretty common flow. But you know, that's just my opinion on that. The main thing is that agents are coming to Business Central, and this should really be the next step. I wouldn't be very optimistic about it from where it is now, because I see that we still don't have a lot of platform support to make it really powerful. For example, we don't have so-called live queries. In many cases, for an agent to work efficiently with Business Central, it needs data. It needs data to work with, and it needs to search for the data autonomously. Based on the user request, it needs to understand how to fulfill the task. It should go to the database, search for the data that is required to fulfill this task, and then maybe do some action. That's actually what agents do.
Speaker 3:There are many definitions of agents, but I prefer to call them large language models that act in a loop. They understand the query, they understand what next action to take, they do something to prepare for this action, and then they take the action. This could be a small step. Then they start once again: this is the outcome of my previous action, what's my next action? And the thing about these agents is that we don't program them deterministically. We can set some guardrails, so you can go here, and that's your goal, but how to accomplish this goal the agent should decide. One of the big parts of this decision process is going to the Business Central data and pulling the knowledge it requires. I see that queries, for example, would be a real solution for that, but we can't generate a query on the fly nowadays, like we can do with SQL queries.
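The "large language model that acts in a loop" can be sketched like this. The decide() function is a hard-coded stand-in for a real model call, and the actions and outcomes are invented for illustration:

```python
# Sketch of an agent loop: observe the outcomes so far, decide the next
# action, execute it, feed the result back in, and stop at the goal.
def decide(goal: str, history: list[str]) -> str:
    """Stand-in for the LLM: pick the next action from outcomes so far."""
    if not any(h.startswith("found:") for h in history):
        return "search_data"          # the agent must gather data first
    if "posted" not in history:
        return "post_document"        # then act on what it found
    return "done"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):        # guardrail: bounded number of steps
        action = decide(goal, history)
        if action == "done":
            break
        # Execute the action and record its outcome for the next iteration
        if action == "search_data":
            history.append("found: customer 10000")
        elif action == "post_document":
            history.append("posted")
    return history

print(run_agent("create a sales quote"))
```

The real decision step is non-deterministic model output, which is exactly the tension with ERP determinism discussed later; the bounded loop and the guardrails are what you control.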
Speaker 2:I understand.
Speaker 3:Maybe Microsoft can use this internally, because they do have an internal connection to SQL, so they can generate the SQL queries directly against the database. But this is, once again, not secure, because if the user doesn't have permission on a table, they shouldn't get that information. That's why even Microsoft should run all this through the platform layer, of course, taking all the permissions into consideration, and that's actually what I've really asked them to do here.
Speaker 2:So it's almost like a query API, and that query API would honor user permissions, so anything that they have access to would be filtered through the platform.
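The permission-honoring query layer just described might look roughly like this in spirit. The users, permission sets and tables below are hypothetical; Business Central's real permission model is far richer:

```python
# Sketch of a query layer that generates queries on the fly but always
# checks the caller's permissions, instead of hitting the database raw.
PERMISSIONS = {
    "alice": {"Customer", "Sales Header"},
    "bob": {"Customer"},              # bob may not read salary data
}

TABLES = {
    "Customer": [{"No.": "10000", "Name": "Adatum"}],
    "Employee Salary": [{"No.": "E1", "Amount": 90000}],
}

def run_query(user: str, table: str) -> list[dict]:
    """Refuse any table the user has no permission to read."""
    if table not in PERMISSIONS.get(user, set()):
        raise PermissionError(f"{user} may not read table '{table}'")
    return TABLES.get(table, [])

print(run_query("bob", "Customer"))
# run_query("bob", "Employee Salary") raises PermissionError
```

The point Dmitry makes is that this check must live in the platform layer, so an agent generating queries autonomously can never see rows its user could not see.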
Speaker 3:If this appears, it will open up a lot more scenarios for agents inside of Business Central, and that's when the power of agents will really be visible. Because for now, to be honest, I think it's more automation than an agent. It's a very deterministic flow, and there is very little space in the process where the agent can decide. I know that they are also working on the next agent, for purchase invoicing. When the vendor sends you the purchase invoice, maybe by email, it will grab this email, recognize the invoice, and convert it to maybe a general journal line or a purchase invoice, based on what is in the invoice. I think that's more of an agent flow than the first one, but let's see where it goes.
Speaker 1:That's a good point, that it does sound more like a workflow, more of a Power Automate flow, than actual AI, where it requires a little bit of thinking. That's what it sounds like. I mean, sales agent and purchase agent, it's very linear in what it's trying to accomplish.
Speaker 3:Yes, because I think the power of agents comes when you say: hey, this is your goal and these are your tools, and you are really free to organize your workflow the way you want and use these tools the way you want to produce the final outcome. That's where these reasoning models really help, because they can produce a really nice plan, then reflect on the outcome of this plan, and maybe do a second iteration, a third iteration. These deep search agents also work like this. They take the user query, they understand the intent, and they plan how to answer the query on their own: this I can pull from the knowledge base; or maybe this query is about code, so I can go to the code knowledge base and feed the answers from there; and many other things. Or maybe I want to generate something using the Business Central API, so I can just ask, in the same window.
Speaker 1:Yeah, I think that goes back to a conversation I know Brad and I had with somebody about where the power comes in when you're using AI. What maybe they should have done is a full-stack solution: if you want to order something, it takes a look to see what you have available. Do you have enough available? Then maybe it makes a suggestion like, "hey, I can create a purchase order, and we can probably get this vendor to send it to us on time." To me, that's a better solution in terms of the experience, versus: "I need you to order this." "Well, I don't have any, so I'll just create a sales order," and then it kind of stops there, and then maybe it calls in another agent to do the purchase order. If they had painted it as a whole solution, I think it would see better adoption, in my opinion.
Speaker 3:in my opinion, gives it a power of what AI can really really do for an organization.
Speaker 3:Yeah, and I fully understand where they struggle right now. Because, if we think globally, the agent concept really works by using the language models behind the scenes, right? It's a combination of different calls to the large language models, orchestrating this flow of calls in the right way, reflecting on the outcomes and so on. But it's still large language models producing the final answer, or the sub-answers internally, and that's not very deterministic.
Speaker 3:All of these things can hallucinate at any level. But if we implement this in the ERP system, we want it to be trustable and we want it to be deterministic. By design, these two worlds don't really fit together. We want to build something deterministic with non-deterministic tools, and that's, I think, where the real problem comes from. You need to really understand that, hey, this is an AI feature, and we need to accept that it may not be trustable for now. That's where we are right now. If we accept that, we can do many more experiments and implement many more AI features and see how they really work, instead of trying to build something very deterministic with a direct flow and calling it an agent.
Speaker 2:Yes, and Chris, to your point, I'm hopeful that it will get there one day, and I take it that maybe this is the first step. Hopefully they can get it to work to where the agent has some reasoning and it's all-inclusive and can do the whole flow. Chris, like you had mentioned: from the sales order to the purchase order, to scheduling it, to doing it, versus just creating a sales order and then having somebody go do planning or something else.
Speaker 1:Maybe it's on purpose. It's just prolonging the abolition of roles in the organization. It's like, ah, we'll give you a little bit, so you have a little bit of time to enjoy your position until it gets replaced.
Speaker 2:So what are you saying? It gives you more time to work on your resume, Chris.
Speaker 1:Is that what you're really trying to say?
Speaker 2:Maybe. Well, after talking with Dmitry, I just figured out that I'm going to update my resume tonight or tomorrow, whatever it may be. See, he's in the future, he's telling us, right?
Speaker 3:He's in the future.
Speaker 2:He's telling us put your resume together tonight.
Speaker 3:Yeah, but I think that's the one line you can add to your resume, and you will be there in the field for the rest of the years, at least for maybe two, three, five years.
Speaker 2:The manager of AI agents. There you go. Thank you very much. See, my new role: I'm the manager of AI agents. That's what I want my new title to be. I'm going to put that on my email: Manager of AI Agents.
Speaker 1:No, just put the update. It says "future": you're a futurist, and that's your role. Put it down right now.
Speaker 2:I'm a future manager of AI agents. Is that what you're saying?
Speaker 1:Yeah.
Speaker 2:Maybe I'll do that. Well, Mr. Dmitry, sir, we appreciate you taking the time to speak with us tomorrow early in the morning and to share information with us about CentralQ and where you're going with it.
Speaker 2:Congratulations on CentralQ turning two. I love just saying it, "CentralQ turns two." I don't know about "CentralQ turns three," we'll have to come up with another slogan for that, but we do appreciate everything you're doing for the community with CentralQ, and all the other information that you share online as well as at these conferences. I'm looking forward to seeing the results of this BC Tech Days session that you're doing with AI versus AL.
Speaker 3:I'm looking for scenarios, and by the way, if you're watching this podcast on YouTube, I'm open. Just send me your scenarios and we'll try to do this on the stage.
Speaker 2:Oh, that'd be great. When is the BC Tech Days conference, Chris? We'll have to make sure this gets out far enough ahead of time, and we'll have to share that suggestions are open.
Speaker 3:Yeah, I think it's the 15th and 16th of June, around those days.
Speaker 2:Okay.
Speaker 1:Plenty of time. We'll put it out, for sure. We'll have plenty of time.
Speaker 2:It's in June. Mid-June is BC Tech Days, and I'm looking forward to seeing your session. If anybody would like to find more information about some of the great things that you're doing, or learn a little bit more about CentralQ. Now, Chris, did you know that you can support CentralQ? Dmitry does this all on his own, and many people benefit from the use of it, so you also have the opportunity to support CentralQ. So, where can someone get in contact with you?
Speaker 3:So first, centralq.ai, that's the free website. From there you can go to the docs and see the documentation, or you can go to AppSource and find CentralQ there. And me, I'm on LinkedIn, Dmitry Katson, almost always online, especially at 6 am in the morning, always for you.
Speaker 2:I know. Great. Well, next time we'll do it at 5 am.
Speaker 3:Now, do you really want to talk about that?
Speaker 2:We'll talk about that later, we'll see. We'll have you on. I still hope, and I'm holding out, that you have the opportunity to make it to the United States for the upcoming Directions conference. I know it's really close, and I know it's difficult logistically to travel from the future back to the present on short notice, but if you do attend, just shoot me a message, because I will make sure that I look out for you, and Chris and I would like to hear more about the future with you while we're in Las Vegas. Thank you again for all that you do, and I look forward to speaking to you again soon. Ciao, ciao, ciao.
Speaker 3:Thanks for having me. Bye-bye.
Speaker 2:Thank you, bye. Thank you, Chris, for your time, for another episode in the Dynamics Corner chair, and thank you to our guests for participating.
Speaker 1:Thank you, brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us. Thank you for all of our listeners tuning in as well. You can find Brad at developerlifecom, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with them via Twitter D-V-L-P-R-L-I-F-E. You can also find me at matalinoio, m-a-t-a-l-i-n-oi-o, and my Twitter handle is matalino16. And you can see those links down below in their show notes. Again, thank you everyone. Thank you and take care.