Dynamics Corner

Episode 512: They’re Born, Answer, Die: How AI Agents Actually Work Under the Hood

Kris and Brad Season 5 Episode 512



In this episode of Dynamics Corner, Kris and Brad welcome back Dmitry Katson — a 20-year Business Central veteran, 10-year AI practitioner, and creator of CentralQ — for what might be the most eye-opening breakdown of how AI works that you'll hear this year. Dmitry walks through the mechanics piece by piece: what a large language model really does (predicts the next word — that's it), what an agent is (an LLM working in a loop with tools), and why your agent has zero memory between calls. His analogy is unforgettable — every time you send a message, the model is born, reads your conversation, answers, and dies. The next message? Born again with no memory of its past life. From there, the conversation takes fascinating turns: why the same prompt in GitHub Copilot and Claude Code can produce different results (it's the framework, not the model), how Dmitry built custom MCPs to dig through a decade of personal emails for a visa application, why voice mode is no longer speech-to-text but true audio-to-audio generation, and a mind-bending experiment where scientists digitally recreated a fruit fly's brain — neurons, connections, and all — and watched it come alive. Whether you're just getting started with AI or deep in the agentic development world, Dmitry's clarity will recalibrate how you think about every tool you use.


#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

Memory And Context Window Reality

SPEAKER_01

We as humans have an unlimited context window, as I say, and a memory. Those are two things that we have, but agents, or rather language models, don't have them. They don't have an unlimited context window, and they don't have a memory.

SPEAKER_02

Welcome everyone to another episode of Dynamics Corner. I learned a lot today, and I want to know how all of this AI actually runs in the background. I'm your co-host, Kris.

SPEAKER_03

And this is Brad. This episode is recorded on March 17th, 2027. You know what today is, Kris? March 17th. Yes, it is St. Patty's Day. St. Patrick's Day. Hope everyone's celebrating. I know there will be a little bit of celebration later on. But as you mentioned, we had the opportunity today to speak about AI and break down the fundamentals of how agents work, how large language models work, and the process they go through to give us the answers we're looking for. With us today, we had the opportunity to speak with Dmitry Katson.

SPEAKER_00

Hello.

SPEAKER_03

Good morning, sir. How are you doing?

SPEAKER_00

It looks like evening there. Good morning.

SPEAKER_03

It's early morning for you there, I believe.

SPEAKER_01

No, it's evening. It's 9 p.m.

SPEAKER_03

9 p.m. You know, with these time zones I can't keep up. But wow is all I can say. Since the last time we had spoken with you, the advances in AI and technology... I think we spoke to you last time about the workshop hackathon you had done about a year or so ago, and that seems light years ago as far as technology is concerned.

SPEAKER_01

It is, but it was only two years ago.

SPEAKER_03

Oh, two years ago.

SPEAKER_02

That's right, two years ago. Oh yeah, time flew.

SPEAKER_03

I can't even tell whether something was last week, a year ago, or two years ago. See, I thought it was a year ago, but I can't keep up with time, as they say.

SPEAKER_01

I think we talked twice: two years ago and a year ago. So it's becoming a good tradition to talk every year.

SPEAKER_03

Yes, yes. Well, you're doing some great things, and I see you're involved in a lot of AI development. You started out with CentralQ, which is what we first spoke about, something great that you've done for the Business Central community: a place where users, developers, partners, everybody who uses Business Central, can go to use AI to help search for content. You've made some great contributions to Business Central, and now I see you're doing great things with development as well, and some agentic development. So that's what I hope to talk with you about. Yeah, trying not to lose my job, that's a topic we can talk about. But before we get into all that, do you mind telling us a little bit about yourself?

Dmitry Background And CentralQ

SPEAKER_01

Hey everyone, I'm Dmitry. I'm currently based in Thailand, for six years already. Twenty years in Business Central slash NAV, ten years in AI, starting from machine learning ten years ago, and it's been a long way since. I'm doing a lot of things that I love. AL development is one side of my story, AI development is another. Now it's agents and architecture; that's what I put in my email signature as a job. I love to help people, teach people, experiment with different things, and overall I love making Business Central smarter. That's what I do.

What AI Means In Practice

SPEAKER_03

No, that's great. You're doing some great things. You joked about trying not to lose your job, or trying to keep your job. I think a lot of people think that way, and there is some reality to it. I think AI can help you with efficiency, and we'll have to come up with new ways to do things, because I know I use it quite frequently as well. One thing I want to ask you: you mentioned you started working with machine learning ten years ago, and now you're doing agentic engineering, Business Central development, and a lot of presentations. I think AI is one of those things at this point that everybody hears about, but a lot of people may not understand. Within your circle, in our group of the individuals that we speak with, everybody's all into AI. But if you step outside the group a little, some people still have no idea what AI is. So in the context of AI, what do you consider AI, or what do you mostly focus on when you hear the word AI? If that's a clear question. I know it's not too clear, but I think you can see the struggle a lot of people have: oh, that's AI, right? Is it spell checking? Is it driving a vehicle? Is it coding? Is it robotics? It's a little challenging.

SPEAKER_01

Yeah, sometimes I get requests for a 30-minute consultation, and people ask me, can you teach us AI in 30 minutes? That's fun, and I've tried to do it. The first thing I try to explain is that AI can be very different depending on what you need. I started with machine learning, and in those times it was called AI. Machine learning allowed us to train models on our own data, and we could use those models to predict things. The prediction could be some number, or an answer, yes or no, like in that toy where you ask a question and it says yes or no. It was trained on our own experience, and that was, and still is, AI for many cases. In late 2022, when large language models appeared, or rather became popular because of ChatGPT, that is also AI, and it's also prediction. But in this case it's prediction of the next word, based on the training data, and the training data was the whole internet: words, texts, books, trained with deep learning techniques, plus of course the advances of new architectures and technologies like the transformer. It's still AI; it's a technology that predicts the next word based on what it saw previously. So if people ask me, okay, we are in the manufacturing business and we want to predict production capacity for the next year, what AI should we use? It's obvious to me that in this case they need the machine learning approach, because it's their capacity, their items, and they have a history of what came before; it's not a new business. It's obvious to me that they need to train their own model to predict their capacity.
People think they can just go directly to ChatGPT and ask about it. And ChatGPT will answer you every time. But how good will that prediction be for your case, for your business? Well, you can try, and compare in one year. Large language models are not trained to predict things like that, to predict capacity, to optimize manufacturing, to do this complex stuff. So AI can be different depending on what task you want to solve. Language models are trained on text; they predict text. If there was a very good book about how capacity should be planned, a large language model could be used as a step in the pipeline, to set parameters for the trained machine learning model. So it can be used as a step, but not as the final solution in the pipeline. Choosing the right tool is still what will keep our jobs.
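For contrast, here is a minimal sketch of the kind of model Dmitry means by the machine-learning approach: something trained on your own history that predicts a number. The capacity figures and the simple linear-trend fit below are invented for illustration; a real forecast would use a proper ML library and your actual data.

```python
# Hypothetical example: fit a linear trend to your own capacity
# history and extrapolate it -- "train on your data, predict a number".

def fit_trend(history):
    """Ordinary least-squares fit of y = a*t + b over t = 0..n-1."""
    n = len(history)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, history))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var                 # slope: capacity gained per period
    b = mean_y - a * mean_t       # intercept
    return a, b

def predict(history, steps_ahead):
    """Extrapolate the fitted trend `steps_ahead` periods forward."""
    a, b = fit_trend(history)
    return a * (len(history) - 1 + steps_ahead) + b

monthly_capacity = [100, 104, 110, 113, 119, 124]  # invented data
next_quarter = [round(predict(monthly_capacity, s), 1) for s in (1, 2, 3)]
print(next_quarter)  # [128.5, 133.3, 138.1]
```

The point of the contrast: this model is deterministic and grounded in the business's own numbers, whereas an LLM asked the same question can only generate plausible text about capacity.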

SPEAKER_03

Someone sent me a meme, believe it or not, I still get memes occasionally, far fewer than I used to, and it said AI won't replace you if your job is to use the AI or to build the AI, or something like that. So it's true.

SPEAKER_01

Yeah, yes. I actually wanted to experiment with that. I think last month I asked Claude: knowing everything about me, can you predict whether AI will replace me or not? And Claude told me that since part of my job is to create AI agents, I should be safe, because that's what will be required as a job, at least for the next year. But then, in the week after that, LangChain, which makes one of the agent frameworks, released an agent that can build agents on top of their framework. Previously, LangChain had a very good agent framework called LangGraph, and then Deep Agents and so on, but it required experience and knowledge to use. Then they released an agent that already has this knowledge, and you can just ask. It's actually very similar to what we had three years ago. Three years ago, everyone said prompt engineering is everything. Yes, you need to know how to write prompts; in that case, you're safe. You remember that. But now we have large language models that write prompts on our behalf, we use those prompts every day, and they do that work much better.

SPEAKER_02

Yes, use agents to ask how to write prompts better. Here's what we're going to do: create me a prompt for this agent.

SPEAKER_01

Exactly, and then build this agent. Yes. So I would not be so sure, not 100% sure, that even my job would not be replaced.

SPEAKER_00

That was only one week apart.

Keeping Up And Sharing Knowledge

SPEAKER_03

I think there's some jest, some laughing in there, but I think it's a nervous laugh, Brad. No, I laugh at it because, with the rate of change, I think worrying about your job or your future creates a lot of stress, where sometimes just focusing on what you're doing, solving the problem, using the tools to solve the problem, can make it a little easier. But I do agree with you: the rate of change within AI is very difficult to keep up with. It's almost as if, when you go to sleep for the night, you'll miss out on something new. And I almost think I should go to sleep for a couple of weeks, because then I'll miss all the talk about something; I'll be able to see just what's new, or maybe where we level out.

SPEAKER_01

Many people are wondering nowadays, especially if we talk about AI for coding: if I gain some experience and this speeds me up in creating the final product, so I now know how to use AI to create software, should I share this knowledge with other people? Because if I have this knowledge, that's my advantage; if other people don't have this knowledge and experience, I can win compared to them. That's a question many people ask, even in our MVP community. I recently wrote a blog about it, and I've asked this question of myself many times as well. But I think what makes our generation successful, and why we can still develop technologies at such a rapid pace, is that we share the knowledge. That's the many open source projects that appeared recently, the many blogs, the many YouTube channels like yours. If people share the knowledge, maybe they think they will lose in the short term. But, and this is what I discovered from my own experience, if you share the knowledge, you become more respected, because other people can listen to you. They can apply your experience or not, but not everyone will do the same things you can do. I mean, it's easier to do, but not everyone will do it.

Where To Start With AI

SPEAKER_03

Yeah, no, I understand. I think the sharing is important, this is new to everybody, and we can all be successful. I do believe, like you're saying, that trying to be the only one to hold on to the information isn't going to bring you ahead; the ability to share with others brings everybody forward, including yourself. Like you said, you do get respected if you're someone who works with it, and people understand that you work with it. And as you mentioned, even in some of the groups that we're in, people ask questions, or they say what worked or didn't work. So collaboration has become more important, because it can bring everyone forward. You do quite a lot with AI, and I was asked a question recently, and I thought about my own experience, because I was extremely intimidated by it at first. And then I went from being intimidated to using it for many tasks in the day, not just coding. But if you were to talk with somebody, let's say in Business Central development, Business Central project management, functional consulting, or even anybody who wants to speak: we hear of AI, we hear of agents, we hear of prompting, we hear of instruction files, I hear of skills, I hear of tasks, I hear of Claude, I hear of OpenAI, I hear of Copilot. There's so much information out there that it's overwhelming and can be intimidating. But someone such as yourself, who has been working with AI for many years, does a lot of sessions on AI, and creates a lot of great things with AI: where would somebody start? If you had a new developer or somebody like that, where would you recommend they start, to understand and use some of these tools to help them be efficient? How do you think they should start?

SPEAKER_01

You know, there is a common opinion that to learn things, you need to start doing things, which I partly agree with. However, I think that by going through some webinars, maybe workshops or playlists on YouTube channels, a lot of which are free nowadays, you first get a structured understanding, rather than trying this and discovering how this thing works, then trying other things and discovering how those work. For many people, maybe that approach will work. For a more structured approach, I would recommend starting to explore and learn from trainers, from books, or from some other structured content. At the same time, you need to understand what you want to do, what you want to achieve, what your task is, what your goal is, and depending on that, search for the structured content. One of the websites I usually recommend is deeplearning.ai; there are a lot of free courses on many different topics, from respected trainers, management from different AI labs, engineers, and so on. That's one of the places I recommend. Also, especially for building agents, I would recommend going through the Claude academy; OpenAI has its own academy as well. You need to understand, of course, that each of these courses built by the AI labs is focused on their own tools. That's actually okay, because many tools work in a similar way. So that could also be a good approach.

SPEAKER_03

Okay, so your suggestion is: instead of trying to learn everything, find a task that you want to complete, then find structured learning to take you through that task so you can go from start to finish, versus trying to piece together little pieces. And that can help you get some comfort with how it all works.

SPEAKER_01

Yes, but that approach works very well if you don't know the area at all, if you want to go from zero. If you're already familiar with some tools, with the technology, with how it works, then probably trying to solve things would give a better output. You learn while doing things, but only if you know the basics.

SPEAKER_02

Okay, so if you know the basics, try to solve problems; if you're starting from the beginning, find a task and then go through the path of learning the basics to complete that task. Because, again, there's a lot out there. I got lost for a little while, because everyone we speak with, all they talk about is what they're doing with AI. Then you have a conversation with someone outside the pool that you're in, and you realize there are still a lot of people who are just learning AI. Even within Business Central: Business Central has a lot of AI features outside of coding, and many are still learning how to use those features and take advantage of their benefits. A question for both of you, actually, and Dmitry, this may be pertinent to you. We talked about the community, our little circle, and how we use the tools, and on the last episode we had a brief conversation about, outside of that, maybe family members: how do they perceive AI? They know you work in AI, but they're in a different industry. To me that's still within my larger bubble; in this case my region, I live in Washington state, right next to Microsoft, essentially. Now for you, Dmitry, living in Thailand: how are the people around you using it, and what's their perception of AI? Is anyone using it, and to what degree? Knowing, with your background, you know a lot about AI and machine learning.

SPEAKER_01

Most of the people I know who are not in IT at all know only one AI tool, which is ChatGPT. I know the owners of the dive shop here, where they organize diving, traveling, car renting. We live on an island, a touristic place, so many businesses like that are very popular here, and I have many friends in them. Sometimes they ask my advice, and I try to help them as much as I can, but they don't actually know anything about agents; they don't know the word "agents". They know how to go to ChatGPT and ask questions there, which I think is maybe okay; it helps them. I try to teach my wife, for example, to use Claude. As a former HR person, she's not very familiar with those kinds of tools, but at least I show her how to connect some services that she uses to get more content, so she gets a better experience with the AI tools. But we live in a bubble, that's true, and this bubble is not very big, to be honest, compared to the world and to the number of people who use just ChatGPT. Maybe we have, I don't know, ten million people in the bubble, and it sounds like a lot, but it's not a lot.

SPEAKER_03

That's where sometimes I have to take a step back and realize that everyone's at a different point in their journey. And it's good that you mentioned you're showing your wife how to use Claude, because AI, or, I use the word AI, but the agents and the tools, aren't just for development. You can use them for many different things. You talked about doing some analysis of information from an HR point of view. From a Business Central point of view, I know people use it to create statements of work, business requirements documents, change request documents. It does a lot more than create software applications; you can use it as a tool to assist you in many areas. So it's good that you're doing that. But to bring it back... go ahead.

SPEAKER_01

Yes, but also, I'm using AI in my personal life as well, not only for the job. A recent case was interesting. I had to apply for a visa, and one of the requirements was to list all of my travels over the last ten years. I thought, how can I do that? Where would I take all this information from? To do it manually, I mean, I don't have that much time. So I thought, okay, I definitely need to use AI for this task. I have personal emails and work emails, and I thought I needed to find information about tickets; we all have this information in emails. I tried existing AI services, like ChatGPT and even Claude, but they couldn't do it properly. So I created two different MCPs for myself, for Gmail and for Outlook, and also an agent that used these MCPs with personal instructions. And they actually did the work. The agent looped through my emails, did a great search, found all the tickets, and structured them. I checked; there were some mistakes, but I edited them, and I got the long list of my travels for the last ten years. Existing AI tools couldn't do that, even though it's a simple task, I think. But I used my experience in AI development, so I vibe coded, or used agents, to develop these MCPs and the agent, and then put it all together. So yeah, I apply this knowledge to my personal life as well.

SPEAKER_03

It's amazing what we can do with this technology, and that's where I sometimes have to take a step back and just say, wow. As you mentioned, I think a lot of people are afraid of it because they don't understand it. But if you flip it around to see some of the tasks, like searching emails, it becomes: how can I use it to help me do these tasks? So it's not replacing you; I think it's just a fear of the unknown. It's not necessarily replacing somebody; it makes someone's tasks a little easier, which, I guess, means you may need fewer people for some of those tasks if you can do them quicker, or you can do more tasks. But it's something that can be of big benefit to you. Now, you have mentioned agents. We talk about this word, agents. What is an agent? I think of them as people, and I now talk to it. But if you had to describe an agent to somebody who was new to it, what is an agent?
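The pattern Dmitry describes, a custom tool plus an agent that loops over it, can be sketched roughly like this. Everything here is hypothetical: the mailbox, the tool name, and the "model" (a scripted stub standing in for a real LLM and for the MCP plumbing that would expose the tool).

```python
# Hedged sketch: an email-search "tool" driven by an agent loop.
# In a real MCP setup the tool would be served over the MCP protocol
# and the next-step decisions would come from an actual LLM.

EMAILS = [  # stand-in for a decade of mailbox data
    {"subject": "Your e-ticket BKK-SIN", "year": 2019},
    {"subject": "Team lunch", "year": 2023},
    {"subject": "Booking confirmation: flight to Lisbon", "year": 2022},
]

def search_emails(keyword):
    """The 'tool': plain code the model can ask the agent to run."""
    return [m for m in EMAILS if keyword.lower() in m["subject"].lower()]

def stub_model(batches_so_far):
    """Stands in for the LLM: propose the next tool call, or finish."""
    queries = ["e-ticket", "flight"]
    if len(batches_so_far) < len(queries):
        return {"action": "search", "keyword": queries[len(batches_so_far)]}
    return {"action": "done"}

def agent_loop():
    batches = []
    while True:
        step = stub_model(batches)          # "model" proposes a step
        if step["action"] == "done":
            return batches
        batches.append(search_emails(step["keyword"]))  # run the tool

trips = [m for batch in agent_loop() for m in batch]
print([m["subject"] for m in trips])
```

The structure, not the stub, is the point: the loop keeps feeding tool results back until the model decides the task (here, collecting travel evidence) is complete.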

What Is An Agent

SPEAKER_03

Because in Business Central we see a sales order agent, we see a purchase order agent, we see an expense agent, and now we see an agent preview where we can create our own agents. What is an agent? How do you create one? You said you created agents, so how do you create an agent, too? A lot of questions, I have.

SPEAKER_01

You know, I participate in agent conferences in different cities, and "what is an agent" is always the first slide of each keynote at AgentCon. The term and the description of what an agent is differ depending on whom you ask. You ask me, and I am the technical person.

SPEAKER_01

So my definition of an agent is: a large language model working in a loop, trying to solve a task using tools. That's my technical definition of an agent. Let me describe it step by step. In the world of large language models, when we just started with ChatGPT, it was always a question and then a generated answer; it was one interaction. When we tried to solve a task using large language models, using ChatGPT, we were the agent. We knew what we wanted to achieve, and we asked the large language model several times until we got the solution, or the final answer that solves our question. It was a dialogue, multiple interactions, but in that case we were the agent. Now, as large language models became better, they achieved a new skill, let's say: a skill to ask questions of themselves, instead of us asking ChatGPT. Now they can ask a question of themselves, generate the answer, look at the answer, and ask a question once again, trying to bring the final solution closer. So now it's a dialogue not between us and the large language model; it's a dialogue between the large language model and the large language model.

SPEAKER_03

This all sounds so futuristic to me, that the agents learned a skill to ask themselves questions. I don't even want to try to understand it, but I have to.

SPEAKER_01

Yeah, okay, maybe I'll rephrase it. The easiest way to understand it is with software development. Say we need to create a feature. The first thing we need to understand is what this feature should do. Then we, as developers, usually plan how we will implement it: we need to create a table, then a page, then maybe add a field, then write a codeunit, then a function, then link everything together. That's our feature. Large language models can now do that first step: they can plan what tasks they need to execute to get the feature done. So first they plan, and then they start to execute the plan. But executing the plan is, once again, asking questions of myself, myself as a large language model: what is my next task? Okay, this is my next task. What code should I generate to make this task happen? Okay, now I generate the code. Now I need to check my code; I look at the code; okay, that looks good; let's continue with the next task. Then we switch to the next task, and so on. That's what I define as using large language models in a loop: they ask questions, and they solve those questions in a loop.

SPEAKER_03

Are they asking these questions... you mentioned before that large language models are predictions of the next word. Are they determining how to ask these questions based on a prediction of the next words?

SPEAKER_01

Yes, because they see the previous conversations with themselves. They look at the previous conversation: what was the last question? Now I need to solve this question; here is the answer. Now it's a bigger conversation, and it continues until the job is done. We also have what are called tools, because tools were in my definition: large language models solving tasks in a loop using tools. What are the tools? A tool is something pre-built, some function we already have, that large language models can tell us to execute. Because a large language model itself is something that generates text, it's the prediction of the next text; it can't go to our system and click something, run something, run some code. It cannot do that. So the tool is actually the connection between the two. A tool is something that we have; say, opening the browser, that's one tool. Large language models can't open the browser on our machine, but we can create code, a script, that opens the browser, and then we can tell the large language model: you can tell me to open the browser. So when the large language model decides that, to achieve its goal, to solve its task, it needs to open the browser, it tells us to open the browser. We run the script that opens the browser, and then we send the output back to the large language model: here is what's in the browser.

SPEAKER_03

This is just amazing. So now I just want to bring it back; I like to put it into simple examples.
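The browser example can be sketched as a tiny loop: the model emits a request naming a tool, the harness runs the matching script, and the output goes back to the model as text. The tool registry, the tool name, and the scripted "model" below are all invented for illustration; a real agent framework follows the same request, dispatch, observation cycle.

```python
# Minimal sketch of the tool-use cycle: the LLM can't act, so it
# names a tool; the harness executes it and returns the output.

def open_browser(url):
    """Pretend tool: a real one would launch an actual browser."""
    return f"page content of {url}"

TOOLS = {"open_browser": open_browser}  # tools the model may request

scripted_model_turns = [  # stands in for successive LLM generations
    {"tool": "open_browser", "args": {"url": "https://example.com"}},
    {"answer": "The page loaded; task complete."},
]

def run(turns):
    transcript = []
    for turn in turns:
        if "tool" in turn:  # the model asked for a tool
            result = TOOLS[turn["tool"]](**turn["args"])
            transcript.append(("tool_result", result))  # fed back as text
        else:               # the model produced a final answer
            transcript.append(("answer", turn["answer"]))
    return transcript

print(run(scripted_model_turns))
```

Note that the model never touches the browser: it only generates text that the harness interprets as a tool request, which is exactly the separation described above.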

Agents, Tools, And The Loop

SPEAKER_03

So now an agent is a person that learned things, just like we've learned things from reading, conversation, and the information available. And now it needs to build a house. To build the house, it needs to use tools to do the job. It doesn't know how to screw in a screw, but we say: here's a screwdriver that can screw in the screw for you, and it will execute turning the screwdriver. Taking it back to this: these tools are pieces of code that the language model knows it has available to do things. So the language model isn't doing anything itself; it's the code that runs that does something, and then returns information back to the large language model, and the large language model reads that information, takes it in, and tries to predict the next words based on the results of what it got.

SPEAKER_01

100% accurate.

unknown

Yes.

SPEAKER_03

Okay. And the difference between using agents and using the ChatGPT of yesterday, or a couple of years ago, I call it yesterday because it just seems like yesterday, is that it used to be: I ask a question, I get an answer. Pretty static. Static interactions. Now, with agents and tools, it's still static in steps, but it gives us the perception of being dynamic, because it keeps looping, as you mentioned a moment ago. The agent gets text, uses that text to determine, I need to use a tool to go do something else, gets the result back, analyzes it, does the next step, and keeps going and going until it thinks it should be done. I use the word "think", but I guess until the next word is "done".

SPEAKER_01

Yes, exactly. However, you mentioned that an agent is a person.

SPEAKER_03

Well, I think of it like it's a person.

SPEAKER_01

Yes. Yeah, I know that many people, when communicating with agents, try to communicate with them as with a person. It can work in many cases, but I personally try to keep remembering that this is a technology. Because the way to communicate with the agent, to get the most effectiveness in the final result, is to know all these details, all these nuances: how it's running, how it's solving the task. Because it's very different from how a person solves a task. And one of the main differences is that we as humans, we do have an unlimited context window, as I say, and a memory. So those are two things that we have, but agents, or rather large language models, don't have that. They don't have an unlimited context window and they don't have a memory. You always need to remember that when you ask a question inside of your VS Code, inside of your Cursor, in the chat, you send a new request somewhere, and the large language model that will answer this request doesn't know anything about you, about your previous interactions, about your tools, about anything. So if you don't use any tools and don't use any MCPs, just a raw chat, it will not be effective compared to a person. If you talked to a person yesterday, you suppose that, unless the person has had a lobotomy, he will reply to you today.

SPEAKER_03

Yeah, yeah.

SPEAKER_01

So when I also try to describe how these agents work, or in particular one individual step inside of this loop: when you ask a question, you send the request to someone who is completely dead. And then it wakes up, you know, it's born, and then it reads your request, tries to answer your question, answers, and it dies once again. Okay. Then when you continue the conversation, you ask the second question, you send it once again, and it's once again reborn, without remembering anything about its previous life. It reads your previous conversation, message one, message two, and then, oh okay, and then it answers the question, and then it dies once again.

SPEAKER_03

Okay.

SPEAKER_01

So our job is to prepare this message history as effectively as possible, so that on every call, it will read it and can understand what you want from him. From it.

What Agents Are Made Of

SPEAKER_03

I have many agents that I talk with. So anytime that I talk with anyone, I like to bring it back to make sure I understand what you're saying, and also so anyone listening can get a different way of hearing it. An agent isn't a person; we just make it look like it's a person. Basically, you send it a piece of text, and that's all it knows, because it's a command. You're sending it one command, it will execute that command, whatever it needs, predict the next word, or I use the word execute, run that command, send information back, and at that point it's gone. There's no memory or persistence of memory. If you're having a loop, what it's basically doing is taking the first thing that you sent it, plus a listing of all the tools that it has, right? So when you first wake it up, you tell it: here are all the tools you have, here's my first question. It gives you an answer. Then when you ask the second question, these frameworks will send the first question, then the list of tools, their results, then your next question. So every time, it's in essence resending the entire conversation and list of tools to the agent. So it doesn't have memory; it just has more information sent as part of your question that you don't see being sent in the background. Okay.
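Brad's description maps to a very small sketch: a stateless model function that sees only what is passed into it, and a chat wrapper that resends the whole history on every call. The model function and message format here are invented for illustration, not a real API.

```python
# The "model" keeps no state between calls; all continuity lives in the
# history the wrapper resends, growing with every turn.

def stateless_model(full_payload):
    """Each call sees ONLY full_payload; nothing survives between calls."""
    n_questions = sum(1 for m in full_payload if m["role"] == "user")
    return f"answer #{n_questions}"

class Chat:
    def __init__(self, system):
        # The tool list rides along in the system message, every single call.
        self.history = [{"role": "system", "content": system}]

    def ask(self, question):
        self.history.append({"role": "user", "content": question})
        payload = list(self.history)           # ENTIRE conversation, every time
        reply = stateless_model(payload)
        self.history.append({"role": "assistant", "content": reply})
        return reply, len(payload)             # payload grows with each turn

chat = Chat("Tools available: open_browser, read_file")
print(chat.ask("First question?"))   # the model saw 2 messages
print(chat.ask("Second question?"))  # the model saw 4 messages, history resent
```

The growing payload is exactly why long conversations eventually hit the context window limit.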

SPEAKER_01

Exactly. Okay, so in these terms, nothing actually changed since large language models appeared. The architecture and the technology behind large language models is the same. We just built a lot of frameworks, services, whatever you call them, that automate giving the large language model more context in the most effective way, so it can answer your question as best as it can.

SPEAKER_03

See, that's interesting. So the underlying architecture hasn't changed; it's how we interface with the architecture that's changed, it's progressed forward. So if you don't recommend talking to it as a person, what is the best and most efficient way to speak to an agent, or to type to an agent? I don't even know. Because again, I hear this word context, right? So context is how much stuff you can send to it at once, correct? So again, if we have 10 questions that we've asked, we keep sending it the same 10 questions, and eventually, just like my brain, it fills up, right? So how do we manage and talk to an agent to be able to maximize the results that we get back from them?

SPEAKER_01

What I am trying to do, at least to keep in mind (that's my exercise, what I do from time to time), is to place myself on the agent's side. Okay. So: I am asked to do this, and I see only this information. Am I able to effectively solve this task knowing only this? Or do I need to know something more? That's how I look at it. On our side, when we talk to the agent in the chat window, we need to think about what information the agent needs to see to solve these tasks as effectively as possible. Yes, agents in the current IDEs, like VS Code, Cursor, Claude Code, ChatGPT, or the OpenAI Codex, have built-in tools to search things on their own. They can search the web, they can search files; those tools are already pre-built. So even if we don't give this information during our conversation, the agents can help themselves and search things, if the agent thinks it will help to solve the task. But still, if this information is not available anywhere, if it sits somewhere in another folder on our computer, the agent has no clue that it can search there. So we need to tell it: you can also search in this folder. Like in Business Central AL development, if we return to our trenches, we have a great repo made by Stefan, the BC History repo, which I think all developers have locally, with all the source code of Business Central. And then we work inside of our extension. So try to connect them. What I am doing is adding the history repo as a git submodule inside of my AL extension. That's one of the ways. Or maybe, if you have it locally on your computer, you can also tell the agent that it can go and search information there about how Business Central works.
Yes, there are also MCPs that can search information through the symbols, and I use those as well, but not all information can be found through the MCP, through the symbols. So I prefer to use the raw Business Central code as a place where my agents can search for information while trying to solve a task for my extension.
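A hedged sketch of the setup Dmitry describes: build the agent's instructions so that its file-search tools know which local folders are fair game. The folder labels and the submodule path below are examples only, not the actual repo layout on anyone's machine.

```python
# Sketch: generate a "you may search here" block for an agent's instructions.
# Paths are illustrative; the BC History submodule path is an assumption.
import os

SEARCH_HINTS = {
    "BC base app source history": "vendor/MSDyn365BC.Code.History",  # git submodule
    "our AL extension":           "src",
}

def build_instructions(hints):
    lines = ["When solving AL tasks, you may search these local folders:"]
    for label, path in hints.items():
        note = "" if os.path.isdir(path) else " (not found on this machine)"
        lines.append(f"- {label}: {path}{note}")
    return "\n".join(lines)

print(build_instructions(SEARCH_HINTS))
```

Dropping text like this into the project's agent instructions file is enough for the agent's built-in file-search tools to look in the right place instead of guessing.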

SPEAKER_03

Understood. Okay. I like that strategy: the frameworks allow them to search other tools or other things to be able to help them. The agents don't have a memory; they only have the context of the conversation. I hear things now about memory, I hear things about context. How do you give it memory? Do you just save conversation history and say, refer to my history? Or is there another way that you can do memory, and how much space do we have for it to search? Because if I were to do a task, you said: think of, if I had this, how would I do it, right? Do I have enough information? If it doesn't remember anything, and it has to go through, look, Stefan's BC History repo, which is a great repo because you have the localizations and the previous versions to go through. Doesn't it take a lot to search through all of that and remember it all?

Context Management And Memory Illusions

SPEAKER_01

So remembering is not the right word here. There is no memory. Okay. Once again, what many people call memory is external information where the agent can search for some content, depending on the current task. So it's not the memory of the agents; they don't have a memory. I think we should clearly state that. Yes, the frameworks, like the IDEs... I don't know, frankly speaking, about Visual Studio Code's way of storing conversations, but Cursor, I know that around a month or two ago they changed the way they store long conversations. Previously, if you had a very long conversation, more than 180,000 tokens, they ran a tool to summarize it. Summarization is actually taking a big text and shortening it down, to keep only the most important information. It worked, it works, but many nuances are lost during this step. Because of that, when we continue the conversation after the summarization, and we ask the agent, hey, can you do the same as you did five steps before, it cannot continue, because it doesn't have this information in its history. What they did recently is they now automatically save the whole long conversation in separate files in the system folders, and in the summary they have references pointing to the original files. This helps the agent to see: oh okay, we talked about that. I don't know what exactly we discussed, but I can look there. So it goes to the original file, searches once again for this information, grabs that piece of information, and then continues. This is called offloading of long text.
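The summarize-and-offload behavior Dmitry attributes to Cursor can be sketched roughly like this. The threshold, file naming, and summary format are all invented for illustration; a real tool would summarize with a model rather than a placeholder string.

```python
# Sketch of "offloading": when the conversation outgrows the context budget,
# write the full transcript to disk and keep only a summary plus a pointer.
import os
import tempfile

MAX_CHARS = 200  # stand-in for a real token limit like 180,000 tokens

def offload(history):
    text = "\n".join(history)
    if len(text) <= MAX_CHARS:
        return history                           # still fits, keep everything
    path = os.path.join(tempfile.mkdtemp(), "conversation_001.txt")
    with open(path, "w") as f:                   # full transcript survives on disk
        f.write(text)
    summary = (f"[Summary of {len(history)} earlier messages; "
               f"full text saved at {path}]")
    return [summary, history[-1]]                # summary + most recent turn only

history = [f"message {i}: " + "x" * 40 for i in range(8)]
compact = offload(history)
print(compact[0])  # the pointer line the agent can follow back to the file
```

The pointer is what makes this look like memory: the agent's file tools can reopen the saved transcript on demand, even though nothing was ever "remembered."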

SPEAKER_03

So it makes it look like it's memory, but it's really not memory. It's just: here's a sentence with a pointer, in essence, to a larger file for a bigger context of conversation. So it gives us the perception that we're dealing with something that's remembering, but it's not. It's still the same old loop of go here, read a piece of information, do something, save it. So all of the effort then is in all of these tools or these frameworks, right? Because I hear a lot about models and frameworks; I could talk about all this for days. So you have all these models, the large language models, that's the underlying technology. And then you have the framework, and the framework would be Claude, GitHub Copilot, Cursor, or other platforms that interact with these models. So what they're doing is just building technology that interacts with the large language model but becomes efficient at dealing with context. That's why I could use GitHub Copilot with Opus 4.6 with one prompt and get a result, and I can go into Claude with Opus 4.6 with the same prompt and get a different result: it's the way that they manage how it works with the model. It's not the model that's different, it's the framework that's different. See?

SPEAKER_01

Yes.

SPEAKER_03

I have all this. I am now an AI expert. Until tomorrow, it changes.

SPEAKER_02

You only have 24 hours to claim that.

SPEAKER_03

Yes, yes. For maybe one more day. Wow, this is a lot, but I really like how you explain the separation between the large language model architecture and how the frameworks are looping through to communicate with that architecture, and the perception of memory. So when I talk with my agent, I shouldn't be writing... and Chris knows me well... I write: thank you, that was good. So I shouldn't write any of that back after it's done? Or should I... I shouldn't give it praise?

SPEAKER_02

Because it's gonna loop through that again.

SPEAKER_01

Every time you ask something. If it makes you feel better, you can.

SPEAKER_03

Okay.

SPEAKER_02

We want to humanize things like that. I think that's the purpose.

SPEAKER_03

I haven't played with it yet because, again, there's just so much to work with on this technology, and it's very difficult to keep up with it. And we didn't even go down the road of having agents manage agents, because that's a whole other conversation. But I really want to get to the point where I can talk with it, and it will do something and then talk back to me. Right? I know there are some voice prompts and ways you can get the voice in, but I want it to be like, if I could hook up something like ElevenLabs where it can talk back the results or do something, I think that would be impressive. Because then I would never leave the house. I would just talk with it.

SPEAKER_01

I think that's already available.

SPEAKER_03

Yeah, no, I've seen people that have it. I haven't really experimented with that because now I'm working on some other things. Like I said, I can't keep up with everything.

SPEAKER_02

I mean, Brad, don't you do that right now with Grok? Like, did you have a conversation with that?

SPEAKER_03

Man, I used that the other day, to be honest with you, for the first time, just to show someone. But you're talking about Grok on the phone. Grok in the vehicle is different, because I just say, find me a route to here with a charging spot, or something. But on the phone, I don't do it now, because I found that between GitHub Copilot and Claude, like the Claude framework, I can do everything I need to do. And even now, with Claude being able to be accessed on your phone, I can start something in one place, access it on my phone to continue doing it, and then even now, you know, I have this joke about the lobsters and everything, where you can have these frameworks set up so you can connect to them remotely. I don't really use Grok anymore.

SPEAKER_02

I guess I was thinking about the vehicle, because a couple of weeks ago I had a long drive, so I had a quick conversation about a topic, and it was just a nice little quick banter, and I was like, okay, well, I'm done with that topic. It's pretty interesting that it has its own opinion about a specific topic. Obviously, it's based on context that it has access to, but I thought it was pretty fascinating. So eventually, I would think that Business Central would have that down the road, where it would be like a business manager agent in Business Central, where you can have a conversation about strategy, and it's gonna give you perhaps a conversational answer, based on the information it has, one day.

SPEAKER_03

I will try that, though. I will try to have a conversation with Grok in the vehicle as I'm driving, to see if we can do some of that. I think if you can get to the point where you can say, summarize the sporting game for me, for example, or something like that. I do try to do that now. I do more of a stock analysis: go out and give me the current stock price of NVIDIA versus where it was last year, compare it to the Dow. I try to do stuff like that and say, give me a graph. But if I could do that type of stuff in the vehicle, like give me a summary and analysis in real time, I think that would be worthwhile. So it's interesting.

SPEAKER_01

I use the voice mode from time to time. I find it very effective for me when preparing for conferences, for example. When I want to prepare for a speech, as a non-native English speaker, I always try to prepare for the session, and the voice mode helps me to structure content, maybe to rephrase something, maybe to learn new words, new phrases, and so on. Because the voice mode works not like voice to text, then text to voice; it's directly voice to voice. So it's generating voice. These models, the multimodal large language models, are called that because previously they were trained on text as an input and text as an output, but then they were trained on audio as an input and audio as an output, directly, without transcription. I did not know that.

SPEAKER_03

I didn't know that either. I thought it was transcription.

SPEAKER_01

No, it's not. It previously was transcription, but I think around two years ago they changed that; they trained the large language models from the audio sources directly. So that's why they can hear the nuances of the voice: how loud you are, you know, all these things that would not be possible just going through a transcription. That's why these voice modes are really helpful, at least for me, for such kinds of tasks. Like a lot of people. I have to look into that.

SPEAKER_03

I like that, because as you mentioned, it can help you prepare for a session or a speech or an engagement where you're talking, and it can analyze your audio and provide feedback on it.

SPEAKER_02

I'm learning so much today. It's just it's crazy. I had no idea that was uh a thing.

SPEAKER_03

Do you want to know the problem? I'm so old, my context window is filling up. I need to stop doing this. I need to start doing the summary of: oh, I had this chat here, go back and listen to it, and then I can have the conversation. I need to have that put into my brain so I can just do all that with everything. Wow, I have to look into that more now, because I think we're getting closer and closer to what I was thinking, and then you can have an animation of it as well when it's speaking. Again, it's all just visual, but then I could really feel like I'm talking with someone. I'll ask you a question to get your thoughts on this; I've asked this of others. So, large language models are trained on information, and now we talked about all these technologies, these tools, the perception of memory. If I went through and recorded my life every day, 24 hours a day, all of my interactions, could AI create me, to where you could sit down and talk with me and think you're talking with me? So if it had a recording, whether it be text or audio or anything, of how I interacted and how I did everything and what I did, do you think it could create something that resembled me?

SPEAKER_01

That would be maybe a good imitation of you. However, it will not be a hundred percent your duplicate, your digital duplicate, because it will not have all the information from your brain. What I want to say is: when you do something, there are signals that come into your brain. The signals are coming from the sensors; our body is a big sensor. Audio from the ears, video from the eyes, but there are also a lot of sensors inside of our body. So there is a lot of nerve activity happening inside of our body, and our brain receives all of this information. That information allows it to predict the next thing, to allow us to make the next step.

SPEAKER_03

So it's like a large language model. It's predicting the next step based on what it knows.

SPEAKER_01

Yes, it is predicting, obviously. But the amount of data we receive to make this prediction of the next step is a lot, a lot, a lot more than large language models receive.

Voice Mode Digital Doubles Closing

SPEAKER_02

I think for you too, it's, you know... if you were to store yourself, Brad, first they would need a ton of storage. I think it's petabytes, how much your brain contains, all that storage. And on top of that, it's almost like you're only at that point in time of what you store. So any new information you receive, it would stop at that point.

SPEAKER_01

So yeah, yes, it's just a lot of information, and with our current technology we don't have the space to store this information, or the tools to receive it. If we could inject an electronic device into each of the neurons, and each of our neurons receives thousands of inputs... if theoretically we could do this, and save all this information, and train on top of it, maybe then, yes. Did you read about the recent experiment with the fruit fly?

SPEAKER_03

I did not. I want to hear about it, but before you start: I'm not saying if they could do this with the brain, I'm saying when. It may not be in my lifetime, but with the way this is going, we will be at that point at some point in the future, I believe.

SPEAKER_02

You do need power, which I also read about: we only consume 20 watts of power when we're using the brain, compared to what a prompt or an agent uses for power.

SPEAKER_01

Yeah, so there was an experiment recently. The fruit fly brain has around, maybe, a thousand neurons (I could be not fully correct here) and around one hundred and fifty thousand connections between all these neurons. The scientists put all these neurons and all these connections into a computer model. They discovered all the physical connections and copied them to the digital world, and created not a model that was trained on this brain, but actually reproduced the brain digitally. I have to read about this experiment. Yeah, so they digitally reproduced the same brain the fruit fly has, and then they created a digital body for this fruit fly, connected the digital brain to the digital body, and it became alive, digitally. They watched how it moves; it searched for food.

SPEAKER_03

Wow, it's wow.

SPEAKER_01

It's actually pretty awesome. I mean, that is amazing. But it happened last February, so you can find this information.

SPEAKER_03

I will look that up. No, thank you. See, we're getting there. It's when. I do see a point, because, yeah, I won't even get into it, because some people think I'm crazy, but I do see a point in the future where you just take your brain and you plop it into a body, and that's it. We're getting closer and closer.

SPEAKER_02

Isn't there a show like that, where you could just hop into another body?

SPEAKER_03

That was different. I think you could teleport into the bodies, but I'm saying now you could have a robot body that you control, because age is just, you know, deterioration of cells, it's just the age of your body. But if you can replace those parts and still control them, you could be a cyborg and live forever. Like the future. I could talk with you all day about this, and to be honest with you, we covered quite a bit. Hopefully, those that are listening were able to get a little more understanding of how this works. I learned a lot. It's amazing, but we do appreciate you taking the time to speak with us. And again, as we talk about, time is precious. It's truly the currency of life: once you spend it, you don't get it back. So any moment you spend talking with us is time you're not spending doing something else, so we do appreciate it. I laugh a little bit because it's just mind-blowing, all of the information that you can share. If anybody would like to reach you or contact you to learn more about AI, and about some of the other great things you're doing, such as speaking sessions at conferences and the training services that you have for AI, what's the best way to get in contact with you?

SPEAKER_01

LinkedIn: Dmitry Katson. Easily find me there. My website, katson.com; all the contacts are there. I will be in person at Directions Asia in Ho Chi Minh City this year, where I'll deliver the AI development for AL workshop. And also at BC TechDays, hopefully, if I get a visa without any issues: Antwerp, Belgium, in June. A great place to be.

SPEAKER_03

I didn't hear Orlando, Directions North America, though. You're not making it to that one?

SPEAKER_01

Still the US visa process. Very challenging.

SPEAKER_03

Well, hopefully we can get you here for one of these events in the near future. Yes. Yeah, I think it would be great to have you; I think everyone would benefit from all that you can do. But it sounds like you have a big year ahead of you with these sessions, and we do appreciate everything that you do and all that you share. I think you share some great things. And thank you again for CentralQ. I know a lot of people are using that, so that's another way: if you go to CentralQ, you can find a way to get in contact with you, and also use that service to get information to assist with Business Central implementations, whether from a development or a functional point of view. Thank you again. I look forward to speaking with you soon. And now I have to go take a break, because my brain is full. We appreciate it. Yes. Thank you, thank you, Brad.

SPEAKER_01

Thank you, thank you, Christopher. Thank you, everyone who listened to this podcast. I was glad to be here. Always nice to talk to you guys. Thank you.

SPEAKER_03

Likewise, Dmitry. Thank you. Talk to you soon. Ciao, ciao. Thank you, Chris, for your time for another episode in the Dynamics Corner chair, and thank you to our guests for participating.

SPEAKER_02

Thank you, Brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us, and thank you to all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter at DVLPRLIFE. You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot io, and my Twitter handle is matalino16. You can see those links down below in the show notes. Again, thank you everyone. Thank you, and take care.