Dynamics Corner

Episode 412: 🤖👨🏻‍💻 AI-Assisted Development meets Business Central 👨🏻‍💻 🤖

• Tine Staric • Season 4 • Episode 412

In this engaging episode of Dynamics Corner, Brad and Kris are joined by Tine Staric. Listen as they dive into the growing impact of AI on software development, spotlighting tools like GitHub Copilot and Cursor. They discuss how these AI tools boost coding efficiency, automate documentation and testing, and simplify ERP system interfaces. Tine shares his passion for Business Central and AI innovation. The trio highlights the importance of grasping coding fundamentals, especially for junior developers who might struggle in an AI-driven world without a solid foundation. They also address AI's quirks, like hallucinations that can misguide code, emphasizing the need for human oversight to keep business processes on track. Efficiency gains shine through when relevant files feed AI context, and the potential of local models sparks excitement for future development workflows.
 
🧠 AI tools like GitHub Copilot and Cursor enhance coding efficiency and automate tasks.
🧠 Understanding programming basics remains essential, especially for junior developers.
🧠 AI hallucinations pose risks, requiring human oversight to ensure accuracy.
🧠 Efficiency jumps with proper context, and local models could shape AI's future in coding.
🧠 AI simplifies ERP interfaces and workflows but isn't a substitute for developer expertise.

#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

Speaker 1:

Welcome everyone to another episode of Dynamics Corner. What is Claude Sonnet? Is that like a poet, a poem, music? I don't know. I'm your co-host, Chris.

Speaker 2:

And this is Brad. This episode was recorded on February 26th, 2025. Chris, Chris, Chris, who's Claude? Is that what you're asking?

Speaker 1:

Yeah.

Speaker 2:

Well, today we have the opportunity to find out who Claude is, what his sonnet is, and learn a lot about AI-assisted development in Business Central. With us today, we had the opportunity to speak with MVP Tine Staric. Hello, good afternoon. How are you doing? Good afternoon, I'm doing fine. Excellent, excellent. I've been looking forward to speaking with you for some time.

Speaker 2:

You know, Chris, I don't know if you know this, but I bother him all the time. For everything, anything. I just bother him all the time. You know, I get up at 3 o'clock.

Speaker 3:

I don't think it's bothering, but I am always surprised that you wake up at 4 am and, you know, I'm the first one you text. 4 am, and the first thing that comes to his mind... I think we have to start this over, because now that sounds a little bad.

Speaker 2:

I wake up, I text Tine. I always have things on my mind, and he's involved in so much now. It's like I have to have an outlet to say, what about this? What about that? But now you just let the cat out of the bag, and everyone's listening to this going, what's going on?

Speaker 3:

Well, what about the other people that think I'm the first one that they text at four in the morning?

Speaker 2:

Oh, now see, now you've outed yourself. Now you just set me up, my friend.

Speaker 3:

Look, I'm sorry. I'm sorry they have to deal with it. I'll take the first place.

Speaker 2:

That was perfect.

Speaker 1:

That was perfect.

Speaker 3:

That was excellent.

Speaker 2:

Excellent, so I'm glad things are going well. You've been doing a lot of great things, and, you know, we'll cut out the small talk and just get right into it. Before we do that, can you tell everyone a little bit about yourself?

Speaker 3:

Sure, yeah. So my name is Tine. I'm, let's say, a developer in this world of Business Central. I'm originally from Slovenia, but right now I'm living in Lithuania, and actually this is going to be my last year in Lithuania, so next year I'll just say I'm Slovenian, I'm from Slovenia. I work at Companio, but I prefer to describe myself as just, you know, being young in the world of Dynamics and being really passionate about the technology. So I like to explore stuff, I like to blog about stuff, I like to talk at conferences about stuff, and this stuff is usually Business Central and, right now, a lot of AI. Which is exactly what I text him about at four in the morning: AI and the great stuff that he's doing. It's almost like you're not a developer unless you do AI.

Speaker 1:

Nowadays, you have to. That's what it looks like, at least.

Speaker 2:

I mean, I hate to always talk about AI, but it seems to be the word of... I don't even know how long I could go without hearing it. You gotta say AI for SEO purposes. No, I think you just have to completely disconnect yourself from life, sit in the woods, and I still think AI would appear in the trees. I don't think you can get far away from it.

Speaker 2:

So AI is like anything, it's a tool. So, AI-assisted development. As you had mentioned, you're doing a lot of great things. You are newer to the community, you've been doing it for a while, but a lot of us older... we always talk about it, I talk with him about dinosaurs. We have the dinosaurs and the younglings. I don't know what we could call them. What do they call the young Padawans, younglings, in Star Wars? Do you know, Chris?

Speaker 1:

I think it's a young Padawan.

Speaker 2:

Oh, maybe it is younglings. Yeah, I think it's younglings. I know you have Padawans when they're in training, and then you have the younglings when they're starting. That is true.

Speaker 2:

I think that's right. It's good. But I don't even know where to begin with these questions, because I've seen so much that you have been doing with AI-assisted development. Can you tell us a little bit about your thoughts on what you've been doing, what you've been experimenting with in that area, for AL and even any other languages? Because I saw you did something amazing that I haven't been able to get back to, but I want to talk about that at the end, with that whole Python script. Chris, did you see what he did?

Speaker 2:

Wow, Python script. Yeah, so we'll get to that, because I've been trying to do that for us and I just haven't had the chance to do it, so maybe we can ask a youngling to help us out.

Speaker 3:

Okay, well, depends who you put me next to. But okay, the beginning. I think I started more than two years ago, when GitHub Copilot was initially released, and even back then it demoed really well, right? You type out a comment and it will propose a procedure for what it does. And then I tried it out with, I think, C# at the time, and it did exactly that. So that was super exciting. But then I switched to AL and it was meh. But even the meh parts were more than enough to cover the, what was it, $10 that the personal license cost. So I just kept using it, and for a long time, for me, it was just the autocomplete. So instead of waiting for IntelliSense, instead of me figuring out how to complete the procedure, I'll just tab it out. And that was good enough. But then in the past, let's say, six months, when they started rolling out better models, when they gave us Copilot Chat, then especially Copilot Edits, now with agent mode, with Claude powering all of that, it became more and more powerful for AL.

Speaker 3:

But whenever I tried a new feature of Copilot, I didn't start with AL, because my assumption is still that AL is going to be weaker compared to a mature language, simply because there's so much more training data available for something like TypeScript. So I always started with some of the side-project ideas that I had on my mind: a Python script, a frontend in React, whatever came to mind. And it was so cool. It was really amazing to see that, once Edits rolled out, I could just type a sentence, this is what I'm trying to do, and it would generate that code. So I always approach it as: get excited with a mature language, because you will see the full power. And then, once you know what kind of use cases should work, that's when I try to bring it back to AL to see, okay, I got this kind of scenario working in TypeScript, does this work in AL? So, in a sense, I wasn't discouraged by the use cases that didn't work in AL.

Speaker 3:

And well, I'm still kind of on this hype train, because everything that rolls out, even if it only works for mature languages, has brought me, I don't know how to describe it, a lot of enthusiasm for all of the side projects. Because, if I was not working in BC, I would likely place myself more on the backend side of software development.

Speaker 3:

I would never go for frontend. I'm not good at frontend, but now I don't have to be. All of the backend code I would write myself, and now I have someone else that I can prompt to give me the frontend parts. And now all of those side projects that were just kind of waiting on the sidelines, they come to life in, I don't know, a weekend. It's super cool to work on the side projects with a tool like that. But it also has, I won't say all of them, but a limited but growing number of use cases that I use every day for AL: to just skip the boring part, skip the coding, and get more into the problem solving, which is the fun part. I mean, coding is the boring part, I would say.

Speaker 2:

Yeah, coding is well. It's nice to be able to create something, but you had mentioned a lot of things, so prior to working with AL, did you program in another language?

Speaker 3:

Never professionally. It was always more for my pleasure.

Speaker 1:

You dabbled a little bit. So that's interesting, it lowers your barrier of entry to other languages because you have Copilot. Is that how you would look at it, where now you can pick up other languages because there's an assistant to get you started?

Speaker 3:

I think that's a separate topic we can open, because, even though I'm super enthusiastic of using it for side projects, I don't think I would let a developer just generate code with AI if the developer doesn't understand what the code is supposed to do.

Speaker 2:

That's the key. I think you hit the key right there, and that's where I wanted to go with it, because I myself use Copilot. You can do some basic things, where I said, in Python, generate me a snake game that plays itself, and it does it pretty well. But if I look at the code, do I really understand what it's doing? So I think you really need to understand the concepts, and then the language and the tool will help you build it. But you still need to be able to review it. So it's not that it lowers the skill needed for coding. You still need to have the skill of understanding how it all works and how it's all put together, so that you can review it as if it came from a junior developer and you're going through a code review.

Speaker 3:

Yeah, I think it's a very, very powerful tool for senior developers because once you're a senior developer, it doesn't matter if you just know one language, one syntax. You understand how code is written. You understand how code is structured in general. So, even though I probably couldn't write those Python scripts myself, I can read the Python scripts and know exactly what's happening there because I understand how code works. So I think for senior developers, this is an amazing new tool, but it's not a tool that will let junior developers skip a level and suddenly be experienced much, much sooner.

Speaker 1:

Yeah, I think that's what I was trying to say. It's like, if you've been coding and you understand the structure of code from maybe a specific language, getting into another programming language would be a little bit easier, because you understand the structure, how it's supposed to work. But getting into another programming language, like Python for example, it's, hey, I kind of understand how this works, but I don't want to code it from scratch. So it's an easier way to get into other programming languages, because now you have an assistant, a Copilot assistant.

Speaker 3:

I would agree with that. Recently there was a post on Reddit where somebody started coding with Copilot, or they were using Cursor.

Speaker 2:

We can talk about those. I can't wait to get into that. Yeah, that's awesome. You know, that's on my list too.

Speaker 3:

So they were using Copilot to generate some code, generate an application, and it was fine the first day, the second day, the third day, the fourth day. But then Copilot just started going in circles, introducing bugs. When they asked it to fix a bug, it introduced more bugs. You will hit that if you don't understand the code that Copilot is trying to fix for you. If you know what should be fixed and you just don't care about the syntax, the text that is written out, this is going to be awesome. But if you also expect the problem solving to be done by Copilot, you will sooner or later hit that limit of, well, now it doesn't know either.

Speaker 2:

Yes, and see, I do want to go back to that, because recently, this week, I had conversations with individuals that said, oh, I can be a developer now. So I want to take it back to what you had mentioned. There are a couple of points. To bring it back to having senior-level skill in understanding the concepts of how things work: Copilot can be a great tool to assist you. If you're someone that's new to application development, you can still use it as a tool, but don't use it as a learning tool, in a sense, because you still need to understand the fundamentals of the structure to review what comes back. Because the AI will hallucinate, depending upon what you ask it, what it's trained on, and what you're trying to do. Don't make the assumption, and I think that's a big disconnect.

Speaker 2:

I've heard some stories of individuals talking about the use of it, where now you have junior-level developers, you have senior-level developers, and then that middle range of developers is a different landscape. What's your take on that? And, even if you go into the world of agents, I have so many questions piled up, do you see it where you can have agents, in essence, be junior developers, writing portions of code that then come back to a senior developer for review? So there's a lot in that range there. That's a good one.

Speaker 3:

I think we're going to struggle in the upcoming months, upcoming years, with junior developers, because we will have to forcefully limit ourselves, forcefully limit the speed that we're going at, to find work for juniors. Because things that I would normally pawn off to a junior, as a learning opportunity for them, and because I'm kind of bored of that work, can now be done in seconds if I shoot them off to Copilot. So I think this is going to be a big struggle: you have to understand why you're growing juniors. And I think more and more it's going to reach into the experienced or medium-level developers as well. In general, I think it's going to be the same way as it was for the past 10 years, maybe more, where everybody needs senior developers, and it's going to be really hard to get senior developers, because nobody's training juniors. It's a double-edged sword.

Speaker 2:

It is. Everyone wants experience. I think we said it on a previous podcast, if not, I've said it to people before: everybody wants someone with experience, but nobody wants to give anyone that experience. So how can they get that experience? It's a challenge. And there is a myth, I think. Listen, AI is a tool. I use it daily now, and even more so over the past couple of weeks with some of the newer models that have been released. I can't even keep up with the models that are coming out. But you still need to take it back to having the fundamentals, knowing what's going on, and not just assuming. I think it's putting a perception in a lot of people's minds that AI can just do it, it makes everything easier, and it's perfect, so therefore we don't need the individual to be able to review it. At this point, in 2025... where will it be when I'm retired? I don't know, but hopefully I won't care, and I'll be sitting underneath a tree somewhere, if they still exist.

Speaker 1:

So you think it's a double-edged sword, then? Because, considering that you get younglings, right, coming into the development world, and they're learning, or maybe starting to learn, how to code using Copilot, they're missing all the foundation, like what seasoned developers have gone through, where they had to build it from scratch. They understand the structure, they understand the concepts. But the newer generations coming in, are they not learning it that way? They're going right directly into using AI to build a foundation, and that knowledge could be lost in translation. I mean, Tine, you kind of started before the AI, and then you're having to also incorporate AI into your day-to-day. How is that? Could you do the same thing now, if you started now, versus when you first started developing?

Speaker 3:

Okay, so that's a two-part question. If I go for the first part: there's a trap with AI that I have caught myself in quite often as well, which is that AI is so good at generating answers that seem like the correct answer that we tend to believe it's the full truth, right? So whenever you're exploring a new topic with AI, you will think, whoa, this is crazy, I don't have to click 10 different links, because this is giving me a summary of everything. However, if you were to use AI to research a topic that you do know about, you would notice that there are hallucinations.

Speaker 3:

Hallucinations are at the core of AI. I don't think they're going away with the current architecture that we have. So AI always hallucinates, and if you trust it fully when you learn a new topic, a technical topic, you're going to start to learn things that don't really exist. And as a junior developer, you're even more eager to just, yeah, AI said that, I'm going to use that as the source of truth. And for me, I think it was with one of the side projects, actually. I kept trying to convince AI... well, not convince AI, but get AI to give me an answer that was in the documentation all along. And only after AI gave me a wrong answer for five different prompts, that's when I said, okay, I need to find a different source for the answer, and I found the documentation. It took much less time, actually, to go to the documentation. But because I didn't want to believe that AI was giving me the wrong answer, I stuck with it and believed that must be the truth.

Speaker 2:

Very good point. And people hallucinate too, in a sense, and it's something to remember. AI is a tool, and I thank you for bringing up that point. Because even if I talk with Chris and Chris doesn't know something, and he gives me information, I still have to have a sense of, do I want to believe this? Should I do research? I have to have a general understanding. And that's the fear that I have in some cases: everybody just believes that everything AI says is true, and we'll be stuck, because we'll have a lot of misinformation out there. Then everybody uses AI... not everybody, but a lot of people use AI to publish information. So now you have AI creating information that's not true. Then you have people learning from, or training AI on, that, and now it's very difficult.

Speaker 2:

It's a cycle. It's very difficult to determine what is true, in a sense, even from the development point of view of how to do something. Unfortunately, with development, there's usually more than one way to do something, and sometimes, depending on how you write it, it can cause problems, as you had mentioned, days down the road, or further in the future as well. So it's important.

Speaker 3:

So, just maybe to add to this cycle, right: AI hallucinates, then you train on hallucinations, and you generate more hallucinations. This is the main reason why I'm not yet sold on agents. Because when one AI model, one AI feature, hallucinates 20% of the time, I can control that. If that's picked up by another LLM that hallucinates, and another LLM that hallucinates, you go into a cycle where you don't want to be. And maybe for some cases it works. I think in development, agentic development is going to go further than where it is right now. But introducing agents in something like Business Central, I think, will need a slightly different approach than just letting them loose completely.

Speaker 2:

But let's go back to the agentic approach to anything. I understand the hallucinations, but if you have agents that are focused or trained on specific functions or specific tasks, that can use different models, would it reduce the hallucination? Because now, instead of saying, model X, give me this, you can say, okay, well, this is what I need, it's broken down into these pieces, let's go out and get an agent to do each one. Something similar to building a house. When you build a house, you'll use a carpenter, you'll use a plumber, and you'll use an electrician, and you'll have a general contractor that will manage them all. A general contractor may know enough to do everything, but sending it out to the specific agent may give you better results.

Speaker 3:

Yes, if LLMs weren't so sure that they need to be right all the time, right? When's the last time an LLM said to you, oh, I'm sorry, I don't know that, I have to go and ask a human, I don't know how to do that? The LLMs will go all the way. So, you know, break down a task, fine, okay. Maybe in the breakdown process you already hallucinate, you send the wrong task to the electrician, and then the electrician hallucinates again and installs plumbing instead of wires, right? And this is the part that I'm worried about: when you chain hallucinations, you can go sideways quite badly.
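The compounding effect described here can be sketched with simple probability: if each agent in a chain is correct only some fraction of the time, and errors are never caught, the reliability of the whole chain is the product of the per-step reliabilities. A minimal illustration: the 20% hallucination rate is the figure quoted in the conversation, while the three-step chain and the independence of errors are simplifying assumptions.

```python
# Sketch: how per-agent hallucination rates compound along an uncorrected chain.
# Assumes errors are independent and never caught downstream (a simplification).

def chain_reliability(per_step_success: float, steps: int) -> float:
    """Probability that every step in the chain is correct."""
    return per_step_success ** steps

# One agent that is right 80% of the time (20% hallucination rate):
single = chain_reliability(0.80, 1)   # 0.8

# Chain it through a planner, an "electrician", and a "plumber" agent:
chained = chain_reliability(0.80, 3)  # 0.8 * 0.8 * 0.8 = 0.512

print(f"single agent: {single:.1%}, three-agent chain: {chained:.1%}")
```

With only three links, a per-agent error rate of 20% already leaves the chain wrong almost half the time, which is why human review between steps matters.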

Speaker 2:

Understood, understood. That is an interesting point to bring up with the, I guess, stacked agents: that the plumber can also hallucinate.

Speaker 1:

Maybe what they think is the right way to do things may not always be the right way, and so they're just basing it off experience, or off what they've learned. But you may have another plumber that would do it better, or knows the right way based on what they learned. So it's kind of a slippery slope when you're including agents, because they're going to hallucinate. You have to build, like, a perimeter around it. How do you do that?

Speaker 3:

So actually, I have an example of an agent hallucinating in AL. Yesterday I put it up on Twitter or Bluesky, I don't know. I asked it, can you just go through this file and translate the Dutch comments into English? And it said, sure, no problem, I'll fix those linter errors for you. And it identified some linter errors and started fixing them as if they were in C#. So it didn't even understand the task that I was giving it, and then provided a wrongful solution to a task that I never asked to be completed.

Speaker 2:

So, with development, to go back to you talking about hallucinations: to what extent, within AL, based upon your experience with it thus far, do you think it offsets your development? What I mean by that is, you mentioned we had IntelliSense. IntelliSense helped you, and now with Copilot there are a lot of autocomplete-type situations where it will try to create, you know, if you're doing an action on a page, for example, it will try to put in the most common properties, or several properties, including the images, which, to be honest with you, I can't tell you if it's right with the name of the image even 50% of the time, but it tries to get there.

Speaker 2:

How much do you think it increases the efficiency of development, of creating code? And if you want to break it down to tables and pages, I can talk about some of the examples I have done, also including what it takes to go back and review what it has done. From your experience with what you've been doing, how has it increased your efficiency, and in what areas?

Speaker 3:

Time-wise, I think the industry average is around 30 percent, and I would say my experience is roughly around 30 percent of time saved. But there are a few things which, on top of time saved, it brings to my work. One is naming and brainstorming. That's something that I love to do with Copilot. Even when I'm reviewing someone else's code, I'm like, hmm, something feels off here. Hey, Copilot, do you think this feels off as well? What could be a better name here? And I do get back suggestions that I can then use in the code review. So, time saved. The other one is I get new ideas for, like, naming, for restructuring my own code, but also someone else's code. And then the third part is, it's a joy to develop for me. When I don't have to type the code out, I don't have to complete it with IntelliSense, I can just say, I know what I want the code to look like, you do it.

Speaker 2:

So I think this, I don't know, enthusiasm factor weighs a lot for me. My enthusiasm, after seeing some of the stuff that you've been doing, has increased, and also I jettison some of the, you and I talked about it too, some of the little things, which is good. What do you find that you use it the most for? Do you use Copilot Chat, and we'll get into Cursor in a moment, but do you use it to say, create procedures for me, create something that does this, within AL, from a Business Central point of view? Or do you use it more from the autocompletion point of view?

Speaker 3:

Autocompletion, probably, just because of how natural it is. I see text, I tab, I accept text. Even when I introduce Copilot to new developers, AL developers or any other language, autocompletion nobody struggles with. Everybody sees how natural it is, how cool it is, and just tab, tab, tab, you're done. To work with chat, to work with edits, takes more of a mental switch. You have to understand, okay, this is what I'm now going to write. What if I get Copilot to do that for me?

Speaker 3:

I use that primarily when I have predictable code that I want to write. So, for example, today I was reimplementing an OnValidate trigger on one field. The other one was more or less the same, so I just said, hey, Copilot, you do the second one, because you can now see exactly how I want to have it redone. Or, generate this, this, this, and this field for me. So whenever I have a clear view of what the next step is going to be, that's when I go for chat. But in terms of what I use more often, autocompletions are just every minute, not even every day or every hour.

Speaker 2:

Okay, that's good. Have you used it... do you know how it works under the hood? Because at one point I had created a table within Business Central, and again, with Copilot, it helped fill in a lot of the common properties, and I adjusted it. Then I needed to create a list page from that. I started to type the page, and Copilot, basically with the autocomplete, created most of the list page for me.

Speaker 2:

So, how does that work? And also, see, my mind's all over with this. Which model do you find yourself using for development now? Claude, Claude Sonnet, it was recently enabled within GitHub. I mean, these models come out so fast, and someone has to turn them on or enable them. But which model are you using? And, to go back to what I was talking about, how does it work, to be able to create something from what you have already created? But then also, how do you know which model you should use for a task, or is there just one model that's better for development in general?

Speaker 3:

Okay. So, starting with models: for AL, Claude Sonnet 3.5 was, up to two days ago, the best model to use, but now 3.7 is out, which is even better. So for AL, Claude Sonnet 3.7 all the way. Other languages, that's where we could have a discussion, if we want to.

Speaker 3:

I personally sometimes go for o1 when I say, hey, this is the code, I just have one bug, can you help me find the bug, review the bug? The reasoning models are good for that. But still, I would say my default is always Claude, and then, if I'm testing a new model out, or if I just want to see how another one works, I switch. But Claude is my main driver, more or less always.

Speaker 3:

But to the question of how Copilot works under the hood, it's interesting. Over the weekend, I actually spent quite some time, let's call it, opening the hood and seeing what is actually getting sent to the LLM. So there are two parts: one is autocompletes and one is edits. I'll stick to autocompletes first. So, you've created your table with all of the fields, and now you're creating your list page for that table. Copilot will build a generic request: you are a helper for a developer, you help do this, this, this.

Speaker 3:

So it will have a system prompt, but it will also, first of all, know what AL looks like from all of the data that it was trained on. But on top of that, it's using your open tabs in VS Code as context. So, because you have your table opened in one of the tabs, it will apply that table to the prompt and say, hey, this is what the user has opened, just for you to know what's happening. And from that, the autocomplete already knows: oh, okay, table fields, the name of the page seems to match, then I have an idea of what I would suggest. Right? It knows what the page should look like because it was trained on some AL data during the training of the model, and it knows specifically which fields you would like, because in the prompt there was the context of your own table.

Speaker 3:

One very important part here is that this works well when you don't have a ton of tabs opened. If you have 50 tabs open, Copilot will try to take something, but it won't really know what fits best. It cannot take everything, context is limited. So for best results, it's good to have only the relevant files opened; it takes up to four open tabs as context. And again, even with four open tabs, if I have a management codeunit which has 3,000 lines inside of it, Copilot is not going to take all of that. It will again try to take something out of that management codeunit, but not everything. So, to get the best experience when using AI for development, keep only the relevant files opened, and let's have small files. Let's try to keep files small, not only for developers, but now also for AI.
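The tab-based context selection described here can be sketched as pseudo-logic. This is an illustrative reconstruction, not Copilot's actual implementation: the function name, the data shapes, and the per-file character budget are assumptions for the sketch; only the "up to four open tabs" figure comes from the conversation.

```python
# Illustrative sketch of tab-based context selection for autocomplete.
# NOT Copilot's real code: structure, names, and budgets are assumptions,
# except the "up to four open tabs" figure quoted in the conversation.

from dataclasses import dataclass

@dataclass
class OpenTab:
    filename: str
    content: str

MAX_TABS = 4             # quoted limit on tabs taken as context
FILE_CHAR_BUDGET = 2000  # hypothetical per-file excerpt size

def build_autocomplete_context(open_tabs: list[OpenTab], active_file: str) -> str:
    """Assemble a context block from the first few open tabs."""
    parts = []
    for tab in open_tabs[:MAX_TABS]:
        if tab.filename == active_file:
            continue  # the active file is sent separately as the completion target
        excerpt = tab.content[:FILE_CHAR_BUDGET]  # big files get truncated
        parts.append(f"// File: {tab.filename}\n{excerpt}")
    return "\n\n".join(parts)

tabs = [
    OpenTab("CustomerExt.Table.al", "table 50100 CustomerExt { ... }"),
    OpenTab("CustomerExt.Page.al", "page 50100 CustomerExtList { }"),
]
context = build_autocomplete_context(tabs, active_file="CustomerExt.Page.al")
print(context)
```

The sketch makes the practical advice concrete: a fifth tab never reaches the prompt, and a 3,000-line codeunit only contributes whatever fits the excerpt budget, so small, relevant files give the model the best material to work with.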

Speaker 2:

That's good to know. That's interesting to see how it's using what you have open as more context on top of what it has been trained on. And it's a big time savings, as you could see there, when you do a list page, because there's usually a lot of typing in that. So to be able to start a page, give it a similar name, as you had mentioned, and for it to create most of that information where I just have to go through and edit it, is a great time saver. But I also want to keep going back to saying you do need to review it, because you just need to review it.

Speaker 1:

Is it aware of only up to four tabs because of a limitation of the tool itself, because it's too much data to take in? Or do you think there's going to be an increase down the road where it will be more aware of more tabs?

Speaker 3:

When we get to bigger contexts, we might get more open tabs added, but there's a big difference. With autocompletes, you need your completions to be ready fast, because if I type a sentence, I want to wait half a second and I want to see that gray string, right? What comes next?

Speaker 3:

When you are talking about edits, so let's call it Copilot Chat, but the one that actually edits your files, that one doesn't take your open files into the context. You drag the files into what they've called a working set. You have control over the context that gets sent as a prompt to an LLM, and that working set currently has a limitation of 10 files. So that's already a big increase.

Speaker 3:

And I would say, if you're creating new files, like create a list page out of a table, using Edits is much better than trying to autocomplete it, because then you can say, here's my table, and you can say, here's my page object, if you already have something in it, and you can say, add all of the fields from the table to the list, right? You have way more control over what hits the LLM compared to the autocomplete, where the model makes the decisions for you: what's in the context, what's the prompt that's sent to the model. So Edits are super powerful for new objects, especially if you already have a comparable object somewhere in your code base that you can drag and drop as context.
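The contrast with autocomplete is that the working set is user-driven and capped. A toy model of that idea, with an invented class name and the ten-file cap from the episode:

```python
# Sketch of the Edits "working set": the user explicitly adds files, the set
# is capped, and the final prompt contains exactly what the user chose.
# Class and method names are illustrative, not a real API.

class WorkingSet:
    MAX_FILES = 10  # cap mentioned in the episode

    def __init__(self):
        self.files = {}

    def add(self, name: str, content: str) -> bool:
        if len(self.files) >= self.MAX_FILES:
            return False  # full: the user must remove a file first
        self.files[name] = content
        return True

    def to_prompt(self, instruction: str) -> str:
        # Unlike autocomplete, nothing reaches the LLM that wasn't chosen here.
        body = "\n\n".join(f"// {name}\n{content}" for name, content in self.files.items())
        return f"{instruction}\n\n{body}"
```

The point of the sketch: "add all of the fields from the table to the list" plus a dragged-in table file gives the model a fully user-controlled context, rather than whatever tabs happen to be open.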

Speaker 2:

You had mentioned you looked under the hood to see what it's using. Can you use local? That's the other thing a lot of individuals are doing now. Can you use local large language models with development so that you could see this? Or how did you see that information?

Speaker 1:

There's two questions there.

Speaker 2:

Yeah, can you change it to use a different model, number one? Two, how did you actually get to be able to look under the hood?

Speaker 3:

Can you change it to a local model? You cannot. I very much hope to see that one day. There's a window where you can add some feedback to Microsoft, and I've probably sent seven feedbacks already: I want to have a local option as well, because when I'm on a plane I don't have Copilot, or sometimes Copilot is slow, and this year I do want to have a couple of conference sessions where I'm showcasing some of the examples with Copilot. It's a risk if Copilot is going to say, yeah, I'm having a slow day, let's wait five minutes before this is completed, when usually it takes three seconds. So can you use a local model? No. How did I get to the information? There's a tool called Fiddler, which...

Speaker 3:

...is used for that. Yeah, it's been around for, I don't know, a lot of years. He's trying to be nice, Chris.

Speaker 2:

I remember Fiddler from when I was younger than you. So you know Fiddler.

Speaker 3:

Fiddler basically allows you to inspect the network traffic that's going from your machine outside. So with Fiddler you basically place yourself as a man in the middle between VS Code and the endpoints of Copilot. And I was just typing prompts in VS Code, and in Fiddler I was watching what's hitting it, so what's happening there. And I could see the system prompts, I could see which files are being used as context. I really have to give credit to Vjeko for this. He is the one that shared this with me at Directions first. He's the one that said, hey, Edits are really cool for AL, it generates files that compile, if you use it with Claude you have to go use it now. So that's where I first got, let's say, the initial push: okay, there's more than autocompletes. And he's the one that said, I've used Fiddler to see what's happening.

Speaker 2:

So it's like, oh, I didn't even think about that. No, it's a great idea, and you're able to see what information is sent. So it was sending your information, your table. So if we had an open tab for a table, does that get sent up to the AI model?

Speaker 3:

Yeah, as well. So part of the prompt is going to be the generic, you're a developer helper, you do this and that. Part of the prompt is going to say, this is what the user has opened. And part of the prompt is going to say, these are the lines they are currently working on. So a few lines above what you're doing, a few lines below what you're doing, and it will know, okay, so we're suggesting autocompletions for this specific part, for this line. And there's another feature called temporal context, which is going to say, these are the changes that the user has just made. So they removed a line here and added a line here. So it doesn't only give the static files to the LLM, it also tells the LLM, this is where the user is positioned right now and this is what they have recently done. So it has more and more ideas: okay, this is what I think the user will try to do next. This is especially powerful for the next edit suggestions, the part where it doesn't only try to autocomplete one line for you, but also looks at where Copilot thinks you're going to go next.
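The two extra signals Tine mentions, the cursor's surroundings and the recent changes, can be modeled very simply. This is a hedged sketch of the idea only; the real temporal-context format Copilot sends is not public, and both function names here are invented:

```python
# Sketch of the two positional signals described: a window of lines around
# the cursor, and a crude "what just changed" diff between two file versions.

def cursor_window(lines, cursor_line, radius=3):
    """A few lines above and below the cursor, as sent alongside the file."""
    lo = max(0, cursor_line - radius)
    hi = min(len(lines), cursor_line + radius + 1)
    return lines[lo:hi]

def recent_changes(old_lines, new_lines):
    """Naive line-set diff standing in for 'removed a line here, added a line here'."""
    return {
        "removed": [l for l in old_lines if l not in new_lines],
        "added": [l for l in new_lines if l not in old_lines],
    }
```

Together these tell the model not just what the file contains, but where the developer is and what they just did, which is what makes "I think you'll want to jump to line 17 next" possible.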

Speaker 3:

Up to last month, you were only able to get a completion if you were standing at the end of an existing code line or if you were standing in a new line. But with the next edit suggestions, it can suggest edits in the existing code as well. So if I'm standing in line five, it might suggest something in line five, but then it's going to say, hey, I think you're going to want to jump to line 17 now, and I can just press Tab, and boom, my cursor is on line 17. And it says, I think you're going to want to rename this variable too. So I just press Tab and it renames that variable on line 17. So the next edit suggestions are like autocomplete, but on another level.

Speaker 2:

Wow. I go back to the days of just having to type all this stuff out. So GitHub Copilot has a number of features. It has the chat that we've been talking about, the autocomplete, the next edit. It can create documentation. Have you used it to create any documentation for AL or other languages?

Speaker 3:

One of the rules that we have on the codebase I'm currently working on is that all of the procedures have to be internal. If they're public, they need the XML documentation. So I've been using Copilot a lot for that. I just say, hey, I need the documentation here. Is it perfect? Absolutely not, but it gives me a good start. And especially with developers whose native language is not English, it brings them up two levels in the way they phrase the documentation sentences, in the way they phrase what a certain parameter is supposed to do. It raises all of the flags for double negatives, you know, things that to you would sound weird, but when English is not your native language, you just think it sounds good when you translate it to Slovenian.
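The team rule here, every non-internal procedure must carry XML documentation, is the kind of thing you can also enforce mechanically. A rough sketch of such a check, using a regex rather than a real AL parser (the function name and the whole approach are illustrative, not an actual tool from the episode):

```python
import re

# Toy checker for the rule described: public (non-internal) AL procedures
# must be preceded by an XML documentation comment (///).
# A regex scan is a crude stand-in for real AL parsing.

def undocumented_public_procedures(al_source: str) -> list:
    lines = al_source.splitlines()
    missing = []
    for i, line in enumerate(lines):
        # re.match anchors at line start, so "internal procedure ..." won't match
        m = re.match(r"\s*procedure\s+(\w+)", line)
        if m:
            prev = lines[i - 1].strip() if i > 0 else ""
            if not prev.startswith("///"):
                missing.append(m.group(1))
    return missing
```

Run in CI, a check like this would flag exactly the procedures where a Copilot-drafted doc comment is still needed.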

Speaker 2:

So I don't see what your problem is. So that's the documentation use. And then the other one, that I haven't tried for AL; I did try it over the weekend for another language, but it can create tests for code. Have you done any experimentation with creating tests for code? Because I am a big fan of automated testing in development. The whole page scripting testing is a whole other topic that I'm on now too. But from the development point of view, have you used it to create tests for AL?

Speaker 3:

I did. So for testing, I would say it's a two-part story. First, the tests that I tried to generate were successful, exactly the tests that I wanted. I didn't fix anything except the object ID, because nobody gets object IDs right, not even Copilot. But there's a reason why the tests were so good.

Speaker 3:

I had to add a table, I had to add a codeunit, I had to add a page. But the functionality was very similar to a table, a codeunit and a page I already had, and for those three objects I already had tests. So when I created the new table, page and codeunit, I pulled that in as context. I said, hey, for the categories we also have tests, and I pulled in the file, this is what tests look like, and I said, create a new test for brands, which was the functionality I was adding. And because it knew exactly what tests should look like, it knew how to create those new tests.

Speaker 3:

So when you are trying to generate tests for something that you've already tested in a similar manner, it is going to do a good job. If we expect Copilot to write a bunch of unit or integration or end-to-end tests when we give it a management codeunit, that's not really going to work. But again, Vjeko had a session on how you can get new tests going with Copilot, not only the tests that you've already created. The answer is interfaces. He's been talking about this topic for a number of years now.

Speaker 3:

That you should use interfaces for unit testing, that you should use interfaces to isolate different parts from one another. And when your code is modular, when your code has interfaces, it doesn't look that far from the code you would write in C#, and with C# the models are really good because it's a mature language. So if your code is solid, if your code has interfaces, if it's modular, you can simply prompt Copilot and say, I am now testing this interface, this function, and it will create stubs, spies, all of those test doubles. In the session he generated, I don't know, 35 tests in 15 minutes maybe. So it's doable, but not if our expectation is that we can throw in our legacy code from 20 years ago and say, now I need tests.

Speaker 2:

Okay, I like that. The whole interface approach, I think that's underutilized, and I think it's because that's a whole other topic that you and I talk about as well. So that's interesting, that it can help stub out some of the tests so that you can test the code, which is extremely beneficial.
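The interface-based testing idea translates directly into any language with interfaces. Here is a minimal Python sketch of the pattern (names and the price example are invented; in AL this would be an `interface` object and a test codeunit, and the stub and spy are exactly the test doubles the episode says Copilot can generate for modular code):

```python
from typing import Protocol

# Code under test depends on an interface, not on a concrete implementation,
# so a test can substitute a canned "stub" that also records calls ("spy").

class PriceProvider(Protocol):
    def unit_price(self, item_no: str) -> float: ...

def invoice_total(lines, prices: PriceProvider) -> float:
    """lines is a list of (item_no, quantity) pairs."""
    return sum(prices.unit_price(item_no) * qty for item_no, qty in lines)

class StubPrices:
    """Test double: returns fixed prices instead of hitting a database."""
    def __init__(self, table):
        self.table = table
        self.calls = []  # spy: record which items were looked up

    def unit_price(self, item_no: str) -> float:
        self.calls.append(item_no)
        return self.table[item_no]
```

Because `invoice_total` only sees the interface, a generated test never needs a real database, which is what makes "35 tests in 15 minutes" plausible for well-isolated code.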

Speaker 3:

Now, just to jump a little bit: you use Cursor for development. Now I use all three, and by all three I mean VS Code, VS Code Insiders, which is getting all the nice stuff, and Cursor. I've started using Cursor as well. Yes, what is Cursor? Cursor is a fork made from VS Code. So imagine, at some point the team said, hey, VS Code, we like that, but we would like to create our own VS Code which is going to focus on being AI-first. So this is an IDE, a development environment, which has AI at the core. So that's what Cursor is trying to be: the AI-first IDE.

Speaker 2:

So, with it being a fork of VS Code, you have all of the functionality of VS Code. What I mean by that is the standard functionality, and you can use the extensions from the marketplace that you can install. So you can use the AL extension; you can use other extensions with it as well. Now, with it being AI-first, what's the difference between using Cursor versus using GitHub Copilot or GitHub Copilot Chat within VS Code?

Speaker 3:

So I was surprised by how well the switch to Cursor went. I knew going into it that this is a fork from VS Code, so everything should work. But it's crazy how well everything works. You open it up, all of your extensions are already there. You open a project, it can even open the same open tabs that you had in VS Code. So transitioning to Cursor is a non-issue. But to your second question, what does it mean that it's more AI-centric? A lot of the features that we now have in GitHub Copilot, actually, I'm not going to say they started in Cursor, but they were in Cursor before they reached GitHub Copilot. Things like the next edit suggestions, which I talked about earlier.

Speaker 3:

So, let me jump to line number 17 to rename a variable: that was in Cursor for quite some time before it reached the GitHub Copilot extension. But on top of that, for me what's really cool, for example, when I work with Python, you can get some results directly in the terminal. I run a Python script and it says error in the terminal. When I hover with my mouse over that error, there's already a button saying, would you just like to transfer this to the AI pane, to the AI window? You click that button and AI takes over, because that's more or less what I would do anyway. I would say, AI, you try to fix the error first, and if you can't, I'll see what I can do. So that was one of the really cool additions.

Speaker 3:

Claude 3.7 came one day earlier to Cursor, which was part of the reason why I bought a Pro license for Cursor two days ago, because I really wanted to see if it's better for AL or not. Agents were there too, but the gap is not as big. One of the quality-of-life features that I've noticed is you can now have Cursor ignore files where you specify what kind of files I never want to send to an LLM. It's a security issue. I don't want to send my secrets, my keys, to an LLM, ever. It doesn't matter if they promise me 17 times that they won't store my data. And that's something Cursor now has, and GitHub Copilot is not there yet. So they're always thinking, with this AI-first approach, how can we make the whole experience better.
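The ignore-file idea boils down to a glob filter that runs before anything is added to the model's context. A minimal sketch, assuming plain `fnmatch`-style patterns (the real feature's pattern syntax may differ, and the pattern list here is invented):

```python
import fnmatch

# Sketch of an "ignore for LLM" filter: files matching these patterns are
# stripped from any context before a prompt is built. Patterns are examples.

IGNORE_PATTERNS = ["*.env", "secrets/*", "*.pfx"]

def is_blocked(path: str) -> bool:
    return any(fnmatch.fnmatch(path, pattern) for pattern in IGNORE_PATTERNS)

def filter_context(paths):
    """Keep only files that are safe to send to an LLM."""
    return [p for p in paths if not is_blocked(p)]
```

The point is that the guarantee is client-side: a blocked file never leaves the machine, regardless of what the provider promises about data retention.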

Speaker 2:

So why do you use all three?

Speaker 3:

Enthusiasm. I'm comparing what works best for me right now. I'm not driving it as, like, the tool for everyone to use. I'm standing behind GitHub Copilot, primarily because I got our organization to buy licenses for all of the experienced developers. It would be very hard for me to make the case, nah, now let's buy Cursor for some of them. So I endorse GitHub Copilot, but I also love what Cursor is doing. And Insiders is just to see what's coming in future releases.

Speaker 2:

Oh no, I understand the Insiders. I was just mentioning it because, if they're in essence the same, with some more oomph behind Cursor, it seems like, with some of the functionality, why would someone use VS Code instead of Cursor, outside of any fees? I mean, anytime there's a fee you always have to take that into consideration too.

Speaker 3:

I don't think there's a strong argument to go for either. VS Code, you're used to it. It's not like Cursor is that much different; it's more or less the same, but you do still have to make a switch. For me, it was a big turn-off that there's another theme, and it took me two days: oh, I can switch the theme, right, it's the same as in VS Code. But that was already something small and I was annoyed by it. And there was another thing that I liked with Cursor. Ah, yeah, yeah.

Speaker 3:

So even though it's the same set of features, they are implemented differently. The prompts that Cursor sends are slightly different. To give you an example, I gave Cursor the task to translate Dutch comments into English, and it started fixing linter errors. At first I thought, this is weird. I tried it five times. Five times, instead of translating, it went to fix linter errors. I tried the same prompt in VS Code, and it translated the comments. So the same feature behaves differently, even if it's using the same model, because the prompts are different and the way it's implemented is different as well. Also, in terms of performance, Cursor Composer, which is the equivalent of GitHub Copilot Edits, for me seems to work better on large files. It knows that it has to jump to a certain section and make the edits there, while GitHub Copilot will try to go through the whole file line by line and then fix the middle part where my changes actually need to be applied. So the same feature, but it works differently.

Speaker 2:

Okay, I have not tried Cursor yet. I'm an old guy, so I stick with VS Code, with what I know, because I've been doing great with GitHub Copilot and GitHub Copilot Chat, and then also using some tools outside as well. But I think I'll have to give it a shot. My enthusiasm, I'm driving it from your enthusiasm. I appreciate it.

Speaker 2:

I want to take a step back for a moment. You had mentioned something earlier and I want your opinion on this, because I've talked about this with several individuals. You had mentioned that you like to do the back-end development; you don't need to do the front-end development, and then you can use AI to help create the front end. In essence, at this point, do you ever see a point where ERP software will be faceless and AI can generate the front end for users based upon what they need? This is where I'm going with this: the ERP would have the back-end data, the back-end business logic, but the front-end interface would be generated automatically by AI based upon the context of the user using or doing a function. Do you think that's feasible, possible, plausible, likely?

Speaker 3:

Technology-wise, I think we're going to get there. There was that whole Project Sophia, I'm not sure if you saw that, that was a year, two years ago, where it generates the UI based on what you want. Would the same apply to ERPs? I doubt it. My background is in accounting. Accountants don't like changing UIs all the time. Right, when I'm used to how the UI works for me to enter ledgers, that's how I want my UI. Don't change it. Even the changes from BC25 to 26, accountants are going to be annoyed by that.

Speaker 2:

So the idea that AI is going to regenerate the UI every time I walk in, or that it's going to be different between two users, I don't think that's going to fly. But it doesn't have to be different for you, because if you're an accountant and you have a certain view, you could have a consistent view every time you use it, but it'd be tailored to your specific function. And then, even if you had to switch to use something else, it could still generate the same UI.

Speaker 2:

In essence, it's just something that I'm thinking about: how much of this, where the back end can have all the business logic, you can have all the actions, you can have all of the functions, the data store. Don't even get me going on where I think data will be; data will not be so structured, in my opinion, in the future. We're already getting there, I think. But if you have that framework, you could have it built on your own, because even in Business Central right now you have personalizations. The ability to make all these personalizations by role or by user in essence gives you the ability to change the interface for a specific group or a specific person.

Speaker 1:

But would it matter, though? I think it's going to shrink the need for UI. If a lot of things are being automated, there's going to be less interaction with the application. So I think it's going to be narrowed to the specific functionality where you do require human interaction. I think that's going to be the more focused area, but I'm curious what you think.

Speaker 3:

An interesting observation. So I'm not so sure about the idea of a fully customizable UI. I do think, especially in work apps, we would like things to stay the way they are. Once something works for me, I want it to stay that way, because I know that's what I need to do to finish my work. But something that I have seen is that AI technology is going to drive UI optimizations, and not because AI likes users, but more from the agent perspective. So I have a really cool demo that I hope I can show you guys at some point, where you ask an agent: I would like to go to Business Central, create an invoice for this customer and enter this item. And what does it do? It spins up a new browser. It goes to Business Central, it logs in with the credentials that I gave it, and then it comes into the role center. It's going to look at what all the available actions on the role center are. Ah, Invoice, Create Invoice, that's the one I want.

Speaker 3:

So then it clicks the same UI that I, as a user, would. Once it's on the invoice card, it again scans all of the available UI options. Which fields should I populate for a customer? Ooh, this one says Customer Name. Okay, let me put that in. And it goes on to items and quantities and so on. The agent can use all of the same functionality that I as a user have available to me. I don't have to develop anything specifically in Business Central, so no AL code is needed for this additional use case to be covered. However, if my field is hidden somewhere deep down, the agent won't find it. If my action is somewhere very deep down, the agent won't find it. So where I do think we're going to go is the direction of simplified UI, because now we have an additional reason why the UI should be simple, why important fields should be where they should be, and not just added to General because the user will figure it out and personalize them themselves.
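The scan-and-match step the agent performs can be caricatured in a few lines. A real agent would use an LLM to match intent to UI captions; simple keyword overlap stands in for that here, and every name below is invented for the sketch:

```python
# Toy model of the agent behavior described: look at the visible captions on
# a page and pick the one that best matches the user's intent. Hidden elements
# are simply not in the list, which is why buried fields are invisible to it.

def pick_element(intent: str, visible_captions):
    intent_words = set(intent.lower().split())
    best, best_score = None, 0
    for caption in visible_captions:
        score = len(intent_words & set(caption.lower().split()))
        if score > best_score:
            best, best_score = caption, score
    return best  # None when nothing visible matches the intent

role_center_actions = ["Create Invoice", "Post Payment", "Customer Ledger Entries"]
```

Note the failure mode this makes obvious: an action the agent cannot see scores zero, no matter how relevant it is, which is the argument for simpler, flatter UI.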

Speaker 2:

I like that, because it does take it to the point where the UI is generated based upon the actions that you have. If you want to show the demo, you can share your screen now. I'll look for it. If you're not ready, we'll do it another time.

Speaker 1:

But you have the ability to share your screen.

Speaker 2:

If you're comfortable doing it, if the information is something that you don't mind sharing to be recorded. If not, I respect that, and you know I'll call you after this and we'll set up a Teams call, because I want to see it.

Speaker 1:

Yeah, so you're saying the UI is going to be irrelevant in terms of designing it, because it's going to be simplified, as you had mentioned. So I don't know if the effort is going to be there.

Speaker 2:

It's not irrelevant. It's more relevant to keep it simple and not have it so complex.

Speaker 1:

Oh yeah, sorry. Yeah, I think my explanation is slightly different.

Speaker 3:

Like, it's irrelevant in terms of putting a lot of effort into it. More like, hey, we have the brains that can figure out what needs to be done. We'll end up with more simple pages being built specifically for agents. But at that point we're also going to realize: oh, but that's also good for users, right? If I have a page that shows the information that I need to access every day, I don't have to go through the list and the card to figure out a certain number. So I think the agent transformation is going to be really beneficial for users that don't like AI as well.

Speaker 2:

I think that's going to be the next big push to this agentic world.

Speaker 1:

When you are driving your Tesla, right, you just say, warm up my seat. I don't need to fiddle around or try to navigate that with buttons.

Speaker 2:

I've never told my car to warm up my seat.

Speaker 1:

Oh, you should, you should try it, Chris. I tell mine to cool my seat. You could say the same thing. Oh yeah, where you're at? Yes. Chris, I was joking. That's right, I forgot: cool my seat, not heat my seat.

Speaker 2:

It is the simplification of the UI, in essence. This is where I see it going. Tine, I like your explanation of it, where you have a core, and that's kind of a better way to explain what I was trying to visualize: where you have the core actions already defined, and now the UI can be generated for someone for their specific needs, without the need to personalize it. Even now, with some of the features that you can do with the promoted actions and the field classification, you know, the show more, show less, the importance and a few other factors, I think it will change.

Speaker 3:

I think you actually bring up a good point, right. The most traditional machine learning models were used to find, I don't know, what does the user most commonly purchase. Right, but in the Business Central sense, what does the user most commonly click? And you could have suggested customizations, which would essentially bring us to the point you were saying. So can I have a tailored UI? Just not necessarily from the ground up, but suggest to someone: hey, you've been clicking this action every day for the past two weeks, what if you move it somewhere else? That, I think, could also be a cool direction.
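That click-frequency idea is a few lines of counting. A hedged sketch, where the function name, the threshold, and the log format are all invented for illustration:

```python
from collections import Counter

# Sketch of the suggested-customization idea: if a user keeps reaching for an
# action that isn't promoted, suggest moving it somewhere more prominent.

def suggest_promotions(click_log, already_promoted, min_clicks=10):
    """click_log is a list of action captions, one entry per click."""
    counts = Counter(click_log)
    return [
        action
        for action, n in counts.most_common()
        if n >= min_clicks and action not in already_promoted
    ]

two_weeks_of_clicks = ["Create Invoice"] * 14 + ["Post"] * 3 + ["Navigate"] * 12
```

Unlike generating a UI from scratch, this only nudges an existing, stable layout, which fits the "accountants don't like changing UIs" constraint from earlier in the conversation.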

Speaker 2:

It will be a step to be there. So how do you feel about a demo?

Speaker 3:

Yes or no? I would love to, but I have no clue where the project is.

Speaker 2:

I will definitely show you that um once I found, once I find the project, okay, let me know and I'm interested in seeing it, even just knowing it's there or you can show it to me, but then also you should blog about it and show it.

Speaker 3:

I think that would be beneficial. And, as I say to many individuals, I actually should write a post about it, because you can probably tell that I'm quite enthusiastic about this whole AI technology, but when it comes to agents, I'm skeptical. When I first tried to run that demo to see how AI recognizes the Business Central UI, I found it very cool. It demos very well, but what's the use case? For me, when it comes to Business Central and agents, I'm always coming back to this: we don't want mistakes in an ERP, and language models are kind of built on the fact that they will hallucinate at some point. It's a fact, they will hallucinate. So I have a hard time understanding right now what agents will look like, not only what agents could look like, because, like I said, I see all sorts of very cool-looking demos. But with those demos, I'm not going to convince somebody who needs to have their costs posted to the correct account without a mistake. There's a risk.

Speaker 2:

The simplistic version is in the demo. I'm not saying I'm a proponent or an opponent of any of this, but we also have to remember: how many times do humans make mistakes?

Speaker 2:

I heard that. I'm not saying, I'm just saying: we have to have a sense of reality. Chris, go back to the Tesla. I will tell you, I have the Tesla drive me everywhere, and the Tesla will see things before I do and react before I could. So I'm not saying you have to trust everything every time. It's a matter of, to use the word that we use over here, checks and balances. Everyone's saying that AI is not perfect. Why is everyone concerned with it if we just have the assumption of saying it's okay that AI is not perfect? Because, on the flip side, neither are humans. I can tell you a number of times, Chris and Tine, depending upon your role: oh, I posted this to the wrong GL account, what do I do?

Speaker 1:

Yeah, I think understanding the risk, as you said. You have to be able to understand that risk, and the goal is to minimize that risk.

Speaker 2:

Correct, yes. So are you more risk-averse by having AI that might be finely tuned to a specific task? And AI may not be applicable for all tasks in this agentification world. Or agentic? Agentification? Which word should I use?

Speaker 2:

Agentification. So that's what I mean. This is a philosophical discussion, in a sense, but I'm of the mindset, and I say it over and over again: use the right tool for the job. I understand the risk of using a human for certain things, and I understand the risk of using a computer for certain things. We use calculators. Nobody wanted to use a calculator before; they thought that everybody would have to do the math themselves and then check the calculator against it.

Speaker 3:

And now nobody knows math, but we all use calculators. What if those calculators were wrong 20 percent of the time, and you don't know which 20 percent of the time? You know, how would we still use calculators like that? Because humans can't calculate correctly 100 percent of the time either.

Speaker 2:

But that's kind of the point that I was trying to make. Math is math. I don't know what any complex calculator is, but a scientific calculator or a regular calculator is going to solve the problem for you, because that's a finite task, in my opinion: you need to do an operation on these numbers. You can use these agents for finite tasks. So it's a matter of using the right agent for the right task, and up until the point where those agents can do more complex tasks, you can rely on a human. But a human at that point may also be wrong a percentage of the time, because they're tired. Look at what happens to humans, why they make mistakes: they're distracted, they're tired, they don't want to be there. You know, there's a lot of external factors that cause them to hallucinate. And then, how do we catch it?

Speaker 3:

That's a very good point, because I've heard this comparison a lot of times: humans also hallucinate, right? I don't buy it as much, but nevertheless it brings up a point. Okay, so if a human can make an error in our system, then an agent can make that same error. What if we put a process in place so that the agent cannot go and purchase a thousand bicycles from a vendor? But at the same time, if a human was able to do that, well, what if we prevent it for the humans as well? So that's another aspect. So if agents are going to fix the UI for us, I think agents are also going to highlight all of these process issues that we already had.

Speaker 2:

We just somehow never stumbled upon them. Well, that's where some of the workflows and the approvals come in, in a business process. So I'm not saying I'm changing the thought process; I'm just trying to look at it from a different perspective. Because everyone says, oh, I can make an LLM say five plus five is nine, it's so dumb. Well, it's not supposed to be doing that anyway, right? So it's almost like we're trying to find fault with it. But if you understand the limitation, you know where to apply it.

Speaker 1:

Yeah, I think that's my point. As long as you understand that there is going to be a risk, and the goal is always to minimize risk. If there's a human aspect, even if you put parameters around it, humans tend to want to figure out loopholes; they want to be curious, and they may eventually get around them. But if you have an agent that is specific to a task and always does the same task, it minimizes that risk of someone getting around that workflow. So it's just understanding what this tool can do for you and what it can't. But there's always going to be a risk to consider, or you should always consider that there's a risk. How much of that risk you want to take on is the question.

Speaker 3:

As a business, one thing that I struggle with is this: if a task is so straightforward that we can confidently say an agent is not going to hallucinate on it, there's probably also a better way to solve that problem. There's Power Automate, which is going to execute it 100% of the time in that specific order, right? Or I can write AL code. I'm not pushing too hard against agents. I know they're coming, I know they're going to be awesome. I've seen demos; I love them. But I just need to find what a good use case for agents is, because if it's straightforward, there are other tools that are 100% reliable, and if it's not as certain, then it's a good use case for agents. But how do you deal with that uncertainty? And I agree with both of you.

Speaker 3:

Probably the answer is the balance of human in the loop. We don't want to necessarily go and revisit each step the agent took, but we also don't want it to execute 15 steps before we jump in. So what's the right balance? Because in Cursor and GitHub Copilot, that's exactly what's happening. Agents are going on, and at some point I say stop: I know the best way forward because you seem to be struggling. And I think the reason agents are going to work so well in development is that there's constantly a human in the loop. We see everything that's happening, and that's why, even though we know LLMs hallucinate, we also find a ton of value in them.

Speaker 2:

Yes, that was a great point, your last point about the human interaction. We know it hallucinates, but we accept the hallucination because we know we're reviewing and correcting as we go. Yeah.

Speaker 2:

So it's part of the process. As I mentioned earlier, I had it create field pages or even actions, and some of the properties were wrong. I just go in and fix the Image name, for example, which it seems to mostly get incorrect; it tries to match it to the caption. But it does a good job otherwise, and I still just accept it, because I didn't have to type the other ten properties that were in there, including the brackets and everything else. It's wonderful. Well, sir, thank you again for a wonderful conversation.
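For readers following along: the kind of AI-scaffolded AL action being described might look something like the sketch below. This is a hypothetical example (the extension, action, and object names are illustrative, not from the episode); the `Image` property is the one the speaker notes Copilot tends to guess wrong by matching it to the caption.

```al
// Hypothetical AL sketch of an AI-generated page action in Business Central.
// All names here are illustrative assumptions, not code from the episode.
pageextension 50100 "Customer List Ext." extends "Customer List"
{
    actions
    {
        addlast(Processing)
        {
            action(RecalculateBalances)
            {
                ApplicationArea = All;
                Caption = 'Recalculate Balances';
                // AI assistants often invent an Image value to match the
                // caption; verify it against the actual Image option list.
                Image = Calculate;
                ToolTip = 'Recalculates the balances for the selected customers.';

                trigger OnAction()
                begin
                    Message('Recalculation started.');
                end;
            }
        }
    }
}
```

Even when one property needs a manual fix, the boilerplate (brackets, `ApplicationArea`, `ToolTip`, the trigger skeleton) is generated for free, which is the trade-off being described.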

Speaker 1:

I could talk with you for days about this, and you know that I love the conversation, because I think we all have different takes on this agentic world and where it's going. I think it's coming.

Speaker 2:

But I'm with Tine on that point, and with you in a sense, Chris: it's just knowing the application and the right use case for it. I think the misconception a lot of people have with AI at this point is that AI is one thing, instead of AI being all the different variations that we have, even across the different models, and that an agent can't do everything when you don't even have a person who can do everything. So the point is that agents will replace specific functions or tasks to make them easier. I'm not saying agents replace people.

Speaker 2:

To Tine's point, there's a balance of where you have a human. If you look at everything we've been doing through the evolution of time, even going back through the Industrial Revolution, everybody has created tools to do a task. No tool has been able to do everything. So we created hammers, we created power tools, we created the horse and buggy, we created the automobile. At every step forward there's been resistance to a lot of those tools, people saying, oh, a hand can do it so much better. But once you have a tool that does a specific task, it can complete that function, and you put it all together and you have a tool belt. That's where I stand with this, but I'm just a dinosaur.

Speaker 3:

No, I completely agree with that. I always say that this is not here to replace people. It's just making people more efficient, or at least more enthusiastic about what they do.

Speaker 2:

And with that, I heard a quote the other day on another podcast. I forget who said it, so I can't quote it exactly or give proper credit, but what they said is: AI is not going to replace you; someone using AI is. And that sat with me. I just let it resonate.

Speaker 2:

AI, to Tine's point, is not going to replace people, but somebody who can become more efficient by utilizing AI in their duties will.

Speaker 1:

Exactly.

Speaker 2:

I agree.

Speaker 2:

Mr. Staric, thank you again for taking the time to speak with us. It's always a pleasure. I look forward to having you back on in a few months to see where your progress goes with this, because I know technology is moving quickly, and with your enthusiasm, and since I follow everything that you do, I know that you'll be doing some great things. So we'll get something on the calendar just to lock it in. But in the meantime, how can someone get in contact with you to learn a little bit more about the things you've been doing with AI, to see some of the information you've been sharing, and all of the other great things that you've been doing?

Speaker 3:

I think LinkedIn is the best place to go, and for existing posts, the blog. Because just today, when I was getting into the headspace of what we were going to talk about on the podcast, I realized I wrote a blog post on GitHub Copilot a month ago and I already have so many new things that I need to patch into it. And it's been only, what, 30 days? So there's more coming.

Speaker 2:

Excellent, excellent. Thank you, and that's why I want to lock you in for, you know, maybe not 30 days from now, but at some point in the future. But again, thank you for your time; we really appreciate it. Thank you for all that you share and do within the community. I know I've personally learned a lot from it as well, so I appreciate it.

Speaker 3:

Thank you for having me again.

Speaker 2:

We'll talk with you soon. Ciao, ciao, take care. Thank you, Chris, for your time, for another episode in the Dynamics Corner chair, and thank you to our guest for participating.

Speaker 1:

Thank you, Brad, for your time. It was a wonderful episode in the Dynamics Corner chair. I would also like to thank our guest for joining us, and thank you to all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter at D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is Matalino16. You can see those links down below in the show notes. Again, thank you everyone, and take care.

People on this episode