Dynamics Corner

Episode 338: In the Dynamics Corner Chair: Efficient Development with AL-Go for GitHub

Freddy Kristiansen Season 3 Episode 338

Listen as Freddy Kristiansen discusses the BCContainerHelper and AL-Go for GitHub as tools to help partners develop efficiently for Microsoft Dynamics 365 Business Central. These tools aim to provide managed solutions, allowing partners to focus on customer value instead of building their workflows and pipelines. Freddy discusses the importance of automated testing, continuous integration/continuous deployment (CI/CD) workflows in app development, and the ability to deploy apps to different environments, such as Sandbox and Production.

 

Connect with Freddy on LinkedIn (https://www.linkedin.com/in/freddykristiansen/)

Learn more about AL-Go on GitHub (https://github.com/microsoft/AL-Go)


#MSDyn365BC #BusinessCentral #BC #DynamicsCorner

Follow Kris and Brad for more content:
https://matalino.io/bio
https://bprendergast.bio.link/

Speaker 1:

Welcome everyone to another episode of Dynamics Corner, the podcast where we dive deep into all things Microsoft Dynamics. Whether you're a seasoned expert or just starting your journey into the world of Dynamics 365, this is your place to get insights, learn new tricks and understand what AL-Go is. I'm your co-host, Chris.

Speaker 2:

And this is Brad. This episode was recorded on August 28th, 2024. Chris, Chris, Chris. AL-Go. Who's Al? I was going to say, do you know how to spell AL-Go? Who's Al, and where did he go? Yeah, where did Al go? With us today, we had the opportunity to speak with someone who is all about AL-Go, to hear some valuable insights on how AL-Go can be used to make your life a little more efficient: Freddy Kristiansen.

Speaker 1:

I'm just getting the chair. Perfect.

Speaker 3:

Oh, my bird is flying away. Where's it going? So this one is measuring the air quality. Oh wow. When it sits upright on top, the air quality is fine in here. If it's hanging down and dying, it means the air is bad. When it sits like that, it needs power. Oh, okay.

Speaker 2:

What's good to know is, if we're talking with you and the bird starts to go down, I'll start to get afraid.

Speaker 3:

So it's like the thing from back in the days where they actually had canaries in the mines, yes, yes, and when they passed out, they knew it was about time to get out.

Speaker 2:

That's exactly what I was thinking about, the canary in the mine. I need to get one of those. I don't know if I want one of those. I don't know if I want to know the air quality. Maybe I just want to go and not know. Well, I won't keep an eye on it then, because it needs power. Good afternoon, thank you for talking with us this afternoon. I've been looking forward to speaking with you, as always, about something that's of great interest to me and to many others I know as well. But before we jump into it, for those that may not know, can you tell everyone a little bit about yourself?

Speaker 3:

I am Freddy Kristiansen. My official role is principal product manager, but the title that I like to use is technical evangelist. I work on the tools that we have for DevOps and Docker, basically trying to help partners develop in as efficient a way as possible without spending time on tedious things that they may be able to avoid spending time on, right.

Speaker 2:

No, that is true, and I like the technical evangelist term, because I know you have been working with the product for a long time. Even back in the early days when I started out, I saw information circulating from you as well, so I do like the term, and I do follow everything that you promote and push, which is good. One of the things that I wanted to talk with you about is that you talk with partners about working with the latest tools so that they can become more efficient with development and not have to do some of the tedious work, the tedious work with Docker and then also with the GitHub repositories as well. As far as Docker is concerned, you work with the BC Container Helper tool. So can you tell me a little bit about BC Container Helper, what it's used for and what its position is?

Speaker 3:

Yeah, so back in the days, I think around Tech Days 2017, so that's seven years ago, we launched the first way to run Business Central in containers. At Tech Days that year, me together with Tobias Fenster and a guy called Jakob, we had a session where we demonstrated these things. The problem was that running containers would require you to create this docker run statement that easily would fill up the screen with different parameters and other things, and we really did not want all AL developers to be Docker experts. So we created this PowerShell module that was supposed to make life easier. I think the first version of that, NavContainerHelper, was shipped back in 2018, and a number of versions of that later.

Speaker 3:

That became BcContainerHelper, and it's a huge pile of cool functionality that partners can use, really unstructured, in the way that it's kind of the only shipping vehicle I've had. Whenever there's something like, how do we publish to an online environment, well, I'll just ship that in BcContainerHelper, and then it's easier, because that module is always used by people doing DevOps and stuff like that. So we're looking to change stuff in that sense, but we can talk about that later. The module is fairly big today. It has a lot of functionality, it has over 2 million downloads. That doesn't mean that we have 2 million AL developers, obviously. It means that it's frequently downloaded in DevOps pipelines and workflows. So on a good day you'll have several thousand downloads of the module from pipelines, probably.
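For listeners who want to poke at the module themselves, a minimal sketch of getting it from the PowerShell Gallery and browsing what it exposes (assuming an elevated PowerShell session on Windows):

```powershell
# Install BcContainerHelper from the PowerShell Gallery and import it
Install-Module BcContainerHelper -Force
Import-Module BcContainerHelper

# See how much functionality it ships: count and sample the exported commands
Get-Command -Module BcContainerHelper | Measure-Object
Get-Command -Module BcContainerHelper | Select-Object -First 10 -Property Name
```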

Speaker 2:

No, it's great. It is a tool that I personally use daily, and I use it in a number of fashions. As you had mentioned, it's a great tool; there is a lot to it, and I do appreciate it. I've been using it for many years and it has gotten easier because, as you had mentioned, there are two aspects to it: the PowerShell aspect of it and then also the Docker aspect of it.

Speaker 2:

And becoming a Docker expert is a little difficult, I think you can say, at some points using Docker Desktop versus Docker Engine and the like. So it gets a little challenging when you're just trying to develop and publish, so having the toolset to make it easier does help expedite things and become a little bit more efficient with development. With the containers and the images, how is that structured as far as being able to pull down versions of Business Central or NAV? And also, how far back can we go to create containers for versions of Business Central or NAV? I know early on I used to be able to work with NAV from the images.

Speaker 3:

Yeah, so one thing is what's supported. Another thing is what's possible, right? Supported is obviously the versions of NAV and Business Central that we support today. I actually don't know if there is still support for the latest NAV version, or whether that's out of support today, I can't remember. But anyway, it should work, and it should work all the way back. I know people who have created Docker containers with NAV 2013, maybe even NAV 2009 R2. But obviously it's not something that I do a lot to support or maintain or anything like that.

Speaker 3:

There's really only one Docker image out there. In Docker Hub or in the Microsoft Container Registry you can find Docker images, and you'll find one Docker image that matches your operating system, and every month we will ship a new Docker image that matches the supported Windows versions, right now Windows Server 2016, Windows Server 2019 and Windows Server 2022. I actually think Windows Server 2019 is no longer supported, but as long as the Windows team ships updates to that, we will ship updates to the generic image, so if anybody is using Windows Server 2019, they'll be able to get a matching version of the Business Central generic image.

Speaker 3:

In the beginning we created images for every single NAV slash Business Central version. The problem we had there was that we had to redo that every month when there was a new Windows version. We had to redo the entire matrix of supported NAV and BC versions times the different Windows versions. And back then there were also containers for all the intermediate Windows versions, and it became like a huge matrix, and I had computers running for multiple days to create those things. So we changed that, and instead of creating these fixed images, we create one generic image to rule them all.

Speaker 3:

And then, instead of shipping the Business Central bits in containers, we ship them as artifacts, and these artifacts are then stored in a storage account. That storage account can be accessed either from BcContainerHelper, by getting a BC artifact URL, or you can also just download it with a normal Invoke-RestMethod, if you want to do that, from PowerShell or whatever. The artifacts are split in two: there's an app part and a platform part, or a localization part and a platform part, and the localization part is then multiplied by the different countries that we ship. So you'll be able to download a US version of a specific version of Business Central, and there'll be a manifest file in that, and inside of that manifest file there'll be a pointer to what platform this Business Central localization version runs with, and then you can download that and you'll have the bits to create the container with.
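As a concrete illustration of that localization/platform split, a minimal sketch using the BcContainerHelper cmdlets (the country and selection below are just example values):

```powershell
# Resolve the URL of the latest US sandbox artifact (the localization part)
$artifactUrl = Get-BCArtifactUrl -type Sandbox -country us -select Latest
$artifactUrl

# Download the localization artifact plus the matching platform artifact;
# the function returns the local folders where the bits were extracted
$appFolder, $platformFolder = Download-Artifacts -artifactUrl $artifactUrl -includePlatform
```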

Speaker 2:

So, yeah, it's all magic.

Speaker 2:

I know it sounds like all magic, because I can just create a new container, put in the version that I'm looking for and the country that I'm looking for, and I can see it download the artifact and the images, and it creates a new Business Central implementation for me.

Speaker 2:

And, as you had mentioned, there are a number of tools available within the BcContainerHelper PowerShell module, or cmdlets, I guess you'd say, everything from creating containers to pulling out runtime packages of libraries or apps, if someone wanted to create a specific runtime for a version, and the artifacts as well. The earliest I've been able to use it with was 2015. I haven't tried anything before 2015, but I was able to use BC Container Helper with the pulled-down image and load a container, not from a CD, but from a DVD download of 2015, and it worked quite well if you just load the prerequisites into the system. You had mentioned, and I had seen you post and talk about, BC Container Helper and some changes that you're going to make, or have been made, to it in conjunction with AL-Go, and you alluded to that a few moments ago. Can you explain a little bit about that?
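For context, a minimal sketch of what creating such a container looks like with BcContainerHelper (the container name, version, country and credentials below are just illustrative values):

```powershell
# Pick the artifact for a specific Business Central version and country
$artifactUrl = Get-BCArtifactUrl -type OnPrem -country us -version 23 -select Latest

# Spin up a local container from that artifact
$credential = Get-Credential   # admin user to create inside the container
New-BcContainer `
    -accept_eula `
    -containerName bcdev `
    -artifactUrl $artifactUrl `
    -auth UserPassword `
    -credential $credential `
    -updateHosts
```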

Speaker 3:

Yeah, well, it's not changes per se. Well, it is changes.

Speaker 3:

So what we're going to do is, since there are functions for all kinds of things in ContainerHelper, we want to structure that in a better way. To do that, we will take the functions that are needed to create containers or whatever and put them in probably one PowerShell library. Some of the functions are used to, let's say, publish to an online environment.

Speaker 3:

We'll probably put those into GitHub Actions maybe, or in another PowerShell library, I don't know.

Speaker 3:

So with AL-Go for GitHub, which is the tool that we shipped a few years ago, we're kind of seeking to be able to do everything that partners need from a DevOps solution. Back in the days, ContainerHelper was really used to give partners a module so that they could build all of the things that they would use in a DevOps solution. But it's suboptimal to have three or four thousand partners sit and create their own workflows and their own pipelines and their own mechanisms to upgrade and all of these things. It's better to have a managed solution where everything is handled in that one. And that means that we're going to look at AL-Go for GitHub and see what functions of BcContainerHelper it is using: it's creating containers for development environments, it's creating online environments and all kinds of things. We're going to take these functions out of ContainerHelper and put them in other PowerShell modules or in GitHub Actions or other things, so that eventually AL-Go for GitHub will no longer need BcContainerHelper.

Speaker 3:

And what's the future then of BcContainerHelper? Well, it will be put on hold or whatever, I don't know; I'm not going to maintain it anymore. When all the functionality that is needed for people's DevOps solutions is available in other locations, then the need for BcContainerHelper would only be to really work with these old C/AL containers or the things that are not really needed today. And the idea there is to stop supporting BcContainerHelper on the day when, basically, the cutoff day is when AL-Go no longer needs BcContainerHelper. But rest assured, we're not going to just move all the functionality into AL-Go and force everybody to use AL-Go. That's not the purpose.

Speaker 3:

The purpose of this exercise is really to make sure that we have something that we can support and work with. And we will suggest to people that they use what's called a managed DevOps solution, in which I include AL-Go for GitHub, I include ALOps, I include Alpaca, and there might be more. The reason for that is that partners of varying sizes can really focus on providing customer value instead of developing workflows and pipelines and all of these things. We're of course working with partners to make sure that AL-Go supports as much of this as possible; Waldo is probably doing the same on ALOps, and Cosmo is doing the same on Alpaca, so we end up having these three really good solutions for these things. Obviously, AL-Go is free, so if you want a solution that doesn't cost you anything, AL-Go for GitHub is one solution. ALOps has a price tag on it, and Alpaca has the same. Alpaca and ALOps are both on Azure DevOps, and AL-Go is only on GitHub. So that's some of the things that people or partners should think about when they select what solution to use.

Speaker 3:

I had a survey not long ago where I asked people what kind of DevOps solution they use, and approximately 50% of all partners are still using something they built themselves. And I'm kind of to blame for that as well, because the thing that they built themselves, they built after a hands-on lab that I provided back in the days, which was kind of the first approach to this: creating a hands-on lab to teach people how to do it.

Speaker 3:

We could just see that people were spending way too much time on maintaining and handling and all of these things. So that's the reason why, instead of maintaining and expanding a hands-on lab for these things, we started creating AL-Go for GitHub instead. And we're using that ourselves as well. The BCApps repository, with the system app and all of these things, is using AL-Go, and also the Business Central apps private repository, where we have the pilot of the base application on GitHub, is powered by AL-Go for GitHub. Obviously we had to add features to AL-Go to support these things, and we're also working with Alpaca and other partners to hear their requirements and make sure that we support all of the things that they need.

Speaker 2:

That's good. And we're talking about AL-Go for GitHub, which is a set of templates within GitHub. Can you maybe talk a little bit more about that? We talked about managed solutions; we mentioned ALOps, Alpaca and AL-Go, but maybe a little bit more about what that exactly is, for someone that may not know what we're talking about. I know I talk with a lot of partners, even customers, that are working with their own PTEs, that are looking for ways to manage those extensions and make sure that they're compatible with the newer versions, and you know they run into some challenges. But maybe you can break down AL-Go in terms that someone can understand: its position, what it is exactly doing, and how it's of benefit.

Speaker 3:

Yeah, so AL-Go for GitHub is a plug-and-play DevOps solution for Business Central apps, either PTEs or AppSource apps. There's no real built-in support for on-premises solutions, but I know multiple partners who are using it for that as well. It's not something we are focusing on. What we are focusing on is, if you are a customer, or if you are a partner that's building a PTE. Typically, I don't know how things are in various places, but if a customer asks a partner to create a PTE for them, one option could be to have it in a shared repository where the customer actually can see what's going on and the code ownership is with the customer. So the customer would basically give the partner access to a repository that they would create, and then the partner would create the code. The way to create the repository is really to use aka.ms/algopte and say, yes, I want to create a repository, this is my repository, and then invite the partner to this repository. Now the partner can add the code, and already at that time workflows and pipelines and release pipelines, everything, is set up just by running that simple URL, and the only thing you need to do is basically add the code.

Speaker 3:

Obviously there can be dependencies that require the partner to do some setup, but all in all it should be fairly simple, at least I think so. And some of the things we're working on now is NuGet support, so that if you have a dependency on other apps, then you can really just specify a NuGet feed from that partner, or maybe get symbols from Microsoft to use when you build the apps. So we're trying to make it as easy as possible for people to just always have CI/CD set up out of the box without having to ever change a YAML file, change a PowerShell script or maybe install Docker locally. Eventually we want to have it integrated with GitHub Codespaces as well, so you can go into your repository and say edit in Codespaces, and it's going to open VS Code for you right there in your browser, and you can edit that and you can test it out in an online environment or stuff like that. All of that should be plug and play, and really you can do that almost on your TV if you have a keyboard, right?

Speaker 2:

Development has sure changed for this application over the years. I go way back to the old C/AL days; I started working with version 1.1, and the changes that have come around over the past 20-something years are impressive. So AL-Go is a set of templates to help manage and deploy extensions that are created for Business Central, whether they're apps for AppSource, or if a customer has a PTE, or even if a partner is doing a PTE for a customer. Templates might be the wrong word.

Speaker 3:

Say again, please. Okay, templates might be the wrong word. It really is a starting point. It is a template per se, but it also contains a workflow called Update AL-Go System Files, meaning that it's really the same as updating your Windows. When you install Windows, you get monthly updates of Windows and you always have the latest version. That's one of the things that's built into AL-Go for GitHub: we release versions continuously, and people can upgrade their current workflows to the latest version just by running that workflow and applying whatever settings they have in their own repository. But typically a template would be a starting point, and then you're kind of on your own to maintain it afterwards.

Speaker 2:

Okay, I understand. And then with AL-Go, within the workflows, the workflows will do things such as check for changes or breaking changes against compatible versions. Are there any configurations that need to be done for that, or is that all part of the base setup after you do the initial setup?

Speaker 3:

So if you just add code to AL-Go for GitHub, it will build the code whenever you check in changes. There's a pull request handler set up so that if you create pull requests, then you'll have to do code reviews and stuff like that, and when you merge the pull request it's going to do a build. If you don't have any releases, it doesn't know what old version to check against. The moment you then create a release of your app, AL-Go will automatically check for compatibility with the latest released version of your app, like inside the repository. So there you will have the breaking change check all the time.
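If you want to run a similar validation locally rather than waiting for the workflow, BcContainerHelper has a validation helper that can check an app against a previous version; a minimal sketch, with the file paths and affix below as placeholder values:

```powershell
# Validate a built app against the previously released version
# (runs AppSourceCop, including the breaking-change check, in a container)
Run-AlValidation `
    -apps @("C:\build\MyApp_2.0.0.0.app") `
    -previousApps @("C:\releases\MyApp_1.0.0.0.app") `
    -affixes "XYZ" `
    -countries "us"
```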

Speaker 2:

That word gives me nightmares, by the way, I'm just letting you know. I hear breaking change and I just cringe.

Speaker 3:

And also, if you have a test app, it will automatically run the test app and show the results of the tests. If you have BCPT tests, like Performance Toolkit tests, they can be set up to run automatically as well, either within all your builds or on a daily schedule, to see whether your app regresses in performance. And also, with the latest feature, page scripting and stuff like that, we're looking to implement that too. So basically, you'll be recording your page scripting tests in the recorder, and then the YAML file that comes out of that, you'll check into your repository, and we know that this is a page scripting thing and will just execute that test and show the results as part of your pipeline.

Speaker 3:

The way that we want to implement these new functionalities is by saying, how can we make it as easy as possible for people? I mean, typically you'd have to think, okay, what code do I need to write in order to run these page scripting tests against my container or my service tier or whatever? All of that you don't need to know; we'll make sure that the tests get executed whenever you are running CI/CD or Test Current or Test Next Major or whatever.

Speaker 2:

You had mentioned a point where you could have your builds run daily to check, or you can have them run at intervals. Is there a configuration for that? And then, also with that, you mentioned AL-Go is free. I know it's free to use on GitHub, but is there a cost to run those workflows regularly, like within GitHub itself? What type of license or what type of GitHub setup does someone need to have, because you can have different versions of GitHub?

Speaker 3:

Yeah, so on GitHub, everything that is public is free, and everything that is private has a cost attached to it. So if you have a public repository, you can run as many workflows as you want and it doesn't cost you a dime. If you have private repositories, it depends on the SKU you buy; you can have the basic one.

Speaker 3:

I can't remember what it's called. So there are two SKUs for developers and there are three SKUs for companies. The SKUs for companies are GitHub Enterprise, GitHub Team, and I can't remember what the free one is called, but let's just say GitHub Free. In GitHub Free, you get 2,000 minutes of GitHub Actions for free, and after that you pay per minute to use your runners. You can use self-hosted runners if you want, if you prefer that, so you can set up a few runners and attach those to your AL-Go.

Speaker 3:

That becomes a setup. That's something you need to do. Obviously, you can find documentation on how to do that, but whether you will save money on that or you'll save money on actually using the GitHub hosted runners, I don't know. We are working on making sure that all the builds are running on Linux, so whenever you are building stuff, we are running Linux, which is faster than the Windows runners and it's half the price, so you can actually get a lot of builds done by using a Linux runner.

Speaker 3:

The problem is that when we want to run tests, we kind of need a Windows agent, because we need to create a Docker container to run the tests in, and we're investigating various ways of handling that, to see whether we can run the tests in an online environment. If you have a sandbox you don't need, then maybe you can use that. It actually speeds things up quite dramatically, because you don't need to ever create a container, and everything can run on Linux if you do that. So these are some of the things that we are working on making as smooth as possible so that people can save money and time.

Speaker 2:

It's important. With the changes and with the technology and with the updates coming regularly for Business Central, with the monthly updates as well as the twice-a-year waves, ensuring that the extensions are compatible and you won't have any of those breaking changes or other issues is important, as well as making sure your tests run properly. To me that's important. I'm a big advocate for testing and test scripts, because I have found countless errors, well, I don't want to say countless, but I've found a number of errors with changes that have been made that, if the tests weren't there, nobody would have caught, even changes that had gone through a code review, when someone is reviewing it. So those tests are helpful, and having the capacity to do that easily for each pull request is important.

Speaker 3:

So in AL-Go you'll have a set of artifacts that you develop on. Typically, if I say I want to be compatible with version 23, I'll be developing on version 23, because then I'm sure I don't use 24 features and suddenly become incompatible with the version that I'm working on. So if you're working on, let's say, version 23, this is the artifact that you have a development container on, and that is the artifact that is used for the CI/CD builds and the testing and all of that. Then there are three other workflows that are supplied by AL-Go: one called Test Current, one called Test Next Minor and one called Test Next Major. Test Current will actually say, okay, you are developing on version 23, but the current version is 24.4, so I'll run that workflow, maybe on a daily schedule or maybe on a weekly schedule or whatever, and test your code against the current version of Business Central. And the next major and the next minor will do the same, just with the next minor, meaning 24.5, and the next major would be 25.0.

Speaker 3:

They are all available today as insider artifacts. You don't need a token or a key to unlock the insider artifacts; the workflows can automatically get them. You don't even need to accept the insider agreement, because you never see them, you're just running a test against them, and you'll never get a container that actually works with that. So these things are automatically supported out of the box to make sure that you'll see errors that you normally would first see three or four months from now, if you're running the next major build and seeing there's a problem in that one. I'm hoping that people will run these on a schedule so that they are more ready for when we ship the next major, and everybody is then faster to get to the next version.
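For anyone who wants to poke at those artifacts locally with BcContainerHelper rather than through the AL-Go workflows, a minimal sketch (recent module versions expose insider selections and an -accept_insiderEula switch; treat the exact switch as something to verify against the current BcContainerHelper docs):

```powershell
# Resolve artifact URLs for the current, next minor and next major US sandbox versions
$current   = Get-BCArtifactUrl -type Sandbox -country us -select Latest
$nextMinor = Get-BCArtifactUrl -type Sandbox -country us -select NextMinor -accept_insiderEula
$nextMajor = Get-BCArtifactUrl -type Sandbox -country us -select NextMajor -accept_insiderEula

$current
$nextMinor
$nextMajor
```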

Speaker 2:

No, it's important. I see a number of customers receive those warnings when they have an update trying to be applied, and it says, you know, we can't apply the update because your extension is incompatible, and this will help. It's a proactive and preventative approach, as it reduces the risk of interruption for someone receiving an update to their Business Central system, where if they don't update the extension, it doesn't get republished after they go outside of their window. One part of AL-Go which you were talking about: so with AL-Go for GitHub, it can automatically deploy the apps to an environment as well, is that correct? Will it automatically deploy to a sandbox, or to both sandbox and production, or can you target?

Speaker 3:

You can do both. You can also set up continuous deployment to a production environment. By default, if you point out that you want to deploy to an environment automatically and you don't state anything in settings, it will reject it if it's a production environment, and it will do it if it's a sandbox environment. But you can go into the settings and say deploy to prod, if that's the name of my production environment, and set continuous deployment equals true, and then we will also do continuous deployment to the production environment. I'm not sure many customers and partners are ready for that one, but it's possible. I mean, it's not something that I should block, but by default it is not happening, so we're not going to overwrite any customer's apps just by accident.
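As an illustration of that setting, a sketch of flipping continuous deployment on for a production environment by editing the repository's AL-Go settings file from PowerShell (the environment name "PROD" is just an example, and the key names follow the AL-Go settings documentation as I understand it; verify them against the repo's docs before relying on this):

```powershell
# Read the AL-Go repo settings file
$settingsPath = ".github/AL-Go-Settings.json"
$settings = Get-Content $settingsPath -Raw | ConvertFrom-Json

# Add (or overwrite) a DeployToPROD setting that enables continuous deployment
$settings | Add-Member -MemberType NoteProperty -Name "DeployToPROD" -Force -Value ([PSCustomObject]@{
    EnvironmentName      = "PROD"
    ContinuousDeployment = $true
})

# Write the settings back
$settings | ConvertTo-Json -Depth 10 | Set-Content $settingsPath
```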

Speaker 2:

But if someone has a workflow that's stable and they want to reduce the risk around whoever is going to load extensions into their environment, they can go through this process and at least have it monitored and checked through the workflow, and the sandbox environment can go first.

Speaker 3:

It's also important to see that there are two types of apps: there are PTEs and AppSource apps. We cannot deploy an AppSource app to a production environment from a CI/CD pipeline, but we can do it to a sandbox environment. There we'll deploy it as if we were using VS Code and deploying it into the dev scope. But you cannot deploy an AppSource app into the dev scope in a production environment, so in a prod environment you have to install it the right way.

Speaker 3:

And if it's an AppSource app, then we do support publishing to AppSource directly from AL-Go for GitHub. We will automatically submit it for validation and thereafter actually even take it live on AppSource if you want to do that. But typically, if you don't state anything, it will be validated and sit where you need to go into the Partner Center portal and say go live with this app, otherwise it'll stay there. And in the latest version we also support preview of AppSource apps, and at that point in time, what we really do when we publish to AppSource without going live is to put it in preview, and people can then, from their production environment, install the preview if they like, and then you can test it, you can go live, and then they can install the final version there.

Speaker 2:

So, talking about that preview version of an app, is that something that partners can give to specific customers to test new functionality? Is that the focus for it, where you can give specific customers reference to a version or preview version to work with?

Speaker 3:

So when you publish an app to AppSource, it goes through a validation process, and then, if you have created what's called a flight code, it's in preview for the people who have that flight code. You can give that to customers or partners, and they can then install the preview version of that app with that flight code, and then they can install the final version once that gets shipped.

Speaker 2:

That's good, that's a new feature.

Speaker 2:

I've been hearing about that quite a bit, and it's also good for partners or anyone who has an app on AppSource to get some additional testing or functionality review from customers before releasing it to widespread use. You had mentioned, when setting up a GitHub repository for a new PTE or a new AppSource app, again, PTEs would be for customer environments, AppSource apps would go to AppSource, that you could run the aka.ms/algopte link; I know there are two of them, one for AppSource apps and one for PTEs. What about an existing repository? So now, if someone's been working with an AppSource app or a PTE in a GitHub repository, is there a way to apply the AL-Go workflows to that repository, or is it better to create a new repository and copy in the code?

Speaker 3:

I haven't actually thought too much about applying it. There are a few ways of doing it. You can create a new repository and then you can import the Git history from the old repository and basically get everything in there; you can do that from Azure DevOps to GitHub. You can also take, I don't know if I have documentation on that specific one, there is some in AL-Go. You can go to the documentation, the readme; there are a few ways that it's documented on how to take an existing repository and make that into an AL-Go repository. I'm not sure that you can do that with a GitHub repository, I'm not sure it's documented. You can obviously do it, but I'll write that down on a piece of paper and then add that to the docs, because it is really just adding some files to your repository, and then AL-Go should more or less be able to take it from there.

Speaker 2:

No, that would be great, if there's some documentation on that process or that situation, for those partners or customers that are managing their code with source control such as GitHub, now that AL-Go is becoming more and more popular, to be able to add that to the repository. Speaking of AL-Go, and you mentioned the documentation, where can someone find more information on AL-Go? Where do they obtain AL-Go?

Speaker 3:

You can find that in the GitHub repo.

Speaker 2:

In the GitHub repo for AL-Go. It's under the Microsoft organization, and you can search for AL-Go and find it.

Speaker 3:

You can also go to aka.ms/algoworkshop, which will take you to an area in the AL-Go repo, really, where there are some MD files that take you through a workshop on how to set things up, how to do releases and versions, how to do a lot of things. And a lot of things that are not yet documented are there, just as lines to be documented. So we're working on that. It's a continuous evolvement, really, and whenever we get new ideas, it goes into the backlog and we'll try to make that happen. We also have a few partners that contribute to AL-Go, which we are happy for, but obviously the idea is that most partners will just use it as a tool. Some partners, if they have the capacity to actually co-develop AL-Go with us, are more than welcome to do that; the more the merrier. We are three people right now working on AL-Go for GitHub, obviously me and then two others, a girl and a guy, and yeah, we'll be trying to keep up with all the requirements that are coming from partners and customers.

Speaker 2:

It's like drinking from a water hose, I call it, or a fire hose, excuse me. And it's not just AL-Go, it's Business Central in general: the number of changes, the number of features, the technology advancing, customer requirements. Everything's becoming easier, but because it's easier it's moving faster, so it's almost like a cycle. I feel like I'm constantly running to keep up, I guess you could say. But having tools such as BC Container Helper and AL-Go does facilitate that process, where you don't have to become an expert in the workflows or, you know, creating Docker containers and downloading images and setting pieces up. Speaking of the AL-Go workshop, I know you do a number of sessions for AL-Go, and I know Days of Knowledge US is coming up in September. Are you going to be attending Days of Knowledge in the United States?

Speaker 3:

I'll be there, and I have a few sessions about AL-Go and some of the things that we talked about here, also a getting started session and a more advanced session. And I'm also looking to see who's there, to see if I can get some feedback from partners who are using it already, to get some good ideas on what can we do better and what should we do differently. I'm always looking for that when I'm at conferences, to get feedback on where people are struggling and where people are running into dead ends, to make sure that we can do it better than we're doing already.

Speaker 2:

I understand. So at Days of Knowledge in the United States you'll be having a workshop or a session on AL-Go. So anybody who wants to get some hands-on experience or to learn a little bit more about it can also attend that conference.

Speaker 3:

There are no hands-on labs per se, but there are sessions.

Speaker 2:

Sessions where you walk through how to set up and use AL-Go and are then also able to answer some questions, which will be good. I will be there. Chris is going to the Power Platform Conference, so we switched on this one. Usually we go.

Speaker 3:

The next week you can go both places.

Speaker 1:

We've got to spread ourselves.

Speaker 2:

It's also coast to coast. It's a stretch across the country for Chris to go and come back for the two conferences, and that's why I'll just do the Days of Knowledge United States one as well, because it gets to be a lot to fly back and forth all over the place.

Speaker 3:

I have quite a few Business Central conferences around the world.

Speaker 1:

Oh, yeah, oh, more than you can count right now.

Speaker 2:

Yeah, there are quite a few, and that's one of the questions I always have, or get asked, and I like to ask it as well, outside of, you know, AL-Go or BC Container Helper: which ones do you go to? Right, because you have DynamicsMinds, you have BC TechDays, you have Directions EMEA, Directions Asia, Directions North America, Days of Knowledge, you have Dynamics User Groups. So there are a lot of conferences, and there are a few others too. There's a lot of great information; it's just important to strategize and plan so you pick the right ones and get the most out of it, which is important.

Speaker 3:

I've always, I think I've attended every single Days of Knowledge conference.

Speaker 3:

Well, really, not Days of Knowledge, every single Tech Days conference, all the years, I think; I've been to Antwerp. And what I like about these technical conferences is really that, I mean, if you go into a session and you see a lot of stuff and you hear a lot of stuff, you're not going to remember everything, right, but you're going to get an overview of what's possible, and you know where to look, and you'll start thinking about how to do things smarter. So going to conferences and learning technical stuff, I think, is something that partners should prioritize for their people, because people will really be much more innovative if they know more about what's possible and what's coming and what's trending and all of these things. You will get people's minds working and not only their hands and feet. And your people will be much more satisfied, right? So I cannot stress enough that partners should really send their technical people to technical conferences and learn stuff.

Speaker 1:

It is important. I agree with you, it's absolutely important, I agree.

Speaker 2:

It's important also to support their customers, so the technical talent that they have, whether it be functional consultants or developers, or even solution architects, whatever roles you have, as you mentioned, but I'll call it more the implementation side, should go to these, just so that they're aware, they know what's possible, they know what's coming, so they can solution Business Central better or also implement it a bit better, and know what to expect and what's going on.

Speaker 3:

So it's interesting that we have a Days of Knowledge in America now. I hope there's going to be a lot of attendance, and I hope it's something that's going to be a recurring event. Obviously, the Directions committee cannot run sessions in the US if only two guys show up, but I assume there will be more than two guys.

Speaker 2:

I'm hopeful there will be. I mean, it's the first year, so hopefully they'll have enough of an attendance.

Speaker 3:

We'll do this again.

Speaker 2:

Yes.

Speaker 2:

I'm hopeful for that too, because I do feel that we need to have those technical conferences, or those technical tracks at conferences, where consultants, you know, those that are in the field, as I'll say, can get a good look at what is there, what is coming, and also see how to do some things. You know, as you had mentioned, it gets me thinking, when I go to see some of these sessions at conferences, of how to solution better, which is extremely important. Well, Freddy, thank you for taking the time to talk with us today about BC Container Helper and AL-Go. I'm going to go back now and play with it again myself; I just want to try some new things every time. I follow everything that you put out, I follow AL-Go, I follow BC Container Helper. As I mentioned, I use it daily; several times a day I'm doing something with containers, even if I just need to do a quick test of something.

Speaker 3:

It's a little easier than downloading the DVD and installing that and uninstalling the old version.

Speaker 2:

Yes, yes, and sometimes, again, it's where a sandbox environment may not be enough, because of the whole online environments thing; I have a sandbox as well, but again, as you had mentioned, it's a lot easier to just spin up a Docker container sometimes in my workflow some days. So I appreciate the tools that your team has put together. I know personally it's simplified my days and helped a lot of customers as well, because of that simplification. If someone wanted to learn more about BC Container Helper or AL-Go, there are GitHub repositories for both of them, and in those GitHub repositories, if they have any questions, I know that you're very responsive to the issues, and someone can keep up with the changes as well. If anyone wanted to get in touch with you, if they had additional questions or feedback on AL-Go or BC Container Helper, what's the best way to get in contact with you?

Speaker 3:

Typically, if people have anything on BC Container Helper or AL-Go, it would be to create questions or issues directly in the GitHub repos, right? The GitHub repo for AL-Go is github.com/microsoft/AL-Go, and the GitHub repo for BC Container Helper is github.com/microsoft/navcontainerhelper. So the repository is still called navcontainerhelper, but that's where everything is, and there's an issues list there where you can go and ask questions or whatever else. I mean, people are also free to shoot me an email, freddyk at microsoft.com. I'd be more than happy to answer, but be prepared that I might say please create an issue here instead. It's easier to track issues like that on GitHub than it is to track where the heck that email is.

Speaker 2:

I agree. I agree with you, and with that, if somebody has an issue or a question, I like seeing them in the issues because it may help others as well.

Speaker 2:

Because if one person has a question, if one person thinks something is an issue, it's a good place to go. I had something that I had an issue with, some errors that I saw on the issues list, and I was able to easily remedy it without having to contact anybody. So having it in there is good use for that, and that responsiveness is welcomed, I think, by many. It's important for others to realize that these GitHub repositories are a great resource as well if you have questions, to look through the history and to be able to search them. Hopefully GitHub will get a little better with the searching, but that's not our space. But again, thank you for taking the time to speak with us today. I really appreciate it. I definitely look forward to seeing you in a few weeks at Days of Knowledge United States and to learning a little bit more about AL-Go while I'm there.

Speaker 3:

Looking forward to it as well. Thanks for inviting me, and have a nice trip to Vegas, Christopher. Yeah, he gets to go to Vegas. It's going to be a tough one. Okay, it'll be tough. Have a good one. Thanks, Freddy.

Speaker 2:

Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.

Speaker 1:

Thank you, Brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us. Thank you to all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is D-V-L-P-R-L-I-F-E dot com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16. You can see those links down below in the show notes. Again, thank you everyone. Thank you and take care.
