067: LLM/AI as agents in your Go system with Markus Wüstenberg

Dominic:

Hello there, I'm Dominic St-Pierre and you're listening to Go Podcast. Thank you so much for listening. We have an interview this week with Markus Wüstenberg. He's a friend of the show, and we will talk about LLMs. I'm not a huge user of LLMs, agentic coding, or building agents into my own systems at the moment, so Markus is going to talk about what's going on in that field, and I will try to keep an open mind.

Dominic:

If you want to join us, we have a Slack channel called gopodcast, in one word, in the Gophers Slack community. There's a link on the gopodcast.dev website if you want to join. I'm still trying to learn how to participate in the channel myself, but if you have ideas for new episodes, or want to join as a guest, or anything like that, it would be nice to see you there.

Dominic:

I have my course that I launched three weeks ago, actually: Zero to Gopher. It covers the Go fundamentals and is a great way to start with Go. So if you haven't written any Go yet, you're jumping into the language, and you want some guided first steps, this is something you can check out. There's a 50% discount for listeners of the show in the show notes. And on that, I leave you with the interview.

Dominic:

Hello there. So this week, I'm joined by Markus Wüstenberg. It's already your third time on the show. Markus, thank you so much for returning.

Markus:

Thank you for having me.

Dominic:

So we will skip the intro; you can listen to episodes 45 and 63, where Markus was a guest. Let's start with maybe a small thirty- or sixty-second update on gomponents and where the project is, so we can follow along, and then we will jump to our main topic.

Markus:

Yeah, that sounds good. So I'm building gomponents, this HTML library where you can build HTML components in pure Go. It's pretty mature, so I don't actually work on it a lot.

Markus:

I've recently done a gomponents Datastar integration. Datastar is sort of an HTMX competitor in the JavaScript space. I think we talked last time about using Alpine.js and HTMX together, and Datastar kind of does both things in a very neat way. So check that out if you haven't. I'm doing an integration of that with gomponents.

Markus:

Yeah, that's pretty much it. Otherwise, it's stable. I use it every day.

Dominic:

Very nice. Yeah, Datastar is crazy. I actually had Delaney on the show a couple of

Markus:

Oh, you had?

Dominic:

Oh, yeah. Oh, yeah.

Markus:

I'll have to listen to that.

Dominic:

Yeah, it was great. I'm trying to get him back, actually, because at the time I did the interview, I was not using Datastar. Funny enough, I think he started a project a little bit similar to gomponents at some point.

Markus:

Yeah, he did, because he had actually seen gomponents, but he wanted it to be more type-safe, so that you could not write invalid HTML, as far as I know. I think that was his motive. And then he created this gostar library, which is kind of similar in how it's used, at least, but doesn't let you do weird stuff like gomponents does. We disagree on that.

Markus:

I like letting people do weird stuff with HTML, but that's fine.

Dominic:

Yeah. HTML is always a good topic for debate. Definitely.

Dominic:

It would be nice to have the templ authors on again. I interviewed them as well, so it would be nice to do a follow-up, because I personally have returned to pure HTML now.

Markus:

Oh, you did?

Dominic:

Oh, yeah. For so many reasons that we won't go over, because it's not the topic. A conversation for another day.

Dominic:

Yeah. Totally. But Datastar is great. And I can see that using it with gomponents is probably a great fit. If people have not looked at any of this, I encourage you to check it out. It's different, but it's pretty nice.

Markus:

Yeah.

Dominic:

So it will be all about AI today: LLMs and things like that. We will try to approach this topic with some careful thought. I have some strong opinions, but yeah.

Dominic:

I was interested in interviewing you because I was looking at what you are doing, or at least what I think you are doing, and it seems very interesting. So let's first start by telling people: how are you using AI and LLMs in your day to day at this moment?

Markus:

Actually, I'd like to start a bit earlier, because I have a background in machine learning as well.

Dominic:

Oh, nice.

Markus:

Back at university, I was in a PhD program using machine learning, which was before the whole neural network craze reignited. So classic machine learning, I call it: doing analysis of sensor data from mobile phones and stuff like that. I've always been interested in this space, but I've been working more on software development and software infrastructure since.

Markus:

But this whole LLM craze, or AI craze, sort of reignited some of that, because it's just so cool in so many ways, and it still feels like magic using it.

Dominic:

Wait. Wait. Wait a minute.

Dominic:

Just before you continue: have you just said that you completed a PhD? All this time, I was talking with Doctor Markus Wüstenberg?

Markus:

Actually, no, because I didn't complete it. For many reasons, I dropped out and went to work for Uber, doing software infrastructure and stuff.

Markus:

Yeah. So I won't go into the details of that.

Dominic:

Oh, yeah. Totally. But I agree that it was pretty different back then. We were talking way more about machine learning, compared to, I don't know, five or six years ago when

Markus:

it all started. Yeah. Right. So really, decision trees and, I don't know,

Markus:

all that very classic machine learning, before neural networks really came back into style.

Dominic:

But is it still used anywhere? Is there any research still trying to advance it? Because if I'm completely honest, I personally found that more interesting than what we have at the moment.

Markus:

Yeah, I don't know. I don't follow that space very closely. I'm mostly deep into LLMs and their usage, basically.

Markus:

Also, I'm not very academic in my approach. I like applied AI.

Dominic:

So you have seen a lot of differences. What was your original reaction? How did you approach using LLMs? What was your mentality at that time?

Markus:

Well, like everyone else, I was amazed when ChatGPT came out. I'd sort of followed GPT-2 and GPT-3, seeing them on Hacker News and discussing them with some people here locally. I thought it was curious and fun, but I couldn't see how much you could apply it to, and I don't think anyone really foresaw that. Maybe a few people.

Markus:

So, yeah, the ChatGPT moment changed everything for a lot of people, including myself. Like I said, I think it still feels like magic, basically: just that the computer can understand what I'm saying in natural language. I think it's baffling. And it's been, what, three years or something now?

Markus:

If that was all we had, I think it would still feel like magic. So, yeah, that's kind of what started it. And then I just kind of dove into it, both from the code generation side, which is what a lot of people are talking about and using it for, these coding agents as we call them now, but also integrating them into applications as runtime dependencies, because they can be used for so many things.

Markus:

So, building agents, or using them for all kinds of things they're good at. You can use them as classification models, you can use them for search assistance, you can use them for so many things, and I think it's just deeply interesting.

Dominic:

Yeah. So that was mainly your use case: adding functionality to existing systems that use an LLM under the hood, without the user really understanding or knowing that, okay, they are dispatching this call to an LLM at some point and doing X, Y, Z. Right?

Markus:

Yeah, exactly. Because you can build applications in so many ways, and you don't have to tack AI on top of everything and then magically get some, I don't know, investor money or something like that. I'm as sick of that as everyone else, I think. But I really like how you can integrate it in places where it's actually useful, and we're all still figuring out what this usefulness is and how it can aid us.

Markus:

So I think down the line, we'll just use it as another tool in the toolbox, but a very powerful one. Like we use databases, or search clusters, or whatever else.

Dominic:

And were you missing any library at the time? Was it in Go? Let's return to the beginning, when you started to integrate some functionality into systems you were building in your consultancy, or whatever it was. Were you, like, creating Python services, because it was way easier back then to call those LLMs in Python, since the libraries existed? Or how is it today in Go?

Dominic:

What was the history, and has your work changed from the beginning to what it is today?

Markus:

Yeah. So in the beginning, I guess it was very browser-based: figuring things out in the browser and then trying them out. But luckily, it's always been a very simple API. The one that OpenAI started with is basically one of the standards now, the chat completions API, which is very easy to grasp.

Markus:

It's very easy to use, and a lot of other platforms have adopted it. You don't need a lot of scaffolding to be able to use it; it's basically just some HTTP calls, and Go is very, very good at that. It's a language built for distributed systems, network systems, and everything to do with HTTP. So I always thought it was a really good fit.

Markus:

And you get some of this in Python, which is the de facto machine learning language. But I don't think Python is as good a fit for actually building applications that use LLMs, because an LLM is just another dependency over the network in that way; there's nothing special to it. Now, if you want to do actual machine learning, like building models, I would still reach for Python, I think. But for building applications on top of AI and LLMs, I think Go is ideally suited. And we see that now: a lot of Go SDKs are out there that people can just use. Anthropic has one, OpenAI has one, and Google obviously also has one for their Gemini platform.

Markus:

Yeah. They are different in small but important ways, but there are various libraries out there that smooth over some of the differences. And I've built one myself as well, because apparently I can't not build things.

Dominic:

Yeah, sure. So I suppose you are mostly expecting JSON back from those interactions with LLMs when you are adding functionality to a system. When it's not really a conversation you are implementing, but something else, I guess you are sending JSON and receiving JSON.

Markus:

Well, yes and no, because it's just a sort of text token completion engine, right? So sometimes it can't really output JSON, or it's malformed or something like that. The SDKs now generally have that built in: please make your answer conform to this JSON schema.

Markus:

And they generally do, and are very good at that, having been trained for it now. But before, it was like: maybe sometimes you did get it back; sometimes you got a text preamble saying "here's your JSON" and then the JSON, or something like that. So it's definitely improved now, and you generally get JSON back if you use a large enough model.

Markus:

But, yeah, that's how it works. Thinking of it fundamentally as "you get text back", and then reacting to the error cases appropriately, is part of it, because these models are fundamentally nondeterministic. You can't really rely on what you're getting back. And I think that's where some of the fun is hidden, actually, because you have to be good at building software that takes errors into account and handles them appropriately. And that's always been a staple of robust software development.

Dominic:

And what are you doing when, let's say, the response is not what you're looking for? Are you retrying? What good advice could you give?

Markus:

Yeah. Basically, some of the basic things we've always done: set proper timeouts, make sure to retry when you're not getting back what you expected, do retries with backoff algorithms and jitter, and all that stuff we're already used to in distributed systems.

Dominic:

But we are seeing more and more people who will dispatch some calls to some models and some calls to other models. Are you there yet? Have you tried that?

Markus:

Like different models? What do you mean?

Dominic:

Well, maybe it's to save some cost, or maybe it's to have two completely different models. Let's say I have a task XYZ, and I first send it to a GPT model, and after that, if the result is not proper or something is not correct, I could use Claude, for example, and retry there.

Markus:

Right. Not generally at run time, though. That's something I'd rather figure out in the design phase. So, having these... I don't know if you know the term evals, evaluations.

Markus:

They're sort of like tests, but for LLM systems. You have your test cases: the things you care about, involving data in your application, and the kinds of queries you send away to the LLMs. Then you get responses back and you grade them somehow, like scoring them from zero to one, for example, and you get an average across all your test cases, or at least you can see the distribution. And then you look at different models from different providers, see what performs best, and choose that at run time instead.

Markus:

So you don't

Dominic:

That's interesting. Okay.

Markus:

At least that's what I would do.

Dominic:

Yeah. Yeah. Okay.

Markus:

Yeah, a lot of people use it in different ways.

Dominic:

Okay. So it's a good way to quickly compare, you know, two or three providers like that.

Markus:

Yeah, and the models among them too, because there are a lot of different models, and they vary a lot in things like cost and latency and how good they generally are. And generally, the more powerful, the slower and the more costly.

Markus:

So it's always a trade-off, and it depends on your use case. If you're doing background jobs for, I don't know, processing documents, then maybe you don't care about that. But if you're integrating into a search system and you want sort of an agentic search, then you obviously want low latency if it's for an end user.

Dominic:

So let's continue with this idea of search. I'm interested: if I understand, you would send some data to the LLM for it to index, but sometimes it could be a very large amount of data, and we are so constrained by context size at the moment. So what are we talking about when we talk about search? Let's say I have a lot of data to be searched. How would I do that?

Markus:

Yeah. Typically, when I say agentic search, it's something like: either you have an agent chatbot aiding you in search, or you type in natural-language queries and get results back. And typically, because these models don't have unlimited context, you can't just load all your, I don't know, 1,000,000 documents in there, or whatever.

Markus:

Yeah. You do what we've always done: you put it in databases and make it searchable. And in this case, you just give the LLM a tool to be able to search the system itself, basically. That's it.

Markus:

You've probably heard the term RAG, retrieval-augmented generation, and that's basically what it is. You just give the thing a tool to be able to search for itself, and then it can reason about the results and do all kinds of things. But that's the basic gist of it: you just give it tools to search.

Dominic:

Okay, I get you. So this was the basis for the now-expanding MCP protocol, I guess? Was it the idea that started the other idea?

Markus:

Yeah. MCP, for those who don't know, is the Model Context Protocol. It's a standardized protocol for being able to use tools, for example across the web, and it has stuff like authentication standardized and built in, so it can be used across different providers. That is one form of giving tools to agents, but there are many other ways of doing it. One example: if I fire up Claude Code in the terminal and give it access to some command-line utilities that I have, that is a form of tool.

Markus:

It can use the bash tool to call those and get its information that way. For example, if I give it access to the SQLite command-line utility, it can now search my local SQLite database using the built-in full-text search. And that works wonderfully as well, and it's great in many ways.

Markus:

Yeah. Does that make sense?

Dominic:

Oh, yeah. Totally. I wasn't very familiar with RAG, to be frank, but the way you explained it, now I can start to see it. This is probably a small piece I was missing, because providing the data was always intriguing to me.

Dominic:

I mean, how do you provide all this data in a secure way?

Markus:

Yeah, exactly. And it's just super interesting in that the way we search the web is changing. People are using ChatGPT, or Google is rolling out their own AI search. I don't know.

Markus:

I think it's google.com/ai or something like that. I haven't used it much myself.

Dominic:

But yeah.

Markus:

This whole thing is changing in a way that I don't think we've seen since Google initially rolled out in the nineties and really changed how we find information on the Internet. Because of that, it's just so interesting to follow what's going on in this field.

Dominic:

Well, yeah. For web search, I can personally testify that as a blind person, using an LLM is way, way easier than having to jump into three or four websites that are all structured differently. For me, this is a life changer, to be frank.

Markus:

Oh, yeah, I can imagine, because it's text, and then you can get it read through the screen reader. Right?

Dominic:

Absolutely. And the format is mostly always the same. That is a huge deal when you're using a screen reader.

Dominic:

You know, format is is always the problem.

Markus:

Yeah. What about stuff like understanding images? Because you can paste them in and then it can describe them, right?

Dominic:

I haven't really used that myself, to be honest. I'm not a huge image user, if I can avoid it.

Dominic:

But that's because I already have OCR built into the screen reader. Wherever I am, if there is an image with some text, I can OCR it.

Markus:

Okay. Yeah.

Dominic:

Yeah, it was already there. But I imagine that at some point, the next generation of screen readers might be just crazy, and this brings me to a follow-up question. I imagine that if we were able to have some kind of model locally, that would help a lot of people. Because personally, one thing that has been blocking me from using LLMs more in my programming activities, if you will, is the cost.

Dominic:

It's kind of extreme to me, especially since I don't see the value for me at this moment. But if the model ran locally, I could probably use it. Let me give you a real example of something I wanted to build for myself; it's interesting that you mentioned images, because now I have a use case. I was a huge Out of the Park player.

Dominic:

It's a baseball simulation. I have been playing it since 2002 or so. And since I'm using a screen reader, I cannot play that game anymore, obviously.

Dominic:

So I wanted to build something for myself where I would just be able to, let's say, take a screenshot of the game screen and ask some questions. The game is basically a glorified Excel sheet, if you will; it's a baseball simulator, right? There are numbers everywhere and things like that.

Dominic:

But I cannot do that because of the cost involved in running it. If it were local, I could do it. And I don't care if it's slow; I would not care if I had to wait a couple of seconds.

Dominic:

That is not a problem for me, but running local models requires so much power, it's crazy. And I'm not talking about electricity; I'm just talking about CPU and memory at the moment.

Markus:

Yeah, but there are local vision models, as they're called. For example, Google's Gemma models, which are frankly impressive.

Markus:

I'm still baffled that you can download a 50-gigabyte file, and not only can you talk to it in text, you can also paste images in, and it can actually reason about them, or describe them, or talk about them, or whatever. I think that's crazy. I bought a sort of big MacBook laptop just to be able to run some of these models locally, because I think it's so fun to play with as well.

Markus:

Definitely check that out. And also all the Chinese labs, like DeepSeek. I think it was DeepSeek-OCR that came out recently, which is supposedly very good at what it does. I haven't tried that one yet.

Dominic:

Yeah, maybe that's my problem. I have way too many old CPUs and computers here. I would need to invest.

Markus:

You can try them out online, through APIs, and then

Dominic:

I could try that.

Markus:

Yeah. Yeah.

Dominic:

I could try that. So, have you dabbled a little bit? We have talked a little about integration, and I think we will return to that. But I'm curious to hear about your day-to-day programming tasks at the moment.

Dominic:

Are you using any kind of LLM for any task? And where are you with that?

Markus:

You mean for programming assistance, or for building it into applications?

Dominic:

Yeah, programming assistance; let's start with that. Are you using, I don't know, the new editors with LLMs built in, and things like that? Are you using a CLI?

Dominic:

Are you using anything at all to help you, generally?

Markus:

Yes. I'm a very heavy Claude Code user. Actually, I don't write as much code as I used to anymore, because it's written for me. My role has very much shifted to one of, I don't know, a project manager or product owner kind of thing. Obviously, I still read a lot of code now.

Markus:

I code review everything, of course, because I still want it to be up to my professional standards. But it's very much the case that I don't actually write that much code anymore; it does it for me. The interesting thing I actually use it for, though, is not the code generation part. It's everything else that goes before you write any code.

Markus:

Feature brainstorming, for example. I have what's called a skill in Claude Code, basically just a description: you can do this, help me do that. So I have a skill called brainstorm, which is exactly that. It helps me brainstorm a new feature and anything I haven't thought of.

Markus:

And I like that it's sort of an inversion of control: I tell it, help me brainstorm this, and then it just starts asking me a lot of questions. I've asked it to do it as multiple choice: ask me clarifying questions, let me select from multiple choices, but I can always write more text if I need to, of course. Then we sort of go through a design together, where it also asks me things I may not have thought of. And in the end, you get a summary of that, which you can use as a basis for implementation.

Markus:

And even if that were all, even if that's where you stopped using it, I think it's super useful, especially for someone like me who's doing a lot of solo development. So having this sort of, I don't know, partner... it's rubber-duck debugging on steroids, I think.

Dominic:

Yeah, I hear you on that. To be frank, this is mostly how I have been using LLMs myself, too.

Markus:

Yeah. And it works great.

Dominic:

It works great, I admit it. As a solo software engineer myself, this is really very nice.

Dominic:

What are skills, exactly? Are they just some context files that you set up? And how are you triggering them from the Claude Code CLI, for instance?

Markus:

Skills are basically markdown files with some description of the skill you want to be able to use. It's a solution to a problem with MCPs, actually: MCPs took up a lot of what's called the context window, basically the short-term memory of the model. If you loaded a couple of MCPs, say the one from GitHub and something to run stuff in the browser, then suddenly half your context window was filled up with that, because MCP is very verbose in that regard. So the people at Anthropic came up with this skill system, which is incredibly simple but also very effective, and I think really, really fun.

Markus:

You just have a skills folder somewhere in your settings, and in that folder are subfolders, each one named after a skill, and in each of those you have this SKILL.md file. At the beginning, it has a one-line description: what does this skill do? And those one-liners are all in the context all the time.

Markus:

So every time you fire up Claude Code, you get those skills in the context, and then it knows: okay, I have these skills, and I know when to invoke them, either implicitly, when it thinks it's a good time to do so, or explicitly, when the user asks. But the important thing is that the rest of the skill, the whole meat of it, the full skill description and any associated scripts and anything like that, is not always in the context window, but loaded on demand. And that's all there is to it, actually. I like that it's just markdown files on my file system, so I can create those skills anytime I want.
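As a concrete, hypothetical example of the layout described above, a brainstorm skill might live at a path like `~/.claude/skills/brainstorm/SKILL.md`, with the short description in the frontmatter being the part that stays in context all the time and the body loaded on demand:

```markdown
---
name: brainstorm
description: Help me brainstorm a new feature by asking clarifying multiple-choice questions before any code is written.
---

# Brainstorm

When this skill is invoked:

1. Ask clarifying questions one at a time, multiple choice where possible.
2. Surface edge cases and alternatives I may not have considered.
3. Finish with a short written summary we can use as a basis for implementation.
```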

Markus:

So now I have skills to, like I said, help me brainstorm. I have skills for using Git in the way that I like, that fits my workflow. And I have, of course, a Go skill, which I've refined over several months, which is just: this is how I like to write Go. This is my style.

Markus:

This is how I like to test. This is my code formatting. This is how I structure applications. All of that just goes into it. So every time I ask it, okay, help me write something for this feature in Go, it invokes that Go skill, reads everything, and then proceeds from there, basically.

Markus:

And that just works incredibly well, because essentially the agent becomes composable: you can use it for many tasks, not necessarily just programming-related ones, but anything. I also dabble in electronic music, so I'm trying to figure out, okay, how can I use it for that as well? Can I make it my, I don't know, music manager or something like that? I think it's very fun to just play around with what this enables that I wasn't able to do previously.

Dominic:

Yeah, that's pretty interesting. That sounds great. So for now, only Anthropic has done this kind of

Markus:

Yeah, but you can sort of hack it into the other ones, and they take a lot of features from each other because it's all evolving so quickly. So I'm sure it'll come to the others in some form or another soon. Because it is just very powerful, I think, in its simplicity: everyone can write a skill, if you can write text, or transcribe, or whatever. And then you can build these skills for yourself.

Markus:

It's just super useful.

Dominic:

Nice. So maybe I will tell you what I experienced, and what most people who don't use LLMs often hit as a wall, and maybe you will be able to enlighten me on something.

Markus:

Definitely.

Dominic:

I use OpenCode myself, which is similar to Claude Code, basically; it's just that you can plug in other models. It's basically a CLI that you can use to have it write some code. I don't really like to have anything in my editor myself.

Dominic:

Yeah.

Dominic:

Pretty good. So, we have all seen the demo aspect: generate me XYZ, quickly and beautifully, blah blah blah, a small project. For me, the problem arose at some point after a small project had been created: it started to break things, it started to do a lot of stuff I didn't want and didn't even understand. My instinct was to close the session and restart, but that wasn't it. It wasn't just a problem of the context being way too full. And this is pretty recent.

Dominic:

I think the last time I tried that was, like, in the summer or so. Maybe that's already crazy long in this world. Yeah. I would imagine. Maybe that might be something.

Dominic:

But to me, it felt like way more work than if I had written all the lines of code myself; it took longer to do it that way. But the state I was in at some point was due to, you know, probably me, probably a little bit the LLM. I'm not pointing fingers here. I'm just saying that a lot of people seem to have this problem: at some point, it might break a lot of things at once.

Dominic:

And now, of course, you can go back if you have, you know, git history and whatnot. But I don't know. This experience is very traumatizing, in a sense.

Markus:

Yeah. What did it break? Like, the code or anything?

Dominic:

Yeah. Oh, yeah. Totally. And to be frank, it was probably a lack of review from myself.

Dominic:

At some point, I opened the Go file and I was like, oh, wow. What are we having here?

Markus:

What is

Dominic:

I suppose that the skills you just described would have helped, if I had been able to really precisely tell the LLM that I want this and I don't want that, things like that. I didn't know about skills until I saw your post on Bluesky the other day. So, Mhmm, that gives you an example. But Yeah.

Dominic:

Maybe that's it. Maybe we are not there yet, and I don't know if you have a lot of back and forth in your day to day. Are you often having to return to a previous commit because the last thing the LLM did for you was not only completely broken, but not respecting the directive at all?

Markus:

Yeah. First of all, that still happens to me as well. So it's definitely not a perfect solution for anything. It's very much that you have to get an intuition for what it's good at and what it's not good at, how to frame your problems, and, I don't know, when you should go back in your conversation history and nudge it in another direction, stuff like that.

Markus:

That is one part of it. So getting an intuition for that and steering it, I think I do a lot. And the other part is that, yeah, as you say, my git history and my git log are sacred. Basically, what goes in there needs to be good enough. So I don't let it commit on its own, and whatever goes in there goes through a sort of pull request cycle on GitHub first.
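[Editor's note: the lint-and-test gate described here can be sketched as a small CI workflow. This is a hypothetical example of the pattern, not Markus's actual configuration; the workflow name and job layout are made up.]

```yaml
# Hypothetical PR gate: every pull request must pass vet and tests
# before a human review happens in the browser.
name: pr-checks
on: pull_request
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go vet ./...
      - run: go test ./...
```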

Markus:

So it goes through linting and testing, and actually, I use that to review my own code as well. Because, I don't know, it's a different mental space: okay, now I'm in the browser, I'm on GitHub, I'm reading a pull request, and I'm reading it the same way I would for any other developer, basically.

Markus:

Okay, this needs to be changed. No, this can't be done that way, stuff like that. So I just get a second look at it from a different angle, maybe. And I think that's a very important part of being a software professional, because in the end, I'm still on the hook for whatever I output, whether it's generated by an LLM or I wrote it manually.

Markus:

It's something I want to be responsible for, something where I want to be able to say, yeah, I did that with tools, but I made it, and I'm on the hook for the quality of it. So I think that's as important, or even more important, than it ever was.

Dominic:

Yeah. That's a good point. I mean, code review might have been the problem; I was probably letting it do way too many things without supervision. So would you say that you are way more productive these days? Because if I understand correctly, it seems like you are two, maybe three people now, in a sense.

Dominic:

And now you are doing code review yourself. Are you sometimes running tasks in parallel for your

Markus:

I'm, yeah, I'm experimenting with that, but I'm also very aware of this whole multitasking thing; I don't think the brain can really do that. Yeah. So Yeah. So I want to be aware that I don't context switch too much.

Markus:

So it really depends on what it is. Right now, I'm doing one central thing at a time, and I might have some, let's call them, supporting tasks done in parallel, but I'm generally not doing many things in parallel, because I can't keep it in my head. So in the end, I become the bottleneck for producing stuff. And I think, I don't know, I think that's fine for now, because I still want to be the one in charge; they are not good enough to run the show themselves.

Markus:

Although all the marketers want us to believe that. I don't know if that's true yet.

Dominic:

Yeah. That is something I have a lot of difficulty with, you know, all the automated agents to, I don't know, build a company from scratch with no employees in there. I don't understand, to be frank, why it's so interesting for some people. But, again, what task can oh, yeah. Go ahead.

Markus:

Yeah. But what it does enable me to do is, well, things I couldn't do previously. Like, if I don't have enough knowledge to do something, let's say Yeah. Something fancy in the front end. I'm more of a back end person, but I want some feature in the front end. I don't know where to begin with that.

Markus:

Then I would instruct Claude: okay, please write me some JavaScript to do that particular feature. And I've done that previously. And that was just okay. I wouldn't have been able to do this even if I'd spent half a day trying to figure it out, because there are so many new concepts I'd have to learn.

Markus:

And now suddenly, I have a working implementation in ten minutes that I can try out. And because it's a UI thing, okay, I don't care as much about the architecture of this particular feature as long as it seems to work. Of course, I scan through the code just to see if there are any glaring issues, but otherwise, okay, it works, I'll keep it. That's sort of a low stakes thing, but still something that's very valuable to me that I wouldn't have been able to do before. And yeah.

Markus:

Or stuff like debugging, or chores like upgrading library dependencies. I tend to offload that, because it's just: okay, a new version of this is out, but I have some merge conflicts and some tests are failing, please go fix that. And it'll just go in a loop until everything works, because it can run the tests itself. It can run the linter itself.

Markus:

It can upgrade things itself and and all that. And that is a real time saver, I think. So I can focus on the interesting work instead.

Dominic:

Yeah. That's interesting. The fixing of merge conflicts is pretty much music to my ears, to be frank.

Markus:

Yeah. Exactly. And, of course, you have to check that it actually does it correctly. Oh, yeah.

Markus:

But I've also had a lot of human coworkers through the years who have accidentally deleted some code in a merge commit. So it's not like it's

Dominic:

That's the easiest way to fix the merge, man. Yeah. Delete

Markus:

all. Just delete all the stuff. Yeah. No changes. Excellent.

Markus:

Yeah. So stuff like that, that is a real time saver for me.

Dominic:

Yeah. That

Markus:

it does that and takes care of the chores and enables me to do things I couldn't do before. And then maybe it speeds up some of my work some of the time, but other times it also slows me down. So I'm still figuring that out.

Dominic:

Yeah. I think it's a balance, for sure. You have to see what works for you and whatnot. You know?

Dominic:

It's not that it should be prevented. My only fear is that at some point, we will lose some of the problem solving aspect, because now you are explaining the problem. It's true that you are explaining the problem. We are becoming problem explainers, if you will, instead of maybe continuing to be problem solvers, in a sense.

Dominic:

That is a little bit concerning for me.

Markus:

Yes. I think I agree, and I've had the same thoughts. But I'm also thinking that maybe this is just another layer in the abstraction. Mhmm. Because I basically don't know how a compiler works.

Markus:

I don't know a lot of the stuff that goes on beneath the stack I'm usually working with. So I know my way around distributed systems. I know how applications generally work. I know how the web works. But anything below that, like, how does I don't know.

Markus:

How does the source code get translated to something the machine can run? I have a rough idea. Yeah. I can go figure it out if I want to. But day to day, I don't really know.

Markus:

So there are certain classes of problems I just won't be able to debug effectively. And I think of this kind of the same way: I'm worried I lose some of these skills, but I also gain something, in that I can do many more things than I could previously, because my breadth is bigger now. Yeah. And I can go deep if I want to, with the aid of LLMs, of course. And most of the time, I think that's a win, at least for now.

Markus:

But I guess we'll see how that pans out.

Dominic:

Yeah. Totally. Totally. No. No.

Dominic:

It seems to be there to stay, for sure. I'm you know?

Markus:

Yeah. I don't think we're going back.

Dominic:

Yeah. I think we are way past that. It's just I would still prefer the hype to be dimmed down a little, to be frank.

Markus:

Oh, yeah.

Dominic:

When I'm hearing things like I don't know if it's Facebook or whatever company, you know. I think it was Anthropic at some point; the CEO said there won't be any software engineers in six months and whatnot. And Yeah. Please don't say that. This is

Markus:

yeah. I don't know. I've been through enough hype cycles now that, yeah, I kind of get sick of it, but also I can just ignore it.

Dominic:

But my problem at this moment is that the C-level and managers are hearing that, and the expectations are kind of crazy now.

Markus:

Yeah.

Dominic:

You know, people just say, oh, you know, you use LLMs, it will, you know. That part I don't like.

Markus:

Definitely. Totally agree on that. And it reminds me very much of when everyone wanted an app because the iPhone was out, and then Yeah. Yeah. What do you need an app for?

Markus:

I don't know. We need an app. It's like

Dominic:

Yeah. More of

Markus:

more like a business card or something like that. Right? And I think that's very much the same now, and I think we'll see that for some years to come. But there will be pushback. The craze will die down.

Markus:

The hype cycle will move to something else as well. And then what we'll be left with, I think, is something that's genuinely useful for a lot of people. And then, yeah, we'll see where it goes from there.

Dominic:

Oh, yeah. Totally. My kids I don't recall if I asked you that, I don't know if you have kids. But my kids are 18 and 20.

Dominic:

The older one does not really seem to want to use it, for environmental considerations and things like that. I mean, it's true that there are some environmental issues with having to generate that much electricity just to reply to someone who wants to marry the LLM. I don't know if you understand what I mean, but it's not just software engineers using that thing now. It's planet wide. And the amount of energy that needs to be consumed is crazy, and now they are talking about building nuclear plants just to supply the AI, just to run the models and whatnot.

Dominic:

I don't know. I have some concerns there as well.

Markus:

Yes. I think we should go into that with eyes wide open, because I've been through the same thinking. I've come to the conclusion that using LLMs, like text LLMs, for now, is fine. Using them to generate images is fine. I don't generally generate videos, for example, because, I don't know, one eight second video is one kilowatt hour of energy or something

Dominic:

like that.

Markus:

But I think what was it? I can't recall the numbers, so sorry if they're wrong. But something like: one prompt is something like 0.3 watt hours of energy, which is 10 times the amount of a Google search, or something like that. But compared to, I don't know, driving a car or heating a building, that is a very, very low amount.

Markus:

So at least my conclusion at the time was that prompting and using it for work is comparable to using the Internet generally, as a whole. And it's not that much different from, I don't know, watching Netflix or doing Google searches or using recommendation engines, which are also AI, for when shopping online or whatever. Like Yeah. Those recommender engines, that's a lot of energy as well, but people don't really talk about that. So yeah.

Markus:

I think it's it's

Dominic:

Yeah. You've got a good point, for sure. I'm just, you know, a little bit you know how it is. We do things as humans, and we don't really care about what it will really do twenty, fifty years from now, things like that. I mean, there's also the whole mental aspect; you know, there are a lot of people that, I don't know.

Dominic:

Just talk to the thing outside of work. I'm not talking about work, I'm talking about personal stuff. I think that some people will lose themselves in that thing. That is I don't know.

Dominic:

You know, Steve Wozniak kind of wanted to wait a little bit. I'm a little bit on his side. Can we, you know, can we wait? Can we just see what it will really do to some people?

Markus:

Yeah. Yeah. But you're not talking about energy then, are you? What

Dominic:

Well, no. Not now. Not anymore, you know; I'm talking about mental

Markus:

Ah, right. Okay.

Dominic:

You know, some people might lose themselves in that thing. And then Yeah. We have seen that already. You know, there are stories out there. So Yeah.

Dominic:

Yeah. It's a little bit concerning. I like the tool. I like the idea. So, you know, like you were saying, it's just a matter of finding what tasks it really solves for you the best.

Dominic:

And I think that once you start to find those, you can start enjoying having some productivity. Have you tried other things than Claude Code already?

Markus:

Yeah. I'm trying many different things. So Gemini has a terminal app as well, of course. I haven't used that as much. I've used Codex from OpenAI.

Markus:

Mhmm. Tried that as well. I think it's really good for code review. It does that really well, but it's just very slow. So the iteration speed goes down a lot.

Markus:

And I like my tools fast now. Basically yeah. I also like the Go toolchain. It's just so nice, everything compiles very fast, and the feedback cycle is very fast. And with Codex, that went out the window.

Markus:

Yeah. So even though I think it's a very good model, I couldn't live with that. So I stopped using it for that. And I haven't really cracked the whole doing-it-offline thing. So the yeah.

Markus:

You go do your thing, and then I'll wait and you'll send me a pull request, that kind of workflow. I haven't really gotten that working for me.

Dominic:

Yeah. Like the new I think GitHub Copilot is like that. You can now assign some pull requests to Copilot.

Markus:

Yeah. Exactly.

Dominic:

Yeah. Okay.

Markus:

Yeah. And Google has something called Jules, which is the same, and Anthropic has Claude Code on the web, which is also the same. And I think OpenAI has one too, also just called Codex, I think, which is the same. And I think for certain use cases, they are very usable: you can kind of fire off a task and forget about it, and at some point you'll get, I don't know, an email saying here are the results. And for certain use cases that's very good, but it doesn't fit into my development cycle, because I like to be very iterative and figure things out along the way and learn along the way.

Markus:

That's how I like to approach it. And that doesn't work with that.

Dominic:

Yeah. I was seeing the GitHub thing, assigning a pull request to an LLM for maybe fixing some bugs that are extremely clear, things like that. Because now you can describe the problem in GitHub issues, which is Yeah. You know, already what most teams are doing these days.

Markus:

So Mhmm.

Dominic:

Assigning the issues to someone, or to an LLM if it's going to do the right job, is kind of a good idea. I think they nailed something, because now you don't distract how most software engineering teams are already working. So that, you know Yeah. That might be something, especially for very simple tasks, you know, a minor bug or a very small feature, things

Markus:

like All those chores that we talked about previously. Yeah.

Dominic:

I think Like you were saying, the dependency updates and whatnot. GitHub already has something that checks your dependencies for a project. So now those could be done almost automatically with an LLM, in the background. So yeah. A lot of things seem to be improving the day to day software engineering tasks and whatnot.

Dominic:

This is clear, and nobody is really saying the contrary, but, yes, it's not completely there. And I wonder if we have reached some, I don't know, some plateau at the moment. Have you found that?

Markus:

Think that's the

Dominic:

I use Gemini as, you know, my day to day model. Mhmm. Yeah. I don't know. Something happened in the last month.

Markus:

I think it's maturing a lot, so we don't have these huge steps forward where, wow, now I can do this. Yeah. But stuff like the skill system, that's very much more of a developer experience kind of thing, and maturing the ecosystem so things generally work across tools, and, yeah, standardizing stuff and figuring out common libraries, for example, how to build them into your Go applications and all that. And I saw Charm release something the other day about, okay, this is how you can build agents our way by giving them tools, and this is our agentic framework, stuff like that. I think we're going to see a lot more of that, because, I mean, the speed has been absolutely reckless.

Dominic:

Yeah.

Markus:

It's just that it was so hard keeping up, and it was overwhelming, all these new concepts. At least I felt that way. And now it's sort of I don't think it's slowed down; I just understand a lot more of it. And there are still these steps forward, but they're not as big as before, and they're in other dimensions as well Yeah.

Markus:

Compared to just pure raw model performance.

Dominic:

So would you say that we are entering an era where now there will be more tools coming up, and it will be less about the model itself and really about how to properly plug that into, you know, normal day to day operations?

Markus:

Yeah. I think so. And it's been happening a lot this year already. People called it the year of the agents, and I think that's turned out pretty true, in that we have been thinking a lot about how these agents work, how they can work for us, and how we give them tools in the form of MCPs or Claude skills or whatever you want to call them. And then, yeah, now we're kind of figuring out patterns, just like we figured out software patterns a long time ago for

Dominic:

Yeah.

Markus:

Yeah. How do we do these things generally? And I think that's a very welcome thing to do, and also to be a part of, because you can have a lot of impact by figuring out some of this stuff and perhaps generalizing it and making it easy to use. So developers who don't read every update there is about LLMs Yeah.

Markus:

Even if that were possible. They can create applications that build on this stuff and create something of value that is useful to a lot of people, instead of just tacking it on and calling it AI. So I really welcome that. I don't know what to call it. Maturity.

Markus:

Yeah.

Dominic:

Yeah. I was speaking with a friend the other day who works at Walmart, basically, and they started to build a lot of internal agents, not for developers, not for software engineers, but for all sorts of roles in the company, to help them achieve those small things that usually would have required a technical person doing some queries in a database, for example.

Markus:

Yeah. Exactly.

Dominic:

Or for sales or whatever the department. That is because now those people can talk, like you were saying, with, you know, normal language. I don't know what to call that. The natural language.

Markus:

Natural language.

Dominic:

And now they get their data instead of creating an issue that will sit there for, like, months, because nobody has time to do that. So I think this, for an enterprise mindset, is probably extremely interesting.

Markus:

Yeah. Not just enterprise. I mean, small businesses everywhere. It's just very empowering for a lot of people to be able to do things they couldn't do previously. And we just need to make sure that it's safe to use and that it does the right thing most of the time. Yeah.

Markus:

Stuff like that. And, yeah, there's a lot of work to do there. And then the whole adapting to it as a society, that's a whole other Oh, yeah. That's gonna take a long time, I think.

Dominic:

Yeah. What can go wrong if you give your LLM access to your database? I mean

Markus:

Yeah. Absolutely nothing.

Dominic:

So, you know, how would you propose that someone starts, if they haven't started yet? Is Claude Code, the CLI, a good way to start, or would you recommend someone who hasn't started yet use, I don't know, Visual Studio Code with Copilot, or what is it called, Cursor, things like that, the other two?

Markus:

Yeah. I think I would, yeah, try out Claude Code, just because I think it has the best developer experience of the agent tools out there. And then dabble with that. Try writing some skills, like, do create a Go skill and then write:

Markus:

this is how I prefer to write code, and then see whether you actually get better results from that than just using pure raw Claude Code. By the way, I have my skills repo online. It's on GitHub. We can link to that. So you can just go read the skills and take inspiration from them.

Markus:

And I didn't make up the brainstorming skill myself, for example. I took that from a blogger called Jesse, who really experiments a lot with all

Dominic:

of Nice.

Markus:

So you can share these easily; they're just markdown files. And, yeah, take it from there. Install Claude Code. Try it out raw, then try it with some skills, and then see Nice. Where you can take it from there.

Markus:

I think that's how I I would do it.
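[Editor's note: skills are plain markdown files with a short YAML frontmatter. A minimal hypothetical Go skill might look like this; the name and contents are invented for illustration, not taken from Markus's repo.]

```markdown
---
name: go-style
description: How I prefer Go code to be written. Use when writing or reviewing Go.
---

# Go style

- Prefer the standard library; add dependencies only when clearly needed.
- Return errors, don't panic; wrap with fmt.Errorf("context: %w", err).
- Run `go vet ./...` and `go test ./...` before declaring a task done.
```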

Dominic:

Yeah. Okay. We will have all of that. I will personally have a look at your repo, to be frank. Like I was saying, I'm personally using OpenCode sometimes.

Dominic:

So for, you know? But are you not looking at the I know that there's this indicator of how much money it's currently costing. Are you not looking at that and saying, wow, man, this thing is increasing quickly?

Markus:

Oh, you mean for my Claude Code, you said?

Dominic:

Yeah. I think there are some indicators in those things, and each time you press enter and each time the LLM replies, this thing is increasing. It's crazy. Like, jeez, this question cost me like $1.

Markus:

Yeah. But actually, I turned it around a bit. So I have Claude Code; they have this Max subscription where basically you pay a fixed fee and then you have rate limits, but I rarely hit them. So you basically have an upfront cost. And the Max plan, I think, if that saves me something like, I don't know, half an hour every month, then that's worth it.

Markus:

And just based on yeah. Because I do consulting as well. Right? So it's not yeah. It's a lot of money, and especially if you're from a different country with a different Right.

Markus:

Purchasing power Yeah. Then that is a different thing. But yeah. I live in Denmark, in Europe, where the cost is kind of comparable to the US, and there it just makes sense for me to use it if it saves me some time at all. So that's not something I worry too much about anymore.

Dominic:

Yeah. Again, another sad truth, you know, that the purchasing power parity between countries

Markus:

Mhmm.

Dominic:

Kind of limits them, you know. I don't know. But that's why I was talking about local models: as soon as we start to have some local models in our homes, it might be a good way, maybe. But again, you need to buy the hardware, and now the electricity is on you.

Dominic:

But still, I think it was NVIDIA that was working on hardware or something like that, you know, a small box with a lot of GPUs in there where you could deploy some models. That, to me, is very interesting, because my wife is also using a lot of she's using what is it called? Google Notebook, something like that.

Markus:

Yeah. NotebookLM, maybe.

Dominic:

Yeah. NotebookLM. Something like that. Because she's at a university and whatnot, and this is crazy, I mean. But here's the point I was going for with that.

Dominic:

Well, if it were local, the amount of things that we could do, to be frank, without having to use the cloud one Mhmm. I don't know. It seems to me, maybe it's because we are in Quebec and electricity is kind of cheap here, because we have hydroelectricity, renewable and kind of greenish electricity. But I don't know.

Dominic:

To me, it seems interesting to have local models, for security as well.

Markus:

I get what you're saying. Yeah. And for some use cases it definitely makes sense, like for privacy use cases and compliance reasons and

Dominic:

Yeah.

Markus:

Stuff like that in professional settings. I'm totally with you. I think it's super useful to have these on device, and I definitely think we'll see that, because the models themselves are also getting more efficient. So you can have models that run on your phone that you can talk English to. Yeah.

Markus:

And I think they run purely on your phone, and I think that's super incredible. It also speaks to how incredibly good phones have become, of course. But even smaller models can be useful for certain things. That said, you don't get the operational efficiency from running these models at scale, and you don't have Google's TPUs running your models.

Dominic:

Oh,

Markus:

yeah. And I'm sure they are hard at work on pressing every last bit of juice out of these chips so they work optimally and you actually get the most usage out of your energy. I cannot imagine that they are not

Dominic:

Yeah. Sure.

Markus:

Working on that. Oh, yeah. And just that hardware is so powerful now. I mean, I'm sitting in front of an incredibly overpowered laptop for what I'm doing day to day, except for running LLMs on it locally. That's just yeah.

Markus:

I think that's amazing.

Dominic:

So Are you able to run the models on the GPU on your MacBook laptop?

Markus:

Yes. Because Apple has this unified architecture where they share the memory between the GPU and the CPU. So actually, they run really, really well on Macs

Dominic:

Oh.

Markus:

Which is why I got one. Interesting. So, basically, I have 128 gigs of RAM, and then I can run, for example, the open source or open weights. What's it called?

Markus:

The one that OpenAI came out with. I forget its name now. I think it was yeah. A 120B model, open weights. Was it chat?

Markus:

I can't okay. I forgot the name now. But that one. Yeah. So running that locally and seeing it work, or just the Gemma models from Google, like I mentioned earlier, looking at images locally for me.

Markus:

And I think that's something. That's Yeah.

Dominic:

That's a lot.

Markus:

It's crazy that that's possible. It's like having brains stored on your hard drive,

Dominic:

I think. Yeah. Sure. So you were not an Apple or Mac user before that?

Markus:

Oh, yeah. I was.

Dominic:

Okay. Okay.

Markus:

But now I'm all in, I guess.

Dominic:

128 gigs of RAM. This is crazy.

Markus:

I guess outside of the Mac world, that's just

Dominic:

yeah.

Markus:

Of course, you can have that. And inside the Apple world

Dominic:

is Yeah. Yeah. Totally.

Markus:

Yeah. That's what they charge for.

Dominic:

Yeah. Sure. Oh, alright. Markus, that was very interesting, to be frank. I think I will go try these skills and Claude Code myself.

Markus:

Yeah. You should. I'll send you some links.

Dominic:

Yeah. Totally. We will have all the links on the show notes as well. So, again Yes. Thank you.

Dominic:

Thank you so much. And maybe yeah. If we can do, you know, another episode in six months, I guess things will be extremely different.

Markus:

Yeah. Tell me about why you moved away from components again, back to HTML.

Dominic:

I'll I'll

Markus:

be interested in that.

Dominic:

Oh, yeah. Totally. Okay.

Markus:

Just one long feedback session.

Dominic:

Yeah. Yeah. Sure. That's cool. Thank you.

Markus:

Yeah. Thank you, sir. Bye.

Creators and Guests

Dominic St-Pierre
Host
Go system builder - entrepreneur. I've been writing software systems since 2001. I love SaaS, building since 2008.

Markus Wüstenberg
Guest
Go course creator @golangdk. Also nerd, musician, photographer, glitter enthusiast, and minimalist. Mastodon: https://t.co/V44ZTEaSpH