063: Common mistakes when testing with Jakub Jarosz

Dominic:

Hello there. You're listening to Go Podcast. I'm Dominic St-Pierre, I would say, quote unquote, a normal software engineer. I'm joined by Jakub this morning.

Dominic:

Jakub Jarosz, if I pronounce your name correctly. Jakub, you are here for the second time.

Jakub:

Yes. Glad to be here.

Dominic:

Thank you very much. So if you want to listen to Jakub's backstory, just jump to episode 50. It was pretty interesting. I must admit I glanced at that episode when I knew we were about to jump in, and your background is so interesting.

Jakub:

Yeah, it's quite diverse actually. I started with electronics and a little bit of mechanics about thirty years ago, and then continued working in technology, overlapping different areas: software, electronics, and electrical controls as well.

Dominic:

Yeah, that's pretty interesting. So what have you been doing these days? Has anything changed for you? I know we are only at episode 62, but that's because I took the whole summer off from the podcast. So even though in terms of episode numbers it might not seem like a long time, it's still been a long time since we talked.

Jakub:

Yeah, it was a couple of months ago, I think. So my main focus stays more or less the same, with some slight changes.

Jakub:

I do a lot of security things and system integration, and nowadays I'm going more and more into integrating electrical systems and doing a lot of home automation as well.

Dominic:

Nice. So are you using Go for that?

Jakub:

Partially. Partially Go. Unfortunately, Go is not as popular in this area as JavaScript. There are a lot of JavaScript and TypeScript libraries that different companies use to, let's say, integrate their small devices with the cloud.

Jakub:

But Go and Rust are growing in popularity.

Dominic:

Yeah, sure. Do you do a lot of embedded development? With a background in electrical engineering, if I'm not mistaken, you should be at home with boards and whatnot, those input and output things.

Jakub:

Yeah, more pure electronics. I'm diving deeper right now into the programming itself. Most of the time it's system integration, configuration, and writing code that glues different devices together.

Jakub:

So not pure firmware, let's say, for microcontrollers, although I've started doing that more and more nowadays. Mainly it's the code around gluing these different components together and integrating systems as well.

Dominic:

Pretty interesting. Okay. And what about this: I think John Arundel came three times to this podcast.

Dominic:

I don't know how to say this, but did he suggest at some point that you start writing a book, or did the idea come from you directly?

Jakub:

We've talked with John many, many times. I remember, years ago, I learned Puppet from his book. So that's actually how we first met, virtually, let's say, through his way of explaining things. Then we started collaborating. He is my current mentor; we've been cooperating for more than two years at this point. There were a couple of ideas about putting the knowledge you gain over the years into some form of educational material.

Jakub:

And that's how the idea of the book came about.

Dominic:

Nice.

Jakub:

Plus a lot of ideas came from taking part in different conferences and meetups, where I've been speaking about Python and Go, especially recently.

Dominic:

And why testing exactly? Is there a reason for choosing that topic? I know that you do a lot of testing-related stuff, but there are so many topics to choose from. So why tests?

Jakub:

Mhmm. Mainly because, coming from an electronics background, whenever you touch, let's say, multimeters or some other devices to verify electrical current or circuits, you are basically doing testing. So it was quite natural to move from testing that involves purely physical devices to concentrating on the testing part when I started writing software many years ago. It was a slow move from the mechanical parts, using tests on more touchable matter like circuits and boards, to writing tests and seeing tests as part of the documentation for, let's say, the libraries we write for other users, or the end software for customers. So that was quite natural.

Jakub:

And then I also worked as a test automation engineer for a couple of years. The main area was actually writing software that tests software, and some hardware systems as well.

Dominic:

So when you say software that tests: are we talking about something more complex than an end-to-end test? Something completely external to the system that tests the system, basically?

Jakub:

Yes, yes. So for example, you need to create a certain state in the database. Back in the day, when we didn't have smartphones and the majority of people still used Nokia devices with the Symbian system, I worked at a company here in Ireland where we were developing back-end systems for telecom operators that allowed certain automation with your devices and the couple of people you have on your list on the phone. The entire UI was a couple of lines on the handset, but the majority of the functionality was happening on the server side.

Jakub:

So in order to test this, and do some integration tests on premise with the telecom companies, we needed to write a lot of software to actually provision our software, integrate it with test databases, and create a lot of test data that would allow testing the system. Then run these different tests and emulate the devices. We were using Wireshark for sniffing the protocols, and writing software that emulated certain signals sent from the handset at that time, in order to create a certain state in the databases at a somewhat higher level.

Jakub:

And then all these pieces of software were responsible for verification. Some people call this glue software, right? Software that emulated real humans, and partially the systems, and at the end verified, at a higher level, whether a certain state was present after some operations. For example, you pick your favorite three callers, right?

Jakub:

And the calling rate at that time was, let's say, half price to these three callers, and you could change them. This entire business logic was basically coded in the back end. But to test different scenarios, we needed to write software that tests the software.

Dominic:

Yeah. I'm trying to imagine that, and I'd like you to be honest and tell me: was that fun, or difficult? Let me rephrase that a little bit.

Dominic:

Whenever I did some more involved tests, be it integration or end-to-end testing, at some point I kind of hated my tests. I think we talked a little bit about that in our last episode, but that's always my feeling. And now what you're describing, having a completely separate set of software on the side that's required to follow what's going on with the main system...

Dominic:

Were there two teams, one maintaining the main system and the other maintaining the software to test it?

Jakub:

Yeah, it depends on the company, because I worked for a couple of startups, and in some companies they were completely different teams, but that doesn't mean we didn't collaborate. There were separate automation engineers or QA engineers; different companies or different teams called them different things.

Jakub:

Both teams were writing software. We were closely collaborating with the developers responsible for writing the core product, and then there was our team, responsible for writing software to test the deployed software, or a couple of components driven by that software. So it depends on the company and on the type of software. It was not a strict division between a QA team, an automation team, and the software team, like in many companies.

Jakub:

It was close collaboration, especially in order to understand what kinds of scenarios are covered at which level of tests. This is very important, and many people get it wrong: duplication of tests, especially when writing certain business tests using, for example, Selenium years ago, later on WebDriver, to simulate some user action on the browser, or on the handset when we're talking about mobile devices, and then testing business logic through the web UI versus using exposed APIs, assuming we have exposed APIs. So this area of collaboration between teams is very important.

Jakub:

First of all, to agree about which scenarios are tested and executed at which level. Some scenarios, for example, fit better as pure unit tests, when we need to simulate some part of the system, while other scenarios might only cover some user interface. In many cases, not all APIs were exposed in a way that let us write software that interacts with the system through, for example, a REST API. In many cases, it involved creating a scenario that simulated the user clicking or doing some action in the web UI in the browser.

Dominic:

And was your team always trying to catch up with new changes, or were you the ones who needed to implement them first, because the core team would use your testing software to test the new feature?

Jakub:

That was the real problem for us, a never-ending problem. I wouldn't say there's one right way to approach this, right? I see the more successful teams basically doing different levels of tests.

Jakub:

They communicate very well, so there is no artificial boundary between developers and testers, no artificial division like in some companies, at least in the past, where it was quite clearly marked: these are the developers, they throw the code over to the testers, the testers are responsible for marking whether the software is ready for deployment or not, and then the code is thrown over to, say, the operations team. It depends on the project, but coming back to your question, the best cooperation, from what I remember, was when we planned the delivery of certain features together. Planning at the very beginning who is doing which tests and what is required, and then doing it together.

Jakub:

Sometimes, for testing and driving the user interface, we first needed to code some logic or certain components to abstract some actions. Then it was time for the core developers to deliver the part of the product that we could hook the higher-level test suite into, and see the real action happening, a real interaction between the systems. Sometimes we wrote some back end to simulate that the system was already responding with certain data, so we could write some business logic on the front-end part or integrate with some API calls, and then we would hook this up to the real system, let's say, one week later. The most important part always was, and still is, to communicate between the teams, and to be able to allocate and synchronize the work together.

Dominic:

Yeah. Communication?

Jakub:

Communication, exactly. Exactly. That's the...

Dominic:

...most important. Always the solution. Always the problem. Yes. Yeah. Exactly.

Dominic:

So what about your book? You have found common mistakes that gophers make when testing their code?

Jakub:

Yes. Basically, over the last four years I've contributed to a couple of open source products, all written in Go. I also help many students on the Go track of the Exercism platform. I'm putting all my observations and learnings into the book: about 50 of the most common mistakes I've observed over these almost four years, and how to avoid them.

Jakub:

So I give an example of a certain mistake, how it affects, let's say, the end users, and how it could be avoided, or how we could implement a certain fix.

Dominic:

Nice. I will shamelessly, or shamefully, I don't even know which is the right word... I'm a little bit surprised that I found something myself lately that I never really used. When I have a helper function in my test that receives the *testing.T, I never used to call t.Helper(), which prevents that function from being printed in the console as the faulty one.

Dominic:

You know, when your test is failing and you're calling a sub-function, if you don't have this, then what you'll see is that a line in your helper function caused the test to fail. But that's not what I wanted to see. And I never really came across t.Helper, frankly, until very recently.

Dominic:

Is that part of the 50 mistakes you have seen?

Jakub:

Yes, actually. Early this morning I was finishing the third chapter, the third part out of the few chapters I'm writing about helpers, and especially the different functions that are supposed to help us with running tests. Yes, this technique is quite interesting and very helpful, especially when we need to create test preconditions for the main test of the functionality, to compare the results we get with what we want. So instead of getting the full stack of errors, we just fail in the helper.

Jakub:

The example I'm writing right now is about replacing helpers that are supposed to be helpers, but in some projects are literally normal Go functions that panic inside. So in case a particular condition is not met, the entire test run basically explodes and we see a lovely stack trace, right? They panic. And this can scare new developers especially. I always remember John's first advice in his books: don't panic.

Jakub:

If you see a panic, don't panic. So basically, functions that are supposed to bring up certain conditions, like setting up test servers, creating some environment variables, or any other preconditions the test requires, should accept the *testing.T. Then, in case we are not able to create these preconditions for whatever reason, we call t.Fatal. And then, first of all, we don't need to catch the error value.

Jakub:

We are not returning an error value from the helper, so we don't need to pollute our test code's business logic with catching errors related to setting up preconditions. That's the big win: keep the tests and their logic clean and easy to read, which automatically makes the test code easier to understand. And it keeps the creation of preconditions nicely separate from the business logic. Not panicking, and not returning errors that would need to be handled further anyway.

Dominic:

Yeah.

Jakub:

So it nicely abstracts this way of creating preconditions.

Dominic:

Yeah. I'm creating a course at the moment as well. I've started, and the way I put it for t.Helper is that I showed an example of creating just a simple comparison helper. Let's say we're receiving a and b, which are typed as any, and we're just doing an if.

Dominic:

You know, if a is not equal to b. It's not doing a deep equal or anything; it's just to show t.Helper. When t.Helper is not there, the console will show the line inside that helper, which I don't want to see. I want to see the line in my test that checks whether a is equal to b, for example.

Dominic:

So for me it was kind of crazy, doing Go for that long and never coming across it. You always see t.Parallel, you always see t.Skip and whatnot, but t.Helper was not something I had seen much, to be frank.
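
A minimal sketch of that comparison helper (assertEqual and add are made-up names for illustration):

```go
package main

import "testing"

// add is a stand-in function under test.
func add(a, b int) int { return a + b }

// assertEqual is a minimal comparison helper like the one Dominic describes:
// just an if over two any values (no deep equal, purely for illustration).
// The t.Helper() call makes the testing package skip this function when
// reporting the failure location, so the line printed on failure is the
// call site in the test, not the t.Errorf line below.
func assertEqual(t *testing.T, got, want any) {
	t.Helper()
	if got != want {
		t.Errorf("got %v, want %v", got, want)
	}
}

func TestAdd(t *testing.T) {
	assertEqual(t, add(1, 2), 3) // on failure, this line is the one reported
}
```

Without the t.Helper() call, a failure would point at the t.Errorf line inside assertEqual instead of the failing check in TestAdd.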

Jakub:

There are a lot of gems still in the standard library. Even though Go is quite small, right? But a lot of really good, I would say in quotes,

Jakub:

hidden functionality that can help in many, many areas, especially writing tests, like we're talking about right now.

Dominic:

Yeah, absolutely. So another thing I have with tests, and this is mainly because I'm working in small companies most of the time, is that often the tests will just get started.

Dominic:

I will start to write some tests at some point, and then, oh no, time gets in the way. I cannot continue writing tests anymore, or I'm not taking the time to write very decent tests. And at some point you're stuck with a test suite that you don't really trust anymore.

Dominic:

That is an extremely bad feeling. This is when I know I failed. When you cannot trust your test suite, the work is not really helpful anymore, because there's not much point in running your tests if the result isn't meaningful. And the amount of time you'd need to get back to a decent place is kind of scary, and a lot of work.

Dominic:

So

Jakub:

Yeah.

Dominic:

Do you have thoughts about this? Have you seen that?

Jakub:

Oh, yes. Yes, I've experienced it many times. And as you said, it's not a very pleasant feeling when you can't trust this lovely green color that's supposed to show you everything is okay, press the button, deploy. When the people on the team are not confident that green really means we can go ahead with deployment,

Jakub:

it's quite an unfortunate situation. Yes, I came across this many years ago, the first time when I was mainly writing Python. Up to the point that, when I started digging into the existing code base after I joined the company, many tests had an assertion of true equals true. And the reason to make those tests green was basically to bump up the test coverage.

Jakub:

Yes, yes. There are different techniques, right? This one, I would say, is almost criminal.

Dominic:

Sometimes you're better off not digging too deep into some code bases.

Jakub:

But you know, I like this kind of coding archaeology, as I call it. And in Go we're fortunate, because we can always go deeper and deeper and see how certain stuff is implemented in the standard library. That's a real beauty: we can dig way down and figure out how stuff actually works.

Jakub:

Another example of something you can't trust too much is convoluted setups, right? Different setups that you can't trust, or setups that depend on the environment you're running in. Or sometimes the test fails, sometimes it passes. That's another situation that brings this feeling that you are not really confident. And as a result, you can't, let's say, deploy on Friday.

Jakub:

Right? Though it probably shouldn't be a problem, right?

Jakub:

Yeah.

Dominic:

You should never deploy on Friday. Even if the tests pass.

Jakub:

Yes. So we're aiming for that ideal situation where on Friday, before four, you press enter, and the software is deployed after all the tests pass. But reality is reality.

Dominic:

Yeah. And another thing I'm curious to see whether it's inside your book: let's be frank, it's pretty easy to unit test.

Dominic:

To make a unit test, you know, when the function accepts some parameters and returns something, there's no side effect, no I/O or anything. As soon as there's more complexity, then either you mock those things, which I don't like, and in Go that's not something we see a lot.

Dominic:

It's preferred, I think, to, let's say, if you want to use a database, you just use a database. But at some point, yes, it makes things a little bit complicated. Maybe it's a little bit what you just said about the setup, but it's more than the setup. It's also the test itself.

Dominic:

Making integration tests and end-to-end tests is a beast in itself, and it needs to be way more thought out than testing a simple function that receives something, does some calculation, and returns a scalar value, for example. So do you have any guidance? How do you organize your project to ensure that in two months, when you look at your tests, you won't want to rewrite them all?

Jakub:

Yeah. This question always reminds me of the question that John, John Arundel, always asks: look at the function and first figure out what can go wrong.

Jakub:

And the second question: what are we really testing here? That's a really brilliant question that puts you on the path of thinking about what our code, let's say the function, is actually doing, and then concentrating on testing only that part. The golden example is, say, generating some SQL request. When our function is responsible for generating the SQL query, we need to make sure that this query, the string, looks as intended.

Jakub:

Right? We don't know what will happen on the database side. Checking not only the values we create, in this example some SQL query, but also what we get back from the database, is already a somewhat higher-level test.
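
As a sketch of that golden example, assuming a hypothetical query builder selectByID: since the function's only job is to produce a SQL string, the unit test checks that the string looks as intended, with no database involved:

```go
package main

import (
	"fmt"
	"testing"
)

// selectByID is a hypothetical query builder; its single responsibility
// is generating the SQL string, so that string is what the unit test checks.
func selectByID(table, column string) string {
	return fmt.Sprintf("SELECT * FROM %s WHERE %s = $1", table, column)
}

func TestSelectByIDGeneratesExpectedQuery(t *testing.T) {
	got := selectByID("users", "id")
	want := "SELECT * FROM users WHERE id = $1"
	if got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}
```

Whether the database actually accepts that query is a separate, higher-level test, which is exactly the distinction Jakub draws.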

Jakub:

And as you say, the mocking I see in some projects can be a really slippery slope, because I remember problems in the past where, okay, we didn't change the mocks or some stubs, then the core functionality changed, and we were happily running the tests. Everything was green, up until the point when the software was deployed to the servers.

Dominic:

Mhmm.

Jakub:

And suddenly it turned out someone forgot to update the mocks to match the changed functionality shipping with the current product. So there's a big problem with keeping the mocked values, in however many places they live, in sync with the current state of the core system. But, as you mentioned, we could use a normal database, right? Right now many people in the Go community use and know the project that I believe was recently acquired, I think by Docker.

Jakub:

I'm not sure. Testcontainers.

Dominic:

Mhmm.

Jakub:

So we don't need to separately spin up Docker Compose and set up the system locally on our laptop, or somewhere in the continuous integration pipeline; I'm writing about a couple of examples of this in the book as well. We can import the testcontainers Go package and leverage a real database, wrapped by Go code inside a Docker container, populate it with some data, and then have normal interaction. Testcontainers integrates nicely with the Go libraries, and we can write a test helper responsible for setting up the preconditions for the test and then quickly destroying the database afterwards. So this is quite a good solution as well. It integrates nicely with the existing test suite,

Jakub:

and it really works very well.

Dominic:

Yeah, yeah.

Dominic:

Totally, that's fine. I'm more talking about the complexity inside the test itself.

Jakub:

Okay.

Dominic:

Let's take a web application, any kind of web application. At some point there will be a request that comes in, and it's pretty difficult to test, because sometimes your web handler might be performing two or three things inside of it. So yes, you can test those two or three things outside the context of the web, but one thing I find extremely nicely done

Dominic:

in Go is the httptest package. So we can easily test our fully integrated server, but at some point my tests just get extremely verbose. And I'm used to Go; I mean, people have been saying Go is verbose for ages now, but...

Jakub:

Yeah.

Dominic:

But my tests are always where it's worst. I don't know why. I don't know exactly how to write a decent test function that I'll look at again in a couple of months and still think, oh yeah,

Dominic:

this is properly written.

Jakub:

Yeah. So one approach I took quite recently was to offload a lot of the interaction. I was thinking: okay, what is the final expected state of the system that I need to reach after, let's say, a couple of operations, be they some REST API calls or other operations? And I was abstracting those two or three operations into a separate test helper.

Jakub:

The role of the test helper was to bring this as close as possible to the place where I'm actually running the real test: the one that validates that, after two or three different calls or operations, some database, for example, holds certain values. So I have the preconditions set up inside test helpers with meaningful names, and then in the main test I have the helper call plus a small piece of business logic responsible for testing the final result. Because I've seen, in a couple of projects I work on, tests where you literally scroll through three or four hundred lines of code of these separate steps.

Jakub:

And then at the very end you have the real business test, the one you're actually interested in. Four hundred lines of setting up business operations, then let's say 20 lines of your main business logic. What's interesting is that such a test is quite hard to read and understand, especially for new people on the team, or when you haven't touched it and come back after two or three months. You need to recall what is really happening, especially if there's no additional information. And what's also interesting: in those 400 lines of code responsible for setting up the preconditions, people call not t.Fatal but t.Error.

Jakub:

That means that even if we got an error from the first or some middle step, we carry on trying to set up the preconditions. The errors basically compound. And at the very end your test fails, but not because the business logic is wrong: it fails because we didn't properly set up some precondition in the middle of the actions.

Jakub:

That's the huge difference between calling t.Error or t.Errorf and t.Fatal. And I've noticed in many projects that t.Error is used in the part of the test responsible for bringing up the preconditions for the real test.

Dominic:

Yeah, interesting. Sure.

Dominic:

So can you share maybe one or two popular mistakes you've seen? We've talked about one so far. What can we expect? What's the easiest one to fix, other than the t.Error versus t.Fatal one, for example?

Jakub:

That would be one of the most important ones. Then there's what we talked about: using helpers, helpers that also call t.Fatal inside. So we offload setting up preconditions to helper functions, but not only setting up.

Jakub:

The helpers can also be responsible for tearing down what was set up. For example, a running server could be stopped not by a separate call inside the test, but inside the helper function itself. Another interesting problem, one we could actually have started from, is the naming convention for the tests themselves. And I must admit, I love the project John wrote called gotestdox.

Jakub:

When you run your test suite through it, it shows the functionality described in your test names as real English sentences, meaning you read what behavior of the system or subsystem you are actually testing. So test naming, I would say, is another big issue in many cases. I'm not even talking about naming tests like test one, two, three, etcetera, where you see, let's say, failing test number four. But what does that mean?

Jakub:

Right? Then we need to start digging into the code, go back, see what test number four is doing, and try to figure out what code is called with what values. But when we have a meaningful name that describes the functionality, that describes the behavior, we immediately see what kind of business scenario we're talking about and what we are testing. And importantly, we don't need to employ additional test frameworks, right?

Jakub:

In Go we basically use as few external libraries as possible, because the testing package gives us almost everything we need, apart maybe from Google's go-cmp package, which lets us nicely compare structs. So: naming conventions, and naming tests with really meaningful English sentences that describe the functionality and the behavior.

Dominic:

So what would a good name look like? Let's say we have a test that needs to test an add function. Let's take a calculator, very simple. So what would be a good test name?

Dominic:

Like, would you say: test that adding one and two gives three? Is that what you are...

Jakub:

That would be the bare minimum, right? Rather, for example: that Add correctly executes addition on valid values. Something like this.

Dominic:

Okay, so you're not specifying exactly what you're doing, but more... okay. I understand what you're saying now.

Dominic:

Mhmm. Mhmm. Interesting.

Jakub:

What this particular function is doing. Right? Or when we are creating some values. Right? Or we have some valid and invalid values.

Jakub:

So when we have, let's say, some validator function. Right?

Dominic:

Mhmm.

Jakub:

We could name that validator test, for example, validates some particular values correctly. That kind of description, read as a plain English sentence, gives us a clue what this test is, what kind of behavior we are validating.

Dominic:

I suppose that John's tool is looking at the function name and just splitting it at the capitalization and... Okay, yeah.

Jakub:

Yes, yes. And it's really, really nice because we are not bringing any additional library into our suite. It's just a CLI tool. You run your tests through it, and it nicely shows you your test names. And it really helps you think about how you could improve the naming conventions in your tests.

Dominic:

Right.

Jakub:

Especially when we have table tests. Oh yeah. And then we are joining, basically, just in case there is an error, right? The main part, the name of the main test, is connected later on with the name of some subtest that

Dominic:

we have. Oh, yeah. Totally.

Jakub:

So this is also an interesting part, that we can pay more attention to having a nice descriptive sentence just in case something happens and we get an error.

Dominic:

And what about the messages? I tend to always have something like, you know, wanted value, comma, got value. That's how most of the errors that I write look. So are you more descriptive in your messages? Not

Jakub:

really. So the most important thing, I would say, is to keep the naming convention that we have in Go. So basically, the word want for what we want and got for what we got from the function. Now, the big advantage of using these descriptive test names is that when a test fails, the name of the test already shows you the problem. What kind of... yes.

Jakub:

So then you don't need to start writing poems in your error messages. Yeah. So it would be enough to write, okay, want this, got this, right? Or got this, want this. Usually: want something, got something.

Jakub:

And then the error... I mean, the test name already describes the problem, what happened, and then you see immediately what you wanted and what you got. And this is also an interesting and really nice place for the cmp package, when we, let's say, check some struct for equality

Dominic:

Mhmm.

Jakub:

With the struct that we want. And then if something is not correct, we basically just call cmp.Diff.

Dominic:

Yeah.

Jakub:

And print out the diff between the structs. And the test name nicely and clearly describes the verified behavior.

Dominic:

Yeah. Oh, that's interesting. That is clearly something I haven't done well enough so far. Naming my tests is not really something that I... well, personally, I tend to not put much effort into that, and I think that might be one of the problems: when I look at my tests at some point, I don't like them.

Jakub:

Mhmm. That's cool. Another one, I would say a core one, is keeping things separate: keep table tests with the example values that should make the function fail separate from the tests with values that prove the function works correctly. And the big advantage is that keeping these two tests separate nicely documents the invalid values.

Jakub:

That's one point. Secondly, when you are dealing only with invalid values, the logic in your tests is much easier to understand and much smaller, because you don't need to write if conditions, or add a field to the struct that holds the test values just to distinguish between the valid and invalid cases.

Dominic:

Okay. Yeah. Okay. Let me... okay.

Dominic:

I want to unpack that a little bit to make sure I truly understand. I suppose that's something you have seen, because I'm also personally guilty of doing that. Personally, when I do some table tests, I tend to put all the values in one table. Mhmm. Mhmm.

Dominic:

So you're proposing to separate the valid and the invalid values. I like that.

Jakub:

I like that. Yes.

Dominic:

Yes. But I'm not sure I got your explanation about what you mean when you say your business logic gets less complicated for your invalid values. Can you rephrase that a little bit?

Jakub:

Yeah. So let's say we have a table test with, let's say, ten structs. For five of them, when we pass them to the function, we expect the function to return an error. The other five are valid inputs, and the function should work properly. When we put everything together in one table, we need to be able to distinguish, for a certain struct, whether we expect an error or not.

Jakub:

So you automatically need to add an additional field that is then used in the test logic to distinguish whether an error means the test fails or passes. And then you automatically need to bring in the logic that compares this additional value indicating whether the input struct is valid or invalid. When you have a test with only invalid example values, you don't need to do this, because you know that this table test holds only invalid values. Then, when you loop through the test data inside the testing function, you know that on every iteration you must get an error, because you are testing invalid scenarios. If you test positive scenarios, you know that an error is not something you want.

Jakub:

Then you eliminate this additional logic, and on the other hand, especially for the invalid values, it is really good documentation. Especially when you need to add more invalid scenarios in the future, it's only a matter of adding, let's say, one struct, one element to the slice that keeps the invalid values. And then you just glance quickly and you see what kinds of invalid values make, or should make, the function return an error.

Dominic:

I like that one a lot.

Jakub:

Mat Ryer talks about so-called glanceability. Right? So this is one of the examples that I really, really like: you glance through the logic of the test and you don't need to think, if I have this error and this struct, should my function return an error? Right? And when it should not return an error, then I also need to check if what I got is what I want.

Dominic:

Yeah. And like you said earlier, if I put myself into the shoes of, you know, a junior developer or any newcomer to a project, this immediately makes it pretty clear what should work and what does not.

Jakub:

And Mhmm.

Dominic:

Yeah. That doesn't

Jakub:

It doesn't even need to be a junior developer. It's someone new joining the team, or even us working on some functionality that we didn't touch for the last couple of months. Mhmm.

Dominic:

Yeah. That's pretty interesting. So what's the state of your book? Is it soon to be released?

Jakub:

It's almost half done, so I will be releasing a pre-review version next week.

Dominic:

Nice. So how was the process for you? Because that's something I've talked a lot about with John. Did you find the process hard, long, difficult? What was your

Jakub:

The easiest part is gathering the different examples, right? Getting the references to different places. But the hardest part is to follow the logic of certain examples and bring that logic and those examples to the reader with the least number of hidden assumptions. Because when we are talking about pieces of code, regardless of the book, we need to put ourselves into the reader's shoes and use clear examples, plus explain what's going on without any pre-assumptions, because it's quite easy to make this mistake, and I'm guilty of this myself.

Jakub:

That's why I'm rereading and rewriting a lot of the examples, because when I work on certain code, I know what's going on, right? When someone is reading a piece of code for the first time and I want to illustrate, let's say, some mistake in the testing, I basically need to imagine that I don't know the rest of the code, without any pre-assumptions.

Dominic:

Mhmm.

Jakub:

So this is the hardest part. Yeah. But on the other hand, the most enjoyable. It reminds me all the time of when I used to teach IT subjects in school back in Poland, where I needed to prepare a lot of the educational material on my own.

Dominic:

Nice. Yeah. And do you talk at all about TDD in your book? What is your thought?

Jakub:

You see, for this topic of so-called TDD, I like using the name test-first approach. And I always have in mind my work on electronics, or when I do some, let's say, renovation back at home. For me the test is basically the plan that I have, or some specification for, let's say, an electrical circuit. So I need to know more or less what I want to achieve, what I want to build, and then constantly measure whether I'm doing it right. So for me it's quite natural, right, from the very beginning, that when I'm working on something, I need to have an idea of how it should look and how it should work, and that I write this measurement first. Which is the equivalent of saying, for example, I need to put three or four bricks in a line when I'm building, let's say, a shed in my backyard.

Jakub:

Mhmm. And then what do I do? I have, for example, one line that tells me I'm building this wall straight. On the other hand, I have the measuring tape in my hand, and I measure whether this is, let's say, the required fifty centimeters. So every part of testing is literally this measuring: checking whether I'm building correctly, whether I'm taking the correct step.

Jakub:

Right? And then I make the step, which is the equivalent of writing the code. Then I write the test, because I know that I need to, let's say, get a certain value from some function. Then I write this measurement, which is the equivalent of checking, let's say, the size of the wall or, for example, the conductivity of some electrical cables. And then, step by step, I build the entire piece of software, or, for example, some electrical circuit.

Dominic:

Yeah. Nice.

Jakub:

So: a test-first approach. So you see, the mechanical and electrical engineering side influenced a lot of my thinking about writing and testing software itself.

Dominic:

Yeah. Totally. It makes sense. So do you have anything, a closing thing that you want to say? We will have the links to your website and books and whatnot in the show notes.

Dominic:

But, you know, if you have a message or something.

Jakub:

I'm really glad that people enjoy writing Go. It's such a beautiful, simple language. It allows us to express our ideas very, very clearly. And that's it, basically. Let's write more tests.

Dominic:

Oh, yeah. Totally. I think it will be a fantastic book, to be frank.

Jakub:

Let's not be afraid to deploy on Friday.

Dominic:

Yeah. Yeah. That would be a nice change for the future.

Dominic:

Let's turn that around and not deploy on the other four days of the week. Yeah. Only deploy on Friday. Alright. Thank you.

Dominic:

Thank you so much.

Jakub:

Thank you. Thank you very much, Dominic.

Dominic:

Alright.

Creators and Guests

Dominic St-Pierre
Host
Go system builder - entrepreneur. I've been writing software systems since 2001. I love SaaS, building since 2008.

Jakub Jarosz
Guest
Helping network and system engineers learn Go | #Go #Rust #DevNet #DevSecOps