PODCAST

Changing your approach to testing with Alan Richardson


In the world of software development, testing is a necessary evil.

Testers serve as the bridge between product developers and users, troubleshooting bugs and issues to ensure the quality of an application. Beyond ensuring software quality and reliability, testers can promote customer loyalty, save the costs of fault-ridden software launches, and build a company's confidence in its own products.

In this episode, we are joined by The Evil Tester as we take a deep dive into testing and talk about the challenges that organizations commonly face with it. We look at the practical skills needed to become good testers and build good testing teams, and also touch on topics such as Agile Testing and its relation to Agile Development, automation, and the various testing models that we see today.

Transcript

Kevin Montalbo 

Welcome to Episode 35 of the Coding Over Cocktails podcast. My name is Kevin Montalbo. With us always is Toro Cloud CEO and founder, David Brown. Hi, David!

David Brown

Episode 35. I didn't realise that. That's quite a lot. Kevin, how are you?

Kevin Montalbo 

Yeah, doing great, apart from this, which we've been talking about off-camera. But anyway, our guest for today is known as the "Evil Tester", working as an independent Agile coach and software development consultant, helping companies improve their Agile development and coding processes, their use of automation, and exploratory technical testing. He writes books around his expertise, creates online training courses, videos, and podcasts, and has over a hundred repositories on GitHub to keep his development skills up-to-date. Ladies and gentlemen, joining us today for a round of cocktails is Alan Richardson. Hi, Alan! Welcome!

Alan Richardson

Well, thanks very much and that's a great intro. I want to hire that person. He sounds good.

Kevin Montalbo

Oh, thank you. Thank you. All right. So let's talk about your nickname first, "Evil Tester." So, that's interesting. How did you come up with that persona?

Alan Richardson

There are things that work or don't work, and we should be open to exploring those.

So, "Evil Tester." I mean, it isn't strictly me. That's the little devil icon that appears on all the products and he was born in the olden days when we did very structured projects and it was very annoying. So, I used to draw little cartoons to get my frustrations out, and it's kind of like a Dilbert thing that no one has seen. But over time it just evolved into… I kind of had an attitude of testing and I wanted to approach that in more detail and free myself up ‘cause I noticed that when I was speaking to people, they would hear the word "evil" and then go, "You know, testing's not evil. This is bad," and they'd hang on to the word.

And I thought it was really important that we try and move away from an association with words, because it's not like on projects, there are things that are good or bad or evil. There are things that work or don't work, and we should be open to exploring those, to try and make a difference on the project.

David Brown

It's interesting. I thought it must have come from, like, testing being a "necessary evil", because it's not everyone's first choice to do the testing. It's like documentation, right? So, it tends to be pushed to the end of a project, and it's sort of done reluctantly, particularly by developers. It's not the interesting part of the project, typically. So, I thought it must've come from, "It's that evil necessity that has to be done at the end."

Alan Richardson

So, that was also part of it. Where your role is a "necessary evil", you go, "Okay, well, I can live up to this. I can be as evil as is necessary for this." But also, it's a case of, if everyone in the project is going down the same path, you need to have other people seeing things from the side, looking at alternatives, appreciating it in different ways. So, it was that alternative view. You've got the angel on one shoulder and the devil on the other, and you need both in order to keep on track.

David Brown

Yeah. Fair enough. In 2016, you wrote a book which was actually a compilation of lots of articles and columns you'd had in The Testing Planet. What I really liked about the articles from The Testing Planet is that they seem to take a really humorous tone to the questions. There was a lot of wit and sarcasm littered throughout the responses to each question, and it took what was a relatively bland subject and made it entertaining. Was that your intention all along, to spice up testing and make it more interesting?

Alan Richardson 

I've never viewed testing as a bland subject, because I'm in it. I'm interested in it. There's always so much stuff to research. And I can see that for other people, it might be. But the aim was to try and put out something which exemplified the attitude that I constantly try and develop in myself and train people in. So, when we're working on teams, we're trying to get people to be more assertive, to come out with ideas, to not feel that, because they're testing, they have to sit in the background. And I didn't feel that any testing book really exemplified that attitude and approach that we absolutely want from testers, particularly in newer environments. But also, I wanted to make it more humorous, more of a book that you could give to people who weren't necessarily in testing and they might still read it.

They might not necessarily learn how to test from it, but they will learn what to expect from testers and how testers approach things. Also, the question-and-answer style was designed so that people would realize they probably already have the answers; what they lack is the confidence in their own answers. So, part of the reason for giving those kinds of odd answers was to help people see the nuances in the question, where they were ignoring biases or the underlying presuppositions in the question. If they asked the question differently, they'd see that they already know the answer and have the confidence to follow it through.

So, you give them an answer that they will never accept. Therefore they're forced to weigh their own views against the answer and then go forward with something. If you give people bad advice, they have to try and come up with some good advice to follow. So, it was that provocation. And a lot of it was built from a study of psychotherapy over the years, where you're trying to provoke someone into taking responsibility for their situation. You don't necessarily give them answers. You help them explore their situation.

David Brown 

That’s clever. You know, the answers are provocative, you know, that's probably a better word for it. And I don't mean to belittle in terms of the subject matter being bland. I mean, our forte is APIs and application integration. They're pretty bland subjects as well in my view and, I would say, to the general populace as well. So, it's all relative, right? And as you say, each topic is interesting to those that are involved in it.

You've been asked lots of questions—whether it be through the column or through the conferences and talks that you do. Are there any of those questions where you thought, "Actually that's a really good question," and something that made you reflect yourself?

Alan Richardson

Yeah. So, that question you just asked is one of those questions that makes me reflect on what I have been asked, because that's such a hard question to ask and to answer and consider. There are things like, "How do you become the best tester?" And what stands out from that for me is the concept that there might be a "the best", that there's an absolute in there, when it's not about an absolute in the world. It's always relative to you: "How do you continue to improve?" and things like that. So, the questions are very often interesting because of the usual biases. The questions that concern me, and I think are really important, are the ones that we get asked all the time. So, "How do I get into testing? How do I get into automation?"

Because they are reflective of the world that we have, some people want to get into testing as a route into software development, because for some reason it's viewed as an easy way in or a beginner's approach or something else, when every part of software development has a beginner level that you can get in at, because they're all tremendously involved skill sets. And the focus that people have to learn how to automate more keeps coming up again and again and again, but that to me is a separate discipline. It's a very important discipline, but a separate discipline from testing. But the questions we get asked seem to suggest that we can [inaudible] testing and automating so much that people think, "Well, I can go into testing and that's easier because I don't have to automate," or "I can go into automating and that's easier because I don't have to test."

So, they stand out because they still demonstrate to me that we still don't really understand how to do software development well enough that testing is a natural fit into it, that people know when and how to test. So, it's still odd for me. That's partly why I came up with the Evil Tester, to have that attitude of responsibility so that people can take charge, own their role, whatever it is, and bring their full self into that role, because I think we don't encourage that.

David Brown 

Yeah. It's interesting as a discipline. You mentioned you get questions about, "How do I get into testing?" or "How do I become the best tester?" We're recruiting testers all the time ourselves, you know, as part of our software platform. And you mentioned that sometimes the discipline is seen as an easier path, or maybe, "If you're not good enough to be a programmer, then you become a tester," or something like that, when in actual fact, finding a really good tester is really hard. So in your opinion, what are the characteristics of a good tester?

Alan Richardson

For me, a good tester is one that wants to study software development.

So, one of the things that's hard about that question is that the temptation is to focus on the attributes that are not related to testing, right? Because one of the attributes that makes a good tester is that they're good at testing. So, they've studied testing. They understand what testing is and what it's for, but in order to really do that, you have to understand software development, to know where testing fits in. So, for me, a good tester is one that wants to study software development. And personally, there's so many stories behind "The Evil Tester", right? So I could keep giving you reasons, but one of the reasons is I started as a programmer and I've been interested in software development.

And I view myself as a software developer, and I aspire to be able to do all the different roles and aspects and processes within software development. Testing is where I'm most known. And I like testing because it feeds into all of those different areas and gives me the chance to do all of those things. So, the skill set of wanting to embrace software development in total is really important, because otherwise you tend to get narrowly focused on just a testing role. And that can be very hard, because it can have a tendency to isolate people, when what we are trying to do is spread across the entire project. So, a deep, deep understanding of feedback mechanisms, cybernetics, systems, and software development in total, the processes, the history of software development and all the techniques that are involved, and then moving beyond those.

So, this is hard for people who are beginners to listen to, right? Because what I'm expressing is the totality of what it takes to really be in software testing, because the techniques that we start with then move you out into math. So, we've got things like boundary value analysis, which then moves you into set theory. We have paths going through systems, which then moves you into graph theory, right? We have to try and work out how much data we need, which takes you into probability theory, right? There's math involved in this, which you don't have to go into in depth, but it's something else we should be interested in. There's a lot of technical information and study required in there that, when you go into it in depth, comes through. There's also that tenacity to see things through; we want people to develop an attitude of being able to communicate directly when people don't want to hear the information that you have to convey.
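To make that math connection concrete, here is a minimal sketch of boundary value analysis, the technique Alan mentions as a gateway into set theory. The age rule in it is hypothetical, invented purely for illustration and not taken from the episode; the point is only that the valid and invalid ranges form sets, and the tests concentrate on the edges of those sets.

```python
# Hypothetical rule, invented for illustration: valid ages are 18-65 inclusive.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitions (the set-theory view): below range, in range, above range.
# Boundary value analysis samples the edges of those partitions.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just inside the lower boundary
    64: True,   # just inside the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    actual = accepts_age(age)
    status = "OK  " if actual == expected else "FAIL"
    print(f"{status} age={age} expected={expected} actual={actual}")
```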

The skill set of wanting to embrace software development in total is really important because otherwise you tend to get narrowly focused on just a testing role.

It's not necessarily the right role for people who want to be liked all the time, right? There has to be the ability to challenge people, to challenge people with evidence, to challenge people with the right language for that person, such that they take notice. So there are a lot of, I guess people would call them social skills. But I didn't learn them as social skills. I learned them by studying psychotherapy and the language that therapists use in order to challenge people. And what you also get when I answer that question are my biases. Those are the things that I think are important because I had to develop them throughout my career. So, I'm ignoring all the things that I came in with as defaults, because I'm focusing on the things that I had to develop.

David Brown 

So let me just see if I understand it correctly. They're going to be an architect, a programmer, have an understanding of testing and skills related to automation and the like, and they have to be a psychoanalyst and a good communicator. That's a tough combination of skills for a tester.

Alan Richardson 

But you don't need that at the start. You can develop these skills over time.

David Brown 

That's the holy grail there. I think it's an unusual combination.

Alan Richardson

Also, you don't need all of those skills, right? Because the whole reason we create teams is that we expect the team to have all those skills. So, we bring someone in in a tester role because they have some of the skills that are missing from the team for some of those aspects. And hopefully, they are all unified together. If you don't have the team-building skills, hopefully someone else on your team does, to help pool all those skill sets together. That's also why we have Scrum masters and managers; hopefully, they have those skills. For me, it's about giving yourself maximum flexibility, because my job has required maximum flexibility: I've moved through different roles and different companies doing consultancy, so I've needed to develop that maximum flexibility. If you're working on one team, building one thing, focus on the skill sets that are important for that team, and focus on the skill sets that are in general missing from that team at the moment, in order to maximize the value that you can bring to it.

David Brown 

You're consulting with organizations all the time and going in there and evaluating their testing procedures and the like. What are the common problems that organizations are facing when they're setting up a testing procedure or building it into their Agile sprints and the like? What are the common issues that they're facing? Is it a personnel issue? Is it finding the right people? Is it the process? Where is it?

Alan Richardson

So, it's all of those things. And the process is essentially a personnel issue, because the personnel don't know how to build the process. So, it's generally always a people problem at the heart. But one of the issues, if you focus on testing, is that people don't really study testing anymore. When I started, because I came from a programming background, I had to study testing to figure out what it actually was, because I was joining a test consultancy and building test tools for them. But I didn't know what testing was. So, I had to learn testing. So, I studied it. Most people now don't seem to study testing. They don't seem to read testing books in terms of the testing theory. What they read are possibly books on programming, books on automating, introductory books on testing, which are still kind of rooted in, "Write some scripts and do big design specs," or they might read the Agile Testing books, which are very much about interpersonal skills and relationships.

So, one of the issues is people don't know how to test and they don't know how to adapt their testing to environments. And we very often have a lot of junior staff on projects, so the seniority, the experience that helps you adapt, isn't there. I think it's also about more than the focus on testing: how do we effectively build this software project or product, and how do we construct a software development approach such that it's most effective and software testing naturally fits into it? So, part of the reason we have issues is not just because we don't know how to test; it is because we don't know how to construct contextual software development processes.

David Brown 

So that, again, that last part, we don't know how to construct?

Alan Richardson 

Contextual software development processes. So, every environment is different. Every product we're trying to build is different. Every team has a different set of people on it with different skill sets and different attitudes. The software development process has to cover, not just, "Here are the fundamental building blocks of building software." It's "Here are the fundamental building blocks of building software within the constraints that we have for time and budget and our overall organizational strategy and the skill sets of this team and the tooling that we are using." All of this.

David Brown

I understand. You mentioned adapting your testing to the environment. Most organizations will be familiar with Agile Development: they have sprints and Scrum masters, they're listening to the market, iterating and releasing frequently, and all of this sort of stuff. Then there's this concept of Agile Testing. How does that fit in with the whole Agile Development methodology?

Alan Richardson

So, I'm not sure there is such a thing as Agile Testing, other than as a reminder that you are working on an Agile project, so do not attempt to bring in all the artifacts and processes that are associated with waterfall or structured development. However, most people now haven't worked on waterfall or structured development, so all they know is Agile. So for them, Agile testing is testing, and the issue is every Agile process is different, right? You talk about Scrum. People interpret that in different ways. Some teams don't have a lot of the control points that are required for an Agile process. Agile is hard, right? Because you've loosened up a lot of the restrictions. It's about evolving, it's about feedback. It's about looking at what's working and what's not. But some teams drop retrospectives. Some teams don't retrospect until two weeks or four weeks have passed, when they need to be doing it continually. Some teams don't pair.

So, they don't get that constant knowledge sharing. They don't pair across roles or disciplines. They don't pair someone who's in a testing role with someone in a programming role. They don't have a system in place. So, for me, Agile testing is trying to work out: what is the end goal of testing in this particular project? Is there a focus on acceptance criteria? Is there a focus on the issues? Is there a focus on gaps? Are we trying to help spread testing knowledge throughout the team? Or are we trying to make sure that we are bringing it to each story? Remembering that we're not just testing stories, we're testing stories that are interconnected, because we're building a product and a system we deliver, which is a thing on its own, right?

So, if we only focus on stories, then we miss the interconnected parts between them: the story over here now conflicts with this story over here. So, it's taking those kinds of views as well. So, it's hard for me to say what Agile testing is, because to me it's just testing. But it's remembering that an Agile approach to software development has risks that we also have to target, and testing is constantly looking at those risks.

David Brown 

Yes. That's interesting. I thought you were going to say that, as part of an Agile Development process, the Agile Testing needs to be incorporated within the sprint itself, and so you can't sign off a story as "done" until it's been tested. Is that just assumed to be part of Agile testing? Is that right?

Alan Richardson 

So, for me, that's just assumed, right? If you wanted an answer to "Give me a description of what Agile testing looks like on an Agile project," it's things like that. We have stories, we look at acceptance criteria, we try and figure out what the risks are in that acceptance criteria, what data I need to cover, what the process is, dah, dah, dah. But those are just derivation approaches for coverage of things. For me, the testing process itself is, "How am I going to test this? And how do I work in this environment?" And if we take the broader view of risk, then it's not just the risks that are in the story, because there's always a risk that an acceptance criterion might not be met. So, we check that the acceptance criteria have been met. We automate that to the extent that is required. We test around the edges of it to try and make sure nothing else slips through, but there's a bigger focus. And on some teams, the programmers are going to do a lot of the testing like that. And on some teams, the business analysts or user representatives on the team are going to do a lot of that. So, testing or testers sometimes have to pick and choose which parts they're going to do and fit in. So the concept of Agile Testing is quite broad.

David Brown

As a software company ourselves, we went through a process. Here's a bit of a confession: we used to do the testing after the sprint, but before the product release cycle. So, say we had a quarterly product release cycle, but we're on two-week sprints. We would just create a backlog of issues that needed to be tested before we did the product release. We've since changed, and we incorporated the testing as part of the sprint itself. But the challenge which you have to address is that you have to allocate resources differently, and you have to accommodate the testing as part of the sprint planning, so that the testing doesn't become a bottleneck in completing a sprint.

So, how do you recommend people accommodate testing as part of a sprint plan when they're coming from that kind of waterfall-type process in terms of release cycle? Like we were, doing all your testing as a completely separate process to the sprint plan itself and having to resource it accordingly.

Alan Richardson 

So, your description of your initial Agile process, and why that initial question of "What is Agile testing?" is quite hard, is because it was called an Agile process, but many people would have argued that it's not an Agile process. And testing is always about fitting in with whatever processes are there, looking at the risks and trying to make it happen. But when people do have that definition of "done" for sprints, it's not done until it's tested, unless we have a definition of done that says, "Yeah, it's kind of mostly done and we'll test later," or a promise to be done in the future if it's written in JavaScript.

So, we have to figure out what that means in our environment and what our appetite for risk on release is. Because we may choose to defer certain things, because we are happy to accept the risk that it doesn't work, because we can fix it faster. So, all of those things feed in as well, and I prefer to see testing done as close to the point that you're doing the development as you can, so that "done" incorporates that definition of, "We've checked whether the acceptance criteria are met. We've tried to automate it. We've explored around it. We've done exploratory testing. We've demoed it to the user. The user has done some testing on it and is happy with it." All the aspects that we think are in there.

David Brown 

You know, those automated tests, should they be completed within that sprint cycle as well?

Alan Richardson 

So, with automating there are a number of aspects to that. One of the aspects of automating, and the one that probably should be completed within the definition of "done", is the concept of asserting that the acceptance criteria have been met and continue to be met over time, right? That's the most basic level of automating, and that's probably what most people typically mean by automation or test automation: checking that the acceptance criteria have been met. But we also have test-driven development and unit tests. And some of those are kind of related to the acceptance criteria, but really they're related to the code and the architecture of the code that we're building. So, we expect that to be included, but it's more that we just expect it, and we very often don't see what that actually means. Then you've got the concept that we don't just automate acceptance criteria assertions; we also want to use automating to help us test further and explore more.
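As a rough illustration of that most basic level, here is a minimal sketch of an acceptance-criteria assertion that can keep running over time, for example in a build pipeline. The story and the discount rule are hypothetical, invented for this example rather than taken from the conversation.

```python
import unittest

# Hypothetical feature under test: orders of 100 or more get a 10% discount.
def apply_discount(order_total: float) -> float:
    return order_total * 0.9 if order_total >= 100 else order_total

class DiscountAcceptanceCriteria(unittest.TestCase):
    # Acceptance criterion: an order at the threshold receives the discount.
    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(apply_discount(100.0), 90.0)

    # Acceptance criterion: an order below the threshold is unchanged.
    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(apply_discount(99.99), 99.99)

if __name__ == "__main__":
    unittest.main()
```

Checks like these assert that the criteria continue to be met on every build; the exploratory and data-driven automating Alan describes next sits alongside them rather than replacing them.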

You've got the concept of just feeding in a lot of data to cover existing paths, which people very often don't like to do because it takes time, and people have this concept of, "Well, all our test automation should be done within the build process with fast feedback." Whereas it is entirely possible to have your acceptance criteria assertions in the build process, but still have, in parallel, a longer-running set of exploratory tests, where we're putting stuff through because we don't know what result we're going to get back, or randomly generating data and feeding it through because we don't know what result we're going to get. It's possible to do that in parallel. And very often, people don't, because they don't try and bring in that full extent of what testing might mean. Testing is, "How can we best test products and get as much information back as possible?"

One way to do that is to use the existing capabilities for automating that we built during the process. Because very often, what we're doing during that "done" state is building the capabilities to automate. And then very often we underuse them, because all we do is assert acceptance criteria rather than build on the capabilities that then allow us to really push this forward and throw in lots of data and have it running with multiple users in parallel, reusing the abstractions that we built up, because we tend to focus on testing the acceptance criteria rather than a broader testing view that can incorporate the automating.

David Brown

Well, automation is seen as highly scalable because you're building up a bank of repeatable tests that can be executed with little or no human intervention. Is there a limit to how much we should be automating in our testing?

Alan Richardson

Probably, and that will depend on the environment and the product. So, I think we can say things like, "Is there a minimum?" And the minimum is that we should automate the acceptance criteria, because we want to make sure that they continue to work longer term. In terms of the maximum, it's going to depend on how you do it. If you drive all the automated execution from BDD or Cucumber or something like that, which people do regardless of whether people want them to or not, then you have a set of abstractions that are hard to reuse. So, then there are limits, right? There are limits to how much you should do, because the maintenance of it is really hard. If you architect your automating in such a way that it's possible to put things together in different ways, then you're building capabilities that can be harnessed randomly.

So, as an example, a couple of years ago I experimented with bots for testing. So, rather than writing test scripts, I would write bots that were built on abstraction layers, and they would randomly choose actions and randomly choose data, which were implemented by the abstractions. I could just throw these bots at the systems I was testing, and they would explore the crude model that they had in multiple ways and give me feedback on it. So, then it's a case of, "Are there any limits to what should be done with that?" And the answer is, well, no. I can just leave that running forever, because it has assertions built in; it will report when it finds something odd, and it is robust.
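As a sketch of what such a bot might look like, here is a minimal Python version: it reuses a small abstraction of a hypothetical to-do application, randomly chooses actions and data, and only reports when its built-in assertion fails. The application, its actions, and the simple count-based oracle are invented for illustration; Alan's real bots were built on his own abstraction layers over the systems he was testing.

```python
import random

class TodoApp:
    """Stand-in abstraction layer for a hypothetical system under test."""
    def __init__(self):
        self.items = []
    def add(self, text):
        self.items.append(text)
    def remove_last(self):
        if self.items:
            self.items.pop()
    def clear(self):
        self.items = []

def run_bot(steps=1000, seed=42):
    rng = random.Random(seed)
    app = TodoApp()
    expected_count = 0                     # the bot's own crude model of the system

    def do_add():
        nonlocal expected_count
        app.add(f"item-{rng.randint(0, 9999)}")   # randomly chosen data
        expected_count += 1

    def do_remove():
        nonlocal expected_count
        app.remove_last()
        expected_count = max(0, expected_count - 1)

    def do_clear():
        nonlocal expected_count
        app.clear()
        expected_count = 0

    actions = [do_add, do_remove, do_clear]
    for step in range(steps):
        rng.choice(actions)()                     # randomly chosen action
        if len(app.items) != expected_count:      # built-in assertion: model vs system
            print(f"Anomaly at step {step}: expected {expected_count}, got {len(app.items)}")
            return
    print(f"Bot completed {steps} steps with no anomalies.")

run_bot()
```

Because the assertions and the crude model are baked into the bot, a run like this can in principle be left going for as long as you like.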

So, I don't have to maintain it. We've also got to distinguish between "What should we do?" and "What could we do?", because we have capabilities in terms of automating: can we automate this? Can we automate this in a way that is robust and doesn't fail? Because very often we make statements like, "Well, we should not automate this because it is hard to automate or because it is flaky." That's actually a capability problem, not a risk decision about whether we need the information on an ongoing basis. So, we just have to be careful that we ask the right questions, and that we know whether we're answering with a risk decision or a capability decision.

David Brown 

In a lot of your blogs and resources, you talk about models and building testing models: models like requirements, acceptance criteria, risk, flow, functionality. Can you tell us about these testing models?

Alan Richardson 

So, there's a huge amount of testing models and what modeling might mean to testers, right? Because modeling is simply our way of understanding the world, which in this case is the system. And depending on the model that we have, we will view the system differently, or we will limit the way that we approach it. As an example, if I built a graph model of a system that showed the flow of actions through that system, I am focusing very much on the structure of the system. I'm limiting myself to linear paths through the system rather than trying to figure out, well, how can I go from here across to this other point fast? Could I just change the URL, go directly to a part of the application? Is that valid? Is it not? My model is focusing my attention. So I think it's very important that we build multiple models to view the system from different angles.

One of the things that's hard about this is that again, people very often haven't studied older computer science books. So they're not aware of all the different types of models that people had in the past. They may not know what a data flow diagram is. They may not know the concept of a graph. They may not understand that a state transition model is different from just a model of flow through a system, right. And we have all these different aspects. So I think it's really important to try and study modeling like that, but then you also have informal modeling. So a lot of people would view a mind map as a model, which is some model of someone's understanding of a system. Then we have to understand, "Is that a model that was designed to help us understand, or is that a model that is designed to help us communicate?"

Because very often we conflate the two. So, we build a mind map to help us understand the system, but then we try and use it to communicate, and it doesn't match anyone else's model of the system, because we're using different language and different terms. When we use behavior-driven development and we use Cucumber, those Gherkin scripts are models. They're a high-level abstraction of the system. Some of them are procedural. Some of them are declarative. We can argue about which is best and which is most appropriate. They probably should be declarative, but it's quite possible that under some circumstances, a procedural model would be useful there. There's a ton of stuff to understand in there. And it's such a huge area to study because, as soon as we start looking at it in detail, the more we can formalize the models, the more we can actually use them automatically in our testing. I mentioned bots earlier on; those bots were built around the concept of state transition models.

So, I could just let them loose on the system. If it was an informal model of the system, it would be hard to build a coverage approach or a way of executing those models and building them up to do stuff. So we have to decide whether it's a formal model or an informal model, whether it's for our understanding, whether it's for communication, whether we're using it to drive execution, whether we're using it for coverage. Because a lot of the time, every model can be used for coverage: we can do stuff, and we can then look back at our model and ask, have we covered this aspect of the model? Whether it's a mind map, a state transition diagram, or a list of stuff, even a checklist can still be used as a coverage model. We also have to understand that there are limits to what we've written in that model, because there are ambiguities and modes of interpretation.
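To show how a formal model can both drive execution and give a coverage measure, here is a minimal sketch of a state transition model for a hypothetical checkout flow. The states and actions are invented for illustration; the idea is only that once the model is formal, a bot can walk it randomly and you can ask which transitions have and have not been exercised.

```python
import random

# Formal model: state -> {action: next_state}
model = {
    "cart":         {"checkout": "address", "empty_cart": "cart"},
    "address":      {"submit_address": "payment", "back": "cart"},
    "payment":      {"pay": "confirmation", "back": "address"},
    "confirmation": {"new_order": "cart"},
}

def random_walk(steps=50, seed=1):
    rng = random.Random(seed)
    state, covered = "cart", set()
    for _ in range(steps):
        action, next_state = rng.choice(sorted(model[state].items()))
        covered.add((state, action))   # record which transition we exercised
        state = next_state
    return covered

all_transitions = {(state, action) for state, actions in model.items() for action in actions}
covered = random_walk()
missed = all_transitions - covered
print(f"Covered {len(covered)} of {len(all_transitions)} transitions.")
print("Not yet exercised:", sorted(missed))
```

An informal model, such as a mind map, gives you no equivalent way to enumerate its "transitions", which is the distinction Alan draws between models you can execute against and models that are for understanding or communication.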

If it's a checklist, we can interpret each point in that checklist differently. So having achieved some coverage of it, it may not be enough coverage, because we could interpret it in different ways. So I think modeling is a massively rich area, and it's something that I continue to look at. And it's why I continue to study systems theory and cybernetics and all the different approaches, and Petri nets, and all the different mathematical models. But I have to focus on what is ultimately practical and usable. And also, we're limited by the tooling that we have, because you can't just build your own tools all the time to explore models, because then you'd have to formalize them.

David Brown 

And like most things, when you scratch the surface of a topic like this, you just realize how much is underneath. It's almost a discipline in itself when I really start to hear you talk about it. You're providing some amazing resources there for people both already in the field and those looking to get into the field. Can you share with our audience your social URLs and other channels on which people can follow you?

Alan Richardson 

Yeah. So the easiest one to look at is Eviltester.com. And then I think I've got the "Evil Tester" handle on most of the social media. So that's the easiest place to find it. And they're usually jumping off points to anything else that I'm doing.

David Brown

Alan Richardson, thank you so much for joining us today. It was really interesting speaking to you, and you just made me realize, as much as I thought I knew about testing, how little I actually knew, and I need to go back to the books myself. Thank you so much for joining us today.

Alan Richardson 

Thank you very much. We're always learning; that's what we do. But thanks for having me on, that was really good. Thank you very much.

