In 2014, the industry finally came up with a definition for the term "Microservice Architecture" — which is "a particular way of designing software applications as suites of independently deployable software."
The people who came up with this definition back then thought that the style could become the future of enterprise software. Fast forward to today, and we quickly realize that they were right – microservices are now seemingly everywhere. An O’Reilly report in 2020 found that many organizations have already begun migrating from monolithic applications to microservices, and many more are looking to begin that transition.
In this episode of Cocktails, we talk to one of the two people who came up with the definition of microservices - we discuss its evolution throughout the years, how organizations can respond to the complexity of this architectural style, choosing between REST and gRPC, and the future of the industry with microservices.
- James Lewis talks about how the term "microservices" has evolved since its conception in 2014.
- How can organisations address the complexity of rearchitecting monolithic applications as microservices?
- Are microservices appropriate for every use case?
- On what basis should a developer decide between gRPC and REST?
- How will the ecosystem for continuous integration and deployment of microservices continue to evolve to make life simpler for DevOps?
Our guest for today is a Software Architect and Director at ThoughtWorks based in the UK. As a member of the ThoughtWorks Technical Advisory Board, the group that creates the Technology Radar, he contributes to industry adoption of open source and other tools, techniques, platforms, and languages. He’s an internationally recognised expert on software architecture and design and on its intersection with organisational design and lean product development. Most importantly, he also defined the Microservices architectural style back in 2014, which we’ll be talking about today. Ladies and gentlemen, joining us for a round of cocktails is James Lewis.
James, it’s an honor to have you on the podcast. How are things?
Really good. It's an honor to be invited, so thank you very much for the invite. Yeah. You find me in a very rainy London, or near London, sitting in my shed at the bottom of the garden, which was my first lockdown project during the pandemic. So I converted my office into a shed, so I'm in mission control there. And yeah, it's been, like for everyone, very, very strange. You know, a lot of my job - I wouldn't say it has gone, but a lot of my job involved travelling to conferences and public speaking, you know, really being an evangelist for weeks at a time over the last few years, and all that sort of disappeared, really. So it's really great to actually be invited onto things like podcasts, where we can talk about these things, because I think everyone misses it. Thanks. Thanks for asking.
And I like your party lights too in your shed.
All right. So, let's jump right in. So together with Martin Fowler, you were responsible for coming up with the concept of microservices. So, in the past seven years, since its inception, have you seen the use of the term and the concept evolve from what it initially was?
So yeah, I mean the short answer is yes - and I think actually that's not so bad, right? So, naturally things evolve, and I'll come to that in a second, but there are, I guess, two types of evolution in a sense. There's building on what we had in the past - you know, this idea that you have scientific revolutions and then you gradually build on those ideas until you have another revolution - and that's kind of fine. But then there's another type of evolution, which is, should we say, a kind of bad type of evolution.
Martin Fowler talks about this as semantic diffusion, where the term stops meaning what it originally meant and starts to mean something else. So, you could talk about Agile - for a long time you could say the phrase "Agile" has been semantically diffused, where it started to become synonymous with Scrum and sprints and commitments and this kind of stuff, which actually is not what Agile is about at all.
So I mean, with microservices, yeah, absolutely. I think things have changed, but I think it's been a good change. I think things have evolved in a good way. And the reason I say that is because, looking back at microservices and that definition, I think it was almost like a meta-paper in scientific research, in the sense that it was gathering up a whole lot of good things that people were doing at the time. You know, it's almost a bit like XP - you take all the practices and you turn them all up to eleven in extreme programming. That's the old joke about it. I think microservices is the same. If you look at the individual components, they were all there, you know, and had been built on for a number of years.
So, I guess you could pick out things like products over projects and organized around business capabilities - those sorts of principles or characteristics - and you can see their origin in things like domain-driven design. You know, you can see the origin in concerning ourselves with the business aspects of the problem we're trying to solve, as opposed to the layered technical aspects or whatever. Or if you look at things like smart endpoints and dumb pipes - the smarts in the endpoints and not in the pipes - that's an evolution of how we'd come to view RESTful or web integration as almost the holy grail of decoupling, the best type of integration over, say, enterprise service buses or smarts in your infrastructure, where things tend to end up with stuff smeared everywhere, not very cohesive.
And you end up with, you know, tight coupling all across the place. So I think it was an evolution of these ideas. You could say it's an evolution of Jim Webber's Guerrilla SOA, which - if you haven't seen it - is a great talk he did with Martin Fowler years ago now, at QCon. I think it was a keynote called "Does My Bus Look Big in This?", about his idea of Guerrilla SOA, which was the antithesis of the Big Bang approach: you know, "we'll go away and design your service-oriented architecture, and then come back three years later and it will all be done." It never worked, never happened.
And yeah, and so the Guerrilla SOA idea was to build out things in slices based on value. So, I think it's all been an evolution really, and it's all based on these giants that came before, going back forty years to Unix, in essence. So, I don't know if you've got any thoughts about that as well, but that's kind of how I see the evolution.
It's interesting, isn't it? I mean, we talked about that interpretation of Agile just a couple of podcasts ago, in terms of how the meaning of Agile has changed. The interpretation of what Agile means now is so broad that it's probably diffused from its original concept. So, you said microservices is being interpreted in a similar sort of way. How are people using the terminology in ways that weren't really envisaged by you?
The spirit of microservices architecture is to take good architectural practice and then essentially take it to the extreme with a set of constraints, these principles guiding you.
Yeah, I mean - just a sidebar on Agile - Agile for me, I can tell you... I've been working for ThoughtWorks now for 15 years, and I was doing XP before I joined ThoughtWorks, so I've been working on this for a long, long time. And all Agile means for me is three things: small batch sizes, fast feedback, and discipline. If you've got those three things, that's essentially Agile for me. But in terms of microservices, yeah, I think the original spirit of microservices wasn't so much "microservices have to look like X." It was more that the spirit of microservices architecture is to take good architectural practice and then essentially take it to the extreme, with a set of constraints, these principles, guiding you.
So, originally there were things like - it's very much like the 12-Factor App from Heroku. Those constraints and principles came out at practically the same time as we were doing the first set of microservices architectures in ThoughtWorks, and Netflix was doing its thing, and so on. And the spirit of microservices was to build and evolve well-architected systems based on these small components, where you took the idea of the single responsibility principle almost to extremes, right? So we talk about turtles all the way up and down: a method in an object-oriented language has a single purpose, a single responsibility; a class has a single responsibility; a namespace - the package - has a single responsibility; and then a library or an application has a single responsibility.
And then you scale those up until you're at the level of the business capability, which might be represented by a number of microservices interacting. But the key thing is that you have these small things that were well-designed using standard patterns, standard integration patterns. I think what's happened now, in some sense, is that it's crystallized more; it's more of a firm definition. People tend to think of microservices in a number of ways. They tend to think of them as noun services. You know, I'm going to have, I don't know, an order service or a product service or a user service, these kinds of things. And they're connected together in a DAG, you know, a Directed Acyclic Graph. And actually, if you squint a bit and look at what Netflix used to do, it was a little bit like that.
Not quite - it wasn't quite nouns - but they had this graph of stuff where one thing would call another, which would call another, which would call another. And that's fine, and you can do that, as long as you put in place the appropriate patterns to solve for availability and reliability. Because the problem with deep call chains is that, for the service at the top, the availability is limited by the product of all its downstream calls. I think this is called the multiplicative effect of downtime. If you've got one service that calls five other services, its availability is limited by the product of the availabilities of those services. So that's why you end up with all these circuit breakers and all this kind of reliability stuff built in.
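The multiplicative effect James describes is easy to see with a few lines of arithmetic - this is a minimal sketch with made-up availability figures, not anyone's real numbers:

```python
# Hypothetical figures: a top-level service depends on five downstream
# services, each independently up 99% of the time.
downstream = [0.99, 0.99, 0.99, 0.99, 0.99]

chain_availability = 1.0
for availability in downstream:
    chain_availability *= availability  # availabilities multiply down the chain

# Five nines-ish components still yield roughly 95.1% at the top,
# which is why circuit breakers and fallbacks get bolted on.
print(round(chain_availability, 4))
```

Under these assumptions the caller can never be more available than about 0.951, no matter how reliable it is itself.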
It's fine - the noun patterns are fine to a point. The problem with noun services is that they tend to mean change trickles through more than one of them. So when you want to make a change to a business process, you tend to have to make a change to the user service and the product service and the catalog service. Whereas if you have something like product management as a service - a capability, little capabilities with an API - you can limit the blast radius of change to the capability. There's a great blog post by Michael Nygard, you know, the author of the fantastic "Release It!" - look it up, I think it's about verb services. So you should be trying to think of a service in terms of doing stuff, as opposed to just returning, you know, CRUD operations; services should have behavior.
And so that's why we favor services with things like "management" in the title - order management, that kind of stuff - rather than "order". It should be everything to do with orders, not just calling and getting an order from a database.
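The noun-versus-verb distinction can be sketched in code - all the names here (`OrderManagement`, `place_order`, `cancel_order`) are illustrative, not from any real system:

```python
# A noun-style "order service" tends to expose bare CRUD:
#   get_order(id), save_order(order), delete_order(id)
# A verb-style "order management" capability owns the behaviour instead.
class OrderManagement:
    def __init__(self):
        self._orders = {}

    def place_order(self, order_id, items):
        # behaviour lives inside the capability: validation, state transitions
        if not items:
            raise ValueError("an order needs at least one item")
        self._orders[order_id] = {"items": items, "status": "placed"}
        return self._orders[order_id]

    def cancel_order(self, order_id):
        # callers ask for an outcome; they never touch the stored rows directly
        order = self._orders[order_id]
        order["status"] = "cancelled"
        return order

svc = OrderManagement()
svc.place_order("o-1", ["book"])
print(svc.cancel_order("o-1")["status"])  # cancelled
```

The point of the verb style is that a business-process change stays inside this one boundary instead of trickling through several CRUD services.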
So, I think things have changed. Things have crystallized, in a sense, as I say, and when people talk about microservices, they tend to mean that DAG type. Netflix has changed, though - I believe they sort of inverted the graph, if you like. So they went to a reactive pattern where they're streaming results rather than doing, you know, request-response from their services. This is where RxJava came from: the port of Rx from C# was their implementation of that.
But there are other patterns. I mean, this is almost onto Stefan Tilkov's thoughts around the different styles of microservices themselves. So he talks about Type 2A, if you like, which is Netflix or Amazon - a DAG of hundreds of connected services - or Type 2B, which is when you invert it, as I say, with the arrows the other way up on the diagram, and you get the Rx version of it. But there's also a type of microservices architecture - a category of architectures - which is almost purely functional, and based on messaging, or sort of event-driven architectures. And this was characterized early on by a guy called Fred George, who's also ex-ThoughtWorks.
It's no surprise that a lot of these people came from ThoughtWorks, actually. But he talks about this - he was building applications out of very, very simple functional microservices. Functional in the sense that they would take an input and produce an output, repeatable like a mathematical function in functional programming, and they would just be sitting listening to a bus. When they received a message, they would do some stuff and then produce another message in response. And it was this set of services that were connected. But there are other styles. I was going to mention Stefan Tilkov himself: he talks about self-contained systems, and he's got some great work on self-contained systems. It's a really nice idea where you have a single runtime which contains everything you need to do a particular slice through your application, rather than separating it out into individual services. I'm probably doing it a disservice, so go have a look at Stefan's work. And then there's what I call "Just use your head" - I normally swear a bit more than that - because the original intent of microservices, as I said at the start, was to build stuff in a well-architected way, deploying sensible patterns, but constraining yourself around things like "Don't scale by threads, scale by processes", which is one of the Heroku 12-Factor App constraints. So, has it changed? Yes, it's crystallized, but at the same time, you know, that's okay. That's fine. You can still just use your head.
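The Fred George style James describes - a tiny pure function sitting on a bus, consuming one message and emitting another - can be sketched like this; the message shapes and service name are invented for illustration, and a real system would use a broker rather than in-memory queues:

```python
import queue

def pricing_service(message):
    # a pure function: the same message in always produces the same message out,
    # with no shared state touched along the way
    total = sum(item["qty"] * item["unit_price"] for item in message["items"])
    return {"type": "order_priced", "order_id": message["order_id"], "total": total}

bus_in, bus_out = queue.Queue(), queue.Queue()  # stand-ins for bus topics
bus_in.put({"type": "order_placed", "order_id": "o-7",
            "items": [{"qty": 2, "unit_price": 5.0}]})

# the service just listens, reacts to each message, and publishes its response
while not bus_in.empty():
    bus_out.put(pricing_service(bus_in.get()))

priced = bus_out.get()
print(priced["type"], priced["total"])  # order_priced 10.0
```

Because each service only ever reads from and writes to the bus, services compose by topic rather than by direct calls, which is what makes this style so loosely coupled.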
So, when you say expanding from a noun-based service to a verb service, does that imply that the domain-limited scope of a service is now expanded? You're expanding the scope of the service? Like you said, it's no longer an order service; an order management service is doing more. Is that what you're saying?
It's super hard, right? Because [inaudible] management is a potential capability, right? So, you have capabilities that you need to offer to the enterprise. Order management sounds like one of those capabilities. Is there a single service? Well, maybe - and actually, from the outside, you probably shouldn't care. What you should see is an API, which is the representation that you see of that capability. Now, internally, the team that's building out that capability - whether they choose to decompose it into a set of services, many services, or to represent it as a single service with maybe multiple endpoints - I guess that's a choice for the team themselves, applying, as I said, appropriate design patterns and architectural patterns to make that decision.
I'll give you a good example of that. I did a talk a long time ago now - 2012? '11? Something like that - at QCon San Francisco. It was called "Java, the Unix Way", and it was the first time I started talking about these concepts publicly. We'd been talking internally, but I hadn't spoken publicly. And in that, we had this sort of user management capability I talked about that we were building, and from the outside it exposed a series of endpoints where you could poke it with user details, or you could request it to create a user for you - and it would or wouldn't, depending on whether you'd given it a well-formed user request - and it would allow you to query it for users and do various things like that: create them, delete them, change account details, all that sort of stuff.
But internally - even though it had these very well-defined APIs, and it was actually Atom that was the domain application protocol used - it was composed of a fronting service, which accepted a request and then stored it off to the side for safekeeping. So it validated it, stored it off to the side, and then said, "Right, your request is in process," and managed that on an internal queue. We then had competing consumers in separate processes that were reading messages off the queue as fast as they could and creating users in the back-end store, essentially. So internally it was one, two, three - three different types of service - while externally it looked like one thing, one API.
And we used the three services internally to solve for the cross-functional requirements we had, which were very spiky traffic: we'd have really high load and batch processing overnight, and then CRUD operations during the day. And I guess that's an example where, from the other side, there is a capability and you don't really care, but the team, the people inside it, have to solve for the requirements they have, and to do that they apply a standard design pattern - competing consumers, in the enterprise integration patterns sense. So I don't know if that helps clarify a bit what I mean about the boundary around the different types. Things change depending on where you're standing, right? It's like relativity, in a sense.
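The competing-consumers arrangement James describes can be sketched with threads and a shared queue - a simplified stand-in for his separate processes reading off a real message queue, with invented names throughout:

```python
import queue
import threading

requests = queue.Queue()       # the internal queue the fronting service feeds
created_users = []             # stand-in for the back-end user store
created_lock = threading.Lock()

def consumer():
    # each consumer competes for the next message on the shared queue
    while True:
        request = requests.get()
        if request is None:    # sentinel: no more work, shut down
            return
        with created_lock:
            created_users.append(request["name"])

# the fronting service would validate requests and enqueue them like this
for i in range(20):
    requests.put({"name": f"user-{i}"})
for _ in range(3):             # one shutdown sentinel per consumer
    requests.put(None)

workers = [threading.Thread(target=consumer) for _ in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(created_users))  # 20 - every request handled exactly once
```

The design choice is exactly the one in the anecdote: spiky load lands on the queue, and you absorb it by adding or removing consumers without the external API ever changing.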
And it all sounds so simple and obvious and logical when you're breaking it down. But going from a monolithic architecture to a microservice architecture, you end up maintaining potentially dozens or hundreds of applications, as opposed to one. And using stuff that might be comprised of a few front-facing APIs - or dozens of front-facing APIs with lots of back-end microservices behind those, doing very specific functions, with different architectural styles generally behind them, some event-driven or whatever - it starts to become complex. So how do you recommend organizations manage this complexity of re-architecting monolithic applications into a microservice-based application architecture?
I think the mistake people make is by trying to look at it too much in the round, in essence.
I guess the glib answer is: carefully. I think the mistake people make is trying to look at it too much in the round, in essence, right? Because depending on where you're standing, yes, things can look really, really complex. It's a bit like the night sky, right? You look at the night sky and you see all these stars, and it looks like there are lots and lots and lots of stars up there. And there are. But actually, if you start to think of those stars as constellations, there are far fewer constellations up there, right? And I think the service landscape in a large enterprise is similar to that. If you just look at it in the round - if you look at all the stars, all the services - you're like, "God, I've got 2,000 of these things."
Yeah, so that's really complex: these things are event-driven and these things are this and these things are that. However, if you consider them as groups of things instead - and this is the whole idea of grouping them together into business capabilities and then organizing your teams around that - the people inside those capabilities don't need to see the whole night sky. They just need to be concerned with their, you know, their solar system, if you like. I'm pushing this metaphor, which I've never used before, too far, but hey.
You’re really on a roll there.
Yeah, but that's really, I think, the trick. And as an architect in a large organization, or an enterprise architect, you're used to having this overview, and I think that's still necessary. But I think the complexity can be managed by pushing these things into these smaller constructs and having the teams worry about them themselves - worry about the right APIs and that kind of thing. I think there was the other question in there about how you migrate, or whether you should migrate, from a monolith. Now, the question I always ask is: why? Why do you want to migrate your monolith? And there are multiple valid reasons why you might want to do that. I can think of a few off the top of my head. Maybe it's old, right? Your license is running out, or the license is too expensive, and you need to replace it with something else, and therefore you're going to do what you can to migrate to microservices at the same time. Being a startup or scale-up might be another reason.
And, you know, Twitter is a great example of this: the Fail Whale starts happening more and more because you've become really successful, and you need to scale because reasons, you know? It might be availability - you're not available enough, and you've become a mission-critical resource, and you need to do something because you were built in a particular way that doesn't encourage scaling.
So there are valid reasons, valid business reasons, to do these things, but - and I think the implication, David, is that you shouldn't undertake this lightly, which is hopefully where you were prodding me to go - it shouldn't be taken lightly. There is a great book out there by Sam Newman, who's a great friend and former colleague, called "Monolith to Microservices", which is excellent, really excellent. We spent a lot of time working together, and I fully endorse everything that comes out of Sam's mouth. There's some stuff around board games I'm not so sure about, and he's got a weird taste in Lego.
He tends to go for the architecture sets. I'm not sure about that, but apart from that, go look up Sam's stuff. There's also some stuff we're working on - actually, I'm working with Martin at the moment - around how you find seams in legacy software, and the patterns you can then apply to gradually move away from legacy. But it's like everything else: you apply standard patterns. You look for things, generally around business seams, and depending on the underlying monolith - the infrastructure and what that looks like - you've then got various technical patterns you can apply. A great example being my friends at BBVA, who are awesome engineers: they've managed to migrate part of their global payment processing away from their mainframe, which is their core banking platform.
Now, this is the holy grail for many of the big banks at the moment - to do away with their mainframes - and they're on their journey and they've managed it. And one of the things they've been really great at is identifying the chunks and how to move those chunks off. And one of the nice patterns I've heard of, and we've used as well, is treating things like mainframes as an event-sourced or event-driven architecture. 'Cause if you squint, it's all about reading and writing batch files, and if you squint at the batch file, it's just a list of transactions, right? So if you can just pop batch files into the right place, produce them and consume them, you can actually isolate whole parts of the mainframe safely, often. So there are different techniques you can apply, but I think the main question is: why? Do you have a valid business case? Because "it's cool" is not a valid business case.
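The squint James describes - a batch file is just a list of transactions, so treat each line as an event - can be sketched in a few lines; the line format and field names here are invented for illustration:

```python
def batch_file_to_events(lines):
    # pretend each batch line is "txn_id,account,amount" and turn it into
    # one transaction event per line, as an event-driven consumer would see it
    events = []
    for line in lines:
        txn_id, account, amount = line.strip().split(",")
        events.append({"type": "transaction",
                       "id": txn_id,
                       "account": account,
                       "amount": float(amount)})
    return events

# a tiny stand-in for a nightly batch file
batch = ["T001,ACC-1,120.50", "T002,ACC-2,-35.00"]
events = batch_file_to_events(batch)
for event in events:
    print(event["id"], event["amount"])
```

Once the batch file is readable as a stream of events like this, a new service can consume (or produce) those events in place of the mainframe, which is what lets you carve off chunks safely.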
I mean, don't get me wrong. As a company, we do microservices architectures, you know - that's part of what we do - but we also enable integration of monoliths as well, or SaaS-based applications or whatever, and automating workflows. But it's interesting, because often starting with a monolith - if it's architected well, so that it can be broken down into microservices in the future - is perhaps easier and more maintainable for a small team. You mentioned some valid reasons why companies may migrate away from the monolith.
But I think you also have the issue where the codebase gets so big and so unmanageable, and the team has changed several times since the original codebase was written, that it gets to the point where you don't want to touch it, 'cause you're too afraid to touch it. You don't understand it, so you don't want to touch it, right? But if the monolith is actually well-architected in the first place, so that it can be broken down into services if and when it needs to be, then that's perhaps not a bad starting point for some businesses. It depends on the use case. And if you're looking at migrating a certain monolith, there's a whole multitude of reasons why you may want to do that. But you want to have a reason, like you say.
Absolutely, I couldn't agree more. I've got a story about this - I think it's funny, though it's quite tragic. It's one of those examples you gave: a monolith whose codebase had become just unmanageable. It was a large financial institution, who shall remain nameless, and they asked us what the best course forward was. Looking at the codebase, they'd been trying to change it, and it was becoming all but impossible. So we ran some static analysis, and I said, "Well, I've got some good news for you. Your codebase is cohesive. The problem is, it's all one and a half million lines of code that's cohesive."
We found a class - we were using static analysis tooling, looking at things like cyclomatic complexity and that kind of thing - and we found one class that had a method with 60 levels of nested if-else, which is quite entertaining. It reminds me of the old joke that there's no correct level of nesting for if-statements, because if you've got one, you can always add another. And if you've got 60, what else are you going to do? You've got no choice. You just add another.
No, I agree. I mean, there are those pathological cases where codebases get so large, so unmanageable, that the only thing you can do is start again, and sometimes that is the best economic approach. I think the hard bit, for that business case, is how you understand what the existing thing looks like. It's a bit like archaeology - people talk about software archaeology. You have to go digging through these things to really understand what the economic benefits of strangling versus starting again are. And those are quite hard things to understand, right?
Yeah. We've talked about APIs a few times now - we've made many mentions of them. REST is not the only option for integrating microservices. gRPC is in favor at the moment for implementing services, for performance and other reasons. In that debate of gRPC versus REST, on what basis should a developer choose one over the other?
We're almost in Vim-versus-Emacs territory here, right?
I'm in, like, old-man-shouts-at-cloud territory. I mean, it's quite hard for me to talk about this in general, 'cause, you know, it makes me upset, but it's quite hard to talk about it.
We’ve hit a sensitive point.
Yeah, I mean, I still think XML is a thing, right? I can't answer without talking a bit about my history, and actually the ancient history of integration, really. Because when REST became really popular, it was a reaction - a reaction to two things, you know? To the way we did enterprise integration internally, inside big organizations. You probably remember this, if you're old enough: we went from the sort of ORBs - CORBA, that sort of distributed-object-model type of interprocess, inter-application communication - and we then moved to the web services style of integration, using the WS-DeathStar specifications and stack. And incidentally, there is an ontology to describe all the specifications for the web service specifications - and any time you actually need a specification of the specifications for your specifications, you've just gone too far, right? It's too complex.
But you know, you had the web services stack, and originally it was SOAP RPC - SOAP remote procedure calling - where you basically invoke a method on another service on another server by passing it some XML. And it was really awful. The coupling it led to meant it was almost impossible to make any change to your service without requiring all the other things that were talking to it to change as well. And that was a real issue, certainly for limiting the rate of change in organizations - between teams in particular. Then we moved to document-oriented web services, which were kind of better, right? Because you're passing documents around, and with documents you can do things like change the order of fields in a document without necessarily breaking the other side. Whereas with RPC, if you change the order of parameters in the method call, it's going to break the other thing, right? So you have to be really, really careful about how you're making changes. With documents, you can add things without breaking stuff downstream. So you can evolve your services much more freely with document-oriented web services. But then the natural evolution came to REST, right?
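The coupling difference James is drawing - documents read by field name versus RPC bound to parameter position - can be shown in a few lines; the payment-handler names here are invented for illustration:

```python
def handle_payment_document(doc):
    # a document consumer reads fields by name, so field order
    # and newly added fields don't break it
    return doc["payer"], doc["amount"]

v1 = {"payer": "ada", "amount": 10.0}
# a later producer reorders the fields and adds a new one - same result
v2 = {"amount": 10.0, "currency": "GBP", "payer": "ada"}
assert handle_payment_document(v1) == handle_payment_document(v2)

def handle_payment_rpc(payer, amount):
    # positional RPC: the contract is baked into the parameter order,
    # so swapping the arguments silently produces a different result
    return payer, amount

print(handle_payment_rpc("ada", 10.0) == handle_payment_rpc(10.0, "ada"))  # False
```

This is why document-oriented services could evolve more freely: producers can grow and reshuffle payloads while old consumers keep working.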
So, for integration inside organizations, with REST you've not only got the attributes of document-oriented integration - you can add fields, you can rename by adding a new field with a new name, you can change the order of things - so you're not fragile in that sense, and you get less coupling to the implementation through the representation. But you also get decoupling of behavior as well. Because with hypermedia driving application state, you can include links, which allows you, as the service provider, to guide what your consumers are doing by providing links. So you get this additional level of decoupling, which is why a lot of people moved to REST. In ThoughtWorks, especially, it was a really big thing - hypermedia-driven applications were a really big thing - and then this idea of domain application protocols, using things like Atom or AtomPub, which became quite popular to constrain how services would interact, so you don't even have to think about it.
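A minimal sketch of what "hypermedia driving application state" means in practice - the resource shape and link names below are illustrative (loosely HAL-style), not from any particular API:

```python
# a hypothetical order representation returned by the server
order = {
    "id": "order-42",
    "status": "placed",
    "_links": {
        "self":   {"href": "/orders/order-42"},
        "cancel": {"href": "/orders/order-42/cancel"},
    },
}

def available_actions(resource):
    # the client discovers what it may do next from the links the server
    # chose to include, rather than hard-coding URLs and workflows
    return sorted(rel for rel in resource["_links"] if rel != "self")

print(available_actions(order))  # ['cancel']
```

The decoupling comes from the server's side of this: once the order ships, the server simply stops including the `cancel` link, and a well-behaved client adapts without redeployment.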
If you're doing that for millions of people all day, every day, maybe there's a use case for using a binary protocol that's a bit faster than just plain-text XML or JSON over the wire. I think there is a use case there, right?
If you're Google and you've got the problems that Google has, or if you're Facebook and you've got the problems that Facebook has, it's perfectly reasonable to do things like come up with new storage mechanisms like Cassandra. It's perfectly reasonable to come up with Bigtable. It's also perfectly reasonable to come up with new protocols, things like gRPC, or even, you know, that even newer version of HTTP that they're sort of building, which I can't remember the name of. It escapes me. But that's completely reasonable.
Now, say I'm sitting in my team in my bank, and I'm gonna integrate with another team across the way that's using my services to do, I don't know, investment banking. They're using my service in order to see the prices of particular stocks and shares. Do I need to use gRPC then? Well, probably not. I mean, the human eye can't take in more than, like, an update every second or couple of seconds, right? You don't want your stock ticks flickering every 10 milliseconds. So really, you know, it's probably okay to use something that's gonna give you some additional benefits over gRPC, things like visibility.
So there's transparency, but also this additional layer of decoupling that allows you to change stuff without necessarily breaking your clients. I mean, there are other cases where you probably wouldn't use REST, if you're doing low-latency stuff. If you're housing your server in the exchange itself, you would probably do something else, something like C++, and you'd probably be doing very, very low-level stuff over the wire. But, you know, in the majority of cases, I don't see that the benefits of gRPC outweigh some of its costs, which is that you lose a lot of the transparency that you have with HTTP. Saying that, of course, things have changed now and we're talking these days about things like the [inaudible] enterprise and, you know, about security being everywhere, essentially. So, you know, we don't just want to terminate TLS at the firewall when you come into an organization; we should be using TLS across every single one of our services, which means that you can't snoop on HTTP requests and just see the plain text. You'd need to build tools that allow you to do that sort of stuff, to get the same level of transparency.
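The visibility trade-off can be shown in a few lines of Python. The binary layout below is ad hoc, standing in for a protobuf-style encoding, not gRPC's actual wire format: the plain-text message can be read straight off the wire, the binary one is smaller but carries no field names.

```python
import json
import struct

msg = {"symbol": "ACME", "price": 42.0}

# Plain-text encoding: field names travel with the data.
as_json = json.dumps(msg).encode()

# Ad-hoc binary encoding (illustrative stand-in for protobuf/gRPC):
# 4-byte symbol followed by a big-endian double.
as_binary = struct.pack("!4sd", b"ACME", 42.0)

assert b"ACME" in as_json and b"price" in as_json  # snoopable, self-describing
assert b"price" not in as_binary                   # field names are gone
assert len(as_binary) < len(as_json)               # but it's more compact
```

That compactness is exactly the Google-scale win; the lost self-description is the transparency cost James is weighing for everyone else.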
But I still think you'd get a lot out of it. Saying that, again, I'm pretty sure I've lost this one. So JSON is maybe not the future of information exchange, I think.
We're seeing a lot of use of gRPC in, like, an East-West architecture, so that way you have a bunch of microservices within an app, you know, containerized applications - like you said, it might be services behind an existing front-facing API. And those services need to communicate very, very quickly because they have dependencies on each other in order to be able to return the result set to the front-facing API. But in that North-South kind of dependency, maybe a RESTful interface is more practical and easier to maintain and has advantages itself.
Yeah, David, I think that comes back to what I was saying about the different styles of microservice architecture. I mean, if, inside your capability, you've got a number of services interacting and that interaction needs to be super, super low latency, it seems like a reasonable design decision to go with something like gRPC. But do you really need it? Is 30 milliseconds okay? Because if it is, then you're probably fine to use REST even then. And if you've got incredibly chatty services, there might be a question to be asked about why you've got so many things requiring low latency between them. But yeah, as you say, if you have a reasonable reason to use it, then absolutely! Use the right tool for the job - that's actually what microservices is all about.
And one of the reasons that makes it exciting is you get to use the right tool for the job. You get to make different decisions depending on where you are. You get to make these different decisions about security depending on where you are. This is one of the reasons why, in government, several departments have adopted the style: because, you know, you don't have to secure every part of your system to the highest common denominator of security, if you like. You can have, you know, an inside-the-perimeter and outside-the-perimeter style of architecture, which is really, really useful from the perspective of change. So yeah, I totally agree, David. If you have that circumstance, that's a perfectly reasonable design choice - but you get to make those decisions, which I think is the fun thing.
And public cloud providers are sometimes making this choice a bit easier, in some cases completely abstracting the choice, right? So, you know, they're providing these managed services - be it container orchestration or service meshes - that really, you know, do a lot of this grunt work for you. And so in some cases you may not even really care which protocol it's using for your microservices' communication, so long as it works - unless latency and those sorts of issues are a major concern as an architectural choice. Maybe then you do care, but internally it may not necessarily impose some extra overhead or burden, because the public cloud providers are abstracting some of this and managing it for you, right?
Fundamentally, building software is about solving problems.
Yeah. Which is awesome. As long as they're making the right decision for themselves, which, you know, is basically what you hope. But again, one of the things I care about is that I want to be able to make changes to the thing I've got without having to (a) notify other people, if possible, or (b) have other people notify me when, you know, I need to make a change because they've made a change. I want to be in control of that change cycle, you know? And the infrastructure provided by the cloud providers, you know, allows me to remain in control, for the large part, because they're operating at the right level of abstraction - transports and things like this, right? SQS, SNS - they don't care what you put in them, you know; I still get to decide what the stuff is that I put in when I push it into these services.
And that means that I'm still in control from the perspective of change. And that's, I think, the important thing, because that's the thing that limits throughput. I mean, fundamentally, building software is about solving problems - I would say "business" problems, in quotes, but it's not necessarily business. It's not necessarily about making money. It could be about, you know, modeling for the pandemic, which we're doing with one of the universities in India - we're building an ABM, an agent-based model, to do pandemic modeling, you know. But that's "business" in that sense, in the inverted commas. The reason you're doing it is to solve a problem, whether that's, as I say, the agent-based model stuff, whether it's trading, whether it's selling an insurance policy, or, you know, online deliveries of goods.
And in order to do that, you need to have as much throughput from your teams as you can in order to solve those problems. So what I'm interested in is how you maximize that throughput - back to Agile again, right, which is all about limiting work in progress and, you know, small batch sizes and fast feedback - and what architectural decisions and designs are there that help maximize throughput, or hinder it. And one of the things that really hinders it is coupling: you know, me making a change means someone else has to make a change; them making a change means I have to make a change. Anything we can do, and any tools we can use, to minimize the coupling between teams, I think is important.
It's why consumer-driven contracts were such an important idea. Because, back to the fast-feedback bit of Agile, if you can find out in your build that you're going to make a breaking change to someone else's software, then you can fix it then and there, you know. If you find out when they start phoning you further down the line, you've got much less context about the change 'cause it was long ago. You're then gonna roll it back from production or roll forward. You end up with, you know, essentially a hit on lead time for requirements coming behind that change. So, you know, consumer-driven contracts say: "You tell me how you're gonna use my service and I'll honor that contract by executing it in my build as a test for the behavior that you want. And if I break that behavior, then we can have a conversation - but it's a managed conversation, as opposed to me breaking the behavior and the other person having to deal with it or phone up in an irate manner."
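A consumer-driven contract can be sketched in a few lines of Python - this is the spirit of tools like Pact, not the Pact API, and the field names are made up. The consumer declares the shape it relies on; the provider runs that expectation as a test in its own build:

```python
# The consumer publishes the contract: the fields (and types) it depends on.
consumer_contract = {
    "required_fields": {"symbol": str, "price": float},
}

def provider_response():
    # The provider's current (hypothetical) payload, including fields
    # this particular consumer doesn't care about.
    return {"symbol": "ACME", "price": 42.0, "venue": "LSE"}

def contract_holds(contract, response):
    # Run in the provider's build: fails *before* release if a field the
    # consumer depends on is removed or changes type.
    return all(
        name in response and isinstance(response[name], typ)
        for name, typ in contract["required_fields"].items()
    )

assert contract_holds(consumer_contract, provider_response())
```

Note the check only covers fields the consumer declared, so the provider stays free to add or rename anything else - exactly the managed conversation James describes.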
Sometimes, of course, that's the only way you can deal with stuff. I was at a bank once where we were turning off an old version of a service to migrate to a new one. And we had no idea who was using it. We knew two teams who were using it, but we had no idea who else, 'cause there were more requests than just those two teams. So the only way to deal with it was to turn it off and then wait for the phones to ring - which they did, quite quickly.
Well, James, several years ago you did pioneering work on microservices, and then you've had the benefit of watching this whole ecosystem evolve and thrive. Now look into the future and predict the next big thing, the next evolution. Where are we at? What's next?
So if I was going to be a bit glib, I'd say "database triggers in the cloud" seem quite a big deal at the moment - which is Stefan's description of serverless, right? I think that's a really interesting idea. And a couple of years ago now - I've not looked into it too much in the last year or so, but I was following a lot of the research - I was looking at not just containers but unikernels. If you've not come across unikernels: I used to joke that Docker is 30% of the way to a unikernel. Rather than build an application, you actually build an OS with your application baked into it, but you only build exactly what you need for your application. If you don't need UDP, you don't build that part of the networking stack, do you know what I mean?
In terms of security, it's the holy grail, because you're minimizing the attack surface. You know, there's no option to forget to turn off port 80 - it's not there in the first place. But there's also interesting stuff around formal proof and formal checking, 'cause if you use compilers that are themselves formally proven, then you can build unikernels that are, again, formally proven to be correct, if you like, or secure. And that's really exciting, I think, from the perspective of building secure software. So I think that's one thing in terms of where infrastructure - things like Lambdas, or even serverless - is going.
I have a concern about serverless, though. I mean, Stefan was quite right, and he does speak to my concerns about the potential problems with serverless in the future, which is this unmanaged complexity that you get in databases. If you've ever looked at databases from sort of 2001 - a big Oracle or big SQL Server database, when everyone was going, "Ahh, all the logic is in stored procedures" - and you've tried to untangle any of that sort of stuff, where this procedure is calling that one, which is calling that one, which is calling this one, it's really, really, really hard. And I remember one client where you would invoke a stored procedure and pass in an index into a table, which held the name of the stored procedure that you actually wanted to call. So you'd call it with, like, ID 15. Then the stored procedure would look at the table and go, "Ah, that means store_user," or something.
So then it would, by reflection, invoke the store_user stored procedure, passing the parameters. What were people thinking? You know, you've created your own pointer system within the database. It's just kind of crazy. But I do wonder - I worry slightly - that we're going to get into that position with serverless. And so I think there's a lot of evolution to come there with using functions as a service. Another interesting thing is something I've been talking a lot about with Kief Morris - someone else you might want to consider for a podcast, 'cause he is very much in the infrastructure space, and he's awesome. It's this idea of infrastructure as code and what that's gonna look like in the future. 'Cause we've started to get new things on the scene like Pulumi and the CDK, which are much more "I'll write code" and then something's generated at the back of it. But is that the end? Is that where the industry is going as a whole, or is this just a stepping stone to something else? In the same way that I think Docker and functions as a service are probably a stepping stone to something like unikernels in the future.
So yeah, those are two things that I think are quite interesting. There's also a bunch of stuff around things like ethics in machine learning. I mean, at ThoughtWorks we get exposed to a lot of interesting ideas, and, you know, there's a bunch of stuff fizzing, if you like.
I was speaking to a data scientist the other day, and he was talking about ethics in machine learning, or ethical AI. He said, "David, it's an oxymoron. The whole point of machine learning is to predict an outcome. So why do you want to create boundaries around the outcome it wants to predict?" He was challenged by the whole concept of ethical machines.
Yeah. Well, I mean, that's the thing, isn't it? That's true in the sense that it depends on the datasets you use. If you have a perfect, entirely representative dataset, then yes, that's kind of all fine. The problem is none of them are - we don't have these representative datasets. And that's one side of it. So it's not the case that you're getting the right outcome - you get the wrong outcome; you can't get the right outcome. It's like if, in whatever you're measuring, you decided to throw away 95% of the results, and then suddenly you can't find this, that, or the other. Essentially it's automated confirmation bias, is one way to describe it. Which is not a great thing. If you've got all the data available, then that's fantastic. If you don't, I think there are real issues.
James, I feel like we're just getting started, but unfortunately we have well and truly run out of time. How can the listeners learn more and follow you publicly? What channels should they be following?
So I very frequently update my blog, Bovon.org. There are also quite a few videos on YouTube now of me talking about microservices over the years. I'm on Twitter @boicy - that's b-o-i-c-y. And if you want to get hold of me, please email firstname.lastname@example.org. I'm always interested in talking about this stuff and learning more about what other people are doing. So thank you very much. And thanks, Kevin and David, for inviting me. It's been pretty great.
Absolutely, our pleasure. That's very generous of you to offer all those points of contact. We would love to have you back on the program soon and continue these talks. I think there is so much we could dive into. It's been a pleasure. Thank you very much. Enjoy the rest of your day in London.
Thank you. I can hear the rain and wind howling outside the shed, so I might delay getting a new cup of coffee. Thanks very much. I'll sign out now!