PODCAST

The Future of Event-Driven Integration with James Urquhart


In the book, Flow Architectures: The Future of Streaming and Event-Driven Integration, Flow is defined as "networked software integration that is event-driven, loosely coupled, and highly adaptable and extensible. It is principally defined by standard interfaces and protocols that enable integration with a minimum of conflict and toil." It aims to reduce the cost of real-time integration, while allowing data streams to be shared in new and innovative ways, giving birth to a World Wide Flow.

In this episode of Cocktails, we talk to the proponent of Flow Architectures, who predicts that Flow will become the foundation for linking the world's activity via a World Wide Flow. We discuss the concept of Flow, the technologies that will need to evolve alongside it, and how businesses and organizations today can prepare for its inevitable emergence.

Episode outline

  • James Urquhart defines "Flow" and the concepts behind it.
  • What technologies will need to evolve to support Flow?
  • Will Flow see adoption and promotion as an industry standard within certain verticals?
  • Are companies set up to embrace a real-time event-driven economy?
  • If the technologies and standards to support Flow are yet to emerge, what can an organisation do now to prepare for it?

Transcript

Kevin Montalbo

Joining us all the way from Australia is Toro Cloud CEO and Founder, David Brown. Hi, David!

David Brown

Hi, Kevin.

Kevin Montalbo

Our guest for today is a proven technology executive and a key influencer in the use of distributed systems and technologies in the enterprise setting. He has a background that includes both field and product leadership, with both start-ups and large corporations. Named one of the ten most influential people in cloud computing by the MIT Technology Review, The Next Web, and The Huffington Post, and a former contributing author to GigaOm and CNET, he frequently writes and speaks to these disruptive technologies and the business opportunities they afford.

He is the author of Flow Architectures: The Future of Streaming and Event-Driven Integration, published by O’Reilly. In it, he imagines a new global event-driven network that will drive what he describes as a Cambrian explosion that will change how the world works. And we’re here to talk about that today. Ladies and gentlemen, James Urquhart. Great to have you on the podcast, James.

James Urquhart

It is a pleasure to be here. And I love that sign, man. It’s so cool! I gotta get one.

David Brown

We’ve been waiting for that sign for a while. We just need a new camera, so that when the sign comes up, the resolution is better in the background there. But it is the Coding Over Cocktails sign.

James, we’ve been really looking forward to this. The book is really interesting stuff, and your predictions are very, very interesting. We often ask in our podcast, you know, "What's your prediction for the future?" Well, you've written a book about some disruptive technology which you think is going to shape the future significantly. Why don't you tell us what Flow is? Let's get started with what it is and what it does.

James Urquhart

Flow is about event-driven integration and how standard interfaces and protocols change the way we integrate businesses.

Yeah. Thank you. So, the concept of Flow is really about - the fundamental question behind it is: what happens when the interfaces and protocols that we use to integrate our applications, in a way that lets us consume real-time event streams, real-time indications of state change in other systems, become standardized? Today, that kind of integration happens all the time. You have stock-trading networks. You have healthcare systems. You have manufacturing systems where sensors can indicate there is a problem to a central control system. But the problem is, every single one of those use-cases has basically bespoke, made-for-that-purpose interfaces and protocols that enable those things to talk to each other. I did some work with something that we could talk more about as we go forward, a technique called Wardley Mapping, which allows you to understand the user need, understand the needs that are required to meet that user need, and then map those against an evolutionary scale that goes from incredibly new and novel and not understood, all the way through custom and then product, until they get to a commodity or utility phase, when it's fully understood and everybody expects a certain behavior.

And so, my question was, "What happens when those interfaces and protocols begin to evolve to that point where they become standard and understood and ubiquitous?" And it very quickly occurred to me that that is a huge deal. It's akin to when HTTP gave us a standard for defining and linking information, with a standard interface and protocol for understanding how to not only provide a pointer to another piece of information, but to follow that pointer from one piece of information to another, right? So, in the same way, this is about activity. This is about linking activity in a way that you could very easily say, "Hey, I want to make available this stream of events that represent this activity to the world or to specific individual organizations," and then, as a consumer, you can say, "Hey, I want to subscribe to that particular stream, to be able to react to that information as it becomes available." And you can follow the same train of thought as what happened with HTTP, where individual pieces of information got linked, and so on, until we started to see a global graph of information that's highly linked together. I believe that over the long term, we're talking a decade, maybe more, we will see a very large, massive-scale connected environment where real-time data is being exchanged and consumed, and is generating new data, across industries, across geographic locations, and across the world. So that's the heart of Flow. It's about how event-driven integration will really be changed by standard interfaces and protocols and how that will change the way we integrate businesses.

David Brown

I mean, it sounds like a pub-sub model. Event streaming, subscribing to events, and having applications respond to those events is a common architectural model for internal application development. So, my understanding of Flow is that it is that on a global scale. So, you're making those events publicly available, presumably with some form of security, so that you can lock it down to only those subscribers that you want to. So it's almost replacing RESTful APIs with an event-based subscription model.

James Urquhart

I want to correct that slightly because it cannot replace REST-based APIs.

David Brown

Sorry, not replacing. Supplementing.

James Urquhart

Today we do many things with API calls that are about trying to catch an event – a state-change – that's occurred. So, what we're doing is we're replacing those situations where we're using request-reply models where that's not the most efficient, and I'll give you an analogy. There's a gentleman, I can't remember his name right now, but he spoke at DevOps Enterprise a couple of years ago about how Walmart has adopted event-driven integration for their real-time inventory systems. And the most powerful aspect of that story: he talked about what the architecture looks like internally in Walmart's systems to be able to take advantage of receiving data in real time, as opposed to having to find it by making these kinds of call trees through multiple services. Right? So, literally, the data that a system requires ends up looking a lot like a lookup table, because it just gets fed with the data from the outside and gets updated on a regular basis. So they actually just literally, in some cases, use a lookup table as the service. Cool stuff.

And then he did the math on what that does to availability. So, rather than having to chain all those services together and get availability by multiplying the service levels of all the services that you have to call to get your answer, they now have that one call to get an answer. Whether or not that answer is 100% accurate is dependent on how accurate the feed is from the chain, right? But the call tree is a simple call, and so it's way cheaper to achieve five-nines of service availability in order to answer that question. And so, that kind of fundamental shift in thinking really changes the nature of when you would use event-driven versus API-driven. I think, you know, in probably a large number of cases where API-driven is used today, the idea of using event streams will make more sense. However, even in Flow, the interfaces I talk about are APIs, right? To initiate a connection to a stream, to request to subscribe. And you're right about the pub-sub thing – that seems likely to be the dominant, if not the only, model that Flow will take.
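A back-of-the-envelope sketch of that availability arithmetic, with purely illustrative numbers (not Walmart's actual figures), might look like this:

```python
# Hypothetical numbers: availability of a chained request-reply call tree
# versus a single lookup fed by an event stream.
per_service = 0.999           # each service alone offers "three nines"
chain_depth = 6               # a call tree that touches six services

chained = per_service ** chain_depth   # availabilities multiply along the chain
single_lookup = per_service            # one call to a locally fed lookup table

print(f"chained call tree : {chained:.3%}")        # ~99.401%
print(f"single lookup call: {single_lookup:.3%}")  # 99.900%
```

The deeper the call tree, the further the chained number falls, while the event-fed lookup stays at the availability of a single call.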

But in order to make that subscription, you have to make a request-response call to the topic holder or to the publish-subscribe system. So, APIs for request-response situations will still be critical. But it's very, very true, though, that we do an awful lot of things where we make calls on a timed basis, trying to catch information when it becomes available, as opposed to "Hey, I have a query. I need a response to that query at that time." And that's what it will replace. That's the fundamental change.

David Brown

There seem to be so many implications in terms of how it's gonna disrupt business and the like, which we'll get to in a second. But can I ask you? You've likened Flow to HTTP, the foundation of the World Wide Web, and you predict that Flow will become the foundation for linking the world's activity via this World Wide Flow, to use the same sort of acronym. What led you to the conclusion that this was inevitable, that this is how business is going to be conducted?

James Urquhart

Yeah, the core element of it is that, if you look at the needs to satisfy event-driven integration - so you just say "event-driven" integration without putting any caveats on it - the only caveat I put on is that I'm really, really interested in how I do that integration across organization boundaries. Like, what are my needs to be able to, securely, with trust, and in a performant way, have one organization say, "I want to subscribe to the feed from another"? And there are many examples in the book of different kinds of interesting use-cases where that's the case. And as I did that analysis: anytime you have integration, the key factor that makes integration possible is that you need some form of interface that allows the systems involved to identify a connection between each other. Now, that might be through an intermediate party.

So I am very clear in the definition of Flow that it doesn't have to be the producer and the consumer that provide and consume interfaces and protocols. There could be agents that act on behalf of the producers and the consumers to do that. But when you have that Flow situation, you need those interfaces and protocols to enable, ultimately, those two things to get connected together. So those are two things that you end up saying are components of the overall solution.

The other things that you need besides those two include things that we all know extremely well, which are the things that interact with streams. Because just streaming event data with no interaction is just moving data around for no really good reason, right? You have to have interaction that goes with Flow, and so, when you take that on, then you also have to map things like queues and processors and sources and sinks as a part of the map as well. And so I put all that stuff on there, and then I was paying attention to the fact that a lot of those things, in terms of queues and processors, are a commodity today, right? You can go to Amazon today and you have a selection of different types of event processing. You can choose from Lambda, Step Functions, EventBridge, and a number of other things. And then you have a number of queues available to you as well - Kafka, simple message services, a number of other things - just to give you one utility's sense of that. And then you have a big choice in the marketplace for them. So, you have this ubiquity of core processing elements. Sources and sinks range a little bit more, right? If you're talking about data sinks and analytics things, those are largely commodity today too. You can get data warehouses and even short-term processing and real-time visualization stuff today very easily. Sources range from brand-new sensors we've never seen before all the way to commodity elements. But then there are the interfaces and protocols.

Those today, if you look at how people are integrating, especially across organization boundaries, where they're integrating applications using event streams - as I said earlier, those are highly bespoke. In those situations today, one company may say, "Hey, we're just exposing our Kafka API through some sort of secure tunnel," or something like that, "and with heavy access control around it, and that's how we're doing it." Another group may say, "Hey, we're publishing these to EventBridge in AWS. And so if you want to connect to this, you can go to EventBridge. And this is how you connect to our stream." And there's everything from what the stock market does, where they literally sell you a port on a server that you connect to directly and get a very direct network stream that way, to more casual kinds of ways of connecting.

And so, part of Wardley mapping is understanding that everything moves from that crazy novel thing through those steps that I talked about earlier, custom and product, toward that utility phase, right? And if it's a really, really useful technology, or a capability I should say, new technology will come to meet that capability and replace and slowly subsume the old one, until you get to the point where you've got a very standard capability. There's no real need to differentiate; we know exactly what we want here. There might be little tweaks to specific features that are built off of the core capability. There might be tweaks to the service that's offered, but you get to that point where you move to that commodity. And so, that's when you ask the question: if we're largely custom, or maybe product in some cases today, what happens when the right technology comes along to make that a utility?

And so that's, to me, the core thing, [though it's] somewhat speculative to say. But I did the analysis on the business drivers that need to be there - there's a whole chapter on that in the book - and on the technologies and what the technologies today are telling us about the trends - there's a whole chapter that talks to that stuff - and then did a little bit of what Simon Wardley, who invented Wardley Maps, calls 'gameplay' against what you see in the current state now, asking questions about different ways it might evolve. It became really, really clear to me that unless there's some fundamental thing, either legal or physics, that's not understood today and that would block its evolution, over time it's pretty inevitable. And we're getting very close to the point where that's going to start to appear. And we can talk more about what I see that drives that.

David Brown

Was that like an "aha" moment when you came to that conclusion?

James Urquhart

Yeah. If you go to Medium or some old blog posts that I did a while ago, which you can search for and look up if you want to, you can see old, like, really sloppy versions of different analyses that I did. But you can see literally where I came to the conclusion that, "Wait a minute. These interfaces and protocols are really important." And when I reached that point, that was definitely an "aha" moment. Definitely.

David Brown

Can we talk about the technologies? You've mentioned a few, so it seems like, as you said, some of the technology is already there to publish messages, put them onto queues, and have people subscribe to them; even some of the security elements are already there. We briefly touched on what technologies would need to evolve to support Flow. So, for example, I'm thinking about the RESTful API space, where what needed to evolve was a common standard to describe RESTful APIs in order to consume them. Now, I know we have emerging standards like CloudEvents.io. Is that the sort of technology that's going to need to evolve to support Flow?

James Urquhart

Yeah, let me break down interfaces and protocols just slightly more, because that will help you kind of understand. But yes, I think you're on the right page. So, on the interface side, there are really two core elements that are the interfaces you need to make Flow work. The first is the logical connection, which is really the subscribe command. If you buy into publish-and-subscribe being the mechanism, then what are your interfaces to publish and what are your interfaces to subscribe? There's a possibility there are some other mechanisms that are more direct, that aren't so much about publish-and-subscribe, that might be useful in some use-cases and might play into what has to be supported.

So, you might have a situation where it's not so much about subscribe; it's more just about establishing a logical, more conversational connection between a producer and a consumer. But if we go with publish-and-subscribe, and you go, "Okay, whatever the publish-and-subscribe interfaces are, and the mechanisms that allow a publisher to say 'I want to connect and put things in a topic somewhere' and a subscriber to say 'I want to connect to and take something out of that topic somewhere,'" then those are the interfaces of logical connection at large. And they sit on top of all the networking stuff - TCP/IP and TLS or whatever. But then, the other interface that's really critical is discovery. And discovery is important because, you know, what streams do I want to subscribe to? How do I find streams that are valuable to me and my company? Especially if the number of streams grows exponentially like HTML websites did, then how do I find what I need to find? And so there's a couple of ways that could go. One way is that it could be just Google and web pages that describe, in a standard way, "Okay, here's my stream URI, and here's what you need to know in order to be able to consume the data that comes from the stream URI." And that has a very decent chance of being the winner.
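To make those two interfaces concrete, here is a minimal sketch of what a discovery-plus-subscribe exchange could look like. The URLs, endpoints, and field names are all hypothetical; no such standard exists yet, which is exactly the point being made here.

```python
import requests

# Hypothetical discovery document for a stream, fetched from a well-known URL.
# The URL, stream URI, and field names are invented for illustration only.
descriptor = requests.get(
    "https://streams.example.com/.well-known/flow/orders"
).json()
# e.g. {"stream_uri": "https://...", "schema": "https://...", "events_per_sec": 120}

# Hypothetical subscribe call: a plain request-response API establishes the
# logical connection; after that, events are pushed to the consumer's sink.
resp = requests.post(
    descriptor["stream_uri"] + "/subscriptions",
    json={
        "sink": "https://consumer.example.net/events",  # where to push events
        "protocol": "HTTP",
    },
)
print(resp.json())  # e.g. {"subscription_id": "..."} - an invented shape
```

Note that the subscription itself is an ordinary request-response API call, which is why Urquhart says APIs remain critical even in a Flow world.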

But there are also a number of companies out there putting out commercial products that are more like an API-driven registry of streams, which do more than just say, "Hey, here's the stream URI, who's the owner?" or whatever. They give you programmatic ways of understanding: what's the velocity of stuff that you might expect to get from this? What schemas do you need to know and understand in order to be able to consume these things? And, if I can programmatically determine from the schema how to read it, where do I go to get the things my program needs to be able to determine that? So those things, I think, are important. And then on the protocol side, there are two pieces that are really important as well. The first is the payload. You know, what is it that you're saying that's unique about that state change that you want to communicate to the other side? And payloads are the one thing where it's not going to be standardized in the sense of one ubiquitous standard protocol. It's likely to be more on an industry-by-industry, or even key-market-by-key-market, basis.

So, things like you see already with EDI and a number of other places where they've built data standards for how to exchange data across boundaries - those will get encapsulated into payload form, or new ones will be created to meet new needs. But the metadata gets very interesting. So, the core descriptions of what's in the payload: what do I need to know to be able to pick it apart? What time was this event generated? How has it been routed to me? Where is the original source? Those things are very, very important in order to be able to both trust the data that you're getting and also to understand a little bit about how to process the data and the kinds of things that you want to determine from it.

And for that metadata, there's something called CloudEvents out there that you mentioned earlier, CloudEvents.io. CloudEvents is a project from the CNCF. I think it has the inside track in a big way in terms of being that metadata protocol. I think it's a very important step in the right direction. Even if it doesn't win, it will inform whatever wins in the long term. But man, I gotta tell you, everything about the way they set it up is correct. It's not itself a wire protocol. It's a metadata definition protocol that can be mapped to a number of different wire protocols. So, AMQP, MQTT, HTTP, Kafka mappings, just about everything you can think of there. But something like that will have to be in place as well. And so, when you look at those things that are in place, that's where I really think those are the pieces and elements that have to get better and stronger and fully standardized in order for us to say, "Now we have standard interfaces and protocols for Flow."
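For a sense of what that metadata envelope looks like in practice, here is a small sketch using the CNCF CloudEvents Python SDK; the event type, source, and payload are invented for illustration:

```python
from cloudevents.http import CloudEvent, to_structured

# Required context attributes; the SDK fills in id, specversion, and time.
# The event type, source, and payload below are invented examples.
attributes = {
    "type": "com.example.inventory.stock-level.changed",
    "source": "https://example.com/warehouses/42",
}
event = CloudEvent(attributes, data={"sku": "ABC-123", "on_hand": 17})

# Render the event in the HTTP "structured" wire format; similar bindings
# exist for AMQP, MQTT, Kafka, and other protocols.
headers, body = to_structured(event)
print(headers)         # includes content-type: application/cloudevents+json
print(body.decode())   # JSON envelope: standard metadata plus the payload
```

The payload stays industry-specific, but the envelope (type, source, id, time) is what any consumer can rely on being standard.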

David Brown

Do you think certain vertical industries will also drive adoption? So, the finance industry coming up with standards to describe a financial transaction, for example?

James Urquhart

I do think that certain industries are gonna find this [Flow] more valuable than others right off the bat.

Yeah. I mean, you know, I think there are certain places where it's gonna be really hard for Flow to crack the nut, so to speak. I use high-frequency trading as an example frequently in the book, as the kind of area where a Flow-like contract exists already. The catch there, though, of course, is that the speed and performance of what you need to know is really high. But that doesn't preclude Flow from giving the financial stock-trading industry the ability to say, "Hey, we're moving from this port-based approach to a Flow interface," and essentially establishing the same connection when all is said and done. But the metadata will change a little bit, and the Flow will change a little bit as a result of that.

But I do think, yeah, absolutely, that certain industries are gonna find this more valuable than others right off the bat. The transportation industry, specifically the logistics industry, is one where I look and say, "Yeah, they're already moving in this direction." There's a number of vendors that are using some relatively common standards for things like truck telemetry for semis and things like that. Those things are already starting to become very, very standard. So, I absolutely think the industries will drive it, and, you know, some industries will be first. Others will be later.

But, man, the power of real-time data is surprising in places where you've been using batch processing for decades. Right? I've seen this firsthand, and, you know, you get a really exciting shift in behavior when people can actually see what's happening, say, in an online retail campaign in real time, versus 24 hours later.

Kevin Montalbo

All right. So, like any disruptive technology, I imagine that new technologies, new industries, and new business models are going to emerge. Have you foreseen any of these emergent industries and business models taking shape?

James Urquhart

There is a lot of opportunity for business data streams to be analysed and combined and reused in ways that we don't see a lot of right now.

So far, as of now, I think there's very little today that you can look at and say, "Oh, this has been driven by standard interfaces and protocols," because they don't exist. But you can see some hints of the kind of explosion of business opportunity that's out there. I mean, just today, right? We see the news that Apple and Kia Motors are kind of working together on a factory to build cars. Right? The great part of that is the telemetry capability that's available from the Apple ecosystem, and what they're able to do in terms of integrating your entire personal portfolio of data - now adding cars to the picture - makes that possible. Right? So, you look at self-driving cars, you look at the music and film streaming industries, and some of the capabilities that you might see with all these new networks popping up everywhere and everybody doing their own apps and all that stuff, which I think is really indicative that there are things that could be built on top of that. I think it will go much further, though. I do think that there is a lot of opportunity for business data streams to be analyzed and combined and reused in ways that we don't see a lot of right now. So I think there's more to come in a huge way.

Kevin Montalbo

How about for organizations? Do you see any organizational challenges that Flow will face? Are companies right now set up to embrace a real-time, event-driven economy?

James Urquhart

Yeah, I think there will be some resistance in the sense of, you know, one thing I say a couple of times in the book, and I've believed this for a really long time, is that the first requirement of any really useful business system is trust. If you don't have trust as the baseline, none of the rest of the features and functionality matter. And so it'll take some time for organizations to trust that if they make their data available, it'll be consumed in a way that's beneficial to them, and to find the business models where that's the case. It will take a while as well for consumers to trust that the datasets they get from some place they don't have a contract with upfront are valid and worthy of consumption. There's a fun little debate going on right now about the nature of what an application is. You know, we've used this term about sort of Big-A applications, little-a applications, and all kinds of terminologies. Sort of, you know, what's the difference between a deployable component that's owned by a team and the application from the consumer's perspective or from the business's perspective?

And I think a big part of why we have trouble mapping these things is because, really, in the end, what we have is a graph of software components out there. We have a bunch of components that have dependencies and communication links with each other, and finding those boundaries is getting harder and harder to do as we have more and more of them, connected in interesting and complex ways. So, I do think that in the end, some of the resistance that enterprises will have revolves around the way they're organized, the way that they're organized to build software, and how software comes out. So, the old Conway's Law conundrum that we've been dealt, and seeking that dream of a reverse Conway's Law, where organizations begin to change around the software components that make us successful.

I don't think we will get there anytime soon, which is why I think Flow will be delayed somewhat. Because, again, the trust factor is not only the trust factor between one business and another and the network in between; there's also a trust factor about sharing data within the organization that has to be overcome. That's why I say it's a 5-to-10-year horizon before it's a mainstream technology problem. But I do think that the early movers, the people that help drive the standards and then build businesses around those standards early, will have an inside track on really profiting from what happens later.

David Brown

I'm guessing there are also business process challenges in moving to a sequential event processing model. Many businesses still work in batch processing, so they'll literally need to overhaul their business processes in order to support a real-time event-streaming model.

James Urquhart

Yeah. The real-time part of it is two factors. One is the need for automation, where we've been able to kind of slide by without automation for a long time; I think it's gonna be driven in a huge way. There's a class of work that's always been easily replaced by software that I call clerk jobs. And you know what a clerk job is: somebody who takes something off a queue someplace, does a prescribed function against that something, and then passes it on to the next step in the process. If you look at the movie "Hidden Figures," about the women at NASA that used to compute functions and provide mathematical answers to equations in the fifties and sixties, right? You look at that and how quickly they were replaced. The IBM mainframe became the core processing system within the organization.
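That "take something off a queue, apply a prescribed function, pass it on" shape is exactly why clerk work automates so readily. A minimal sketch, with an invented approval rule standing in for the prescribed function:

```python
import queue
import threading

# Invented queues standing in for the steps of a clerical process.
inbox: queue.Queue = queue.Queue()
next_step: queue.Queue = queue.Queue()

def clerk_worker() -> None:
    """Take an item off a queue, apply a prescribed function, pass it on."""
    while True:
        item = inbox.get()                                    # take off the queue
        result = {**item, "approved": item["amount"] < 1000}  # prescribed rule
        next_step.put(result)                                 # pass to next step
        inbox.task_done()

threading.Thread(target=clerk_worker, daemon=True).start()
inbox.put({"id": 1, "amount": 250})
inbox.join()
print(next_step.get())  # {'id': 1, 'amount': 250, 'approved': True}
```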

I believe there's a lot of white-collar clerk work that is very much at risk as Flow pops up, because organizations are gonna have to look at: does it make sense for us to try to process a growing velocity of data, coming in through the streams that we subscribe to, in a human-driven way? Or do we have to finally find ways to automate portions of compliance, and automate portions of data reconciliation and data cleanup, as we take a look at things? Hard problems to solve, but maybe increasingly easy as machine learning and AI technologies get a little bit better over the course of the same decade that all of this is evolving.

So, yeah, what's gonna really survive, from a career perspective, in most organizations is going to be those things where you have to use smarts and creativity in order to either create something of value or resolve issues that might impact the value of a product or service being offered. And in those situations, I think it'll be a very, very long time before humans are replaced. You know, I always say there's always a negative to every positive in the way these technologies evolve. And the negative is, really, I think there are a lot of jobs that have been relatively safe for the last 100 years that are probably going to be increasingly at risk - not just because of AI/ML, but because the combination of AI/ML and Flow is really going to enable businesses to automate those tasks more easily.

David Brown

What can organizations do to prepare for this? You mentioned the early adopters are going to be the winners. And let's just put aside the technology providers, for which there are obviously huge opportunities. But for organizations which are going to be publishing or subscribing to data streams, can they get started on this now, or is it purely theory? What are the things they could do and put into practice now to prepare for this?

James Urquhart

Well, yeah. So, the good news is, because the interaction technologies are increasingly, basically, more or less commodities at this point, processing events at scale is a doable thing today with the technologies that are available. So the first thing I point out is, you know, we're all talking about microservices architectures, but a lot of people think of microservice architectures in terms of request-response APIs and REST-API kinds of services. There are a number of different works out there that now increasingly talk about how you process streaming data as a data-stream approach, using a number of different Apache projects to do that, and different commercial products in the market to do that.

And if what you're really looking at is beginning to understand and sort data, there are a number of things you could do there today using those technologies. There's a new class of stuff out there that's around building models of the real world in digital form, as a sort of digital twin model. If you Google the term digital twin, you'll find a billion things about that. And, you know, one of the ones that I've worked with a fair bit is an open-source project called Swim OS, and the company Swim.io that's producing it and delivering commercial products around it. The idea there is - and this is a class of tech, it's not just them - the idea is that you process the stream as it comes in. You determine from the data of the stream that there's the existence of an entity or agent in the system. You build a digital twin for that agent. You then, through the data, determine how it's connected to other agents in the system. You begin to build this graph model in memory of, what are these things and how are they connected to each other? And then, somewhat interestingly, it uses an AI approach, sort of more of a machine learning approach, to determine over time: how should I react as the other agents I'm aware of change? What are the actions that I should take? What signals should I emit? Or how should I update my state to reflect what I'm seeing elsewhere?
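As a rough illustration of that pattern - this is not Swim OS's actual API, just a sketch of the idea - you build an in-memory twin per entity as events arrive, link twins as the data reveals relationships, and let each twin react to its neighbors:

```python
class Twin:
    """A minimal in-memory digital twin: state plus links to neighbor twins."""
    def __init__(self, entity_id: str):
        self.entity_id = entity_id
        self.state: dict = {}
        self.links: set["Twin"] = set()

    def on_event(self, event: dict) -> None:
        self.state.update(event)        # update my state from the stream
        for neighbor in self.links:
            neighbor.react(self)        # let connected twins react to the change

    def react(self, changed: "Twin") -> None:
        # Invented rule, for illustration: mirror a neighbor's congestion flag.
        if changed.state.get("congested"):
            self.state["upstream_congested"] = True

twins: dict[str, Twin] = {}

def handle(event: dict) -> None:
    """Process one stream event: create twins and links on demand, then update."""
    twin = twins.setdefault(event["id"], Twin(event["id"]))
    for other_id in event.get("linked_to", []):
        other = twins.setdefault(other_id, Twin(other_id))
        twin.links.add(other)
        other.links.add(twin)
    twin.on_event({k: v for k, v in event.items() if k not in ("id", "linked_to")})

# Events shaped like a traffic system might emit (invented shapes):
handle({"id": "intersection-1", "linked_to": ["intersection-2"], "congested": True})
handle({"id": "intersection-2"})
print(twins["intersection-2"].state)  # {'upstream_congested': True}
```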

And so it's in heavy use in things like traffic systems. I think the city of Las Vegas is one that the Swim team holds up as an example, Palo Alto as well. But also, you know, they do models of entire cell phone networks for major cell phone network providers, where they know what towers are doing and what cell phones are doing and what's connected to what at any given moment, based on the stream of data coming from those towers and from the cell phone network systems themselves. And they do that on a surprisingly small number of hardware systems. So, it's a very cost-effective way of understanding how those things are related.

I think, you know, another element that you can look at and build around today is the cloud world and serverless, right? People who are doing serverless systems today are sort of building prototypes of systems that would work very well in a world where they're consuming an external stream instead of an API. A lot of Lambda today is triggered by a call to an API gateway, and you could just replace that with consuming a stream. And you could do that today as well. The big thing is to build it in a way that, wherever the downstream processing depends on interfaces and protocols, you have the ability to adapt those very quickly and cheaply as you move forward. So, you know, decoupling the implementation of the event processing from the consumption of the data in the business systems themselves is, I think, a very important thing to keep in mind, so that you can begin to consume what I believe will be standard libraries. Right? You won't have to write all this stuff to be able to consume Flow, because a lot of it will be very, very standardized around those standards. And you'll be able to say, "Give me a known object type out of what I've received from the stream that I can then consume downstream in what I'm doing."
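A minimal sketch of that decoupling idea, with invented names throughout: the business logic only ever sees a known object type, so the transport underneath (an API gateway today, a Flow subscription tomorrow) can be swapped without touching it:

```python
import json
from dataclasses import dataclass

@dataclass
class OrderPlaced:
    """The 'known object type' the business logic consumes. Invented shape."""
    order_id: str
    amount: float

def handle_order(order: OrderPlaced) -> None:
    """Business logic: knows nothing about HTTP, Lambda, or any stream."""
    print(f"processing {order.order_id} for {order.amount}")

# Transport adapters: each one turns its wire format into the known type.
def from_api_gateway(event: dict) -> OrderPlaced:
    body = json.loads(event["body"])   # API-gateway-style envelope
    return OrderPlaced(body["order_id"], body["amount"])

def from_stream_record(record: bytes) -> OrderPlaced:
    body = json.loads(record)          # stream-style raw payload
    return OrderPlaced(body["order_id"], body["amount"])

# Swapping transports never touches handle_order:
handle_order(from_api_gateway({"body": '{"order_id": "o-1", "amount": 9.5}'}))
handle_order(from_stream_record(b'{"order_id": "o-2", "amount": 12.0}'))
```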

So definitely, you know, keep in mind that those interfaces and protocols are what are going to evolve and change, and build with that in mind. But then take advantage of the architectures that are available today, and the new architectures as they start to evolve, where they make sense for the use-cases you need to address. And there are a number of books out there, from O'Reilly and others, about event-driven microservices and about serverless programming and things like that, that can help you and your organization be ready when Flow appears.

Kevin Montalbo

All right. I have a suggestion, actually, for organizations who want to prepare for Flow and you talked about books. So, the suggestion is basically just read your book, right, James?

James Urquhart

Sure! You know, my book is a good way to get a sort of high-level, say, national-map-scale overview of what the problem set is once Flow becomes available. It's not a book that's going to give you a recipe for exactly how to build your systems today to be event-driven in that way. But yes, I highly recommend to everybody that's interested in this idea of moving to real-time stream integration with other organizations, or even within your own company, to take a look at the Flow Architectures book. Because, you know, I think I did a really, really good job of mapping out all kinds of different questions that you have to keep in mind and think about, and that the market will have to solve, in order for Flow to really become trustworthy and useful in the future.

And there's a ton of opportunity, you know. Not only are there instructions about how to be prepared so that you can be reactive to Flow, but there's also a ton of description, if you're an entrepreneur or an investor, of the spaces that need to be solved in order for Flow to really come together and be useful. And you could build that kind of integration without standards today in very specific situations, and then explode that out to take advantage of the standard when it's available. There's a bunch of stuff in there about what industry organizations can do - if you're part of an industry trade group, or if you're in a government organization with a standards body that you adhere to - what you can do today to define the payload standards, and to be prepared to quickly declare a consumption standard around something like CloudEvents as it becomes really clear that it's the standard that's available.

So, there's a lot of opportunity in the book beyond just, "Hey, what do you need to do as a software architect or software developer to use Flow?" It's really a book that's meant for investors and entrepreneurs and business leaders and technology leaders and, you know, a number of different audiences that are going to be able to take advantage of and leverage Flow as it becomes available.

Kevin Montalbo

All right. We here at Toro Cloud want to congratulate you on your book before we wrap this up. Where can our listeners go to learn more about you?

James Urquhart

Yeah. You know, the best places to find me these days: I've been a long-time Twitter user, and that's the number one place I usually go to communicate, look for ideas, and meet new people. But I'm increasingly on LinkedIn as well, and I find that platform very useful in ways that Twitter is not. Those are two great places to find information. I also just launched a blog - like, literally just launched it. It's called "The Flow Book," and it's on Blogger. So Flowbook.blogger.com is where you can find it. I'm about to post my second post ever on that blog, but it will be a good place to see how I break things down, maybe a little bit differently than I did in the book, and how my thinking evolves and changes. And I'm gonna try to stay on top of any news that's really important as Flow comes to fruition. I'm seeing interesting things in a number of industries that begin to indicate that there are businesses that could transform quickly if this technology were available, and I'll try to highlight those as I go forward. So those are the main ways to get hold of me.

Kevin Montalbo

Right, so that's a wrap for this episode of Coding Over Cocktails. Thank you very much, James Urquhart, for joining us. We hope we can have you again soon.

James Urquhart

Yeah, I enjoyed this very much. And, you know, I gotta see that sign with the new camera when it's available. So, let me know.

