
Data Mesh: Defining a new data architecture paradigm with Zhamak Dehghani

Becoming data-driven is one of the key goals companies today are striving towards. By empowering their employees and stakeholders with data, organizations can foster insightful decision-making and create a competitive advantage. But how can we effectively achieve this and unlock the full potential of our data? Our guest for today may have the answer.

In this episode of Cocktails, we're joined by the creator of the Data Mesh, who shares her journey of how she came up with the concept and the challenges of evangelizing it. We also talk about the specifics of what this paradigm shift in data platform architecture is, explore how it differs from existing data concepts, and dive into how organizations can prepare for data mesh.


Kevin Montalbo: As always joining us all the way from Australia is Toro Cloud CEO and founder David Brown. Hi, David, how are you doing?

David Brown: Good day, Kevin. Thanks!

KM: All right. Fantastic. And our guest for today is a Director of Next Tech Incubation at ThoughtWorks North America, where she focuses on distributed systems and big data architecture, with a deep passion for decentralized technology solutions - the foundations for democratization: data mesh, decentralized trust and identity, and networking protocols.

In 2018, she introduced the concept of Data Mesh, a paradigm shift in big data management toward data decentralization, and has since been evangelizing the concept to the wider industry.

She is a member of the ThoughtWorks Technology Advisory Board and contributes to the creation of the ThoughtWorks Technology Radar. She has worked as a technologist for over 20 years and has contributed to multiple patents in distributed computing communications, as well as embedded device technologies.

Ladies and gentlemen, we're very excited to have Ms. Zhamak Dehghani on the show. Hi Zhamak, welcome to Coding Over Cocktails!

Zhamak Dehghani: Hi Kevin. Hi David. Thanks for having me. I have to apologize for that long bio that I had sent you. It took half of the show.

KM: No, no it’s fine.

DB: It’s super impressive. And some of your work is clearly impressive as well, which we want to talk about today, particularly the data mesh. But before we get into that, we've had a couple of guests from ThoughtWorks on the podcast and what is really interesting is how you guys keep coming up with these innovative concepts, which end up defining architecture in some way or the other. So, before we dive into the data mesh itself, can you tell us anything about the process that you have there, where you just come up with these amazing ideas and definitions?

ZD: That's an awesome question and I wish I could tell you that there is a process. I think nobody probably knows exactly how these things happen, but ThoughtWorks, perhaps, has the right ingredients and conditions for these ideas to go from somebody with a critical eye and a critical mind, questioning how we build software, through a process of validating new hypotheses, experimenting with clients, evolving the solutions, and coming out the other end. For any paradigm shift or new concept, I think it's really important to talk about what the conditions and elements are that produce these ideas. And part of it is having the right people - the critical thinkers - and enough mass of those people in a company. I think the other very important and very unique ingredient is this idea of open IP. Individuals within ThoughtWorks can create IP; they're encouraged to talk about it, and they have a sense of ownership of it.

And, as I said, our leaders encourage thought leadership at ThoughtWorks, and then we have access to a set of global clients, which is a fantastic experimentation ground to try different things, and a fantastic vantage point from which to see patterns. In fact, my role is to create a process to systematize and somehow accelerate and incubate the things that we do, because we do it a bit more organically now, so it takes longer - almost every decade we come up with one or two new ideas. So, how can we do that a bit more systematically? Maybe we can talk in a year and see if I've managed to find a process for it.

DB: Well, was the concept for you of the data mesh, was that a natural evolution of one process to another? Was it a Eureka moment where you woke up in the middle of the night? How did it come to you?

ZD: Yeah, I think the very first phase of it - well, prior to even seeing the data problem or seeing the patterns - was the fact that I was in a world adjacent to the data world. And I had seen the movement from centralization to decentralization that came through the microservices revolution, I suppose. I was immersed in the world of distributed computing to a large degree. And then I came to the world of data - big data, the current iteration of it at least, and analytical data. And I noticed that this world had been completely isolated. It was kind of frozen in time - was it 10, 20, maybe half a century ago? - and the values and the principles and assumptions were very much in conflict with everything that had evolved outside of that world towards scale with decentralization and distributed architecture.

So, I couldn't really [inaudible] the system of the world that we lived in. And I felt like maybe I was the child who pointed at the emperor and said that he wasn't wearing clothes. It's just that kind of feeling that I shouldn't be saying this, but it felt important to me. The reason I started hypothesizing with my colleagues and working on data mesh was that we had the need. We had four or five clients simultaneously, around the same time - big clients, very forward technologically - who already had investments in big data platforms, multiple generations of them. And they were still looking for a solution, because after spending so much money they were not getting results.

So, I guess those are the ingredients that came together and resulted in coming up with data mesh, and I was the silly little child who went in and talked about it at first. And I had a lot of skeptics within the company and outside of the company who felt this was sacrilege.

DB: Which is probably a good thing, challenging you on the concept. And you say you have to argue as to why you still think it's valid, right? So, tell us a little bit more about data mesh and how it differs. I mean, you've already touched on it in terms of centralization versus decentralization for example, from a data lake. But can you just give us an overview of a data mesh and how that data world has evolved from data warehouses, data lakes and the like.

ZD: Absolutely. I mean, as a starting point, I think we have a delineation between data within the operational world - the microservices databases that are keeping the state, the stateful workloads, right? They run the business. And on the other side, we have the warehouse and the lake, where we aggregate the byproduct of running the business. The data is generated and creates these views that we can then analyze, right? We can feed our machine learning models and so on. As our ambitions around analytical data processing grow - we have more machine learning models, more of our business becomes intelligently augmented and data-driven - the centralized model starts to fall apart at scale, the scale being the diversity of the sources and the diversity of consumption models.

So, data mesh, at heart, tries to solve the problem of scale. If you fast forward life - I don't know, 15, 20 years down the track into the future - and imagine that everything we do is somehow augmented with intelligence, using data, and the data that feeds those models can come from anywhere, any source on the planet, then it just doesn't make sense to have centralized solutions. At least for me, it just didn't make sense. And it was evident at the micro level, within the enterprises, that the model wasn't working. So, data mesh solves that problem the moment you decentralize on a different axis along which you can scale out. And this axis was, for me, the domains of business - how we are compartmentalizing our businesses so that we can scale based on domains. Then you are dealing with a decentralized, distributed system problem: dealing with siloing of the data, dealing with inconsistencies and lack of integration between that data, the problem of who owns what and the accountability structure.

So, the rest of the data mesh principles address those problems that arise once you decouple and decompose the centralized analytical data solutions based on domains, and based on the ownership of the modeling and the data itself within those domains. And that introduces a set of new concepts as well. Maybe those domains need to shift their thinking from "Data is an asset that I'm collecting" to "Data is a product that I'm serving", delighting the consumers around it. And then, to really enable those domains and not incur an exponential cost of building up infrastructure, perhaps we need self-serve infrastructure with a different mindset - one that keeps the complexity and the cost of running, observing and monitoring these data products low.

So, then we come up with a data platform model. And finally, you have to find a governance model that works with this distributed system - one that allows these domain teams to run fast and model the data however makes sense to that domain, while also serving the common good: finding the equilibrium between autonomy of the domains and interoperability, with adherence to some sort of standardization for interoperability and some way of enforcing, in a global fashion, the policies that matter at the level of each data product. So, what is that mechanism? What are the levers that we can put in a governance model that finds that equilibrium between centralization and decentralization of policy, decision-making and enforcement of those policies? And hence, the computational and federated model of governance came to exist.

DB: This concept of domain ownership and productizing data sounds a lot like microservices. We have a domain model associated with a microservice. Are we talking about an extension of that concept, where you're thinking beyond the microservice and now serving that data as a product to a wider community?

ZD: That's a really good observation. I think it's an extension of that model. If you go back to the origin of microservices, the reason we came up with that model was that the work on domain-driven design showed that modeling your large organization - a multi-team, multi-business-unit organization - as one model, and serving that model through services and applications, falls apart as you scale. So, you have to move to a multi-model world, and then around those models - and I'm using "model" just as a placeholder for services, business processes, the data supporting them, the capabilities, the APIs, and so on - once you break down those models so that the teams have the right cognitive load to support and build services, you arrive at a microservices architecture.

And, absolutely, you're right, this is piggybacking on that model and saying, "Okay, we need a new structure." I'm not sure if it's a microservice; we call it a "data product" and a "data product container." We need a new kind of structure that extends the microservice or set of microservices within a domain, whose responsibility is exposing the analytical data from that domain - data which the domain itself can use to train its machine learning or review its reports, but which other domains can use too, so it's interoperable with others. It's an extension. And some people would say, "What is this thing? Is this a data service? Is this just another event stream out of my existing microservice?" I think that's the space for innovation; that's the white space. I have an opinion about what it should look like, but that's just the first revision of the implementation of this data product and its container, given the utility layer of technologies that we have today. And I really think, and I hope, that this is just the beginning of a series of innovations around that.

DB: Yeah. I mean, you defined data mesh in 2018, right? So, we've had a couple of years since then. Have you seen any evolution in terms of adoption, and in your own thoughts on the concept since you first defined it?

ZD: Yes. So, I can tell you, it actually takes a couple of years before people pay attention to you. It's funny, because we have the original article on Martin Fowler's website, and he sends me these kinds of stats on visits and reviews. There's a plot with an axis for our writing at the time, and my article was always this little, lonely, pale gray dot at the edge, away from all the other articles in terms of where they fit on that axis and how it's changing. And now it's just caught fire, even though it came out a couple of years ago. So, I think what is happening right now is that a lot of technology vendors are taking the easiest path: mapping the existing technology that they have - their roadmaps, their strategy for their products - to data mesh.

And the easiest path is to just change your marketing content: whatever you marketed as a data lake is now a data mesh. And you try to do your best to see how those products can be used. Our implementations have a layer of custom code for creating this data product container, managing and provisioning it, and managing its life cycle. But underneath all of that, we are using the existing storage capabilities. We're using data processing capabilities. We're using all of those famous tools that everybody else is using. And what we have found is that configuring these tools in a distributed fashion is really hard. You run into limits as simple as the number of storage accounts that you can have - a cloud provider might cap that, and some do.

So, you can't have this independently managed, autonomous data product. If you want hundreds of data products, you simply run out of storage accounts as a starting point. And then infrastructure cost actually increases, because you're exponentially growing the number of resources allocated to smaller data products. But it is doable. It's just hard and expensive. And, I don't know, it looks like a Frankenstein that has been stitched together, and there's a layer that's missing that we are building on top.

DB: A layer is missing now that you're trying to build on top of that. Is that what you were saying?

ZD: Yeah. So, I put this in my second article, where I tried to classify the set of technologies that I think are needed and missing. I would say the bottom layer of that technology - we call it the "base infrastructure" or "utility layer" - is your storage, your workload or data processing, your multi-model access to data, different kinds of storage, the way you manage your APIs and services on Kubernetes. So, that layer exists. But as we talked about, it needs to be reconfigured and priced differently, so that you can use and combine these into data products independently, rather than into one lake or one warehouse.

The second layer is what I call the "data product experience layer." By "experience", I mean that the people providing these data products, the people using them, and the people managing them need a new set of capabilities - so that I can simply, declaratively define my data product, its output ports, what type of data it's sharing, its semantics, its schema, and, here you go, I have a - I don't know, DM [inaudible] like a data mesh control plane interface - go and provision this thing, and then people can come and consume it. So, that layer is certainly missing: the concept of the data product itself, its container and its APIs.
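As a rough illustration of that declarative experience, here is a minimal sketch of what defining and provisioning a data product might look like. The spec fields, domain names and `provision` function are all hypothetical stand-ins, not an actual data mesh platform API:

```python
from dataclasses import dataclass, field

@dataclass
class OutputPort:
    name: str
    format: str   # e.g. "parquet-files" or "event-stream"
    schema: dict  # column name -> type: the data contract shared with consumers

@dataclass
class DataProductSpec:
    domain: str
    name: str
    description: str
    output_ports: list = field(default_factory=list)

def provision(spec: DataProductSpec) -> dict:
    """Stand-in for a data mesh control plane call: a real platform would
    create storage, pipelines and access endpoints from the declaration."""
    return {
        "product": f"{spec.domain}/{spec.name}",
        "ports": [port.name for port in spec.output_ports],
        "status": "provisioned",
    }

# Declare the data product once; the platform takes it from there.
spec = DataProductSpec(
    domain="patient-care",
    name="covid-conversations",
    description="Chatbot conversations shared for downstream analytics",
    output_ports=[
        OutputPort(
            name="daily-snapshot",
            format="parquet-files",
            schema={"conversation_id": "string", "started_at": "timestamp"},
        )
    ],
)
print(provision(spec))
```

The point of the sketch is the shape of the interaction: a domain team declares what it shares, and the self-serve platform handles the how.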

And then the level above that is managing the emergence of this kind of intelligence at the mesh level, and giving access to people at the mesh level. So, how can I search, browse and explore at the mesh level? How can I see the emergence of some sort of knowledge graph at the mesh level? Those capabilities somewhat exist today - again, we have various versions of data registries and data catalogs - but they're built on a different assumption. They're built on the assumption that data is these messy, unclean bits and bytes that somebody dumped somewhere, or that sit in the silo of my application databases, and that I need to go and gather and look around and somehow make some meaning and insights out of.

Like, assume for example that this particular file dumped on this disk is high quality because I have a lot of users for it. That's a very different model from what data mesh says. And maybe I'm wrong - maybe that will never happen. But what data mesh says is that a data product intentionally exposes its data, with SLOs around quality, its data schema, its data semantics. And yes, we need a mesh level that gives access to all of those and aggregates them. But we are intentionally sharing that data, the same way we intentionally share APIs.

DB: Speaking of APIs, with security of these data domains and product owners, is it the same challenge associated with sharing APIs within an organization and securing those APIs, and making sure that the data relevant to the APIs are accessible to the stakeholders that should have access to it? Or are there wider security issues that people should be aware of?

ZD: I think it includes the things you've mentioned, but it also has wider implications, and I'm not sure security is exactly the right umbrella term. But security, privacy, consent - all of those things are there. And then the access model is slightly different. You know, if I want to do a population analysis, I don't necessarily need to know the population list, the individual users. I want to do some sort of analysis on the statistical shape - what are the age groups, and so on. As a consumer, I probably don't need to know the exact addresses. And if there is differential privacy being applied to it, then okay, can I still do a population analysis without accessing individual rows?
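A toy sketch of that access model: consumers get the statistical shape of the data, never the individual rows. The field names are invented, and the Laplace-style noise below (the difference of two exponential draws) is only a crude stand-in for a real differential privacy mechanism:

```python
import random
from collections import Counter

def population_shape(rows, field_name, epsilon=1.0):
    """Return noisy per-group counts instead of individual rows.
    `epsilon` plays the role of a privacy budget; smaller means noisier."""
    counts = Counter(row[field_name] for row in rows)
    return {
        # Exp(eps) - Exp(eps) is Laplace-distributed noise with scale 1/eps.
        group: count + random.expovariate(epsilon) - random.expovariate(epsilon)
        for group, count in counts.items()
    }

users = [{"age_group": "20-29"}, {"age_group": "20-29"}, {"age_group": "30-39"}]
shape = population_shape(users, "age_group")
print(sorted(shape))  # -> ['20-29', '30-39']: groups only, no rows or addresses
```

The consumer can still answer "what are the age groups and roughly how big is each?" without the data product ever exposing an individual record.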

So, there are additional concerns that we need to consider around security as an umbrella term for a data product. And in terms of how we're going to implement this, I think this is exactly where we can learn a lot from what happened in the microservices world with zero trust architecture: enforcing policies, security, access control, encryption, and all of those good things at the granularity of every access point - which in this case would be an output port on a single data product. And to be able to do that, we need to be able to change those policies in an automated fashion, to configure and author them and push them to those data products very quickly with automation, and to be able to observe them.

So, I think there's a lot to learn there, and this is actually a very uncomfortable space for people who come from traditional big data, because of its centralized nature. Forever and ever, we just needed security by perimeter - putting guardrails around the body of the data or the accounts that can access it. But here, we can really secure not only every single data product, but also monitor and change those security parameters in an automated fashion. And if we create this new concept of the data product, which is beyond just the data itself - yes, it has pointers to the way the data is stored and it manages that storage, but it also has mechanisms for policy configuration and policy enforcement - then the sky's the limit. You can just keep adding new policies to it. You can start simply with access control, then add anonymization, and keep adding. Because each of these data products has that policy engine embedded in it - it's part of the construct of the data product itself - we can continue extending them. And this is not a new idea. This exists in the operational world; we just have to think about data differently.
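To make the "embedded policy engine" idea concrete, here is a minimal sketch of a data product port that applies a stack of composable policies on every read. The policy functions, record fields and class are illustrative assumptions, not a real product's API:

```python
# Policies are plain functions: record in, transformed record (or None) out.
def access_control(record, consumer):
    """Drop the record entirely for unauthorized consumers."""
    return record if consumer.get("authorized") else None

def anonymize(record, consumer):
    """Strip direct identifiers before the record leaves the product."""
    if record is None:
        return None
    redacted = dict(record)
    redacted.pop("patient_name", None)
    return redacted

class DataProductPort:
    """An output port with its policy engine embedded in the product itself."""

    def __init__(self, records):
        self.records = records
        self.policies = []  # applied in order on every read

    def add_policy(self, policy):
        self.policies.append(policy)

    def read(self, consumer):
        out = []
        for record in self.records:
            for policy in self.policies:
                record = policy(record, consumer)
                if record is None:
                    break  # a policy vetoed this record
            if record is not None:
                out.append(record)
        return out

port = DataProductPort([{"patient_name": "A", "age_group": "30-39"}])
port.add_policy(access_control)
port.add_policy(anonymize)  # "keep adding" new policies over time
print(port.read({"authorized": True}))   # [{'age_group': '30-39'}]
print(port.read({"authorized": False}))  # []
```

The design choice mirrors the transcript: because enforcement lives inside each data product rather than at a perimeter, new policies can be authored centrally and pushed to every port without touching the data itself.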

DB: You've rolled out the data mesh to some of your clients, as you said, there were some technical challenges associated with that. What are the kind of benefits that they've realized out of the projects?

ZD: The scale and speed. I mean, initially these projects take quite a bit of time to get to the point where you can see the scale and speed, because, as I said, there is no solution to buy off the shelf. And I don't think that there will be one solution off the shelf. This is an ecosystem play, and there needs to be a combination of interoperable solutions coming together to really raise the bar. But with the clients I started with two years ago, we started from scratch. So, we had to build a lot of stuff, and that takes time. And I was extremely lucky to work with organizations that were technically ambitious, so we could do this. They had the investment; they had commitment to their data strategy. And part of that data strategy was building this data mesh platform.

So it took a while. But even starting with very primitive elements that let you configure and provision these data products, you can see the benefit, because you're not dependent on a single team to do it for you anymore. You're not dependent on the data platform team, or the BI team, or whatever team it is. You can just start new teams around a group of data products, and they can go off and use the platform to bootstrap themselves, start creating these data products and get them into the hands of the users. And that was evident when COVID hit: one of our clients very quickly provided additional services to their members - consulting with COVID patients and so on through chatbots.

And we had, very quickly, new data products set up to capture those conversations and use them downstream to provide a better experience to their members. And it just took us, you know, a week or 10 days, a couple of weeks. Of course, people didn't sleep during the weekends in those 10 days, and there was a lot of integration going on, but yeah, the benefit became evident. And since then we have been able to scale out. We can scale out on the number of data product teams easily; you have an axis along which you can easily scale.

DB: And how about some of the unexpected challenges associated with these projects?

ZD: Where should I start? You know, you're venturing into the unknown, and you come across things that you expected and then things you really didn't expect. This is a story a colleague of mine shared with me from that particular account. They built this data product provisioning engine that was working really well. They made it really, really easy, in a declarative fashion, to put up the scaffolding - say, for the repo of a data product - declaratively say what this data product is, run a command, and get it all provisioned: the CI/CD pipeline, Databricks clusters, Spark jobs running, your accounts for your lake storage, all of those good things that need to come together to support this particular data product.

And because it was so easy, the next thing we knew, people started creating these data products - maybe too many of them. And then we ran into really hard limits on our cloud. Our particular cloud provider just ran out of resources. They had never imagined somebody would want to have 200 data lakes, because that's not how they had designed it. So, very hard limits, probably coded somewhere in their cloud infrastructure. That was one of them on the technical side. And then I can talk about the people side, if you're interested too?

DB: We have a few more minutes. Why not? Let's touch on some of the people side, because it is a people problem, right? Particularly with product ownership and domain ownership. So what were some of the people challenges?

ZD: So, those challenges still remain after two years. Those are really, really hard challenges. The ownership of the data - who owns it, what this new role is - will certainly go through some sort of evolution. You try to find the people you already have who you think are closest to the job, because you're starting on day one from writing a job description for a role that doesn't even exist today. And then, you know, you've hired your governance team members and so on, so there's a transformation in that thinking. But even just conveying what "data product" means takes a long time. People go, "What are you talking about? Is this a file? Are you talking about lake files? Are you talking about tables in the warehouse?" People who come from that background especially try to map their world and language, the things they knew, onto this new thing that never existed before.

So, that translation and vocabulary - and then influencing how you design and even converse - took a while, and we still run into hard moments where people say, "Oh, that's what you really meant by the product. I thought you meant a table in a warehouse." And if you don't address those concerns early enough, what you end up with is a distributed data warehouse - you know, like a distributed monolith. It's just not a good place to be.

DB: You mentioned that it took a couple of years and all of a sudden it's getting traction, it's firing, taking off. So, what's happening for you now? Are you getting called to speaking engagements? If you're allowed to do them in person, how are you seeing adoption taking off? Is it the tools, vendors?

ZD: All of the above. It's really interesting, because when I wrote the article - in fact, for my day job I said, "I want to go and help other people come up with the next idea." That's why I'm the Director of Emerging Tech - well, "Next Tech" is the name we use internally - and I thought with this data mesh thing, "I'm done. It's out there; people will take it or not." And little did I know that my days and my life would be consumed by data mesh. So, it's really interesting. The big providers - all of them - are interested in data mesh and are already putting articles out there. What we're exploring with them at ThoughtWorks, because we have partnerships with our clients, is, "Let's go to clients together with an open mind, and where your product makes sense, we'll use it to build the data mesh." If it doesn't, you can learn a lot from that experience and evolve your product.

So, that is certainly taking off. I think there's a growing number of startups, probably early in their journey, who hear about data mesh and go, "Oh, there's an overlap between the product I'm building and data mesh." So, there's a large number of startups exploring that space and trying to see whether there's an overlap, and that's really exciting. Yes, speaking engagements - this podcast and many, many others. I feel like people are probably sick of hearing me. But I feel it's my responsibility to share that as much as possible, because people are at different points in their journey.

So, I'll be doing this for a while, and I'm also writing a book, a brief introduction to data mesh. I'm struggling to keep it brief as I work through the chapters - it's 150 pages already. And I'm being told that it needs to be a call-to-action book. So, before the end of the year, there will be a call-to-action book that jots down the thesis, the tenets, and everything I know - and we know - so far. There's a lot to be learned.

DB: Is this the first major publication since your original article on data mesh?

ZD: Yes, and with this one I'm being very thoughtful not to be too specific early on. You know, I could write down the specifics based on what we've built, but that might be a terrible first implementation. So yes, it will be the first publication. In fact, I must say that there is a new book coming out, "Software Architecture: The Hard Parts". It's an architecture book, not a database book, but there will be a small part covering data mesh from an architecture perspective, not from the full concept perspective, and that book, which I'm contributing to, might be coming out soon.

DB: Well, Zhamak, congratulations! You know, it may feel like it's been a long time coming to success, but really, two years is a blink of an eye. And I'm sure the journey is going to continue for you for quite some time yet. Another fellow Australian based in San Francisco, thought leader at ThoughtWorks, congratulations on the success you're seeing around data mesh and your journey with your new publication. Thank you for joining us on the podcast. And maybe once your book is published, we can get you back. And you can share some of your thoughts of how you've taken the concept to the next level of what you're seeing evolving into the future.

ZD: Thank you for having me and thank you for those wonderful, thoughtful questions.

KM: Before we let you go, Zhamak, where can people follow more about your journey on data mesh and among other things?

ZD: Of course, a few places. I mean, I'm on LinkedIn and on Twitter at "@zhamakd". In addition to those, there's a data mesh learning community on Slack that a fellow technologist started, with a lot of passion and energy beyond what I could do. It's called the Data Mesh Learning channel - it's a Slack channel and it's a great place. I think there's a [inaudible] that has been created around it; I'm happy to share the link for your show notes later. In fact, it has been an amazing community. In the span of a few weeks we went from nobody to nearly 2,000 people, very engaged. I'm very excited to see how that community grows, and within that community we publish and share various information about data mesh.

KM: This has been a fantastic conversation. Thank you very much, Zhamak, for being here. Thank you!
