Data Mesh is a trending topic. But is it a good fit for you?
The first page of Google results, at least for me, is a list of pages that, with one exception, are gated forms to fill out for access to someone's white paper, a book to purchase, or a product to try.
As a generalized technology/architecture, that puts it firmly in suspect territory. Everybody who wants to tell you about Data Mesh has something to sell you about Data Mesh. You have been warned!
The one exception in my first-page list is the Martin Fowler page on the topic, which is authored by the originator of this technology proposal. The rest of this article presumes the reader has read at least that.
I’d like to tell you why Data Mesh is likely a fit for you – if you work in a large organization with a large IT budget and some appetite for risk. I’ll assume you’ve read the basic discussions of Data Mesh and understand the basic premise and constructs.
In the abstract, at the risk of oversimplifying, Data Mesh seems to be a type of microservices approach for replacing or supplementing data lakes. Data lakes have real problems! One of my personal faves is managing schema drift in JSON dumps being conformed to some reporting schema. We simply don’t have time to go into the authorization concerns…
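To make that favorite problem concrete, here is a minimal sketch of catching schema drift before a JSON dump is conformed to a fixed reporting schema. The field names (`order_id`, `amount`, `region`) and the `find_drift` helper are invented for illustration; this is not from any Data Mesh reference implementation.

```python
# Hypothetical example: flag where an incoming JSON record has drifted
# from the reporting schema we conform it to. All names are invented.
import json

# The flat reporting schema we expect incoming records to match.
EXPECTED_FIELDS = {"order_id": int, "amount": float, "region": str}

def find_drift(record: dict) -> list[str]:
    """Return human-readable descriptions of where a record drifts
    from the expected reporting schema."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(
                f"type drift in {field}: expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    # Fields the producer added that the reporting schema never heard of.
    for field in record.keys() - EXPECTED_FIELDS.keys():
        problems.append(f"unexpected field: {field}")
    return problems

# A record that has drifted: amount became a string, region vanished,
# and a new currency field appeared.
dump = json.loads('{"order_id": 7, "amount": "19.99", "currency": "USD"}')
for issue in find_drift(dump):
    print(issue)
```

In a real pipeline this kind of check tends to live in the ingestion layer and route drifted records to a quarantine table rather than printing them, but the shape of the problem is the same.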
Microservices did solve some problems even while creating many new ones that then required new technology practices to solve. Those new technologies became a de-facto standard of their own, and in hindsight 10 years later I can say overall, it was a positive for my projects, as well as the industry.
If this article finds its audience, then in a Data Mesh approach you will be decentralizing some things you probably “own” today and managing federated versions of others. Similarly, Data Mesh will require you to build or provide tooling that gives your domain teams a consistent approach to developing and deploying Data Products – productizing what you already do in-house for consumption by whomever the domain team chooses to hire to build that ETL system or BI app. Rails, if you will. Again, that sounds (mostly) good, with challenges that force you to firm up shaky bits like internal documentation of systems and practices.
What We Pull Apart
You will have domain teams, integrated with subject matter experts, developing “Domain Data Products” specific to the data domain they are intended to provide value from. On the surface this separation of responsibilities makes sense. Let’s put the challenges closest to the solvers. Let’s develop the solutions right next to the experts. Shorter loop times. Faster clarifications. How we integrate and analyze data will require a ton of SME input. A proper “pod” of folks with the various skill sets makes perfect sense to align with the experts in the domain.
Still, when I think it through for a moment, I must ask myself if I have the staffing and budget to provide data engineers and analysts to each key business domain the organization works in. Do they have budget for departmental IT? Thinking over the history of the larger companies I’ve worked in, it’s a firm “no”.
Not that the organizations don’t have the moolah – it just isn’t typically allocated that way. So – maybe instead you can convince those business areas to budget for tech themselves? Rationalizing those expenditures will be its own exercise in politics, education, overexplaining to people who have a vested interest in pretending to not understand you, and potential frustration.
Even if they did want to own/invest in that – it likely would not be ongoing. They would want pods they can spin down after a project and not pay for until the next one is in the chute. Does it make you wince to think of rolling off a great engineer because they did an awesome job finishing Department X’s Data Product? So, you think, I need to rent these resources out but retain ownership…this feels familiar.
Each team might also need a devops engineer and a tester to handle deployment and testing in some shared infrastructure space set up by the centralized data platform team. To some extent it depends on how good your shared-services team is in developing, delivering, documenting, and supporting frictionless solutions as if they were products in themselves.
What We Keep Together
How we use Google Cloud Run or Kubernetes or (insert provider/exec stack) to deploy applications to shared or common infrastructure has become a baseline expectation of any given applications team. There’s usually at least one devops-oriented person on these teams who can handle the infra markup and troubleshoot any deployment issues. These days it seems like table stakes for senior developer roles. One wonders whether this makes sense for the data engineering role. How we push analytics dashboards/apps/APIs to hosting is an entirely different skill, and perhaps one that is better centralized.
Even the skills of an excellent BI dashboard analyst are something that could be a shared resource. Not to mention you’ll probably want delegates on your centralized team for outreach to the domain teams, to keep the federated pieces aligned/connected/integrated/governed.
In microservices we frequently had the mindset that the language of choice was pretty much up to the implementing team so long as it was containerized and deployable with some API. Non-functional concerns like log aggregation and alerting and monitoring (not to mention authentication) are often delegated to a more central team.
How Meshy Should We Be?
In Data Mesh it feels like we ought to limit the surface area of implementation. We can’t have each team choosing their own BI tool. Licensing costs and user experience will suffer greatly! Not all BI tools are a good fit for federated installation/security without serious licensing cost impact, and it adds another dimension to your governance concerns.
What about the pipeline orchestration engineering work? Should one team choose Airflow and another team choose AWS Step Functions? What is the organizational burden of a multi-pronged system orchestration approach? Step back another layer. Should one team use dbt and another team use Apache Spark for ETL? Who’s on point when something goes wrong?
I still find myself arguing with folks about whether a microservices set should ever share a common base library. Then I find out they were using Ruby and I was using Python. But I digress…As a big advocate of microservices, my initial response to Data Mesh was positive. Basic software axioms like don’t repeat yourself (DRY) and single responsibility principle (SRP) give us useful patterns. These patterns are guidelines to help us limit maintenance costs and challenges. Don’t these patterns apply in the data engineering and analytics space? At what granularity? Where do componentization and reuse land when everything is supposed to be completely separate and self-contained?
And do any of these changes, in the end, actually simplify (or rapid-ify) how our data works for us?
I’ve read at least one blog, and also had a firsthand conversation about how Data Mesh got [their] project delivered very fast. Undoubtedly their project, whatever it was, seemed to come together unusually quickly. Was it Data Mesh? Or was it that they were, for a moment, able to operate outside any of the normal rules/systems because they were prototyping Data Mesh? That first microservice came together real quick back in the day. I think it was the third one that needed to talk to the first and second one that started the awakening.
I would assert that before you ever heard of Data Mesh, you were probably already building data products (lower case d, lower case p). Analytics dashboards or data delivery APIs are nothing new. Splitting them by business area is nothing new. The innovation, if you can call it that, is pushing the engineering responsibility closer to the business user. This can be achieved in a variety of ways and Data Mesh is one. Another is to do a better job of convincing your product owners and business users that they need to be a part of the design and development cycle, riding shotgun with your data engineers and analysts to provide better input and feedback.
If you are one of the orgs that treats your data team like an IT support team, try instead partitioning (podifying?) your data team and allocating them to specific LOBs or projects for a mission/duration. Think of them as a “Data Specialization Corps” instead of rental coders and the SMEs as “Domain Ambassadors” – and the Data Team department as a suzerainty.
The rest of it is simply doing the job right like you probably should have been doing before you ever heard the word (D)ata (M)esh.
Final thought on this – the Gartner 2022 Hype Cycle indicated, through its fun language, that Data Mesh would be obsolete prior to reaching the “Plateau of Productivity”. While that call has its naysayers, I tend to agree with Gartner here. Still, I believe the fundamental concept – solving a social/technical problem with a “decentralized sociotechnical approach” – does address a void that needs filling. Perhaps I’m a dreamer, but I’d like to see that problem solved organizationally and structurally, and I remain unconvinced it can be solved technologically by moving the toys into the other room.