
JMS vs. Kafka: Technology Smackdown with Clement Escoffier and Kai Waehner

Two of the most popular messaging technologies used today are JMS and Kafka. While both have their own pros and cons, which one should you actually use?

In this episode, we have our first-ever Technology Smackdown, where we pit two technologies or architectural styles against each other for a friendly sparring match. Today, we find out how JMS and Kafka stack up against each other and how they apply to certain use-cases. Joining us are Red Hat’s Senior Principal Software Engineer Clement Escoffier for the JMS side, and Confluent’s Field CTO Kai Waehner for the Kafka side.

Episode outline

  • Clement Escoffier answers which problems JMS is trying to solve.
  • Kai Waehner discusses what brought about the need for Kafka.
  • Escoffier and Waehner weigh in on the pros and cons for both message brokers.
  • In which use-cases do each of the message brokers shine the most?


Kevin Montalbo

Welcome to Episode 66 of the Coding Over Cocktails podcast. My name is Kevin Montalbo, and joining us from Sydney, Australia is Toro Cloud CEO, and founder David Brown. Good day, David!

David Brown

Hi there, Kevin!

Kevin Montalbo

All right. In this edition of the Coding Over Cocktails podcast, we're going to have a Tech Smackdown, putting forth two technologies or architectural styles in a friendly sparring match. Today, we're debating between two messaging services; JMS versus Kafka. Which message broker should you choose? 

On the JMS corner is a Java Champion, senior principal software engineer, and Reactive Architect at Red Hat. He has contributed to several projects and products, touching many domains and technologies such as OSGi, mobile, continuous delivery, and DevOps. More recently, he focused on Reactive Systems, Cloud-Native applications, and Kubernetes. He is an active contributor to many open-source projects such as Apache Felix, Eclipse Vert.x, SmallRye, Mutiny, and Quarkus and he recently authored the Reactive Systems with Java book.

Joining us today to represent the JMS side is Clement Escoffier. Hi Clement, welcome to the show!

Clement Escoffier 

Hello, thank you for having me!

Kevin Montalbo

On the Kafka corner, we have the Field CTO and Global Technology Advisor at Confluent. He works with customers across the globe and with internal teams such as engineering and marketing teams within organisations. His main area of expertise lies within the fields of Data Streaming, Analytics, Hybrid Cloud Architectures, Internet of Things, and Blockchain. He’s also a regular speaker at international conferences such as Devoxx, ApacheCon and Kafka Summit, and writes articles for professional journals, and shares his experiences with new technologies on his blog.

Representing the Kafka-side is Kai Waehner. Hi Kai, welcome to the show!

Kai Waehner

Hey, everyone! Thanks for having us. Great topic to discuss today.

David Brown

No, thank you. It's a pleasure to have you both on the program. We actually came up with this topic because this is one of the more popular blogs on our website. There's still obviously a lot of interest from our readers on this topic of JMS versus Kafka. It has a lot of search volume as well, so it's still obviously pretty relevant today. So, before we jump into the relative advantages and disadvantages, maybe we can just give a brief outline. We’ll get each of you to give a brief outline of the problems that each platform is trying to solve. So maybe Clement, you can start us off by describing [how] JMS has been around for a while. What is the solution and what problems does it try to solve?

Clement Escoffier

So, yeah, so JMS has been around for a long time, probably 20 years or something like this. It's actually aged relatively well, if we look at all the other technologies that come from more than 20 years ago. So, what JMS tried to address was really to have a way to implement asynchronous communication between different applications or different parts of one system. [JMS] very quickly became popular for enterprise integration patterns, EIP, when it was the beginning of the rise of what we called ESBs and things like that. So, we needed a way, a bus, to exchange those messages between various applications, or parts of your systems. So, JMS is Java-centric, as “J” is for Java. However, implementations generally rely on different protocols.

If we take Artemis or ActiveMQ, they have their own core protocol, implementing JMS on top of that. And generally, when you use a JMS broker, you actually have a set of protocols that are offered. It's not necessarily a specific one. Typically, I'm mostly recommending using AMQP 1.0. The “A” of AMQP means “advanced” and believe me, they don't lie about the advanced parts. It's a complicated protocol, but it's very, very well done for cloud and things like that. So, one of the advantages of JMS is that it's very Java-centric, so you have to use Java. But you have these other protocols that you can use. But it's a clear semantic [that] when you are in Java, you know exactly how it's going to work. 

You have two types of deliveries: queues and topics. And be careful, because the topics of JMS are different from the topics of Kafka. So, we will always have to mention what kind of topics we are referring to. A JMS queue is a queue of messages. You have messages inside your queue, you have a set of consumers, and they share the load. It's unbounded, so you can have as many consumers as you want. So, it's a big advantage on the cloud, because you can scale up and scale down according to the depth of the queue. It has delivery guarantees and things like that, so it has persistence, durability and so on. A topic is a pub-sub implementation in the sense that all the messages will be broadcast to all the consumers. So, I won't go into the details of how it's implemented, but basically it means having a queue per consumer. So, it's just a copy of the messages.
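To make the two delivery modes concrete, here is a minimal, hypothetical Python sketch (in-memory toy classes, not real JMS API code) of a queue with competing consumers versus a topic that copies each message to every subscriber:

```python
from collections import deque

class Queue:
    """Point-to-point: competing consumers share one message stream."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def receive(self):
        # Each message is delivered to exactly one consumer.
        return self._messages.popleft() if self._messages else None

class Topic:
    """Pub-sub: every subscriber gets its own copy of each message."""
    def __init__(self):
        self._subscribers = {}
    def subscribe(self, name):
        self._subscribers[name] = deque()
    def publish(self, msg):
        for q in self._subscribers.values():
            q.append(msg)   # effectively "a queue per consumer"
    def receive(self, name):
        q = self._subscribers[name]
        return q.popleft() if q else None

# Queue: the two receive() calls drain each message exactly once.
q = Queue()
q.send("order-1"); q.send("order-2")
print(q.receive(), q.receive())   # order-1 order-2

# Topic: both subscribers see every published message.
t = Topic()
t.subscribe("billing"); t.subscribe("audit")
t.publish("event-1")
print(t.receive("billing"), t.receive("audit"))  # event-1 event-1
```

This is only the delivery semantics; persistence, durability, and acknowledgements from the discussion are deliberately left out.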

So, based on these two relatively simple, well-understood delivery mechanisms, JMS has been successfully [solving] a lot of problems for the last 20 years. And we find it in banking systems, in smaller applications where asynchronicity is important, like ordering systems or logistics, or even IoT these days. It works relatively well on the cloud, on things like Kubernetes or in containers, because it doesn't have any consumer limits when you have queues. However, it has a downside, typically [in] ordering. You can't guarantee ordering; it can be quite different, which is one of the things where Kafka is very good.

David Brown 

You're not arguing for Kafka here. [laughs] Like, you raise those disadvantages. 

Clement Escoffier

I generally speak about both. So, that's why.

David Brown

You're being too honest there. [laughs]

Clement Escoffier

So, if you're looking for a hybrid cloud kind of thing (hybrid cloud is a system where you have multiple datacenters, multiple cloud providers), your application is replicated or distributed across clouds. In that case, the filtering and transfer capabilities of JMS and of the underlying protocols, between one datacenter and another, are very, very advanced, with fine-grained tuning and all that. So, you can copy only the messages which will actually be consumed on the other side, and things like that. So, you have lots of flexibility.

That's more or less it. Well, it's been popular for almost 20 years. It's still heavily used. As a member of the Quarkus team, I see lots of demand around JMS. It's quite popular. We have a connector for JMS, a connector for Kafka. Of course, these days we have more demand around Kafka because, well, it's hyped and people want to use it. It's a very cool technology. But when people do migrations, when people have more legacy kinds of systems, JMS is still where things happen.

David Brown 

You did mention a couple of the legacy apps, like the ESB-type architecture from the legacy applications. But, as you said, they’re still relevant today for pub-sub type architectures and queue type applications as well. So, Kai, maybe you can start on if JMS was doing all this so well, it has been around 20 years, what brought about the need for Kafka? 

Kai Waehner 

You'll see that with Kafka, you can do many things, and it's much more than just a messaging broker.

Yeah, that’s a good point. First of all, I really want to make clear that in the end, we are comparing apples and oranges here. Right? And even crazier than that, I would even say they're not even both fruits. Because on the one side, if we talk about JMS, and that's why I really want to emphasise this, this is a standard protocol, right? So, this is a standard specification, and you need to implement it. And then Red Hat has Artemis, and then IBM has IBM MQ, and everyone has a different implementation. So, that's the first big difference, and therefore the myth that you just use one broker and can then migrate to another vendor, that's not true. And we can discuss this later today. So, that's the one point. We really are comparing apples and oranges, because Kafka is a protocol implementation, not a specification.

If you use Kafka, it doesn't matter if you use Confluent’s Kafka, or if you use IBM's Kafka or Red Hat’s Kafka, it's Kafka's Kafka, and it works, right? You can just use it. So, that's the first big difference. And the second big difference, and this is also now where we see that both are relevant today, because as we now just heard, the JMS broker, whichever you use, is a messaging broker. You use it for sending data from A to B, and it's very simple and understood and works well for that. 

On the other hand, Kafka is a very different technology. It's what we call an event-streaming or data-streaming platform, and with that, on a higher level, I always explained it as four things. In the intro, it's really funny because we heard that we are comparing two messaging brokers, but that's not true because Kafka is four things in the end; it's a messaging broker, so that's the part you can compare to the JMS broker. And in combination with that, it's a storage system. The JMS broker is partly like that, but much less so, a little bit. And in addition to that, with Kafka, and I'm really just talking about open source Kafka, right? Not about any vendor like Confluent or anything else, it's also a data integration platform with Kafka connect. And it's also a stream-processing platform with Kafka streams. 

That's all part of the single Apache Kafka download you have. And you'll see that with Kafka, you can do many things, and it's much more than just a messaging broker. If you want data integration, Red Hat, for example, uses Apache Camel, which is a fantastic framework. I worked for Talend 10 years ago, so I know these integration frameworks, but you need more components for that.

And I also worked for Tibco before, so I know the messaging part well, and the stream processing part. And this is really, on a high level, the two points for Kafka: it's an implementation. It's not a standard where you have different vendors; it's an implementation. And on top of that, it's much more than messaging, and this is the difference. And now, and in the future, we will still see both, because they serve very different use-cases. For pub-sub messaging from A to B, at least if it doesn't have to scale extremely, JMS and its brokers are great for that. For anything else, people use Kafka more. And the key point here is you can use it for transactional workloads and analytics workloads.

So, that's also a common misunderstanding. People think it's only for big data. That's what it was built for over 10 years ago at LinkedIn. But today, over 50% of the use cases I see are about transactional workloads. So, this is another myth from people who say, “I should always use JMS for transactional workloads.” That's not true. You can use it, but there are some pros and cons, right? But you need to understand that you can also use Kafka for transactional workloads.

David Brown 

So there is a bit of crossover in that regard. Like you said, there's a bunch of stuff Kafka does where it's not really comparable to JMS, but there is some crossover in terms of the message broker. And as you said, it's now handling transactional workloads, which have been the traditional domain of JMS. So, if we were to zoom in on those areas where there is a crossover, forgetting about the vendor aspect of the different vendors of JMS brokers, and just talking about the protocol versus Kafka, what are the relative advantages and disadvantages? When should you choose one or the other as a message broker? Clement, what would you say to that?

Clement Escoffier

There's plenty of things you need to understand to use and operate Kafka very well. JMS has learned over these 20 years to try to improve that, to make sure that you don't shoot yourself.

That's an interesting question because when you need pure asynchronous communication from A to B, typically JMS tends to be simpler and better understood. But Kafka looks simpler and yeah, you can do a “hello world” in Kafka faster than a “hello world” in JMS. Definitely. Just starting the JMS broker will take more time than doing the “hello world” thing. However, there's plenty of things you need to understand to use and operate Kafka very well, while JMS has learned over these 20 years to try to improve that, to make sure that you don't shoot yourself. 

So, for example, when you write, you get an acknowledgement and things like that. It's pretty simple. When it's written, and when you consume it, when you are done, you acknowledge. Period. That's it. Done. Well, if you want to do this on the Kafka side, it's a little bit more tricky, because you need to commit offsets, and not every time, because that's a very bad thing to do. And Kai may explain why. So, you need to have offset management and checks. And when you commit an offset, that means that every record before that has been processed successfully. So, as soon as you do asynchronous sinks in the middle, it can be tricky. So yeah, simple doesn't mean bad. Simple means that it's easy to use for most developers that don't necessarily want to have a degree in messaging or streaming technologies. They can go there, they know: “Okay, simple. I do a producer. I get my acknowledgement when it's returned. I even have a blocking API and I know that it's going to block until it's written.” You have time-outs and things like that.
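Clement's point about offset commits can be sketched in a few lines. This is a toy, in-memory model (hypothetical class names, not the Kafka client API) showing why committing is not a per-message acknowledgement: it marks everything before the current position as processed at once:

```python
class OffsetConsumer:
    """Toy model of Kafka-style offset tracking on one partition."""
    def __init__(self, log):
        self.log = log
        self.position = 0        # next record to fetch
        self.committed = 0       # everything BEFORE this is processed

    def poll(self, max_records=2):
        batch = self.log[self.position:self.position + max_records]
        self.position += len(batch)
        return batch

    def commit(self):
        # Committing declares ALL records before `position` processed,
        # so never commit until the whole batch (including any async
        # sinks) has really finished.
        self.committed = self.position

log = ["r0", "r1", "r2", "r3"]
c = OffsetConsumer(log)
batch = c.poll()          # ['r0', 'r1']
# ... process the whole batch here ...
c.commit()
print(c.committed)        # 2 — r0 and r1 acknowledged as a group
```

Contrast this with the JMS model, where each message is acknowledged individually and there is no position shared across messages.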

On the other side, same thing. You get messages one by one, not by batch as Kafka is recommending. So, you get a message one by one, you process it, you can easily filter: “Oh yeah, I want only the messages with that header, or with that content type, or coming from that destination,” and things like that. So, the filtering capabilities are very interesting too. And again, it has really been designed to be relatively easy to understand. Again, we need to remember that most developers don't necessarily want to spend their nights and weekends trying to understand all the nasty details of how asynchronous flows work under the hood, and things like that. They want to get the job done.

David Brown

Kai, what would you say to that? JMS is a simple and mature framework. What would be your response to that?

Kai Waehner

No, I fully agree with that. So, that's totally true in the end. It's very simple because, again, the main idea is to send data from A to B, and if that's your main goal, then a JMS broker is very likely the right choice, right? Because with Kafka, you do much more. For most use cases we see, it's not enough to just send data from A to B, because you also want to use it and correlate it in real time and integrate with systems. Some are real time. Some are not real time, like a data lake. So, you use it for a much bigger space. And that's why, again, this is an apples-to-oranges comparison. And I always recommend thinking about the whole use case, not just this broker part. That's my first thing.

But nevertheless, now we should still talk about this comparison of only the broker. Even there, it's important to understand the differences. And there are differences which have pros and cons, like we already mentioned. In the end, it's totally true that JMS is a simple API. You just send and consume. On the other hand, with Kafka you can do much more, and have things out of the box, like guaranteed ordering. And there are things for critical business applications you get out of the box, so you don't have to worry about them. And therefore it always has pros and cons. The one thing I would say about the JMS API is that it is really better if you need some communication paradigms, like request-reply. You can do that with Kafka, but there's a pattern around it. You implement it differently. With JMS, you have it out of the API.

So, that's the thing where it's just messaging, right? But then for the filtering part: with Kafka, you don't do filtering on the broker. That's very different from a JMS broker, where you would do things like filtering. In Kafka, this was implemented like this intentionally, because it's not good at scale. So, in Kafka, we also take a look at the whole ecosystem. You do the filtering and processing in a separate application, which can be, for example, a Kafka Streams application, or a KSQL application. So, it's a different kind of architecture, and it depends on the use case. Once again, if you just want to send the data from A to B, and maybe filter it, and do that at a relatively small scale, then JMS is the right choice. In other cases, Kafka is typically the better fit.

And the last part, and this is also a very critical one, which people often don't understand well, is that how you consume data is very different in Kafka and in a JMS broker. In a JMS broker it's push-based, which means the broker pushes the data to the consumer, which sounds more obvious because people say, “Yes, I want to get data in real time, event by event.” Well, reality is that this also creates pressure on the consumer and especially in a world where you have different consumers in the microservices world. You need things like back pressure handling, and that's built into a Kafka protocol because it's pull-based. So, each consumer decides by themselves when and how often to consume data. 
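The pull-based back pressure idea Kai describes can be sketched as a toy, in-memory model (hypothetical names, not the real Kafka consumer API): the producer appends to a log at its own speed, the consumer fetches batches at whatever pace it can handle, and the difference simply accumulates as lag in the log instead of pressure on the consumer:

```python
class Log:
    """Broker-side append-only log; it never pushes to anyone."""
    def __init__(self):
        self.records = []
    def append(self, r):
        self.records.append(r)

class PullConsumer:
    """The consumer, not the broker, decides when and how much to fetch."""
    def __init__(self, log, batch_size):
        self.log, self.pos, self.batch_size = log, 0, batch_size
    def poll(self):
        batch = self.log.records[self.pos:self.pos + self.batch_size]
        self.pos += len(batch)
        return batch
    def lag(self):
        # How far this consumer is behind the head of the log.
        return len(self.log.records) - self.pos

log = Log()
for i in range(10):                      # a fast producer
    log.append(f"r{i}")

slow = PullConsumer(log, batch_size=3)   # a slow sink fetches 3 at a time
print(slow.poll())   # ['r0', 'r1', 'r2'] — the consumer sets the pace
print(slow.lag())    # 7 — lag sits in the log, not on the consumer
```

In a push-based broker, those 7 outstanding records would instead be pressing on the consumer, which is the scenario back pressure handling exists to avoid.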

The drawback of this, and that's what was already explained by Clement, is that you need to understand how it works. It's not just like the JMS API, where you just get it. But therefore, you have all this power to get it right for your application, because your data warehouse is not built for real-time ingestion. It's built for taking snapshots of data, like the last hundred messages, and ingesting them at once, because the indexing layer of the data warehouse works better that way.

It's not the problem of the messaging layer. And so with that, you have to build in back pressure handling so that each consumer can consume it like they need. And still, about the argument of getting a message one by one: well, in the end, the consumer wants to get data, and guaranteed ordering. And the reality is the latency is so low with Kafka, it doesn't matter if you consume 10 or a hundred messages at once, right? You can do it one by one, but it's not recommended, because the performance is worse then, obviously. But we are talking about a few milliseconds here. And this is another point, maybe, where I would say that both JMS brokers and Kafka are not the right choice. If you need very low latency, things like real-time trading in microseconds, that's neither of these. That's a completely different application. And as soon as we talk about real-time, 99% of the time it doesn't matter if you get the data in 10 milliseconds or in 20 milliseconds, at any scale.

And so this push-versus-pull is super important to understand. And pull-based is not as bad as some people think, because again, you have the back pressure handling built in there. And that's the other big difference between how JMS brokers typically work and how Kafka works. And I say “typically” because, again, JMS is a standard, so you can implement it like you want. And if you use different brokers, they work differently for things like security, for things like network transfer, for things like persistence. So, you don't have the same SLAs and guarantees. You need to check with the vendor behind that. With Kafka, you have the storage system behind it; it's implemented in the protocol.

And with that, the benefit is you can store the data in Kafka as long as you want. And you can decide that per Kafka topic. And in addition to the back pressure handling, sometimes you have not just batch consumers, but also consumers that consume the data really late, like a week later, or maybe a request-response from a web app about data that came in a year ago. So, you can easily replay data that is old, completely independently from another consumer that consumes it in real time. And this replayability of data is another huge benefit built into the Kafka system out of the box.
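Replayability falls out of retention plus per-consumer offsets. A toy sketch (hypothetical names, not the Kafka API): records stay in the log after being read, so a late consumer can start from offset 0 while a real-time consumer tails only new records:

```python
class RetainedLog:
    """Records are retained after consumption, not deleted."""
    def __init__(self):
        self.records = []
    def append(self, r):
        self.records.append(r)
    def read_from(self, offset):
        # Any consumer can read from any offset it chooses.
        return self.records[offset:]

log = RetainedLog()
for r in ["a", "b", "c"]:
    log.append(r)

realtime_offset = len(log.records)     # this consumer tails new data only
print(log.read_from(0))                # ['a', 'b', 'c'] — a late consumer replays it all
print(log.read_from(realtime_offset))  # [] — nothing new yet for the real-time consumer
```

In a classic queue, the first consumer's reads would have removed the messages, which is why replay has to be bolted on with a separate cache or store, as Kai notes next.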

And while JMS brokers have some kind of persistence, again, everyone has a different persistence, or you need to check with the vendor. It's not really built for replayability. Yes, it has persistence for high availability. That's critical, and that's what JMS people typically mean. But on the other hand, you cannot replay data so that one consumer gets it in real time and another one gets it a year later. So, that's where you typically work with another caching system added to the broker, or so on. So, there are very different apples and oranges you can choose.

And so, really think about your use case and then choose the right tool for the problem. And that's the last thing for this long discussion. That's why I still think that, also in the future, we will see both platforms, right? Because they have very different use cases. Like Clement mentioned, if you integrate with legacy systems like mainframes, they have a very good MQ interface, right? So, for that, it's a perfect thing to integrate. So, most of our customers integrate with mainframes. We have IBM MQ, we have the JMS API and a lot of proprietary protocol things on top, but then they use the Kafka connectors to combine both systems.

So, these systems are much more complementary than some people think, and it's really better to learn how to use them. I've also seen a customer who tried to reimplement the JMS patterns on Kafka. That will fail. Things like request-reply can be done with Kafka, but it works very differently. If you try to do [it] the same way as JMS, you will fail.
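One common way request-reply is layered on top of Kafka-style topics, instead of the per-request dynamic destinations Kai warns against, is a single shared reply topic plus correlation IDs. A toy, in-memory sketch (hypothetical function names, not a real client API):

```python
import uuid

request_topic, reply_topic = [], []   # two SHARED logs, not per-request queues

def send_request(payload):
    # Tag the request so the reply can be matched later.
    corr_id = str(uuid.uuid4())
    request_topic.append({"correlation_id": corr_id, "payload": payload})
    return corr_id

def serve():
    # The responder writes every reply to ONE shared topic,
    # carrying over the request's correlation id.
    for req in request_topic:
        reply_topic.append({"correlation_id": req["correlation_id"],
                            "payload": req["payload"].upper()})

def await_reply(corr_id):
    # The requester filters the shared reply topic by correlation id.
    for msg in reply_topic:
        if msg["correlation_id"] == corr_id:
            return msg["payload"]
    return None

cid = send_request("ping")
serve()
print(await_reply(cid))   # PING
```

This is the "pattern around it" Kai mentions: no topics are created per request, and replies stay ordered in one log, but the requester has to do its own correlation work that the JMS API would give it for free.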

David Brown

So much good information there. I saw Clement, you were shaking or nodding your head at various points. Would you like to contribute anything more to the comments?

Clement Escoffier 

JMS brokers, on the other hand, tend to be a lot more… I won’t say smarter, but have a lot more features like filtering, routing and things like that.

Everything that Kai said is absolutely, completely true. Typically, request-reply based constructs [should be] in a JMS-based system. Completely. I will even say <inaudible> patterns in a Kafka-based system. So, if you want, you can believe you will get it right. [But] believe me, you won't, because you are going to create dynamic topics, which has an impact on the broker. Or you use specific keys, but then you need to have your partitioning well done, and things like that. Because of the batching on both the consumer and producer sides, it can be very, very weird, like if you have linger time on the producer side.

I have seen that a lot, people trying to implement RPC based on Kafka, and I'm always like, “Hmm, that's not going to fly.” Maybe on your dev machine, yeah, fine, because you have a single broker, a single topic, single partitions; everything is fine. That's not your production system. The other interesting thing is control flow, or back pressure. Kafka consumers are doing the pull thing. So, they're pulling, and lots of people believe that that's the key to back pressure, because you can decide when to pull. Well, you need to read the small print of the Kafka limitations, because if you do that and you decide, “Yeah, I'm pulling here and I'm pulling again in one hour,” or something like that, well, there are some timeouts. And actually, the poll method in Kafka is probably one of the most misunderstood methods of Kafka, in the sense that it's not only going to pull records. Fetching the records is actually the last part of the poll method; first, you have to locate the coordinator of your consumer group.

You need to be sure that everything is there, you have the heartbeat and things like that. And if you don't call poll often enough or frequently enough, the coordinator, which is one of the brokers of your system, will consider you dead and lead to what all Kafka developers absolutely love – a rebalance, a protocol which is also something we could spend hours just explaining.

So, you need to be very careful. Yes, poll will let you do some back pressure control, but you need to understand a little bit more to implement it right. You need to continue polling, but maybe you don't have the capacity to poll, because if you can't poll fast enough, it means that you are blocked somewhere else. So, that means you need to pause all your partitions and then resume them.
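The liveness trap Clement describes can be sketched as a toy model (hypothetical names; in the real Kafka client this behaviour involves `max.poll.interval.ms` and the consumer's `pause()`/`resume()` methods): the coordinator only considers a consumer alive if it keeps polling within the allowed interval, so the fix is to pause fetching while continuing to call poll:

```python
class Coordinator:
    """Toy liveness check: a consumer must poll within max_poll_interval."""
    def __init__(self, max_poll_interval):
        self.max_poll_interval = max_poll_interval
        self.last_poll = 0
    def record_poll(self, now):
        self.last_poll = now
    def is_alive(self, now):
        return (now - self.last_poll) <= self.max_poll_interval

coord = Coordinator(max_poll_interval=5)

# Wrong way: stop polling while busy -> considered dead -> rebalance.
coord.record_poll(now=0)
print(coord.is_alive(now=10))   # False

# Right way: pause the partitions but keep calling poll as a heartbeat.
for now in range(0, 11, 2):
    coord.record_poll(now)      # poll() returns nothing while paused,
                                # but the coordinator still sees us alive
print(coord.is_alive(now=10))   # True
```

The second loop is the pause-and-keep-polling discipline: the consumer stays in the group without fetching records it cannot yet process.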

So, that's all about the features that Kafka provides. It is, again, very flexible. It's a toolbox where you can do almost anything, but you need to understand it, because, in my personal life, I have a huge toolbox with every kind of drill and tool, but I have no idea how to use them. So, Kafka is a little bit like that. You have really, really powerful tools, but you need to understand them.

And one of the things Kai said is actually one way to compare them. Kafka brokers are relatively dumb in terms of delivery. Be careful, it's still very complicated how it's done – replication, leaders, rebalancing and things like that – but it doesn't do much more than write to segments and partitions, and read. JMS brokers, on the other hand, tend to be a lot more… I won’t say smarter, but they have a lot more features, like filtering, routing and things like that. It was made in different times, for different purposes. Are these filterings and routings being used today? Yes, they’re still used, but I would say less than before. We tend to have specific applications that will read and do a specific routing after that. Like what Kai mentioned, Camel [does that].

So, you will do that on the consumer side, but not necessarily on the broker side. But about this possibility of replayability, I completely agree. JMS has not been designed for replayability at all. I would say when JMS was designed, it was just not a requirement. We were not doing so much data in motion like we do today. The explosion of the amount of data we are collecting every single day, even just right now, has just completely changed things.

And yes, this data in motion and this replayability are key now in some use cases, but 20 years ago they were not. And that's why it was not necessarily a part of the API. As I said, for request-reply, don't do that with Kafka. Kafka's not necessarily made for that. [And] don't try to implement replayability with JMS. It has not been really designed for that. You are going to have lots of headaches trying to make it work. Yeah, sure. I can make it work on my machine. No problem, [but] that's not a production system.

Kai Waehner

And the funny thing is really how much in agreement we are. I'm not surprised about this. The audience might be, simply because these are very different systems with very different benefits. I have a few comments on what you said, where again, I more or less agree. So, the most interesting thing was the last thing you said, where now we have more “data in motion,” as you call it. I talk to customers, and for me, it's often hard because I'm in the early stages with our customers, and I need to explain this paradigm shift. So, it's very different from the past.

So, in the past systems were built, like, you have a message, you send it somewhere, you store it in a database and then you have a web service calling it again, right? So this is an old pattern. This is what JMS was built for and it works very well. So, again, as Clement says, don't try to rebuild RPC or remote procedure calls with the web service APIs and request-reply with Kafka. That's not how it works. 

With Kafka, you have the data in motion, it's continuously flowing. That's a new thing for many people, and that's the hard part about it. But for many of the use-cases you build today –  and it doesn't matter what industry you're in – it's a huge added value if you can act on data in real time, and that's more than just storing in a database and waiting until someone picks it up later. It's about continuously correlating the data. And this is the huge strength of Kafka. 

And again, we are not just talking about the messaging part, because that's still just sending data from A to B, whatever the producer and consumer are. The correlation is the huge, powerful part. And then with Kafka, it is a huge advantage that you have an end-to-end platform with a single infrastructure. I've seen customers that try to build similar things with a messaging broker, with an integration framework like Camel, then also with a correlation layer, a stream processing framework like Flink maybe, and still they needed some other persistence to store the data long term, for replaying it later, for using it for data science, for using it for compliance and regulatory reporting, and so on. With Kafka, you get the whole stack end to end in a single platform. And especially when you're not talking about analytics, but about transactional workloads, the customer will be very happy if it's only one platform, with one vendor and one support model behind it.

And that's, I think, one of the key differences. But once again, don't try to reimplement your RPC and well-known design patterns with Kafka. It works differently, and this is the paradigm shift which provides the “data in motion” concepts, but it's very different. And this is not just true for Kafka; it's the same when you use Apache Flink or anything else. It's a very different paradigm about how to use data continuously in real time. You have concepts like “sliding windows,” [where] you always monitor the data of the last 30 seconds, continuously. That's something that's built into Kafka, because it's an event streaming platform.
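A sliding window like the one Kai mentions can be sketched with plain Python (a toy, not Kafka Streams): keep only the events of the last N seconds and evict older ones as time advances, so the aggregate is always over the most recent window:

```python
from collections import deque

class SlidingWindow:
    """Continuously maintain an aggregate over the last `span` seconds."""
    def __init__(self, span):
        self.span = span
        self.events = deque()   # (timestamp, value), ordered by time

    def add(self, ts, value):
        self.events.append((ts, value))
        self._evict(ts)

    def total(self, now):
        self._evict(now)
        return sum(v for _, v in self.events)

    def _evict(self, now):
        # Drop everything that has slid out of the window.
        while self.events and self.events[0][0] <= now - self.span:
            self.events.popleft()

w = SlidingWindow(span=30)
w.add(0, 10)
w.add(20, 5)
print(w.total(now=25))   # 15 — both events are within the last 30 s
print(w.total(now=35))   # 5  — the event at t=0 has slid out
```

A stream processor keeps windows like this updated continuously as events arrive, which is exactly the "monitor the last 30 seconds" behaviour a plain message broker does not give you.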

But that's not what a message broker does for you. You need an additional tool for that. And I think that's really the key foundational difference. And then one other thing which Clement already mentioned several times is the operational complexity. And here as well, we are in total agreement. So, Kafka is built as a distributed system. And while you can use it on your development laptop as a single broker and client, and there are actually some edge use cases where you deploy a single broker, like in a drone or on embedded hardware, that's a niche use case. [In] 99% of cases, Kafka is used as a 24/7, highly available system. And therefore, it's a distributed system. It has huge advantages for business continuity: guaranteed no data loss and no downtime, even if systems go down. That's also where Kubernetes and so on come into play.

But with that, of course, you have the complexity of the operations. It's harder to operate than a simple JMS broker. And for that, I think our vendors, like Red Hat and Confluent, agree, because we both have an operator for Kubernetes that can take over much of the effort of this complexity that's added to the brokers, because Kafka is more complex. But again, the framework is built by the experts to operate it for you, so that you do things like rolling upgrades automatically instead of doing them by yourself.

And the last part on the complexity – and it's also something that people underestimate in this comparison – [is that] if you're in the cloud, you shouldn't care about that at all. Because in the cloud, you get serverless offerings which take over this complexity. At Confluent, we are running thousands of Kafka clusters. We know how to do it well, and we also do it for you completely. So, it's a service offering. And with that, you don't have to care that it's a distributed system: how you do rolling upgrades, how you do the rebalancing on the broker side, those complex things. If you need to do those by yourself, it's hard. We can also support you, but in the cloud, you don't have to worry about that. You have a model of consumption-based pricing and mission-critical SLAs, and you just use it and focus on the business problems.

And this is also the reason why now we see this big shift to data streaming much more in the cloud, because for customers, it's much easier. On premises, they have to operate the brokers themselves, and Kafka is more complex, even with Kubernetes and operators. In the cloud, you just use it and start small, and when it works, you scale it up in a serverless way. And this is also the difference, [and] it's why we see this huge adoption of Kafka especially in the cloud, because it's much easier to adopt, because you don't have to worry about these things that much. And by the way, that's not just true for the Kafka core messaging part, but for the whole ecosystem. If you need to do data integration with your existing systems, like a message queue, and on the other side, your cloud data warehouse, then you do the integration with the same system, because Kafka Connect is also fully managed.

And [it's] the same for the data correlation with the stream processing part. This is fully managed. You just write your SQL query, deploy it there, and it runs serverless under the hood. And this is also, I think, where we see it going these days. People want to build business applications and not worry about the infrastructure. This is true in general, not just for Kafka core messaging.

David Brown

Lots of great points on both sides of the conversation there, and a lot of agreement as well. I was expecting a little bit more debate, but actually, there was a lot of agreement there on common topics. But that's a good thing as well. There seem to be some common use cases for both platforms and, I think we all agree, a lot of life left in JMS.

So, I think part of the reason this topic comes up and there's still interest in the topic is they're seeing JMS used in the enterprise, but of course, Kafka is the cool new kid on the block, even though it's been around for several years now. But [they’re] wondering if they should still be using JMS. It's great to hear from two experts that there are some very clear use cases for JMS and it's got a lot of life left in it yet. I want to thank you both for joining us today for our first Technology Smackdown. 

It was super interesting, super passionate. You both obviously love this space. It's been a pleasure to have you both on the program. Thank you!
