Insights into streaming data and the Eventador platform
Apache Flink offers two simple APIs for accessing streaming data with declarative semantics: the Table and SQL APIs. In this post we dive in and build a simple processor in Java using these relatively new APIs.[ Read More ]
When we started Eventador.io in 2016 we needed a simple data source to help us build the platform on. We needed something that exemplified streaming data, something massively dynamic, and something with a lot of data. Tweets were played out; we wanted something better.[ Read More ]
With the addition of Apache Flink, Eventador.io has a true end-to-end, enterprise-grade stream processing platform. We run the complex infrastructure and provide support, so you can focus on your streaming code.[ Read More ]
One of the omnipresent challenges of building a product from scratch is that you don’t initially know exactly how customers will want to use it. You build the product you would want to use and are passionate about; however, you must also listen to customers as you evolve your product to deliver exactly what they really want. We wake up every day to this quest.[ Read More ]
Since we first opened the doors at Eventador.io, customers have been building applications that make use of Apache Kafka for a wide variety of streaming data use cases. Over time, it became clear we were only solving for one part of the complete picture. With Kafka, our service had the data transport, durability, and scalability, but what was missing was a mature, accurate, and scalable component where customers could deploy applications that process the data itself. Until now.[ Read More ]
This release focuses on making the service more robust and easier to use, and on improving the overall customer experience. Many of these features were inspired by direct feedback from you, our customers. Thank you for helping us build the best Apache Kafka™ managed service in existence.[ Read More ]
Since the very first release of Eventador.io we have had a SQL interface. We strongly believe that SQL is an incredible language for dealing with streaming data, not only semantically, but also because it opens up a whole ecosystem of tools and utilities. As we have grown and gathered customer feedback over the last several months, we have heard a consistent sentiment: give us PrestoDB! So, as of Eventador 0.8 we are replacing PipelineDB with PrestoDB 0.166 in our stack going forward. It turns out, what happens when you pair Kafka and PrestoDB is pretty magical.[ Read More ]
Eventador 0.7 is out! It’s our first release after gathering customer feedback and lessons learned from the platform since our 0.5 release. We pushed a number of bug fixes and minor improvements that will make our platform more powerful and easier to use. Additionally, we launched a couple of really important new features: One-Click Scaling and the Dashboard.[ Read More ]
Every Kafka deployment on Eventador has an associated access control list (ACL). The ACL defines which IP addresses are whitelisted and allowed to produce and consume to and from your deployment. At deploy time the ACL has no entries, so all access is denied by default. In order to use our service you must first grant your client access by adding an entry for its IP address to the ACL. Here is how you do it.[ Read More ]
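Conceptually, the whitelist check the ACL performs is just a CIDR membership test. The sketch below (the actual list is managed through the Eventador console; the addresses shown are documentation-range placeholders, not real deployments) shows the idea using Python’s standard `ipaddress` module:

```python
import ipaddress

def is_allowed(client_ip, acl_entries):
    """Return True if client_ip falls inside any whitelisted CIDR entry."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(entry) for entry in acl_entries)

# An empty ACL -- the state at deploy time -- denies everything.
is_allowed("203.0.113.7", [])                    # → False
# Adding an entry for the client's network grants access.
is_allowed("203.0.113.7", ["203.0.113.0/24"])    # → True
```

Adding a single `/32` entry whitelists exactly one host; a wider block like `/24` whitelists the whole subnet.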
Getting connected, producing (publishing), and consuming (subscribing) messages is relatively easy in Apache Kafka. In this post we will go over connecting, producing a simple message, and consuming that message using one of a couple of native Python clients. Most languages are similar, and there is a host of native drivers to choose from.[ Read More ]
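As a minimal sketch of that produce/consume round trip, here is what it looks like with the kafka-python client. The broker address, topic name, and payload below are placeholder assumptions; substitute your own deployment’s connect string:

```python
import json

BOOTSTRAP = "your-deployment.eventador.io:9092"  # placeholder broker address
TOPIC = "sensor-readings"                        # placeholder topic name

def serialize(record):
    """Encode a dict as JSON bytes for the Kafka wire format."""
    return json.dumps(record).encode("utf-8")

def produce_one(record):
    """Publish one JSON message, blocking until the broker acknowledges it."""
    from kafka import KafkaProducer  # kafka-python client
    producer = KafkaProducer(bootstrap_servers=BOOTSTRAP,
                             value_serializer=serialize)
    producer.send(TOPIC, record)  # send() is asynchronous
    producer.flush()              # block until delivery is confirmed

def consume_forever():
    """Subscribe to the topic and print each decoded message as it arrives."""
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BOOTSTRAP,
        auto_offset_reset="earliest",  # start from the oldest retained message
        value_deserializer=lambda b: json.loads(b.decode("utf-8")))
    for message in consumer:  # blocks, yielding messages as they arrive
        print(message.value)
```

The confluent-kafka client follows the same produce/flush and subscribe/poll shape with a slightly different API.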
With the release of Eventador 0.5 we are introducing new plans with one-click provisioning.
This allows you to deliver your projects on a more timely basis, save costs on valuable resources, leverage the cloud more effectively, get worry-free 24x7 support, and have best-in-class data pipeline infrastructure simply by using Eventador.io.
I thought I would outline the 0.5 changes as well as expose some of the details behind our technology stack in the process.[ Read More ]
Apache Kafka was built by a team of engineers at LinkedIn circa 2010. They were trying to solve the problem of how to pipeline data between the various components of their microservices-based architecture. They decided they needed a single pub/sub messaging platform designed to move data efficiently. Kafka was born.
Flash forward to 2015. @erikbeebe and I were frustrated watching customers wait on slow query response times over ridiculously large data sets, and we started to ponder a way they could architect applications so that data is filtered in real time.
The light bulb went off: Kafka was the way.[ Read More ]
Real-time data is only as good as your ability to analyze and use that data. We wanted a powerful yet simple interface to Eventador data pipelines. Notebooks have been very popular inside the data science community for some time, and they are a natural fit for Eventador.
Today we are releasing Eventador Beta 0.3.0 which includes Eventador Notebooks.
Eventador Notebooks is an automatically deployed notebook environment that makes real-time data analysis, experimentation, and manipulation easy. We believe Eventador Notebooks will unlock new levels of usefulness and value from Eventador.io, and make your data that much more powerful.[ Read More ]
We built Eventador to solve a pervasive and tricky problem: it’s exceptionally challenging to build real-time data systems using existing technology. To address this problem, in July, we launched our real-time data pipeline service based on Apache Kafka. Since launch we have been working feverishly on the platform: iterating, adding features, and building out core functions.
Today I am excited to announce the Eventador Beta 0.2.0 release.[ Read More ]
Over the past two decades I have witnessed many massive shifts in the data technologies landscape. I have seen the maturation of the RDBMS market, the adoption of NoSQL technologies, and organizations of every size struggling with data management in one way or another. Throughout all of this change, one area is becoming more and more intrinsic to the data backbone of many companies: real-time data. And for good reason: the current pattern of writing data to some flavor of database or file sink, and then processing that data via a b-tree index path, parallel processing frameworks, or both, isn’t able to meet real-time needs. Some other mechanism must be engineered.[ Read More ]