> Flexibility doesn't mean you can query arbitrarily deep payload structures at production scale without cost. Indexing in EventSourcingDB is primarily on metadata (like type, subject, timestamps). If you filter on arbitrary JSON payload fields, you may trigger a full scan. That's by design and by honesty: we don't hide the trade-offs. That's also why we say use EventQL wisely. It's perfect for ad hoc analysis, debugging, or one-off data extraction. If a query is needed repeatedly or must run at scale, build a projection – projections are precomputed and optimized.
As they note, representing derived products of event stores as metadata-only uncached queries/views will only get you so far.
If you want something that bridges exploratory analysis to production-grade, consistent materialized views, I really like the approach that Materialize (materialize.com) and Feldera (feldera.com - see https://news.ycombinator.com/item?id=41685689 and specifically https://news.ycombinator.com/item?id=41687690) take. They're mathematically sound ways of representing an entire hierarchy of derived materialized views in pure SQL, and having them update in real time... much like Excel would, if you change an input cell numerous sheets of dependencies away. There are some incredible papers here - DBSP and Differential Dataflow are the keywords.
I do think that both Feldera and Materialize have focused a lot on analytics use cases, rather than on what implications they could have for event sourcing in high-reliability operational situations. The unifying key, of course, is that the moment you have the ability to declaratively specify a network of downstream projections, you can hot-swap them, version-control them, and time-travel along your code-versioning-time-dimension and data-time-dimension independently - which can be incredible.
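To make that concrete, here is a minimal sketch of such a view hierarchy in Materialize-style SQL; the table and column names (order_events, order_id, amount) are invented for illustration:

    -- First view: derived directly from a hypothetical raw event stream.
    CREATE MATERIALIZED VIEW order_totals AS
      SELECT order_id, SUM(amount) AS total
      FROM order_events
      GROUP BY order_id;

    -- Second view: derived from the first. Both are kept up to date
    -- incrementally as new events arrive, rather than recomputed from scratch.
    CREATE MATERIALIZED VIEW large_orders AS
      SELECT order_id, total
      FROM order_totals
      WHERE total > 1000;

Swapping out the SQL text of large_orders without touching the event log is exactly the kind of declarative hot-swap of a downstream projection I mean.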
But I think there's a ton of tooling left to be built, and a culture shift needed around event sourcing being seen as an enterprisey way of building things, before we start to see that future appear here. And that future will be amazing.
> From the beginning, we wrote EventSourcingDB's storage engine ourselves. It's not built on top of an existing database; it's a purpose-built event store.
I am trying to understand EventSourcingDB but I don't get the concept.
https://www.thenativeweb.io/products/eventsourcingdb
Is this a new approach, or is it designed like Temporal?
The DB specifically, or the concept of event sourcing? Event sourcing is not a new approach, and it has a lot of similarities with Temporal's approach, though Temporal events are not necessarily business events, and deterministic event replay is required with Temporal. In the general case of event sourcing, arbitrary processing might be done on the event stream to produce some final state or do whatever needs to happen for your use case. As long as you're persisting the events and using events as the basis for your business logic and state, you're doing event sourcing.
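As a minimal illustration of that last point (plain SQL here, with an invented table and invented event types), the events are the only thing persisted, and state is derived by folding over them:

    -- Hypothetical append-only event log: the events are the source of truth.
    CREATE TABLE events (
      position    BIGSERIAL PRIMARY KEY,
      subject     TEXT        NOT NULL,  -- e.g. '/account/42'
      type        TEXT        NOT NULL,  -- e.g. 'account.deposited'
      payload     JSONB       NOT NULL,
      recorded_at TIMESTAMPTZ NOT NULL DEFAULT now()
    );

    -- The current balance of one account, rebuilt on demand by folding its events.
    SELECT SUM(CASE type
                 WHEN 'account.deposited' THEN  (payload ->> 'amount')::numeric
                 WHEN 'account.withdrawn' THEN -(payload ->> 'amount')::numeric
                 ELSE 0
               END) AS balance
    FROM events
    WHERE subject = '/account/42';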
I don't know anything about this specific DB though; if that's what you were wondering about, that's more of an implementation-level detail. The Temporal server just uses regular MySQL and supports multiple storage backends.
The canonical pronunciation could have been "equal".
Haha, that’s true :-)
Is there a more compelling set of examples somewhere?
You can find a pretty detailed overview with some examples in the documentation: https://docs.eventsourcingdb.io/reference/eventql/
Why didn't I see SELECT in the samples?
> One of the earliest syntax choices we made was to avoid the classic SELECT … FROM … style. In SQL, a query starts with SELECT, but conceptually you first define your data source and only later decide what you want to project out of it. We always found that ordering unintuitive. So in EventQL a query begins with FROM e IN events. Only at the end do you specify what you want to output, using PROJECT INTO.
It's called PROJECT INTO in this language.
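For reference, a query in that shape looks roughly like this; the WHERE clause and the exact field names are my assumption of the syntax rather than something stated in the quote, so check the linked docs for the authoritative form:

    FROM e IN events
    WHERE e.type == "com.example.order-placed"
    PROJECT INTO { id: e.id, subject: e.subject }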