Back in the 1970s, the earliest databases had transactions. Then NoSQL abolished them. And now, perhaps, they are making a comeback… but reinvented.
The purpose of transactions is to simplify application code by reducing the amount of failure handling you need to do yourself. However, they have also gained a reputation for being slow and unscalable. With the traditional implementation of serializable transactions (two-phase locking), that reputation was somewhat deserved.
In the last few years, there has been a resurgence of interest in transaction algorithms that perform well and scale well. This talk answers some of the biggest questions about the bright new landscape of transactions.
Martin is a researcher at the University of Cambridge, working on the TRVE DATA project, and author of the O'Reilly book “Designing Data-Intensive Applications”, which analyses the data infrastructure of internet companies. He previously founded and sold two startups, and worked on data infrastructure at LinkedIn.
GitHub: ept
Twitter: @martinkl