We should have seen it coming: we stopped even thinking about how we store data for our applications, assuming some DBA would give us a database – and some SysAdmin would give us a file system. Sure, we can talk about W-SAN (what WLAN was to the LAN, but for storage) solutions like Amazon S3 and Rackspace Cloud – but they didn’t fundamentally change anything.
Big Data forces us to re-think storage completely. Not just structured/unstructured, relational/non-relational, ACID compliance or not. It forces us – at the application level – to rethink the current model exemplified by
I’m storing this because I may need it again in the future.
Where storage means physical, state-aware object persistence and future means anywhere between now and the end of time.
Data Persistence – A Systemic Approach to Big Data for Applications
What Big Data applications require is a systemic approach to data. Instead of approaching data as only a set of if/then operations designed to determine what (if any) CRUD operations to perform, applications (or supporting Data Persistence Layers) must understand the nature of the data persistence required.
This is a level of complexity developers have been trained to ignore. The CRUD model itself explicitly excludes any dimensionality – or meta-information – about the persistence. It is all or nothing.
Data Persistence is primarily the idea that data isn’t just stored – it is stored for a specific purpose which is relevant within a specific time slice. These time slices are entirely analogous to those discussed in Preemption. Essentially, any sufficiently large real-time Big Data system is simply a loosely aggregated computer system in which any data object may generate multiple tasks, each of which has a specific priority.
For example, in a geo-location game like Foursquare, the appearance of a new checkin spawns multiple tasks which are prioritized based on their purpose, for example:
- Store the checkin to distribute to “friends” (real-time)
- Store the checkin’s association with the venue (real-time)
- Analyze nearby “friends” (real-time)
- Determine any game mechanics, badges, awards, etc.
- Store the checkin on the user’s activity
- Store the checkin object
NOTE: Many developers will look at this list above and ask: “Why not a database?” While a traditional database may suffice for a relatively low volume system (5k users, 20k checkins per day) it would not be sufficient at Big Data scale (as discussed here).
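The fan-out-with-priorities model above can be sketched with a simple priority queue. This is a minimal illustration, not Foursquare’s actual pipeline – the priority levels and task names are invented for the example.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical priority tiers: lower number = more urgent.
REAL_TIME, NEAR_TIME, BATCH = 0, 1, 2

@dataclass(order=True)
class PersistenceTask:
    priority: int
    name: str = field(compare=False)  # compare tasks by priority only

def tasks_for_checkin(checkin_id):
    """Fan one checkin event out into prioritized persistence tasks."""
    return [
        PersistenceTask(REAL_TIME, f"distribute:{checkin_id}"),
        PersistenceTask(REAL_TIME, f"venue-association:{checkin_id}"),
        PersistenceTask(REAL_TIME, f"nearby-friends:{checkin_id}"),
        PersistenceTask(NEAR_TIME, f"game-mechanics:{checkin_id}"),
        PersistenceTask(NEAR_TIME, f"activity-stream:{checkin_id}"),
        PersistenceTask(BATCH, f"store-object:{checkin_id}"),
    ]

# A worker drains the queue most-urgent-first, so real-time distribution
# never waits behind the durable write of the raw checkin object.
queue = []
for task in tasks_for_checkin("abc123"):
    heapq.heappush(queue, task)

processed = [heapq.heappop(queue).name for _ in range(len(queue))]
```

The point is not the queue itself but the separation: a single data object generates several units of work, and each one carries its own persistence priority.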
This Data Persistence solution comprises four vertical persistence types:
Transitory persistence is for data persisted only long enough to perform some specific unit of work. Once the unit of work is completed the data is no longer required and can be expunged. For example: notifying my friends (those that want to be notified) that I’m at home.
Generally speaking (and this can vary widely by use case) Transitory persistence must be atomic, extremely fast and fault tolerant.
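A minimal sketch of the transitory contract – the store and keys here are hypothetical, standing in for whatever fast, fault-tolerant mechanism actually holds the data: the payload exists only for the span of the unit of work, then is expunged.

```python
import contextlib

class TransitoryStore:
    """Holds data only for the duration of one unit of work."""

    def __init__(self):
        self._data = {}

    @contextlib.contextmanager
    def unit_of_work(self, key, payload):
        self._data[key] = payload      # persist just long enough to act on it
        try:
            yield self._data[key]
        finally:
            del self._data[key]        # expunge once the work is done

store = TransitoryStore()
with store.unit_of_work("checkin:42", {"user": "me", "notify": ["alice", "bob"]}) as payload:
    # e.g. push "I'm at home" to each friend who opted in
    notified = list(payload["notify"])
# after the block exits, nothing about this checkin remains in the store
```

Atomicity and speed come from whatever backs the store; the defining property is simply that nothing outlives its unit of work.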
Volatile persistence is for data that is useful but can be lost and rebuilt at any time. Object caching (how memcached is predominantly used) is one type of Volatile persistence, but does not describe the entire domain. Other examples of volatile data include process orchestration data, data used to calculate decay for API Rate Limits, data arrival patterns (x/second over the last 30 seconds), etc.
The most important factor for Volatile data persistence is that the data can be rebuilt from normal operations or from long term data storage if it is not found in the Volatile dataset.
Generally speaking, data is stored in Volatile persistence because it offers superior performance, albeit with a limited dataset size.
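The rebuild-on-miss contract can be sketched as a read-through cache. The dicts below are stand-ins – `volatile` for something like memcached, `long_term` for durable storage – and the keys are invented for the example.

```python
long_term = {"user:1": {"name": "alice", "checkins": 73}}  # stand-in for durable storage
volatile = {}                                              # stand-in for e.g. memcached

def get(key):
    if key in volatile:          # fast path: serve from the volatile dataset
        return volatile[key]
    value = long_term[key]       # miss: rebuild from long-term storage
    volatile[key] = value        # repopulate the cache for the next read
    return value

def evict_all():
    """Volatile data can vanish at any time; correctness must not depend on it."""
    volatile.clear()

first = get("user:1")
evict_all()                      # simulate total cache loss
second = get("user:1")           # rebuilt transparently from long-term storage
```

The design point is the fallback path: losing the entire volatile dataset costs performance, never correctness.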
Transactional persistence is where relational databases and atomicity, consistency, isolation and durability (ACID) still belong – they are not obsolete. It is important for specific types of operations – done for specific purposes – to maintain transactional compliance and ensure the entire transaction either succeeds atomically or fails. Examples of this include eCommerce transactions, account signup, billing information updates, etc.
Generally speaking, this data complies with the old rules of data. It is created/updated slowly over any given time slice, it is read periodically, there is little need to publish the information across a large group of subscribers, etc.
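A minimal ACID sketch using Python’s built-in `sqlite3`: the debit and credit of a payment either both commit or both roll back. The schema and amounts are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('buyer', 100), ('merchant', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically; any failure rolls the whole transaction back."""
    try:
        with conn:  # the with-block wraps both updates in one transaction
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            balance = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                   (src,)).fetchone()[0]
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

ok = transfer(conn, "buyer", "merchant", 60)        # succeeds atomically
too_much = transfer(conn, "buyer", "merchant", 60)  # fails, fully rolled back
```

This is exactly the workload the old rules fit: low write volume per time slice, periodic reads, and a hard requirement that partial updates never become visible.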
Amorphous persistence is the new kid on the block. NoSQL solutions fit nicely here. This non-volatile storage is amorphous in that the content (think property, not property value) of any object can change at any time. There is no schema, table structure or enforced relationship model. I think of this data persistence model as raw object storage, derived object storage and the transformed data that forms the basis of what Jeff Jones refers to as Counting Systems. Additionally, these systems store data in application-consumable objects – with those objects being created on the way in.
Systems in this layer are generally highly scalable, fault tolerant, distributed systems with enhanced write efficiency. They offer the ability to perform the high volume writes required in real time Big Data systems without significant loss of read performance.
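The schema-less property can be sketched in a few lines. The dict-backed store below stands in for a NoSQL document store, and the checkin fields are invented – the point is that two objects of the “same kind” need not share properties, and objects are serialized into consumable form on the way in.

```python
import json

class AmorphousStore:
    """Stand-in for a schema-less document store."""

    def __init__(self):
        self._docs = {}

    def put(self, key, obj):
        # Serialize on the way in, so reads hand back ready-to-use objects
        # with no joins or transformation at read time.
        self._docs[key] = json.dumps(obj)

    def get(self, key):
        return json.loads(self._docs[key])

store = AmorphousStore()
# No schema is enforced: the second checkin carries properties the first lacks.
store.put("checkin:1", {"user": "alice", "venue": "cafe"})
store.put("checkin:2", {"user": "bob", "venue": "park",
                        "badge": "explorer", "lat": 40.7})

one = store.get("checkin:1")
two = store.get("checkin:2")
```

Real systems in this layer add the distribution and write-optimized storage the paragraph above describes; the sketch only shows the amorphous content model.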
What Does All This Mean?
Most notably, it means that after years of obfuscating the underlying data storage from developers, we now need to re-engage application developers in the data storage conversation. No longer can a DBA define the most elegant data model based on the “I’m storing this because I may need it again in the future” model and expect it to function in the context of a real-time Big Data application.
We will hear a chorus of voices attacking these disaggregated data persistence models based on complexity, or the CAP Theorem, or the standard “the old way is the best way” defense of ACID and the RDBMS for everything. But all of this strikes me as a perfect illustration of what Henry Ford said:
If I had asked customers what they wanted, they would have told me they wanted a faster horse.