In part one of this series I briefly discussed the purpose of the application to be built and reviewed the IoT local controller & gateway pattern I’ve deployed. To recap, I have a series of IP cameras deployed and configured to send (via FTP) images and videos to a central controller (a Raspberry Pi 3 Model B). The controller processes those files as they arrive and pushes them to Amazon S3. The code for the controller process can be found on GitHub.
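The heart of that controller is just a loop that notices new uploads from the cameras and ships them to S3. As a rough illustration (not the actual code from the GitHub repo – the directory, bucket name, and key layout below are assumptions), it looks something like this:

```python
# Minimal sketch of a controller loop: poll the FTP landing directory and
# push each new file to S3. Paths, bucket, and key prefix are illustrative.
import os
import time
import boto3

DROP_DIR = "/srv/ftp/cameras"   # assumed FTP landing directory on the Pi
BUCKET = "camera-archive"       # assumed S3 bucket name

s3 = boto3.client("s3")

def push_new_files():
    for entry in os.scandir(DROP_DIR):
        if entry.is_file():
            s3.upload_file(entry.path, BUCKET, f"incoming/{entry.name}")
            os.remove(entry.path)  # drop the local copy once it is in S3

if __name__ == "__main__":
    while True:
        push_new_files()
        time.sleep(5)  # simple polling interval
```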
In this post we will move on to the serverless processing of the videos when they arrive in S3.
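On the AWS side, “arrival in S3” means an S3 event notification invoking a Lambda function. A minimal handler for that trigger might look like the sketch below; the actual processing step is left as a placeholder rather than the pipeline built in this series:

```python
# Minimal sketch of a Lambda handler invoked by an S3 "object created" event.
# The processing step is a placeholder, not the pipeline from this series.
import urllib.parse

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object arrived: s3://{bucket}/{key}")
        # ... kick off video processing for this object here ...
```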
… but vendors will fight you every step of the way.
Remember – every vendor wants to get as much of your data into their database as possible. And every data approach (Relational, Document, Key/Value Store and Graph) can be forced to do just about anything; and given the opportunity, the vendor will tell you their solution is the right choice.
The challenge for the modern Enterprise Data Architect is maintaining a cogent point of view about assembling a polyglot solution that makes each use case (or microservice) less complex, more scalable and easier to improve/enrich over time.
However, practical illustrations of patterns and implementations are exceptionally hard to find. This series of posts will attempt to close that gap by providing the purpose, design and implementation of a complete serverless application on Amazon Web Services.
Part 1 – The setup…
Every application needs a reason to exist – so before we dive into the patterns and implementation we should first discuss the purpose of the application.
Nest wants how much for cloud storage and playback?
I have 14 security cameras deployed; each captures video and still images when motion is detected. These videos and images are stored on premises – but getting them to “the cloud” is a must-have. After all, if someone breaks in and takes the drive they are stored on, all the evidence is gone.
If I were to swap all of the cameras out for Nest cameras, cloud storage and playback would cost $2,250/year – clearly this can be done cheaper… so…
Since the late 2000s there has been an explosion of non-relational (NoSQL, if you must) data persistence technology. The industry buzz has focused on the derivatives of the seminal work done at Google – i.e., BigTable – and at Amazon – i.e., Dynamo.
We’ve seen massive adoption of simple document stores and key-value based stores – which focus on availability and partition tolerance and thereby enable storage and processing of schema-less (or semi-structured) data at velocities and volumes previously considered entirely impractical.
There were – however – compromises. These systems are abysmal at dealing with connections between the data – or more precisely connecting the entities in the data sets with one another in a variety of contexts.
Many of our platforms, systems, applications and services are intended to deal with these types of connections – and unfortunately most engineering teams fall back to relational databases to solve these problems. The problem with this approach, however, is that relational databases are inherently inefficient when performing complex set operations:
The true value of the graph approach becomes evident when one performs searches that are more than one level deep. For instance, consider a search for users who have “subscribers” (a table linking users to other users) in the “311” area code. In this case a relational database has to first look for all the users with an area code in “311”, then look in the subscribers table for any of those users, and then finally look in the users table to retrieve the matching users. In comparison, a graph database would look for all the users in “311”, then follow the back-links through the subscriber relationship to find the subscriber users. This avoids several searches, lookups and the memory involved in holding all of the temporary data from multiple records needed to construct the output. Technically, this sort of lookup is completed in O(log(n)) + O(1) time, that is, roughly relative to the logarithm of the size of the data. In comparison, the relational version would be multiple O(log(n)) lookups plus additional time to join all the data.
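To make that concrete, the graph version of the “subscribers in area code 311” lookup is a single pattern match. The sketch below uses the official Neo4j Python driver; the User label, areaCode property and SUBSCRIBES_TO relationship are assumptions made purely for illustration:

```python
# Sketch of the "subscribers in area code 311" traversal against Neo4j.
# Labels, property names, and the relationship type are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (u:User {areaCode: '311'})<-[:SUBSCRIBES_TO]-(s:User)
RETURN s.name AS subscriber
"""

with driver.session() as session:
    # Start from the users in area code 311 and follow the subscriber
    # back-links in one traversal – no intermediate join tables.
    for record in session.run(CYPHER):
        print(record["subscriber"])

driver.close()
```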
The opportunity to build highly efficient systems by leveraging natural graph models (those that exist in the real world) is massive and dramatically underutilized.
Imagine a CRM system which wasn’t tightly coupled to a rigid account, contact, entitlement hierarchy model. Imagine a student roster system which directly modeled the complexity of sections for teachers, schools, districts and states without a myriad of join tables.
The only barrier is investing in the education required to understand the efficiencies of graphs and the graph databases available today. I highly recommend you up your game by downloading Neo4j today and beginning to learn the advantages graph databases can provide in your polyglot persistence architectures. If you need any help, let me know – I’m happy to help you and your team model your data and leverage a graph to simplify your system and increase your feature velocity.
I’m not sure people always know what they mean when they talk about Big Data – and even when they do, I’m not sure they can contrast this new Big Data thing with Data’s previous incarnation.
So let’s see if we can clear it up.
Prior to Big Data, the amount and content of the data you had access to was limited – in technical terms you had to deal with a limited information domain. Why? Because obtaining and storing data was expensive and, more importantly, most data was locked up in the real world and never entered the digital (binary data living in computational systems) realm. That obviously has changed.
This flip – from only generating and storing data directly relevant to operating a business to having access to, collecting and storing massive amounts of data which may or may not be relevant to operating a business – is the state change.
The first big problem was tooling. The systems and technologies to collect and store data were designed for the relatively small amounts of strictly modeled data relevant to running our business. Moreover, they were designed to strictly control adding to it, because that was expensive. This was the problem we needed to address first – which is why when we talk about Big Data we invariably talk about technologies – Hadoop, MongoDB, Spark, Kafka, Storm, Cassandra…
But, for business leaders this is misleading, because implementing any (or all) of those technologies will not make the business effective in a Big Data context. These technologies will not provide you magical data which supercharges your business. You will not suddenly have insights your competitors do not; you will not – overnight – find the clarity required to dominate your market.
The key is to combine those tools and capabilities with data-driven practices and culture.
Let’s start by avoiding the mistake made with Big Data – let’s clearly talk about what has changed and why data-driven is different from what came before.
I’ve worked with organizations – from startups to enterprises – that have robust reporting and systems of operational metrics they use to run the business. They review reports and dashboards regularly, perform regular operational reviews focused on those metrics, and target resources and budget toward those that are underperforming. Invariably they suggest they are already data-driven – because they leverage data to run their business.
They are not. They are optimally operating in the pre-Big Data model – where the universe of data was fixed, the metrics long-lived and stable, and information outside that realm unobtainable – those insights beyond reach.
A Data-Driven organization still does those things – metrics, operational reviews, targeted investments based on underperforming metrics. But they also leverage the larger universe of data to openly question the validity of those metrics; they develop processes to evaluate that universe for new metrics and insights; they allow the data to lead them to opportunities and the identification of threats.
This practice almost always feels like a radical shift – and it is. Organizations must shift from the practice of only focusing on the known knowns and embrace this new ability to examine and gain insight from the known unknowns and unknown unknowns.
Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.
When these Data-Driven processes and practices, extending and augmenting your metrics-driven operational practices, become part of the culture, the real value of all that data and all those tools can be realized.
Polyglot persistence is simply the notion that one should leverage multiple data storage technologies chosen based upon the way the data will be used by the application.
In short, use the best tool for the job.
Attempting to make a single data store (or database if you prefer) encapsulate all your application contexts breeds complexity. When each context, entity or value object can tune the data store it leverages to the unique requirements of that domain, complexity is reduced and feature velocity is increased.
Polyglot persistence enables in-data-store transformation, materialized views and projections of the data into alternate stores for the purpose of enabling specific application features. Simply put, you can have multiple representations of the same data where and when it is convenient in your application context – a brief sketch of this idea follows below.
Data store spend is targeted toward the features and contexts in the application which actually require the investment.
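As a rough sketch of that projection idea (the table name, cache and key layout here are assumptions made for illustration, not part of the application in this series), a single write can feed both a system of record and a read-optimized view:

```python
# Sketch: persist an event in its primary store, then project a read-optimized
# view into a key-value cache. Table name, cache, and keys are illustrative.
import json
import boto3
import redis

events_table = boto3.resource("dynamodb").Table("camera-events")  # assumed table
cache = redis.Redis(host="localhost", port=6379)                   # assumed cache

def record_motion_event(event: dict) -> None:
    # System of record: the full event document.
    events_table.put_item(Item=event)
    # Projection: "latest event per camera", shaped for a dashboard read path.
    cache.set(f"camera:{event['camera_id']}:latest", json.dumps(event))
```

There are, of course, objections to this approach – the usual ones follow.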
Joins – perceived complexity due to the inability to create a single “query” joining multiple contexts, entities or value objects.
Understanding the benefits of composition allows us to see this as a false barrier – it is simply an issue of changing from the old way of doing things.
Maintenance cost – the expertise and management required for multiple data stores add to the overall cost of operating the application.
In a monolithic data store system, extensive effort is put into the “tuning” of the data store. This is always due to either the massive complexity of data stores that try to do everything or the need to make a single data store serve too many disparate persistence models. When we use data stores which are “natural” to the domain, context or entity, this overhead is massively reduced.
Developer Complexity – finding and staffing developers that can work with multiple data stores is impossible.
When transforming from a monolithic data store architecture this will absolutely be problematic. However, as your polyglot practice matures, this issue will diminish over time.
All of the above relies on having a solid domain-driven design and a flexible, adaptable architecture for your application.