In part four of the series I discussed securing the serverless REST API serving the collected IoT data from the security camera devices.
In this post I will cover deploying the web application that uses the REST API.
In part three of the series I discussed creating a serverless REST API – using Lambda and API Gateway – to serve the collected IoT data from the security camera devices.
In this post I will cover securing the REST API.
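For a flavor of what securing the API can look like, here is a minimal sketch of one common approach: an API Gateway custom (TOKEN) Lambda authorizer. The shared-token check, environment variable, and principal name below are illustrative assumptions, not necessarily the mechanism the post settles on.

```python
import os

EXPECTED_TOKEN = os.environ.get("API_TOKEN", "")  # hypothetical shared secret

def handler(event, context):
    # For a TOKEN authorizer, API Gateway passes the caller's token and
    # the ARN of the method being invoked.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == EXPECTED_TOKEN else "Deny"
    return {
        "principalId": "camera-data-client",  # hypothetical principal
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```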
Since the series is getting quite lengthy, I’m creating this post as a single place to access the entire series.
In addition, I’ve created a Gitter room to discuss the GitHub repositories and these posts. Just click on the link and join.
In part two of this series I discussed creating a serverless data collection and processing fabric for an IoT deployment. To recap, we’ve now reviewed the local devices and controller/gateway pattern for the security cameras deployed. We’ve also discussed the Amazon Web Services infrastructure deployed to collect, process and catalog the data generated by the security cameras.
In this post we will cover the creation of a serverless REST API.
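To make the pattern concrete before diving in, here is a minimal sketch of a Lambda handler sitting behind an API Gateway proxy integration. The DynamoDB table and key names are hypothetical stand-ins for wherever the cataloged camera data actually lives.

```python
import json

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("camera-events")  # hypothetical table name

def handler(event, context):
    # With the Lambda proxy integration, API Gateway passes query string
    # parameters straight through on the event.
    params = event.get("queryStringParameters") or {}
    camera_id = params.get("camera_id", "")

    result = table.query(KeyConditionExpression=Key("camera_id").eq(camera_id))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }
```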
In part one of this series I briefly discussed the purpose of the application to be built and reviewed the IoT local controller & gateway pattern I’ve deployed. To recap, I have a series of IP cameras deployed and configured to send (via FTP) images and videos to a central controller (Raspberry Pi 3 Model B). The controller processes those files as they arrive and pushes them to Amazon S3. The code for the controller process can be found on GitHub.
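As a simplified illustration of that controller loop (not the actual repo code; the directory, bucket name, and polling interval are made up), the upload step amounts to watching the FTP landing directory and pushing finished files to S3:

```python
import time
from pathlib import Path

import boto3

INCOMING = Path("/srv/ftp/cameras")   # hypothetical FTP landing directory
BUCKET = "security-camera-archive"    # hypothetical bucket name
s3 = boto3.client("s3")

def push_new_files():
    # Walk the landing directory and ship each file to S3, keyed by its
    # path relative to the landing root.
    for path in INCOMING.rglob("*"):
        if path.is_file():
            key = str(path.relative_to(INCOMING))
            s3.upload_file(str(path), BUCKET, key)
            path.unlink()  # drop the local copy once it is safely in S3

if __name__ == "__main__":
    while True:
        push_new_files()
        time.sleep(5)
```

A real implementation also has to confirm that a file’s FTP transfer has finished before uploading it; the sketch glosses over that.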
In this post we will move on to the serverless processing of the videos when they arrive in S3.
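The shape of that processing step is a Lambda function subscribed to the bucket’s object-created notifications. The event format below is the standard S3 notification structure; the processing itself is a stand-in.

```python
import urllib.parse

def handler(event, context):
    # S3 invokes the function with one or more notification records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Real processing (thumbnails, metadata extraction, cataloging)
        # would happen here.
        print(f"New object: s3://{bucket}/{key}")
```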
Serverless architectures are getting a lot of attention lately – and for good reason. I won’t rehash the definition of the architecture because Mike Roberts did a fine (and exhaustive) job over at MartinFowler.com.
However, practical illustrations of patterns and implementations are exceptionally hard to find. This series of posts will attempt to close that gap by providing the purpose, design and implementation of a complete serverless application on Amazon Web Services.
Every application needs a reason to exist – so before we dive into the patterns and implementation we should first discuss the purpose of the application.
I have 14 security cameras deployed, each of which captures video and still images when motion is detected. These videos and images are stored on premises, but getting them to “the cloud” is a must have: if someone breaks in and takes the drive they are stored on, all the evidence is gone.
If I were to swap all of the cameras out for Nest cameras, cloud storage and playback would cost $2,250/year – clearly this can be done cheaper… so…
As I’ve worked with engineering teams big and small – in both enterprise and startup contexts – over the last 20 years, I’ve noticed two distinct patterns in leadership and their impact on the culture and productivity of those teams.
Risk-focused leadership emphasizes the up-front identification and mitigation of risk in any program or project. It attempts to know as much as possible before committing and rewards engineers who can identify and articulate risks.
Since the engineer’s perceived worth is derived from her ability to identify reasons things won’t work – or more precisely to avoid mistakes – the culture tends to favor inaction and exhaustive research and analysis.
Predictably, these teams tend to have low output. Generally the output they do generate is both expensive and highly reliable. In enterprise contexts there tends to be a reliance on proven vendors – usually with a bias toward those with long market histories which can be analyzed.
Opportunity-focused leadership emphasizes the potential gain of any program or project. It – often aggressively – attempts to capture opportunities as they present themselves. These leaders reward engineers who can grasp an opportunity and rapidly implement solutions which might capture it.
Since the engineer’s perceived worth is derived from her ability to create solutions which may capture the opportunity – or more precisely move quickly with imperfect information – the culture tends to favor rapid cycles of activity and an ability to “change gears” rapidly.
Predictably, these teams tend to have very high output, however, much of that output goes unused. Generally – but not always – the output is proof of concept quality with a bias toward open source tools, frameworks and platforms. Since the long term viability of the opportunity and feature/product were not exhaustively analyzed teams learn to implement low cost solutions which can be “thrown away”.
What should be obvious by now is that neither focus is inherently good or bad – each is appropriate in certain contexts and, more often than not, a project, program or organization requires a well-defined, understood and articulated balance between the two.
Some leaders are naturally opportunistic, some are risk managers. As a leader of an engineering or product development team it is your responsibility to understand:
Most importantly, you must ensure that the opportunity/risk profile is articulated and the stakeholders understand and agree with the inherent tradeoffs for any program, project or organization. Failing to do that is the unacceptable risk you must avoid at all costs.
I’ve been thinking about writing a series of posts about Big Data for months now… which is entirely too much thinking and not enough doing, so here we go.
Wikipedia offers a very computer-science-oriented explanation of Big Data – but while the size of the dataset is a factor in Big Data, there are several others worth considering:
Classically, Big Data was regarded as extremely large datasets, usually acquired over a long period of time, involving historical data leveraged for predictive analysis of future events. In the simplest terms, we use hundreds of years of historical weather data to predict future weather. Big Data isn’t new; my first exposure to these concepts was in the early 1990s, dealing with contact center telecom and CTI (computer telephony integration) analytics and predictive analysis of future staffing needs in large-footprint (3 to 5 thousand agent) contact centers.
Another example of the classic Big Data problem is the traditional large operational dataset. In the 90s, a 2 Terabyte dataset – like the one I worked with at Bank of America – was so massive that it created the need for a special computing solution. Today large operational datasets are measured in Petabytes and are becoming fairly common. The primary challenge with these types of datasets is maintaining acceptable operational performance for the standard CRUD operations.
There are Big Data storage models emerging and maturing today, with the two default options being Hadoop – which relies on a distributed file system – and distributed key-value stores such as Cassandra and CouchDB. These systems (referred to as NoSQL solutions) differ from standard RDBMS systems in two important ways:
While all of these are interesting engineering problems, they still lack a crucial component. As a matter of fact, most conversations about Big Data fail to adequately address what is, perhaps, the most important problem with Big Data systems today.
Today’s big datasets are manageable in RDBMS systems. That being said, a significant amount of complexity is inserted into the management process, most notably:
Given that, large datasets that change slowly over time – or more accurately, those that have a relatively low volume of creates (including those that occur as a result of large transformations) as compared to reads, updates and deletes – can be managed using RDBMS systems.
Where Real Time – specifically as related to Social Media, user-generated content and other high-create applications (such as the Large Hadron Collider) – intersects with Big Data is, to me, the most interesting Big Data topic.
This model presents three distinct challenges:
The intersection of all three of these challenges was exactly what we dealt with at justSignal. We needed to consistently collect hundreds of thousands of Social Media objects per second, generate hundreds of metadata elements (transformations) for each object, and make all of that data available in real time to our client applications. This view of Big Data is slightly different: here the size of the dataset – while still significant – isn’t as important as the challenges presented by a very high volume of CRUD operations over very short time slices.
The most important thing I’ve learned is that there is no silver bullet. While the traditional relational database isn’t effective in real time big data scenarios, neither is a standalone Hadoop or distributed key-value store. Essentially, you must evaluate each use case against the suite of technologies available and select the one best suited to that particular use case. Selecting a NoSQL solution for order processing – which has heavy ACID/transactional requirements – isn’t a good idea. However, staying with your RDBMS for high insert/transform data processing isn’t going to work either.
The approach we took at justSignal – which I will go into in more detail in a future post – was to create a unified data persistence layer designed to leverage the right long/short term data store (and sometimes more than one) based on the requirements of the application. This data persistence layer is made up of:
Each plays a critical role in our ability to collect, transform, process and serve hundreds of thousands of social media mentions per minute.
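To make the routing idea concrete, here is an illustrative, in-memory sketch. Every name in it is hypothetical; the actual justSignal layer is the subject of that future post.

```python
class KeyValueStore:
    """Stand-in for a distributed KV store (Cassandra-style): fast, append-heavy ingest."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value


class RelationalStore:
    """Stand-in for an RDBMS: transactional, ACID-style writes."""
    def __init__(self):
        self.rows = []

    def insert(self, table, row):
        self.rows.append((table, row))


class PersistenceLayer:
    """Route each write to the store whose strengths match the workload."""
    def __init__(self):
        self.kv = KeyValueStore()
        self.sql = RelationalStore()

    def save_mention(self, mention):
        # High-volume, append-only social media objects go to the KV store.
        self.kv.put(mention["id"], mention)

    def save_order(self, order):
        # ACID/transactional work stays on the relational side.
        self.sql.insert("orders", order)
```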
More to follow…
I’m honestly heartened by the sudden rash of efforts to create a methodology to determine ROI (return on investment) for Social Media efforts. It signals something very important for Social Media – the return of rationality to the debate.
When you consider that a few short months ago the prevailing meme was that justifying your Social Media efforts in terms of ROI was “doing it wrong”, it is impressive how far we’ve come. The realization that moral arguments and scare tactics will only get you so far – and in many cases backfire – has led to an overwhelming need to create an ROI model.
Unfortunately many of these efforts are not really after ROI – they are seeking to justify an already formed point of view.
The reality is we simply don’t know if Social Media has an analytical, fact-based ROI. That may sound odd coming from a guy who has bet his personal savings starting a Social Media Engagement and Analytics company – so let me explain both why the ROI hasn’t been proven and why I’m betting it will be.
Social Media is a Niche Opportunity – Today
If you want to know why there is no fact-based, proven ROI for Social Media investments today, all you need to understand is that Social Media has been adopted in niches. It may be in the Marketing department, or used by your Digital Agency, or perhaps in your Customer Service department. Each of these adoptions was driven out of fear (we have to monitor this and deal with the negative) or moral conviction (we love our customers, so we are going to do this). The investment was negligible – and in most cases I’d bet it was funded right out of the operating budget of the organization where it was used.
These organizations are beginning to declare victory and are being challenged to prove it. This presents unique challenges, because Social Media runs on anecdotes, not analysis. Dell sells $3 million in product from Dell Outlet after offering those products on Twitter. That is a great anecdote – but it isn’t analysis. When you ask the critical questions:
you quickly find that the anecdote doesn’t equate to ROI. It might… but it isn’t there yet.
These types of anecdotes are justifications. They are about proving the correctness of an already made assumption.
I’ve seen this movie before – it exactly parallels the pattern for CRM in the late 1990s.
NOTE: For simplicity I’ve omitted the case where a technology/methodology has a niche ROI without broader adoption.
We are squarely in the middle of the justification phase for Social Media. This roughly corresponds to the height of expectations (the big peak on the Gartner Hype Cycle) and always directly precedes the Trough of Disillusionment. This is a recognizable and predictable pattern for adoption of new technologies and methodologies – and here is why.
The initial opportunity is too good to stay on the sidelines for some early adopter group. They – almost always within existing operating budgets and using the promise as a bulwark defense – adopt the technology/methodology. Once they believe they have seen tangible results they attempt to socialize the “win” outside the organization by creating justifications for what they’ve already done. These justifications bring broader scrutiny.
That scrutiny happens in two phases:
The second is ROI. A systemic way of proving that adoption generates a return. If, and only if, that can be proven will the technology escape the niche application and be applied on a broad scale.
Why does it work this way? Because enterprises are, first and foremost, risk management systems. They systemically avoid large risks.
Why Will Social Media Attain Broad Adoption
The primary reasons I believe Social Media will in fact generate a valid ROI and attain broad adoption:
Measurability
As you might imagine, it is very difficult to justify and create a systemic ROI for something that is exceptionally difficult to measure. Social Media is – in contrast – eminently measurable. Rational decisions must be made about what to measure – and we need more focus on connecting those measures to the core business metrics – but there is no fundamental barrier to creating valuable measures.
The Value Proposition
Today, we’ve put all our Social Media eggs in the PR/Marketing basket. Even the small amount of credibility given to customer service via Social Media has been driven by the (C-Level Down) idea that customer service should “avert disasters” by monitoring Social Media and addressing customer issues. Make no mistake, this is customer service acting in a PR role – the goal isn’t to provide service so much as to avoid negative perceptions.
However, if you take one large step back and think about the opportunity Social Media presents – you can quickly see that the value proposition is in having a huge, open back channel to your market. We’ve had channels to our customers, and sometimes even our prospects – but this is bigger. It is the entire market for your product or service. You get to listen in on what they have to say about what they want and need. You can engage them to better understand their motivations. You can apply what you learn to create incremental improvements in every phase of your business.
Yes, you can send out special offers. Yes, you can address customer concerns. But the real return will come from having a robust back channel with your entire market; and the resulting market intelligence can – if you apply it – help you make every part of your business more appealing to your target market.
So let’s get serious about ROI. Let’s talk about how companies operate and win by continually tuning their processes to better address the needs of their target market. Let’s talk about how Social Media provides them a back channel to that market, a back channel that is an invaluable source of intelligence about the market.
Let’s talk about how a business that applies the intelligence gained via Social Media to all of their decision making processes is faster and more agile in addressing the needs of their market – and thereby wins market share.
David Alston of Radian6 stopped by yesterday and commented on this post.
This video is a detailed explanation of my views on Social Media, Brand Monitoring, and operationalizing and scaling those things in companies.
Video Blog 12-19-2008 | The question of Scale & Social Media for Brand Management from Brian Roy on Vimeo.