Predictions for AWS re:Invent 2017 (tl;dr: AI & IoT)

Peter Zawistowicz
November 07, 2017
Category: AWS re:Invent 2017
This post is the second installment of our Road to AWS re:Invent 2017 blog series. In the weeks leading up to AWS re:Invent in Las Vegas this November, we'll be posting about a number of topics related to running MongoDB in the public cloud. See all posts here.

In just under two months, more than 46,000 technologists will descend on Las Vegas for this year’s AWS re:Invent. Ranging from seasoned members of the AWS community to the cloud-curious, re:Invent attendees should expect the conference’s sixth iteration to deliver the same parade of ecosystem partners, an extensive agenda focused on moving to (and succeeding in) the AWS cloud, and the inevitable announcement of a fresh batch of new AWS services.

In attempting to predict what this year’s re:Invent keynote will unveil, we’ll look at how the industry has changed since last November, as well as Amazon’s track record for debuting new products at past re:Invents.

Since last year’s conference, the two most significant shifts in the space are underpinned by the two largest trends of the moment: AI and IoT.

It is safe to assume that we will see an augmentation of AWS’s artificial intelligence and machine learning offerings next month. Last year’s conference brought us Lex, Polly, and Rekognition as Amazon made its entrée into advanced text, voice, and image processing. Widespread adoption of this flavor of artificial intelligence is still modest, so these releases may have been overshadowed by seemingly more relevant tools like Athena, which allows users to run standard SQL queries against data stored in S3. Nonetheless, the development of its AI portfolio is of strategic importance for AWS. Despite running the most popular public cloud, Amazon has faced increasing pressure from Azure and Google Cloud Platform; the latter has differentiated itself among the early-adopter community primarily through its more mature AI offerings. To stay ahead of Google in this space, Amazon must show it can match that pace of innovation.
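For readers who haven’t used Athena, the workflow is essentially “write SQL, point it at files in S3.” The sketch below composes such a query and shows, in comments, roughly how it could be submitted with boto3; the table name, columns, and result bucket are hypothetical examples, not taken from this post.

```python
# Sketch: an Athena-style SQL query over data sitting in S3.
# The table ("orders"), columns, and S3 paths here are hypothetical.

def build_query(table: str, min_total: float) -> str:
    """Compose a standard SQL query of the kind Athena runs against S3-backed tables."""
    return (
        f"SELECT customer_id, SUM(amount) AS total "
        f"FROM {table} "
        f"GROUP BY customer_id "
        f"HAVING SUM(amount) > {min_total};"
    )

def main() -> None:
    query = build_query("orders", 100.0)
    print(query)
    # With boto3 installed and AWS credentials configured, the query could be
    # submitted along these lines (left commented so the sketch runs standalone):
    #
    # import boto3
    # athena = boto3.client("athena")
    # athena.start_query_execution(
    #     QueryString=query,
    #     ResultConfiguration={"OutputLocation": "s3://my-results-bucket/"},
    # )

if __name__ == "__main__":
    main()
```

The point is the separation of concerns: Athena charges per query over data you already have in S3, so there is no cluster to provision before the first SELECT.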

The areas that appear most ripe for innovation from AWS this year are voice, image, and video analysis. Already, we have seen e-commerce players shorten their conversion cycles with text- and image-based search. In fact, Gartner reports that voice-based search is the fastest-growing mobile search type. The opportunity to exploit users’ devices for image- and voice-based search is evident in Amazon’s own offerings (Alexa, the Amazon iOS/Android app). Furthermore, the explosion of intelligent chat-based interfaces (Messenger, Drift, etc.) has increased demand for a broader set of capabilities in natural language processing services like Lex. As a result, we should be prepared to see further enhancements to Lex, Polly, and Rekognition.

Video remains the one area of machine learning-based processing AWS has yet to touch. As its image analysis engines improve, the next logical step is low-latency processing of video inputs. With the untold volume of video content generated every day by ever-improving cameras, it stands to reason that organizations will want to turn that content into insight and profit.

These first two predictions hint at another group of potential releases we could see from AWS next month. The development of extensible models for the analysis of text, voice, image, and video is predicated on the accessibility of high quality, low-cost microphones and cameras. While smartphones have supported these inputs for more than a decade now, the availability of WiFi and reliable cellular networks has increased the speed and frequency by which their outputs can be shared or uploaded for further analysis.

So, that brings us to our next theme: the Internet of Things.

Many analysts and skeptics have suggested IoT adoption is weak and its promises are over-hyped. Their skepticism is primarily centered on two ongoing challenges with IoT: 1) the lack of one or two emergent platforms on which IoT technologies can standardize and 2) the relatively limited ability for data from decentralized sensors to be analyzed at “the edge” rather than in a central cloud.

As with operating systems, media encodings, and network protocols, mass adoption of the technologies they support typically follows the emergence of one to three main players as the default options. AWS entered the competition to build the winning IoT platform at re:Invent 2015 with its announcement of AWS IoT. Every other major technology company has made a similar bid for dominance of this market. In addition, hundreds of venture-funded startups aim to serve as a universal platform untethered from an existing “marketecture.” Nevertheless, no winner in this race has yet been crowned.

This remains a large opportunity, and Amazon is well-poised with its existing software portfolio and its ecosystem of networking and hardware partners. AWS appeared to renew its commitment to capturing the IoT market at last year’s re:Invent with the debut of AWS Greengrass and Lambda@Edge. Greengrass allows Lambda functions to run on local, intermittently connected devices rather than in Amazon’s cloud. Lambda@Edge is one of AWS’s first forays into “edge computing,” allowing users to run low-latency, device-aware Node.js functions in AWS edge locations. Both releases mark a shift from centralized cloud computing to distributed edge computing; the move is perhaps less comfortable for AWS, but it is necessary for sometimes-offline or time-sensitive IoT projects.
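To make the Greengrass model concrete, here is a minimal sketch of the kind of Lambda-style handler that could execute on a local device, deciding whether a sensor reading needs attention without a round trip to the cloud. The event shape (a temperature reading), the field names, and the alert threshold are all hypothetical, chosen for illustration.

```python
# Sketch: a Lambda-style handler of the sort Greengrass can run on a local,
# offline device. Event shape and threshold are hypothetical.

ALERT_THRESHOLD_C = 75.0

def handler(event: dict, context=None) -> dict:
    """Evaluate a sensor reading locally, with no cloud round trip."""
    reading = float(event["temperature_c"])
    return {
        "device_id": event.get("device_id", "unknown"),
        "temperature_c": reading,
        "alert": reading > ALERT_THRESHOLD_C,
    }

if __name__ == "__main__":
    # Simulate the kind of event a local sensor might emit.
    print(handler({"device_id": "sensor-01", "temperature_c": 80.2}))
```

The appeal for IoT work is exactly what the post describes: the decision happens on the device, so it still works when connectivity drops and it avoids cloud latency for time-sensitive readings.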

However, Greengrass was just the first step toward serving disparate, intermittently connected devices. Notably, Greengrass still requires ML-powered data processing and analysis to take place in the cloud rather than locally (at the edge). Improvements in hardware may also prompt AWS to extend its on-device offerings, making services like S3 and DynamoDB available outside its own infrastructure so that sensor data can be stored and processed on the devices themselves. Similarly, devices may come to play a more significant role in more seasoned services like Kinesis, enabling local ingestion of data.
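If Kinesis-style ingestion does move toward the device, the device side would plausibly need to buffer records locally and flush them in batches when connectivity allows. The sketch below illustrates that idea in plain Python; the batch size, record shape, and flush target are assumptions for illustration, not an AWS API.

```python
# Sketch: buffering sensor records on the device and flushing them in
# batches, the way a device-side, Kinesis-style ingestion layer might.
# Batch size, record shape, and flush mechanism are hypothetical.
import json

class EdgeBuffer:
    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.pending: list = []
        self.flushed: list = []  # stands in for delivery to a stream

    def add(self, record: dict) -> None:
        """Queue a record locally; flush automatically when a batch fills."""
        self.pending.append(json.dumps(record))
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # In a real deployment this batch would be sent upstream, e.g. via
        # boto3.client("kinesis").put_records(...) once connectivity returns.
        self.flushed.append(list(self.pending))
        self.pending.clear()

if __name__ == "__main__":
    buf = EdgeBuffer(batch_size=2)
    for i in range(5):
        buf.add({"seq": i, "temp_c": 20 + i})
    print(len(buf.flushed), len(buf.pending))  # two flushed batches, one record still pending
```

Batching like this is what lets sometimes-offline devices keep ingesting data locally and reconcile with the central stream later.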

No matter what gets announced on the keynote stage this year, you can rest assured it will lead the conversation for the months that follow.
