Adam Herzog

Your Docker Agenda for AWS re:Invent 2015

Headed to Las Vegas for AWS re:Invent 2015 tomorrow? So are we! Stop by booth #445 to say hi to the team, learn more about Docker and AWS, and pick up some cool swag.

We know there are a lot of great breakout sessions at this year’s conference, so we’ve compiled a list of the top must-attend Docker talks at AWS re:Invent 2015 below.

Our best advice is to sign up for Jérôme’s talk now. Last year, the Docker team’s session was so packed that the organizers had to add a second one. Jérôme’s session is the only talk by Docker this year, so register now to save your seat!

You’ve been warned: sign up now before it’s completely booked!


And don’t forget to tweet your #dockerselfies from AWS re:Invent and enter to win an Apple Watch! The winner will be chosen by the Docker team at the conference and announced via social media. To be entered in the contest, be sure to include #dockerselfie in your tweet!


 

Wednesday, Oct 7 at 4:15-5:15 pm in Palazzo N

From Local Docker Development to Production Deployments

Jérôme Petazzoni – Tinkerer Extraordinaire, Docker Inc.

In this session, we will learn how to define and run multi-container applications with Docker Compose. Then, we will show how to deploy and scale them seamlessly to a cluster with Docker Swarm, and how Amazon EC2 Container Service (ECS) eliminates the need to install, operate, and scale your own cluster management infrastructure. We will also walk through some best practice patterns used by customers for running their microservices platforms or batch jobs. Sample code and Compose templates will be provided on GitHub afterwards.
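Jérôme’s sample code and Compose templates will land on GitHub after the talk, but to give a rough feel for the ECS side of the story, here’s a minimal boto3 sketch of registering a two-container task definition and launching it on an existing cluster. The images, names, and sizes below are made up for illustration and are not taken from the session.

```python
# Illustrative only: register a hypothetical two-container task on Amazon ECS
# with boto3, then launch it on an existing cluster. Names and sizes are made up.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="demo-web",  # hypothetical task family
    containerDefinitions=[
        {
            "name": "web",
            "image": "example/web:latest",   # hypothetical image
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 5000, "hostPort": 80}],
            "links": ["redis"],              # 2015-era Docker links between containers
        },
        {
            "name": "redis",
            "image": "redis:latest",
            "cpu": 128,
            "memory": 256,
            "essential": True,
        },
    ],
)

# Run one copy of the task; ECS picks a container instance for us, which is
# the "no cluster management infrastructure to operate" part of the story.
ecs.run_task(
    cluster="demo-cluster",  # hypothetical cluster, must already exist
    taskDefinition="demo-web",
    count=1,
)
```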


 

Wednesday, Oct 7 at 11:00 am-12:00 pm in Delfino 4002

Application Monitoring in a Post-Server World: Why Data Context Is Critical

Kevin McGuire – Director of Engineering, New Relic

The move toward microservices on Docker, EC2, and Lambda points to a shift toward shorter-lived resources. These new application architectures drive agility and efficiency, and while they provide developers with inherent scalability, elasticity, and flexibility, they also present new challenges for application monitoring. The days of static server monitoring with a single health and status check are over. Today you need to know how your entire ecosystem of AWS EC2 instances is performing, especially since many of them are short-lived and may only exist for a few minutes. With such ephemeral resources, there is no server to monitor; you need to understand performance along the lines of computation intent. And for this, you need the context in which these resources are performing.

Join Kevin McGuire, Director of Engineering at New Relic, as he discusses trends in computing gleaned from monitoring Docker and how they have helped New Relic rethink how it monitors and analyzes AWS. He’ll make the case for how contextual information like instance size, AMI, availability zone, and tags can drive a deeper understanding of transient infrastructure behavior and how it contributes to application performance. He’ll show how integrating status information gives you a more accurate view of EC2 lifecycle and health, and finally how that information lets you analyze and display performance data in new and powerful ways.
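To make the idea of “context” a bit more concrete, here is a small boto3 sketch that collects the instance size, AMI, availability zone, and tags the abstract mentions so they can be attached to monitoring data. This is not New Relic’s code, just an illustration; the state filter and what you do with the result are assumptions.

```python
# Illustrative only: gather the contextual metadata (instance size, AMI,
# availability zone, tags) that can be attached to monitoring data for
# short-lived EC2 instances. The filter and usage below are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        context = {
            "instance_id": instance["InstanceId"],
            "instance_type": instance["InstanceType"],
            "ami": instance["ImageId"],
            "availability_zone": instance["Placement"]["AvailabilityZone"],
            "tags": {t["Key"]: t["Value"] for t in instance.get("Tags", [])},
        }
        print(context)  # in practice, ship this context alongside your metrics
```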


 

Wednesday, Oct 7 at 1:30-2:30 pm in Delfino 4102

Turbine: A Microservice Approach to Three Billion Game Requests a Day

William Day – Senior Software Engineer, Warner Brothers Games – Turbine

Romesh McCullough – Staff Software Engineer, Warner Brothers Games – Turbine

Evan Pipho – Platform Lead – MGP, Warner Brothers Games – Turbine

Turbine shares lessons learned from their new microservice game platform, which used Docker, Amazon EC2, Elastic Load Balancing, and Amazon ElastiCache to scale up as the game exceeded expectations. Learn about their Docker-based microservices architecture and how they integrated it with a legacy multiplatform game-traffic stack. Turbine shares how they gracefully degraded their services rather than going down and how they dealt with unpredictable client behavior. Hear how they resharded their live MongoDB clusters while the game was running. Finally, learn how they broke their game-event traffic into a separate Kafka-based analytics system, which handled the ingestion of over two billion events a day.


 

Wednesday, Oct 7 at 2:45-3:45 pm in San Polo 3501B

Netflix Keystone: How Netflix Handles Data Streams Up to 8 Million Events Per Second

Peter Bakas – Director of Engineering, Event and Data Pipelines, Netflix

In this session, Netflix provides an overview of Keystone, their new data pipeline. The session covers how Netflix migrated from Suro to Keystone, including the reasons behind the transition and the challenges of achieving zero data loss while processing over 400 billion events daily. It also covers in detail how they deploy, operate, and scale Kafka, Samza, Docker, and Apache Mesos in AWS to manage 8 million events and 17 GB per second at peak.


 

Wednesday, Oct 7 at 2:45-3:45 pm in Palazzo A

Hosting ASP.NET 5 applications in AWS with Docker and AWS CodeDeploy

Steve Roberts – Software Development Engineer, Amazon Web Services

Norm Johanson – Sr. Software Development Engineer, Amazon Web Services

The .NET platform is undergoing a revolution with a new modularized .NET Framework and CoreCLR, a new cross-platform runtime. ASP.NET 5 gives .NET developers the ability to develop and run their applications outside of Windows. In this session, we will explore how to develop and deploy ASP.NET 5 applications on Windows with AWS CodeDeploy and on Linux with Docker. For Docker, we will explore using both Elastic Beanstalk and EC2 Container Service.


 

Thursday, Oct 8 at 11:00 am-12:00 pm in Palazzo H

Docker & ECS in Production: How We Migrated Our Infrastructure from Heroku to AWS

Michael Barrett – Software Engineer, Remind, Inc.

Eric Holmes – Infrastructure Engineer, Remind

This session will introduce you to Empire, a new self-hosted PaaS built on top of Amazon’s EC2 Container Service (ECS). Empire is a recently open-sourced project that provides a mostly Heroku-compatible API. It allows engineering teams to deploy and manage applications in a method similar to Heroku, but with the added flexibility and control of running your own ECS container instances. We’ll talk about why Remind decided to move its infrastructure from Heroku to AWS, introduce you to ECS and the open source platform we built on top of it to make migration easier, and then we’ll demo Empire to show you how you can try it today.


 

Thursday, Oct 8 at 1:30-2:30 pm in San Polo 3506

Amazon ECS at Coursera: Modifying the ECS Agent for Production

Frank Chen – Software Engineer (Infrastructure), Coursera Inc

Brennan Saeta – Software Engineer, Coursera Inc

Coursera has helped millions of students learn computer science through MOOCs ranging from Introduction to Python to state-of-the-art Functional Reactive Programming in Scala. Our interactive educational experience relies upon an automated grading platform for programming assignments. But because anyone can sign up for a course on Coursera for free, our systems must defend against arbitrary code execution.

Come learn how Coursera uses AWS services such as Amazon EC2 Container Service (ECS) and Amazon Virtual Private Cloud (VPC) to power a defense-in-depth strategy that secures our infrastructure against bad actors. We have modified the Amazon ECS Agent to support additional security layers, including kernel privilege de-escalation and mandatory access control. Additionally, we post-process uploaded grading container images to defang binaries.

At the core of automated grading is a general-purpose near-line and batch scheduling and execution microservice built on top of the Amazon ECS APIs. We use this flexible system to power a variety of internal services across the company, including data exports for instructors, course announcement emails, data reconciliation jobs, and more.

In this session, we detail what has made Docker and Amazon ECS successful for us in production and offer ideas for your own scheduling, execution, and hardening requirements.
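Coursera’s changes live inside a modified ECS Agent, but you can get the general flavor of this style of container lockdown with the Docker SDK for Python. The image, command, and limits below are assumptions for illustration only, not Coursera’s actual configuration.

```python
# Illustrative only: run an untrusted grading job in a locked-down container
# using the Docker SDK for Python. This is NOT Coursera's configuration;
# the image, command, and limits are assumptions.
import docker

client = docker.from_env()

output = client.containers.run(
    "example/grader:latest",         # hypothetical grading image
    command=["python", "grade.py"],  # hypothetical entry point
    user="nobody",                   # run without root inside the container
    cap_drop=["ALL"],                # kernel privilege de-escalation
    security_opt=["no-new-privileges"],
    read_only=True,                  # immutable root filesystem
    network_disabled=True,           # no network access for submitted code
    mem_limit="512m",
    remove=True,                     # clean up the container when it exits
)
print(output.decode())
```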


 

Thursday, Oct 8 at 1:30-2:30 pm in Palazzo N

Turbocharge Your Continuous Deployment Pipeline with Containers

Daniele Stroppa – Cloud Specialist, Amazon Web Services UK Ltd

Dan Sommerfield – Senior Manager, Software Development, Amazon Web Services

“It worked on my machine!” How many times have you heard (or even said) this sentence? Keeping consistent environments across your development, test, and production systems can be a complex task. Enter containers! Containers offer a way to develop and test your application in the same environment in which it runs in production. Developers can use tools such as Docker Compose for local testing of complex applications; Jenkins and AWS CodePipeline for building and orchestration; and Amazon ECS to manage and scale their containers. Come to this session to learn how to build containers into your continuous deployment workflow, accelerating the testing and building phases and leading to more frequent software releases. Attendees will learn to use Docker containers to develop their applications and test locally with Docker Compose (or Amazon ECS local), integrate containers into their build process, deploy complex applications on Amazon ECS, and orchestrate continuous deployment workflows with CodePipeline.
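As a rough illustration of the deployment step at the end of such a pipeline, here’s a short boto3 sketch that registers a new task definition revision with a freshly built image and rolls an existing ECS service onto it. The cluster, service, family, and image names are hypothetical, not from the session.

```python
# Illustrative only: the final "deploy" step of a container CD pipeline,
# pointing an existing ECS service at a new task definition revision.
# Cluster, service, and image names are hypothetical.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a new revision of the task definition with the freshly built image tag.
new_revision = ecs.register_task_definition(
    family="webapp",
    containerDefinitions=[
        {
            "name": "webapp",
            "image": "example/webapp:build-42",  # tag produced by the CI build
            "cpu": 256,
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 8080, "hostPort": 80}],
        }
    ],
)
revision_arn = new_revision["taskDefinition"]["taskDefinitionArn"]

# Roll the running service forward; the ECS scheduler replaces tasks for us.
ecs.update_service(
    cluster="ci-cluster",
    service="webapp-service",
    taskDefinition=revision_arn,
)
```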


 

Thursday, Oct 8 at 2:45-3:45 pm in Venetian H

Amazon EC2 Container Service: Distributed Applications at Scale

Deepak Singh – GM, Amazon EC2 Container Service, Amazon Web Services

In recent years, containers have become a key component of modern application design. Increasingly, developers are breaking their applications apart into smaller components and distributing them across a pool of compute resources. It is relatively easy to run a few containers on your laptop, but building and maintaining an entire infrastructure to run and manage distributed applications is hard and requires a lot of undifferentiated heavy lifting. In this session, we discuss some of the core architectural principles underlying Amazon ECS, a highly scalable, high-performance service to run and manage distributed applications using the Docker container engine. We walk through a number of patterns used by our customers to run their microservices platforms, run batch jobs, and handle deployments and continuous integration. We explore the advanced scheduling capabilities of Amazon ECS and dive deep into the Amazon ECS Service Scheduler, which optimizes for long-running applications by monitoring container health, restarting failed containers, and load balancing across containers.
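To make the service scheduler concrete, here is a small boto3 sketch that asks ECS to keep a fixed number of copies of a long-running task healthy behind a classic ELB, which is the kind of thing the scheduler described above maintains for you. All names and counts are hypothetical.

```python
# Illustrative only: ask the Amazon ECS Service Scheduler to keep a desired
# number of copies of a long-running task alive behind a classic ELB,
# restarting failed containers as needed. All names and counts are hypothetical.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="prod-cluster",
    serviceName="api-service",
    taskDefinition="api-task:3",  # family:revision registered earlier
    desiredCount=4,               # the scheduler keeps this many tasks running
    loadBalancers=[
        {
            "loadBalancerName": "api-elb",  # 2015-era classic ELB integration
            "containerName": "api",
            "containerPort": 8080,
        }
    ],
    role="ecsServiceRole",        # IAM role that lets ECS register with the ELB
)
```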


 

Friday, Oct 9 at 11:30 am-12:30 pm in San Polo 3506

Building Robust Data Processing Pipelines Using Containers and Spot Instances

Oleg Avdeev – Staff Engineer, AdRoll

It’s difficult to find off-the-shelf, open-source solutions for creating lean, simple, and language-agnostic data-processing pipelines for machine learning (ML). This session shows you how to use Amazon S3, Docker, Amazon EC2, Auto Scaling, and a number of open source libraries as cornerstones to build one. We also share our experience creating elastically scalable and robust ML infrastructure leveraging the Spot instance market.
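For a taste of the Spot side of such a pipeline, here’s a hedged boto3 sketch that bids for Spot capacity and has each instance start a worker container via user data. The AMI, bid price, instance type, and image are assumptions for illustration, not AdRoll’s setup.

```python
# Illustrative only: bid for Spot capacity to run batch worker containers.
# AMI ID, bid price, instance type, and worker image are all assumptions;
# the AMI is assumed to already have Docker installed.
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data that starts a (hypothetical) worker container on boot.
user_data = "#!/bin/bash\ndocker run -d example/worker:latest\n"

ec2.request_spot_instances(
    SpotPrice="0.10",   # maximum bid, in USD per hour
    InstanceCount=5,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-12345678",   # hypothetical Docker-ready AMI
        "InstanceType": "c4.large",
        "IamInstanceProfile": {"Name": "batch-worker-role"},  # e.g. S3 access
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```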


Learn about Docker for Amazon Web Services with tutorials, case studies and additional resources.


 


