Week 40 - What I Need to Know About Amazon Web Services (AWS)

With all the news about Amazon HQ2 coming to Northern Virginia and New York, many people have probably forgotten about another key player in the Amazon family - Amazon Web Services (AWS). While AWS had slow and largely unnoticed growth for most of its life, it eventually became an essential part of Amazon's services, and can't be ignored.

Just look at AWS' quarterly revenue growth over the past four years and you'll see that companies are relying on Amazon's storage and computing services at an increasing rate. But even today, with such high growth and praise, most people don't see Amazon as a storage and computing company. And to be fair, most people don't and won’t use it that way. But I wonder how close we are to the day when we hear the name Amazon and naturally wonder if they're talking about the retail side or the cloud computing side.

Speaking of the Cloud...What is it?

I can't even count the number of times I heard in my IT classes how "the cloud isn't some magic place in the sky that stores data." But hey, they must be saying that because a lot of people probably believe something like that.

For those who are curious, the cloud can be boiled down to having servers scattered throughout the world, or across regional locations, that can be accessed at any time. Gone are the days of building entire data centers in your office building, or putting all your servers in a single server farm. The cloud is a highly distributed network, which allows for greater access, security, and scalability (among other things).

In Amazon's case, AWS offers a whole army of services. For the sake of everyone's time, I'll just say that their main services can be broken up into two categories:

  1. Storage - Simple Storage Service (S3)

  2. Computing - Elastic Compute Cloud (EC2)

On top of these two services though, AWS also provides tools for databases, migration, CDNs, media, security, big data analytics, machine learning, IoT, gaming, and many more.

Going back to that cloud part - AWS works by grouping multiple Availability Zones within distinct geographic Regions. You could think of this as the cloud, since all of your storage and computing power is distributed across multiple locations, yet still easily accessible from wherever you are working. You can learn more about Regions and Availability Zones in AWS's documentation.
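If you have the AWS CLI installed and configured with credentials, you can see the Region and Availability Zone layout for yourself - a quick sketch (us-east-1 is just one example Region):

```shell
# List every Region your account can reach.
aws ec2 describe-regions --query "Regions[].RegionName" --output text

# List the Availability Zones inside one Region (us-east-1 here).
aws ec2 describe-availability-zones \
    --region us-east-1 \
    --query "AvailabilityZones[].ZoneName" \
    --output text
```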

So Why Use AWS?

1. Money

So let's pretend you run a large company that needs a lot of storage and computing power. Before AWS, you'd most likely build out an entire space dedicated to this functionality - ideally one you could scale into as you grow. But here's the problem with this static setup: you're not guaranteed to grow into that space, and until that time comes, you're paying for something you're barely using, if at all.

So AWS did something different and decided to follow a pricing plan that was used by everyday utilities - pay for what you use.

A radical idea, I know. But just think about this pricing model for a second, specifically from the customer's eyes. If you only had to pay for what you used, and the computational and storage capabilities scaled to that use, you would use that service every time. It just wouldn't make sense to have a static system in place. While this means Amazon might not make as much money in the short term, it's obvious now that this kind of model is good for business.
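To make the comparison concrete, here's a toy calculation - the rate and hours below are made up for illustration and are not real AWS prices:

```python
# A toy illustration of the pay-for-what-you-use model
# (all numbers are invented for the example, not real AWS prices).

def static_cost(capacity_hours, rate_per_hour):
    """Static setup: you pay for the full capacity you provisioned, used or not."""
    return capacity_hours * rate_per_hour

def on_demand_cost(used_hours, rate_per_hour):
    """Pay-as-you-go: you pay only for the hours you actually consumed."""
    return used_hours * rate_per_hour

# A month where you provisioned 10 servers around the clock
# (7,200 server-hours) but actually needed only 1,800 server-hours.
rate = 0.10  # hypothetical $/server-hour

print(static_cost(7200, rate))     # → 720.0 (you pay for idle capacity too)
print(on_demand_cost(1800, rate))  # → 180.0 (you pay only for what you used)
```

Same workload, a quarter of the bill - which is exactly why a customer with uneven demand would pick the utility-style model every time.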

2. Growth

I briefly hinted at this earlier, but AWS is a powerful tool for small and new businesses simply because it is built to scale with usage - in terms of cost, computational power, and storage. This immense flexibility means that businesses always know they can handle whatever comes their way. Of course, this assumes their budget can handle that flexibility.

3. Security

Ah, security. The hottest field in IT, CS, and every other computer-related field. You can't go anywhere without hearing about the need for more security jobs, especially in this Northern Virginia / D.C. area.

But when it comes to cloud security through AWS, let's just compare the two models briefly.

  • Traditional: all servers in one place where dozens, if not hundreds, of people could access everything all at once.

  • AWS: multiple server locations spread throughout the world in nondescript areas with 24x7 surveillance and extremely restricted access.

Even if you did go the traditional route and had everything in one place AND had constant surveillance with strict user access, you're still putting all your eggs in one basket. Diversify, people!

Or, say you hired only people with no malicious intent and enabled state-of-the-art security across every piece of hardware and software - then an earthquake hits and your building collapses. Sure, you made backups of everything, but I hope you get the point I'm trying to make.
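You can put rough numbers on the eggs-in-one-basket argument. Assuming each site fails independently with some small probability (1% here, a made-up figure), replicating across sites shrinks the odds of total unavailability dramatically:

```python
# Back-of-the-envelope math for the eggs-in-one-basket argument.
# Assume each site independently has a 1% chance of being down at any
# given moment (an invented number, purely for illustration).
p_down = 0.01

# One data center: your data is unavailable whenever that site is down.
single_site_outage = p_down

# Data replicated across 3 independent sites: you lose access only if
# all three happen to be down at once.
replicated_outage = p_down ** 3

print(single_site_outage)  # → 0.01
print(replicated_outage)   # roughly 1e-06, ten thousand times less likely
```

The independence assumption is doing a lot of work here - which is precisely why AWS spreads Availability Zones across separate physical locations.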

In the end, Amazon has positioned itself beautifully as a leader in the cloud computing world and shown that it can do much more than ship your favorite paper towel brand. With its scalable pricing and capacity, it has enabled companies of all sizes and verticals to use it as a foundational resource.

Go to Table of Contents

Myth: Amazon originally started AWS because it had extra computing power it wasn't using.

Fact: AWS was essentially created out of the need for Amazon to scale up its storage and computing capabilities and better utilize its internal infrastructure.

A quick note about key players though:

  • Benjamin Black - worked with Pinkham to propose selling infrastructure as a service to Bezos

  • Chris Pinkham - team lead for the EC2 development team

  • Andy Jassy - current CEO of AWS and helped conceive AWS between 2000 and 2003 through talks with Jeff Bezos

Let's travel back to the early 2000s though to get a better idea of the birth of AWS and look at Amazon as a company that most people have probably never thought of.


Amazon wanted a better way to utilize its e-commerce platform. But instead of clean, organized, and documented API services, it had a mess of everything. Hindsight is always 20/20. So early Amazon engineers had to get their digital fingers dirty and clean up their internal spaghetti system. In doing so, they unknowingly took the first step in creating today's AWS.

The process that was born from cleaning up their internal system helped Amazon build more efficient and cleaner tools in the future.

But as the company experienced continued growth and as their engineering team grew, they still found that they just couldn't keep up with the demand for development work. AWS CEO Andy Jassy looked into the issue and found something fascinating: the development managers expected tools in short time frames, but the developers were only able to lay the foundations within those time frames each time.

The issue wasn't the due date of the project, though; it was the process of creating the foundation from scratch each time. The development team was not re-using anything they had created, creating more work for themselves with each project. So, instead of reinventing the wheel each time, Amazon created templates for developers to use whenever they needed a foundation. Again, hindsight is always 20/20.

A Game-changing Retreat

In 2003, the Amazon executive team held a retreat at Jeff Bezos' house. During the meeting that they assumed would go for 30 minutes (it did not), they started to realize that they had the ability to do more than just fulfill and ship people's orders.

Due to the work they did on cleaning up their internal systems, they realized they now had the expertise to provide infrastructure as a service - think computational, storage, and database tools.

The Amazon team started to wonder if they could turn that expertise into an additional selling point, hence the beginning of what AWS would eventually become.

The Mythical Aha Moment

It never happened.

Amazon, or at least Jassy, doesn't really consider there to be any single moment where they could yell "Eureka!" Instead, they just slowly started fleshing out their idea of Infrastructure as a Service (IaaS) and came to this conclusion: use the tools they create as an operating system for the internet.

This internet OS (they really missed an opportunity to grab iOS) was the idea that would later become AWS. Their goal? "To allow any organization or company or any developer to run their technology applications on top of our technology infrastructure platform."

It wasn't really until 2006 though that AWS had something to show. That year, in August, Amazon launched Elastic Compute Cloud (EC2), which is their cloud infrastructure service.

Amazon soon realized that they were the first in this field. It took a few years for big names like Microsoft, IBM, and Google to even catch on to what was happening. But they were too late. As of today, AWS has over 30% of the market share for cloud infrastructure services, with second-place Microsoft only accounting for 13%. Companies are catching up though, so it will be interesting to see what AWS does next.


Tutorial 1: Launching a Virtual Machine using EC2

What was accomplished?

  • In this tutorial, I created an Amazon EC2 instance using the Amazon Linux AMI and connected to it over SSH from Git Bash using a public/private key pair.

  • Per best practice, the entire instance was terminated at the end of use.

Why use EC2?

"Amazon EC2 is a web service that provides secure, re-sizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. You can use Amazon EC2 for a variety of applications, including websites and web applications, development and test environments, and even back-up and recovery scenarios. Amazon EC2 offers a wide selection of instance types with varying combinations of CPU, memory, storage, and networking capacity that you can use to meet the unique needs of your applications."  - AWS VM Tutorial
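The tutorial itself walks through the AWS console, but the same steps can be sketched with the AWS CLI (this assumes the CLI is installed and configured with credentials; the AMI ID, key name, host, and instance ID below are placeholders, not real values):

```shell
# Launch a single t2.micro instance from a chosen Amazon Linux AMI.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key-pair

# Connect over SSH once it's running (the public DNS name comes from
# the console or from `aws ec2 describe-instances`).
ssh -i my-key-pair.pem ec2-user@<public-dns-of-instance>

# Per best practice, terminate the instance when you're done so you
# stop paying for it.
aws ec2 terminate-instances --instance-ids i-xxxxxxxxxxxxxxxxx
```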

Tutorial 2: Store and Retrieve a File with Amazon S3

What was accomplished?

  • In this tutorial, I created a bucket in Amazon S3 to upload and download a file. After downloading the file, I deleted the object from the S3 bucket, and then deleted the S3 bucket as well.

Why use S3?

"You have backed up your first file to the cloud by creating an Amazon S3 bucket and uploading your file as an S3 object. Amazon S3 is designed for 99.999999999% durability to help ensure that your data is always available when you want it. You’ve also learned how to retrieve your backed up file and how to delete the file and bucket." - AWS Backup Files to S3 Tutorial
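For reference, the same round trip can be done with the AWS CLI's high-level S3 commands (the bucket name is a placeholder - bucket names are globally unique, so you'd pick your own):

```shell
aws s3 mb s3://my-example-bucket-1234                            # create the bucket
aws s3 cp backup.txt s3://my-example-bucket-1234/                # upload the file
aws s3 cp s3://my-example-bucket-1234/backup.txt ./restored.txt  # download it again
aws s3 rm s3://my-example-bucket-1234/backup.txt                 # delete the object
aws s3 rb s3://my-example-bucket-1234                            # delete the bucket
```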

Tutorial 3: Create and Connect to a MySQL Database

What was accomplished?

  • In this tutorial, I used Amazon RDS to create a MySQL Database Instance that I accessed using MySQL Workbench.

Why use Amazon Relational Database Service (RDS)?

"You have created, connected to, and deleted a MySQL Database Instance with Amazon RDS.  Amazon RDS makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and re-sizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business."
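A rough CLI equivalent of this tutorial looks like the following (identifier, username, and password are placeholders; in real use you'd pick your own and keep the password out of your shell history):

```shell
# Create a small MySQL instance.
aws rds create-db-instance \
    --db-instance-identifier my-mysql-db \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'change-me' \
    --allocated-storage 20

# Once the instance is available, grab its endpoint address...
aws rds describe-db-instances \
    --db-instance-identifier my-mysql-db \
    --query "DBInstances[0].Endpoint.Address"

# ...and point MySQL Workbench (or the mysql client) at it.
mysql -h <endpoint-address> -P 3306 -u admin -p

# Delete the instance when finished to stop the billing clock.
aws rds delete-db-instance \
    --db-instance-identifier my-mysql-db \
    --skip-final-snapshot
```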

Tutorial 4: Launch an Application with AWS Elastic Beanstalk

What was accomplished?

  • In this tutorial, I created an Elastic Beanstalk environment that runs a PHP application.

Why use AWS Elastic Beanstalk?

"AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS." - AWS Elastic Beanstalk
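The console does all of this with a few clicks, but the same workflow can be driven from the Elastic Beanstalk CLI (this assumes the `eb` tool is installed and you're in your PHP app's directory; the application and environment names are placeholders):

```shell
eb init -p php my-php-app   # register the application and pick the PHP platform
eb create my-php-env        # create the environment and deploy the app
eb open                     # open the running app in a browser
eb terminate my-php-env     # tear the environment down when done
```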

Tutorial 5: Update an Application with AWS Elastic Beanstalk

What was accomplished?

  • In this tutorial, I updated (i.e., changed some text in) the PHP app I was running in Elastic Beanstalk.
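With the EB CLI, the equivalent update is a single command run from the application directory (assuming the app was set up with `eb init` as in the previous tutorial):

```shell
# Package the locally modified code as a new application version and
# deploy it to the running environment.
eb deploy
```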

Tutorial 6: Set up a Continuous Deployment Pipeline using AWS CodePipeline

What was accomplished?

  • In this tutorial, I created an Elastic Beanstalk instance, a CodePipeline, and an S3 bucket to host, test, and deploy a sample application.

Why use CodePipeline?

"Using CodePipeline, you created a pipeline that uses GitHub, Amazon S3, or AWS CodeCommit as the source location for application code and then deploys the code to an Amazon EC2 instance managed by AWS Elastic Beanstalk. Your pipeline will automatically deploy your code every time there is a code change. You are one step closer to practicing continuous deployment!" - AWS CodePipeline Tutorial
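Once a pipeline exists, it can also be inspected and re-run from the CLI (the pipeline name below is a placeholder):

```shell
# Show the current state of each stage in the pipeline.
aws codepipeline get-pipeline-state --name my-first-pipeline

# Manually kick off a new run of the pipeline.
aws codepipeline start-pipeline-execution --name my-first-pipeline
```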

Project: Build a Modern Web Application

Still in Progress


Too Long; Didn't Read

Amazon. More than just a place to buy all your gifts this Cyber Monday.

Executives and engineers saw a need in the early 2000s for scalable and efficient hosting and computing technology, and leveraged Amazon's existing infrastructure and knowledge to build AWS into the multi-billion-dollar business it is today.

And I'm only talking about Amazon Web Services here.

But AWS did something different. Instead of just creating a static service that people would have to buy and use in chunks, AWS took a more dynamic approach. Their services scale, both in terms of performance and cost.

AWS is great for companies that are growing and establishing roots, as AWS networks have the ability to scale as your company grows and charge you for only what you use.

Basically, AWS grows at the rate your business does, ensuring that you are as efficient as possible with your money. And what business owner wouldn't want that?

AWS, though, is a powerful and complicated suite of tools (there are over 90 to work with) and isn't something you just pick up. I spent an entire day just going through tutorials, and I wouldn't even say I've scratched the surface.

But just from going through these tutorials, I now know that AWS is an extremely powerful tool, and other major companies like Google and Microsoft need to step up their game if they want to have a place in the cloud computing space.
