Managing an Amazon Web Services infrastructure

So you’re thinking about moving your applications and servers into AWS? The potential upside is significant: No more managing and refreshing hardware. No more worrying about networking equipment and setup, or database backups. The list of aggravations you can eliminate is long.

However, it’s not a Happily Ever After panacea. You’ll still have to manage your AWS infrastructure to ensure that you are secure, scalable, and stable. With that in mind, here’s what you’ll need to do to make that happen.

Consolidated Logging

This is both a security best practice and an operational necessity. From a security perspective, it’s imperative that you keep people (all people) off your servers. That requires extracting the logs from those servers into a common location for review. From an operational perspective, keeping all logs in one place allows you to correlate events across servers and to alert on events as they occur.

When you run your own datacenter (or host everything in someone else’s), you only need to be concerned with collecting logs from the processes and servers you create; no other services need monitoring. Enterprise tools such as Splunk, SumoLogic or the ELK Stack work great for this purpose.

In the AWS world, you have a vast number of mission-critical services, provided by AWS, from which you must consolidate logs, in addition to your own processes and servers. Many of these services (like API Gateway, ELB, ECS, Lambda, S3, etc.) natively log to CloudWatch Logs; the remainder, including your process and server logs, can be brought into the fold.
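As a minimal sketch of what “bringing your own logs into the fold” can look like, here is one way a process could ship its log lines into CloudWatch Logs with boto3. The log group, stream, and region names are made up for illustration; in practice you’d more likely run the CloudWatch agent or a dedicated log shipper on the instance.

```python
# Minimal sketch: ship application log lines to CloudWatch Logs with boto3.
# The log group, stream, and region names are assumptions for illustration.
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

group = "/myapp/production"   # hypothetical log group
stream = "web-server-1"       # hypothetical log stream

# Create the group and stream if they don't exist yet.
try:
    logs.create_log_group(logGroupName=group)
except logs.exceptions.ResourceAlreadyExistsException:
    pass
try:
    logs.create_log_stream(logGroupName=group, logStreamName=stream)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# Push a batch of events (timestamps are milliseconds since the epoch).
logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[
        {"timestamp": int(time.time() * 1000), "message": "user login succeeded"},
    ],
)
```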

User Access

All user access to your AWS infrastructure, including API access, needs to be carefully controlled, with MFA enforced.

Consider creating an AWS account that only contains users, enforces MFA for all access (console and API), and uses role assumption between accounts to grant actual rights to resources.
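To illustrate what that cross-account pattern looks like from the user’s side, here is a rough sketch of assuming a role in a target account while supplying an MFA token. The account IDs, role and MFA device ARNs, and token code are placeholders, not real values.

```python
# Sketch: assume a role in a target account from the central "users" account,
# supplying an MFA token. All ARNs and the token value are hypothetical.
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/AdminRole",    # role in the target account
    RoleSessionName="joe-admin-session",
    SerialNumber="arn:aws:iam::111111111111:mfa/joe",      # the user's MFA device
    TokenCode="123456",                                    # current MFA code
    DurationSeconds=3600,
)

creds = resp["Credentials"]

# Use the temporary credentials to work in the target account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```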

Additionally, all users should have only the minimum required permissions to accomplish their jobs, and this should be enforced at the IAM policy level.
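To make least privilege concrete, the sketch below creates and attaches a narrowly scoped policy with boto3. The policy name, bucket, and group are illustrative assumptions for your own environment, not a recommendation.

```python
# Sketch: attach a least-privilege policy that only allows reading one S3 bucket.
# The policy name, bucket, and group names are illustrative assumptions.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

policy = iam.create_policy(
    PolicyName="ReadReportsBucketOnly",
    PolicyDocument=json.dumps(policy_document),
)

# Grant the policy to a group rather than to individual users.
iam.attach_group_policy(
    GroupName="report-readers",
    PolicyArn=policy["Policy"]["Arn"],
)
```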

Access Auditing

In order to be compliant with various security standards, it’s a best practice to audit access to your infrastructure. The audit should tell you who did what, and when, and should cover both direct user access and API-level access.

AWS enables you to do this with CloudTrail, and the events logged in the trail can (and should) be sent to both S3 and CloudWatch Logs.
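A rough sketch of that setup with boto3 might look like the following. The trail name, bucket, log group, and role ARNs are placeholders, and the bucket and role must already exist with the appropriate permissions.

```python
# Sketch: create a multi-region CloudTrail trail that delivers to S3 and
# CloudWatch Logs. Bucket, log group, and role ARNs are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="example-cloudtrail-bucket",
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    EnableLogFileValidation=True,
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111111111111:log-group:/cloudtrail/audit:*",
    CloudWatchLogsRoleArn="arn:aws:iam::111111111111:role/CloudTrailToCloudWatchLogs",
)

# Trails don't record events until logging is explicitly started.
cloudtrail.start_logging(Name="org-audit-trail")
```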

Network Monitoring

Just because AWS provides most of your networking management and maintenance doesn’t mean that you shouldn’t monitor traffic flow in and out of your infrastructure.

If you use Virtual Private Clouds (VPCs), at a minimum you should enable VPC Flow Logs on your public interfaces, which allows you to see exactly who’s hitting your infrastructure and whether you’ve accepted or denied their traffic. These logs should be included in your CloudWatch Logs setup.
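For instance, a minimal sketch of enabling flow logs on a single VPC, delivered to CloudWatch Logs, might look like this; the VPC ID, log group, and role ARN are placeholders.

```python
# Sketch: turn on VPC Flow Logs for a VPC, delivered to CloudWatch Logs.
# The VPC ID, log group, and IAM role ARN are hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0abc123def4567890"],
    ResourceType="VPC",
    TrafficType="ALL",                      # capture accepted and rejected traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111111111111:role/VpcFlowLogsToCloudWatch",
)
```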

Vulnerability Analysis

If you’re running your own physical datacenter, you really only have to concern yourself with vulnerability scans of your servers. In AWS, by contrast, there are many managed resources that, if improperly created, can present unintended vulnerabilities.

In AWS, you should implement Inspector: first, to scan each of your EC2 instances for vulnerabilities, then to report on those findings for correction by your DevOps team. This handles the server side of vulnerability scanning. You should also implement Config, which checks your AWS resources in general for vulnerabilities created by the way you’ve provisioned and configured them. Those two safeguards, properly implemented, will give you a complete vulnerability picture across your AWS infrastructure.
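As one small example of the Config side, the sketch below adds the AWS-managed rule that flags security groups allowing unrestricted SSH. It assumes you already have a Config recorder running, and the rule name here is arbitrary.

```python
# Sketch: add an AWS Config managed rule that flags security groups allowing
# unrestricted SSH access. INCOMING_SSH_DISABLED is the identifier of the
# AWS-managed "restricted-ssh" rule; the rule name is an assumption.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "no-open-ssh",
        "Description": "Security groups must not allow SSH from 0.0.0.0/0",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)
```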

IPS/IDS

Detecting and preventing intrusions in AWS is different from doing so in your corporate datacenter. Traditionally, security teams implemented enterprise-class devices that plugged directly into a network. In AWS, you no longer have access to a physical network, so detecting and preventing an intrusion requires a host-based solution.

For Linux-based systems, it’s as simple (and complex) as running a kernel with SELinux enabled. SELinux will detect an intrusion attempt, log it in a centralized audit log, and then block the attempt by denying access to prohibited server resources. Additionally, SELinux will protect your networking ports, making it much more difficult for an attacker to determine where your entry points are and what type of entries exist.
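Tying this back to consolidated logging, here is a rough sketch of forwarding SELinux AVC denials from the local audit log into CloudWatch Logs. The paths and names are assumptions, and a production setup would more likely rely on the CloudWatch agent or an auditd plugin rather than a script like this.

```python
# Sketch: forward SELinux AVC denial records from the local audit log to the
# central CloudWatch Logs setup described earlier. Paths, group, and stream
# names are hypothetical, and the group/stream are assumed to already exist.
import time
import boto3

logs = boto3.client("logs")
GROUP, STREAM = "/security/selinux", "web-server-1"   # hypothetical names

def ship_denials(audit_log="/var/log/audit/audit.log"):
    # Simplistic match on AVC denial records; real parsing would use ausearch.
    with open(audit_log) as f:
        denials = [line.rstrip() for line in f if "avc" in line and "denied" in line]
    if not denials:
        return
    logs.put_log_events(
        logGroupName=GROUP,
        logStreamName=STREAM,
        logEvents=[
            {"timestamp": int(time.time() * 1000), "message": line}
            for line in denials
        ],
    )

if __name__ == "__main__":
    ship_denials()
```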

Conclusion

It is possible to create an AWS infrastructure that is more secure, and better maintained, than your existing corporate datacenter; but only if you understand, and effectively use, the tools that AWS provides — as well as the capabilities that are inherently included in your new cloud.

Feedback? Contact Joe with any comments.

About the Author
An experienced C-level executive with a breadth of expertise in operations, sales, marketing, technology, and product development, Joe specializes in innovation and change leadership, having had success as both an entrepreneur and an intrapreneur across numerous industries, including defense, insurance, telecommunications, information services, and healthcare.

Joe relishes the opportunity to make a real difference, both in the success of the corporation and in the lives of the people he interacts with. A self-starter who neither requires nor desires large amounts of oversight, he has successfully built numerous DevOps teams and architected some of the most advanced and secure utility computing applications.