Securing applications in the public cloud

The security and auditing model of installing agents on virtual servers breaks down in the public cloud.


I have written about the cloud-induced transformation of IT before. Adapting IT audit and monitoring processes to cloud infrastructure is one of the challenges I most often come across in cloud rollouts.

In a 1990s-era data center, everything revolves around hardware and virtual machines. Big, monolithic applications are installed and run on servers. The servers themselves sit either in a secure private subnet or in a public one (the DMZ), and they run various security agents that monitor and log everything going into and out of these machines.

Public cloud is not a data center; it is a platform

It is easy to think of a public cloud (such as AWS) as a managed hosting service or colocation. However, that is only a fraction of what public clouds offer. Among the services provided by large providers like AWS and Azure are storage, queuing, machine learning, container hosting, database engines and much more.

There are other application platform services out there in the public cloud. Consider the REST API services offered by Microsoft Office 365, Salesforce, Google and LinkedIn: None of them involve any virtual servers at all.

The services that do create virtual server instances do so in an entirely automated fashion. Consider AWS RDS, for example, which under the covers spins up virtual servers that run the DBMS software. Likewise, the AWS Elastic Container Service, Elastic MapReduce and Kinesis create fully managed EC2 instances. It would defeat the purpose of using the public cloud to try to manage these servers on your own.

Cloud application servers are ephemeral

A correctly built cloud-first application server is transient. It scales automatically with the workload, wakes up when needed and goes to sleep when not. Developers don't log on to these servers directly, and it does not make sense to monitor them in the traditional way.
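The "scales automatically with the workload" behavior can be sketched as a simplified target-tracking calculation. This is an illustrative model, not an AWS API: the function name, the metric and target, and the capacity bounds below are all assumptions for the sketch.

```python
import math

def desired_capacity(current, metric, target, min_cap=1, max_cap=10):
    """Simplified target-tracking scaling decision (illustrative):
    resize the fleet so the per-instance load approaches the target.
    current  -- number of instances running now
    metric   -- observed per-instance load (e.g., CPU percent)
    target   -- the per-instance load we want to maintain
    """
    if metric <= 0:
        return min_cap  # idle fleet shrinks to the floor
    desired = math.ceil(current * metric / target)
    return max(min_cap, min(max_cap, desired))

# Four instances running hot at 90% against a 60% target -> scale out to 6.
print(desired_capacity(current=4, metric=90, target=60))  # prints 6
```

In a real deployment this decision is made by the provider's auto scaling service, not by your code; the point is that no human logs on to a server to make it happen.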

Applications can be built without any servers

It is possible to build a modern app without any backend server component at all. Useful applications can be put together as mashups of APIs offered by social media and cloud providers.

These applications require no custom backend code at all; any backend logic that is needed can be built with something like AWS Lambda functions. The APIs and functions execute on servers over which IT has little or no control.
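A backend function in this style can be a few lines of code. Below is a minimal sketch of a Lambda-style Python handler for an API Gateway proxy event; the `name` query parameter and the greeting payload are illustrative, but the event/response shape follows the Lambda proxy integration convention.

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler for an API Gateway proxy event.
    Reads an (illustrative) 'name' query parameter and returns a
    JSON response in the proxy-integration format."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# The handler can be invoked locally with a sample event --
# there is no server for IT to install an agent on.
resp = handler({"queryStringParameters": {"name": "cloud"}}, None)
print(resp["statusCode"])  # prints 200
```

Note that the unit of deployment here is the function itself; the servers it runs on are invisible, which is exactly why agent-based monitoring does not apply.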

Consider using tools offered by the cloud provider

Every cloud provider offers tools for security monitoring, logging and auditing. If a provider does not, perhaps that choice should be re-examined.

While Azure has similar tools and procedures, I will focus on AWS, since it is the one I am most familiar with. IT teams should review the AWS security audit guide.

At the infrastructure level, consider AWS Elastic Load Balancer log collection. Operations teams can collect and analyze network traffic using VPC flow logs.
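Analyzing VPC flow logs is mostly a matter of parsing space-separated records. Records in the default (version 2) format carry 14 fields; the parser below is a minimal sketch, and the sample record follows the field layout shown in the AWS flow log documentation.

```python
# Field order of the default (version 2) VPC flow log record format.
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Parse one default-format VPC flow log record into a dict,
    converting numeric fields to int ('-' means no data)."""
    rec = dict(zip(FLOW_FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol",
                "packets", "bytes", "start", "end"):
        if rec.get(key, "-") != "-":
            rec[key] = int(rec[key])
    return rec

# An inbound SSH connection that was accepted (protocol 6 = TCP).
sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_record(sample)
print(rec["action"], rec["dstport"])  # prints: ACCEPT 22
```

In practice teams would run queries like this over logs delivered to CloudWatch Logs or S3 rather than parsing lines by hand, but the record format is the same.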

There are a few more options for additional security. Most importantly, any additional tiers that implement security must meet or exceed application availability and scalability requirements. Before configuring any custom in-line gateway or forward proxy, I would still recommend first exhausting native AWS resources.

My AWS VPC security wish list

Before NAT Gateways became available, AWS recommended configuring a NAT instance, which required additional administrative and DevOps effort. When the NAT Gateway became available as a service, it was an instant hit with DevOps teams: compared with a NAT EC2 instance, the managed gateway offers better availability and more bandwidth, and it works well for most use cases. Likewise, AWS should offer a managed forward proxy as a service.

The real challenge, however, is ensuring that the EC2 instances in the private subnet can only talk to approved external services -- and that includes AWS APIs such as DynamoDB, Kinesis, S3 and SQS. The purpose of VPC endpoints is to allow applications to communicate with AWS services without going through the public internet.

Unfortunately, AWS offers only an S3 endpoint today. That severely limits the AWS services that can be used from a private subnet without jumping through hoops. Creating VPC endpoints for all AWS services should be at the top of the list for AWS.
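One useful property of the S3 endpoint that does exist: it accepts a policy document restricting what can be reached through it, which is exactly the "approved services only" control described above. A sketch follows; the bucket name is hypothetical, and the commented-out boto3 call shows where such a policy would be attached.

```python
import json

# Sketch of a VPC endpoint policy confining the S3 endpoint to one
# (hypothetical) application bucket. Instances in the private subnet
# could then reach only this bucket through the endpoint.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::approved-app-bucket/*",
    }],
}

# With boto3, the policy is supplied at endpoint creation, e.g.:
# ec2 = boto3.client("ec2")
# ec2.create_vpc_endpoint(
#     VpcId="vpc-...", ServiceName="com.amazonaws.us-east-1.s3",
#     RouteTableIds=["rtb-..."],
#     PolicyDocument=json.dumps(endpoint_policy))
print(json.dumps(endpoint_policy, indent=2))
```

Until endpoints exist for the other services, traffic to APIs like DynamoDB or Kinesis cannot be confined this way, which is the gap the wish list above is about.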

This article is published as part of the IDG Contributor Network.
