Clean Up the AWS Kitchen Sink

A walkthrough of how to evaluate cloud environments and secure them using Identity and Access Management (IAM).

Jan 27, 2022



Stephen Kuenzli of K9 Security walks us through how to evaluate cloud environments and secure them using Identity and Access Management (IAM), as well as other tools.

Video Transcript

I'm Stephen Kuenzli, author of Effective IAM for AWS and IAM Pulse contributor.

I spent years leading AWS migrations and we always found IAM to be the hardest part. Now I’m making AWS IAM usable for Cloud teams.

Today we're going to talk about what to do when you need to clean up an AWS organization that maybe you just inherited. Or maybe it's your own organization that’s grown messy and you need to clean it up quickly.

Slide: The sink is a mess (00:31)

Let's set the scene. You've just acquired another company and there are eight AWS accounts.

There are 25 known apps and at least as many data stores. Some apps are sharing IAM roles.

The access controls found in due diligence are spotty, at best.

Overall, things are a mess.

You've been challenged with securing the critical data in those accounts within three months, so your company can integrate features from the acquisition into your services while maintaining your security posture.

For your other accounts, the project that created equivalent controls with custom least-privilege policies for each app took a year. What are you going to do?

Slide: The old way is too slow (00:51)

The old way? It’s too slow. It’s not going to get you there in time.

There are 25 known apps in these accounts. Each has at least one data store. Some data stores are shared.

You've got twelve weeks.

It's not going to be possible for your organization’s two IAM experts to learn and secure two unfamiliar apps each week.

The app engineers aren't IAM experts, so they're not going to be able to write least privilege policies. At best, they can provide high-level information about what data each app needs to access.

Looks like we're headed for overloaded experts, ineffective security, or both.

We're not going to make it.

Slide: The new way (01:49)

But there is another way. We can focus our efforts on the critical data, which is the goal of this project.

And we can use resource boundaries to protect that data.

But we still have to deploy this solution to 25 applications and at least as many data stores in time.

So we need to codify protecting data with a resource boundary into an easily usable component.

So let's dig into this new solution.

Slide: Control data access with Resource Boundaries (02:27)

A resource boundary controls access to critical data and can simplify access review.

You create resource boundaries with resource policies that control access to that data resource or encryption key.

First, you control the Identity dimension: who can access the data? How?
Second, you control the Network dimension: where can authorized identities access the data from?

So you can put the access control close to the resource being protected.

Our secret weapon here is the Key Management Service, KMS.

Slide: Control Access to data with KMS (02:59)

We can control access to data with KMS. I mean, that's what encryption is all about. Encryption is not just a compliance checkmark. It’s about access control.

Encryption scrambles data, so no one can understand it unless they have the key. AWS makes encryption easy. And KMS supports resource policies so you can control who can use the key.

More than 65 data services in AWS support encrypting data with KMS. That includes all the core data services like DynamoDB and RDS.

When you encrypt data with your own customer managed key, you can control who can call encrypt, decrypt, and administration operations.

So we can identify the data for each application or data domain like Accounting, Ecommerce, or User Profiles.

Then we can encrypt that data with a dedicated key.

Now we can use KMS key policies to allow authorized applications & people the access they need and deny everyone else.
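To make this concrete, here is a minimal sketch of what such a key policy could look like, built as a plain Python dict. The account ID and role names are hypothetical, and the statement set is far simpler than what a production automation library would generate; it just illustrates the allow-authorized, deny-everyone-else pattern.

```python
import json

# Hypothetical principals for illustration -- substitute your own ARNs.
ACCOUNT = "111122223333"
APP_ROLE = f"arn:aws:iam::{ACCOUNT}:role/ecommerce-app"
ADMIN_ROLE = f"arn:aws:iam::{ACCOUNT}:role/key-administrator"

# A minimal key policy sketch: allow the app to use the key, allow the
# admin role to manage it, and deny everyone else.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppDataAccess",
            "Effect": "Allow",
            "Principal": {"AWS": APP_ROLE},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
        {
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": ADMIN_ROLE},
            "Action": ["kms:DescribeKey", "kms:GetKeyPolicy", "kms:PutKeyPolicy",
                       "kms:EnableKeyRotation", "kms:TagResource",
                       "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            # Caution: an over-broad Deny can lock out administrators.
            # Always test Deny statements in a dev account first.
            "Sid": "DenyEveryoneElse",
            "Effect": "Deny",
            "Principal": {"AWS": "*"},
            "Action": "kms:*",
            "Resource": "*",
            "Condition": {
                "ArnNotEquals": {"aws:PrincipalArn": [APP_ROLE, ADMIN_ROLE]}
            },
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

A real policy would also cover service principals and grants; treat this as the shape of the idea, not a drop-in policy.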

When an app or person needs to access encrypted data in a service like S3, the service will request an encryption operation on the requesting principal’s behalf.

For the request to succeed, the requesting principal must have permission to use both the data service API with the resource and the key protecting the data in that resource.

The requestor has to have both.

So to read an encrypted object from S3, an IAM principal must be allowed to call s3:GetObject on the object and kms:Decrypt on the key protecting the object.

And if you don't have privileges to encrypt or decrypt with that key, you can't read or write the data.
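The "requestor has to have both" rule can be sketched on the identity side, too. This hypothetical identity policy grants both halves of the S3 read; the bucket name and key ID are placeholders, and the key's own policy must independently allow the same principal.

```python
# Hypothetical identity policy: reading an encrypted S3 object requires
# BOTH s3:GetObject on the object AND kms:Decrypt on the protecting key.
# Bucket name, region, account, and key ID below are illustrative.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadObjects",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::ecommerce-data/*",
        },
        {
            "Sid": "DecryptWithDomainKey",
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        },
    ],
}
```

Remove either statement and the read fails: without the first, S3 denies the GetObject; without the second, S3 cannot decrypt the object on the principal's behalf.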

The details of this are explained in chapter 5 of Effective IAM for AWS.

Slide: Identify Critical Data Sources (04:52)

Okay, so what are we going to do practically?

First, let's identify the critical data sources. We'll start by asking app engineers where the critical data is. They may not be able to describe a policy, but they can tell you where the data is and generally who should have access to it.

And they should be able to describe what kind of access is needed. Does their app need to read data?

Should it be able to write data to that RDS database or a DynamoDB table? Does it need to create SQS queues?

You also need to inventory the existing KMS keys and see where they're used.

Based on my experience, it's unlikely those keys have policies that prevent use by over-permissioned identities, such as principals granted kms:Decrypt on all keys.

Determine what data goes together and draw a diagram. Group data resources together into their natural data domains, just like on the previous slide. This will typically align to an app or functional grouping. We’re going to create a security domain out of each of these data domains by protecting it with an encryption resource boundary.

Slide: Design your resource boundaries (05:52)

Now let’s design the encryption resource boundaries.

Use one KMS key per data domain.

And pair with the application engineers to determine who needs to read and write in and out of that domain.

Often multiple applications will access certain data stores, and you'll need to give them appropriate privileges. Some app identities will need to read, some to write, some both. Record this information in that same high-level language.

We don't need to get into specific API actions at this point. And you don’t have time for that discussion anyway. Simply: read-write to this data store or that data store.

Now adopt a usable KMS key policy automation solution where the app engineer can understand and review the inputs.

In fact, that automation, library or tool needs to understand what you and the application engineer just described.

AppA needs to read and write to that data store.
AppB only needs to be able to read.
The CI/CD system needs to be able to administer those resources, et cetera.
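That high-level description can be captured as simple data long before anyone writes a policy statement. Here is a hypothetical sketch; the app and role names are placeholders, and a real tool might use YAML or Terraform variables instead of a Python dict.

```python
# A hypothetical high-level access spec, staying close to the language
# app engineers actually use: who reads, who writes, who administers.
# Names are illustrative placeholders.
access_spec = {
    "read_data": ["AppA", "AppB"],       # AppA and AppB may decrypt
    "write_data": ["AppA"],              # only AppA may encrypt new data
    "administer_resources": ["ci-cd"],   # CI/CD manages the key and resources
}
```

Notice there are no KMS or IAM action names here; the translation to specific API actions is the automation's job, not the engineer's.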

Slide: Usable access control interface

Let’s see how engineers should be able to specify their intended access control.

We’ll look at just the interface here.

So we can see what a usable access control automation interface should look like.

It will be close to our human readable description.

First we translate our access data table into variables that define the principals who should have each access capability.

Then we assign them to the key.

We've lifted the conversation up from the low-level details of IAM security policy to a language non-experts can understand and review.

Now the library turns these inputs into nearly 200 lines of key policy implementing least privilege. There are statements allowing each capability for authorized principals and denying everyone else.
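The translation step could be sketched like this. This is not the actual library from the repo; the capability-to-action mapping, role naming convention, and account ID are all illustrative assumptions about how such an automation might work.

```python
# Sketch of an automation that renders a high-level access spec into a
# KMS key policy. Capability-to-action mapping and role ARN convention
# are illustrative assumptions, not the real library's implementation.
CAPABILITY_ACTIONS = {
    "read_data": ["kms:Decrypt"],
    "write_data": ["kms:Encrypt", "kms:GenerateDataKey*"],
    "administer_resources": ["kms:DescribeKey", "kms:GetKeyPolicy",
                             "kms:PutKeyPolicy", "kms:TagResource",
                             "kms:ScheduleKeyDeletion"],
}

def render_key_policy(access_spec, account_id):
    """Turn {capability: [principal names]} into a key policy document."""
    statements = []
    all_arns = set()
    for capability, principals in access_spec.items():
        arns = [f"arn:aws:iam::{account_id}:role/{p}" for p in principals]
        all_arns.update(arns)
        statements.append({
            "Sid": "Allow" + capability.title().replace("_", ""),
            "Effect": "Allow",
            "Principal": {"AWS": sorted(arns)},
            "Action": CAPABILITY_ACTIONS[capability],
            "Resource": "*",
        })
    # Deny every principal not named in the spec.
    statements.append({
        "Sid": "DenyEveryoneElse",
        "Effect": "Deny",
        "Principal": {"AWS": "*"},
        "Action": "kms:*",
        "Resource": "*",
        "Condition": {"ArnNotEquals": {"aws:PrincipalArn": sorted(all_arns)}},
    })
    return {"Version": "2012-10-17", "Statement": statements}

policy = render_key_policy(
    {"read_data": ["AppA", "AppB"], "write_data": ["AppA"],
     "administer_resources": ["ci-cd"]},
    "111122223333",
)
```

The engineer reviews the three-line spec; the generated allow-plus-deny statements are the machinery underneath.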

Check out the github repo for details.

And this is what we need to get this job done quickly while improving security greatly.

Slide: Encrypt Everything with KMS! (07:54)

Here comes the best part. Encrypt everything!

First, provision the KMS keys and policies using automation. Then enable encryption for each data source.

Do this in Dev first, of course. We don’t want to just slap this into production.

Switch those DynamoDB tables, RDS databases, S3 buckets, etc. to use the key for that domain.
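For DynamoDB and S3, switching to the domain key is a single API call per resource. The dicts below sketch the request parameters those boto3 calls take (dynamodb.update_table and s3.put_bucket_encryption); the table, bucket, and key ARN are placeholders. Note that RDS storage encryption cannot be toggled on an existing instance in place; it requires encrypting a snapshot copy and restoring from it.

```python
# Hypothetical key ARN for the domain; substitute your own.
domain_key_arn = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

# Parameters for dynamodb.update_table: switch the table's server-side
# encryption to the customer-managed domain key.
dynamodb_params = {
    "TableName": "ecommerce-orders",  # placeholder name
    "SSESpecification": {
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": domain_key_arn,
    },
}

# Parameters for s3.put_bucket_encryption: make the domain key the
# bucket's default encryption key for new objects.
s3_params = {
    "Bucket": "ecommerce-data",  # placeholder name
    "ServerSideEncryptionConfiguration": {
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": domain_key_arn,
            }
        }]
    },
}
```

With automation, applying these across 25 apps is a loop over the data-domain inventory rather than 25 manual console sessions.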

Resolve any missed access.

Now each of your critical data sources is shielded by a resource boundary.

Also recognize that these policies, and security policies in general, are really part of the application.

Slide: Celebrate! (08:39)

Now, celebrate. You've completed your mission!

You've protected critical data sources.

You’ve also scaled security out to application engineers.

They can now take responsibility for maintaining what those inputs are. After all, they were the source of those high level access definitions. They're not necessarily equipped yet to make enhancements to the automation you adopted. But they can work with you on that.

On top of that, you've simplified access review. Now you can both use this higher-level language to discuss security controls.

And that's how you clean up an AWS kitchen sink.

I hope this has been helpful. Please feel free to contact me with questions or comments on Twitter at @skuenzli.

