Multi-Account Strategies

Thuong To
Dec 12, 2023 · 6 min read


This week’s customer currently operates everything for all the clients they support out of a single AWS account. That isn’t ideal for a company that wants to organize and scale its use of AWS as the business grows.

The practice of using multiple accounts has many advantages. To summarize, you can group workloads based on business purposes and ownership, centralize logging, and constrain access to sensitive data. You can also limit the scope of impact from adverse events, manage costs better, and distribute AWS service quotas and API request rate limits.

Businesses that are starting to adopt AWS, expanding their AWS footprint, or planning to enhance an established AWS environment need a solid foundation for their cloud environment. One important aspect of that foundation is organizing the AWS environment by following a multi-account strategy.

By using multiple AWS accounts to help isolate and manage your business applications and data, you can optimize across most of the AWS Well-Architected Framework pillars — including operational excellence, security, reliability, and cost optimization.

Group workloads based on business purpose and ownership

You can group workloads with a common business purpose into distinct accounts. As a result, you can align the ownership and decision-making of those accounts. You can also avoid dependencies and conflicts with how workloads in other accounts are secured and managed.

Different business units or product teams might have different processes. Depending on your overall business model, you might choose to isolate distinct business units or subsidiaries in different accounts. By isolating business units, they can operate with greater decentralized control — while still retaining the ability to provide overarching guardrails. This approach might also ease divestment of those units over time.

Guardrails are governance rules for security, operations, and compliance that you can define and apply to align with your overall requirements.

If you acquire a business that already operates in AWS, you can move the associated accounts into your existing organization intact. This movement of accounts can be an initial step toward integrating acquired services into your standard account structure.
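As a rough sketch, that movement could look like the following with the AWS Organizations API (boto3). The account ID and OU ID are placeholders, and the calls assume they run from the organization’s management account, with the acquired account accepting the invitation handshake between the two steps.

```python
import boto3

org = boto3.client("organizations")  # run from the organization's management account

# Step 1: invite the standalone (acquired) account to join the organization.
handshake = org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"},  # placeholder account ID
    Notes="Joining the organization as part of an acquisition",
)["Handshake"]
print("Invitation sent, handshake id:", handshake["Id"])

# Step 2 (after the invited account accepts the handshake): move the account
# from the organization root into the OU that matches your account structure.
root_id = org.list_roots()["Roots"][0]["Id"]
org.move_account(
    AccountId="111122223333",
    SourceParentId=root_id,
    DestinationParentId="ou-examplerootid111-exampleouid111",  # placeholder OU ID
)
```

Keeping the acquired accounts intact first, and only then migrating workloads toward your standard account structure, avoids forcing a disruptive re-platforming on day one.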

Apply distinct security controls by environment

Workloads often have distinct security profiles that require separate control policies and mechanisms to support them. For example, it’s common to apply different policies for security and operations to the non-production and production environments of a given workload. If you use separate accounts for the non-production and production environments, the resources and data that make up a workload environment are separated from other environments and workloads by default.
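One way to express those distinct controls is with service control policies (SCPs) attached to separate production and non-production organizational units. The sketch below, using boto3 from the management account, creates and attaches an illustrative guardrail to a placeholder production OU; the policy content is only an example, not a recommended baseline.

```python
import json
import boto3

org = boto3.client("organizations")  # run from the organization's management account

# Example guardrail for the production OU: prevent anyone from disabling CloudTrail.
prod_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="ProdProtectCloudTrail",
    Description="Deny disabling CloudTrail in production accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(prod_guardrail),
)["Policy"]["PolicySummary"]

# Attach the guardrail to the production OU only; non-production OUs can carry
# a different, more permissive set of policies.
org.attach_policy(
    PolicyId=policy["Id"],
    TargetId="ou-examplerootid111-prodouid1111",  # placeholder production OU ID
)
```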

Constrain access to sensitive data

When you limit sensitive data stores to an account that is built to manage it, you can more easily constrain the number of people and processes that can access and manage the data store. This approach simplifies the process of achieving least-privilege access. Limiting access at the coarse-grained level of an account helps contain exposure to highly sensitive data.

For example, by designating a set of accounts to house publicly accessible Amazon Simple Storage Service (Amazon S3) buckets, you can implement policies that expressly forbid all other accounts from making S3 buckets publicly available.
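A complementary control is the account-level S3 Block Public Access setting, which you can switch on for every account outside the designated public-content set. The account ID below is a placeholder, and in practice you might also enforce the rule with an SCP or an AWS Config rule; this is just a minimal sketch.

```python
import boto3

s3control = boto3.client("s3control")

# Turn on account-level S3 Block Public Access for a data account, so no bucket
# in that account can be made public regardless of its bucket policy or ACLs.
s3control.put_public_access_block(
    AccountId="111122223333",  # placeholder member account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```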

Promote innovation and agility

At AWS, we refer to your technologists as builders because they’re responsible for building value by using AWS products and services. Your builders might represent diverse roles, such as application developers, data engineers, data scientists, data analysts, security engineers, and infrastructure engineers.

In the early stages of a workload’s lifecycle, you can help promote innovation by providing your builders with separate accounts in support of experimentation, development, and early testing. These environments often provide greater freedom than more tightly controlled production-like test and production environments. They do so by providing broader access to AWS services while also using guardrails that help prohibit access to (and the use of) sensitive and internal data.

  • Sandbox accounts: Typically disconnected from your enterprise services and don’t provide access to your internal data. However, they offer the greatest freedom for experimentation.
  • Development accounts: Typically provide limited access to your enterprise services and development data. However, they can more readily support day-to-day experimentation with your enterprise-approved AWS services, formal development, and early testing work.

In both cases, we recommend security guardrails and cost budgets so that you limit risks and proactively manage costs.
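A simple way to attach a cost budget to a sandbox or development account is with the AWS Budgets API. The sketch below assumes placeholder values for the account ID, spend ceiling, and notification email address.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # placeholder sandbox account ID
    Budget={
        "BudgetName": "sandbox-monthly-cap",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "200", "Unit": "USD"},  # placeholder ceiling
    },
    NotificationsWithSubscribers=[
        {
            # Email the account owner when actual spend passes 80% of the cap.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "builder@example.com"}
            ],
        }
    ],
)
```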

You can support later stages of the workload lifecycle by using distinct test and production accounts for workloads or groups of related workloads. By having an environment for each set of workloads, owning teams can move faster by reducing dependencies on other teams and workloads, and by also minimizing the impact of changes.

Limit scope of impact from adverse events

An AWS account applies security, access, and billing boundaries to your AWS resources. These boundaries help you achieve independence and isolation of resources. By design, all resources that are provisioned within an account are logically isolated from resources that are provisioned in other accounts, even within your own AWS environment.

This isolation boundary provides a way to limit the risks of an application-related issue, misconfiguration, or malicious actions. If an issue occurs within one account, impacts to workloads contained in other accounts can be either reduced or eliminated.

Support multiple IT operating models

Organizations often have multiple IT operating models, or ways that they divide responsibilities among parts of the organization to deliver their application workloads and platform capabilities. Three common examples are Traditional Ops, CloudOps, and DevOps:


In the Traditional Ops model, teams who own custom and commercial off-the-shelf (COTS) applications are responsible for engineering their applications, but not for their production operations. A cloud platform engineering team is responsible for engineering the underlying platform capabilities. A separate cloud operations team is responsible for the operations of both applications and platform.

In the CloudOps model, application teams are also responsible for production operations of their applications. In this model, a common cloud platform engineering team is responsible for both the engineering and operations of the underlying platform capabilities.

In the DevOps model, the application teams take on the additional responsibilities of engineering and operating platform capabilities that are specific to their applications. A cloud platform engineering team is responsible for the engineering and operations of shared platform capabilities that are used by multiple applications.

As a practice, IT Service Management (ITSM) is a common element across all of the models. Your overall goals and requirements for ITSM might not change across these models. However, the responsible individuals and solutions for meeting those goals and requirements can vary, depending on the model.

Given the implications of centralized operations versus more distributed operational responsibilities, you might benefit from establishing separate groups of accounts in support of different operating models. By using separate accounts, you can apply distinct governance and operational controls that are appropriate for each of your operating models.

Manage costs

An account is the default way that AWS costs are allocated. By using different accounts for different business units and groups of workloads, you can more easily report, control, forecast, and budget your cloud expenditures.

In addition to cost reporting at the account level, AWS has built-in support to consolidate and report costs across your entire set of accounts. When you require fine-grained cost allocation, you can apply cost allocation tags to individual resources in each of your accounts.
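For example, the Cost Explorer API can report spend per member account from the management (payer) account. The sketch below uses boto3 with placeholder dates and assumes Cost Explorer is enabled for the organization.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-11-01", "End": "2023-12-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # Because each workload group lives in its own account, grouping by linked
    # account gives a per-workload cost breakdown without any tagging work.
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        account_id = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{account_id}: ${amount:.2f}")
```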

Distribute AWS service quotas and API request rate limits

AWS service quotas (also known as limits) are the maximum number of service resources or operations that apply to an account. For example, a service quota could be the number of S3 buckets that you can create for each account.

Service quotas help protect you from unexpected or excessive provisioning of AWS resources, and from malicious actions that could dramatically increase your AWS costs.

AWS services can also throttle (or limit) the rate of requests that are made to their API operations.

Because service quotas and request rate limits are allocated for each account, use separate accounts for workloads to help distribute the potential impact of the quotas and limits.
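To see the allocation a given account actually has, you can list its applied quotas with the Service Quotas API. The following sketch, run in each workload account (or through an assumed role into it), prints the Amazon S3 quotas for that account.

```python
import boto3

quotas = boto3.client("service-quotas")

# List the quotas that apply to Amazon S3 in the current account.
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="s3"):
    for quota in page["Quotas"]:
        adjustable = "adjustable" if quota["Adjustable"] else "fixed"
        print(f"{quota['QuotaName']}: {quota['Value']} ({adjustable})")
```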

For more information about multi-account strategies, see the AWS whitepaper Organizing Your AWS Environment Using Multiple Accounts and the AWS Organizations documentation.
