Originally published on LinkedIn.
Why centralize logs in a multi-account AWS environment?
In large AWS environments using multiple accounts (for isolation, billing, governance, etc.), log data often becomes fragmented across accounts, services and regions. This fragmentation hinders:
- Rapid detection and response to issues
- Unified cost & activity tracking across the organization
- Consistent compliance, retention and audit processes
By centralizing log collection, processing and analytics in a dedicated log account (or “tooling account”), you gain visibility, reduce duplication and enforce standard policies across your entire cloud estate.
Architecture overview: Multi-Account Log Centralization
Here’s a high-level architecture for a centralized logging platform across AWS organization accounts:
Dedicated Logging Account
- Create a central AWS account dedicated to logs and observability.
- Within an AWS Organizations setup, this account can be managed as a dedicated tooling account.
Log Streaming from Source Accounts
- Enable services like CloudTrail, VPC Flow Logs, ALB logs, Lambda logs, etc. in each account.
- Use cross-account log forwarding: each source account sends logs to the central account’s S3 bucket, Kinesis Data Firehose, or CloudWatch Logs subscription.
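As a sketch, a cross-account delivery policy on the central bucket might look like the following (the bucket name `central-logs` and the account IDs are placeholders; each source account is granted write access only under its own prefix, and no read access at all):

```python
import json

# Placeholder account IDs for two hypothetical source accounts.
SOURCE_ACCOUNTS = ["111111111111", "222222222222"]

def delivery_bucket_policy(bucket: str, source_accounts: list[str]) -> dict:
    """Bucket policy letting source accounts write logs under their own
    account-id prefix, while granting them no read access."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": f"AllowPutFrom{acct}",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{acct}:root"},
                "Action": "s3:PutObject",
                # Each account may only write under its own prefix.
                "Resource": f"arn:aws:s3:::{bucket}/{acct}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            }
            for acct in source_accounts
        ],
    }

policy = delivery_bucket_policy("central-logs", SOURCE_ACCOUNTS)
print(json.dumps(policy, indent=2))
```

Requiring `bucket-owner-full-control` on delivery ensures the logging account owns every object, which matters later for Athena and lifecycle rules.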
Central Storage & Analytics
- Central S3 bucket (or buckets) in the logging account with lifecycle/retention policies.
- Data processed via AWS Glue / Lambda / Kinesis for ingestion, transformation.
- Query/visualize using Amazon Athena, Amazon OpenSearch Service (or Amazon Managed Grafana + logs), and dashboards.
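Once logs are partitioned and cataloged, Athena queries become straightforward. A hypothetical example follows; the table name and the `account`/`dt` partition keys are assumptions matching the prefix layout used in this article, while the other columns follow the standard CloudTrail table schema:

```python
def failed_console_logins_query(table: str, account_id: str, day: str) -> str:
    """Build an Athena SQL string counting failed ConsoleLogin events
    for one source account on one day, grouped by principal."""
    return f"""
        SELECT useridentity.arn AS principal, count(*) AS failures
        FROM {table}
        WHERE account = '{account_id}'
          AND dt = '{day}'
          AND eventname = 'ConsoleLogin'
          AND json_extract_scalar(responseelements, '$.ConsoleLogin') = 'Failure'
        GROUP BY useridentity.arn
        ORDER BY failures DESC
    """

query = failed_console_logins_query("cloudtrail_logs", "111111111111", "2024-05-01")
print(query)
```

Filtering on partition keys (`account`, `dt`) first is what keeps query cost low — Athena then scans only the matching prefixes instead of the whole bucket.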
Governance, Cost & FinOps Integration
- Tag logs (account, service, environment) to enable cost attribution and usage dashboards.
- Use lifecycle rules (transition to S3 Standard-IA, then S3 Glacier) to optimize storage cost.
- Cross-account logging also simplifies security and audit trail aggregation.
Step by Step Implementation
Step 1 – Enable CloudTrail & log delivery
In each account, activate a multi-Region CloudTrail and configure it to deliver to the central account’s S3 bucket (via a bucket policy that allows cross-account PutObject).
Use CloudWatch Logs and Kinesis Data Streams for near-real-time log ingestion if required.
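The bucket policy CloudTrail needs for cross-account delivery follows a well-known shape: the `cloudtrail.amazonaws.com` service principal gets `GetBucketAcl` on the bucket and `PutObject` on each account's `AWSLogs/` prefix. A sketch in Python (the bucket name and account IDs are placeholders):

```python
import json

def cloudtrail_bucket_policy(bucket: str, org_accounts: list[str]) -> dict:
    """Policy allowing the CloudTrail service principal to deliver logs
    from member accounts into the central bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # CloudTrail checks the bucket ACL before delivering.
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                # One AWSLogs/<account-id>/ prefix per member account.
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": [
                    f"arn:aws:s3:::{bucket}/AWSLogs/{acct}/*"
                    for acct in org_accounts
                ],
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

print(json.dumps(cloudtrail_bucket_policy("central-logs", ["111111111111"]), indent=2))
```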
Step 2 – Set up central logging account
In the central account:
- Create S3 bucket(s) with standardized prefix structure:
s3://central-logs/<account-id>/<region>/<service>/<date>/
- Apply lifecycle rules: e.g., hot logs in S3 Standard for 30 days, then S3 Standard-IA or S3 Glacier for long-term retention.
- Register the logs in the AWS Glue Data Catalog and define partitions so Athena can query them efficiently.
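A minimal sketch of the prefix convention and a matching lifecycle configuration (the 7-year expiration is an assumed retention period — adjust it to your own compliance requirements):

```python
from datetime import date

def log_key_prefix(account_id: str, region: str, service: str, day: date) -> str:
    """Build the standardized prefix: <account-id>/<region>/<service>/<date>/"""
    return f"{account_id}/{region}/{service}/{day:%Y/%m/%d}/"

# Lifecycle sketch: 30 days in S3 Standard, then Standard-IA,
# Glacier after 90 days, expire after ~7 years (assumed retention).
LIFECYCLE = {
    "Rules": [
        {
            "ID": "tier-and-expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all log prefixes
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 2555},
        }
    ]
}

print(log_key_prefix("111111111111", "eu-west-1", "vpc-flow", date(2024, 5, 1)))
# → 111111111111/eu-west-1/vpc-flow/2024/05/01/
```

Splitting the date into `YYYY/MM/DD` path segments maps directly onto Athena partitions, so queries scoped to a day or month only scan the relevant objects.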
Step 3 – Consolidate and transform
- Use an AWS Glue crawler to detect partitions and define schemas for the different log types (CloudTrail, VPC Flow Logs, application logs).
- Run Lambda or Kinesis Data Firehose for streaming ingestion when low-latency detection is needed (e.g., GuardDuty findings).
- Use OpenSearch or Managed Grafana as central dashboarding layer.
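As an illustration of the transformation step, here is a small normalizer for VPC Flow Log records (default version-2 format) — the kind of logic a Firehose transform Lambda would run. Typing the numeric fields up front makes downstream queries and alerting much cleaner:

```python
import json

# VPC Flow Logs default (version 2) fields, space-separated in each record.
FLOW_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def normalize_flow_record(line: str) -> dict:
    """Map a raw flow-log line to a flat dict with typed numeric fields."""
    rec = dict(zip(FLOW_FIELDS, line.split()))
    for key in ("srcport", "dstport", "packets", "bytes", "start", "end"):
        rec[key] = int(rec[key])
    return rec

raw = ("2 111111111111 eni-0abc 10.0.1.5 10.0.2.9 443 49152 6 "
       "10 8400 1700000000 1700000060 ACCEPT OK")
print(json.dumps(normalize_flow_record(raw)))
```

The same pattern — split, name, type — applies to pre-filtering: drop `NODATA`/`SKIPDATA` records before they ever reach storage, and the noisiest log source gets noticeably cheaper.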
Step 4 – Cost & retention management
- Monitor S3 storage, Glue crawler run cost, query costs (Athena) and enforce budgets.
- Archive older logs or delete if not needed, according to compliance.
- Build dashboards in QuickSight or Grafana to show log ingestion rate, cost by account, and alerts by severity.
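Feeding a "cost by account" panel can be as simple as aggregating object sizes from an S3 Inventory report. A sketch with made-up rows (account IDs and sizes are illustrative only):

```python
from collections import defaultdict

# Hypothetical inventory-style rows: (account_id, storage_class, size_bytes).
ROWS = [
    ("111111111111", "STANDARD", 50 * 2**30),
    ("111111111111", "GLACIER", 200 * 2**30),
    ("222222222222", "STANDARD", 80 * 2**30),
]

def bytes_by_account(rows):
    """Aggregate stored log bytes per source account, e.g. to feed a
    QuickSight/Grafana 'cost by account' panel."""
    totals = defaultdict(int)
    for account, _storage_class, size in rows:
        totals[account] += size
    return dict(totals)

print(bytes_by_account(ROWS))
```

Because the key prefix already encodes the account ID, this attribution falls out of the naming convention for free — another payoff of standardizing prefixes early.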
Step 5 – Security & compliance
- Enforce IAM roles and bucket policies so that source accounts can only deliver logs and cannot access another account’s logs.
- Enable encryption at rest (S3 SSE-KMS) and in transit.
- Use AWS Config / Security Hub to ensure log delivery is enabled in all member accounts.
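One way to enforce the "deliver only, never read" rule is an explicit Deny statement on the central bucket that blocks read actions from any principal outside the logging account (account IDs below are placeholders; delivery via `PutObject` is unaffected since only read actions are denied):

```python
def deny_cross_account_read(bucket: str, logging_account: str) -> dict:
    """Explicit Deny: no principal outside the logging account can read
    or list objects, so source accounts can deliver logs but never
    browse their own or another account's logs."""
    return {
        "Sid": "DenyReadOutsideLoggingAccount",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:PrincipalAccount": logging_account}
        },
    }

print(deny_cross_account_read("central-logs", "999999999999"))
```

An explicit Deny wins over any Allow, so even an over-permissive IAM policy in a source account cannot grant read access to the central bucket.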
Key Benefits & Observations
- Unified observability: A single pane of glass for logs across accounts.
- Cost efficiency: Reduced overhead from duplicated tools; better storage lifecycle usage.
- Faster investigations: Troubleshooting cross-account issues becomes simpler.
- Governance & compliance: Standardized retention, encryption, access controls.
- Scalability: As new accounts are onboarded, you just attach logging policies; no siloed log systems.
Practical Tips & Pitfalls
- Watch data volume: Cross-account logs can grow quickly; plan capacity and cost.
- Avoid single point of failure: Ensure the logging account’s S3, ingestion, dashboards are highly available.
- Standardize naming/partitioning: A clear structure (account, region, service, date) simplifies querying.
- Think transformation: Not all logs are alike — normalise fields, pre-filter noisy data.
- Onboard new accounts via automation (Terraform, AWS CDK) to ensure consistent logging setup.
Wrap-Up
If you’re operating at scale with multiple AWS accounts, centralized log management isn’t optional — it’s essential. The architecture described above gives you a reusable blueprint. With the right tooling, you’ll gain visibility, cost control and compliance across your cloud estate.
Want to dive deeper into specific ingestion pipelines, dashboards, or cost-analysis strategies? Drop me a message or book a slot via my calendar.
