Code samples and documentation
Next.js + AWS (CloudFront, S3, Route53, ACM) + Terraform + GitHub Actions CI/CD
Complete Terraform configuration, GitHub Actions workflows, and detailed setup instructions are available in the repository. Please feel free to use it as a reference for your own projects or to suggest improvements.
Remember: Sometimes the longest roads teach you the most valuable lessons.
Why build another blogging platform?
“Just use WordPress,” they say. “It’ll be done in a few minutes.”
As an AWS Solutions Architect, I know I could set up a simple blog in the console within an hour. But here’s the thing – I’ve built production systems that maintain 99.9% uptime, and I’ve clicked through the console countless times. This project isn’t about the fastest path to blogging; it’s about pushing my limits.
I needed a project that would:
- Force me to translate my console knowledge into code
- Challenge my understanding of AWS service interactions
- Create a realistic testing ground for complex cloud architectures
- Provide documentation for other engineers facing similar challenges
The real value? There’s a gap between knowing how to do something in the console and implementing it in Terraform. That gap is where learning happens. That’s where engineers grow.
Architectural decisions
Core infrastructure components
I designed the architecture with several key requirements in mind:
- High availability: Edge distribution using CloudFront
- Security: Appropriate IAM roles and OIDC federation
- Scalability: S3 for static content with version control
- Automation: A complete CI/CD pipeline with GitHub Actions
Here’s how each part fits together:
Content delivery and security
- CloudFront distribution: Handles HTTPS termination, caching, and low-latency delivery
- ACM (AWS Certificate Manager): Manages SSL/TLS certificates for secure communication
- Origin Access Identity (OAI): Restricts S3 access to CloudFront only
- S3 bucket: Hosts static files with versioning enabled
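The OAI pattern above can be sketched in Terraform roughly as follows. This is a minimal illustration, not the project's actual configuration; the resource names and the `var.bucket_name` variable are placeholders:

```hcl
# Illustrative sketch: lock an S3 origin down to CloudFront via an OAI.
resource "aws_cloudfront_origin_access_identity" "blog" {
  comment = "Restrict S3 access to CloudFront"
}

resource "aws_s3_bucket" "blog" {
  bucket = var.bucket_name # placeholder variable
}

# Versioning, as described above
resource "aws_s3_bucket_versioning" "blog" {
  bucket = aws_s3_bucket.blog.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Bucket policy that only allows reads from the OAI
data "aws_iam_policy_document" "oai_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.blog.arn}/*"]
    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.blog.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "blog" {
  bucket = aws_s3_bucket.blog.id
  policy = data.aws_iam_policy_document.oai_read.json
}
```

With this in place, direct requests to the bucket URL are denied; only requests signed by the CloudFront identity succeed.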
Infrastructure management
- Private S3 backend: Securely stores Terraform state files
- IAM roles: Least-privilege permissions for automation
- OIDC federation: Keyless, secure AWS access for GitHub Actions
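The backend and OIDC federation pieces might look roughly like this. All values here are placeholders (bucket name, repository path, role name), not the project's real configuration:

```hcl
# Remote state in a private, versioned S3 bucket (placeholder names)
terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"
    key     = "blog/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

# Let GitHub Actions assume a role without long-lived access keys
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  # Depending on provider version this may be required; AWS now
  # validates GitHub's certificate chain itself.
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

data "aws_iam_policy_document" "github_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }
    # Only this repository may assume the role (placeholder path)
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:my-org/my-blog:*"]
    }
  }
}

resource "aws_iam_role" "deploy" {
  name               = "github-actions-deploy"
  assume_role_policy = data.aws_iam_policy_document.github_trust.json
}
```

The `sub` condition is the important part: without it, any GitHub repository could assume the role.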
This setup not only followed the six pillars of the AWS Well-Architected Framework, but it also taught me the intricacies of networking and IaC.
The real challenge: the troubleshooting journey
What seemed like a simple architecture on paper turned into a complex troubleshooting process. Here’s how each challenge led to the next:
1. The hosted zone puzzle
Initially, Terraform couldn’t look up the hosted zone by name. The AWS CLI could find it, but Terraform kept failing. This was my first hint of a deeper problem, though I didn’t realize it yet.
# Initial attempt that failed: looking the zone up by name
data "aws_route53_zone" "primary" {
  name = var.domain_name
}

# Solution: use the zone ID directly
data "aws_route53_zone" "primary" {
  zone_id = var.zone_id
}
2. The multi-account maze
The hosted zone issue exposed a bigger complication: I was unknowingly trying to access resources across multiple AWS accounts without the appropriate cross-account permissions. Here’s how it manifested:
- First sign: Terraform permission errors when accessing resources such as the hosted zone
- Second flag: IAM user not found in the expected account
- Final revelation: S3 bucket 403 errors indicating a cross-account access attempt
After many attempts at fixing permissions, tweaking settings, and opening countless Chrome tabs, I walked away from the computer. Sometimes the best debugging tool is a fresh perspective. When I came back, I systematically verified each environment variable and found that they were misconfigured.
Initially, I had set environment variables for the default account:
# Checking current identity
aws sts get-caller-identity
# Setting up environment variables
export AWS_ACCESS_KEY_ID="new_access_key"
export AWS_SECRET_ACCESS_KEY="new_secret_key"
export AWS_DEFAULT_REGION="us-east-1"
# Profile
export AWS_PROFILE=default
But this created conflicts when switching between accounts. The real problem? The Terraform role lacked cross-account access.
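Part of the conflict comes from credential precedence: the AWS CLI resolves static credential environment variables before the selected profile, so lingering exports silently override `AWS_PROFILE`. A small illustration (the access key value is a fake placeholder):

```shell
# Simulate a stale credential export from an earlier session
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEPLACEHOLDER"
export AWS_PROFILE=backup   # still overridden by the key above!

# Clear the lingering credentials so the profile actually takes effect
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Confirm the environment is clean before running Terraform
[ -z "${AWS_ACCESS_KEY_ID:-}" ] && echo "credentials cleared, profile '$AWS_PROFILE' now active"
```

This is why switching `AWS_PROFILE` alone was not enough: the stale key variables had to be unset first.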
3. Cross-account resource management
The situation became clear:
- Hosted zone: In the primary account, but unreachable because my session was authenticated against the backup account
- S3 bucket: In the backup account, but the primary role lacked cross-account access
- Terraform: Able to create resources only because I had granted permissions from the backup account, not the primary one
With a fresh mind, I saw the full picture and took a systematic approach. The solution required a careful cleanup:
- Profile configuration: I configured a dedicated profile for the backup account and, by setting `export AWS_PROFILE=backup`, aligned Terraform’s operations with the correct account.
- Environment variable cleanup: I unset the environment variables that had been causing the conflict and reconfigured the AWS CLI so that operations authenticated and executed under the appropriate account.
- Validate and clean up: I ran `aws sts get-caller-identity` to verify that the CLI was now using the correct account. I then ran `terraform init` to reinitialize Terraform and `terraform destroy -auto-approve` to delete the resources created by mistake in the backup account.
Resolving these issues marked an important turning point for the project. Not only did I simplify the AWS architecture by aligning resources with their respective accounts, I also adopted best practices for AWS account management and Terraform usage. The process underscored the importance of vigilant account management and taught valuable lessons about maintaining clarity and organization in a cloud environment.
4. CI/CD pipeline optimization
The final challenge arose in the CI/CD pipeline. Although the YAML was syntactically correct, Terraform operations hung at the plan step. The root cause? Environment variables need to be passed individually to each Terraform step:
- name: Terraform Init
  run: |
    cd ./terraform-config
    terraform init
  env:
    TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
    TF_VAR_aws_region: ${{ secrets.AWS_REGION }}
    TF_VAR_zone_id: ${{ secrets.ZONE_ID }}

- name: Terraform Plan
  id: plan
  run: |
    cd ./terraform-config
    terraform plan -out=tfplan

- name: Terraform Apply
  id: apply
  run: |
    cd ./terraform-config
    terraform apply -auto-approve tfplan
Working solution
- name: Terraform Init
  run: |
    cd ./terraform-config
    terraform init
  env:
    TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
    TF_VAR_aws_region: ${{ secrets.AWS_REGION }}
    TF_VAR_zone_id: ${{ secrets.ZONE_ID }}

- name: Terraform Plan
  id: plan
  run: |
    cd ./terraform-config
    terraform plan -out=tfplan
  env:
    TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
    TF_VAR_aws_region: ${{ secrets.AWS_REGION }}
    TF_VAR_zone_id: ${{ secrets.ZONE_ID }}

- name: Terraform Apply
  id: apply
  run: |
    cd ./terraform-config
    terraform apply -auto-approve tfplan
  env:
    TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
    TF_VAR_aws_region: ${{ secrets.AWS_REGION }}
    TF_VAR_zone_id: ${{ secrets.ZONE_ID }}
Solving this took me two hours, and it highlighted both the nuances of how Terraform commands behave in a CI/CD pipeline and the need for careful configuration to keep automation running smoothly.
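A possible simplification, since GitHub Actions supports declaring `env` at the job level: the variables then apply to every step, avoiding the per-step repetition. This is an illustrative layout, not the project's actual workflow:

```yaml
# Hypothetical job layout; step names mirror the workflow above
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      TF_VAR_domain_name: ${{ secrets.DOMAIN_NAME }}
      TF_VAR_aws_region: ${{ secrets.AWS_REGION }}
      TF_VAR_zone_id: ${{ secrets.ZONE_ID }}
    steps:
      - name: Terraform Init
        working-directory: ./terraform-config
        run: terraform init
      - name: Terraform Plan
        working-directory: ./terraform-config
        run: terraform plan -out=tfplan
      - name: Terraform Apply
        working-directory: ./terraform-config
        run: terraform apply -auto-approve tfplan
```

Job-level `env` also makes it harder to forget a variable when new Terraform steps are added later.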
Key lessons learned
- Account management is crucial
- Always verify which account you're operating in
- Set up proper profile management early
- Use `aws sts get-caller-identity` frequently
- Infrastructure as code requires different thinking
- What works in the console might need different approaches in Terraform
- Resource relationships need explicit definition
- State management is crucial
- Understanding the authentication process
- OIDC federation setup requires careful configuration
- Environment variables affect different tools differently
- Multiple authentication methods need careful management
- Best practices emerge
- Always verify account context before operations
- Implement clear naming conventions
- Maintain separation of concerns between accounts
- Document environment variable requirements
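The "always verify account context" habit can be scripted as a pre-flight check. A minimal sketch: the account ID is a placeholder, and in real use it would come from `aws sts get-caller-identity --query Account --output text`:

```shell
# Abort early unless the current CLI identity matches the account
# we expect to operate in (placeholder account ID).
EXPECTED_ACCOUNT="111111111111"

check_account() {
  if [ "$1" = "$EXPECTED_ACCOUNT" ]; then
    echo "ok: operating in account $1"
  else
    echo "abort: expected $EXPECTED_ACCOUNT, got $1" >&2
    return 1
  fi
}

# In practice: check_account "$(aws sts get-caller-identity --query Account --output text)"
check_account "111111111111"
```

Running such a check at the top of every deployment script would have caught the multi-account confusion described earlier before any resources were created.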
Embrace the hard road
A year ago, I made a choice: stop settling for the easy path. I had set up several AWS projects and maintained Linux servers. I’m a certified AWS Solutions Architect. But Terraform? That was the next mountain I wanted to climb.
The console is comfortable. It’s visual. It’s immediate. But code? Code demands precision. Every resource relationship must be explicit. Every permission must be defined. Every interaction must be planned.
This project forced me to think about infrastructure differently:
- No more clicking through options
- No more relying on AWS’s default settings
- No more visual confirmation of changes
Instead, I had to:
- Define each resource relationship in code
- Gain a deep understanding of every service interaction
- Plan automation from the start
- Think in terms of state management and drift detection
Is it harder? Absolutely.
Did it take longer? There is no doubt about it.
Is it worth it? Every frustrating minute.
Looking ahead
More than just a blog, this platform demonstrates the complexity and learning opportunities of modern cloud architecture. Each challenge forced me to dig deeper into AWS services, Terraform behavior, and cloud best practices.
The final architecture follows the AWS Well-Architected Framework principles:
- Security: Proper IAM roles, OIDC federation, and access controls
- Reliability: CloudFront distribution and S3 versioning
- Performance efficiency: Edge caching and optimization
- Cost optimization: Nearly free to run
- Operational excellence: Fully automated deployments