As of 2024, it's still not possible to enable HTTPS access to an S3 static website with a custom domain name, solely using the S3 bucket configuration.
Instead, the recommended way to enable HTTPS access to your custom domain static site is to deploy a CloudFront distribution that uses your S3 bucket as its origin.
It's understandable, given the potential complexities of bolting custom cert config and other sensitive details onto S3 buckets.
In this post I'll discuss the steps I used to perform the CloudFront HTTPS upgrade on this very blog.
Create a clean project and regenerate the static site
I used Pelican to build the most recent incarnation of this blog, and my long-ago-pinned Pelican version had fallen out of date.
So, I decided to start with a clean Pelican project in a new GitHub repo: https://github.com/billagee/blog-dev.likewise.org
Luckily the latest Pelican release worked fine with my legacy files after I ran pelican-quickstart, copied my content/ dir over, and manually added my old pelicanconf.py config lines to the new project's conf file.
Interestingly, the ancient blog post I wrote covering my initial Pelican installation came in very handy here.
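Roughly, the bootstrap looked like this (a sketch from memory; the ../old-blog path is a stand-in for wherever your legacy checkout lives):

# Install the latest Pelican and scaffold a fresh project
pip install pelican markdown
pelican-quickstart

# Bring over legacy content, then merge old pelicanconf.py settings by hand
cp -r ../old-blog/content/ content/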
Create an infra/ dir in the Pelican project to hold Terraform files
I decided to create an infra/ subdir in the Pelican project dir to hold the Terraform files for my AWS resources (roughly, the CloudFront distribution, its origin bucket, and the DNS records for the static site).
Easy enough. In that dir I followed the pattern of creating a main.tf to hold my resources, and config.tf to hold config bits:
# config.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
Then configure your AWS credentials on the CLI and run terraform init.
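For example (a sketch; use whatever credential setup you prefer):

# Configure AWS credentials (or export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY)
aws configure

# Initialize the Terraform working dir and fetch the AWS provider
cd infra/
terraform init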
Create a new private bucket to serve as the CloudFront origin
Just as with the git repo for the Pelican project, I decided to start with a clean bucket, rather than import my legacy public S3 bucket into Terraform.
The old bucket will serve as a backup and can eventually be deleted.
This new bucket doesn't need to be public!
# main.tf

locals {
  s3_bucket_name    = "blog-dev.likewise.org"
  domain            = "blog-dev.likewise.org"
  domain_for_prod   = "blog.likewise.org"
  route53_zone_name = "likewise.org."
}

resource "aws_s3_bucket" "main" {
  bucket = local.s3_bucket_name
}
Run terraform plan and terraform apply to create the bucket.
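Since the bucket never needs to be public, one optional addition (not something my original config included, so treat it as a suggestion) is an explicit public access block:

# main.tf (optional hardening; an addition beyond my original config)
resource "aws_s3_bucket_public_access_block" "main" {
  bucket = aws_s3_bucket.main.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}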
Make sure the Pelican project generates files and can upload them to S3
To get ready to publish your site files, as per the usual Pelican development flow, make sure that make devserver runs and the output looks OK in a browser pointed to http://localhost:8000
Before publishing I updated the Makefile generated by Pelican to remove the --acl public-read option on the s3_upload target:
s3_upload: publish
	aws s3 sync "$(OUTPUTDIR)"/ s3://$(S3_BUCKET) --delete
Once you take care of that, make s3_upload should copy your site files to the bucket, assuming your AWS creds are configured.
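If you want to confirm the sync worked, listing the bucket is a quick sanity check (a sketch):

# Spot-check that the generated site files landed in the bucket
aws s3 ls s3://blog-dev.likewise.org/ --recursive | head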
Create certificates in ACM
It's painless to create SSL/TLS certificates for your custom domains in AWS Certificate Manager, so I created one for my new temporary subdomain (blog-dev.likewise.org). While there, I went ahead and created one for the production domain blog.likewise.org as well.
Note that you can cover both subdomains with a single cert if you wish. Keep in mind that the certificate you eventually attach to the CloudFront distribution has to cover every name in its aliases list, and that certificates used with CloudFront must be issued in the us-east-1 region.
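If you'd rather use the CLI than the console, a request like this should work (a sketch; I created mine in the console):

# Request a DNS-validated cert for the dev subdomain (CloudFront requires us-east-1)
aws acm request-certificate \
  --region us-east-1 \
  --domain-name blog-dev.likewise.org \
  --validation-method DNS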
Data sources for Route53 zone and ACM certs
I don't plan to manage my complete Route53 zone configuration or the ACM certificates for my custom domain in this Terraform project, but we'll need the zone ID and cert ARNs later, so create data sources to pull them in:
# main.tf

data "aws_route53_zone" "main" {
  name         = local.route53_zone_name
  private_zone = false
}

data "aws_acm_certificate" "main" {
  domain   = local.domain
  statuses = ["ISSUED"]
}

data "aws_acm_certificate" "cert_for_prod" {
  domain   = local.domain_for_prod
  statuses = ["ISSUED"]
}
CloudFront configuration
The CloudFront configuration is by far the most complex piece.
Points worth noting:
- While adding the CloudFront distribution, we'll also create a bucket policy that allows the CF distro to read the bucket
- We'll add a CloudFront function that rewrites request URIs ending in / (such as blog post slug URIs) to /index.html
# main.tf

resource "aws_s3_bucket_policy" "main" {
  bucket = aws_s3_bucket.main.id
  policy = data.aws_iam_policy_document.cloudfront_oac_access.json
}
resource "aws_cloudfront_distribution" "main" {
  enabled             = true
  aliases             = [local.domain, local.domain_for_prod]
  default_root_object = "index.html"
  is_ipv6_enabled     = true
  wait_for_deployment = true

  default_cache_behavior {
    allowed_methods = ["GET", "HEAD", "OPTIONS"]
    cached_methods  = ["GET", "HEAD", "OPTIONS"]
    # CachingOptimized managed policy ID from
    # https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-managed-cache-policies.html#managed-cache-caching-optimized
    cache_policy_id        = "658327ea-f89d-4fab-a63d-7e88639e58f6"
    target_origin_id       = aws_s3_bucket.main.bucket
    viewer_protocol_policy = "redirect-to-https"

    function_association {
      event_type   = "viewer-request" # Attach the function at the viewer-request stage
      function_arn = aws_cloudfront_function.main.arn
    }
  }

  origin {
    domain_name              = aws_s3_bucket.main.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.main.id
    origin_id                = aws_s3_bucket.main.bucket # Same value as .id
  }

  comment = local.s3_bucket_name

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = data.aws_acm_certificate.cert_for_prod.arn
    minimum_protocol_version = "TLSv1.2_2021"
    ssl_support_method       = "sni-only"
  }
}
resource "aws_cloudfront_origin_access_control" "main" {
  name                              = "${local.s3_bucket_name}-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
data "aws_iam_policy_document" "cloudfront_oac_access" {
statement {
principals {
identifiers = ["cloudfront.amazonaws.com"]
type = "Service"
}
actions = [
"s3:GetObject",
]
resources = ["${aws_s3_bucket.main.arn}/*"]
condition {
test = "StringEquals"
values = [aws_cloudfront_distribution.main.arn]
variable = "AWS:SourceArn"
}
}
}
# CloudFront function to make sure URIs ending in / are rewritten to /index.html
# See https://stackoverflow.com/a/76581267/267263
resource "aws_cloudfront_function" "main" {
  name    = "my-cloudfront-function"
  runtime = "cloudfront-js-1.0"
  code    = <<EOF
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Check whether the URI is missing a file name.
    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    }
    // Check whether the URI is missing a file extension.
    else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
EOF
  comment = "Function to append index.html to missing URIs"
}
Route53 configuration for your dev site
# main.tf

resource "aws_route53_record" "main" {
  name    = local.domain
  type    = "A"
  zone_id = data.aws_route53_zone.main.zone_id

  alias {
    name                   = aws_cloudfront_distribution.main.domain_name
    zone_id                = aws_cloudfront_distribution.main.hosted_zone_id
    evaluate_target_health = true
  }
}
Test your dev site
Make sure everything looks good at your dev subdomain (as mentioned earlier, mine was https://blog-dev.likewise.org).
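A couple of quick CLI smoke tests (the post slug here is a hypothetical placeholder; use one of your own):

# Confirm DNS resolves to CloudFront
dig +short blog-dev.likewise.org

# Expect an HTTP 200 with an HTML content type
curl -I https://blog-dev.likewise.org/

# Confirm a trailing-slash slug URI serves its index.html
curl -I https://blog-dev.likewise.org/some-post-slug/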
When ready to cutover, disable S3 static website config on legacy bucket
I did this in the AWS console; it's an easy one-click change.
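If you'd rather script it, the equivalent CLI call looks like this (assuming your legacy bucket is named after your production domain, as S3 static website hosting with a custom domain requires):

# Remove the static website hosting configuration from the legacy bucket
aws s3api delete-bucket-website --bucket blog.likewise.org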
Remove legacy DNS record for the production subdomain
I did this manually in the AWS console after taking note of the existing values.
Create new Route53 record for the production subdomain
# main.tf

resource "aws_route53_record" "production-fqdn" {
  name    = local.domain_for_prod
  type    = "A"
  zone_id = data.aws_route53_zone.main.zone_id

  alias {
    name                   = aws_cloudfront_distribution.main.domain_name
    zone_id                = aws_cloudfront_distribution.main.hosted_zone_id
    evaluate_target_health = true
  }
}
Change SITEURL in Pelican's publishconf.py to the production subdomain
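For me that was a one-line change:

# publishconf.py
SITEURL = "https://blog.likewise.org"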
Add 'aws cloudfront create-invalidation' to your Pelican s3_upload target
To immediately wipe out the CloudFront edge caches of your site when you publish, so you can see whether your bucket update looks OK on your live site, you can add a CLI command to create an invalidation to the Pelican s3_upload make target.
My target ended up looking like this:
s3_upload: publish
	aws s3 sync "$(OUTPUTDIR)"/ s3://$(S3_BUCKET) --delete
	AWS_PAGER="" aws cloudfront create-invalidation --distribution-id=<MY CLOUDFRONT DISTRIBUTION> --paths '/*'
Run final 'make s3_upload' and test your production subdomain
All should be well at this point. Test your post slug URLs to confirm links still work.
Also, Google results for your blog posts might show quirky behavior if anything in your configuration is amiss. It's good to search for posts you know are indexed in Google and make sure you can still follow Google links back to your blog.