
Let's Encrypt a Static Site on Amazon S3

source link: https://rudism.com/lets-encrypt-a-static-site-on-amazon-s3/

2016-01-06 (posted in blog)

Out of Date Notice (Nov. 2019): Using Amazon’s ACM service (AWS Certificate Manager, which didn’t exist at the time this post was written) to generate SSL certificates for your S3/Cloudfront websites is a far simpler process than using Let’s Encrypt. That’s what I’m doing for this blog now.

To create this blog, I use a static site generator called Hexo, and upload the resulting files to a web-enabled Amazon S3 bucket. I’ve recently started using Let’s Encrypt to generate free domain-validated SSL certificates for many of my non-static sites, and decided that I’d like to do the same for my static sites as well. This post documents the process of generating the SSL certificate from Let’s Encrypt, and all of the steps I took to get it working with a static site hosted on AWS.

Step 1: Use Cloudfront and Route 53

The details of this step are a bit beyond the scope of this article, but it is a necessary one. If you’re like me, to set up your website you simply ticked the box under “Static Website Hosting” for your bucket, then created a CNAME record at your registrar to point to the bucket’s domain name. Unfortunately that’s not good enough anymore. If you want to make the move to SSL, you’ll need to set up a Cloudfront distribution in front of your bucket, and you’ll also probably need to use Route 53 instead of your registrar’s default name servers.

Since Cloudfront requires you to point your domain using a CNAME record, Route 53 is required if you want to be able to access your website at a naked domain without the www prefix (i.e., codeword.xyz as well as www.codeword.xyz). This is because, according to official DNS specs, a CNAME record is not allowed at a naked domain (the zone apex), where it would conflict with the SOA and NS records that must exist there. Some registrars will let you do it, but since it’s non-standard it may not work entirely as expected and you do so at your own peril. Route 53 gets around this because you can create an A record that’s just an alias to the specific Cloudfront distribution and avoid the CNAME issue altogether. If you don’t need to access your site at a naked domain, then you can probably get away without using Route 53 (just create the necessary CNAME records in your registrar’s DNS servers).

Setting everything up on Amazon isn’t really too hard if you’re already familiar with other AWS services. The first step would be to create a new Cloudfront distribution that points to your bucket (there is a section to select the SSL certificate to use, initially you can just use the default Cloudfront certificate—we will be uploading our own and doing a switcheroo later).

Important: In most cases where you are transitioning an already existing static S3 site to Cloudfront, you cannot simply select your bucket from the origin dropdown when creating your Cloudfront distribution. In order to mimic the way S3 static hosting works, you must use the public URL for your bucket that you see in its static hosting configuration as the origin URL. It should look like yoursitename.com.s3-website-us-east-1.amazonaws.com. If you were to select the pre-populated bucket from the dropdown, you would get something like yoursitename.com.s3.amazonaws.com, which would probably mostly work, but things like default document within sub-folders and error page settings from your S3 configuration would not work.
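To make the distinction concrete, here’s how the two endpoint hostnames differ (the bucket name and region below are placeholders; substitute your own):

```shell
# Hypothetical bucket name and region -- substitute your own.
bucket="yoursitename.com"
region="us-east-1"

# Static website hosting endpoint: use THIS as the Cloudfront origin URL.
website_origin="${bucket}.s3-website-${region}.amazonaws.com"

# Plain REST endpoint: this is what the origin dropdown gives you.
# It skips the static-hosting features, so avoid it here.
rest_origin="${bucket}.s3.amazonaws.com"

echo "$website_origin"
echo "$rest_origin"
```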

Once the Cloudfront distribution has been created, you add a new Hosted Zone to Route 53 for the domain you will be using, and create A record aliases to point to the new Cloudfront distribution (in my case, I created two: one for codeword.xyz and one for www.codeword.xyz). To do this, you simply tick the “Alias” box and select your Cloudfront distribution from the provided dropdown. At this point, everything should be ready to go on AWS, so it’s just a matter of updating the NS records at your registrar to point to the ones given to you by Route 53. Once that’s all done and your website is working, you’re ready to move on to the next step.
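If you prefer the command line to the console, the same alias record can be sketched as a Route 53 change batch. This is an illustrative sketch: the distribution domain and your hosted zone ID are placeholders, and the final command is shown as a comment rather than run. (Z2FDTNDATAQYW2 is the fixed hosted-zone ID Amazon assigns to all Cloudfront distributions for alias purposes.)

```shell
# Write a change batch that creates an alias A record pointing the naked
# domain at a Cloudfront distribution. The DNSName below is a made-up
# example distribution domain.
cat > alias-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "codeword.xyz.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234abcd.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# To submit the change (substitute your own hosted zone ID):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id YOUR_ZONE_ID --change-batch file://alias-record.json
```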

Keep in mind that if you do additional things with the domain (such as MX records to handle email routing) you’ll need to mirror all of that stuff in your Route 53 config before updating the NS records or else things might suddenly stop working.

Step 2: Generating the Certificate

When dealing with self-hosted sites on a server that you own and have root access to, generating Let’s Encrypt certificates is fairly easy. What I did was temporarily shut down Nginx and run the Let’s Encrypt client in certonly and standalone mode, where it creates its own web server in order to validate the domain. Once that completed successfully, I updated the Nginx configs to point to the newly generated certificates and fired it back up.

Since Amazon S3 only supports static content delivery, there’s no way to run the Let’s Encrypt client behind a domain that points to an S3 bucket. Fortunately, there is a hidden --manual option that we can pass to the Let’s Encrypt client that will let us generate a certificate without having to do any fancy footwork with the domain name. Here’s what I ran for this blog:

$ sudo ./letsencrypt-auto certonly --manual --server https://acme-v01.api.letsencrypt.org/directory -d codeword.xyz -d www.codeword.xyz

Note that I included a --server option; I’m not sure it will still be necessary once Let’s Encrypt comes out of beta. After running that command, you’ll get prompted about your IP address being logged, and then given something that looks like this:

Make sure your web server displays the following content at
http://codeword.xyz/.well-known/acme-challenge/abcdefg0123456789 before continuing:

abcdefg0123456789.hijklmnopqrstuvwxyz

Go to your S3 management console, and create the necessary folder structure (.well-known in the root, and then acme-challenge within that). Now on your computer, create a file using the name and content that the client gave you. In the example above, the file would be named abcdefg0123456789 and the content would be abcdefg0123456789.hijklmnopqrstuvwxyz. Then upload that file to the .well-known/acme-challenge folder you created in S3. I also edited the Metadata of the file in S3 and set the Content-Type header to text/plain, but I don’t think that step is necessary. I just did it so that I could copy and paste the provided URL into my browser and verify that the file existed and the content displayed as I expected. Once you’re satisfied that the URL is returning the expected content, hit enter to continue.
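The create-and-upload step above can be sketched from the shell as well. The file name and content here are the made-up values from the example prompt (yours will differ), and the upload command is shown as a comment since it needs your actual bucket:

```shell
# Create the challenge file locally using the exact name and content the
# Let's Encrypt client printed. printf '%s' avoids adding a trailing
# newline, so the file content matches the prompt exactly.
mkdir -p .well-known/acme-challenge
printf '%s' 'abcdefg0123456789.hijklmnopqrstuvwxyz' \
  > .well-known/acme-challenge/abcdefg0123456789

# Upload it to the same path in the bucket. Setting --content-type
# text/plain just makes it easy to eyeball in a browser; the validator
# doesn't require it.
# aws s3 cp .well-known/acme-challenge/abcdefg0123456789 \
#   s3://yoursitename.com/.well-known/acme-challenge/abcdefg0123456789 \
#   --content-type text/plain
```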

You’ll have to repeat the above step for each domain you specified (so you will have one file in that directory for each domain, in my case two). If all goes well, after hitting enter on the final domain, you’ll get something like the following message:

Congratulations! Your certificate and chain have been saved at /etc/letsencrypt/live/codeword.xyz/fullchain.pem. Your cert will expire on 2016-04-05. To obtain a new version of the certificate in the future, simply run Let's Encrypt again.

Step 3: Upload the Certificate to AWS

In order to use your new certificate on your Cloudfront distribution, you need to upload it to the IAM certificate store on AWS. As far as I can tell, there’s no way to do that through the web interface, so you’ll have to use the AWS command line interface. This is a set of Python scripts that lets you interact with the AWS API from your Linux shell. You can install and configure it using pip like this:

$ sudo pip install awscli
$ aws configure

You may also need to run sudo aws configure if you’ll be running the tools as root (as I do in the examples here). You can then use the tool to upload your certificate as explained in the AWS docs. For me, the command looked like this:

$ sudo aws iam upload-server-certificate --server-certificate-name cert_codeword_xyz --certificate-body file:///etc/letsencrypt/live/codeword.xyz/cert.pem --private-key file:///etc/letsencrypt/live/codeword.xyz/privkey.pem --certificate-chain file:///etc/letsencrypt/live/codeword.xyz/chain.pem --path /cloudfront/certs/

You’ll get a message with the certificate metadata if the upload was successful.

Step 4: Update Cloudfront with the Certificate

Now go to your Cloudfront dashboard and edit the distribution you created for the site (click its ID in the list, then click the “Edit” button from the “General” tab). You can now tick the box that says “Custom SSL Certificate” and then select the certificate you just uploaded from the dropdown.

Now you’re done! Once the Cloudfront distribution finishes refreshing, you should be able to access your site via both HTTP and HTTPS. You can optionally do what I did and go to the “Behaviors” tab of your Cloudfront distribution, edit the default behavior, and tick the “Redirect HTTP to HTTPS” box—this will cause all connections to your site to be encrypted.

The next step, which I’m not covering here because I haven’t done it myself yet, will be to automate the renewal process with a script that runs the Let’s Encrypt client, then automatically replaces the IAM certificate in AWS with the new one. That way you won’t have to remember how to do any of this stuff every 3 months when it comes time to renew the certificate (Let’s Encrypt certificates expire after 90 days)—just run a script and be done (or put it in a cron job if you’re feeling adventurous!).
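Since I haven’t built that automation yet, here’s only an untested sketch of what such a renewal script might look like, stitched together from the commands earlier in this post. The certificate name suffix and paths are assumptions; the script is written to a file rather than run:

```shell
# Write a renewal sketch to renew-cert.sh for a future cron job. It
# re-runs the client (manual mode still requires re-uploading the
# challenge files to S3) and uploads the result under a date-stamped
# name, since IAM certificate names must be unique.
cat > renew-cert.sh <<'EOF'
#!/bin/sh
set -e
DOMAIN="codeword.xyz"
LIVE="/etc/letsencrypt/live/$DOMAIN"
STAMP=$(date +%Y%m%d)

./letsencrypt-auto certonly --manual \
  --server https://acme-v01.api.letsencrypt.org/directory \
  -d "$DOMAIN" -d "www.$DOMAIN"

# Upload the fresh certificate; afterwards you would point the
# Cloudfront distribution at the new name and delete the old one.
aws iam upload-server-certificate \
  --server-certificate-name "cert_codeword_xyz_$STAMP" \
  --certificate-body "file://$LIVE/cert.pem" \
  --private-key "file://$LIVE/privkey.pem" \
  --certificate-chain "file://$LIVE/chain.pem" \
  --path /cloudfront/certs/
EOF
chmod +x renew-cert.sh
```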

Short Permalink for Attribution: rdsm.ca/27ipo
