Sooner or later every SSL certificate expires, and in Let’s Encrypt’s case the validity period is 90 days. In a standard setup, Certbot automatically inspects which certificates need to be renewed; however, this is not applicable in our serverless context.
Several weeks have passed since we generated an SSL certificate using Let’s Encrypt with AWS Lambda and Certbot. We only have a few days left until the 90-day expiration.
In the previous article of this series, we uploaded the files related to the created certificate into an S3 Bucket, categorizing them by date and domain name: `<bucket-name>/ssl/<year>/<month>/<day>/<domain>/`. For a better understanding of the Certbot CLI’s working directory structure and the files saved in the S3 Bucket, it is recommended to consult part one of this series.
To renew a certificate in an AWS Lambda function, the actions to execute are not so different from those performed during creation. We still need to run a Certbot command and upload the resulting files to the same S3 Bucket. The main differences lie in identifying and retrieving the certificates that need to be renewed, and in recreating the Certbot working directory structure to reproduce the state it was in right after the certificate was generated.
Identify certificates to be renewed
The first step is building a list of domains whose certificate is about to expire. Let’s Encrypt recommends renewing certificates every 60 days, and that is also the minimum age a certificate must reach before it can be renewed: trying to renew a Let’s Encrypt certificate via Certbot earlier than that will raise an error (“Cert not yet due for renewal”).
Thanks to the key structure inside the S3 Bucket, we can easily search over a range of days starting from 60 days before the current date, for example a window between 70 and 65 days before today. By using a range rather than a single day, we don’t have to execute this operation every day, and we can retry on the following days in case of a renewal error.
bucket = "certs-bucket" today = date.today() check_start_date = today - timedelta(days=70) check_end_date = today - timedelta(days=65) days_list = [check_start_date + timedelta(days=x) for x in range((check_end_date-check_start_date).days + 1)] # Create a Tuple of S3 key prefixes. In our example 'ssl' is the static first directory level inside the bucket. prefixes = tuple(map(lambda x: f"ssl/{x.year}/{x.month}/{x.day}/", days_list)) domains_to_renew = [] for obj in get_s3_objects(bucket, prefixes, "pem"): # get_s3_object could be a function using list_objects_v2 S3 boto3 api and a filter by 'pem' as suffix s3_domain_path = obj['Key'].split('/')[0, -2] # Avoid duplicated domains (each dir has 4 pem files, so 4 times the domain) if [element for element in domains_to_renew if element['DomainToRenew'] == domain]: continue domain_date = '/'.join(obj['Key'].split('/')[1:4]) domains_to_renew.append({ "DomainToRenewS3Path": s3_domain_path, "DomainDate": domain_date }) return domains_to_renew |
Certificate renewal
With the collected list of domain certificates to renew, we can download the related files from the S3 Bucket and then recreate the structure of Certbot’s working directory on the Lambda’s volume. Here is a summary of the steps to achieve this with the downloaded files (a code sketch follows the list):
- We must use `<domain>.conf` to recreate the directory `<certbot-work-dir>/accounts/<apihost>/directory/<account>/`. We need to extract the values of the `account` and `server` parameters from the `<domain>.conf` file and substitute them for `<account>` and `<apihost>` respectively.
- The .json files, i.e. `meta.json`, `private_key.json` and `regr.json`, have to be moved into the previously created directory `<certbot-work-dir>/accounts/<apihost>/directory/<account>/`.
- The .pem files, i.e. `cert.pem`, `chain.pem`, `fullchain.pem` and `privkey.pem`, have to be moved into the directory `<certbot-work-dir>/archive/<domain>/`, adding a number to each file name during the move, for example `<certbot-work-dir>/archive/<domain>/cert1.pem`.
- For each .pem file previously moved into `<certbot-work-dir>/archive/`, we have to create a symbolic link in the directory `<certbot-work-dir>/live/<domain>/` pointing to the file inside `<certbot-work-dir>/archive/<domain>/`, for example `<certbot-work-dir>/live/<domain>/cert.pem` → `<certbot-work-dir>/archive/<domain>/cert1.pem`.
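To make these four steps more concrete, here is a minimal sketch of how they could be implemented on the Lambda’s volume. The function name, the `downloaded_dir` argument (the local folder where the S3 files were downloaded) and the line-based parsing of `<domain>.conf` are assumptions for illustration; only the directory layout itself comes from the steps above.

```python
import os
import shutil
from urllib.parse import urlparse

def rebuild_certbot_work_dir(work_dir, domain, downloaded_dir):
    """Recreate Certbot's working directory from the files downloaded from the S3 Bucket (illustrative sketch)."""
    # 1. Read the 'account' and 'server' parameters from <domain>.conf.
    account = server = None
    with open(os.path.join(downloaded_dir, f"{domain}.conf")) as conf_file:
        for line in conf_file:
            key, _, value = line.partition("=")
            if key.strip() == "account":
                account = value.strip()
            elif key.strip() == "server":
                server = value.strip()
    apihost = urlparse(server).netloc  # e.g. 'acme-v02.api.letsencrypt.org' (assumption on how <apihost> is derived)

    # Recreate <certbot-work-dir>/accounts/<apihost>/directory/<account>/
    account_dir = os.path.join(work_dir, "accounts", apihost, "directory", account)
    os.makedirs(account_dir, exist_ok=True)

    # 2. Move the account .json files into the accounts directory.
    for json_file in ("meta.json", "private_key.json", "regr.json"):
        shutil.move(os.path.join(downloaded_dir, json_file), os.path.join(account_dir, json_file))

    # 3. Move the .pem files into archive/<domain>/, appending '1' to each file name...
    archive_dir = os.path.join(work_dir, "archive", domain)
    live_dir = os.path.join(work_dir, "live", domain)
    os.makedirs(archive_dir, exist_ok=True)
    os.makedirs(live_dir, exist_ok=True)
    for pem_file in ("cert.pem", "chain.pem", "fullchain.pem", "privkey.pem"):
        name, ext = os.path.splitext(pem_file)
        archived_pem = os.path.join(archive_dir, f"{name}1{ext}")
        shutil.move(os.path.join(downloaded_dir, pem_file), archived_pem)
        # 4. ...and create the live/<domain>/<file>.pem symlink pointing to archive/<domain>/<file>1.pem.
        os.symlink(archived_pem, os.path.join(live_dir, pem_file))
```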
After completing the previous steps, we can renew the certificate through a Certbot command that is very similar to the one used for creation.
```python
import certbot.main

certbot_options = [
    'renew', '-n', '--agree-tos', '--email', email,
    '--no-random-sleep-on-renew',
    '--no-eff-email',
    '--dns-route53',  # Use Route53 dns challenge
    '-d', f'{domain},www.{domain}',
    '--config-dir', work_dir,
    '--work-dir', work_dir,
    '--logs-dir', work_dir,
]
certbot.main.main(certbot_options)
```
We have renewed our certificate. Now we can upload the updated files to a new S3 path with today’s date, following the same steps as for certificate creation (we could reuse the last steps from the previous article’s code).
The old files of the certificate we just renewed must then be deleted from the S3 Bucket, to prevent the same certificate from being picked up for renewal again on the next execution of the Lambda.
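Here is a minimal sketch of these two housekeeping steps, assuming boto3 and the key layout used so far. The helper name is hypothetical, `old_domain_prefix` corresponds to the `DomainToRenewS3Path` value collected earlier, and in the real flow the `<domain>.conf` and account .json files would be uploaded alongside the .pem files, as in the previous article:

```python
import os
from datetime import date
import boto3

def upload_renewed_and_clean_up(bucket, domain, old_domain_prefix, work_dir):
    """Upload the renewed .pem files under today's date and delete the old objects (illustrative sketch)."""
    s3 = boto3.client("s3")
    today = date.today()
    new_prefix = f"ssl/{today.year}/{today.month}/{today.day}/{domain}/"

    # Upload the renewed files from Certbot's live directory to the new dated prefix.
    live_dir = os.path.join(work_dir, "live", domain)
    for pem_file in ("cert.pem", "chain.pem", "fullchain.pem", "privkey.pem"):
        s3.upload_file(os.path.join(live_dir, pem_file), bucket, new_prefix + pem_file)

    # Delete the old objects so the same domain is not picked up again on the next run.
    old_objects = s3.list_objects_v2(Bucket=bucket, Prefix=old_domain_prefix + "/")
    keys_to_delete = [{"Key": obj["Key"]} for obj in old_objects.get("Contents", [])]
    if keys_to_delete:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys_to_delete})
```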
The uploaded .pem files can then be deployed to the web server to refresh our website’s certificate.
Conclusion
There are two ways to obtain a new SSL certificate with Let’s Encrypt through AWS serverless services. The method presented here is just one of them. The second method involves creating a new certificate every time instead of renewing the existing one. However, in this case, it is important to revoke the previous certificate to avoid receiving the impending expiration email.
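If you choose that second approach, the revocation can be performed from the same Lambda via `certbot.main`; here is a minimal sketch, where the certificate path and the reuse of `work_dir` and `domain` are assumptions for illustration:

```python
import certbot.main

# Hypothetical example: revoke the previous certificate so no expiration reminder is sent for it.
revoke_options = [
    'revoke', '-n',
    '--cert-path', f'{work_dir}/live/{domain}/cert.pem',  # path to the old cert.pem (assumption)
    '--reason', 'superseded',
    '--config-dir', work_dir,
    '--work-dir', work_dir,
    '--logs-dir', work_dir,
]
certbot.main.main(revoke_options)
```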
The renewed certificate will still expire in 90 days… but we now have all the necessary tools to automate the next renewal. Be sure not to miss the next article in the series, which will demonstrate one way to automate the whole process. Cheerio!