Continuous Deployment to AWS S3 with Bitbucket Pipelines

Today I needed to get a deployment pipeline set up for a React app that was going to be hosted via a public Amazon S3 bucket. Since the repo was in a client's Bitbucket organization, the Bitbucket Pipelines feature seemed like a great option. This is going to be a quick guide for deploying a React app (a la create-react-app) to AWS S3, including setting up an S3 Bucket Policy to make your React app accessible to the public.

First off, create your repo in Bitbucket. Then, create a new bucket in S3. Once the S3 bucket is created, make sure you go to the Properties section and enable “Static website hosting”. Enter index.html in the “Index document” field, and click Save. Take note of the endpoint value at the top of the form; this is where your website will be accessible.
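If you prefer the command line, the same bucket setup can be sketched with the AWS CLI. This is a minimal sketch, assuming the CLI is installed and configured; the bucket name and region here are placeholders for your own:

```shell
# Create the bucket (us-east-1 needs no LocationConstraint; other regions do).
aws s3api create-bucket --bucket stage-bucket-name --region us-east-1

# Enable static website hosting with index.html as the index document.
aws s3 website s3://stage-bucket-name/ --index-document index.html
```

For us-east-1, the website endpoint then follows the pattern http://stage-bucket-name.s3-website-us-east-1.amazonaws.com.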

Next, clone the repo to your machine. From inside the directory, run npx create-react-app . which creates a new React app in the current directory. Verify that everything works locally by running yarn && yarn build. If you can’t build the project locally, troubleshoot that before moving forward. Once yarn build succeeds locally, move on to setting up your Pipeline.

Bitbucket makes this pretty easy, but there were a few gotchas that weren’t clear in the docs. Head over to your repo in the Bitbucket web app, and click Pipelines. Select the JavaScript template by clicking the JavaScript logo, then make the following changes:

  • Add a “name” to the first step (Build)
  • Change the script entries to yarn and yarn build
  • Add an “artifacts” member (this tells Pipelines to store the output of yarn build)
  • Add a second “step”, with a name of “Deploy”
  • Set the “deployment” member to something like dev, test, stage, or production
  • Add the AWS S3 pipe by clicking “Pipes” -> “AWS S3 deploy”
  • Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to use environment variables
  • Set the S3_BUCKET member to the name of the bucket you created earlier. I like to have a bucket per environment, and use the string from the “deployment” parameter as a prefix, so in my case the bucket name becomes ‘stage-bucket-name’. Don’t forget to set AWS_DEFAULT_REGION to the region of your bucket. My bucket is in Virginia, which is region ‘us-east-1’.
  • Set LOCAL_PATH to ‘build’, as that’s where create-react-app places your generated files after running ‘yarn build’.

At this point, you can go ahead and commit the bitbucket-pipelines.yml file. It should look pretty close to this:

# This is a sample build configuration for deploying a React app to Amazon Web Services S3 Storage from https://programming-is-easy.com
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: node:10.15.3

pipelines:
  default:
    - step:
        name: Build
        caches:
          - node
        script:
          - yarn
          - yarn build
        artifacts:
          - build/**
    - step:
        name: Deploy
        deployment: stage
        script:
          - pipe: atlassian/aws-s3-deploy:0.3.7
            variables:
              AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
              AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
              AWS_DEFAULT_REGION: 'us-east-1'
              S3_BUCKET: 'stage-bucket-name'
              LOCAL_PATH: 'build'
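For a sense of what the Deploy step is doing, the aws-s3-deploy pipe essentially syncs the local path up to the bucket. If you ever need to deploy by hand, something like the following (with your own bucket name) does roughly the same thing:

```shell
# Build the app, then mirror the build/ directory into the bucket.
# --delete removes remote files that no longer exist locally; it's optional.
yarn build
aws s3 sync build/ s3://stage-bucket-name/ --delete
```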

Now, we need to create an AWS access key to provide to Bitbucket, so that Bitbucket has permission to upload to our S3 bucket. Create one in the AWS IAM console for a user with write access to the bucket; you’ll get an access key ID and a secret access key. Once you have those, head over to Bitbucket and visit your account settings page by clicking your avatar, then clicking “Bitbucket settings”. From there, scroll down to Pipelines and click Account variables. Here is where we will add your AWS credentials. Make sure that the “Name” field matches what you entered in the yaml file. If your yaml looks like mine, you’ll want to use AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Bitbucket Pipelines now has the information it needs to connect to S3 on your behalf.
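The key creation itself can also be sketched with the AWS CLI. This assumes a dedicated IAM user named bitbucket-pipelines (the user name is a placeholder, and the user still needs a policy granting write access to the bucket):

```shell
# Create a dedicated deploy user and generate an access key pair for it.
aws iam create-user --user-name bitbucket-pipelines
aws iam create-access-key --user-name bitbucket-pipelines
```

The create-access-key output includes an AccessKeyId and a SecretAccessKey; those are the two values to paste into the Bitbucket variables.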

We’re getting close! We need to create a “Bucket Policy” for the bucket, otherwise we’ll have to manually set each object to Public after each deploy. Gross. To create a bucket policy, visit your S3 bucket in the AWS Console and then select Permissions, Bucket Policy. Click “Policy generator” at the bottom, and use the following values:

  • Type of Policy: S3 Bucket Policy
  • Effect: Allow
  • Principal: *
  • Service: Amazon S3
  • Actions: GetObject
  • ARN: copy the bucket ARN from the previous tab, and add “/*” to the end. This means “apply this to every object in this bucket”.
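For reference, the generated policy should come out looking something like this (the bucket name is a placeholder; a Principal of “*” combined with the s3:GetObject action is what makes every object publicly readable):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::stage-bucket-name/*"
    }
  ]
}
```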

Ok, you should be all set. Let’s make a change locally, commit and push it, then visit the Pipelines tab in your Bitbucket repo. You should see the status of your Pipeline here. Once it’s marked as successful, we can visit our website at the endpoint URL we got when we enabled “Static website hosting” earlier on.

Published 21 Nov 2019

Programming Is Easy; Humans make it hard.
Matt Chandler on Twitter