How-to: Upload directly from the front end of a Vue app into an AWS bucket with signed URLs

If you want to offer file uploads to users of your Vue app, this is usually handled by your backend and its data layer. With an AWS S3 bucket and signed URLs, the same thing can be achieved with relatively little effort, and without putting much load on your own backend.

In this post I share my experiences and learnings from implementing this solution. Perhaps it will help some readers get there a little faster.

This post walks through a setup that uploads directly to AWS, so you can do without a backend server entirely. The learning curve at AWS is a bit steep, but once the basic setup is in place, the upload itself is implemented quickly.

This is what you need to start

Setup of your AWS console

Step 1: Create an account

The first step is to create an AWS account. AWS asks for credit card details for every new account, although we will adjust the setup later so that no costs are incurred for the time being. After successful registration, we have access to the keys belonging to the account. The keys shown are the root keys, which we will not use here, because a dedicated IAM user will be created for the upload. If you need root keys later, you can generate new ones at any time under “My Security Credentials”.

AWS console

Note: Never share your keys publicly – with the “pay as you go” pricing model, a leaked key can get expensive quickly. The safest approach is to store the keys as variables in an .env file in your project and exclude that file via your .gitignore.

Step 2: Create a bucket

Once the account has been created, we can move on to creating the bucket. A bucket is created simply by giving it a name and, for now, unchecking all of the “Block public access” options. This is important because only then can the bucket policy be set to public, making the uploaded images accessible to everyone.

AWS console
Pic: Create a public bucket (also uncheck the last “Block public access” option)

Your bucket policy and CORS settings should look like this:

{
    "Version": "2012-10-17",
    "Id": "public policy example",
    "Statement": [
        {
            "Sid": "Allow get requests",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
        }
    ]
}
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

Step 3: Create a new IAM user

Once the bucket has been created, we can concentrate on creating an IAM user. The quickest way to find the right page is to search for “IAM” within the AWS console.

AWS Konsole

The new user should only be allowed to get, put, and delete objects. Security plays a major role here, too: should unauthorized persons ever gain access to the key, they still do not have full access to the bucket. It is therefore advisable to create a dedicated policy.

Here is an example of a policy. All the steps for creating a new IAM user can be found in the following list:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
        }
    ]
}
  • Navigate to Users – the link is in the sidebar – and click “Add user”
  • Enter a name and select “Programmatic access”. This creates a separate access key for the user, which we will use later for uploading.
  • Select “Attach existing policies directly” and click “Create policy” to create a new policy. The button opens a new tab.
  • Paste the JSON from above into the JSON tab and click “Review policy”. Note: replace YOUR_BUCKET with the name of your bucket, of course.
  • Assign a name for the policy, finish with “Create policy”, and close the tab
  • Back in the user wizard, refresh the policy list and select the newly created policy
  • Click “Next: Tags” – you can skip tags because we don’t need them
  • After creating the user, you can see the access key. You can either copy it or download a CSV file. We will need this key in the next step!
AWS console

The AWS JS SDK

After successfully setting up the AWS console, we can finally start coding. For reasons of reusability, I have put the AWS methods into a separate aws.js file.

First step: Create an S3 instance

First of all, we add the aws-sdk library to our project to be able to use the AWS methods. We also use the axios package for the requests.

Install aws-sdk and axios in your yarn project:

yarn add aws-sdk axios

As soon as the installation has finished, we can initialize an S3 instance. As you can see, we use our access keys from the .env file and a region variable. An S3 instance is initialized with new aws.S3(). I have set the signatureVersion option to be able to upload files to the server; regions launched after January 2014 (such as eu-central-1) only support Signature Version 4, so if your bucket is in one of the older US regions you can omit this option.

const aws = require('aws-sdk')
const axios = require('axios')

aws.config.update({
  secretAccessKey: process.env.VUE_APP_AWS_SECRET_ACCESS_KEY,
  accessKeyId: process.env.VUE_APP_AWS_ACCESS_KEY,
})

export const s3 = new aws.S3({
  signatureVersion: 'v4',
  region: process.env.VUE_APP_AWS_REGION,
})

Note: if you copy the code, don’t forget to define VUE_APP_AWS_ACCESS_KEY, VUE_APP_AWS_SECRET_ACCESS_KEY and VUE_APP_AWS_REGION in your .env file.
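For reference, the .env file in the project root might look like this – all values are placeholders that you replace with your own IAM key, region, and bucket name:

```
VUE_APP_AWS_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
VUE_APP_AWS_SECRET_ACCESS_KEY=your-secret-access-key
VUE_APP_AWS_REGION=eu-central-1
VUE_APP_AWS_BUCKET=your-bucket-name
```

The VUE_APP_ prefix is required so that the Vue CLI exposes the variables to the client-side build.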

Second step: SignedURL

With the configuration finished, we can create our singleUpload method using a signed URL. A quick look at signed URLs – why is it good to use them?

  • The signed URL can only be used for a single file upload – the user cannot unintentionally keep filling the bucket
  • The signed URL encodes the file name and file type – the user can only upload the registered file
  • The signed URL is limited in time – this protects against exploits, e.g. users with bad intentions trying to reuse the signed URL of another user
  • The signed URL is generated by the Vue app with your credentials – users cannot forge one themselves
  • The signed URL only works with the specified bucket – users cannot see or access any other bucket

export const singleUpload = (file, folder) => {
  const key = folder + '/' + Date.now() + '-' + file.name.replace(/\s/g, '-')
  const params = {
    Bucket: process.env.VUE_APP_AWS_BUCKET,
    Key: key,
    Expires: 10,
    ContentType: file.type,
  }
  const url = s3.getSignedUrl('putObject', params)
  return axios
    .put(url, file, {
      headers: {
        'Content-Type': file.type,
      },
    })
    .then(result => {
      const bucketUrl = decodeURIComponent(result.request.responseURL).split(
        key
      )[0]
      result.key = key
      result.fullPath = bucketUrl + key
      return result
    })
    .catch(err => {
      // TODO: error handling
      console.log(err)
    })
}

In line 2 we generate the specific file name, called a key at AWS. The key also contains the folder in which the file should be located – for example an album or a team – delimited with a slash. We use Date.now() to make the file name unique, and the replace call swaps whitespace for hyphens (-). It would even be possible to work with Date.now() alone; it is up to you which structure you want to build in your bucket.
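To illustrate, the key generation can be pulled out into a small helper – makeKey is a hypothetical name, not part of the code above:

```javascript
// Hypothetical helper restating the key generation from singleUpload:
// folder + '/' + timestamp + '-' + file name with whitespace replaced by hyphens.
const makeKey = (folder, filename) =>
  folder + '/' + Date.now() + '-' + filename.replace(/\s/g, '-')

// e.g. makeKey('albums', 'my holiday pic.png')
// → 'albums/<timestamp>-my-holiday-pic.png'
```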

As mentioned above, the Expires attribute limits the URL in time (here to 10 seconds). The AWS SDK documentation for getSignedUrl covers the remaining options in more detail.

As soon as the file has been uploaded, we receive the key and the link to the file, which we return, for example, to store it in our database.
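The fullPath is derived by splitting the response URL of the PUT request at the key, which strips the signed query parameters. A pure sketch of that logic, where the key and response URL are made-up example values:

```javascript
// Made-up example values, just to demonstrate the derivation.
const key = 'albums/1588000000000-team-photo.jpg'
const responseURL =
  'https://s3.eu-central-1.amazonaws.com/vue-fileupload-example/' +
  key +
  '?X-Amz-Signature=abc123'

// Same logic as in singleUpload: everything before the key is the bucket URL.
const bucketUrl = decodeURIComponent(responseURL).split(key)[0]
const fullPath = bucketUrl + key
// → 'https://s3.eu-central-1.amazonaws.com/vue-fileupload-example/albums/1588000000000-team-photo.jpg'
```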

Third step: delete the file

Deleting an uploaded file is just as easy. All you need is the bucket and the key of the file. If you use several buckets, it is better to store the bucket name in the database as well, so both can be read from there. After successfully deleting the file from the bucket, you must of course also remove it from your database.

export const deleteObjectByKey = key => {
  const params = {
    Bucket: process.env.VUE_APP_AWS_BUCKET,
    Key: key,
  }
  return s3.deleteObject(params).promise()
}

Upload component in Vue with Filepond

If you don’t want to style your file upload yourself, FilePond is highly recommended. With this library, you can implement a professional upload UI in a few minutes.

Upload component

Step 1: FilePond component

To use the libraries, we add them to the project dependencies with yarn.

yarn add vue-filepond filepond-plugin-file-validate-type filepond-plugin-image-preview filepond-plugin-image-crop filepond-plugin-image-transform

After successfully adding it, you can import vue-filepond into the desired Vue component.

import vueFilePond from 'vue-filepond'
import FilePondPluginFileValidateType from 'filepond-plugin-file-validate-type'
import FilePondPluginImagePreview from 'filepond-plugin-image-preview'
import FilePondPluginImageCrop from 'filepond-plugin-image-crop'
import FilePondPluginImageTransform from 'filepond-plugin-image-transform'

const FilePond = vueFilePond(
  FilePondPluginFileValidateType,
  FilePondPluginImagePreview,
  FilePondPluginImageCrop,
  FilePondPluginImageTransform
)

The FilePond component created from the plugins is then used in the template like this:

  <FilePond
    ref="pond"
    :server="{
      process: (fieldName, file, metadata, load, error, progress, abort) => {
        uploadFile(file, metadata, load, error, progress, abort)
      },
    }"
    @removefile="onRemoveFile"
  />

Now to the FilePond component: the ref is required to call methods like processFiles, addFile, etc. on the component. When a file is processed, our uploadFile method is executed with the given parameters. Important: the AWS methods must also be imported from aws.js.

import { singleUpload, deleteObjectByKey } from '@/aws.js'

Step 2: Uploading the file

The file upload in our Vue app can now be implemented quite easily. Inside uploadFile we call singleUpload with the file and the desired folder as parameters. If the upload was successful, we receive a response with status 200.

async uploadFile(file, metadata, load, error, progress, abort) {
  const result = await singleUpload(
    file,
    this.$route.params.teamSlug // folder of the file, change this to your own variable or a string
  )
  if (result && result.status === 200) {
    // Handle storing it to your database here
    load(file) // Let FilePond know the processing is done
  } else {
    error() // Let FilePond know the upload was unsuccessful
  }
  return {
    abort: () => {
      // This function is entered if the user has tapped the cancel button
      abort() // Let FilePond know the request has been cancelled
    },
  }
},

Step 3: Rendering the pictures

To display the files in your Vue app later, file data such as the key and URL must be saved in the database. If you only save images, the key alone is sufficient, since the URL can be generated in a computed property.

computed: {
  imgSrcArray() {
    return this.keys.map(key => 'https://s3.eu-central-1.amazonaws.com/vue-fileupload-example/' + key)
  },
},

Important: Exchange eu-central-1 for your bucket region and vue-fileupload-example for your bucket name! Then you can render a list of images with v-for, for example. (Note that an arrow function would not work here, because computed properties need this to point to the component instance.)

<img v-for="src in imgSrcArray" :src="src" :key="src"/>

Step 4: Removing files

In step 1 you have probably already noticed the @removefile listener. Now I will show you the method that is executed when a file is removed.

async onRemoveFile(event) {
  // event.file.name only contains the part after the slash, i.e. the actual file name
  const key = this.$route.params.teamSlug + '/' + event.file.name
  const res = await deleteObjectByKey(key)
  if (
    res.$response.httpResponse.statusCode >= 200 &&
    res.$response.httpResponse.statusCode < 300
  ) {
    // remove the data from your database here too
  }
},

A status code between 200 and 299 means that the file has either been deleted or did not exist in the first place.
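The range check from onRemoveFile can be read as a small predicate – isSuccess is a hypothetical name, not part of the code above:

```javascript
// Any 2xx status counts as success: the object was deleted, or it never existed.
// (S3's deleteObject typically answers with 204 No Content.)
const isSuccess = statusCode => statusCode >= 200 && statusCode < 300
```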

Conclusion

With the help of an AWS bucket and signed URLs, it is relatively easy to implement a file upload without much backend involvement, so the backend is not put under unnecessary load. In combination with Vue and FilePond, the upload is ready for use entirely from the frontend.

Install WordPress locally in less than a minute

Hosting and developing WordPress locally has always taken some effort. First of all, a local server and a MySQL database are required to install WordPress at all. For a long time, tools like MAMP for the Mac were the trusty companions of a WordPress developer. Since tools and workflows should move with the times, there have been major changes here. We present our current WordPress workflow and the tools we use for it.

Basic setup for our workflow

Modern workflows separate the development environment from the live environment. Since errors can occur again and again during development, it must be ensured that the actual live page is not affected. Our workflow always has the following three WordPress versions:

  1. Locally on the developer’s computer
  2. Test on the staging server separately from the live version
  3. Live version

First of all, features or adjustments are implemented locally on the developer’s computer. Once a completed work step can be tested, the update is uploaded to the staging server. This is a WordPress installation completely separate from the live version, reachable for example at test.deindomain.de. Only once the feature has been tested there without errors do we update the live version with the new functions.

Setting up the tools

As soon as you have downloaded Local by Flywheel, you can start with the local setup. In the first step, you simply start the program.

To create a new local WordPress site, start the Local wizard with “Create New Site”. You are now asked for the name of the new WordPress site. If you want to select additional options, such as a stored blueprint, you can do this under the Advanced Options; the local domain and the folder directory can also be adjusted here. In the next step, you have to select the server configuration. If you don’t need any special features, you can simply leave the “Preferred” preset active and continue.

In the last step, the WP-Admin credentials are requested, i.e. the data with which you want to log into your WordPress site after installation.

Then Local sets up the local machine, and after a short wait your WordPress site is ready. The actual development can finally begin! 😉

A quick look at the local interface

Once Local has created your site, it appears in the left sidebar. If you select it, you can see various data about your local machine. For one thing, you can start or stop it; the machine must of course be running if you want to open the site in the browser. You will find the link under “View Site”, and a click on “Admin” takes you directly to the WP Admin area. In addition to data such as the WordPress version, PHP version, or the site path, you will find another feature at the bottom that is our little highlight: you can create a live link to your local machine.

With this link, the WordPress site can also be accessed externally, and feedback on the current state of development can be obtained quickly without having to deploy the site first.

Synchronize existing WordPress installations

We have now set up a local WordPress site and can start or stop it as we wish. In the following articles, I will explain how to synchronize the local WordPress installation with an existing live version and how to secure the development and deployment workflow with Git.

Part 2: Importing the content of a live version into the local WordPress instance

Part 3: Deployment of the WordPress theme using Git and DeployHQ