Run Stable Diffusion Without Giving Away Your Privacy Using AWS + Automatic1111

Paulo Carvalho
4 min read · Dec 10, 2023

The huge appetite for data in AI development means that almost everything you do with a hosted model can be logged and used for third-party training. But what happens if you want to stay in control of your data and don’t have a powerful computer locally? We will go over how to run inference or train Stable Diffusion models on a private server in the cloud.

Stable Diffusion generated image using V1.5 of a “photo of a spaceship going into warp near earth”


We will go over how to set up an environment for running inference with Stable Diffusion models on your own private server.

This approach is preferable when you don’t have a computer powerful enough to run Stable Diffusion locally but still want to remain in control of your data.

To accomplish this, we will deploy Automatic1111, an open-source Stable Diffusion web UI, to a GPU-enabled VM in the AWS cloud.


Step 1 — Set Up EC2 Instance

If your account is new, you will likely need to request a quota increase for the GPU-enabled instances we will be using here.

Once the quota increase is granted (this process may take a few hours), proceed with the creation of the EC2 instance.

Navigate to the EC2 instance creation page in the AWS Console. The EC2 instance is the virtual machine (your computer/server in the AWS cloud) that will run the application.

The instance type we are going to create, g4dn, offers a good price-to-performance ratio for a GPU-enabled instance.

In the network settings make sure to allow SSH from your IP.

Even though the instance has an ephemeral volume attached while it is running, we will reserve a non-volatile volume (one that is preserved when the instance is stopped) of sufficient size to host your models.
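Once you connect to the instance (Step 2), it is worth sanity-checking that the volume size you reserved is what actually got attached. A minimal check:

```shell
# Show the root filesystem and its size; it should match the volume
# size you reserved when creating the instance. (On g4dn instances the
# ephemeral NVMe drive appears as a separate device.)
df -h /
```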

Step 2 — Access the Instance

Open your ~/.ssh/config file and add the following entry for your server.

Host ai-playground
User ubuntu
HostName the_ip_of_your_instance_goes_here
IdentityFile ~/.ssh/keys/your-key-name.pem

Add the key you downloaded as part of the instance creation process to the ~/.ssh/keys/ folder.

Restrict access permission to your key by running the command below.

chmod 400 ~/.ssh/keys/your-key-name.pem

You can now access your instance by running the following command from your terminal.

ssh -L 7859:localhost:7860 ai-playground

The above command both lets you run commands on the instance and forwards port 7860 on the instance (which webui uses) to port 7859 on your local machine. We use a different local port to avoid a conflict in case you are also running webui locally.
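Once the webui backend is running (Step 3), a quick way to confirm the tunnel is forwarding traffic is to ask curl for the HTTP status code from your local machine:

```shell
# Print the HTTP status returned through the tunnel; "000" means the
# connection was refused (webui not running yet, or the tunnel is down).
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:7859/ || true
```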

Step 3 — Install Automatic1111

We will be using Automatic1111 as an easy way to expose the commands required to perform inference and train models for Stable Diffusion.

To install it, run the following sequence of commands in your instance:

sudo apt install wget git python3 python3-venv aria2 libgl1 libglib2.0-0 google-perftools
mkdir sd-webui && cd sd-webui
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
chmod +x webui.sh
./webui.sh

By the end of the script, the webui backend will be automatically launched.

Step 4 — Download Models

There are several ways to download models. One of the easiest is to choose your desired model on a model hub, hover over the download button, take note of the model ID at the end of the URL, and run the following command on your instance.

aria2c --console-log-level=error -c -x 16 -s 16 -k 1M -d sd-webui/stable-diffusion-webui/models/Stable-diffusion -o THE_MODEL_NAME.safetensors "THE_MODEL_DOWNLOAD_URL"
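If you end up scripting several downloads, a small helper can derive the `-o` filename from the download URL. This is a hypothetical convenience, not part of webui or aria2:

```shell
# Hypothetical helper: extract a .safetensors filename from a download URL,
# dropping the path prefix and any query string.
model_filename() {
  local name="${1##*/}"         # strip everything up to the last '/'
  printf '%s\n' "${name%%\?*}"  # strip any '?query' suffix
}

model_filename "https://example.com/models/123/my-model.safetensors?type=Model"
# → my-model.safetensors
```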

Step 5 — Access Automatic1111

You can now open http://localhost:7859/ in your browser to run the webui and start generating images with your favorite stable diffusion models!
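Beyond the browser UI, Automatic1111 also exposes an HTTP API when launched with the `--api` flag, and you can drive it through the same tunnel. A minimal sketch of a `txt2img` request body (field names follow the webui API; the prompt and step count are just examples):

```shell
# Build a minimal txt2img request body for Automatic1111's built-in API
# (enable it by launching webui with the --api flag).
cat <<'EOF' > payload.json
{"prompt": "photo of a spaceship going into warp near earth", "steps": 20}
EOF

# Then POST it through the SSH tunnel from your local machine:
# curl -s -X POST http://localhost:7859/sdapi/v1/txt2img \
#      -H 'Content-Type: application/json' -d @payload.json
cat payload.json
```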


We have gone over how to create a GPU-enabled EC2 instance and deploy a Stable Diffusion web UI, accessible locally, for running inference in the cloud.


Need help setting this up, or want to talk to an AI expert?

Email us at 🚀



Paulo Carvalho

Want to chat about startups, consulting or engineering? Just send me an email on