Creating Ampere AI Virtual Machines on Google Cloud
Tutorial
Users don’t have to wait for the general availability of Ampere compute instances on the GCP Marketplace to run their AI inference workloads: Ampere® Altra® resources are already available on Google Cloud. In this document we describe the step-by-step process for creating an Ampere Altra compute instance on Google Cloud for running Ampere Optimized Frameworks, including TensorFlow, PyTorch, and ONNX-RT, for AI inferencing. Ampere compute shapes are called “t2a” and span from one to 48 cores.
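If you want to see which t2a shapes are offered before creating a VM, you can list them from Cloud Shell. The zone below, us-central1-a, is only an example; t2a availability varies by region, so substitute a zone of your choice:

gcloud compute machine-types list --zones=us-central1-a --filter="name~^t2a"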
Step 1:
Log in to your cloud account at cloud.google.com and click “console”.
You will see the Google Cloud console screen.
Step 2:
Once on the console screen, click “Activate Cloud Shell”.
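The remaining commands are shown in Cloud Shell, but they work equally well from a local terminal with the Google Cloud CLI installed. A minimal sketch of the local setup, where <your-project-id> is a placeholder for your own project:

gcloud auth login
gcloud config set project <your-project-id>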
Step 3:
Once in the Cloud Shell, type the following command:
gcloud compute instances create <your-vm-name> \
  --image-family=<framework-name>-ampere-ubuntu \
  --image-project=ampere-ai \
  --machine-type=<t2a-based-shape>
Replace the placeholders in angle brackets with your own values: <your-vm-name> with a name for the VM, <framework-name> with the Ampere Optimized Framework you want (a worked example follows below), and <t2a-based-shape> with one of the available machine types:
t2a-standard-1
t2a-standard-4
t2a-standard-16
t2a-standard-32
t2a-standard-48
Press Enter to run the command.
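If you are unsure of the exact image family name for your framework, you can list the images published in the ampere-ai project and read the family names from the output:

gcloud compute images list --project=ampere-ai

As an illustration, a command using a hypothetical tensorflow-ampere-ubuntu image family and a 4-core shape might look like the following; substitute the family name you found above, and add --zone for a region where t2a shapes are available (us-central1-a is only an example):

gcloud compute instances create my-ampere-vm \
  --image-family=tensorflow-ampere-ubuntu \
  --image-project=ampere-ai \
  --machine-type=t2a-standard-4 \
  --zone=us-central1-a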
Step 4:
A few moments after typing in the command in Step 3, you will see confirmation that the VM has been successfully created. Click on “Navigation menu” in the upper left corner of the interface.
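You can also confirm the VM from Cloud Shell without leaving the terminal. For example, list your instances, or describe the new one directly (the zone placeholder is whichever zone the VM was created in):

gcloud compute instances list
gcloud compute instances describe <your-vm-name> --zone=<your-zone>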
Step 5:
Select “Compute Engine” -> “VM instances”.
Step 6:
Find your VM in the list and click SSH to access it.
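If you prefer the command line, you can open the SSH session from Cloud Shell instead of the browser button (replace the zone with the one your VM was created in):

gcloud compute ssh <your-vm-name> --zone=<your-zone>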
Step 7:
You are now ready to use your Ampere Optimized Framework.
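As a quick sanity check once you are logged in, you can confirm that your chosen framework is importable. The exact environment depends on the image you selected; the line below assumes the TensorFlow image with Python 3 on the default path, so adapt it for PyTorch or ONNX-RT as needed:

python3 -c "import tensorflow as tf; print(tf.__version__)"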