The final step to going cloud native is to decide where you want to start. As the last installment in this series, we’ll explore how to approach cloud native application development, where to start the process within your organization, and the types of things that you may encounter along the way.
As the rest of this series has shown, cloud native platforms are quickly becoming a powerful alternative to x86-based compute. As we showed in Part 4, there is a tremendous difference between a full-core Ampere vCPU and a half-core x86 vCPU in terms of performance, predictability, and power efficiency.
How to Approach Cloud Native Application Development
The natural way to design, implement, and deploy distributed applications for a Cloud Native computing environment is to break that application up into smaller components, or microservices, each responsible for a specific task. Within these microservices, you will typically have multiple technology elements that combine to deliver that functionality. For example, your order management system may contain a private datastore (perhaps to cache order and customer information in-memory), and a session manager to handle a customer’s shopping basket, in addition to an API manager to enable the front-end service to interact with it. In addition, it may connect with an inventory service to determine item availability, perhaps a delivery module to determine shipping costs and delivery dates, and a payments service to take payment.
The distributed nature of cloud computing enables applications to scale with demand and maintain application components independently of each other in a way monolithic software simply can’t. If you have a lot of traffic to your e-commerce site, you can scale the front-end independently of the inventory service or payments engine or add more workers to handle order management. Instead of single, huge applications where one failure can lead to global system failures, cloud native applications are designed to be resilient by isolating failures in one component from other components.
In addition, a cloud native approach enables software to fully exploit available hardware capabilities, by only creating the services required to handle the current load and turning resources off in off-peak hours. Modern cloud native CPUs like those from Ampere provide very high numbers of fast CPU cores with fast interconnect, enabling software architects to scale their applications effectively.
In Part 2 and Part 3 of this series, we showed how transitioning applications to an ARM-based cloud native platform is relatively straightforward. In this article, we will describe the steps typically required to make such a transition successful.
Where to Start Within Your Organization
The first step in the process of migrating to Ampere’s Cloud Native Arm64 processors is to choose the right application. Some applications which are more tightly coupled to alternative CPU architectures may prove more challenging to migrate, either because they have a source code dependency on a specific instruction set, or because of performance or functionality constraints associated with the instruction set. However, Ampere processors are, by design, an excellent fit for a great many cloud applications.
Analyzing your application dependencies
Once you have chosen an application that you think is a good fit for migration, your next step is to identify potential work required to update your dependency stack. The dependency stack will include the host or guest operating system, the programming language and runtime, and any application dependencies that your service may have. The Arm64 instruction set used in Ampere CPUs has risen to prominence relatively recently, and many projects have invested in Arm64 performance improvements over the past few years. As a result, a common theme in this section will be “newer versions will be better”.
Building and testing software on Arm64
The availability of Arm64 Compute resources on Cloud Service Providers (CSPs) has recently expanded and continues to grow. As you can see from the Where to Try and Where to Buy pages on the Ampere Computing website, the availability of Arm64 hardware, either in your datacenter or on a cloud platform, is not an issue.
Once you have access to an Ampere instance (bare metal or virtual machine), you can start the build and test phase of your migration. As we said above, most modern languages are fully supported with Arm64 now being a tier 1 platform. For many projects, the build process will be as simple as recompiling your binaries or deploying your Java code to an Arm64 native JVM.
However, sometimes issues with the software development process may result in some “technical debt” that the team may have to pay down as part of the migration process. This can come in many forms. For example, developers can make assumptions about the availability of a certain hardware feature, or about implementation-specific behavior that is not defined in a standard. For instance, the char data type can be defined either as a signed or unsigned character, according to the implementation, and in Linux on x86, it is signed (that is, it has a range from –128 to 127). However, on Arm64, with the same compiler, it is unsigned (with a range of 0 to 255). As a result, code that relies on the signedness of the char data type will not work correctly.
In general, however, code which is standards-conformant, and which does not rely on x86-specific hardware features like SSE, can be built easily on Ampere processors. Most Continuous Integration tools (the tools that manage automated builds and testing across a matrix of supported platforms) like Jenkins, CircleCI, Travis, GitHub Actions and others support Arm64 build nodes.
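As one hedged illustration of such a CI setup, a GitHub Actions workflow can build and test on both architectures with a runner matrix. The Arm64 runner label and the `make` targets below are assumptions; check your CI provider’s documentation for the labels available to your account:

```yaml
# Hypothetical two-architecture build matrix.
name: build
on: [push]
jobs:
  build:
    strategy:
      matrix:
        runner: [ubuntu-latest, ubuntu-24.04-arm]  # Arm64 label is an assumption
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: make build test   # illustrative targets
```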
Managing application deployment in production
We can now look at what will change in your infrastructure management when deploying your cloud native application to production. The first thing to note is that you do not have to move a whole application at once – you can pick and choose parts of your application that will benefit most from a migration to Arm64, and start with those. Most hosted Kubernetes services support heterogeneous infrastructure in a single cluster. Annoyingly, different CSPs have different names for the mechanism of mixing compute nodes of different types in a single Kubernetes cluster, but all the major CSPs now support this functionality. Once you have an Ampere Compute pool in your Kubernetes cluster, you can control container placement with the built-in kubernetes.io/arch node label: a nodeSelector or node affinity rule for kubernetes.io/arch=arm64 ensures a pod is scheduled only onto Arm64 nodes, and you can optionally taint the Ampere pool (with a matching toleration on your Arm64 workloads) to keep other containers off it.
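A minimal pod spec pinning a workload to Arm64 nodes might look like the following sketch. The pod and image names are illustrative, and the taint/toleration pair is optional – it assumes your Ampere node pool was created with that taint. The kubernetes.io/arch label is applied to nodes automatically by the kubelet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: order-service            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: arm64    # built-in node label
  tolerations:                   # only needed if the Ampere pool is tainted
  - key: "arch"
    operator: "Equal"
    value: "arm64"
    effect: "NoSchedule"
  containers:
  - name: order-service
    image: example.com/order-service:latest   # illustrative image
```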
If you have been building your project containers for the Arm64 architecture, it is straightforward to create a multi-architecture container manifest. This is essentially a manifest list containing pointers to one container image per architecture, from which the container runtime chooses the right image based on the host architecture.
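Abridged for illustration, an OCI image index for such a multi-architecture container looks like this (real entries also carry `size` fields, and the digests here are placeholders):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<amd64-image-digest>",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<arm64-image-digest>",
      "platform": { "architecture": "arm64", "os": "linux" }
    }
  ]
}
```

Tools such as `docker buildx` or `docker manifest` can generate and push an index like this for you; the runtime on each node then pulls the entry matching its own platform.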
The main issues people typically encounter at the deployment phase can again be characterized as “technical debt”. Deployment and automation scripts can assume certain platform-specific pathnames, or be hard-coded to rely on binary artifacts that are x86-only. In addition, the architecture string reported by Linux can vary from distribution to distribution: you may come across x86, x86-64, x86_64, arm64, or aarch64. Normalizing platform differences like these may be something that you have never had to do in the past, but as part of a platform transition, it will be important.
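One common way to normalize those architecture strings is a small shell helper in your deployment scripts; this sketch maps the output of `uname -m` onto a single canonical pair of names (amd64/arm64, chosen here for illustration):

```shell
# Normalize the architecture string reported by uname -m
# to one canonical name per architecture.
arch="$(uname -m)"
case "$arch" in
  x86|i?86|x86-64|x86_64|amd64) arch="amd64" ;;
  aarch64|arm64)                arch="arm64" ;;
  *) echo "unsupported architecture: $arch" >&2; exit 1 ;;
esac
echo "$arch"
```

The normalized name can then be used consistently when selecting binary artifacts, container tags, or download URLs.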
The last component of platform transition is the operationalization of your application. Cloud native applications carry a lot of scaffolding in production to ensure that they operate well: log management to centralize events, monitoring so administrators can verify that things are working as expected, alerting to flag when something out of the ordinary happens, and security tooling such as intrusion detection systems and application firewalls to protect your application from malicious actors. These will require some time investment to ensure that the appropriate agents and infrastructure are activated for application nodes, but since all major monitoring and security platforms now support Arm64, gaining visibility into your application’s inner workings will typically not present a big issue. In fact, many of the largest observability Software-as-a-Service platforms are increasingly moving their own application platforms to Ampere and other Arm64 hardware to take advantage of the cost savings on offer.
Improve Your Bottom Line
The performance and cost benefits of shifting to a Cloud Native processor can be dramatic, making the investment of transitioning well worth the effort. With this approach, you’ll also be able to assess and verify the operational savings your organization can expect to enjoy over time.
Be aware that one of the biggest barriers to improving performance is inertia and the tendency for organizations to keep doing what they’ve been doing, even if it is no longer the most efficient or cost-effective course. That’s why we suggest taking a first step that proves the value of going cloud native for your organization. This way, you’ll have real-world results to share with your stakeholders and show them how cloud native compute can increase application performance and responsiveness without a significant investment or risk.
Cloud Native Processors are here. The question isn’t whether or not to go cloud native, but when you will make the transition. Those organizations who embrace the future sooner will benefit today, giving them a massive advantage over their legacy-bound competitors.
Learn more about developing at the speed of cloud at the Ampere Developer Center, with resources for designing, building, and deploying cloud applications. And when you’re ready to experience the benefits of cloud native compute for yourself, ask your CSP about their cloud native options built on Ampere Altra Family, and AmpereOne technology.