Implementing High Performance Databases on Azure

High performance computing (HPC) is IO-intensive: even the fastest supercomputer is throttled by the rate at which data can be moved to and from it. Other IO-intensive workloads, such as machine learning and artificial intelligence, face the same constraint. 

For HPC to become a mainstream enterprise offering in the near future, it will need to be integrated with the cloud. Historically, however, IO-intensive applications have performed poorly in the cloud because data transfer introduces significant latency. 

In the past, businesses that tried to migrate their IO-intensive applications to the cloud were forced to contend with serious latency penalties and usually ended up reverting to private or shared hosting. High performance storage solutions such as shared NVMe now make it possible to run IO-intensive applications in the cloud without those penalties. 

How High Performance Storage Through Shared NVMe Works

Shared NVMe pools NVMe storage resources distributed across a network. It is implemented with software-defined storage: the software layer abstracts the underlying hardware and presents logical volumes, enabling large-scale data management through centralized, intelligent management and monitoring. 
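As an illustration of the pooling idea, the sketch below aggregates the capacity of several NVMe devices into one logical volume with a centralized view of the pool. The names `NvmeDevice` and `LogicalVolume` are hypothetical, not part of any real software-defined storage product, and real products also handle striping, replication, and failure recovery.

```python
from dataclasses import dataclass

@dataclass
class NvmeDevice:
    """One physical NVMe drive on some node in the network (illustrative)."""
    node: str
    capacity_gb: int

class LogicalVolume:
    """A logical volume abstracted over pooled NVMe devices.

    This sketch only models capacity aggregation and centralized
    monitoring; it is not a real SDS implementation.
    """
    def __init__(self, devices):
        self.devices = list(devices)

    @property
    def capacity_gb(self):
        # The logical volume's size is the sum of all pooled drives.
        return sum(d.capacity_gb for d in self.devices)

    def report(self):
        """Centralized per-node view of the pool's capacity."""
        usage = {}
        for d in self.devices:
            usage[d.node] = usage.get(d.node, 0) + d.capacity_gb
        return usage

pool = LogicalVolume([
    NvmeDevice("node-a", 1900),
    NvmeDevice("node-a", 1900),
    NvmeDevice("node-b", 3800),
])
print(pool.capacity_gb)  # 7600
print(pool.report())     # {'node-a': 3800, 'node-b': 3800}
```

The point of the abstraction is that consumers see one volume and one management surface, regardless of how many nodes contribute drives underneath.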

Using shared NVMe has been shown to deliver up to 10x more bandwidth, up to 80% lower latency, and up to 25x more input/output operations per second (IOPS). 

NVMe over Fabrics (NVMe-oF) uses remote direct memory access (RDMA) to communicate over a network with much lower latency. It can run over any RDMA transport, including InfiniBand. Using NVMe-oF on cloud servers equipped with NVMe drives significantly lowers latency. 

How High Performance Storage is Implemented on Azure 

For high performance databases on Azure, shared NVMe can be implemented with N-series as well as H-series virtual machines. N-series VMs provide GPUs and InfiniBand, and can access the NVMe storage of H-series VMs as long as NVMe sharing is in use. H-series VMs offer many fast cores and come equipped with InfiniBand and NVMe for high performance. 

This combination allows businesses to run analytics, HPC, and database workloads on Azure at high performance. 

There are other options for addressing the latency problem on Azure, but they don't work as well as shared NVMe. The first is the L-series, aimed at storage-optimized use cases. Although the L-series can run in both converged and disaggregated modes, the disaggregated mode is more likely given the limited CPU power available. 

The second is the N-series for GPU-based applications such as graphics rendering and video editing. However, these VMs have no local SSDs. 

Benefits of Using Shared NVMe

Shared NVMe enables added security and resilience, which is crucial for IO-intensive applications. Data can be spread across availability zones by mirroring it across local NVMe drives. Because data is stored on nodes within a business's own account, security and data compliance concerns are reduced. Data longevity can be further ensured through advanced self-healing and warning features. 
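The zone-mirroring idea above can be sketched as follows. The zone names and the synchronous two-copy write are illustrative assumptions, not any specific product's behavior; real shared-NVMe mirroring happens at the block layer, not as a key-value store.

```python
class MirroredStore:
    """Toy key-value store that mirrors every write across zones.

    A write completes only after it lands in every zone, so losing
    an entire zone loses no data.
    """
    def __init__(self, zones):
        self.replicas = {z: {} for z in zones}

    def put(self, key, value):
        # Synchronously write the value into every zone's replica.
        for replica in self.replicas.values():
            replica[key] = value

    def get(self, key, failed_zone=None):
        # Serve the read from any zone that is still available.
        for zone, replica in self.replicas.items():
            if zone != failed_zone and key in replica:
                return replica[key]
        raise KeyError(key)

store = MirroredStore(["zone-1", "zone-2"])  # hypothetical zone names
store.put("order:42", b"payload")
# The data survives the loss of an entire zone:
print(store.get("order:42", failed_zone="zone-1"))  # b'payload'
```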

Shared NVMe also helps businesses avoid costly overprovisioning of storage. This frees enterprises to embrace multi-cloud strategies with better cost control, greater agility, and higher performance. For example, data scientists can train models efficiently and cost-effectively. 
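A simple cost comparison shows why pooling avoids overprovisioning: with dedicated per-VM drives, each VM must be sized for its own peak, while a shared pool only needs to cover the combined peak, which is smaller because peaks rarely coincide. All figures here are hypothetical.

```python
# Hypothetical per-VM peak storage needs, in TB.
vm_peaks = [4, 4, 4, 4]   # each VM provisioned for its own peak
combined_peak = 10        # the pool's combined peak is smaller,
                          # since the VMs rarely peak together
price_per_tb = 90         # hypothetical $/TB/month

dedicated_cost = sum(vm_peaks) * price_per_tb  # 16 TB provisioned
pooled_cost = combined_peak * price_per_tb     # 10 TB provisioned

print(dedicated_cost, pooled_cost)   # 1440 900
print(dedicated_cost - pooled_cost)  # 540 saved per month
```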

Conclusion 

Overall, shared NVMe enables IO-intensive applications, from high performance computing to graphics rendering, to be deployed in the cloud at high performance. It delivers significantly higher bandwidth and throughput as well as lower latency, and it brings further advantages such as added security and better data longevity.
