A child asks her father: "Daddy, what are clouds made of?" The father answers: "High-performance computing, mostly, my child."
Introduction to High-Performance Computing
In the past few years the cloud business has truly taken off, and the significance of cloud computing can no longer be ignored, at least not if you want your business to thrive. Organizations worldwide have been adopting cloud services in part, or migrating their infrastructure entirely, at an exponential rate, as it has become crystal clear that the cloud is the future of IT.
There is one capability that is fairly new to the cloud game: HPC (high-performance computing) clusters. AWS recently introduced a product called EFA (Elastic Fabric Adapter). Essentially, these are servers connected by a fast, low-latency network within AWS data centers. Put into human language, it is a scalable, on-demand, pay-per-use supercomputer at your fingertips. While AWS has been a trailblazer in offering HPC via EFA, Azure and Google Cloud already provide similar capabilities with (arguably) similar performance.
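To make that concrete, here is a minimal sketch (Python with boto3) of how one might launch a handful of EFA-enabled instances inside a cluster placement group. The AMI, subnet, and security-group IDs are placeholders, and c5n.18xlarge is just one example of an EFA-capable instance type.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group keeps the instances physically close for low latency.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

# Launch a small cluster of EFA-enabled instances.
# The AMI, subnet, and security group IDs below are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # e.g. an HPC-ready AMI of your choice
    InstanceType="c5n.18xlarge",            # one of the EFA-capable instance types
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-demo"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",             # attach the Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
    }],
)
```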
Another relatively new offering from AWS is the F1 instance. In layman’s terms, F1 runs your complex computation code on an extremely powerful FPGA attached to the server. In other words, the F1 instance gives you a custom processor tailored to your specific calculation, which can therefore run dramatically faster than a regular (or even a powerful) CPU.
Applications
Clouds have nearly limitless applications for businesses. This article focuses on applying cloud services to complex computational problems that take far too long to run on your own PCs. When time is of the essence, most such problems can be tackled in one of three ways:
1. Splitting the problem into smaller (but still massive) independent sub-problems:
In this case, no HPC or FPGA is needed. Regular cloud servers can be used to solve the problem faster than a single computer could. The scaling is linear or almost linear at first and gradually saturates as the number of servers grows. For instance, using two servers instead of one may solve the problem twice as fast; using 2,000 servers instead of 1,000, however, may only speed up the solution by 30% due to communication and synchronization overheads. As long as each sub-problem requires a lot of work, there is no need for a low-latency network between the servers. Any server (from any cloud provider) can be used to run the parallelized implementation of the problem.
Here are some examples of such problems (a minimal code sketch of the idea follows the list):
- Image processing, where the image can be divided into smaller, independently processed images.
- Data mining and processing of a large database, where each server is responsible for a significant chunk of work. One specific example is creating a mathematical model for a company based on historical data of several million clients.
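As a toy illustration of this first approach, the following sketch (plain Python, standard library only) splits a workload into independent chunks and processes them in parallel. In the cloud, each chunk would simply be shipped to a separate server rather than a separate local process, and process_chunk is a stand-in for whatever per-chunk work your problem actually needs.

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for the real per-chunk work (an image tile, a slice of a database, ...).
    return sum(x * x for x in chunk)

def split(data, n_chunks):
    # Divide the input into roughly equal, independent chunks.
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, n_chunks=8)

    # Each chunk is processed with no communication between workers;
    # on the cloud, each worker would simply be a separate server.
    with ProcessPoolExecutor(max_workers=8) as pool:
        partial_results = list(pool.map(process_chunk, chunks))

    print(sum(partial_results))
```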
2. Splitting the problem into small, interdependent sub-problems:
Somewhat similar to the previous case, with one KEY difference – the sub-tasks are SMALL, which means there is a lot of communication going back and forth between the servers working on the solution. This, in turn, implies that a very fast network with very low latency is required to achieve decent performance. The scaling of the solution time with cluster size is expected to be less linear than in the previous case; however, a meaningful speed-up can still be achieved using such clusters.
This is a classic HPC domain. Problems solved on HPC clusters involve complex mathematical models such as weather models, physical models, financial models, etc.
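To give a flavour of this tightly coupled style, here is a minimal sketch assuming an MPI installation plus the mpi4py and numpy packages (launched with mpirun). Each rank repeatedly exchanges boundary values with its neighbours before updating its own slice of the domain; it is exactly this constant chatter that makes a low-latency network essential.

```python
# Run with, for example: mpirun -np 4 python stencil.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank owns a small slice of the domain, plus two "ghost" cells at the edges.
local = np.full(12, float(rank))

for step in range(100):
    # Exchange boundary values with the left and right neighbours on every step.
    if rank > 0:
        comm.Sendrecv(sendbuf=local[1:2], dest=rank - 1,
                      recvbuf=local[0:1], source=rank - 1)
    if rank < size - 1:
        comm.Sendrecv(sendbuf=local[-2:-1], dest=rank + 1,
                      recvbuf=local[-1:], source=rank + 1)

    # Simple smoothing update on the interior cells.
    local[1:-1] = 0.5 * local[1:-1] + 0.25 * (local[:-2] + local[2:])

print(f"rank {rank}: mean = {local[1:-1].mean():.3f}")
```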
3. Brute-forcing the problem on a single piece of super-fast hardware:
Some problems simply cannot be split efficiently into parallel, independent tasks. In this scenario, it is possible to use the previously mentioned F1 (FPGA) instances. The one limitation of using an FPGA is that all the data must be preloaded into memory; otherwise, data access, rather than the computational power we are after, becomes the bottleneck, and regular servers should be used instead.
What can HPC do for SMBs
As the cost of HPC decreases, small businesses discover new opportunities where HPC can be utilized to help provide a competitive edge. Consider these ROI results from an IDC research update, which noted that investments in HPC can be associated with “very substantial returns”:
- An average of $673 (USD) in revenue per dollar invested in HPC
- An average of $44 (USD) in profits (or cost savings) per dollar invested in HPC
Here are just a few examples of HPC applications for small businesses:
- Small manufacturers and component designers can now use HPC to power applications that allow them to test and validate product designs in their R&D shops – without building costly physical prototypes.
- Engineering firms can use HPC systems to slash the time required to run complex simulations, compared to running the same jobs on workstations.
- Visual effects studios can use HPC clusters to accelerate the rendering of complex animations in movies and games.
- Financial analysts can run complex financial market simulations, analyzing years of collected data to provide their clients with investment recommendations.
- A small chain store could run a trend analysis on several years of historical sales data to understand which products sell when. Add shelf-placement information to the analysis, and you can improve sales by targeting high-margin merchandise to the fastest-selling shelves (a toy sketch of such an analysis follows).
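As a toy sketch of that last example, assuming pandas and a hypothetical sales.csv with product, date, units_sold, and margin columns, one could aggregate sales by product and month and rank products by margin along these lines:

```python
import pandas as pd

# Hypothetical sales history: one row per transaction with
# columns: product, date, units_sold, margin.
sales = pd.read_csv("sales.csv", parse_dates=["date"])

# Aggregate units sold by product and month to expose seasonal trends.
monthly = (
    sales
    .assign(month=sales["date"].dt.to_period("M"))
    .groupby(["product", "month"])["units_sold"]
    .sum()
    .reset_index()
)

# Rank products by average margin to decide which ones deserve
# the fastest-selling shelves.
by_margin = (
    sales.groupby("product")["margin"]
    .mean()
    .sort_values(ascending=False)
)

print(monthly.head())
print(by_margin.head())
```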
In summary:
Today’s cloud services provide vast, previously unthinkable capabilities for the advanced user – “advanced” being the keyword. Hardware with stunning performance is now just a few mouse clicks away. The key to unlocking the immense potential and power of the cloud lies in securing talent with experience in parallel programming and a solid working knowledge of cloud services.
Sources
[1] Joye Jablonski, High-Performance Compute on AWS, Google & Azure
[2] Ed Turkel, What can HPC do for my business?, CIO