
In today’s interconnected society, the amount of data being produced is increasing exponentially, especially with recent developments in Internet of Things (IoT) technology and mobile networking.


This means that the amount of data organisations have to store and process will increase vastly over the next few years, so these organisations are looking for future-proof IT infrastructure solutions that can process, store and analyse large volumes of data faster than ever.

 

This is where High Performance Computing (HPC) comes in. HPC is the use of supercomputers and parallel processing techniques to solve complex computational problems.  These supercomputers are far more powerful than standard consumer laptops and desktops: a typical PC can perform billions of calculations per second, which is slow compared to the quadrillions of calculations per second that supercomputers can achieve.
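To put that gap in perspective, here is a rough back-of-the-envelope comparison. The throughput figures below are illustrative orders of magnitude, not benchmarks of any particular system:

```python
# Illustrative comparison of consumer vs. supercomputer throughput.
# These figures are rough orders of magnitude, not measured benchmarks.
consumer_flops = 100e9        # ~100 gigaFLOPS: a typical desktop CPU
supercomputer_flops = 100e15  # ~100 petaFLOPS: a top-tier supercomputer

workload = 1e18  # a hypothetical job needing a quintillion operations

print(f"Desktop:       {workload / consumer_flops / 3600:,.0f} hours")
print(f"Supercomputer: {workload / supercomputer_flops:,.0f} seconds")
```

The same job that would tie up a desktop for months finishes on a petascale machine in seconds, which is why HPC matters for time-sensitive analysis of large datasets.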

 

Supercomputers are essentially computers with a very high level of performance compared to general-purpose machines.  They usually contain thousands of nodes that work together on one or more computational tasks, a technique known as “parallel processing”. This makes supercomputers very good at complex computing tasks such as editing feature films with advanced special effects, weather forecasting and artificial intelligence.
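The idea behind parallel processing can be sketched in miniature on a single machine: split a job into independent chunks and let several workers (standing in for a supercomputer's nodes) compute their parts at once. This is a minimal illustration of the concept, not how any particular supercomputer schedules work:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" (here, a local worker process) handles its own slice of data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the workload four ways, one chunk per worker.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial sum, computed in parallel
```

Real HPC clusters apply the same divide-and-combine pattern across thousands of physical nodes connected by high-speed interconnects, rather than processes on one machine.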

 

These supercomputer nodes need processors that are designed for parallel computing workloads. Intel’s Xeon Phi family of processors has been designed specifically for HPC workloads, with up to 72 cores per chip.


This enables the processor to deliver extremely high processing performance of over 3 teraFLOPS while maintaining respectable power efficiency.  Xeon Phi chips can also be used as co-processors alongside conventional server CPUs in the form of PCIe expansion cards, as in the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou, which pairs Xeon Phi coprocessors with Ivy Bridge-EP Xeon processors.

 

HPC is currently used in a range of applications that are crucial for society, such as real-time analysis of stock trends, automated trading and, more recently, artificial intelligence (AI).  AI is the current frontier of what is possible with high performance computers, and optimisation for AI workloads is quickly becoming the standard for the HPC market.

 

AI has so many use cases that most industries will be able to benefit from it in some way, thanks to its ability to analyse large amounts of data, identify trends, predict future behaviour and automate responses to the data.
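That analyse → predict → respond loop can be sketched with something as simple as a linear trend fit. The sensor readings and threshold below are hypothetical, and real AI systems use far more sophisticated models than a straight line:

```python
def fit_trend(values):
    """Least-squares slope and intercept over evenly spaced observations."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    return slope, y_mean - slope * x_mean

def respond(forecast, threshold):
    # Automated response: act when the predicted value crosses a threshold.
    return "scale up cooling" if forecast > threshold else "no action"

temps = [21.0, 21.5, 22.1, 22.6, 23.2]    # hypothetical sensor readings
slope, intercept = fit_trend(temps)
forecast = slope * len(temps) + intercept  # predict the next reading
print(respond(forecast, threshold=23.5))   # prints "scale up cooling"
```

The value of AI at scale comes from running this kind of loop continuously over millions of data points, which is exactly the workload HPC systems are built for.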

Google demonstrated the real-world benefits of AI in 2016, when it applied DeepMind’s AI to one of its facilities and saw a 40% reduction in the energy used for cooling. This translated into much lower energy costs for the facility, a very attractive benefit for data centres. Since data centres make up a significant portion of global energy use, applying AI in this way could significantly reduce human energy use and our overall impact on the environment.


 

If you have more questions about high performance computing or are interested in our HPC solutions, please contact us by phone or email.  

Sales@serverfactory.co.uk

+44 (0)20 3432 5270

Posted in News By Server Factory


Supermicro releases new Edge computing products in response to emerging AI and 5G technologies

At Mobile World Congress 2019 in Barcelona, Supermicro announced that it is launching new Edge computing systems designed to cope with artificial intelligence and 5G workloads.

In a nutshell, Edge computing brings data storage and computing power physically closer to the end user.  In the words of Alex Reznik, Chair of the ETSI MEC ISG standards committee, “anything that’s not a traditional data center could be the ‘edge’ to somebody”.  This includes workloads running on systems physically located on customer premises.  In short, the goal of Edge computing is to push applications, data and computing power away from centralised points such as data centres and closer to the end user.

Supermicro is putting more emphasis on Edge systems mainly because of the emergence of 5G networking.  The amount of data handled by today’s edge systems is constantly growing, which poses challenges for businesses such as bandwidth congestion, processing delays and privacy issues.

With its new Edge platforms based on Supermicro servers, the company hopes to help businesses process large data volumes, increase reliability, reduce latency and provide more secure connections.  The new 1019D-16C-FHN13TP and 1019D-FRN5TP edge computing systems are designed for the intelligent Edge, balancing compute, storage, AI and networking capabilities and supporting up to 37 LAN ports, including RJ45 and SFP ports.  These edge systems are also compact: at 1U high and 15 inches deep, they are perfect for businesses looking to run them on-site.

For more information about these systems, please contact us by phone or email and we will be glad to further assist you.

Email: Sales@serverfactory.co.uk

Phone: +44 (0)20 3432 5270

Posted in News By Server Factory
