
In today’s interconnected society, the amount of data being produced is increasing exponentially, especially with recent developments in Internet of Things (IoT) technology and mobile networking.

Click here to see a blog post detailing what IoT is.

This means that the amount of data that organisations have to store and process will vastly increase over the next few years, so these organisations are looking for future-proof IT infrastructure solutions that will be able to process, store and analyze large volumes of data faster than ever.

 

This is where High Performance Computing (HPC) comes in. HPC is the use of supercomputers and parallel processing techniques to solve complex computational problems.  These supercomputers are far more powerful than standard consumer laptops and desktops, which can perform billions of calculations per second; that sounds fast, but it is slow compared with the quadrillions of calculations per second a supercomputer can carry out.

 

Supercomputers are essentially just computers that offer a very high level of performance compared to general-purpose computers.  They usually have thousands of nodes that work together on one or more computational tasks, an approach known as “parallel processing”. This makes supercomputers very well suited to demanding computing tasks such as editing feature films with advanced special effects, predicting the weather and artificial intelligence.
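To give a rough sense of what parallel processing means in practice (a simplified, illustrative sketch rather than how any real supercomputer schedules its work), the short Python example below splits one calculation across several worker processes; the worker count and problem size are arbitrary choices for the example.

# Minimal sketch of parallel processing: one task split across workers.
# On a real HPC cluster the "workers" would be separate nodes (for example
# coordinated with a message-passing library such as MPI); here we just use
# local processes to show the principle.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the squares of the integers in [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4                                # arbitrary for the example
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # each chunk runs in parallel

    print(total)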

 

These supercomputer nodes need processors that are designed for parallel computing workloads. Intel’s Xeon Phi family of processors has been designed specifically for HPC workloads, with up to 72 cores per chip.

You can see the full range of Intel's Xeon Phi family processors here.

This enables the processor to deliver extremely high processing performance of over 3 teraFLOPS while maintaining respectable power efficiency.  Xeon Phi chips can also be used as co-processors alongside conventional server CPUs in the form of PCIe expansion cards, as in the Tianhe-2 supercomputer at the National Supercomputer Center in Guangzhou, which pairs Xeon Phi coprocessors with Ivy Bridge-EP Xeon processors.
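For readers curious where teraFLOPS figures like that come from, theoretical peak performance is usually estimated as cores × clock speed × floating-point operations per core per clock cycle. The clock speed and per-cycle figures in the quick Python calculation below are illustrative assumptions rather than quoted specifications:

# Rough theoretical-peak estimate for a 72-core chip; the clock speed and
# FLOPs-per-cycle values are illustrative assumptions, not official specs.
cores = 72
clock_hz = 1.5e9          # assumed 1.5 GHz base clock
flops_per_cycle = 32      # assumed: two AVX-512 units x 8 doubles x 2 (fused multiply-add)

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} teraFLOPS")   # ~3.46 teraFLOPS with these assumptions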

 

HPC is currently used in a range of applications that are crucial for society, such as real-time analysis of stock trends, automated trading and, more recently, artificial intelligence (AI).  AI is the current frontier of what is possible with high performance computers, and optimization for AI workloads is quickly becoming the standard in the HPC market.

 

There are so many use cases for AI that most industries will be able to benefit from it in some way, thanks to its ability to analyze large amounts of data, identify trends, use that information to predict future trends and automate responses to the data.
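As a deliberately tiny illustration of that trend-spotting idea (nothing like a production AI system), the Python sketch below fits a straight line to some invented historical figures and extrapolates the next value; all of the numbers are made up for the example.

# Toy example: fit a trend to made-up historical data and predict the next point.
import numpy as np

months = np.arange(1, 13)                                  # 12 months of (invented) history
demand = 100 + 5 * months + np.random.normal(0, 3, 12)     # synthetic "observed" data

slope, intercept = np.polyfit(months, demand, 1)           # fit a linear trend
next_month = 13
forecast = slope * next_month + intercept
print(f"Forecast for month {next_month}: {forecast:.1f}")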

An example of the real-world benefits of AI was demonstrated by Google in 2014, when it deployed its DeepMind AI in one of its facilities and saw a 40% reduction in the energy used for cooling. This translates directly into much lower energy costs, which is a very attractive benefit for data centers; and since data centers account for a significant portion of global energy use, improvements like this could meaningfully reduce overall energy consumption and our impact on the environment.

If you would like more information on this huge, AI-driven energy reduction, please click here.

 

If you have more questions about high performance computing or are interested in our HPC solutions, please contact us by phone or email.  

Sales@serverfactory.co.uk

+44 (0)20 3432 5270

Posted in News By Server Factory

Intel Release Broad Range of New Workload-optimized Chips

Wednesday, 3 April 2019 13:12:50 Europe/London


On Tuesday 2nd April 2019, at a data-center innovation event in California, Intel announced its widest-ever portfolio of Intel Xeon processors.  Intel launched over 50 new processors, many of them with data-center optimization in mind, as well as other new chips, memory and storage solutions.  This new product portfolio reflects Intel’s shift from being a “PC-centric” company to one more focused on data center technologies, which is no surprise given the ever-increasing demand for systems that are optimized for AI workloads, cloud computing and 5G networking.


These new 2nd-generation Xeon Scalable processors include support for Intel DL Boost (Intel Deep Learning Boost) technology, which is designed to accelerate AI inference workloads such as image recognition in data-center, enterprise and edge-computing environments.  Intel says it has worked closely with its partners to optimize DL Boost so that users can get the maximum benefit from the technology. In fact, the real-world value of DL Boost has already been demonstrated: Microsoft has reported a 3.4x boost in image-recognition performance, Target a 4.4x boost in machine-learning inference and JD.com a 2.4x boost in text recognition, all since implementing Intel DL Boost technology.
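Intel has not published the low-level details in this announcement, but the general idea behind accelerating inference this way is to carry out the arithmetic in compact 8-bit integers instead of 32-bit floats, so each instruction can process more values at once. The NumPy sketch below shows that quantize-compute-rescale pattern in a heavily simplified form; it is a conceptual illustration, not Intel’s DL Boost implementation.

# Simplified illustration of 8-bit (INT8) inference arithmetic.
# Real libraries handle calibration, zero points and saturation far more carefully.
import numpy as np

def quantize(x, scale):
    """Map float values to signed 8-bit integers using a per-tensor scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

weights = np.random.randn(64, 128).astype(np.float32)
activations = np.random.randn(128).astype(np.float32)

w_scale = np.abs(weights).max() / 127
a_scale = np.abs(activations).max() / 127

w_q = quantize(weights, w_scale)
a_q = quantize(activations, a_scale)

# Accumulate in 32-bit integers, then rescale back to floats.
int_result = w_q.astype(np.int32) @ a_q.astype(np.int32)
approx = int_result * (w_scale * a_scale)

exact = weights @ activations
print("max error vs. float32:", np.abs(approx - exact).max())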


Intel’s new server-class flagship processor is now undoubtedly the Xeon Scalable Platinum 9200, with 56 cores and 12 memory channels.  Intel says this processor is “designed to deliver leadership socket-level performance and unprecedented DDR memory bandwidth in a wide variety of high-performance computing (HPC) workloads, AI applications and high density infrastructure”.
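To put those 12 memory channels in perspective, peak memory bandwidth per socket can be estimated as channels × transfer rate × bytes per transfer. The transfer rate used in the quick calculation below is an assumption for the sake of the arithmetic, not a quoted specification:

# Rough peak-bandwidth estimate for a 12-channel memory controller.
channels = 12
transfers_per_sec = 2.933e9    # assumed DDR4-2933 (2,933 MT/s)
bytes_per_transfer = 8         # one 64-bit DDR channel moves 8 bytes per transfer

bandwidth = channels * transfers_per_sec * bytes_per_transfer
print(f"~{bandwidth / 1e9:.0f} GB/s per socket")   # ~282 GB/s with these assumptions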


Other new features of this generation of Xeon Scalable processors include Intel Turbo Boost Technology 2.0, which allows a maximum boost clock of 4.4GHz, enhanced Intel Infrastructure Management Technologies and, importantly, support for Intel’s Optane DC persistent memory.

 

Intel has also updated its Xeon D product family with the new 1600 series, which builds on the Xeon D-1500 by providing higher clock speeds in a familiar package.  The new processors offer base frequencies 1.2 to 1.5 times those of the Xeon D-1500 parts, with a boost clock of up to 3.2GHz via Intel Turbo Boost Technology 2.0, thanks to extra TDP headroom.  These new chips are mainly targeted at edge networking, mid-range storage solutions and deployments where space is a constraint, such as localised cloud infrastructure.


Alongside the new Xeon CPUs, Intel also launched its Optane DC persistent memory DIMMs, which it describes as delivering “breakthrough storage-class memory capacity to the Intel Xeon Scalable Platform”.  These new memory modules can be installed alongside traditional DRAM in a standard DDR4 slot. Optane DC persistent memory is quite revolutionary because it provides a persistent memory tier, allowing data to persist in main system memory rather than on disks.
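In practical terms, the programming model this enables looks roughly like the sketch below: a file on a persistent-memory-aware filesystem is mapped straight into the application’s address space and updated with ordinary memory reads and writes. This is a generic memory-mapping illustration using an assumed mount point (/mnt/pmem), not Intel’s own persistent-memory libraries.

# Generic sketch of the persistent-memory programming model via mmap.
# "/mnt/pmem/counter.bin" is an assumed path on a DAX-mounted filesystem;
# any ordinary file also works, it just won't be byte-addressable persistent memory.
import mmap
import os

path = "/mnt/pmem/counter.bin"
size = 4096

# Create the file once at the required size.
if not os.path.exists(path):
    with open(path, "wb") as f:
        f.write(b"\x00" * size)

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    counter = int.from_bytes(mem[0:8], "little")
    mem[0:8] = (counter + 1).to_bytes(8, "little")   # update the data in place, in memory
    mem.flush()                                      # make the update durable
    mem.close()

print("counter updated in mapped memory")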


Alongside this, Intel also launched a new dual-port SSD, the Optane DC SSD D4800X (NVMe), which it claims delivers up to 9x faster read latency than dual-port NAND SSDs.  Intel also released the SSD DC D5-P4326, one of its new “ruler” SSDs, so named because of their long, thin form factor. These ruler-shaped SSDs come in 15.36TB and 30.72TB capacities, with the smaller also available in a more conventional 2.5-inch U.2 form factor.  These enterprise-class SSDs finally make it possible to fit up to 1PB of storage in a 1U server design, enabling a whole new level of storage density.
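The 1PB figure is easy to sanity-check: a 1U chassis built for these long “ruler” drives holds roughly 32 of them front-to-back (the exact slot count depends on the chassis, so treat it as an assumption), and at 30.72TB each that works out to about a petabyte:

# Sanity check on the "1PB in 1U" figure.
drives_per_1u = 32        # assumed slot count for a ruler-format 1U chassis
capacity_tb = 30.72       # per-drive capacity of the larger model

total_tb = drives_per_1u * capacity_tb
print(f"{total_tb:.0f} TB  (~{total_tb / 1000:.2f} PB)")   # 983 TB, ~0.98 PB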

 

If you would like more information about Intel's newly released products, click here.

 

If you are interested in purchasing a solution featuring these new products, please get in touch with our sales team: Sales@serverfactory.co.uk +44 (0)20 3432 5270


 

Posted in News By Server Factory
