Intel working with Facebook to design its AI Deep Learning Xeon processor


Artificial Intelligence and Deep Learning are making their way into almost every corner of the tech industry, and the major tech giants have been busy building both into their products. At the Open Compute Project Global Summit 2019, Intel announced that it has partnered with Facebook to design the upcoming Cooper Lake 14nm Xeon processor family.

Jason Waxman, VP of Intel's Data Center Group and GM of its Cloud Platforms Group, says that Facebook, which has been working on AI and ML for years, is helping Intel incorporate the bfloat16 format into Cooper Lake's design. For those unfamiliar with it, bfloat16 is a 16-bit floating-point representation used in deep learning training. It helps speed up training in tasks like machine translation, image classification, recommendation engines, and speech recognition.
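To make the format concrete: a bfloat16 value is simply the top 16 bits of a 32-bit IEEE float (1 sign bit, the same 8 exponent bits, and 7 of the 23 mantissa bits). The sketch below shows the conversion in plain Python; it truncates the low bits for simplicity, whereas real hardware typically rounds to nearest-even.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Keep the top 16 bits of the float32 encoding (truncation sketch;
    hardware usually rounds to nearest-even instead)."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Re-expand bfloat16 to float32 by zero-filling the low 16 bits."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

# pi survives the round trip, but with only ~3 decimal digits of precision
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
print(approx)  # 3.140625
```

Because the exponent field is untouched, the conversion is just a bit shift, which is part of why the format is cheap to support in hardware.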

Because it keeps the same 8-bit exponent as 32-bit floating point, bfloat16 offers the same dynamic range at half the storage, which improves processor throughput. Intel plans to introduce new platforms with four- and eight-socket designs as well as an improved two-socket design. According to current speculation, the upcoming Xeon family in a four-socket server could offer up to 112 cores and 224 threads, which would let the platform address up to 12TB of DC Persistent Memory Modules (DCPMM).
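The dynamic-range point is what separates bfloat16 from the more familiar IEEE half-precision (float16) format, whose 5-bit exponent overflows around 65504. A minimal sketch, again using truncation rather than hardware rounding, shows that magnitudes far beyond float16's range remain finite in bfloat16:

```python
import math
import struct

def to_bfloat16(x: float) -> float:
    """Round-trip a value through bfloat16 by zeroing the low 16 bits
    of its float32 encoding (truncation sketch)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0] & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

big = 1e38                    # far beyond float16's max of ~65504
print(math.isfinite(to_bfloat16(big)))   # True: same 8-bit exponent as float32
print(to_bfloat16(big))                  # close to 1e38, with reduced precision
```

This is why training loops can often swap float32 activations and gradients for bfloat16 without the loss-scaling tricks float16 requires.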

“OCP is a vital organization that brings together a fast-growing community of innovators who are delivering greater choice, customization, and flexibility to IT hardware. As a founding member of this open source community, Intel is committed to delivering innovative products that help deploy infrastructure underlying the services that support the digital economy,” said Jason Waxman at the summit.

Notably, even with 112 cores and 224 threads, the Cooper Lake platform will still lag behind the 7nm-based AMD EPYC lineup, which features up to 128 cores and 256 threads on a dual-socket motherboard.

With AI, IoT, cloud, and network transformation all underway, the industry faces a massive amount of unstructured data. Both the Open Compute Project (OCP) and Intel are dedicated to designing hardware equipped to handle this unprecedented volume of data efficiently.

Besides this, Intel is also reported to be contributing its RSD 2.3 Rack Management Module code to the OCP community. For the past year, Intel has been working toward simple common standards for BIOS, BMC, and rack management software through active participation in the OCP System Firmware, OpenBMC, and OpenRMC firmware projects.

In the third quarter of 2019, Intel will release a family of OCP v3.0-compliant network interface controllers (NICs). The family will include controllers ranging from 1GbE to 100GbE, along with next-generation Intel Ethernet offering more features and better application performance.
