ThirdAI Training of Billion-Parameter Neural Networks on Commodity CPUs

August 30th, 2023

Via: AnyScale

Today, all widely used deep learning frameworks, such as TensorFlow and PyTorch, rely on specialized hardware accelerators, such as GPUs or TPUs, to train large neural networks in a feasible amount of time. However, these accelerators are extremely costly (roughly 3-10x more expensive than commodity CPUs in the cloud). Furthermore, they often consume a troubling amount of energy, raising concerns about the carbon footprint of deep learning. In addition, data scientists working in privacy-sensitive domains, such as healthcare, may be required to keep data on-premises rather than in the cloud, which can limit their access to such specialized computing devices for building models.

Motivated by these challenges, and by the goal of democratizing the power of deep learning in a sustainable manner, we at ThirdAI rejected the conventional wisdom that specialized hardware is essential for taking advantage of large-scale AI. To realize this vision, we built a new deep learning framework, called BOLT, for efficiently training large models on standard CPU hardware.
