Techniques for training large neural networks


Pipeline parallelism splits a model “vertically” by layer. It’s also possible to “horizontally” split certain operations within a layer, which is usually called Tensor Parallel training. For many modern models (such as the Transformer), the computation bottleneck is multiplying an activation batch matrix with a large weight matrix. Matrix multiplication can be thought of as dot products between pairs of rows and columns; it’s possible to compute independent dot products on different GPUs, or to compute parts of each dot product on different GPUs and sum up the results. With either strategy, we can slice the weight matrix into even-sized “shards,” host each shard on a different GPU, and use that shard to compute the relevant part of the overall matrix product before later communicating to combine the results.
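As a concrete illustration, here is a minimal single-process sketch (in PyTorch, with assumed function names) of the two sharding strategies: splitting the weight matrix by columns so each shard produces an independent slice of the output, and splitting it by rows so each shard produces a partial sum over the whole output. On real hardware each shard would live on its own GPU, and the final concatenation or summation would be an all-gather or all-reduce collective.

```python
# Sketch only: each "shard" below is a list slot standing in for a GPU,
# and the concat/sum at the end stands in for a communication collective.
import torch

def column_parallel_matmul(x, w, n_shards):
    # Independent dot products: split W by columns; each shard computes a
    # disjoint slice of the output, which is concatenated afterwards.
    w_shards = w.chunk(n_shards, dim=1)
    partial_outputs = [x @ w_i for w_i in w_shards]   # one per GPU
    return torch.cat(partial_outputs, dim=1)          # all-gather in practice

def row_parallel_matmul(x, w, n_shards):
    # Partial dot products: split W by rows and X by columns; each shard
    # computes a partial sum over the full output, summed afterwards.
    w_shards = w.chunk(n_shards, dim=0)
    x_shards = x.chunk(n_shards, dim=1)
    partial_outputs = [x_i @ w_i for x_i, w_i in zip(x_shards, w_shards)]
    return sum(partial_outputs)                       # all-reduce in practice

x = torch.randn(8, 64)     # activation batch
w = torch.randn(64, 256)   # large weight matrix
assert torch.allclose(column_parallel_matmul(x, w, 4), x @ w, atol=1e-5)
assert torch.allclose(row_parallel_matmul(x, w, 4), x @ w, atol=1e-5)
```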

One example is Megatron-LM, which parallelizes matrix multiplications within the Transformer’s self-attention and MLP layers. PTD-P uses tensor, data, and pipeline parallelism; its pipeline schedule assigns multiple non-consecutive layers to each device, reducing bubble overhead at the cost of more network communication.
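Below is a rough single-process sketch of the Megatron-LM-style sharding pattern for the Transformer MLP, written with assumed names rather than Megatron-LM’s actual API: the first linear layer is split column-wise and the second row-wise, so each shard can apply the nonlinearity locally and a single all-reduce combines the outputs at the end.

```python
# Sketch only: the loop over shards simulates work that would run on
# separate GPUs, and the final sum stands in for an all-reduce.
import torch
import torch.nn.functional as F

def sharded_transformer_mlp(x, w1, w2, n_shards):
    w1_shards = w1.chunk(n_shards, dim=1)   # column-parallel: [d_model, d_ff/n]
    w2_shards = w2.chunk(n_shards, dim=0)   # row-parallel:    [d_ff/n, d_model]
    partials = []
    for w1_i, w2_i in zip(w1_shards, w2_shards):
        h_i = F.gelu(x @ w1_i)              # local to one GPU, no communication
        partials.append(h_i @ w2_i)         # partial output on that GPU
    return sum(partials)                    # all-reduce across GPUs in practice

x = torch.randn(8, 64)        # [batch, d_model]
w1 = torch.randn(64, 256)     # [d_model, d_ff]
w2 = torch.randn(256, 64)     # [d_ff, d_model]
ref = F.gelu(x @ w1) @ w2
assert torch.allclose(sharded_transformer_mlp(x, w1, w2, 4), ref, atol=1e-4)
```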

Sometimes the input to the network can be parallelized across a dimension with a high degree of parallel computation relative to cross-communication. Sequence parallelism is one such idea, where an input sequence is split across time into multiple sub-examples, proportionally decreasing peak memory consumption by allowing the computation to proceed with more granularly-sized examples.
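The sketch below (an assumed setup, not tied to any particular implementation) illustrates the idea for a position-wise layer: the sequence is split along the time axis into chunks, each of which could be processed on a separate device, so peak activation memory scales with the chunk length rather than the full sequence length. Operations that mix positions, such as attention, would additionally require communication between the chunks.

```python
# Sketch only: the chunks are processed in a list comprehension here, but
# each chunk could live on its own device in a real setup.
import torch
import torch.nn as nn

def sequence_parallel_forward(x, layer, n_chunks):
    # x: [batch, seq_len, d_model]
    chunks = x.chunk(n_chunks, dim=1)               # split across time
    outputs = [layer(chunk) for chunk in chunks]    # one chunk per device
    return torch.cat(outputs, dim=1)                # gather along time

layer = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
x = torch.randn(2, 1024, 64)                        # [batch, seq, d_model]
ref = layer(x)
out = sequence_parallel_forward(x, layer, n_chunks=8)
assert torch.allclose(out, ref, atol=1e-5)
```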
