Hey GPU, What’s Up with My Matrix? | by Thushan Ganegedara | Jun, 2023


Matrix multiplication: the holy grail of deep neural networks and trendy language-understanding behemoths. As MLEs or data scientists, our fingers are too quick to type tf.matmul or torch.matmul and we never look back. But don't tell me you've never had a millisecond of infatuation wondering what might be happening to that matrix when it enters the GPU! If you did, you're in the right place. Join me on a journey through the fascinating intricacies inside a GPU.

I'll explain how these compute powerhouses crunch the numbers. You'll learn three little-known, impressive things GPUs do when they come face-to-face with matrices. By the end of this blog post, you'll have a good understanding of how matrix multiplication works inside GPUs.

GEMM, or generalized matrix multiplication, is the kernel that gets executed when GPUs perform matrix multiplication.

C = a (A.B) + b C

Here, a and b are scalars, A is an MxK matrix, B is a KxN matrix, and thus C is an MxN matrix. It's as simple as that! You might wonder why that trailing addition exists. It turns out this is a pretty common pattern in neural networks (e.g. adding a bias, applying ReLU, adding residual connections).
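To make the formula concrete, here is a minimal CPU reference sketch of the same GEMM. The function name and the row-major pointer indexing are my own choices for illustration, not anything CUDA-specific; the point is the trailing scale-and-add around the plain product.

// Reference GEMM on the CPU: C = a * (A x B) + b * C
// A is M x K, B is K x N, C is M x N, all stored row-major.
void gemm_reference(int M, int N, int K, float a, float b,
                    const float *A, const float *B, float *C) {
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = a * acc + b * C[i * N + j];   // the "trailing addition"
        }
}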

If you were asked to write a matrix multiplication algorithm from first principles, here's what you'd do (unless you were gifted with a GPU in lieu of a brain; wouldn't that save money for an MLE!).

for (int i = 0; i < M; ++i)
    for (int j = 0; j < N; ++j)
        for (int k = 0; k < K; ++k)
            C[i][j] += A[i][k] * B[k][j];

Here's an animated visual that shows you what this does.

Inner-product based multiplication of two matrices (recreated by the author; source of inspiration: https://www.adityaagrawal.net/blog/architecture/matrix_multiplication)

But did you know GPUs despise this implementation 🤔? To understand why that's the case, you need to understand the GPU memory architecture.

For all comparisons and specifications, I'll be using the Nvidia A100 GPU.

A GPU has three main memory levels:

  • Global memory or HBM (what you typically refer to as GPU memory, and what you see when you run nvidia-smi)
  • Shared memory (a local memory that is dedicated to a single streaming multiprocessor [or SM] and shared between the threads running on that SM)
  • Registers (individually allocated to threads to carry out their workload)

This is what it looks like:

The typical memory hierarchy of a GPU (L0/L1/L2 caches ignored for simplicity)

The first thing to note is that shared memory (referred to as SRAM from now on) is much smaller than the HBM, let alone the registers. So your matrix is not going to fit in there (in most cases). If we go back to our animation, for a single row of A, all the columns of B need to be retrieved, and the process repeats for every row of A. This means the GPU needs to do many, many reads to compute the output. And the HBM (~1.5 TB/s) is an order of magnitude slower than the SRAM (~19 TB/s).

To put that in numbers, say you want to multiply a 10x20 and a 20x30 matrix; you need to read the columns of B 10x30 = 300 times. Is there a better way to do this?
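As a back-of-the-envelope sketch of that count (my own illustration of the naive access pattern, not how the hardware actually schedules reads):

int M = 10, K = 20, N = 30;
int column_fetches = M * N;               // each of B's 30 columns is fetched once per row of A: 300
int elements_read  = column_fetches * K;  // 300 fetches x 20 elements = 6000 element reads from B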

Turns out a simple trick can go a long way here! Simply flip the order of the loops, so that k becomes the outermost loop. And you're done! 😮

for (int k = 0; k < K; ++k)
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
            C[i][j] += A[i][k] * B[k][j];

We didn't touch the actual computation, just the order of the loops, so we should get the same result as before. Here's what the matrix multiplication looks like now!

Outer-product based multiplication of two matrices (recreated by the author; source of inspiration: https://www.adityaagrawal.net/blog/architecture/matrix_multiplication)

You see, we only bring in one column of A and one row of B at a time and never look back. This requires far fewer reads than the original implementation. The only difference is that we were computing the inner product between two vectors before; now we're computing an outer product.

The difference between the inner product and the outer product, shown in green for two vectors (blue and yellow).
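If the distinction is new to you, here's a tiny sketch of both products. The vectors and their sizes are made up purely for illustration.

// Inner product: two length-3 vectors collapse into a single scalar.
float u[3] = {1, 2, 3}, v[3] = {4, 5, 6};
float inner = 0.0f;
for (int k = 0; k < 3; ++k)
    inner += u[k] * v[k];                // 1*4 + 2*5 + 3*6 = 32

// Outer product: a length-2 and a length-3 vector expand into a 2x3 matrix.
float x[2] = {1, 2}, y[3] = {4, 5, 6};
float outer[2][3];
for (int i = 0; i < 2; ++i)
    for (int j = 0; j < 3; ++j)
        outer[i][j] = x[i] * y[j];       // one such rank-1 update is what each k iteration accumulates into C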

But still, we need the whole of C in SRAM, and that can be too large to fit. What does CUDA do then? That brings us to the second trick.

Not to worry! I'm not going to blast you with any complex mathematics or Leetcode algorithms. The main thing to keep in mind is that a matrix is a 2D layout of individual tiles. The following animation does justice to what I'm trying to explain.

You can iterate over each block of A and B and still compute the exact answer for C's corresponding block

The result for the green block 💚 comes from the light blue strip of A 💙 and the light yellow strip of B 💛. Taking this a step further, to compute the output you can bring in one block from A's strip and one block from B's strip at a time, compute the output, and accumulate the result in the green box.

This gives us a flexible framework where we can load arbitrarily sized blocks (or tiles) of A and B and still compute the final answer. We don't have to stop there; we can keep dividing the problem recursively into ever smaller problems: the matrix is broken into tiles, tiles are broken into fragments, and fragments into individual values.
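Here's a rough sketch of that idea with an explicit K-tile as well. The tile sizes and loop names are my own, it assumes M, N and K are exact multiples of the tile sizes, and a real kernel would stage these tiles through shared memory rather than looping on the CPU.

for (int m = 0; m < M; m += Mtile)               // pick a tile of C
    for (int n = 0; n < N; n += Ntile)
        for (int k = 0; k < K; k += Ktile)       // walk along A's strip and B's strip one tile at a time
            for (int i = m; i < m + Mtile; ++i)  // multiply the two tiles and accumulate into C's tile
                for (int j = n; j < n + Ntile; ++j)
                    for (int kk = k; kk < k + Ktile; ++kk)
                        C[i][j] += A[i][kk] * B[kk][j];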

Using the tiling approach, the problem can be broken down recursively

And this lends itself nicely to the way kernels are executed on a GPU. There are three layers to kernel execution in a GPU. For simplicity, we'll say an SM runs a single thread block at a time (although in practice they execute several concurrently, to reduce something known as the tail effect).

  • Threads
  • Warps (a collection of 32 threads)
  • Thread blocks (a collection of several warps)

The exact number of threads in a thread block depends on the specific architecture. For example, an A100 has the following specifications.

  • Maximum of 2048 threads per SM
  • Maximum of 1024 threads per block
  • Maximum of 32 thread blocks per SM
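A bit of quick arithmetic with those limits, just to give a feel for the numbers (nothing here is queried from the hardware):

int threads_per_sm    = 2048;
int threads_per_block = 1024;
int warp_size         = 32;
int warps_per_sm      = threads_per_sm / warp_size;         // 64 warps can be resident on one SM
int warps_per_block   = threads_per_block / warp_size;      // a maximal thread block is 32 warps
int maximal_blocks    = threads_per_sm / threads_per_block; // only 2 such maximal blocks fill an SM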

Sidebar #1: The magic of powers of two

Going back to tiling, it has been found (heuristically) that a matrix tile of size 256x128 per thread block gives reasonable efficiency for most problems. Therefore it's a common tile size used by CUDA.

You might have heard of the best practice of keeping the batch size and hidden dimension size as powers of two. This is where that comes from! When your matrix dimensions are powers of two, the matrix divides fully into a set of tiles with no remainder. If not, your code becomes less efficient.

GPU computations are more efficient when your matrix dimensions are powers of two

What happens when they are not a power of two?

Sidebar #2: Tile quantization

What happens is an effect known as tile quantization. In other words, if you have a tile row dimension of 128 but your matrix has 257 elements in a row, you'll need not two but three tiles per row (i.e. 256 + 1). This is illustrated below.

Just because we had one extra element in the rows, we have to dedicate two whole thread blocks

The problem with this is that the thread block does the same amount of computation regardless of how much useful data resides in it. So you take the opportunity to do useful computation away from your GPU, leading to inefficiencies.
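Here's a back-of-the-envelope sketch of how much work the 257-element example wastes (the numbers are purely illustrative):

int tile_cols   = 128;
int matrix_cols = 257;
int tiles       = (matrix_cols + tile_cols - 1) / tile_cols;  // ceil(257 / 128) = 3 tiles per row
int padded_cols = tiles * tile_cols;                          // 384 columns' worth of work gets launched
double useful   = (double)matrix_cols / padded_cols;          // ~0.67, so roughly a third of the work is wasted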

A similar effect is known as wave quantization, where the matrix is over-sized and the SMs collectively cannot fit it at once. Then the GPU needs to do the computation in two "waves". However, this is less of a concern for modern GPUs, as they leverage concurrency to reduce wave quantization.
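The same ceil-divide logic applies one level up. As a rough sketch, assuming 108 SMs (the A100's count) and one thread block resident per SM at a time:

int num_sms     = 108;                                    // SMs on an A100
int total_tiles = 110;                                    // e.g. an 11 x 10 grid of output tiles
int waves       = (total_tiles + num_sms - 1) / num_sms;  // 2 waves
// The second wave runs with only 2 of the 108 SMs busy, which is the
// inefficiency that concurrency on modern GPUs helps to hide.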

Tile quantization happens when a thread block has to spill data partially; wave quantization happens when the SMs have to spill data.

The final trick is kernel fusion. More often than not, it's faster to do all the computations in one kernel than to have two kernels called one after the other. Why? Because one kernel needs to write its data to HBM and the other needs to read it back. We already talked about how slow that is. A better approach is to simply combine the two operations into one kernel.

As you can see here (I'm sure PyTorch has a similar glossary), there are many fused kernels offered through TensorFlow that combine commonly co-occurring operations into a single kernel. In code, it looks something like this:

for (int m = 0; m < M; m += Mtile)                // iterate over C's tiles along M
    for (int n = 0; n < N; n += Ntile)            // iterate over C's tiles along N
        for (int i = 0; i < Mtile; ++i)           // rows within the tile
            for (int j = 0; j < Ntile; ++j) {     // columns within the tile
                int row = m + i;
                int col = n + j;
                float tmp = 0;
                for (int k = 0; k < K; ++k)       // accumulate along K in a register
                    tmp += A[row][k] * B[k][col];
                // Do other things (while tmp is still in a register)
                C[row][col] = tmp;
            }

In other words, we hold on dearly to our tmp variable until we've finished all our computations. Only then do we write the result back to C.
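For instance, that "// Do other things" line could be a fused bias add and ReLU. A hedged sketch of the epilogue (the bias vector is a hypothetical addition of mine, not part of the original code):

// While tmp is still in a register, apply the epilogue before the single write to C.
tmp += bias[col];                   // fused bias add (hypothetical per-column bias)
tmp  = tmp > 0.0f ? tmp : 0.0f;     // fused ReLU
C[row][col] = tmp;                  // one write back to HBM instead of separate kernels re-reading and re-writing C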

That's it, folks. I hope this was an enjoyable tour through the weeds of a GPU. If you're interested in the audio-visual version, here's the link to my YouTube video.

To recap, we discussed three things that make GPUs really fast at matrix multiplication.

  • GPUs abandon the friendlier inner-product implementation of matmul and embrace the more read-efficient outer-product implementation
  • GPUs split the matrices into smaller blocks (and blocks into fragments) and split the compute load across thread blocks, warps and threads
  • GPUs employ kernel fusion to bring commonly co-occurring functionality together, improving GPU efficiency

If you enjoyed this story, feel free to subscribe to Medium; you will get notifications of fresh content from me, as well as unlock full access to thousands of quality stories from other authors.

Unless otherwise noted, all images are by the author.
