Matrices and Derivatives: An Introduction
NOTE: This article has been translated from Farsi using llama3-70b-8192 and groq.
It is obvious from everything about me that I love mathematics; yet it was not until my second year of graduate school that I was forced to seriously pursue matrix calculus. It all started with a small 14-page booklet by an agricultural economics professor, which set me on my path.
Now, over 12 years later, differentiating these functions is no longer a challenging task. With tools like PyTorch and TensorFlow, you can differentiate these functions automatically and do much more. However, knowing some matrix calculus is still useful if you want to optimize your models (at the very least, it’s a good excuse for me to write about it).
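For instance, here is a minimal PyTorch sketch (the function and the dimensions are just illustrative): for $f(X) = \operatorname{tr}(X^{\top}X)$ the gradient is $2X$, and autograd recovers it without any hand-written derivative:

```python
import torch

# f(X) = trace(X^T X) = sum of the squared entries of X; its gradient is 2X.
X = torch.randn(3, 3, requires_grad=True)
f = torch.trace(X.T @ X)
f.backward()

# autograd's gradient matches the hand-derived one
print(torch.allclose(X.grad, 2 * X.detach()))  # True
```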
Functions with matrix inputs
Examples of functions with matrix inputs are plentiful. One of the most well-known is the density of the multivariate Gaussian distribution:

$$f(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right),$$

where $x, \mu \in \mathbb{R}^{d}$ and $\Sigma$ is the $d \times d$ covariance matrix.
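As a quick sanity check of this formula (the numbers below are just an arbitrary example), a direct implementation matches SciPy’s `multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, Sigma):
    """Direct implementation of the density above."""
    d = len(mu)
    diff = x - mu
    norm_const = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

mu = np.zeros(2)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.3, -0.7])

print(np.isclose(gaussian_pdf(x, mu, Sigma),
                 multivariate_normal(mu, Sigma).pdf(x)))  # True
```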
If we have a series of samples from this distribution and want to estimate the covariance matrix $\Sigma$, we’ll be dealing with functions that take matrices as input.
Another example is neural networks: most models contain weight matrices that need to be optimized, so we again encounter functions of these matrices.
Let’s go back to estimating the covariance matrix. We have $n$ samples of $x$, denoted by $x_1, \dots, x_n$. We want to find the $\Sigma$ that maximizes the likelihood of these samples. Assuming that the $x_i$ are independent and identically distributed (i.i.d.), we have:

$$L(\Sigma) = \prod_{i=1}^{n} f(x_i; \mu, \Sigma).$$
This function needs to be maximized with respect to its input, $\Sigma$. It is a function with a matrix input, and to find the optimal value we need to differentiate it with respect to $\Sigma$ and set the derivative equal to zero.
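As a preview of where that leads (a sketch, assuming the mean $\mu$ is known), the maximizer has the familiar closed form $\hat{\Sigma} = \frac{1}{n}\sum_{i}(x_i-\mu)(x_i-\mu)^{\top}$, and autograd confirms that the derivative of the log-likelihood is indeed zero there:

```python
import torch

torch.manual_seed(0)
n, d = 500, 3
mu = torch.zeros(d, dtype=torch.float64)
true_dist = torch.distributions.MultivariateNormal(mu, torch.eye(d, dtype=torch.float64))
samples = true_dist.sample((n,))          # n i.i.d. samples x_1, ..., x_n

# Closed-form maximizer (mean assumed known): average of the outer products.
diffs = samples - mu
Sigma_hat = (diffs.T @ diffs / n).requires_grad_(True)

# Log-likelihood of the samples as a function of Sigma.
log_lik = torch.distributions.MultivariateNormal(mu, Sigma_hat).log_prob(samples).sum()
log_lik.backward()

print(Sigma_hat.grad.abs().max())  # ~0: the derivative vanishes at the maximizer
```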
Matrix Calculus Without Tears and Bloodshed
If we want to differentiate these functions in a principled way, we’ll encounter tensors and tensor operations. But there’s a simpler way. This method is based on a clever perspective: “When computing derivatives, the layout of matrix elements is of secondary importance.”
With this perspective, we can ignore the layout of matrix elements when taking derivatives. Each matrix becomes one long vector, accompanied by two numbers that specify how the elements are laid out. For example:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} a_{11} \\ a_{21} \\ a_{12} \\ a_{22} \\ a_{13} \\ a_{23} \end{bmatrix} \text{ together with the dimensions } (2, 3).$$
If you’re careful, you’ll notice that nothing is lost in this new representation. We can reconstruct the original matrix from this vector and its dimensions.
You might ask, what’s the point of this? But I won’t answer that yet! First, I need to explain that concatenating the columns of a matrix is a mathematical operation called the vec operator.
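In code, vec is nothing more than a column-major flatten; a small NumPy sketch (with a made-up $3 \times 2$ matrix) shows both directions:

```python
import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])                    # a 3x2 matrix

v = A.flatten(order='F')                  # vec(A): stack the columns -> [1 2 3 4 5 6]
A_back = v.reshape(3, 2, order='F')       # nothing is lost: rebuild A from v and (3, 2)

print(v)                                  # [1 2 3 4 5 6]
print(np.array_equal(A, A_back))          # True
```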
Let’s get back to the question: when we represent matrices as vectors, differentiating with respect to matrices becomes differentiating with respect to vectors, which we’re familiar with from our second-year university mathematics.
Calculus of Matrix-Valued Functions
Let’s define a function $F: \mathbb{R}^{m \times n} \to \mathbb{R}^{p \times q}$. With both the input and the output passed through the vec operator, the derivative is just the familiar Jacobian of one vector with respect to another:

$$\frac{\partial F(X)}{\partial X} \;\equiv\; \frac{\partial \operatorname{vec}\!\big(F(X)\big)}{\partial \operatorname{vec}(X)^{\top}},$$

a $pq \times mn$ matrix whose $(i, j)$ entry is the derivative of the $i$-th element of $\operatorname{vec}(F(X))$ with respect to the $j$-th element of $\operatorname{vec}(X)$.
Now, let’s consider a simple case where we want to compute the derivative of a matrix with respect to itself:

$$\frac{\partial X}{\partial X} = \frac{\partial \operatorname{vec}(X)}{\partial \operatorname{vec}(X)^{\top}} = I_{mn},$$

where $I_{mn}$ is the identity matrix of dimension $mn \times mn$.
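We can check this identity numerically with `torch.autograd.functional.jacobian` (a sketch; the dimensions and the vec/un-vec helpers are just illustrative):

```python
import torch

m, n = 2, 3

def vec(M):
    return M.T.reshape(-1)                # column-stacking, as above

def F(x_vec):
    X = x_vec.reshape(n, m).T             # un-vec: rebuild the m x n matrix X
    return vec(X)                         # F(X) = X, so this returns vec(X) again

x = torch.randn(m * n, dtype=torch.float64)
J = torch.autograd.functional.jacobian(F, x)   # d vec(X) / d vec(X)^T

print(torch.allclose(J, torch.eye(m * n, dtype=torch.float64)))  # True: it is I_{mn}
```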
Let’s consider a more complex case. Assume we have a single-variable function $g: \mathbb{R} \to \mathbb{R}$ that we can apply to each element of the matrix $X$:

$$\big[G(X)\big]_{ij} = g(x_{ij}).$$

What is $\frac{\partial G(X)}{\partial X}$? With similar calculations, we get:

$$\frac{\partial G(X)}{\partial X} = \frac{\partial \operatorname{vec}\!\big(G(X)\big)}{\partial \operatorname{vec}(X)^{\top}} = \operatorname{diag}\!\big(g'(x_{11}), g'(x_{21}), \dots, g'(x_{mn})\big).$$

This means that the resulting matrix is non-zero only on its diagonal, and those diagonal elements are the derivatives of $g$ evaluated at the elements of $X$.
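The same numerical check works here (a sketch with $g = \sin$, so that $g' = \cos$):

```python
import torch

m, n = 2, 3

def vec(M):
    return M.T.reshape(-1)

def G_vec(x_vec):
    X = x_vec.reshape(n, m).T
    return vec(torch.sin(X))              # g = sin, applied to each element of X

x = torch.randn(m * n, dtype=torch.float64)
J = torch.autograd.functional.jacobian(G_vec, x)

# The Jacobian is diagonal, with g'(x_ij) = cos(x_ij) on the diagonal.
print(torch.allclose(J, torch.diag(torch.cos(x))))  # True
```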
Finally, note that differentiation with respect to matrices is a linear operator; scalar multiplication and addition behave in the usual way:

$$\frac{\partial \big(\alpha F(X) + \beta G(X)\big)}{\partial X} = \alpha\, \frac{\partial F(X)}{\partial X} + \beta\, \frac{\partial G(X)}{\partial X}.$$
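A quick numerical confirmation of linearity (a sketch with arbitrary scalars and elementwise functions):

```python
import torch
from torch.autograd.functional import jacobian

a, b = 2.0, -3.0
x = torch.randn(6, dtype=torch.float64)

J_combined = jacobian(lambda t: a * torch.sin(t) + b * torch.exp(t), x)
J_linear = a * jacobian(torch.sin, x) + b * jacobian(torch.exp, x)

print(torch.allclose(J_combined, J_linear))  # True
```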
In this post, we saw that if we temporarily ignore the layout of matrix elements, we can arrive at a convenient definition of matrix differentiation. We used the vec operator to concatenate the columns of a matrix into one long vector. In the next post, I’ll delve deeper into matrix calculus.