
Multinomial Logistic Regression Notes

Terminology and variables

$n$: number of features

$m$: size of the training set

$K$: number of classes (we think of the classes as being named $1, 2, \ldots, K$).

$\boldsymbol{x}$: input vector, of size $n+1$

$\boldsymbol{y}$: one-hot target vector, a vector of all zeroes except for one 1 in the target class slot

$\boldsymbol{\hat{y}}$: predicted target/output variable, always a probability vector; we interpret each entry as the probability that the input is predicted to be the corresponding class.

$\boldsymbol{x}^{(i)}, \boldsymbol{y}^{(i)}$: the $i$'th training example's feature vector and target vector

$\boldsymbol{w}_1, \ldots, \boldsymbol{w}_K$: weight vectors, one per class, each of size $n+1$.

For vectorization, we also have:

$\boldsymbol{X}$: matrix of input feature values, of size $(m \times (n+1))$, including a column of all ones on the far left for the bias term/feature.

$\boldsymbol{Y}$: matrix of true target labels as one-hot vectors; each vector is a row of this matrix, which has size $(m \times K)$.

$\boldsymbol{\hat{Y}}$: matrix of predicted labels, as probability vectors, of size $(m \times K)$.

$\boldsymbol{W}$: weight matrix of size $((n+1) \times K)$

Each column of this matrix corresponds to a “regular” weight vector as in binary logistic regression, except that now there are $K$ of these vectors, one in each column of the matrix.

$$\boldsymbol{W} = \begin{bmatrix} \uparrow & \uparrow & & \uparrow \\ \boldsymbol{w}_1 & \boldsymbol{w}_2 & \cdots & \boldsymbol{w}_K \\ \downarrow & \downarrow & & \downarrow \end{bmatrix}$$
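To make these shapes concrete, here is a minimal NumPy sketch (our own illustration, not part of the handout) that builds $\boldsymbol{X}$ with its bias column and the one-hot matrix $\boldsymbol{Y}$ from integer labels. The toy data and names (X_raw, labels) are hypothetical, and the labels are 0-indexed as NumPy requires, whereas the handout names the classes $1, \ldots, K$.

```python
import numpy as np

# Hypothetical toy data: m = 4 examples, n = 2 features, K = 3 classes.
X_raw = np.array([[0.5, 1.2],
                  [2.0, 0.3],
                  [1.1, 1.1],
                  [0.0, 2.5]])          # shape (m, n)
labels = np.array([0, 2, 1, 2])         # 0-indexed class labels, shape (m,)

m, n = X_raw.shape
K = 3

# X: prepend a column of ones for the bias term -> shape (m, n+1)
X = np.hstack([np.ones((m, 1)), X_raw])

# Y: one-hot matrix of shape (m, K); row i has a 1 in column labels[i]
Y = np.zeros((m, K))
Y[np.arange(m), labels] = 1

# W: weight matrix of shape (n+1, K), initialized to zeros here
W = np.zeros((n + 1, K))

print(X.shape, Y.shape, W.shape)        # (4, 3) (4, 3) (3, 3)
```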


Model:

Define $z_k = \boldsymbol{w}_k \cdot \boldsymbol{x}$ for $k = 1, \ldots, K$, and let $\boldsymbol{z} = [z_1, \ldots, z_K]$. (So this is a vector of dot products.)

Then define

$$\begin{align*} f(\boldsymbol{x}) = \text{softmax}(\boldsymbol{z}) &= \dfrac{\exp(\boldsymbol{z})}{\displaystyle \sum_{k=1}^K \exp(z_k)} = \left[ \dfrac{\exp(z_1)}{\displaystyle \sum_{k=1}^K \exp(z_k)}, \dfrac{\exp(z_2)}{\displaystyle \sum_{k=1}^K \exp(z_k)}, \ldots, \dfrac{\exp(z_K)}{\displaystyle \sum_{k=1}^K \exp(z_k)} \right] \\ &= \dfrac{1}{\displaystyle \sum_{k=1}^K \exp(z_k)} \left[ \exp(z_1), \exp(z_2), \ldots, \exp(z_K) \right] \end{align*}$$

where $\exp(x)$ is the exponential function, $e^x$.

The softmax function accepts a vector of numbers and does two things to this vector:

  • Each component in the vector is exponentiated, which always results in a positive number.
  • This vector is then normalized, by dividing each component by the sum of all the components. This preserves the relative sizes of the components but forces the entries to sum to 1.
  • Thus, we can interpret the output of the softmax function as a probability vector.

Note that all the elements of $f(\boldsymbol{x})$ will sum to 1, because of the way the softmax function is defined.
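A minimal NumPy sketch of softmax applied to a single vector $\boldsymbol{z}$ (our own illustration; subtracting $\max(\boldsymbol{z})$ before exponentiating is a standard numerical-stability trick that leaves the result unchanged, though the handout does not discuss it):

```python
import numpy as np

def softmax(z):
    """Softmax of a 1-D vector: exponentiate, then normalize to sum to 1."""
    e = np.exp(z - np.max(z))   # shift by max(z) for numerical stability
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
print(softmax(z))               # approximately [0.659 0.242 0.099]
print(softmax(z).sum())         # 1.0
```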

And we define $\boldsymbol{\hat{y}} = f(\boldsymbol{x})$, and so

$$\begin{gather*} \boldsymbol{\hat{y}} = [\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_K], \quad \text{where} \\ \hat{y}_k = \dfrac{\exp(z_k)}{\displaystyle \sum_{j=1}^K \exp(z_j)} = \dfrac{\exp(\boldsymbol{w}_k \cdot \boldsymbol{x})}{\displaystyle \sum_{j=1}^K \exp(\boldsymbol{w}_j \cdot \boldsymbol{x})} \end{gather*}$$

where we can interpret $\hat{y}_k$ as the probability that the prediction is class $k$.

Vectorized version:

We can alternatively write $\boldsymbol{z} = [z_1, \ldots, z_K] = [\boldsymbol{w}_1 \cdot \boldsymbol{x}, \ldots, \boldsymbol{w}_K \cdot \boldsymbol{x}] = \boldsymbol{x}\boldsymbol{W}$, and therefore write $\boldsymbol{\hat{y}} = f(\boldsymbol{x}) = \text{softmax}(\boldsymbol{x}\boldsymbol{W})$. This is how to predict a single example. If we want to predict all the examples at once, we can define:

$$\boldsymbol{Z} = \boldsymbol{X}\boldsymbol{W}$$

and

$$\boldsymbol{\hat{Y}} = \text{softmax}(\boldsymbol{Z})$$

where the softmax function is applied across the rows of $\boldsymbol{Z}$. Note how $\boldsymbol{Z}$ and $\boldsymbol{\hat{Y}}$ are both $(m \times K)$ matrices, the same size as $\boldsymbol{Y}$. Each row of $\boldsymbol{\hat{Y}}$ is a probability vector, corresponding to the prediction function $\boldsymbol{\hat{y}} = f(\boldsymbol{x})$ above.
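A hedged sketch of this vectorized prediction step, assuming $\boldsymbol{X}$ is $(m \times (n+1))$ and $\boldsymbol{W}$ is $((n+1) \times K)$ as defined above; the helper names are ours:

```python
import numpy as np

def softmax_rows(Z):
    """Apply softmax across each row of Z, so every row becomes a probability vector."""
    E = np.exp(Z - np.max(Z, axis=1, keepdims=True))   # per-row stability shift
    return E / np.sum(E, axis=1, keepdims=True)

def predict(X, W):
    Z = X @ W                   # (m, K) matrix of dot products
    return softmax_rows(Z)      # (m, K); each row sums to 1
```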

Loss function:

We use what is called the cross-entropy loss function:

$$L(\boldsymbol{\hat{y}}, \boldsymbol{y}) = -\sum_{k=1}^K y_k \log \hat{y}_k$$

Because $\boldsymbol{y}$ is a one-hot vector, only one of the $y_k$ terms in the formula is 1; all the rest are zeros.

Denote the $y_k$ that is 1 by $y_c$ ($c$ standing for "correct," meaning the "correct class"). We can therefore simplify the formula:

$$\begin{align*} L(\boldsymbol{\hat{y}}, \boldsymbol{y}) &= -\sum_{k=1}^K y_k \log \hat{y}_k = -y_c \log \hat{y}_c = -\log \hat{y}_c \\ &= -\log \dfrac{\exp(\boldsymbol{w}_c \cdot \boldsymbol{x})}{\displaystyle \sum_{k=1}^K \exp(\boldsymbol{w}_k \cdot \boldsymbol{x})} \end{align*}$$
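For a quick numeric check (using the natural logarithm): with $K = 3$, a true label $\boldsymbol{y} = [0, 1, 0]$, and a prediction $\boldsymbol{\hat{y}} = [0.2, 0.7, 0.1]$, the loss is $L = -\log 0.7 \approx 0.357$; a more confident correct prediction such as $\boldsymbol{\hat{y}} = [0.05, 0.9, 0.05]$ gives the smaller loss $-\log 0.9 \approx 0.105$.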

Vectorized version:

We can rewrite the loss function as

$$L(\boldsymbol{\hat{y}}, \boldsymbol{y}) = -\sum_{k=1}^K y_k \log \hat{y}_k = -\boldsymbol{y} \cdot \log \boldsymbol{\hat{y}}$$

This is the negative dot product of $\boldsymbol{y}$ and a vector obtained by taking the logarithm of each entry in $\boldsymbol{\hat{y}}$. Note that because $\boldsymbol{y}$ is a one-hot vector, this dot product will be adding up a lot of zeroes, except for the entry in $\log \boldsymbol{\hat{y}}$ that corresponds to the "correct class." (This formula will rarely be used on its own, but it is useful for the vectorized cost function definition below.)
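A one-line NumPy version of this dot-product form, assuming y is a one-hot vector and y_hat is a probability vector (function name ours):

```python
import numpy as np

def cross_entropy_loss(y_hat, y):
    """Cross-entropy loss for one example: -y . log(y_hat), with y one-hot."""
    return -np.dot(y, np.log(y_hat))

y     = np.array([0.0, 1.0, 0.0])     # one-hot true label
y_hat = np.array([0.2, 0.7, 0.1])     # predicted probability vector
print(cross_entropy_loss(y_hat, y))   # -log(0.7), about 0.357
```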

Cost function:

The cost function still calculates the average loss over all training examples:

$$\begin{align*} J(\boldsymbol{w}_1, \ldots, \boldsymbol{w}_K) &= \frac{1}{m}\sum_{i=1}^m L(\boldsymbol{\hat{y}}^{(i)}, \boldsymbol{y}^{(i)}) = -\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K y^{(i)}_k \log \hat{y}^{(i)}_k \end{align*}$$

Vectorized version:

This can be vectorized in a number of different ways.

$$J(\boldsymbol{W}) = -\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K \boldsymbol{Y}_{i,k} \log \boldsymbol{\hat{Y}}_{i,k}$$

The formula above uses the $\boldsymbol{Y}$ and $\boldsymbol{\hat{Y}}$ matrices defined earlier; the subscript $i,k$ refers to the $i$'th row and the $k$'th column of the matrix. Note that the formula above is not a matrix multiplication (at least not in the traditional sense); the formula multiplies the elements of the two matrices element by element. This is sometimes denoted with the $\odot$ symbol:

$$\begin{align*} J(\boldsymbol{W}) &= -\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K \boldsymbol{Y}_{i,k} \log \boldsymbol{\hat{Y}}_{i,k} \\ &= -\frac{1}{m}\sum_{i=1}^m \sum_{k=1}^K (\boldsymbol{Y} \odot \log \boldsymbol{\hat{Y}})_{i,k} \\ &= -\frac{1}{m}\,\text{sum}(\boldsymbol{Y} \odot \log \boldsymbol{\hat{Y}}) \end{align*}$$

where the “sum” notation just means add up all the entries in the resulting matrix (this can be done in one line with np.sum in Numpy). The element-by-element product can be obtained by just using * in Numpy rather than @.
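A sketch of this vectorized cost in NumPy, assuming Y and Y_hat are the $(m \times K)$ matrices defined earlier (function name ours):

```python
import numpy as np

def cost(Y_hat, Y):
    """Average cross-entropy cost: -(1/m) * sum over all entries of Y * log(Y_hat)."""
    m = Y.shape[0]
    return -np.sum(Y * np.log(Y_hat)) / m   # * is element-wise; @ would be matrix multiplication
```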

Gradient descent:

First we find the gradient of the cost function:

$$\frac{\partial}{\partial w_{j,k}} J(\boldsymbol{w}_1, \ldots, \boldsymbol{w}_K) = \frac{1}{m} \sum_{i=1}^m \left( \hat{y}_k^{(i)} - y^{(i)}_k \right) x^{(i)}_j \qquad \text{for } j = 0, \ldots, n \quad \text{and} \quad k = 1, \ldots, K$$

and so the update equations for gradient descent become:

$$w_{j,k} \leftarrow w_{j,k} - \alpha \cdot \frac{1}{m} \sum_{i=1}^m \left( \hat{y}_k^{(i)} - y^{(i)}_k \right) x^{(i)}_j \qquad \text{for } j = 0, \ldots, n \quad \text{and} \quad k = 1, \ldots, K$$
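Purely for illustration, here is a direct (non-vectorized) translation of this update into NumPy loops; the function and index names are ours, and this is much slower than the vectorized version below:

```python
import numpy as np

def gradient_descent_step_loops(W, X, Y, Y_hat, alpha):
    """One gradient descent step, looping over every weight w_{j,k} explicitly."""
    m, n_plus_1 = X.shape
    K = Y.shape[1]
    W_new = W.copy()
    for j in range(n_plus_1):              # feature index j = 0, ..., n
        for k in range(K):                 # class index (0-based here; 1, ..., K in the handout)
            grad_jk = sum((Y_hat[i, k] - Y[i, k]) * X[i, j] for i in range(m)) / m
            W_new[j, k] = W[j, k] - alpha * grad_jk
    return W_new
```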

Vectorized version:

First we find the gradient of the cost function:

$$\nabla J(\boldsymbol{W}) = \frac{1}{m}\boldsymbol{X}^T(\boldsymbol{\hat{Y}} - \boldsymbol{Y})$$

and so the update equation for gradient descent becomes:

$$\boldsymbol{W} \leftarrow \boldsymbol{W} - \alpha \cdot \frac{1}{m}\boldsymbol{X}^T(\boldsymbol{\hat{Y}} - \boldsymbol{Y})$$

where recall that $\boldsymbol{\hat{Y}}$ and $\boldsymbol{Y}$ are defined the same as earlier.

Analysis: Note that $\nabla J(\boldsymbol{W})$ is a matrix of the same size as $\boldsymbol{W}$ itself; both are $((n+1) \times K)$.

$\boldsymbol{X}$ is $(m \times (n+1))$, so $\boldsymbol{X}^T$ is $((n+1) \times m)$.

$\boldsymbol{Y}$ and $\boldsymbol{\hat{Y}}$ are both $(m \times K)$.

So multiplying $\boldsymbol{X}^T$ by $(\boldsymbol{\hat{Y}} - \boldsymbol{Y})$ results in a matrix of size $((n+1) \times K)$, which is the same size as $\boldsymbol{W}$.
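Putting the pieces together, a minimal sketch of the full vectorized training loop; the helper name softmax_rows, the learning rate, and the iteration count are our own choices rather than anything prescribed by the handout:

```python
import numpy as np

def softmax_rows(Z):
    """Row-wise softmax: each row of Z becomes a probability vector."""
    E = np.exp(Z - np.max(Z, axis=1, keepdims=True))
    return E / np.sum(E, axis=1, keepdims=True)

def train(X, Y, alpha=0.1, num_iters=1000):
    """Fit W by batch gradient descent on the cross-entropy cost."""
    m, n_plus_1 = X.shape
    K = Y.shape[1]
    W = np.zeros((n_plus_1, K))
    for _ in range(num_iters):
        Y_hat = softmax_rows(X @ W)        # (m, K) predicted probabilities
        grad = X.T @ (Y_hat - Y) / m       # (n+1, K), same shape as W
        W -= alpha * grad                  # gradient descent update
    return W
```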