{
"source_file": "marker_output_prml/textbook/PRML_textbook/PRML_textbook.md",
"total_exercises": 340,
"total_chapters": 14,
"chapters": [
{
"chapter_number": 1,
"total_questions": 39,
"difficulty_breakdown": {
"easy": 20,
"medium": 9,
"hard": 3,
"unknown": 9
},
"questions": [
{
"chapter": 1,
"question_number": "1.1",
"difficulty": "easy",
"question_text": "Consider the sum-of-squares error function given by (1.2: $E(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2$) in which the function $y(x, \\mathbf{w})$ is given by the polynomial (1.1: $y(x, \\mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \\ldots + w_M x^M = \\sum_{j=0}^{M} w_j x^j$). Show that the coefficients $\\mathbf{w} = \\{w_i\\}$ that minimize this error function are given by the solution to the following set of linear equations\n\n$$\\sum_{j=0}^{M} A_{ij} w_j = T_i {(1.122)}$$\n\nwhere\n\n$$A_{ij} = \\sum_{n=1}^{N} (x_n)^{i+j}, T_i = \\sum_{n=1}^{N} (x_n)^i t_n. (1.123)$$\n\nHere a suffix i or j denotes the index of a component, whereas $(x)^i$ denotes x raised to the power of i.",
"answer": "We let the derivative of *error function E* with respect to vector $\\mathbf{w}$ equals to $\\mathbf{0}$ , (i.e. $\\frac{\\partial E}{\\partial \\mathbf{w}} = 0$ ), and this will be the solution of $\\mathbf{w} = \\{w_i\\}$ which minimizes *error function E*. To solve this problem, we will calculate the derivative of E with respect to every $w_i$ , and let them equal to 0 instead. Based on (1.1: $y(x, \\mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \\ldots + w_M x^M = \\sum_{j=0}^{M} w_j x^j$) and (1.2: $E(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2$) we can obtain:\n\n$$\\frac{\\partial E}{\\partial w_{i}} = \\sum_{n=1}^{N} \\{y(x_{n}, \\mathbf{w}) - t_{n}\\} x_{n}^{i} = 0$$\n\n$$= > \\sum_{n=1}^{N} y(x_{n}, \\mathbf{w}) x_{n}^{i} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\n$$= > \\sum_{n=1}^{N} (\\sum_{j=0}^{M} w_{j} x_{n}^{j}) x_{n}^{i} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\n$$= > \\sum_{n=1}^{N} \\sum_{j=0}^{M} w_{j} x_{n}^{(j+i)} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\n$$= > \\sum_{j=0}^{M} \\sum_{n=1}^{N} x_{n}^{(j+i)} w_{j} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\nIf we denote $A_{ij} = \\sum_{n=1}^{N} x_n^{i+j}$ and $T_i = \\sum_{n=1}^{N} x_n^i t_n$ , the equation above can be written exactly as (1.222), Therefore the problem is solved.",
"answer_length": 1240
},
{
"chapter": 1,
"question_number": "1.10",
"difficulty": "easy",
"question_text": "Suppose that the two variables x and z are statistically independent. Show that the mean and variance of their sum satisfies\n\n$$\\mathbb{E}[x+z] = \\mathbb{E}[x] + \\mathbb{E}[z] \\tag{1.128}$$\n\n$$var[x+z] = var[x] + var[z]. \\tag{1.129}$$",
"answer": "We will solve this problem based on the definition of expectation, variation\n\nand independence.\n\n$$\\mathbb{E}[x+z] = \\int \\int (x+z)p(x,z)dxdz$$\n\n$$= \\int \\int (x+z)p(x)p(z)dxdz$$\n\n$$= \\int \\int xp(x)p(z)dxdz + \\int \\int zp(x)p(z)dxdz$$\n\n$$= \\int (\\int p(z)dz)xp(x)dx + \\int (\\int p(x)dx)zp(z)dz$$\n\n$$= \\int xp(x)dx + \\int zp(z)dz$$\n\n$$= \\mathbb{E}[x] + \\mathbb{E}[z]$$\n\n$$var[x+z] = \\int \\int (x+z-\\mathbb{E}[x+z])^2 p(x,z) dx dz$$\n\n$$= \\int \\int \\{(x+z)^2 - 2(x+z)\\mathbb{E}[x+z]\\} + \\mathbb{E}^2[x+z]\\} p(x,z) dx dz$$\n\n$$= \\int \\int (x+z)^2 p(x,z) dx dz - 2\\mathbb{E}[x+z] \\int (x+z) p(x,z) dx dz + \\mathbb{E}^2[x+z]$$\n\n$$= \\int \\int (x+z)^2 p(x,z) dx dz - \\mathbb{E}^2[x+z]$$\n\n$$= \\int \\int (x^2 + 2xz + z^2) p(x) p(z) dx dz - \\mathbb{E}^2[x+z]$$\n\n$$= \\int (\\int p(z) dz) x^2 p(x) dx + \\int \\int 2xz p(x) p(z) dx dz + \\int (\\int p(x) dx) z^2 p(z) dz - \\mathbb{E}^2[x+z]$$\n\n$$= \\mathbb{E}[x^2] + \\mathbb{E}[z^2] - \\mathbb{E}^2[x+z] + \\int \\int 2xz p(x) p(z) dx dz$$\n\n$$= \\mathbb{E}[x^2] + \\mathbb{E}[z^2] - (\\mathbb{E}[x] + \\mathbb{E}[z])^2 + \\int \\int 2xz p(x) p(z) dx dz$$\n\n$$= \\mathbb{E}[x^2] - \\mathbb{E}^2[x] + \\mathbb{E}[z^2] - \\mathbb{E}^2[z] - 2\\mathbb{E}[x] \\mathbb{E}[z] + 2 \\int \\int xz p(x) p(z) dx dz$$\n\n$$= var[x] + var[z] - 2\\mathbb{E}[x] \\mathbb{E}[z] + 2(\\int xp(x) dx) (\\int zp(z) dz)$$\n\n$$= var[x] + var[z]$$\n\n# **Problem 1.11 Solution**\n\nBased on prior knowledge that $\\mu_{ML}$ and $\\sigma_{ML}^2$ will decouple. We will first calculate $\\mu_{ML}$ :\n\n$$\\frac{d(\\ln p(\\mathbf{x} \\mid \\mu, \\sigma^2))}{d\\mu} = \\frac{1}{\\sigma^2} \\sum_{n=1}^{N} (x_n - \\mu)$$\n\nWe let:\n\n$$\\frac{d(\\ln p(\\mathbf{x} \\, \\big| \\, \\mu, \\sigma^2))}{d\\mu} = 0$$\n\nTherefore:\n\n$$\\mu_{ML} = \\frac{1}{N} \\sum_{n=1}^{N} x_n$$\n\nAnd because:\n\n$$\\frac{d(\\ln p(\\mathbf{x}\\,\\big|\\,\\mu,\\sigma^2))}{d\\sigma^2} = \\frac{1}{2\\sigma^4}(\\sum_{n=1}^N(x_n-\\mu)^2 - N\\sigma^2)$$\n\nWe let:\n\n$$\\frac{d(\\ln p(\\mathbf{x} \\mid \\mu, \\sigma^2))}{d\\sigma^2} = 0$$\n\nTherefore:\n\n$$\\sigma_{ML}^2 = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu_{ML})^2$$",
"answer_length": 2015
},
{
"chapter": 1,
"question_number": "1.12",
"difficulty": "medium",
"question_text": "Using the results (1.49: $\\mathbb{E}[x] = \\int_{-\\infty}^{\\infty} \\mathcal{N}(x|\\mu, \\sigma^2) x \\, \\mathrm{d}x = \\mu.$) and (1.50: $\\mathbb{E}[x^2] = \\int_{-\\infty}^{\\infty} \\mathcal{N}\\left(x|\\mu, \\sigma^2\\right) x^2 \\, \\mathrm{d}x = \\mu^2 + \\sigma^2.$), show that\n\n$$\\mathbb{E}[x_n x_m] = \\mu^2 + I_{nm} \\sigma^2 \\tag{1.130}$$\n\nwhere $x_n$ and $x_m$ denote data points sampled from a Gaussian distribution with mean $\\mu$ and variance $\\sigma^2$ , and $I_{nm}$ satisfies $I_{nm}=1$ if n=m and $I_{nm}=0$ otherwise. Hence prove the results (1.57) and (1.58: $\\mathbb{E}[\\sigma_{\\mathrm{ML}}^2] = \\left(\\frac{N-1}{N}\\right)\\sigma^2$).",
"answer": "It is quite straightforward for $\\mathbb{E}[\\mu_{ML}]$ , with the prior knowledge that $x_n$ is i.i.d. and it also obeys Gaussian distribution $\\mathcal{N}(\\mu, \\sigma^2)$ .\n\n$$\\mathbb{E}[\\mu_{ML}] = \\mathbb{E}[\\frac{1}{N}\\sum_{n=1}^N x_n] = \\frac{1}{N}\\mathbb{E}[\\sum_{n=1}^N x_n] = \\mathbb{E}[x_n] = \\mu$$\n\nFor $\\mathbb{E}[\\sigma_{ML}^2]$ , we need to take advantage of (1.56: $\\sigma_{\\rm ML}^2 = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu_{\\rm ML})^2$) and what has been given in the problem :\n\n$$\\mathbb{E}[\\sigma_{ML}^{2}] = \\mathbb{E}\\left[\\frac{1}{N}\\sum_{n=1}^{N}(x_{n} - \\mu_{ML})^{2}\\right]$$\n\n$$= \\frac{1}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}(x_{n} - \\mu_{ML})^{2}\\right]$$\n\n$$= \\frac{1}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}(x_{n}^{2} - 2x_{n}\\mu_{ML} + \\mu_{ML}^{2})\\right]$$\n\n$$= \\frac{1}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}x_{n}^{2}\\right] - \\frac{1}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}2x_{n}\\mu_{ML}\\right] + \\frac{1}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}\\mu_{ML}^{2}\\right]$$\n\n$$= \\mu^{2} + \\sigma^{2} - \\frac{2}{N}\\mathbb{E}\\left[\\sum_{n=1}^{N}x_{n}\\left(\\frac{1}{N}\\sum_{n=1}^{N}x_{n}\\right)\\right] + \\mathbb{E}\\left[\\mu_{ML}^{2}\\right]$$\n\n$$= \\mu^{2} + \\sigma^{2} - \\frac{2}{N^{2}}\\mathbb{E}\\left[\\sum_{n=1}^{N}x_{n}\\left(\\sum_{n=1}^{N}x_{n}\\right)\\right] + \\mathbb{E}\\left[\\left(\\frac{1}{N}\\sum_{n=1}^{N}x_{n}\\right)^{2}\\right]$$\n\n$$= \\mu^{2} + \\sigma^{2} - \\frac{1}{N^{2}}\\mathbb{E}\\left[\\left(\\sum_{n=1}^{N}x_{n}\\right)^{2}\\right]$$\n\n$$= \\mu^{2} + \\sigma^{2} - \\frac{1}{N^{2}}[N(N\\mu^{2} + \\sigma^{2})]$$\n\nTherefore we have:\n\n$$\\mathbb{E}[\\sigma_{ML}^2] = (\\frac{N-1}{N})\\sigma^2$$",
"answer_length": 1585
},
{
"chapter": 1,
"question_number": "1.13",
"difficulty": "easy",
"question_text": "Suppose that the variance of a Gaussian is estimated using the result (1.56: $\\sigma_{\\rm ML}^2 = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu_{\\rm ML})^2$) but with the maximum likelihood estimate $\\mu_{\\rm ML}$ replaced with the true value $\\mu$ of the mean. Show that this estimator has the property that its expectation is given by the true variance $\\sigma^2$ .",
"answer": "This problem can be solved in the same method used in Prob.1.12:\n\n$$\\begin{split} \\mathbb{E}[\\sigma_{ML}^2] &= \\mathbb{E}[\\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu)^2] \\quad \\text{(Because here we use } \\mu \\text{ to replace } \\mu_{ML}) \\\\ &= \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} (x_n - \\mu)^2] \\\\ &= \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} (x_n^2 - 2x_n \\mu + \\mu^2)] \\\\ &= \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} x_n^2] - \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} 2x_n \\mu] + \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} \\mu^2] \\\\ &= \\mu^2 + \\sigma^2 - \\frac{2\\mu}{N} \\mathbb{E}[\\sum_{n=1}^{N} x_n] + \\mu^2 \\\\ &= \\mu^2 + \\sigma^2 - 2\\mu^2 + \\mu^2 \\\\ &= \\sigma^2 \\end{split}$$\n\nNote: The biggest difference between Prob.1.12 and Prob.1.13 is that the mean of Gaussian Distribution is known previously (in Prob.1.13) or not (in Prob.1.12). In other words, the difference can be shown by the following equations:\n\n$$\\begin{split} \\mathbb{E}[\\mu^2] &= \\mu^2 \\quad (\\mu \\text{ is determined, i.e. its } expectation \\text{ is itself, also true for } \\mu^2) \\\\ \\mathbb{E}[\\mu^2_{ML}] &= \\mathbb{E}[(\\frac{1}{N}\\sum_{n=1}^N x_n)^2] = \\frac{1}{N^2}\\mathbb{E}[(\\sum_{n=1}^N x_n)^2] = \\frac{1}{N^2}N(N\\mu^2 + \\sigma^2) = \\mu^2 + \\frac{\\sigma^2}{N} \\end{split}$$",
"answer_length": 1234
},
{
"chapter": 1,
"question_number": "1.14",
"difficulty": "medium",
"question_text": "Show that an arbitrary square matrix with elements $w_{ij}$ can be written in the form $w_{ij} = w_{ij}^{\\rm S} + w_{ij}^{\\rm A}$ where $w_{ij}^{\\rm S}$ and $w_{ij}^{\\rm A}$ are symmetric and anti-symmetric matrices, respectively, satisfying $w_{ij}^{\\rm S} = w_{ji}^{\\rm S}$ and $w_{ij}^{\\rm A} = -w_{ji}^{\\rm A}$ for all i and j. Now consider the second order term in a higher order polynomial in D dimensions, given by\n\n$$\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij} x_i x_j. \\tag{1.131}$$\n\nShow that\n\n$$\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij} x_i x_j = \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{S} x_i x_j$$\n (1.132: $\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij} x_i x_j = \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{S} x_i x_j$)\n\nso that the contribution from the anti-symmetric matrix vanishes. We therefore see that, without loss of generality, the matrix of coefficients $w_{ij}$ can be chosen to be symmetric, and so not all of the $D^2$ elements of this matrix can be chosen independently. Show that the number of independent parameters in the matrix $w_{ij}^{\\rm S}$ is given by D(D+1)/2.",
"answer": "This problem is quite similar to the fact that any function f(x) can be written into the sum of an odd function and an even function. If we let:\n\n$$w_{ij}^S = \\frac{w_{ij} + w_{ji}}{2}$$\n and $w_{ij}^A = \\frac{w_{ij} - w_{ji}}{2}$ \n\nIt is obvious that they satisfy the constraints described in the problem, which are:\n\n$$w_{ij} = w_{ij}^S + w_{ij}^A$$\n, $w_{ij}^S = w_{ji}^S$ , $w_{ij}^A = -w_{ji}^A$ \n\nTo prove (1.132: $\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij} x_i x_j = \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{S} x_i x_j$), we only need to simplify it:\n\n$$\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij} x_i x_j = \\sum_{i=1}^{D} \\sum_{j=1}^{D} (w_{ij}^S + w_{ij}^A) x_i x_j$$\n$$= \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^S x_i x_j + \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^A x_i x_j$$\n\nTherefore, we only need to prove that the second term equals to 0, and here we use a simple trick: we will prove twice of the second term equals to 0 instead.\n\n$$2\\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} = \\sum_{i=1}^{D} \\sum_{j=1}^{D} (w_{ij}^{A} + w_{ij}^{A}) x_{i} x_{j}$$\n\n$$= \\sum_{i=1}^{D} \\sum_{j=1}^{D} (w_{ij}^{A} - w_{ji}^{A}) x_{i} x_{j}$$\n\n$$= \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} - \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ji}^{A} x_{i} x_{j}$$\n\n$$= \\sum_{i=1}^{D} \\sum_{j=1}^{D} w_{ij}^{A} x_{i} x_{j} - \\sum_{j=1}^{D} \\sum_{i=1}^{D} w_{ji}^{A} x_{j} x_{i}$$\n\n$$= 0$$\n\nTherefore, we choose the coefficient matrix to be symmetric as described in the problem. Considering about the symmetry, we can see that if and only if for i=1,2,...,D and $i \\leq j$ , $w_{ij}$ is given, the whole matrix will be determined. Hence, the number of independent parameters are given by :\n\n$$D + D - 1 + \\dots + 1 = \\frac{D(D+1)}{2}$$\n\nNote: You can view this intuitively by considering if the upper triangular part of a symmetric matrix is given, the whole matrix will be determined.",
"answer_length": 1868
},
{
"chapter": 1,
"question_number": "1.15",
"difficulty": "hard",
"question_text": "\\star \\star)$ www In this exercise and the next, we explore how the number of independent parameters in a polynomial grows with the order M of the polynomial and with the dimensionality D of the input space. We start by writing down the $M^{\\rm th}$ order term for a polynomial in D dimensions in the form\n\n$$\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{D} \\cdots \\sum_{i_M=1}^{D} w_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$$\n (1.133: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{D} \\cdots \\sum_{i_M=1}^{D} w_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$)\n\nThe coefficients $w_{i_1i_2\\cdots i_M}$ comprise $D^M$ elements, but the number of independent parameters is significantly fewer due to the many interchange symmetries of the factor $x_{i_1}x_{i_2}\\cdots x_{i_M}$ . Begin by showing that the redundancy in the coefficients can be removed by rewriting this $M^{\\text{th}}$ order term in the form\n\n$$\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$$\n (1.134: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$)\n\nNote that the precise relationship between the $\\widetilde{w}$ coefficients and w coefficients need not be made explicit. Use this result to show that the number of *independent* parameters n(D,M), which appear at order M, satisfies the following recursion relation\n\n$$n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$$\n(1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$)\n\nNext use proof by induction to show that the following result holds\n\n$$\\sum_{i=1}^{D} \\frac{(i+M-2)!}{(i-1)!(M-1)!} = \\frac{(D+M-1)!}{(D-1)!M!}$$\n(1.136: $\\sum_{i=1}^{D} \\frac{(i+M-2)!}{(i-1)!(M-1)!} = \\frac{(D+M-1)!}{(D-1)!M!}$)\n\nwhich can be done by first proving the result for D=1 and arbitrary M by making use of the result 0!=1, then assuming it is correct for dimension D and verifying that it is correct for dimension D+1. Finally, use the two previous results, together with proof by induction, to show\n\n$$n(D,M) = \\frac{(D+M-1)!}{(D-1)!M!}.$$\n(1.137: $n(D,M) = \\frac{(D+M-1)!}{(D-1)!M!}.$)\n\nTo do this, first show that the result is true for M=2, and any value of $D\\geqslant 1$ , by comparison with the result of Exercise 1.14. Then make use of (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$), together with (1.136: $\\sum_{i=1}^{D} \\frac{(i+M-2)!}{(i-1)!(M-1)!} = \\frac{(D+M-1)!}{(D-1)!M!}$), to show that, if the result holds at order M-1, then it will also hold at order M",
"answer": "This problem is a more general form of Prob.1.14, so the method can also be used here: we will find a way to use $w_{i_1i_2...i_M}$ to represent $\\widetilde{w}_{i_1i_2...i_M}$ .\n\nWe begin by introducing a mapping function:\n\n$$F(x_{i1}x_{i2}...x_{iM}) = x_{j1}x_{j2}...,x_{jM}$$\n\n$$s.t. \\bigcup_{k=1}^{M} x_{ik} = \\bigcup_{k=1}^{M} x_{jk}, \\text{ and } x_{j1} \\ge x_{j2} \\ge x_{j3}... \\ge x_{jM}$$\n\nIt is complexed to write F in mathematical form. Actually this function does a simple work: it rearranges the element in a decreasing order based on its subindex. Several examples are given below, when D = 5, M = 4:\n\n$$F(x_5x_2x_3x_2) = x_5x_3x_2x_2$$\n\n$$F(x_1x_3x_3x_2) = x_3x_3x_2x_1$$\n\n$$F(x_1x_4x_2x_3) = x_4x_3x_2x_1$$\n\n$$F(x_1x_1x_5x_2) = x_5x_2x_1x_1$$\n\nAfter introducing F, the solution will be very simple, based on the fact that F will not change the value of the term, but only rearrange it.\n\n$$\\sum_{i_1=1}^D \\sum_{i_2=1}^D \\dots \\sum_{i_M=1}^D w_{i_1 i_2 \\dots i_M} x_{i1} x_{i2} \\dots x_{iM} = \\sum_{j_1=1}^D \\sum_{j_2=1}^{j_1} \\dots \\sum_{j_M=1}^{j_{M-1}} \\widetilde{w}_{j_1 j_2 \\dots j_M} x_{j1} x_{j2} \\dots x_{jM}$$\n\nwhere \n$$\\begin{split} \\widetilde{w}_{j_1 j_2 \\dots j_M} &= \\sum_{w \\in \\Omega} w \\\\ \\Omega &= \\{ w_{i_1 i_2 \\dots i_M} \\mid F(x_{i1} x_{i2} \\dots x_{iM}) = x_{j1} x_{j2} \\dots x_{jM}, \\ \\forall x_{i1} x_{i2} \\dots x_{iM} \\ \\} \\end{split}$$\n\nBy far, we have already proven (1.134: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$). *Mathematical induction* will be used to prove (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$) and we will begin by proving D=1, i.e. n(1,M)=n(1,M-1). When D=1, (1.134: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$) will degenerate into $\\widetilde{w}x_1^M$ , i.e., it only has one term, whose coefficient is govern by $\\widetilde{w}$ regardless the value of M.\n\nTherefore, we have proven when D = 1, n(D,M) = 1. Suppose (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$) holds for D, let's prove it will also hold for D + 1, and then (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$) will be proved based on *Mathematical induction*.\n\nLet's begin based on (1.134):\n\n$$\\sum_{i_1=1}^{D+1} \\sum_{i_2=1}^{i_1} \\dots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\dots i_M} x_{i_1} x_{i_2} \\dots x_{i_M}$$\n (\\*)\n\nWe divide (\\*) into two parts based on the first summation: the first part is made up of $i_i = 1, 2, ..., D$ and the second part $i_1 = D + 1$ . After division, the first part corresponds to n(D, M), and the second part corresponds to n(D + 1, M - 1). Therefore we obtain:\n\n$$n(D+1,M) = n(D,M) + n(D+1,M-1) \\tag{**}$$\n\nAnd given the fact that (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$) holds for D:\n\n$$n(D, M) = \\sum_{i=1}^{D} n(i, M-1)$$\n\nTherefore, we substitute it into (\\*\\*)\n\n$$n(D+1,M) = \\sum_{i=1}^{D} n(i,M-1) + n(D+1,M-1) = \\sum_{i=1}^{D+1} n(i,M-1)$$\n\nWe will prove (1.136: $\\sum_{i=1}^{D} \\frac{(i+M-2)!}{(i-1)!(M-1)!} = \\frac{(D+M-1)!}{(D-1)!M!}$) in a different but simple way. 
We rewrite (1.136: $\\sum_{i=1}^{D} \\frac{(i+M-2)!}{(i-1)!(M-1)!} = \\frac{(D+M-1)!}{(D-1)!M!}$) in *Permutation and Combination* view:\n\n$$\\sum_{i=1}^{D} C_{i+M-2}^{M-1} = C_{D+M-1}^{M}$$\n\nFirstly, We expand the summation.\n\n$$C_{M-1}^{M-1} + C_{M}^{M-1} + \\dots C_{D+M-2}^{M-1} = C_{D+M-1}^{M}$$\n\nSecondly, we rewrite the first term on the left side to $C_M^M$ , because $C_{M-1}^{M-1}=C_M^M=1$ . In other words, we only need to prove:\n\n$$C_M^M + C_M^{M-1} + \\dots C_{D+M-2}^{M-1} = C_{D+M-1}^M$$\n\nThirdly, we take advantage of the property : $C_N^r = C_{N-1}^r + C_{N-1}^{r-1}$ . So we can recursively combine the first term and the second term on the left side, and it will ultimately equal to the right side.\n\n(1.137: $n(D,M) = \\frac{(D+M-1)!}{(D-1)!M!}.$) gives the mathematical form of n(D, M), and we need all the conclusions above to prove it.\n\nLet's give some intuitive concepts by illustrating M=0,1,2. When M=0, (1.134: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$) will consist of only a constant term, which means n(D,0)=1. When M=1, it is obvious n(D,1)=D, because in this case (1.134) will only have D terms if we expand it. When M=2, it degenerates to Prob.1.14, so $n(D,2)=\\frac{D(D+1)}{2}$ is also obvious. Suppose (1.137) holds for M-1, let's prove it will also hold for M.\n\n$$\\begin{split} n(D,M) &= \\sum_{i=1}^{D} n(i,M-1) \\quad (\\text{ based on } (1.135)) \\\\ &= \\sum_{i=1}^{D} C_{i+M-2}^{M-1} \\quad (\\text{ based on } (1.137) \\text{ holds for } M-1) \\\\ &= C_{M-1}^{M-1} + C_{M}^{M-1} + C_{M+1}^{M-1} \\dots + C_{D+M-2}^{M-1} \\\\ &= (C_{M}^{M} + C_{M}^{M-1}) + C_{M+1}^{M-1} \\dots + C_{D+M-2}^{M-1} \\\\ &= (C_{M+1}^{M} + C_{M+1}^{M-1}) \\dots + C_{D+M-2}^{M-1} \\\\ &= C_{M+2}^{M} \\dots + C_{D+M-2}^{M-1} \\\\ &\\dots \\\\ &= C_{D+M-1}^{M} \\end{split}$$\n\nBy far, all have been proven.",
"answer_length": 5028
},
{
"chapter": 1,
"question_number": "1.16",
"difficulty": "hard",
"question_text": "\\star \\star)$ In Exercise 1.15, we proved the result (1.135: $n(D,M) = \\sum_{i=1}^{D} n(i, M-1).$) for the number of independent parameters in the $M^{\\rm th}$ order term of a D-dimensional polynomial. We now find an expression for the total number N(D,M) of independent parameters in all of the terms up to and including the M6th order. First show that N(D,M) satisfies\n\n$$N(D,M) = \\sum_{m=0}^{M} n(D,m)$$\n (1.138: $N(D,M) = \\sum_{m=0}^{M} n(D,m)$)\n\nwhere n(D, m) is the number of independent parameters in the term of order m. Now make use of the result (1.137: $n(D,M) = \\frac{(D+M-1)!}{(D-1)!M!}.$), together with proof by induction, to show that\n\n$$N(d, M) = \\frac{(D+M)!}{D! M!}.$$\n(1.139: $N(d, M) = \\frac{(D+M)!}{D! M!}.$)\n\nThis can be done by first proving that the result holds for M=0 and arbitrary $D \\geqslant 1$ , then assuming that it holds at order M, and hence showing that it holds at order M+1. Finally, make use of Stirling's approximation in the form\n\n$$n! \\simeq n^n e^{-n} \\tag{1.140}$$\n\nfor large n to show that, for $D\\gg M$ , the quantity N(D,M) grows like $D^M$ , and for $M\\gg D$ it grows like $M^D$ . Consider a cubic (M=3) polynomial in D dimensions, and evaluate numerically the total number of independent parameters for (i) D=10 and (ii) D=100, which correspond to typical small-scale and medium-scale machine learning applications.",
"answer": "This problem can be solved in the same way as the one in Prob.1.15. Firstly, we should write the expression consisted of all the independent terms up to Mth order corresponding to N(D,M). By adding a summation regarding to M on the left side of (1.134: $\\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\cdots \\sum_{i_M=1}^{i_{M-1}} \\widetilde{w}_{i_1 i_2 \\cdots i_M} x_{i_1} x_{i_2} \\cdots x_{i_M}.$), we obtain:\n\n$$\\sum_{m=0}^{M} \\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\dots \\sum_{i_m=1}^{i_{m-1}} \\widetilde{w}_{i_1 i_2 \\dots i_m} x_{i_1} x_{i_2} \\dots x_{i_m}$$\n (\\*)\n\n(1.138: $N(D,M) = \\sum_{m=0}^{M} n(D,m)$) is quite obvious if we view m as an looping variable, iterating through all the possible orders less equal than M, and for every possible oder m, the independent parameters are given by n(D,m).\n\nLet's prove (1.138: $N(D,M) = \\sum_{m=0}^{M} n(D,m)$) in a formal way by using *Mathematical Induction*. When M = 1,(\\*) will degenerate to two terms: m = 0, corresponding to n(D,0) and m = 1, corresponding to n(D,1). Therefore N(D,1) = n(D,0) + n(D,1). Suppose (1.138: $N(D,M) = \\sum_{m=0}^{M} n(D,m)$) holds for M, we will see that it will also hold for M+1. Let's begin by writing all the independent terms based on (\\*):\n\n$$\\sum_{m=0}^{M+1} \\sum_{i_1=1}^{D} \\sum_{i_2=1}^{i_1} \\dots \\sum_{i_m=1}^{i_{m-1}} \\widetilde{w}_{i_1 i_2 \\dots i_m} x_{i_1} x_{i_2} \\dots x_{i_m}$$\n (\\*\\*)\n\nUsing the same technique as in Prob.1.15, we divide (\\*\\*) to two parts based on the summation regarding to m: the first part consisted of m = 0,1,...,M and the second part m = M+1. Hence, the first part will correspond to N(D,M) and the second part will correspond to n(D,M+1). So we obtain:\n\n$$N(D, M+1) = N(D, M) + n(D, M+1)$$\n\nThen we substitute (1.138: $N(D,M) = \\sum_{m=0}^{M} n(D,m)$) into the equation above:\n\n$$N(D, M+1) = \\sum_{m=0}^{M} n(D, m) + n(D, M+1)$$\n \n= $\\sum_{m=0}^{M+1} n(D, m)$ \n\nTo prove (1.139: $N(d, M) = \\frac{(D+M)!}{D! M!}.$), we will also use the same technique in Prob.1.15 instead of *Mathematical Induction*. We begin based on already proved (1.138):\n\n$$N(D,M) = \\sum_{m=0}^{M} n(D,M)$$\n\nWe then take advantage of (1.137):\n\n$$\\begin{split} N(D,M) &= \\sum_{m=0}^{M} C_{D+m-1}^{m} \\\\ &= C_{D-1}^{0} + C_{D}^{1} + C_{D+1}^{2} + \\ldots + C_{D+M-1}^{M} \\\\ &= (C_{D}^{0} + C_{D}^{1}) + C_{D+1}^{2} + \\ldots + C_{D+M-1}^{M} \\\\ &= (C_{D+1}^{1} + C_{D+1}^{2}) + \\ldots + C_{D+M-1}^{M} \\\\ &= \\ldots \\\\ &= C_{D+M}^{M} \\end{split}$$\n\nHere as asked by the problem, we will view the growing speed of N(D,M). We should see that in n(D,M), D and M are symmetric, meaning that we only need to prove when $D \\gg M$ , it will grow like $D^M$ , and then the situation of $M \\gg D$ will be solved by symmetry.\n\n$$N(D,M) = \\frac{(D+M)!}{D!M!} \\approx \\frac{(D+M)^{D+M}}{D^D M^M}$$\n\n$$= \\frac{1}{M^M} (\\frac{D+M}{D})^D (D+M)^M$$\n\n$$= \\frac{1}{M^M} [(1+\\frac{M}{D})^{\\frac{D}{M}}]^M (D+M)^M$$\n\n$$\\approx (\\frac{e}{M})^M (D+M)^M$$\n\n$$= \\frac{e^M}{M^M} (1+\\frac{M}{D})^M D^M$$\n\n$$= \\frac{e^M}{M^M} [(1+\\frac{M}{D})^{\\frac{D}{M}}]^{\\frac{M^2}{D}} D^M$$\n\n$$\\approx \\frac{e^{M+\\frac{M^2}{D}}}{M^M} D^M \\approx \\frac{e^M}{M^M} D^M$$\n\nWhere we use Stirling's approximation, $\\lim_{n\\to +\\infty}(1+\\frac{1}{n})^n=e$ and $e^{\\frac{M^2}{D}}\\approx e^0=1$ . According to the description in the problem, When $D\\gg M$ , we can actually view $\\frac{e^M}{M^M}$ as a constant, so N(D,M) will grow like $D^M$ in this case. 
And by symmetry, N(D,M) will grow like $M^D$ , when $M\\gg D$ .\n\nFinally, we are asked to calculate N(10,3) and N(100,3):\n\n$$N(10,3) = C_{13}^3 = 286$$\n \n $N(100,3) = C_{103}^3 = 176851$",
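"verification_sketch": "A small Python check (an illustrative editorial addition, not part of the original solution), using math.comb, of the closed forms n(D,M) = C(D+M-1, M) and N(D,M) = C(D+M, M), the recursion (1.135), the sum (1.138), and the two numbers quoted above; the tested ranges of D and M are arbitrary.\n\n```python\nfrom math import comb\n\ndef n(D, M):                 # independent parameters of order exactly M\n    return comb(D + M - 1, M)\n\ndef N(D, M):                 # all orders up to and including M\n    return comb(D + M, M)\n\n# recursion (1.135) and sum (1.138)\nassert all(n(D, M) == sum(n(i, M - 1) for i in range(1, D + 1))\n           for D in range(1, 8) for M in range(1, 8))\nassert all(N(D, M) == sum(n(D, m) for m in range(M + 1))\n           for D in range(1, 8) for M in range(0, 8))\n\nprint(N(10, 3), N(100, 3))   # 286 176851\n```",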
"answer_length": 3596
},
{
"chapter": 1,
"question_number": "1.17",
"difficulty": "medium",
"question_text": "\\star)$ www The gamma function is defined by\n\n$$\\Gamma(x) \\equiv \\int_0^\\infty u^{x-1} e^{-u} \\, \\mathrm{d}u. \\tag{1.141}$$\n\nUsing integration by parts, prove the relation $\\Gamma(x+1) = x\\Gamma(x)$ . Show also that $\\Gamma(1) = 1$ and hence that $\\Gamma(x+1) = x!$ when x is an integer.",
"answer": "$$\\Gamma(x+1) = \\int_0^{+\\infty} u^x e^{-u} du$$\n\n$$= \\int_0^{+\\infty} -u^x de^{-u}$$\n\n$$= -u^x e^{-u} \\Big|_0^{+\\infty} - \\int_0^{+\\infty} e^{-u} d(-u^x)$$\n\n$$= -u^x e^{-u} \\Big|_0^{+\\infty} + x \\int_0^{+\\infty} e^{-u} u^{x-1} du$$\n\n$$= -u^x e^{-u} \\Big|_0^{+\\infty} + x \\Gamma(x)$$\n\nWhere we have taken advantage of *Integration by parts* and according to the equation above, we only need to prove the first term equals to 0. Given *L'Hospital's Rule*:\n\n$$\\lim_{u \\to +\\infty} -\\frac{u^x}{e^u} = \\lim_{u \\to +\\infty} -\\frac{x!}{e^u} = 0$$\n\nAnd also when $u = 0, -u^x e^u = 0$ , so we have proved $\\Gamma(x+1) = x\\Gamma(x)$ . Based on the definition of $\\Gamma(x)$ , we can write:\n\n$$\\Gamma(1) = \\int_0^{+\\infty} e^{-u} du = -e^{-u} \\Big|_0^{+\\infty} = -(0-1) = 1$$\n\nTherefore when x is an integer:\n\n$$\\Gamma(x) = (x-1)\\Gamma(x-1) = (x-1)(x-2)\\Gamma(x-2) = \\dots = x!\\Gamma(1) = x!$$",
"answer_length": 887
},
{
"chapter": 1,
"question_number": "1.18",
"difficulty": "medium",
"question_text": "We can use the result (1.126: $I = (2\\pi\\sigma^2)^{1/2}.$) to derive an expression for the surface area $S_D$ , and the volume $V_D$ , of a sphere of unit radius in D dimensions. To do this, consider the following result, which is obtained by transforming from Cartesian to polar coordinates\n\n$$\\prod_{i=1}^{D} \\int_{-\\infty}^{\\infty} e^{-x_i^2} dx_i = S_D \\int_{0}^{\\infty} e^{-r^2} r^{D-1} dr.$$\n (1.142: $\\prod_{i=1}^{D} \\int_{-\\infty}^{\\infty} e^{-x_i^2} dx_i = S_D \\int_{0}^{\\infty} e^{-r^2} r^{D-1} dr.$)\n\nUsing the definition (1.141: $\\Gamma(x) \\equiv \\int_0^\\infty u^{x-1} e^{-u} \\, \\mathrm{d}u.$) of the Gamma function, together with (1.126: $I = (2\\pi\\sigma^2)^{1/2}.$), evaluate both sides of this equation, and hence show that\n\n$$S_D = \\frac{2\\pi^{D/2}}{\\Gamma(D/2)}. (1.143)$$\n\nNext, by integrating with respect to radius from 0 to 1, show that the volume of the unit sphere in D dimensions is given by\n\n$$V_D = \\frac{S_D}{D}. ag{1.144}$$\n\nFinally, use the results $\\Gamma(1)=1$ and $\\Gamma(3/2)=\\sqrt{\\pi}/2$ to show that (1.143: $S_D = \\frac{2\\pi^{D/2}}{\\Gamma(D/2)}.$) and (1.144) reduce to the usual expressions for D=2 and D=3.",
"answer": "Based on (1.124: $I = \\int_{-\\infty}^{\\infty} \\exp\\left(-\\frac{1}{2\\sigma^2}x^2\\right) dx$) and (1.126: $I = (2\\pi\\sigma^2)^{1/2}.$) and by substituting x to $\\sqrt{2}\\sigma y$ , it is quite obvious to obtain :\n\n$$\\int_{-\\infty}^{+\\infty} e^{-x_i^2} dx_i = \\sqrt{\\pi}$$\n\nTherefore, the left side of (1.42: $= \\mathbb{E}_{\\mathbf{x}, \\mathbf{y}} [\\mathbf{x} \\mathbf{y}^{\\mathrm{T}}] - \\mathbb{E}[\\mathbf{x}] \\mathbb{E}[\\mathbf{y}^{\\mathrm{T}}].$) will equal to $\\pi^{\\frac{D}{2}}$ . For the right side of (1.42):\n\n$$\\begin{split} S_D \\int_0^{+\\infty} e^{-r^2} r^{D-1} dr &= S_D \\int_0^{+\\infty} e^{-u} u^{\\frac{D-1}{2}} d\\sqrt{u} \\quad (u = r^2) \\\\ &= \\frac{S_D}{2} \\int_0^{+\\infty} e^{-u} u^{\\frac{D}{2} - 1} du \\\\ &= \\frac{S_D}{2} \\Gamma(\\frac{D}{2}) \\end{split}$$\n\nHence, we obtain:\n\n$$\\pi^{\\frac{D}{2}} = \\frac{S_D}{2} \\Gamma(\\frac{D}{2}) \\quad \\Longrightarrow \\quad S_D = \\frac{2\\pi^{\\frac{D}{2}}}{\\Gamma(\\frac{D}{2})}$$\n\n $S_D$ has given the expression of the surface area with radius 1 in dimension D, we can further expand the conclusion: the surface area with radius r in dimension D will equal to $S_D \\cdot r^{D-1}$ , and when r=1, it will reduce to $S_D$ . This conclusion is naive, if you find that the surface area of different sphere in dimension D is proportion to the D-1th power of radius, i.e. $r^{D-1}$ . Considering the relationship between V and S of a sphere with arbitrary radius in dimension D: $\\frac{dV}{dr} = S$ , we can obtain:\n\n$$V = \\int S dr = \\int S_D r^{D-1} dr = \\frac{S_D}{D} r^D$$\n\nThe equation above gives the expression of the volume of a sphere with radius r in dimension D, so we let r=1:\n\n$$V_D = \\frac{S_D}{D}$$\n\nFor D = 2 and D = 3:\n\n$$V_2 = \\frac{S_2}{2} = \\frac{1}{2} \\cdot \\frac{2\\pi}{\\Gamma(1)} = \\pi$$\n\n$$V_3 = \\frac{S_3}{3} = \\frac{1}{3} \\cdot \\frac{2\\pi^{\\frac{3}{2}}}{\\Gamma(\\frac{3}{2})} = \\frac{1}{3} \\cdot \\frac{2\\pi^{\\frac{3}{2}}}{\\frac{\\sqrt{\\pi}}{2}} = \\frac{4}{3}\\pi$$",
"answer_length": 1933
},
{
"chapter": 1,
"question_number": "1.19",
"difficulty": "medium",
"question_text": "Consider a sphere of radius a in D-dimensions together with the concentric hypercube of side 2a, so that the sphere touches the hypercube at the centres of each of its sides. By using the results of Exercise 1.18, show that the ratio of the volume of the sphere to the volume of the cube is given by\n\n$$\\frac{\\text{volume of sphere}}{\\text{volume of cube}} = \\frac{\\pi^{D/2}}{D2^{D-1}\\Gamma(D/2)}.$$\n (1.145: $\\frac{\\text{volume of sphere}}{\\text{volume of cube}} = \\frac{\\pi^{D/2}}{D2^{D-1}\\Gamma(D/2)}.$)\n\nNow make use of Stirling's formula in the form\n\n$$\\Gamma(x+1) \\simeq (2\\pi)^{1/2} e^{-x} x^{x+1/2}$$\n (1.146: $\\Gamma(x+1) \\simeq (2\\pi)^{1/2} e^{-x} x^{x+1/2}$)\n\nwhich is valid for $x\\gg 1$ , to show that, as $D\\to\\infty$ , the ratio (1.145: $\\frac{\\text{volume of sphere}}{\\text{volume of cube}} = \\frac{\\pi^{D/2}}{D2^{D-1}\\Gamma(D/2)}.$) goes to zero. Show also that the ratio of the distance from the centre of the hypercube to one of the corners, divided by the perpendicular distance to one of the sides, is $\\sqrt{D}$ , which therefore goes to $\\infty$ as $D\\to\\infty$ . From these results we see that, in a space of high dimensionality, most of the volume of a cube is concentrated in the large number of corners, which themselves become very long 'spikes'!",
"answer": "We have already given a hint in the solution of Prob.1.18, and here we will make it more clearly: the volume of a sphere with radius r is $V_D \\cdot r^D$ . This is quite similar with the conclusion we obtained in Prob.1.18 about the surface area except that it is proportion to Dth power of its radius, i.e. $r^D$ not $r^{D-1}$ \n\n$$\\frac{\\text{volume of sphere}}{\\text{volume of cube}} = \\frac{V_D a^D}{(2a)^D} = \\frac{S_D}{2^D D} = \\frac{\\pi^{\\frac{D}{2}}}{2^{D-1} D \\Gamma(\\frac{D}{2})} \\tag{*}$$\n\nWhere we have used the result of (1.143: $S_D = \\frac{2\\pi^{D/2}}{\\Gamma(D/2)}.$). And when $D \\to +\\infty$ , we will use a simple method to show that (\\*) will converge to 0. We rewrite it :\n\n$$(*) = \\frac{2}{D} \\cdot (\\frac{\\pi}{4})^{\\frac{D}{2}} \\cdot \\frac{1}{\\Gamma(\\frac{D}{2})}$$\n\nHence, it is now quite obvious, all the three terms will converge to 0 when $D \\to +\\infty$ . Therefore their product will also converge to 0. The last problem is quite simple :\n\n$$\\frac{\\text{center to one corner}}{\\text{center to one side}} = \\frac{\\sqrt{a^2 \\cdot D}}{a} = \\sqrt{D} \\quad \\text{and} \\quad \\lim_{D \\to +\\infty} \\sqrt{D} = +\\infty$$",
"answer_length": 1143
},
{
"chapter": 1,
"question_number": "1.2",
"difficulty": "easy",
"question_text": "Write down the set of coupled linear equations, analogous to (1.122), satisfied by the coefficients $w_i$ which minimize the regularized sum-of-squares error function given by (1.4: $\\widetilde{E}(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2 + \\frac{\\lambda}{2} ||\\mathbf{w}||^2$).",
"answer": "This problem is similar to Prob.1.1, and the only difference is the last term on the right side of (1.4: $\\widetilde{E}(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2 + \\frac{\\lambda}{2} ||\\mathbf{w}||^2$), the penalty term. So we will do the same thing as in Prob.1.1:\n\n$$\\frac{\\partial E}{\\partial w_{i}} = \\sum_{n=1}^{N} \\{y(x_{n}, \\mathbf{w}) - t_{n}\\} x_{n}^{i} + \\lambda w_{i} = 0$$\n\n$$= > \\sum_{j=0}^{M} \\sum_{n=1}^{N} x_{n}^{(j+i)} w_{j} + \\lambda w_{i} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\n$$= > \\sum_{j=0}^{M} \\{\\sum_{n=1}^{N} x_{n}^{(j+i)} + \\delta_{ji} \\lambda\\} w_{j} = \\sum_{n=1}^{N} x_{n}^{i} t_{n}$$\n\nwhere\n\n$$\\delta_{ji} \\begin{cases} 0 & j \\neq i \\\\ 1 & j = i \\end{cases}$$",
"answer_length": 715
},
{
"chapter": 1,
"question_number": "1.20",
"difficulty": "medium",
"question_text": "In this exercise, we explore the behaviour of the Gaussian distribution in high-dimensional spaces. Consider a Gaussian distribution in D dimensions given by\n\n$$p(\\mathbf{x}) = \\frac{1}{(2\\pi\\sigma^2)^{D/2}} \\exp\\left(-\\frac{\\|\\mathbf{x}\\|^2}{2\\sigma^2}\\right).$$\n (1.147: $p(\\mathbf{x}) = \\frac{1}{(2\\pi\\sigma^2)^{D/2}} \\exp\\left(-\\frac{\\|\\mathbf{x}\\|^2}{2\\sigma^2}\\right).$)\n\nWe wish to find the density with respect to radius in polar coordinates in which the direction variables have been integrated out. To do this, show that the integral of the probability density over a thin shell of radius r and thickness $\\epsilon$ , where $\\epsilon \\ll 1$ , is given by $p(r)\\epsilon$ where\n\n$$p(r) = \\frac{S_D r^{D-1}}{(2\\pi\\sigma^2)^{D/2}} \\exp\\left(-\\frac{r^2}{2\\sigma^2}\\right)$$\n (1.148: $p(r) = \\frac{S_D r^{D-1}}{(2\\pi\\sigma^2)^{D/2}} \\exp\\left(-\\frac{r^2}{2\\sigma^2}\\right)$)\n\nwhere $S_D$ is the surface area of a unit sphere in D dimensions. Show that the function p(r) has a single stationary point located, for large D, at $\\hat{r} \\simeq \\sqrt{D}\\sigma$ . By considering $p(\\hat{r} + \\epsilon)$ where $\\epsilon \\ll \\hat{r}$ , show that for large D,\n\n$$p(\\hat{r} + \\epsilon) = p(\\hat{r}) \\exp\\left(-\\frac{3\\epsilon^2}{2\\sigma^2}\\right)$$\n (1.149: $p(\\hat{r} + \\epsilon) = p(\\hat{r}) \\exp\\left(-\\frac{3\\epsilon^2}{2\\sigma^2}\\right)$)\n\nwhich shows that $\\widehat{r}$ is a maximum of the radial probability density and also that p(r) decays exponentially away from its maximum at $\\widehat{r}$ with length scale $\\sigma$ . We have already seen that $\\sigma \\ll \\widehat{r}$ for large D, and so we see that most of the probability mass is concentrated in a thin shell at large radius. Finally, show that the probability density $p(\\mathbf{x})$ is larger at the origin than at the radius $\\widehat{r}$ by a factor of $\\exp(D/2)$ . We therefore see that most of the probability mass in a high-dimensional Gaussian distribution is located at a different radius from the region of high probability density. This property of distributions in spaces of high dimensionality will have important consequences when we consider Bayesian inference of model parameters in later chapters.",
"answer": "The density of probability in a thin shell with radius r and thickness $\\epsilon$ can be viewed as a constant. And considering that a sphere in dimension D with radius r has surface area $S_D r^{D-1}$ , which has already been proved in Prob 1.19.\n\n$$\\int_{shell} p(\\mathbf{x}) d\\mathbf{x} = p(\\mathbf{x}) \\int_{shell} d\\mathbf{x} = \\frac{exp(-\\frac{r^2}{2\\sigma^2})}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} \\cdot V(\\text{shell}) = \\frac{exp(-\\frac{r^2}{2\\sigma^2})}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} S_D r^{D-1} \\epsilon$$\n\nThus we denote:\n\n$$p(r) = \\frac{S_D r^{D-1}}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} exp(-\\frac{r^2}{2\\sigma^2})$$\n\nWe calculate the derivative of (1.148: $p(r) = \\frac{S_D r^{D-1}}{(2\\pi\\sigma^2)^{D/2}} \\exp\\left(-\\frac{r^2}{2\\sigma^2}\\right)$) with respect to r:\n\n$$\\frac{dp(r)}{dr} = \\frac{S_D}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} r^{D-2} exp(-\\frac{r^2}{2\\sigma^2})(D-1-\\frac{r^2}{\\sigma^2}) \\tag{*}$$\n\nWe let the derivative equal to 0, we will obtain its unique root( stationary point) $\\hat{r} = \\sqrt{D-1}\\sigma$ , because $r \\in [0,+\\infty]$ . When $r < \\hat{r}$ , the derivative is large than 0, p(r) will increase as $r \\uparrow$ , and when $r > \\hat{r}$ , the derivative is less than 0, p(r) will decrease as $r \\uparrow$ . Therefore $\\hat{r}$ will be the only maximum point. And it is obvious when $D \\gg 1$ , $\\hat{r} \\approx \\sqrt{D}\\sigma$ .\n\n$$\\frac{p(\\hat{r}+\\epsilon)}{p(\\hat{r})} = \\frac{(\\hat{r}+\\epsilon)^{D-1} exp(-\\frac{(\\hat{r}+\\epsilon)^2}{2\\sigma^2})}{\\hat{r}^{D-1} exp(-\\frac{\\hat{r}^2}{2\\sigma^2})}$$\n\n$$= (1+\\frac{\\epsilon}{\\hat{r}})^{D-1} exp(-\\frac{2\\epsilon\\,\\hat{r}+\\epsilon^2}{2\\sigma^2})$$\n\n$$= exp(-\\frac{2\\epsilon\\,\\hat{r}+\\epsilon^2}{2\\sigma^2} + (D-1)ln(1+\\frac{\\epsilon}{\\hat{r}}))$$\n\nWe process for the exponential term by using *Taylor Theorems*.\n\n$$-\\frac{2\\epsilon \\,\\hat{r} + \\epsilon^2}{2\\sigma^2} + (D-1)ln(1 + \\frac{\\epsilon}{\\hat{r}}) \\approx -\\frac{2\\epsilon \\,\\hat{r} + \\epsilon^2}{2\\sigma^2} + (D-1)(\\frac{\\epsilon}{\\hat{r}} - \\frac{\\epsilon^2}{2\\hat{r}^2})$$\n\n$$= -\\frac{2\\epsilon \\,\\hat{r} + \\epsilon^2}{2\\sigma^2} + \\frac{2\\hat{r}\\epsilon - \\epsilon^2}{2\\sigma^2}$$\n\n$$= -\\frac{\\epsilon^2}{\\sigma^2}$$\n\nTherefore, $p(\\hat{r}+\\epsilon)=p(\\hat{r})exp(-\\frac{\\epsilon^2}{\\sigma^2})$ . Note: Here I draw a different conclusion compared with (1.149: $p(\\hat{r} + \\epsilon) = p(\\hat{r}) \\exp\\left(-\\frac{3\\epsilon^2}{2\\sigma^2}\\right)$), but I do not think there is any mistake in my deduction.\n\nFinally, we see from (1.147):\n\n$$p(\\mathbf{x})\\Big|_{\\mathbf{x}=0} = \\frac{1}{(2\\pi\\sigma^2)^{\\frac{D}{2}}}$$\n\n$$p(\\mathbf{x})\\Big|_{||\\mathbf{x}||^2 = \\hat{r}^2} = \\frac{1}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} exp(-\\frac{\\hat{r}^2}{2\\sigma^2}) \\approx \\frac{1}{(2\\pi\\sigma^2)^{\\frac{D}{2}}} exp(-\\frac{D}{2})$$",
"answer_length": 2757
},
{
"chapter": 1,
"question_number": "1.21",
"difficulty": "medium",
"question_text": "\\star)$ Consider two nonnegative numbers a and b, and show that, if $a \\leq b$ , then $a \\leq (ab)^{1/2}$ . Use this result to show that, if the decision regions of a two-class classification problem are chosen to minimize the probability of misclassification, this probability will satisfy\n\n$$p(\\text{mistake}) \\leqslant \\int \\left\\{ p(\\mathbf{x}, C_1) p(\\mathbf{x}, C_2) \\right\\}^{1/2} d\\mathbf{x}.$$\n (1.150: $p(\\text{mistake}) \\leqslant \\int \\left\\{ p(\\mathbf{x}, C_1) p(\\mathbf{x}, C_2) \\right\\}^{1/2} d\\mathbf{x}.$)",
"answer": "The first question is rather simple:\n\n$$(ab)^{\\frac{1}{2}} - a = a^{\\frac{1}{2}}(b^{\\frac{1}{2}} - a^{\\frac{1}{2}}) \\ge 0$$\n\nWhere we have taken advantage of $b \\ge a \\ge 0$ . And based on (1.78):\n\n$$\\begin{split} p(\\text{mistake}) &= p(\\mathbf{x} \\in R_1, C_2) + p(\\mathbf{x} \\in R_2, C_1) \\\\ &= \\int_{R_1} p(\\mathbf{x}, C_2) dx + \\int_{R_2} p(\\mathbf{x}, C_1) dx \\end{split}$$\n\nRecall that the decision rule which can minimize misclassification is that if $p(\\mathbf{x}, C_1) > p(\\mathbf{x}, C_2)$ , for a given value of $\\mathbf{x}$ , we will assign that $\\mathbf{x}$ to class $C_1$ . We can see that in decision area $R_1$ , it should satisfy $p(\\mathbf{x}, C_1) > p(\\mathbf{x}, C_2)$ . Therefore, using what we have proved, we can obtain:\n\n$$\\int_{R_1} p(\\mathbf{x}, C_2) dx \\le \\int_{R_1} \\{ p(\\mathbf{x}, C_1) p(\\mathbf{x}, C_2) \\}^{\\frac{1}{2}} dx$$\n\nIt is the same for decision area $R_2$ . Therefore we can obtain:\n\n$$p(\\text{mistake}) \\le \\int \\{p(\\mathbf{x}, C_1) p(\\mathbf{x}, C_2)\\}^{\\frac{1}{2}} dx$$\n\n# **Problem 1.22 Solution**\n\nWe need to deeply understand (1.81: $\\sum_{k} L_{kj} p(\\mathcal{C}_k | \\mathbf{x})$). When $L_{kj} = 1 - I_{kj}$ :\n\n$$\\sum_{k} L_{kj} p(C_k | \\mathbf{x}) = \\sum_{k} p(C_k | \\mathbf{x}) - p(C_j | \\mathbf{x})$$\n\nGiven a specific $\\mathbf{x}$ , the first term on the right side is a constant, which equals to 1, no matter which class $C_j$ we assign $\\mathbf{x}$ to. Therefore if we want to minimize the loss, we will maximize $p(C_j|\\mathbf{x})$ . Hence, we will assign $\\mathbf{x}$ to class $C_j$ , which can give the biggest posterior probability $p(C_j|\\mathbf{x})$ .\n\nThe explanation of the loss matrix is quite simple. If we label correctly, there is no loss. Otherwise, we will incur a loss, in the same degree whichever class we label it to. The loss matrix is given below to give you an intuitive view:\n\n$$\\begin{bmatrix} 0 & 1 & 1 & \\dots & 1 \\\\ 1 & 0 & 1 & \\dots & 1 \\\\ \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\ 1 & 1 & 1 & \\dots & 0 \\end{bmatrix}$$",
"answer_length": 2027
},
{
"chapter": 1,
"question_number": "1.23",
"difficulty": "easy",
"question_text": "Derive the criterion for minimizing the expected loss when there is a general loss matrix and general prior probabilities for the classes.",
"answer": "$$\\mathbb{E}[L] = \\sum_{k} \\sum_{j} \\int_{R_{j}} L_{kj} p(\\mathbf{x}, C_{k}) d\\mathbf{x} = \\sum_{k} \\sum_{j} \\int_{R_{j}} L_{kj} p(C_{k}) p(\\mathbf{x} | C_{k}) d\\mathbf{x}$$\n\nIf we denote a new loss matrix by $L_{jk}^{\\star} = L_{jk}p(C_k)$ , we can obtain a new equation :\n\n $\\mathbb{E}[L] = \\sum_{k} \\sum_{i} \\int_{R_{i}} L_{kj}^{\\star} p(\\mathbf{x} | C_{k}) d\\mathbf{x}$",
"answer_length": 374
},
{
"chapter": 1,
"question_number": "1.24",
"difficulty": "medium",
"question_text": "- 1.24 (\\*\\*) www Consider a classification problem in which the loss incurred when an input vector from class $C_k$ is classified as belonging to class $C_j$ is given by the loss matrix $L_{kj}$ , and for which the loss incurred in selecting the reject option is $\\lambda$ . Find the decision criterion that will give the minimum expected loss. Verify that this reduces to the reject criterion discussed in Section 1.5.3 when the loss matrix is given by $L_{kj} = 1 I_{kj}$ . What is the relationship between $\\lambda$ and the rejection threshold $\\theta$ ?",
"answer": "This description of the problem is a little confusing, and what it really mean is that $\\lambda$ is the parameter governing the loss, just like $\\theta$ governing the posterior probability $p(C_k|\\mathbf{x})$ when we introduce the reject option. Therefore the reject option can be written in a new way when we view it from the view of $\\lambda$ and the loss:\n\n$$\\text{choice} \\begin{cases} \\text{class } C_j & \\min_{l} \\sum_{k} L_{kl} p(C_k|x) < \\lambda \\\\ \\text{reject} & \\text{else} \\end{cases}$$\n\nWhere $C_j$ is the class that can obtain the minimum. If $L_{kj} = 1 - I_{kj}$ , according to what we have proved in Prob.1.22:\n\n$$\\sum_{k} L_{kj} p(C_k | \\mathbf{x}) = \\sum_{k} p(C_k | \\mathbf{x}) - p(C_j | \\mathbf{x}) = 1 - p(C_j | \\mathbf{x})$$\n\nTherefore, the reject criterion from the view of $\\lambda$ above is actually equivalent to the largest posterior probability is larger than $1 - \\lambda$ :\n\n$$\\min_{l} \\sum_{k} L_{kl} p(C_k|x) < \\lambda \\quad <=> \\quad \\max_{l} p(C_l|x) > 1 - \\lambda$$\n\nAnd from the view of $\\theta$ and posterior probability, we label a class for **x** (i.e. we do not reject) is given by the constrain :\n\n$$\\max_{l} p(C_l|x) > \\theta$$\n\nHence from the two different views, we can see that $\\lambda$ and $\\theta$ are correlated with:\n\n$$\\lambda + \\theta = 1$$",
"answer_length": 1313
},
{
"chapter": 1,
"question_number": "1.25",
"difficulty": "easy",
"question_text": "Consider the generalization of the squared loss function (1.87: $\\mathbb{E}[L] = \\iint \\{y(\\mathbf{x}) - t\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt.$) for a single target variable t to the case of multiple target variables described by the vector t given by\n\n$$\\mathbb{E}[L(\\mathbf{t}, \\mathbf{y}(\\mathbf{x}))] = \\iint \\|\\mathbf{y}(\\mathbf{x}) - \\mathbf{t}\\|^2 p(\\mathbf{x}, \\mathbf{t}) \\, d\\mathbf{x} \\, d\\mathbf{t}. \\tag{1.151}$$\n\nUsing the calculus of variations, show that the function $\\mathbf{y}(\\mathbf{x})$ for which this expected loss is minimized is given by $\\mathbf{y}(\\mathbf{x}) = \\mathbb{E}_{\\mathbf{t}}[\\mathbf{t}|\\mathbf{x}]$ . Show that this result reduces to (1.89: $y(\\mathbf{x}) = \\frac{\\int tp(\\mathbf{x}, t) dt}{p(\\mathbf{x})} = \\int tp(t|\\mathbf{x}) dt = \\mathbb{E}_t[t|\\mathbf{x}]$) for the case of a single target variable t.",
"answer": "We can prove this informally by dealing with one dimension once a time just as the same process in (1.87: $\\mathbb{E}[L] = \\iint \\{y(\\mathbf{x}) - t\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt.$) - (1.89: $y(\\mathbf{x}) = \\frac{\\int tp(\\mathbf{x}, t) dt}{p(\\mathbf{x})} = \\int tp(t|\\mathbf{x}) dt = \\mathbb{E}_t[t|\\mathbf{x}]$) until all has been done, due to the fact that the total loss E can be divided to the summation of loss on every\n\ndimension, and what's more they are independent. Here, we will use a more informal way to prove this. In this case, the expected loss can be written:\n\n$$\\mathbb{E}[L] = \\int \\int \\{\\mathbf{y}(\\mathbf{x}) - \\mathbf{t}\\}^2 p(\\mathbf{x}, \\mathbf{t}) d\\mathbf{t} d\\mathbf{x}$$\n\nTherefore, just as the same process in (1.87: $\\mathbb{E}[L] = \\iint \\{y(\\mathbf{x}) - t\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt.$) - (1.89):\n\n$$\\frac{\\partial \\mathbb{E}[L]}{\\partial y(\\mathbf{x})} = 2 \\int \\{\\mathbf{y}(\\mathbf{x}) - \\mathbf{t}\\} p(\\mathbf{x}, \\mathbf{t}) d\\mathbf{t} = \\mathbf{0}$$\n\n$$=> \\mathbf{y}(\\mathbf{x}) = \\frac{\\int \\mathbf{t} p(\\mathbf{x}, \\mathbf{t}) d\\mathbf{t}}{p(\\mathbf{x})} = \\mathbb{E}_{\\mathbf{t}}[\\mathbf{t}|\\mathbf{x}]$$",
"answer_length": 1173
},
{
"chapter": 1,
"question_number": "1.26",
"difficulty": "easy",
"question_text": "- 1.26 (\\*) By expansion of the square in (1.151: $\\mathbb{E}[L(\\mathbf{t}, \\mathbf{y}(\\mathbf{x}))] = \\iint \\|\\mathbf{y}(\\mathbf{x}) - \\mathbf{t}\\|^2 p(\\mathbf{x}, \\mathbf{t}) \\, d\\mathbf{x} \\, d\\mathbf{t}.$), derive a result analogous to (1.90: $\\mathbb{E}[L] = \\int \\{y(\\mathbf{x}) - \\mathbb{E}[t|\\mathbf{x}]\\}^2 p(\\mathbf{x}) d\\mathbf{x} + \\int \\{\\mathbb{E}[t|\\mathbf{x}] - t\\}^2 p(\\mathbf{x}) d\\mathbf{x}.$) and hence show that the function y(x) that minimizes the expected squared loss for the case of a vector t of target variables is again given by the conditional expectation of t.",
"answer": "The process is identical as the deduction we conduct for (1.90: $\\mathbb{E}[L] = \\int \\{y(\\mathbf{x}) - \\mathbb{E}[t|\\mathbf{x}]\\}^2 p(\\mathbf{x}) d\\mathbf{x} + \\int \\{\\mathbb{E}[t|\\mathbf{x}] - t\\}^2 p(\\mathbf{x}) d\\mathbf{x}.$). We will not repeat here. And what we should emphasize is that $\\mathbb{E}[\\mathbf{t}|\\mathbf{x}]$ is a function of $\\mathbf{x}$ , not $\\mathbf{t}$ . Thus the integral over $\\mathbf{t}$ and $\\mathbf{x}$ can be simplified based on *Integration by parts* and that is how we obtain (1.90: $\\mathbb{E}[L] = \\int \\{y(\\mathbf{x}) - \\mathbb{E}[t|\\mathbf{x}]\\}^2 p(\\mathbf{x}) d\\mathbf{x} + \\int \\{\\mathbb{E}[t|\\mathbf{x}] - t\\}^2 p(\\mathbf{x}) d\\mathbf{x}.$).\n\n**Note**: There is a mistake in (1.90: $\\mathbb{E}[L] = \\int \\{y(\\mathbf{x}) - \\mathbb{E}[t|\\mathbf{x}]\\}^2 p(\\mathbf{x}) d\\mathbf{x} + \\int \\{\\mathbb{E}[t|\\mathbf{x}] - t\\}^2 p(\\mathbf{x}) d\\mathbf{x}.$), i.e. the second term on the right side is wrong. You can view (3.37: $\\mathbb{E}[L] = \\int \\left\\{ y(\\mathbf{x}) - h(\\mathbf{x}) \\right\\}^2 p(\\mathbf{x}) \\, d\\mathbf{x} + \\int \\left\\{ h(\\mathbf{x}) - t \\right\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt.$) on P148 for reference. It should be:\n\n$$\\mathbb{E}[L] = \\int \\{y(\\mathbf{x}) - \\mathbb{E}[t|\\mathbf{x}]\\}^2 p(\\mathbf{x}) d\\mathbf{x} + \\int \\{\\mathbb{E}[t|\\mathbf{x} - t]\\}^2 p(\\mathbf{x}, t) d\\mathbf{x} dt$$\n\nMoreover, this mistake has already been revised in the errata.\n\n# **Problem 1.27 Solution**\n\nWe deal with this problem based on *Calculus of Variations*.\n\n$$\\frac{\\partial \\mathbb{E}[L_q]}{\\partial y(\\mathbf{x})} = q \\int [y(\\mathbf{x} - t)]^{q-1} sign(y(\\mathbf{x}) - t) p(\\mathbf{x}, t) dt = 0$$\n\n$$= > \\int_{-\\infty}^{y(\\mathbf{x})} [y(\\mathbf{x}) - t]^{q-1} p(\\mathbf{x}, t) dt = \\int_{y(\\mathbf{x})}^{+\\infty} [y(\\mathbf{x}) - t]^{q-1} p(\\mathbf{x}, t) dt$$\n\n$$= > \\int_{-\\infty}^{y(\\mathbf{x})} [y(\\mathbf{x}) - t]^{q-1} p(t|\\mathbf{x}) dt = \\int_{y(\\mathbf{x})}^{+\\infty} [y(\\mathbf{x}) - t]^{q-1} p(t|\\mathbf{x}) dt$$\n\nWhere we take advantage of $p(\\mathbf{x},t) = p(t|\\mathbf{x})p(\\mathbf{x})$ and the property of sign function. Hence, when q=1, the equation above will reduce to :\n\n$$\\int_{-\\infty}^{y(\\mathbf{x})} p(t|\\mathbf{x}) dt = \\int_{y(\\mathbf{x})}^{+\\infty} p(t|\\mathbf{x}) dt$$\n\nIn other words, when q = 1, the optimal $y(\\mathbf{x})$ will be given by conditional median. When q = 0, it is non-trivial. We need to rewrite (1.91):\n\n$$\\mathbb{E}[L_q] = \\int \\left\\{ \\int |y(\\mathbf{x}) - t|^q p(t|\\mathbf{x}) p(\\mathbf{x}) dt \\right\\} d\\mathbf{x}$$\n$$= \\int \\left\\{ p(\\mathbf{x}) \\int |y(\\mathbf{x}) - t|^q p(t|\\mathbf{x}) dt \\right\\} d\\mathbf{x} \\quad (*)$$\n\nIf we want to minimize $\\mathbb{E}[L_q]$ , we only need to minimize the integrand of (\\*):\n\n$$\\int |y(\\mathbf{x}) - t|^q p(t|\\mathbf{x}) dt \\tag{**}$$\n\nWhen q = 0, $|y(\\mathbf{x}) - t|^q$ is close to 1 everywhere except in the neighborhood around $t = y(\\mathbf{x})$ (This can be seen from Fig1.29). Therefore:\n\n$$(**) \\approx \\int_{\\mathcal{U}} p(t|\\mathbf{x}) dt - \\int_{\\varepsilon} (1 - |y(\\mathbf{x}) - t|^q) p(t|\\mathbf{x}) dt \\approx \\int_{\\mathcal{U}} p(t|\\mathbf{x}) dt - \\int_{\\varepsilon} p(t|\\mathbf{x}) dt$$\n\nWhere $\\epsilon$ means the small neighborhood, $\\mathscr U$ means the whole space $\\mathbf x$ lies in. 
Note that $y(\\mathbf x)$ has no correlation with the first term, but the second term (because how to choose $y(\\mathbf x)$ will affect the location of $\\epsilon$ ). Hence we will put $\\epsilon$ at the location where $p(t|\\mathbf x)$ achieve its largest value, i.e. the mode, because in this way we can obtain the largest reduction. Therefore, it is natural we choose $y(\\mathbf x)$ equals to t that maximize $p(t|\\mathbf x)$ for every $\\mathbf x$ .",
"answer_length": 3743
},
{
"chapter": 1,
"question_number": "1.27",
"difficulty": "medium",
"question_text": "Consider the expected loss for regression problems under the $L_q$ loss function given by (1.91: $\\mathbb{E}[L_q] = \\iint |y(\\mathbf{x}) - t|^q p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt$). Write down the condition that $y(\\mathbf{x})$ must satisfy in order to minimize $\\mathbb{E}[L_q]$ . Show that, for q=1, this solution represents the conditional median, i.e., the function $y(\\mathbf{x})$ such that the probability mass for $t < y(\\mathbf{x})$ is the same as for $t \\geqslant y(\\mathbf{x})$ . Also show that the minimum expected $L_q$ loss for $q \\to 0$ is given by the conditional mode, i.e., by the function $y(\\mathbf{x})$ equal to the value of t that maximizes $p(t|\\mathbf{x})$ for each $\\mathbf{x}$ .",
"answer": "Since we can choose $y(\\mathbf{x})$ independently for each value of $\\mathbf{x}$ , the minimum of the expected $L_q$ loss can be found by minimizing the integrand given by\n\n$$\\int |y(\\mathbf{x}) - t|^q p(t|\\mathbf{x}) \\, \\mathrm{d}t \\tag{42}$$\n\nfor each value of $\\mathbf{x}$ . Setting the derivative of (42) with respect to $y(\\mathbf{x})$ to zero gives the stationarity condition\n\n$$\\int q|y(\\mathbf{x}) - t|^{q-1} \\operatorname{sign}(y(\\mathbf{x}) - t)p(t|\\mathbf{x}) dt$$\n\n$$= q \\int_{-\\infty}^{y(\\mathbf{x})} |y(\\mathbf{x}) - t|^{q-1} p(t|\\mathbf{x}) dt - q \\int_{y(\\mathbf{x})}^{\\infty} |y(\\mathbf{x}) - t|^{q-1} p(t|\\mathbf{x}) dt = 0$$\n\nwhich can also be obtained directly by setting the functional derivative of (1.91: $\\mathbb{E}[L_q] = \\iint |y(\\mathbf{x}) - t|^q p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt$) with respect to $y(\\mathbf{x})$ equal to zero. It follows that $y(\\mathbf{x})$ must satisfy\n\n$$\\int_{-\\infty}^{y(\\mathbf{x})} |y(\\mathbf{x}) - t|^{q-1} p(t|\\mathbf{x}) \\, \\mathrm{d}t = \\int_{y(\\mathbf{x})}^{\\infty} |y(\\mathbf{x}) - t|^{q-1} p(t|\\mathbf{x}) \\, \\mathrm{d}t. \\tag{43}$$\n\nFor the case of q = 1 this reduces to\n\n$$\\int_{-\\infty}^{y(\\mathbf{x})} p(t|\\mathbf{x}) \\, \\mathrm{d}t = \\int_{y(\\mathbf{x})}^{\\infty} p(t|\\mathbf{x}) \\, \\mathrm{d}t. \\tag{44}$$\n\nwhich says that $y(\\mathbf{x})$ must be the conditional median of t.\n\nFor $q \\to 0$ we note that, as a function of t, the quantity $|y(\\mathbf{x}) - t|^q$ is close to 1 everywhere except in a small neighbourhood around $t = y(\\mathbf{x})$ where it falls to zero. The value of (42) will therefore be close to 1, since the density p(t) is normalized, but reduced slightly by the 'notch' close to $t = y(\\mathbf{x})$ . We obtain the biggest reduction in (42) by choosing the location of the notch to coincide with the largest value of p(t), i.e. with the (conditional) mode.",
"answer_length": 1871
},
{
"chapter": 1,
"question_number": "1.28",
"difficulty": "easy",
"question_text": "In Section 1.6, we introduced the idea of entropy h(x) as the information gained on observing the value of a random variable x having distribution p(x). We saw that, for independent variables x and y for which p(x,y) = p(x)p(y), the entropy functions are additive, so that h(x,y) = h(x) + h(y). In this exercise, we derive the relation between h and p in the form of a function h(p). First show that $h(p^2) = 2h(p)$ , and hence by induction that $h(p^n) = nh(p)$ where n is a positive integer. Hence show that $h(p^{n/m}) = (n/m)h(p)$ where m is also a positive integer. This implies that $h(p^x) = xh(p)$ where x is a positive rational number, and hence by continuity when it is a positive real number. Finally, show that this implies h(p) must take the form $h(p) \\propto \\ln p$ .",
"answer": "Basically this problem is focused on the definition of *Information Content*, i.e.h(x). We will rewrite the problem more precisely. In *Information Theory*, $h(\\cdot)$ is also called *Information Content* and denoted as $I(\\cdot)$ . Here we will still use $h(\\cdot)$ for consistency. The whole problem is about the property of h(x). Based on our knowledge that $h(\\cdot)$ is a monotonic function of the probability p(x), we can obtain:\n\n$$h(x) = f(p(x))$$\n\nThe equation above means that the *Information* we obtain for a specific value of a random variable x is correlated with its occurring probability p(x), and its relationship is given by a mapping function $f(\\cdot)$ . Suppose C is the intersection of two independent event A and B, then the information of event C occurring is the compound message of both independent events A and B occurring:\n\n$$h(C) = h(A \\cap B) = h(A) + h(B) \\tag{*}$$\n\nBecause *A* and *B* is independent:\n\n$$P(C) = P(A) \\cdot P(B)$$\n\nWe apply function $f(\\cdot)$ to both side:\n\n$$f(P(C)) = f(P(A) \\cdot P(B)) \\tag{**}$$\n\nMoreover, the left side of (\\*) and (\\*\\*) are equivalent by definition, so we can obtain:\n\n$$h(A) + h(B) = f(P(A) \\cdot P(B))$$\n\n$$= f(p(A)) + f(p(B)) = f(P(A) \\cdot P(B))$$\n\nWe obtain an important property of function $f(\\cdot)$ : $f(x \\cdot y) = f(x) + f(y)$ . Note: In problem (1.28: $P(z) = \\int_{-\\infty}^{z} p(x) dx$), what it really wants us to prove is about the form and property of function f in our formulation, because there is one sentence in the description of the problem : \"In this exercise, we derive the relation between h and p in the form of a function h(p)\", (i.e. $f(\\cdot)$ in our formulation is equivalent to h(p) in the description).\n\nAt present, what we know is the property of function $f(\\cdot)$ :\n\n$$f(xy) = f(x) + f(y) \\tag{*}$$\n\nFirstly, we choose x = y, and then it is obvious : $f(x^2) = 2f(x)$ . Secondly, it is obvious $f(x^n) = nf(x)$ , $n \\in \\mathbb{N}$ is true for n = 1, n = 2. Suppose it is also true for n, we will prove it is true for n + 1:\n\n$$f(x^{n+1}) = f(x^n) + f(x) = nf(x) + f(x) = (n+1)f(x)$$\n\nTherefore, $f(x^n) = nf(x)$ , $n \\in \\mathbb{N}$ has been proved. For an integer m, we rewrite $x^n$ as $(x^{\\frac{n}{m}})^m$ , and take advantage of what we have proved, we will obtain:\n\n$$f(x^n) = f((x^{\\frac{n}{m}})^m) = m f(x^{\\frac{n}{m}})$$\n\nBecause $f(x^n)$ also equals to nf(x), therefore $nf(x) = mf(x^{\\frac{n}{m}})$ . We simplify the equation and obtain:\n\n$$f(x^{\\frac{n}{m}}) = \\frac{n}{m}f(x)$$\n\nFor an arbitrary positive x, $x \\in \\mathbb{R}^+$ , we can find two positive rational array $\\{y_n\\}$ and $\\{z_n\\}$ , which satisfy:\n\n$$y_1 < y_2 < \\dots < y_N < x$$\n and $\\lim_{N \\to +\\infty} y_N = x$ \n\n$$z_1 > z_2 > \\dots > z_N > x$$\n, and $\\lim_{N \\to +\\infty} z_N = x$ \n\nWe take advantage of function $f(\\cdot)$ is monotonic:\n\n$$y_N f(p) = f(p^{y_N}) \\le f(p^x) \\le f(p^{z_N}) = z_N f(p)$$\n\nAnd when $N \\to +\\infty$ , we will obtain: $f(p^x) = xf(p)$ , $x \\in \\mathbb{R}^+$ . We let p = e, it can be rewritten as : $f(e^x) = xf(e)$ . Finally, We denote $y = e^x$ :\n\n$$f(y) = ln(y)f(e)$$\n\nWhere f(e) is a constant once function $f(\\cdot)$ is decided. Therefore $f(x) \\propto ln(x)$ .",
"answer_length": 3235
},
{
"chapter": 1,
"question_number": "1.29",
"difficulty": "easy",
"question_text": "Consider an M-state discrete random variable x, and use Jensen's inequality in the form (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$) to show that the entropy of its distribution p(x) satisfies $H[x] \\leq \\ln M$ .",
"answer": "This problem is a little bit tricky. The entropy for a M-state discrete random variable x can be written as :\n\n$$H[x] = -\\sum_{i}^{M} \\lambda_{i} ln(\\lambda_{i})$$\n\nWhere $\\lambda_i$ is the probability that x choose state i. Here we choose a concave function $f(\\cdot) = ln(\\cdot)$ , we rewrite *Jensen's inequality*, i.e.(1.115):\n\n$$ln(\\sum_{i=1}^{M} \\lambda_i x_i) \\ge \\sum_{i=1}^{M} \\lambda_i ln(x_i)$$\n\nWe choose $x_i = \\frac{1}{\\lambda_i}$ and simplify the equation above, we will obtain :\n\n$$lnM \\geq -\\sum_{i=1}^{M} \\lambda_i ln(\\lambda_i) = H[x]$$",
"answer_length": 560
},
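A quick numerical illustration of the bound $H[x] \le \ln M$ proved in Exercise 1.29 above. This is a minimal sketch, not part of the original solution; it assumes numpy is available and uses an arbitrarily chosen 5-state distribution.

```python
import numpy as np

def entropy(p):
    """Entropy in nats of a discrete distribution p (zero entries contribute 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

M = 5
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(M))        # an arbitrary M-state distribution
uniform = np.full(M, 1.0 / M)

print(entropy(p), "<=", np.log(M))               # strict inequality for a generic p
print(np.isclose(entropy(uniform), np.log(M)))   # equality for the uniform distribution
```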
{
"chapter": 1,
"question_number": "1.3",
"difficulty": "medium",
"question_text": "Suppose that we have three coloured boxes r (red), b (blue), and g (green). Box r contains 3 apples, 4 oranges, and 3 limes, box b contains 1 apple, 1 orange, and 0 limes, and box g contains 3 apples, 3 oranges, and 4 limes. If a box is chosen at random with probabilities p(r) = 0.2, p(b) = 0.2, p(g) = 0.6, and a piece of fruit is removed from the box (with equal probability of selecting any of the items in the box), then what is the probability of selecting an apple? If we observe that the selected fruit is in fact an orange, what is the probability that it came from the green box?",
"answer": "This problem can be solved by *Bayes' theorem*. The probability of selecting an apple P(a):\n\n$$P(a) = P(a|r)P(r) + P(a|b)P(b) + P(a|g)P(g) = \\frac{3}{10} \\times 0.2 + \\frac{1}{2} \\times 0.2 + \\frac{3}{10} \\times 0.6 = 0.34$$\n\nBased on *Bayes' theorem*, the probability of an selected orange coming from the green box P(g|o):\n\n$$P(g|o) = \\frac{P(o|g)P(g)}{P(o)}$$\n\nWe calculate the probability of selecting an orange P(o) first :\n\n$$P(o) = P(o|r)P(r) + P(o|b)P(b) + P(o|g)P(g) = \\frac{4}{10} \\times 0.2 + \\frac{1}{2} \\times 0.2 + \\frac{3}{10} \\times 0.6 = 0.36$$\n\nTherefore we can get:\n\n$$P(g|o) = \\frac{P(o|g)P(g)}{P(o)} = \\frac{\\frac{3}{10} \\times 0.6}{0.36} = 0.5$$",
"answer_length": 667
},
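The arithmetic in Exercise 1.3 is easy to double-check. The sketch below (not from the original solution) simply encodes the box contents and priors and reproduces P(apple) = 0.34 and P(g|o) = 0.5.

```python
# Fruit counts per box: (apples, oranges, limes)
boxes = {"r": (3, 4, 3), "b": (1, 1, 0), "g": (3, 3, 4)}
prior = {"r": 0.2, "b": 0.2, "g": 0.6}

def p_fruit_given_box(box, idx):
    counts = boxes[box]
    return counts[idx] / sum(counts)

APPLE, ORANGE = 0, 1

p_apple = sum(prior[k] * p_fruit_given_box(k, APPLE) for k in boxes)
p_orange = sum(prior[k] * p_fruit_given_box(k, ORANGE) for k in boxes)
p_green_given_orange = p_fruit_given_box("g", ORANGE) * prior["g"] / p_orange

print(p_apple)               # 0.34
print(p_green_given_orange)  # 0.5
```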
{
"chapter": 1,
"question_number": "1.30",
"difficulty": "medium",
"question_text": "Evaluate the Kullback-Leibler divergence (1.113: $= -\\int p(\\mathbf{x}) \\ln \\left\\{\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}\\right\\} d\\mathbf{x}.$) between two Gaussians $p(x) = \\mathcal{N}(x|\\mu, \\sigma^2)$ and $q(x) = \\mathcal{N}(x|m, s^2)$ .\n\n**Table 1.3** The joint distribution p(x, y) for two binary variables x and y used in Exercise 1.39.\n\n$$\\begin{array}{c|cccc}\n & y \\\\\n\\hline\n & 0 & 1 \\\\\n\\hline\n & 0 & 1/3 & 1/3 \\\\\n & 1 & 0 & 1/3\n\\end{array}$$",
"answer": "Based on definition:\n\n$$ln\\{\\frac{p(x)}{q(x)}\\} = ln(\\frac{s}{\\sigma}) - \\left[\\frac{1}{2\\sigma^2}(x-\\mu)^2 - \\frac{1}{2s^2}(x-m)^2\\right]$$\n$$= ln(\\frac{s}{\\sigma}) - \\left[\\left(\\frac{1}{2\\sigma^2} - \\frac{1}{2s^2}\\right)x^2 - \\left(\\frac{\\mu}{\\sigma^2} - \\frac{m}{s^2}\\right)x + \\left(\\frac{\\mu^2}{2\\sigma^2} - \\frac{m^2}{2s^2}\\right)\\right]$$\n\nWe will take advantage of the following equations to solve this problem.\n\n$$\\mathbb{E}[x^2] = \\int x^2 \\mathcal{N}(x|\\mu, \\sigma^2) dx = \\mu^2 + \\sigma^2$$\n\n$$\\mathbb{E}[x] = \\int x \\mathcal{N}(x|\\mu, \\sigma^2) dx = \\mu$$\n\n$$\\int \\mathcal{N}(x|\\mu, \\sigma^2) dx = 1$$\n\nGiven the equations above, it is easy to see:\n\n$$\\begin{split} KL(p||q) &= -\\int p(x)ln\\{\\frac{q(x)}{p(x)}\\}dx \\\\ &= \\int \\mathcal{N}(x|\\mu,\\sigma)ln\\{\\frac{p(x)}{q(x)}\\}dx \\\\ &= ln(\\frac{s}{\\sigma}) - (\\frac{1}{2\\sigma^2} - \\frac{1}{2s^2})(\\mu^2 + \\sigma^2) + (\\frac{\\mu}{\\sigma^2} - \\frac{m}{s^2})\\mu - (\\frac{\\mu^2}{2\\sigma^2} - \\frac{m^2}{2s^2}) \\\\ &= ln(\\frac{s}{\\sigma}) + \\frac{\\sigma^2 + (\\mu - m)^2}{2s^2} - \\frac{1}{2} \\end{split}$$\n\nWe will discuss this result in more detail. Firstly, if KL distance is defined in *Information Theory*, the first term of the result will be $log_2(\\frac{s}{\\sigma})$ instead of $ln(\\frac{s}{\\sigma})$ . Secondly, if we denote $x = \\frac{s}{\\sigma}$ , KL distance can be rewritten as:\n\n$$KL(p||q) = ln(x) + \\frac{1}{2x^2} - \\frac{1}{2} + a$$\n, where $a = \\frac{(\\mu - m)^2}{2s^2}$ \n\nWe calculate the derivative of KL with respect to x, and let it equal to 0:\n\n$$\\frac{d(KL)}{dx} = \\frac{1}{x} - x^{-3} = 0 \\quad => \\quad x = 1 \\ (\\because s, \\, \\sigma > 0)$$\n\nWhen x < 1 the derivative is less than 0, and when x > 1, it is greater than 0, which makes x = 1 the global minimum. When x = 1, KL(p||q) = a. What's more, when $\\mu = m$ , a will achieve its minimum 0. In this way, we have shown that the KL distance between two Gaussian Distributions is not less than 0, and only when the two Gaussian Distributions are identical, i.e. having same mean and variance, KL distance will equal to 0.",
"answer_length": 2057
},
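A sanity check of the closed form derived in Exercise 1.30, KL(p||q) = ln(s/σ) + (σ² + (μ − m)²)/(2s²) − 1/2, against direct numerical integration. This is a sketch with arbitrarily chosen parameters, not part of the original solution; it assumes numpy and scipy are available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 1.0, 0.8     # parameters of p
m, s = -0.5, 1.5         # parameters of q

def integrand(x):
    p = norm.pdf(x, mu, sigma)
    q = norm.pdf(x, m, s)
    return p * np.log(p / q)

# Integration limits chosen wide enough to capture essentially all of p's mass.
kl_numeric, _ = quad(integrand, -12, 12)
kl_closed = np.log(s / sigma) + (sigma**2 + (mu - m)**2) / (2 * s**2) - 0.5

print(kl_numeric, kl_closed)   # the two values should agree closely
```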
{
"chapter": 1,
"question_number": "1.31",
"difficulty": "medium",
"question_text": "\\star)$ www Consider two variables x and y having joint distribution p(x, y). Show that the differential entropy of this pair of variables satisfies\n\n$$H[\\mathbf{x}, \\mathbf{y}] \\leqslant H[\\mathbf{x}] + H[\\mathbf{y}] \\tag{1.152}$$\n\nwith equality if, and only if, x and y are statistically independent.",
"answer": "We evaluate $H[\\mathbf{x}] + H[\\mathbf{y}] - H[\\mathbf{x}, \\mathbf{y}]$ by definition. Firstly, let's calculate $H[\\mathbf{x}, \\mathbf{v}]$ :\n\n$$H[\\mathbf{x}, \\mathbf{y}] = -\\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{x}, \\mathbf{y}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{x}) d\\mathbf{x} d\\mathbf{y} - \\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{y}|\\mathbf{x}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int p(\\mathbf{x}) lnp(\\mathbf{x}) d\\mathbf{x} - \\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{y}|\\mathbf{x}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= H[\\mathbf{x}] + H[\\mathbf{y}|\\mathbf{x}]$$\n\nWhere we take advantage of $p(\\mathbf{x}, \\mathbf{y}) = p(\\mathbf{x})p(\\mathbf{y}|\\mathbf{x})$ , $\\int p(\\mathbf{x}, \\mathbf{y})d\\mathbf{y} = p(\\mathbf{x})$ and (1.111: $H[\\mathbf{y}|\\mathbf{x}] = -\\iint p(\\mathbf{y}, \\mathbf{x}) \\ln p(\\mathbf{y}|\\mathbf{x}) \\, d\\mathbf{y} \\, d\\mathbf{x}$). Therefore, we have actually solved Prob.1.37 here. We will continue our proof for this problem, based on what we have proved:\n\n$$H[\\mathbf{x}] + H[\\mathbf{y}] - H[\\mathbf{x}, \\mathbf{y}] = H[\\mathbf{y}] - H[\\mathbf{y}|\\mathbf{x}]$$\n\n$$= -\\int p(\\mathbf{y})lnp(\\mathbf{y})d\\mathbf{y} + \\int \\int p(\\mathbf{x}, \\mathbf{y})lnp(\\mathbf{y}|\\mathbf{x})d\\mathbf{x}d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y})lnp(\\mathbf{y})d\\mathbf{x}d\\mathbf{y} + \\int \\int p(\\mathbf{x}, \\mathbf{y})lnp(\\mathbf{y}|\\mathbf{x})d\\mathbf{x}d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y})ln(\\frac{p(\\mathbf{x})p(\\mathbf{y})}{p(\\mathbf{x}, \\mathbf{y})})d\\mathbf{x}d\\mathbf{y}$$\n\n$$= KL(p(\\mathbf{x}, \\mathbf{y})||p(\\mathbf{x})p(\\mathbf{y})) = I(\\mathbf{x}, \\mathbf{y}) \\ge 0$$\n\nWhere we take advantage of the following properties:\n\n$$p(\\mathbf{y}) = \\int p(\\mathbf{x}, \\mathbf{y}) d\\mathbf{x}$$\n\n$$\\frac{p(\\mathbf{y})}{p(\\mathbf{v}|\\mathbf{x})} = \\frac{p(\\mathbf{x})p(\\mathbf{y})}{p(\\mathbf{x},\\mathbf{v})}$$\n\nMoreover, it is straightforward that if and only if $\\mathbf{x}$ and $\\mathbf{y}$ is statistically independent, the equality holds, due to the property of *KL distance*. You can also view this result by :\n\n$$H[\\mathbf{x}, \\mathbf{y}] = -\\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{x}, \\mathbf{y}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{x}) d\\mathbf{x} d\\mathbf{y} - \\int \\int p(\\mathbf{x}, \\mathbf{y}) lnp(\\mathbf{y}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int p(\\mathbf{x}) lnp(\\mathbf{x}) d\\mathbf{x} - \\int \\int p(\\mathbf{y}) lnp(\\mathbf{y}) d\\mathbf{y}$$\n\n$$= H[\\mathbf{x}] + H[\\mathbf{y}]$$",
"answer_length": 2566
},
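Exercise 1.31 concerns differential entropy, but the same sub-additivity holds for discrete distributions, which allows a quick numerical illustration. The sketch below is not part of the original solution and assumes numpy is available; the joint distribution is arbitrary.

```python
import numpy as np

def H(p):
    """Entropy in nats of a discrete distribution (zero entries contribute 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
p_xy = rng.dirichlet(np.ones(12)).reshape(3, 4)   # an arbitrary joint distribution
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)     # marginals

print(H(p_xy.ravel()), "<=", H(p_x) + H(p_y))     # strict unless p(x,y) = p(x)p(y)
# Equality holds for the corresponding independent (product) distribution:
print(np.isclose(H(np.outer(p_x, p_y).ravel()), H(p_x) + H(p_y)))
```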
{
"chapter": 1,
"question_number": "1.32",
"difficulty": "easy",
"question_text": "Consider a vector x of continuous variables with distribution p(x) and corresponding entropy H[x]. Suppose that we make a nonsingular linear transformation of x to obtain a new variable y = Ax. Show that the corresponding entropy is given by $H[y] = H[x] + \\ln |A|$ where |A| denotes the determinant of A.",
"answer": "It is straightforward based on definition and note that if we want to change variable in integral, we have to introduce a redundant term called *Jacobian Determinant*.\n\n$$H[\\mathbf{y}] = -\\int p(\\mathbf{y}) ln p(\\mathbf{y}) d\\mathbf{y}$$\n\n$$= -\\int \\frac{p(\\mathbf{x})}{|\\mathbf{A}|} ln \\frac{p(\\mathbf{x})}{|\\mathbf{A}|} |\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}}| d\\mathbf{x}$$\n\n$$= -\\int p(\\mathbf{x}) ln \\frac{p(\\mathbf{x})}{|\\mathbf{A}|} d\\mathbf{x}$$\n\n$$= -\\int p(\\mathbf{x}) ln p(\\mathbf{x}) d\\mathbf{x} - \\int p(\\mathbf{x}) ln \\frac{1}{|\\mathbf{A}|} d\\mathbf{x}$$\n\n$$= H[\\mathbf{x}] + ln |\\mathbf{A}|$$\n\nWhere we have taken advantage of the following equations:\n\n$$\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}} = \\mathbf{A} \\quad \\text{and} \\quad p(\\mathbf{x}) = p(\\mathbf{y}) |\\frac{\\partial \\mathbf{y}}{\\partial \\mathbf{x}}| = p(\\mathbf{y}) |\\mathbf{A}|$$\n$$\\int p(\\mathbf{x}) d\\mathbf{x} = 1$$",
"answer_length": 912
},
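Because the differential entropy of a Gaussian is available in closed form (H[x] = ½ ln det(2πeΣ), see Exercise 1.35 below), the identity H[y] = H[x] + ln|A| of Exercise 1.32 can be checked directly, reading |A| as |det A|. This sketch is not from the original proof; Σ and A are arbitrary, and numpy is assumed available.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a Gaussian with covariance cov: 0.5 * ln det(2*pi*e*cov)."""
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * cov)[1]

Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])   # covariance of x
A = np.array([[1.0, 2.0], [0.5, 3.0]])       # nonsingular linear transformation

H_x = gaussian_entropy(Sigma)
H_y = gaussian_entropy(A @ Sigma @ A.T)      # y = A x has covariance A Sigma A^T
log_abs_det_A = np.linalg.slogdet(A)[1]

print(H_y, H_x + log_abs_det_A)              # the two values should match
```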
{
"chapter": 1,
"question_number": "1.33",
"difficulty": "medium",
"question_text": "Suppose that the conditional entropy H[y|x] between two discrete random variables x and y is zero. Show that, for all values of x such that p(x) > 0, the variable y must be a function of x, in other words for each x there is only one value of y such that $p(y|x) \\neq 0$ .",
"answer": "Based on the definition of *Entropy*, we write:\n\n$$H[y|x] = -\\sum_{x_i} \\sum_{y_j} p(x_i, y_j) ln p(y_j|x_i)$$\n\nConsidering the property of *probability*, we can obtain that $0 \\le p(y_j|x_i) \\le 1$ , $0 \\le p(x_i, y_j) \\le 1$ . Therefore, we can see that $-p(x_i, y_j) \\ln p(y_j|x_i) \\ge 0$ when $0 < p(y_j|x_i) \\le 1$ . And when $p(y_j|x_i) = 0$ , provided with the fact that $\\lim_{n \\to 0} p \\ln p = 0$ \n\n0, we can see that $-p(x_i, y_j) ln p(y_j|x_i) = -p(x_i) p(y_j|x_i) ln p(y_j|x_i) \\approx 0$ , (here we view p(x) as a constant). Hence for an arbitrary term in the equation above, we have proved that it can not be less than 0. In other words, if and only if every term of H[y|x] equals to 0, H[y|x] will equal to 0.\n\nTherefore, for each possible value of random variable x, denoted as $x_i$ :\n\n$$-\\sum_{y_j} p(x_i, y_j) \\ln p(y_j | x_i) = 0 \\tag{*}$$\n\nIf there are more than one possible value of random variable y given $x = x_i$ , denoted as $y_j$ , such that $p(y_j|x_i) \\neq 0$ (Because $x_i, y_j$ are both \"possible\", $p(x_i, y_j)$ will also not equal to 0), constrained by $0 \\leq p(y_j|x_i) \\leq 1$ and $\\sum_j p(y_j|x_i) = 1$ , there should be at least two value of y satisfied $0 < p(y_j|x_i) < 1$ , which ultimately leads to (\\*) > 0.\n\nTherefore, for each possible value of x, there will only be one y such that $p(y|x) \\neq 0$ . In other words, y is determined by x. Note: This result is quite straightforward. If y is a function of x, we can obtain the value of y as soon as observing a x. Therefore we will obtain no additional information when observing a $y_j$ given an already observed x.",
"answer_length": 1638
},
{
"chapter": 1,
"question_number": "1.34",
"difficulty": "medium",
"question_text": "\\star)$ www Use the calculus of variations to show that the stationary point of the functional (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$) is given by (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$). Then use the constraints (1.105: $\\int_{-\\infty}^{\\infty} p(x) \\, \\mathrm{d}x = 1$), (1.106: $\\int_{-\\infty}^{\\infty} x p(x) \\, \\mathrm{d}x = \\mu$), and (1.107: $\\int_{-\\infty}^{\\infty} (x - \\mu)^2 p(x) \\, \\mathrm{d}x = \\sigma^2.$) to eliminate the Lagrange multipliers and hence show that the maximum entropy solution is given by the Gaussian (1.109: $p(x) = \\frac{1}{(2\\pi\\sigma^2)^{1/2}} \\exp\\left\\{-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right\\}$).",
"answer": "This problem is complicated. We will explain it in detail. According to Appenddix D, we can obtain the relation, i.e. (D.3):\n\n$$F[y(x) + \\epsilon \\eta(x)] = F[y(x)] + \\int \\frac{\\partial F}{\\partial y} \\epsilon \\eta(x) dx \\qquad (**)$$\n\nWhere y(x) can be viewed as an operator that for any input x it will give an output value y, and equivalently, F[y(x)] can be viewed as an functional operator that for any input value y(x), it will give an ouput value F[y(x)]. Then we consider a functional operator:\n\n$$I[p(x)] = \\int p(x)f(x) dx$$\n\nUnder a small variation $p(x) \\rightarrow p(x) + \\epsilon \\eta(x)$ , we will obtain :\n\n$$I[p(x) + \\epsilon \\eta(x)] = \\int p(x)f(x)dx + \\int \\epsilon \\eta(x)f(x)dx$$\n\nComparing the equation above and (\\*), we can draw a conclusion :\n\n$$\\frac{\\partial I}{\\partial p(x)} = f(x)$$\n\nSimilarly, let's consider another functional operator:\n\n$$J[p(x)] = \\int p(x)lnp(x)dx$$\n\nThen under a small variation $p(x) \\rightarrow p(x) + \\epsilon \\eta(x)$ :\n\n$$J[p(x) + \\epsilon \\eta(x)] = \\int (p(x) + \\epsilon \\eta(x)) \\ln(p(x) + \\epsilon \\eta(x)) dx$$\n$$= \\int p(x) \\ln(p(x) + \\epsilon \\eta(x)) dx + \\int \\epsilon \\eta(x) \\ln(p(x) + \\epsilon \\eta(x)) dx$$\n\nNote that $\\epsilon \\eta(x)$ is much smaller than p(x), we will write its *Taylor Theorems* at point p(x):\n\n$$ln(p(x) + \\epsilon \\eta(x)) = lnp(x) + \\frac{\\epsilon \\eta(x)}{p(x)} + O(\\epsilon \\eta(x)^{2})$$\n\nTherefore, we substitute the equation above into $J[p(x) + \\epsilon \\eta(x)]$ :\n\n$$J[p(x) + \\epsilon \\eta(x)] = \\int p(x) \\ln p(x) dx + \\epsilon \\eta(x) \\int (\\ln p(x) + 1) dx + O(\\epsilon^{2})$$\n\nTherefore, we also obtain:\n\n$$\\frac{\\partial J}{\\partial p(x)} = lnp(x) + 1$$\n\nNow we can go back to (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$). Based on $\\frac{\\partial J}{\\partial p(x)}$ and $\\frac{\\partial I}{\\partial p(x)}$ , we can calculate the derivative of the expression just before (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$) and let it equal to 0:\n\n$$-ln p(x) - 1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2 = 0$$\n\nHence we rearrange it and obtain (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$). From (1.108: $p(x) = \\exp\\left\\{-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2\\right\\}.$) we can see that p(x) should take the form of a Gaussian distribution. So we rewrite it into Gaussian form and then compare it to a Gaussian distribution with mean $\\mu$ and variance $\\sigma^2$ , it is straightforward:\n\n$$exp(-1+\\lambda_1) = \\frac{1}{(2\\pi\\sigma^2)^{\\frac{1}{2}}} \\quad , \\quad exp(\\lambda_2 x + \\lambda_3 (x-\\mu)^2) = exp\\{\\frac{(x-\\mu)^2}{2\\sigma^2}\\}$$\n\nFinally, we obtain:\n\n$$\\lambda_1 = 1 - \\ln(2\\pi\\sigma^2)$$\n\n$$\\lambda_2 = 0$$\n\n$$\\lambda_3 = -\\frac{1}{2\\sigma^2}$$\n\nNote that there is a typo in the official solution manual about $\\lambda_3$ . Moreover, in the following parts, we will substitute p(x) back into the three constraints and analytically prove that p(x) is Gaussian. You can skip the following part. 
(The writer would especially thank Dr.Spyridon Chavlis from IMBB,FORTH for this analysis)\n\nWe already know:\n\n$$p(x) = exp(-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2)$$\n\nWhere the exponent is equal to:\n\n$$-1 + \\lambda_1 + \\lambda_2 x + \\lambda_3 (x - \\mu)^2 = \\lambda_3 x^2 + (\\lambda_2 - 2\\lambda_3 \\mu) x + (\\lambda_3 \\mu^2 + \\lambda_1 - 1)$$\n\nCompleting the square, we can obtain that:\n\n$$ax^{2} + bx + c = a(x - d)^{2} + f, d = -\\frac{b}{2a}, f = c - \\frac{b^{2}}{4a}$$\n\nUsing this quadratic form, the constraints can be written as\n\n1. \n$$\\int_{-\\infty}^{\\infty} p(x)dx = \\int_{-\\infty}^{\\infty} e^{[a(x-d)^2+f]} dx = 1$$\n\n2. \n$$\\int_{-\\infty}^{\\infty} x p(x) dx = \\int_{-\\infty}^{\\infty} x e^{[a(x-d)^2+f]} dx = \\mu$$\n\n3. \n$$\\int_{-\\infty}^{\\infty} (x-\\mu)^2 p(x) dx = \\int_{-\\infty}^{\\infty} (x-\\mu)^2 e^{[a(x-d)^2+f]} dx = \\sigma^2$$\n\nThe first constraint can be written as:\n\n$$I_1 = \\int_{-\\infty}^{\\infty} e^{[a(x-d)^2 + f]} dx = e^f \\int_{-\\infty}^{\\infty} e^{a(x-d)^2} dx$$\n\nLet u = x - d, which gives du = dx, and thus:\n\n$$I_1 = e^f \\int_{-\\infty}^{\\infty} e^{au^2} du$$\n\nLet $-w^2 = au^2 \\Rightarrow w = \\sqrt{-a}u \\Rightarrow dw = \\sqrt{-a}du$ , and thus:\n\n$$I_1 = \\frac{e^f}{\\sqrt{-a}} \\int_{-\\infty}^{\\infty} e^{-w^2} dw$$\n\nAs $e^{-x^2}$ is an even function the integral is written as:\n\n$$I_1 = \\frac{2e^f}{\\sqrt{-a}} \\int_0^\\infty e^{-w^2} dw$$\n\nLet $w^2 = t \\Rightarrow w = \\sqrt{t} \\Rightarrow dw = \\frac{1}{2\\sqrt{t}}dt$ , and thus:\n\n$$I_1 = \\frac{2e^f}{\\sqrt{-a}} \\int_0^\\infty t^{-\\frac{1}{2}} e^{-t} dt = \\frac{2e^f}{\\sqrt{-a}} \\int_0^\\infty \\frac{1}{2} t^{\\frac{1}{2} - 1} e^{-t} dt = \\frac{e^f}{\\sqrt{-a}} \\Gamma(\\frac{1}{2}) = e^f \\sqrt{\\frac{\\pi}{-a}}$$\n\nHere the Gamma function is used. Gamma function is defined as\n\n$$\\Gamma(z) = \\int_0^\\infty t^{z-1} e^{-t} dt$$\n\nwhere for non-negative integer values of n, we have:\n\n$$\\Gamma(\\frac{1}{2}+n) = \\frac{(2n)!}{4^n n!} \\sqrt{\\pi}$$\n\nThus, the first constraint can be rewritten as:\n\n$$e^f \\sqrt{\\frac{\\pi}{-a}} = 1 \\tag{*}$$\n\nThe second constraint can be written as:\n\n$$I_2 = \\int_{-\\infty}^{\\infty} x e^{[a(x-d)^2 + f]} dx = e^f \\int_{-\\infty}^{\\infty} x e^{a(x-d)^2} dx$$\n\nLet $u = x - d \\Rightarrow x = u + d \\Rightarrow du = dx$ , and thus:\n\n$$I_2 = e^f \\int_{-\\infty}^{\\infty} (u+d)e^{au^2} du$$\n\nUsing integral additivity, we have:\n\n$$I_2 = e^f \\int_{-\\infty}^{\\infty} u e^{au^2} du + e^f \\int_{-\\infty}^{\\infty} de^{au^2} du$$\n\nWe first deal with the first term on the right hand side. 
Here we denote it as $I_{21}$ :\n\n$$I_{21}=e^f\\int_{-\\infty}^{\\infty}ue^{au^2}du=e^f\\left(\\int_{-\\infty}^{0}ue^{au^2}du+\\int_{0}^{\\infty}ue^{au^2}du\\right)$$\n\nSwapping the integration limits, we obtain:\n\n$$\\begin{split} I_{21} &= e^f \\left( -\\int_0^{-\\infty} u e^{au^2} du + \\int_0^{\\infty} u e^{au^2} du \\right) \\\\ &= e^f \\left( \\int_0^{-\\infty} (-u) e^{a(-u)^2} du + \\int_0^{\\infty} u e^{au^2} du \\right) \\\\ &= e^f \\left( -\\int_0^{\\infty} (-u) e^{a(-u)^2} (-du) + \\int_0^{\\infty} u e^{au^2} du \\right) = 0 \\end{split}$$\n\nThen we deal with the second term $I_{22}$ :\n\n$$I_{22} = e^f \\int_{-\\infty}^{\\infty} de^{au^2} du$$\n\nLet $-w^2 = au^2 \\Rightarrow w = \\sqrt{-a}u \\Rightarrow dw = \\sqrt{-a}du$ , and thus:\n\n$$I_{22} = \\frac{e^f d}{\\sqrt{-a}} \\int_{-\\infty}^{\\infty} e^{-w^2} dw$$\n\nAs $e^{-x^2}$ is an even function the integral is written as:\n\n$$I_{22} = \\frac{2e^f d}{\\sqrt{-a}} \\int_0^\\infty e^{-w^2} dw$$\n\nLet $w^2 = t \\Rightarrow w = \\sqrt{t} \\Rightarrow dw = \\frac{1}{2\\sqrt{t}}dt$ , and thus:\n\n$$I_{22} = \\frac{2e^f d}{\\sqrt{-a}} \\int_0^\\infty t^{-\\frac{1}{2}} e^{-t} dt = \\frac{2e^f d}{\\sqrt{-a}} \\int_0^\\infty \\frac{1}{2} t^{\\frac{1}{2}-1} e^{-t} dt = \\frac{e^f d}{\\sqrt{-a}} \\Gamma(\\frac{1}{2}) = e^f d\\sqrt{\\frac{\\pi}{-a}}$$\n\nThus, the second constraint can be rewritten\n\n$$e^f d\\sqrt{\\frac{\\pi}{-a}} = \\mu \\tag{**}$$\n\nCombining (\\*) and (\\*\\*), we can obtain that $d = \\mu$ . Recall that:\n\n$$d = -\\frac{b}{2a} = -\\frac{\\lambda_2 - 2\\lambda_3\\mu}{2\\lambda_3} = \\mu \\Rightarrow \\lambda_2 - 2\\lambda_3\\mu = -2\\lambda_3\\mu \\Rightarrow \\lambda_2 = 0$$\n\nSo far, we have:\n\n$$b = -2\\lambda_3\\mu$$\n\nAnd\n\n$$f = c - \\frac{b^2}{4a} = \\lambda_3 \\mu^2 + \\lambda_1 - 1 - \\frac{4\\lambda_3^2 \\mu^2}{4\\lambda_3} = \\lambda_1 - 1$$\n\nFinally, we deal with the third also the last constraint. 
Substituting $\\lambda_2 = 0$ into the last constraint, we have:\n\n$$I_{3} = \\int_{-\\infty}^{\\infty} (x - \\mu)^{2} e^{[\\lambda_{3}(x - \\mu)^{2} + \\lambda_{1} - 1]} dx = e^{\\lambda_{1} - 1} \\int_{-\\infty}^{\\infty} (x - \\mu)^{2} e^{\\lambda_{3}(x - \\mu)^{2}} dx$$\n\nLet $u = x - \\mu \\Rightarrow du = dx$ , and thus:\n\n$$I_3 = e^{\\lambda_1 - 1} \\int_{-\\infty}^{\\infty} u^2 e^{\\lambda_3 u^2} du$$\n\nLet $-w^2 = \\lambda_3 u^2 \\Rightarrow w = \\sqrt{-\\lambda_3} u \\Rightarrow dw = \\sqrt{-\\lambda_3} du$ , and thus:\n\n$$I_{3} = e^{\\lambda_{1} - 1} \\int_{-\\infty}^{\\infty} -\\frac{1}{\\lambda_{3}} w^{2} e^{-w^{2}} \\frac{dw}{\\sqrt{-\\lambda_{3}}} = \\frac{e^{\\lambda_{1} - 1}}{(-\\lambda_{3})^{\\frac{3}{2}}} \\int_{-\\infty}^{\\infty} w^{2} e^{-w^{2}} dw$$\n\nBecause it is an even function, we can further obtain:\n\n$$I_3 = 2\\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\int_0^\\infty w^2 e^{-w^2} dw$$\n\nLet $w^2 = t \\Rightarrow w = \\sqrt{t} \\Rightarrow dw = \\frac{1}{2\\sqrt{t}}dt$ , and thus:\n\n$$\\begin{split} I_3 &= 2\\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\int_0^\\infty t e^{-t} \\frac{1}{2\\sqrt{t}} dt = \\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\int_0^\\infty t^{1 - \\frac{1}{2}} e^{-t} dt \\\\ &= \\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\int_0^\\infty t^{\\frac{3}{2} - 1} e^{-t} dt \\\\ &= \\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\Gamma(\\frac{3}{2}) = \\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\frac{\\sqrt{\\pi}}{2} \\end{split}$$\n\nThus, the third constraint can be rewritten as\n\n$$\\frac{e^{\\lambda_1 - 1}}{(-\\lambda_3)^{\\frac{3}{2}}} \\frac{\\sqrt{\\pi}}{2} = \\sigma^2 \\tag{***}$$\n\nRewriting (\\*) with $f = \\lambda_1 - 1, d = \\mu$ and $a = \\lambda_3$ , we obtain the following equation\n\n$$e^{\\lambda_1 - 1} \\sqrt{\\frac{\\pi}{-\\lambda_3}} = 1 \\tag{****}$$\n\nSubstituting the equation above back into (\\*\\*\\*), we obtain\n\n$$\\sqrt{\\frac{-\\lambda_3}{\\pi}} \\frac{1}{(-\\lambda_3)^{\\frac{3}{2}}} \\frac{\\sqrt{\\pi}}{2} = \\sigma^2 \\Leftrightarrow -\\frac{1}{\\lambda_3} = 2\\sigma^2 \\Leftrightarrow \\lambda_3 = -\\frac{1}{2\\sigma^2}$$\n\nSubstituting $\\lambda_3$ back into (\\*\\*\\*\\*), we obtain:\n\n$$e^{\\lambda_1-1}\\sqrt{\\frac{\\pi}{-\\lambda_3}}=1\\Leftrightarrow e^{\\lambda_1-1}\\sqrt{\\frac{\\pi}{\\frac{1}{2\\sigma^2}}}=1\\Leftrightarrow e^{\\lambda_1-1}=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\Leftrightarrow \\lambda_1-1=\\ln(\\frac{1}{\\sqrt{2\\pi\\sigma^2}})$$\n\nThus, we obtain:\n\n$$\\lambda_1 = 1 - \\frac{1}{2} \\ln(2\\pi\\sigma^2)$$\n\nSo far, we have obtained $\\lambda_i$ , where i = 1,2,3. We substitute them back into p(x), yielding:\n\n$$p(x) = \\exp\\left(-1 + 1 - \\frac{1}{2}\\ln(2\\pi\\sigma^2) - \\frac{1}{2\\sigma^2}(x - \\mu)^2\\right)$$\n$$= \\exp\\left(-\\frac{1}{2}\\ln(2\\pi\\sigma^2)\\right)\\exp\\left(-\\frac{1}{2\\sigma^2}(x - \\mu)^2\\right)$$\n$$= \\exp\\left(\\ln\\left(\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\right)\\right)\\exp\\left(-\\frac{1}{2\\sigma^2}(x - \\mu)^2\\right)$$\n\nThus,\n\n$$p(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left(-\\frac{1}{2\\sigma^2}(x-\\mu)^2\\right)$$\n\nJust as required.",
"answer_length": 10306
},
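A small numerical cross-check of the multipliers derived in Exercise 1.34: with λ₁ = 1 − ½ln(2πσ²), λ₂ = 0 and λ₃ = −1/(2σ²), the density p(x) = exp(−1 + λ₁ + λ₂x + λ₃(x − μ)²) should integrate to 1 and reproduce the required mean and variance. This sketch is not part of the original solution; μ and σ² are arbitrary, and numpy/scipy are assumed available.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma2 = 0.7, 1.3                              # target mean and variance (arbitrary)
lam1 = 1 - 0.5 * np.log(2 * np.pi * sigma2)
lam2 = 0.0
lam3 = -1.0 / (2 * sigma2)

def p(x):
    return np.exp(-1 + lam1 + lam2 * x + lam3 * (x - mu) ** 2)

lo, hi = mu - 12 * np.sqrt(sigma2), mu + 12 * np.sqrt(sigma2)
Z, _ = quad(p, lo, hi)
m, _ = quad(lambda x: x * p(x), lo, hi)
v, _ = quad(lambda x: (x - mu) ** 2 * p(x), lo, hi)

print(Z, m, v)   # ~1.0, ~mu, ~sigma2: the three constraints are satisfied
```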
{
"chapter": 1,
"question_number": "1.35",
"difficulty": "easy",
"question_text": "Use the results (1.106: $\\int_{-\\infty}^{\\infty} x p(x) \\, \\mathrm{d}x = \\mu$) and (1.107: $\\int_{-\\infty}^{\\infty} (x - \\mu)^2 p(x) \\, \\mathrm{d}x = \\sigma^2.$) to show that the entropy of the univariate Gaussian (1.109: $p(x) = \\frac{1}{(2\\pi\\sigma^2)^{1/2}} \\exp\\left\\{-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right\\}$) is given by (1.110: $H[x] = \\frac{1}{2} \\left\\{ 1 + \\ln(2\\pi\\sigma^2) \\right\\}.$).",
"answer": "If $p(x) = \\mathcal{N}(\\mu, \\sigma^2)$ , we write its entropy:\n\n$$\\begin{split} H[x] &= -\\int p(x) ln p(x) dx \\\\ &= -\\int p(x) ln \\{ \\frac{1}{2\\pi\\sigma^2} \\} dx - \\int p(x) \\{ -\\frac{(x-\\mu)^2}{2\\sigma^2} \\} dx \\\\ &= -ln \\{ \\frac{1}{2\\pi\\sigma^2} \\} + \\frac{\\sigma^2}{2\\sigma^2} \\\\ &= \\frac{1}{2} \\{ 1 + ln(2\\pi\\sigma^2) \\} \\end{split}$$\n\nWhere we have taken advantage of the following properties of a Gaussian distribution:\n\n $\\int p(x)dx = 1 \\text{ and } \\int (x-\\mu)^2 p(x)dx = \\sigma^2$",
"answer_length": 492
},
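A quadrature check of the result of Exercise 1.35, H[x] = ½{1 + ln(2πσ²)}. This is a sketch with an arbitrary σ, not part of the original derivation, and assumes numpy/scipy are available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.0, 2.0

def neg_p_log_p(x):
    p = norm.pdf(x, mu, sigma)
    return -p * np.log(p)

H_numeric, _ = quad(neg_p_log_p, mu - 12 * sigma, mu + 12 * sigma)
H_closed = 0.5 * (1 + np.log(2 * np.pi * sigma**2))

print(H_numeric, H_closed)   # both ~2.112 for sigma = 2
```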
{
"chapter": 1,
"question_number": "1.36",
"difficulty": "easy",
"question_text": "A strictly convex function is defined as one for which every chord lies above the function. Show that this is equivalent to the condition that the second derivative of the function be positive.",
"answer": "Here we should make it clear that if the second derivative is strictly positive, the function must be strictly convex. However, the converse may not be true. For example $f(x) = x^4$ , $g(x) = x^2$ , $x \\in \\mathcal{R}$ are both strictly convex by definition, but their second derivatives at x = 0 are both indeed 0 (See keyword convex function on Wikipedia or Page 71 of the book Convex Optimization written by Boyd, Vandenberghe for more details). Hence, here more precisely we will prove that a convex function is equivalent to its second derivative is non-negative by first considering *Taylor Theorems*:\n\n$$f(x+\\epsilon) = f(x) + \\frac{f'(x)}{1!}\\epsilon + \\frac{f''(x)}{2!}\\epsilon^2 + \\frac{f'''(x)}{3!}\\epsilon^3 + \\dots$$\n\n$$f(x-\\epsilon) = f(x) - \\frac{f'(x)}{1!}\\epsilon + \\frac{f''(x)}{2!}\\epsilon^2 - \\frac{f'''(x)}{3!}\\epsilon^3 + \\dots$$\n\nThen we can obtain the expression of f''(x):\n\n$$f''(x) = \\lim_{\\epsilon \\to 0} \\frac{f(x+\\epsilon) + f(x-\\epsilon) - 2f(x)}{\\epsilon^2}$$\n\nWhere $O(\\epsilon^4)$ is neglected and if f(x) is convex, we can obtain:\n\n$$f(x) = f(\\frac{1}{2}(x+\\epsilon) + \\frac{1}{2}(x-\\epsilon)) \\le \\frac{1}{2}f(x+\\epsilon) + \\frac{1}{2}f(x-\\epsilon)$$\n\nHence $f''(x) \\ge 0$ . The converse situation is a little bit complex, we will use *Lagrange form of Taylor Theorems* to rewrite the Taylor Series Expansion above :\n\n$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \\frac{f''(x^*)}{2}(x - x_0)$$\n\nWhere $x^*$ lies between x and $x_0$ . By hypothesis, $f''(x) \\ge 0$ , the last term is non-negative for all x. We let $x_0 = \\lambda x_1 + (1 - \\lambda)x_2$ , and $x = x_1$ :\n\n$$f(x_1) \\ge f(x_0) + (1 - \\lambda)(x_1 - x_2)f'(x_0) \\tag{*}$$\n\nAnd then, we let $x = x_2$ :\n\n$$f(x_2) \\ge f(x_0) + \\lambda(x_2 - x_1)f'(x_0) \\tag{**}$$\n\nWe multiply (\\*) by $\\lambda$ , (\\*\\*) by $1-\\lambda$ and then add them together, we will see :\n\n$$\\lambda f(x_1) + (1 - \\lambda)f(x_2) \\ge f(\\lambda x_1 + (1 - \\lambda)x_2)$$",
"answer_length": 1946
},
{
"chapter": 1,
"question_number": "1.37",
"difficulty": "easy",
"question_text": "Using the definition (1.111: $H[\\mathbf{y}|\\mathbf{x}] = -\\iint p(\\mathbf{y}, \\mathbf{x}) \\ln p(\\mathbf{y}|\\mathbf{x}) \\, d\\mathbf{y} \\, d\\mathbf{x}$) together with the product rule of probability, prove the result (1.112: $H[\\mathbf{x}, \\mathbf{y}] = H[\\mathbf{y}|\\mathbf{x}] + H[\\mathbf{x}]$).",
"answer": "See Prob.1.31.",
"answer_length": 14
},
{
"chapter": 1,
"question_number": "1.38",
"difficulty": "medium",
"question_text": "\\star)$ www Using proof by induction, show that the inequality (1.114: $f(\\lambda a + (1 - \\lambda)b) \\leqslant \\lambda f(a) + (1 - \\lambda)f(b).$) for convex functions implies the result (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$).",
"answer": "When M = 2, (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$) will reduce to (1.114: $f(\\lambda a + (1 - \\lambda)b) \\leqslant \\lambda f(a) + (1 - \\lambda)f(b).$). We suppose (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$) holds for M, we will prove that it will also hold for M + 1.\n\n$$\\begin{split} f(\\sum_{m=1}^{M} \\lambda_m x_m) &= f(\\lambda_{M+1} x_{M+1} + (1 - \\lambda_{M+1}) \\sum_{m=1}^{M} \\frac{\\lambda_m}{1 - \\lambda_{M+1}} x_m) \\\\ &\\leq \\lambda_{M+1} f(x_{M+1}) + (1 - \\lambda_{M+1}) f(\\sum_{m=1}^{M} \\frac{\\lambda_m}{1 - \\lambda_{M+1}} x_m) \\\\ &\\leq \\lambda_{M+1} f(x_{M+1}) + (1 - \\lambda_{M+1}) \\sum_{m=1}^{M} \\frac{\\lambda_m}{1 - \\lambda_{M+1}} f(x_m) \\\\ &\\leq \\sum_{m=1}^{M+1} \\lambda_m f(x_m) \\end{split}$$\n\nHence, Jensen's Inequality, i.e. (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$), has been proved.",
"answer_length": 963
},
{
"chapter": 1,
"question_number": "1.39",
"difficulty": "hard",
"question_text": "\\star \\star)$ Consider two binary variables x and y having the joint distribution given in Table 1.3.\n\nEvaluate the following quantities\n\n(a) H[x]\n\n(c) H[y|x] (e) H[x,y] (d) H[x|y] (f) I[x,y].\n\n**(b)** H[y]\n\nDraw a diagram to show the relationship between these various quantities.\n\n### 66 1. INTRODUCTION",
"answer": "It is quite straightforward based on definition.\n\n$$H[x] = -\\sum_{i} p(x_{i}) ln p(x_{i}) = -\\frac{2}{3} ln \\frac{2}{3} - \\frac{1}{3} ln \\frac{1}{3} = 0.6365$$\n\n$$H[y] = -\\sum_{i} p(y_{i}) ln p(y_{i}) = -\\frac{2}{3} ln \\frac{2}{3} - \\frac{1}{3} ln \\frac{1}{3} = 0.6365$$\n\n$$H[x, y] = -\\sum_{i,j} p(x_{i}, y_{j}) ln p(x_{i}, y_{j}) = -3 \\cdot \\frac{1}{3} ln \\frac{1}{3} - 0 = 1.0986$$\n\n$$H[x|y] = -\\sum_{i,j} p(x_{i}, y_{j}) ln p(x_{i}|y_{j}) = -\\frac{1}{3} ln 1 - \\frac{1}{3} ln \\frac{1}{2} - \\frac{1}{3} ln \\frac{1}{2} = 0.4621$$\n\n$$H[y|x] = -\\sum_{i,j} p(x_{i}, y_{j}) ln p(y_{j}|x_{i}) = -\\frac{1}{3} ln \\frac{1}{2} - -\\frac{1}{3} ln \\frac{1}{2} - \\frac{1}{3} ln 1 = 0.4621$$\n\n$$I[x, y] = -\\sum_{i,j} p(x_{i}, y_{j}) ln \\frac{p(x_{i}) p(y_{j})}{p(x_{i}, y_{j})}$$\n\n$$= -\\frac{1}{3} ln \\frac{\\frac{2}{3} \\cdot \\frac{1}{3}}{1/3} - \\frac{1}{3} ln \\frac{\\frac{2}{3} \\cdot \\frac{2}{3}}{1/3} - \\frac{1}{3} ln \\frac{\\frac{1}{3} \\cdot \\frac{2}{3}}{1/3} = 0.1744$$\n\nTheir relations are given below, diagrams omitted.\n\n$$I[x, y] = H[x] - H[x|y] = H[y] - H[y|x]$$\n\n$$H[x, y] = H[y|x] + H[x] = H[x|y] + H[y]$$",
"answer_length": 1100
},
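The six quantities of Exercise 1.39 can be reproduced mechanically from the joint table; the sketch below (not part of the original solution, assuming numpy is available) does so via the chain-rule identities quoted above and matches the numbers 0.6365, 1.0986, 0.4621 and 0.1744.

```python
import numpy as np

# Joint distribution p(x, y) from Table 1.3: rows index x, columns index y.
p_xy = np.array([[1/3, 1/3],
                 [0.0, 1/3]])

def H(p):
    """Entropy in nats (zero entries contribute 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

H_x, H_y, H_xy = H(p_x), H(p_y), H(p_xy.ravel())
H_x_given_y = H_xy - H_y
H_y_given_x = H_xy - H_x
I_xy = H_x + H_y - H_xy

print(H_x, H_y, H_xy)             # 0.6365, 0.6365, 1.0986
print(H_x_given_y, H_y_given_x)   # 0.4621, 0.4621
print(I_xy)                       # 0.1744
```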
{
"chapter": 1,
"question_number": "1.4",
"difficulty": "medium",
"question_text": "Consider a probability density $p_x(x)$ defined over a continuous variable x, and suppose that we make a nonlinear change of variable using x = g(y), so that the density transforms according to (1.27: $= p_{x}(g(y)) |g'(y)|.$). By differentiating (1.27: $= p_{x}(g(y)) |g'(y)|.$), show that the location $\\widehat{y}$ of the maximum of the density in y is not in general related to the location $\\widehat{x}$ of the maximum of the density over x by the simple functional relation $\\widehat{x} = g(\\widehat{y})$ as a consequence of the Jacobian factor. This shows that the maximum of a probability density (in contrast to a simple function) is dependent on the choice of variable. Verify that, in the case of a linear transformation, the location of the maximum transforms in the same way as the variable itself.",
"answer": "This problem needs knowledge about *calculus*, especially about *Chain rule*. We calculate the derivative of $P_y(y)$ with respect to y, according to (1.27):\n\n$$\\frac{dp_{y}(y)}{dy} = \\frac{d(p_{x}(g(y))|g'(y)|)}{dy} = \\frac{dp_{x}(g(y))}{dy}|g'(y)| + p_{x}(g(y))\\frac{d|g'(y)|}{dy} \\qquad (*)$$\n\nThe first term in the above equation can be further simplified:\n\n$$\\frac{dp_x(g(y))}{dy}|g'(y)| = \\frac{dp_x(g(y))}{dg(y)}\\frac{dg(y)}{dy}|g'(y)|$$\n (\\*\\*)\n\nIf $\\hat{x}$ is the maximum of density over x, we can obtain :\n\n$$\\frac{dp_x(x)}{dx}\\big|_{\\hat{x}}=0$$\n\nTherefore, when $y = \\hat{y}, s.t.\\hat{x} = g(\\hat{y})$ , the first term on the right side of (\\*\\*) will be 0, leading the first term in (\\*) equals to 0, however because of the existence of the second term in (\\*), the derivative may not equal to 0. But\n\nwhen linear transformation is applied, the second term in (\\*) will vanish, (e.g. x = ay + b). A simple example can be shown by :\n\n$$p_x(x) = 2x, \\quad x \\in [0,1] = \\hat{x} = 1$$\n\nAnd given that:\n\n$$x = sin(y)$$\n\nTherefore, $p_y(y) = 2\\sin(y)|\\cos(y)|, y \\in [0, \\frac{\\pi}{2}],$ which can be simplified :\n\n$$p_{y}(y) = \\sin(2y), \\quad y \\in [0, \\frac{\\pi}{2}] \\quad => \\quad \\hat{y} = \\frac{\\pi}{4}$$\n\nHowever, it is quite obvious:\n\n$$\\hat{x} \\neq \\sin(\\hat{y})$$",
"answer_length": 1288
},
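The sin example at the end of the Exercise 1.4 solution is easy to confirm numerically: the mode of p_y(y) = sin(2y) on [0, π/2] is ŷ = π/4, while g(ŷ) = sin(π/4) ≈ 0.707 differs from x̂ = 1. A minimal sketch, not part of the original text, assuming numpy is available:

```python
import numpy as np

y = np.linspace(0.0, np.pi / 2, 100001)
p_y = np.sin(2 * y)          # density of y after the change of variable x = sin(y)

y_hat = y[np.argmax(p_y)]
print(y_hat, np.pi / 4)      # mode of p_y is at pi/4
print(np.sin(y_hat))         # g(y_hat) ~ 0.707, not the mode x_hat = 1 of p_x(x) = 2x
```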
{
"chapter": 1,
"question_number": "1.40",
"difficulty": "easy",
"question_text": "By applying Jensen's inequality (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$) with $f(x) = \\ln x$ , show that the arithmetic mean of a set of real numbers is never less than their geometrical mean.",
"answer": "f(x) = lnx is actually a strict concave function, therefore we take advantage of *Jensen's Inequality* to obtain:\n\n$$f(\\sum_{i=1}^{M} \\lambda_m x_m) \\ge \\sum_{i=1}^{M} \\lambda_m f(x_m)$$\n\nWe let $\\lambda_m = \\frac{1}{M}, m = 1, 2, ..., M$ . Hence we will obtain:\n\n$$ln(\\frac{x_1 + x_2 + \\dots + x_m}{M}) \\ge \\frac{1}{M}[ln(x_1) + ln(x_2) + \\dots + ln(x_M)] = \\frac{1}{M}ln(x_1x_2...x_M)$$\n\nWe take advantage of the fact that f(x) = lnx is strictly increasing and then obtain:\n\n$$\\frac{x_1 + x_2 + ... + x_m}{M} \\ge \\sqrt[M]{x_1 x_2 ... x_M}$$",
"answer_length": 543
},
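A one-line numerical illustration of the arithmetic-mean/geometric-mean inequality of Exercise 1.40 (a sketch with arbitrary positive numbers, not part of the original solution; numpy assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=8)       # arbitrary positive numbers

arithmetic = x.mean()
geometric = np.exp(np.log(x).mean())     # M-th root of the product

print(arithmetic >= geometric)           # True
print(arithmetic, geometric)
```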
{
"chapter": 1,
"question_number": "1.41",
"difficulty": "easy",
"question_text": "Using the sum and product rules of probability, show that the mutual information $I(\\mathbf{x}, \\mathbf{y})$ satisfies the relation (1.121: $I[\\mathbf{x}, \\mathbf{y}] = H[\\mathbf{x}] - H[\\mathbf{x}|\\mathbf{y}] = H[\\mathbf{y}] - H[\\mathbf{y}|\\mathbf{x}].$).",
"answer": "Based on definition of $I[\\mathbf{x}, \\mathbf{y}]$ , i.e.(1.120), we obtain:\n\n$$I[\\mathbf{x}, \\mathbf{y}] = -\\int \\int p(\\mathbf{x}, \\mathbf{y}) ln \\frac{p(\\mathbf{x})p(\\mathbf{y})}{p(\\mathbf{x}, \\mathbf{y})} d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y}) ln \\frac{p(\\mathbf{x})}{p(\\mathbf{x}|\\mathbf{y})} d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}, \\mathbf{y}) ln p(\\mathbf{x}) d\\mathbf{x} d\\mathbf{y} + \\int \\int p(\\mathbf{x}, \\mathbf{y}) ln p(\\mathbf{x}|\\mathbf{y}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= -\\int \\int p(\\mathbf{x}) ln p(\\mathbf{x}) d\\mathbf{x} + \\int \\int p(\\mathbf{x}, \\mathbf{y}) ln p(\\mathbf{x}|\\mathbf{y}) d\\mathbf{x} d\\mathbf{y}$$\n\n$$= H[\\mathbf{x}] - H[\\mathbf{x}|\\mathbf{y}]$$\n\nWhere we have taken advantage of the fact: $p(\\mathbf{x}, \\mathbf{y}) = p(\\mathbf{y})p(\\mathbf{x}|\\mathbf{y})$ , and $\\int p(\\mathbf{x}, \\mathbf{y}) d\\mathbf{y} = p(\\mathbf{x})$ . The same process can be used for proving $I[\\mathbf{x}, \\mathbf{y}] = H[\\mathbf{y}] - H[\\mathbf{y}|\\mathbf{x}]$ , if we substitute $p(\\mathbf{x}, \\mathbf{y})$ with $p(\\mathbf{x})p(\\mathbf{y}|\\mathbf{x})$ in the second step.\n\n# 0.2 Probability Distribution",
"answer_length": 1171
},
{
"chapter": 1,
"question_number": "1.5",
"difficulty": "easy",
"question_text": "Using the definition (1.38: $var[f] = \\mathbb{E}\\left[ \\left( f(x) - \\mathbb{E}[f(x)] \\right)^2 \\right]$) show that var[f(x)] satisfies (1.39: $var[f] = \\mathbb{E}[f(x)^2] - \\mathbb{E}[f(x)]^2.$).",
"answer": "This problem takes advantage of the property of expectation:\n\n$$\\begin{aligned} var[f] &= & \\mathbb{E}[(f(x) - \\mathbb{E}[f(x)])^2] \\\\ &= & \\mathbb{E}[f(x)^2 - 2f(x)\\mathbb{E}[f(x)] + \\mathbb{E}[f(x)]^2] \\\\ &= & \\mathbb{E}[f(x)^2] - 2\\mathbb{E}[f(x)]^2 + \\mathbb{E}[f(x)]^2 \\\\ => & var[f] &= & \\mathbb{E}[f(x)^2] - \\mathbb{E}[f(x)]^2 \\end{aligned}$$",
"answer_length": 349
},
{
"chapter": 1,
"question_number": "1.6",
"difficulty": "easy",
"question_text": "Show that if two variables x and y are independent, then their covariance is zero.",
"answer": "Based on (1.41), we only need to prove when x and y is independent, $\\mathbb{E}_{x,y}[xy] = \\mathbb{E}[x]\\mathbb{E}[y]$ . Because x and y is independent, we have :\n\n$$p(x, y) = p_x(x) p_y(y)$$\n\nTherefore:\n\n$$\\iint xyp(x,y)dxdy = \\iint xyp_x(x)p_y(y)dxdy$$\n\n$$= (\\int xp_x(x)dx)(\\int yp_y(y)dy)$$\n\n$$=> \\mathbb{E}_{x,y}[xy] = \\mathbb{E}[x]\\mathbb{E}[y]$$",
"answer_length": 354
},
{
"chapter": 1,
"question_number": "1.7",
"difficulty": "medium",
"question_text": "In this exercise, we prove the normalization condition (1.48: $\\int_{-\\infty}^{\\infty} \\mathcal{N}\\left(x|\\mu,\\sigma^2\\right) \\, \\mathrm{d}x = 1.$) for the univariate Gaussian. To do this consider, the integral\n\n$$I = \\int_{-\\infty}^{\\infty} \\exp\\left(-\\frac{1}{2\\sigma^2}x^2\\right) dx \\tag{1.124}$$\n\nwhich we can evaluate by first writing its square in the form\n\n$$I^{2} = \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} \\exp\\left(-\\frac{1}{2\\sigma^{2}}x^{2} - \\frac{1}{2\\sigma^{2}}y^{2}\\right) dx dy.$$\n (1.125: $I^{2} = \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} \\exp\\left(-\\frac{1}{2\\sigma^{2}}x^{2} - \\frac{1}{2\\sigma^{2}}y^{2}\\right) dx dy.$)\n\nNow make the transformation from Cartesian coordinates (x, y) to polar coordinates $(r, \\theta)$ and then substitute $u = r^2$ . Show that, by performing the integrals over $\\theta$ and u, and then taking the square root of both sides, we obtain\n\n$$I = (2\\pi\\sigma^2)^{1/2}. (1.126)$$\n\nFinally, use this result to show that the Gaussian distribution $\\mathcal{N}(x|\\mu,\\sigma^2)$ is normalized.",
"answer": "We need Integration by substitution.\n\n$$\\begin{split} I^2 &= \\int_{-\\infty}^{+\\infty} \\int_{-\\infty}^{+\\infty} exp(-\\frac{1}{2\\sigma^2}x^2 - \\frac{1}{2\\sigma^2}y^2) dx dy \\\\ &= \\int_{0}^{2\\pi} \\int_{0}^{+\\infty} exp(-\\frac{1}{2\\sigma^2}r^2) r dr d\\theta \\end{split}$$\n\nHere we utilize:\n\n$$x = r\\cos\\theta$$\n, $y = r\\sin\\theta$ \n\nBased on the fact:\n\n$$\\int_{0}^{+\\infty} exp(-\\frac{r^{2}}{2\\sigma^{2}})r\\,dr = -\\sigma^{2}exp(-\\frac{r^{2}}{2\\sigma^{2}})\\big|_{0}^{+\\infty} = -\\sigma^{2}(0-1) = \\sigma^{2}$$\n\nTherefore, I can be solved:\n\n$$I^{2} = \\int_{0}^{2\\pi} \\sigma^{2} d\\theta = 2\\pi\\sigma^{2}, = > I = \\sqrt{2\\pi}\\sigma$$\n\nAnd next,we will show that Gaussian distribution $\\mathcal{N}(x|\\mu,\\sigma^2)$ is normalized, (i.e. $\\int_{-\\infty}^{+\\infty} \\mathcal{N}(x|\\mu,\\sigma^2) dx = 1$ ):\n\n$$\\begin{split} \\int_{-\\infty}^{+\\infty} \\mathcal{N}(x \\big| \\mu, \\sigma^2) \\, dx &= \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{ -\\frac{1}{2\\sigma^2} (x - \\mu)^2 \\} \\, dx \\\\ &= \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{ -\\frac{1}{2\\sigma^2} y^2 \\} \\, dy \\quad (y = x - \\mu) \\\\ &= \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\int_{-\\infty}^{+\\infty} exp\\{ -\\frac{1}{2\\sigma^2} y^2 \\} \\, dy \\\\ &= 1 \\end{split}$$",
"answer_length": 1228
},
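A quadrature check of the result of Exercise 1.7, I = (2πσ²)^{1/2} (a sketch with an arbitrary σ, not part of the original proof; numpy/scipy assumed available):

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.7

I, _ = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), -np.inf, np.inf)
print(I, np.sqrt(2 * np.pi * sigma**2))   # both ~4.261 for sigma = 1.7
```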
{
"chapter": 1,
"question_number": "1.8",
"difficulty": "medium",
"question_text": "By using a change of variables, verify that the univariate Gaussian distribution given by (1.46: $\\mathcal{N}(x|\\mu,\\sigma^2) = \\frac{1}{(2\\pi\\sigma^2)^{1/2}} \\exp\\left\\{-\\frac{1}{2\\sigma^2}(x-\\mu)^2\\right\\}$) satisfies (1.49: $\\mathbb{E}[x] = \\int_{-\\infty}^{\\infty} \\mathcal{N}(x|\\mu, \\sigma^2) x \\, \\mathrm{d}x = \\mu.$). Next, by differentiating both sides of the normalization condition\n\n$$\\int_{-\\infty}^{\\infty} \\mathcal{N}\\left(x|\\mu,\\sigma^2\\right) \\, \\mathrm{d}x = 1 \\tag{1.127}$$\n\nwith respect to $\\sigma^2$ , verify that the Gaussian satisfies (1.50: $\\mathbb{E}[x^2] = \\int_{-\\infty}^{\\infty} \\mathcal{N}\\left(x|\\mu, \\sigma^2\\right) x^2 \\, \\mathrm{d}x = \\mu^2 + \\sigma^2.$). Finally, show that (1.51: $var[x] = \\mathbb{E}[x^2] - \\mathbb{E}[x]^2 = \\sigma^2$) holds.",
"answer": "The first question will need the result of Prob.1.7:\n\n$$\\begin{split} \\int_{-\\infty}^{+\\infty} \\mathcal{N}(x|\\mu,\\sigma^2) x \\, dx &= \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{-\\frac{1}{2\\sigma^2} (x-\\mu)^2\\} x \\, dx \\\\ &= \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{-\\frac{1}{2\\sigma^2} y^2\\} (y+\\mu) \\, dy \\quad (y=x-\\mu) \\\\ &= \\mu \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{-\\frac{1}{2\\sigma^2} y^2\\} \\, dy + \\int_{-\\infty}^{+\\infty} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\{-\\frac{1}{2\\sigma^2} y^2\\} y \\, dy \\\\ &= \\mu + 0 = \\mu \\end{split}$$\n\nThe second problem has already be given hint in the description. Given that:\n\n$$\\frac{d(fg)}{dx} = f\\frac{dg}{dx} + g\\frac{df}{dx}$$\n\nWe differentiate both side of (1.127: $\\int_{-\\infty}^{\\infty} \\mathcal{N}\\left(x|\\mu,\\sigma^2\\right) \\, \\mathrm{d}x = 1$) with respect to $\\sigma^2$ , we will obtain :\n\n$$\\int_{-\\infty}^{+\\infty} (-\\frac{1}{2\\sigma^2} + \\frac{(x-\\mu)^2}{2\\sigma^4}) \\mathcal{N}(x|\\mu,\\sigma^2) dx = 0$$\n\nProvided the fact that $\\sigma \\neq 0$ , we can get:\n\n$$\\int_{-\\infty}^{+\\infty} (x-\\mu)^2 \\mathcal{N}(x\\big|\\mu,\\sigma^2) dx = \\int_{-\\infty}^{+\\infty} \\sigma^2 \\mathcal{N}(x\\big|\\mu,\\sigma^2) dx = \\sigma^2$$\n\nSo the equation above has actually proven (1.51: $var[x] = \\mathbb{E}[x^2] - \\mathbb{E}[x]^2 = \\sigma^2$), according to the definition:\n\n$$var[x] = \\int_{-\\infty}^{+\\infty} (x - \\mathbb{E}[x])^2 \\mathcal{N}(x|\\mu, \\sigma^2) dx$$\n\nWhere $\\mathbb{E}[x] = \\mu$ has already been proved. Therefore :\n\n$$var[x] = \\sigma^2$$\n\nFinally,\n\n$$\\mathbb{E}[x^2] = var[x] + \\mathbb{E}[x]^2 = \\sigma^2 + u^2$$",
"answer_length": 1622
},
{
"chapter": 1,
"question_number": "1.9",
"difficulty": "easy",
"question_text": "Show that the mode (i.e. the maximum) of the Gaussian distribution (1.46: $\\mathcal{N}(x|\\mu,\\sigma^2) = \\frac{1}{(2\\pi\\sigma^2)^{1/2}} \\exp\\left\\{-\\frac{1}{2\\sigma^2}(x-\\mu)^2\\right\\}$) is given by $\\mu$ . Similarly, show that the mode of the multivariate Gaussian (1.52: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$) is given by $\\mu$ .",
"answer": "Here we only focus on (1.52: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$), because (1.52: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$) is the general form of (1.42: $= \\mathbb{E}_{\\mathbf{x}, \\mathbf{y}} [\\mathbf{x} \\mathbf{y}^{\\mathrm{T}}] - \\mathbb{E}[\\mathbf{x}] \\mathbb{E}[\\mathbf{y}^{\\mathrm{T}}].$). Based on the definition: The maximum of distribution is known as its mode and (1.52: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$), we can obtain:\n\n$$\\frac{\\partial \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})}{\\partial \\mathbf{x}} = -\\frac{1}{2} [\\boldsymbol{\\Sigma}^{-1} + (\\boldsymbol{\\Sigma}^{-1})^T] (\\mathbf{x} - \\boldsymbol{\\mu}) \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$$\n$$= -\\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$$\n\nWhere we take advantage of:\n\n$$\\frac{\\partial \\mathbf{x}^T \\mathbf{A} \\mathbf{x}}{\\partial \\mathbf{x}} = (\\mathbf{A} + \\mathbf{A}^T) \\mathbf{x} \\quad \\text{and} \\quad (\\mathbf{\\Sigma}^{-1})^T = \\mathbf{\\Sigma}^{-1}$$\n\nTherefore,\n\nonly when \n$$\\mathbf{x} = \\boldsymbol{\\mu}, \\frac{\\partial \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})}{\\partial \\mathbf{x}} = 0$$\n\nNote: You may also need to calculate *Hessian Matrix* to prove that it is maximum. However, here we find that the first derivative only has one root. Based on the description in the problem, this point should be maximum point.",
"answer_length": 2113
}
]
},
{
"chapter_number": 2,
"total_questions": 60,
"difficulty_breakdown": {
"easy": 26,
"medium": 16,
"hard": 3,
"unknown": 16
},
"questions": [
{
"chapter": 2,
"question_number": "2.1",
"difficulty": "easy",
"question_text": "Verify that the Bernoulli distribution (2.2: $(x|\\mu) = \\mu^x (1-\\mu)^{1-x}$) satisfies the following properties\n\n$$\\sum_{x=0}^{1} p(x|\\mu) = 1 (2.257)$$\n\n$$\\mathbb{E}[x] = \\mu \\tag{2.258}$$\n\n$$var[x] = \\mu(1-\\mu).$$\n (2.259: $var[x] = \\mu(1-\\mu).$)\n\nShow that the entropy $\\mathrm{H}[x]$ of a Bernoulli distributed random binary variable x is given by\n\n$$H[x] = -\\mu \\ln \\mu - (1 - \\mu) \\ln(1 - \\mu). \\tag{2.260}$$",
"answer": "Based on definition, we can obtain:\n\n$$\\sum_{x_i=0,1} p(x_i) = \\mu + (1-\\mu) = 1$$\n\n$$\\mathbb{E}[x] = \\sum_{x_i=0,1} x_i \\, p(x_i) = 0 \\cdot (1-\\mu) + 1 \\cdot \\mu = \\mu$$\n\n$$var[x] = \\sum_{x_i=0,1} (x_i - \\mathbb{E}[x])^2 p(x_i)$$\n$$= (0 - \\mu)^2 (1 - \\mu) + (1 - \\mu)^2 \\cdot \\mu$$\n$$= \\mu(1 - \\mu)$$\n\n$$H[x] = -\\sum_{x_i=0,1} p(x_i) \\ln p(x_i) = -\\mu \\ln \\mu - (1-\\mu) \\ln (1-\\mu)$$",
"answer_length": 384
},
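The four Bernoulli identities of Exercise 2.1 can be evaluated directly for any μ. A minimal sketch (arbitrary μ, not part of the original solution, numpy assumed available):

```python
import numpy as np

mu = 0.3
p = {0: 1 - mu, 1: mu}                       # Bernoulli distribution p(x | mu)

total = sum(p.values())
mean = sum(x * p[x] for x in p)
var = sum((x - mean) ** 2 * p[x] for x in p)
H = -sum(p[x] * np.log(p[x]) for x in p)

print(total, mean, var)                      # 1.0, mu, mu * (1 - mu)
print(H, -mu * np.log(mu) - (1 - mu) * np.log(1 - mu))
```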
{
"chapter": 2,
"question_number": "2.10",
"difficulty": "medium",
"question_text": "Using the property $\\Gamma(x+1) = x\\Gamma(x)$ of the gamma function, derive the following results for the mean, variance, and covariance of the Dirichlet distribution given by (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$)\n\n$$\\mathbb{E}[\\mu_j] = \\frac{\\alpha_j}{\\alpha_0} \\tag{2.273}$$\n\n$$\\operatorname{var}[\\mu_j] = \\frac{\\alpha_j(\\alpha_0 - \\alpha_j)}{\\alpha_0^2(\\alpha_0 + 1)}$$\n (2.274: $\\operatorname{var}[\\mu_j] = \\frac{\\alpha_j(\\alpha_0 - \\alpha_j)}{\\alpha_0^2(\\alpha_0 + 1)}$)\n\n$$\\operatorname{cov}[\\mu_j \\mu_l] = -\\frac{\\alpha_j \\alpha_l}{\\alpha_0^2 (\\alpha_0 + 1)}, \\qquad j \\neq l$$\n (2.275: $\\operatorname{cov}[\\mu_j \\mu_l] = -\\frac{\\alpha_j \\alpha_l}{\\alpha_0^2 (\\alpha_0 + 1)}, \\qquad j \\neq l$)\n\nwhere $\\alpha_0$ is defined by (2.39: $\\alpha_0 = \\sum_{k=1}^K \\alpha_k.$).",
"answer": "Based on definition of *Expectation* and (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$), we can write:\n\n$$\\begin{split} \\mathbb{E}[\\mu_j] &= \\int \\mu_j Dir(\\pmb{\\mu}|\\pmb{\\alpha}) d\\pmb{\\mu} \\\\ &= \\int \\mu_j \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k-1} d\\pmb{\\mu} \\\\ &= \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} \\int \\mu_j \\prod_{k=1}^K \\mu_k^{\\alpha_k-1} d\\pmb{\\mu} \\\\ &= \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} \\frac{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_{j-1})\\Gamma(\\alpha_j+1)\\Gamma(\\alpha_{j+1})...\\Gamma(\\alpha_K)}{\\Gamma(\\alpha_0+1)} \\\\ &= \\frac{\\Gamma(\\alpha_0)\\Gamma(\\alpha_j+1)}{\\Gamma(\\alpha_j)\\Gamma(\\alpha_0+1)} = \\frac{\\alpha_j}{\\alpha_0} \\end{split}$$\n\nIt is quite the same for variance, let's begin by calculating $\\mathbb{E}[\\mu_i^2]$ .\n\n$$\\begin{split} \\mathbb{E}[\\mu_j^2] &= \\int \\mu_j^2 Dir(\\pmb{\\mu}|\\pmb{\\alpha}) d\\pmb{\\mu} \\\\ &= \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} \\int \\mu_j^2 \\prod_{k=1}^K \\mu_k^{\\alpha_k-1} d\\pmb{\\mu} \\\\ &= \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} \\frac{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_{j-1})\\Gamma(\\alpha_j+2)\\Gamma(\\alpha_{j+1})...\\Gamma(\\alpha_K)}{\\Gamma(\\alpha_0+2)} \\\\ &= \\frac{\\Gamma(\\alpha_0)\\Gamma(\\alpha_j+2)}{\\Gamma(\\alpha_j)\\Gamma(\\alpha_0+2)} = \\frac{\\alpha_j(\\alpha_j+1)}{\\alpha_0(\\alpha_0+1)} \\end{split}$$\n\nHence, we obtain:\n\n$$var[\\mu_j] = \\mathbb{E}[\\mu_j^2] - \\mathbb{E}[\\mu_j]^2 = \\frac{\\alpha_j(\\alpha_j + 1)}{\\alpha_0(\\alpha_0 + 1)} - (\\frac{\\alpha_j}{\\alpha_0})^2 = \\frac{\\alpha_j(\\alpha_0 - \\alpha_j)}{\\alpha_0^2(\\alpha_0 + 1)}$$\n\nIt is the same for covariance.\n\n$$\\begin{split} cov[\\mu_{j}\\mu_{l}] &= \\int (\\mu_{j} - \\mathbb{E}[\\mu_{j}])(\\mu_{l} - \\mathbb{E}[\\mu_{l}])Dir(\\pmb{\\mu}|\\pmb{\\alpha})d\\pmb{\\mu} \\\\ &= \\int (\\mu_{j}\\mu_{l} - \\mathbb{E}[\\mu_{j}]\\mu_{l} - \\mathbb{E}[\\mu_{l}]\\mu_{j} + \\mathbb{E}[\\mu_{j}]\\mathbb{E}[\\mu_{l}])Dir(\\pmb{\\mu}|\\pmb{\\alpha})d\\pmb{\\mu} \\\\ &= \\frac{\\Gamma(\\alpha_{0})\\Gamma(\\alpha_{j} + 1)\\Gamma(\\alpha_{l} + 1)}{\\Gamma(\\alpha_{j})\\Gamma(\\alpha_{0} + 2)} - 2\\mathbb{E}[\\mu_{j}]\\mathbb{E}[\\mu_{l}] + \\mathbb{E}[\\mu_{j}]\\mathbb{E}[\\mu_{l}] \\\\ &= \\frac{\\alpha_{j}\\alpha_{l}}{\\alpha_{0}(\\alpha_{0} + 1)} - \\mathbb{E}[\\mu_{j}]\\mathbb{E}[\\mu_{l}] \\\\ &= \\frac{\\alpha_{j}\\alpha_{l}}{\\alpha_{0}(\\alpha_{0} + 1)} - \\frac{\\alpha_{j}\\alpha_{l}}{\\alpha_{0}^{2}} \\\\ &= -\\frac{\\alpha_{j}\\alpha_{l}}{\\alpha_{0}^{2}(\\alpha_{0} + 1)} \\quad (j \\neq l) \\end{split}$$\n\nNote: when j=l, $cov[\\mu_j\\mu_l]$ will actually reduce to $var[\\mu_j]$ , however we cannot simply replace l with j in the expression of $cov[\\mu_j\\mu_l]$ to get the right result and that is because $\\int \\mu_j \\mu_l Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) d\\boldsymbol{\\alpha}$ will reduce to $\\int \\mu_j^2 Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) d\\boldsymbol{\\alpha}$ in this case.",
"answer_length": 3094
},
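A Monte Carlo check of the three Dirichlet moment formulas of Exercise 2.10 (a sketch with an arbitrary α; agreement is only up to sampling noise; numpy assumed available):

```python
import numpy as np

alpha = np.array([2.0, 5.0, 3.0])
a0 = alpha.sum()
rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha, size=500_000)   # each row is a draw of mu

j, l = 0, 1
print(samples[:, j].mean(), alpha[j] / a0)
print(samples[:, j].var(),  alpha[j] * (a0 - alpha[j]) / (a0**2 * (a0 + 1)))
cov_jl = np.cov(samples[:, j], samples[:, l])[0, 1]
print(cov_jl,               -alpha[j] * alpha[l] / (a0**2 * (a0 + 1)))
```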
{
"chapter": 2,
"question_number": "2.11",
"difficulty": "easy",
"question_text": "By expressing the expectation of $\\ln \\mu_j$ under the Dirichlet distribution (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$) as a derivative with respect to $\\alpha_j$ , show that\n\n$$\\mathbb{E}[\\ln \\mu_j] = \\psi(\\alpha_j) - \\psi(\\alpha_0) \\tag{2.276}$$\n\nwhere $\\alpha_0$ is given by (2.39: $\\alpha_0 = \\sum_{k=1}^K \\alpha_k.$) and\n\n$$\\psi(a) \\equiv \\frac{d}{da} \\ln \\Gamma(a) \\tag{2.277}$$\n\nis the digamma function.",
"answer": "Based on definition of *Expectation* and (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$), we first denote:\n\n$$\\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)...\\Gamma(\\alpha_K)} = K(\\pmb{\\alpha})$$\n\nThen we can write:\n\n$$\\begin{split} \\frac{\\partial Dir(\\pmb{\\mu}|\\pmb{\\alpha})}{\\partial \\alpha_j} &= \\partial (K(\\pmb{\\alpha}) \\prod_{i=1}^K \\mu_i^{\\alpha_i-1})/\\partial \\alpha_j \\\\ &= \\frac{\\partial K(\\pmb{\\alpha})}{\\partial \\alpha_j} \\prod_{i=1}^K \\mu_i^{\\alpha_i-1} + K(\\pmb{\\alpha}) \\frac{\\partial \\prod_{i=1}^K \\mu_i^{\\alpha_i-1}}{\\partial \\alpha_j} \\\\ &= \\frac{\\partial K(\\pmb{\\alpha})}{\\partial \\alpha_j} \\prod_{i=1}^K \\mu_i^{\\alpha_i-1} + ln\\mu_j \\cdot Dir(\\pmb{\\mu}|\\pmb{\\alpha}) \\end{split}$$\n\nThen let us perform integral to both sides:\n\n$$\\int \\frac{\\partial Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha})}{\\partial \\alpha_{i}} d\\boldsymbol{\\mu} = \\int \\frac{\\partial K(\\boldsymbol{\\alpha})}{\\partial \\alpha_{i}} \\prod_{i=1}^{K} \\mu_{i}^{\\alpha_{i}-1} d\\boldsymbol{\\mu} + \\int ln \\mu_{j} \\cdot Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) d\\boldsymbol{\\mu}$$\n\nThe left side can be further simplified as:\n\nleft side = \n$$\\frac{\\partial \\int Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) d\\boldsymbol{\\mu}}{\\partial \\alpha_j} = \\frac{\\partial 1}{\\partial \\alpha_j} = 0$$\n\nThe right side can be further simplified as:\n\n$$\\begin{array}{ll} \\text{right side} & = & \\displaystyle \\frac{\\partial K(\\pmb{\\alpha})}{\\partial \\alpha_j} \\int \\prod_{i=1}^K \\mu_i^{\\alpha_i-1} d \\, \\pmb{\\mu} + \\mathbb{E}[ln\\mu_j] \\\\ \\\\ & = & \\displaystyle \\frac{\\partial K(\\pmb{\\alpha})}{\\partial \\alpha_j} \\, \\frac{1}{K(\\pmb{\\alpha})} + \\mathbb{E}[ln\\mu_j] \\\\ \\\\ & = & \\displaystyle \\frac{\\partial lnK(\\pmb{\\alpha})}{\\partial \\alpha_j} + \\mathbb{E}[ln\\mu_j] \\end{array}$$\n\nTherefore, we obtain:\n\n$$\\mathbb{E}[\\ln \\mu_{j}] = -\\frac{\\partial \\ln K(\\alpha)}{\\partial \\alpha_{j}}$$\n\n$$= -\\frac{\\partial \\left\\{ \\ln \\Gamma(\\alpha_{0}) - \\sum_{i=1}^{K} \\ln \\Gamma(\\alpha_{i}) \\right\\}}{\\partial \\alpha_{j}}$$\n\n$$= \\frac{\\partial \\ln \\Gamma(\\alpha_{j})}{\\partial \\alpha_{j}} - \\frac{\\partial \\ln \\Gamma(\\alpha_{0})}{\\partial \\alpha_{j}}$$\n\n$$= \\frac{\\partial \\ln \\Gamma(\\alpha_{j})}{\\partial \\alpha_{j}} - \\frac{\\partial \\ln \\Gamma(\\alpha_{0})}{\\partial \\alpha_{0}} \\frac{\\partial \\alpha_{0}}{\\partial \\alpha_{j}}$$\n\n$$= \\frac{\\partial \\ln \\Gamma(\\alpha_{j})}{\\partial \\alpha_{j}} - \\frac{\\partial \\ln \\Gamma(\\alpha_{0})}{\\partial \\alpha_{0}}$$\n\n$$= \\psi(\\alpha_{j}) - \\psi(\\alpha_{0})$$\n\nTherefore, the problem has been solved.",
"answer_length": 2606
},
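A Monte Carlo check of E[ln μⱼ] = ψ(αⱼ) − ψ(α₀) from Exercise 2.11, using scipy's digamma (a sketch with an arbitrary α, not part of the original solution):

```python
import numpy as np
from scipy.special import digamma

alpha = np.array([2.0, 5.0, 3.0])
a0 = alpha.sum()
rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha, size=500_000)

mc = np.log(samples).mean(axis=0)        # Monte Carlo estimate of E[ln mu_j]
closed = digamma(alpha) - digamma(a0)
print(mc)
print(closed)                            # the two vectors should agree closely
```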
{
"chapter": 2,
"question_number": "2.12",
"difficulty": "easy",
"question_text": "The uniform distribution for a continuous variable x is defined by\n\n$$U(x|a,b) = \\frac{1}{b-a}, \\qquad a \\leqslant x \\leqslant b.$$\n(2.278: $U(x|a,b) = \\frac{1}{b-a}, \\qquad a \\leqslant x \\leqslant b.$)\n\nVerify that this distribution is normalized, and find expressions for its mean and variance.",
"answer": "Since we have:\n\n$$\\int_{a}^{b} \\frac{1}{b-a} dx = 1$$\n\nIt is straightforward that it is normalized. Then we calculate its mean:\n\n$$\\mathbb{E}[x] = \\int_{a}^{b} x \\frac{1}{b-a} dx = \\frac{x^{2}}{2(b-a)} \\Big|_{a}^{b} = \\frac{a+b}{2}$$\n\nThen we calculate its variance.\n\n$$var[x] = \\mathbb{E}[x^2] - \\mathbb{E}[x]^2 = \\int_a^b \\frac{x^2}{b-a} dx - (\\frac{a+b}{2})^2 = \\frac{x^3}{3(b-a)} \\Big|_a^b - (\\frac{a+b}{2})^2$$\n\nHence we obtain:\n\n$$var[x] = \\frac{(b-a)^2}{12}$$",
"answer_length": 466
},
{
"chapter": 2,
"question_number": "2.13",
"difficulty": "medium",
"question_text": "Evaluate the Kullback-Leibler divergence (1.113: $= -\\int p(\\mathbf{x}) \\ln \\left\\{\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}\\right\\} d\\mathbf{x}.$) between two Gaussians $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ and $q(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{m}, \\mathbf{L})$ .",
"answer": "This problem is an extension of Prob.1.30. We can follow the same procedure to solve it. Let's begin by calculating $\\ln \\frac{p(x)}{q(x)}$ :\n\n$$ln(\\frac{p(\\mathbf{x})}{g(\\mathbf{x})}) = \\frac{1}{2}ln(\\frac{|\\mathbf{L}|}{|\\mathbf{\\Sigma}|}) + \\frac{1}{2}(\\mathbf{x} - \\mathbf{m})^T \\mathbf{L}^{-1}(\\mathbf{x} - \\mathbf{m}) - \\frac{1}{2}(\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{\\Sigma}^{-1}(\\mathbf{x} - \\boldsymbol{\\mu})$$\n\nIf $x \\sim p(x) = \\mathcal{N}(\\mu|\\Sigma)$ , we then take advantage of the following properties.\n\n$$\\int p(\\mathbf{x})d\\mathbf{x} = 1$$\n\n$$\\mathbb{E}[\\mathbf{x}] = \\int \\mathbf{x} p(\\mathbf{x}) d\\mathbf{x} = \\mu$$\n\n$$\\mathbb{E}[(\\mathbf{x} - \\mathbf{a})^T \\mathbf{A} (\\mathbf{x} - \\mathbf{a})] = \\operatorname{tr}(\\mathbf{A} \\mathbf{\\Sigma}) + (\\mathbf{\\mu} - \\mathbf{a})^T \\mathbf{A} (\\mathbf{\\mu} - \\mathbf{a})$$\n\nWe obtain:\n\n$$KL = \\int \\left\\{ \\frac{1}{2} ln \\frac{|\\boldsymbol{L}|}{|\\boldsymbol{\\Sigma}|} - \\frac{1}{2} (\\boldsymbol{x} - \\boldsymbol{\\mu})^T \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{x} - \\boldsymbol{\\mu}) + \\frac{1}{2} (\\boldsymbol{x} - \\boldsymbol{m})^T \\boldsymbol{L}^{-1} (\\boldsymbol{x} - \\boldsymbol{m}) \\right\\} p(\\boldsymbol{x}) d\\boldsymbol{x}$$\n\n$$= \\frac{1}{2} ln \\frac{|\\boldsymbol{L}|}{|\\boldsymbol{\\Sigma}|} - \\frac{1}{2} E[(\\boldsymbol{x} - \\boldsymbol{\\mu}) \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{x} - \\boldsymbol{\\mu})^T] + \\frac{1}{2} E[(\\boldsymbol{x} - \\boldsymbol{m})^T \\boldsymbol{L}^{-1} (\\boldsymbol{x} - \\boldsymbol{m})]$$\n\n$$= \\frac{1}{2} ln \\frac{|\\boldsymbol{L}|}{|\\boldsymbol{\\Sigma}|} - \\frac{1}{2} tr \\{\\boldsymbol{I}_D\\} + \\frac{1}{2} (\\boldsymbol{\\mu} - \\boldsymbol{m})^T \\boldsymbol{L}^{-1} (\\boldsymbol{\\mu} - \\boldsymbol{m}) + \\frac{1}{2} tr \\{\\boldsymbol{L}^{-1} \\boldsymbol{\\Sigma}\\}$$\n\n$$= \\frac{1}{2} [ ln \\frac{|\\boldsymbol{L}|}{|\\boldsymbol{\\Sigma}|} - D + tr \\{\\boldsymbol{L}^{-1} \\boldsymbol{\\Sigma}\\} + (\\boldsymbol{m} - \\boldsymbol{\\mu})^T \\boldsymbol{L}^{-1} (\\boldsymbol{m} - \\boldsymbol{\\mu})]$$",
"answer_length": 1987
},
{
"chapter": 2,
"question_number": "2.14",
"difficulty": "medium",
"question_text": "This exercise demonstrates that the multivariate distribution with maximum entropy, for a given covariance, is a Gaussian. The entropy of a distribution $p(\\mathbf{x})$ is given by\n\n$$H[\\mathbf{x}] = -\\int p(\\mathbf{x}) \\ln p(\\mathbf{x}) \\, d\\mathbf{x}. \\tag{2.279}$$\n\nWe wish to maximize H[x] over all distributions p(x) subject to the constraints that p(x) be normalized and that it have a specific mean and covariance, so that\n\n$$\\int p(\\mathbf{x}) \\, \\mathrm{d}\\mathbf{x} = 1 \\tag{2.280}$$\n\n$$\\int p(\\mathbf{x})\\mathbf{x} \\, \\mathrm{d}\\mathbf{x} = \\boldsymbol{\\mu} \\tag{2.281}$$\n\n$$\\int p(\\mathbf{x})(\\mathbf{x} - \\boldsymbol{\\mu})(\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} d\\mathbf{x} = \\boldsymbol{\\Sigma}.$$\n (2.282: $\\int p(\\mathbf{x})(\\mathbf{x} - \\boldsymbol{\\mu})(\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} d\\mathbf{x} = \\boldsymbol{\\Sigma}.$)\n\nBy performing a variational maximization of (2.279: $H[\\mathbf{x}] = -\\int p(\\mathbf{x}) \\ln p(\\mathbf{x}) \\, d\\mathbf{x}.$) and using Lagrange multipliers to enforce the constraints (2.280: $\\int p(\\mathbf{x}) \\, \\mathrm{d}\\mathbf{x} = 1$), (2.281: $\\int p(\\mathbf{x})\\mathbf{x} \\, \\mathrm{d}\\mathbf{x} = \\boldsymbol{\\mu}$), and (2.282: $\\int p(\\mathbf{x})(\\mathbf{x} - \\boldsymbol{\\mu})(\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} d\\mathbf{x} = \\boldsymbol{\\Sigma}.$), show that the maximum likelihood distribution is given by the Gaussian (2.43: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$).",
"answer": "The hint given in the problem is straightforward, however it is a little bit difficult to calculate, and here we will use a more simple method to solve this problem, taking advantage of the property of Kullback— $Leibler\\ Distance$ . Let g(x) be a Gaussian PDF with mean $\\mu$ and variance $\\Sigma$ , and f(x) an arbitrary PDF with the same mean and variance.\n\n$$0 \\le KL(f||g) = -\\int f(\\mathbf{x})ln\\left\\{\\frac{g(\\mathbf{x})}{f(\\mathbf{x})}\\right\\}d\\mathbf{x} = -H(f) - \\int f(\\mathbf{x})lng(\\mathbf{x})d\\mathbf{x} \\qquad (*)$$\n\nLet's calculate the second term of the equation above.\n\n$$\\int f(\\mathbf{x}) lng(\\mathbf{x}) d\\mathbf{x} = \\int f(\\mathbf{x}) ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} exp \\left[ -\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] \\right\\} d\\mathbf{x}$$\n\n$$= \\int f(\\mathbf{x}) ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} d\\mathbf{x} + \\int f(\\mathbf{x}) \\left[ -\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] d\\mathbf{x}$$\n\n$$= ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} - \\frac{1}{2} \\mathbb{E} \\left[ (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right]$$\n\n$$= ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} - \\frac{1}{2} \\text{tr} \\{I_D\\}$$\n\n$$= -\\left\\{ \\frac{1}{2} ln |\\mathbf{\\Sigma}| + \\frac{D}{2} (1 + ln(2\\pi)) \\right\\}$$\n\n$$= -H(g)$$\n\nWe take advantage of two properties of PDF f(x), with mean $\\mu$ and variance $\\Sigma$ , as listed below. What's more, we also use the result of Prob.2.15, which we will proof later.\n\n$$\\int f(\\boldsymbol{x})d\\boldsymbol{x} = 1$$\n\n$$\\mathbb{E}[(\\boldsymbol{x} - \\boldsymbol{a})^T \\boldsymbol{A} (\\boldsymbol{x} - \\boldsymbol{a})] = \\operatorname{tr}(\\boldsymbol{A}\\boldsymbol{\\Sigma}) + (\\boldsymbol{\\mu} - \\boldsymbol{a})^T \\boldsymbol{A} (\\boldsymbol{\\mu} - \\boldsymbol{a})$$\n\nNow we can further simplify (\\*) to obtain:\n\n$$H(g) \\ge H(f)$$\n\nIn other words, we have proved that an arbitrary PDF f(x) with the same mean and variance as a Gaussian PDF g(x), its entropy cannot be greater than that of Gaussian PDF.",
"answer_length": 2251
},
{
"chapter": 2,
"question_number": "2.15",
"difficulty": "medium",
"question_text": "Show that the entropy of the multivariate Gaussian $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ is given by\n\n$$H[\\mathbf{x}] = \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\frac{D}{2} (1 + \\ln(2\\pi))$$\n (2.283: $H[\\mathbf{x}] = \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\frac{D}{2} (1 + \\ln(2\\pi))$)\n\nwhere D is the dimensionality of $\\mathbf{x}$ .",
"answer": "We have already used the result of this problem to solve Prob.2.14, and now we will prove it. Suppose $x \\sim p(x) = \\mathcal{N}(\\mu|\\Sigma)$ :\n\n$$H[\\mathbf{x}] = -\\int p(\\mathbf{x})lnp(\\mathbf{x})d\\mathbf{x}$$\n\n$$= -\\int p(\\mathbf{x})ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} exp \\left[ -\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] \\right\\} d\\mathbf{x}$$\n\n$$= -\\int p(\\mathbf{x})ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} d\\mathbf{x} - \\int f(\\mathbf{x}) \\left[ -\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] d\\mathbf{x}$$\n\n$$= -ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} + \\frac{1}{2} \\mathbb{E} \\left[ (\\mathbf{x} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right]$$\n\n$$= -ln \\left\\{ \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\right\\} + \\frac{1}{2} \\text{tr} \\{ I_D \\}$$\n\n$$= \\frac{1}{2} ln |\\mathbf{\\Sigma}| + \\frac{D}{2} (1 + ln(2\\pi))$$\n\nWhere we have taken advantage of:\n\n$$\\int p(\\boldsymbol{x})d\\boldsymbol{x} = 1$$\n \n$$\\mathbb{E}[(\\boldsymbol{x} - \\boldsymbol{a})^T \\boldsymbol{A} (\\boldsymbol{x} - \\boldsymbol{a})] = \\operatorname{tr}(\\boldsymbol{A}\\boldsymbol{\\Sigma}) + (\\boldsymbol{\\mu} - \\boldsymbol{a})^T \\boldsymbol{A} (\\boldsymbol{\\mu} - \\boldsymbol{a})$$\n\nNote: Actually in Prob.2.14, we have already solved this problem, you can intuitively view it by replacing the integrand f(x)lng(x) with g(x)lng(x), and the same procedure in Prob.2.14 still holds to calculate $\\int g(x)lng(x)dx$ .",
"answer_length": 1646
},
{
"chapter": 2,
"question_number": "2.16",
"difficulty": "hard",
"question_text": "\\star \\star)$ **www** Consider two random variables $x_1$ and $x_2$ having Gaussian distributions with means $\\mu_1, \\mu_2$ and precisions $\\tau_1, \\tau_2$ respectively. Derive an expression for the differential entropy of the variable $x = x_1 + x_2$ . To do this, first find the distribution of x by using the relation\n\n$$p(x) = \\int_{-\\infty}^{\\infty} p(x|x_2)p(x_2) dx_2$$\n (2.284: $p(x) = \\int_{-\\infty}^{\\infty} p(x|x_2)p(x_2) dx_2$)\n\nand completing the square in the exponent. Then observe that this represents the convolution of two Gaussian distributions, which itself will be Gaussian, and finally make use of the result (1.110: $H[x] = \\frac{1}{2} \\left\\{ 1 + \\ln(2\\pi\\sigma^2) \\right\\}.$) for the entropy of the univariate Gaussian.",
"answer": "Let us consider a more general conclusion about the *Probability Density Function* (PDF) of the summation of two independent random variables. We denote two random variables X and Y. Their summation Z = X + Y, is still a random variable. We also denote $f(\\cdot)$ as PDF, and $F(\\cdot)$ as *Cumulative Distribution Function* (CDF). We can obtain :\n\n$$F_Z(z) = P(Z < z) = \\iint_{x+y \\le z} f_{X,Y}(x,y) dx dy$$\n\nWhere z represents an arbitrary real number. We rewrite the *double integral* into *iterated integral*:\n\n$$F_Z(z) = \\int_{-\\infty}^{+\\infty} \\left[ \\int_{-\\infty}^{z-y} f_{X,Y}(x,y) dx \\right] dy$$\n\nWe fix *z* and *y*, and then make a change of variable x = u - y to the integral.\n\n$$F_Z(z) = \\int_{-\\infty}^{+\\infty} \\left[ \\int_{-\\infty}^{z-y} f_{X,Y}(x,y) dx \\right] dy = \\int_{-\\infty}^{+\\infty} \\left[ \\int_{-\\infty}^{z} f_{X,Y}(u-x,y) du \\right] dy$$\n\nNote: $f_{X,Y}(\\cdot)$ is the joint PDF of X and Y, and then we rearrange the order, we will obtain :\n\n$$F_Z(z) = \\int_{-\\infty}^{z} \\left[ \\int_{-\\infty}^{+\\infty} f_{X,Y}(u - y, y) dy \\right] du$$\n\nCompare the equation above with th definition of CDF:\n\n$$F_Z(z) = \\int_{-\\infty}^z f_Z(u) \\, du$$\n\nWe can obtain :\n\n$$f_Z(u) = \\int_{-\\infty}^{+\\infty} f_{X,Y}(u - y, y) dy$$\n\nAnd if X and Y are independent, which means $f_{X,Y}(x,y) = f_X(x)f_Y(y)$ , we can simplify $f_Z(z)$ :\n\n$$f_Z(u) = \\int_{-\\infty}^{+\\infty} f_X(u - y) f_Y(y) dy$$\n i.e. $f_Z = f_X * f_Y$ \n\nUntil now we have proved that the PDF of the summation of two independent random variable is the convolution of the PDF of them. Hence it is straightforward to see that in this problem, where random variable x is the summation of random variable $x_1$ and $x_2$ , the PDF of x should be the convolution of the PDF of $x_1$ and $x_2$ . To find the entropy of x, we will use a simple method, taking advantage of (2.113)-(2.117). With the knowledge :\n\n$$p(x_2)=\\mathcal{N}(\\mu_2,\\tau_2^{-1})$$\n\n$$p(x|x_2) = \\mathcal{N}(\\mu_1 + x_2, \\tau_1^{-1})$$\n\nWe make analogies: $x_2$ in this problem to $\\boldsymbol{x}$ in (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), x in this problem to $\\boldsymbol{y}$ in (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$). Hence by using (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), we can obtain p(x) is still a normal distribution, and since the entropy of a Gaussian is fully decided by its variance, there is no need to calculate the mean. Still by using (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), the variance of x is $\\tau_1^{-1} + \\tau_2^{-1}$ , which finally gives its entropy:\n\n$$H[x] = \\frac{1}{2} \\left[ 1 + ln2\\pi (\\tau_1^{-1} + \\tau_2^{-1}) \\right]$$",
"answer_length": 3009
},
{
"chapter": 2,
"question_number": "2.17",
"difficulty": "easy",
"question_text": "Consider the multivariate Gaussian distribution given by (2.43: $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})\\right\\}$). By writing the precision matrix (inverse covariance matrix) $\\Sigma^{-1}$ as the sum of a symmetric and an anti-symmetric matrix, show that the anti-symmetric term does not appear in the exponent of the Gaussian, and hence that the precision matrix may be taken to be symmetric without loss of generality. Because the inverse of a symmetric matrix is also symmetric (see Exercise 2.22), it follows that the covariance matrix may also be chosen to be symmetric without loss of generality.",
"answer": "This is an extension of Prob.1.14. The same procedure can be used here. We suppose an arbitrary precision matrix $\\Lambda$ can be written as $\\Lambda^S + \\Lambda^A$ , where they satisfy:\n\n$$\\Lambda^S_{ij} = \\frac{\\Lambda_{ij} + \\Lambda_{ji}}{2} \\quad , \\quad \\Lambda^A_{ij} = \\frac{\\Lambda_{ij} - \\Lambda_{ji}}{2}$$\n\nHence it is straightforward that $\\Lambda^S_{ij} = \\Lambda^S_{ji}$ , and $\\Lambda^A_{ij} = -\\Lambda^A_{ji}$ . If we expand the quadratic form of exponent, we will obtain:\n\n$$(\\boldsymbol{x} - \\boldsymbol{\\mu})^T \\boldsymbol{\\Lambda} (\\boldsymbol{x} - \\boldsymbol{\\mu}) = \\sum_{i=1}^D \\sum_{j=1}^D (x_i - \\mu_i) \\Lambda_{ij} (x_j - \\mu_j)$$\n (\\*)\n\nIt is straightforward then:\n\n$$(*) = \\sum_{i=1}^{D} \\sum_{j=1}^{D} (x_i - \\mu_i) \\Lambda_{ij}^S(x_j - \\mu_j) + \\sum_{i=1}^{D} \\sum_{j=1}^{D} (x_i - \\mu_i) \\Lambda_{ij}^A(x_j - \\mu_j)$$\n$$= \\sum_{i=1}^{D} \\sum_{j=1}^{D} (x_i - \\mu_i) \\Lambda_{ij}^S(x_j - \\mu_j)$$\n\nTherefore, we can assume precision matrix is symmetric, and so is covariance matrix.",
"answer_length": 1017
},
{
"chapter": 2,
"question_number": "2.18",
"difficulty": "hard",
"question_text": "\\star \\star)$ Consider a real, symmetric matrix $\\Sigma$ whose eigenvalue equation is given by (2.45: $\\mathbf{\\Sigma}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$). By taking the complex conjugate of this equation and subtracting the original equation, and then forming the inner product with eigenvector $\\mathbf{u}_i$ , show that the eigenvalues $\\lambda_i$ are real. Similarly, use the symmetry property of $\\Sigma$ to show that two eigenvectors $\\mathbf{u}_i$ and $\\mathbf{u}_j$ will be orthogonal provided $\\lambda_j \\neq \\lambda_i$ . Finally, show that without loss of generality, the set of eigenvectors can be chosen to be orthonormal, so that they satisfy (2.46: $\\mathbf{u}_i^{\\mathrm{T}} \\mathbf{u}_j = I_{ij}$), even if some of the eigenvalues are zero.",
"answer": "We will just follow the hint given in the problem. Firstly, we take complex conjugate on both sides of (2.45):\n\n$$\\overline{\\Sigma u_i} = \\overline{\\lambda_i u_i} = \\sum \\overline{\\lambda_i} \\overline{u_i}$$\n\nWhere we have taken advantage of the fact that $\\Sigma$ is a real matrix, i.e., $\\overline{\\Sigma} = \\Sigma$ . Then using that $\\Sigma$ is a symmetric, i.e., $\\Sigma^T = \\Sigma$ :\n\n$$\\overline{\\boldsymbol{u_i}}^T\\boldsymbol{\\Sigma}\\boldsymbol{u_i} = \\overline{\\boldsymbol{u_i}}^T(\\boldsymbol{\\Sigma}\\boldsymbol{u_i}) = \\overline{\\boldsymbol{u_i}}^T(\\lambda_i\\boldsymbol{u_i}) = \\lambda_i\\overline{\\boldsymbol{u_i}}^T\\boldsymbol{u_i}$$\n\n$$\\overline{\\boldsymbol{u}_{i}}^{T} \\boldsymbol{\\Sigma} \\boldsymbol{u}_{i} = (\\boldsymbol{\\Sigma} \\overline{\\boldsymbol{u}_{i}})^{T} \\boldsymbol{u}_{i} = (\\overline{\\lambda_{i}} \\overline{\\boldsymbol{u}_{i}})^{T} \\boldsymbol{u}_{i} = \\overline{\\lambda_{i}}^{T} \\overline{\\boldsymbol{u}_{i}}^{T} \\boldsymbol{u}_{i}$$\n\nSince $\\boldsymbol{u_i} \\neq 0$ , we have $\\overline{\\boldsymbol{u_i}}^T \\boldsymbol{u_i} \\neq 0$ . Thus $\\lambda_i^T = \\overline{\\lambda}_i^T$ , which means $\\lambda_i$ is real. Next we will proof that two eigenvectors corresponding to different eigenvalues are orthogonal.\n\n$$\\lambda_i < u_i, u_i > = < \\lambda_i u_i, u_i > = < \\Sigma u_i, u_i > = < u_i, \\Sigma^T u_i > = \\lambda_i < u_i, u_i >$$\n\nWhere we have taken advantage of $\\Sigma^T = \\Sigma$ and for arbitrary real matrix A and vector x, y, we have :\n\n$$< Ax, y> = < x, A^Ty>$$\n\nProvided $\\lambda_i \\neq \\lambda_j$ , we have $\\langle \\boldsymbol{u_i}, \\boldsymbol{u_j} \\rangle = 0$ , i.e., $\\boldsymbol{u_i}$ and $\\boldsymbol{u_j}$ are orthogonal. And then if we perform normalization on every eigenvector to force its *Euclidean norm* to equal to 1, (2.46: $\\mathbf{u}_i^{\\mathrm{T}} \\mathbf{u}_j = I_{ij}$) is straightforward. By performing normalization, I mean multiplying the eigenvector by a real number a to let its *Euclidean norm* (length) to equal to 1, meanwhile we should also divide its corresponding eigenvalue by a.",
"answer_length": 2072
},
{
"chapter": 2,
"question_number": "2.19",
"difficulty": "medium",
"question_text": "Show that a real, symmetric matrix $\\Sigma$ having the eigenvector equation (2.45: $\\mathbf{\\Sigma}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$) can be expressed as an expansion in the eigenvectors, with coefficients given by the eigenvalues, of the form (2.48: $\\Sigma = \\sum_{i=1}^{D} \\lambda_i \\mathbf{u}_i \\mathbf{u}_i^{\\mathrm{T}}$). Similarly, show that the inverse matrix $\\Sigma^{-1}$ has a representation of the form (2.49: $\\Sigma^{-1} = \\sum_{i=1}^{D} \\frac{1}{\\lambda_i} \\mathbf{u}_i \\mathbf{u}_i^{\\mathrm{T}}.$).",
"answer": "For every $N \\times N$ real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen such that they are orthogonal to each other. Thus a real symmetric matrix $\\Sigma$ can be decomposed as $\\Sigma = U \\Lambda U^T$ , where U is an orthogonal matrix, and $\\Lambda$ is a diagonal matrix whose entries are the eigenvalues of $\\Lambda$ . Hence for an arbitrary vector $\\mathbf{x}$ , we have:\n\n$$\\Sigma x = U \\Lambda U^T x = U \\Lambda \\begin{bmatrix} u_1^T x \\\\ \\vdots \\\\ u_D^T x \\end{bmatrix} = U \\begin{bmatrix} \\lambda_1 u_1^T x \\\\ \\vdots \\\\ \\lambda_D u_D^T x \\end{bmatrix} = (\\sum_{k=1}^D \\lambda_k u_k u_k^T) x$$\n\nAnd since $\\Sigma^{-1} = U\\Lambda^{-1}U^T$ , the same procedure can be used to prove (2.49: $\\Sigma^{-1} = \\sum_{i=1}^{D} \\frac{1}{\\lambda_i} \\mathbf{u}_i \\mathbf{u}_i^{\\mathrm{T}}.$).",
"answer_length": 828
},
{
"chapter": 2,
"question_number": "2.2",
"difficulty": "medium",
"question_text": "\\star)$ The form of the Bernoulli distribution given by (2.2: $(x|\\mu) = \\mu^x (1-\\mu)^{1-x}$) is not symmetric between the two values of x. In some situations, it will be more convenient to use an equivalent formulation for which $x \\in \\{-1, 1\\}$ , in which case the distribution can be written\n\n$$p(x|\\mu) = \\left(\\frac{1-\\mu}{2}\\right)^{(1-x)/2} \\left(\\frac{1+\\mu}{2}\\right)^{(1+x)/2}$$\n (2.261: $p(x|\\mu) = \\left(\\frac{1-\\mu}{2}\\right)^{(1-x)/2} \\left(\\frac{1+\\mu}{2}\\right)^{(1+x)/2}$)\n\nwhere $\\mu \\in [-1, 1]$ . Show that the distribution (2.261: $p(x|\\mu) = \\left(\\frac{1-\\mu}{2}\\right)^{(1-x)/2} \\left(\\frac{1+\\mu}{2}\\right)^{(1+x)/2}$) is normalized, and evaluate its mean, variance, and entropy.",
"answer": "The proof in Prob.2.1. can also be used here.\n\n$$\\sum_{x_i = -1, 1} p(x_i) = \\frac{1 - \\mu}{2} + \\frac{1 + \\mu}{2} = 1$$\n\n$$\\mathbb{E}[x] = \\sum_{x_i = -1, 1} x_i \\cdot p(x_i) = -1 \\cdot \\frac{1 - \\mu}{2} + 1 \\cdot \\frac{1 + \\mu}{2} = \\mu$$\n\n$$\\operatorname{var}[x] = \\sum_{x_i = -1, 1} (x_i - \\mathbb{E}[x])^2 \\cdot p(x_i)$$\n\n$$= (-1 - \\mu)^2 \\cdot \\frac{1 - \\mu}{2} + (1 - \\mu)^2 \\cdot \\frac{1 + \\mu}{2}$$\n\n$$= 1 - \\mu^2$$\n\n$$H[x] = -\\sum_{x_i = -1, 1} p(x_i) \\cdot \\ln p(x_i) = -\\frac{1 - \\mu}{2} \\cdot \\ln \\frac{1 - \\mu}{2} - \\frac{1 + \\mu}{2} \\cdot \\ln \\frac{1 + \\mu}{2}$$",
"answer_length": 577
},
{
"chapter": 2,
"question_number": "2.20",
"difficulty": "medium",
"question_text": "A positive definite matrix $\\Sigma$ can be defined as one for which the quadratic form\n\n$$\\mathbf{a}^{\\mathrm{T}}\\mathbf{\\Sigma}\\mathbf{a}\\tag{2.285}$$\n\nis positive for any real value of the vector $\\mathbf{a}$ . Show that a necessary and sufficient condition for $\\Sigma$ to be positive definite is that all of the eigenvalues $\\lambda_i$ of $\\Sigma$ , defined by (2.45: $\\mathbf{\\Sigma}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$), are positive.",
"answer": "Since $u_1, u_2, ..., u_D$ can constitute a basis for $\\mathbb{R}^D$ , we can make projection for a:\n\n$$\\boldsymbol{a} = a_1 \\boldsymbol{u_1} + a_2 \\boldsymbol{u_2} + \\dots + a_D \\boldsymbol{u_D}$$\n\nWe substitute the expression above into $\\boldsymbol{a}^T \\boldsymbol{\\Sigma} \\boldsymbol{a}$ , taking advantage of the property: $\\boldsymbol{u}_i \\boldsymbol{u}_j = 1$ only if i = j, otherwise 0, we will obtain:\n\n$$\\mathbf{a}^{T} \\mathbf{\\Sigma} \\mathbf{a} = (a_{1} \\mathbf{u}_{1} + a_{2} \\mathbf{u}_{2} + \\dots + a_{D} \\mathbf{u}_{D})^{T} \\mathbf{\\Sigma} (a_{1} \\mathbf{u}_{1} + a_{2} \\mathbf{u}_{2} + \\dots + a_{D} \\mathbf{u}_{D})$$\n\n$$= (a_{1} \\mathbf{u}_{1}^{T} + a_{2} \\mathbf{u}_{2}^{T} + \\dots + a_{D} \\mathbf{u}_{D}^{T}) \\mathbf{\\Sigma} (a_{1} \\mathbf{u}_{1} + a_{2} \\mathbf{u}_{2} + \\dots + a_{D} \\mathbf{u}_{D})$$\n\n$$= (a_{1} \\mathbf{u}_{1}^{T} + a_{2} \\mathbf{u}_{2}^{T} + \\dots + a_{D} \\mathbf{u}_{D}^{T}) (a_{1} \\lambda_{1} \\mathbf{u}_{1} + a_{2} \\lambda_{2} \\mathbf{u}_{2} + \\dots + a_{D} \\lambda_{D} \\mathbf{u}_{D})$$\n\n$$= \\lambda_{1} a_{1}^{2} + \\lambda_{2} a_{2}^{2} + \\dots + \\lambda_{D} a_{D}^{2}$$\n\nSince $\\boldsymbol{a}$ is real,the expression above will be strictly positive for any non-zero $\\boldsymbol{a}$ , if all eigenvalues are strictly positive. It is also clear that if an eigenvalue, $\\lambda_i$ , is zero or negative, there will exist a vector $\\boldsymbol{a}$ (e.g. $\\boldsymbol{a} = \\boldsymbol{u_i}$ ), for which this expression will be no greater than 0. Thus, that a real symmetric matrix has eigenvectors which are all strictly positive is a sufficient and necessary condition for the matrix to be positive definite.",
"answer_length": 1668
},
{
"chapter": 2,
"question_number": "2.21",
"difficulty": "easy",
"question_text": "Show that a real, symmetric matrix of size $D \\times D$ has D(D+1)/2 independent parameters.",
"answer": "It is straightforward. For a symmetric matrix $\\Lambda$ of size $D \\times D$ , when the lower triangular part is decided, the whole matrix will be decided due to\n\nsymmetry. Hence the number of independent parameters is D + (D-1) + ... + 1, which equals to D(D+1)/2.",
"answer_length": 268
},
{
"chapter": 2,
"question_number": "2.22",
"difficulty": "easy",
"question_text": "Show that the inverse of a symmetric matrix is itself symmetric.",
"answer": "Suppose A is a symmetric matrix, and we need to prove that $A^{-1}$ is also symmetric, i.e., $A^{-1} = (A^{-1})^T$ . Since identity matrix I is also symmetric, we have:\n\n$$AA^{-1} = (AA^{-1})^T$$\n\nAnd since $AB^T = B^TA^T$ holds for arbitrary matrix A and B, we will obtain:\n\n$$\\boldsymbol{A}\\boldsymbol{A}^{-1} = (\\boldsymbol{A}^{-1})^T \\boldsymbol{A}^T$$\n\nSince $\\mathbf{A} = \\mathbf{A}^T$ , we substitute the right side:\n\n$$\\boldsymbol{A}\\boldsymbol{A}^{-1} = (\\boldsymbol{A}^{-1})^T \\boldsymbol{A}$$\n\nAnd note that $\\mathbf{A}\\mathbf{A}^{-1} = \\mathbf{A}^{-1}\\mathbf{A} = \\mathbf{I}$ , we rearrange the order of the left side :\n\n$$\\boldsymbol{A}^{-1}\\boldsymbol{A} = (\\boldsymbol{A}^{-1})^T \\boldsymbol{A}$$\n\nFinally, by multiplying $A^{-1}$ to both sides, we can obtain:\n\n$$A^{-1}AA^{-1} = (A^{-1})^T AA^{-1}$$\n\nUsing $AA^{-1} = I$ , we will get what we are asked:\n\n$$\\boldsymbol{A}^{-1} = \\left(\\boldsymbol{A}^{-1}\\right)^T$$",
"answer_length": 941
},
{
"chapter": 2,
"question_number": "2.23",
"difficulty": "medium",
"question_text": "By diagonalizing the coordinate system using the eigenvector expansion (2.45: $\\mathbf{\\Sigma}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$), show that the volume contained within the hyperellipsoid corresponding to a constant\n\nMahalanobis distance $\\Delta$ is given by\n\n$$V_D |\\mathbf{\\Sigma}|^{1/2} \\Delta^D \\tag{2.286}$$\n\nwhere $V_D$ is the volume of the unit sphere in D dimensions, and the Mahalanobis distance is defined by (2.44: $\\Delta^{2} = (\\mathbf{x} - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})$).",
"answer": "Let's reformulate the problem. What the problem wants us to prove is that if $(\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) = r^2$ , where $r^2$ is a constant, we will have the volume of the hyperellipsoid decided by the equation above will equal to $V_D |\\mathbf{\\Sigma}|^{1/2} r^D$ . Note that the center of this hyperellipsoid locates at $\\boldsymbol{\\mu}$ , and a translation operation won't change its volume, thus we only need to prove that the volume of a hyperellipsoid decided by $\\mathbf{x}^T \\mathbf{\\Sigma}^{-1} \\mathbf{x} = r^2$ , whose center locates at $\\mathbf{0}$ equals to $V_D |\\mathbf{\\Sigma}|^{1/2} r^D$ .\n\nThis problem can be viewed as two parts. Firstly, let's discuss about $V_D$ , the volume of a unit sphere in dimension D. The expression of $V_D$ has already be given in the solution procedure of Prob.1.18, i.e., (1.144):\n\n$$V_D = \\frac{S_D}{D} = \\frac{2\\pi^{D/2}}{\\Gamma(\\frac{D}{2} + 1)}$$\n\nAnd also in the procedure, we show that a D dimensional sphere with radius r, i.e., $\\mathbf{x}^T\\mathbf{x} = r^2$ , has volume $V(r) = V_D r^D$ . We move a step forward: we\n\nperform a linear transform using matrix $\\Sigma^{1/2}$ , i.e., $\\mathbf{y}^T \\mathbf{y} = r^2$ , where $\\mathbf{y} = \\Sigma^{1/2} \\mathbf{x}$ . After the linear transformation, we actually get a hyperellipsoid whose center locates at $\\mathbf{0}$ , and its volume is given by multiplying V(r) with the determinant of the transformation matrix, which gives $|\\Sigma|^{1/2}V_D r^D$ , just as required.",
"answer_length": 1555
},
{
"chapter": 2,
"question_number": "2.24",
"difficulty": "medium",
"question_text": "\\star)$ www Prove the identity (2.76: $\\begin{pmatrix} \\mathbf{A} & \\mathbf{B} \\\\ \\mathbf{C} & \\mathbf{D} \\end{pmatrix}^{-1} = \\begin{pmatrix} \\mathbf{M} & -\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\\\ -\\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M} & \\mathbf{D}^{-1} + \\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\end{pmatrix}$) by multiplying both sides by the matrix\n\n$$\\begin{pmatrix} \\mathbf{A} & \\mathbf{B} \\\\ \\mathbf{C} & \\mathbf{D} \\end{pmatrix} \\tag{2.287}$$\n\nand making use of the definition (2.77: $\\mathbf{M} = (\\mathbf{A} - \\mathbf{B}\\mathbf{D}^{-1}\\mathbf{C})^{-1}.$).",
"answer": "We just following the hint, and firstly let's calculate:\n\n$$\\left[\\begin{array}{cc} A & B \\\\ C & D \\end{array}\\right] \\times \\left[\\begin{array}{cc} M & -MBD^{-1} \\\\ -D^{-1}CM & D^{-1} + D^{-1}CMBD^{-1} \\end{array}\\right]$$\n\nThe result can also be partitioned into four blocks. The block located at left top equals to :\n\n$$AM - BD^{-1}CM = (A - BD^{-1}C)(A - BD^{-1}C)^{-1} = I$$\n\nWhere we have taken advantage of (2.77: $\\mathbf{M} = (\\mathbf{A} - \\mathbf{B}\\mathbf{D}^{-1}\\mathbf{C})^{-1}.$). And the right top equals to:\n\n$$-AMBD^{-1} + BD^{-1} + BD^{-1}CMBD^{-1} = (I - AM + BD^{-1}CM)BD^{-1} = 0$$\n\nWhere we have used the result of the left top block. And the left bottom equals to :\n\n$$CM - DD^{-1}CM = 0$$\n\nAnd the right bottom equals to:\n\n$$-CMRD^{-1} + DD^{-1} + DD^{-1}CMDD^{-1} = I$$\n\nwe have proved what we are asked. Note: if you want to be more precise, you should also multiply the block matrix on the right side of (2.76: $\\begin{pmatrix} \\mathbf{A} & \\mathbf{B} \\\\ \\mathbf{C} & \\mathbf{D} \\end{pmatrix}^{-1} = \\begin{pmatrix} \\mathbf{M} & -\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\\\ -\\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M} & \\mathbf{D}^{-1} + \\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\end{pmatrix}$) and then prove that it will equal to a identity matrix. However, the procedure above can be also used there, so we omit the proof and what's more, if two arbitrary square matrix X and Y satisfied XY = I, it can be shown that YX = I also holds.",
"answer_length": 1473
},
{
"chapter": 2,
"question_number": "2.25",
"difficulty": "medium",
"question_text": "In Sections 2.3.1 and 2.3.2, we considered the conditional and marginal distributions for a multivariate Gaussian. More generally, we can consider a partitioning of the components of $\\mathbf{x}$ into three groups $\\mathbf{x}_a$ , $\\mathbf{x}_b$ , and $\\mathbf{x}_c$ , with a corresponding partitioning of the mean vector $\\boldsymbol{\\mu}$ and of the covariance matrix $\\boldsymbol{\\Sigma}$ in the form\n\n$$\\mu = \\begin{pmatrix} \\mu_a \\\\ \\mu_b \\\\ \\mu_c \\end{pmatrix}, \\qquad \\Sigma = \\begin{pmatrix} \\Sigma_{aa} & \\Sigma_{ab} & \\Sigma_{ac} \\\\ \\Sigma_{ba} & \\Sigma_{bb} & \\Sigma_{bc} \\\\ \\Sigma_{ca} & \\Sigma_{cb} & \\Sigma_{cc} \\end{pmatrix}. \\tag{2.288}$$\n\nBy making use of the results of Section 2.3, find an expression for the conditional distribution $p(\\mathbf{x}_a|\\mathbf{x}_b)$ in which $\\mathbf{x}_c$ has been marginalized out.",
"answer": "We will take advantage of the result of (2.94)-(2.98). Let's first begin by grouping $x_a$ and $x_b$ together, and then we rewrite what has been given as:\n\n$$m{x} = \\left( egin{array}{c} m{x}_{a,b} \\\\ m{x}_c \\end{array} \night) \\quad m{\\mu} = \\left( egin{array}{c} m{\\mu}_{a,b} \\\\ m{\\mu}_c \\end{array} \night) \\quad m{\\Sigma} = \\left[ egin{array}{c} m{\\Sigma}_{(a,b)(a,b)} & m{\\Sigma}_{(a,b)c} \\\\ m{\\Sigma}_{(a,b)c} & m{\\Sigma}_{cc} \\end{array} \night]$$\n\nThen we take advantage of (2.98: $p(\\mathbf{x}_a) = \\mathcal{N}(\\mathbf{x}_a | \\boldsymbol{\\mu}_a, \\boldsymbol{\\Sigma}_{aa}).$), we can obtain:\n\n$$p(\\boldsymbol{x}_{a,b}) = \\mathcal{N}(\\boldsymbol{x}_{a,b}|\\boldsymbol{\\mu}_{a,b},\\boldsymbol{\\Sigma}_{(a,b)(a,b)})$$\n\nWhere we have defined:\n\n$$\\boldsymbol{\\mu}_{a,b} = \\left( \\begin{array}{c} \\boldsymbol{\\mu}_a \\\\ \\boldsymbol{\\mu}_b \\end{array} \\right) \\quad \\boldsymbol{\\Sigma}_{(a,b)(a,b)} = \\left[ \\begin{array}{cc} \\boldsymbol{\\Sigma}_{aa} & \\boldsymbol{\\Sigma}_{ab} \\\\ \\boldsymbol{\\Sigma}_{ba} & \\boldsymbol{\\Sigma}_{bb} \\end{array} \\right]$$\n\nSince now we have obtained the joint contribution of $x_a$ and $x_b$ , we will take advantage of (2.96: $p(\\mathbf{x}_a|\\mathbf{x}_b) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_{a|b}, \\boldsymbol{\\Lambda}_{aa}^{-1})$) (2.97: $\\boldsymbol{\\mu}_{a|b} = \\boldsymbol{\\mu}_a - \\boldsymbol{\\Lambda}_{aa}^{-1} \\boldsymbol{\\Lambda}_{ab} (\\mathbf{x}_b - \\boldsymbol{\\mu}_b).$) to obtain conditional distribution, which gives:\n\n$$p(\\mathbf{x}_a|\\mathbf{x}_b) = \\mathcal{N}(\\mathbf{x}|\\mathbf{\\mu}_{a|b}, \\mathbf{\\Lambda}_{aa}^{-1})$$\n\nWhere we have defined\n\n$$\\boldsymbol{\\mu}_{a|b} = \\boldsymbol{\\mu}_a - \\boldsymbol{\\Lambda}_{aa}^{-1} \\boldsymbol{\\Lambda}_{ab} (\\boldsymbol{x}_b - \\boldsymbol{\\mu}_b)$$\n\nAnd the expression of $\\Lambda_{aa}^{-1}$ and $\\Lambda_{ab}$ can be given by using (2.76: $\\begin{pmatrix} \\mathbf{A} & \\mathbf{B} \\\\ \\mathbf{C} & \\mathbf{D} \\end{pmatrix}^{-1} = \\begin{pmatrix} \\mathbf{M} & -\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\\\ -\\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M} & \\mathbf{D}^{-1} + \\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\end{pmatrix}$) and (2.77: $\\mathbf{M} = (\\mathbf{A} - \\mathbf{B}\\mathbf{D}^{-1}\\mathbf{C})^{-1}.$) once we notice that the following relation exits:\n\n$$\\left[\\begin{array}{cc} \\boldsymbol{\\Lambda}_{aa} & \\boldsymbol{\\Lambda}_{ab} \\\\ \\boldsymbol{\\Lambda}_{ba} & \\boldsymbol{\\Lambda}_{bb} \\end{array}\\right] = \\left[\\begin{array}{cc} \\boldsymbol{\\Sigma}_{aa} & \\boldsymbol{\\Sigma}_{ab} \\\\ \\boldsymbol{\\Sigma}_{ba} & \\boldsymbol{\\Sigma}_{bb} \\end{array}\\right]^{-1}$$",
"answer_length": 2588
},
{
"chapter": 2,
"question_number": "2.26",
"difficulty": "medium",
"question_text": "\\star)$ A very useful result from linear algebra is the *Woodbury* matrix inversion formula given by\n\n$$(\\mathbf{A} + \\mathbf{BCD})^{-1} = \\mathbf{A}^{-1} - \\mathbf{A}^{-1}\\mathbf{B}(\\mathbf{C}^{-1} + \\mathbf{D}\\mathbf{A}^{-1}\\mathbf{B})^{-1}\\mathbf{D}\\mathbf{A}^{-1}.$$\n (2.289: $(\\mathbf{A} + \\mathbf{BCD})^{-1} = \\mathbf{A}^{-1} - \\mathbf{A}^{-1}\\mathbf{B}(\\mathbf{C}^{-1} + \\mathbf{D}\\mathbf{A}^{-1}\\mathbf{B})^{-1}\\mathbf{D}\\mathbf{A}^{-1}.$)\n\nBy multiplying both sides by (A + BCD) prove the correctness of this result.",
"answer": "This problem is quite straightforward, if we just follow the hint.\n\n$$(A + BCD) (A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1})$$\n\n$$= AA^{-1} - AA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + BCDA^{-1} - BCDA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$\n\n$$= I - B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + BCDA^{-1} + B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} - BCDA^{-1}$$\n\n$$= I$$\n\nWhere we have taken advantage of\n\n$$-BCDA^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$\n\n$$= -BC(-C^{-1} + C^{-1} + DA^{-1}B)(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$\n\n$$= (-BC)(-C^{-1})(C^{-1} + DA^{-1}B)^{-1}DA^{-1} + (-BC)(C^{-1} + DA^{-1}B)(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$\n\n$$= B(C^{-1} + DA^{-1}B)^{-1}DA^{-1} - BCDA^{-1}$$\n\nHere we will also directly calculate the inverse matrix instead to give another solution. Let's first begin by introducing two useful formulas.\n\n$$(I + P)^{-1} = (I + P)^{-1}(I + P - P)$$\n \n= $I - (I + P)^{-1}P$ \n\nAnd since\n\n$$P + PQP = P(I + QP) = (I + PQ)P$$\n\nThe second formula is:\n\n$$(\\boldsymbol{I} + \\boldsymbol{P}\\boldsymbol{Q})^{-1}\\boldsymbol{P} = \\boldsymbol{P}(\\boldsymbol{I} + \\boldsymbol{Q}\\boldsymbol{P})^{-1}$$\n\nAnd now let's directly calculate $(A + BCD)^{-1}$ :\n\n$$(A + BCD)^{-1} = [A(I + A^{-1}BCD)]^{-1}$$\n\n$$= (I + A^{-1}BCD)^{-1}A^{-1}$$\n\n$$= [I - (I + A^{-1}BCD)^{-1}A^{-1}BCD]A^{-1}$$\n\n$$= A^{-1} - (I + A^{-1}BCD)^{-1}A^{-1}BCDA^{-1}$$\n\nWhere we have assumed that $\\boldsymbol{A}$ is invertible and also used the first formula we introduced. Then we also assume that $\\boldsymbol{C}$ is invertible and recursively use the second formula:\n\n$$(A + BCD)^{-1} = A^{-1} - (I + A^{-1}BCD)^{-1}A^{-1}BCDA^{-1}$$\n\n$$= A^{-1} - A^{-1}(I + BCDA^{-1})^{-1}BCDA^{-1}$$\n\n$$= A^{-1} - A^{-1}B(I + CDA^{-1}B)^{-1}CDA^{-1}$$\n\n$$= A^{-1} - A^{-1}B[C(C^{-1} + DA^{-1}B)]^{-1}CDA^{-1}$$\n\n$$= A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}C^{-1}CDA^{-1}$$\n\n$$= A^{-1} - A^{-1}B(C^{-1} + DA^{-1}B)^{-1}DA^{-1}$$\n\nJust as required.",
"answer_length": 1908
},
{
"chapter": 2,
"question_number": "2.27",
"difficulty": "easy",
"question_text": "Let $\\mathbf{x}$ and $\\mathbf{z}$ be two independent random vectors, so that $p(\\mathbf{x}, \\mathbf{z}) = p(\\mathbf{x})p(\\mathbf{z})$ . Show that the mean of their sum $\\mathbf{y} = \\mathbf{x} + \\mathbf{z}$ is given by the sum of the means of each of the variable separately. Similarly, show that the covariance matrix of $\\mathbf{y}$ is given by the sum of the covariance matrices of $\\mathbf{x}$ and $\\mathbf{z}$ . Confirm that this result agrees with that of Exercise 1.10.",
"answer": "The same procedure used in Prob.1.10 can be used here similarly.\n\n$$\\mathbb{E}[\\mathbf{x}+\\mathbf{z}] = \\int \\int (\\mathbf{x}+\\mathbf{z})p(\\mathbf{x},\\mathbf{z})d\\mathbf{x}d\\mathbf{z}$$\n\n$$= \\int \\int (\\mathbf{x}+\\mathbf{z})p(\\mathbf{x})p(\\mathbf{z})d\\mathbf{x}d\\mathbf{z}$$\n\n$$= \\int \\int \\mathbf{x}p(\\mathbf{x})p(\\mathbf{z})d\\mathbf{x}d\\mathbf{z} + \\int \\int \\mathbf{z}p(\\mathbf{x})p(\\mathbf{z})d\\mathbf{x}d\\mathbf{z}$$\n\n$$= \\int (\\int p(\\mathbf{z})d\\mathbf{z})\\mathbf{x}p(\\mathbf{x})d\\mathbf{x} + \\int (\\int p(\\mathbf{x})d\\mathbf{x})\\mathbf{z}p(\\mathbf{z})d\\mathbf{z}$$\n\n$$= \\int \\mathbf{x}p(\\mathbf{x})d\\mathbf{x} + \\int \\mathbf{z}p(\\mathbf{z})d\\mathbf{z}$$\n\n$$= \\mathbb{E}[\\mathbf{x}] + \\mathbb{E}[\\mathbf{z}]$$\n\nAnd for covariance matrix, we will use matrix integral:\n\n$$cov[x+z] = \\int \\int (x+z-\\mathbb{E}[x+z])(x+z-\\mathbb{E}[x+z])^T p(x,z) dx dz$$\n\nAlso the same procedure can be used here. We omit the proof for simplicity.",
"answer_length": 934
},
{
"chapter": 2,
"question_number": "2.28",
"difficulty": "hard",
"question_text": "\\star \\star)$ www Consider a joint distribution over the variable\n\n$$\\mathbf{z} = \\begin{pmatrix} \\mathbf{x} \\\\ \\mathbf{y} \\end{pmatrix} \\tag{2.290}$$\n\nwhose mean and covariance are given by (2.108: $\\mathbb{E}[\\mathbf{z}] = \\begin{pmatrix} \\boldsymbol{\\mu} \\\\ \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b} \\end{pmatrix}.$) and (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$) respectively. By making use of the results (2.92: $\\mathbb{E}[\\mathbf{x}_a] = \\boldsymbol{\\mu}_a$) and (2.93: $cov[\\mathbf{x}_a] = \\mathbf{\\Sigma}_{aa}.$) show that the marginal distribution $p(\\mathbf{x})$ is given (2.99: $p(\\mathbf{x}) = \\mathcal{N}\\left(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1}\\right)$). Similarly, by making use of the results (2.81: $\\boldsymbol{\\mu}_{a|b} = \\boldsymbol{\\mu}_a + \\boldsymbol{\\Sigma}_{ab} \\boldsymbol{\\Sigma}_{bb}^{-1} (\\mathbf{x}_b - \\boldsymbol{\\mu}_b)$) and (2.82: $\\Sigma_{a|b} = \\Sigma_{aa} - \\Sigma_{ab} \\Sigma_{bb}^{-1} \\Sigma_{ba}.$) show that the conditional distribution $p(\\mathbf{y}|\\mathbf{x})$ is given by (2.100: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$).",
"answer": "It is quite straightforward when we compare the problem with (2.94)-(2.98). We treat $\\boldsymbol{x}$ in (2.94: $\\mathbf{x} = \\begin{pmatrix} \\mathbf{x}_a \\\\ \\mathbf{x}_b \\end{pmatrix}, \\quad \\boldsymbol{\\mu} = \\begin{pmatrix} \\boldsymbol{\\mu}_a \\\\ \\boldsymbol{\\mu}_b \\end{pmatrix}$) as $\\boldsymbol{z}$ in this problem, $\\boldsymbol{x}_a$ in (2.94: $\\mathbf{x} = \\begin{pmatrix} \\mathbf{x}_a \\\\ \\mathbf{x}_b \\end{pmatrix}, \\quad \\boldsymbol{\\mu} = \\begin{pmatrix} \\boldsymbol{\\mu}_a \\\\ \\boldsymbol{\\mu}_b \\end{pmatrix}$) as $\\boldsymbol{x}$ in this problem, $\\boldsymbol{x}_b$ in (2.94: $\\mathbf{x} = \\begin{pmatrix} \\mathbf{x}_a \\\\ \\mathbf{x}_b \\end{pmatrix}, \\quad \\boldsymbol{\\mu} = \\begin{pmatrix} \\boldsymbol{\\mu}_a \\\\ \\boldsymbol{\\mu}_b \\end{pmatrix}$) as $\\boldsymbol{y}$ in this problem. In other words, we rewrite the problem in the form of (2.94)-(2.98), which gives:\n\n$$\\boldsymbol{z} = \\begin{pmatrix} \\boldsymbol{x} \\\\ \\boldsymbol{y} \\end{pmatrix} \\quad \\mathbb{E}(\\boldsymbol{z}) = \\begin{pmatrix} \\boldsymbol{\\mu} \\\\ \\boldsymbol{A}\\boldsymbol{\\mu} + \\boldsymbol{b} \\end{pmatrix} \\quad cov(\\boldsymbol{z}) = \\begin{bmatrix} \\boldsymbol{\\Lambda}^{-1} & \\boldsymbol{\\Lambda}^{-1}\\boldsymbol{A}^T \\\\ \\boldsymbol{A}\\boldsymbol{\\Lambda}^{-1} & \\boldsymbol{L}^{-1} + \\boldsymbol{A}\\boldsymbol{\\Lambda}^{-1}\\boldsymbol{A}^T \\end{bmatrix}$$\n\nBy using (2.98: $p(\\mathbf{x}_a) = \\mathcal{N}(\\mathbf{x}_a | \\boldsymbol{\\mu}_a, \\boldsymbol{\\Sigma}_{aa}).$), we can obtain:\n\n$$p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$$\n\nAnd by using (2.96: $p(\\mathbf{x}_a|\\mathbf{x}_b) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_{a|b}, \\boldsymbol{\\Lambda}_{aa}^{-1})$) and (2.97: $\\boldsymbol{\\mu}_{a|b} = \\boldsymbol{\\mu}_a - \\boldsymbol{\\Lambda}_{aa}^{-1} \\boldsymbol{\\Lambda}_{ab} (\\mathbf{x}_b - \\boldsymbol{\\mu}_b).$), we can obtain:\n\n$$p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\boldsymbol{\\mu}_{\\mathbf{y}|\\mathbf{x}}, \\boldsymbol{\\Lambda}_{\\mathbf{y}\\mathbf{y}}^{-1})$$\n\nWhere $\\Lambda_{yy}$ can be obtained by the right bottom part of (2.104: $\\mathbf{R} = \\begin{pmatrix} \\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{A} & -\\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\\\ -\\mathbf{L} \\mathbf{A} & \\mathbf{L} \\end{pmatrix}.$), which gives $\\Lambda_{yy} = L^{-1}$ , and you can also calculate it using (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$) combined with (2.78: $\\begin{pmatrix} \\mathbf{\\Sigma}_{aa} & \\mathbf{\\Sigma}_{ab} \\\\ \\mathbf{\\Sigma}_{ba} & \\mathbf{\\Sigma}_{bb} \\end{pmatrix}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}_{aa} & \\mathbf{\\Lambda}_{ab} \\\\ \\mathbf{\\Lambda}_{ba} & \\mathbf{\\Lambda}_{bb} \\end{pmatrix}$) and (2.79: $\\Lambda_{aa} = (\\Sigma_{aa} - \\Sigma_{ab} \\Sigma_{bb}^{-1} \\Sigma_{ba})^{-1}$). Finally the conditional mean is given by (2.97):\n\n$$\\mu_{y|x} = A\\mu + L - L^{-1}(-LA)(x - \\mu) = Ax + L$$",
"answer_length": 3105
},
{
"chapter": 2,
"question_number": "2.29",
"difficulty": "medium",
"question_text": "Using the partitioned matrix inversion formula (2.76: $\\begin{pmatrix} \\mathbf{A} & \\mathbf{B} \\\\ \\mathbf{C} & \\mathbf{D} \\end{pmatrix}^{-1} = \\begin{pmatrix} \\mathbf{M} & -\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\\\ -\\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M} & \\mathbf{D}^{-1} + \\mathbf{D}^{-1}\\mathbf{C}\\mathbf{M}\\mathbf{B}\\mathbf{D}^{-1} \\end{pmatrix}$), show that the inverse of the precision matrix (2.104: $\\mathbf{R} = \\begin{pmatrix} \\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{A} & -\\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\\\ -\\mathbf{L} \\mathbf{A} & \\mathbf{L} \\end{pmatrix}.$) is given by the covariance matrix (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$).",
"answer": "It is straightforward. Firstly, we calculate the left top block:\n\nleft top = \n$$\\left[ (\\boldsymbol{\\Lambda} + \\boldsymbol{A}^T \\boldsymbol{L} \\boldsymbol{A}) - (-\\boldsymbol{A}^T \\boldsymbol{L})(\\boldsymbol{L}^{-1})(-\\boldsymbol{L} \\boldsymbol{A}) \\right]^{-1} = \\boldsymbol{\\Lambda}^{-1}$$\n\nAnd then the right top block:\n\nright top = \n$$-\\boldsymbol{\\Lambda}^{-1}(-\\boldsymbol{A}^T\\boldsymbol{L})\\boldsymbol{L}^{-1} = \\boldsymbol{\\Lambda}^{-1}\\boldsymbol{A}^T$$\n\nAnd then the left bottom block:\n\nleft bottom = \n$$-\\boldsymbol{L}^{-1}(-\\boldsymbol{L}\\boldsymbol{A})\\boldsymbol{\\Lambda}^{-1} = \\boldsymbol{A}\\boldsymbol{\\Lambda}^{-1}$$\n\nFinally the right bottom block:\n\nright bottom = \n$$L^{-1} + L^{-1}(-LA)\\Lambda^{-1}(-A^TL)L^{-1} = L^{-1} + A\\Lambda^{-1}A^T$$",
"answer_length": 763
},
{
"chapter": 2,
"question_number": "2.3",
"difficulty": "medium",
"question_text": "In this exercise, we prove that the binomial distribution (2.9: $(m|N,\\mu) = \\binom{N}{m} \\mu^m (1-\\mu)^{N-m}$) is normalized. First use the definition (2.10: $\\binom{N}{m} \\equiv \\frac{N!}{(N-m)!m!}$) of the number of combinations of m identical objects chosen from a total of N to show that\n\n$$\\binom{N}{m} + \\binom{N}{m-1} = \\binom{N+1}{m}.$$\n (2.262: $\\binom{N}{m} + \\binom{N}{m-1} = \\binom{N+1}{m}.$)\n\nUse this result to prove by induction the following result\n\n$$(1+x)^N = \\sum_{m=0}^N \\binom{N}{m} x^m$$\n (2.263: $(1+x)^N = \\sum_{m=0}^N \\binom{N}{m} x^m$)\n\nwhich is known as the *binomial theorem*, and which is valid for all real values of x. Finally, show that the binomial distribution is normalized, so that\n\n$$\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$$\n (2.264: $\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$)\n\nwhich can be done by first pulling out a factor $(1 - \\mu)^N$ out of the summation and then making use of the binomial theorem.",
"answer": "(2.262: $\\binom{N}{m} + \\binom{N}{m-1} = \\binom{N+1}{m}.$) is an important property of Combinations, which we have used before, such as in Prob.1.15. We will use the 'old fashioned' denotation $C_N^m$ to represent choose m objects from a total of N. With the prior knowledge:\n\n$$C_N^m = \\frac{N!}{m!(N-m)!}$$\n\nWe evaluate the left side of (2.262):\n\n$$C_N^m + C_N^{m-1} = \\frac{N!}{m!(N-m)!} + \\frac{N!}{(m-1)!(N-(m-1))!}$$\n\n$$= \\frac{N!}{(m-1)!(N-m)!} (\\frac{1}{m} + \\frac{1}{N-m+1})$$\n\n$$= \\frac{(N+1)!}{m!(N+1-m)!} = C_{N+1}^m$$\n\nTo proof (2.263: $(1+x)^N = \\sum_{m=0}^N \\binom{N}{m} x^m$), here we will proof a more general form:\n\n$$(x+y)^{N} = \\sum_{m=0}^{N} C_{N}^{m} x^{m} y^{N-m}$$\n (\\*)\n\nIf we let y = 1, (\\*) will reduce to (2.263: $(1+x)^N = \\sum_{m=0}^N \\binom{N}{m} x^m$). We will proof it by induction. First, it is obvious when N = 1, (\\*) holds. We assume that it holds for N, we will proof that it also holds for N + 1.\n\n$$(x+y)^{N+1} = (x+y) \\sum_{m=0}^{N} C_N^m x^m y^{N-m}$$\n\n$$= x \\sum_{m=0}^{N} C_N^m x^m y^{N-m} + y \\sum_{m=0}^{N} C_N^m x^m y^{N-m}$$\n\n$$= \\sum_{m=0}^{N} C_N^m x^{m+1} y^{N-m} + \\sum_{m=0}^{N} C_N^m x^m y^{N+1-m}$$\n\n$$= \\sum_{m=1}^{N+1} C_N^{m-1} x^m y^{N+1-m} + \\sum_{m=0}^{N} C_N^m x^m y^{N+1-m}$$\n\n$$= \\sum_{m=1}^{N} (C_N^{m-1} + C_N^m) x^m y^{N+1-m} + x^{N+1} + y^{N+1}$$\n\n$$= \\sum_{m=1}^{N} C_{N+1}^m x^m y^{N+1-m} + x^{N+1} + y^{N+1}$$\n\n$$= \\sum_{m=0}^{N+1} C_{N+1}^m x^m y^{N+1-m}$$\n\nBy far, we have proved (\\*). Therefore, if we let y = 1 in (\\*), (2.263: $(1+x)^N = \\sum_{m=0}^N \\binom{N}{m} x^m$) has been proved. If we let $x = \\mu$ and $y = 1 - \\mu$ , (2.264: $\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$) has been proved.",
"answer_length": 1687
},
{
"chapter": 2,
"question_number": "2.30",
"difficulty": "easy",
"question_text": "By starting from (2.107: $\\mathbb{E}[\\mathbf{z}] = \\mathbf{R}^{-1} \\begin{pmatrix} \\mathbf{\\Lambda} \\boldsymbol{\\mu} - \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{b} \\\\ \\mathbf{L} \\mathbf{b} \\end{pmatrix}.$) and making use of the result (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$), verify the result (2.108: $\\mathbb{E}[\\mathbf{z}] = \\begin{pmatrix} \\boldsymbol{\\mu} \\\\ \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b} \\end{pmatrix}.$).",
"answer": "It is straightforward by multiplying (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$) and (2.107: $\\mathbb{E}[\\mathbf{z}] = \\mathbf{R}^{-1} \\begin{pmatrix} \\mathbf{\\Lambda} \\boldsymbol{\\mu} - \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{b} \\\\ \\mathbf{L} \\mathbf{b} \\end{pmatrix}.$), which gives:\n\n$$\\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{T} \\\\ \\mathbf{A}\\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A}\\mathbf{\\Lambda}^{-1} \\mathbf{A}^{T} \\end{pmatrix} \\begin{pmatrix} \\mathbf{\\Lambda}\\boldsymbol{\\mu} - \\mathbf{A}^{T} \\mathbf{L} \\boldsymbol{b} \\\\ \\mathbf{L} \\boldsymbol{b} \\end{pmatrix} = \\begin{pmatrix} \\boldsymbol{\\mu} \\\\ \\mathbf{A}\\boldsymbol{\\mu} + \\boldsymbol{b} \\end{pmatrix}$$\n\nJust as required in the problem.",
"answer_length": 968
},
{
"chapter": 2,
"question_number": "2.31",
"difficulty": "medium",
"question_text": "Consider two multidimensional random vectors $\\mathbf{x}$ and $\\mathbf{z}$ having Gaussian distributions $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_{\\mathbf{x}}, \\boldsymbol{\\Sigma}_{\\mathbf{x}})$ and $p(\\mathbf{z}) = \\mathcal{N}(\\mathbf{z}|\\boldsymbol{\\mu}_{\\mathbf{z}}, \\boldsymbol{\\Sigma}_{\\mathbf{z}})$ respectively, together with their sum $\\mathbf{y} = \\mathbf{x} + \\mathbf{z}$ . Use the results (2.109: $\\mathbb{E}[\\mathbf{y}] = \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}$) and (2.110: $\\operatorname{cov}[\\mathbf{y}] = \\mathbf{L}^{-1} + \\mathbf{A}\\mathbf{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}}.$) to find an expression for the marginal distribution $p(\\mathbf{y})$ by considering the linear-Gaussian model comprising the product of the marginal distribution $p(\\mathbf{x})$ and the conditional distribution $p(\\mathbf{y}|\\mathbf{x})$ .",
"answer": "According to the problem, we can write two expressions:\n\n$$p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_{\\mathbf{x}}, \\boldsymbol{\\Sigma}_{\\mathbf{x}}), \\quad p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\boldsymbol{\\mu}_{\\mathbf{z}} + \\mathbf{x}, \\boldsymbol{\\Sigma}_{\\mathbf{z}})$$\n\nBy comparing the expression above and (2.113)-(2.117), we can write the expression of p(y):\n\n$$p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\boldsymbol{\\mu}_{x} + \\boldsymbol{\\mu}_{z}, \\boldsymbol{\\Sigma}_{x} + \\boldsymbol{\\Sigma}_{z})$$",
"answer_length": 532
},
{
"chapter": 2,
"question_number": "2.32",
"difficulty": "hard",
"question_text": "This exercise and the next provide practice at manipulating the quadratic forms that arise in linear-Gaussian models, as well as giving an independent check of results derived in the main text. Consider a joint distribution $p(\\mathbf{x}, \\mathbf{y})$ defined by the marginal and conditional distributions given by (2.99: $p(\\mathbf{x}) = \\mathcal{N}\\left(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1}\\right)$) and (2.100: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$). By examining the quadratic form in the exponent of the joint distribution, and using the technique of 'completing the square' discussed in Section 2.3, find expressions for the mean and covariance of the marginal distribution $p(\\mathbf{y})$ in which the variable $\\mathbf{x}$ has been integrated out. To do this, make use of the Woodbury matrix inversion formula (2.289: $(\\mathbf{A} + \\mathbf{BCD})^{-1} = \\mathbf{A}^{-1} - \\mathbf{A}^{-1}\\mathbf{B}(\\mathbf{C}^{-1} + \\mathbf{D}\\mathbf{A}^{-1}\\mathbf{B})^{-1}\\mathbf{D}\\mathbf{A}^{-1}.$). Verify that these results agree with (2.109: $\\mathbb{E}[\\mathbf{y}] = \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}$) and (2.110: $\\operatorname{cov}[\\mathbf{y}] = \\mathbf{L}^{-1} + \\mathbf{A}\\mathbf{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}}.$) obtained using the results of Chapter 2.",
"answer": "Let's make this problem more clear. The deduction in the main text, i.e., (2.101-2.110), firstly denote a new random variable z corresponding to the joint distribution, and then by completing square according to z,i.e.,(2.103), obtain the precision matrix R by comparing (2.103: $= -\\frac{1}{2}\\begin{pmatrix} \\mathbf{x} \\\\ \\mathbf{y} \\end{pmatrix}^{\\mathrm{T}}\\begin{pmatrix} \\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}}\\mathbf{L}\\mathbf{A} & -\\mathbf{A}^{\\mathrm{T}}\\mathbf{L} \\\\ -\\mathbf{L}\\mathbf{A} & \\mathbf{L} \\end{pmatrix}\\begin{pmatrix} \\mathbf{x} \\\\ \\mathbf{y} \\end{pmatrix} = -\\frac{1}{2}\\mathbf{z}^{\\mathrm{T}}\\mathbf{R}\\mathbf{z} \\quad$) with the PDF of a multivariate Gaussian Distribution, and then it takes the inverse of precision matrix to obtain covariance matrix, and finally it obtains the linear term i.e., (2.106: $\\mathbf{x}^{\\mathrm{T}} \\mathbf{\\Lambda} \\boldsymbol{\\mu} - \\mathbf{x}^{\\mathrm{T}} \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{b} + \\mathbf{y}^{\\mathrm{T}} \\mathbf{L} \\mathbf{b} = \\begin{pmatrix} \\mathbf{x} \\\\ \\mathbf{y} \\end{pmatrix}^{\\mathrm{T}} \\begin{pmatrix} \\mathbf{\\Lambda} \\boldsymbol{\\mu} - \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{b} \\\\ \\mathbf{L} \\mathbf{b} \\end{pmatrix}.$) to calculate the mean.\n\nIn this problem, we are asked to solve the problem from another perspective: we need to write the joint distribution p(x, y) and then perform integration over x to obtain marginal distribution p(y). Let's begin by write the quadratic form in the exponential of p(x, y):\n\n$$-\\frac{1}{2}(\\boldsymbol{x}-\\boldsymbol{\\mu})^T \\boldsymbol{\\Lambda}(\\boldsymbol{x}-\\boldsymbol{\\mu}) - \\frac{1}{2}(\\boldsymbol{y}-\\boldsymbol{A}\\boldsymbol{x}-\\boldsymbol{b})^T \\boldsymbol{L}(\\boldsymbol{y}-\\boldsymbol{A}\\boldsymbol{x}-\\boldsymbol{b})$$\n\nWe extract those terms involving x:\n\n$$= -\\frac{1}{2} \\mathbf{x}^{T} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A}) \\mathbf{x} + \\mathbf{x}^{T} [\\mathbf{\\Lambda} \\boldsymbol{\\mu} + \\mathbf{A}^{T} \\mathbf{L} (\\mathbf{y} - \\mathbf{b})] + const$$\n\n$$= -\\frac{1}{2} (\\mathbf{x} - \\mathbf{m})^{T} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A}) (\\mathbf{x} - \\mathbf{m}) + \\frac{1}{2} \\mathbf{m}^{T} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A}) \\mathbf{m} + const$$\n\nWhere we have defined:\n\n$$\\boldsymbol{m} = (\\boldsymbol{\\Lambda} + \\boldsymbol{A}^T \\boldsymbol{L} \\boldsymbol{A})^{-1} [\\boldsymbol{\\Lambda} \\boldsymbol{\\mu} + \\boldsymbol{A}^T \\boldsymbol{L} (\\boldsymbol{y} - \\boldsymbol{b})]$$\n\nNow if we perform integration over x, we will see that the first term vanish to a constant, and we extract the terms including y from the remaining parts, we can obtain :\n\n$$= -\\frac{1}{2} \\mathbf{y}^{T} \\left[ \\mathbf{L} - \\mathbf{L} \\mathbf{A} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A})^{-1} \\mathbf{A}^{T} \\mathbf{L} \\right] \\mathbf{y}$$\n$$+ \\mathbf{y}^{T} \\left\\{ \\left[ \\mathbf{L} - \\mathbf{L} \\mathbf{A} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A})^{-1} \\mathbf{A}^{T} \\mathbf{L} \\right] \\mathbf{b}$$\n$$+ \\mathbf{L} \\mathbf{A} (\\mathbf{\\Lambda} + \\mathbf{A}^{T} \\mathbf{L} \\mathbf{A})^{-1} \\mathbf{\\Lambda} \\boldsymbol{\\mu} \\right\\}$$\n\nWe firstly view the quadratic term to obtain the precision matrix, and then we take advantage of (2.289: $(\\mathbf{A} + \\mathbf{BCD})^{-1} = \\mathbf{A}^{-1} - 
\\mathbf{A}^{-1}\\mathbf{B}(\\mathbf{C}^{-1} + \\mathbf{D}\\mathbf{A}^{-1}\\mathbf{B})^{-1}\\mathbf{D}\\mathbf{A}^{-1}.$), we will obtain (2.110: $\\operatorname{cov}[\\mathbf{y}] = \\mathbf{L}^{-1} + \\mathbf{A}\\mathbf{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}}.$). Finally, using the linear term combined with the already known covariance matrix, we can obtain (2.109: $\\mathbb{E}[\\mathbf{y}] = \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}$).",
"answer_length": 3754
},
{
"chapter": 2,
"question_number": "2.33",
"difficulty": "hard",
"question_text": "Consider the same joint distribution as in Exercise 2.32, but now use the technique of completing the square to find expressions for the mean and covariance of the conditional distribution $p(\\mathbf{x}|\\mathbf{y})$ . Again, verify that these agree with the corresponding expressions (2.111: $\\mathbb{E}[\\mathbf{x}|\\mathbf{y}] = (\\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}}\\mathbf{L}\\mathbf{A})^{-1} \\left\\{ \\mathbf{A}^{\\mathrm{T}}\\mathbf{L}(\\mathbf{y} - \\mathbf{b}) + \\mathbf{\\Lambda}\\boldsymbol{\\mu} \\right\\}$) and (2.112: $cov[\\mathbf{x}|\\mathbf{y}] = (\\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}}\\mathbf{L}\\mathbf{A})^{-1}.$).",
"answer": "According to Bayesian Formula, we can write $p(\\mathbf{x}|\\mathbf{y}) = \\frac{p(\\mathbf{x},\\mathbf{y})}{p(\\mathbf{y})}$ , where we have already known the joint distribution $p(\\mathbf{x},\\mathbf{y})$ in (2.105: $cov[\\mathbf{z}] = \\mathbf{R}^{-1} = \\begin{pmatrix} \\mathbf{\\Lambda}^{-1} & \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\\\ \\mathbf{A} \\mathbf{\\Lambda}^{-1} & \\mathbf{L}^{-1} + \\mathbf{A} \\mathbf{\\Lambda}^{-1} \\mathbf{A}^{\\mathrm{T}} \\end{pmatrix}.$) and (2.108: $\\mathbb{E}[\\mathbf{z}] = \\begin{pmatrix} \\boldsymbol{\\mu} \\\\ \\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b} \\end{pmatrix}.$), and the marginal distribution $p(\\mathbf{y})$ in Prob.2.32., we can follow the same procedure in Prob.2.32., i.e. firstly obtain the covariance matrix from the quadratic term and then obtain the mean from the linear term. The details are omitted here.",
"answer_length": 852
},
{
"chapter": 2,
"question_number": "2.34",
"difficulty": "medium",
"question_text": "- 2.34 (\\*\\*) www To find the maximum likelihood solution for the covariance matrix of a multivariate Gaussian, we need to maximize the log likelihood function (2.118: $\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = -\\frac{ND}{2} \\ln(2\\pi) - \\frac{N}{2} \\ln |\\boldsymbol{\\Sigma}| - \\frac{1}{2} \\sum_{n=1}^{N} (\\mathbf{x}_n - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x}_n - \\boldsymbol{\\mu}). \\quad$) with respect to Σ, noting that the covariance matrix must be symmetric and positive definite. Here we proceed by ignoring these constraints and doing a straightforward maximization. Using the results (C.21), (C.26), and (C.28) from Appendix C, show that the covariance matrix Σ that maximizes the log likelihood function (2.118: $\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = -\\frac{ND}{2} \\ln(2\\pi) - \\frac{N}{2} \\ln |\\boldsymbol{\\Sigma}| - \\frac{1}{2} \\sum_{n=1}^{N} (\\mathbf{x}_n - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x}_n - \\boldsymbol{\\mu}). \\quad$) is given by the sample covariance (2.122: $\\Sigma_{\\mathrm{ML}} = \\frac{1}{N} \\sum_{n=1}^{N} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{\\mathrm{ML}}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{\\mathrm{ML}})^{\\mathrm{T}}$). We note that the final result is necessarily symmetric and positive definite (provided the sample covariance is nonsingular).",
"answer": "Let's follow the hint by firstly calculating the derivative of (2.118: $\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = -\\frac{ND}{2} \\ln(2\\pi) - \\frac{N}{2} \\ln |\\boldsymbol{\\Sigma}| - \\frac{1}{2} \\sum_{n=1}^{N} (\\mathbf{x}_n - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x}_n - \\boldsymbol{\\mu}). \\quad$) with respect to $\\Sigma$ and let it equal to 0:\n\n$$-\\frac{N}{2}\\frac{\\partial}{\\partial \\Sigma}ln|\\Sigma| - \\frac{1}{2}\\frac{\\partial}{\\partial \\Sigma}\\sum_{n=1}^{N}(\\boldsymbol{x_n} - \\boldsymbol{\\mu})^T \\Sigma^{-1}(\\boldsymbol{x_n} - \\boldsymbol{\\mu}) = 0$$\n\nBy using (C.28), the first term can be reduced to:\n\n$$-\\frac{N}{2}\\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}}ln|\\boldsymbol{\\Sigma}| = -\\frac{N}{2}(\\boldsymbol{\\Sigma}^{-1})^T = -\\frac{N}{2}\\boldsymbol{\\Sigma}^{-1}$$\n\nProvided with the result that the optimal covariance matrix is the sample covariance, we denote sample matrix S as :\n\n$$S = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu)(x_n - \\mu)^T$$\n\nWe rewrite the second term:\n\nsecond term = \n$$-\\frac{1}{2} \\frac{\\partial}{\\partial \\Sigma} \\sum_{n=1}^{N} (\\mathbf{x_n} - \\boldsymbol{\\mu})^T \\Sigma^{-1} (\\mathbf{x_n} - \\boldsymbol{\\mu})$$\n \n= $-\\frac{N}{2} \\frac{\\partial}{\\partial \\Sigma} Tr[\\Sigma^{-1} S]$ \n= $\\frac{N}{2} \\Sigma^{-1} S \\Sigma^{-1}$ \n\nWhere we have taken advantage of the following property, combined with the fact that S and $\\Sigma$ is symmetric. (Note: this property can be found in *The Matrix Cookbook*.)\n\n$$\\frac{\\partial}{\\partial \\boldsymbol{X}}Tr(\\boldsymbol{A}\\boldsymbol{X}^{-1}\\boldsymbol{B}) = -(\\boldsymbol{X}^{-1}\\boldsymbol{B}\\boldsymbol{A}\\boldsymbol{X}^{-1})^T = -(\\boldsymbol{X}^{-1})^T\\boldsymbol{A}^T\\boldsymbol{B}^T(\\boldsymbol{X}^{-1})^T$$\n\nThus we obtain:\n\n$$-\\frac{N}{2}\\boldsymbol{\\Sigma}^{-1} + \\frac{N}{2}\\boldsymbol{\\Sigma}^{-1}\\mathbf{S}\\boldsymbol{\\Sigma}^{-1} = 0$$\n\nObviously, we obtain $\\Sigma = S$ , just as required.",
"answer_length": 1931
},
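One way to sanity-check the stationary point found in Prob.2.34 is to evaluate the log likelihood (2.118) at the sample covariance S and at rescaled versions of it; S should score highest. A minimal sketch assuming numpy and scipy, with arbitrary synthetic data:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
N, D = 500, 3
X = rng.multivariate_normal(np.zeros(D), np.diag([1.0, 2.0, 0.5]), size=N)

mu_ml = X.mean(axis=0)
S = (X - mu_ml).T @ (X - mu_ml) / N            # sample covariance, equation (2.122)

def log_lik(Sigma):
    # log likelihood (2.118) with mu fixed at mu_ml
    return multivariate_normal(mean=mu_ml, cov=Sigma).logpdf(X).sum()

# S itself gives the largest value; rescaling it in either direction lowers the likelihood
for alpha in [0.5, 0.9, 1.0, 1.1, 2.0]:
    print(alpha, round(log_lik(alpha * S), 2))
```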
{
"chapter": 2,
"question_number": "2.35",
"difficulty": "medium",
"question_text": "Use the result (2.59: $\\mathbb{E}[\\mathbf{x}] = \\boldsymbol{\\mu}$) to prove (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + \\boldsymbol{\\Sigma}.$). Now, using the results (2.59: $\\mathbb{E}[\\mathbf{x}] = \\boldsymbol{\\mu}$), and (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + \\boldsymbol{\\Sigma}.$), show that\n\n$$\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$$\n (2.291: $\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$)\n\nwhere $\\mathbf{x}_n$ denotes a data point sampled from a Gaussian distribution with mean $\\boldsymbol{\\mu}$ and covariance $\\boldsymbol{\\Sigma}$ , and $I_{nm}$ denotes the (n,m) element of the identity matrix. Hence prove the result (2.124: $\\mathbb{E}[\\Sigma_{\\mathrm{ML}}] = \\frac{N-1}{N} \\Sigma.$).",
"answer": "The proof of (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + \\boldsymbol{\\Sigma}.$) is quite clear in the main text, i.e., from page 82 to page 83 and hence we won't repeat it here. Let's prove (2.124: $\\mathbb{E}[\\Sigma_{\\mathrm{ML}}] = \\frac{N-1}{N} \\Sigma.$). We first begin by proving (2.123):\n\n$$\\mathbb{E}[\\boldsymbol{\\mu_{ML}}] = \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} \\boldsymbol{x_n}] = \\frac{1}{N} \\cdot N\\boldsymbol{\\mu} = \\boldsymbol{\\mu}$$\n\nWhere we have taken advantage of the fact that $x_n$ is independently and identically distributed (i.i.d).\n\nThen we use the expression in (2.122):\n\n$$\\mathbb{E}[\\mathbf{\\Sigma}_{ML}] = \\frac{1}{N} \\mathbb{E}[\\sum_{n=1}^{N} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})^{T}]$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})^{T}]$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{ML})^{T}]$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[\\mathbf{x}_{n} \\mathbf{x}_{n}^{T} - 2\\boldsymbol{\\mu}_{ML} \\mathbf{x}_{n}^{T} + \\boldsymbol{\\mu}_{ML} \\boldsymbol{\\mu}_{ML}^{T}]$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[\\mathbf{x}_{n} \\mathbf{x}_{n}^{T}] - 2\\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{\\mu}_{ML} \\mathbf{x}_{n}^{T}] + \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{\\mu}_{ML} \\boldsymbol{\\mu}_{ML}^{T}]$$\n\nBy using (2.291: $\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$), the first term will equal to:\n\nfirst term = \n$$\\frac{1}{N} \\cdot N(\\mu \\mu^T + \\Sigma) = \\mu \\mu^T + \\Sigma$$\n\nThe second term will equal to:\n\nsecond term = \n$$-2\\frac{1}{N}\\sum_{n=1}^{N}\\mathbb{E}[\\boldsymbol{\\mu_{ML}x_n}^T]$$\n \n= $-2\\frac{1}{N}\\sum_{n=1}^{N}\\mathbb{E}[\\frac{1}{N}(\\sum_{m=1}^{N}\\boldsymbol{x_m})\\boldsymbol{x_n}^T]$ \n= $-2\\frac{1}{N^2}\\sum_{n=1}^{N}\\sum_{m=1}^{N}\\mathbb{E}[\\boldsymbol{x_mx_n}^T]$ \n= $-2\\frac{1}{N^2}\\sum_{n=1}^{N}\\sum_{m=1}^{N}(\\boldsymbol{\\mu\\boldsymbol{\\mu}^T} + \\boldsymbol{I_{nm}\\boldsymbol{\\Sigma}})$ \n= $-2\\frac{1}{N^2}(N^2\\boldsymbol{\\mu\\boldsymbol{\\mu}^T} + N\\boldsymbol{\\Sigma})$ \n= $-2(\\boldsymbol{\\mu\\boldsymbol{\\mu}^T} + \\frac{1}{N}\\boldsymbol{\\Sigma})$ \n\nSimilarly, the third term will equal to:\n\nthird term \n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{\\mu_{ML}} \\boldsymbol{\\mu_{ML}}^T]$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbb{E}[(\\frac{1}{N} \\sum_{j=1}^{N} \\boldsymbol{x_j}) \\cdot (\\frac{1}{N} \\sum_{i=1}^{N} \\boldsymbol{x_i})]$$\n\n$$= \\frac{1}{N^3} \\sum_{n=1}^{N} \\mathbb{E}[(\\sum_{j=1}^{N} \\boldsymbol{x_j}) \\cdot (\\sum_{i=1}^{N} \\boldsymbol{x_i})]$$\n\n$$= \\frac{1}{N^3} \\sum_{n=1}^{N} (N^2 \\boldsymbol{\\mu} \\boldsymbol{\\mu}^T + N\\boldsymbol{\\Sigma})$$\n\n$$= \\boldsymbol{\\mu} \\boldsymbol{\\mu}^T + \\frac{1}{N} \\boldsymbol{\\Sigma}$$\n\nFinally, we combine those three terms, which gives:\n\n$$\\mathbb{E}[\\mathbf{\\Sigma}_{\\boldsymbol{ML}}] = \\frac{N-1}{N} \\mathbf{\\Sigma}$$\n\nNote: the same procedure from (2.59: $\\mathbb{E}[\\mathbf{x}] = \\boldsymbol{\\mu}$) to (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + 
\\boldsymbol{\\Sigma}.$) can be carried out to prove (2.291: $\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$); the only difference is that we need to introduce the indices m and n to label the samples. (2.291: $\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$) is also quite intuitive: if m = n, then $x_n$ and $x_m$ are the same sample, and (2.291: $\\mathbb{E}[\\mathbf{x}_n \\mathbf{x}_m] = \\boldsymbol{\\mu} \\boldsymbol{\\mu}^{\\mathrm{T}} + I_{nm} \\boldsymbol{\\Sigma}$) reduces to (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + \\boldsymbol{\\Sigma}.$), i.e., the correlation between different dimensions is present; if $m \\neq n$ , then $x_n$ and $x_m$ are different i.i.d. samples, so no correlation between them exists and we expect $\\mathbb{E}[x_n x_m^T] = \\mu \\mu^T$.",
"answer_length": 4249
},
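The bias result (2.124) proved in Prob.2.35 also admits a quick Monte Carlo check: averaging Σ_ML over many small synthetic data sets should give approximately (N-1)/N times the true Σ. A sketch (numpy assumed; the sample size, number of trials and Σ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
D, N, trials = 2, 5, 20000
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
mu = np.array([1.0, -1.0])

acc = np.zeros((D, D))
for _ in range(trials):
    X = rng.multivariate_normal(mu, Sigma, size=N)
    mu_ml = X.mean(axis=0)
    acc += (X - mu_ml).T @ (X - mu_ml) / N     # Sigma_ML, equation (2.122)

print(acc / trials)              # close to (N-1)/N * Sigma
print((N - 1) / N * Sigma)
```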
{
"chapter": 2,
"question_number": "2.36",
"difficulty": "medium",
"question_text": "Using an analogous procedure to that used to obtain (2.126: $= \\mu_{\\text{ML}}^{(N-1)} + \\frac{1}{N} (\\mathbf{x}_{N} - \\mu_{\\text{ML}}^{(N-1)}).$), derive an expression for the sequential estimation of the variance of a univariate Gaussian\n\ndistribution, by starting with the maximum likelihood expression\n\n$$\\sigma_{\\rm ML}^2 = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu)^2.$$\n (2.292: $\\sigma_{\\rm ML}^2 = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu)^2.$)\n\nVerify that substituting the expression for a Gaussian distribution into the Robbins-Monro sequential estimation formula (2.135: $\\theta^{(N)} = \\theta^{(N-1)} + a_{N-1} \\frac{\\partial}{\\partial \\theta^{(N-1)}} \\ln p(x_N | \\theta^{(N-1)}).$) gives a result of the same form, and hence obtain an expression for the corresponding coefficients $a_N$ .",
"answer": "Let's follow the hint. However, firstly we will find the sequential expression based on definition, which will make the latter process on finding coefficient $a_{N-1}$ more easily. Suppose we have N observations in total, and then we can write:\n\n$$\\begin{split} \\sigma_{ML}^{2(N)} &= \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu_{ML}^{(N)})^2 \\\\ &= \\frac{1}{N} \\left[ \\sum_{n=1}^{N-1} (x_n - \\mu_{ML}^{(N)})^2 + (x_N - \\mu_{ML}^{(N)})^2 \\right] \\\\ &= \\frac{N-1}{N} \\frac{1}{N-1} \\sum_{n=1}^{N-1} (x_n - \\mu_{ML}^{(N)})^2 + \\frac{1}{N} (x_N - \\mu_{ML}^{(N)})^2 \\\\ &= \\frac{N-1}{N} \\sigma_{ML}^{2(N-1)} + \\frac{1}{N} (x_N - \\mu_{ML}^{(N)})^2 \\\\ &= \\sigma_{ML}^{2(N-1)} + \\frac{1}{N} \\left[ (x_N - \\mu_{ML}^{(N)})^2 - \\sigma_{ML}^{2(N-1)} \\right] \\end{split}$$\n\nAnd then let us write the expression for $\\sigma_{ML}$ .\n\n$$\\frac{\\partial}{\\partial \\sigma^2} \\left\\{ \\frac{1}{N} \\sum_{n=1}^{N} ln p(x_n | \\mu, \\sigma) \\right\\} \\bigg|_{\\sigma_{ML}} = 0$$\n\nBy exchanging the summation and the derivative, and letting $N \\to +\\infty$ , we can obtain :\n\n$$\\lim_{N \\to +\\infty} \\frac{1}{N} \\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\sigma^2} ln p(x_n | \\mu, \\sigma) = \\mathbb{E}_x \\left[ \\frac{\\partial}{\\partial \\sigma^2} ln p(x_n | \\mu, \\sigma) \\right]$$\n\nComparing it with (2.127: $f(\\theta) \\equiv \\mathbb{E}[z|\\theta] = \\int zp(z|\\theta) dz$), we can obtain the sequential formula to estimate $\\sigma_{ML}$ :\n\n$$\\begin{split} \\sigma_{ML}^{2(N)} &= \\sigma_{ML}^{2(N-1)} + a_{N-1} \\frac{\\partial}{\\partial \\sigma_{ML}^{2(N-1)}} lnp(x_N | \\mu_{ML}^{(N)}, \\sigma_{ML}^{(N-1)}) & (*) \\\\ &= \\sigma_{ML}^{2(N-1)} + a_{N-1} \\left[ -\\frac{1}{2\\sigma_{ML}^{2(N-1)}} + \\frac{(x_N - \\mu_{ML}^{(N)})^2}{2\\sigma_{ML}^{4(N-1)}} \\right] \\end{split}$$\n\nWhere we use $\\sigma_{ML}^{2(N)}$ to represent the Nth estimation of $\\sigma_{ML}^2$ , i.e., the estimation of $\\sigma_{ML}^2$ after the Nth observation. What's more, if we choose :\n\n$$a_{N-1} = \\frac{2\\sigma_{ML}^{4(N-1)}}{N}$$\n\nThen we will obtain:\n\n$$\\sigma_{ML}^{2(N)} = \\sigma_{ML}^{2(N-1)} + \\frac{1}{N} \\left[ -\\sigma_{ML}^{2(N-1)} + (x_N - \\mu_{ML}^{(N)})^2 \\right]$$\n\nWe can see that the results are the same. An important thing should be noticed: In maximum likelihood, when estimating variance $\\sigma_{ML}^{2(N)}$ , we will first estimate mean $\\mu_{ML}^{(N)}$ , and then we we will calculate variance $\\sigma_{ML}^{2(N)}$ .\n\nIn other words, they are decoupled. It is the same in sequential method. For instance, if we want to estimate both mean and variance sequentially, after observing the Nth sample (i.e., $x_N$ ), firstly we can use $\\mu_{ML}^{(N-1)}$ together with (2.126: $= \\mu_{\\text{ML}}^{(N-1)} + \\frac{1}{N} (\\mathbf{x}_{N} - \\mu_{\\text{ML}}^{(N-1)}).$) to estimate $\\mu_{ML}^{(N)}$ and then use the conclusion in this problem to obtain $\\sigma_{ML}^{(N)}$ . That is why in (\\*) we write $lnp(x_N|\\mu_{ML}^{(N)},\\sigma_{ML}^{(N-1)})$ instead of $lnp(x_N|\\mu_{ML}^{(N-1)},\\sigma_{ML}^{(N-1)})$ .",
"answer_length": 2963
},
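The sequential variance update derived in Prob.2.36, combined with (2.126) for the mean, can be exercised on a data stream and compared with the batch estimate (2.292) evaluated at μ_ML. A sketch with an arbitrary synthetic stream (numpy assumed); the two estimates agree up to small terms that vanish as N grows:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=2.0, scale=1.5, size=2000)

mu, var = x[0], 0.0                            # ML estimates after the first observation
for N in range(2, len(x) + 1):
    x_N = x[N - 1]
    mu = mu + (x_N - mu) / N                   # sequential mean, equation (2.126)
    var = var + ((x_N - mu) ** 2 - var) / N    # sequential variance from Prob.2.36

mu_batch = x.mean()
var_batch = ((x - mu_batch) ** 2).mean()       # batch estimate (2.292) with mu = mu_ML
print(mu, mu_batch)      # essentially identical
print(var, var_batch)    # agree closely; the recursion drops terms of order 1/N^2
```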
{
"chapter": 2,
"question_number": "2.37",
"difficulty": "medium",
"question_text": "Using an analogous procedure to that used to obtain (2.126: $= \\mu_{\\text{ML}}^{(N-1)} + \\frac{1}{N} (\\mathbf{x}_{N} - \\mu_{\\text{ML}}^{(N-1)}).$), derive an expression for the sequential estimation of the covariance of a multivariate Gaussian distribution, by starting with the maximum likelihood expression (2.122: $\\Sigma_{\\mathrm{ML}} = \\frac{1}{N} \\sum_{n=1}^{N} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{\\mathrm{ML}}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{\\mathrm{ML}})^{\\mathrm{T}}$). Verify that substituting the expression for a Gaussian distribution into the Robbins-Monro sequential estimation formula (2.135: $\\theta^{(N)} = \\theta^{(N-1)} + a_{N-1} \\frac{\\partial}{\\partial \\theta^{(N-1)}} \\ln p(x_N | \\theta^{(N-1)}).$) gives a result of the same form, and hence obtain an expression for the corresponding coefficients $a_N$ .",
"answer": "We follow the same procedure in Prob.2.36 to solve this problem. Firstly,\n\nwe can obtain the sequential formula based on definition.\n\n$$\\begin{split} \\boldsymbol{\\Sigma}_{ML}^{(N)} &= \\frac{1}{N} \\sum_{n=1}^{N} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N)}) (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N)})^{T} \\\\ &= \\frac{1}{N} \\left[ \\sum_{n=1}^{N-1} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N)}) (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N)})^{T} + (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)}) (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)})^{T} \\right] \\\\ &= \\frac{N-1}{N} \\boldsymbol{\\Sigma}_{ML}^{(N-1)} + \\frac{1}{N} (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)}) (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)})^{T} \\\\ &= \\boldsymbol{\\Sigma}_{ML}^{(N-1)} + \\frac{1}{N} \\left[ (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)}) (\\boldsymbol{x}_{N} - \\boldsymbol{\\mu}_{ML}^{(N)})^{T} - \\boldsymbol{\\Sigma}_{ML}^{(N-1)} \\right] \\end{split}$$\n\nIf we use *Robbins-Monro sequential estimation formula*, i.e., (2.135: $\\theta^{(N)} = \\theta^{(N-1)} + a_{N-1} \\frac{\\partial}{\\partial \\theta^{(N-1)}} \\ln p(x_N | \\theta^{(N-1)}).$), we can obtain :\n\n$$\\begin{split} \\boldsymbol{\\Sigma}_{ML}^{(N)} &= \\boldsymbol{\\Sigma}_{ML}^{(N-1)} + \\boldsymbol{a}_{N-1} \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}_{ML}^{(N-1)}} lnp(\\boldsymbol{x}_{N} | \\boldsymbol{\\mu}_{ML}^{(N)}, \\boldsymbol{\\Sigma}_{ML}^{(N-1)}) \\\\ &= \\boldsymbol{\\Sigma}_{ML}^{(N-1)} + \\boldsymbol{a}_{N-1} \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}_{ML}^{(N-1)}} lnp(\\boldsymbol{x}_{N} | \\boldsymbol{\\mu}_{ML}^{(N)}, \\boldsymbol{\\Sigma}_{ML}^{(N-1)}) \\\\ &= \\boldsymbol{\\Sigma}_{ML}^{(N-1)} + \\boldsymbol{a}_{N-1} \\left[ -\\frac{1}{2} [\\boldsymbol{\\Sigma}_{ML}^{(N-1)}]^{-1} + \\frac{1}{2} [\\boldsymbol{\\Sigma}_{ML}^{(N-1)}]^{-1} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N-1)}) (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}_{ML}^{(N-1)})^{T} [\\boldsymbol{\\Sigma}_{ML}^{(N-1)}]^{-1} \\right] \\end{split}$$\n\nWhere we have taken advantage of the procedure we carried out in Prob.2.34 to calculate the derivative, and if we choose :\n\n$$\\boldsymbol{a}_{N-1} = \\frac{2}{N} \\boldsymbol{\\Sigma}_{\\boldsymbol{ML}}^{2(N-1)}$$\n\nWe can see that the equation above will be identical with our previous conclusion based on definition.",
"answer_length": 2304
},
{
"chapter": 2,
"question_number": "2.38",
"difficulty": "easy",
"question_text": "Use the technique of completing the square for the quadratic form in the exponent to derive the results (2.141: $\\mu_N = \\frac{\\sigma^2}{N\\sigma_0^2 + \\sigma^2} \\mu_0 + \\frac{N\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2} \\mu_{ML}$) and (2.142: $\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma_0^2} + \\frac{N}{\\sigma^2}$).",
"answer": "It is straightforward. Based on (2.137: $p(\\mathbf{X}|\\mu) = \\prod_{n=1}^{N} p(x_n|\\mu) = \\frac{1}{(2\\pi\\sigma^2)^{N/2}} \\exp\\left\\{-\\frac{1}{2\\sigma^2} \\sum_{n=1}^{N} (x_n - \\mu)^2\\right\\}.$), (2.138: $p(\\mu) = \\mathcal{N}\\left(\\mu|\\mu_0, \\sigma_0^2\\right)$) and (2.139: $p(\\mu|\\mathbf{X}) \\propto p(\\mathbf{X}|\\mu)p(\\mu).$), we focus on the exponential term of the posterior distribution $p(\\mu|\\mathbf{X})$ , which gives :\n\n$$-\\frac{1}{2\\sigma^2} \\sum_{n=1}^{N} (x_n - \\mu)^2 - \\frac{1}{2\\sigma_0^2} (\\mu - \\mu_0)^2 = -\\frac{1}{2\\sigma_N^2} (\\mu - \\mu_N)^2$$\n\nWe rewrite the left side regarding to $\\mu$ .\n\nquadratic term = \n$$-(\\frac{N}{2\\sigma^2} + \\frac{1}{2\\sigma_0^2})\\mu^2$$\n\nlinear term = \n$$(\\frac{\\sum_{n=1}^{N} x_n}{\\sigma^2} + \\frac{\\mu_0}{\\sigma_0^2}) \\mu$$\n\nWe also rewrite the right side regarding to $\\mu$ , and hence we will obtain :\n\n$$-(\\frac{N}{2\\sigma^2} + \\frac{1}{2\\sigma_0^2})\\mu^2 = -\\frac{1}{2\\sigma_N^2}\\mu^2, \\ (\\frac{\\sum_{n=1}^N x_n}{\\sigma^2} + \\frac{\\mu_0}{\\sigma_0^2})\\mu = \\frac{\\mu_N}{\\sigma_N^2}\\mu$$\n\nThen we will obtain:\n\n$$\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma_0^2} + \\frac{N}{\\sigma^2}$$\n\nAnd with the prior knowledge that $\\sum_{n=1}^{N} x_n = N \\cdot \\mu_{ML}$ , we can write :\n\n$$\\mu_{N} = \\sigma_{N}^{2} \\cdot \\left(\\frac{\\sum_{n=1}^{N} x_{n}}{\\sigma^{2}} + \\frac{\\mu_{0}}{\\sigma_{0}^{2}}\\right)$$\n\n$$= \\left(\\frac{1}{\\sigma_{0}^{2}} + \\frac{N}{\\sigma^{2}}\\right)^{-1} \\cdot \\left(\\frac{N\\mu_{ML}}{\\sigma^{2}} + \\frac{\\mu_{0}}{\\sigma_{0}^{2}}\\right)$$\n\n$$= \\frac{\\sigma_{0}^{2}\\sigma^{2}}{\\sigma^{2} + N\\sigma_{0}^{2}} \\cdot \\frac{N\\mu_{ML}\\sigma_{0}^{2} + \\mu_{0}\\sigma^{2}}{\\sigma\\sigma_{0}^{2}}$$\n\n$$= \\frac{\\sigma^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{0} + \\frac{N\\sigma_{0}^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{ML}$$",
"answer_length": 1777
},
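The closed forms (2.141) and (2.142) from Prob.2.38 can be checked against a brute-force posterior computed on a grid as prior times likelihood. An illustrative sketch (numpy assumed; the prior parameters, noise variance and grid are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, mu0, sigma02 = 1.5, 0.0, 4.0       # known noise variance and prior parameters
x = rng.normal(loc=1.0, scale=np.sqrt(sigma2), size=20)
N, mu_ml = len(x), x.mean()

# Closed forms (2.141) and (2.142)
sigmaN2 = 1.0 / (1.0 / sigma02 + N / sigma2)
muN = (sigma2 / (N * sigma02 + sigma2)) * mu0 + (N * sigma02 / (N * sigma02 + sigma2)) * mu_ml

# Brute-force posterior on a grid: log prior + log likelihood, then normalize
mu_grid = np.linspace(-5.0, 5.0, 20001)
log_post = (-0.5 * (mu_grid - mu0) ** 2 / sigma02
            - 0.5 * ((x[:, None] - mu_grid[None, :]) ** 2).sum(axis=0) / sigma2)
w = np.exp(log_post - log_post.max())
w /= w.sum()

mean_num = (w * mu_grid).sum()
var_num = (w * (mu_grid - mean_num) ** 2).sum()
print(muN, mean_num)        # agree to several decimal places
print(sigmaN2, var_num)
```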
{
"chapter": 2,
"question_number": "2.39",
"difficulty": "medium",
"question_text": "Starting from the results (2.141: $\\mu_N = \\frac{\\sigma^2}{N\\sigma_0^2 + \\sigma^2} \\mu_0 + \\frac{N\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2} \\mu_{ML}$) and (2.142: $\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma_0^2} + \\frac{N}{\\sigma^2}$) for the posterior distribution of the mean of a Gaussian random variable, dissect out the contributions from the first N-1 data points and hence obtain expressions for the sequential update of $\\mu_N$ and $\\sigma_N^2$ . Now derive the same results starting from the posterior distribution $p(\\mu|x_1,\\ldots,x_{N-1}) = \\mathcal{N}(\\mu|\\mu_{N-1},\\sigma_{N-1}^2)$ and multiplying by the likelihood function $p(x_N|\\mu) = \\mathcal{N}(x_N|\\mu,\\sigma^2)$ and then completing the square and normalizing to obtain the posterior distribution after N observations.",
"answer": "Let's follow the hint.\n\n$$\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma_0^2} + \\frac{N}{\\sigma^2} = \\frac{1}{\\sigma_0^2} + \\frac{N-1}{\\sigma^2} + \\frac{1}{\\sigma^2} = \\frac{1}{\\sigma_{N-1}^2} + \\frac{1}{\\sigma^2}$$\n\nHowever, it is complicated to derive a sequential formula for $\\mu_N$ directly. Based on (2.142: $\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma_0^2} + \\frac{N}{\\sigma^2}$), we see that the denominator in (2.141: $\\mu_N = \\frac{\\sigma^2}{N\\sigma_0^2 + \\sigma^2} \\mu_0 + \\frac{N\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2} \\mu_{ML}$) can be eliminated if we multiply $1/\\sigma_N^2$ on both side of (2.141: $\\mu_N = \\frac{\\sigma^2}{N\\sigma_0^2 + \\sigma^2} \\mu_0 + \\frac{N\\sigma_0^2}{N\\sigma_0^2 + \\sigma^2} \\mu_{ML}$). Therefore we will derive a sequential formula for $\\mu_N/\\sigma_N^2$ instead.\n\n$$\\begin{split} \\frac{\\mu_{N}}{\\sigma_{N}^{2}} &= \\frac{\\sigma^{2} + N\\sigma_{0}^{2}}{\\sigma_{0}^{2}\\sigma^{2}} (\\frac{\\sigma^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{0} + \\frac{N\\sigma_{0}^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{ML}^{(N)}) \\\\ &= \\frac{\\sigma^{2} + N\\sigma_{0}^{2}}{\\sigma_{0}^{2}\\sigma^{2}} (\\frac{\\sigma^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{0} + \\frac{N\\sigma_{0}^{2}}{N\\sigma_{0}^{2} + \\sigma^{2}} \\mu_{ML}^{(N)}) \\\\ &= \\frac{\\mu_{0}}{\\sigma_{0}^{2}} + \\frac{N\\mu_{ML}^{(N)}}{\\sigma^{2}} = \\frac{\\mu_{0}}{\\sigma_{0}^{2}} + \\frac{\\sum_{n=1}^{N} x_{n}}{\\sigma^{2}} \\\\ &= \\frac{\\mu_{0}}{\\sigma_{0}^{2}} + \\frac{\\sum_{n=1}^{N-1} x_{n}}{\\sigma^{2}} + \\frac{x_{N}}{\\sigma^{2}} \\\\ &= \\frac{\\mu_{N-1}}{\\sigma_{N-1}^{2}} + \\frac{x_{N}}{\\sigma^{2}} \\end{split}$$\n\nAnother possible solution is also given in the problem. We solve it by completing the square.\n\n$$-\\frac{1}{2\\sigma^2}(x_N-\\mu)^2-\\frac{1}{2\\sigma_{N-1}^2}(\\mu-\\mu_{N-1})^2=-\\frac{1}{2\\sigma_N^2}(\\mu-\\mu_N)^2$$\n\nBy comparing the quadratic and linear term regarding to $\\mu$ , we can obtain:\n\n $\\frac{1}{\\sigma_N^2} = \\frac{1}{\\sigma^2} + \\frac{1}{\\sigma_{N-1}^2}$ \n\nAnd:\n\n$$\\frac{\\mu_N}{\\sigma_N^2} = \\frac{x_N}{\\sigma^2} + \\frac{\\mu_{N-1}}{\\sigma_{N-1}^2}$$\n\nIt is the same as previous result. Note: after obtaining the Nth observation, we will firstly use the sequential formula to calculate $\\sigma_N^2$ , and then $\\mu_N$ . This is because the sequential formula for $\\mu_N$ is dependent on $\\sigma_N^2$ .",
"answer_length": 2284
},
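The sequential updates derived in Prob.2.39 (precision first, then mean) should reproduce the batch results (2.141) and (2.142) exactly. A small sketch assuming numpy, with arbitrary prior parameters and data:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2, mu0, sigma02 = 2.0, -1.0, 0.5
x = rng.normal(loc=0.5, scale=np.sqrt(sigma2), size=50)

# Sequential updates: precision first, then mean, as noted at the end of the solution
mu_cur, s2_cur = mu0, sigma02
for x_N in x:
    s2_new = 1.0 / (1.0 / s2_cur + 1.0 / sigma2)           # 1/sigma_N^2 = 1/sigma_{N-1}^2 + 1/sigma^2
    mu_cur = s2_new * (mu_cur / s2_cur + x_N / sigma2)     # mu_N/sigma_N^2 = mu_{N-1}/sigma_{N-1}^2 + x_N/sigma^2
    s2_cur = s2_new

# Batch results (2.141) and (2.142)
N, mu_ml = len(x), x.mean()
s2_batch = 1.0 / (1.0 / sigma02 + N / sigma2)
mu_batch = (sigma2 / (N * sigma02 + sigma2)) * mu0 + (N * sigma02 / (N * sigma02 + sigma2)) * mu_ml

print(mu_cur, mu_batch)     # identical up to rounding
print(s2_cur, s2_batch)
```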
{
"chapter": 2,
"question_number": "2.4",
"difficulty": "medium",
"question_text": "\\star)$ Show that the mean of the binomial distribution is given by (2.11: $\\mathbb{E}[m] \\equiv \\sum_{m=0}^{N} m \\operatorname{Bin}(m|N,\\mu) = N\\mu$). To do this, differentiate both sides of the normalization condition (2.264: $\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$) with respect to $\\mu$ and then rearrange to obtain an expression for the mean of n. Similarly, by differentiating (2.264: $\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$) twice with respect to $\\mu$ and making use of the result (2.11: $\\mathbb{E}[m] \\equiv \\sum_{m=0}^{N} m \\operatorname{Bin}(m|N,\\mu) = N\\mu$) for the mean of the binomial distribution prove the result (2.12: $var[m] \\equiv \\sum_{m=0}^{N} (m - \\mathbb{E}[m])^2 \\operatorname{Bin}(m|N,\\mu) = N\\mu(1-\\mu).$) for the variance of the binomial.",
"answer": "Solution has already been given in the problem, but we will solve it in a\n\nmore intuitive way, beginning by definition:\n\n$$\\mathbb{E}[m] = \\sum_{m=0}^{N} m C_{N}^{m} \\mu^{m} (1 - \\mu)^{N-m}$$\n\n$$= \\sum_{m=1}^{N} m C_{N}^{m} \\mu^{m} (1 - \\mu)^{N-m}$$\n\n$$= \\sum_{m=1}^{N} \\frac{N!}{(m-1)!(N-m)!} \\mu^{m} (1 - \\mu)^{N-m}$$\n\n$$= N \\cdot \\mu \\sum_{m=1}^{N} \\frac{(N-1)!}{(m-1)!(N-m)!} \\mu^{m-1} (1 - \\mu)^{N-m}$$\n\n$$= N \\cdot \\mu \\sum_{m=1}^{N} C_{N-1}^{m-1} \\mu^{m-1} (1 - \\mu)^{N-m}$$\n\n$$= N \\cdot \\mu \\sum_{k=0}^{N-1} C_{k-1}^{k} \\mu^{k} (1 - \\mu)^{N-1-k}$$\n\n$$= N \\cdot \\mu [\\mu + (1 - \\mu)]^{N-1} = N\\mu$$\n\nSome details should be explained here. We note that m=0 actually doesn't affect the *Expectation*, so we let the summation begin from m=1, i.e. (what we have done from the first step to the second step). Moreover, in the second last step, we rewrite the subindex of the summation, and what we actually do is let k=m-1. And in the last step, we have taken advantage of (2.264: $\\sum_{m=0}^{N} \\binom{N}{m} \\mu^m (1-\\mu)^{N-m} = 1$). Variance is straightforward once *Expectation* has been calculated.\n\n$$\\begin{split} var[m] &= \\mathbb{E}[m^2] - \\mathbb{E}[m]^2 \\\\ &= \\sum_{m=0}^{N} m^2 C_N^m \\mu^m (1-\\mu)^{N-m} - \\mathbb{E}[m] \\cdot \\mathbb{E}[m] \\\\ &= \\sum_{m=0}^{N} m^2 C_N^m \\mu^m (1-\\mu)^{N-m} - (N\\mu) \\cdot \\sum_{m=0}^{N} m C_N^m \\mu^m (1-\\mu)^{N-m} \\\\ &= \\sum_{m=1}^{N} m^2 C_N^m \\mu^m (1-\\mu)^{N-m} - N\\mu \\cdot \\sum_{m=1}^{N} m C_N^m \\mu^m (1-\\mu)^{N-m} \\\\ &= \\sum_{m=1}^{N} m \\frac{N!}{(m-1)!(N-m)!} \\mu^m (1-\\mu)^{N-m} - (N\\mu) \\cdot \\sum_{m=1}^{N} m C_N^m \\mu^m (1-\\mu)^{N-m} \\\\ &= N\\mu \\sum_{m=1}^{N} m \\frac{(N-1)!}{(m-1)!(N-m)!} \\mu^{m-1} (1-\\mu)^{N-m} - N\\mu \\cdot \\sum_{m=1}^{N} m C_N^m \\mu^m (1-\\mu)^{N-m} \\\\ &= N\\mu \\sum_{m=1}^{N} m \\mu^{m-1} (1-\\mu)^{N-m} (C_{N-1}^{m-1} - \\mu C_N^m) \\end{split}$$\n\nHere we will use a little tick, $-\\mu = -1 + (1 - \\mu)$ and then take advantage\n\nof the property, $C_{N}^{m} = C_{N-1}^{m} + C_{N-1}^{m-1}$ .\n\n$$\\begin{split} var[m] &= N\\mu \\sum_{m=1}^{N} m\\mu^{m-1} (1-\\mu)^{N-m} \\left[ C_{N-1}^{m-1} - C_{N}^{m} + (1-\\mu)C_{N}^{m} \\right] \\\\ &= N\\mu \\sum_{m=1}^{N} m\\mu^{m-1} (1-\\mu)^{N-m} \\left[ (1-\\mu)C_{N}^{m} + C_{N-1}^{m-1} - C_{N}^{m} \\right] \\\\ &= N\\mu \\sum_{m=1}^{N} m\\mu^{m-1} (1-\\mu)^{N-m} \\left[ (1-\\mu)C_{N}^{m} - C_{N-1}^{m} \\right] \\\\ &= N\\mu \\left\\{ \\sum_{m=1}^{N} m\\mu^{m-1} (1-\\mu)^{N-m+1} C_{N}^{m} - \\sum_{m=1}^{N} m\\mu^{m-1} (1-\\mu)^{N-m} C_{N-1}^{m} \\right\\} \\\\ &= N\\mu \\left\\{ \\cdot N(1-\\mu)[\\mu + (1-\\mu)]^{N-1} - (N-1)(1-\\mu)[\\mu + (1-\\mu)]^{N-2} \\right\\} \\\\ &= N\\mu \\left\\{ N(1-\\mu) - (N-1)(1-\\mu) \\right\\} = N\\mu(1-\\mu) \\end{split}$$",
"answer_length": 2625
},
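The binomial mean and variance derived in Prob.2.4 can be confirmed by summing over the probability mass function directly. A short check using only the Python standard library (N and μ are arbitrary):

```python
from math import comb

N, mu = 12, 0.3
pmf = [comb(N, m) * mu**m * (1 - mu)**(N - m) for m in range(N + 1)]

mean = sum(m * p for m, p in enumerate(pmf))
var = sum((m - mean) ** 2 * p for m, p in enumerate(pmf))

print(sum(pmf))                  # 1.0 -- the normalization (2.264)
print(mean, N * mu)              # E[m] = N*mu, equation (2.11)
print(var, N * mu * (1 - mu))    # var[m] = N*mu*(1-mu), equation (2.12)
```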
{
"chapter": 2,
"question_number": "2.40",
"difficulty": "medium",
"question_text": "Consider a *D*-dimensional Gaussian random variable **x** with distribution $\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ in which the covariance $\\boldsymbol{\\Sigma}$ is known and for which we wish to infer the mean $\\boldsymbol{\\mu}$ from a set of observations $\\mathbf{X} = \\{\\mathbf{x}_1, \\dots, \\mathbf{x}_N\\}$ . Given a prior distribution $p(\\boldsymbol{\\mu}) = \\mathcal{N}(\\boldsymbol{\\mu}|\\boldsymbol{\\mu}_0, \\boldsymbol{\\Sigma}_0)$ , find the corresponding posterior distribution $p(\\boldsymbol{\\mu}|\\mathbf{X})$ .",
"answer": "Based on *Bayes Theorem*, we can write:\n\n$$p(\\boldsymbol{\\mu}|\\boldsymbol{X}) \\propto p(\\boldsymbol{X}|\\boldsymbol{\\mu})p(\\boldsymbol{\\mu})$$\n\nWe focus on the exponential term on the right side and then rearrange it regarding to $\\mu$ .\n\nright = \n$$\\left[ \\sum_{n=1}^{N} -\\frac{1}{2} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu})^{T} \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}) \\right] - \\frac{1}{2} (\\boldsymbol{\\mu} - \\boldsymbol{\\mu}_{0})^{T} \\boldsymbol{\\Sigma}_{0}^{-1} (\\boldsymbol{\\mu} - \\boldsymbol{\\mu}_{0})$$\n\n$$= \\left[ \\sum_{n=1}^{N} -\\frac{1}{2} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu})^{T} \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{x}_{n} - \\boldsymbol{\\mu}) \\right] - \\frac{1}{2} (\\boldsymbol{\\mu} - \\boldsymbol{\\mu}_{0})^{T} \\boldsymbol{\\Sigma}_{0}^{-1} (\\boldsymbol{\\mu} - \\boldsymbol{\\mu}_{0})$$\n\n$$= -\\frac{1}{2} \\boldsymbol{\\mu} (\\boldsymbol{\\Sigma}_{0}^{-1} + N \\boldsymbol{\\Sigma}^{-1}) \\boldsymbol{\\mu} + \\boldsymbol{\\mu}^{T} (\\boldsymbol{\\Sigma}_{0}^{-1} \\boldsymbol{\\mu}_{0} + \\boldsymbol{\\Sigma}^{-1} \\sum_{n=1}^{N} \\boldsymbol{x}_{n}) + \\text{const}$$\n\nWhere 'const' represents all the constant terms independent of $\\mu$ . According to the quadratic term, we can obtain the posterior covariance matrix.\n\n$$\\boldsymbol{\\Sigma}_{N}^{-1} = \\boldsymbol{\\Sigma}_{0}^{-1} + N \\boldsymbol{\\Sigma}^{-1}$$\n\nThen using the linear term, we can obtain:\n\n$$\\Sigma_N^{-1}\\mu_N = (\\Sigma_0^{-1}\\mu_0 + \\Sigma^{-1}\\sum_{n=1}^N x_n)$$\n\nFinally we obtain posterior mean:\n\n$$\\mu_N = (\\Sigma_0^{-1} + N\\Sigma^{-1})^{-1}(\\Sigma_0^{-1}\\mu_0 + \\Sigma^{-1}\\sum_{n=1}^N x_n)$$\n\nWhich can also be written as:\n\n$$\\mu_N = (\\Sigma_0^{-1} + N\\Sigma^{-1})^{-1}(\\Sigma_0^{-1}\\mu_0 + \\Sigma^{-1}N\\mu_{ML})$$",
"answer_length": 1714
},
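A consistency check on the posterior derived in Prob.2.40: applying the N = 1 version of the update repeatedly, feeding each posterior back in as the next prior, must reproduce the batch expressions for Σ_N and μ_N. A sketch assuming numpy; the `random_spd` helper and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
D, N = 3, 40

def random_spd(d):
    # arbitrary symmetric positive-definite matrix
    M = rng.standard_normal((d, d))
    return M @ M.T + d * np.eye(d)

Sigma = random_spd(D)                                   # known data covariance
Sigma0, mu0 = random_spd(D), rng.standard_normal(D)     # prior covariance and mean
X = rng.multivariate_normal(rng.standard_normal(D), Sigma, size=N)

# Batch posterior from the solution above
Sigma_inv, Sigma0_inv = np.linalg.inv(Sigma), np.linalg.inv(Sigma0)
SigmaN = np.linalg.inv(Sigma0_inv + N * Sigma_inv)
muN = SigmaN @ (Sigma0_inv @ mu0 + Sigma_inv @ X.sum(axis=0))

# Sequential version: each posterior becomes the prior for the next observation
S_cur, m_cur = Sigma0, mu0
for x_n in X:
    S_cur_inv = np.linalg.inv(S_cur)
    S_new = np.linalg.inv(S_cur_inv + Sigma_inv)
    m_cur = S_new @ (S_cur_inv @ m_cur + Sigma_inv @ x_n)
    S_cur = S_new

print(np.allclose(S_cur, SigmaN), np.allclose(m_cur, muN))   # True True
```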
{
"chapter": 2,
"question_number": "2.41",
"difficulty": "easy",
"question_text": "Use the definition of the gamma function (1.141: $\\Gamma(x) \\equiv \\int_0^\\infty u^{x-1} e^{-u} \\, \\mathrm{d}u.$) to show that the gamma distribution (2.146: $Gam(\\lambda|a,b) = \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} \\exp(-b\\lambda).$) is normalized.",
"answer": "Let's compute the integral of (2.146: $Gam(\\lambda|a,b) = \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} \\exp(-b\\lambda).$) over $\\lambda$ .\n\n$$\\begin{split} \\int_0^{+\\infty} \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} exp(-b\\lambda) \\, d\\lambda &= \\frac{b^a}{\\Gamma(a)} \\int_0^{+\\infty} \\lambda^{a-1} exp(-b\\lambda) \\, d\\lambda \\\\ &= \\frac{b^a}{\\Gamma(a)} \\int_0^{+\\infty} (\\frac{u}{b})^{a-1} exp(-u) \\frac{1}{b} \\, du \\\\ &= \\frac{1}{\\Gamma(a)} \\int_0^{+\\infty} u^{a-1} exp(-u) \\, du \\\\ &= \\frac{1}{\\Gamma(a)} \\cdot \\Gamma(a) = 1 \\end{split}$$\n\nWhere we first perform change of variable $b\\lambda = u$ , and then take advantage of the definition of gamma function:\n\n$$\\Gamma(x) = \\int_0^{+\\infty} u^{x-1} e^{-u} du$$",
"answer_length": 704
},
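The normalization of the gamma distribution shown in Prob.2.41 can also be confirmed by numerical integration. A sketch assuming scipy, with a few arbitrary (a, b) pairs:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def gam_pdf(lam, a, b):
    # Gam(lambda | a, b), equation (2.146)
    return b**a * lam**(a - 1) * np.exp(-b * lam) / gamma(a)

for a, b in [(1.5, 1.0), (2.0, 3.0), (5.0, 0.7)]:
    total, _ = quad(gam_pdf, 0.0, np.inf, args=(a, b))
    print(a, b, total)            # each integral equals 1 up to numerical error
```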
{
"chapter": 2,
"question_number": "2.42",
"difficulty": "medium",
"question_text": "Evaluate the mean, variance, and mode of the gamma distribution (2.146: $Gam(\\lambda|a,b) = \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} \\exp(-b\\lambda).$).",
"answer": "We first calculate its mean.\n\n$$\\begin{split} \\int_0^{+\\infty} \\lambda \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} exp(-b\\lambda) \\, d\\lambda &= \\frac{b^a}{\\Gamma(a)} \\int_0^{+\\infty} \\lambda^a exp(-b\\lambda) \\, d\\lambda \\\\ &= \\frac{b^a}{\\Gamma(a)} \\int_0^{+\\infty} (\\frac{u}{b})^a exp(-u) \\frac{1}{b} \\, du \\\\ &= \\frac{1}{\\Gamma(a) \\cdot b} \\int_0^{+\\infty} u^a exp(-u) \\, du \\\\ &= \\frac{1}{\\Gamma(a) \\cdot b} \\cdot \\Gamma(a+1) = \\frac{a}{b} \\end{split}$$\n\nWhere we have taken advantage of the property $\\Gamma(a+1) = a\\Gamma(a)$ . Then we calculate $\\mathbb{E}[\\lambda^2]$ .\n\n$$\\int_{0}^{+\\infty} \\lambda^{2} \\frac{1}{\\Gamma(a)} b^{a} \\lambda^{a-1} exp(-b\\lambda) d\\lambda = \\frac{b^{a}}{\\Gamma(a)} \\int_{0}^{+\\infty} \\lambda^{a+1} exp(-b\\lambda) d\\lambda$$\n\n$$= \\frac{b^{a}}{\\Gamma(a)} \\int_{0}^{+\\infty} (\\frac{u}{b})^{a+1} exp(-u) \\frac{1}{b} du$$\n\n$$= \\frac{1}{\\Gamma(a) \\cdot b^{2}} \\int_{0}^{+\\infty} u^{a+1} exp(-u) du$$\n\n$$= \\frac{1}{\\Gamma(a) \\cdot b^{2}} \\cdot \\Gamma(a+2) = \\frac{a(a+1)}{b^{2}}$$\n\nTherefore, according to $var[\\lambda] = \\mathbb{E}[\\lambda^2] - \\mathbb{E}[\\lambda]^2$ , we can obtain :\n\n$$var[\\lambda] = \\mathbb{E}[\\lambda^2] - \\mathbb{E}[\\lambda]^2 = \\frac{a(a+1)}{b^2} - (\\frac{a}{b})^2 = \\frac{a}{b^2}$$\n\nFor the mode of a gamma distribution, we need to find where the maximum of the PDF occurs, and hence we will calculate the derivative of the gamma distribution with respect to $\\lambda$ .\n\n$$\\frac{d}{d\\lambda} \\left[ \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} exp(-b\\lambda) \\right] = [(a-1) - b\\lambda] \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-2} exp(-b\\lambda)$$\n\nIt is obvious that $Gam(\\lambda|a,b)$ has its maximum at $\\lambda = (a-1)/b$ . In other words, the gamma distribution $Gam(\\lambda|a,b)$ has mode (a-1)/b.",
"answer_length": 1750
},
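The mean a/b, variance a/b² and mode (a-1)/b from Prob.2.42 can be checked against scipy's gamma distribution, which uses shape a and scale 1/b for the density (2.146). A sketch with arbitrary a and b:

```python
import numpy as np
from scipy.stats import gamma

a, b = 3.5, 2.0
dist = gamma(a, scale=1.0 / b)      # scipy's shape/scale form of Gam(lambda | a, b) in (2.146)

print(dist.mean(), a / b)           # mean = a/b
print(dist.var(), a / b**2)         # variance = a/b^2

lam = np.linspace(1e-6, 10.0, 200001)
print(lam[np.argmax(dist.pdf(lam))], (a - 1) / b)   # mode = (a-1)/b, up to grid resolution
```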
{
"chapter": 2,
"question_number": "2.43",
"difficulty": "easy",
"question_text": "The following distribution\n\n$$p(x|\\sigma^2, q) = \\frac{q}{2(2\\sigma^2)^{1/q}\\Gamma(1/q)} \\exp\\left(-\\frac{|x|^q}{2\\sigma^2}\\right)$$\n (2.293: $p(x|\\sigma^2, q) = \\frac{q}{2(2\\sigma^2)^{1/q}\\Gamma(1/q)} \\exp\\left(-\\frac{|x|^q}{2\\sigma^2}\\right)$)\n\nis a generalization of the univariate Gaussian distribution. Show that this distribution is normalized so that\n\n$$\\int_{-\\infty}^{\\infty} p(x|\\sigma^2, q) \\, \\mathrm{d}x = 1 \\tag{2.294}$$\n\nand that it reduces to the Gaussian when q=2. Consider a regression model in which the target variable is given by $t=y(\\mathbf{x},\\mathbf{w})+\\epsilon$ and $\\epsilon$ is a random noise\n\nvariable drawn from the distribution (2.293: $p(x|\\sigma^2, q) = \\frac{q}{2(2\\sigma^2)^{1/q}\\Gamma(1/q)} \\exp\\left(-\\frac{|x|^q}{2\\sigma^2}\\right)$). Show that the log likelihood function over $\\mathbf{w}$ and $\\sigma^2$ , for an observed data set of input vectors $\\mathbf{X} = \\{\\mathbf{x}_1, \\dots, \\mathbf{x}_N\\}$ and corresponding target variables $\\mathbf{t} = (t_1, \\dots, t_N)^T$ , is given by\n\n$$\\ln p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\sigma^2) = -\\frac{1}{2\\sigma^2} \\sum_{n=1}^{N} |y(\\mathbf{x}_n, \\mathbf{w}) - t_n|^q - \\frac{N}{q} \\ln(2\\sigma^2) + \\text{const} \\quad (2.295)$$\n\nwhere 'const' denotes terms independent of both w and $\\sigma^2$ . Note that, as a function of w, this is the $L_q$ error function considered in Section 1.5.5.",
"answer": "Let's firstly calculate the following integral.\n\n$$\\begin{split} \\int_{-\\infty}^{+\\infty} exp(-\\frac{|x|^q}{2\\sigma^2}) \\, dx &= 2 \\int_{-\\infty}^{+\\infty} exp(-\\frac{x^q}{2\\sigma^2}) \\, dx \\\\ &= 2 \\int_{0}^{+\\infty} exp(-u) \\frac{(2\\sigma^2)^{\\frac{1}{q}}}{q} u^{\\frac{1}{q}-1} \\, du \\\\ &= 2 \\frac{(2\\sigma^2)^{\\frac{1}{q}}}{q} \\int_{0}^{+\\infty} exp(-u) u^{\\frac{1}{q}-1} \\, dx \\\\ &= 2 \\frac{(2\\sigma^2)^{\\frac{1}{q}}}{q} \\Gamma(\\frac{1}{q}) \\end{split}$$\n\nAnd then it is obvious that (2.293: $p(x|\\sigma^2, q) = \\frac{q}{2(2\\sigma^2)^{1/q}\\Gamma(1/q)} \\exp\\left(-\\frac{|x|^q}{2\\sigma^2}\\right)$) is normalized. Next, we consider about the log likelihood function. Since $\\epsilon = t - y(\\boldsymbol{x}, \\boldsymbol{w})$ and $\\epsilon \\sim p(\\epsilon | \\sigma^2, q)$ , we can write:\n\n$$\\ln p(\\mathbf{t}|\\boldsymbol{X},\\boldsymbol{w},\\sigma^2) = \\sum_{n=1}^{N} \\ln p\\left(y(\\boldsymbol{x_n},\\boldsymbol{w}) - t_n|\\sigma^2,q\\right)$$\n\n$$= -\\frac{1}{2\\sigma^2} \\sum_{n=1}^{N} |y(\\boldsymbol{x_n},\\boldsymbol{w}) - t_n|^q + N \\cdot \\ln \\left[\\frac{q}{2(2\\sigma^2)^{1/q}\\Gamma(1/q)}\\right]$$\n\n$$= -\\frac{1}{2\\sigma^2} \\sum_{n=1}^{N} |y(\\boldsymbol{x_n},\\boldsymbol{w}) - t_n|^q - \\frac{N}{q} \\ln(2\\sigma^2) + \\text{const}$$",
"answer_length": 1224
},
{
"chapter": 2,
"question_number": "2.44",
"difficulty": "medium",
"question_text": "Consider a univariate Gaussian distribution $\\mathcal{N}(x|\\mu,\\tau^{-1})$ having conjugate Gaussian-gamma prior given by (2.154: $p(\\mu, \\lambda) = \\mathcal{N}(\\mu | \\mu_0, (\\beta \\lambda)^{-1}) \\operatorname{Gam}(\\lambda | a, b)$), and a data set $\\mathbf{x} = \\{x_1, \\dots, x_N\\}$ of i.i.d. observations. Show that the posterior distribution is also a Gaussian-gamma distribution of the same functional form as the prior, and write down expressions for the parameters of this posterior distribution.",
"answer": "Here we use a simple method to solve this problem by taking advantage of (2.152: $\\propto \\left[\\lambda^{1/2} \\exp\\left(-\\frac{\\lambda\\mu^2}{2}\\right)\\right]^N \\exp\\left\\{\\lambda\\mu \\sum_{n=1}^{N} x_n - \\frac{\\lambda}{2} \\sum_{n=1}^{N} x_n^2\\right\\}. \\quad$) and (2.153: $= \\exp\\left\\{-\\frac{\\beta\\lambda}{2}(\\mu - c/\\beta)^2\\right\\} \\lambda^{\\beta/2} \\exp\\left\\{-\\left(d - \\frac{c^2}{2\\beta}\\right)\\lambda\\right\\} \\qquad$). By writing the prior distribution in the form of (2.153: $= \\exp\\left\\{-\\frac{\\beta\\lambda}{2}(\\mu - c/\\beta)^2\\right\\} \\lambda^{\\beta/2} \\exp\\left\\{-\\left(d - \\frac{c^2}{2\\beta}\\right)\\lambda\\right\\} \\qquad$), i.e., $p(\\mu, \\lambda | \\beta, c, d)$ , we can easily obtain the posterior distribution.\n\n$$\\begin{split} p(\\mu,\\lambda|\\boldsymbol{X}) &\\propto & p(\\boldsymbol{X}|\\mu,\\lambda) \\cdot p(\\mu,\\lambda) \\\\ &\\propto & \\left[\\lambda^{1/2}exp(-\\frac{\\lambda\\mu^2}{2})\\right]^{N+\\beta} exp\\left[(c+\\sum_{n=1}^N x_n)\\lambda\\mu - (d+\\sum_{n=1}^N \\frac{x_n^2}{2})\\lambda\\right] \\end{split}$$\n\nTherefore, we can see that the posterior distribution has parameters: $\\beta' = \\beta + N$ , $c' = c + \\sum_{n=1}^{N} x_n$ , $d' = d + \\sum_{n=1}^{N} \\frac{x_n^2}{2}$ . And since the prior distribution is actually the product of a Gaussian distribution and a Gamma distribution:\n\n$$p(\\mu, \\lambda | \\mu_0, \\beta, a, b) = \\mathcal{N} [\\mu | \\mu_0, (\\beta \\lambda)^{-1}] \\operatorname{Gam}(\\lambda | a, b)$$\n\nWhere $\\mu_0 = c/\\beta$ , $\\alpha = 1 + \\beta/2$ , $b = d - c^2/2\\beta$ . Hence the posterior distribution can also be written as the product of a Gaussian distribution and a Gamma distribution.\n\n$$p(\\mu, \\lambda | \\mathbf{X}) = \\mathcal{N} \\left[ \\mu | \\mu'_0, (\\beta' \\lambda)^{-1} \\right] \\operatorname{Gam}(\\lambda | a', b')$$\n\nWhere we have defined:\n\n$$\\mu'_0 = c'/\\beta' = (c + \\sum_{n=1}^N x_n)/(N+\\beta)$$\n\n$$a' = 1 + \\beta'/2 = 1 + (N+\\beta)/2$$\n\n$$b' = d' - c'^2/2\\beta' = d + \\sum_{n=1}^N \\frac{x_n^2}{2} - (c + \\sum_{n=1}^N x_n)^2/(2(\\beta+N))$$",
"answer_length": 1988
},
{
"chapter": 2,
"question_number": "2.45",
"difficulty": "easy",
"question_text": "Verify that the Wishart distribution defined by (2.155: $W(\\mathbf{\\Lambda}|\\mathbf{W}, \\nu) = B|\\mathbf{\\Lambda}|^{(\\nu-D-1)/2} \\exp\\left(-\\frac{1}{2}\\text{Tr}(\\mathbf{W}^{-1}\\mathbf{\\Lambda})\\right)$) is indeed a conjugate prior for the precision matrix of a multivariate Gaussian.",
"answer": "Let's begin by writing down the dependency of the prior distribution $\\mathcal{W}(\\Lambda | \\mathbf{W}, v)$ and the likelihood function $p(\\mathbf{X} | \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})$ on $\\boldsymbol{\\Lambda}$ .\n\n$$p(\\boldsymbol{X}|\\boldsymbol{\\mu},\\boldsymbol{\\Lambda}) \\propto |\\boldsymbol{\\Lambda}|^{N/2} exp\\left[\\sum_{n=1}^{N} -\\frac{1}{2}(\\boldsymbol{x_n} - \\boldsymbol{\\mu})^T \\boldsymbol{\\Lambda} (\\boldsymbol{x_n} - \\boldsymbol{\\mu})\\right]$$\n\nAnd if we denote\n\n$$S = \\frac{1}{N} \\sum_{n=1}^{N} (\\boldsymbol{x_n} - \\boldsymbol{\\mu}) (\\boldsymbol{x_n} - \\boldsymbol{\\mu})^T$$\n\nThen we can rewrite the equation above as:\n\n$$p(\\boldsymbol{X}|\\boldsymbol{\\mu},\\boldsymbol{\\Lambda}) \\propto |\\boldsymbol{\\Lambda}|^{N/2} exp\\left[-\\frac{1}{2}\\mathrm{Tr}(\\boldsymbol{S}\\boldsymbol{\\Lambda})\\right]$$\n\nJust as what we have done in Prob.2.34, and comparing this problem with Prob.2.34, one important thing should be noticed: since S and $\\Lambda$ are both symmetric, we have: $\\text{Tr}(S\\Lambda) = \\text{Tr}((S\\Lambda)^T) = \\text{Tr}(\\Lambda^T S^T) = \\text{Tr}(\\Lambda S)$ . And we can also write down the prior distribution as:\n\n$$\\mathcal{W}(\\pmb{\\Lambda}|\\pmb{W},v) \\propto |\\pmb{\\Lambda}|^{(v-D-1)/2} \\exp\\left[-\\frac{1}{2}\\mathrm{Tr}(\\pmb{W}^{-1}\\pmb{\\Lambda})\\right]$$\n\nTherefore, the posterior distribution can be obtained:\n\n$$\\begin{array}{ll} p(\\pmb{\\Lambda}|\\pmb{X},\\pmb{W},v) & \\propto & p(\\pmb{X}|\\pmb{\\mu},\\pmb{\\Lambda})\\cdot\\mathcal{W}(\\pmb{\\Lambda}|\\pmb{W},v) \\\\ \\\\ \\propto & |\\pmb{\\Lambda}|^{(N+v-D-1)/2}\\exp\\left\\{-\\frac{1}{2}\\mathrm{Tr}\\big[(\\pmb{W}^{-1}+\\pmb{S})\\pmb{\\Lambda}\\big]\\right\\} \\end{array}$$\n\nTherefore, $p(\\Lambda | X, W, v)$ is also a *Wishart* distribution, with parameters:\n\n$$v_N = N + v$$\n$$\\mathbf{W}_N = (\\mathbf{W}^{-1} + \\mathbf{S})^{-1}$$",
"answer_length": 1799
},
{
"chapter": 2,
"question_number": "2.46",
"difficulty": "easy",
"question_text": "Verify that evaluating the integral in (2.158) leads to the result (2.159: $St(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\nu/2 + 1/2)}{\\Gamma(\\nu/2)} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{1/2} \\left[1 + \\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\nu/2 - 1/2}$).",
"answer": "It is quite straightforward.\n\n$$\\begin{split} p(x|\\mu,a,b) &= \\int_0^\\infty \\mathcal{N}(x|\\mu,\\tau^{-1}) \\mathrm{Gam}(\\tau|a,b) \\, d\\tau \\\\ &= \\int_0^\\infty \\frac{b^a exp(-b\\tau)\\tau^{a-1}}{\\Gamma(a)} (\\frac{\\tau}{2\\pi})^{1/2} exp\\left\\{-\\frac{\\tau}{2}(x-\\mu)^2\\right\\} \\, d\\tau \\\\ &= \\frac{b^a}{\\Gamma(a)} (\\frac{1}{2\\pi})^{1/2} \\int_0^\\infty \\tau^{a-1/2} exp\\left\\{-b\\tau - \\frac{\\tau}{2}(x-\\mu)^2\\right\\} \\, d\\tau \\end{split}$$\n\nAnd if we make change of variable: $z = \\tau [b + (x - \\mu)^2/2]$ , the integral above can be written as:\n\n$$\\begin{split} p(x|\\mu,a,b) &= \\frac{b^a}{\\Gamma(a)} (\\frac{1}{2\\pi})^{1/2} \\int_0^\\infty \\tau^{a-1/2} exp \\left\\{ -b\\tau - \\frac{\\tau}{2} (x-\\mu)^2 \\right\\} d\\tau \\\\ &= \\frac{b^a}{\\Gamma(a)} (\\frac{1}{2\\pi})^{1/2} \\int_0^\\infty \\left[ \\frac{z}{b + (x-\\mu)^2/2} \\right]^{a-1/2} exp \\left\\{ -z \\right\\} \\frac{1}{b + (x-\\mu)^2/2} dz \\\\ &= \\frac{b^a}{\\Gamma(a)} (\\frac{1}{2\\pi})^{1/2} \\left[ \\frac{1}{b + (x-\\mu)^2/2} \\right]^{a+1/2} \\int_0^\\infty z^{a-1/2} exp \\left\\{ -z \\right\\} dz \\\\ &= \\frac{b^a}{\\Gamma(a)} (\\frac{1}{2\\pi})^{1/2} \\left[ b + \\frac{(x-\\mu)^2}{2} \\right]^{-a-1/2} \\Gamma(a+1/2) \\end{split}$$\n\nAnd if we substitute a = v/2 and $b = v/2\\lambda$ , we will obtain (2.159: $St(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\nu/2 + 1/2)}{\\Gamma(\\nu/2)} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{1/2} \\left[1 + \\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\nu/2 - 1/2}$).",
"answer_length": 1399
},
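The marginalization in Prob.2.46 can be verified numerically: integrating N(x|μ, τ⁻¹) Gam(τ | ν/2, ν/(2λ)) over τ should reproduce the Student's t density (2.159), which corresponds to scipy's t distribution with df = ν, loc = μ and scale = 1/√λ. A sketch with arbitrary μ, λ, ν (scipy assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, gamma, t

mu, lam, nu = 0.5, 2.0, 4.0
a, b = nu / 2.0, nu / (2.0 * lam)    # the substitution a = nu/2, b = nu/(2*lambda)

def integrand(tau, x):
    # N(x | mu, tau^{-1}) * Gam(tau | a, b), the integrand marginalized in (2.158)
    return norm.pdf(x, loc=mu, scale=1.0 / np.sqrt(tau)) * gamma.pdf(tau, a, scale=1.0 / b)

for x in [-2.0, 0.0, 1.0, 3.0]:
    mixture, _ = quad(integrand, 0.0, np.inf, args=(x,))
    student = t.pdf(x, df=nu, loc=mu, scale=1.0 / np.sqrt(lam))   # (2.159)
    print(x, mixture, student)       # the two values agree
```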
{
"chapter": 2,
"question_number": "2.47",
"difficulty": "easy",
"question_text": "Show that in the limit $\\nu \\to \\infty$ , the t-distribution (2.159: $St(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\nu/2 + 1/2)}{\\Gamma(\\nu/2)} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{1/2} \\left[1 + \\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\nu/2 - 1/2}$) becomes a Gaussian. Hint: ignore the normalization coefficient, and simply look at the dependence on x.",
"answer": "We focus on the dependency of (2.159: $St(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\nu/2 + 1/2)}{\\Gamma(\\nu/2)} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{1/2} \\left[1 + \\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\nu/2 - 1/2}$) on x.\n\n$$\\begin{split} \\operatorname{St}(x|\\mu,\\lambda,v) & \\propto & \\left[1+\\frac{\\lambda(x-\\mu)^2}{v}\\right]^{-v/2-1/2} \\\\ & \\propto & exp\\left[\\frac{-v-1}{2}ln(1+\\frac{\\lambda(x-\\mu)^2}{v})\\right] \\\\ & \\propto & exp\\left[\\frac{-v-1}{2}(\\frac{\\lambda(x-\\mu)^2}{v}+O(v^{-2}))\\right] \\\\ & \\approx & exp\\left[-\\frac{\\lambda(x-\\mu)^2}{2}\\right] \\quad (v\\to\\infty) \\end{split}$$\n\nWhere we have used *Taylor Expansion*: $ln(1+\\epsilon) = \\epsilon + O(\\epsilon^2)$ . We see that this, up to an overall constant, is a Gaussian distribution with mean $\\mu$ and precision $\\lambda$ .",
"answer_length": 790
},
{
"chapter": 2,
"question_number": "2.48",
"difficulty": "easy",
"question_text": "By following analogous steps to those used to derive the univariate Student's t-distribution (2.159: $St(x|\\mu,\\lambda,\\nu) = \\frac{\\Gamma(\\nu/2 + 1/2)}{\\Gamma(\\nu/2)} \\left(\\frac{\\lambda}{\\pi\\nu}\\right)^{1/2} \\left[1 + \\frac{\\lambda(x-\\mu)^2}{\\nu}\\right]^{-\\nu/2 - 1/2}$), verify the result (2.162: $\\operatorname{St}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\frac{\\Gamma(D/2 + \\nu/2)}{\\Gamma(\\nu/2)} \\frac{|\\boldsymbol{\\Lambda}|^{1/2}}{(\\pi\\nu)^{D/2}} \\left[ 1 + \\frac{\\Delta^2}{\\nu} \\right]^{-D/2 - \\nu/2}$) for the multivariate form of the Student's t-distribution, by marginalizing over the variable $\\eta$ in (2.161: $St(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\int_0^\\infty \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) Gam(\\eta | \\nu/2, \\nu/2) \\, d\\eta.$). Using the definition (2.161: $St(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\int_0^\\infty \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) Gam(\\eta | \\nu/2, \\nu/2) \\, d\\eta.$), show by exchanging integration variables that the multivariate t-distribution is correctly normalized.",
"answer": "The same steps in Prob.2.46 can be used here.\n\n$$\\begin{split} \\operatorname{St}(\\boldsymbol{x} \\, \\big| \\, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, v) &= \\int_0^{+\\infty} \\mathcal{N}(\\boldsymbol{x} \\, \\big| \\, \\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) \\cdot \\operatorname{Gam}(\\eta \\, \\big| \\, \\frac{v}{2}, \\frac{v}{2}) \\, \\, d\\eta \\\\ &= \\int_0^{+\\infty} \\frac{1}{(2\\pi)^{D/2}} |\\eta \\boldsymbol{\\Lambda}|^{1/2} exp \\left\\{ -\\frac{1}{2} (\\boldsymbol{x} - \\boldsymbol{\\mu})^T (\\eta \\boldsymbol{\\Lambda}) (\\boldsymbol{x} - \\boldsymbol{\\mu}) - \\frac{v\\eta}{2} \\right\\} \\frac{1}{\\Gamma(v/2)} (\\frac{v}{2})^{v/2} \\eta^{v/2 - 1} \\, d\\eta \\\\ &= \\frac{(v/2)^{v/2} \\, |\\boldsymbol{\\Lambda}|^{1/2}}{(2\\pi)^{D/2} \\, \\Gamma(v/2)} \\int_0^{+\\infty} exp \\left\\{ -\\frac{1}{2} (\\boldsymbol{x} - \\boldsymbol{\\mu})^T (\\eta \\boldsymbol{\\Lambda}) (\\boldsymbol{x} - \\boldsymbol{\\mu}) - \\frac{v\\eta}{2} \\right\\} \\eta^{D/2 + v/2 - 1} \\, d\\eta \\end{split}$$\n\nWhere we have taken advantage of the property: $|\\eta \\Lambda| = \\eta^D |\\Lambda|$ , and if we denote:\n\n$$\\Delta^2 = (\\boldsymbol{x} - \\boldsymbol{\\mu})^T \\Lambda (\\boldsymbol{x} - \\boldsymbol{\\mu})$$\n and $z = \\frac{\\eta}{2} (\\Delta^2 + v)$ \n\nThe expression above can be reduced to:\n\n$$\\begin{aligned} \\operatorname{St}(\\boldsymbol{x} \\,|\\, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, v) &= \\frac{(v/2)^{v/2} \\,|\\boldsymbol{\\Lambda}|^{1/2}}{(2\\pi)^{D/2} \\,\\Gamma(v/2)} \\int_0^{+\\infty} exp(-z) (\\frac{2z}{\\boldsymbol{\\Lambda}^2 + v})^{D/2 + v/2 - 1} \\cdot \\frac{2}{\\boldsymbol{\\Lambda}^2 + v} \\, dz \\\\ &= \\frac{(v/2)^{v/2} \\,|\\boldsymbol{\\Lambda}|^{1/2}}{(2\\pi)^{D/2} \\,\\Gamma(v/2)} (\\frac{2}{\\boldsymbol{\\Lambda}^2 + v})^{D/2 + v/2} \\int_0^{+\\infty} exp(-z) \\cdot z^{D/2 + v/2 - 1} \\, dz \\\\ &= \\frac{(v/2)^{v/2} \\,|\\boldsymbol{\\Lambda}|^{1/2}}{(2\\pi)^{D/2} \\,\\Gamma(v/2)} (\\frac{2}{\\boldsymbol{\\Lambda}^2 + v})^{D/2 + v/2} \\,\\Gamma(D/2 + v/2) \\end{aligned}$$\n\nAnd if we rearrange the expression above, we will obtain (2.162: $\\operatorname{St}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\frac{\\Gamma(D/2 + \\nu/2)}{\\Gamma(\\nu/2)} \\frac{|\\boldsymbol{\\Lambda}|^{1/2}}{(\\pi\\nu)^{D/2}} \\left[ 1 + \\frac{\\Delta^2}{\\nu} \\right]^{-D/2 - \\nu/2}$) just as required.",
"answer_length": 2214
},
{
"chapter": 2,
"question_number": "2.49",
"difficulty": "medium",
"question_text": "By using the definition (2.161: $St(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\int_0^\\infty \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) Gam(\\eta | \\nu/2, \\nu/2) \\, d\\eta.$) of the multivariate Student's t-distribution as a convolution of a Gaussian with a gamma distribution, verify the properties (2.164: $\\operatorname{cov}[\\mathbf{x}] = \\frac{\\nu}{(\\nu - 2)} \\boldsymbol{\\Lambda}^{-1}, \\quad \\text{if} \\quad \\nu > 2$), (2.165), and (2.166: $mode[\\mathbf{x}] = \\boldsymbol{\\mu}$) for the multivariate t-distribution defined by (2.162: $\\operatorname{St}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\frac{\\Gamma(D/2 + \\nu/2)}{\\Gamma(\\nu/2)} \\frac{|\\boldsymbol{\\Lambda}|^{1/2}}{(\\pi\\nu)^{D/2}} \\left[ 1 + \\frac{\\Delta^2}{\\nu} \\right]^{-D/2 - \\nu/2}$).",
"answer": "Firstly, we notice that if and only if $\\mathbf{x} = \\boldsymbol{\\mu}$ , $\\Delta^2$ equals to 0, so that $\\mathrm{St}(\\mathbf{x}|\\boldsymbol{\\mu},\\boldsymbol{\\Lambda},v)$ achieves its maximum. In other words, the mode of $\\mathrm{St}(\\mathbf{x}|\\boldsymbol{\\mu},\\boldsymbol{\\Lambda},v)$ is $\\boldsymbol{\\mu}$ . Then we consider about its mean $\\mathbb{E}[\\mathbf{x}]$ .\n\n$$\\mathbb{E}[\\boldsymbol{x}] = \\int_{\\boldsymbol{x} \\in \\mathbb{R}^{D}} \\operatorname{St}(\\boldsymbol{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, v) \\cdot \\boldsymbol{x} \\, d\\boldsymbol{x}$$\n\n$$= \\int_{\\boldsymbol{x} \\in \\mathbb{R}^{D}} \\left[ \\int_{0}^{+\\infty} \\mathcal{N}(\\boldsymbol{x} | \\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) \\cdot \\operatorname{Gam}(\\eta | \\frac{v}{2}, \\frac{v}{2}) \\, d\\eta \\, \\boldsymbol{x} \\right] d\\boldsymbol{x}$$\n\n$$= \\int_{\\boldsymbol{x} \\in \\mathbb{R}^{D}} \\int_{0}^{+\\infty} \\boldsymbol{x} \\mathcal{N}(\\boldsymbol{x} | \\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) \\cdot \\operatorname{Gam}(\\eta | \\frac{v}{2}, \\frac{v}{2}) \\, d\\eta \\, d\\boldsymbol{x}$$\n\n$$= \\int_{0}^{+\\infty} \\left[ \\int_{\\boldsymbol{x} \\in \\mathbb{R}^{D}} \\boldsymbol{x} \\mathcal{N}(\\boldsymbol{x} | \\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) \\, d\\boldsymbol{x} \\cdot \\operatorname{Gam}(\\eta | \\frac{v}{2}, \\frac{v}{2}) \\right] \\, d\\eta$$\n\n$$= \\int_{0}^{+\\infty} \\left[ \\boldsymbol{\\mu} \\cdot \\operatorname{Gam}(\\eta | \\frac{v}{2}, \\frac{v}{2}) \\right] \\, d\\eta$$\n\n$$= \\boldsymbol{\\mu} \\int_{0}^{+\\infty} \\operatorname{Gam}(\\eta | \\frac{v}{2}, \\frac{v}{2}) \\, d\\eta = \\boldsymbol{\\mu}$$\n\nWhere we have taken the following property:\n\n$$\\int_{\\boldsymbol{x}\\in\\mathbb{R}^D} \\boldsymbol{x} \\mathcal{N}(\\boldsymbol{x} \\, \\big| \\, \\boldsymbol{\\mu}, (\\eta \\boldsymbol{\\Lambda})^{-1}) \\, d\\boldsymbol{x} = \\mathbb{E}[\\boldsymbol{x}] = \\boldsymbol{\\mu}$$\n\nThen we calculate $\\mathbb{E}[xx^T]$ . 
The steps above can also be used here.\n\n$$\\mathbb{E}[\\boldsymbol{x}\\boldsymbol{x}^{T}] = \\int_{\\boldsymbol{x}\\in\\mathbb{R}^{D}} \\operatorname{St}(\\boldsymbol{x}|\\boldsymbol{\\mu},\\boldsymbol{\\Lambda},\\boldsymbol{v}) \\cdot \\boldsymbol{x}\\boldsymbol{x}^{T} d\\boldsymbol{x}$$\n\n$$= \\int_{\\boldsymbol{x}\\in\\mathbb{R}^{D}} \\left[ \\int_{0}^{+\\infty} \\mathcal{N}(\\boldsymbol{x}|\\boldsymbol{\\mu},(\\eta\\boldsymbol{\\Lambda})^{-1}) \\cdot \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\ d\\eta \\ \\boldsymbol{x}\\boldsymbol{x}^{T} \\right] d\\boldsymbol{x}$$\n\n$$= \\int_{\\boldsymbol{x}\\in\\mathbb{R}^{D}} \\int_{0}^{+\\infty} \\boldsymbol{x}\\boldsymbol{x}^{T} \\mathcal{N}(\\boldsymbol{x}|\\boldsymbol{\\mu},(\\eta\\boldsymbol{\\Lambda})^{-1}) \\cdot \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\ d\\eta \\ d\\boldsymbol{x}$$\n\n$$= \\int_{0}^{+\\infty} \\left[ \\int_{\\boldsymbol{x}\\in\\mathbb{R}^{D}} \\boldsymbol{x}\\boldsymbol{x}^{T} \\mathcal{N}(\\boldsymbol{x}|\\boldsymbol{\\mu},(\\eta\\boldsymbol{\\Lambda})^{-1}) \\ d\\boldsymbol{x} \\cdot \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\right] \\ d\\eta$$\n\n$$= \\int_{0}^{+\\infty} \\left[ \\mathbb{E}[\\boldsymbol{\\mu}\\boldsymbol{\\mu}^{T}] \\cdot \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\right] \\ d\\eta$$\n\n$$= \\int_{0}^{+\\infty} \\left[ \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{T} + (\\eta\\boldsymbol{\\Lambda})^{-1} \\right] \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\ d\\eta$$\n\n$$= \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{T} + \\int_{0}^{+\\infty} (\\eta\\boldsymbol{\\Lambda})^{-1} \\cdot \\operatorname{Gam}(\\eta|\\frac{\\boldsymbol{v}}{2},\\frac{\\boldsymbol{v}}{2}) \\ d\\eta$$\n\n$$= \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{T} + \\int_{0}^{+\\infty} (\\eta\\boldsymbol{\\Lambda})^{-1} \\cdot \\frac{1}{\\Gamma(\\boldsymbol{v}/2)} (\\frac{\\boldsymbol{v}}{2})^{\\boldsymbol{v}/2} \\eta^{\\boldsymbol{v}/2-1} exp(-\\frac{\\boldsymbol{v}}{2}\\eta) \\ d\\eta$$\n\n$$= \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{T} + \\boldsymbol{\\Lambda}^{-1} \\frac{1}{\\Gamma(\\boldsymbol{v}/2)} (\\frac{\\boldsymbol{v}}{2})^{\\boldsymbol{v}/2} \\int_{0}^{+\\infty} \\eta^{\\boldsymbol{v}/2-2} exp(-\\frac{\\boldsymbol{v}}{2}\\eta) \\ d\\eta$$\n\nIf we denote: $z = \\frac{v\\eta}{2}$ , the equation above can be reduced to :\n\n$$\\begin{split} \\mathbb{E}[\\boldsymbol{x}\\boldsymbol{x}^{T}] &= \\mu \\mu^{T} + \\Lambda^{-1} \\frac{1}{\\Gamma(v/2)} (\\frac{v}{2})^{v/2} \\int_{0}^{+\\infty} (\\frac{2z}{v})^{v/2 - 2} exp(-z) \\frac{2}{v} \\, dz \\\\ &= \\mu \\mu^{T} + \\Lambda^{-1} \\frac{1}{\\Gamma(v/2)} \\cdot \\frac{v}{2} \\int_{0}^{+\\infty} z^{v/2 - 2} exp(-z) \\, dz \\\\ &= \\mu \\mu^{T} + \\Lambda^{-1} \\frac{\\Gamma(v/2 - 1)}{\\Gamma(v/2)} \\cdot \\frac{v}{2} \\\\ &= \\mu \\mu^{T} + \\Lambda^{-1} \\frac{1}{v/2 - 1} \\frac{v}{2} \\\\ &= \\mu \\mu^{T} + \\frac{v}{v - 2} \\Lambda^{-1} \\end{split}$$\n\nWhere we have taken advantage of the property: $\\Gamma(x+1) = x\\Gamma(x)$ , and since we have $cov[x] = \\mathbb{E}[(x-\\mathbb{E}[x])(x-\\mathbb{E}[x])^T]$ , together with $\\mathbb{E}[x] = \\mu$ , we can obtain:\n\n$$cov[x] = \\frac{v}{v-2} \\Lambda^{-1}$$",
"answer_length": 4965
},
{
"chapter": 2,
"question_number": "2.5",
"difficulty": "medium",
"question_text": "In this exercise, we prove that the beta distribution, given by (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$), is correctly normalized, so that (2.14: $\\int_0^1 \\text{Beta}(\\mu|a,b) \\, d\\mu = 1.$) holds. This is equivalent to showing that\n\n$$\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}.$$\n (2.265: $\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}.$)\n\nFrom the definition (1.141: $\\Gamma(x) \\equiv \\int_0^\\infty u^{x-1} e^{-u} \\, \\mathrm{d}u.$) of the gamma function, we have\n\n$$\\Gamma(a)\\Gamma(b) = \\int_0^\\infty \\exp(-x)x^{a-1} dx \\int_0^\\infty \\exp(-y)y^{b-1} dy.$$\n (2.266: $\\Gamma(a)\\Gamma(b) = \\int_0^\\infty \\exp(-x)x^{a-1} dx \\int_0^\\infty \\exp(-y)y^{b-1} dy.$)\n\nUse this expression to prove (2.265: $\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}.$) as follows. First bring the integral over y inside the integrand of the integral over x, next make the change of variable t = y + xwhere x is fixed, then interchange the order of the x and t integrations, and finally make the change of variable $x = t\\mu$ where t is fixed.",
"answer": "Hints have already been given in the description, and let's make a little improvement by introducing t = y + x and $x = t\\mu$ at the same time, i.e. we will do following changes:\n\n$$\\begin{cases} x = t\\mu \\\\ y = t(1-\\mu) \\end{cases} \\text{ and } \\begin{cases} t = x+y \\\\ \\mu = \\frac{x}{x+y} \\end{cases}$$\n\nNote $t \\in [0, +\\infty]$ , $\\mu \\in (0, 1)$ , and that when we change variables in integral, we will introduce a redundant term called *Jacobian Determinant*.\n\n$$\\frac{\\partial(x,y)}{\\partial(\\mu,t)} = \\begin{vmatrix} \\frac{\\partial x}{\\partial \\mu} & \\frac{\\partial x}{\\partial t} \\\\ \\frac{\\partial y}{\\partial \\mu} & \\frac{\\partial y}{\\partial t} \\end{vmatrix} = \\begin{vmatrix} t & \\mu \\\\ -t & 1-\\mu \\end{vmatrix} = t$$\n\nNow we can calculate the integral.\n\n$$\\begin{split} \\Gamma(a)\\Gamma(b) &= \\int_0^{+\\infty} exp(-x)x^{a-1} dx \\int_0^{+\\infty} exp(-y)y^{b-1} dy \\\\ &= \\int_0^{+\\infty} \\int_0^{+\\infty} exp(-x)x^{a-1} exp(-y)y^{b-1} dy dx \\\\ &= \\int_0^{+\\infty} \\int_0^{+\\infty} exp(-x-y)x^{a-1} y^{b-1} dy dx \\\\ &= \\int_0^1 \\int_0^{+\\infty} exp(-t)(t\\mu)^{a-1} (t(1-\\mu))^{b-1} t dt d\\mu \\\\ &= \\int_0^{+\\infty} exp(-t)t^{a+b-1} dt \\cdot \\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu \\\\ &= \\Gamma(a+b) \\cdot \\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu \\end{split}$$\n\nTherefore, we have obtained:\n\n$$\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}$$",
"answer_length": 1381
},
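The identity (2.265) established in Prob.2.5 is easy to confirm by numerical integration. A sketch assuming scipy, with arbitrary (a, b) pairs:

```python
from scipy.integrate import quad
from scipy.special import gamma

for a, b in [(2.0, 3.0), (1.5, 4.5), (6.0, 2.0)]:
    integral, _ = quad(lambda u: u**(a - 1) * (1 - u)**(b - 1), 0.0, 1.0)
    print(integral, gamma(a) * gamma(b) / gamma(a + b))   # equal, as stated in (2.265)
```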
{
"chapter": 2,
"question_number": "2.50",
"difficulty": "easy",
"question_text": "Show that in the limit $\\nu \\to \\infty$ , the multivariate Student's t-distribution (2.162: $\\operatorname{St}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}, \\nu) = \\frac{\\Gamma(D/2 + \\nu/2)}{\\Gamma(\\nu/2)} \\frac{|\\boldsymbol{\\Lambda}|^{1/2}}{(\\pi\\nu)^{D/2}} \\left[ 1 + \\frac{\\Delta^2}{\\nu} \\right]^{-D/2 - \\nu/2}$) reduces to a Gaussian with mean $\\mu$ and precision $\\Lambda$ .",
"answer": "The same steps in Prob.2.47 can be used here.\n\n$$\\begin{split} \\operatorname{St}(\\pmb{x}|\\pmb{\\mu},\\pmb{\\Lambda},v) & \\propto & \\left[1+\\frac{\\pmb{\\Delta}^2}{v}\\right]^{-D/2-v/2} \\\\ & \\propto & exp\\left[(-D/2-v/2)\\cdot ln(1+\\frac{\\pmb{\\Delta}^2}{v})\\right] \\\\ & \\propto & exp\\left[-\\frac{D+v}{2}\\cdot(\\frac{\\pmb{\\Delta}^2}{v}+O(v^{-2}))\\right] \\\\ & \\approx & exp(-\\frac{\\pmb{\\Delta}^2}{2}) \\quad (v\\to\\infty) \\end{split}$$\n\nWhere we have used *Taylor Expansion*: $ln(1+\\epsilon) = \\epsilon + O(\\epsilon^2)$ . And since $\\Delta^2 = (x-\\mu)^T \\Lambda(x-\\mu)$ , we see that this, up to an overall constant, is a Gaussian distribution with mean $\\mu$ and precision $\\Lambda$ .",
"answer_length": 677
},
{
"chapter": 2,
"question_number": "2.51",
"difficulty": "easy",
"question_text": "- 2.51 (\\*) www The various trigonometric identities used in the discussion of periodic variables in this chapter can be proven easily from the relation\n\n$$\\exp(iA) = \\cos A + i\\sin A \\tag{2.296}$$\n\nin which i is the square root of minus one. By considering the identity\n\n$$\\exp(iA)\\exp(-iA) = 1 \\tag{2.297}$$\n\nprove the result (2.177: $\\cos^2 A + \\sin^2 A = 1$). Similarly, using the identity\n\n$$\\cos(A - B) = \\Re \\exp\\{i(A - B)\\}$$\n (2.298: $\\cos(A - B) = \\Re \\exp\\{i(A - B)\\}$)\n\nwhere $\\Re$ denotes the real part, prove (2.178: $\\cos A \\cos B + \\sin A \\sin B = \\cos(A - B).$). Finally, by using $\\sin(A - B) = \\Im \\exp\\{i(A - B)\\}$ , where $\\Im$ denotes the imaginary part, prove the result (2.183: $\\sin(A - B) = \\cos B \\sin A - \\cos A \\sin B$).",
"answer": "We first prove (2.177: $\\cos^2 A + \\sin^2 A = 1$). Since we have $exp(iA) \\cdot exp(-iA) = 1$ , and exp(iA) = cosA + isinA. We can obtain:\n\n$$(cosA + isinA) \\cdot (cosA - isinA) = 1$$\n\nWhich gives $cos^2A + sin^2A = 1$ . And then we prove (2.178: $\\cos A \\cos B + \\sin A \\sin B = \\cos(A - B).$) using the hint.\n\n$$cos(A-B) = \\Re[exp(i(A-B))]$$\n\n$$= \\Re[exp(iA)/exp(iB)]$$\n\n$$= \\Re[\\frac{cosA + isinA}{cosB + isinB}]$$\n\n$$= \\Re[\\frac{(cosA + isinA)(cosB - isinB)}{(cosB + isinB)(cosB - isinB)}]$$\n\n$$= \\Re[(cosA + isinA)(cosB - isinB)]$$\n\n$$= cosAcosB + sinAsinB$$\n\nIt is quite similar for (2.183: $\\sin(A - B) = \\cos B \\sin A - \\cos A \\sin B$).\n\n$$sin(A-B) = \\Im[exp(i(A-B))]$$\n\n$$= \\Im[(cosA + isinA)(cosB - isinB)]$$\n\n$$= sinAcosB - cosAsinB$$",
"answer_length": 747
},
{
"chapter": 2,
"question_number": "2.52",
"difficulty": "medium",
"question_text": "For large m, the von Mises distribution (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) becomes sharply peaked around the mode $\\theta_0$ . By defining $\\xi = m^{1/2}(\\theta - \\theta_0)$ and making the Taylor expansion of the cosine function given by\n\n$$\\cos \\alpha = 1 - \\frac{\\alpha^2}{2} + O(\\alpha^4) \\tag{2.299}$$\n\nshow that as $m \\to \\infty$ , the von Mises distribution tends to a Gaussian.",
"answer": "Let's follow the hint. We first derive an approximation for $exp[mcos(\\theta -$ \n\n $\\theta_0$ )].\n\n$$\\begin{split} \\exp\\{m\\cos(\\theta - \\theta_0)\\} &= \\exp\\left\\{m\\left[1 - \\frac{(\\theta - \\theta_0)^2}{2} + O((\\theta - \\theta_0)^4)\\right]\\right\\} \\\\ &= \\exp\\left\\{m - m\\frac{(\\theta - \\theta_0)^2}{2} - mO((\\theta - \\theta_0)^4)\\right\\} \\\\ &= \\exp(m) \\cdot \\exp\\left\\{-m\\frac{(\\theta - \\theta_0)^2}{2}\\right\\} \\cdot \\exp\\left\\{-mO((\\theta - \\theta_0)^4)\\right\\} \\end{split}$$\n\nIt is same for $exp(mcos\\theta)$ :\n\n$$exp\\{mcos\\theta\\} = exp(m) \\cdot exp(-m\\frac{\\theta^2}{2}) \\cdot exp\\left\\{-mO(\\theta^4)\\right\\}$$\n\nNow we rearrange (2.179):\n\n$$\\begin{split} p(\\theta|\\theta_{0},m) &= \\frac{1}{2\\pi I_{0}(m)} exp \\left\\{ mcos(\\theta-\\theta_{0}) \\right\\} \\\\ &= \\frac{1}{\\int_{0}^{2\\pi} exp \\left\\{ mcos\\theta \\right\\} d\\theta} exp \\left\\{ mcos(\\theta-\\theta_{0}) \\right\\} \\\\ &= \\frac{exp(m) \\cdot exp \\left\\{ -m\\frac{(\\theta-\\theta_{0})^{2}}{2} \\right\\} \\cdot exp \\left\\{ -mO((\\theta-\\theta_{0})^{4}) \\right\\}}{\\int_{0}^{2\\pi} exp(m) \\cdot exp(-m\\frac{\\theta^{2}}{2}) \\cdot exp \\left\\{ -mO(\\theta^{4}) \\right\\} d\\theta} \\\\ &= \\frac{1}{\\int_{0}^{2\\pi} exp(-m\\frac{\\theta^{2}}{2}) d\\theta} exp \\left\\{ -m\\frac{(\\theta-\\theta_{0})^{2}}{2} \\right\\} \\end{split}$$\n\nWhere we have taken advantage of the following fact:\n\n$$exp\\left\\{-mO((\\theta-\\theta_0)^4)\\right\\} \\approx exp\\left\\{-mO(\\theta^4)\\right\\} \\quad \\text{(when } m\\to\\infty\\text{)}$$\n\nTherefore, it is straightforward that when $m \\to \\infty$ , (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) reduces to a Gaussian Distribution with mean $\\theta_0$ and precision m.",
"answer_length": 1666
},
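A quick numerical check of this limit (not part of the original solution): for a large concentration m, the von Mises density should be close to a Gaussian with mean θ₀ and precision m. The values of m and θ₀ below are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.stats import vonmises, norm

m, theta0 = 200.0, 1.0                                   # large concentration and mode (assumption)
theta = np.linspace(theta0 - 0.2, theta0 + 0.2, 5)
print(vonmises.pdf(theta, kappa=m, loc=theta0))          # von Mises density (2.179)
print(norm.pdf(theta, loc=theta0, scale=1/np.sqrt(m)))   # Gaussian with mean theta0, precision m
```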
{
"chapter": 2,
"question_number": "2.53",
"difficulty": "easy",
"question_text": "Using the trigonometric identity (2.183: $\\sin(A - B) = \\cos B \\sin A - \\cos A \\sin B$), show that solution of (2.182: $\\sum_{n=1}^{N} \\sin(\\theta_n - \\theta_0) = 0.$) for $\\theta_0$ is given by (2.184: $\\theta_0^{\\rm ML} = \\tan^{-1} \\left\\{ \\frac{\\sum_n \\sin \\theta_n}{\\sum_n \\cos \\theta_n} \\right\\}$).",
"answer": "Let's rearrange (2.182: $\\sum_{n=1}^{N} \\sin(\\theta_n - \\theta_0) = 0.$) according to (2.183: $\\sin(A - B) = \\cos B \\sin A - \\cos A \\sin B$).\n\n$$\\begin{split} \\sum_{n=1}^{N} sin(\\theta - \\theta_0) &= \\sum_{n=1}^{N} (sin\\theta_n cos\\theta_0 - cos\\theta_n sin\\theta_0) \\\\ &= cos\\theta_0 \\sum_{n=1}^{N} sin\\theta_n - sin\\theta_0 \\sum_{n=1}^{N} cos\\theta_n \\end{split}$$\n\nWhere we have used (2.183: $\\sin(A - B) = \\cos B \\sin A - \\cos A \\sin B$), and then together with (2.182: $\\sum_{n=1}^{N} \\sin(\\theta_n - \\theta_0) = 0.$), we can obtain:\n\n$$cos\\theta_0 \\sum_{n=1}^{N} sin\\theta_n - sin\\theta_0 \\sum_{n=1}^{N} cos\\theta_n = 0$$\n\nWhich gives:\n\n$$\\theta_0^{ML} = tan^{-1} \\left\\{ \\frac{\\sum_n sin\\theta_n}{\\sum_n cos\\theta_n} \\right\\}$$",
"answer_length": 734
},
{
"chapter": 2,
"question_number": "2.54",
"difficulty": "easy",
"question_text": "By computing first and second derivatives of the von Mises distribution (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$), and using $I_0(m) > 0$ for m > 0, show that the maximum of the distribution occurs when $\\theta = \\theta_0$ and that the minimum occurs when $\\theta = \\theta_0 + \\pi \\pmod{2\\pi}$ .",
"answer": "We calculate the first and second derivative of (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) with respect to $\\theta$ .\n\n$$p(\\theta|\\theta_0, m)' = \\frac{1}{2\\pi I_0(m)} [-m sin(\\theta - \\theta_0)] \\exp\\{m cos(\\theta - \\theta_0)\\}$$\n\n$$p(\\theta|\\theta_0,m)'' = \\frac{1}{2\\pi I_0(m)} \\left[ -m\\cos(\\theta - \\theta_0) + (-m\\sin(\\theta - \\theta_0))^2 \\right] exp\\left\\{ m\\cos(\\theta - \\theta_0) \\right\\}$$\n\nIf we let $p(\\theta|\\theta_0, m)'$ equals to 0, we will obtain its root:\n\n$$\\theta = \\theta_0 + k\\pi \\quad (k \\in \\mathbb{Z})$$\n\nWhen $k \\equiv 0 \\, (mod \\, 2)$ , i.e. $\\theta \\equiv \\theta_0 \\, (mod \\, 2\\pi)$ , we have:\n\n$$p(\\theta|\\theta_0,m)'' = \\frac{-m \\exp(m)}{2\\pi I_0(m)} < 0$$\n\nTherefore, when $\\theta = \\theta_0$ , (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) obtains its maximum. And when $k \\equiv 1 \\, (mod \\, 2)$ , i.e. $\\theta \\equiv \\theta_0 + \\pi \\, (mod \\, 2\\pi)$ , we have:\n\n$$p(\\theta|\\theta_0, m)'' = \\frac{m \\exp(-m)}{2\\pi I_0(m)} > 0$$\n\nTherefore, when $\\theta = \\theta_0 + \\pi \\, (mod \\, 2\\pi)$ , (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) obtains its minimum.",
"answer_length": 1234
},
{
"chapter": 2,
"question_number": "2.55",
"difficulty": "easy",
"question_text": "By making use of the result (2.168: $\\overline{r}\\cos\\overline{\\theta} = \\frac{1}{N}\\sum_{n=1}^{N}\\cos\\theta_n, \\qquad \\overline{r}\\sin\\overline{\\theta} = \\frac{1}{N}\\sum_{n=1}^{N}\\sin\\theta_n. \\qquad$), together with (2.184: $\\theta_0^{\\rm ML} = \\tan^{-1} \\left\\{ \\frac{\\sum_n \\sin \\theta_n}{\\sum_n \\cos \\theta_n} \\right\\}$) and the trigonometric identity (2.178: $\\cos A \\cos B + \\sin A \\sin B = \\cos(A - B).$), show that the maximum likelihood solution $m_{\\rm ML}$ for the concentration of the von Mises distribution satisfies $A(m_{\\rm ML}) = \\overline{r}$ where $\\overline{r}$ is the radius of the mean of the observations viewed as unit vectors in the two-dimensional Euclidean plane, as illustrated in Figure 2.17.",
"answer": "According to (2.185: $A(m) = \\frac{1}{N} \\sum_{n=1}^{N} \\cos(\\theta_n - \\theta_0^{\\text{ML}})$), we have :\n\n$$A(m_{ML}) = \\frac{1}{N} \\sum_{n=1}^{N} cos(\\theta_n - \\theta_0^{ML})$$\n\nBy using (2.178: $\\cos A \\cos B + \\sin A \\sin B = \\cos(A - B).$), we can write:\n\n$$\\begin{split} A(m_{ML}) &= \\frac{1}{N} \\sum_{n=1}^{N} cos(\\theta_{n} - \\theta_{0}^{ML}) \\\\ &= \\frac{1}{N} \\sum_{n=1}^{N} \\left( cos\\theta_{n} cos\\theta_{0}^{ML} + sin\\theta_{n} sin\\theta_{0}^{ML} \\right) \\\\ &= \\left( \\frac{1}{N} \\sum_{n=1}^{N} cos\\theta_{N} \\right) cos\\theta_{0}^{ML} + \\left( \\frac{1}{N} \\sum_{n=1}^{N} sin\\theta_{N} \\right) sin\\theta_{0}^{ML} \\end{split}$$\n\nBy using (2.168: $\\overline{r}\\cos\\overline{\\theta} = \\frac{1}{N}\\sum_{n=1}^{N}\\cos\\theta_n, \\qquad \\overline{r}\\sin\\overline{\\theta} = \\frac{1}{N}\\sum_{n=1}^{N}\\sin\\theta_n. \\qquad$), we can further derive:\n\n$$\\begin{split} A(m_{ML}) &= \\left(\\frac{1}{N}\\sum_{n=1}^{N}cos\\theta_{N}\\right)cos\\theta_{0}^{ML} + \\left(\\frac{1}{N}\\sum_{n=1}^{N}sin\\theta_{N}\\right)sin\\theta_{0}^{ML} \\\\ &= \\bar{r}cos\\bar{\\theta}\\cdot cos\\theta_{0}^{ML} + \\bar{r}sin\\bar{\\theta}\\cdot sin\\theta_{0}^{ML} \\\\ &= \\bar{r}cos(\\bar{\\theta}-\\theta_{0}^{ML}) \\end{split}$$\n\nAnd then by using (2.169: $\\overline{\\theta} = \\tan^{-1} \\left\\{ \\frac{\\sum_{n} \\sin \\theta_{n}}{\\sum_{n} \\cos \\theta_{n}} \\right\\}.$) and (2.184: $\\theta_0^{\\rm ML} = \\tan^{-1} \\left\\{ \\frac{\\sum_n \\sin \\theta_n}{\\sum_n \\cos \\theta_n} \\right\\}$), it is obvious that $\\bar{\\theta} = \\theta_0^{ML}$ , and hence $A(m_{ML}) = \\bar{r}$ .",
"answer_length": 1521
},
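Both results above, θ₀^ML in (2.184) and A(m_ML) = r̄, can be verified numerically on synthetic angles. The sketch below is only an illustration; the random sample of angles and its size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)     # synthetic angular observations (assumption)

# (2.184): theta_0^ML from the summed sines and cosines (atan2 picks the correct quadrant)
theta0 = np.arctan2(np.sin(theta).sum(), np.cos(theta).sum())
print(np.sin(theta - theta0).sum())             # (2.182): should be numerically zero

# (2.168): mean resultant length r_bar; the claim is A(m_ML) = r_bar
r_bar = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
A_mML = np.cos(theta - theta0).mean()           # (2.185) evaluated at theta_0^ML
print(r_bar, A_mML)                             # the two values should agree
```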
{
"chapter": 2,
"question_number": "2.56",
"difficulty": "medium",
"question_text": "Express the beta distribution (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$), the gamma distribution (2.146: $Gam(\\lambda|a,b) = \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} \\exp(-b\\lambda).$), and the von Mises distribution (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$) as members of the exponential family (2.194: $p(\\mathbf{x}|\\boldsymbol{\\eta}) = h(\\mathbf{x})g(\\boldsymbol{\\eta}) \\exp\\left\\{\\boldsymbol{\\eta}^{\\mathrm{T}}\\mathbf{u}(\\mathbf{x})\\right\\}$) and thereby identify their natural parameters.",
"answer": "Recall that the distributions belonging to the exponential family have the form:\n\n$$p(\\mathbf{x}|\\mathbf{\\eta}) = h(\\mathbf{x})g(\\mathbf{\\eta})exp(\\mathbf{\\eta}^T\\mathbf{u}(\\mathbf{x}))$$\n\nAnd according to (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$), the beta distribution can be written as:\n\n$$\\begin{aligned} \\operatorname{Beta}(x|a,b) &= \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} x^{a-1} (1-x)^{b-1} \\\\ &= \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} exp\\left[ (a-1)lnx + (b-1)ln(1-x) \\right] \\\\ &= \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\frac{exp\\left[ alnx + bln(1-x) \\right]}{x(1-x)} \\end{aligned}$$\n\nComparing it with the standard form of exponential family, we can obtain:\n\n$$\\begin{cases} \\boldsymbol{\\eta} = [a, b]^T \\\\ \\boldsymbol{u}(x) = [lnx, ln(1-x)]^T \\\\ g(\\boldsymbol{\\eta}) = \\Gamma(\\eta_1 + \\eta_2) / [\\Gamma(\\eta_1)\\Gamma(\\eta_2)] \\\\ h(x) = 1 / (x(1-x)) \\end{cases}$$\n\nWhere $\\eta_1$ means the first element of $\\eta$ , i.e. $\\eta_1 = a - 1$ , and $\\eta_2$ means the second element of $\\eta$ , i.e. $\\eta_2 = b - 1$ . According to (2.146: $Gam(\\lambda|a,b) = \\frac{1}{\\Gamma(a)} b^a \\lambda^{a-1} \\exp(-b\\lambda).$), Gamma distribution can be written as:\n\n$$Gam(x|a,b) = \\frac{1}{\\Gamma(a)}b^a x^{a-1} exp(-bx)$$\n\nComparing it with the standard form of exponential family, we can obtain:\n\n$$\\begin{cases} \\boldsymbol{\\eta} = [a, b]^T \\\\ \\boldsymbol{u}(x) = [0, -x] \\\\ g(\\boldsymbol{\\eta}) = \\eta_2^{\\eta_1} / \\Gamma(\\eta_1) \\\\ h(x) = x^{\\eta_1 - 1} \\end{cases}$$\n\nAccording to (2.179: $p(\\theta|\\theta_0, m) = \\frac{1}{2\\pi I_0(m)} \\exp\\{m\\cos(\\theta - \\theta_0)\\}$), the von Mises distribution can be written as:\n\n$$\\begin{split} p(x|\\theta_0,m) &= \\frac{1}{2\\pi I_0(m)} exp(mcos(x-\\theta_0)) \\\\ &= \\frac{1}{2\\pi I_0(m)} exp\\left[m(cosxcos\\theta_0 + sinxsin\\theta_0)\\right] \\end{split}$$\n\nComparing it with the standard form of exponential family, we can obtain:\n\n$$\\begin{cases} \\boldsymbol{\\eta} = [mcos\\theta_0, msin\\theta_0]^T \\\\ \\boldsymbol{u}(x) = [cosx, sinx] \\\\ g(\\boldsymbol{\\eta}) = 1/2\\pi I_0(\\sqrt{\\eta_1^2 + \\eta_2^2}) \\\\ h(x) = 1 \\end{cases}$$\n\nNote: a given distribution can be written into the exponential family in several ways with different natural parameters.",
"answer_length": 2239
},
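To make the identification above concrete, the minimal sketch below reassembles the beta density from the exponential-family factors h(x), g(η) and exp(η^T u(x)) identified for it, and compares the result against scipy's beta pdf; the parameter values and the evaluation point are arbitrary.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import beta as beta_dist

a, b = 2.0, 3.5                                  # arbitrary shape parameters (assumption)
x = 0.37                                         # arbitrary evaluation point (assumption)

eta = np.array([a, b])                           # natural parameters identified above
u = np.array([np.log(x), np.log(1 - x)])         # u(x) = [ln x, ln(1-x)]
h = 1.0 / (x * (1 - x))
g = gamma(eta.sum()) / (gamma(eta[0]) * gamma(eta[1]))

print(h * g * np.exp(eta @ u), beta_dist.pdf(x, a, b))   # the two values should match
```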
{
"chapter": 2,
"question_number": "2.57",
"difficulty": "easy",
"question_text": "Verify that the multivariate Gaussian distribution can be cast in exponential family form (2.194: $p(\\mathbf{x}|\\boldsymbol{\\eta}) = h(\\mathbf{x})g(\\boldsymbol{\\eta}) \\exp\\left\\{\\boldsymbol{\\eta}^{\\mathrm{T}}\\mathbf{u}(\\mathbf{x})\\right\\}$) and derive expressions for $\\eta$ , $\\mathbf{u}(\\mathbf{x})$ , $h(\\mathbf{x})$ and $g(\\eta)$ analogous to (2.220)–(2.223).",
"answer": "Recall that the distributions belonging to the exponential family have the form:\n\n$$p(\\mathbf{x}|\\mathbf{\\eta}) = h(\\mathbf{x})g(\\mathbf{\\eta})exp(\\mathbf{\\eta}^T\\mathbf{u}(\\mathbf{x}))$$\n\nAnd the multivariate Gaussian Distribution has the form:\n\n$$\\mathcal{N}(\\boldsymbol{x}|\\boldsymbol{\\mu},\\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} exp \\left\\{ -\\frac{1}{2} (\\boldsymbol{x} - \\boldsymbol{\\mu})^{T} \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{x} - \\boldsymbol{\\mu}) \\right\\}$$\n\nWe expand the exponential term with respect to $\\mu$ .\n\n$$\\mathcal{N}(\\boldsymbol{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} exp \\left\\{ -\\frac{1}{2} (\\boldsymbol{x}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{x} - 2\\boldsymbol{\\mu}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{x} + \\boldsymbol{\\mu} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu}) \\right\\} \n= \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\boldsymbol{\\Sigma}|^{1/2}} exp \\left\\{ -\\frac{1}{2} \\boldsymbol{x}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{x} + \\boldsymbol{\\mu}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{x} \\right\\} exp \\left\\{ -\\frac{1}{2} \\boldsymbol{\\mu} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu}) \\right\\}$$\n\nComparing it with the standard form of exponential family, we can obtain:\n\n$$\\begin{cases} \\boldsymbol{\\eta} = [\\boldsymbol{\\Sigma}^{-1}\\boldsymbol{\\mu}, -\\frac{1}{2}vec(\\boldsymbol{\\Sigma}^{-1})]^T \\\\ \\boldsymbol{u}(\\boldsymbol{x}) = [\\boldsymbol{x}, vec(\\boldsymbol{x}\\boldsymbol{x}^T)] \\\\ g(\\boldsymbol{\\eta}) = exp(\\frac{1}{4}\\boldsymbol{\\eta_1}^T\\boldsymbol{\\eta_2}^{-1}\\boldsymbol{\\eta_1}) + |-2\\boldsymbol{\\eta_2}|^{1/2} \\\\ h(\\boldsymbol{x}) = (2\\pi)^{-D/2} \\end{cases}$$\n\nWhere we have used $\\eta_1$ to denote the first element of $\\eta$ , and $\\eta_2$ to denote the second element of $\\eta$ . And we also take advantage of the vectorizing operator, i.e. $vec(\\cdot)$ . The vectorization of a matrix is a linear transformation which converts the matrix into a column vector. This can be viewed in an example:\n\n$$\\mathbf{A} = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} = \\operatorname{vec}(\\mathbf{A}) = [a, c, b, d]^T$$\n\nNote: By introducing vectorizing operator, we actually have $vec(\\Sigma^{-1}) \\cdot vec(xx^T) = x^T \\Sigma^{-1} x$",
"answer_length": 2284
},
{
"chapter": 2,
"question_number": "2.58",
"difficulty": "easy",
"question_text": "The result (2.226: $-\\nabla \\ln g(\\boldsymbol{\\eta}) = \\mathbb{E}[\\mathbf{u}(\\mathbf{x})].$) showed that the negative gradient of $\\ln g(\\eta)$ for the exponential family is given by the expectation of $\\mathbf{u}(\\mathbf{x})$ . By taking the second derivatives of (2.195: $g(\\boldsymbol{\\eta}) \\int h(\\mathbf{x}) \\exp\\left\\{\\boldsymbol{\\eta}^{\\mathrm{T}} \\mathbf{u}(\\mathbf{x})\\right\\} d\\mathbf{x} = 1$), show that\n\n$$-\\nabla\\nabla \\ln g(\\boldsymbol{\\eta}) = \\mathbb{E}[\\mathbf{u}(\\mathbf{x})\\mathbf{u}(\\mathbf{x})^{\\mathrm{T}}] - \\mathbb{E}[\\mathbf{u}(\\mathbf{x})]\\mathbb{E}[\\mathbf{u}(\\mathbf{x})^{\\mathrm{T}}] = \\operatorname{cov}[\\mathbf{u}(\\mathbf{x})]. \\tag{2.300}$$",
"answer": "Based on (2.226: $-\\nabla \\ln g(\\boldsymbol{\\eta}) = \\mathbb{E}[\\mathbf{u}(\\mathbf{x})].$), we have already obtained:\n\n$$-\\nabla \\ln g(\\boldsymbol{\\eta}) = g(\\boldsymbol{\\eta}) \\int h(\\boldsymbol{x}) \\exp\\{\\boldsymbol{\\eta}^T \\boldsymbol{u}(\\boldsymbol{x})\\} \\boldsymbol{u}(\\boldsymbol{x}) d\\boldsymbol{x}$$\n\nThen we calculate the derivative of both sides of the equation above with respect to $\\eta$ using the Chain rule of Calculus:\n\n$$-\\nabla\\nabla \\ln g(\\boldsymbol{\\eta}) = \\nabla g(\\boldsymbol{\\eta}) \\int h(\\boldsymbol{x}) \\exp\\{\\boldsymbol{\\eta}^T \\boldsymbol{u}(\\boldsymbol{x})\\} \\boldsymbol{u}(\\boldsymbol{x})^T d\\boldsymbol{x} + g(\\boldsymbol{\\eta}) \\int h(\\boldsymbol{x}) \\exp\\{\\boldsymbol{\\eta}^T \\boldsymbol{u}(\\boldsymbol{x})\\} \\boldsymbol{u}(\\boldsymbol{x}) \\boldsymbol{u}(\\boldsymbol{x})^T d\\boldsymbol{x}$$\n\nOne thing needs to be addressed here: please pay attention to the transpose operation, and $-\\nabla\\nabla \\ln g(\\eta)$ should be a matrix. Notice the relationship $\\nabla \\ln g(\\eta) = \\nabla g(\\eta)/g(\\eta)$ , the first term on the right hand side of the above equation can be simplified as:\n\n(first term on the right) = \n$$\\nabla \\ln g(\\boldsymbol{\\eta}) \\cdot g(\\boldsymbol{\\eta}) \\int h(\\boldsymbol{x}) \\exp\\{\\boldsymbol{\\eta}^T \\boldsymbol{u}(\\boldsymbol{x})\\} \\boldsymbol{u}(\\boldsymbol{x})^T d\\boldsymbol{x}$$\n \n= $\\nabla \\ln g(\\boldsymbol{\\eta}) \\cdot \\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})^T]$ \n= $-\\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})] \\cdot \\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})^T]$ \n\nBased on the definition, the second term on the right hand side is $\\mathbf{E}[u(x)u(x)^T]$ . Therefore, combining these two terms, we obtain:\n\n$$-\\nabla\\nabla \\ln g(\\boldsymbol{\\eta}) = -\\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})] \\cdot \\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})^T] + \\mathbf{E}[\\boldsymbol{u}(\\boldsymbol{x})\\boldsymbol{u}(\\boldsymbol{x})^T] = \\mathbf{cov}[\\boldsymbol{u}(\\boldsymbol{x})]$$",
"answer_length": 1952
},
{
"chapter": 2,
"question_number": "2.59",
"difficulty": "easy",
"question_text": "By changing variables using $y = x/\\sigma$ , show that the density (2.236: $p(x|\\sigma) = \\frac{1}{\\sigma} f\\left(\\frac{x}{\\sigma}\\right)$) will be correctly normalized, provided f(x) is correctly normalized.",
"answer": "It is straightforward.\n\n$$\\int p(x|\\sigma)dx = \\int \\frac{1}{\\sigma} f(\\frac{x}{\\sigma})dx$$\n$$= \\int \\frac{1}{\\sigma} f(u)\\sigma du$$\n$$= \\int f(u)du = 1$$\n\nWhere we have denoted $u = x/\\sigma$ .",
"answer_length": 197
},
{
"chapter": 2,
"question_number": "2.6",
"difficulty": "easy",
"question_text": "Make use of the result (2.265: $\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}.$) to show that the mean, variance, and mode of the beta distribution (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$) are given respectively by\n\n$$\\mathbb{E}[\\mu] = \\frac{a}{a+b} \\tag{2.267}$$\n\n$$\\mathbb{E}[\\mu] = \\frac{a}{a+b}$$\n (2.267: $\\mathbb{E}[\\mu] = \\frac{a}{a+b}$)\n\n$$var[\\mu] = \\frac{ab}{(a+b)^2(a+b+1)}$$\n (2.268: $var[\\mu] = \\frac{ab}{(a+b)^2(a+b+1)}$)\n\n$$\\text{mode}[\\mu] = \\frac{a-1}{a+b-2}.$$\n (2.269: $\\text{mode}[\\mu] = \\frac{a-1}{a+b-2}.$)",
"answer": "We will solve this problem based on definition.\n\n$$\\begin{split} \\mathbb{E}[\\mu] &= \\int_0^1 \\mu Beta(\\mu|a,b) d\\mu \\\\ &= \\int_0^1 \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^a (1-\\mu)^{b-1} d\\mu \\\\ &= \\frac{\\Gamma(a+b)\\Gamma(a+1)}{\\Gamma(a+1+b)\\Gamma(a)} \\int_0^1 \\frac{\\Gamma(a+1+b)}{\\Gamma(a+1)\\Gamma(b)} \\mu^a (1-\\mu)^{b-1} d\\mu \\\\ &= \\frac{\\Gamma(a+b)\\Gamma(a+1)}{\\Gamma(a+1+b)\\Gamma(a)} \\int_0^1 Beta(\\mu|a+1,b) d\\mu \\\\ &= \\frac{\\Gamma(a+b)}{\\Gamma(a+1+b)} \\cdot \\frac{\\Gamma(a+1)}{\\Gamma(a)} \\\\ &= \\frac{a}{a+b} \\end{split}$$\n\nWhere we have taken advantage of the property: $\\Gamma(z+1) = z\\Gamma(z)$ . For variance, it is quite similar. We first evaluate $E[\\mu^2]$ .\n\n$$\\begin{split} \\mathbb{E}[\\mu^{2}] &= \\int_{0}^{1} \\mu^{2} Beta(\\mu|a,b) d\\mu \\\\ &= \\int_{0}^{1} \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a+1} (1-\\mu)^{b-1} d\\mu \\\\ &= \\frac{\\Gamma(a+b)\\Gamma(a+2)}{\\Gamma(a+2+b)\\Gamma(a)} \\int_{0}^{1} \\frac{\\Gamma(a+2+b)}{\\Gamma(a+2)\\Gamma(b)} \\mu^{a+1} (1-\\mu)^{b-1} d\\mu \\\\ &= \\frac{\\Gamma(a+b)\\Gamma(a+2)}{\\Gamma(a+2+b)\\Gamma(a)} \\int_{0}^{1} Beta(\\mu|a+2,b) d\\mu \\\\ &= \\frac{\\Gamma(a+b)}{\\Gamma(a+2+b)} \\cdot \\frac{\\Gamma(a+2)}{\\Gamma(a)} \\\\ &= \\frac{a(a+1)}{(a+b)(a+b+1)} \\end{split}$$\n\nThen we use the formula: $var[\\mu] = E[\\mu^2] - E[\\mu]^2$ .\n\n$$var[\\mu] = \\frac{a(a+1)}{(a+b)(a+b+1)} - \\left(\\frac{a}{a+b}\\right)^2$$\n$$= \\frac{ab}{(a+b)^2(a+b+1)}$$",
"answer_length": 1375
},
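The closed-form mean, variance and mode can be checked against scipy; a minimal sketch with arbitrary a and b (both greater than 1 so that the mode formula applies):

```python
import numpy as np
from scipy.stats import beta as beta_dist

a, b = 3.0, 5.0                                   # arbitrary shape parameters (assumption)
print(beta_dist.mean(a, b), a / (a + b))                            # (2.267)
print(beta_dist.var(a, b), a * b / ((a + b)**2 * (a + b + 1)))      # (2.268)

grid = np.linspace(1e-6, 1 - 1e-6, 200001)        # brute-force search for the mode
print(grid[np.argmax(beta_dist.pdf(grid, a, b))], (a - 1) / (a + b - 2))  # (2.269)
```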
{
"chapter": 2,
"question_number": "2.60",
"difficulty": "medium",
"question_text": "Consider a histogram-like density model in which the space $\\mathbf{x}$ is divided into fixed regions for which the density $p(\\mathbf{x})$ takes the constant value $h_i$ over the $i^{\\text{th}}$ region, and that the volume of region i is denoted $\\Delta_i$ . Suppose we have a set of N observations of $\\mathbf{x}$ such that $n_i$ of these observations fall in region i. Using a Lagrange multiplier to enforce the normalization constraint on the density, derive an expression for the maximum likelihood estimator for the $\\{h_i\\}$ .",
"answer": "Firstly, we write down the log likelihood function.\n\n$$\\sum_{n=1}^{N} lnp(\\boldsymbol{x_n}) = \\sum_{i=1}^{M} n_i ln(h_i)$$\n\nSome details should be explained here. If $x_n$ falls into region $\\Delta_i$ , then $p(x_n)$ will equal to $h_i$ , and since we have already been given that among\n\nall the N observations, there are $n_i$ samples fall into region $\\Delta_i$ , we can easily write down the likelihood function just as the equation above, and note we use M to denote the number of different regions. Therefore, an implicit equation should hold:\n\n$$\\sum_{i=1}^{M} n_i = N$$\n\nWe now need to take account of the constraint that p(x) must integrate to unity, which can be written as $\\sum_{j=1}^{M} h_j \\Delta_j = 1$ . We introduce a Lagrange multiplier to the expression, and then we need to minimize:\n\n$$\\sum_{i=1}^{M} n_i \\ln(h_i) + \\lambda (\\sum_{j}^{M} h_j \\Delta_j - 1)$$\n\nWe calculate its derivative with respect to $h_i$ and let it equal to 0.\n\n$$\\frac{n_i}{h_i} + \\lambda \\Delta_i = 0$$\n\nMultiplying both sides by $h_i$ , performing summation over i and then using the constraint, we can obtain:\n\n$$N + \\lambda = 0$$\n\nIn other words, $\\lambda = -N$ . Then we substitute the result into the likelihood function, which gives:\n\n$$h_i = \\frac{n_i}{N} \\frac{1}{\\Delta_i}$$\n\n# **Problem 2.61 Solution**\n\nIt is straightforward. In K nearest neighbours (KNN), when we want to estimate probability density at a point $x_i$ , we will consider a small sphere centered on $x_i$ and then allow the radius to grow until it contains K data points, and then $p(x_i)$ will equal to $K/(NV_i)$ , where N is total observations and $V_i$ is the volume of the sphere centered on $x_i$ . We can assume that $V_i$ is small enough that $p(x_i)$ is roughly constant in it. In this way, We can write down the integral:\n\n$$\\int p(\\boldsymbol{x}) d\\boldsymbol{x} \\approx \\sum_{i=1}^{N} p(\\boldsymbol{x_i}) \\cdot V_i = \\sum_{i=1}^{N} \\frac{K}{NV_i} \\cdot V_i = K \\neq 1$$\n\nWe also see that if we use \"1NN\" (K=1), the probability density will be well normalized. Note that if and only if the volume of all the spheres are small enough and N is large enough, the equation above will hold. Fortunately, these two conditions can be satisfied in KNN.\n\n# 0.3 Probability Distribution",
"answer_length": 2284
},
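The ML estimator h_i = n_i / (N Δ_i) derived above is exactly what a density-normalized histogram computes. A small check on synthetic data follows; the data, the bin edges and the sample size are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)                         # synthetic observations (assumption)
edges = np.linspace(-4.0, 4.0, 21)                # fixed regions with volumes Delta_i (assumption)

n_i, _ = np.histogram(x, bins=edges)
delta_i = np.diff(edges)
N = n_i.sum()                                     # observations that fall in some region

h_i = n_i / (N * delta_i)                         # the ML estimator derived above
print(np.sum(h_i * delta_i))                      # normalization constraint: 1.0
print(np.allclose(h_i, np.histogram(x, bins=edges, density=True)[0]))  # matches numpy's density histogram
```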
{
"chapter": 2,
"question_number": "2.7",
"difficulty": "medium",
"question_text": "- 2.7 (\\*\\*) Consider a binomial random variable x given by (2.9: $(m|N,\\mu) = \\binom{N}{m} \\mu^m (1-\\mu)^{N-m}$), with prior distribution for $\\mu$ given by the beta distribution (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$), and suppose we have observed m occurrences of x=1 and l occurrences of x=0. Show that the posterior mean value of x lies between the prior mean and the maximum likelihood estimate for $\\mu$ . To do this, show that the posterior mean can be written as $\\lambda$ times the prior mean plus $(1-\\lambda)$ times the maximum likelihood estimate, where $0 \\le \\lambda \\le 1$ . This illustrates the concept of the posterior distribution being a compromise between the prior distribution and the maximum likelihood solution.",
"answer": "The maximum likelihood estimation for $\\mu$ , i.e. (2.8: $\\mu_{\\rm ML} = \\frac{m}{N}$), can be written as :\n\n$$\\mu_{ML} = \\frac{m}{m+l}$$\n\nWhere m represents how many times we observe 'head', l represents how many times we observe 'tail'. And the prior mean of $\\mu$ is given by (2.15: $\\mathbb{E}[\\mu] = \\frac{a}{a+b}$), the posterior mean value of x is given by (2.20: $p(x=1|\\mathcal{D}) = \\frac{m+a}{m+a+l+b}$). Therefore, we will prove that (m+a)/(m+a+l+b) lies between m/(m+l), a/(a+b). Given the fact that:\n\n$$\\lambda \\frac{a}{a+b} + (1-\\lambda) \\frac{m}{m+l} = \\frac{m+a}{m+a+l+b}$$\n where $\\lambda = \\frac{a+b}{m+l+a+b}$ \n\nWe have solved problem. Note: you can also solve it in a more simple way by prove that:\n\n$$\\left(\\frac{m+a}{m+a+l+b} - \\frac{a}{a+b}\\right) \\cdot \\left(\\frac{m+a}{m+a+l+b} - \\frac{m}{m+l}\\right) \\le 0$$\n\nThe expression above can be proved by reduction of fractions to a common denominator.",
"answer_length": 925
},
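The convex-combination identity used above is easy to confirm with concrete numbers; the prior parameters and the observed counts below are arbitrary.

```python
a, b, m, l = 2.0, 3.0, 7, 4                       # arbitrary prior parameters and counts (assumption)

prior_mean = a / (a + b)
mle = m / (m + l)
post_mean = (m + a) / (m + a + l + b)
lam = (a + b) / (m + l + a + b)                   # the lambda in [0, 1] used above

print(post_mean, lam * prior_mean + (1 - lam) * mle)              # identical
print(min(prior_mean, mle) <= post_mean <= max(prior_mean, mle))  # True
```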
{
"chapter": 2,
"question_number": "2.8",
"difficulty": "easy",
"question_text": "Consider two variables x and y with joint distribution p(x, y). Prove the following two results\n\n$$\\mathbb{E}[x] = \\mathbb{E}_y \\left[ \\mathbb{E}_x[x|y] \\right] \\tag{2.270}$$\n\n$$\\operatorname{var}[x] = \\mathbb{E}_y \\left[ \\operatorname{var}_x[x|y] \\right] + \\operatorname{var}_y \\left[ \\mathbb{E}_x[x|y] \\right]. \\tag{2.271}$$\n\nHere $\\mathbb{E}_x[x|y]$ denotes the expectation of x under the conditional distribution p(x|y), with a similar notation for the conditional variance.",
"answer": "We solve it base on definition.\n\n$$\\mathbb{E}_{y}[\\mathbb{E}_{x}[x|y]]] = \\int \\mathbb{E}_{x}[x|y]p(y)dy$$\n\n$$= \\int (\\int x p(x|y)dx)p(y)dy$$\n\n$$= \\int \\int x p(x|y)p(y)dxdy$$\n\n$$= \\int \\int x p(x,y)dxdy$$\n\n$$= \\int x p(x)dx = \\mathbb{E}[x]$$\n\n(2.271: $\\operatorname{var}[x] = \\mathbb{E}_y \\left[ \\operatorname{var}_x[x|y] \\right] + \\operatorname{var}_y \\left[ \\mathbb{E}_x[x|y] \\right].$) is complicated and we will calculate every term separately.\n\n$$\\begin{split} \\mathbb{E}_{y}[var_{x}[x|y]] &= \\int var_{x}[x|y]p(y)dy \\\\ &= \\int (\\int (x - \\mathbb{E}_{x}[x|y])^{2}p(x|y)dx)p(y)dy \\\\ &= \\int \\int (x - \\mathbb{E}_{x}[x|y])^{2}p(x,y)dxdy \\\\ &= \\int \\int (x^{2} - 2x\\mathbb{E}_{x}[x|y] + \\mathbb{E}_{x}[x|y]^{2})p(x,y)dxdy \\\\ &= \\int \\int x^{2}p(x)dx - \\int \\int 2x\\mathbb{E}_{x}[x|y]p(x,y)dxdy + \\int \\int (\\mathbb{E}_{x}[x|y]^{2})p(y)dy \\end{split}$$\n\nAbout the second term in the equation above, we further simplify it:\n\n$$\\iint 2x \\mathbb{E}_{x}[x|y] p(x,y) dx dy = 2 \\iint \\mathbb{E}_{x}[x|y] \\left( \\int x p(x,y) dx \\right) dy$$\n\n$$= 2 \\iint \\mathbb{E}_{x}[x|y] p(y) \\left( \\int x p(x|y) dx \\right) dy$$\n\n$$= 2 \\iint \\mathbb{E}_{x}[x|y]^{2} p(y) dy$$\n\nTherefore, we obtain the simple expression for the first term on the right side of (2.271):\n\n$$\\mathbb{E}_{y}[var_{x}[x|y]] = \\int \\int x^{2} p(x) dx - \\int \\int \\mathbb{E}_{x}[x|y]^{2} p(y) dy \\qquad (*)$$\n\nThen we process for the second term.\n\n$$\\begin{split} var_y[\\mathbb{E}_x[x|y]] &= \\int (\\mathbb{E}_x[x|y] - \\mathbb{E}_y[\\mathbb{E}_x[x|y]])^2 p(y) \\, dy \\\\ &= \\int (\\mathbb{E}_x[x|y] - \\mathbb{E}[x])^2 p(y) \\, dy \\\\ &= \\int \\mathbb{E}_x[x|y]^2 p(y) \\, dy - 2 \\int \\mathbb{E}[x] \\mathbb{E}_x[x|y] p(y) \\, dy + \\int \\mathbb{E}[x]^2 p(y) \\, dy \\\\ &= \\int \\mathbb{E}_x[x|y]^2 p(y) \\, dy - 2 \\mathbb{E}[x] \\int \\mathbb{E}_x[x|y] p(y) \\, dy + \\mathbb{E}[x]^2 \\end{split}$$\n\nThen following the same procedure, we deal with the second term of the equation above.\n\n$$2\\mathbb{E}[x] \\cdot \\int \\mathbb{E}_x[x|y] p(y) \\, dy = 2\\mathbb{E}[x] \\cdot \\mathbb{E}_y[\\mathbb{E}_x[x|y]]] = 2\\mathbb{E}[x]^2$$\n\nTherefore, we obtain the simple expression for the second term on the right side of (2.271):\n\n$$var_{y}[\\mathbb{E}_{x}[x|y]] = \\int \\mathbb{E}_{x}[x|y]^{2}p(y)dy - \\mathbb{E}[x]^{2}$$\n (\\*\\*)\n\nFinally, we add (\\*) and (\\*\\*), and then we will obtain:\n\n$$\\mathbb{E}_{y}[var_{x}[x|y]] + var_{y}[\\mathbb{E}_{x}[x|y]] = \\mathbb{E}[x^{2}] - \\mathbb{E}[x]^{2} = var[x]$$",
"answer_length": 2425
},
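Both identities can be illustrated with a quick Monte Carlo estimate on a simple joint distribution chosen arbitrarily for this check: y ~ N(0,1) and x | y ~ N(y, 0.5²), for which var_x[x|y] = 0.25 and E_x[x|y] = y.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=2_000_000)                    # y ~ N(0, 1)
x = rng.normal(loc=y, scale=0.5)                  # x | y ~ N(y, 0.25)

print(x.mean(), y.mean())                         # E[x] = E_y[E_x[x|y]] ; both close to 0
print(x.var(), 0.25 + y.var())                    # var[x] = E_y[var_x[x|y]] + var_y[E_x[x|y]]
```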
{
"chapter": 2,
"question_number": "2.9",
"difficulty": "hard",
"question_text": ". In this exercise, we prove the normalization of the Dirichlet distribution (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$) using induction. We have already shown in Exercise 2.5 that the beta distribution, which is a special case of the Dirichlet for M=2, is normalized. We now assume that the Dirichlet distribution is normalized for M-1 variables and prove that it is normalized for M variables. To do this, consider the Dirichlet distribution over M variables, and take account of the constraint $\\sum_{k=1}^{M} \\mu_k = 1$ by eliminating $\\mu_M$ , so that the Dirichlet is written\n\n$$p_M(\\mu_1, \\dots, \\mu_{M-1}) = C_M \\prod_{k=1}^{M-1} \\mu_k^{\\alpha_k - 1} \\left( 1 - \\sum_{j=1}^{M-1} \\mu_j \\right)^{\\alpha_M - 1}$$\n (2.272: $p_M(\\mu_1, \\dots, \\mu_{M-1}) = C_M \\prod_{k=1}^{M-1} \\mu_k^{\\alpha_k - 1} \\left( 1 - \\sum_{j=1}^{M-1} \\mu_j \\right)^{\\alpha_M - 1}$)\n\nand our goal is to find an expression for $C_M$ . To do this, integrate over $\\mu_{M-1}$ , taking care over the limits of integration, and then make a change of variable so that this integral has limits 0 and 1. By assuming the correct result for $C_{M-1}$ and making use of (2.265: $\\int_0^1 \\mu^{a-1} (1-\\mu)^{b-1} d\\mu = \\frac{\\Gamma(a)\\Gamma(b)}{\\Gamma(a+b)}.$), derive the expression for $C_M$ .",
"answer": "This problem is complexed, but hints have already been given in the description. Let's begin by performing integral of (2.272: $p_M(\\mu_1, \\dots, \\mu_{M-1}) = C_M \\prod_{k=1}^{M-1} \\mu_k^{\\alpha_k - 1} \\left( 1 - \\sum_{j=1}^{M-1} \\mu_j \\right)^{\\alpha_M - 1}$) over $\\mu_{M-1}$ . (Note:\n\nby integral over $\\mu_{M-1}$ , we actually obtain Dirichlet distribution with M-1 variables.)\n\n$$\\begin{array}{lll} p_{M-1}(\\pmb{\\mu},\\pmb{m},...,\\mu_{M-2}) & = & \\int_0^{1-\\pmb{\\mu}-\\pmb{m}-...-\\mu_{M-2}} C_M \\prod_{k=1}^{M-1} \\mu_k^{\\alpha_k-1} (1-\\sum_{j=1}^{M-1} \\mu_j)^{\\alpha_M-1} d\\mu_{M-1} \\\\ & = & C_M \\prod_{k=1}^{M-2} \\mu_k^{\\alpha_k-1} \\int_0^{1-\\pmb{\\mu}-\\pmb{m}-...-\\mu_{M-2}} \\mu_{M-1}^{\\alpha_{M-1}-1} (1-\\sum_{j=1}^{M-1} \\mu_j)^{\\alpha_M-1} d\\mu_{M-1} \\end{array}$$\n\nWe change variable by:\n\n$$t = \\frac{\\mu_{M-1}}{1 - \\mu - m - \\dots - \\mu_{M-2}}$$\n\nThe reason we do so is that $\\mu_{M-1} \\in [0, 1-\\mu-m-...-\\mu_{M-2}]$ , by making this changing of variable, we can see that $t \\in [0,1]$ . Then we can further simplify the expression.\n\n$$\\begin{array}{lll} p_{M-1} & = & C_M \\prod_{k=1}^{M-2} \\mu_k^{\\alpha_k-1} (1 - \\sum_{j=1}^{M-2} \\mu_j)^{\\alpha_{M-1} + \\alpha_M - 1} \\int_0^1 \\frac{\\mu_{M-1}^{\\alpha_{M-1} - 1} (1 - \\sum_{j=1}^{M-1} \\mu_j)^{\\alpha_M - 1}}{(1 - \\mu - m - \\dots - \\mu_{M-2})^{\\alpha_{M-1} + \\alpha_M - 2}} \\, dt \\\\ & = & C_M \\prod_{k=1}^{M-2} \\mu_k^{\\alpha_k - 1} (1 - \\sum_{j=1}^{M-2} \\mu_j)^{\\alpha_{M-1} + \\alpha_M - 1} \\int_0^1 t^{\\alpha_{M-1} - 1} (1 - t)^{\\alpha_M - 1} \\, dt \\\\ & = & C_M \\prod_{k=1}^{M-2} \\mu_k^{\\alpha_k - 1} (1 - \\sum_{j=1}^{M-2} \\mu_j)^{\\alpha_{M-1} + \\alpha_M - 1} \\frac{\\Gamma(\\alpha_{M-1} - 1)\\Gamma(\\alpha_M)}{\\Gamma(\\alpha_{M-1} + \\alpha_M)} \\end{array}$$\n\nComparing the expression above with a normalized Dirichlet Distribution with M-1 variables, and supposing that (2.272: $p_M(\\mu_1, \\dots, \\mu_{M-1}) = C_M \\prod_{k=1}^{M-1} \\mu_k^{\\alpha_k - 1} \\left( 1 - \\sum_{j=1}^{M-1} \\mu_j \\right)^{\\alpha_M - 1}$) holds for M-1, we can obtain that:\n\n$$C_M \\frac{\\Gamma(\\alpha_{M-1})\\Gamma(\\alpha_M)}{\\Gamma(\\alpha_{M-1} + \\alpha_M)} = \\frac{\\Gamma(\\alpha_1 + \\alpha_2 + \\dots + \\alpha_M)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)\\dots\\Gamma(\\alpha_{M-1} + \\alpha_M)}$$\n\nTherefore, we obtain\n\n$$C_M = \\frac{\\Gamma(\\alpha_1 + \\alpha_2 + \\dots + \\alpha_M)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)\\dots\\Gamma(\\alpha_{M-1})\\Gamma(\\alpha_M)}$$\n\nas required.",
"answer_length": 2394
}
]
},
{
"chapter_number": 3,
"total_questions": 21,
"difficulty_breakdown": {
"easy": 7,
"medium": 11,
"hard": 0,
"unknown": 5
},
"questions": [
{
"chapter": 3,
"question_number": "3.10",
"difficulty": "medium",
"question_text": "By making use of the result (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$) to evaluate the integral in (3.57: $p(t|\\mathbf{t},\\alpha,\\beta) = \\int p(t|\\mathbf{w},\\beta)p(\\mathbf{w}|\\mathbf{t},\\alpha,\\beta) \\,d\\mathbf{w}$), verify that the predictive distribution for the Bayesian linear regression model is given by (3.58: $p(t|\\mathbf{x}, \\mathbf{t}, \\alpha, \\beta) = \\mathcal{N}(t|\\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_N^2(\\mathbf{x}))$) in which the input-dependent variance is given by (3.59: $\\sigma_N^2(\\mathbf{x}) = \\frac{1}{\\beta} + \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{S}_N \\phi(\\mathbf{x}).$).",
"answer": "We have already known:\n\n$$p(t|\\boldsymbol{w}, \\beta) = \\mathcal{N}(t|y(\\boldsymbol{x}, \\boldsymbol{w}), \\beta^{-1})$$\n\nAnd\n\n$$p(\\boldsymbol{w}|\\mathbf{t},\\alpha,\\beta) = \\mathcal{N}(\\boldsymbol{w}|\\boldsymbol{m}_{N},\\boldsymbol{S}_{N})$$\n\nWhere $m_N$ , $S_N$ are given by (3.53: $\\mathbf{m}_{N} = \\beta \\mathbf{S}_{N} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (3.54: $\\mathbf{S}_{N}^{-1} = \\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$). As what we do in previous problem, we can rewrite $p(t|\\boldsymbol{w},\\beta)$ as:\n\n$$p(t|\\boldsymbol{w}, \\beta) = \\mathcal{N}(t|\\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{w}, \\beta^{-1})$$\n\nAnd then we take advantage of (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$) and (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), we can obtain:\n\n$$p(t|\\mathbf{t}, \\alpha, \\beta) = \\mathcal{N}(\\boldsymbol{\\phi}(\\mathbf{x})^T \\boldsymbol{m}_{N}, \\beta^{-1} + \\boldsymbol{\\phi}(\\mathbf{x})^T \\mathbf{S}_{N} \\boldsymbol{\\phi}(\\mathbf{x}))$$\n\nWhich is exactly the same as (3.58: $p(t|\\mathbf{x}, \\mathbf{t}, \\alpha, \\beta) = \\mathcal{N}(t|\\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_N^2(\\mathbf{x}))$), if we notice that\n\n$$\\phi(\\mathbf{x})^T \\mathbf{m}_{\\mathbf{N}} = \\mathbf{m}_{\\mathbf{N}}^T \\phi(\\mathbf{x})$$",
"answer_length": 1575
},
{
"chapter": 3,
"question_number": "3.11",
"difficulty": "medium",
"question_text": "- 3.11 (\\*\\*) We have seen that, as the size of a data set increases, the uncertainty associated with the posterior distribution over model parameters decreases. Make use of the matrix identity (Appendix C)\n\n$$\\left(\\mathbf{M} + \\mathbf{v}\\mathbf{v}^{\\mathrm{T}}\\right)^{-1} = \\mathbf{M}^{-1} - \\frac{\\left(\\mathbf{M}^{-1}\\mathbf{v}\\right)\\left(\\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\right)}{1 + \\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\mathbf{v}}$$\n(3.110: $\\left(\\mathbf{M} + \\mathbf{v}\\mathbf{v}^{\\mathrm{T}}\\right)^{-1} = \\mathbf{M}^{-1} - \\frac{\\left(\\mathbf{M}^{-1}\\mathbf{v}\\right)\\left(\\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\right)}{1 + \\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\mathbf{v}}$)\n\nto show that the uncertainty $\\sigma_N^2(\\mathbf{x})$ associated with the linear regression function given by (3.59: $\\sigma_N^2(\\mathbf{x}) = \\frac{1}{\\beta} + \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{S}_N \\phi(\\mathbf{x}).$) satisfies\n\n$$\\sigma_{N+1}^2(\\mathbf{x}) \\leqslant \\sigma_N^2(\\mathbf{x}). \\tag{3.111}$$",
"answer": "We need to use the result obtained in Prob.3.8. In Prob.3.8, we have derived a formula for $\\mathbf{S}_{N+1}^{-1}$ :\n\n$$S_{N+1}^{-1} = S_N^{-1} + \\beta \\, \\phi(x_{N+1}) \\, \\phi(x_{N+1})^T$$\n\nAnd then using (3.110: $\\left(\\mathbf{M} + \\mathbf{v}\\mathbf{v}^{\\mathrm{T}}\\right)^{-1} = \\mathbf{M}^{-1} - \\frac{\\left(\\mathbf{M}^{-1}\\mathbf{v}\\right)\\left(\\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\right)}{1 + \\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\mathbf{v}}$), we can obtain:\n\n$$S_{N+1} = \\left[ S_{N}^{-1} + \\beta \\phi(x_{N+1}) \\phi(x_{N+1})^{T} \\right]^{-1}$$\n\n$$= \\left[ S_{N}^{-1} + \\sqrt{\\beta} \\phi(x_{N+1}) \\sqrt{\\beta} \\phi(x_{N+1})^{T} \\right]^{-1}$$\n\n$$= S_{N} - \\frac{S_{N}(\\sqrt{\\beta} \\phi(x_{N+1}))(\\sqrt{\\beta} \\phi(x_{N+1})^{T}) S_{N}}{1 + (\\sqrt{\\beta} \\phi(x_{N+1})^{T}) S_{N}(\\sqrt{\\beta} \\phi(x_{N+1}))}$$\n\n$$= S_{N} - \\frac{\\beta S_{N} \\phi(x_{N+1}) \\phi(x_{N+1})^{T} S_{N}}{1 + \\beta \\phi(x_{N+1})^{T} S_{N} \\phi(x_{N+1})}$$\n\nNow we calculate $\\sigma_N^2(\\mathbf{x}) - \\sigma_{N+1}^2(\\mathbf{x})$ according to (3.59: $\\sigma_N^2(\\mathbf{x}) = \\frac{1}{\\beta} + \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{S}_N \\phi(\\mathbf{x}).$).\n\n$$\\begin{split} \\sigma_N^2(x) - \\sigma_{N+1}^2(x) &= \\phi(x)^T (S_N - S_{N+1}) \\phi(x) \\\\ &= \\phi(x)^T \\frac{\\beta S_N \\phi(x_{N+1}) \\phi(x_{N+1})^T S_N}{1 + \\beta \\phi(x_{N+1})^T S_N \\phi(x_{N+1})} \\phi(x) \\\\ &= \\frac{\\phi(x)^T S_N \\phi(x_{N+1}) \\phi(x_{N+1})^T S_N \\phi(x)}{1/\\beta + \\phi(x_{N+1})^T S_N \\phi(x_{N+1})} \\\\ &= \\frac{\\left[\\phi(x)^T S_N \\phi(x_{N+1})\\right]^2}{1/\\beta + \\phi(x_{N+1})^T S_N \\phi(x_{N+1})} \\quad (*) \\end{split}$$\n\nAnd since $S_N$ is positive definite, (\\*) is larger than 0. Therefore, we have proved that $\\sigma_N^2(\\mathbf{x}) - \\sigma_{N+1}^2(\\mathbf{x}) \\ge 0$",
"answer_length": 1746
},
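The posterior quantities (3.53), (3.54), the predictive variance (3.59) and the monotonicity result (3.111) can all be exercised in a few lines. The sketch below is only an illustration; the hyperparameters, the basis functions and the synthetic data are arbitrary choices for the check.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 0.5, 25.0                           # illustrative hyperparameters (assumption)
phi = lambda x: np.array([1.0, x, x**2])          # simple polynomial basis (assumption)

X = rng.uniform(-1, 1, size=20)
t = np.sin(np.pi * X) + rng.normal(scale=0.2, size=20)
Phi = np.stack([phi(x) for x in X])

# (3.54), (3.53): posterior covariance and mean of w
S_N = np.linalg.inv(alpha * np.eye(3) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

# (3.58), (3.59): predictive mean and variance at a test input
x_star = 0.3
mean_N = m_N @ phi(x_star)
var_N = 1 / beta + phi(x_star) @ S_N @ phi(x_star)

# add one more observation and recompute; (3.111) says the variance cannot increase
Phi1 = np.vstack([Phi, phi(0.8)])
S_N1 = np.linalg.inv(alpha * np.eye(3) + beta * Phi1.T @ Phi1)
var_N1 = 1 / beta + phi(x_star) @ S_N1 @ phi(x_star)
print(var_N, var_N1, var_N1 <= var_N)             # expect True
```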
{
"chapter": 3,
"question_number": "3.12",
"difficulty": "medium",
"question_text": "We saw in Section 2.3.6 that the conjugate prior for a Gaussian distribution with unknown mean and unknown precision (inverse variance) is a normal-gamma distribution. This property also holds for the case of the conditional Gaussian distribution $p(t|\\mathbf{x}, \\mathbf{w}, \\beta)$ of the linear regression model. If we consider the likelihood function (3.10: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} \\mathcal{N}(t_n | \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n), \\beta^{-1})$), then the conjugate prior for $\\mathbf{w}$ and $\\beta$ is given by\n\n$$p(\\mathbf{w}, \\beta) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_0, \\beta^{-1}\\mathbf{S}_0)\\operatorname{Gam}(\\beta|a_0, b_0). \\tag{3.112}$$\n\nShow that the corresponding posterior distribution takes the same functional form, so that\n\n$$p(\\mathbf{w}, \\beta | \\mathbf{t}) = \\mathcal{N}(\\mathbf{w} | \\mathbf{m}_N, \\beta^{-1} \\mathbf{S}_N) \\operatorname{Gam}(\\beta | a_N, b_N)$$\n(3.113: $p(\\mathbf{w}, \\beta | \\mathbf{t}) = \\mathcal{N}(\\mathbf{w} | \\mathbf{m}_N, \\beta^{-1} \\mathbf{S}_N) \\operatorname{Gam}(\\beta | a_N, b_N)$)\n\nand find expressions for the posterior parameters $\\mathbf{m}_N$ , $\\mathbf{S}_N$ , $a_N$ , and $b_N$ .",
"answer": "Let's begin by writing down the prior PDF $p(\\mathbf{w}, \\beta)$ :\n\n$$p(\\boldsymbol{w}, \\beta) = \\mathcal{N}(\\boldsymbol{w} | \\boldsymbol{m_0}, \\beta^{-1} \\boldsymbol{S_0}) \\operatorname{Gam}(\\beta | a_0, b_0) \\quad (*)$$\n\n$$\\propto \\left(\\frac{\\beta}{|\\boldsymbol{S_0}|}\\right)^2 exp\\left(-\\frac{1}{2}(\\boldsymbol{w} - \\boldsymbol{m_0})^T \\beta \\boldsymbol{S_0}^{-1}(\\boldsymbol{w} - \\boldsymbol{m_0})\\right) b_0^{a_0} \\beta^{a_0 - 1} exp(-b_0 \\beta)$$\n\nAnd then we write down the likelihood function $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{w}, \\beta)$ :\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{w}, \\beta) = \\prod_{n=1}^{N} \\mathcal{N}(t_n | \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}), \\beta^{-1})$$\n\n$$\\propto \\prod_{n=1}^{N} \\beta^{1/2} exp\\left[-\\frac{\\beta}{2}(t_n - \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}))^2\\right] \\quad (**)$$\n\nAccording to Bayesian Inference, we have $p(\\boldsymbol{w}, \\beta | \\mathbf{t}) \\propto p(\\mathbf{t} | \\mathbf{X}, \\boldsymbol{w}, \\beta) \\times p(\\boldsymbol{w}, \\beta)$ . We first focus on the quadratic term with regard to $\\boldsymbol{w}$ in the exponent.\n\nquadratic term = \n$$-\\frac{\\beta}{2} \\boldsymbol{w}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{w} + \\sum_{n=1}^{N} -\\frac{\\beta}{2} \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T \\boldsymbol{w}$$\n \n = $-\\frac{\\beta}{2} \\boldsymbol{w}^T [\\boldsymbol{S_0}^{-1} + \\sum_{n=1}^{N} \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T] \\boldsymbol{w}$ \n\nWhere the first term is generated by (\\*), and the second by (\\*\\*). By now, we know that:\n\n$$S_N^{-1} = S_0^{-1} + \\sum_{n=1}^{N} \\phi(x_n) \\phi(x_n)^T$$\n\nWe then focus on the linear term with regard to $\\boldsymbol{w}$ in the exponent.\n\nlinear term = \n$$\\beta \\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{w} + \\sum_{n=1}^{N} \\beta t_n \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T \\boldsymbol{w}$$\n \n= $\\beta [\\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} + \\sum_{n=1}^{N} t_n \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T] \\boldsymbol{w}$ \n\nAgain, the first term is generated by (\\*), and the second by (\\*\\*). 
We can also obtain:\n\n$$m_N^T S_N^{-1} = m_0^T S_0^{-1} + \\sum_{n=1}^N t_n \\phi(x_n)^T$$\n\nWhich gives:\n\n$$\\boldsymbol{m_N} = \\boldsymbol{S_N} \\big[ \\boldsymbol{S_0}^{-1} \\boldsymbol{m_0} + \\sum_{n=1}^{N} t_n \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\big]$$\n\nThen we focus on the constant term with regard to $\\boldsymbol{w}$ in the exponent.\n\nconstant term = \n$$(-\\frac{\\beta}{2} \\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{m_0} - b_0 \\beta) - \\frac{\\beta}{2} \\sum_{n=1}^{N} t_n^2$$\n \n= $-\\beta \\left[ \\frac{1}{2} \\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{m_0} + b_0 + \\frac{1}{2} \\sum_{n=1}^{N} t_n^2 \\right]$ \n\nTherefore, we can obtain:\n\n$$\\frac{1}{2} \\boldsymbol{m_N}^T \\boldsymbol{S_N}^{-1} \\boldsymbol{m_N} + b_N = \\frac{1}{2} \\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{m_0} + b_0 + \\frac{1}{2} \\sum_{n=1}^{N} t_n^2$$\n\nWhich gives:\n\n$$b_{N} = \\frac{1}{2} \\boldsymbol{m_0}^T \\boldsymbol{S_0}^{-1} \\boldsymbol{m_0} + b_0 + \\frac{1}{2} \\sum_{n=1}^{N} t_n^2 - \\frac{1}{2} \\boldsymbol{m_N}^T \\boldsymbol{S_N}^{-1} \\boldsymbol{m_N}$$\n\nFinally, we focus on the exponential term whose base is $\\beta$ .\n\nexponent term = \n$$(2 + a_0 - 1) + \\frac{N}{2}$$\n\nWhich gives:\n\n$$2 + a_N - 1 = (2 + a_0 - 1) + \\frac{N}{2}$$\n\nHence,\n\n$$a_N = a_0 + \\frac{N}{2}$$",
"answer_length": 3411
},
{
"chapter": 3,
"question_number": "3.13",
"difficulty": "medium",
"question_text": "Show that the predictive distribution $p(t|\\mathbf{x}, \\mathbf{t})$ for the model discussed in Exercise 3.12 is given by a Student's t-distribution of the form\n\n$$p(t|\\mathbf{x}, \\mathbf{t}) = \\operatorname{St}(t|\\mu, \\lambda, \\nu) \\tag{3.114}$$\n\nand obtain expressions for $\\mu$ , $\\lambda$ and $\\nu$ .",
"answer": "Similar to (3.57: $p(t|\\mathbf{t},\\alpha,\\beta) = \\int p(t|\\mathbf{w},\\beta)p(\\mathbf{w}|\\mathbf{t},\\alpha,\\beta) \\,d\\mathbf{w}$), we write down the expression of the predictive distribution $p(t|\\mathbf{X},\\mathbf{t})$ :\n\n$$p(t|\\mathbf{X},\\mathbf{t}) = \\int \\int p(t|\\mathbf{w},\\beta) \\, p(\\mathbf{w},\\beta|\\mathbf{X},\\mathbf{t}) \\, d\\mathbf{w} \\, d\\beta$$\n (\\*)\n\nWe know that:\n\n$$p(t|\\boldsymbol{w}, \\beta) = \\mathcal{N}(t|y(\\boldsymbol{x}, \\boldsymbol{w}), \\beta^{-1}) = \\mathcal{N}(t|\\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{w}, \\beta^{-1})$$\n\nand that:\n\n$$p(\\boldsymbol{w}, \\beta | \\mathbf{X}, \\mathbf{t}) = \\mathcal{N}(\\boldsymbol{w} | \\boldsymbol{m}_{N}, \\beta^{-1} \\mathbf{S}_{N}) \\operatorname{Gam}(\\beta | a_{N}, b_{N})$$\n\nWe go back to (\\*), and first deal with the integral with regard to $\\boldsymbol{w}$ :\n\n$$p(t|\\mathbf{X},\\mathbf{t}) = \\int \\left[ \\int \\mathcal{N}(t|\\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{w}, \\beta^{-1}) \\mathcal{N}(\\boldsymbol{w}|\\boldsymbol{m}_N, \\beta^{-1} \\boldsymbol{S}_N) d\\boldsymbol{w} \\right] \\operatorname{Gam}(\\beta|a_N, b_N) d\\beta$$\n\n$$= \\int \\mathcal{N}(t|\\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{m}_N, \\beta^{-1} + \\boldsymbol{\\phi}(\\boldsymbol{x})^T \\beta^{-1} \\boldsymbol{S}_N \\boldsymbol{\\phi}(\\boldsymbol{x})) \\operatorname{Gam}(\\beta|a_N, b_N) d\\beta$$\n\n$$= \\int \\mathcal{N}\\left[ t|\\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{m}_N, \\beta^{-1} (1 + \\boldsymbol{\\phi}(\\boldsymbol{x})^T \\boldsymbol{S}_N \\boldsymbol{\\phi}(\\boldsymbol{x})) \\right] \\operatorname{Gam}(\\beta|a_N, b_N) d\\beta$$\n\nWhere we have used (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$) and (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$). Then, we follow (2.158)-(2.160), we can see that $p(t|\\mathbf{X}, \\mathbf{t}) = \\operatorname{St}(t|\\mu, \\lambda, v)$ , where we have defined:\n\n$$\\mu = \\phi(\\mathbf{x})^T \\mathbf{m}_N, \\quad \\lambda = \\frac{a_N}{b_N} \\cdot \\left[ 1 + \\phi(\\mathbf{x})^T \\mathbf{S}_N \\phi(\\mathbf{x}) \\right]^{-1}, \\quad v = 2a_N$$\n\n## Problem 3.14 Solution(Wait for updating)\n\nFirstly, according to (3.16: $\\mathbf{\\Phi} = \\begin{pmatrix} \\phi_0(\\mathbf{x}_1) & \\phi_1(\\mathbf{x}_1) & \\cdots & \\phi_{M-1}(\\mathbf{x}_1) \\\\ \\phi_0(\\mathbf{x}_2) & \\phi_1(\\mathbf{x}_2) & \\cdots & \\phi_{M-1}(\\mathbf{x}_2) \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\phi_0(\\mathbf{x}_N) & \\phi_1(\\mathbf{x}_N) & \\cdots & \\phi_{M-1}(\\mathbf{x}_N) \\end{pmatrix}.$), if we use the new orthonormal basis set specified in the problem to construct $\\Phi$ , we can obtain an important property: $\\Phi^T \\Phi = \\mathbf{I}$ . Hence, if $\\alpha = 0$ , together with (3.54: $\\mathbf{S}_{N}^{-1} = \\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$), we know that $\\mathbf{S_N} = 1/\\beta$ . 
Finally, according to (3.62: $k(\\mathbf{x}, \\mathbf{x}') = \\beta \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{S}_N \\phi(\\mathbf{x}')$), we can obtain:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = \\beta \\mathbf{\\psi}(\\mathbf{x})^T \\mathbf{S}_{\\mathbf{N}} \\mathbf{\\psi}(\\mathbf{x}') = \\mathbf{\\psi}(\\mathbf{x})^T \\mathbf{\\psi}(\\mathbf{x}')$$",
"answer_length": 3355
},
{
"chapter": 3,
"question_number": "3.15",
"difficulty": "easy",
"question_text": "- 3.15 (\\*) www Consider a linear basis function model for regression in which the parameters $\\alpha$ and $\\beta$ are set using the evidence framework. Show that the function $E(\\mathbf{m}_N)$ defined by (3.82: $E(\\mathbf{m}_N) = \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\frac{\\alpha}{2} \\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N.$) satisfies the relation $2E(\\mathbf{m}_N) = N$ .",
"answer": "It is quite obvious if we substitute (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$) and (3.95: $\\frac{1}{\\beta} = \\frac{1}{N - \\gamma} \\sum_{n=1}^{N} \\left\\{ t_n - \\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$) into (3.82: $E(\\mathbf{m}_N) = \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\frac{\\alpha}{2} \\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N.$), which gives,\n\n$$E(\\boldsymbol{m}_{N}) = \\frac{\\beta}{2} ||\\mathbf{t} - \\Phi \\boldsymbol{m}_{N}||^{2} + \\frac{\\alpha}{2} \\boldsymbol{m}_{N}^{T} \\boldsymbol{m}_{N} = \\frac{N - \\gamma}{2} + \\frac{\\gamma}{2} = \\frac{N}{2}$$\n\n## Problem 3.16 Solution\n\nWe know that\n\n$$p(\\mathbf{t}|\\boldsymbol{w},\\beta) = \\prod_{n=1}^{N} \\mathcal{N}(\\boldsymbol{\\phi}(\\boldsymbol{x_n})^T \\boldsymbol{w}, \\beta^{-1}) \\propto \\mathcal{N}(\\boldsymbol{\\Phi}\\boldsymbol{w}, \\beta^{-1}\\mathbf{I})$$\n\nAnd\n\n$$p(\\boldsymbol{w}|\\alpha) = \\mathcal{N}(\\mathbf{0}, \\alpha^{-1}\\mathbf{I})$$\n\nComparing them with (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$) and (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), we can obtain:\n\n$$p(\\mathbf{t}|\\alpha,\\beta) = \\mathcal{N}(\\mathbf{0},\\beta^{-1}\\mathbf{I} + \\alpha^{-1}\\mathbf{\\Phi}\\mathbf{\\Phi}^{T})$$",
"answer_length": 1514
},
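The relation 2E(m_N) = N only holds once α and β satisfy their re-estimation equations, so a check has to iterate (3.92) and (3.95) to a fixed point first. The minimal sketch below does this on synthetic data; the design matrix, the targets, the initial values and the iteration count are arbitrary choices for the check.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 50, 4
Phi = rng.normal(size=(N, M))                     # synthetic design matrix (assumption)
t = Phi @ rng.normal(size=M) + rng.normal(scale=0.3, size=N)

alpha, beta = 1.0, 1.0
for _ in range(200):                              # evidence-framework re-estimation
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)   # (3.54)
    m_N = beta * S_N @ Phi.T @ t                                  # (3.53)
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)                  # eigenvalues used in (3.87)
    gamma = np.sum(lam / (alpha + lam))                           # (3.91)
    alpha = gamma / (m_N @ m_N)                                   # (3.92)
    beta = (N - gamma) / np.sum((t - Phi @ m_N)**2)               # (3.95)

E_mN = 0.5 * beta * np.sum((t - Phi @ m_N)**2) + 0.5 * alpha * (m_N @ m_N)  # (3.82)
print(2 * E_mN, N)                                # equal at the fixed point
```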
{
"chapter": 3,
"question_number": "3.17",
"difficulty": "easy",
"question_text": "Show that the evidence function for the Bayesian linear regression model can be written in the form (3.78: $p(\\mathbf{t}|\\alpha,\\beta) = \\left(\\frac{\\beta}{2\\pi}\\right)^{N/2} \\left(\\frac{\\alpha}{2\\pi}\\right)^{M/2} \\int \\exp\\left\\{-E(\\mathbf{w})\\right\\} d\\mathbf{w}$) in which $E(\\mathbf{w})$ is defined by (3.79: $= \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w}\\|^2 + \\frac{\\alpha}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}.$).",
"answer": "We know that:\n\n$$p(\\mathbf{t}|\\mathbf{w}, \\beta) = \\prod_{n=1}^{N} \\mathcal{N}(\\phi(\\mathbf{x}_{n})^{T}\\mathbf{w}, \\beta^{-1})$$\n\n$$= \\prod_{n=1}^{N} \\frac{1}{(2\\pi\\beta^{-1})^{1/2}} exp\\{-\\frac{1}{2\\beta^{-1}} (t_{n} - \\phi(\\mathbf{x}_{n})^{T}\\mathbf{w})^{2}\\}$$\n\n$$= (\\frac{\\beta}{2\\pi})^{N/2} exp\\{\\sum_{n=1}^{N} -\\frac{\\beta}{2} (t_{n} - \\phi(\\mathbf{x}_{n})^{T}\\mathbf{w})^{2}\\}$$\n\n$$= (\\frac{\\beta}{2\\pi})^{N/2} exp\\{-\\frac{\\beta}{2} ||\\mathbf{t} - \\Phi\\mathbf{w}||^{2}\\}$$\n\nAnd that:\n\n$$p(\\boldsymbol{w}|\\alpha) = \\mathcal{N}(\\boldsymbol{0}, \\alpha^{-1}\\mathbf{I})$$\n$$= \\frac{\\alpha^{M/2}}{(2\\pi)^{M/2}} exp\\left\\{-\\frac{\\alpha}{2}||\\boldsymbol{w}||^2\\right\\}$$\n\nIf we substitute the expressions above into (3.77: $p(\\mathbf{t}|\\alpha,\\beta) = \\int p(\\mathbf{t}|\\mathbf{w},\\beta)p(\\mathbf{w}|\\alpha)\\,\\mathrm{d}\\mathbf{w}.$), we can obtain (3.78: $p(\\mathbf{t}|\\alpha,\\beta) = \\left(\\frac{\\beta}{2\\pi}\\right)^{N/2} \\left(\\frac{\\alpha}{2\\pi}\\right)^{M/2} \\int \\exp\\left\\{-E(\\mathbf{w})\\right\\} d\\mathbf{w}$) just as required.",
"answer_length": 1032
},
{
"chapter": 3,
"question_number": "3.18",
"difficulty": "medium",
"question_text": "By completing the square over w, show that the error function (3.79: $= \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w}\\|^2 + \\frac{\\alpha}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}.$) in Bayesian linear regression can be written in the form (3.80: $E(\\mathbf{w}) = E(\\mathbf{m}_N) + \\frac{1}{2}(\\mathbf{w} - \\mathbf{m}_N)^{\\mathrm{T}} \\mathbf{A}(\\mathbf{w} - \\mathbf{m}_N)$).",
"answer": "We expand (3.79: $= \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w}\\|^2 + \\frac{\\alpha}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}.$) as follows:\n\n$$E(\\boldsymbol{w}) = \\frac{\\beta}{2} ||\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{w}||^2 + \\frac{\\alpha}{2} \\boldsymbol{w}^T \\boldsymbol{w}$$\n\n$$= \\frac{\\beta}{2} (\\mathbf{t}^T \\mathbf{t} - 2\\mathbf{t}^T \\boldsymbol{\\Phi} \\boldsymbol{w} + \\boldsymbol{w}^T \\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\boldsymbol{w}) + \\frac{\\alpha}{2} \\boldsymbol{w}^T \\boldsymbol{w}$$\n\n$$= \\frac{1}{2} [\\boldsymbol{w}^T (\\beta \\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} + \\alpha \\mathbf{I}) \\boldsymbol{w} - 2\\beta \\mathbf{t}^T \\boldsymbol{\\Phi} \\boldsymbol{w} + \\beta \\mathbf{t}^T \\mathbf{t}]$$\n\nObserving the equation above, we see that $E({\\pmb w})$ contains the following term :\n\n$$\\frac{1}{2}(\\boldsymbol{w} - \\boldsymbol{m}_{\\boldsymbol{N}})^T \\mathbf{A}(\\boldsymbol{w} - \\boldsymbol{m}_{\\boldsymbol{N}}) \\tag{*}$$\n\nNow, we need to solve **A** and $m_N$ . We expand (\\*) and obtain:\n\n$$(*) = \\frac{1}{2} (\\boldsymbol{w}^T \\mathbf{A} \\boldsymbol{w} - 2\\boldsymbol{m}_{\\boldsymbol{N}}^T \\mathbf{A} \\boldsymbol{w} + \\boldsymbol{m}_{\\boldsymbol{N}}^T \\mathbf{A} \\boldsymbol{m}_{\\boldsymbol{N}})$$\n\nWe firstly compare the quadratic term, which gives:\n\n$$\\mathbf{A} = \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi} + \\alpha \\mathbf{I}$$\n\nAnd then we compare the linear term, which gives:\n\n$$\\mathbf{m_N}^T \\mathbf{A} = \\beta \\mathbf{t}^T \\mathbf{\\Phi}$$\n\nNoticing that $\\mathbf{A} = \\mathbf{A}^T$ , which implies $\\mathbf{A}^{-1}$ is also symmetric, we first transpose and then multiply $\\mathbf{A}^{-1}$ on both sides, which gives:\n\n$$\\boldsymbol{m}_{\\boldsymbol{N}} = \\beta \\mathbf{A}^{-1} \\mathbf{\\Phi}^T \\mathbf{t}$$\n\nNow we rewrite $E(\\boldsymbol{w})$ :\n\n$$E(\\boldsymbol{w}) = \\frac{1}{2} \\left[ \\boldsymbol{w}^T (\\beta \\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} + \\alpha \\mathbf{I}) \\boldsymbol{w} - 2\\beta \\mathbf{t}^T \\boldsymbol{\\Phi} \\boldsymbol{w} + \\beta \\mathbf{t}^T \\mathbf{t} \\right]$$\n\n$$= \\frac{1}{2} \\left[ (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\beta \\mathbf{t}^T \\mathbf{t} - \\boldsymbol{m}_N^T \\mathbf{A} \\boldsymbol{m}_N \\right]$$\n\n$$= \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\frac{1}{2} (\\beta \\mathbf{t}^T \\mathbf{t} - \\boldsymbol{m}_N^T \\mathbf{A} \\boldsymbol{m}_N)$$\n\n$$= \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\frac{1}{2} (\\beta \\mathbf{t}^T \\mathbf{t} - 2\\boldsymbol{m}_N^T \\mathbf{A} \\boldsymbol{m}_N + \\boldsymbol{m}_N^T \\mathbf{A} \\boldsymbol{m}_N)$$\n\n$$= \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\frac{1}{2} (\\beta \\mathbf{t}^T \\mathbf{t} - 2\\boldsymbol{m}_N^T \\mathbf{A} \\boldsymbol{m}_N + \\boldsymbol{m}_N^T (\\beta \\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} + \\alpha \\mathbf{I}) \\boldsymbol{m}_N)$$\n\n$$= \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\frac{1}{2} [\\beta \\mathbf{t}^T \\mathbf{t} - 2\\beta \\mathbf{t}^T \\boldsymbol{\\Phi} \\boldsymbol{m}_N + \\boldsymbol{m}_N^T (\\beta \\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi}) \\boldsymbol{m}_N] + \\frac{\\alpha}{2} 
\\boldsymbol{m}_N^T \\boldsymbol{m}_N$$\n\n$$= \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m}_N)^T \\mathbf{A} (\\boldsymbol{w} - \\boldsymbol{m}_N) + \\frac{\\beta}{2} ||\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{m}_N||^2 + \\frac{\\alpha}{2} \\boldsymbol{m}_N^T \\boldsymbol{m}_N$$\n\nJust as required.",
"answer_length": 3565
},
{
"chapter": 3,
"question_number": "3.19",
"difficulty": "medium",
"question_text": "Show that the integration over w in the Bayesian linear regression model gives the result (3.85: $= \\exp\\left\\{-E(\\mathbf{m}_N)\\right\\} (2\\pi)^{M/2} |\\mathbf{A}|^{-1/2}.$). Hence show that the log marginal likelihood is given by (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$).",
"answer": "Based on the standard form of a multivariate normal distribution, we know that\n\n$$\\int \\frac{1}{(2\\pi)^{M/2}} \\frac{1}{|\\mathbf{A}|^{1/2}} exp\\left\\{-\\frac{1}{2}(\\mathbf{w} - \\mathbf{m}_{N})^{T} \\mathbf{A}(\\mathbf{w} - \\mathbf{m}_{N})\\right\\} d\\mathbf{w} = 1$$\n\nHence,\n\n$$\\int exp\\left\\{-\\frac{1}{2}(\\boldsymbol{w}-\\boldsymbol{m_N})^T\\mathbf{A}(\\boldsymbol{w}-\\boldsymbol{m_N})\\right\\}d\\boldsymbol{w} = (2\\pi)^{M/2}|\\mathbf{A}|^{1/2}$$\n\nAnd since $E(\\mathbf{m}_N)$ doesn't depend on $\\mathbf{w}$ , (3.85: $= \\exp\\left\\{-E(\\mathbf{m}_N)\\right\\} (2\\pi)^{M/2} |\\mathbf{A}|^{-1/2}.$) is quite obvious. Then we substitute (3.85: $= \\exp\\left\\{-E(\\mathbf{m}_N)\\right\\} (2\\pi)^{M/2} |\\mathbf{A}|^{-1/2}.$) into (3.78: $p(\\mathbf{t}|\\alpha,\\beta) = \\left(\\frac{\\beta}{2\\pi}\\right)^{N/2} \\left(\\frac{\\alpha}{2\\pi}\\right)^{M/2} \\int \\exp\\left\\{-E(\\mathbf{w})\\right\\} d\\mathbf{w}$), which will immediately gives (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$).",
"answer_length": 1067
},
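The closed-form log evidence (3.86) can be cross-checked numerically: for the zero-mean prior $\mathcal{N}(\mathbf{w}|\mathbf{0}, \alpha^{-1}\mathbf{I})$ the marginal distribution of $\mathbf{t}$ is itself Gaussian, $\mathcal{N}(\mathbf{t}|\mathbf{0}, \beta^{-1}\mathbf{I} + \alpha^{-1}\boldsymbol{\Phi}\boldsymbol{\Phi}^T)$, so the two expressions should agree. A minimal sketch with arbitrary toy data:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
N, M = 6, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
alpha, beta = 0.5, 3.0

# Log evidence via (3.86)
A = alpha * np.eye(M) + beta * Phi.T @ Phi
m_N = beta * np.linalg.solve(A, Phi.T @ t)
E_mN = 0.5 * beta * np.sum((t - Phi @ m_N) ** 2) + 0.5 * alpha * m_N @ m_N
log_ev = (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - E_mN
          - 0.5 * np.linalg.slogdet(A)[1] - N / 2 * np.log(2 * np.pi))

# Log evidence via the marginal Gaussian N(t | 0, beta^{-1} I + alpha^{-1} Phi Phi^T)
cov = np.eye(N) / beta + Phi @ Phi.T / alpha
log_ev_direct = multivariate_normal(mean=np.zeros(N), cov=cov).logpdf(t)

print(np.isclose(log_ev, log_ev_direct))  # True
```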
{
"chapter": 3,
"question_number": "3.2",
"difficulty": "medium",
"question_text": "\\star)$ Show that the matrix\n\n$$\\mathbf{\\Phi}(\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{\\Phi})^{-1}\\mathbf{\\Phi}^{\\mathrm{T}} \\tag{3.103}$$\n\ntakes any vector $\\mathbf{v}$ and projects it onto the space spanned by the columns of $\\mathbf{\\Phi}$ . Use this result to show that the least-squares solution (3.15: $\\mathbf{w}_{\\mathrm{ML}} = \\left(\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{\\Phi}\\right)^{-1}\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{t}$) corresponds to an orthogonal projection of the vector $\\mathbf{t}$ onto the manifold $\\mathcal{S}$ as shown in Figure 3.2.",
"answer": "To begin with, if we denote $\\mathbf{v}^* = (\\mathbf{\\Phi}^T \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^T \\mathbf{v}$ . Then we have:\n\n$$\\mathbf{\\Phi}(\\mathbf{\\Phi}^T\\mathbf{\\Phi})^{-1}\\mathbf{\\Phi}^T\\mathbf{v} = \\mathbf{\\Phi}\\mathbf{v}^* \\tag{1}$$\n\nBy definition, $\\Phi v^*$ is in the column space of $\\Phi$ . In other words, we have prove $\\Phi(\\Phi^T\\Phi)^{-1}\\Phi^T$ can project a vector v into the column space of $\\Phi$ . Next, we are required to prove the 'residue' of the projection shown in Fig. 3.2 of the main text (i.e., v - t is orthogonal to the column space of $\\Phi$ ). Since we have:\n\n$$(\\mathbf{y} - \\mathbf{t})^{T} \\mathbf{\\Phi} = (\\mathbf{\\Phi} \\mathbf{w}_{ML} - \\mathbf{t})^{T}$$\n\n$$= (\\mathbf{\\Phi} (\\mathbf{\\Phi}^{T} \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^{T} \\mathbf{t} - \\mathbf{t})^{T} \\mathbf{\\Phi}$$\n\n$$= \\mathbf{t}^{T} (\\mathbf{\\Phi} (\\mathbf{\\Phi}^{T} \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^{T} - \\mathbf{I})^{T} \\mathbf{\\Phi}$$\n\n$$= \\mathbf{t}^{T} (\\mathbf{\\Phi} (\\mathbf{\\Phi}^{T} \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^{T} - \\mathbf{I}) \\mathbf{\\Phi}$$\n\n$$= \\mathbf{t}^{T} (\\mathbf{\\Phi} - \\mathbf{\\Phi})$$\n\n$$= \\mathbf{0}$$\n\n$$(2)$$\n\nwhich means (y - t) is in the left null space of $\\Phi$ , and it is also orthogonal to the column space of $\\Phi$ .",
"answer_length": 1269
},
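A small numerical illustration of the projection interpretation (a sketch only; the design matrix and target vector are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 6, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)

P = Phi @ np.linalg.inv(Phi.T @ Phi) @ Phi.T  # projection matrix (3.103)

# Idempotent: projecting twice changes nothing
print(np.allclose(P @ P, P))                  # True

# The least-squares prediction y = Phi w_ML is the projection of t ...
w_ml = np.linalg.solve(Phi.T @ Phi, Phi.T @ t)
y = Phi @ w_ml
print(np.allclose(y, P @ t))                  # True

# ... and the residual y - t is orthogonal to every column of Phi
print(np.allclose(Phi.T @ (y - t), 0))        # True
```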
{
"chapter": 3,
"question_number": "3.20",
"difficulty": "medium",
"question_text": "Starting from (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) verify all of the steps needed to show that maximization of the log marginal likelihood function (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) with respect to $\\alpha$ leads to the re-estimation equation (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$).",
"answer": "You can just follow the steps from (3.87: $(\\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}) \\mathbf{u}_i = \\lambda_i \\mathbf{u}_i.$) to (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$), which is already very clear.",
"answer_length": 239
},
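The re-estimation equation (3.92) is normally used as a fixed-point iteration. A minimal sketch of that iteration on arbitrary toy data (here β is simply held fixed at an assumed value, which is not part of the original derivation), checking at convergence that α = γ / (m_N^T m_N):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 20, 4
Phi = rng.normal(size=(N, M))
t = Phi @ rng.normal(size=M) + 0.1 * rng.normal(size=N)
beta = 100.0          # assumed known noise precision (illustrative)
alpha = 1.0           # initial guess

lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)   # eigenvalues lambda_i of beta * Phi^T Phi
for _ in range(100):
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    m_N = beta * np.linalg.solve(A, Phi.T @ t)
    gamma = np.sum(lam / (lam + alpha))
    alpha_new = gamma / (m_N @ m_N)            # re-estimation equation (3.92)
    if np.isclose(alpha_new, alpha):
        break
    alpha = alpha_new

print(alpha, alpha_new)   # the two agree at the fixed point
```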
{
"chapter": 3,
"question_number": "3.21",
"difficulty": "medium",
"question_text": "An alternative way to derive the result (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$) for the optimal value of $\\alpha$ in the evidence framework is to make use of the identity\n\n$$\\frac{d}{d\\alpha}\\ln|\\mathbf{A}| = \\operatorname{Tr}\\left(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}\\right). \\tag{3.117}$$\n\nProve this identity by considering the eigenvalue expansion of a real, symmetric matrix $\\mathbf{A}$ , and making use of the standard results for the determinant and trace of $\\mathbf{A}$ expressed in terms of its eigenvalues (Appendix C). Then make use of (3.117: $\\frac{d}{d\\alpha}\\ln|\\mathbf{A}| = \\operatorname{Tr}\\left(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}\\right).$) to derive (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$) starting from (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$).",
"answer": "Let's first prove (3.117: $\\frac{d}{d\\alpha}\\ln|\\mathbf{A}| = \\operatorname{Tr}\\left(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}\\right).$). According to (C.47) and (C.48), we know that if **A** is a $M \\times M$ real symmetric matrix, with eigenvalues $\\lambda_i$ , i = 1, 2, ..., M, $|\\mathbf{A}|$ and $\\text{Tr}(\\mathbf{A})$ can be written as:\n\n$$|\\mathbf{A}| = \\prod_{i=1}^{M} \\lambda_i$$\n, $\\operatorname{Tr}(\\mathbf{A}) = \\sum_{i=1}^{M} \\lambda_i$ \n\nBack to this problem, according to section 3.5.2, we know that **A** has eigenvalues $\\alpha + \\lambda_i$ , i = 1, 2, ..., M. Hence the left side of (3.117: $\\frac{d}{d\\alpha}\\ln|\\mathbf{A}| = \\operatorname{Tr}\\left(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}\\right).$) equals to:\n\nleft side = \n$$\\frac{d}{d\\alpha}ln\\left[\\prod_{i=1}^{M}(\\alpha+\\lambda_i)\\right] = \\sum_{i=1}^{M}\\frac{d}{d\\alpha}ln(\\alpha+\\lambda_i) = \\sum_{i=1}^{M}\\frac{1}{\\alpha+\\lambda_i}$$\n\nAnd according to (3.81: $\\mathbf{A} = \\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}$), we can obtain:\n\n$$\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A} = \\mathbf{A}^{-1}\\mathbf{I} = \\mathbf{A}^{-1}$$\n\nFor the symmetric matrix **A**, its inverse $\\mathbf{A}^{-1}$ has eigenvalues $1/(\\alpha + \\lambda_i)$ , i = 1, 2, ..., M. Therefore,\n\n$$\\operatorname{Tr}(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}) = \\sum_{i=1}^{M} \\frac{1}{\\alpha + \\lambda_i}$$\n\nHence there are the same, and (3.92: $\\alpha = \\frac{\\gamma}{\\mathbf{m}_N^{\\mathrm{T}} \\mathbf{m}_N}.$) is quite obvious.",
"answer_length": 1515
},
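The identity (3.117) is easy to confirm numerically with a central finite difference, since dA/dα = I for A = αI + βΦ^TΦ. A sketch with an arbitrary symmetric positive-definite matrix standing in for βΦ^TΦ:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 4
B = rng.normal(size=(M, M))
S = B.T @ B                      # plays the role of beta * Phi^T Phi (symmetric PSD)
alpha, eps = 0.8, 1e-6

logdet = lambda a: np.linalg.slogdet(a * np.eye(M) + S)[1]

finite_diff = (logdet(alpha + eps) - logdet(alpha - eps)) / (2 * eps)
A = alpha * np.eye(M) + S
trace_form = np.trace(np.linalg.inv(A))      # Tr(A^{-1} dA/dalpha), with dA/dalpha = I
eig_form = np.sum(1.0 / (np.linalg.eigvalsh(S) + alpha))

print(finite_diff, trace_form, eig_form)     # all approximately equal
```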
{
"chapter": 3,
"question_number": "3.22",
"difficulty": "medium",
"question_text": "Starting from (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) verify all of the steps needed to show that maximization of the log marginal likelihood function (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) with respect to $\\beta$ leads to the re-estimation equation (3.95: $\\frac{1}{\\beta} = \\frac{1}{N - \\gamma} \\sum_{n=1}^{N} \\left\\{ t_n - \\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$).",
"answer": "Let's derive (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) with regard to $\\beta$ . The first term dependent on $\\beta$ in (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) is:\n\n$$\\frac{d}{d\\beta}(\\frac{N}{2}ln\\beta) = \\frac{N}{2\\beta}$$\n\nThe second term is:\n\n$$\\frac{d}{d\\beta}E(\\boldsymbol{m}_{N}) = \\frac{1}{2}||\\mathbf{t} - \\boldsymbol{\\Phi}\\boldsymbol{m}_{N}||^{2} + \\frac{\\beta}{2}\\frac{d}{d\\beta}||\\mathbf{t} - \\boldsymbol{\\Phi}\\boldsymbol{m}_{N}||^{2} + \\frac{d}{d\\beta}\\frac{\\alpha}{2}\\boldsymbol{m}_{N}^{T}\\boldsymbol{m}_{N}$$\n\nThe last two terms in the equation above can be further written as:\n\n$$\\frac{\\beta}{2} \\frac{d}{d\\beta} ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_{N}||^{2} + \\frac{d}{d\\beta} \\frac{\\alpha}{2} \\mathbf{m}_{N}^{T} \\mathbf{m}_{N} = \\left\\{ \\frac{\\beta}{2} \\frac{d}{d\\mathbf{m}_{N}} ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_{N}||^{2} + \\frac{d}{d\\mathbf{m}_{N}} \\frac{\\alpha}{2} \\mathbf{m}_{N}^{T} \\mathbf{m}_{N} \\right\\} \\cdot \\frac{d\\mathbf{m}_{N}}{d\\beta} \\\\\n= \\left\\{ \\frac{\\beta}{2} [-2\\mathbf{\\Phi}^{T} (\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_{N})] + \\frac{\\alpha}{2} 2\\mathbf{m}_{N} \\right\\} \\cdot \\frac{d\\mathbf{m}_{N}}{d\\beta} \\\\\n= \\left\\{ -\\beta \\mathbf{\\Phi}^{T} (\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_{N}) + \\alpha \\mathbf{m}_{N} \\right\\} \\cdot \\frac{d\\mathbf{m}_{N}}{d\\beta} \\\\\n= \\left\\{ -\\beta \\mathbf{\\Phi}^{T} \\mathbf{t} + (\\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{T} \\mathbf{\\Phi}) \\mathbf{m}_{N} \\right\\} \\cdot \\frac{d\\mathbf{m}_{N}}{d\\beta} \\\\\n= \\left\\{ -\\beta \\mathbf{\\Phi}^{T} \\mathbf{t} + \\mathbf{A} \\mathbf{m}_{N} \\right\\} \\cdot \\frac{d\\mathbf{m}_{N}}{d\\beta} \\\\\n= 0$$\n\nWhere we have taken advantage of (3.83: $\\mathbf{A} = \\nabla \\nabla E(\\mathbf{w})$) and (3.84: $\\mathbf{m}_N = \\beta \\mathbf{A}^{-1} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}.$). Hence\n\n$$\\frac{d}{d\\beta}E(\\boldsymbol{m}_{\\boldsymbol{N}}) = \\frac{1}{2}||\\mathbf{t} - \\boldsymbol{\\Phi}\\boldsymbol{m}_{\\boldsymbol{N}}||^2 = \\frac{1}{2}\\sum_{n=1}^{N}(t_n - \\boldsymbol{m}_{\\boldsymbol{N}}^T\\boldsymbol{\\phi}(\\boldsymbol{x}_n))^2$$\n\nThe last term dependent on $\\beta$ in (3.86: $\\ln p(\\mathbf{t}|\\alpha,\\beta) = \\frac{M}{2} \\ln \\alpha + \\frac{N}{2} \\ln \\beta - E(\\mathbf{m}_N) - \\frac{1}{2} \\ln |\\mathbf{A}| - \\frac{N}{2} \\ln(2\\pi)$) is:\n\n$$\\frac{d}{d\\beta}(\\frac{1}{2}ln|\\mathbf{A}|) = \\frac{\\gamma}{2\\beta}$$\n\nTherefore, if we combine all those expressions together, we will obtain (3.94: $0 = \\frac{N}{2\\beta} - \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ t_n - \\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2 - \\frac{\\gamma}{2\\beta}$). And then if we rearrange it, we will obtain (3.95: $\\frac{1}{\\beta} = \\frac{1}{N - \\gamma} \\sum_{n=1}^{N} \\left\\{ t_n - \\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$).",
"answer_length": 2993
},
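As with α, equation (3.95) can be treated as a fixed-point iteration, and at the converged β the stationarity condition (3.94) should hold. A rough sketch on arbitrary toy data (α is simply held fixed here, which is an assumption made only for this illustration); the finite difference of the log evidence with respect to β should come out close to zero:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 30, 4
Phi = rng.normal(size=(N, M))
t = Phi @ rng.normal(size=M) + 0.2 * rng.normal(size=N)
alpha = 0.5                                   # held fixed in this sketch

def log_evidence(beta):
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    m_N = beta * np.linalg.solve(A, Phi.T @ t)
    E = 0.5 * beta * np.sum((t - Phi @ m_N) ** 2) + 0.5 * alpha * m_N @ m_N
    return (M / 2 * np.log(alpha) + N / 2 * np.log(beta) - E
            - 0.5 * np.linalg.slogdet(A)[1] - N / 2 * np.log(2 * np.pi))

beta = 1.0
for _ in range(200):                          # fixed-point iteration of (3.95)
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    m_N = beta * np.linalg.solve(A, Phi.T @ t)
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(lam / (lam + alpha))
    beta = (N - gamma) / np.sum((t - Phi @ m_N) ** 2)

eps = 1e-4
print((log_evidence(beta + eps) - log_evidence(beta - eps)) / (2 * eps))  # close to 0
```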
{
"chapter": 3,
"question_number": "3.23",
"difficulty": "medium",
"question_text": "Show that the marginal probability of the data, in other words the model evidence, for the model described in Exercise 3.12 is given by\n\n$$p(\\mathbf{t}) = \\frac{1}{(2\\pi)^{N/2}} \\frac{b_0^{a_0}}{b_N^{a_N}} \\frac{\\Gamma(a_N)}{\\Gamma(a_0)} \\frac{|\\mathbf{S}_N|^{1/2}}{|\\mathbf{S}_0|^{1/2}}$$\n(3.118: $p(\\mathbf{t}) = \\frac{1}{(2\\pi)^{N/2}} \\frac{b_0^{a_0}}{b_N^{a_N}} \\frac{\\Gamma(a_N)}{\\Gamma(a_0)} \\frac{|\\mathbf{S}_N|^{1/2}}{|\\mathbf{S}_0|^{1/2}}$)\n\nby first marginalizing with respect to w and then with respect to $\\beta$ .",
"answer": "First, according to (3.10: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} \\mathcal{N}(t_n | \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n), \\beta^{-1})$), we know that $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{w}, \\beta)$ can be further written as $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{w}, \\beta) = \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w}, \\beta^{-1}\\mathbf{I})$ , and given that $p(\\boldsymbol{w}|\\beta) = \\mathcal{N}(\\boldsymbol{m_0}, \\beta^{-1}\\mathbf{S_0})$ and $p(\\beta) = \\operatorname{Gam}(\\beta|a_0, b_0)$ . Therefore, we just follow the hint in the problem.\n\n$$p(\\mathbf{t}) = \\int \\int p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{w}, \\beta) p(\\boldsymbol{w}|\\beta) d\\boldsymbol{w} p(\\beta) d\\beta$$\n\n$$= \\int \\int (\\frac{\\beta}{2\\pi})^{N/2} exp\\{-\\frac{\\beta}{2}(\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{w})^T (\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{w})\\} \\cdot$$\n\n$$(\\frac{\\beta}{2\\pi})^{M/2} |\\mathbf{S}_{\\mathbf{0}}|^{-1/2} exp\\{-\\frac{\\beta}{2}(\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{0}})^T \\mathbf{S}_{\\mathbf{0}}^{-1} (\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{0}})\\} d\\boldsymbol{w}$$\n\n$$\\Gamma(a_0)^{-1} b_0^{a_0} \\beta^{a_0 - 1} exp(-b_0 \\beta) d\\beta$$\n\n$$= \\frac{b_0^{a_0}}{(2\\pi)^{(M+N)/2} |\\mathbf{S}_{\\mathbf{0}}|^{1/2}} \\int \\int exp\\{-\\frac{\\beta}{2}(\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{w})^T (\\mathbf{t} - \\boldsymbol{\\Phi} \\boldsymbol{w})\\}$$\n\n$$exp\\{-\\frac{\\beta}{2}(\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{0}})^T \\mathbf{S}_{\\mathbf{0}}^{-1} (\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{0}})\\} d\\boldsymbol{w}$$\n\n$$\\beta^{a_0 - 1 + N/2 + M/2} exp(-b_0 \\beta) d\\beta$$\n\n$$= \\frac{b_0^{a_0}}{(2\\pi)^{(M+N)/2} |\\mathbf{S}_{\\mathbf{0}}|^{1/2}} \\int \\int exp\\{-\\frac{\\beta}{2}(\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{N}})^T \\mathbf{S}_{\\mathbf{N}}^{-1} (\\boldsymbol{w} - \\boldsymbol{m}_{\\mathbf{N}})\\} d\\boldsymbol{w}$$\n\n$$exp\\{-\\frac{\\beta}{2}(\\mathbf{t}^T \\mathbf{t} + \\boldsymbol{m}_{\\mathbf{0}}^T \\mathbf{S}_{\\mathbf{0}}^{-1} \\boldsymbol{m}_{\\mathbf{0}} - \\boldsymbol{m}_{\\mathbf{N}}^T \\mathbf{S}_{\\mathbf{N}}^{-1} \\boldsymbol{m}_{\\mathbf{N}})\\}$$\n\n$$\\beta^{a_N - 1 + M/2} exp(-b_0 \\beta) d\\beta$$\n\nWhere we have defined\n\n$$\\begin{aligned} \\boldsymbol{m}_{N} &= \\mathbf{S}_{N} \\left( \\mathbf{S}_{0}^{-1} \\boldsymbol{m}_{0} + \\boldsymbol{\\Phi}^{T} \\mathbf{t} \\right) \\\\ \\boldsymbol{S}_{N}^{-1} &= \\mathbf{S}_{0}^{-1} + \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} \\\\ \\boldsymbol{a}_{N} &= \\boldsymbol{a}_{0} + \\frac{N}{2} \\\\ b_{N} &= b_{0} + \\frac{1}{2} (\\boldsymbol{m}_{0}^{T} \\mathbf{S}_{0}^{-1} \\boldsymbol{m}_{0} - \\boldsymbol{m}_{N}^{T} \\mathbf{S}_{N}^{-1} \\boldsymbol{m}_{N} + \\sum_{n=1}^{N} t_{n}^{2}) \\end{aligned}$$\n\nWhich are exactly the same as those in Prob.3.12, and then we evaluate the integral, taking advantage of the normalized property of multivariate Gaussian Distribution and Gamma Distribution.\n\n$$p(\\mathbf{t}) = \\frac{b_0^{a_0}}{(2\\pi)^{(M+N)/2} |\\mathbf{S_0}|^{1/2}} (\\frac{2\\pi}{\\beta})^{M/2} |\\mathbf{S_N}|^{1/2} \\int \\beta^{a_N - 1 + M/2} exp(-b_N \\beta) d\\beta$$\n\n$$= \\frac{b_0^{a_0}}{(2\\pi)^{(M+N)/2} |\\mathbf{S_0}|^{1/2}} (2\\pi)^{M/2} |\\mathbf{S_N}|^{1/2} \\int \\beta^{a_N - 1} exp(-b_N \\beta) d\\beta$$\n\n$$= \\frac{1}{(2\\pi)^{N/2}} \\frac{|\\mathbf{S_N}|^{1/2}}{|\\mathbf{S_0}|^{1/2}} 
\\frac{b_0^{a_0}}{b_N^{a_N}} \\frac{\\Gamma(a_N)}{\\Gamma(b_N)}$$\n\nJust as required.",
"answer_length": 3362
},
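The closed-form evidence (3.118) can also be spot-checked by Monte Carlo, using p(t) = E_{w,β∼prior}[N(t|Φw, β^{-1}I)]. This is only a rough sketch: the data, prior hyperparameters and sample size are arbitrary, and the estimate is noisy, so only approximate agreement is expected.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

rng = np.random.default_rng(6)
N, M = 4, 2
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
m0, S0 = np.zeros(M), np.eye(M)          # prior mean and (unit) prior covariance factor
a0, b0 = 3.0, 2.0

# Posterior quantities as in Exercise 3.12 / 3.23
SN = np.linalg.inv(np.linalg.inv(S0) + Phi.T @ Phi)
mN = SN @ (np.linalg.solve(S0, m0) + Phi.T @ t)
aN = a0 + N / 2
bN = b0 + 0.5 * (m0 @ np.linalg.solve(S0, m0) - mN @ np.linalg.solve(SN, mN) + t @ t)

# Closed form (3.118), in log space
log_pt = (-N / 2 * np.log(2 * np.pi) + a0 * np.log(b0) - aN * np.log(bN)
          + gammaln(aN) - gammaln(a0)
          + 0.5 * (np.linalg.slogdet(SN)[1] - np.linalg.slogdet(S0)[1]))

# Monte Carlo: sample (w, beta) from the prior, average the likelihood (log-space)
S = 200_000
betas = rng.gamma(shape=a0, scale=1 / b0, size=S)          # Gam(beta | a0, b0)
ws = m0 + rng.normal(size=(S, M)) / np.sqrt(betas)[:, None]  # N(w | m0, S0/beta), S0 = I
resid = t[None, :] - ws @ Phi.T
loglik = (N / 2) * np.log(betas / (2 * np.pi)) - 0.5 * betas * np.sum(resid ** 2, axis=1)
log_pt_mc = logsumexp(loglik) - np.log(S)

print(log_pt, log_pt_mc)   # roughly equal (Monte Carlo noise remains)
```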
{
"chapter": 3,
"question_number": "3.24",
"difficulty": "medium",
"question_text": "\\star)$ Repeat the previous exercise but now use Bayes' theorem in the form\n\n$$p(\\mathbf{t}) = \\frac{p(\\mathbf{t}|\\mathbf{w}, \\beta)p(\\mathbf{w}, \\beta)}{p(\\mathbf{w}, \\beta|\\mathbf{t})}$$\n(3.119: $p(\\mathbf{t}) = \\frac{p(\\mathbf{t}|\\mathbf{w}, \\beta)p(\\mathbf{w}, \\beta)}{p(\\mathbf{w}, \\beta|\\mathbf{t})}$)\n\nand then substitute for the prior and posterior distributions and the likelihood function in order to derive the result (3.118: $p(\\mathbf{t}) = \\frac{1}{(2\\pi)^{N/2}} \\frac{b_0^{a_0}}{b_N^{a_N}} \\frac{\\Gamma(a_N)}{\\Gamma(a_0)} \\frac{|\\mathbf{S}_N|^{1/2}}{|\\mathbf{S}_0|^{1/2}}$).\n\n# Linear Models for Classification\n\nIn the previous chapter, we explored a class of regression models having particularly simple analytical and computational properties. We now discuss an analogous class of models for solving classification problems. The goal in classification is to take an input vector $\\mathbf{x}$ and to assign it to one of K discrete classes $\\mathcal{C}_k$ where $k=1,\\ldots,K$ . In the most common scenario, the classes are taken to be disjoint, so that each input is assigned to one and only one class. The input space is thereby divided into *decision regions* whose boundaries are called *decision boundaries* or *decision surfaces*. In this chapter, we consider linear models for classification, by which we mean that the decision surfaces are linear functions of the input vector $\\mathbf{x}$ and hence are defined by (D-1)-dimensional hyperplanes within the D-dimensional input space. Data sets whose classes can be separated exactly by linear decision surfaces are said to be *linearly separable*.\n\nFor regression problems, the target variable ${\\bf t}$ was simply the vector of real numbers whose values we wish to predict. In the case of classification, there are various\n\nways of using target values to represent class labels. For probabilistic models, the most convenient, in the case of two-class problems, is the binary representation in which there is a single target variable $t \\in \\{0,1\\}$ such that t=1 represents class $\\mathcal{C}_1$ and t=0 represents class $\\mathcal{C}_2$ . We can interpret the value of t as the probability that the class is $\\mathcal{C}_1$ , with the values of probability taking only the extreme values of 0 and 1. For K>2 classes, it is convenient to use a 1-of-K coding scheme in which t is a vector of length K such that if the class is $\\mathcal{C}_j$ , then all elements $t_k$ of t are zero except element $t_j$ , which takes the value 1. For instance, if we have K=5 classes, then a pattern from class 2 would be given the target vector\n\n$$\\mathbf{t} = (0, 1, 0, 0, 0)^{\\mathrm{T}}. (4.1)$$\n\nAgain, we can interpret the value of $t_k$ as the probability that the class is $C_k$ . For nonprobabilistic models, alternative choices of target variable representation will sometimes prove convenient.\n\nIn Chapter 1, we identified three distinct approaches to the classification problem. The simplest involves constructing a discriminant function that directly assigns each vector $\\mathbf{x}$ to a specific class. A more powerful approach, however, models the conditional probability distribution $p(\\mathcal{C}_k|\\mathbf{x})$ in an inference stage, and then subsequently uses this distribution to make optimal decisions. By separating inference and decision, we gain numerous benefits, as discussed in Section 1.5.4. 
There are two different approaches to determining the conditional probabilities $p(\\mathcal{C}_k|\\mathbf{x})$ . One technique is to model them directly, for example by representing them as parametric models and then optimizing the parameters using a training set. Alternatively, we can adopt a generative approach in which we model the class-conditional densities given by $p(\\mathbf{x}|\\mathcal{C}_k)$ , together with the prior probabilities $p(\\mathcal{C}_k)$ for the classes, and then we compute the required posterior probabilities using Bayes' theorem\n\n$$p(C_k|\\mathbf{x}) = \\frac{p(\\mathbf{x}|C_k)p(C_k)}{p(\\mathbf{x})}.$$\n(4.2: $p(C_k|\\mathbf{x}) = \\frac{p(\\mathbf{x}|C_k)p(C_k)}{p(\\mathbf{x})}.$)\n\nWe shall discuss examples of all three approaches in this chapter.\n\nIn the linear regression models considered in Chapter 3, the model prediction $y(\\mathbf{x}, \\mathbf{w})$ was given by a linear function of the parameters $\\mathbf{w}$ . In the simplest case, the model is also linear in the input variables and therefore takes the form $y(\\mathbf{x}) = \\mathbf{w}^T \\mathbf{x} + w_0$ , so that y is a real number. For classification problems, however, we wish to predict discrete class labels, or more generally posterior probabilities that lie in the range (0,1). To achieve this, we consider a generalization of this model in which we transform the linear function of $\\mathbf{w}$ using a nonlinear function $f(\\cdot)$ so that\n\n$$y(\\mathbf{x}) = f\\left(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x} + w_0\\right). \\tag{4.3}$$\n\nIn the machine learning literature $f(\\cdot)$ is known as an *activation function*, whereas its inverse is called a *link function* in the statistics literature. The decision surfaces correspond to $y(\\mathbf{x}) = \\text{constant}$ , so that $\\mathbf{w}^T\\mathbf{x} + w_0 = \\text{constant}$ and hence the decision surfaces are linear functions of $\\mathbf{x}$ , even if the function $f(\\cdot)$ is nonlinear. For this reason, the class of models described by (4.3: $y(\\mathbf{x}) = f\\left(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x} + w_0\\right).$) are called *generalized linear models*\n\n(McCullagh and Nelder, 1989). Note, however, that in contrast to the models used for regression, they are no longer linear in the parameters due to the presence of the nonlinear function $f(\\cdot)$ . This will lead to more complex analytical and computational properties than for linear regression models. Nevertheless, these models are still relatively simple compared to the more general nonlinear models that will be studied in subsequent chapters.\n\nThe algorithms discussed in this chapter will be equally applicable if we first make a fixed nonlinear transformation of the input variables using a vector of basis functions $\\phi(\\mathbf{x})$ as we did for regression models in Chapter 3. We begin by considering classification directly in the original input space $\\mathbf{x}$ , while in Section 4.3 we shall find it convenient to switch to a notation involving basis functions for consistency with later chapters.\n\n### 4.1. Discriminant Functions\n\nA discriminant is a function that takes an input vector $\\mathbf{x}$ and assigns it to one of K classes, denoted $\\mathcal{C}_k$ . In this chapter, we shall restrict attention to *linear discriminants*, namely those for which the decision surfaces are hyperplanes. 
To simplify the discussion, we consider first the case of two classes and then investigate the extension to K > 2 classes.\n\n### 4.1.1 Two classes\n\nThe simplest representation of a linear discriminant function is obtained by taking a linear function of the input vector so that\n\n$$y(\\mathbf{x}) = \\mathbf{w}^{\\mathrm{T}} \\mathbf{x} + w_0 \\tag{4.4}$$\n\nwhere $\\mathbf{w}$ is called a *weight vector*, and $w_0$ is a *bias* (not to be confused with bias in the statistical sense). The negative of the bias is sometimes called a *threshold*. An input vector $\\mathbf{x}$ is assigned to class $\\mathcal{C}_1$ if $y(\\mathbf{x}) \\geqslant 0$ and to class $\\mathcal{C}_2$ otherwise. The corresponding decision boundary is therefore defined by the relation $y(\\mathbf{x}) = 0$ , which corresponds to a (D-1)-dimensional hyperplane within the D-dimensional input space. Consider two points $\\mathbf{x}_A$ and $\\mathbf{x}_B$ both of which lie on the decision surface. Because $y(\\mathbf{x}_A) = y(\\mathbf{x}_B) = 0$ , we have $\\mathbf{w}^T(\\mathbf{x}_A - \\mathbf{x}_B) = 0$ and hence the vector $\\mathbf{w}$ is orthogonal to every vector lying within the decision surface, and so $\\mathbf{w}$ determines the orientation of the decision surface. Similarly, if $\\mathbf{x}$ is a point on the decision surface, then $y(\\mathbf{x}) = 0$ , and so the normal distance from the origin to the decision surface is given by\n\n$$\\frac{\\mathbf{w}^{\\mathrm{T}}\\mathbf{x}}{\\|\\mathbf{w}\\|} = -\\frac{w_0}{\\|\\mathbf{w}\\|}.\\tag{4.5}$$",
"answer": "Let's just follow the hint and we begin by writing down expression for the likelihood, prior and posterior PDF. We know that $p(\\mathbf{t}|\\boldsymbol{w},\\beta) = \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w},\\beta^{-1}\\mathbf{I})$ . What's more, the form of the prior and posterior are quite similar:\n\n$$p(\\boldsymbol{w}, \\beta) = \\mathcal{N}(\\boldsymbol{w}|\\mathbf{m_0}, \\beta^{-1}\\mathbf{S_0}) \\operatorname{Gam}(\\beta|a_0, b_0)$$\n\nAnd\n\n$$p(\\boldsymbol{w}, \\beta | \\mathbf{t}) = \\mathcal{N}(\\boldsymbol{w} | \\mathbf{m_N}, \\beta^{-1} \\mathbf{S_N}) \\operatorname{Gam}(\\beta | a_N, b_N)$$\n\nWhere the relationships among those parameters are shown in Prob.3.12, Prob.3.23. Now according to (3.119: $p(\\mathbf{t}) = \\frac{p(\\mathbf{t}|\\mathbf{w}, \\beta)p(\\mathbf{w}, \\beta)}{p(\\mathbf{w}, \\beta|\\mathbf{t})}$), we can write:\n\n$$p(\\mathbf{t}) = \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w}, \\boldsymbol{\\beta}^{-1}\\mathbf{I}) \\frac{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{0}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{0}) \\operatorname{Gam}(\\boldsymbol{\\beta}|a_{0}, b_{0})}{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{N}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{N}) \\operatorname{Gam}(\\boldsymbol{\\beta}|a_{N}, b_{N})}$$\n\n$$= \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w}, \\boldsymbol{\\beta}^{-1}\\mathbf{I}) \\frac{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{0}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{0})}{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{N}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{N})} \\frac{b_{0}^{a_{0}} \\boldsymbol{\\beta}^{a_{0}-1} exp(-b_{0}\\boldsymbol{\\beta})/\\Gamma(a_{0})}{b_{N}^{a_{N}} \\boldsymbol{\\beta}^{a_{N}-1} exp(-b_{N}\\boldsymbol{\\beta})/\\Gamma(a_{N})}$$\n\n$$= \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w}, \\boldsymbol{\\beta}^{-1}\\mathbf{I}) \\frac{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{0}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{0})}{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{N}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{N})} \\frac{b_{0}^{a_{0}}}{b_{N}^{a_{N}}} \\frac{\\Gamma(a_{N})}{\\Gamma(a_{0})} \\boldsymbol{\\beta}^{a_{0}-a_{N}} exp\\left\\{-(b_{0}-b_{N})\\boldsymbol{\\beta}\\right\\}$$\n\n$$= \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\boldsymbol{w}, \\boldsymbol{\\beta}^{-1}\\mathbf{I}) \\frac{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{0}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{0})}{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{N}, \\boldsymbol{\\beta}^{-1}\\mathbf{S}_{N})} exp\\left\\{-(b_{0}-b_{N})\\boldsymbol{\\beta}\\right\\} \\frac{b_{0}^{a_{0}}}{b_{N}^{a_{N}}} \\frac{\\Gamma(a_{N})}{\\Gamma(a_{0})} \\boldsymbol{\\beta}^{-N/2}$$\n\nWhere we have used $a_N = a_0 + \\frac{N}{2}$ . 
Now we deal with the terms expressed in the form of Gaussian Distribution:\n\nGaussian terms = \n$$\\mathcal{N}(\\mathbf{t}|\\mathbf{\\Phi}\\boldsymbol{w}, \\beta^{-1}\\mathbf{I}) \\frac{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{0}, \\beta^{-1}\\mathbf{S}_{0})}{\\mathcal{N}(\\boldsymbol{w}|\\mathbf{m}_{N}, \\beta^{-1}\\mathbf{S}_{N})}$$\n \n= $(\\frac{\\beta}{2\\pi})^{N/2} exp \\left\\{ -\\frac{\\beta}{2} (\\mathbf{t} - \\mathbf{\\Phi}\\boldsymbol{w})^{T} (\\mathbf{t} - \\mathbf{\\Phi}\\boldsymbol{w}) \\right\\} \\cdot \\frac{|\\beta^{-1}\\mathbf{S}_{N}|^{1/2}}{|\\beta^{-1}\\mathbf{S}_{0}|^{1/2}} \\frac{exp \\left\\{ -\\frac{\\beta}{2} (\\boldsymbol{w} - \\mathbf{m}_{0})^{T}\\mathbf{S}_{0}^{-1} (\\boldsymbol{w} - \\mathbf{m}_{0}) \\right\\}}{exp \\left\\{ -\\frac{\\beta}{2} (\\boldsymbol{w} - \\mathbf{m}_{N})^{T}\\mathbf{S}_{N}^{-1} (\\boldsymbol{w} - \\mathbf{m}_{N}) \\right\\}}$ \n= $(\\frac{\\beta}{2\\pi})^{N/2} \\frac{|\\mathbf{S}_{N}|^{1/2}}{|\\mathbf{S}_{0}|^{1/2}} exp \\left\\{ -\\frac{\\beta}{2} (\\mathbf{t} - \\mathbf{\\Phi}\\boldsymbol{w})^{T} (\\mathbf{t} - \\mathbf{\\Phi}\\boldsymbol{w}) \\right\\} \\cdot \\frac{exp \\left\\{ -\\frac{\\beta}{2} (\\boldsymbol{w} - \\mathbf{m}_{0})^{T}\\mathbf{S}_{0}^{-1} (\\boldsymbol{w} - \\mathbf{m}_{0}) \\right\\}}{exp \\left\\{ -\\frac{\\beta}{2} (\\boldsymbol{w} - \\mathbf{m}_{N})^{T}\\mathbf{S}_{N}^{-1} (\\boldsymbol{w} - \\mathbf{m}_{N}) \\right\\}}$ \n\nLooking back at the previous problem, we notice that in the last step of the derivation of $p(\\mathbf{t})$ we completed the square with respect to $\\boldsymbol{w}$ . If we carefully compare the left and right sides of that step, we can obtain:\n\n$$exp\\left\\{-\\frac{\\beta}{2}(\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w})^{T}(\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w})\\right\\} exp\\left\\{-\\frac{\\beta}{2}(\\mathbf{w} - \\mathbf{m_0})^{T} \\mathbf{S_0}^{-1}(\\mathbf{w} - \\mathbf{m_0})\\right\\}$$\n\n$$= exp\\left\\{-\\frac{\\beta}{2}(\\mathbf{w} - \\mathbf{m_N})^{T} \\mathbf{S_N}^{-1}(\\mathbf{w} - \\mathbf{m_N})\\right\\} exp\\left\\{-(b_N - b_0)\\beta\\right\\}$$\n\nHence, we go back to deal with the Gaussian terms:\n\nGaussian terms = \n$$(\\frac{\\beta}{2\\pi})^{N/2} \\frac{|\\mathbf{S_N}|^{1/2}}{|\\mathbf{S_0}|^{1/2}} exp\\{-(b_N - b_0)\\beta\\}$$\n\nIf we substitute the expressions above into $p(\\mathbf{t})$ , we will obtain (3.118: $p(\\mathbf{t}) = \\frac{1}{(2\\pi)^{N/2}} \\frac{b_0^{a_0}}{b_N^{a_N}} \\frac{\\Gamma(a_N)}{\\Gamma(a_0)} \\frac{|\\mathbf{S}_N|^{1/2}}{|\\mathbf{S}_0|^{1/2}}$) immediately.",
"answer_length": 4987
},
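Equation (3.119) also gives a convenient numerical cross-check: the right-hand side can be evaluated at any single point (w, β) and must reproduce the same p(t) as the closed form (3.118). A sketch using scipy.stats densities; the toy data, hyperparameters and the evaluation point (w, β) are arbitrary choices for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal, gamma
from scipy.special import gammaln

rng = np.random.default_rng(7)
N, M = 4, 2
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
m0, S0 = np.zeros(M), np.eye(M)
a0, b0 = 3.0, 2.0

SN = np.linalg.inv(np.linalg.inv(S0) + Phi.T @ Phi)
mN = SN @ (np.linalg.solve(S0, m0) + Phi.T @ t)
aN = a0 + N / 2
bN = b0 + 0.5 * (m0 @ np.linalg.solve(S0, m0) - mN @ np.linalg.solve(SN, mN) + t @ t)

# Left-hand side: closed form (3.118), in log space
lhs = (-N / 2 * np.log(2 * np.pi) + a0 * np.log(b0) - aN * np.log(bN)
       + gammaln(aN) - gammaln(a0)
       + 0.5 * (np.linalg.slogdet(SN)[1] - np.linalg.slogdet(S0)[1]))

# Right-hand side of (3.119), evaluated at an arbitrary point (w, beta)
w, beta = rng.normal(size=M), 1.7
log_lik = multivariate_normal(Phi @ w, np.eye(N) / beta).logpdf(t)
log_prior = (multivariate_normal(m0, S0 / beta).logpdf(w)
             + gamma(a0, scale=1 / b0).logpdf(beta))
log_post = (multivariate_normal(mN, SN / beta).logpdf(w)
            + gamma(aN, scale=1 / bN).logpdf(beta))
rhs = log_lik + log_prior - log_post

print(np.isclose(lhs, rhs))  # True
```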
{
"chapter": 3,
"question_number": "3.3",
"difficulty": "easy",
"question_text": "Consider a data set in which each data point $t_n$ is associated with a weighting factor $r_n > 0$ , so that the sum-of-squares error function becomes\n\n$$E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} r_n \\left\\{ t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$$\n (3.104: $E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} r_n \\left\\{ t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$)\n\nFind an expression for the solution $\\mathbf{w}^*$ that minimizes this error function. Give two alternative interpretations of the weighted sum-of-squares error function in terms of (i) data dependent noise variance and (ii) replicated data points.",
"answer": "Let's calculate the derivative of (3.104: $E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} r_n \\left\\{ t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) \\right\\}^2.$) with respect to $\\boldsymbol{w}$ .\n\n$$\\nabla E_D(\\boldsymbol{w}) = \\sum_{n=1}^{N} r_n \\left\\{ t_n - \\boldsymbol{w}^T \\boldsymbol{\\Phi}(\\boldsymbol{x_n}) \\right\\} \\boldsymbol{\\Phi}(\\boldsymbol{x_n})^T$$\n\nWe set the derivative equal to 0.\n\n$$0 = \\sum_{n=1}^{N} r_n t_n \\mathbf{\\Phi}(\\mathbf{x}_n)^T - \\mathbf{w}^T \\left( \\sum_{n=1}^{N} r_n \\mathbf{\\Phi}(\\mathbf{x}_n) \\mathbf{\\Phi}(\\mathbf{x}_n)^T \\right)$$\n\nIf we denote $\\sqrt{r_n} \\phi(x_n) = \\phi'(x_n)$ and $\\sqrt{r_n} t_n = t'_n$ , we can obtain:\n\n$$0 = \\sum_{n=1}^{N} t'_n \\mathbf{\\Phi}'(\\mathbf{x}_n)^T - \\mathbf{w}^T \\left( \\sum_{n=1}^{N} \\mathbf{\\Phi}'(\\mathbf{x}_n) \\mathbf{\\Phi}'(\\mathbf{x}_n)^T \\right)$$\n\nTaking advantage of (3.11: $= \\frac{N}{2} \\ln \\beta - \\frac{N}{2} \\ln(2\\pi) - \\beta E_D(\\mathbf{w})$) – (3.17: $\\mathbf{\\Phi}^{\\dagger} \\equiv \\left(\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{\\Phi}\\right)^{-1}\\mathbf{\\Phi}^{\\mathrm{T}}$), we can derive a similar result, i.e. $\\boldsymbol{w}_{ML} = (\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi})^{-1} \\boldsymbol{\\Phi}^T \\boldsymbol{t}$ . But here, we define $\\boldsymbol{t}$ as:\n\n$$\\boldsymbol{t} = \\left[\\sqrt{r_1}t_1, \\sqrt{r_2}t_2, \\dots, \\sqrt{r_N}t_N\\right]^T$$\n\nWe also define $\\Phi$ as a $N \\times M$ matrix, with element $\\Phi(i,j) = \\sqrt{r_i} \\, \\phi_j(\\boldsymbol{x_i})$ . The interpretation is two folds: (1) Examining Eq (3.10)-(3.12), we see that if we substitute $\\beta^{-1}$ by $r_n \\cdot \\beta^{-1}$ in the summation term, Eq (3.12: $E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n)\\}^2.$) will become the expression in exercise 3.3. (2) $r_n$ can also be viewed as the effective number of observation of $(\\mathbf{x}_n, t_n)$ . Alternatively speaking, you can treat $(\\mathbf{x}_n, t_n)$ as repeatedly occurring $r_n$ times.",
"answer_length": 2003
},
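A sketch verifying numerically that the weighted least-squares solution obtained from the rescaled quantities φ'_n = √r_n φ_n and t'_n = √r_n t_n coincides with the solution of the weighted normal equations, written here (my reformulation, not the original answer's notation) as w* = (Φ^T R Φ)^{-1} Φ^T R t with R = diag(r_n); all data and weights are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(8)
N, M = 12, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
r = rng.uniform(0.5, 3.0, size=N)            # positive weights r_n

# Weighted normal equations: (Phi^T R Phi) w = Phi^T R t
R = np.diag(r)
w_direct = np.linalg.solve(Phi.T @ R @ Phi, Phi.T @ R @ t)

# Rescaled ("replicated data") view: phi'_n = sqrt(r_n) phi_n, t'_n = sqrt(r_n) t_n
Phi2 = np.sqrt(r)[:, None] * Phi
t2 = np.sqrt(r) * t
w_rescaled = np.linalg.solve(Phi2.T @ Phi2, Phi2.T @ t2)

print(np.allclose(w_direct, w_rescaled))     # True
```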
{
"chapter": 3,
"question_number": "3.4",
"difficulty": "easy",
"question_text": "Consider a linear model of the form\n\n$$y(x, \\mathbf{w}) = w_0 + \\sum_{i=1}^{D} w_i x_i$$\n (3.105: $y(x, \\mathbf{w}) = w_0 + \\sum_{i=1}^{D} w_i x_i$)\n\ntogether with a sum-of-squares error function of the form\n\n$$E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2.$$\n (3.106: $E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{y(x_n, \\mathbf{w}) - t_n\\}^2.$)\n\nNow suppose that Gaussian noise $\\epsilon_i$ with zero mean and variance $\\sigma^2$ is added independently to each of the input variables $x_i$ . By making use of $\\mathbb{E}[\\epsilon_i] = 0$ and $\\mathbb{E}[\\epsilon_i\\epsilon_j] = \\delta_{ij}\\sigma^2$ , show that minimizing $E_D$ averaged over the noise distribution is equivalent to minimizing the sum-of-squares error for noise-free input variables with the addition of a weight-decay regularization term, in which the bias parameter $w_0$ is omitted from the regularizer.",
"answer": "Firstly, we rearrange $E_D(\\boldsymbol{w})$ .\n\n$$\\begin{split} E_{D}(\\boldsymbol{w}) &= \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\left[ w_{0} + \\sum_{i=1}^{D} w_{i}(x_{i} + \\epsilon_{i}) \\right] - t_{n} \\right\\}^{2} \\\\ &= \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\left( w_{0} + \\sum_{i=1}^{D} w_{i}x_{i} \\right) - t_{n} + \\sum_{i=1}^{D} w_{i}\\epsilon_{i} \\right\\}^{2} \\\\ &= \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ y(x_{n}, \\boldsymbol{w}) - t_{n} + \\sum_{i=1}^{D} w_{i}\\epsilon_{i} \\right\\}^{2} \\\\ &= \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\left( y(x_{n}, \\boldsymbol{w}) - t_{n} \\right)^{2} + \\left( \\sum_{i=1}^{D} w_{i}\\epsilon_{i} \\right)^{2} + 2\\left( \\sum_{i=1}^{D} w_{i}\\epsilon_{i} \\right) \\left( y(x_{n}, \\boldsymbol{w}) - t_{n} \\right) \\right\\} \\end{split}$$\n\nWhere we have used $y(x_n, \\boldsymbol{w})$ to denote the output of the linear model when input variable is $x_n$ , without noise added. For the second term in the equation above, we can obtain:\n\n$$\\mathbb{E}_{\\epsilon}[(\\sum_{i=1}^{D}w_{i}\\epsilon_{i})^{2}] = \\mathbb{E}_{\\epsilon}[\\sum_{i=1}^{D}\\sum_{j=1}^{D}w_{i}w_{j}\\epsilon_{i}\\epsilon_{j}] = \\sum_{i=1}^{D}\\sum_{j=1}^{D}w_{i}w_{j}\\mathbb{E}_{\\epsilon}[\\epsilon_{i}\\epsilon_{j}] = \\sigma^{2}\\sum_{i=1}^{D}\\sum_{j=1}^{D}w_{i}w_{j}\\delta_{ij}$$\n\nWhich gives\n\n$$\\mathbb{E}_{\\epsilon}[(\\sum_{i=1}^{D} w_i \\epsilon_i)^2] = \\sigma^2 \\sum_{i=1}^{D} w_i^2$$\n\nFor the third term, we can obtain:\n\n$$\\mathbb{E}_{\\epsilon}[2(\\sum_{i=1}^{D} w_{i} \\epsilon_{i})(y(x_{n}, \\boldsymbol{w}) - t_{n})] = 2(y(x_{n}, \\boldsymbol{w}) - t_{n}) \\mathbb{E}_{\\epsilon}[\\sum_{i=1}^{D} w_{i} \\epsilon_{i}]$$\n\n$$= 2(y(x_{n}, \\boldsymbol{w}) - t_{n}) \\sum_{i=1}^{D} \\mathbb{E}_{\\epsilon}[w_{i} \\epsilon_{i}]$$\n\n$$= 0$$\n\nTherefore, if we calculate the expectation of $E_D(\\boldsymbol{w})$ with respect to $\\epsilon$ , we can obtain:\n\n$$\\mathbb{E}_{\\epsilon}[E_D(\\boldsymbol{w})] = \\frac{1}{2} \\sum_{n=1}^{N} (y(x_n, \\boldsymbol{w}) - t_n)^2 + \\frac{\\sigma^2}{2} \\sum_{i=1}^{D} w_i^2$$",
"answer_length": 1964
},
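A Monte Carlo sketch of this result: averaging the noisy-input error over many independent noise draws should approach the noise-free sum-of-squares error plus the weight-decay term. This assumes, as in the derivation above, that the noise is drawn independently for every data point and input dimension; all data and parameter values below are arbitrary, and agreement is only approximate because of sampling noise.

```python
import numpy as np

rng = np.random.default_rng(9)
N, D = 50, 3
X = rng.normal(size=(N, D))
t = rng.normal(size=N)
w0, w = 0.3, rng.normal(size=D)
sigma = 0.2

def E_D(Xin):
    y = w0 + Xin @ w
    return 0.5 * np.sum((y - t) ** 2)

# Average the error over many independent input-noise draws
S = 20000
acc = 0.0
for _ in range(S):
    acc += E_D(X + sigma * rng.normal(size=(N, D)))
mc_average = acc / S

expected = E_D(X) + 0.5 * N * sigma ** 2 * np.sum(w ** 2)
print(mc_average, expected)   # approximately equal
```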
{
"chapter": 3,
"question_number": "3.5",
"difficulty": "easy",
"question_text": "Using the technique of Lagrange multipliers, discussed in Appendix E, show that minimization of the regularized error function (3.29: $\\frac{1}{2} \\sum_{n=1}^{N} \\{t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n)\\}^2 + \\frac{\\lambda}{2} \\sum_{j=1}^{M} |w_j|^q$) is equivalent to minimizing the unregularized sum-of-squares error (3.12: $E_D(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\{t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n)\\}^2.$) subject to the constraint (3.30: $\\sum_{j=1}^{M} |w_j|^q \\leqslant \\eta$). Discuss the relationship between the parameters $\\eta$ and $\\lambda$ .",
"answer": "We can firstly rewrite the constraint (3.30: $\\sum_{j=1}^{M} |w_j|^q \\leqslant \\eta$) as:\n\n$$\\frac{1}{2} \\left( \\sum_{j=1}^{M} |w_j|^q - \\eta \\right) \\le 0$$\n\nWhere we deliberately introduce scaling factor 1/2 for convenience. Then it is straightforward to obtain the Lagrange function.\n\n$$L(\\boldsymbol{w}, \\lambda) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ t_n - \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\right\\}^2 + \\frac{\\lambda}{2} \\left( \\sum_{j=1}^{M} |w_j|^q - \\eta \\right)$$\n\nIt is obvious that $L(\\boldsymbol{w}, \\lambda)$ and (3.29: $\\frac{1}{2} \\sum_{n=1}^{N} \\{t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n)\\}^2 + \\frac{\\lambda}{2} \\sum_{j=1}^{M} |w_j|^q$) has the same dependence on $\\boldsymbol{w}$ . Meanwhile, if we denote the optimal $\\boldsymbol{w}$ that can minimize $L(\\boldsymbol{w}, \\lambda)$ as $\\boldsymbol{w}^*(\\lambda)$ , we can see that\n\n$$\\eta = \\sum_{j=1}^{M} |w_j^{\\star}|^q$$",
"answer_length": 937
},
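For the quadratic case q = 2 this correspondence is easy to see numerically: solve the ridge-regularized problem for a given λ, read off η = Σ_j |w_j*|², and check that solving the constrained problem with that η recovers the same minimizer. A sketch under arbitrary toy data, using scipy's general-purpose SLSQP solver (agreement is only up to solver tolerance):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
N, M = 30, 5
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
lam = 2.0

# Ridge (q = 2) solution of the regularized problem (3.29)
w_star = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M), Phi.T @ t)
eta = np.sum(w_star ** 2)          # corresponding constraint level in (3.30)

# Constrained problem: minimize the unregularized error subject to ||w||^2 <= eta
E = lambda w: 0.5 * np.sum((t - Phi @ w) ** 2)
res = minimize(E, np.zeros(M), method="SLSQP",
               constraints={"type": "ineq", "fun": lambda w: eta - w @ w})

print(np.allclose(res.x, w_star, atol=1e-3))   # True, up to solver tolerance
```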
{
"chapter": 3,
"question_number": "3.6",
"difficulty": "easy",
"question_text": "Consider a linear basis function regression model for a multivariate target variable t having a Gaussian distribution of the form\n\n$$p(\\mathbf{t}|\\mathbf{W}, \\mathbf{\\Sigma}) = \\mathcal{N}(\\mathbf{t}|\\mathbf{y}(\\mathbf{x}, \\mathbf{W}), \\mathbf{\\Sigma})$$\n (3.107: $p(\\mathbf{t}|\\mathbf{W}, \\mathbf{\\Sigma}) = \\mathcal{N}(\\mathbf{t}|\\mathbf{y}(\\mathbf{x}, \\mathbf{W}), \\mathbf{\\Sigma})$)\n\nwhere\n\n$$\\mathbf{y}(\\mathbf{x}, \\mathbf{W}) = \\mathbf{W}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}) \\tag{3.108}$$\n\ntogether with a training data set comprising input basis vectors $\\phi(\\mathbf{x}_n)$ and corresponding target vectors $\\mathbf{t}_n$ , with $n=1,\\ldots,N$ . Show that the maximum likelihood solution $\\mathbf{W}_{\\mathrm{ML}}$ for the parameter matrix $\\mathbf{W}$ has the property that each column is given by an expression of the form (3.15: $\\mathbf{w}_{\\mathrm{ML}} = \\left(\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{\\Phi}\\right)^{-1}\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{t}$), which was the solution for an isotropic noise distribution. Note that this is independent of the covariance matrix $\\Sigma$ . Show that the maximum likelihood solution for $\\Sigma$ is given by\n\n$$\\Sigma = \\frac{1}{N} \\sum_{n=1}^{N} \\left( \\mathbf{t}_{n} - \\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}} \\phi(\\mathbf{x}_{n}) \\right) \\left( \\mathbf{t}_{n} - \\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}} \\phi(\\mathbf{x}_{n}) \\right)^{\\mathrm{T}}.$$\n (3.109: $\\Sigma = \\frac{1}{N} \\sum_{n=1}^{N} \\left( \\mathbf{t}_{n} - \\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}} \\phi(\\mathbf{x}_{n}) \\right) \\left( \\mathbf{t}_{n} - \\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}} \\phi(\\mathbf{x}_{n}) \\right)^{\\mathrm{T}}.$)",
"answer": "Firstly, we write down the log likelihood function.\n\n$$lnp(\\boldsymbol{T}|\\boldsymbol{X}, \\boldsymbol{W}, \\boldsymbol{\\beta}) = -\\frac{N}{2}ln|\\boldsymbol{\\Sigma}| - \\frac{1}{2}\\sum_{n=1}^{N} \\left[\\boldsymbol{t_n} - \\boldsymbol{W}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n})\\right]^T \\boldsymbol{\\Sigma}^{-1} \\left[\\boldsymbol{t_n} - \\boldsymbol{W}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n})\\right]$$\n\nWhere we have already omitted the constant term. We set the derivative of the equation above with respect to $\\boldsymbol{W}$ equals to zero.\n\n$$\\mathbf{0} = -\\sum_{n=1}^{N} \\mathbf{\\Sigma}^{-1} [\\boldsymbol{t_n} - \\boldsymbol{W}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n})] \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T$$\n\nTherefore, we can obtain similar result for W as (3.15: $\\mathbf{w}_{\\mathrm{ML}} = \\left(\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{\\Phi}\\right)^{-1}\\mathbf{\\Phi}^{\\mathrm{T}}\\mathbf{t}$). For $\\Sigma$ , comparing with (2.118: $\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = -\\frac{ND}{2} \\ln(2\\pi) - \\frac{N}{2} \\ln |\\boldsymbol{\\Sigma}| - \\frac{1}{2} \\sum_{n=1}^{N} (\\mathbf{x}_n - \\boldsymbol{\\mu})^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} (\\mathbf{x}_n - \\boldsymbol{\\mu}). \\quad$) – (2.124: $\\mathbb{E}[\\Sigma_{\\mathrm{ML}}] = \\frac{N-1}{N} \\Sigma.$), we can easily write down a similar result :\n\n$$\\boldsymbol{\\Sigma} = \\frac{1}{N} \\sum_{n=1}^{N} [\\boldsymbol{t_n} - \\boldsymbol{W}_{ML}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n})] [\\boldsymbol{t_n} - \\boldsymbol{W}_{ML}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n})]^T$$\n\nWe can see that the solutions for W and $\\Sigma$ are also decoupled.",
"answer_length": 1591
},
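A numerical sketch of the claim that W_ML is column-wise identical to the single-output solution (3.15) and does not depend on Σ: maximize the exact (Σ-dependent) likelihood for a randomly chosen SPD Σ with a generic optimizer and compare with the closed form. All data and the choice of Σ are arbitrary, and the comparison holds only up to optimizer tolerance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
N, M, K = 40, 3, 2
Phi = rng.normal(size=(N, M))
T = rng.normal(size=(N, K))                  # multivariate targets t_n as rows

# Closed form: each column of W_ML has the form (3.15), regardless of Sigma
W_ml = np.linalg.solve(Phi.T @ Phi, Phi.T @ T)

# Maximize the exact likelihood for a random SPD Sigma (the ln|Sigma| term is constant in W)
B = rng.normal(size=(K, K))
Sigma_inv = np.linalg.inv(B @ B.T + 0.5 * np.eye(K))

def neg_log_lik(w_flat):
    W = w_flat.reshape(M, K)
    R = T - Phi @ W                          # residuals t_n - W^T phi(x_n)
    return 0.5 * np.einsum('nk,kl,nl->', R, Sigma_inv, R)

res = minimize(neg_log_lik, np.zeros(M * K))
print(np.allclose(res.x.reshape(M, K), W_ml, atol=1e-3))   # True

# ML covariance (3.109): mean outer product of the residuals
R = T - Phi @ W_ml
Sigma_ml = R.T @ R / N
```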
{
"chapter": 3,
"question_number": "3.7",
"difficulty": "easy",
"question_text": "By using the technique of completing the square, verify the result (3.49) for the posterior distribution of the parameters w in the linear basis function model in which $\\mathbf{m}_N$ and $\\mathbf{S}_N$ are defined by (3.50: $\\mathbf{m}_{N} = \\mathbf{S}_{N} \\left( \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t} \\right)$) and (3.51: $\\mathbf{S}_N^{-1} = \\mathbf{S}_0^{-1} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$) respectively.",
"answer": "Let's begin by writing down the prior distribution p(w) and likelihood function $p(t|X, w, \\beta)$ .\n\n$$p(\\boldsymbol{w}) = \\mathcal{N}(\\boldsymbol{w}|\\boldsymbol{m}_0, \\boldsymbol{S}_0) , \\quad p(\\boldsymbol{t}|\\boldsymbol{X}, \\boldsymbol{w}, \\boldsymbol{\\beta}) = \\prod_{n=1}^{N} \\mathcal{N}(t_n|\\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x}_n), \\boldsymbol{\\beta}^{-1})$$\n\nSince the posterior PDF equals to the product of the prior PDF and likelihood function, up to a normalized constant. We mainly focus on the exponential term of the product.\n\nexponential term \n$$= -\\frac{\\beta}{2} \\sum_{n=1}^{N} \\left\\{ t_n - \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\right\\}^2 - \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m_0})^T \\boldsymbol{S}_0^{-1} (\\boldsymbol{w} - \\boldsymbol{m_0})$$\n\n$$= -\\frac{\\beta}{2} \\sum_{n=1}^{N} \\left\\{ t_n^2 - 2t_n \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}) + \\boldsymbol{w}^T \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T \\boldsymbol{w} \\right\\} - \\frac{1}{2} (\\boldsymbol{w} - \\boldsymbol{m_0})^T \\boldsymbol{S}_0^{-1} (\\boldsymbol{w} - \\boldsymbol{m_0})$$\n\n$$= -\\frac{1}{2} \\boldsymbol{w}^T \\left[ \\sum_{n=1}^{N} \\beta \\boldsymbol{\\phi}(\\boldsymbol{x_n}) \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T + \\boldsymbol{S}_0^{-1} \\right] \\boldsymbol{w}$$\n\n$$-\\frac{1}{2} \\left[ -2\\boldsymbol{m_0}^T \\boldsymbol{S}_0^{-1} - \\sum_{n=1}^{N} 2\\beta t_n \\boldsymbol{\\phi}(\\boldsymbol{x_n})^T \\right] \\boldsymbol{w}$$\n\nHence, by comparing the quadratic term with standard Gaussian Distribution, we can obtain: $\\mathbf{S}_N^{-1} = \\mathbf{S}_0^{-1} + \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi}$ . And then comparing the linear term, we can obtain :\n\n$$-2m_{N}^{T}S_{N}^{-1} = -2m_{0}^{T}S_{0}^{-1} - \\sum_{n=1}^{N} 2\\beta t_{n}\\phi(x_{n})^{T}$$\n\nIf we multiply -0.5 on both sides, and then transpose both sides, we can easily see that $m_N = S_N(S_0^{-1}m_0 + \\beta\\Phi^T t)$",
"answer_length": 1934
},
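A quick numerical verification that m_N and S_N given by (3.50)–(3.51) really complete the square: the log of (prior × likelihood) should differ from the quadratic form −½(w − m_N)^T S_N^{-1}(w − m_N) only by a constant independent of w. A sketch with arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(12)
N, M = 10, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
beta = 2.5
m0 = rng.normal(size=M)
S0 = np.eye(M) * 0.8

S_N = np.linalg.inv(np.linalg.inv(S0) + beta * Phi.T @ Phi)      # (3.51)
m_N = S_N @ (np.linalg.solve(S0, m0) + beta * Phi.T @ t)         # (3.50)

def log_prior_times_lik(w):      # log(prior * likelihood), dropping w-independent constants
    return (-0.5 * (w - m0) @ np.linalg.solve(S0, w - m0)
            - 0.5 * beta * np.sum((t - Phi @ w) ** 2))

def log_posterior_quad(w):       # quadratic form of N(w | m_N, S_N), again up to a constant
    return -0.5 * (w - m_N) @ np.linalg.solve(S_N, w - m_N)

# The difference must be the same constant for any w
w1, w2 = rng.normal(size=M), rng.normal(size=M)
d1 = log_prior_times_lik(w1) - log_posterior_quad(w1)
d2 = log_prior_times_lik(w2) - log_posterior_quad(w2)
print(np.isclose(d1, d2))        # True
```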
{
"chapter": 3,
"question_number": "3.8",
"difficulty": "medium",
"question_text": "Consider the linear basis function model in Section 3.1, and suppose that we have already observed N data points, so that the posterior distribution over w is given by (3.49). This posterior can be regarded as the prior for the next observation. By considering an additional data point $(\\mathbf{x}_{N+1}, t_{N+1})$ , and by completing the square in the exponential, show that the resulting posterior distribution is again given by (3.49) but with $\\mathbf{S}_N$ replaced by $\\mathbf{S}_{N+1}$ and $\\mathbf{m}_N$ replaced by $\\mathbf{m}_{N+1}$ .",
"answer": "Firstly, we write down the prior:\n\n$$p(\\boldsymbol{w}) = \\mathcal{N}(\\boldsymbol{m}_{N}, \\boldsymbol{S}_{N})$$\n\nWhere $m_N$ , $S_N$ are given by (3.50: $\\mathbf{m}_{N} = \\mathbf{S}_{N} \\left( \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t} \\right)$) and (3.51: $\\mathbf{S}_N^{-1} = \\mathbf{S}_0^{-1} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$). And if now we observe another sample $(X_{N+1}, t_{N+1})$ , we can write down the likelihood function:\n\n$$p(t_{N+1}|\\mathbf{x}_{N+1},\\mathbf{w}) = \\mathcal{N}(t_{N+1}|y(\\mathbf{x}_{N+1},\\mathbf{w}),\\beta^{-1})$$\n\nSince the posterior equals to the production of likelihood function and the prior, up to a constant, we focus on the exponential term.\n\nexponential term = \n$$(\\boldsymbol{w} - \\boldsymbol{m}_{N})^{T} \\boldsymbol{S}_{N}^{-1} (\\boldsymbol{w} - \\boldsymbol{m}_{N}) + \\beta (t_{N+1} - \\boldsymbol{w}^{T} \\boldsymbol{\\phi}(\\boldsymbol{x}_{N+1}))^{2}$$\n \n= $\\boldsymbol{w}^{T} [\\boldsymbol{S}_{N}^{-1} + \\beta \\boldsymbol{\\phi}(\\boldsymbol{x}_{N+1}) \\boldsymbol{\\phi}(\\boldsymbol{x}_{N+1})^{T}] \\boldsymbol{w}$ \n $-2 \\boldsymbol{w}^{T} [\\boldsymbol{S}_{N}^{-1} \\boldsymbol{m}_{N} + \\beta \\boldsymbol{\\phi}(\\boldsymbol{x}_{N+1}) t_{N+1}]$ \n+const\n\nTherefore, after observing $(X_{N+1}, t_{N+1})$ , we have $p(w) = \\mathcal{N}(m_{N+1}, S_{N+1})$ , where we have defined:\n\n$$S_{N+1}^{-1} = S_N^{-1} + \\beta \\phi(x_{N+1}) \\phi(x_{N+1})^T$$\n\nAnd\n\n$$m_{N+1} = S_{N+1} (S_N^{-1} m_N + \\beta \\phi(x_{N+1}) t_{N+1})$$",
"answer_length": 1513
},
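A sketch checking numerically that the sequential update derived above agrees with processing all N+1 points in a single batch via (3.50)–(3.51); the data and hyperparameters are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(13)
N, M = 8, 3
Phi = rng.normal(size=(N, M))
t = rng.normal(size=N)
phi_new = rng.normal(size=M)      # phi(x_{N+1})
t_new = rng.normal()
beta = 1.5
m0, S0 = np.zeros(M), np.eye(M)

def batch_posterior(Phi_all, t_all):
    S = np.linalg.inv(np.linalg.inv(S0) + beta * Phi_all.T @ Phi_all)   # (3.51)
    m = S @ (np.linalg.solve(S0, m0) + beta * Phi_all.T @ t_all)        # (3.50)
    return m, S

# Posterior after the first N points, then one sequential update
m_N, S_N = batch_posterior(Phi, t)
S_N1 = np.linalg.inv(np.linalg.inv(S_N) + beta * np.outer(phi_new, phi_new))
m_N1 = S_N1 @ (np.linalg.solve(S_N, m_N) + beta * phi_new * t_new)

# Posterior from all N+1 points at once
m_all, S_all = batch_posterior(np.vstack([Phi, phi_new]), np.append(t, t_new))

print(np.allclose(m_N1, m_all), np.allclose(S_N1, S_all))   # True True
```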
{
"chapter": 3,
"question_number": "3.9",
"difficulty": "medium",
"question_text": "Repeat the previous exercise but instead of completing the square by hand, make use of the general result for linear-Gaussian models given by (2.116: $p(\\mathbf{x}|\\mathbf{y}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{\\Sigma}\\{\\mathbf{A}^{\\mathrm{T}}\\mathbf{L}(\\mathbf{y}-\\mathbf{b}) + \\mathbf{\\Lambda}\\boldsymbol{\\mu}\\}, \\mathbf{\\Sigma})$).",
"answer": "We know that the prior $p(\\mathbf{w})$ can be written as:\n\n$$p(\\boldsymbol{w}) = \\mathcal{N}(\\boldsymbol{m}_{N}, \\boldsymbol{S}_{N})$$\n\nAnd the likelihood function $p(t_{N+1}|\\boldsymbol{x_{N+1}},\\boldsymbol{w})$ can be written as:\n\n$$p(t_{N+1}|x_{N+1}, w) = \\mathcal{N}(t_{N+1}|y(x_{N+1}, w), \\beta^{-1})$$\n\nAccording to the fact that $y(x_{N+1}, w) = w^T \\phi(x_{N+1}) = \\phi(x_{N+1})^T w$ , the likelihood can be further written as:\n\n$$p(t_{N+1}|\\boldsymbol{x_{N+1}},\\boldsymbol{w}) = \\mathcal{N}(t_{N+1}|(\\boldsymbol{\\phi}(\\boldsymbol{x_{N+1}})^T\\boldsymbol{w},\\beta^{-1})$$\n\nThen we take advantage of (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$) and (2.116: $p(\\mathbf{x}|\\mathbf{y}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{\\Sigma}\\{\\mathbf{A}^{\\mathrm{T}}\\mathbf{L}(\\mathbf{y}-\\mathbf{b}) + \\mathbf{\\Lambda}\\boldsymbol{\\mu}\\}, \\mathbf{\\Sigma})$), which gives:\n\n$$p(\\boldsymbol{w}|\\boldsymbol{x}_{N+1},t_{N+1}) = \\mathcal{N}(\\boldsymbol{\\Sigma}\\{\\boldsymbol{\\phi}(\\boldsymbol{x}_{N+1})\\beta t_{N+1} + \\boldsymbol{S}_{N}^{-1}\\boldsymbol{m}_{N}\\},\\boldsymbol{\\Sigma})$$\n\nWhere $\\Sigma = (S_N^{-1} + \\phi(x_{N+1})\\beta\\phi(x_{N+1})^T)^{-1}$ , and we can see that the result is exactly the same as the one we obtained in the previous problem.",
"answer_length": 1406
}
]
},
{
"chapter_number": 4,
"total_questions": 22,
"difficulty_breakdown": {
"easy": 15,
"medium": 4,
"hard": 0,
"unknown": 7
},
"questions": [
{
"chapter": 4,
"question_number": "4.1",
"difficulty": "medium",
"question_text": "\\star)$ Given a set of data points $\\{\\mathbf{x}_n\\}$ , we can define the *convex hull* to be the set of all points $\\mathbf{x}$ given by\n\n$$\\mathbf{x} = \\sum_{n} \\alpha_n \\mathbf{x}_n \\tag{4.156}$$\n\nwhere $\\alpha_n \\geqslant 0$ and $\\sum_n \\alpha_n = 1$ . Consider a second set of points $\\{\\mathbf{y}_n\\}$ together with their corresponding convex hull. By definition, the two sets of points will be linearly separable if there exists a vector $\\widehat{\\mathbf{w}}$ and a scalar $w_0$ such that $\\widehat{\\mathbf{w}}^T\\mathbf{x}_n + w_0 > 0$ for all $\\mathbf{x}_n$ , and $\\widehat{\\mathbf{w}}^T\\mathbf{y}_n + w_0 < 0$ for all $\\mathbf{y}_n$ . Show that if their convex hulls intersect, the two sets of points cannot be linearly separable, and conversely that if they are linearly separable, their convex hulls do not intersect.",
"answer": "If the convex hull of $\\{\\mathbf{x_n}\\}$ and $\\{\\mathbf{y_n}\\}$ intersects, we know that there will be a point $\\mathbf{z}$ which can be written as $\\mathbf{z} = \\sum_n \\alpha_n \\mathbf{x_n}$ and also $\\mathbf{z} = \\sum_n \\beta_n \\mathbf{y_n}$ . Hence we can obtain:\n\n$$\\widehat{\\mathbf{w}}^T \\mathbf{z} + w_0 = \\widehat{\\mathbf{w}}^T (\\sum_n \\alpha_n \\mathbf{x_n}) + w_0$$\n\n$$= (\\sum_n \\alpha_n \\widehat{\\mathbf{w}}^T \\mathbf{x_n}) + (\\sum_n \\alpha_n) w_0$$\n\n$$= \\sum_n \\alpha_n (\\widehat{\\mathbf{w}}^T \\mathbf{x_n} + w_0) \\quad (*)$$\n\nWhere we have used $\\sum_n \\alpha_n = 1$ . And if $\\{\\mathbf{x_n}\\}$ and $\\{\\mathbf{y_n}\\}$ are linearly separable, we have $\\widehat{\\mathbf{w}}^T \\mathbf{x_n} + w_0 > 0$ and $\\widehat{\\mathbf{w}}^T \\mathbf{y_n} + w_0 < 0$ , for $\\forall \\mathbf{x_n}$ , $\\mathbf{y_n}$ . Together with $\\alpha_n \\geq 0$ and (\\*), we know that $\\widehat{\\mathbf{w}}^T \\mathbf{z} + w_0 > 0$ . And if we calculate $\\widehat{\\mathbf{w}}^T \\mathbf{z} + w_0$ from the perspective of $\\{\\mathbf{y_n}\\}$ following the same procedure, we can obtain $\\widehat{\\mathbf{w}}^T \\mathbf{z} + w_0 < 0$ . Hence contradictory occurs. In other words, they are not linearly separable if their convex hulls intersect.\n\nWe have already proved the first statement, i.e., \"convex hulls intersect\" gives \"not linearly separable\", and what the second part wants us to prove is that \"linearly separable\" gives \"convex hulls do not intersect\". This can be done simply by contrapositive.\n\nThe true converse of the first statement should be if their convex hulls do not intersect, the data sets should be linearly separable. This is exactly what Hyperplane Separation Theorem shows us.",
"answer_length": 1703
},
{
"chapter": 4,
"question_number": "4.11",
"difficulty": "medium",
"question_text": "Consider a classification problem with K classes for which the feature vector $\\phi$ has M components each of which can take L discrete states. Let the values of the components be represented by a 1-of-L binary coding scheme. Further suppose that, conditioned on the class $\\mathcal{C}_k$ , the M components of $\\phi$ are independent, so that the class-conditional density factorizes with respect to the feature vector components. Show that the quantities $a_k$ given by (4.63: $a_k = \\ln p(\\mathbf{x}|\\mathcal{C}_k)p(\\mathcal{C}_k).$), which appear in the argument to the softmax function describing the posterior class probabilities, are linear functions of the components of $\\phi$ . Note that this represents an example of the naive Bayes model which is discussed in Section 8.2.2.",
"answer": "Based on definition, we can write down\n\n$$p(\\boldsymbol{\\phi}|C_k) = \\prod_{m=1}^{M} \\prod_{l=1}^{L} \\mu_{kml}^{\\phi_{ml}}$$\n\nNote that here only one of the value among $\\phi_{m1}$ , $\\phi_{m2}$ , ... $\\phi_{mL}$ is 1, and the others are all 0 because we have used a 1-of-L binary coding scheme, and also we have taken advantage of the assumption that the M components of $\\phi$ are independent conditioned on the class $C_k$ . We substitute the expression above into (4.63: $a_k = \\ln p(\\mathbf{x}|\\mathcal{C}_k)p(\\mathcal{C}_k).$), which gives:\n\n$$a_k = \\sum_{m=1}^{M} \\sum_{l=1}^{L} \\phi_{ml} \\cdot \\ln \\mu_{kml} + \\ln p(C_k)$$\n\nHence it is obvious that $a_k$ is a linear function of the components of $\\phi$ .",
"answer_length": 723
},
{
"chapter": 4,
"question_number": "4.12",
"difficulty": "easy",
"question_text": "Verify the relation (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$) for the derivative of the logistic sigmoid function defined by (4.59: $\\sigma(a) = \\frac{1}{1 + \\exp(-a)}$).",
"answer": "Based on definition, i.e., (4.59: $\\sigma(a) = \\frac{1}{1 + \\exp(-a)}$), we know that logistic sigmoid has the form:\n\n$$\\sigma(a) = \\frac{1}{1 + exp(-a)}$$\n\nNow, we calculate its derivative with regard to a.\n\n$$\\frac{d\\sigma(a)}{da} = \\frac{exp(a)}{[1+exp(-a)]^2} = \\frac{exp(a)}{1+exp(-a)} \\cdot \\frac{1}{1+exp(-a)} = [1-\\sigma(a)] \\cdot \\sigma(a)$$\n\nJust as required.",
"answer_length": 369
},
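A one-line finite-difference check of (4.88), included purely as an illustrative sketch (the evaluation point is arbitrary):

```python
import numpy as np

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))
a, eps = 0.37, 1e-6
numeric = (sigma(a + eps) - sigma(a - eps)) / (2 * eps)
print(np.isclose(numeric, sigma(a) * (1 - sigma(a))))   # True
```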
{
"chapter": 4,
"question_number": "4.13",
"difficulty": "easy",
"question_text": "By making use of the result (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$) for the derivative of the logistic sigmoid, show that the derivative of the error function (4.90: $E(\\mathbf{w}) = -\\ln p(\\mathbf{t}|\\mathbf{w}) = -\\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$) for the logistic regression model is given by (4.91: $\\nabla E(\\mathbf{w}) = \\sum_{n=1}^{N} (y_n - t_n) \\phi_n$).",
"answer": "Let's follow the hint.\n\n$$\\nabla E(\\mathbf{w}) = -\\nabla \\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$$\n\n$$= -\\sum_{n=1}^{N} \\nabla \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$$\n\n$$= -\\sum_{n=1}^{N} \\frac{d\\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}}{dy_n} \\frac{dy_n}{da_n} \\frac{da_n}{d\\mathbf{w}}$$\n\n$$= -\\sum_{n=1}^{N} (\\frac{t_n}{y_n} - \\frac{1 - t_n}{1 - y_n}) \\cdot y_n (1 - y_n) \\cdot \\phi_n$$\n\n$$= -\\sum_{n=1}^{N} \\frac{t_n - y_n}{y_n (1 - y_n)} \\cdot y_n (1 - y_n) \\cdot \\phi_n$$\n\n$$= -\\sum_{n=1}^{N} (t_n - y_n) \\phi_n$$\n\n$$= \\sum_{n=1}^{N} (y_n - t_n) \\phi_n$$\n\nWhere we have used $y_n = \\sigma(a_n)$ , $a_n = \\mathbf{w}^T \\boldsymbol{\\phi_n}$ , the chain rules and (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$).",
"answer_length": 736
},
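The gradient formula (4.91) can be confirmed with a finite-difference check on toy data (a sketch only; the data and weight vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(14)
N, M = 20, 4
Phi = rng.normal(size=(N, M))
t = rng.integers(0, 2, size=N).astype(float)
w = rng.normal(size=M)

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))

def E(w):
    y = sigma(Phi @ w)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

grad_formula = Phi.T @ (sigma(Phi @ w) - t)          # (4.91): sum_n (y_n - t_n) phi_n

eps = 1e-6
grad_numeric = np.array([(E(w + eps * e) - E(w - eps * e)) / (2 * eps)
                         for e in np.eye(M)])
print(np.allclose(grad_formula, grad_numeric))        # True
```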
{
"chapter": 4,
"question_number": "4.14",
"difficulty": "easy",
"question_text": "Show that for a linearly separable data set, the maximum likelihood solution for the logistic regression model is obtained by finding a vector $\\mathbf{w}$ whose decision boundary $\\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}) = 0$ separates the classes and then taking the magnitude of $\\mathbf{w}$ to infinity.",
"answer": "According to definition, we know that if a dataset is linearly separable, we can find $\\mathbf{w}$ , for some points $\\mathbf{x_n}$ , we have $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_n}) > 0$ , and the others $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_m}) < 0$ . Then the boundary is given by $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}) = 0$ . Note that for any point $\\mathbf{x_0}$ in the dataset, the value of $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_0})$ should either be positive or negative, but it can not equal to 0.\n\nTherefore, the maximum likelihood solution for logistic regression is trivial. We suppose for those points $\\mathbf{x_n}$ belonging to class $C_1$ , we have $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_n}) > 0$ and $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_m}) < 0$ for those belonging to class $C_2$ . According to (4.87: $p(C_1|\\phi) = y(\\phi) = \\sigma\\left(\\mathbf{w}^{\\mathrm{T}}\\phi\\right)$), if $|\\mathbf{w}| \\to \\infty$ , we have\n\n$$p(C_1|\\boldsymbol{\\phi}(\\mathbf{x_n})) = \\sigma(\\mathbf{w}^T\\boldsymbol{\\phi}(\\mathbf{x_n})) \\to 1$$\n\nWhere we have used $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_n}) \\to +\\infty$ . And since $\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x_m}) \\to -\\infty$ , we can also obtain:\n\n$$p(C_2|\\boldsymbol{\\phi}(\\mathbf{x_m})) = 1 - p(C_1|\\boldsymbol{\\phi}(\\mathbf{x_m})) = 1 - \\sigma(\\mathbf{w}^T\\boldsymbol{\\phi}(\\mathbf{x_m})) \\to 1$$\n\nIn other words, for the likelihood function, i.e.,(4.89), if we have $|\\mathbf{w}| \\to \\infty$ , and also we label all the points lying on one side of the boundary as class $C_1$ , and those on the other side as class $C_2$ , the every term in (4.89: $p(\\mathbf{t}|\\mathbf{w}) = \\prod_{n=1}^{N} y_n^{t_n} \\left\\{ 1 - y_n \\right\\}^{1 - t_n}$) can achieve its maximum value, i.e., 1, finally leading to the maximum of the likelihood.\n\nHence, for a linearly separable dataset, the learning process may prefer to make $|\\mathbf{w}| \\to \\infty$ and use the linear boundary to label the datasets, which can cause severe over-fitting problem.\n\n## **Problem 4.15 Solution**\n\nSince $y_n$ is the output of the logistic sigmoid function, we know that $0 < y_n < 1$ and hence $y_n(1-y_n) > 0$ . Then we use (4.97: $\\mathbf{H} = \\nabla \\nabla E(\\mathbf{w}) = \\sum_{n=1}^{N} y_n (1 - y_n) \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^{\\mathrm{T}} = \\boldsymbol{\\Phi}^{\\mathrm{T}} \\mathbf{R} \\boldsymbol{\\Phi}$), for an arbitrary non-zero real vector $\\mathbf{a} \\neq \\mathbf{0}$ , we have:\n\n$$\\mathbf{a}^{T}\\mathbf{H}\\mathbf{a} = \\mathbf{a}^{T} \\left[ \\sum_{n=1}^{N} y_{n} (1 - y_{n}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\right] \\mathbf{a}$$\n\n$$= \\sum_{n=1}^{N} y_{n} (1 - y_{n}) (\\boldsymbol{\\phi}_{n}^{T} \\mathbf{a})^{T} (\\boldsymbol{\\phi}_{n}^{T} \\mathbf{a})$$\n\n$$= \\sum_{n=1}^{N} y_{n} (1 - y_{n}) b_{n}^{2}$$\n\nWhere we have denoted $b_n = \\phi_n^T \\mathbf{a}$ . What's more, there should be at least one of $\\{b_1, b_2, ..., b_N\\}$ not equal to zero and then we can see that the expression above is larger than 0 and hence $\\mathbf{H}$ is positive definite.\n\nOtherwise, if all the $b_n = 0$ , $\\mathbf{a} = [a_1, a_2, ..., a_M]^T$ will locate in the null space of matrix $\\mathbf{\\Phi}_{N \\times M}$ . 
However, by the *rank-nullity theorem*, we know that $\\operatorname{Rank}(\\mathbf{\\Phi}) + \\operatorname{Nullity}(\\mathbf{\\Phi}) = M$ , and we have already assumed that the M features are independent, i.e., $\\operatorname{Rank}(\\mathbf{\\Phi}) = M$ , which means the null space contains only $\\mathbf{0}$ . Therefore a contradiction occurs.",
"answer_length": 3587
},
{
"chapter": 4,
"question_number": "4.16",
"difficulty": "easy",
"question_text": "Consider a binary classification problem in which each observation $\\mathbf{x}_n$ is known to belong to one of two classes, corresponding to t=0 and t=1, and suppose that the procedure for collecting training data is imperfect, so that training points are sometimes mislabelled. For every data point $\\mathbf{x}_n$ , instead of having a value t for the class label, we have instead a value t representing the probability that t = 1. Given a probabilistic model t = 1t = 1t t write down the log likelihood function appropriate to such a data set.",
"answer": "We still denote $y_n = p(t = 1 | \\phi_n)$ , and then we can write down the log likelihood by replacing $t_n$ with $\\pi_n$ in (4.89: $p(\\mathbf{t}|\\mathbf{w}) = \\prod_{n=1}^{N} y_n^{t_n} \\left\\{ 1 - y_n \\right\\}^{1 - t_n}$) and (4.90: $E(\\mathbf{w}) = -\\ln p(\\mathbf{t}|\\mathbf{w}) = -\\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$).\n\n$$\\ln p(\\mathbf{t}|\\mathbf{w}) = \\sum_{n=1}^{N} \\{\\pi_n \\ln y_n + (1 - \\pi_n) \\ln (1 - y_n)\\}\\$$",
"answer_length": 445
},
{
"chapter": 4,
"question_number": "4.17",
"difficulty": "easy",
"question_text": "Show that the derivatives of the softmax activation function (4.104: $p(\\mathcal{C}_k|\\phi) = y_k(\\phi) = \\frac{\\exp(a_k)}{\\sum_j \\exp(a_j)}$), where the $a_k$ are defined by (4.105: $a_k = \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}.$), are given by (4.106: $\\frac{\\partial y_k}{\\partial a_j} = y_k (I_{kj} - y_j)$).",
"answer": "We should discuss in two situations separately, namely j = k and $j \\neq k$ . When $j \\neq k$ , we have:\n\n$$\\frac{\\partial y_k}{\\partial a_j} = \\frac{-exp(a_k) \\cdot exp(a_j)}{[\\sum_j exp(a_j)]^2} = -y_k \\cdot y_j$$\n\nAnd when j = k, we have:\n\n$$\\frac{\\partial y_k}{\\partial a_k} = \\frac{exp(a_k)\\sum_j exp(a_j) - exp(a_k)exp(a_k)}{[\\sum_j exp(a_j)]^2} = y_k - y_k^2 = y_k(1-y_k)$$\n\nTherefore, we can obtain:\n\n$$\\frac{\\partial y_k}{\\partial a_j} = y_k (I_{kj} - y_j)$$\n\nWhere $I_{kj}$ is the elements of the indentity matrix.",
"answer_length": 528
},
{
"chapter": 4,
"question_number": "4.18",
"difficulty": "easy",
"question_text": "Using the result (4.91: $\\nabla E(\\mathbf{w}) = \\sum_{n=1}^{N} (y_n - t_n) \\phi_n$) for the derivatives of the softmax activation function, show that the gradients of the cross-entropy error (4.108: $E(\\mathbf{w}_1, \\dots, \\mathbf{w}_K) = -\\ln p(\\mathbf{T}|\\mathbf{w}_1, \\dots, \\mathbf{w}_K) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\ln y_{nk}$) are given by (4.109: $\\nabla_{\\mathbf{w}_j} E(\\mathbf{w}_1, \\dots, \\mathbf{w}_K) = \\sum_{n=1}^N (y_{nj} - t_{nj}) \\, \\boldsymbol{\\phi}_n$).",
"answer": "We derive every term $t_{nk} \\ln y_{nk}$ with regard to $a_j$ .\n\n$$\\begin{array}{lll} \\frac{\\partial t_{nk} \\ln y_{nk}}{\\partial \\mathbf{w_j}} & = & \\frac{\\partial t_{nk} \\ln y_{nk}}{\\partial y_{nk}} \\frac{\\partial y_{nk}}{\\partial a_j} \\frac{\\partial a_j}{\\partial \\mathbf{w_j}} \\\\ & = & t_{nk} \\frac{1}{y_{nk}} \\cdot y_{nk} (I_{kj} - y_{nj}) \\cdot \\boldsymbol{\\phi_n} \\\\ & = & t_{nk} (I_{kj} - y_{nj}) \\boldsymbol{\\phi_n} \\end{array}$$\n\nWhere we have used (4.105: $a_k = \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}.$) and (4.106: $\\frac{\\partial y_k}{\\partial a_j} = y_k (I_{kj} - y_j)$). Next we perform summation over n and k.\n\n$$\\nabla_{\\mathbf{w_j}} E = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} (I_{kj} - y_{nj}) \\boldsymbol{\\phi_n}$$\n\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} y_{nj} \\boldsymbol{\\phi_n} - \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} I_{kj} \\boldsymbol{\\phi_n}$$\n\n$$= \\sum_{n=1}^{N} \\left[ (\\sum_{k=1}^{K} t_{nk}) y_{nj} \\boldsymbol{\\phi_n} \\right] - \\sum_{n=1}^{N} t_{nj} \\boldsymbol{\\phi_n}$$\n\n$$= \\sum_{n=1}^{N} y_{nj} \\boldsymbol{\\phi_n} - \\sum_{n=1}^{N} t_{nj} \\boldsymbol{\\phi_n}$$\n\n$$= \\sum_{n=1}^{N} (y_{nj} - t_{nj}) \\boldsymbol{\\phi_n}$$\n\nWhere we have used the fact that for arbitrary n, we have $\\sum_{k=1}^{K} t_{nk} = 1$ . **Problem 4.19 Solution** \n\nWe write down the log likelihood.\n\n$$\\ln p(\\mathbf{t}|\\mathbf{w}) = \\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$$\n\nTherefore, we can obtain:\n\n$$\\nabla_{\\mathbf{w}} \\ln p = \\frac{\\partial \\ln p}{\\partial y_n} \\cdot \\frac{\\partial y_n}{\\partial a_n} \\cdot \\frac{\\partial a_n}{\\partial \\mathbf{w}}$$\n\n$$= \\sum_{n=1}^{N} (\\frac{t_n}{y_n} - \\frac{1 - t_n}{1 - y_n}) \\Phi'(a_n) \\phi_n$$\n\n$$= \\sum_{n=1}^{N} \\frac{y_n - t_n}{y_n (1 - y_n)} \\Phi'(a_n) \\phi_n$$\n\nWhere we have used $y = p(t = 1|a) = \\Phi(a)$ and $a_n = \\mathbf{w}^T \\phi_n$ . 
According to (4.114: $\\Phi(a) = \\int_{-\\infty}^{a} \\mathcal{N}(\\theta|0,1) \\,\\mathrm{d}\\theta$), we can obtain:\n\n$$\\Phi'(a) = \\mathcal{N}(\\theta|0,1)\\big|_{\\theta=a} = \\frac{1}{\\sqrt{2\\pi}}exp(-\\frac{1}{2}a^2)$$\n\nHence, we can obtain:\n\n$$\\nabla_{\\mathbf{w}} \\ln p = \\sum_{n=1}^{N} \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\boldsymbol{\\phi_n}$$\n\nTo calculate the Hessian Matrix, we need to first evaluate several derivatives.\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{y_n - t_n}{y_n (1 - y_n)} \\right\\} = \\frac{\\partial}{\\partial y_n} \\left\\{ \\frac{y_n - t_n}{y_n (1 - y_n)} \\right\\} \\cdot \\frac{\\partial y_n}{\\partial a_n} \\cdot \\frac{\\partial a_n}{\\partial \\mathbf{w}}$$\n\n$$= \\frac{y_n (1 - y_n) - (y_n - t_n)(1 - 2y_n)}{[y_n (1 - y_n)]^2} \\Phi'(a_n) \\phi_n$$\n\n$$= \\frac{y_n^2 + t_n - 2y_n t_n}{y_n^2 (1 - y_n)^2} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\phi_n$$\n\nAnd\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\right\\} = \\frac{\\partial}{\\partial a_n} \\left\\{ \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\right\\} \\frac{\\partial a_n}{\\partial \\mathbf{w}}$$\n$$= -\\frac{a_n}{\\sqrt{2\\pi}} exp(-\\frac{a_n^2}{2}) \\phi_n$$\n\nTherefore, using the chain rule, we can obtain:\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\right\\} = \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{y_n - t_n}{y_n (1 - y_n)} \\right\\} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} + \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\right\\} \\\\\n= \\left[ \\frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} - a_n (y_n - t_n) \\right] \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi} y_n (1 - y_n)} \\phi_n$$\n\nFinally if we perform summation over n, we can obtain the Hessian Matrix:\n\n$$\\begin{split} \\mathbf{H} &= \\nabla \\nabla_{\\mathbf{w}} \\ln p \\\\ &= \\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} \\right\\} \\cdot \\boldsymbol{\\phi_n} \\\\ &= \\sum_{n=1}^{N} \\left[ \\frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi}} - a_n (y_n - t_n) \\right] \\frac{exp(-\\frac{a_n^2}{2})}{\\sqrt{2\\pi} \\gamma_n (1 - y_n)} \\boldsymbol{\\phi_n} \\boldsymbol{\\phi_n}^T \\end{split}$$\n\n## Problem 4.20 Solution\n\nWe know that the Hessian Matrix is of size $MK \\times MK$ , and the (j,k)th block with size $M \\times M$ is given by (4.110: $\\nabla_{\\mathbf{w}_k} \\nabla_{\\mathbf{w}_j} E(\\mathbf{w}_1, \\dots, \\mathbf{w}_K) = -\\sum_{n=1}^N y_{nk} (I_{kj} - y_{nj}) \\phi_n \\phi_n^{\\mathrm{T}}.$), where j,k=1,2,...,K. Therefore, we can obtain:\n\n$$\\mathbf{u}^{\\mathbf{T}}\\mathbf{H}\\mathbf{u} = \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\mathbf{u}_{j}^{\\mathbf{T}}\\mathbf{H}_{j,k}\\mathbf{u}_{k}$$\n (\\*)\n\nWhere we use $\\mathbf{u_k}$ to denote the kth block vector of $\\mathbf{u}$ with size $M \\times 1$ , and $\\mathbf{H_{j,k}}$ to denote the (j,k)th block matrix of $\\mathbf{H}$ with size $M \\times M$ . 
Then based on (4.110: $\\nabla_{\\mathbf{w}_k} \\nabla_{\\mathbf{w}_j} E(\\mathbf{w}_1, \\dots, \\mathbf{w}_K) = -\\sum_{n=1}^N y_{nk} (I_{kj} - y_{nj}) \\phi_n \\phi_n^{\\mathrm{T}}.$), we further expand (*):\n\n$$(*) = \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\mathbf{u}_{j}^{\\mathbf{T}} \\{-\\sum_{n=1}^{N} y_{nk} (I_{kj} - y_{nj}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} \\mathbf{u}_{k}$$\n\n$$= \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbf{u}_{j}^{\\mathbf{T}} \\{-y_{nk} (I_{kj} - y_{nj}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} \\mathbf{u}_{k}$$\n\n$$= \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbf{u}_{j}^{\\mathbf{T}} \\{-y_{nk} I_{kj} \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} \\mathbf{u}_{k} + \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbf{u}_{j}^{\\mathbf{T}} \\{y_{nk} y_{nj} \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} \\mathbf{u}_{k}$$\n\n$$= \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbf{u}_{k}^{\\mathbf{T}} \\{-y_{nk} \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} \\mathbf{u}_{k} + \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\sum_{n=1}^{N} y_{nj} \\mathbf{u}_{j}^{\\mathbf{T}} \\{ \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\} y_{nk} \\mathbf{u}_{k}$$",
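"editor_note": "Editor's addition, not part of the original solution: a finite-difference check of the gradient (4.109) for the softmax model with 1-of-K targets; a minimal sketch assuming NumPy, with illustrative names and random data.\n\n```python\nimport numpy as np\n\ndef softmax_rows(A):\n    E = np.exp(A - A.max(axis=1, keepdims=True))\n    return E / E.sum(axis=1, keepdims=True)\n\ndef cross_entropy(W, Phi, T):\n    # E(w_1, ..., w_K) = -sum_n sum_k t_nk ln y_nk, with a_nk = w_k^T phi_n\n    Y = softmax_rows(Phi @ W)\n    return -np.sum(T * np.log(Y))\n\nrng = np.random.default_rng(1)\nN, M, K = 6, 4, 3\nPhi = rng.normal(size=(N, M))\nT = np.eye(K)[rng.integers(0, K, N)]   # 1-of-K coded targets\nW = rng.normal(size=(M, K))            # column j is w_j\n\nY = softmax_rows(Phi @ W)\nanalytic = Phi.T @ (Y - T)             # column j equals (4.109)\n\neps = 1e-6\nnumeric = np.zeros_like(W)\nfor m in range(M):\n    for k in range(K):\n        D = np.zeros_like(W)\n        D[m, k] = eps\n        numeric[m, k] = (cross_entropy(W + D, Phi, T) - cross_entropy(W - D, Phi, T)) / (2 * eps)\n\nprint(np.max(np.abs(analytic - numeric)))  # close to zero\n```",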
"answer_length": 6125
},
{
"chapter": 4,
"question_number": "4.19",
"difficulty": "easy",
"question_text": "Write down expressions for the gradient of the log likelihood, as well as the corresponding Hessian matrix, for the probit regression model defined in Section 4.3.5. These are the quantities that would be required to train such a model using IRLS.",
"answer": "Using the cross-entropy error function (4.90: $E(\\mathbf{w}) = -\\ln p(\\mathbf{t}|\\mathbf{w}) = -\\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\}$), and following Exercise 4.13, we have\n\n$$\\frac{\\partial E}{\\partial y_n} = \\frac{y_n - t_n}{y_n (1 - y_n)}. (108)$$\n\nAlso\n\n$$\\nabla a_n = \\phi_n. \\tag{109}$$\n\nFrom (4.115: $\\operatorname{erf}(a) = \\frac{2}{\\sqrt{\\pi}} \\int_0^a \\exp(-\\theta^2/2) \\, \\mathrm{d}\\theta$) and (4.116: $\\Phi(a) = \\frac{1}{2} \\left\\{ 1 + \\frac{1}{\\sqrt{2}} \\operatorname{erf}(a) \\right\\}.$) we have\n\n$$\\frac{\\partial y_n}{\\partial a_n} = \\frac{\\partial \\Phi(a_n)}{\\partial a_n} = \\frac{1}{\\sqrt{2\\pi}} e^{-a_n^2}.$$\n (110)\n\nCombining (108), (109) and (110), we get\n\n$$\\nabla E = \\sum_{n=1}^{N} \\frac{\\partial E}{\\partial y_n} \\frac{\\partial y_n}{\\partial a_n} \\nabla a_n = \\sum_{n=1}^{N} \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{1}{\\sqrt{2\\pi}} e^{-a_n^2} \\phi_n.$$\n (111)\n\nIn order to find the expression for the Hessian, it is is convenient to first determine\n\n$$\\frac{\\partial}{\\partial y_n} \\frac{y_n - t_n}{y_n (1 - y_n)} = \\frac{y_n (1 - y_n)}{y_n^2 (1 - y_n)^2} - \\frac{(y_n - t_n)(1 - 2y_n)}{y_n^2 (1 - y_n)^2}\n= \\frac{y_n^2 + t_n - 2y_n t_n}{y_n^2 (1 - y_n)^2}.$$\n(112)\n\nThen using (109)–(112) we have\n\n$$\\nabla \\nabla E = \\sum_{n=1}^{N} \\left\\{ \\frac{\\partial}{\\partial y_n} \\left[ \\frac{y_n - t_n}{y_n (1 - y_n)} \\right] \\frac{1}{\\sqrt{2\\pi}} e^{-a_n^2} \\phi_n \\nabla y_n \\right.$$\n\n$$\\left. + \\frac{y_n - t_n}{y_n (1 - y_n)} \\frac{1}{\\sqrt{2\\pi}} e^{-a_n^2} (-2a_n) \\phi_n \\nabla a_n \\right\\}$$\n\n$$= \\sum_{n=1}^{N} \\left( \\frac{y_n^2 + t_n - 2y_n t_n}{y_n (1 - y_n)} \\frac{1}{\\sqrt{2\\pi}} e^{-a_n^2} - 2a_n (y_n - t_n) \\right) \\frac{e^{-2a_n^2} \\phi_n \\phi_n^{\\mathrm{T}}}{\\sqrt{2\\pi} y_n (1 - y_n)}.$$",
"answer_length": 1741
},
{
"chapter": 4,
"question_number": "4.2",
"difficulty": "medium",
"question_text": "\\star)$ www Consider the minimization of a sum-of-squares error function (4.15: $E_D(\\widetilde{\\mathbf{W}}) = \\frac{1}{2} \\text{Tr} \\left\\{ (\\widetilde{\\mathbf{X}} \\widetilde{\\mathbf{W}} - \\mathbf{T})^{\\mathrm{T}} (\\widetilde{\\mathbf{X}} \\widetilde{\\mathbf{W}} - \\mathbf{T}) \\right\\}.$), and suppose that all of the target vectors in the training set satisfy a linear constraint\n\n$$\\mathbf{a}^{\\mathrm{T}}\\mathbf{t}_{n} + b = 0 \\tag{4.157}$$\n\nwhere $\\mathbf{t}_n$ corresponds to the $n^{\\mathrm{th}}$ row of the matrix $\\mathbf{T}$ in (4.15: $E_D(\\widetilde{\\mathbf{W}}) = \\frac{1}{2} \\text{Tr} \\left\\{ (\\widetilde{\\mathbf{X}} \\widetilde{\\mathbf{W}} - \\mathbf{T})^{\\mathrm{T}} (\\widetilde{\\mathbf{X}} \\widetilde{\\mathbf{W}} - \\mathbf{T}) \\right\\}.$). Show that as a consequence of this constraint, the elements of the model prediction $\\mathbf{y}(\\mathbf{x})$ given by the least-squares solution (4.17: $\\mathbf{y}(\\mathbf{x}) = \\widetilde{\\mathbf{W}}^{\\mathrm{T}} \\widetilde{\\mathbf{x}} = \\mathbf{T}^{\\mathrm{T}} \\left( \\widetilde{\\mathbf{X}}^{\\dagger} \\right)^{\\mathrm{T}} \\widetilde{\\mathbf{x}}.$) also satisfy this constraint, so that\n\n$$\\mathbf{a}^{\\mathrm{T}}\\mathbf{y}(\\mathbf{x}) + b = 0. \\tag{4.158}$$\n\nTo do so, assume that one of the basis functions $\\phi_0(\\mathbf{x}) = 1$ so that the corresponding parameter $w_0$ plays the role of a bias.",
"answer": "Let's make the dependency of $E_D(\\widetilde{\\mathbf{W}})$ on $w_0$ explicitly:\n\n$$E_D(\\widetilde{\\mathbf{W}}) = \\frac{1}{2} \\text{Tr} \\{ (\\mathbf{X} \\mathbf{W} + \\mathbf{1} \\mathbf{w_0}^T - \\mathbf{T})^T (\\mathbf{X} \\mathbf{W} + \\mathbf{1} \\mathbf{w_0}^T - \\mathbf{T}) \\}$$\n\nThen we calculate the derivative of $E_D(\\widetilde{\\mathbf{W}})$ with respect to $\\mathbf{w_0}$ :\n\n$$\\frac{\\partial E_D(\\widetilde{\\mathbf{W}})}{\\partial \\mathbf{w_0}} = 2N\\mathbf{w_0} + 2(\\mathbf{X}\\mathbf{W} - \\mathbf{T})^T \\mathbf{1}$$\n\nWhere we have used the property:\n\n$$\\frac{\\partial}{\\partial \\mathbf{X}} \\mathrm{Tr} \\big[ (\\mathbf{A} \\mathbf{X} \\mathbf{B} + \\mathbf{C}) (\\mathbf{A} \\mathbf{X} \\mathbf{B} + \\mathbf{C})^T \\big] = 2 \\mathbf{A}^T (\\mathbf{A} \\mathbf{X} \\mathbf{B} + \\mathbf{C}) \\mathbf{B}^T$$\n\nWe set the derivative equals to 0, which gives:\n\n$$\\mathbf{w_0} = -\\frac{1}{N}(\\mathbf{XW} - \\mathbf{T})^T \\mathbf{1} = \\bar{\\mathbf{t}} - \\mathbf{W}^T \\bar{\\mathbf{x}}$$\n\nWhere we have denoted:\n\n$$\\bar{\\mathbf{t}} = \\frac{1}{N} \\mathbf{T}^T \\mathbf{1}$$\n, and $\\bar{\\mathbf{x}} = \\frac{1}{N} \\mathbf{X}^T \\mathbf{1}$ \n\nIf we substitute the equations above into $E_D(\\widetilde{\\mathbf{W}})$ , we can obtain:\n\n$$E_D(\\widetilde{\\mathbf{W}}) = \\frac{1}{2} \\mathrm{Tr} \\big\\{ (\\mathbf{X} \\mathbf{W} + \\bar{\\mathbf{T}} - \\bar{\\mathbf{X}} \\mathbf{W} - \\mathbf{T})^T (\\mathbf{X} \\mathbf{W} + \\bar{\\mathbf{T}} - \\bar{\\mathbf{X}} \\mathbf{W} - \\mathbf{T}) \\big\\}$$\n\nWhere we further denote\n\n$$\\bar{\\mathbf{T}} = \\mathbf{1}\\bar{\\mathbf{t}}^T$$\n. and $\\bar{\\mathbf{X}} = \\mathbf{1}\\bar{\\mathbf{x}}^T$ \n\nThen we set the derivative of $E_D(\\widetilde{\\mathbf{W}})$ with regard to $\\mathbf{W}$ to 0, which gives:\n\n$$W=\\widehat{X}^{\\dagger}\\widehat{T}$$\n\nWhere we have defined:\n\n$$\\hat{\\mathbf{X}} = \\mathbf{X} - \\bar{\\mathbf{X}}$$\n, and $\\hat{\\mathbf{T}} = \\mathbf{T} - \\bar{\\mathbf{T}}$ \n\nNow consider the prediction for a new given $\\mathbf{x}$ , we have:\n\n$$\\mathbf{y}(\\mathbf{x}) = \\mathbf{W}^T \\mathbf{x} + \\mathbf{w_0}$$\n\n$$= \\mathbf{W}^T \\mathbf{x} + \\bar{\\mathbf{t}} - \\mathbf{W}^T \\bar{\\mathbf{x}}$$\n\n$$= \\bar{\\mathbf{t}} + \\mathbf{W}^T (\\mathbf{x} - \\bar{\\mathbf{x}})$$\n\nIf we know that $\\mathbf{a}^T \\mathbf{t_n} + b = 0$ holds for some $\\mathbf{a}$ and b, we can obtain:\n\n$$\\mathbf{a}^T \\mathbf{\\bar{t}} = \\frac{1}{N} \\mathbf{a}^T \\mathbf{T}^T \\mathbf{1} = \\frac{1}{N} \\sum_{n=1}^{N} \\mathbf{a}^T \\mathbf{t_n} = -b$$\n\nTherefore,\n\n$$\\mathbf{a}^{T}\\mathbf{y}(\\mathbf{x}) = \\mathbf{a}^{T}[\\bar{\\mathbf{t}} + \\mathbf{W}^{T}(\\mathbf{x} - \\bar{\\mathbf{x}})]$$\n\n$$= \\mathbf{a}^{T}\\bar{\\mathbf{t}} + \\mathbf{a}^{T}\\mathbf{W}^{T}(\\mathbf{x} - \\bar{\\mathbf{x}})$$\n\n$$= -b + \\mathbf{a}^{T}\\widehat{\\mathbf{T}}^{T}(\\widehat{\\mathbf{X}}^{\\dagger})^{T}(\\mathbf{x} - \\bar{\\mathbf{x}})$$\n\n$$= -b$$\n\nWhere we have used:\n\n$$\\mathbf{a}^{T} \\widehat{\\mathbf{T}}^{T} = \\mathbf{a}^{T} (\\mathbf{T} - \\overline{\\mathbf{I}})^{T} = \\mathbf{a}^{T} (\\mathbf{T} - \\frac{1}{N} \\mathbf{1} \\mathbf{1}^{T} \\mathbf{T})^{T}$$\n$$= \\mathbf{a}^{T} \\mathbf{T}^{T} - \\frac{1}{N} \\mathbf{a}^{T} \\mathbf{T}^{T} \\mathbf{1} \\mathbf{1}^{T} = -b \\mathbf{1}^{T} + b \\mathbf{1}^{T}$$\n$$= \\mathbf{0}^{T}$$",
"answer_length": 3167
},
{
"chapter": 4,
"question_number": "4.21",
"difficulty": "easy",
"question_text": "Show that the probit function (4.114: $\\Phi(a) = \\int_{-\\infty}^{a} \\mathcal{N}(\\theta|0,1) \\,\\mathrm{d}\\theta$) and the erf function (4.115: $\\operatorname{erf}(a) = \\frac{2}{\\sqrt{\\pi}} \\int_0^a \\exp(-\\theta^2/2) \\, \\mathrm{d}\\theta$) are related by (4.116: $\\Phi(a) = \\frac{1}{2} \\left\\{ 1 + \\frac{1}{\\sqrt{2}} \\operatorname{erf}(a) \\right\\}.$).",
"answer": "It is quite obvious.\n\n$$\\begin{split} \\Phi(a) &= \\int_{-\\infty}^{a} \\mathcal{N}(\\theta|0,1) d\\theta \\\\ &= \\frac{1}{2} + \\int_{0}^{a} \\mathcal{N}(\\theta|0,1) d\\theta \\\\ &= \\frac{1}{2} + \\int_{0}^{a} \\mathcal{N}(\\theta|0,1) d\\theta \\\\ &= \\frac{1}{2} + \\frac{1}{\\sqrt{2\\pi}} \\int_{0}^{a} exp(-\\theta^{2}/2) d\\theta \\\\ &= \\frac{1}{2} + \\frac{1}{\\sqrt{2\\pi}} \\frac{\\sqrt{\\pi}}{2} \\int_{0}^{a} \\frac{2}{\\sqrt{\\pi}} exp(-\\theta^{2}/2) d\\theta \\\\ &= \\frac{1}{2} (1 + \\frac{1}{\\sqrt{2}} \\int_{0}^{a} \\frac{2}{\\sqrt{\\pi}} exp(-\\theta^{2}/2) d\\theta) \\\\ &= \\frac{1}{2} \\{1 + \\frac{1}{\\sqrt{2}} erf(a) \\} \\end{split}$$\n\nWhere we have used\n\n$$\\int_{-\\infty}^{0} \\mathcal{N}(\\theta|0,1)d\\theta = \\frac{1}{2}$$",
"answer_length": 695
},
{
"chapter": 4,
"question_number": "4.22",
"difficulty": "easy",
"question_text": "Using the result (4.135: $= f(\\mathbf{z}_0) \\frac{(2\\pi)^{M/2}}{|\\mathbf{A}|^{1/2}}$), derive the expression (4.137: $\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}}) + \\underbrace{\\ln p(\\boldsymbol{\\theta}_{\\text{MAP}}) + \\frac{M}{2}\\ln(2\\pi) - \\frac{1}{2}\\ln|\\mathbf{A}|}_{\\text{Occam factor}}$) for the log model evidence under the Laplace approximation.",
"answer": "If we denote $f(\\theta) = p(D|\\theta)p(\\theta)$ , we can write:\n\n$$p(D) = \\int p(D|\\boldsymbol{\\theta})p(\\boldsymbol{\\theta})d\\boldsymbol{\\theta} = \\int f(\\boldsymbol{\\theta})d\\boldsymbol{\\theta}$$\n$$= f(\\boldsymbol{\\theta}_{MAP})\\frac{(2\\pi)^{M/2}}{|\\mathbf{A}|^{1/2}}$$\n$$= p(D|\\boldsymbol{\\theta}_{MAP})p(\\boldsymbol{\\theta}_{MAP})\\frac{(2\\pi)^{M/2}}{|\\mathbf{A}|^{1/2}}$$\n\nWhere $\\theta_{MAP}$ is the value of $\\theta$ at the mode of $f(\\theta)$ , **A** is the Hessian Matrix of $-\\ln f(\\theta)$ and we have also used (4.135: $= f(\\mathbf{z}_0) \\frac{(2\\pi)^{M/2}}{|\\mathbf{A}|^{1/2}}$). Therefore,\n\n$$\\ln p(D) = \\ln p(D|\\boldsymbol{\\theta}_{MAP}) + \\ln p(\\boldsymbol{\\theta}_{MAP}) + \\frac{M}{2} \\ln 2\\pi - \\frac{1}{2} \\ln |\\mathbf{A}|$$\n\nJust as required.",
"answer_length": 769
},
{
"chapter": 4,
"question_number": "4.23",
"difficulty": "medium",
"question_text": "In this exercise, we derive the BIC result (4.139: $\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}}) - \\frac{1}{2}M \\ln N$) starting from the Laplace approximation to the model evidence given by (4.137: $\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}}) + \\underbrace{\\ln p(\\boldsymbol{\\theta}_{\\text{MAP}}) + \\frac{M}{2}\\ln(2\\pi) - \\frac{1}{2}\\ln|\\mathbf{A}|}_{\\text{Occam factor}}$). Show that if the prior over parameters is Gaussian of the form $p(\\theta) = \\mathcal{N}(\\theta|\\mathbf{m}, \\mathbf{V}_0)$ , the log model evidence under the Laplace approximation takes the form\n\n$$\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\mathrm{MAP}}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{\\mathrm{MAP}} - \\mathbf{m})^{\\mathrm{T}}\\mathbf{V}_{0}^{-1}(\\boldsymbol{\\theta}_{\\mathrm{MAP}} - \\mathbf{m}) - \\frac{1}{2}\\ln |\\mathbf{H}| + \\mathrm{const}$$\n\nwhere $\\mathbf{H}$ is the matrix of second derivatives of the log likelihood $\\ln p(\\mathcal{D}|\\boldsymbol{\\theta})$ evaluated at $\\boldsymbol{\\theta}_{\\text{MAP}}$ . Now assume that the prior is broad so that $\\mathbf{V}_0^{-1}$ is small and the second term on the right-hand side above can be neglected. Furthermore, consider the case of independent, identically distributed data so that $\\mathbf{H}$ is the sum of terms one for each data point. Show that the log model evidence can then be written approximately in the form of the BIC expression (4.139: $\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}}) - \\frac{1}{2}M \\ln N$).",
"answer": "According to (4.137: $\\ln p(\\mathcal{D}) \\simeq \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}}) + \\underbrace{\\ln p(\\boldsymbol{\\theta}_{\\text{MAP}}) + \\frac{M}{2}\\ln(2\\pi) - \\frac{1}{2}\\ln|\\mathbf{A}|}_{\\text{Occam factor}}$), we can write:\n\n$$\\ln p(D) = \\ln p(D|\\boldsymbol{\\theta}_{MAP}) + \\ln p(\\boldsymbol{\\theta}_{MAP}) + \\frac{M}{2} \\ln 2\\pi - \\frac{1}{2} \\ln |\\mathbf{A}|$$\n\n$$= \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{M}{2} \\ln 2\\pi - \\frac{1}{2} \\ln |\\mathbf{V_0}| - \\frac{1}{2} (\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1} (\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})$$\n\n$$+ \\frac{M}{2} \\ln 2\\pi - \\frac{1}{2} \\ln |\\mathbf{A}|$$\n\n$$= \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2} \\ln |\\mathbf{V_0}| - \\frac{1}{2} (\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1} (\\boldsymbol{\\theta}_{MAP} - \\mathbf{m}) - \\frac{1}{2} \\ln |\\mathbf{A}|$$\n\nWhere we have used the definition of the multivariate Gaussian Distribution. Then, from (4.138: $\\mathbf{A} = -\\nabla\\nabla \\ln p(\\mathcal{D}|\\boldsymbol{\\theta}_{\\text{MAP}})p(\\boldsymbol{\\theta}_{\\text{MAP}}) = -\\nabla\\nabla \\ln p(\\boldsymbol{\\theta}_{\\text{MAP}}|\\mathcal{D}).$), we can write:\n\n$$\\mathbf{A} = -\\nabla \\nabla \\ln p(D|\\boldsymbol{\\theta_{MAP}}) p(\\boldsymbol{\\theta_{MAP}})$$\n\n$$= -\\nabla \\nabla \\ln p(D|\\boldsymbol{\\theta_{MAP}}) - \\nabla \\nabla \\ln p(\\boldsymbol{\\theta_{MAP}})$$\n\n$$= \\mathbf{H} - \\nabla \\nabla \\left\\{ -\\frac{1}{2} (\\boldsymbol{\\theta_{MAP}} - \\mathbf{m})^T \\mathbf{V_0}^{-1} (\\boldsymbol{\\theta_{MAP}} - \\mathbf{m}) \\right\\}$$\n\n$$= \\mathbf{H} + \\nabla \\left\\{ \\mathbf{V_0}^{-1} (\\boldsymbol{\\theta_{MAP}} - \\mathbf{m}) \\right\\}$$\n\n$$= \\mathbf{H} + \\mathbf{V_0}^{-1}$$\n\nWhere we have denoted $\\mathbf{H} = -\\nabla \\nabla \\ln p(D|\\boldsymbol{\\theta_{MAP}})$ . Therefore, the equation\n\nabove becomes:\n\n$$\\ln p(D) = \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m}) - \\frac{1}{2}\\ln\\left\\{|\\mathbf{V_0}| \\cdot |\\mathbf{H} + \\mathbf{V_0}^{-1}|\\right\\} \n= \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m}) - \\frac{1}{2}\\ln\\left\\{|\\mathbf{V_0}\\mathbf{H} + \\mathbf{I}|\\right\\} \n\\approx \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m}) - \\frac{1}{2}\\ln|\\mathbf{V_0}| - \\frac{1}{2}\\ln|\\mathbf{H}| \n\\approx \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{MAP} - \\mathbf{m}) - \\frac{1}{2}\\ln|\\mathbf{H}| + \\text{const}$$\n\nWhere we have used the property of determinant: $|A| \\cdot |B| = |AB|$ , and the fact that the prior is board, i.e. I can be neglected with regard to $V_0H$ . What's more, since the prior is pre-given, we can view $V_0$ as constant. 
And if the data is large, we can write:\n\n$$\\mathbf{H} = \\sum_{n=1}^{N} \\mathbf{H_n} = N\\widehat{\\mathbf{H}}$$\n\nWhere $\\hat{\\mathbf{H}} = 1/N \\sum_{n=1}^{N} \\mathbf{H_n}$ , and then\n\n$$\\begin{split} \\ln p(D) &\\approx & \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m}) - \\frac{1}{2} \\ln |\\mathbf{H}| + \\mathrm{const} \\\\ &\\approx & \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m}) - \\frac{1}{2} \\ln |N\\widehat{\\mathbf{H}}| + \\mathrm{const} \\\\ &\\approx & \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{1}{2}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m})^T \\mathbf{V_0}^{-1}(\\boldsymbol{\\theta}_{\\boldsymbol{MAP}} - \\mathbf{m}) - \\frac{M}{2} \\ln N - \\frac{1}{2} \\ln |\\widehat{\\mathbf{H}}| + \\mathrm{const} \\\\ &\\approx & \\ln p(D|\\boldsymbol{\\theta}_{MAP}) - \\frac{M}{2} \\ln N \\end{split}$$\n\nThis is because when N >> 1, other terms can be neglected.\n\n## **Problem 4.24 Solution**(Waiting for updating)",
"answer_length": 4100
},
{
"chapter": 4,
"question_number": "4.25",
"difficulty": "medium",
"question_text": "Suppose we wish to approximate the logistic sigmoid $\\sigma(a)$ defined by (4.59: $\\sigma(a) = \\frac{1}{1 + \\exp(-a)}$) by a scaled probit function $\\Phi(\\lambda a)$ , where $\\Phi(a)$ is defined by (4.114: $\\Phi(a) = \\int_{-\\infty}^{a} \\mathcal{N}(\\theta|0,1) \\,\\mathrm{d}\\theta$). Show that if $\\lambda$ is chosen so that the derivatives of the two functions are equal at a=0, then $\\lambda^2=\\pi/8$ .\n\n### **4. LINEAR MODELS FOR CLASSIFICATION**",
"answer": "We first need to obtain the expression for the first derivative of probit function $\\Phi(\\lambda a)$ with regard to a. According to (4.114: $\\Phi(a) = \\int_{-\\infty}^{a} \\mathcal{N}(\\theta|0,1) \\,\\mathrm{d}\\theta$), we can write down:\n\n$$\\frac{d}{da}\\Phi(\\lambda a) = \\frac{d\\Phi(\\lambda a)}{d(\\lambda a)} \\cdot \\frac{d\\lambda a}{da}$$\n$$= \\frac{\\lambda}{\\sqrt{2\\pi}} exp\\left\\{-\\frac{1}{2}(\\lambda a)^2\\right\\}$$\n\nWhich further gives:\n\n$$\\frac{d}{da}\\Phi(\\lambda a)\\Big|_{a=0} = \\frac{\\lambda}{\\sqrt{2\\pi}}$$\n\nAnd for logistic sigmoid function, according to (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$), we have\n\n$$\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma) = 0.5 \\times 0.5 = \\frac{1}{4}$$\n\nWhere we have used $\\sigma(0) = 0.5$ . Let their derivatives at origin equals, we have:\n\n$$\\frac{\\lambda}{\\sqrt{2\\pi}} = \\frac{1}{4}$$\n\ni.e., $\\lambda = \\sqrt{2\\pi}/4$ . And hence $\\lambda^2 = \\pi/8$ is obvious.",
"answer_length": 913
},
{
"chapter": 4,
"question_number": "4.26",
"difficulty": "medium",
"question_text": "In this exercise, we prove the relation (4.152) for the convolution of a probit function with a Gaussian distribution. To do this, show that the derivative of the left-hand side with respect to $\\mu$ is equal to the derivative of the right-hand side, and then integrate both sides with respect to $\\mu$ and then show that the constant of integration vanishes. Note that before differentiating the left-hand side, it is convenient first to introduce a change of variable given by $a = \\mu + \\sigma z$ so that the integral over a is replaced by an integral over a. When we differentiate the left-hand side of the relation (4.152), we will then obtain a Gaussian integral over a that can be evaluated analytically.",
"answer": "We will prove (4.152) in a more simple and intuitive way. But firstly, we need to prove a trivial yet useful statement: Suppose we have a random variable satisfied normal distribution denoted as $X \\sim \\mathcal{N}(X|\\mu,\\sigma^2)$ , the probability of $X \\leq x$ is $P(X \\leq x) = \\Phi(\\frac{x-\\mu}{\\sigma})$ , and here x is a given real number. We can see this by writing down the integral:\n\n$$P(X \\le x) = \\int_{-\\infty}^{x} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\left[-\\frac{1}{2\\sigma^2} (X - \\mu)^2\\right] dX$$\n\n$$= \\int_{-\\infty}^{\\frac{x-\\mu}{\\sigma}} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} exp\\left(-\\frac{1}{2}\\gamma^2\\right) \\sigma d\\gamma$$\n\n$$= \\int_{-\\infty}^{\\frac{x-\\mu}{\\sigma}} \\frac{1}{\\sqrt{2\\pi}} exp\\left(-\\frac{1}{2}\\gamma^2\\right) d\\gamma$$\n\n$$= \\Phi\\left(\\frac{x-\\mu}{\\sigma}\\right)$$\n\nWhere we have changed the variable $X = \\mu + \\sigma \\gamma$ . Now consider two random variables $X \\sim \\mathcal{N}(0, \\lambda^{-2})$ and $Y \\sim \\mathcal{N}(\\mu, \\sigma^2)$ . We first calculate the conditional probability $P(X \\leq Y \\mid Y = \\alpha)$ :\n\n$$P(X \\le Y \\mid Y = a) = P(X \\le a) = \\Phi(\\frac{a - 0}{\\lambda^{-1}}) = \\Phi(\\lambda a)$$\n\nTogether with Bayesian Formula, we can obtain:\n\n$$P(X \\le Y) = \\int_{-\\infty}^{+\\infty} P(X \\le Y \\mid Y = a) p df(Y = a) dY$$\n$$= \\int_{-\\infty}^{+\\infty} \\Phi(\\lambda a) \\mathcal{N}(a \\mid \\mu, \\sigma^2) da$$\n\nWhere $pdf(\\cdot)$ denotes the probability density function and we have also used $pdf(Y) = \\mathcal{N}(\\mu, \\sigma^2)$ . What's more, we know that X - Y should also satisfy normal distribution, with:\n\n$$E[X - Y] = E[X] - E[Y] = 0 - \\mu = -\\mu$$\n\nAnd\n\n$$var[X - Y] = var[X] + var[Y] = \\lambda^{-2} + \\sigma^{2}$$\n\nTherefore, $X - Y \\sim \\mathcal{N}(-\\mu, \\lambda^{-2} + \\sigma^2)$ and it follows that:\n\n$$P(X - Y \\le 0) = \\Phi(\\frac{0 - (-\\mu)}{\\sqrt{\\lambda^{-2} + \\sigma^2}}) = \\Phi(\\frac{\\mu}{\\sqrt{\\lambda^{-2} + \\sigma^2}})$$\n\nSince $P(X \\le Y) = P(X - Y \\le 0)$ , we obtain what have been required.\n\n# 0.5 Neural Networks",
"answer_length": 2001
},
{
"chapter": 4,
"question_number": "4.3",
"difficulty": "medium",
"question_text": "Extend the result of Exercise 4.2 to show that if multiple linear constraints are satisfied simultaneously by the target vectors, then the same constraints will also be satisfied by the least-squares prediction of a linear model.",
"answer": "Suppose there are Q constraints in total. We can write $\\mathbf{a_q}^T \\mathbf{t_n} + b_q = 0$ , q = 1, 2, ..., Q for all the target vector $\\mathbf{t_n}$ , n = 1, 2, ..., N. Or alternatively, we can group them together:\n\n$$\\mathbf{A}^T \\mathbf{t_n} + \\mathbf{b} = \\mathbf{0}$$\n\nWhere **A** is a $Q \\times Q$ matrix, and the qth column of **A** is $\\mathbf{a_q}$ , and meanwhile **b** is a $Q \\times 1$ column vector, and the qth element is $\\mathbf{b_q}$ . for every pair of $\\{\\mathbf{a_q}, b_q\\}$ we can follow the same procedure in the previous problem to show that $\\mathbf{a_qy(x)} + b_q = 0$ . In other words, the proofs will not affect each other. Therefore, it is obvious:\n\n$$\\mathbf{A}^T \\mathbf{y}(\\mathbf{x}) + \\mathbf{b} = \\mathbf{0}$$",
"answer_length": 759
},
{
"chapter": 4,
"question_number": "4.4",
"difficulty": "easy",
"question_text": "Show that maximization of the class separation criterion given by (4.23: $m_k = \\mathbf{w}^{\\mathrm{T}} \\mathbf{m}_k$) with respect to $\\mathbf{w}$ , using a Lagrange multiplier to enforce the constraint $\\mathbf{w}^T\\mathbf{w}=1$ , leads to the result that $\\mathbf{w} \\propto (\\mathbf{m}_2 \\mathbf{m}_1)$ .",
"answer": "We use Lagrange multiplier to enforce the constraint $\\mathbf{w}^T\\mathbf{w} = 1$ . We now need to maximize:\n\n$$L(\\lambda, \\mathbf{w}) = \\mathbf{w}^T (\\mathbf{m}_2 - \\mathbf{m}_1) + \\lambda (\\mathbf{w}^T \\mathbf{w} - 1)$$\n\nWe calculate the derivatives:\n\n$$\\frac{\\partial L(\\lambda, \\mathbf{w})}{\\partial \\lambda} = \\mathbf{w}^T \\mathbf{w} - 1$$\n\nAnd\n\n$$\\frac{\\partial L(\\lambda, \\mathbf{w})}{\\partial \\mathbf{w}} = \\mathbf{m_2} - \\mathbf{m_1} + 2\\lambda \\mathbf{w}$$\n\nWe set the derivatives above equals to 0, which gives:\n\n$$\\mathbf{w} = -\\frac{1}{2\\lambda}(\\mathbf{m_2} - \\mathbf{m_1}) \\propto (\\mathbf{m_2} - \\mathbf{m_1})$$",
"answer_length": 628
},
{
"chapter": 4,
"question_number": "4.5",
"difficulty": "easy",
"question_text": "By making use of (4.20: $y = \\mathbf{w}^{\\mathrm{T}} \\mathbf{x}.$), (4.23: $m_k = \\mathbf{w}^{\\mathrm{T}} \\mathbf{m}_k$), and (4.24: $s_k^2 = \\sum_{n \\in \\mathcal{C}_k} (y_n - m_k)^2$), show that the Fisher criterion (4.25: $J(\\mathbf{w}) = \\frac{(m_2 - m_1)^2}{s_1^2 + s_2^2}.$) can be written in the form (4.26: $J(\\mathbf{w}) = \\frac{\\mathbf{w}^{\\mathrm{T}} \\mathbf{S}_{\\mathrm{B}} \\mathbf{w}}{\\mathbf{w}^{\\mathrm{T}} \\mathbf{S}_{\\mathrm{W}} \\mathbf{w}}$).",
"answer": "We expand (4.25: $J(\\mathbf{w}) = \\frac{(m_2 - m_1)^2}{s_1^2 + s_2^2}.$) using (4.22: $m_2 - m_1 = \\mathbf{w}^{\\mathrm{T}}(\\mathbf{m}_2 - \\mathbf{m}_1)$), (4.23: $m_k = \\mathbf{w}^{\\mathrm{T}} \\mathbf{m}_k$) and (4.24: $s_k^2 = \\sum_{n \\in \\mathcal{C}_k} (y_n - m_k)^2$).\n\n$$J(\\mathbf{w}) = \\frac{(m_2 - m_1)^2}{s_1^2 + s_2^2}$$\n\n$$= \\frac{||\\mathbf{w}^T (\\mathbf{m_2} - \\mathbf{m_1})||^2}{\\sum_{n \\in C_1} (\\mathbf{w}^T \\mathbf{x_n} - m_1)^2 + \\sum_{n \\in C_2} (\\mathbf{w}^T \\mathbf{x_n} - m_2)^2}$$\n\nThe numerator can be further written as:\n\nnumerator = \n$$[\\mathbf{w}^T (\\mathbf{m_2} - \\mathbf{m_1})] [\\mathbf{w}^T (\\mathbf{m_2} - \\mathbf{m_1})]^T = \\mathbf{w}^T \\mathbf{S_B} \\mathbf{w}$$\n\nWhere we have defined:\n\n$$\\mathbf{S}_{\\mathbf{B}} = (\\mathbf{m}_2 - \\mathbf{m}_1)(\\mathbf{m}_2 - \\mathbf{m}_1)^T$$\n\nAnd ti is the same for the denominator:\n\ndenominator = \n$$\\sum_{n \\in C_1} [\\mathbf{w}^T (\\mathbf{x_n} - \\mathbf{m_1})]^2 + \\sum_{n \\in C_2} [\\mathbf{w}^T (\\mathbf{x_n} - \\mathbf{m_2})]^2$$\n= \n$$\\mathbf{w}^T \\mathbf{S_{w1}} \\mathbf{w} + \\mathbf{w}^T \\mathbf{S_{w2}} \\mathbf{w}$$\n= \n$$\\mathbf{w}^T \\mathbf{S_{w}} \\mathbf{w}$$\n\nWhere we have defined:\n\n$$\\mathbf{S_w} = \\sum_{n \\in C_1} (\\mathbf{x_n} - \\mathbf{m_1})(\\mathbf{x_n} - \\mathbf{m_1})^T + \\sum_{n \\in C_2} (\\mathbf{x_n} - \\mathbf{m_2})(\\mathbf{x_n} - \\mathbf{m_2})^T$$\n\nJust as required.",
"answer_length": 1354
},
{
"chapter": 4,
"question_number": "4.6",
"difficulty": "easy",
"question_text": "Using the definitions of the between-class and within-class covariance matrices given by (4.27: $\\mathbf{S}_{\\mathrm{B}} = (\\mathbf{m}_2 - \\mathbf{m}_1)(\\mathbf{m}_2 - \\mathbf{m}_1)^{\\mathrm{T}}$) and (4.28: $\\mathbf{S}_{W} = \\sum_{n \\in \\mathcal{C}_{1}} (\\mathbf{x}_{n} - \\mathbf{m}_{1})(\\mathbf{x}_{n} - \\mathbf{m}_{1})^{\\mathrm{T}} + \\sum_{n \\in \\mathcal{C}_{2}} (\\mathbf{x}_{n} - \\mathbf{m}_{2})(\\mathbf{x}_{n} - \\mathbf{m}_{2})^{\\mathrm{T}}.$), respectively, together with (4.34: $w_0 = -\\mathbf{w}^{\\mathrm{T}}\\mathbf{m}$) and (4.36: $\\mathbf{m} = \\frac{1}{N} \\sum_{n=1}^{N} \\mathbf{x}_n = \\frac{1}{N} (N_1 \\mathbf{m}_1 + N_2 \\mathbf{m}_2).$) and the choice of target values described in Section 4.1.5, show that the expression (4.33: $\\sum_{n=1}^{N} \\left( \\mathbf{w}^{\\mathrm{T}} \\mathbf{x}_n + w_0 - t_n \\right) \\mathbf{x}_n = 0.$) that minimizes the sum-of-squares error function can be written in the form (4.37: $\\left(\\mathbf{S}_{\\mathrm{W}} + \\frac{N_1 N_2}{N} \\mathbf{S}_{\\mathrm{B}}\\right) \\mathbf{w} = N(\\mathbf{m}_1 - \\mathbf{m}_2)$).",
"answer": "Let's follow the hint, beginning by expanding (4.33: $\\sum_{n=1}^{N} \\left( \\mathbf{w}^{\\mathrm{T}} \\mathbf{x}_n + w_0 - t_n \\right) \\mathbf{x}_n = 0.$).\n\n$$(4.33) = \\sum_{n=1}^{N} \\mathbf{w}^{T} \\mathbf{x}_{n} \\mathbf{x}_{n} + w_{0} \\sum_{n=1}^{N} \\mathbf{x}_{n} - \\sum_{n=1}^{N} t_{n} \\mathbf{x}_{n}$$\n\n$$= \\sum_{n=1}^{N} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} \\mathbf{w} - \\mathbf{w}^{T} \\mathbf{m} \\sum_{n=1}^{N} \\mathbf{x}_{n} - (\\sum_{n \\in C_{1}} t_{n} \\mathbf{x}_{n} + \\sum_{n \\in C_{2}} t_{n} \\mathbf{x}_{n})$$\n\n$$= \\sum_{n=1}^{N} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} \\mathbf{w} - \\mathbf{w}^{T} \\mathbf{m} \\cdot (N \\mathbf{m}) - (\\sum_{n \\in C_{1}} \\frac{N}{N_{1}} \\mathbf{x}_{n} + \\sum_{n \\in C_{2}} \\frac{-N}{N_{2}} \\mathbf{x}_{n})$$\n\n$$= \\sum_{n=1}^{N} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} \\mathbf{w} - N \\mathbf{w}^{T} \\mathbf{m} \\mathbf{m} - N(\\sum_{n \\in C_{1}} \\frac{1}{N_{1}} \\mathbf{x}_{n} - \\sum_{n \\in C_{2}} \\frac{1}{N_{2}} \\mathbf{x}_{n})$$\n\n$$= \\sum_{n=1}^{N} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} \\mathbf{w} - N \\mathbf{m} \\mathbf{m}^{T} \\mathbf{w} - N(\\mathbf{m}_{1} - \\mathbf{m}_{2})$$\n\n$$= [\\sum_{n=1}^{N} (\\mathbf{x}_{n} \\mathbf{x}_{n}^{T}) - N \\mathbf{m} \\mathbf{m}^{T}] \\mathbf{w} - N(\\mathbf{m}_{1} - \\mathbf{m}_{2})$$\n\nIf we let the derivative equal to 0, we will see that:\n\n$$\\left[\\sum_{n=1}^{N} (\\mathbf{x_n} \\mathbf{x_n}^T) - N \\mathbf{m} \\mathbf{m}^T\\right] \\mathbf{w} = N(\\mathbf{m_1} - \\mathbf{m_2})$$\n\nTherefore, now we need to prove:\n\n$$\\sum_{n=1}^{N} (\\mathbf{x_n} \\mathbf{x_n}^T) - N \\mathbf{m} \\mathbf{m}^T = \\mathbf{S_w} + \\frac{N_1 N_2}{N} \\mathbf{S_B}$$\n\nLet's expand the left side of the equation above:\n\n$$\\begin{split} & \\text{left} &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T - N(\\frac{N_1}{N} \\mathbf{m_1} + \\frac{N_2}{N} \\mathbf{m_2})^2 \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T - N(\\frac{N_1^2}{N^2} || \\mathbf{m_1} ||^2 + \\frac{N_2^2}{N^2} || \\mathbf{m_2} ||^2 + 2 \\frac{N_1 N_2}{N^2} \\mathbf{m_1} \\mathbf{m_2}^T) \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T - \\frac{N_1^2}{N} || \\mathbf{m_1} ||^2 - \\frac{N_2^2}{N} || \\mathbf{m_2} ||^2 - 2 \\frac{N_1 N_2}{N} \\mathbf{m_1} \\mathbf{m_2}^T \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T + (N_1 + \\frac{N_1 N_2}{N} - 2N_1) || \\mathbf{m_1} ||^2 + (N_2 + \\frac{N_1 N_2}{N} - 2N_2) || \\mathbf{m_2} ||^2 - 2 \\frac{N_1 N_2}{N} \\mathbf{m_1} \\mathbf{m_2}^T \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T + (N_1 - 2N_1) || \\mathbf{m_1} ||^2 + (N_2 - 2N_2) || \\mathbf{m_2} ||^2 + \\frac{N_1 N_2}{N} || \\mathbf{m_1} - \\mathbf{m_2} ||^2 \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T + (N_1 - 2N_1) || \\mathbf{m_1} ||^2 - 2 \\mathbf{m_1} \\cdot (N_1 \\mathbf{m_1}^T) + N_2 || \\mathbf{m_2} ||^2 - 2 \\mathbf{m_2} \\cdot (N_2 \\mathbf{m_2}^T) + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\sum_{n=1}^{N} \\mathbf{x_n} \\mathbf{x_n}^T + N_1 || \\mathbf{m_1} ||^2 - 2 \\mathbf{m_1} \\sum_{n \\in C_1} x_n^T + N_2 || \\mathbf{m_2} ||^2 - 2 \\mathbf{m_2} \\sum_{n \\in C_2} x_n^T + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\sum_{n \\in C_1} \\mathbf{x_n} \\mathbf{x_n}^T + N_1 || \\mathbf{m_1} ||^2 - 2 \\mathbf{m_1} \\sum_{n \\in C_1} x_n^T + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\sum_{n \\in C_1} (\\mathbf{x_n} \\mathbf{x_n}^T + N_2 || \\mathbf{m_2} ||^2 - 2 \\mathbf{m_2} \\sum_{n \\in C_2} x_n^T + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\sum_{n \\in C_1} 
(\\mathbf{x_n} \\mathbf{x_n}^T + || \\mathbf{m_1} ||^2 - 2 \\mathbf{m_1} x_n^T) + \\sum_{n \\in C_2} (\\mathbf{x_n} \\mathbf{x_n}^T + || \\mathbf{m_2} ||^2 - 2 \\mathbf{m_2} \\mathbf{x_n}^T) + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\sum_{n \\in C_1} || \\mathbf{x_n} - \\mathbf{m_1} ||^2 + \\sum_{n \\in C_2} || \\mathbf{x_n} - \\mathbf{m_2} ||^2 + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\\\ &= \\mathbf{S_w} + \\frac{N_1 N_2}{N} \\mathbf{S_B} \\end{aligned}$$\n\nJust as required.",
"answer_length": 3750
},
{
"chapter": 4,
"question_number": "4.7",
"difficulty": "easy",
"question_text": "Show that the logistic sigmoid function (4.59: $\\sigma(a) = \\frac{1}{1 + \\exp(-a)}$) satisfies the property $\\sigma(-a) = 1 \\sigma(a)$ and that its inverse is given by $\\sigma^{-1}(y) = \\ln \\{y/(1-y)\\}$ .",
"answer": "This problem is quite simple. We can solve it by definition. We know that logistic sigmoid function has the form:\n\n$$\\sigma(a) = \\frac{1}{1 + exp(-a)}$$\n\nTherefore, we can obtain:\n\n$$\\sigma(a) + \\sigma(-a) = \\frac{1}{1 + exp(-a)} + \\frac{1}{1 + exp(a)}$$\n\n$$= \\frac{2 + exp(a) + exp(-a)}{[1 + exp(-a)][1 + exp(a)]}$$\n\n$$= \\frac{2 + exp(a) + exp(-a)}{2 + exp(a) + exp(-a)} = 1$$\n\nNext we exchange the dependent and independent variables to obtain its inverse.\n\n$$a = \\frac{1}{1 + exp(-y)}$$\n\nWe first rearrange the equation above, which gives:\n\n$$exp(-y) = \\frac{1-a}{a}$$\n\nThen we calculate the logarithm for both sides, which gives:\n\n$$y = \\ln(\\frac{a}{1-a})$$\n\nJust as required.",
"answer_length": 680
},
{
"chapter": 4,
"question_number": "4.8",
"difficulty": "easy",
"question_text": "Using (4.57: $= \\frac{1}{1 + \\exp(-a)} = \\sigma(a)$) and (4.58: $a = \\ln \\frac{p(\\mathbf{x}|\\mathcal{C}_1)p(\\mathcal{C}_1)}{p(\\mathbf{x}|\\mathcal{C}_2)p(\\mathcal{C}_2)}$), derive the result (4.65: $p(\\mathcal{C}_1|\\mathbf{x}) = \\sigma(\\mathbf{w}^{\\mathrm{T}}\\mathbf{x} + w_0)$) for the posterior class probability in the two-class generative model with Gaussian densities, and verify the results (4.66: $\\mathbf{w} = \\mathbf{\\Sigma}^{-1}(\\boldsymbol{\\mu}_1 - \\boldsymbol{\\mu}_2)$) and (4.67: $w_0 = -\\frac{1}{2} \\boldsymbol{\\mu}_1^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu}_1 + \\frac{1}{2} \\boldsymbol{\\mu}_2^{\\mathrm{T}} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu}_2 + \\ln \\frac{p(\\mathcal{C}_1)}{p(\\mathcal{C}_2)}.$) for the parameters $\\mathbf{w}$ and $w_0$ .",
"answer": "According to (4.58: $a = \\ln \\frac{p(\\mathbf{x}|\\mathcal{C}_1)p(\\mathcal{C}_1)}{p(\\mathbf{x}|\\mathcal{C}_2)p(\\mathcal{C}_2)}$) and (4.64: $p(\\mathbf{x}|\\mathcal{C}_k) = \\frac{1}{(2\\pi)^{D/2}} \\frac{1}{|\\mathbf{\\Sigma}|^{1/2}} \\exp\\left\\{-\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu}_k)^{\\mathrm{T}} \\mathbf{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}_k)\\right\\}.$), we can write:\n\n$$a = \\ln \\frac{p(\\mathbf{x}|C_1)p(C_1)}{p(\\mathbf{x}|C_2)p(C_2)}$$\n\n$$= \\ln p(\\mathbf{x}|C_1) - \\ln p(\\mathbf{x}|C_2) + \\ln \\frac{p(C_1)}{p(C_2)}$$\n\n$$= -\\frac{1}{2}(\\mathbf{x} - \\boldsymbol{\\mu}_1)^T \\boldsymbol{\\Sigma}^{-1}(\\mathbf{x} - \\boldsymbol{\\mu}_1) + \\frac{1}{2}(\\mathbf{x} - \\boldsymbol{\\mu}_2)^T \\boldsymbol{\\Sigma}^{-1}(\\mathbf{x} - \\boldsymbol{\\mu}_2) + \\ln \\frac{p(C_1)}{p(C_2)}$$\n\n$$= \\boldsymbol{\\Sigma}^{-1}(\\boldsymbol{\\mu}_1 - \\boldsymbol{\\mu}_2)\\mathbf{x} - \\frac{1}{2}\\boldsymbol{\\mu}_1^T \\boldsymbol{\\Sigma}^{-1}\\boldsymbol{\\mu}_1 + \\frac{1}{2}\\boldsymbol{\\mu}_2^T \\boldsymbol{\\Sigma}^{-1}\\boldsymbol{\\mu}_2 + \\ln \\frac{p(C_1)}{p(C_2)}$$\n\n$$= \\mathbf{w}^T \\mathbf{x} + w_0$$\n\nWhere in the last second step, we rearrange the term according to $\\mathbf{x}$ , i.e., its quadratic, linear, constant term. We have also defined:\n\n$$\\mathbf{w} = \\mathbf{\\Sigma}^{-1}(\\boldsymbol{\\mu_1} - \\boldsymbol{\\mu_2})$$\n\nAnd\n\n$$w_0 = -\\frac{1}{2} \\boldsymbol{\\mu_1}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu_1} + \\frac{1}{2} \\boldsymbol{\\mu_2}^T \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu_2} + \\ln \\frac{p(C_1)}{p(C_2)}$$\n\nFinally, since $p(C_1|\\mathbf{x}) = \\sigma(a)$ as stated in (4.57: $= \\frac{1}{1 + \\exp(-a)} = \\sigma(a)$), we have $p(C_1|\\mathbf{x}) = \\sigma(\\mathbf{w}^T\\mathbf{x} + w_0)$ just as required.",
"answer_length": 1705
},
{
"chapter": 4,
"question_number": "4.9",
"difficulty": "easy",
"question_text": "Consider a generative classification model for K classes defined by prior class probabilities $p(\\mathcal{C}_k) = \\pi_k$ and general class-conditional densities $p(\\phi|\\mathcal{C}_k)$ where $\\phi$ is the input feature vector. Suppose we are given a training data set $\\{\\phi_n, \\mathbf{t}_n\\}$ where $n=1,\\ldots,N$ , and $\\mathbf{t}_n$ is a binary target vector of length K that uses the 1-of-K coding scheme, so that it has components $t_{nj} = I_{jk}$ if pattern n is from class $\\mathcal{C}_k$ . Assuming that the data points are drawn independently from this model, show that the maximum-likelihood solution for the prior probabilities is given by\n\n$$\\pi_k = \\frac{N_k}{N} \\tag{4.159}$$\n\nwhere $N_k$ is the number of data points assigned to class $C_k$ .",
"answer": "We begin by writing down the likelihood function.\n\n$$p(\\{\\phi_{\\mathbf{n}}, t_n\\} | \\pi_1, \\pi_2, ..., \\pi_K) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} [p(\\phi_{\\mathbf{n}} | C_k) p(C_k)]^{t_{nk}}$$\n$$= \\prod_{n=1}^{N} \\prod_{k=1}^{K} [\\pi_k p(\\phi_{\\mathbf{n}} | C_k)]^{t_{nk}}$$\n\nHence we can obtain the expression for the logarithm likelihood:\n\n$$\\ln p = \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\left[ \\ln \\pi_k + \\ln p(\\boldsymbol{\\phi_n}|C_k) \\right] \\propto \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\ln \\pi_k$$\n\nSince there is a constraint on $\\pi_k$ , so we need to add a Lagrange Multiplier to the expression, which becomes:\n\n$$L = \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\ln \\pi_k + \\lambda (\\sum_{k=1}^{K} \\pi_k - 1)$$\n\nWe calculate the derivative of the expression above with regard to $\\pi_k$ :\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{t_{nk}}{\\pi_k} + \\lambda$$\n\nAnd if we set the derivative equal to 0, we can obtain:\n\n$$\\pi_k = -\\left(\\sum_{n=1}^N t_{nk}\\right)/\\lambda = -\\frac{N_k}{\\lambda} \\tag{*}$$\n\nAnd if we preform summation on both sides with regard to k, we can see that:\n\n$$1 = -(\\sum_{k=1}^{K} N_k)/\\lambda = -\\frac{N}{\\lambda}$$\n\nWhich gives $\\lambda = -N$ , and substitute it into (\\*), we can obtain $\\pi_k = N_k/N$ .\n\n# **Problem 4.10 Solution**\n\nThis time, we focus on the term which dependent on $\\mu_k$ and $\\Sigma$ in the logarithm likelihood.\n\n$$\\ln p = \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\left[ \\ln \\pi_k + \\ln p(\\boldsymbol{\\phi_n}|C_k) \\right] \\propto \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\ln p(\\boldsymbol{\\phi_n}|C_k)$$\n\nProvided $p(\\phi|C_k) = \\mathcal{N}(\\phi|\\boldsymbol{\\mu_k}, \\boldsymbol{\\Sigma})$ , we can further derive:\n\n$$\\ln p \\propto \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\left[ -\\frac{1}{2} \\ln |\\mathbf{\\Sigma}| - \\frac{1}{2} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k}) \\mathbf{\\Sigma}^{-1} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k})^T \\right]$$\n\nWe first calculate the derivative of the expression above with regard to $\\mu_k$ :\n\n$$\\frac{\\partial \\ln p}{\\partial \\boldsymbol{\\mu_k}} = \\sum_{n=1}^{N} t_{nk} \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k})$$\n\nWe set the derivative equals to 0, which gives:\n\n$$\\sum_{n=1}^{N} t_{nk} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\phi_n} = \\sum_{n=1}^{N} t_{nk} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu_k} = N_k \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu_k}$$\n\nTherefore, if we multiply both sides by $\\Sigma/N_k$ , we will obtain (4.161: $\\mu_k = \\frac{1}{N_k} \\sum_{n=1}^{N} t_{nk} \\phi_n$). 
Now let's calculate the derivative of $\\ln p$ with regard to $\\Sigma$ , which gives:\n\n$$\\frac{\\partial \\ln p}{\\partial \\boldsymbol{\\Sigma}} = \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\left( -\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\right) - \\frac{1}{2} \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k}) \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k})^T$$\n\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} -\\frac{t_{nk}}{2} \\boldsymbol{\\Sigma}^{-1} - \\frac{1}{2} \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\sum_{k=1}^{K} \\sum_{n=1}^{N} t_{nk} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k}) \\boldsymbol{\\Sigma}^{-1} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k})^T$$\n\n$$= \\sum_{n=1}^{N} -\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} - \\frac{1}{2} \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\sum_{k=1}^{K} N_k \\operatorname{Tr}(\\boldsymbol{\\Sigma}^{-1} \\mathbf{S_k})$$\n\n$$= -\\frac{N}{2} \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\sum_{k=1}^{K} N_k \\boldsymbol{\\Sigma}^{-1} \\mathbf{S_k} \\boldsymbol{\\Sigma}^{-1}$$\n\nWhere we have denoted\n\n$$\\mathbf{S_k} = \\frac{1}{N_k} \\sum_{n=1}^{N} t_{nk} (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k}) (\\boldsymbol{\\phi_n} - \\boldsymbol{\\mu_k})^T$$\n\nNow we set the derivative equals to 0, and rearrange the equation, which gives:\n\n$$\\mathbf{\\Sigma} = \\sum_{k=1}^{K} \\frac{N_k}{N} \\mathbf{S_k}$$",
"answer_length": 3903
}
]
},
{
"chapter_number": 5,
"total_questions": 38,
"difficulty_breakdown": {
"easy": 27,
"medium": 4,
"hard": 2,
"unknown": 8
},
"questions": [
{
"chapter": 5,
"question_number": "5.1",
"difficulty": "medium",
"question_text": "\\star)$ Consider a two-layer network function of the form (5.7: $y_k(\\mathbf{x}, \\mathbf{w}) = \\sigma \\left( \\sum_{j=1}^M w_{kj}^{(2)} h \\left( \\sum_{i=1}^D w_{ji}^{(1)} x_i + w_{j0}^{(1)} \\right) + w_{k0}^{(2)} \\right)$) in which the hiddenunit nonlinear activation functions $g(\\cdot)$ are given by logistic sigmoid functions of the form\n\n$$\\sigma(a) = \\{1 + \\exp(-a)\\}^{-1}.$$\n (5.191: $\\sigma(a) = \\{1 + \\exp(-a)\\}^{-1}.$)\n\nShow that there exists an equivalent network, which computes exactly the same function, but with hidden unit activation functions given by $\\tanh(a)$ where the $\\tanh$ function is defined by (5.59: $\\tanh(a) = \\frac{e^a - e^{-a}}{e^a + e^{-a}}.$). Hint: first find the relation between $\\sigma(a)$ and $\\tanh(a)$ , and then show that the parameters of the two networks differ by linear transformations.",
"answer": "Based on definition of $tanh(\\cdot)$ , we can obtain:\n\n$$tanh(a) = \\frac{e^{a} - e^{-a}}{e^{a} + e^{-a}}$$\n$$= -1 + \\frac{2e^{a}}{e^{a} + e^{-a}}$$\n$$= -1 + 2\\frac{1}{1 + e^{-2a}}$$\n$$= 2\\sigma(2a) - 1$$\n\nIf we have parameters $w_{ji}^{(1s)}$ , $w_{j0}^{(1s)}$ and $w_{kj}^{(2s)}$ , $w_{k0}^{(2s)}$ for a network whose hidden units use logistic sigmoid function as activation and $w_{ji}^{(1t)}$ , $w_{j0}^{(1t)}$ and $w_{kj}^{(2t)}$ , $w_{k0}^{(2t)}$ for another one using $tanh(\\cdot)$ , for the network using $tanh(\\cdot)$ as activation, we can write down the following expression by using (5.4):\n\n$$\\begin{split} a_k^{(t)} &= \\sum_{j=1}^M w_{kj}^{(2t)} tanh(a_j^{(t)}) + w_{k0}^{(2t)} \\\\ &= \\sum_{j=1}^M w_{kj}^{(2t)} [2\\sigma(2a_j^{(t)}) - 1] + w_{k0}^{(2t)} \\\\ &= \\sum_{j=1}^M 2w_{kj}^{(2t)} \\sigma(2a_j^{(t)}) + \\left[ -\\sum_{j=1}^M w_{kj}^{(2t)} + w_{k0}^{(2t)} \\right] \\end{split}$$\n\nWhat's more, we also have:\n\n$$a_k^{(s)} = \\sum_{j=1}^M w_{kj}^{(2s)} \\sigma(a_j^{(s)}) + w_{k0}^{(2s)}$$\n\nTo make the two networks equivalent, i.e., $a_k^{(s)} = a_k^{(t)}$ , we should make sure:\n\n$$\\begin{cases} a_{j}^{(s)} = 2a_{j}^{(t)} \\\\ w_{kj}^{(2s)} = 2w_{kj}^{(2t)} \\\\ w_{k0}^{(2s)} = -\\sum_{j=1}^{M} w_{kj}^{(2t)} + w_{k0}^{(2t)} \\end{cases}$$\n\nNote that the first condition can be achieved by simply enforcing:\n\n$$w_{ii}^{(1s)} = 2w_{ii}^{(1t)}$$\n, and $w_{i0}^{(1s)} = 2w_{i0}^{(1t)}$ \n\nTherefore, these two networks are equivalent under a linear transformation.",
"answer_length": 1484
},
{
"chapter": 5,
"question_number": "5.10",
"difficulty": "easy",
"question_text": "Consider a Hessian matrix $\\mathbf{H}$ with eigenvector equation (5.33: $\\mathbf{H}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$). By setting the vector $\\mathbf{v}$ in (5.39: $\\mathbf{v}^{\\mathrm{T}}\\mathbf{H}\\mathbf{v} = \\sum_{i} c_{i}^{2} \\lambda_{i}$) equal to each of the eigenvectors $\\mathbf{u}_i$ in turn, show that $\\mathbf{H}$ is positive definite if, and only if, all of its eigenvalues are positive.",
"answer": "It is obvious. Suppose **H** is positive definite, i.e., (5.37) holds. We set **v** equals to the eigenvector of **H**, i.e., $\\mathbf{v} = \\mathbf{u_i}$ which gives:\n\n$$\\mathbf{v}^T \\mathbf{H} \\mathbf{v} = \\mathbf{v}^T (\\mathbf{H} \\mathbf{v}) = \\mathbf{u_i}^T \\lambda_i \\mathbf{u_i} = \\lambda_i ||\\mathbf{u_i}||^2$$\n\nTherefore, every $\\lambda_i$ should be positive. On the other hand, If all the eigenvalues $\\lambda_i$ are positive, from (5.38: $\\mathbf{v} = \\sum_{i} c_i \\mathbf{u}_i.$) and (5.39: $\\mathbf{v}^{\\mathrm{T}}\\mathbf{H}\\mathbf{v} = \\sum_{i} c_{i}^{2} \\lambda_{i}$), we see that **H** is positive definite.",
"answer_length": 627
},
{
"chapter": 5,
"question_number": "5.11",
"difficulty": "medium",
"question_text": "Consider a quadratic error function defined by (5.32: $E(\\mathbf{w}) = E(\\mathbf{w}^*) + \\frac{1}{2}(\\mathbf{w} - \\mathbf{w}^*)^{\\mathrm{T}}\\mathbf{H}(\\mathbf{w} - \\mathbf{w}^*)$), in which the Hessian matrix **H** has an eigenvalue equation given by (5.33: $\\mathbf{H}\\mathbf{u}_i = \\lambda_i \\mathbf{u}_i$). Show that the contours of constant error are ellipses whose axes are aligned with the eigenvectors $\\mathbf{u}_i$ , with lengths that are inversely proportional to the square root of the corresponding eigenvalues $\\lambda_i$ .",
"answer": "It is obvious. We follow (5.35: $\\mathbf{w} - \\mathbf{w}^* = \\sum_i \\alpha_i \\mathbf{u}_i.$) and then write the error function in the form of (5.36: $E(\\mathbf{w}) = E(\\mathbf{w}^*) + \\frac{1}{2} \\sum_{i} \\lambda_i \\alpha_i^2.$). To obtain the contour, we enforce $E(\\mathbf{w})$ to equal to a constant C.\n\n$$E(\\mathbf{w}) = E(\\mathbf{w}^*) + \\frac{1}{2} \\sum_{i} \\lambda_i \\alpha_i^2 = C$$\n\nWe rearrange the equation above, and then obtain:\n\n$$\\sum_{i} \\lambda_i \\alpha_i^2 = B$$\n\nWhere $B = 2C - 2E(\\mathbf{w}^*)$ is a constant. Therefore, the contours of constant error are ellipses whose axes are aligned with the eigenvector $\\mathbf{u_i}$ of the Hessian Matrix $\\mathbf{H}$ . The length for the jth axis is given by setting all $\\alpha_i = 0, s.t. i \\neq j$ :\n\n$$\\alpha_j = \\sqrt{\\frac{B}{\\lambda_j}}$$\n\nIn other words, the length is inversely proportional to the square root of the corresponding eigenvalue $\\lambda_i$ .",
"answer_length": 936
},
{
"chapter": 5,
"question_number": "5.12",
"difficulty": "medium",
"question_text": "By considering the local Taylor expansion (5.32: $E(\\mathbf{w}) = E(\\mathbf{w}^*) + \\frac{1}{2}(\\mathbf{w} - \\mathbf{w}^*)^{\\mathrm{T}}\\mathbf{H}(\\mathbf{w} - \\mathbf{w}^*)$) of an error function about a stationary point $\\mathbf{w}^*$ , show that the necessary and sufficient condition for the stationary point to be a local minimum of the error function is that the Hessian matrix $\\mathbf{H}$ , defined by (5.30: $(\\mathbf{H})_{ij} \\equiv \\left. \\frac{\\partial E}{\\partial w_i \\partial w_j} \\right|_{\\mathbf{w} = \\widehat{\\mathbf{w}}}.$) with $\\hat{\\mathbf{w}} = \\mathbf{w}^*$ , be positive definite.",
"answer": "If **H** is positive definite, we know the second term on the right side of (5.32: $E(\\mathbf{w}) = E(\\mathbf{w}^*) + \\frac{1}{2}(\\mathbf{w} - \\mathbf{w}^*)^{\\mathrm{T}}\\mathbf{H}(\\mathbf{w} - \\mathbf{w}^*)$) will be positive for arbitrary **w**. Therefore, $E(\\mathbf{w}^*)$ is a local minimum. On the other hand, if $\\mathbf{w}^*$ is a local minimum, we have\n\n$$E(\\mathbf{w}^*) - E(\\mathbf{w}) = -\\frac{1}{2}(\\mathbf{w} - \\mathbf{w}^*)^T \\mathbf{H}(\\mathbf{w} - \\mathbf{w}^*) < 0$$\n\nIn other words, for arbitrary $\\mathbf{w}$ , $(\\mathbf{w} - \\mathbf{w}^*)^T \\mathbf{H} (\\mathbf{w} - \\mathbf{w}^*) > 0$ , according to the previous problem, we know that this means $\\mathbf{H}$ is positive definite.",
"answer_length": 708
},
{
"chapter": 5,
"question_number": "5.13",
"difficulty": "easy",
"question_text": "Show that as a consequence of the symmetry of the Hessian matrix $\\mathbf{H}$ , the number of independent elements in the quadratic error function (5.28: $E(\\mathbf{w}) \\simeq E(\\widehat{\\mathbf{w}}) + (\\mathbf{w} - \\widehat{\\mathbf{w}})^{\\mathrm{T}} \\mathbf{b} + \\frac{1}{2} (\\mathbf{w} - \\widehat{\\mathbf{w}})^{\\mathrm{T}} \\mathbf{H} (\\mathbf{w} - \\widehat{\\mathbf{w}})$) is given by W(W+3)/2.",
"answer": "It is obvious. Suppose that there are W adaptive parameters in the network. Therefore, **b** has W independent parameters. Since **H** is symmetric, there should be W(W+1)/2 independent parameters in it. Therefore, there are W + W(W+1)/2 = W(W+3)/2 parameters in total.",
"answer_length": 269
},
{
"chapter": 5,
"question_number": "5.14",
"difficulty": "easy",
"question_text": "By making a Taylor expansion, verify that the terms that are $O(\\epsilon)$ cancel on the right-hand side of (5.69: $\\frac{\\partial E_n}{\\partial w_{ji}} = \\frac{E_n(w_{ji} + \\epsilon) - E_n(w_{ji} - \\epsilon)}{2\\epsilon} + O(\\epsilon^2).$).",
"answer": "It is obvious. Since we have\n\n$$E_n(w_{ji} + \\epsilon) = E_n(w_{ji}) + \\epsilon E'_n(w_{ji}) + \\frac{\\epsilon^2}{2} E''_n(w_{ji}) + O(\\epsilon^3)$$\n\nAnd\n\n$$E_n(w_{ji} - \\epsilon) = E_n(w_{ji}) - \\epsilon E'_n(w_{ji}) + \\frac{\\epsilon^2}{2} E''_n(w_{ji}) + O(\\epsilon^3)$$\n\nWe combine those two equations, which gives,\n\n$$E_n(w_{ii} + \\epsilon) - E_n(w_{ii} - \\epsilon) = 2\\epsilon E'_n(w_{ii}) + O(\\epsilon^3)$$\n\nRearrange the equation above, we obtain what has been required.",
"answer_length": 476
},
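As a quick illustration of the cancellation in Problem 5.14 (a sketch only; sin is used as a stand-in for E_n), the central-difference estimate has error O(epsilon^2) while the one-sided estimate has error O(epsilon):

```python
# Compare one-sided and central finite differences against an exact derivative.
import numpy as np

E  = np.sin          # stand-in for E_n(w_ji)
dE = np.cos          # its exact derivative
w  = 0.8

for eps in [1e-1, 1e-2, 1e-3]:
    forward = (E(w + eps) - E(w)) / eps                # error O(eps)
    central = (E(w + eps) - E(w - eps)) / (2 * eps)    # error O(eps^2), as in (5.69)
    print(f"eps={eps:.0e}  one-sided err={abs(forward - dE(w)):.2e}"
          f"  central err={abs(central - dE(w)):.2e}")
```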
{
"chapter": 5,
"question_number": "5.15",
"difficulty": "medium",
"question_text": "In Section 5.3.4, we derived a procedure for evaluating the Jacobian matrix of a neural network using a backpropagation procedure. Derive an alternative formalism for finding the Jacobian based on *forward propagation* equations.",
"answer": "It is obvious. The back propagation formalism starts from performing summation near the input, as shown in (5.73: $= \\sum_j w_{ji} \\frac{\\partial y_k}{\\partial a_j}$). By symmetry, the forward propagation formalism should start near the output.\n\n$$J_{ki} = \\frac{\\partial y_k}{\\partial x_i} = \\frac{\\partial h(\\alpha_k)}{\\partial x_i} = h'(\\alpha_k) \\frac{\\partial \\alpha_k}{\\partial x_i} \\tag{*}$$\n\nWhere $h(\\cdot)$ is the activation function at the output node $a_k$ . Considering all the units j, which have links to unit k:\n\n$$\\frac{\\partial a_k}{\\partial x_i} = \\sum_j \\frac{\\partial a_k}{\\partial a_j} \\frac{\\partial a_j}{\\partial x_i} = \\sum_j w_{kj} h'(a_j) \\frac{\\partial a_j}{\\partial x_i} \\tag{**}$$\n\nWhere we have used:\n\n$$a_k = \\sum_j w_{kj} z_j, \\quad z_j = h(a_j)$$\n\nIt is similar for $\\partial a_j/\\partial x_i$ . In this way we have obtained a recursive formula starting from the input node:\n\n$$\\frac{\\partial a_l}{\\partial x_i} = \\begin{cases} w_{li}, & \\text{if there is a link from input unit } i \\text{ to } l\\\\ 0, & \\text{if there isn't a link from input unit } i \\text{ to } l \\end{cases}$$\n\nUsing recursive formula (\\*\\*) and then (\\*), we can obtain the Jacobian Matrix.",
"answer_length": 1199
},
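The forward-propagation recursion of Problem 5.15 can be checked numerically. The sketch below assumes a small two-layer network with tanh hidden units and linear outputs (an illustrative architecture, not specified in the problem) and compares the forward-mode Jacobian with central differences.

```python
# Forward-mode Jacobian dy_k/dx_i for a tiny two-layer network, checked numerically.
import numpy as np

rng = np.random.default_rng(1)
D, M, K = 4, 5, 3
W1, b1 = rng.standard_normal((M, D)), rng.standard_normal(M)
W2, b2 = rng.standard_normal((K, M)), rng.standard_normal(K)

def net(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian_forward(x):
    a = W1 @ x + b1
    da_dx = W1                                       # da_j/dx_i
    dz_dx = (1 - np.tanh(a) ** 2)[:, None] * da_dx   # chain rule through z_j = tanh(a_j)
    return W2 @ dz_dx                                # dy_k/dx_i, shape (K, D)

x = rng.standard_normal(D)
J = jacobian_forward(x)

eps = 1e-6
J_num = np.column_stack([(net(x + eps * e) - net(x - eps * e)) / (2 * eps)
                         for e in np.eye(D)])
print(np.max(np.abs(J - J_num)))                     # ~1e-9 or smaller: the recursion matches
```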
{
"chapter": 5,
"question_number": "5.16",
"difficulty": "easy",
"question_text": "The outer product approximation to the Hessian matrix for a neural network using a sum-of-squares error function is given by (5.84: $\\mathbf{H} \\simeq \\sum_{n=1}^{N} \\mathbf{b}_n \\mathbf{b}_n^{\\mathrm{T}}$). Extend this result to the case of multiple outputs.",
"answer": "It is obvious. We begin by writing down the error function.\n\n$$E = \\frac{1}{2} \\sum_{n=1}^{N} ||\\mathbf{y_n} - \\mathbf{t_n}||^2 = \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{m=1}^{M} (y_{n,m} - t_{n,m})^2$$\n\nWhere the subscript *m* denotes the *m*the element of the vector. Then we can write down the Hessian Matrix as before.\n\n$$\\mathbf{H} = \\nabla \\nabla E = \\sum_{n=1}^{N} \\sum_{m=1}^{M} \\nabla \\mathbf{y_{n,m}} \\nabla \\mathbf{y_{n,m}} + \\sum_{n=1}^{N} \\sum_{m=1}^{M} (y_{n,m} - t_{n,m}) \\nabla \\nabla \\mathbf{y_{n,m}}$$\n\nSimilarly, we now know that the Hessian Matrix can be approximated as:\n\n$$\\mathbf{H} \\simeq \\sum_{n=1}^{N} \\sum_{m=1}^{M} \\mathbf{b}_{n,m} \\mathbf{b}_{n,m}^{T}$$\n\nWhere we have defined:\n\n$$\\mathbf{b}_{n,m} = \\nabla y_{n,m}$$",
"answer_length": 738
},
{
"chapter": 5,
"question_number": "5.17",
"difficulty": "easy",
"question_text": "Consider a squared loss function of the form\n\n$$E = \\frac{1}{2} \\iint \\left\\{ y(\\mathbf{x}, \\mathbf{w}) - t \\right\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt$$\n (5.193: $E = \\frac{1}{2} \\iint \\left\\{ y(\\mathbf{x}, \\mathbf{w}) - t \\right\\}^2 p(\\mathbf{x}, t) \\, d\\mathbf{x} \\, dt$)\n\nwhere $y(\\mathbf{x}, \\mathbf{w})$ is a parametric function such as a neural network. The result (1.89: $y(\\mathbf{x}) = \\frac{\\int tp(\\mathbf{x}, t) dt}{p(\\mathbf{x})} = \\int tp(t|\\mathbf{x}) dt = \\mathbb{E}_t[t|\\mathbf{x}]$) shows that the function $y(\\mathbf{x}, \\mathbf{w})$ that minimizes this error is given by the conditional expectation of t given $\\mathbf{x}$ . Use this result to show that the second derivative of E with respect to two elements $w_r$ and $w_s$ of the vector $\\mathbf{w}$ , is given by\n\n$$\\frac{\\partial^2 E}{\\partial w_r \\partial w_s} = \\int \\frac{\\partial y}{\\partial w_r} \\frac{\\partial y}{\\partial w_s} p(\\mathbf{x}) \\, d\\mathbf{x}.$$\n (5.194: $\\frac{\\partial^2 E}{\\partial w_r \\partial w_s} = \\int \\frac{\\partial y}{\\partial w_r} \\frac{\\partial y}{\\partial w_s} p(\\mathbf{x}) \\, d\\mathbf{x}.$)\n\nNote that, for a finite sample from $p(\\mathbf{x})$ , we obtain (5.84: $\\mathbf{H} \\simeq \\sum_{n=1}^{N} \\mathbf{b}_n \\mathbf{b}_n^{\\mathrm{T}}$).",
"answer": "It is obvious.\n\n$$\\frac{\\partial^{2} E}{\\partial w_{r} \\partial w_{s}} = \\frac{\\partial}{\\partial w_{r}} \\frac{1}{2} \\int \\int 2(y-t) \\frac{\\partial y}{\\partial w_{s}} p(\\mathbf{x}, t) d\\mathbf{x} dt \n= \\int \\int \\left[ (y-t) \\frac{\\partial y^{2}}{\\partial w_{r} \\partial w_{s}} + \\frac{\\partial y}{\\partial w_{s}} \\frac{\\partial y}{\\partial w_{r}} \\right] p(\\mathbf{x}, t) d\\mathbf{x} dt$$\n\nSince we know that\n\n$$\\int \\int (y-t) \\frac{\\partial y^2}{\\partial w_r \\partial w_s} p(\\mathbf{x}, t) d\\mathbf{x} dt = \\int \\int (y-t) \\frac{\\partial y^2}{\\partial w_r \\partial w_s} p(t|\\mathbf{x}) p(\\mathbf{x}) d\\mathbf{x} dt \n= \\int \\frac{\\partial y^2}{\\partial w_r \\partial w_s} \\left\\{ \\int (y-t) p(t|\\mathbf{x}) dt \\right\\} p(\\mathbf{x}) d\\mathbf{x} \n= 0$$\n\nNote that in the last step, we have used $y = \\int t p(t|\\mathbf{x}) dt$ . Then we substitute it into the second derivative, which gives,\n\n$$\\frac{\\partial^{2} E}{\\partial w_{r} \\partial w_{s}} = \\int \\int \\frac{\\partial y}{\\partial w_{s}} \\frac{\\partial y}{\\partial w_{r}} p(\\mathbf{x}, t) d\\mathbf{x} dt$$\n\n$$= \\int \\frac{\\partial y}{\\partial w_{s}} \\frac{\\partial y}{\\partial w_{r}} p(\\mathbf{x}) d\\mathbf{x}$$",
"answer_length": 1169
},
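The outer-product approximation of Problems 5.16 and 5.17 becomes exact when the model is linear in the weights, since then the neglected term containing the second derivatives of y vanishes. A small sketch with synthetic data (illustrative values only):

```python
# For y(x, w) = w^T x with sum-of-squares error, the exact Hessian equals sum_n b_n b_n^T.
import numpy as np

rng = np.random.default_rng(2)
N, D = 20, 3
X = rng.standard_normal((N, D))

# E(w) = 0.5 * sum_n (w^T x_n - t_n)^2, so the exact Hessian is X^T X (independent of t).
H_exact = X.T @ X

# Outer-product form: b_n = grad_w y(x_n, w) = x_n.
H_outer = sum(np.outer(x, x) for x in X)

print(np.allclose(H_exact, H_outer))    # True: the approximation is exact for linear models
```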
{
"chapter": 5,
"question_number": "5.18",
"difficulty": "easy",
"question_text": "Consider a two-layer network of the form shown in Figure 5.1 with the addition of extra parameters corresponding to skip-layer connections that go directly from the inputs to the outputs. By extending the discussion of Section 5.3.2, write down the equations for the derivatives of the error function with respect to these additional parameters.",
"answer": "By analogy with section 5.3.2, we denote $w_{ki}^{\\rm skip}$ as those parameters corresponding to skip-layer connections, i.e., it connects the input unit i with the output unit k. Note that the discussion in section 5.3.2 is still correct and now we only need to obtain the derivative of the error function with respect to the additional parameters $w_{ki}^{\\rm skip}$ .\n\n$$\\frac{\\partial E_n}{\\partial w_{ki}^{\\text{skip}}} = \\frac{\\partial E_n}{\\partial a_k} \\frac{\\partial a_k}{\\partial w_{ki}^{\\text{skip}}} = \\delta_k x_i$$\n\nWhere we have used $a_k = y_k$ due to linear activation at the output unit and:\n\n$$y_k = \\sum_{j=0}^{M} w_{kj}^{(2)} z_j + \\sum_{i} w_{ki}^{\\text{skip}} x_i$$\n\nWhere the first term on the right side corresponds to those information conveying from the hidden unit to the output and the second term corresponds to the information conveying directly from the input to output.",
"answer_length": 908
},
{
"chapter": 5,
"question_number": "5.19",
"difficulty": "easy",
"question_text": "Derive the expression (5.85: $\\mathbf{H} \\simeq \\sum_{n=1}^{N} y_n (1 - y_n) \\mathbf{b}_n \\mathbf{b}_n^{\\mathrm{T}}.$) for the outer product approximation to the Hessian matrix for a network having a single output with a logistic sigmoid output-unit activation function and a cross-entropy error function, corresponding to the result (5.84: $\\mathbf{H} \\simeq \\sum_{n=1}^{N} \\mathbf{b}_n \\mathbf{b}_n^{\\mathrm{T}}$) for the sum-of-squares error function.",
"answer": "The error function is given by (5.21: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\}$). Therefore, we can obtain:\n\n$$\\begin{split} \\nabla E(\\mathbf{w}) &= \\sum_{n=1}^{N} \\frac{\\partial E}{\\partial a_n} \\nabla a_n \\\\ &= -\\sum_{n=1}^{N} \\frac{\\partial}{\\partial a_n} \\big[ t_n \\ln y_n + (1-t_n) \\ln(1-y_n) \\big] \\nabla a_n \\\\ &= -\\sum_{n=1}^{N} \\big\\{ \\frac{\\partial (t_n \\ln y_n)}{\\partial y_n} \\frac{\\partial y_n}{\\partial a_n} + \\frac{\\partial (1-t_n) \\ln(1-y_n)}{\\partial y_n} \\frac{\\partial y_n}{\\partial a_n} \\big\\} \\nabla a_n \\\\ &= -\\sum_{n=1}^{N} \\big[ \\frac{t_n}{y_n} \\cdot y_n (1-y_n) + (1-t_n) \\frac{-1}{1-y_n} \\cdot y_n (1-y_n) \\big] \\nabla a_n \\\\ &= -\\sum_{n=1}^{N} \\big[ t_n (1-y_n) - (1-t_n) y_n \\big] \\nabla a_n \\\\ &= \\sum_{n=1}^{N} (y_n - t_n) \\nabla a_n \\end{split}$$\n\nWhere we have used the conclusion of problem 5.6. Now we calculate the second derivative.\n\n$$\\nabla \\nabla E(\\mathbf{w}) = \\sum_{n=1}^{N} \\left\\{ y_n (1 - y_n) \\nabla a_n \\nabla a_n + (y_n - t_n) \\nabla \\nabla a_n \\right\\}$$\n\nSimilarly, we can drop the last term, which gives exactly what has been asked.\n\n## **Problem 5.20 Solution**\n\nWe begin by writing down the error function.\n\n$$E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{nk} \\ln y_{nk}$$\n\nHere we assume that the output of the network has K units in total and there are W weights parameters in the network. WE first calculate the first derivative:\n\n$$\\nabla E = \\sum_{n=1}^{N} \\frac{dE}{d\\mathbf{a}_n} \\cdot \\nabla \\mathbf{a}_n$$\n\n$$= -\\sum_{n=1}^{N} \\left[ \\frac{d}{d\\mathbf{a}_n} (\\sum_{k=1}^{K} t_{nk} \\ln y_{nk}) \\right] \\cdot \\nabla \\mathbf{a}_n$$\n\n$$= \\sum_{n=1}^{N} \\mathbf{c}_n \\cdot \\nabla \\mathbf{a}_n$$\n\nNote that here $\\mathbf{c}_n = -dE/d\\mathbf{a}_n$ is a vector with size $K \\times 1$ , $\\nabla \\mathbf{a}_n$ is a matrix with size $K \\times W$ . Moreover, the operator $\\cdot$ means inner product, which gives $\\nabla E$ as a vector with size $1 \\times W$ . According to (4.106: $\\frac{\\partial y_k}{\\partial a_j} = y_k (I_{kj} - y_j)$), we can obtain the jth element of $\\mathbf{c}_n$ :\n\n$$c_{n,j} = -\\frac{\\partial}{\\partial a_j} \\left( \\sum_{k=1}^K t_{nk} \\ln y_{nk} \\right)$$\n\n$$= -\\sum_{k=1}^K \\frac{\\partial}{\\partial a_j} (t_{nk} \\ln y_{nk})$$\n\n$$= -\\sum_{k=1}^K \\frac{t_{nk}}{y_{nk}} y_{nk} (I_{kj} - y_{nj})$$\n\n$$= -\\sum_{k=1}^K t_{nk} I_{kj} + \\sum_{k=1}^K t_{nk} y_{nj}$$\n\n$$= -t_{nj} + y_{nj} \\left( \\sum_{k=1}^K t_{nk} \\right)$$\n\n$$= y_{nj} - t_{nj}$$\n\nNow we calculate the second derivative:\n\n$$\\nabla \\nabla E = \\sum_{n=1}^{N} \\left( \\frac{d\\mathbf{c}_{n}}{d\\mathbf{a}_{n}} \\nabla \\mathbf{a}_{n} \\right) \\cdot \\nabla \\mathbf{a}_{n} + \\mathbf{c}_{n} \\nabla \\nabla \\mathbf{a}_{n}$$\n\nHere $d\\mathbf{c}_n/d\\mathbf{a}_n$ is a matrix with size $K \\times K$ . Therefore, the second term can be neglected as before, which gives:\n\n$$\\mathbf{H} = \\sum_{n=1}^{N} (\\frac{d\\mathbf{c}_n}{d\\mathbf{a}_n} \\nabla \\mathbf{a}_n) \\cdot \\nabla \\mathbf{a_n}$$",
"answer_length": 2970
},
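For the cross-entropy case of Problem 5.19, the simplest check uses a single logistic output with a linear activation a_n = w^T x_n, so that the second derivatives of a_n vanish and the outer-product form sum_n y_n (1 - y_n) b_n b_n^T is the exact Hessian. A sketch with synthetic data (all values illustrative):

```python
# Compare the formula sum_n y_n(1-y_n) x_n x_n^T with a finite-difference Hessian of (5.21).
import numpy as np

rng = np.random.default_rng(3)
N, D = 30, 4
X = rng.standard_normal((N, D))
t = rng.integers(0, 2, size=N).astype(float)
w = rng.standard_normal(D)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def E(w):                                 # cross-entropy error for one logistic output
    y = sigmoid(X @ w)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

y = sigmoid(X @ w)
H_formula = (X * (y * (1 - y))[:, None]).T @ X    # sum_n y_n (1 - y_n) x_n x_n^T

eps, I = 1e-4, np.eye(D)                  # central-difference Hessian for comparison
H_num = np.array([[(E(w + eps*(I[i]+I[j])) - E(w + eps*(I[i]-I[j]))
                    - E(w - eps*(I[i]-I[j])) + E(w - eps*(I[i]+I[j]))) / (4 * eps**2)
                   for j in range(D)] for i in range(D)])
print(np.max(np.abs(H_formula - H_num)))  # small: limited only by finite-difference error
```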
{
"chapter": 5,
"question_number": "5.2",
"difficulty": "easy",
"question_text": "Show that maximizing the likelihood function under the conditional distribution (5.16: $p(\\mathbf{t}|\\mathbf{x}, \\mathbf{w}) = \\mathcal{N}\\left(\\mathbf{t}|\\mathbf{y}(\\mathbf{x}, \\mathbf{w}), \\beta^{-1}\\mathbf{I}\\right).$) for a multioutput neural network is equivalent to minimizing the sum-of-squares error function (5.11: $E(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\|\\mathbf{y}(\\mathbf{x}_n, \\mathbf{w}) - \\mathbf{t}_n\\|^2.$).",
"answer": "It is obvious. We write down the likelihood.\n\n$$p(\\mathbf{T}|\\mathbf{X}, \\mathbf{w}) = \\prod_{n=1}^{N} \\mathcal{N}(\\mathbf{t_n}|\\mathbf{y}(\\mathbf{x_n}, \\mathbf{w}), \\beta^{-1}\\mathbf{I})$$\n\nTaking the negative logarithm, we can obtain:\n\n$$E(\\mathbf{w},\\beta) = -\\ln p(\\mathbf{T}|\\mathbf{X},\\mathbf{w}) = \\frac{\\beta}{2} \\sum_{n=1}^{N} \\left[ \\left( \\mathbf{y}(\\mathbf{x_n},\\mathbf{w}) - \\mathbf{t_n} \\right)^T \\left( \\mathbf{y}(\\mathbf{x_n},\\mathbf{w}) - \\mathbf{t_n} \\right) \\right] - \\frac{NK}{2} \\ln \\beta + \\mathrm{const}$$\n\nHere we have used const to denote the term independent of both $\\mathbf{w}$ and $\\beta$ . Note that here we have used the definition of the multivariate Gaussian Distribution. What's more, we see that the covariance matrix $\\beta^{-1}\\mathbf{I}$ and the weight parameter $\\mathbf{w}$ have decoupled, which is distinct from the next problem. We can first solve $\\mathbf{w}_{\\mathbf{ML}}$ by minimizing the first term on the right of the equation above or equivalently (5.11: $E(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\|\\mathbf{y}(\\mathbf{x}_n, \\mathbf{w}) - \\mathbf{t}_n\\|^2.$), i.e., imaging $\\beta$ is fixed. Then according to the derivative of $E(\\mathbf{w},\\beta)$ with regard to $\\beta$ , we can obtain (5.17: $\\frac{1}{\\beta_{\\text{ML}}} = \\frac{1}{NK} \\sum_{n=1}^{N} \\|\\mathbf{y}(\\mathbf{x}_n, \\mathbf{w}_{\\text{ML}}) - \\mathbf{t}_n\\|^2$) and hence $\\beta_{ML}$ .",
"answer_length": 1416
},
{
"chapter": 5,
"question_number": "5.21",
"difficulty": "hard",
"question_text": "\\star \\star)$ Extend the expression (5.86: $\\mathbf{H}_N = \\sum_{n=1}^N \\mathbf{b}_n \\mathbf{b}_n^{\\mathrm{T}}$) for the outer product approximation of the Hessian matrix to the case of K > 1 output units. Hence, derive a recursive expression analogous to (5.87: $\\mathbf{H}_{L+1} = \\mathbf{H}_L + \\mathbf{b}_{L+1} \\mathbf{b}_{L+1}^{\\mathrm{T}}.$) for incrementing the number N of patterns and a similar expression for incrementing the number K of outputs. Use these results, together with the identity (5.88: $\\left(\\mathbf{M} + \\mathbf{v}\\mathbf{v}^{\\mathrm{T}}\\right)^{-1} = \\mathbf{M}^{-1} - \\frac{\\left(\\mathbf{M}^{-1}\\mathbf{v}\\right)\\left(\\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\right)}{1 + \\mathbf{v}^{\\mathrm{T}}\\mathbf{M}^{-1}\\mathbf{v}}$), to find sequential update expressions analogous to (5.89: $\\mathbf{H}_{L+1}^{-1} = \\mathbf{H}_{L}^{-1} - \\frac{\\mathbf{H}_{L}^{-1} \\mathbf{b}_{L+1} \\mathbf{b}_{L+1}^{\\mathrm{T}} \\mathbf{H}_{L}^{-1}}{1 + \\mathbf{b}_{L+1}^{\\mathrm{T}} \\mathbf{H}_{L}^{-1} \\mathbf{b}_{L+1}}.$) for finding the inverse of the Hessian by incrementally including both extra patterns and extra outputs.",
"answer": "We first write down the expression of Hessian Matrix in the case of K outputs.\n\n$$\\mathbf{H}_{N,K} = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\mathbf{b}_{n,k} \\mathbf{b}_{n,k}^{T}$$\n\nWhere $\\mathbf{b}_{n,k} = \\nabla_{\\mathbf{w}} \\mathbf{a}_{n,k}$ . Therefore, we have:\n\n$$\\mathbf{H}_{N+1,K} = \\mathbf{H}_{N,K} + \\sum_{k=1}^{K} \\mathbf{b}_{N+1,k} \\mathbf{b}_{N+1,k}^{T} = \\mathbf{H}_{N,K} + \\mathbf{B}_{N+1} \\mathbf{B}_{N+1}^{T}$$\n\nWhere $\\mathbf{B}_{N+1} = [\\mathbf{b}_{N+1,1}, \\mathbf{b}_{N+1,2}, ..., \\mathbf{b}_{N+1,K}]$ is a matrix with size $W \\times K$ , and here W is the total number of the parameters in the network. By analogy with (5.88)-(5.89), we can obtain:\n\n$$\\mathbf{H}_{N+1,K}^{-1} = \\mathbf{H}_{N,K}^{-1} - \\frac{\\mathbf{H}_{N,K}^{-1} \\mathbf{B}_{N+1} \\mathbf{B}_{N+1}^{T} \\mathbf{H}_{N,K}^{-1}}{1 + \\mathbf{B}_{N+1}^{T} \\mathbf{H}_{N,K}^{-1} \\mathbf{B}_{N+1}}$$\n(\\*)\n\nFurthermore, similarly, we have:\n\n$$\\mathbf{H}_{N+1,K+1} = \\mathbf{H}_{N+1,K} + \\sum_{n=1}^{N+1} \\mathbf{b}_{n,K+1} \\mathbf{b}_{n,K+1}^T = \\mathbf{H}_{N+1,K} + \\mathbf{B}_{K+1} \\mathbf{B}_{K+1}^T$$\n\nWhere $\\mathbf{B}_{K+1} = [\\mathbf{b}_{1,K+1}, \\mathbf{b}_{2,K+1}, ..., \\mathbf{b}_{N+1,K+1}]$ is a matrix with size $W \\times (N+1)$ . Also, we can obtain:\n\n$$\\mathbf{H}_{N+1,K+1}^{-1} = \\mathbf{H}_{N+1,K}^{-1} - \\frac{\\mathbf{H}_{N+1,K}^{-1} \\mathbf{B}_{K+1} \\mathbf{B}_{K+1}^T \\mathbf{H}_{N+1,K}^{-1}}{1 + \\mathbf{B}_{K+1}^T \\mathbf{H}_{N+1,K}^{-1} \\mathbf{B}_{K+1}}$$\n\nWhere $\\mathbf{H}_{N+1,K}^{-1}$ is defined by (\\*). If we substitute (\\*) into the expression above, we can obtain the relationship between $\\mathbf{H}_{N+1,K+1}^{-1}$ and $\\mathbf{H}_{N,K}^{-1}$ .",
"answer_length": 1657
},
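A numerical sketch of the block update in Problem 5.21 (synthetic sizes and matrices): adding all K outer products of one new pattern at once and updating the inverse Hessian without re-inverting, using the matrix form of the identity discussed above.

```python
# Woodbury-style update of the inverse Hessian when K outer products are added at once.
import numpy as np

rng = np.random.default_rng(4)
W, K = 6, 3                                   # number of weights, number of outputs
H = rng.standard_normal((W, W))
H = H @ H.T + np.eye(W)                       # current H_{N,K}, positive definite
B = rng.standard_normal((W, K))               # B_{N+1} = [b_{N+1,1}, ..., b_{N+1,K}]

H_new = H + B @ B.T                           # H_{N+1,K}

H_inv = np.linalg.inv(H)
M = np.linalg.inv(np.eye(K) + B.T @ H_inv @ B)
H_new_inv = H_inv - H_inv @ B @ M @ B.T @ H_inv   # update of the inverse

print(np.allclose(H_new_inv, np.linalg.inv(H_new)))   # True
```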
{
"chapter": 5,
"question_number": "5.22",
"difficulty": "medium",
"question_text": "Derive the results (5.93: $\\frac{\\partial^2 E_n}{\\partial w_{kj}^{(2)} \\partial w_{k'j'}^{(2)}} = z_j z_{j'} M_{kk'}.$), (5.94), and (5.95: $\\frac{\\partial^2 E_n}{\\partial w_{ji}^{(1)} \\partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \\left\\{ \\delta_k I_{jj'} + z_j \\sum_{k'} w_{k'j'}^{(2)} H_{kk'} \\right\\}.$) for the elements of the Hessian matrix of a two-layer feed-forward network by application of the chain rule of calculus.",
"answer": "We begin by handling the first case.\n\n$$\\begin{split} \\frac{\\partial^2 E_n}{\\partial w_{kj}^{(2)} \\partial w_{k'j'}^{(2)}} &= \\frac{\\partial}{\\partial w_{kj}^{(2)}} (\\frac{\\partial E_n}{\\partial w_{k'j'}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{kj}^{(2)}} (\\frac{\\partial E_n}{\\partial a_{k'}} \\frac{\\partial a_{k'}}{\\partial w_{k'j'}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{kj}^{(2)}} (\\frac{\\partial E_n}{\\partial a_{k'}} \\frac{\\partial \\sum_{j'} w_{k'j'} z_{j'}}{\\partial w_{k'j'}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{kj}^{(2)}} (\\frac{\\partial E_n}{\\partial a_{k'}} z_{j'}) \\\\ &= \\frac{\\partial}{\\partial w_{kj}^{(2)}} (\\frac{\\partial E_n}{\\partial a_{k'}}) z_{j'} + \\frac{\\partial E_n}{\\partial a_{k'}} \\frac{\\partial z_{j'}}{\\partial w_{kj}^{(2)}} \\\\ &= \\frac{\\partial}{\\partial a_k} (\\frac{\\partial E_n}{\\partial a_{k'}}) \\frac{\\partial a_k}{\\partial w_{kj}^{(2)}} z_{j'} + 0 \\\\ &= \\frac{\\partial}{\\partial a_k} (\\frac{\\partial E_n}{\\partial a_{k'}}) z_{j} z_{j'} \\\\ &= z_{j} z_{j'} M_{kk'} \\end{split}$$\n\nThen we focus on the second case, and if here $j \\neq j'$ \n\n$$\\frac{\\partial^{2}E_{n}}{\\partial w_{ji}^{(1)}\\partial w_{j'i'}^{(1)}} = \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial w_{j'i'}^{(1)}}) \n= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\sum_{k'} \\frac{\\partial E_{n}}{\\partial a_{k'}} \\frac{\\partial a_{k'}}{\\partial w_{j'i'}^{(1)}}) \n= \\sum_{k'} \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j'}^{(2)} h'(a_{j'}) x_{i'}) \n= \\sum_{k'} h'(a_{j'}) x_{i'} \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j'}^{(2)}) \n= \\sum_{k'} h'(a_{j'}) x_{i'} \\sum_{k} \\frac{\\partial}{\\partial a_{k}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j'}^{(2)}) \\frac{\\partial a_{k}}{\\partial w_{ji}^{(1)}} \n= \\sum_{k'} h'(a_{j'}) x_{i'} \\sum_{k} \\frac{\\partial}{\\partial a_{k}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j'}^{(2)}) \\cdot (w_{kj}^{(2)} h'(a_{j}) x_{i}) \n= \\sum_{k'} h'(a_{j'}) x_{i'} \\sum_{k} M_{kk'} w_{k'j'}^{(2)} \\cdot w_{kj}^{(2)} h'(a_{j}) x_{i} \n= x_{i'} x_{i} h'(a_{j'}) h'(a_{j}) \\sum_{k'} \\sum_{k} w_{k'j'}^{(2)} \\cdot w_{kj}^{(2)} M_{kk'}$$\n\nWhen j = j', similarly we have:\n\n$$\\begin{split} \\frac{\\partial^{2}E_{n}}{\\partial w_{ji}^{(1)}\\partial w_{ji'}^{(1)}} &= \\sum_{k'} \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j}^{(2)} h'(a_{j}) x_{i'}) \\\\ &= x_{i'} \\sum_{k'} \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j}^{(2)}) h'(a_{j}) + x_{i'} \\sum_{k'} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j}^{(2)}) \\frac{\\partial h'(a_{j})}{\\partial w_{ji}^{(1)}} \\\\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \\sum_{k'} \\sum_{k} w_{k'j}^{(2)} \\cdot w_{kj}^{(2)} M_{kk'} + x_{i'} \\sum_{k'} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j}^{(2)}) \\frac{\\partial h'(a_{j})}{\\partial w_{ji}^{(1)}} \\\\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \\sum_{k'} \\sum_{k} w_{k'j}^{(2)} \\cdot w_{kj}^{(2)} M_{kk'} + x_{i'} \\sum_{k'} (\\frac{\\partial E_{n}}{\\partial a_{k'}} w_{k'j}^{(2)}) h''(a_{j}) x_{i} \\\\ &= x_{i'} x_{i} h'(a_{j}) h'(a_{j}) \\sum_{k'} \\sum_{k} w_{k'j}^{(2)} \\cdot w_{kj}^{(2)} M_{kk'} + h''(a_{j}) x_{i} x_{i'} \\sum_{k'} \\delta_{k'} w_{k'j}^{(2)} \\end{split}$$\n\nIt seems that what we have obtained is slightly different from (5.94) 
when j=j'. However this is not the case, since the summation over k' in the second term of our formulation and the summation over k in the first term of (5.94) is actually the same (i.e., they both represent the summation over all the output units). Combining the situation when j=j' and $j\\neq j'$ , we can obtain (5.94) just as required. Finally, we deal with the third case. Similarly we first focus on $j\\neq j'$ :\n\n$$\\frac{\\partial^{2}E_{n}}{\\partial w_{ji}^{(1)}\\partial w_{kj'}^{(2)}} = \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial w_{kj'}^{(2)}})$$\n\n$$= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k}} \\frac{\\partial a_{k}}{\\partial w_{kj'}^{(2)}})$$\n\n$$= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k}} \\frac{\\partial \\sum_{j'} w_{kj'} z_{j'}}{\\partial w_{kj'}^{(2)}})$$\n\n$$= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_{n}}{\\partial a_{k}} z_{j'})$$\n\n$$= z_{j'} \\sum_{k'} \\frac{\\partial}{\\partial a_{k'}} (\\frac{\\partial E_{n}}{\\partial a_{k}}) \\frac{\\partial a_{k'}}{\\partial w_{ji}^{(1)}}$$\n\n$$= z_{j'} \\sum_{k'} M_{kk'} w_{k'j}^{(2)} h'(a_{j}) x_{i}$$\n\n$$= x_{i} h'(a_{j}) z_{j'} \\sum_{k'} M_{kk'} w_{k'j}^{(2)}$$\n\nNote that in (5.95: $\\frac{\\partial^2 E_n}{\\partial w_{ji}^{(1)} \\partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \\left\\{ \\delta_k I_{jj'} + z_j \\sum_{k'} w_{k'j'}^{(2)} H_{kk'} \\right\\}.$), there are two typos: (i) $H_{kk'}$ should be $M_{kk'}$ . (ii) j should\n\nexchange position with j' in the right side of (5.95: $\\frac{\\partial^2 E_n}{\\partial w_{ji}^{(1)} \\partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \\left\\{ \\delta_k I_{jj'} + z_j \\sum_{k'} w_{k'j'}^{(2)} H_{kk'} \\right\\}.$). When j = j', we have:\n\n$$\\begin{split} \\frac{\\partial^2 E_n}{\\partial w_{ji}^{(1)} \\partial w_{kj}^{(2)}} &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial w_{kj}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial a_k} \\frac{\\partial a_k}{\\partial w_{kj}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial a_k} \\frac{\\partial \\sum_j w_{kj} z_j}{\\partial w_{kj}^{(2)}}) \\\\ &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial a_k} z_j) \\\\ &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial a_k} z_j) \\\\ &= \\frac{\\partial}{\\partial w_{ji}^{(1)}} (\\frac{\\partial E_n}{\\partial a_k} z_j) \\\\ &= x_i h'(a_j) z_j \\sum_{k'} M_{kk'} w_{k'j}^{(2)} + \\frac{\\partial E_n}{\\partial a_k} \\frac{\\partial z_j}{w_{ji}^{(1)}} \\\\ &= x_i h'(a_j) z_j \\sum_{k'} M_{kk'} w_{k'j}^{(2)} + \\delta_k h'(a_j) x_i \\end{split}$$\n\nCombing these two situations, we obtain (5.95: $\\frac{\\partial^2 E_n}{\\partial w_{ji}^{(1)} \\partial w_{kj'}^{(2)}} = x_i h'(a_{j'}) \\left\\{ \\delta_k I_{jj'} + z_j \\sum_{k'} w_{k'j'}^{(2)} H_{kk'} \\right\\}.$) just as required.",
"answer_length": 6181
},
{
"chapter": 5,
"question_number": "5.23",
"difficulty": "medium",
"question_text": "Extend the results of Section 5.4.5 for the exact Hessian of a two-layer network to include skip-layer connections that go directly from inputs to outputs.",
"answer": "It is similar to the previous problem.\n\n$$\\begin{split} \\frac{\\partial^2 E_n}{\\partial w_{k'i'} \\partial w_{kj}} &= \\frac{\\partial}{\\partial w_{k'i'}} (\\frac{\\partial E_n}{\\partial w_{kj}}) \\\\ &= \\frac{\\partial}{\\partial w_{k'i'}} (\\frac{\\partial E_n}{\\partial a_k} z_j) \\\\ &= z_j \\frac{\\partial w_{k'i'}}{\\partial a_{k'}} \\frac{\\partial}{\\partial a_{k'}} (\\frac{\\partial E_n}{\\partial a_k}) \\\\ &= z_j x_{i'} M_{kk'} \\end{split}$$\n\nAnd\n\n$$\\begin{split} \\frac{\\partial^2 E_n}{\\partial w_{k'i'} \\partial w_{ji}} &= \\frac{\\partial}{\\partial w_{k'i'}} (\\sum_k \\frac{\\partial E_n}{\\partial a_k} \\frac{\\partial a_k}{\\partial w_{ji}}) \\\\ &= \\frac{\\partial}{\\partial w_{k'i'}} (\\sum_k \\frac{\\partial E_n}{\\partial a_k} w_{kj} h'(a_j) x_i) \\\\ &= \\sum_k h'(a_j) x_i w_{kj} \\frac{\\partial}{\\partial w_{k'i'}} (\\frac{\\partial E_n}{\\partial a_k}) \\\\ &= \\sum_k h'(a_j) x_i w_{kj} \\frac{\\partial}{\\partial a_{k'}} (\\frac{\\partial E_n}{\\partial a_k}) \\frac{a_{k'}}{w_{k'i'}} \\\\ &= \\sum_k h'(a_j) x_i w_{kj} M_{kk'} x_{i'} \\\\ &= x_i x_{i'} h'(a_j) \\sum_k w_{kj} M_{kk'} \\end{split}$$\n\nFinally, we have\n\n$$\\begin{array}{ll} \\frac{\\partial^2 E_n}{\\partial w_{k'i'}w_{ki}} & = & \\frac{\\partial}{\\partial w_{k'i'}}(\\frac{\\partial E_n}{\\partial w_{ki}}) \\\\ & = & \\frac{\\partial}{\\partial w_{k'i'}}(\\frac{\\partial E_n}{\\partial a_k}x_i) \\\\ & = & x_i\\frac{\\partial}{\\partial a_{k'}}(\\frac{\\partial E_n}{\\partial a_k})\\frac{\\partial a_{k'}}{\\partial w_{k'i'}} \\\\ & = & x_ix_{i'}M_{kk'} \\end{array}$$\n\n# **Problem 5.24 Solution**\n\nIt is obvious. According to (5.113: $z_j = h\\left(\\sum_i w_{ji} x_i + w_{j0}\\right)$), we have:\n\n$$\\begin{split} \\widetilde{\\alpha}_j &= \\sum_i \\widetilde{w}_{ji} \\widetilde{x}_i + \\widetilde{w}_{j0} \\\\ &= \\sum_i \\frac{1}{a} w_{ji} \\cdot (ax_i + b) + w_{j0} - \\frac{b}{a} \\sum_i w_{ji} \\\\ &= \\sum_i w_{ji} x_i + w_{j0} = a_j \\end{split}$$\n\nWhere we have used (5.115: $x_i \\to \\widetilde{x}_i = ax_i + b.$), (5.116: $w_{ji} \\to \\widetilde{w}_{ji} = \\frac{1}{a} w_{ji}$) and (5.117: $w_{j0} \\to \\widetilde{w}_{j0} = w_{j0} - \\frac{b}{a} \\sum_{i} w_{ji}.$). Currently, we have proved that under the transformation the hidden unit $a_j$ is unchanged. If the activation function at the hidden unit is also unchanged, we have $\\tilde{z}_j = z_j$ . Now we deal with the output unit $\\tilde{y}_k$ :\n\n$$\\begin{split} \\widetilde{y}_k &= \\sum_j \\widetilde{w}_{kj} \\widetilde{z}_j + \\widetilde{w}_{k0} \\\\ &= \\sum_j c w_{kj} \\cdot z_j + c w_{k0} + d \\\\ &= c \\sum_j \\left[ w_{kj} \\cdot z_j + w_{k0} \\right] + d \\\\ &= c y_k + d \\end{split}$$\n\nWhere we have used (5.114: $y_k = \\sum_j w_{kj} z_j + w_{k0}.$), (5.119: $w_{kj} \\to \\widetilde{w}_{kj} = cw_{kj}$) and (5.120: $w_{k0} \\to \\widetilde{w}_{k0} = cw_{k0} + d.$). To be more specific, here we have proved that the linear transformation between $\\tilde{y}_k$ and $y_k$ can be achieved by making transformation (5.119: $w_{kj} \\to \\widetilde{w}_{kj} = cw_{kj}$) and (5.120: $w_{k0} \\to \\widetilde{w}_{k0} = cw_{k0} + d.$).",
"answer_length": 2974
},
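The invariance argument in the Problem 5.24 part of the answer above is easy to confirm numerically. The sketch below uses a small tanh network with made-up weights and transformation constants a, b, c, d:

```python
# Check that the weight transformations (5.116)-(5.117) and (5.119)-(5.120) compensate
# the input rescaling x -> a*x + b and map the outputs as y -> c*y + d.
import numpy as np

rng = np.random.default_rng(5)
D, M, K = 3, 4, 2
W1, w10 = rng.standard_normal((M, D)), rng.standard_normal(M)
W2, w20 = rng.standard_normal((K, M)), rng.standard_normal(K)
a, b, c, d = 2.0, -0.5, 3.0, 1.5

def forward(x, W1, w10, W2, w20):
    z = np.tanh(W1 @ x + w10)        # hidden units, as in (5.113)
    return W2 @ z + w20              # linear outputs, as in (5.114)

x = rng.standard_normal(D)
y = forward(x, W1, w10, W2, w20)

x_t   = a * x + b                            # (5.115)
W1_t  = W1 / a                               # (5.116)
w10_t = w10 - (b / a) * W1.sum(axis=1)       # (5.117)
W2_t  = c * W2                               # (5.119)
w20_t = c * w20 + d                          # (5.120)

print(np.allclose(forward(x_t, W1_t, w10_t, W2_t, w20_t), c * y + d))   # True
```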
{
"chapter": 5,
"question_number": "5.25",
"difficulty": "hard",
"question_text": "\\star \\star)$ www Consider a quadratic error function of the form\n\n$$E = E_0 + \\frac{1}{2} (\\mathbf{w} - \\mathbf{w}^*)^{\\mathrm{T}} \\mathbf{H} (\\mathbf{w} - \\mathbf{w}^*)$$\n (5.195: $E = E_0 + \\frac{1}{2} (\\mathbf{w} - \\mathbf{w}^*)^{\\mathrm{T}} \\mathbf{H} (\\mathbf{w} - \\mathbf{w}^*)$)\n\nwhere $\\mathbf{w}^*$ represents the minimum, and the Hessian matrix $\\mathbf{H}$ is positive definite and constant. Suppose the initial weight vector $\\mathbf{w}^{(0)}$ is chosen to be at the origin and is updated using simple gradient descent\n\n$$\\mathbf{w}^{(\\tau)} = \\mathbf{w}^{(\\tau-1)} - \\rho \\nabla E \\tag{5.196}$$\n\nwhere $\\tau$ denotes the step number, and $\\rho$ is the learning rate (which is assumed to be small). Show that, after $\\tau$ steps, the components of the weight vector parallel to the eigenvectors of $\\mathbf{H}$ can be written\n\n$$w_i^{(\\tau)} = \\{1 - (1 - \\rho \\eta_j)^{\\tau}\\} w_i^{\\star}$$\n (5.197: $w_i^{(\\tau)} = \\{1 - (1 - \\rho \\eta_j)^{\\tau}\\} w_i^{\\star}$)\n\nwhere $w_j = \\mathbf{w}^T \\mathbf{u}_j$ , and $\\mathbf{u}_j$ and $\\eta_j$ are the eigenvectors and eigenvalues, respectively, of $\\mathbf{H}$ so that\n\n$$\\mathbf{H}\\mathbf{u}_j = \\eta_j \\mathbf{u}_j. \\tag{5.198}$$\n\nShow that as $\\tau \\to \\infty$ , this gives $\\mathbf{w}^{(\\tau)} \\to \\mathbf{w}^*$ as expected, provided $|1 - \\rho \\eta_j| < 1$ . Now suppose that training is halted after a finite number $\\tau$ of steps. Show that the components of the weight vector parallel to the eigenvectors of the Hessian satisfy\n\n$$w_i^{(\\tau)} \\simeq w_i^{\\star} \\quad \\text{when} \\quad \\eta_j \\gg (\\rho \\tau)^{-1}$$\n (5.199: $w_i^{(\\tau)} \\simeq w_i^{\\star} \\quad \\text{when} \\quad \\eta_j \\gg (\\rho \\tau)^{-1}$)\n\n$$|w_j^{(\\tau)}| \\ll |w_j^{\\star}| \\text{ when } \\eta_j \\ll (\\rho \\tau)^{-1}.$$\n (5.200: $|w_j^{(\\tau)}| \\ll |w_j^{\\star}| \\text{ when } \\eta_j \\ll (\\rho \\tau)^{-1}.$)\n\nCompare this result with the discussion in Section 3.5.3 of regularization with simple weight decay, and hence show that $(\\rho\\tau)^{-1}$ is analogous to the regularization parameter $\\lambda$ . The above results also show that the effective number of parameters in the network, as defined by (3.91: $\\gamma = \\sum_{i} \\frac{\\lambda_i}{\\alpha + \\lambda_i}.$), grows as the training progresses.",
"answer": "Since we know the gradient of the error function with respect to $\\mathbf{w}$ is:\n\n$$\\nabla E = \\mathbf{H}(\\mathbf{w} - \\mathbf{w}^*)$$\n\nTogether with (5.196: $\\mathbf{w}^{(\\tau)} = \\mathbf{w}^{(\\tau-1)} - \\rho \\nabla E$), we can obtain:\n\n$$\\mathbf{w}^{(\\tau)} = \\mathbf{w}^{(\\tau-1)} - \\rho \\nabla E$$\n$$= \\mathbf{w}^{(\\tau-1)} - \\rho \\mathbf{H} (\\mathbf{w}^{(\\tau-1)} - \\mathbf{w}^*)$$\n\nMultiplying both sides by $\\mathbf{u}_j^T$ , using $w_j = \\mathbf{w}^T \\mathbf{u}_j$ , we can obtain:\n\n$$w_{j}^{(\\tau)} = \\mathbf{u}_{j}^{T} [\\mathbf{w}^{(\\tau-1)} - \\rho \\mathbf{H} (\\mathbf{w}^{(\\tau-1)} - \\mathbf{w}^{*})]$$\n\n$$= w_{j}^{(\\tau-1)} - \\rho \\mathbf{u}_{j}^{T} \\mathbf{H} (\\mathbf{w}^{(\\tau-1)} - \\mathbf{w}^{*})$$\n\n$$= w_{j}^{(\\tau-1)} - \\rho \\eta_{j} \\mathbf{u}_{j}^{T} (\\mathbf{w}^{(\\tau-1)} - \\mathbf{w}^{*})$$\n\n$$= w_{j}^{(\\tau-1)} - \\rho \\eta_{j} (w_{j}^{(\\tau-1)} - w_{j}^{*})$$\n\n$$= (1 - \\rho \\eta_{j}) w_{j}^{(\\tau-1)} + \\rho \\eta_{j} w_{j}^{*}$$\n\nWhere we have used (5.198: $\\mathbf{H}\\mathbf{u}_j = \\eta_j \\mathbf{u}_j.$). Then we use mathematical deduction to prove (5.197: $w_i^{(\\tau)} = \\{1 - (1 - \\rho \\eta_j)^{\\tau}\\} w_i^{\\star}$), beginning by calculating $w_i^{(1)}$ :\n\n$$w_{j}^{(1)} = (1 - \\rho \\eta_{j})w_{j}^{(0)} + \\rho \\eta_{j}w_{j}^{*}$$\n$$= \\rho \\eta_{j}w_{j}^{*}$$\n$$= [1 - (1 - \\rho \\eta_{j})]w_{j}^{*}$$\n\nSuppose (5.197: $w_i^{(\\tau)} = \\{1 - (1 - \\rho \\eta_j)^{\\tau}\\} w_i^{\\star}$) holds for $\\tau$ , we now prove that it also holds for $\\tau + 1$ .\n\n$$\\begin{split} w_{j}^{(\\tau+1)} &= (1 - \\rho \\eta_{j} w_{j}^{(\\tau)} + \\rho \\eta_{j} w_{j}^{*} \\\\ &= (1 - \\rho \\eta_{j}) \\big[ 1 - (1 - \\rho \\eta_{j})^{\\tau} \\big] w_{j}^{*} + \\rho \\eta_{j} w_{j}^{*} \\\\ &= \\big\\{ (1 - \\rho \\eta_{j}) \\big[ 1 - (1 - \\rho \\eta_{j})^{\\tau} \\big] + \\rho \\eta_{j} \\big\\} w_{j}^{*} \\\\ &= \\big[ 1 - (1 - \\rho \\eta_{j})^{\\tau+1} \\big] w_{j}^{*} \\end{split}$$\n\nHence (5.197: $w_i^{(\\tau)} = \\{1 - (1 - \\rho \\eta_j)^{\\tau}\\} w_i^{\\star}$) holds for $\\tau=1,2,...$ Provided $|1-\\rho\\eta_j|<1$ , we have $(1-\\rho\\eta_j)^{\\tau}\\to 0$ as $\\tau\\to\\infty$ ans thus $\\mathbf{w}^{(\\tau)}=\\mathbf{w}^*$ . If $\\tau$ is finite and $\\eta_j>>(\\rho\\tau)^{-1}$ , the above argument still holds since $\\tau$ is still relatively large. Conversely, when $\\eta_j<<(\\rho\\tau)^{-1}$ , we expand the expression above:\n\n$$|w_j^{(\\tau)}| = |\\left[1 - (1 - \\rho \\eta_j)^\\tau\\right] w_j^*| \\approx |\\tau \\rho \\eta_j w_j^*| << |w_j^*|$$\n\nWe can see that $(\\rho\\tau)^{-1}$ works as the regularization parameter $\\alpha$ in section 3.5.3.",
"answer_length": 2543
},
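A short simulation of Problem 5.25 (the Hessian, minimum and learning rate below are arbitrary choices): run plain gradient descent from the origin on the quadratic error and compare the eigenvector projections of w^(tau) with the closed form (5.197).

```python
# Gradient descent on a quadratic error versus the closed-form projections (5.197).
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
H = A @ A.T + np.eye(4)                      # positive-definite Hessian
w_star = rng.standard_normal(4)
eta, U = np.linalg.eigh(H)                   # eigenvalues eta_j, eigenvectors u_j
rho = 0.9 / eta.max()                        # ensures |1 - rho * eta_j| < 1

w, tau = np.zeros(4), 50                     # w^(0) at the origin
for _ in range(tau):
    w = w - rho * H @ (w - w_star)           # gradient of E is H (w - w*)

predicted = (1 - (1 - rho * eta) ** tau) * (U.T @ w_star)
print(np.allclose(U.T @ w, predicted))       # True
```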
{
"chapter": 5,
"question_number": "5.26",
"difficulty": "medium",
"question_text": "Consider a multilayer perceptron with arbitrary feed-forward topology, which is to be trained by minimizing the *tangent propagation* error function (5.127: $\\widetilde{E} = E + \\lambda \\Omega$) in which the regularizing function is given by (5.128: $\\Omega = \\frac{1}{2} \\sum_{n} \\sum_{k} \\left( \\frac{\\partial y_{nk}}{\\partial \\xi} \\Big|_{\\xi=0} \\right)^2 = \\frac{1}{2} \\sum_{n} \\sum_{k} \\left( \\sum_{i=1}^{D} J_{nki} \\tau_{ni} \\right)^2.$). Show that the regularization term $\\Omega$ can be written as a sum over patterns of terms of the form\n\n$$\\Omega_n = \\frac{1}{2} \\sum_k \\left( \\mathcal{G} y_k \\right)^2 \\tag{5.201}$$\n\nwhere $\\mathcal{G}$ is a differential operator defined by\n\n$$\\mathcal{G} \\equiv \\sum_{i} \\tau_{i} \\frac{\\partial}{\\partial x_{i}}.$$\n (5.202: $\\mathcal{G} \\equiv \\sum_{i} \\tau_{i} \\frac{\\partial}{\\partial x_{i}}.$)\n\nBy acting on the forward propagation equations\n\n$$z_j = h(a_j), a_j = \\sum_i w_{ji} z_i (5.203)$$\n\nwith the operator $\\mathcal{G}$ , show that $\\Omega_n$ can be evaluated by forward propagation using the following equations:\n\n$$\\alpha_j = h'(a_j)\\beta_j, \\qquad \\beta_j = \\sum_i w_{ji}\\alpha_i.$$\n (5.204: $\\alpha_j = h'(a_j)\\beta_j, \\qquad \\beta_j = \\sum_i w_{ji}\\alpha_i.$)\n\nwhere we have defined the new variables\n\n$$\\alpha_j \\equiv \\mathcal{G}z_j, \\qquad \\beta_j \\equiv \\mathcal{G}a_j.$$\n (5.205: $\\alpha_j \\equiv \\mathcal{G}z_j, \\qquad \\beta_j \\equiv \\mathcal{G}a_j.$)\n\nNow show that the derivatives of $\\Omega_n$ with respect to a weight $w_{rs}$ in the network can be written in the form\n\n$$\\frac{\\partial \\Omega_n}{\\partial w_{rs}} = \\sum_k \\alpha_k \\left\\{ \\phi_{kr} z_s + \\delta_{kr} \\alpha_s \\right\\} \\tag{5.206}$$\n\nwhere we have defined\n\n$$\\delta_{kr} \\equiv \\frac{\\partial y_k}{\\partial a_r}, \\qquad \\phi_{kr} \\equiv \\mathcal{G}\\delta_{kr}.$$\n (5.207: $\\delta_{kr} \\equiv \\frac{\\partial y_k}{\\partial a_r}, \\qquad \\phi_{kr} \\equiv \\mathcal{G}\\delta_{kr}.$)\n\nWrite down the backpropagation equations for $\\delta_{kr}$ , and hence derive a set of backpropagation equations for the evaluation of the $\\phi_{kr}$ .",
"answer": "Based on definition or by analogy with (5.128: $\\Omega = \\frac{1}{2} \\sum_{n} \\sum_{k} \\left( \\frac{\\partial y_{nk}}{\\partial \\xi} \\Big|_{\\xi=0} \\right)^2 = \\frac{1}{2} \\sum_{n} \\sum_{k} \\left( \\sum_{i=1}^{D} J_{nki} \\tau_{ni} \\right)^2.$), we have:\n\n$$\\Omega_n = \\frac{1}{2} \\sum_{k} \\left( \\frac{\\partial y_{nk}}{\\partial \\xi} \\Big|_{\\xi=0} \\right)^2$$\n\n$$= \\frac{1}{2} \\sum_{k} \\left( \\sum_{i} \\frac{\\partial y_{nk}}{\\partial x_i} \\frac{\\partial x_i}{\\partial \\xi} \\Big|_{\\xi=0} \\right)^2$$\n\n$$= \\frac{1}{2} \\sum_{k} \\left( \\sum_{i} \\tau_i \\frac{\\partial}{\\partial x_i} y_{nk} \\right)^2$$\n\nWhere we have denoted\n\n$$\\tau_i = \\frac{\\partial x_i}{\\partial \\xi} \\big|_{\\xi=0}$$\n\nAnd this is exactly the form given in (5.201: $\\Omega_n = \\frac{1}{2} \\sum_k \\left( \\mathcal{G} y_k \\right)^2$) and (5.202: $\\mathcal{G} \\equiv \\sum_{i} \\tau_{i} \\frac{\\partial}{\\partial x_{i}}.$) if the nth observation $y_{nk}$ is denoted as $y_k$ in short. Firstly, we define $\\alpha_j$ and $\\beta_j$ as (5.205: $\\alpha_j \\equiv \\mathcal{G}z_j, \\qquad \\beta_j \\equiv \\mathcal{G}a_j.$) shows, where $z_j$ and $a_j$ are given by (5.203: $z_j = h(a_j), a_j = \\sum_i w_{ji} z_i$). Then we will prove (5.204: $\\alpha_j = h'(a_j)\\beta_j, \\qquad \\beta_j = \\sum_i w_{ji}\\alpha_i.$) holds:\n\n$$\\alpha_{j} = \\sum_{i} \\tau_{i} \\frac{\\partial z_{j}}{\\partial x_{i}} = \\sum_{i} \\tau_{i} \\frac{\\partial h(a_{j})}{\\partial x_{i}}$$\n\n$$= \\sum_{i} \\tau_{i} \\frac{\\partial h(a_{j})}{\\partial a_{j}} \\frac{\\partial a_{j}}{\\partial x_{i}}$$\n\n$$= h'(a_{j}) \\sum_{i} \\tau_{i} \\frac{\\partial}{\\partial x_{i}} a_{j} = h'(a_{j}) \\beta_{j}$$\n\nMoreover,\n\n$$\\begin{split} \\beta_{j} &= \\sum_{i} \\tau_{i} \\frac{\\partial \\alpha_{j}}{\\partial x_{i}} = \\sum_{i} \\tau_{i} \\frac{\\partial \\sum_{i'} w_{ji'} z_{i'}}{\\partial x_{i}} \\\\ &= \\sum_{i} \\tau_{i} \\sum_{i'} \\frac{\\partial w_{ji'} z_{i'}}{\\partial x_{i}} = \\sum_{i} \\tau_{i} \\sum_{i'} w_{ji'} \\frac{\\partial z_{i'}}{\\partial x_{i}} \\\\ &= \\sum_{i'} w_{ji'} \\sum_{i} \\tau_{i} \\frac{\\partial z_{i'}}{\\partial x_{i}} = \\sum_{i'} w_{ji'} \\alpha_{i'} \\end{split}$$\n\nSo far we have proved that (5.204: $\\alpha_j = h'(a_j)\\beta_j, \\qquad \\beta_j = \\sum_i w_{ji}\\alpha_i.$) holds and now we aim to find a forward propagation formula to calculate $\\Omega_n$ . We firstly begin by evaluating $\\{\\beta_j\\}$ at the input units, and then use the first equation in (5.204: $\\alpha_j = h'(a_j)\\beta_j, \\qquad \\beta_j = \\sum_i w_{ji}\\alpha_i.$) to obtain $\\{\\alpha_j\\}$ at the input units, and then the second equation to evaluate $\\{\\beta_j\\}$ at the first hidden layer, and again the first equation to evaluate $\\{\\alpha_j\\}$ at the first hidden layer. We repeatedly evaluate $\\{\\beta_j\\}$ and $\\{\\alpha_j\\}$ in this way until reaching the output\n\nlayer. 
Then we deal with (5.206):\n\n$$\\begin{split} \\frac{\\partial \\Omega_n}{\\partial w_{rs}} &= \\frac{\\partial}{\\partial w_{rs}} \\{ \\frac{1}{2} \\sum_k (\\mathcal{G} y_k)^2 \\} = \\frac{1}{2} \\sum_k \\frac{\\partial (\\mathcal{G} y_k)^2}{\\partial w_{rs}} \\\\ &= \\frac{1}{2} \\sum_k \\frac{\\partial (\\mathcal{G} y_k)^2}{\\partial (\\mathcal{G} y_k)} \\frac{\\partial (\\mathcal{G} y_k)}{\\partial w_{rs}} = \\sum_k \\mathcal{G} y_k \\frac{\\partial \\mathcal{G} y_k}{\\partial w_{rs}} \\\\ &= \\sum_k \\mathcal{G} y_k \\mathcal{G} \\left[ \\frac{\\partial y_k}{\\partial w_{rs}} \\right] = \\sum_k \\alpha_k \\mathcal{G} \\left[ \\frac{\\partial y_k}{\\partial \\alpha_r} \\frac{\\partial \\alpha_r}{\\partial w_{rs}} \\right] \\\\ &= \\sum_k \\alpha_k \\mathcal{G} \\left[ \\delta_{kr} z_s \\right] = \\sum_k \\alpha_k \\left\\{ \\mathcal{G} [\\delta_{kr}] z_s + \\mathcal{G} [z_s] \\delta_{kr} \\right\\} \\\\ &= \\sum_k \\alpha_k \\left\\{ \\phi_{kr} z_s + \\alpha_s \\delta_{kr} \\right\\} \\end{split}$$\n\nProvided with the idea in section 5.3, the backward propagation formula is easy to derive. We can simply replace $E_n$ with $y_k$ to obtain a backward equation, so we omit it here.",
"answer_length": 3876
},
{
"chapter": 5,
"question_number": "5.27",
"difficulty": "medium",
"question_text": "Consider the framework for training with transformed data in the special case in which the transformation consists simply of the addition of random noise $x \\to x + \\xi$ where $\\xi$ has a Gaussian distribution with zero mean and unit covariance. By following an argument analogous to that of Section 5.5.5, show that the resulting regularizer reduces to the Tikhonov form (5.135: $\\Omega = \\frac{1}{2} \\int \\|\\nabla y(\\mathbf{x})\\|^2 p(\\mathbf{x}) d\\mathbf{x}$).",
"answer": "Following the procedure in section 5.5.5, we can obtain:\n\n$$\\Omega = \\frac{1}{2} \\int (\\boldsymbol{\\tau}^T \\nabla y(\\mathbf{x}))^2 p(\\mathbf{x}) d\\mathbf{x}$$\n\nSince we have $\\tau = \\partial \\mathbf{s}(\\mathbf{x}, \\boldsymbol{\\xi}) / \\partial \\boldsymbol{\\xi}$ and $\\mathbf{s} = \\mathbf{x} + \\boldsymbol{\\xi}$ , so we have $\\tau = \\mathbf{I}$ . Therefore, substituting $\\tau$ into the equation above, we can obtain:\n\n$$\\Omega = \\frac{1}{2} \\int (\\nabla y(\\mathbf{x}))^2 p(\\mathbf{x}) d\\mathbf{x}$$\n\nJust as required.",
"answer_length": 522
},
{
"chapter": 5,
"question_number": "5.28",
"difficulty": "easy",
"question_text": "- 5.28 (\\*) www Consider a neural network, such as the convolutional network discussed in Section 5.5.6, in which multiple weights are constrained to have the same value. Discuss how the standard backpropagation algorithm must be modified in order to ensure that such constraints are satisfied when evaluating the derivatives of an error function with respect to the adjustable parameters in the network.",
"answer": "The modifications only affect derivatives with respect to the weights in the convolutional layer. The units within a feature map (indexed m) have different inputs, but all share a common weight vector, $\\mathbf{w}^{(m)}$ . Therefore, we can write:\n\n$$\\frac{\\partial E_n}{\\partial w_i^{(m)}} = \\sum_j \\frac{\\partial E_n}{\\partial a_j^{(m)}} \\frac{\\partial a_j^{(m)}}{\\partial w_i^{(m)}} = \\sum_j \\delta_j^{(m)} z_{ji}^{(m)}$$\n\nHere $a_j^{(m)}$ denotes the activation of the jth unit in th mth feature map, whereas $w_i^{(m)}$ denotes the ith element of the corresponding feature vector and finally $z_{ij}^{(m)}$ denotes the ith input for the jth unit in the mth feature map. Note that $\\delta_j^{(m)}$ can be computed recursively from the units in the following layer.",
"answer_length": 777
},
{
"chapter": 5,
"question_number": "5.29",
"difficulty": "easy",
"question_text": "Verify the result (5.141: $\\frac{\\partial \\widetilde{E}}{\\partial w_i} = \\frac{\\partial E}{\\partial w_i} + \\lambda \\sum_j \\gamma_j(w_i) \\frac{(w_i - \\mu_j)}{\\sigma_j^2}.$).",
"answer": "It is obvious. Firstly, we know that:\n\n$$\\frac{\\partial}{\\partial w_i} \\left\\{ \\pi_j \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2) \\right\\} = -\\pi_j \\frac{w_i - \\mu_j}{\\sigma_j^2} \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2)$$\n\nWe now derive the error function with respect to $w_i$ :\n\n$$\\begin{split} \\frac{\\partial \\widetilde{E}}{\\partial w_{i}} &= \\frac{\\partial E}{\\partial w_{i}} + \\frac{\\partial \\lambda \\Omega(\\mathbf{w})}{\\partial w_{i}} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} - \\lambda \\frac{\\partial}{\\partial w_{i}} \\left\\{ \\sum_{i} \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} - \\lambda \\frac{\\partial}{\\partial w_{i}} \\left\\{ \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} - \\lambda \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\frac{\\partial}{\\partial w_{i}} \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} + \\lambda \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\frac{w_{i} - \\mu_{j}}{\\sigma_{j}^{2}} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} + \\lambda \\frac{\\sum_{j=1}^{M} \\pi_{j} \\frac{w_{i} - \\mu_{j}}{\\sigma_{j}^{2}} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})}{\\sum_{k} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2})} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} + \\lambda \\sum_{j=1}^{M} \\frac{\\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})}{\\sum_{k} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2})} \\frac{w_{i} - \\mu_{j}}{\\sigma_{j}^{2}} \\\\ &= \\frac{\\partial E}{\\partial w_{i}} + \\lambda \\sum_{j=1}^{M} \\gamma_{j}(w_{i}) \\frac{w_{i} - \\mu_{j}}{\\sigma_{j}^{2}} \\end{aligned}$$\n\nWhere we have used (5.138: $\\Omega(\\mathbf{w}) = -\\sum_{i} \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right).$) and defined (5.140: $\\gamma_j(w) = \\frac{\\pi_j \\mathcal{N}(w|\\mu_j, \\sigma_j^2)}{\\sum_k \\pi_k \\mathcal{N}(w|\\mu_k, \\sigma_k^2)}.$).",
"answer_length": 2181
},
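A finite-difference check of (5.141) as derived in Problem 5.29 above, using an arbitrary three-component mixture prior (all parameter values below are illustrative):

```python
# Verify d Omega / d w_i = sum_j gamma_j(w_i) (w_i - mu_j) / sigma_j^2 numerically.
import numpy as np

pi  = np.array([0.3, 0.5, 0.2])      # mixing coefficients
mu  = np.array([-1.0, 0.0, 2.0])     # component means
sig = np.array([0.5, 1.0, 1.5])      # component standard deviations

def gauss(w, mu, sig):
    return np.exp(-0.5 * ((w - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

def Omega(w):                        # the regularizer (5.138) for a weight vector w
    return -np.sum(np.log([np.sum(pi * gauss(wi, mu, sig)) for wi in w]))

def dOmega_dwi(wi):                  # the analytic form from (5.141), regularizer part
    resp = pi * gauss(wi, mu, sig)
    gamma = resp / resp.sum()        # gamma_j(w_i) as in (5.140)
    return np.sum(gamma * (wi - mu) / sig ** 2)

w, eps = np.array([0.7, -0.3, 1.9]), 1e-6
for i in range(len(w)):
    e = np.zeros_like(w); e[i] = eps
    numeric = (Omega(w + e) - Omega(w - e)) / (2 * eps)
    print(abs(numeric - dOmega_dwi(w[i])))   # all ~1e-9 or smaller
```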
{
"chapter": 5,
"question_number": "5.3",
"difficulty": "medium",
"question_text": "Consider a regression problem involving multiple target variables in which it is assumed that the distribution of the targets, conditioned on the input vector $\\mathbf{x}$ , is a Gaussian of the form\n\n$$p(\\mathbf{t}|\\mathbf{x}, \\mathbf{w}) = \\mathcal{N}(\\mathbf{t}|\\mathbf{y}(\\mathbf{x}, \\mathbf{w}), \\mathbf{\\Sigma})$$\n (5.192: $p(\\mathbf{t}|\\mathbf{x}, \\mathbf{w}) = \\mathcal{N}(\\mathbf{t}|\\mathbf{y}(\\mathbf{x}, \\mathbf{w}), \\mathbf{\\Sigma})$)\n\nwhere $\\mathbf{y}(\\mathbf{x}, \\mathbf{w})$ is the output of a neural network with input vector $\\mathbf{x}$ and weight vector $\\mathbf{w}$ , and $\\mathbf{\\Sigma}$ is the covariance of the assumed Gaussian noise on the targets. Given a set of independent observations of $\\mathbf{x}$ and $\\mathbf{t}$ , write down the error function that must be minimized in order to find the maximum likelihood solution for $\\mathbf{w}$ , if we assume that $\\mathbf{\\Sigma}$ is fixed and known. Now assume that $\\mathbf{\\Sigma}$ is also to be determined from the data, and write down an expression for the maximum likelihood solution for $\\mathbf{\\Sigma}$ . Note that the optimizations of $\\mathbf{w}$ and $\\mathbf{\\Sigma}$ are now coupled, in contrast to the case of independent target variables discussed in Section 5.2.",
"answer": "Following the process in the previous question, we first write down the negative logarithm of the likelihood function.\n\n$$E(\\mathbf{w}, \\mathbf{\\Sigma}) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ [\\mathbf{y}(\\mathbf{x}_n, \\mathbf{w}) - \\mathbf{t}_n]^T \\mathbf{\\Sigma}^{-1} [\\mathbf{y}(\\mathbf{x}_n, \\mathbf{w}) - \\mathbf{t}_n] \\right\\} + \\frac{N}{2} \\ln|\\mathbf{\\Sigma}| + \\text{const} \\quad (*)$$\n\nNote here we have assumed $\\Sigma$ is unknown and const denotes the term independent of both w and $\\Sigma$ . In the first situation, if $\\Sigma$ is fixed and known, the equation above will reduce to:\n\n$$E(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ [\\mathbf{y}(\\mathbf{x_n}, \\mathbf{w}) - \\mathbf{t_n}]^T \\mathbf{\\Sigma}^{-1} [\\mathbf{y}(\\mathbf{x_n}, \\mathbf{w}) - \\mathbf{t_n}] \\right\\} + \\text{const}$$\n\nWe can simply solve $\\mathbf{w}_{ML}$ by minimizing it. If $\\Sigma$ is unknown, since $\\Sigma$ is in the first term on the right of (\\*), solving $\\mathbf{w}_{ML}$ will involve $\\Sigma$ . Note that in the previous problem, the main reason that they can decouple is due to the independent assumption, i.e., $\\Sigma$ reduces to $\\beta^{-1}\\mathbf{I}$ , so that we can bring $\\beta$ to the front and view it as a fixed multiplying factor when solving $\\mathbf{w}_{ML}$ .",
"answer_length": 1293
},
{
"chapter": 5,
"question_number": "5.30",
"difficulty": "easy",
"question_text": "Verify the result (5.142: $\\frac{\\partial \\widetilde{E}}{\\partial \\mu_j} = \\lambda \\sum_i \\gamma_j(w_i) \\frac{(\\mu_i - w_j)}{\\sigma_j^2}$).",
"answer": "Is is similar to the previous problem. Since we know that:\n\n$$\\frac{\\partial}{\\partial \\mu_j} \\left\\{ \\pi_j \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2) \\right\\} = \\pi_j \\frac{w_i - \\mu_j}{\\sigma_j^2} \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2)$$\n\nWe can derive:\n\n$$\\begin{split} \\frac{\\partial \\widetilde{E}}{\\partial \\mu_{j}} &= \\frac{\\partial \\lambda \\Omega(\\mathbf{w})}{\\partial \\mu_{j}} \\\\ &= -\\lambda \\frac{\\partial}{\\partial \\mu_{j}} \\left\\{ \\sum_{i} \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{\\partial}{\\partial \\mu_{j}} \\left\\{ \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\frac{\\partial}{\\partial \\mu_{j}} \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\pi_{j} \\frac{w_{i} - \\mu_{j}}{\\sigma_{j}^{2}} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\\\ &= \\lambda \\sum_{i} \\frac{\\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})}{\\sum_{k=1}^{K} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2})} \\frac{\\mu_{j} - w_{i}}{\\sigma_{j}^{2}} = \\lambda \\sum_{i} \\gamma_{j}(w_{i}) \\frac{\\mu_{j} - w_{i}}{\\sigma_{j}^{2}} \\end{split}$$\n\nNote that there is a typo in (5.142: $\\frac{\\partial \\widetilde{E}}{\\partial \\mu_j} = \\lambda \\sum_i \\gamma_j(w_i) \\frac{(\\mu_i - w_j)}{\\sigma_j^2}$). The numerator should be $\\mu_j - w_i$ instead of $\\mu_i - w_j$ . This can be easily seen through the fact that the mean and variance of the Gaussian Distribution should have the same subindex and since $\\sigma_j$ is in the denominator, $\\mu_j$ should occur in the numerator instead of $\\mu_i$ .",
"answer_length": 1851
},
{
"chapter": 5,
"question_number": "5.31",
"difficulty": "easy",
"question_text": "Verify the result (5.143: $\\frac{\\partial \\widetilde{E}}{\\partial \\sigma_j} = \\lambda \\sum_{i} \\gamma_j(w_i) \\left( \\frac{1}{\\sigma_j} - \\frac{(w_i - \\mu_j)^2}{\\sigma_j^3} \\right)$).",
"answer": "It is similar to the previous problem. Since we know that:\n\n$$\\frac{\\partial}{\\partial \\sigma_j} \\left\\{ \\pi_j \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2) \\right\\} = \\left( -\\frac{1}{\\sigma_j} + \\frac{(w_i - \\mu_j)^2}{\\sigma_j^3} \\right) \\pi_j \\mathcal{N}(w_i | \\mu_j, \\sigma_j^2)$$\n\nWe can derive:\n\n$$\\begin{split} \\frac{\\partial \\widetilde{E}}{\\partial \\sigma_{j}} &= \\frac{\\partial \\lambda \\Omega(\\mathbf{w})}{\\partial \\sigma_{j}} \\\\ &= -\\lambda \\frac{\\partial}{\\partial \\sigma_{j}} \\left\\{ \\sum_{i} \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{\\partial}{\\partial \\sigma_{j}} \\left\\{ \\ln \\left( \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\frac{\\partial}{\\partial \\sigma_{j}} \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\frac{\\partial}{\\partial \\sigma_{j}} \\left\\{ \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\\\ &= \\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\left( \\frac{1}{\\sigma_{j}} - \\frac{(w_{i} - \\mu_{j})^{2}}{\\sigma_{j}^{3}} \\right) \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\\\ &= \\lambda \\sum_{i} \\frac{\\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})}{\\sum_{k=1}^{M} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2})} \\left( \\frac{1}{\\sigma_{j}} - \\frac{(w_{i} - \\mu_{j})^{2}}{\\sigma_{j}^{3}} \\right) \\\\ &= \\lambda \\sum_{i} \\gamma_{j}(w_{i}) \\left( \\frac{1}{\\sigma_{j}} - \\frac{(w_{i} - \\mu_{j})^{2}}{\\sigma_{j}^{3}} \\right) \\end{split}$$\n\nJust as required.",
"answer_length": 1818
},
{
"chapter": 5,
"question_number": "5.32",
"difficulty": "medium",
"question_text": "Show that the derivatives of the mixing coefficients $\\{\\pi_k\\}$ , defined by (5.146: $\\pi_j = \\frac{\\exp(\\eta_j)}{\\sum_{k=1}^{M} \\exp(\\eta_k)}.$), with respect to the auxiliary parameters $\\{\\eta_i\\}$ are given by\n\n$$\\frac{\\partial \\pi_k}{\\partial \\eta_i} = \\delta_{jk} \\pi_j - \\pi_j \\pi_k. \\tag{5.208}$$\n\nHence, by making use of the constraint $\\sum_{k} \\pi_{k} = 1$ , derive the result (5.147: $\\frac{\\partial \\widetilde{E}}{\\partial \\eta_j} = \\sum_i \\left\\{ \\pi_j - \\gamma_j(w_i) \\right\\}.$).",
"answer": "It is trivial. We begin by verifying (5.208: $\\frac{\\partial \\pi_k}{\\partial \\eta_i} = \\delta_{jk} \\pi_j - \\pi_j \\pi_k.$) when $j \\neq k$ .\n\n$$\\begin{array}{ll} \\frac{\\partial \\pi_k}{\\partial \\eta_j} & = & \\frac{\\partial}{\\partial \\eta_j} \\left\\{ \\frac{exp(\\eta_k)}{\\sum_k exp(\\eta_k)} \\right\\} \\\\ & = & \\frac{-exp(\\eta_k) exp(\\eta_j)}{\\left[\\sum_k exp(\\eta_k)\\right]^2} \\\\ & = & -\\pi_j \\pi_k \\end{array}$$\n\nAnd if now we have j = k:\n\n$$\\begin{array}{lcl} \\frac{\\partial \\pi_k}{\\partial \\eta_k} & = & \\frac{\\partial}{\\partial \\eta_k} \\left\\{ \\frac{exp(\\eta_k)}{\\sum_k exp(\\eta_k)} \\right\\} \\\\ & = & \\frac{exp(\\eta_k) \\left[ \\sum_k exp(\\eta_k) \\right] - exp(\\eta_k) exp(\\eta_k)}{\\left[ \\sum_k exp(\\eta_k) \\right]^2} \\\\ & = & \\pi_k - \\pi_k \\pi_k \\end{array}$$\n\nIf we combine these two cases, we can easily see that (5.208: $\\frac{\\partial \\pi_k}{\\partial \\eta_i} = \\delta_{jk} \\pi_j - \\pi_j \\pi_k.$) holds. Now\n\nwe prove (5.147: $\\frac{\\partial \\widetilde{E}}{\\partial \\eta_j} = \\sum_i \\left\\{ \\pi_j - \\gamma_j(w_i) \\right\\}.$).\n\n$$\\begin{split} \\frac{\\partial \\widetilde{E}}{\\partial \\eta_{j}} &= \\lambda \\frac{\\partial \\Omega(\\mathbf{w})}{\\partial \\eta_{j}} \\\\ &= -\\lambda \\frac{\\partial}{\\partial \\eta_{j}} \\left\\{ \\sum_{i} \\ln \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{\\partial}{\\partial \\eta_{j}} \\left\\{ \\ln \\left\\{ \\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) \\right\\} \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\frac{\\partial}{\\partial \\eta_{j}} \\left\\{ \\sum_{k=1}^{M} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\sum_{k=1}^{M} \\frac{\\partial}{\\partial \\eta_{j}} \\left\\{ \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\sum_{k=1}^{M} \\frac{\\partial}{\\partial \\eta_{k}} \\left\\{ \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) \\right\\} \\frac{\\partial \\pi_{k}}{\\partial \\eta_{j}} \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\sum_{k=1}^{M} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) (\\delta_{jk} \\pi_{j} - \\pi_{j} \\pi_{k}) \\\\ &= -\\lambda \\sum_{i} \\frac{1}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\left\\{ \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2}) - \\pi_{j} \\sum_{k=1}^{M} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) \\right\\} \\\\ &= -\\lambda \\sum_{i} \\left\\{ \\frac{\\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})}{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} - \\frac{\\pi_{j} \\sum_{k=1}^{M} \\pi_{k} \\mathcal{N}(w_{i} | \\mu_{k}, \\sigma_{k}^{2}) }{\\sum_{j=1}^{M} \\pi_{j} \\mathcal{N}(w_{i} | \\mu_{j}, \\sigma_{j}^{2})} \\right\\} \\\\ &= -\\lambda \\sum_{i} \\left\\{ \\gamma_{j}(w_{i}) - \\pi_{j} \\right\\} = \\lambda \\sum_{i} \\left\\{ \\pi_{j} - \\gamma_{j}(w_{i}) \\right\\} \\end{split}$$\n\nJust as required.",
"answer_length": 3140
},
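The softmax Jacobian (5.208) used in Problem 5.32 can be checked directly (the eta values below are arbitrary):

```python
# Check d pi_k / d eta_j = delta_jk * pi_j - pi_j * pi_k against central differences.
import numpy as np

def softmax(eta):
    e = np.exp(eta - eta.max())
    return e / e.sum()

eta = np.array([0.2, -1.0, 0.7, 0.1])
p = softmax(eta)
J_analytic = np.diag(p) - np.outer(p, p)      # element (k, j) = delta_jk pi_j - pi_j pi_k

eps = 1e-6
J_num = np.column_stack([(softmax(eta + eps * e) - softmax(eta - eps * e)) / (2 * eps)
                         for e in np.eye(len(eta))])
print(np.max(np.abs(J_analytic - J_num)))     # ~1e-10
```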
{
"chapter": 5,
"question_number": "5.33",
"difficulty": "easy",
"question_text": "Write down a pair of equations that express the Cartesian coordinates $(x_1, x_2)$ for the robot arm shown in Figure 5.18 in terms of the joint angles $\\theta_1$ and $\\theta_2$ and the lengths $L_1$ and $L_2$ of the links. Assume the origin of the coordinate system is given by the attachment point of the lower arm. These equations define the 'forward kinematics' of the robot arm.",
"answer": "It is trivial. We set the attachment point of the lower arm with the ground as the origin of the coordinate. We first aim to find the vertical distance from the origin to the target point, and this is also the value of $x_2$ .\n\n$$x_2 = L_1 \\sin(\\pi - \\theta_1) + L_2 \\sin(\\theta_2 - (\\pi - \\theta_1))$$\n \n= $L_1 \\sin \\theta_1 - L_2 \\sin(\\theta_1 + \\theta_2)$ \n\nSimilarly, we calculate the horizontal distance from the origin to the target point.\n\n$$x_1 = -L_1 \\cos(\\pi - \\theta_1) + L_2 \\cos(\\theta_2 - (\\pi - \\theta_1))$$\n \n= $L_1 \\cos \\theta_1 - L_2 \\cos(\\theta_1 + \\theta_2)$ \n\nFrom these two equations, we can clearly see the 'forward kinematics' of the robot arm.",
"answer_length": 673
},
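The forward kinematics derived in Problem 5.33 above, transcribed directly into code (the link lengths and joint angles passed in are arbitrary example values):

```python
# x1 = L1 cos(theta1) - L2 cos(theta1 + theta2),  x2 = L1 sin(theta1) - L2 sin(theta1 + theta2)
import numpy as np

def forward_kinematics(theta1, theta2, L1, L2):
    x1 = L1 * np.cos(theta1) - L2 * np.cos(theta1 + theta2)
    x2 = L1 * np.sin(theta1) - L2 * np.sin(theta1 + theta2)
    return x1, x2

print(forward_kinematics(theta1=2.0, theta2=1.0, L1=1.0, L2=0.8))
```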
{
"chapter": 5,
"question_number": "5.34",
"difficulty": "easy",
"question_text": "Derive the result (5.155: $\\frac{\\partial E_n}{\\partial a_k^{\\pi}} = \\pi_k - \\gamma_k.$) for the derivative of the error function with respect to the network output activations controlling the mixing coefficients in the mixture density network.",
"answer": "By analogy with (5.208: $\\frac{\\partial \\pi_k}{\\partial \\eta_i} = \\delta_{jk} \\pi_j - \\pi_j \\pi_k.$), we can write:\n\n$$\\frac{\\partial \\pi_k(\\mathbf{x})}{\\partial \\alpha_j^\\pi} = \\delta_{jk} \\pi_j(\\mathbf{x}) - \\pi_j(\\mathbf{x}) \\pi_k(\\mathbf{x})$$\n\nUsing (5.153: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\ln \\left\\{ \\sum_{k=1}^{k} \\pi_k(\\mathbf{x}_n, \\mathbf{w}) \\mathcal{N} \\left( \\mathbf{t}_n | \\boldsymbol{\\mu}_k(\\mathbf{x}_n, \\mathbf{w}), \\sigma_k^2(\\mathbf{x}_n, \\mathbf{w}) \\right) \\right\\}$), we can see that:\n\n$$E_n = -\\ln \\left\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\}$$\n\nTherefore, we can derive:\n\n$$\\begin{split} \\frac{\\partial E_n}{\\partial a_j^{\\pi}} &= -\\frac{\\partial}{\\partial a_j^{\\pi}} \\ln \\left\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\frac{\\partial}{\\partial a_j^{\\pi}} \\left\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\sum_{k=1}^K \\frac{\\partial \\pi_k}{\\partial a_j^{\\pi}} \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\sum_{k=1}^K \\left[ \\delta_{jk} \\pi_j(\\mathbf{x}_n) - \\pi_j(\\mathbf{x}_n) \\pi_k(\\mathbf{x}_n) \\right] \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\left\\{ \\pi_j(\\mathbf{x}_n) \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_j, \\sigma_j^2) - \\pi_j(\\mathbf{x}_n) \\sum_{k=1}^K \\pi_k(\\mathbf{x}_n) \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= \\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\left\\{ -\\pi_j(\\mathbf{x}_n) \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_j, \\sigma_j^2) + \\pi_j(\\mathbf{x}_n) \\sum_{k=1}^K \\pi_k(\\mathbf{x}_n) \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\end{split}$$\n\nAnd if we denoted (5.154: $\\gamma_k(\\mathbf{t}|\\mathbf{x}) = \\frac{\\pi_k \\mathcal{N}_{nk}}{\\sum_{l=1}^K \\pi_l \\mathcal{N}_{nl}}$), we will have:\n\n$$\\frac{\\partial E_n}{\\partial \\alpha_j^{\\pi}} = -\\gamma_j + \\pi_j$$\n\nNote that our result is slightly different from (5.155: $\\frac{\\partial E_n}{\\partial a_k^{\\pi}} = \\pi_k - \\gamma_k.$) by the subindex. But there are actually the same if we substitute index j by index k in the final expression.\n\n# **Problem 5.35 Solution**\n\nWe deal with the derivative of error function with respect to $\\mu_k$ instead, which will give a vector as result. Furthermore, the lth element of this vector will be what we have been required. Since we know that:\n\n$$\\frac{\\partial}{\\partial \\boldsymbol{\\mu}_k} \\left\\{ \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} = \\frac{\\mathbf{t}_n - \\boldsymbol{\\mu}_k}{\\sigma_k^2} \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)$$\n\nOne thing worthy noticing is that here we focus on the isotropic case as stated in page 273 of the textbook. 
To be more precise, $\\mathcal{N}(\\mathbf{t}_n|\\boldsymbol{\\mu}_k,\\sigma_k^2)$ should be $\\mathcal{N}(\\mathbf{t}_n|\\boldsymbol{\\mu}_k,\\sigma_k^2\\mathbf{I})$ . Provided with the equation above, we can further obtain:\n\n$$\\begin{split} \\frac{\\partial E_n}{\\partial \\boldsymbol{\\mu}_k} &= \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_k} \\left\\{ -\\ln \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_k} \\left\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\cdot \\frac{\\mathbf{t}_n - \\boldsymbol{\\mu}_k}{\\sigma_k^2} \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\\\ &= -\\gamma_k \\frac{\\mathbf{t}_n - \\boldsymbol{\\mu}_k}{\\sigma_k^2} \\end{split}$$\n\nHence noticing (5.152: $\\mu_{kj}(\\mathbf{x}) = a_{kj}^{\\mu}.$), the lth element of the result above is what we are required.\n\n$$\\frac{\\partial E_n}{\\partial \\alpha_{kl}^{\\mu}} = \\frac{\\partial E_n}{\\partial \\mu_{kl}} = \\gamma_k \\frac{\\mu_{kl} - \\mathbf{t}_l}{\\sigma_k^2}$$",
"answer_length": 4455
},
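The gradient derived above for the mixing coefficients can be checked numerically. The following is a minimal sketch (not part of the original solution), assuming NumPy and ad hoc names such as `E_n` and `gauss`; it verifies $\partial E_n / \partial a_j^{\pi} = \pi_j - \gamma_j$ by central finite differences for a single data point of an isotropic mixture density network output.

```python
# Finite-difference check of dE_n/da_j^pi = pi_j - gamma_j (Eq. 5.155).
# Minimal sketch with ad hoc names; not code from the textbook.
import numpy as np

rng = np.random.default_rng(0)
K, D = 4, 3                       # mixture components, target dimension
a_pi = rng.normal(size=K)         # activations controlling the mixing coefficients
mu = rng.normal(size=(K, D))      # component means (held fixed here)
sigma = rng.uniform(0.5, 2.0, K)  # component standard deviations (held fixed here)
t = rng.normal(size=D)            # one target vector t_n

def gauss(t, mu, s):
    # isotropic Gaussian N(t | mu, s^2 I)
    return np.exp(-np.sum((t - mu) ** 2) / (2 * s**2)) / (2 * np.pi * s**2) ** (D / 2)

def E_n(a_pi):
    pi = np.exp(a_pi) / np.exp(a_pi).sum()          # softmax over the pi-activations
    return -np.log(sum(pi[k] * gauss(t, mu[k], sigma[k]) for k in range(K)))

pi = np.exp(a_pi) / np.exp(a_pi).sum()
dens = np.array([pi[k] * gauss(t, mu[k], sigma[k]) for k in range(K)])
gamma = dens / dens.sum()                           # responsibilities (5.154)

eps = 1e-6
num = np.array([(E_n(a_pi + eps * np.eye(K)[j]) - E_n(a_pi - eps * np.eye(K)[j])) / (2 * eps)
                for j in range(K)])
print(np.allclose(num, pi - gamma, atol=1e-6))      # True
```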
{
"chapter": 5,
"question_number": "5.36",
"difficulty": "easy",
"question_text": "Derive the result (5.157: $\\frac{\\partial E_n}{\\partial a_k^{\\sigma}} = -\\gamma_k \\left\\{ \\frac{\\|\\mathbf{t} - \\boldsymbol{\\mu}_k\\|^2}{\\sigma_k^3} - \\frac{1}{\\sigma_k} \\right\\}.$) for the derivative of the error function with respect to the network output activations controlling the component variances in the mixture density network.",
"answer": "Similarly, we know that:\n\n$$\\frac{\\partial}{\\partial \\sigma_k} \\left\\{ \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} = \\left\\{ -\\frac{D}{\\sigma_k} + \\frac{||\\mathbf{t}_n - \\boldsymbol{\\mu}_k||^2}{\\sigma_k^3} \\right\\} \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)$$\n\nTherefore, we can obtain:\n\n$$\\begin{split} \\frac{\\partial E_n}{\\partial \\sigma_k} &= \\frac{\\partial}{\\partial \\sigma_k} \\left\\{ -\\ln \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\frac{\\partial}{\\partial \\sigma_k} \\left\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\right\\} \\\\ &= -\\frac{1}{\\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2)} \\cdot \\left\\{ -\\frac{D}{\\sigma_k} + \\frac{||\\mathbf{t}_n - \\boldsymbol{\\mu}_k||^2}{\\sigma_k^3} \\right\\} \\pi_k \\mathcal{N}(\\mathbf{t}_n | \\boldsymbol{\\mu}_k, \\sigma_k^2) \\\\ &= -\\gamma_k \\left\\{ -\\frac{D}{\\sigma_k} + \\frac{||\\mathbf{t}_n - \\boldsymbol{\\mu}_k||^2}{\\sigma_k^3} \\right\\} \\end{split}$$\n\nNote that there is a typo in (5.157: $\\frac{\\partial E_n}{\\partial a_k^{\\sigma}} = -\\gamma_k \\left\\{ \\frac{\\|\\mathbf{t} - \\boldsymbol{\\mu}_k\\|^2}{\\sigma_k^3} - \\frac{1}{\\sigma_k} \\right\\}.$) and the underlying reason is that: $|\\sigma_k^2 \\mathbf{I}_{D \\times D}| = (\\sigma_k^2)^D$",
"answer_length": 1433
},
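Likewise, the derivative with respect to $\sigma_k$ obtained above, $-\gamma_k\{\|\mathbf{t}_n-\boldsymbol{\mu}_k\|^2/\sigma_k^3 - D/\sigma_k\}$, can be checked by finite differences. A minimal sketch assuming NumPy; all names and values are ad hoc, not from the textbook.

```python
# Finite-difference check of dE_n/dsigma_k = -gamma_k * (||t - mu_k||^2 / sigma_k^3 - D / sigma_k),
# i.e. Eq. (5.157) with the dimension D restored, as argued above.
import numpy as np

rng = np.random.default_rng(1)
K, D = 3, 2
pi = rng.dirichlet(np.ones(K))    # fixed mixing coefficients
mu = rng.normal(size=(K, D))
sigma = rng.uniform(0.5, 2.0, K)
t = rng.normal(size=D)

def gauss(t, mu, s):
    # isotropic Gaussian N(t | mu, s^2 I)
    return np.exp(-np.sum((t - mu) ** 2) / (2 * s**2)) / (2 * np.pi * s**2) ** (D / 2)

def E_n(sig):
    return -np.log(sum(pi[k] * gauss(t, mu[k], sig[k]) for k in range(K)))

dens = np.array([pi[k] * gauss(t, mu[k], sigma[k]) for k in range(K)])
gamma = dens / dens.sum()
analytic = -gamma * (np.sum((t - mu) ** 2, axis=1) / sigma**3 - D / sigma)

eps = 1e-6
num = np.array([(E_n(sigma + eps * np.eye(K)[k]) - E_n(sigma - eps * np.eye(K)[k])) / (2 * eps)
                for k in range(K)])
print(np.allclose(num, analytic, atol=1e-6))        # True
```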
{
"chapter": 5,
"question_number": "5.37",
"difficulty": "easy",
"question_text": "Verify the results (5.158: $\\mathbb{E}\\left[\\mathbf{t}|\\mathbf{x}\\right] = \\int \\mathbf{t}p(\\mathbf{t}|\\mathbf{x}) \\, d\\mathbf{t} = \\sum_{k=1}^{K} \\pi_k(\\mathbf{x}) \\boldsymbol{\\mu}_k(\\mathbf{x})$) and (5.160: $= \\sum_{k=1}^{K} \\pi_{k}(\\mathbf{x}) \\left\\{ \\sigma_{k}^{2}(\\mathbf{x}) + \\left\\|\\boldsymbol{\\mu}_{k}(\\mathbf{x}) - \\sum_{l=1}^{K} \\pi_{l}(\\mathbf{x})\\boldsymbol{\\mu}_{l}(\\mathbf{x})\\right\\|^{2} \\right\\}$) for the conditional mean and variance of the mixture density network model.",
"answer": "First we know two properties for the Gaussian distribution $\\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\mu}, \\sigma^2 \\mathbf{I})$ :\n\n$$\\mathbb{E}[\\mathbf{t}] = \\int \\mathbf{t} \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\mu}, \\sigma^2 \\mathbf{I}) d\\mathbf{t} = \\boldsymbol{\\mu}$$\n\nAnd\n\n$$\\mathbb{E}[||\\mathbf{t}||^2] = \\int ||\\mathbf{t}||^2 \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\mu}, \\sigma^2 \\mathbf{I}) d\\mathbf{t} = L\\sigma^2 + ||\\boldsymbol{\\mu}||^2$$\n\nWhere we have used $\\mathbb{E}[\\mathbf{t}^T \\mathbf{A} \\mathbf{t}] = \\text{Tr}[\\mathbf{A} \\sigma^2 \\mathbf{I}] + \\boldsymbol{\\mu}^T \\mathbf{A} \\boldsymbol{\\mu}$ by setting $\\mathbf{A} = \\mathbf{I}$ . This property can be found in Matrixcookbook eq(378). Here L is the dimension of $\\mathbf{t}$ . Noticing (5.148: $p(\\mathbf{t}|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k(\\mathbf{x}) \\mathcal{N}\\left(\\mathbf{t}|\\boldsymbol{\\mu}_k(\\mathbf{x}), \\sigma_k^2(\\mathbf{x})\\right).$), we can write:\n\n$$\\begin{split} \\mathbb{E}[\\mathbf{t}|\\mathbf{x}] &= \\int \\mathbf{t} p(\\mathbf{t}|\\mathbf{x}) d\\mathbf{t} \\\\ &= \\int \\mathbf{t} \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\mu}_k, \\sigma_k^2) d\\mathbf{t} \\\\ &= \\sum_{k=1}^{K} \\pi_k \\int \\mathbf{t} \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\mu}_k, \\sigma_k^2) d\\mathbf{t} \\\\ &= \\sum_{k=1}^{K} \\pi_k \\boldsymbol{\\mu}_k \\end{split}$$\n\nThen we prove (5.160: $= \\sum_{k=1}^{K} \\pi_{k}(\\mathbf{x}) \\left\\{ \\sigma_{k}^{2}(\\mathbf{x}) + \\left\\|\\boldsymbol{\\mu}_{k}(\\mathbf{x}) - \\sum_{l=1}^{K} \\pi_{l}(\\mathbf{x})\\boldsymbol{\\mu}_{l}(\\mathbf{x})\\right\\|^{2} \\right\\}$).\n\n$$\\begin{split} s^2(\\mathbf{x}) &= & \\mathbb{E}[||\\mathbf{t} - \\mathbb{E}[\\mathbf{t}|\\mathbf{x}]||^2|\\mathbf{x}] = \\mathbb{E}[\\mathbf{t}^2 - 2\\mathbf{t}\\mathbb{E}[\\mathbf{t}|\\mathbf{x}] + \\mathbb{E}[\\mathbf{t}|\\mathbf{x}]^2)|\\mathbf{x}] \\\\ &= & \\mathbb{E}[\\mathbf{t}^2|\\mathbf{x}] - \\mathbb{E}[2\\mathbf{t}\\mathbb{E}[\\mathbf{t}|\\mathbf{x}]|\\mathbf{x}] + \\mathbb{E}[\\mathbf{t}|\\mathbf{x}]^2 = \\mathbb{E}[\\mathbf{t}^2|\\mathbf{x}] - \\mathbb{E}[\\mathbf{t}|\\mathbf{x}]^2 \\\\ &= & \\int ||\\mathbf{t}||^2 \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\boldsymbol{\\mu}_k, \\sigma_k^2) d\\mathbf{t} - ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & \\sum_{k=1}^K \\pi_k \\int ||\\mathbf{t}||^2 \\mathcal{N}(\\boldsymbol{\\mu}_k, \\sigma_k^2) d\\mathbf{t} - ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & \\sum_{k=1}^K \\pi_k (L\\sigma_k^2 + ||\\boldsymbol{\\mu}_k||^2) - ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & L \\sum_{k=1}^K \\pi_k \\sigma_k^2 + \\sum_{k=1}^K \\pi_k ||\\boldsymbol{\\mu}_k||^2 - ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & L \\sum_{k=1}^K \\pi_k \\sigma_k^2 + \\sum_{k=1}^K \\pi_k ||\\boldsymbol{\\mu}_k||^2 - 2 \\times ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 + 1 \\times ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & L \\sum_{k=1}^K \\pi_k \\sigma_k^2 + \\sum_{k=1}^K \\pi_k ||\\boldsymbol{\\mu}_k||^2 - 2 (\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l) (\\sum_{k=1}^K \\pi_k \\boldsymbol{\\mu}_k) + \\left(\\sum_{k=1}^K \\pi_k \\right) ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & L \\sum_{k=1}^K \\pi_k \\sigma_k^2 + \\sum_{k=1}^K \\pi_k ||\\boldsymbol{\\mu}_k||^2 - 2 (\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l) (\\sum_{k=1}^K \\pi_k \\boldsymbol{\\mu}_k) + \\sum_{k=1}^K \\pi_k ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & L 
\\sum_{k=1}^K \\pi_k \\sigma_k^2 + \\sum_{k=1}^K \\pi_k ||\\boldsymbol{\\mu}_k||^2 - 2 (\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l) (\\sum_{k=1}^K \\pi_k \\boldsymbol{\\mu}_k) + \\sum_{k=1}^K \\pi_k ||\\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\\\ &= & \\sum_{k=1}^K \\pi_k \\left( L\\sigma_k^2 + ||\\boldsymbol{\\mu}_k - \\sum_{l=1}^K \\pi_l \\boldsymbol{\\mu}_l||^2 \\right) \\end{split}$$\n\nNote that there is a typo in (5.160: $= \\sum_{k=1}^{K} \\pi_{k}(\\mathbf{x}) \\left\\{ \\sigma_{k}^{2}(\\mathbf{x}) + \\left\\|\\boldsymbol{\\mu}_{k}(\\mathbf{x}) - \\sum_{l=1}^{K} \\pi_{l}(\\mathbf{x})\\boldsymbol{\\mu}_{l}(\\mathbf{x})\\right\\|^{2} \\right\\}$), i.e., the coefficient L in front of $\\sigma_k^2$ is missing.",
"answer_length": 4146
},
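The corrected form of (5.160), with the factor L in front of $\sigma_k^2$, can be verified by Monte Carlo sampling from the mixture. A minimal sketch assuming NumPy; names and parameter values are illustrative only.

```python
# Monte Carlo check of E[t|x] = sum_k pi_k mu_k and
# s^2 = sum_k pi_k (L sigma_k^2 + ||mu_k - sum_l pi_l mu_l||^2),
# i.e. Eq. (5.160) with the factor L restored as noted above.
import numpy as np

rng = np.random.default_rng(2)
K, L = 3, 2                               # components, target dimensionality
pi = rng.dirichlet(np.ones(K))
mu = rng.normal(size=(K, L))
sigma = rng.uniform(0.5, 2.0, K)

# Draw samples from the mixture sum_k pi_k N(t | mu_k, sigma_k^2 I).
N = 2_000_000
comp = rng.choice(K, size=N, p=pi)
samples = mu[comp] + sigma[comp, None] * rng.standard_normal((N, L))

mean_formula = pi @ mu
var_formula = np.sum(pi * (L * sigma**2 + np.sum((mu - mean_formula) ** 2, axis=1)))

print(np.allclose(samples.mean(axis=0), mean_formula, atol=1e-2))        # True
print(np.isclose(np.sum(samples.var(axis=0)), var_formula, rtol=1e-2))   # True
```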
{
"chapter": 5,
"question_number": "5.38",
"difficulty": "easy",
"question_text": "Using the general result (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), derive the predictive distribution (5.172: $p(t|\\mathbf{x}, \\mathcal{D}, \\alpha, \\beta) = \\mathcal{N}\\left(t|y(\\mathbf{x}, \\mathbf{w}_{\\text{MAP}}), \\sigma^2(\\mathbf{x})\\right)$) for the Laplace approximation to the Bayesian neural network model.\n\n### **290 5. NEURAL NETWORKS**",
"answer": "From (5.167: $q(\\mathbf{w}|\\mathcal{D}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{w}_{\\text{MAP}}, \\mathbf{A}^{-1}).$) and (5.171: $p(t|\\mathbf{x}, \\mathbf{w}, \\beta) \\simeq \\mathcal{N}\\left(t|y(\\mathbf{x}, \\mathbf{w}_{\\text{MAP}}) + \\mathbf{g}^{\\mathbf{T}}(\\mathbf{w} - \\mathbf{w}_{\\text{MAP}}), \\beta^{-1}\\right).$), we can write down the expression for the predictive distribution:\n\n$$p(t|\\mathbf{x}, D, \\alpha, \\beta) = \\int p(\\mathbf{w}|D, \\alpha, \\beta) p(t|\\mathbf{x}, \\mathbf{w}, \\beta) d\\mathbf{w}$$\n\n$$\\approx \\int q(\\mathbf{w}|D) p(t|\\mathbf{x}, \\mathbf{w}, \\beta) d\\mathbf{w}$$\n\n$$= \\int \\mathcal{N}(\\mathbf{w}|\\mathbf{w}_{MAP}, \\mathbf{A}^{-1}) \\mathcal{N}(t|\\mathbf{g}^T \\mathbf{w} - \\mathbf{g}^T \\mathbf{w}_{MAP} + y(\\mathbf{x}, \\mathbf{w}_{MAP}), \\beta^{-1}) d\\mathbf{w}$$\n\nNote here $p(t|\\mathbf{x}, \\mathbf{w}, \\beta)$ is given by (5.171: $p(t|\\mathbf{x}, \\mathbf{w}, \\beta) \\simeq \\mathcal{N}\\left(t|y(\\mathbf{x}, \\mathbf{w}_{\\text{MAP}}) + \\mathbf{g}^{\\mathbf{T}}(\\mathbf{w} - \\mathbf{w}_{\\text{MAP}}), \\beta^{-1}\\right).$) and $q(\\mathbf{w}|D)$ is the approximation to the posterior $p(\\mathbf{w}|D, \\alpha, \\beta)$ , which is given by (5.167: $q(\\mathbf{w}|\\mathcal{D}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{w}_{\\text{MAP}}, \\mathbf{A}^{-1}).$). Then by analogy with (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), we first deal with the mean of the predictive distribution:\n\nmean = \n$$\\mathbf{g}^T \\mathbf{w} - \\mathbf{g}^T \\mathbf{w}_{MAP} + y(\\mathbf{x}, \\mathbf{w}_{MAP})|_{\\mathbf{w} = \\mathbf{w}_{MAP}}$$\n \n= $y(\\mathbf{x}, \\mathbf{w}_{MAP})$ \n\nThen we deal with the covariance matrix:\n\nCovariance matrix = \n$$\\beta^{-1} + \\mathbf{g}^T \\mathbf{A}^{-1} \\mathbf{g}$$\n\nJust as required.",
"answer_length": 1826
},
{
"chapter": 5,
"question_number": "5.39",
"difficulty": "easy",
"question_text": "Make use of the Laplace approximation result (4.135: $= f(\\mathbf{z}_0) \\frac{(2\\pi)^{M/2}}{|\\mathbf{A}|^{1/2}}$) to show that the evidence function for the hyperparameters $\\alpha$ and $\\beta$ in the Bayesian neural network model can be approximated by (5.175: $\\ln p(\\mathcal{D}|\\alpha,\\beta) \\simeq -E(\\mathbf{w}_{\\text{MAP}}) - \\frac{1}{2}\\ln|\\mathbf{A}| + \\frac{W}{2}\\ln\\alpha + \\frac{N}{2}\\ln\\beta - \\frac{N}{2}\\ln(2\\pi) \\quad$).",
"answer": "Using Laplace Approximation, we can obtain:\n\n$$p(D|\\mathbf{w},\\beta)p(\\mathbf{w}|\\alpha) = p(D|\\mathbf{w}_{\\text{MAP}},\\beta)p(\\mathbf{w}_{\\text{MAP}}|\\alpha)\\exp\\left\\{-(\\mathbf{w}-\\mathbf{w}_{\\text{MAP}})^T\\mathbf{A}(\\mathbf{w}-\\mathbf{w}_{\\text{MAP}})\\right\\}$$\n\nThen using (5.174: $p(\\mathcal{D}|\\alpha,\\beta) = \\int p(\\mathcal{D}|\\mathbf{w},\\beta)p(\\mathbf{w}|\\alpha) \\,d\\mathbf{w}.$), (5.162: $p(\\mathbf{w}|\\alpha) = \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\alpha^{-1}\\mathbf{I}).$) and (5.163: $p(\\mathcal{D}|\\mathbf{w},\\beta) = \\prod_{n=1}^{N} \\mathcal{N}(t_n|y(\\mathbf{x}_n, \\mathbf{w}), \\beta^{-1})$), we can obtain:\n\n$$p(D|\\alpha,\\beta) = \\int p(D|\\mathbf{w},\\beta)p(\\mathbf{w},\\alpha)d\\mathbf{w}$$\n\n$$= \\int p(D|\\mathbf{w}_{\\text{MAP}},\\beta)p(\\mathbf{w}_{\\text{MAP}}|\\alpha)\\exp\\left\\{-(\\mathbf{w}-\\mathbf{w}_{\\text{MAP}})^{T}\\mathbf{A}(\\mathbf{w}-\\mathbf{w}_{\\text{MAP}})\\right\\}d\\mathbf{w}$$\n\n$$= p(D|\\mathbf{w}_{\\text{MAP}},\\beta)p(\\mathbf{w}_{\\text{MAP}}|\\alpha)\\frac{(2\\pi)^{W/2}}{|\\mathbf{A}|^{1/2}}$$\n\n$$= \\prod_{n=1}^{N} \\mathcal{N}(t_{n}|y(\\mathbf{x}_{n},\\mathbf{w}_{\\text{MAP}}),\\beta^{-1})\\mathcal{N}(\\mathbf{w}_{\\text{MAP}}|\\mathbf{0},\\alpha^{-1}\\mathbf{I})\\frac{(2\\pi)^{W/2}}{|\\mathbf{A}|^{1/2}}$$\n\nIf we take logarithm of both sides, we will obtain (5.175: $\\ln p(\\mathcal{D}|\\alpha,\\beta) \\simeq -E(\\mathbf{w}_{\\text{MAP}}) - \\frac{1}{2}\\ln|\\mathbf{A}| + \\frac{W}{2}\\ln\\alpha + \\frac{N}{2}\\ln\\beta - \\frac{N}{2}\\ln(2\\pi) \\quad$) just as required.",
"answer_length": 1469
},
{
"chapter": 5,
"question_number": "5.4",
"difficulty": "medium",
"question_text": "Consider a binary classification problem in which the target values are $t \\in \\{0,1\\}$ , with a network output $y(\\mathbf{x},\\mathbf{w})$ that represents $p(t=1|\\mathbf{x})$ , and suppose that there is a probability $\\epsilon$ that the class label on a training data point has been incorrectly set. Assuming independent and identically distributed data, write down the error function corresponding to the negative log likelihood. Verify that the error function (5.21: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\}$) is obtained when $\\epsilon = 0$ . Note that this error function makes the model robust to incorrectly labelled data, in contrast to the usual error function.",
"answer": "Based on (5.20: $p(t|\\mathbf{x}, \\mathbf{w}) = y(\\mathbf{x}, \\mathbf{w})^t \\left\\{ 1 - y(\\mathbf{x}, \\mathbf{w}) \\right\\}^{1-t}.$), the current conditional distribution of targets, considering mislabel, given input $\\mathbf{x}$ and weight $\\mathbf{w}$ is:\n\n$$p(t = 1|\\mathbf{x}, \\mathbf{w}) = (1 - \\epsilon) \\cdot p(t_r = 1|\\mathbf{x}, \\mathbf{w}) + \\epsilon \\cdot p(t_r = 0|\\mathbf{x}, \\mathbf{w})$$\n\nNote that here we use t to denote the observed target label, $t_r$ to denote its real label, and that our network is aimed to predict the real label $t_r$ not t, i.e., $p(t_r = 1|\\mathbf{x}, \\mathbf{w}) = y(\\mathbf{x}, \\mathbf{w})$ , hence we see that:\n\n$$p(t = 1|\\mathbf{x}, \\mathbf{w}) = (1 - \\epsilon) \\cdot y(\\mathbf{x}, \\mathbf{w}) + \\epsilon \\cdot [1 - y(\\mathbf{x}, \\mathbf{w})]$$\n(\\*)\n\nAlso, it is the same for $p(t = 0 | \\mathbf{x}, \\mathbf{w})$ :\n\n$$p(t = 0|\\mathbf{x}, \\mathbf{w}) = (1 - \\epsilon) \\cdot [1 - y(\\mathbf{x}, \\mathbf{w})] + \\epsilon \\cdot y(\\mathbf{x}, \\mathbf{w})$$\n (\\*\\*)\n\nCombing (\\*) and (\\*\\*), we can obtain:\n\n$$p(t|\\mathbf{x}, \\mathbf{w}) = (1 - \\epsilon) \\cdot y^t (1 - y)^{1 - t} + \\epsilon \\cdot (1 - y)^t y^{1 - t}$$\n\nWhere y is short for $y(\\mathbf{x}, \\mathbf{w})$ . Therefore, taking the negative logarithm, we can obtain the error function:\n\n$$E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\ln \\left\\{ (1 - \\epsilon) \\cdot y_n^{t_n} (1 - y_n)^{1 - t_n} + \\epsilon \\cdot (1 - y_n)^{t_n} y_n^{1 - t_n} \\right\\}$$\n\nWhen $\\epsilon = 0$ , it is obvious that the equation above will reduce to (5.21: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\}$).",
"answer_length": 1624
},
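The noise-robust error function derived above can be written in a few lines and compared against the standard cross-entropy (5.21) at $\epsilon = 0$. A minimal sketch assuming NumPy; `robust_error` and the sample values are ad hoc, not from the textbook.

```python
# The noise-robust error from the derivation above:
# E(w) = -sum_n ln{ (1-eps) y^t (1-y)^(1-t) + eps (1-y)^t y^(1-t) },
# which reduces to the usual cross-entropy (5.21) when eps = 0.
import numpy as np

def robust_error(y, t, eps=0.0):
    term = (1 - eps) * y**t * (1 - y)**(1 - t) + eps * (1 - y)**t * y**(1 - t)
    return -np.sum(np.log(term))

def cross_entropy(y, t):
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

rng = np.random.default_rng(3)
y = rng.uniform(0.05, 0.95, size=10)      # network outputs y(x_n, w)
t = rng.integers(0, 2, size=10)           # observed labels, possibly flipped

print(np.isclose(robust_error(y, t, eps=0.0), cross_entropy(y, t)))   # True
# With eps > 0 a confidently wrong prediction is penalised far less harshly:
print(robust_error(np.array([0.999]), np.array([0])),
      robust_error(np.array([0.999]), np.array([0]), eps=0.1))
```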
{
"chapter": 5,
"question_number": "5.40",
"difficulty": "easy",
"question_text": "Outline the modifications needed to the framework for Bayesian neural networks, discussed in Section 5.7.3, to handle multiclass problems using networks having softmax output-unit activation functions.",
"answer": "For a k-class classification problem, we need to use softmax activation function and also the error function is now given by (5.24: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_k(\\mathbf{x}_n, \\mathbf{w}).$). Therefore, the\n\nHessian matrix should be derived from (5.24: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_k(\\mathbf{x}_n, \\mathbf{w}).$) and the cross entropy in (5.184: $E(\\mathbf{w}_{\\text{MAP}}) = -\\sum_{n=1}^{N} \\{t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n)\\} + \\frac{\\alpha}{2} \\mathbf{w}_{\\text{MAP}}^{\\text{T}} \\mathbf{w}_{\\text{MAP}}$) will also be replaced by (5.24: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_k(\\mathbf{x}_n, \\mathbf{w}).$).",
"answer_length": 702
},
{
"chapter": 5,
"question_number": "5.41",
"difficulty": "medium",
"question_text": "By following analogous steps to those given in Section 5.7.1 for regression networks, derive the result (5.183: $\\ln p(\\mathcal{D}|\\alpha) \\simeq -E(\\mathbf{w}_{\\text{MAP}}) - \\frac{1}{2} \\ln |\\mathbf{A}| + \\frac{W}{2} \\ln \\alpha + \\text{const}$) for the marginal likelihood in the case of a network having a cross-entropy error function and logistic-sigmoid output-unit activation function.\n\n\n\nIn Chapters 3 and 4, we considered linear parametric models for regression and classification in which the form of the mapping $y(\\mathbf{x}, \\mathbf{w})$ from input $\\mathbf{x}$ to output y is governed by a vector $\\mathbf{w}$ of adaptive parameters. During the learning phase, a set of training data is used either to obtain a point estimate of the parameter vector or to determine a posterior distribution over this vector. The training data is then discarded, and predictions for new inputs are based purely on the learned parameter vector $\\mathbf{w}$ . This approach is also used in nonlinear parametric models such as neural networks.\n\nHowever, there is a class of pattern recognition techniques, in which the training data points, or a subset of them, are kept and used also during the prediction phase. For instance, the Parzen probability density model comprised a linear combination of 'kernel' functions each one centred on one of the training data points. Similarly, in Section 2.5.2 we introduced a simple technique for classification called nearest neighbours, which involved assigning to each new test vector the same label as the\n\nChapter 5\n\nSection 2.5.1\n\nclosest example from the training set. These are examples of *memory-based* methods that involve storing the entire training set in order to make predictions for future data points. They typically require a metric to be defined that measures the similarity of any two vectors in input space, and are generally fast to 'train' but slow at making predictions for test data points.\n\nMany linear parametric models can be re-cast into an equivalent 'dual representation' in which the predictions are also based on linear combinations of a *kernel function* evaluated at the training data points. As we shall see, for models which are based on a fixed nonlinear *feature space* mapping $\\phi(\\mathbf{x})$ , the kernel function is given by the relation\n\n$$k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^{\\mathrm{T}} \\phi(\\mathbf{x}'). \\tag{6.1}$$\n\nFrom this definition, we see that the kernel is a symmetric function of its arguments so that $k(\\mathbf{x}, \\mathbf{x}') = k(\\mathbf{x}', \\mathbf{x})$ . The kernel concept was introduced into the field of pattern recognition by Aizerman *et al.* (1964) in the context of the method of potential functions, so-called because of an analogy with electrostatics. Although neglected for many years, it was re-introduced into machine learning in the context of large-margin classifiers by Boser *et al.* (1992) giving rise to the technique of *support vector machines*. Since then, there has been considerable interest in this topic, both in terms of theory and applications. 
One of the most significant developments has been the extension of kernels to handle symbolic objects, thereby greatly expanding the range of problems that can be addressed.\n\nThe simplest example of a kernel function is obtained by considering the identity mapping for the feature space in (6.1: $k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^{\\mathrm{T}} \\phi(\\mathbf{x}').$) so that $\\phi(\\mathbf{x}) = \\mathbf{x}$ , in which case $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^T \\mathbf{x}'$ . We shall refer to this as the linear kernel.\n\nThe concept of a kernel formulated as an inner product in a feature space allows us to build interesting extensions of many well-known algorithms by making use of the *kernel trick*, also known as *kernel substitution*. The general idea is that, if we have an algorithm formulated in such a way that the input vector $\\mathbf{x}$ enters only in the form of scalar products, then we can replace that scalar product with some other choice of kernel. For instance, the technique of kernel substitution can be applied to principal component analysis in order to develop a nonlinear variant of PCA (Schölkopf *et al.*, 1998). Other examples of kernel substitution include nearest-neighbour classifiers and the kernel Fisher discriminant (Mika *et al.*, 1999; Roth and Steinhage, 2000; Baudat and Anouar, 2000).\n\nThere are numerous forms of kernel functions in common use, and we shall encounter several examples in this chapter. Many have the property of being a function only of the difference between the arguments, so that $k(\\mathbf{x}, \\mathbf{x}') = k(\\mathbf{x} - \\mathbf{x}')$ , which are known as *stationary* kernels because they are invariant to translations in input space. A further specialization involves *homogeneous* kernels, also known as *radial basis functions*, which depend only on the magnitude of the distance (typically Euclidean) between the arguments so that $k(\\mathbf{x}, \\mathbf{x}') = k(||\\mathbf{x} - \\mathbf{x}'||)$ .\n\nFor recent textbooks on kernel methods, see Schölkopf and Smola (2002), Herbrich (2002), and Shawe-Taylor and Cristianini (2004).\n\nChapter 7\n\nSection 12.3\n\nSection 6.3\n\n### 6.1. Dual Representations\n\nMany linear models for regression and classification can be reformulated in terms of a dual representation in which the kernel function arises naturally. This concept will play an important role when we consider support vector machines in the next chapter. Here we consider a linear regression model whose parameters are determined by minimizing a regularized sum-of-squares error function given by\n\n$$J(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\}^2 + \\frac{\\lambda}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}$$\n (6.2: $J(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\}^2 + \\frac{\\lambda}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}$)\n\nwhere $\\lambda \\geqslant 0$ . 
If we set the gradient of $J(\\mathbf{w})$ with respect to $\\mathbf{w}$ equal to zero, we see that the solution for $\\mathbf{w}$ takes the form of a linear combination of the vectors $\\phi(\\mathbf{x}_n)$ , with coefficients that are functions of $\\mathbf{w}$ , of the form\n\n$$\\mathbf{w} = -\\frac{1}{\\lambda} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\} \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\sum_{n=1}^{N} a_n \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\boldsymbol{\\Phi}^{\\mathrm{T}} \\mathbf{a}$$\n(6.3: $\\mathbf{w} = -\\frac{1}{\\lambda} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\} \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\sum_{n=1}^{N} a_n \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\boldsymbol{\\Phi}^{\\mathrm{T}} \\mathbf{a}$)\n\nwhere $\\Phi$ is the design matrix, whose $n^{\\text{th}}$ row is given by $\\phi(\\mathbf{x}_n)^{\\text{T}}$ . Here the vector $\\mathbf{a} = (a_1, \\dots, a_N)^{\\text{T}}$ , and we have defined\n\n$$a_n = -\\frac{1}{\\lambda} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\phi(\\mathbf{x}_n) - t_n \\right\\}. \\tag{6.4}$$",
"answer": "By analogy to Prob.5.39, we can write:\n\n$$p(D|\\alpha) = p(D|\\mathbf{w}_{\\text{MAP}})p(\\mathbf{w}_{\\text{MAP}}|\\alpha) \\frac{(2\\pi)^{W/2}}{|\\mathbf{A}|^{1/2}}$$\n\nSince we know that the prior $p(\\mathbf{w}|\\alpha)$ follows a Gaussian distribution, i.e., (5.162: $p(\\mathbf{w}|\\alpha) = \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\alpha^{-1}\\mathbf{I}).$), as stated in the text. Therefore we can obtain:\n\n$$\\begin{split} \\ln p(D|\\alpha) &= & \\ln p(D|\\mathbf{w}_{\\text{MAP}}) + \\ln p(\\mathbf{w}_{\\text{MAP}}|\\alpha) - \\frac{1}{2}\\ln |\\mathbf{A}| + \\text{const} \\\\ &= & \\ln p(D|\\mathbf{w}_{\\text{MAP}}) - \\frac{\\alpha}{2}\\mathbf{w}^T\\mathbf{w} + \\frac{W}{2}\\ln \\alpha - \\frac{1}{2}\\ln |\\mathbf{A}| + \\text{const} \\\\ &= & -E(\\mathbf{w}_{\\text{MAP}}) + \\frac{W}{2}\\ln \\alpha - \\frac{1}{2}\\ln |\\mathbf{A}| + \\text{const} \\end{split}$$\n\nJust as required.\n\n## 0.6 Kernel Methods",
"answer_length": 863
},
{
"chapter": 5,
"question_number": "5.5",
"difficulty": "easy",
"question_text": "Show that maximizing likelihood for a multiclass neural network model in which the network outputs have the interpretation $y_k(\\mathbf{x}, \\mathbf{w}) = p(t_k = 1|\\mathbf{x})$ is equivalent to the minimization of the cross-entropy error function (5.24: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_k(\\mathbf{x}_n, \\mathbf{w}).$).",
"answer": "It is obvious by using (5.22: $p(\\mathbf{t}|\\mathbf{x}, \\mathbf{w}) = \\prod_{k=1}^{K} y_k(\\mathbf{x}, \\mathbf{w})^{t_k} \\left[ 1 - y_k(\\mathbf{x}, \\mathbf{w}) \\right]^{1 - t_k}.$).\n\n$$E(\\mathbf{w}) = -\\ln \\prod_{n=1}^{N} p(\\mathbf{t}|\\mathbf{x_n}, \\mathbf{w})$$\n\n$$= -\\ln \\prod_{n=1}^{N} \\prod_{k=1}^{K} y_k(\\mathbf{x_n}, \\mathbf{w})^{t_{nk}} [1 - y_k(\\mathbf{x_n}, \\mathbf{w})]^{1 - t_{nk}}$$\n\n$$= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\ln \\{y_k(\\mathbf{x_n}, \\mathbf{w})^{t_{nk}} [1 - y_k(\\mathbf{x_n}, \\mathbf{w})]^{1 - t_{nk}} \\}$$\n\n$$= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\ln [y_{nk}^{t_{nk}} (1 - y_{nk})^{1 - t_{nk}}]$$\n\n$$= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\{t_{nk} \\ln y_{nk} + (1 - t_{nk}) \\ln (1 - y_{nk}) \\}$$\n\nWhere we have denoted\n\n$$y_{nk} = y_k(\\mathbf{x_n}, \\mathbf{w})$$",
"answer_length": 774
},
{
"chapter": 5,
"question_number": "5.6",
"difficulty": "easy",
"question_text": "Show the derivative of the error function (5.21: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\}$) with respect to the activation $a_k$ for an output unit having a logistic sigmoid activation function satisfies (5.18: $\\frac{\\partial E}{\\partial a_k} = y_k - t_k$).",
"answer": "We know that $y_k = \\sigma(a_k)$ , where $\\sigma(\\cdot)$ represents the logistic sigmoid function. Moreover,\n\n$$\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma)$$\n\n$$\\frac{dE(\\mathbf{w})}{da_k} = -t_k \\frac{1}{y_k} [y_k (1 - y_k)] + (1 - t_k) \\frac{1}{1 - y_k} [y_k (1 - y_k)]$$\n\n$$= [y_k (1 - y_k)] [\\frac{1 - t_k}{1 - y_k} - \\frac{t_k}{y_k}]$$\n\n$$= (1 - t_k) y_k - t_k (1 - y_k)$$\n\n$$= y_k - t_k$$\n\nJust as required.",
"answer_length": 412
},
{
"chapter": 5,
"question_number": "5.7",
"difficulty": "easy",
"question_text": "Show the derivative of the error function (5.24: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_k(\\mathbf{x}_n, \\mathbf{w}).$) with respect to the activation $a_k$ for output units having a softmax activation function satisfies (5.18: $\\frac{\\partial E}{\\partial a_k} = y_k - t_k$).",
"answer": "It is similar to the previous problem. First we denote $y_{kn} = y_k(\\mathbf{x_n}, \\mathbf{w})$ . If we use softmax function as activation for the output unit, according to (4.106: $\\frac{\\partial y_k}{\\partial a_j} = y_k (I_{kj} - y_j)$), we have:\n\n$$\\frac{dy_{kn}}{da_j} = y_{kn}(I_{kj} - y_{jn})$$\n\nTherefore,\n\n$$\\frac{dE(\\mathbf{w})}{da_{j}} = \\frac{d}{da_{k}} \\left\\{ -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\ln y_{k}(\\mathbf{x_{n}}, \\mathbf{w}) \\right\\} \n= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\frac{d}{da_{j}} \\left\\{ t_{kn} \\ln y_{kn} \\right\\} \n= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} \\frac{1}{y_{kn}} \\left[ y_{kn} (I_{kj} - y_{jn}) \\right] \n= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} (t_{kn} I_{kj} - t_{kn} y_{jn}) \n= -\\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} I_{kj} + \\sum_{n=1}^{N} \\sum_{k=1}^{K} t_{kn} y_{jn} \n= -\\sum_{n=1}^{N} t_{jn} + \\sum_{n=1}^{N} y_{jn} \n= \\sum_{n=1}^{N} (y_{jn} - t_{jn})$$\n\nWhere we have used the fact that only when $k=j,\\ I_{kj}=1\\neq 0$ and that $\\sum_{k=1}^K t_{kn}=1.$",
"answer_length": 994
},
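The result $\partial E/\partial a_j = y_j - t_j$ for softmax outputs with the cross-entropy error can be confirmed by a finite-difference check on a single pattern. A minimal sketch assuming NumPy; all names are ad hoc, not from the textbook.

```python
# Finite-difference check that dE/da_j = y_j - t_j for softmax outputs with the
# cross-entropy error (5.24), for a single data point.
import numpy as np

rng = np.random.default_rng(4)
K = 5
a = rng.normal(size=K)                    # output-unit activations
t = np.eye(K)[rng.integers(K)]            # 1-of-K target vector

def E(a):
    y = np.exp(a - a.max()); y /= y.sum()            # softmax
    return -np.sum(t * np.log(y))

y = np.exp(a - a.max()); y = y / y.sum()
eps = 1e-6
num = np.array([(E(a + eps * np.eye(K)[j]) - E(a - eps * np.eye(K)[j])) / (2 * eps)
                for j in range(K)])
print(np.allclose(num, y - t, atol=1e-6))            # True
```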
{
"chapter": 5,
"question_number": "5.8",
"difficulty": "easy",
"question_text": "We saw in (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$) that the derivative of the logistic sigmoid activation function can be expressed in terms of the function value itself. Derive the corresponding result for the 'tanh' activation function defined by (5.59: $\\tanh(a) = \\frac{e^a - e^{-a}}{e^a + e^{-a}}.$).",
"answer": "It is obvious based on definition of 'tanh', i.e., (5.59: $\\tanh(a) = \\frac{e^a - e^{-a}}{e^a + e^{-a}}.$).\n\n$$\\frac{d}{da}tanh(a) = \\frac{(e^a + e^{-a})(e^a + e^{-a}) - (e^a - e^{-a})(e^a - e^{-a})}{(e^a + e^{-a})^2}$$\n\n$$= 1 - \\frac{(e^a - e^{-a})^2}{(e^a + e^{-a})^2}$$\n\n$$= 1 - tanh(a)^2$$",
"answer_length": 293
},
{
"chapter": 5,
"question_number": "5.9",
"difficulty": "easy",
"question_text": "The error function (5.21: $E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\}$) for binary classification problems was derived for a network having a logistic-sigmoid output activation function, so that $0 \\le y(\\mathbf{x}, \\mathbf{w}) \\le 1$ , and data having target values $t \\in \\{0, 1\\}$ . Derive the corresponding error function if we consider a network having an output $-1 \\le y(\\mathbf{x}, \\mathbf{w}) \\le 1$ and target values t = 1 for class $C_1$ and t = -1 for class $C_2$ . What would be the appropriate choice of output unit activation function?",
"answer": "We know that the logistic sigmoid function $\\sigma(a) \\in [0,1]$ , therefore if we perform a linear transformation $h(a) = 2\\sigma(a) - 1$ , we can find a mapping function h(a) from $(-\\infty, +\\infty)$ to [-1,1]. In this case, the conditional distribution of targets given inputs can be similarly written as:\n\n$$p(t|\\mathbf{x}, \\mathbf{w}) = \\left[\\frac{1 + y(\\mathbf{x}, \\mathbf{w})}{2}\\right]^{(1+t)/2} \\left[\\frac{1 - y(\\mathbf{x}, \\mathbf{w})}{2}\\right]^{(1-t)/2}$$\n\nWhere $[1+y(\\mathbf{x},\\mathbf{w})]/2$ represents the conditional probability $p(C_1|x)$ . Since now $y(\\mathbf{x},\\mathbf{w}) \\in [-1,1]$ , we also need to perform the linear transformation to make it satisfy the constraint for probability. Then we can further obtain:\n\n$$E(\\mathbf{w}) = -\\sum_{n=1}^{N} \\left\\{ \\frac{1+t_n}{2} \\ln \\frac{1+y_n}{2} + \\frac{1-t_n}{2} \\ln \\frac{1-y_n}{2} \\right\\}$$\n$$= -\\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ (1+t_n) \\ln(1+y_n) + (1-t_n) \\ln(1-y_n) \\right\\} + N \\ln 2$$",
"answer_length": 978
}
]
},
{
"chapter_number": 6,
"total_questions": 23,
"difficulty_breakdown": {
"easy": 16,
"medium": 1,
"hard": 0,
"unknown": 10
},
"questions": [
{
"chapter": 6,
"question_number": "6.1",
"difficulty": "medium",
"question_text": "Consider the dual formulation of the least squares linear regression problem given in Section 6.1. Show that the solution for the components $a_n$ of the vector $\\mathbf{a}$ can be expressed as a linear combination of the elements of the vector $\\phi(\\mathbf{x}_n)$ . Denoting these coefficients by the vector $\\mathbf{w}$ , show that the dual of the dual formulation is given by the original representation in terms of the parameter vector $\\mathbf{w}$ .",
"answer": "Recall that in section.6.1, $a_n$ can be written as (6.4: $a_n = -\\frac{1}{\\lambda} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\phi(\\mathbf{x}_n) - t_n \\right\\}.$). We can derive:\n\n$$a_n = -\\frac{1}{\\lambda} \\{ \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\}$$\n\n$$= -\\frac{1}{\\lambda} \\{ w_1 \\phi_1(\\mathbf{x}_n) + w_2 \\phi_2(\\mathbf{x}_n) + \\dots + w_M \\phi_M(\\mathbf{x}_n) - t_n \\}$$\n\n$$= -\\frac{w_1}{\\lambda} \\phi_1(\\mathbf{x}_n) - \\frac{w_2}{\\lambda} \\phi_2(\\mathbf{x}_n) - \\dots - \\frac{w_M}{\\lambda} \\phi_M(\\mathbf{x}_n) + \\frac{t_n}{\\lambda}$$\n\n$$= (c_n - \\frac{w_1}{\\lambda}) \\phi_1(\\mathbf{x}_n) + (c_n - \\frac{w_2}{\\lambda}) \\phi_2(\\mathbf{x}_n) + \\dots + (c_n - \\frac{w_M}{\\lambda}) \\phi_M(\\mathbf{x}_n)$$\n\nHere we have defined:\n\n$$c_n = \\frac{t_n/\\lambda}{\\phi_1(\\mathbf{x}_n) + \\phi_2(\\mathbf{x}_n) + \\dots + \\phi_M(\\mathbf{x}_n)}$$\n\nFrom what we have derived above, we can see that $a_n$ is a linear combination of $\\phi(\\mathbf{x}_n)$ . What's more, we first substitute $\\mathbf{K} = \\mathbf{\\Phi}\\mathbf{\\Phi}^T$ into (6.7: $J(\\mathbf{a}) = \\frac{1}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{K} \\mathbf{K} \\mathbf{a} - \\mathbf{a}^{\\mathrm{T}} \\mathbf{K} \\mathbf{t} + \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} + \\frac{\\lambda}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{K} \\mathbf{a}.$), and then we will obtain (6.5: $J(\\mathbf{a}) = \\frac{1}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{a} - \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t} + \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} + \\frac{\\lambda}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{a}$). Next we substitute (6.3: $\\mathbf{w} = -\\frac{1}{\\lambda} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\} \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\sum_{n=1}^{N} a_n \\boldsymbol{\\phi}(\\mathbf{x}_n) = \\boldsymbol{\\Phi}^{\\mathrm{T}} \\mathbf{a}$) into (6.5: $J(\\mathbf{a}) = \\frac{1}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{a} - \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t} + \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} + \\frac{\\lambda}{2} \\mathbf{a}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{a}$) we will obtain (6.2: $J(\\mathbf{w}) = \\frac{1}{2} \\sum_{n=1}^{N} \\left\\{ \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) - t_n \\right\\}^2 + \\frac{\\lambda}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{w}$) just as required.",
"answer_length": 2582
},
{
"chapter": 6,
"question_number": "6.10",
"difficulty": "easy",
"question_text": "Show that an excellent choice of kernel for learning a function $f(\\mathbf{x})$ is given by $k(\\mathbf{x}, \\mathbf{x}') = f(\\mathbf{x}) f(\\mathbf{x}')$ by showing that a linear learning machine based on this kernel will always find a solution proportional to $f(\\mathbf{x})$ .",
"answer": "According to (6.9: $y(\\mathbf{x}) = \\mathbf{w}^{\\mathrm{T}} \\phi(\\mathbf{x}) = \\mathbf{a}^{\\mathrm{T}} \\Phi \\phi(\\mathbf{x}) = \\mathbf{k}(\\mathbf{x})^{\\mathrm{T}} (\\mathbf{K} + \\lambda \\mathbf{I}_{N})^{-1} \\mathbf{t}$), we have:\n\n$$y(\\mathbf{x}) = \\mathbf{k}(\\mathbf{x})^T (\\mathbf{K} + \\lambda \\mathbf{I}_N)^{-1} \\mathbf{t} = \\mathbf{k}(\\mathbf{x})^T \\mathbf{a} = \\sum_{n=1}^N f(\\mathbf{x}_n) \\cdot f(\\mathbf{x}) \\cdot a_n = \\left[ \\sum_{n=1}^N f(\\mathbf{x}_n) \\cdot a_n \\right] f(\\mathbf{x})$$\n\nWe see that if we choose $k(\\mathbf{x}, \\mathbf{x}') = f(\\mathbf{x})f(\\mathbf{x}')$ we will always find a solution $y(\\mathbf{x})$ proportional to $f(\\mathbf{x})$ .",
"answer_length": 666
},
{
"chapter": 6,
"question_number": "6.11",
"difficulty": "easy",
"question_text": "By making use of the expansion (6.25: $k(\\mathbf{x}, \\mathbf{x}') = \\exp\\left(-\\mathbf{x}^{\\mathrm{T}}\\mathbf{x}/2\\sigma^{2}\\right) \\exp\\left(\\mathbf{x}^{\\mathrm{T}}\\mathbf{x}'/\\sigma^{2}\\right) \\exp\\left(-(\\mathbf{x}')^{\\mathrm{T}}\\mathbf{x}'/2\\sigma^{2}\\right)$), and then expanding the middle factor as a power series, show that the Gaussian kernel (6.23: $k(\\mathbf{x}, \\mathbf{x}') = \\exp\\left(-\\|\\mathbf{x} - \\mathbf{x}'\\|^2 / 2\\sigma^2\\right)$) can be expressed as the inner product of an infinite-dimensional feature vector.",
"answer": "We follow the hint.\n\n$$k(\\mathbf{x}, \\mathbf{x}') = \\exp(-\\mathbf{x}^T \\mathbf{x}/2\\sigma^2) \\cdot \\exp(\\mathbf{x}^T \\mathbf{x}'/\\sigma^2) \\cdot \\exp(-(\\mathbf{x}')^T \\mathbf{x}'/2\\sigma^2)$$\n\n$$= \\exp(-\\mathbf{x}^T \\mathbf{x}/2\\sigma^2) \\cdot \\left(1 + \\frac{\\mathbf{x}^T \\mathbf{x}'}{\\sigma^2} + \\frac{(\\frac{\\mathbf{x}^T \\mathbf{x}'}{\\sigma^2})^2}{2!} + \\cdots\\right) \\cdot \\exp(-(\\mathbf{x}')^T \\mathbf{x}'/2\\sigma^2)$$\n\n$$= \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$$\n\nwhere $\\phi(\\mathbf{x})$ is a column vector with infinite dimension. To be more specific, (6.12: $= \\phi(\\mathbf{x})^{\\mathrm{T}} \\phi(\\mathbf{z}).$) gives a simple example on how to decompose $(\\mathbf{x}^T\\mathbf{x}')^2$ . In our case, we can also decompose $(\\mathbf{x}^T\\mathbf{x}')^k$ , $k = 1, 2, ..., \\infty$ in the similar way. However, since $k \\to \\infty$ , i.e., the decomposition will consist monomials with infinite degree. Thus, there will be infinite terms in the decomposition and the feature mapping function $\\phi(\\mathbf{x})$ will have infinite dimension.",
"answer_length": 1052
},
{
"chapter": 6,
"question_number": "6.12",
"difficulty": "medium",
"question_text": "Consider the space of all possible subsets A of a given fixed set D. Show that the kernel function (6.27: $k(A_1, A_2) = 2^{|A_1 \\cap A_2|}$) corresponds to an inner product in a feature space of dimensionality $2^{|D|}$ defined by the mapping $\\phi(A)$ where A is a subset of D and the element $\\phi_U(A)$ , indexed by the subset U, is given by\n\n$$\\phi_U(A) = \\begin{cases} 1, & \\text{if } U \\subseteq A; \\\\ 0, & \\text{otherwise.} \\end{cases}$$\n (6.95: $\\phi_U(A) = \\begin{cases} 1, & \\text{if } U \\subseteq A; \\\\ 0, & \\text{otherwise.} \\end{cases}$)\n\nHere $U \\subseteq A$ denotes that U is either a subset of A or is equal to A.",
"answer": "First, let's explain the problem a little bit. According to (6.27: $k(A_1, A_2) = 2^{|A_1 \\cap A_2|}$), what we need to prove here is:\n\n$$k(A_1, A_2) = 2^{|A_1 \\cap A_2|} = \\phi(A_1)^T \\phi(A_2)$$\n\nThe biggest difference from the previous problem is that $\\phi(A)$ is a $2^{|D|} \\times 1$ column vector and instead of indexed by $1, 2, ..., 2^{|D|}$ here we index it by $\\{U|U \\subseteq D\\}$ (Note that $\\{U|U \\subseteq D\\}$ is all the possible subsets of D and thus there are $2^{|D|}$ elements in total). Therefore, according to (6.95: $\\phi_U(A) = \\begin{cases} 1, & \\text{if } U \\subseteq A; \\\\ 0, & \\text{otherwise.} \\end{cases}$), we can obtain:\n\n$$\\boldsymbol{\\phi}(A_1)^T\\boldsymbol{\\phi}(A_2) = \\sum_{U\\subseteq D} \\phi_U(A_1)\\phi_U(A_2)$$\n\nBy using the summation, we actually iterate through all the possible subsets of D. If and only if the current iterating subset U is a subset of both $A_1$ and $A_2$ simultaneously, the current adding term equals to 1. Therefore, we actually count how many subsets of D is in the intersection of $A_1$ and $A_2$ .\n\nMoreover, since $A_1$ and $A_2$ are both defined in the subset space of D, what we have deduced above can be written as:\n\n$$\\phi(A_1)^T \\phi(A_2) = 2^{|A_1 \\cap A_2|}$$\n\nJust as required.\n\nProblem 6.13 Solution Wait for update",
"answer_length": 1313
},
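The identity $\phi(A_1)^T\phi(A_2) = 2^{|A_1 \cap A_2|}$ can be verified by brute force for a small set D by enumerating all $2^{|D|}$ subsets. A minimal sketch in Python; the chosen D, $A_1$ and $A_2$ are arbitrary examples, not from the textbook.

```python
# Brute-force check of k(A1, A2) = 2^{|A1 ∩ A2|} = phi(A1)^T phi(A2), where phi is
# indexed by all 2^{|D|} subsets U of D as in (6.95).
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

D = {1, 2, 3, 4}
index = subsets(D)                                    # the 2^{|D|} feature dimensions

def phi(A):
    return [1 if U <= A else 0 for U in index]        # phi_U(A) = 1 iff U ⊆ A

A1, A2 = frozenset({1, 2, 3}), frozenset({2, 3, 4})
inner = sum(u * v for u, v in zip(phi(A1), phi(A2)))
print(inner == 2 ** len(A1 & A2))                     # True
```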
{
"chapter": 6,
"question_number": "6.14",
"difficulty": "easy",
"question_text": "Write down the form of the Fisher kernel, defined by (6.33: $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x})^{\\mathrm{T}} \\mathbf{F}^{-1} \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x}').$), for the case of a distribution $p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\mathbf{S})$ that is Gaussian with mean $\\boldsymbol{\\mu}$ and fixed covariance $\\mathbf{S}$ .",
"answer": "Since the covariance matrix S is fixed, according to (6.32: $\\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x}) = \\nabla_{\\boldsymbol{\\theta}} \\ln p(\\mathbf{x}|\\boldsymbol{\\theta})$) we can obtain:\n\n$$\\mathbf{g}(\\boldsymbol{\\mu}, \\mathbf{x}) = \\nabla_{\\boldsymbol{\\mu}} \\ln p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\frac{\\partial}{\\partial \\boldsymbol{\\mu}} \\left( -\\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{S}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right) = \\mathbf{S}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu})$$\n\nTherefore, according to (6.34: $\\mathbf{F} = \\mathbb{E}_{\\mathbf{x}} \\left[ \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x}) \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x})^{\\mathrm{T}} \\right]$), we can obtain:\n\n$$\\mathbf{F} = \\mathbb{E}_{\\mathbf{x}} \\left[ \\mathbf{g}(\\boldsymbol{\\mu}, \\mathbf{x}) \\mathbf{g}(\\boldsymbol{\\mu}, \\mathbf{x})^T \\right] = \\mathbf{S}^{-1} \\mathbb{E}_{\\mathbf{x}} \\left[ (\\mathbf{x} - \\boldsymbol{\\mu}) (\\mathbf{x} - \\boldsymbol{\\mu})^T \\right] \\mathbf{S}^{-1}$$\n\nSince $\\mathbf{x} \\sim \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\mathbf{S})$ , we have:\n\n$$\\mathbb{E}_{\\mathbf{x}}\\left[(\\mathbf{x}-\\boldsymbol{\\mu})(\\mathbf{x}-\\boldsymbol{\\mu})^{T}\\right] = \\mathbf{S}$$\n\nSo we obtain $\\mathbf{F} = \\mathbf{S}^{-1}$ and then according to (6.33: $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x})^{\\mathrm{T}} \\mathbf{F}^{-1} \\mathbf{g}(\\boldsymbol{\\theta}, \\mathbf{x}').$), we have:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{g}(\\boldsymbol{\\mu}, \\mathbf{x})^T \\mathbf{F}^{-1} \\mathbf{g}(\\boldsymbol{\\mu}, \\mathbf{x}') = (\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{S}^{-1} (\\mathbf{x}' - \\boldsymbol{\\mu})$$",
"answer_length": 1652
},
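The result $\mathbf{F} = \mathbf{S}^{-1}$, and hence the Fisher kernel $k(\mathbf{x},\mathbf{x}') = (\mathbf{x}-\boldsymbol{\mu})^T\mathbf{S}^{-1}(\mathbf{x}'-\boldsymbol{\mu})$, can be checked by Monte Carlo estimation of $\mathbb{E}[\mathbf{g}\mathbf{g}^T]$. A minimal sketch assuming NumPy; the chosen $\boldsymbol{\mu}$ and $\mathbf{S}$ are arbitrary.

```python
# Monte Carlo check that the Fisher information for the mean of N(x|mu, S) is F = S^{-1},
# so the Fisher kernel becomes k(x,x') = (x-mu)^T S^{-1} (x'-mu).
import numpy as np

rng = np.random.default_rng(8)
mu = np.array([1.0, -2.0])
S = np.array([[2.0, 0.5], [0.5, 1.0]])
Sinv = np.linalg.inv(S)

X = rng.multivariate_normal(mu, S, size=500_000)
g = (X - mu) @ Sinv                                   # score vectors g(mu, x) = S^{-1}(x - mu)
F = g.T @ g / len(X)                                  # sample estimate of E[g g^T]
print(np.allclose(F, Sinv, atol=1e-2))                # True
```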
{
"chapter": 6,
"question_number": "6.15",
"difficulty": "easy",
"question_text": "By considering the determinant of a $2 \\times 2$ Gram matrix, show that a positive-definite kernel function k(x, x') satisfies the Cauchy-Schwartz inequality\n\n$$k(x_1, x_2)^2 \\le k(x_1, x_1)k(x_2, x_2).$$\n (6.96: $k(x_1, x_2)^2 \\le k(x_1, x_1)k(x_2, x_2).$)",
"answer": "We rewrite the problem. What we are required to prove is that the Gram matrix $\\mathbf{K}$ :\n\n$$\\mathbf{K} = \\left[ \\begin{array}{cc} k_{11} & k_{12} \\\\ k_{21} & k_{22} \\end{array} \\right],$$\n\nwhere $k_{ij}$ (i,j = 1,2) is short for $k(x_i,x_j)$ , should be positive semidefinite. A positive semidefinite matrix should have positive determinant, i.e.,\n\n$$k_{12}k_{21} \\leq k_{11}k_{22}$$\n.\n\nUsing the symmetric property of kernel, i.e., $k_{12} = k_{21}$ , we obtain what has been required.",
"answer_length": 495
},
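The inequality can also be observed numerically: for a positive-definite kernel such as the Gaussian kernel, every $2 \times 2$ Gram matrix has a nonnegative determinant. A minimal sketch assuming NumPy; the kernel width and sample points are arbitrary.

```python
# Check the Cauchy-Schwarz inequality k(x1,x2)^2 <= k(x1,x1) k(x2,x2) via the
# determinant of the 2x2 Gram matrix for a Gaussian kernel.
import numpy as np

def k(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

rng = np.random.default_rng(5)
for _ in range(1000):
    x1, x2 = rng.normal(size=2), rng.normal(size=2)
    K = np.array([[k(x1, x1), k(x1, x2)],
                  [k(x2, x1), k(x2, x2)]])
    assert np.linalg.det(K) >= -1e-12                 # det >= 0  <=>  k12^2 <= k11 k22
print("all Gram determinants non-negative")
```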
{
"chapter": 6,
"question_number": "6.16",
"difficulty": "medium",
"question_text": "Consider a parametric model governed by the parameter vector w together with a data set of input values $\\mathbf{x}_1, \\dots, \\mathbf{x}_N$ and a nonlinear feature mapping $\\phi(\\mathbf{x})$ . Suppose that the dependence of the error function on w takes the form\n\n$$J(\\mathbf{w}) = f(\\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_1), \\dots, \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_N)) + g(\\mathbf{w}^{\\mathrm{T}} \\mathbf{w})$$\n(6.97: $J(\\mathbf{w}) = f(\\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_1), \\dots, \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_N)) + g(\\mathbf{w}^{\\mathrm{T}} \\mathbf{w})$)\n\nwhere $g(\\cdot)$ is a monotonically increasing function. By writing w in the form\n\n$$\\mathbf{w} = \\sum_{n=1}^{N} \\alpha_n \\boldsymbol{\\phi}(\\mathbf{x}_n) + \\mathbf{w}_{\\perp}$$\n (6.98: $\\mathbf{w} = \\sum_{n=1}^{N} \\alpha_n \\boldsymbol{\\phi}(\\mathbf{x}_n) + \\mathbf{w}_{\\perp}$)\n\nshow that the value of w that minimizes $J(\\mathbf{w})$ takes the form of a linear combination of the basis functions $\\phi(\\mathbf{x}_n)$ for n = 1, ..., N.",
"answer": "Based on the total derivative of function f, we have:\n\n$$f\\left[(\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_1, (\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_2, ..., (\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_N\\right] = \\sum_{n=1}^N \\frac{\\partial f}{\\partial (\\mathbf{w}^T \\boldsymbol{\\phi}_n)} \\cdot \\Delta \\mathbf{w}^T \\boldsymbol{\\phi}_n$$\n\nWhich can be further written as:\n\n$$f\\left[(\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_1, (\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_2, ..., (\\mathbf{w} + \\Delta \\mathbf{w})^T \\boldsymbol{\\phi}_N\\right] = \\left[\\sum_{n=1}^N \\frac{\\partial f}{\\partial (\\mathbf{w}^T \\boldsymbol{\\phi}_n)} \\cdot \\boldsymbol{\\phi}_n^T\\right] \\Delta \\mathbf{w}$$\n\nNote that here $\\phi_n$ is short for $\\phi(\\mathbf{x}_n)$ . Based on the equation above, we can obtain:\n\n$$\\nabla_{\\mathbf{w}} f = \\sum_{n=1}^{N} \\frac{\\partial f}{\\partial (\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})} \\cdot \\boldsymbol{\\phi}_{n}^{T}$$\n\nNow we focus on the derivative of function g with respect to $\\mathbf{w}$ :\n\n$$\\nabla_{\\mathbf{w}} g = \\frac{\\partial g}{\\partial (\\mathbf{w}^T \\mathbf{w})} \\cdot 2\\mathbf{w}^T$$\n\nIn order to find the optimal $\\mathbf{w}$ , we set the derivative of J with respect to $\\mathbf{w}$ equal to $\\mathbf{0}$ , yielding:\n\n$$\\nabla_{\\mathbf{w}} J = \\nabla_{\\mathbf{w}} f + \\nabla_{\\mathbf{w}} g = \\sum_{n=1}^{N} \\frac{\\partial f}{\\partial (\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})} \\cdot \\boldsymbol{\\phi}_{n}^{T} + \\frac{\\partial g}{\\partial (\\mathbf{w}^{T} \\mathbf{w})} \\cdot 2\\mathbf{w}^{T} = \\mathbf{0}$$\n\nRearranging the equation above, we can obtain:\n\n$$\\mathbf{w} = \\frac{1}{2a} \\sum_{n=1}^{N} \\frac{\\partial f}{\\partial (\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})} \\cdot \\boldsymbol{\\phi}_{n}$$\n\nWhere we have defined: $a = 1 \\div \\frac{\\partial g}{\\partial (\\mathbf{w}^T \\mathbf{w})}$ , and since g is a monotonically increasing function, we have a > 0.",
"answer_length": 1935
},
{
"chapter": 6,
"question_number": "6.17",
"difficulty": "medium",
"question_text": "Consider the sum-of-squares error function (6.39: $E = \\frac{1}{2} \\sum_{n=1}^{N} \\int \\{y(\\mathbf{x}_n + \\boldsymbol{\\xi}) - t_n\\}^2 \\nu(\\boldsymbol{\\xi}) \\,d\\boldsymbol{\\xi}.$) for data having noisy inputs, where $\\nu(\\xi)$ is the distribution of the noise. Use the calculus of variations to minimize this error function with respect to the function y(x), and hence show that the optimal solution is given by an expansion of the form (6.40: $y(\\mathbf{x}_n) = \\sum_{n=1}^{N} t_n h(\\mathbf{x} - \\mathbf{x}_n)$) in which the basis functions are given by (6.41: $h(\\mathbf{x} - \\mathbf{x}_n) = \\frac{\\nu(\\mathbf{x} - \\mathbf{x}_n)}{\\sum_{n=1}^{N} \\nu(\\mathbf{x} - \\mathbf{x}_n)}.$).",
"answer": "We consider a variation in the function $y(\\mathbf{x})$ of the form:\n\n$$y(\\mathbf{x}) \\to y(\\mathbf{x}) + \\epsilon \\eta(\\mathbf{x})$$\n\nSubstituting it into (6.39: $E = \\frac{1}{2} \\sum_{n=1}^{N} \\int \\{y(\\mathbf{x}_n + \\boldsymbol{\\xi}) - t_n\\}^2 \\nu(\\boldsymbol{\\xi}) \\,d\\boldsymbol{\\xi}.$) yields:\n\n$$\\begin{split} E[y+\\epsilon\\eta] &= \\frac{1}{2} \\sum_{n=1}^{N} \\int \\left\\{ y+\\epsilon\\eta - t_n \\right\\}^2 v(\\boldsymbol{\\xi}) d\\boldsymbol{\\xi} \\\\ &= \\frac{1}{2} \\sum_{n=1}^{N} \\int \\left\\{ (y-t_n)^2 + 2 \\cdot (\\epsilon\\eta) \\cdot (y-t_n) + (\\epsilon\\eta)^2 \\right\\} v(\\boldsymbol{\\xi}) d\\boldsymbol{\\xi} \\\\ &= E[y] + \\epsilon \\sum_{n=1}^{N} \\int \\left\\{ y-t_n \\right\\} \\eta v d\\boldsymbol{\\xi} + O(\\epsilon^2) \\end{split}$$\n\nNote that here y is short for $y(\\mathbf{x}_n + \\boldsymbol{\\xi})$ , $\\eta$ is short for $\\eta(\\mathbf{x}_n + \\boldsymbol{\\xi})$ and v is short for $v(\\boldsymbol{\\xi})$ respectively. Several clarifications must be made here. What we have done is that we vary the function y by a little bit (i.e., $\\epsilon\\eta$ ) and then we expand the corresponding error with respect to the small variation $\\epsilon$ . The coefficient before $\\epsilon$ is actually the first derivative of the error $E[y + \\epsilon\\eta]$ with respect to $\\epsilon$ at $\\epsilon = 0$ . Since we know that y is the optimal function that can make E the smallest, the first derivative of the error $E[y + \\epsilon\\eta]$ should equal to zero at $\\epsilon = 0$ , which gives:\n\n$$\\sum_{n=1}^{N} \\int \\{y(\\mathbf{x}_n + \\boldsymbol{\\xi}) - t_n\\} \\eta(\\mathbf{x}_n + \\boldsymbol{\\xi}) v(\\boldsymbol{\\xi}) d\\boldsymbol{\\xi} = 0$$\n\nNow we are required to find a function y that can satisfy the equation above no matter what $\\eta$ is. We choose:\n\n$$\\eta(\\mathbf{x}) = \\delta(\\mathbf{x} - \\mathbf{z})$$\n\nThis allows us to evaluate the integral:\n\n$$\\sum_{n=1}^{N} \\int \\left\\{ y(\\mathbf{x}_n + \\boldsymbol{\\xi}) - t_n \\right\\} \\eta(\\mathbf{x}_n + \\boldsymbol{\\xi}) v(\\boldsymbol{\\xi}) d\\boldsymbol{\\xi} = \\sum_{n=1}^{N} \\left\\{ y(\\mathbf{z}) - t_n \\right\\} v(\\mathbf{z} - \\mathbf{x}_n)$$\n\nWe set it to zero and rearrange it, which finally gives (6.40: $y(\\mathbf{x}_n) = \\sum_{n=1}^{N} t_n h(\\mathbf{x} - \\mathbf{x}_n)$) just as required.",
"answer_length": 2249
},
{
"chapter": 6,
"question_number": "6.18",
"difficulty": "easy",
"question_text": "Consider a Nadaraya-Watson model with one input variable x and one target variable t having Gaussian components with isotropic covariances, so that the covariance matrix is given by $\\sigma^2 \\mathbf{I}$ where $\\mathbf{I}$ is the unit matrix. Write down expressions for the conditional density p(t|x) and for the conditional mean $\\mathbb{E}[t|x]$ and variance var[t|x], in terms of the kernel function $k(x, x_n)$ .",
"answer": "According to the main text below Eq (6.48: $p(t|\\mathbf{x}) = \\frac{p(t,\\mathbf{x})}{\\int p(t,\\mathbf{x}) dt} = \\frac{\\sum_{n} f(\\mathbf{x} - \\mathbf{x}_{n}, t - t_{n})}{\\sum_{m} \\int f(\\mathbf{x} - \\mathbf{x}_{m}, t - t_{m}) dt}$), we know that f(x,t), i.e., $f(\\mathbf{z})$ , follows a zero-mean isotropic Gaussian:\n\n$$f(\\mathbf{z}) = \\mathcal{N}(\\mathbf{z}|\\mathbf{0}, \\sigma^2\\mathbf{I})$$\n\nThen $f(x-x_m,t-t_m)$ , i.e., $f(\\mathbf{z}-\\mathbf{z}_m)$ should also satisfy a Gaussian distribution:\n\n$$f(\\mathbf{z} - \\mathbf{z}_m) = \\mathcal{N}(\\mathbf{z}|\\mathbf{z}_m, \\sigma^2 \\mathbf{I})$$\n\nWhere we have defined:\n\n$$\\mathbf{z}_m = (x_m, t_m)$$\n\nThe integral $\\int f(\\mathbf{z} - \\mathbf{z}_m) dt$ corresponds to the marginal distribution with respect to the remaining variable x and, thus, we obtain:\n\n$$\\int f(\\mathbf{z} - \\mathbf{z}_m) dt = \\mathcal{N}(x|x_m, \\sigma^2)$$\n\nWe substitute all the expressions into Eq (6.48: $p(t|\\mathbf{x}) = \\frac{p(t,\\mathbf{x})}{\\int p(t,\\mathbf{x}) dt} = \\frac{\\sum_{n} f(\\mathbf{x} - \\mathbf{x}_{n}, t - t_{n})}{\\sum_{m} \\int f(\\mathbf{x} - \\mathbf{x}_{m}, t - t_{m}) dt}$), which gives:\n\n$$\\begin{split} p(t|x) &= \\frac{p(t,x)}{\\int p(t,x)dt} = \\frac{\\sum_{n} \\mathcal{N}(\\mathbf{z}|\\mathbf{z}_{m},\\sigma^{2}\\mathbf{I})}{\\sum_{m} \\mathcal{N}(x|x_{m},\\sigma^{2})} \\\\ &= \\frac{\\sum_{n} \\frac{1}{2\\pi\\sigma^{2}} exp\\left(-\\frac{1}{2}(\\mathbf{z}-\\mathbf{z}_{n})^{T}(\\sigma^{2}\\mathbf{I})^{-1}(\\mathbf{z}-\\mathbf{z}_{n})\\right)}{\\sum_{m} \\frac{1}{(2\\pi\\sigma^{2})^{1/2}} exp\\left(-\\frac{1}{2\\sigma^{2}}(x-x_{m})^{2}\\right)} \\\\ &= \\frac{\\sum_{n} \\frac{1}{2\\pi\\sigma^{2}} exp\\left(-\\frac{1}{2\\sigma^{2}}(x-x_{n})^{2}\\right) exp\\left(-\\frac{1}{2\\sigma^{2}}(t-t_{n})^{2}\\right)}{\\sum_{m} \\frac{1}{(2\\pi\\sigma^{2})^{1/2}} exp\\left(-\\frac{1}{2\\sigma^{2}}(x-x_{m})^{2}\\right)} \\\\ &= \\sum_{n} \\frac{\\frac{1}{\\sqrt{2\\pi\\sigma^{2}}} exp\\left(-\\frac{1}{2\\sigma^{2}}(x-x_{n})^{2}\\right)}{\\sum_{m} \\frac{1}{(2\\pi\\sigma^{2})^{1/2}} exp\\left(-\\frac{1}{2\\sigma^{2}}(x-x_{m})^{2}\\right)} \\cdot \\frac{1}{\\sqrt{2\\pi\\sigma^{2}}} exp\\left(-\\frac{1}{2\\sigma^{2}}(t-t_{n})^{2}\\right) \\\\ &= \\sum_{n} \\pi_{n} \\cdot \\mathcal{N}(t|t_{n},\\sigma^{2}) \\end{split}$$\n\nWhere we have defined:\n\n$$\\pi_n = \\frac{exp\\left(-\\frac{1}{2\\sigma^2}(x - x_n)^2\\right)}{\\sum_m exp\\left(-\\frac{1}{2\\sigma^2}(x - x_m)^2\\right)}$$\n\nWe also observe that:\n\n$$\\sum_{n} \\pi_n = 1$$\n\nTherefore, the conditional distribution p(t|x) is given by a Gaussian Mixture. Similarly, we attempt to find a specific form for Eq (6.46):\n\n$$k(x,x_n) = \\frac{\\int f(x-x_n,t) dt}{\\sum_m \\int f(x-x_m,t) dt}$$\n$$= \\frac{\\mathcal{N}(x|x_n,\\sigma^2)}{\\sum_m \\mathcal{N}(x|x_m,\\sigma^2)}$$\n$$= \\pi_n$$\n\nIn other words, the conditional distribution can be more precisely written as:\n\n$$p(t|x) = \\sum_{n} k(x, x_n) \\cdot \\mathcal{N}(t|t_n, \\sigma^2)$$\n\nThus its mean is given by:\n\n$$\\mathbb{E}[t|x] = \\sum_{n} k(x, x_n) \\cdot t_n$$\n\nIts variance is given by:\n\n$$\\begin{aligned} \\operatorname{var}[t|x] &= & \\mathbb{E}[(t|x)^2] - \\mathbb{E}[t|x]^2 \\\\ &= & \\sum_n k(x, x_n) \\cdot (t_n^2 + \\sigma^2) - \\left(\\sum_n k(x, x_n) \\cdot t_n\\right)^2 \\end{aligned}$$",
"answer_length": 3127
},
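The conditional mean and variance derived above are easy to compute for a one-dimensional toy data set. A minimal sketch of the Nadaraya-Watson estimator assuming NumPy; the training data and the width $\sigma$ are illustrative only.

```python
# Nadaraya-Watson regression with a Gaussian kernel: the conditional mean and
# variance derived above, E[t|x] = sum_n k(x,x_n) t_n and
# var[t|x] = sum_n k(x,x_n)(t_n^2 + sigma^2) - E[t|x]^2.
import numpy as np

rng = np.random.default_rng(6)
x_train = rng.uniform(-3, 3, size=50)
t_train = np.sin(x_train) + 0.1 * rng.standard_normal(50)
sigma = 0.3                                           # kernel width / component std dev

def nw(x):
    w = np.exp(-0.5 * ((x - x_train) / sigma) ** 2)
    k = w / w.sum()                                   # k(x, x_n), sums to one
    mean = k @ t_train
    var = k @ (t_train**2 + sigma**2) - mean**2
    return mean, var

print(nw(0.5))    # conditional mean and variance at x = 0.5
```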
{
"chapter": 6,
"question_number": "6.19",
"difficulty": "medium",
"question_text": "- 6.19 (\\*\\*) Another viewpoint on kernel regression comes from a consideration of regression problems in which the input variables as well as the target variables are corrupted with additive noise. Suppose each target value tn is generated as usual by taking a function y(zn) evaluated at a point zn, and adding Gaussian noise. The value of zn is not directly observed, however, but only a noise corrupted version xn = zn + ξn where the random variable ξ is governed by some distribution g(ξ). Consider a set of observations {xn, tn}, where n = 1,..., N, together with a corresponding sum-of-squares error function defined by averaging over the distribution of input noise to give\n\n$$E = \\frac{1}{2} \\sum_{n=1}^{N} \\int \\{y(\\mathbf{x}_n - \\boldsymbol{\\xi}_n) - t_n\\}^2 g(\\boldsymbol{\\xi}_n) \\, d\\boldsymbol{\\xi}_n.$$\n (6.99: $E = \\frac{1}{2} \\sum_{n=1}^{N} \\int \\{y(\\mathbf{x}_n - \\boldsymbol{\\xi}_n) - t_n\\}^2 g(\\boldsymbol{\\xi}_n) \\, d\\boldsymbol{\\xi}_n.$)\n\nBy minimizing E with respect to the function $y(\\mathbf{z})$ using the calculus of variations (Appendix D), show that optimal solution for $y(\\mathbf{x})$ is given by a Nadaraya-Watson kernel regression solution of the form (6.45: $= \\sum_{n} k(\\mathbf{x}, \\mathbf{x}_{n})t_{n}$) with a kernel of the form (6.46: $k(\\mathbf{x}, \\mathbf{x}_n) = \\frac{g(\\mathbf{x} - \\mathbf{x}_n)}{\\sum_{m} g(\\mathbf{x} - \\mathbf{x}_m)}$).",
"answer": "Similar to Prob.6.17, it is straightforward to show that:\n\n$$y(\\mathbf{x}) = \\sum_{n} t_n \\, k(\\mathbf{x}, \\mathbf{x}_n)$$\n\nWhere we have defined:\n\n$$k(\\mathbf{x}, \\mathbf{x}_n) = \\frac{g(\\mathbf{x}_n - \\mathbf{x})}{\\sum_n g(\\mathbf{x}_n - \\mathbf{x})}$$",
"answer_length": 254
},
{
"chapter": 6,
"question_number": "6.2",
"difficulty": "medium",
"question_text": "In this exercise, we develop a dual formulation of the perceptron learning algorithm. Using the perceptron learning rule (4.55: $\\mathbf{w}^{(\\tau+1)} = \\mathbf{w}^{(\\tau)} - \\eta \\nabla E_{P}(\\mathbf{w}) = \\mathbf{w}^{(\\tau)} + \\eta \\phi_n t_n$), show that the learned weight vector $\\mathbf{w}$ can be written as a linear combination of the vectors $t_n \\phi(\\mathbf{x}_n)$ where $t_n \\in \\{-1, +1\\}$ . Denote the coefficients of this linear combination by $\\alpha_n$ and derive a formulation of the perceptron learning algorithm, and the predictive function for the perceptron, in terms of the $\\alpha_n$ . Show that the feature vector $\\phi(\\mathbf{x})$ enters only in the form of the kernel function $k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$ .",
"answer": "If we set $\\mathbf{w}^{(0)} = \\mathbf{0}$ in (4.55: $\\mathbf{w}^{(\\tau+1)} = \\mathbf{w}^{(\\tau)} - \\eta \\nabla E_{P}(\\mathbf{w}) = \\mathbf{w}^{(\\tau)} + \\eta \\phi_n t_n$), we can obtain:\n\n$$\\mathbf{w}^{(\\tau+1)} = \\sum_{n=1}^{N} \\eta c_n t_n \\boldsymbol{\\phi}_n$$\n\nwhere N is the total number of samples and $c_n$ is the times that $t_n \\phi_n$ has been added from step 0 to step $\\tau + 1$ . Therefore, it is obvious that we have:\n\n$$\\mathbf{w} = \\sum_{n=1}^{N} \\alpha_n t_n \\boldsymbol{\\phi}_n$$\n\nWe further substitute the expression above into (4.55: $\\mathbf{w}^{(\\tau+1)} = \\mathbf{w}^{(\\tau)} - \\eta \\nabla E_{P}(\\mathbf{w}) = \\mathbf{w}^{(\\tau)} + \\eta \\phi_n t_n$), which gives:\n\n$$\\sum_{n=1}^{N} \\alpha_n^{(\\tau+1)} t_n \\boldsymbol{\\phi}_n = \\sum_{n=1}^{N} \\alpha_n^{(\\tau)} t_n \\boldsymbol{\\phi}_n + \\eta t_n \\boldsymbol{\\phi}_n$$\n\nIn other words, the update process is to add learning rate $\\eta$ to the coefficient $\\alpha_n$ corresponding to the misclassified pattern $\\mathbf{x}_n$ , i.e.,\n\n$$\\alpha_n^{(\\tau+1)} = \\alpha_n^{(\\tau)} + \\eta$$\n\nNow we similarly substitute it into (4.52):\n\n$$y(\\mathbf{x}) = f(\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}))$$\n\n$$= f(\\sum_{n=1}^N \\alpha_n t_n \\boldsymbol{\\phi}_n^T \\boldsymbol{\\phi}(\\mathbf{x}))$$\n\n$$= f(\\sum_{n=1}^N \\alpha_n t_n k(\\mathbf{x}_n, \\mathbf{x}))$$",
"answer_length": 1331
},
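The dual formulation above can be exercised directly: only the coefficients alpha_n are stored, and the data enter solely through a Gram matrix. The sketch below uses an illustrative polynomial kernel and toy labels; it is not code from the text.

```python
import numpy as np

def kernel_perceptron(K, t, eta=1.0, epochs=100):
    """Dual perceptron: w = sum_n alpha_n t_n phi(x_n) is never formed explicitly;
    a pattern is scored by sum_n alpha_n t_n k(x_n, x)."""
    alpha = np.zeros(len(t))
    for _ in range(epochs):
        for n in range(len(t)):
            if np.sign(np.sum(alpha * t * K[:, n])) != t[n]:  # misclassified pattern
                alpha[n] += eta                               # alpha_n <- alpha_n + eta
    return alpha

# toy data with a polynomial kernel k(x, x') = (1 + x^T x')^2 (illustrative choices)
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))
t = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like labels, not linearly separable
K = (1.0 + X @ X.T) ** 2
alpha = kernel_perceptron(K, t)
print("training accuracy:", np.mean(np.sign((alpha * t) @ K) == t))
```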
{
"chapter": 6,
"question_number": "6.20",
"difficulty": "medium",
"question_text": "Verify the results (6.66: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$) and (6.67: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$).",
"answer": "Since we know that $\\mathbf{t}_{N+1} = (t_1, t_2, ..., t_N, t_{N+1})^T$ follows a Gaussian distribution, i.e., $\\mathbf{t}_{N+1} \\sim \\mathcal{N}(\\mathbf{t}_{N+1}|\\mathbf{0}, \\mathbf{C}_{N+1})$ given in Eq (6.64), if we rearrange its\n\norder by putting the last element (i.e., $t_{N+1}$ ) to the first position, denoted as $\\bar{\\mathbf{t}}_{N+1}$ , it should also satisfy a Gaussian distribution:\n\n$$\\bar{\\mathbf{t}}_{N+1} = (t_{N+1}, t_1, ..., t_2, t_N)^T \\sim \\mathcal{N}(\\bar{\\mathbf{t}}_{N+1} | \\mathbf{0}, \\bar{\\mathbf{C}}_{N+1})$$\n\nWhere we have defined:\n\n$$\\bar{\\mathbf{C}}_{N+1} = \\left( \\begin{array}{cc} c & \\mathbf{k}^T \\\\ \\mathbf{k} & \\mathbf{C}_N \\end{array} \\right)$$\n\nWhere **k** and *c* have been given in the main text below Eq (6.65: $\\mathbf{C}_{N+1} = \\begin{pmatrix} \\mathbf{C}_N & \\mathbf{k} \\\\ \\mathbf{k}^{\\mathrm{T}} & c \\end{pmatrix}$). The conditional distribution $p(t_{N+1}|\\mathbf{t}_N)$ should also be a Gaussian. By analogy to Eq (2.94)-(2.98), we can simply treat $t_{N+1}$ as $\\mathbf{x}_a$ , $\\mathbf{t}_N$ as $\\mathbf{x}_b$ , c as $\\mathbf{\\Sigma}_{aa}$ , **k** as $\\mathbf{\\Sigma}_{ba}$ , $\\mathbf{k}^T$ as $\\mathbf{\\Sigma}_{ab}$ and $\\mathbf{C}_N$ as $\\mathbf{\\Sigma}_{bb}$ . Substituting them into Eq (2.79: $\\Lambda_{aa} = (\\Sigma_{aa} - \\Sigma_{ab} \\Sigma_{bb}^{-1} \\Sigma_{ba})^{-1}$) and Eq (2.80: $\\Lambda_{ab} = -(\\Sigma_{aa} - \\Sigma_{ab}\\Sigma_{bb}^{-1}\\Sigma_{ba})^{-1}\\Sigma_{ab}\\Sigma_{bb}^{-1}.$) yields:\n\n$$\\boldsymbol{\\Lambda}_{aa} = (c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k})^{-1}$$\n\nAnd:\n\n$$\\mathbf{\\Lambda}_{ab} = -(c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k})^{-1} \\mathbf{k}^T \\mathbf{C}_N^{-1}$$\n\nThen we substitute them into Eq (2.96: $p(\\mathbf{x}_a|\\mathbf{x}_b) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_{a|b}, \\boldsymbol{\\Lambda}_{aa}^{-1})$) and (2.97: $\\boldsymbol{\\mu}_{a|b} = \\boldsymbol{\\mu}_a - \\boldsymbol{\\Lambda}_{aa}^{-1} \\boldsymbol{\\Lambda}_{ab} (\\mathbf{x}_b - \\boldsymbol{\\mu}_b).$), yields:\n\n$$p(t_{N+1}|\\mathbf{t}_N) = \\mathcal{N}(\\boldsymbol{\\mu}_{a|b}, \\boldsymbol{\\Lambda}_{aa}^{-1})$$\n\nFor its mean $\\mu_{a|b}$ , we have:\n\n$$\\mu_{a|b} = 0 - \\left(c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k}\\right) \\cdot \\left[-(c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k})^{-1} \\mathbf{k}^T \\mathbf{C}_N^{-1}\\right] \\cdot (\\mathbf{t}_N - \\mathbf{0})$$\n\n$$= \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{t}_N = m(\\mathbf{x}_{N+1})$$\n\nSimilarly, for its variance $\\Lambda_{aa}^{-1}$ (Note that here since $t_{N+1}$ is a scalar, the mean and the covariance matrix actually degenerate to one dimension case), we have:\n\n$$\\boldsymbol{\\Lambda}_{aa}^{-1} = c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k} = \\sigma^2(\\mathbf{x}_{N+1})$$",
"answer_length": 2726
},
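Equations (6.66) and (6.67) verified above translate into a few lines of linear algebra. A minimal sketch, assuming a squared-exponential kernel and synthetic 1-D data (both illustrative):

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel matrix between 1-D input arrays (illustrative choice)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_predict(x_train, t_train, x_star, beta=25.0):
    """Predictive mean (6.66) and variance (6.67) at a single test input x_star."""
    C_N = rbf(x_train, x_train) + np.eye(len(x_train)) / beta   # Eq (6.62) in matrix form
    k = rbf(x_train, np.array([x_star]))[:, 0]                  # the vector k
    c = 1.0 + 1.0 / beta                                        # c = k(x*, x*) + 1/beta
    mean = k @ np.linalg.solve(C_N, t_train)                    # k^T C_N^{-1} t
    var = c - k @ np.linalg.solve(C_N, k)                       # c - k^T C_N^{-1} k
    return mean, var

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 20)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(20)
print(gp_predict(x, t, 0.5))
```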
{
"chapter": 6,
"question_number": "6.21",
"difficulty": "medium",
"question_text": "- 6.21 (\\*\\*) www Consider a Gaussian process regression model in which the kernel function is defined in terms of a fixed set of nonlinear basis functions. Show that the predictive distribution is identical to the result (3.58: $p(t|\\mathbf{x}, \\mathbf{t}, \\alpha, \\beta) = \\mathcal{N}(t|\\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_N^2(\\mathbf{x}))$) obtained in Section 3.3.2 for the Bayesian linear regression model. To do this, note that both models have Gaussian predictive distributions, and so it is only necessary to show that the conditional mean and variance are the same. For the mean, make use of the matrix identity (C.6), and for the variance, make use of the matrix identity (C.7).",
"answer": "We follow the hint beginning by verifying the mean. We write Eq (6.62: $C(\\mathbf{x}_n, \\mathbf{x}_m) = k(\\mathbf{x}_n, \\mathbf{x}_m) + \\beta^{-1} \\delta_{nm}.$) in a matrix form:\n\n$$\\mathbf{C}_N = \\frac{1}{\\alpha} \\mathbf{\\Phi} \\mathbf{\\Phi}^T + \\beta^{-1} \\mathbf{I}_N$$\n\nWhere we have used Eq (6.54: $K_{nm} = k(\\mathbf{x}_n, \\mathbf{x}_m) = \\frac{1}{\\alpha} \\phi(\\mathbf{x}_n)^{\\mathrm{T}} \\phi(\\mathbf{x}_m)$). Here $\\Phi$ is the design matrix defined below Eq (6.51: $\\mathbf{V} = \\mathbf{\\Phi}\\mathbf{w}$) and $\\mathbf{I}_N$ is an identity matrix. Before we use Eq (6.66: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$), we need to obtain $\\mathbf{k}$ :\n\n$$\\mathbf{k} = [k(\\mathbf{x}_1, \\mathbf{x}_{N+1}), k(\\mathbf{x}_2, \\mathbf{x}_{N+1}), ..., k(\\mathbf{x}_N, \\mathbf{x}_{N+1})]^T$$\n\n$$= \\frac{1}{\\alpha} [\\boldsymbol{\\phi}(\\mathbf{x}_1)^T \\boldsymbol{\\phi}(\\mathbf{x}_{N+1}), \\boldsymbol{\\phi}(\\mathbf{x}_2)^T \\boldsymbol{\\phi}(\\mathbf{x}_{N+1}), ..., \\boldsymbol{\\phi}(\\mathbf{x}_n)^T \\boldsymbol{\\phi}(\\mathbf{x}_{N+1})]^T$$\n\n$$= \\frac{1}{\\alpha} \\boldsymbol{\\Phi} \\boldsymbol{\\phi}(\\mathbf{x}_{N+1})^T$$\n\nNow we substitute all the expressions into Eq (6.66: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$), yielding:\n\n$$m(\\mathbf{x}_{N+1}) = \\alpha^{-1} \\boldsymbol{\\phi}(\\mathbf{x}_{N+1})^T \\mathbf{\\Phi}^T \\left[ \\alpha^{-1} \\mathbf{\\Phi} \\mathbf{\\Phi}^T + \\beta^{-1} \\mathbf{I}_N \\right]^{-1} \\mathbf{t}$$\n\nNext using matrix identity (C.6), we obtain:\n\n$$\\mathbf{\\Phi}^T \\left[ \\alpha^{-1} \\mathbf{\\Phi} \\mathbf{\\Phi}^T + \\beta^{-1} \\mathbf{I}_N \\right]^{-1} = \\alpha \\beta \\left[ \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi} + \\alpha \\mathbf{I}_M \\right]^{-1} \\mathbf{\\Phi}^T = \\alpha \\beta \\mathbf{S}_N \\mathbf{\\Phi}^T$$\n\nWhere we have used Eq (3.54: $\\mathbf{S}_{N}^{-1} = \\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$). Substituting it into $\\mathbf{m}(\\mathbf{x}_{N+1})$ , we obtain:\n\n$$m(\\mathbf{x}_{N+1}) = \\beta \\phi(\\mathbf{x}_{N+1})^T \\mathbf{S}_N \\mathbf{\\Phi}^T \\mathbf{t} = \\langle \\phi(\\mathbf{x}_{N+1})^T, \\beta \\mathbf{S}_N \\mathbf{\\Phi}^T \\mathbf{t} \\rangle$$\n\nWhere $\\langle \\cdot, \\cdot \\rangle$ represents the inner product. Comparing the result above with Eq (3.58: $p(t|\\mathbf{x}, \\mathbf{t}, \\alpha, \\beta) = \\mathcal{N}(t|\\mathbf{m}_N^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma_N^2(\\mathbf{x}))$), (3.54: $\\mathbf{S}_{N}^{-1} = \\alpha \\mathbf{I} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi}.$) and (3.53: $\\mathbf{m}_{N} = \\beta \\mathbf{S}_{N} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$), we conclude that the means are equal. It is similar for the variance. We substitute c, $\\mathbf{k}$ and $\\mathbf{C}_N$ into Eq (6.67: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$). Then we simplify the expression using matrix identity (C.7). Finally, we will observe that it is equal to Eq (3.59).",
"answer_length": 3001
},
{
"chapter": 6,
"question_number": "6.22",
"difficulty": "medium",
"question_text": "Consider a regression problem with N training set input vectors $\\mathbf{x}_1, \\dots, \\mathbf{x}_N$ and L test set input vectors $\\mathbf{x}_{N+1}, \\dots, \\mathbf{x}_{N+L}$ , and suppose we define a Gaussian process prior over functions $t(\\mathbf{x})$ . Derive an expression for the joint predictive distribution for $t(\\mathbf{x}_{N+1}), \\dots, t(\\mathbf{x}_{N+L})$ , given the values of $t(\\mathbf{x}_1), \\dots, t(\\mathbf{x}_N)$ . Show the marginal of this distribution for one of the test observations $t_j$ where $N+1 \\leq j \\leq N+L$ is given by the usual Gaussian process regression result (6.66: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$) and (6.67: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$).",
"answer": "Based on Eq (6.64) and (6.65: $\\mathbf{C}_{N+1} = \\begin{pmatrix} \\mathbf{C}_N & \\mathbf{k} \\\\ \\mathbf{k}^{\\mathrm{T}} & c \\end{pmatrix}$), We first write down the joint distribution for $\\mathbf{t}_{N+L} = [t_1(\\mathbf{x}), t_2(\\mathbf{x}), ..., t_{N+L}(\\mathbf{x})]^T$ :\n\n$$p(\\mathbf{t}_{N+L}) = \\mathcal{N}(\\mathbf{t}_{N+L}|\\mathbf{0}, \\mathbf{C}_{N+L})$$\n\nWhere $\\mathbf{C}_{N+L}$ is similarly given by:\n\n$$\\mathbf{C}_{N+L} = \\left( \\begin{array}{cc} \\mathbf{C}_{1,N} & \\mathbf{K} \\\\ \\mathbf{K}^T & \\mathbf{C}_{N+1,N+L} \\end{array} \\right)$$\n\nThe expression above has already implicitly divided the vector $\\mathbf{t}_{N+L}$ into two parts. Similar to Prob.6.20, for later simplicity we rearrange the order of $\\mathbf{t}_{N+L}$ denoted as $\\bar{\\mathbf{t}}_{N+L} = [t_{N+1},...,t_{N+L},t_1,...,t_N]^T$ . Moreover, $\\bar{\\mathbf{t}}_{N+L}$ should also follows a Gaussian distribution:\n\n$$p(\\bar{\\mathbf{t}}_{N+L}) = \\mathcal{N}(\\bar{\\mathbf{t}}_{N+L}|\\mathbf{0},\\bar{\\mathbf{C}}_{N+L})$$\n\nWhere we have defined:\n\n$$\\bar{\\mathbf{C}}_{N+L} = \\left( \\begin{array}{cc} \\mathbf{C}_{N+1,N+L} & \\mathbf{K}^T \\\\ \\mathbf{K} & \\mathbf{C}_{1,N} \\end{array} \\right)$$\n\nNow we use Eq (2.94)-(2.98) and Eq (2.79)-(2.80) to derive the conditional distribution, beginning by calculate $\\Lambda_{aa}$ :\n\n$$\\boldsymbol{\\Lambda}_{aa} = (\\mathbf{C}_{N+1,N+L} - \\mathbf{K}^T \\cdot \\mathbf{C}_{1,N}^{-1} \\cdot \\mathbf{K})^{-1}$$\n\nand $\\Lambda_{ab}$ :\n\n$$\\Lambda_{ab} = -(\\mathbf{C}_{N+1,N+L} - \\mathbf{K}^T \\cdot \\mathbf{C}_{1N}^{-1} \\cdot \\mathbf{K})^{-1} \\cdot \\mathbf{K}^T \\cdot \\mathbf{C}_{1N}^{-1}$$\n\nNow we can obtain:\n\n$$p(t_{N+1},...,t_{N+L}|\\mathbf{t}_N) = \\mathcal{N}(\\boldsymbol{\\mu}_{a|b},\\boldsymbol{\\Lambda}_{aa}^{-1})$$\n\nWhere we have defined:\n\n$$\\boldsymbol{\\mu}_{a|b} = \\mathbf{0} + \\mathbf{K}^T \\cdot \\mathbf{C}_{1N}^{-1} \\cdot \\mathbf{t}_N = \\mathbf{K}^T \\cdot \\mathbf{C}_{1N}^{-1} \\cdot \\mathbf{t}_N$$\n\nIf now we want to find the conditional distribution $p(t_j|\\mathbf{t}_N)$ , where $N+1 \\le j \\le N+L$ , we only need to find the corresponding entry in the mean (i.e., the (j-N)-th entry) and covariance matrix (i.e., the (j-N)-th diagonal entry) of $p(t_{N+1},...,t_{N+L}|\\mathbf{t}_N)$ . In this case, it will degenerate to Eq (6.66: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$) and (6.67: $\\sigma^{2}(\\mathbf{x}_{N+1}) = c - \\mathbf{k}^{\\mathrm{T}} \\mathbf{C}_{N}^{-1} \\mathbf{k}.$) just as required.",
"answer_length": 2471
},
{
"chapter": 6,
"question_number": "6.24",
"difficulty": "easy",
"question_text": "Show that a diagonal matrix W whose elements satisfy $0 < W_{ii} < 1$ is positive definite. Show that the sum of two positive definite matrices is itself positive definite.",
"answer": "By definition, we only need to prove that for arbitrary vector $\\mathbf{x} \\neq \\mathbf{0}$ , $\\mathbf{x}^T \\mathbf{W} \\mathbf{x}$ is positive. Here suppose that $\\mathbf{W}$ is a $M \\times M$ matrix. We expand the multiplication:\n\n$$\\mathbf{x}^T \\mathbf{W} \\mathbf{x} = \\sum_{i=1}^M \\sum_{j=1}^M W_{ij} \\cdot x_i \\cdot x_j = \\sum_{i=1}^M W_{ii} \\cdot x_i^2$$\n\nwhere we have used the fact that **W** is a diagonal matrix. Since $W_{ii} > 0$ , we obtain $\\mathbf{x}^T \\mathbf{W} \\mathbf{x} > 0$ just as required. Suppose we have two positive definite matrix, denoted as $\\mathbf{A}_1$ and $\\mathbf{A}_2$ , i.e., for arbitrary vector $\\mathbf{x}$ , we have $\\mathbf{x}^T \\mathbf{A}_1 \\mathbf{x} > 0$ and $\\mathbf{x}^T \\mathbf{A}_2 \\mathbf{x} > 0$ . Therefore, we can obtain:\n\n$$\\mathbf{x}^T(\\mathbf{A}_1 + \\mathbf{A}_2)\\mathbf{x} = \\mathbf{x}^T\\mathbf{A}_1\\mathbf{x} + \\mathbf{x}^T\\mathbf{A}_2\\mathbf{x} > 0$$\n\nJust as required.",
"answer_length": 943
},
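Both claims admit a quick numerical sanity check (the random matrices below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A diagonal W with 0 < W_ii < 1: its eigenvalues are the W_ii themselves, hence positive.
W = np.diag(rng.uniform(0.01, 0.99, size=5))
print(np.all(np.linalg.eigvalsh(W) > 0))          # True

# The sum of two positive definite matrices is positive definite.
def random_pd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)                # symmetric and strictly positive definite

A1, A2 = random_pd(5), random_pd(5)
print(np.all(np.linalg.eigvalsh(A1 + A2) > 0))    # True
```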
{
"chapter": 6,
"question_number": "6.25",
"difficulty": "easy",
"question_text": "Using the Newton-Raphson formula (4.92), derive the iterative update formula (6.83: $\\mathbf{a}_N^{\\text{new}} = \\mathbf{C}_N (\\mathbf{I} + \\mathbf{W}_N \\mathbf{C}_N)^{-1} \\left\\{ \\mathbf{t}_N - \\boldsymbol{\\sigma}_N + \\mathbf{W}_N \\mathbf{a}_N \\right\\}.$) for finding the mode $\\mathbf{a}_N^{\\star}$ of the posterior distribution in the Gaussian process classification model.",
"answer": "Based on Newton-Raphson formula, Eq(6.81) and Eq(6.82), we have:\n\n$$\\mathbf{a}_{N}^{new} = \\mathbf{a}_{N} - (-\\mathbf{W}_{N} - \\mathbf{C}_{N}^{-1})^{-1} (\\mathbf{t}_{N} - \\sigma_{N} - \\mathbf{C}_{N}^{-1} \\mathbf{a}_{N})$$\n\n$$= \\mathbf{a}_{N} + (\\mathbf{W}_{N} + \\mathbf{C}_{N}^{-1})^{-1} (\\mathbf{t}_{N} - \\sigma_{N} - \\mathbf{C}_{N}^{-1} \\mathbf{a}_{N})$$\n\n$$= (\\mathbf{W}_{N} + \\mathbf{C}_{N}^{-1})^{-1} [(\\mathbf{W}_{N} + \\mathbf{C}_{N}^{-1}) \\mathbf{a}_{N} + \\mathbf{t}_{N} - \\sigma_{N} - \\mathbf{C}_{N}^{-1} \\mathbf{a}_{N}]$$\n\n$$= \\mathbf{C}_{N} \\mathbf{C}_{N}^{-1} (\\mathbf{W}_{N} + \\mathbf{C}_{N}^{-1})^{-1} (\\mathbf{t}_{N} - \\sigma_{N} + \\mathbf{W}_{N} \\mathbf{a}_{N})$$\n\n$$= \\mathbf{C}_{N} (\\mathbf{C}_{N} \\mathbf{W}_{N} + \\mathbf{I})^{-1} (\\mathbf{t}_{N} - \\sigma_{N} + \\mathbf{W}_{N} \\mathbf{a}_{N})$$\n\nJust as required.\n\n# **Problem 6.26 Solution**\n\nUsing Eq(6.77), (6.78: $p(a_{N+1}|\\mathbf{a}_N) = \\mathcal{N}(a_{N+1}|\\mathbf{k}^{\\mathrm{T}}\\mathbf{C}_N^{-1}\\mathbf{a}_N, c - \\mathbf{k}^{\\mathrm{T}}\\mathbf{C}_N^{-1}\\mathbf{k}).$) and (6.86: $q(\\mathbf{a}_N) = \\mathcal{N}(\\mathbf{a}_N | \\mathbf{a}_N^*, \\mathbf{H}^{-1}).$), we can obtain:\n\n$$p(a_{N+1}|\\mathbf{t}_N) = \\int p(a_{N+1}|\\mathbf{a}_N)p(\\mathbf{a}_N|\\mathbf{t}_N)d\\mathbf{a}_N$$\n$$= \\int N(a_{N+1}|\\mathbf{k}^T\\mathbf{C}_N^{-1}\\mathbf{a}_N, c - \\mathbf{k}^T\\mathbf{C}_N^{-1}\\mathbf{k}) \\cdot N(\\mathbf{a}_N|\\mathbf{a}_N^{\\star}, \\mathbf{H}^{-1})d\\mathbf{a}_N$$\n\nBy analogy to Eq (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), i.e.,\n\n$$p(\\mathbf{y}) = \\int p(\\mathbf{y}|\\mathbf{x})p(\\mathbf{x})d\\mathbf{x}$$\n\nWe can obtain:\n\n$$p(a_{N+1}|\\mathbf{t}_N) = N(\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^T)$$\n (\\*)\n\nWhere we have defined:\n\n$$\\mathbf{A} = \\mathbf{k}^T \\mathbf{C}_N^{-1}, \\mathbf{b} = \\mathbf{0}, \\mathbf{L}^{-1} = c - \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{k}$$\n\nAnd\n\n$$\\mu = \\mathbf{a}_N^{\\star}, \\Lambda = \\mathbf{H}$$\n\nTherefore, the mean is given by:\n\n$$\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b} = \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{a}_N^* = \\mathbf{k}^T \\mathbf{C}_N^{-1} \\mathbf{C}_N (\\mathbf{t}_N - \\sigma_N) = \\mathbf{k}^T (\\mathbf{t}_N - \\sigma_N)$$\n\nWhere we have used Eq (6.84: $\\mathbf{a}_N^{\\star} = \\mathbf{C}_N(\\mathbf{t}_N - \\boldsymbol{\\sigma}_N).$). The covariance matrix is given by:\n\n$$\\begin{split} \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^T &= c - \\mathbf{k}^T \\mathbf{C}_N^{-1}\\mathbf{k} + \\mathbf{k}^T \\mathbf{C}_N^{-1}\\mathbf{H}^{-1}(\\mathbf{k}^T \\mathbf{C}_N^{-1})^T \\\\ &= c - \\mathbf{k}^T (\\mathbf{C}_N^{-1} - \\mathbf{C}_N^{-1}\\mathbf{H}^{-1}\\mathbf{C}_N^{-1})\\mathbf{k} \\\\ &= c - \\mathbf{k}^T \\Big(\\mathbf{C}_N^{-1} - \\mathbf{C}_N^{-1}(\\mathbf{W}_N + \\mathbf{C}_N^{-1})^{-1}\\mathbf{C}_N^{-1}\\Big)\\mathbf{k} \\\\ &= c - \\mathbf{k}^T \\Big(\\mathbf{C}_N^{-1} - (\\mathbf{C}_N \\mathbf{W}_N \\mathbf{C}_N + \\mathbf{C}_N^{-1})^{-1}\\Big)\\mathbf{k} \\end{split}$$\n\nWhere we have used Eq (6.85: $\\mathbf{H} = -\\nabla \\nabla \\Psi(\\mathbf{a}_N) = \\mathbf{W}_N + \\mathbf{C}_N^{-1}$) and the fact that $\\mathbf{C}_N$ is symmetric. 
Then we use matrix identity (C.7) to further reduce the expression, which will finally give Eq (6.88: $\\operatorname{var}[a_{N+1}|\\mathbf{t}_N] = c - \\mathbf{k}^{\\mathrm{T}}(\\mathbf{W}_N^{-1} + \\mathbf{C}_N)^{-1}\\mathbf{k}.$).",
"answer_length": 3626
},
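The update (6.83) derived above can be iterated to locate the mode a_N^*. The following sketch assumes binary targets t_n in {0, 1}, the logistic sigmoid, and a synthetic RBF covariance C_N; these are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def find_mode(C_N, t, n_iter=20):
    """Iterate a_N^new = C_N (I + W_N C_N)^{-1} (t_N - sigma_N + W_N a_N), Eq (6.83)."""
    N = len(t)
    a = np.zeros(N)
    for _ in range(n_iter):
        sigma = 1.0 / (1.0 + np.exp(-a))           # sigma_N = sigma(a_N)
        W = np.diag(sigma * (1.0 - sigma))         # diagonal W_N
        a = C_N @ np.linalg.solve(np.eye(N) + W @ C_N, t - sigma + W @ a)
    return a

# synthetic example: RBF covariance over sorted 1-D inputs, step-function labels (illustrative)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 30))
C_N = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(30)
t = (x > 0.5).astype(float)
print(find_mode(C_N, t)[:5])
```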
{
"chapter": 6,
"question_number": "6.3",
"difficulty": "easy",
"question_text": "The nearest-neighbour classifier (Section 2.5.2) assigns a new input vector $\\mathbf{x}$ to the same class as that of the nearest input vector $\\mathbf{x}_n$ from the training set, where in the simplest case, the distance is defined by the Euclidean metric $\\|\\mathbf{x} \\mathbf{x}_n\\|^2$ . By expressing this rule in terms of scalar products and then making use of kernel substitution, formulate the nearest-neighbour classifier for a general nonlinear kernel.",
"answer": "We begin by expanding the Euclidean metric.\n\n$$||\\mathbf{x} - \\mathbf{x}_n||^2 = (\\mathbf{x} - \\mathbf{x}_n)^T (\\mathbf{x} - \\mathbf{x}_n)$$\n\n$$= (\\mathbf{x}^T - \\mathbf{x}_n^T)(\\mathbf{x} - \\mathbf{x}_n)$$\n\n$$= \\mathbf{x}^T \\mathbf{x} - 2\\mathbf{x}_n^T \\mathbf{x} + \\mathbf{x}_n^T \\mathbf{x}_n$$\n\nSimilar to (6.24)-(6.26), we use a nonlinear kernel $k(\\mathbf{x}_n, \\mathbf{x})$ to replace $\\mathbf{x}_n^T \\mathbf{x}$ , which gives a general nonlinear nearest-neighbor classifier with cost function defined as:\n\n$$k(\\mathbf{x}, \\mathbf{x}) + k(\\mathbf{x}_n, \\mathbf{x}_n) - 2k(\\mathbf{x}_n, \\mathbf{x})$$",
"answer_length": 608
},
{
"chapter": 6,
"question_number": "6.4",
"difficulty": "easy",
"question_text": "In Appendix C, we give an example of a matrix that has positive elements but that has a negative eigenvalue and hence that is not positive definite. Find an example of the converse property, namely a 2 × 2 matrix with positive eigenvalues yet that has at least one negative element.",
"answer": "To construct such a matrix, let us suppose the two eigenvalues are 1 and 2, and the matrix has form:\n\n$$\\left[ egin{array}{cc} a & b \\\\ c & d \\end{array} \\right]$$\n\nTherefore, based on the definition of eigenvalue, we have two equations:\n\n$$\\begin{cases} (a-2)(d-2) = bc & (1) \\\\ (a-1)(d-1) = bc & (2) \\end{cases}$$\n\n(2)-(1), yielding:\n\n$$a + d = 3$$\n\nTherefore, we set a = 4 and d = -1. Then we substitute them into (1), and thus we see:\n\n$$bc = -6$$\n\nFinally, we choose b = 3 and c = -2. The constructed matrix is:\n\n$$\\left[\\begin{array}{cc}4&3\\\\-2&-1\\end{array}\\right]$$",
"answer_length": 573
},
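The constructed matrix can be verified in a single call (a trivial numerical check):

```python
import numpy as np

# A matrix with a negative element whose eigenvalues are nevertheless 1 and 2.
A = np.array([[4.0, 3.0],
              [-2.0, -1.0]])
print(np.linalg.eigvals(A))   # [2. 1.] up to ordering and rounding
```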
{
"chapter": 6,
"question_number": "6.5",
"difficulty": "easy",
"question_text": "Verify the results (6.13: $k(\\mathbf{x}, \\mathbf{x}') = ck_1(\\mathbf{x}, \\mathbf{x}')$) and (6.14: $k(\\mathbf{x}, \\mathbf{x}') = f(\\mathbf{x})k_1(\\mathbf{x}, \\mathbf{x}')f(\\mathbf{x}')$) for constructing valid kernels.",
"answer": "Since $k_1(\\mathbf{x}, \\mathbf{x}')$ is a valid kernel, it can be written as:\n\n$$k_1(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$$\n\nWe can obtain:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = c k_1(\\mathbf{x}, \\mathbf{x}') = \\left[\\sqrt{c}\\phi(\\mathbf{x})\\right]^T \\left[\\sqrt{c}\\phi(\\mathbf{x}')\\right]$$\n\nTherefore, (6.13: $k(\\mathbf{x}, \\mathbf{x}') = ck_1(\\mathbf{x}, \\mathbf{x}')$) is a valid kernel. It is similar for (6.14):\n\n$$k(\\mathbf{x}, \\mathbf{x}') = f(\\mathbf{x})k_1(\\mathbf{x}, \\mathbf{x}')f(\\mathbf{x}') = [f(\\mathbf{x})\\phi(\\mathbf{x})]^T [f(\\mathbf{x}')\\phi(\\mathbf{x}')]$$\n\nJust as required.",
"answer_length": 619
},
{
"chapter": 6,
"question_number": "6.6",
"difficulty": "easy",
"question_text": "Verify the results (6.15: $k(\\mathbf{x}, \\mathbf{x}') = q(k_1(\\mathbf{x}, \\mathbf{x}'))$) and (6.16: $k(\\mathbf{x}, \\mathbf{x}') = \\exp(k_1(\\mathbf{x}, \\mathbf{x}'))$) for constructing valid kernels.",
"answer": "We suppose q(x) can be written as:\n\n$$q(x) = a_n x^n + a_{n-1} x^{n-1} + \\dots + a_1 x + a_0$$\n\nWe now obtain:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = a_n k_1(\\mathbf{x}, \\mathbf{x}')^n + a_{n-1} k_1(\\mathbf{x}, \\mathbf{x}')^{n-1} + \\dots + a_1 k_1(\\mathbf{x}, \\mathbf{x}') + a_0$$\n\nBy repeatedly using (6.13: $k(\\mathbf{x}, \\mathbf{x}') = ck_1(\\mathbf{x}, \\mathbf{x}')$), (6.17: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}') + k_2(\\mathbf{x}, \\mathbf{x}')$) and (6.18: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}')k_2(\\mathbf{x}, \\mathbf{x}')$), we can easily verify $k(\\mathbf{x}, \\mathbf{x}')$ is a valid kernel. For (6.16: $k(\\mathbf{x}, \\mathbf{x}') = \\exp(k_1(\\mathbf{x}, \\mathbf{x}'))$), we can use Taylor expansion, and since the coefficients of Taylor expansion are all positive, we can similarly prove its validity.",
"answer_length": 845
},
{
"chapter": 6,
"question_number": "6.7",
"difficulty": "easy",
"question_text": "Verify the results (6.17: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}') + k_2(\\mathbf{x}, \\mathbf{x}')$) and (6.18: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}')k_2(\\mathbf{x}, \\mathbf{x}')$) for constructing valid kernels.",
"answer": "To prove (6.17: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}') + k_2(\\mathbf{x}, \\mathbf{x}')$), we will use the property stated below (6.12: $= \\phi(\\mathbf{x})^{\\mathrm{T}} \\phi(\\mathbf{z}).$). Since we know $k_1(\\mathbf{x}, \\mathbf{x}')$ and $k_2(\\mathbf{x}, \\mathbf{x}')$ are valid kernels, their Gram matrix $\\mathbf{K}_1$ and $\\mathbf{K}_2$ \n\nare both positive semidefinite. Given the relation (6.12: $= \\phi(\\mathbf{x})^{\\mathrm{T}} \\phi(\\mathbf{z}).$), it can be easily shown $\\mathbf{K} = \\mathbf{K}_1 + \\mathbf{K}_2$ is also positive semidefinite and thus $k(\\mathbf{x}, \\mathbf{x}')$ is also a valid kernel.\n\nTo prove (6.18: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}')k_2(\\mathbf{x}, \\mathbf{x}')$), we assume the map function for kernel $k_1(\\mathbf{x}, \\mathbf{x}')$ is $\\boldsymbol{\\phi}^{(1)}(\\mathbf{x})$ , and similarly $\\boldsymbol{\\phi}^{(2)}(\\mathbf{x})$ for $k_2(\\mathbf{x}, \\mathbf{x}')$ . Moreover, we further assume the dimension of $\\boldsymbol{\\phi}^{(1)}(\\mathbf{x})$ is M, and $\\boldsymbol{\\phi}^{(2)}(\\mathbf{x})$ is N. We expand $k(\\mathbf{x}, \\mathbf{x}')$ based on (6.18):\n\n$$k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}') k_2(\\mathbf{x}, \\mathbf{x}')$$\n\n$$= \\phi^{(1)}(\\mathbf{x})^T \\phi^{(1)}(\\mathbf{x}') \\phi^{(2)}(\\mathbf{x})^T \\phi^{(2)}(\\mathbf{x}')$$\n\n$$= \\sum_{i=1}^M \\phi_i^{(1)}(\\mathbf{x}) \\phi_i^{(1)}(\\mathbf{x}') \\sum_{j=1}^N \\phi_j^{(2)}(\\mathbf{x}) \\phi_j^{(2)}(\\mathbf{x}')$$\n\n$$= \\sum_{i=1}^M \\sum_{j=1}^N \\left[ \\phi_i^{(1)}(\\mathbf{x}) \\phi_j^{(2)}(\\mathbf{x}) \\right] \\left[ \\phi_i^{(1)}(\\mathbf{x}') \\phi_j^{(2)}(\\mathbf{x}') \\right]$$\n\n$$= \\sum_{k=1}^{MN} \\phi_k(\\mathbf{x}) \\phi_k(\\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$$\n\nwhere $\\phi_i^{(1)}(\\mathbf{x})$ is the ith element of $\\phi^{(1)}(\\mathbf{x})$ , and $\\phi_j^{(2)}(\\mathbf{x})$ is the jth element of $\\phi^{(2)}(\\mathbf{x})$ . To be more specific, we have proved that $k(\\mathbf{x},\\mathbf{x}')$ can be written as $\\phi(\\mathbf{x})^T\\phi(\\mathbf{x}')$ . Here $\\phi(\\mathbf{x})$ is a $MN\\times 1$ column vector, and the kth (k=1,2,...,MN) element is given by $\\phi_i^{(1)}(\\mathbf{x})\\times\\phi_j^{(2)}(\\mathbf{x})$ . What's more, we can also express i,j in terms of k:\n\n$$i = (k-1) \\otimes N + 1$$\n and $j = (k-1) \\otimes N + 1$ \n\nwhere $\\emptyset$ and $\\odot$ means integer division and remainder, respectively.",
"answer_length": 2420
},
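The closure under sums (6.17) and elementwise products (6.18) proved above can be sanity-checked on the Gram matrices of any two valid kernels; the linear and RBF kernels below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))

K1 = X @ X.T                                                           # linear kernel
K2 = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))    # RBF kernel

def min_eig(K):
    """Smallest eigenvalue of a (symmetrised) Gram matrix."""
    return np.linalg.eigvalsh((K + K.T) / 2).min()

print(min_eig(K1 + K2) >= -1e-9)   # the sum (6.17) remains positive semidefinite
print(min_eig(K1 * K2) >= -1e-9)   # the elementwise (Schur) product (6.18) as well
```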
{
"chapter": 6,
"question_number": "6.8",
"difficulty": "easy",
"question_text": "Verify the results (6.19: $k(\\mathbf{x}, \\mathbf{x}') = k_3(\\phi(\\mathbf{x}), \\phi(\\mathbf{x}'))$) and (6.20: $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^{\\mathrm{T}} \\mathbf{A} \\mathbf{x}'$) for constructing valid kernels.",
"answer": "For (6.19: $k(\\mathbf{x}, \\mathbf{x}') = k_3(\\phi(\\mathbf{x}), \\phi(\\mathbf{x}'))$) we suppose $k_3(\\mathbf{x}, \\mathbf{x}') = \\mathbf{g}(\\mathbf{x})^T \\mathbf{g}(\\mathbf{x}')$ , and thus we have:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = k_3(\\phi(\\mathbf{x}), \\phi(\\mathbf{x}')) = g(\\phi(\\mathbf{x}))^T g(\\phi(\\mathbf{x}')) = f(\\mathbf{x})^T f(\\mathbf{x}')$$\n\nwhere we have denoted $g(\\phi(\\mathbf{x})) = f(\\mathbf{x})$ and now it is obvious that (6.19: $k(\\mathbf{x}, \\mathbf{x}') = k_3(\\phi(\\mathbf{x}), \\phi(\\mathbf{x}'))$) holds. To prove (6.20: $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^{\\mathrm{T}} \\mathbf{A} \\mathbf{x}'$), we suppose $\\mathbf{x}$ is a $N \\times 1$ column vector and $\\mathbf{A}$ is a $N \\times N$ symmetric positive semidefinite matrix. We know that $\\mathbf{A}$ can be decomposed to $\\mathbf{Q}\\mathbf{B}\\mathbf{Q}^T$ . Here $\\mathbf{Q}$ is a $N \\times N$ orthogonal matrix, and $\\mathbf{B}$ is a $N \\times N$ diagonal matrix whose elements are no less than 0. Now we can derive:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^T \\mathbf{A} \\mathbf{x}' = \\mathbf{x}^T \\mathbf{Q} \\mathbf{B} \\mathbf{Q}^T \\mathbf{x}' = (\\mathbf{Q}^T \\mathbf{x})^T \\mathbf{B} (\\mathbf{Q}^T \\mathbf{x}') = \\mathbf{y}^T \\mathbf{B} \\mathbf{y}'$$\n$$= \\sum_{i=1}^N B_{ii} y_i y_i' = \\sum_{i=1}^N (\\sqrt{B_{ii}} y_i) (\\sqrt{B_{ii}} y_i') = \\boldsymbol{\\phi}(\\mathbf{x})^T \\boldsymbol{\\phi}(\\mathbf{x}')$$\n\nTo be more specific, we have proved that $k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$ , and here $\\phi(\\mathbf{x})$ is a $N \\times 1$ column vector, whose ith (i = 1, 2, ..., N) element is given by $\\sqrt{B_{ii}} y_i$ , i.e., $\\sqrt{B_{ii}} (\\mathbf{Q}^T \\mathbf{x})_i$ .",
"answer_length": 1714
},
{
"chapter": 6,
"question_number": "6.9",
"difficulty": "easy",
"question_text": "Verify the results (6.21: $k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a) + k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$) and (6.22: $k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a)k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$) for constructing valid kernels.",
"answer": "To prove (6.21: $k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a) + k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$), let's first expand the expression:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a) + k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$$\n\n$$= \\sum_{i=1}^{M} \\phi_i^{(a)}(\\mathbf{x}_a) \\phi_i^{(a)}(\\mathbf{x}'_a) + \\sum_{j=1}^{N} \\phi_i^{(b)}(\\mathbf{x}_b) \\phi_i^{(b)}(\\mathbf{x}'_b)$$\n\n$$= \\sum_{k=1}^{M+N} \\phi_k(\\mathbf{x}) \\phi_k(\\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$$\n\nwhere we have assumed the dimension of $\\mathbf{x}_a$ is M and the dimension of $\\mathbf{x}_b$ is N. The mapping function $\\phi(\\mathbf{x})$ is a $(M+N)\\times 1$ column vector, whose kth (k=1,2,...,M+N) element $\\phi_k(\\mathbf{x})$ is:\n\n$$\\phi_k(\\mathbf{x}) = \\begin{cases} \\phi_k^{(a)}(\\mathbf{x}) & 1 \\le k \\le M \\\\ \\phi_{k-M}^{(b)}(\\mathbf{x}_a) & M+1 \\le k \\le M+N \\end{cases}$$\n\n(6.22: $k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a)k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$) is quite similar to (6.18: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}')k_2(\\mathbf{x}, \\mathbf{x}')$). We follow the same procedure:\n\n$$k(\\mathbf{x}, \\mathbf{x}') = k_a(\\mathbf{x}_a, \\mathbf{x}'_a) k_b(\\mathbf{x}_b, \\mathbf{x}'_b)$$\n\n$$= \\sum_{i=1}^{M} \\phi_i^{(a)}(\\mathbf{x}_a) \\phi_i^{(a)}(\\mathbf{x}'_a) \\sum_{j=1}^{N} \\phi_j^{(b)}(\\mathbf{x}_b) \\phi_j^{(b)}(\\mathbf{x}'_b)$$\n\n$$= \\sum_{i=1}^{M} \\sum_{j=1}^{N} \\left[ \\phi_i^{(a)}(\\mathbf{x}_a) \\phi_j^{(b)}(\\mathbf{x}_b) \\right] \\left[ \\phi_i^{(a)}(\\mathbf{x}'_a) \\phi_j^{(b)}(\\mathbf{x}'_b) \\right]$$\n\n$$= \\sum_{b=1}^{MN} \\phi_k(\\mathbf{x}) \\phi_k(\\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$$\n\nBy analogy to (6.18: $k(\\mathbf{x}, \\mathbf{x}') = k_1(\\mathbf{x}, \\mathbf{x}')k_2(\\mathbf{x}, \\mathbf{x}')$), the mapping function $\\phi(\\mathbf{x})$ is a $MN \\times 1$ column vector, whose kth (k = 1, 2, ..., MN) element $\\phi_k(\\mathbf{x})$ is:\n\n$$\\phi_k(\\mathbf{x}) = \\phi_i^{(a)}(\\mathbf{x}_a) \\times \\phi_j^{(b)}(\\mathbf{x}_b)$$\n\nTo be more specific, $\\mathbf{x}_a$ is the sub-vector of $\\mathbf{x}$ made up of the first M element of $\\mathbf{x}$ , and $\\mathbf{x}_b$ is the sub-vector of $\\mathbf{x}$ made up of the last N element of $\\mathbf{x}$ . What's more, we can also express i, j in terms of k:\n\n$$i = (k-1) \\otimes N + 1$$\n and $j = (k-1) \\otimes N + 1$ \n\nwhere $\\emptyset$ and $\\odot$ means integer division and remainder, respectively.",
"answer_length": 2450
}
]
},
{
"chapter_number": 7,
"total_questions": 18,
"difficulty_breakdown": {
"easy": 7,
"medium": 6,
"hard": 0,
"unknown": 6
},
"questions": [
{
"chapter": 7,
"question_number": "7.1",
"difficulty": "medium",
"question_text": "- 7.1 (\\*\\*) www Suppose we have a data set of input vectors $\\{\\mathbf{x}_n\\}$ with corresponding target values $t_n \\in \\{-1,1\\}$ , and suppose that we model the density of input vectors within each class separately using a Parzen kernel density estimator (see Section 2.5.1) with a kernel $k(\\mathbf{x}, \\mathbf{x}')$ . Write down the minimum misclassification-rate decision rule assuming the two classes have equal prior probability. Show also that, if the kernel is chosen to be $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^T \\mathbf{x}'$ , then the classification rule reduces to simply assigning a new input vector to the class having the closest mean. Finally, show that, if the kernel takes the form $k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$ , that the classification is based on the closest mean in the feature space $\\phi(\\mathbf{x})$ .",
"answer": "By analogy to Eq (2.249: $p(\\mathbf{x}) = \\frac{1}{N} \\sum_{n=1}^{N} \\frac{1}{h^D} k\\left(\\frac{\\mathbf{x} - \\mathbf{x}_n}{h}\\right)$), we can obtain:\n\n$$p(\\mathbf{x}|t) = \\begin{cases} \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\frac{1}{Z_k} \\cdot k(\\mathbf{x}, \\mathbf{x}_n) & t = +1\\\\ \\frac{1}{N_{-1}} \\sum_{n=1}^{N_{-1}} \\frac{1}{Z_k} \\cdot k(\\mathbf{x}, \\mathbf{x}_n) & t = -1 \\end{cases}$$\n\nwhere $N_{+1}$ represents the number of samples with label t = +1 and it is the same for $N_{-1}$ . $Z_k$ is a normalization constant representing the volume of the hypercube. Since we have equal prior for the class, i.e.,\n\n$$p(t) = \\begin{cases} 0.5 & t = +1 \\\\ 0.5 & t = -1 \\end{cases}$$\n\nBased on Bayes' Theorem, we have $p(t|\\mathbf{x}) \\propto p(\\mathbf{x}|t) \\cdot p(t)$ , yielding:\n\n$$p(t|\\mathbf{x}) = \\begin{cases} \\frac{1}{Z} \\cdot \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_n) & t = +1\\\\ \\frac{1}{Z} \\cdot \\frac{1}{N_{-1}} \\sum_{n=1}^{N_{-1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_n) & t = -1 \\end{cases}$$\n\nWhere 1/Z is a normalization constant to guarantee the integration of the posterior equal to 1. To classify a new sample $\\mathbf{x}^*$ , we try to find the value $t^*$ that can maximize $p(t|\\mathbf{x})$ . Therefore, we can obtain:\n\n$$t^{*} = \\begin{cases} +1 & \\text{if } \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_{n}) \\ge \\frac{1}{N_{-1}} \\sum_{n=1}^{N_{-1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_{n}) \\\\ -1 & \\text{if } \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_{n}) \\le \\frac{1}{N_{-1}} \\sum_{n=1}^{N_{-1}} \\cdot k(\\mathbf{x}, \\mathbf{x}_{n}) \\end{cases}$$\n(\\*)\n\nIf we now choose the kernel function as $k(\\mathbf{x}, \\mathbf{x}') = \\mathbf{x}^T \\mathbf{x}'$ , we have:\n\n$$\\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} k(\\mathbf{x}, \\mathbf{x}_n) = \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\mathbf{x}^T \\mathbf{x}_n = \\mathbf{x}^T \\tilde{\\mathbf{x}}_{+1}$$\n\nWhere we have denoted:\n\n$$\\tilde{\\mathbf{x}}_{+1} = \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\mathbf{x}_n$$\n\nand similarly for $\\tilde{\\mathbf{x}}_{-1}$ . Therefore, the classification criterion (\\*) can be written as:\n\n$$t^{\\star} = \\begin{cases} +1 & \\text{if } \\tilde{\\mathbf{x}}_{+1} \\ge \\tilde{\\mathbf{x}}_{-1} \\\\ -1 & \\text{if } \\tilde{\\mathbf{x}}_{+1} \\le \\tilde{\\mathbf{x}}_{-1} \\end{cases}$$\n\nWhen we choose the kernel function as $k(\\mathbf{x}, \\mathbf{x}') = \\phi(\\mathbf{x})^T \\phi(\\mathbf{x}')$ , we can similarly obtain the classification criterion:\n\n$$t^* = \\begin{cases} +1 & \\text{if } \\tilde{\\phi}(\\mathbf{x}_{+1}) \\ge \\tilde{\\phi}(\\mathbf{x}_{-1}) \\\\ -1 & \\text{if } \\tilde{\\phi}(\\mathbf{x}_{+1}) \\le \\tilde{\\phi}(\\mathbf{x}_{-1}) \\end{cases}$$\n\nWhere we have defined:\n\n$$\\tilde{\\phi}(\\mathbf{x}_{+1}) = \\frac{1}{N_{+1}} \\sum_{n=1}^{N_{+1}} \\phi(\\mathbf{x}_n)$$",
"answer_length": 2812
},
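The minimum-misclassification rule derived above simply compares the two class-conditional kernel averages. A compact sketch, assuming a Gaussian Parzen kernel, equal priors, and toy Gaussian class data (all illustrative):

```python
import numpy as np

def parzen_classify(x_new, X, t, h=0.5):
    """Assign x_new to the class with the larger mean kernel value (Parzen estimate),
    assuming equal class priors and targets t_n in {-1, +1}."""
    k = np.exp(-0.5 * np.sum((X - x_new) ** 2, axis=1) / h ** 2)
    return +1 if k[t == +1].mean() >= k[t == -1].mean() else -1

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, (50, 2)), rng.normal(-1.0, 1.0, (50, 2))])
t = np.array([+1] * 50 + [-1] * 50)
print(parzen_classify(np.array([0.8, 0.9]), X, t))   # expected: +1
```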
{
"chapter": 7,
"question_number": "7.10",
"difficulty": "medium",
"question_text": "Derive the result (7.85: $= -\\frac{1}{2} \\left\\{ N \\ln(2\\pi) + \\ln |\\mathbf{C}| + \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t} \\right\\}$) for the marginal likelihood function in the regression RVM, by performing the Gaussian integral over w in (7.84: $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\beta) = \\int p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) p(\\mathbf{w}|\\boldsymbol{\\alpha}) \\, d\\mathbf{w}.$) using the technique of completing the square in the exponential.",
"answer": "We first note that this result is given immediately from (2.113)–(2.115), but the task set in the exercise was to practice the technique of completing the square. In this solution and that of Exercise 7.12, we broadly follow the presentation in Section 3.5.1. Using (7.79: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1}).$) and (7.80: $p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})$), we can write (7.84: $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\beta) = \\int p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) p(\\mathbf{w}|\\boldsymbol{\\alpha}) \\, d\\mathbf{w}.$) in a form similar to (3.78: $p(\\mathbf{t}|\\alpha,\\beta) = \\left(\\frac{\\beta}{2\\pi}\\right)^{N/2} \\left(\\frac{\\alpha}{2\\pi}\\right)^{M/2} \\int \\exp\\left\\{-E(\\mathbf{w})\\right\\} d\\mathbf{w}$)\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\beta) = \\left(\\frac{\\beta}{2\\pi}\\right)^{N/2} \\frac{1}{(2\\pi)^{N/2}} \\prod_{i=1}^{M} \\alpha_i \\int \\exp\\left\\{-E(\\mathbf{w})\\right\\} d\\mathbf{w}$$\n (129)\n\nwhere\n\n$$E(\\mathbf{w}) = \\frac{\\beta}{2} \\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w}\\|^2 + \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w}$$\n\nand $\\mathbf{A} = \\operatorname{diag}(\\boldsymbol{\\alpha})$ .\n\nCompleting the square over w, we get\n\n$$E(\\mathbf{w}) = \\frac{1}{2} (\\mathbf{w} - \\mathbf{m})^{\\mathrm{T}} \\mathbf{\\Sigma}^{-1} (\\mathbf{w} - \\mathbf{m}) + E(\\mathbf{t})$$\n (130)\n\nwhere m and $\\Sigma$ are given by (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$), respectively, and\n\n$$E(\\mathbf{t}) = \\frac{1}{2} \\left( \\beta \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} - \\mathbf{m}^{\\mathrm{T}} \\mathbf{\\Sigma}^{-1} \\mathbf{m} \\right).$$\n (131)\n\nUsing (130), we can evaluate the integral in (129) to obtain\n\n$$\\int \\exp\\{-E(\\mathbf{w})\\} d\\mathbf{w} = \\exp\\{-E(\\mathbf{t})\\} (2\\pi)^{M/2} |\\mathbf{\\Sigma}|^{1/2}.$$\n (132)\n\nConsidering this as a function of **t** we see from (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$), that we only need to deal with the factor $\\exp\\{-E(\\mathbf{t})\\}$ . 
Using (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$), (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$), (C.7) and (7.86: $\\mathbf{C} = \\beta^{-1} \\mathbf{I} + \\mathbf{\\Phi} \\mathbf{A}^{-1} \\mathbf{\\Phi}^{\\mathrm{T}}.$), we can re-write\n\n(131) as follows\n\n$$E(\\mathbf{t}) = \\frac{1}{2} (\\beta \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} - \\mathbf{m}^{\\mathrm{T}} \\mathbf{\\Sigma}^{-1} \\mathbf{m})$$\n\n$$= \\frac{1}{2} (\\beta \\mathbf{t}^{\\mathrm{T}} \\mathbf{t} - \\beta \\mathbf{t}^{\\mathrm{T}} \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Sigma}^{-1} \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t} \\beta)$$\n\n$$= \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} (\\beta \\mathbf{I} - \\beta \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\beta) \\mathbf{t}$$\n\n$$= \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} (\\beta \\mathbf{I} - \\beta \\mathbf{\\Phi} (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^{\\mathrm{T}} \\beta) \\mathbf{t}$$\n\n$$= \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} (\\beta^{-1} \\mathbf{I} + \\mathbf{\\Phi} \\mathbf{A}^{-1} \\mathbf{\\Phi}^{\\mathrm{T}})^{-1} \\mathbf{t}$$\n\n$$= \\frac{1}{2} \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t}.$$\n\nThis gives us the last term on the r.h.s. of (7.85); the two preceding terms are given implicitly, as they form the normalization constant for the posterior Gaussian distribution $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta})$ .",
"answer_length": 3684
},
{
"chapter": 7,
"question_number": "7.12",
"difficulty": "medium",
"question_text": "Show that direct maximization of the log marginal likelihood (7.85: $= -\\frac{1}{2} \\left\\{ N \\ln(2\\pi) + \\ln |\\mathbf{C}| + \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t} \\right\\}$) for the regression relevance vector machine leads to the re-estimation equations (7.87: $\\alpha_i^{\\text{new}} = \\frac{\\gamma_i}{m_i^2}$) and (7.88: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi}\\mathbf{m}\\|^2}{N - \\sum_{i} \\gamma_i}$) where $\\gamma_i$ is defined by (7.89: $\\gamma_i = 1 - \\alpha_i \\Sigma_{ii}$).",
"answer": "According to the previous problem, we can explicitly write down the log marginal likelihood in an alternative form:\n\n$$\\ln p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) = \\frac{N}{2} \\ln \\boldsymbol{\\beta} - \\frac{N}{2} \\ln 2\\pi + \\frac{1}{2} \\ln |\\boldsymbol{\\Sigma}| + \\frac{1}{2} \\sum_{i=1}^{M} \\ln \\alpha_i - E(\\mathbf{t})$$\n\nWe first derive:\n\n$$\\begin{split} \\frac{dE(\\mathbf{t})}{d\\alpha_i} &= -\\frac{1}{2} \\frac{d}{d\\alpha_i} (\\mathbf{m}^T \\mathbf{\\Sigma}^{-1} \\mathbf{m}) \\\\ &= -\\frac{1}{2} \\frac{d}{d\\alpha_i} (\\beta^2 \\mathbf{t}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Sigma}^{-1} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}) \\\\ &= -\\frac{1}{2} \\frac{d}{d\\alpha_i} (\\beta^2 \\mathbf{t}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}) \\\\ &= -\\frac{1}{2} Tr \\left[ \\frac{d}{d\\mathbf{\\Sigma}^{-1}} (\\beta^2 \\mathbf{t}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}) \\cdot \\frac{d\\mathbf{\\Sigma}^{-1}}{d\\alpha_i} \\right] \\\\ &= \\frac{1}{2} \\beta^2 Tr \\left[ \\mathbf{\\Sigma} (\\mathbf{\\Phi}^T \\mathbf{t}) (\\mathbf{\\Phi}^T \\mathbf{t})^T \\mathbf{\\Sigma} \\cdot \\mathbf{I}_i \\right] = \\frac{1}{2} m_{ii}^2 \\end{split}$$\n\nIn the last step, we have utilized the following equation:\n\n$$\\frac{d}{d\\mathbf{X}}Tr(\\mathbf{A}\\mathbf{X}^{-1}\\mathbf{B}) = -\\mathbf{X}^{-T}\\mathbf{A}^{T}\\mathbf{B}^{T}\\mathbf{X}^{-T}$$\n\nMoreover, here $I_i$ is a matrix with all elements equal to zero, expect the i-th diagonal element, and the i-th diagonal element equals to 1. Then we utilize matrix identity Eq (C.22) to derive:\n\n$$\\frac{d\\ln|\\mathbf{\\Sigma}|}{d\\alpha_i} = -\\frac{d\\ln|\\mathbf{\\Sigma}^{-1}|}{d\\alpha_i}$$\n$$= -Tr\\Big[\\mathbf{\\Sigma}\\frac{d}{d\\alpha_i}(\\mathbf{A} + \\beta\\mathbf{\\Phi}^T\\mathbf{\\Phi})\\Big]$$\n$$= -\\Sigma_{ii}$$\n\nTherefore, we can obtain:\n\n$$\\frac{d\\ln p}{d\\alpha_i} = \\frac{1}{2\\alpha_i} - \\frac{1}{2}m_i^2 - \\frac{1}{2}\\Sigma_{ii}$$\n\nSet it to zero and obtain:\n\n$$\\alpha_i = \\frac{1 - \\alpha_i \\Sigma_{ii}}{m_i} = \\frac{\\gamma_i}{m_i^2}$$\n\nThen we calculate the derivatives of $\\ln p$ with respect to $\\beta$ beginning by:\n\n$$\\frac{d\\ln|\\mathbf{\\Sigma}|}{d\\beta} = -\\frac{d\\ln|\\mathbf{\\Sigma}^{-1}|}{d\\beta}$$\n$$= -Tr\\left[\\mathbf{\\Sigma}\\frac{d}{d\\beta}(\\mathbf{A} + \\beta\\mathbf{\\Phi}^T\\mathbf{\\Phi})\\right]$$\n$$= -Tr\\left[\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{\\Phi}\\right]$$\n\nThen we continue:\n\n$$\\begin{split} \\frac{dE(\\mathbf{t})}{d\\beta} &= \\frac{1}{2}\\mathbf{t}^T\\mathbf{t} - \\frac{1}{2}\\frac{d}{d\\beta}(\\mathbf{m}^T\\mathbf{\\Sigma}^{-1}\\mathbf{m}) \\\\ &= \\frac{1}{2}\\mathbf{t}^T\\mathbf{t} - \\frac{1}{2}\\frac{d}{d\\beta}(\\beta^2\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Sigma}^{-1}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t}) \\\\ &= \\frac{1}{2}\\mathbf{t}^T\\mathbf{t} - \\frac{1}{2}\\frac{d}{d\\beta}(\\beta^2\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t}) \\\\ &= \\frac{1}{2}\\mathbf{t}^T\\mathbf{t} - \\beta\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t} - \\frac{1}{2}\\beta^2\\frac{d}{d\\beta}(\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t}) \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\beta\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t} - 
\\beta^2\\frac{d}{d\\beta}(\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t})\\Big\\} \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{t}^T(\\mathbf{\\Phi}\\mathbf{m}) - \\beta^2\\frac{d}{d\\beta}(\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t})\\Big\\} \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{t}^T(\\mathbf{\\Phi}\\mathbf{m}) - \\beta^2Tr\\Big[\\frac{d}{d\\mathbf{\\Sigma}^{-1}}(\\mathbf{t}^T\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T\\mathbf{t}) \\cdot \\frac{d\\mathbf{\\Sigma}^{-1}}{d\\beta}\\Big]\\Big\\} \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{t}^T(\\mathbf{\\Phi}\\mathbf{m}) + \\beta^2Tr\\Big[\\mathbf{\\Sigma}(\\mathbf{\\Phi}^T\\mathbf{t})(\\mathbf{\\Phi}^T\\mathbf{t})^T\\mathbf{\\Sigma} \\cdot \\mathbf{\\Phi}^T\\mathbf{\\Phi}\\Big]\\Big\\} \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{t}^T(\\mathbf{\\Phi}\\mathbf{m}) + Tr\\Big[\\mathbf{m}\\mathbf{m}^T \\cdot \\mathbf{\\Phi}^T\\mathbf{\\Phi}\\Big]\\Big\\} \\\\ &= \\frac{1}{2}\\Big\\{\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{t}^T(\\mathbf{\\Phi}\\mathbf{m}) + Tr\\Big[\\mathbf{\\Phi}\\mathbf{m}\\mathbf{m}^T \\cdot \\mathbf{\\Phi}^T\\Big]\\Big\\} \\\\ &= \\frac{1}{2}||\\mathbf{t} - \\mathbf{\\Phi}\\mathbf{m}||^2 \\end{split}$$\n\nTherefore, we have obtained:\n\n$$\\frac{d \\ln p}{d \\beta} = \\frac{1}{2} \\left( \\frac{N}{\\beta} - ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 - Tr[\\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{\\Phi}] \\right)$$\n\nUsing Eq (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$), we can obtain:\n\n$$\\Sigma \\Phi^{T} \\Phi = \\Sigma \\Phi^{T} \\Phi + \\beta^{-1} \\Sigma \\mathbf{A} - \\beta^{-1} \\Sigma \\mathbf{A}$$\n\n$$= \\Sigma (\\beta \\Phi^{T} \\Phi + \\mathbf{A}) \\beta^{-1} - \\beta^{-1} \\Sigma \\mathbf{A}$$\n\n$$= \\mathbf{I} \\beta^{-1} - \\beta^{-1} \\Sigma \\mathbf{A}$$\n\n$$= (\\mathbf{I} - \\Sigma \\mathbf{A}) \\beta^{-1}$$\n\nSetting the derivative equal to zero, we can obtain:\n\n$$\\beta^{-1} = \\frac{||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2}{N - Tr(\\mathbf{I} - \\mathbf{\\Sigma} \\mathbf{A})} = \\frac{||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2}{N - \\sum_i \\gamma_i}$$\n\nJust as required.",
"answer_length": 5244
},
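The re-estimation equations (7.87) and (7.88) can be iterated directly. The sketch below uses Gaussian basis functions on synthetic 1-D data; the basis centres, widths, initial hyperparameters and the cap on alpha are illustrative assumptions.

```python
import numpy as np

def rvm_reestimate(Phi, t, alpha, beta):
    """One pass of Eq (7.87) and (7.88), using the posterior moments (7.82)-(7.83)."""
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)                    # Eq (7.83)
    m = beta * Sigma @ Phi.T @ t                                     # Eq (7.82)
    gamma = 1.0 - alpha * np.diag(Sigma)                             # Eq (7.89)
    alpha_new = gamma / m ** 2                                       # Eq (7.87)
    beta_new = (len(t) - gamma.sum()) / np.sum((t - Phi @ m) ** 2)   # Eq (7.88)
    return alpha_new, beta_new

# synthetic 1-D regression with Gaussian basis functions (illustrative choices)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
centres = np.linspace(0.0, 1.0, 10)
Phi = np.exp(-0.5 * (x[:, None] - centres[None, :]) ** 2 / 0.1 ** 2)
alpha, beta = np.ones(10), 1.0
for _ in range(10):
    alpha, beta = rvm_reestimate(Phi, t, np.minimum(alpha, 1e6), beta)
print(np.round(alpha, 2))   # an alpha_i driven to large values prunes basis function i
```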
{
"chapter": 7,
"question_number": "7.13",
"difficulty": "medium",
"question_text": "In the evidence framework for RVM regression, we obtained the re-estimation formulae (7.87: $\\alpha_i^{\\text{new}} = \\frac{\\gamma_i}{m_i^2}$) and (7.88: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi}\\mathbf{m}\\|^2}{N - \\sum_{i} \\gamma_i}$) by maximizing the marginal likelihood given by (7.85: $= -\\frac{1}{2} \\left\\{ N \\ln(2\\pi) + \\ln |\\mathbf{C}| + \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t} \\right\\}$). Extend this approach by inclusion of hyperpriors given by gamma distributions of the form (B.26) and obtain the corresponding re-estimation formulae for $\\alpha$ and $\\beta$ by maximizing the corresponding posterior probability $p(\\mathbf{t}, \\alpha, \\beta | \\mathbf{X})$ with respect to $\\alpha$ and $\\beta$ .",
"answer": "This problem is quite confusing. In my point of view, the posterior should be denoted as $p(\\mathbf{w}|\\mathbf{t}, \\mathbf{X}, \\{a_i, b_i\\}, a_{\\beta}, b_{\\beta})$ , where $a_{\\beta}, b_{\\beta}$ controls the Gamma distribution of $\\beta$ , and $a_i, b_i$ controls the Gamma distribution of $\\alpha_i$ . What we should do is to maximize the marginal likelihood $p(\\mathbf{t}|\\mathbf{X}, \\{a_i, b_i\\}, a_{\\beta}, b_{\\beta})$ with respect to $\\{a_i, b_i\\}, a_{\\beta}, b_{\\beta}$ . Now we do not have a point estimation for the hyperparameters $\\beta$ and $\\alpha_i$ . We have a distribution (controlled by the hyper priors, i.e., $\\{a_i, b_i\\}, a_{\\beta}, b_{\\beta}$ ) instead.",
"answer_length": 688
},
{
"chapter": 7,
"question_number": "7.14",
"difficulty": "medium",
"question_text": "Derive the result (7.90: $= \\mathcal{N}\\left(t|\\mathbf{m}^{\\mathrm{T}}\\boldsymbol{\\phi}(\\mathbf{x}), \\sigma^{2}(\\mathbf{x})\\right).$) for the predictive distribution in the relevance vector machine for regression. Show that the predictive variance is given by (7.91: $\\sigma^{2}(\\mathbf{x}) = (\\beta^{*})^{-1} + \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{\\Sigma} \\phi(\\mathbf{x})$).",
"answer": "We begin by writing down $p(t|\\mathbf{x}, \\mathbf{w}, \\beta^*)$ . Using Eq (7.76: $p(t|\\mathbf{x}, \\mathbf{w}, \\beta) = \\mathcal{N}(t|y(\\mathbf{x}), \\beta^{-1})$) and Eq (7.77: $y(\\mathbf{x}) = \\sum_{i=1}^{M} w_i \\phi_i(\\mathbf{x}) = \\mathbf{w}^{\\mathrm{T}} \\phi(\\mathbf{x})$), we can obtain:\n\n$$p(t|\\mathbf{x}, \\mathbf{w}, \\beta^*) = \\mathcal{N}(t|\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), (\\beta^*)^{-1})$$\n\nThen we write down $p(\\mathbf{w}|\\mathbf{X},\\mathbf{t},\\alpha^*,\\beta^*)$ . Using Eq (7.81: $p(\\mathbf{w}|\\mathbf{t}, \\mathbf{X}, \\boldsymbol{\\alpha}, \\beta) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}, \\boldsymbol{\\Sigma})$), (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$), we can obtain:\n\n$$p(\\mathbf{w}|\\mathbf{X}, \\mathbf{t}, \\alpha^*, \\beta^*) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}, \\Sigma)$$\n\nWhere **m** and $\\Sigma$ are evaluated using Eq (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$) given $\\alpha = \\alpha^*$ and $\\beta = \\beta^*$ . Then we utilize Eq (7.90: $= \\mathcal{N}\\left(t|\\mathbf{m}^{\\mathrm{T}}\\boldsymbol{\\phi}(\\mathbf{x}), \\sigma^{2}(\\mathbf{x})\\right).$) and obtain:\n\n$$p(t|\\mathbf{x}, \\mathbf{X}, \\mathbf{t}, \\alpha^*, \\beta^*) = \\int \\mathcal{N}(t|\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}), (\\beta^*)^{-1}) \\mathcal{N}(\\mathbf{w}|\\mathbf{m}, \\boldsymbol{\\Sigma}) d\\mathbf{w}$$\n$$= \\int \\mathcal{N}(t|\\boldsymbol{\\phi}(\\mathbf{x})^T \\mathbf{w}, (\\beta^*)^{-1}) \\mathcal{N}(\\mathbf{w}|\\mathbf{m}, \\boldsymbol{\\Sigma}) d\\mathbf{w}$$\n\nUsing Eq (2.113)-(2.117), we can obtain:\n\n$$p(t|\\mathbf{x}, \\mathbf{X}, \\mathbf{t}, \\alpha^*, \\beta^*) = \\mathcal{N}(\\mu, \\sigma^2)$$\n\nWhere we have defined:\n\n$$\\mu = \\mathbf{m}^T \\boldsymbol{\\phi}(\\mathbf{x})$$\n\nAnd\n\n$$\\sigma^2 = (\\beta^*)^{-1} + \\phi(\\mathbf{x})^T \\mathbf{\\Sigma} \\phi(\\mathbf{x})$$\n\nJust as required.",
"answer_length": 2039
},
{
"chapter": 7,
"question_number": "7.15",
"difficulty": "medium",
"question_text": "Using the results (7.94: $|\\mathbf{C}| = |\\mathbf{C}_{-i}||1 + \\alpha_i^{-1} \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\mathbf{C}_{-i}^{-1} \\boldsymbol{\\varphi}_i|$) and (7.95: $\\mathbf{C}^{-1} = \\mathbf{C}_{-i}^{-1} - \\frac{\\mathbf{C}_{-i}^{-1} \\boldsymbol{\\varphi}_{i} \\boldsymbol{\\varphi}_{i}^{\\mathrm{T}} \\mathbf{C}_{-i}^{-1}}{\\alpha_{i} + \\boldsymbol{\\varphi}_{i}^{\\mathrm{T}} \\mathbf{C}_{-i}^{-1} \\boldsymbol{\\varphi}_{i}}.$), show that the marginal likelihood (7.85: $= -\\frac{1}{2} \\left\\{ N \\ln(2\\pi) + \\ln |\\mathbf{C}| + \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t} \\right\\}$) can be written in the form (7.96: $L(\\alpha) = L(\\alpha_{-i}) + \\lambda(\\alpha_i)$), where $\\lambda(\\alpha_n)$ is defined by (7.97: $\\lambda(\\alpha_i) = \\frac{1}{2} \\left[ \\ln \\alpha_i - \\ln (\\alpha_i + s_i) + \\frac{q_i^2}{\\alpha_i + s_i} \\right]$) and the sparsity and quality factors are defined by (7.98: $s_i = \\varphi_i^{\\mathrm{T}} \\mathbf{C}_{-i}^{-1} \\varphi_i$) and (7.99: $q_i = \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\mathbf{C}_{-i}^{-1} \\mathbf{t}.$), respectively.",
"answer": "We just follow the hint.\n\n$$\\begin{split} L(\\pmb{\\alpha}) &= -\\frac{1}{2} \\{ N \\ln 2\\pi + \\ln |\\mathbf{C}| + \\mathbf{t}^T \\mathbf{C}^{-1} \\mathbf{t} \\} \\\\ &= -\\frac{1}{2} \\Big\\{ N \\ln 2\\pi + \\ln |\\mathbf{C}_{-i}| + \\ln |1 + \\alpha_i^{-1} \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i | \\\\ &+ \\mathbf{t}^T (\\mathbf{C}_{-i}^{-1} - \\frac{\\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1}}{\\alpha_i + \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i}) \\mathbf{t} \\Big\\} \\\\ &= L(\\pmb{\\alpha}_{-i}) - \\frac{1}{2} \\ln |1 + \\alpha_i^{-1} \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i| + \\frac{1}{2} \\mathbf{t}^T \\frac{\\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1}}{\\alpha_i + \\pmb{\\varphi}_i^T \\mathbf{C}_{-i}^{-1} \\pmb{\\varphi}_i} \\mathbf{t} \\\\ &= L(\\pmb{\\alpha}_{-i}) - \\frac{1}{2} \\ln |1 + \\alpha_i^{-1} s_i| + \\frac{1}{2} \\frac{q_i^2}{\\alpha_i + s_i} \\\\ &= L(\\pmb{\\alpha}_{-i}) - \\frac{1}{2} \\ln \\frac{\\alpha_i + s_i}{\\alpha_i} + \\frac{1}{2} \\frac{q_i^2}{\\alpha_i + s_i} \\\\ &= L(\\pmb{\\alpha}_{-i}) + \\frac{1}{2} \\Big[ \\ln \\alpha_i - \\ln(\\alpha_i + s_i) + \\frac{q_i^2}{\\alpha_i + s_i} \\Big] = L(\\pmb{\\alpha}_{-i}) + \\lambda(\\alpha_i) \\end{split}$$\n\nWhere we have defined $\\lambda(\\alpha_i)$ , $s_i$ and $q_i$ as shown in Eq (7.97)-(7.99).",
"answer_length": 1318
},
{
"chapter": 7,
"question_number": "7.16",
"difficulty": "easy",
"question_text": "By taking the second derivative of the log marginal likelihood (7.97: $\\lambda(\\alpha_i) = \\frac{1}{2} \\left[ \\ln \\alpha_i - \\ln (\\alpha_i + s_i) + \\frac{q_i^2}{\\alpha_i + s_i} \\right]$) for the regression RVM with respect to the hyperparameter $\\alpha_i$ , show that the stationary point given by (7.101: $\\alpha_i = \\frac{s_i^2}{q_i^2 - s_i}.$) is a maximum of the marginal likelihood.",
"answer": "We first calculate the first derivative of Eq(7.97) with respect to $\\alpha_i$ :\n\n$$\\frac{\\partial \\lambda}{\\partial \\alpha_i} = \\frac{1}{2} \\left[ \\frac{1}{\\alpha_i} - \\frac{1}{\\alpha_i + s_i} - \\frac{q_i^2}{(\\alpha_i + s_i)^2} \\right]$$\n\nThen we calculate the second derivative:\n\n$$\\frac{\\partial^2 \\lambda}{\\partial \\alpha_i^2} = \\frac{1}{2} \\left[ -\\frac{1}{\\alpha_i^2} + \\frac{1}{(\\alpha_i + s_i)^2} + \\frac{2q_i^2}{(\\alpha_i + s_i)^3} \\right]$$\n\nNext we aim to prove that when $\\alpha_i$ is given by Eq (7.101: $\\alpha_i = \\frac{s_i^2}{q_i^2 - s_i}.$), i.e., setting the first derivative equal to 0, the second derivative (i.e., the expression above) is negative. First we can obtain:\n\n$$\\alpha_i + s_i = \\frac{s_i^2}{q_i^2 - s_i} + s_i = \\frac{s_i q_i^2}{q_i^2 - s_i}$$\n\nTherefore, substituting $\\alpha_i + s_i$ and $\\alpha_i$ into the second derivative, we can obtain:\n\n$$\\begin{split} \\frac{\\partial^2 \\lambda}{\\partial \\alpha_i^2} &= \\frac{1}{2} \\left[ -\\frac{(q_i^2 - s_i)^2}{s_i^4} + \\frac{(q_i^2 - s_i)^2}{s_i^2 q_i^4} + \\frac{2q_i^2 (q_i^2 - s_i)^3}{s_i^3 q_i^6} \\right] \\\\ &= \\frac{1}{2} \\left[ -\\frac{q_i^4 (q_i^2 - s_i)^2}{q_i^4 s_i^4} + \\frac{s_i^2 (q_i^2 - s_i)^2}{s_i^4 q_i^4} + \\frac{2s_i (q_i^2 - s_i)^3}{s_i^4 q_i^4} \\right] \\\\ &= \\frac{1}{2} \\frac{(q_i^2 - s_i)^2}{q_i^4 s_i^4} \\left[ -q_i^4 + s_i^2 + 2s_i (q_i^2 - s_i) \\right] \\\\ &= \\frac{1}{2} \\frac{(q_i^2 - s_i)^2}{q_i^4 s_i^4} \\left[ -(q_i^2 - s_i)^2 \\right] \\\\ &= -\\frac{1}{2} \\frac{(q_i^2 - s_i)^4}{q_i^4 s_i^4} < 0 \\end{split}$$\n\nJust as required.",
"answer_length": 1537
},
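The stationary point (7.101) being a maximum is easy to confirm numerically for any s_i and q_i with q_i^2 > s_i; the values below are arbitrary illustrations.

```python
import numpy as np

def lam(alpha, s, q):
    """lambda(alpha_i) from Eq (7.97)."""
    return 0.5 * (np.log(alpha) - np.log(alpha + s) + q ** 2 / (alpha + s))

s, q = 2.0, 3.0                          # illustrative values with q^2 > s
alpha_star = s ** 2 / (q ** 2 - s)       # Eq (7.101)
alphas = np.linspace(0.05, 10.0, 4000)
print(alpha_star, alphas[np.argmax(lam(alphas, s, q))])   # the two agree closely
```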
{
"chapter": 7,
"question_number": "7.17",
"difficulty": "medium",
"question_text": "Using (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$) and (7.86: $\\mathbf{C} = \\beta^{-1} \\mathbf{I} + \\mathbf{\\Phi} \\mathbf{A}^{-1} \\mathbf{\\Phi}^{\\mathrm{T}}.$), together with the matrix identity (C.7), show that the quantities $S_n$ and $Q_n$ defined by (7.102: $Q_i = \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t}$) and (7.103: $S_i = \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\mathbf{C}^{-1} \\boldsymbol{\\varphi}_i.$) can be written in the form (7.106) and (7.107: $S_i = \\beta \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\boldsymbol{\\varphi}_i - \\beta^2 \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma} \\boldsymbol{\\Phi}^{\\mathrm{T}} \\boldsymbol{\\varphi}_i$).",
"answer": "We just follow the hint. According to Eq (7.102: $Q_i = \\boldsymbol{\\varphi}_i^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t}$), Eq (7.86: $\\mathbf{C} = \\beta^{-1} \\mathbf{I} + \\mathbf{\\Phi} \\mathbf{A}^{-1} \\mathbf{\\Phi}^{\\mathrm{T}}.$) and matrix identity (C.7), we have:\n\n$$Q_{i} = \\boldsymbol{\\varphi}_{i}^{T} \\mathbf{C}^{-1} \\mathbf{t}$$\n\n$$= \\boldsymbol{\\varphi}_{i}^{T} (\\boldsymbol{\\beta}^{-1} \\mathbf{I} + \\boldsymbol{\\Phi} \\mathbf{A}^{-1} \\boldsymbol{\\Phi}^{T})^{-1} \\mathbf{t}$$\n\n$$= \\boldsymbol{\\varphi}_{i}^{T} (\\boldsymbol{\\beta} \\mathbf{I} - \\boldsymbol{\\beta} \\mathbf{I} \\boldsymbol{\\Phi} (\\mathbf{A} + \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\beta} \\mathbf{I} \\boldsymbol{\\Phi})^{-1} \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\beta} \\mathbf{I}) \\mathbf{t}$$\n\n$$= \\boldsymbol{\\varphi}_{i}^{T} (\\boldsymbol{\\beta} - \\boldsymbol{\\beta}^{2} \\boldsymbol{\\Phi} (\\mathbf{A} + \\boldsymbol{\\beta} \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi})^{-1} \\boldsymbol{\\Phi}^{T}) \\mathbf{t}$$\n\n$$= \\boldsymbol{\\varphi}_{i}^{T} (\\boldsymbol{\\beta} - \\boldsymbol{\\beta}^{2} \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma} \\boldsymbol{\\Phi}^{T}) \\mathbf{t}$$\n\n$$= \\boldsymbol{\\beta} \\boldsymbol{\\varphi}_{i}^{T} \\mathbf{t} - \\boldsymbol{\\beta}^{2} \\boldsymbol{\\varphi}_{i}^{T} \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma} \\boldsymbol{\\Phi}^{T} \\mathbf{t}$$\n\nSimilarly, we can obtain:\n\n$$S_{i} = \\boldsymbol{\\varphi}_{i}^{T} \\mathbf{C}^{-1} \\boldsymbol{\\varphi}_{i}$$\n\n$$= \\boldsymbol{\\varphi}_{i}^{T} (\\beta - \\beta^{2} \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma} \\boldsymbol{\\Phi}^{T}) \\boldsymbol{\\varphi}_{i}$$\n\n$$= \\beta \\boldsymbol{\\varphi}_{i}^{T} \\boldsymbol{\\varphi}_{i} - \\beta^{2} \\boldsymbol{\\varphi}_{i}^{T} \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma} \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\varphi}_{i}$$\n\nJust as required.",
"answer_length": 1771
},
{
"chapter": 7,
"question_number": "7.18",
"difficulty": "easy",
"question_text": "Show that the gradient vector and Hessian matrix of the log posterior distribution (7.109: $= \\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} - \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w} + \\text{const} \\quad$) for the classification relevance vector machine are given by (7.110: $\\nabla \\ln p(\\mathbf{w}|\\mathbf{t}, \\boldsymbol{\\alpha}) = \\boldsymbol{\\Phi}^{\\mathrm{T}}(\\mathbf{t} - \\mathbf{y}) - \\mathbf{A}\\mathbf{w}$) and (7.111: $\\nabla \\nabla \\ln p(\\mathbf{w}|\\mathbf{t}, \\boldsymbol{\\alpha}) = -(\\boldsymbol{\\Phi}^{\\mathrm{T}} \\mathbf{B} \\boldsymbol{\\Phi} + \\mathbf{A})$).",
"answer": "We begin by deriving the first term in Eq (7.109: $= \\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} - \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w} + \\text{const} \\quad$) with respect to $\\mathbf{w}$ . This can be easily evaluate based on Eq (4.90)-(4.91).\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\sum_{n=1}^{N} t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} = \\sum_{n=1}^{N} (t_n - y_n) \\boldsymbol{\\phi}_n = \\boldsymbol{\\Phi}^T (\\mathbf{t} - \\mathbf{y})$$\n\nSince the derivative of the second term in Eq (7.109: $= \\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} - \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w} + \\text{const} \\quad$) with respect to $\\mathbf{w}$ is rather simple to obtain. Therefore, The first derivative of Eq (7.109: $= \\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} - \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w} + \\text{const} \\quad$) with respect to $\\mathbf{w}$ is:\n\n$$\\frac{\\partial \\ln p}{\\partial \\mathbf{w}} = \\mathbf{\\Phi}^T (\\mathbf{t} - \\mathbf{y}) - \\mathbf{A}\\mathbf{w}$$\n\nFor the Hessian matrix, we can first obtain:\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\mathbf{\\Phi}^{T} (\\mathbf{t} - \\mathbf{y}) \\right\\} = \\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ (t_{n} - y_{n}) \\boldsymbol{\\phi}_{n} \\right\\}$$\n\n$$= -\\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ y_{n} \\cdot \\boldsymbol{\\phi}_{n} \\right\\}$$\n\n$$= -\\sum_{n=1}^{N} \\frac{\\partial \\sigma(\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})}{\\partial \\mathbf{w}} \\cdot \\boldsymbol{\\phi}_{n}^{T}$$\n\n$$= -\\sum_{n=1}^{N} \\frac{\\partial \\sigma(a)}{\\partial a} \\cdot \\frac{\\partial a}{\\partial a} \\cdot \\boldsymbol{\\phi}_{n}^{T}$$\n\nWhere we have defined $a = \\mathbf{w}^T \\boldsymbol{\\phi}_n$ . Then we can utilize Eq (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$) to derive:\n\n$$\\frac{\\partial}{\\partial \\mathbf{w}} \\Big\\{ \\mathbf{\\Phi}^T (\\mathbf{t} - \\mathbf{y}) \\Big\\} = -\\sum_{n=1}^N \\sigma (1 - \\sigma) \\cdot \\boldsymbol{\\phi}_n \\cdot \\boldsymbol{\\phi}_n^T = -\\mathbf{\\Phi}^T \\mathbf{B} \\mathbf{\\Phi}$$\n\nWhere **B** is a diagonal $N \\times N$ matrix with elements $b_n = y_n(1-y_n)$ . Therefore, we can obtain the Hessian matrix:\n\n$$\\mathbf{H} = \\frac{\\partial}{\\partial \\mathbf{w}} \\left\\{ \\frac{\\partial \\ln p}{\\partial \\mathbf{w}} \\right\\} = -(\\mathbf{\\Phi}^T \\mathbf{B} \\mathbf{\\Phi} + \\mathbf{A})$$\n\nJust as required.",
"answer_length": 2459
},
{
"chapter": 7,
"question_number": "7.19",
"difficulty": "medium",
"question_text": "Verify that maximization of the approximate log marginal likelihood function (7.114: $\\simeq p(\\mathbf{t}|\\mathbf{w}^*)p(\\mathbf{w}^*|\\alpha)(2\\pi)^{M/2}|\\Sigma|^{1/2}.$) for the classification relevance vector machine leads to the result (7.116: $\\alpha_i^{\\text{new}} = \\frac{\\gamma_i}{(w_i^{\\star})^2}$) for re-estimation of the hyperparameters.\n\n# Graphical Models\n\nProbabilities play a central role in modern pattern recognition. We have seen in Chapter 1 that probability theory can be expressed in terms of two simple equations corresponding to the sum rule and the product rule. All of the probabilistic inference and learning manipulations discussed in this book, no matter how complex, amount to repeated application of these two equations. We could therefore proceed to formulate and solve complicated probabilistic models purely by algebraic manipulation. However, we shall find it highly advantageous to augment the analysis using diagrammatic representations of probability distributions, called *probabilistic graphical models*. These offer several useful properties:\n\n- 1. They provide a simple way to visualize the structure of a probabilistic model and can be used to design and motivate new models.\n- 2. Insights into the properties of the model, including conditional independence properties, can be obtained by inspection of the graph.\n\n3. Complex computations, required to perform inference and learning in sophisticated models, can be expressed in terms of graphical manipulations, in which underlying mathematical expressions are carried along implicitly.\n\nA graph comprises *nodes* (also called *vertices*) connected by *links* (also known as *edges* or *arcs*). In a probabilistic graphical model, each node represents a random variable (or group of random variables), and the links express probabilistic relationships between these variables. The graph then captures the way in which the joint distribution over all of the random variables can be decomposed into a product of factors each depending only on a subset of the variables. We shall begin by discussing *Bayesian networks*, also known as *directed graphical models*, in which the links of the graphs have a particular directionality indicated by arrows. The other major class of graphical models are *Markov random fields*, also known as *undirected graphical models*, in which the links do not carry arrows and have no directional significance. Directed graphs are useful for expressing causal relationships between random variables, whereas undirected graphs are better suited to expressing soft constraints between random variables. For the purposes of solving inference problems, it is often convenient to convert both directed and undirected graphs into a different representation called a *factor graph*.\n\nIn this chapter, we shall focus on the key aspects of graphical models as needed for applications in pattern recognition and machine learning. More general treatments of graphical models can be found in the books by Whittaker (1990), Lauritzen (1996), Jensen (1996), Castillo *et al.* (1997), Jordan (1999), Cowell *et al.* (1999), and Jordan (2007).\n\n### 8.1. Bayesian Networks\n\nIn order to motivate the use of directed graphs to describe probability distributions, consider first an arbitrary joint distribution p(a,b,c) over three variables a,b, and c. Note that at this stage, we do not need to specify anything further about these variables, such as whether they are discrete or continuous. 
Indeed, one of the powerful aspects of graphical models is that a specific graph can make probabilistic statements for a broad class of distributions. By application of the product rule of probability (1.11: $p(X,Y) = p(Y|X)p(X).$), we can write the joint distribution in the form\n\n$$p(a, b, c) = p(c|a, b)p(a, b).$$\n (8.1: $p(a, b, c) = p(c|a, b)p(a, b).$)\n\nA second application of the product rule, this time to the second term on the right-hand side of (8.1: $p(a, b, c) = p(c|a, b)p(a, b).$), gives\n\n$$p(a, b, c) = p(c|a, b)p(b|a)p(a).$$\n (8.2: $p(a, b, c) = p(c|a, b)p(b|a)p(a).$)\n\nNote that this decomposition holds for any choice of the joint distribution. We now represent the right-hand side of (8.2: $p(a, b, c) = p(c|a, b)p(b|a)p(a).$) in terms of a simple graphical model as follows. First we introduce a node for each of the random variables a, b, and c and associate each node with the corresponding conditional distribution on the right-hand side of\n\n**Figure 8.1** A directed graphical model representing the joint probability distribution over three variables *a*, *b*, and *c*, corresponding to the decomposition on the right-hand side of (8.2: $p(a, b, c) = p(c|a, b)p(b|a)p(a).$).\n\n\n\n(8.2: $p(a, b, c) = p(c|a, b)p(b|a)p(a).$). Then, for each conditional distribution we add directed links (arrows) to the graph from the nodes corresponding to the variables on which the distribution is conditioned. Thus for the factor p(c|a,b), there will be links from nodes a and b to node c, whereas for the factor p(a) there will be no incoming links. The result is the graph shown in Figure 8.1. If there is a link going from a node a to a node b, then we say that node a is the *parent* of node b, and we say that node b is the *child* of node a. Note that we shall not make any formal distinction between a node and the variable to which it corresponds but will simply use the same symbol to refer to both.\n\nAn interesting point to note about (8.2: $p(a, b, c) = p(c|a, b)p(b|a)p(a).$) is that the left-hand side is symmetrical with respect to the three variables a, b, and c, whereas the right-hand side is not. Indeed, in making the decomposition in (8.2), we have implicitly chosen a particular ordering, namely a, b, c, and had we chosen a different ordering we would have obtained a different decomposition and hence a different graphical representation. We shall return to this point later.\n\nFor the moment let us extend the example of Figure 8.1 by considering the joint distribution over K variables given by $p(x_1, \\ldots, x_K)$ . By repeated application of the product rule of probability, this joint distribution can be written as a product of conditional distributions, one for each of the variables\n\n$$p(x_1, \\dots, x_K) = p(x_K | x_1, \\dots, x_{K-1}) \\dots p(x_2 | x_1) p(x_1).$$\n(8.3)\n\nFor a given choice of K, we can again represent this as a directed graph having K nodes, one for each conditional distribution on the right-hand side of (8.3), with each node having incoming links from all lower numbered nodes. We say that this graph is *fully connected* because there is a link between every pair of nodes.\n\nSo far, we have worked with completely general joint distributions, so that the decompositions, and their representations as fully connected graphs, will be applicable to any choice of distribution. As we shall see shortly, it is the *absence* of links in the graph that conveys interesting information about the properties of the class of distributions that the graph represents. 
Consider the graph shown in Figure 8.2. This is not a fully connected graph because, for instance, there is no link from $x_1$ to $x_2$ or from $x_3$ to $x_7$ .\n\nWe shall now go from this graph to the corresponding representation of the joint probability distribution written in terms of the product of a set of conditional distributions, one for each node in the graph. Each such conditional distribution will be conditioned only on the parents of the corresponding node in the graph. For instance, $x_5$ will be conditioned on $x_1$ and $x_3$ . The joint distribution of all 7 variables\n\n**Figure 8.2** Example of a directed acyclic graph describing the joint distribution over variables $x_1, \\ldots, x_7$ . The corresponding decomposition of the joint distribution is given by (8.4).",
"answer": "We begin from Eq (7.114: $\\simeq p(\\mathbf{t}|\\mathbf{w}^*)p(\\mathbf{w}^*|\\alpha)(2\\pi)^{M/2}|\\Sigma|^{1/2}.$).\n\n$$p(\\mathbf{t}|\\alpha) = p(\\mathbf{t}|\\mathbf{w}^*)p(\\mathbf{w}^*|\\alpha)(2\\pi)^{M/2}|\\mathbf{\\Sigma}|^{1/2}$$\n\n$$= \\left[\\prod_{n=1}^{N} p(t_n|x_n, \\mathbf{w})\\right] \\left[\\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})\\right] (2\\pi)^{M/2}|\\mathbf{\\Sigma}|^{1/2}\\Big|_{\\mathbf{w}=\\mathbf{w}^*}$$\n\n$$= \\left[\\prod_{n=1}^{N} p(t_n|x_n, \\mathbf{w})\\right] \\cdot \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\mathbf{A}) \\cdot (2\\pi)^{M/2}|\\mathbf{\\Sigma}|^{1/2}\\Big|_{\\mathbf{w}=\\mathbf{w}^*}$$\n\nWe further take logarithm for both sides.\n\n$$\\begin{split} \\ln p(\\mathbf{t}|\\alpha) &= \\left[ \\left. \\sum_{n=1}^{N} \\ln p(t_n|x_n, \\mathbf{w}) + \\ln \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\mathbf{A}) + \\frac{M}{2} \\ln 2\\pi + \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| \\right] \\right|_{\\mathbf{w} = \\mathbf{w}^*} \\\\ &= \\left[ \\left. \\sum_{n=1}^{N} \\left[ t_n \\ln y_n + (1 - t_n) \\ln (1 - y_n) \\right] - \\frac{1}{2} \\mathbf{w}^T \\mathbf{A} \\mathbf{w} - \\frac{1}{2} \\ln |\\mathbf{A}| + \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + const \\right] \\right|_{\\mathbf{w} = \\mathbf{w}^*} \\\\ &= \\left[ \\left. \\sum_{n=1}^{N} \\left[ t_n \\ln y_n + (1 - t_n) \\ln (1 - y_n) \\right] - \\frac{1}{2} \\mathbf{w}^T \\mathbf{A} \\mathbf{w} \\right] + \\left[ \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| - \\frac{1}{2} \\ln |\\mathbf{A}| + const \\right] \\right|_{\\mathbf{w} = \\mathbf{w}^*} \\end{split}$$\n\nUsing the Chain rule, we can obtain:\n\n$$\\left. \\frac{\\partial \\ln p(\\mathbf{t}|\\alpha)}{\\partial \\alpha_i} \\right|_{\\mathbf{w} = \\mathbf{w}^*} = \\frac{\\partial \\ln p(\\mathbf{t}|\\alpha)}{\\partial \\mathbf{w}} \\frac{\\partial \\mathbf{w}}{\\partial \\alpha_i} \\right|_{\\mathbf{w} = \\mathbf{w}^*}$$\n\nObserving Eq (7.109: $= \\sum_{n=1}^{N} \\left\\{ t_n \\ln y_n + (1 - t_n) \\ln(1 - y_n) \\right\\} - \\frac{1}{2} \\mathbf{w}^{\\mathrm{T}} \\mathbf{A} \\mathbf{w} + \\text{const} \\quad$), (7.110: $\\nabla \\ln p(\\mathbf{w}|\\mathbf{t}, \\boldsymbol{\\alpha}) = \\boldsymbol{\\Phi}^{\\mathrm{T}}(\\mathbf{t} - \\mathbf{y}) - \\mathbf{A}\\mathbf{w}$) and that (7.110: $\\nabla \\ln p(\\mathbf{w}|\\mathbf{t}, \\boldsymbol{\\alpha}) = \\boldsymbol{\\Phi}^{\\mathrm{T}}(\\mathbf{t} - \\mathbf{y}) - \\mathbf{A}\\mathbf{w}$) will equal 0 at $\\mathbf{w}^*$ , we can conclude that the first term on the right hand side of $\\ln p(\\mathbf{t}|\\alpha)$ will have zero derivative with respect to $\\mathbf{w}$ at $\\mathbf{w}^*$ . Therefore, we only need to focus on the second term:\n\n$$\\left. 
\\frac{\\partial \\ln p(\\mathbf{t}|\\alpha)}{\\partial \\alpha_i} \\right|_{\\mathbf{w} = \\mathbf{w}^*} = \\frac{\\partial}{\\partial \\alpha_i} \\left[ \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| - \\frac{1}{2} \\ln |\\mathbf{A}| \\right] \\bigg|_{\\mathbf{w} = \\mathbf{w}^*}$$\n\nIt is rather easy to obtain:\n\n$$\\frac{\\partial}{\\partial \\alpha_i} [-\\frac{1}{2} \\ln |\\mathbf{A}|] = -\\frac{1}{2} \\frac{\\partial}{\\partial \\alpha_i} \\left[ \\sum_i \\ln \\alpha_i^{-1} \\right] = \\frac{1}{2\\alpha_i}$$\n\nThen we follow the same procedure as in Prob.7.12, we can obtain:\n\n$$\\frac{\\partial}{\\partial \\alpha_i} \\left[ \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| \\right] = -\\frac{1}{2} \\Sigma_{ii}$$\n\nTherefore, we obtain:\n\n$$\\frac{\\partial \\ln p(\\mathbf{t}|\\alpha)}{\\partial \\alpha_i} = \\frac{1}{2\\alpha_i} - \\frac{1}{2} \\Sigma_{ii}$$\n\nNote: here I draw a different conclusion as the main text. I have also verified my result in another way. You can write the prior as the product of $\\mathcal{N}(w_i|0,\\alpha_i^{-1})$ instead of $\\mathcal{N}(\\mathbf{w}|\\mathbf{0},\\mathbf{A})$ . In this form, since we know that:\n\n$$\\frac{\\partial}{\\partial \\alpha_i} \\sum_{i=1}^M \\ln \\mathcal{N}(w_i|0,\\alpha_i^{-1}) = \\frac{\\partial}{\\partial \\alpha_i} (\\frac{1}{2} \\ln \\alpha_i - \\frac{\\alpha_i}{2} w_i^2) = \\frac{1}{2\\alpha_i} - \\frac{1}{2} (w_i^*)^2$$\n\nThe above expression can be used to replace the derivative of $-1/2\\mathbf{w}^T\\mathbf{A}\\mathbf{w}-1/2\\ln|\\mathbf{A}|$ . Since the derivative of the likelihood with respect to $\\alpha_i$ is not zero at $\\mathbf{w}^*$ , (7.115: $-\\frac{1}{2}(w_i^{\\star})^2 + \\frac{1}{2\\alpha_i} - \\frac{1}{2}\\Sigma_{ii} = 0.$) seems not right anyway.\n\n# 0.8 Graphical Models",
"answer_length": 4137
},
{
"chapter": 7,
"question_number": "7.2",
"difficulty": "easy",
"question_text": "Show that, if the 1 on the right-hand side of the constraint (7.5: $t_n\\left(\\mathbf{w}^{\\mathrm{T}}\\boldsymbol{\\phi}(\\mathbf{x}_n) + b\\right) \\geqslant 1, \\qquad n = 1, \\dots, N.$) is replaced by some arbitrary constant $\\gamma > 0$ , the solution for the maximum margin hyperplane is unchanged.",
"answer": "Suppose we have find $\\mathbf{w}_0$ and $b_0$ , which can let all points satisfy Eq (7.5: $t_n\\left(\\mathbf{w}^{\\mathrm{T}}\\boldsymbol{\\phi}(\\mathbf{x}_n) + b\\right) \\geqslant 1, \\qquad n = 1, \\dots, N.$) and simultaneously minimize Eq (7.3: $\\underset{\\mathbf{w},b}{\\operatorname{arg\\,max}} \\left\\{ \\frac{1}{\\|\\mathbf{w}\\|} \\min_{n} \\left[ t_n \\left( \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) + b \\right) \\right] \\right\\}$). This hyperlane decided by $\\mathbf{w}_0$ and $b_0$ is the optimal classification margin. Now if the constraint in Eq (7.5: $t_n\\left(\\mathbf{w}^{\\mathrm{T}}\\boldsymbol{\\phi}(\\mathbf{x}_n) + b\\right) \\geqslant 1, \\qquad n = 1, \\dots, N.$) becomes:\n\n$$t_n(\\mathbf{w}^T\\phi(\\mathbf{x}_n)+b) \\ge \\gamma$$\n\nWe can conclude that if we perform change of variables: $\\mathbf{w}_0 - \\gamma \\mathbf{w}_0$ and $b - \\gamma p$ , the constraint will still satisfy and Eq (7.3: $\\underset{\\mathbf{w},b}{\\operatorname{arg\\,max}} \\left\\{ \\frac{1}{\\|\\mathbf{w}\\|} \\min_{n} \\left[ t_n \\left( \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) + b \\right) \\right] \\right\\}$) will be minimize. In other words, if the right side of the constraint changes from 1 to $\\gamma$ , The new hyperlane decided by $\\gamma \\mathbf{w}_0$ and $\\gamma b_0$ is the optimal classification margin. However, the minimum distance from the points to the classification margin is still the same.",
"answer_length": 1413
},
{
"chapter": 7,
"question_number": "7.3",
"difficulty": "medium",
"question_text": "\\star)$ Show that, irrespective of the dimensionality of the data space, a data set consisting of just two data points, one from each class, is sufficient to determine the location of the maximum-margin hyperplane.",
"answer": "Suppose we have $\\mathbf{x}_1$ belongs to class one and we denote its target value $t_1 = 1$ , and similarly $\\mathbf{x}_2$ belongs to class two and we denote its target value $t_2 = -1$ . Since we only have two points, they must have $t_i \\cdot y(\\mathbf{x}_i) = 1$ as shown in Fig. 7.1. Therefore, we have an equality constrained optimization problem:\n\nminimize \n$$\\frac{1}{2}||\\mathbf{w}||^2$$\n s.t. $\\begin{cases} \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_1) + b = 1 \\\\ \\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_2) + b = -1 \\end{cases}$ \n\nThis is an convex optimization problem and it has been proved that global optimal exists.",
"answer_length": 641
},
{
"chapter": 7,
"question_number": "7.4",
"difficulty": "medium",
"question_text": "Show that the value $\\rho$ of the margin for the maximum-margin hyperplane is given by\n\n$$\\frac{1}{\\rho^2} = \\sum_{n=1}^{N} a_n \\tag{7.123}$$\n\nwhere $\\{a_n\\}$ are given by maximizing (7.10: $\\widetilde{L}(\\mathbf{a}) = \\sum_{n=1}^{N} a_n - \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{m=1}^{N} a_n a_m t_n t_m k(\\mathbf{x}_n, \\mathbf{x}_m)$) subject to the constraints (7.11: $a_n \\geqslant 0, \\qquad n = 1, \\dots, N,$) and (7.12: $\\sum_{n=1}^{N} a_n t_n = 0.$).",
"answer": "Since we know that\n\n$$\\rho = \\frac{1}{||\\mathbf{w}||}$$\n\nTherefore, we have:\n\n$$\\frac{1}{\\rho^2} = ||\\mathbf{w}||^2$$\n\nIn other words, we only need to prove that\n\n$$||\\mathbf{w}||^2 = \\sum_{n=1}^N a_n$$\n\nWhen we find th optimal solution, the second term on the right hand side of Eq (7.7: $L(\\mathbf{w}, b, \\mathbf{a}) = \\frac{1}{2} \\|\\mathbf{w}\\|^2 - \\sum_{n=1}^{N} a_n \\left\\{ t_n(\\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}_n) + b) - 1 \\right\\}$) vanishes. Based on Eq (7.8: $\\mathbf{w} = \\sum_{n=1}^{N} a_n t_n \\phi(\\mathbf{x}_n)$) and Eq (7.10: $\\widetilde{L}(\\mathbf{a}) = \\sum_{n=1}^{N} a_n - \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{m=1}^{N} a_n a_m t_n t_m k(\\mathbf{x}_n, \\mathbf{x}_m)$), we also observe that its dual is given by:\n\n$$\\tilde{L}(\\mathbf{a}) = \\sum_{n=1}^{N} a_n - \\frac{1}{2} ||\\mathbf{w}||^2$$\n\nTherefore, we have:\n\n$$\\frac{1}{2}||\\mathbf{w}||^2 = L(\\mathbf{a}) = \\tilde{L}(\\mathbf{a}) = \\sum_{n=1}^{N} a_n - \\frac{1}{2}||\\mathbf{w}||^2$$\n\nRearranging it, we will obtain what we are required.",
"answer_length": 1020
},
{
"chapter": 7,
"question_number": "7.5",
"difficulty": "medium",
"question_text": "\\star)$ Show that the values of $\\rho$ and $\\{a_n\\}$ in the previous exercise also satisfy\n\n$$\\frac{1}{\\rho^2} = 2\\widetilde{L}(\\mathbf{a}) \\tag{7.124}$$\n\nwhere $\\widetilde{L}(\\mathbf{a})$ is defined by (7.10: $\\widetilde{L}(\\mathbf{a}) = \\sum_{n=1}^{N} a_n - \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{m=1}^{N} a_n a_m t_n t_m k(\\mathbf{x}_n, \\mathbf{x}_m)$). Similarly, show that\n\n$$\\frac{1}{\\rho^2} = \\|\\mathbf{w}\\|^2. \\tag{7.125}$$",
"answer": "We have already proved this problem in the previous one.",
"answer_length": 56
},
{
"chapter": 7,
"question_number": "7.6",
"difficulty": "easy",
"question_text": "Consider the logistic regression model with a target variable $t \\in \\{-1, 1\\}$ . If we define $p(t = 1|y) = \\sigma(y)$ where $y(\\mathbf{x})$ is given by (7.1: $y(\\mathbf{x}) = \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}) + b$), show that the negative log likelihood, with the addition of a quadratic regularization term, takes the form (7.47: $\\sum_{n=1}^{N} E_{LR}(y_n t_n) + \\lambda \\|\\mathbf{w}\\|^2.$).",
"answer": "If the target variable can only choose from $\\{-1,1\\}$ , and we know that\n\n$$p(t=1|y) = \\sigma(y)$$\n\nWe can obtain:\n\n$$p(t = -1|y) = 1 - p(t = 1|y) = 1 - \\sigma(y) = \\sigma(-y)$$\n\nTherefore, combining these two situations, we can derive:\n\n$$p(t|y) = \\sigma(yt)$$\n\nConsequently, we can obtain the negative log likelihood:\n\n$$-\\ln p(\\mathbf{D}) = -\\ln \\prod_{n=1}^N \\sigma(y_n t_n) = -\\sum_{n=1}^N \\ln \\sigma(y_n t_n) = \\sum_{n=1}^N E_{LR}(y_n t_n)$$\n\nHere **D** represents the dataset, i.e., $\\mathbf{D} = \\{(\\mathbf{x}_n, t_n); n = 1, 2, ..., N\\}$ , and $E_{LR}(yt)$ is given by Eq (7.48: $E_{LR}(yt) = \\ln(1 + \\exp(-yt)).$). With the addition of a quadratic regularization, we obtain exactly Eq (7.47: $\\sum_{n=1}^{N} E_{LR}(y_n t_n) + \\lambda \\|\\mathbf{w}\\|^2.$).",
"answer_length": 769
},
{
"chapter": 7,
"question_number": "7.7",
"difficulty": "easy",
"question_text": "Consider the Lagrangian (7.56: $- \\sum_{n=1}^{N} a_n (\\epsilon + \\xi_n + y_n - t_n) - \\sum_{n=1}^{N} \\hat{a}_n (\\epsilon + \\hat{\\xi}_n - y_n + t_n). \\quad$) for the regression support vector machine. By setting the derivatives of the Lagrangian with respect to $\\mathbf{w}$ , b, $\\xi_n$ , and $\\hat{\\xi}_n$ to zero and then back substituting to eliminate the corresponding variables, show that the dual Lagrangian is given by (7.61: $-\\epsilon \\sum_{n=1}^{N} (a_n + \\widehat{a}_n) + \\sum_{n=1}^{N} (a_n - \\widehat{a}_n)t_n$).",
"answer": "The derivatives are easy to obtain. Our main task is to derive Eq (7.61: $-\\epsilon \\sum_{n=1}^{N} (a_n + \\widehat{a}_n) + \\sum_{n=1}^{N} (a_n - \\widehat{a}_n)t_n$)\n\nusing Eq (7.57)-(7.60).\n\n$$\\begin{split} L &= C \\sum_{n=1}^{N} (\\xi_{n} + \\widehat{\\xi}_{n}) + \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (\\mu_{n} \\xi_{n} + \\widehat{\\mu}_{n} \\widehat{\\xi}_{n}) \\\\ &- \\sum_{n=1}^{N} a_{n} (\\epsilon + \\xi_{n} + y_{n} - t_{n}) - \\sum_{n=1}^{N} \\widehat{a}_{n} (\\epsilon + \\widehat{\\xi}_{n} + y_{n} - t_{n}) \\\\ &= C \\sum_{n=1}^{N} (\\xi_{n} + \\widehat{\\xi}_{n}) + \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} + \\mu_{n}) \\xi_{n} - \\sum_{n=1}^{N} (\\widehat{a}_{n} + \\widehat{\\mu}_{n}) \\widehat{\\xi}_{n} \\\\ &- \\sum_{n=1}^{N} a_{n} (\\epsilon + y_{n} - t_{n}) - \\sum_{n=1}^{N} \\widehat{a}_{n} (\\epsilon + y_{n} - t_{n}) \\\\ &= C \\sum_{n=1}^{N} (\\xi_{n} + \\widehat{\\xi}_{n}) + \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} C \\xi_{n} - \\sum_{n=1}^{N} C \\widehat{\\xi}_{n} \\\\ &- \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) (y_{n} - t_{n}) \\\\ &= \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) (y_{n} - t_{n}) \\\\ &= \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) (\\mathbf{w}^{T} \\phi(\\mathbf{x}_{n}) + b - t_{n}) - \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon + \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) t_{n} \\\\ &= \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) \\mathbf{w}^{T} \\phi(\\mathbf{x}_{n}) - \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon + \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) t_{n} \\\\ &= \\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) \\mathbf{w}^{T} \\phi(\\mathbf{x}_{n}) - \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) t_{n} \\\\ &= -\\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon + \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) t_{n} \\\\ &= -\\frac{1}{2} ||\\mathbf{w}||^{2} - \\sum_{n=1}^{N} (a_{n} + \\widehat{a}_{n}) \\epsilon + \\sum_{n=1}^{N} (a_{n} - \\widehat{a}_{n}) t_{n} \\end{aligned}$$\n\nJust as required.",
"answer_length": 2163
},
{
"chapter": 7,
"question_number": "7.8",
"difficulty": "easy",
"question_text": "For the regression support vector machine considered in Section 7.1.4, show that all training data points for which $\\xi_n > 0$ will have $a_n = C$ , and similarly all points for which $\\hat{\\xi}_n > 0$ will have $\\hat{a}_n = C$ .",
"answer": "This obviously follows from the KKT condition, described in Eq (7.67: $(C - a_n)\\xi_n = 0$) and (7.68: $(C - \\widehat{a}_n)\\widehat{\\xi}_n = 0.$).",
"answer_length": 146
},
{
"chapter": 7,
"question_number": "7.9",
"difficulty": "easy",
"question_text": "Verify the results (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$) for the mean and covariance of the posterior distribution over weights in the regression RVM.",
"answer": "The prior is given by Eq (7.80: $p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})$).\n\n$$p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(0, \\alpha_i^{-1}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\mathbf{A}^{-1})$$\n\nWhere we have defined:\n\n$$\\mathbf{A} = diag(\\alpha_i)$$\n\nThe likelihood is given by Eq (7.79: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1}).$).\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1})$$\n$$= \\prod_{n=1}^{N} \\mathcal{N}(t_n|\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n), \\beta^{-1})$$\n$$= \\mathcal{N}(\\mathbf{t}|\\mathbf{\\Phi}\\mathbf{w}, \\beta^{-1}\\mathbf{I})$$\n\nWhere we have defined:\n\n$$\\mathbf{\\Phi} = [\\boldsymbol{\\phi}(\\mathbf{x}_1), \\boldsymbol{\\phi}(\\mathbf{x}_2), ..., \\boldsymbol{\\phi}(\\mathbf{x}_n)]^T$$\n\nOur definitions of $\\Phi$ and A as consistent with the main text. Therefore, according to Eq (2.113)-Eq (2.117: $\\Sigma = (\\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{A})^{-1}.$), we have:\n\n$$p(\\mathbf{w}|\\mathbf{t}, \\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) = \\mathcal{N}(\\mathbf{m}, \\boldsymbol{\\Sigma})$$\n\nWhere we have defined:\n\n$$\\mathbf{\\Sigma} = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi})^{-1}$$\n\nAnd\n\n$$\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}$$\n\nJust as required.\n\n## Problem 7.10&7.11 Solution\n\nIt is quite similar to the previous problem. We begin by writting down the prior:\n\n$$p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(0, \\alpha_i^{-1}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\mathbf{A}^{-1})$$\n\nThen we write down the likelihood:\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1})$$\n$$= \\prod_{n=1}^{N} \\mathcal{N}(t_n|\\mathbf{w}^T \\boldsymbol{\\phi}(\\mathbf{x}_n), \\beta^{-1})$$\n$$= \\mathcal{N}(\\mathbf{t}|\\boldsymbol{\\Phi}\\mathbf{w}, \\beta^{-1}\\mathbf{I})$$\n\nSince we know that:\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) = \\int p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\boldsymbol{\\beta}) p(\\mathbf{w}|\\boldsymbol{\\alpha}) d\\mathbf{w}$$\n\nFirst as required by Prob.7.10, we will solve it by completing the square. 
We begin by writing down the expression for $p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\beta)$ :\n\n$$p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) = \\int \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\mathbf{A}^{-1}) \\mathcal{N}(\\mathbf{t}|\\mathbf{\\Phi}\\mathbf{w}, \\boldsymbol{\\beta}^{-1}\\mathbf{I}) d\\mathbf{w}$$\n$$= (\\frac{\\beta}{2\\pi})^{N/2} \\cdot \\frac{1}{(2\\pi)^{M/2}} \\cdot \\prod_{i=1}^{M} \\alpha_i^{1/2} \\cdot \\int exp\\{-E(\\mathbf{w})\\} d\\mathbf{w}$$\n\nWhere we have defined:\n\n$$E(\\mathbf{w}) = \\frac{1}{2}\\mathbf{w}^T \\mathbf{A} \\mathbf{w} + \\frac{\\beta}{2}||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{w}||^2$$\n\nWe expand $E(\\mathbf{w})$ with respect to $\\mathbf{w}$ :\n\n$$E(\\mathbf{w}) = \\frac{1}{2} \\left\\{ \\mathbf{w}^T (\\mathbf{A} + \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi}) \\mathbf{w} - 2\\beta \\mathbf{t}^T (\\mathbf{\\Phi} \\mathbf{w}) + \\beta \\mathbf{t}^T \\mathbf{t} \\right\\}$$\n$$= \\frac{1}{2} \\left\\{ \\mathbf{w}^T \\mathbf{\\Sigma}^{-1} \\mathbf{w} - 2\\mathbf{m}^T \\mathbf{\\Sigma}^{-1} \\mathbf{w} + \\beta \\mathbf{t}^T \\mathbf{t} \\right\\}$$\n$$= \\frac{1}{2} \\left\\{ (\\mathbf{w} - \\mathbf{m})^T \\mathbf{\\Sigma}^{-1} (\\mathbf{w} - \\mathbf{m}) + \\beta \\mathbf{t}^T \\mathbf{t} - \\mathbf{m}^T \\mathbf{\\Sigma}^{-1} \\mathbf{m} \\right\\}$$\n\nWhere we have used Eq (7.82: $\\mathbf{m} = \\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and Eq (7.83: $\\Sigma = (\\mathbf{A} + \\beta \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi})^{-1}$). Substituting $E(\\mathbf{w})$ into the integral, we will obtain:\n\n$$\\begin{split} p(\\mathbf{t}|\\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) &= (\\frac{\\beta}{2\\pi})^{N/2} \\cdot \\frac{1}{(2\\pi)^{M/2}} \\cdot \\prod_{i=1}^{M} \\alpha_i^{1/2} \\cdot \\int exp\\{-E(\\mathbf{w})\\} \\, d\\mathbf{w} \\\\ &= (\\frac{\\beta}{2\\pi})^{N/2} \\cdot \\frac{1}{(2\\pi)^{M/2}} \\cdot \\prod_{i=1}^{M} \\alpha_i^{1/2} \\cdot (2\\pi)^{M/2} \\cdot |\\mathbf{\\Sigma}|^{1/2} exp\\left\\{-\\frac{1}{2}(\\boldsymbol{\\beta} \\mathbf{t}^T \\mathbf{t} - \\mathbf{m}^T \\mathbf{\\Sigma}^{-1} \\mathbf{m})\\right\\} \\\\ &= (\\frac{\\beta}{2\\pi})^{N/2} \\cdot |\\mathbf{\\Sigma}|^{1/2} \\cdot \\prod_{i=1}^{M} \\alpha_i^{1/2} \\cdot exp\\left\\{-E(\\mathbf{t})\\right\\} \\end{split}$$\n\nWe further expand $E(\\mathbf{t})$ :\n\n$$\\begin{split} E(\\mathbf{t}) &= \\frac{1}{2}(\\beta \\mathbf{t}^T \\mathbf{t} - \\mathbf{m}^T \\mathbf{\\Sigma}^{-1} \\mathbf{m}) \\\\ &= \\frac{1}{2}(\\beta \\mathbf{t}^T \\mathbf{t} - (\\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t})^T \\mathbf{\\Sigma}^{-1}(\\beta \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t})) \\\\ &= \\frac{1}{2}(\\beta \\mathbf{t}^T \\mathbf{t} - \\beta^2 \\mathbf{t}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Sigma}^{-1} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}) \\\\ &= \\frac{1}{2}(\\beta \\mathbf{t}^T \\mathbf{t} - \\beta^2 \\mathbf{t}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T \\mathbf{t}) \\\\ &= \\frac{1}{2} \\mathbf{t}^T (\\beta \\mathbf{I} - \\beta^2 \\mathbf{\\Phi} \\mathbf{\\Sigma} \\mathbf{\\Phi}^T) \\mathbf{t} \\\\ &= \\frac{1}{2} \\mathbf{t}^T \\left[ \\beta \\mathbf{I} - \\beta \\mathbf{\\Phi} (\\mathbf{A} + \\beta \\mathbf{\\Phi}^T \\mathbf{\\Phi})^{-1} \\mathbf{\\Phi}^T \\beta \\right] \\mathbf{t} \\\\ &= \\frac{1}{2} \\mathbf{t}^T (\\beta^{-1} \\mathbf{I} + \\mathbf{\\Phi} \\mathbf{A}^{-1} \\mathbf{\\Phi}^T)^{-1} \\mathbf{t} = \\frac{1}{2} \\mathbf{t}^T \\mathbf{C}^{-1} \\mathbf{t} \\end{split}$$\n\nNote that in the last step we have used the matrix identity Eq (C.7). Therefore, since we know that the pdf is Gaussian and the exponential term is given by $E(\\mathbf{t})$ , we can easily write down Eq (7.85: $= -\\frac{1}{2} \\left\\{ N \\ln(2\\pi) + \\ln |\\mathbf{C}| + \\mathbf{t}^{\\mathrm{T}} \\mathbf{C}^{-1} \\mathbf{t} \\right\\}$) by taking the normalization constants into account.\n\nWhat's more, as required by Prob.7.11, the evaluation of the integral can also be performed directly using Eq(2.113)-Eq(2.117).
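\n\nAs an optional numerical sanity check (not part of the original solution), the Woodbury step used above, $(\\beta^{-1}\\mathbf{I} + \\mathbf{\\Phi}\\mathbf{A}^{-1}\\mathbf{\\Phi}^T)^{-1} = \\beta\\mathbf{I} - \\beta^2\\mathbf{\\Phi}\\mathbf{\\Sigma}\\mathbf{\\Phi}^T$ with $\\mathbf{\\Sigma} = (\\mathbf{A} + \\beta\\mathbf{\\Phi}^T\\mathbf{\\Phi})^{-1}$ , can be verified with random matrices (the sizes and values below are arbitrary):\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN, M, beta = 8, 4, 2.5\nPhi = rng.normal(size=(N, M))\nA = np.diag(rng.uniform(0.5, 2.0, size=M))              # A = diag(alpha_i)\n\nSigma = np.linalg.inv(A + beta * Phi.T @ Phi)           # Eq (7.83)\nC = np.eye(N) / beta + Phi @ np.linalg.inv(A) @ Phi.T   # Eq (7.86)\n\nlhs = np.linalg.inv(C)\nrhs = beta * np.eye(N) - beta**2 * Phi @ Sigma @ Phi.T  # matrix identity (C.7)\nassert np.allclose(lhs, rhs)\n```",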
"answer_length": 6386
}
]
},
{
"chapter_number": 8,
"total_questions": 21,
"difficulty_breakdown": {
"easy": 13,
"medium": 4,
"hard": 0,
"unknown": 12
},
"questions": [
{
"chapter": 8,
"question_number": "8.1",
"difficulty": "easy",
"question_text": "By marginalizing out the variables in order, show that the representation (8.5: $p(\\mathbf{x}) = \\prod_{k=1}^{K} p(x_k | \\mathbf{pa}_k)$) for the joint distribution of a directed graph is correctly normalized, provided each of the conditional distributions is normalized.",
"answer": "We are required to prove:\n\n$$\\int_{\\mathbf{x}} p(\\mathbf{x}) d\\mathbf{x} = \\int_{\\mathbf{x}} \\prod_{k=1}^{K} p(x_k | pa_k) d\\mathbf{x} = 1$$\n\nHere we adopt the same assumption as in the main text: No arrows lead from a higher numbered node to a According to Eq(8.5), we can write:\n\n$$\\int_{\\mathbf{x}} p(\\mathbf{x}) d\\mathbf{x} = \\int_{\\mathbf{x}} \\prod_{k=1}^{K} p(x_k | pa_k) d\\mathbf{x} \n= \\int_{\\mathbf{x}} p(x_K | pa_K) \\prod_{k=1}^{K-1} p(x_k | pa_k) d\\mathbf{x} \n= \\int_{[x_1, x_2, ..., x_{K-1}]} \\int_{x_K} \\left[ p(x_K | pa_K) \\prod_{k=1}^{K-1} p(x_k | pa_k) dx_K \\right] dx_1 dx_2, ... dx_{K-1} \n= \\int_{[x_1, x_2, ..., x_{K-1}]} \\left[ \\prod_{k=1}^{K-1} p(x_k | pa_k) \\int_{x_K} p(x_K | pa_K) dx_K \\right] dx_1 dx_2, ... dx_{K-1} \n= \\int_{[x_1, x_2, ..., x_{K-1}]} \\left[ \\prod_{k=1}^{K-1} p(x_k | pa_k) \\right] dx_1 dx_2, ... dx_{K-1} \n= \\int_{[x_1, x_2, ..., x_{K-1}]} \\prod_{k=1}^{K-1} p(x_k | pa_k) dx_1 dx_2, ... dx_{K-1}$$\n\nNote that from the third line to the fourth line, we have used the fact that $x_1, x_2, ... x_{K-1}$ do not depend on $x_K$ , and thus the product from k = 1 to K-1 can be moved to the outside of the integral with respect to $x_K$ , and that we have used the fact that the conditional probability is correctly normalized from the fourth line to the fifth line. The aforementioned procedure will be repeated for K times until all the variables have been integrated out.",
"answer_length": 1413
},
{
"chapter": 8,
"question_number": "8.10",
"difficulty": "easy",
"question_text": "Consider the directed graph shown in Figure 8.54 in which none of the variables is observed. Show that $a \\perp \\!\\!\\!\\perp b \\mid \\emptyset$ . Suppose we now observe the variable d. Show that in general $a \\perp \\!\\!\\!\\!\\perp b \\mid d$ .",
"answer": "By examining Fig.8.54, we can obtain:\n\n$$p(a,b,c,d) = p(a)p(b)p(c|a,b)p(d|c)$$\n\nNext we performing summation on both sides with respect to c and d, we can obtain:\n\n$$p(a,b) = p(a)p(b) \\sum_{c} \\sum_{d} p(c|a,b)p(d|c)$$\n\n$$= p(a)p(b) \\sum_{c} p(c|a,b) \\left[ \\sum_{d} p(d|c) \\right]$$\n\n$$= p(a)p(b) \\sum_{c} p(c|a,b) \\times 1$$\n\n$$= p(a)p(b) \\times 1$$\n\n$$= p(a)p(b)$$\n\nIf we want to prove that a and b are dependent conditioned on d, we only need to prove:\n\n$$p(a,b|d) = p(a|d)p(b|d)$$\n\nWe multiply both sides by p(d) and use Bayes' Theorem, yielding:\n\n$$p(a,b,d) = p(a)p(b|d) \\tag{*}$$\n\nIn other words, we can equivalently prove the expression above instead. Recall that we have:\n\n$$p(a,b,c,d) = p(a) p(b) p(c|a,b) p(d|c)$$\n\nWe perform summation on both sides with respect to c, yielding:\n\n$$p(a,b,d) = p(a)p(b)\\sum_{c}p(c|a,b)p(d|c)$$\n\nCombining with (\\*), we only need to prove:\n\n$$p(b|d) = p(b) \\sum_{c} p(c|a,b) p(d|c)$$\n\nHowever, we can see that the value of the right hand side depends on a,b and d, while the left hand side only depends on b and d. In general, this expression will not hold, and, thus, a and b are not dependent conditioned on d.",
"answer_length": 1154
},
{
"chapter": 8,
"question_number": "8.11",
"difficulty": "medium",
"question_text": "Consider the example of the car fuel system shown in Figure 8.21, and suppose that instead of observing the state of the fuel gauge G directly, the gauge is seen by the driver D who reports to us the reading on the gauge. This report is either that the gauge shows full D=1 or that it shows empty D=0. Our driver is a bit unreliable, as expressed through the following probabilities\n\n$$p(D=1|G=1) = 0.9 (8.105)$$\n\n$$p(D=0|G=0) = 0.9. (8.106)$$\n\nSuppose that the driver tells us that the fuel gauge shows empty, in other words that we observe D=0. Evaluate the probability that the tank is empty given only this observation. Similarly, evaluate the corresponding probability given also the observation that the battery is flat, and note that this second probability is lower. Discuss the intuition behind this result, and relate the result to Figure 8.54.",
"answer": "This problem is quite straightforward, but it needs some patience. According to the Bayes' Theorem, we have:\n\n$$p(F=0|D=0) = \\frac{p(D=0|F=0)p(F=0)}{p(D=0)} \\tag{*}$$\n\nWe will calculate each of the term on the right hand side. Let's begin from the numerator p(D = 0). According to the sum rule, we have:\n\n$$p(D=0) = p(D=0,G=0) + p(D=0,G=1)$$\n\n$$= p(D=0|G=0)p(G=0) + p(D=0|G=1)p(G=1)$$\n\n$$= 0.9 \\times 0.315 + (1-0.9) \\times (1-0.315)$$\n\n$$= 0.352$$\n\nWhere we have used Eq(8.30), Eq(8.105) and Eq(8.106). Note that the second term in the denominator, i.e., p(F=0), equals 0.1, which can be easily derived from the main test above Eq(8.30). We now only need to calculate p(D=0|F=0). Similarly, according to the sum rule, we have:\n\n$$\\begin{split} p(D=0|F=0) &= \\sum_{G=0,1} p(D=0,G|F=0) \\\\ &= \\sum_{G=0,1} p(D=0|G,F=0) \\, p(G|F=0) \\\\ &= \\sum_{G=0,1} p(D=0|G) \\, p(G|F=0) \\\\ &= 0.9 \\times 0.81 + (1-0.9) \\times (1-0.81) \\\\ &= 0.748 \\end{split}$$\n\nSeveral clarifications must be made here. First, from the second line to the third line, we simply eliminate the dependence on F=0 because we know that D only depends on G according to Eq(8.105) and Eq(8.106). Second, from the third line to the fourth line, we have used Eq(8.31), Eq(8.105) and Eq(8.106). Now, we substitute all of them back to (\\*), yielding:\n\n$$p(F=0|D=0) = \\frac{p(D=0|F=0)p(F=0)}{p(D=0)} = \\frac{0.748 \\times 0.1}{0.352} = 0.2125$$\n\nNext, we are required to calculate the probability conditioned on both D = 0 and B = 0. Similarly, we can write:\n\n$$\\begin{split} p(F=0|D=0,B=0) &= \\frac{p(D=0,B=0,F=0)}{p(D=0,B=0)} \\\\ &= \\frac{\\sum_{G} p(D=0,B=0,F=0,G)}{\\sum_{G} p(D=0,B=0,G)} \\\\ &= \\frac{\\sum_{G} p(B=0,F=0,G) p(D=0|B=0,F=0,G)}{\\sum_{G} p(B=0,G) p(D=0|B=0,G)} \\\\ &= \\frac{\\sum_{G} p(B=0,F=0,G) p(D=0|G)}{\\sum_{G} p(B=0,G) p(D=0|G)} \\quad (**) \\end{split}$$\n\nWe need to calculate p(B = 0, F = 0, G) and p(B = 0, G), where G = 0, 1.\n\nWe begin by calculating p(B = 0, F = 0, G = 0):\n\n$$p(B=0,F=0,G=0) = p(G=0|B=0,F=0) \\times p(B=0,F=0)$$\n\n$$= p(G=0|B=0,F=0) \\times p(B=0) \\times p(F=0)$$\n\n$$= (1-0.1) \\times (1-0.9) \\times (1-0.9)$$\n\n$$= 0.009$$\n\nSimilarly, we can obtain p(B = 0, F = 0, G = 1) = 0.001. Next we calculate p(B = 0, G):\n\n$$\\begin{split} p(B=0,G=0) &= \\sum_{F=0,1} p(B=0,G=0,F) \\\\ &= \\sum_{F=0,1} p(G=0|B=0,F) \\times p(B=0,F) \\\\ &= \\sum_{F=0,1} p(G=0|B=0,F) \\times p(B=0) \\times p(F) \\\\ &= (1-0.1) \\times (1-0.9) \\times (1-0.9) + (1-0.2) \\times (1-0.9) \\times 0.9 \\\\ &= 0.081 \\end{split}$$\n\nSimilarly, we can obtain p(B = 0, G = 1) = 0.019. We substitute them back into (\\*\\*), yielding:\n\n$$\\begin{split} p(F=0|D=0,B=0) &= \\frac{\\sum_G p(B=0,F=0,G) \\, p(D=0|G)}{\\sum_G p(B=0,G) \\, p(D=0|G)} \\\\ &= \\frac{0.009 \\times 0.9 + 0.001 \\times (1-0.9)}{0.081 \\times 0.9 + 0.019 \\times (1-0.9)} \\\\ &= 0.1096 \\end{split}$$\n\nJust as required. The intuition behind this result coincides with the common sense. Moreover, by analogy to Fig.8.54, the node a and b in Fig.8.54 represents B and F in our case. Node c represents G, while node d represents D. You can use d-separation criterion to verify the conditional properties.",
"answer_length": 3089
},
{
"chapter": 8,
"question_number": "8.12",
"difficulty": "easy",
"question_text": "Show that there are $2^{M(M-1)/2}$ distinct undirected graphs over a set of M distinct random variables. Draw the 8 possibilities for the case of M=3.",
"answer": "An intuitive solution is that we construct a matrix $\\mathbf{A}$ with size of $M \\times M$ . If there is a link from node i to node j, the entry on the i-th row and j-th column of matrix $\\mathbf{A}$ , i.e., $A_{i,j}$ , will equal to 1. Otherwise, it will equal to 0. Since the graph is undirected, the matrix $\\mathbf{A}$ will be symmetric. What's more, the element on the diagonal is 0 by definition. For a undirected graph, we can use a matrix $\\mathbf{A}$ to represent it. It is also a one-to-one mapping.\n\nIn other words, we equivalently count the number of possible matrix **A** satisfying the following criteria: (i) each of the entry is either 0 or 1, (ii) it is symmetric, and (iii) all of the entries on the diagonal are already determined (i.e., they all equal 0).\n\nUsing the property of symmetry, we only need to count the free variables on the lower triangle of the matrix. In the first column, there are (M-1) free variables. In the second column, there are (M-2) free variables. Therefore, the total free variables are given by:\n\n$$(M-1)+(M-2)+...+0=\\frac{M(M-1)}{2}$$\n\nEach value of these free variables has two choices, i.e., 1 or 0. Therefore, the total number of such matrix is $2^{M(M-1)/2}$ . In the case of M=3, there are 8 possible undirected graphs:\n\n\n\nFigure 3: the undirected graph when M = 3",
"answer_length": 1357
},
{
"chapter": 8,
"question_number": "8.13",
"difficulty": "easy",
"question_text": "Consider the use of iterated conditional modes (ICM) to minimize the energy function given by (8.42: $E(\\mathbf{x}, \\mathbf{y}) = h \\sum_{i} x_i - \\beta \\sum_{\\{i,j\\}} x_i x_j - \\eta \\sum_{i} x_i y_i$). Write down an expression for the difference in the values of the energy associated with the two states of a particular variable $x_j$ , with all other variables held fixed, and show that it depends only on quantities that are local to $x_j$ in the graph.",
"answer": "It is straightforward. Suppose that $x_k$ is the target variable whose state may be $\\{-1.1\\}$ while all other variables are fixed. According to Eq (8.42: $E(\\mathbf{x}, \\mathbf{y}) = h \\sum_{i} x_i - \\beta \\sum_{\\{i,j\\}} x_i x_j - \\eta \\sum_{i} x_i y_i$), we can obtain:\n\n$$E(\\mathbf{x}, \\mathbf{y}) = h \\sum_{i \\neq k} x_i - \\beta \\sum_{i,j \\neq k} x_i x_j - \\eta \\sum_{i \\neq k} x_i y_i$$\n$$+ h x_k - \\beta \\sum_{m} x_k x_m - \\eta x_k y_k$$\n\nNote that we write down the dependence of $E(\\mathbf{x}, \\mathbf{y})$ on $x_k$ explicitly, which is expressed via the second line. Moreover, the $x_i x_j$ term in the first line doesn't include the pairs $\\{x_i, x_j\\}$ , which one of them is $x_k$ . These terms are considered by $x_k x_m$ in the secone line. To be more specific, here $x_m$ represents the neighbor of $x_k$ . Noticing that the first line doesn't depend on $x_k$ , we can obtain:\n\n$$E(\\mathbf{x}, \\mathbf{y})|_{x_k = 1} - E(\\mathbf{x}, \\mathbf{y})|_{x_k = -1} = 2h - 2\\beta \\sum_{\\mathbf{x}} x_m - 2\\eta y_k$$\n\nObviously, the difference depends locally on $x_k$ , implied by h, the neighbors $x_m$ and its observed value $y_k$ .",
"answer_length": 1162
},
{
"chapter": 8,
"question_number": "8.14",
"difficulty": "easy",
"question_text": "Consider a particular case of the energy function given by (8.42: $E(\\mathbf{x}, \\mathbf{y}) = h \\sum_{i} x_i - \\beta \\sum_{\\{i,j\\}} x_i x_j - \\eta \\sum_{i} x_i y_i$) in which the coefficients $\\beta = h = 0$ . Show that the most probable configuration of the latent variables is given by $x_i = y_i$ for all i.",
"answer": "It is quite obvious. When h = 0, $\\beta = 0$ , the energy function reduces to\n\n$$E(\\mathbf{x}, \\mathbf{y}) = -\\eta \\sum_{i} x_i y_i$$\n\nIf there exists some index j which satisfies $x_j \\neq y_j$ , considering that $x_j, y_j \\in \\{-1.1\\}$ , then $x_jy_j$ will equal to -1. By changing the sign of $x_j$ , we can always increase the value of $x_jy_j$ from -1 to 1, and, thus, decrease the energy function $E(\\mathbf{x}, \\mathbf{y})$ .\n\nTherefore, given the observed binary pixels $y_i \\in \\{-1.1\\}$ , where i = 1, 2, ..., D, in order to obtain the minimum of energy, the optimal choice for $x_i$ is to set it equal to $y_i$ .",
"answer_length": 636
},
{
"chapter": 8,
"question_number": "8.15",
"difficulty": "medium",
"question_text": "Show that the joint distribution $p(x_{n-1}, x_n)$ for two neighbouring nodes in the graph shown in Figure 8.38 is given by an expression of the form (8.58: $p(x_{n-1}, x_n) = \\frac{1}{Z} \\mu_{\\alpha}(x_{n-1}) \\psi_{n-1,n}(x_{n-1}, x_n) \\mu_{\\beta}(x_n).$).",
"answer": "This problem can be solved by analogy to Eq (8.49: $p(\\mathbf{x}) = \\frac{1}{Z} \\psi_{1,2}(x_1, x_2) \\psi_{2,3}(x_2, x_3) \\cdots \\psi_{N-1,N}(x_{N-1}, x_N).$) - Eq(8.54). We begin by noticing:\n\n$$p(x_{n-1},x_n) = \\sum_{x_1} ... \\sum_{x_{n-2}} \\sum_{x_{n+1}} ... \\sum_{x_N} p(\\mathbf{x})$$\n\nWe also have:\n\n$$p(\\mathbf{x}) = \\frac{1}{Z} \\psi_{1,2}(x_1, x_2) \\psi_{2,3}(x_2, x_3) \\dots \\psi_{N-1,N}(x_{N-1}, x_N)$$\n\nBy analogy to Eq(8.52), we can obtain:\n\n$$p(x_{n-1},x_n) = \\frac{1}{Z} \\left[ \\sum_{x_{n-2}} \\psi_{n-2,n-1}(x_{n-2},x_{n-1}) \\dots \\left[ \\sum_{x_2} \\psi_{2,3}(x_2,x_3) \\left[ \\sum_{x_1} \\psi_{1,2}(x_1,x_2) \\right] \\right] \\dots \\right] \\\\ \\times \\psi_{n-1,n}(x_{n-1,x_n}) \\\\ \\times \\left[ \\sum_{x_{n+1}} \\psi_{n,n+1}(x_n,x_{n+1}) \\dots \\left[ \\sum_{x_N} \\psi_{N-1,N}(x_{N-1},x_N) \\right] \\dots \\right] \\\\ = \\frac{1}{Z} \\times \\mu_{\\alpha}(x_{n-1}) \\times \\psi_{n-1,n}(x_{n-1},x_n) \\times \\mu_{\\beta}(x_n)$$\n\nJust as required.",
"answer_length": 939
},
{
"chapter": 8,
"question_number": "8.16",
"difficulty": "medium",
"question_text": "Consider the inference problem of evaluating $p(\\mathbf{x}_n|\\mathbf{x}_N)$ for the graph shown in Figure 8.38, for all nodes $n\\in\\{1,\\ldots,N-1\\}$ . Show that the message passing algorithm discussed in Section 8.4.1 can be used to solve this efficiently, and discuss which messages are modified and in what way.",
"answer": "We can simply obtain $p(x_N)$ using Eq(8.52) and Eq(8.54):\n\n$$p(x_N) = \\frac{1}{Z} \\mu_{\\alpha}(x_N) \\tag{*}$$\n\nAccording to Bayes' Theorem, we have:\n\n$$p(x_n|x_N) = \\frac{p(x_n, x_N)}{p(x_N)}$$\n\nTherefore, now we only need to derive an expression for $p(x_n, x_N)$ , where n = 1, 2, ..., N - 1. We follow the same procedure as in the previous problem. Since we know that:\n\n$$p(x_n, x_N) = \\sum_{x_1} ... \\sum_{x_{n-1}} \\sum_{x_{n+1}} ... \\sum_{x_{N-1}} p(\\mathbf{x})$$\n\nWe can obtain:\n\n$$p(x_{n},x_{N}) = \\frac{1}{Z} \\left[ \\sum_{x_{n-1}} \\psi_{n-1,n}(x_{n-1},x_{n}) \\dots \\left[ \\sum_{x_{2}} \\psi_{2,3}(x_{2},x_{3}) \\left[ \\sum_{x_{1}} \\psi_{1,2}(x_{1},x_{2}) \\right] \\right] \\dots \\right] \\times \\left[ \\sum_{x_{n+1}} \\psi_{n,n+1}(x_{n},x_{n+1}) \\dots \\left[ \\sum_{x_{N-1}} \\psi_{N-2,N-1}(x_{N-2},x_{N-1}) \\psi_{N-1,N}(x_{N-1},x_{N}) \\right] \\dots \\right]$$\n\nNote that in the second line, the summation term with respect to $x_{N-1}$ is the product of $\\psi_{N-2,N-1}(x_{N-2},x_{N-1})$ and $\\psi_{N-1,N}(x_{N-1},x_N)$ . So here we can actually draw an undirected graph with N-1 nodes, and adopt the proposed algorithm to solve $p(x_n,x_N)$ . If we use $x_n^{\\star}$ to represent the new node, then the joint distribution can be written as:\n\n$$p(\\mathbf{x}^{\\star}) = \\frac{1}{Z^{\\star}} \\psi_{1,2}^{\\star}(x_{1}^{\\star}, x_{2}^{\\star}) \\psi_{2,3}^{\\star}(x_{2}^{\\star}, x_{3}^{\\star}) \\dots \\psi_{N-2, N-1}^{\\star}(x_{N-2}^{\\star}, x_{N-1}^{\\star})$$\n\nWhere $\\psi_{n,n+1}^{\\star}(x_n^{\\star},x_{n+1}^{\\star})$ is defined as:\n\n$$\\psi_{n,n+1}^{\\star}(x_{n}^{\\star},x_{n+1}^{\\star}) = \\left\\{ \\begin{array}{ll} \\psi_{n,n+1}(x_{n},x_{n+1}), & n=1,2,...,N-3 \\\\ \\psi_{N-2,N-1}(x_{N-2},x_{N-1})\\psi_{N-1,N}(x_{N-1},x_{N}), & n=N-2 \\end{array} \\right.$$\n\nIn other words, we have combined the original node $x_{N-1}$ and $x_N$ . Moreover, we have the relationship:\n\n$$p(x_n, x_N) = p(x_n^*) = \\frac{1}{Z^*} \\mu_\\alpha^*(x_n^*) \\mu_\\beta^*(x_n^*) \\quad n = 1, 2, ..., N-1$$\n\nBy adopting the proposed algorithm to the new undirected graph, $p(x_n^*)$ can be easily evaluated, and so is $p(x_n, x_N)$ .",
"answer_length": 2112
},
{
"chapter": 8,
"question_number": "8.17",
"difficulty": "medium",
"question_text": "Consider a graph of the form shown in Figure 8.38 having N=5 nodes, in which nodes $x_3$ and $x_5$ are observed. Use d-separation to show that $x_2 \\perp \\!\\!\\! \\perp x_5 \\mid x_3$ . Show that if the message passing algorithm of Section 8.4.1 is applied to the evaluation of $p(x_2|x_3,x_5)$ , the result will be independent of the value of $x_5$ .",
"answer": "It is straightforward to see that for every path connecting node $x_2$ and $x_5$ in Fig.8.38, it must pass through node $x_3$ . Therefore, all paths are blocked and the conditional property holds. For more details, you should read section 8.3.1. According to Bayes' Theorem, we can obtain:\n\n$$p(x_2|x_3,x_5) = \\frac{p(x_2,x_3,x_5)}{p(x_2)}$$\n\nUsing the proposed algorithm in section 8.4.1, we can obtain:\n\n$$p(x_{2}|x_{3},x_{5}) = \\frac{p(x_{2},x_{3},x_{5})}{p(x_{3},x_{5})} = \\frac{\\sum_{x_{1}} \\sum_{x_{2}} \\sum_{x_{4}} p(\\mathbf{x})}{\\sum_{x_{1}} \\sum_{x_{2}} \\sum_{x_{4}} p(\\mathbf{x})}$$\n\n$$= \\frac{\\sum_{x_{1}} \\sum_{x_{4}} \\psi_{1,2} \\psi_{2,3} \\psi_{3,4} \\psi_{4,5}}{\\sum_{x_{1}} \\sum_{x_{2}} \\sum_{x_{4}} \\psi_{1,2} \\psi_{2,3} \\psi_{3,4} \\psi_{4,5}}$$\n\n$$= \\frac{\\left(\\sum_{x_{1}} \\psi_{1,2}\\right) \\cdot \\psi_{2,3} \\cdot \\left(\\sum_{x_{4}} \\psi_{3,4} \\psi_{4,5}\\right)}{\\sum_{x_{2}} \\left[\\left(\\sum_{x_{1}} \\psi_{1,2}\\right) \\cdot \\psi_{2,3}\\right] \\cdot \\left(\\sum_{x_{4}} \\psi_{3,4} \\psi_{4,5}\\right)}$$\n\n$$= \\frac{\\left(\\sum_{x_{1}} \\psi_{1,2}\\right) \\cdot \\psi_{2,3}}{\\sum_{x_{2}} \\left[\\left(\\sum_{x_{1}} \\psi_{1,2}\\right) \\psi_{2,3}\\right]}$$\n\nIt is obvious that the right hand side doesn't depend on $x_5$ .",
"answer_length": 1232
},
{
"chapter": 8,
"question_number": "8.18",
"difficulty": "medium",
"question_text": "Show that a distribution represented by a directed tree can trivially be written as an equivalent distribution over the corresponding undirected tree. Also show that a distribution expressed as an undirected tree can, by suitable normalization of the clique potentials, be written as a directed tree. Calculate the number of distinct directed trees that can be constructed from a given undirected tree.",
"answer": "First, the distribution represented by a directed tree can be trivially be written as an equivalent distribution over an undirected tree by moralization. You can find more details in section 8.4.2.\n\nAlternatively, now we want to represent a distribution, which is given by a directed graph, via a directed graph. For example, the distribution defined by the undirected tree in Fig.4 can be written as:\n\n$$p(\\mathbf{x}) = \\frac{1}{Z} \\psi_{1,3}(x_1, x_3) \\, \\psi_{2,3}(x_2, x_3) \\, \\psi_{3,4}(x_3, x_4) \\, \\psi_{4,5}(x_4, x_5)$$\n\nWe simply choose $x_4$ as the root and the corresponding directed tree is well defined by working outwards. In this case, the distribution defined by the directed tree is:\n\n$$p(\\mathbf{x}) = p(x_4) p(x_5|x_4) p(x_3|x_4) p(x_1|x_3) p(x_2|x_3)$$\n\nThus it is not difficult to change an undirected tree to a directed on if performing:\n\n$$p(x_4)p(x_5|x_4) \\propto \\psi_{5,4}, p(x_3|x_4) \\propto \\psi_{3,4}, p(x_2|x_3) \\propto \\psi_{2,3}, p(x_1|x_3) \\propto \\psi_{1,3},$$\n\n\n\nFigure 4: Example of changing an undirected tree to a directed one $x_i$ \n\nThe symbol $\\propto$ is used to represent a normalization term, which is used to guarantee the integral of PDF equal to 1. In summary, in the particular case of an undirected tree, there is only one path between any pair of nodes, and thus the maximal clique is given by a pair of two nodes in an undirected tree. This is because if we choose any three nodes $x_1, x_2, x_3$ , according to the definition there cannot exist a loop. Otherwise there are two paths between $x_1$ and $x_3$ : (i) $x_1 - > x_3$ and (ii) $x_1 - > x_3 - > x_3$ . In the directed tree, each node\n\nonly depends on only one node (except the root), i.e., its parent. Thus we can easily change a undirected tree to a directed one by matching the potential function with the corresponding conditional PDF, as shown in the example.\n\nMoreover, we can choose any node in the undirected tree to be the root and then work outwards to obtain a directed tree. Therefore, in an undirected tree with n nodes, there is n corresponding directed trees in total.\n\n## Problem 8.19-8.29 Solution \n\nI am quite confused by the deduction in Eq(8.66). I do not understand the sum-product algorithm and the max-sum algorithm very well.\n\n## 0.9 Mixture Models and EM",
"answer_length": 2330
},
{
"chapter": 8,
"question_number": "8.2",
"difficulty": "easy",
"question_text": "Show that the property of there being no directed cycles in a directed graph follows from the statement that there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node.\n\n| T 11 00 7 | | 10 4 91 40 | | | |\n|-----------|------------|--------------|------------|--------|--------------|\n| Table 8.2 | i ne ioint | distribution | over three | binarv | / variables. |\n\n| a | b | c | p(a,b,c) |\n|---|---|---|----------|\n| 0 | 0 | 0 | 0.192 |\n| 0 | 0 | 1 | 0.144 |\n| 0 | 1 | 0 | 0.048 |\n| 0 | 1 | 1 | 0.216 |\n| 1 | 0 | 0 | 0.192 |\n| 1 | 0 | 1 | 0.064 |\n| 1 | 1 | 0 | 0.048 |\n| 1 | 1 | 1 | 0.096 |\n| | | | |",
"answer": "This statement is obvious. Suppose that there exists an ordered numbering of the nodes such that for each node there are no links going to a lower-numbered node, and that there is a directed cycle in the graph:\n\n$$a_1 \\rightarrow a_2 \\rightarrow \\dots \\rightarrow a_N$$\n\nTo make it a real cycle, we also require $a_N \\to a_1$ . According to the assumption, we have $a_1 \\le a_2 \\le ... \\le a_N$ . Therefore, the last link $a_N \\to a_1$ is invalid since $a_N \\ge a_1$ .",
"answer_length": 473
},
{
"chapter": 8,
"question_number": "8.20",
"difficulty": "easy",
"question_text": "Consider the message passing protocol for the sum-product algorithm on a tree-structured factor graph in which messages are first propagated from the leaves to an arbitrarily chosen root node and then from the root node out to the leaves. Use proof by induction to show that the messages can be passed in such an order that at every step, each node that must send a message has received all of the incoming messages necessary to construct its outgoing messages.",
"answer": "We do the induction over the size of the tree and we grow the tree one node at a time while, at the same time, we update the message passing schedule. Note that we can build up any tree this way.\n\nFor a single root node, the required condition holds trivially true, since there are no messages to be passed. We then assume that it holds for a tree with N nodes. In the induction step we add a new leaf node to such a tree. This new leaf node need not to wait for any messages from other nodes in order to send its outgoing message and so it can be scheduled to send it first, before any other messages are sent. Its parent node will receive this message, whereafter the message propagation will follow the schedule for the original tree with N nodes, for which the condition is assumed to hold.\n\nFor the propagation of the outward messages from the root back to the leaves, we first follow the propagation schedule for the original tree with N nodes, for which the condition is assumed to hold. When this has completed, the parent of the new leaf node will be ready to send its outgoing message to the new leaf node, thereby completing the propagation for the tree with N+1 nodes.",
"answer_length": 1180
},
{
"chapter": 8,
"question_number": "8.23",
"difficulty": "medium",
"question_text": "In Section 8.4.4, we showed that the marginal distribution $p(x_i)$ for a variable node $x_i$ in a factor graph is given by the product of the messages arriving at this node from neighbouring factor nodes in the form (8.63: $= \\prod_{s \\in ne(x)} \\mu_{f_s \\to x}(x).$). Show that the marginal $p(x_i)$ can also be written as the product of the incoming message along any one of the links with the outgoing message along the same link.",
"answer": "This follows from the fact that the message that a node, $x_i$ , will send to a factor $f_s$ , consists of the product of all other messages received by $x_i$ . From (8.63: $= \\prod_{s \\in ne(x)} \\mu_{f_s \\to x}(x).$) and (8.69: $= \\prod_{l \\in \\text{ne}(x_m) \\setminus f_s} \\mu_{f_l \\to x_m}(x_m)$), we have\n\n$$p(x_i) = \\prod_{s \\in ne(x_i)} \\mu_{f_s \\to x_i}(x_i)$$\n\n$$= \\mu_{f_s \\to x_i}(x_i) \\prod_{t \\in ne(x_i) \\setminus f_s} \\mu_{f_t \\to x_i}(x_i)$$\n\n$$= \\mu_{f_s \\to x_i}(x_i) \\mu_{x_i \\to f_s}(x_i).$$",
"answer_length": 513
},
{
"chapter": 8,
"question_number": "8.28",
"difficulty": "medium",
"question_text": "The concept of a *pending* message in the sum-product algorithm for a factor graph was defined in Section 8.4.7. Show that if the graph has one or more cycles, there will always be at least one pending message irrespective of how long the algorithm runs.",
"answer": "If a graph has one or more cycles, there exists at least one set of nodes and edges such that, starting from an arbitrary node in the set, we can visit all the nodes in the set and return to the starting node, without traversing any edge more than once.\n\nConsider one particular such cycle. When one of the nodes $n_1$ in the cycle sends a message to one of its neighbours $n_2$ in the cycle, this causes a pending messages on the edge to the next node $n_3$ in that cycle. Thus sending a pending message along an edge in the cycle always generates a pending message on the next edge in that cycle. Since this is true for every node in the cycle it follows that there will always exist at least one pending message in the graph.",
"answer_length": 734
},
{
"chapter": 8,
"question_number": "8.29",
"difficulty": "medium",
"question_text": "Show that if the sum-product algorithm is run on a factor graph with a tree structure (no loops), then after a finite number of messages have been sent, there will be no pending messages.\n\n\n\nIf we define a joint distribution over observed and latent variables, the corresponding distribution of the observed variables alone is obtained by marginalization. This allows relatively complex marginal distributions over observed variables to be expressed in terms of more tractable joint distributions over the expanded space of observed and latent variables. The introduction of latent variables thereby allows complicated distributions to be formed from simpler components. In this chapter, we shall see that mixture distributions, such as the Gaussian mixture discussed in Section 2.3.9, can be interpreted in terms of discrete latent variables. Continuous latent variables will form the subject of Chapter 12.\n\nAs well as providing a framework for building more complex probability distributions, mixture models can also be used to cluster data. We therefore begin our discussion of mixture distributions by considering the problem of finding clusters in a set of data points, which we approach first using a nonprobabilistic technique called the K-means algorithm (Lloyd, 1982). Then we introduce the latent variable\n\nSection 9.1\n\nSection 9.2\n\nSection 9.3\n\nSection 9.4\n\nview of mixture distributions in which the discrete latent variables can be interpreted as defining assignments of data points to specific components of the mixture. A general technique for finding maximum likelihood estimators in latent variable models is the expectation-maximization (EM) algorithm. We first of all use the Gaussian mixture distribution to motivate the EM algorithm in a fairly informal way, and then we give a more careful treatment based on the latent variable viewpoint. We shall see that the K-means algorithm corresponds to a particular nonprobabilistic limit of EM applied to mixtures of Gaussians. Finally, we discuss EM in some generality.\n\nGaussian mixture models are widely used in data mining, pattern recognition, machine learning, and statistical analysis. In many applications, their parameters are determined by maximum likelihood, typically using the EM algorithm. However, as we shall see there are some significant limitations to the maximum likelihood approach, and in Chapter 10 we shall show that an elegant Bayesian treatment can be given using the framework of variational inference. This requires little additional computation compared with EM, and it resolves the principal difficulties of maximum likelihood while also allowing the number of components in the mixture to be inferred automatically from the data.\n\n### 9.1. K-means Clustering\n\nWe begin by considering the problem of identifying groups, or clusters, of data points in a multidimensional space. Suppose we have a data set $\\{\\mathbf{x}_1,\\ldots,\\mathbf{x}_N\\}$ consisting of N observations of a random D-dimensional Euclidean variable $\\mathbf{x}$ . Our goal is to partition the data set into some number K of clusters, where we shall suppose for the moment that the value of K is given. Intuitively, we might think of a cluster as comprising a group of data points whose inter-point distances are small compared with the distances to points outside of the cluster. 
We can formalize this notion by first introducing a set of D-dimensional vectors $\\mu_k$ , where $k=1,\\ldots,K$ , in which $\\mu_k$ is a prototype associated with the $k^{\\text{th}}$ cluster. As we shall see shortly, we can think of the $\\mu_k$ as representing the centres of the clusters. Our goal is then to find an assignment of data points to clusters, as well as a set of vectors $\\{\\mu_k\\}$ , such that the sum of the squares of the distances of each data point to its closest vector $\\mu_k$ , is a minimum.\n\nIt is convenient at this point to define some notation to describe the assignment of data points to clusters. For each data point $\\mathbf{x}_n$ , we introduce a corresponding set of binary indicator variables $r_{nk} \\in \\{0,1\\}$ , where $k=1,\\ldots,K$ describing which of the K clusters the data point $\\mathbf{x}_n$ is assigned to, so that if data point $\\mathbf{x}_n$ is assigned to cluster k then $r_{nk}=1$ , and $r_{nj}=0$ for $j\\neq k$ . This is known as the 1-of-K coding scheme. We can then define an objective function, sometimes called a *distortion measure*, given by\n\n$$J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$$\n (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$)\n\nwhich represents the sum of the squares of the distances of each data point to its assigned vector $\\mu_k$ . Our goal is to find values for the $\\{r_{nk}\\}$ and the $\\{\\mu_k\\}$ so as to minimize J. We can do this through an iterative procedure in which each iteration involves two successive steps corresponding to successive optimizations with respect to the $r_{nk}$ and the $\\mu_k$ . First we choose some initial values for the $\\mu_k$ . Then in the first phase we minimize J with respect to the $r_{nk}$ , keeping the $\\mu_k$ fixed. In the second phase we minimize J with respect to the $\\mu_k$ , keeping $r_{nk}$ fixed. This two-stage optimization is then repeated until convergence. We shall see that these two stages of updating $r_{nk}$ and updating $\\mu_k$ correspond respectively to the E (expectation) and M (maximization) steps of the EM algorithm, and to emphasize this we shall use the terms E step and M step in the context of the K-means algorithm.\n\nConsider first the determination of the $r_{nk}$ . Because J in (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$) is a linear function of $r_{nk}$ , this optimization can be performed easily to give a closed form solution. The terms involving different n are independent and so we can optimize for each n separately by choosing $r_{nk}$ to be 1 for whichever value of k gives the minimum value of $\\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$ . In other words, we simply assign the $n^{\\text{th}}$ data point to the closest cluster centre. More formally, this can be expressed as\n\n$$r_{nk} = \\begin{cases} 1 & \\text{if } k = \\arg\\min_{j} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_j\\|^2 \\\\ 0 & \\text{otherwise.} \\end{cases}$$\n (9.2: $r_{nk} = \\begin{cases} 1 & \\text{if } k = \\arg\\min_{j} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_j\\|^2 \\\\ 0 & \\text{otherwise.} \\end{cases}$)\n\nNow consider the optimization of the $\\mu_k$ with the $r_{nk}$ held fixed. 
The objective function J is a quadratic function of $\\mu_k$ , and it can be minimized by setting its derivative with respect to $\\mu_k$ to zero giving\n\n$$2\\sum_{n=1}^{N} r_{nk}(\\mathbf{x}_n - \\boldsymbol{\\mu}_k) = 0 \\tag{9.3}$$\n\nwhich we can easily solve for $\\mu_k$ to give\n\n$$\\mu_k = \\frac{\\sum_n r_{nk} \\mathbf{x}_n}{\\sum_n r_{nk}}.$$\n(9.4: $\\mu_k = \\frac{\\sum_n r_{nk} \\mathbf{x}_n}{\\sum_n r_{nk}}.$)",
"answer": "We show this by induction over the number of nodes in the tree-structured factor graph.\n\nFirst consider a graph with two nodes, in which case only two messages will be sent across the single edge, one in each direction. None of these messages will induce any pending messages and so the algorithm terminates.\n\nWe then assume that for a factor graph with N nodes, there will be no pending messages after a finite number of messages have been sent. Given such a graph, we can construct a new graph with N+1 nodes by adding a new node. This new node will have a single edge to the original graph (since the graph must remain a tree) and so if this new node receives a message on this edge, it will induce no pending messages. A message sent from the new node will trigger propagation of messages in the original graph with N nodes, but by assumption, after a finite number of messages have been sent, there will be no pending messages and the algorithm will terminate.\n\n# **Chapter 9** Mixture Models and EM",
"answer_length": 1004
},
{
"chapter": 8,
"question_number": "8.3",
"difficulty": "medium",
"question_text": "Consider three binary variables $a, b, c \\in \\{0, 1\\}$ having the joint distribution given in Table 8.2. Show by direct evaluation that this distribution has the property that a and b are marginally dependent, so that $p(a,b) \\neq p(a)p(b)$ , but that they become independent when conditioned on c, so that p(a,b|c) = p(a|c)p(b|c) for both c = 0 and c = 1.",
"answer": "Based on definition, we can obtain:\n\n$$p(a,b) = p(a,b,c=0) + p(a,b,c=1) = \\begin{cases} 0.336, & \\text{if } a = 0, b = 0\\\\ 0.264, & \\text{if } a = 0, b = 1\\\\ 0.256, & \\text{if } a = 1, b = 0\\\\ 0.144, & \\text{if } a = 1, b = 1 \\end{cases}$$\n\nSimilarly, we can obtain:\n\n$$p(a) = p(a, b = 0) + p(a, b = 1) =$$\n\n$$\\begin{cases}\n0.6, & \\text{if } a = 0 \\\\\n0.4, & \\text{if } a = 1\n\\end{cases}$$\n\nAnd\n\n$$p(b) = p(a = 0, b) + p(a = 1, b) =$$\n\n$$\\begin{cases}\n0.592, & \\text{if } b = 0 \\\\\n0.408, & \\text{if } b = 1\n\\end{cases}$$\n\nTherefore, we conclude that $p(a,b) \\neq p(a)p(b)$ . For instance, we have $p(a=1,b=1)=0.144,\\ p(a=1)=0.4$ and p(b=1)=0.408. It is obvious that:\n\n$$0.144 = p(a = 1, b = 1) \\neq p(a = 1)p(b = 1) = 0.4 \\times 0.408$$\n\nTo prove the conditional dependency, we first calculate p(c):\n\n$$p(c) = \\sum_{a,b=0,1} p(a,b,c) = \\begin{cases} 0.480, & \\text{if } c = 0\\\\ 0.520, & \\text{if } c = 1 \\end{cases}$$\n\nAccording to Bayes' Theorem, we have:\n\n$$p(a,b|c) = \\frac{p(a,b,c)}{p(c)} = \\begin{cases} 0.400, & \\text{if } a = 0, b = 0, c = 0\\\\ 0.277, & \\text{if } a = 0, b = 0, c = 1\\\\ 0.100, & \\text{if } a = 0, b = 1, c = 0\\\\ 0.415, & \\text{if } a = 0, b = 1, c = 1\\\\ 0.400, & \\text{if } a = 1, b = 0, c = 0\\\\ 0.123, & \\text{if } a = 1, b = 0, c = 1\\\\ 0.100, & \\text{if } a = 1, b = 1, c = 0\\\\ 0.185, & \\text{if } a = 1, b = 1, c = 1 \\end{cases}$$\n\nSimilarly, we also have:\n\n$$p(a|c) = \\frac{p(a,c)}{p(c)} = \\begin{cases} 0.240/0.480 = 0.500, & \\text{if } a = 0, c = 0\\\\ 0.360/0.520 = 0.692, & \\text{if } a = 0, c = 1\\\\ 0.240/0.480 = 0.500, & \\text{if } a = 1, c = 0\\\\ 0.160/0.520 = 0.308, & \\text{if } a = 1, c = 1 \\end{cases}$$\n\nWhere we have used p(a,c) = p(a,b=0,c) + p(a,b=1,c). Similarly, we can obtain:\n\n$$p(b|c) = \\frac{p(b,c)}{p(c)} = \\begin{cases} 0.384/0.480 = 0.800, \\text{ if } b = 0, c = 0\\\\ 0.208/0.520 = 0.400, \\text{ if } b = 0, c = 1\\\\ 0.096/0.480 = 0.200, \\text{ if } b = 1, c = 0\\\\ 0.312/0.520 = 0.600, \\text{ if } b = 1, c = 1 \\end{cases}$$\n\nNow we can easily verify the statement p(a,b|c) = p(a|c)p(b|c). For instance, we have:\n\n$$0.1 = p(a = 1, b = 1 | c = 0) = p(a = 1 | c = 0)p(b = 1 | c = 0) = 0.5 \\times 0.2 = 0.1$$",
"answer_length": 2153
},
{
"chapter": 8,
"question_number": "8.4",
"difficulty": "hard",
"question_text": "Evaluate the distributions p(a), p(b|c), and p(c|a) corresponding to the joint distribution given in Table 8.2. Hence show by direct evaluation that p(a,b,c) = p(a)p(c|a)p(b|c). Draw the corresponding directed graph.",
"answer": "This problem follows the previous one. We have already calculated p(a) and p(b|c), we rewrite it here.\n\n$$p(a) = p(a, b = 0) + p(a, b = 1) = \\begin{cases} 0.6, & \\text{if } a = 0 \\\\ 0.4, & \\text{if } a = 1 \\end{cases}$$\n\nAnd\n\n$$p(b|c) = \\frac{p(b,c)}{p(c)} = \\begin{cases} 0.384/0.480 = 0.800, & \\text{if } b = 0, c = 0 \\\\ 0.208/0.520 = 0.400, & \\text{if } b = 0, c = 1 \\\\ 0.096/0.480 = 0.200, & \\text{if } b = 1, c = 0 \\\\ 0.312/0.520 = 0.600, & \\text{if } b = 1, c = 1 \\end{cases}$$\n\nWe can also obtain p(c|a):\n\n$$p(c|a) = \\frac{p(a,c)}{p(a)} = \\begin{cases} 0.24/0.6 = 0.4, & \\text{if } a = 0, c = 0\\\\ 0.36/0.6 = 0.6, & \\text{if } a = 0, c = 1\\\\ 0.24/0.4 = 0.6, & \\text{if } a = 1, c = 0\\\\ 0.16/0.4 = 0.4, & \\text{if } a = 1, c = 1 \\end{cases}$$\n\nNow we can easily verify the statement that p(a,b,c) = p(a)p(c|a)p(b|c) given Table 8.2. The directed graph looks like:\n\n$$a \\rightarrow c \\rightarrow b$$",
"answer_length": 903
},
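As a quick companion to the two solutions above, the following sketch (ours, not part of the source solutions; the variable names are our own) tabulates the marginals from Table 8.2 and asserts the marginal dependence of a and b, their conditional independence given c, and the factorization p(a,b,c) = p(a)p(c|a)p(b|c):

```python
# Hypothetical check of Exercises 8.3 / 8.4 against Table 8.2 (all names are ours).
p = {(0, 0, 0): 0.192, (0, 0, 1): 0.144, (0, 1, 0): 0.048, (0, 1, 1): 0.216,
     (1, 0, 0): 0.192, (1, 0, 1): 0.064, (1, 1, 0): 0.048, (1, 1, 1): 0.096}

# Marginals obtained by summing out the remaining variables.
p_ab = {(a, b): p[a, b, 0] + p[a, b, 1] for a in (0, 1) for b in (0, 1)}
p_a = {a: p_ab[a, 0] + p_ab[a, 1] for a in (0, 1)}
p_b = {b: p_ab[0, b] + p_ab[1, b] for b in (0, 1)}
p_c = {c: sum(p[a, b, c] for a in (0, 1) for b in (0, 1)) for c in (0, 1)}
p_ac = {(a, c): p[a, 0, c] + p[a, 1, c] for a in (0, 1) for c in (0, 1)}
p_bc = {(b, c): p[0, b, c] + p[1, b, c] for b in (0, 1) for c in (0, 1)}

# Exercise 8.3: a and b are marginally dependent ...
assert any(abs(p_ab[a, b] - p_a[a] * p_b[b]) > 1e-6 for a in (0, 1) for b in (0, 1))
# ... but conditionally independent given c; Exercise 8.4: p(a,b,c) = p(a)p(c|a)p(b|c).
for (a, b, c), pabc in p.items():
    assert abs(pabc / p_c[c] - (p_ac[a, c] / p_c[c]) * (p_bc[b, c] / p_c[c])) < 1e-9
    assert abs(pabc - p_a[a] * (p_ac[a, c] / p_a[a]) * (p_bc[b, c] / p_c[c])) < 1e-9

print("Table 8.2: all checks passed")
```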
{
"chapter": 8,
"question_number": "8.5",
"difficulty": "easy",
"question_text": "Draw a directed probabilistic graphical model corresponding to the relevance vector machine described by (7.79: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1}).$) and (7.80: $p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})$).",
"answer": "It looks quite like Figure 8.6. The difference is that we introduce $\\alpha_i$ for each $w_i$ , where i = 1, 2, ..., M.\n\n\n\nFigure 1: probabilistic graphical model corresponding to the RVM described in (7.79: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1}).$) and (7.80: $p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})$).\n\n## Problem 8.6 Solution(Wait for update)",
"answer_length": 493
},
{
"chapter": 8,
"question_number": "8.7",
"difficulty": "medium",
"question_text": "Using the recursion relations (8.15: $\\mathbb{E}[x_i] = \\sum_{j \\in \\text{pa}_i} w_{ij} \\mathbb{E}[x_j] + b_i.$) and (8.16), show that the mean and covariance of the joint distribution for the graph shown in Figure 8.14 are given by (8.17) and (8.18: $\\Sigma = \\begin{pmatrix} v_1 & w_{21}v_1 & w_{32}w_{21}v_1 \\\\ w_{21}v_1 & v_2 + w_{21}^2v_1 & w_{32}(v_2 + w_{21}^2v_1) \\\\ w_{32}w_{21}v_1 & w_{32}(v_2 + w_{21}^2v_1) & v_3 + w_{22}^2(v_2 + w_{21}^2v_1) \\end{pmatrix} .$), respectively.",
"answer": "Let's just follow the hint. We begin by calculating the mean $\\mu$ .\n\n$$\\mathbb{E}[x_1] = b_1$$\n\nAccording to Eq (8.15: $\\mathbb{E}[x_i] = \\sum_{j \\in \\text{pa}_i} w_{ij} \\mathbb{E}[x_j] + b_i.$), we can obtain:\n\n$$\\mathbb{E}[x_2] = \\sum_{j \\in pa_2} w_{2j} \\mathbb{E}[x_j] + b_2 = w_{21}b_1 + b_2$$\n\nThen we can obtain:\n\n$$\\mathbb{E}[x_3] = w_{32}\\mathbb{E}[x_2] + b_3$$\n\n$$= w_{32}(w_{21}b_1 + b_2) + b_3$$\n\n$$= w_{32}w_{21}b_1 + w_{32}b_2 + b_3$$\n\nTherefore, we obtain Eq (8.17) just as required. Next, we deal with the covariance matrix.\n\n$$cov[x_1, x_1] = v_1$$\n\nThen we can obtain:\n\n$$cov[x_1, x_2] = \\sum_{k=1}^{\\infty} w_{2k} cov[x_1, x_k] + I_{12}v_2 = w_{21} cov[x_1, x_1] = w_{21}v_1$$\n\nAnd also $cov[x_2,x_1] = cov[x_1,x_2] = w_{21}v_1$ . Hence, we can obtain:\n\n$$cov[x_2, x_2] = \\sum_{k=1}^{\\infty} w_{2k} cov[x_2, x_k] + I_{22}v_2 = w_{21}^2 v_1 + v_2$$\n\nNext, we can obtain:\n\n$$cov[x_1, x_3] = \\sum_{k=2} w_{3k} cov[x_1, x_k] + I_{31}v_1 = w_{32}w_{21}v_1$$\n\nThen, we can obtain:\n\n$$cov[x_2, x_3] = \\sum_{k=2} w_{3k} cov[x_2, x_k] + I_{23}v_3 = w_{32}(v_2 + w_{21}^2 v_1)$$\n\nFinally, we can obtain:\n\n$$\\begin{array}{lll} \\mathrm{cov}[x_3,x_3] & = & \\sum_{k=2} w_{3k} \\mathrm{cov}[x_3,x_k] + I_{33} v_3 \\\\ \\\\ & = & w_{32} \\Big[ w_{32} (v_2 + w_{21}^2 v_1) \\Big] + v_3 \\end{array}$$\n\nWhere we have used the fact that $cov[x_3, x_2] = cov[x_2, x_3]$ . By now, we have obtained Eq (8.18: $\\Sigma = \\begin{pmatrix} v_1 & w_{21}v_1 & w_{32}w_{21}v_1 \\\\ w_{21}v_1 & v_2 + w_{21}^2v_1 & w_{32}(v_2 + w_{21}^2v_1) \\\\ w_{32}w_{21}v_1 & w_{32}(v_2 + w_{21}^2v_1) & v_3 + w_{22}^2(v_2 + w_{21}^2v_1) \\end{pmatrix} .$) just as required.",
"answer_length": 1641
},
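The closed-form moments derived above are easy to verify numerically. In the sketch below (ours; the weight, bias and variance values are arbitrary assumptions), the chain is written as x = L(b + e) with a lower-triangular matrix L, so the exact mean and covariance are L b and L diag(v) L^T, which should coincide with Eqs (8.17) and (8.18):

```python
# Hypothetical numerical check of Eqs (8.17)-(8.18) for the chain x1 -> x2 -> x3.
# The weights, biases and noise variances below are arbitrary choices of ours.
import numpy as np

w21, w32 = 0.7, -1.3
b = np.array([0.5, -0.2, 1.1])   # b1, b2, b3
v = np.array([2.0, 0.5, 1.5])    # v1, v2, v3

# x1 = b1 + e1, x2 = w21*x1 + b2 + e2, x3 = w32*x2 + b3 + e3 with e_i ~ N(0, v_i),
# i.e. x = L(b + e) for the lower-triangular sensitivity matrix L, so the exact
# moments are E[x] = L b and cov[x] = L diag(v) L^T.
L = np.array([[1.0, 0.0, 0.0],
              [w21, 1.0, 0.0],
              [w32 * w21, w32, 1.0]])
mean_exact = L @ b
cov_exact = L @ np.diag(v) @ L.T

# Eq (8.17) and Eq (8.18) as derived in the solution above.
mean_817 = np.array([b[0], w21 * b[0] + b[1], w32 * w21 * b[0] + w32 * b[1] + b[2]])
cov_818 = np.array([
    [v[0], w21 * v[0], w32 * w21 * v[0]],
    [w21 * v[0], v[1] + w21**2 * v[0], w32 * (v[1] + w21**2 * v[0])],
    [w32 * w21 * v[0], w32 * (v[1] + w21**2 * v[0]), v[2] + w32**2 * (v[1] + w21**2 * v[0])],
])

assert np.allclose(mean_exact, mean_817) and np.allclose(cov_exact, cov_818)
print("Eqs (8.17) and (8.18) match the exact moments of the linear-Gaussian chain")
```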
{
"chapter": 8,
"question_number": "8.8",
"difficulty": "easy",
"question_text": "Show that $a \\perp \\!\\!\\!\\perp b, c \\mid d$ implies $a \\perp \\!\\!\\!\\!\\perp b \\mid d$ .",
"answer": "According to the definition, we can write:\n\n$$p(a,b,c|d) = p(a|d)p(b,c|d)$$\n\nWe marginalize both sides with respect to c, yielding:\n\n$$p(a,b|d) = p(a|d)p(b|d)$$\n\nJust as required.",
"answer_length": 179
},
{
"chapter": 8,
"question_number": "8.9",
"difficulty": "easy",
"question_text": "Www Using the d-separation criterion, show that the conditional distribution for a node x in a directed graph, conditioned on all of the nodes in the Markov blanket, is independent of the remaining variables in the graph.\n\nFigure 8.54 Example of a graphical model used to explore the conditional independence properties of the head-to-head path a–c–b when a descendant of c, namely the node d, is observed.\n\n",
"answer": "This statement is easy to see but a little bit difficult to prove. We put Fig 8.26 here to give a better illustration.\n\n\n\nFigure 2: Markov blanket of a node $x_i$ \n\nMarkov blanket $\\Phi$ of node $x_i$ is made up of three kinds of nodes:(i) the set $\\Phi_1$ containing all the parents of node $x_i$ ( $x_1$ and $x_2$ in Fig.2), (ii) the set $\\Phi_2$ containing all the children of node $x_i$ ( $x_5$ and $x_6$ in Fig.2), and (iii) the set $\\Phi_3$ containing all the co-parents of node $x_i$ ( $x_3$ and $x_4$ in Fig.2). According to the d-separation criterion, we need to show that all the paths from node $x_i$ to an arbitrary node $\\hat{x} \\notin \\Phi = \\{\\Phi_1 \\cup \\Phi_2 \\cup \\Phi_3\\}$ are blocked given that the Markov blanket $\\Phi$ are observed.\n\nIt is obvious that $\\hat{x}$ can only connect to the target node $x_i$ via two kinds of node: $\\Phi_1, \\Phi_2$ . First, suppose that $\\hat{x}$ connects to $x_i$ via some node $x^* \\in \\Phi_1$ . The arrows definitely meet head-to-tail or tail-to-tail at node $x^*$ because the link from a parent node $x^*$ to $x_i$ has its tail connected to the parent node $x^*$ , and since $x^*$ is in $\\Phi_1 \\subseteq \\Phi$ , we see that this path is blocked.\n\nIn the second case, suppose that $\\hat{x}$ connects to $x_i$ via some node $x^* \\in \\Phi_2$ . We need to further divide this situation. If the path from $\\hat{x}$ to $x_i$ also goes through a node $x^{**}$ from $\\Phi_3$ (e.g., in Fig.2, some node $\\hat{x}$ connects to node $x_3$ , and in this example $x^{**} = x_3$ , $x^* = x_5$ ), it is clearly that the arrows meet head-to-tail or tail-to-tail at the node $x^{**} \\in \\Phi_3 \\subseteq \\Phi$ , this path is blocked.\n\nIn the final case, suppose that $\\hat{x}$ connects to $x_i$ via some node $x^* \\in \\Phi_2$ and the path doesn't go through any node from $\\Phi_3$ . An important observation is that the arrows cannot meet head-to-head at node $x^*$ (otherwise, this path will go through a node from $\\Phi_3$ ). Thus, the arrows must meet either head-to-tail or tail-to-tail at node $x^* \\in \\Phi_2 \\subseteq \\Phi$ . Therefore, the path is also blocked.",
"answer_length": 2219
}
]
},
{
"chapter_number": 9,
"total_questions": 27,
"difficulty_breakdown": {
"easy": 18,
"medium": 2,
"hard": 0,
"unknown": 7
},
"questions": [
{
"chapter": 9,
"question_number": "9.1",
"difficulty": "easy",
"question_text": "Consider the K-means algorithm discussed in Section 9.1. Show that as a consequence of there being a finite number of possible assignments for the set of discrete indicator variables $r_{nk}$ , and that for each such assignment there is a unique optimum for the $\\{\\mu_k\\}$ , the K-means algorithm must converge after a finite number of iterations.",
"answer": "For each $r_{nk}$ when n is fixed and k=1,2,...,K, only one of them equals 1 and others are all 0. Therefore, there are K possible choices. When N data are given, there are $K^N$ possible assignments for $\\{r_{nk}; n=1,2,...,N; k=1,2,...,K\\}$ . For each assignments, the optimal $\\{\\mu_k; k=1,2,...,K\\}$ are well determined by Eq (9.4: $\\mu_k = \\frac{\\sum_n r_{nk} \\mathbf{x}_n}{\\sum_n r_{nk}}.$).\n\nAs discussed in the main text, by iteratively performing E-step and M-step, the distortion measure in Eq (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$) is gradually minimized. The worst case is that we find the optimal assignment and $\\{\\mu_k\\}$ in the last iteration. In other words, $K^N$ iterations are required. However, it is guaranteed to converge because the assignments are finite and the optimal $\\{\\mu_k\\}$ is determined once the assignment is given.",
"answer_length": 915
},
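The finite-convergence argument above can also be illustrated empirically: with the updates (9.2) and (9.4), the distortion J never increases and the assignment eventually repeats. A minimal sketch under assumed synthetic data (ours):

```python
# Minimal K-means sketch illustrating Exercise 9.1: the distortion J of Eq (9.1)
# never increases, and a fixed assignment is reached after finitely many steps.
# Synthetic data; every name and constant below is an arbitrary choice of ours.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in ((0, 0), (3, 3), (0, 4))])
K = 3
mu = X[rng.choice(len(X), K, replace=False)]   # initial prototypes

J_history, prev_assign = [], None
for _ in range(100):
    # E step, Eq (9.2): assign each point to its nearest prototype.
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    J_history.append(d2[np.arange(len(X)), assign].sum())
    if prev_assign is not None and np.array_equal(assign, prev_assign):
        break                                   # assignments repeat -> converged
    prev_assign = assign
    # M step, Eq (9.4): move each prototype to the mean of its assigned points.
    for k in range(K):
        if np.any(assign == k):
            mu[k] = X[assign == k].mean(axis=0)

assert all(b <= a + 1e-9 for a, b in zip(J_history, J_history[1:]))
print(f"converged after {len(J_history)} E steps, final J = {J_history[-1]:.3f}")
```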
{
"chapter": 9,
"question_number": "9.10",
"difficulty": "medium",
"question_text": "\\star)$ Consider a density model given by a mixture distribution\n\n$$p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|k)$$\n (9.81: $p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|k)$)\n\nand suppose that we partition the vector $\\mathbf{x}$ into two parts so that $\\mathbf{x} = (\\mathbf{x}_a, \\mathbf{x}_b)$ . Show that the conditional density $p(\\mathbf{x}_b|\\mathbf{x}_a)$ is itself a mixture distribution and find expressions for the mixing coefficients and for the component densities.",
"answer": "According to the property of PDF, we know that:\n\n$$p(\\mathbf{x}_b|\\mathbf{x}_a) = \\frac{p(\\mathbf{x}_a, \\mathbf{x}_b)}{p(\\mathbf{x}_a)} = \\frac{p(\\mathbf{x})}{p(\\mathbf{x}_a)} = \\sum_{k=1}^K \\frac{\\pi_k}{p(\\mathbf{x}_a)} \\cdot p(\\mathbf{x}|k)$$\n\nNote that here $p(\\mathbf{x}_a)$ can be viewed as a normalization constant used to guarantee that the integration of $p(\\mathbf{x}_b|\\mathbf{x}_a)$ equal to 1. Moreover, similarly, we can also obtain:\n\n$$p(\\mathbf{x}_a|\\mathbf{x}_b) = \\sum_{k=1}^{K} \\frac{\\pi_k}{p(\\mathbf{x}_b)} \\cdot p(\\mathbf{x}|k)$$",
"answer_length": 553
},
{
"chapter": 9,
"question_number": "9.11",
"difficulty": "easy",
"question_text": "In Section 9.3.2, we obtained a relationship between K means and EM for Gaussian mixtures by considering a mixture model in which all components have covariance $\\epsilon \\mathbf{I}$ . Show that in the limit $\\epsilon \\to 0$ , maximizing the expected completedata log likelihood for this model, given by (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$), is equivalent to minimizing the distortion measure J for the K-means algorithm given by (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$).",
"answer": "According to the problem description, the expectation, i.e., Eq(9.40), can now be written as:\n\n$$\\mathbb{E}_{z}[\\ln p] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\epsilon \\mathbf{I}) \\right\\}$$\n\nIn the M-step, we are required to maximize the expression above with respect to $\\mu_k$ and $\\pi_k$ . In Prob.9.8, we have already proved that $\\mu_k$ should be given by Eq (9.17):\n\n$$\\boldsymbol{\\mu}_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\mathbf{x}_n \\tag{*}$$\n\nWhere $N_k$ is given by Eq (9.18: $N_k = \\sum_{n=1}^{N} \\gamma(z_{nk}).$). Moreover, in this case, by analogy to Eq (9.16: $0 = -\\sum_{n=1}^{N} \\frac{\\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)}{\\sum_{j} \\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\boldsymbol{\\Sigma}_j)} \\boldsymbol{\\Sigma}_k(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)$), $\\gamma(z_{nk})$ is slightly different:\n\n$$\\gamma(z_{nk}) = \\frac{\\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\epsilon \\mathbf{I})}{\\sum_j \\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\epsilon \\mathbf{I})}$$\n\nWhen $\\epsilon \\to 0$ , we can obtain:\n\n$$\\sum_{j} \\pi_{j} \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{j}, \\epsilon \\mathbf{I}) \\approx \\pi_{m} \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{m}, \\epsilon \\mathbf{I}), \\text{ where } m = \\operatorname{argmin}_{j} ||\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{j}||^{2}$$\n\nTo be more clear, the summation is dominated by the max of $\\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\epsilon \\mathbf{I})$ , and this term is further determined by the exponent, i.e., $-||\\mathbf{x}_n - \\boldsymbol{\\mu}_j||^2$ . Therefore, $\\gamma(z_{nk})$ is given by exactly Eq (9.2: $r_{nk} = \\begin{cases} 1 & \\text{if } k = \\arg\\min_{j} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_j\\|^2 \\\\ 0 & \\text{otherwise.} \\end{cases}$), i.e., we have $\\gamma(z_{nk}) = r_{nk}$ . Combining with (\\*), we can obtain exactly Eq (9.4: $\\mu_k = \\frac{\\sum_n r_{nk} \\mathbf{x}_n}{\\sum_n r_{nk}}.$). Next, according to Prob.9.9, $\\pi_k$ is given by Eq(9.22):\n\n$$\\pi_k = \frac{N_k}{N} = \frac{\\sum_{n=1}^N \\gamma(z_{nk})}{N} = \frac{r_{nk}}{N}$$\n\nIn other words, $\\pi_k$ equals the fraction of the data points assigned to the k-th cluster.",
"answer_length": 2304
},
{
"chapter": 9,
"question_number": "9.12",
"difficulty": "easy",
"question_text": "Consider a mixture distribution of the form\n\n$$p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|k)$$\n (9.82: $p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|k)$)\n\nwhere the elements of $\\mathbf{x}$ could be discrete or continuous or a combination of these. Denote the mean and covariance of $p(\\mathbf{x}|k)$ by $\\mu_k$ and $\\Sigma_k$ , respectively. Show that the mean and covariance of the mixture distribution are given by (9.49: $\\mathbb{E}[\\mathbf{x}] = \\sum_{k=1}^{K} \\pi_k \\boldsymbol{\\mu}_k$) and (9.50: $\\operatorname{cov}[\\mathbf{x}] = \\sum_{k=1}^{K} \\pi_k \\left\\{ \\mathbf{\\Sigma}_k + \\boldsymbol{\\mu}_k \\boldsymbol{\\mu}_k^{\\mathrm{T}} \\right\\} - \\mathbb{E}[\\mathbf{x}] \\mathbb{E}[\\mathbf{x}]^{\\mathrm{T}}$).",
"answer": "First we calculate the mean $\\mu_k$ :\n\n$$\\mu_k = \\int \\mathbf{x} p(\\mathbf{x}) d\\mathbf{x}$$\n\n$$= \\int \\mathbf{x} \\sum_{k=1}^K \\pi_k p(\\mathbf{x}|k) d\\mathbf{x}$$\n\n$$= \\sum_{k=1}^K \\pi_k \\int \\mathbf{x} p(\\mathbf{x}|k) d\\mathbf{x}$$\n\n$$= \\sum_{k=1}^K \\pi_k \\mu_k$$\n\nThen we deal with the covariance matrix. For an arbitrary random variable $\\mathbf{x}$ , according to Eq (2.63: $cov[\\mathbf{x}] = \\mathbb{E}\\left[ (\\mathbf{x} - \\mathbb{E}[\\mathbf{x}])(\\mathbf{x} - \\mathbb{E}[\\mathbf{x}])^{\\mathrm{T}} \\right].$) we have:\n\n$$cov[\\mathbf{x}] = \\mathbb{E}[(\\mathbf{x} - \\mathbb{E}[\\mathbf{x}])(\\mathbf{x} - \\mathbb{E}[\\mathbf{x}])^T]$$\n$$= \\mathbb{E}[\\mathbf{x}\\mathbf{x}^T] - \\mathbb{E}[\\mathbf{x}]\\mathbb{E}[\\mathbf{x}]^T$$\n\nSince $\\mathbb{E}[\\mathbf{x}]$ is already obtained, we only need to solve $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^T]$ . First we only focus on the k-th component and rearrange the expression above, yielding:\n\n$$\\mathbb{E}_{k}[\\mathbf{x}\\mathbf{x}^{T}] = \\operatorname{cov}_{k}[\\mathbf{x}] + \\mathbb{E}_{k}[\\mathbf{x}]\\mathbb{E}_{k}[\\mathbf{x}]^{T} = \\mathbf{\\Sigma}_{k} + \\boldsymbol{\\mu}_{k}\\boldsymbol{\\mu}_{k}^{T}$$\n\nWe further use Eq (2.62: $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^{\\mathrm{T}}] = \\boldsymbol{\\mu}\\boldsymbol{\\mu}^{\\mathrm{T}} + \\boldsymbol{\\Sigma}.$), yielding:\n\n$$\\mathbb{E}[\\mathbf{x}\\mathbf{x}^T] = \\int \\mathbf{x}\\mathbf{x}^T \\sum_{k=1}^K \\pi_k \\, p(\\mathbf{x}|k) \\, d\\mathbf{x}$$\n\n$$= \\sum_{k=1}^K \\pi_k \\int \\mathbf{x}\\mathbf{x}^T \\, p(\\mathbf{x}|k) \\, d\\mathbf{x}$$\n\n$$= \\sum_{k=1}^K \\pi_k \\, \\mathbb{E}_k[\\mathbf{x}\\mathbf{x}^T]$$\n\n$$= \\sum_{k=1}^K \\pi_k \\, (\\boldsymbol{\\mu}_k \\boldsymbol{\\mu}_k^T + \\boldsymbol{\\Sigma}_k)$$\n\nTherefore, we obtain Eq (9.50: $\\operatorname{cov}[\\mathbf{x}] = \\sum_{k=1}^{K} \\pi_k \\left\\{ \\mathbf{\\Sigma}_k + \\boldsymbol{\\mu}_k \\boldsymbol{\\mu}_k^{\\mathrm{T}} \\right\\} - \\mathbb{E}[\\mathbf{x}] \\mathbb{E}[\\mathbf{x}]^{\\mathrm{T}}$) just as required.",
"answer_length": 1926
},
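Eqs (9.49) and (9.50) can also be checked by sampling. The sketch below (ours; the two-component Gaussian parameters are arbitrary assumptions, and the comparison is only approximate because of Monte Carlo noise) draws from the mixture and compares the empirical moments with the closed forms:

```python
# Hypothetical Monte Carlo check of Eqs (9.49)-(9.50) for a two-component
# Gaussian mixture; all parameter values below are arbitrary choices of ours.
import numpy as np

rng = np.random.default_rng(1)
pi = np.array([0.3, 0.7])
mus = [np.array([0.0, 0.0]), np.array([2.0, -1.0])]
Sigmas = [np.array([[1.0, 0.3], [0.3, 0.5]]), np.array([[0.4, -0.1], [-0.1, 0.8]])]

# Closed-form moments of the mixture, Eqs (9.49) and (9.50).
mean_formula = sum(p * m for p, m in zip(pi, mus))
cov_formula = sum(p * (S + np.outer(m, m)) for p, m, S in zip(pi, mus, Sigmas)) \
    - np.outer(mean_formula, mean_formula)

# Sample from the mixture and compare with the empirical moments.
N = 200_000
comp = rng.choice(2, size=N, p=pi)
samples = np.empty((N, 2))
for k in range(2):
    idx = np.where(comp == k)[0]
    samples[idx] = rng.multivariate_normal(mus[k], Sigmas[k], size=len(idx))

assert np.allclose(samples.mean(axis=0), mean_formula, atol=0.02)
assert np.allclose(np.cov(samples.T, bias=True), cov_formula, atol=0.05)
print("Eqs (9.49)/(9.50) agree with the empirical mixture moments")
```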
{
"chapter": 9,
"question_number": "9.13",
"difficulty": "medium",
"question_text": "Using the re-estimation equations for the EM algorithm, show that a mixture of Bernoulli distributions, with its parameters set to values corresponding to a maximum of the likelihood function, has the property that\n\n$$\\mathbb{E}[\\mathbf{x}] = \\frac{1}{N} \\sum_{n=1}^{N} \\mathbf{x}_n \\equiv \\overline{\\mathbf{x}}.$$\n (9.83: $\\mathbb{E}[\\mathbf{x}] = \\frac{1}{N} \\sum_{n=1}^{N} \\mathbf{x}_n \\equiv \\overline{\\mathbf{x}}.$)\n\nHence show that if the parameters of this model are initialized such that all components have the same mean $\\mu_k = \\widehat{\\mu}$ for $k = 1, \\ldots, K$ , then the EM algorithm will converge after one iteration, for any choice of the initial mixing coefficients, and that this solution has the property $\\mu_k = \\overline{\\mathbf{x}}$ . Note that this represents a degenerate case of the mixture model in which all of the components are identical, and in practice we try to avoid such solutions by using an appropriate initialization.",
"answer": "First, let's make this problem more clear. In a mixture of Bernoulli distribution, whose complete-data log likelihood is given by Eq (9.54: $\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left\\{ \\ln \\pi_k + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$) and whose model parameters are $\\pi_k$ and $\\mu_k$ . If we want to obtain those parameters, we can adopt EM algorithm. In the E-step, we calculate $\\gamma(z_{nk})$ as shown in Eq (9.56: $= \\frac{\\pi_k p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k)}{\\sum_{j=1}^K \\pi_j p(\\mathbf{x}_n | \\boldsymbol{\\mu}_j)}.$). In the M-step, we update $\\pi_k$ and $\\mu_k$ according to Eq (9.59: $\\mu_k = \\overline{\\mathbf{x}}_k.$) and Eq (9.60: $\\pi_k = \\frac{N_k}{N}$), where $N_k$ and $\\mathbf{x}_k$ are defined in Eq (9.57: $N_k = \\sum_{n=1}^N \\gamma(z_{nk})$) and Eq (9.58: $\\overline{\\mathbf{x}}_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\mathbf{x}_n$). Now let's back to this problem. The expectation of $\\mathbf{x}$ is given by Eq (9.49):\n\n$$\\mathbb{E}[\\mathbf{x}] = \\sum_{k=1}^{K} \\pi_k^{(opt)} \\boldsymbol{\\mu}_k^{(opt)}$$\n\nHere $\\pi_k^{(opt)}$ and $\\pmb{\\mu}_k^{(opt)}$ are the parameters obtained when EM is converged.\n\nUsing Eq (9.58: $\\overline{\\mathbf{x}}_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\mathbf{x}_n$) and Eq(9.59), we can obtain:\n\n$$\\mathbb{E}[\\mathbf{x}] = \\sum_{k=1}^{K} \\pi_{k}^{(opt)} \\boldsymbol{\\mu}_{k}^{(opt)}$$\n\n$$= \\sum_{k=1}^{K} \\pi_{k}^{(opt)} \\frac{1}{N_{K}^{(opt)}} \\sum_{n=1}^{N} \\gamma(z_{nk})^{(opt)} \\mathbf{x}_{n}$$\n\n$$= \\sum_{k=1}^{K} \\frac{N_{k}^{(opt)}}{N} \\frac{1}{N_{K}^{(opt)}} \\sum_{n=1}^{N} \\gamma(z_{nk})^{(opt)} \\mathbf{x}_{n}$$\n\n$$= \\sum_{k=1}^{K} \\frac{1}{N} \\sum_{n=1}^{N} \\gamma(z_{nk})^{(opt)} \\mathbf{x}_{n}$$\n\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\frac{\\gamma(z_{nk})^{(opt)} \\mathbf{x}_{n}}{N}$$\n\n$$= \\sum_{n=1}^{N} \\frac{\\mathbf{x}_{n}}{N} \\sum_{k=1}^{K} \\gamma(z_{nk})^{(opt)}$$\n\n$$= \\frac{1}{N} \\sum_{n=1}^{N} \\mathbf{x}_{n} = \\bar{\\mathbf{x}}$$\n\nIf we set all $\\mu_k$ equal to $\\hat{\\mu}$ in initialization, in the first E-step, we can obtain:\n\n$$\\gamma(z_{nk})^{(1)} = \\frac{\\pi_k^{(0)} p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k = \\widehat{\\boldsymbol{\\mu}})}{\\sum_{j=1}^K \\pi_j^{(0)} p(\\mathbf{x}_n | \\boldsymbol{\\mu}_j = \\widehat{\\boldsymbol{\\mu}})} = \\frac{\\pi_k^{(0)}}{\\sum_{j=1}^K \\pi_j^{(0)}} = \\pi_k^{(0)}$$\n\nNote that here $\\hat{\\mu}$ and $\\pi_k^{(0)}$ are the initial values. In the subsequent M-step, according to Eq (9.57)-(9.60), we can obtain:\n\n$$\\boldsymbol{\\mu}_{k}^{(1)} = \\frac{1}{N_{k}^{(1)}} \\sum_{n=1}^{N} \\gamma(z_{nk})^{(1)} \\mathbf{x}_{n} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk})^{(1)} \\mathbf{x}_{n}}{\\sum_{n=1}^{N} \\gamma(z_{nk})^{(1)}} = \\frac{\\sum_{n=1}^{N} \\pi_{k}^{(0)} \\mathbf{x}_{n}}{\\sum_{n=1}^{N} \\pi_{k}^{(0)}} = \\frac{\\sum_{n=1}^{N} \\mathbf{x}_{n}}{N}$$\n\nAnd\n\n$$\\pi_k^{(1)} = \\frac{N_k^{(1)}}{N} = \\frac{\\sum_{n=1}^N \\gamma(z_{nk})^{(1)}}{N} = \\frac{\\sum_{n=1}^N \\pi_k^{(0)}}{N} = \\pi_k^{(0)}$$\n\nIn other words, in this case, after the first EM iteration, we find that the new $\\boldsymbol{\\mu}_k^{(1)}$ are all identical, which are all given by $\\bar{\\mathbf{x}}$ . Moreover, the new $\\pi_k^{(1)}$ are identical to their corresponding initial value $\\pi_k^{(0)}$ . 
Therefore, in the second EM iteration, we can similarly conclude that:\n\n$$\\mu_k^{(2)} = \\mu_k^{(1)} = \\bar{\\mathbf{x}} , \\quad \\pi_k^{(2)} = \\pi_k^{(1)} = \\pi_k^{(0)}$$\n\nIn other words, the EM algorithm actually stops after the first iteration.",
"answer_length": 3576
},
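The degenerate behaviour derived above is easy to reproduce: with identical initial means, the responsibilities equal the mixing coefficients, so one EM iteration sends every $\mu_k$ to $\bar{\mathbf{x}}$ and nothing moves afterwards. A minimal sketch under assumed synthetic binary data (ours):

```python
# Hypothetical illustration of Exercise 9.13: Bernoulli-mixture EM started with
# identical means mu_hat puts every mu_k at x_bar after one iteration and then
# stops moving. Synthetic binary data; all names and constants are ours.
import numpy as np

rng = np.random.default_rng(2)
N, D, K = 500, 10, 3
X = (rng.random((N, D)) < 0.4).astype(float)   # binary data set
pi = np.array([0.2, 0.3, 0.5])                 # arbitrary initial mixing coefficients
mu = np.tile(np.full(D, 0.35), (K, 1))         # identical initial means mu_hat

def em_step(X, pi, mu):
    # E step, Eq (9.56): responsibilities of the Bernoulli mixture.
    log_p = X @ np.log(mu.T) + (1 - X) @ np.log(1 - mu.T)   # shape (N, K)
    resp = pi * np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M step, Eqs (9.57)-(9.60).
    Nk = resp.sum(axis=0)
    return Nk / N, (resp.T @ X) / Nk[:, None]

pi1, mu1 = em_step(X, pi, mu)
pi2, mu2 = em_step(X, pi1, mu1)

x_bar = X.mean(axis=0)
assert np.allclose(mu1, x_bar)    # every component mean jumps to x_bar ...
assert np.allclose(pi1, pi)       # ... while the mixing coefficients stay put
assert np.allclose(mu2, mu1) and np.allclose(pi2, pi1)   # nothing changes afterwards
print("EM is stuck at mu_k = x_bar after the first iteration")
```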
{
"chapter": 9,
"question_number": "9.14",
"difficulty": "easy",
"question_text": "Consider the joint distribution of latent and observed variables for the Bernoulli distribution obtained by forming the product of $p(\\mathbf{x}|\\mathbf{z}, \\boldsymbol{\\mu})$ given by (9.52: $p(\\mathbf{x}|\\mathbf{z}, \\boldsymbol{\\mu}) = \\prod_{k=1}^{K} p(\\mathbf{x}|\\boldsymbol{\\mu}_k)^{z_k}$) and $p(\\mathbf{z}|\\boldsymbol{\\pi})$ given by (9.53: $p(\\mathbf{z}|\\boldsymbol{\\pi}) = \\prod_{k=1}^{K} \\pi_k^{z_k}.$). Show that if we marginalize this joint distribution with respect to $\\mathbf{z}$ , then we obtain (9.47: $p(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k)$).",
"answer": "Let's follow the hint.\n\n$$p(\\mathbf{x}, \\mathbf{z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = p(\\mathbf{x} | \\mathbf{z}, \\boldsymbol{\\mu}) \\cdot p(\\mathbf{z} | \\boldsymbol{\\pi})$$\n\n$$= \\prod_{k=1}^{K} p(\\mathbf{x} | \\boldsymbol{\\mu}_{k})^{z_{k}} \\cdot \\prod_{k=1}^{K} \\pi_{k}^{z_{k}}$$\n\n$$= \\prod_{k=1}^{K} \\left[ \\pi_{k} p(\\mathbf{x} | \\boldsymbol{\\mu}_{k}) \\right]^{z_{k}}$$\n\nThen we marginalize over z, yielding:\n\n$$p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\sum_{\\mathbf{z}} p(\\mathbf{x}, \\mathbf{z}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{\\mathbf{z}} \\prod_{k=1}^{K} \\left[ \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k) \\right]^{z_k}$$\n\nThe summation over $\\mathbf{z}$ is made up of K terms and the k-th term corresponds to $z_k = 1$ and other $z_j$ , where $j \\neq k$ , equals 0. Therefore, the k-th term will simply reduce to $\\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k)$ . Hence, performing the summation over $\\mathbf{z}$ will finally give Eq (9.47: $p(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k)$) just as required. To be more clear, we summarize the aforementioned statement:\n\n$$\\begin{aligned} p(\\mathbf{x}|\\boldsymbol{\\mu}) &= \\sum_{\\mathbf{z}} \\prod_{k=1}^{K} \\left[ \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k) \\right]^{z_k} \\\\ &= \\prod_{k=1}^{K} \\left[ \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k) \\right]^{z_k} \\Big|_{z_1=1} + \\dots + \\prod_{k=1}^{K} \\left[ \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k) \\right]^{z_k} \\Big|_{z_K=1} \\\\ &= \\pi_1 p(\\mathbf{x}|\\boldsymbol{\\mu}_1) + \\dots + \\pi_K p(\\mathbf{x}|\\boldsymbol{\\mu}_K) \\\\ &= \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}|\\boldsymbol{\\mu}_k) \\end{aligned}$$",
"answer_length": 1647
},
{
"chapter": 9,
"question_number": "9.15",
"difficulty": "easy",
"question_text": "Show that if we maximize the expected complete-data log likelihood function (9.55: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$) for a mixture of Bernoulli distributions with respect to $\\mu_k$ , we obtain the M step equation (9.59: $\\mu_k = \\overline{\\mathbf{x}}_k.$).",
"answer": "Noticing that $\\pi_k$ doesn't depend on any $\\mu_{ki}$ , we can omit the first term in the open brace when calculating the derivative of Eq (9.55: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$) with respect to $\\mu_{ki}$ :\n\n$$\\frac{\\partial \\mathbb{E}_{z}[\\ln p]}{\\partial \\mu_{ki}} = \\frac{\\partial}{\\partial \\mu_{ki}} \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\left\\{ \\gamma(z_{nk}) \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\} \n= \\frac{\\partial}{\\partial \\mu_{ki}} \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\sum_{i=1}^{D} \\left\\{ \\gamma(z_{nk}) \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\} \n= \\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\mu_{ki}} \\left\\{ \\gamma(z_{nk}) \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\} \n= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\left( \\frac{x_{ni}}{\\mu_{ki}} - \\frac{1 - x_{ni}}{1 - \\mu_{ki}} \\right) \n= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\frac{x_{ni} - \\mu_{ki}}{\\mu_{ki}(1 - \\mu_{ki})}$$\n\nSetting the derivative equal to 0, we can obtain:\n\n$$\\mu_{ki} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}}{\\sum_{n=1}^{N} \\gamma(z_{nk})} = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}$$\n\nWhere $N_k$ is defined as Eq (9.57: $N_k = \\sum_{n=1}^N \\gamma(z_{nk})$). If we group all the $\\mu_{ki}$ as a column vector, i.e., $\\boldsymbol{\\mu}_k = [\\mu_{k1}, \\mu_{k2}, ..., \\mu_{kD}]^T$ , we will obtain Eq (9.59: $\\mu_k = \\overline{\\mathbf{x}}_k.$) just as required.",
"answer_length": 1677
},
{
"chapter": 9,
"question_number": "9.16",
"difficulty": "easy",
"question_text": "Show that if we maximize the expected complete-data log likelihood function (9.55: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$) for a mixture of Bernoulli distributions with respect to the mixing coefficients $\\pi_k$ , using a Lagrange multiplier to enforce the summation constraint, we obtain the M step equation (9.60: $\\pi_k = \\frac{N_k}{N}$).",
"answer": "We follow the hint beginning by introducing a Lagrange multiplier:\n\n$$L = \\mathbb{E}_{z}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] + \\lambda (\\sum_{k=1}^{K} \\pi_{k} - 1)$$\n\nWe calculate the derivative of L with respect to $\\pi_k$ and then set it equal to 0:\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{\\gamma(z_{nk})}{\\pi_k} + \\lambda = 0 \\tag{*}$$\n\nHere $\\mathbb{E}_z[\\ln p]$ is given by Eq (9.55: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$). We first multiply both sides of the expression by $\\pi_k$ and then adopt summation with respect to k, which gives:\n\n$$\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) + \\sum_{k=1}^{K} \\lambda \\pi_{k} = 0$$\n\nNoticing that $\\sum_{k=1}^{K} \\pi_k$ equals 1, we can obtain:\n\n$$\\lambda = -\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk})$$\n\nFinally, substituting it back into (\\*) and rearranging it, we can obtain:\n\n$$\\pi_k = -\\frac{\\sum_{k=1}^K \\gamma(z_{nk})}{\\lambda} = \\frac{\\sum_{k=1}^K \\gamma(z_{nk})}{\\sum_{n=1}^N \\sum_{k=1}^K \\gamma(z_{nk})} = \\frac{N_k}{N}$$\n\nWhere $N_k$ is defined by Eq (9.57: $N_k = \\sum_{n=1}^N \\gamma(z_{nk})$) and N is the summation of $N_k$ over k, and also equal to the number of data points.",
"answer_length": 1423
},
{
"chapter": 9,
"question_number": "9.17",
"difficulty": "easy",
"question_text": "Show that as a consequence of the constraint $0 \\le p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k) \\le 1$ for the discrete variable $\\mathbf{x}_n$ , the incomplete-data log likelihood function for a mixture of Bernoulli distributions is bounded above, and hence that there are no singularities for which the likelihood goes to infinity.",
"answer": "The incomplete-data log likelihood is given by Eq (9.51: $\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\ln \\left\\{ \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k) \\right\\}.$), and $p(\\mathbf{x}_n|\\boldsymbol{\\mu}_k)$ lies in the interval [0, 1], which can be easily verified by its definition, i.e., Eq (9.44: $p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\prod_{i=1}^{D} \\mu_i^{x_i} (1 - \\mu_i)^{(1 - x_i)}$). Therefore, we can obtain:\n\n$$\\ln p(\\mathbf{X}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\ln \\left\\{ \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k) \\right\\} \\le \\sum_{n=1}^{N} \\ln \\left\\{ \\sum_{k=1}^{K} \\pi_k \\times 1 \\right\\} \\le \\sum_{n=1}^{N} \\ln 1 = 0$$\n\nWhere we have used the fact that the logarithm is monotonic increasing, and that the summation of $\\pi_k$ over k equals 1. Moreover, if we want to achieve the equality, we need $p(\\mathbf{x}_n|\\boldsymbol{\\mu}_k)$ equal to 1 for all n=1,2,...,N. However, this is hardly possible.\n\nTo illustrate this, suppose that $p(\\mathbf{x}_n|\\boldsymbol{\\mu}_k)$ equals 1 for all data points. Without loss of generality, consider two data points $\\mathbf{x}_1 = [x_{11}, x_{12}, ..., x_{1D}]^T$ and $\\mathbf{x}_2 = [x_{21}, x_{22}, ..., x_{2D}]^T$ , whose *i*-th entries are different. We further assume $x_{1i} = 1$ and $x_{2i} = 0$ since $x_i$ is a binary variable. According to Eq (9.44: $p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\prod_{i=1}^{D} \\mu_i^{x_i} (1 - \\mu_i)^{(1 - x_i)}$), if we want $p(\\mathbf{x}_1|\\boldsymbol{\\mu}_k) = 1$ , we must have $\\mu_i = 1$ (otherwise it muse be less than 1). However, this will lead $p(\\mathbf{x}_2|\\boldsymbol{\\mu}_k)$ equal to 0 since there is a term $1 - \\mu_i = 0$ in the product shown in Eq (9.44: $p(\\mathbf{x}|\\boldsymbol{\\mu}) = \\prod_{i=1}^{D} \\mu_i^{x_i} (1 - \\mu_i)^{(1 - x_i)}$).\n\nTherefore, when the data set is pathological, we will achieve this singularity point by adopting EM. Note that in the main text, the author states that the condition should be pathological initialization. This is also true. For instance, in the extreme case, when the data set is not pathological, if we initialize one $\\pi_k$ equal to 1 and others all 0, and some of $\\mu_i$ to 1 and others 0, we may also achieve the singularity.",
"answer_length": 2290
},
{
"chapter": 9,
"question_number": "9.18",
"difficulty": "medium",
"question_text": "Consider a Bernoulli mixture model as discussed in Section 9.3.3, together with a prior distribution $p(\\mu_k|a_k,b_k)$ over each of the parameter vectors $\\mu_k$ given by the beta distribution (2.13: $(\\mu|a,b) = \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)} \\mu^{a-1} (1-\\mu)^{b-1}$), and a Dirichlet prior $p(\\pi|\\alpha)$ given by (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$). Derive the EM algorithm for maximizing the posterior probability $p(\\mu,\\pi|\\mathbf{X})$ .",
"answer": "In Prob.9.4, we have proved that if we want to maximize the posterior by EM, the only modification is that in the M-step, we need to maximize $Q'(\\theta, \\theta^{\\text{old}}) = Q(\\theta, \\theta^{\\text{old}}) + \\ln p(\\theta)$ . Here $Q(\\theta, \\theta^{\\text{old}})$ has already been given by $\\mathbb{E}_z[\\ln p]$ , i.e., Eq (9.55: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\left[ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right] \\right\\}$). Therefore, we derive for $\\ln p(\\theta)$ . Note that $\\ln p(\\theta)$ is made up of two parts:(i) the prior for $\\mu_k$ and (ii) the prior for $\\pi$ , we begin by dealing with the first part. Here we assume the Beta prior for $\\mu_{ki}$ , where k is fixed, is the same, i.e.,:\n\n$$p(\\mu_{ki}|a_k,b_k) = \\frac{\\Gamma(a_k+b_k)}{\\Gamma(a_k)\\Gamma(b_k)} \\mu_{ki}^{a_k-1} \\left(1-\\mu_{ki}\\right)^{b_k-1}, \\quad i=1,2,...,D$$\n\nTherefore, the contribution of this Beta prior to $\\ln p(\\theta)$ should be given by:\n\n$$\\sum_{k=1}^{K} \\sum_{i=1}^{D} (a_i - 1) \\ln \\mu_{ki} + (b_i - 1) \\ln (1 - \\mu_{ki})$$\n\nOne thing worthy mentioned is that since we will maximize $Q'(\\theta, \\theta^{\\text{old}})$ with respect to $\\pi, \\mu_k$ , we can omit the terms which do not depend on $\\pi, \\mu_k$ , such as $\\Gamma(a_k + b_k) / \\Gamma(a_k) \\Gamma(b_k)$ . Then we deal with the second part. According to Eq (2.38: $Dir(\\boldsymbol{\\mu}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)\\cdots\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\mu_k^{\\alpha_k - 1}$), we can obtain:\n\n$$p(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha}) = \\frac{\\Gamma(\\alpha_0)}{\\Gamma(\\alpha_1)...\\Gamma(\\alpha_K)} \\prod_{k=1}^K \\pi_k^{\\alpha_k - 1}$$\n\nTherefore, the contribution of the Dirichlet prior to $\\ln p(\\theta)$ should be given by:\n\n$$\\sum_{k=1}^{K} (\\alpha_k - 1) \\ln \\pi_k$$\n\nTherefore, now $Q'(\\theta, \\theta^{\\text{old}})$ can be written as:\n\n$$Q'(\\theta, \\theta^{\\text{old}}) = \\mathbb{E}_{z}[\\ln p] + \\sum_{k=1}^{K} \\sum_{i=1}^{D} \\left[ (\\alpha_{i} - 1) \\ln \\mu_{ki} + (b_{i} - 1) \\ln (1 - \\mu_{ki}) \\right] + \\sum_{k=1}^{K} (\\alpha_{k} - 1) \\ln \\pi_{ki}$$\n\nSimilarly, we calculate the derivative of $Q'(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{\\text{old}})$ with respect to $\\mu_{ki}$ . This can be simplified by reusing the deduction in Prob.9.15:\n\n$$\\begin{split} \\frac{\\partial Q^{'}}{\\partial \\mu_{ki}} &= \\frac{\\partial \\mathbb{E}_{z}[\\ln p]}{\\partial \\mu_{ki}} + \\frac{a_{i} - 1}{\\mu_{ki}} - \\frac{b_{i} - 1}{1 - \\mu_{ki}} \\\\ &= \\sum_{n=1}^{N} \\gamma(z_{nk}) (\\frac{x_{ni}}{\\mu_{ki}} - \\frac{1 - x_{ni}}{1 - \\mu_{ki}}) + \\frac{a_{i} - 1}{\\mu_{ki}} - \\frac{b_{i} - 1}{1 - \\mu_{ki}} \\\\ &= \\frac{\\sum_{n=1}^{N} x_{ni} \\cdot \\gamma(z_{nk}) + a_{i} - 1}{\\mu_{ki}} - \\frac{\\sum_{n=1}^{N} (1 - x_{ni}) \\gamma(z_{nk}) + b_{i} - 1}{1 - \\mu_{ki}} \\\\ &= \\frac{N_{k} \\bar{x}_{ki} + a_{i} - 1}{\\mu_{ki}} - \\frac{N_{k} - N_{k} \\bar{x}_{ki} + b_{i} - 1}{1 - \\mu_{ki}} \\end{split}$$\n\nNote that here $\\bar{x}_{ki}$ is defined as the *i*-th entry of $\\bar{x}_k$ defined in Eq (9.58: $\\overline{\\mathbf{x}}_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\mathbf{x}_n$). 
To be more clear, we have used Eq (9.57: $N_k = \\sum_{n=1}^N \\gamma(z_{nk})$) and Eq (9.58: $\\overline{\\mathbf{x}}_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\mathbf{x}_n$) in the last step:\n\n$$\\sum_{n=1}^{N} x_{ni} \\cdot \\gamma(z_{nk}) = N_k \\cdot \\left[ \\frac{1}{N_k} \\sum_{n=1}^{N} x_{ni} \\cdot \\gamma(z_{nk}) \\right] = N_k \\cdot \\bar{x}_{ki}$$\n\nSetting the derivative equal to 0 and rearranging it, we can obtain:\n\n$$\\mu_{ki} = \\frac{N_k \\bar{x}_{ki} + a_i - 1}{N_k + a_i - 1 + b_i - 1}$$\n\nNext we maximize $Q'(\\theta, \\theta^{\\text{old}})$ with respect to $\\pi$ . By analogy to Prob.9.16, we introduce Lagrange multiplier:\n\n$$L \\propto \\mathbb{E}_z + \\sum_{k=1}^K (\\alpha_k - 1) \\ln \\pi_k + \\lambda (\\sum_{k=1}^K \\pi_k - 1)$$\n\nNote that the second term on the right hand side of Q' in its definition has been omitted, since that term can be viewed as a constant with regard to $\\pi$ . We then calculate the derivative of L with respect to $\\pi_k$ by taking advantage of Prob.9.16:\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{\\gamma(z_{nk})}{\\pi_k} + \\frac{\\alpha_k - 1}{\\pi_k} + \\lambda = 0$$\n\nSimilarly, We first multiply both sides of the expression by $\\pi_k$ and then adopt summation with respect to k, which gives:\n\n$$\\sum_{k=1}^{K} \\sum_{n=1}^{N} \\gamma(z_{nk}) + \\sum_{k=1}^{K} (\\alpha_k - 1) + \\sum_{k=1}^{K} \\lambda \\pi_k = 0$$\n\nNoticing that $\\sum_{k=1}^{K} \\pi_k$ equals 1, we can obtain:\n\n$$\\lambda = -\\sum_{k=1}^{K} N_k - \\sum_{k=1}^{K} (\\alpha_k - 1) = -N - \\alpha_0 + K$$\n\nHere we have used Eq (2.39: $\\alpha_0 = \\sum_{k=1}^K \\alpha_k.$). Substituting it back into the derivative, we can obtain:\n\n $\\pi_k = \\frac{\\sum_{n=1}^N \\gamma(z_{nk}) + \\alpha_k - 1}{-\\lambda} = \\frac{N_k + \\alpha_k - 1}{N + \\alpha_0 - K}$ \n\nIt is not difficult to show that if N is large, the update formula for $\\pi$ and $\\mu$ in this case (MAP), will reduce to the results given in the main text (MLE).",
"answer_length": 5170
},
{
"chapter": 9,
"question_number": "9.19",
"difficulty": "medium",
"question_text": "Consider a D-dimensional variable $\\mathbf{x}$ each of whose components i is itself a multinomial variable of degree M so that $\\mathbf{x}$ is a binary vector with components $x_{ij}$ where $i=1,\\ldots,D$ and $j=1,\\ldots,M$ , subject to the constraint that $\\sum_j x_{ij}=1$ for all i. Suppose that the distribution of these variables is described by a mixture of the discrete multinomial distributions considered in Section 2.2 so that\n\n$$p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x} | \\boldsymbol{\\mu}_k)$$\n (9.84: $p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x} | \\boldsymbol{\\mu}_k)$)\n\nwhere\n\n$$p(\\mathbf{x}|\\boldsymbol{\\mu}_k) = \\prod_{i=1}^{D} \\prod_{j=1}^{M} \\mu_{kij}^{x_{ij}}.$$\n(9.85: $p(\\mathbf{x}|\\boldsymbol{\\mu}_k) = \\prod_{i=1}^{D} \\prod_{j=1}^{M} \\mu_{kij}^{x_{ij}}.$)\n\nThe parameters $\\mu_{kij}$ represent the probabilities $p(x_{ij}=1|\\boldsymbol{\\mu}_k)$ and must satisfy $0 \\leqslant \\mu_{kij} \\leqslant 1$ together with the constraint $\\sum_j \\mu_{kij} = 1$ for all values of k and i. Given an observed data set $\\{\\mathbf{x}_n\\}$ , where $n=1,\\ldots,N$ , derive the E and M step equations of the EM algorithm for optimizing the mixing coefficients $\\pi_k$ and the component parameters $\\mu_{kij}$ of this distribution by maximum likelihood.",
"answer": "We first introduce a latent variable $\\mathbf{z} = [z_1, z_2, ..., z_K]^T$ , only one of which equals 1 and others all 0. The conditional distribution of $\\mathbf{x}$ is given by:\n\n$$p(\\mathbf{x}|\\mathbf{z}, \\boldsymbol{\\mu}) = \\prod_{k=1}^{K} p(\\mathbf{x}|\\boldsymbol{\\mu}_k)^{z_k}$$\n\nThe distribution of the latent variable is given by:\n\n$$p(\\mathbf{z}|\\boldsymbol{\\pi}) = \\prod_{k=1}^K \\pi_k^{z_k}$$\n\nIf we follow the same procedure as in Prob.9.14, we can show that Eq (9.84: $p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k p(\\mathbf{x} | \\boldsymbol{\\mu}_k)$) holds. In other words, the introduction of the latent variable is valid. Therefore, according to Bayes' Theorem, we can obtain:\n\n$$p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\prod_{n=1}^{N} p(\\mathbf{z}_n | \\boldsymbol{\\pi}) p(\\mathbf{x}_n | \\mathbf{z}_n, \\boldsymbol{\\mu}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\left[ \\pi_k p(\\mathbf{x} | \\boldsymbol{\\mu}) \\right]^{z_{nk}}$$\n\nWe further use Eq (9.85: $p(\\mathbf{x}|\\boldsymbol{\\mu}_k) = \\prod_{i=1}^{D} \\prod_{j=1}^{M} \\mu_{kij}^{x_{ij}}.$), which gives:\n\n$$\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\ln \\left[ \\pi_{k} \\prod_{d=1}^{D} \\prod_{j=1}^{M} \\mu_{kij}^{x_{nij}} \\right]$$\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left[ \\ln \\pi_{k} + \\sum_{d=1}^{D} \\sum_{j=1}^{M} x_{nij} \\ln \\mu_{kij} \\right]$$\n\nSimilarly, in the E-step, the responsibilities are evaluated using Bayes' theorem, which gives:\n\n$$\\gamma(z_{nk}) = \\mathbb{E}[z_{nk}] = \\frac{\\pi_k p(\\mathbf{x}_n | \\boldsymbol{\\mu}_k)}{\\sum_{j=1}^K \\pi_j p(\\mathbf{x}_n | \\boldsymbol{\\mu}_j)}$$\n\nNext, in the M-step, we are required to maximize $\\mathbb{E}_z[\\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi})]$ with respect to $\\boldsymbol{\\pi}$ and $\\boldsymbol{\\mu}_k$ , where $\\mathbb{E}_z[\\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\mu}, \\boldsymbol{\\pi})]$ is given by:\n\n$$\\mathbb{E}_{z}[\\ln p(\\mathbf{X},\\mathbf{Z}|\\boldsymbol{\\mu},\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\Big[ \\ln \\pi_{k} + \\sum_{i=1}^{D} \\sum_{j=1}^{M} x_{nij} \\ln \\mu_{kij} \\Big]$$\n\nNotice that there exists two constraints: (i) the summation of $\\pi_k$ over k equals 1, and (ii) the summation of $\\mu_{kij}$ over j equals 1 for any k and i, we need to introduce Lagrange multiplier:\n\n$$L = \\mathbb{E}_{z}[\\ln p] + \\lambda(\\sum_{k=1}^{K} \\pi_{k} - 1) + \\sum_{k=1}^{K} \\sum_{i=1}^{D} \\eta_{ki}(\\sum_{i=1}^{M} \\mu_{kij} - 1)$$\n\nFirst we maximize L with respect to $\\pi_k$ . This is actually identical to the case in the main text. To be more clear, we calculate the derivative of L with respect to $\\pi_k$ :\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{\\gamma(z_{nk})}{\\pi_k} + \\lambda$$\n\nAs in Prob.9.16, we can obtain:\n\n$$\\pi_k = \\frac{N_k}{N}$$\n\nWhere $N_k$ is defined as:\n\n$$N_k = \\sum_{n=1}^N \\gamma(z_{nk})$$\n\nN is the summation of $N_k$ over k, and also equals the number of data points. 
Then we calculate the derivative of L with respect to $\\mu_{kij}$ :\n\n$$\\frac{\\partial L}{\\partial \\mu_{kij}} = \\sum_{n=1}^{N} \\frac{\\gamma(z_{nk}) x_{nij}}{\\mu_{kij}} + \\eta_{ki}$$\n\nWe set it to 0 and multiply both sides by $\\mu_{kij}$ , which gives:\n\n$$\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{nij} + \\eta_{ki} \\mu_{kij} = 0$$\n\nBy analogy to the derivation of $\\pi_k$ , we sum the above expression over j so that we can use the constraint $\\sum_j \\mu_{kij} = 1$ .\n\n$$\\eta_{ki} = -\\sum_{j=1}^{M} \\sum_{n=1}^{N} \\gamma(z_{nk}) x_{nij} = -\\sum_{n=1}^{N} \\gamma(z_{nk}) \\left[ \\sum_{j=1}^{M} x_{nij} \\right] = -\\sum_{n=1}^{N} \\gamma(z_{nk}) = -N_k$$\n\nWhere we have used the fact that $\\sum_{j} x_{nij} = 1$ . Substituting back into the derivative, we can obtain:\n\n$$\\mu_{kij} = -\\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{nij}}{\\eta_{ki}} = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) x_{nij}$$",
"answer_length": 3905
},
{
"chapter": 9,
"question_number": "9.2",
"difficulty": "easy",
"question_text": "Apply the Robbins-Monro sequential estimation procedure described in Section 2.3.5 to the problem of finding the roots of the regression function given by the derivatives of J in (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$) with respect to $\\mu_k$ . Show that this leads to a stochastic K-means algorithm in which, for each data point $\\mathbf{x}_n$ , the nearest prototype $\\mu_k$ is updated using (9.5: $\\boldsymbol{\\mu}_k^{\\text{new}} = \\boldsymbol{\\mu}_k^{\\text{old}} + \\eta_n(\\mathbf{x}_n - \\boldsymbol{\\mu}_k^{\\text{old}})$).",
"answer": "By analogy to Eq (9.1: $J = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\|\\mathbf{x}_n - \\boldsymbol{\\mu}_k\\|^2$), we can write down:\n\n$$J_N = J_{N-1} + \\sum_{k=1}^{K} r_{Nk} ||\\mathbf{x}_N - \\boldsymbol{\\mu}_k||^2$$\n\nIn the E-step, we still assign the N-th data $\\mathbf{x}_N$ to the closet center and suppose that this closet center is $\\boldsymbol{\\mu}_m$ . Therefore, the expression above will reduce to:\n\n$$J_N = J_{N-1} + ||\\mathbf{x}_n - \\boldsymbol{\\mu}_m||^2$$\n\nIn the M-step, we set the derivative of $J_N$ with respect to $\\mu_k$ to 0, where k = 1, 2, ..., K. We can observe that for those $\\mu_k$ , $k \\neq m$ , we have:\n\n$$\\frac{\\partial J_N}{\\partial \\boldsymbol{\\mu}_k} = \\frac{\\partial J_{N-1}}{\\partial \\boldsymbol{\\mu}_k}$$\n\nIn other words, we will only update $\\mu_m$ in the M-step by setting the derivative of $J_N$ equal to 0. Utilizing Eq (9.4: $\\mu_k = \\frac{\\sum_n r_{nk} \\mathbf{x}_n}{\\sum_n r_{nk}}.$), we can obtain:\n\n$$\\begin{split} \\boldsymbol{\\mu}_{m}^{(N)} &= \\frac{\\sum_{n=1}^{N-1} r_{nk} \\mathbf{x}_{n} + \\mathbf{x}_{N}}{\\sum_{n=1}^{N-1} r_{nk} + 1} \\\\ &= \\frac{\\frac{\\sum_{n=1}^{N-1} r_{nk} \\mathbf{x}_{n}}{\\sum_{n=1}^{N-1} r_{nk}} + \\frac{\\mathbf{x}_{N}}{\\sum_{n=1}^{N-1} r_{nk}}}{1 + \\frac{1}{\\sum_{n=1}^{N-1} r_{nk}}} \\\\ &= \\frac{\\boldsymbol{\\mu}_{m}^{(N-1)} + \\frac{\\mathbf{x}_{N}}{\\sum_{n=1}^{N-1} r_{nk}}}{1 + \\frac{1}{\\sum_{n=1}^{N-1} r_{nk}}} \\\\ &= \\boldsymbol{\\mu}_{m}^{(N-1)} + \\frac{\\frac{\\mathbf{x}_{N}}{\\sum_{n=1}^{N-1} r_{nk}} - \\frac{\\boldsymbol{\\mu}_{m}^{(N-1)}}{\\sum_{n=1}^{N-1} r_{nk}}}{1 + \\frac{1}{\\sum_{n=1}^{N-1} r_{nk}}} \\\\ &= \\boldsymbol{\\mu}_{m}^{(N-1)} + \\frac{\\mathbf{x}_{N} - \\boldsymbol{\\mu}_{m}^{(N-1)}}{1 + \\sum_{n=1}^{N-1} r_{nk}} \\end{split}$$\n\nSo far we have obtained a sequential on-line update formula just as required.",
"answer_length": 1795
},
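The sequential update derived in Exercise 9.2 can be sketched as follows; the synthetic data, initialization, and variable names are illustrative assumptions:

```python
import numpy as np

# Stochastic (sequential) K-means: for each new point, the nearest prototype is moved by
#   mu_new = mu_old + eta_n * (x_n - mu_old),  with eta_n = 1 / (points assigned so far),
# matching the final expression in the derivation above.
rng = np.random.default_rng(1)
K, D = 3, 2
X = np.vstack([rng.normal(c, 0.3, size=(100, D)) for c in ([0, 0], [3, 0], [0, 3])])
rng.shuffle(X)

mu = X[:K].copy()                   # initialize prototypes with the first K points
counts = np.ones(K)                 # number of points assigned to each prototype so far

for x_n in X[K:]:
    k = np.argmin(((x_n - mu) ** 2).sum(axis=1))   # E-step: nearest prototype
    counts[k] += 1
    eta = 1.0 / counts[k]                          # learning rate from the derivation
    mu[k] += eta * (x_n - mu[k])                   # M-step: online mean update

print(np.round(mu, 2))              # prototypes typically end up near the three cluster centres
```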
{
"chapter": 9,
"question_number": "9.20",
"difficulty": "easy",
"question_text": "Show that maximization of the expected complete-data log likelihood function (9.62: $\\mathbb{E}\\left[\\ln p(\\mathbf{t}, \\mathbf{w} | \\alpha, \\beta)\\right] = \\frac{M}{2} \\ln \\left(\\frac{\\alpha}{2\\pi}\\right) - \\frac{\\alpha}{2} \\mathbb{E}\\left[\\mathbf{w}^{\\mathrm{T}} \\mathbf{w}\\right] + \\frac{N}{2} \\ln \\left(\\frac{\\beta}{2\\pi}\\right) - \\frac{\\beta}{2} \\sum_{n=1}^{N} \\mathbb{E}\\left[(t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}_n)^2\\right].$) for the Bayesian linear regression model leads to the M step reestimation result (9.63: $\\alpha = \\frac{M}{\\mathbb{E}\\left[\\mathbf{w}^{\\mathrm{T}}\\mathbf{w}\\right]} = \\frac{M}{\\mathbf{m}_{N}^{\\mathrm{T}}\\mathbf{m}_{N} + \\mathrm{Tr}(\\mathbf{S}_{N})}.$) for $\\alpha$ .",
"answer": "We first calculate the derivative of Eq (9.62: $\\mathbb{E}\\left[\\ln p(\\mathbf{t}, \\mathbf{w} | \\alpha, \\beta)\\right] = \\frac{M}{2} \\ln \\left(\\frac{\\alpha}{2\\pi}\\right) - \\frac{\\alpha}{2} \\mathbb{E}\\left[\\mathbf{w}^{\\mathrm{T}} \\mathbf{w}\\right] + \\frac{N}{2} \\ln \\left(\\frac{\\beta}{2\\pi}\\right) - \\frac{\\beta}{2} \\sum_{n=1}^{N} \\mathbb{E}\\left[(t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}_n)^2\\right].$) with respect to $\\alpha$ and set it to 0:\n\n$$\\frac{\\partial E[\\ln p]}{\\partial \\alpha} = \\frac{M}{2} \\frac{1}{2\\pi} \\frac{2\\pi}{\\alpha} - \\frac{\\mathbb{E}[\\mathbf{w}^T \\mathbf{w}]}{2} = 0$$\n\nWe rearrange the equation above, which gives:\n\n$$\\alpha = \\frac{M}{\\mathbb{E}[\\mathbf{w}^T \\mathbf{w}]} \\tag{*}$$\n\nTherefore, we now need to calculate the expectation $\\mathbb{E}[\\mathbf{w}^T\\mathbf{w}]$ . Notice that the posterior has already been given by Eq (3.49):\n\n$$p(\\mathbf{w}|\\mathbf{t}) = \\mathcal{N}(\\mathbf{m}_N, \\mathbf{S}_N)$$\n\nTo calculate $\\mathbb{E}[\\mathbf{w}^T\\mathbf{w}]$ , here we write down an property for a Gaussian random variable: if $\\mathbf{x} \\sim \\mathcal{N}(\\mathbf{m}, \\mathbf{\\Sigma})$ , we have:\n\n$$\\mathbb{F}[\\mathbf{x}^T \\mathbf{A} \\mathbf{x}] = \\mathbf{Tr}[\\mathbf{A} \\mathbf{\\Sigma}] + \\mathbf{m}^T \\mathbf{A} \\mathbf{m}$$\n\nThis property has been shown in Eq(378) in 'the Matrix Cookbook'. Utilizing this property, we can obtain:\n\n$$\\mathbb{E}[\\mathbf{w}^T\\mathbf{w}] = \\mathrm{Tr}[\\mathbf{S}_N] + \\mathbf{m}_N^T\\mathbf{m}_N$$\n\nSubstituting it back into (\\*), we obtain what is required.",
"answer_length": 1529
},
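A small Python sketch of the resulting EM re-estimation of alpha for Exercise 9.20, using the standard posterior formulas for m_N and S_N from Chapter 3; the synthetic data and starting values are assumptions for illustration only:

```python
import numpy as np

# EM re-estimation of alpha for Bayesian linear regression:
#   alpha_new = M / (m_N^T m_N + Tr(S_N)),
# with posterior covariance S_N = (alpha I + beta Phi^T Phi)^{-1} and mean m_N = beta S_N Phi^T t.
rng = np.random.default_rng(2)
N, M = 50, 6
Phi = rng.normal(size=(N, M))                       # design matrix (rows phi_n^T)
w_true = rng.normal(size=M)
beta = 25.0                                         # noise precision, assumed known here
t = Phi @ w_true + rng.normal(scale=beta ** -0.5, size=N)

alpha = 1.0
for _ in range(20):                                 # EM iterations for alpha only
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    alpha = M / (m_N @ m_N + np.trace(S_N))         # M step, Eq (9.63)

print(f"re-estimated alpha = {alpha:.3f}")
```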
{
"chapter": 9,
"question_number": "9.21",
"difficulty": "medium",
"question_text": "Using the evidence framework of Section 3.5, derive the M-step re-estimation equations for the parameter $\\beta$ in the Bayesian linear regression model, analogous to the result (9.63: $\\alpha = \\frac{M}{\\mathbb{E}\\left[\\mathbf{w}^{\\mathrm{T}}\\mathbf{w}\\right]} = \\frac{M}{\\mathbf{m}_{N}^{\\mathrm{T}}\\mathbf{m}_{N} + \\mathrm{Tr}(\\mathbf{S}_{N})}.$) for $\\alpha$ .",
"answer": "We calculate the derivative of Eq (9.62: $\\mathbb{E}\\left[\\ln p(\\mathbf{t}, \\mathbf{w} | \\alpha, \\beta)\\right] = \\frac{M}{2} \\ln \\left(\\frac{\\alpha}{2\\pi}\\right) - \\frac{\\alpha}{2} \\mathbb{E}\\left[\\mathbf{w}^{\\mathrm{T}} \\mathbf{w}\\right] + \\frac{N}{2} \\ln \\left(\\frac{\\beta}{2\\pi}\\right) - \\frac{\\beta}{2} \\sum_{n=1}^{N} \\mathbb{E}\\left[(t_n - \\mathbf{w}^{\\mathrm{T}} \\boldsymbol{\\phi}_n)^2\\right].$) with respect to $\\beta$ and set it equal to 0:\n\n$$\\frac{\\partial \\ln p}{\\partial \\beta} = \\frac{N}{2} \\frac{1}{2\\pi} \\frac{2\\pi}{\\beta} - \\frac{1}{2} \\sum_{n=1}^{N} \\mathbb{E}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2] = 0$$\n\nRearranging it, we obtain:\n\n$$\\beta = \\frac{N}{\\sum_{n=1}^{N} \\mathbb{E}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2]}$$\n\nTherefore, we are required to calculate the expectation. To be more clear, this expectation is with respect to the posterior defined by Eq (3.49):\n\n$$p(\\mathbf{w}|\\mathbf{t}) = \\mathcal{N}(\\mathbf{m}_N, \\mathbf{S}_N)$$\n\nWe expand the expectation:\n\n$$\\begin{split} \\mathbb{E}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2] &= \\mathbb{E}[t_n^2 - 2t_n \\cdot \\mathbf{w}^T \\boldsymbol{\\phi}_n + \\mathbf{w}^T \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{w}] \\\\ &= \\mathbb{E}[t_n^2] - \\mathbb{E}[2t_n \\cdot \\mathbf{w}^T \\boldsymbol{\\phi}_n] + \\mathbb{E}[\\mathbf{w}^T (\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T) \\mathbf{w}] \\\\ &= t_n^2 - 2t_n \\cdot \\mathbb{E}[\\boldsymbol{\\phi}_n^T \\mathbf{w}] + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{S}_N] + \\mathbf{m}_N^T \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{m}_N \\\\ &= t_n^2 - 2t_n \\boldsymbol{\\phi}_n^T \\cdot \\mathbb{E}[\\mathbf{w}] + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{S}_N] + \\mathbf{m}_N^T \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{m}_N \\\\ &= t_n^2 - 2t_n \\boldsymbol{\\phi}_n^T \\mathbf{m}_N + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{S}_N] + \\mathbf{m}_N^T \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{m}_N \\\\ &= (t_n - \\mathbf{m}_N^T \\boldsymbol{\\phi}_N)^2 + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{S}_N] \\end{split}$$\n\nSubstituting it back into the derivative, we can obtain:\n\n$$\\frac{1}{\\beta} = \\frac{1}{N} \\sum_{n=1}^{N} \\left\\{ (t_n - \\mathbf{m}_N^T \\boldsymbol{\\phi}_N)^2 + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\mathbf{S}_N] \\right\\}$$\n$$= \\frac{1}{N} \\left\\{ ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N||^2 + \\text{Tr}[\\mathbf{\\Phi}^T \\mathbf{\\Phi} \\mathbf{S}_N] \\right\\}$$\n\nNote that in the last step, we have performed vectorization. Here the *j*-th row of $\\Phi$ is given by $\\phi_j$ , identical to the definition given in Chapter 3.",
"answer_length": 2657
},
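The beta re-estimation derived in Exercise 9.21 can be combined with the alpha update into a simple EM loop; everything below (synthetic data, initial values) is an illustrative sketch, not the original author's code:

```python
import numpy as np

# Joint EM re-estimation of alpha and beta for Bayesian linear regression:
#   alpha_new  = M / (m_N^T m_N + Tr(S_N))
#   1/beta_new = ( ||t - Phi m_N||^2 + Tr(Phi^T Phi S_N) ) / N
rng = np.random.default_rng(3)
N, M = 80, 5
Phi = rng.normal(size=(N, M))
t = Phi @ rng.normal(size=M) + rng.normal(scale=0.2, size=N)   # true noise precision = 25

alpha, beta = 1.0, 1.0
for _ in range(50):
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    alpha = M / (m_N @ m_N + np.trace(S_N))                    # Eq (9.63)
    resid = t - Phi @ m_N
    beta = N / (resid @ resid + np.trace(Phi.T @ Phi @ S_N))   # result derived above

print(f"alpha = {alpha:.3f}, beta = {beta:.3f} (true noise precision 25)")
```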
{
"chapter": 9,
"question_number": "9.22",
"difficulty": "medium",
"question_text": "By maximization of the expected complete-data log likelihood defined by (9.66: $\\mathbb{E}_{\\mathbf{w}} \\left[ \\ln p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) p(\\mathbf{w}|\\alpha) \\right]$), derive the M step equations (9.67: $\\alpha_i^{\\text{new}} = \\frac{1}{m_i^2 + \\Sigma_{ii}}$) and (9.68: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\beta^{-1} \\sum_i \\gamma_i}{N}$) for re-estimating the hyperparameters of the relevance vector machine for regression.",
"answer": "First let's expand the complete-data log likelihood using Eq (7.79: $p(\\mathbf{t}|\\mathbf{X}, \\mathbf{w}, \\beta) = \\prod_{n=1}^{N} p(t_n|\\mathbf{x}_n, \\mathbf{w}, \\beta^{-1}).$), Eq (7.80: $p(\\mathbf{w}|\\boldsymbol{\\alpha}) = \\prod_{i=1}^{M} \\mathcal{N}(w_i|0, \\alpha_i^{-1})$) and Eq (7.76: $p(t|\\mathbf{x}, \\mathbf{w}, \\beta) = \\mathcal{N}(t|y(\\mathbf{x}), \\beta^{-1})$).\n\n$$\\begin{split} \\ln p(\\mathbf{t}|\\mathbf{X},\\mathbf{w},\\beta)p(\\mathbf{w}|\\boldsymbol{\\alpha}) &= & \\ln p(\\mathbf{t}|\\mathbf{X},\\mathbf{w},\\beta) + \\ln p(\\mathbf{w}|\\boldsymbol{\\alpha}) \\\\ &= & \\sum_{n=1}^{N} \\ln p(t_n|x_n,\\mathbf{w},\\beta^{-1}) + \\sum_{i=1}^{M} \\ln \\mathcal{N}(w_i|0,\\alpha_i^{-1}) \\\\ &= & \\sum_{n=1}^{N} \\ln \\mathcal{N}(t_n|\\mathbf{w}^T\\boldsymbol{\\phi}_n,\\beta^{-1}) + \\sum_{i=1}^{M} \\ln \\mathcal{N}(w_i|0,\\alpha_i^{-1}) \\\\ &= & \\frac{N}{2} \\ln \\frac{\\beta}{2\\pi} - \\frac{\\beta}{2} \\sum_{n=1}^{N} (t_n - \\mathbf{w}^T\\boldsymbol{\\phi}_n)^2 + \\frac{1}{2} \\sum_{i=1}^{M} \\ln \\frac{\\alpha_i}{2\\pi} - \\sum_{i=1}^{M} \\frac{\\alpha_i}{2} w_i^2 \\end{split}$$\n\nTherefore, the expectation of the complete-data log likelihood with respect to the posterior of $\\mathbf{w}$ equals:\n\n$$\\mathbb{E}_{\\mathbf{w}}[\\ln p] = \\frac{N}{2} \\ln \\frac{\\beta}{2\\pi} - \\frac{\\beta}{2} \\sum_{n=1}^{N} \\mathbb{E}_{\\mathbf{w}}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2] + \\frac{1}{2} \\sum_{i=1}^{M} \\ln \\frac{\\alpha_i}{2\\pi} - \\sum_{i=1}^{M} \\frac{\\alpha_i}{2} \\mathbb{E}_{\\mathbf{w}}[w_i^2]$$\n\nWe calculate the derivative of $\\mathbb{E}_{\\mathbf{w}}[\\ln p]$ with respect to $\\alpha_i$ and set it to 0:\n\n$$\\frac{\\partial \\mathbb{E}_{\\mathbf{w}}[\\ln p]}{\\partial \\alpha_i} = \\frac{1}{2} \\frac{1}{2\\pi} \\frac{2\\pi}{\\alpha_i} - \\frac{1}{2} \\mathbb{E}_{\\mathbf{w}}[w_i^2] = 0$$\n\nRearranging it, we can obtain:\n\n$$\\alpha_i = \\frac{1}{\\mathbb{E}_{\\mathbf{w}}[w_i^2]} = \\frac{1}{\\mathbb{E}_{\\mathbf{w}}[\\mathbf{w}\\mathbf{w}^T]_{(i,i)}}$$\n\nHere the subscript (i,i) represents the entry on the i-th row and i-th column of the matrix $\\mathbb{E}_{\\mathbf{w}}[\\mathbf{w}\\mathbf{w}^T]$ . So now, we are required to calculate the expectation. To be more clear, this expectation is with respect to the posterior defined by Eq (7.81):\n\n$$p(\\mathbf{w}|\\mathbf{t}, \\mathbf{X}, \\boldsymbol{\\alpha}, \\boldsymbol{\\beta}) = \\mathcal{N}(\\mathbf{m}, \\boldsymbol{\\Sigma})$$\n\nHere we use Eq (377) described in 'the Matrix Cookbook'. 
We restate it here: if $\\mathbf{w} \\sim \\mathcal{N}(\\mathbf{m}, \\Sigma)$ , we have:\n\n$$\\mathbb{E}[\\mathbf{w}\\mathbf{w}^T] = \\mathbf{\\Sigma} + \\mathbf{m}\\mathbf{m}^T$$\n\nAccording to this equation, we can obtain:\n\n$$\\alpha_i = \\frac{1}{\\mathbb{E}_{\\mathbf{w}}[\\mathbf{w}\\mathbf{w}^T]_{(i,i)}} = \\frac{1}{(\\mathbf{\\Sigma} + \\mathbf{m}\\mathbf{m}^T)_{(i,i)}} = \\frac{1}{\\Sigma_{ii} + m_i^2}$$\n\nNow We calculate the derivative of $\\mathbb{E}_{\\mathbf{w}}[\\ln p]$ with respect to $\\beta$ and set it to 0:\n\n$$\\frac{\\partial \\mathbb{E}_{\\mathbf{w}}[\\ln p]}{\\partial \\beta} = \\frac{N}{2} \\frac{1}{2\\pi} \\frac{2\\pi}{\\beta} - \\frac{1}{2} \\sum_{n=1}^{N} \\mathbb{E}_{\\mathbf{w}}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2] = 0$$\n\nRearranging it, we obtain:\n\n$$\\boldsymbol{\\beta}^{(new)} = \\frac{N}{\\sum_{n=1}^{N} \\mathbb{E}_{\\mathbf{w}}[(t_n - \\mathbf{w}^T \\boldsymbol{\\phi}_n)^2]}$$\n\nTherefore, we are required to calculate the expectation. By analogy to the deduction in Prob.9.21, we can obtain:\n\n$$\\begin{split} \\frac{1}{\\beta^{(new)}} &= \\frac{1}{N} \\sum_{n=1}^{N} \\left\\{ (t_n - \\mathbf{m}^T \\boldsymbol{\\phi}_N)^2 + \\text{Tr}[\\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\boldsymbol{\\Sigma}] \\right\\} \\\\ &= \\frac{1}{N} \\left\\{ ||\\mathbf{t} - \\boldsymbol{\\Phi} \\mathbf{m}||^2 + \\text{Tr}[\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\boldsymbol{\\Sigma}] \\right\\} \\end{split}$$\n\nTo make it consistent with Eq (9.68: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\beta^{-1} \\sum_i \\gamma_i}{N}$), let's first prove a statement:\n\n$$(\\boldsymbol{\\beta}^{-1}\\mathbf{A} + \\mathbf{\\Phi}^T\\mathbf{\\Phi})\\mathbf{\\Sigma} = \\boldsymbol{\\beta}^{-1}\\mathbf{I}$$\n\nThis can be easily shown by substituting $\\Sigma$ , i.e., Eq(7.83), back into the expression:\n\n$$(\\beta^{-1}\\mathbf{A} + \\mathbf{\\Phi}^T\\mathbf{\\Phi})\\,\\mathbf{\\Sigma} = (\\beta^{-1}\\mathbf{A} + \\mathbf{\\Phi}^T\\mathbf{\\Phi})(\\mathbf{A} + \\beta\\mathbf{\\Phi}^T\\mathbf{\\Phi})^{-1} = \\beta^{-1}\\mathbf{I}$$\n\nNow we start from this statement and rearrange it, which gives:\n\n$$\\mathbf{\\Phi}^T \\mathbf{\\Phi} \\mathbf{\\Sigma} = \\beta^{-1} \\mathbf{I} - \\beta^{-1} \\mathbf{A} \\mathbf{\\Sigma} = \\beta^{-1} (\\mathbf{I} - \\mathbf{A} \\mathbf{\\Sigma})$$\n\nSubstituting back into the expression for $\\beta^{(new)}$ :\n\n$$\\begin{split} \\frac{1}{\\beta^{(new)}} &= \\frac{1}{N} \\Big\\{ ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + \\mathrm{Tr}[\\mathbf{\\Phi}^T \\mathbf{\\Phi} \\mathbf{\\Sigma}] \\Big\\} \\\\ &= \\frac{1}{N} \\Big\\{ ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + \\mathrm{Tr}[\\beta^{-1}(\\mathbf{I} - \\mathbf{A} \\mathbf{\\Sigma})] \\Big\\} \\\\ &= \\frac{1}{N} \\Big\\{ ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + \\beta^{-1} \\mathrm{Tr}[\\mathbf{I} - \\mathbf{A} \\mathbf{\\Sigma}] \\Big\\} \\\\ &= \\frac{1}{N} \\Big\\{ ||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + \\beta^{-1} \\sum_i (1 - \\alpha_i \\Sigma_{ii}) \\Big\\} \\\\ &= \\frac{||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + \\beta^{-1} \\sum_i \\gamma_i}{N} \\end{split}$$\n\nHere we have defined $\\gamma_i = 1 - \\alpha_i \\Sigma_{ii}$ as in Eq (7.89: $\\gamma_i = 1 - \\alpha_i \\Sigma_{ii}$). 
Note that there is a typo in Eq (9.68: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\beta^{-1} \\sum_i \\gamma_i}{N}$), $\\mathbf{m}_N$ should be $\\mathbf{m}$ .",
"answer_length": 5656
},
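A hedged sketch of the RVM EM updates of Exercise 9.22 (Eq 9.67/9.68), using the posterior mean and covariance from Eq (7.82)/(7.83); the synthetic sparse-regression data and settings are assumptions:

```python
import numpy as np

# RVM EM updates:
#   alpha_i <- 1 / (m_i^2 + Sigma_ii)
#   1/beta  <- ( ||t - Phi m||^2 + beta_old^{-1} * sum_i gamma_i ) / N,  gamma_i = 1 - alpha_i Sigma_ii,
# with posterior Sigma = (A + beta Phi^T Phi)^{-1} and m = beta Sigma Phi^T t.
rng = np.random.default_rng(4)
N, M = 60, 8
Phi = rng.normal(size=(N, M))
w_true = np.zeros(M)
w_true[:2] = [2.0, -1.5]                                       # sparse true weights
t = Phi @ w_true + rng.normal(scale=0.1, size=N)

alpha = np.ones(M)
beta = 1.0
for _ in range(50):
    A = np.diag(alpha)
    Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)              # Eq (7.83)
    m = beta * Sigma @ Phi.T @ t                               # Eq (7.82)
    gamma = 1.0 - alpha * np.diag(Sigma)                       # Eq (7.89), with the current alpha
    alpha = 1.0 / (m ** 2 + np.diag(Sigma))                    # Eq (9.67)
    resid = t - Phi @ m
    beta = N / (resid @ resid + gamma.sum() / beta)            # Eq (9.68), old beta on the right

print(np.round(m, 3))    # weights of irrelevant basis functions are driven towards zero
```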
{
"chapter": 9,
"question_number": "9.23",
"difficulty": "medium",
"question_text": "In Section 7.2.1 we used direct maximization of the marginal likelihood to derive the re-estimation equations (7.87: $\\alpha_i^{\\text{new}} = \\frac{\\gamma_i}{m_i^2}$) and (7.88: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi}\\mathbf{m}\\|^2}{N - \\sum_{i} \\gamma_i}$) for finding values of the hyperparameters $\\alpha$ and $\\beta$ for the regression RVM. Similarly, in Section 9.3.4 we used the EM algorithm to maximize the same marginal likelihood, giving the re-estimation equations (9.67: $\\alpha_i^{\\text{new}} = \\frac{1}{m_i^2 + \\Sigma_{ii}}$) and (9.68: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\beta^{-1} \\sum_i \\gamma_i}{N}$). Show that these two sets of re-estimation equations are formally equivalent.",
"answer": "Some clarifications must be made here, Eq (7.87)-(7.88) only gives the same stationary points, i.e., the same $\\alpha^*$ and $\\beta^*$ , as those given by Eq (9.67)-(9.68). However, the hyper-parameters estimated at some specific iteration may not be the same by those two different methods.\n\nWhen convergence is reached, Eq (7.87: $\\alpha_i^{\\text{new}} = \\frac{\\gamma_i}{m_i^2}$) can be written as:\n\n$$\\alpha^{\\star} = \\frac{1 - \\alpha^{\\star} \\Sigma_{ii}}{m_i^2}$$\n\nRearranging it, we can obtain:\n\n$$\\alpha^{\\star} = \\frac{1}{m_i^2 + \\Sigma_{ii}}$$\n\nThis is identical to Eq (9.67: $\\alpha_i^{\\text{new}} = \\frac{1}{m_i^2 + \\Sigma_{ii}}$). When convergence is reached, Eq (9.68: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}_N\\|^2 + \\beta^{-1} \\sum_i \\gamma_i}{N}$) can be written as:\n\n$$(\\boldsymbol{\\beta}^{\\star})^{-1} = \\frac{||\\mathbf{t} - \\mathbf{\\Phi} \\mathbf{m}||^2 + (\\boldsymbol{\\beta}^{\\star})^{-1} \\sum_{i} \\gamma_i}{N}$$\n\nRearranging it, we can obtain:\n\n$$(\\boldsymbol{\\beta}^{\\star})^{-1} = \\frac{||\\mathbf{t} - \\boldsymbol{\\Phi} \\mathbf{m}||^2}{N - \\sum_i \\gamma_i}$$\n\nThis is identical to Eq (7.88: $(\\beta^{\\text{new}})^{-1} = \\frac{\\|\\mathbf{t} - \\mathbf{\\Phi}\\mathbf{m}\\|^2}{N - \\sum_{i} \\gamma_i}$).",
"answer_length": 1253
},
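A tiny numerical illustration of the fixed-point argument in Exercise 9.23, treating m_i and Sigma_ii as fixed numbers (which they are not in the real algorithm, where they depend on alpha); purely illustrative:

```python
# Iterating Eq (7.87), alpha <- (1 - alpha * Sigma_ii) / m_i^2, with m_i and Sigma_ii
# held fixed, converges to alpha* = 1 / (m_i^2 + Sigma_ii), i.e. the EM update (9.67).
m_i, Sigma_ii = 0.8, 0.3            # illustrative values (convergence needs Sigma_ii < m_i^2 here)

alpha = 1.0
for _ in range(200):
    alpha = (1.0 - alpha * Sigma_ii) / m_i ** 2

print(alpha, 1.0 / (m_i ** 2 + Sigma_ii))   # the two numbers agree
```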
{
"chapter": 9,
"question_number": "9.24",
"difficulty": "easy",
"question_text": "Verify the relation (9.70: $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta}) = \\mathcal{L}(q,\\boldsymbol{\\theta}) + \\mathrm{KL}(q||p)$) in which $\\mathcal{L}(q, \\theta)$ and $\\mathrm{KL}(q||p)$ are defined by (9.71: $\\mathcal{L}(q, \\boldsymbol{\\theta}) = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}$) and (9.72: $KL(q||p) = -\\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}.$), respectively.",
"answer": "We substitute Eq (9.71: $\\mathcal{L}(q, \\boldsymbol{\\theta}) = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}$) and Eq (9.72: $KL(q||p) = -\\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}.$) into Eq (9.70):\n\n$$\\begin{split} L(q, \\pmb{\\theta}) + \\mathrm{KL}(q||p) &= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\Big\\{ \\ln \\frac{p(\\mathbf{X}, \\mathbf{Z}|\\pmb{\\theta})}{q(\\mathbf{Z})} - \\ln \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\pmb{\\theta})}{q(\\mathbf{Z})} \\Big\\} \\\\ &= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\Big\\{ \\ln \\frac{p(\\mathbf{X}, \\mathbf{Z}|\\pmb{\\theta})}{p(\\mathbf{Z}|\\mathbf{X}, \\pmb{\\theta})} \\Big\\} \\\\ &= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln p(\\mathbf{X}|\\pmb{\\theta}) \\\\ &= \\ln p(\\mathbf{X}|\\pmb{\\theta}) \\end{split}$$\n\nNote that in the last step, we have used the fact that $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta})$ doesn't depend on $\\mathbf{Z}$ , and that the summation of $q(\\mathbf{Z})$ over $\\mathbf{Z}$ equal to 1 because $q(\\mathbf{Z})$ is a PDF.",
"answer_length": 1096
},
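The decomposition verified in Exercise 9.24 can be checked numerically for a toy discrete latent variable; the joint probabilities and q(Z) below are arbitrary illustrative values:

```python
import numpy as np

# Check ln p(X|theta) = L(q, theta) + KL(q||p) for a discrete latent Z with 4 values.
rng = np.random.default_rng(5)
joint = 0.05 + rng.random(4) * 0.2    # p(X = x_obs, Z = z) for the observed X (illustrative)
px = joint.sum()                      # p(X = x_obs) = sum_Z p(X, Z)
posterior = joint / px                # p(Z | X)

q = 0.1 + rng.random(4)
q /= q.sum()                          # arbitrary normalized q(Z)

L = np.sum(q * np.log(joint / q))         # Eq (9.71)
KL = -np.sum(q * np.log(posterior / q))   # Eq (9.72)

print(np.log(px), L + KL)             # the two values coincide
```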
{
"chapter": 9,
"question_number": "9.25",
"difficulty": "easy",
"question_text": "Show that the lower bound $\\mathcal{L}(q, \\theta)$ given by (9.71: $\\mathcal{L}(q, \\boldsymbol{\\theta}) = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}$), with $q(\\mathbf{Z}) = p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})$ , has the same gradient with respect to $\\boldsymbol{\\theta}$ as the log likelihood function $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta})$ at the point $\\boldsymbol{\\theta} = \\boldsymbol{\\theta}^{(\\text{old})}$ .",
"answer": "We calculate the derivative of Eq (9.71: $\\mathcal{L}(q, \\boldsymbol{\\theta}) = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{q(\\mathbf{Z})} \\right\\}$) with respect to $\\theta$ , given $q(\\mathbf{Z}) = p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})$ :\n\n$$\\begin{split} \\frac{\\partial L(q, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} &= \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\Big\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\ln \\frac{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})} \\Big\\} \\\\ &= \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\Big\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta}) - \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\ln p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\Big\\} \\\\ &= \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\Big\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta}) \\Big\\} \\\\ &= \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\frac{\\partial \\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\\\ &= \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\frac{1}{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})} \\frac{\\partial p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\\\ &= \\sum_{\\mathbf{Z}} p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\frac{1}{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})} \\frac{\\partial p(\\mathbf{X} | \\boldsymbol{\\theta}) \\cdot p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\\\ &= \\sum_{\\mathbf{Z}} \\frac{p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})}{p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\theta})} \\Big[ p(\\mathbf{X} | \\boldsymbol{\\theta}) \\frac{\\partial p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} + p(\\mathbf{Z} | \\mathbf{X}, \\boldsymbol{\\theta}) \\frac{\\partial p(\\mathbf{X} | \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big] \\end{split}$$\n\nWe evaluate this derivative at $\\theta = \\theta^{\\text{old}}$ :\n\n$$\\begin{split} \\frac{\\partial L(q, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{\\text{old}}} &= \\Big\\{ \\sum_{\\mathbf{Z}} \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})}{p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta})} \\Big[ p(\\mathbf{X}|\\boldsymbol{\\theta}) \\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} + p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}) \\frac{\\partial p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big] \\Big\\} \\Big|_{\\boldsymbol{\\theta}^{\\text{old}}} \\\\ &= \\sum_{\\mathbf{Z}} \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})}{p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}^{(\\text{old})})} \\Big[ p(\\mathbf{X}|\\boldsymbol{\\theta}^{(\\text{old})}) 
\\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\frac{\\partial p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\Big] \\\\ &= \\sum_{\\mathbf{Z}} \\frac{1}{p(\\mathbf{X}|\\boldsymbol{\\theta}^{(\\text{old})})} \\Big[ p(\\mathbf{X}|\\boldsymbol{\\theta}^{(\\text{old})}) \\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})}) \\frac{\\partial p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\Big] \\\\ &= \\sum_{\\mathbf{Z}} \\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + \\sum_{\\mathbf{Z}} \\frac{p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})}{p(\\mathbf{X}|\\boldsymbol{\\theta}^{(\\text{old})})} \\cdot \\frac{\\partial p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\\\ &= \\sum_{\\mathbf{Z}} \\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + \\frac{1}{p(\\mathbf{X}|\\boldsymbol{\\theta}^{(\\text{old})})} \\cdot \\frac{\\partial p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\\\ &= \\sum_{\\mathbf{Z}} \\frac{\\partial p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + \\frac{\\partial \\ln p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\\\ &= \\left\\{ \\frac{\\partial}{\\partial \\boldsymbol{\\theta}} \\sum_{\\mathbf{Z}} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}) \\right\\} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} + \\frac{\\partial \\ln p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\\\ &= \\frac{\\partial \\ln p(\\mathbf{X}|\\boldsymbol{\\theta})}{\\partial \\boldsymbol{\\theta}} \\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} \\end{split}$$\n\nThis problem is much easier to prove if we view it from the perspective of KL divergence. Note that when $q(\\mathbf{Z}) = p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{(\\text{old})})$ , the KL divergence vanishes, and that in general the KL divergence is greater than or equal to zero. Therefore, we must have:\n\n$$\\frac{\\partial KL(q||p)}{\\partial \\boldsymbol{\\theta}}\\Big|_{\\boldsymbol{\\theta}^{(\\text{old})}} = 0$$\n\nOtherwise, there would exist a point $\\theta$ in the neighborhood of $\\theta^{\\text{(old)}}$ at which the KL divergence is less than 0, which is impossible. Then using Eq (9.70: $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta}) = \\mathcal{L}(q,\\boldsymbol{\\theta}) + \\mathrm{KL}(q||p)$), it is trivial to prove.",
"answer_length": 6224
},
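A finite-difference check of the statement in Exercise 9.25, for a toy one-dimensional two-component mixture whose only free parameter is the mixing weight; the model and data are illustrative assumptions:

```python
import numpy as np

# With q(Z) = p(Z|X, theta_old) held fixed, the lower bound L(q, theta) has the same
# gradient as ln p(X|theta) at theta = theta_old. Toy model: mixture of two fixed
# unit-variance Gaussians, parameter theta = mixing weight of the first component.
def npdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

x = np.array([-1.2, 0.4, 2.3, 1.7])            # observed data
mu1, mu2 = 0.0, 2.0
theta_old = 0.3

def log_lik(theta):                             # ln p(X | theta)
    return np.sum(np.log(theta * npdf(x, mu1) + (1 - theta) * npdf(x, mu2)))

# Responsibilities computed at theta_old, then frozen inside the lower bound.
r = theta_old * npdf(x, mu1) / (theta_old * npdf(x, mu1) + (1 - theta_old) * npdf(x, mu2))

def lower_bound(theta):                         # L(q, theta) with q fixed at theta_old
    return np.sum(r * np.log(theta * npdf(x, mu1) / r)
                  + (1 - r) * np.log((1 - theta) * npdf(x, mu2) / (1 - r)))

eps = 1e-6                                      # central finite differences
g_lik = (log_lik(theta_old + eps) - log_lik(theta_old - eps)) / (2 * eps)
g_bound = (lower_bound(theta_old + eps) - lower_bound(theta_old - eps)) / (2 * eps)
print(g_lik, g_bound)                           # the two gradients agree at theta_old
```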
{
"chapter": 9,
"question_number": "9.26",
"difficulty": "easy",
"question_text": "Consider the incremental form of the EM algorithm for a mixture of Gaussians, in which the responsibilities are recomputed only for a specific data point $\\mathbf{x}_m$ . Starting from the M-step formulae (9.17: $\\boldsymbol{\\mu}_k = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) \\mathbf{x}_n$) and (9.18: $N_k = \\sum_{n=1}^{N} \\gamma(z_{nk}).$), derive the results (9.78: $\\boldsymbol{\\mu}_{k}^{\\text{new}} = \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left(\\frac{\\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk})}{N_{k}^{\\text{new}}}\\right) \\left(\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}\\right)$) and (9.79: $N_k^{\\text{new}} = N_k^{\\text{old}} + \\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk}).$) for updating the component means.",
"answer": "From Eq (9.18: $N_k = \\sum_{n=1}^{N} \\gamma(z_{nk}).$), we have:\n\n$$N_k^{\\mathrm{old}} = \\sum_n \\gamma^{\\mathrm{old}}(z_{nk})$$\n\nIf now we just re-evaluate the responsibilities for one data point $\\mathbf{x}_m$ , we can obtain:\n\n$$\\begin{split} N_k^{\\text{new}} &= \\sum_{n \\neq m} \\gamma^{\\text{old}}(z_{nk}) + \\gamma^{\\text{new}}(z_{mk}) \\\\ &= \\sum_{n} \\gamma^{\\text{old}}(z_{nk}) + \\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk}) \\\\ &= N_k^{\\text{old}} + \\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk}) \\end{split}$$\n\nSimilarly, according to Eq (9.17: $\\boldsymbol{\\mu}_k = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) \\mathbf{x}_n$), we can obtain:\n\n$$\\begin{split} \\boldsymbol{\\mu}_{k}^{\\text{new}} &= \\frac{1}{N_{k}^{\\text{new}}} \\sum_{n \\neq m} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) \\mathbf{x}_{n} + \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) \\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\frac{1}{N_{k}^{\\text{new}}} \\sum_{n} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) \\mathbf{x}_{n} + \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) \\mathbf{x}_{m}}{N_{k}^{\\text{new}}} - \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) \\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\frac{N_{k}^{\\text{old}}}{N_{k}^{\\text{new}}} \\frac{1}{N_{k}^{\\text{old}}} \\sum_{n} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) \\mathbf{x}_{n} + \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) \\mathbf{x}_{m}}{N_{k}^{\\text{new}}} - \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) \\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\frac{N_{k}^{\\text{old}}}{N_{k}^{\\text{new}}} \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left[ \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) \\right] \\frac{\\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\boldsymbol{\\mu}_{k}^{\\text{old}} - \\frac{N_{k}^{\\text{new}} - N_{k}^{\\text{old}}}{N_{k}^{\\text{new}}} \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left[ \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) \\right] \\frac{\\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\boldsymbol{\\mu}_{k}^{\\text{old}} - \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk})}{N_{k}^{\\text{new}}} \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left[ \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) \\right] \\frac{\\mathbf{x}_{m}}{N_{k}^{\\text{new}}} \\\\ &= \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk})}{N_{k}^{\\text{new}}} \\cdot \\left( \\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}} \\right) \\end{split}$$\n\nJust as required.",
"answer_length": 2600
},
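The incremental updates (9.78)/(9.79) derived in Exercise 9.26 are exact, which can be confirmed numerically against the batch formulas (9.17)/(9.18); the random data and responsibilities below are illustrative:

```python
import numpy as np

# Check that the incremental updates reproduce the batch formulas when only the
# responsibilities of a single point x_m change.
rng = np.random.default_rng(6)
N, K, D = 30, 3, 2
X = rng.normal(size=(N, D))
gamma = rng.random((N, K))
gamma /= gamma.sum(axis=1, keepdims=True)

Nk_old = gamma.sum(axis=0)                                  # Eq (9.18)
mu_old = (gamma.T @ X) / Nk_old[:, None]                    # Eq (9.17)

m = 7                                                       # point whose responsibilities change
gamma_new = gamma.copy()
gamma_new[m] = rng.random(K)
gamma_new[m] /= gamma_new[m].sum()

Nk_new = Nk_old + gamma_new[m] - gamma[m]                   # Eq (9.79)
mu_new = mu_old + ((gamma_new[m] - gamma[m]) / Nk_new)[:, None] * (X[m] - mu_old)   # Eq (9.78)

mu_batch = (gamma_new.T @ X) / gamma_new.sum(axis=0)[:, None]   # full recomputation
print(np.max(np.abs(mu_new - mu_batch)))                    # ~1e-16: identical up to round-off
```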
{
"chapter": 9,
"question_number": "9.27",
"difficulty": "medium",
"question_text": "Derive M-step formulae for updating the covariance matrices and mixing coefficients in a Gaussian mixture model when the responsibilities are updated incrementally, analogous to the result (9.78: $\\boldsymbol{\\mu}_{k}^{\\text{new}} = \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left(\\frac{\\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk})}{N_{k}^{\\text{new}}}\\right) \\left(\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}\\right)$) for updating the means.",
"answer": "By analogy to the previous problem, we use Eq (9.24)-Eq(9.27), beginning by first deriving an update formula for mixing coefficients $\\pi_k$ :\n\n$$\\begin{split} \\pi_k^{\\text{new}} &= \\frac{N_k^{\\text{new}}}{N} = \\frac{1}{N} \\Big\\{ N_k^{\\text{old}} + \\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk}) \\Big\\} \\\\ &= \\pi_k^{\\text{old}} + \\frac{\\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk})}{N} \\end{split}$$\n\nHere we have used the conclusion (the update formula for $N_k^{\\text{new}}$ ) in the previous problem. Next we deal with the covariance matrix $\\Sigma$ . By analogy to\n\nthe previous problem, we can obtain:\n\n$$\\begin{split} \\boldsymbol{\\Sigma}_{k}^{new} &= \\frac{1}{N_{k}^{new}} \\sum_{n \\neq m} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}^{new}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}^{new})^{T} \\\\ &+ \\frac{1}{N_{k}^{new}} \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{new}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{new})^{T} \\\\ &\\approx \\frac{1}{N_{k}^{new}} \\sum_{n \\neq m} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &+ \\frac{1}{N_{k}^{new}} \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\frac{1}{N_{k}^{new}} \\sum_{n} \\gamma^{\\text{old}}(\\boldsymbol{z}_{nk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &+ \\frac{1}{N_{k}^{new}} \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\frac{1}{N_{k}^{new}} \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\frac{1}{N_{k}^{new}} \\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\left\\{1 + \\frac{N_{k}^{\\text{old}} - N_{k}^{\\text{new}}}{N_{k}^{new}}\\right\\} \\boldsymbol{\\Sigma}_{k}^{\\text{old}} \\\\ &+ \\frac{1}{N_{k}^{new}} \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\left\\{1 + \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\left\\{1 + \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk}) - \\gamma^{\\text{new}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\sum_{k}^{\\text{old}} \\\\ &+ \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} \\\\ &= \\boldsymbol{\\Sigma}_{k}^{\\text{old}} \\\\ &+ \\frac{\\gamma^{\\text{new}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} \\left\\{ (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - 
\\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} - \\boldsymbol{\\Sigma}_{k}^{\\text{old}} \\right\\} \\\\ &- \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} \\left\\{ (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} - \\boldsymbol{\\Sigma}_{k}^{\\text{old}} \\right\\} \\\\ &- \\frac{\\gamma^{\\text{old}}(\\boldsymbol{z}_{mk})}{N_{k}^{new}} \\left\\{ (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}) (\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}})^{T} - \\boldsymbol{\\Sigma}_{k}^{\\text{old}} \\right\\} \\end{aligned}$$\n\nOne important thing worthy mentioned is that in the second step, there is an approximate equal sign. Note that in the previous problem, we have\n\nshown that if we only recompute the data point $\\mathbf{x}_m$ , all the center $\\boldsymbol{\\mu}_k$ will also change from $\\boldsymbol{\\mu}_k^{\\text{old}}$ to $\\boldsymbol{\\mu}_k^{\\text{new}}$ , and the update formula is given by Eq (9.78: $\\boldsymbol{\\mu}_{k}^{\\text{new}} = \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left(\\frac{\\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk})}{N_{k}^{\\text{new}}}\\right) \\left(\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}\\right)$). However, for the convenience of computing, we have made an approximation here. Other approximation methods can also be applied here. For instance, you can replace $\\boldsymbol{\\mu}_k^{\\text{new}}$ with $\\boldsymbol{\\mu}_k^{\\text{old}}$ whenever it occurs.\n\nThe complete solution should be given by substituting Eq (9.78: $\\boldsymbol{\\mu}_{k}^{\\text{new}} = \\boldsymbol{\\mu}_{k}^{\\text{old}} + \\left(\\frac{\\gamma^{\\text{new}}(z_{mk}) - \\gamma^{\\text{old}}(z_{mk})}{N_{k}^{\\text{new}}}\\right) \\left(\\mathbf{x}_{m} - \\boldsymbol{\\mu}_{k}^{\\text{old}}\\right)$) into the right side of the first equal sign and then rearranging it, in order to construct a relation between $\\Sigma_k^{\\mathrm{new}}$ and $\\Sigma_k^{\\mathrm{old}}$ . However, this is too complicated.\n\n## 0.10 Variational Inference",
"answer_length": 5281
},
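A sketch of the incremental updates derived in Exercise 9.27: the mixing-coefficient update is exact, while the covariance update reuses the old means and is therefore approximate, as noted above; all data below are illustrative:

```python
import numpy as np

# Incremental M-step updates when only x_m's responsibilities change:
#   pi_new    = pi_old + (gamma_new - gamma_old) / N                       (exact)
#   Sigma_new = Sigma_old + (gamma_new - gamma_old)/Nk_new * (S_m - Sigma_old)   (approximate,
#               with S_m = (x_m - mu_old)(x_m - mu_old)^T).
rng = np.random.default_rng(7)
N, K, D = 40, 2, 2
X = rng.normal(size=(N, D))
gamma = rng.random((N, K))
gamma /= gamma.sum(axis=1, keepdims=True)

Nk = gamma.sum(axis=0)
pi_old = Nk / N
mu_old = (gamma.T @ X) / Nk[:, None]
Sigma_old = np.stack([
    (gamma[:, k, None, None] * np.einsum('nd,ne->nde', X - mu_old[k], X - mu_old[k])).sum(axis=0) / Nk[k]
    for k in range(K)
])

m = 5
g_old = gamma[m].copy()
g_new = rng.random(K)
g_new /= g_new.sum()

Nk_new = Nk + g_new - g_old
pi_new = pi_old + (g_new - g_old) / N                        # exact incremental update
d = X[m] - mu_old                                            # shape (K, D): x_m - mu_k_old
outer = np.einsum('kd,ke->kde', d, d)
Sigma_new = Sigma_old + ((g_new - g_old) / Nk_new)[:, None, None] * (outer - Sigma_old)

print(np.round(pi_new, 3))
print(np.round(Sigma_new[0], 3))                             # approximate updated covariance of component 0
```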
{
"chapter": 9,
"question_number": "9.3",
"difficulty": "easy",
"question_text": "Consider a Gaussian mixture model in which the marginal distribution $p(\\mathbf{z})$ for the latent variable is given by (9.10: $p(\\mathbf{z}) = \\prod_{k=1}^{K} \\pi_k^{z_k}.$), and the conditional distribution $p(\\mathbf{x}|\\mathbf{z})$ for the observed variable is given by (9.11: $p(\\mathbf{x}|\\mathbf{z}) = \\prod_{k=1}^{K} \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)^{z_k}.$). Show that the marginal distribution $p(\\mathbf{x})$ , obtained by summing $p(\\mathbf{z})p(\\mathbf{x}|\\mathbf{z})$ over all possible values of $\\mathbf{z}$ , is a Gaussian mixture of the form (9.7: $p(\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k).$).",
"answer": "We simply follow the hint.\n\n$$p(\\mathbf{x}) = \\sum_{\\mathbf{z}} p(\\mathbf{z}) p(\\mathbf{x}|\\mathbf{z})$$\n$$= \\sum_{\\mathbf{z}} \\prod_{k=1}^{K} \\left[ (\\pi_k \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)) \\right]^{z_k}$$\n\nNote that we have used 1-of-K coding scheme for $\\mathbf{z} = [z_1, z_2, ..., z_K]^T$ . To be more specific, only one of $z_1, z_2, ..., z_K$ will be 1 and all others will equal 0. Therefore, the summation over $\\mathbf{z}$ actually consists of K terms and the k-th term corresponds to $z_k$ equal to 1 and others 0. Moreover, for the k-th term, the product will reduce to $\\pi_k \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)$ . Therefore, we can obtain:\n\n$$p(\\mathbf{x}) = \\sum_{\\mathbf{z}} \\prod_{k=1}^{K} \\left[ (\\pi_k \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k))^{z_k} = \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(\\mathbf{x} | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right]$$\n\nJust as required.",
"answer_length": 985
},
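The reduction in Exercise 9.3 from the sum over one-hot z to the mixture density can be illustrated numerically for a one-dimensional mixture with arbitrary parameters:

```python
import numpy as np

# Summing p(z) p(x|z) over the K one-hot settings of z reproduces
# sum_k pi_k N(x | mu_k, var_k). Parameters below are illustrative.
def npdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

pi = np.array([0.5, 0.3, 0.2])
mu = np.array([-2.0, 0.0, 3.0])
var = np.array([1.0, 0.5, 2.0])
x = 0.7

# For the k-th one-hot z, the product prod_j [pi_j N(x|mu_j, var_j)]^{z_j}
# collapses to pi_k N(x|mu_k, var_k).
p_via_z = sum(np.prod((pi * npdf(x, mu, var)) ** np.eye(3)[k]) for k in range(3))
p_mixture = np.sum(pi * npdf(x, mu, var))
print(p_via_z, p_mixture)            # identical
```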
{
"chapter": 9,
"question_number": "9.4",
"difficulty": "easy",
"question_text": "Suppose we wish to use the EM algorithm to maximize the posterior distribution over parameters $p(\\theta|\\mathbf{X})$ for a model containing latent variables, where $\\mathbf{X}$ is the observed data set. Show that the E step remains the same as in the maximum likelihood case, whereas in the M step the quantity to be maximized is given by $\\mathcal{Q}(\\theta, \\theta^{\\text{old}}) + \\ln p(\\theta)$ where $\\mathcal{Q}(\\theta, \\theta^{\\text{old}})$ is defined by (9.30: $Q(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{\\text{old}}) = \\sum_{\\mathbf{Z}} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{\\text{old}}) \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}).$).",
"answer": "According to Bayes' Theorem, we can write:\n\n$$p(\\boldsymbol{\\theta}|\\mathbf{X}) \\propto p(\\mathbf{X}|\\boldsymbol{\\theta})p(\\boldsymbol{\\theta})$$\n\nTaking logarithm on both sides, we can write:\n\n$$\\ln p(\\boldsymbol{\\theta}|\\mathbf{X}) \\propto \\ln p(\\mathbf{X}|\\boldsymbol{\\theta}) + \\ln p(\\boldsymbol{\\theta})$$\n\nFurther utilizing Eq (9.29: $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta}) = \\ln \\left\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) \\right\\}.$), we can obtain:\n\n$$\\ln p(\\boldsymbol{\\theta}|\\mathbf{X}) \\propto \\ln \\left\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) \\right\\} + \\ln p(\\boldsymbol{\\theta})$$\n\n$$= \\ln \\left\\{ \\left[ \\sum_{\\mathbf{Z}} p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) \\right] \\cdot p(\\boldsymbol{\\theta}) \\right\\}$$\n\n$$= \\ln \\left\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) p(\\boldsymbol{\\theta}) \\right\\}$$\n\nIn other words, in thise case, the only modification is that the term $p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta})$ in Eq (9.29: $\\ln p(\\mathbf{X}|\\boldsymbol{\\theta}) = \\ln \\left\\{ \\sum_{\\mathbf{Z}} p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) \\right\\}.$) will be replaced by $p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta})p(\\boldsymbol{\\theta})$ . Therefore, in the E-step, we still need to calculate the posterior $p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old})$ and then in the M-step, we are required to maximize $Q'(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{old})$ . In this case, by analogy to Eq (9.30: $Q(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{\\text{old}}) = \\sum_{\\mathbf{Z}} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{\\text{old}}) \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}).$), we can write down $Q'(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{old})$ :\n\n$$\\begin{aligned} Q'(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{old}) &= \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\ln \\left[ p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) p(\\boldsymbol{\\theta}) \\right] \\\\ &= \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\left[ \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) + \\ln p(\\boldsymbol{\\theta}) \\right] \\\\ &= \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) + \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\ln p(\\boldsymbol{\\theta}) \\\\ &= \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) + \\ln p(\\boldsymbol{\\theta}) \\cdot \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\\\ &= \\sum_{Z} p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\theta}^{old}) \\ln p(\\mathbf{X}, \\mathbf{Z}|\\boldsymbol{\\theta}) + \\ln p(\\boldsymbol{\\theta}) \\\\ &= Q(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{old}) + \\ln p(\\boldsymbol{\\theta}) \\end{aligned}$$\n\nJust as required.",
"answer_length": 2859
},
{
"chapter": 9,
"question_number": "9.5",
"difficulty": "easy",
"question_text": "- 9.5 (\\*) Consider the directed graph for a Gaussian mixture model shown in Figure 9.6. By making use of the d-separation criterion discussed in Section 8.2, show that the posterior distribution of the latent variables factorizes with respect to the different data points so that\n\n$$p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}) = \\prod_{n=1}^{N} p(\\mathbf{z}_n | \\mathbf{x}_n, \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}).$$\n(9.80: $p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}) = \\prod_{n=1}^{N} p(\\mathbf{z}_n | \\mathbf{x}_n, \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}).$)",
"answer": "Notice that the condition on $\\mu$ , $\\Sigma$ and $\\pi$ can be omitted here, and we only need to prove $p(\\mathbf{Z}|\\mathbf{X})$ can be written as the product of $p(\\mathbf{z}_n|\\mathbf{x}_n)$ . Correspondingly, the small dots representing $\\mu$ , $\\Sigma$ and $\\pi$ can also be omitted in Fig 9.6. Observing Fig 9.6 and based on definition, we can write:\n\n$$p(\\mathbf{X}, \\mathbf{Z}) = p(\\mathbf{x}_1, \\mathbf{z}_1)p(\\mathbf{z}_1)...p(\\mathbf{x}_N, \\mathbf{z}_N)p(\\mathbf{z}_N) = p(\\mathbf{x}_1, \\mathbf{z}_1)...p(\\mathbf{x}_N, \\mathbf{z}_N)$$\n\nMoreover, since there is no link from $\\mathbf{z}_m$ to $\\mathbf{z}_n$ , from $\\mathbf{x}_m$ to $\\mathbf{x}_n$ , and from $\\mathbf{z}_m$ to $\\mathbf{x}_n$ ( $m \\neq n$ ), we can obtain:\n\n$$p(\\mathbf{Z}) = p(\\mathbf{z}_1)...p(\\mathbf{z}_N), \\quad p(\\mathbf{X}) = p(\\mathbf{x}_1)...p(\\mathbf{x}_N)$$\n\nThese can also be verified by calculating the marginal distribution from $p(\\mathbf{X}, \\mathbf{Z})$ , for example:\n\n$$p(\\mathbf{Z}) = \\sum_{\\mathbf{X}} p(\\mathbf{X}, \\mathbf{Z}) = \\sum_{\\mathbf{x}_1, \\dots, \\mathbf{x}_N} p(\\mathbf{x}_1, \\mathbf{z}_1) \\dots p(\\mathbf{x}_N, \\mathbf{z}_N) = p(\\mathbf{z}_1) \\dots p(\\mathbf{z}_N)$$\n\nAccording to Bayes' Theorem, we have\n\n$$p(\\mathbf{Z}|\\mathbf{X}) = \\frac{p(\\mathbf{X}|\\mathbf{Z})p(\\mathbf{Z})}{p(\\mathbf{X})}$$\n\n$$= \\frac{\\left[\\prod_{n=1}^{N} p(\\mathbf{x}_{n}|\\mathbf{z}_{n})\\right] \\left[\\prod_{n=1}^{N} p(\\mathbf{z}_{n})\\right]}{\\prod_{n=1}^{N} p(\\mathbf{x}_{n})}$$\n\n$$= \\prod_{n=1}^{N} \\frac{p(\\mathbf{x}_{n}|\\mathbf{z}_{n})p(\\mathbf{z}_{n})}{p(\\mathbf{x}_{n})}$$\n\n$$= \\prod_{n=1}^{N} p(\\mathbf{z}_{n}|\\mathbf{x}_{n})$$\n\nJust as required. The essence behind the problem is that in the directed graph, there are only links from $\\mathbf{z}_n$ to $\\mathbf{x}_n$ . The deeper reason is that (i) the mixture model is given by Fig 9.4, and (ii) we assume the data $\\{\\mathbf{x}_n\\}$ is i.i.d, and thus there is no link from $\\mathbf{x}_m$ to $\\mathbf{x}_n$ .",
"answer_length": 1984
},
{
"chapter": 9,
"question_number": "9.6",
"difficulty": "medium",
"question_text": "- 9.6 (\\*\\*) Consider a special case of a Gaussian mixture model in which the covariance matrices Σk of the components are all constrained to have a common value Σ. Derive the EM equations for maximizing the likelihood function under such a model.",
"answer": "By analogy to Eq (9.19: $\\Sigma_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^{\\mathrm{T}}$), we calculate the derivative of Eq (9.14: $\\ln p(\\mathbf{X}|\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\sum_{n=1}^{N} \\ln \\left\\{ \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}.$) with respect to $\\Sigma$ :\n\n$$\\frac{\\partial \\ln p}{\\partial \\Sigma} = \\frac{\\partial}{\\partial \\Sigma} \\{ \\sum_{n=1}^{N} \\ln \\alpha_n \\} = \\sum_{n=1}^{N} \\frac{1}{\\alpha_n} \\frac{\\partial \\alpha_n}{\\partial \\Sigma}$$\n (\\*)\n\nWhere we have defined:\n\n$$a_n = \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma})$$\n\nRecall that in Prob.2.34, we have proved:\n\n$$\\frac{\\partial \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma})}{\\partial \\boldsymbol{\\Sigma}} = -\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\mathbf{S}_{nk} \\boldsymbol{\\Sigma}^{-1}$$\n\nWhere we have defined:\n\n$$\\mathbf{S}_{nk} = (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T$$\n\nTherefore, we can obtain:\n\n$$\\begin{split} \\frac{\\partial a_n}{\\partial \\boldsymbol{\\Sigma}} &= \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\Big\\{ \\sum_{k=1}^K \\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\Big\\} \\\\ &= \\sum_{k=1}^K \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\Big\\{ \\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\Big\\} \\\\ &= \\sum_{k=1}^K \\pi_k \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\Big\\{ \\exp \\big[ \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\big] \\Big\\} \\\\ &= \\sum_{k=1}^K \\pi_k \\cdot \\exp \\big[ \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\big] \\cdot \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}} \\Big[ \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\Big] \\\\ &= \\sum_{k=1}^K \\pi_k \\cdot \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\cdot (-\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\mathbf{S}_{nk} \\boldsymbol{\\Sigma}^{-1}) \\end{split}$$\n\nSubstitute the equation above into (\\*), we can obtain:\n\n$$\\begin{split} \\frac{\\partial \\ln p}{\\partial \\boldsymbol{\\Sigma}} &= \\sum_{n=1}^{N} \\frac{1}{a_n} \\frac{\\partial a_n}{\\partial \\boldsymbol{\\Sigma}} \\\\ &= \\sum_{n=1}^{N} \\frac{\\sum_{k=1}^{K} \\pi_k \\cdot \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}) \\cdot (-\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} + \\boldsymbol{\\Sigma}^{-1} \\mathbf{S}_{nk} \\boldsymbol{\\Sigma}^{-1})}{\\sum_{j=1}^{K} \\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\boldsymbol{\\Sigma})} \\\\ &= \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\cdot (-\\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\mathbf{S}_{nk} \\boldsymbol{\\Sigma}^{-1}) \\\\ &= -\\frac{1}{2} \\Big\\{ \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\Big\\} \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\Big\\{ \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\mathbf{S}_{nk} \\Big\\} \\boldsymbol{\\Sigma}^{-1} \\end{split}$$\n\nIf we set the 
derivative equal to 0, we can obtain:\n\n$$\\boldsymbol{\\Sigma} = \\frac{\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\mathbf{S}_{nk}}{\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk})}$$",
"answer_length": 3394
},
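The tied-covariance M step derived in Exercise 9.6 amounts to pooling the weighted scatter matrices over all components; a minimal sketch with illustrative data and responsibilities:

```python
import numpy as np

# Shared-covariance M step: Sigma = sum_{n,k} gamma(z_nk) S_nk / sum_{n,k} gamma(z_nk),
# with S_nk = (x_n - mu_k)(x_n - mu_k)^T. The mu_k and pi_k updates are unchanged.
rng = np.random.default_rng(8)
N, K, D = 100, 3, 2
X = rng.normal(size=(N, D))
gamma = rng.random((N, K))
gamma /= gamma.sum(axis=1, keepdims=True)

Nk = gamma.sum(axis=0)
pi = Nk / N                                                 # Eq (9.22), unchanged
mu = (gamma.T @ X) / Nk[:, None]                            # Eq (9.17), unchanged

diff = X[:, None, :] - mu[None, :, :]                       # shape (N, K, D): x_n - mu_k
S = np.einsum('nk,nkd,nke->de', gamma, diff, diff)          # sum_{n,k} gamma_nk S_nk
Sigma = S / gamma.sum()                                     # denominator sum_{n,k} gamma_nk = N

print(np.round(Sigma, 3))
```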
{
"chapter": 9,
"question_number": "9.7",
"difficulty": "easy",
"question_text": "Verify that maximization of the complete-data log likelihood (9.36: $\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}.$) for a Gaussian mixture model leads to the result that the means and covariances of each component are fitted independently to the corresponding group of data points, and the mixing coefficients are given by the fractions of points in each group.",
"answer": "We begin by calculating the derivative of Eq (9.36: $\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}.$) with respect to $\\mu_k$ :\n\n$$\\frac{\\partial \\ln p}{\\partial \\boldsymbol{\\mu}_{k}} = \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_{k}} \\left\\{ \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left[ \\ln \\pi_{k} + \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right] \\right\\}$$\n\n$$= \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_{k}} \\left\\{ \\sum_{n=1}^{N} z_{nk} \\left[ \\ln \\pi_{k} + \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right] \\right\\}$$\n\n$$= \\sum_{n=1}^{N} \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_{k}} \\left\\{ z_{nk} \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right\\}$$\n\n$$= \\sum_{\\mathbf{x}_{n} \\in C_{k}} \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_{k}} \\left\\{ \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right\\}$$\n\nWhere we have used $\\mathbf{x}_n \\in C_k$ to represent the data point $\\mathbf{x}_n$ which are assigned to the k-th cluster. Therefore, $\\boldsymbol{\\mu}_k$ is given by the mean of those $x_n \\in C_k$ just as the case of a single Gaussian. It is exactly the same for the covariance. Next, we maximize Eq (9.36: $\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}.$) with respect to $\\pi_k$ by enforcing a Lagrange multiplier:\n\n$$L = \\ln p + \\lambda (\\sum_{k=1}^{K} \\pi_k - 1)$$\n\nWe calculate the derivative of L with respect to $\\pi_k$ and set it to 0:\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{z_{nk}}{\\pi_k} + \\lambda = 0$$\n\nWe multiply both sides by $\\pi_k$ and sum over k making use of the constraint Eq (9.9: $\\sum_{k=1}^{K} \\pi_k = 1$), yielding $\\lambda = -N$ . Substituting it back into the expression, we can obtain:\n\n$$\\pi_k = \\frac{1}{N} \\sum_{n=1}^N z_{nk}$$\n\nJust as required.",
"answer_length": 2243
},
{
"chapter": 9,
"question_number": "9.8",
"difficulty": "easy",
"question_text": "Show that if we maximize (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$) with respect to $\\mu_k$ while keeping the responsibilities $\\gamma(z_{nk})$ fixed, we obtain the closed form solution given by (9.17: $\\boldsymbol{\\mu}_k = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) \\mathbf{x}_n$).",
"answer": "Since $\\gamma(z_{nk})$ is fixed, the only dependency of Eq (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$) on $\\mu_k$ occurs in the Gaussian, yielding:\n\n$$\\frac{\\partial \\mathbb{E}_{z}[\\ln p]}{\\partial \\boldsymbol{\\mu}_{k}} = \\frac{\\partial}{\\partial \\boldsymbol{\\mu}_{k}} \\left\\{ \\sum_{n=1}^{N} \\gamma(z_{nk}) \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right\\}$$\n\n$$= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\cdot \\frac{\\partial \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k})}{\\partial \\boldsymbol{\\mu}_{k}}$$\n\n$$= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\cdot \\left[ -\\boldsymbol{\\Sigma}_{k}^{-1}(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\right]$$\n\nSetting the derivative equal to 0, we obtain exactly Eq (9.16: $0 = -\\sum_{n=1}^{N} \\frac{\\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)}{\\sum_{j} \\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\boldsymbol{\\Sigma}_j)} \\boldsymbol{\\Sigma}_k(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)$), and consequently Eq (9.17: $\\boldsymbol{\\mu}_k = \\frac{1}{N_k} \\sum_{n=1}^{N} \\gamma(z_{nk}) \\mathbf{x}_n$) just as required. Note that there is a typo in Eq (9.16: $0 = -\\sum_{n=1}^{N} \\frac{\\pi_k \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k)}{\\sum_{j} \\pi_j \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_j, \\boldsymbol{\\Sigma}_j)} \\boldsymbol{\\Sigma}_k(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)$), $\\Sigma_k$ shoule be $\\Sigma_b^{-1}$ .",
"answer_length": 1705
},
{
"chapter": 9,
"question_number": "9.9",
"difficulty": "easy",
"question_text": "Show that if we maximize (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$) with respect to $\\Sigma_k$ and $\\pi_k$ while keeping the responsibilities $\\gamma(z_{nk})$ fixed, we obtain the closed form solutions given by (9.19: $\\Sigma_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^{\\mathrm{T}}$) and (9.22: $\\pi_k = \\frac{N_k}{N}$).",
"answer": "We first calculate the derivative of Eq (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$) with respect to $\\Sigma_k$ :\n\n$$\\frac{\\partial \\mathbb{E}_{z}}{\\partial \\boldsymbol{\\Sigma}_{k}} = \\frac{\\partial}{\\partial \\boldsymbol{\\Sigma}_{k}} \\left\\{ \\sum_{n=1}^{N} \\gamma(z_{nk}) \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k}) \\right\\} \n= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\frac{\\partial \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Sigma}_{k})}{\\partial \\boldsymbol{\\Sigma}_{k}} \n= \\sum_{n=1}^{N} \\gamma(z_{nk}) \\cdot \\left[ -\\frac{1}{2} \\boldsymbol{\\Sigma}_{k}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}_{k}^{-1} \\mathbf{S}_{nk} \\boldsymbol{\\Sigma}_{k}^{-1} \\right]$$\n\nAs in Prob 9.6, we have defined:\n\n$$\\mathbf{S}_{nk} = (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T$$\n\nSetting the derivative equal to 0 and rearranging it, we obtain:\n\n$$\\boldsymbol{\\Sigma}_k = \\frac{\\sum_{n=1}^N \\gamma(z_{nk}) \\, \\mathbf{S}_{nk}}{\\sum_{n=1}^N \\gamma(z_{nk})} = \\frac{\\sum_{n=1}^N \\gamma(z_{nk}) \\, \\mathbf{S}_{nk}}{N_k}$$\n\nWhere $N_k$ is given by Eq (9.18: $N_k = \\sum_{n=1}^{N} \\gamma(z_{nk}).$). So now we have obtained Eq (9.19: $\\Sigma_k = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^{\\mathrm{T}}$) just as required. Next to maximize Eq (9.40: $\\mathbb{E}_{\\mathbf{Z}}[\\ln p(\\mathbf{X}, \\mathbf{Z} | \\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}, \\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\left\\{ \\ln \\pi_k + \\ln \\mathcal{N}(\\mathbf{x}_n | \\boldsymbol{\\mu}_k, \\boldsymbol{\\Sigma}_k) \\right\\}. \\quad$) with respect to $\\pi_k$ , we still need to introduce Lagrange multiplier to enforce the summation of $pi_k$ over k equal to 1, as in Prob 9.7:\n\n$$L = \\mathbb{E}_z + \\lambda (\\sum_{k=1}^K \\pi_k - 1)$$\n\nWe calculate the derivative of L with respect to $\\pi_k$ and set it to 0:\n\n$$\\frac{\\partial L}{\\partial \\pi_k} = \\sum_{n=1}^{N} \\frac{\\gamma(z_{nk})}{\\pi_k} + \\lambda = 0$$\n\nWe multiply both sides by $\\pi_k$ and sum over k making use of the constraint Eq (9.9: $\\sum_{k=1}^{K} \\pi_k = 1$), yielding $\\lambda = -N$ (you can see Eq (9.20)- Eq (9.22: $\\pi_k = \\frac{N_k}{N}$) for more details). Substituting it back into the expression, we can obtain:\n\n$$\\pi_k = \\frac{1}{N} \\sum_{n=1}^N \\gamma(z_{nk}) = \\frac{N_k}{N}$$\n\nJust as Eq (9.22: $\\pi_k = \\frac{N_k}{N}$).",
"answer_length": 2658
}
]
},
{
"chapter_number": 10,
"total_questions": 35,
"difficulty_breakdown": {
"easy": 10,
"medium": 6,
"hard": 5,
"unknown": 17
},
"questions": [
{
"chapter": 10,
"question_number": "10.1",
"difficulty": "easy",
"question_text": "Verify that the log marginal distribution of the observed data $\\ln p(\\mathbf{X})$ can be decomposed into two terms in the form (10.2: $\\ln p(\\mathbf{X}) = \\mathcal{L}(q) + \\mathrm{KL}(q||p)$) where $\\mathcal{L}(q)$ is given by (10.3: $\\mathcal{L}(q) = \\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$) and $\\mathrm{KL}(q||p)$ is given by (10.4: $KL(q||p) = -\\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}.$).",
"answer": "This problem is very similar to Prob.9.24. We substitute Eq (10.3: $\\mathcal{L}(q) = \\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$) and Eq (10.4: $KL(q||p) = -\\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}.$) into Eq (10.2):\n\n$$L(q) + \\text{KL}(q||p) = \\int_{\\mathbf{Z}} q(\\mathbf{Z}) \\left\\{ \\ln \\frac{p(\\mathbf{X}, \\mathbf{Z})}{q(\\mathbf{Z})} - \\ln \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$$\n\n$$= \\int_{\\mathbf{Z}} q(\\mathbf{Z}) \\left\\{ \\ln \\frac{p(\\mathbf{X}, \\mathbf{Z})}{p(\\mathbf{Z}|\\mathbf{X})} \\right\\} d\\mathbf{Z}$$\n\n$$= \\int_{\\mathbf{Z}} q(\\mathbf{Z}) \\ln p(\\mathbf{X}) d\\mathbf{Z}$$\n\n$$= \\ln p(\\mathbf{X})$$\n\nNote that in the last step, we have used the fact that $\\ln p(\\mathbf{X})$ doesn't depend on $\\mathbf{Z}$ , and that the integration of $q(\\mathbf{Z})$ over $\\mathbf{Z}$ equal to 1 because $q(\\mathbf{Z})$ is a PDF.",
"answer_length": 977
},
{
"chapter": 10,
"question_number": "10.10",
"difficulty": "easy",
"question_text": "Derive the decomposition given by (10.34) that is used to find approximate posterior distributions over models using variational inference.",
"answer": "We substitute $\\mathcal{L}_m$ , i.e., Eq (10.35: $\\mathcal{L}_{m} = \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}.$), back into the right hand side of Eq (10.34), yielding:\n\n(right) \n$$= \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) q(m) \\left\\{ \\ln \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m) q(m)} - \\ln \\frac{p(\\mathbf{Z}, m | \\mathbf{X})}{q(\\mathbf{Z}|m) q(m)} \\right\\}$$\n\n$$= \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) q(m) \\left\\{ \\ln \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{p(\\mathbf{Z}, m | \\mathbf{X})} \\right\\}$$\n\n$$= \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}, m) \\ln p(\\mathbf{X})$$\n\n$$= \\ln p(\\mathbf{X})$$\n\nJust as required.",
"answer_length": 716
},
{
"chapter": 10,
"question_number": "10.11",
"difficulty": "medium",
"question_text": "By using a Lagrange multiplier to enforce the normalization constraint on the distribution q(m), show that the maximum of the lower bound (10.35: $\\mathcal{L}_{m} = \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}.$) is given by (10.36).",
"answer": "We introduce the Lagrange Multiplier:\n\n$$\\begin{split} L &= \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m) q(m)} \\right\\} - \\lambda \\left\\{ \\sum_{m} q(m) - 1 \\right\\} \\\\ &= \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) q(m) \\ln \\left\\{ p(\\mathbf{Z}, \\mathbf{X}, m) - q(\\mathbf{Z}|m) \\right\\} - \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) q(m) \\ln q(m) - \\lambda \\left\\{ \\sum_{m} q(m) - 1 \\right\\} \\\\ &= \\sum_{m} q(m) \\cdot C - \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\left\\{ \\sum_{m} q(m) \\ln q(m) \\right\\} - \\lambda \\left\\{ \\sum_{m} q(m) - 1 \\right\\} \\end{split}$$\n\nWhere we have defined:\n\n$$C = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ p(\\mathbf{Z}, \\mathbf{X}, m) - q(\\mathbf{Z}|m) \\right\\}$$\n\nAccording to Calculus of Variations given in Appendix D, and also Prob.1.34, we can obtain the derivative of L with respect to q(m) and set it to 0:\n\n$$\\frac{\\partial L}{\\partial q(m)} = C + \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\left[ \\ln q(m) + 1 \\right] - \\lambda$$\n\n$$= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ p(\\mathbf{Z}, \\mathbf{X}, m) - q(\\mathbf{Z}|m) \\right\\} + \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln q(m) + 1 - \\lambda$$\n\n$$= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\} + 1 - \\lambda = 0$$\n\nWe multiply both sides by q(m) and then perform summation over m, yielding:\n\n$$\\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\} + (1 - \\lambda) \\sum_{m} q(m) = 0$$\n\nNotice that the first term is actually $\\mathcal{L}_m$ defined in Eq (10.35: $\\mathcal{L}_{m} = \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}.$) and that the summation of q(m) over m equals 1, we can obtain:\n\n$$\\lambda = \\mathcal{L}_m + 1$$\n\nWe substitute $\\lambda$ back into the derivative, yielding:\n\n$$\\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\} - \\mathcal{L}_m = 0$$\n (\\*)\n\nOne important thing must be clarified here, there is a typo in Eq (10.36), $\\mathcal{L}_m$ in Eq (10.36) should be $\\mathcal{L}^{''}$ , which is defined as:\n\n$$\\mathscr{L}^{\"} = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}|m)}{q(\\mathbf{Z}|m)} \\right\\}$$\n\nNow with the definition of $\\mathscr{L}^{''}$ , we expand (\\*):\n\n$$(*) = \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\} - \\mathcal{L}_{m}$$\n\n$$= \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}|m)p(m)}{q(\\mathbf{Z}|m)q(m)} \\right\\} - \\mathcal{L}_{m}$$\n\n$$= \\mathcal{L}'' + \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\frac{p(m)}{q(m)} - \\mathcal{L}_{m}$$\n\n$$= \\mathcal{L}'' + \\ln \\frac{p(m)}{q(m)} - \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}$$\n\n$$= \\mathcal{L}'' + \\ln \\frac{p(m)}{q(m)} - \\sum_{m} q(m) \\left\\{ \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\frac{p(\\mathbf{Z}, \\mathbf{X}|m)p(m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}$$\n\n$$= \\mathcal{L}'' + \\ln \\frac{p(m)}{q(m)} - \\sum_{m} q(m) \\left\\{ 
\\mathcal{L}'' + \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m) \\ln \\frac{p(m)}{q(m)} \\right\\}$$\n\n$$= \\mathcal{L}''' + \\ln \\frac{p(m)}{q(m)} - \\sum_{m} q(m) \\left\\{ \\mathcal{L}'' + \\ln \\frac{p(m)}{q(m)} \\right\\}$$\n\n$$= \\ln \\frac{p(m) \\exp(\\mathcal{L}'')}{q(m)} - \\sum_{m} q(m) \\ln \\frac{p(m) \\exp(\\mathcal{L}'')}{q(m)} = 0$$\n\nThe solution is given by:\n\n$$q(m) = \\frac{1}{A} \\cdot p(m) \\exp(\\mathcal{L}^{''})$$\n\nWhere $\\frac{1}{A}$ is a normalization constant, used to guarantee the summation of q(m) over m equals 1. More specific, it is given by:\n\n$$A = \\sum_{\\mathbf{Z}} p(m) \\exp(\\mathcal{L}'')$$\n\nTherefore, it is obvious that A does not depend on the value of $\\mathbf{Z}$ . You can verify the result of q(m) by substituting it back into the last line of (\\*), yielding:\n\n$$\\ln \\frac{p(m)\\exp(\\mathcal{L}'')}{q(m)} - \\sum_{m} q(m) \\ln \\frac{p(m)\\exp(\\mathcal{L}'')}{q(m)} = \\ln A - \\sum_{m} q(m) \\cdot \\ln A = 0$$\n\nOne last thing worthy mentioning is that you can directly start from $\\mathcal{L}_m$ given in Eq (10.35: $\\mathcal{L}_{m} = \\sum_{m} \\sum_{\\mathbf{Z}} q(\\mathbf{Z}|m)q(m) \\ln \\left\\{ \\frac{p(\\mathbf{Z}, \\mathbf{X}, m)}{q(\\mathbf{Z}|m)q(m)} \\right\\}.$), without enforcing Lagrange Multiplier, to obtain q(m). In this way, we can actually obtain:\n\n$$\\mathcal{L}_m = \\sum_{m} q(m) \\ln \\frac{p(m) \\exp(\\mathcal{L}^{''})}{q(m)}$$\n\nIt is actually the KL divergence between q(m) and $p(m)\\exp(\\mathcal{L}'')$ . Note that $p(m)\\exp(\\mathcal{L}'')$ is not normalized, we cannot let q(m) equal to $p(m)\\exp(\\mathcal{L}'')$ to achieve the minimum of a KL distance, i.e., 0, since q(m) is a probability distribution and should sum to 1 over m.\n\nTherefore, we can guess that the optimal q(m) is given by the normalized $p(m) \\exp(\\mathcal{L}^n)$ . In this way, the constraint, i.e., summation of q(m) over m equals 1, is implicitly guaranteed. The more strict proof using Lagrange Multiplier has been shown above.",
"answer_length": 5164
},
{
"chapter": 10,
"question_number": "10.12",
"difficulty": "medium",
"question_text": "Starting from the joint distribution (10.41: $p(\\mathbf{X}, \\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})p(\\mathbf{Z}|\\boldsymbol{\\pi})p(\\boldsymbol{\\pi})p(\\boldsymbol{\\mu}|\\boldsymbol{\\Lambda})p(\\boldsymbol{\\Lambda})$), and applying the general result (10.9: $\\ln q_i^{\\star}(\\mathbf{Z}_i) = \\mathbb{E}_{i \\neq i}[\\ln p(\\mathbf{X}, \\mathbf{Z})] + \\text{const.}$), show that the optimal variational distribution $q^{\\star}(\\mathbf{Z})$ over the latent variables for the Bayesian mixture of Gaussians is given by (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$) by verifying the steps given in the text.",
"answer": "The solution procedure has already been given in Eq (10.43: $\\ln q^{\\star}(\\mathbf{Z}) = \\mathbb{E}_{\\pi,\\mu,\\Lambda}[\\ln p(\\mathbf{X}, \\mathbf{Z}, \\pi, \\mu, \\Lambda)] + \\text{const.}$) - (10.49: $r_{nk} = \\frac{\\rho_{nk}}{\\sum_{j=1}^{K} \\rho_{nj}}.$), so here we explain it in more details, starting from Eq (10.43):\n\n$$\\begin{split} & \\ln q^{\\star}(\\mathbf{Z}) &= \\mathbb{E}_{\\pi,\\mu,\\Lambda}[\\ln p(\\mathbf{X},\\mathbf{Z},\\pi,\\mu,\\Lambda)] + \\operatorname{const} \\\\ &= \\mathbb{E}_{\\pi}[\\ln p(\\mathbf{Z}|\\pi)] + \\mathbb{E}_{\\mu,\\Lambda}[\\ln p(\\mathbf{X}|\\mathbf{Z},\\mu,\\Lambda)] + \\operatorname{const} \\\\ &= \\operatorname{const} + \\mathbb{E}_{\\pi}[\\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\ln \\pi_{k}] \\\\ &+ \\mathbb{E}_{\\mu,\\Lambda}[\\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\{ \\frac{1}{2} \\ln |\\Lambda_{k}| - \\frac{D}{2} \\ln 2\\pi - \\frac{1}{2} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\Lambda_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\}] \\\\ &= \\operatorname{const} + \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\mathbb{E}_{\\pi}[\\ln \\pi_{k}] \\\\ &+ \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\mathbb{E}_{\\mu,\\Lambda}[\\{ \\frac{1}{2} \\ln |\\Lambda_{k}| - \\frac{D}{2} \\ln 2\\pi - \\frac{1}{2} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\Lambda_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\}] \\\\ &= \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\ln \\rho_{nk} + \\operatorname{const} \\end{split}$$\n\nWhere we have substituted used Eq (10.37: $p(\\mathbf{Z}|\\boldsymbol{\\pi}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\pi_k^{z_{nk}}.$) and Eq (10.38: $p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\mathcal{N} \\left( \\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}^{-1} \\right)^{z_{nk}}$), and D is the dimension of $\\mathbf{x}_n$ . Here $\\ln \\rho_{nk}$ is defined as:\n\n$$\\begin{split} &\\ln \\rho_{nk} &= &\\mathbb{E}_{\\boldsymbol{\\pi}}[\\ln \\pi_k] + \\mathbb{E}_{\\boldsymbol{\\mu},\\boldsymbol{\\Lambda}}[\\{\\frac{1}{2}\\ln |\\boldsymbol{\\Lambda}_k| - \\frac{D}{2}\\ln 2\\pi - \\frac{1}{2}(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T\\boldsymbol{\\Lambda}_k(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)\\}] \\\\ &= &\\mathbb{E}_{\\boldsymbol{\\pi}}[\\ln \\pi_k] + \\frac{1}{2}\\mathbb{E}_{\\boldsymbol{\\mu},\\boldsymbol{\\Lambda}}[\\ln |\\boldsymbol{\\Lambda}_k|] - \\frac{D}{2}\\ln 2\\pi - \\frac{1}{2}\\mathbb{E}_{\\boldsymbol{\\mu}_k,\\boldsymbol{\\Lambda}_k}[(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T\\boldsymbol{\\Lambda}_k(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)] \\end{split}$$\n\nTaking exponential of both sides, we can obtain:\n\n$$q^{\\star}(\\mathbf{Z}) \\propto \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\rho_{nk}^{z_{nk}}$$\n\nBecause $q^*(\\mathbf{Z})$ should be correctly normalized, we are required to find the normalization constant. In this problem, we find that directly calculate the normalization constant by performing summation of $q^*(\\mathbf{Z})$ over $\\mathbf{Z}$ is non trivial. Therefore, we will proof that Eq (10.49: $r_{nk} = \\frac{\\rho_{nk}}{\\sum_{j=1}^{K} \\rho_{nj}}.$) is the correct normalization by mathematical induction. 
When N=1, $q^*(\\mathbf{Z})$ will reduce to: $\\prod_{k=1}^K \\rho_{1k}^{z_{1k}}$ , and it is easy to see that the normalization constant is given by:\n\n$$A = \\sum_{\\mathbf{z}_1} \\prod_{k=1}^K \\rho_{1k}^{z_{1k}} = \\sum_{j=1}^K \\rho_{1j}$$\n\nHere we have used 1-of-K coding scheme for $\\mathbf{z}_1 = [z_{11}, z_{12}, ..., z_{1K}]^T$ , i.e., only one of $\\{z_{11}, z_{12}, ..., z_{1K}\\}$ will be 1 and others all 0. Therefore the summation over $\\mathbf{z}_1$ is made up of K terms, and the j-th term corresponds to $z_{1j} = 1$ and other $z_{1i}$ equals 0. In this case, we have obtained:\n\n$$q^{\\star}(\\mathbf{Z}) = \\frac{1}{A} \\prod_{k=1}^{K} \\rho_{1k}^{z_{1k}} = \\prod_{k=1}^{K} \\left( \\frac{\\rho_{1k}}{\\sum_{j=1}^{K} \\ln \\rho_{1j}} \\right)^{z_{1k}}$$\n\nIt is exactly the same as Eq (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$) and Eq (10.49: $r_{nk} = \\frac{\\rho_{nk}}{\\sum_{j=1}^{K} \\rho_{nj}}.$). Suppose now we have proved that for N-1, the normalized $q^*(\\mathbf{Z})$ is given by Eq (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$) and Eq (10.49: $r_{nk} = \\frac{\\rho_{nk}}{\\sum_{j=1}^{K} \\rho_{nj}}.$). For N, we have:\n\n$$\\begin{split} \\sum_{\\mathbf{Z}} q^{\\star}(\\mathbf{Z}) &= \\sum_{\\mathbf{z}_{1}, \\dots, \\mathbf{z}_{N}} \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}} \\\\ &= \\sum_{\\mathbf{z}_{N}} \\left\\{ \\sum_{\\mathbf{z}_{1}, \\dots, \\mathbf{z}_{N-1}} \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}} \\right\\} \\\\ &= \\sum_{\\mathbf{z}_{N}} \\left\\{ \\sum_{\\mathbf{z}_{1}, \\dots, \\mathbf{z}_{N-1}} \\left[ \\prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \\right] \\cdot \\left[ \\prod_{n=1}^{N-1} \\prod_{k=1}^{K} r_{nk}^{z_{nk}} \\right] \\right\\} \\\\ &= \\sum_{\\mathbf{z}_{N}} \\left\\{ \\left[ \\prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \\right] \\cdot \\sum_{\\mathbf{z}_{1}, \\dots, \\mathbf{z}_{N-1}} \\prod_{n=1}^{N-1} \\prod_{k=1}^{K} r_{nk}^{z_{nk}} \\right\\} \\\\ &= \\sum_{\\mathbf{z}_{N}} \\left\\{ \\left[ \\prod_{k=1}^{K} r_{Nk}^{z_{Nk}} \\right] \\cdot 1 \\right\\} \\\\ &= \\sum_{\\mathbf{z}_{N}} \\prod_{k=1}^{K} r_{Nk}^{z_{Nk}} = \\sum_{k=1}^{K} r_{Nk} = 1 \\end{split}$$\n\nThe proof of the final step is exactly the same as that for N=1. So now, with the assumption Eq (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$) and Eq (10.49) are right for N-1, we have shown that they are also correct for N. The proof is complete.",
"answer_length": 5396
},
{
"chapter": 10,
"question_number": "10.13",
"difficulty": "medium",
"question_text": "Starting from (10.54: $+\\sum_{k=1}^{K}\\sum_{n=1}^{N}\\mathbb{E}[z_{nk}]\\ln\\mathcal{N}\\left(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}^{-1}\\right)+\\text{const.}$), derive the result (10.59: $q^{\\star}(\\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}) = \\mathcal{N}\\left(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{k}, (\\beta_{k} \\boldsymbol{\\Lambda}_{k})^{-1}\\right) \\, \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{k}, \\nu_{k})$) for the optimum variational posterior distribution over $\\mu_k$ and $\\Lambda_k$ in the Bayesian mixture of Gaussians, and hence verify the expressions for the parameters of this distribution given by (10.60)–(10.63).",
"answer": "Let's start from Eq (10.54: $+\\sum_{k=1}^{K}\\sum_{n=1}^{N}\\mathbb{E}[z_{nk}]\\ln\\mathcal{N}\\left(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}^{-1}\\right)+\\text{const.}$).\n\n$$\\begin{split} \\ln q^{\\star}(\\boldsymbol{\\pi},\\boldsymbol{\\mu},\\boldsymbol{\\Lambda}) &\\propto & \\ln p(\\boldsymbol{\\pi}) + \\sum_{k=1}^{K} \\ln p(\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}) + \\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] + \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{z}_{nk}] \\ln \\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}^{-1}) \\\\ &= & \\ln C(\\boldsymbol{\\alpha}_{0}) + \\sum_{k=1}^{K} (\\alpha_{0} - 1) \\ln \\pi_{k} \\\\ &+ \\sum_{k=1}^{K} \\ln \\mathcal{N}(\\boldsymbol{\\mu}_{k}|\\mathbf{m}_{0}, (\\beta_{0}\\boldsymbol{\\Lambda}_{k})^{-1}) + \\sum_{k=1}^{K} \\ln \\mathcal{W}(\\boldsymbol{\\Lambda}_{k}|\\mathbf{W}_{0}, \\boldsymbol{v}_{0}) \\\\ &+ \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\ln \\pi_{k} \\mathbb{E}[\\boldsymbol{z}_{nk}] + \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{z}_{nk}] \\ln \\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}^{-1}) \\end{split}$$\n\nIt is easy to observe that the equation above can be decomposed into a sum of terms involving only $\\pi$ together with those only involving $\\mu$ and $\\Lambda$ . In other words, $q(\\pi, \\mu, \\Lambda)$ can be factorized into the product of $q(\\pi)$ and $q(\\mu, \\Lambda)$ . We first extract those terms depend on $\\pi$ .\n\n$$\\ln q^{\\star}(\\pi) \\propto (\\alpha_0 - 1) \\sum_{k=1}^{K} \\ln \\pi_k + \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\ln \\pi_k \\mathbb{E}[z_{nk}]$$\n\n$$= (\\alpha_0 - 1) \\sum_{k=1}^{K} \\ln \\pi_k + \\sum_{k=1}^{K} \\sum_{n=1}^{N} r_{nk} \\ln \\pi_k$$\n\n$$= \\sum_{k=1}^{K} \\ln \\pi_k \\cdot \\left[\\alpha_0 - 1 + \\sum_{n=1}^{N} r_{nk}\\right]$$\n\nComparing it to the standard form of a Dirichlet distribution, we can conclude that $q^*(\\pi) = \\text{Dir}(\\pi | \\alpha)$ , where the k-th entry of $\\alpha$ , i.e., $\\alpha_k$ is given by:\n\n$$\\alpha_k = \\alpha_0 + \\sum_{n=1}^N r_{nk} = \\alpha_0 + N_k$$\n\nNext we gather all the terms dependent on $\\mu = \\{\\mu_k\\}$ and $\\Lambda = \\{\\Lambda_k\\}$ :\n\n$$\\begin{aligned} \\ln q^{\\star}(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) &= \\sum_{k=1}^{K} \\ln \\mathcal{N}(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{0}, (\\beta_{0} \\boldsymbol{\\Lambda}_{k})^{-1}) + \\sum_{k=1}^{K} \\ln \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{0}, v_{0}) + \\sum_{k=1}^{K} \\sum_{n=1}^{N} \\mathbb{E}[\\boldsymbol{z}_{nk}] \\ln \\mathcal{N}(\\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}^{-1}) \\\\ &\\propto \\sum_{k=1}^{K} \\left\\{ \\frac{1}{2} \\ln |\\beta_{0} \\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2} (\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})^{T} \\beta_{0} \\boldsymbol{\\Lambda}_{k} (\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0}) \\right\\} \\\\ &+ \\sum_{k=1}^{K} \\left\\{ \\frac{v_{0} - D - 1}{2} \\ln |\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2} \\mathrm{Tr}(\\mathbf{W}_{0}^{-1} \\boldsymbol{\\Lambda}_{k}) \\right\\} \\\\ &+ \\sum_{k=1}^{K} \\sum_{n=1}^{N} r_{nk} \\left\\{ \\frac{1}{2} \\ln |\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\boldsymbol{\\Lambda}_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\right\\} \\end{aligned}$$\n\nWith the knowledge that the optimal $q^*(\\mu, \\Lambda)$ can be written as:\n\n$$q^{\\star}(\\boldsymbol{\\mu}, 
\\boldsymbol{\\Lambda}) = \\prod_{k=1}^{K} q^{\\star}(\\boldsymbol{\\mu}_{k} | \\boldsymbol{\\Lambda}_{k}) q^{\\star}(\\boldsymbol{\\Lambda}_{k}) = \\prod_{k=1}^{K} \\mathcal{N}(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{k}, (\\beta_{k} \\boldsymbol{\\Lambda}_{k})^{-1}) \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{k}, v_{k}) \\quad (*)$$\n\nWe first complete square with respect to $\\mu_k$ . The quadratic term is given by:\n\n$$-\\frac{1}{2}\\boldsymbol{\\mu}_k^T(\\beta_0\\boldsymbol{\\Lambda}_k)\\boldsymbol{\\mu}_k - \\sum_{k=1}^K \\sum_{n=1}^N r_{nk} \\frac{1}{2}\\boldsymbol{\\mu}_k^T \\boldsymbol{\\Lambda}_k \\boldsymbol{\\mu}_k = -\\frac{1}{2}\\boldsymbol{\\mu}_k^T (\\beta_0\\boldsymbol{\\Lambda}_k + N_k\\boldsymbol{\\Lambda}_k)\\boldsymbol{\\mu}_k$$\n\nTherefore, comparing with (\\*), we can obtain:\n\n$$\\beta_k = \\beta_0 + N_k$$\n\nNext, we write down the linear term with respect to $\\mu_k$ :\n\n$$\\mu_k^T(\\beta_0 \\mathbf{\\Lambda}_k \\mathbf{m}_0) + \\sum_{n=1}^N r_{nk} \\cdot \\boldsymbol{\\mu}_k^T(\\mathbf{\\Lambda}_k \\mathbf{x}_n) = \\boldsymbol{\\mu}_k^T(\\beta_0 \\mathbf{\\Lambda}_k \\mathbf{m}_0 + \\sum_{n=1}^N r_{nk} \\mathbf{\\Lambda}_k \\mathbf{x}_n)$$\n\n$$= \\boldsymbol{\\mu}_k^T \\mathbf{\\Lambda}_k(\\beta_0 \\mathbf{m}_0 + \\sum_{n=1}^N r_{nk} \\mathbf{x}_n)$$\n\n$$= \\boldsymbol{\\mu}_k^T \\mathbf{\\Lambda}_k(\\beta_0 \\mathbf{m}_0 + N_k \\bar{\\mathbf{x}}_k)$$\n\nWhere we have defined:\n\n$$ar{\\mathbf{x}}_k = \frac{1}{\\sum_{n=1}^{N} r_{nk}} \\sum_{n=1}^{N} r_{nk} \\mathbf{x}_n = \frac{1}{N_k} \\sum_{n=1}^{N} r_{nk} \\mathbf{x}_n$$\n\nComparing to the standard form, we can obtain:\n\n$$\\mathbf{m}_k = \frac{1}{eta_k}(eta_0 \\mathbf{m}_0 + N_k ar{\\mathbf{x}}_k)$$\n\nNow we have obtained $q^*(\\mu_k|\\Lambda_k)=\\mathcal{N}(\\mu_k|\\mathbf{m}_k,(\\beta_k\\Lambda_k)^{-1})$ , using the relation:\n\n$$\\ln q^{\\star}(\\boldsymbol{\\Lambda}_k) = \\ln q^{\\star}(\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k) - \\ln q^{\\star}(\\boldsymbol{\\mu}_k | \\boldsymbol{\\Lambda}_k)$$\n\nAnd focusing only on the terms dependent on $\\Lambda_k$ , we can obtain:\n\n$$\\begin{split} \\ln q^{\\star}(\\boldsymbol{\\Lambda}_{k}) &\\propto & \\left\\{\\frac{1}{2}\\ln|\\beta_{0}\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})^{T}\\beta_{0}\\boldsymbol{\\Lambda}_{k}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})\\right\\} \\\\ &+ \\left\\{\\frac{v_{0} - D - 1}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}\\mathrm{Tr}(\\mathbf{W}_{0}^{-1}\\boldsymbol{\\Lambda}_{k})\\right\\} \\\\ &+ \\sum_{n=1}^{N}r_{nk}\\left\\{\\frac{1}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})\\right\\} \\\\ &- \\left\\{\\frac{1}{2}\\ln|\\beta_{k}\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{k})^{T}\\beta_{k}\\boldsymbol{\\Lambda}_{k}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{k})\\right\\} \\\\ &\\propto & \\left\\{\\frac{1}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}\\mathrm{Tr}\\left[\\beta_{0}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})^{T} \\cdot \\boldsymbol{\\Lambda}_{k}\\right] \\right. 
\\\\ &+ \\left\\{\\frac{v_{0} - D - 1}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}\\mathrm{Tr}\\left[\\sum_{n=1}^{N}r_{nk}(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T}(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\cdot \\boldsymbol{\\Lambda}_{k}\\right] \\\\ &- \\left\\{\\frac{1}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}\\mathrm{Tr}\\left[\\beta_{k}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{k})^{T}(\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{k}) \\cdot \\boldsymbol{\\Lambda}_{k}\\right] \\right. \\\\ &= & \\frac{v_{0} - D - 1 + N_{k}}{2}\\ln|\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2}\\mathrm{Tr}[\\mathbf{T}\\cdot\\boldsymbol{\\Lambda}_{k}] \\end{split}$$\n\nWhere we have defined:\n\n$$\\mathbf{T} = \\beta_0(\\boldsymbol{\\mu}_k - \\mathbf{m}_0)(\\boldsymbol{\\mu}_k - \\mathbf{m}_0)^T + \\mathbf{W}_0^{-1} + \\sum_{n=1}^N r_{nk}(\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T(\\mathbf{x}_n - \\boldsymbol{\\mu}_k) - \\beta_k(\\boldsymbol{\\mu}_k - \\mathbf{m}_k)^T(\\boldsymbol{\\mu}_k - \\mathbf{m}_k)$$\n\nBy matching the coefficient ahead of $\\ln |\\Lambda_k|$ , we can obtain:\n\n$$v_k = v_0 + N_k$$\n\nNext, by matching the coefficient in the Trace, we see that:\n\n$$\\mathbf{W}_{h}^{-1} = \\mathbf{T}$$\n\nLet's further simplify T, beginning by introducing a useful equation, which will be used here and later in Prob.10.16:\n\n$$\\begin{split} \\sum_{n=1}^{N} r_{nk} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} &= \\sum_{n=1}^{N} r_{nk} (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k} + \\bar{\\mathbf{x}}_{k}) (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k} + \\bar{\\mathbf{x}}_{k})^{T} \\\\ &= \\sum_{n=1}^{N} r_{nk} \\left[ (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k}) (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k})^{T} + \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} + 2(\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k}) \\bar{\\mathbf{x}}_{k}^{T} \\right] \\\\ &= \\sum_{n=1}^{N} r_{nk} \\left[ (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k}) (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k})^{T} \\right] + \\sum_{n=1}^{N} r_{nk} \\left[ \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} \\right] + \\sum_{n=1}^{N} r_{nk} \\left[ 2(\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k}) \\bar{\\mathbf{x}}_{k}^{T} \\right] \\\\ &= N_{k} \\mathbf{S}_{k} + N_{k} \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} + 2 \\left[ (N_{k} \\bar{\\mathbf{x}}_{k} - N_{k} \\bar{\\mathbf{x}}_{k}) \\bar{\\mathbf{x}}_{k}^{T} \\right] \\\\ &= N_{k} \\mathbf{S}_{k} + N_{k} \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} + 2 \\left[ (N_{k} \\bar{\\mathbf{x}}_{k} - N_{k} \\bar{\\mathbf{x}}_{k}) \\bar{\\mathbf{x}}_{k}^{T} \\right] \\\\ &= N_{k} \\mathbf{S}_{k} + N_{k} \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} \\end{split}$$\n\nWhere in the last step we have used Eq (10.51). Now we are ready to prove that **T** is exactly given by Eq (10.62: $\\mathbf{W}_{k}^{-1} = \\mathbf{W}_{0}^{-1} + N_{k}\\mathbf{S}_{k} + \\frac{\\beta_{0}N_{k}}{\\beta_{0} + N_{k}}(\\overline{\\mathbf{x}}_{k} - \\mathbf{m}_{0})(\\overline{\\mathbf{x}}_{k} - \\mathbf{m}_{0})^{\\mathrm{T}}$). Let's first consider the coefficients ahead of the quadratic term with repsect to $\\mu_k$ :\n\n(quad) = \n$$\\beta_0 \\mu_k \\mu_k^T + \\sum_{n=1}^N r_{nk} \\mu_k \\mu_k^T - \\beta_k \\mu_k \\mu_k^T = (\\beta_0 + \\sum_{n=1}^N r_{nk} - \\beta_k) \\mu_k \\mu_k^T = 0$$\n\nWhere the summation is actually equal to $N_k$ and we have also used Eq (10.60: $\\beta_k = \\beta_0 + N_k$). 
Next we focus on the linear term:\n\n(linear) = \n$$-2\\beta_0 \\mathbf{m}_0 \\boldsymbol{\\mu}_k^T + \\sum_{n=1}^N 2r_{nk} \\mathbf{x}_n \\boldsymbol{\\mu}_k^T + 2\\beta_k \\mathbf{m}_k \\boldsymbol{\\mu}_k^T$$\n \n= $2(-\\beta_0 \\mathbf{m}_0 + \\sum_{n=1}^N r_{nk} \\mathbf{x}_n + \\beta_k \\mathbf{m}_k) \\boldsymbol{\\mu}_k^T = 0$ \n\nFinally we deal with the constant term:\n\n$$\\begin{aligned} &(\\text{const}) &= & \\mathbf{W}_0^{-1} + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T + \\sum_{n=1}^N r_{nk} \\mathbf{x}_n \\mathbf{x}_n^T - \\beta_k \\mathbf{m}_k \\mathbf{m}_k^T \\\\ &= & \\mathbf{W}_0^{-1} + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T + N_k \\mathbf{S}_k + N_k \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - \\beta_k \\mathbf{m}_k \\mathbf{m}_k^T \\\\ &= & \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T + N_k \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - \\frac{1}{\\beta_k} \\beta_k^2 \\mathbf{m}_k \\mathbf{m}_k^T \\\\ &= & \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T + N_k \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - \\frac{1}{\\beta_k} (\\beta_0 \\mathbf{m}_0 + N_K \\bar{\\mathbf{x}}_k) (\\beta_0 \\mathbf{m}_0 + N_K \\bar{\\mathbf{x}}_k)^T \\\\ &= & \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + (\\beta_0 - \\frac{\\beta_0^2}{\\beta_k}) \\mathbf{m}_0 \\mathbf{m}_0^T + (N_k - \\frac{N_k^2}{\\beta_k}) \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - \\frac{1}{\\beta_k} 2(\\beta_0 \\mathbf{m}_0) \\cdot (N_K \\bar{\\mathbf{x}}_k)^T \\\\ &= & \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + \\frac{\\beta_0 N_k}{\\beta_k} \\mathbf{m}_0 \\mathbf{m}_0^T + \\frac{\\beta_0 N_k}{\\beta_k} \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - \\frac{\\beta_0 N_K}{\\beta_k} 2(\\mathbf{m}_0) \\cdot (\\bar{\\mathbf{x}}_k)^T \\\\ &= & \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + \\frac{\\beta_0 N_k}{\\beta_k} (\\mathbf{m}_0 - \\bar{\\mathbf{x}}_k) (\\mathbf{m}_0 - \\bar{\\mathbf{x}}_k)^T \\end{aligned}$$\n\nJust as required.",
"answer_length": 11294
},
{
"chapter": 10,
"question_number": "10.14",
"difficulty": "medium",
"question_text": "Using the distribution (10.59: $q^{\\star}(\\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}) = \\mathcal{N}\\left(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{k}, (\\beta_{k} \\boldsymbol{\\Lambda}_{k})^{-1}\\right) \\, \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{k}, \\nu_{k})$), verify the result (10.64: $= D\\beta_{k}^{-1} + \\nu_{k}(\\mathbf{x}_{n}-\\mathbf{m}_{k})^{\\mathrm{T}}\\mathbf{W}_{k}(\\mathbf{x}_{n}-\\mathbf{m}_{k}) \\quad$).",
"answer": "Let's begin by definition.\n\n$$\\begin{split} \\mathbb{E}_{\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}}[(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})] &= \\int \\int (\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k}))q^{\\star}(\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k})d\\boldsymbol{\\mu}_{k}d\\boldsymbol{\\Lambda}_{k} \\\\ &= \\int \\left\\{ \\int (\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k}))q^{\\star}(\\boldsymbol{\\mu}_{k}|\\boldsymbol{\\Lambda}_{k})d\\boldsymbol{\\mu}_{k} \\right\\}q^{\\star}(\\boldsymbol{\\Lambda}_{k})d\\boldsymbol{\\Lambda}_{k} \\\\ &= \\int \\mathbb{E}_{\\boldsymbol{\\mu}_{k}}[(\\boldsymbol{\\mu}_{k}-\\mathbf{x}_{n})^{T}\\boldsymbol{\\Lambda}_{k}(\\boldsymbol{\\mu}_{k}-\\mathbf{x}_{n})]\\cdot q^{\\star}(\\boldsymbol{\\Lambda}_{k})d\\boldsymbol{\\Lambda}_{k} \\end{split}$$\n\nThe inner expectation is with respect to $\\mu_k$ , which satisfies a Gaussian distribution. We use Eq (380) in 'MatrixCookbook': if $\\mathbf{x} \\sim \\mathcal{N}(\\mathbf{m}, \\Sigma)$ , we have:\n\n$$\\mathbb{E}[(\\mathbf{x} - \\mathbf{m}')^{T} \\mathbf{A} (\\mathbf{x} - \\mathbf{m}')] = (\\mathbf{m} - \\mathbf{m}')^{T} \\mathbf{A} (\\mathbf{m} - \\mathbf{m}') + \\text{Tr}(\\mathbf{A} \\mathbf{\\Sigma})$$\n\nTherefore, here we can obtain:\n\n$$\\mathbb{E}_{\\boldsymbol{\\mu}_k}[(\\boldsymbol{\\mu}_k - \\mathbf{x}_n)^T \\boldsymbol{\\Lambda}_k (\\boldsymbol{\\mu}_k - \\mathbf{x}_n)] = (\\mathbf{m}_k - \\mathbf{x}_n)^T \\boldsymbol{\\Lambda}_k (\\mathbf{m}_k - \\mathbf{x}_n) + \\mathrm{Tr} \\left[ \\boldsymbol{\\Lambda}_k \\cdot (\\beta_k \\boldsymbol{\\Lambda}_k)^{-1} \\right]$$\n\nSubstituting it back into the integration, we can obtain:\n\n$$\\mathbb{E}_{\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}}[(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{x}_{n}-\\boldsymbol{\\mu}_{k})] = \\int \\left[ (\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{m}_{k}-\\mathbf{x}_{n}) + D\\boldsymbol{\\beta}_{k}^{-1} \\right] \\cdot q^{\\star}(\\boldsymbol{\\Lambda}_{k}) d\\boldsymbol{\\Lambda}_{k}$$\n\n$$= D\\boldsymbol{\\beta}_{k}^{-1} + \\mathbb{E}_{\\boldsymbol{\\Lambda}_{k}} \\left[ (\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T}\\boldsymbol{\\Lambda}_{k}(\\mathbf{m}_{k}-\\mathbf{x}_{n}) \\right]$$\n\n$$= D\\boldsymbol{\\beta}_{k}^{-1} + \\mathbb{E}_{\\boldsymbol{\\Lambda}_{k}} \\left\\{ \\operatorname{Tr}[\\boldsymbol{\\Lambda}_{k} \\cdot (\\mathbf{m}_{k}-\\mathbf{x}_{n})(\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T}] \\right\\}$$\n\n$$= D\\boldsymbol{\\beta}_{k}^{-1} + \\operatorname{Tr} \\left\\{ \\mathbb{E}_{\\boldsymbol{\\Lambda}_{k}}[\\boldsymbol{\\Lambda}_{k}] \\cdot (\\mathbf{m}_{k}-\\mathbf{x}_{n})(\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T} \\right\\}$$\n\n$$= D\\boldsymbol{\\beta}_{k}^{-1} + \\operatorname{Tr} \\left\\{ v_{k} \\mathbf{W}_{k} \\cdot (\\mathbf{m}_{k}-\\mathbf{x}_{n})(\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T} \\right\\}$$\n\n$$= D\\boldsymbol{\\beta}_{k}^{-1} + v_{k} (\\mathbf{m}_{k}-\\mathbf{x}_{n})^{T} \\mathbf{W}_{k} (\\mathbf{m}_{k}-\\mathbf{x}_{n})$$\n\nJust as required.",
"answer_length": 3025
},
{
"chapter": 10,
"question_number": "10.15",
"difficulty": "easy",
"question_text": "Using the result (B.17), show that the expected value of the mixing coefficients in the variational mixture of Gaussians is given by (10.69).",
"answer": "There is a typo in Eq (10.69). The numerator should be $\\alpha_0 + N_k$ . Let's substitute Eq (10.58: $\\alpha_k = \\alpha_0 + N_k.$) into (B.17):\n\n$$\\mathbb{E}[\\pi_k] = \\frac{\\alpha_k}{\\sum_k \\alpha_k} = \\frac{\\alpha_0 + N_k}{K\\alpha_0 + \\sum_k N_k} = \\frac{\\alpha_0 + N_k}{K\\alpha_0 + N}$$",
"answer_length": 290
},
{
"chapter": 10,
"question_number": "10.16",
"difficulty": "medium",
"question_text": "Verify the results (10.71: $\\mathbb{E}[\\ln p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\frac{1}{2} \\sum_{k=1}^{K} N_k \\left\\{ \\ln \\widetilde{\\Lambda}_k - D\\beta_k^{-1} - \\nu_k \\text{Tr}(\\mathbf{S}_k \\mathbf{W}_k) - \\nu_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k)^{\\mathrm{T}} \\mathbf{W}_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k) - D \\ln(2\\pi) \\right\\}$) and (10.72: $\\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\widetilde{\\pi}_k$) for the first two terms in the lower bound for the variational Gaussian mixture model given by (10.70: $-\\mathbb{E}[\\ln q(\\mathbf{Z})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\pi})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})]$).",
"answer": "According to Eq (10.38: $p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\mathcal{N} \\left( \\mathbf{x}_{n} | \\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}^{-1} \\right)^{z_{nk}}$), we can obtain:\n\n$$\\mathbb{E}[\\ln p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\mathbb{E}[z_{nk} \\ln \\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}^{-1})]$$\n\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\mathbb{E}[z_{nk}] \\cdot \\mathbb{E}[-\\frac{D}{2} \\ln 2\\pi + \\frac{1}{2} \\ln |\\boldsymbol{\\Lambda}_{k}| - \\frac{1}{2} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\boldsymbol{\\Lambda}_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})]$$\n\n$$= \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\mathbb{E}[z_{nk}] \\cdot \\left\\{ -D \\ln 2\\pi + \\mathbb{E}[\\ln |\\boldsymbol{\\Lambda}_{k}|] - \\mathbb{E}[(\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\boldsymbol{\\Lambda}_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})] \\right\\}$$\n\n$$= \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\cdot \\left\\{ -D \\ln 2\\pi + \\ln \\tilde{\\boldsymbol{\\Lambda}}_{k} - D \\boldsymbol{\\beta}_{k}^{-1} - v_{k} (\\mathbf{x}_{n} - \\mathbf{m}_{k})^{T} \\mathbf{W}_{k} (\\mathbf{x}_{n} - \\mathbf{m}_{k}) \\right\\}$$\n\nWhere we have used Eq (10.50: $\\mathbb{E}[z_{nk}] = r_{nk}$), Eq (10.64: $= D\\beta_{k}^{-1} + \\nu_{k}(\\mathbf{x}_{n}-\\mathbf{m}_{k})^{\\mathrm{T}}\\mathbf{W}_{k}(\\mathbf{x}_{n}-\\mathbf{m}_{k}) \\quad$) and Eq (10.65: $\\ln \\widetilde{\\Lambda}_k \\equiv \\mathbb{E}\\left[\\ln |\\mathbf{\\Lambda}_k|\\right] = \\sum_{i=1}^D \\psi\\left(\\frac{\\nu_k + 1 - i}{2}\\right) + D\\ln 2 + \\ln |\\mathbf{W}_k| \\quad$). Then we first deal with the first three terms inside the bracket, i.e.,\n\n$$\\begin{split} \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\cdot \\left\\{ -D \\ln 2\\pi + \\ln \\widetilde{\\Lambda}_{k} - D\\beta_{k}^{-1} \\right\\} &= \\frac{1}{2} \\sum_{k=1}^{K} \\sum_{n=1}^{N} r_{nk} \\cdot \\left\\{ -D \\ln 2\\pi + \\ln \\widetilde{\\Lambda}_{k} - D\\beta_{k}^{-1} \\right\\} \\\\ &= \\frac{1}{2} \\sum_{k=1}^{K} \\left[ \\sum_{n=1}^{N} r_{nk} \\right] \\cdot \\left[ -D \\ln 2\\pi + \\ln \\widetilde{\\Lambda}_{k} - D\\beta_{k}^{-1} \\right] \\\\ &= \\frac{1}{2} \\sum_{k=1}^{K} N_{k} \\cdot \\left[ -D \\ln 2\\pi + \\ln \\widetilde{\\Lambda}_{k} - D\\beta_{k}^{-1} \\right] \\end{split}$$\n\nWhere we have used the definition of $N_k$ . 
Next we deal with the last term inside the bracket, i.e.,\n\n$$\\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\cdot \\left\\{ -v_k (\\mathbf{x}_n - \\mathbf{m}_k)^T \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) \\right\\} = -\\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\operatorname{Tr}[r_{nk} v_k \\cdot (\\mathbf{x}_n - \\mathbf{m}_k) (\\mathbf{x}_n - \\mathbf{m}_k)^T \\cdot \\mathbf{W}_k] \\\\\n= -\\frac{1}{2} \\sum_{k=1}^{K} \\operatorname{Tr}[\\sum_{n=1}^{N} r_{nk} v_k \\cdot (\\mathbf{x}_n - \\mathbf{m}_k) (\\mathbf{x}_n - \\mathbf{m}_k)^T \\cdot \\mathbf{W}_k]$$\n\nSince we have:\n\n$$\\begin{split} \\sum_{n=1}^{N} r_{nk} v_k \\cdot (\\mathbf{x}_n - \\mathbf{m}_k) (\\mathbf{x}_n - \\mathbf{m}_k)^T &= v_k \\sum_{n=1}^{N} r_{nk} \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k + \\mathbf{x}_n - \\bar{\\mathbf{x}}_k) (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k + \\mathbf{x}_n - \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k \\sum_{n=1}^{N} r_{nk} \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k)^T \\\\ &+ v_k \\sum_{n=1}^{N} r_{nk} \\cdot (\\mathbf{x}_n - \\bar{\\mathbf{x}}_k) (\\mathbf{x}_n - \\bar{\\mathbf{x}}_k)^T \\\\ &+ v_k \\sum_{n=1}^{N} r_{nk} \\cdot 2 (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\mathbf{x}_n - \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k)^T \\\\ &+ v_k N_k \\mathbf{S}_k \\\\ &+ v_k \\cdot 2 (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\sum_{n=1}^{N} r_{nk} \\mathbf{x}_n - \\sum_{n=1}^{N} r_{nk} \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k)^T + v_k N_k \\mathbf{S}_k \\\\ &+ v_k \\cdot 2 (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (N_k \\bar{\\mathbf{x}}_k - N_k \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (N_k \\bar{\\mathbf{x}}_k - N_k \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (N_k \\bar{\\mathbf{x}}_k - N_k \\bar{\\mathbf{x}}_k)^T \\\\ &= v_k N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (N_k \\bar{\\mathbf{x}}_k - N_k \\bar{\\mathbf{x}}_k)^T \\end{split}$$\n\nTherefore, the last term can be reduced to:\n\n$$\\begin{split} \\frac{1}{2} \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\cdot \\left\\{ -v_{k} (\\mathbf{x}_{n} - \\mathbf{m}_{k})^{T} \\mathbf{W}_{k} (\\mathbf{x}_{n} - \\mathbf{m}_{k}) \\right\\} &= -\\frac{1}{2} \\sum_{k=1}^{K} \\mathrm{Tr}[v_{k} N_{k} \\cdot (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k})(\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k})^{T} \\mathbf{W}_{k}] \\\\ &- \\frac{1}{2} \\sum_{k=1}^{K} \\mathrm{Tr}[v_{k} N_{k} \\mathbf{S}_{k} \\mathbf{W}_{k}] \\\\ &= -\\frac{1}{2} \\sum_{k=1}^{K} N_{k} v_{k} \\cdot (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k}) \\mathbf{W}_{k} (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k})^{T} \\\\ &- \\frac{1}{2} \\sum_{k=1}^{K} N_{k} v_{k} \\mathrm{Tr}[\\mathbf{S}_{k} \\mathbf{W}_{k}] \\end{split}$$\n\nIf we combine the first three and the last term, we just obtain Eq (10.71: $\\mathbb{E}[\\ln p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\frac{1}{2} \\sum_{k=1}^{K} N_k \\left\\{ \\ln \\widetilde{\\Lambda}_k - D\\beta_k^{-1} - \\nu_k \\text{Tr}(\\mathbf{S}_k \\mathbf{W}_k) - \\nu_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k)^{\\mathrm{T}} \\mathbf{W}_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k) - D \\ln(2\\pi) \\right\\}$). 
Next we prove Eq (10.72: $\\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\widetilde{\\pi}_k$). According to Eq (10.37: $p(\\mathbf{Z}|\\boldsymbol{\\pi}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} \\pi_k^{z_{nk}}.$), we have:\n\n$$\\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\mathbb{E}[z_{nk} \\ln \\pi_k] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\widetilde{\\pi}_k$$\n\nJust as required.",
"answer_length": 5950
},
{
"chapter": 10,
"question_number": "10.17",
"difficulty": "hard",
"question_text": "Verify the results (10.73)–(10.77) for the remaining terms in the lower bound for the variational Gaussian mixture model given by (10.70: $-\\mathbb{E}[\\ln q(\\mathbf{Z})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\pi})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})]$).",
"answer": "According to Eq (10.39: $p(\\boldsymbol{\\pi}) = \\operatorname{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha}_0) = C(\\boldsymbol{\\alpha}_0) \\prod_{k=1}^K \\pi_k^{\\alpha_0 - 1}$), we have:\n\n$$\\mathbb{E}[\\ln p(\\pi)] = \\ln C(\\boldsymbol{\\alpha}_0) + (\\alpha_0 - 1) \\sum_{k=1}^K \\mathbb{E}[\\ln \\pi_k]$$\n$$= \\ln C(\\boldsymbol{\\alpha}_0) + (\\alpha_0 - 1) \\sum_{k=1}^K \\ln \\widetilde{\\pi}_k$$\n\nAccording to Eq (10.40: $= \\prod_{k=1}^{K} \\mathcal{N}\\left(\\boldsymbol{\\mu}_{k}|\\mathbf{m}_{0}, (\\beta_{0}\\boldsymbol{\\Lambda}_{k})^{-1}\\right) \\mathcal{W}(\\boldsymbol{\\Lambda}_{k}|\\mathbf{W}_{0}, \\nu_{0}) \\qquad$), we have:\n\n$$\\begin{split} \\mathbb{E}[\\ln p(\\pmb{\\mu}, \\pmb{\\Lambda})] &= \\sum_{k=1}^K \\mathbb{E}[\\ln \\mathcal{N}(\\pmb{\\mu}_k | \\mathbf{m}_0, (\\beta_0 \\mathbf{\\Lambda}_k)^{-1})] + \\sum_{k=1}^K \\mathbb{E}[\\ln \\mathcal{W}(\\mathbf{\\Lambda}_k | \\mathbf{W}_0, v_0)] \\\\ &= \\sum_{k=1}^K \\mathbb{E}\\Big\\{-\\frac{D}{2}\\ln 2\\pi + \\frac{1}{2}\\ln |\\beta_0 \\mathbf{\\Lambda}_k| - \\frac{1}{2}(\\pmb{\\mu}_k - \\mathbf{m}_0)^T (\\beta_0 \\mathbf{\\Lambda}_k) (\\pmb{\\mu}_k - \\mathbf{m}_0)\\Big\\} \\\\ &+ \\sum_{k=1}^K \\mathbb{E}\\Big\\{\\ln B(\\mathbf{W}_0, v_0) + \\frac{v_0 - D - 1}{2}\\ln |\\mathbf{\\Lambda}_k| - \\frac{1}{2}\\mathrm{Tr}[\\mathbf{W}_0^{-1}\\mathbf{\\Lambda}_k]\\Big\\} \\\\ &= \\sum_{k=1}^K \\mathbb{E}\\Big\\{-\\frac{D}{2}\\ln 2\\pi + \\frac{D}{2}\\ln \\beta_0 + \\frac{1}{2}\\ln |\\mathbf{\\Lambda}_k| - \\frac{1}{2}(\\pmb{\\mu}_k - \\mathbf{m}_0)^T (\\beta_0 \\mathbf{\\Lambda}_k) (\\pmb{\\mu}_k - \\mathbf{m}_0)\\Big\\} \\\\ &+ \\sum_{k=1}^K \\mathbb{E}\\Big\\{\\ln B(\\mathbf{W}_0, v_0) + \\frac{v_0 - D - 1}{2}\\ln |\\mathbf{\\Lambda}_k| - \\frac{1}{2}\\mathrm{Tr}[\\mathbf{W}_0^{-1}\\mathbf{\\Lambda}_k]\\Big\\} \\\\ &= \\frac{K \\cdot D}{2}\\ln \\frac{\\beta_0}{2\\pi} + \\frac{1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\mathbf{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K \\mathbb{E}\\Big\\{(\\pmb{\\mu}_k - \\mathbf{m}_0)^T (\\beta_0 \\mathbf{\\Lambda}_k) (\\pmb{\\mu}_k - \\mathbf{m}_0)\\Big\\} \\\\ &K \\cdot \\ln B(\\mathbf{W}_0, v_0) + \\frac{v_0 - D - 1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\mathbf{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K \\mathbb{E}\\Big\\{\\mathrm{Tr}[\\mathbf{W}_0^{-1}\\mathbf{\\Lambda}_k]\\Big\\} \\end{split}$$\n\nSo now we need to calculate these two expectations. 
Using (B.80), we can obtain:\n\n$$\\sum_{k=1}^K \\mathbb{E} \\Big\\{ \\mathrm{Tr}[\\mathbf{W}_0^{-1} \\mathbf{\\Lambda}_k] = \\sum_{k=1}^K \\mathrm{Tr} \\Big\\{ \\mathbf{W}_0^{-1} \\cdot \\mathbb{E}[\\mathbf{\\Lambda}_k] \\Big\\} = \\sum_{k=1}^K \\upsilon_k \\cdot \\mathrm{Tr} \\Big\\{ \\mathbf{W}_0^{-1} \\mathbf{W}_k \\Big\\}$$\n\nTo calculate the other expectation, first we write down two properties of the Gaussian distribution, i.e.,\n\n$$\\mathbb{E}[\\boldsymbol{\\mu}_k] = \\mathbf{m}_k , \\quad \\mathbb{E}[\\boldsymbol{\\mu}_k \\boldsymbol{\\mu}_k^T] = \\mathbf{m}_k \\mathbf{m}_k^T + \\boldsymbol{\\beta}_k^{-1} \\boldsymbol{\\Lambda}_k^{-1}$$\n\nTherefore, we can obtain:\n\n$$\\begin{split} \\sum_{k=1}^K \\mathbb{E}\\Big\\{ (\\boldsymbol{\\mu}_k - \\mathbf{m}_0)^T (\\beta_0 \\boldsymbol{\\Lambda}_k) (\\boldsymbol{\\mu}_k - \\mathbf{m}_0) \\Big\\} &= \\beta_0 \\sum_{k=1}^K \\mathbb{E}\\Big\\{ \\mathrm{Tr}[\\boldsymbol{\\Lambda}_k \\cdot (\\boldsymbol{\\mu}_k - \\mathbf{m}_0) (\\boldsymbol{\\mu}_k - \\mathbf{m}_0)^T] \\Big\\} \\\\ &= \\beta_0 \\sum_{k=1}^K \\mathbb{E}_{\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k} \\Big\\{ \\mathrm{Tr}\\big[\\boldsymbol{\\Lambda}_k \\cdot (\\boldsymbol{\\mu}_k \\boldsymbol{\\mu}_k^T - 2\\boldsymbol{\\mu}_k \\mathbf{m}_0^T + \\mathbf{m}_0 \\mathbf{m}_0^T)] \\Big\\} \\\\ &= \\beta_0 \\sum_{k=1}^K \\mathbb{E}_{\\boldsymbol{\\Lambda}_k} \\Big\\{ \\mathrm{Tr}\\big[\\boldsymbol{\\Lambda}_k \\cdot (\\mathbf{m}_k \\mathbf{m}_k^T + \\boldsymbol{\\beta}_k^{-1} \\boldsymbol{\\Lambda}_k^{-1} - 2\\mathbf{m}_k \\mathbf{m}_0^T + \\mathbf{m}_0 \\mathbf{m}_0^T)] \\Big\\} \\\\ &= \\beta_0 \\sum_{k=1}^K \\mathbb{E}_{\\boldsymbol{\\Lambda}_k} \\Big\\{ \\mathrm{Tr}\\big[\\boldsymbol{\\beta}_k^{-1} \\mathbf{I} + \\boldsymbol{\\Lambda}_k \\cdot (\\mathbf{m}_k \\mathbf{m}_k^T - 2\\mathbf{m}_k \\mathbf{m}_0^T + \\mathbf{m}_0 \\mathbf{m}_0^T)] \\Big\\} \\\\ &= \\beta_0 \\sum_{k=1}^K \\mathbb{E}_{\\boldsymbol{\\Lambda}_k} \\Big\\{ \\boldsymbol{D} \\cdot \\boldsymbol{\\beta}_k^{-1} + \\mathrm{Tr}\\big[\\boldsymbol{\\Lambda}_k \\cdot (\\mathbf{m}_k - \\mathbf{m}_0) (\\mathbf{m}_k - \\mathbf{m}_0)^T \\big] \\Big\\} \\\\ &= \\frac{KD\\beta_0}{\\beta_k} + \\beta_0 \\sum_{k=1}^K \\mathbb{E}_{\\boldsymbol{\\Lambda}_k} \\Big\\{ (\\mathbf{m}_k - \\mathbf{m}_0) \\boldsymbol{\\Lambda}_k (\\mathbf{m}_k - \\mathbf{m}_0)^T \\Big\\} \\\\ &= \\frac{KD\\beta_0}{\\beta_k} + \\beta_0 \\sum_{k=1}^K (\\mathbf{m}_k - \\mathbf{m}_0) \\cdot \\mathbb{E}_{\\boldsymbol{\\Lambda}_k} [\\boldsymbol{\\Lambda}_k] \\cdot (\\mathbf{m}_k - \\mathbf{m}_0)^T \\\\ &= \\frac{KD\\beta_0}{\\beta_k} + \\beta_0 \\sum_{k=1}^K (\\mathbf{m}_k - \\mathbf{m}_0) \\cdot (v_k \\mathbf{W}_k) \\cdot (\\mathbf{m}_k - \\mathbf{m}_0)^T \\end{split}$$\n\nSubstituting these two expectations back, we obtain Eq (10.74: $\\mathbb{E}[\\ln p(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\frac{1}{2} \\sum_{k=1}^{K} \\left\\{ D \\ln(\\beta_0/2\\pi) + \\ln \\widetilde{\\Lambda}_k - \\frac{D\\beta_0}{\\beta_k} - \\beta_0 \\nu_k (\\mathbf{m}_k - \\mathbf{m}_0)^{\\mathrm{T}} \\mathbf{W}_k (\\mathbf{m}_k - \\mathbf{m}_0) \\right\\} + K \\ln B(\\mathbf{W}_0, \\nu_0) + \\frac{(\\nu_0 - D - 1)}{2} \\sum_{k=1}^{K} \\ln \\widetilde{\\Lambda}_k - \\frac{1}{2} \\sum_{k=1}^{K} \\nu_k \\mathrm{Tr}(\\mathbf{W}_0^{-1} \\mathbf{W}_k)$) just as required. 
According to Eq (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$), we have:\n\n$$\\mathbb{E}[\\ln q(\\mathbf{Z})] = \\sum_{n,k=1}^{N,K} \\mathbb{E}[z_{nk}] \\cdot \\ln r_{nk} = \\sum_{n,k=1}^{N,K} r_{nk} \\cdot \\ln r_{nk}$$\n\nAccording to Eq (10.57: $q^{\\star}(\\boldsymbol{\\pi}) = \\operatorname{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha})$), we have:\n\n$$\\mathbb{E}[\\ln q(\\pi)] = \\ln C(\\boldsymbol{\\alpha}) + (\\alpha_k - 1) \\sum_{k=1}^K \\mathbb{E}[\\ln \\pi_k]$$\n$$= \\ln C(\\boldsymbol{\\alpha}_0) + (\\alpha_k - 1) \\sum_{k=1}^K \\ln \\widetilde{\\pi}_k$$\n\nTo derive Eq(10.77), we follow the same procedure as that for Eq(10.74):\n\n$$\\begin{split} \\mathbb{E}[\\ln q(\\pmb{\\mu},\\pmb{\\Lambda})] &= \\sum_{k=1}^K \\mathbb{E}[\\ln \\mathcal{N}(\\pmb{\\mu}_k|\\mathbf{m}_k,(\\beta_k\\pmb{\\Lambda}_k)^{-1})] + \\sum_{k=1}^K \\mathbb{E}[\\ln \\mathcal{W}(\\pmb{\\Lambda}_k|\\mathbf{W}_k,v_k)] \\\\ &= \\sum_{k=1}^K \\mathbb{E}\\Big\\{-\\frac{D}{2}\\ln 2\\pi + \\frac{D}{2}\\ln \\beta_k + \\frac{1}{2}\\ln |\\pmb{\\Lambda}_k| - \\frac{1}{2}(\\pmb{\\mu}_k - \\mathbf{m}_k)^T(\\beta_k\\pmb{\\Lambda}_k)(\\pmb{\\mu}_k - \\mathbf{m}_k)\\Big\\} \\\\ &+ \\sum_{k=1}^K \\mathbb{E}\\Big\\{\\ln B(\\mathbf{W}_k,v_k) + \\frac{v_k - D - 1}{2}\\ln |\\pmb{\\Lambda}_k| - \\frac{1}{2}\\mathrm{Tr}[\\mathbf{W}_k^{-1}\\pmb{\\Lambda}_k]\\Big\\} \\\\ &= \\frac{K \\cdot D}{2}\\ln \\frac{\\beta_k}{2\\pi} + \\frac{1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K \\mathbb{E}\\Big\\{(\\pmb{\\mu}_k - \\mathbf{m}_k)^T(\\beta_k\\pmb{\\Lambda}_k)(\\pmb{\\mu}_k - \\mathbf{m}_k)\\Big\\} \\\\ &K \\cdot \\ln B(\\mathbf{W}_k,v_k) + \\frac{v_k - D - 1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K \\mathbb{E}\\Big\\{\\mathrm{Tr}[\\mathbf{W}_k^{-1}\\pmb{\\Lambda}_k]\\Big\\} \\\\ &= \\frac{K \\cdot D}{2}\\ln \\frac{\\beta_k}{2\\pi} + \\frac{1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{KD}{2} \\\\ &K \\cdot \\ln B(\\mathbf{W}_k,v_k) + \\frac{v_k - D - 1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K v_k \\mathbb{E}\\Big\\{\\mathrm{Tr}[\\mathbf{W}_k^{-1}\\mathbf{W}_k]\\Big\\} \\\\ &= \\frac{K \\cdot D}{2}\\ln \\frac{\\beta_k}{2\\pi} + \\frac{1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{KD}{2} \\\\ &K \\cdot \\ln B(\\mathbf{W}_k,v_k) + \\frac{v_k - D - 1}{2}\\sum_{k=1}^K \\ln \\widetilde{\\pmb{\\Lambda}_k} - \\frac{1}{2}\\sum_{k=1}^K v_k \\cdot D \\end{split}$$\n\nIt is identical to Eq (10.77: $\\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\sum_{k=1}^{K} \\left\\{ \\frac{1}{2} \\ln \\widetilde{\\Lambda}_k + \\frac{D}{2} \\ln \\left( \\frac{\\beta_k}{2\\pi} \\right) - \\frac{D}{2} - \\operatorname{H}\\left[q(\\boldsymbol{\\Lambda}_k)\\right] \\right\\}$).",
"answer_length": 7815
},
{
"chapter": 10,
"question_number": "10.18",
"difficulty": "hard",
"question_text": "In this exercise, we shall derive the variational re-estimation equations for the Gaussian mixture model by direct differentiation of the lower bound. To do this we assume that the variational distribution has the factorization defined by (10.42: $q(\\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = q(\\mathbf{Z})q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}).$) and (10.55: $q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = q(\\boldsymbol{\\pi}) \\prod_{k=1}^{K} q(\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k).$) with factors given by (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$), (10.57: $q^{\\star}(\\boldsymbol{\\pi}) = \\operatorname{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha})$), and (10.59: $q^{\\star}(\\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}) = \\mathcal{N}\\left(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{k}, (\\beta_{k} \\boldsymbol{\\Lambda}_{k})^{-1}\\right) \\, \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{k}, \\nu_{k})$). Substitute these into (10.70: $-\\mathbb{E}[\\ln q(\\mathbf{Z})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\pi})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})]$) and hence obtain the lower bound as a function of the parameters of the variational distribution. Then, by maximizing the bound with respect to these parameters, derive the re-estimation equations for the factors in the variational distribution, and show that these are the same as those obtained in Section 10.2.1.",
"answer": "This problem is very complicated. Let's explain it in details. In section 10.2.1, we have obtained the update formula for all the coefficients using the general framework of variational inference. For more details you can see Prob.10.12 and Prob.10.13.\n\nMoreover, in the previous problem, we have shown that $\\mathcal{L}$ is given by Eq (10.70)-Eq (10.77: $\\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\sum_{k=1}^{K} \\left\\{ \\frac{1}{2} \\ln \\widetilde{\\Lambda}_k + \\frac{D}{2} \\ln \\left( \\frac{\\beta_k}{2\\pi} \\right) - \\frac{D}{2} - \\operatorname{H}\\left[q(\\boldsymbol{\\Lambda}_k)\\right] \\right\\}$), if we have assumed the form of q, i.e., Eq (10.42: $q(\\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = q(\\mathbf{Z})q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}).$), Eq (10.48: $q^{\\star}(\\mathbf{Z}) = \\prod_{n=1}^{N} \\prod_{k=1}^{K} r_{nk}^{z_{nk}}$),Eq (10.55: $q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) = q(\\boldsymbol{\\pi}) \\prod_{k=1}^{K} q(\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k).$), Eq (10.57: $q^{\\star}(\\boldsymbol{\\pi}) = \\operatorname{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha})$) and Eq (10.59: $q^{\\star}(\\boldsymbol{\\mu}_{k}, \\boldsymbol{\\Lambda}_{k}) = \\mathcal{N}\\left(\\boldsymbol{\\mu}_{k} | \\mathbf{m}_{k}, (\\beta_{k} \\boldsymbol{\\Lambda}_{k})^{-1}\\right) \\, \\mathcal{W}(\\boldsymbol{\\Lambda}_{k} | \\mathbf{W}_{k}, \\nu_{k})$). Note that here we do not know the specific value of those coefficients, e.g., Eq (10.60)-Eq (10.63). In this problem, we will show that by maximizing $\\mathcal{L}$ with respect to those coefficients, we will obtain those formula just as in section 10.2.1.\n\nTo summarize, here we write down all the coefficients required to estimate: $\\{\\beta_k, \\mathbf{m}_k, v_k, \\mathbf{W}_k, \\alpha_k, r_{nk}\\}$ . We begin by considering $\\beta_k$ . 
Note that only Eq (10.71: $\\mathbb{E}[\\ln p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\frac{1}{2} \\sum_{k=1}^{K} N_k \\left\\{ \\ln \\widetilde{\\Lambda}_k - D\\beta_k^{-1} - \\nu_k \\text{Tr}(\\mathbf{S}_k \\mathbf{W}_k) - \\nu_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k)^{\\mathrm{T}} \\mathbf{W}_k (\\overline{\\mathbf{x}}_k - \\mathbf{m}_k) - D \\ln(2\\pi) \\right\\}$), (10.74: $\\mathbb{E}[\\ln p(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\frac{1}{2} \\sum_{k=1}^{K} \\left\\{ D \\ln(\\beta_0/2\\pi) + \\ln \\widetilde{\\Lambda}_k - \\frac{D\\beta_0}{\\beta_k} - \\beta_0 \\nu_k (\\mathbf{m}_k - \\mathbf{m}_0)^{\\mathrm{T}} \\mathbf{W}_k (\\mathbf{m}_k - \\mathbf{m}_0) \\right\\} + K \\ln B(\\mathbf{W}_0, \\nu_0) + \\frac{(\\nu_0 - D - 1)}{2} \\sum_{k=1}^{K} \\ln \\widetilde{\\Lambda}_k - \\frac{1}{2} \\sum_{k=1}^{K} \\nu_k \\mathrm{Tr}(\\mathbf{W}_0^{-1} \\mathbf{W}_k)$) and (10.77: $\\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})] = \\sum_{k=1}^{K} \\left\\{ \\frac{1}{2} \\ln \\widetilde{\\Lambda}_k + \\frac{D}{2} \\ln \\left( \\frac{\\beta_k}{2\\pi} \\right) - \\frac{D}{2} - \\operatorname{H}\\left[q(\\boldsymbol{\\Lambda}_k)\\right] \\right\\}$) contain $\\beta_k$ , we calculate the derivative of $\\mathcal{L}$ with\n\nrespect to $\\beta_k$ and set it to zero:\n\n$$\\begin{array}{lcl} \\frac{\\partial \\mathcal{L}}{\\partial \\beta_k} & = & (\\frac{1}{2}N_k D\\beta_k^{-2}) + (\\frac{1}{2}D\\beta_0\\beta_k^{-2}) - (\\frac{D}{2}\\frac{1}{2\\pi}\\frac{2\\pi}{\\beta_k}) \\\\ & = & \\frac{1}{2}\\beta_k^{-2} \\cdot (N_k D + D\\beta_0 - D\\beta_k) = 0 \\end{array}$$\n\nThe three brackets in the first line correspond to the derivative with respect to Eq (10.71), (10.74) and (10.77). Rearranging it, we obtain Eq (10.60). Next we consider $\\mathbf{m}_k$ , which only occurs in the quadratic terms in Eq (10.71) and (10.74).\n\n$$\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{m}_{k}} = \\left[ \\frac{1}{2} N_{k} v_{k} \\cdot 2 \\mathbf{W}_{k} (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k}) \\right] + \\left[ -\\frac{1}{2} \\beta_{0} v_{k} \\cdot 2 \\mathbf{W}_{k} (\\mathbf{m}_{k} - \\mathbf{m}_{0}) \\right] \n= v_{k} \\mathbf{W}_{k} \\left[ N_{k} \\cdot (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k}) - \\beta_{0} (\\mathbf{m}_{k} - \\mathbf{m}_{0}) \\right] = 0$$\n\nSimilarly, the two brackets in the first line correspond to the derivative with respect to Eq (10.71) and (10.74). Rearranging it, we obtain Eq (10.61). Next noticing that $v_k$ and $\\mathbf{W}_k$ are always coupled in $\\mathcal{L}$ , e.g., $v_k$ occurs ahead of quadratic terms in Eq (10.71). We will deal with $v_k$ and $\\mathbf{W}_k$ simultaneously. 
Let's first make this more clear by writing down those terms that depend on $v_k$ and $\\mathbf{W}_k$ in $\\mathcal{L}$:\n\n$$(10.77) \\propto \\sum_{k=1}^{K} \\left\\{ \\frac{1}{2} \\ln \\widetilde{\\Lambda}_k - \\text{H}[q(\\boldsymbol{\\Lambda}_k)] \\right\\}$$\n\n$$(10.71) \\propto \\frac{1}{2} \\sum_{k=1}^{K} N_k \\left\\{ \\ln \\widetilde{\\Lambda}_k - v_k \\cdot \\text{Tr}[(\\mathbf{S}_k + \\mathbf{A}_k) \\mathbf{W}_k] \\right\\}$$\n\n$$(10.74) \\propto \\frac{1}{2} \\sum_{k=1}^{K} \\left\\{ \\ln \\widetilde{\\Lambda}_{k} - \\beta_{0} v_{k} \\cdot \\text{Tr}[\\mathbf{B}_{k} \\mathbf{W}_{k}] \\right\\} + \\frac{v_{0} - D - 1}{2} \\sum_{k=1}^{K} \\ln \\widetilde{\\Lambda}_{k} - \\frac{1}{2} \\sum_{k=1}^{K} v_{k} \\text{Tr}[\\mathbf{W}_{0}^{-1} \\mathbf{W}_{k}]$$\n\n$$= \\frac{v_{0} - D}{2} \\sum_{k=1}^{K} \\ln \\widetilde{\\Lambda}_{k} - \\frac{1}{2} \\sum_{k=1}^{K} v_{k} \\text{Tr}[(\\beta_{0} \\mathbf{B}_{k} + \\mathbf{W}_{0}^{-1}) \\mathbf{W}_{k}]$$\n\nWhere $\\ln \\widetilde{\\Lambda}_k$ is given by Eq (10.65) and $\\mathbf{A}_k$ and $\\mathbf{B}_k$ are given by:\n\n$$\\mathbf{A}_k = (\\bar{\\mathbf{x}_k} - \\mathbf{m}_k)(\\bar{\\mathbf{x}_k} - \\mathbf{m}_k)^T, \\quad \\mathbf{B}_k = (\\mathbf{m}_k - \\mathbf{m}_0)(\\mathbf{m}_k - \\mathbf{m}_0)^T$$\n\nMoreover, $H[q(\\Lambda_k)]$ is given by (B.82):\n\n$$H[q(\\mathbf{\\Lambda}_k)] = -\\ln B(\\mathbf{W}_k, v_k) - \\frac{v_k - D - 1}{2} \\ln \\widetilde{\\Lambda}_k + \\frac{v_k D}{2}$$\n\nWhere $\\ln B(\\mathbf{W}_k, v_k)$ can be calculated based on (B.79). Note here we only focus on those terms dependent on $v_k$ and $\\mathbf{W}_k$:\n\n$$\\ln B(\\mathbf{W}_k, v_k) \\propto -\\frac{v_k}{2} \\ln |\\mathbf{W}_k| - \\frac{v_k D}{2} \\ln 2 - \\sum_{i=1}^{D} \\ln \\Gamma(\\frac{v_k + 1 - i}{2})$$\n\nTo further simplify the derivative, we now write down those terms in $\\mathcal{L}$ which only depend on $v_k$ and $\\mathbf{W}_k$ for a given specific index k:\n\n$$\\begin{split} \\mathcal{L} & \\propto & -\\left\\{\\frac{1}{2}\\ln\\widetilde{\\Lambda}_k - \\mathrm{H}[q(\\boldsymbol{\\Lambda}_k)]\\right\\} + \\frac{1}{2}N_k\\left\\{\\ln\\widetilde{\\Lambda}_k - v_k \\cdot \\mathrm{Tr}[(\\mathbf{S}_k + \\mathbf{A}_k)\\mathbf{W}_k]\\right\\} \\\\ & + \\frac{v_0 - D}{2}\\ln\\widetilde{\\Lambda}_k - \\frac{1}{2}v_k \\cdot \\mathrm{Tr}[(\\beta_0\\mathbf{B}_k + \\mathbf{W}_0^{-1})\\mathbf{W}_k] \\\\ & = & \\frac{1}{2}(-1 + N_k + v_0 - D)\\ln\\widetilde{\\Lambda}_k + \\mathrm{H}[q(\\boldsymbol{\\Lambda}_k)] - \\frac{1}{2}v_k \\cdot \\mathrm{Tr}[(N_k\\mathbf{S}_k + N_k\\mathbf{A}_k + \\beta_0\\mathbf{B}_k + \\mathbf{W}_0^{-1})\\mathbf{W}_k] \\\\ & = & \\frac{1}{2}(-1 + N_k + v_0 - D)\\ln\\widetilde{\\Lambda}_k - \\frac{1}{2}v_k \\cdot \\mathrm{Tr}[(N_k\\mathbf{S}_k + N_k\\mathbf{A}_k + \\beta_0\\mathbf{B}_k + \\mathbf{W}_0^{-1})\\mathbf{W}_k] \\\\ & - \\ln B(\\mathbf{W}_k, v_k) - \\frac{v_k - D - 1}{2}\\ln\\widetilde{\\Lambda}_k + \\frac{v_kD}{2} \\\\ & = & \\frac{1}{2}(N_k + v_0 - v_k)\\ln\\widetilde{\\Lambda}_k - \\frac{1}{2}v_k \\cdot \\mathrm{Tr}[\\mathbf{F}_k\\mathbf{W}_k] + \\frac{v_kD}{2} - \\ln B(\\mathbf{W}_k, v_k) \\end{split}$$\n\nWhere we have defined:\n\n$$\\mathbf{F}_k = N_k \\mathbf{S}_k + N_k \\mathbf{A}_k + \\beta_0 \\mathbf{B}_k + \\mathbf{W}_0^{-1}$$\n\nNote that Eq (10.77) enters $\\mathcal{L}$ , i.e., Eq (10.70), with a minus sign, which is why the first term above appears with a negative sign. 
We first calculate the derivative of $\\mathcal{L}$ with respect to $v_k$ and set it to zero:\n\n$$\\begin{split} \\frac{\\partial \\mathcal{L}}{\\partial v_k} &= \\frac{1}{2} (N_k + v_0 - v_k) \\frac{d \\ln \\widetilde{\\Lambda}_k}{d v_k} - \\frac{\\ln \\widetilde{\\Lambda}_k}{2} - \\frac{1}{2} \\mathrm{Tr}[\\mathbf{F}_k \\mathbf{W}_k] + \\frac{D}{2} \\\\ &+ \\frac{\\ln |\\mathbf{W}_k|}{2} + \\frac{D \\ln 2}{2} + \\frac{1}{2} \\sum_{i=1}^{D} \\phi(\\frac{v_k + 1 - i}{2}) \\\\ &= \\frac{1}{2} \\Big[ (N_k + v_0 - v_k) \\frac{d \\ln \\widetilde{\\Lambda}_k}{d v_k} - \\mathrm{Tr}[\\mathbf{F}_k \\mathbf{W}_k] + D \\Big] = 0 \\end{split}$$\n\nWhere $\\phi(\\cdot)$ denotes the digamma function, and in the last step we have used the definition of $\\ln \\widetilde{\\Lambda}_k$ , i.e., Eq (10.65). Then we calculate the derivative of $\\mathcal{L}$ with respect to $\\mathbf{W}_k$ and set it to zero:\n\n$$\\frac{\\partial \\mathcal{L}}{\\partial \\mathbf{W}_k} = \\frac{1}{2} (N_k + v_0 - v_k) \\mathbf{W}_k^{-1} - \\frac{v_k}{2} \\mathbf{F}_k + \\frac{v_k}{2} \\mathbf{W}_k^{-1} \n= \\frac{1}{2} (N_k + v_0 - v_k) \\mathbf{W}_k^{-1} - \\frac{v_k}{2} (\\mathbf{F}_k - \\mathbf{W}_k^{-1}) = 0$$\n\nInspecting these two derivatives, we find that if the following two conditions:\n\n$$N_k + v_0 - v_k = 0 \\quad \\text{and} \\quad \\mathbf{F}_k = \\mathbf{W}_k^{-1}$$\n\nare satisfied, the derivatives of $\\mathcal{L}$ with respect to $v_k$ and $\\mathbf{W}_k$ will both be zero. Rearranging the first condition, we obtain Eq (10.63). Next we prove that the second condition is exactly Eq (10.62), by simplifying $\\mathbf{F}_k$ .\n\n$$\\mathbf{F}_k = N_k \\mathbf{S}_k + N_k \\mathbf{A}_k + \\beta_0 \\mathbf{B}_k + \\mathbf{W}_0^{-1}$$\n\n$$= \\mathbf{W}_0^{-1} + N_k \\mathbf{S}_k + N_k \\cdot (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k) (\\bar{\\mathbf{x}}_k - \\mathbf{m}_k)^T + \\beta_0 \\cdot (\\mathbf{m}_k - \\mathbf{m}_0) (\\mathbf{m}_k - \\mathbf{m}_0)^T$$\n\nComparing this with Eq (10.62), we only need to prove:\n\n$$N_k \\cdot (\\bar{\\mathbf{x}_k} - \\mathbf{m}_k)(\\bar{\\mathbf{x}_k} - \\mathbf{m}_k)^T + \\beta_0 \\cdot (\\mathbf{m}_k - \\mathbf{m}_0)(\\mathbf{m}_k - \\mathbf{m}_0)^T = \\frac{\\beta_0 N_k}{\\beta_0 + N_k}(\\bar{\\mathbf{x}_k} - \\mathbf{m}_0)(\\bar{\\mathbf{x}_k} - \\mathbf{m}_0)^T$$\n\nLet's start from the left hand side.\n\n$$(\\text{left}) = N_k \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - 2N_k \\bar{\\mathbf{x}}_k \\mathbf{m}_k^T + N_k \\mathbf{m}_k \\mathbf{m}_k^T + \\beta_0 \\mathbf{m}_k \\mathbf{m}_k^T - 2\\beta_0 \\mathbf{m}_k \\mathbf{m}_0^T + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T$$\n\n$$= N_k \\bar{\\mathbf{x}}_k \\bar{\\mathbf{x}}_k^T - 2N_k \\bar{\\mathbf{x}}_k (\\frac{\\beta_0 \\mathbf{m}_0 + N_k \\bar{\\mathbf{x}}_k}{\\beta_0 + N_k})^T + (N_k + \\beta_0) (\\frac{\\beta_0 \\mathbf{m}_0 + N_k \\bar{\\mathbf{x}}_k}{\\beta_0 + N_k}) (\\frac{\\beta_0 \\mathbf{m}_0 + N_k \\bar{\\mathbf{x}}_k}{\\beta_0 + N_k})^T$$\n\n$$-2\\beta_0 (\\frac{\\beta_0 \\mathbf{m}_0 + N_k \\bar{\\mathbf{x}}_k}{\\beta_0 + N_k}) \\mathbf{m}_0^T + \\beta_0 \\mathbf{m}_0 \\mathbf{m}_0^T$$\n\nThen we complete the square with respect to $\\bar{\\mathbf{x}_k}$ , and we will see the coefficients match with the right hand side. 
Here as an example, we calculate the coefficient in front of the quadratic term $\\bar{\\mathbf{x}_k}\\bar{\\mathbf{x}_k}^T$ :\n\n$$\\begin{array}{ll} ({\\rm quad}) & = & N_k - 2N_k \\frac{N_k}{\\beta_0 + N_k} + (\\beta_0 + N_k)(\\frac{N_k}{\\beta_0 + N_k})^2 \\\\ & = & \\frac{N_k(\\beta_0 + N_k) - 2N_k^2 + N_k^2}{\\beta_0 + N_k} \\\\ & = & \\frac{\\beta_0 N_k}{\\beta_0 + N_k} \\end{array}$$\n\nIt is similar for the linear and the constant terms, so we omit the details here. The update formulae for $\\alpha_k$ and $r_{nk}$ still remain to be obtained. Noticing that only Eq (10.72), (10.73) and (10.76) depend on $\\alpha_k$ , we now calculate the derivative of $\\mathcal{L}$ with respect to $\\alpha_k$ :\n\n$$\\begin{split} \\frac{\\partial \\mathcal{L}}{\\partial \\alpha_k} &= \\sum_{n=1}^N r_{nk} \\frac{d \\ln \\widetilde{\\pi}_k}{d \\alpha_k} + (\\alpha_0 - 1) \\frac{d \\ln \\widetilde{\\pi}_k}{d \\alpha_k} - \\left[ (\\alpha_k - 1) \\frac{d \\ln \\widetilde{\\pi}_k}{d \\alpha_k} + \\ln \\widetilde{\\pi}_k + \\frac{d \\ln C(\\alpha)}{d \\alpha_k} \\right] \\\\ &= (N_k + \\alpha_0 - \\alpha_k) \\frac{d \\ln \\widetilde{\\pi}_k}{d \\alpha_k} - \\ln \\widetilde{\\pi}_k - \\frac{d \\ln C(\\alpha)}{d \\alpha_k} \\\\ &= (N_k + \\alpha_0 - \\alpha_k) \\left[ \\phi^{'}(\\alpha_k) - \\phi^{'}(\\widehat{\\alpha}) \\right] - \\left[ \\phi(\\alpha_k) - \\phi(\\widehat{\\alpha}) \\right] - \\frac{d \\left[ \\ln \\Gamma(\\widehat{\\alpha}) - \\ln \\Gamma(\\alpha_k) \\right]}{d \\alpha_k} \\\\ &= (N_k + \\alpha_0 - \\alpha_k) \\left[ \\phi^{'}(\\alpha_k) - \\phi^{'}(\\widehat{\\alpha}) \\right] - \\left[ \\phi(\\alpha_k) - \\phi(\\widehat{\\alpha}) \\right] - \\left[ \\phi(\\widehat{\\alpha}) - \\phi(\\alpha_k) \\right] \\\\ &= (N_k + \\alpha_0 - \\alpha_k) \\left[ \\phi^{'}(\\alpha_k) - \\phi^{'}(\\widehat{\\alpha}) \\right] = 0 \\end{split}$$\n\nWhere we have used (B.25) and Eq (10.66). Therefore, we obtain Eq (10.58). Finally, we are required to derive an update formula for $r_{nk}$ . Note that $\\bar{\\mathbf{x}}_k$ , $\\mathbf{S}_k$ and $N_k$ also contain $r_{nk}$ , so we conclude that Eq (10.71), (10.72) and (10.75) depend on $r_{nk}$ . Using the definition of $N_k$ , i.e., Eq (10.51), we can obtain:\n\n$$\\mathcal{L} \\propto \\frac{1}{2} \\sum_{k,n} r_{nk} \\left\\{ \\ln \\widetilde{\\Lambda}_k - D \\beta_k^{-1} \\right\\} - \\frac{1}{2} \\sum_k N_k v_k \\text{Tr}[(\\mathbf{S}_k + \\mathbf{A}_k) \\mathbf{W}_k]$$\n$$+ \\sum_{k,n} r_{nk} \\ln \\widetilde{\\pi}_k - \\sum_{k,n} r_{nk} \\ln r_{nk}$$\n\nNote that a constraint exists for $r_{nk}$ , namely $\\sum_k r_{nk} = 1$ , so we cannot simply calculate the derivative and set it to zero; we must introduce a Lagrange multiplier. 
Before doing so, let's simplify $\\mathbf{S}_k + \\mathbf{A}_k$ :\n\n$$\\begin{split} \\mathbf{S}_{k} + \\mathbf{A}_{k} &= \\frac{1}{N_{k}} \\sum_{n=1}^{N} r_{nk} (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k}) (\\mathbf{x}_{n} - \\bar{\\mathbf{x}}_{k})^{T} + (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k}) (\\bar{\\mathbf{x}}_{k} - \\mathbf{m}_{k})^{T} \\\\ &= \\frac{1}{N_{k}} \\sum_{n=1}^{N} \\left[ r_{nk} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} - r_{nk} \\mathbf{x}_{n} \\bar{\\mathbf{x}}_{k}^{T} - r_{nk} \\bar{\\mathbf{x}}_{k} \\mathbf{x}_{n}^{T} + r_{nk} \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} \\right] + \\bar{\\mathbf{x}}_{k} \\bar{\\mathbf{x}}_{k}^{T} - \\bar{\\mathbf{x}}_{k} \\mathbf{m}_{k}^{T} - \\mathbf{m}_{k} \\bar{\\mathbf{x}}_{k}^{T} + \\mathbf{m}_{k} \\mathbf{m}_{k}^{T} \\\\ &= \\frac{1}{N_{k}} \\sum_{n=1}^{N} r_{nk} \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} - \\bar{\\mathbf{x}}_{k} \\mathbf{m}_{k}^{T} - \\mathbf{m}_{k} \\bar{\\mathbf{x}}_{k}^{T} + \\mathbf{m}_{k} \\mathbf{m}_{k}^{T} \\\\ &= \\frac{1}{N_{k}} \\sum_{n=1}^{N} r_{nk} \\left( \\mathbf{x}_{n} \\mathbf{x}_{n}^{T} - \\mathbf{x}_{n} \\mathbf{m}_{k}^{T} - \\mathbf{m}_{k} \\mathbf{x}_{n}^{T} + \\mathbf{m}_{k} \\mathbf{m}_{k}^{T} \\right) \\\\ &= \\frac{1}{N_{k}} \\sum_{n=1}^{N} r_{nk} (\\mathbf{x}_{n} - \\mathbf{m}_{k}) (\\mathbf{x}_{n} - \\mathbf{m}_{k})^{T} \\end{split}$$\n\nWhere we have used $\\sum_{n=1}^{N} r_{nk} \\mathbf{x}_n = N_k \\bar{\\mathbf{x}}_k$ and $\\sum_{n=1}^{N} r_{nk} = N_k$ . Therefore, we obtain:\n\n$$\\begin{split} \\mathcal{L} & \\propto & \\frac{1}{2} \\sum_{k,n} r_{nk} \\left\\{ \\ln \\widetilde{\\Lambda}_k - D \\beta_k^{-1} \\right\\} + \\sum_{k,n} r_{nk} \\ln \\widetilde{\\pi}_k - \\sum_{k,n} r_{nk} \\ln r_{nk} \\\\ & - \\frac{1}{2} \\sum_{k} N_k v_k \\mathrm{Tr}[(\\mathbf{S}_k + \\mathbf{A}_k) \\mathbf{W}_k] \\\\ & = & \\frac{1}{2} \\sum_{k,n} r_{nk} \\left\\{ \\ln \\widetilde{\\Lambda}_k - D \\beta_k^{-1} \\right\\} + \\sum_{k,n} r_{nk} \\ln \\widetilde{\\pi}_k - \\sum_{k,n} r_{nk} \\ln r_{nk} \\\\ & - \\frac{1}{2} \\sum_{k=1}^K \\sum_{n=1}^N v_k r_{nk} (\\mathbf{x}_n - \\mathbf{m}_k)^T \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) \\end{split}$$\n\nIntroducing Lagrange multipliers $\\lambda_n$ , we obtain:\n\n$$\\text{(Lagrange)} = \\frac{1}{2} \\sum_{k,n} r_{nk} \\left\\{ \\ln \\widetilde{\\Lambda}_k - D \\beta_k^{-1} \\right\\} + \\sum_{k,n} r_{nk} \\ln \\widetilde{\\pi}_k - \\sum_{k,n} r_{nk} \\ln r_{nk}$$\n\n$$- \\frac{1}{2} \\sum_{k=1}^K \\sum_{n=1}^N v_k r_{nk} (\\mathbf{x}_n - \\mathbf{m}_k)^T \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) + \\sum_{n=1}^N \\lambda_n (1 - \\sum_k r_{nk})$$\n\nCalculating the derivative with respect to $r_{nk}$ and setting it to zero, we can obtain:\n\n$$\\begin{split} \\frac{\\partial (\\text{Lagrange})}{\\partial r_{nk}} &= \\frac{1}{2} \\{\\ln \\widetilde{\\Lambda}_k - D \\beta_k^{-1}\\} + \\ln \\widetilde{\\pi}_k - [\\ln r_{nk} + 1] \\\\ &- \\frac{1}{2} v_k (\\mathbf{x}_n - \\mathbf{m}_k)^T \\mathbf{W}_k (\\mathbf{x}_n - \\mathbf{m}_k) - \\lambda_n = 0 \\end{split}$$\n\nMoving $\\ln 
r_{nk}$ to the right side and then exponentiating both sides, we obtain Eq (10.67), and the normalized $r_{nk}$ is given by Eq (10.49), (10.46), and (10.64)-(10.66).",
"answer_length": 16606
},
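The completing-the-square identity invoked near the end of the preceding solution (the step that recovers Eq (10.62)) is easy to confirm numerically. A minimal sketch, not part of the original solution, with arbitrary illustrative values for the dimension, $N_k$ and $\beta_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_k, beta0 = 3, 17.0, 2.5
x_bar = rng.normal(size=D)          # stands in for the weighted sample mean
m0 = rng.normal(size=D)

# m_k as given by Eq (10.61)
m_k = (beta0 * m0 + N_k * x_bar) / (beta0 + N_k)

lhs = (N_k * np.outer(x_bar - m_k, x_bar - m_k)
       + beta0 * np.outer(m_k - m0, m_k - m0))
rhs = beta0 * N_k / (beta0 + N_k) * np.outer(x_bar - m0, x_bar - m0)

assert np.allclose(lhs, rhs)
print("completing-the-square identity verified")
```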
{
"chapter": 10,
"question_number": "10.19",
"difficulty": "medium",
"question_text": "\\star)$ Derive the result (10.81: $p(\\widehat{\\mathbf{x}}|\\mathbf{X}) = \\frac{1}{\\widehat{\\alpha}} \\sum_{k=1}^{K} \\alpha_k \\operatorname{St}(\\widehat{\\mathbf{x}}|\\mathbf{m}_k, \\mathbf{L}_k, \\nu_k + 1 - D)$) for the predictive distribution in the variational treatment of the Bayesian mixture of Gaussians model.",
"answer": "Let's start from the definition, i.e., Eq (10.78: $p(\\widehat{\\mathbf{x}}|\\mathbf{X}) = \\sum_{\\widehat{\\mathbf{z}}} \\iiint p(\\widehat{\\mathbf{x}}|\\widehat{\\mathbf{z}}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) p(\\widehat{\\mathbf{z}}|\\boldsymbol{\\pi}) p(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}|\\mathbf{X}) \\, \\mathrm{d}\\boldsymbol{\\pi} \\, \\mathrm{d}\\boldsymbol{\\mu} \\, \\mathrm{d}\\boldsymbol{\\Lambda}$).\n\n$$\\begin{split} p(\\widehat{\\mathbf{x}}|\\mathbf{X}) &= \\sum_{\\widehat{\\mathbf{z}}} \\int \\int \\int p(\\widehat{\\mathbf{x}}|\\widehat{\\mathbf{z}}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) p(\\widehat{\\mathbf{z}}|\\boldsymbol{\\pi}) p(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}|\\mathbf{X}) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\\\ &= \\sum_{\\widehat{\\mathbf{z}}} \\int \\int \\int \\prod_{k=1}^K \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1})^{\\widehat{\\boldsymbol{z}}_k} \\cdot \\prod_{k=1}^K \\pi_k^{\\widehat{\\boldsymbol{z}}_k} \\cdot p(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}|\\mathbf{X}) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\\\ &\\approx \\sum_{\\widehat{\\mathbf{z}}} \\int \\int \\int \\prod_{k=1}^K \\left[ \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\right]^{\\widehat{\\boldsymbol{z}}_k} \\cdot q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\\\ &= \\sum_{k=1}^K \\int \\int \\int \\left[ \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\right] \\cdot q(\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\\\ &= \\sum_{k=1}^K \\int \\int \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\cdot \\left[ q(\\boldsymbol{\\pi}) \\cdot \\prod_{j=1}^K q(\\boldsymbol{\\mu}_j, \\boldsymbol{\\Lambda}_j) \\right] d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\end{split}$$\n\nWhere we have used the fact that **z** uses a one-of-k coding scheme. Recall that $\\mu = \\{\\mu_k\\}$ and $\\Lambda = \\{\\Lambda_k\\}$ , the term inside the summation can be further simplified. 
Namely, for those index $j \\neq k$ , the integration with respect to $\\mu_j$ and $\\Lambda_j$ will equal 1, i.e.,\n\n$$\\begin{split} p(\\widehat{\\mathbf{x}}|\\mathbf{X}) &= \\sum_{k=1}^K \\int \\int \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\cdot \\left[ q(\\boldsymbol{\\pi}) \\cdot \\prod_{j=1}^K q(\\boldsymbol{\\mu}_j, \\boldsymbol{\\Lambda}_j) \\right] d\\boldsymbol{\\pi} d\\boldsymbol{\\mu} d\\boldsymbol{\\Lambda} \\\\ &= \\sum_{k=1}^K \\int \\int \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\cdot q(\\boldsymbol{\\pi}) \\cdot q(\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu}_k d\\boldsymbol{\\Lambda}_k \\\\ &= \\sum_{k=1}^K \\int \\int \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\boldsymbol{\\pi}_k \\cdot \\mathrm{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha}) \\cdot \\mathcal{N}(\\boldsymbol{\\mu}_k|\\mathbf{m}_k, (\\boldsymbol{\\beta}_k \\boldsymbol{\\Lambda}_k)^{-1}) \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, \\boldsymbol{v}_k) d\\boldsymbol{\\pi} d\\boldsymbol{\\mu}_k d\\boldsymbol{\\Lambda}_k \\end{split}$$\n\nWe notice that in the expression above, only $\\pi_k \\cdot \\text{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha})$ contains $\\pi_k$ , and we know that the expectation of $\\pi_k$ with respect to $\\text{Dir}(\\boldsymbol{\\pi}|\\boldsymbol{\\alpha})$ is $\\alpha_k/\\widehat{\\alpha}_k$ .\n\nTherefore, we can obtain:\n\n$$\\begin{split} p(\\widehat{\\mathbf{x}}|\\mathbf{X}) &= \\sum_{k=1}^K \\int \\int \\frac{\\alpha_k}{\\widehat{\\alpha}} \\, \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\mathcal{N}(\\boldsymbol{\\mu}_k|\\mathbf{m}_k, (\\beta_k \\boldsymbol{\\Lambda}_k)^{-1}) \\cdot \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, v_k) \\, d\\,\\boldsymbol{\\mu}_k \\, d\\,\\boldsymbol{\\Lambda}_k \\\\ &= \\sum_{k=1}^K \\left\\{ \\int \\left[ \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\mathcal{N}(\\boldsymbol{\\mu}_k|\\mathbf{m}_k, (\\beta_k \\boldsymbol{\\Lambda}_k)^{-1}) \\, d\\,\\boldsymbol{\\mu}_k \\right] \\cdot \\frac{\\alpha_k}{\\widehat{\\alpha}} \\cdot \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, v_k) \\, d\\,\\boldsymbol{\\Lambda}_k \\right\\} \\\\ &= \\sum_{k=1}^K \\left\\{ \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\mathbf{m}_k, (1+\\beta_k^{-1})\\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\frac{\\alpha_k}{\\widehat{\\alpha}} \\cdot \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, v_k) \\, d\\,\\boldsymbol{\\Lambda}_k \\right\\} \\\\ &= \\sum_{k=1}^K \\frac{\\alpha_k}{\\widehat{\\alpha}} \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\mathbf{m}_k, (1+\\beta_k^{-1})\\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, v_k) \\, d\\,\\boldsymbol{\\Lambda}_k \\end{split}$$\n\nNotice that the Wishart distribution is a conjugate prior for the Gaussian distribution with known mean and unknown precision. 
We conclude that the product of $\\mathcal{N}(\\widehat{\\mathbf{x}}|\\mathbf{m}_k, (1+\\beta_k^{-1})\\Lambda_k^{-1}) \\cdot \\mathcal{W}(\\Lambda_k|\\mathbf{W}_k, v_k)$ is again a Wishart distribution up to normalization, which can be verified by focusing on the dependency on $\\Lambda_k$ :\n\n$$\\begin{array}{ll} \\text{(product)} & \\propto & |\\boldsymbol{\\Lambda}_k|^{1/2 + (v_k - D - 1)/2} \\cdot \\exp \\left\\{ -\\frac{\\text{Tr}[\\boldsymbol{\\Lambda}_k \\cdot (\\widehat{\\mathbf{x}} - \\mathbf{m}_k)(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T]}{2(1 + \\boldsymbol{\\beta}_k^{-1})} - \\frac{1}{2} \\text{Tr}[\\boldsymbol{\\Lambda}_k \\mathbf{W}_k^{-1}] \\right\\} \\\\ & \\propto & \\mathcal{W}(\\boldsymbol{\\Lambda}_k | \\mathbf{W}^{'}, \\boldsymbol{v}^{'}) \\end{array}$$\n\nWhere we have defined:\n\n$$v^{'}=v_k+1$$\n\nand\n\n$$[\\mathbf{W}']^{-1} = \\frac{(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T}{1 + \\beta_k^{-1}} + \\mathbf{W}_k^{-1}$$\n\nUsing the normalization constant of the Wishart distribution, i.e., (B.79), we can obtain:\n\n$$\\begin{split} p(\\widehat{\\mathbf{x}}|\\mathbf{X}) &= \\sum_{k=1}^K \\frac{\\alpha_k}{\\widehat{\\alpha}} \\int \\mathcal{N}(\\widehat{\\mathbf{x}}|\\mathbf{m}_k, (1+\\beta_k^{-1})\\boldsymbol{\\Lambda}_k^{-1}) \\cdot \\mathcal{W}(\\boldsymbol{\\Lambda}_k|\\mathbf{W}_k, v_k) d\\boldsymbol{\\Lambda}_k \\\\ &\\propto \\frac{\\alpha_k}{\\widehat{\\alpha}} \\cdot \\frac{1}{B(\\mathbf{W}^{'}, v^{'})} \\\\ &\\propto \\left| \\frac{(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T}{1+\\beta_k^{-1}} + \\mathbf{W}_k^{-1} \\right|^{-(v_k+1)/2} \\\\ &\\propto \\left| \\frac{1}{1+\\beta_k^{-1}} \\mathbf{W}_k(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)(\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T + \\mathbf{I} \\right|^{-(v_k+1)/2} \\end{split}$$\n\nHere we have only considered those terms dependent on $\\widehat{\\mathbf{x}}$ , for a single component k. Next, we use:\n\n$$|\\mathbf{I} + \\mathbf{a}\\mathbf{b}^T| = 1 + \\mathbf{a}^T\\mathbf{b}$$\n\nThe expression above can be further simplified to:\n\n$$\\begin{split} p(\\widehat{\\mathbf{x}}|\\mathbf{X}) & \\propto & \\left| \\frac{1}{1 + \\beta_k^{-1}} \\mathbf{W}_k (\\widehat{\\mathbf{x}} - \\mathbf{m}_k) (\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T + \\mathbf{I} \\right|^{-(v_k + 1)/2} \\\\ & = & \\left[ 1 + \\frac{1}{1 + \\beta_k^{-1}} (\\widehat{\\mathbf{x}} - \\mathbf{m}_k)^T \\mathbf{W}_k (\\widehat{\\mathbf{x}} - \\mathbf{m}_k) \\right]^{-(v_k + 1)/2} \\end{split}$$\n\nBy comparing it with (B.68), we notice that it is a Student's t distribution, whose parameters are defined by Eq (10.81)-(10.82).",
"answer_length": 7684
},
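The preceding solution relies on the rank-one determinant identity $|\mathbf{I} + \mathbf{a}\mathbf{b}^T| = 1 + \mathbf{a}^T\mathbf{b}$ (the matrix determinant lemma). A quick numerical spot check, with arbitrary dimension and random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4
a, b = rng.normal(size=D), rng.normal(size=D)

lhs = np.linalg.det(np.eye(D) + np.outer(a, b))   # |I + a b^T|
rhs = 1.0 + a @ b                                  # 1 + a^T b
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```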
{
"chapter": 10,
"question_number": "10.2",
"difficulty": "easy",
"question_text": "Use the properties $\\mathbb{E}[z_1] = m_1$ and $\\mathbb{E}[z_2] = m_2$ to solve the simultaneous equations (10.13: $m_1 = \\mu_1 - \\Lambda_{11}^{-1} \\Lambda_{12} \\left( \\mathbb{E}[z_2] - \\mu_2 \\right).$) and (10.15: $m_2 = \\mu_2 - \\Lambda_{22}^{-1} \\Lambda_{21} \\left( \\mathbb{E}[z_1] - \\mu_1 \\right).$), and hence show that, provided the original distribution $p(\\mathbf{z})$ is nonsingular, the unique solution for the means of the factors in the approximation distribution is given by $\\mathbb{E}[z_1] = \\mu_1$ and $\\mathbb{E}[z_2] = \\mu_2$ .",
"answer": "To be more clear, we are required to solve:\n\n$$\\begin{cases} m_1 = \\mu_1 - \\Lambda_{11}^{-1} \\Lambda_{12} (m_2 - \\mu_2) \\\\ m_2 = \\mu_2 - \\Lambda_{22}^{-1} \\Lambda_{21} (m_1 - \\mu_1) \\end{cases}$$\n\nTo obtain the equation above, we need to substitute $\\mathbb{E}[z_i] = m_i$ , where i = 1, 2, into Eq (10.13: $m_1 = \\mu_1 - \\Lambda_{11}^{-1} \\Lambda_{12} \\left( \\mathbb{E}[z_2] - \\mu_2 \\right).$) and Eq (10.14: $q_2^{\\star}(z_2) = \\mathcal{N}(z_2|m_2, \\Lambda_{22}^{-1})$). Here the unknown parameters are $m_1$ and $m_2$ . It is trivial to notice that $m_i = \\mu_i$ is a solution for the equation above.\n\nLet's solve this equation from another perspective. Firstly, if any (or both) of $\\Lambda_{11}^{-1}$ and $\\Lambda_{22}^{-1}$ equals 0, we can obtain $m_i = \\mu_i$ directly from Eq (10.13)-(10.14). When none of $\\Lambda_{11}^{-1}$ and $\\Lambda_{22}^{-1}$ equals 0, we substitute $m_1$ , i.e., the first\n\nline, into the second line:\n\n$$\\begin{split} m_2 &= \\mu_2 - \\Lambda_{22}^{-1} \\Lambda_{21} \\left( m_1 - \\mu_1 \\right) \\\\ &= \\mu_2 - \\Lambda_{22}^{-1} \\Lambda_{21} \\left[ \\mu_1 - \\Lambda_{11}^{-1} \\Lambda_{12} \\left( m_2 - \\mu_2 \\right) - \\mu_1 \\right] \\\\ &= \\mu_2 - \\Lambda_{22}^{-1} \\Lambda_{21} \\mu_1 + \\Lambda_{22}^{-1} \\Lambda_{21} \\Lambda_{11}^{-1} \\Lambda_{12} \\left( m_2 - \\mu_2 \\right) + \\Lambda_{22}^{-1} \\Lambda_{21} \\mu_1 \\\\ &= \\left( 1 - \\Lambda_{22}^{-1} \\Lambda_{21} \\Lambda_{11}^{-1} \\Lambda_{12} \\right) \\mu_2 + \\Lambda_{22}^{-1} \\Lambda_{21} \\Lambda_{11}^{-1} \\Lambda_{12} \\ m_2 \\end{split}$$\n\nWe rearrange the expression above, yielding:\n\n$$(1 - \\Lambda_{22}^{-1} \\Lambda_{21} \\Lambda_{11}^{-1} \\Lambda_{12}) (m_2 - \\mu_2) = 0$$\n\nThe first term at the left hand side will equal 0 only when the distribution is singular, i.e., the determinant of the precision matrix $\\Lambda$ (i.e., $\\Lambda_{11}\\Lambda_{22} - \\Lambda_{12}\\Lambda_{21}$ ) is 0. Therefore, if the distribution is nonsingular, we must have $m_2 = \\mu_2$ . Substituting it back into the first line, we obtain $m_1 = \\mu_1$ .",
"answer_length": 2036
},
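The fixed point $m_1 = \mu_1$, $m_2 = \mu_2$ can also be seen by simply iterating the two coupled updates. A small sketch; the particular nonsingular precision matrix below is an arbitrary illustrative choice, and for it the coordinate sweep converges:

```python
import numpy as np

# a nonsingular 2x2 precision matrix and mean
Lam = np.array([[2.0, 0.8],
                [0.8, 1.5]])
mu = np.array([1.0, -2.0])

# iterate m1 = mu1 - Lam11^{-1} Lam12 (m2 - mu2),  m2 = mu2 - Lam22^{-1} Lam21 (m1 - mu1)
m1, m2 = 10.0, -10.0            # arbitrary starting point
for _ in range(100):
    m1 = mu[0] - Lam[0, 1] / Lam[0, 0] * (m2 - mu[1])
    m2 = mu[1] - Lam[1, 0] / Lam[1, 1] * (m1 - mu[0])

print(m1, m2)                   # converges to mu = (1, -2)
```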
{
"chapter": 10,
"question_number": "10.20",
"difficulty": "medium",
"question_text": "- 10.20 (\\*\\*) www This exercise explores the variational Bayes solution for the mixture of Gaussians model when the size N of the data set is large and shows that it reduces (as we would expect) to the maximum likelihood solution based on EM derived in Chapter 9. Note that results from Appendix B may be used to help answer this exercise. First show that the posterior distribution $q^*(\\Lambda_k)$ of the precisions becomes sharply peaked around the maximum likelihood solution. Do the same for the posterior distribution of the means $q^*(\\mu_k|\\Lambda_k)$ . Next consider the posterior distribution $q^*(\\pi)$ for the mixing coefficients and show that this too becomes sharply peaked around the maximum likelihood solution. Similarly, show that the responsibilities become equal to the corresponding maximum likelihood values for large N, by making use of the following asymptotic result for the digamma function for large x\n\n$$\\psi(x) = \\ln x + O(1/x). \\tag{10.241}$$\n\nFinally, by making use of (10.80: $p(\\widehat{\\mathbf{x}}|\\mathbf{X}) = \\sum_{k=1}^{K} \\iiint \\pi_k \\mathcal{N}\\left(\\widehat{\\mathbf{x}}|\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k^{-1}\\right) q(\\boldsymbol{\\pi}) q(\\boldsymbol{\\mu}_k, \\boldsymbol{\\Lambda}_k) \\, \\mathrm{d}\\boldsymbol{\\pi} \\, \\mathrm{d}\\boldsymbol{\\mu}_k \\, \\mathrm{d}\\boldsymbol{\\Lambda}_k \\quad$), show that for large N, the predictive distribution becomes a mixture of Gaussians.",
"answer": "Let's begin by dealing with $q^*(\\Lambda_k)$ . When $N \\to +\\infty$ , we know that $N_k$ also approaches $+\\infty$ based on Eq (10.51). Therefore, we know that $[\\mathbf{W}_k]^{-1} \\to N_k \\mathbf{S}_k$ and $v_k \\to N_k$ . Using (B.80), we conclude that $\\mathbb{E}[\\Lambda_k] = v_k \\mathbf{W}_k \\to \\mathbf{S}_k^{-1}$ . If we now can prove that the entropy $H[\\Lambda_k]$ is zero, we can conclude that the distribution collapse to a Dirac function, i.e, the distribution is sharply peaked around $\\mathbf{S}_k^{-1}$ , which is identical to the EM of Gaussian mixture given by Eq (9.25: $\\Sigma_k^{\\text{new}} = \\frac{1}{N_k} \\sum_{n=1}^N \\gamma(z_{nk}) \\left( \\mathbf{x}_n - \\boldsymbol{\\mu}_k^{\\text{new}} \\right) \\left( \\mathbf{x}_n - \\boldsymbol{\\mu}_k^{\\text{new}} \\right)^{\\text{T}}$). Therefore, let's now start from $\\ln B(\\mathbf{W}_k, v_k)$ , i.e., (B.79).\n\n$$\\begin{split} \\ln &B(\\mathbf{W}_k, v_k) &= -\\frac{v_k}{2} \\ln |\\mathbf{W}_k| - \\frac{v_k D}{2} \\ln 2 - \\frac{D(D-1)}{4} \\ln \\pi - \\sum_{i=1}^D \\ln \\Gamma(\\frac{v_k + 1 - i}{2}) \\\\ &\\rightarrow \\frac{N_k}{2} \\ln |N_k \\mathbf{S}_k| - \\frac{N_k D}{2} \\ln 2 - \\sum_{i=1}^D \\ln \\Gamma(\\frac{N_k + 1 - i}{2}) \\\\ &= \\frac{N_k}{2} (D \\ln N_k + \\ln |\\mathbf{S}_k| - D \\ln 2) - \\sum_{i=1}^D \\ln \\Gamma(\\frac{N_k - 1 - i}{2} + 1) \\\\ &\\approx \\frac{N_k}{2} (D \\ln \\frac{N_k}{2} + \\ln |\\mathbf{S}_k|) \\\\ &- \\sum_{i=1}^D \\left[ \\frac{1}{2} \\ln 2\\pi - \\frac{N_k - 1 - i}{2} + (\\frac{N_k - 1 - i}{2} + \\frac{1}{2}) \\ln \\frac{N_k - 1 - i}{2} \\right] \\\\ &\\approx \\frac{N_k}{2} (D \\ln \\frac{N_k}{2} + \\ln |\\mathbf{S}_k|) - \\sum_{i=1}^D \\left[ -\\frac{N_k}{2} + \\frac{N_k}{2} \\ln \\frac{N_k}{2} \\right] \\\\ &= \\frac{N_k}{2} (D \\ln \\frac{N_k}{2} + \\ln |\\mathbf{S}_k|) + \\frac{N_k D}{2} - \\frac{N_k D}{2} \\ln \\frac{N_k}{2} \\\\ &= \\frac{N_k}{2} (D \\ln |\\mathbf{S}_k|) \\end{split}$$\n\nWhere we have used Eq (1.146: $\\Gamma(x+1) \\simeq (2\\pi)^{1/2} e^{-x} x^{x+1/2}$) to approximate the logarithm of Gamma\n\nfunction. Next we deal with $\\mathbb{E}[\\ln \\Lambda_k]$ based on (B.81):\n\n$$\\begin{split} \\mathbb{E}[\\ln \\Lambda_k] &= \\sum_{i=1}^D \\phi(\\frac{v_k + 1 - i}{2}) + D \\ln 2 + \\ln |\\mathbf{W}_k| \\\\ &\\rightarrow \\sum_{i=1}^D \\ln(\\frac{N_k + 1 - i}{2}) + D \\ln 2 - \\ln |N_k \\mathbf{S}_k| \\\\ &\\approx \\sum_{i=1}^D \\ln \\frac{N_k}{2} + D \\ln 2 - D \\ln N_k - \\ln |\\mathbf{S}_k| \\\\ &= D \\ln \\frac{N_k}{2} + D \\ln 2 - D \\ln N_k - \\ln |\\mathbf{S}_k| \\\\ &= -\\ln |\\mathbf{S}_k| \\end{split}$$\n\nWhere we have used Eq (10.241: $\\psi(x) = \\ln x + O(1/x).$) to approximate the $\\phi(\\frac{v_k+1-i}{2})$ . Now we are ready to deal with the entropy $H[q(\\Lambda_k)]$ :\n\n$$\\begin{split} \\mathbf{H}[q(\\mathbf{\\Lambda}_k)] &= -\\ln B(\\mathbf{W}_k, v_k) - \\frac{v_k - D - 1}{2} \\mathbb{E}[\\ln \\Lambda_k] + \\frac{v_k D}{2} \\\\ &\\rightarrow -\\frac{N_k}{2} (D + \\ln |\\mathbf{S}_k|) + \\frac{N_k}{2} \\ln |\\mathbf{S}_k| + \\frac{N_k D}{2} = 0 \\end{split}$$\n\nTherefore, we can conclude that the distribution $q^*(\\Lambda_k)$ collapse to a Dirac function at $\\mathbf{S}_k^{-1}$ . In other words, when $N \\to +\\infty$ , $\\Lambda_k$ can only achieve one value $\\mathbf{S}_k^{-1}$ .\n\nNext, we deal with $q^*(\\boldsymbol{\\mu}_k|\\boldsymbol{\\Lambda}_k)$ . 
According to Eq (10.60: $\\beta_k = \\beta_0 + N_k$), when $N\\to +\\infty$ , we conclude that $\\beta_k\\to N_k$ , and thus, $\\mathbf{m}_k\\to \\bar{\\mathbf{x}}_k$ based on Eq (10.61: $\\mathbf{m}_k = \\frac{1}{\\beta_k} \\left( \\beta_0 \\mathbf{m}_0 + N_k \\overline{\\mathbf{x}}_k \\right)$). Since we know $q^*(\\boldsymbol{\\mu}_k|\\boldsymbol{\\Lambda}_k)=\\mathcal{N}(\\boldsymbol{\\mu}_k|\\mathbf{m}_k,(\\beta_k\\boldsymbol{\\Lambda}_k)^{-1})$ and $\\beta_k\\boldsymbol{\\Lambda}_k\\to N_k\\mathbf{S}_k^{-1}$ is large, we conclude that when $N\\to\\infty$ , $\\boldsymbol{\\mu}_k$ also takes only the single value $\\bar{\\mathbf{x}}_k$ , which is identical to the EM of Gaussian mixture, i.e., Eq (9.24: $\\boldsymbol{\\mu}_{k}^{\\text{new}} = \\frac{1}{N_{k}} \\sum_{n=1}^{N} \\gamma(z_{nk}) \\mathbf{x}_{n}$).\n\nFinally, we consider $q^*(\\pi)$ given by Eq (10.54: $+\\sum_{k=1}^{K}\\sum_{n=1}^{N}\\mathbb{E}[z_{nk}]\\ln\\mathcal{N}\\left(\\mathbf{x}_{n}|\\boldsymbol{\\mu}_{k},\\boldsymbol{\\Lambda}_{k}^{-1}\\right)+\\text{const.}$). Since we know $\\alpha_k \\to N_k$ based on Eq (10.58: $\\alpha_k = \\alpha_0 + N_k.$), we see that $\\mathbb{E}[\\pi_k] = \\alpha_k/\\widehat{\\alpha} \\to \\frac{N_k}{N}$ and\n\n$$\\operatorname{var}[\\pi_k] = \\frac{\\alpha_k(\\widehat{\\alpha} - \\alpha_k)}{\\widehat{\\alpha}^2(\\widehat{\\alpha} + 1)} \\le \\frac{\\widehat{\\alpha} \\cdot \\widehat{\\alpha}}{\\widehat{\\alpha}^3} = \\frac{1}{\\widehat{\\alpha}} \\to 0$$\n\nWe can therefore conclude that $\\pi_k$ takes only the single value $\\frac{N_k}{N}$ , which is identical to the EM of Gaussian mixture, i.e., Eq (9.26: $\\pi_k^{\\text{new}} = \\frac{N_k}{N}$). Now it is trivial to see that the predictive distribution will reduce to a mixture of Gaussians using Eq (10.80). Because $\\pi$ , $\\mu_k$ and $\\Lambda_k$ all reduce to Dirac delta functions, the integration is easy to perform.",
"answer_length": 4988
},
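The two asymptotic approximations used above, Eq (10.241) for the digamma function and the Stirling-type formula Eq (1.146) for the Gamma function, can be checked numerically with scipy; the evaluation points below are arbitrary:

```python
import numpy as np
from scipy.special import digamma, gammaln

# Eq (10.241): psi(x) = ln x + O(1/x)
for x in [10.0, 100.0, 1000.0]:
    print(x, digamma(x) - np.log(x))       # shrinks towards 0 as x grows

# Eq (1.146): ln Gamma(x+1) ~ 0.5 ln(2 pi) - x + (x + 0.5) ln x
x = 500.0
approx = 0.5 * np.log(2 * np.pi) - x + (x + 0.5) * np.log(x)
print(gammaln(x + 1.0), approx)            # nearly identical
```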
{
"chapter": 10,
"question_number": "10.21",
"difficulty": "easy",
"question_text": "Show that the number of equivalent parameter settings due to interchange symmetries in a mixture model with K components is K!.",
"answer": "This can be verified directly. The total number of labeling equals assign K labels to K object. For the first label, we have K choice, K-1 choice for the second label, and so on. Therefore, the total number is given by K!.",
"answer_length": 222
},
{
"chapter": 10,
"question_number": "10.22",
"difficulty": "medium",
"question_text": "- 10.22 (\\*\\*) We have seen that each mode of the posterior distribution in a Gaussian mixture model is a member of a family of K! equivalent modes. Suppose that the result of running the variational inference algorithm is an approximate posterior distribution q that is localized in the neighbourhood of one of the modes. We can then approximate the full posterior distribution as a mixture of K! such q distributions, once centred on each mode and having equal mixing coefficients. Show that if we assume negligible overlap between the components of the q mixture, the resulting lower bound differs from that for a single component q distribution through the addition of an extra term $\\ln K!$ .",
"answer": "Let's explain this problem in details. Suppose that now we have a mixture of Gaussian $p(\\mathbf{Z}|\\mathbf{X})$ , which are required to approximate. Moreover, it has K components and each of the modes is denoted as $\\{\\mu_1, \\mu_2, ..., \\mu_K\\}$ . We use the variational inference, i.e., Eq (10.3: $\\mathcal{L}(q) = \\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$), to minimize the KL divergence: $\\mathrm{KL}(q||p)$ , and obtain an approximate distribution $q_s(\\mathbf{Z})$ and a corresponding lower bound $L(q_s)$ .\n\nAccording to the problem description, this approximate distribution $q_s(\\mathbf{Z})$ will be a single mode Gaussian located at one of the modes of $p(\\mathbf{Z}|\\mathbf{X})$ , i.e., $q_s(\\mathbf{Z}) = \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_s, \\boldsymbol{\\Sigma}_s)$ , where $s \\in \\{1, 2, ..., K\\}$ . Now, we replicate this $q_s$ for K! times in total. Each of the copies is moved to one mode's center.\n\nNow we can write down the mixing distribution made up of K! Gaussian distribution:\n\n$$q_m(\\mathbf{Z}) = \\frac{1}{K!} \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z} | \\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s)$$\n\nWhere C(m) represents the mode of the m-th component. $C(m) \\in \\{1, 2, ..., K\\}$ . What the problem wants us to prove is:\n\n$$L(q_s) + \\ln K! \\approx L(q_m)$$\n\nIn other words, the lower bound using $q_m$ to approximate, i.e., $L(q_m)$ , is $\\ln K!$ larger than using $q_s$ , i.e., $L(q_s)$ . Based on Eq (10.3: $\\mathcal{L}(q) = \\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{X}, \\mathbf{Z})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$), let's equivalently deal with the KL divergence. According to Eq (10.4: $KL(q||p) = -\\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}.$), we can obtain:\n\n$$\\begin{aligned} \\operatorname{KL}(q_{m}||p) &= -\\int q_{m}(\\mathbf{Z}) \\ln \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q_{m}(\\mathbf{Z})} d\\mathbf{Z} \\\\ &= -\\int q_{m}(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_{m}(\\mathbf{Z}) \\ln q_{m}(\\mathbf{Z}) d\\mathbf{Z} \\\\ &= -\\int q_{m}(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_{m}(\\mathbf{Z}) \\ln \\left\\{ \\frac{1}{K!} \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_{s}) \\right\\} d\\mathbf{Z} \\\\ &= -\\int q_{m}(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_{m}(\\mathbf{Z}) \\ln \\left\\{ \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_{s}) \\right\\} d\\mathbf{Z} + \\ln \\frac{1}{K!} \\\\ &= -\\ln K! - \\int q_{m}(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} \\\\ &+ \\frac{1}{K!} \\int \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_{s}) \\ln \\left\\{ \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_{s}) \\right\\} d\\mathbf{Z} \\end{aligned}$$\n\nIn order to further simplify the KL divergence, here we write down two useful equations. First, we use the \"negligible overlap\" property. 
To be more specific, according to the assumption that the overlap are negligible, we can obtain:\n\n$$\\int \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z} \\approx \\int \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z}$$\n\nThe second equation is that for any $m_1, m_2 \\in \\{1, 2, ..., K\\}$ , we have:\n\n$$\\int q_s \\ln q_s d\\mathbf{Z} = \\int \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m_1)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m_1)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z}$$\n$$= \\int \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m_2)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m_2)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z}$$\n\nTherefore, now we can obtain:\n\n$$\\begin{split} \\operatorname{KL}(q_m||p) &= -\\ln K! - \\int q_m(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} \\\\ &+ \\frac{1}{K!} \\int \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z} \\\\ &\\approx -\\ln K! - \\int q_m(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} \\\\ &+ \\frac{1}{K!} \\int \\sum_{m=1}^{K!} \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z} \\\\ &= -\\ln K! - \\int q_m(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} \\\\ &+ \\int \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\ln \\left\\{ \\mathcal{N}(\\mathbf{Z}|\\boldsymbol{\\mu}_{C(m)}, \\boldsymbol{\\Sigma}_s) \\right\\} d\\mathbf{Z} \\quad (\\forall m \\in \\{1, 2, ..., K\\}) \\\\ &= -\\ln K! - \\int q_m(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_s(\\mathbf{Z}) \\ln q_s(\\mathbf{Z}) d\\mathbf{Z} \\\\ &= -\\ln K! - \\int q_s(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_s(\\mathbf{Z}) \\ln q_s(\\mathbf{Z}) d\\mathbf{Z} \\\\ &\\approx -\\ln K! - \\int q_s(\\mathbf{Z}) \\ln p(\\mathbf{Z}|\\mathbf{X}) d\\mathbf{Z} + \\int q_s(\\mathbf{Z}) \\ln q_s(\\mathbf{Z}) d\\mathbf{Z} \\\\ &= -\\ln K! - \\int q_s(\\mathbf{Z}) \\ln \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q_s(\\mathbf{Z})} d\\mathbf{Z} = -\\ln K! + \\operatorname{KL}(q_s||p) \\end{split}$$\n\nTo obtain the desired result, we have adopted an approximation here, however, you should notice that this approximation is rough.",
"answer_length": 5702
},
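A one-dimensional toy check of the $\ln K!$ statement, with $K = 2$ well-separated symmetric modes. In this construction the two-copy mixture $q_m$ happens to coincide with $p$ itself, so the comparison is a degenerate but valid instance of the negligible-overlap regime; all numerical choices below are illustrative:

```python
import numpy as np
from scipy.stats import norm

# two well-separated symmetric modes (K = 2, so K! = 2)
m = 8.0
z = np.linspace(-40.0, 40.0, 400001)
dz = z[1] - z[0]
p   = 0.5 * norm.pdf(z, -m, 1) + 0.5 * norm.pdf(z, m, 1)   # "posterior" with two modes
q_s = norm.pdf(z, m, 1)                                     # single-mode approximation
q_m = 0.5 * norm.pdf(z, -m, 1) + 0.5 * norm.pdf(z, m, 1)    # mixture of the two copies

def kl(q, p):
    mask = q > 1e-300
    return np.sum(q[mask] * np.log(q[mask] / p[mask])) * dz

# L(q_m) - L(q_s) = KL(q_s||p) - KL(q_m||p), which should be close to ln K!
print(kl(q_s, p) - kl(q_m, p), np.log(2))
```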
{
"chapter": 10,
"question_number": "10.23",
"difficulty": "medium",
"question_text": "- 10.23 (\\*\\*) www Consider a variational Gaussian mixture model in which there is no prior distribution over mixing coefficients $\\{\\pi_k\\}$ . Instead, the mixing coefficients are treated as parameters, whose values are to be found by maximizing the variational lower bound on the log marginal likelihood. Show that maximizing this lower bound with respect to the mixing coefficients, using a Lagrange multiplier to enforce the constraint that the mixing coefficients sum to one, leads to the re-estimation result (10.83: $\\pi_k = \\frac{1}{N} \\sum_{n=1}^{N} r_{nk}$). Note that there is no need to consider all of the terms in the lower bound but only the dependence of the bound on the $\\{\\pi_k\\}$ .",
"answer": "Let's go back to Eq (10.70: $-\\mathbb{E}[\\ln q(\\mathbf{Z})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\pi})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})]$). If now we treat $\\pi_k$ as a parameter without a prior distribution, $\\pi_k$ will only occur in the second term in Eq (10.70: $-\\mathbb{E}[\\ln q(\\mathbf{Z})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\pi})] - \\mathbb{E}[\\ln q(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})]$), i.e., $\\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})]$ . Therefore, we can obtain:\n\n$$\\mathcal{L} \\propto \\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\pi_k$$\n\nWhere we have used Eq (10.72: $\\mathbb{E}[\\ln p(\\mathbf{Z}|\\boldsymbol{\\pi})] = \\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\widetilde{\\pi}_k$), and here since $\\pi_k$ is a point estimate, the expectation $\\mathbb{E}[\\ln \\pi_k]$ will reduce to $\\ln \\pi_k$ . Now we introduce a Lagrange Multiplier.\n\nLag = \n$$\\sum_{n=1}^{N} \\sum_{k=1}^{K} r_{nk} \\ln \\pi_k + \\lambda \\cdot (\\sum_{k=1}^{K} \\pi_k - 1)$$\n\nCalculating the derivative of the expression above with respect to $\\pi_k$ and setting it to zero, we obtain:\n\n$$\\frac{\\sum_{n=1}^{N} r_{nk}}{\\pi_b} + \\lambda = \\frac{N_k}{\\pi_b} + \\lambda = 0 \\tag{*}$$\n\nMultiplying both sides by $\\pi_k$ and then adopting summation of both sides with respect to k, we obtain\n\n$$\\sum_{k=1}^K N_k + \\lambda \\sum_{k=1}^K \\pi_k = 0$$\n\nSince we know the summation of $N_k$ with respect to k equals N, and the summation of $\\pi_k$ with respect to k equals 1, we rearrange the equation above, yielding:\n\n$$\\lambda = -N$$\n\nSubstituting it back into (\\*), we can obtain:\n\n$$\\pi_k = \\frac{N_k}{N} = \\frac{1}{N} \\sum_{n=1}^{N} r_{nk}$$\n\nJust as required.",
"answer_length": 1718
},
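The re-estimation result $\pi_k = N_k/N$ is the constrained maximizer of $\sum_k N_k \ln \pi_k$ over the simplex, which can be spot-checked numerically; the responsibilities and sizes below are random illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.dirichlet(np.ones(4), size=100)      # responsibilities r_nk, each row sums to 1
N_k = r.sum(axis=0)
N = r.shape[0]

pi_star = N_k / N                            # Eq (10.83)

def obj(pi):
    # the pi-dependent part of the lower bound
    return np.sum(N_k * np.log(pi))

# any other point on the simplex gives a value no larger than at pi_star
for _ in range(1000):
    pi = rng.dirichlet(np.ones(4))
    assert obj(pi) <= obj(pi_star) + 1e-12
print(pi_star, obj(pi_star))
```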
{
"chapter": 10,
"question_number": "10.24",
"difficulty": "medium",
"question_text": "We have seen in Section 10.2 that the singularities arising in the maximum likelihood treatment of Gaussian mixture models do not arise in a Bayesian treatment. Discuss whether such singularities would arise if the Bayesian model were solved using maximum posterior (MAP) estimation.",
"answer": "Recall that the singularity in the maximum likelihood estimation of Gaussian mixture is caused by the determinant of the covariance matrix $\\Sigma_k$ approaches 0, and thus the value in $\\mathcal{N}(\\mathbf{x}_n|\\boldsymbol{\\mu}_k,\\boldsymbol{\\Sigma}_k)$ will approach $+\\infty$ . For more details, you can read Section 9.2, especially page 434.\n\nIn this problem, an intuition is that since we have introduce a prior distribution for $\\Lambda_k$ , this singularity won't exist when adopting MAP. Let's verify this statement beginning by writing down the posterior.\n\n$$p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\propto p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\cdot p(\\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})$$\n\n$$= p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\cdot p(\\mathbf{Z}|\\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\cdot p(\\boldsymbol{\\pi}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\cdot p(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})$$\n\n$$= p(\\mathbf{X}|\\mathbf{Z}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) \\cdot p(\\mathbf{Z}|\\boldsymbol{\\pi}) \\cdot p(\\boldsymbol{\\pi}) \\cdot p(\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})$$\n\nNote that in the first step we have used Bayes' theorem, that in the second step we have used the fact that $p(a,b) = p(a|b) \\cdot p(b)$ , and that in the last step we have omitted the extra dependence based on definition, i.e., Eq (10.37)-(10.40). Now let's calculate the MAP solution for $\\Lambda_k$ .\n\n$$\\begin{split} \\ln p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}) & \\propto & \\frac{1}{2} \\sum_{n=1}^{N} z_{nk} \\Big\\{ \\ln |\\boldsymbol{\\Lambda}_{k}| - (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k})^{T} \\boldsymbol{\\Lambda}_{k} (\\mathbf{x}_{n} - \\boldsymbol{\\mu}_{k}) \\Big\\} \\\\ & \\frac{1}{2} \\Big\\{ \\ln |\\boldsymbol{\\Lambda}_{k}| - \\beta_{0} (\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0})^{T} \\boldsymbol{\\Lambda}_{k} (\\boldsymbol{\\mu}_{k} - \\mathbf{m}_{0}) \\Big\\} \\\\ & + \\frac{1}{2} \\Big\\{ (v_{0} - D - 1) \\ln |\\boldsymbol{\\Lambda}_{k}| - \\mathrm{Tr}[\\mathbf{W}_{0}^{-1} \\boldsymbol{\\Lambda}_{k}] \\Big\\} + \\mathrm{const} \\\\ & = & c \\cdot \\ln |\\boldsymbol{\\Lambda}_{k}| - \\mathrm{Tr}[\\mathbf{B} \\boldsymbol{\\Lambda}_{k}] + \\mathrm{const} \\end{split}$$\n\nWhere const is the term independent of $\\Lambda_k$ , and we have defined:\n\n$$c = \\frac{1}{2}(v_0 - D + \\sum_{n=1}^{N} z_{nk})$$\n\nand\n\n$$\\mathbf{B} = \\frac{1}{2} \\left\\{ \\sum_{n=1}^{N} z_{nk} (\\mathbf{x}_n - \\boldsymbol{\\mu}_k) (\\mathbf{x}_n - \\boldsymbol{\\mu}_k)^T + \\beta_0 (\\boldsymbol{\\mu}_k - \\mathbf{m}_0) (\\boldsymbol{\\mu}_k - \\mathbf{m}_0)^T + \\mathbf{W}_0^{-1} \\right\\}$$\n\nNext we calculate the derivative of $\\ln p(\\mathbf{Z}|\\mathbf{X}, \\boldsymbol{\\pi}, \\boldsymbol{\\mu}, \\boldsymbol{\\Lambda})$ with respect to $\\boldsymbol{\\Lambda}_k$ and set it to 0, yielding:\n\n$$c \\cdot \\mathbf{\\Lambda}_k^{-1} - \\mathbf{B} = 0$$\n\ntherefore, we obtain:\n\n$$\\mathbf{\\Lambda}_k^{-1} = \\frac{1}{c} \\mathbf{B}$$\n\nNote that in the MAP framework, we need to solve $z_{nk}$ first, and then substitute them in c and $\\mathbf{B}$ in the expression above. 
Nevertheless, from the expression above, we can see that $\\Lambda_k^{-1}$ won't have zero determinant.",
"answer_length": 3331
},
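A small numerical illustration of the conclusion: even in the configuration that causes the maximum likelihood singularity (a single point assigned to a component, lying exactly at its mean), the MAP solution $\Lambda_k^{-1} = \mathbf{B}/c$ stays non-singular because of the $\mathbf{W}_0^{-1}$ term. The prior hyperparameters below are illustrative assumptions:

```python
import numpy as np

D = 2
W0_inv = np.eye(D)                 # prior scale matrix inverse (positive definite)
beta0, v0 = 1.0, D + 2.0
m0 = np.zeros(D)

# pathological case for ML: one point assigned to the component, located exactly at mu_k
mu_k = np.array([3.0, -1.0])
x = mu_k.copy()
z = 1.0                            # sum_n z_nk = 1

c = 0.5 * (v0 - D + z)
B = 0.5 * (z * np.outer(x - mu_k, x - mu_k)
           + beta0 * np.outer(mu_k - m0, mu_k - m0)
           + W0_inv)

Sigma_map = B / c                  # Lambda_k^{-1} at the MAP solution
print(np.linalg.det(Sigma_map))    # strictly positive: no singularity
```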
{
"chapter": 10,
"question_number": "10.25",
"difficulty": "medium",
"question_text": "- 10.25 (\\*\\*) The variational treatment of the Bayesian mixture of Gaussians, discussed in Section 10.2, made use of a factorized approximation (10.5: $q(\\mathbf{Z}) = \\prod_{i=1}^{M} q_i(\\mathbf{Z}_i).$) to the posterior distribution. As we saw in Figure 10.2, the factorized assumption causes the variance of the posterior distribution to be under-estimated for certain directions in parameter space. Discuss qualitatively the effect this will have on the variational approximation to the model evidence, and how this effect will vary with the number of components in the mixture. Hence explain whether the variational Gaussian mixture will tend to under-estimate or over-estimate the optimal number of components.",
"answer": "We qualitatively solve this problem. As the number of mixture components grows, so does the number of variables that may be correlated, but they are treated as independent under a variational approximation if Eq (10.5: $q(\\mathbf{Z}) = \\prod_{i=1}^{M} q_i(\\mathbf{Z}_i).$) has been used. Therefore, the proportion of probability mass under the true distribution, $p(\\mathbf{Z}, \\pi, \\mu, \\Sigma | \\mathbf{X})$ , that the variational approximation $q(\\mathbf{Z}, \\pi, \\mu, \\Sigma)$ does not capture, will grow. The consequence will be that the second term in (10.2: $\\ln p(\\mathbf{X}) = \\mathcal{L}(q) + \\mathrm{KL}(q||p)$), the KL divergence between $q(\\mathbf{Z}, \\pi, \\mu, \\Sigma)$ and $p(\\mathbf{Z}, \\pi, \\mu, \\Sigma | \\mathbf{X})$ will increase.\n\nTo answer the question whether we will underestimate or overestimate the number of components by minimizing $\\mathrm{KL}(q||p)$ divergence under factorization, we only need to see Fig.10.3. It is obvious that we will underestimate the number of components.",
"answer_length": 1016
},
{
"chapter": 10,
"question_number": "10.26",
"difficulty": "hard",
"question_text": "Extend the variational treatment of Bayesian linear regression to include a gamma hyperprior $\\operatorname{Gam}(\\beta|c_0,d_0)$ over $\\beta$ and solve variationally, by assuming a factorized variational distribution of the form $q(\\mathbf{w})q(\\alpha)q(\\beta)$ . Derive the variational update equations for the three factors in the variational distribution and also obtain an expression for the lower bound and for the predictive distribution.",
"answer": "In this problem, we also need to consider the prior $p(\\beta) = \\text{Gam}(\\beta|c_0, d_0)$ . To be more specific, based on the original joint distribution $p(\\mathbf{t}, \\mathbf{w}, \\alpha)$ , i.e., Eq (10.90: $p(\\mathbf{t}, \\mathbf{w}, \\alpha) = p(\\mathbf{t}|\\mathbf{w})p(\\mathbf{w}|\\alpha)p(\\alpha).$), the joint distribution $p(\\mathbf{t}, \\mathbf{w}, \\alpha, \\beta)$ now should be written as:\n\n$$p(\\mathbf{t}, \\mathbf{w}, \\alpha, \\beta) = p(\\mathbf{t}|\\mathbf{w}, \\beta)p(\\mathbf{w}|\\alpha)p(\\alpha)p(\\beta)$$\n\nWhere the first term on the right hand side is given by Eq (10.87: $p(\\mathbf{t}|\\mathbf{w}) = \\prod_{n=1}^{N} \\mathcal{N}(t_n|\\mathbf{w}^{\\mathrm{T}}\\boldsymbol{\\phi}_n, \\beta^{-1})$), the second one is given by Eq (10.88: $p(\\mathbf{w}|\\alpha) = \\mathcal{N}(\\mathbf{w}|\\mathbf{0}, \\alpha^{-1}\\mathbf{I})$), the third one is given by Eq (10.89: $p(\\alpha) = \\operatorname{Gam}(\\alpha|a_0, b_0)$), and the last one is given by $Gam(\\beta|c_0,d_0)$ . Using the variational framework, we assume a posterior variational distribution:\n\n$$q(\\mathbf{w}, \\alpha, \\beta) = q(\\mathbf{w})q(\\alpha)q(\\beta)$$\n\nIt is trivial to observe that introducing a Gamma prior for $\\beta$ doesn't affect $q(\\alpha)$ because the expectation of $p(\\beta)$ can be absorbed into the 'const' term in Eq (10.92: $= (a_0 - 1) \\ln \\alpha - b_0 \\alpha + \\frac{M}{2} \\ln \\alpha - \\frac{\\alpha}{2} \\mathbb{E}[\\mathbf{w}^{\\mathrm{T}} \\mathbf{w}] + \\text{const.}$). In other words, we still obtain Eq (10.93)-Eq(10.95).\n\nNow we deal with $q(\\mathbf{w})$ . By analogy to Eq (10.96)-(10.98), we can obtain:\n\n$$\\begin{split} & \\ln q^{\\star}(\\mathbf{w}) & \\propto & \\mathbb{E}_{\\beta}[\\ln p(\\mathbf{t}|\\mathbf{w},\\beta)] + \\mathbf{E}_{\\alpha}[\\ln p(\\mathbf{w}|\\alpha)] + \\text{const} \\\\ & \\propto & -\\frac{\\mathbb{E}_{\\beta}[\\beta]}{2} \\cdot \\sum_{n=1}^{N} (t_{n} - \\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})^{2} - \\frac{\\mathbb{E}_{\\alpha}[\\alpha]}{2} \\mathbf{w}^{T} \\mathbf{w} + \\text{const} \\\\ & = & -\\frac{1}{2} \\mathbf{w}^{T} \\Big\\{ \\mathbb{E}_{\\beta}[\\beta] \\cdot \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} + \\mathbb{E}_{\\alpha}[\\alpha] \\cdot \\mathbf{I} \\Big\\} \\mathbf{w} + \\mathbb{E}_{\\beta}[\\beta] \\mathbf{w}^{T} \\boldsymbol{\\Phi}^{T} \\mathbf{t} + \\text{const} \\end{split}$$\n\nTherefore, by analogy to Eq (10.99)-(10.101), we can conclude that $q^*(\\mathbf{w})$ is still Gaussian, i.e., $q^*(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_N, \\mathbf{S}_N)$ , where we have defined:\n\n$$\\mathbf{m}_N = \\mathbb{E}_{\\beta}[\\beta] \\mathbf{S}_N \\mathbf{\\Phi}^T \\mathbf{t}$$\n\nand\n\n$$\\mathbf{S}_{N} = \\left\\{ \\mathbb{E}_{\\beta}[\\beta] \\cdot \\mathbf{\\Phi}^{T} \\mathbf{\\Phi} + \\mathbb{E}_{\\alpha}[\\alpha] \\cdot \\mathbf{I} \\right\\}^{-1}$$\n\nNext, we deal with $q(\\beta)$ . 
According to definition, we have:\n\n$$\\begin{split} \\ln q^{\\star}(\\beta) & \\propto & \\mathbb{E}_{\\mathbf{w}}[\\ln p(\\mathbf{t}|\\mathbf{w},\\beta)] + \\ln p(\\beta) + \\operatorname{const} \\\\ & \\propto & \\frac{N}{2} \\cdot \\ln \\beta - \\frac{\\beta}{2} \\cdot \\mathbb{E}[\\sum_{n=1}^{N} (t_{n} - \\mathbf{w}^{T} \\boldsymbol{\\phi}_{n})^{2}] + (c_{0} - 1) \\ln \\beta - d_{0}\\beta \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\frac{\\beta}{2} \\cdot \\mathbb{E}[||\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{t}||^{2}] - d_{0}\\beta \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\cdot \\mathbb{E}[||\\boldsymbol{\\Phi}\\mathbf{w} - \\mathbf{t}||^{2}] + d_{0} \\right\\} \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\cdot \\mathbb{E}[\\mathbf{w}^{T} \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi}\\mathbf{w} - 2\\mathbf{t}^{T} \\boldsymbol{\\Phi}\\mathbf{w} + \\mathbf{t}^{T} \\mathbf{t}] + d_{0} \\right\\} \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\cdot \\operatorname{Tr}[\\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} \\mathbb{E}[\\mathbf{w} \\mathbf{w}^{T}]] - \\mathbf{t}^{T} \\boldsymbol{\\Phi} \\mathbb{E}[\\mathbf{w}] + \\frac{1}{2} \\mathbf{t}^{T} \\mathbf{t} + d_{0} \\right\\} \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\cdot \\operatorname{Tr}[\\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi}(\\mathbf{m}_{N} \\mathbf{m}_{N}^{T} + \\mathbf{S}_{N})] - \\mathbf{t}^{T} \\boldsymbol{\\Phi} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{t}^{T} \\mathbf{t} + d_{0} \\right\\} \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\operatorname{Tr}[\\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} \\mathbf{S}_{N}] + \\frac{1}{2} \\mathbf{m}_{N}^{T} \\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} \\mathbf{m}_{N} - \\mathbf{t}^{T} \\boldsymbol{\\Phi} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{t}^{T} \\mathbf{t} + d_{0} \\right\\} \\\\ & = & (\\frac{N}{2} + c_{0} - 1) \\cdot \\ln \\beta - \\beta \\cdot \\left\\{ \\frac{1}{2} \\operatorname{Tr}[\\boldsymbol{\\Phi}^{T} \\boldsymbol{\\Phi} \\mathbf{S}_{N}] + ||\\boldsymbol{\\Phi} \\mathbf{m}_{N} - \\mathbf{t}||^{2} + 2d_{0} \\right\\} \\end{split}$$\n\nTherefore, we obtain $q^*(\\beta) = \\text{Gam}(\\beta|c_N, d_N)$ , where we have defined:\n\n$$c_N = \\frac{N}{2} + c_0$$\n\nand\n\n$$d_N = d_0 + \\frac{1}{2} \\left\\{ \\text{Tr}[\\boldsymbol{\\Phi}^T \\boldsymbol{\\Phi} \\mathbf{S}_N] + ||\\boldsymbol{\\Phi} \\mathbf{m}_N - \\mathbf{t}||^2 \\right\\}$$\n\nFurthermore, notice that from (B.27), the expectations in $\\mathbf{m}_N$ and $\\mathbf{S}_N$ can be expressed in $a_N$ , $b_N$ , and $c_N$ , $d_N$ :\n\n$$\\mathbb{E}[\\alpha] = \\frac{a_N}{b_N}$$\n and $\\mathbb{E}[\\beta] = \\frac{c_N}{d_N}$ \n\nWe have already obtained all the update formula. Next, we calculate the lower bound. By noticing Eq (10.107: $-\\mathbb{E}_{\\alpha}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} - \\mathbb{E}[\\ln q(\\alpha)].$), in this case, the first term on the right hand side of Eq (10.107: $-\\mathbb{E}_{\\alpha}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} - \\mathbb{E}[\\ln q(\\alpha)].$) will be modified, and two more terms will be added on the right hand side, i.e., $+\\mathbb{E}[\\ln p(\\beta)]$ and $-\\mathbb{E}[\\ln q^*(\\beta)]$ . 
Let's start by calculating the two added terms:\n\n$$\\begin{split} +\\mathbb{E}[\\ln p(\\beta)] &= (c_0 - 1)\\mathbb{E}[\\ln \\beta] - d_0\\mathbb{E}[\\beta] + c_0 \\ln d_0 - \\ln \\Gamma(c_0) \\\\ &= (c_0 - 1) \\cdot (\\varphi(c_N) - \\ln d_N) - d_0 \\frac{c_N}{d_N} + c_0 \\ln d_0 - \\ln \\Gamma(c_0) \\end{split}$$\n\nwhere we have used (B.26) and (B.30). Similarly, we have:\n\n$$-\\mathbb{E}[\\ln q^{\\star}(\\beta)] = \\ln \\Gamma(c_N) - (c_N - 1) \\cdot \\varphi(c_N) - \\ln d_N + c_N$$\n\nwhere we have used (B.31). Finally, we deal with the modification of the first term on the right hand side of Eq (10.107):\n\n$$\\begin{split} \\mathbb{E}_{\\beta,\\mathbf{w}}[\\ln p(\\mathbf{t}|\\mathbf{w},\\beta)] &= \\mathbb{E}_{\\beta} \\left\\{ \\frac{N}{2} \\ln \\beta - \\frac{N}{2} \\ln 2\\pi - \\frac{\\beta}{2} \\mathbb{E}_{\\mathbf{w}}[||\\mathbf{\\Phi}\\mathbf{w} - \\mathbf{t}||^{2}] \\right\\} \\\\ &= \\frac{N}{2} \\mathbb{E}_{\\beta}[\\ln \\beta] - \\frac{N}{2} \\ln 2\\pi - \\frac{\\mathbb{E}_{\\beta}[\\beta]}{2} \\mathbb{E}_{\\mathbf{w}}[||\\mathbf{\\Phi}\\mathbf{w} - \\mathbf{t}||^{2}] \\\\ &= \\frac{N}{2} (\\varphi(c_{N}) - \\ln d_{N} - \\ln 2\\pi) - \\frac{c_{N}}{2d_{N}} \\mathbb{E}_{\\mathbf{w}}[||\\mathbf{\\Phi}\\mathbf{w} - \\mathbf{t}||^{2}] \\\\ &= \\frac{N}{2} (\\varphi(c_{N}) - \\ln d_{N} - \\ln 2\\pi) - \\frac{c_{N}}{2d_{N}} \\left\\{ \\text{Tr}[\\mathbf{\\Phi}^{T}\\mathbf{\\Phi}\\mathbf{S}_{N}] + ||\\mathbf{\\Phi}\\mathbf{m}_{N} - \\mathbf{t}||^{2} \\right\\} \\end{split}$$\n\nThe last thing to derive is the predictive distribution. It is not difficult to observe that the predictive distribution is still given by Eq (10.105: $= \\mathcal{N}(t|\\mathbf{m}_{N}^{\\mathrm{T}} \\boldsymbol{\\phi}(\\mathbf{x}), \\sigma^{2}(\\mathbf{x})) \\qquad$) and Eq (10.106: $\\sigma^{2}(\\mathbf{x}) = \\frac{1}{\\beta} + \\phi(\\mathbf{x})^{\\mathrm{T}} \\mathbf{S}_{N} \\phi(\\mathbf{x}).$), with $1/\\beta$ replaced by $1/\\mathbb{E}[\\beta]$ .",
"answer_length": 7754
},
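A compact sketch of the update equations derived above ($\mathbf{m}_N$, $\mathbf{S}_N$, $a_N$, $b_N$, $c_N$, $d_N$), iterated to convergence on synthetic data; the basis functions, data-generating weights and noise level are illustrative assumptions, not part of the original exercise:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic data with a polynomial basis (illustrative choice)
N, M = 50, 4
x = rng.uniform(-1, 1, size=N)
Phi = np.vander(x, M, increasing=True)            # N x M design matrix
w_true = np.array([0.5, -1.0, 2.0, 0.3])
t = Phi @ w_true + rng.normal(scale=0.2, size=N)

# broad hyperpriors (assumed values)
a0 = b0 = c0 = d0 = 1e-6

aN = a0 + M / 2.0                                  # fixed, cf. Eq (10.94)
cN = c0 + N / 2.0                                  # fixed, as derived above
E_alpha, E_beta = 1.0, 1.0                         # initial guesses

for _ in range(200):
    SN = np.linalg.inv(E_beta * Phi.T @ Phi + E_alpha * np.eye(M))
    mN = E_beta * SN @ Phi.T @ t
    bN = b0 + 0.5 * (mN @ mN + np.trace(SN))
    dN = d0 + 0.5 * (np.trace(Phi.T @ Phi @ SN)
                     + np.sum((Phi @ mN - t) ** 2))
    E_alpha, E_beta = aN / bN, cN / dN

print("posterior mean weights:", mN)
print("E[beta] (noise precision):", E_beta, " true:", 1 / 0.2 ** 2)
```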
{
"chapter": 10,
"question_number": "10.27",
"difficulty": "medium",
"question_text": "By making use of the formulae given in Appendix B show that the variational lower bound for the linear basis function regression model, defined by (10.107: $-\\mathbb{E}_{\\alpha}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} - \\mathbb{E}[\\ln q(\\alpha)].$), can be written in the form (10.107: $-\\mathbb{E}_{\\alpha}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} - \\mathbb{E}[\\ln q(\\alpha)].$) with the various terms defined by (10.108)–(10.112).",
"answer": "Let's deal with the terms in Eq(10.107) one by one. Noticing Eq (10.87: $p(\\mathbf{t}|\\mathbf{w}) = \\prod_{n=1}^{N} \\mathcal{N}(t_n|\\mathbf{w}^{\\mathrm{T}}\\boldsymbol{\\phi}_n, \\beta^{-1})$), we have:\n\n$$\\begin{split} \\mathbb{E}[\\ln p(\\mathbf{t}|\\mathbf{w})]_{\\mathbf{w}} &= -\\frac{N}{2}\\ln(2\\pi) + \\frac{N}{2}\\ln\\beta - \\frac{\\beta}{2}\\mathbb{E}\\Big[\\sum_{n=1}^{N}(t_n - \\mathbf{w}^T\\boldsymbol{\\phi}_n)^2\\Big] \\\\ &= -\\frac{N}{2}\\ln(2\\pi) + \\frac{N}{2}\\ln\\beta - \\frac{\\beta}{2}\\mathbb{E}\\Big[\\sum_{n=1}^{N}t_n^2 - 2\\sum_{n=1}^{N}t_n\\cdot\\mathbf{w}^T\\boldsymbol{\\phi}_n + \\sum_{n=1}^{N}\\mathbf{w}^T\\boldsymbol{\\phi}_n\\cdot\\boldsymbol{\\phi}_n^T\\mathbf{w}\\Big] \\\\ &= -\\frac{N}{2}\\ln(2\\pi) + \\frac{N}{2}\\ln\\beta - \\frac{\\beta}{2}\\mathbb{E}\\Big[\\mathbf{t}^T\\mathbf{t} - 2\\mathbf{w}^T\\mathbf{\\Phi}^T\\mathbf{t} + \\mathbf{w}^T\\cdot(\\mathbf{\\Phi}^T\\mathbf{\\Phi})\\cdot\\mathbf{w}\\Big] \\\\ &= -\\frac{N}{2}\\ln(2\\pi) + \\frac{N}{2}\\ln\\beta - \\frac{\\beta}{2}\\mathbf{t}^T\\mathbf{t} - \\beta\\mathbb{E}[\\mathbf{w}^T]\\cdot\\mathbf{\\Phi}^T\\mathbf{t} + \\mathrm{Tr}\\Big[\\mathbb{E}\\big[(\\mathbf{w}\\mathbf{w}^T)\\big]\\cdot(\\mathbf{\\Phi}^T\\mathbf{\\Phi})\\big] \\Big] \\end{split}$$\n\nWhere we have defined $\\mathbf{\\Phi} = [\\boldsymbol{\\phi}_1, \\, \\boldsymbol{\\phi}_2, \\, ..., \\, \\boldsymbol{\\phi}_N]^T$ , i.e., the *i*-th row of $\\mathbf{\\Phi}$ is $\\boldsymbol{\\phi}_i^T$ . Then using Eq(10.99), (10.100: $\\mathbf{m}_N = \\beta \\mathbf{S}_N \\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{t}$) and (10.103: $\\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\mathrm{T}}] = \\mathbf{m}_{N}\\mathbf{m}_{N}^{\\mathrm{T}} + \\mathbf{S}_{N}.$), it is easy to obtain (10.108: $- \\frac{\\beta}{2} \\mathrm{Tr} \\left[\\mathbf{\\Phi}^{\\mathrm{T}} \\mathbf{\\Phi} (\\mathbf{m}_{N} \\mathbf{m}_{N}^{\\mathrm{T}} + \\mathbf{S}_{N})\\right] \\qquad$). Next, we deal with the second term by noticing Eq (10.88):\n\n$$\\mathbb{E}\\big[\\ln p(\\mathbf{w}|\\alpha)\\big]_{\\mathbf{w},\\alpha} = -\\frac{M}{2}\\ln(2\\pi) + \\frac{M}{2}\\mathbb{E}[\\ln \\alpha]_{\\alpha} - \\frac{\\mathbb{E}[\\alpha]_{\\alpha}}{2} \\cdot \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{T}]_{\\mathbf{w}}$$\n\nThen using Eq (10.93)-(10.95), (B.27), (B.30) and Eq (10.103: $\\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\mathrm{T}}] = \\mathbf{m}_{N}\\mathbf{m}_{N}^{\\mathrm{T}} + \\mathbf{S}_{N}.$), we obtain Eq (10.109: $- \\frac{a_{N}}{2b_{N}} \\left[\\mathbf{m}_{N}^{\\mathrm{T}} \\mathbf{m}_{N} + \\mathrm{Tr}(\\mathbf{S}_{N})\\right] \\qquad$) just as required. Then we deal with the third term in Eq (10.107: $-\\mathbb{E}_{\\alpha}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} - \\mathbb{E}[\\ln q(\\alpha)].$) by noticing Eq (10.89):\n\n$$\\mathbb{E}[\\ln p(\\alpha)]_{\\alpha} = a_0 \\ln b_0 + (a_0 - 1)\\mathbb{E}[\\ln \\alpha] - b_0 \\mathbb{E}[\\alpha] - \\ln \\Gamma(a_0)$$\n\nSimilarly, using Eq (10.93)-(10.95), (B.27), (B.30), we will obtain Eq (10.110: $\\mathbb{E}[\\ln p(\\alpha)]_{\\alpha} = a_0 \\ln b_0 + (a_0 - 1) [\\psi(a_N) - \\ln b_N] -b_0 \\frac{a_N}{b_N} - \\ln \\Gamma(a_N)$). Notice that there is a typo in Eq (10.110: $\\mathbb{E}[\\ln p(\\alpha)]_{\\alpha} = a_0 \\ln b_0 + (a_0 - 1) [\\psi(a_N) - \\ln b_N] -b_0 \\frac{a_N}{b_N} - \\ln \\Gamma(a_N)$). 
The last term in Eq (10.110: $\\mathbb{E}[\\ln p(\\alpha)]_{\\alpha} = a_0 \\ln b_0 + (a_0 - 1) [\\psi(a_N) - \\ln b_N] -b_0 \\frac{a_N}{b_N} - \\ln \\Gamma(a_N)$) should be $\\ln \\Gamma(a_0)$ instead of $\\ln \\Gamma(a_N)$ .\n\nFinally, we deal with the last two terms in Eq (10.107). We notice that these two terms are simply the entropies of a Gaussian and a Gamma distribution, so that using (B.31) and (B.41), we can obtain:\n\n$$-\\mathbb{E}[\\ln q(\\alpha)]_{\\alpha} = \\mathbb{H}[\\alpha] = \\ln \\Gamma(a_N) - (a_N - 1) \\cdot \\varphi(a_N) - \\ln b_N + a_N$$\n\nand\n\n$$-\\mathbb{E}[\\ln q(\\mathbf{w})]_{\\mathbf{w}} = \\mathbf{H}[\\mathbf{w}] = \\frac{1}{2}\\ln |\\mathbf{S}_N| + \\frac{M}{2}(1 + \\ln(2\\pi))$$",
"answer_length": 3772
},
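The decomposition above relies on the identity $\mathbb{E}[\mathbf{w}^T\mathbf{A}\mathbf{w}] = \mathrm{Tr}[\mathbf{A}(\mathbf{m}\mathbf{m}^T + \mathbf{S})]$ for $\mathbf{w} \sim \mathcal{N}(\mathbf{m}, \mathbf{S})$, which is how Eq (10.103) enters Eq (10.108). The following sketch (not part of the original solution; the dimensions and random test matrices are arbitrary) checks this identity by Monte Carlo.

```python
# Illustrative check (assumed setup, not from the text): E[w^T A w] = Tr[A (m m^T + S)]
# for w ~ N(m, S), the identity used when taking expectations of the quadratic term.
import numpy as np

rng = np.random.default_rng(0)
M = 4
m = rng.normal(size=M)                 # mean of q(w)
L = rng.normal(size=(M, M))
S = L @ L.T + np.eye(M)                # covariance of q(w)
A = rng.normal(size=(M, M))
A = A @ A.T                            # stands in for beta * Phi^T Phi

w = rng.multivariate_normal(m, S, size=200_000)
mc = np.mean(np.einsum('ni,ij,nj->n', w, A, w))      # Monte Carlo estimate of E[w^T A w]
closed = np.trace(A @ (np.outer(m, m) + S))          # Tr[A (m m^T + S)]
print(mc, closed)                      # the two values should agree closely
```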
{
"chapter": 10,
"question_number": "10.29",
"difficulty": "easy",
"question_text": "Show that the function $f(x) = \\ln(x)$ is concave for $0 < x < \\infty$ by computing its second derivative. Determine the form of the dual function $g(\\lambda)$ defined by (10.133: $g(\\lambda) = \\min_{x} \\left\\{ \\lambda x - f(x) \\right\\}.$), and verify that minimization of $\\lambda x g(\\lambda)$ with respect to $\\lambda$ according to (10.132: $f(x) = \\min_{\\lambda} \\{\\lambda x - g(\\lambda)\\}$) indeed recovers the function $\\ln(x)$ .",
"answer": "The second derivative of f(x) is given by:\n\n$$\\frac{d^2}{dx^2}(\\ln x) = \\frac{d}{dx}(\\frac{1}{x}) = -\\frac{1}{x^2} < 0$$\n\nTherefore, $f(x) = \\ln x$ is concave for $0 < x < \\infty$ . Based on definition, i.e., Eq (10.133: $g(\\lambda) = \\min_{x} \\left\\{ \\lambda x - f(x) \\right\\}.$), we can obtain:\n\n$$g(\\lambda) = \\min_{x} \\{\\lambda x - \\ln x\\}$$\n\nWe observe that:\n\n$$\\frac{d}{dx}(\\lambda x - \\ln x) = \\lambda - \\frac{1}{x}$$\n\nIn other words, when $\\lambda \\le 0$ , $\\lambda x - \\ln x$ will always decrease as x increase. On the other hand, when $\\lambda > 0$ , $\\lambda x - \\ln x$ will achieve its minimum when $x = 1/\\lambda$ . Therefore, we conclude that:\n\n$$g(\\lambda) = \\lambda \\cdot \\frac{1}{\\lambda} - \\ln \\frac{1}{\\lambda} = 1 + \\ln \\lambda$$\n\nSubstituting $g(\\lambda)$ back into Eq (10.132: $f(x) = \\min_{\\lambda} \\{\\lambda x - g(\\lambda)\\}$), we obtain:\n\n$$f(x) = \\min_{\\lambda} \\{\\lambda x - 1 - \\ln \\lambda\\}$$\n\nWe calculate the derivative:\n\n$$\\frac{d}{d\\lambda}(\\lambda x - 1 - \\ln \\lambda) = x - \\frac{1}{\\lambda}$$\n\nTherefore, when $\\lambda = 1/x$ , $\\lambda x - 1 - \\ln \\lambda$ achieves minimum with respect to $\\lambda$ , which yields:\n\n$$f(x) = \\frac{1}{x} \\cdot x - 1 - \\ln \\frac{1}{x} = \\ln x$$\n\nIn other words, we have shown that Eq (10.132: $f(x) = \\min_{\\lambda} \\{\\lambda x - g(\\lambda)\\}$) indeed recovers $f(x) = \\ln x$ .",
"answer_length": 1364
},
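As a quick numerical illustration of this solution (not part of the original text; the value $x = 2.5$ and the grid are arbitrary), the snippet below evaluates $g(\lambda) = 1 + \ln \lambda$ on a grid and confirms that $\min_{\lambda}\{\lambda x - g(\lambda)\}$ recovers $\ln x$.

```python
# Illustrative check: the dual of f(x) = ln x is g(lambda) = 1 + ln(lambda),
# and minimising lambda*x - g(lambda) over lambda > 0 recovers ln x.
import numpy as np

def g(lam):
    return 1.0 + np.log(lam)

x = 2.5
lam = np.linspace(1e-3, 5.0, 200_000)          # grid over lambda > 0
recovered = np.min(lam * x - g(lam))
print(recovered, np.log(x))                    # both approximately 0.9163
```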
{
"chapter": 10,
"question_number": "10.3",
"difficulty": "medium",
"question_text": "- 10.3 (\\*\\*) www Consider a factorized variational distribution $q(\\mathbf{Z})$ of the form (10.5: $q(\\mathbf{Z}) = \\prod_{i=1}^{M} q_i(\\mathbf{Z}_i).$). By using the technique of Lagrange multipliers, verify that minimization of the Kullback-Leibler divergence $\\mathrm{KL}(p\\|q)$ with respect to one of the factors $q_i(\\mathbf{Z}_i)$ , keeping all other factors fixed, leads to the solution (10.17: $q_j^{\\star}(\\mathbf{Z}_j) = \\int p(\\mathbf{Z}) \\prod_{i \\neq j} d\\mathbf{Z}_i = p(\\mathbf{Z}_j).$).",
"answer": "Let's start from the definition of KL divergence given in Eq (10.16: $Q(\\boldsymbol{\\xi}, \\boldsymbol{\\xi}^{\\text{old}}) = \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_n) - \\xi_n/2 - \\lambda(\\xi_n) (\\boldsymbol{\\phi}_n^{\\text{T}} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\text{T}}] \\boldsymbol{\\phi}_n - \\xi_n^2) \\right\\} + \\text{const}$).\n\n$$\\begin{split} KL(p||q) &= -\\int p(\\mathbf{Z}) \\Big[ \\sum_{i=1}^{M} \\ln q_i(\\mathbf{Z}_i) \\Big] d\\mathbf{Z} + \\text{const} \\\\ &= -\\int p(\\mathbf{Z}) \\Big[ \\ln q_j(\\mathbf{Z}_j) + \\sum_{i \\neq j} \\ln q_i(\\mathbf{Z}_i) \\Big] d\\mathbf{Z} + \\text{const} \\\\ &= -\\int p(\\mathbf{Z}) \\ln q_j(\\mathbf{Z}_j) d\\mathbf{Z} + \\text{const} \\\\ &= -\\int \\Big[ \\int p(\\mathbf{Z}) \\prod_{i \\neq j} d\\mathbf{Z}_i \\Big] \\ln q_j(\\mathbf{Z}_j) d\\mathbf{Z}_j + \\text{const} \\\\ &= -\\int P(\\mathbf{Z}_j) \\ln q_j(\\mathbf{Z}_j) d\\mathbf{Z}_j + \\text{const} \\end{split}$$\n\nNote that in the third step, since all the factors $q_i(\\mathbf{Z}_i)$ , where $i \\neq j$ , are fixed, they can be absorbed into the 'Const' variable. In the last step, we have denoted the marginal distribution:\n\n$$p(\\mathbf{Z}_j) = \\int p(\\mathbf{Z}) \\prod_{i \\neq j} d\\mathbf{Z}_i$$\n\nWe introduce the Lagrange multiplier to enforce $q_j(\\mathbf{Z}_j)$ integrate to 1.\n\n$$L = -\\int P(\\mathbf{Z}_j) \\ln q_j(\\mathbf{Z}_j) d\\mathbf{Z}_j + \\lambda (\\int q_j(\\mathbf{Z}_j) d\\mathbf{Z}_j - 1)$$\n\nUsing the functional derivative (for more details, you can refer to Appendix D or Prob.1.34), we calculate the functional derivative of L with respect to $q_j(\\mathbf{Z}_j)$ and set it to 0:\n\n$$-\\frac{p(\\mathbf{Z}_j)}{q_j(\\mathbf{Z}_j)} + \\lambda = 0$$\n\nRearranging it, we can obtain:\n\n$$\\lambda q_j(\\mathbf{Z}_j) = p(\\mathbf{Z}_j)$$\n\nIntegrating both sides with respect to $\\mathbf{Z}_j$ , we see that $\\lambda = 1$ . Substituting it back into the derivative, we can obtain the optimal $q_j(\\mathbf{Z}_j)$ :\n\n$$q_j^{\\star}(\\mathbf{Z}_j) = p(\\mathbf{Z}_j)$$\n\nNotice that actually we should also enforce $q_j(\\mathbf{Z}_j) > 0$ in the Lagrange multiplier, however as we can see that when we only enforce $q_j(\\mathbf{Z}_j)$ integrate to 1 and obtain the final close expression, $q_j(\\mathbf{Z}_j)$ is definitely larger than 0 at all $\\mathbf{Z}_j$ because $p(\\mathbf{Z}_j)$ is a PDF. Therefore, there is no need to introduce this inequality constraint in the Lagrange multiplier.",
"answer_length": 2359
},
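A small discrete sketch of this result (illustrative only; the joint distribution and the fixed factor are random): with $q_2$ held fixed, $\mathrm{KL}(p\|q_1 q_2)$ is smallest when $q_1$ equals the marginal $p(\mathbf{Z}_1)$, and moving $q_1$ away from that marginal only increases the divergence.

```python
# Illustrative check: with q(z1, z2) = q1(z1) q2(z2) and q2 fixed, KL(p||q) as a
# function of q1 is minimised by the marginal p(z1), as in Eq (10.17).
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((3, 4)); p /= p.sum()           # arbitrary joint p(z1, z2)
q2 = rng.random(4); q2 /= q2.sum()             # fixed factor q2(z2)

def kl(q1):
    q = np.outer(q1, q2)
    return np.sum(p * (np.log(p) - np.log(q)))

q1_star = p.sum(axis=1)                        # marginal p(z1)
uniform = np.full(3, 1.0 / 3.0)
for eps in [0.0, 0.25, 0.5]:
    q1 = (1 - eps) * q1_star + eps * uniform   # interpolate away from the marginal
    print(eps, kl(q1))                         # smallest value at eps = 0
```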
{
"chapter": 10,
"question_number": "10.30",
"difficulty": "easy",
"question_text": "By evaluating the second derivative, show that the log logistic function $f(x) = -\\ln(1+e^{-x})$ is concave. Derive the variational upper bound (10.137: $\\sigma(x) \\leqslant \\exp(\\lambda x - g(\\lambda))$) directly by making a second order Taylor expansion of the log logistic function around a point $x=\\xi$ .\n- 10.31 $(\\star \\star)$ By finding the second derivative with respect to x, show that the function $f(x) = -\\ln(e^{x/2} + e^{-x/2})$ is a concave function of x. Now consider the second derivatives with respect to the variable $x^2$ and hence show that it is a convex function of $x^2$ . Plot graphs of f(x) against x and against $x^2$ . Derive the lower bound (10.144: $\\sigma(x) \\geqslant \\sigma(\\xi) \\exp\\left\\{ (x - \\xi)/2 - \\lambda(\\xi)(x^2 - \\xi^2) \\right\\}$) on the logistic sigmoid function directly by making a first order Taylor series expansion of the function f(x) in the variable $x^2$ centred on the value $\\xi^2$ .",
"answer": "We begin by calculating the first derivative:\n\n$$\\frac{df(x)}{dx} = -\\frac{-e^{-x}}{1 + e^{-x}} = \\sigma(x) \\cdot e^{-x}$$\n\nThen we can obtain the second derivative:\n\n$$\\frac{d^2f(x)}{dx^2} = \\frac{-e^{-x}(1+e^{-x}) - e^{-x}(-e^{-x})}{(1+e^{-x})^2} = -[\\sigma(x)]^2 \\cdot e^{-x} < 0$$\n\nTherefore, the log logistic function f(x) is concave. Utilizing this concave property, we can obtain:\n\n$$f(x) \\le f(\\xi) + f'(\\xi) \\cdot (x - \\xi)$$\n\nwhich gives,\n\n$$\\ln \\sigma(x) \\le \\ln \\sigma(\\xi) + \\sigma(\\xi) \\cdot e^{-\\xi} \\cdot (x - \\xi) \\tag{*}$$\n\nComparing the expression above with Eq (10.136: $\\ln \\sigma(x) \\leqslant \\lambda x - g(\\lambda)$), we define $\\lambda = \\sigma(\\xi) \\cdot e^{-\\xi}$ . Then we can obtain:\n\n$$\\lambda = \\sigma(\\xi) \\cdot e^{-\\xi} = \\frac{e^{-\\xi}}{1 + e^{-\\xi}} = 1 - \\frac{1}{1 + e^{-\\xi}} = 1 - \\sigma(\\xi)$$\n\nIn other words, we have obtained $\\sigma(\\xi) = 1 - \\lambda$ . In order to simplify (\\*), we need to express $\\xi$ using $\\lambda$ and x. According to the definition of $\\lambda$ , we can obtain:\n\n$$\\xi = \\ln \\sigma(\\xi) - \\ln \\lambda = \\ln(1 - \\lambda) - \\ln \\lambda$$\n\nNow (\\*) can be simplified as:\n\n$$\\ln \\sigma(x) \\leq \\ln(1-\\lambda) + \\lambda \\cdot (x-\\xi)$$\n\n$$= \\lambda \\cdot x + \\ln(1-\\lambda) - \\lambda \\cdot \\xi$$\n\n$$= \\lambda \\cdot x + \\ln(1-\\lambda) - \\lambda \\cdot \\left[\\ln(1-\\lambda) - \\ln\\lambda\\right]$$\n\n$$= \\lambda \\cdot x + (1-\\lambda)\\ln(1-\\lambda) + \\lambda \\ln\\lambda$$\n\n$$= \\lambda \\cdot x - g(\\lambda)$$\n\nJust as required.",
"answer_length": 1489
},
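The bound derived here can be checked numerically. The sketch below (not from the text; $\xi = 1.3$ is an arbitrary choice) verifies that $\sigma(x) \leq \exp(\lambda x - g(\lambda))$ with $\lambda = 1 - \sigma(\xi)$ and $g(\lambda) = -\lambda\ln\lambda - (1-\lambda)\ln(1-\lambda)$, with equality at the point of tangency $x = \xi$.

```python
# Illustrative check of the exponential upper bound on the logistic sigmoid.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xi = 1.3
lam = 1.0 - sigmoid(xi)
g = -lam * np.log(lam) - (1 - lam) * np.log(1 - lam)

xs = np.linspace(-10, 10, 2001)
bound = np.exp(lam * xs - g)
assert np.all(sigmoid(xs) <= bound + 1e-9)      # bound holds everywhere
print(sigmoid(xi), np.exp(lam * xi - g))        # equal at x = xi
```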
{
"chapter": 10,
"question_number": "10.33",
"difficulty": "easy",
"question_text": "By differentiating the quantity $Q(\\xi, \\xi^{\\text{old}})$ defined by (10.161) with respect to the variational parameter $\\xi_n$ show that the update equation for $\\xi_n$ for the Bayesian logistic regression model is given by (10.163: $(\\xi_n^{\\text{new}})^2 = \\boldsymbol{\\phi}_n^{\\text{T}} \\mathbb{E}[\\mathbf{w} \\mathbf{w}^{\\text{T}}] \\boldsymbol{\\phi}_n = \\boldsymbol{\\phi}_n^{\\text{T}} \\left( \\mathbf{S}_N + \\mathbf{m}_N \\mathbf{m}_N^{\\text{T}} \\right) \\boldsymbol{\\phi}_n$).",
"answer": "To prove Eq (10.163: $(\\xi_n^{\\text{new}})^2 = \\boldsymbol{\\phi}_n^{\\text{T}} \\mathbb{E}[\\mathbf{w} \\mathbf{w}^{\\text{T}}] \\boldsymbol{\\phi}_n = \\boldsymbol{\\phi}_n^{\\text{T}} \\left( \\mathbf{S}_N + \\mathbf{m}_N \\mathbf{m}_N^{\\text{T}} \\right) \\boldsymbol{\\phi}_n$), we only need to prove Eq (10.162: $0 = \\lambda'(\\xi_n)(\\boldsymbol{\\phi}_n^{\\mathrm{T}} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\mathrm{T}}] \\boldsymbol{\\phi}_n - \\xi_n^2).$), from which Eq (10.163: $(\\xi_n^{\\text{new}})^2 = \\boldsymbol{\\phi}_n^{\\text{T}} \\mathbb{E}[\\mathbf{w} \\mathbf{w}^{\\text{T}}] \\boldsymbol{\\phi}_n = \\boldsymbol{\\phi}_n^{\\text{T}} \\left( \\mathbf{S}_N + \\mathbf{m}_N \\mathbf{m}_N^{\\text{T}} \\right) \\boldsymbol{\\phi}_n$) can be easily derived according to the text below Eq (10.132: $f(x) = \\min_{\\lambda} \\{\\lambda x - g(\\lambda)\\}$). Therefore, in what follows, we prove that the derivative of $Q(\\xi, \\xi^{\\text{old}})$ with respect to $\\xi_n$ will give Eq (10.162: $0 = \\lambda'(\\xi_n)(\\boldsymbol{\\phi}_n^{\\mathrm{T}} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\mathrm{T}}] \\boldsymbol{\\phi}_n - \\xi_n^2).$). We start by noticing Eq (4.88: $\\frac{d\\sigma}{da} = \\sigma(1 - \\sigma).$), i.e.,\n\n$$\\frac{d\\sigma(\\xi)}{d\\xi} = \\sigma(\\xi) \\cdot (1 - \\sigma(\\xi))$$\n\nNoticing Eq (10.150: $\\lambda(\\xi) = \\frac{1}{2\\xi} \\left[ \\sigma(\\xi) - \\frac{1}{2} \\right].$), now we can obtain:\n\n$$\\frac{dQ(\\boldsymbol{\\xi}, \\boldsymbol{\\xi}^{\\text{old}})}{d\\xi_{n}} = \\frac{\\sigma^{'}(\\xi_{n})}{\\sigma(\\xi_{n})} - \\frac{1}{2} - \\lambda^{'}(\\xi_{n}) \\cdot (\\boldsymbol{\\phi}_{n}^{T} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{T}] \\boldsymbol{\\phi}_{n} - \\xi_{n}^{2}) - \\lambda(\\xi_{n}) \\cdot (-2\\xi_{n})$$\n\n$$= 1 - \\sigma(\\xi_{n}) - \\frac{1}{2} + 2\\xi_{n} \\cdot \\lambda(\\xi_{n}) - \\lambda^{'}(\\xi_{n}) \\cdot (\\boldsymbol{\\phi}_{n}^{T} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{T}] \\boldsymbol{\\phi}_{n} - \\xi_{n}^{2})$$\n\n$$= \\frac{1}{2} - \\sigma(\\xi_{n}) + \\sigma(\\xi_{n}) - \\frac{1}{2} - \\lambda^{'}(\\xi_{n}) \\cdot (\\boldsymbol{\\phi}_{n}^{T} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{T}] \\boldsymbol{\\phi}_{n} - \\xi_{n}^{2})$$\n\n$$= -\\lambda^{'}(\\xi_{n}) \\cdot (\\boldsymbol{\\phi}_{n}^{T} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{T}] \\boldsymbol{\\phi}_{n} - \\xi_{n}^{2})$$\n\nSetting the derivative equal to zero, we obtain Eq (10.162: $0 = \\lambda'(\\xi_n)(\\boldsymbol{\\phi}_n^{\\mathrm{T}} \\mathbb{E}[\\mathbf{w}\\mathbf{w}^{\\mathrm{T}}] \\boldsymbol{\\phi}_n - \\xi_n^2).$), from which Eq (10.16b3) follows.",
"answer_length": 2426
},
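A finite-difference sketch of this result (illustrative; the scalar standing in for $\boldsymbol{\phi}_n^T\mathbb{E}[\mathbf{w}\mathbf{w}^T]\boldsymbol{\phi}_n$ is arbitrary): the derivative of a single term of $Q(\boldsymbol{\xi},\boldsymbol{\xi}^{\text{old}})$ with respect to $\xi_n$ vanishes at $\xi_n^2 = \boldsymbol{\phi}_n^T\mathbb{E}[\mathbf{w}\mathbf{w}^T]\boldsymbol{\phi}_n$, in line with Eq (10.163).

```python
# Illustrative check: one term of Q(xi) is stationary at xi^2 = phi^T E[w w^T] phi.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lam(xi):
    return (sigmoid(xi) - 0.5) / (2.0 * xi)

c = 2.7                                         # stands in for phi^T E[w w^T] phi
def Q_term(xi):
    return np.log(sigmoid(xi)) - xi / 2.0 - lam(xi) * (c - xi ** 2)

xi_star, eps = np.sqrt(c), 1e-5
grad = (Q_term(xi_star + eps) - Q_term(xi_star - eps)) / (2 * eps)
print(grad)                                     # approximately zero
```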
{
"chapter": 10,
"question_number": "10.34",
"difficulty": "medium",
"question_text": "- 10.34 (\\*\\*) In this exercise we derive re-estimation equations for the variational parameters $\\xi$ in the Bayesian logistic regression model of Section 4.5 by direct maximization of the lower bound given by (10.164: $\\mathcal{L}(\\boldsymbol{\\xi}) = \\frac{1}{2} \\ln \\frac{|\\mathbf{S}_{N}|}{|\\mathbf{S}_{0}|} - \\frac{1}{2} \\mathbf{m}_{N}^{\\mathrm{T}} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{m}_{0}^{\\mathrm{T}} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} - \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\}.$). To do this set the derivative of $\\mathcal{L}(\\xi)$ with respect to $\\xi_n$ equal to zero, making use of the result (3.117: $\\frac{d}{d\\alpha}\\ln|\\mathbf{A}| = \\operatorname{Tr}\\left(\\mathbf{A}^{-1}\\frac{d}{d\\alpha}\\mathbf{A}\\right).$) for the derivative of the log of a determinant, together with the expressions (10.157: $\\mathbf{m}_N = \\mathbf{S}_N \\left( \\mathbf{S}_0^{-1} \\mathbf{m}_0 + \\sum_{n=1}^N (t_n - 1/2) \\phi_n \\right)$) and (10.158: $\\mathbf{S}_{N}^{-1} = \\mathbf{S}_{0}^{-1} + 2\\sum_{n=1}^{N} \\lambda(\\xi_{n}) \\phi_{n} \\phi_{n}^{\\mathrm{T}}.$) which define the mean and covariance of the variational posterior distribution $q(\\mathbf{w})$ .",
"answer": "First, we should clarify one thing and that is there is typos in Eq(10.164). It is not difficult to observe these error if we notice that for $q(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_N, \\mathbf{S}_N)$ , in its logarithm, i.e., $\\ln q(\\mathbf{w})$ , $\\frac{1}{2} \\ln |\\mathbf{S}_N|$ should always have the same sign as $\\frac{1}{2}\\mathbf{m}_N^T \\mathbf{S}_N^{-1}\\mathbf{m}_N$ . This is our intuition. However, this is not the case in Eq(10.164). Based on Eq(10.159), Eq(10.153) and the Gaussian prior $p(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_0, \\mathbf{S}_0)$ , we can analytically obtain the correct lower bound $L(\\xi)$ (this will also be strictly proved by the next problem):\n\n$$L(\\xi) = \\frac{1}{2} \\ln |\\mathbf{S}_{N}| - \\frac{1}{2} \\ln |\\mathbf{S}_{0}| + \\frac{1}{2} \\mathbf{m}_{N}^{T} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} - \\frac{1}{2} \\mathbf{m}_{0}^{T} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0}$$\n\n$$+ \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} + \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\}$$\n\n$$= \\frac{1}{2} \\ln |\\mathbf{S}_{N}| + \\frac{1}{2} \\mathbf{m}_{N}^{T} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} + \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} + \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\} + \\text{const}$$\n\nWhere const denotes the term unrelated to $\\xi_n$ because $\\mathbf{m}_0$ and $\\mathbf{S}_0$ don't depend on $\\xi_n$ . Moreover, noticing that $\\mathbf{S}_N^{-1} \\cdot \\mathbf{m}_N$ also doesn't depend on $\\xi_n$ according to Eq(10.157),thus it will be convenient to define a variable: $\\mathbf{z}_N = \\mathbf{S}_N^{-1} \\cdot \\mathbf{m}_N$ , and we can easily verify:\n\n$$\\mathbf{m}_N^T \\mathbf{S}_N^{-1} \\mathbf{m}_N = [\\mathbf{S}_N \\mathbf{S}_N^{-1} \\mathbf{m}_N]^T \\mathbf{S}_N^{-1} [\\mathbf{S}_N \\mathbf{S}_N^{-1} \\mathbf{m}_N] = [\\mathbf{S}_N \\mathbf{z}_N]^T \\mathbf{S}_N^{-1} [\\mathbf{S}_N \\mathbf{z}_N] = \\mathbf{z}_N^T \\mathbf{S}_N^{-1} \\mathbf{z}_N$$\n\nNow, we can obtain:\n\n$$\\begin{split} \\frac{\\partial L(\\xi)}{\\partial \\xi_n} &= \\frac{d}{d\\xi_n} \\left\\{ \\frac{1}{2} \\ln |\\mathbf{S}_N| + \\frac{1}{2} \\mathbf{z}_N^T \\mathbf{S}_N \\mathbf{z}_N + \\sum_{n=1}^N \\left\\{ \\ln \\sigma(\\xi_n) - \\frac{1}{2} \\xi_n + \\lambda(\\xi_n) \\xi_n^2 \\right\\} \\right\\} \\\\ &= \\frac{1}{2} \\mathrm{Tr} \\big[ \\mathbf{S}_N^{-1} \\frac{\\partial \\mathbf{S}_N}{\\partial \\xi_n} \\big] + \\frac{1}{2} \\mathrm{Tr} \\big[ \\mathbf{z}_N \\mathbf{z}_N^T \\cdot \\frac{\\partial \\mathbf{S}_N}{\\partial \\xi_n} \\big] + \\lambda'(\\xi_n) \\xi_n^2 \\end{split}$$\n\nWhere we have used Eq(3.117) for the first term, and for the second term we have used:\n\n$$\\frac{d}{d\\xi_n} \\left\\{ \\frac{1}{2} \\mathbf{z}_N^T \\mathbf{S}_N \\mathbf{z}_N \\right\\} = \\frac{1}{2} \\frac{d}{d\\xi_n} \\left\\{ \\text{Tr} \\left[ \\mathbf{z}_N \\mathbf{z}_N^T \\cdot \\mathbf{S}_N \\right] \\right\\} = \\frac{1}{2} \\text{Tr} \\left[ \\mathbf{z}_N \\mathbf{z}_N^T \\cdot \\frac{\\partial \\mathbf{S}_N}{\\partial \\xi_n} \\right]$$\n\nFurthermore, for the last term, we can follow the same procedure as in the previous problem and now our remain task is to calculate $\\partial \\mathbf{S}_N/\\partial \\xi_n$ . 
Based on Eq(10.158) and (C.21), we can obtain:\n\n$$\\frac{\\partial \\mathbf{S}_{N}}{\\partial \\xi_{n}} = -\\mathbf{S}_{N} \\frac{\\partial \\mathbf{S}_{N}^{-1}}{\\partial \\xi_{n}} \\mathbf{S}_{N} = -\\mathbf{S}_{N} \\cdot [2\\lambda'(\\xi_{n}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T}] \\cdot \\mathbf{S}_{N}$$\n\nSubstituting it back into the derivative, we can obtain:\n\n$$\\begin{split} \\frac{\\partial L(\\boldsymbol{\\xi})}{\\partial \\boldsymbol{\\xi}_{n}} &= \\frac{1}{2} \\mathrm{Tr} \\big[ (\\mathbf{S}_{N}^{-1} + \\mathbf{z}_{N} \\mathbf{z}_{N}^{T}) \\frac{\\partial \\mathbf{S}_{N}}{\\partial \\boldsymbol{\\xi}_{n}} \\big] + \\boldsymbol{\\lambda}'(\\boldsymbol{\\xi}_{n}) \\boldsymbol{\\xi}_{n}^{2} \\\\ &= -\\frac{1}{2} \\mathrm{Tr} \\big[ (\\mathbf{S}_{N}^{-1} + \\mathbf{z}_{N} \\mathbf{z}_{N}^{T}) \\mathbf{S}_{N} \\cdot [2\\boldsymbol{\\lambda}'(\\boldsymbol{\\xi}_{n}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T}] \\cdot \\mathbf{S}_{N} \\big] + \\boldsymbol{\\lambda}'(\\boldsymbol{\\xi}_{n}) \\boldsymbol{\\xi}_{n}^{2} \\\\ &= -\\boldsymbol{\\lambda}'(\\boldsymbol{\\xi}_{n}) \\cdot \\left\\{ \\mathrm{Tr} \\big[ (\\mathbf{S}_{N}^{-1} + \\mathbf{z}_{N} \\mathbf{z}_{N}^{T}) \\cdot \\mathbf{S}_{N} \\cdot \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\cdot \\mathbf{S}_{N} \\big] - \\boldsymbol{\\xi}_{n}^{2} \\right\\} = 0 \\end{split}$$\n\nTherefore, we can obtain:\n\n$$\\begin{split} \\boldsymbol{\\xi}_n^2 &= \\operatorname{Tr} \\big[ (\\mathbf{S}_N^{-1} + \\mathbf{z}_N \\mathbf{z}_N^T) \\cdot \\mathbf{S}_N \\cdot \\boldsymbol{\\phi}_n \\boldsymbol{\\phi}_n^T \\cdot \\mathbf{S}_N \\big] \\\\ &= (\\mathbf{S}_N \\cdot \\boldsymbol{\\phi}_n)^T \\cdot (\\mathbf{S}_N^{-1} + \\mathbf{z}_N \\mathbf{z}_N^T) \\cdot (\\mathbf{S}_N \\cdot \\boldsymbol{\\phi}_n) \\\\ &= \\boldsymbol{\\phi}_n^T \\cdot (\\mathbf{S}_N + \\mathbf{S}_N \\mathbf{z}_N \\mathbf{z}_N^T \\mathbf{S}_N) \\cdot \\boldsymbol{\\phi}_n \\\\ &= \\boldsymbol{\\phi}_n^T \\cdot (\\mathbf{S}_N + \\mathbf{m}_N \\mathbf{m}_N^T) \\cdot \\boldsymbol{\\phi}_n \\end{split}$$\n\nWhere we have used the definition of $\\mathbf{z}_N$ , i.e., $\\mathbf{z}_N = \\mathbf{S}_N^{-1} \\cdot \\mathbf{m}_N$ and also repeatedly used the symmetry property of $\\mathbf{S}_N$ .",
"answer_length": 5188
},
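The two matrix identities used in this solution, Eq (3.117) for the log-determinant and (C.21) for the derivative of an inverse, can be checked by finite differences. The sketch below is illustrative only; $\mathbf{S}_0 = \mathbf{I}$, a single basis vector and $\xi = 0.8$ are arbitrary choices, and $\mathbf{S}_N^{-1}(\xi) = \mathbf{S}_0^{-1} + 2\lambda(\xi)\boldsymbol{\phi}\boldsymbol{\phi}^T$ mimics Eq (10.158) with one data point.

```python
# Illustrative finite-difference check of d ln|S_N|/d xi = Tr(S_N^{-1} dS_N/d xi)
# and dS_N/d xi = -S_N (dS_N^{-1}/d xi) S_N.
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def lam(xi): return (sigmoid(xi) - 0.5) / (2.0 * xi)

rng = np.random.default_rng(5)
M = 3
S0_inv = np.eye(M)
phi = rng.normal(size=M)

def S_N(xi):
    return np.linalg.inv(S0_inv + 2 * lam(xi) * np.outer(phi, phi))

xi, eps = 0.8, 1e-6
dS = (S_N(xi + eps) - S_N(xi - eps)) / (2 * eps)
dS_inv = (lam(xi + eps) - lam(xi - eps)) / (2 * eps) * 2 * np.outer(phi, phi)
print(np.allclose(dS, -S_N(xi) @ dS_inv @ S_N(xi), atol=1e-4))            # (C.21)
d_logdet = (np.log(np.linalg.det(S_N(xi + eps)))
            - np.log(np.linalg.det(S_N(xi - eps)))) / (2 * eps)
print(np.isclose(d_logdet, np.trace(np.linalg.inv(S_N(xi)) @ dS)))        # (3.117)
```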
{
"chapter": 10,
"question_number": "10.35",
"difficulty": "medium",
"question_text": "- 10.35 (\\*\\*) Derive the result (10.164: $\\mathcal{L}(\\boldsymbol{\\xi}) = \\frac{1}{2} \\ln \\frac{|\\mathbf{S}_{N}|}{|\\mathbf{S}_{0}|} - \\frac{1}{2} \\mathbf{m}_{N}^{\\mathrm{T}} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{m}_{0}^{\\mathrm{T}} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} - \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\}.$) for the lower bound $\\mathcal{L}(\\xi)$ in the variational logistic regression model. This is most easily done by substituting the expressions for the Gaussian prior $q(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_0, \\mathbf{S}_0)$ , together with the lower bound $h(\\mathbf{w}, \\xi)$ on the likelihood function, into the integral (10.159: $\\ln p(\\mathbf{t}) = \\ln \\int p(\\mathbf{t}|\\mathbf{w})p(\\mathbf{w}) \\, d\\mathbf{w} \\geqslant \\ln \\int h(\\mathbf{w}, \\boldsymbol{\\xi})p(\\mathbf{w}) \\, d\\mathbf{w} = \\mathcal{L}(\\boldsymbol{\\xi}). \\quad$) which defines $\\mathcal{L}(\\xi)$ . Next gather together the terms which depend on $\\mathbf{w}$ in the exponential and complete the square to give a Gaussian integral, which can then be evaluated by invoking the standard result for the normalization coefficient of a multivariate Gaussian. Finally take the logarithm to obtain (10.164: $\\mathcal{L}(\\boldsymbol{\\xi}) = \\frac{1}{2} \\ln \\frac{|\\mathbf{S}_{N}|}{|\\mathbf{S}_{0}|} - \\frac{1}{2} \\mathbf{m}_{N}^{\\mathrm{T}} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{m}_{0}^{\\mathrm{T}} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} - \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\}.$).",
"answer": "There is a typo in Eq (10.164: $\\mathcal{L}(\\boldsymbol{\\xi}) = \\frac{1}{2} \\ln \\frac{|\\mathbf{S}_{N}|}{|\\mathbf{S}_{0}|} - \\frac{1}{2} \\mathbf{m}_{N}^{\\mathrm{T}} \\mathbf{S}_{N}^{-1} \\mathbf{m}_{N} + \\frac{1}{2} \\mathbf{m}_{0}^{\\mathrm{T}} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\sum_{n=1}^{N} \\left\\{ \\ln \\sigma(\\xi_{n}) - \\frac{1}{2} \\xi_{n} - \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right\\}.$), for more details you can refer to the previous problem. Let's calculate $L(\\xi)$ based on Based on Eq(10.159), Eq(10.153) and the Gaussian prior $p(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w}|\\mathbf{m}_0, \\mathbf{S}_0)$ :\n\n$$h(\\mathbf{w}, \\boldsymbol{\\xi}) p(\\mathbf{w}) = \\mathcal{N}(\\mathbf{w} | \\mathbf{m}_{0}, \\mathbf{S}_{0}) \\cdot \\prod_{n=1}^{N} \\sigma(\\xi_{n}) \\exp\\left\\{\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n} \\mathbf{t}_{n} - (\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n} + \\xi_{n})/2\\right.$$\n\n$$\\left. - \\lambda(\\xi_{n}) ([\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n}]^{2} - \\xi_{n}^{2})\\right\\}$$\n\n$$= \\left\\{ (2\\pi)^{-W/2} \\cdot |\\mathbf{S}_{0}|^{-1/2} \\cdot \\prod_{n=1}^{N} \\sigma(\\xi_{n}) \\right\\} \\cdot \\exp\\left\\{ -\\frac{1}{2} (\\mathbf{w} - \\mathbf{m}_{0})^{T} \\mathbf{S}_{0}^{-1} (\\mathbf{w} - \\mathbf{m}_{0}) \\right\\}$$\n\n$$\\cdot \\prod_{n=1}^{N} \\exp\\left\\{\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n} \\mathbf{t}_{n} - (\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n} + \\xi_{n})/2 - \\lambda(\\xi_{n}) ([\\mathbf{w}^{T} \\boldsymbol{\\phi}_{n}]^{2} - \\xi_{n}^{2}) \\right\\}$$\n\n$$= \\left\\{ (2\\pi)^{-W/2} \\cdot |\\mathbf{S}_{0}|^{-1/2} \\cdot \\prod_{n=1}^{N} \\sigma(\\xi_{n}) \\cdot \\exp\\left( -\\frac{1}{2} \\mathbf{m}_{0}^{T} \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} - \\sum_{n=1}^{N} \\frac{\\xi_{n}}{2} + \\sum_{n=1}^{N} \\lambda(\\xi_{n}) \\xi_{n}^{2} \\right) \\right\\}$$\n\n$$\\cdot \\exp\\left\\{ -\\frac{1}{2} \\mathbf{w}^{T} \\left( \\mathbf{S}_{0}^{-1} + 2 \\sum_{n=1}^{N} \\lambda(\\xi_{n}) \\boldsymbol{\\phi}_{n} \\boldsymbol{\\phi}_{n}^{T} \\right) \\mathbf{w} + \\mathbf{w}^{T} \\left( \\mathbf{S}_{0}^{-1} \\mathbf{m}_{0} + \\sum_{n=1}^{N} \\boldsymbol{\\phi}_{n} (t_{n} - \\frac{1}{2}) \\right) \\right\\}$$\n\nNoticing Eq (10.157)-(10.58), we can obtain:\n\n$$\\begin{split} h(\\mathbf{w}, \\boldsymbol{\\xi}) \\, p(\\mathbf{w}) &= \\left. \\left\\{ (2\\pi)^{-W/2} \\cdot |\\mathbf{S}_0|^{-1/2} \\cdot \\prod_{n=1}^N \\sigma(\\xi_n) \\cdot \\exp\\left(-\\frac{1}{2}\\mathbf{m}_0^T \\mathbf{S}_0^{-1} \\mathbf{m}_0 - \\sum_{n=1}^N \\frac{\\xi_n}{2} + \\sum_{n=1}^N \\lambda(\\xi_n) \\xi_n^2 \\right) \\right\\} \\\\ &\\cdot \\exp\\left\\{ -\\frac{1}{2}\\mathbf{w}^T \\mathbf{S}_N^{-1} \\mathbf{w} + \\mathbf{w}^T \\mathbf{S}_N^{-1} \\mathbf{m}_N \\right\\} \\\\ &= \\left. \\left\\{ (2\\pi)^{-W/2} \\cdot |\\mathbf{S}_0|^{-1/2} \\cdot \\prod_{n=1}^N \\sigma(\\xi_n) \\right. 
\\\\ &\\cdot \\exp\\left( -\\frac{1}{2}\\mathbf{m}_0^T \\mathbf{S}_0^{-1} \\mathbf{m}_0 - \\sum_{n=1}^N \\frac{\\xi_n}{2} + \\sum_{n=1}^N \\lambda(\\xi_n) \\xi_n^2 + \\frac{1}{2}\\mathbf{m}_N^T \\mathbf{S}_N^{-1} \\mathbf{m}_N \\right) \\right\\} \\\\ &\\cdot \\exp\\left\\{ -\\frac{1}{2}(\\mathbf{w} - \\mathbf{m}_N)^T \\mathbf{S}_N^{-1} (\\mathbf{w} - \\mathbf{m}_N) \\right\\} \\end{split}$$\n\nTherefore, utilizing the normalization constant of Gaussian distribution,\n\nnow we can obtain:\n\n$$\\begin{split} \\int h(\\mathbf{w}, \\boldsymbol{\\xi}) p(\\mathbf{w}) d\\mathbf{w} &= (2\\pi)^{W/2} \\cdot |\\mathbf{S}_N|^{1/2} \\cdot \\left\\{ (2\\pi)^{-W/2} \\cdot |\\mathbf{S}_0|^{-1/2} \\cdot \\prod_{n=1}^N \\sigma(\\xi_n) \\right. \\\\ & \\cdot \\exp\\left( -\\frac{1}{2} \\mathbf{m}_0^T \\mathbf{S}_0^{-1} \\mathbf{m}_0 - \\sum_{n=1}^N \\frac{\\xi_n}{2} + \\sum_{n=1}^N \\lambda(\\xi_n) \\xi_n^2 + \\frac{1}{2} \\mathbf{m}_N^T \\mathbf{S}_N^{-1} \\mathbf{m}_N \\right) \\right\\} \\\\ &= \\left. \\left\\{ (\\frac{|\\mathbf{S}_N|}{|\\mathbf{S}_0|})^{1/2} \\cdot \\prod_{n=1}^N \\sigma(\\xi_n) \\right. \\\\ & \\cdot \\exp\\left( -\\frac{1}{2} \\mathbf{m}_0^T \\mathbf{S}_0^{-1} \\mathbf{m}_0 - \\sum_{n=1}^N \\frac{\\xi_n}{2} + \\sum_{n=1}^N \\lambda(\\xi_n) \\xi_n^2 + \\frac{1}{2} \\mathbf{m}_N^T \\mathbf{S}_N^{-1} \\mathbf{m}_N \\right) \\right\\} \\end{split}$$\n\nTherefore, $L(\\xi)$ can be written as:\n\n$$L(\\boldsymbol{\\xi}) = \\ln \\int h(\\mathbf{w}, \\boldsymbol{\\xi}) p(\\mathbf{w}) d\\mathbf{w}$$\n\n$$= \\frac{1}{2} \\ln \\frac{|\\mathbf{S}_N|}{|\\mathbf{S}_0|} - \\frac{1}{2} \\mathbf{m}_0^T \\mathbf{S}_0^{-1} \\mathbf{m}_0 + \\frac{1}{2} \\mathbf{m}_N^T \\mathbf{S}_N^{-1} \\mathbf{m}_N + \\sum_{n=1}^N \\left\\{ \\ln \\sigma(\\xi_n) - \\frac{1}{2} \\xi_n + \\lambda(\\xi_n) \\xi_n^2 \\right\\}$$",
"answer_length": 4295
},
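Because $\int h(\mathbf{w},\boldsymbol{\xi})p(\mathbf{w})\,d\mathbf{w}$ is a Gaussian integral, the closed form derived above can be compared against a direct Monte Carlo estimate. The sketch below is illustrative only: the design matrix, targets and $\boldsymbol{\xi}$ values are made up, and it uses the sign convention $+\lambda(\xi_n)\xi_n^2$ argued for in this solution.

```python
# Illustrative Monte Carlo check of L(xi) = ln \int h(w, xi) p(w) dw for made-up data.
import numpy as np

rng = np.random.default_rng(0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def lam(xi): return (sigmoid(xi) - 0.5) / (2.0 * xi)

M, N = 2, 5
Phi = rng.normal(size=(N, M))                     # rows are phi_n^T
t = rng.integers(0, 2, size=N).astype(float)      # binary targets
xi = np.abs(rng.normal(size=N)) + 0.5             # variational parameters
m0, S0 = np.zeros(M), np.eye(M)
S0_inv = np.linalg.inv(S0)

# Eq (10.157)-(10.158)
S_N_inv = S0_inv + 2 * sum(lam(xi[n]) * np.outer(Phi[n], Phi[n]) for n in range(N))
S_N = np.linalg.inv(S_N_inv)
m_N = S_N @ (S0_inv @ m0 + sum((t[n] - 0.5) * Phi[n] for n in range(N)))

closed = (0.5 * np.log(np.linalg.det(S_N) / np.linalg.det(S0))
          - 0.5 * m0 @ S0_inv @ m0 + 0.5 * m_N @ S_N_inv @ m_N
          + np.sum(np.log(sigmoid(xi)) - 0.5 * xi + lam(xi) * xi ** 2))

w = rng.multivariate_normal(m0, S0, size=500_000) # samples from the prior p(w)
a = w @ Phi.T                                     # a[s, n] = w_s^T phi_n
log_h = (np.log(sigmoid(xi)) + a * t - 0.5 * (a + xi)
         - lam(xi) * (a ** 2 - xi ** 2)).sum(axis=1)
mc = np.log(np.mean(np.exp(log_h)))
print(closed, mc)                                 # agree to within Monte Carlo error
```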
{
"chapter": 10,
"question_number": "10.36",
"difficulty": "medium",
"question_text": "Consider the ADF approximation scheme discussed in Section 10.7, and show that inclusion of the factor $f_j(\\theta)$ leads to an update of the model evidence of the form\n\n$$p_j(\\mathcal{D}) \\simeq p_{j-1}(\\mathcal{D})Z_j$$\n (10.242: $p_j(\\mathcal{D}) \\simeq p_{j-1}(\\mathcal{D})Z_j$)\n\nwhere $Z_j$ is the normalization constant defined by (10.197: $Z_j = \\int f_j(\\boldsymbol{\\theta}) q^{\\setminus j}(\\boldsymbol{\\theta}) \\, \\mathrm{d}\\boldsymbol{\\theta}.$). By applying this result recursively, and initializing with $p_0(\\mathcal{D}) = 1$ , derive the result\n\n$$p(\\mathcal{D}) \\simeq \\prod_{j} Z_{j}.$$\n (10.243: $p(\\mathcal{D}) \\simeq \\prod_{j} Z_{j}.$)",
"answer": "Let's clarify this problem. What this problem wants us to prove is that suppose at beginning the joint distribution comprises a product of j-1 factors, i.e.,\n\n$$p_{j-1}(D,\\boldsymbol{\\theta}) = \\prod_{i=1}^{j-1} f_{j-1}(\\boldsymbol{\\theta})$$\n\nand now the joint distribution comprises a product of *j* factors:\n\n$$p_j(D, \\boldsymbol{\\theta}) = \\prod_{i=1}^j f_j(\\boldsymbol{\\theta}) = p_{j-1}(D, \\boldsymbol{\\theta}) \\cdot f_j(\\boldsymbol{\\theta})$$\n\nThen we are asked to prove Eq (10.242: $p_j(\\mathcal{D}) \\simeq p_{j-1}(\\mathcal{D})Z_j$). This situation corresponds to j-1 data points at the beginning and then one more data point is obtained. For more details you can read the text below Eq (10.188: $p(\\mathcal{D}, \\boldsymbol{\\theta}) = \\prod_{i} f_i(\\boldsymbol{\\theta}).$). Based on definition, we can write down:\n\n$$\\begin{split} p_{j}(D) &= \\int p_{j}(D,\\boldsymbol{\\theta})d\\boldsymbol{\\theta} \\\\ &= \\int p_{j-1}(D,\\boldsymbol{\\theta})\\cdot f_{j}(\\boldsymbol{\\theta})d\\boldsymbol{\\theta} \\\\ &= \\int p_{j-1}(D)\\cdot p_{j-1}(\\boldsymbol{\\theta}|D)\\cdot f_{j}(\\boldsymbol{\\theta})d\\boldsymbol{\\theta} \\\\ &= p_{j-1}(D)\\cdot \\int p_{j-1}(\\boldsymbol{\\theta}|D)\\cdot f_{j}(\\boldsymbol{\\theta})d\\boldsymbol{\\theta} \\\\ &\\approx p_{j-1}(D)\\cdot \\int q_{j-1}(\\boldsymbol{\\theta})\\cdot f_{j}(\\boldsymbol{\\theta})d\\boldsymbol{\\theta} \\\\ &= p_{j-1}(D)\\cdot Z_{j} \\end{split}$$\n\nWhere we have sequentially used Bayes' Theorem, $q_{j-1}(\\theta)$ is an approximation for the posterior $p_{j-1}(\\theta|D)$ , and Eq (10.197: $Z_j = \\int f_j(\\boldsymbol{\\theta}) q^{\\setminus j}(\\boldsymbol{\\theta}) \\, \\mathrm{d}\\boldsymbol{\\theta}.$). To further prove Eq (10.243: $p(\\mathcal{D}) \\simeq \\prod_{j} Z_{j}.$), we only need to recursively use the expression we have proved.",
"answer_length": 1766
},
{
"chapter": 10,
"question_number": "10.37",
"difficulty": "easy",
"question_text": "Consider the expectation propagation algorithm from Section 10.7, and suppose that one of the factors $f_0(\\theta)$ in the definition (10.188: $p(\\mathcal{D}, \\boldsymbol{\\theta}) = \\prod_{i} f_i(\\boldsymbol{\\theta}).$) has the same exponential family functional form as the approximating distribution $q(\\theta)$ . Show that if the factor $\\widetilde{f}_0(\\theta)$ is initialized to be $f_0(\\theta)$ , then an EP update to refine $\\widetilde{f}_0(\\theta)$ leaves $f_0(\\theta)$ unchanged. This situation typically arises when one of the factors is the prior $p(\\theta)$ , and so we see that the prior factor can be incorporated once exactly and does not need to be refined.",
"answer": "Let's start from definition. q() will be initialized as\n\n$$q^{\\text{init}}(\\boldsymbol{\\theta}) = \\widetilde{f}_0(\\boldsymbol{\\theta}) \\prod_{i \\neq 0} \\widetilde{f}_i(\\boldsymbol{\\theta}) = f_0(\\boldsymbol{\\theta}) \\prod_{i \\neq 0} \\widetilde{f}_i(\\boldsymbol{\\theta})$$\n\nWhere we have used $\\tilde{f}_0(\\boldsymbol{\\theta}) = f_0(\\boldsymbol{\\theta})$ according to the problem description. Then we can obtain:\n\n$$q^{0}(\\boldsymbol{\\theta}) = \\frac{q(\\boldsymbol{\\theta})}{\\widetilde{f}_0(\\boldsymbol{\\theta})} = \\prod_{i \\neq 0} \\widetilde{f}_i(\\boldsymbol{\\theta})$$\n\nNext, we will obtain $q^{\\text{new}}(\\boldsymbol{\\theta})$ by matching its moments against $q^{0}(\\boldsymbol{\\theta}) f_0(\\boldsymbol{\\theta})$ , which exactly equals:\n\n$$q^{0}(\\boldsymbol{\\theta})f_0(\\boldsymbol{\\theta}) = \\frac{q(\\boldsymbol{\\theta})}{\\widetilde{f}_0(\\boldsymbol{\\theta})} = \\prod_{i \\neq 0} \\widetilde{f}_i(\\boldsymbol{\\theta}) \\cdot f_0(\\boldsymbol{\\theta}) = q^{\\text{init}}(\\boldsymbol{\\theta})$$\n\nIn other words, in order to obtain $q^{\\text{new}}(\\boldsymbol{\\theta})$ , we need to match its moment against $q^{0}(\\boldsymbol{\\theta})$ , and since $q^{\\text{new}}$ and $q^{\\text{init}}$ both belong to exponential family, they will be identical if they have the same moment. Moreover, based on Eq (10.206: $Z_j = \\int q^{\\setminus j}(\\boldsymbol{\\theta}) f_j(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}.$), we have:\n\n$$Z_0 = \\int q^{0}(\\boldsymbol{\\theta}) f_0(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta} = \\int q^{\\text{init}}(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta} = 1$$\n\nTherefore, based on Eq(10.207), we have:\n\n$$\\widetilde{f}_0(\\boldsymbol{\\theta}) = Z_0 \\frac{q^{\\text{new}}(\\boldsymbol{\\theta})}{q^{0}(\\boldsymbol{\\theta})} = 1 \\cdot \\frac{q^{\\text{init}}(\\boldsymbol{\\theta})}{q^{0}(\\boldsymbol{\\theta})} = f_0(\\boldsymbol{\\theta})$$",
"answer_length": 1849
},
{
"chapter": 10,
"question_number": "10.38",
"difficulty": "hard",
"question_text": "In this exercise and the next, we shall verify the results (10.214)–(10.224) for the expectation propagation algorithm applied to the clutter problem. Begin by using the division formula (10.205: $q^{\\setminus j}(\\boldsymbol{\\theta}) = \\frac{q(\\boldsymbol{\\theta})}{\\widetilde{f}_j(\\boldsymbol{\\theta})}.$) to derive the expressions (10.214: $\\mathbf{m}^{\\setminus n} = \\mathbf{m} + v^{\\setminus n} v_n^{-1} (\\mathbf{m} - \\mathbf{m}_n)$) and (10.215: $(v^{\\setminus n})^{-1} = v^{-1} - v_n^{-1}.$) by completing the square inside the exponential to identify the mean and variance. Also, show that the normalization constant $Z_n$ , defined by (10.206: $Z_j = \\int q^{\\setminus j}(\\boldsymbol{\\theta}) f_j(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}.$), is given for the clutter problem by (10.216: $Z_n = (1 - w)\\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{n}, (v^{n} + 1)\\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, a\\mathbf{I}).$). This can be done by making use of the general result (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$).",
"answer": "Based on Eq (10.205: $q^{\\setminus j}(\\boldsymbol{\\theta}) = \\frac{q(\\boldsymbol{\\theta})}{\\widetilde{f}_j(\\boldsymbol{\\theta})}.$), (10.212: $q(\\boldsymbol{\\theta}) = \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}, v\\mathbf{I}).$) and (10.213: $\\widetilde{f}_n(\\boldsymbol{\\theta}) = s_n \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}_n, v_n \\mathbf{I})$), we can obtain:\n\n$$q^{/j}(\\boldsymbol{\\theta}) = \\frac{q(\\boldsymbol{\\theta})}{\\widetilde{f}_{j}(\\boldsymbol{\\theta})} = \\frac{\\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}, v\\mathbf{I})}{s_{n}\\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}_{n}, v_{n}\\mathbf{I})}$$\n\n$$\\propto \\frac{\\exp\\left\\{-\\frac{1}{2}(\\boldsymbol{\\theta} - \\mathbf{m})^{T}(v\\mathbf{I})^{-1}(\\boldsymbol{\\theta} - \\mathbf{m})\\right\\}}{\\exp\\left\\{-\\frac{1}{2}(\\boldsymbol{\\theta} - \\mathbf{m}_{n})^{T}(v_{n}\\mathbf{I})^{-1}(\\boldsymbol{\\theta} - \\mathbf{m}_{n})\\right\\}}$$\n\n$$= \\exp\\left\\{-\\frac{1}{2}(\\boldsymbol{\\theta} - \\mathbf{m})^{T}(v\\mathbf{I})^{-1}(\\boldsymbol{\\theta} - \\mathbf{m}) + \\frac{1}{2}(\\boldsymbol{\\theta} - \\mathbf{m}_{n})^{T}(v_{n}\\mathbf{I})^{-1}(\\boldsymbol{\\theta} - \\mathbf{m}_{n})\\right\\}$$\n\n$$= \\exp\\left\\{-\\frac{1}{2}(\\boldsymbol{\\theta}^{T}\\mathbf{A}\\boldsymbol{\\theta} + \\boldsymbol{\\theta}^{T} \\cdot \\mathbf{B} + \\mathbf{C})\\right\\}$$\n\nWhere we have completed squares over $\\theta$ in the last step, and we have defined:\n\n$$\\mathbf{A} = (v\\mathbf{I})^{-1} - (v_n\\mathbf{I})^{-1}$$\n and $\\mathbf{B} = 2 \\cdot \\left[ -(v\\mathbf{I})^{-1} \\cdot \\mathbf{m} + (v_n\\mathbf{I})^{-1} \\cdot \\mathbf{m}_n \\right]$ \n\nNote that in order to match this to a Gaussian, we don't actually need **C**, so we omit it here. Now we match this against a Gaussian, beginning by first considering the quadratic term, we can obtain:\n\n$$[\\boldsymbol{\\Sigma}^{/n}]^{-1} = (v\\mathbf{I})^{-1} - (v_n\\mathbf{I})^{-1} = (v^{-1} - v_n^{-1})\\mathbf{I}^{-1} = [v^{/n}]^{-1} \\cdot \\mathbf{I}^{-1}$$\n\nIt is identical to Eq (10.214: $\\mathbf{m}^{\\setminus n} = \\mathbf{m} + v^{\\setminus n} v_n^{-1} (\\mathbf{m} - \\mathbf{m}_n)$). By matching the linear term, we can also obtain:\n\n$$-2 \\cdot [\\mathbf{\\Sigma}^{/n}]^{-1} \\cdot (\\mathbf{m}^{/n}) = \\mathbf{B} = 2 \\cdot \\left[ -(v\\mathbf{I})^{-1} \\cdot \\mathbf{m} + (v_n\\mathbf{I})^{-1} \\cdot \\mathbf{m}_n \\right]$$\n\nRearranging it, we can obtain:\n\n$$(\\mathbf{m}^{/n}) = -[\\mathbf{\\Sigma}^{/n}] \\cdot \\left[ -(v\\mathbf{I})^{-1} \\cdot \\mathbf{m} + (v_n \\mathbf{I})^{-1} \\cdot \\mathbf{m}_n \\right]$$\n\n$$= -[v^{/n}] \\cdot \\left[ -v^{-1} \\cdot \\mathbf{m} + v_n^{-1} \\cdot \\mathbf{m}_n \\right]$$\n\n$$= v^{/n} \\cdot v^{-1} \\cdot \\mathbf{m} - \\frac{v^{/n}}{v_n} \\cdot \\mathbf{m}_n$$\n\n$$= v^{/n} ([v^{/n}]^{-1} - v_n^{-1}) \\cdot \\mathbf{m} - \\frac{v^{/n}}{v_n} \\cdot \\mathbf{m}_n$$\n\n$$= \\mathbf{m} + \\frac{v^{/n}}{v_n} \\cdot (\\mathbf{m} - \\mathbf{m}_n)$$\n\nWhich is identical to Eq (10.214: $\\mathbf{m}^{\\setminus n} = \\mathbf{m} + v^{\\setminus n} v_n^{-1} (\\mathbf{m} - \\mathbf{m}_n)$). **One important thing worthy clarified is that**: for arbitrary two Gaussian random variable, their division is not a Gaussian. You can find more details by typing \"ratio distribution\" in Wikipedia. Generally speaking, the division of two Gaussian random variable follows a Cauchy distribution. 
Moreover, the product of two Gaussian random variables is not a Gaussian random variable.\n\nHowever, the product of two Gaussian PDF, e.g., $p(\\mathbf{x})$ and $p(\\mathbf{y})$ , can be a Gaussian PDF because when $\\mathbf{x}$ and $\\mathbf{y}$ are independent, $p(\\mathbf{x},\\mathbf{y}) = p(\\mathbf{x})p(\\mathbf{y})$ , is a Gaussian PDF. In the EP framework,according to Eq (10.204: $q(\\boldsymbol{\\theta}) \\propto \\prod_{i} \\widetilde{f}_{i}(\\boldsymbol{\\theta}).$), we have already assumed that $q(\\boldsymbol{\\theta})$ , i.e., Eq (10.212: $q(\\boldsymbol{\\theta}) = \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}, v\\mathbf{I}).$), is given by the product of $\\tilde{f}_j(\\boldsymbol{\\theta})$ , i.e.,(10.213). Therefore, their division still gives by the product of many remaining Gaussian PDF, which is still a Gaussian.\n\nFinally, based on Eq (10.206: $Z_j = \\int q^{\\setminus j}(\\boldsymbol{\\theta}) f_j(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}.$) and (10.209: $p(\\mathbf{x}|\\boldsymbol{\\theta}) = (1 - w)\\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\theta}, \\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}|\\mathbf{0}, a\\mathbf{I})$), we can obtain:\n\n$$Z_{n} = \\int q^{/n}(\\boldsymbol{\\theta})p(\\mathbf{x}_{n}|\\boldsymbol{\\theta})d\\boldsymbol{\\theta}$$\n\n$$= \\int \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}^{/n}, v^{/n}\\mathbf{I}) \\cdot \\{(1-w)\\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{\\theta}, \\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{0}, \\alpha\\mathbf{I})\\} d\\boldsymbol{\\theta}$$\n\n$$= (1-w)\\int \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}^{/n}, v^{/n}\\mathbf{I})\\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{\\theta}, \\mathbf{I}) d\\boldsymbol{\\theta} + w\\int \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}^{/n}, v^{/n}\\mathbf{I}) \\cdot \\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{0}, \\alpha\\mathbf{I}) d\\boldsymbol{\\theta}$$\n\n$$= (1-w)\\mathcal{N}(\\mathbf{x}_{n}|\\mathbf{m}^{/n}, (v^{/n}+1)\\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_{n}|\\boldsymbol{0}, \\alpha\\mathbf{I})$$\n\nWhere we have used Eq (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$).",
"answer_length": 5368
},
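The cavity formulas (10.214)-(10.215) amount to saying that $q(\boldsymbol{\theta})$ equals $\widetilde{f}_n(\boldsymbol{\theta})\,q^{\setminus n}(\boldsymbol{\theta})$ up to a constant factor. The sketch below (illustrative; the numbers are arbitrary, chosen so that $v^{\setminus n} > 0$) checks this by confirming that the corresponding log-density ratio does not depend on $\boldsymbol{\theta}$.

```python
# Illustrative check of the cavity mean/variance formulas for the clutter problem.
import numpy as np
from scipy.stats import multivariate_normal as mvn

D = 2
m, v = np.array([0.3, -0.7]), 0.5                 # q(theta) = N(theta | m, v I)
m_n, v_n = np.array([1.0, 0.4]), 2.0              # site factor ~ N(theta | m_n, v_n I)

v_cav = 1.0 / (1.0 / v - 1.0 / v_n)               # Eq (10.215)
m_cav = m + (v_cav / v_n) * (m - m_n)             # Eq (10.214)

thetas = np.random.default_rng(2).normal(size=(5, D))
log_ratio = (mvn.logpdf(thetas, m, v * np.eye(D))
             - mvn.logpdf(thetas, m_n, v_n * np.eye(D))
             - mvn.logpdf(thetas, m_cav, v_cav * np.eye(D)))
print(log_ratio)                                  # all entries equal (a constant)
```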
{
"chapter": 10,
"question_number": "10.39",
"difficulty": "hard",
"question_text": "\\star \\star)$ Show that the mean and variance of $q^{\\text{new}}(\\theta)$ for EP applied to the clutter problem are given by (10.217: $\\mathbf{m} = \\mathbf{m}^{n} + \\rho_n \\frac{v^{n}}{v^{n} + 1} (\\mathbf{x}_n - \\mathbf{m}^{n})$) and (10.218: $v = v^{n} - \\rho_n \\frac{(v^{n})^2}{v^{n} + 1} + \\rho_n (1 - \\rho_n) \\frac{(v^{n})^2 ||\\mathbf{x}_n - \\mathbf{m}^{n}||^2}{D(v^{n} + 1)^2}$). To do this, first prove the following results for the expectations of $\\theta$ and $\\theta\\theta^{\\mathrm{T}}$ under $q^{\\mathrm{new}}(\\theta)$\n\n$$\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$$\n (10.244: $\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$)\n\n$$\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$$\n\n$$\\mathbb{E}[\\boldsymbol{\\theta}^{T} \\boldsymbol{\\theta}] = 2(v^{n})^{2} \\nabla_{v^{n}} \\ln Z_{n} + 2\\mathbb{E}[\\boldsymbol{\\theta}]^{T} \\mathbf{m}^{n} - \\|\\mathbf{m}^{n}\\|^{2}$$\n(10.245: $\\mathbb{E}[\\boldsymbol{\\theta}^{T} \\boldsymbol{\\theta}] = 2(v^{n})^{2} \\nabla_{v^{n}} \\ln Z_{n} + 2\\mathbb{E}[\\boldsymbol{\\theta}]^{T} \\mathbf{m}^{n} - \\|\\mathbf{m}^{n}\\|^{2}$)\n\nand then make use of the result (10.216: $Z_n = (1 - w)\\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{n}, (v^{n} + 1)\\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, a\\mathbf{I}).$) for $Z_n$ . Next, prove the results (10.220)– (10.222: $s_n = \\frac{Z_n}{(2\\pi v_n)^{D/2} \\mathcal{N}(\\mathbf{m}_n | \\mathbf{m}^{n}, (v_n + v^{n})\\mathbf{I})}.$) by using (10.207: $\\widetilde{f}_{j}(\\boldsymbol{\\theta}) = Z_{j} \\frac{q^{\\text{new}}(\\boldsymbol{\\theta})}{q^{\\setminus j}(\\boldsymbol{\\theta})}.$) and completing the square in the exponential. Finally, use (10.208: $p(\\mathcal{D}) \\simeq \\int \\prod_{i} \\widetilde{f}_{i}(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}.$) to derive the result (10.223: $p(\\mathcal{D}) \\simeq (2\\pi v^{\\text{new}})^{D/2} \\exp(B/2) \\prod_{n=1}^{N} \\left\\{ s_n (2\\pi v_n)^{-D/2} \\right\\}$).\n\n# Sampling Methods\n\nFor most probabilistic models of practical interest, exact inference is intractable, and so we have to resort to some form of approximation. In Chapter 10, we discussed inference algorithms based on deterministic approximations, which include methods such as variational Bayes and expectation propagation. Here we consider approximate inference methods based on numerical sampling, also known as *Monte Carlo* techniques.\n\nAlthough for some applications the posterior distribution over unobserved variables will be of direct interest in itself, for most situations the posterior distribution is required primarily for the purpose of evaluating expectations, for example in order to make predictions. The fundamental problem that we therefore wish to address in this chapter involves finding the expectation of some function $f(\\mathbf{z})$ with respect to a probability distribution $p(\\mathbf{z})$ . Here, the components of $\\mathbf{z}$ might comprise discrete or continuous variables or some combination of the two. 
Thus in the case of continuous\n\nFigure 11.1 Schematic illustration of a function f(z) whose expectation is to be evaluated with respect to a distribution p(z).\n\n\n\nvariables, we wish to evaluate the expectation\n\n$$\\mathbb{E}[f] = \\int f(\\mathbf{z})p(\\mathbf{z}) \\,\\mathrm{d}\\mathbf{z} \\tag{11.1}$$\n\nwhere the integral is replaced by summation in the case of discrete variables. This is illustrated schematically for a single continuous variable in Figure 11.1. We shall suppose that such expectations are too complex to be evaluated exactly using analytical techniques.\n\nThe general idea behind sampling methods is to obtain a set of samples $\\mathbf{z}^{(l)}$ (where $l=1,\\ldots,L$ ) drawn independently from the distribution $p(\\mathbf{z})$ . This allows the expectation (11.1: $\\mathbb{E}[f] = \\int f(\\mathbf{z})p(\\mathbf{z}) \\,\\mathrm{d}\\mathbf{z}$) to be approximated by a finite sum\n\n$$\\widehat{f} = \\frac{1}{L} \\sum_{l=1}^{L} f(\\mathbf{z}^{(l)}). \\tag{11.2}$$\n\nAs long as the samples $\\mathbf{z}^{(l)}$ are drawn from the distribution $p(\\mathbf{z})$ , then $\\mathbb{E}[\\widehat{f}] = \\mathbb{E}[f]$ and so the estimator $\\widehat{f}$ has the correct mean. The variance of the estimator is given by\n\n$$\\operatorname{var}[\\widehat{f}] = \\frac{1}{L} \\mathbb{E}\\left[ (f - \\mathbb{E}[f])^2 \\right]$$\n(11.3)\n\nis the variance of the function $f(\\mathbf{z})$ under the distribution $p(\\mathbf{z})$ . It is worth emphasizing that the accuracy of the estimator therefore does not depend on the dimensionality of $\\mathbf{z}$ , and that, in principle, high accuracy may be achievable with a relatively small number of samples $\\mathbf{z}^{(l)}$ . In practice, ten or twenty independent samples may suffice to estimate an expectation to sufficient accuracy.\n\nThe problem, however, is that the samples $\\{\\mathbf{z}^{(l)}\\}$ might not be independent, and so the effective sample size might be much smaller than the apparent sample size. Also, referring back to Figure 11.1, we note that if $f(\\mathbf{z})$ is small in regions where $p(\\mathbf{z})$ is large, and vice versa, then the expectation may be dominated by regions of small probability, implying that relatively large sample sizes will be required to achieve sufficient accuracy.\n\nFor many models, the joint distribution $p(\\mathbf{z})$ is conveniently specified in terms of a graphical model. In the case of a directed graph with no observed variables, it is\n\n### Exercise 11.1\n\nstraightforward to sample from the joint distribution (assuming that it is possible to sample from the conditional distributions at each node) using the following *ancestral sampling* approach, discussed briefly in Section 8.1.2. The joint distribution is specified by\n\n$$p(\\mathbf{z}) = \\prod_{i=1}^{M} p(\\mathbf{z}_i | \\mathbf{pa}_i)$$\n (11.4)",
"answer": "This problem is really complicated, but hint has already been given in Eq (10.244: $\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$) and (10.255). Notice that in Eq (10.244: $\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$), we have a quite complicated term $\\nabla_{\\mathbf{m}^{/n}} \\ln Z_n$ , which we know that $\\nabla_{\\mathbf{m}^{/n}} \\ln Z_n = (\\nabla_{\\mathbf{m}^{/n}} Z_n)/Z_n$ based on the Chain Rule, and since we know the exact form of $Z_n$ which has been derived in the previous problem, we guess that we can start from dealing with $\\nabla_{\\mathbf{m}^{/n}} \\ln Z_n$ to obtain Eq (10.244: $\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$). Before starting, we write down a basic formula here: for a Gaussian random variable $\\mathbf{x} \\sim \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ , we have:\n\n$$\\nabla_{\\boldsymbol{\\mu}} \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma}) \\cdot (\\mathbf{x} - \\boldsymbol{\\mu}) \\boldsymbol{\\Sigma}^{-1}$$\n\nNow we can obtain:\n\n$$\\nabla_{\\mathbf{m}^{/n}} \\ln Z_{n} = \\frac{1}{Z_{n}} \\cdot \\nabla_{\\mathbf{m}^{/n}} Z_{n}$$\n\n$$= \\frac{1}{Z_{n}} \\cdot \\nabla_{\\mathbf{m}^{/n}} \\int q^{/n}(\\boldsymbol{\\theta}) p(\\mathbf{x}_{n} | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{Z_{n}} \\cdot \\int \\left\\{ \\nabla_{\\mathbf{m}^{/n}} q^{/n}(\\boldsymbol{\\theta}) \\right\\} \\cdot p(\\mathbf{x}_{n} | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{Z_{n}} \\cdot \\int \\frac{1}{v^{/n}} (\\boldsymbol{\\theta} - \\mathbf{m}^{/n}) \\cdot q^{/n}(\\boldsymbol{\\theta}) \\cdot p(\\mathbf{x}_{n} | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{Z_{n}} \\cdot \\frac{1}{v^{/n}} \\cdot \\left\\{ \\int \\boldsymbol{\\theta} \\cdot q^{/n}(\\boldsymbol{\\theta}) \\cdot p(\\mathbf{x}_{n} | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta} - \\int \\mathbf{m}^{/n} \\cdot q^{/n}(\\boldsymbol{\\theta}) \\cdot p(\\mathbf{x}_{n} | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta} \\right\\}$$\n\n$$= \\frac{1}{v^{/n}} \\cdot \\left\\{ \\mathbb{E}[\\boldsymbol{\\theta}] - \\mathbf{m}^{/n} \\right\\}$$\n\nHere we have used $q^{/n}(\\boldsymbol{\\theta}) = \\mathcal{N}(\\boldsymbol{\\theta}|\\mathbf{m}^{/n}, v^{/n}\\mathbf{I})$ , and $q^{/n}(\\boldsymbol{\\theta}) \\cdot p(\\mathbf{x}_n|\\boldsymbol{\\theta}) = Z_n \\cdot q^{\\text{new}}(\\boldsymbol{\\theta})$ . Rearranging the equation above, we obtain Eq (10.244: $\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{n} + v^{n} \\nabla_{\\mathbf{m}^{n}} \\ln Z_{n}$). 
Then we use Eq (10.216: $Z_n = (1 - w)\\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{n}, (v^{n} + 1)\\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, a\\mathbf{I}).$), yielding:\n\n$$\\mathbb{E}[\\boldsymbol{\\theta}] = \\mathbf{m}^{/n} + v^{/n} \\cdot \\nabla_{\\mathbf{m}^{/n}} \\ln Z_n$$\n\n$$= \\mathbf{m}^{/n} + v^{/n} \\cdot \\frac{1}{Z_n} (1 - w) \\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{/n}, (v^{/n} + 1)\\mathbf{I}) \\cdot \\frac{1}{v^{/n} + 1} (\\mathbf{x}_n - \\mathbf{m}^{/n})$$\n\n$$= \\mathbf{m}^{/n} + v^{/n} \\cdot \\rho_n \\cdot \\frac{1}{v^{/n} + 1} (\\mathbf{x}_n - \\mathbf{m}^{/n})$$\n\nWhere we have defined:\n\n$$\\rho_n = \\frac{1}{Z_n} (1 - w) \\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{/n}, (v^{/n} + 1)\\mathbf{I})$$\n\n$$= \\frac{1}{Z_n} (1 - w) \\cdot \\frac{Z_n - w \\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, \\alpha \\mathbf{I})}{1 - w}$$\n\n$$= 1 - \\frac{w}{Z_n} \\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, \\alpha \\mathbf{I})$$\n\nTherefore, we have proved that the mean $\\mathbf{m}$ is given by Eq (10.217: $\\mathbf{m} = \\mathbf{m}^{n} + \\rho_n \\frac{v^{n}}{v^{n} + 1} (\\mathbf{x}_n - \\mathbf{m}^{n})$). Next, we prove Eq (10.218: $v = v^{n} - \\rho_n \\frac{(v^{n})^2}{v^{n} + 1} + \\rho_n (1 - \\rho_n) \\frac{(v^{n})^2 ||\\mathbf{x}_n - \\mathbf{m}^{n}||^2}{D(v^{n} + 1)^2}$). Similarly, we can write down:\n\n$$\\nabla_{v^{/n}} \\ln Z_n = \\frac{1}{Z_n} \\cdot \\nabla_{v^{/n}} Z_n$$\n\n$$= \\frac{1}{Z_n} \\cdot \\nabla_{v^{/n}} \\int q^{/n}(\\boldsymbol{\\theta}) p(\\mathbf{x}_n | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{Z_n} \\cdot \\int \\left\\{ \\nabla_{v^{/n}} q^{/n}(\\boldsymbol{\\theta}) \\right\\} p(\\mathbf{x}_n | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{Z_n} \\cdot \\int \\left\\{ \\frac{1}{2(v^{/n})^2} ||\\mathbf{m}^{/n} - \\boldsymbol{\\theta}||^2 - \\frac{D}{2v^{/n}} \\right\\} q^{/n}(\\boldsymbol{\\theta}) \\cdot p(\\mathbf{x}_n | \\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= \\int q^{\\text{new}}(\\boldsymbol{\\theta}) \\cdot \\left\\{ \\frac{1}{2(v^{/n})^2} (\\mathbf{m}^{/n} - \\boldsymbol{\\theta})^T (\\mathbf{m}^{/n} - \\boldsymbol{\\theta}) - \\frac{D}{2v^{/n}} \\right\\} d\\boldsymbol{\\theta}$$\n\n$$= \\frac{1}{2(v^{/n})^2} \\left\\{ \\mathbb{E}[\\boldsymbol{\\theta}^T\\boldsymbol{\\theta}] - 2\\mathbb{E}[\\boldsymbol{\\theta}]^T \\mathbf{m}^{/n} + ||\\mathbf{m}^{/n}||^2 \\right\\} - \\frac{D}{2v^{/n}}$$\n\nRearranging it, we can obtain:\n\n$$\\mathbb{E}[\\boldsymbol{\\theta}^T\\boldsymbol{\\theta}] = 2(v^{/n})^2 \\cdot \\nabla_{v^{/n}} \\ln Z_n + 2\\mathbb{E}[\\boldsymbol{\\theta}]^T \\mathbf{m}^{/n} - ||\\mathbf{m}^{/n}||^2 + D \\cdot v^{/n}$$\n\nThere is a typo in Eq (10.245), and the intrinsic reason is that when calculating $\\nabla_{v^{/n}}q^{/n}(\\boldsymbol{\\theta})$ , there are two terms in $q^{/n}(\\boldsymbol{\\theta})$ dependent on $v^{/n}$ : one is inside the exponential, and the other is in the fraction $\\frac{1}{|v^{/n}\\mathbf{I}|^{1/2}}$ , which is outside the exponential. 
Now, we still use Eq (10.216: $Z_n = (1 - w)\\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{n}, (v^{n} + 1)\\mathbf{I}) + w\\mathcal{N}(\\mathbf{x}_n | \\mathbf{0}, a\\mathbf{I}).$), yielding:\n\n$$\\nabla_{v^{/n}} \\ln Z_n = \\frac{1}{Z_n} (1 - w) \\mathcal{N}(\\mathbf{x}_n | \\mathbf{m}^{/n}, (v^{/n} + 1)\\mathbf{I}) \\cdot \\left[ \\frac{1}{2(v^{/n} + 1)^2} ||\\mathbf{x}_n - \\mathbf{m}^{/n}||^2 - \\frac{D}{2(v^{/n} + 1)} \\right]$$\n\n$$= \\rho_n \\cdot \\left[ \\frac{1}{2(v^{/n} + 1)^2} ||\\mathbf{x}_n - \\mathbf{m}^{/n}||^2 - \\frac{D}{2(v^{/n} + 1)} \\right]$$\n\nFinally, using the definition of variance, we obtain:\n\n$$v\\mathbf{I} = \\mathbb{E}[\\boldsymbol{\\theta}\\boldsymbol{\\theta}^T] - \\mathbb{E}[\\boldsymbol{\\theta}]\\mathbb{E}[\\boldsymbol{\\theta}^T]$$\n\nTherefore, taking the trace, we obtain:\n\n$$\\begin{split} v &= \\frac{1}{D} \\cdot \\left\\{ \\mathbb{E}[\\boldsymbol{\\theta}^T \\boldsymbol{\\theta}] - \\mathbb{E}[\\boldsymbol{\\theta}^T] \\mathbb{E}[\\boldsymbol{\\theta}] \\right\\} = \\frac{1}{D} \\cdot \\left\\{ \\mathbb{E}[\\boldsymbol{\\theta}^T \\boldsymbol{\\theta}] - ||\\mathbb{E}[\\boldsymbol{\\theta}]||^2 \\right\\} \\\\ &= \\frac{1}{D} \\cdot \\left\\{ 2(v^{/n})^2 \\cdot \\nabla_{v^{/n}} \\ln Z_n + 2\\mathbb{E}[\\boldsymbol{\\theta}]^T \\mathbf{m}^{/n} - ||\\mathbf{m}^{/n}||^2 + D \\cdot v^{/n} - ||\\mathbb{E}[\\boldsymbol{\\theta}]||^2 \\right\\} \\\\ &= \\frac{1}{D} \\cdot \\left\\{ 2(v^{/n})^2 \\cdot \\nabla_{v^{/n}} \\ln Z_n - ||\\mathbb{E}[\\boldsymbol{\\theta}] - \\mathbf{m}^{/n}||^2 + D \\cdot v^{/n} \\right\\} \\\\ &= \\frac{1}{D} \\cdot \\left\\{ 2(v^{/n})^2 \\cdot \\nabla_{v^{/n}} \\ln Z_n - ||v^{/n} \\cdot \\rho_n \\cdot \\frac{1}{v^{/n} + 1} (\\mathbf{x}_n - \\mathbf{m}^{/n})||^2 + D \\cdot v^{/n} \\right\\} \\end{split}$$\n\nIf we substitute $\\nabla_{v^{/n}} \\ln Z_n$ into the expression above, we will just obtain Eq (10.218) as required.",
"answer_length": 7260
},
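The mean update (10.217) can also be checked by importance sampling: draw $\boldsymbol{\theta}$ from the cavity distribution, weight by the clutter likelihood (10.209), and compare the weighted mean with the closed form. The sketch below is illustrative only; $w$, $a$, $\mathbf{x}_n$ and the cavity parameters are made-up values.

```python
# Illustrative importance-sampling check of the EP mean update for the clutter problem.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(3)
D, w, a = 2, 0.3, 10.0
x_n = np.array([1.5, -0.5])
m_cav, v_cav = np.array([0.2, 0.1]), 1.4          # cavity distribution parameters

theta = rng.multivariate_normal(m_cav, v_cav * np.eye(D), size=400_000)
sq = np.sum((x_n - theta) ** 2, axis=1)
lik = ((1 - w) * (2 * np.pi) ** (-D / 2) * np.exp(-0.5 * sq)    # (1-w) N(x_n | theta, I)
       + w * mvn.pdf(x_n, np.zeros(D), a * np.eye(D)))          # + w N(x_n | 0, a I)

Z_mc = lik.mean()
mean_mc = (lik[:, None] * theta).mean(axis=0) / Z_mc            # E[theta] under q_new

Z_n = ((1 - w) * mvn.pdf(x_n, m_cav, (v_cav + 1) * np.eye(D))
       + w * mvn.pdf(x_n, np.zeros(D), a * np.eye(D)))          # Eq (10.216)
rho = 1 - w * mvn.pdf(x_n, np.zeros(D), a * np.eye(D)) / Z_n
mean_closed = m_cav + rho * v_cav / (v_cav + 1) * (x_n - m_cav) # Eq (10.217)
print(mean_mc, mean_closed)                       # agree to within Monte Carlo error
```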
{
"chapter": 10,
"question_number": "10.4",
"difficulty": "medium",
"question_text": "Suppose that $p(\\mathbf{x})$ is some fixed distribution and that we wish to approximate it using a Gaussian distribution $q(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Sigma})$ . By writing down the form of the KL divergence $\\mathrm{KL}(p\\|q)$ for a Gaussian $q(\\mathbf{x})$ and then differentiating, show that\n\n- minimization of $\\mathrm{KL}(p||q)$ with respect to $\\mu$ and $\\Sigma$ leads to the result that $\\mu$ is given by the expectation of $\\mathbf{x}$ under $p(\\mathbf{x})$ and that $\\Sigma$ is given by the covariance.",
"answer": "We begin by writing down the KL divergence.\n\n$$\\begin{aligned} \\mathrm{KL}(p||q) &= -\\int p(\\mathbf{x}) \\ln \\left\\{ \\frac{q(\\mathbf{x})}{p(\\mathbf{x})} \\right\\} d\\mathbf{x} \\\\ &= -\\int p(\\mathbf{x}) \\ln q(\\mathbf{x}) d\\mathbf{x} + \\mathrm{const} \\\\ &= -\\int p(\\mathbf{x}) \\left[ -\\frac{D}{2} \\ln 2\\pi - \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| - \\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] d\\mathbf{x} + \\mathrm{const} \\\\ &= \\int p(\\mathbf{x}) \\left[ \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] d\\mathbf{x} + \\mathrm{const} \\\\ &= \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\int p(\\mathbf{x}) \\left[ \\frac{1}{2} (\\mathbf{x} - \\boldsymbol{\\mu})^T \\mathbf{\\Sigma}^{-1} (\\mathbf{x} - \\boldsymbol{\\mu}) \\right] d\\mathbf{x} + \\mathrm{const} \\\\ &= \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\int p(\\mathbf{x}) \\frac{1}{2} \\left[ \\mathbf{x}^T \\mathbf{\\Sigma}^{-1} \\mathbf{x} - 2\\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\mathbf{x} + \\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\boldsymbol{\\mu} \\right] d\\mathbf{x} + \\mathrm{const} \\\\ &= \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\frac{1}{2} \\int p(\\mathbf{x}) \\mathrm{Tr}[\\mathbf{\\Sigma}^{-1} (\\mathbf{x} \\mathbf{x}^T)] d\\mathbf{x} - \\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\mathbb{E}[\\mathbf{x}] + \\frac{1}{2} \\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\boldsymbol{\\mu} + \\mathrm{const} \\\\ &= \\frac{1}{2} \\ln |\\mathbf{\\Sigma}| + \\frac{1}{2} \\mathrm{Tr}[\\mathbf{\\Sigma}^{-1} \\mathbb{E}(\\mathbf{x} \\mathbf{x}^T)] - \\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\mathbb{E}[\\mathbf{x}] + \\frac{1}{2} \\boldsymbol{\\mu}^T \\mathbf{\\Sigma}^{-1} \\boldsymbol{\\mu} + \\mathrm{const} \\end{aligned}$$\n\nHere D is the dimension of $\\mathbf{x}$ . We first calculate the derivative of $\\mathrm{KL}(p||q)$ with respect to $\\boldsymbol{\\mu}$ and set it to 0:\n\n$$\\frac{\\partial \\mathrm{KL}}{\\partial \\boldsymbol{\\mu}} = -\\boldsymbol{\\Sigma}^{-1} \\mathbb{E}[x] + \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu} = 0$$\n\nTherefore, we can obtain $\\mu = \\mathbb{E}[\\mathbf{x}]$ . When $\\mu = \\mathbb{E}[\\mathbf{x}]$ is satisfied, KL divergence reduces to:\n\n$$\\mathrm{KL}(p||q) = \\frac{1}{2}\\ln|\\mathbf{\\Sigma}| + \\frac{1}{2}\\mathrm{Tr}[\\mathbf{\\Sigma}^{-1}\\mathbb{E}(\\mathbf{x}\\mathbf{x}^T)] - \\frac{1}{2}\\boldsymbol{\\mu}^T\\mathbf{\\Sigma}^{-1}\\boldsymbol{\\mu} + \\mathrm{const}$$\n\nThen we calculate the derivative of $\\mathrm{KL}(p||q)$ with respect to $\\Sigma$ and set it to 0:\n\n$$\\frac{\\partial \\mathrm{KL}}{\\partial \\boldsymbol{\\Sigma}} = \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} - \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\mathbb{E}[\\mathbf{x} \\mathbf{x}^T] \\boldsymbol{\\Sigma}^{-1} + \\frac{1}{2} \\boldsymbol{\\Sigma}^{-1} \\boldsymbol{\\mu} \\boldsymbol{\\mu}^T \\boldsymbol{\\Sigma}^{-1} = 0$$\n\nNote that here we have used and Eq (61) and Eq (124) in 'MatrixCook-Book', and that $\\Sigma$ , $\\mathbb{E}[\\mathbf{x}\\mathbf{x}^T]$ are both symmetric. 
We rewrite those equations here for your reference:\n\n$$\\frac{\\partial \\mathbf{a}^T \\mathbf{X}^{-1} \\mathbf{b}}{\\partial \\mathbf{X}} = -\\mathbf{X}^{-T} \\mathbf{a} \\mathbf{b}^T \\mathbf{X}^{-T} \\quad \\text{and} \\quad \\frac{\\partial \\text{Tr}(\\mathbf{A} \\mathbf{X}^{-1} \\mathbf{B})}{\\partial \\mathbf{X}} = -\\mathbf{X}^{-T} \\mathbf{A}^T \\mathbf{B}^T \\mathbf{X}^{-T}$$\n\nRearranging the derivative, we can obtain:\n\n$$\\mathbf{\\Sigma} = \\mathbb{E}[\\mathbf{x}\\mathbf{x}^T] - \\boldsymbol{\\mu}\\boldsymbol{\\mu}^T = \\mathbb{E}[\\mathbf{x}\\mathbf{x}^T] - \\mathbb{E}[\\mathbf{x}]\\mathbb{E}[\\mathbf{x}]^T = \\text{cov}[\\mathbf{x}]$$",
"answer_length": 3594
},
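The moment-matching result can be illustrated empirically: for samples drawn from an arbitrary $p(\mathbf{x})$, the cross-entropy $-\mathbb{E}_p[\ln q(\mathbf{x})]$ (the KL divergence up to a constant) is smallest when the Gaussian $q$ uses the sample mean and covariance. The two-component mixture below is an arbitrary choice, not from the text.

```python
# Illustrative check: perturbing mu or Sigma away from the sample moments
# increases -E_p[ln q(x)], i.e. KL(p||q) up to a constant.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal([0, 0], 1.0, size=(50_000, 2)),
                    rng.normal([3, 1], 0.5, size=(50_000, 2))])   # samples from p(x)
mu, Sigma = x.mean(axis=0), np.cov(x, rowvar=False)

def cross_entropy(m, S):
    return -mvn.logpdf(x, m, S).mean()

print(cross_entropy(mu, Sigma))          # (approximately) the minimum
print(cross_entropy(mu + 0.2, Sigma))    # larger
print(cross_entropy(mu, 1.3 * Sigma))    # larger
```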
{
"chapter": 10,
"question_number": "10.5",
"difficulty": "medium",
"question_text": "- 10.5 (\\*\\*) www Consider a model in which the set of all hidden stochastic variables, denoted collectively by $\\mathbf{Z}$ , comprises some latent variables $\\mathbf{z}$ together with some model parameters $\\boldsymbol{\\theta}$ . Suppose we use a variational distribution that factorizes between latent variables and parameters so that $q(\\mathbf{z}, \\boldsymbol{\\theta}) = q_{\\mathbf{z}}(\\mathbf{z})q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta})$ , in which the distribution $q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta})$ is approximated by a point estimate of the form $q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta}) = \\delta(\\boldsymbol{\\theta} \\boldsymbol{\\theta}_0)$ where $\\boldsymbol{\\theta}_0$ is a vector of free parameters. Show that variational optimization of this factorized distribution is equivalent to an EM algorithm, in which the E step optimizes $q_{\\mathbf{z}}(\\mathbf{z})$ , and the M step maximizes the expected complete-data log posterior distribution of $\\boldsymbol{\\theta}$ with respect to $\\boldsymbol{\\theta}_0$ .",
"answer": "We introduce a property of Dirac function:\n\n$$\\int \\delta(\\boldsymbol{\\theta} - \\boldsymbol{\\theta}_0) f(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta} = f(\\boldsymbol{\\theta}_0)$$\n\nWe first calculate the optimal $q(\\mathbf{z}, \\boldsymbol{\\theta})$ by fixing $q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta})$ . This is achieved by minimizing the KL divergence given in Eq (10.4):\n\n$$KL(q||p) = -\\int \\int q(\\mathbf{Z}) \\ln \\left\\{ \\frac{p(\\mathbf{Z}|\\mathbf{X})}{q(\\mathbf{Z})} \\right\\} d\\mathbf{Z}$$\n\n$$= -\\int \\int q_{\\mathbf{z}}(\\mathbf{z}) q_{\\theta}(\\boldsymbol{\\theta}) \\ln \\left\\{ \\frac{p(\\mathbf{z}, \\boldsymbol{\\theta}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z}) q_{\\theta}(\\boldsymbol{\\theta})} \\right\\} d\\mathbf{z} d\\boldsymbol{\\theta}$$\n\n$$= -\\int \\int q_{\\mathbf{z}}(\\mathbf{z}) q_{\\theta}(\\boldsymbol{\\theta}) \\ln \\left\\{ \\frac{p(\\mathbf{z}, \\boldsymbol{\\theta}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} d\\boldsymbol{\\theta} + \\int q_{\\theta}(\\boldsymbol{\\theta}) \\ln q_{\\theta}(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\n$$= -\\int \\int q_{\\mathbf{z}}(\\mathbf{z}) q_{\\theta}(\\boldsymbol{\\theta}) \\ln \\left\\{ \\frac{p(\\mathbf{z}, \\boldsymbol{\\theta}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} d\\boldsymbol{\\theta} + \\text{const}$$\n\n$$= -\\int q_{\\theta}(\\boldsymbol{\\theta}) \\left\\{ \\int q_{\\mathbf{z}}(\\mathbf{z}) \\ln \\left\\{ \\frac{p(\\mathbf{z}, \\boldsymbol{\\theta}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} \\right\\} d\\boldsymbol{\\theta} + \\text{const}$$\n\n$$= -\\int q_{\\mathbf{z}}(\\mathbf{z}) \\ln \\left\\{ \\frac{p(\\mathbf{z}|\\boldsymbol{\\theta}_{0}, \\mathbf{X}) p(\\boldsymbol{\\theta}_{0}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} + \\text{const}$$\n\n$$= -\\int q_{\\mathbf{z}}(\\mathbf{z}) \\ln \\left\\{ \\frac{p(\\mathbf{z}|\\boldsymbol{\\theta}_{0}, \\mathbf{X}) p(\\boldsymbol{\\theta}_{0}|\\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} + \\text{const}$$\n\n$$= -\\int q_{\\mathbf{z}}(\\mathbf{z}) \\ln \\left\\{ \\frac{p(\\mathbf{z}|\\boldsymbol{\\theta}_{0}, \\mathbf{X})}{q_{\\mathbf{z}}(\\mathbf{z})} \\right\\} d\\mathbf{z} + \\text{const}$$\n\nHere the 'Const' denotes the terms independent of $q_{\\mathbf{z}}(\\mathbf{z})$ . Note that we will show at the end of this problem, here 'Const' actually is $-\\infty$ due to the existence of the entropy of Dirac function:\n\n$$\\int q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta}) \\ln q_{\\boldsymbol{\\theta}}(\\boldsymbol{\\theta}) d\\boldsymbol{\\theta}$$\n\nNow it is clear that when $q_{\\mathbf{z}}(\\mathbf{z})$ equals $p(\\mathbf{z}|\\boldsymbol{\\theta}_0, \\mathbf{X})$ , the KL divergence is minimized. This corresponds to the E-step. 
Next, we calculate the optimal $q_{\boldsymbol{\theta}}(\boldsymbol{\theta})$ , i.e., $\boldsymbol{\theta}_0$ , by maximizing L(q) given in Eq (10.3: $\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z}$), but fixing $q_{\mathbf{z}}(\mathbf{z})$ :\n\n$$\begin{split} L(q) &= \int q(\mathbf{Z}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{Z})}{q(\mathbf{Z})} \right\} d\mathbf{Z} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})}{q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta})} \right\} d\mathbf{z} d\boldsymbol{\theta} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ \frac{p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})}{q_{\mathbf{z}}(\mathbf{z})} \right\} d\mathbf{z} d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} \\ &= \int \int q_{\mathbf{z}}(\mathbf{z}) q_{\theta}(\boldsymbol{\theta}) \ln \left\{ p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta}) \right\} d\mathbf{z} d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \\ &= \int q_{\theta}(\boldsymbol{\theta}) \mathbb{E}_{q_{\mathbf{z}}} [\ln p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta})] d\boldsymbol{\theta} - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \\ &= \mathbb{E}_{q_{\mathbf{z}}(\mathbf{z})} [\ln p(\mathbf{X}, \mathbf{z}, \boldsymbol{\theta}_0)] - \int q_{\theta}(\boldsymbol{\theta}) \ln q_{\theta}(\boldsymbol{\theta}) d\boldsymbol{\theta} + \text{const} \end{split}$$\n\nThe second term is actually the entropy of a Dirac delta function, which is $-\infty$ and independent of the value of $\theta_0$ . Loosely speaking, we therefore only need to maximize the first term. This is exactly the M-step.\n\nOne important thing needs to be clarified here. You may object that, no matter how we set $\theta_0$ , L(q) will always be $-\infty$ . This is an intrinsic problem as long as we use a point estimate for $q_{\theta}(\theta)$ , and it already occurs when we derive the optimal $q_{\mathbf{z}}(\mathbf{z})$ by minimizing the KL divergence in the first step. Therefore, 'maximizing' and 'minimizing' are to be understood loosely in this problem, in the sense that we neglect the $-\infty$ term.",
"answer_length": 5129
},
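The equivalence in this exercise can be made concrete with a small sketch (an illustrative, assumption-laden example, not from the original text): for a 1-D two-component Gaussian mixture with unit variances and a flat prior over theta = (pi, mu1, mu2), the E step computes q_z(z) = p(z | theta_0, X) (the responsibilities) and the M step maximizes the expected complete-data log posterior with respect to theta_0, which here reduces to the familiar EM updates.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# theta_0 = (pi, mu1, mu2); unit variances and a flat prior are assumed,
# so the expected complete-data log posterior coincides with the expected
# complete-data log likelihood.
pi, mu1, mu2 = 0.5, -1.0, 1.0
for _ in range(50):
    # E step: q_z(z) = p(z | theta_0, X), i.e. the responsibilities.
    a = pi * np.exp(-0.5 * (x - mu1) ** 2)
    b = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    r = a / (a + b)
    # M step: maximize E_{q_z}[ln p(X, z, theta)] with respect to theta_0.
    pi = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()

print(pi, mu1, mu2)   # roughly 0.3, -2 and 3 (up to relabelling)
```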
{
"chapter": 10,
"question_number": "10.6",
"difficulty": "medium",
"question_text": "The alpha family of divergences is defined by (10.19: $D_{\\alpha}(p||q) = \\frac{4}{1 - \\alpha^2} \\left( 1 - \\int p(x)^{(1+\\alpha)/2} q(x)^{(1-\\alpha)/2} dx \\right)$). Show that the Kullback-Leibler divergence $\\mathrm{KL}(p\\|q)$ corresponds to $\\alpha \\to 1$ . This can be done by writing $p^{\\epsilon} = \\exp(\\epsilon \\ln p) = 1 + \\epsilon \\ln p + O(\\epsilon^2)$ and then taking $\\epsilon \\to 0$ . Similarly show that $\\mathrm{KL}(q\\|p)$ corresponds to $\\alpha \\to -1$ .",
"answer": "Let's use the hint by first enforcing $\\alpha \\to 1$ .\n\n$$\\begin{split} D_{\\alpha}(p||q) &= \\frac{4}{1-\\alpha^2} \\Big\\{ 1 - \\int p^{(1+\\alpha)/2} q^{(1-\\alpha)/2} \\, dx \\Big\\} \\\\ &= \\frac{4}{1-\\alpha^2} \\Big\\{ 1 - \\int \\frac{p}{p^{(1-\\alpha)/2}} \\left[ 1 + \\frac{1-\\alpha}{2} \\ln q + O(\\frac{1-\\alpha}{2})^2 \\right] dx \\Big\\} \\\\ &= \\frac{4}{1-\\alpha^2} \\Big\\{ 1 - \\int p \\cdot \\frac{1 + \\frac{1-\\alpha}{2} \\ln q + O(\\frac{1-\\alpha}{2})^2}{1 + \\frac{1-\\alpha}{2} \\ln p + O(\\frac{1-\\alpha}{2})^2} \\, dx \\Big\\} \\\\ &\\approx \\frac{4}{1-\\alpha^2} \\Big\\{ 1 - \\int p \\cdot \\frac{1 + \\frac{1-\\alpha}{2} \\ln q}{1 + \\frac{1-\\alpha}{2} \\ln p} \\, dx \\Big\\} \\\\ &= \\frac{4}{1-\\alpha^2} \\Big\\{ - \\int p \\cdot \\left[ \\frac{1 + \\frac{1-\\alpha}{2} \\ln q}{1 + \\frac{1-\\alpha}{2} \\ln p} - 1 \\right] \\, dx \\Big\\} \\\\ &= \\frac{4}{1-\\alpha^2} \\Big\\{ - \\int p \\cdot \\frac{\\frac{1-\\alpha}{2} \\ln q - \\frac{1-\\alpha}{2} \\ln p}{1 + \\frac{1-\\alpha}{2} \\ln p} \\, dx \\Big\\} \\\\ &= \\frac{2}{1+\\alpha} \\Big\\{ - \\int p \\cdot \\frac{\\ln q - \\ln p}{1 + \\frac{1-\\alpha}{2} \\ln p} \\, dx \\Big\\} \\\\ &\\approx - \\int p \\cdot (\\ln q - \\ln p) \\, dx = - \\int p \\cdot \\ln \\frac{q}{p} \\, dx \\end{split}$$\n\nHere p and q is short for p(x) and q(x). It is similar when $\\alpha \\to -1$ . One important thing worthy mentioning is that if we directly approximate $p^{(1+\\alpha)/2}$ by p instead of $p/p^{(1-\\alpha)/2}$ in the first step, we won't get the desired result.\n\n# **Problem 10.7 Solution**\n\nLet's begin from Eq (10.25: $= -\\frac{\\mathbb{E}[\\tau]}{2} \\left\\{ \\lambda_0 (\\mu - \\mu_0)^2 + \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right\\} + \\text{const.} \\quad$).\n\n$$\\begin{split} \\ln q_{\\mu}^{\\star}(\\mu) &= -\\frac{\\mathbb{E}[\\tau]}{2} \\Big\\{ \\lambda_{0}(\\mu - \\mu_{0})^{2} + \\sum_{n=1}^{N} (x_{n} - \\mu)^{2} \\Big\\} + \\text{const} \\\\ &= -\\frac{\\mathbb{E}[\\tau]}{2} \\Big\\{ \\lambda_{0} \\mu^{2} - 2\\lambda_{0} \\mu_{0} \\mu + \\lambda_{0} \\mu_{0}^{2} + N \\mu^{2} - 2(\\sum_{n=1}^{N} x_{n}) \\mu + \\sum_{n=1}^{N} x_{n}^{2} \\Big\\} + \\text{const} \\\\ &= -\\frac{\\mathbb{E}[\\tau]}{2} \\Big\\{ (\\lambda_{0} + N) \\mu^{2} - 2(\\lambda_{0} \\mu_{0} + \\sum_{n=1}^{N} x_{n}) \\mu + (\\lambda_{0} \\mu_{0}^{2} + \\sum_{n=1}^{N} x_{n}^{2}) \\Big\\} + \\text{const} \\\\ &= -\\frac{\\mathbb{E}[\\tau](\\lambda_{0} + N)}{2} \\Big\\{ \\mu^{2} - 2\\frac{\\lambda_{0} \\mu_{0} + \\sum_{n=1}^{N} x_{n}}{\\lambda_{0} + N} \\mu + \\frac{\\lambda_{0} \\mu_{0}^{2} + \\sum_{n=1}^{N} x_{n}^{2}}{\\lambda_{0} + N} \\Big\\} + \\text{const} \\end{split}$$\n\nFrom this expression, we see that $q_{\\mu}^{\\star}(\\mu)$ should be a Gaussian. 
Suppose that it has the form $q_{\mu}^{\star}(\mu) \sim \mathcal{N}(\mu|\mu_N, \lambda_N^{-1})$ ; then its logarithm can be written as:\n\n$$\ln q_{\mu}^{\star}(\mu) = \frac{1}{2} \ln \frac{\lambda_N}{2\pi} - \frac{\lambda_N}{2} (\mu - \mu_N)^2$$\n\nWe match the terms related to $\mu$ (the quadratic term and linear term), yielding:\n\n$$\lambda_N = \mathbb{E}[\tau] \cdot (\lambda_0 + N)$$\n, and $\lambda_N \mu_N = \mathbb{E}[\tau] \cdot (\lambda_0 + N) \cdot \frac{\lambda_0 \mu_0 + \sum_{n=1}^N x_n}{\lambda_0 + N}$ \n\nTherefore, we obtain:\n\n$$\mu_N = \frac{\lambda_0 \, \mu_0 + N \bar{x}}{\lambda_0 + N}$$\n\nWhere $\bar{x}$ is the mean of $x_n$ , i.e.,\n\n$$\bar{x} = \frac{1}{N} \sum_{n=1}^{N} x_n$$\n\nThen we deal with the other factor $q_{\tau}(\tau)$ . Note that there is a typo in Eq (10.28: $-\frac{\tau}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_{n} - \mu)^{2} + \lambda_{0}(\mu - \mu_{0})^{2} \right] + \text{const} \quad$): the coefficient ahead of $\ln \tau$ should be $\frac{N+1}{2}$ . Let's verify this by considering the terms involving $\ln \tau$ . The first term inside the expectation, i.e., $\ln p(D|\mu,\tau)$ , gives $\frac{N}{2} \ln \tau$ , and the second term inside the expectation, i.e., $\ln p(\mu|\tau)$ , gives $\frac{1}{2} \ln \tau$ . Finally, the last term $\ln p(\tau)$ gives $(a_0-1)\ln \tau$ . Therefore, Eq (10.29: $a_N = a_0 + \frac{N}{2}$), Eq (10.31: $\frac{1}{\mathbb{E}[\tau]} = \mathbb{E}\left[\frac{1}{N} \sum_{n=1}^{N} (x_n - \mu)^2\right] = \overline{x^2} - 2\overline{x}\mathbb{E}[\mu] + \mathbb{E}[\mu^2].$) and Eq (10.33: $= \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \overline{x})^2.$) will also change accordingly. The correct forms of these equations will be given in this and the following problems.\n\nNow suppose that $q_{\tau}(\tau)$ is a Gamma distribution, i.e., $q_{\tau}(\tau) \sim \text{Gam}(\tau|a_N,b_N)$ . Then we have:\n\n$$\ln q_{\tau}(\tau) = -\ln \Gamma(a_N) + a_N \ln b_N + (a_N - 1) \ln \tau - b_N \tau$$\n\nComparing it with Eq (10.28: $-\frac{\tau}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_{n} - \mu)^{2} + \lambda_{0}(\mu - \mu_{0})^{2} \right] + \text{const} \quad$) and matching the coefficients ahead of $\tau$ and $\ln \tau$ , we can obtain:\n\n$$a_N - 1 = a_0 - 1 + \frac{N+1}{2}$$\n $\Rightarrow$ $a_N = a_0 + \frac{N+1}{2}$ \n\nAnd similarly\n\n$$b_N = b_0 + \frac{1}{2} \mathbb{E}_{\mu} \left[ \sum_{n=1}^{N} (x_n - \mu)^2 + \lambda_0 (\mu - \mu_0)^2 \right]$$\n\nJust as required.",
"answer_length": 4965
},
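A minimal sketch of the coordinate-ascent updates derived in this solution (using the corrected a_N = a_0 + (N+1)/2); the synthetic data, hyperparameter values, and number of iterations are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, true_mu, true_tau = 500, 1.0, 4.0
x = rng.normal(true_mu, 1.0 / np.sqrt(true_tau), size=N)
xbar = x.mean()

mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3   # broad priors (assumed)

E_tau = 1.0                                  # initial guess for E[tau]
for _ in range(20):
    # q_mu(mu) = N(mu | mu_N, lambda_N^{-1})
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    E_mu, E_mu2 = mu_N, 1.0 / lam_N + mu_N ** 2
    # q_tau(tau) = Gam(tau | a_N, b_N) with a_N = a_0 + (N + 1)/2
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (np.sum(x ** 2 - 2 * x * E_mu + E_mu2)
                      + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0 ** 2))
    E_tau = a_N / b_N

print(mu_N, E_tau)            # close to the true mean and precision
print(1.0 / E_tau, x.var())   # 1/E[tau] is close to the ML variance
```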
{
"chapter": 10,
"question_number": "10.8",
"difficulty": "easy",
"question_text": "Consider the variational posterior distribution for the precision of a univariate Gaussian whose parameters are given by (10.29: $a_N = a_0 + \\frac{N}{2}$) and (10.30: $b_N = b_0 + \\frac{1}{2} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 + \\lambda_0 (\\mu - \\mu_0)^2 \\right].$). By using the standard results for the mean and variance of the gamma distribution given by (B.27) and (B.28), show that if we let $N \\to \\infty$ , this variational posterior distribution has a mean given by the inverse of the maximum likelihood estimator for the variance of the data, and a variance that goes to zero.",
"answer": "According to Eq (B.27), we have:\n\n$$\\mathbb{E}[\\tau] = \\frac{a_0 + (N+1)/2}{b_0 + \\frac{1}{2} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 + \\lambda_0 (\\mu - \\mu_0)^2 \\right]}$$\n\n$$\\approx \\frac{N/2}{\\frac{1}{2} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]}$$\n\n$$= \\frac{N}{\\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]}$$\n\n$$= \\left\\{ \\frac{1}{N} \\cdot \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right] \\right\\}^{-1}$$\n\nAccording to Eq (B.28), we have:\n\n$$\\begin{aligned} \\text{var}[\\tau] &= \\frac{a_0 + (N+1)/2}{\\left(b_0 + \\frac{1}{2} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 + \\lambda_0 (\\mu - \\mu_0)^2 \\right] \\right)^2} \\\\ &\\approx \\frac{N/2}{\\frac{1}{4} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]^2} \\approx 0 \\end{aligned}$$\n\nJust as required.",
"answer_length": 831
},
{
"chapter": 10,
"question_number": "10.9",
"difficulty": "medium",
"question_text": "By making use of the standard result $\\mathbb{E}[\\tau] = a_N/b_N$ for the mean of a gamma distribution, together with (10.26: $\\mu_N = \\frac{\\lambda_0 \\mu_0 + N\\overline{x}}{\\lambda_0 + N}$), (10.27: $\\lambda_N = (\\lambda_0 + N) \\mathbb{E}[\\tau].$), (10.29: $a_N = a_0 + \\frac{N}{2}$), and (10.30: $b_N = b_0 + \\frac{1}{2} \\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 + \\lambda_0 (\\mu - \\mu_0)^2 \\right].$), derive the result (10.33: $= \\frac{1}{N-1} \\sum_{n=1}^{N} (x_n - \\overline{x})^2.$) for the reciprocal of the expected precision in the factorized variational treatment of a univariate Gaussian.",
"answer": "The underlying assumption of this problem is $a_0 = b_0 = \\lambda_0 = 0$ . According to Eq (10.26: $\\mu_N = \\frac{\\lambda_0 \\mu_0 + N\\overline{x}}{\\lambda_0 + N}$), Eq (10.27: $\\lambda_N = (\\lambda_0 + N) \\mathbb{E}[\\tau].$) and the definition of variance, we can obtain:\n\n$$\\begin{split} \\mathbb{E}[\\mu^2] &= \\lambda_N^{-1} + \\mathbb{E}[\\mu]^2 = \\frac{1}{(\\lambda_0 + N)\\mathbb{E}[\\tau]} + (\\frac{\\lambda_0 \\mu_0 + N\\overline{x}}{\\lambda_0 + N})^2 \\\\ &= \\frac{1}{N\\mathbb{E}[\\tau]} + \\overline{x}^2 \\end{split}$$\n\nNote that since there is a typo in Eq (10.29: $a_N = a_0 + \\frac{N}{2}$) as stated in the previous problem, i.e., missing a term $\\frac{1}{2}$ . $\\mathbb{E}[\\tau]^{-1}$ actually equals:\n\n$$\\frac{1}{\\mathbb{E}[\\tau]} = \\frac{b_N}{a_N} = \\frac{b_0 + \\frac{1}{2}\\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 + \\lambda_0 (\\mu - \\mu_0)^2 \\right]}{a_0 + (N+1)/2}$$\n\n$$= \\frac{\\frac{1}{2}\\mathbb{E}_{\\mu} \\left[ \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]}{(N+1)/2}$$\n\n$$= \\mathbb{E}_{\\mu} \\left[ \\frac{1}{N+1} \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]$$\n\n$$= \\frac{N}{N+1} \\mathbb{E}_{\\mu} \\left[ \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\mu)^2 \\right]$$\n\n$$= \\frac{N}{N+1} \\left\\{ \\overline{x^2} - 2\\overline{x}\\mathbb{E}[\\mu] + \\mathbb{E}[\\mu^2] \\right\\}$$\n\n$$= \\frac{N}{N+1} \\left\\{ \\overline{x^2} - 2\\overline{x}^2 + \\frac{1}{N\\mathbb{E}[\\tau]} + \\overline{x}^2 \\right\\}$$\n\n$$= \\frac{N}{N+1} \\left\\{ \\overline{x^2} - \\overline{x}^2 + \\frac{1}{N\\mathbb{E}[\\tau]} \\right\\}$$\n\n$$= \\frac{N}{N+1} \\left\\{ \\overline{x^2} - \\overline{x}^2 + \\frac{1}{N\\mathbb{E}[\\tau]} \\right\\}$$\n\n$$= \\frac{N}{N+1} \\left\\{ \\overline{x^2} - \\overline{x}^2 \\right\\} + \\frac{1}{(N+1)\\mathbb{E}[\\tau]}$$\n\nRearranging it, we can obtain:\n\n$$\\frac{1}{\\mathbb{E}[\\tau]} = (\\overline{x^2} - \\overline{x}^2) = \\frac{1}{N} \\sum_{n=1}^{N} (x_n - \\overline{x})^2$$\n\nActually it is still a biased estimator.",
"answer_length": 1875
}
]
},
{
"chapter_number": 11,
"total_questions": 16,
"difficulty_breakdown": {
"easy": 11,
"medium": 4,
"hard": 0,
"unknown": 2
},
"questions": [
{
"chapter": 11,
"question_number": "11.1",
"difficulty": "easy",
"question_text": "Show that the finite sample estimator $\\hat{f}$ defined by (11.2: $\\widehat{f} = \\frac{1}{L} \\sum_{l=1}^{L} f(\\mathbf{z}^{(l)}).$) has mean equal to $\\mathbb{E}[f]$ and variance given by (11.3: $\\operatorname{var}[\\widehat{f}] = \\frac{1}{L} \\mathbb{E}\\left[ (f - \\mathbb{E}[f])^2 \\right]$).",
"answer": "Based on definition, we can write down:\n\n$$\\begin{split} \\mathbb{E}[\\widehat{f}] &= \\mathbb{E}[\\frac{1}{L}\\sum_{l=1}^{L}f(\\mathbf{z}^{(l)})] \\\\ &= \\frac{1}{L}\\sum_{l=1}^{L}\\mathbb{E}[f(\\mathbf{z}^{(l)})] \\\\ &= \\frac{1}{L}\\cdot L\\cdot \\mathbb{E}[f] = \\mathbb{E}[f] \\end{split}$$\n\nWhere we have used the fact that the expectation and the summation can exchange order because all the $\\mathbf{z}^{(l)}$ are independent, and that $\\mathbb{E}[f(\\mathbf{z}^{(l)})] = \\mathbb{E}[f]$ because all the $\\mathbf{z}^{(l)}$ are drawn from $p(\\mathbf{z})$ . Next, we deal with the variance:\n\n$$\\begin{aligned} & \\text{var}[\\hat{f}] &= & \\mathbb{E}[(\\hat{f} - \\mathbb{E}[\\hat{f}])^2] = \\mathbb{E}[\\hat{f}^2] - \\mathbb{E}[\\hat{f}]^2 = \\mathbb{E}[\\hat{f}^2] - \\mathbb{E}[f]^2 \\\\ &= & \\mathbb{E}[(\\frac{1}{L} \\sum_{l=1}^{L} f(\\mathbf{z}^{(l)}))^2] - \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L^2} \\mathbb{E}[(\\sum_{l=1}^{L} f(\\mathbf{z}^{(l)}))^2] - \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L^2} \\mathbb{E}[\\sum_{l=1}^{L} f^2(\\mathbf{z}^{(l)}) + \\sum_{i,j=1,i\\neq j}^{L} f(\\mathbf{z}^{(i)}) f(\\mathbf{z}^{(j)})] - \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L^2} \\mathbb{E}[\\sum_{l=1}^{L} f^2(\\mathbf{z}^{(l)})] + \\frac{L^2 - L}{L^2} \\mathbb{E}[f]^2 - \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L^2} \\sum_{l=1}^{L} \\mathbb{E}[f^2(\\mathbf{z}^{(l)})] - \\frac{1}{L} \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L^2} \\cdot L \\cdot \\mathbb{E}[f^2] - \\frac{1}{L} \\mathbb{E}[f]^2 \\\\ &= & \\frac{1}{L} \\mathbb{E}[f^2] - \\frac{1}{L} \\mathbb{E}[f]^2 = \\frac{1}{L} \\mathbb{E}[(f - \\mathbb{E}[f])^2] \\end{aligned}$$\n\nJust as required.",
"answer_length": 1560
},
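A short numerical illustration of (11.2)-(11.3), given here as a sketch with arbitrary choices: f(z) = z^2 and z ~ N(0, 1), so E[f] = 1 and var[f] = 2. Repeating the L-sample estimator many times, its mean is E[f] and its variance is var[f]/L.

```python
import numpy as np

rng = np.random.default_rng(3)
L, reps = 50, 20000

# f(z) = z**2 with z ~ N(0, 1): E[f] = 1 and var[f] = E[f^2] - E[f]^2 = 2.
z = rng.normal(size=(reps, L))
f_hat = (z ** 2).mean(axis=1)        # one estimator \hat{f} per repetition

print(f_hat.mean())                  # about 1    (= E[f])
print(f_hat.var(), 2.0 / L)          # about 0.04 (= var[f] / L)
```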
{
"chapter": 11,
"question_number": "11.10",
"difficulty": "easy",
"question_text": "Show that the simple random walk over the integers defined by (11.34: $p(z^{(\\tau+1)} = z^{(\\tau)}) = 0.5$), (11.35: $p(z^{(\\tau+1)} = z^{(\\tau)} + 1) = 0.25$), and (11.36: $p(z^{(\\tau+1)} = z^{(\\tau)} - 1) = 0.25$) has the property that $\\mathbb{E}[(z^{(\\tau)})^2] = \\mathbb{E}[(z^{(\\tau-1)})^2] + 1/2$ and hence by induction that $\\mathbb{E}[(z^{(\\tau)})^2] = \\tau/2$ .\n\nFigure 11.15 A probability distribution over two variables $z_1$ and $z_2$ that is uniform over the shaded regions and that is zero everywhere else.\n\n",
"answer": "Based on definition and Eq (11.34)-(11.36), we can write down:\n\n$$\\mathbb{E}_{\\tau}[(z^{(\\tau)})^{2}] = 0.5 \\cdot \\mathbb{E}_{\\tau-1}[(z^{\\tau-1})^{2}] + 0.25 \\cdot \\mathbb{E}_{\\tau-1}[(z^{\\tau-1}+1)^{2}] + 0.25 \\cdot \\mathbb{E}_{\\tau-1}[(z^{\\tau-1}-1)^{2}]$$\n\n$$= \\mathbb{E}_{\\tau-1}[(z^{\\tau-1})^{2}] + 0.5$$\n\nIf the initial state is $z^{(0)} = 0$ (there is a typo in the line below Eq (11.36: $p(z^{(\\tau+1)} = z^{(\\tau)} - 1) = 0.25$)), we can obtain $\\mathbb{E}_{\\tau}[(z^{(\\tau)})^2] = \\tau/2$ just as required.",
"answer_length": 521
},
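A quick simulation of the random walk (assuming the step probabilities of (11.34)-(11.36) and z^(0) = 0; the horizon and number of chains are arbitrary) confirming E[(z^(tau))^2] = tau/2.

```python
import numpy as np

rng = np.random.default_rng(4)
tau, chains = 200, 100000

# Steps: 0 w.p. 0.5, +1 w.p. 0.25, -1 w.p. 0.25, starting from z^(0) = 0.
steps = rng.choice([0, 1, -1], size=(chains, tau), p=[0.5, 0.25, 0.25])
z = steps.cumsum(axis=1)

print((z[:, -1] ** 2).mean(), tau / 2)   # both close to 100
```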
{
"chapter": 11,
"question_number": "11.11",
"difficulty": "medium",
"question_text": "Show that the Gibbs sampling algorithm, discussed in Section 11.3, satisfies detailed balance as defined by (11.40: $p^{\\star}(\\mathbf{z})T(\\mathbf{z}, \\mathbf{z}') = p^{\\star}(\\mathbf{z}')T(\\mathbf{z}', \\mathbf{z})$).",
"answer": "This problem requires you to know the definition of detailed balance, i.e., Eq (11.40):\n\n$$p^{\\star}(\\mathbf{z})T(\\mathbf{z},\\mathbf{z}') = p^{\\star}(\\mathbf{z}')T(\\mathbf{z}',\\mathbf{z})$$\n\nNote that here **z** and **z**' are the sampled values of $[z_1, z_2, ..., z_M]^T$ in two consecutive Gibbs Sampling step. Without loss of generality, we assume that we are now updating $z_j^{\\tau}$ to $z_j^{\\tau+1}$ in step $\\tau$ :\n\n$$\\begin{split} p^{\\star}(\\mathbf{z})T(\\mathbf{z},\\mathbf{z}') &= p(z_1^{\\tau},z_2^{\\tau},...,z_M^{\\tau}) \\cdot p(z_j^{\\tau+1}|\\mathbf{z}_{/j}^{\\tau}) \\\\ &= p(z_j^{\\tau}|\\mathbf{z}_{/j}^{\\tau}) \\cdot p(\\mathbf{z}_{/j}^{\\tau}) \\cdot p(z_j^{\\tau+1}|\\mathbf{z}_{/j}^{\\tau}) \\\\ &= p(z_j^{\\tau}|\\mathbf{z}_{/j}^{\\tau+1}) \\cdot p(\\mathbf{z}_{/j}^{\\tau+1}) \\cdot p(z_j^{\\tau+1}|\\mathbf{z}_{/j}^{\\tau+1}) \\\\ &= p(z_j^{\\tau}|\\mathbf{z}_{/j}^{\\tau+1}) \\cdot p(z_1^{\\tau+1},z_2^{\\tau+1},...,z_M^{\\tau+1}) \\\\ &= T(\\mathbf{z}',\\mathbf{z}) \\cdot p^{\\star}(\\mathbf{z}) \\end{split}$$\n\nTo be more specific, we write down the first line based on Gibbs sampling, where $\\mathbf{z}_{/j}^{\\tau}$ denotes all the entries in vector $\\mathbf{z}^{\\tau}$ except $z_{j}^{\\tau}$ . In the second line, we use the conditional property, i.e, p(a,b) = p(a|b)p(b) for the first term. In the third line, we use the fact that $\\mathbf{z}_{/j}^{\\tau} = \\mathbf{z}_{/j}^{\\tau+1}$ . Then we reversely use the conditional property for the last two terms in the fourth line, and finally obtain what has been asked.",
"answer_length": 1513
},
{
"chapter": 11,
"question_number": "11.12",
"difficulty": "easy",
"question_text": "Consider the distribution shown in Figure 11.15. Discuss whether the standard Gibbs sampling procedure for this distribution is ergodic, and therefore whether it would sample correctly from this distribution",
"answer": "Obviously, Gibbs Sampling is not ergodic for this specific distribution, and the quick reason is that neither the projection of the two shaded region on $z_1$ axis nor $z_2$ axis overlaps. For instance, we denote the left down shaded region as region 1. If the initial sample falls into this region, no matter how many steps have been carried out, all the generated samples will be in region 1. It is the same for the right up region.\n\n# **Problem 11.13 Solution**\n\nLet's begin by definition.\n\n$$p(\\mu|x,\\tau,\\mu_0,s_0) \\propto p(x|\\mu,\\tau,\\mu_0,s_0) \\cdot p(\\mu|\\tau,\\mu_0,s_0)$$\n\n$$= p(x|\\mu,\\tau) \\cdot p(\\mu|\\mu_0,s_0)$$\n\n$$= \\mathcal{N}(x|\\mu,\\tau^{-1}) \\cdot \\mathcal{N}(\\mu|\\mu_0,s_0)$$\n\nWhere in the first line, we have used Bayes' Theorem:\n\n$$p(\\mu|x,c) \\propto p(x|\\mu,c) \\cdot p(\\mu|c)$$\n\nNow we use Eq (2.113)-Eq (2.117: $\\Sigma = (\\mathbf{\\Lambda} + \\mathbf{A}^{\\mathrm{T}} \\mathbf{L} \\mathbf{A})^{-1}.$), we can obtain: $p(\\mu|x,\\tau,\\mu_0,s_0)=\\mathcal{N}(\\mu|\\mu^{\\star},s^{\\star})$ , where we have defined:\n\n$$[s^{\\star}]^{-1} = s_0^{-1} + \\tau \\quad , \\quad \\mu^{\\star} = s^{\\star} \\cdot (\\tau \\cdot x + s_0^{-1} \\mu_0)$$\n\nIt is similar for $p(\\tau|x,\\mu,a,b)$ :\n\n$$p(\\tau|x,\\mu,a,b) \\propto p(x|\\tau,\\mu,a,b) \\cdot p(\\tau|\\mu,a,b)$$\n\n$$= p(x|\\mu,\\tau) \\cdot p(\\tau|a,b)$$\n\n$$= \\mathcal{N}(x|\\mu,\\tau^{-1}) \\cdot \\operatorname{Gam}(\\tau|a,b)$$\n\nBased on Section 2.3.6, especially Eq (2.150)-(2.151), we can obtain $p(\\tau|x,\\mu,a,b) = \\text{Gam}(\\tau|a^*,b^*)$ , where we have defined:\n\n$$a^* = a + 0.5$$\n , $b^* = b + 0.5 \\cdot (x - \\mu)^2$",
"answer_length": 1568
},
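The conditionals derived above (mu*, s*, a*, b*) can be turned into a small Gibbs sampler. This is a sketch only: the observation x and the hyperparameter values below are illustrative assumptions, and s0 is treated as a variance, consistent with the update [s*]^{-1} = s0^{-1} + tau.

```python
import numpy as np

rng = np.random.default_rng(5)

# One observation and (assumed) hyperparameters for the model of Figure 11.16.
x, mu0, s0, a, b = 2.0, 0.0, 10.0, 2.0, 2.0

mu, tau = 0.0, 1.0
samples = []
for _ in range(20000):
    # p(mu | x, tau) = N(mu | mu_star, s_star)
    s_star = 1.0 / (1.0 / s0 + tau)
    mu_star = s_star * (tau * x + mu0 / s0)
    mu = rng.normal(mu_star, np.sqrt(s_star))
    # p(tau | x, mu) = Gam(tau | a + 1/2, b + (x - mu)^2 / 2)
    tau = rng.gamma(a + 0.5, 1.0 / (b + 0.5 * (x - mu) ** 2))
    samples.append((mu, tau))

samples = np.array(samples[1000:])   # discard burn-in
print(samples.mean(axis=0))          # posterior means of mu and tau
```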
{
"chapter": 11,
"question_number": "11.14",
"difficulty": "easy",
"question_text": "Verify that the over-relaxation update (11.50: $z_i' = \\mu_i + \\alpha(z_i - \\mu_i) + \\sigma_i(1 - \\alpha_i^2)^{1/2}\\nu$), in which $z_i$ has mean $\\mu_i$ and variance $\\sigma_i$ , and where $\\nu$ has zero mean and unit variance, gives a value $z_i'$ with mean $\\mu_i$ and variance $\\sigma_i^2$ .",
"answer": "Based on definition, we can write down:\n\n$$\\begin{split} \\mathbb{E}[z_i'] &= \\mathbb{E}[\\mu_i + \\alpha(z_i - \\mu_i) + \\sigma_i (1 - \\alpha_i^2)^{1/2} v] \\\\ &= \\mu_i + \\mathbb{E}[\\alpha(z_i - \\mu_i)] + \\mathbb{E}[\\sigma_i (1 - \\alpha_i^2)^{1/2} v] \\\\ &= \\mu_i + \\alpha \\cdot \\mathbb{E}[z_i - \\mu_i] + [\\sigma_i (1 - \\alpha_i^2)^{1/2}] \\cdot \\mathbb{E}[v] \\\\ &= \\mu_i \\end{split}$$\n\nWhere we have used the fact that the mean of $z_i$ is $\\mu_i$ , i.e., $\\mathbb{E}[z_i] = \\mu_i$ , and that the mean of v is 0, i.e., $\\mathbb{E}[v] = 0$ . Then we deal with the variance:\n\n$$\\begin{aligned} & \\text{var}[z_i'] &= & \\mathbb{E}[(z_i' - \\mu_i)^2] \\\\ &= & \\mathbb{E}[(\\alpha(z_i - \\mu_i) + \\sigma_i(1 - \\alpha_i^2)^{1/2}v)^2] \\\\ &= & \\mathbb{E}[\\alpha^2(z_i - \\mu_i)^2] + \\mathbb{E}[\\sigma_i^2(1 - \\alpha_i^2)v^2] + \\mathbb{E}[2\\alpha(z_i - \\mu_i) \\cdot \\sigma_i(1 - \\alpha_i^2)^{1/2}v] \\\\ &= & \\alpha^2 \\cdot \\mathbb{E}[(z_i - \\mu_i)^2] + \\sigma_i^2(1 - \\alpha_i^2) \\cdot \\mathbb{E}[v^2] + 2\\alpha \\cdot \\sigma_i(1 - \\alpha_i^2)^{1/2} \\cdot \\mathbb{E}[(z_i - \\mu_i)v] \\\\ &= & \\alpha^2 \\cdot \\text{var}[z_i] + \\sigma_i^2(1 - \\alpha_i^2) \\cdot (\\text{var}[v] + \\mathbb{E}[v]^2) + 2\\alpha \\cdot \\sigma_i(1 - \\alpha_i^2)^{1/2} \\cdot \\mathbb{E}[(z_i - \\mu_i)] \\cdot \\mathbb{E}[v] \\\\ &= & \\alpha^2 \\cdot \\sigma_i^2 + \\sigma_i^2(1 - \\alpha_i^2) \\cdot 1 + 0 \\\\ &= & \\sigma_i^2 \\end{aligned}$$\n\nWhere we have used the fact that $z_i$ and v are independent and thus $\\mathbb{E}[(z_i - \\mu_i)v] = \\mathbb{E}[z_i - \\mu_i] \\cdot \\mathbb{E}[v] = 0$",
"answer_length": 1535
},
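A quick Monte Carlo check of the over-relaxation update (all numerical values are arbitrary assumptions): z'_i keeps mean mu_i and standard deviation sigma_i for any alpha in (-1, 1).

```python
import numpy as np

rng = np.random.default_rng(6)
mu_i, sigma_i, alpha = 3.0, 2.0, -0.9

z = rng.normal(mu_i, sigma_i, size=1_000_000)
nu = rng.normal(0.0, 1.0, size=1_000_000)
z_prime = mu_i + alpha * (z - mu_i) + sigma_i * np.sqrt(1 - alpha ** 2) * nu

print(z_prime.mean(), z_prime.std())   # about 3.0 and 2.0, as derived
```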
{
"chapter": 11,
"question_number": "11.15",
"difficulty": "easy",
"question_text": "Using (11.56: $K(\\mathbf{r}) = \\frac{1}{2} \\|\\mathbf{r}\\|^2 = \\frac{1}{2} \\sum_{i} r_i^2.$) and (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$), show that the Hamiltonian equation (11.58: $\\frac{\\mathrm{d}z_i}{\\mathrm{d}\\tau} = \\frac{\\partial H}{\\partial r_i}$) is equivalent to (11.53: $r_i = \\frac{\\mathrm{d}z_i}{\\mathrm{d}\\tau}$). Similarly, using (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$) show that (11.59: $\\frac{\\mathrm{d}r_i}{\\mathrm{d}\\tau} = -\\frac{\\partial H}{\\partial z_i}.$) is equivalent to (11.55: $\\frac{\\mathrm{d}r_i}{\\mathrm{d}\\tau} = -\\frac{\\partial E(\\mathbf{z})}{\\partial z_i}.$).",
"answer": "Using Eq (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$), we can write down:\n\n$$\\frac{\\partial H}{\\partial r_i} = \\frac{\\partial K}{\\partial r_i} = r_i$$\n\nComparing this with Eq (11.53: $r_i = \\frac{\\mathrm{d}z_i}{\\mathrm{d}\\tau}$), we obtain Eq (11.58: $\\frac{\\mathrm{d}z_i}{\\mathrm{d}\\tau} = \\frac{\\partial H}{\\partial r_i}$). Similarly, still using Eq (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$), we can obtain:\n\n$$\\frac{\\partial H}{\\partial z_i} = \\frac{\\partial E}{\\partial z_i}$$\n\nComparing this with Eq (11.55: $\\frac{\\mathrm{d}r_i}{\\mathrm{d}\\tau} = -\\frac{\\partial E(\\mathbf{z})}{\\partial z_i}.$), we obtain Eq (11.59: $\\frac{\\mathrm{d}r_i}{\\mathrm{d}\\tau} = -\\frac{\\partial H}{\\partial z_i}.$).",
"answer_length": 750
},
{
"chapter": 11,
"question_number": "11.16",
"difficulty": "easy",
"question_text": "By making use of (11.56: $K(\\mathbf{r}) = \\frac{1}{2} \\|\\mathbf{r}\\|^2 = \\frac{1}{2} \\sum_{i} r_i^2.$), (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$), and (11.63: $p(\\mathbf{z}, \\mathbf{r}) = \\frac{1}{Z_H} \\exp(-H(\\mathbf{z}, \\mathbf{r})).$), show that the conditional distribution $p(\\mathbf{r}|\\mathbf{z})$ is a Gaussian.\n\n**Figure 11.16** A graph involving an observed Gaussian variable x with prior distributions over its mean $\\mu$ and precision $\\tau$ .\n\n\n\n### 558 11. SAMPLING METHODS",
"answer": "According to Bayes' Theorem and Eq (11.54: $p(\\mathbf{z}) = \\frac{1}{Z_p} \\exp\\left(-E(\\mathbf{z})\\right)$), (11.63: $p(\\mathbf{z}, \\mathbf{r}) = \\frac{1}{Z_H} \\exp(-H(\\mathbf{z}, \\mathbf{r})).$), we have:\n\n$$p(\\mathbf{r}|\\mathbf{z}) = \\frac{p(\\mathbf{z}, \\mathbf{r})}{p(\\mathbf{z})} = \\frac{1/Z_H \\cdot \\exp(-H(\\mathbf{z}, \\mathbf{r}))}{1/Z_D \\cdot \\exp(-E(\\mathbf{z}))} = \\frac{Z_p}{Z_H} \\cdot \\exp(-K(\\mathbf{r}))$$\n\nwhere we have used Eq (11.57: $H(\\mathbf{z}, \\mathbf{r}) = E(\\mathbf{z}) + K(\\mathbf{r})$). Moreover, by noticing Eq (11.56: $K(\\mathbf{r}) = \\frac{1}{2} \\|\\mathbf{r}\\|^2 = \\frac{1}{2} \\sum_{i} r_i^2.$), we conclude that $p(\\mathbf{r}|\\mathbf{z})$ should satisfy a Gaussian distribution.",
"answer_length": 709
},
{
"chapter": 11,
"question_number": "11.17",
"difficulty": "easy",
"question_text": "Verify that the two probabilities (11.68: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R})) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}) + H(\\mathcal{R}'))\\}.$) and (11.69: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R}')) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}') + H(\\mathcal{R}))\\}.$) are equal, and hence that detailed balance holds for the hybrid Monte Carlo algorithm.\n\n\n\nAppendix A\n\nIn Chapter 9, we discussed probabilistic models having discrete latent variables, such as the mixture of Gaussians. We now explore models in which some, or all, of the latent variables are continuous. An important motivation for such models is that many data sets have the property that the data points all lie close to a manifold of much lower dimensionality than that of the original data space. To see why this might arise, consider an artificial data set constructed by taking one of the off-line digits, represented by a $64 \\times 64$ pixel grey-level image, and embedding it in a larger image of size $100 \\times 100$ by padding with pixels having the value zero (corresponding to white pixels) in which the location and orientation of the digit is varied at random, as illustrated in Figure 12.1. Each of the resulting images is represented by a point in the $100 \\times 100 = 10,000$ -dimensional data space. However, across a data set of such images, there are only three *degrees of freedom* of variability, corresponding to the vertical and horizontal translations and the rotations. The data points will therefore live on a subspace of the data space whose *intrinsic dimensionality* is three. Note\n\n\n\nFigure 12.1 A synthetic data set obtained by taking one of the off-line digit images and creating multiple copies in each of which the digit has undergone a random displacement and rotation within some larger image field. The resulting images each have $100 \\times 100 = 10,000$ pixels.\n\nthat the manifold will be nonlinear because, for instance, if we translate the digit past a particular pixel, that pixel value will go from zero (white) to one (black) and back to zero again, which is clearly a nonlinear function of the digit position. In this example, the translation and rotation parameters are latent variables because we observe only the image vectors and are not told which values of the translation or rotation variables were used to create them.\n\nFor real digit image data, there will be a further degree of freedom arising from scaling. Moreover there will be multiple additional degrees of freedom associated with more complex deformations due to the variability in an individual's writing as well as the differences in writing styles between individuals. Nevertheless, the number of such degrees of freedom will be small compared to the dimensionality of the data set.\n\nAnother example is provided by the oil flow data set, in which (for a given geometrical configuration of the gas, water, and oil phases) there are only two degrees of freedom of variability corresponding to the fraction of oil in the pipe and the fraction of water (the fraction of gas then being determined). Although the data space comprises 12 measurements, a data set of points will lie close to a two-dimensional manifold embedded within this space. In this case, the manifold comprises several distinct segments corresponding to different flow regimes, each such segment being a (noisy) continuous two-dimensional manifold. 
If our goal is data compression, or density modelling, then there can be benefits in exploiting this manifold structure.\n\nIn practice, the data points will not be confined precisely to a smooth low-dimensional manifold, and we can interpret the departures of data points from the manifold as 'noise'. This leads naturally to a generative view of such models in which we first select a point within the manifold according to some latent variable distribution and then generate an observed data point by adding noise, drawn from some conditional distribution of the data variables given the latent variables.\n\nThe simplest continuous latent variable model assumes Gaussian distributions for both the latent and observed variables and makes use of a linear-Gaussian dependence of the observed variables on the state of the latent variables. This leads to a probabilistic formulation of the well-known technique of principal component analysis (PCA), as well as to a related model called factor analysis.\n\nIn this chapter w will begin with a standard, nonprobabilistic treatment of PCA, and then we show how PCA arises naturally as the maximum likelihood solution to\n\nAppendix A\n\nSection 8.1.4\n\nSection 12.1\n\nFigure 12.2 Principal component analysis seeks a space of lower dimensionality, known as the principal subspace and denoted by the magenta line, such that the orthogonal projection of the data points (red dots) onto this subspace maximizes the variance of the projected points (green dots). An alternative definition of PCA is based on minimizing the sum-of-squares of the projection errors, indicated by the blue\n\n\n\n### Section 12.2\n\na particular form of linear-Gaussian latent variable model. This probabilistic reformulation brings many advantages, such as the use of EM for parameter estimation, principled extensions to mixtures of PCA models, and Bayesian formulations that allow the number of principal components to be determined automatically from the data. Finally, we discuss briefly several generalizations of the latent variable concept that go beyond the linear-Gaussian assumption including non-Gaussian latent variables, which leads to the framework of *independent component analysis*, as well as models having a nonlinear relationship between latent and observed variables.\n\n### Section 12.4\n\n### 12.1. Principal Component Analysis\n\nPrincipal component analysis, or PCA, is a technique that is widely used for applications such as dimensionality reduction, lossy data compression, feature extraction, and data visualization (Jolliffe, 2002). It is also known as the *Karhunen-Loève* transform.\n\nThere are two commonly used definitions of PCA that give rise to the same algorithm. PCA can be defined as the orthogonal projection of the data onto a lower dimensional linear space, known as the *principal subspace*, such that the variance of the projected data is maximized (Hotelling, 1933). Equivalently, it can be defined as the linear projection that minimizes the average projection cost, defined as the mean squared distance between the data points and their projections (Pearson, 1901). The process of orthogonal projection is illustrated in Figure 12.2. We consider each of these definitions in turn.\n\n### 12.1.1 Maximum variance formulation\n\nConsider a data set of observations $\\{x_n\\}$ where $n=1,\\ldots,N$ , and $x_n$ is a Euclidean variable with dimensionality D. 
Our goal is to project the data onto a space having dimensionality M < D while maximizing the variance of the projected data. For the moment, we shall assume that the value of M is given. Later in this",
"answer": "There are typos in Eq (11.68: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R})) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}) + H(\\mathcal{R}'))\\}.$) and (11.69: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R}')) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}') + H(\\mathcal{R}))\\}.$). The signs in the exponential of the second argument of the min function is not right. To be more specific, Eq (11.68: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R})) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}) + H(\\mathcal{R}'))\\}.$) should be:\n\n$$\\frac{1}{Z_H} \\exp(-H(R)) \\delta V \\frac{1}{2} \\min\\{1, \\exp(H(R) - H(R'))\\} \\tag{*}$$\n\nand Eq (11.69: $\\frac{1}{Z_H} \\exp(-H(\\mathcal{R}')) \\delta V_{\\frac{1}{2}} \\min \\{1, \\exp(-H(\\mathcal{R}') + H(\\mathcal{R}))\\}.$) is given by:\n\n$$\\frac{1}{Z_H} \\exp(-H(R'))\\delta V \\frac{1}{2} \\min\\{1, \\exp(H(R') - H(R))\\}$$\n (\\*\\*)\n\nWhen H(R) = H(R'), they are clearly equal. When H(R) > H(R'), (\\*) will reduce to:\n\n$$\\frac{1}{Z_H} \\exp(-H(R))\\delta V \\frac{1}{2}$$\n\nBecause the min function will give 1, and in this case (\\*\\*) will give:\n\n$$\\frac{1}{Z_H} \\exp(-H(R')) \\delta V \\frac{1}{2} \\exp(H(R') - H(R)) \\} = \\frac{1}{Z_H} \\exp(-H(R)) \\delta V \\frac{1}{2}$$\n\nTherefore, they are identical, and it is similar when H(R) < H(R').\n\n# 0.12 Continuous Latent Variables",
"answer_length": 1280
},
{
"chapter": 11,
"question_number": "11.2",
"difficulty": "easy",
"question_text": "Suppose that z is a random variable with uniform distribution over (0,1) and that we transform z using $y = h^{-1}(z)$ where h(y) is given by (11.6: $z = h(y) \\equiv \\int_{-\\infty}^{y} p(\\widehat{y}) \\,\\mathrm{d}\\widehat{y}$). Show that y has the distribution p(y).",
"answer": "What this problem wants us to prove is that if we use $y = h^{-1}(z)$ to transform the value of z to y, where z satisfies a uniform distribution over [0,1] and $h(\\cdot)$ is defined by Eq(11.6), we can enforce y to satisfy a specific desired distribution p(y). Let's prove it beginning by Eq (11.1):\n\n$$p^{\\star}(y) = p(z) \\cdot \\left| \\frac{dz}{dy} \\right| = 1 \\cdot h'(y) = \\frac{d}{dy} \\int_{-\\infty}^{y} p(\\widehat{y}) d\\widehat{y} = p(y)$$\n\nJust as required.",
"answer_length": 467
},
{
"chapter": 11,
"question_number": "11.3",
"difficulty": "easy",
"question_text": "Given a random variable z that is uniformly distributed over (0, 1), find a transformation y = f(z) such that y has a Cauchy distribution given by (11.8: $p(y) = \\frac{1}{\\pi} \\frac{1}{1 + y^2}.$).",
"answer": "We use what we have obtained in the previous problem.\n\n$$h(y) = \\int_{-\\infty}^{y} p(\\hat{y}) d\\hat{y}$$\n$$= \\int_{-\\infty}^{y} \\frac{1}{\\pi} \\frac{1}{1 + \\hat{y}^2} d\\hat{y}$$\n$$= \\tan^{-1}(y)$$\n\nTherefore, since we know that $z = h(y) = \\tan^{-1}(y)$ , we can obtain the transformation from z to y: $y = \\tan(z)$ .",
"answer_length": 318
},
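A small sketch of the resulting sampler (the sample size and the quantiles compared are arbitrary choices). Using the transformation y = tan(pi(z - 1/2)) from the corrected solution above, empirical quantiles are compared with the exact Cauchy quantiles, since the moments of a Cauchy distribution do not exist.

```python
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(7)
z = rng.uniform(size=200000)
y = np.tan(np.pi * (z - 0.5))       # inverse of the Cauchy CDF

qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(y, qs))           # empirical quantiles
print(cauchy.ppf(qs))               # exact Cauchy quantiles
```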
{
"chapter": 11,
"question_number": "11.4",
"difficulty": "medium",
"question_text": "Suppose that $z_1$ and $z_2$ are uniformly distributed over the unit circle, as shown in Figure 11.3, and that we make the change of variables given by (11.10: $y_1 = z_1 \\left(\\frac{-2\\ln z_1}{r^2}\\right)^{1/2}$) and (11.11: $y_2 = z_2 \\left(\\frac{-2\\ln z_2}{r^2}\\right)^{1/2}$). Show that $(y_1, y_2)$ will be distributed according to (11.12: $= \\left[ \\frac{1}{\\sqrt{2\\pi}} \\exp(-y_1^2/2) \\right] \\left[ \\frac{1}{\\sqrt{2\\pi}} \\exp(-y_2^2/2) \\right]$).",
"answer": "First, I believe there is a typo in Eq (11.10: $y_1 = z_1 \\left(\\frac{-2\\ln z_1}{r^2}\\right)^{1/2}$) and (11.11: $y_2 = z_2 \\left(\\frac{-2\\ln z_2}{r^2}\\right)^{1/2}$). Both $\\ln z_1$ and $\\ln z_2$ should be $\\ln(z_1^2 + z_2^2)$ . In the following, we will solve the problem under this assumption.\n\nWe only need to calculate the Jacobian matrix. First, based on Eq (11.10)-(11.11), it is not difficult to observe that $z_1$ only depends on $y_1$ , and $z_2$ only depends on $y_2$ , which means that $\\partial z_1/\\partial y_2 = 0$ and $\\partial z_2/\\partial y_1 = 0$ . To obtain the diagonal terms of the Jacobian matrix, i.e., $\\partial z_1/\\partial y_1$ and $\\partial z_2/\\partial y_2$ . To deal with the problem associated with a circle, it is always convenient to use polar coordinate:\n\n$$z_1 = r\\cos\\theta$$\n, and $z_2 = r\\sin\\theta$ \n\nIt is easily to obtain:\n\n$$\\frac{\\partial(z_1, z_2)}{\\partial(r, \\theta)} = \\begin{bmatrix} \\partial z_1 / \\partial r & \\partial z_1 / \\partial \\theta \\\\ \\partial z_2 / \\partial r & \\partial z_2 / \\partial \\theta \\end{bmatrix} = \\begin{bmatrix} \\cos \\theta & -r \\sin \\theta \\\\ \\sin \\theta & r \\cos \\theta \\end{bmatrix}$$\n\nTherefore, we can obtain:\n\n$$\\left|\\frac{\\partial(z_1, z_2)}{\\partial(r, \\theta)}\\right| = r(\\cos^2 \\theta + \\sin^2 \\theta) = r$$\n\nThen we substitute r and $\\theta$ into Eq (11.10: $y_1 = z_1 \\left(\\frac{-2\\ln z_1}{r^2}\\right)^{1/2}$), yielding:\n\n$$y_1 = r\\cos\\theta(\\frac{-2\\ln r^2}{r^2})^{1/2} = \\cos\\theta(-2\\ln r^2)^{1/2}$$\n (\\*)\n\nSimilarly, we also have:\n\n$$y_2 = \\sin\\theta (-2\\ln r^2)^{1/2} \\tag{**}$$\n\nIt is easily to obtain:\n\n$$\\frac{\\partial(y_1,y_2)}{\\partial(r,\\theta)} = \\begin{bmatrix} \\partial y_1/\\partial r & \\partial y_1/\\partial \\theta \\\\ \\partial y_2/\\partial r & \\partial y_2/\\partial \\theta \\end{bmatrix} = \\begin{bmatrix} -2\\cos\\theta(-2\\ln r^2)^{-1/2} \\cdot r^{-1} & -\\sin\\theta(-2\\ln r^2)^{1/2} \\\\ -2\\sin\\theta(-2\\ln r^2)^{-1/2} \\cdot r^{-1} & \\cos\\theta(-2\\ln r^2)^{1/2} \\end{bmatrix}$$\n\nTherefore, we can obtain:\n\n$$\\left|\\frac{\\partial(y_1, y_2)}{\\partial(r, \\theta)}\\right| = (-2r^{-1}(\\cos^2\\theta + \\sin^2\\theta)) = -2r^{-1}$$\n\nNext, we need to use the property of Jacobian Matrix:\n\n$$\\begin{aligned} |\\frac{\\partial(z_1, z_2)}{\\partial(y_1, y_2)}| &= |\\frac{\\partial(z_1, z_2)}{\\partial(r, \\theta)} \\cdot \\frac{\\partial(r, \\theta)}{\\partial(y_1, y_2)}| \\\\ &= |\\frac{\\partial(z_1, z_2)}{\\partial(r, \\theta)}| \\cdot |\\frac{\\partial(r, \\theta)}{\\partial(y_1, y_2)}| \\\\ &= |\\frac{\\partial(z_1, z_2)}{\\partial(r, \\theta)}| \\cdot |\\frac{\\partial(y_1, y_2)}{\\partial(r, \\theta)}|^{-1} \\\\ &= r \\cdot (-2r^{-1})^{-1} = -\\frac{r^2}{2} \\end{aligned}$$\n\nBy squaring both sides of (\\*) and (\\*\\*) and adding them together, we can obtain:\n\n$$y_1^2 + y_2^2 = -2\\ln r^2 = r^2 = \\exp\\left\\{\\frac{y_1^2 + y_2^2}{-2}\\right\\}$$\n\nFinally, we can obtain:\n\n$$p(y_1,y_2) = p(z_1,z_2) \\left| \\frac{\\partial(z_1,z_2)}{\\partial(y_1,y_2)} \\right| = \\frac{1}{\\pi} \\cdot \\left| -\\frac{r^2}{2} \\right| = \\frac{1}{2\\pi} r^2 = \\frac{1}{2\\pi} \\exp\\left\\{ \\frac{y_1^2 + y_2^2}{-2} \\right\\}$$\n\nJust as required.",
"answer_length": 3089
},
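A sketch of the full polar Box-Muller scheme described here (the sample size is arbitrary): points are first kept only if they fall inside the unit circle, then transformed using the corrected ln(z1^2 + z2^2); the resulting pairs should have roughly zero mean and identity covariance.

```python
import numpy as np

rng = np.random.default_rng(8)

# Rejection step: keep points uniform on the unit disc.
z = rng.uniform(-1, 1, size=(400000, 2))
z = z[(z ** 2).sum(axis=1) < 1]
r2 = (z ** 2).sum(axis=1)           # r^2 = z1^2 + z2^2

# Eq (11.10)-(11.11) with ln r^2, as discussed in the solution above.
y = z * np.sqrt(-2 * np.log(r2) / r2)[:, None]

print(y.mean(axis=0))               # about (0, 0)
print(np.cov(y, rowvar=False))      # about the 2x2 identity matrix
```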
{
"chapter": 11,
"question_number": "11.5",
"difficulty": "easy",
"question_text": "- 11.5 (\\*) www Let z be a D-dimensional random variable having a Gaussian distribution with zero mean and unit covariance matrix, and suppose that the positive definite symmetric matrix $\\Sigma$ has the Cholesky decomposition $\\Sigma = \\mathbf{L}\\mathbf{L}^T$ where $\\mathbf{L}$ is a lower-triangular matrix (i.e., one with zeros above the leading diagonal). Show that the variable $\\mathbf{y} = \\mu + \\mathbf{L}\\mathbf{z}$ has a Gaussian distribution with mean $\\mu$ and covariance $\\Sigma$ . This provides a technique for generating samples from a general multivariate Gaussian using samples from a univariate Gaussian having zero mean and unit variance.",
"answer": "This is a linear transformation of $\\mathbf{z}$ , we still obtain a Gaussian random variable $\\mathbf{y}$ . We only need to match its moments (mean and variance). We know that $\\mathbf{z} \\sim \\mathcal{N}(\\mathbf{0}, \\mathbf{I})$ , $\\mathbf{\\Sigma} = \\mathbf{L}\\mathbf{L}^T$ , and $\\mathbf{y} = \\boldsymbol{\\mu} + \\mathbf{L}\\mathbf{z}$ . Now, using $\\mathbb{E}[\\mathbf{z}] = \\mathbf{0}$ , we obtain:\n\n$$\\mathbb{E}[\\mathbf{y}] = \\mathbb{E}[\\mu + \\mathbf{L}\\mathbf{z}]$$\n$$= \\mu + \\mathbf{L} \\cdot \\mathbb{E}[\\mathbf{z}]$$\n$$= \\mu$$\n\nMoreover, using $\\text{cov}[\\mathbf{z}] = \\mathbb{E}[\\mathbf{z}\\mathbf{z}^T] - \\mathbb{E}[\\mathbf{z}]\\mathbb{E}[\\mathbf{z}^T] = \\mathbb{E}[\\mathbf{z}\\mathbf{z}^T] = \\mathbf{I}$ , we can obtain:\n\n$$\\begin{aligned}\n\\operatorname{cov}[\\mathbf{y}] &= \\mathbb{E}[\\mathbf{y}\\mathbf{y}^T] - \\mathbb{E}[\\mathbf{y}]\\mathbb{E}[\\mathbf{y}^T] \\\\\n&= \\mathbb{E}[(\\boldsymbol{\\mu} + \\mathbf{L}\\mathbf{z}) \\cdot (\\boldsymbol{\\mu} + \\mathbf{L}\\mathbf{z})^T] - \\boldsymbol{\\mu}\\boldsymbol{\\mu}^T \\\\\n&= \\mathbb{E}[\\boldsymbol{\\mu}\\boldsymbol{\\mu}^T + 2\\boldsymbol{\\mu} \\cdot (\\mathbf{L}\\mathbf{z})^T + (\\mathbf{L}\\mathbf{z}) \\cdot (\\mathbf{L}\\mathbf{z})^T] - \\boldsymbol{\\mu}\\boldsymbol{\\mu}^T \\\\\n&= 2\\boldsymbol{\\mu} \\cdot \\mathbb{E}[\\mathbf{z}^T] \\cdot \\mathbf{L}^T + \\mathbb{E}[\\mathbf{L}\\mathbf{z}\\mathbf{z}^T\\mathbf{L}^T] \\\\\n&= \\mathbf{L} \\cdot \\mathbb{E}[\\mathbf{z}\\mathbf{z}^T] \\cdot \\mathbf{L}^T = \\mathbf{L} \\cdot \\mathbf{I} \\cdot \\mathbf{L}^T \\\\\n&= \\mathbf{\\Sigma}\n\\end{aligned}$$\n\nJust as required.",
"answer_length": 1529
},
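A direct numerical check of this construction (mu, Sigma and the sample size are arbitrary assumptions): samples generated as y = mu + Lz have sample mean close to mu and sample covariance close to Sigma.

```python
import numpy as np

rng = np.random.default_rng(9)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
L = np.linalg.cholesky(Sigma)          # Sigma = L L^T, L lower-triangular

z = rng.normal(size=(200000, 2))       # rows are samples of z ~ N(0, I)
y = mu + z @ L.T                       # each row is y = mu + L z

print(y.mean(axis=0))                  # about mu
print(np.cov(y, rowvar=False))         # about Sigma
```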
{
"chapter": 11,
"question_number": "11.6",
"difficulty": "medium",
"question_text": "- 11.6 (\\*\\*) www In this exercise, we show more carefully that rejection sampling does indeed draw samples from the desired distribution $p(\\mathbf{z})$ . Suppose the proposal distribution is $q(\\mathbf{z})$ and show that the probability of a sample value $\\mathbf{z}$ being accepted is given by $\\widetilde{p}(\\mathbf{z})/kq(\\mathbf{z})$ where $\\widetilde{p}$ is any unnormalized distribution that is proportional to $p(\\mathbf{z})$ , and the constant k is set to the smallest value that ensures $kq(\\mathbf{z}) \\geqslant \\widetilde{p}(\\mathbf{z})$ for all values of $\\mathbf{z}$ . Note that the probability of drawing a value $\\mathbf{z}$ is given by the probability of drawing that value from $q(\\mathbf{z})$ times the probability of accepting that value given that it has been drawn. Make use of this, along with the sum and product rules of probability, to write down the normalized form for the distribution over $\\mathbf{z}$ , and show that it equals $p(\\mathbf{z})$ .",
"answer": "This problem is all about definition. According to the description of rejection sampling, we know that: for a specific value $\\mathbf{z}_0$ (drawn from $q(\\mathbf{z})$ ), we will generate a random variable $u_0$ , which satisfies a uniform distribution in the interval $[0, kq(\\mathbf{z}_0)]$ , and if the generated value of $u_0$ is less than $\\tilde{p}(\\mathbf{z}_0)$ , we will accept this value. Therefore, we obtain:\n\n$$P[\\text{accept}|\\mathbf{z}_0] = \\frac{\\widetilde{p}(\\mathbf{z}_0)}{kq(\\mathbf{z}_0)}$$\n\nSince we know $\\mathbf{z}_0$ is drawn from $q(\\mathbf{z})$ , we can obtain the total acceptance rate by integral:\n\n$$P[\\text{accept}] = \\int P[\\text{accept}|\\mathbf{z}_0] \\cdot q(\\mathbf{z}_0) d\\mathbf{z}_0 = \\int \\frac{\\widetilde{p}(\\mathbf{z}_0)}{k} d\\mathbf{z}_0$$\n\nIt is identical to Eq (11.14: $= \\frac{1}{k} \\int \\widetilde{p}(z) dz.$). We substitute Eq (11.13: $p(z) = \\frac{1}{Z_p}\\widetilde{p}(z)$) into the expression above, yielding:\n\n$$P[\\text{accept}] = \\frac{Z_p}{k}$$\n\nWe define a very small vector $\\boldsymbol{\\epsilon}$ , and we can obtain:\n\n$$P[\\mathbf{x}_{0} \\in (\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})] = P[\\mathbf{z}_{0} \\in (\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon}) | \\operatorname{accept}]$$\n\n$$= \\frac{P[\\operatorname{accept}, \\mathbf{z}_{0} \\in (\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})]}{P[\\operatorname{accept}]}$$\n\n$$= \\frac{\\int_{(\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})} q(\\mathbf{z}_{0}) P[\\operatorname{accept} | \\mathbf{z}_{0}] d\\mathbf{z}_{0}}{Z_{p}/k}$$\n\n$$= \\int_{(\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})} \\frac{k}{Z_{p}} \\cdot q(\\mathbf{z}_{0}) \\cdot p(\\operatorname{accept} | \\mathbf{z}_{0}) d\\mathbf{z}_{0}$$\n\n$$= \\int_{(\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})} \\frac{k}{Z_{p}} \\cdot q(\\mathbf{z}_{0}) \\cdot \\frac{\\tilde{p}(\\mathbf{z}_{0})}{kq(\\mathbf{z}_{0})} d\\mathbf{z}_{0}$$\n\n$$= \\int_{(\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})} \\frac{1}{Z_{p}} \\cdot \\tilde{p}(\\mathbf{z}_{0}) d\\mathbf{z}_{0}$$\n\n$$= \\int_{(\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})} p(\\mathbf{z}_{0}) d\\mathbf{z}_{0}$$\n\nJust as required. **Several clarifications must be made here**:(1)we have used P[A] to represent the probability of event A occurs, and $p(\\mathbf{z})$ or $q(\\mathbf{z})$ to represent the Probability Density Function (PDF). (2) Please be careful with $P[\\mathbf{x}_0 \\in (\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})] = P[\\mathbf{z}_0 \\in (\\mathbf{x}, \\mathbf{x} + \\boldsymbol{\\epsilon})|$ and this is the key point of this problem.",
"answer_length": 2556
},
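A numerical illustration of the acceptance probability Z_p/k (the choice of target and proposal below is an assumption made for this sketch): the unnormalized target is p~(z) = exp(-z^2/2), so Z_p = sqrt(2*pi); the proposal is a standard Cauchy, and k = 2*pi/sqrt(e) is the smallest constant with k q(z) >= p~(z) (the curves touch at z = +/-1).

```python
import numpy as np

rng = np.random.default_rng(10)

p_tilde = lambda z: np.exp(-0.5 * z ** 2)          # unnormalized target
q = lambda z: 1.0 / (np.pi * (1.0 + z ** 2))       # standard Cauchy proposal
k = 2 * np.pi / np.sqrt(np.e)                      # k q(z) >= p_tilde(z)

z0 = rng.standard_cauchy(size=500000)              # draws from q(z)
u0 = rng.uniform(0.0, k * q(z0))                   # uniform on [0, k q(z0)]
accepted = u0 < p_tilde(z0)

print(accepted.mean())                             # empirical acceptance rate
print(np.sqrt(2 * np.pi) / k)                      # Z_p / k, about 0.66
```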
{
"chapter": 11,
"question_number": "11.7",
"difficulty": "easy",
"question_text": "Suppose that z has a uniform distribution over the interval [0,1]. Show that the variable $y = b \\tan z + c$ has a Cauchy distribution given by (11.16: $q(z) = \\frac{k}{1 + (z - c)^2 / b^2}.$).",
"answer": "Notice that the symbols used in the main text is different from those in the problem description. in the following, we will use those in the main text. Namely, y satisfies a uniform distribution on interval [0,1], and $z = b \\tan y + c$ . Then we aims to prove Eq (11.16: $q(z) = \\frac{k}{1 + (z - c)^2 / b^2}.$). Since we know that:\n\n$$q(z) = p(y) \\cdot |\\frac{dy}{dz}|$$\n\nand that:\n\n$$y = \\arctan \\frac{z-c}{b}$$\n $\\Rightarrow$ $\\frac{dy}{dz} = \\frac{1}{b} \\cdot \\frac{1}{1 + [(z-c)/b]^2}$ \n\nSubstituting it back, we obtain:\n\n$$q(z) = 1 \\cdot \\frac{1}{b} \\cdot \\frac{1}{1 + [(z-c)/b]^2}$$\n\nIn my point of view, Eq (11.16: $q(z) = \\frac{k}{1 + (z - c)^2 / b^2}.$) is an expression for the comparison function kq(z), not the proposal function q(z). If we wish to use Eq (11.16: $q(z) = \\frac{k}{1 + (z - c)^2 / b^2}.$) to express the proposal function, the numerator in Eq (11.16: $q(z) = \\frac{k}{1 + (z - c)^2 / b^2}.$) should be 1/b instead of k. Because the proposal function q(z) is a PDF, it should integrate to 1. However, in rejection sampling, the comparison function is what we actually care about.",
"answer_length": 1112
},
{
"chapter": 11,
"question_number": "11.8",
"difficulty": "medium",
"question_text": "Determine expressions for the coefficients $k_i$ in the envelope distribution (11.17: $q(z) = k_i \\lambda_i \\exp\\{-\\lambda_i (z - z_{i-1})\\} \\qquad z_{i-1} < z \\leqslant z_i.$) for adaptive rejection sampling using the requirements of continuity and normalization.",
"answer": "There is a typo in Eq (11.17: $q(z) = k_i \\lambda_i \\exp\\{-\\lambda_i (z - z_{i-1})\\} \\qquad z_{i-1} < z \\leqslant z_i.$), which is not difficult to observe, if we carefully examine Fig.11.6. The correct form should be:\n\n$$q_i(z) = k_i \\lambda_i \\exp\\{-\\lambda_i (z - z_i)\\}, \\quad \\tilde{z}_{i-1,i} < z \\le \\tilde{z}_{i,i+1}, \\quad \\text{where } i = 1, 2, ..., N$$\n\nHere we use $\\tilde{z}_{i,i+1}$ to represent the intersection point of the i-th and i+1-th envelope, $q_i(z)$ to represent the comparison function of the i-th envelope, and N is the total number of the envelopes. Notice that $\\tilde{z}_{0,1}$ and $\\tilde{z}_{N,N+1}$ could be $-\\infty$ and $\\infty$ correspondingly.\n\nFirst, from Fig.11.6, we see that: $q(z_i) = \\tilde{p}(z_i)$ , substituting the expression above into the equation and yielding:\n\n$$k_i \\lambda_i = \\widetilde{p}(z_i) \\tag{*}$$\n\nOne important thing should be made clear is that we can only evaluate $\\tilde{p}(z)$ at specific point z, but not the normalized PDF p(z). This is the assumption of rejection sampling. For more details, please refer to section 11.1.2.\n\nNotice that $q_i(z)$ and $q_{i+1}(z)$ should have the same value at $\\widetilde{z}_{i,i+1}$ , we obtain:\n\n$$k_i \\lambda_i \\exp\\{-\\lambda_i (\\widetilde{z}_{i,i+1} - z_i)\\} = k_{i+1} \\lambda_{i+1} \\exp\\{-\\lambda_{i+1} (\\widetilde{z}_{i,i+1} - z_{i+1})\\}$$\n\nAfter several rearrangement, we obtain:\n\n$$\\widetilde{z}_{i,i+1} = \\frac{1}{\\lambda_i - \\lambda_{i+1}} \\left\\{ \\ln \\frac{k_i \\lambda_i}{k_{i+1} \\lambda_{i+1}} + \\lambda_i z_i - \\lambda_{i+1} z_{i+1} \\right\\} \\tag{**}$$\n\nBefore moving on, we should make some clarifications: the adaptive rejection sampling begins with several grid points, e.g., $z_1, z_2, ..., z_N$ , and then we evaluate the derivative of $\\widetilde{p}(z)$ at those points, i.e., $\\lambda_1, \\lambda_2, ..., \\lambda_N$ . Then we can easily obtain $k_i$ based on (\\*), and next the intersection points $\\widetilde{z}_{i,i+1}$ based on (\\*\\*).",
"answer_length": 1990
},
{
"chapter": 11,
"question_number": "11.9",
"difficulty": "medium",
"question_text": "By making use of the technique discussed in Section 11.1.1 for sampling from a single exponential distribution, devise an algorithm for sampling from the piecewise exponential distribution defined by (11.17: $q(z) = k_i \\lambda_i \\exp\\{-\\lambda_i (z - z_{i-1})\\} \\qquad z_{i-1} < z \\leqslant z_i.$).",
"answer": "In this problem, we will still use the same notation as in the previous one. First, we need to know the probability of sampling from each segment. Notice that Eq (11.17: $q(z) = k_i \\lambda_i \\exp\\{-\\lambda_i (z - z_{i-1})\\} \\qquad z_{i-1} < z \\leqslant z_i.$) is not correctly normalized, we first calculate its normalization constant $Z_q$ :\n\n$$\\begin{split} Z_{q} &= \\int_{\\widetilde{z}_{0,1}}^{\\widetilde{z}_{N,N+1}} q(z) \\, dz = \\sum_{i=1}^{N} \\int_{\\widetilde{z}_{i-1,i}}^{\\widetilde{z}_{i,i+1}} q_{i}(z_{i}) \\, dz_{i} \\\\ &= \\sum_{i=1}^{N} \\int_{\\widetilde{z}_{i-1,i}}^{\\widetilde{z}_{i,i+1}} k_{i} \\lambda_{i} \\exp\\{-\\lambda_{i}(z-z_{i})\\} \\, dz_{i} \\\\ &= \\sum_{i=1}^{N} -k_{i} \\exp\\{-\\lambda_{i}(z-z_{i})\\} \\Big|_{\\widetilde{z}_{i-1,i}}^{\\widetilde{z}_{i,i+1}} \\\\ &= \\sum_{i=1}^{N} -k_{i} \\left[ \\exp\\{-\\lambda_{i}(\\widetilde{z}_{i,i+1}-z_{i})\\} - \\exp\\{-\\lambda_{i}(\\widetilde{z}_{i-1,i}-z_{i})\\} \\right] = \\sum_{i=1}^{N} \\widehat{k}_{i} \\end{split}$$\n\nWhere we have defined:\n\n$$\\widehat{k}_i = -k_i \\left[ \\exp\\{-\\lambda_i(\\widetilde{z}_{i,i+1} - z_i)\\} - \\exp\\{-\\lambda_i(\\widetilde{z}_{i-1,i} - z_i)\\} \\right] \\tag{*}$$\n\nFrom this derivation, we know that the probability of sampling from the i-th segment is given by $\\widehat{k}_i/Z_q$ , where $Z_q = \\sum_{i=1}^N \\widehat{k}_i$ . Therefore, now we define an auxiliary random variable $\\eta$ , which is uniform in interval [0,1], and then define:\n\n$$i = j$$\n if $\\eta \\in \\left[\\frac{1}{Z_q} \\sum_{m=0}^{j-1} \\hat{k}_m, \\frac{1}{Z_q} \\sum_{m=0}^{j} \\hat{k}_m\\right], \\quad j = 1, 2, ..., N$ (\\*\\*)\n\nWhere we have defined $\\hat{k}_0 = 0$ for convenience. Until now, we have decide the chosen *i*-th segment. Next, we should sample from the *i*-th exponential distribution using the technique in section 11.1.1.. According to Eq (11.6: $z = h(y) \\equiv \\int_{-\\infty}^{y} p(\\widehat{y}) \\,\\mathrm{d}\\widehat{y}$), we\n\ncan write down:\n\n$$\\begin{split} h_i(z) &= \\int_{\\widetilde{z}_{i-1,i}}^z \\frac{q_i(z_i)}{\\widehat{k}_i} dz_i \\\\ &= \\frac{1}{\\widehat{k}_i} \\cdot \\int_{\\widetilde{z}_{i-1,i}}^z k_i \\lambda_i \\exp\\{-\\lambda_i (z-z_i)\\} dz_i \\\\ &= \\frac{-k_i}{\\widehat{k}_i} \\cdot \\exp\\{-\\lambda_i (z-z_i)\\}\\Big|_{\\widetilde{z}_{i-1,i}}^z \\\\ &= \\frac{-k_i}{\\widehat{k}_i} \\cdot \\Big[ \\exp\\{-\\lambda_i (z-z_i)\\} - \\exp\\{-\\lambda_i (\\widetilde{z}_{i-1,i}-z_i)\\} \\Big] \\\\ &= \\frac{k_i}{\\widehat{k}_i} \\cdot \\exp(\\lambda_i z_i) \\Big[ \\exp\\{-\\lambda_i \\widetilde{z}_{i-1,i}\\} - \\exp\\{-\\lambda_i z\\} \\Big] \\end{split}$$\n\nNotice that $q_i(z)$ is not correctly normalized, and $q_i(z)/\\hat{k}_i$ is the correct normalized form. 
After several rearrangements, we can obtain:\n\n$$h_i^{-1}(\xi) = -\frac{1}{\lambda_i} \ln \left[ \exp\{-\lambda_i \widetilde{z}_{i-1,i}\} - \frac{\widehat{k}_i\,\xi}{k_i \exp(\lambda_i z_i)} \right]$$\n\nIn conclusion, we first generate a random variable $\eta$ , which is uniform in interval [0,1], and obtain the value i according to (\*\*); then we generate a random variable $\xi$ , also uniform in interval [0,1], and transform it to z using $z = h_i^{-1}(\xi)$ .\n\nNotice that here, $\lambda_i$ , $\tilde{z}_{i,i+1}$ and $k_i$ can be obtained once the grid points $z_1, z_2, ..., z_N$ are given. For more details, please refer to the previous problem. After these variables are obtained, $\hat{k}_i$ can also be determined using (\*), and thus $h_i^{-1}(\xi)$ can be determined.",
"answer_length": 3763
}
]
},
{
"chapter_number": 12,
"total_questions": 5,
"difficulty_breakdown": {
"easy": 7,
"medium": 8,
"hard": 1,
"unknown": 11
},
"questions": [
{
"chapter": 12,
"question_number": "12.11",
"difficulty": "medium",
"question_text": "Show that in the limit $\\sigma^2 \\to 0$ , the posterior mean for the probabilistic PCA model becomes an orthogonal projection onto the principal subspace, as in conventional PCA.",
"answer": "Taking $\\sigma^2 \\to 0$ in (12.41: $\\mathbf{M} = \\mathbf{W}^{\\mathrm{T}} \\mathbf{W} + \\sigma^{2} \\mathbf{I}.$) and substituting into (12.48: $\\mathbb{E}[\\mathbf{z}|\\mathbf{x}] = \\mathbf{M}^{-1}\\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}}(\\mathbf{x} - \\overline{\\mathbf{x}})$) we obtain the posterior mean for probabilistic PCA in the form\n\n$$(\\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}}\\mathbf{W}_{\\mathrm{ML}})^{-1}\\mathbf{W}_{\\mathrm{ML}}^{\\mathrm{T}}(\\mathbf{x}-\\overline{\\mathbf{x}}).$$\n\nNow substitute for $\\mathbf{W}_{\\mathrm{ML}}$ using (12.45: $\\mathbf{W}_{\\mathrm{ML}} = \\mathbf{U}_{M} (\\mathbf{L}_{M} - \\sigma^{2} \\mathbf{I})^{1/2} \\mathbf{R}$) in which we take $\\mathbf{R} = \\mathbf{I}$ for compatibility with conventional PCA. Using the orthogonality property $\\mathbf{U}_{M}^{\\mathrm{T}}\\mathbf{U}_{M} = \\mathbf{I}$ and setting $\\sigma^{2} = 0$ , this reduces to\n\n$$\\mathbf{L}^{-1/2}\\mathbf{U}_{M}^{\\mathrm{T}}(\\mathbf{x}-\\overline{\\mathbf{x}})$$\n\nwhich is the orthogonal projection is given by the conventional PCA result (12.24: $\\mathbf{y}_n = \\mathbf{L}^{-1/2} \\mathbf{U}^{\\mathrm{T}} (\\mathbf{x}_n - \\overline{\\mathbf{x}})$).",
"answer_length": 1139
},
{
"chapter": 12,
"question_number": "12.28",
"difficulty": "medium",
"question_text": "**www** Use the transformation property (1.27: $= p_{x}(g(y)) |g'(y)|.$) of a probability density under a change of variable to show that any density p(y) can be obtained from a fixed density q(x) that is everywhere nonzero by making a nonlinear change of variable y = f(x) in which f(x) is a monotonic function so that $0 \\le f'(x) < \\infty$ . Write down the differential equation satisfied by f(x) and draw a diagram illustrating the transformation of the density.",
"answer": "If we assume that the function y = f(x) is *strictly* monotonic, which is necessary to exclude the possibility for spikes of infinite density in p(y), we are guaranteed that the inverse function $x = f^{-1}(y)$ exists. We can then use (1.27: $= p_{x}(g(y)) |g'(y)|.$) to write\n\n$$p(y) = q(f^{-1}(y)) \\left| \\frac{\\mathrm{d}f^{-1}}{\\mathrm{d}y} \\right|. \\tag{147}$$\n\nSince the only restriction on f is that it is monotonic, it can distribute the probability mass over x arbitrarily over y. This is illustrated on page 8, as a part of Solution 1.4. From (147) we see directly that\n\n$$|f'(x)| = \\frac{q(x)}{p(f(x))}.$$",
"answer_length": 617
},
{
"chapter": 12,
"question_number": "12.3",
"difficulty": "easy",
"question_text": "Verify that the eigenvectors defined by (12.30: $\\mathbf{u}_i = \\frac{1}{(N\\lambda_i)^{1/2}} \\mathbf{X}^{\\mathrm{T}} \\mathbf{v}_i.$) are normalized to unit length, assuming that the eigenvectors $\\mathbf{v}_i$ have unit length.",
"answer": "According to Eq (12.30: $\\mathbf{u}_i = \\frac{1}{(N\\lambda_i)^{1/2}} \\mathbf{X}^{\\mathrm{T}} \\mathbf{v}_i.$), we can obtain:\n\n$$\\mathbf{u}_i^T \\mathbf{u}_i = \\frac{1}{N\\lambda_i} \\mathbf{v}_i^T \\mathbf{X} \\mathbf{X}^T \\mathbf{v}_i$$\n\nWe left multiply $\\mathbf{v}_i^T$ on both sides of Eq (12.28: $\\frac{1}{N} \\mathbf{X} \\mathbf{X}^{\\mathrm{T}} \\mathbf{v}_i = \\lambda_i \\mathbf{v}_i$), yielding:\n\n$$\\frac{1}{N} \\mathbf{v}_i^T \\mathbf{X} \\mathbf{X}^T \\mathbf{v}_i = \\lambda_i \\mathbf{v}_i^T \\mathbf{v}_i = \\lambda_i ||\\mathbf{v}_i||^2 = \\lambda_i$$\n\nHere we have used the fact that $\\mathbf{v}_i$ has unit length. Substituting it back into $\\mathbf{u}_i^T \\mathbf{u}_i$ , we can obtain:\n\n$$\\mathbf{u}_i^T \\mathbf{u}_i = 1$$\n\nJust as required.",
"answer_length": 745
},
{
"chapter": 12,
"question_number": "12.4",
"difficulty": "easy",
"question_text": "- 12.4 (\\*) www Suppose we replace the zero-mean, unit-covariance latent space distribution (12.31: $p(\\mathbf{z}) = \\mathcal{N}(\\mathbf{z}|\\mathbf{0}, \\mathbf{I}).$) in the probabilistic PCA model by a general Gaussian distribution of the form $\\mathcal{N}(\\mathbf{z}|\\mathbf{m}, \\boldsymbol{\\Sigma})$ . By redefining the parameters of the model, show that this leads to an identical model for the marginal distribution $p(\\mathbf{x})$ over the observed variables for any valid choice of $\\mathbf{m}$ and $\\boldsymbol{\\Sigma}$ .",
"answer": "We know $p(\\mathbf{z}) = \\mathcal{N}(\\mathbf{z}|\\mathbf{m}, \\boldsymbol{\\Sigma})$ , and $p(\\mathbf{x}|\\mathbf{z}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{W}\\mathbf{z} + \\boldsymbol{\\mu}, \\sigma^2\\mathbf{I})$ . According to Eq (2.113)-(2.115), we have:\n\n$$p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{W}\\mathbf{m} + \\boldsymbol{\\mu}, \\sigma^2 \\mathbf{I} + \\mathbf{W}\\boldsymbol{\\Sigma}\\mathbf{W}^T) = \\mathcal{N}(\\mathbf{x}|\\hat{\\boldsymbol{\\mu}}, \\sigma^2 \\mathbf{I} + \\hat{\\mathbf{W}}\\hat{\\mathbf{W}}^T)$$\n\nwhere we have defined:\n\n$$\\hat{\\boldsymbol{\\mu}} = \\mathbf{Wm} + \\boldsymbol{\\mu}$$\n\nand\n\n$$\\widehat{\\boldsymbol{W}} = \\boldsymbol{W} \\boldsymbol{\\Sigma}^{1/2}$$\n\nTherefore, in the general case, the final form of $p(\\mathbf{x})$ can still be written as Eq (12.35: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\mathbf{C})$).\n\n# **Problem 12.5 Solution**",
"answer_length": 872
},
{
"chapter": 12,
"question_number": "12.6",
"difficulty": "easy",
"question_text": "Draw a directed probabilistic graph for the probabilistic PCA model described in Section 12.2 in which the components of the observed variable x are shown explicitly as separate nodes. Hence verify that the probabilistic PCA model has the same independence structure as the naive Bayes model discussed in Section 8.2.2.",
"answer": "Omitting the parameters, W, $\\mu$ and $\\sigma$ , leaving only the stochastic variables z and x, the graphical model for probabilistic PCA is identical with the the 'naive Bayes' model24 in Section 8.2.2. Hence these two models exhibit the same independence structure.",
"answer_length": 270
}
]
},
{
"chapter_number": 13,
"total_questions": 7,
"difficulty_breakdown": {
"easy": 7,
"medium": 8,
"hard": 3,
"unknown": 15
},
"questions": [
{
"chapter": 13,
"question_number": "13.1",
"difficulty": "easy",
"question_text": "Use the technique of d-separation, discussed in Section 8.2, to verify that the Markov model shown in Figure 13.3 having N nodes in total satisfies the conditional independence properties (13.3: $p(\\mathbf{x}_n|\\mathbf{x}_1,\\dots,\\mathbf{x}_{n-1}) = p(\\mathbf{x}_n|\\mathbf{x}_{n-1})$) for n = 2, ..., N. Similarly, show that a model described by the graph in Figure 13.4 in which there are N nodes in total\n\n\n\nFigure 13.23 Schematic illustration of the operation of the particle filter for a one-dimensional latent space. At time step n, the posterior $p(z_n|\\mathbf{x}_n)$ is represented as a mixture distribution, shown schematically as circles whose sizes are proportional to the weights $w_n^{(l)}$ . A set of L samples is then drawn from this distribution and the new weights $w_{n+1}^{(l)}$ evaluated using $p(\\mathbf{x}_{n+1}|\\mathbf{z}_{n+1}^{(l)})$ .\n\nsatisfies the conditional independence properties\n\n$$p(\\mathbf{x}_n|\\mathbf{x}_1,\\dots,\\mathbf{x}_{n-1}) = p(\\mathbf{x}_n|\\mathbf{x}_{n-1},\\mathbf{x}_{n-2})$$\n(13.122: $p(\\mathbf{x}_n|\\mathbf{x}_1,\\dots,\\mathbf{x}_{n-1}) = p(\\mathbf{x}_n|\\mathbf{x}_{n-1},\\mathbf{x}_{n-2})$)\n\nfor n = 3, ..., N.",
"answer": "Since the arrows on the path from $x_m$ to $x_n$ , with m < n - 1, will meet head-to-tail at $x_{n-1}$ , which is in the conditioning set, all such paths are blocked by $x_{n-1}$ and hence (13.3: $p(\\mathbf{x}_n|\\mathbf{x}_1,\\dots,\\mathbf{x}_{n-1}) = p(\\mathbf{x}_n|\\mathbf{x}_{n-1})$) holds.\n\nThe same argument applies in the case depicted4, with the modification that m < n - 2 and that paths are blocked by $x_{n-1}$ or $x_{n-2}$ .",
"answer_length": 443
},
{
"chapter": 13,
"question_number": "13.13",
"difficulty": "medium",
"question_text": "- 13.13 (\\*\\*) www Use the definition (8.64: $\\mu_{f_s \\to x}(x) \\equiv \\sum_{X_s} F_s(x, X_s)$) of the messages passed from a factor node to a variable node in a factor graph, together with the expression (13.6: $p(\\mathbf{x}_1, \\dots, \\mathbf{x}_N, \\mathbf{z}_1, \\dots, \\mathbf{z}_N) = p(\\mathbf{z}_1) \\left[ \\prod_{n=2}^N p(\\mathbf{z}_n | \\mathbf{z}_{n-1}) \\right] \\prod_{n=1}^N p(\\mathbf{x}_n | \\mathbf{z}_n).$) for the joint distribution in a hidden Markov model, to show that the definition (13.50: $\\alpha(\\mathbf{z}_n) = \\mu_{f_n \\to \\mathbf{z}_n}(\\mathbf{z}_n)$) of the alpha message is the same as the definition (13.34: $\\alpha(\\mathbf{z}_n) \\equiv p(\\mathbf{x}_1, \\dots, \\mathbf{x}_n, \\mathbf{z}_n)$).",
"answer": "Using (8.64: $\\mu_{f_s \\to x}(x) \\equiv \\sum_{X_s} F_s(x, X_s)$), we can rewrite (13.50: $\\alpha(\\mathbf{z}_n) = \\mu_{f_n \\to \\mathbf{z}_n}(\\mathbf{z}_n)$) as\n\n$$\\alpha(\\mathbf{z}_n) = \\sum_{\\mathbf{z}_{n-1}} F_n(\\mathbf{z}_n, \\{\\mathbf{z}_1, \\dots, \\mathbf{z}_{n-1}\\}), \\tag{148}$$\n\nwhere $F_n(\\cdot)$ is the product of all factors connected to $\\mathbf{z}_n$ via $f_n$ , including $f_n$ itself (15), so that\n\n$$F_n(\\mathbf{z}_n, {\\mathbf{z}_1, \\dots, \\mathbf{z}_{n-1}}) = h(\\mathbf{z}_1) \\prod_{i=2}^n f_i(\\mathbf{z}_i, \\mathbf{z}_{i-1}),$$\n (149)\n\nwhere we have introduced $h(\\mathbf{z}_1)$ and $f_i(\\mathbf{z}_i, \\mathbf{z}_{i-1})$ from (13.45: $h(\\mathbf{z}_1) = p(\\mathbf{z}_1)p(\\mathbf{x}_1|\\mathbf{z}_1)$) and (13.46: $f_n(\\mathbf{z}_{n-1}, \\mathbf{z}_n) = p(\\mathbf{z}_n | \\mathbf{z}_{n-1}) p(\\mathbf{x}_n | \\mathbf{z}_n).$), respectively. Using the corresponding r.h.s. definitions and repeatedly applying the product rule, we can rewrite (149) as\n\n$$F_n(\\mathbf{z}_n, {\\mathbf{z}_1, \\dots, \\mathbf{z}_{n-1}}) = p(\\mathbf{x}_1, \\dots, \\mathbf{x}_n, \\mathbf{z}_1, \\dots, \\mathbf{z}_n).$$\n\nApplying the sum rule, summing over $\\mathbf{z}_1, \\dots, \\mathbf{z}_{n-1}$ as on the r.h.s. of (148), we obtain (13.34: $\\alpha(\\mathbf{z}_n) \\equiv p(\\mathbf{x}_1, \\dots, \\mathbf{x}_n, \\mathbf{z}_n)$).",
"answer_length": 1314
},
{
"chapter": 13,
"question_number": "13.17",
"difficulty": "easy",
"question_text": "Show that the directed graph for the input-output hidden Markov model, given in Figure 13.18, can be expressed as a tree-structured factor graph of the form shown in Figure 13.15 and write down expressions for the initial factor $h(\\mathbf{z}_1)$ and for the general factor $f_n(\\mathbf{z}_{n-1}, \\mathbf{z}_n)$ where $2 \\le n \\le N$ .",
"answer": "The emission probabilities over observed variables $\\mathbf{x}_n$ are absorbed into the corresponding factors, $f_n$ , analogously to the way in which14 was transformed into15. The factors then take the form\n\n$$h(\\mathbf{z}_1) = p(\\mathbf{z}_1|\\mathbf{u}_1)p(\\mathbf{x}_1|\\mathbf{z}_1,\\mathbf{u}_1)$$\n (150)\n\n$$f_n(\\mathbf{z}_{n-1}, \\mathbf{z}_n) = p(\\mathbf{z}_n | \\mathbf{z}_{n-1}, \\mathbf{u}_n) p(\\mathbf{x}_n | \\mathbf{z}_n, \\mathbf{u}_n).$$\n (151)",
"answer_length": 455
},
{
"chapter": 13,
"question_number": "13.22",
"difficulty": "medium",
"question_text": "Using (13.93: $c_1\\widehat{\\alpha}(\\mathbf{z}_1) = p(\\mathbf{z}_1)p(\\mathbf{x}_1|\\mathbf{z}_1).$), together with the definitions (13.76: $p(\\mathbf{x}_n|\\mathbf{z}_n) = \\mathcal{N}(\\mathbf{x}_n|\\mathbf{C}\\mathbf{z}_n, \\mathbf{\\Sigma}).$) and (13.77: $p(\\mathbf{z}_1) = \\mathcal{N}(\\mathbf{z}_1 | \\boldsymbol{\\mu}_0, \\mathbf{V}_0).$) and the result (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), derive (13.96: $c_1 = \\mathcal{N}(\\mathbf{x}_1 | \\mathbf{C}\\boldsymbol{\\mu}_0, \\mathbf{C}\\mathbf{V}_0\\mathbf{C}^{\\mathrm{T}} + \\boldsymbol{\\Sigma})$).",
"answer": "Using (13.76: $p(\\mathbf{x}_n|\\mathbf{z}_n) = \\mathcal{N}(\\mathbf{x}_n|\\mathbf{C}\\mathbf{z}_n, \\mathbf{\\Sigma}).$), (13.77: $p(\\mathbf{z}_1) = \\mathcal{N}(\\mathbf{z}_1 | \\boldsymbol{\\mu}_0, \\mathbf{V}_0).$) and (13.84: $\\widehat{\\alpha}(\\mathbf{z}_n) = \\mathcal{N}(\\mathbf{z}_n | \\boldsymbol{\\mu}_n, \\mathbf{V}_n).$), we can write (13.93: $c_1\\widehat{\\alpha}(\\mathbf{z}_1) = p(\\mathbf{z}_1)p(\\mathbf{x}_1|\\mathbf{z}_1).$), for the case n = 1, as\n\n$$c_1 \\mathcal{N}(\\mathbf{z}_1 | \\boldsymbol{\\mu}_1, \\mathbf{V}_1) = \\mathcal{N}(\\mathbf{z}_1 | \\boldsymbol{\\mu}_0, \\mathbf{V}_0) \\mathcal{N}(\\mathbf{x}_1 | \\mathbf{C} \\mathbf{z}_1, \\boldsymbol{\\Sigma}).$$\n\nThe r.h.s. define the joint probability distribution over $\\mathbf{x}_1$ and $\\mathbf{z}_1$ in terms of a conditional distribution over $\\mathbf{x}_1$ given $\\mathbf{z}_1$ and a distribution over $\\mathbf{z}_1$ , corresponding to (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$) and (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$), respectively. What we need to do is to rewrite this into a conditional distribution over $\\mathbf{z}_1$ given $\\mathbf{x}_1$ and a distribution over $\\mathbf{x}_1$ , corresponding to (2.116: $p(\\mathbf{x}|\\mathbf{y}) = \\mathcal{N}(\\mathbf{x}|\\mathbf{\\Sigma}\\{\\mathbf{A}^{\\mathrm{T}}\\mathbf{L}(\\mathbf{y}-\\mathbf{b}) + \\mathbf{\\Lambda}\\boldsymbol{\\mu}\\}, \\mathbf{\\Sigma})$) and (2.115: $p(\\mathbf{y}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\boldsymbol{\\mu} + \\mathbf{b}, \\mathbf{L}^{-1} + \\mathbf{A}\\boldsymbol{\\Lambda}^{-1}\\mathbf{A}^{\\mathrm{T}})$), respectively.\n\nIf we make the substitutions\n\n$$\\mathbf{x} \\Rightarrow \\mathbf{z}_1 \\quad \\boldsymbol{\\mu} \\Rightarrow \\boldsymbol{\\mu}_0 \\quad \\boldsymbol{\\Lambda}^{-1} \\Rightarrow \\mathbf{V}_0$$\n\n$$\\mathbf{y}\\Rightarrow\\mathbf{x}_1\\quad\\mathbf{A}\\Rightarrow\\mathbf{C}\\quad\\mathbf{b}\\Rightarrow\\mathbf{0}\\quad\\mathbf{L}^{-1}\\Rightarrow\\mathbf{\\Sigma},$$\n\nin (2.113: $p(\\mathbf{x}) = \\mathcal{N}(\\mathbf{x}|\\boldsymbol{\\mu}, \\boldsymbol{\\Lambda}^{-1})$) and (2.114: $p(\\mathbf{y}|\\mathbf{x}) = \\mathcal{N}(\\mathbf{y}|\\mathbf{A}\\mathbf{x} + \\mathbf{b}, \\mathbf{L}^{-1})$), (2.115) directly gives us the r.h.s. of (13.96).\n\n13.24 This extension can be embedded in the existing framework by adopting the following modifications:\n\n$$m{\\mu}_0' = \\left[ egin{array}{c} m{\\mu}_0 \\ 1 \\end{array}\night] \\quad \\mathbf{V}_0' = \\left[ egin{array}{cc} \\mathbf{V}_0 & \\mathbf{0} \\ \\mathbf{0} & 0 \\end{array}\night] \\quad \\mathbf{\\Gamma}' = \\left[ egin{array}{cc} m{\\Gamma} & \\mathbf{0} \\ \\mathbf{0} & 0 \\end{array}\night]$$\n\n$$\\mathbf{A}' = \\left[ \\begin{array}{cc} \\mathbf{A} & \\mathbf{a} \\\\ \\mathbf{0} & 1 \\end{array} \\right] \\quad \\mathbf{C}' = \\left[ \\begin{array}{cc} \\mathbf{C} & \\mathbf{c} \\end{array} \\right].$$\n\nThis will ensure that the constant terms **a** and **c** are included in the corresponding Gaussian means for $\\mathbf{z}_n$ and $\\mathbf{x}_n$ for $n = 1, \\dots, N$ .\n\nNote that the resulting covariances for $\\mathbf{z}_n$ , $\\mathbf{V}_n$ , will be singular, as will the corresponding prior covariances, $\\mathbf{P}_{n-1}$ . This will, however, only be a problem where these matrices need to be inverted, such as in (13.102). 
These cases must be handled separately, using the 'inversion' formula\n\n$$(\\mathbf{P}_{n-1}')^{-1} = \\left[ \\begin{array}{cc} \\mathbf{P}_{n-1}^{-1} & \\mathbf{0} \\\\ \\mathbf{0} & 0 \\end{array} \\right],$$\n\nnullifying the contribution from the (non-existent) variance of the element in $\\mathbf{z}_n$ that accounts for the constant terms $\\mathbf{a}$ and $\\mathbf{c}$ .",
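"code_example": "A small Monte Carlo check of (13.96), with assumed low-dimensional parameter values: samples of x_1 generated from p(z_1) p(x_1 | z_1) have empirical mean and covariance matching N(x_1 | C mu0, C V0 C^T + Sigma).\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(4)\n\n# Assumed (illustrative) model parameters\ndz, dx = 2, 3\nmu0 = np.array([0.5, -1.0])\nV0 = np.array([[1.0, 0.3], [0.3, 0.5]])\nC = rng.standard_normal((dx, dz))\nSigma = 0.2 * np.eye(dx)\n\n# Draw z_1 ~ N(mu0, V0), then x_1 ~ N(C z_1, Sigma)\nn = 200000\nz1 = rng.multivariate_normal(mu0, V0, size=n)\nx1 = z1 @ C.T + rng.multivariate_normal(np.zeros(dx), Sigma, size=n)\n\nprint(np.max(np.abs(x1.mean(axis=0) - C @ mu0)))              # small (Monte Carlo noise)\nprint(np.max(np.abs(np.cov(x1.T) - (C @ V0 @ C.T + Sigma))))  # small (Monte Carlo noise)\n```",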
"answer_length": 3668
},
{
"chapter": 13,
"question_number": "13.4",
"difficulty": "medium",
"question_text": "Consider a hidden Markov model in which the emission densities are represented by a parametric model $p(\\mathbf{x}|\\mathbf{z},\\mathbf{w})$ , such as a linear regression model or a neural network, in which $\\mathbf{w}$ is a vector of adaptive parameters. Describe how the parameters $\\mathbf{w}$ can be learned from data using maximum likelihood.",
"answer": "The learning of w would follow the scheme for maximum learning described in Section 13.2.1, with w replacing $\\phi$ . As discussed towards the end of Section 13.2.1, the precise update formulae would depend on the form of regression model used and how it is being used.\n\nThe most obvious situation where this would occur is in a HMM similar to that depicted18, where the emmission densities not only depends on the latent variable **z**, but also on some input variable **u**. The regression model could then be used to map **u** to **x**, depending on the state of the latent variable **z**.\n\nNote that when a nonlinear regression model, such as a neural network, is used, the M-step for w may not have closed form.",
"answer_length": 717
},
{
"chapter": 13,
"question_number": "13.8",
"difficulty": "medium",
"question_text": "- 13.8 (\\*\\*) www For a hidden Markov model having discrete observations governed by a multinomial distribution, show that the conditional distribution of the observations given the hidden variables is given by (13.22: $p(\\mathbf{x}|\\mathbf{z}) = \\prod_{i=1}^{D} \\prod_{k=1}^{K} \\mu_{ik}^{x_i z_k}$) and the corresponding M step equations are given by (13.23: $\\mu_{ik} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}}{\\sum_{n=1}^{N} \\gamma(z_{nk})}.$). Write down the analogous equations for the conditional distribution and the M step equations for the case of a hidden Markov with multiple binary output variables each of which is governed by a Bernoulli conditional distribution. Hint: refer to Sections 2.1 and 2.2 for a discussion of the corresponding maximum likelihood solutions for i.i.d. data if required.",
"answer": "Only the final term of $Q(\\theta, \\theta^{\\text{old}})$ given by (13.17: $Q(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{\\text{old}}) = \\sum_{k=1}^{K} \\gamma(z_{1k}) \\ln \\pi_k + \\sum_{n=2}^{N} \\sum_{j=1}^{K} \\sum_{k=1}^{K} \\xi(z_{n-1,j}, z_{nk}) \\ln A_{jk} + \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\ln p(\\mathbf{x}_n | \\boldsymbol{\\phi}_k).$) depends on the parameters of the emission model. For the multinomial variable $\\mathbf{x}$ , whose D components are all zero except for a single entry of 1,\n\n$$\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\ln p(\\mathbf{x}_n | \\boldsymbol{\\phi}_k) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\sum_{i=1}^{D} x_{ni} \\ln \\mu_{ki}.$$\n\nNow when we maximize with respect to $\\mu_{ki}$ we have to take account of the constraints that, for each value of k the components of $\\mu_{ki}$ must sum to one. We therefore introduce Lagrange multipliers $\\{\\lambda_k\\}$ and maximize the modified function given by\n\n$$\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\sum_{i=1}^{D} x_{ni} \\ln \\mu_{ki} + \\sum_{k=1}^{K} \\lambda_k \\left( \\sum_{i=1}^{D} \\mu_{ki} - 1 \\right).$$\n\nSetting the derivative with respect to $\\mu_{ki}$ to zero we obtain\n\n$$0 = \\sum_{n=1}^{N} \\gamma(z_{nk}) \\frac{x_{ni}}{\\mu_{ki}} + \\lambda_k.$$\n\nMultiplying through by $\\mu_{ki}$ , summing over i, and making use of the constraint on $\\mu_{ki}$ together with the result $\\sum_i x_{ni} = 1$ we have\n\n$$\\lambda_k = -\\sum_{n=1}^N \\gamma(z_{nk}).$$\n\nFinally, back-substituting for $\\lambda_k$ and solving for $\\mu_{ki}$ we again obtain (13.23: $\\mu_{ik} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}}{\\sum_{n=1}^{N} \\gamma(z_{nk})}.$).\n\nSimilarly, for the case of a multivariate Bernoulli observed variable ${\\bf x}$ whose D components independently take the value 0 or 1, using the standard expression for the multivariate Bernoulli distribution we have\n\n$$\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\ln p(\\mathbf{x}_n | \\boldsymbol{\\phi}_k)$$\n\n$$= \\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma(z_{nk}) \\sum_{i=1}^{D} \\left\\{ x_{ni} \\ln \\mu_{ki} + (1 - x_{ni}) \\ln(1 - \\mu_{ki}) \\right\\}.$$\n\nMaximizing with respect to $\\mu_{ki}$ we obtain\n\n$$\\mu_{ki} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}}{\\sum_{n=1}^{N} \\gamma(z_{nk})}$$\n\nwhich is equivalent to (13.23: $\\mu_{ik} = \\frac{\\sum_{n=1}^{N} \\gamma(z_{nk}) x_{ni}}{\\sum_{n=1}^{N} \\gamma(z_{nk})}.$).",
"answer_length": 2368
},
{
"chapter": 13,
"question_number": "13.9",
"difficulty": "medium",
"question_text": "Use the d-separation criterion to verify that the conditional independence properties (13.24)–(13.31) are satisfied by the joint distribution for the hidden Markov model defined by (13.6: $p(\\mathbf{x}_1, \\dots, \\mathbf{x}_N, \\mathbf{z}_1, \\dots, \\mathbf{z}_N) = p(\\mathbf{z}_1) \\left[ \\prod_{n=2}^N p(\\mathbf{z}_n | \\mathbf{z}_{n-1}) \\right] \\prod_{n=1}^N p(\\mathbf{x}_n | \\mathbf{z}_n).$).",
"answer": "We can verify all these independence properties using d-separation by refering to5.\n\n(13.24: $p(\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n}) \\qquad$) follows from the fact that arrows on paths from any of $\\mathbf{x}_1, \\dots, \\mathbf{x}_n$ to any of $\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_N$ meet head-to-tail or tail-to-tail at $\\mathbf{z}_n$ , which is in the conditioning set.\n\n(13.25: $p(\\mathbf{x}_{1}, \\dots, \\mathbf{x}_{n-1}|\\mathbf{x}_{n}, \\mathbf{z}_{n}) = p(\\mathbf{x}_{1}, \\dots, \\mathbf{x}_{n-1}|\\mathbf{z}_{n}) \\qquad$) follows from the fact that arrows on paths from any of $\\mathbf{x}_1, \\dots, \\mathbf{x}_{n-1}$ to $\\mathbf{x}_n$ meet head-to-tail at $\\mathbf{z}_n$ , which is in the conditioning set.\n\n(13.26: $p(\\mathbf{x}_{1}, \\dots, \\mathbf{x}_{n-1}|\\mathbf{z}_{n-1}, \\mathbf{z}_{n}) = p(\\mathbf{x}_{1}, \\dots, \\mathbf{x}_{n-1}|\\mathbf{z}_{n-1}) \\qquad$) follows from the fact that arrows on paths from any of $\\mathbf{x}_1, \\dots, \\mathbf{x}_{n-1}$ to $\\mathbf{z}_n$ meet head-to-tail or tail-to-tail at $\\mathbf{z}_{n-1}$ , which is in the conditioning set.\n\n(13.27: $p(\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n}, \\mathbf{z}_{n+1}) = p(\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n+1}) \\qquad$) follows from the fact that arrows on paths from $\\mathbf{z}_n$ to any of $\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_N$ meet head-to-tail at $\\mathbf{z}_{n+1}$ , which is in the conditioning set.\n\n(13.28: $p(\\mathbf{x}_{n+2}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n+1}, \\mathbf{x}_{n+1}) = p(\\mathbf{x}_{n+2}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n+1}) \\qquad$) follows from the fact that arrows on paths from $\\mathbf{x}_{n+1}$ to any of $\\mathbf{x}_{n+2}, \\dots, \\mathbf{x}_N$ to meet tail-to-tail at $\\mathbf{z}_{n+1}$ , which is in the conditioning set.\n\n(13.29: $p(\\mathbf{x}_{n}|\\mathbf{z}_{n})p(\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n}) \\qquad$) follows from (13.24: $p(\\mathbf{x}_{n+1}, \\dots, \\mathbf{x}_{N}|\\mathbf{z}_{n}) \\qquad$) and the fact that arrows on paths from any of $\\mathbf{x}_1, \\ldots, \\mathbf{x}_{n-1}$ to $\\mathbf{x}_n$ meet head-to-tail or tail-to-tail at $\\mathbf{z}_{n-1}$ , which is in the conditioning set.\n\n(13.30: $p(\\mathbf{x}_{N+1}|\\mathbf{X}, \\mathbf{z}_{N+1}) = p(\\mathbf{x}_{N+1}|\\mathbf{z}_{N+1}) \\qquad$) follows from the fact that arrows on paths from any of $\\mathbf{x}_1, \\dots, \\mathbf{x}_N$ to $\\mathbf{x}_{N+1}$ meet head-to-tail at $\\mathbf{z}_{N+1}$ , which is in the conditioning set.\n\n(13.31: $p(\\mathbf{z}_{N+1}|\\mathbf{z}_{N}, \\mathbf{X}) = p(\\mathbf{z}_{N+1}|\\mathbf{z}_{N}) \\qquad$) follows from the fact that arrows on paths from any of $\\mathbf{x}_1, \\dots, \\mathbf{x}_N$ to $\\mathbf{z}_{N+1}$ meet head-to-tail or tail-to-tail at $\\mathbf{z}_N$ , which is in the conditioning set.",
"answer_length": 2825
}
]
},
{
"chapter_number": 14,
"total_questions": 8,
"difficulty_breakdown": {
"easy": 10,
"medium": 3,
"hard": 1,
"unknown": 3
},
"questions": [
{
"chapter": 14,
"question_number": "14.1",
"difficulty": "medium",
"question_text": "- 14.1 (\\*\\*) www Consider a set models of the form $p(\\mathbf{t}|\\mathbf{x}, \\mathbf{z}_h, \\boldsymbol{\\theta}_h, h)$ in which $\\mathbf{x}$ is the input vector, $\\mathbf{t}$ is the target vector, h indexes the different models, $\\mathbf{z}_h$ is a latent variable for model h, and $\\boldsymbol{\\theta}_h$ is the set of parameters for model h. Suppose the models have prior probabilities p(h) and that we are given a training set $\\mathbf{X} = \\{\\mathbf{x}_1, \\dots, \\mathbf{x}_N\\}$ and $\\mathbf{T} = \\{\\mathbf{t}_1, \\dots, \\mathbf{t}_N\\}$ . Write down the formulae needed to evaluate the predictive distribution $p(\\mathbf{t}|\\mathbf{x}, \\mathbf{X}, \\mathbf{T})$ in which the latent variables and the model index are marginalized out. Use these formulae to highlight the difference between Bayesian averaging of different models and the use of latent variables within a single model.",
"answer": "The required predictive distribution is given by\n\n$$p(\\mathbf{t}|\\mathbf{x}, \\mathbf{X}, \\mathbf{T}) = \\sum_{h} p(h) \\sum_{\\mathbf{z}_{h}} p(\\mathbf{z}_{h}) \\int p(\\mathbf{t}|\\mathbf{x}, \\boldsymbol{\\theta}_{h}, \\mathbf{z}_{h}, h) p(\\boldsymbol{\\theta}_{h}|\\mathbf{X}, \\mathbf{T}, h) d\\boldsymbol{\\theta}_{h}, \\quad (154)$$\n\nwhere\n\n$$p(\\boldsymbol{\\theta}_{h}|\\mathbf{X}, \\mathbf{T}, h) = \\frac{p(\\mathbf{T}|\\mathbf{X}, \\boldsymbol{\\theta}_{h}, h)p(\\boldsymbol{\\theta}_{h}|h)}{p(\\mathbf{T}|\\mathbf{X}, h)}$$\n\n$$\\propto p(\\boldsymbol{\\theta}|h) \\prod_{n=1}^{N} p(\\mathbf{t}_{n}|\\mathbf{x}_{n}, \\boldsymbol{\\theta}, h)$$\n\n$$= p(\\boldsymbol{\\theta}|h) \\prod_{n=1}^{N} \\left( \\sum_{\\mathbf{z}_{nh}} p(\\mathbf{t}_{n}, \\mathbf{z}_{nh}|\\mathbf{x}_{n}, \\boldsymbol{\\theta}, h) \\right)$$\n(155)\n\nThe integrals and summations in (154) are examples of Bayesian averaging, accounting for the uncertainty about which model, h, is the correct one, the value of the corresponding parameters, $\\theta_h$ , and the state of the latent variable, $\\mathbf{z}_h$ . The summation in (155), on the other hand, is an example of the use of latent variables, where different data points correspond to different latent variable states, although all the data are assumed to have been generated by a single model, h.",
"answer_length": 1289
},
{
"chapter": 14,
"question_number": "14.13",
"difficulty": "easy",
"question_text": "Verify that the complete-data log likelihood function for the mixture of linear regression models is given by (14.36: $\\ln p(\\mathbf{t}, \\mathbf{Z}|\\boldsymbol{\\theta}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\ln \\left\\{ \\pi_k \\mathcal{N}(t_n | \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}_n, \\beta^{-1}) \\right\\}.$).",
"answer": "Starting from the mixture distribution in (14.34: $p(t|\\boldsymbol{\\theta}) = \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(t|\\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}, \\beta^{-1})$), we follow the same steps as for mixtures of Gaussians, presented in Section 9.2. We introduce a *K*-nomial latent variable, **z**, such that the joint distribution over **z** and *t* equals\n\n$$p(t, \\mathbf{z}) = p(t|\\mathbf{z})p(\\mathbf{z}) = \\prod_{k=1}^{K} \\left( \\mathcal{N} \\left( t | \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}, \\beta^{-1} \\right) \\pi_k \\right)^{z_k}.$$\n\nGiven a set of observations, $\\{(t_n, \\phi_n)\\}_{n=1}^N$ , we can write the complete likelihood over these observations and the corresponding $\\mathbf{z}_1, \\dots, \\mathbf{z}_N$ , as\n\n$$\\prod_{n=1}^{N} \\prod_{k=1}^{K} \\left( \\pi_k \\mathcal{N}(t_n | \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}_n, \\beta^{-1}) \\right)^{z_{nk}}.$$\n\nTaking the logarithm, we obtain (14.36: $\\ln p(\\mathbf{t}, \\mathbf{Z}|\\boldsymbol{\\theta}) = \\sum_{n=1}^{N} \\sum_{k=1}^{K} z_{nk} \\ln \\left\\{ \\pi_k \\mathcal{N}(t_n | \\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}_n, \\beta^{-1}) \\right\\}.$).",
"answer_length": 1118
},
{
"chapter": 14,
"question_number": "14.15",
"difficulty": "easy",
"question_text": "- 14.15 (\\*) www We have already noted that if we use a squared loss function in a regression problem, the corresponding optimal prediction of the target variable for a new input vector is given by the conditional mean of the predictive distribution. Show that the conditional mean for the mixture of linear regression models discussed in Section 14.5.1 is given by a linear combination of the means of each component distribution. Note that if the conditional distribution of the target data is multimodal, the conditional mean can give poor predictions.",
"answer": "The predictive distribution from the mixture of linear regression models for a new input feature vector, $\\hat{\\phi}$ , is obtained from (14.34: $p(t|\\boldsymbol{\\theta}) = \\sum_{k=1}^{K} \\pi_k \\mathcal{N}(t|\\mathbf{w}_k^{\\mathrm{T}} \\boldsymbol{\\phi}, \\beta^{-1})$), with $\\phi$ replaced by $\\hat{\\phi}$ . Calculating the expectation of t under this distribution, we obtain\n\n$$\\mathbb{E}[t|\\widehat{\\boldsymbol{\\phi}},\\boldsymbol{\\theta}] = \\sum_{k=1}^{K} \\pi_k \\mathbb{E}[t|\\widehat{\\boldsymbol{\\phi}}, \\mathbf{w}_k, \\beta].$$\n\nDepending on the parameters, this expectation is potentially K-modal, with one mode for each mixture component. However, the weighted combination of these modes output by the mixture model may not be close to any single mode. For example, the combination of the two modes in the left panel of9 will end up in between the two modes, a region with no signicant probability mass.",
"answer_length": 910
},
{
"chapter": 14,
"question_number": "14.17",
"difficulty": "medium",
"question_text": "Consider a mixture model for a conditional distribution $p(t|\\mathbf{x})$ of the form\n\n$$p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\psi_k(t|\\mathbf{x})$$\n(14.58: $p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\psi_k(t|\\mathbf{x})$)\n\nin which each mixture component $\\psi_k(t|\\mathbf{x})$ is itself a mixture model. Show that this two-level hierarchical mixture is equivalent to a conventional single-level mixture model. Now suppose that the mixing coefficients in both levels of such a hierarchical model are arbitrary functions of $\\mathbf{x}$ . Again, show that this hierarchical model is again equivalent to a single-level model with $\\mathbf{x}$ -dependent mixing coefficients. Finally, consider the case in which the mixing coefficients at both levels of the hierarchical mixture are constrained to be linear classification (logistic or softmax) models. Show that the hierarchical mixture cannot in general be represented by a single-level mixture having linear classification models for the mixing coefficients. Hint: to do this it is sufficient to construct a single counter-example, so consider a mixture of two components in which one of those components is itself a mixture of two components, with mixing coefficients given by linear-logistic models. Show that this cannot be represented by a single-level mixture of 3 components having mixing coefficients determined by a linear-softmax model.",
"answer": "If we define $\\psi_k(t|\\mathbf{x})$ in (14.58: $p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\psi_k(t|\\mathbf{x})$) as\n\n$$\\psi_k(t|\\mathbf{x}) = \\sum_{m=1}^{M} \\lambda_{mk} \\phi_{mk}(t|\\mathbf{x}),$$\n\nwe can rewrite (14.58: $p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\psi_k(t|\\mathbf{x})$) as\n\n$$p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\sum_{m=1}^{M} \\lambda_{mk} \\phi_{mk}(t|\\mathbf{x})$$\n$$= \\sum_{k=1}^{K} \\sum_{m=1}^{M} \\pi_k \\lambda_{mk} \\phi_{mk}(t|\\mathbf{x}).$$\n\nBy changing the indexation, we can write this as\n\n$$p(t|\\mathbf{x}) = \\sum_{l=1}^{L} \\eta_l \\phi_l(t|\\mathbf{x}),$$\n\nwhere L=KM, l=(k-1)M+m, $\\eta_l=\\pi_k\\lambda_{mk}$ and $\\phi_l(\\cdot)=\\phi_{mk}(\\cdot)$ . By construction, $\\eta_l\\geqslant 0$ and $\\sum_{l=1}^L\\eta_l=1$ .\n\nNote that this would work just as well if $\\pi_k$ and $\\lambda_{mk}$ were to be dependent on $\\mathbf{x}$ , as long as they both respect the constraints of being non-negative and summing to 1 for every possible value of $\\mathbf{x}$ .\n\nFinally, consider a tree-structured, hierarchical mixture model, as illustrated in the left panel of On the top (root) level, this is a mixture with two components. The mixing coefficients are given by a linear logistic regression model and hence are input dependent. The left sub-tree correspond to a local conditional density model, $\\psi_1(t|\\mathbf{x})$ . In the right sub-tree, the structure from the root is replicated, with the difference that both sub-trees contain local conditional density models, $\\psi_2(t|\\mathbf{x})$ and $\\psi_3(t|\\mathbf{x})$ .\n\nWe can write the resulting mixture model on the form (14.58: $p(t|\\mathbf{x}) = \\sum_{k=1}^{K} \\pi_k \\psi_k(t|\\mathbf{x})$) with mixing coefficients\n\n$$\\pi_1(\\mathbf{x}) = \\sigma(\\mathbf{v}_1^{\\mathrm{T}}\\mathbf{x})\n\\pi_2(\\mathbf{x}) = (1 - \\sigma(\\mathbf{v}_1^{\\mathrm{T}}\\mathbf{x}))\\sigma(\\mathbf{v}_2^{\\mathrm{T}}\\mathbf{x})\n\\pi_3(\\mathbf{x}) = (1 - \\sigma(\\mathbf{v}_1^{\\mathrm{T}}\\mathbf{x}))(1 - \\sigma(\\mathbf{v}_2^{\\mathrm{T}}\\mathbf{x})),$$\n\nwhere $\\sigma(\\cdot)$ is defined in (4.59: $\\sigma(a) = \\frac{1}{1 + \\exp(-a)}$) and $\\mathbf{v}_1$ and $\\mathbf{v}_2$ are the parameter vectors of the logistic regression models. Note that $\\pi_1(\\mathbf{x})$ is independent of the value of $\\mathbf{v}_2$ . This would not be the case if the mixing coefficients were modelled using a single level softmax model,\n\n$$\\pi_k(\\mathbf{x}) = \\frac{e^{\\mathbf{u}_k^{\\mathrm{T}} \\mathbf{x}}}{\\sum_{j}^{3} e^{\\mathbf{u}_j^{\\mathrm{T}} \\mathbf{x}}},$$\n\nwhere the parameters $\\mathbf{u}_k$ , corresponding to $\\pi_k(\\mathbf{x})$ , will also affect the other mixing coefficients, $\\pi_{j\\neq k}(\\mathbf{x})$ , through the denominator. This gives the hierarchical model different properties in the modelling of the mixture coefficients over the input space, as compared to a linear softmax model. An example is shown in the right panel of, where the red lines represent borders of equal mixing coefficients in the input space. These borders are formed from two straight lines, corresponding to the two logistic units in the left panel of 8. A corresponding division of the input space by a softmax model would involve three straight lines joined at a single point, looking, e.g., something like the red lines3 in PRML; note that a linear three-class softmax model could not implement the borders show in right panel of",
"answer_length": 3368
},
{
"chapter": 14,
"question_number": "14.3",
"difficulty": "easy",
"question_text": "By making use of Jensen's inequality (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$), for the special case of the convex function $f(x) = x^2$ , show that the average expected sum-of-squares error $E_{AV}$ of the members of a simple committee model, given by (14.10: $E_{\\text{AV}} = \\frac{1}{M} \\sum_{m=1}^{M} \\mathbb{E}_{\\mathbf{x}} \\left[ \\epsilon_m(\\mathbf{x})^2 \\right].$), and the expected error $E_{COM}$ of the committee itself, given by (14.11: $= \\mathbb{E}_{\\mathbf{x}} \\left[ \\left\\{ \\frac{1}{M} \\sum_{m=1}^{M} \\epsilon_m(\\mathbf{x}) \\right\\}^2 \\right]$), satisfy\n\n$$E_{\\text{COM}} \\leqslant E_{\\text{AV}}.$$\n (14.54: $E_{\\text{COM}} \\leqslant E_{\\text{AV}}.$)",
"answer": "We start by rearranging the r.h.s. of (14.10: $E_{\\text{AV}} = \\frac{1}{M} \\sum_{m=1}^{M} \\mathbb{E}_{\\mathbf{x}} \\left[ \\epsilon_m(\\mathbf{x})^2 \\right].$), by moving the factor 1/M inside the sum and the expectation operator outside the sum, yielding\n\n$$\\mathbb{E}_{\\mathbf{x}} \\left[ \\sum_{m=1}^{M} \\frac{1}{M} \\epsilon_m(\\mathbf{x})^2 \\right].$$\n\nIf we then identify $\\epsilon_m(\\mathbf{x})$ and 1/M with $x_i$ and $\\lambda_i$ in (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$), respectively, and take $f(x) = x^2$ , we see from (1.115: $f\\left(\\sum_{i=1}^{M} \\lambda_i x_i\\right) \\leqslant \\sum_{i=1}^{M} \\lambda_i f(x_i)$) that\n\n$$\\left(\\sum_{m=1}^{M} \\frac{1}{M} \\epsilon_m(\\mathbf{x})\\right)^2 \\leqslant \\sum_{m=1}^{M} \\frac{1}{M} \\epsilon_m(\\mathbf{x})^2.$$\n\nSince this holds for all values of x, it must also hold for the expectation over x, proving (14.54: $E_{\\text{COM}} \\leqslant E_{\\text{AV}}.$).",
"answer_length": 966
},
{
"chapter": 14,
"question_number": "14.5",
"difficulty": "medium",
"question_text": "\\star)$ www Consider a committee in which we allow unequal weighting of the constituent models, so that\n\n$$y_{\\text{COM}}(\\mathbf{x}) = \\sum_{m=1}^{M} \\alpha_m y_m(\\mathbf{x}). \\tag{14.55}$$\n\nIn order to ensure that the predictions $y_{\\text{COM}}(\\mathbf{x})$ remain within sensible limits, suppose that we require that they be bounded at each value of $\\mathbf{x}$ by the minimum and maximum values given by any of the members of the committee, so that\n\n$$y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$$\n (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$)\n\nShow that a necessary and sufficient condition for this constraint is that the coefficients $\\alpha_m$ satisfy\n\n$$\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1. \\tag{14.57}$$",
"answer": "To prove that (14.57: $\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1.$) is a sufficient condition for (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$) we have to show that (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$) follows from (14.57: $\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1.$). To do this, consider a fixed set of $y_m(\\mathbf{x})$ and imagine varying the $\\alpha_m$ over all possible values allowed by (14.57: $\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1.$) and consider the values taken by\n\n $y_{\\text{COM}}(\\mathbf{x})$ as a result. The maximum value of $y_{\\text{COM}}(\\mathbf{x})$ occurs when $\\alpha_k = 1$ where $y_k(\\mathbf{x}) \\geqslant y_m(\\mathbf{x})$ for $m \\neq k$ , and hence all $\\alpha_m = 0$ for $m \\neq k$ . An analogous result holds for the minimum value. For other settings of $\\alpha$ ,\n\n$$y_{\\min}(\\mathbf{x}) < y_{\\text{COM}}(\\mathbf{x}) < y_{\\max}(\\mathbf{x}),$$\n\nsince $y_{\\text{COM}}(\\mathbf{x})$ is a convex combination of points, $y_m(\\mathbf{x})$ , such that\n\n$$\\forall m: y_{\\min}(\\mathbf{x}) \\leqslant y_m(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$$\n\nThus, (14.57: $\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1.$) is a sufficient condition for (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$).\n\nShowing that (14.57: $\\alpha_m \\geqslant 0, \\qquad \\sum_{m=1}^{M} \\alpha_m = 1.$) is a necessary condition for (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$) is equivalent to showing that (14.56: $y_{\\min}(\\mathbf{x}) \\leqslant y_{\\text{COM}}(\\mathbf{x}) \\leqslant y_{\\max}(\\mathbf{x}).$) is a sufficient condition for (14.57). The implication here is that if (14.56) holds for any choice of values of the committee members $\\{y_m(\\mathbf{x})\\}$ then (14.57) will be satisfied. Suppose, without loss of generality, that $\\alpha_k$ is the smallest of the $\\alpha$ values, i.e. $\\alpha_k \\leqslant \\alpha_m$ for $k \\neq m$ . Then consider $y_k(\\mathbf{x}) = 1$ , together with $y_m(\\mathbf{x}) = 0$ for all $m \\neq k$ . Then $y_{\\min}(\\mathbf{x}) = 0$ while $y_{\\text{COM}}(\\mathbf{x}) = \\alpha_k$ and hence from (14.56) we obtain $\\alpha_k \\geqslant 0$ . Since $\\alpha_k$ is the smallest of the $\\alpha$ values it follows that all of the coefficients must satisfy $\\alpha_k \\geqslant 0$ . Similarly, consider the case in which $y_m(\\mathbf{x}) = 1$ for all m. Then $y_{\\min}(\\mathbf{x}) = y_{\\max}(\\mathbf{x}) = 1$ , while $y_{\\text{COM}}(\\mathbf{x}) = \\sum_m \\alpha_m$ . From (14.56) it then follows that $\\sum_m \\alpha_m = 1$ , as required.",
"answer_length": 2788
},
{
"chapter": 14,
"question_number": "14.6",
"difficulty": "easy",
"question_text": "By differentiating the error function (14.23: $= (e^{\\alpha_m/2} - e^{-\\alpha_m/2}) \\sum_{n=1}^N w_n^{(m)} I(y_m(\\mathbf{x}_n) \\neq t_n) + e^{-\\alpha_m/2} \\sum_{n=1}^N w_n^{(m)}.$) with respect to $\\alpha_m$ , show that the parameters $\\alpha_m$ in the AdaBoost algorithm are updated using (14.17: $\\alpha_m = \\ln \\left\\{ \\frac{1 - \\epsilon_m}{\\epsilon_m} \\right\\}.$) in which $\\epsilon_m$ is defined by (14.16: $\\epsilon_{m} = \\frac{\\sum_{n=1}^{N} w_{n}^{(m)} I(y_{m}(\\mathbf{x}_{n}) \\neq t_{n})}{\\sum_{n=1}^{N} w_{n}^{(m)}}$).",
"answer": "If we differentiate (14.23: $= (e^{\\alpha_m/2} - e^{-\\alpha_m/2}) \\sum_{n=1}^N w_n^{(m)} I(y_m(\\mathbf{x}_n) \\neq t_n) + e^{-\\alpha_m/2} \\sum_{n=1}^N w_n^{(m)}.$) w.r.t. $\\alpha_m$ we obtain\n\n$$\\frac{\\partial E}{\\partial \\alpha_m} = \\frac{1}{2} \\left( (e^{\\alpha_m/2} + e^{-\\alpha_m/2}) \\sum_{n=1}^{N} w_n^{(m)} I(y_m(\\mathbf{x}_n) \\neq t_n) - e^{-\\alpha_m/2} \\sum_{n=1}^{N} w_n^{(m)} \\right).$$\n\nSetting this equal to zero and rearranging, we get\n\n$$\\frac{\\sum_{n} w_n^{(m)} I(y_m(\\mathbf{x}_n) \\neq t_n)}{\\sum_{n} w_n^{(m)}} = \\frac{e^{-\\alpha_m/2}}{e^{\\alpha_m/2} + e^{-\\alpha_m/2}} = \\frac{1}{e^{\\alpha_m} + 1}.$$\n\nUsing (14.16: $\\epsilon_{m} = \\frac{\\sum_{n=1}^{N} w_{n}^{(m)} I(y_{m}(\\mathbf{x}_{n}) \\neq t_{n})}{\\sum_{n=1}^{N} w_{n}^{(m)}}$), we can rewrite this as\n\n$$\\frac{1}{e^{\\alpha_m} + 1} = \\epsilon_m,$$\n\nwhich can be further rewritten as\n\n$$e^{\\alpha_m} = \\frac{1 - \\epsilon_m}{\\epsilon_m},$$\n\nfrom which (14.17: $\\alpha_m = \\ln \\left\\{ \\frac{1 - \\epsilon_m}{\\epsilon_m} \\right\\}.$) follows directly.",
"answer_length": 1018
},
{
"chapter": 14,
"question_number": "14.9",
"difficulty": "easy",
"question_text": "Show that the sequential minimization of the sum-of-squares error function for an additive model of the form (14.21: $f_m(\\mathbf{x}) = \\frac{1}{2} \\sum_{l=1}^{m} \\alpha_l y_l(\\mathbf{x})$) in the style of boosting simply involves fitting each new base classifier to the residual errors $t_n f_{m-1}(\\mathbf{x}_n)$ from the previous model.",
"answer": "The sum-of-squares error for the additive model of (14.21: $f_m(\\mathbf{x}) = \\frac{1}{2} \\sum_{l=1}^{m} \\alpha_l y_l(\\mathbf{x})$) is defined as\n\n$$E = \\frac{1}{2} \\sum_{n=1}^{N} (t_n - f_m(\\mathbf{x}_n))^2.$$\n\nUsing (14.21: $f_m(\\mathbf{x}) = \\frac{1}{2} \\sum_{l=1}^{m} \\alpha_l y_l(\\mathbf{x})$), we can rewrite this as\n\n$$\\frac{1}{2} \\sum_{n=1}^{N} (t_n - f_{m-1}(\\mathbf{x}_n) - \\frac{1}{2} \\alpha_m y_m(\\mathbf{x}))^2,$$\n\nwhere we recognize the two first terms inside the square as the residual from the (m-1)-th model. Minimizing this error w.r.t. $y_m(\\mathbf{x})$ will be equivalent to fitting $y_m(\\mathbf{x})$ to the (scaled) residuals.",
"answer_length": 651
}
]
}
]
}