Bayesian Statistics: Mixture Models Coursera Quiz Answers

All Weeks Bayesian Statistics: Mixture Models Coursera Quiz Answers

Bayesian Statistics: Mixture Models introduces you to an important class of statistical models. The course is organized in five modules, each of which contains lecture videos, short quizzes, background reading, discussion prompts, and one or more peer-reviewed assignments. Statistics is best learned by doing it, not just watching a video, so the course is structured to help you learn through application.

Some exercises require the use of R, a freely-available statistical software package. A brief tutorial is provided, but we encourage you to take advantage of the many other resources online for learning R if you are interested.

This is an intermediate-level course, and it was designed to be the third in UC Santa Cruz’s series on Bayesian statistics, after Herbie Lee’s “Bayesian Statistics: From Concept to Data Analysis” and Matthew Heiner’s “Bayesian Statistics: Techniques and Models.” To succeed in the course, you should have some knowledge of and comfort with calculus-based probability, principles of maximum-likelihood estimation, and Bayesian estimation.

Enroll on Coursera

Bayesian Statistics: Mixture Models Coursera Quiz Answers

Practice Quiz: Basic definitions

Q1. Which one of the following is not the density of a well-defined mixture distribution with support on x \ge 0?

  • f(x) = 0.5 e^{-x} + 0.5 \frac{1}{\sqrt{2\pi}} e^{-0.5 x^2}
  • f(x) = 0.50 e^{-x} + 0.25 e^{-0.5x}
  • f(x) = 0.50 e^{-x} + 0.5 e^{-0.5x}

Q2. What is the expected value of a random variable X whose distribution is a mixture of Poisson distributions with the form

f(x) = 0.3 \frac{2^x e^{-2}}{x!} + 0.45 \frac{3^x e^{-3}}{x!} + 0.25 \frac{(1/2)^x e^{-1/2}}{x!}
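
The mean of a finite mixture is the weighted average of the component means, and for a Poisson component the mean is just its rate. A minimal R sketch, using the weights and rates read off the density above:

```r
# Mean of a finite mixture: E[X] = sum_k w_k * E[X | component k].
# For a Poisson component, E[X | component k] is its rate lambda_k.
w      <- c(0.30, 0.45, 0.25)  # mixture weights from the density above
lambda <- c(2, 3, 0.5)         # Poisson rates from the density above
sum(w * lambda)                # mean of the mixture
```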

Q3. What is the variance of a random variable X whose distribution is a mixture of Poisson distributions with the form

f(x) = 0.3 \frac{2^x e^{-2}}{x!} + 0.45 \frac{3^x e^{-3}}{x!} + 0.25 \frac{(1/2)^x e^{-1/2}}{x!}
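
The variance can be obtained from the first two moments as E[X^2] - (E[X])^2, where for a Poisson component E[X^2 \mid k] = \lambda_k + \lambda_k^2. A minimal R sketch along those lines:

```r
# Var(X) = E[X^2] - (E[X])^2, with E[X^2 | component k] = lambda_k + lambda_k^2
# for Poisson components.
w      <- c(0.30, 0.45, 0.25)
lambda <- c(2, 3, 0.5)
m1 <- sum(w * lambda)               # first moment of the mixture
m2 <- sum(w * (lambda + lambda^2))  # second moment of the mixture
m2 - m1^2                           # variance of the mixture
```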

Mixtures of Gaussians

Q1. True or False: A scale mixture of normals with density

f(x) = \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi}\sigma_k} \exp \left\{ -\frac{x^2}{2\sigma_k^2} \right\}

is always unimodal.

  • True
  • False

Q2. True or False: A scale mixture of normals with density

f(x) = \sum_{k=1}^{K} w_k \frac{1}{\sqrt{2\pi}\sigma_k} \exp \left\{ -\frac{x^2}{2\sigma_k^2} \right\}

is always symmetric.

  • True
  • False
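
A quick way to build intuition for both questions is to plot the density for a few choices of weights and scales; since every component is centered at zero, f(-x) = f(x) by construction. A minimal R sketch (the particular weights and scales are arbitrary illustrative choices):

```r
# Density of a two-component scale mixture of mean-zero normals.
scale_mix <- function(x, w = c(0.4, 0.6), sigma = c(1, 3)) {
  w[1] * dnorm(x, 0, sigma[1]) + w[2] * dnorm(x, 0, sigma[2])
}
x <- seq(-10, 10, length.out = 500)
plot(x, scale_mix(x), type = "l", ylab = "density",
     main = "Scale mixture of normals")
# Try different w and sigma values to probe the unimodality claim empirically.
```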

Zero-inflated distributions

Q1. Consider a zero-inflated mixture that involves a point mass at 0 with weight 0.3 and an exponential distribution with mean 1 and weight 0.7. What is the mean of this mixture?

  • 1
  • 0.7
  • 0.3

Q2. Consider a zero-inflated mixture that involves a point mass at 0 with weight 0.2 and an exponential distribution with mean 10000 and weight 0.8. If this mixture is used to represent the number of hours a light bulb works between the time it is installed and the time it fails, what is the probability that the bulb was defective when coming out of the factory and does not work when you install it?
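
For both questions, note that the point mass contributes nothing to the mean but accounts for all of the probability at exactly zero, since the exponential component is continuous. A minimal Monte Carlo sketch in R, using the weights from Q1:

```r
# Zero-inflated exponential: point mass at 0 with weight 0.3 and an
# exponential with mean 1 (rate 1) with weight 0.7.
set.seed(1)
n <- 1e6
z <- rbinom(n, 1, 0.7)      # 1 = draw comes from the exponential component
x <- z * rexp(n, rate = 1)  # forced to 0 whenever the point mass fires
mean(x)                     # should be close to 0.7 * 1
mean(x == 0)                # P(X = 0), driven entirely by the point mass
```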

Week 01 : Definition of Mixture Models

Q1. Which one of the following is not the density of a well-defined mixture distribution with support on the positive integers?

  • f(x) = 0.5 \frac{e^{-1}}{x!} + 0.5 \frac{e^{-1}}{x!}
  • f(x) = 0.5 \frac{2^x e^{-2}}{x!} + 0.5 \frac{3^x e^{-3}}{x!}
  • f(x) = 0.45 \frac{(1/2)^x e^{-1}}{x!} + 0.55 \frac{3^x e^{-3}}{x!}

Q2. Consider a zero-inflated mixture that involves a point mass at 0 with weight 0.2, a Gamma distribution with mean 1, variance 2 and weight 0.5, and another Gamma distribution with mean 2, variance 4 and weight 0.3. What is the mean of this mixture?

  • 2.2
  • 1.1
  • 1.8

Q3. Consider a zero-inflated mixture that involves a point mass at 0 with weight 0.2, a Gamma distribution with mean 1, variance 2 and weight 0.5, and another Gamma distribution with mean 2, variance 4 and weight 0.3. What is the variance of this mixture?
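
The same moment formulas as before apply if the point mass at 0 is treated as a degenerate component with mean 0 and variance 0. A minimal R sketch using the weights and component moments stated in the questions:

```r
# Components: point mass at 0, Gamma(mean 1, var 2), Gamma(mean 2, var 4).
w  <- c(0.2, 0.5, 0.3)     # component weights
mu <- c(0, 1, 2)           # component means
v  <- c(0, 2, 4)           # component variances
m1 <- sum(w * mu)          # mixture mean
m2 <- sum(w * (v + mu^2))  # mixture second moment
c(mean = m1, variance = m2 - m1^2)
```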

Q4. True or False: A mixture of Gaussians of the form

f(x) = 0.3 \frac{1}{\sqrt{2\pi}} \exp \left\{ -\frac{x^2}{2} \right\} + 0.7 \frac{1}{\sqrt{2\pi}} \exp \left\{ -\frac{(x-4)^2}{2} \right\}

has a bimodal density.

  • True
  • False
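
Plotting the density is a quick check of the modality claim. A minimal R sketch of the density above:

```r
# Location mixture above: 0.3 * N(0, 1) + 0.7 * N(4, 1).
f <- function(x) 0.3 * dnorm(x, 0, 1) + 0.7 * dnorm(x, 4, 1)
x <- seq(-4, 8, length.out = 500)
plot(x, f(x), type = "l", ylab = "density")
# Count the modes visually, or numerically via sign changes of diff(f(x)).
```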

Q5. True or False: Consider a location mixture of normals

f(x) = \sum_{k=1}^{K} \omega_k \frac{1}{\sqrt{2\pi}\sigma} \exp \left\{ -\frac{(x-\mu_k)^2}{2\sigma^2} \right\}

The following 3 constraints make all parameters fully identifiable:

  1. The means \mu_1, \ldots, \mu_K should all be different.
  2. No weight \omega_k is allowed to be zero.
  3. The components are ordered based on the values of their means, i.e., the component with the smallest \mu_k is labeled component 1, the one with the second smallest \mu_k is labeled component 2, etc.
  • True
  • False

Likelihood function for mixture models

Q1. Consider a random sample (-0.3, 4.1, 3.6, 7.5, 1.9, 2.7) composed of n = 6 observations from the mixture with density:

f(x) = w_1 \frac{1}{\sqrt{2\pi}} \exp \left\{ -\frac{x^2}{2} \right\} + w_2 \frac{1}{\sqrt{2\pi}} \exp \left\{ -\frac{(x-2)^2}{2} \right\} + w_3 \frac{1}{\sqrt{2\pi}} \exp \left\{ -\frac{(x-4)^2}{2} \right\}

Which expression is proportional to the complete-data likelihood associated with the indicator vector (1, 2, 2, 3, 1, 2)?

  • w_1^2 w_2^3 w_3 \exp\{-11.705\}
  • w_1^3 w_2 w_3^2 \exp\{-11.705\}
  • w_1^2 w_2^3 w_3 \exp\{-23.41\}
  • w_1^3 w_2 w_3^2 \exp\{-23.41\}
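
As a sanity check, the complete-data likelihood multiplies, for each observation, the weight of its assigned component by the corresponding normal density, so the weight part is \prod_k w_k^{n_k} and the exponent is -\sum_i (x_i - \mu_{c_i})^2 / 2. A minimal R sketch with the sample and indicators above:

```r
# Complete-data likelihood for a location mixture with unit variances:
# prod_i w_{c_i} * dnorm(x_i, mu_{c_i}, 1).
x  <- c(-0.3, 4.1, 3.6, 7.5, 1.9, 2.7)  # observed sample
cc <- c(1, 2, 2, 3, 1, 2)               # indicator vector
mu <- c(0, 2, 4)                        # component means
table(cc)                               # exponents of w_1, w_2, w_3
-sum((x - mu[cc])^2) / 2                # exponent inside exp{}
```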

Q2. True or False: Consider a location mixture of normals

f(x) = \sum_{k=1}^{K} \omega_k \frac{1}{\sqrt{2\pi}\sigma} \exp \left\{ -\frac{(x-\mu_k)^2}{2\sigma^2} \right\}

The following 3 constraints make all parameters fully identifiable:

  1. The means \mu_1, \ldots, \mu_K should all be different.
  2. No weight \omega_k is allowed to be zero.
  3. The components are ordered based on the values of their means, i.e., the component with the smallest \mu_k is labeled component 1, the one with the second smallest \mu_k is labeled component 2, etc.
  • True
  • False

Practice Quiz: Bayesian Information Criteria (BIC)

Q1. Consider a K-component mixture of D-dimensional Multinomial distributions,

f(\mathbf{x}) = \sum_{k=1}^{K} w_k {x_1 + x_2 + \cdots + x_D \choose x_1 \; x_2 \; \cdots \; x_D} \prod_{d=1}^{D} \theta_{d,k}^{x_d}

where \mathbf{x} = (x_1, \ldots, x_D) and \sum_{d=1}^{D} \theta_{d,k} = 1 for all k = 1, \ldots, K. For the purpose of computing the BIC, what is the effective number of parameters in the model?

  • (K-1) + K \times D
  • K + K \times (D-1)
  • (K-1) + K \times (D-1)
  • (K-1) \times (D-1)
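
For reference, one common convention (sign conventions vary) writes the criterion as

\mathrm{BIC} = -2 \log \hat{L} + p \log n

where \hat{L} is the maximized likelihood, n is the sample size, and p is the number of free parameters. Sum-to-one constraints reduce p: K weights summing to one contribute K - 1 free parameters, and each probability vector (\theta_{1,k}, \ldots, \theta_{D,k}) summing to one contributes D - 1.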

Estimating the number of components in Bayesian settings

Q1. Let K^* be the number of occupied components in a mixture model with K components where the weights are given a Dirichlet prior (w_1, \ldots, w_K) \sim \mathrm{Dir}(2/K, \ldots, 2/K). If you have n = 400 observations, what is the expected number of occupied components, E(K^*), according to the exact formula we discussed in the lecture? Round your answer to one decimal place.

Q2. Consider the same setup as the previous question: what is the expected number of occupied components, E(K^*), according to the exact formula we discussed in the lecture if n = 100 instead? Round your answer to one decimal place.

Q3. What would be the answer to the previous question if you used the approximate formula instead of the exact formula? Remember to round your answer to one decimal place.

Q4. If you have n = 200 observations and a priori expect the mixture to have about 2 occupied components (i.e., E(K^*) \approx 2 a priori), what value of \alpha should you use for the prior (w_1, \ldots, w_K) \sim \mathrm{Dir}(\alpha/K, \ldots, \alpha/K)? Use the approximation E(K^*) \approx \alpha \log\left( \frac{n+\alpha-1}{\alpha} \right) to provide an answer, rounded to two decimal places.
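
Using only the approximation quoted in Q4, one can evaluate E(K^*) for a given \alpha or invert it numerically for a target value; the exact formula from the lecture is not reproduced here. A minimal R sketch:

```r
# Approximate expected number of occupied components under the prior
# (w_1, ..., w_K) ~ Dir(alpha/K, ..., alpha/K):
#   E(K*) ~= alpha * log((n + alpha - 1) / alpha)
EKstar <- function(alpha, n) alpha * log((n + alpha - 1) / alpha)
EKstar(alpha = 2, n = 100)  # the approximation in Q3's setting
# For Q4: solve EKstar(alpha, 200) = 2 for alpha on a bracketing interval.
uniroot(function(a) EKstar(a, 200) - 2, interval = c(0.01, 10))$root
```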

Estimating the partition structure in Bayesian models

Q1. Binder’s loss function is invariant to label switching

  • Yes
  • No
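
For reference, Binder’s loss between the true partition c and an estimate \hat{c} is usually written as

L(c, \hat{c}) = \sum_{i < j} \left[ \gamma_1 \, 1(c_i = c_j) \, 1(\hat{c}_i \neq \hat{c}_j) + \gamma_2 \, 1(c_i \neq c_j) \, 1(\hat{c}_i = \hat{c}_j) \right]

It only asks whether pairs of observations are clustered together, never which labels they carry, which is the key fact behind Q1.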

Q2. Use the implementation of the MCMC algorithm for fitting a mixture model to the galaxies dataset contained in the lesson “Sample code for estimating the number of components and the partition structure in Bayesian models” to estimate the number of components associated with the optimal partition obtained using Binder’s loss function with \gamma_1 = 3 and \gamma_2 = 1. Make sure to keep the seed of the random number generator set to 781209.

Q3. Rerun the algorithm contained in “Sample code for estimating the number of components and the partition structure in Bayesian models” using a prior for the weights (w_1, \ldots, w_K) \sim \mathrm{Dir}(0.2/K, \ldots, 0.2/K). What is the mode of the posterior distribution on K^*, the number of occupied clusters?

Q4. Under the new prior (w_1, \ldots, w_K) \sim \mathrm{Dir}(0.2/K, \ldots, 0.2/K), what is the number of components in the optimal partition according to Binder’s loss function with \gamma_1 = \gamma_2?

Week 02 : Computational considerations for Mixture Models

Q1. Consider a mixture of three Gaussian distributions with common identity covariance matrix and means

\mu_1 = (0, 0)', \mu_2 = (1/3, 1/3)', and \mu_3 = (-2/3, 1/3)'.

For an observation x_i = (31, -23)', what is the value of v_{i,2}, the probability of the observation being generated by the second component (rounded to three decimal places)?

  • 0.928
  • 1.00
  • 0.072
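
The membership probabilities are v_{i,k} \propto w_k \, \phi(x_i; \mu_k, I). The question does not state the weights, so the sketch below assumes equal weights w_k = 1/3, in which case they cancel in the normalization:

```r
# Posterior membership probabilities v_{i,k} proportional to w_k * N(x_i; mu_k, I).
# Assumption: equal weights (not stated in the question), so they cancel.
x  <- c(31, -23)
mu <- list(c(0, 0), c(1/3, 1/3), c(-2/3, 1/3))
d2 <- sapply(mu, function(m) sum((x - m)^2))  # squared distances to the means
lw <- -d2 / 2                                 # log of the unnormalized v's
v  <- exp(lw - max(lw))                       # subtract the max for stability
round(v / sum(v), 3)                          # v_{i,1}, v_{i,2}, v_{i,3}
```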

Q2. True or False: The starting value for the parameters of the mixture model in the EM algorithm could have an impact on the solution you obtain.

  • True
  • False
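
One quick way to see this empirically is to run EM from two different starting points and compare the fitted solutions. A minimal sketch, assuming the mixtools package is installed (normalmixEM is its EM routine for univariate normal mixtures); identical log-likelihoods would simply mean both starts reached the same local optimum:

```r
# EM can converge to different local optima depending on the starting values.
library(mixtools)  # assumption: mixtools is installed
set.seed(42)
x <- c(rnorm(100, 0), rnorm(100, 4))  # synthetic two-component data
fit1 <- normalmixEM(x, mu = c(-1, 1),    sigma = c(1, 1), lambda = c(0.5, 0.5))
fit2 <- normalmixEM(x, mu = c(1.5, 2.5), sigma = c(1, 1), lambda = c(0.5, 0.5))
c(fit1$loglik, fit2$loglik)  # compare the optima the two starts reached
```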

Q3. True or False: Consider a Bayesian formulation of a mixture model that uses informative priors for all the parameters. A Markov chain Monte Carlo (MCMC) algorithm for fitting such a model will fail to work if no observations are allocated to a component of the mixture.

  • True
  • False


Bayesian Statistics: Mixture Models Answers Course Review:

In our experience, we suggest you enroll in the Bayesian Statistics: Mixture Models course and gain some new skills from professionals, completely free, and we assure you it will be worth it.

The Bayesian Statistics: Mixture Models course is available on Coursera for free. If you are stuck anywhere on a quiz or a graded assessment, just visit Networking Funda to get the Bayesian Statistics: Mixture Models Quiz Answers.

Conclusion:

I hope these Bayesian Statistics: Mixture Models Quiz Answers are useful for learning something new from this course. If they helped you, don’t forget to bookmark our site for more quiz answers.


Keep Learning!

Get All Course Quiz Answers of Bayesian Statistics Specialization

Bayesian Statistics: From Concept to Data Analysis Quiz Answers

Bayesian Statistics: Techniques and Models Quiz Answers

Bayesian Statistics: Mixture Models Coursera Quiz Answers

Bayesian Statistics: Time Series Analysis Quiz Answers
