Generative model
In statistics, a generative model is a model for randomly generating observable data, typically given some hidden parameters. It specifies a joint probability distribution over observation and label sequences. Generative models are used in machine learning for either modeling data directly (i.e., modeling observed draws from a probability density function), or as an intermediate step to forming a conditional probability density function. A conditional distribution can be formed from a generative model through the use of Bayes' rule.
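As a minimal sketch of this last point (not from the article; the prior and the Gaussian class-conditional densities are assumed purely for illustration), the following Python snippet specifies a generative model as p(y) and p(x | y) and applies Bayes' rule to obtain the conditional p(y | x):

# Minimal sketch (assumed numbers): a generative model over a label y and a
# one-dimensional observation x, given as p(y) and p(x | y).  Bayes' rule
# turns it into the conditional p(y | x) used for prediction.
from scipy.stats import norm

prior = {"spam": 0.3, "ham": 0.7}            # p(y)
likelihood = {                               # p(x | y), class-conditional Gaussians
    "spam": norm(loc=5.0, scale=1.0),
    "ham": norm(loc=2.0, scale=1.5),
}

def posterior(x):
    """Return p(y | x) for each label via Bayes' rule."""
    joint = {y: prior[y] * likelihood[y].pdf(x) for y in prior}   # p(x, y)
    evidence = sum(joint.values())                                # p(x)
    return {y: joint[y] / evidence for y in joint}

print(posterior(4.0))   # posterior probabilities of 'spam' and 'ham' at x = 4.0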
Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good"; this is not proper English, but it approximates English increasingly well as the table is moved from word pairs to word triplets and so on.
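The mechanism can be sketched in a few lines of Python (the toy corpus below is invented; Shannon built his frequency tables from actual English text): a word-pair table is sampled repeatedly, each new word conditioned only on the previous one.

# Illustrative sketch of the word-pair idea (corpus and counts are invented).
import random
from collections import defaultdict

corpus = "the dog chased the cat and the cat chased the mouse".split()

# Build the word-pair (bigram) frequency table.
pairs = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    pairs[w1].append(w2)

def generate(start, length=8):
    words = [start]
    for _ in range(length - 1):
        followers = pairs.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))   # sample the next word given the last
    return " ".join(words)

print(generate("the"))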
Generative models contrast with discriminative models, in that a generative model is a full probability model of all variables, whereas a discriminative model provides a model only of the target variable(s) conditional on the observed variables. Thus a generative model can be used, for example, to simulate (i.e. generate) values of any variable in the model, whereas a discriminative model allows only sampling of the target variables conditional on the observed quantities.
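This contrast can be illustrated with a small, invented joint distribution over a binary feature x and a label y: the full joint supports simulating any variable, while a purely conditional model can only produce y for a given x.

# Sketch (the probabilities are invented): a generative model is the full joint
# p(x, y), so any variable can be simulated; a discriminative model keeps only
# p(y | x), so it can sample y given x but says nothing about how x is distributed.
import random

joint = {                       # p(x, y)
    (0, "neg"): 0.40, (0, "pos"): 0.10,
    (1, "neg"): 0.15, (1, "pos"): 0.35,
}

def sample_joint():
    """Simulate a full (x, y) pair from the generative model."""
    outcomes, probs = zip(*joint.items())
    return random.choices(outcomes, weights=probs)[0]

def sample_y_given_x(x):
    """What a discriminative model can do: sample only y, with x fixed."""
    cond = {y: p for (xi, y), p in joint.items() if xi == x}
    labels, weights = zip(*cond.items())
    return random.choices(labels, weights=weights)[0]

print(sample_joint())          # e.g. (1, 'pos')
print(sample_y_given_x(1))     # e.g. 'pos'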
Examples of generative models include:
- Gaussian distribution
- Gaussian mixture model
- Multinomial distribution
- Hidden Markov model
- Naive Bayes
- AODE
- Latent Dirichlet allocation
If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations of the true distribution, if the purpose of the model is to infer values of a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it is often more accurate to model the conditional density function directly, that is, to perform classification or regression analysis.
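As a rough illustration of the likelihood-maximization step (the data are synthetic and the class-conditional Gaussian model is an assumption, not a prescription), the maximum-likelihood estimates for this kind of generative model are available in closed form:

# Sketch: fitting a class-conditional Gaussian generative model by maximum
# likelihood on synthetic data.  The MLE is closed form: class frequencies for
# p(y), per-class sample means and variances for p(x | y).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 1.5, 300), rng.normal(5.0, 1.0, 200)])
y = np.array([0] * 300 + [1] * 200)

params = {}
for label in (0, 1):
    xl = x[y == label]
    params[label] = {
        "prior": len(xl) / len(x),   # MLE of p(y)
        "mean": xl.mean(),           # MLE of the Gaussian mean
        "var": xl.var(),             # MLE of the Gaussian variance (ddof=0)
    }
print(params)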
Discriminative models are a class of models used in machine learning for modeling the dependence of an unobserved variable y on an observed variable x. Within a statistical framework, this is done by modeling the conditional probability distribution P(y | x), which can be used for predicting y from x.
Discriminative models differ from generative models in that they do not allow one to generate samples from the joint distribution of x and y.
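A minimal sketch of this, assuming scikit-learn is available: a logistic regression estimates P(y | x) directly from labelled data, but it carries no model of x and therefore cannot be used to generate (x, y) pairs.

# Sketch with synthetic data: a discriminative model estimates P(y | x) and can
# score new x, but it defines no distribution over x and cannot sample (x, y).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(2.0, 1.5, (300, 1)), rng.normal(5.0, 1.0, (200, 1))])
y = np.array([0] * 300 + [1] * 200)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[4.0]]))   # estimated P(y | x = 4.0)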
Examples of discriminative models used in machine learning include: