Simple MATLAB Simulink examples
This example shows how to fit polynomials up to sixth degree to some census data using Curve Fitting Toolbox. It also shows how to fit a single-term exponential equation and compare this to the polynomial models. The steps show how to load data and create fits using different library models.
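As a concrete illustration, here is a minimal sketch of these fits. It assumes the census data set that ships with MATLAB (year cdate, population pop) and the Curve Fitting Toolbox library model names 'poly6' and 'exp1'; the specific plotting and comparison steps are illustrative, not taken from the original example.

    % Load the US census data that ships with MATLAB:
    % cdate is the year, pop is the US population in millions.
    load census

    % Fit a sixth-degree polynomial. Centering and scaling
    % ('Normalize') keeps the high-degree fit well conditioned.
    [fitPoly6, gofPoly6] = fit(cdate, pop, 'poly6', 'Normalize', 'on');

    % Fit a single-term exponential a*exp(b*x) for comparison,
    % normalized as well to avoid overflow for large x values.
    [fitExp1, gofExp1] = fit(cdate, pop, 'exp1', 'Normalize', 'on');

    % Compare the models against the data and by fit statistics.
    plot(fitPoly6, cdate, pop); hold on
    plot(fitExp1); hold off
    legend('data', 'poly6 fit', 'exp1 fit', 'Location', 'northwest')
    fprintf('adjusted R^2: poly6 %.4f, exp1 %.4f\n', ...
            gofPoly6.adjrsquare, gofExp1.adjrsquare)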
You have computed the resubstitution error. Usually people are more interested in the test error (also referred to as the generalization error), which is the expected prediction error on an independent set. In fact, the resubstitution error will likely under-estimate the test error. In this case you don't have another labeled data set, but you can simulate one by doing cross-validation. A stratified 10-fold cross-validation is a popular choice for estimating the test error of classification algorithms. It randomly divides the training set into 10 disjoint subsets. Each subset has roughly equal size and roughly the same class proportions as in the training set. Remove one subset, train the classification model using the other nine subsets, and use the trained model to classify the removed subset. You could repeat this by removing each of the ten subsets one at a time. Because cross-validation randomly divides data, its outcome depends on the initial random seed. To reproduce the exact results in this example, execute a command that fixes the random seed before partitioning, as in the sketch below.
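The exact command from the original example is not preserved in this extract. A minimal sketch of the whole procedure, assuming the Fisher iris data set (fisheriris, which ships with MATLAB; the extract does not name the data) and an arbitrary seed value, might look like this:

    % Fix the random seed so the cross-validation split is
    % reproducible. The seed value 0 is an assumption; the
    % original example's command is not shown in this extract.
    rng(0, 'twister');

    % Load the assumed data set: meas holds four measurements
    % per flower, species holds the class labels.
    load fisheriris

    % Stratified 10-fold partition: cvpartition stratifies by
    % the class labels, so each fold keeps the class proportions.
    cvp = cvpartition(species, 'KFold', 10);

    % Cross-validate linear (LDA) and quadratic (QDA)
    % discriminant models on the same partition.
    ldaCV = fitcdiscr(meas, species, 'DiscrimType', 'linear', ...
                      'CVPartition', cvp);
    qdaCV = fitcdiscr(meas, species, 'DiscrimType', 'quadratic', ...
                      'CVPartition', cvp);

    % kfoldLoss trains on nine folds, classifies the held-out
    % fold, and averages the misclassification rate over folds.
    ldaCVErr = kfoldLoss(ldaCV)
    qdaCVErr = kfoldLoss(qdaCV)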
QDA has a slightly larger cross-validation error than LDA. It shows that a simpler model may get comparable, or better, performance than a more complicated model. The fitcdiscr function has two other types, 'diaglinear' and 'diagquadratic'. They are similar to 'linear' and 'quadratic', but with diagonal covariance matrix estimates. These diagonal choices are specific examples of a naive Bayes classifier, because they assume the variables are conditionally independent given the class label. Naive Bayes classifiers are among the most popular classifiers. While the assumption of class-conditional independence between variables is not true in general, naive Bayes classifiers have been found to work well in practice on many data sets. First, model each variable in each class using a Gaussian distribution. The fitcnb function can be used to create a more general type of naive Bayes classifier.
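Continuing the same sketch (again assuming the iris data, meas and species, and the partition cvp from above), the diagonal discriminant types and a Gaussian naive Bayes model fit with fitcnb can be cross-validated on the same partition:

    % Diagonal variants: like 'linear' and 'quadratic' but with
    % diagonal covariance estimates (a naive Bayes assumption).
    dldaCV = fitcdiscr(meas, species, 'DiscrimType', 'diaglinear', ...
                       'CVPartition', cvp);
    dqdaCV = fitcdiscr(meas, species, 'DiscrimType', 'diagquadratic', ...
                       'CVPartition', cvp);

    % Gaussian naive Bayes: each predictor in each class is
    % modeled with a normal distribution.
    nbCV = fitcnb(meas, species, 'DistributionNames', 'normal', ...
                  'CVPartition', cvp);

    % Compare cross-validation errors across the models.
    [kfoldLoss(dldaCV), kfoldLoss(dqdaCV), kfoldLoss(nbCV)]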