Understanding conditional probabilities
For background on conditional probabilities, refer to the previous articles in this series.
Two-stage algorithm: training and querying
Now let's look at the famous Bayes algorithm. It is divided into two stages: training and querying. Training means processing sample datasets to find their patterns.
Training
newLISP provides the bayes-train function. First, look at the function prototype:
Syntax: (bayes-train list-m1 [list-m2 ...] sym-context-d)
list-m1 [list-m2 ...] are the input parameters: one or more lists whose elements can be symbols or strings. In Bayes these elements have a canonical name, token, so list-m1 and the others are called token lists.
The bayes-train function counts how many times each token appears in each of the input lists, then stores the results in key/value form in a context, namely the context named by sym-context-d.
Now for an example. Here the tokens are symbols, there are two token lists in total, and the training results are stored in the context 'L.
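The original call is not shown here; a call consistent with the counts printed below would be (token order within each list does not matter):

> (bayes-train '(a a b c c) '(a b b c c c) 'L)
(5 6)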
The symbols function shows which symbols were created in the result context L; among them, total holds the per-list sums. Their values are displayed below one at a time.
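Assuming the training call sketched above, symbols lists one symbol per token plus total:

> (symbols L)
(L:a L:b L:c L:total)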
> L:a
(2 1)
> L:b
(1 2)
> L:c
(2 3)
> L:total
(5 6)
You can see that token a appears 2 times in the first token list and 1 time in the second; b and c appear (1 2) and (2 3) times respectively; and total is (5 6), the token count of each list. The key is the token, and the value is its frequency in each of the token lists.
Tokens can also be strings, but note that in the result context each key is prefixed with _. For example:
(Bayes-train ' ("One", "" "" "" "" "" "" "" "" "" Three ") ' (" three "" One "" Three ") ' (" One "" "" "" "" " three")
In S, the keys are _one, _two, and _three.
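Given these three lists, the stored counts work out to:

> S:_one
(1 1 1)
> S:_two
(2 0 1)
> S:_three
(1 2 1)
> S:total
(4 3 3)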
Logically, each token list represents a sequence of tokens. A token list can contain millions of tokens, for example when training on natural language.
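For natural language, a token list is typically produced by splitting text into words. A minimal sketch (the sample text and the context name NL are illustrative):

; split a sentence into string tokens, then train on them
(set 'text "the quick brown fox jumps over the lazy dog")
(bayes-train (parse text " ") 'NL)  ; creates NL:_the, NL:_quick, ...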
Incremental training
It is worth noting that training can be done cumulatively. If you call bayes-train again, you will find that the token frequencies keep increasing. For example:
> (bayes-train '(a a b c c) '(a b b c c c) 'L)
(10 12)
> L:a
(4 2)
> L:total
(10 12)
This is good news: we can save the results in L to a database after each run, and when new training samples arrive, continue training from there rather than starting from scratch. Also, if token frequencies have already been obtained by other means, we can skip a training session and store those results directly in the context as a basis for subsequent training. New tokens passed to bayes-train are added to the context, and existing counts are updated correctly. Incremental training is the best approach when the training set is very large, or when the training data keeps growing over time.
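A minimal sketch of this save-and-resume workflow, using newLISP's save and load functions (the file name is illustrative):

; persist the trained context to disk
(save "bayes-L.lsp" 'L)

; later, possibly in a fresh session: restore and keep training
(load "bayes-L.lsp")
(bayes-train '(a c c) '(b b c) 'L)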
Querying
Training is ultimately in service of querying.