Study Notes on building an Expert System Using Prolog (4)
IV. Explanation

It is often said that one of the important capabilities of an expert system is explaining its own behavior: at any point during a consultation, the user can ask the system why it gives a certain answer or asks a certain question, and the system produces an explanation from its rules.

1. What explanation means to users
Users rarely need the explanation facility; they usually just want the answer. Even when a user does ask for an explanation, it is not always useful to them.

2. What explanation means to developers
For the system developer, the explanation facility is very important, because by reading the explanations a domain expert can discover faulty rules.

3. Kinds of explanation
Expert systems usually offer four kinds of explanation:
● tracing the rules applied during the consultation;
● explaining how the system reached a conclusion;
● explaining why the system asks a question;
● explaining why a conclusion is uncertain.

(1) Clam's explanation facilities
1. Tracing

The first explanation facility added to clam is tracing rule execution. It behaves much like Prolog's own trace facility, informing the user of the "call", "exit", and "fail" of each rule. It uses a special predicate, bugdisp, to send trace information to the user; bugdisp's argument is a list of terms.

    bugdisp(L) :-
        ruletrace,          % tracing is on when this atom is asserted
        write_line(L), !.
    bugdisp(_).

    write_line([]) :- nl.
    write_line([H|T]) :- write(H), tab(1), write_line(T).

Clam's top-level predicate, go, calls the command predicate do. A new do clause lets the user issue the commands trace(on) and trace(off) to enable or disable tracing:

    do( trace(X) ) :- set_trace(X), !.

    set_trace(off) :- ruletrace, retract(ruletrace).
    set_trace(on)  :- not ruletrace, asserta(ruletrace).
    set_trace(_).

Trace calls are added to the predicate fg to report the "call" and successful "exit" of a rule:

    fg(Goal, CurCF) :-
        rule(N, lhs(IfList), rhs(Goal, CF)),
        bugdisp(['call rule', N]),
        prove(N, IfList, Tally),
        bugdisp(['exit rule', N]),
        adjust(CF, Tally, NewCF),
        update(Goal, NewCF, CurCF, N),
        CurCF == 100, !.
    fg(Goal, CF) :- fact(Goal, CF).

Failure of a called rule is caught and reported in prove:

    prove(N, IfList, Tally) :-
        prov(IfList, 100, Tally), !.
    prove(N, _, _) :-
        bugdisp(['fail rule', N]),
        fail.

2. Explaining how the system reached a conclusion

After the user asks the system a "how" question, the system displays the evidence used in the reasoning. There are two ways to collect that evidence: keep a dedicated trace of each conclusion, or read it back from the blackboard (the working database). Clam uses the second method, gathering evidence from the fact terms in the working database. A fact may be produced by several rules that conclude the same attribute-value pair, with their certainty factors combined; therefore the third argument of fact holds the list of contributing rule numbers.
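The certainty arithmetic threaded through fg above can be sketched in Python for illustration. This is not clam code: premise_tally mirrors how prov ANDs premises by taking the minimum CF and fails below the 20 threshold, while adjust (whose clam definition is not shown in this excerpt) is assumed to use the usual MYCIN-style scaling CF * Tally / 100.

```python
def premise_tally(premise_cfs, threshold=20):
    """AND the premises of a rule: the tally is the minimum CF seen.
    If it drops below the threshold, the rule fails (returns None)."""
    tally = 100
    for cf in premise_cfs:
        tally = min(tally, cf)
        if tally < threshold:
            return None
    return tally

def adjust(rule_cf, tally):
    """Scale the rule's CF by the strength of its premises.
    Assumed MYCIN-style formula; clam's adjust is not listed here."""
    return round(rule_cf * tally / 100)
```

For example, a rule with CF 80 whose premises hold with CFs 80, 60, and 90 gets a tally of 60 and concludes its goal with CF 48.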
This record is not a complete proof tree, but it does lead directly to the facts from which a conclusion was drawn:

    fact(AV, CF, RuleList)

Facts are maintained by the predicate update. Its fourth argument is the number of the rule doing the updating. The first clause updates an existing fact, adding the new rule number to its rule list:

    update(Goal, NewCF, CF, RuleN) :-
        fact(Goal, OldCF, _),
        combine(NewCF, OldCF, CF),
        retract( fact(Goal, OldCF, OldRules) ),
        asserta( fact(Goal, CF, [RuleN | OldRules]) ), !.
    update(Goal, CF, CF, RuleN) :-
        asserta( fact(Goal, CF, [RuleN]) ).

update is called from the predicate fg with the rule number as the new argument:

    fg(Goal, CurCF) :-
        rule(N, lhs(IfList), rhs(Goal, CF)),
        ...
        update(Goal, NewCF, CurCF, N),
        ...

Since the contributing rules are recorded with each fact in the working database, we can answer the user's question of where a fact came from. Doing so is extremely simple: just write out the list of rules used.

    how(Goal) :-
        fact(Goal, CF, Rules),
        CF > 20,
        pretty(Goal, PG),
        write_line([PG, was, derived, from, 'rules: ' | Rules]),
        nl,
        list_rules(Rules),
        fail.
    how(_).

A negated goal is handled similarly; it is derived from a negative CF:

    how(not Goal) :-
        fact(Goal, CF, Rules),
        CF < -20,
        pretty(not Goal, PG),
        write_line([PG, was, derived, from, 'rules: ' | Rules]),
        nl,
        list_rules(Rules),
        fail.

To improve readability, pretty converts between av terms and lists:

    pretty(av(A, yes), [A]) :- !.
    pretty(not av(A, yes), [not, A]) :- !.
    pretty(av(A, no), [not, A]) :- !.
    pretty(not av(A, V), [not, A, is, V]).
    pretty(av(A, V), [A, is, V]).

The predicate list_rules formats the rules listed in a given fact:

    list_rules([]).
    list_rules([R|X]) :-
        list_rule(R),
        list_rules(X).

    list_rule(N) :-
        rule(N, lhs(IfList), rhs(Goal, CF)),
        write_line(['rule', N]),
        write_line(['if']),
        write_ifs(IfList),
        write_line(['then']),
        pretty(Goal, PG),
        write_line(['  ', PG, CF]),
        nl.

    write_ifs([]).
    write_ifs([H|T]) :-
        pretty(H, HP),
        tab(5), write_line(HP),
        write_ifs(T).

pretty can also be used in the other direction, converting a list typed by the user back into an av term.
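update depends on combine to merge a new CF with an existing one, but combine itself is not listed in this excerpt. The standard MYCIN-style combination on a -100..100 scale can be sketched in Python for illustration; treat the exact formula as an assumption about clam's combine:

```python
def combine(cf1, cf2):
    """Merge two certainty factors on a -100..100 scale, MYCIN-style.
    Assumed formula; clam's combine/3 is not shown in this excerpt."""
    if cf1 >= 0 and cf2 >= 0:
        # both positive: the second CF claims a fraction of what is left
        return cf1 + cf2 * (100 - cf1) // 100
    if cf1 < 0 and cf2 < 0:
        # both negative: symmetric to the positive case
        return -(-cf1 + -cf2 * (100 + cf1) // 100)
    # mixed signs: normalize the sum by the smaller magnitude
    return 100 * (cf1 + cf2) // (100 - min(abs(cf1), abs(cf2)))
```

For example, two rules concluding the same fact with CFs 60 and 50 yield a combined CF of 80; the result never leaves the -100..100 range, and same-sign combination is order-independent.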
The user-level how reads a goal, converts it with pretty, and calls how/1:

    how :-
        write('Goal? '),
        read_line(X), nl,
        pretty(Goal, X),
        how(Goal).

how is then made available as a command at the top-level user interface:

    do(how) :- how, !.

The how command above shows only the rules directly related to a fact, but those rules in turn rest on other facts. There are two ways to display this deeper information:
● let the user probe further, asking how the goals on each rule's LHS were proved, descending the evidence tree step by step;
● call how recursively from the relevant predicates, displaying the complete evidence tree at once.
So far we have used the first method. To implement the second, we write the predicate how_lhs:

    list_rules([]).
    list_rules([R|X]) :-
        list_rule(R),
        how_lhs(R),
        list_rules(X).

    how_lhs(N) :-
        rule(N, lhs(IfList), _), !,
        how_ifs(IfList).

    how_ifs([]).
    how_ifs([Goal|X]) :-
        how(Goal),
        how_ifs(X).

3. Explaining why the system asks a question

When the system asks the user a question, the user may ask why the system is asking it.
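The second, recursive method walks the whole evidence tree. As a minimal Python sketch of the same traversal (the in-memory FACTS and RULES tables, and the example rule base, are hypothetical stand-ins for clam's fact/3 and rule/3):

```python
# Hypothetical stand-ins for clam's fact/3 and rule/3.
FACTS = {                       # goal -> (cf, contributing rule numbers)
    "engine_trouble": (80, [2]),
    "battery_dead":   (90, [1]),
}
RULES = {                       # rule number -> (premises, conclusion)
    1: ([], "battery_dead"),
    2: (["battery_dead"], "engine_trouble"),
}

def how(goal, depth=0):
    """Return the evidence tree for a goal as indented lines,
    like how/1 followed by the recursive how_lhs/1."""
    if goal not in FACTS:
        return []
    cf, rules = FACTS[goal]
    lines = ["  " * depth + f"{goal} (cf {cf}) was derived from rules: {rules}"]
    for n in rules:                       # list_rules
        premises, _ = RULES[n]
        for p in premises:                # how_lhs / how_ifs: recurse
            lines += how(p, depth + 1)
    return lines

print("\n".join(how("engine_trouble")))
```

Each level of indentation corresponds to one step down the evidence tree, exactly what the interactive method would reveal one how question at a time.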
The user asks how after reasoning is complete; why arises during the consultation. When inference reaches the bottom of a rule chain and no rules remain to derive a goal, the system asks the user, and at that moment the user may want to know why the system is raising this question. For the system to answer such why questions, the reasoning chain must be recorded. The Hist argument in the following two predicates is the list of rules traversed so far; its initial value is the empty list.

    findgoal(Goal, CurCF, Hist) :-
        fg(Goal, CurCF, Hist).

    fg(Goal, CurCF, Hist) :-
        ...
        prove(N, IfList, Tally, Hist),
        ...

prove pushes each rule number onto the history as it descends:

    prove(N, IfList, Tally, Hist) :-
        prov(IfList, 100, Tally, [N|Hist]), !.
    prove(N, _, _, _) :-
        bugdisp(['fail rule', N]),
        fail.

    prov([], Tally, Tally, Hist).
    prov([H|T], CurTal, Tally, Hist) :-
        findgoal(H, CF, Hist),
        min(CurTal, CF, Tal),
        Tal >= 20,
        prov(T, Tal, Tally, Hist).

When the system asks the user a question, the user may answer why instead, and the history is written out:

    get_user(X, Hist) :-
        repeat,
        write(': '),
        read_line(X),
        process_ans(X, Hist).

    process_ans([why], Hist) :- nl, write_hist(Hist), !, fail.
    process_ans([trace, X], _) :- set_trace(X), !, fail.
    process_ans([help], _) :- help, !, fail.
    process_ans(X, _).          % just return the user's answer

    write_hist([]) :- nl.
    write_hist([goal(X)|T]) :-
        write_line([goal, X]), !,
        write_hist(T).
    write_hist([N|T]) :-
        list_rule(N), !,
        write_hist(T).
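The effect of threading Hist through the proof can be illustrated with a tiny Python analogue (names and the example rule base are ours, not clam's): each recursive proof step pushes the current rule number onto the history, and when a goal has no rules and must be asked of the user, that history is exactly what a why question would print.

```python
RULES = {                       # rule number -> (premises, conclusion)
    1: (["battery_dead"], "engine_trouble"),
    2: (["lights_off"], "battery_dead"),
}
ASKABLE = {"lights_off"}        # goals with no rules: ask the user

why_log = []                    # histories captured when the user was asked

def find_goal(goal, hist):
    """Prove a goal; when no rule concludes it, 'ask the user' and
    record the rule chain that led here (what why would print)."""
    for n, (premises, conclusion) in RULES.items():
        if conclusion == goal:
            for p in premises:          # prov: push this rule onto Hist
                find_goal(p, [n] + hist)
            return
    if goal in ASKABLE:
        why_log.append(list(hist))      # user could now type 'why'

find_goal("engine_trouble", [])
print(why_log)  # the rule chain from the question back up to the goal
```

Here the question about lights_off carries the history [2, 1]: rule 2 was trying to prove battery_dead, which rule 1 needed for engine_trouble.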