Components of An Expert System (ES):
Main Components of an Expert system are explained below:
a. Knowledge Base:
At the center of any Expert System there is a knowledge base, which contains
specific facts about the expert domain and the rules that the Expert System
uses to make decisions based on those facts.
The most popular knowledge representation technique is the use of rules. A
rule specifies what to do in a given situation and consists of two parts:
- A condition that may or may not be true
- Actions to be taken when the condition is true or false
All of the rules contained in an Expert System are called the rule set, which
can vary from a dozen rules in a simple system to 10,000 or more in a complex
one.
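The condition/action structure of a rule described above can be sketched in code. This is a minimal illustration, not a real Expert System shell; the rule content (fever, rash, "suspect measles") is a hypothetical medical example.

```python
# A rule as a condition/action pair: the condition is tested against the
# known facts, and the action is the conclusion asserted when it holds.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]   # evaluated against known facts
    action: str                         # conclusion asserted when condition is true

# Hypothetical rule: IF fever AND rash THEN suspect measles.
rule = Rule(
    condition=lambda facts: bool(facts.get("fever") and facts.get("rash")),
    action="suspect_measles",
)

facts = {"fever": True, "rash": True}
if rule.condition(facts):
    print(rule.action)
```

A rule set is then simply a collection of such condition/action pairs for the inference engine to examine.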
Most Expert Systems are capable of dealing only with binary logic: yes or no,
true or false, 0 or 1. If Expert Systems are to truly incorporate human
thinking patterns, they must also handle imprecise terms such as "most",
"many", or "some"; handling such terms is the province of fuzzy logic.
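The contrast between binary and fuzzy logic can be sketched as follows. This uses the common min/max definitions of fuzzy AND and OR over truth degrees in [0, 1]; the example values are invented for illustration.

```python
# Fuzzy logic replaces the two truth values {0, 1} with degrees in [0, 1],
# so imprecise terms like "most" or "many" can carry partial truth.
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)    # one common choice of fuzzy conjunction

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)    # the matching fuzzy disjunction

# "Most customers are satisfied" might be 0.8 true rather than strictly yes/no.
most_satisfied = 0.8
many_returning = 0.6
print(fuzzy_and(most_satisfied, many_returning))   # degree both hold
print(fuzzy_or(most_satisfied, many_returning))    # degree either holds
```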
b. Inference Engine:
The inference engine is the portion of the Expert System that performs
reasoning by using the contents of the knowledge base in a particular
sequence. During a consultation, the inference engine examines the rules of
the knowledge base one at a time, and when a rule's condition is true, the
specified action is taken. In Expert Systems terminology, the rule is said to
be fired when the action is taken.
Two main methods have been devised for the inference engine to use in
examining the rules: forward reasoning and reverse reasoning.
In the 1960s a computer scientist named Joseph Weizenbaum wrote a little
program as an experiment in natural language. He named the program after Eliza
Doolittle, the character in My Fair Lady who wanted to learn to speak proper
English. The software allows the computer to act as a gentle therapist who
does not talk much but, instead, encourages the patient - the computer user -
to talk.
The Eliza software has a storehouse of key phrases to be dragged out when
triggered by the patient. For example, if a patient types:
"My mother never liked me,"
the software - cued by the word mother - can respond:
"Tell me more about your family."
If there are no key words from the patient, the computer responds neutrally,
with a phrase such as:
"I see" or "That's very interesting" or "Why do you think that?"
If a patient gives yes or no answers, the computer may respond: "I prefer …
With party tricks like these, the program is able to move along quite swiftly
from line to line. Weizenbaum was astonished to discover that people were
taking his little program seriously, pouring out their hearts to the computer.
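The keyword-triggered behaviour described in the anecdote can be sketched in a few lines. This is only a toy illustration of the idea, not Weizenbaum's actual program; the key-phrase table is invented, apart from the "mother" example quoted above.

```python
# A toy Eliza-style responder: scan the input for a known keyword and return
# its canned reply; otherwise fall back to a neutral phrase.
import random

KEY_PHRASES = {
    "mother": "Tell me more about your family.",
    "father": "Tell me more about your family.",
}
NEUTRAL = ["I see.", "That's very interesting.", "Why do you think that?"]

def respond(patient_input: str) -> str:
    text = patient_input.lower()
    for keyword, reply in KEY_PHRASES.items():
        if keyword in text:
            return reply
    return random.choice(NEUTRAL)       # no keyword: respond neutrally

print(respond("My mother never liked me."))
```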
Forward reasoning is also called forward chaining; the rules are examined one
after another in a certain order. The order might be the sequence in which the
rules were entered into the rule set, or it might be some other sequence
specified by the user. As each rule is examined, the Expert System attempts to
evaluate whether its condition is true or false. For example, a medical expert
system may be used to examine a patient's symptoms and provide a diagnosis.
Based on these symptoms, the expert system might locate several diseases that
the patient may have.
Reverse reasoning is also called backward chaining; the inference engine selects
a rule and regards it as a problem to be solved. Such procedures are often
called goal-driven inference processes. For example, the expert system might
be given the goal "find the symptoms of this disease" and would work back
from there, asking questions as necessary to confirm the diagnosis.
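Backward chaining can be sketched as a recursive procedure that starts from the goal and works back through the rules, consulting the user (stubbed here as a dictionary of answers) only for the facts it actually needs. The rule and fact names are invented for illustration.

```python
# A minimal backward-chaining sketch: to prove a goal, either look it up as a
# known fact or find a rule concluding it and prove each condition in turn.
rules = {                                 # hypothetical rule set: goal -> conditions
    "possible_flu": {"fever", "cough"},
    "recommend_rest": {"possible_flu"},
}
known = {"fever": True, "cough": True}    # stands in for the user's answers

def backward_chain(goal: str) -> bool:
    if goal in known:                     # a directly observable fact
        return known[goal]
    if goal in rules:                     # work back through the rule's conditions
        return all(backward_chain(cond) for cond in rules[goal])
    return False                          # nothing known and no rule concludes it

print(backward_chain("recommend_rest"))
```

Note the contrast with forward chaining: here no rule fires until the goal demands it, which is why such procedures are called goal driven.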
c. User Interface:
Users interact with the Expert System through a user interface. In most cases,
the Expert System prompts the user to supply information about the problem,
and the user types in the requested data. The data entered are examined by the
inference engine and compared to the facts, rules, and relationships in the
knowledge base. This examination and comparison results in the system
continuing to prompt the user for more information until it has enough data
about the current problem to reach a conclusion. Thus the user interface for
an Expert System is highly interactive.
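The prompt-until-enough-data loop described above can be sketched as follows. To keep the sketch runnable without a live user, the typed replies are stubbed with a queue; in a real consultation each would come from an `input()` prompt. Questions and the conclusion are hypothetical.

```python
# Sketch of the consultation loop: keep prompting for facts until the system
# has enough data to reach a conclusion.
from collections import deque

questions = ["Do you have a fever?", "Do you have a cough?"]
answers = deque(["yes", "yes"])          # stands in for the user's typed replies

facts = {}
for question in questions:
    reply = answers.popleft()            # in a real system: input(question)
    facts[question] = (reply == "yes")

# With all the needed facts gathered, the inference engine can conclude.
if all(facts.values()):
    print("Conclusion: possible flu")
```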
Ideally, the user interface should enable the user to communicate with the
Expert System in his or her natural language, without needing to learn a rigid
programming language.
d. Explanation Facilities:
After users supply information about the current problem-solving situation, the
Expert System reaches a conclusion and/or makes a recommendation (which can be
output to a screen, printer, or storage device) about what should be done. In
many cases, users are interested in knowing the line of reasoning followed by
the Expert System in drawing conclusions.
The explanation facility communicates to the user the logic followed in
reaching a decision and, in some cases, may also attempt to explain the
importance of certain information inputs. Also, if the Expert System cannot
draw a conclusion, it should display what it has uncovered and let human
experts use these facts to their advantage.
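One common way to realise an explanation facility is to record which rules fired, in order, and replay that trace as the line of reasoning. The sketch below assumes the same hypothetical medical rules used earlier in this section.

```python
# Sketch of an explanation facility: log every rule that fires so the system
# can later show the user the chain of reasoning behind its conclusion.
rules = [
    ({"fever", "cough"}, "possible_flu"),    # hypothetical rule set
    ({"possible_flu"}, "recommend_rest"),
]

facts = {"fever", "cough"}
trace = []                                   # the recorded line of reasoning
for conditions, conclusion in rules:
    if conditions <= facts:
        facts.add(conclusion)
        trace.append(f"IF {sorted(conditions)} THEN {conclusion}")

# The explanation shown to a user who asks "why?":
for step in trace:
    print(step)
```

If no conclusion is reached, the same trace shows what was uncovered, which is exactly the fallback behaviour described above.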