Accuracy Controlled Fuzzy Solution
The architecture of the neural net (Box 1 of Figure 2) is designed so that the neural net can be fully mapped to fuzzy logic (Figure 3). This full mapping guarantees that no accuracy is lost in converting a neural-net-based solution to a fuzzy-logic-based one, and it also dictates the nonheuristic algorithms used on the fuzzy logic side (Box 3 of Figure 2): the defuzzification, rule evaluation, and antecedent processing algorithms are all derived from the neural net architecture. Since the neural net can be trained to a pre-specified accuracy, the generated rules and membership functions guarantee the same accuracy in the corresponding fuzzy logic design. In other words, the neural net generates rules and membership functions that meet a pre-specified accuracy level.
Thus, unlike a conventional fuzzy system, a NeuFuz-based fuzzy system can be developed to meet a pre-specified accuracy level. That accuracy is readily met on the training set, but it may not be met on the test set. In that case, the neural net can be retrained to a higher accuracy, part of the test set can be folded into the training set, or both, improving performance on the test set.
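This develop-and-retrain loop is easy to express in code. Below is a minimal sketch in Python; the train() and evaluate() methods, the data sets, and the policy of folding a tenth of the test set into the training set are illustrative assumptions, not part of NeuFuz itself:

```python
# Sketch of the accuracy-controlled development loop described above.
# net.train() and net.evaluate() are hypothetical placeholders.

def develop_solution(net, train_set, test_set, target_accuracy, max_rounds=10):
    """Train until the pre-specified accuracy also holds on the test set."""
    for _ in range(max_rounds):
        # The training set meets the pre-specified accuracy easily.
        net.train(train_set, target_accuracy)
        if net.evaluate(test_set) >= target_accuracy:
            return net  # generated rules/membership functions keep this accuracy
        # Otherwise retrain after folding part of the test set into the
        # training set (and/or tighten the training accuracy).
        cut = max(1, len(test_set) // 10)
        train_set, test_set = train_set + test_set[:cut], test_set[cut:]
    raise RuntimeError("pre-specified accuracy not reached on the test set")
```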
Adaptability
NeuFuz can adapt over time through on-line learning. In particular, when implemented on an embedded processor with on-chip learning capability, a NeuFuz solution can continue to adapt in the field.
Control Parameters
NeuFuz technology provides various parameters to control and optimize the solution. Desired accuracy, learning rate, and number of membership functions are a few examples.
Understanding the Black Box
The weights of the neural net are mapped to fuzzy logic rules and membership functions. Expressing the weights as fuzzy rules gives better insight into the black box and thus helps in designing the neural net itself.
The NeuFuz architecture is shown in Figures 3 and 4; Figure 4 details layer 1 of Figure 3. Multiplicative neurons are used in the hidden layer, and the output layer uses a sum neuron. The input layer, which performs fuzzification and defines the membership functions (layer 1 of Figure 3, expanded as layers 1-4 of Figure 4), uses linear, nonlinear, and sum neurons. Training uses the back-propagation learning algorithm, modified to handle the multiplicative neurons in layer 2 of Figure 3.
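To make the layer structure concrete, here is a minimal sketch of the Figure 3 recall (forward) pass in Python. The names (membership_fns, rule_index, consequents) and the assumption that each layer 2 neuron combines one membership degree per input variable are illustrative, not taken from the book:

```python
import numpy as np

def forward(x, membership_fns, rule_index, consequents):
    """Sketch of the Figure 3 forward pass.

    x: one value per input variable.
    membership_fns: for each input, a list of membership functions (layer 1).
    rule_index: for each rule neuron, one membership-function index per input.
    consequents: singleton weight between each layer 2 neuron and the output.
    """
    # Layer 1: fuzzification -- membership degree of each input in each class.
    degrees = [[mf(xi) for mf in mfs] for xi, mfs in zip(x, membership_fns)]
    # Layer 2: multiplicative neurons -- each rule multiplies its degrees.
    strengths = np.array([np.prod([degrees[i][j] for i, j in enumerate(rule)])
                          for rule in rule_index])
    # Layer 3: sum neuron -- weighted sum of rule strengths with the
    # singleton consequents.
    return float(strengths @ np.asarray(consequents))
```

With two inputs and three membership functions each, rule_index would enumerate all nine label pairs, one layer 2 neuron per rule.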
Fuzzification and Generating Membership Functions
The first layer of Figure 3 performs the fuzzification process, whose task is to match the values of the input variables against the labels used in the fuzzy control rules. The layer 1 neurons and the weights between layers 1 and 2 also define the input membership functions. In fact, it is difficult to do both fuzzification and membership function learning in a single layer, so Figure 4 shows a multilayer implementation of fuzzification and membership function generation. For an input value x, the output of the layer 1 neuron (Figure 4) is g1 · x, where g1 is the gain of the layer 1 neuron. The input of the layer 2 neuron is then g1 · x · W1. Continuing this way, the input of the layer 4 neuron, z, is

z = c · (a · x + b)

where a = g1 · g2 · W1 · W2, b is the bias of the layer 3 sum neuron, c = W3, and g2 is the gain of the layer 2 neuron.
Gains g1 and g2 can be kept constant, adjusting only the weights W1, W2, and W3 during learning. Now, if we take the nonlinear function to be an exponential function of the form 1/(1 + e^(-z)), then the output, y, of the neuron in layer 4 is

y = 1/(1 + e^(-z)) = 1/(1 + e^(-c · (a · x + b)))
By learning a, b, and c (i.e., the weights W1, W2, W3 and the bias), we can easily learn an exponential membership function. The size and shape of this function are determined by the weights W1, W2, W3 and the bias b. By using different initial values of the weights, we can generate exponential membership functions of the same type but with different shapes, sizes, and positions, and by using multiple neurons in layers 3 and 4 with different initial values of the W2s and W3s, we can learn any class of exponential-type membership functions. These membership functions meet all the criteria for back-propagating error signals; other suitable mathematical functions could be used as well. Breaking the network up in this particular way (Figure 4) gives better control over learning the membership functions. After learning is completed, the weights remain fixed, and the neural net recall operation classifies the input x into one or more fuzzy classes (each neuron in layer 4 defines a fuzzy class).
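Under the reconstructed form z = c · (a · x + b) used above, the layer 1-4 pipeline of Figure 4 can be sketched as follows; the fixed gains and the particular initial weight values are illustrative assumptions:

```python
import numpy as np

G1, G2 = 1.0, 1.0  # gains g1, g2, kept constant during learning

def membership(x, w1, w2, w3, b):
    a = G1 * G2 * w1 * w2            # layers 1-2: linear scaling of the input
    z = w3 * (a * x + b)             # layer 3 sum neuron (bias b), times W3 = c
    return 1.0 / (1.0 + np.exp(-z))  # layer 4: nonlinear (exponential) neuron

# Different initial weights give differently shaped, sized, and positioned
# membership functions of the same exponential type:
x = np.linspace(-5.0, 5.0, 101)
mu_low  = membership(x, w1=1.0, w2=1.0, w3=-2.0, b=-1.0)  # decreasing class
mu_high = membership(x, w1=1.0, w2=1.0, w3=2.0,  b=-1.0)  # increasing class
```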
Generating Fuzzy Rules
The layer 2 neurons of Figure 3 represent the rule base, and the output layer neuron performs rule evaluation and defuzzification. Neurons in these two layers are linear with unity slope. The weights between layers 2 and 3 (Figure 3) represent the consequents; these are singletons. After learning is completed, the layer 2 neurons, together with the outputs of the layer 1 neurons and the weights between layers 2 and 3, form the fuzzy rule base.
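Once learning is complete, the rule base can be read back out by pairing each layer 2 neuron's antecedent labels with its singleton consequent weight. A sketch, with hypothetical label names and weight ordering:

```python
from itertools import product

def extract_rules(labels, consequents):
    """labels: one list of linguistic label names per input variable.
    consequents: singleton weight per layer 2 rule neuron, in the order
    the rule neurons enumerate the label combinations."""
    rules = []
    for combo, w in zip(product(*labels), consequents):
        antecedent = " AND ".join(f"x{i + 1} is {name}"
                                  for i, name in enumerate(combo))
        rules.append(f"IF {antecedent} THEN y = {w:.3f}")
    return rules

for rule in extract_rules([["LOW", "HIGH"], ["LOW", "HIGH"]],
                          [0.1, 0.4, 0.6, 0.9]):
    print(rule)
```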