Fusion of Neural Networks, Fuzzy Systems and Genetic Algorithms: Industrial Applications
by Lakhmi C. Jain; N.M. Martin
CRC Press, CRC Press LLC
ISBN: 0849398045   Pub Date: 11/01/98
  



A data set composed of system examples is acquired for use in the training stage. Figure 2(a) displays the examples covering the domain, each one formed by an input-output data sample. The examples are grouped into clusters, one for each rule R(l). In the figure, we illustrate the rule acquisition expressed in statement (6).


Figure 2  (a) Set of examples selected from the training data to extract the rule with antecedents defined by fuzzy sets PM and NM. (b) Membership function induced by weighted output values y′ into the specified rule region, and the computed conclusion value ω(l).

The rule's condition part is characterized by the fuzzy sets PM (Positive-Medium) and NM (Negative-Medium). The conclusion part, characterized by a numerical value ω(l), is extracted from the examples contained in the domain region covered by the two fuzzy sets PM and NM. This set of examples is represented in Figure 2 by filled circles inside the rule region R(l).

Using the fuzzy-cluster concept, the method attributes to each example a degree expressing how much it belongs to that cluster or, in other words, how much it contributes to the extraction of the conclusion value ω(l) of that rule R(l).

Consider an example inside the rule region. Its contribution degree is computed as the product of its membership degrees in the condition fuzzy sets PM and NM of the specified rule region, as expressed in (7) and displayed in Figure 2(b). The computed contribution degree then weights the corresponding output value y′.

These operations are executed for each example inside the rule region and compose a membership function defined over all output values y′ in the rule region, as Figure 2(b) illustrates for rule (6). Using the centroid method, the final conclusion value ω(l) for rule (l) is computed from the induced membership function.
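The computation described above can be sketched in a few lines of Python. The fuzzy-set centers, widths, and example values below are hypothetical illustrations, not taken from the chapter's data; only the structure of the computation (product of condition membership degrees, weighting of y′, centroid as a normalized weighted sum) follows the text.

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Symmetric Gaussian membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Hypothetical centers and common width for the PM and NM fuzzy sets.
pm_center, nm_center, sigma = 0.5, -0.5, 0.2

# Hypothetical examples (x1, x2, y') lying inside the rule region R(l).
examples = [(0.45, -0.55, 1.2), (0.52, -0.48, 1.0), (0.60, -0.40, 0.8)]

numerator = 0.0
denominator = 0.0
for x1, x2, y in examples:
    # Contribution degree: product of the condition membership degrees.
    degree = gaussian_mf(x1, pm_center, sigma) * gaussian_mf(x2, nm_center, sigma)
    numerator += degree * y      # weighted output value
    denominator += degree        # normalization term

# Centroid of the induced membership function: the conclusion value w(l).
omega = numerator / denominator
```

Examples closer to the cluster center (here, near x1 = 0.5, x2 = -0.5) receive larger contribution degrees, so they pull ω(l) toward their output values.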

4.2 The Cluster-Based Algorithm

The algorithm uses the ideas introduced in the preceding section to extract each rule and build an initial model of the electro-hydraulic actuator. First, the algorithm divides the system's domain into a set of clusters using the fuzzy sets attributed to each variable. As shown in Figure 2, each cluster represents a local rule. The number of rules composing the model is established a priori as the product of the numbers of fuzzy sets attributed to the condition variables.
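The a priori rule count is a simple product. As an illustration (using the 7-set and 5-set partitions that appear later in Section 4.3, purely as example figures):

```python
# Number of condition fuzzy sets per input variable (illustrative values).
fuzzy_sets_per_variable = [7, 5]   # e.g., 7 sets for x1, 5 sets for x2

n_rules = 1
for n_sets in fuzzy_sets_per_variable:
    n_rules *= n_sets

print(n_rules)  # 35 clusters, one local rule each
```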

The steps of the cluster-based algorithm are described below in more detail, and a simple example illustrates them.

Starting with the first rule (l = 1) and the kth training example, the cluster-based algorithm executes the following steps to extract its conclusion value ω(1):

Step 1) Establish the variable set that best characterizes the actuator's behavior;
Step 2) Set the limits of each universe of discourse and the number of fuzzy sets for the input-output variables selected in step 1. The algorithm uses symmetric Gaussian membership functions uniformly distributed over each universe of discourse;
Step 3) The algorithm begins with the extraction of the first rule (l = 1). From the training set, we take the kth numerical example and calculate, for all condition variables, their respective membership degrees in the fuzzy sets composing the rule, as expressed in (8).

Step 4) Calculate the membership degree of the corresponding output value y′(k) in rule (l), that is, its membership degree in cluster (l), as indicated in (9) by the term S1(l)(k).

Step 5) The output value y′(k) is weighted by its membership degree S1(l)(k) in rule (l), as described in Equation (10) by S2(l)(k).

Step 6) In this step, the algorithm recursively accumulates the value S2(l)(k) and the membership degree S1(l)(k), as indicated in (11). The variable Numerator adds to rule (l) all weighted contributions made by the n data values y′(k) in the training set. The variable Denominator sums the membership degrees in order to normalize the conclusion value ω(l).

Get the next example. If there are no more examples, go to step 7 and compute the conclusion value ω(l). Otherwise, return to step 3 and pick up the next example, as indicated in (12).

Step 7) When the training set is exhausted (k = n), compute the conclusion value ω(l) for rule (l) using Equation (13).

Step 8) The algorithm then moves to the next rule (14), begins again with the first training example (15), and returns to step 3. If there are no more rules (l = c), the algorithm stops.
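The steps above can be sketched as a single routine. This is a minimal illustration, not the chapter's implementation: the function and parameter names are our own, the code assumes two condition variables and a shared Gaussian width, and the inner loop folds steps 3-6 together (the per-rule accumulation of S1(l)(k) and S2(l)(k) followed by the normalization of step 7).

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Symmetric Gaussian membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def extract_conclusions(data, rule_centers, sigma):
    """Cluster-based extraction of the conclusion value w(l) for every rule.

    data         : list of (x1, x2, y) training examples
    rule_centers : list of (c1, c2) condition fuzzy-set centers, one per rule
    sigma        : common width of the Gaussian membership functions
    """
    conclusions = []
    for c1, c2 in rule_centers:                 # Step 8: loop over rules l = 1..c
        numerator = 0.0                         # accumulates S2(l)(k)
        denominator = 0.0                       # accumulates S1(l)(k)
        for x1, x2, y in data:                  # Steps 3-6 over all n examples
            # Steps 3-4: membership degree of the example in cluster (l)
            s1 = gaussian_mf(x1, c1, sigma) * gaussian_mf(x2, c2, sigma)
            numerator += s1 * y                 # Steps 5-6: weighted output
            denominator += s1
        # Step 7: normalized conclusion value w(l); guard against empty clusters
        conclusions.append(numerator / denominator if denominator > 0 else 0.0)
    return conclusions
```

Each pass over the data updates every rule's Numerator and Denominator independently, so rules whose regions contain few or no examples simply receive little or no weight.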

4.3 Illustrative Example

This example illustrates the preceding steps for one training period. It uses the two examples shown in (16) to demonstrate the computation of ω(l) for a specified rule (l).

This example considers a system with two antecedent variables, denoted x1 and x2, and one output variable, y. The variables are partitioned by symmetric triangular membership functions. The use of a triangular partition instead of a Gaussian one makes the algorithm steps easier to visualize. We attributed 7 fuzzy sets to variable x1 (Figure 3a), 5 fuzzy sets to x2 (Figure 3b), and 5 fuzzy sets to y (Figure 3c).
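A uniform triangular partition like the one in Figure 3 can be constructed as follows. The universe limits [-1, 1] are an assumption for illustration; the function name and interface are our own.

```python
import numpy as np

def triangular_partition(lo, hi, n_sets):
    """Uniformly spaced symmetric triangular membership functions on [lo, hi].

    Returns a function mu(x) giving the n_sets membership degrees of x.
    Adjacent triangles cross at degree 0.5, so degrees sum to 1 inside [lo, hi].
    """
    centers = np.linspace(lo, hi, n_sets)
    half_width = (hi - lo) / (n_sets - 1)     # each triangle spans two centers
    def mu(x):
        return np.maximum(0.0, 1.0 - np.abs(x - centers) / half_width)
    return mu

mu_x1 = triangular_partition(-1.0, 1.0, 7)   # 7 fuzzy sets for x1 (Figure 3a)
mu_x2 = triangular_partition(-1.0, 1.0, 5)   # 5 fuzzy sets for x2 (Figure 3b)
mu_y  = triangular_partition(-1.0, 1.0, 5)   # 5 fuzzy sets for y  (Figure 3c)
```

With this partition, any interior point belongs to at most two adjacent fuzzy sets, which is what makes the contribution degrees of the algorithm easy to trace by hand.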


Figure 3  (a) Partition of variable x1 with 7 fuzzy sets. (b) Partition of variable x2 with 5 fuzzy sets. (c) Partition of variable y with 5 fuzzy sets.



Copyright © CRC Press LLC