Mine-hoist fault-condition detection based on the wavelet packet transform and kernel PCA

XIA Shi-xiong, NIU Qiang, ZHOU Yong, ZHANG Lei
School of Computer Science & Technology, China University of Mining & Technology, Xuzhou, Jiangsu 221008, China

Received 15 May 2008; accepted 20 July 2008. Projects 50674086 supported by the National Natural Science Foundation of China, BS2006002 by the Society Development Science and Technology Plan of Jiangsu Province and 20060290508 by the Doctoral Foundation of Ministry of Education of China. Corresponding author. Tel: +86-516-83591702.

Abstract: A new algorithm was developed to correctly identify fault conditions and accurately monitor fault development in a mine hoist. The new method is based on the Wavelet Packet Transform (WPT) and kernel PCA (Kernel Principal Component Analysis, KPCA). For non-linear monitoring systems the key to fault detection is the extraction of the main features. The wavelet packet transform is a signal-processing technique with excellent time-frequency localization characteristics, which makes it suitable for analysing time-varying or transient signals. KPCA maps the original input features into a higher-dimensional feature space through a non-linear mapping, and the principal components are then found in that feature space. The KPCA transformation was applied to extract the main nonlinear features from experimental fault-feature data after wavelet packet transformation. The results show that the proposed method affords credible fault detection and identification.

Key words: kernel method; PCA; KPCA; fault condition detection

1 Introduction

Because a mine hoist is a very complicated and variable system, it will inevitably develop faults during long-term running under heavy loads. This can lead to equipment damage, work stoppage and reduced operating efficiency, and may even threaten the safety of mine personnel. The identification of running faults has therefore become an important component of the safety system. The key technique for hoist condition monitoring and fault identification is extracting information from the features of the monitoring signals and then offering a judgement. However, a mine hoist has many variables to monitor, and there are many complex correlations between these variables and the working equipment. This introduces uncertain factors and information manifested in complex forms, such as multiple faults or associated faults, which makes fault diagnosis and identification considerably more difficult [1]. There are currently many conventional methods for extracting mine-hoist fault features, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) [2], and these methods have been applied to the actual process. However, they are essentially linear transformation approaches, whereas the actual monitoring process contains nonlinearity to varying degrees. Researchers have therefore proposed a series of nonlinear methods involving complex nonlinear transformations. These non-linear methods are, however, confined to fault detection: fault-variable separation and fault identification remain difficult problems.

This paper describes a hoist fault-diagnosis feature-extraction method based on the Wavelet Packet Transform (WPT) and kernel principal component analysis (KPCA).
We extract the features by WPT and then extract the main features using a KPCA transform, which projects low-dimensional monitoring data samples into a high-dimensional space. Then we perform a dimension reduction and a reconstruction back to the singular kernel matrix, after which the target feature is extracted from the reconstructed nonsingular matrix. In this way the extracted target feature is distinct and stable. By comparing the analyzed data we show that the method proposed in this paper is effective.

2 Feature extraction based on WPT and KPCA

2.1 Wavelet packet transform

The wavelet packet transform (WPT) method [3], which is a generalization of wavelet decomposition, offers a rich range of possibilities for signal analysis. The frequency bands of a hoist-motor signal as collected by the sensor system are wide, and the useful information hides within the large amount of data. In general, some frequencies of the signal are amplified and some are depressed by the information. That is to say, these broadband signals contain a large amount of useful information, but the information cannot be obtained directly from the data. The WPT is a fine signal-analysis method that decomposes the signal into many layers and gives a better resolution in the time-frequency domain. After decomposition, the useful information within the different frequency bands is expressed by different wavelet coefficients. The concept of "energy information" is introduced to identify new information hidden in the data, and an energy eigenvector is then used to quickly mine the information hiding within the large amount of data. The algorithm is:

Step 1: Perform a 3-layer wavelet packet decomposition of the echo signals and extract the signal characteristics of the eight frequency components, from low to high, in the 3rd layer.

Step 2: Reconstruct the coefficients of the wavelet packet decomposition. Use $S_{3j}$ ($j = 0, 1, \ldots, 7$) to denote the reconstructed signal of each frequency band in the 3rd layer. The total signal can then be denoted as:

$S = \sum_{j=0}^{7} S_{3j}$  (1)

Step 3: Construct the feature vectors of the echo signals of the GPR. When the coupled electromagnetic waves are transmitted underground they meet various inhomogeneous media, so the energy distribution of the echo signals in each frequency band will differ. Assume that the energy corresponding to $S_{3j}$ ($j = 0, 1, \ldots, 7$) can be represented as $E_{3j}$ ($j = 0, 1, \ldots, 7$), and that the magnitudes of the discrete points of the reconstructed signal $S_{3j}$ are $x_{jk}$ ($j = 0, 1, \ldots, 7$; $k = 1, 2, \ldots, n$), where $n$ is the length of the signal. Then we can get:

$E_{3j} = \int |S_{3j}(t)|^{2}\,\mathrm{d}t = \sum_{k=1}^{n} |x_{jk}|^{2}$  (2)

Considering that only a 3-layer wavelet packet decomposition of the echo signals has been made, to describe the change of each frequency component in more detail the 2nd-order statistical characteristic of the reconstructed signal is also regarded as a feature:

$D_{3j} = \frac{1}{n} \sum_{k=1}^{n} (x_{jk} - \bar{x}_{j})^{2}$  (3)

Step 4: The $E_{3j}$ are often large, so we normalize them. Assume that $E = \big(\sum_{j=0}^{7} E_{3j}^{2}\big)^{1/2}$; the derived feature vector is then:

$T = [E_{30}/E,\ E_{31}/E,\ \ldots,\ E_{36}/E,\ E_{37}/E]$  (4)

The signal is decomposed by a wavelet packet and the useful characteristic feature vectors are then extracted through the process given above. Compared to other traditional methods, such as the Hilbert transform, approaches based on WPT analysis are preferred because of the agility of the process and its systematic decomposition. A sketch of this feature construction is given below.
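As an illustration of Steps 1-4, the following sketch builds the normalized band-energy vector of Eq.(4) from a one-dimensional signal. It assumes the PyWavelets package; the wavelet ('db4'), the boundary mode, the choice of normalization constant and the synthetic test signal are illustrative assumptions rather than choices stated in the paper.

```python
# Minimal sketch of the Section 2.1 energy-eigenvector construction (assumptions noted above).
import numpy as np
import pywt


def wpt_energy_features(signal, wavelet="db4", level=3):
    """Decompose the signal into 2**level frequency bands and return the
    normalized band energies T = [E_30/E, ..., E_37/E] of Eq.(4)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    # Terminal nodes of the 3rd layer, ordered from low to high frequency.
    nodes = wp.get_level(level, order="freq")

    energies = []
    for node in nodes:
        # Reconstruct the band signal S_3j and accumulate its energy E_3j (Eq.(2)).
        band = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric")
        band[node.path] = node.data
        s3j = band.reconstruct(update=False)[: len(signal)]
        energies.append(np.sum(s3j ** 2))

    energies = np.asarray(energies)
    # Normalization constant E of Eq.(4); assumed here to be the root of the
    # summed squared band energies.
    e_total = np.sqrt(np.sum(energies ** 2))
    return energies / e_total


if __name__ == "__main__":
    # Synthetic two-tone signal standing in for a hoist vibration record.
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
    print(wpt_energy_features(x))
```

The 2nd-order statistic of Eq.(3) could be appended to this vector in the same loop; it is omitted here to keep the sketch focused on the energy normalization.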
2.2 Kernel principal component analysis

The method of kernel principal component analysis applies kernel methods to principal component analysis [4-5]. Let $x_k \in R^{N}$, $k = 1, 2, \ldots, M$, with $\sum_{k=1}^{M} x_k = 0$. The principal components are the diagonal elements obtained after the covariance matrix, $C = \frac{1}{M}\sum_{j=1}^{M} x_j x_j^{\mathrm{T}}$, has been diagonalized. Generally speaking, the first $N$ values along the diagonal, corresponding to the large eigenvalues, carry the useful information in the analysis. PCA solves the eigenvalues and eigenvectors of the covariance matrix; its essence is solving the characteristic equation [6]:

$\lambda v = Cv = \frac{1}{M}\sum_{j=1}^{M} (x_j \cdot v)\, x_j$  (5)

where the eigenvalues $\lambda \geq 0$ and the eigenvectors $v \in R^{N} \setminus \{0\}$.

Let the nonlinear transformation $\Phi: R^{N} \to F$, $x \mapsto X$, project the original space into the feature space $F$. Then the covariance matrix $C$ of the original space has the following form in the feature space:

$\bar{C} = \frac{1}{M}\sum_{j=1}^{M} \Phi(x_j)\,\Phi(x_j)^{\mathrm{T}}$  (6)

Nonlinear principal component analysis can be considered to be principal component analysis of $\bar{C}$ in the feature space $F$. Obviously, all the eigenvalues of $\bar{C}$ ($\lambda \geq 0$) and eigenvectors $V \in F \setminus \{0\}$ satisfy $\lambda V = \bar{C} V$. All of the solutions lie in the subspace spanned by $\Phi(x_i)$, $i = 1, 2, \ldots, M$:

$\lambda\,(\Phi(x_k) \cdot V) = (\Phi(x_k) \cdot \bar{C} V), \quad k = 1, 2, \ldots, M$  (7)

There exist coefficients $\alpha_i$ such that

$V = \sum_{i=1}^{M} \alpha_i\, \Phi(x_i)$  (8)

From Eqs.(6), (7) and (8) we can obtain:

$\lambda \sum_{i=1}^{M} \alpha_i\, (\Phi(x_k) \cdot \Phi(x_i)) = \frac{1}{M} \sum_{i=1}^{M} \alpha_i\, \Big(\Phi(x_k) \cdot \sum_{j=1}^{M} \Phi(x_j)\Big)\,(\Phi(x_j) \cdot \Phi(x_i))$  (9)

where $k = 1, 2, \ldots, M$. Define $A$ as an $M \times M$ matrix whose elements are:

$A_{ij} = (\Phi(x_i) \cdot \Phi(x_j))$  (10)

From Eqs.(9) and (10) we can obtain $M\lambda A\alpha = A^{2}\alpha$. This is equivalent to:

$M\lambda\,\alpha = A\alpha$  (11)

Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_M$ be the eigenvalues of $A$ and $\alpha^{1}, \alpha^{2}, \ldots, \alpha^{M}$ the corresponding eigenvectors. To perform the principal-component extraction we only need to calculate the projections of the test points on the eigenvectors $V^{k}$ that correspond to nonzero eigenvalues in $F$. Defining this projection as $\beta_k$, it is given by:

$\beta_k = (V^{k} \cdot \Phi(x)) = \sum_{i=1}^{M} \alpha_i^{k}\, (\Phi(x_i) \cdot \Phi(x))$  (12)

It is easy to see that if we solve for the principal components directly we need to know the exact form of the non-linear mapping, and as the dimension of the feature space increases the amount of computation grows exponentially. Because Eq.(12) involves an inner-product computation, $(\Phi(x_i) \cdot \Phi(x))$, according to Hilbert-Schmidt theory we can find a kernel function that satisfies the Mercer conditions and makes $K(x_i, x) = (\Phi(x_i) \cdot \Phi(x))$. Eq.(12) can then be written:

$\beta_k = (V^{k} \cdot \Phi(x)) = \sum_{i=1}^{M} \alpha_i^{k}\, K(x_i, x)$  (13)

Here $\alpha^{k}$ is an eigenvector of $K$. In this way the dot product is done in the original space and the specific form of $\Phi(x)$ need not be known. The mapping, $\Phi(x)$, and the feature space, $F$, are completely determined by the choice of kernel function [7-8].

2.3 Description of the algorithm

The algorithm for extracting target features for fault-diagnosis recognition is:

Step 1: Extract the features by WPT;

Step 2: Calculate the kernel matrix, $K$, for the samples $x_i \in R^{N}$ ($i = 1, 2, \ldots, N$) in the original input space, with $K_{ij} = (\Phi(x_i) \cdot \Phi(x_j))$;

Step 3: Calculate the kernel matrix after zero-mean processing of the mapped data in feature space;

Step 4: Solve the characteristic equation $M\lambda\alpha = K\alpha$;

Step 5: Extract the $k$ major components using Eq.(13) to derive a new vector.

Because the kernel function used in KPCA meets the Mercer conditions, it can be used instead of the inner product in feature space; it is not necessary to consider the precise form of the nonlinear transformation. The mapping function can be non-linear and the dimensions of the feature space can be very high, but it is possible to extract the main feature components effectively by choosing a suitable kernel function and kernel parameters [9]. A minimal numerical sketch of Steps 2-5 follows.
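The sketch below implements Steps 2-5 with plain numpy, using the Gaussian kernel introduced later in Section 3. The variable names mirror the text ($K$, $\alpha$, $\lambda$); the sample data, the kernel width sigma and the number of retained components are illustrative assumptions, not values from the paper.

```python
# Minimal numpy sketch of the KPCA projection of Section 2.3 (assumptions noted above).
import numpy as np


def gaussian_kernel_matrix(X, Y, sigma):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for every pair of rows."""
    d2 = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def kpca_fit_transform(X, sigma=1.2, n_components=3):
    """Steps 2-5: kernel matrix, zero-mean centering in feature space,
    eigen-decomposition of the centered kernel matrix, and projection of the
    training samples onto the leading eigenvectors (Eq.(13))."""
    M = X.shape[0]
    K = gaussian_kernel_matrix(X, X, sigma)                 # Step 2
    one = np.full((M, M), 1.0 / M)
    K_c = K - one @ K - K @ one + one @ K @ one             # Step 3
    eigval, eigvec = np.linalg.eigh(K_c)                    # Step 4 (eigval = M*lambda)
    order = np.argsort(eigval)[::-1][:n_components]
    lam, alpha = eigval[order], eigvec[:, order]
    # Scale alpha so that the corresponding eigenvectors V^k in F have unit norm.
    alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))
    return K_c @ alpha                                      # Step 5: scores beta_k


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 6))          # stand-in for WPT feature vectors
    scores = kpca_fit_transform(X, sigma=1.2, n_components=3)
    print(scores.shape)                    # (40, 3)
```

Projecting a new sample only requires its (centered) kernel values against the training samples, multiplied by the same alpha, which is exactly the economy that Eq.(13) provides.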
3 Results and discussion

The most common faults of a mine hoist show up in the frequency content of the equipment's vibration signals, so the experiment used the vibration signals of a mine hoist as test data. The collected vibration signals were first processed by the wavelet packet transform. Then, through observation of the different time-frequency energy distributions in one level of the wavelet packet, the original data sheet shown in Table 1 was obtained by extracting the features of the running motor. The fault-diagnosis model is used for fault identification or classification.

Table 1 Original fault data sheet (eigenvector, x10^4)

No.  E50      E51     E41      E31      E21      E11      Fault style
1    166.495  1.3498  0.13612  0.08795  0.19654  0.25780  F1
2    132.714  1.2460  0.10684  0.07303  0.12731  0.19007  F1
3    112.25   1.5353  0.21356  0.09543  0.16312  0.16495  F1
4    255.03   1.9574  0.44407  0.31501  0.33960  0.28204  F2
5    293.11   2.6592  0.66510  0.43674  0.27603  0.27473  F2
6    278.84   2.4670  0.49700  0.44644  0.28110  0.27478  F2
7    284.12   2.3014  0.29273  0.49169  0.27572  0.23260  F3
8    254.22   1.5349  0.47248  0.45050  0.28597  0.28644  F3
9    312.74   2.4337  0.42723  0.40110  0.34898  0.24294  F3
10   304.12   2.6014  0.77273  0.53169  0.37281  0.27263  F4
11   314.22   2.5349  0.87648  0.65350  0.32535  0.29534  F4
12   302.74   2.8337  0.72829  0.50314  0.38812  0.29251  F4

Experimental testing was conducted in two parts. The first part compared the performance of KPCA and PCA for feature extraction from the original data, namely the distribution of the projections of the main components of the tested fault samples. The second part compared the performance of the classifiers constructed after extracting features by KPCA or PCA. The minimum-distance and nearest-neighbor criteria were used for the classification comparison, which also tests the KPCA and PCA performance.

In the first part of the experiment, 300 fault samples were used to compare KPCA and PCA for feature extraction. To simplify the calculations a Gaussian kernel function was used:

$K(x, y) = (\Phi(x), \Phi(y)) = \exp\!\Big(-\frac{\|x - y\|^{2}}{2\sigma^{2}}\Big)$  (14)

The value of the kernel parameter, $\sigma$, is between 0.8 and 3, with an interval of 0.4, once the number of reduced dimensions is ascertained. The best correct classification rate at this dimension is then taken as the accuracy of the classifier with the best classification results.

In the second part of the experiment, the classifiers' recognition rate after feature extraction was examined. Comparisons were done in two ways: by the minimum-distance or the nearest-neighbor criterion. 80% of the data were selected for training and the other 20% were used for testing. The results are shown in Tables 2 and 3.

Table 2 Comparing the recognition rates of the PCA and KPCA methods (%)

                   PCA    KPCA
Minimum distance   91.4   97.2
Nearest-neighbor   90.6   96.5

Table 3 Comparing the recognition times of the PCA and KPCA methods (s)

       Time of extraction  Time of classification  Total time
PCA    216.4               38.1                    254.5
KPCA   129.5               19.2                    148.7

It can be concluded from Tables 2 and 3 that KPCA takes less time and has relatively higher recognition accuracy than PCA. The comparison protocol is sketched below.
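The following sketch of the comparison protocol assumes scikit-learn and a feature matrix X with fault labels y (for example, the WPT energy vectors of Table 1). The 80/20 split, the sigma grid from 0.8 to 3 in steps of 0.4 and the two classification criteria follow the text; the data loading, the number of retained components and the model-selection rule are placeholder assumptions.

```python
# Sketch of the Section 3 experiment: PCA vs. Gaussian-kernel KPCA, scored by the
# minimum-distance and nearest-neighbor criteria (assumptions noted above).
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid


def _classify(F_tr, y_tr, F_te, y_te):
    """Recognition rate under the minimum-distance and nearest-neighbor criteria."""
    return {
        "minimum distance": NearestCentroid().fit(F_tr, y_tr).score(F_te, y_te),
        "nearest neighbor": KNeighborsClassifier(1).fit(F_tr, y_tr).score(F_te, y_te),
    }


def compare_extractors(X, y, n_components=3):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    results = {}

    # Linear PCA baseline.
    pca = PCA(n_components=n_components).fit(X_tr)
    results["PCA"] = _classify(pca.transform(X_tr), y_tr, pca.transform(X_te), y_te)

    # Gaussian-kernel KPCA, sweeping the kernel width sigma = 0.8, 1.2, ..., 2.8
    # and keeping the best classification result, as described in the text.
    best = None
    for sigma in np.arange(0.8, 3.0, 0.4):
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=1.0 / (2.0 * sigma ** 2)).fit(X_tr)
        scores = _classify(kpca.transform(X_tr), y_tr, kpca.transform(X_te), y_te)
        if best is None or max(scores.values()) > max(best.values()):
            best = scores
    results["KPCA"] = best
    return results
```

Note that scikit-learn's RBF kernel uses exp(-gamma * ||x - y||^2), so gamma = 1/(2 sigma^2) reproduces the kernel of Eq.(14).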
4 Conclusions

A kernel principal component analysis method for fault feature extraction was described. The problem is first transformed from a nonlinear space into a linear, higher-dimensional space, and the higher-dimensional feature space is then operated on by taking the inner product with a kernel function. This cleverly solves complex computing problems and overcomes the difficulties of high dimensionality and local minimization. As can be seen from the experimental data, compared with traditional PCA the KPCA analysis greatly improves feature extraction and the efficiency of recognizing fault states.

References

[1] Ribeiro R L. Fault detection of open-switch damage in voltage-fed PWM motor drive systems. IEEE Trans Power Electron, 2003, 18(2): 587-593.
[2] Sottile J. An overview of fault monitoring and diagnosis in mining equipment. IEEE Trans Ind Appl, 1994, 30(5): 1326-1332.
[3] Peng Z K, Chu F L. Application of wavelet transform in machine condition monitoring and fault diagnostics: a review with bibliography. Mechanical Systems and Signal Processing, 2003(17): 199-221.
[4] Roth V, Steinhage V. Nonlinear discriminant analysis using kernel functions. In: Advances in Neural Information Processing Systems. MA: MIT Press, 2000: 568-574.
[5] Twining C, Taylor C. The use of kernel principal component analysis to model data distributions. Pattern Recognition, 2003, 36(1): 217-227.
[6] Muller K R, Mika S, Ratsch G, et al. An introduction to kernel-based learning algorithms. IEEE Trans on Neural Network, 2001, 12(2): 181.
[7] Xiao J H, Fan K Q, Wu J P. A study on SVM for fault diagnosis. Journal of Vibration, Measurement & Diagnosis, 2001, 21(4): 258-262.
[8] Zhao L J, Wang G, Li Y. Study of a nonlinear PCA fault detection and diagnosis method. Information and Control, 2001, 30(4): 359-364.
[9] Xiao J H, Wu J P. Theory and application study of feature extraction based on kernel. Computer Engineering, 2002, 28(10): 36-38.