Condition Monitoring of Elevators Using Deep Learning and Frequency Analysis Approach

by Krishna Mohan Mishra, John-Eric Saxen, Jerker Bjorkqvist and Kalevi J. Huhtala

This paper was presented at Elevcon 2023 in Prague, Czech Republic. 

Abstract 

In this research, we propose an automated deep learning feature extraction technique to calculate new features from the fast Fourier transform (FFT) of data from an accelerometer sensor attached to an elevator car. Data labeling is performed with the information provided by maintenance data. The calculated features, together with class variables, are classified using the random forest algorithm. We have achieved 100% accuracy in fault detection, along with avoiding false alarms, based on the new extracted deep features, which outperform results obtained using existing features. This research will help various predictive-maintenance systems detect false alarms, which will, in turn, reduce unnecessary visits of service technicians to installation sites.

1. Introduction

In recent years, elevator systems have been used more and more extensively in apartments, commercial facilities and office buildings. Nowadays, 54% of the world’s population lives in urban areas (Desa, 2014). Therefore, elevator systems need proper maintenance and safety. Fault-diagnosis methods based on deep neural network (Jia et al., 2016) and convolutional neural network (Xia et al., 2018) feature extraction are presented as state of the art for rotary machines similar to elevator systems. Support vector machines (Martinez-Rego et al., 2011) and extreme learning machines (Yang and Zhang, 2016) are also used as fault-detection methods for rotary machines. In this work, we have developed an intelligent deep autoencoder random forest-based feature extraction methodology for fault detection in elevator systems to improve the performance of traditional fault-diagnosis methods.

Autoencoders were first introduced by LeCun (Fogelman-Soulie et al., 1987) and have been studied for decades. Traditionally, feature learning and dimensionality reduction are the two main uses of autoencoders. Recently, autoencoders have been considered one of the most compelling subspace analysis techniques because of the theoretical relations between autoencoders and latent variable models. Autoencoders have been used for feature extraction in systems such as induction motors (Sun et al., 2016) and wind turbines (Jiang et al., 2018) for fault detection, which differ from the elevator systems considered in our research.

In our previous research, raw sensor data, mainly acceleration signals, were used to calculate elevator key performance and ride-quality features, which we here call existing features. Random forest was used for fault detection based on these existing features. Existing domain-specific features are calculated from raw sensor data, but this requires expert knowledge of the domain and results in some loss of information. To avoid these drawbacks, we have developed an algorithm for FFT-based feature extraction from the raw sensor data of rides, as well as a generic algorithm based on the deep autoencoder random forest approach for automated feature extraction from the FFT-based features for fault detection in elevator systems. The rest of this paper is organized as follows. Section 2 presents the methodology of the paper, including the data extraction, deep autoencoder and random forest algorithms. Section 3 includes the details of the experiments performed, results and discussion. Finally, Section 4 concludes the paper and presents future work.

2. Methodology

In this study, we have utilized 12 different existing features derived from raw sensor data describing the motion and vibration of an elevator for fault detection and diagnostics of multiple faults. We have developed an automated feature extraction technique for raw sensor data in this research as an extension to the work of our previous research (Mishra et al., 2019) to compare the results using new extracted deep features. Data collected from an elevator system is processed to obtain time series vectors representing the constant speed movement phase, from which compressed frequency domain features are derived. Frequency domain features are fed to a deep autoencoder model for feature extraction and then random forest performs the fault detection task based on extracted deep features.

2.1 Data Extraction Algorithm

Raw sensor data collected from elevator systems typically comprise a large number of data points sampled at high frequency. In order to transmit big sensor data to cloud-based applications, it is often desirable to pre-process and compress the data before transmission, for example in the form of edge computing performed at the device end. Here, the raw data are obtained from a set of events (elevator travels) in the form of one-dimensional time series vectors with equidistant sampling times. The goal of the data-processing phase is to compress the set of raw time series obtained from the machinery and transform it into features of reduced dimensionality, while maintaining the information necessary for fault detection.

Dimensionality reduction is not only beneficial in the sense of data volume reduction; it can also make the data more suitable for machine learning by shortening training and reducing overfitting (Bellman, 1966). Meanwhile, transforming the data into the frequency domain allows each variable-length event to be represented by a compressed set of frequency bins of equal length. The frequency-domain representation is commonly used in machine fault diagnostics (Goyal and Pabla, 2016), where vibration changes are monitored by analyzing the spectrum. The outlined data extraction process can be divided into pre-processing, data selection, transformation and (compressed) feature extraction.

In the pre-processing stage, 200 Hz data from an accelerometer measuring the vertical elevator acceleration are obtained over a set of elevator travels. Each elevator travel can be divided into an acceleration, a constant-speed and a deceleration phase, where the constant-speed phase is primarily of interest in the presented method. The unfiltered acceleration data are processed as a one-dimensional time series vector and normalized. In the next step, data selection is performed to obtain the windows of constant speed from the acceleration time series, with a minimum window length of 200 samples, while the remaining data points are discarded.
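As a minimal sketch of this data-selection step: the paper does not specify how the constant-speed window is located, so the run-length detection and the threshold `thresh` below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

MIN_LEN = 200      # minimum constant-speed window length in samples (from the paper)

def constant_speed_window(accel, thresh=0.5):
    """Return the longest run of samples whose normalized vertical acceleration
    stays near zero, taken here as the constant-speed phase of one travel.
    `thresh` is expressed in units of the normalized signal and is only an
    illustrative choice."""
    accel = (accel - np.mean(accel)) / (np.std(accel) + 1e-12)  # normalize the ride
    quiet = np.abs(accel) < thresh            # candidate constant-speed samples
    best_start, best_len, start = 0, 0, None
    for i, q in enumerate(np.append(quiet, False)):
        if q and start is None:
            start = i                         # a run of near-zero acceleration begins
        elif not q and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len < MIN_LEN:                    # discard rides with too-short windows
        return None
    return accel[best_start:best_start + best_len]
```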

In the transformation stage, the set \(X\) of time series vectors representing the constant-speed phases of rides is transformed into the frequency domain. The Fourier transform is the most widely used frequency conversion; it mathematically relates a time-domain signal to its frequency-domain representation. For each variable-length acceleration vector \(x_{T}\), where \(T\) indicates the travel number, an L-point FFT was applied. The normalized FFT amplitude spectrum is obtained according to:

\[ S_{T}\left[k\right] = \frac{\left|Y_{T}\left[k\right]\right|}{L_{T}} = \frac{\sqrt{Y_{T}\left[k\right]^{2}_{\mathrm{Re}} + Y_{T}\left[k\right]^{2}_{\mathrm{Im}}}}{L_{T}} \]

where \(Y_{T}[k]\) is the discrete Fourier transform of the acceleration signal \(x_{T}[n]\). The spectrum \(S_{T}\) contains \(L_{T}\) frequency bins, where \(L_{T}\) describes the length of travel \(T\). In order to extract compressed frequency-domain features, the dimensionality of each spectrum is reduced to an equalized number of bins \(N\) with bin widths \(w_{T} = L_{T}/N\). The new amplitude spectrum is calculated by averaging over the bins:

\[ S_{T,\mathrm{bin}}\left[n\right] = \frac{1}{w_{T}} \sum^{n\cdot w_{T}}_{k=\left(n-1\right)\cdot w_{T}} S_{T}\left[k\right], \quad n = 1, \dots, N \]

where \(N\) was selected as 40. Finally, since the FFT spectrum of the real-valued signal is mirrored, the first half of each spectrum, containing \(N/2\) features, is selected. Hence, the total number of features fed to the deep autoencoder is \(T \cdot N/2\).
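A minimal NumPy sketch of this transformation and binning step, following the two equations above (normalized amplitude spectrum, N = 40 equal bins, first N/2 bins kept):

```python
import numpy as np

N_BINS = 40   # number of equal-width frequency bins (N in the paper)

def fft_features(window):
    """Compressed FFT features for one constant-speed window: normalized
    amplitude spectrum, averaged into N equal bins, keeping only the first
    N/2 bins because the spectrum of a real signal is mirrored."""
    L = len(window)
    spectrum = np.abs(np.fft.fft(window)) / L              # S_T[k] = |Y_T[k]| / L_T
    w = L // N_BINS                                        # bin width w_T = L_T / N
    binned = spectrum[:w * N_BINS].reshape(N_BINS, w).mean(axis=1)
    return binned[:N_BINS // 2]                            # first N/2 = 20 features
```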

2.2. Deep Autoencoder

We are using a five-layer deep autoencoder (see Figure 1) comprising input, output, encoder, decoder and representation layers, which is a different approach from those of (Jiang et al., 2018) and (Vincent et al., 2008). In our approach, we first analyze the data to find all floor patterns and then feed the segmented raw sensor data windows in the up and down directions separately to the algorithm for FFT feature extraction. The extracted FFT features are fed to the deep autoencoder model for extracting new deep features. Lastly, we apply random forest as a classifier for fault detection based on the new deep features extracted from the FFT features.

In the denoising setup, the input \(x\) is first corrupted into \(x^{\prime}\); the encoder then maps the corrupted input to the hidden representation \(H\) through a nonlinear mapping:

\[H= f\left(W_{1}x^{\prime } + b \right) \]

where \(f(\cdot)\) is a nonlinear activation function, here the sigmoid function, \(W_{1} \in \mathbb{R}^{k \times m}\) is the weight matrix and \(b \in \mathbb{R}^{k}\) is the bias vector to be optimized in encoding, with \(k\) nodes in the hidden layer (Vincent et al., 2008). Then, with parameters \(W_{2} \in \mathbb{R}^{m \times k}\) and \(c \in \mathbb{R}^{m}\), the decoder uses a nonlinear transformation to map the hidden representation \(H\) to a reconstructed vector \(x^{\prime\prime}\) at the output layer.

\[ x^{\prime\prime} = g\left(W_{2}H + c \right) \]

where \(g(\cdot)\) is again a nonlinear function (the sigmoid function). In this study, the weight matrix is tied, \(W_{2} = W_{1}^{T}\), for better learning performance.
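The following Keras sketch illustrates the five-layer structure (input, encoder, representation, decoder, output). The layer sizes, optimizer and training settings are illustrative assumptions, and the weight tying \(W_{2} = W_{1}^{T}\) is omitted here for brevity; this is not the authors' exact configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 20                      # N/2 FFT features per ride (from Section 2.1)

inputs = keras.Input(shape=(n_features,))                        # input layer
encoded = layers.Dense(16, activation="sigmoid")(inputs)         # encoder layer
latent = layers.Dense(8, activation="sigmoid")(encoded)          # representation layer
decoded = layers.Dense(16, activation="sigmoid")(latent)         # decoder layer
outputs = layers.Dense(n_features, activation="sigmoid")(decoded)  # output layer

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, latent)      # used later to extract the deep features
autoencoder.compile(optimizer="adam", loss="mse")

# X_fft: matrix of FFT feature vectors (one row per ride), scaled to [0, 1]
# autoencoder.fit(X_fft, X_fft, epochs=100, batch_size=16, verbose=0)
# deep_features = encoder.predict(X_fft)
```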

2.3 Random Forest

The final classification result of the random forest is obtained by averaging, i.e., by taking the arithmetic mean of the class probabilities produced by all trees. Testing data that is unknown to all the decision trees is then evaluated by the voting method: each sensor data value reaches a leaf node in every decision tree, where \(L_{e}\) denotes the number of training samples in the arrived leaf node of decision tree \(e\), and each tree casts a vote for a class based on those samples (Huynh et al., 2016).

All classification trees provide a final decision by the majority-voting rule:

\[ H(x) = \arg\max_{Y_{j}} \sum_{i=1}^{t} I\left(h_{i}(x) = Y_{j}\right) \]

where \(H\) is the combination model, \(t\) is the number of training subsets (one per decision tree model \(h_{i}\)), \(Y_{j}\), \(j = 1, \dots, P\), are the output labels of the \(P\) classes and the combined strategy \(I(\cdot)\) is the indicator function, which equals 1 when its argument holds and 0 otherwise.
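An illustrative scikit-learn sketch of this classification stage follows; the tree count, the train/test split and the placeholder data are assumptions for demonstration only, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
deep_features = rng.random((120, 8))     # placeholder: 120 rides x 8 deep features
labels = rng.integers(0, 2, size=120)    # placeholder labels (0 = healthy, 1 = faulty)

X_train, X_test, y_train, y_test = train_test_split(
    deep_features, labels, test_size=0.3, stratify=labels, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# predict_proba averages the class probabilities over all trees;
# predict applies the majority-voting rule H(x) described above
y_pred = forest.predict(X_test)
print("Fault-detection accuracy:", accuracy_score(y_test, y_pred))
```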

3. Results and Discussion

In this research, we first selected all floor patterns, such as floor 2-5, floor 3-8 and so on, from the data; some of these are shown in Table 1.

The next step includes the selection of faulty rides from all floor patterns based on the time periods provided by the maintenance data. An equal number of healthy rides is also selected. Only the vertical component of the acceleration data is selected in this research because it is the most informative component, showing significant changes in vibration levels compared to the other components. Healthy and faulty rides are fed to the algorithm for FFT feature extraction separately.
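As a hedged sketch of how rides could be labeled from the maintenance records: the column names and data layout below are assumptions for illustration, since the paper only states that faulty periods come from the maintenance data.

```python
import pandas as pd

def label_rides(rides, fault_periods):
    """rides: DataFrame with a 'start_time' column (one row per ride).
    fault_periods: DataFrame with 'fault_start' / 'fault_end' columns from
    the maintenance data. Returns rides with a label column:
    1 = faulty (ride falls inside a fault period), 0 = healthy."""
    rides = rides.copy()
    rides["label"] = 0
    for _, p in fault_periods.iterrows():
        in_period = rides["start_time"].between(p["fault_start"], p["fault_end"])
        rides.loc[in_period, "label"] = 1
    return rides
```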

3.1 Up Movement

We have analyzed up and down movements separately because a traction-based elevator usually produces slightly different levels of vibration in each direction. First, we have selected faulty rides based on the time periods provided by the maintenance data, including all floor patterns, which are fed to the algorithm for FFT feature extraction, as shown in Figure 2.

Then, we have selected an equal number of rides for healthy data, similar to Figure 2. The next step is to label both the healthy and faulty FFT features with class labels 0 and 1, respectively. Healthy and faulty FFT features with class labels are fed to the deep autoencoder model, and the generated deep features are shown in Figure 3. These are called deep features, or latent features, in deep autoencoder terminology; they are hidden representations of the data.

The extracted deep features are fed to the random forest algorithm for classification, and the results provide 100% accuracy in fault detection, as shown in Table 2. We have compared accuracy in terms of avoiding false positives for both feature sets and found that the new deep features generated in this research outperform the existing features. We have used the remaining healthy rides for extracting FFT features to analyze the number of false positives. These healthy FFT features are labeled as class 0 and fed to the deep autoencoder to extract new deep features from the FFT features. These new deep features are then classified with the pre-trained deep autoencoder random forest model to test the efficacy of the model in terms of false positives.
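To make this false-positive check concrete, a minimal continuation of the earlier illustrative snippets is shown below; `encoder` and `forest` are the trained objects from those sketches, and `healthy_fft` is a hypothetical array of FFT features from the remaining healthy rides (true label 0).

```python
import numpy as np

# Placeholder for the remaining healthy rides' FFT features (20 per ride);
# in practice these come from the fft_features() step above.
healthy_fft = np.random.default_rng(1).random((50, 20))

healthy_deep = encoder.predict(healthy_fft)    # extract new deep features
pred = forest.predict(healthy_deep)            # classify with the pre-trained forest
false_positives = int(pred.sum())              # rides wrongly flagged as faulty (1)
print(f"False positives: {false_positives} of {len(pred)} healthy rides")
```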

Table 2 presents the results for upward movement of the elevator in terms of accuracy of fault detection. We have also included the accuracy of avoiding false positives as an evaluation parameter for this research. The results show that the new deep features provide better accuracy in terms of fault detection and avoiding false positives from the data, which is helpful in detecting false alarms for elevator predictive maintenance strategies. It is extremely helpful in reducing unnecessary visits by maintenance personnel to installation sites. 

3.2 Down Movement

For downward motion, we have repeated the same analysis procedure as in the case of upward motion. Table 3 presents the results for fault detection with the deep autoencoder random forest model in the downward direction. The results are similar to those for the upward direction, but there are noticeable differences in the fault-detection accuracy and in the number of false positives obtained with the new deep features.

4. Conclusion and Future Work

This research focuses on the health monitoring of elevator systems using a novel fault-detection technique. The goal of this research was to develop generic models for FFT-based features and automated feature extraction for fault detection in the health-state monitoring of elevator systems. Our approach provided 100% accuracy in fault detection, and also in the analysis of false positives, for all floor combinations with the new extracted deep features. The results support the goal of this research of developing generic models that can be used in other machine systems for fault detection. Our models outperform others because of the new deep features extracted from the dataset, compared to the existing features calculated from the same raw sensor dataset. The automated feature extraction approach does not require any prior domain knowledge. It also provides dimensionality reduction and is robust against overfitting.

In future work, we will extend our approach to more elevators and other real-world big data cases to validate its potential for other applications and improve its efficacy.


References

[1] Bellman, R. (1966). Dynamic programming. Science, 153(3731), 34-37. 

[2] Desa (2014). World urbanization prospects, the 2011 revision. Population Division, Department of Economic and Social Affairs, United Nations Secretariat.

[3] Fogelman-Soulie, F., Robert, Y., and Tchuente, M. (1987). Automata networks in computer science: theory and applications. Manchester University Press and Princeton University Press.

[4] Goyal, D., and Pabla, B.S. (2016). The vibration monitoring methods and signal processing techniques for structural health monitoring: A review. Archives of Computational Methods in Engineering, 23, 585-594.

[5] Huynh, T., Gao, Y., Kang, J., Wang, L., Zhang, P., Lian, J., and Shen, D. (2016). Estimating CT image from MRI data using structured random forest and auto-context model. IEEE Transactions on Medical Imaging, 35(1), 174.

[6] Jia, F., Lei, Y., Lin, J., Zhou, X., and Lu, N. (2016). Deep neural networks: A promising tool for fault characteristic mining and intelligent diagnosis of rotating machinery with massive data. Mechanical Systems and Signal Processing, 72, 303–315.

[7] Jiang, G., Xie, P., He, H., and Yan, J. (2018). Wind turbine fault detection using a denoising autoencoder with temporal information. IEEE/ASME Transactions on Mechatronics, 23(1), pp.89–100.

[8] Martinez-Rego, D., Fontenla-Romero, O., and Alonso-Betanzos, A. (2011). Power wind mill fault detection via one-class ν-SVM vibration signal analysis. In The 2011 International Joint Conference on Neural Networks (IJCNN), 511–518. IEEE.

[9] Mishra, K. M., Saxen, J. E., Björkqvist, J., and Huhtala, K. (2019). Fault detection of elevator system using profile extraction and deep autoencoder feature extraction. In Proceedings of the 33rd Annual European Simulation and Modelling Conference (ESM), 79-83.

[10] Sun, W., Shao, S., Zhao, R., Yan, R., Zhang, X., and Chen, X. (2016). A sparse auto-encoder-based deep neural network approach for induction motor faults classification. Measurement, 89, 171–178.

[11] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.- A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, 1096–1103. ACM.

[12] Xia, M., Li, T., Xu, L., Liu, L., and de Silva, C. W. (2018). Fault diagnosis for rotating machinery using multiple sensors and convolutional neural networks. IEEE/ASME Transactions on Mechatronics, 23(1), 101–110.

[13] Yang, Z. X. and Zhang, P. B. (2016). ELM meets RAE-ELM: A hybrid intelligent model for multiple fault diagnosis and remaining useful life prediction of rotating machinery. In 2016 International Joint Conference on Neural Networks (IJCNN), 2321–2328. IEEE.


Dr. Krishna Mohan Mishra received his M.Sc. degree in data analytics from National College of Ireland in 2017. He is currently working toward a PhD degree at the Unit of Automation Technology and Mechanical Engineering, Tampere University. His research interests include advanced signal-processing algorithms, intelligent fault diagnostics and prognostics and deep learning for machine health monitoring.

John-Eric Saxén received his M.Sc. degree (2011) from Åbo Akademi University in computer engineering with a research topic in system identification. He is working toward a PhD degree at the Department of Information Technologies, with main research interests in signal processing and data analytics.

Assoc. Prof. Jerker Björkqvist received his PhD in process design from Åbo Akademi University, Turku, Finland, in 2002, with a thesis on scheduling algorithms and implementations on parallel computing architectures. He is currently associate professor at Åbo Akademi University. His research interests include signal processing, optimization and software for embedded systems. In the signal-processing domain, he has contributed to wireless standards, including simulation models and validation for DVB-T2. Currently, his research focuses on signal processing of sensor Big Data, aiming at utilizing data for situation awareness, machine diagnostics and predictive maintenance.

Prof. Kalevi J. Huhtala received his Dr. Tech. degree from Tampere University of Technology (Finland) in 1996. He is currently working as a professor in the Unit of Automation Technology and Mechanical Engineering, Tampere University. He is also head of the laboratory. His primary research fields are intelligent mobile machines and diesel engine hydraulics. He has published more than 180 refereed journal and conference papers in these areas.
