Making use of the Internet of Things, big data and machine learning
This article was first presented at the Fifth Symposium on Lift & Escalator Technologies, www.liftsymposium.org.
New technologies such as the Internet of Things (IoT), big data, cloud computing and machine learning (ML) have the potential to radically change the lift and escalator industry. This is particularly true in the areas of maintenance, product development and quality. Lift and escalator maintenance has evolved over the years. The various forms of maintenance have included breakdown, preventive, usage-based, condition-based and task-based maintenance. Using IoT, cloud computing, big data and ML, a new form of maintenance, data-driven maintenance (DDM), has arrived. DDM provides benefits to building owners, building managers, lift and escalator passengers, and lift companies.
Lifts and escalators are usually installed once and modernized as often as every 10-20 years, but are maintained for their entire lifespans. The lifts in the Woolworth Building in New York City, an early high-rise, were installed in 1914. They were modernized for the fourth time in 2010 and remain in service, and under maintenance, today. Maintenance is a major source of revenues and profits for the lift industry. IoT has the ability to change the lift industry’s maintenance business model.
The term “Internet of Things” was coined by British entrepreneur Kevin Ashton in 1999. Today, there are approximately three billion Internet users. Most are humans exchanging information over the Internet. In five years, 30-50 billion physical objects (“things”) will be connected to the Internet. Also referred to as “machines,” these things will be communicating with other machines, such as computers. This form of communication is referred to as machine-to-machine (M2M) communication. M2M can utilize plain old telephone service lines, cellular communication, Ethernet connections, Wi-Fi or many other forms of electronic communication, not all of which involve the Internet.
“Big data” is a term with many meanings. Initially, it referred to data sets too large or too complex for traditional software and computers to process in a reasonable amount of time. However, today, the term has also come to mean the use of predictive analytics to extract value from data, regardless of the quantity of data.
The processing of big data requires large amounts of processing power — power not found in desktop computers. Big data is processed by tens to thousands of servers using massively parallel software. Not all organizations have large server farms at their disposal and so must find alternative sources of processing power, such as cloud computing.
Cloud computing is the opposite of on-premises computing. With the latter, all hardware and software are owned by the operator. If a business needs 100 servers to run its business, it must buy 100 servers, buy or rent an air-conditioned facility to house the servers and provide electrical power and communication support for those servers. Additionally, the operator must provide the support necessary to keep the facility operational. If, for example, an additional 25 servers are required one day a week for data analytics, an additional facility with 25 servers must be acquired and operated.
The on-premises model has both a capital equipment expense component and an operational expense component. Everything is outsourced with cloud computing. The operator only pays for the computing and data storage on a pay-as-you-go basis. If the operator needs 100 servers during the day, 25 servers at night and 125 servers when running data analytics, the cloud provider will provide only the servers required. The number of connected servers can change dynamically based on need. The cloud computing model has no capital expense component. Cloud computing is purely an operational expense model.
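The pay-as-you-go arithmetic behind this comparison can be made concrete. The sketch below uses the article’s scenario (100 servers by day, 25 by night, 125 during analytics runs); the 12-hour day/night split and the eight weekly analytics hours are assumptions added for illustration, not figures from the article.

```python
# Illustrative pay-as-you-go arithmetic for the scenario above.
# DAY/NIGHT hours and the weekly analytics window are assumptions.

DAY_HOURS, NIGHT_HOURS = 12, 12
ANALYTICS_HOURS_PER_WEEK = 8  # hypothetical weekly analytics window

def weekly_server_hours():
    """Server-hours billed per week under dynamic cloud scaling."""
    base = 7 * (100 * DAY_HOURS + 25 * NIGHT_HOURS)  # daily day/night cycle
    burst = ANALYTICS_HOURS_PER_WEEK * (125 - 100)   # extra analytics servers
    return base + burst

cloud = weekly_server_hours()
on_prem_capacity = 125 * 24 * 7  # must own peak capacity around the clock
print(cloud, on_prem_capacity)   # billed hours vs. owned (mostly idle) hours
```

Under these assumed numbers, the operator pays for roughly half the server-hours that an on-premises installation sized for the 125-server peak would have to own, which is the operational-expense advantage the paragraph describes.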
The cloud provider can do this because it is providing services globally with a server farm that may be located in various countries.
Cloud providers usually have server farms in several locations where data is backed up. If a natural disaster such as an earthquake or tornado were to strike one facility, the parallel facility would continue to operate without interruption perceived by the user.
It should be noted that most businesses have a mix of on-premises and cloud computing.
ML and Data Mining
ML evolved from artificial intelligence (AI). The goal of AI is to develop computers and software that mimic human intelligence. One of the goals of AI is learning. ML involves making predictions based on properties learned from data. It is sometimes confused with data mining. The goal of data mining is to discover previously unknown properties in a set of data. While both ML and data mining are useful in the lift industry, it is this author’s opinion that ML will yield more tangible results more quickly than data mining.
There are many tools that can be used for ML. One of the more common approaches is known as Classification and Regression Trees (CARTs). These are decision trees that learn from what has occurred in the past and use that knowledge to make predictions about future outcomes. Newly developed software based on CARTs makes the analysis of data possible by trained practitioners who are not necessarily data scientists.
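The core of a CART is choosing, at each node, the split that best separates past outcomes. The minimal sketch below shows that one step in pure Python, using Gini impurity to score candidate thresholds; the feature (door reversals per day), the labels and all the numbers are hypothetical, invented only to illustrate the mechanism.

```python
# Minimal sketch of how a CART node chooses a split.
# Feature and data below are hypothetical illustrations.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return the threshold on xs that minimizes weighted Gini impurity."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Door reversals per day vs. whether a door fault followed (made-up data).
reversals = [3, 5, 4, 40, 55, 48]
fault = [0, 0, 0, 1, 1, 1]
print(best_split(reversals, fault))  # learned threshold separating the classes
```

A full CART applies this split search recursively to build a tree; production ML libraries wrap exactly this kind of logic behind a trained-practitioner-friendly interface, which is the point the paragraph makes.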
Data Scientists and the Data Science Team
Data scientists are those who have a combination of business acumen and a knowledge of data analytics or statistics. Most data scientists have advanced degrees in science, such as an MSc or a PhD. Thomas Davenport suggests that the combination of business, communication and analytical skills may not be found in one individual. He suggests that, rather than try to find one person with all those skills, it may be necessary to form a data science team.
If a data scientist is engaged solely in data mining, no knowledge of the product being analyzed is required. If the data scientist is performing predictive analytics on a specific product, such as a lift or escalator, the scientist or data science team must have product knowledge. The person to bring the team this knowledge is known as a domain expert.
The History of Lift and Escalator Maintenance
The type of maintenance delivered by the lift industry has evolved over the years. Initially, only reactive (breakdown) maintenance was provided: when a lift stopped working, a technician would be called to the site to return the lift to service. The industry eventually converted to preventive maintenance. The goal of this form of maintenance was to perform maintenance before a breakdown occurred and to increase the lift’s service life.
Remote monitoring of lifts and escalators appeared in the late 1980s. While remote monitoring would alert the lift company when a unit had a breakdown, it did not, in and of itself, reduce the number of breakdowns.
Usage-based maintenance appeared in the lift industry in the late 1990s. The concept of this scheme was to adjust the quantity and timing of maintenance based upon usage. The concept was not truly new. (Motor oil in automobiles has routinely been changed after a given number of kilometers of travel.)
Condition-based maintenance is simply providing maintenance based on the condition of a system or part. An example of this would be mounting an accelerometer and temperature sensor on a critical bearing and monitoring the vibration frequencies, vibration amplitudes and bearing temperature. When a reading begins to leave the normal operating range, bearing maintenance or replacement can be scheduled.
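The bearing example above reduces to comparing each monitored channel against its normal operating range. The sketch below shows that check; the channel names and the limits are placeholder assumptions for illustration, not industry alarm values.

```python
# Hedged sketch of a condition-based check on a monitored bearing.
# Channel names and limits are illustrative placeholders.

NORMAL_RANGE = {
    "vibration_mm_s": (0.0, 4.5),    # assumed RMS vibration velocity band
    "temperature_c": (10.0, 80.0),   # assumed bearing housing temperature band
}

def out_of_range(reading):
    """Return the list of channels whose reading left the normal range."""
    alerts = []
    for channel, (low, high) in NORMAL_RANGE.items():
        if not (low <= reading[channel] <= high):
            alerts.append(channel)
    return alerts

sample = {"vibration_mm_s": 6.1, "temperature_c": 72.0}
print(out_of_range(sample))  # any alert means maintenance can be scheduled
```

In practice the trigger would be trend-based rather than a single reading, but the principle is the same: maintenance is scheduled when the condition, not the calendar, says so.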
Task-based maintenance involves the generation of maintenance task lists based on the lift type, usage and condition.
Data-driven maintenance combines all the previously described maintenance types into one system. While new to many industries, including the lift and escalator industry, this type of maintenance is quite mature in industries such as aviation.
In the data-driven maintenance scheme, remote monitoring reports the usage and condition of the lift to the cloud. Using ML, predictions are made of when and what must be maintained. These preemptive tasks are then communicated to the service technician from the cloud. The service technician will then perform only those tasks that protect the customer’s assets (lifts and/or escalators).
The predictive nature of data-driven maintenance should enable the scheduling of maintenance tasks to prevent breakdowns or increase the mean time between failures. Additionally, when a pending failure is detected, data-driven maintenance should be able to recommend a preemptive action that can be taken to eliminate a loss of continuity of service. For example, if door motor current is monitored, an increase in current might, over time, indicate that additional door maintenance is required on the next visit. If a sudden increase in door motor current is detected, it might indicate that a door was damaged, and a technician should be dispatched to correct the problem before a breakdown occurs.
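The door-motor-current example distinguishes a slow drift (schedule extra maintenance) from a sudden jump (dispatch a technician). A minimal sketch of that decision rule, with percentage thresholds and current values that are purely hypothetical:

```python
# Sketch of the drift-vs-spike logic described above.
# The 15% and 50% thresholds and all current values are hypothetical.

def classify_door_current(history, latest, drift_pct=15, spike_pct=50):
    """Compare the latest current draw (amps) with the running baseline."""
    baseline = sum(history) / len(history)
    rise = 100 * (latest - baseline) / baseline
    if rise >= spike_pct:
        return "dispatch technician"   # sudden jump: likely door damage
    if rise >= drift_pct:
        return "add door maintenance"  # slow drift: wear building up
    return "no action"

print(classify_door_current([2.0, 2.1, 2.0, 2.1], 2.5))  # gradual rise
print(classify_door_current([2.0, 2.1, 2.0, 2.1], 3.4))  # sudden jump
```

A production system would learn these thresholds from fleet data rather than hard-code them, which is precisely where the ML layer of data-driven maintenance comes in.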
Unscheduled breakdowns are expensive, much more so than preventive or preemptive maintenance. Data-driven maintenance can reduce and, hopefully, eliminate unscheduled breakdowns. This will reduce maintenance costs and, ultimately, maintenance prices. Additionally, fewer breakdowns will increase customer satisfaction.
When can data-driven maintenance be implemented? The technology for it has existed for more than 10 years. However, until recently, it was cost-prohibitive. Today, we have fast and low-cost computing. The cost of data storage is now a fraction of what it was just a few years ago. Low-cost data storage has made big data economically feasible. The Internet is available almost everywhere in the world where lifts are located. Low-cost wireless data communication is also available globally.
Cost and technology have reached the point where the economic benefits of data-driven maintenance can more than cover its costs, and it comes with an improvement in customer satisfaction.
Data-driven maintenance will change the way maintenance operations are conducted. More timely information about the performance of lifts will influence product development. If a new component has a higher or lower failure rate than the component it replaces, this fact will be learned more quickly.
If quality is defined by breakdowns per unit per year, then quality should improve. Perhaps maintenance will be priced based on uptime. Data analytics will also deliver unexpected results, but only time will determine how beneficial these results will be. However, it is logical to assume that these unexpected results will benefit both the lift industry as a whole and, most importantly, our customers.