Do you know how long your dishwasher runs? 🤔
This was a question I found the answer to back in 2013. I remember grabbing my laptop and excitedly showing it to my wife. She gave me a puzzled look, wondering why such a fact would be so exciting.
The exciting part was that the Ubi could successfully log the sound level in our kitchen throughout the day, and from that log I could read, down to the minute, how loud the dishwasher ran and for how long. I could even see that it had three cycles with a few minutes of quiet pause between them. By thinking about my exposure to the appliance noise and reading the log over time, I was able to deduce the dishwasher’s cycle. What I could do with that information was another matter.
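That kind of deduction can be sketched in a few lines of code. The following is a hypothetical illustration, not the Ubi's actual logic: given minute-by-minute (minute, dB) readings, it groups loud stretches into cycles while tolerating short quiet pauses. The 60 dB threshold and 5-minute gap are illustrative assumptions.

```python
# Hypothetical sketch: infer appliance cycles from a minute-by-minute
# sound-level log. The loudness threshold and allowed quiet gap are
# illustrative assumptions, not values from the Ubi.

def find_cycles(readings, loud_db=60, max_gap=5):
    """Group loud minutes into (start, end) cycles, allowing short pauses."""
    cycles = []
    start = last_loud = None
    for minute, db in readings:
        if db >= loud_db:
            if start is None:
                start = minute
            last_loud = minute
        elif start is not None and minute - last_loud > max_gap:
            # The quiet gap is long enough to call the cycle finished.
            cycles.append((start, last_loud))
            start = last_loud = None
    if start is not None:
        cycles.append((start, last_loud))
    return cycles

# Two loud phases separated by a 10-minute quiet stretch:
log = [(m, 65) for m in range(0, 20)] + \
      [(m, 40) for m in range(20, 30)] + \
      [(m, 65) for m in range(30, 50)]
print(find_cycles(log))  # → [(0, 19), (30, 49)]
```

A shorter pause (under `max_gap` minutes) would be absorbed into a single cycle, which is how the quiet moments between dishwasher phases would show up.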
Since then, the average number of devices and sensors in our homes has increased exponentially. Many of our Internet-connected devices have onboard sensors that, when combined, can provide a lot of insight into ourselves and how we live our lives. It was this motivation that led us to add microphone, light, humidity, air pressure, and temperature sensors to the Ubi — the hope that machine learning would catch up to provide insights.
The opportunity ahead in IoT lies in presenting us with insights about ourselves and in taking actions to help us achieve our goals, as we define them. Artificial intelligence applied to IoT could enable us to have better moods, conserve electricity, or meet our physical goals.
This type of future is arriving as a result of three waves:
- The reduced cost and proliferation of tiny Internet connected sensors
- The reduced cost of data collection and storage
- The commoditization and ease of use of AI and machine learning platforms and APIs
In 2011, I was mesmerized by a Kickstarter project called Twine. It had a temperature and humidity sensor on board, as well as an accelerometer, and could report data through WiFi. The inspiration for purchasing it came after arriving home to a puddle in my kitchen caused by a broken fridge. If only I could have been warned that the freezer was dropping in temperature! That’s what Twine could do by creating simple rules to send emails or SMS messages whenever certain thresholds were passed.
While Twine was relatively expensive, similar sensors have since dropped significantly in price. GPS, accelerometers, IR sensors, microphones, magnetic field detectors, force transducers, and barometers are available with multiple sensors on one chip and have shipped on billions of smartphones. Likewise, WiFi/BT SoCs have come down in cost over the past five years, and implementing apps on them has become easier.
When it came to collecting sensor data for the Ubi, we had to build infrastructure that could handle HTTP long polling, stream sensor data, accumulate that data, process it against rules, store it, and then recall it for analysis.
Along the way, we hit issues like querying too many data points when a user zoomed out on a graph. We crashed our production server several times in the early days because of this.
We also learned that sampling data points five times every 10 seconds could produce a huge torrent of data hitting our servers once thousands of devices were deployed. Today, AWS, Google App Engine, and others have IoT platforms that make it incredibly easy to set up data collection and rules for pennies compared to five years ago.
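A standard way to avoid both problems — the zoom-out query blowup and the raw-data torrent — is bucketed averaging. This is an illustrative sketch, not our actual pipeline: raw (timestamp, value) samples are collapsed into fixed-width time buckets, so a zoomed-out graph fetches a handful of averaged points instead of every raw sample.

```python
# Illustrative sketch: downsample a raw (timestamp, value) series into
# fixed-width buckets so a zoomed-out graph fetches a few averaged
# points instead of every raw sample. Bucket width would normally be
# chosen from the visible time range.

def downsample(points, bucket_seconds):
    """Average raw samples into buckets keyed by bucket start time."""
    buckets = {}
    for ts, value in points:
        key = ts - (ts % bucket_seconds)
        total, count = buckets.get(key, (0.0, 0))
        buckets[key] = (total + value, count + 1)
    return [(key, total / count)
            for key, (total, count) in sorted(buckets.items())]

# A sample every 2 seconds for 3 minutes; 60-second buckets collapse
# each minute of raw data to a single averaged point.
raw = [(t * 2, 20.0 + (t % 3)) for t in range(90)]
summary = downsample(raw, 60)
print(summary)  # → [(0, 21.0), (60, 21.0), (120, 21.0)]
```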
The new frontier lies in using this data to predict what we’ll do next, or in trying to influence our next steps. To do the latter, we’ll need to climb the hierarchy of information. One such hierarchy was presented by Haeckel:
- Raw Data
- Information
- Intelligence
- Knowledge
- Wisdom
In the example of the dishwasher, raw data was the dB level and time. Information was knowing where the data was being collected. Intelligence was knowing that there was a dishwasher that was turned on and located in the same room and seeing the cycles. Knowledge was being able to determine the total length of the cycle and the quiet moments. Wisdom then is knowing that the dishwasher will now run for this amount of time and generate this amount of noise — so maybe I shouldn’t have it turn on in the middle of the night.
Gathering this type of information today requires a lot of learning and input from users to educate the system. This is where AI can be applied but needs to be built for each specific scenario and is painstaking work. Companies with limited resources need to focus on where the real opportunities are now:
- Presenting insights to users if and when relevant
- Combining data to create new insights
- Predicting what events will change sentiment and emotion
The new rule is that companies that can affect user emotion and sentiment will win.

Finding the Patterns
Capturing and recording raw data is now table stakes for IoT devices, and being able to tag it by location is a plus for extracting usable information. But there are easier ways for companies to turn this information into intelligence: namely, abstracting, averaging, and comparing.
Abstracting can mean that we do some interpretation of information to identify events or we integrate or differentiate to glean a sum or rate. For the Ubi, it could be rate of change in light, number of times per day someone speaks to their device or the device speaks to the user (“interactions”), the amount of change in temperature, hitting a threshold, or dozens of others.
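The abstraction step can be made concrete with a small sketch. Here, raw light-sensor samples are turned into a rate of change and into threshold-crossing events; the sensor readings and the 50-lux threshold are illustrative assumptions, not Ubi values.

```python
# Sketch of the abstraction step: turn raw (seconds, lux) light samples
# into a rate of change and into threshold-crossing events. The 50-lux
# threshold and the readings are illustrative assumptions.

def rates(samples):
    """Lux per second between consecutive readings."""
    return [(v2 - v1) / (t2 - t1)
            for (t1, v1), (t2, v2) in zip(samples, samples[1:])]

def threshold_events(samples, level=50):
    """Report crossings of a lux threshold as 'on'/'off' events."""
    events = []
    for (t1, v1), (t2, v2) in zip(samples, samples[1:]):
        if v1 < level <= v2:
            events.append((t2, "on"))
        elif v2 < level <= v1:
            events.append((t2, "off"))
    return events

light = [(0, 5), (10, 5), (20, 80), (30, 85), (40, 10)]
print(rates(light))             # → [0.0, 7.5, 0.5, -7.5]
print(threshold_events(light))  # → [(20, 'on'), (40, 'off')]
```

The same pattern applies to any of the other abstractions mentioned: counting interactions per day is just counting events, and a temperature delta is a difference between two readings.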
Averaging is a type of abstraction but can be done both for an individual user/device or across a much larger set. Lastly, comparing a particular user or device’s data to the average can provide a lot of actionable insight. All of these can be done without any machine learning or AI systems.
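Comparing one user against the population can be as simple as a z-score. This sketch — with made-up fleet numbers — flags a device whose daily interaction count sits far from the fleet average, the kind of "your home is busier than most" insight that needs no ML at all.

```python
import statistics

# Sketch: compare one home's daily interaction count against the fleet
# average using a z-score. The fleet numbers are made up for
# illustration; no machine learning is involved.

def zscore(value, population):
    """How many standard deviations `value` sits from the population mean."""
    mean = statistics.mean(population)
    stdev = statistics.stdev(population)
    return (value - mean) / stdev

fleet_daily_interactions = [12, 15, 9, 14, 11, 13, 10, 16]
this_home = 25
score = zscore(this_home, fleet_daily_interactions)
# A score well above ~2 suggests unusually heavy use worth surfacing.
print(round(score, 1))
```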
However, allowing the system to be trained to identify and tag events is much more powerful. Nest has done an interesting job of this for vision processing: they’ve essentially crowdsourced machine vision by allowing users to draw an area on a video feed and name it as an event.
For IoT device companies, if having a user tag or identify an event brings an immediate benefit to the user, why shouldn’t this then be used to train the system to automatically identify events? Sound detection, home/not home, broken heating or AC are all events that could be useful to allow users to train. This dataset can then be applied to a tool like TensorFlow with a further round of verification or correction presented to users.
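As a minimal stand-in for that tag-then-train loop, here is a nearest-centroid classifier in pure Python rather than TensorFlow, so the sketch stays self-contained. The feature pairs (average dB, motion events per hour) and the home/away labels are hypothetical.

```python
# Minimal stand-in for the tag-then-train loop: users supply labeled
# sensor windows, and a nearest-centroid model predicts labels for new
# windows. Features (avg dB, motion events/hour) are hypothetical; a
# real system might feed the same labeled data to TensorFlow instead.

def train(tagged):
    """tagged: list of (features, label) pairs supplied by users."""
    sums, counts = {}, {}
    for features, label in tagged:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    # One centroid (mean feature vector) per label.
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to `features`."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Users tag (avg dB, motion events/hour) windows as home or away:
tagged = [([55, 12], "home"), ([60, 15], "home"),
          ([30, 0], "away"), ([28, 1], "away")]
model = train(tagged)
print(predict(model, [58, 10]))  # → home
```

The verification round described above fits naturally here: predictions shown back to users become new (features, label) pairs for retraining.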
Pieces of intelligence that are particularly useful to smart homes for identification are:
- Home / not home
- Arrival / departure times
- How many people are home
- Sleep / awake times
- Meal time
- Appliance use
- Other home activities (e.g. watching TV, cleaning, cooking, etc.)
Moving up the ladder, we can start to put together the information above to create “knowledge” and eventually “wisdom”. This is where machine learning can be applied to extract predictive information. There’s a great example of this from Target, which was able to predict customers’ pregnancies from purchasing events before the customers had announced them.
For example, you can start to predict that a family normally eats dinner at 6:45 PM. This information could be used to trigger an alert with meal ideas at 5:30 PM. A system could also start testing inputs and assessing whether each one has a positive impact on the user.
In the meal idea scenario, if the user adopts the idea, that can be counted as a positive impact. Other intelligence that can be gathered to assess this is whether indicators of happiness (voice analysis, earlier bed time, less wakefulness at night, etc.) correlate with the system’s input.
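The dinner-time prediction itself can be surprisingly simple. This sketch, with illustrative numbers, estimates the usual meal time as the median of observed meal-start times (in minutes after midnight) and schedules the alert a fixed lead time earlier.

```python
import statistics

# Sketch: estimate a household's usual dinner time as the median of
# observed meal-start times (minutes after midnight), then schedule a
# meal-idea alert a fixed lead time earlier. All numbers are
# illustrative.

def usual_time(event_minutes):
    return statistics.median(event_minutes)

def alert_time(event_minutes, lead_minutes=75):
    return usual_time(event_minutes) - lead_minutes

def fmt(minutes):
    return f"{int(minutes) // 60:02d}:{int(minutes) % 60:02d}"

# Five observed dinners clustered around 6:45 PM (1125 minutes):
dinners = [1120, 1125, 1130, 1125, 1140]
print(fmt(alert_time(dinners)))  # → 17:30
```

The median rather than the mean keeps one unusually late dinner from dragging the estimate around.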
Training systems to better manipulate us may be a scary proposition. But if our goal is to improve ourselves, giving some autonomy, within constraints, to systems that can affect us — home lighting and temperature, for example — could yield big wins.