Smart service assurance – providing the backbone for smart applications
The provision of remote healthcare is a topic that has become of growing interest to us, ever since we worked with ISPM on some early TMF Catalyst projects. At the time, it was becoming clear not only that emerging IoT applications would need classical service assurance, but also that assurance practices would have to evolve to cater for a far broader range of services and performance requirements than conventional telecoms processes had been designed for. ISPM’s team made a number of contributions to this evolution and led the way in some early-stage deployments, primarily in the exciting and emerging Brazilian market.
We helped ISPM take this message to the Smart City community but, a few years ago, few in this domain had heard of service assurance, and even fewer understood the need for the TMF. Happily, that’s now changing, with leading lights from different smart city groups now fully engaged with TMF activities, and vice versa.
A few years ago, it was generally thought that smart applications would be relatively undemanding and require little in terms of service assurance, but it quickly became clear to some that there would be a need for an evolved practice of service assurance to cater for a wide range of different service requirements. Some IoT applications have no need for performance guarantees (who cares about a smart kettle, other than its owner?), others could have quite complex requirements (smart grids and smart eHealth solutions, for example) – with the likelihood that requirements would become increasingly complex with time. More importantly, it also became apparent that some services would be volatile, with the need to transition from best-effort to something more imperative, depending on triggers and events.
Take a commonly cited use case, for example – remote health monitoring. On one level, this could be no more than data collection from a Fitbit wristband (a recent and welcome addition to our family’s gadget collection). A Fitbit collects information on heart rate and other exercise-related data, which can then be downloaded and viewed on your devices, as a log of exercise and fitness improvements over time.
All of this is very interesting and, presumably, could (and probably does) form the basis of long-term studies and the foundation of some predictive healthcare measures, provided that the data is suitably anonymised. Even so, there’s no pressing need for service assurance here – once the data is collected, it’s collected, regardless of whether there is a short interruption to transmission and reception.
However, there are more critical use cases – for example, pacemakers and other heart devices that need to transmit regular updates to healthcare teams to track patients’ well-being. Such applications might tick along nicely for a while, but if any irregular activity is detected, then rapid intervention could become a matter of life and death. In that case, the quality of service requirements change dramatically, from best-effort to something with critical (think emergency services) performance needs.
This kind of volatility requires a service assurance framework with the ability to change QoS conditions dynamically. An understanding of this has led to a model in which such a framework can support an open-ended array of service types (vertical differentiation) and an equally open-ended array of devices (horizontal differentiation).
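To make the volatility point concrete, the transition from best-effort to critical QoS can be sketched as a simple event-driven state change. This is purely an illustrative sketch – the class names (`QosClass`, `Service`, `escalate_on_event`) and the event strings are hypothetical, not taken from any real assurance framework or standard:

```python
from dataclasses import dataclass
from enum import Enum


class QosClass(Enum):
    """Illustrative QoS tiers, ordered from least to most demanding."""
    BEST_EFFORT = 0   # e.g. fitness-tracker sync: losses are tolerable
    ASSURED = 1       # e.g. routine telemetry with soft deadlines
    CRITICAL = 2      # e.g. emergency-grade delivery guarantees


@dataclass
class Service:
    """A monitored service with a mutable QoS class."""
    name: str
    qos: QosClass = QosClass.BEST_EFFORT


def escalate_on_event(service: Service, event: str) -> QosClass:
    """Change the service's QoS class in response to a trigger event.

    A pacemaker feed, for instance, runs as best-effort until an
    anomaly is detected, at which point it must be treated as critical.
    """
    if event == "anomaly_detected":
        service.qos = QosClass.CRITICAL
    elif event == "all_clear":
        service.qos = QosClass.BEST_EFFORT
    return service.qos


# A pacemaker telemetry feed ticking along, then an anomaly arrives:
feed = Service("pacemaker-telemetry")
escalate_on_event(feed, "anomaly_detected")
print(feed.qos.name)  # CRITICAL
```

The point of the model is that the escalation is driven by events in the application domain, not by a static SLA set at provisioning time – which is exactly why assurance frameworks built for fixed service classes struggle with these use cases.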
Of course, we’ve written about this before, but it remains no less relevant now – just last week, a colleague in a test and measurement company remarked to us that, while operators understood this point as they moved to launching LoRaWAN and NB-IoT networks, some of the service providers had no conception of QoS. They assumed that, all things being equal, their data would simply be transmitted successfully, without considering the factors that may impact service performance and hence lead to service interruptions and degraded SLAs for their customers.
Smart digital infrastructure must be designed from the outset with these goals in mind, so that the wonderful silos that are all-too-familiar to the telco world can be avoided. Unfortunately, the message is still not getting through to everyone.
And this is where we are today – pilot projects continue to be discussed, while full-scale deployments of such infrastructure with an appropriate service assurance framework in place from the outset are relatively few and far between. Tacira, building on the work of ISPM in Brazil, offers some excellent examples of how leading smart cities have turned the desire for more smart services, servicing a wide range of applications, into the kind of infrastructure that will support their needs and those of partner providers. Recent engagements have shown how much more potential there is.
For example, over at Aculab, research was conducted a couple of years ago by one of their partners into the potential of speech-recognition systems supported by telephony for helping to accelerate and improve the early diagnosis of Parkinson’s disease, among other conditions. Imagine the potential of such systems and the convenience of shifting diagnosis into the home rather than requiring time-consuming and costly hospital visits, particularly for those for whom mobility may already be an issue.
However, these services still need to remain accessible, reliable and secure. Similarly, remote monitoring of long-term care for the elderly – using motion detectors with the ability to switch to video mode (obviously subject to strict safeguards), and the variable QoS conditions that this entails – represents another promising avenue to explore, one that also demands careful consideration of the service assurance requirements.
This emerging world is fascinating and, with smart everything both a major topic for MWC this year and the key focus of our own Smart Mobility Summit, the next few months should provide yet more evidence not only of how ‘smart everything’ has become the key driver for the evolution of our industry, but also of how the different requirements of real applications are going to drive service performance delivery.
It cannot be stressed enough how important this is. If remote services are to fulfil their potential to transform healthcare, then the industry needs to be completely focused on the different requirements of each use case. To that end, the industry needs to differentiate between the frivolous and the mission critical. Frivolous is fun, but mission critical saves lives.