Ambient Digital Assistance in Healthcare at the Edge
“Edge computing is extremely important in healthcare for multiple reasons, and one of the main ones is privacy.”
Edge computing is a model in which the processor on the edge device itself handles the computation, including the training of models, instead of the data traveling all the way to the cloud for processing and then coming back (1). So, why is it important? Large cloud services like YouTube reach users through a core network, then through edge servers, and finally through edge devices (2). If you bring the computing power down to the level of the edge, you significantly reduce latency, shrink the window for data breaches, and improve privacy. All of these matter more in healthcare than in almost any other setting.
This is part of the larger series, Digital Voice Assistant in Telehealth, which covers four major topics. In this article, we will be discussing edge computing in healthcare.
Cloud vs Edge:
Generally speaking, there is a substantial difference between cloud and edge computing (3). In edge computing, the service runs in the edge network itself: on the device, in the router, or at the specific site. Data never leaves the facility, which reduces latency. Jitter, the variation in latency between successive packets, is also very low. For healthcare in general, and for applications like robotic surgery in particular, low latency and low jitter are essential. In addition, edge computing is geographically distributed, so even if there is a data breach, only a small slice of the data is exposed. And with the growth of mobile healthcare delivery, such as CT and MRI scanners mounted in ambulances, edge computing is not just a possibility but an essential part of providing care.
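To make the latency and jitter distinction concrete, here is a minimal sketch that summarizes round-trip latency samples and reports jitter as the mean absolute difference between consecutive samples. The sample values are hypothetical, purely to contrast a congested cloud path with a short, stable on-premises edge path.

```python
import statistics

def summarize_latency(samples_ms):
    """Summarize round-trip latency samples (in milliseconds).

    Jitter is estimated here as the mean absolute difference
    between consecutive latency samples.
    """
    mean_latency = statistics.mean(samples_ms)
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    jitter = statistics.mean(diffs) if diffs else 0.0
    return mean_latency, jitter

# Hypothetical round-trip times: cloud path vs. on-premises edge path.
cloud_rtts = [80, 95, 70, 110, 85]   # ms, swings with network congestion
edge_rtts = [4, 5, 4, 6, 5]          # ms, short and stable local hop

print(summarize_latency(cloud_rtts))  # higher mean latency and jitter
print(summarize_latency(edge_rtts))   # low mean latency and jitter
```

A robotic-surgery control loop cares about both numbers: a high mean latency delays every command, while high jitter makes the delay unpredictable, which is often worse.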
Gaming: Hybrid Computing par Excellence
The main reason edge computing became such a big thing is the gaming industry. On a PS5 or Xbox, you insert a disc and install 10 to 25 gigabytes of complete graphics assets locally. In a multiplayer game, you don't upload those graphics to the cloud; you send only player positions and a very limited amount of state. Because the console carries significant GPU power, this decreases the workload on the cloud and the data transfer that is needed, while still delivering a very good, low-latency gaming experience. So, from a technological perspective, the industry that took the biggest advantage of edge computing is gaming, and NVIDIA, which creates the GPUs, has been steadily scaling up. Meanwhile, more and more health IoT devices like the Apple Watch and Fitbit are arriving, and billions of sensors are going to collect more and more data. It is important to crunch those numbers on the device first, then on the edge server, decreasing the data throughput outside the local network. This tremendously decreases data breaches, increases privacy, and also decreases the overall environmental impact.
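The "crunch the numbers on the device" idea can be sketched very simply: reduce raw sensor readings to a compact summary before anything crosses the network. The function and data below are illustrative assumptions, not an actual wearable's API.

```python
def summarize_on_device(heart_rates):
    """Reduce raw per-second heart-rate samples to a compact summary.

    Only this summary would leave the device; the raw stream stays local.
    """
    return {
        "n": len(heart_rates),
        "min": min(heart_rates),
        "max": max(heart_rates),
        "mean": round(sum(heart_rates) / len(heart_rates), 1),
    }

# One hour of hypothetical per-second samples (3600 values)
# collapses to four numbers before upload.
raw = [60 + (i % 25) for i in range(3600)]
payload = summarize_on_device(raw)
print(payload)
```

The privacy and bandwidth win is the ratio: 3,600 raw samples stay on the wrist, and only a four-field payload ever reaches the edge server or cloud.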
Swarm Learning for Clinical Machine Learning:
A recent paper you should really read is Swarm learning for decentralized and confidential clinical machine learning (4). This effort was spearheaded by Hewlett Packard Enterprise (HPE). The paper demonstrates a model that goes beyond federated learning, called swarm learning, in which not only does the computation stay on the edge side, but the machine learning models themselves are trained locally, merged, and then redistributed to all the edge devices. For example, if a swarm of drones is running an obstacle course and one of them learns something, swarm learning updates the whole fleet. In exactly the same way, you can build a decentralized, confidential learning environment in which the knowledge, rather than the data, is uploaded. The approach is therefore private by design, which is extremely important in this environment of ransomware and data breaches.
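The core "share knowledge, not data" step can be sketched as parameter averaging across nodes. This is a minimal illustration of the merging step only, with made-up weights for three hypothetical hospitals; the actual swarm learning protocol in the paper adds peer coordination and blockchain-based membership, which are omitted here.

```python
import numpy as np

def merge_swarm_weights(node_weights):
    """Average model parameters contributed by each node, layer by layer.

    Only these parameters travel between peers; the raw clinical data
    never leaves each hospital's edge node.
    """
    return [np.mean(layer, axis=0) for layer in zip(*node_weights)]

# Three hypothetical hospitals, each holding locally trained weights
# for a tiny two-layer model.
hospital_a = [np.array([1.0, 2.0]), np.array([0.5])]
hospital_b = [np.array([3.0, 4.0]), np.array([1.5])]
hospital_c = [np.array([2.0, 3.0]), np.array([1.0])]

merged = merge_swarm_weights([hospital_a, hospital_b, hospital_c])
print(merged)  # [array([2., 3.]), array([1.])]
```

After merging, the averaged weights would be pushed back to every node, so each hospital benefits from patterns learned on patient data it never saw.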
Breakthroughs from NVIDIA:
NVIDIA has made a big push here (5). They now ship hardware systems spanning from 0.5 TOPS all the way to 10,000 TOPS, and they are doing this for two reasons. One, of course, is decreasing latency. But more importantly, as physicians we need real-time analytics, especially during critical situations, and real-time analytics can only be achieved by distributing computing all the way from the edge to the cloud. Each connected device therefore needs to become more and more intelligent; the benefit is "real time."
Silicon Architecture Evolution:
If you look at the architecture, mainstream semiconductors started out scalar (the CPU), then became vector (the GPU). Now we have matrix processors: chips designed specifically for AI, and they are already on the market. The first M1 chip introduced by Apple (6) includes a 16-core Neural Engine dedicated solely to artificial intelligence tasks. And this is not just Apple: in Armv9, the first new Arm architecture in a decade and the basis for Qualcomm and Apple chip designs, machine learning is a major part of the specification. So there is a huge push within the industry itself to build AI in at the hardware level for better performance. These chips, and the edge devices built around them, will keep improving, which means more and more distributed, decentralized edge computing.
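The scalar-to-vector-to-matrix progression can be illustrated in a few lines of NumPy. The toy inputs and the two-layer framing are illustrative assumptions; the point is only that each step processes more data per instruction, which is exactly what dedicated neural engines accelerate in silicon.

```python
import numpy as np

# Scalar style (CPU): one multiply-add at a time, in a loop.
def dot_scalar(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(4.0)          # [0., 1., 2., 3.]
b = np.ones(4)

# Vector style (GPU): one operation over the whole array at once.
vector_result = float(np.dot(a, b))

# Matrix style (neural engine): a whole batch through a weight
# matrix in a single matmul, the core primitive of AI workloads.
batch = np.stack([a, a + 1.0])   # 2 inputs, 4 features each
weights = np.ones((4, 3))        # 4 features -> 3 outputs
matrix_result = batch @ weights  # one matmul does all the work

print(dot_scalar(a, b), vector_result)  # same answer, different hardware fit
print(matrix_result)
```

Matrix hardware wins because deep learning spends almost all its time in exactly this last operation, so baking it into the chip pays off at the edge.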
These technological innovations have direct results. For example, among the latest innovations from Google is on-device Live Caption (7): the data doesn't go to the cloud, because the device has enough power to do captioning, that is, speech recognition, locally. Apple more recently introduced the same capability for Siri, which can now run entirely within the device. The models have also been shrunk down in memory usage, so we now have both better models and better hardware to take care of these new realities.
“If Anyone Saved a Life, It Would be as if He Saved the Life of All Mankind”
Ahmed, E., Ahmed, A., Yaqoob, I., Shuja, J., Gani, A., Imran, M., & Shoaib, M. (2017). Bringing Computation Closer toward the User Network: Is Edge Computing the Solution? IEEE Communications Magazine, 55(11), 138–144.