The opportunities for artificial intelligence in business applications are virtually endless, but selecting the right AI idea to pursue for your business is no easy task. One of the key reasons for this is a lack of understanding of the massive scope of AI applications, and of which combination of tools will best meet the specific needs and goals of your business.
In Gartner’s latest Hype Cycle for AI, edge applications sit right at the “Peak of Inflated Expectations” - and we have definitely seen the rise in interest here at Spark 64! While we’ve worked on a few projects that truly illustrate the amazing capabilities of the edge, there are a few considerations to weigh before committing to an edge implementation, depending on the outcomes your project needs to achieve.
Perhaps the key reason we choose an edge implementation is speed - if your requirements include a time-critical component, chances are you’ll need to leverage an edge application. At the end of the day, every solution is as different as the problem it solves, and the best application of AI technology is the one that delivers the best results for you.
To understand the benefits and limitations of edge AI applications, we must first understand their relationship within a greater ecosystem. Imagine a spiderweb: A dense array of woven silk near the centre, with strands reaching out in every direction and becoming more sparse as you approach the perimeter of the web. Now instead, imagine that this spiderweb is a network of devices - sensors, wearables, phones, and computers all linked to a central processing hub (the centre of the web) and sharing information with each other, much like the web’s vibrations share information with the spider.
Traditionally, this central processing hub took care of the majority of computing tasks, but as technology has advanced, it’s become more and more important for some tasks to be completed at the “edge” of the network. To take things one step further, edge AI applications are simply artificial intelligence applications that run locally on edge devices. Edge AI is more widespread than you might think - in fact, there’s a good chance you have a device using it in your pocket right now! The iPhone’s Face ID authentication uses an on-device deep neural network to identify you and unlock your phone, even if you’re not connected to Wi-Fi or a cellular network.
“Time is money, and when it needs to happen now, it needs to happen on the edge.” - Forbes
In the case of the iPhone’s Face ID feature, there are several critical benefits to carrying out this task on the edge (in this case, on the iPhone itself) as opposed to sending Face ID data over the network to be processed remotely in the cloud. In fact, Face ID would likely not be achievable - at least not as we use it today - without the application of edge AI. When you’re weighing whether edge AI is suitable for your business case, there are several factors you should consider:
Speed: When time is of the essence, edge applications truly shine in their ability to minimize latency. The gains depend on the data being processed, but in one GoogleNet image classification example, response times were 6x faster when processing happened on an edge device [Source]. Sending data back and forth over an internet connection can slow down decisions that need to happen in real time: a robot arm choosing which object to pick up on a moving conveyor belt can’t spare an extra few seconds, and our iPhone user would get annoyed waiting three or four seconds to unlock their phone.
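To make that latency trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers are illustrative assumptions, not measurements: a cloud request pays a network round trip on top of (often faster) server-side inference, while an edge device pays only its own inference time.

```python
# Back-of-the-envelope latency comparison.
# All figures below are assumptions for illustration, not benchmarks.
EDGE_INFERENCE_MS = 30         # assumed on-device inference time
CLOUD_INFERENCE_MS = 10        # assumed server-side inference (faster hardware)
NETWORK_ROUND_TRIP_MS = 150    # assumed upload + download over a mobile network

def edge_latency_ms() -> int:
    """Total latency when the model runs on the device itself."""
    return EDGE_INFERENCE_MS

def cloud_latency_ms() -> int:
    """Total latency when data is shipped to the cloud and back."""
    return NETWORK_ROUND_TRIP_MS + CLOUD_INFERENCE_MS

print(f"edge: {edge_latency_ms()} ms, cloud: {cloud_latency_ms()} ms")
```

Under these assumed numbers, the edge path responds in 30 ms versus 160 ms for the cloud path - and on a flaky connection the round-trip term only grows, which is exactly why time-critical tasks favour the edge.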
Security: If sensitive data is being processed, edge applications can eliminate many privacy and security concerns by keeping the information on the local device, rather than processing it on remote servers in a way that may breach privacy laws or project requirements.
Location: Edge applications can run anywhere - even in places where Wi-Fi or cellular coverage isn’t great. For applications on the move (like self-driving cars) or in remote locations (like oil rigs, airplanes, or satellites), sending data away to be processed can be slow, if it’s possible at all. An edge model is perfect in these instances, as the device doesn’t have to rely on a connection that may lack reliability or bandwidth.
Cost: Processing costs are generally volume based, but edge devices add an additional layer of complexity to this pricing structure. Although the technology advances every day, using a cloud platform to process large volumes of data is still the more cost-effective option, and for larger volumes it’s also more efficient to take advantage of the cloud’s higher processing power. Is your AI project heavily reliant on high volumes of data? If so, a cloud application may be more suitable.
Processing size: We hope to report back in a few years that this has changed, but for now, the processing power of edge devices is a limiting factor in their utilisation. The sensors, devices, and other hardware available for edge AI are still maturing, so one of the keys to a successful edge project is ensuring that hardware exists to meet your processing requirements. Some projects are simply too resource intensive to run on the edge and must be sent away for cloud processing.
AI application: This is linked to processing size above, but as a general rule we have found that many NLP models are too resource intensive to be useful on the edge, that computer vision works brilliantly in edge applications and is one of the most popular use cases, and that AI automation also benefits greatly from running on the edge. Each application has different benefits and limitations, and understanding how best to utilise each one is a key to success.
By now, you may have some ideas about your project and its suitability for an edge AI implementation. There is a multitude of things to consider here, and we’ve only scratched the surface, but for now I’ll leave you with one final thought to get the gears turning. Often, a hybrid approach is actually best: do as much as possible on the edge to take advantage of its speed, but offload strategically, using a cloud or central-hub processing model to handle high volumes of data efficiently. Between the two options, the opportunities become almost limitless!
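A hybrid split like the one described above can be sketched as a simple routing rule: keep time-critical or small jobs on the device, and send large or deadline-tolerant jobs to the cloud. This is an illustrative sketch only - the function name, thresholds, and parameters are all assumptions, not a production pattern.

```python
# Illustrative hybrid edge/cloud routing rule.
# All names and thresholds are assumptions for the sake of the sketch.
def route_task(payload_bytes: int, deadline_ms: int,
               edge_capacity_bytes: int = 1_000_000,
               network_round_trip_ms: int = 150) -> str:
    """Decide where an inference task should run: 'edge' or 'cloud'."""
    if deadline_ms <= network_round_trip_ms:
        return "edge"   # too time-critical to afford a network round trip
    if payload_bytes > edge_capacity_bytes:
        return "cloud"  # too large for the device's assumed resources
    return "edge"       # small and deadline-tolerant: stay local by default

print(route_task(payload_bytes=50_000, deadline_ms=100))       # time-critical
print(route_task(payload_bytes=5_000_000, deadline_ms=2_000))  # bulk workload
```

The first call stays on the edge because the deadline is tighter than the assumed round trip; the second goes to the cloud because the payload exceeds what the device can handle. Real systems would fold in battery, bandwidth, and cost, but the shape of the decision is the same.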
If you’re considering a project involving an edge AI implementation and would like to explore the options available, we would love to hear from you. Spark 64 specialises in bespoke, innovative solutions to the hardest AI problems: If it’s a challenge, we want to hear about it! Get in Touch