Winning with the IoT: the vitality of edge computing to the enterprise


Edge processing: filtering, modeling and analytics

Edge processing primarily consists of analytics filtering, modeling, and the analytics themselves. Interestingly, virtually all those tasks involve centralized clouds in some way. Edge processing usually begins with analytics filtering, in which only notable event data is transmitted to the cloud’s core. When monitoring equipment assets in the Industrial Internet, for example, the reams of data indicating a machine is functioning properly are filtered out at the edge, while aberrational data indicating maintenance concerns is sent to central clouds. “If you look at autonomous driving, those vehicles generate 50 terabytes per car per day,” Norris says. “If you’re generating all of that, that’s an onerous amount of data to try to [centrally] coordinate, especially when you multiply that by the number of vehicles.”
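The filtering pattern described above can be sketched in a few lines. This is an illustrative example only: the idea of a "normal operating band," and the specific threshold values, are assumptions, not details from the article.

```python
# Edge-side analytics filtering: readings inside the normal operating
# band are dropped at the edge; only aberrational values are forwarded
# to the central cloud. NORMAL_RANGE is a hypothetical healthy band.

NORMAL_RANGE = (20.0, 80.0)  # assumed healthy temperature band, in degrees C

def filter_at_edge(readings):
    """Return only the readings worth transmitting to the cloud core."""
    low, high = NORMAL_RANGE
    return [r for r in readings if r < low or r > high]

readings = [45.2, 51.0, 95.7, 48.3, 12.1]
to_cloud = filter_at_edge(readings)
print(to_cloud)  # only the aberrational values survive the filter
```

In a real deployment the "notable event" test would be a model or rule set tuned to the equipment, but the economics are the same: most of the healthy-machine data never crosses the network.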

Of equal importance is the implementation of data models for analytics aggregation on edge devices or platforms. According to Will Ochandarena, senior director of product management at MapR, such “model evaluations” usually necessitate building models from centralized locations, then implementing them at the edge. When doing predictive maintenance on oil sensors, for example, “as this data is created, you pass it through and classify it with that model, and you can detect in the moment whether a failure is impending or not and take action,” Ochandarena says.
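The build-centrally, evaluate-at-the-edge pattern Ochandarena describes can be sketched as follows. The "model" here is a deliberate stand-in: a trivial linear rule with hypothetical coefficients, as if exported from central training, rather than a real deep learning artifact.

```python
# Central training produces a model artifact; the edge only scores
# incoming data against it. Coefficients below are illustrative.

CENTRAL_MODEL = {"weight": 0.9, "bias": -70.0}  # assumed, for illustration

def predict_failure(vibration_reading, model=CENTRAL_MODEL):
    """Score one sensor reading with the centrally built model."""
    score = model["weight"] * vibration_reading + model["bias"]
    return score > 0  # True means an impending failure is predicted

for reading in [60.0, 85.0]:
    if predict_failure(reading):
        print(f"reading {reading}: failure impending, taking action")
```

The key point is the division of labor: training stays in the centralized location where data and compute are plentiful, while the latency-sensitive classify-and-act step happens as the data is created.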

The advent of graphics processing units (GPUs) and their processing power, memory and storage advancements enables certain endpoint devices to aggregate analytics themselves. That capability is essential for acting on what Dipti Borkar, Kinetica VP of product marketing, terms “perishable” data with little latency.

Consequently, GPU-powered edge devices “are more than capable to drive all the computing for data processing solutions to leverage aggregates right there and provide that as a way to inform operations at the edge without any centralized orchestration,” Negahban explains. Filtering analytics at the edge also yields substantial cost savings, since only the results need to be transmitted to central stores.
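A minimal sketch of that edge-local aggregation: the device maintains its own rolling aggregates and acts on them with no centralized orchestration. The window size is an assumption for illustration; a GPU-backed device would do the same thing over far larger volumes.

```python
# Edge-local aggregation: the device keeps a rolling window of readings
# and computes aggregates itself, informing operations on the spot.

from collections import deque

class EdgeAggregator:
    def __init__(self, window=5):
        # Bounded buffer: old readings fall off automatically.
        self.buffer = deque(maxlen=window)

    def add(self, value):
        self.buffer.append(value)

    def mean(self):
        return sum(self.buffer) / len(self.buffer)

agg = EdgeAggregator(window=3)
for v in [10.0, 20.0, 30.0, 40.0]:
    agg.add(v)
print(agg.mean())  # mean of the last 3 readings only
```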

Processing artificial intelligence

The need to aggregate data for analytics purposes is fundamental to what Wisniewski terms user behavior analytics, which are necessary for “analyzing all your users, knowing what their behavior is and watching for something that doesn’t look like their normal behavior.” That tenet is as applicable to machine-generated data as to human-generated data and is significantly enhanced by AI. Ochandarena describes the correlation between the IoT and AI. “Given your IoT use case, it hinges on having machine learning or deep learning build a model to do something new with,” he says. “Edge computing facilitates that model in a way that helps you get things done: overcoming network challenges, latency challenges, things like that.”
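The user behavior analytics idea Wisniewski describes, learning each user's normal activity and watching for deviations from it, can be sketched with a simple statistical baseline. The activity metric (logins per day) and the deviation threshold are hypothetical; production systems would use ML models over many behavioral features.

```python
# User behavior analytics sketch: build a per-user baseline of normal
# activity, then flag values that deviate sharply from that baseline.

import statistics

def build_baseline(history):
    """Per-user mean and standard deviation of an activity metric."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag behavior more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

logins_per_day = [4, 5, 6, 5, 4, 6, 5]   # hypothetical user history
baseline = build_baseline(logins_per_day)
print(is_anomalous(40, baseline))   # far outside this user's norm
print(is_anomalous(5, baseline))    # looks like normal behavior
```

The same baseline-and-deviation logic applies unchanged whether the "user" is a person or a machine emitting telemetry, which is why the technique carries over to machine-generated data.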

The predictive and prescriptive nature of AI is well suited for what Negahban calls “inferencing,” particularly on contemporary lightweight edge platforms that perform jobs such as “running deep learning operations simultaneously, generating scores and feeding them back into other analytics right on that same device or back to the cloud.”

Such gateway platforms offer a window of visibility into edge operations that is integral to harnessing them. “There might be 100 devices feeding into a gateway and the gateway feeding into a server; the gateway can be an aggregation point,” Guard says. “The gateway can even be a decision point that can push out to the device itself.”
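Guard's topology can be sketched as a single gateway step: aggregate the devices' readings for the upstream server, and simultaneously act as a decision point that pushes commands back down. Device IDs, the temperature threshold, and the "throttle" command are all hypothetical.

```python
# Gateway as aggregation point and decision point: summarize device
# readings for the server, and push commands back to hot devices.

def gateway_step(device_readings):
    """Aggregate device readings upstream; decide per-device actions."""
    summary = {
        "count": len(device_readings),
        "max": max(device_readings.values()),
    }
    # Decision point: push a command back to any device running hot.
    commands = {dev: "throttle" for dev, temp in device_readings.items()
                if temp > 90.0}
    return summary, commands

readings = {"dev-001": 72.5, "dev-002": 95.1, "dev-003": 68.0}
summary, commands = gateway_step(readings)
print(summary)   # forwarded upstream to the server
print(commands)  # pushed back down to the devices
```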

Use cases for AI and the edge abound and prominently relate to the automotive industry. “The good thing about cars is they are much bigger than a smart meter, so it’s a different form factor requirement,” Borkar says. “Image recognition for driving when you’re inferencing on the fly, that’s one example.”

Ochandarena mentions that one of the latest trends to affect edge computing is the utilization of video data, primarily for use cases that “used to involve a lot of sensors,” which are now replaced with video footage. For manufacturing floor automation, progressive companies are “placing cameras at the end of the manufacturing line to take video of the things coming off and use deep learning models that can detect if they’re good or bad, then instantly mark a part for reworking before it leaves and becomes more expensive,” Ochandarena says.
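The camera-at-the-end-of-the-line pattern reduces, logically, to scoring each frame and marking bad parts before they leave. In this sketch `classify_frame` is a stand-in for a real deep learning defect model, and the score threshold is an assumption.

```python
# End-of-line video inspection sketch: score each frame with a (stubbed)
# defect model and flag parts for rework before they leave the line.

def classify_frame(frame):
    """Stand-in for a deep learning model; returns a defect score in 0..1."""
    return frame.get("defect_score", 0.0)

def inspect_parts(frames, threshold=0.5):
    """Return the IDs of parts whose frames score above the threshold."""
    return [f["part_id"] for f in frames if classify_frame(f) > threshold]

frames = [
    {"part_id": "A1", "defect_score": 0.05},
    {"part_id": "A2", "defect_score": 0.91},
]
print(inspect_parts(frames))  # parts flagged for rework
```

Catching the defect at this point is the whole economic argument: reworking a part on the line is far cheaper than recalling it after it ships.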
