AI and ML at the edge

The amount of data coming out of IoT (Internet of Things) devices is vast and growing. Some of this data is vital; much of it is not. Informed estimates suggest that of every 400 events coming out of the IoT layer, only one will be interesting and actionable. This is a classic “needle in a haystack” problem: how do you find that one interesting event among hundreds in a timely and cost-effective manner?


Why not send it all back to a data lake in “big cloud”?


  1. That one critical event among the hundreds of others may require an immediate response. For example, an electrical transformer sitting between an industrial customer and the grid may be about to fail. If it does, the load it had been absorbing is dumped back into the grid, potentially overloading other nearby grid components and triggering a snowballing blackout. The round trip to a distant data centre adds too much latency for this kind of predictive and preventative maintenance scenario to be implemented using IoT data and “big cloud” alone.
  2. Sending data that isn’t actually needed back to “big cloud” comes at significant cost. Not only does it require a lot of network bandwidth (theoretically available in most places in Japan, but not in many of the other countries where we work), but the hyperscalers also charge significant fees for moving data in and, especially, out of their clouds – the so-called “ingress and egress” charges. Why pay good money to transmit and store data that you don’t need now, and probably never will? Filtering at the edge, as sketched below, keeps the uninteresting majority local and forwards only the events that matter.
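
To make both points concrete, here is a minimal, purely illustrative sketch (in Python) of an edge-side filter. It acts locally the moment a reading crosses a critical threshold and forwards only flagged events upstream, so the uninteresting majority never incurs bandwidth or egress charges. The event fields, thresholds and the trip_local_alarm / forward_to_cloud hooks are hypothetical placeholders, not part of any MidoriCloud API.

    # Hypothetical edge-side filter: act locally on critical events and
    # forward only the rare "interesting" events to the central data lake.
    from dataclasses import dataclass

    @dataclass
    class SensorEvent:
        device_id: str
        temperature_c: float   # e.g. transformer winding temperature
        load_kva: float

    CRITICAL_TEMP_C = 120.0    # illustrative thresholds, not real specifications
    WARN_TEMP_C = 95.0

    def trip_local_alarm(event: SensorEvent) -> None:
        # Placeholder for an immediate local action (shed load, notify the
        # substation controller) that cannot wait for a round trip to a
        # distant data centre.
        print(f"ALARM: {event.device_id} at {event.temperature_c} degC")

    def forward_to_cloud(event: SensorEvent) -> None:
        # Placeholder for an upstream publish (MQTT, HTTPS, etc.).
        print(f"forwarding {event.device_id} for long-term analysis")

    def handle(event: SensorEvent) -> None:
        if event.temperature_c >= CRITICAL_TEMP_C:
            trip_local_alarm(event)   # respond in milliseconds, at the edge
            forward_to_cloud(event)   # keep the record for later analysis
        elif event.temperature_c >= WARN_TEMP_C:
            forward_to_cloud(event)   # interesting, but not urgent
        # else: drop the event locally and pay nothing to move or store it

    if __name__ == "__main__":
        handle(SensorEvent("tx-042", 123.5, 480.0))   # triggers the local alarm
        handle(SensorEvent("tx-042", 61.0, 310.0))    # dropped at the edge

In practice the fixed thresholds would be replaced by a trained model, but the shape of the decision – act now, forward, or drop – stays the same.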


How does MidoriCloud address these issues?


  1. MidoriCloud offers high-performance, low-cost compute close to the source of the data. MidoriCloud’s public cloud offering will be deployed across at least 22 cities in Japan that are not currently well served by existing commercial data centres and hyperscalers (more than 80% of Japan’s data centre capacity is currently concentrated in the Kanto and Kansai areas – i.e., close to Tokyo or Osaka). MidoriCloud also offers a private, on-premise option, where the cloud pod can be deployed to a location of the customer’s choice.
  2. The MidoriCloud hardware stack is built on industry-standard chipsets from ARM, Intel, AMD and NVIDIA, allowing us to run the same software stacks as the large public cloud providers.


What kind of use cases are enabled by moving AI/ML to the edge?


While there are thousands of potential use cases for AI/ML-enabled edge computing, the following examples give an idea of some of the possibilities. Even if none of them fits your enterprise exactly, thinking them through will almost certainly suggest similar use cases of your own:


  • Predictive and preventative maintenance: asset utilization and availability can be drastically improved when potential problems are detected before they occur. AI/ML algorithms running on edge cloud can interrogate the data stream coming from an IoT-enabled device in near real time, looking for signs of an impending failure and initiating actions to reduce its impact or prevent it altogether (a simplified sketch of this pattern follows this list).
  • Fraud detection: streaming data can be interrogated both for patterns of events that are known to indicate fraud and for unexpected patterns that may indicate a new type of fraud or a “hack”. In the former case, action can be taken to stop a fraudulent transaction before a loss is incurred. In the latter case, information can be passed to analysts in the Security Operations Centre (SOC) for further investigation and classification.
  • Risk management: particularly on days when market conditions are highly volatile, it can be difficult to determine whether a trade is fraudulent, legitimate but outside trading limits, or simply normal for the prevailing conditions. Edge AI/ML can compare individual trades or groups of trades with similar situations in the past and flag concerns to Risk Management and Compliance personnel before losses are incurred.
  • Control of autonomous vehicles: while the onboard processing capabilities of autonomous vehicles are increasing all the time, onboard processing alone cannot anticipate every condition that may arise. AI/ML at the edge can add further layers of precision guidance and control to such vehicles, improving both safety and performance.
  • Routing optimization: the transportation and logistics industry is under increasing pressure to reduce both costs and carbon footprint. While there are many cloud-based solutions that offer routing-optimization advice to such companies, the data they use may be hours or even days old and takes account neither of the specific capabilities of a particular vehicle or vessel (e.g., maintenance defects) nor of local conditions (e.g., weather, tides, road closures, etc.). Moving such processing onto a truck or ship allows the optimum recommendation to be made in near real time, factoring in the very latest conditions.
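
As a deliberately simplified illustration of the predictive and preventative maintenance case above, the Python sketch below flags sensor readings that drift several standard deviations away from a rolling baseline. The window size, warm-up length and threshold are assumptions chosen for readability; a production deployment would normally use a trained model rather than a hand-rolled z-score, but the streaming, near-real-time shape of the computation is the same.

    # Minimal rolling z-score detector for a single sensor stream.
    # Window size, warm-up length and threshold are illustrative assumptions.
    from collections import deque
    from statistics import mean, stdev

    class DriftDetector:
        def __init__(self, window: int = 200, threshold: float = 3.0):
            self.history = deque(maxlen=window)   # rolling baseline of recent readings
            self.threshold = threshold

        def update(self, value: float) -> bool:
            """Return True if the new reading looks anomalous against the baseline."""
            anomalous = False
            if len(self.history) >= 30:           # wait for a minimal baseline
                mu = mean(self.history)
                sigma = stdev(self.history) or 1e-9
                anomalous = abs(value - mu) / sigma > self.threshold
            self.history.append(value)
            return anomalous

    detector = DriftDetector()
    for reading in [70.1, 70.3, 69.8] * 20 + [88.0]:   # synthetic vibration data
        if detector.update(reading):
            print(f"possible impending failure: reading={reading}")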


The MidoriCloud team includes Digital Transformation consultants and data scientists who can help you evaluate how AI/ML at the edge can optimize your business.


What AI/ML tools are available on MidoriCloud?


Thanks to MidoriCloud’s standards-based architecture, the majority of AI/ML tools that run on hyperscaler cloud or on-premise can be deployed at the edge. The main exception is workloads that rely on access to huge volumes of persistent data, such as training Large Language Models from scratch.
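
One common way to achieve that portability is to export a trained model to the open ONNX format and run it with a runtime such as onnxruntime, which provides builds for ARM and x86 CPUs as well as NVIDIA GPUs. The sketch below is illustrative only: the scikit-learn model, feature count and file name are assumptions, not MidoriCloud-specific tooling.

    # Export a trained scikit-learn model to ONNX, then run it with
    # onnxruntime -- the same artefact runs on x86, ARM or GPU builds.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType
    import onnxruntime as ort

    # Train anywhere (hyperscaler cloud, on-premise, a laptop)...
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    model = LogisticRegression(max_iter=200).fit(X, y)

    # ...export to a portable ONNX file...
    onnx_model = convert_sklearn(
        model, initial_types=[("input", FloatTensorType([None, 8]))]
    )
    with open("edge_model.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())

    # ...and load it at the edge for low-latency local inference.
    session = ort.InferenceSession("edge_model.onnx")
    predictions = session.run(None, {"input": X[:5].astype(np.float32)})[0]
    print(predictions)

The .onnx file is the artefact that travels between environments; only the runtime build needs to match the local hardware.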


In most cases, a model that is built and trained on hyperscaler cloud can be ported relatively easily to MidoriCloud, where it can continue to evolve based on new streaming data. Insights gained in this way can then be fed back asynchronously to the master model in hyperscaler cloud for redistribution to other edge sites.
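
A minimal sketch of that train-centrally, refine-at-the-edge loop is shown below, assuming a scikit-learn estimator that supports incremental updates via partial_fit and is moved between environments with joblib. The synthetic data, file names and the final “feedback” step are illustrative placeholders.

    # Sketch of the "port the model, then keep learning on streaming data" loop.
    # Assumes an estimator that supports incremental updates (partial_fit).
    import joblib
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # --- in hyperscaler cloud: train the master model and ship it ---
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 6))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)   # synthetic labels
    master = SGDClassifier().fit(X_train, y_train)
    joblib.dump(master, "master_model.joblib")

    # --- at the edge: load the ported model and refine it on new data ---
    edge_model = joblib.load("master_model.joblib")
    X_stream = rng.normal(size=(64, 6))            # a fresh micro-batch of events
    y_stream = (X_stream[:, 0] + X_stream[:, 1] > 0).astype(int)
    edge_model.partial_fit(X_stream, y_stream, classes=np.array([0, 1]))

    # --- asynchronously: feed the updated model (not the raw data) back ---
    joblib.dump(edge_model, "edge_update.joblib")   # placeholder for the feedback step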