Our CTO likes to call MidoriCloud a “microscaler”. This is to contrast what we do with what the “hyperscalers” (AWS, Azure, GCP) are doing.
Simply put, an edge cloud is a set of computing resources placed as close as possible to the source of the data, rather than in an enterprise data centre or hyperscaler cloud that may be far away from that source.
“Edge” refers to the location – close to the sensor, the camera, the autonomous vehicle, or whatever else is producing the data – while “cloud” means a set of computing resources that are collectively managed remotely, with a high degree of automation and orchestration.
Why do we need to move compute to the edge?
Moving compute out of the data centre to the edge makes sense because it enables analysis of data when it is most useful – i.e., as soon as it is produced. Over the last few years, a new generation of high-performance, low-cost, power-efficient hardware has made such analysis possible in small edge facilities, using much the same tools that were previously confined to large, centrally located compute farms. Examples include artificial intelligence (AI) and machine learning (ML) workloads as well as more traditional line-of-business applications. Indeed, it isn’t much of a stretch to say that the only thing you probably wouldn’t run at the edge today is an application requiring access to a very large pool of persistent data in a database or data warehouse – and even then, the edge-based analytics can feed their insights back to such systems asynchronously for subsequent downstream processing.
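As a minimal sketch of that feedback pattern: analyze the data where it is produced, and asynchronously forward only the small resulting insight. Everything below – the summarizing logic and the central endpoint URL – is an illustrative placeholder, not part of any real MidoriCloud API.

import json
import queue
import threading
import urllib.request

insights = queue.Queue()

def analyze_reading(sensor_id, values):
    # The heavy lifting happens here, at the edge. A real deployment might
    # run an ML model; this stand-in simply summarizes the raw samples.
    insights.put({
        "sensor": sensor_id,
        "samples": len(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    })  # note: the raw values never leave the edge

def forward_insights(central_url):
    # Background thread: ship compact summary records to a central system
    # for downstream processing, asynchronously and at our own pace.
    while True:
        record = insights.get()
        request = urllib.request.Request(
            central_url,
            data=json.dumps(record).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

threading.Thread(
    target=forward_insights,
    args=("https://central.example.com/insights",),  # placeholder endpoint
    daemon=True,
).start()

analyze_reading("camera-01", [0.2, 0.9, 0.4])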
Why, you may ask, does this matter? Aren’t networks fast, cheap, and ubiquitous these days?
Well, that isn’t strictly true, even in very advanced economies such as our home market of Japan. Gigabit fibre and multi-megabit 4G/5G are indeed available pretty much everywhere in Japan. However, the volume of data produced at the edge is vast and growing exponentially, so even where hauling it all back to the centre is technically possible, it is usually not practical: network latency can be an issue for data (events) requiring an immediate response, and the hyperscalers’ ingress/egress charges can make moving all of that data back very expensive, if not downright cost-prohibitive.
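To make the cost point concrete, here is a rough back-of-envelope sketch in Python. Every figure in it – the camera count, the per-camera bitrate, and the per-gigabyte transfer price – is an illustrative assumption, not a quote from any particular provider.

# Back-of-envelope estimate of hauling raw edge data to a central cloud.
# Every number here is an illustrative assumption.
cameras = 1_000           # assumed number of video sources at the edge
mbps_per_camera = 4       # assumed average bitrate per camera (Mbit/s)
price_per_gb = 0.09       # assumed data-transfer price, USD per GB

seconds_per_month = 30 * 24 * 3600
gb_per_month = cameras * mbps_per_camera * seconds_per_month / 8 / 1000  # Mbit -> GB

print(f"Data moved per month: {gb_per_month:,.0f} GB")                  # ~1,296,000 GB (about 1.3 PB)
print(f"Transfer cost per month: ${gb_per_month * price_per_gb:,.0f}")  # ~$116,640

Even with far more conservative assumptions the order of magnitude remains uncomfortable, which is why analyzing and filtering data at the edge – and sending only the results onward – is usually the more economical design.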
And if we talk about the edge in developing nations, such as the Philippines or Indonesia, all bets are off: compute at the edge may be the only practical way to analyze the data quickly, or at all.
Why do we need cloud at the edge?
Large enterprises may have thousands or even tens of thousands of data sources producing potentially valuable data; analyzing it in a hyperscale cloud could require hundreds or thousands of servers. We saw in the previous section why we want to move much of that analysis to the edge, which implies an awful lot of small computing facilities, potentially distributed over a large geographical area, some of them in locations that are very remote indeed.
Most companies that have run edge-computing pilots so far have discovered that managing all those remote devices can rapidly become a serious headache; informal surveys suggest that manageability becomes a real problem at around 200 edge devices. Furthermore, most large enterprises are in the habit of outsourcing the management of their compute to third parties, whether traditional managed-services companies or hyperscaler cloud providers, and they often no longer have the in-house expertise to tackle these management challenges themselves.
How does MidoriCloud address these challenges?