
Articles | June 4, 2018

Edge Computing And The Disaggregation Of The Data Center Infrastructure

Written by Bill Mazzetti, senior vice president and chief engineer at Rosendin

Mission Critical

We’ve all heard more about edge computing than we ever did about DCIM or blade servers during their birth and heyday. All the touts bray and wax superficially about edge being a remote, small, powerful, compute/network mix. Everyone is vying for the sound bite and idea that leads to a Nobel in Stockholm.

Edge represents a logical and physical disaggregation of your IT program. Embrace the cactus.

First and foremost, edge is a computer science and network dynamic. We contend that edge will eventually be productized as a service, much like the cloud. We also believe that some solutions will build in a mobile or static network component to help with data gathering and gating back to the IT mothership for further analytical work. Neither of these is the case today. Whatever approach is chosen, it must be linked with a network solution and the reality of a very mobile and diverse communication solution at the data reception end of the operation.

WHERE DO WE START?

It is so early in the engineering and deployment cycle that no one can say what will be a consensus operationally or even universally sensible. Edge compute applications, including resiliency, availability, carrier, network, compute, storage, latency, location, and the lot, will be disruptive. So are the forces that are making us face a new way of doing business. Like all things IT, edge will be forced to evolve past the point where we know it's needed but not yet to the point where the solutions or results are consistent.

The point is optimizing latency and the dynamics of data gathering. Edge today may be physical, and tomorrow, much more virtual. When the Internet of Things (IoT) is mentioned, we take this to mean that the data reception point is certainly moving much, much closer to the location where data is generated. All of these ideas are logical, simple, and elegant in concept, but what a solution that meets those goals looks like is unknown. We all know early versions of disruptive approaches tend to be experimental, often limited in scalability, and wonky as infant applications. This is the opposite of the needed end state.

Edge computing is merely the evolution of the art based on the tools available today posed against anticipated risks or actual tasks. Nevertheless, we’ve seen this before with the rise of file servers away from the structured bus-and-tag world of 1980s data centers. Like then, IT teams are merely adapting their computing ecosystem to the demand of the end use and application.

WHAT IS EDGE, REALLY?

Edge computing solutions can be defined in different ways. Edge is the disaggregation of all or part of your IT infrastructure. Several hyperscale operators already logically or physically disaggregate, with their storage and database administrators (DBAs) working in physically separate areas. These businesses started with network nodes placed around the world and they just kept evolving. Cloud services are an analogy for this — a private application that was novel, cutting-edge, and physically diverse became a readily adopted, commercialized, public-access, productized business — all as originally presented by Amazon Web Services (AWS). From that comes the bare-iron business, and so on. Back to the point: edge starts as self-inflicted and custom disaggregation, followed by commercially varied and viable edge solutions when it moves to fuller adoption and scale in the market.

So where does that leave us, and what forces are afoot? Mobile, faster, and the data gating of what we store and archive are at the forefront. On a derivative basis, we should ask what data we should process, transmit, and store, and what can be tossed aside. Add the ability to put some pretty powerful network and compute resources into a small footprint, and now we have the nexus of need and ability, albeit without acknowledging precedent. You only have to look at the massive reduction of equipment on cell towers and the compaction and evolution of mobile technology to see what can be achieved.
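The "data gating" idea above can be made concrete with a small sketch. This is purely illustrative — the function and threshold names are invented, not from the article — but it shows the shape of the decision an edge node makes: summarize raw readings locally and forward only the summary plus the out-of-range points to the core.

```python
# Illustrative sketch of edge-side data gating (names are hypothetical):
# the edge node keeps or discards raw data locally and forwards only
# a compact summary plus anomalies to the central IT systems.
from statistics import mean

THRESHOLD = 90.0  # hypothetical alert threshold for this sensor

def gate_readings(readings):
    """Return a compact summary and only the anomalous raw points."""
    anomalies = [r for r in readings if r > THRESHOLD]
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
    }
    return summary, anomalies

summary, anomalies = gate_readings([71.2, 88.9, 93.4, 70.1, 95.0])
# Only the summary and the two out-of-range points travel upstream;
# the rest is archived or discarded at the edge.
```

The design choice being illustrated is simply that the filter runs where the data is generated, so the upstream network carries kilobytes of summary instead of the full raw stream.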

SO WHAT’S DRIVING THE EDGE?

We feel there are several near-term forces driving edge and system disaggregation:

  • Close-to-consumer network speed for content streaming and uploading
  • IoT and the explosion of data gathering devices in the consumer space
  • Virtual and augmented reality
  • High-volume data transfer for performance measurement, safety, and control in autonomous vehicles and drones
  • Merger of mobile and content providers and the resulting merging of communications, media, and content by single providers
  • Need for larger compute power in franchise, store, or construction site operations that supports larger local compute and buffers existing network infrastructure
  • The need to hold down application latency in the face of rapidly increasing loads on the applications and their supporting network, storage, and processing infrastructure

The common theme to all of these market forces is the continued, geometric increase in the amount of data being exchanged or transferred over a network that has not kept up with the underlying service-level agreements (SLAs) or basic service expectations of the data it supports. In context, edge computing is equivalent to RAM: a fast buffer close to where the work happens. Even when data can readily flow back to processing and storage, where is the optimization in the face of mounting data gathering, storage, analytical, and network demands?

WHERE WILL EDGE RESIDE?

An edge data facility is still a data center. It still must reflect design, construction, and operational habits that result in high levels of system availability, maintainability, and resiliency. The disaggregation that edge poses will distribute compute and storage assets, and may have to place those assets in non-data center facilities.

The challenge will be placing edge assets into sufficiently robust facilities or enclosures outside of traditional data centers, where those local solutions can equally sustain those IT operations and systems. Could edge be represented in another data center that's closer to the data gathering point? Absolutely. It could be in a cell shelter, 50 ft. in the air on a cell or lighting tower, or in someone else's data center. It may possess a network aspect like mobile or public WiFi that is not in your network hopper today. The edge location will be specific to your business. Expectations for how resilient the systems and operating habits must be will remain consistent with known industry best practices.

There will certainly be approaches where your edge systems will be living outside the safe, clean, quiet, and expensive white space of today’s data centers. When you see the comments about “edge on the cell tower” (which we think will happen), you have to consider that not only network, but compute and some storage will have to be put into a shelter, enclosure, or space that needs to be temperature-, storm-, lightning-, humidity-, vandalism-, and generally life-resistant. This is the same situation we experienced in the migration from the mainframe and midrange days to distributed servers in the 1980s, where compute and storage ended up in commercial and not mission critical environments. Moreover, we lived and learned through the 1980s without too much disruption.

WE HAVE SEEN THIS BEFORE, SO WHAT DO WE DO?

While history is repeating itself, it is certainly not doing so with the same technology or facility approaches we saw in the 1980s. If we recognize that we are dealing with data gathering that is increasingly mobile, this forces edge and disaggregation into a hybrid of WiFi/mobile gathering and remote compute. Who can productize the edge, and how does it appear in the data ecosystem? When we mention productization, this is not merely rebranding. As with all early-stage engagements, cost is considered, but function and the ability to meet mission need are more important. Initial conclusions will likely not look much like the mature solution.

Since edge is a disaggregating step in the end-to-end compute construct, location is critical. It becomes the base evaluator for the single purpose of reducing latency in your applications. The purpose of busting the darned thing up is to get it to run optimally or in advance of a new program or need. Yes, it can reside in a data center, but it is likely not to be one you are in already. We believe the point of edge is to buffer and prioritize data processing and storage relative to the core of the IT systems. The basic difference that we see in edge vs. cloud is that cloud is not location specific but is capacity specific, whereas edge is location and capacity specific.
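The claim that location is the base evaluator for latency can be checked with back-of-the-envelope arithmetic. Light in fiber travels at roughly two-thirds of c, or about 200 km per millisecond one way; the distances below are illustrative, not from the article.

```python
# Back-of-the-envelope propagation delay: ~200 km of fiber per
# millisecond, one way. Distances are hypothetical examples.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Minimum round-trip propagation time over fiber, ignoring
    switching, queuing, and processing delays."""
    return 2 * distance_km / FIBER_KM_PER_MS

cloud_rtt = round_trip_ms(1500)  # regional cloud site: 15.0 ms floor
edge_rtt = round_trip_ms(50)     # nearby edge site:     0.5 ms floor
```

Note this is only the physical floor; real round trips add switching and processing time on top. But the floor alone shows why moving the compute point 30x closer matters for latency-sensitive applications.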

Edge offers the ability to move compute closer to the point of reception in the data chain vs. cloud or non-virtualized facilities. One theory would represent edge as the IT mitosis of cloud infrastructure to improve performance. Similarly, can edge exist at the server level, where disaggregation of the drive and motherboard means that power could be applied virtually and very accurately within the foundry? We think so.

Today, we store everything. At some point, the value of storing, analyzing, and archiving everything will be outweighed by the cost, complexity, and difficulty of doing so. An edge compute play may offer cost savings if scaled services such as PaaS, SaaS, or IaaS can be brought into play. The initial cost dynamic may turn negative while operators keep their SLAs and service expectations intact and before the market for these services matures and discounts. Something will have to give.

This offers a curious commercial dynamic where service providers can use the location sensitivity of the customer's edge solution to absorb stranded or un-optimized capacity in their own systems. Until then, embrace the cactus.

Link to Original Article (may require registration)
https://www.missioncriticalmagazine.com/articles/91501-edge-computing-and-the-disaggregation-of-the-data-center-infrastructure

About Rosendin

Headquartered in San Jose, Calif., Rosendin is employee-owned and one of the largest electrical contractors in the United States, employing over 7,500 people, with revenues averaging $2 billion. Established in 1919, Rosendin remains proud of our more than 100 years of building quality electrical and communications installations and value for our clients but, most importantly, for building people within our company and our communities. Our customers lead some of the most complex construction projects in history and rely on us for our knowledge, our ability to scale, and our dedication to quality. At Rosendin, we work to ensure that everyone has the opportunity to reach their full potential by building a culture that is diverse, safe, welcoming, and inclusive.

Marketing Materials, Media Kit & Contact

Salina Brown

Director of Corporate Marketing
480.708.7625
480.921.4022
sbrown3@rosendin.com