The data center has evolved from its mainframe origins to today’s software-defined model, driven largely by technological advances, market forces, and business requirements. The result is deployment architectures and operating models that are more complex (silos, sprawl, underutilization, etc.) and offer less visibility into operational status. This evolution is illustrated in Figure 1.
At the beginning of the mainframe computing era, true data centers didn’t exist, and the IT function was centralized in organizations. But with the advent of distributed computing, every department could have its own applications and servers, leading to server sprawl and skyrocketing data center operating costs. In some cases, decentralized IT also led to the function being outsourced as a non-core part of the business. Yet, put in a larger context, decentralized application architectures placed once unimaginable resources at individuals’ fingertips. They also spurred connections to complex networks and the internet, bringing more applications, more demand, and more opportunity for human error and hacking.
Some of these challenges were solved by innovations in virtualization that accompanied the move to service-oriented architecture (SOA). With the ability to abstract software from hardware, organizations could finally bring server sprawl under control, saving capital and operating expenses on top of the savings from the data center facilities consolidation of the earlier period.
But the insatiable global appetite for 24/7/365 connectivity and the growing number of connected devices introduced new challenges, including how to turn the flood of data into a manageable stream. Protecting physical infrastructure also took on heightened importance because of the huge volume of data it now carried. Market forces also let consumers choose service providers that assured data security and high network availability, prompting providers to place infrastructure closer to information consumers in edge facilities.
Today, IT departments seek to consolidate, virtualize, and automate operational processes, as complexity cannot be resolved simply by assigning more people to it. In some ways, this resembles a return to the mainframe: access to pooled resources represents massive, centralized computing power. However, the latest technologies are deployed and managed very differently from old mainframe computers. The service-oriented data center allows the flexibility and speed of deployment needed to better fit the line of business it serves, re-establishing the business relevance of IT.
Pressures on IT
While CIOs view the recognition of this important link between IT and the business as an opportunity, the pressures facing IT departments challenge their ability to seize it. Transforming the business to succeed in the digital economy very likely requires significant investment in resources, yet CIOs continue to report shrinking or stagnant budgets and difficulty finding qualified people.
Not having enough time for business-relevant work is also a critical problem. According to Firefly Communications LLC, IT staff spend most of their time on maintenance and firefighting, with 70% to 80% of a typical organization’s IT budget going to application and infrastructure maintenance. So much is spent maintaining the status quo because the data center has evolved into highly complex deployment models.
Since there are more endpoints (edge computing, micro data centers), as shown in Figure 2, managing and securing the IT infrastructure has become tremendously complex. The number and types of resources, and the overall topology to be managed, have exploded. And with the network being consumers’ sole point of access to information, availability is paramount. Further complicating matters, social media, the Internet of Things (IoT), and intensive service personalization are drowning the data center in data.
The expanding IT role, budget constraints, and data overload are among the opportunities and challenges sending organizations to the cloud in search of solutions.
Cloud Today
A 2014 KPMG global survey of 800 business executives regarding cloud usage found that the number one use is to drive cost efficiencies (49%). Yet the survey also showed that more organizations were using cloud technology for business transformation, such as better enabling a flexible and mobile workforce, compared to a study conducted in 2012.
But cloud today only goes so far in resolving the challenges facing IT: it introduces new risks and greater complexity, and it compounds the overall lack of visibility. For example, the same KPMG survey uncovered risks of doing business in the public cloud: data loss and privacy risks were the most significant challenges according to 53% of respondents, while 50% cited intellectual property theft as challenging or extremely challenging.
To tap the overall benefits of clouds while avoiding the risks of public clouds, many organizations are turning to hybrid cloud. According to RightScale, 88% of enterprises are using public cloud, 63% are using private cloud (which may be on- or off-premises), and 58% are using both. And Interxion predicts that, by 2016, 80% of enterprises that have adopted cloud services will be managing a hybrid IT infrastructure environment as they transition from enterprise architectures to cloud models.
These statistics suggest a growing world of hybrid data center ecosystems in which physical data centers and other on-premises IT resources are supplemented with strategic use of cloud services to manage workloads and improve efficiency, resiliency, and speed of resolution. As companies move increasingly toward the cloud, they will operate their IT infrastructure in a bimodal manner. The cloud portion of this hybrid model can benefit immensely from software-defined technologies. This software-defined data center (SDDC) model, now in the early stages of the next data center evolution, has the potential to reverse the trend of less visibility and greater complexity and risk. At a time when organizations are reluctant to invest in infrastructure that may not support new product introductions or service models in three to five years, SDDCs will also enable IT departments to stay agile and evolve with changes in business direction.
SDDC Tomorrow
Once an organization deploys a hybrid data center ecosystem to flexibly manage more and bigger workloads, the challenge becomes how to manage such a complex environment, encompassing, for example, a legacy data center with virtualized servers, a private cloud operated by a colocation services provider, and one or more public clouds operated by cloud vendors. Organizations need to be able to react quickly to changes while being careful to protect the business.
These challenges can be met only through SDDC processes, namely virtualization, standardization, automation, and optimization. Operated and managed correctly, the SDDC will be a resilient, efficient, secure, and less complex cloud-like environment that enables the most efficient delivery of cloud services.
The SDDC is a data center in which all infrastructure — servers, network, storage — is abstracted from the hardware to simplify the delivery of services. Control of the data center is fully automated by software, with intelligent software systems controlling hardware configuration, including critical infrastructure.
Characteristics of SDDCs include:
Focus on services. Infrastructure configuration, control, and delivery are services-centric; virtualized infrastructure is delivered as a service.

Disaggregation. The SDDC uses a large number of software hosts, often running on inexpensive commodity hardware supported by facilities software. Workloads are spread out and can move and be supported anywhere: an edge facility close to the consumers of information, an enterprise data center, a cloud hosting facility, etc. (A sketch of such policy-driven placement follows this list.)

Operational shift from physical/hardware-centric to virtual/software-centric.¹ Time scales are compressed and flexibility enhanced, so workloads and workflows are agile (fast), flexible, scalable, and efficient.

Policy-based critical infrastructure control. The software layer optimizes infrastructure management to keep the data center in balance: power and cooling are allocated only where they need to be, when they need to be, at the right level.

Focus shifts from availability to resiliency. The network may slow a little, but because it is a simple matter to take a failed server out of service once its applications have been automatically moved elsewhere, the only way to really go down is to experience a software glitch or be hacked.

SDDC software-centric architecture is illustrated in Figure 3.
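To make the disaggregated, policy-driven model concrete, here is a minimal sketch of how an SDDC control layer might decide where a workload runs. All names, fields, and the placement rule are illustrative assumptions, not any vendor’s API:

```python
from dataclasses import dataclass

# Hypothetical facility and workload records; real SDDC platforms expose
# far richer models, but the policy idea is the same.
@dataclass
class Facility:
    name: str
    kind: str            # "edge", "enterprise", or "cloud"
    latency_ms: float    # typical latency to the workload's consumers
    free_capacity: int   # available host slots

@dataclass
class Workload:
    name: str
    latency_budget_ms: float
    slots_needed: int

def place(workload: Workload, facilities: list[Facility]) -> Facility:
    """Pick a facility that satisfies the workload's latency policy."""
    candidates = [f for f in facilities
                  if f.latency_ms <= workload.latency_budget_ms
                  and f.free_capacity >= workload.slots_needed]
    if not candidates:
        raise RuntimeError(f"no facility satisfies policy for {workload.name}")
    # Prefer pooled cloud capacity when latency allows; fall back toward edge.
    preference = {"cloud": 0, "enterprise": 1, "edge": 2}
    return min(candidates, key=lambda f: preference[f.kind])

facilities = [
    Facility("edge-01", "edge", 5, 20),
    Facility("dc-east", "enterprise", 25, 100),
    Facility("cloud-a", "cloud", 60, 10_000),
]
print(place(Workload("checkout", latency_budget_ms=30, slots_needed=4),
            facilities).name)  # -> dc-east
```

The point is that placement policy is expressed as data (a latency budget, a capacity requirement) rather than as a manual siting decision, which is the essence of the services-centric shift described above.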
It’s important to understand how the SDDC overcomes the traditional data center shortcomings that lead to inefficiency and waste. In a traditional data center (even one with virtual servers) that lacks visibility into processes and automatic controls, critical infrastructure depends on manual input and decision making, which makes it difficult to allocate resources properly to dynamic virtual workloads. The result is poor temperature control and an inability to shift more power in real time to a moving workload’s host. To compensate for the lack of real-time visibility and control, organizations overprovision the critical infrastructure so that availability is not compromised.
In contrast, an SDDC would provide full visibility and control of the virtual infrastructure, enabling resources to quickly and efficiently match/follow virtual workload movements.
This would happen automatically based on policy applied within the SDDC’s software-based management and control layer. Cooling control would be genuinely dynamic, and on the power side there would be high quality and reliability plus the ability to surge, shed, and move power to hosts in real time. In other words, supply would match demand automatically, in real time, according to policy. Consequently, the need for wasteful overprovisioning is eliminated.
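As a toy illustration of supply following demand, the sketch below rebalances per-rack power allocations each control interval so they track measured workload demand under a facility-wide cap. The headroom policy and function names are assumptions for illustration, not a real DCIM product’s interface:

```python
# Toy control loop: allocate power to each rack based on the measured
# demand of the virtual workloads currently hosted there, instead of
# statically overprovisioning every rack for its worst case.
HEADROOM = 1.15  # 15% buffer above measured demand (illustrative policy)

def rebalance(rack_demand_kw: dict[str, float],
              total_budget_kw: float) -> dict[str, float]:
    """Return a per-rack power allocation that tracks demand under a cap."""
    desired = {rack: kw * HEADROOM for rack, kw in rack_demand_kw.items()}
    total = sum(desired.values())
    if total <= total_budget_kw:
        return desired
    # Over budget: scale all racks proportionally so supply stays capped.
    scale = total_budget_kw / total
    return {rack: kw * scale for rack, kw in desired.items()}

# A workload migrates from rack A to rack B between two control intervals;
# the power allocation follows it automatically.
print(rebalance({"A": 8.0, "B": 2.0}, total_budget_kw=12.0))
print(rebalance({"A": 2.0, "B": 8.0}, total_budget_kw=12.0))
```

When the workload moves from rack A to rack B between the two calls, the allocation follows it; no rack is permanently provisioned for a peak it rarely sees.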
Deploying SDDCs
Few organizations have deployed SDDCs. Instead, as previously described, they are faced with the vexing challenge of integrating a mix of on-premises and private/public/hybrid cloud components. CIOs in a position to transform their IT organizations will be assisted in their efforts by vendors offering products such as VMware’s recently introduced unified hybrid cloud platform built on an SDDC architecture.
Most CIOs, however, are likely to move to software-centric models gradually, as part of a strategic business decision to simplify operations, improve efficiency, become more resilient, and gain the flexibility and scalability needed to handle dynamic workloads in a software-optimized architecture that, for all intents and purposes, eliminates availability concerns.
Following a strategy of gradual adoption, organizations can address IT infrastructure one piece at a time: from computing, to networking, to storage, for example. Software-defined critical infrastructure management (data center infrastructure management, or DCIM) becomes a necessity once software-defined compute efforts get underway.
DCIM will be a key enabler of the SDDC. It will provide visibility into, and a common language for, all components in the data center ecosystem across IT and facilities. Being able to see everything in the ecosystem uncovers unique information with which to make better decisions.
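One way to picture that common language is as a normalization layer that maps heterogeneous IT and facilities telemetry into a single schema so both can be queried together. The following sketch is hypothetical; the field names and helper functions are assumptions, not a standard DCIM data model:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A single normalized reading type lets IT metrics (server utilization)
# and facilities metrics (PDU power, rack temperature) live side by side.
@dataclass
class Reading:
    source: str       # e.g. "server-42", "pdu-3", "crac-1"
    domain: str       # "it" or "facilities"
    metric: str       # "cpu_util_pct", "power_kw", "inlet_temp_c", ...
    value: float
    timestamp: datetime

def from_server(host: str, cpu_pct: float) -> Reading:
    return Reading(host, "it", "cpu_util_pct", cpu_pct,
                   datetime.now(timezone.utc))

def from_pdu(pdu: str, kw: float) -> Reading:
    return Reading(pdu, "facilities", "power_kw", kw,
                   datetime.now(timezone.utc))

readings = [from_server("server-42", 87.5), from_pdu("pdu-3", 6.2)]
# With everything in one schema, a cross-domain question is a simple filter:
hot_spots = [r for r in readings if r.domain == "facilities" and r.value > 5.0]
print([r.source for r in hot_spots])  # -> ['pdu-3']
```

The design choice that matters is the shared schema: once IT and facilities data share one vocabulary, the cross-domain questions that drive better decisions become simple queries.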
Another option for achieving the benefits of an SDDC is to outsource it to a cloud provider that offers such services. This may become the default choice in industries that have difficulty attracting IT talent with the skills necessary to support the function as a strategic partner to the business.
Conclusion
SDDCs promise to end the data center’s evolution toward lower visibility, greater complexity, and more risk. The integration performed by a software-oriented architecture simplifies IT and facilities control and management. The ability to move workloads anywhere and supply them with critical infrastructure support in real time, with visibility provided through software monitoring of IT and facilities, makes operations reliable.
Additionally, SDDCs bring important cost benefits. Physical IT infrastructure will become an inexpensive commodity, and everything from servers to memory chips will be swappable without downtime. The major data center cost benefit, though, comes from eliminating the need to overprovision.
Organizations will be able to hire more IT talent with the money saved, and, with IT staff freed from spending most of their time fighting fires, they will be able to focus more on innovation that supports the business.
1. AT&T outlines the key operational shifts involved in moving from a physical architecture to a software-centric architecture in AT&T Domain 2.0 Vision White Paper.