The data center dilemma: Is data destroying the environment?

Many people now ask whether data is destroying the environment. The short answer is yes.

In the next few years, adoption of new technologies such as smartphones and wearable devices may slow considerably, but data usage will keep growing rapidly. In 2012, roughly 500,000 data centers handled the world's data traffic; according to research firm IDC, there are now more than 8 million worldwide. Rapid growth in smartphone use, Internet of Things adoption, and big data analytics has driven this massive expansion of data centers, and it has come at a huge cost.

Every year, the world's millions of data centers consume hardware and electricity on a scale comparable to the aviation industry, producing roughly as much carbon emissions as the airlines. While technological advances are hard to predict, some analytical models project that, left unchecked, data center energy consumption could exceed 10% of the world's electricity supply by 2030, with corresponding growth in carbon emissions and electronic waste. Data center researchers, including leading UK expert Ian Bitterlin, have noted that data center energy consumption continues to double roughly every four years.

Additionally, Informa recently surveyed hundreds of IT leaders about their data center practices, and the results are revealing. Although data centers consume about 3% of the world's electricity supply, energy efficiency ranked only fourth on the list of priorities when building or leasing a new data center. Moreover, most respondents did not know their data center's power usage effectiveness (PUE), the primary measure of data center efficiency, and many keep data center environments unnecessarily cold, wasting significant power.
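The PUE metric mentioned above is simple to compute: it is the ratio of a facility's total energy draw to the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using illustrative numbers rather than figures from the survey:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    energy delivered to IT equipment. 1.0 is the theoretical ideal."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,500 kWh total draw while servers consume
# 1,000 kWh gives a PUE of 1.5 -- i.e. 0.5 kWh of overhead (cooling,
# power distribution, lighting) for every kWh of useful compute.
print(pue(1500, 1000))  # → 1.5
```

A facility that does not measure this ratio has no way of knowing how much of its electricity bill is overhead rather than compute, which is exactly the blind spot the survey identified.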

All in all, this paints a challenging picture for our environmental future. Fortunately, some forward-thinking industry leaders have been innovating around this challenge.

Over the past five years, the U.S. Department of Energy has found that increasing Internet traffic and data loads are being offset by a range of new technologies and designs that limit the growth of data center energy consumption. Lawrence Berkeley National Laboratory estimates that if 80% of servers in the United States were moved to optimized hyperscale data center facilities, it would result in a 25% drop in their energy consumption.

For enterprises that do not need or cannot afford to build hyperscale data centers, a new class of data center resource optimization systems has been introduced to the market. Over the past few years, many new server technologies and data center architectures have focused on maximizing resources and efficiency while minimizing energy requirements. These solutions look at new design improvements and rethink how standard data centers are built to achieve breakthrough performance and efficiency.

Additionally, a key area of improvement is better cooling technology. Building data centers in cold or windy locations is one popular approach. Another is running fewer idle servers: in 2014, Facebook developed a system called Autoscale that reduces the number of servers that need to stay on during low-traffic periods, saving roughly 10%-15% of electricity. Some industry leaders, such as Google, have turned to artificial intelligence to optimize their cooling systems against weather and operating conditions, cutting cooling energy consumption by nearly 40%.

Another increasingly popular approach is simply to design server systems to run at higher temperatures. Rather than cooling the facility to a fixed low setpoint, newer hardware can operate at higher temperatures without compromising reliability. This significantly reduces cooling requirements, which in turn reduces the power the facility draws.

Another area of focus is improving power efficiency. A recent study by ControlUp found that of the 140,000 servers examined, up to 77% had over-provisioned hardware, which increases power consumption. To address this, designs can consolidate resources, allowing servers to share computing capacity across systems rather than each device being limited to its own.
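The payoff of consolidating under-utilized servers can be estimated with a simple linear power model (power = idle draw + utilization × dynamic range). The function below is a back-of-the-envelope sketch; the server counts, utilization levels, and wattages are illustrative assumptions, not figures from the ControlUp study:

```python
import math

def consolidation_savings_kw(n_servers: int, avg_util: float,
                             idle_kw: float, peak_kw: float,
                             target_util: float = 0.6) -> float:
    """Estimate power saved by packing the same aggregate workload
    onto fewer servers running at a higher target utilization.
    Uses a linear power model: P = idle + util * (peak - idle)."""
    per_server = idle_kw + avg_util * (peak_kw - idle_kw)
    before = n_servers * per_server
    # Same total work, spread over fewer, busier machines.
    consolidated = math.ceil(n_servers * avg_util / target_util)
    after = consolidated * (idle_kw + target_util * (peak_kw - idle_kw))
    return before - after

# Hypothetical fleet: 100 servers at 15% utilization, each drawing
# 0.2 kW idle and 0.5 kW at peak, consolidated to 60% utilization.
print(consolidation_savings_kw(100, 0.15, 0.2, 0.5))
```

Because an idle server still draws a large fraction of its peak power, shrinking the fleet saves far more than the utilization numbers alone might suggest.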

System disaggregation is another important innovation: instead of replacing entire servers every 3 to 5 years, operators deploy modular, sustainable infrastructure and upgrade only the components that need it. Servers built from independently upgradeable subsystems let enterprises be more selective and efficient, retaining hardware that does not need replacing. Intel, for example, has been deploying disaggregated designs at scale with its latest generation of CPUs, making a meaningful contribution to reducing electronic waste.

The story is not over yet

NASA's Environmental Research Center has been implementing data center solutions consistent with green computing efforts. Lesley Ott of NASA's Global Modeling and Assimilation Office noted, "We are studying the issue of data center energy consumption, and we are also studying the issue of greenhouse gas pollution." NASA is actively studying the environmental challenges of data centers, while many technology companies have yet to grasp the environmental impacts of their own products and services.

Today, the most important next step is education: making people aware of the importance and benefits of greener data centers. Adopting technologies that address these challenges offers the dual benefit of optimized performance and reduced environmental impact. With the right actions, data center operations need not seriously harm the environment.
