Big data continues to transform large-scale enterprise and industry users around the world. It has already changed how decisions are made in financial services, healthcare, retail, manufacturing, and other sectors. Its impact on the data center industry, however, is less often examined. As more data is created, stored, and analyzed, more servers will be needed, and where and how to house and manage those servers to maintain reliable, high-performance operation is an important consideration. As big data business grows and more servers are needed, demand for physical space and reliable electricity will grow with it.
While the cloud is in full swing, it is worth remembering that the cloud is ultimately someone else's servers, subject to the same power and connectivity needs. Therefore, whether big data operations live on-premises, in a colocation data center, or in the cloud, IT operators must ensure that their infrastructure needs are met today and in the future. Here are some key data center infrastructure considerations for running big data:

1. Physical Space

Whether servers are deployed in a colocation facility or in the cloud, they require physical space. As data centers move closer to population centers, scarce space is becoming a limiting factor for big data operations and growth. When evaluating growth potential, it is important to consider not only the operational stability of a data center provider but also its access to space, so that flexible infrastructure can deliver scalability and speed of deployment.

2. Electricity

Today, 2.5 quintillion bytes of data are generated every day, and 90% of the world's data has been created in the past two years. This pace is reflected in the growth of data centers. According to JLL's 2017 Data Center Outlook report, U.S. data center absorption reached a record 357.85 megawatts, a continuation of what JLL called "the continued rampant momentum of global data center utilization." Beyond overall power consumption, power density per rack is also rising. Not long ago, 2kW per rack was typical; today that figure is increasingly rare, and some racks demand densities of 30kW or 40kW. As your business and its IT infrastructure grow, you must also ensure that your data center has access to sufficient power, since not all grids can deliver megawatts on demand. Data centers must also be able to cool these high-density racks, a relatively new industry requirement.
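To make the density point concrete, the back-of-the-envelope arithmetic behind rack-density planning can be sketched as follows. All figures here (rack counts, the 2kW vs. 35kW densities, the cooling overhead factor) are illustrative assumptions, not data from the article or from JLL:

```python
# Sketch: how rack power density drives total facility power demand.
# All inputs are hypothetical illustrations.

def facility_power_kw(racks: int, kw_per_rack: float,
                      cooling_overhead: float = 0.4) -> float:
    """IT load plus an assumed cooling/overhead factor (PUE-style)."""
    it_load = racks * kw_per_rack
    return it_load * (1 + cooling_overhead)

# 100 legacy racks at 2 kW vs. 100 high-density racks at 35 kW
legacy = facility_power_kw(racks=100, kw_per_rack=2)
modern = facility_power_kw(racks=100, kw_per_rack=35)

print(f"legacy: {legacy:.0f} kW, high-density: {modern:.0f} kW")
```

The same floor space that once drew a few hundred kilowatts can now demand several megawatts, which is why the article stresses that not every grid can supply megawatts on demand.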
If your business values data center sustainability, another important consideration is whether your provider offers advanced cooling technology to reduce power consumption and overall carbon footprint.

3. Uptime

The expectation for big data operations is "always on". When even a two-minute delay is unacceptable, the baseline requirement for infrastructure is 100% uptime. In a data center environment, uptime is achieved through redundant design: in the event of a failure, generators and automatic transfer switches take over the power supply of the local infrastructure. 100% uptime is achievable, but risks remain. Additional redundancy can be built in with extra generators and transfer switches, but the cost is often passed on to the end user. The newer approach to reducing downtime risk uses artificial intelligence (AI) and machine learning to maintain peak performance. Google first used AI in 2014 to track variables and calculate the peak efficiency of its data centers, and other large-scale operators are following suit. AI integrated into the colocation ecosystem is designed to work alongside existing staff, combining people and technology: data centers can now apply machine learning to sensor data, with models trained by data center technicians, to spot indicators that a failure may be approaching.

If your business is launching or expanding a big data initiative, it is wise to keep real estate, power, and uptime top of mind. Whether your big data operations end up on-premises, in a colocation data center, or in the cloud, an infrastructure that is flexible, scalable, sustainable, and reliable will ensure their success.
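As a closing footnote to the uptime discussion: the reason redundant generators and transfer switches reduce downtime risk is simple probability, provided the backup units fail independently. A minimal sketch, with purely illustrative availability figures (the 0.999 per-unit availability is an assumption, not a number from the article):

```python
# Sketch: combined availability of n independent redundant units.
# Per-unit availability figures are illustrative assumptions.

def combined_availability(unit_availability: float,
                          redundant_units: int) -> float:
    """Probability that at least one of n independent units is up."""
    return 1 - (1 - unit_availability) ** redundant_units

single = combined_availability(0.999, 1)    # one generator
n_plus_1 = combined_availability(0.999, 2)  # generator + one backup

print(f"single: {single:.6f}, N+1: {n_plus_1:.6f}")
```

Each added independent backup multiplies the residual failure probability by the per-unit failure rate, which is why each extra "nine" of uptime costs roughly as much as all the previous ones.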