Today, IT managers must be prepared for the range of technologies required to manage and maintain data center facilities. They should understand the basics of a data center environment that supports the combination of cloud computing, containerization, and artificial intelligence. Implementing, operating, troubleshooting, and updating hybrid IT setups raises many new issues and demands deeper data center skills in hardware requirements, security, and data collection. So, what skills should data center managers have in 2019?
1. Cloud platform selection and management

As cloud computing adoption grows, administrators must understand how infrastructure, platform, and software services work; the advantages and disadvantages of public, private, and hybrid cloud setups; and any hardware required to support the cloud platform. Data center managers should choose the cloud computing environment that is right for their organization's long-term use. With Microsoft, Amazon, IBM, Salesforce, SAP, and Oracle all offering a variety of services, managers need a sound evaluation process. A robust cloud platform should support consistent image deployment, interface management, architectural standards, and open application programming interfaces (APIs), such as AWS's Simple Storage Service (S3) API.

At a basic level, administrators should know how to use cloud management software for application migration, virtual machine imaging, configuration, performance monitoring, and security. Because of their reliance on cloud technology, administrators must understand how cloud architecture fits into their data centers and how to use it to the organization's advantage for storage and application performance.

2. Apply the change management process

Cloud computing is changing the way administrators deliver, monitor, and maintain applications. Administrators need the data center skills to manage more modular application setups that rely on pooled resources rather than programs installed on each server. They must be familiar with the concepts behind microservices, the difference between containers and virtual machines, and how to use orchestration as a key automation tool to keep applications running continuously and securely. They can explore Docker-based containerization and use LXC/LXD as an alternative on certain Linux distributions, especially Ubuntu.
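The core idea behind orchestration as an automation tool is a reconcile loop: compare the desired state of the services against their actual state and correct any drift. The following is a minimal Python sketch of that idea only; the service names and states are invented for illustration, and real orchestrators such as Kubernetes do far more.

```python
# Toy "reconcile loop": compare desired state to actual state and
# restart anything that has stopped -- the same idea that container
# orchestrators apply continuously at much larger scale.

desired_state = {"web": "running", "worker": "running"}

def reconcile(actual_state):
    """Return the list of corrective actions needed to reach desired state."""
    actions = []
    for service, wanted in desired_state.items():
        if actual_state.get(service) != wanted:
            actions.append(f"restart {service}")
    return actions

# Example: the worker container has crashed.
print(reconcile({"web": "running", "worker": "exited"}))
# -> ['restart worker']
```

An orchestrator runs this kind of loop continuously against the cluster's reported state, which is what keeps application operation "continuous and secure" without manual intervention.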
The emergence of microservices and container technology means that administrators need to understand the best hardware to support the architecture. In addition to automation software, administrators should learn how to build low-latency infrastructure, how to scale resources properly, and how best to organize APIs and compute and storage. They should also look into Kubernetes for container management; its developers continue to enhance its capabilities, including simplified cluster management, a container storage interface, third-party device monitoring plugins, and CoreDNS support.

3. Security becomes data-first

Device, application, and database security now form the second layer of information security. Regardless of where managers store data, or whether a third party manages it, data security must be their primary concern. Managers must evaluate their data protection framework and put information security at the forefront of their security strategy. This requires an established security baseline, a defined audit scope and objectives, and properly installed data protection software. Combining data loss prevention software with a digital rights management program can help managers build an information-first framework for use throughout the organization. Managers who want to get more hands-on with security can build skills in network testing, risk analysis, software testing, and security documentation. Being able to proactively predict, patch, and protect data against external threat actors is an important part of a data-first framework.

4. DevOps provides support through software

Managers need to increase DevOps support and collaboration to improve their data center skills. A poorly implemented DevOps system can cause chaos in the production environment. Software-based checks and balances should catch problems so that performance and runtime issues do not propagate throughout the operating environment.
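One common form of such a software-based check is a pre-deployment gate that blocks a release from reaching production unless automated checks pass. The sketch below shows the shape of such a gate in Python; the check names and thresholds are invented for illustration, not taken from any particular DevOps tool.

```python
# Toy pre-deployment gate: a release is promoted to production only if
# every automated check passes.  Thresholds here are illustrative.

def deployment_gate(metrics):
    """Return (allowed, failure_reasons) for a candidate release."""
    failures = []
    if metrics.get("error_rate", 1.0) > 0.01:      # more than 1% errors in staging
        failures.append("error rate too high")
    if metrics.get("p95_latency_ms", float("inf")) > 250:  # responses too slow
        failures.append("p95 latency above budget")
    if not metrics.get("tests_passed", False):
        failures.append("test suite failed")
    return (len(failures) == 0, failures)

ok, reasons = deployment_gate(
    {"error_rate": 0.002, "p95_latency_ms": 180, "tests_passed": True}
)
print(ok)   # True -- the release may proceed
```

Wiring a gate like this into the deployment pipeline is what lets problems be caught in software before they propagate into the production environment.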
These checks and balances rely on more agile operations, so administrators should evaluate their operational processes and determine whether they support agile workflows. When it comes to application support and production environments, they should understand whether the software layer is frequently updated or plagued by unresolved bugs.

Admins should also learn about infrastructure as code (IaC), which gives users a higher-level, more general, and more descriptive language for provisioning and deploying software processes within the data center. This means admins manage most of the technology stack through software rather than hardware. Software-based management gives administrators more scalable data center resources, but it requires more upfront testing and code-based troubleshooting to ensure hardware compatibility. The more familiar administrators are with software-defined data centers and virtual machine management, the easier it will be to support DevOps code and infrastructure.

5. Increased AI requires new hardware

Artificial intelligence (AI), machine learning, and deep learning capabilities will become more accessible to managers in 2019. Many organizations already use machine learning; in manufacturing, for example, pattern recognition identifies items that do not meet specifications. This requires not only more storage and processing power but also appropriate monitoring and data collection software. Managers must understand how these intelligent systems can help their business, identify a few basic engines that meet their goals, and use one as a foundation. If an organization adopts artificial intelligence without defining use cases, the return on investment will be very low. If an organization wants to run AI-enhanced applications, administrators should ensure that the data center has the appropriate hardware (such as GPUs) and the knowledge to manage its energy consumption.
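The manufacturing use case above can be sketched at its simplest as a statistical outlier check: flag parts whose measured dimension deviates strongly from the rest of the batch. This toy Python version uses a z-score rule rather than a trained model, and the measurements and threshold are made up for illustration; real deployments would use trained models and far richer features.

```python
import statistics

# Toy out-of-spec detector for a manufacturing line: flag parts whose
# measured dimension deviates strongly from the batch mean.

def find_out_of_spec(measurements, threshold=2.0):
    """Return indices of measurements more than `threshold` stdevs from the mean."""
    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements)
    return [
        i for i, x in enumerate(measurements)
        if abs(x - mean) > threshold * stdev
    ]

batch = [10.01, 10.02, 9.99, 10.00, 10.03, 12.50, 10.01]
print(find_out_of_spec(batch))   # -> [5], the 12.50 mm part is flagged
```

Even this trivial rule illustrates why the workload grows with the data: the detector is only as good as the volume and quality of measurements collected, which is exactly the monitoring and data collection burden described above.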
Running these high-performance applications can benefit the organization, but GPU-based servers draw considerably more power. Administrators should install GPU hardware in a way that neither overwhelms the servers nor drives power bills higher than necessary.
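The power-planning concern above comes down to simple arithmetic: does the total draw of the GPU servers fit within the rack's power budget? The sketch below makes that check explicit; all figures (watts per server, rack budget) are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope rack power check for GPU servers.

def rack_power_check(servers, watts_per_server, rack_budget_watts):
    """Return (total_draw_watts, fits_in_budget)."""
    total = servers * watts_per_server
    return total, total <= rack_budget_watts

# e.g. eight 4-GPU servers at an assumed 1,600 W each vs a 12 kW rack budget
total, fits = rack_power_check(8, 1600, 12_000)
print(total, fits)   # 12800 False -- this layout needs a second rack
```

Running this kind of check before racking hardware, and again before adding nodes, is the cheapest way to avoid overloading a rack or being surprised by the power bill.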