In the second half of cloud computing, how can operators embrace open source?


Editor's Note

In recent years, with the rise of cloud computing, SDN, and NFV, operators at home and abroad have successively announced medium- and long-term network reconstruction strategies. On the one hand, they want to use new technologies to generate new business models and counter the strong impact of Internet companies' OTT services on traditional telecom business. On the other hand, they hope a new network architecture will free them from the shackles of networks that have grown rigid over the years and improve their own operation and maintenance efficiency. CORD (Central Office Re-architected as a Datacenter), an important direction for operator network reconstruction, has become a hot topic for the major mainstream operators, and the "unified service-carrying architecture in the CORD context" is its most critical piece. The current industry consensus is to build a unified, network-wide cloud resource pool to carry the various upper-layer service network elements, replacing the "vertical chimney" mode of system construction with "horizontal layering" to eliminate the many disadvantages of the traditional network architecture. How the underlying cloud resource pool is built therefore plays a decisive role in the final outcome of operators' network reconstruction.

Building a cloud resource pool is not a new topic. Twelve years have passed since then-Google CEO Eric Schmidt first proposed the term "cloud computing" in 2006. In the early days, apart from Internet companies such as Amazon, Google, and Alibaba, which had the R&D strength to build their own cloud platforms, other cloud service providers (such as telecom operators) could only build cloud platforms on closed commercial software. In recent years, however, with open source projects such as OpenStack, KVM, and Ceph growing increasingly mature and the DevOps philosophy spreading worldwide, major companies have been evolving their cloud resource pools toward open source architectures. Cloud computing has clearly entered the second half of the game. Whether traditional telecom operators can find the right entry point and set the right pace in this field will be crucial to their ICT transformation.

The theme of this article is the research on relevant open source technologies in cloud computing resource pools. The focus is on three key issues: why to choose open source technology, which open source projects should be selected, and what problems open source may bring. It aims to provide some valuable ideas and suggestions for the open source evolution strategy of operators' cloud resource pools.

Question 1: Why choose open source technology?

Unlike Internet companies, telecom operators, as traditional CT companies, place a premium on system stability and high availability. In the past they therefore mostly chose closed commercial products in equipment selection and architecture design to ensure the network met carrier-grade standards. Today the open source trend is rising globally, and a steady stream of contributors keeps improving project source code, so many excellent open source projects now fully meet carrier-grade requirements. That is a necessary condition for operators to embrace open source technology. Beyond that, why operators should choose open source is a question they must think through clearly. The following nine points address it.

  1. Standards: Open source foundations play a role similar to the traditional CT standardization bodies (ITU, 3GPP, etc.), compensating for operators' lack of a standardization system in the IT field with respect to globality, compatibility, and openness.
  2. Cost reduction: Once open source software is deployed at sufficient scale, it can save significant costs, and cost savings feed profit growth.
  3. Efficiency: The open source community offers a wealth of automated operation and maintenance tools, which help operators move toward DevOps and markedly improve the productivity of operations staff.
  4. Functionality: On an open source architecture, operators can do secondary development for customized, personalized needs, whereas closed commercial products are usually designed only for generic, abstracted functions and do not allow secondary development.
  5. Security: Open code supports independent control, turning the "black box" into a "white box" so users can understand the internals, which greatly improves system security.
  6. Innovation: Open source projects are fertile soil for innovation. New features can be implemented quickly without waiting for a commercial product's release cycle, and one contributor's good idea can inspire other participants.
  7. Competitiveness: Open source helps operators master core technologies and strengthen their core competitiveness.
  8. Bargaining power: Operators are no longer bound by a single vendor's "package" solution, making system optimization and evolution more flexible.
  9. Ecosystem: Open source projects help companies leverage community resources; as the community develops, the company's systems keep improving.

Question 2: Which open source projects should I choose?

Since the foundation of a cloud computing resource pool is the IaaS layer, the open source projects discussed in this article are analyzed mainly from the IaaS perspective. Given the importance of DevOps, some excellent PaaS-layer open source projects are introduced as well. The following explores operators' selection of cloud-related open source projects along five dimensions: computing, network, storage, cloud operating system, and automated operation and maintenance tools.

1. Computing: Hardware, Virtualization, Containers

(1) Hardware: Open source projects are commonly assumed to be software-only, but there are also many excellent open source projects in hardware, such as OCP (the Open Compute Project). Initiated by Facebook in 2011, OCP aims to drive the continuous development of IT infrastructure through open hardware. More than six years on, OCP has nearly 200 corporate members. With low construction cost and many cumbersome, little-used functions stripped out, OCP hardware speeds up the iteration of enterprise data centers and makes software upgrades simpler, shedding the rigid image of early data centers. China has a similar open hardware effort, the Scorpio Project, which launched in 2011 with Baidu among the main initiators; giants such as Alibaba, Tencent, Lenovo, China Mobile, and China Telecom joined later, and it has progressed from version 1.0 to 3.0. The Scorpio Project is understood to have reached the international level of OCP's full-rack specifications, and even to exceed the corresponding OCP specifications in some designs.

(2) Virtualization: Virtualization here refers specifically to x86 server virtualization. In the early days of cloud computing this technology was dominated by IT giants such as VMware and Citrix; their products were comprehensive and performed well, but they were also very expensive. Today, as open source KVM technology matures, more and more companies build cloud resource pools on KVM to sharply reduce spending on virtualization software. KVM stands for Kernel-based Virtual Machine. KVM follows the Type 2 (hosted) hypervisor model: based on where the VMM sits, virtualization software divides into Type 1 and Type 2 architectures. A Type 1 VMM runs directly on the hardware and is itself an operating system; a Type 2 VMM runs on top of a host operating system and can be regarded as an application. KVM's host operating systems cover multiple Linux distributions, including CentOS, Ubuntu, and Debian.
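KVM relies on hardware virtualization extensions in the CPU. As a minimal sketch (the helper name is my own, not part of KVM), the following Python checks whether a host's /proc/cpuinfo advertises Intel VT-x ("vmx") or AMD-V ("svm"), the flags KVM needs for hardware-assisted virtualization:

```python
import os

def has_virt_extensions(cpuinfo_text: str) -> bool:
    """Return True if any CPU 'flags' line lists vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a real Linux host, feed it the live cpuinfo (guarded so the
# sketch stays harmless elsewhere).
if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("KVM-capable CPU:", has_virt_extensions(f.read()))
```

The same check is what tools like `kvm-ok` on Ubuntu perform before a host is admitted into a KVM-based resource pool.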

(3) Containers: Containers are lightweight, operating-system-level virtualization that lets us run an application and its dependencies in a resource-isolated process. Among the many container engines, Docker is undoubtedly the star. Docker is an open source container engine originally built on LXC by the PaaS provider dotCloud; it is written in Go and released under the Apache 2.0 license.
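To make the container idea concrete, here is a hypothetical minimal Dockerfile for a small Python service; the base image, file names, and entry point are illustrative, not taken from the article:

```dockerfile
# Hypothetical minimal image for a Python service (all names illustrative).
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t myservice .` and run with `docker run --rm myservice`, the image packages the application and its dependencies into a single isolated, portable unit.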

2. Network: Virtual Switch

Virtual switch: Most commercial virtualization software ships with a virtual switch component, such as VMware's vDS and vSS. Open source virtualization software, by contrast, usually offers only basic built-in networking, so an additional virtual switch is needed to fill the gap. OVS (Open vSwitch) is the most popular open source project in this field. OVS is a multilayer virtual switch developed by Nicira in C and Python under the Apache 2.0 license. Its original goal was to let large-scale networks scale automatically through programmability. It can run as a software switch in the virtualization layer or as the control stack for a switching chip. It supports multiple standard management interfaces and protocols (NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag) and multiple virtualization technologies, including KVM, Xen, and VirtualBox.
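OVS itself is far more sophisticated, but the core forwarding behavior of any Layer 2 virtual switch can be illustrated with a toy MAC-learning table; this Python sketch is purely illustrative and is not OVS code:

```python
# Toy MAC-learning switch: learn source addresses, unicast to known
# destinations, flood unknown ones. Real OVS does this in its datapath
# with flow caching, but the decision logic is the same in spirit.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}            # MAC address -> port it was seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source MAC, then return the set of output ports."""
        self.mac_table[src_mac] = in_port       # learn/refresh the source
        if dst_mac in self.mac_table:           # known destination: unicast
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}           # unknown: flood the rest

sw = LearningSwitch(ports=[1, 2, 3])
assert sw.receive(1, "aa:aa", "bb:bb") == {2, 3}   # dst unknown: flood
assert sw.receive(2, "bb:bb", "aa:aa") == {1}      # dst learned: unicast
```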

3. Storage: Distributed storage

Distributed storage: The history of cloud computing can be read as the history of software-defined everything (SDX). From early x86 server virtualization (essentially software-defined computing) to software-defined networking (SDN), software-defined storage (SDS), and on to the unified software-defined data center (SDDC), each technology has drawn wide industry attention. Among these SDX technologies, software-defined storage is arguably the most practical after software-defined computing.

What makes a technology practical? Two criteria matter most: whether it represents significant progress over its predecessors (architectural innovation, added functions, or improved performance), and whether it is cost-effective. SDS clearly meets both. First, compared with earlier storage technologies, SDS is a brand-new distributed architecture, hence the term "distributed storage". This "decentralized" idea originated at large Internet companies such as Google. It reduces the risk of storage failures in IT systems, greatly improves storage IOPS, and even makes online capacity expansion a reality: three long-standing headaches for storage administrators.

Second, SDS decouples software from hardware, much as x86 server virtualization does. The difference is that the loosely coupled architecture of server virtualization brings features such as live migration and planned zero downtime, while the loosely coupled architecture of SDS lets dedicated storage hardware give way to general-purpose x86 servers, greatly reducing storage investment costs.
Currently, the most popular open source distributed storage project in the industry is Ceph. As a unified storage system that supports objects, blocks, and files at the same time, Ceph is also the most popular open source storage solution in the current OpenStack ecosystem.
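Ceph's actual placement algorithm is CRUSH, which is considerably more elaborate than what follows. The sketch below uses a simpler consistent-hash ring only to illustrate the "decentralized" principle described above: any client can compute where an object lives without consulting a central metadata server. All names (osd-1, etc.) are illustrative.

```python
import hashlib
from bisect import bisect_right

def _h(key: str) -> int:
    """Stable integer hash of a string key."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Simplified consistent-hash ring (an illustration, not CRUSH)."""

    def __init__(self, nodes, vnodes=100):
        # Each node gets many virtual points to even out load on the ring.
        self.ring = sorted((_h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, obj: str) -> str:
        """Return the first node clockwise from the object's hash."""
        i = bisect_right(self.keys, _h(obj)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["osd-1", "osd-2", "osd-3"])
# Placement is deterministic: every client computes the same answer,
# so no central lookup table (a classic single point of failure) is needed.
assert ring.locate("volume-42") == ring.locate("volume-42")
```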

4. Cloud operating system

In the early days of cloud computing, competition in the cloud operating system market never stopped. Alongside commercial cloud management platforms from VMware, Citrix, and Microsoft, open source projects also flourished: CloudStack, OpenStack, Eucalyptus, OpenNebula, and others. Judging by current developments, OpenStack has essentially become the default choice for a cloud operating system. Modeled on Amazon AWS, OpenStack has gone through 16 releases from the first (Austin, October 2010) to the latest (Pike), growing from the initial two modules, Nova and Swift, to hundreds of modules of all sizes; it is considered the second-largest open source community project after Linux. In the past two years, OpenStack-based cloud solutions have sprung up across industries, yet few people can give OpenStack an accurate definition. Some call it a cloud management platform, some a cloud computing platform, and some compare it with VMware virtualization. As the saying goes, "there are a thousand Hamlets in a thousand people's eyes": what you call it only shows which aspect you focus on, not what it is. I tend to regard it as the most widely adopted de facto standard architecture for cloud computing today. Its growing maturity has standardized the once-chaotic technical concepts of cloud computing and brought the cloud solutions of different vendors into harmony. Do not underestimate this: the only organization that can get countries speaking different languages (IT vendors with their various product interfaces) to set aside disputes and communicate and cooperate (through unified adaptation) is the United Nations. In that sense, OpenStack is the "United Nations" of the cloud computing world.
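As a concrete example of how OpenStack standardizes access across vendors, any OpenStack cloud can be described in a clouds.yaml file consumed by the official SDK and CLI; every value below is a placeholder, not a real endpoint:

```yaml
# Hypothetical clouds.yaml fragment; all endpoints and credentials
# are placeholders. The same file format works against any
# vendor's OpenStack distribution.
clouds:
  mycloud:
    auth:
      auth_url: http://controller.example.com:5000/v3
      username: demo
      password: CHANGE_ME
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With this in place, a command such as `openstack --os-cloud mycloud server list` works identically regardless of which vendor built the underlying cloud, which is exactly the "unified adaptation" the paragraph above describes.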

5. Automated operation and maintenance tools

Strictly speaking, automated operation and maintenance tools are not unique to cloud computing; traditional IT architectures also need and implement automation. But the combination of DevOps and cloud computing in recent years has given automation a new meaning, and many excellent open source tools have emerged, of which Ansible is currently the most popular. Ansible is a model-driven configuration management tool that uses SSH throughout, improving security and simplifying management. Beyond configuration management, it can handle application deployment (including multi-tier deployment), workflow orchestration, and cloud provisioning automation. Ansible follows five design principles: ease of use (no scripts or custom code required), ease of learning (for administrators and developers alike), comprehensive automation (nearly everything in the environment can be automated), efficiency (it is agentless, pushing changes over OpenSSH rather than running resident agents that consume memory and CPU on managed nodes), and security (no agents, extra ports, or root-level daemons are required). Like many open source projects, Ansible also has a paid product with a web user interface, Ansible Tower.
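A minimal, hypothetical playbook gives the flavor of Ansible's model-driven approach; the host group, package, and service names below are illustrative:

```yaml
# Hypothetical playbook: ensure time synchronization on a group of
# compute nodes. "compute" is an inventory group name chosen for
# illustration; apt and service are standard Ansible modules.
- name: Ensure time sync on compute nodes
  hosts: compute
  become: true
  tasks:
    - name: Install chrony
      apt:
        name: chrony
        state: present

    - name: Enable and start the chrony service
      service:
        name: chrony
        state: started
        enabled: true
```

Run with `ansible-playbook -i inventory site.yml`, this describes the desired end state rather than a sequence of shell commands, and Ansible applies it idempotently over SSH to every host in the group.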

Question 3: What problems might open source bring?

1. Open source does not mean free

This is an eternal topic in the open source field, and a reality that any institution, organization, or company wanting to use open source software well must accept. Open source changes the development and application model of traditional commercial software. Traditional commercial software is sold under license, and the problems users meet in deployment, use, and later maintenance are solved by the vendor as clearly priced services. Open source software gives users a free community edition, but that edition is usually a "semi-finished product" covering only the most basic, most common functional requirements; customized functions require further development on top of the community version, and professional team support for initial deployment and mid- to late-stage operation and maintenance is lacking. If software is a car, most telecom operators at home and abroad are accustomed to being careful drivers: they buy a finished car from an equipment vendor and then work hard to learn to drive it well. Open source software, however, is a semi-finished car that must first be tuned and modified before it runs smoothly, which takes a mechanic's skills. Operators who have long been "drivers" therefore need to consider how to acquire the "mechanic's" skills, and becoming a qualified mechanic inevitably means paying tuition.

2. Open source hinders technological development

This view may seem far from public perception, but the problem does exist in today's open source world. The original intent of open source is to gather developers worldwide to build excellent, powerful, fast-iterating software to replace closed commercial software from traditional IT giants, eliminating technical barriers and preventing vendors from controlling the evolution of technology. The Linux project, for example, propelled the rise of x86 servers and pushed closed, expensive minicomputers off the stage of history; the Android project accelerated the boom of the mobile Internet and made feature phones disappear from view. Such successful projects have indeed advanced technology. In recent years, however, as IT giants have taken note of open source and poured money into community development, once relatively pure projects have undergone subtle changes. Some projects dominated by a few vendors are no longer open and friendly; to maximize their own interests, those vendors have acted against the original spirit of open source, and some excellent projects have declined as a result. When this happens in open source it is usually more harmful than in commercial software, because commercial software of a given type is generally supplied by several competing vendors that coexist for a long time, whereas open source software of a given type competes fully early on but, by the law of the jungle, eventually ends up dominated by a single winner. 
If the winning project is controlled by a few commercial companies, those companies can steer the technological direction of an entire industry or field, which is little different from a commercial product that completely monopolizes a market.

3. Open source is not safe

Earlier, the author discussed "security" as an advantage of open source software; here it appears as a potential problem. That may seem paradoxical, but it is true. Those who argue that open source is more secure generally believe that hundreds of community members review the code, find vulnerabilities or defects, and fix them quickly before attackers can exploit them. They often overlook one thing: the vulnerabilities in the code are visible to everyone, attackers included, and the time from a vulnerability's discovery to its complete repair is usually longer than the time it takes for malicious code to appear, which gives attackers their opening. From the OpenSSL "Heartbleed" vulnerability a few years ago, to the MongoDB "ransom incident", to the recent DRDoS (reflection/amplification) attacks enabled by a Memcached vulnerability, the facts show that open source software is not inherently secure.

Conclusion

In summary, this article focuses on three major topics: why to choose open source technology when building a cloud resource pool, which open source projects should be selected, and what problems open source may bring. The open source projects mentioned in this article only involve a small part of cloud computing related open source projects. Cloud computing is still an emerging technology field, and related open source projects are also maturing and developing. Therefore, the research on related open source technologies in cloud computing resource pools is a long-term and ongoing topic. I hope that the views described in this article can help and inspire readers.

About the author: Zhao Hui, engineer of the operation and maintenance center and business platform technical support engineer of Jiangsu Branch of China Telecom Co., Ltd., individual member of China Institute of Communications, mainly engaged in cloud computing platform operation and maintenance, SDN, NFV, OpenStack, artificial intelligence and other new technology applications, founder of WeChat public account "Zhao Hui Talks about Cloud".
