Should the server operating system be Debian/Ubuntu or CentOS?





In fact, I think most of what he says is not wrong. If you need to set up a server, the RH family really is the preferred choice.



But...





The main reason for choosing the RH family


In fact, if you read the reply from beginning to end, the main argument comes down to one point:



Which distribution can keep your hardware running stably for 7-10 years while continuing to ship patches?



The conclusion, of course, is RH! That is RH's main selling point.


Do we really need hardware support for up to 7 years?


Ahem. In the first half of this year, our company's ops team ran into an awkward situation.



Their process goes: purchase the machines, go to the data center and install the systems, configure the network, enroll the machines in the ops management system, add monitoring, then hand them over. Excluding procurement, the whole pipeline takes roughly a week.



We had about 10 racks in the data center, and normally an expansion meant adding one rack at a time.



As a result, for a stretch in the first half of this year, we added a rack a week, and it went on for two months. An ops colleague would sweat through installing a rack and plan to relax over the weekend, only to get a notice from the boss: capacity still isn't enough for the customer, keep going next week.



Yes, we now have more than 20 racks. I don't know how many racks the data center has in total, but at this rate we will soon have taken over the whole room. Our bandwidth is no longer capped; we settle up according to the contract at the end of each month.



We have a handful of servers over three years old, not many. Their performance is no longer adequate: the CPUs are too slow, there are no SSDs, and the disks have been through too many read/write cycles. Most of these machines will be written off and sold, or repurposed as test servers and moved to the test room. Most of the machines now in the data center are less than two years old, and at least half have been in service for less than six months (...). As things stand, keeping old servers in the data center is simply not cost-effective: the rack density of the room caps what we can fit in a single data center, which amounts to rent in disguise.



With that in mind, the life cycle of our production servers is about three years, at most. Often they don't even last that long.



Browser-game companies are even more extreme than us. Their servers typically have a life cycle of half a year: in six months the game has either made its money or it is dead. So they don't even buy new hardware; they use virtual machines directly.



Naturally, whether the system inside the VM is supported for one year or ten makes no difference to them.


Why don't we like systems that are more than three years old?


RH provides 10 years of maintenance, but the latest software is not in RH's official repositories. Of course you can get it on a freshly installed RH, but on a system that was installed three years ago? Definitely not.



So what do you do? Compile it yourself.



This is probably where the domestic habit of "compiling everything on RH" comes from.



However, let me quote a passage from the original article:


If I told you today that I want to run an HTTP server, that I don't need Apache or nginx, and that for performance I need to rewrite my own based on xxx, I believe most people would ask the same question: "Do you think you can write it better than nginx?" Now ask yourself the same question.

 


Similarly, can the software you compile yourself keep up with a newer system in how quickly patches land? And we would have to dedicate someone to the job of patch maintenance.



So what is the right answer?



Install a fresh system and migrate the data over.



Our "Data" is loaded on disk. The system does not need to update the data, as long as the system disk wipe re-deployment again, and then configure the deploy system is OK. At the beginning of development, the separation of "Environment", "program" and "Data" is a basic principle. And even with "data", losing all the "data" on a machine doesn't pose a problem. This should be the foundation of the operational dimension. There are only a few servers that cannot be directly replaced or shut down. We do special management of these machines.


Why does our company use Ubuntu?


Very simple: development was expected to happen on Linux from the start. Developing and testing directly on Linux matters a lot for a startup that needs to move fast, and running the same version on the servers as on the developers' machines is the most convenient arrangement. You may want to argue that you can develop on a Mac and deploy on Linux, or develop on one distribution and run the servers on another.



I have to say that, at least for Golang and Python, that only holds if you avoid CGO and Python C extensions.
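A quick illustration of why (not from the original article): a pure-Go build with CGO disabled does not link against the build host's C library, so the same binary runs across distributions, while a CGO build is tied to the glibc it was built against.

# pure Go: no CGO, no dependency on the build host's libc
$ CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server .
# with CGO (the default when a C toolchain is present), the binary
# links against the glibc of the machine that built it
$ CGO_ENABLED=1 go build -o server .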



Never mind the differences between Mac and Linux. This year, when we moved up to 14.04, we found that builds from 12.04 and 14.04 were not interchangeable. So now builds for 12.04 can be compiled and tested locally by the programmer, but anything for 14.04 has to be done in the test environment. Which leaves a bunch of programmers running tcpdump remotely and copying the captures back to Wireshark on their local machines...



It is painful just to watch.



There is, of course, a problem, namely the "we don't like systems more than three years old" rule. So next year our systems will probably be reinstalled in rotation onto 14.04...



Also a pain.


Are Debian's patches unreliable?


That depends on who you compare against. Here are some statistics from the Heartbleed incident. Events like this are not common, but I think a vulnerability this big is representative enough.


CVE-2014-0160 – the non-technical side of an OpenSSL security vulnerability


Let me summarize his key points:


RedHat released its fix faster than the official OpenSSL project. The RedHat-family distributions other than RedHat itself, such as Fedora, CentOS, and Scientific Linux, were slower, all of them 16 hours behind RedHat. The Debian family, Debian and Ubuntu, were at least 12 hours behind RedHat. Scientific Linux was the slowest fix on the list. Measured against the "golden six hours", Fedora, CentOS, OpenSUSE, Gentoo and Scientific Linux all failed.

 


Compared with RH, Debian's fix speed is indeed a fail. But compared with CentOS... what can I say? Six hours versus ten hours; a bit of the soldier who fled fifty paces mocking the one who fled a hundred, no?



So how many paces ahead are you, really?



Also, I'm not sure what the original author meant by upgrades pulling in a huge pile of packages. I queried the SSL package on Debian stable:


$ dpkg -s libssl1.0.0
Version: 1.0.1e-2+deb7u12
Depends: libc6 (>= 2.7), zlib1g (>= 1:1.1.4), debconf (>= 0.5) | debconf-2.0


But at the same time:


$ dpkg -l | grep libc6
ii  libc6:i386                           2.13-38+deb7u3                i386         Embedded GNU C Library: Shared libraries


The libc6 dependency was satisfied long ago and never stops being satisfied. To this day, upgrading OpenSSL on Debian has never forced you to upgrade libc6 along with it, and there is no dependency on the kernel at all.
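If you want to verify this on your own box (a sketch; apt's -s flag only simulates, nothing is changed), you can ask apt to dry-run the upgrade and confirm that libc6 is not dragged along:

# simulate upgrading only the OpenSSL packages; libc6 should not
# appear in the list of packages to be upgraded
$ apt-get install -s --only-upgrade libssl1.0.0 openssl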


Correcting a small misunderstanding in the original article

Debian is a distribution maintained and contributed to by the community. Package selection and packaging are all organized and distributed by the community. Debian has no real concept of a release. Debian has several repositories: stable, testing, unstable, and experimental. The way Debian organizes the system is that a piece of software first enters experimental and sits there for a while; bugs get fixed, and once there are none, after a period it moves into unstable, and through this cycle it eventually moves into stable. So under this scheme there is no concept of a stable version in the Debian system: today you run kernel 3.2.1-87 and tomorrow you will be updated to kernel 3.3.2-5.

 


Debian is maintained by the community, that much is right. But packaging is not "organized by the community". In Debian, unless there is a specific reason (such as the DFSG) preventing a piece of software from being packaged, anyone can package it as long as it has a maintainer. Even packages with very few users get in (many have only a dozen or so users); that is a large part of why Debian has so many packages.



A Debian package is managed like this: it first enters unstable (yes, except in a few special cases it generally does not go through experimental). After a week, if it looks fine, it enters testing. "Fine" means that neither the package nor its dependencies have an RC bug, that is, a release-critical, fatal bug.



So a lot of things exist in unstable but not in testing, because some basic dependency in unstable has an unfixed RC bug. And testing is actually the slowest to get bug fixes: when something breaks, unstable simply pulls in the new upstream version, and stable gets a fix from the maintainer, while poor testing can only wait out the one-week migration delay...



So when does a package enter stable? It does not enter stable on its own schedule. Instead, every 1.5 to 2 years (1.5 years is the target, but the RC-bug freeze often overruns; historically it works out to about two years) Debian does a release: all new packages are frozen and the RC bugs are fixed. When things feel stable enough, the current testing becomes stable, and new unstable and testing branches are forked off.



So the code in testing today will become the next stable, and at each fork it is testing that is designated as the code of the next release.
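You can watch this pipeline for any package with rmadison (from the devscripts package); the output below is only illustrative, since the real version numbers depend on when you run it:

$ rmadison openssl
# prints one line per suite, roughly:
#  openssl | 1.0.1e-2+deb7uXX | wheezy  | source, amd64, ...
#  openssl | 1.0.1h-3         | jessie  | source, amd64, ...
#  openssl | 1.0.1h-3         | sid     | source, amd64, ...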



So if you look at the BTS statistics, you will find that every 1.5 to 2 years the number of RC bugs drops sharply, and the number of new packages drops sharply as well. It's not that everyone went into hibernation; it's just a new release cycle.



As evidence, here is my packages overview. As you can see, python-snappy (the only package I still maintain; python-formalchemy has an RFA filed) is at 0.4 in stable, while the two newer suites carry 0.5. Without a compelling reason, I cannot push the version in stable up to 0.5.


So how does Debian fix bugs?


That depends on the maintainer. The general principle is that, unless the old version can no longer be maintained, fixes are applied as patches to the existing version. This is another point the original author got wrong: on Debian stable, barring special circumstances, the kernel will not be upgraded to 3.3. As evidence, look at the official kernel version in the current stable. linux-image-amd64 is currently 3.2+46, and its dependency is linux-image-3.2.0-4-amd64 (3.2.60-1+deb7u3). In other words, the actual version is 3.2.60-1+deb7u3, while the corresponding 3.2 longterm series on kernel.org is at .62.
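To check this yourself on a stable machine (standard apt and uname commands; the versions you see depend on your mirror):

# which concrete kernel package the metapackage currently pulls in
$ apt-cache depends linux-image-amd64
# the packaged version of that kernel
$ apt-cache policy linux-image-3.2.0-4-amd64
# the version actually running
$ uname -r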



How to put it... given this misunderstanding, I suspect the original author either never used Debian seriously, or only ever used testing.



But if the gap between the old and new versions is too large and upstream refuses to patch the old one, then the maintainers have to evaluate whether the package can be bumped. For example, for a while MySQL in stable jumped from 5.0.x straight to 5.5.x (I heard this from a DD at our company). As for compatibility with the old version, I don't know what they were thinking; probably that the MySQL server itself isn't something other packages depend on.



In a case like this, RH generally can't do much either, unless they assign their own programmers to backport patches to the old version. And if they do, Oracle usually merges the fix back upstream, and Debian then benefits as well.


The misunderstanding about Ubuntu

Ubuntu 8.04 LTS: April 24, 2008. Ubuntu 8.04.4 LTS: January 28, 2010. A year and nine months; what kind of LTS is that??? Ubuntu 10.04 LTS: April 29, 2010. Ubuntu 10.04.4 LTS: February 16, 2012. Again, what kind of LTS is that? Saying the end-of-life date is three years out is a joke: as soon as the next release comes out, the previous release receives pitifully few updates.

 


The author has probably used RH so much that he doesn't understand what Ubuntu "maintenance" actually means.



For Debian and Debian-based systems, the primary distribution channel is the network; the CD just gives you a way to get installed. Debian makes this even more obvious: it has a type of image called netinst that contains only the base packages for installation. Without a network connection, all you can install is a bare system to be upgraded over the network later: no GUI, no OpenSSH, nothing.



So from one LTS point release to the next, if not enough has changed, it isn't worth pressing a new disc.



Who is going to keep cutting you a new CD every year for five years, especially when only a few packages inside have changed?



So how is an LTS actually maintained? Look at the USNs (Ubuntu Security Notices).



In June and July of 2014 alone, there were 26 USNs covering Ubuntu lucid.



So after installing Ubuntu, the first thing to do is pull the security patches from the repository!
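In practice that first step is just the usual apt routine (unattended-upgrades is an optional extra I'm adding here, not something the article prescribes):

# refresh package lists and apply pending updates, including -security
$ sudo apt-get update
$ sudo apt-get upgrade
# optionally keep pulling security fixes automatically from now on
$ sudo apt-get install unattended-upgrades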


The essential problem of "maintenance"


After all this, the fundamental question remains: what exactly is "maintenance"?



Mainly, it is bug fixes, and especially one special kind of bug fix: security patches.



Once a program has taken shape, it inevitably forms an "interface": the API for calling it, its parameters and their order, environment variables, and so on. With interfaces comes the question of interface compatibility. If you ignore compatibility and always run the latest version...



...then bang, one day your program stops running, because the author changed the interface.



Don't think I'm exaggerating; I have hit this many times in practice. pymongo has changed its interface repeatedly. A program written against SQLAlchemy 0.6 made it to 0.7 after repeated changes on my part, then died before reaching 0.8. As for Docker, its version number has only just crossed 1.0; before 1.0 we built on top of it twice, got burned badly, and the results were appalling.



So we use a scheme called "releases": every so often, the stable code is frozen into a release, such as linux-kernel-3.2.0. New features then land gradually in 3.3, and users of 3.2 are left undisturbed.



That would be perfect, except for one problem: a bug isn't necessarily only in the latest version; it may have existed for 14 years, which means it spans many releases. And such bugs cannot simply be left unfixed, security vulnerabilities being the obvious example.



That's where the pain starts.



How many releases back will upstream actually fix? And if upstream won't fix it, what happens to the vulnerability in your release? Most holes can be fixed with just a few lines, but some even require changing the architecture.



Without development muscle of your own, there is no guarantee the fix will happen.



For code, three things matter most: feature development, interface stability, and security.



If we could give up any one of the three, the world would be perfect:


    1. Drop interface stability: always run the latest version, and the bugs are guaranteed to have been fixed there.
    2. Drop feature development: the software never gains anything new, it only gets bug fixes.
    3. Drop security: just cut releases and never look back at what you shipped.


Unfortunately, you generally need all three. A few very classic programs have settled into case 2, TeX for example. The production systems of most Internet companies lean toward case 1. But the bulk of the packages in a distribution need all three satisfied.


Differences in how RH and Debian develop


The difference is actually quite large. RH's development is real development. A Debian "developer", in practice, mostly develops Debian packaging scripts and maintains versions, patches, and repositories. RH developers do all of that and more; just look at their kernel patch contributions.



This is also the difference in orientation between a community and a company. The community simply cannot take on that kind of work. Never mind comparing how many people RH employs with how many people are in the Debian community; let me just give the number of DDs in China. I looked it up in the Debian developer database: eight in total. I know five of them personally, more than half. emfox came to one meetup, and lidaobing and zigo were there too. We joked that the meetup had gathered nearly half of China's DDs... and the whole venue held no more than 20 people...



Only a company at RH's level has the manpower to work on things like the kernel and drivers. Even if Debian wanted to, it couldn't move that needle. In a sense, the prosperity of every Linux distribution has benefited from RH's healthy revenue.



So if you really want to support open source, even if you don't buy everything, at least buy a copy of RHEL. Don't keep going on about how CentOS is free, free, and then have the nerve to talk about supporting open source.


When RH is the answer, and when it isn't necessarily


Although I've listed a pile of problems with the original article, I have to say there is nothing wrong with its conclusion.



Unless you know what you are doing, RHEL should be your first choice.



That's stating the obvious. Paying money so someone else solves your problems, versus solving them yourself: which sounds more professional?



If your job is just to keep a system patched for security over the next three to five years and you don't want to touch anything else, using RHEL is the right call.



If doing that would put you out of a job, well, switch to Gentoo then.



As for what counts as "knowing what you are doing", there is no universal standard. Choosing a distribution is often a case of "only the one drinking the water knows whether it is hot or cold." For example, we chose Ubuntu to solve the environment-consistency problem, but that introduced the operational burden of rolling reinstalls. Getting the tradeoff between the release environment and the development/test environment wrong can slow development dramatically, and development is not something we can buy with money (advertisement: we are always hiring Golang developers), whereas operations capacity can be bought at a reasonable price. So for now we grit our teeth and keep rolling. Of course, a few years from now we may find out we were wrong, but if so, it will be for reasons we cannot see today. Then again, odd companies like ours, or the browser-game shops, are not that common. So in most cases RHEL is just fine (though the original author put it a bit too absolutely).


What Debian is good for


Debian takes the DFSG very seriously and has a strong open-source-fundamentalist flavor.



Traditional open-source thinking holds that if distribution is controlled solely by one commercial company, that company will hold our lifeline and eventually do evil with it.



I have to say, Westerners' understanding of evil, monopoly, and authority is no shallower than ours.



So Debian has no fewer derivative distributions than RH (see the list of Linux distributions). The biggest benefit is the DFSG rule: whatever gets into Debian is contributed not just to Debian but, in effect, to the whole commons. As a result, using Debian is very safe and carries no legal risk, and the DDs are highly professional in exactly this area.



Moreover, because of the emphasis on freedom, nothing non-core in Debian is hard-wired. In short, apart from the core packages and the packaging parameters, most of the rest of the structure can be swapped out, and the configuration is yours to change.



Want to replace GNOME with LXDE? Sure. Use KDE applications alongside it? Also fine. Which input method runs on top? Put it together yourself.
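For example (ordinary Debian packages; which ones you actually pick is entirely up to you):

# install LXDE alongside (or instead of) GNOME
$ sudo apt-get install lxde
# pull in individual KDE applications without the full desktop
$ sudo apt-get install kate okular
# choose which session manager starts by default
$ sudo update-alternatives --config x-session-manager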



That is Debian's power and flexibility, and also why Debian has a high barrier to entry. Ubuntu, by contrast, emphasizes working "out of the box", so its bundled configuration is the most complete; but if you want LXDE there, the recommendation is to use Lubuntu instead.


CentOS is not RH


Most of the "RH" above refers to RHEL. As for CentOS, it is also a community system; it just leans on a thicker thigh, namely RH. That does not mean CentOS itself is all that reliable.



For example, the Wikipedia page on CentOS mentions this:


In July 2009, it was reported [by whom?] that CentOS's founder, Lance Davis, had disappeared in 2008. Davis had ceased contribution to the project, but continued to hold the registration for the CentOS domain and PayPal account. In August 2009, the CentOS team reportedly made contact with Davis and obtained the centos.info and centos.org domains. [12]

 


The guy simply vanished for nearly a year while sitting on the domain names and the PayPal account. As I recall, the other CentOS developers eventually had to go public with a statement along the lines of "come out, or we'll declare you missing."



This is also what directly pushed my company's base-system choice from CentOS to Scientific Linux (yes, the one with the slowest fix in the list above).



Secondly, CentOS signs no contract with you, so when something goes wrong, the ops team has to eat the consequences themselves.



If CentOS blows up, can you explain that to your boss? That depends on how clueless your boss is. In any case, whenever someone tells me they are running something on CentOS, my first reaction is that they won't get around RHEL in the end. Yes, that is a call the person made for themselves, and they own it.


If you use RH, at least use RHEL and buy a subscription


We don't use RHEL ourselves, so we haven't bought an RH subscription, but RH's subscription offering is well worth studying.




