From: http://bbs.chinaunix.net/viewthread.php?tid=609471
To understand the differences between a tarball and an RPM, let's start with how software comes into being.
To put it simply, today's computers work by manipulating 0s and 1s; the catch is that 0s and 1s are all they can handle.
Therefore, a program that runs on a computer must ultimately exist in this binary form of 0s and 1s, which we call an executable.
In addition, different CPUs expect different binary formats, and the CPU family defines what we call the hardware platform.
For personal computers, the most common hardware platform is the CPU designed by Intel (or compatible with it), often referred to as i386 or x86.
However, writing programs directly in binary is far too difficult for programmers! (Don't doubt it; in the early days that was really how it worked ~~)
Smart people therefore devised easier, human-readable notations for writing programs, which is what we call a "programming language".
We call code written in a programming language source code. There are many programming languages; in the Unix/Linux world, C is the most traditional and commonly used.
On this basis, we need to know the following facts:
1) We have many programming languages available.
2) There are also many hardware platforms on which the program may need to run.
3) Source code written in a programming language does not turn into binary by itself.
This is where the compiler comes into play. In other words:
--- The source code you write must be compiled separately for each hardware platform in order to produce an executable.
Different languages and different hardware platforms require different compilers.
If you write your source code in C, the most common compiler is a C compiler (cc for short).
However, cc comes in many versions, differing in features, performance, and supported platforms.
On Linux, the most common one is the GNU C Compiler, or GCC for short.
Once your C code is written and GCC is in place, you can run GCC to compile the code into an executable.
The compiled executable matches the target machine, so people can then run the program on that machine.
I will not introduce how to write C code or how to run GCC for the time being.
In fact, the longer the code grows, the harder it becomes to design and maintain, and the more GCC options come into play.
I am not going to introduce the concept of libraries here; in any case, many specialized books cover it.
Still, I strongly recommend that you understand libraries, especially the difference between static and dynamic linking.
This is because they determine a program's runtime environment and portability, that is, its software dependencies.
Well, let's leave it at that for now. If I get the chance later, I'll come back and introduce it properly.
So far, I believe everyone knows how software comes into being!
However, please answer the following questions:
1) Can you write the source code?
2) Can you run GCC with all the right parameters?
(Ah... Frankly speaking, I don't quite understand it !)
And, even if you understand it, then:
3) Are you willing to run hundreds of GCC commands every time you install a piece of software?
(Oh... I don't want !)
What about ordinary computer users? How would they feel?
Does using software have to be this hard?
Here, remember this sentence first:
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
As mentioned above, moving from writing binary to writing source code was a great improvement, to say nothing of the old punch-card age!
However, this progress is not enough... people are not satisfied ~~~
Therefore, the make program appears.
In short, make reads a prepared makefile and automatically carries out all the compilation work for us.
As for the exact makefile format, I'll skip it for now and leave it for you to study. (I only want to talk about concepts here.)
What matters is understanding the progress the makefile brought, because the RPM spec mentioned later grew out of the same idea.
From then on, a few simple make commands replace the hundreds of GCC commands of the past!
What a wonderful thing!
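For a taste, a minimal makefile for a hypothetical two-file project might look like this (note that each command line under a rule must be indented with a tab):

```makefile
# Hypothetical project: program "prog" built from two source files.
CC = gcc
CFLAGS = -O2 -Wall

prog: main.o util.o
	$(CC) $(CFLAGS) -o prog main.o util.o

main.o: main.c
	$(CC) $(CFLAGS) -c main.c

util.o: util.c
	$(CC) $(CFLAGS) -c util.c

clean:
	rm -f prog *.o
```

Typing make then runs only the commands needed to bring prog up to date; make clean tidies everything away.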
Okay, it must be admitted that the makefile kept people satisfied for a while.
It's good, but not good enough!
What else is not enough?
With the popularization of computers, software spread far more widely.
At the same time, execution environments came to differ greatly from one another.
People slowly discovered that a single makefile could not satisfy every environment!
What should I do?
Come back to the old saying:
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
As a result, smart people began writing environment-probing tools, mostly in scripting languages (shell, Perl, etc.).
The user runs such a tool first, and it automatically adjusts the makefile according to what it finds.
At the same time, authors let users state special requirements through various options.
Yeah... perfect ?!
Nooooooooooooooot yet !!
What happened?
Sorry, dear friend: if I didn't tell you, how would you know it could be played this way?
What's more, what about ordinary users who just want to run applications?! Even after all this explanation, it is still confusing for them ~~~
Don't worry!
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
As a result, authors write all the steps, environment requirements, and operations into documents for users to read at their leisure.
This is exactly what README and INSTALL files are for!
Oh... does it suddenly dawn on you? It turns out this is how people usually install software:
1) less readme install
2)./configure
3) make
4) make install
As long as you understood everything above, you can see why each of these steps exists... ^_^
This is what people call the "source code installation" method, or installing from a tarball.
(Strictly speaking, a tarball involves tar, gzip, gunzip, and so on; I won't go into that here for now.)
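In case the name is mysterious: a tarball is just a tar archive, usually gzip-compressed. A toy round trip, with all names invented:

```shell
# Pack a hypothetical source tree: tar bundles the files,
# and the z flag gzip-compresses the archive.
mkdir -p myprog-1.0
echo 'int main(void) { return 0; }' > myprog-1.0/main.c
tar czf myprog-1.0.tar.gz myprog-1.0

# On the receiving end: unpack, then the familiar ritual would follow.
rm -rf myprog-1.0
tar xzf myprog-1.0.tar.gz
ls myprog-1.0
```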
However, is the problem solved?
Do you think the world has been peaceful since then?
If you are satisfied, let me ask you a few more questions:
1) Do you know which software is installed on the system?
2) Who provided it? What versions are they?
3) Do you know where the software you just installed was placed?
4) Do you know which files were modified by the installation?
5) What other software does this software depend on?
6) What other software depends on it?
7) Can you safely remove or upgrade it?
8) Do you know what your colleagues have installed and changed?
9) Do you know the software status of each machine?
10) If you hand your job over to someone else, can you explain all this clearly?
11) What if you are the one taking over from someone else?
Wait ....
These questions come up in software management all the time. But how many of them can you actually answer?!
Don't tell me you can just ignore them. On your own system, sure, do whatever you like.
But if you draw a salary every month and cannot answer these basic questions, are you really doing your job? Do you still have the nerve to ask for a raise?
Well ????
Are you sweating? Friend?
Oh... don't feel guilty; it's not just you. Many people run into the same problem...
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
Then, if the question were put to you, how would you solve the problems above?
1) Remember with your brain?
2) write with a pen?
3) Ask the secretary?
4) ignore it?
Oh... friends, don't forget that you are a smart IT professional ~~~
5) write an electronic document?
Is that enough? Of course not enough!
6) Use a database?
Yeeeeeaaaaahhhhhhhhhhhh! Bingo! You won the prize!
You have been in IT for so long; didn't it occur to you that anything you may want to query later should be handed to a database?!
Yes! We can use a database to track and manage our software information, right?
This is indeed the case!
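The principle is easy to sketch. Here is a toy "package database", just a text file with invented entries, that can already answer several of the questions above (the real RPM database is of course far more sophisticated):

```shell
# One line per installed file: package, version, path (all made up).
cat > pkgdb.txt <<'EOF'
myprog 1.0 /usr/local/bin/myprog
myprog 1.0 /usr/local/share/doc/myprog/README
otherlib 2.3 /usr/local/lib/libother.so
EOF

# Which packages are installed, and in what versions?
awk '{print $1, $2}' pkgdb.txt | sort -u

# Where did the files of myprog go?
awk '$1 == "myprog" {print $3}' pkgdb.txt

# Which package owns this file?
awk '$3 == "/usr/local/lib/libother.so" {print $1}' pkgdb.txt
```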
But can you write such a database yourself? And even if you wrote one, could it really manage our software?
(Honestly, I couldn't!)
So what now? Use one written by someone else, perhaps?
Oh... yes: use the results of other people's work; it is the same truth as not having to write all the source code yourself!
So, who is it?
In fact, you are eager to hear the answer already: the RedHat Package Manager (RPM) is one of the best.
A package here refers to a piece of software we install on the computer.
(Package management and software management are actually different things; here we are talking about packages.)
So what is RPM capable of?
To put it bluntly, at its heart it is just a database.
With it, all the software-management questions above can be answered with ease.
Oh... I'll skip the detailed usage of the rpm commands too; I'm only talking about principles here, okay?
Okay!
So where does the RPM database come from?
Well... this brings us to the RPM spec file.
Do you still remember the makefile mentioned above?
If you remember that a makefile can run GCC for you automatically,
then, simply put, an RPM spec will run the configure and make commands for you automatically!
On top of that, it adds the entries that go into the database.
Of course, a spec file can get very complicated, but you see what I'm getting at, right?
--- As for writing a spec, I'll skip that too....
Ha ~~~ Defeated by me yet? Oh ~~~~~~ take it easy, we are only talking principles here, okay?
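Just to make the idea concrete, a heavily simplified, hypothetical spec skeleton might look like this; the top half is the metadata that feeds the database, and the bottom half automates the familiar build ritual:

```spec
Name:     myprog
Version:  1.0
Release:  1
Summary:  A demo program
License:  GPL
Source0:  myprog-1.0.tar.gz

%description
A hypothetical program used only to illustrate the spec format.

%prep
%setup -q

%build
./configure
make

%install
make install DESTDIR=%{buildroot}

%files
/usr/local/bin/myprog
```

One would then run something like rpmbuild -ba myprog.spec to produce the binary (and source) RPM.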
Once you have a tarball, you can write an RPM spec for it,
then use the rpmbuild command to carry out each instruction the spec specifies.
(It is much like running make yourself, only more automated...)
In the end, a file in binary RPM format is produced.
Then, running the rpm command installs all the files and updates the database according to the spec.
This way, you no longer lose track of things the way you did with tarball installs; the database holds all the information about the software!
Isn't it beautiful ?! Pai_^
Hold on...
Is it good enough now? Darling?
We must admit that human desires are never satisfied!
And no, RPM by itself does not solve every software problem.
Friends who have installed RPMs must have run into the dependency problem, right?
Alas, I really don't want to go into this part; once you have a grasp of libraries, come back and we can discuss it.
I can only say that rpm means well, but good intentions are not always rewarded...
It is like when we answer questions in the forum and the asker is still lost... alas... helpless... >_<
Alas... don't talk about it.
Apart from dependencies, what other shortcomings does RPM have?
Well, most people get binary RPMs, that is, packages already compiled from a fixed spec.
Once downloaded, they cannot be changed.
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
Therefore, the source RPM exists.
To put it bluntly, a source RPM is simply the tarball plus the RPM spec.
After installing the .src.rpm, you can modify the spec, or unpack the tarball to modify the code and the makefile.
After modifying, compile it into a binary RPM again.
This gives people great flexibility without losing the convenience of the RPM database.
If you cannot find a satisfactory binary RPM, look for the source RPM; you will like it!
In addition, even where no source RPM is available, many authors ship a spec inside the tarball itself.
That way, people can build their own binary RPM at any time.
Well, rpm is good!
However, binary RPMs are fragmented: we often want to install one piece of software, yet have to chase down a whole pile of RPMs.
(Here I must remind you of the version issue, because it involves the software environment, that is, the annoying dependency problem mentioned above.
In all fairness, every packaging system runs into dependencies; RPM just checks them strictly.)
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
Yes, and so we arrived at the concept of the package server.
That is, from now on, software no longer has to be hunted down machine by machine.
Thanks to the network, we put all the software on servers for central management.
On the client we only need to install a front-end tool and configure the addresses of one or more servers.
From then on, every machine has an endless supply of RPMs or .src.rpms.
Currently, nearly all modern Linux distributions manage software this way.
Of course, some distributions do not use RPM (Debian and Gentoo, for example)... but the principle is the same.
For example, on a recent RedHat/Fedora release,
running yum update brings all your installed software up to date, and yum install installs a specified package.
If other software is needed as a dependency, it is downloaded and installed automatically, one package at a time.
You can even upgrade the entire system this way!
Oh... yes. Compared with the tarball age of the past, software management today is quite convenient and mature.
I hope this article will help you have a brief understanding of package management.
If you don't want to dig into the principles, you don't have to; just remember the following advice:
1) Use a high-level software management tool such as yum, APT, urpmi, or YaST.
2) Otherwise, use rpm, but pay attention to versions.
3) If that fails, there is still the source RPM.
4) If all else fails, use the tarball.
5) And of course, if you can write the code yourself, none of this is a worry!
Again:
--- The reason why software advances is that people will solve existing problems anytime and anywhere!
I believe the progress of software management will continue. What problems will it face in the future? What will it solve? We need not worry about that now.
With a positive and optimistic attitude, we can welcome whatever comes!
I wish you all the following:
Happy Mid-Autumn Festival!
Netman
In Tainan