Becoming an Image Processing Expert


http://blog.csdn.net/baimafujinji?viewmode=contents

How to learn image processing -- from beginner to expert?

What is digital image processing? Its history, and what it studies.

When image processing is mentioned, what comes to mind? Do you really know what this field is about? Viewed historically, digital image processing has a rather long history; viewed topically, it covers remarkably broad ground.

The history of digital image processing dates back nearly a century. Around 1920, images were first transmitted between London and New York via submarine cable. The earliest application of image processing was to improve the quality of these cable-transmitted pictures: an image was encoded, sent through the submarine cable to its destination, and then reconstructed there by special output equipment. This was a historic advance; the time needed to transmit a single picture dropped from more than a week to about three hours.

In 1950, the Massachusetts Institute of Technology (MIT) in the United States built the first computer with a graphical display: Whirlwind I. Whirlwind I used an oscilloscope-like cathode ray tube (CRT) to display simple graphics. In 1958 the American company CalComp developed a drum plotter, and Gerber developed its numerically controlled machine tools into flatbed plotters. In this period, electronic computers were used mainly for scientific calculation, and graphics devices served only as simple output equipment.

As computer technology progressed, digital image processing developed greatly. In 1962, Ivan Sutherland, then a doctoral student at MIT, successfully developed the Sketchpad program. It was the first interactive drawing system in history, and it marked the beginning of interactive computer graphics; from then on, computers and graphic images became ever more closely linked. For his outstanding contribution to the creation of computer graphics, Sutherland received the Turing Award, the highest honor in computing, in 1988.

In 1964, the Jet Propulsion Laboratory in California used computers to process the large number of lunar photographs sent back by the Ranger 7 spacecraft, correcting the various image distortions introduced by the cameras on board, with remarkable results. In the space technology that followed, digital image processing has played a huge role.

By the late 1960s, digital image processing had formed a fairly complete discipline, which developed rapidly through the 1970s and began to be applied in medical imaging, astronomy, and other fields. In 1972, the American physicist Allan MacLeod Cormack and the British electrical engineer Godfrey Newbold Hounsfield invented computed axial tomography and applied it to diagnosis of the head. The world's first X-ray computed axial tomography machine, developed at EMI, is what we commonly call CT (computed tomography). CT uses reconstruction algorithms to compute "slice" images of an object from the projection data it senses; stacking these slices reconstructs the object's interior. That is, from projections of a cross-section of the human head, the computer reconstructs an image of that section. For the tremendous impetus CT gave to the development of medical diagnostics, Hounsfield and Cormack were awarded the 1979 Nobel Prize in Physiology or Medicine.

Later, in 2003, the Nobel Prize in Physiology or Medicine again went to two scientists who had made outstanding contributions to medical imaging research: the American chemist Paul Lauterbur and the British physicist Peter Mansfield. The two laureates made pioneering achievements in using magnetic resonance imaging (MRI) to visualize different structures; their pioneering work in MRI represents a major breakthrough in medical diagnostics and research, said Sweden's Karolinska Institute. In fact, the success of MRI is likewise inseparable from the development of digital image processing. Even today, problems such as MRI image denoising remain hot research directions in the field.

Speaking of the development of digital imaging, one more crucial achievement must be mentioned: the charge-coupled device (CCD). The CCD was invented in 1969 by Willard Sterling Boyle and George Elwood Smith, scientists at Bell Labs in the United States. A CCD acts like film: it converts an optical image into a digital signal. The digital cameras, camcorders, and scanners in wide use today are all built on the CCD; in other words, the digital images we study are obtained mainly through CCD devices. Boyle and Smith shared the 2009 Nobel Prize in Physics for their great contribution to CCD research and development.

Digital image processing is one of the most ubiquitous technologies in modern life; it can be found almost everywhere, and it could fairly be said to change human life at every moment. Yet for a long time many people have held a distorted view of it, unconsciously equating image processing with Photoshop. Photoshop is undoubtedly the most widely used image processing tool today; Corel's CorelDRAW is similar software.

Although Photoshop is an excellent piece of image processing software, it does not represent the entire theory and methodology of digital image processing; its functions cover only part of the field. Broadly, digital image processing research includes the following topics:

    • 1) Image acquisition and output
    • 2) Image encoding and compression
    • 3) Image enhancement and restoration
    • 4) Frequency-domain transforms of images
    • 5) Image information security
    • 6) Image region segmentation
    • 7) Recognition of targets in images
    • 8) Geometric transformations of images
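To make one of these topics concrete, here is a minimal sketch of topic 3, image enhancement, using histogram equalization in plain NumPy. The function name and the synthetic test image are illustrative only; a constant image (zero contrast) is not handled.

```python
# Illustrative sketch: histogram equalization, a classic
# image-enhancement technique. NumPy only.
import numpy as np

def equalize_hist(img):
    """Spread an 8-bit image's gray levels over the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()                  # cumulative distribution of gray levels
    cdf_min = cdf[cdf > 0][0]            # first nonzero CDF value
    # Map each gray level through the normalized CDF.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast gradient confined to gray levels [100, 150] ...
img = np.tile(np.linspace(100, 150, 64).astype(np.uint8), (64, 1))
out = equalize_hist(img)
print(out.min(), out.max())   # the full dynamic range after equalization
```

After equalization the output spans 0 to 255, which is exactly the kind of contrast stretch Photoshop's "Equalize" command performs.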

But image processing research is not limited to the topics above, so the subject is quite broad. Where is image processing applied today? Here are some examples you may be familiar with (and of course you can cite more):

    • 1) Professional image processing software: Photoshop, CorelDRAW ...
    • 2) Mobile apps: Meitu (美图秀秀) and other photo-retouching apps ...
    • 3) Medical image processing: MRI, color ultrasound image processing ...
    • 4) Manufacturing applications: component inspection, flaw detection ...
    • 5) Cameras and camcorders: improving the quality of night shots ...
    • 6) The film industry: background replacement, special effects ...

What kind of person will learn (or need to learn) image processing?

1) Practitioners in the application areas listed above, who certainly need to master the theory and techniques of image processing; 2) researchers in related fields, and doctoral and graduate students at universities.

What counts as a related major? The answer is also quite broad, for example (but not limited to): computer science, software engineering, electronic engineering, biomedical engineering, automation, control, applied mathematics ...

How to learn image processing well -- some advice from experience

1) For beginners

A solid foundation and a complete, systematic understanding of image processing theory are of great importance for further research and practical application.

I often like to explain this point with an episode from the novel Demi-Gods and Semi-Devils (Tianlong Babu); readers who have watched the screen adaptations of Jin Yong's works will know it well. In the book there is a monk, Jiumozhi, who wants to master peerless martial arts and is an extremely diligent man. His mistake is being too eager for quick success, even using the Taoist "Little Formless Skill" to force his way through the Shaolin stunts. The result looks powerful and lets him "crash-course" his way up in a short time, but it leaves endless hidden damage. Only when Jiumozhi finally loses all his powers does he attain true insight. The story tells us that laying a foundation is vital: to develop rapidly later on, you must first understand the basic principles thoroughly and work to master them, and only then achieve the best results.

Some seemingly advanced algorithms are in fact combinations of many basic ones. For example, the SIFT feature construction process, which has deterred many people, uses very basic material: image pyramids, histograms, and Gaussian filtering. But it clearly involves several such basic techniques at once, and if you lack a systematic understanding of image processing theory, you may find it hard going, because every step looks like a pitfall.
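Those building blocks can be sketched in a few lines. Below is a hedged, NumPy-only illustration of a Gaussian pyramid -- Gaussian blur followed by 2x downsampling, repeated per level -- the same ingredient SIFT's scale space is built from. The kernel size and sigma here are illustrative choices, not SIFT's exact parameters.

```python
# A minimal Gaussian pyramid: separable Gaussian blur, then downsample.
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                       # normalize to sum 1

def blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img, radius, mode='edge').astype(float)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def gaussian_pyramid(img, levels=3):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])  # blur, then keep every 2nd pixel
    return pyr

pyr = gaussian_pyramid(np.random.rand(64, 64), levels=3)
print([p.shape for p in pyr])   # [(64, 64), (32, 32), (16, 16)]
```

Each level halves the resolution; SIFT then searches this scale space (via differences of Gaussians) for keypoints.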

About courses --

At this stage the mathematical requirements are actually not high; you can even approach much of image processing from an intuitive, visual angle (frequency-domain processing excepted). As for concrete study advice: if circumstances allow (for example, you are still at university), it is best to take a course in image processing and study it systematically and completely. That is clearly the ideal way to get started, and it helps greatly in building complete, systematic understanding. If you cannot take such a course at school, you can try some open courses online. There is as yet no high-quality Chinese MOOC I can recommend, but there are many courses in English, such as the digital image processing open course by Professor Rich Radke of Rensselaer Polytechnic Institute -- https://www.youtube.com/channel/ucaijlkxxamoodqtlx486qja?spfreload=10

About textbooks --

Obviously, lectures alone are not enough; it is best if you can also read a book. In fact you do not need many references: one book, read carefully from beginning to end, is excellent. If you have no opportunity to take a course, a complete self-study read-through is all the more necessary. At this stage, do not rely on scattered blogs and forum posts, because what you particularly need now is a systematic, complete knowledge framework for the subject. Piecing things together haphazardly from here and there only digs a pit for yourself: your knowledge system becomes like a bubble, perhaps impressive-looking, but fragile and easily burst.

Many schools now use Gonzalez's Digital Image Processing as the textbook. It is a very, very classic book, but I must remind readers of two things:

1) It is a book written specifically for students of electronic engineering. It presupposes two courses, Signals and Systems and Digital Signal Processing, as foundations. Without that background, you can read the book but will either skim the surface or fail to understand it.

Consider an illustration in the book involving H and h. For EE students this poses no problem, but without the two foundation courses above it is hard to grasp the essence. The two symbols differ only in case: in some places the book writes H, in others h, and the distinction is quite deliberate. The author never explicitly explains the difference, because he assumes by default that you already know they are different: one denotes a frequency-domain signal, the other the same signal expressed in the time (or spatial) domain. That is also why the operation between them is sometimes a convolution and sometimes a multiplication (which is, of course, the convolution theorem). So I do not suggest that students without this background read the book entirely on their own.
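The H-versus-h distinction boils down to the convolution theorem: convolution in one domain is pointwise multiplication in the other. A quick NumPy check with circular convolution makes this tangible (the signal length 64 and the random seed are arbitrary choices):

```python
# Verify the convolution theorem numerically with the FFT.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)       # signal (spatial/time domain)
h = rng.standard_normal(64)       # filter impulse response: lowercase h

# Circular convolution computed directly in the spatial domain.
direct = np.array([sum(f[k] * h[(n - k) % 64] for k in range(64))
                   for n in range(64)])

# Same result via the frequency domain: F * H, where H = FFT(h) (uppercase).
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real

print(np.allclose(direct, via_fft))   # True
```

The two results agree to machine precision, which is exactly why the book can switch between convolving with h and multiplying by H.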

2) The first edition of Professor Gonzalez's Digital Image Processing was published in 1977, almost 40 years ago. The second edition, still widely used in China, appeared in 2002 (the third edition, from 2007, in fact differs little), also well over a decade back, and Professor Gonzalez himself has long since retired. So the book's content skews old, while digital image processing has developed at a truly rapid pace: in the last twenty or thirty years especially, new ideas and new methods have emerged continuously. If you watch Professor Rich Radke's open course (which reflects what American universities teach today), you will suddenly feel that our education is stuck at a much earlier stage, which is genuinely alarming. So I think Gonzalez's Digital Image Processing works well as a supplement to your learning, but as the main reference it really is a case of: they have rifles and cannons, while we still carry broadswords and spears.

2) For the intermediate level

As the poet Lu You wrote, "what is learned on paper is always shallow; to truly understand, you must practice it yourself." For someone with a certain foundation who wants to advance to the intermediate level, the most important thing at this stage is to strengthen hands-on practice.

Again from Demi-Gods and Semi-Devils, consider the character Wang Yuyan, whose "spoken martial arts" astound everyone. Her head holds every martial arts manual, but the problem is that she never practices a single move, and so in a real fight she is useless. Knowing without doing is clearly no good. In particular, if you want to work in this industry in the future and it turns out you cannot write code, that is almost unthinkable. At the learning stage, the most commonly used tools for algorithm development are MATLAB and OpenCV; you can think of both as fairly complete libraries. In industry, of course, C++ sees far more use, so MATLAB's applicability is limited. And as mentioned earlier, image processing research also includes image acquisition and codecs, but MATLAB and OpenCV hide the details of that part: from them you will never learn how a JPEG file is actually decoded.

If your applications never touch those topics, then always working through MATLAB and OpenCV is perfectly fine. For example, if your research area is feature matching with SIFT or SURF, you can safely ignore codec internals. But if your topic is denoising or compression, you may not be able to avoid them. Early in your studies, writing such components yourself can deepen your understanding; later, when you do advanced application development, you simply call the libraries. So what to use, and whether to write things yourself, depends on your stage and your actual situation. In my own experience, when I was teaching myself I wrote my own "Magic House" program, and I believe that process laid a very solid foundation for my subsequent in-depth research.
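To make the "hidden codec details" point concrete, here is a hedged sketch of a decoder for binary PGM (P5), one of the simplest real image formats -- the kind of work that MATLAB's or OpenCV's imread quietly does for you. JPEG is vastly more involved; this just shows the flavor. The sketch skips PGM comment lines, which a real reader must handle.

```python
# Minimal binary-PGM (P5) decoder: magic number, dimensions, maxval, pixels.
import io
import numpy as np

def read_pgm(stream):
    """Decode a simple binary PGM (no comment lines) into a NumPy array."""
    assert stream.readline().strip() == b'P5'    # magic number
    width, height = map(int, stream.readline().split())
    maxval = int(stream.readline())
    assert maxval < 256                          # one byte per pixel
    data = stream.read(width * height)
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width)

# Build a tiny 3x2 PGM in memory and decode it.
raw = b'P5\n3 2\n255\n' + bytes([0, 128, 255, 10, 20, 30])
img = read_pgm(io.BytesIO(raw))
print(img.shape, img[0, 1])   # (2, 3) 128
```

Writing even a toy decoder like this once makes it much clearer what the library calls are doing on your behalf.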

In the following article I collected some resources of this kind, with plenty of code well worth studying: Acquiring Image Processing and Machine Vision Network Resources

http://blog.csdn.net/baimafujinji/article/details/32332079

3) For the advanced stage

For readers at this level, basic skills such as programming implementation should already be a cinch. But to go deeper, to study, research, and develop image processing applications, what you need most becomes mathematics. This is the big obstacle facing many people at this stage. If your major was applied mathematics, of course you will feel no difficulty; but people from other professional backgrounds will find it increasingly painful.

Even if your image processing does not involve machine learning: to do image fusion with the Poisson equation, for example, you need knowledge of the numerical solution of PDEs; to study KAZE features, you must know about AOS (additive operator splitting) schemes; to study TV (total variation) denoising, you must also know about the BV space from functional analysis ... Many of these terms you may never have heard. In general, the required material includes: complex analysis, functional analysis, partial differential equations, calculus of variations, methods of mathematical physics ...
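The TV-denoising example is concrete enough to sketch. Below is a hedged, minimal NumPy illustration: plain gradient descent on a smoothed (Charbonnier-type) approximation of the TV energy, with periodic boundaries and illustrative parameter values. A serious treatment rests on the BV-space theory mentioned above and uses dedicated solvers rather than this toy iteration.

```python
# Toy TV denoising: gradient descent on 0.5*||u-f||^2 + lam*TV_eps(u).
import numpy as np

def grad(u):
    # Forward differences with periodic boundaries.
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    # Backward differences: the negative adjoint of grad.
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def tv_energy(u, f, lam, eps):
    ux, uy = grad(u)
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(ux**2 + uy**2 + eps))

def tv_denoise(f, lam=0.15, tau=0.05, steps=100, eps=0.01):
    u = f.copy()
    for _ in range(steps):
        ux, uy = grad(u)
        mag = np.sqrt(ux**2 + uy**2 + eps)
        # Gradient of the smoothed energy: fidelity term minus
        # lam times the divergence of the normalized gradient field.
        u = u - tau * ((u - f) - lam * div(ux / mag, uy / mag))
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0          # a step edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
u = tv_denoise(noisy)
print(tv_energy(u, noisy, 0.15, 0.01) < tv_energy(noisy, noisy, 0.15, 0.01))  # True
```

Each step lowers the regularized energy, smoothing the noise while the TV term preserves the step edge better than plain Gaussian blurring would.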

If you want to get into machine vision methods, some machine learning and data mining methods are indispensable, and that material in turn demands a very strong mathematical foundation: the maximum likelihood method, gradient descent, the Euler-Lagrange equation, least squares estimation, convex functions and Jensen's inequality ...
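Two items from that list, gradient descent and least squares, can be tied together in a few lines. The sketch below (synthetic data, illustrative step size) minimizes 0.5*||Ax - b||^2 by gradient descent and checks the answer against NumPy's direct least-squares solver:

```python
# Least squares via gradient descent, verified against np.linalg.lstsq.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)

# The gradient of 0.5*||Ax - b||^2 is A^T (Ax - b); a step size below
# the reciprocal of A^T A's largest eigenvalue keeps the iteration stable.
step = 1.0 / np.linalg.eigvalsh(A.T @ A).max()
x = np.zeros(3)
for _ in range(2000):
    x = x - step * (A.T @ (A @ x - b))

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ls, atol=1e-6))   # True
```

The iterative and the direct solutions agree; the same gradient-descent pattern scales to problems far too large for a closed-form solve, which is why it pervades machine learning.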

Of course, by the time you reach this step, you will have been reborn: from beginner to expert! "The road ahead is long and far; I will search for it high and low."

(End of full text)
