Demystifying confusing software development technical terms

Tags: project management, website, web, database
[Reprint]
http://www.sjhf.net/blog/user1/sjhf/archives/2006/2006412173840.htm

Preface: posting this to my blog, to spur myself to keep writing.
"Win32 programming"

Embarrassingly, although I began learning programming long ago (which tells you something about the learning environment of the last century), for a long time I never understood what the 32 meant. I had used DOS, Win31, Win95, Win98... but I had never used an operating system named "Win32". Only much later did I learn that the 32 is not a version number but a word size: 32 bits. In Win31 and earlier, Microsoft's operating system was really DOS; Windows was merely a large program running on top of it. Win95 changed this somewhat: besides the DOS core, Windows itself became part of the operating system and supplied the various services an operating system provides. Broadly speaking, Windows from Win95 onward (up to the arrival of the 64-bit Windows systems) can be called the Win32 platform (95/98 are actually 16/32-bit hybrids). On such a platform, Win32 programming means programming directly or indirectly against the API the system provides. In Visual Studio terms, Win32 programming usually covers the SDK, MFC, and ATL development styles. Of these, ATL is not widely used in China; it is generally reserved for medium-to-large software products architected around COM components.

"SDK": software development kit, often translated as a software development (Tool) package

In Win32 programming the term is generally used in contrast to framework programming such as MFC: it means calling the Windows API directly, which is a little different from what the words literally suggest. The other common usage is that a piece of software (or hardware) "has its own SDK", which generally means a set of API library functions or class libraries supplied for developers to call; the DirectX SDK, for example, is a set of COM components Microsoft provides to upper-layer developers. In short, any reasonably complete code library handed to programmers to build on can be called an SDK.
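A minimal sketch of what "SDK style" means in practice: the complete program below calls the Windows API directly, with no framework in between (assuming a Visual C++ style toolchain).

```cpp
// SDK style: nothing sits between our code and the Windows API.
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
{
    // MessageBox is a plain exported API function, not a class method.
    MessageBox(NULL, TEXT("Hello, Win32"), TEXT("SDK style"), MB_OK);
    return 0;
}
```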

"MFC": Microsoft fundation classes Microsoft basic class library

As everyone knows, SDK-style programming repeats a great deal of fixed boilerplate in every program. To raise programming efficiency, and to ease the mental burden that thousands of APIs place on developers, Microsoft built this class library. Note that the library has nothing to do with the operating system itself: it merely wraps the API in an object-oriented way, and of course also supplies a set of programming frameworks. You can still call the Windows API SDK-style from within MFC code. MFC pre-writes the fixed boilerplate and links it in at build time, which is why you never see WinMain() in your own code; at run time the whole flow is no different from the SDK approach. Beginners, please remember this. As an aside, my personal view is that while MFC set out to make developers' lives easier, it was not a great success: the effort required to learn MFC is out of proportion to the eventual payoff.
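To make the "hidden WinMain" point concrete, here is a bare-bones MFC sketch. No entry point appears anywhere, because MFC's own precompiled WinMain constructs the application object and calls InitInstance() for us.

```cpp
#include <afxwin.h>  // MFC core

class CMyApp : public CWinApp {
public:
    // MFC's hidden WinMain() calls this for us at startup.
    virtual BOOL InitInstance() {
        AfxMessageBox(_T("Hello from MFC"));
        return FALSE;  // no main window; returning FALSE just exits
    }
};

CMyApp theApp;  // the single global application object MFC looks for
```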

"API": Application Programming Interface, application interface

The term turns up constantly. In a sense an API can be seen as a subset of an SDK. These too are routines supplied for programmers to call, but the term usually denotes function libraries that provide services through exported functions, excluding class libraries and components.

"GDI": graphic device interface, graphical Device Interface

This is the most common way for a Win32 program to put things on screen, sitting at the same level as DirectX and OpenGL. Displaying anything under DOS was not easy: the simplest route was to call a few C graphics library functions, but those gave you little beyond lines, colors, and text output (which is why DOS interfaces were plain, and why making them fancy was hard). Displaying complex animation or images usually meant hardware interrupts, calling routines burned into the video card itself; and since every video card was different, DOS games were notorious for compatibility problems across cards. Under Windows everyone is much happier. Windows shields the hardware layer and represents a display with a table, the device context. What we do is fill in the relevant parameters in this table and then draw what we want to draw; the operating system then, according to this table (DC), transfers the corresponding display content (usually a block of display memory) to the right place in the right video card's memory and on to the monitor. We no longer have to negotiate with each video card individually. A great victory!

The best-known GDI technique is double buffering: create a compatible DC in memory whose contents are not sent to the display. What is that for? Because its parameters are copied from the current screen DC, its contents can later be transferred to the screen DC without distortion. We typically draw a complete picture on the memory DC, say a circle and then a line, and transfer it to the screen DC in one go. The user sees only a single refresh, which cures the flicker you get when the screen refreshes after every little drawing step. Double and even multiple buffering have plenty of other uses, which I leave for you to discover.
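A sketch of that double-buffering recipe using standard GDI calls (the window handle and the drawing content are placeholders):

```cpp
#include <windows.h>

// Draw everything into a memory DC first, then push it to the screen
// DC in a single BitBlt, so the user sees only one refresh.
void PaintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC screen = BeginPaint(hwnd, &ps);
    RECT rc;
    GetClientRect(hwnd, &rc);

    HDC memDC = CreateCompatibleDC(screen);              // the "copied" DC
    HBITMAP bmp = CreateCompatibleBitmap(screen, rc.right, rc.bottom);
    HBITMAP oldBmp = (HBITMAP)SelectObject(memDC, bmp);

    FillRect(memDC, &rc, (HBRUSH)(COLOR_WINDOW + 1));    // clear background
    Ellipse(memDC, 10, 10, 110, 110);                    // draw the circle...
    MoveToEx(memDC, 10, 60, NULL);
    LineTo(memDC, 110, 60);                              // ...then the line

    // One transfer to the screen DC: no flicker.
    BitBlt(screen, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(bmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
}
```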

"DirectX"

Usually abbreviated DX (in China jokingly pronounced "dī chā", roughly "low fork"). It is an eye-catching term, and fun to say :). Windows does a great deal of work to shield the underlying hardware, and DX is among the best known of those efforts. An operating system has to deal with all kinds of hardware, especially multimedia hardware: video cards, sound cards, game-pad input, network transport of media streams, and more (all close to the system's underbelly, and very hard to handle on your own). DX is the bridge the operating system provides between multimedia hardware and the programmer. DX itself generally implements no processing capability; it is a standard the hardware must satisfy: DX supplies the function names, the hardware supplies the implementations. With it we can quickly and conveniently call on whatever services the hardware offers. Its main parts are DirectDraw (fast image handling through direct access to display hardware), DirectSound (low-latency mixing and playback in software and hardware, with direct access to the audio device), DirectPlay (connectivity over common environments, simplifying communication between applications), Direct3D (the 3D counterpart of DirectDraw), and DirectInput (simplified access to mouse, keyboard, and joystick). Versions after DX 5.0 added more (DirectShow, for instance), which I won't detail. A key trait of DX is direct access to hardware without knowing the hardware's details; DirectDraw, for example, can bypass main memory and write video memory directly, far faster than GDI, not even in the same order of magnitude. One more point: DX is delivered as COM components, not as ordinary exported APIs. The newest DX SDK at this writing is 9.0.

"Com": Component Object Model, which is generally referred to as a component.

This is Microsoft's major mechanism for code reuse. The simplest reuse is source-level: paste already-written functions and classes into your current code and compile the lot. Simple it is, but the drawbacks are serious. The next common method is to split modules out and ship them as DLLs: the DLL exports functions or classes, and the client program loads them by dynamic or static linking. Clearly better than source-level reuse, not hard, and the most widely used approach. But DLLs have shortcomings, the most fundamental being that they are not binary compatible: once the DLL is upgraded, it must be re-linked against the client code, and in some situations that is nearly impossible. To make programming as simple as stacking building blocks, with modules cooperating and substituting for one another cleanly, COM was born. COM is not a class library, not code, not an operating system service, but a programming model. In theory it is language-independent and OS-independent; COM could exist on UNIX too. COM is a standard for a program's structure: if your DLL or EXE meets the standard, it is a component, and it becomes binary compatible on the platform. COM mainly uses the registry to record a module's information: when a client makes a call, the registry is consulted first to locate the required component (this is what gives location transparency), and then LoadLibrary brings it in. What makes this fundamentally different from an ordinary call is that, thanks to the component's special construction, the client program knows nothing of the component's location, how its classes are instantiated, or how they are destroyed; it cannot touch any implementation detail, and deals only with the component's public interfaces. That buys true independence between modules: to the client, the target component is nothing but interfaces. As long as the interfaces stay fixed, the component can be swapped for any other without even recompiling the client, which saves a serious amount of time when integrating the modules of medium and large programs.
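A minimal client-side sketch of that call sequence, using a stock component that ships with Windows (the shell's IShellLink). Note that the client sees only interface pointers and reference counting, never a concrete class:

```cpp
#include <windows.h>
#include <shlobj.h>   // IShellLink and its CLSID/IID
                      // (link with ole32.lib and uuid.lib)

int main()
{
    CoInitialize(NULL);  // enter the COM runtime

    IShellLink* link = NULL;
    // The registry maps CLSID_ShellLink to the DLL that implements it;
    // we never learn (or care) where that DLL lives.
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IShellLink, (void**)&link);
    if (SUCCEEDED(hr)) {
        link->SetPath(TEXT("C:\\Windows\\notepad.exe"));  // use only the interface
        link->Release();  // never 'delete' a component; release the reference
    }

    CoUninitialize();
    return 0;
}
```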

"STL": Standard Template Library

This was originally the work of Alexander Stepanov and Meng Lee (apparently a Chinese name), submitted to the ANSI/ISO C++ standards committee in 1994 and adopted as part of standard C++. Note what that means: it is a standard for a code library, not for syntax. In short, STL is a collection of fundamental data structures and algorithms. Its hallmark is "type parameterization": STL code can operate on objects of any user-defined type, something quite hard to achieve without templates. For the same reason the latest Java and C# have added generics to their syntax, which shows how much this matters. The other big topic around STL is GP, generic programming: a programming model parallel to object orientation, built on templates, which blurs the differences between concrete types and simplifies the abstractions you build while programming, giving better encapsulation and flexibility. For anyone wearied by the intricacies of object-oriented programming it is undeniably a relief, at least spiritually. GP does not replace object orientation; it is a useful complement and a good partner to it. GP has been a hot research topic in software architecture in recent years, though real applications in China still seem scarce, and the technique remains near the research frontier. The book <Modern C++ Design> gives an excellent account of GP in C++, is highly lethal to brain cells, and leaves other C++ books far behind. Want to see just how far C++ coding technique can be pushed? Read it.
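A small sketch of "type parameterization": the same container template and the same generic algorithm serve two unrelated element types, with no library code rewritten.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<int> nums;                 // vector parameterized with int
    nums.push_back(3);
    nums.push_back(1);
    std::sort(nums.begin(), nums.end());   // generic algorithm, any type

    std::vector<std::string> words;        // the very same template, new type
    words.push_back("beta");
    words.push_back("alpha");
    std::sort(words.begin(), words.end());

    std::cout << nums[0] << " " << words[0] << std::endl;  // prints "1 alpha"
    return 0;
}
```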

"ATL": Active Template Library, Active Template Library

This is a fairly advanced topic in VC programming. It fuses COM with template techniques, yielding an extremely convenient way to write components and an extremely high barrier to learning; it is fair to say that once you enter ATL territory you have entered intermediate-or-better programming territory. ATL exists for components: its goal is to make component writing easier (writing even the simplest "Hello world" component in raw C++ is a horror for beginners), and to that end it uses templates to build a framework and code library for COM development, similar in spirit to MFC. With this framework and template library, component development becomes comparatively painless. One curiosity of ATL is that your class ends up as the parent class of classes inside the ATL code base, which is an interesting inversion (and itself a hallmark of template technique).
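That "your class becomes the parent of a library class" trick is the curiously recurring template pattern. A stripped-down sketch of the idea (not actual ATL code; compare ATL's CComObject<CYourClass>):

```cpp
// The library class takes YOUR class as a template parameter and derives
// from it -- exactly the inversion the text describes.
template <class Base>
class CLibObject : public Base {       // your class ends up as the parent
public:
    void Run() { this->DoWork(); }     // calls into the user's class
};

class CMyComponent {
public:
    void DoWork() { /* component-specific logic */ }
};

int main()
{
    CLibObject<CMyComponent> obj;
    obj.Run();
    return 0;
}
```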

"Handle": handle

The Chinese rendering of this word is odd, and baffles beginners. A handle is effectively a void*. (Incidentally, beginners are often bewildered by the strange symbols, type names, and macros scattered through VC code; most of them are just #defines or typedefs of basic types. Put the cursor on one of these symbols, HANDLE say, and press F12: the IDE takes you to its declaration. Do that a few times and you reach its true face, and exhale: so that's all it was. Beginners who haven't tried it: remember F12.) Beginners always want to know what object a handle "is". My advice is not to think of it as the object, but as a ticket for reaching the object. In fact a handle is usually an integer index (marking the object's position in one of the operating system's tables, like an array subscript); the Windows kernel is organized mainly around a few big tables, and the handle marks where the target sits in its table so the OS can look it up. Occasionally a handle really is a pointer to an object, and sometimes it carries extra auxiliary information. Either way, do not poke at it directly; leave the job of dereferencing a handle to the operating system, unless you have grown tired of writing programs that work :).
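A toy sketch (pure illustration, not how Windows is actually coded) of "handle as an index into the system's table":

```cpp
#include <vector>

struct KernelObject { /* the real object lives inside the OS */ };

static std::vector<KernelObject> g_objectTable;  // one of the OS's "big tables"

typedef int HOBJECT;  // the handle: just an integer index, like HANDLE

HOBJECT CreateObjectHandle()
{
    g_objectTable.push_back(KernelObject());
    return (HOBJECT)(g_objectTable.size() - 1);  // caller gets the index only
}

// Only the "OS" side ever turns the index back into the object.
KernelObject& ResolveHandle(HOBJECT h)
{
    return g_objectTable[h];
}
```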

"DLL": Dynamic Link Library

One defining trait of a DLL is, as the name says, dynamic loading: the module is brought into memory by the operating system only when the main program (I prefer "client program") needs it. Of a big application we typically use only a small slice of the features at a time, so the less-used modules (DLLs) are simply never loaded during a given run, which saves a great deal of memory. DLLs are also the most common unit of distribution when modules cooperate. There are two main ways for a program to call into a DLL: 1) for a DLL that exports functions via a .def file, call the API LoadLibrary("dllmodulename") to load it, then GetProcAddress() to fetch each function pointer, then call through the pointer; 2) export classes and functions directly, have the client include the same header declarations and link the matching .lib import library, after which the DLL's classes and functions can be used in your code directly, with no LoadLibrary at all.
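Both calling styles, sketched side by side (the DLL name "MyMath.dll" and its exported "Add" function are made up for illustration):

```cpp
#include <windows.h>

// --- Method 1: run-time loading via LoadLibrary/GetProcAddress ---
typedef int (*AddFn)(int, int);       // assumed signature of the export

int CallAddAtRuntime()
{
    HMODULE mod = LoadLibrary(TEXT("MyMath.dll"));  // loaded only when needed
    if (!mod) return -1;
    AddFn add = (AddFn)GetProcAddress(mod, "Add");  // fetch the function pointer
    int result = add ? add(2, 3) : -1;
    FreeLibrary(mod);                               // unload when done
    return result;
}

// --- Method 2: load-time (implicit) linking ---
// Declare the import (normally via the DLL's own header) and link MyMath.lib;
// the loader maps the DLL before main() runs, no LoadLibrary call needed:
// __declspec(dllimport) int Add(int, int);
// int CallAddDirectly() { return Add(2, 3); }
```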

"Process": Process

A process is a dynamic concept. From the moment creation is requested, the operating system builds a process control block (PCB, in essence a table/struct), allocates the address space and memory, loads the module's code, executes it, and tears everything down when execution finishes; that whole lifetime is the "process". In Win32 a process gets a 4 GB logical address space. In everyday usage, though, we treat it as a static thing: in Win32, one running EXE is one process (though it may activate further processes from inside itself).
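A sketch of the dynamic view: ask the OS to create a process, let it live, and observe its end (notepad.exe is just a handy example module):

```cpp
#include <windows.h>

int main()
{
    STARTUPINFO si = { sizeof(si) };   // the OS fills its tables from these
    PROCESS_INFORMATION pi;

    // Creation: PCB set up, address space allocated, module code loaded.
    if (CreateProcess(TEXT("C:\\Windows\\notepad.exe"), NULL, NULL, NULL,
                      FALSE, 0, NULL, NULL, &si, &pi)) {
        WaitForSingleObject(pi.hProcess, INFINITE);  // the "execution" phase
        CloseHandle(pi.hThread);    // teardown: the process goes away,
        CloseHandle(pi.hProcess);   // we just release our handles to it
    }
    return 0;
}
```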

"Thread": Thread

To raise CPU utilization and run tasks concurrently, Microsoft subdivided the process into a finer-grained unit of CPU scheduling: the thread. Every process has at least one thread. To run tasks in parallel we usually create a new thread (its system overhead is smaller than a process's); the thread takes one of our own functions as its entry point, and ends automatically when that function returns (it can also be terminated forcibly mid-run). As an aside, classic UNIX had no threads, since UNIX serves concurrency mainly with multiple processes (one reason it suits servers); with so many processes already running, subdividing them further might even hurt efficiency (scheduling would eat the gains). That said, this is purely my own conjecture :)
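A minimal sketch: the thread is created with one of our functions as its entry point and ends when that function returns.

```cpp
#include <windows.h>
#include <cstdio>

// The thread's entry function: runs concurrently with main().
DWORD WINAPI Worker(LPVOID param)
{
    printf("worker thread got: %d\n", *(int*)param);
    return 0;  // returning ends the thread automatically
}

int main()
{
    int arg = 42;
    HANDLE h = CreateThread(NULL, 0, Worker, &arg, 0, NULL);
    WaitForSingleObject(h, INFINITE);  // wait for the worker to finish
    CloseHandle(h);
    return 0;
}
```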

"C Language"

To this day C is probably the most widely spread language. In the UNIX world it still plays the leading role, and it performs admirably elsewhere too, in hardware development and embedded systems (mobile phones, say); it even keeps a place in SDK-style development on the Win32 platform. More than that, it is the first language of most programmers in China (I won't speak for abroad); many people came to understand how programs think through C. C's greatest trait is speed: apart from assembly, its efficiency is the highest available. Its flexibility and its direct reach into hardware suit a programmer's love of freedom. If you are hesitating over what to learn, I have one sentence for you: learn C. Trust me, it's the right call! Plenty of people advocate starting directly with an object-oriented language; I disagree. The abstraction object-oriented languages lay over the machine model confuses beginners and makes it hard to form an accurate model of what happens at run time. We are programmers, after all, not users; we cannot shove every problem onto the compiler and the operating system, and they could not solve them all anyway. Learn at least one procedural language.

C++

This is another masterpiece out of Bell Labs. It has also broken the hearts of countless programmers the world over, and it is devastating to brain cells. C++ is far more complex than most beginners imagine: it comprises, roughly, a C-with-classes language, templates, and the standard template library, and many beginners master only the C-with-classes subset (if you doubt it, open the C++ code in <Modern C++ Design> and see how much you can follow). Worse, C++ drags in the whole question of object-oriented style, which is also much harder than beginners think and demands a long campaign. What worries a C++ fan like me is its prospects on the Windows platform: on the .NET platform it is hard to find a reason to write new code in C++ rather than C# :(. Against Java/C# and the dynamic languages (Python/Ruby), its development efficiency is plainly lower, and it shows signs of retreating from upper-layer application work. I am loath to give up this nearly perfect language, the one that embraces the most paradigms; I can only pray that C++'s road runs further and smoother from here.

Source code version control

This is a vital engineering practice in software development, close to a mandatory one, and it is among the practices that workshop-style development teams most need to adopt or strengthen. Beginners would do well to practice source code version control from their exercise projects onward, because they will certainly use it at work.

The basic principle of source code version control is as follows:

On the server side, create a database for the project and store the first version of the chosen source files. A client must check out a source file to gain the right to modify it. Thereafter, whenever a developer reaches a state that compiles cleanly and is worth keeping, they check in, and that snapshot becomes the latest version on the server (note: earlier versions are not overwritten). Any client can conveniently fetch any version of any file from the server (permissions allowing). Another important staple is version comparison: any client can use the tool to diff two versions of a file, with the differences marked, so the evolution of the content is easy to follow; this gets constant use. Below are three version control tools, common in China, that I have used:

VSS: Visual SourceSafe

This is the source control tool that ships with Microsoft Visual Studio. Its greatest virtues are painless installation (it is integrated with Visual Studio and can be installed alongside VC/VB at no extra effort) and ease of use (server-side setup is simple enough to work out with a little poking around; clients just check in and out). The basic features are all there, and the version history is pleasingly intuitive (I like it). Its signature behavior: once someone checks a file out, nobody else can check it out, i.e. only one person can modify a given file at a time, which heads off conflicting edits to the same file. (The server can be configured to allow multiple checkouts, but this is almost never done, since it forfeits one of VSS's most valuable traits.) VSS also integrates into the VS environment, but in my experience check operations issued from inside VC sometimes fail to take effect, so it is safest to do them in the VSS program itself. A side note: VSS works on a single machine too, where its value is that after a botched operation or a sweeping edit you can fall back to the last good version and cut your losses. VSS is very widely used; if you work in Visual Studio, you should be using it.

CVS: Concurrent Versions System

This is the famous open-source version control tool, active mainly in the UNIX world. I have used CVS only a little. Its functionality leans toward the command line (many UNIX developers live there anyway), though several Windows builds of CVS exist, some of which integrate into VS, and apparently another can attach to IE; I have not tried them. The famous open-source project hosting site sf.net uses CVS, so if you mean to collaborate with programmers around the world, CVS is a must-install. The Java world, too, runs largely on CVS.

Rational ClearCase

This tool plays in a higher league: produced by Rational (now IBM), and expensive. I used it briefly when I first started working. Its defining trait is complexity; installation and setup are elaborate. As I recall, the client machine even had to join an NT domain, which left my local account short of permissions and made installing anything new a tiresome, dispiriting business (possibly our staff had the setup wrong, and I suspect so, but the complexity speaks for itself). In use it has one delightful feature: from your check-in history it can generate a version tree automatically (an actual tree diagram), in which the evolution of versions is laid out plainly. Strictly speaking, tools like CVS and ClearCase have earned the name "version control"; VSS barely scrapes in. ClearCase is extremely powerful and I won't elaborate (this is not a book); it suits large software companies implementing software configuration management. Famous as it is, how many companies in China run licensed copies of ClearCase is anyone's guess.

OpenGL

OpenGL has something of the fallen hero about it. The standard was executed so well that it was once the very benchmark for gorgeous game graphics. Its decline was probably inevitable; Microsoft's heavy suppression rarely fails. Still, it gave Microsoft real pressure, and it remains widely used in industry; most games and graphics cards still support it (OpenGL is my preferred mode in CS). In some respects its performance still beats D3D's. The pity is that DirectX keeps marching forward while OpenGL has all but stalled, so DirectX's prospects look bright. Today OpenGL's stronghold is the industrial field: its superb vector processing and rich color reproduction make it outstanding in professional graphics work, while games use it less and less. I recently heard that Microsoft's newest operating system, Vista, leans on OpenGL harder still: performance takes a hit, and only version 1.4 is supported (the latest is something like 2.0). How that ends, I don't know.

DirectDraw & D3D

Almost every decent 2D Windows game uses this technology for display. DirectDraw has two modes, full-screen and windowed, full-screen being the more common. In full-screen mode DirectDraw offers its famous "page flipping" technique: display refresh is achieved by swapping two display pages, which is blindingly fast, just an exchange of pointers within video memory, enormously faster than copying a whole screen with BitBlt. Most of the fluid animation you see in games comes from this. DirectDraw's home turf is entertainment and demanding real-time display, medical imaging for instance. D3D is now the standard for most 3D games; I have not studied it, so I dare not say much, but how well it works you can see for yourself in any game :)
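A fragment sketching the flip itself (DirectDraw 7 interfaces; surface creation and rendering are omitted, and the function name is mine):

```cpp
#include <ddraw.h>   // link with ddraw.lib; assumes surfaces already created

// 'primary' is the visible page with an attached back buffer.
void PresentFrame(LPDIRECTDRAWSURFACE7 primary)
{
    // ...render the next frame into the back buffer here...

    // Flip exchanges the two surface pointers inside video memory:
    // no pixels are copied, which is why it beats BitBlt so badly.
    primary->Flip(NULL, DDFLIP_WAIT);
}
```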

UML: Unified Modeling Language

This "language" is a graphical one. Its main job is to serve as a standard graphical notation for the models produced during design, easing communication between programmer and programmer, programmer and customer, designer and coder; it also helps a designer set down on paper the understanding of the program's workings that otherwise lives only in his head, so it can be clarified and carried forward into coding. Put another way, the output of the design process can take the form of text documents, skeleton code, or pseudocode, but the more standard and more common form is a stack of UML diagrams. UML diagrams come in two families, static and dynamic; of the static diagrams, the class diagram is used most. Many people have no idea how to begin designing software, and small programs often get no design phase at all; in truth, sketching a few diagrams for your software is easy. When learning design, don't reach for a big-name tool like Together or Rational Rose: any simple tool that can draw UML diagrams will do, and plenty of small, dedicated UML tools are easy to find online. One more thing: do not try to diagram every last detail. Highlight the key points and keep the diagrams easy to understand; it doesn't even matter if your notation is nonstandard (when the UML is serving as a draft).

RTTI: Runtime Type Information

When our program holds an object instance or pointer, much of the time we cannot be certain of its exact type (inheritance and type conversion see to that). With the RTTI support in VC 4.0 and later compilers, we can call the library facility typeid() to obtain the object's "type information" at run time. In dynamic processing, type information matters a great deal: once you have it you can invoke that type's operations, convert to the type, or create instances dynamically. Because it matters so much, the Java and .NET libraries handle it more elegantly: leaning on single inheritance and the "virtual machine", every class descending from Object naturally carries the ability to report its own type (a benefit of inheritance), and the scheme extends easily. When you need type information you simply ask the object: tell me, what type are you? And it answers.
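A small sketch of asking an object its type at run time (this needs the compiler's RTTI switch, /GR in older VC):

```cpp
#include <iostream>
#include <typeinfo>

struct Shape  { virtual ~Shape() {} };  // RTTI needs a polymorphic base
struct Circle : Shape {};

int main()
{
    Shape* s = new Circle;

    // "Tell me, what type are you?" -- answered at run time.
    std::cout << typeid(*s).name() << std::endl;   // reports the Circle type

    // dynamic_cast is the other face of RTTI: a checked conversion.
    if (Circle* c = dynamic_cast<Circle*>(s))
        std::cout << "it really is a Circle" << std::endl;

    delete s;
    return 0;
}
```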

Debug & Release: debug build & release build

As everyone knows, debug is the debugging build and release the shipping build. The difference: the debug build carries a large amount of debugging and diagnostic code that actual running does not require, while release strips that information out and applies code optimization, so release programs are much smaller than debug ones and run faster. To keep debugging convenient, diagnostic code in the debug build is usually wrapped in macros such as _DEBUG, so that it is not compiled at all under release. Because the two builds compile different source (and because of release's optimizations), they often behave differently at run time; most people have met the classic case of a program that runs fine in debug and crashes in release. The causes are many; consult the excellent threads on the usual forums. Also, if the parts of one program link against mismatched builds (say a release EXE with a debug DLL), the program may not run properly. My habit is to build each DLL in both flavors from two .def files, producing two differently named files (append a "d" to the debug one), with each build linking its own kind, which keeps the confusion down. Finally, release builds can be debugged too: just switch debugging information on in the project settings.
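A sketch of the macro mechanism: the diagnostic line exists only in the debug build, which is exactly how the two versions end up running different code.

```cpp
#include <cassert>

int Divide(int a, int b)
{
#ifdef _DEBUG
    // Compiled into the debug build only; release never sees this line.
    assert(b != 0);
#endif
    return a / b;
}
```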

XP: eXtreme Programming

This is a development model that emerged only in recent years; it has been heavily publicized in China since 2001/2002.

It mainly serves small and medium teams on small and medium projects with tight schedules and unstable requirements (which is to say, most software projects). It breaks with the traditional software engineering framework, and cleverly so. For instance: the whole process produces very few documents, with development plans and content described on stacks of "cards" (CRC cards and the like); there is no functional specification in the true sense, but rather a series of testable cases; there are no separate design and test phases, both recur incrementally in every iteration; design is kept as small and simple as possible; and there is generally no code review, with the code owned by everyone together. Its most conspicuous external trait is "pair programming": two developers share one machine and develop jointly (one watches, one types), which sounds positively entertaining the first time you hear it. The underlying premise is that, under the right conditions, a pair develops more efficiently than the same two people working apart. And don't scoff: the effectiveness of the practice has been borne out in many projects.

XP's character can be summed up as "fast, small, nimble". Where it differs from the traditional waterfall model (straight from top to bottom) is its incremental iteration (design -> code -> test -> design -> code ...). The idea is simple: at the outset there is rarely a goal you can pin down cleanly. Take mountain climbing as a metaphor: the traditional method studies the map at the foot of the hill, picks a route, and follows it; XP walks a stretch, stops and looks around, and then chooses the next leg afresh. In many cases that is how you find the better shortcut.

ICONIX:

I suspect many readers have never seen this word, let alone know what it is; I mention it to broaden our horizons. This is a development model sitting between XP and the Rational Unified Process: "bigger" than XP, smaller than RUP. It adopts a subset of UML, is use-case driven, and keeps good progress-tracking ability, its aim being to turn use cases into code in the shortest possible time. Relative to XP in particular, this model emphasizes the construction and analysis of use cases, which hold its central position.

RUP: Rational Unified Process

After the foregoing you can likely already sense that this is a "heavyweight" software development model. It is a software engineering model put forward by Rational (now part of IBM): it employs the full set of UML diagrams, and for every stage of development (requirements, design, code, test, maintenance) it lays down well-developed, elaborate standards, which I won't detail here. At its core RUP is iterative development, each iteration passing through four phases: inception, elaboration, construction, and transition.

CMM: Capability Maturity Model, the software capability maturity model

This is a masterpiece of the Software Engineering Institute at Carnegie Mellon University (my major was software engineering, so the place is holy ground in my mind); it once set off a CMM wave that swept software development worldwide. CMM has five levels. Most software companies sit at level 1, and those certified at level 5 are few anywhere in the world; in China, outfits that hang up a sheep's head but sell dog meat are not exactly rare either (ahem). Implementation therefore generally starts at level 2, and a capable software company can reach level 3. CMM is a process-centered model: from level 2 upward, each level defines several key processes (KPs), and each KP breaks down into several key activities. As a rule CMM cannot be implemented more than one level at a time, and each level usually takes over a year, so the higher levels are genuinely hard to reach. Note too that CMM is not only for big companies; it applies to small companies and small project teams as well (which many people don't know). In practice an organization may mix levels to suit its situation, say implementing all the level-2 KPs plus a few from level 3 or 4; but if all you have completed is the level-2 set, you can only pass level-2 certification, even with some level-4 KPs adopted. CMM's latest development is CMMI (Integration), which chiefly addresses the relationship between software and factors beyond pure software (systems, for instance) and collaboration across teams. CMM's trajectory in China looks rather like ISO certification's, which is not good news.

Callback Function: callback function

This concept appears at the very start of Hou Jie's <Dissecting MFC>, where the gist given is: a callback function is a function the operating system calls, and that you never call yourself. The notion leaves beginners slightly awed, imagining some deep, inscrutable core technology down in the system's basement. It is nothing of the sort: the technique is simple and in constant use. It is just a basic application of function pointers (if you don't know what a function pointer is, go back to your textbook). Within a module, some functionality can be left to other modules to implement. Say a display module implements only the control machinery, resource allocation, screen refresh, scaling, while the code that draws a particular thing (a circle, a polygon) is supplied by other modules. How? With a function pointer: keep a draw-circle function pointer in your class, let outside code assign it (point it at an external function), and call through that pointer inside your drawing code (this module neither knows nor cares how the circle gets drawn). That external function is the callback!

Many system APIs use this callback style, which lets code we write be embedded inside the API's own execution; all we really hand over is a function address. Put another way, such APIs provide a ready-made execution framework and leave the specifics to us.
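The display-module example above, reduced to a sketch (all names invented for illustration):

```cpp
#include <cstdio>

typedef void (*DrawFn)(void);   // the signature the module requires

// The "display module": owns resource setup and screen refresh, but
// knows nothing about what gets drawn.
void RefreshScreen(DrawFn draw)
{
    printf("allocate resources, clear screen...\n");
    draw();                     // call back into code supplied from outside
    printf("flush to screen...\n");
}

void DrawCircle(void) { printf("drawing a circle\n"); }  // the callback

int main()
{
    RefreshScreen(DrawCircle);  // all we hand over is a function address
    return 0;
}
```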

XML: Extensible Markup Language

Perhaps you are still agonizing over the choice between .NET and J2EE. If so, you might first learn something common to both: XML. With HTML in hand, why do we need XML? HTML is oriented toward text, images, and multimedia; it struggles to express data, because its tags are fixed and it cannot describe data types at all. .NET and J2EE both had to solve the hard problem of standardizing the format of transmitted information: a format that carries text and data, ideally can describe program interfaces, stays as simple as HTML, remains universal, and travels well over HTTP. XML is the result. Like HTML it is a markup language, but its tags are not fixed: they can be custom-defined (extended without limit), and those custom tags can describe data types alongside the data itself (at first glance rather like a database table definition). Beyond that, XML can describe program interfaces, so it can interact easily and directly with networked program components (COM, EJB, and so on). And being an ASCII text stream, it is compatible with today's HTTP and passes unobstructed across today's Internet (this matters). With all of this, XML has earned its place as the standard information carrier of the new generation of Internet technology: in the network architectures of .NET and J2EE alike, information interchange between the various "components" is entrusted to XML. It carries a heavy load down a long road.
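A made-up fragment showing what "custom tags that describe the data" means; none of these tags exist in any standard, which is exactly the point:

```xml
<!-- Our own vocabulary: the tags name the data and carry its attributes. -->
<order id="1024">
  <customer>Alice</customer>
  <amount currency="USD">42.50</amount>
</order>
```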

I have not written much XML myself. Judging purely from the experience of studying it, the syntax is fussier than HTML, with more small details to trip over, and less quick to get right :) Fortunately the two share common roots, so with a grounding in HTML, or web development generally, it is easy to pick up.

Java2:

This has been the hottest language of recent years, shining in web development, network platforms, and mobile development. You may choose not to use Java, but you cannot afford not to understand it; this is, after all, a vast and rich territory of software development. People in the VS camp who have never touched Java may wonder what the 2 in Java2 means, so let me explain. When Java 1.0 officially launched it was not well received and drew plenty of criticism, so new versions kept coming to improve it; the release with the most significant changes was 1.2 (as I recall). Besides the syntax updates, each Java release carries another clear marker, the JDK (Java Development Kit, Java's SDK) version; Java from 1.2 onward is called Java2 to set it apart from what came before. JDK 1.3/1.4 are in widest use at present, and the newest JDK is 1.5 (codename Tiger). Among Java IDEs, JBuilder dominates in China, Eclipse is everywhere in open source, and Sun pushes its own open-source Java IDE, NetBeans (downloadable free from Sun's site). Java runs on a virtual machine mechanism, in effect a soft operating system layered over the real one: source code is compiled to intermediate bytecode, which the virtual machine interprets and executes, so wherever a suitable VM exists, the Java program runs. That is the origin of "compile once, run anywhere". The unavoidable performance cost that comes with it is also why Java has struggled to become mainstream for desktop programs. For beginners I recommend JCreator as a first Java IDE: it is written in C++, only a few MB in size, starts a good ten times faster than Eclipse, and makes a beginner's life much easier.

J2EE:

Java actually comes in three editions, J2EE/J2SE/J2ME, each with its own JDK: J2EE targets the enterprise platform, J2SE is the standard edition, and J2ME targets mobile platforms. J2EE is truly hot right now. I recently leafed through early issues of Programmer magazine and found J2EE discussed in the very first issue (2001.1); it is now 2004.6. How much do you know about it? J2EE is not a single technology but an architecture, plus the many standards that make the architecture up. Enterprise platform development differs greatly from desktop or simple web database development: the programs are usually enormous (not something one or a few EXEs can deliver), they work with massive databases and heavy communication, and they often must run without interruption. These traits push enterprise development toward questions of architecture, and being fluent at writing a Java client, a piece of message-handling middleware, or database code is only a small corner of that architecture. J2EE enjoys great favor, but please don't write a few sample EJB programs (the EJB being the Java world's component, conceptually much like COM) and conclude that J2EE is in the bag.

In J2EE, message-queueing middleware is commonly introduced to carry messages, and on the server side EJB containers manage and invoke the EJBs. One important J2EE concept is the transaction, an idea first made ubiquitous by database technology. A transaction bundles many operations into a single unit: if any operation in the sequence fails, the operations already performed are automatically undone in reverse. The transaction is therefore the smallest unit of success or failure; it cannot half-succeed.
BEA is a famous name in enterprise service platform development (a good company; you should know the name). Its product is WebLogic.

.NET

.NET is the technology Microsoft has prepared for its next decade. Have you prepared for it? .NET, too, is a platform technology rather than a single technology, divided mainly into the .NET runtime platform (corresponding to the Java virtual machine) and the .NET class library (corresponding to Java's JDK). At present Windows 2003 is the only operating system with the .NET runtime built in, so a .NET program you write can only run elsewhere if the target machine installs the .NET platform first. That is a genuine nuisance, and part of why not many people have yet switched to writing programs on .NET (even though it can raise development efficiency considerably). One hopes the arrival of Longhorn will turn this around; then we can finally say goodbye to aging framework class libraries like MFC. No small event.
.NET absorbs many of the newest technologies and ideas. For those on the VS road (especially anyone with COM concepts in hand) it is relatively easy and even enjoyable to learn. <.NET Framework Programming> and <Essential .NET> are both good books; before reading them, though, you had best get comfortable with one .NET language, and C# is the easiest for us to pick up.

After talking about so many technologies, let's relax :)

Public Key: public key

"Password" is already a widely known word. Do you want to get the money from the bank? No Password is required. I don't know when, since then, such a military-level word has already entered thousands of households, and all women and children know it. However, there are fewer people who know the word "key" and fewer people who know the word "Public Key". There are fewer people who know not only the principle, but also the principle, of course, if you do not know before, then you will join the very few :)

Long, long ago, as warfare developed, intelligence mattered more and more. How to obtain accurate intelligence became a major military preoccupation, and with it came the companion problem: once a message is intercepted, which cannot be prevented, how do you ensure the enemy still learns nothing from it? Think how you would solve this with today's wisdom ... The first idea: never transmit plaintext. Scramble the plaintext into ciphertext according to some rule, so that plaintext and ciphertext correspond one-to-one under the rule; then even if the ciphertext leaks, as long as the enemy lacks the conversion rule, they get nothing. Simple and effective, and even in the two World Wars of modern times it was still in wide use, the rules by then dynamic and often elaborate. But the scheme carries a fundamental theoretical flaw: how do you deliver the "rule" itself securely? For two sites to turn ciphertext back into plaintext they must share a rule, so at least once the rule must travel safely from one site to the other, and theory offers no guarantee for that trip; the whole security system therefore can never be fully trusted, and once the rule leaks, the blow to the cipher system is fatal. Is there a better way? If you have never met this material, I suspect you cannot guess it. The answer is the public key system.

Look again at how plaintext becomes ciphertext. The simplest approach is to scramble it, or apply some linear or nonlinear transformation; the result is unreadable at a glance, but it is also the easiest to crack, because such transformations are mathematically easy to invert: with today's computers and some statistical analysis of ciphertext against plaintext, the rule soon falls out. The more advanced scheme takes a set of passwords (which may change dynamically, with the date for instance) and combines it with the plaintext to produce the ciphertext. Because that "combining transformation" can be a very complex mathematical operation, it is hard to recover either the password set or the transformation rule from the ciphertext alone, or even from a quantity of known plaintext. This is the underlying principle of most encryption and decryption in use today. That set of passwords is called the key.

Still, this only raises the difficulty of deciphering intercepted ciphertext; it does not answer the problem we posed. Now the "public" part earns its name. Mathematics contains computations that are one-way (so far as theory can currently tell): easy to compute in one direction, yet practically impossible to invert unless you hold a piece of information that was fed into the forward computation. (For example, factoring the product of two large primes is practically impossible, while multiplying the two primes together is schoolchild's work; this is the famous principle behind RSA.) This is the theoretical foundation of the "public key".

The concrete usage goes like this. First generate a pair of keys, called the encryption key and the decryption key, related to each other yet different. To encrypt, the plaintext is put through the "irreversible" mathematical operation together with the encryption key, yielding ciphertext; to decrypt, the ciphertext goes through the companion operation together with the decryption key. Note that the two keys are generated together, which is exactly why the decryption key can perform this "inverse operation". The decryption key can also be used to encrypt, in which case the encryption key decrypts what the decryption key produced. The encryption key is then published as the public key and handed to anyone who wants it, while the decryption key is guarded as the private key. When someone holding the public key wants to send me secret text, he simply encrypts the plaintext with the freely available public key. Naturally, having encrypted it, he cannot decrypt it himself; but once it reaches my hands, I decrypt it with the private key. The old problem of securely delivering the "rule" is thereby solved: my public key is public, take it if you want it :) and the private key stays with me, never released.
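A toy sketch of the whole round trip with textbook-sized numbers (p = 61, q = 53, so n = 3233, public exponent e = 17, private exponent d = 2753; real keys are hundreds of digits long):

```cpp
#include <iostream>

// Square-and-multiply: computes (base^exp) mod m; the intermediate
// products stay far below 64-bit overflow for these small numbers.
unsigned long long PowMod(unsigned long long base, unsigned long long exp,
                          unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main()
{
    const unsigned long long n = 3233, e = 17, d = 2753;  // toy key pair
    unsigned long long plain  = 65;
    unsigned long long cipher = PowMod(plain, e, n);   // anyone: public key
    unsigned long long back   = PowMod(cipher, d, n);  // only me: private key
    std::cout << plain << " -> " << cipher << " -> " << back << std::endl;
    // prints: 65 -> 2790 -> 65
    return 0;
}
```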

The other important use of the public key system is the signature. If I encrypt a file with my private key, you can decrypt it with the public key I issued; if the decryption succeeds, that proves the file really did come from me.

The "certificates" in constant use in network information security are in fact just such a public key system.

 
