"Exceptional C + +" and "improve the performance of C + + programming Technology" Learning notes __ Programming

Source: Internet
Author: User
Tags: garbage collection, inheritance
1. C++ Temporary Objects

Creating an object is an operation that costs both time and space. Temporary objects arise in several situations:

1) Passing arguments to a function by value
When an argument is passed by value, the copy constructor is called to create a copy of it for the function's parameter. Every operation inside the function works on this copy, which is exactly why nothing done to the copy inside the function body affects the original argument.

Guideline: pass function arguments by reference to const rather than by value.
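For illustration, a minimal sketch (the Widget type and function names are hypothetical, not from the notes): passing by value invokes the copy constructor, while passing by reference to const works on the caller's object directly.

#include <iostream>
#include <string>

struct Widget {
    std::string name;
    explicit Widget(std::string n) : name(std::move(n)) {}
    Widget(const Widget& other) : name(other.name) {
        std::cout << "copy constructor called\n";   // fires on pass-by-value
    }
};

void byValue(Widget w) {}                 // works on a temporary copy
void byConstRef(const Widget& w) {}       // works on the caller's object, read-only

int main() {
    Widget w("demo");
    byValue(w);      // prints "copy constructor called"
    byConstRef(w);   // no copy is made
}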

2) Type conversion
When we do a type conversion, the converted object is usually a temporary object.

Constructors should avoid implicit type conversions, which silently produce temporary objects. Declare single-argument constructors explicit so the conversion cannot happen implicitly.
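A minimal sketch of that guideline (the String class here is hypothetical): without explicit, passing a char* would silently build and destroy a temporary String; with explicit, the temporary has to be created deliberately.

class String {
public:
    // explicit: no silent char* -> String conversion, hence no hidden temporary
    explicit String(const char* s) : data_(s) {}
private:
    const char* data_;
};

void takesString(const String&) {}

int main() {
    // takesString("hello");        // does not compile: constructor is explicit
    takesString(String("hello"));   // the temporary is now visible and intentional
}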

3) A function returns an object by value
When a function returns an object by value, a temporary object is created on the stack to hold the return value (compilers can often eliminate it through return value optimization).
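A minimal sketch (the Complex type is hypothetical): the result returned by value may be materialized as a temporary, although modern compilers usually construct it directly in the caller's variable.

struct Complex {
    double re, im;
};

Complex add(const Complex& a, const Complex& b) {
    return Complex{a.re + b.re, a.im + b.im};   // return value: a candidate temporary
}

int main() {
    Complex c = add(Complex{1.0, 2.0}, Complex{3.0, 4.0});  // often built in place (RVO)
    return static_cast<int>(c.re);
}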

4) For most containers, including linked lists, calling the container's end() function returns a temporary object, and that object must be constructed and destroyed. Since the value of this temporary does not change during the loop, recomputing it on every iteration causes unnecessary overhead; the temporary need only be computed once and saved in a local object.
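A minimal sketch of that idea (the processAll function is illustrative): end() is evaluated once and kept in a local variable instead of being reconstructed on every iteration.

#include <list>

void processAll(std::list<int>& items) {
    const std::list<int>::iterator last = items.end();   // computed once
    for (std::list<int>::iterator it = items.begin(); it != last; ++it) {
        *it *= 2;   // whatever per-element work the loop does
    }
}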

5) The postfix ++ produces a temporary object; prefer the prefix ++ whenever the old value is not needed.
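A minimal sketch (the Counter class is hypothetical) of why postfix ++ is more expensive: it must return the old value, so it creates a temporary copy that prefix ++ avoids.

class Counter {
public:
    Counter& operator++() {        // prefix: increment and return *this, no temporary
        ++value_;
        return *this;
    }
    Counter operator++(int) {      // postfix: must preserve and return the old value
        Counter old(*this);        // extra construction...
        ++value_;
        return old;                // ...and destruction of a temporary
    }
private:
    int value_ = 0;
};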

2. Prefer "a op= b;" over "a = a op b;"

Consider why operator+= is more efficient than operator+. The compound assignment operator works directly on the object on its left-hand side and returns a reference to it rather than a temporary object. In contrast, operator+ must return a temporary object holding the result.
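A minimal sketch (the Money class is hypothetical): operator+= modifies its left operand in place, and operator+ is then written in terms of it.

class Money {
public:
    explicit Money(long cents) : cents_(cents) {}

    Money& operator+=(const Money& rhs) {   // in-place update, returns a reference
        cents_ += rhs.cents_;
        return *this;
    }
private:
    long cents_;
};

inline Money operator+(Money lhs, const Money& rhs) {
    lhs += rhs;        // reuse the compound assignment
    return lhs;        // the result necessarily lives in a new object
}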

In general, if you define an operator op, you should also define the corresponding assignment operator op=, and implement the former in terms of the latter (as in the sketch above).

3. Overloading, Overriding, and Hiding

(1) Overloading: a function f() is overloaded when another function with the same name but a different parameter type, order, or number is defined in the same scope.
(2) Overriding: a virtual function f() is overridden when a function with the same name and the same parameter list is defined in a derived class.
(3) Hiding: a function f() in an outer scope (a base class, an enclosing class, or an enclosing namespace) is hidden when another function with the same name is defined in an inner scope (a derived class, a nested class, or a nested namespace); the inner definition hides the function of that name in the outer scope.
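A minimal sketch of the three cases (the Base and Derived classes are illustrative):

#include <string>

class Base {
public:
    void f(int);                   // f(int) and f(const std::string&) overload each other:
    void f(const std::string&);    // same scope, different parameter lists

    virtual void g(int);           // virtual, so a derived class can override it
    void h(int);
    virtual ~Base() = default;
};

class Derived : public Base {
public:
    void g(int) override;          // overrides Base::g: same signature, virtual
    void h(double);                // hides Base::h(int): inner scope, name reused
};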

4. Lazy Construction

In C++, it is wasteful to define all of a program's objects up front out of habit, because some of them may never be used before the program ends. Good coding style is to define a variable at the point where it is actually used.
For example:

if (...)
{
    XxxObject obj;
    XxxFunc(obj);
}

Defining the variable inside the conditional branch reduces cost: if it were defined before the branch and then never used, its construction and destruction would be pure overhead.

5. Virtual Functions (Templates as an Alternative to Inheritance)

Inlining is decided at compile time, so the compiler cannot inline a virtual function call that is resolved at run time. This frequently causes performance problems; one solution is the following:
Because dynamic binding of function calls is a consequence of inheritance, one way to eliminate it is to replace the inheritance-based design with a template-based one. Templates move call resolution from run time up to compile time, where the compiler can inline the call and improve performance; a modest increase in compile time is an acceptable price. A template-based design therefore has two advantages: reuse and efficiency. A minimal sketch follows.
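A minimal sketch of the idea (the hasher types and the bucketFor function are illustrative, not from the notes): the policy that would otherwise be a virtual call becomes a template parameter, so the call is bound statically and can be inlined.

struct FastHasher {
    unsigned hash(unsigned x) const { return x * 2654435761u; }
};

struct TrivialHasher {
    unsigned hash(unsigned x) const { return x; }
};

// Instead of a base class with 'virtual unsigned hash(...)' and run-time
// dispatch, the hashing policy is chosen at compile time.
template <typename Hasher>
unsigned bucketFor(unsigned key, unsigned buckets, const Hasher& h) {
    return h.hash(key) % buckets;   // statically bound, eligible for inlining
}

int main() {
    FastHasher fast;
    return static_cast<int>(bucketFor(42u, 16u, fast));
}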

6. Memory Pools

A memory pool is a way of allocating memory. We are normally accustomed to allocating memory through APIs such as new and malloc. Their drawback is that the requested block sizes vary, so frequent use can produce a great deal of memory fragmentation and degrade performance.

A memory pool instead pre-allocates a number of (usually equal-sized) memory blocks before the memory is actually needed. When a new request arrives, a block is handed out from the pool; when the pool's blocks run out, it requests more memory from the system. The notable advantage is that memory fragmentation is avoided as far as possible, and allocation efficiency improves. A minimal sketch follows.
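A minimal sketch of a fixed-size-block pool built on a free list (the FixedPool name and its interface are illustrative, not from the notes, and this toy version is not thread-safe):

#include <cstddef>
#include <new>
#include <vector>

class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(adjust(blockSize)), storage_(blockSize_ * blockCount) {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < blockCount; ++i) {
            void* block = &storage_[i * blockSize_];
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }
    }

    void* allocate() {
        if (!freeList_) throw std::bad_alloc();    // exhausted (a real pool might grow)
        void* block = freeList_;
        freeList_ = *static_cast<void**>(block);   // pop the head of the free list
        return block;
    }

    void deallocate(void* block) {
        *static_cast<void**>(block) = freeList_;   // push back onto the free list
        freeList_ = block;
    }

private:
    static std::size_t adjust(std::size_t n) {
        const std::size_t a = sizeof(void*);
        if (n < a) n = a;
        return (n + a - 1) / a * a;                // keep blocks pointer-aligned
    }

    std::size_t blockSize_;
    std::vector<unsigned char> storage_;           // one contiguous slab: no fragmentation
    void* freeList_ = nullptr;
};

int main() {
    FixedPool pool(sizeof(double), 128);
    void* p = pool.allocate();
    pool.deallocate(p);
}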

7. Inlining

Inlining replaces a call to a method with the method's code. It improves performance by eliminating call overhead and by enabling optimization across the call. Its main purpose is run-time optimization, although for very small methods it can also make the executable image smaller. Summarized as follows:

Inlining improves performance in two ways:
1) Optimization across the call
Once a call is inlined, the compiler has a fuller view of the surrounding context and can optimize the method both at the source level and at the machine-code level. The typical form of this optimization is to do part of the work at compile time so that similar work is not repeated at run time.
2) Avoiding the cost of the method call itself

Drawbacks of inlining: 1) code bloat; 2) some methods should not be inlined at all, for example recursive ones. If a recursive function were inlined without limit, the compiler would keep trying to expand the call inside the call and never terminate. A brief sketch of both sides follows.
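A minimal sketch (the Point class and factorial function are illustrative): a tiny accessor is a good inlining candidate, while a recursive function cannot be fully inlined.

class Point {
public:
    Point(int x, int y) : x_(x), y_(y) {}
    // Defined inside the class, so implicitly inline: the call overhead
    // disappears and the compiler can optimize across the call site.
    int x() const { return x_; }
    int y() const { return y_; }
private:
    int x_, y_;
};

// Recursive: the compiler cannot expand this into its callers without limit.
inline unsigned long factorial(unsigned n) {
    return n <= 1 ? 1UL : n * factorial(n - 1);
}

int main() {
    Point p(3, 4);
    return p.x() + p.y() + static_cast<int>(factorial(5) % 7);
}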

8. Reference Counting

Reference counting is a memory-management technique that can be viewed as a simple garbage-collection mechanism. It lets multiple users of a common value share a single underlying object, eliminating copying on assignment and saving both memory and valuable CPU time.

The basic idea of reference counting is to transfer the responsibility for destroying an object from client code to the object itself. The object tracks how many references to it currently exist, not who holds them, and destroys itself when the count reaches zero; the only extra work is adjusting the count as references are created and released. In other words, the object is destroyed exactly when it is no longer in use. Reference counting is thus a simple and efficient way to manage memory.

It also simplifies keeping track of objects on the heap. Once an object is allocated from the heap, we need to know exactly who owns it, because only the owner may destroy it. In actual use, however, the object may be passed along to other objects (for example through pointer arguments), and once the flow becomes complicated it is hard to determine who ultimately owns it. With reference counting we no longer need to care who owns the object, because that right has been handed to the object itself: when nobody uses it any longer, it is responsible for destroying itself.

In short, a smart pointer with reference counting combines the ability of an ordinary pointer to share the real object with auto_ptr's ability to release it automatically. It manages the lifetime of the real object and the count of valid references automatically, without lost references, memory leaks, or multiple deletion. A minimal sketch follows.
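A minimal sketch of such a pointer (the CountedPtr name and interface are illustrative; in modern C++ std::shared_ptr plays this role, and this toy version is not thread-safe):

template <typename T>
class CountedPtr {
public:
    explicit CountedPtr(T* raw = nullptr)
        : object_(raw), count_(raw ? new long(1) : nullptr) {}

    CountedPtr(const CountedPtr& other)
        : object_(other.object_), count_(other.count_) {
        if (count_) ++*count_;                 // one more reference
    }

    CountedPtr& operator=(const CountedPtr& other) {
        if (this != &other) {
            release();                         // drop our current reference
            object_ = other.object_;
            count_ = other.count_;
            if (count_) ++*count_;
        }
        return *this;
    }

    ~CountedPtr() { release(); }

    T& operator*() const { return *object_; }
    T* operator->() const { return object_; }

private:
    void release() {
        // The object destroys itself when the last reference disappears.
        if (count_ && --*count_ == 0) {
            delete object_;
            delete count_;
        }
        object_ = nullptr;
        count_ = nullptr;
    }

    T* object_;
    long* count_;
};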

However, with reference counting we also lose one piece of information: exactly who is holding references to the object. The gain of reference counting also comes at the cost of an extra level of indirection on each access.

9. Context Switching

Context switching, sometimes called process switching or task switching, is the CPU switching from one process or thread to another. It carries a large time overhead.

A switch saves the state of the process and the state of the processor. The process state is saved to keep an accurate record of where the process was executing, and the processor state is saved so that the processor can be restored when the process resumes. Each time a process is swapped out, the processor state associated with it is moved from the processor into memory, and that data is loaded back each time the process is swapped back in.

There are three main costs of context switching:
1) Processor context migration
2) Loss of cache and TLB (translation lookaside buffer) contents. After each context switch, the process must rebuild its cached working set.
3) Scheduling overhead. The scheduler must decide whether to resume the interrupted process or to load another process onto the processor.

