Multi-core Application Programming in Practice

Source: Internet
Author: User
Tags: variable scope, oracle solaris
Original title: Multicore Application Programming: for Windows, Linux, and Oracle Solaris
Author: Darryl Gove
Translator: Guo Qingxia
Series: Turing Programming Series
Publisher: Posts & Telecom Press
ISBN: 9787115317506
Listed: May 2013
Format: 16mo; Edition: 1-1
Category: Computers

Multi-core Application Programming in Practice is a comprehensive, practical guide to multicore application programming. It is designed to show you how to write applications that are functionally correct, perform well, and scale to run across multiple CPU cores. The book provides example programs for a range of operating systems and processor types, covering how multicore applications are built on Unix-like operating systems (Linux, Oracle Solaris, OS X) and on Windows, how the hardware implementation of multicore processors affects application performance, the pitfalls to avoid when writing parallel applications, and how to write applications that scale to large numbers of parallel threads. It is suitable for any C programmer.

Build multicore applications for mainstream platforms that deliver both high performance and high scalability. Beyond introducing current techniques for exploiting parallelism on Windows, Linux, and Oracle Solaris, the book uses examples to illustrate the challenges of programming multicore processors and guides readers toward applications that work correctly, perform well, and scale to 8, 16, or more CPU cores. You will learn how the hardware implementation affects application performance, how to avoid common pitfalls, how to write applications that handle large numbers of parallel threads, and how to master advanced parallelization techniques. The book is not tied to a single approach or platform: with it, any C programmer working with modern multicore processors can handle all of the mainstream operating-system environments.
With Multi-core Application Programming in Practice you will learn how to:

  • adopt parallelism at the right time;
  • share data safely among multiple threads using POSIX or Windows threads (a pthreads sketch follows the contents below);
  • write applications that use hand-coded synchronization and sharing;
  • take full advantage of automatic parallelization and OpenMP (an OpenMP sketch follows the contents below);
  • overcome common obstacles to scalability;
  • write correct, fast, scalable parallel code in new ways.

Contents

Chapter 1  Hardware, Processes, and Threads
1.1  The internal structure of a computer
1.2  The motivation for multicore processors
1.2.1  Supporting multiple threads on a single chip
1.2.2  Increasing the instruction issue rate with pipelined processor cores
1.2.3  Using caches to hold recently used data
1.2.4  Using virtual memory to store data
1.2.5  Translating from virtual addresses to physical addresses
1.3  The characteristics of multiprocessor systems
1.4  The translation of source code to assembly language
1.4.1  The performance of 32-bit versus 64-bit code
1.4.2  Ensuring the correct order of memory operations
1.4.3  The differences between processes and threads
1.5  Summary

Chapter 2  Coding for Performance
2.1  Defining performance
2.2  Understanding algorithmic complexity
2.2.1  Examples of algorithmic complexity
2.2.2  Why algorithmic complexity matters
2.2.3  Using algorithmic complexity with care
2.3  How structure impacts performance
2.3.1  Trading performance against convenience in source code and build structure
2.3.2  Using libraries to structure applications
2.3.3  The impact of data structures on performance
2.4  The role of the compiler
2.4.1  The two types of compiler optimization
2.4.2  Selecting appropriate compiler options
2.4.3  How cross-file optimization can improve performance
2.4.4  Using profile feedback
2.4.5  How potential pointer aliasing can inhibit compiler optimization
2.5  Identifying where time is spent using profiling
2.6  How not to optimize
2.7  Performance by design
2.8  Summary

Chapter 3  Identifying Opportunities for Parallelism
3.1  Using multiple processes to improve system productivity
3.2  Multiple users sharing a single system
3.3  Improving machine efficiency through consolidation
3.3.1  Using containers to isolate applications sharing a system
3.3.2  Hosting multiple operating systems
3.4  Using parallelism to improve the performance of a single task
3.4.1  Understanding how parallel applications work
3.4.2  How parallelism can change the choice of algorithms
3.4.3  Amdahl's law
3.4.4  Determining the maximum practical number of threads
3.4.5  How synchronization costs reduce scaling
3.5  Parallelization patterns
3.5.1  Data parallelism using SIMD instructions
3.5.2  Parallelization using processes or threads
3.5.3  Multiple independent tasks
3.5.4  Multiple loosely coupled tasks
3.5.5  Multiple copies of the same task
3.5.6  A single task split over multiple threads
3.5.7  Using a pipeline of tasks to work on a single item
3.5.8  Dividing work between a client and a server
3.5.9  Splitting responsibility into a producer and a consumer
3.5.10  Combining parallelization strategies
3.6  How dependencies influence the ability to run code in parallel
3.6.1  Antidependencies and output dependencies
3.6.2  Using speculation to break dependencies
3.6.3  Critical paths
3.7  Identifying parallelization opportunities
3.8  Summary

Chapter 4  Synchronization and Data Sharing
4.1  Data races
4.1.1  Using tools to detect data races
4.1.2  Avoiding data races
4.2  Synchronization primitives
4.2.1  Mutexes and critical regions
4.2.2  Spin locks
4.2.3  Semaphores
4.2.4  Readers-writer locks
4.2.5  Barriers
4.2.6  Atomic operations and lock-free code
4.3  Deadlocks and livelocks
4.4  Communication between threads and processes
4.4.1  Memory, shared memory, and memory-mapped files
4.4.2  Condition variables
4.4.3  Signals and events
4.4.4  Message queues
4.4.5  Named pipes
4.4.6  Communication through the network stack
4.4.7  Other approaches to sharing data between threads
4.5  Storing thread-private data
4.6  Summary

Chapter 5  Using POSIX Threads
5.1  Creating threads
5.1.1  Thread termination
5.1.2  Passing data to and from child threads
5.1.3  Detached threads
5.1.4  Setting pthread attributes
5.2  Compiling multithreaded code
5.3  Process termination
5.4  Sharing data between threads
5.4.1  Protecting access using mutex locks
5.4.2  Mutex attributes
5.4.3  Using spin locks
5.4.4  Read-write locks
5.4.5  Barriers
5.4.6  Semaphores
5.4.7  Condition variables
5.5  Variables and memory
5.6  Multiprocess programming
5.6.1  Sharing memory between processes
5.6.2  Sharing semaphores between processes
5.6.3  Message queues
5.6.4  Pipes and named pipes
5.6.5  Using signals to communicate with other processes
5.7  Sockets
5.8  Reentrant code and compiler flags
5.9  Summary

Chapter 6  Windows Threading
6.1  Creating native Windows threads
6.1.1  Terminating threads
6.1.2  Creating and resuming suspended threads
6.1.3  Using handles to kernel resources
6.2  Methods of synchronization and resource sharing
6.2.1  An example of synchronization between threads
6.2.2  Protecting access to code with critical sections
6.2.3  Protecting regions of code with mutexes
6.2.4  Slim reader/writer locks
6.2.5  Semaphores
6.2.6  Condition variables
6.2.7  Signaling event completion to other threads or processes
6.3  Wide string handling in Windows
6.4  Creating processes
6.4.1  Sharing memory between processes
6.4.2  Inheriting handles in child processes
6.4.3  Naming mutexes and sharing them between processes
6.4.4  Communicating with pipes
6.4.5  Communicating using sockets
6.5  Atomic updates of variables
6.6  Allocating thread-local storage
6.7  Setting thread priority
6.8  Summary

Chapter 7  Automatic Parallelization and OpenMP
7.1  Using automatic parallelization to produce parallel code
7.1.1  Identifying and parallelizing reductions
7.1.2  Automatic parallelization of code containing calls
7.1.3  Assisting the compiler in automatically parallelizing code
7.2  Using OpenMP to produce a parallel application
7.2.1  Using OpenMP to parallelize loops
7.2.2  Runtime behavior of an OpenMP application
7.2.3  Variable scoping inside OpenMP parallel regions
7.2.4  Parallelizing reductions using OpenMP
7.2.5  Accessing private data outside the parallel region
7.2.6  Improving work distribution using scheduling
7.2.7  Using parallel sections to perform independent work
7.2.8  Nested parallelism
7.2.9  Using OpenMP for dynamically defined parallel tasks
7.2.10  Keeping data private to threads
7.2.11  Controlling the OpenMP runtime environment
7.2.12  Waiting for work to complete
7.2.13  Restricting the threads that execute a region of code
7.3  Ensuring that code in a parallel region is executed in order
7.4  Collapsing loops to improve workload balance
7.5  Enforcing memory consistency
7.6  An example of parallelization
7.7  Summary

Chapter 8  Hand-Coded Synchronization and Sharing
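
As a taste of the POSIX-threads material mentioned above (the subject of Chapters 4 and 5), here is a minimal sketch, not taken from the book, of several threads safely incrementing a shared counter under a mutex; the thread count, iteration count, and function names are illustrative only.

    /* Minimal pthreads sketch (illustrative, not from the book):
     * NTHREADS threads increment a shared counter, serialized by a mutex.
     * Build with: cc -pthread counter.c -o counter */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS    4
    #define ITERATIONS  100000

    static long counter = 0;                              /* shared data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *work(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);                    /* protect the update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, work, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);

        /* Without the mutex the increments would race and the total would vary. */
        printf("counter = %ld (expected %d)\n", counter, NTHREADS * ITERATIONS);
        return 0;
    }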
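
And here is a similarly minimal OpenMP sketch, also not from the book, showing the loop-parallelization-with-reduction style that Chapter 7 discusses. The array size is illustrative; OpenMP support is typically enabled with a compiler flag such as -fopenmp (GCC/Clang) or -xopenmp (Oracle Solaris Studio), and without such a flag the pragma is simply ignored and the loop runs serially.

    /* Minimal OpenMP sketch (illustrative, not from the book):
     * the iterations of the summation loop are divided among threads and the
     * per-thread partial sums are combined by the reduction clause. */
    #include <stdio.h>

    #define N 1000000

    static double a[N];

    int main(void)
    {
        double sum = 0.0;

        for (int i = 0; i < N; i++)       /* set up some data */
            a[i] = i * 0.5;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }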

