ARM Cortex-A9 CPU in New Multi-Core Tablets

Source: Internet
Author: User
Tags: byte sizes, integer division, prefetch

Dual-core and quad-core CPUs are very popular now, so everyone should eventually understand the Cortex-A9. The Cortex-A8 is already somewhat dated in smartphones; the Cortex-A9 is very popular now, but experts predict its real blockbuster applications will arrive in 2013-2014. This article is a reprint of a summary of new Cortex-A9 features written by a very capable developer abroad; it is very useful.

The ARM Cortex-A9 CPU can be single-core, dual-core or quad-core, and features speculative out-of-order execution (which allows high-level code such as C/C++ to automatically run more efficiently), yet uses extremely little battery power. So the ARM Cortex-A9 is used in most of the latest multi-core devices, such as the Apple iPad 2 (Apple A5 processor), LG Optimus 2X (NVIDIA Tegra 2), Samsung Galaxy S II (Samsung Exynos 4210), Sony NGP (PSP2), and the PandaBoard (TI OMAP4430). Here are some notes I made when reading the ARM Cortex-A Series Programmer's Guide:

Differences between ARM Cortex-A8 and Cortex-A9 (e.g. iPad 1 vs iPad 2):
  • Cortex-A9 has many advanced features for a RISC CPU, such as speculative data accesses, branch prediction, multi-issue of instructions, hardware cache coherency, out-of-order execution and register renaming. Cortex-A8 does not have these, except for dual-issue of instructions and branch prediction. Therefore assembly-code optimizations & NEON SIMD are not as important on Cortex-A9 anymore.
  • Cortex-A9 has 32 bytes per L1 cache line, whereas Cortex-A8 has 64 bytes per cache line.
  • Cortex-A9 has an external L2 cache (a separate "outer" PL310 or newer L2C-310 controller), whereas Cortex-A8 has an internal L2 cache (on-chip "inner" cache, therefore faster).
  • Cortex-A9 MPCore has separate L1 data and instruction caches for each core, with hardware cache coherency for the L1 data cache but not the L1 instruction cache. Any L2 cache is shared externally between all the cores.
  • Cortex-A9 must use the preload engine (PLE) in the external L2 cache controller (if it has one), whereas Cortex-A8 has an internal PLE for its L2 cache.
  • Cortex-A9 has a full VFPv3 FPU, whereas Cortex-A8 only has VFPLite. The main difference is that most float operations take 1 cycle on Cortex-A9 but take 10 cycles on Cortex-A8! Therefore VFP is very slow on Cortex-A8 but decent on Cortex-A9.
  • Cortex-A9 allows half-precision (16-bit) floats, whereas Cortex-A8 only allows 32-bit single and 64-bit double floats. But half-precision has almost no directly supported operations anyway.
  • Cortex-A9 can't dual-issue multiple NEON instructions, whereas Cortex-A8 can potentially dual-issue certain NEON load/store instructions with other NEON instructions.
  • Cortex-A8 had the NEON unit behind the ARM unit, so NEON had fast access to ARM registers & memory, but it took a 20-cycle delay for any registers or flags from NEON to reach the ARM side! This often occurs with function return values (unless the "hardfp" calling convention or function inlining is used).
  • Cortex-A8 had a separate load/store unit for NEON and one for ARM, so if they were both loading or storing addresses in the same cache line, it adds about 20 cycles of delay.
  • Cortex-A9 uses LDREX/STREX for multi-threaded synchronization without blocking all cores, whereas Cortex-A8 uses simple disabling of interrupts for mutexes.
  • All Cortex-A8 CPUs have a NEON SIMD unit, whereas some Cortex-A9 CPUs don't have a NEON SIMD unit (e.g. NVIDIA Tegra 2 does not have NEON, but NVIDIA Tegra 3 will have NEON).

 

Notes on ARM Cortex-A9, or any ARM Cortex-A in general:
    • Cortex-A9 has a 4-way set-associative L1 data cache using 32 bytes per cache line (16 KB, 32 KB or 64 KB of L1 cache, which is 512, 1024 or 2048 L1 cache lines).
    • Cortex-A9 MPCore can't clean or invalidate both L1 & external L2 at the same time, so incoherency can occur unless it is done in the correct order by software: to clean, clean the L1 cache first and then L2; to invalidate, invalidate the L2 cache first and then L1.
    • Cortex-A9 contains a "fast loop mode" where very small loops (under 64 bytes of code and possibly cache-line aligned) can run completely in the CPU decode & prefetch stages without accessing the instruction cache.
    • Cortex-A9 has support for automatic data prefetching (if enabled by the OS), so that if you are accessing 1 or 2 arrays sequentially, it will detect this and prefetch the next data into cache before you need it.
    • Cortex-A9 can detect when the STM instruction is used for memset() & memcpy(), and optimize the cache access by not loading data into cache if it will be overwritten anyway.
    • Cortex-A9 MPCore has a separate NEON unit for each core, e.g. a quad-core Cortex-A9 has 4 NEON units!
    • If the TLB does not have a page in its table, then a "page table walk" needs 2 or 3 memory accesses instead of 1.
    • "char" variables on ARM may default to unsigned chars, whereas they default to signed chars on x86, so this can cause runtime errors if not expected.
    • The first 4 arguments to a function are passed directly in the first 4 32-bit registers, whereas the remaining arguments use stack memory and so are slower. But C++ automatically uses the 1st argument as a pointer to "this", so only 3 function arguments can go in registers.
    • 64-bit arguments are more tricky and limiting due to their 8-byte alignment requirement.
    • If a function will call another function, it needs to maintain 8-byte stack alignment, so it should push/pop an even number of registers. Leaf functions don't need 8-byte stack alignment.
    • When passing arguments with NEON Advanced SIMD using the "hardfp" calling convention, registers q0-q3 (s0-s15 or d0-d7) are used. Registers q4-q7 (s16-s31 or d8-d15) must be preserved if modified.
    • Newer C99 compilers allow the "restrict" keyword to declare that a pointer does not overlap other pointers, allowing compiler optimizations.
    • Cortex-A does not have integer division, so any divide instruction is a slow (~50 cycle) function call or floating-point divide. But shifts left or right are often free.
    • Since the branch target address cache (BTAC) is based on 16-byte lines and only allows 2 branches per line, if any code has more than 2 branches within 16 bytes of code, it is likely to flush the instruction pipeline.
    • Since Cortex-A9 does register renaming at up to 2 registers per cycle, LDM or STM instructions of 5 or more registers can cause pipeline stalls.
    • Conditional execution in ARM mode (not Thumb) allowed speedups in older CPUs, but now it is often faster to use branches, because conditional instructions may need unwinding.
    • Good info on optimizing memset() & memcpy() is given on page 17-19 of the ARM programmer's guide, saying to use LDM & STM of a whole cache line, where an aligned store is more important than an aligned load, and up to 4 PLDs should be inserted, roughly 3 cache lines ahead of the current cache line.
    • Some info on optimizing float operations with VFP is given in Chapter 18 of the ARM programmer's guide.
    • The Cortex-A9 has a big delay when switching between VFP and NEON instructions.
    • NEON can't process 64-bit floats, divisions or square roots, so they are done with VFP instead.
    • NEON can be detected at compile time by checking: #ifdef __ARM_NEON__
    • NEON can be detected at runtime on Linux by checking the CPU flags, by running "cat /proc/cpuinfo" or by searching the file "/proc/self/auxv" for AT_HWCAP to check for the HWCAP_NEON bit (4096).
    • Cortex-A9 MPCore uses the MESI protocol to keep all L1 caches coherent. Unfortunately, if one thread is often writing to a piece of data and another thread is often reading from a different piece of data on the same cache line, that cache line is transferred back and forth repeatedly (thrashed).
    • The ARM DS-5 development suite generates faster code than GCC/LLVM compilers and has a more powerful debugger (using the Eclipse IDE) that can analyze the system non-intrusively using CoreSight or JTAG.
    • The ARM "Vector Floating Point" (VFP) module was intended for SIMD vector operations, but it never became that! The VFP unit is just a scalar FPU for 32-bit floats and 64-bit doubles.
    • The ARM "Advanced SIMD" (NEON Media Processing Engine) unit is a true SIMD unit for integers (8, 16, 32 or 64-bit, signed or unsigned), floats (32-bit only, plus limited 16-bit half-precision float support) and 16-bit binary polynomials.

 

Link: http://www.shervinemami.info/armAssembly.html
