Many readers, besides checking a graphics card's listed specifications or consulting a graphics card ladder chart, can also use the professional tool GPU-Z to judge how good a card is. The main skill with GPU-Z is learning to read the card's parameters; from these detailed figures you can also tell a genuine card from a fake one, since a card whose detected parameters differ from its advertised specifications is most likely a problem card. But we digress; the focus below is on how to use GPU-Z to view graphics card parameters.
GPU-Z is very easy to use: just download and install it, then open it to see the detailed parameters of your graphics card, as shown in the figure. Because the software was developed abroad, the original interface is in English; if English is a problem, it is best to download the Chinese-language version.
Because the author's computer uses an Intel Core i3 with integrated graphics, GPU-Z only detected the parameters of the i3's built-in HD 4000 integrated GPU.
How do you judge a graphics card with GPU-Z? The following sections explain each of the detected parameters.
"GPU": Display core, core code, refers to the development of the company's internal number, can also be used to distinguish between performance and height.
"Process": the core of the production process, Unit Nano, the smaller the more advanced technology, the lower power consumption.
"Raster" (ROPs): the higher the count, the better the card's performance.
The raster operation units belong to the output stage and are responsible for the final rendering steps, rasterizing the pixels; they mainly affect effects such as anti-aliasing and motion blur, and have little influence on lighting.
"Bus Interface": Provide data traffic bandwidth, the current mainstream interface is PCI-E 16x, can provide 8g/s data flow (dual channel, up and down 4g/s).
Note: a reading such as "x16 @ x16" shows both the highest link width supported and the link width currently in use; if the motherboard or power supply limits the link, the working interface may drop to a lower width.
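As a rough illustration (not something GPU-Z itself calculates), the theoretical bandwidth of a PCI-E link can be estimated from the per-lane rate and the lane count. The sketch below uses the commonly quoted theoretical per-lane figures; the function name and values are only examples:

```python
# Sketch: estimate theoretical PCI-E bandwidth per direction.
# Per-lane rates (MB/s, one direction) are the commonly quoted
# theoretical values; real-world throughput is lower.
PCIE_LANE_MBPS = {
    "1.x": 250,   # 2.5 GT/s with 8b/10b encoding
    "2.0": 500,   # 5.0 GT/s with 8b/10b encoding
    "3.0": 985,   # 8.0 GT/s with 128b/130b encoding
}

def pcie_bandwidth_gbs(generation: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCI-E link."""
    return PCIE_LANE_MBPS[generation] * lanes / 1000

# A PCI-E 1.x x16 slot: about 4 GB/s each way, 8 GB/s in total,
# which matches the figure quoted above.
print(pcie_bandwidth_gbs("1.x", 16))  # ~4.0
```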
"Shaders": in the old architectures this meant "pixel pipelines + vertex shaders"; in the new architectures it is unified into "unified shader units", also called "stream processors". The higher the count, the better the performance.
With the legacy architecture, performance is distinguished by how many pixel pipelines and vertex shaders a card has, which gives you a sense of its graphics performance.
The new architecture is easier to read because there is only a single number: the higher it is, the better.
Note: on NVIDIA cards each stream processor works independently, so the stream processor count looks small.
AMD cards define the "unified shader unit" differently: five stream processors are grouped together to work as one unit, so the number looks very high. (With different core architectures, the ratio is not necessarily AMD:NVIDIA = 5:1.)
"Directx support": referred to as DX, is the program written by Microsoft, for multimedia instruction, in graphics, is for the screen effects, the highest level is DX12 (Windows 10).
"Image number filling rate": Raster work data processing flow, formula GPU frequency x Grating = image number filling rate.
"Texture fill rate": Renders the data processing flow of the pipeline/stream processor, the formula GPU frequency x pipeline (processor unit) = The texture fill rate.
"Video Memory Type": Video memory, provide storage data and exchange data, the higher the memory algebra, the higher the ram frequency, the greater the transmission of data, the current highest level of GDDR5, can be as high as 4600mhz/s above the speed, the larger the memory, the better performance.
"Video Memory bit width": The memory bit width is a significant number of digits that can transmit data in a clock cycle, the larger the number of bits can be transmitted in a moment, the larger the amount of data, which is one of the important parameters of video memory, the general graphics card is 128bit, a good number of video cards can reach 256bit, some high-end video card fever, Even reached 512bit.
Note: because today's GDDR5 memory runs at very high frequencies, even a low bus width can deliver very high memory bandwidth; even at 128-bit it can provide several dozen GB/s, so a high bus width is not strictly necessary.
There is also a special case: the HD 2900 XT's memory bus width reached 512-bit, but because of its core architecture the rendering performance could not keep up, so even with a 512-bit bus it could not achieve the expected results.
"Memory capacity" memory is used as data storage and exchange data, but not large capacity means that the graphics card has very high performance, but in other parameters equivalent, the larger the memory capacity, the current independent graphics memory capacity generally large 1GB d5,512m basically eliminated, mainstream graphics cards are now popular 2GB D5, The high-end graphics card is up to 4GB D5.
High-end graphics cards need large memory capacities because their cores process huge amounts of data.
Low-end graphics cards gain no performance from large memory, because the core itself processes data at a lower rate.
"Memory Bandwidth": Video memory bandwidth refers to the display of the chip and memory data transfer rate, it in bytes/sec units. Memory bandwidth is one of the most important factors that determine the performance and speed of graphics cards.
Memory bandwidth formula: memory bandwidth = effective memory frequency × memory bus width / 8
With current GDDR5 memory, even a 128-bit bus can provide several dozen GB/s of bandwidth, as the calculation below shows.
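Here is a quick sketch of the formula above, using illustrative GDDR5 numbers (note that the effective data rate, not the base clock, goes into the calculation):

```python
# Sketch: memory bandwidth = effective memory frequency x bus width / 8.
# Values are illustrative, not from a specific card.
effective_memory_mhz = 4600   # GDDR5 effective data rate quoted above
bus_width_bits = 128          # memory bus width

bandwidth_gbs = effective_memory_mhz * bus_width_bits / 8 / 1000
print(bandwidth_gbs)  # 73.6 GB/s -- "several dozen GB/s" even at 128-bit
```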
"GPU, Video Memory Frequency": can be used to directly distinguish the same series of high and low, such as HD4850, HD4830 (800SP), the difference between them is only in the GPU, video memory frequency, if the same value, performance is the same.
"Actual frequency": Refers to the GPU, video memory current frequency of work, this value can be changed
"Default frequency" refers to the frequency that is set by the manufacturer, but the frequency with which each vendor is developed varies from one to the other.
That covers how to view graphics card parameters with GPU-Z. These are fairly specialized terms; if you are not especially interested in the detailed parameters, you can simply use a "graphics card ladder chart" to check a card's approximate performance level.