dip is not the pixel density; the pixel density is DPI. To quote the definition: "dip: Device-Independent Pixels. Different devices have different display densities, which depend on the device's hardware. Using this unit is recommended so that a layout can support WVGA, HVGA, and QVGA screens without hard-coding pixel values."

Now, to the point. First, px. px is a physical pixel: if you use px, drawing happens in actual device pixels. For example, a horizontal line 240 px long looks like half the screen width on a 480-px-wide emulator, but 3/4 of the screen width on a 320-px-wide emulator.

dip, by contrast, divides the screen height into 480 parts and the width into 320 parts. So if you draw a 160 dip horizontal line, it spans half the screen width whether the emulator is 480 px wide or 320 px wide.

With that, your questions are answered:

Question 1: Why, with the code above, do the px and dip values I wrote render at the same length on the same emulator? Isn't dip != px?
Answer: if your emulator is 320 px wide, then 100 dip is the same length as 100 px.

Question 2: Why is the rendered length on emulator 1 different from that on emulator 2? Emulator 1's line looks a little longer (emulator 1 displays the full 512).
Answer: your two emulators probably have different widths and heights, so the same dip value displays at different pixel lengths.
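The relationship described above can be sketched as a simple scale factor: a dip value is multiplied by the screen's density (1.0 on the 320-px-wide baseline, so dip == px there). This is a minimal, Android-free sketch of that arithmetic; the helper name `dipToPx` and the hard-coded density values are my own illustration, not an API from the post. In a real Android app you would instead use `TypedValue.applyDimension` with the current `DisplayMetrics`.

```java
// Sketch of the dip -> px relationship: px = dip * density.
// density is the scale factor relative to the 320x480 baseline,
// where 1 dip == 1 px (density 1.0).
public class DipToPx {

    // Convert a dip value to physical pixels for a given density.
    public static int dipToPx(float dip, float density) {
        return Math.round(dip * density);
    }

    public static void main(String[] args) {
        // On the 320-px-wide baseline emulator (density 1.0),
        // dip and px coincide -- this is the answer to Question 1:
        System.out.println(dipToPx(100f, 1.0f)); // 100

        // On a denser screen (density 1.5), the same 100 dip
        // covers more physical pixels -- hence Question 2's
        // different lengths on different emulators:
        System.out.println(dipToPx(100f, 1.5f)); // 150
    }
}
```

Note how the same dip value yields different pixel counts as the density changes; that is exactly why the two emulators render the line at different lengths.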
Http://blog.sina.com.cn/s/blog_9d708b4f01015lz8.html