I have always been an excitable person, and today's WWDC 2010 keynote by Steve Jobs was a case in point. You have to admire Jobs's legendary "reality distortion field": he paced the two hours just right, first hammering developers with a barrage of numbers and targets, then rolling out surprise after surprise in the inspired demos of the iPhone 4's eight headline features. Nothing this well staged has appeared since Reggie's wonderful Wii demo at E3.
Although Game Center did not ship in yesterday's iOS 4.0 release, with this step Apple's hardware will undoubtedly become a serious gaming platform. I took a quick look at the two Game Center sessions: it is basically matchmaking, friends + invitations, in-game voice chat, and P2P networking (the API surface is roughly equivalent to DirectPlay/XNA), so Game Center = XNA + XDK extensions.
Game Center aside, judging from the iOS 4 + iPhone 4 features Jobs demonstrated, I would guess the following new kinds of apps will appear in the App Store:
1. A large number of new applications based on Augmented Reality (AR) technology.
Thanks to the newly added gyroscope, the iPhone 4 is a true 6-axis system, and its motion sensing is in a different league from earlier devices: it can orient the device, and hence a virtual object, simply and accurately. Combined with the HD camera APIs, the two cameras, and the microphones, AR applications can go far beyond today's GPS + compass + Google Maps routine.
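As a rough illustration of what the gyroscope buys you, here is a minimal pure-Python sketch of integrating angular-rate samples into an orientation quaternion. All the names below are my own; a real app would read the fused attitude from Apple's motion APIs rather than dead-reckoning by hand:

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def integrate_gyro(q, omega, dt):
    """Advance orientation q by one gyro sample omega (rad/s) over dt seconds."""
    wx, wy, wz = omega
    mag = math.sqrt(wx*wx + wy*wy + wz*wz)
    if mag < 1e-12:
        return q
    half = 0.5 * mag * dt
    s = math.sin(half) / mag
    dq = (math.cos(half), wx*s, wy*s, wz*s)  # axis-angle -> quaternion
    return quat_mul(q, dq)

# Spin at 90 deg/s about the z axis for one second, in 100 small steps:
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.radians(90)), 0.01)
# q is now a 90-degree rotation about z: (cos 45deg, 0, 0, sin 45deg)
```

This is exactly the kind of "simply and accurately orient the device" computation that becomes possible once a rotation-rate sensor exists alongside the accelerometer and compass.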
2. Panoramic photos.
Thanks to the gyroscope's rotation-rate readings, the iPhone 4 can offer the sweep-panorama feature of current compact cameras (such as the Sony TX-7), with no need for post-hoc stitching and warping in the style of AutoStitch.
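A sketch of the idea, with hypothetical names: integrate the yaw rate into a sweep angle and let a cylindrical projection dictate where each captured strip lands, so no feature matching is needed:

```python
import math

def accumulate_yaw(yaw_rates, dt):
    """Integrate yaw-rate samples (rad/s) into a total sweep angle (rad)."""
    return sum(r * dt for r in yaw_rates)

def strip_column_px(yaw_rad, focal_px):
    """Cylindrical projection: horizontal pixel offset = focal length x angle."""
    return focal_px * yaw_rad

# Sweeping at a steady 0.5 rad/s for 0.2 s, with a 500 px focal length:
yaw = accumulate_yaw([0.5] * 20, 0.01)   # 0.1 rad of total sweep
print(strip_column_px(yaw, 500.0))       # the next strip goes ~50 px over
```

Because the placement comes straight from the sensor, the panorama can be assembled live as the user sweeps, rather than in a post-processing pass.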
3. User-created-content games based on panoramic or geo-tagged photos.
This can be counted as an offshoot of the panoramic photo. A panoramic photo can serve as a world into which multiple pieces of user-generated content are combined. With GPS, photos carry geographic information; add the 6-axis orientation and the lens FOV, and multiple photos of the same scene can be registered into a point cloud (similar to Photosynth, though with geographic and orientation information it should be easier). That, too, is a form of user-created content.
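To make the geometry concrete, here is a hedged sketch (all names mine) of turning a pixel in a geo-tagged photo into a 3D viewing ray from the phone's yaw and the lens FOV; rays from several photos of the same scene could then be triangulated into a point cloud:

```python
import math

def pixel_ray(u, v, hfov_rad, vfov_rad, yaw_rad):
    """u, v in [-1, 1] across the frame; the camera looks along +z at yaw 0,
    and yaw rotates about the vertical y axis. Returns a unit direction."""
    x = u * math.tan(hfov_rad / 2)
    y = v * math.tan(vfov_rad / 2)
    z = 1.0
    # rotate (x, y, z) about the y axis by yaw
    rx = x * math.cos(yaw_rad) + z * math.sin(yaw_rad)
    rz = -x * math.sin(yaw_rad) + z * math.cos(yaw_rad)
    n = math.sqrt(rx*rx + y*y + rz*rz)
    return (rx / n, y / n, rz / n)

# The image centre of a photo taken facing 90 degrees east points along +x:
print(pixel_ray(0.0, 0.0, math.radians(60), math.radians(45), math.radians(90)))
```

With GPS supplying each camera's position and this supplying each pixel's direction, two photos of the same landmark already pin it down in space, which is why the registration should be easier than Photosynth's image-only matching.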
4. Camera-based games.
The front camera makes it possible to detect player movement, so a wave of EyeToy-style games will emerge. Beyond the player's "explicit" actions, some games may also pick up implicit input such as a shake of the head; there are plenty of possibilities. In addition, APIs that transform the images/video from the front or rear camera could be used to add to or enhance the gameplay of other games.
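The simplest version of such movement detection is plain frame differencing; a toy sketch with hypothetical names:

```python
def motion_fraction(prev, curr, threshold=16):
    """prev/curr: flat lists of 0-255 grayscale pixels, same length.
    Returns the fraction of pixels that changed by more than threshold."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > threshold)
    return changed / len(prev)

still = [100] * 8
waved = [100] * 4 + [200] * 4   # the player moved across half the frame
print(motion_fraction(still, still))   # no motion
print(motion_fraction(still, waved))   # half the frame changed
```

An EyeToy-style game would run this per region of the frame and map regions of high motion to in-game actions; detecting a head shake is the same idea restricted to a tracked face region.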
5. Eye-controlled games.
This is really a refinement of the previous category, but done well it would be a novel control scheme: track the player's eyes in the front-camera video and use the gaze direction to control the game.
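Assuming some tracker has already located the pupil inside a detected eye region, the remaining step is only a mapping; a toy sketch with hypothetical names:

```python
def gaze_from_pupil(pupil_xy, eye_box):
    """pupil_xy: pupil centre in image pixels; eye_box: (left, top, width,
    height) of the detected eye region. Returns gaze in [-1, 1] per axis."""
    px, py = pupil_xy
    left, top, w, h = eye_box
    gx = 2.0 * (px - left) / w - 1.0
    gy = 2.0 * (py - top) / h - 1.0
    # clamp, since the tracker can jitter slightly outside the box
    clamp = lambda t: max(-1.0, min(1.0, t))
    return clamp(gx), clamp(gy)

# A pupil dead-centre in a 40x20 px eye region looks straight ahead:
print(gaze_from_pupil((120, 60), (100, 50, 40, 20)))
```

The game then treats the gaze pair like a virtual joystick, which is what makes this a refinement of camera-based games rather than a separate technique.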
6. Lighting applications.
If the LED flash can be controlled through an API, many more lighting applications will appear.
That is all I can think of for now.