What developers need to know about the new features in the iOS 11 SDK
I'm past the age of staying up all night to watch the WWDC live stream, but between the little one crying and being urged to change diapers, I was up early anyway, so I caught the WWDC 2017 Keynote while it was still "hot". As in previous years, although WWDC is a developer conference, the Keynote is not aimed specifically at us developers; it also serves to report on the state of the company, announce new products, and so on. For technical people, the sessions over the coming week will probably be more meaningful. If I had to evaluate this year's Keynote in one sentence, it would be: incremental innovation. On the big technical side, only ARKit is really worth studying, but we still saw updates like cross-app drag and drop and the new Files app that break further through iOS's original constraints (I won't even mention iMessage payments; China's mobile payment sector has led the world by at least three years). iOS 11, especially paired with the new hardware, should bring users a very good experience.
As an iOS developer, as in previous years, I've put together the things we may need to pay attention to.
New Frameworks
The new SDK brings two major frameworks: Core ML, which simplifies integrating machine learning models, and ARKit, for creating augmented reality (AR) applications.
Core ML
Since the advent of AlphaGo, deep learning has undoubtedly become an industry hotspot. Google also changed its strategy from mobile-first to AI-first last year. It's fair to say that nearly every first-tier internet company is betting on AI, and right now machine learning, and deep learning in particular, looks like the most promising path.
If you are not familiar with machine learning, allow me to overstep a little and give a brief introduction here. You can think of a machine learning model as a black-box function: given some input (perhaps a piece of text, or a picture), it produces a specific output (such as the name of the person in the text, or the name of the store that appears in the image). The model may be very crude at first and not give correct results at all, but you can train and gradually improve it with lots of data and correct answers. When the model is sufficiently optimized and the training volume is large enough, the black box will not only achieve high accuracy on the training data, but will also often return correct results for unknown real-world input. Such a model is a well-trained one that can be used in practice.
Training a machine learning model is an extremely heavy task; what Core ML does is translate an already trained model into a form iOS can understand, and "feed" new data to the model to get output. Abstracting a problem and creating a model is not that hard, but improving and training the model is something you could study for a lifetime, and most readers of this article probably won't want to go that deep. Fortunately, Apple provides a set of tools to convert models from various machine learning frameworks into a form Core ML can understand. With this, you can easily use models that others have already trained in your iOS app. Previously this might have required you to hunt down a model yourself, then write some C++ code to call it across platforms, and it was hard to take advantage of the GPU performance and Metal on an iOS device (unless you wrote shaders yourself to do the matrix computations). Core ML lowers the barrier to using a model considerably.
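As an illustration, here is a minimal sketch of what using a converted model looks like. The model file name is hypothetical: I'm assuming a FlowerClassifier.mlmodel has been dragged into the project, for which Xcode generates a Swift class; the input and output names below ("image", "classLabel") depend entirely on the model itself.

```swift
import CoreML
import CoreVideo

// Hypothetical: Xcode generates a `FlowerClassifier` class from the
// FlowerClassifier.mlmodel file added to the project. The input name
// ("image") and output name ("classLabel") are assumptions here.
func classify(pixelBuffer: CVPixelBuffer) {
    do {
        let model = FlowerClassifier()
        // The generated prediction method is strongly typed to the model's inputs.
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```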
Core ML is what drives the visual recognition in the Vision framework and the semantic-analysis APIs in Foundation on iOS. Ordinary developers can benefit directly from these high-level APIs, for things like face detection or text recognition. These capabilities existed in previous versions of the SDK as well, but in the iOS 11 SDK they have been gathered into the new framework and opened up with more specific, lower-level controls. For example, you can use the high-level interfaces in Vision while specifying the model used underneath. This brings new possibilities to computer vision on iOS.
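For example, detecting faces through Vision's high-level API takes only a request and a handler. A minimal sketch:

```swift
import Vision

// Detect face rectangles in a CGImage using Vision's high-level API.
func detectFaces(in cgImage: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is normalized (0...1) with the origin at the bottom-left.
            print("Found a face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // perform(_:) runs synchronously; call it off the main thread in a real app.
    try? handler.perform([request])
}
```

To swap in a custom model instead, Vision can also wrap a Core ML model through VNCoreMLRequest.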
Google's and Samsung's AI efforts on Android have mostly been services integrated into their own apps. By contrast, Apple, drawing on its control over the ecosystem and the hardware, has handed more of these options over to third-party developers.
ARKit
The AR demonstration was arguably the only real highlight of the Keynote. With ARKit in the iOS 11 SDK, Apple has brought a big gift to developers, especially those working on AR. AR is not exactly a new technology; Pokémon Go, for instance, demonstrated AR's potential in games. But beyond the IP and the novelty, I personally don't think Pokémon Go qualifies as a showcase of AR technology's potential. The live demo showed us one possibility: roughly speaking, ARKit uses a single camera plus the gyroscope to do a remarkably good job of detecting planes and keeping virtual objects stable. It seems almost certain that the Apple of "never first, only best" is stepping back onto the stage at this moment.
ARKit greatly lowers the barrier for ordinary developers to play with AR, and it is also Apple's option for competing with VR at this stage. With the help of ARKit and SceneKit, you can imagine more AR games like Pokémon Go (a virtual pet in the real world is probably the easiest thing to imagine), and even a full range of AR multimedia built on the iPad Pro's existing capabilities; showing things like "AR movies" is no longer a mere dream.
Correspondingly, the API is not very complex. The view involved is almost an extension of SceneKit, and since the system has already done the work of understanding the real world, what developers need to do is put virtual objects in the right places on screen and handle interaction with them. Combined with Core ML to recognize the actual objects in front of the camera, it's fair to say there is plenty of room for imagination in all kinds of special-effects cameras and photography apps.
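A minimal sketch of that setup, using the class names as I understand them from the iOS 11 SDK (ARSCNView is essentially a SceneKit view whose camera and coordinate space are driven by world tracking):

```swift
import ARKit
import SceneKit

class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal  // let ARKit find horizontal planes
        sceneView.session.run(configuration)
    }

    // Place a virtual box half a meter in front of the initial camera position.
    func addBox() {
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0, -0.5)
        sceneView.scene.rootNode.addChildNode(box)
    }
}
```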
Xcode editor and compiler
Speed is life, and a developer's life is wasted waiting for compilation. Swift has been well received since its debut, but the slow compile times, the occasionally missing syntax hints, the inability to refactor, and so on have been the biggest black marks on its tool chain. The editor in Xcode 9 has been rewritten to support refactoring of Swift code (albeit very basic), puts version control in a more prominent position, adds GitHub integration, and enables wireless deployment and debugging over the LAN.
The new build system has been rewritten in Swift, and after some comparisons, compile speed has indeed improved noticeably. I don't know whether the credit goes to Swift 4, but the total compile time of my company's project dropped from more than three minutes to about two and a half, which is quite significant.
The indexing system in Xcode 9 also uses a new engine, which is said to be up to 50 times faster in large projects. Unfortunately the project I work on is probably not large enough to witness the difference; Swift code completion in the project is still effectively dead. This may be because the indexing system and the new build system don't yet work well together. It is still beta software, after all, so perhaps we should give the Xcode team some more time (although this may well be how it stays in the final release).
Since the Swift 4 compiler also provides Swift 3 compatibility (set the Swift version in Build Settings), I might use the Xcode 9 beta for daily development from now on, and switch back to Xcode 8 for packaging and release, if nothing goes wrong. After all, saving that much time on every full compile is quite tempting.
The quality of this beta is surprisingly good, perhaps because the small, incremental improvements of the past two years have given Apple's software team relatively ample development time? In short, the Xcode 9 beta works well so far.
Named Color
This is a change I personally like very much. You can now add colors to xcassets and then reference them from code or from IB. It looks roughly like this:
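A sketch in code, assuming a color set named "themeColor" has been added to the project's xcassets:

```swift
import UIKit

class ThemedViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // UIColor(named:) looks the color up in the asset catalog at runtime;
        // it is optional because no color with that name may exist.
        view.backgroundColor = UIColor(named: "themeColor")
    }
}
```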
When building UI with IB, a big headache has always been the designer saying the theme color needs to change: you would have to hunt through everything to find and replace that color. Now all you have to do is change it once in xcassets, and the change takes effect everywhere the color is used in IB.
Other notable changes
The rest are smaller changes. I've browsed through them and listed the ones I think are worth mentioning.
- Drag and Drop - a very standard set of iOS APIs. Unsurprisingly, the system handles most of the work for us, and developers pretty much only need to deal with the results. UITextView and UITextField support drag and drop natively, while UICollectionView and UITableView gain a series of dedicated delegates to signal the start and end of a drag. You can also define drag-and-drop behavior for any UIView subclass. Unlike drag and drop on the Mac, iOS's version fully respects the multitouch screen, so you may need to handle multiple simultaneous drags specially (see the table view sketch after this list).
- New navigation bar title design - most of the system apps in iOS 11 adopt a new design with a large navigation bar title font. Using it is very simple: set prefersLargeTitles on the navigation bar (a snippet follows below).
- FileProvider and FileProviderUI - provide a Files-app-like interface that lets users access files on the device or in the cloud. I believe this will become standard for document-based apps in the future.
- 32-bit apps are no longer supported - although you can still run 32-bit apps in beta 1, Apple has made it clear that support will be dropped in a subsequent iOS 11 beta. So if you want your app to run on iOS 11 devices, recompiling for 64-bit is a mandatory step.
- DeviceCheck - developers who track users by advertising identifier all day now have a better choice (for legitimate business purposes, of course). DeviceCheck lets you communicate with Apple's servers through your own server and store two bits of data per device. Simply put, you use the DeviceCheck API to generate a token on the device, send that token to your server, and have your server talk to Apple's API to update or query that device's two-bit value. The two bits can be used to track things like whether the user has already claimed a reward (device-side sketch after this list).
- PDFKit - a long-standing framework on macOS that has finally arrived on iOS. You can use it to display and manipulate PDF files (sketch below).
- IdentityLookup - you can develop an app extension to filter the system's SMS and MMS messages. When the Messages app receives a message from an unknown sender, it asks all enabled filtering extensions, and if an extension indicates that the message should be filtered, it will not be delivered to you. The extension can consult a server specified in advance to make its judgment (so you can legitimately receive users' text messages, although out of privacy concerns these requests are encrypted and anonymous, and Apple prohibits this kind of extension from writing the content into its container).
- Core NFC - provides basic near-field communication tag reading on iPhone 7 and iPhone 7 Plus. It looks promising: as long as you have an appropriate NFC tag, the phone can read it. But given that scanning cannot happen in the background, its practicality takes a hit. I'm not very familiar with this area, though; there may well be more suitable scenarios I'm not aware of (a reading sketch follows the list).
- Auto Fill - retrieving passwords from iCloud Keychain is now open to third-party developers. UITextInputTraits' textContentType gains username and password values; configure the content type on the appropriate text view or text field and fill in the relevant Info.plist content, and users can get autofill above the keyboard when asked for a username and password, helping them sign in quickly (sketch after the list).
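Here are the sketches referenced above. First, table view drag support: as far as I can tell, conforming to UITableViewDragDelegate and returning UIDragItems is enough to make rows draggable. The string data here is a placeholder:

```swift
import UIKit

// A minimal sketch of table view drag support.
class ItemListController: UITableViewController, UITableViewDragDelegate {
    let items = ["Tokyo", "Osaka", "Kyoto"]  // placeholder data

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.dragDelegate = self
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "cell")
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }

    // UITableViewDragDelegate: called when a drag begins on a row.
    func tableView(_ tableView: UITableView,
                   itemsForBeginning session: UIDragSession,
                   at indexPath: IndexPath) -> [UIDragItem] {
        let provider = NSItemProvider(object: items[indexPath.row] as NSString)
        return [UIDragItem(itemProvider: provider)]
    }
}
```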
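Opting in to large titles is essentially one line on the navigation bar; largeTitleDisplayMode then controls the behavior per view controller:

```swift
import UIKit

class FeedViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Enable the new large title style for this navigation bar.
        navigationController?.navigationBar.prefersLargeTitles = true
        // Individual view controllers can opt in or out.
        navigationItem.largeTitleDisplayMode = .automatic
    }
}
```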
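The device-side half of the DeviceCheck flow might look like the sketch below; the upload helper is hypothetical and stands in for a call to your own backend, which in turn talks to Apple's endpoint:

```swift
import DeviceCheck

func sendDeviceToken() {
    // Not every environment supports DeviceCheck (e.g. the simulator).
    guard DCDevice.current.isSupported else { return }
    DCDevice.current.generateToken { token, error in
        guard let token = token else {
            print("Token generation failed: \(String(describing: error))")
            return
        }
        // Your server uses this token when querying/updating the two bits
        // via Apple's DeviceCheck API.
        uploadToMyServer(token.base64EncodedString())
    }
}

// Hypothetical placeholder for your own networking code.
func uploadToMyServer(_ tokenString: String) {
    print("Would upload token: \(tokenString)")
}
```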
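Displaying a PDF with the newly ported PDFKit takes only a few lines; the bundled file name "manual.pdf" is an assumption:

```swift
import PDFKit
import UIKit

class ManualViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pdfView = PDFView(frame: view.bounds)
        pdfView.autoScales = true  // fit pages to the view
        view.addSubview(pdfView)
        if let url = Bundle.main.url(forResource: "manual", withExtension: "pdf") {
            pdfView.document = PDFDocument(url: url)
        }
    }
}
```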
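Reading NDEF tags with Core NFC, as I read the API; note that the session must be started from the foreground, and reading requires the NFC entitlement plus supported hardware:

```swift
import CoreNFC

class TagReader: NSObject, NFCNDEFReaderSessionDelegate {
    var session: NFCNDEFReaderSession?

    func beginScanning() {
        session = NFCNDEFReaderSession(delegate: self, queue: nil,
                                       invalidateAfterFirstRead: true)
        session?.begin()  // presents the system scanning UI
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didDetectNDEFs messages: [NFCNDEFMessage]) {
        for message in messages {
            for record in message.records {
                print("Payload: \(record.payload)")
            }
        }
    }

    func readerSession(_ session: NFCNDEFReaderSession,
                       didInvalidateWithError error: Error) {
        print("Session ended: \(error)")
    }
}
```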
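Finally, marking fields for Auto Fill so the QuickType bar can offer iCloud Keychain credentials:

```swift
import UIKit

class LoginViewController: UIViewController {
    let usernameField = UITextField()
    let passwordField = UITextField()

    override func viewDidLoad() {
        super.viewDidLoad()
        // The new iOS 11 content types that enable credential autofill.
        usernameField.textContentType = .username
        passwordField.textContentType = .password
        passwordField.isSecureTextEntry = true
    }
}
```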
That's all for the time being; if I find anything else interesting I'll add it later. If you think something is worth mentioning, please leave a comment and I will add it.