From design to development: a Silicon Valley technical expert teaches you to build a "voice-activated" app


Editor's note: This article is content shared by Chi, a technical expert in Ctrip's Air Ticket R&D Department, as part of Ctrip Tech Micro-Sharing. Follow the Ctrip Technology Center account (ctriptech) for more content.

"Ctrip Tech Micro-Sharing" is an online sharing course launched by the Ctrip Technology Center. Held once or twice a month in a live-streaming format, it invites Ctrip engineers to explore the latest technology trends with programmers and technology enthusiasts, share first-line practical experience, talk about technical life, and build an online technology-sharing community.

Chi joined Ctrip in April 2016 as a technical expert in the Air Ticket development department. He graduated from Dartmouth College, an Ivy League school, and previously worked at the Silicon Valley headquarters of Oracle, Yahoo!, and Salesforce. His most successful product taken from scratch to delivery was a business community website template with annual sales of over $100 million. He likes ice cream and doughnuts, as well as flowers from The Beast, a Shanghai florist.

_____________________

At Apple's Worldwide Developers Conference in June 2016, the AI assistant Siri was once again the focus. Not only has Siri intelligence been added to quick text input and the Photos app, but Apple will also open the Siri SDK for the first time in iOS 10, making it possible for users to interact with a variety of iOS apps using their own voice.

In this Ctrip Tech Micro-Sharing session, we will see how to design and develop, in the spirit of Siri, a mobile app that searches iTunes music by voice and plays it back.

The shared content is divided into two parts. The first part is design-focused: we will look at the new features the Siri SDK brings to users in iOS 10, review the history of AI, and use two tools, Sketch and Principle, to design the pages and interactions of the voice music app.

The second part is development-focused: we implement the app's pages with Xcode and Objective-C. A mobile speech-recognition SDK will be used to convert the name of the song we want to hear from speech into text. Then, through Apple's iTunes Search API, we fetch the song's title, album artwork, and preview audio. Finally, we combine these resources to build this voice-activated music app, complete with polished UI animations.

Let's experience the joy of building a complete product.

Design section

The design tools we will use are Sketch and Principle.

Sketch (http://sketchapp.com/) is a design tool built specifically for product and UI designers, and it was voted the best design tool of 2015. Compared with the traditional Photoshop workflow, Sketch is lighter, more flexible, and less expensive.

We first use Sketch to draw the prototypes of the app's first and second pages. This includes adding a status bar with Sketch's iOS UI design template, and adding text labels, background shapes, and images with Sketch's built-in drawing tools.

After that, we import the two pages designed in Sketch into Principle. Principle is a tool for animating design pages: it can import Sketch designs directly and add rich animations on top of them. As an industry-famous animation design tool, Principle is widely used by designers in Silicon Valley.

First, in Principle we copy the first design page into an identical artboard. When the user taps the microphone icon on the first page, Principle jumps to the second page, where we change the corresponding hint copy and add a new animation: after the tap, the microphone on the second page rotates around the center of the icon, signaling that the app is listening to the user's voice. Finally, when the rotation animation ends, we add an animation that automatically jumps to the third page, letting the user see the music found by the search. The flow across the three pages is as follows.

The implemented animations can be seen in the following GIF.

At this point, we have used Sketch and Principle to complete the prototype and page-interaction design of a "voice-activated" app. The main flow: the user taps the microphone; a rotating microphone plus on-page copy tells the user that the app is listening; once the app recognizes the user's voice and finds a matching song, it jumps to the next page, showing the song's album artwork and playing an excerpt of the music.

Development section

After the design is complete, we switch tools and use Xcode and Objective-C to develop this smart music app. We will use a mobile speech-recognition SDK to convert the name of the song we want to hear from speech into text, then call Apple's iTunes Search API to fetch the song's title, album artwork, and preview audio.

After the development is completed, the project structure is as follows:

The first step is to implement intelligent recognition of the user's voice in the app. Speech recognition is one of the most widely applied fields of artificial intelligence. Among the existing technologies, I chose Nuance's Speech Kit 2 iOS SDK to implement this feature in the app. Usage guidelines for SpeechKit can be found at https://developer.nuance.com/public/Help/DragonMobileSDKReference_iOS/Overview.html. In this Xcode project, we use CocoaPods (https://cocoapods.org/) to manage dependencies. In the project directory, create a new file named Podfile, and add this line to it:

pod 'SpeechKit'

After saving the file, execute the following command in the project directory:

pod install

After the installation succeeds, open the .xcworkspace project; SpeechKit can then be used directly through the following import statement:

#import <SpeechKit/SpeechKit.h>  

After successful installation, you will also need to register a developer account on the Nuance website to obtain the access server URL and an app key, which are used when calling the cloud speech-recognition service.

In the following code, replace SKSServerUrl and SKSAppKey with the values shown in your account. This code establishes a speech-recognition session and then starts a transaction that performs automatic speech recognition on the voice the device hears:

In the delegate callback fired after the transaction succeeds, we only need the best text suggestion from the recognition parameter; that is the best textual transcription of the speech.
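The original code listing is not reproduced here, so the following is a minimal sketch based on the Nuance SpeechKit 2 sample pattern. The credential strings, the view-controller name, and the language code are placeholders; replace SKSServerUrl and SKSAppKey with the values from your Nuance account.

```objectivec
#import <SpeechKit/SpeechKit.h>

// Placeholder credentials -- substitute the values shown in your Nuance account.
static NSString *const SKSServerUrl = @"nmsps://YOUR_APP_ID@your.server.host:443";
static NSString *const SKSAppKey    = @"YOUR_APP_KEY";

@interface VoiceViewController : UIViewController <SKTransactionDelegate>
@property (nonatomic, strong) SKSession *session;
@end

@implementation VoiceViewController

- (void)startListening {
    // Establish a speech-recognition session against the cloud server.
    self.session = [[SKSession alloc] initWithURL:[NSURL URLWithString:SKSServerUrl]
                                         appToken:SKSAppKey];
    // Start a transaction that recognizes the voice the device hears.
    [self.session recognizeWithType:SKTransactionSpeechTypeSearch
                          detection:SKTransactionEndOfSpeechDetectionShort
                           language:@"eng-USA"
                           delegate:self];
}

// Delegate callback: the recognition result's text is the best suggestion.
- (void)transaction:(SKTransaction *)transaction
    didReceiveRecognition:(SKRecognition *)recognition {
    NSString *songName = recognition.text;
    NSLog(@"Recognized song name: %@", songName);
}

@end
```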

After recognizing the voice, the next step is to fetch music related to the recognized text. Apple itself provides a public interface that we can use:

https://itunes.apple.com/search?term=Cowboy+Is+Busy

Suppose I say the name of Jay Chou's song "Cowboy Is Busy" to the app. An HTTP GET request to the URL above makes Apple search the iTunes music library and return all music data related to "Cowboy Is Busy".

To keep the demo logic as simple as possible, I added a parameter to the URL above to limit the number of returned results to one:

https://itunes.apple.com/search?term=Cowboy+Is+Busy&limit=1

In this way, from the data of the single returned song, I get the address of the album image for "Cowboy Is Busy" and the address of the preview audio, and then assemble this data into a page: the third page we saw in the design draft.
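The fetch-and-parse step can be sketched as follows. This is an illustrative snippet, not the article's original code; `artworkUrl100` and `previewUrl` are the field names used by the iTunes Search API for the album image and preview-audio addresses.

```objectivec
#import <Foundation/Foundation.h>

// Build the search URL, percent-encoding the recognized song name.
NSString *term = [@"Cowboy Is Busy" stringByAddingPercentEncodingWithAllowedCharacters:
                      [NSCharacterSet URLQueryAllowedCharacterSet]];
NSString *urlString = [NSString stringWithFormat:
                           @"https://itunes.apple.com/search?term=%@&limit=1", term];

// Issue a GET request and pull the two addresses out of the JSON response.
NSURLSessionDataTask *task = [[NSURLSession sharedSession]
    dataTaskWithURL:[NSURL URLWithString:urlString]
  completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
      if (error != nil || data == nil) { return; }
      NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data
                                                           options:0
                                                             error:nil];
      NSDictionary *track = [json[@"results"] firstObject];
      NSString *artworkUrl = track[@"artworkUrl100"]; // album image address
      NSString *previewUrl = track[@"previewUrl"];    // preview (audition) audio address
      NSLog(@"artwork: %@  preview: %@", artworkUrl, previewUrl);
  }];
[task resume];
```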

As for the animations within each page, they are done with the basic CABasicAnimation. For example, the code for the rotation animation of the microphone icon is as follows.
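Since the original listing is not shown, here is a minimal sketch of such a rotation using CABasicAnimation; `micImageView` is an assumed outlet for the microphone icon's image view.

```objectivec
#import <QuartzCore/QuartzCore.h>

// Rotate the microphone icon around its own center while the app is listening.
CABasicAnimation *spin =
    [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue   = @0;
spin.toValue     = @(2 * M_PI);  // one full turn
spin.duration    = 1.0;          // seconds per revolution
spin.repeatCount = HUGE_VALF;    // keep spinning until removed
[micImageView.layer addAnimation:spin forKey:@"micSpin"];

// When recognition finishes, stop the spin:
// [micImageView.layer removeAnimationForKey:@"micSpin"];
```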

Those are all the points involved: we have developed a smart, voice-activated music app. Two Sketch design drafts and three Principle interactive pages, and we have such an entertainment app. For more details, please watch the video.
