The speech synthesizer (AVSpeechSynthesizer) was introduced in iOS 7. It performs text-to-speech entirely offline (no network connection required) and supports multiple languages.

1. Define member variables to hold the synthesizer and the voice:

```objc
#import <AVFoundation/AVFoundation.h>

@interface ViewController ()
{
    // The speech synthesizer
    AVSpeechSynthesizer *_synthesizer;

    // The voice object that determines the spoken language (e.g. Chinese or English)
    AVSpeechSynthesisVoice *_voice;
}
@end
```

2. Create the voice object (AVSpeechSynthesisVoice) and specify the language of the speech: "zh-CN" for Chinese, "en-US" for English.

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the voice for the desired language (here English)
    _voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];

    // Create the synthesizer that will do the speaking
    _synthesizer = [[AVSpeechSynthesizer alloc] init];
}
```

3. Create an AVSpeechUtterance with the content to be read:

```objc
// Read the content of the text view
AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:_textView.text];
```

4. Assign the voice and the reading speed. A rate of about 0.1 works well for Chinese, and about 0.3 for English:

```objc
utterance.voice = _voice;
utterance.rate = 0.3;
```

5. Start speaking:

```objc
[_synthesizer speakUtterance:utterance];
```

Note: when building an app, if the content to be read is fixed and limited, professionally recorded audio will sound better; if the content is unbounded or dynamic, speech synthesis is the best choice.
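Putting the steps above together, a minimal sketch of a complete view controller might look like the following. The `_textView` outlet and the `speakButtonTapped:` action are assumptions for illustration; only the AVFoundation calls shown in the steps are taken from the tutorial itself.

```objc
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface ViewController : UIViewController
// Assumed outlet: the text view whose contents will be read aloud
@property (nonatomic, weak) IBOutlet UITextView *textView;
@end

@interface ViewController ()
{
    AVSpeechSynthesizer *_synthesizer;   // performs the speaking
    AVSpeechSynthesisVoice *_voice;      // determines the spoken language
}
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // "en-US" for English; use "zh-CN" for Chinese instead
    _voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    _synthesizer = [[AVSpeechSynthesizer alloc] init];
}

// Assumed action, e.g. wired to a "Speak" button in the storyboard
- (IBAction)speakButtonTapped:(id)sender
{
    AVSpeechUtterance *utterance =
        [AVSpeechUtterance speechUtteranceWithString:self.textView.text];
    utterance.voice = _voice;
    utterance.rate = 0.3;   // ~0.1 for Chinese, ~0.3 for English, per the steps above

    [_synthesizer speakUtterance:utterance];
}

@end
```

Because the voice and synthesizer are created once in `viewDidLoad`, repeated taps reuse the same objects; only a new utterance is created per request.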