Many mobile games now add voice chat to their chat systems. Compared with traditional text chat, voice chat is especially important in MMORPGs: speaking directly is always faster, and more intuitive, than typing.
There are many ways to implement voice chat, and plenty of third-party Unity3D plug-ins provide powerful voice features, but I won't give examples of those here (in fact, I haven't used them). This article implements a simple voice chat feature from a native-development perspective.
The rough flow of voice chat:
As you can see, the client records voice data, encodes and compresses it, and then sends it to a voice server, which handles distribution (the voice server can also translate the voice).
When a client requests, or receives, voice data pushed by the voice server, it decompresses the data, converts it back to a playable encoding, and plays it. The process is quite simple.
Here we only discuss the client-side processing; how to set up a voice server and how to compress and send the voice data will not be covered in detail.
Some problems you may encounter here are:
1. How C# in Unity3D communicates with Objective-C on iOS
2. How to call the native iOS recording and playback functions
3. How to convert between audio encodings
Well, let's happily tackle these three points one by one:
1. How C# in Unity3D communicates with Objective-C on iOS
This one is relatively simple. Unlike on Android, calling Objective-C from C# works much like calling into an unmanaged dynamic library: we expose a C interface on the Objective-C side
extern "C" void __sendocmessage(const char* methodname, const char* arg0, const char* arg1);
Then declare the interface in C# (note that on iOS the library name must be "__Internal", and the extern name must match the C function exactly):
private const string IosSdkDll = "__Internal";
#if UNITY_IPHONE
[DllImport(IosSdkDll, CallingConvention = CallingConvention.Cdecl)]
public static extern void __sendocmessage(string methodname, string arg0, string arg1);
#endif
In this way C# can send messages to Objective-C. All messages can go through this single interface; the methodname parameter determines which module handles them.
Conversely, if Objective-C wants to send a message to C#, we can call the interface Unity3D provides:
extern void UnitySendMessage(const char *, const char *, const char *);
The first parameter is the name of a GameObject in the scene, the second is a method name on one of its components, and the third is an arbitrary string message.
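Putting the two directions together, here is a minimal sketch of the Objective-C side of the bridge (in a .mm file, since `extern "C"` is C++ syntax). The "VoiceManager" GameObject, its OnOcMessage method, and the "StartRecord" method name are assumptions for illustration only:

```objective-c
// Provided by the Unity runtime; sends a string to a C# method in the scene.
extern void UnitySendMessage(const char *obj, const char *method, const char *msg);

// Single entry point called from C# via DllImport; methodname selects the module.
extern "C" void __sendocmessage(const char *methodname, const char *arg0, const char *arg1) {
    NSString *method = [NSString stringWithUTF8String:methodname];
    if ([method isEqualToString:@"StartRecord"]) {
        // arg0 could carry the recording file path; arg1 is unused here.
        // ... start recording with AVAudioRecorder ...
        // Notify C# back through Unity's messaging interface.
        UnitySendMessage("VoiceManager", "OnOcMessage", "record_started");
    }
}
```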
2. How to call the native iOS recording and playback functions
For recording on iOS, we can import the AVFoundation framework
#import <AVFoundation/AVFoundation.h>
We will use two classes, AVAudioRecorder and AVAudioPlayer: the recording class and the playback class.
AVAudioRecorder
We can create a recorder instance for recording.
Build the URL for the recording file's save path (a file URL, since we write to the local file system):
NSURL *url = [NSURL fileURLWithPath:voiceDataPath];
Create the recording settings dictionary:
NSMutableDictionary *setting = [NSMutableDictionary dictionary];
Set the recording format (linear PCM, i.e. WAV):
[setting setObject:@(kAudioFormatLinearPCM) forKey:AVFormatIDKey];
Set the sampling rate; 8000 Hz is generally used for voice, and much lower rates distort badly:
[setting setObject:@(8000) forKey:AVSampleRateKey];
Set the number of channels; mono is enough for voice:
[setting setObject:@(1) forKey:AVNumberOfChannelsKey];
Set the bit depth per sample (8, 16, 24, or 32); 16-bit here:
[setting setObject:@(16) forKey:AVLinearPCMBitDepthKey];
Use integer rather than floating-point samples, matching the 16-bit depth above:
[setting setObject:@(NO) forKey:AVLinearPCMIsFloatKey];
Create the recorder and start recording:
NSError *error = nil;
AVAudioRecorder *audioRecorder = [[AVAudioRecorder alloc] initWithURL:url settings:setting error:&error];
[audioRecorder record];
To stop recording, call the stop method:
[audioRecorder stop];
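On a real device, recording also requires the shared audio session to be configured; a minimal sketch, assuming we run this once before the first recording (error handling omitted):

```objective-c
// Configure the shared audio session before recording.
// PlayAndRecord allows the recorder and the player to coexist.
NSError *sessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
[session setActive:YES error:&sessionError];
```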
AVAudioPlayer
Similarly, we can create an audio player instance:
NSURL *url = [NSURL fileURLWithPath:voiceDataPath];
NSError *error = nil;
AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
Play once, without looping:
audioPlayer.numberOfLoops = 0;
Set the playback volume:
audioPlayer.volume = 1;
Prepare and play:
[audioPlayer prepareToPlay];
[audioPlayer play];
To stop playback:
[audioPlayer stop];
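To let Unity know when playback has finished, the player's delegate can forward the completion callback. A sketch, assuming the hosting class adopts AVAudioPlayerDelegate, has set audioPlayer.delegate = self, and that a "VoiceManager" GameObject with an OnPlayFinished method exists in the scene (both names are assumptions):

```objective-c
// Called by AVFoundation when the sound file finishes playing.
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    // Forward the result to the C# side via Unity's messaging interface.
    UnitySendMessage("VoiceManager", "OnPlayFinished", flag ? "ok" : "failed");
}
```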
So, it's not hard to see that, combined with the flow above, the entire recording and playback process is:
1. To record, Unity3D sends a message to iOS to create an AVAudioRecorder instance, passing voiceDataPath, the absolute path for the recording file. When recording ends, the file is saved at that path, iOS notifies Unity3D that recording is complete, and Unity3D's callback sends the data to the voice server.
2. To play, Unity3D requests the voice data from the voice server, saves the downloaded data locally, and then sends a message to iOS to create an AVAudioPlayer instance, passing voiceDataPath, the path of the sound file; the sound is then played.
That is roughly the whole flow, and it is fairly simple: we only need to wrap the AVAudioRecorder and AVAudioPlayer interfaces to implement a simple voice chat module.
3. How to convert between audio encodings
We know that iOS records only in WAV format, which takes up significantly more space, making it inconvenient to send the data to the voice server or to download it. So we need to convert to a compressed audio format to reduce the size of the recording files and keep the voice chat experience smooth.
The AMR format is arguably the best compressed format for voice chat (WeChat, for example, converts and plays this format directly), but iOS does not support playing or converting AMR natively. So we need to bring in a conversion library, VoiceConverter, which can be found on GitHub; it provides two simple, straightforward interfaces for converting between AMR and WAV (the download link is at the end of this article).
[VoiceConverter wavToAmr:wavPath amrSavePath:amrPath];
[VoiceConverter amrToWav:amrPath wavSavePath:wavPath];
So, combining this with the recording and playback flow above:
1. After AVAudioRecorder finishes recording, we convert the WAV voice data to AMR before sending it to the voice server.
2. After downloading an AMR voice file from the voice server, we first convert it to WAV, then create the AVAudioPlayer object to play it.
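The two conversion points can be sketched as follows, assuming the VoiceConverter interfaces shown above; the method names onRecordFinished: and playDownloadedVoice: are hypothetical helpers, and "VoiceManager" / OnVoiceReady are assumed C# scene names:

```objective-c
// Called when AVAudioRecorder has finished writing the WAV file.
- (void)onRecordFinished:(NSString *)wavPath {
    NSString *amrPath = [[wavPath stringByDeletingPathExtension]
                         stringByAppendingPathExtension:@"amr"];
    // WAV -> AMR before uploading to the voice server.
    [VoiceConverter wavToAmr:wavPath amrSavePath:amrPath];
    // Tell C# where the compressed file is so it can upload it.
    UnitySendMessage("VoiceManager", "OnVoiceReady", [amrPath UTF8String]);
}

// Called with the path of an AMR file downloaded from the voice server.
- (void)playDownloadedVoice:(NSString *)amrPath {
    NSString *wavPath = [[amrPath stringByDeletingPathExtension]
                         stringByAppendingPathExtension:@"wav"];
    // AMR -> WAV before playback, since iOS cannot play AMR natively.
    [VoiceConverter amrToWav:amrPath wavSavePath:wavPath];
    // ... create an AVAudioPlayer with wavPath and play, as shown earlier ...
}
```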
Okay, that's about it for the iOS version of the voice chat module. Using the native iOS APIs AVAudioPlayer and AVAudioRecorder, we can implement the client-side voice recording and playback functions; combined with a voice server, this voice feature can really run in a game.
Finally, about voice translation:
I haven't worked with this much, and I don't know whether the native iOS API or some third-party library can translate speech. But from what other colleagues say, translation is done on the voice server: the server calls a third-party interface to translate the voice asynchronously and pushes the result to the client when it completes. Interested readers can explore voice translation on their own, and you are also welcome to leave me a message with recommendations so we can study it together.
VoiceConverter download: https://pan.baidu.com/s/1kVDHFMn
Unity3D Simple Voice Chat [iOS Version]