In Windows Phone development, we sometimes need to record audio within a program; at the very least, speaking is much faster than typing. I had previously seen integrated recording applications in the Android and iOS markets, so it seems both systems provide such interfaces, and I assumed Windows Phone would be able to do the same. Unfortunately, very little material turned up on Chinese websites, but when I searched in English I immediately found a useful article, "Making a Voice Recorder on Windows Phone". The following is a translation of that article (with parts I consider unimportant omitted).
Feature identification
Before starting to write the application, I worked out which functions the program should implement. The list is as follows:
- Export recording
- Save recordings in WAV format
- Add remarks for recordings
- Accelerate or slow down recording
- Change voice
- Mixing, splitting, and editing recordings
- Export in MP3 format
- Category tags
- Time/date reminder
As you can see, many different features can be added to a recording application, and a simple application can quickly become complicated. To keep the application from becoming too complex, I chose a minimal set of functions that accomplish my main goal. This simplicity also avoids introducing potential bugs in many places. The reduced feature set is:
- Save recordings in WAV format
- Sort recordings by date or name
- Recording during screen lock
- Add remarks for recordings
Using XNA classes in a Silverlight application
You can create two kinds of applications on Windows Phone: one uses Silverlight for the UI, the other uses the XNA rendering classes for the UI, and you choose between them at development time. Silverlight provides controls for building the UI, such as Button, TextBox, and Label; in XNA you have to create everything yourself to achieve the effect you want. Based on these characteristics, I chose a Silverlight UI for this application. For recording, however, I must use the Microphone class from Microsoft.Xna.Framework.Audio. Although the XNA rendering classes cannot be used inside a Silverlight application, many other XNA classes still can. When the XNA audio classes are in use, FrameworkDispatcher.Update() must be called periodically. Rather than cluttering the program logic with a timer that calls this function, you can use an IApplicationService implementation, an example of which Microsoft provides, to do the same thing; the class calls the function for you. The entire class is as follows:
public class XNAFrameworkDispatcherService : IApplicationService
{
    private DispatcherTimer frameworkDispatcherTimer;

    public XNAFrameworkDispatcherService()
    {
        this.frameworkDispatcherTimer = new DispatcherTimer();
        // 333,333 ticks = 33.3 ms, i.e. roughly 30 updates per second
        this.frameworkDispatcherTimer.Interval = TimeSpan.FromTicks(333333);
        this.frameworkDispatcherTimer.Tick += frameworkDispatcherTimer_Tick;
        FrameworkDispatcher.Update();
    }

    void frameworkDispatcherTimer_Tick(object sender, EventArgs e)
    {
        FrameworkDispatcher.Update();
    }

    void IApplicationService.StartService(ApplicationServiceContext context)
    {
        this.frameworkDispatcherTimer.Start();
    }

    void IApplicationService.StopService()
    {
        this.frameworkDispatcherTimer.Stop();
    }
}
Once this class is defined in your project, it needs to be added as one of the application's lifetime objects. There are many ways to do this, but I prefer to add it in App.xaml:
<Application.ApplicationLifetimeObjects>
    <!--Required object that handles lifetime events for the application-->
    <shell:PhoneApplicationService
        Launching="Application_Launching" Closing="Application_Closing"
        Activated="Application_Activated" Deactivated="Application_Deactivated"/>
    <local:XNAFrameworkDispatcherService/>
</Application.ApplicationLifetimeObjects>
After completing this step, I no longer need to think about FrameworkDispatcher.Update; it starts automatically when the program launches and stops automatically when the program closes.
Recording with the Microphone
The Microphone class records audio in chunks, handing the previous chunk to your program while it records the next one. To do this, the Microphone class has its own memory buffer. For example, suppose you want to record the short sentence "The quick brown fox jumped over the lazy dog," and assume the microphone buffer can hold one word at a time.
Figure 1: Microphone, buffer, your program
You start saying the sentence, and the microphone buffer fills with the sound of the word "The." Once the buffer is full, it is handed to the program, and the microphone begins filling a new buffer with the next word. The program receives the buffer and can process it. Since this program saves and plays back recordings, it stores each audio chunk and waits for the next one, splicing each chunk onto the end of the previous one.
Figure 3: The program receives the first word while the second word is being recorded
After each chunk is recorded, it is passed to the program, which appends it to the chunks it has already received. However, there is a potential bug when the user finishes speaking the last word. In many online examples, when the user finishes saying the last word, "dog," and presses the stop button, the program immediately stops accepting data from the microphone, but the last word has not yet been transferred from the microphone buffer to the program. The end result is that the program receives everything except the last word. To prevent this problem, you need to consider what happens when the user stops recording.
The program should wait until it receives the last buffer before stopping, rather than stopping immediately. In the worst case some noise after the end of the sentence may also be recorded, but that is better than losing data, and we can reduce the amount of excess audio captured by reducing the size of the buffer.
The code implementing this is relatively simple. An instance of the Microphone class is obtained from Microphone.Default. While the microphone is recording, it raises a BufferReady event to tell our program that a buffer is ready to read; we can then fetch the data with GetData(byte[] buffer), passing a byte array to receive it. How big should this array be? The Microphone class has two members that help determine the required size: Microphone.BufferDuration tells us how much audio the microphone's buffer holds, and Microphone.GetSampleSizeInBytes(TimeSpan) tells us how many bytes are needed for a recording of a given duration. Combining the two, the buffer size we need is Microphone.GetSampleSizeInBytes(Microphone.BufferDuration).
Once you have a Microphone instance, subscribe to the BufferReady event, create a buffer to receive the data, and start recording by calling Microphone.Start(). Several things need to happen in the BufferReady handler. As data is taken from the buffer, it needs to be collected somewhere. After collecting the data, we check whether a stop has been requested; if so, we tell the Microphone instance to stop sending data with Microphone.Stop() and then save the recording. To collect the data I use a MemoryStream, which is written to isolated storage when the recording ends. One of my requirements is to store recordings in WAV format.
This can be achieved by writing a suitable WAV header before writing out all of the received bytes (see: writing a proper wave header). Here is the code that performs these operations:
// code for recording from the microphone and saving to a file
public void StartRecording()
{
    if (_currentMicrophone == null)
    {
        _currentMicrophone = Microphone.Default;
        _currentMicrophone.BufferReady +=
            new EventHandler<EventArgs>(_currentMicrophone_BufferReady);
        _audioBuffer = new byte[_currentMicrophone.GetSampleSizeInBytes(
            _currentMicrophone.BufferDuration)];
        _sampleRate = _currentMicrophone.SampleRate;
    }
    _stopRequested = false;
    _currentRecordingStream = new MemoryStream(1048576);
    _currentMicrophone.Start();
}

public void RequestStopRecording()
{
    _stopRequested = true;
}

void _currentMicrophone_BufferReady(object sender, EventArgs e)
{
    _currentMicrophone.GetData(_audioBuffer);
    _currentRecordingStream.Write(_audioBuffer, 0, _audioBuffer.Length);
    if (!_stopRequested)
        return;
    _currentMicrophone.Stop();
    var isoStore =
        System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication();
    using (var targetFile = isoStore.CreateFile(FileName))
    {
        WaveHeaderWriter.WriteHeader(targetFile,
            (int)_currentRecordingStream.Length, 1, _sampleRate);
        var dataBuffer = _currentRecordingStream.GetBuffer();
        targetFile.Write(dataBuffer, 0, (int)_currentRecordingStream.Length);
        targetFile.Flush();
        targetFile.Close();
    }
}
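The WaveHeaderWriter helper used above is not shown in this excerpt. For reference, a standard 44-byte RIFF/WAVE header for raw PCM data can be sketched as follows; this is an illustrative Python version of what such a helper writes, based on the published WAV format, not the author's C# code:

```python
import struct

def make_wav_header(data_length, channels, sample_rate, bits_per_sample=16):
    """Build the standard 44-byte RIFF/WAVE header for raw PCM audio data."""
    byte_rate = sample_rate * channels * bits_per_sample // 8
    block_align = channels * bits_per_sample // 8
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + data_length,  # RIFF chunk size = file size - 8
        b"WAVE",
        b"fmt ", 16,                # fmt sub-chunk is 16 bytes for plain PCM
        1,                          # audio format 1 = uncompressed PCM
        channels,
        sample_rate,
        byte_rate,
        block_align,
        bits_per_sample,
        b"data", data_length)       # data sub-chunk header precedes the samples
```

Note how the channel count lands at byte offset 22 and the sample rate at offset 24; the playback code in the next section reads exactly those offsets back out.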
Recording playback
To play back recordings I use the SoundEffect class. Like Microphone, SoundEffect is an XNA audio class that requires FrameworkDispatcher.Update() to be called periodically. There are two ways to load a WAV file: I can decode the wave header myself, or let the SoundEffect class do it. Here I show my own decoding, so that others can modify it. (Translator's note: I did not find the reference the author mentions.) Initializing SoundEffect through its constructor requires three pieces of information: the recorded audio data, the sample rate, and the number of channels. This application records ordinary mono sound rather than stereo, so there is always exactly one channel and I could simply pass AudioChannels.Mono. But I plan to add a recording-import feature later, and imported recordings may be stereo, so I take this value from the wave header instead. Similarly, I could get the sample rate from the Microphone class rather than the header, but for the same reason I read it from the header too, followed by the wave data. Once the SoundEffect is initialized, playing the recording means obtaining a SoundEffectInstance and calling its Play method. I only want one recording playing at a time, so before playing a new clip I check whether one already exists in memory and, if so, stop it.
public void PlayRecording(RecordingDetails source)
{
    if (_currentSound != null)
    {
        _currentSound.Stop();
        _currentSound = null;
    }
    var isoStore =
        System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication();
    if (isoStore.FileExists(source.FilePath))
    {
        byte[] fileContents;
        using (var fileStream = isoStore.OpenFile(source.FilePath, FileMode.Open))
        {
            fileContents = new byte[(int)fileStream.Length];
            fileStream.Read(fileContents, 0, fileContents.Length);
            fileStream.Close(); // not really needed, but it makes me feel better.
        }
        // sample rate: little-endian 32-bit value at byte offset 24 of the WAV header
        int sampleRate = ((fileContents[24] << 0) | (fileContents[25] << 8) |
                          (fileContents[26] << 16) | (fileContents[27] << 24));
        // channel count sits at offset 22; the audio data begins after the 44-byte header
        AudioChannels channels = (fileContents[22] == 1) ?
            AudioChannels.Mono : AudioChannels.Stereo;
        var se = new SoundEffect(fileContents, 44,
            fileContents.Length - 44, sampleRate, channels, 0, 0);
        _currentSound = se.CreateInstance();
        _currentSound.Play();
    }
}
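The byte arithmetic above is just little-endian decoding of two header fields. The following Python fragment (an illustration added for this translation, not from the article) decodes the same offsets, which can help when verifying a header by hand:

```python
import struct

def parse_wav_format(header):
    """Read channel count and sample rate from a 44-byte PCM WAV header."""
    # channel count: little-endian 16-bit value at byte offset 22
    channels = struct.unpack_from("<H", header, 22)[0]
    # sample rate: little-endian 32-bit value at byte offset 24
    sample_rate = struct.unpack_from("<I", header, 24)[0]
    return channels, sample_rate
```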
Alternatively, loading the sound by letting SoundEffect parse the stream itself, via SoundEffect.FromStream, is straightforward:
public void PlayRecording(RecordingDetails source)
{
    SoundEffect se;
    if (_currentSound != null)
    {
        _currentSound.Stop();
        _currentSound = null;
    }
    var isoStore =
        System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication();
    if (isoStore.FileExists(source.FilePath))
    {
        using (var fileStream = isoStore.OpenFile(source.FilePath, FileMode.Open))
        {
            // let SoundEffect decode the wave header itself
            se = SoundEffect.FromStream(fileStream);
            fileStream.Close(); // not really needed, but it makes me feel better.
        }
        _currentSound = se.CreateInstance();
        _currentSound.Play();
    }
}
Tracking recordings
In addition to keeping the recordings themselves in isolated storage, I want to track some other information, such as the date a recording was created, its title, and any notes about it. It would be possible to give a recording a title by naming the file after it, or to infer the recording date from the file's metadata, but that solution is fragile: file names are subject to character restrictions, and once I add importing and exporting of files, the file dates may be lost. So I created a class to hold the information I want to track for each recording. The class is as follows:
public class RecordingDetails
{
    public string Title { get; set; }
    public string Details { get; set; }
    public DateTime TimeStamp { get; set; }
    public string FilePath { get; set; }
    public string SourcePath { get; set; }
}
To make the class easy to read, I have shown a simplified form here. The class needs to be serializable so that I can read and write it from isolated storage, so it is marked with the [DataContract] attribute and its members with the [DataMember] attribute. I also plan to bind instances of this class to UI elements, so it needs to implement the INotifyPropertyChanged interface. The full version of the class is as follows:
[DataContract]
public class RecordingDetails : INotifyPropertyChanged
{
    // Title - generated from ObservableField snippet - Joel Ivory Johnson
    private string _title;
    [DataMember]
    public string Title
    {
        get { return _title; }
        set
        {
            if (_title != value)
            {
                _title = value;
                OnPropertyChanged("Title");
            }
        }
    }
    //-----

    // Details - generated from ObservableField snippet - Joel Ivory Johnson
    private string _details;
    [DataMember]
    public string Details
    {
        get { return _details; }
        set
        {
            if (_details != value)
            {
                _details = value;
                OnPropertyChanged("Details");
            }
        }
    }
    //-----

    // FilePath - generated from ObservableField snippet - Joel Ivory Johnson
    private string _filePath;
    [DataMember]
    public string FilePath
    {
        get { return _filePath; }
        set
        {
            if (_filePath != value)
            {
                _filePath = value;
                OnPropertyChanged("FilePath");
            }
        }
    }
    //-----

    // TimeStamp - generated from ObservableField snippet - Joel Ivory Johnson
    private DateTime _timeStamp;
    [DataMember]
    public DateTime TimeStamp
    {
        get { return _timeStamp; }
        set
        {
            if (_timeStamp != value)
            {
                _timeStamp = value;
                OnPropertyChanged("TimeStamp");
            }
        }
    }
    //-----

    // SourceFileName - generated from ObservableField snippet - Joel Ivory Johnson
    private string _sourceFileName;
    [IgnoreDataMember]
    public string SourceFileName
    {
        get { return _sourceFileName; }
        set
        {
            if (_sourceFileName != value)
            {
                _sourceFileName = value;
                OnPropertyChanged("SourceFileName");
            }
        }
    }
    //-----

    // IsNew - generated from ObservableField snippet - Joel Ivory Johnson
    private bool _isNew = false;
    [IgnoreDataMember]
    public bool IsNew
    {
        get { return _isNew; }
        set
        {
            if (_isNew != value)
            {
                _isNew = value;
                OnPropertyChanged("IsNew");
            }
        }
    }
    //-----

    // IsDirty - generated from ObservableField snippet - Joel Ivory Johnson
    private bool _isDirty = false;
    [IgnoreDataMember]
    public bool IsDirty
    {
        get { return _isDirty; }
        set
        {
            if (_isDirty != value)
            {
                _isDirty = value;
                OnPropertyChanged("IsDirty");
            }
        }
    }
    //-----

    public void Copy(RecordingDetails source)
    {
        this.Details = source.Details;
        this.FilePath = source.FilePath;
        this.SourceFileName = source.SourceFileName;
        this.TimeStamp = source.TimeStamp;
        this.Title = source.Title;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
The [DataMember] attributes run through the entire class so that I can use data contract serialization to read and write it. Since I use DataContractSerializer, I don't need to worry much about the encoding details when the file is stored and loaded, and using it together with isolated storage is not difficult. I used a variant of a utility class I wrote previously (see here for details) that reduces serialization and deserialization to a small piece of code. When the user creates a new recording, a new instance of this class is created as well. Besides the title, notes, and timestamp, the class contains the path of the recording file, plus a non-serialized member, SourceFileName, that records the name of the file from which the data was loaded. Without this information, when the user updates the data there would be no way to know which file to rewrite when saving.
// Saving Data
var myDataSaver = new DataSaver<RecordingDetails>();
myDataSaver.SaveMyData(LastSelectedRecording,
    LastSelectedRecording.SourceFileName);

// Loading Data
var myDataSaver = new DataSaver<RecordingDetails>();
var item = myDataSaver.LoadMyData(LastSelectedRecording.SourceFileName);
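The DataSaver class itself is not shown in the article. Its job, serializing one object to an XML file and reading it back, can be sketched in a language-neutral way; this is an illustrative Python analogue under the assumption that only the serialized fields matter, not the author's actual DataContractSerializer-based class:

```python
import xml.etree.ElementTree as ET

def save_my_data(record, path):
    """Serialize a dict of recording metadata to an XML file (DataSaver analogue)."""
    root = ET.Element("RecordingDetails")
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    ET.ElementTree(root).write(path)

def load_my_data(path):
    """Deserialize the metadata file back into a dict."""
    root = ET.parse(path).getroot()
    return {child.tag: child.text for child in root}
```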
With this in place, I have everything needed to track recordings, save them, and load them. When the program first starts, it loads all of the RecordingDetails instances into an ObservableCollection in my view model, from which they can be presented to the user as a list.
public void LoadData()
{
    var isoStore =
        System.IO.IsolatedStorage.IsolatedStorageFile.GetUserStoreForApplication();
    var recordingList = isoStore.GetFileNames("data/*.xml");
    var myDataSaver = new DataSaver<RecordingDetails>();
    Items.Clear();
    foreach (var desc in recordingList.Select(item =>
    {
        var result = myDataSaver.LoadMyData(String.Format("data/{0}", item));
        result.SourceFileName = String.Format("data/{0}", item);
        return result;
    }))
    {
        Items.Add(desc);
    }
    this.IsDataLoaded = true;
}
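The pattern LoadData follows, enumerating every metadata file under a data folder and deserializing each into a collection while remembering where it came from, can be sketched as follows (an illustrative Python version, not the author's code; the "data/*.xml" pattern comes from the C# above):

```python
import glob
import os
import xml.etree.ElementTree as ET

def load_all_recordings(data_dir):
    """Load every recording-metadata XML file under data_dir, as LoadData does."""
    items = []
    for path in sorted(glob.glob(os.path.join(data_dir, "*.xml"))):
        root = ET.parse(path).getroot()
        record = {child.tag: child.text for child in root}
        # remember which file the data came from, mirroring SourceFileName
        record["SourceFileName"] = path
        items.append(record)
    return items
```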
Save state and tombstone
Your program can be interrupted at any time: a phone call comes in, the user presses the Search button and leaves the program, and so on. When this happens, your application is tombstoned; the operating system saves the user's page state and gives the program a chance to save other data. When the program loads again, the developer must make sure appropriate steps are taken to restore that state. In this application I hardly need to worry about tombstoning, because most of the program's state is written to isolated storage as soon as it changes; there is not much state data left to save, since recordings are persisted immediately.
The remaining content is not translated; it is simply the author describing features he would like to add. Finally, here are the source code and the application: http://www.codeproject.com/KB/windows-phone-7/WpVoiceMemo/WpVoiceMemo.zip