A few weeks ago, I was sitting in a brand-new Toyota Prius, listening to an agent of a car rental company explain the unfamiliar switches and indicators scattered across the dashboard. "Wow," I thought, "this technology is as old as the car itself, yet manufacturers keep polishing the user interface."
At the broadest level, the user interface is the place where human and computer meet. Although the concept is as old as technology itself, it was the personal computer that turned the user interface into something of an art form.
These days, I'm afraid only a small percentage of personal computer users can remember what came before the graphical user interfaces of the Apple Macintosh and Microsoft Windows. Back then, in the late 1980s, some experts feared that the standardization of user interfaces would lead to applications that were monotonous and tedious. That is not what happened. Instead, once standard controls removed the burden of designers and programmers having to build their own scroll bars, user interfaces actually began to diverge and become more interesting.
Into this context comes the new model introduced by Windows Presentation Foundation (WPF), which carries the user interface further still. WPF lays a solid foundation of retained-mode graphics and animation, and on top of that it adds a tree-based hierarchy of parent and child elements and a powerful markup language called XAML. The result is unmatched flexibility in customizing existing controls through templating and in building new controls by assembling existing components.
Moreover, these new concepts are not just for client-side programming. A robust subset of XAML and the WPF classes of the Microsoft .NET Framework is now available for Web-based programming through Silverlight. This has ushered in a new era in which custom controls can be genuinely shared between client applications and Web applications. I'm sure this momentum will carry over into mobile applications, picking up new technologies such as multi-touch along the way, and ultimately encompass a wide range of information and entertainment systems.
For these reasons, I believe the user interface has become an even more critical part of application programming. This column explores the potential of user interface design in WPF and Silverlight, using cross-platform code where possible.
Prelude
The wisdom of a user-interface choice is not always obvious at a glance. The anthropomorphic Office Assistant known as Clippy, first introduced in Microsoft Office 97, probably seemed like a very good idea at the time. For that reason, I will focus more on technical potential than on design potential, and I will avoid the term "best practices," a judgment better left to historical and market perspective.
Here is a good rule of thumb: a computer should not emit any sound unless it is playing a video or sound file in response to a specific user command. I intend to break that rule by showing you how to play custom sounds in a WPF application by generating waveform data at runtime.
Although this capability has not yet been formally incorporated into the .NET Framework, it can be implemented with the NAudio library available on CodePlex (naudio.codeplex.com). Links on that site lead to sample code on Mark Heath's blog and to tutorials on Sebastian Gray's site.
You can use the NAudio library in Windows Forms or WPF applications. Because the library accesses Win32 API functions through P/Invoke, it cannot be used with Silverlight.
For this article, I'm using NAudio version 1.3.8. When you create a project that uses NAudio, you'll need to compile it for 32-bit processing: go to the Build tab of the project's Properties page and select x86 from the Platform Target drop-down list.
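That Visual Studio setting corresponds to the MSBuild PlatformTarget property in the project file; the relevant fragment of the .csproj (shown here in isolation as a sketch, with all other properties omitted) looks like this:

  <PropertyGroup>
    <PlatformTarget>x86</PlatformTarget>
  </PropertyGroup>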
Although the library provides many features for specialized applications built around sound, the use I'll show here is suited to more general applications.
For example, suppose your application lets the user drag objects around the window, and you want the drag to be accompanied by a simple sound (such as a sine wave) whose frequency increases as the object approaches the center of the window.
That's a job for waveform audio.
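To make that scenario concrete, here is a minimal sketch of the kind of oscillator involved. It assumes NAudio's WaveProvider16 base class and WaveOut player; the SineWaveOscillator class name and its Frequency and Amplitude properties are illustrative names of my own, not part of the library.

using System;
using NAudio.Wave;

// A waveform generator: NAudio calls Read whenever the audio hardware
// needs more data, and the loop fills the buffer with sine-wave samples
// computed on the fly.
class SineWaveOscillator : WaveProvider16
{
    double phaseAngle;

    public SineWaveOscillator(int sampleRate)
        : base(sampleRate, 1)     // one channel (mono)
    {
    }

    // Both properties can be changed while the sound is playing.
    public double Frequency { get; set; }
    public short Amplitude { get; set; }

    public override int Read(short[] buffer, int offset, int sampleCount)
    {
        for (int i = 0; i < sampleCount; i++)
        {
            buffer[offset + i] = (short)(Amplitude * Math.Sin(phaseAngle));
            phaseAngle += 2 * Math.PI * Frequency / WaveFormat.SampleRate;

            if (phaseAngle > 2 * Math.PI)
                phaseAngle -= 2 * Math.PI;
        }
        return sampleCount;
    }
}

Hooking the oscillator up to the speakers then takes only a few lines, for example when the drag begins:

SineWaveOscillator osc = new SineWaveOscillator(44100);
osc.Frequency = 440;      // starting pitch
osc.Amplitude = 8000;

WaveOut waveOut = new WaveOut();
waveOut.Init(osc);
waveOut.Play();

// In the MouseMove handler during the drag, recompute the pitch from the
// object's distance to the window's center, for example:
//   osc.Frequency = 220 + 880 * (1 - distanceToCenter / maxDistance);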
Today, almost all PCs are equipped with sound-generating hardware, typically in the form of one or two chips right on the motherboard. In general, this hardware is little more than a pair of digital-to-analog converters (DACs). Feed the two DACs a steady stream of integers describing a waveform, and out comes stereo sound.
So how much data is involved? Applications today typically generate "CD-quality" sound: the sampling rate is a constant 44,100 samples per second. (The Nyquist theorem states that the sampling rate must be at least twice the highest frequency being reproduced.) Conventional wisdom says the human ear can hear frequencies between 20 Hz and 20,000 Hz, so 44,100 is plenty. Each sample is a signed 16-bit integer, a size that implies a signal-to-noise ratio of 96 dB.
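To put numbers on those figures, here's a quick back-of-the-envelope calculation (just a sketch to show where the totals come from):

using System;

class CdQualityFigures
{
    static void Main()
    {
        const int sampleRate = 44100;     // samples per second, per channel
        const int bitsPerSample = 16;
        const int channels = 2;           // stereo

        // Bytes the pair of DACs must be fed every second:
        int bytesPerSecond = sampleRate * channels * (bitsPerSample / 8);
        Console.WriteLine(bytesPerSecond);            // 176400

        // Dynamic range of a 16-bit sample, 20 * log10(2^16),
        // which is where the 96 dB signal-to-noise figure comes from:
        double decibels = 20 * Math.Log10(Math.Pow(2, bitsPerSample));
        Console.WriteLine("{0:F1} dB", decibels);     // 96.3 dB
    }
}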