G.723.1 specifies a coded representation that can be used for compressing the speech or other audio signal component of multimedia services at a very low bit rate. In the design of this coder, the principal application considered was very low bit rate visual telephony as part of the overall H.324 family of standards.
G.723.1 has two bit rates associated with it: 5.3 and 6.3 kbit/s. The higher bit rate gives greater quality; the lower bit rate still gives good quality and provides system designers with additional flexibility. Both rates are a mandatory part of the encoder and decoder. It is possible to switch between the two rates at any 30 ms frame boundary. An option for variable rate operation using discontinuous transmission and noise fill during non-speech intervals is also possible.
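Because the rate can change at every frame boundary, it is natural to treat it as a per-frame encoder parameter. Below is a minimal, hypothetical C sketch of how a dual-rate wrapper might expose that choice; the type and function names, and the exact fields, are illustrative assumptions and not part of the standard or of any particular implementation.

```c
/* Hypothetical per-frame configuration for a dual-rate G.723.1 encoder. */
typedef enum {
    G723_RATE_5300 = 0,   /* 5.3 kbit/s, ACELP excitation  */
    G723_RATE_6300 = 1    /* 6.3 kbit/s, MP-MLQ excitation */
} g723_rate_t;

typedef struct {
    g723_rate_t rate;     /* may be changed at any 30 ms frame boundary */
    int         use_vad;  /* optional DTX / comfort-noise operation     */
} g723_frame_cfg_t;

/* Commonly cited compressed-frame sizes for the two rates
 * (one 30 ms frame per payload). */
int g723_frame_octets(g723_rate_t rate)
{
    return (rate == G723_RATE_6300) ? 24 : 20;
}
```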
The G.723.1 coder encodes speech or other audio signals in 30 ms frames. In addition, there is a look-ahead of 7.5 ms, resulting in a total algorithmic delay of 37.5 ms (the corresponding sample counts are worked through in the sketch after the list below). All additional delays in the implementation and operation of this coder are due to:
Actual time spent processing the data in the encoder and decoder;
Transmission time on the communication link;
Additional buffering delay for the multiplexing protocol.
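At the 8 kHz sampling rate these durations map directly onto sample counts; the short C program below simply works through that arithmetic using the frame and look-ahead sizes stated above.

```c
#include <stdio.h>

int main(void)
{
    const int    fs_hz        = 8000;   /* sampling rate       */
    const int    frame_ms     = 30;     /* frame length        */
    const double lookahead_ms = 7.5;    /* encoder look-ahead  */

    const int    frame_samples = fs_hz * frame_ms / 1000;        /* 240  */
    const double lookahead_smp = fs_hz * lookahead_ms / 1000.0;  /* 60   */
    const double algo_delay_ms = frame_ms + lookahead_ms;        /* 37.5 */

    printf("frame: %d samples, look-ahead: %.0f samples, "
           "algorithmic delay: %.1f ms\n",
           frame_samples, lookahead_smp, algo_delay_ms);
    return 0;
}
```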
The G.723.1 coder is designed to operate with a digital signal obtained by first performing telephone bandwidth filtering (Recommendation G.712) of the analog input, then sampling at 8000 Hz, and then converting to 16-bit linear PCM for the input to the encoder. The output of the decoder should be converted back to analog by similar means. Other input/output characteristics, such as those specified by Recommendation G.711 for 64 kbit/s PCM data, should be converted to 16-bit linear PCM before encoding, or from 16-bit linear PCM to the appropriate format after decoding.
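For G.711 inputs this conversion is a simple expansion of each code word. The sketch below shows one widely used µ-law-to-16-bit-linear expansion routine as an illustration of how such a conversion is commonly done; it is not code from the Recommendation, and the A-law case is handled analogously.

```c
#include <stdint.h>

/* Expand one G.711 mu-law octet to 16-bit linear PCM.
 * Classic bias-and-shift formulation: the segment and mantissa are
 * recovered from the complemented code word. */
int16_t ulaw_to_linear16(uint8_t ulaw)
{
    ulaw = (uint8_t)~ulaw;                 /* code words are transmitted inverted */
    int sign     = ulaw & 0x80;            /* sign bit                            */
    int exponent = (ulaw >> 4) & 0x07;     /* 3-bit segment number                */
    int mantissa = ulaw & 0x0F;            /* 4-bit step within the segment       */

    int magnitude = ((mantissa << 3) + 0x84) << exponent;  /* add bias, scale     */
    magnitude -= 0x84;                                      /* remove the bias    */

    return (int16_t)(sign ? -magnitude : magnitude);
}
```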
The coder is based on the principles of linear prediction analysis-by-synthesis coding and attempts to minimize a perceptually weighted error signal. The encoder operates on blocks (frames) of 240 samples each, which is equal to 30 ms at an 8 kHz sampling rate. Each block is first high-pass filtered to remove the DC component and then divided into four subframes of 60 samples each. For every subframe, a 10th-order linear prediction coder (LPC) filter is computed using the unprocessed input signal. The LPC filter for the last subframe is quantized using a predictive split vector quantizer (PSVQ). The unquantized LPC coefficients are used to construct the short-term perceptual weighting filter, which is used to filter the entire frame and to obtain the perceptually weighted speech signal.
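As a concrete illustration of this front end, the sketch below removes DC with a first-order high-pass filter and then walks a 240-sample frame in four 60-sample subframes. The names are illustrative, and the filter coefficient shown (127/128) should be treated as an assumption about the typical form of such a DC-removal filter rather than as the normative definition.

```c
#define FRAME_LEN    240
#define SUBFRAMES      4
#define SUBFRAME_LEN  60

/* First-order DC-removal high-pass filter, H(z) = (1 - z^-1)/(1 - a*z^-1),
 * applied in place to one 240-sample frame. State is carried across frames. */
typedef struct { float x_prev, y_prev; } hpf_state_t;

void dc_remove(float *frame, hpf_state_t *st)
{
    const float a = 127.0f / 128.0f;          /* pole close to DC */
    for (int n = 0; n < FRAME_LEN; n++) {
        float x = frame[n];
        float y = x - st->x_prev + a * st->y_prev;
        st->x_prev = x;
        st->y_prev = y;
        frame[n]   = y;
    }
}

/* Per-subframe processing then steps through the frame in 60-sample chunks. */
void for_each_subframe(float *frame, void (*process)(const float *sf, int idx))
{
    for (int i = 0; i < SUBFRAMES; i++)
        process(frame + i * SUBFRAME_LEN, i);
}
```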
For every two subframes (120 samples), the open-loop pitch period, L_OL, is computed using the weighted speech signal. This pitch estimation is performed on blocks of 120 samples. The pitch period is searched in the range from 18 to 142 samples. From this point on, the speech is processed on a 60-samples-per-subframe basis.
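A typical open-loop pitch estimator maximizes a normalized cross-correlation of the weighted speech over the allowed lag range; the sketch below illustrates that idea for one 120-sample block with lags 18 to 142 as above. It is a generic estimator for illustration, not the exact criterion of the standard.

```c
#include <float.h>

#define PITCH_MIN   18
#define PITCH_MAX  142
#define BLOCK_LEN  120

/* Return the lag in [PITCH_MIN, PITCH_MAX] that maximizes the normalized
 * cross-correlation C(L)^2 / E(L) between the current 120-sample block of
 * weighted speech w[0..119] and the same signal delayed by L samples.
 * w must have at least PITCH_MAX samples of history before w[0]. */
int open_loop_pitch(const float *w)
{
    int   best_lag  = PITCH_MIN;
    float best_cost = -FLT_MAX;

    for (int lag = PITCH_MIN; lag <= PITCH_MAX; lag++) {
        float corr = 0.0f, energy = 1e-6f;        /* small floor avoids /0 */
        for (int n = 0; n < BLOCK_LEN; n++) {
            corr   += w[n] * w[n - lag];
            energy += w[n - lag] * w[n - lag];
        }
        float cost = (corr > 0.0f) ? corr * corr / energy : 0.0f;
        if (cost > best_cost) {
            best_cost = cost;
            best_lag  = lag;
        }
    }
    return best_lag;
}
```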
Using the estimated pitch period computed previously, a harmonic noise shaping filter is constructed. The combination of the LPC synthesis filter, the formant perceptual weighting filter, and the harmonic noise shaping filter is used to create an impulse response. The impulse response is then used for further computations.
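The combined impulse response can be obtained simply by running a unit impulse through the cascaded filters over one subframe. The sketch below does this for an all-pole LPC synthesis filter followed by a single-tap harmonic noise shaping stage; the filter forms, the coefficient convention, and the beta weight are illustrative assumptions, and the formant weighting filter is omitted here for brevity.

```c
#define SUBFRAME_LEN 60
#define LPC_ORDER    10

/* Impulse response over one subframe of the cascade
 *   1/A(z)             all-pole LPC synthesis filter, A(z) = 1 + sum a[k]*z^-k
 *   1 - beta*z^(-T)    single-tap harmonic noise shaping, lag T, weight beta
 * h must hold SUBFRAME_LEN samples. */
void combined_impulse_response(const float a[LPC_ORDER + 1],
                               int T, float beta, float *h)
{
    float synth[SUBFRAME_LEN];

    /* Feed a unit impulse through the all-pole filter. */
    for (int n = 0; n < SUBFRAME_LEN; n++) {
        float acc = (n == 0) ? 1.0f : 0.0f;       /* the impulse itself */
        for (int k = 1; k <= LPC_ORDER && k <= n; k++)
            acc -= a[k] * synth[n - k];           /* recursive part     */
        synth[n] = acc;
    }

    /* Apply the harmonic (comb-like) FIR stage. */
    for (int n = 0; n < SUBFRAME_LEN; n++)
        h[n] = synth[n] - ((n >= T) ? beta * synth[n - T] : 0.0f);
}
```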
Using the pitch period estimation, L_OL, and the impulse response, a closed-loop pitch predictor is computed. A fifth-order pitch predictor is used. The pitch period is computed as a small differential value around the open-loop pitch estimate. The contribution of the pitch predictor is then subtracted from the initial target vector. Both the pitch period and the differential value are transmitted to the decoder.
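Conceptually, the closed-loop stage tries a few candidate lags near L_OL, builds the pitch-predictor contribution from past excitation filtered by the impulse response, and keeps the candidate that best matches the target. The sketch below shows such a differential search for a single-tap predictor; the standard itself uses a fifth-order predictor with quantized gain vectors and specific search ranges, so treat this purely as an illustration of the idea.

```c
#include <float.h>

#define SUBFRAME_LEN 60

/* Search lags L_ol + d, d in [-1, +2], for the single-tap adaptive-codebook
 * contribution that best matches the target vector t[0..59].
 * exc points at the current subframe; exc[-lag..-1] is past excitation.
 * h is the combined impulse response. Returns the chosen differential d. */
int closed_loop_pitch(const float *t, const float *exc,
                      const float *h, int L_ol, float *gain_out)
{
    int   best_d    = 0;
    float best_cost = -FLT_MAX, best_gain = 0.0f;

    for (int d = -1; d <= 2; d++) {
        int lag = L_ol + d;

        /* Adaptive-codebook vector: past excitation at this lag,
         * repeated periodically when the lag is shorter than the subframe. */
        float v[SUBFRAME_LEN];
        for (int n = 0; n < SUBFRAME_LEN; n++)
            v[n] = (n < lag) ? exc[n - lag] : v[n - lag];

        /* Filter v through the impulse response (zero-state convolution). */
        float y[SUBFRAME_LEN];
        for (int n = 0; n < SUBFRAME_LEN; n++) {
            float acc = 0.0f;
            for (int k = 0; k <= n; k++)
                acc += v[n - k] * h[k];
            y[n] = acc;
        }

        float corr = 0.0f, energy = 1e-6f;
        for (int n = 0; n < SUBFRAME_LEN; n++) {
            corr   += t[n] * y[n];
            energy += y[n] * y[n];
        }
        float cost = corr * corr / energy;        /* matching criterion  */
        if (cost > best_cost) {
            best_cost = cost;
            best_d    = d;
            best_gain = corr / energy;            /* optimal scalar gain */
        }
    }
    *gain_out = best_gain;
    return best_d;
}
```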
Finally, the non-periodic component of the excitation is approximated. For the high bit rate, multi-pulse maximum likelihood quantization (MP-MLQ) excitation is used; for the low bit rate, an algebraic code-excited linear prediction (ACELP) excitation is used.
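To give a flavour of the high-rate case, the sketch below performs a greedy multi-pulse search: it repeatedly picks the pulse position whose filtered contribution best matches the remaining target. This is a simplified stand-in for illustration only; the standard's MP-MLQ additionally restricts pulse positions to grids and quantizes a common gain, and the 5.3 kbit/s mode uses an algebraic (ACELP) codebook instead.

```c
#include <float.h>

#define SUBFRAME_LEN 60

/* Greedy multi-pulse excitation search (illustrative, not MP-MLQ itself).
 * t:   target vector after the pitch contribution has been removed
 * h:   combined impulse response
 * pos/amp: positions and amplitudes of the npulses chosen pulses */
void multipulse_search(const float *t, const float *h,
                       int npulses, int *pos, float *amp)
{
    float r[SUBFRAME_LEN];                        /* residual target */
    for (int n = 0; n < SUBFRAME_LEN; n++) r[n] = t[n];

    for (int p = 0; p < npulses; p++) {
        int   best_pos = 0;
        float best_corr = 0.0f, best_en = 1.0f, best_score = -FLT_MAX;

        /* Try h placed at every position against the current residual. */
        for (int m = 0; m < SUBFRAME_LEN; m++) {
            float corr = 0.0f, en = 1e-6f;
            for (int n = m; n < SUBFRAME_LEN; n++) {
                corr += r[n] * h[n - m];
                en   += h[n - m] * h[n - m];
            }
            float score = corr * corr / en;
            if (score > best_score) {
                best_score = score;
                best_pos   = m;
                best_corr  = corr;
                best_en    = en;
            }
        }

        pos[p] = best_pos;
        amp[p] = best_corr / best_en;             /* least-squares amplitude */

        /* Remove this pulse's filtered contribution from the residual. */
        for (int n = best_pos; n < SUBFRAME_LEN; n++)
            r[n] -= amp[p] * h[n - best_pos];
    }
}
```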
The G.723.1 decoder also operates on a frame-by-frame basis. First the quantized LPC indices are decoded, then the decoder constructs the LPC synthesis filter. For every subframe, both the adaptive codebook excitation and the fixed codebook excitation are decoded and input to the synthesis filter. The adaptive postfilter consists of a formant postfilter and a forward-backward pitch postfilter. The excitation signal is input to the pitch postfilter, whose output is input to the synthesis filter, whose output is in turn input to the formant postfilter. A gain scaling unit maintains the energy at the input level of the formant postfilter.
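As a small concrete example of that last step, a gain scaling unit of this kind can be realized by measuring the energy at the formant postfilter's input and output over the subframe and scaling the output so the two match. The sketch below shows that idea; the exact gain computation and smoothing used by the standard may differ.

```c
#include <math.h>

#define SUBFRAME_LEN 60

/* Scale the formant-postfilter output y so that its energy over the
 * subframe matches the energy of the postfilter input x. */
void gain_scale(const float *x, float *y)
{
    float ex = 1e-6f, ey = 1e-6f;
    for (int n = 0; n < SUBFRAME_LEN; n++) {
        ex += x[n] * x[n];
        ey += y[n] * y[n];
    }
    float g = sqrtf(ex / ey);                 /* energy-matching gain */
    for (int n = 0; n < SUBFRAME_LEN; n++)
        y[n] *= g;
}
```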
Applications:
Wi-Fi phones / VoWLAN
Wireless GPRS/EDGE systems
Personal Communications
Wideband IP Telephony
Audio and video conferencing
Features:
Full and half duplex modes of operation.
Passes ITU test vectors.
Common compressed speech frame stream interface to support systems with multiple speech coders (G.729, G.728, G.726, etc.); a sketch of one such interface follows this list.
Optimized for high performance on leading-edge DSP architectures.
Multi-tasking environment compatible.
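As an illustration of the common compressed speech frame stream interface mentioned above, the sketch below shows one hypothetical way such an abstraction is often structured in C: every coder (G.723.1, G.729, G.728, G.726, ...) is wrapped behind the same set of function pointers so the framing layer does not need to know which codec is active. The names and fields are assumptions, not the product's actual API.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical common interface for a family of speech coders. */
typedef struct speech_coder {
    const char *name;                 /* e.g. "G.723.1", "G.729"           */
    int   frame_samples;              /* PCM samples per frame (240 here)  */
    int   max_payload_octets;         /* largest compressed frame          */

    void *(*create)(void);                                 /* allocate state  */
    void  (*destroy)(void *state);
    int   (*encode)(void *state, const int16_t *pcm,
                    uint8_t *payload);                     /* returns octets  */
    int   (*decode)(void *state, const uint8_t *payload,
                    size_t octets, int16_t *pcm);          /* returns samples */
} speech_coder_t;

/* A framing layer can then drive any registered coder the same way:
 *
 *   const speech_coder_t *c = selected_coder;       // chosen at run-time
 *   void *st = c->create();
 *   int octets = c->encode(st, pcm_frame, payload);
 *   ...
 */
```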
Deployments:
DAA interface using linear codec at 8.0 kHz sample rate.
Direct interface to 8.0 kHz PCM data stream (A-law or µ-law).
North American/international telephony (including caller ID) support available.
Simultaneous DTMF detector operation available (fewer than 150 hits on Bellcore test tape, typical).
MF tone detectors and general-purpose programmable tone detectors/generators available.
Line echo cancellation (G.165 and G.168 compliant) available.
Where multiple speech coders (G.729, G.728, G.726, etc.) are available, coder selection can occur at run-time.
Data/facsimile/voice distinction available.
Various startup procedures available (V.8 and V.8bis).
Multiple ports can be executed on a single DSP.
Example resource requirements (ADSP-2181):
Encoder at 5.3 kbit/s requires 18 MIPS
Encoder at 6.3 kbit/s requires 26 MIPS
Decoder (5.3 or 6.3 kbit/s) requires 2 MIPS
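For channel planning, the per-channel load is simply the sum of the encoder and decoder figures; the short program below works through that arithmetic using the numbers above. The available DSP budget is left as a parameter and the value shown is only an assumption, since it depends on the actual device and clock.

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the table above (ADSP-2181). */
    const double enc_53 = 18.0, enc_63 = 26.0, dec = 2.0;

    const double duplex_53 = enc_53 + dec;   /* 20 MIPS per full-duplex channel */
    const double duplex_63 = enc_63 + dec;   /* 28 MIPS per full-duplex channel */

    const double dsp_mips = 33.0;            /* assumed budget; adjust for the
                                                actual device and clock rate   */

    printf("5.3 kbit/s duplex: %.0f MIPS (%d channel(s) within %.0f MIPS)\n",
           duplex_53, (int)(dsp_mips / duplex_53), dsp_mips);
    printf("6.3 kbit/s duplex: %.0f MIPS (%d channel(s) within %.0f MIPS)\n",
           duplex_63, (int)(dsp_mips / duplex_63), dsp_mips);
    return 0;
}
```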