gnuspeech-contact

Re: [gnuspeech-contact] Re: what's the status?


From: David Hill
Subject: Re: [gnuspeech-contact] Re: what's the status?
Date: Mon, 20 Feb 2006 19:06:59 -0800

Hi Greg,

On Feb 11, 2006, at 1:13 PM, Gregory John Casamento wrote:

David,

[snip]

3) GNUspeech uses CoreAudio, which is not considered to be part of Cocoa.  As a result, I have had to look into other ways to get this code working using NSSound on GNUstep without, of course, disturbing the code that has been written and works properly for Mac OS X.


Checking into NSSound, the documentation claims that it can play sound from three sources: (1) a disk file, via a pathname or URL; (2) a network connection, via a URL; (3) the pasteboard.

This doesn't look wonderfully appropriate for what Monet etc. would need.  I suppose it could be done by using two files for double buffering, or something like that, but it seems kludgy.

I realise the Core Audio framework is relatively new, but -- with the increasing importance of audio -- I would have thought GNUstep is going to have to address this area.  What better time than when there's an important sample app that needs it ;-)

I append an extract from the Apple Developer web page on the topic.

As a matter of interest, do you have access to a NeXT?

All good wishes.

david

--------

Introduction to Core Audio


Hardware Abstraction Layer (HAL)

Note: In its preliminary form, this document does not yet contain documentation for the Hardware Abstraction Layer. The final document will contain information on this technology.

The Hardware Abstraction Layer (HAL) is part of the Core Audio framework and provides the lowest level of audio hardware access available to the application. It presents the global properties of the system, such as the list of available audio devices. It also contains an Audio Device object that allows the application to read input data from, and write output data to, the audio device the object represents, and it provides the means to manipulate and control the device through a property mechanism.

The service allows for devices that use PCM encoded data. For PCM devices, the generic format is 32-bit floating point, maintaining a high resolution of the audio data regardless of the actual physical format of the device. This is also the generic format of PCM data streams throughout the Core Audio API.

An audio stream object represents n-channels of interleaved samples that correspond to a particular I/O end-point of the device itself. Some devices (for example, a card that has both digital and analog I/O) may present more than one audio stream.

The service provides the scheduling and user/kernel transitions required to deliver audio data to, and retrieve it from, the audio device. Timing information is an essential component of this service; time stamps are ubiquitous throughout both the audio and MIDI system, making it possible to know the state of any particular sample of the device (that is, “sample accurate timing”).


Audio Unit

An audio unit is a single processing unit that is either a source of audio data (for example, a software synthesizer), a destination of audio data (for example, an audio unit that wraps an audio device), or both a source and a destination (for example, a DSP unit, such as a reverb, that takes audio data and processes or transforms it).

The Audio Unit API uses a property mechanism similar to that of the Core Audio framework and uses the same structures for both the buffers of audio data and timing information. Audio units also provide real-time control capabilities, called parameters, that can be scheduled, allowing changes in the audio rendering to be applied at a particular sample offset within any given “slice” of an audio unit’s rendering process.

An application can use an AudioOutputUnit to interface to a device. The DefaultOutputAudioUnit tracks the device the user has selected as the “default” output for audio and provides additional services, such as sample rate conversion, offering a simpler means of interfacing to an output device.

----------




