Re: NSSound


From: David Chisnall
Subject: Re: NSSound
Date: Thu, 4 Jun 2009 11:37:40 +0100

On 4 Jun 2009, at 01:19, Stefan Bidigaray wrote:

OK, I've been staring at the structure of NSSound as I have it now (not the one in the tar.gz I sent out before), and I think I need to take a step back and design this a little better. I need some help! One thing to keep in mind is that I will not go into the playback back-end, since that still needs to be vetted. I'll try to tackle each item one at a time:

1 - NSData ivar
As can be seen in the tar.gz and the .diff (on Savannah), I modified the ivars. The question here is: should I have replaced the NSData with a data pointer and length, like I did, or should I have kept the NSData? My reasoning was mainly the overhead, plus the fact that I can only read and play from pointers anyway.

Keep the NSData. If someone calls -initWithData: with an NSData object as the argument, then you just need to do a -copy, which, for immutable objects, is just a -retain. This eliminates the need to copy; with C pointers you always need to do the copy. Oh, and using an NSData means EtoileSerialize can automatically serialise the object with no need for fallback code.
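
Something along these lines, as a minimal sketch (the _data ivar name is a placeholder here, not the actual NSSound layout):

/* Sketch only: _data is an assumed NSData ivar, not the real NSSound ivar. */
- (id) initWithData: (NSData *)data
{
  if ((self = [super init]) != nil)
    {
      /* -copy on an immutable NSData is effectively a -retain, so this is
       * cheap; a mutable NSData passed in gets a real defensive copy. */
      _data = [data copy];
    }
  return self;
}

- (void) dealloc
{
  [_data release];
  [super dealloc];
}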

2 - New, helper methods
Libsndfile allows me to very easily do virtual I/O (all I have to do is implement length, read, seek, write and tell), so I initially moved the reading code to -initWithData: and used -[NSData initWithContentsOfMappedFile:] in -[NSSound initWithContentsOfFile:]. As I moved on, I figured it would be nice to be able to write to a file and to read from raw PCM data (both easily supported by libsndfile). I went ahead and created two new methods: -initWithData:raw:range:format:channels:sampleRate:byteOrder: (I moved the reading code here and had -initWithData: call it instead) and -dataWithFormat:fileType:. What do you guys think of this?
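
For reference, the virtual I/O hooks end up looking roughly like this when backed by an NSData. SF_VIRTUAL_IO and sf_open_virtual() are libsndfile's real API, but the DataReader struct and the callback bodies below are only a sketch, not the code in the tar.gz:

#include <sndfile.h>
#import <Foundation/Foundation.h>

/* Sketch: a read cursor over an NSData, passed to libsndfile as user_data. */
typedef struct { NSData *data; sf_count_t pos; } DataReader;

static sf_count_t dataLength (void *user)
{
  return [((DataReader *)user)->data length];
}

static sf_count_t dataSeek (sf_count_t offset, int whence, void *user)
{
  DataReader *r = user;

  switch (whence)
    {
      case SEEK_SET: r->pos = offset; break;
      case SEEK_CUR: r->pos += offset; break;
      case SEEK_END: r->pos = (sf_count_t)[r->data length] + offset; break;
    }
  return r->pos;
}

static sf_count_t dataRead (void *ptr, sf_count_t count, void *user)
{
  DataReader *r = user;
  sf_count_t len = [r->data length];

  if (r->pos >= len)
    return 0;
  if (count > len - r->pos)
    count = len - r->pos;
  [r->data getBytes: ptr range: NSMakeRange(r->pos, count)];
  r->pos += count;
  return count;
}

static sf_count_t dataWrite (const void *ptr, sf_count_t count, void *user)
{
  return 0;   /* reading only; a writer would append to an NSMutableData */
}

static sf_count_t dataTell (void *user)
{
  return ((DataReader *)user)->pos;
}

/* Open an in-memory sound for reading; info is filled in by libsndfile. */
static SNDFILE * openDataForReading (NSData *d, DataReader *r, SF_INFO *info)
{
  SF_VIRTUAL_IO vio = { dataLength, dataSeek, dataRead, dataWrite, dataTell };

  r->data = d;
  r->pos = 0;
  info->format = 0;   /* required before opening for read */
  return sf_open_virtual(&vio, SFM_READ, info, r);
}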

Make the NSData versions the real implementation so you can avoid the overhead. If you want functions that take C pointers, make them do an explicit copy and wrap the bytes in an NSData.
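
So a raw-bytes entry point can just be a thin wrapper; the selector below is made up purely for illustration, not an agreed NSSound API:

/* Hypothetical wrapper: copy the caller's buffer into an NSData and
 * delegate to the NSData-based designated initialiser. */
- (id) initWithBytes: (const void *)bytes length: (NSUInteger)length
{
  return [self initWithData: [NSData dataWithBytes: bytes length: length]];
}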

If you don't use NSData internally, you need to have initWithContentsOfMappedFile: use mmap() explicitly, which is a bit messy. NSData, as I recall, has fallback code for platforms like uClinux where there is no mmap(), performing explicit reads and just pretending to do the mapping. By calling mmap() explicitly you lose that fallback, and you also make your -dealloc much more complicated, because you need to do either free() or munmap() depending on how the object was initialised. Code reuse is good.
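
Purely as an illustration of the complication being avoided (ivar names invented), a -dealloc that manages its own buffer ends up branching on how the buffer was obtained:

/* Illustration only: this is the bookkeeping NSData already does for you. */
- (void) dealloc
{
  if (_isMapped)
    munmap(_bytes, _length);   /* buffer came from mmap()            */
  else
    free(_bytes);              /* buffer came from malloc()/read()   */
  [super dealloc];
}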

Having this functionality would be very useful to me. The current implementation of NSSpeechSynthesizer uses the speech engine's native sound output facility. I did this because Flite can only put audio data in memory in a raw format, with no header, or write it to a file. Being able to create an NSSound from raw data would be much cleaner, since I could then use the NSSound code for playback.
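
Assuming the initialiser keeps roughly the signature proposed above (which is not settled, and the format argument below is just a placeholder), the Flite path could reduce to something like this; the cst_wave fields are Flite's, NSHostByteOrder() is Foundation's, everything else is a sketch:

#import <AppKit/AppKit.h>
#include <flite/flite.h>

/* Sketch only: the NSSound initialiser used here is the one proposed in
 * this thread and does not exist yet; format: 0 is a placeholder value. */
static NSSound * soundFromFliteWave (const cst_wave *wave)
{
  NSData *pcm = [NSData dataWithBytes: wave->samples
                               length: wave->num_samples * sizeof(short)];

  return [[[NSSound alloc] initWithData: pcm
                                    raw: YES
                                  range: NSMakeRange(0, [pcm length])
                                 format: 0
                               channels: wave->num_channels
                             sampleRate: wave->sample_rate
                              byteOrder: NSHostByteOrder()] autorelease];
}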

3 - Fallback implementation
The attached tar.gz includes a first pass at some fallback methods to read WAV and AU (and possibly AIFF/AIFF-C) data even when libsndfile is not available. I modeled it somewhat on the NSBitmapImageRep code: there are two new files (NSSound+WAV.m and NSSound+AU.m) plus a bunch of helper functions to read the data (in NSSoundPrivate.h). Initially I thought this wasn't going to add much code, but between all the new files and the actual reading code it comes out to quite a bit. The question here: is it worth it?

Ideally, yes. On the desktop, requiring libsndfile is fine. On handheld devices it may be nicer to avoid the dependency (although the ability to read compressed audio may offset the extra cost of an additional dependency). I'd consider it a lower priority than having something actually working, though.
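
For a sense of scale, the WAV side of the fallback is mostly RIFF chunk walking; something like the helper below, where the function name and parameters are invented rather than taken from NSSoundPrivate.h:

#import <Foundation/Foundation.h>
#include <string.h>

/* Invented helper, not the NSSoundPrivate.h API: pulls the basic stream
 * parameters and the PCM byte range out of a little-endian RIFF/WAVE file. */
static BOOL readWAVHeader (NSData *data, int *channels, int *sampleRate,
                           int *bitsPerSample, NSRange *pcmRange)
{
  const uint8_t *b = [data bytes];
  NSUInteger len = [data length];
  NSUInteger pos = 12;                         /* skip "RIFF", size, "WAVE" */

  if (len < 12 || memcmp(b, "RIFF", 4) != 0 || memcmp(b + 8, "WAVE", 4) != 0)
    return NO;

  while (pos + 8 <= len)
    {
      uint32_t chunkSize = b[pos+4] | (b[pos+5] << 8)
        | (b[pos+6] << 16) | ((uint32_t)b[pos+7] << 24);

      if (memcmp(b + pos, "fmt ", 4) == 0 && pos + 24 <= len)
        {
          *channels      = b[pos+10] | (b[pos+11] << 8);
          *sampleRate    = b[pos+12] | (b[pos+13] << 8)
            | (b[pos+14] << 16) | ((uint32_t)b[pos+15] << 24);
          *bitsPerSample = b[pos+22] | (b[pos+23] << 8);
        }
      else if (memcmp(b + pos, "data", 4) == 0 && pos + 8 + chunkSize <= len)
        {
          *pcmRange = NSMakeRange(pos + 8, chunkSize);
          return YES;
        }
      pos += 8 + chunkSize + (chunkSize & 1);  /* RIFF chunks are word-aligned */
    }
  return NO;
}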

David



