octal-dev

Re: info on voice architecture (was re: simple modelling)


From: Matt Stanton
Subject: Re: info on voice architecture (was re: simple modelling)
Date: Mon Mar 12 22:45:04 2001

David O'Toole wrote:
>
> Essentially each voice (simultaneous note) of your machine is assigned
> to a channel. You don't have to do any voice allocation, Octal will
> assign note events to voices. All you have to do is perhaps keep an
> array of objects, each of which is capable of creating one voice in your
> machine. Each event comes with a channel number (note on, note off,
> controller change, etc) so all you have to do is select the right object
> before doing your ox_update stuff on it.
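As a sketch in C of the dispatch described above (the `Voice` struct, `Event` layout, and `handle_event` name are hypothetical; the real Octal event structures may differ), selecting the right object by channel number might look like:

```c
#include <assert.h>

#define MAX_CHANNELS 16

/* Hypothetical per-voice object: one per channel. */
typedef struct {
    int   in_use;
    float freq;
} Voice;

typedef enum { EV_NOTE_ON, EV_NOTE_OFF } EventType;

/* Hypothetical event: every event carries its channel number. */
typedef struct {
    EventType type;
    int       channel;
    float     freq;
} Event;

static Voice voices[MAX_CHANNELS];

/* Select the right object by channel, then update it --
   the machine never does its own voice allocation. */
static void handle_event(const Event *ev)
{
    Voice *v = &voices[ev->channel];
    switch (ev->type) {
    case EV_NOTE_ON:
        v->in_use = 1;
        v->freq   = ev->freq;
        break;
    case EV_NOTE_OFF:
        v->in_use = 0;
        break;
    }
}
```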

I am thoroughly confused.  Will there be any direct representation of
channels in the sequencer?  What will they look like?  Is a channel a
tracker-style track (a sequence of notes for one voice)?  Is a channel
an internal bookkeeping device for voices?  If the latter, how does Octal
know enough to assign them properly?

> for more efficiency, you might
> construct your voices to start with one blank buffer, and then have each
> voice add its output to the buffer as it's being generated. (That will
> prevent the need for allocating a zillion buffers.)
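One way to read the single-buffer idea in C (the `Voice` struct here is a stand-in; real voice state would hold oscillator/delay-line data):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_VOICES 8
#define BLOCK 64

/* Hypothetical per-voice state; `level` stands in for real
   synthesis state just to show the mixing pattern. */
typedef struct {
    int   in_use;
    float level;
} Voice;

/* Start from one blank buffer, then have each active voice add
   its output in place -- one buffer for the whole machine,
   instead of one scratch buffer per voice. */
static void generate_block(Voice voices[], size_t nvoices,
                           float out[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = 0.0f;
    for (size_t v = 0; v < nvoices; v++) {
        if (!voices[v].in_use)
            continue;
        for (size_t i = 0; i < n; i++)
            out[i] += voices[v].level;  /* voice mixes itself in */
    }
}
```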

Oh, I realize this.  :)  The plucked string will still require multiple
buffers, though, since each voice uses a delay line (circular buffer) of
length=samplerate/frequency.
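The delay line in question -- a circular buffer of length samplerate/frequency, as in Karplus-Strong plucked-string synthesis -- could be sketched like this (the names and the simple two-point averaging filter are illustrative, not Octal API):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct {
    float  *buf;
    size_t  len;   /* samplerate / frequency, truncated */
    size_t  pos;
} DelayLine;

/* Allocate a zeroed circular buffer sized to the pitch period. */
static DelayLine *delay_new(unsigned samplerate, float frequency)
{
    DelayLine *d = malloc(sizeof *d);
    d->len = (size_t)(samplerate / frequency);
    d->buf = calloc(d->len, sizeof *d->buf);
    d->pos = 0;
    return d;
}

/* One Karplus-Strong step: read the oldest sample, write back the
   damped average of it and its neighbour (a simple lowpass that
   makes the "string" decay), then advance the read position. */
static float delay_tick(DelayLine *d, float damping)
{
    float  out  = d->buf[d->pos];
    size_t next = (d->pos + 1) % d->len;
    d->buf[d->pos] = damping * 0.5f * (out + d->buf[next]);
    d->pos = next;
    return out;
}
```

Because the buffer length depends on the note's frequency, each voice really does need its own delay line, which is why this machine cannot share a single buffer for everything.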

> How you respond to ox_channel() messages is also up to you. All it
> really means is "prepare channel X for possible use real soon." If you
> keep the objects around and don't need a lot of buffers, then you can
> just set a channel's "in use" flag and not have to allocate/deallocate
> memory when you receive track messages. Matt, how do these ideas look
> from your point of view?
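Under that reading, responding to a "prepare channel x" message reduces to flipping a flag (sketched below with hypothetical names; the real ox_channel() callback signature would come from the manual):

```c
#include <assert.h>

#define MAX_CHANNELS 16

/* Keep voice objects allocated for the machine's lifetime and
   toggle a flag, instead of malloc/free on every track message. */
typedef struct { int in_use; } Voice;

static Voice voices[MAX_CHANNELS];

/* Hypothetical response to an ox_channel()-style message:
   "prepare channel x for possible use real soon". */
static void prepare_channel(int channel)
{
    voices[channel].in_use = 1;   /* no allocation needed */
}

static void release_channel(int channel)
{
    voices[channel].in_use = 0;   /* memory stays around for reuse */
}
```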

Well, I'm not sure I understand them.  I'd like to see how they might
work from a UI point of view (the sequencer), and it might be helpful to
have a few examples.

> I will of course have much more detail when the manual is updated.  But
> the basic idea is to create an object that captures the concept of one
> voice in your machine, and decide how they will work together (mixing or
> adding during generation, etc) to create a multi-voice machine.

This much, at least, sounds good.

Matt Stanton
address@hidden


