---------- Forwarded message ----------
From: David Spies <address@hidden>
Date: Thu, Jul 31, 2014 at 10:00 PM
Subject: Re: Considering adding a "dispatch" function for compile-time
polymorphism
To: "John W. Eaton" <address@hidden>

> In many Octave functions, we have code that does things like
>
>   NDArray nda = ov.array_value ();
>
> and this operation succeeds if it is possible for the octave_value
> object OV to be converted to an NDArray object. Is it really better
> to have what is essentially a big switch statement that checks known
> types of octave_value objects? Using the switch statement means
> that if a new type is added that can be converted to an NDArray
> object, you have to modify the switch statement to make this work.
Not necessarily. The switch statement can still handle types it
doesn't list explicitly by falling back to a conversion. For
instance, a Range degenerates into an NDArray.
> With conversions, you can make this work just by providing an
> array_value function in your new type.
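The conversion-based dispatch being described can be sketched in
simplified, self-contained C++. The names below (Value, ScalarValue,
RangeValue, sum) are illustrative stand-ins for this discussion, not
Octave's actual class hierarchy:

```cpp
#include <vector>

using NDArray = std::vector<double>;   // stand-in for Octave's NDArray

// Each value type provides its own array_value conversion.
struct Value
{
  virtual ~Value () = default;
  virtual NDArray array_value () const = 0;
};

struct ScalarValue : Value
{
  double x;
  explicit ScalarValue (double v) : x (v) { }
  NDArray array_value () const override { return NDArray (1, x); }
};

// A new type only needs to implement array_value; callers such as
// sum() below keep working with no central switch to update.
struct RangeValue : Value
{
  double base, inc;
  int n;
  RangeValue (double b, double i, int count)
    : base (b), inc (i), n (count) { }
  NDArray array_value () const override
  {
    NDArray a (n);
    for (int k = 0; k < n; k++)
      a[k] = base + k * inc;
    return a;
  }
};

double sum (const Value& v)
{
  double s = 0;
  for (double x : v.array_value ())  // works for any convertible type
    s += x;
  return s;
}
```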
This is not true. You're assuming that the matrix you're trying to deal
with "can" be efficiently represented as an NDArray. If that were
always true, there would be no need for other matrix types. NDArray is
not a valid substitute for Sparse, DiagArray2, or PermMatrix: all of
these are "sparse" in the sense that they are mostly zeros, so the
proper nonzero-iterator types must be used. Once these matrices exceed
the bounds of octave_idx_type, they cannot be converted to an NDArray
at all, but long before that point the conversion can be hideously
inefficient and can cause Octave to consume all of a machine's memory
and crash.
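A back-of-the-envelope sketch of the memory argument (the matrix size
here is hypothetical, chosen only to illustrate the scaling; this is
plain arithmetic, not Octave code):

```cpp
#include <cstdint>

// Storage for an n x n diagonal matrix kept in its own type
// vs. converted to a dense NDArray.

std::uint64_t diag_bytes (std::uint64_t n)
{
  return n * sizeof (double);       // one stored value per diagonal entry
}

std::uint64_t dense_bytes (std::uint64_t n)
{
  return n * n * sizeof (double);   // every element materialized
}

// For n = 100000: diag_bytes is ~0.8 MB, dense_bytes is ~80 GB --
// the conversion exhausts memory long before octave_idx_type overflows.
```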