
Re: [Gnu3dkit-dev] gnuDKit.info: RenderKit - questions


From: Brent Gulanowski
Subject: Re: [Gnu3dkit-dev] gnuDKit.info: RenderKit - questions
Date: Mon, 11 Nov 2002 15:25:13 -0500

On Monday, November 11, 2002, at 02:31 PM, Philippe C.D. Robert wrote:

Well, there was a roadmap for my initial plans wrt the next version. But then everything seems to have changed... :-) This leads to a good point: we need a feature list for the new 3DKit. I will try to come up with some input, but you are all welcome to contribute. My goal is to have an initial version available on the website within the next week.

Erm, I already wrote a starter feature list. It was part of my dev plan overview ([Gnu3dkit-discuss] Development Plan Topics - Overview), with section numbers and everything... I'm getting the feeling that it didn't really turn anyone's crank :\.

Wrt the work, for me there is one last real issue which I do not yet really see through: the scene representation. What if somebody would like to use not a scene graph but an octree or some CSG representation? How can we offer a good API for incorporating such scenes in the 3DKit? Ideas?

I am seriously interested in this issue. I think it would make sense to always have a scene graph, but allow for chunks of that graph to have alternative scene representations embedded within them. We need to decide what information we want to preserve in the scene representation. Do we want to preserve the hierarchical nature of represented objects? Whenever possible. But this information is filtered out during rendering, so maybe we can find a way to decouple the information about object assembly from the geometric information.
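
To make that concrete, here is a rough sketch of the decoupling I have in mind -- every name in it is invented on the spot, not a proposal for the real API:

#import <Foundation/Foundation.h>

// Sketch only: keep the assembly hierarchy (what a person would call
// "the east wall") apart from the geometric rep used for drawing.
@protocol G3DGeometryRep <NSObject>
- (void)renderGeometry;   // draw in whatever rep-specific way is fastest
@end

@interface G3DAssemblyNode : NSObject
{
    NSString            *name;      // semantic identity, e.g. @"east wall"
    NSMutableArray      *children;  // child G3DAssemblyNodes
    id <G3DGeometryRep>  geometry;  // drawable data, rep left open
}
- (id)initWithName:(NSString *)aName;
- (void)addChild:(G3DAssemblyNode *)aChild;
@end

The renderer would walk only the geometry; editing tools would walk the assembly nodes. The filtering that now happens during rendering would happen once, when a rep is derived.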

I like this idea of having 'sub-scenes' in different reps. So my idea was (as seen in the UML diagram) that a scene uses a scene rep. Now a scene can have multiple reps which do not have to represent the same data. The problem here is that I do not yet see a good solution for how to handle that internally, i.e. how actions can be written so that they can process any kind of scene rep.


Maybe your approach of having a scene which is a scene graph containing some special purpose nodes if needed (which in turn contain scene data in other representations) is better/easier to implement and work with... I guess I need to spend some more cycles on that.


OK, I'm hip with the idea of the over-arching scene (as opposed to scene rep) -- I see what you mean about handling multiple reps, sort of like switching gears with the engine running full speed: GRIND. Plus it means having to implement a "gearbox". Here's where I've let myself get confused, and need to clarify.
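
Before I do, though: one possible "gearbox" might be plain old double dispatch -- each rep hands itself to the action, and the action implements one method per rep family it understands. A sketch, with all names made up for illustration:

#import <Foundation/Foundation.h>

@class G3DSceneGraphRep, G3DOctreeRep, G3DCSGRep;
@class G3DAction;

@protocol G3DSceneRep <NSObject>
- (void)acceptAction:(G3DAction *)anAction;
@end

@interface G3DAction : NSObject
- (void)processGraphRep:(G3DSceneGraphRep *)aRep;
- (void)processOctreeRep:(G3DOctreeRep *)aRep;
- (void)processCSGRep:(G3DCSGRep *)aRep;
@end

@interface G3DOctreeRep : NSObject <G3DSceneRep>
@end

@implementation G3DOctreeRep
- (void)acceptAction:(G3DAction *)anAction
{
    [anAction processOctreeRep:self];  // dispatch on the concrete rep kind
}
@end

Adding a new rep kind then means adding a matching method to G3DAction, which at least makes the gear change explicit instead of a grinding surprise.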

A "Scene" is an abstraction. A "Scene rep" is a particular construction of data representing that scene for a particular ... can I say "context"? Maybe we need to consider a context -- not an OpenGL Rendering Context, but a general scene *Presentation* context. That would include files and serialized forms (say, for transmission), scan line rendering contexts (both raw arrays and visibility sorted lists and trees), offline rendering contexts (for ray tracing, radiosity and other lighting calculations), scene databases (pure scene descriptions which are designed for freeform, database-style access), and even textual summaries (for example, in an NSOutlineView).

It would be interesting if we could create a structure which could hold combinations of these sorts of reps, but probably we would institute some kinds of rules -- most likely grouping the scan-line renderer sorts together such that they can't have sub-scenes whose reps are suited to the more generalized, offline, or interactive presentation contexts. The other way around might be alright, though.

I have an idea: every discrete scene is represented by a G3DScene. The parts of the scene are made of G3DSceneReps, including such G3DSceneRep classes as G3DSceneNodeRep, G3DSceneBSPListRep, G3DSceneDictionaryRep, G3DSceneDataRep, and G3DSceneCSGRep, with whatever rules are required for sub-scenes. We can consider a facility for adding and subtracting sub-scenes, and stripping off or adding a G3DScene wrapper in the process. I don't know if this is analogous to NSImage merging or not, but I don't see that it necessarily would be. The real value of the Scene class seems to be that it can arrange the invisible production of specialized scene reps using the master G3DSceneNodeRep. However, you won't be able to re-generate a SceneNodeRep from the stripped reps unless we can successfully preserve the structural data -- hopefully as a decoupled, parallel data set. Maybe a DictionaryRep could be coupled with BSPs and the like, removing the structural data from the data needed for rendering but keeping it around if needed to re-generate a more interactive scene rep.
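
In NSImage-ish terms, the Scene core might look something like this sketch. The names, and especially initWithNodeRep: (which I am assuming each derived rep class would implement), are inventions of mine:

#import <Foundation/Foundation.h>

@class G3DSceneNodeRep;

@protocol G3DDerivedRep
- (id)initWithNodeRep:(G3DSceneNodeRep *)aRep;  // derive from the master
@end

@interface G3DScene : NSObject
{
    G3DSceneNodeRep *masterRep;    // authoritative hierarchical rep
    NSMutableArray  *derivedReps;  // cached BSP lists, dictionaries, ...
}
- (id)repOfClass:(Class)aClass;
- (void)invalidateDerivedReps;
@end

@implementation G3DScene

- (id)init
{
    if ((self = [super init]) != nil)
        derivedReps = [[NSMutableArray alloc] init];
    return self;
}

- (id)repOfClass:(Class)aClass
{
    NSEnumerator *e = [derivedReps objectEnumerator];
    id rep;

    while ((rep = [e nextObject]) != nil)
        if ([rep isKindOfClass:aClass])
            return rep;                            // already derived
    rep = [[aClass alloc] initWithNodeRep:masterRep];
    [derivedReps addObject:rep];                   // the array retains it
    return [rep autorelease];
}

- (void)invalidateDerivedReps
{
    [derivedReps removeAllObjects];  // after the master rep changes
}

- (void)dealloc
{
    [masterRep release];
    [derivedReps release];
    [super dealloc];
}

@end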

Granted, that would be a lot of work to get working properly. Its usefulness depends on how much storage it conserves compared to just maintaining multiple different reps of the same scene.

Here's a very good side effect if we can get such a thing working: sub-scene paging. If you have a scene made of sub-scenes, perhaps in a very large-chunk octree, it could contain nothing but G3DScenes -- basically pointers to individual scenes, which would only load their scene reps when necessary. This could proceed down the hierarchy through as many layers as needed. When a sub-scene is required for rendering, the Scene is swapped out for the scene rep, which is attached to the parent node as if it had been there all the time.
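
The stub for that could be tiny. A sketch -- the unarchiver class is pure assumption, but something of that sort would have to exist:

#import <Foundation/Foundation.h>

@protocol G3DSceneRep <NSObject>  // placeholder for the sketch
@end

@interface G3DSceneUnarchiver : NSObject
+ (id <G3DSceneRep>)repWithContentsOfFile:(NSString *)aPath;  // assumed
@end

// Octree leaves hold only these stubs; the rep is loaded on first use
// and can be thrown away again under memory pressure.
@interface G3DPagedScene : NSObject
{
    NSString         *path;  // archived sub-scene on disk
    id <G3DSceneRep>  rep;   // nil while paged out
}
- (id <G3DSceneRep>)residentRep;
- (void)purge;
@end

@implementation G3DPagedScene

- (id <G3DSceneRep>)residentRep
{
    if (rep == nil)  // first use: swap the real rep in for the stub
        rep = [[G3DSceneUnarchiver repWithContentsOfFile:path] retain];
    return rep;
}

- (void)purge
{
    [rep release];
    rep = nil;       // page the sub-scene back out
}

@end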



<snip>

I know it is common to think of scenes as being a combination of static objects, which can be broken up into visible polygon lists, and dynamic objects, which require live visibility determination, but that is an annoyingly artificial distinction. Buildings fall down, bridges move, landscapes change -- nothing is really static. Is this a distinction we will be forced to maintain for the foreseeable future?

On-the-fly tessellation can be quite interesting, especially for dynamic level-of-detail rendering. But then it is computation-intensive and not suitable for every object in a scene. In general it makes sense to treat objects differently depending on their kind -- computer resources are and will always be limited!

Yes, understood. But here I was thinking more of the difference between different reps of non-tessellated geometry. If you have even a boring rectilinear building interior, my experience is that you have to carve it up in some way totally unrelated to the meaningful boundaries of walls and rooms and other things that a person would identify as distinct. Speed has always mandated throwing away these semantic relationships in favour of purely spatial relationships -- what can be seen from each spatial region. But once you chop things up into lists of visible polygons, you lose the ability to alter the geometry in a meaningful way. So, for example, removing an exterior wall is difficult both because you cannot tell which polygons make up that wall, and because it will wreak havoc with the visibility calculations (you won't be able to see outside even if the wall is removed). Or so I understand it.
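
The decoupling idea from earlier suggests at least a partial fix: tag every polygon with the semantic object it came from when the scene gets chopped up. A bare-bones sketch, with invented types:

typedef struct {
    float x, y, z;
} G3DVertex;

typedef struct {
    G3DVertex vertices[3];  // triangles, for simplicity
    unsigned  objectID;     // index back into the assembly hierarchy
} G3DTaggedPolygon;

// Drop every polygon belonging to one semantic object from a leaf's
// visibility list; patching the visibility structure itself is the
// hard part, which this does nothing about.
static unsigned
removeObject(G3DTaggedPolygon *list, unsigned count, unsigned objectID)
{
    unsigned i, kept = 0;

    for (i = 0; i < count; i++)
        if (list[i].objectID != objectID)
            list[kept++] = list[i];
    return kept;  // new polygon count
}

That at least answers "which polygons make up that wall"; the stale visibility data is another story.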

I admit that removing exterior walls is uncommon, but in games you might like to blow things up, and in architecture you might want to replace a wall with a window, or remove the wall between two rooms. Such things never happen in BSP-based games like Quake et al., no matter how many rockets you fire. Red Faction uses geo-morphing or something similar to let you blow openings between different enclosed spaces -- I did read a short description of how they calculate the holes, but I can't remember anything about how they fit that into their visibility calculations. That game ran well enough on my G4-400 except for the texture memory limitations of my GeForce2.

--
Brent Gulanowski                                address@hidden

http://inkubator.idevgames.com/
Working together to make great software.




