
Re: [Paparazzi-devel] georeferencing video stream?


From: Austin Jensen
Subject: Re: [Paparazzi-devel] georeferencing video stream?
Date: Thu, 1 Oct 2009 13:18:09 -0600

Todd,

We have analyzed the errors in the altitude, yaw and position estimates (assuming roll and pitch were good). We found that after correcting for yaw error, the orthorectification error decreased to 5-20m. After also correcting for position, the error decreased to below 5m. The altitude correction made no significant improvement. We haven't looked at roll and pitch yet since that is a more difficult problem, but we are working on it. Because we are using an IMU, we expect that the biggest contributor to this error will be GPS. If there are problems with roll and pitch, they will probably come from a sensor bias or a misalignment between the camera and the IMU. We will see, though.
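For intuition, here is a rough back-of-the-envelope sketch in Python (the error magnitudes below are illustrative assumptions, not measured values) of how yaw, position and attitude errors each scale into ground-projection error for a nadir-pointing camera:

    import math

    def ground_error(alt_m, yaw_err_rad, pos_err_m, tilt_err_rad, offset_m):
        # alt_m        -- altitude above ground (m)
        # yaw_err_rad  -- heading error (rad)
        # pos_err_m    -- horizontal GPS position error (m)
        # tilt_err_rad -- combined roll/pitch error (rad)
        # offset_m     -- horizontal distance from nadir to the target pixel (m)
        yaw_part = offset_m * yaw_err_rad            # arc length, r * d_psi
        tilt_part = alt_m * math.tan(tilt_err_rad)   # lever arm of attitude error
        return yaw_part + pos_err_m + tilt_part      # rough worst-case sum

    # e.g. 5 deg yaw error, 3 m GPS error, 1 deg tilt error, 100 m altitude,
    # target 50 m from nadir -> roughly 9 m of ground error
    print(ground_error(100.0, math.radians(5), 3.0, math.radians(1), 50.0))

The yaw term grows with distance from nadir while the tilt term grows with altitude, which fits yaw and position corrections giving the largest gains at these flight heights.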

Austin

------------------------------------------------------------------
Austin Jensen, Research Engineer
Utah Water Research Laboratory (UWRL)
Utah State University, 8200 Old Main Hill
Logan, UT, 84322-8200, USA
E: address@hidden
T: (435)797-3315


On Thu, Oct 1, 2009 at 10:10 AM, Todd Sandercock <address@hidden> wrote:
Hi Austin and all

I am guessing that most of the error came from roll and pitch in your study. Do you think this could be rectified by a stabilised camera on board that ideally always faces directly down?
Of course there is always the error in yaw but that can be solved in a few different ways

Todd


From: Austin Jensen <address@hidden>
Sent: Thursday, 1 October, 2009 3:33:22 AM
Subject: Re: [Paparazzi-devel] georeferencing video stream?

Chris,

Sounds like you're on the right track. The biggest problem you will face will be the accuracy of your orthorectification given the aircraft's sensors (especially the IR sensors). We did a study on it and found that the orthorectification error can vary from 5 to 40m depending on your altitude, and that was using an IMU. Here is an example:

http://www.engr.usu.edu/wiki/index.php/Image:OSAMBeforeMan.PNG

We are working on ways to improve this by calibrating the aircraft sensors in flight.
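As a rough illustration of the altitude dependence (the 3-degree combined attitude error below is an assumption chosen only to make the numbers concrete), the ground error from a fixed attitude error grows roughly linearly with altitude:

    import math

    tilt_err = math.radians(3)  # assumed combined roll/pitch error, illustrative only
    for alt_m in (100, 250, 500, 750):
        print(alt_m, "m altitude ->", round(alt_m * math.tan(tilt_err), 1), "m ground error")

With numbers in that range, errors from about 5 m at low altitude to about 40 m at high altitude come out naturally.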

I suspect the method used to georeference the images in the presentation you mentioned might use the aircraft sensors as an aid, but it's probably based more on features in the images. I know of one open source project that stitches images based on features:

http://jimatis.sourceforge.net/

A proprietary software package called EnsoMOSAIC does a very good job of georeferencing the images using position and orientation.

http://www.ensomosaic.com/

Austin

------------------------------------------------------------------
Austin Jensen, Research Engineer
Utah Water Research Laboratory (UWRL)
Utah State University, 8200 Old Main Hill
Logan, UT, 84322-8200, USA
E: address@hidden
T: (435)797-3315


On Wed, Sep 30, 2009 at 1:14 AM, Todd Sandercock <address@hidden> wrote:
I am working on the same ideas as you.
Paparazzi is extremely suitable for this because it is soooooo easy to get your hands on any data that you want by using the Ivy bus.
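For example, here is a minimal Python sketch of subscribing to telemetry on the Ivy bus (assuming the ivy-python package; the ATTITUDE message name and its field order are assumptions, so check your messages.xml):

    from ivy.std_api import IvyInit, IvyStart, IvyBindMsg, IvyMainLoop

    def on_attitude(agent, ac_id, phi, psi, theta):
        # Regex capture groups arrive as strings.
        print("aircraft %s: roll=%s yaw=%s pitch=%s" % (ac_id, phi, psi, theta))

    IvyInit("telemetry_listener", "telemetry_listener READY")
    IvyStart("")  # default bus; Paparazzi normally uses 127.255.255.255:2010
    IvyBindMsg(on_attitude, r"^(\S+) ATTITUDE (\S+) (\S+) (\S+)")
    IvyMainLoop()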
Skipping the image processing part and using location from Paparazzi seems to be a more feasible initial solution, though. That is, if position on the ground is something important to you...
I have found it extremely difficult to find a GIS client suitable for the job, but the image processing area of the OSAM wiki documents one successful implementation.

Todd


From: Chris Gough <address@hidden>
To: address@hidden
Sent: Wednesday, 30 September, 2009 9:11:57 AM
Subject: [Paparazzi-devel] georeferencing video stream?

Sorry if this is a bit off topic, but...

The Thales SpyArrow presentation linked from the wiki home page
(http://newton.ee.auth.gr/aerial_space/docs/CS_4.pdf) refers to a
system where the live video stream is georeferenced, and shows what
appears to be a stitched image. I'm interested in how to do this, and
was hoping somebody might give me some hints.

I had imagined post-processing (on the ground) a combination of video and telemetry:
1. Convert the video stream into a "timestamped sequence of images" as the frames arrive, using the native features of a video capture card and operating system.
2. Interpolate the telemetry stream to estimate {lat, long, altitude, pitch, roll, yaw} at the exact time of each image (see the sketch after this list).
3. Orthorectify each image and load them into a temporary GIS raster layer in a database.
4. Use an elevation model and some geometry to infer the location of some points in the stretched image.
[4b. Use fancy pattern recognition and/or Oompa-Loompas to identify points...]
5. Process the images (stitch, filter, etc.) and post fixed rectangular tiles to another GIS layer.
6. Combine tiles (perhaps along with other spatial data) in a GIS client to produce images/maps as required (periodically refreshed).
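
For step 2, I mean something like this (Python sketch; the array layout and field order are just assumptions for illustration, and naive linear interpolation of angles breaks near the +/-180 degree yaw wrap):

    import numpy as np

    # telemetry rows: [t, lat, lon, alt, pitch, roll, yaw], sorted by time t
    def pose_at(image_times, telemetry):
        t = telemetry[:, 0]
        cols = [np.interp(image_times, t, telemetry[:, i]) for i in range(1, 7)]
        return np.stack(cols, axis=1)  # one interpolated pose row per image

    # usage: poses = pose_at(np.array([12.04, 12.54]), telemetry_array)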

Am I on the right track? Is there a working open source solution already?

Chris Gough



_______________________________________________
Paparazzi-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/paparazzi-devel


