I've recently gotten into photography and have been having a lot of fun making 360° panoramas with Hugin. After making a bunch of them I decided to look into other (non-free) software and gave Autopano Giga a try. I noticed that the automatic detection in Autopano seems to work better than Hugin's, but I don't really understand why. From what I've read they both use the same family of algorithms to detect control points, so I wonder if I'm doing something wrong. Hugin seems slower at detection as well, because it tries to match all the pictures with each other, even when most of them have no overlap. I often find Hugin creating bad control points, resulting in images being stretched far beyond where they're supposed to go. Also, even in cases where the panorama looks good, I've never been able to get Hugin to tell me I have anything better than a "bad fit" (usually it's "very bad fit"), while Autopano always says my fit has a great RMS score. I use a panoramic head, so parallax shouldn't be a big issue.

I really like FOSS, so I'd much prefer to use Hugin as much as I can. I keep finding features I didn't know about in Hugin, so maybe I just don't have an intelligent workflow. Have any of you encountered the issues I mention? If people here get Hugin to make good fits (by Hugin's own standards), I'd love to hear how they do it.

---

Stitching with Hugin or PTGui (the industry standard) requires that you understand the process fairly well. a, b and c are distortion values for the center, middle and edge of the lens; d and e are horizontal and vertical offsets of the physical lens distortion over the sensor or captured image. Field of view is how much angle of view was captured by the lens. What usually happens is that one optimizes the field of view and it strays too far from reality. In that case, let it be set from the EXIF data, which has the lens and camera model; this is used to find the crop factor, and then the field of view can be calculated roughly. To calculate it exactly, you have to optimize it with images that cover 360 degrees.

The control points across the images are used to calculate the distortion field which best fits them. The distortion field is created by the distortion values, field of view and sensor offset. If there are control points which are plain wrong, then these will be factored in; if there are enough wrong control points, the best-fit distortion field may look like a jacked-up bow tie or worse.

A good procedure is to set your field of view automatically and at first optimize only a, b and c (unless it's a 360° image, in which case optimizing the field of view is fine too). Then check the control points and remove the outliers. Then you can go back, optimize everything and recheck for outliers. During this process, manually create vertical control points on vertical edges to straighten the panorama; walls, doors and hanging strings are perfect features for this.

---

I agree completely. Before getting Autopano I was fearful of what even more time sitting in front of the computer would do to my overweight situation, especially with huge 100-image projects. I've now put it all back on, sitting in front of the computer, processing images and chatting to people on LL.

I recently tried stitching 103 images taken with my 20D in Autopano: 3 rows of 33–35 images. Because the process was automatic, it didn't worry me too much when things didn't go quite to plan. The first obstacle was a lack of space on the hard drive I had allocated for the temporary scratch folder. When I attempted to render the image (after it took an hour or so for the preview to be created), I got a message to the effect that my hard drive had only 24 GB of free space when 44 GB was required. I had to close the program after changing the path of the temporary folder, and start again. The second obstacle occurred about 4 hours into the rendering.
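As an aside on what a, b and c actually are: in the PanoTools lens model that Hugin and PTGui share, they are coefficients of a radial correction polynomial, with a fourth coefficient conventionally fixed at d = 1 − a − b − c so that the scale at the reference radius is unchanged. A minimal sketch (the function name and the exact radius normalization are illustrative assumptions, not Hugin's API):

```python
def radial_correct(r, a, b, c):
    """PanoTools-style radial distortion polynomial.

    r is the radius normalized so that r = 1.0 falls at the model's
    reference radius. d = 1 - a - b - c by convention, which keeps
    radial_correct(1.0, a, b, c) == 1.0 for any a, b, c.
    """
    d = 1.0 - a - b - c
    return r * (a * r**3 + b * r**2 + c * r + d)

# With a = b = c = 0 the mapping is the identity (no distortion):
print(radial_correct(0.5, 0.0, 0.0, 0.0))  # → 0.5
```

The three coefficients give the optimizer enough freedom to bend the center, mid-field and edge of the frame independently, which is why a handful of bad control points can pull the fitted field into the "bow tie" shapes mentioned above.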
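The "check the control points and remove the outliers" step can be automated in the spirit of Hugin's cpclean tool, which prunes points whose error exceeds the mean by some multiple of the standard deviation. A hedged sketch of that idea (`prune_outliers` and the error values are hypothetical, not Hugin's actual implementation):

```python
from statistics import mean, stdev

def prune_outliers(errors, k=2.0):
    """Return indices of control points to keep.

    Drops any point whose reprojection error exceeds
    mean + k * stddev of all errors, similar in spirit
    to what Hugin's cpclean does.
    """
    m, s = mean(errors), stdev(errors)
    return [i for i, e in enumerate(errors) if e <= m + k * s]

# One wildly wrong control point among mostly good ones (pixels):
errs = [1.2, 0.8, 1.0, 0.9, 1.1, 45.0]
print(prune_outliers(errs))  # → [0, 1, 2, 3, 4]: the 45 px point is dropped
```

After pruning, re-optimizing and pruning again usually converges quickly, because each pass shrinks the standard deviation and tightens the threshold around the genuinely good matches.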