Monday, March 19, 2012

Canon EOS C300 for 3D

The C300 in a side-by-side 3D configuration would be limited to full shots and establishing shots with subjects 10+ feet from wide lenses. In a pinch you might squeeze in a medium shot by eliminating the background, but that's pushing it. A rig built from these cameras would have a 133mm interaxial distance. That's too large for medium work and out of the question for close-ups. If you plan to purchase this camera for 3D work, factor a proven mirror rig into your costs.
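To see where the "10+ feet" figure comes from, a common stereography rule of thumb (my addition, not from the post) is to keep the interaxial to roughly 1/30 of the nearest-subject distance. Solving it backwards for a fixed 133mm interaxial gives a rough minimum subject distance:

```python
# Sketch using the common "1/30 rule" of stereography (an assumption,
# not the author's stated method): interaxial <= nearest distance / 30.
INTERAXIAL_MM = 133.0  # side-by-side C300 rig, per the post

# Invert the rule to find the closest comfortable subject distance.
min_subject_distance_m = INTERAXIAL_MM * 30 / 1000.0
min_subject_distance_ft = min_subject_distance_m * 3.281
print(f"Nearest subject: ~{min_subject_distance_m:.1f} m "
      f"(~{min_subject_distance_ft:.0f} ft)")  # ~4.0 m, ~13 ft
```

That lands in the same ballpark as the post's claim: wide framings only, with subjects well back from the rig.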

Monday, March 28, 2011

First Light thumbnail bug

Just wondering if anyone has any ideas why some thumbnails are not displaying properly in the latest version of First Light?

Depth Range (aka Depth Budget)

Depth Range is a subjective method of judging 3D, not a mathematical one.

Lenny Lipton's term "Depth Range," which sounds like what Bernard Mendiburu calls "Depth Budget," is cute but tells us nothing about apparent distance. I wondered why Lipton chose such a subjective way to measure a 3D scene, given that he has access to a wide range of math formulas. It has led a lot of people to throw away the math altogether. I found a comment on his web site asking him to include a calculation for apparent distance, and his response was: "I have flipped more to the gut feeling and subjective judgement side of the seesaw and while math is useful the fact that digital stereo allows for observation and interaction trumps the calculator." - Lenny Lipton, July 4, 2010 at 11:40 am
So Lenny Lipton believes that subjective judgment trumps the math. Interesting.

Because I agree with that in some scenarios, I am going to go one step further and say: subjective judgment trumps math when the viewer is aware of his surroundings (including the screen), and math trumps subjective judgment when the viewer is unaware of his surroundings (including the screen). Let me expand on that a little. Being aware of the screen would be watching a TV in your living room with the house lights on. But let's be honest: a child sitting up close to the screen could be paying no attention to the surroundings at all, simply because they are so infatuated with the content, and in that case I think the math takes over again. In a theater, your peripheral vision can be saturated, the lights are dim, and the people in front of you are hopefully not obstructing your view. The perfect example is a real IMAX screen: there, your judgment is thrown right out the window, even when watching a 2D movie.

The mathematical equivalent of Depth Range (aka Depth Budget) is "Apparent Distance." So if you want to back up your Depth Budget with math, you have to calculate the Apparent Distance in your scene. Let's talk about that in another blog.

3D Terminology - Parallax Budget

I've seen a lot of different terms in the 3D community. I would like to address some of these terms and clarify what they really mean. It's not OK to use a term to express an idea unrelated to that term. For example, the term "Parallax Budget" is often used to describe the "Apparent Distance" of an object. Parallax Budget alone does not contain any of the values you need to calculate Apparent Distance, so the two terms should not be used interchangeably. Also, Parallax Budget is not the same as Depth Budget!

Parallax Budget is the range of parallax between the object with the most negative parallax and the object with the most positive parallax. It is often expressed as a percentage of screen width, because that value scales proportionally across screen sizes.

For example:
A range from (-5%) to (+5%) is a parallax budget of 10%
A range from (0%) to (+8%) is a parallax budget of 8%
A range from (-2%) to (0%) is a parallax budget of 2%
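The examples above reduce to a one-line subtraction. A minimal sketch (function name is mine, for illustration):

```python
def parallax_budget(most_negative_pct, most_positive_pct):
    """Range between the most negative and most positive parallax,
    both expressed as a percentage of screen width."""
    return most_positive_pct - most_negative_pct

print(parallax_budget(-5, 5))  # 10
print(parallax_budget(0, 8))   # 8
print(parallax_budget(-2, 0))  # 2
```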

With this value alone, the only point whose exact location you can know is the object that falls on the convergence point (if there is one): it sits at the screen plane, a tape-measure distance from your eyes. Not very useful.

It is critical to know the maximum positive parallax. Applied to your target screen size, it lets you calculate how much your audience's eyes will have to diverge to fuse your images. Avoid this scenario at all costs: excessive divergence is a source of physical discomfort.

Thursday, February 3, 2011

In response to: Why 3D doesn't work and never will. Case closed.

Original Article:

"Somehow the glasses "gather in" the image -- even on a huge Imax screen -- and make it seem half the scope of the same image when looked at without the glasses. "
- Walter Murch

In most circles of 3D filmmaking it is understood that miniaturization is easy to increase or decrease in camera. Some people even know how to use this to their advantage. Very few people understand how miniaturization works, or the scope of the anomaly. Because of this, unwanted miniaturization makes its way into 3D movies, distorting the observer's experience. The fact that something looks smaller than life on an IMAX screen is very disturbing, considering the filmmaker had to choose that apparent size during production. Walter Murch should have taken this opportunity to explain how to compensate for this condition in camera: decreasing camera interaxial increases apparent size, and the correct apparent size is determined by the ratio of viewer interocular to camera interaxial. Because so few people understand miniaturization, it is often overlooked in popular 3D educational seminars, and sometimes this calculation takes a back seat to insignificant stereoscopic anomalies (like edge violations). I have even heard some say that choosing interaxial spacing is a creative decision and there is no math involved. Unfortunately, that means your teacher is just guessing.

Even Academy Award-winning filmmakers need to learn the basics of 3D and how it looks on the big screen, not write off things like miniaturization that they do not understand. Improper image size is completely within the scope of the filmmaker's creative decisions. Script supervisors need to take detailed notes about camera interaxial and lens focal length. Editors need to understand these measurements so that they can choose better-quality 3D shots and take into consideration the intended size of objects when editing. Two clips may have the same screen size but completely different apparent sizes to the observer. Today that calculation falls into the hands of untrained "rig techs" who are given the critical job of pulling interaxial distance.

The 3D educational community needs to focus on teaching seasoned filmmakers how to compensate for issues like the ones in this article, in order to bring 3D film quality to a point where it is accepted by the masses.

Let me back up what I am saying with a quote from Hugh Murray's 1995 document...

IMAX® 3D Film Production
An Overview of 3D Photography
Presented at The Workshop - ISTC 1995.

Apparent Size
"In 2D photography, longer focal length lenses make the images larger. In geometric terms this is not true in 3D. The apparent size of subjects compared to their "real" size is determined by the ratio of viewer interocular to camera interocular and nothing else. Larger camera interoculars make things appear smaller and vice versa."
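Murray's ratio can be written out directly. A sketch (function name mine; 65mm viewer interocular is an assumed average):

```python
def apparent_size_factor(viewer_interocular_mm, camera_interaxial_mm):
    """Per the Murray quote: apparent size relative to real size is the
    ratio of viewer interocular to camera interaxial, and nothing else."""
    return viewer_interocular_mm / camera_interaxial_mm

print(apparent_size_factor(65, 65))    # 1.0 -> life size
print(apparent_size_factor(65, 130))   # 0.5 -> half size (miniaturization)
print(apparent_size_factor(65, 32.5))  # 2.0 -> double size (gigantism)
```

Note how the factor moves opposite to the interaxial: doubling the camera spacing halves the apparent size, which is exactly the "gathered in" shrinkage Murch describes.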

Friday, September 17, 2010

Sharing Neo3D MAC databases

I just watched this video on sharing Neo3D databases on Vimeo.

"CineForm FirstLight color correction through Dropbox" -

When I went to implement the shared database I realized this was a Windows-only solution, as the script mentioned would not run on my Mac. I figured I would just manually change the location of the LUTs folder in the CineForm prefs panel, point it at my Dropbox where I would store the LUTs folder, and be done. It seems there is a(nother) bug in Neo3D Mac which does not update the location of the LUTs folder when you tell it to.

A workaround for this is to move the LUTs folder to your Dropbox, then create a symbolic link to that folder and place the symbolic link in /Library/Application\ Support/CineForm/.

The way to create a symbolic link is this.

First move the LUTs folder to your Dropbox folder. In my case it's in ~/Dropbox/

Open Terminal.
Type This:

ln -s ~/Dropbox/LUTs/ /Library/Application\ Support/CineForm/LUTs

Now you can share ~/Dropbox/LUTs with your friends and access it on your computer locally through the symbolic link.

On your friend's computer, simply have them install Dropbox and share the ~/Dropbox/LUTs folder with them. They will need to delete the original LUTs folder at /Library/Application\ Support/CineForm/LUTs

Be clear with them: tell them to back up their old LUTs folder, just in case they need to extract old databases from it.

Now have them create a symbolic link locally to access the Dropbox LUTs folder. Same command as before.

Open Terminal.
Type This:

ln -s ~/Dropbox/LUTs/ /Library/Application\ Support/CineForm/LUTs


Tuesday, June 16, 2009

SI2K - Black Calibration

A quick note: black calibration on the SI2K camera can put it into a funny state if performed while one or both lens caps are off. You may end up with a black image with bright edges, similar to the "Glowing Edges" effect in Photoshop.

To correct this, put the lens caps back on and redo your black calibration.

The example image is borrowed from