Using Lidar to Add Autofocus to a Manual Focus Lens (sonyaddict.com)
87 points by giuliomagnifico on Oct 22, 2020 | 15 comments


To be clear, this is auto-follow-focus for video, not autofocus for stills. The post title matches the article's, but you really wouldn't bother with this for stills. Follow focus is useful, even essential, for video production. On large shoots there is a person dedicated to twisting a large knob, usually covered with white tape so the operator can write marks on it for the various focus depths the director wants to hit at different points in the scene. Sometimes these follow focus systems are semi-automated to allow smooth, controlled transitions from one pre-rehearsed focus point to another.

The particular lens this is being demoed on is a 50mm f0.95, which will have a very shallow depth of field. Any minor focus error will be very apparent.


Of note: autofocusing adapters for stills already exist for the Sony E-mount system. However, because the AF system's API is closed, it is difficult to supply the optical characteristics necessary for in-camera video AF.


This is exciting and I hope it gets extended to be usable for stills as well.

Fast manual-focus lenses take quite a bit of practice to get comfortable with, and this would make some of them much more palatable for casual use.


What would lidar focusing give still photographers that current hybrid phase-detect + contrast-detect autofocus systems can't? Better low light performance?


Aren’t there, uhm, lenses that autofocus? I get that there are more choices for third-party manual focus lenses, and that they’re generally cheaper, but the amount of stuff they’re buying / having to haul around to make this solution work seems to be pretty intense.

I guess this is most interesting for people who already use a gimbal (a tool that stabilises the camera for video), because until now you couldn’t use manual focus lenses, or at least not change focus during the shot, since operating the lens by hand interferes with the gimbal’s operation.


Most lenses that autofocus are all about reaching that focus point as fast as possible in order to take a still photo. When you are focusing for video, you're often "pulling focus" (i.e. adjusting the depth where you're focusing) very slowly, and sometimes back and forth between two or more set distances [1]. No autofocus lens is going to know what the director or cinematographer had in mind for the shot. At best you can have AI which will follow a moving object, or look for eyes, but that doesn't solve the general case at all.

[1] https://en.wikipedia.org/wiki/Focus_puller


Right, I know that, I should maybe have been explicit but I was replying to a commenter who suggested this setup for stills.


Those lights make it almost look like a SIGGRAPH presentation video.

The big gotcha is probably that the LIDAR does not follow the object tracking but instead focuses on whatever is in the center of the FOV. The object tracking only keeps the tracked object centered, so the subject can fall out of focus when it walks around while getting closer to the camera.


Put the LIDAR pack on an independent motorized 2-axis mount too.


At which point the LIDAR could be raster-scanned to build a monochrome image of the scene.

Physically move the platform at the same time and you can start applying synthetic-aperture techniques used in radar processing.


I attempted to do this with an Arduino module that measured distance with ultrasound and controlled a stepper motor (I didn't have a servo motor, which would have worked better). As a proof of concept it worked: if focus was set to a background object and then someone stepped into the direct line of the ultrasound, the stepper would turn in the correct direction to bring them into focus. It was tricky telling the Arduino where infinity was on the lens and where the near-field focus limit was, and then keeping track of how many steps had been moved. It was also complicated by the fact that a focus ring's travel is not linear with distance on the way to infinity.

All in all, it was a fun project, and it ultimately helped me understand why CineTape[0] systems are so expensive.

[0] https://cinemaelec.com/products/cinetape_measure
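The nonlinear relationship between subject distance and focus-ring position can be tamed with a calibration table and interpolation. A minimal sketch in Python (the calibration values and the `distance_to_steps` name are hypothetical illustrations, not from the original project):

```python
import bisect

# Hypothetical calibration: measured subject distance (m) -> stepper position
# (steps from the near-focus hard stop). The spacing is deliberately
# nonlinear: most of the ring's travel happens at close distances,
# and beyond ~50 m the lens is effectively at infinity.
CALIBRATION = [
    (0.5, 0),       # near focus limit
    (1.0, 900),
    (2.0, 1500),
    (5.0, 1900),
    (10.0, 2050),
    (50.0, 2100),   # infinity hard stop
]

def distance_to_steps(distance_m: float) -> int:
    """Piecewise-linear interpolation from distance to stepper position,
    clamped to the calibrated range."""
    dists = [d for d, _ in CALIBRATION]
    if distance_m <= dists[0]:
        return CALIBRATION[0][1]
    if distance_m >= dists[-1]:
        return CALIBRATION[-1][1]
    i = bisect.bisect_right(dists, distance_m)
    (d0, s0), (d1, s1) = CALIBRATION[i - 1], CALIBRATION[i]
    t = (distance_m - d0) / (d1 - d0)
    return round(s0 + t * (s1 - s0))
```

With a table like this you only measure a handful of distances on the lens barrel and let interpolation fill in the rest, rather than trying to model the ring's travel analytically.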


Could you have someone walk towards the camera with a focus test card and form a map between the sensor and the focus data from the lens itself (FFT, etc)?

It would be something analogous to the 'learned indexes' paper.

You could then provide any tweening function to give you that focus feeling you need.

http://www.vldb.org/pvldb/vol13/p1162-ferragina.pdf

https://arxiv.org/abs/1712.01208
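Once you have calibrated focus marks, the tweening function mentioned above is just easing between two stepper positions over time. A minimal sketch, assuming a smoothstep ease (function names are illustrative):

```python
def smoothstep(t: float) -> float:
    """Classic ease-in/ease-out curve, clamped to [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def focus_tween(start_steps: float, end_steps: float,
                duration_s: float, elapsed_s: float) -> float:
    """Eased focus position between two rehearsed marks at a given time."""
    t = smoothstep(elapsed_s / duration_s)
    return start_steps + (end_steps - start_steps) * t
```

Driving the motor from `focus_tween` each frame gives the slow, deliberate "pull" a focus puller does by hand, rather than the fastest-possible snap an AF lens performs.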


The problem was that the stepper motor was too slow to do anything worth messing with like this, so I ultimately took it all apart and reused the components in other projects. I build various rigs for timelapse motion control, where steppers are perfect, but live-action motion really needs the speed a servo can provide. I did take what I learned and built a motion-control focus puller for timelapse using the stepper from this test. That rig doesn't need to adjust focus arbitrarily; it just does a "dumb" move of the focus ring from point A to point B in a predetermined amount of time. Much easier to program.


That sounds really great. I'd like to do that this winter with Rust on Arduino or Rust on a soft-core RISC-V. I have hacked up some PID loops in the past, but nothing using _actual_ math.
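For reference, the PID loop behind a project like this is short; here is a textbook discrete-time sketch in Python (not Rust, and purely illustrative):

```python
class PID:
    """Minimal discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float = 0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None  # no derivative term on the first update

    def update(self, measurement: float, dt: float) -> float:
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a follow-focus context the measurement would be the current ring position (or subject distance) and the output a motor velocity command; real implementations also clamp the integral term to avoid windup.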


Somewhat tangentially related: Apple said in their keynote that the iPhone Pro's LIDAR will be used for autofocus in low light. That's a problem I have experienced. In low light I have found it sometimes easier to take a video with the flash permanently on to get better focus, and then cut the picture out of the video.




