Ooh, how do you do that? It was just vaguely occurring to me that it ought to be possible in principle to generate a half-decent 3D description of a face starting from two photos taken from slightly different angles à la binocular vision, but the image processing struck me as scary. Is this something that's already conveniently available in some software package, then?
It's already been done for transmitted rays, because our research group do it. Reflected rays are a bit more complicated but still possible, so thinking about it now, I don't think I could just drop this into our existing code the way I initially assumed. The thing to google for, I reckon, is epipolar geometry; from there you'd write some sort of iterative routine that converges on the 3D geometry. I could probably do it in a couple of months in IDL if I wanted to. You'd probably need more than two pictures, because you need depth information for features that are visible in one image but not in another, and the more pictures you have, the smaller the error in the reconstruction.
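For what it's worth, the basic two-view version of this is fairly accessible these days. Here's a rough sketch in Python with OpenCV rather than IDL (the geometry is the same); the image filenames and the camera matrix K are placeholders you'd replace with your own photos and calibration:

    import numpy as np
    import cv2

    # Placeholder inputs: two views of the same face and a guessed
    # camera matrix. Replace K with your real calibration.
    img1 = cv2.imread("face_left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("face_right.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Match features between the two views (Lowe ratio test).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # The essential matrix encodes the epipolar geometry between the
    # two views; RANSAC rejects mismatched features.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the matched points into 3D (up to an overall scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T  # dehomogenise: N x 3 point cloud

The essential matrix is the epipolar-geometry object at the heart of it; the pose recovery and triangulation after it stand in for the iterative routine, and with more than two views you'd refine the same matches with a bundle adjustment instead.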
You can probably do the reconstruction anyway...