It's already been done for transmitted rays, because our research group does it. Reflected rays are a bit more complicated but still possible; having thought about it, though, I don't think I could reconstruct it just by plugging it into our code like I initially thought. The thing to google for, I reckon, is epipolar geometry, and then write some sort of convergent iterative routine based on it. I could probably do it in a couple of months in IDL if I wanted to. You'd probably need more than two pictures, because you need depth perception for features that are present in one but not the other. The more pictures you have, the less error there is in the reconstruction.
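For what it's worth, the core of a routine like that is triangulation: once epipolar geometry gives you matched points across views, you intersect the back-projected rays to recover depth. Here's a minimal sketch in Python/NumPy of the standard direct linear transform (DLT) triangulation, assuming calibrated pinhole cameras; the camera matrices and the test point are made up purely for illustration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two views.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the
    corresponding 2D image points. Each view contributes two
    linear constraints on the homogeneous 3D point X, and the
    SVD null space of the stacked system gives the solution.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # null vector = homogeneous 3D point
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical cameras: one at the origin, one shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then reconstruct it
X_true = np.array([0.5, 0.3, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]

X_rec = triangulate_point(P1, P2, x1, x2)
```

With more than two pictures you'd just stack two extra rows into `A` per view, which is exactly why extra views beat the noise down: the least-squares null vector averages over all the ray intersections.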