Wigner Distribution Function and Integral Imaging MIT 2.71 Optics, Spring 2009
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. MICHAEL: We're going to talk about the Wigner distribution function and integral imaging. AUDIENCE: [INAUDIBLE]. MICHAEL: Yeah. So I'm going to start with a description of what a conventional camera does. A conventional camera produces one view of something. It does a pretty good job of imaging within a small range of distances from the camera, depending on how you focus it. And what we can do is create optical systems which allow an image sensor to capture multiple views. So basically, let's say you have an array of tiny cameras that share one image sensor in the back. And one way of doing this is to use a pinhole or microlens array. So think of the pinhole cameras of yore, only now you have an array of pinholes, and you get multiple pictures of the same scene. So what a conventional camera does is-- this is a plot of x versus theta for what the sensor picks up. Each pixel picks up the incident rays over the entire range of angles focused onto it. So if you look at the blue lines, it picks up all the rays in that entire angular range-- or, sorry, in spatial frequency terms, that spatial frequency range-- and integrates them into the intensity at that pixel. And so you lose all angular resolution, and you have no sense of how far away something is. If we did know what that range was, we could say, well, if it's a smaller range, the object is further away; if it's a larger range, it's closer to the camera. So we have these optical systems which trade some spatial resolution for angular resolution, and these two extra dimensions of recorded information allow us to reconstruct the original trajectories of the rays, to an extent. And things you can do with this are digital refocusing, which means you take a picture once and can then focus on different parts, different objects within the scene afterwards, purely by processing it; 3D imaging; and imaging scenes behind obstructions, such as foliage or murky water. And since you also have a system which lets you select a location in space, as well as an angle of propagation, you can use the system in reverse to project 3D displays. So there are two common setups to do this sort of thing. The first setup is the integral imaging camera. Newer setups use a microlens array, because it allows you to gather more light at once, but you can also think of it as just a pinhole array. At each of these pinholes, imagine you put a little tiny eye there, so you'll be seeing the scene from several different aspects. So in the image, the sensor picks up several pictures of the entire scene from many different perspectives. The light-field camera is a little bit different. You have a large convex lens focusing onto the microlens array and a sensor plane shortly thereafter. Each of the tiny pictures in this case is a picture of a small part of the original scene, but each pixel within that picture is a different view of that small part of the scene. If this is kind of confusing, it will probably make more sense later on.
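The angular integration Michael describes can be illustrated with a small numerical sketch. The toy light field L(x, theta) below, the Gaussian scene, and the lenslet grouping are all illustrative values rather than parameters of any real system: a conventional camera sums each pixel over theta, while a pinhole or microlens array keeps coarse angular samples at the cost of spatial resolution.

```python
import numpy as np

# Toy 2D light field L(x, theta): the intensity of the ray through position x
# at angle theta.  All sizes and the Gaussian "scene" below are illustrative.
nx, ntheta = 64, 16
x = np.linspace(-1.0, 1.0, nx)
theta = np.linspace(-0.1, 0.1, ntheta)
L = np.exp(-(x[:, None] - 0.3) ** 2 / 0.05) * np.exp(-theta[None, :] ** 2 / 0.002)

# Conventional camera: every pixel integrates over the whole angular range,
# so the angular structure (and with it the depth cue) is lost.
conventional_image = L.sum(axis=1)                       # shape (64,)

# Pinhole / microlens array: group spatial samples under lenslets and keep
# the angular axis -- spatial resolution is traded for angular resolution.
lenslet_size = 4
light_field_samples = L.reshape(nx // lenslet_size, lenslet_size, ntheta).sum(axis=1)
print(conventional_image.shape, light_field_samples.shape)  # (64,) (16, 16)
```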
And with that, these two methods of splitting up the scene-- and getting some sense of angular resolution by separating how rays with different angles propagate to the sensor-- essentially do what a Wigner distribution does. They're physical systems which perform a Wigner transform, which we'll talk about next. Michelle. MICHELLE: Thank you, Michael. Hi, everyone. Today, I'll be talking about two main concepts, the Wigner distribution function and the light field. The Wigner distribution function is a function that describes a signal in space x and spatial frequency u at the same time. The equation there shows the Wigner distribution function. This function gives the local frequency spectrum of the signal. Compare this to a Fourier transform, which gives the global frequency spectrum; a Wigner distribution gives a localized frequency spectrum. And why do we want a Wigner distribution function? It comes in useful because sometimes it gives us extra information compared to the normal Fourier representation. To illustrate this, I'll talk about signals where the frequency changes with position x. An example of such a signal is a chirp function. You can see a picture of a chirp function here, the last picture. Its frequency changes linearly with position x. If you take the Fourier transform of the Wigner-- of the chirp function, I'm sorry-- it is a symmetric function which doesn't really give you much information about the signal. Instead, if you take the Wigner distribution, what you get is the picture on the right: a straight line which shows you the frequency increasing with position x. Next slide, please. I was talking about the Wigner distribution function, which was derived from Fourier optics. Related to this is the light field, which you can think of in terms of ray optics instead. The light field is a representation of the light flowing along all rays in free space. A ray can be parameterized in multiple ways. For example, it can be parameterized by its coordinates on two planes. And when one of these planes is at infinity, you can parameterize the ray by a point and an angle. In wave optics, on the other hand, you describe a wave in terms of its spatial frequency. And if you remember, spatial frequency u can also be thought of as angle theta over lambda. So this is how wave optics relates to ray optics: in ray optics you think in terms of angle and position, and in Fourier optics you think in terms of spatial frequency and position. Next slide, please. The Wigner distribution function, which is derived in terms of Fourier optics, gives a link between Fourier optics and geometrical optics. It is described in terms of spatial frequency and position, but it closely resembles the ray concept in geometrical optics, which describes a ray in terms of angle and position. And I'll pass it back to Lei. LEI: OK. Having finished the mathematical description of the space and frequency domains, I'll first talk about sampling and shearing in the space-frequency domain-- basically, just to give you an idea of why this system can give you samples in both the space and frequency directions. You can see from this figure here, because the microlens array is [INAUDIBLE] you put inside your system, you generally know the coordinates of all the microlenses.
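The slide equation is not reproduced in the transcript; the standard definition being referred to is W(x, u) = ∫ f(x + x'/2) f*(x − x'/2) exp(−i 2π u x') dx'. Below is a rough numerical sketch of it for a chirp (the chirp rate and grid size are arbitrary illustrative choices, and the half-sample shifts of the textbook definition are approximated by whole-sample shifts, which only rescales the frequency axis): the resulting W is concentrated along a tilted straight line, with frequency rising linearly with position, whereas the magnitude of the chirp's Fourier transform alone is broad and featureless.

```python
import numpy as np

def wigner(f):
    """Pseudo Wigner distribution of a 1D complex signal (a sketch).

    Implements W(x,u) ~ sum_s f(x+s) f*(x-s) e^{-i 2 pi u s}: the half-sample
    shifts of the textbook definition are replaced by whole-sample shifts,
    which simply rescales the frequency axis by a factor of two.
    """
    n = len(f)
    s = np.arange(-n // 2, n // 2)
    W = np.zeros((n, n))
    for i in range(n):
        corr = f[(i + s) % n] * np.conj(f[(i - s) % n])   # circular indexing
        W[i, :] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))))
    return W

# A chirp: its instantaneous frequency grows linearly with x, so its Wigner
# distribution sits along a straight, tilted line in the (x, u) plane.
n = 256
x = np.linspace(-1.0, 1.0, n)
alpha = 40.0                                  # illustrative chirp rate
chirp = np.exp(1j * np.pi * alpha * x**2)
W = wigner(chirp)                             # rows: position x, columns: frequency u
```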
So in this case, you're sampling all the coordinates in the space domain. Then, in each of the subimages shown here, there are three rays, and each of the three rays goes to a different pixel on the CCD. So you can think of each of these three as giving you the angular information, or the spatial frequency information, on the sensor plane, because each goes to a different pixel. So here is a simulation of what several points from the object look like on the sensor plane after going through the system. What you notice is that, for example, this one comes from one point. Its x-coordinate is the center of this one, which is the x-component of the corresponding microlens, and each of these segments, which is one pixel, gives a sample of the angle. So with this data, what kind of things can we do? The first thing we can do is called digital refocusing. The basic idea: Michael talked before about the conventional camera. What a conventional camera does is, given a point, form an image of that point. The lens collects all the rays going to a point, and the corresponding pixel picks up all of the rays from this point and adds them together. Same idea here-- given this point, because we have all these samples, we add them together, and then we recover this point. So this is what we call the in-focus point, which is when the image is right at the microlens array plane. This is a 1D case, so when you add them together, maybe you're going to get something like this. The next thing-- one of the interesting things we can do with the system-- is this: the previous slide shows image plane number 1, so here you have the image. We want to know, if we assume there's another image plane, image plane 2, what the image would look like there. The basic idea, if you think about this problem in ray optics, is that along a given ray the intensity is conserved, which means the intensity at this point and at this point is the same. Using this geometry, you can work out the coordinate relationship, and you find it's basically a shearing in x. So this shows that, if we assume the signal we're recording has this shape in the Wigner domain-- the space and angle domain-- then propagation only gives you an x shearing, which is in the parentheses: u is fixed, but the new x [INAUDIBLE] is going to be x plus something. Then, to get the new image, you integrate out the angle at each x. Another thing we can do is what I call 3D imaging. Actually, as you can imagine, because you have all these microlenses, it looks like you have different views. So how do you get different views from the system? The basic idea is that because we have all these samples along the angle direction-- for example, along this line you pick up all these pixels-- you can form an image from that angle. The same goes for the other cases. So we actually set up a system, and this is the raw image we took. You can see these small things-- you can think of each of them as a microlens. Also, as Michael briefly mentioned, each is also a subimage of the big field lens in front of the microlens array. Either way is fine for understanding it.
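Lei's shearing argument can be turned into a small shift-and-add sketch. The refocus function and toy 1D light field below are made up for illustration (the function name, grid sizes, and depth values are assumptions, not the actual setup): propagating by a distance d shears the light field, x going to x + d·theta, so refocusing amounts to shifting each angular slice accordingly and then summing over angle, which is exactly the "integrate out the angle" step described above.

```python
import numpy as np

def refocus(light_field, thetas, dx, d):
    """Shift-and-add refocusing of a toy 2D light field L(x, theta) (a sketch).

    Free-space propagation by a distance d shears the light field,
    x -> x + d * theta, so refocusing shifts each angular slice by the
    corresponding amount and then integrates out the angle.
    """
    nx, ntheta = light_field.shape
    image = np.zeros(nx)
    for j, th in enumerate(thetas):
        shift = int(round(d * th / dx))        # shear, in pixels, for this angle
        image += np.roll(light_field[:, j], shift)
    return image

# Hypothetical capture of a single out-of-focus point: each angular slice
# sees the point at a slightly different x (here x = 0.2 + 2.0 * theta).
nx, ntheta = 128, 16
x = np.linspace(-1.0, 1.0, nx)
thetas = np.linspace(-0.1, 0.1, ntheta)
dx = x[1] - x[0]
L = np.zeros((nx, ntheta))
for j, th in enumerate(thetas):
    L[np.argmin(np.abs(x - (0.2 + 2.0 * th))), j] = 1.0

for d in (-2.0, 0.0, 2.0):
    img = refocus(L, thetas, dx, d)
    print(d, img.max())   # the sharpest peak (all slices aligned) occurs near d = -2.0
```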
So some parameters of the system: the diameter of each microlens is 125 microns, with a pixel size of 2.2 microns. If you divide the diameter of the microlens by the pixel size, then in the geometrical optics sense you can have about 60 different views. But if you take diffraction into account, we can actually only get 9 to 10 different views, because you cannot get a perfect one-point image. MICHAEL: [INAUDIBLE] LEI: One. OK. So this is one of the simulation results we got. It's a very low resolution result, because each microlens in the array is too big. So when you extract along this line, if you remember what I mentioned, it's only a 24 by 24 image. But if you can-- AUDIENCE: [INAUDIBLE] LEI: Yeah. AUDIENCE: They're not really late. [INAUDIBLE] LEI: OK. PROFESSOR: Maybe time for a quick question? LEI: Yeah. AUDIENCE: So if each subaperture focuses on a sort of sub-group of pixels, what are the requirements on your sampling for each subaperture? Are you Nyquist sampling, or are you oversampled or undersampled? Obviously, your result will depend on how that sampling occurs for each subaperture. So how did you set the number of pixels per subaperture? LEI: This gets into how to optimize the system. We haven't gone through that derivation, but intuitively, like you said, you can think about the point spread function of the system. Imagine the limiting case: you might have hundreds of views, or in our case 60 views, but only 10 of them can be distinguished, which means your point spread function is larger than your pixel size. So you might want a larger pixel size, which means, under each microlens, you can afford fewer pixels. PROFESSOR: I think the question was, how do you sample the angular distribution, right? So after each microlens, you have rays that come out. And what is the sampling requirement there? Singapur? AUDIENCE: Maybe-- yeah. As Lei was pointing out, basically we sample space with the microlenses, so your sampling in the spatial domain is equal to the size of each microlens. And then, behind each microlens, we put some pixels of the CCD. So you can trade off. You have some total amount of resolvable information, which is given by the diffraction limit, and then you can choose how much angular resolution you want and how much spatial resolution you want. If you wanted higher spatial resolution, you would reduce the size of each microlens, and then you get higher spatial resolution. But in that case, you can only have a few pixels behind each microlens, and that makes the angular resolution coarser. PROFESSOR: Another way to think about this is that the reason you sample the space behind the microlenses is because, in general, you want to determine the slope of the Wigner distribution. There was this nice example of a chirp: the chirp in [INAUDIBLE] space becomes a slope. So you have to space your pixels in a way that lets you resolve your minimum desirable slope, that is, your minimum desirable chirp. And the chirp is very important, because in optics it represents the focus. So if you want to do something like 3D imaging, each plane that you want to resolve corresponds to a different slope of chirp, and the depth resolution that you want to achieve is limited by your ability to sample behind the lenslets.
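The "24 by 24" view extraction Lei mentions is just a strided pick of one pixel under every lenslet. The sketch below is hypothetical (the function name, the 10 usable pixels per lenslet, and the random raw image are made up to match the numbers quoted, not the actual data); note that 125 um / 2.2 um ≈ 57, which is where the "about 60 views" figure comes from, with diffraction cutting that to roughly 9 or 10 usable views.

```python
import numpy as np

def extract_view(raw, lenslet_px, u, v):
    """Extract one sub-aperture view from a raw lenslet image (a sketch).

    raw        : 2D array, the sensor image behind the microlens array
    lenslet_px : number of pixels behind each lenslet (assumed square)
    u, v       : which pixel under every lenslet to pick, i.e. which view
    """
    return raw[u::lenslet_px, v::lenslet_px]   # one pixel per lenslet

# Hypothetical raw capture: a 24x24 grid of lenslets, 10x10 usable pixels each.
lenslet_px, n_lenslets = 10, 24
raw = np.random.rand(n_lenslets * lenslet_px, n_lenslets * lenslet_px)
view = extract_view(raw, lenslet_px, 4, 4)
print(view.shape)   # (24, 24) -- one low-resolution image per viewing angle
```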
So you see that you have a very nice trade-off, because your camera has only so many pixels. So if you make your microlenses small to get good spatial resolution, then you get poor angular resolution. Of course, ideally, you would like to have a camera with billions of pixels. OK. I think, in the interest of time, we should move on. Thank you, guys. [APPLAUSE]
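The pixel-budget point can be made concrete with a two-line sketch; the 4000-pixel sensor width and the lenslet sizes below are illustrative numbers, not the parameters of the actual camera.

```python
# Resolution budget along one sensor dimension (illustrative numbers only):
# every pixel spent on angular sampling is a pixel not spent on spatial sampling.
sensor_px = 4000                       # assumed pixels along one side of the sensor
for lenslet_px in (5, 10, 20, 57):     # assumed pixels behind each microlens
    spatial_samples = sensor_px // lenslet_px
    print(f"{lenslet_px:2d} angular samples x {spatial_samples} spatial samples")
# Small microlenses (few pixels each) give fine spatial but coarse angular
# resolution; large microlenses do the opposite.
```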