We then discussed how, simply by filming the penumbra, the shadowy region on the ground near a corner, researchers could glean valuable information about objects hidden just around it.
Whenever objects in the hidden region move, the light rays they project onto the penumbra sweep through different angles relative to the wall.
These subtle changes in color and intensity are generally invisible to the naked eye, but researchers can now enhance them with computer algorithms.
Researchers have found that crude videos of light playing across the penumbra can reveal, for example, a person moving; with further processing, the same footage can reveal two people moving around the corner.
Freeman and his colleagues reported this groundbreaking work in June.
The researchers reconstructed what they call the light field of the room.
What is a light field?
A room's light field describes the direction and intensity of every light ray present in that room.
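To make the idea concrete, here is one illustrative way to represent a light field discretely in code (an assumption for exposition, not how the researchers actually store it): as a collection of rays, each carrying a position, a direction and an intensity.

```python
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple      # (x, y, z) point the ray passes through, in meters
    direction: tuple   # unit vector giving the ray's direction of travel
    intensity: float   # brightness carried along the ray

# A tiny, synthetic light field: in principle every ray crossing
# the room would get an entry like these.
light_field = [
    Ray((0.0, 1.0, 2.0), (0.0, 0.0, -1.0), 0.8),
    Ray((0.5, 1.0, 2.0), (0.1, 0.0, -0.995), 0.3),
]
```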
Freeman and his group reconstructed the light field from leafy houseplants and the shadows they cast near one of the room's walls.
As mentioned in the first part of this post, the leaves of the houseplants act like the pinspeck cameras of old.
Each leaf blocks out a different set of light rays.
By contrasting each leaf's shadow with all the rest, researchers can recover that leaf's missing set of light rays, unlocking part of the hidden scene in the form of an image.
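The subtraction at the heart of the pinspeck idea can be sketched in a few lines of Python, with entirely synthetic data (the array names and sizes here are hypothetical): removing an occluder's shadow measurement from a reference measurement isolates exactly the rays the occluder blocked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown brightness pattern of the scene around the corner.
hidden_scene = rng.random((8, 8))

# One wall point lit by ALL rays from the scene (fully mixed, scene-blind).
wall_reference = hidden_scene.sum()

# Rays intercepted by a single "leaf" acting as a pinspeck occluder.
blocked = np.zeros((8, 8), dtype=bool)
blocked[3:5, 3:5] = True

# The same wall point with the leaf in place: the blocked rays are missing.
wall_occluded = hidden_scene[~blocked].sum()

# Contrasting the two measurements reveals the light the leaf removed,
# i.e. one small piece of the hidden scene.
recovered = wall_reference - wall_occluded
```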
But the job is not over then.
Researchers still have to account for parallax; once they do, they have enough information to piece all of these images together.
Researchers also believe that such light-field approaches yield far sharper images than the earlier accidental-camera work, because prior knowledge about the surroundings, and about the world in general, comes built into the algorithms.
The houseplant's known shape, along with other assumptions, such as the knowledge that natural images tend to be smooth, and other priors, lets researchers draw sound inferences from noisy signals.
All of this information helps their algorithms sharpen the resulting images.
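A toy example of how a smoothness prior helps, using synthetic data (this is a generic Tikhonov-style smoother for illustration, not the researchers' actual algorithm): penalizing differences between neighboring values pulls a noisy estimate toward the smooth structure natural signals tend to have.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
true_signal = np.sin(np.linspace(0, 3, n))          # smooth underlying signal
noisy = true_signal + 0.3 * rng.standard_normal(n)  # what we actually measure

# D maps a signal to its neighbor-to-neighbor differences.
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# Minimize ||x - noisy||^2 + lam * ||D x||^2; the closed-form solution
# trades fidelity to the data against smoothness.
lam = 5.0
smoothed = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)
```

With the prior, the estimate's error against the true signal drops well below that of the raw noisy measurement.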
According to Torralba, the light-field technique requires the algorithms to know a lot about the environment in order to complete the reconstruction.
But when researchers can supply that knowledge, the technique gives them a great deal of information.
Scattered light rays
As mentioned in the previous part of this post, Torralba, Freeman and their colleagues (more accurately, their protégés) work on uncovering images that have been present all along.
Researchers on the other side of the MIT campus, however, work on something different but related.
Ramesh Raskar, a TED-talking computer-vision scientist, has one aim: to change the world with an approach he calls active imaging.
Raskar uses specialized, expensive laser-camera systems to create high-resolution images of objects around corners.
In 2012, Raskar realized an idea he had first conceived five years earlier, and in doing so he and his team pioneered a new technique.
With it, researchers shoot laser pulses at a wall so that a tiny fraction of the scattered light bounces around a barrier.
Moments after each pulse, a streak camera records individual photons at billions of frames per second; the camera needs to work at that pace to detect the photons bouncing back from the wall.
By measuring the times-of-flight of the returning photons, researchers can tell how far the photons traveled.
This, in turn, lets them reconstruct the detailed three-dimensional geometry of the hidden objects the photons scattered off behind the barrier.
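The distance bookkeeping is straightforward; the sketch below uses made-up numbers. Each photon's time of flight gives its total path length, and subtracting the known laser-to-wall and wall-to-camera legs leaves the round trip to the hidden object, which pins the object to an ellipsoid of possible positions.

```python
C = 299_792_458.0  # speed of light in m/s

def path_length(t_seconds):
    """Total distance a returning photon traveled, from its time of flight."""
    return C * t_seconds

t = 6.67e-9                 # hypothetical round-trip time (~6.67 ns)
total = path_length(t)      # roughly 2 meters of travel

# If the laser-to-wall and wall-to-camera legs are each 0.5 m (assumed
# known), the remaining distance belongs to the hidden bounce.
hidden_round_trip = total - 2 * 0.5
```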
The process does not always go smoothly, though.
One complication is that researchers have to raster-scan the wall with the laser; without that scan, they cannot form the three-dimensional image.
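The raster scan itself is just a loop over wall points. The sketch below uses a placeholder measurement function (all names and sizes are hypothetical), with a time-binned photon histogram recorded at each laser position.

```python
def measure_histogram(ix, iy):
    """Placeholder for the time-resolved photon count at wall point (ix, iy)."""
    return [0] * 64  # 64 empty time bins in this stand-in

# Visit a 32 x 32 grid of laser positions on the wall, storing one
# histogram per position; the full scan feeds the reconstruction.
scan = {}
for ix in range(32):
    for iy in range(32):
        scan[(ix, iy)] = measure_histogram(ix, iy)
```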
An example helps here.
Imagine a person hidden around a corner.
According to Raskar, light from a point on the hidden person's head, a point on their shoulder and a point on their knee might all arrive at the camera at exactly the same time.
Shine the laser at a slightly different spot, however, and the light from those three points no longer arrives simultaneously.
The researcher then has to combine all of these signals and solve what the community calls the inverse problem; without solving it, there is no way to reconstruct the hidden three-dimensional geometry.
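In miniature, the inverse problem looks like this (synthetic data throughout): the measurements y are a known linear mixing A of the unknown hidden scene x, and a small regularization term stabilizes the inversion. Real systems solve a far larger, noisier version of the same linear problem.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25                    # unknowns: a 5 x 5 hidden scene, flattened
A = rng.random((100, n))  # forward model: how the scene lights each measurement
x_true = rng.random(n)    # the hidden scene (unknown in practice)
y = A @ x_true + 0.001 * rng.standard_normal(100)  # noisy measurements

# Regularized least squares: solve (A^T A + lam I) x = A^T y.
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```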
Raskar's original algorithm for solving the inverse problem was extremely demanding computationally, and his apparatus cost around half a million dollars.
In the years since, though, the scientific community has made significant progress in simplifying the math and cutting the costs.
In March, researchers published a paper in the journal Nature that set a new standard for cost-effective, efficient three-dimensional imaging of an object around a corner.
They demonstrated the method on a bunny figurine hidden around a corner.
The paper's authors, Gordon Wetzstein, David Lindell and Matthew O'Toole, all of Stanford University, devised a powerful and robust new algorithm to solve the inverse problem.
They also used a relatively affordable SPAD camera.
What is a SPAD camera?
It is a single-photon avalanche diode, a semiconductor device that offers a lower frame rate than the more expensive streak camera.
Ramesh Raskar called their work "very clever" and one of his favorite research papers.
Earlier in their careers, Raskar supervised two of the three authors.
In active non-line-of-sight 3D imaging, scientists bounce laser light off a wall. The laser light scatters off the hidden object and rebounds back the way it came. Researchers then use the reflected light to generate a three-dimensional reconstruction of the hidden object.
"Were there no computer algorithms to solve these problems before?"
There were, but all of them were bogged down by a procedural detail: researchers typically aimed their detectors at a spot on the wall different from the one the laser pointed at.
Why did researchers do that?
So the camera could avoid the laser's problematic backscattered light.
But when researchers pointed the laser and the camera at the exact same spot, they found that the outgoing and incoming photons traced out the same light cone.
Now, what on earth is a light cone?
When light hits a surface, it scatters into the surroundings, forming an expanding sphere of photons, and that sphere traces out a cone as it extends through time.
Matthew O'Toole, who worked at Stanford University and now performs his research at Carnegie Mellon University, is credited with translating the physics of light cones.
Though O'Toole translated the physics of light cones, he did not develop it.
Who did?
Hermann Minkowski, in the early part of the 20th century.
Why does that name sound familiar?
Because Minkowski was once Albert Einstein's teacher.
O'Toole translated that work into a remarkably concise expression relating photon times-of-flight to the locations of the scattering surfaces.
He dubbed his translation the light-cone transform.
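A heavily simplified sketch of the change of variables at the heart of the light-cone transform (synthetic single-pixel data; the full method applies this per wall point and then deconvolves in 3D): measured photon counts indexed by time t are resampled onto a quadratic axis v = (c t / 2)^2 and rescaled by v^(3/2), which turns the imaging equation into an ordinary convolution that standard deconvolution can invert.

```python
import numpy as np

C = 3e8  # speed of light, m/s (rounded)
t = np.linspace(1e-9, 64e-9, 64)           # time bins (hypothetical)
tau = np.random.default_rng(2).random(64)  # measured photon histogram (synthetic)

v = (C * t / 2.0) ** 2        # quadratic "distance squared" axis
transformed = v ** 1.5 * tau  # rescaled counts on the new axis

# A real pipeline would interpolate `transformed` onto a uniform v grid
# and apply Wiener deconvolution to recover the hidden surface.
```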
But are there any more application areas for SPADs?
Self-driving cars currently use LIDAR systems for direct imaging; in the future, researchers could conceivably equip them with SPADs so the cars can see around corners.
According to Andreas Velten, the first author of Raskar's seminal 2012 paper, people should soon see laser-SPAD sensors in handheld formats.
Velten now runs a group doing active-imaging research at the University of Wisconsin, Madison.
Now, the real task for researchers is to move on to more difficult, more realistic scenes.
According to Velten, instead of carefully setting up a scene with a white object surrounded by black space, he wants a point-and-shoot.
More things and their place
Freeman and his group of researchers have begun work on integrating active and passive approaches.
Christos Thrampoulidis, a postdoctoral researcher, led a recent paper showing that in laser-based active imaging, an old-school pinspeck camera of known shape around the corner would let researchers reconstruct the hidden scene without using any photon time-of-flight information at all.
According to Thrampoulidis, an ordinary CCD camera should then suffice.
Some believe that techniques such as non-line-of-sight imaging could one day aid in a wide range of applications.
Currently, Velten's team and NASA's Jet Propulsion Laboratory are collaborating on a project aimed at remotely imaging the insides of caves on the moon.
Meanwhile, Raskar and other researchers have started using their approach to read the first few pages of a closed book and to see clearly a short distance through thick fog.
All of this is, without a doubt, exciting.
Freeman's motion magnification algorithm, meanwhile, can do more than audio reconstruction.
It could come in handy in safety and health devices, or be used to detect small astronomical motions.
David Hogg, a data scientist and astronomer at New York University and the Flatiron Institute, which is funded by the Simons Foundation, recently said he feels the community should start using such techniques in astronomy as well.
Freeman seemed introspective when reporters asked him about the privacy concerns these discoveries raise.
He replied that privacy is an issue he has thought about again and again over his career.
Freeman, a bespectacled camera tinkerer, has developed photographs since he was a child.
He told reporters that when he began his career, he had no desire to work on projects with spying or military applications.
Over time, though, he came to see technology as a tool that people and organizations can use in many different ways; if you avoided everything that could conceivably serve a military or spying application, you could never do anything useful.
Even military applications, Freeman added, span an extremely rich spectrum of uses; such technology could, for instance, help someone avoid being killed by an attacker.
Generally speaking, he said, knowing where things are in areas you cannot see is a good thing.
What thrills Freeman even more than the technological possibilities his discovery could enable is that researchers have uncovered a phenomenon hidden in plain view.
He thought the world was already rich with things to find, he said; now he realizes it is filled with things no one had even discovered.