Wednesday, December 9, 2015

Lab 8: Radiometric Signatures

Lab eight was designed to introduce students to the concept that objects have unique radiometric signatures. Different materials each have a unique reflectance value across the bands, meaning they can be classified by their radiometric values. While I do not go as far as automatically classifying features in this lab, I do go through the process of collecting the radiometric signatures of different types of land cover.

All the data for this lab was provided by the Earth Resources Observation and Science Center, United States Geological Survey.


In order to start this process I needed to find features and record their reflectance. I used the spectroradiometer tool included in the Erdas Imagine interface to record the signatures of about a dozen different feature types such as standing/moving water, vegetation, crops, and urban features. This operation can be done at a smaller, more accurate scale in the field with a physical spectroradiometer, but recording the signatures with this program works just fine for my application.
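Outside of Erdas, the same measurement boils down to averaging pixel values band by band over the feature. Below is a minimal Python sketch of that idea, not the tool's actual implementation; the filename and window coordinates are hypothetical placeholders.

```python
# Minimal sketch: read a multiband scene and average the pixel values
# inside a small window to get a feature's spectral signature.
# The filename and window location are hypothetical placeholders.
import rasterio
import numpy as np

with rasterio.open("scene.tif") as src:   # hypothetical file
    stack = src.read()                    # shape: (bands, rows, cols)

# A small window over the feature of interest (hypothetical coordinates)
window = stack[:, 1200:1220, 3400:3420]

# Mean value per band; one value per band is the signature
signature = window.reshape(window.shape[0], -1).mean(axis=1)

for band, value in enumerate(signature, start=1):
    print(f"Band {band}: {value:.1f}")
```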
Figure 1. Radiometric signature of standing water (left window). Notice the comparatively high reflectance of bands 1-3 relative to bands 4-6.
Standing water was the first feature for which I recorded a radiometric signature. If we look at the signature mean plot, as seen in the above image, we notice that the highest levels of reflectance can be seen in the first three bands of the spectrum; blue is the highest, with green and red having slightly lower reflectance. Bands 4-6 represent the infrared wavelengths. In water with a depth greater than 2 meters nearly all infrared light is absorbed by the water, while most of the blue, green, and, to some degree, red is transmitted. Very little of the energy is reflected off the surface, and the remaining energy picked up by the spectroradiometer is due to volumetric reflectance of energy that was previously transmitted through the water.
Figure 2. Radiometric signatures of 12 different features. 
Standing water is only one of the 12 feature types for which I needed to record a radiometric signature. After several minutes of looking around the map I was able to collect all the signatures needed. And because of the different materials each is made of, I was able to see distinct differences in the signatures of the different land cover types (figure 2). When looking at the chart above one can see some of these differences and infer the reasons why. Let's take vegetation, for example: it has higher reflectance in the infrared spectrum than in the visible. This is because vegetation absorbs much of the visible light to convert it to energy, while it reflects the infrared to prevent damage to proteins within its cells.
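A signature mean plot like the one in figure 2 is straightforward to reproduce outside of Erdas; here is a rough matplotlib sketch. The signature values are made-up placeholders for illustration, not my recorded lab data.

```python
# Rough sketch of a signature mean plot like figure 2.
# The values below are illustrative placeholders, not real lab data.
import matplotlib.pyplot as plt

signatures = {
    "standing water": [65, 54, 40, 18, 10, 8],
    "vegetation":     [30, 35, 25, 90, 55, 30],
    "urban":          [70, 72, 75, 78, 80, 76],
}

bands = range(1, 7)
for name, values in signatures.items():
    plt.plot(bands, values, marker="o", label=name)

plt.xlabel("Band")
plt.ylabel("Mean pixel value")
plt.legend()
plt.show()
```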

Figure 3. Comparing the radiometric signatures of riparian and normal vegetation. 
When interpreting the data it can be difficult to tell the difference between feature types that are similar in composition. We can compare riparian vegetation (located on hillsides near rivers) with regular vegetation (figure 3). All the bands are pretty much the same with the exception of band four, the near infrared. Because of its easy access to water, riparian vegetation has a higher water content and absorbs slightly more infrared energy than vegetation farther from water, which is slightly drier. The difference in reflectance on band four is the only significant one in the spectral signature, and it is what makes it possible to tell the two vegetation types apart.
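Finding the most diagnostic band can also be done numerically: given two signatures, take the band with the largest absolute difference. A small numpy sketch with illustrative placeholder values (not my recorded signatures):

```python
# Given two signatures, find the band where they differ most.
# The values here are illustrative placeholders, not real lab data.
import numpy as np

riparian   = np.array([28, 33, 24, 78, 50, 28])
vegetation = np.array([30, 35, 25, 90, 55, 30])

diff = np.abs(riparian - vegetation)
best_band = int(np.argmax(diff)) + 1   # bands are 1-indexed

print(f"Largest separation on band {best_band} ({diff.max()} counts)")
```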

Monday, December 7, 2015

Lab 7: Stereoscopy and Orthorectification

Lab seven was designed to introduce students to some of the old and new processes of measuring distances in images, creating a three-dimensional anaglyphic image, and orthorectifying images. This blog post has been separated into three parts, one for each of the aforementioned topics. All imagery has been provided by my instructor Dr. Wilson.

Part 1: Measuring distances

The first part of this lab was all about measuring distances and areas with a computer and calculating scales and relief displacement with handwritten formulas. The first two tasks were to calculate the scale of aerial photographs using a ruler and information given about real world distances in the photograph or about the airplane which took them. The first question was to determine a simple scale ratio of an image using a surveyed distance between two real world points. By measuring this distance on the photo and comparing it to the real world data I was able to figure out the scale ratio of the image.
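The calculation itself is just the ratio of photo distance to ground distance in the same units. As a worked example with made-up numbers (not the lab's values): if two points surveyed 1,320 feet apart measure 2.64 inches apart on the photo, then

\[
S = \frac{d_{\text{photo}}}{d_{\text{ground}}} = \frac{2.64\ \text{in}}{1{,}320\ \text{ft}\times 12\ \text{in/ft}} = \frac{2.64}{15{,}840} = \frac{1}{6{,}000}
\]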
The second question was slightly more complex. I was to determine the scale ratio of a photo using land elevation data (796 feet above sea level), flight elevation data (20,000 feet above sea level), and the focal length of the camera which took the photo (152 mm). Below is an image of the math I completed to come up with the scale (Figure 1).
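For a vertical photograph the scale is the focal length divided by the flying height above the terrain. Converting feet to millimeters (304.8 mm per foot), the numbers above give roughly 1:38,500, which rounds to about 1:40,000:

\[
S = \frac{f}{H-h} = \frac{152\ \text{mm}}{(20{,}000-796)\ \text{ft}\times 304.8\ \text{mm/ft}} = \frac{152}{5{,}853{,}379} \approx \frac{1}{38{,}500}
\]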

Figure 1. Sloppy handwriting shows the work done to determine the approximate 1:40,000 scale of an aerial photograph.
Figure 2. Photo used to determine the height of the smokestack. 
The final bit of hand calculation was to determine the height of a smokestack (labeled "A" in figure 2). The location of the principal point, the scale, and the altitude of the plane when the photo was taken were supplied. Below in figure 3 one can see the simple hand calculations needed to determine its height.
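The calculation rests on the relief displacement formula: the object's height h is its displacement d on the photo times the flying height H above the local terrain, divided by the radial distance r from the principal point to the object's top. The numbers below are hypothetical placeholders, not the values from the lab:

\[
h = \frac{d\cdot H}{r} = \frac{2.0\ \text{mm}\times 4{,}000\ \text{ft}}{60.0\ \text{mm}} \approx 133\ \text{ft}
\]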

Figure 3. More sloppy handwriting shows the math used to determine the height of a smokestack with the use of an image.


The next area and perimeter calculations were done using the measure polygon tool in Erdas Imagine. 

Figure 4. Erdas Imagine measure polygon tool in action. 

This tool is pretty simple to use: it is just a matter of placing nodes around the perimeter of the feature to be measured. Once the border of the feature has been outlined and the polygon completed, Erdas calculates the shape's area and perimeter automatically. The image above shows the halfway point in my progress of drawing a polygon around a lake (Figure 4).
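For planar map coordinates, the kind of computation such a measure tool performs can be sketched with the shoelace formula for area and summed segment lengths for perimeter. This is a generic illustration, not Erdas's actual code, and the node list is made up:

```python
# Area via the shoelace formula plus perimeter via segment lengths.
# Coordinates are assumed to be planar map units (e.g. meters).
import math

def polygon_area(coords):
    """Shoelace formula; coords is a list of (x, y) vertices."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(coords, coords[1:] + coords[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def polygon_perimeter(coords):
    """Sum of the distances between consecutive vertices."""
    return sum(
        math.dist(p, q)
        for p, q in zip(coords, coords[1:] + coords[:1])
    )

nodes = [(0, 0), (120, 10), (150, 90), (40, 130)]  # hypothetical outline
print(f"Area: {polygon_area(nodes):.1f} sq units")
print(f"Perimeter: {polygon_perimeter(nodes):.1f} units")
```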

Part 2: Stereoscopy

For part two of this lab I was required to make an anaglyphic aerial image of the Eau Claire area with Erdas Imagine. This image could then be viewed with red/cyan anaglyph glasses to see a three-dimensional representation of the city of Eau Claire. In order to complete this operation I used two input files: one was a photo with a one meter spatial resolution and the other was a digital elevation model of the city with a 10 meter spatial resolution. Using the anaglyph generation tool in Erdas was a very simple process which did not require the placement of any ground control points or extensive image manipulation.
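Conceptually, an anaglyph like this shifts pixels by a parallax proportional to elevation to fake a second viewpoint, then packs the two views into the red and cyan channels. The sketch below is my own illustration of that idea, not Erdas's actual algorithm; photo and dem are assumed to be aligned 2-D numpy arrays.

```python
# Conceptual anaglyph generation, not Erdas's actual algorithm:
# shift each row of a grayscale photo horizontally in proportion to
# the DEM to fake a second perspective, then put the original in the
# red channel and the shifted copy in green/blue.
import numpy as np

def make_anaglyph(photo, dem, parallax_scale=0.05):
    rows, cols = photo.shape
    shift = (dem * parallax_scale).astype(int)   # per-pixel parallax
    col_idx = np.arange(cols)

    shifted = np.empty_like(photo)
    for r in range(rows):
        src = np.clip(col_idx - shift[r], 0, cols - 1)
        shifted[r] = photo[r, src]

    # Red channel = left eye, green/blue = right eye (red/cyan glasses)
    return np.dstack([photo, shifted, shifted])
```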

Figure 5. Screenshot of Erdas Imagine working to create an anaglyph image from an aerial photograph (left) and a digital elevation model. 
Figure 6. Anaglyphic image of UWEC.
After the anaglyphic image of Eau Claire was generated I was able to see some differences in relief in the image (Figure 6). But compared to the places that stereoscopic images usually portray, Eau Claire is far too flat to show much of an effect when viewed through the glasses.

Part 3: Orthorectification

The third part of this lab required me to orthorectify two images of the Palm Springs area in California. Both were taken by the SPOT satellite and have a spatial resolution of 10 meters. Neither of the images had been georectified and both were in sore need of adjustment. In the image below I overlaid the images in the same viewer and used a horizontal swipe tool to see the difference between them (Figure 7).

Figure 7. SPOT satellite images of the Palm Springs, California area in need of georectification.


In order to correct these images I first needed to import them into the photogrammetry project manager in Erdas Imagine and define the sensor which took the image. I then had to define what coordinate system, projection, and elevation model the image was in. Now I was ready to start placing ground control points (GCPs). As a reference I used an image of the area which had already been orthorectified.

Figure 8. Adding ground control points to images. Frames on the right are automatically positioned to the GCP area by the automatic (x,y) drive.

After placing the first two GCPs in the correct locations I was able to activate the automatic drive tool to speed up the process of placing additional GCPs (Figure 8). The tool essentially uses the existing data from previous control points to approximate the location of newer points. This feature of the program is a time saver from the start, and the more points are placed, the more accurately the program approximates the location of new ones. After placing an adequate number of control points on this image I was ready to add elevation data. I used a digital elevation model (DEM) of the area to update the elevation of the GCPs.
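The principle can be sketched as a least-squares fit: use the GCPs placed so far to estimate a transform between the two images, then apply it to guess where the next point falls. Erdas's real drive function is its own implementation; the affine model and the coordinates below are illustrative assumptions.

```python
# Fit an affine transform to existing GCP pairs, then predict where a
# new point should land in the uncorrected image. Coordinates are
# made-up placeholders.
import numpy as np

# (x, y) in the reference image -> (u, v) in the uncorrected image
ref = np.array([[512.0, 310.0], [1480.0, 295.0], [990.0, 1210.0]])
img = np.array([[540.0, 330.0], [1502.0, 301.0], [1015.0, 1244.0]])

# Solve u = a0 + a1*x + a2*y (and likewise for v) by least squares
A = np.column_stack([np.ones(len(ref)), ref])
coeffs, *_ = np.linalg.lstsq(A, img, rcond=None)

def predict(x, y):
    """Approximate location of (x, y) in the uncorrected image."""
    return np.array([1.0, x, y]) @ coeffs

print(predict(700.0, 800.0))
```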

Next I added the second image to be orthorectified. I used the same process to set the sensor type and coordinate system as I did for the first image. I then proceeded to add GCPs. This was a fairly quick process, seeing as I was using the same GCPs I had placed earlier on the first image and was able to continue using the automatic drive tool.

Now that both images had GCPs and elevation data added, I was able to use another feature of the photogrammetry project manager, the automatic tie point collection tool, to automatically create additional GCPs. When using the tool I set it to make at most 40 more GCPs. After the tool had run there were an additional 25 GCPs, which were actually pretty well placed.
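Erdas generates these points with its own image-matching routine; as a rough conceptual analog, automatic point matching between two overlapping images can be done with OpenCV feature matching. The filenames are hypothetical placeholders.

```python
# Conceptual analog of automatic tie point collection: detect and
# match features between two overlapping images. Filenames are
# hypothetical placeholders.
import cv2

left = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Keep the 40 best matches as candidate tie points, like the cap I set
for m in matches[:40]:
    print(kp1[m.queryIdx].pt, "->", kp2[m.trainIdx].pt)
```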

Figure 9. Control point #23 was generated automatically with the use of the tie point collection function. Its placement appears to be fairly accurate.
Finally I was able to perform image resampling. I chose to set the resampling method to bilinear interpolation and set the program to correct both images in tandem.
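Bilinear interpolation itself is simple: each output pixel is a distance-weighted blend of the four source pixels surrounding its back-projected location. A minimal numpy sketch of the sampling step:

```python
# Bilinear interpolation: for a non-integer source location, blend the
# four surrounding pixels by their distances.
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D array at fractional coordinates (x=col, y=row)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0

    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

grid = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear_sample(grid, 0.5, 0.5))   # -> 25.0
```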

Figure 10. Images orthorectified in tandem for both horizontal and vertical displacement.


The above image shows the results of the process. Spatially the images are pretty accurate, but there are some areas with a noticeable difference between the locations of the same feature on the two images. This is exaggerated where elevation changes are the greatest. The edge of the top overlaid photo is very interesting: it is not straight, and it bends along with the changes in elevation because the pixels were resampled to their correct locations.