Precision, also called intraswath precision, is a measure of repeatability on a hard-surface target within a single pass of a scanner. This metric is primarily a function of the intrinsic calibration and stability of the scanner, and it is also greatly affected by properties of the measured surface.
Many factors affect the precision of a dataset, most notably:
Laser intrinsic quality (range accuracy, range precision, angular accuracy)
Target reflectivity
Range to target
Incidence angle/scan angle and beam divergence
Measuring Precision
Phoenix LiDAR quantifies the precision of systems and datasets during testing by replicating* the USGS methodology and closely conforming to the ASPRS outline.
A hard-surface area within the calibration site is used, generally a flat region of parking lot or sidewalk.
The area is carefully selected to contain a target of typical real-world signal return at the laser's wavelength, often around 20% reflectivity, avoiding very dark or highly reflective subjects.
Within this area, data is sampled from all overlapping flightlines to consider a variety of scan angles.
Data is processed independently for each flightline according to the USGS methodology.
The per-flightline results are summarized by taking an average that characterizes the dataset or scanner.
These evaluations are made for a single AGL (above ground level) at a time and results are conveyed as such.
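The averaging step above can be sketched in a few lines of Python. The flightline names and precision values here are hypothetical placeholders, assuming per-flightline precision (e.g. RMSDz, in meters) has already been computed with the raster methodology:

```python
# Hypothetical per-flightline precision results (e.g. RMSDz, in meters)
# for a single AGL; names and values are illustrative only.
flightline_precision = {
    "line_01": 0.012,
    "line_02": 0.015,
    "line_03": 0.011,
}

# The dataset/scanner is characterized by the average of the
# per-flightline results at this AGL.
dataset_precision = sum(flightline_precision.values()) / len(flightline_precision)
print(f"Dataset precision at this AGL: {dataset_precision:.4f} m")
```

Because results are reported per AGL, a separate average would be computed for each flight altitude evaluated.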
* PLS typically does not create the vectorized polygons with summarizing attributes described in the USGS outline for data delivery.
Industry References
Data providers should always reach an agreement with end users about data quality standards and reporting specifications.
ASPRS (2014) refers to precision as “within-swath accuracy” and outlines the following criteria for its quantification:
Evaluated against single swath data.
The sample area should be a relatively flat and hard surface.
The sample area should contain single return LiDAR pulses only.
The test area should not include abrupt changes in reflectivity.
Compute the difference between two raster surfaces: one representing the maximum and one the minimum elevation within each cell.
Raster cell size should be twice the nominal point spacing.
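As a rough illustration of the computation these criteria describe, the following Python/NumPy sketch grids a single-swath sample into cells of twice the nominal point spacing and differences the per-cell maximum and minimum elevations. The function name and array layout are assumptions for illustration, not part of the ASPRS standard:

```python
import numpy as np

def within_swath_range(x, y, z, cell_size):
    """Grid (x, y, z) points into cells of cell_size (2x the nominal point
    spacing) and return the per-cell max-minus-min elevation difference."""
    col = ((x - x.min()) // cell_size).astype(int)
    row = ((y - y.min()) // cell_size).astype(int)
    n_rows, n_cols = row.max() + 1, col.max() + 1

    # Accumulate the max and min elevation observed in each cell.
    z_max = np.full((n_rows, n_cols), -np.inf)
    z_min = np.full((n_rows, n_cols), np.inf)
    np.maximum.at(z_max, (row, col), z)
    np.minimum.at(z_min, (row, col), z)

    diff = z_max - z_min
    diff[~np.isfinite(diff)] = np.nan  # cells that received no points
    return diff
```

In practice the input would first be filtered to single-return pulses over a flat, hard surface, per the criteria above.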
The within-swath accuracy required for a dataset to meet a given accuracy class is outlined in the following table, taken from the ASPRS guidelines.
The USGS Base Specification refers to “intraswath precision” or “smooth surface precision” as a component of relative accuracy. Its guidelines for measurement are similar to ASPRS's, with more detail on exactly how to execute the methodology. Per USGS, precision must be calculated through a raster cell analysis where:
Precision = Range − (Slope × CellSize × 1.414), with the following specifications:
Range is the difference between the minimum and maximum elevation (Z) values within a cell.
Slope is the maximum slope between a cell and its eight neighbors, calculated using the minimum Z of each cell.
Cell size is twice the average nominal point spacing (rounded up to an integer).
Assessments should be made on hard surfaces with single return pulses only, just like ASPRS.
The sample area should be approximately 100 pixels (raster cells), so its ground footprint depends strongly on point density.
The full width of the swath should be evaluated, including the range of scan angles, if possible. This may be accomplished by sampling multiple areas and data from multiple flightlines.
In short, scan angle/incidence angle must be considered to avoid misrepresenting the data.
The cells' precision values should be statistically summarized per sample area (raster) to show:
RMSDz
Min Precision value
Max Precision value
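A minimal Python/NumPy sketch of the USGS calculation and summary described above. The function names are illustrative, the min/max elevation rasters are assumed to have already been built at the prescribed cell size, and for brevity the eight-neighbor slope search wraps at raster edges, which a production implementation would handle explicitly:

```python
import numpy as np

def usgs_precision(z_min, z_max, cell_size):
    """Per-cell Precision = Range - (Slope x CellSize x 1.414)."""
    cell_range = z_max - z_min

    # Maximum slope between each cell and its 8 neighbors, using the
    # minimum Z of each cell. Diagonal neighbors are cell_size * sqrt(2)
    # (~1.414) apart. Note: np.roll wraps at edges (simplification).
    slope = np.zeros_like(z_min)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            shifted = np.roll(np.roll(z_min, dr, axis=0), dc, axis=1)
            run = cell_size * (1.414 if dr and dc else 1.0)
            slope = np.maximum(slope, np.abs(shifted - z_min) / run)

    # Remove the elevation change attributable to ground slope across
    # the cell diagonal (cell_size * sqrt(2)).
    return cell_range - slope * cell_size * 1.414

def summarize(precision):
    """Per-sample-area summary: RMSDz, min, and max precision values."""
    vals = precision[np.isfinite(precision)]
    rmsdz = float(np.sqrt(np.mean(vals ** 2)))
    return rmsdz, float(vals.min()), float(vals.max())
```

The RMSDz from each sample area is what the following Quality Level table is compared against.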
The USGS guidelines place RMSDz values into a scale of Quality Levels, as seen in the following table taken from USGS.