Airborne Lidar Processing

This workflow is for customers who intend to process raw airborne lidar data from a native Phoenix system into a calibrated, colorized point cloud and other deliverables.

  1. If using a RECON system, open one of the DATA files. SpatialExplorer will detect the associated DATA files and extract the lidar, imagery, and navigation data.

  2. Open the PLP file. If imagery was acquired, ensure images are present in the Cam0 folder. After opening the PLP file, select a coordinate reference system (CRS) for the project.

  3. If working with data from a Riegl lidar scanner, convert the RXP files to SDCX files.

  4. Import a processed trajectory (CTS, CLS, SBET, POF). If you do not yet have a processed trajectory, produce one using either InertialExplorer, NavLab embedded, or NavLab via LiDARMill.

    1. When processing data acquired from an aerial platform (helicopter, UAV, fixed-wing aircraft), use the Airborne or UAV dynamics profile in NavLab or InertialExplorer.

  5. Configure the lidar processing settings. Consider a 90-degree field of view and a minimum range of 5 meters.
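The effect of these two settings can be sketched as a simple return filter. This is a minimal illustration, not the software's implementation; the tuple layout and function name are assumptions made for the example.

```python
def filter_returns(points, fov_deg=90.0, min_range_m=5.0):
    """Keep returns within a centered field of view and beyond a minimum range.

    `points` is a list of (scan_angle_deg, range_m, x, y, z) tuples --
    a simplified stand-in for real lidar return records.
    """
    half_fov = fov_deg / 2.0
    return [p for p in points
            if abs(p[0]) <= half_fov and p[1] >= min_range_m]

returns = [
    (0.0, 50.0, 1.0, 2.0, 3.0),    # nadir return at valid range -> kept
    (60.0, 80.0, 4.0, 5.0, 6.0),   # outside the 90-degree FOV -> dropped
    (-30.0, 2.0, 7.0, 8.0, 9.0),   # closer than 5 m (e.g. airframe) -> dropped
]
kept = filter_returns(returns)
```

The minimum range discards near-field returns off the aircraft itself, while the field-of-view limit discards low-incidence-angle returns at the swath edges, which tend to be noisier.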

  6. Create processing intervals (these intervals will also apply to imagery).

  7. Visually check the lidar's relative accuracy and determine what degree and type of optimization is needed. Review trajectory accuracy reports to determine which trajectory parameters (X, Y, Z, yaw, pitch, roll) require optimization. Typically, up (Z) and yaw are the only parameters that need to be optimized in an aerial dataset.

  8. Run LiDARSnap and optimize for necessary parameters. Typically the LiDARSnap Aerial Trajectory Optimization preset works well.

  9. Classify noise and ground. An accurate ground classification is recommended before adjusting to control (step 10). Noise should always be classified BEFORE ground.
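The reason for the noise-first ordering can be shown with a toy classifier. This drastically simplifies real routines (noise here is an elevation outlier within a grid cell, ground is the lowest remaining point per cell); the thresholds and function are illustrative assumptions only.

```python
from collections import defaultdict
from statistics import median

def classify(points, cell=5.0, noise_gate=10.0):
    """Toy noise-then-ground classification on (x, y, z) points.

    A low noise return would otherwise win the "lowest point" test
    and corrupt the ground surface, which is why noise comes first.
    """
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cells[(int(x // cell), int(y // cell))].append(i)
    labels = ["unclassified"] * len(points)
    # Pass 1: noise -- points far from their cell's median elevation.
    for idx in cells.values():
        med = median(points[i][2] for i in idx)
        for i in idx:
            if abs(points[i][2] - med) > noise_gate:
                labels[i] = "noise"
    # Pass 2: ground -- the lowest non-noise point in each cell.
    for idx in cells.values():
        candidates = [i for i in idx if labels[i] != "noise"]
        if candidates:
            lowest = min(candidates, key=lambda i: points[i][2])
            labels[lowest] = "ground"
    return labels

pts = [(0, 0, 100.0), (1, 1, 101.0), (2, 2, 60.0), (3, 3, 102.0)]
labels = classify(pts)   # the 60.0 m low outlier is flagged as noise,
                         # so ground lands on the true 100.0 m surface point
```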

  10. If ground control points are available, compute residuals from point cloud to control points to determine what rigid adjustment, if any, is needed to match lidar elevations to ground control elevations.
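The residual computation and the rigid vertical adjustment it informs can be sketched as follows. The helper names, the fixed search radius, and the mean-of-nearby-points surface estimate are assumptions for illustration; the actual software compares the cloud surface to each control point with more care.

```python
import math
from statistics import mean

def vertical_residuals(cloud, controls, radius=1.0):
    """Residual per control point: mean nearby cloud z minus control z.

    `cloud` and `controls` are lists of (x, y, z) tuples. A positive
    residual means the lidar surface sits above the control elevation.
    """
    residuals = []
    for cx, cy, cz in controls:
        near = [z for x, y, z in cloud
                if math.hypot(x - cx, y - cy) <= radius]
        if near:
            residuals.append(mean(near) - cz)
    return residuals

def apply_vertical_shift(cloud, residuals):
    """Rigid Z adjustment: subtract the mean residual from every point."""
    dz = mean(residuals)
    return [(x, y, z - dz) for x, y, z in cloud]

cloud = [(0.0, 0.0, 100.20), (0.5, 0.0, 100.20), (10.0, 10.0, 55.20)]
controls = [(0.0, 0.0, 100.00), (10.0, 10.0, 55.00)]
res = vertical_residuals(cloud, controls)    # both residuals ~ +0.20 m
adjusted = apply_vertical_shift(cloud, res)  # cloud lowered by 0.20 m
```

A consistent residual across all control points suggests a rigid shift is sufficient; residuals that vary spatially point to a remaining calibration or trajectory issue instead.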

  11. Activate images along processing intervals.

  12. Calibrate the camera using CameraSnap. If necessary, enable per-pose orientation corrections to compute attitude corrections specific to each image.

  13. Colorize the point cloud using CloudColorizer.
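Conceptually, colorization projects each point into a calibrated image and samples the pixel it lands on. The sketch below assumes an idealized nadir-looking pinhole camera with the principal point at the image center; real colorization uses the full calibrated interior and exterior orientation of each activated image.

```python
def colorize(points, image, cam_xyz, f=100.0):
    """Assign each (x, y, z) point the RGB of its projected pixel.

    `image` is a row-major 2D list of RGB tuples, `cam_xyz` the camera
    position, `f` the focal length in pixels. Points that project outside
    the frame (or behind the camera) get None.
    """
    h, w = len(image), len(image[0])
    colored = []
    for x, y, z in points:
        dx, dy = x - cam_xyz[0], y - cam_xyz[1]
        depth = cam_xyz[2] - z            # camera looks straight down
        if depth <= 0:
            colored.append((x, y, z, None))
            continue
        col = int(round(w / 2 + f * dx / depth))
        row = int(round(h / 2 + f * dy / depth))
        rgb = image[row][col] if 0 <= row < h and 0 <= col < w else None
        colored.append((x, y, z, rgb))
    return colored

# 2x2 "image": top-left pixel red, the rest white
img = [[(255, 0, 0), (255, 255, 255)],
       [(255, 255, 255), (255, 255, 255)]]
pts = [(-0.8, -0.8, 0.0)]                 # projects into the top-left pixel
out = colorize(pts, img, cam_xyz=(0.0, 0.0, 100.0), f=100.0)
```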

  14. Perform any additional classification necessary, using the Classify on Selection window as well as the automated classification routines.

  15. Generate deliverables such as rasters (RGB raster, DTM, DEM) and vector deliverables (contours and meshes).
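The DTM deliverable can be illustrated as gridding the ground-classified points into cells. This minimum-elevation binning is a sketch of the raster idea only; real DTM generation interpolates a continuous surface, and the cell keying shown here is an assumption for the example.

```python
from collections import defaultdict

def make_dtm(ground_points, cell=1.0):
    """Grid ground points into a simple DTM: minimum z per cell.

    Returns a dict mapping (col, row) cell indices to an elevation,
    i.e. a sparse raster of the bare-earth surface.
    """
    cells = defaultdict(list)
    for x, y, z in ground_points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {k: min(v) for k, v in cells.items()}

ground = [(0.2, 0.3, 10.5), (0.8, 0.1, 10.2), (1.5, 0.4, 11.0)]
dtm = make_dtm(ground, cell=1.0)   # two occupied cells: 10.2 m and 11.0 m
```

A DEM or RGB raster follows the same gridding idea applied to all points (or their colors) instead of only the ground class.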