Mobile Lidar Processing

This workflow covers processing raw mobile lidar data from a native Phoenix system, optionally using a Ladybug5+ camera for colorization. Steps 4 and 11 apply only to Ladybug5+ users. This workflow, minus steps 4 and 11, may also be an effective workflow for processing pedestrian-acquired data.

  1. If using a RECON system, open one of the DATA files. SpatialExplorer will detect the other associated DATA files and extract the lidar, imagery, and navigation data.

  2. Open the PLP file, then select a CRS for the project.

  3. If working with data from a Riegl lidar scanner, convert the RXP files to SDCX files.

  4. If using a Ladybug camera, import the Ladybug PGR streams.

    1. Create receptor masks for each of the 6 Ladybug receptors.

  5. Import a processed trajectory (CTS, CLS, SBET, POF). If you do not yet have a processed trajectory, produce one using either InertialExplorer, NavLab embedded, or NavLab via LiDARMill.

    1. When processing data acquired via a mobile platform (car, truck, or other ground vehicle), it is recommended to use the Ground Vehicle dynamics profiles in NavLab or InertialExplorer.

  6. Configure processing settings for the lidar. Consider a 360-degree field of view and a minimum range of 0.5 meters.
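
To illustrate what a minimum-range cutoff accomplishes, here is a minimal Python sketch. This is not SpatialExplorer functionality; the point format and function name are hypothetical. Returns closer than the cutoff are typically hits on the vehicle itself (roof rack, hood) and should be excluded:

```python
import math

def filter_returns(points, min_range=0.5, max_range=None):
    """Drop lidar returns closer than min_range (meters) to the scanner
    origin. Hypothetical point format: (x, y, z) in the scanner frame."""
    kept = []
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r < min_range:
            continue  # e.g. a return off the vehicle body
        if max_range is not None and r > max_range:
            continue
        kept.append((x, y, z))
    return kept

# A ~0.17 m return (vehicle body) is dropped; a distant return is kept.
sample = [(0.1, 0.1, 0.1), (8.0, 8.0, 3.0)]
print(filter_returns(sample, min_range=0.5))  # [(8.0, 8.0, 3.0)]
```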

  7. Create processing intervals (these intervals will also apply to imagery).

  8. Visually check lidar relative accuracy and determine what degree and type of optimization needs to be performed. Consider reviewing trajectory accuracy reports to determine what trajectory parameters (X, Y, Z, yaw, pitch, roll) require optimization. It's not uncommon to solve for all parameters with mobile data sets.
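
For intuition on why yaw so often needs optimization in mobile data: an angular error produces a lateral point displacement that grows linearly with range, so even a small heading misalignment shows up as split or doubled walls at distance. A small illustrative calculation in plain Python (not part of SpatialExplorer):

```python
import math

def lateral_error(range_m, angle_error_deg):
    """Lateral point displacement (meters) caused by an angular error
    such as a yaw misalignment: displacement = range * angle (radians)."""
    return range_m * math.radians(angle_error_deg)

# A 0.05-degree yaw error displaces a point at 50 m by roughly 4.4 cm.
print(round(lateral_error(50.0, 0.05), 3))  # ≈ 0.044
```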

  9. Run LiDARSnap and optimize for necessary parameters. Typically, the LiDARSnap Mobile Trajectory Optimization preset works well. Consider whether ground control points should be enabled with LiDARSnap, to act as a vertical constraint.

  10. If ground control is available, compute residuals from point cloud to control points to determine what rigid adjustment, if any, is needed to match lidar elevations to ground control elevations.
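
The rigid vertical adjustment in this step is, conceptually, just the mean of the control-point residuals. A minimal Python sketch with hypothetical elevation values (SpatialExplorer computes this for you; the code below only shows the arithmetic):

```python
def vertical_residuals(cloud_z_at_control, control_z):
    """Residual = lidar surface elevation minus surveyed control
    elevation at each checkpoint (all values in meters)."""
    return [lz - cz for lz, cz in zip(cloud_z_at_control, control_z)]

def rigid_dz(residuals):
    """The mean residual is the single vertical shift that best aligns
    the cloud to control in a least-squares sense."""
    return sum(residuals) / len(residuals)

lidar_z = [101.52, 98.07, 104.96]  # hypothetical elevations sampled from the cloud
ctrl_z = [101.47, 98.02, 104.93]   # surveyed control elevations
res = vertical_residuals(lidar_z, ctrl_z)
print(round(rigid_dz(res), 3))  # ≈ 0.043; subtract this shift from the cloud
```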

  11. If using a Ladybug, calibrate the camera and colorize the point cloud:

    1. Manually create intervals specifically for camera calibration. Your intervals should include about 50-100 images. Try to include opposing lanes of traffic or a hashtag pattern in the trajectory, if available.

    2. Activate images along the processing intervals made in step 11.1.

    3. Calibrate the camera using CameraSnap. Because the Ladybug is not permanently attached to the lidar system's IMU, it is always necessary to calibrate the Ladybug. Consider manually reviewing feature matches to exclude false matches. Review the CameraSnap report and ensure the Average Pixel Offset is less than 1.0.

    4. Colorize the point cloud using CloudColorizer.
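
The Average Pixel Offset checked in step 11.3 is essentially a mean 2-D reprojection residual: how far, in pixels, observed image features sit from where the current calibration reprojects them. A hedged Python sketch (the match format here is assumed for illustration, not CameraSnap's actual report format):

```python
import math

def average_pixel_offset(matches):
    """matches: list of ((u_obs, v_obs), (u_proj, v_proj)) pairs --
    an observed image feature vs. the same feature reprojected through
    the current calibration. Returns the mean 2-D distance in pixels."""
    dists = [math.hypot(uo - up, vo - vp) for (uo, vo), (up, vp) in matches]
    return sum(dists) / len(dists)

matches = [((512.0, 300.0), (512.6, 300.8)),
           ((1040.0, 722.0), (1039.5, 721.7))]
print(round(average_pixel_offset(matches), 2))  # ≈ 0.79, under the 1.0 threshold
```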

  12. Perform any additional classification necessary. Users can make use of the Classify on Selection window, as well as some automated classification routines.

  13. Generate accuracy reports.

  14. Generate deliverables such as rasters (RGB raster, DTM, DEM) and vector deliverables (contours and meshes).
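
For intuition on raster deliverables such as a DTM, here is a toy gridding sketch in plain Python: it keeps the minimum elevation per cell as a crude ground estimate. Real DTM generation in SpatialExplorer operates on ground-classified points and is considerably more sophisticated; everything below is illustrative:

```python
def grid_dtm(points, cell=1.0):
    """Toy DTM gridding: bin (x, y, z) points into square cells of side
    `cell` meters and keep the minimum z per cell as the ground estimate.
    Returns a sparse raster as {(col, row): z}."""
    dtm = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in dtm or z < dtm[key]:
            dtm[key] = z
    return dtm

pts = [(0.2, 0.3, 101.8), (0.7, 0.4, 100.9), (1.5, 0.2, 101.1)]
print(grid_dtm(pts))  # {(0, 0): 100.9, (1, 0): 101.1}
```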
