Mobile Lidar Processing
This workflow covers processing raw mobile lidar data from a native Phoenix system, optionally using a Ladybug5+ camera for colorization. Steps 4 and 13 apply only to Ladybug5+ users. This workflow, minus steps 4 and 13, can also be effective for processing pedestrian-acquired data.
1. If using a RECON system, open one of the DATA files. SpatialExplorer will detect the other associated DATA files and then extract the lidar, imagery, and navigation data.
2. Open the PLP file, then select a CRS for the project.
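CRS selection happens in SpatialExplorer's project dialog, but a candidate CRS can be sanity-checked outside the application first. The sketch below uses pyproj (an assumption on our part; it is not part of SpatialExplorer, and EPSG:32611 is only an example code) to confirm a CRS's name and horizontal units before selecting the equivalent system in the project:

```python
# A minimal sketch, assuming pyproj is installed; EPSG:32611 is an example only.
from pyproj import CRS

crs = CRS.from_epsg(32611)         # WGS 84 / UTM zone 11N
print(crs.name)                    # human-readable name of the CRS
print(crs.axis_info[0].unit_name)  # confirm horizontal units ("metre" here)
```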
3. If working with data from a Riegl lidar scanner, convert the RXP files to SDCX files.
4. If using a Ladybug camera, import the Ladybug PGR streams.
4.1. Create receptor masks for each of the six Ladybug receptors.
5. Import a processed trajectory (CTS, CLS, SBET, POF). If you do not yet have a processed trajectory, produce one using InertialExplorer, NavLab embedded, or NavLab via LiDARMill.
5.1. When processing data acquired from a mobile platform (car, truck, or other ground vehicle), it is recommended to use the Ground Vehicle dynamics profiles in NavLab or InertialExplorer.
6. Import ground control points (if available).
7. Configure the lidar processing settings. Consider a 360-degree field of view and a minimum range of 0.5 meters.
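SpatialExplorer applies these settings internally; the sketch below only illustrates what a minimum-range cutoff does, using NumPy and made-up values. Returns closer than the cutoff (typically the vehicle, mounts, or operator) are discarded:

```python
# Illustrative only: the effect of a 0.5 m minimum range on raw returns.
import numpy as np

ranges = np.array([0.2, 0.7, 3.4, 0.4, 12.1])  # sensor-to-target distances, meters
keep = ranges >= 0.5                           # drop near-field returns
print(ranges[keep])                            # [ 0.7  3.4 12.1]
```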
8. Create processing intervals (these intervals will also apply to imagery).
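Conceptually, a processing interval is just a time span, and applying one amounts to masking points (and images) by timestamp. A hedged NumPy sketch, with assumed field names and values:

```python
# Illustrative only: intervals as (start, end) time spans used to mask data.
import numpy as np

gps_time = np.array([100.0, 151.2, 203.7, 260.4, 311.9])  # per-point timestamps, s
intervals = [(150.0, 210.0), (300.0, 320.0)]              # (start, end) spans

mask = np.zeros(gps_time.shape, dtype=bool)
for start, end in intervals:
    mask |= (gps_time >= start) & (gps_time <= end)       # keep data inside any span
print(gps_time[mask])                                     # [151.2 203.7 311.9]
```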
9. Visually check lidar relative and absolute accuracy to determine what type of trajectory optimization needs to be performed. Consider reviewing trajectory accuracy reports to determine which trajectory parameters (X, Y, Z, yaw, pitch, roll) require optimization, and check whether ground control points require both horizontal and vertical adjustment or only vertical adjustment.
10. If horizontal adjustment is needed to accurately match ground control point locations, create Full corrections.
11. Run LiDARSnap and optimize the necessary parameters. The LiDARSnap Mobile Trajectory Optimization preset typically works well. Consider how to treat your control points:
11.1. If Full corrections were created manually, enable them and apply a weight between 2 and 100. Consider whether ground control points should also be enabled in LiDARSnap to act as a vertical constraint.
11.2. If Full corrections were not created and only a vertical adjustment is required to match ground control, consider enabling control in the General tab of LiDARSnap.
12. If ground control is available, compute residuals from the point cloud to the control points to determine what rigid adjustment, if any, is needed to match lidar elevations to ground control elevations (see the sketch below). This step may not apply to all mobile data sets.
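The sketch below shows one generic way to compute these residuals (it is not the SpatialExplorer implementation, and it assumes SciPy is available and that the inputs are plain-text XYZ files): for each control point, gather nearby cloud points, compare elevations, and inspect the statistics. A near-constant offset suggests a rigid vertical shift.

```python
import numpy as np
from scipy.spatial import cKDTree

cloud = np.loadtxt("cloud.xyz")      # assumed Nx3 array: x, y, z
control = np.loadtxt("control.xyz")  # assumed Mx3 array of surveyed points

tree = cKDTree(cloud[:, :2])         # search horizontally only
residuals = []
for cp in control:
    idx = tree.query_ball_point(cp[:2], r=0.25)            # cloud points within 25 cm
    if idx:
        residuals.append(np.median(cloud[idx, 2]) - cp[2])  # lidar z minus control z

residuals = np.array(residuals)
print("mean dz: %.3f m, std: %.3f m" % (residuals.mean(), residuals.std()))
# A small std with a nonzero mean suggests a rigid vertical adjustment of -mean.
```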
13. If using a Ladybug, calibrate the camera and colorize the point cloud:
13.1. Manually create intervals specifically for camera calibration. These intervals should include roughly 50-100 images. If available, try to include opposing lanes of traffic or a hashtag pattern in the trajectory.
13.2. Activate images along the processing intervals created in step 13.1.
13.3. Calibrate the camera using CameraSnap. Because the Ladybug is not permanently attached to the lidar system's IMU, calibrating the Ladybug is always necessary. Consider manually reviewing feature matches to exclude false matches. Review the CameraSnap report and ensure the Average Pixel Offset is less than 1.0 (the sketch after step 13.4 illustrates why this threshold matters for colorization).
13.4. Colorize the point cloud using CloudColorizer.
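CameraSnap and CloudColorizer handle the Ladybug's spherical multi-camera model; the sketch below only shows the underlying principle with a simplified pinhole camera, and every name and parameter in it is an illustrative assumption: project each point into an image and sample the pixel color.

```python
# A minimal sketch of colorization with an assumed pinhole model (fx, fy, cx, cy).
import numpy as np

def colorize(points_cam, image, fx, fy, cx, cy):
    """points_cam: Nx3 points already in the camera frame (z forward, meters)."""
    x, y, z = points_cam.T
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.round(fx * x / z + cx)                      # pixel column
        v = np.round(fy * y / z + cy)                      # pixel row
    h, w = image.shape[:2]
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)  # visible in this frame
    rgb = np.zeros((len(points_cam), 3), dtype=np.uint8)
    rgb[ok] = image[v[ok].astype(int), u[ok].astype(int)]   # sample projected pixel
    return rgb, ok
```

This is also why the Average Pixel Offset from step 13.3 matters: a calibration error of more than about a pixel shifts every projected point onto the wrong pixels, smearing colors across object boundaries.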
14. Perform any additional classification as necessary. Users can make use of the Classify on Selection window, as well as several automated classification routines.
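As a rough illustration of what an automated routine does (a toy heuristic, not SpatialExplorer's algorithms, which are considerably more sophisticated): grid the cloud and flag points near the local minimum elevation as ground.

```python
# A toy ground filter: flag points within tol meters of their cell's minimum z.
import numpy as np

def classify_ground(xyz, cell=1.0, tol=0.15):
    cells = {}
    for k, (i, j) in enumerate(np.floor(xyz[:, :2] / cell).astype(int)):
        cells.setdefault((i, j), []).append(k)       # bucket point indices by cell
    ground = np.zeros(len(xyz), dtype=bool)
    for idx in cells.values():
        zmin = xyz[idx, 2].min()
        ground[idx] = xyz[idx, 2] - zmin <= tol      # near the local minimum = ground
    return ground
```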
15. Generate accuracy reports.
16. Generate deliverables such as rasters (RGB raster, DTM, DEM) and vector deliverables (contours and meshes).
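These exports come directly from SpatialExplorer. Purely to illustrate what a DTM raster is, the sketch below grids already-classified ground points and keeps the lowest elevation per cell; the function name and cell size are assumptions:

```python
# A minimal DTM-gridding sketch over pre-classified ground points.
import numpy as np

def grid_dtm(ground_xyz, cell=0.5):
    xy = ground_xyz[:, :2]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / cell).astype(int) + 1
    dtm = np.full((rows, cols), np.nan)                 # NaN marks empty cells
    c, r = ((xy - origin) / cell).astype(int).T         # column/row index per point
    for ci, ri, z in zip(c, r, ground_xyz[:, 2]):
        if np.isnan(dtm[ri, ci]) or z < dtm[ri, ci]:
            dtm[ri, ci] = z                             # keep lowest return per cell
    return dtm, origin, cell
```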