Here is a comparison of the depth data before and after some pre-processing. The first video shows the (mostly) unprocessed depth image stream on the left and the unprocessed RGB image stream on the right; both stream at 15 Hz. The main issue with this data is that it is not directly usable for terrain analysis, because it does not take into account the orientation of the sensor that collected it: it essentially shows a rotated version of the terrain. While it is possible to analyze the depth image to estimate terrain roughness, and to run semantic segmentation or feature extraction on the RGB image, the conclusions from these analyses are unoriented with respect to the gravity vector, meaning that safe-looking landing sites could still be prohibitively slanted.
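To make the slant problem concrete, here is a minimal sketch (not the actual processing pipeline) of why a camera-frame analysis can be misleading. The function name, the 25-degree gimbal pitch, and the frame conventions are illustrative assumptions: a surface patch whose normal points straight back at the sensor looks perfectly flat in the depth image, yet its true slope equals the sensor's off-nadir angle.

```python
import numpy as np

def slope_deg(normal_cam, gravity_cam):
    """Angle in degrees between a surface normal and the up direction,
    both expressed in the camera frame."""
    n = normal_cam / np.linalg.norm(normal_cam)
    up = -gravity_cam / np.linalg.norm(gravity_cam)
    return np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))

# A patch at constant range looks flat in the depth image:
# its normal points straight back along the optical axis.
flat_in_camera = np.array([0.0, 0.0, -1.0])

# Hypothetical case: the gimbal is pitched 25 degrees off nadir, so gravity
# in the camera frame is tilted by the same amount. The "flat-looking"
# patch actually has a 25-degree slope relative to gravity.
pitch = np.radians(25.0)
gravity_cam = np.array([0.0, np.sin(pitch), np.cos(pitch)])
print(slope_deg(flat_in_camera, gravity_cam))  # ≈ 25 degrees
```

This is exactly the failure mode described above: without the gravity vector, the depth image alone cannot distinguish a level field from a 25-degree hillside.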
This is the same footage as above, but with some pre-processing. It primarily uses the depth information, which provides the overall structure and contours of the surface. It also projects the RGB textures onto the depth image, which makes the result more human-readable. Finally, the whole surface is rotated so that it aligns with the perceived gravity vector. This is useful for gaining an intuition of what the sensor actually perceives of the ground. Towards the end, the field of view shrinks as the drone descends and lands. At the very end, the image becomes noisy as the gimbal automatically points forward instead of down, a result of the flight controller disarming.
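The two geometric steps above (back-projecting the depth image into a colored point cloud, then rotating it to align with gravity) can be sketched roughly as follows. This is a simplified illustration, not the actual pipeline: the pinhole intrinsics `fx, fy, cx, cy`, the assumption that depth and RGB are already pixel-aligned, and the gravity vector coming from the IMU are all assumptions for the sake of the example.

```python
import numpy as np

def depth_to_colored_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a metric depth image into a 3-D point cloud and attach
    the RGB texture. Assumes depth (H, W) and rgb (H, W, 3) are pixel-aligned
    and that fx, fy, cx, cy are known pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0  # drop pixels with no depth return
    return points[valid], colors[valid]

def align_with_gravity(points, gravity_cam):
    """Rotate the cloud so the camera-frame gravity direction maps onto the
    +Z axis (Rodrigues rotation between two unit vectors)."""
    g = gravity_cam / np.linalg.norm(gravity_cam)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = np.dot(g, z)
    if np.isclose(c, -1.0):
        R = np.diag([1.0, -1.0, -1.0])  # 180-degree case: flip about X
    else:
        K = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        R = np.eye(3) + K + K @ K / (1.0 + c)
    return points @ R.T
```

Once the cloud is expressed in a gravity-aligned frame, slope and roughness measurements become meaningful for landing-site selection, which is the point of this pre-processing step.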