licornea_tools

calibration
   calibrate_intrinsics
   cameras_from_checkerboards
   cg_choose_refgrid
   cg_compare_straight_depths
   cg_cors_viewer_f
   cg_cors_viewer_v
   cg_filter_features
   cg_generate_artificial
   cg_measure_optical_flow_slopes
   cg_model_optical_flow_slopes
   cg_optical_flow_cors
   cg_optical_flow_features
   cg_rcpos_from_cors
   cg_redistribute_cors
   cg_rotation_from_depths
   cg_rotation_from_fslopes
   cg_slopes_viewer
   cg_stitch_cameras
   cg_straight_depths_from_depths
   cg_straight_depths_from_disparity
   cg_visualize_fslopes
   copy_cors
   cors_info
   cors_view_fpoints
   evaluate_calibration
   export_feature_depths
   merge_cors
   read_feature_depths
   remove_cors
   undistort_cors
   undistort_fpoints
   undistort_image
camera
   export_mpeg
   import_matlab
   import_mpeg
   import_xml
   merge_cameras
   restrict_cameras
   transform
   visualize
dataset
   duplicates
   flip
   missing
   slice
   view_dataset
kinect
   calibrate_color_ir_reprojection
   checkerboard_color_depth
   checkerboard_depth_parallel
   checkerboard_depth_samples
   checkerboard_depth_stat
   checkerboard_depth_viewer
   checkerboard_samples
   close_kinect
   color_intrinsic_reprojection
   depth_remapping
   depth_reprojection
   fetch_internal_parameters
   import_raw_data
   internal_ir_intrinsics
   ir_distortion_viewer
   ir_intrinsic_reprojection
   parallel_wall
   remapping_viewer
   reprojection_viewer
   viewer
misc
   apply_homography
   cam_rotation
   cat_obj_img_cors
   copy_json
   extract_all
   extract_parametric
   homography_maximal_border
   psnr
   touch
   view_depth
   view_distortion
   view_syn
   yuv_export
   yuv_import
vsrs
   export_for_vsrs
   list_increase_baseline_experiments
   list_parametric_experiments
   list_skip_n_experiments
   make_vsrs_config
   psnr_experiments
   run_vsrs
   run_vsrs_experiments
   vsrs_disparity

Import dataset from Kinect

This is how to import an entire dataset (images and depths) from Kinect. It copies and renames the files, and reprojects and upsamples the depth maps.

1. Prepare dataset parameters

Make a dataset parameters file parameters.json for the dataset. It should indicate the location of the input images and raw depth maps (not reprojected) from the Kinect, in the kinect_raw group. It should also indicate the location of the final images and reprojected depth maps, in the root group.

The Kinect reprojection must have been calibrated beforehand. Put the location of the reprojection parameters file reprojection.json into the kinect_raw group, under kinect_reprojection_parameters_filename.

If the images are already at the correct location, put the same value into both image_filename_format entries. (This assumes the numbering in kinect_raw is not different.)
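
As an illustration, the resulting parameters.json might look like the following. Only the kinect_raw group, the root group, image_filename_format, and kinect_reprojection_parameters_filename are named in this document; the depth filename key, the placeholder syntax, and the paths are hypothetical:

```json
{
    "root": {
        "image_filename_format": "images/view_{x}.png",
        "depth_filename_format": "depths/view_{x}.png"
    },
    "kinect_raw": {
        "image_filename_format": "raw/color_{x}.png",
        "depth_filename_format": "raw/depth_{x}.png",
        "kinect_reprojection_parameters_filename": "reprojection.json"
    }
}
```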

2. Run the import script

To import all views, run

kinect/import_raw_data.py parameters.json mine

Inside the import_raw_data.py source code, variables can be set to indicate whether to import only images or only depths, and whether to overwrite existing output files.
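
The skip/overwrite behavior described above amounts to a check like the following sketch; the function and variable names are hypothetical, not the actual ones in import_raw_data.py:

```python
import os

def should_process(output_path, overwrite=False):
    # Process a view only if its output file does not exist yet,
    # or if overwriting existing output files was requested.
    # (Hypothetical helper; import_raw_data.py may structure this differently.)
    return overwrite or not os.path.exists(output_path)
```

With overwrite left at False, re-running the script after an interruption only processes the views whose output files are still missing.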

The depth maps will be reprojected using the given reprojection parameters, and then upsampled with the given densification method (here mine). This algorithm is implemented in src/kinect/densify/depth_densify_mine.cc.

If the environment variable LICORNEA_PARALLEL is set to 1, the import runs in parallel. It also shows the estimated time remaining, but this estimate can be wrong if many files are skipped. On Linux, possibly due to a bug in joblib, it can sometimes block near the final views when running in parallel. In that case, terminate it, remove any incomplete output files, set it not to overwrite, and re-run it non-parallel to complete the import.
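
The parallel toggle boils down to an environment check like this sketch (the script's internal logic may differ; only the variable name LICORNEA_PARALLEL and the value 1 come from this document):

```python
import os

def parallel_enabled():
    # Parallel import is active only when LICORNEA_PARALLEL equals "1".
    return os.environ.get("LICORNEA_PARALLEL") == "1"
```

So running `LICORNEA_PARALLEL=1 kinect/import_raw_data.py parameters.json mine` enables the parallel path, and leaving the variable unset (or set to anything else) keeps the import sequential.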