do_eval module
- evaltools.do_eval.main(est_filenames, gt_filenames, evaluation_setting=None)
Perform accuracy evaluation.
This function compares the estimated trajectory against the ground-truth trajectory and computes the configured accuracy metrics.
- Parameters:
est_filenames (String) – Estimated trajectory filename (.csv), columns: [timestamp, x, y, yaw, floor]
gt_filenames (String) – Ground-truth trajectory filename (.csv), columns: [timestamp, x, y, yaw, floor]
evaluation_setting (String) – Evaluation setting filename (.json), format: {eval_name:[bool, param_list], …}
- Returns:
df_evaluation_results_tl – Intermediate evaluation results, columns: [timestamp, type, value]. type: type of evaluation metric, one of {CE, CA, EAG, VE, OE, …}; value: error at each timestamp
- Return type:
pandas.DataFrame
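A minimal usage sketch of main() is shown below. The file paths, the concrete eval_name keys ("CE", "CA"), and their parameter lists are illustrative assumptions, not values defined by evaltools; only the function signature, the column layouts, and the {eval_name: [bool, param_list], …} setting format come from the documentation above.

```python
import json

from evaltools import do_eval

# Evaluation setting in the documented {eval_name: [bool, param_list], ...}
# format. The keys and parameter lists below are assumptions; the bool is
# assumed to enable/disable the metric.
setting = {
    "CE": [True, []],          # assumed: CE enabled, no extra parameters
    "CA": [True, [0.5, 1.0]],  # assumed: CA enabled with placeholder parameters
}
with open("evaluation_setting.json", "w") as f:
    json.dump(setting, f)

# Placeholder CSV paths; both files use columns [timestamp, x, y, yaw, floor].
df = do_eval.main(
    "estimated_trajectory.csv",
    "ground_truth_trajectory.csv",
    evaluation_setting="evaluation_setting.json",
)

# The returned DataFrame has columns [timestamp, type, value];
# e.g. summarize only the CE rows.
ce_errors = df[df["type"] == "CE"]
print(ce_errors["value"].describe())
```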
- evaltools.do_eval.main_cl()
Command-line entry point for the evaluation.
- evaltools.do_eval.pickle_rapper(pickle_filename, evaluation_setting)
This function is triggered by the -p command-line option.
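A hedged sketch of calling pickle_rapper directly (rather than through the -p option); the file names are placeholders, and evaluation_setting is assumed here to be the same .json settings file accepted by main().

```python
from evaltools import do_eval

# Placeholder pickle and settings file names (assumptions, not package defaults).
do_eval.pickle_rapper("evaluation_input.pkl", "evaluation_setting.json")
```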