do_eval_rel module

evaltools.do_eval_rel.main(est_filename1, est_filename2, gt_filename1, gt_filename2, evaluation_setting, id1=1, id2=2)
Parameters:
  • est_filename1, est_filename2 (String) – Estimated trajectory filenames (.csv), columns: [timestamp, x, y, yaw, floor]

  • gt_filename1, gt_filename2 (String) – Ground-truth trajectory filenames (.csv), columns: [timestamp, x, y, yaw, floor]

  • evaluation_setting (String) – Evaluation-setting filename (.json), format: {eval_name: [bool, param_list], …}

Returns:

df_evaluation_results_tl – Intermediate data of the relative-evaluation results, columns: [timestamp, type, value]

  • type – Type of relative evaluation, one of {RHA, RDA, RPA, …}

  • value – Error at each timestamp

Return type:

pandas.DataFrame
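
A minimal usage sketch for main(). The file names, the contents of the evaluation-setting JSON, and the parameter values inside it are assumptions for illustration, not part of the library:

    import json
    import evaltools.do_eval_rel as do_eval_rel

    # Hypothetical evaluation-setting file in the documented shape
    # {eval_name: [bool, param_list], ...}; the metric names come from the
    # documented types (RHA, RDA, RPA, ...), but the meaning of the bool and
    # the parameter values are assumptions.
    setting = {"RHA": [True, [10]], "RDA": [True, [10]], "RPA": [False, []]}
    with open("eval_setting.json", "w") as f:
        json.dump(setting, f)

    # Compare two estimated trajectories against their ground truths.
    df = do_eval_rel.main(
        "est_1.csv", "est_2.csv",   # estimated trajectories for id1 and id2
        "gt_1.csv", "gt_2.csv",     # ground-truth trajectories
        "eval_setting.json",
        id1=1, id2=2,
    )
    # df has columns [timestamp, type, value]; summarise the error per type.
    print(df.groupby("type")["value"].describe())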

evaltools.do_eval_rel.main_cl()
evaltools.do_eval_rel.main_with_dp3(est_dir, gt_dir, evaluation_setting=None)

Performs relative evaluation for all pairwise combinations of the trajectories in the specified directories. The estimated and ground-truth files must be linked to each other by name in some way; currently the xDR_challenge_2024 naming convention is assumed, e.g. {dataset}_{id}.csv. A usage sketch follows below.
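
A minimal usage sketch for main_with_dp3(), assuming directories whose files follow the {dataset}_{id}.csv naming rule; the directory and file names are illustrative:

    import evaltools.do_eval_rel as do_eval_rel

    # est_dir and gt_dir are expected to contain files that can be matched
    # by name, e.g. est/dataset_1.csv <-> gt/dataset_1.csv (assumed layout).
    do_eval_rel.main_with_dp3(
        "xdr2024/est",
        "xdr2024/gt",
        evaluation_setting="eval_setting.json",
    )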

evaltools.do_eval_rel.pickle_rapper(pickle_filename, evaluation_setting)

Runs main() with pickle-based input.
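
A minimal usage sketch for pickle_rapper(), assuming "trajectories.pickle" already holds the trajectory data that main() would otherwise read from CSV; both file names are illustrative:

    import evaltools.do_eval_rel as do_eval_rel

    # Run the same relative evaluation as main(), but from pickled input.
    do_eval_rel.pickle_rapper("trajectories.pickle", "eval_setting.json")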