Releases: DCBIA-OrthoLab/SlicerDentalModelSeg

Release with trained model for the 3D Teeth Scan Segmentation and Labeling Challenge

14 Sep 14:44

This model was trained on the data available for the 3D Teeth Scan Segmentation and Labeling Challenge.

Our crown segmentation algorithm took second place in the challenge.
The ranking is based on three metrics, which are combined into a final score as follows (a minimal sketch of these computations appears after the list):

  • Teeth localization accuracy (TLA): the mean normalized Euclidean distance between each ground-truth (GT) tooth centroid and the closest localized tooth centroid, with each distance normalized by the size of the corresponding GT tooth. If no centroid is produced (e.g. the algorithm crashes or its output is missing for a given scan), a nominal penalty of 5 is applied per GT tooth, corresponding to a distance of 5 times the GT tooth size. Because the number of teeth per patient varies, the mean is computed over all GT teeth in the two testing sets.
  • Teeth identification rate (TIR): the percentage of correct identifications relative to all GT teeth in the two testing sets. An identification is correct when, for a given GT tooth, the closest detected tooth centroid lies within half the GT tooth size and carries the same label as the GT tooth.
  • Teeth segmentation accuracy (TSA): the average F1-score over all tooth point-cloud instances, where the F1-score of each tooth instance is F1 = 2 * (precision * recall) / (precision + recall).
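
The sketch below shows one way these three metrics can be computed, assuming simple in-memory inputs: each GT or predicted tooth is a dict with `centroid` (NumPy vector), `size` (scalar), and `label` keys, and each segmentation instance is a boolean point mask. These names and data structures are illustrative assumptions, not the challenge's official evaluation code.

```python
import numpy as np

def tla(gt_teeth, pred_centroids, penalty=5.0):
    """Teeth localization accuracy: mean distance from each GT centroid to the
    closest predicted centroid, normalized by GT tooth size (penalty if none)."""
    distances = []
    for tooth in gt_teeth:
        if len(pred_centroids) == 0:
            distances.append(penalty)  # nominal penalty of 5 per GT tooth
            continue
        d = np.linalg.norm(np.asarray(pred_centroids) - tooth["centroid"], axis=1)
        distances.append(d.min() / tooth["size"])  # normalize by GT tooth size
    return float(np.mean(distances))

def tir(gt_teeth, pred_teeth):
    """Teeth identification rate: fraction of GT teeth whose closest predicted
    centroid is within half the GT tooth size and carries the same label."""
    hits = 0
    for tooth in gt_teeth:
        if not pred_teeth:
            continue
        dists = [np.linalg.norm(p["centroid"] - tooth["centroid"]) for p in pred_teeth]
        j = int(np.argmin(dists))
        if dists[j] < 0.5 * tooth["size"] and pred_teeth[j]["label"] == tooth["label"]:
            hits += 1
    return hits / len(gt_teeth)

def tsa(gt_masks, pred_masks):
    """Teeth segmentation accuracy: average per-instance F1 over paired
    boolean point masks of equal length."""
    scores = []
    for gt, pred in zip(gt_masks, pred_masks):
        tp = np.logical_and(gt, pred).sum()
        precision = tp / pred.sum() if pred.sum() else 0.0
        recall = tp / gt.sum() if gt.sum() else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        scores.append(f1)
    return float(np.mean(scores))
```

How the three scores are weighted into the final challenge ranking is defined by the challenge organizers and is not reproduced here.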
