Statistics

Task agreement

Task agreement shows the degree of labeling consensus among multiple collaborators performing the same task.

Matching score

Each agreement matching score m(x, y) is computed differently depending on the labeling types present in the results x and y.

Currently we are using the following matching scores:

Choices

m(x, y) = 1 if x and y are exactly matching choices, and m(x, y) = 0 otherwise
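The rule above can be sketched as a small function, treating each result as the set of selected choices (the result format is an assumption for illustration):

```python
def choices_match(x, y):
    """Exact-match score for Choices results: 1.0 when the selected
    choice sets are identical, 0.0 otherwise."""
    return 1.0 if set(x) == set(y) else 0.0

choices_match(["Positive"], ["Positive"])  # identical -> 1.0
choices_match(["Positive"], ["Negative"])  # different -> 0.0
```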

TextArea

Each TextArea result x or y contains a list of texts, e.g. x = [x1, x2, ..., xN]. The matching score is computed over the texts using an edit distance.

There are several options available for this matching function:

Algorithm - the edit distance algorithm to use

Split by - whether the texts are split into words or into characters
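Under these options, a plausible sketch of the matching function is a normalized Levenshtein distance averaged over paired texts. The position-wise pairing and the averaging are assumptions; the helper names are illustrative:

```python
def edit_distance(a, b):
    # Classic one-row dynamic-programming Levenshtein distance over sequences.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def text_similarity(a, b, split_by="words"):
    # "Split by" controls whether the distance is taken over word or char tokens.
    ta = a.split() if split_by == "words" else list(a)
    tb = b.split() if split_by == "words" else list(b)
    longest = max(len(ta), len(tb)) or 1
    return 1.0 - edit_distance(ta, tb) / longest

def textarea_match(x, y, split_by="words"):
    # Average per-text similarity; texts are compared position-wise
    # (an assumption about how lists of texts are paired).
    n = max(len(x), len(y))
    pairs = zip(x + [""] * (n - len(x)), y + [""] * (n - len(y)))
    return sum(text_similarity(a, b, split_by) for a, b in pairs) / n
```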

Labels

Intersection over the resulting spans: m(x, y) = |spans(x) ∩ spans(y)|, normalized by the total length of the spans
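A minimal sketch of this span-intersection score, assuming each result is a list of (start, end, label) tuples, that only spans with the same label can match, and that the score is normalized by the union of span lengths (these details are assumptions):

```python
def spans_overlap_score(x, y):
    """Overlap of labeled spans between two results, normalized by the
    total (union) length of the spans. Each result is a list of
    (start, end, label) tuples; only same-label spans can intersect."""
    def total(spans):
        return sum(end - start for start, end, _ in spans)

    inter = 0
    for s1, e1, l1 in x:
        for s2, e2, l2 in y:
            if l1 == l2:
                inter += max(0, min(e1, e2) - max(s1, s2))
    union = total(x) + total(y) - inter
    return inter / union if union else 1.0
```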

Rating

m(x, y) = 1 if x and y are exactly matching ratings, and m(x, y) = 0 otherwise

Ranker

Mean average precision (mAP)
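Average precision, the per-task building block of mAP, can be sketched as below. How the set of "relevant" items is derived from the other annotator's ranking is implementation-specific and not stated here, so the function takes it as an input:

```python
def average_precision(ranking, relevant):
    """Average precision of a ranked list against a set of relevant items:
    the mean of precision@k at each rank k where a relevant item appears."""
    hits, precisions = 0, []
    for k, item in enumerate(ranking, 1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0
```

mAP is then the mean of this value over the tasks being compared.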

RectangleLabels

There are several possible choices available on the project settings page:
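A common matching score for bounding boxes is Intersection over Union (IoU); whether that is among the options here is an assumption. A minimal sketch for axis-aligned rectangles given as (x, y, width, height):

```python
def rect_iou(a, b):
    """Intersection over Union for two axis-aligned boxes,
    each given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```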

PolygonLabels

There are several possible choices available on the project settings page:

Agreement method

The agreement method defines how the matching scores between all completions for a single task are combined into a single inter-annotator agreement score.

There are several possible methods you can specify on the project settings page:

Complete linkage

Single linkage

No grouping
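One plausible reading of these methods, borrowing the linkage terminology from hierarchical clustering (the mapping of names to aggregations is an assumption for illustration): complete linkage keeps the worst pairwise score, single linkage the best, and no grouping averages over all pairs:

```python
from itertools import combinations

def task_agreement(results, match, method="no_grouping"):
    """Combine pairwise matching scores over all completions of one task.
    `match` is any pairwise matching function m(x, y) from above."""
    scores = [match(a, b) for a, b in combinations(results, 2)]
    if not scores:
        return 1.0  # a single completion trivially agrees with itself
    if method == "complete_linkage":
        return min(scores)   # worst pairwise score
    if method == "single_linkage":
        return max(scores)   # best pairwise score
    return sum(scores) / len(scores)  # no grouping: mean over all pairs
```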

Example