Commit a3e8ee75 authored by kunyin2

Update README.md

parent 582a0370
https://github.com/bst-mug/n2c2
## Table of results:
I uploaded all outputs under the `original_output` folder, including the Baseline, RBC, SVM, LR, and LSTM models.
### Overall F1 score per criterion on the test set, compared with the baseline, a majority classifier:
| Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
|---|---|---|---|---|---|
| Overall (macro) | 0.427 | 0.7525 | 0.5899 | 0.5714 | 0.497|
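The tables report both micro- and macro-averaged overall scores. As a minimal sketch of the difference (pure Python, with illustrative labels that are NOT the repository's actual predictions): the macro score is the unweighted mean of per-class F1, while the micro score pools every decision before scoring.

```python
# Sketch: macro vs. micro F1 averaging for a binary "met" / "not met" task.
# The labels below are made-up examples, not outputs from this repository.

def f1_per_class(y_true, y_pred, label):
    """F1 for one class, treated as the positive label."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = ["met", "met", "met", "met", "not met", "not met"]
y_pred = ["met", "met", "met", "met", "met", "not met"]
labels = ["met", "not met"]

# Macro: unweighted mean of per-class F1 scores.
macro_f1 = sum(f1_per_class(y_true, y_pred, lab) for lab in labels) / len(labels)

# Micro: pool all decisions; when every class is scored, this equals accuracy.
micro_f1 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(round(macro_f1, 4), round(micro_f1, 4))  # 0.7778 0.8333
```

On an imbalanced criterion the two diverge, which is why the tables list both.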
### Overall accuracy per criterion on the test set, compared with the baseline, a majority classifier
| Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
|---|---|---|---|---|---|
| Abdominal | 0.651162 | 0.883720 | 0.651162 | 0.662790 | 0.569767 |
| Makes-decisions | 0.906976 | 0.965116 | 0.965116 | 0.965116 | 0.965116|
| Mi-6mos | 0.906976 | 0.965116 | 0.930232 | 0.767441 | 0.965116|
| Overall (micro) | 0.764758 | 0.912343 | 0.809481| 0.808586 | 0.7495527|
| Overall (macro) | 0.764758 | 0.91234 | 0.809481 | 0.808586 | - |