From a3e8ee751564d2b4816c537d08562dc9071796b3 Mon Sep 17 00:00:00 2001
From: kunyin2 <kunyin2@illinois.edu>
Date: Sun, 8 May 2022 23:24:18 -0500
Subject: [PATCH] Update README.md

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 9b3837a..b25aeb9 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ https://github.com/bst-mug/n2c2
 ## Table of results:
 I uploaded all outputs under the `original_output` folder, including the Baseline, RBC, SVM, LR, and LSTM models.
 
-Overall F1 score per criterion on the test set, compared with the baseline, a majority classifier:
+### Overall F1 score per criterion on the test set, compared with the baseline, a majority classifier
 
 | Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
 |---|---|---|---|---|---|
@@ -53,7 +53,7 @@ Overall F1 score per criterion on the test set, compared with the baseline, a ma
 | Overall (macro) | 0.427 | 0.7525 | 0.5899 | 0.5714 | 0.497 |
 
 
-Overall accuracy per criterion on the test set, compared with the baseline, a majority classifier
+### Overall accuracy per criterion on the test set, compared with the baseline, a majority classifier
 | Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
 |---|---|---|---|---|---|
 | Abdominal | 0.651162 | 0.883720 | 0.651162 | 0.662790 | 0.569767 | 
@@ -70,4 +70,4 @@ Overall accuracy per criterion on the test set, compared with the baseline, a ma
 | Makes-decisions | 0.906976 | 0.965116 | 0.965116 | 0.965116 | 0.965116 | 
 | Mi-6mos | 0.906976 | 0.965116 | 0.930232 | 0.767441 | 0.965116 |  
 | Overall (micro) | 0.764758 | 0.912343 | 0.809481 | 0.808586 | 0.7495527 | 
-| Overall (macro) | 0.764758 | 0.91234 | 0.809481 | 0.808586 |
+| Overall (macro) | 0.764758 | 0.91234 | 0.809481 | 0.808586 | - | 
-- 
GitLab