diff --git a/README.md b/README.md
index 9b3837a6c6539e0683c30263a685b5ac8c7f4133..b25aeb9cf10858ba348cd0459960b7c4126dc080 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ https://github.com/bst-mug/n2c2
 ## Table of results:
 I uploaded all outputs under the `original_output` folder, including the Baseline, RBC, SVM, LR, and LSTM models.
 
-Overall F1 score per criterion on the test set, compared with the baseline, a majority classifier:
+### Overall F1 score per criterion on the test set, compared with the baseline, a majority classifier:
 
 | Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
 |---|---|---|---|---|---|
@@ -53,7 +53,7 @@ Overall F1 score per criterion on the test set, compared with the baseline, a ma
 | Overall (macro) | 0.427 | 0.7525 | 0.5899 | 0.5714 | 0.497 |
 
 
-Overall accuracy per criterion on the test set, compared with the baseline, a majority classifier
+### Overall accuracy per criterion on the test set, compared with the baseline, a majority classifier:
 | Criterion | Baseline | RBC | SVM | SELF-LR | SELF-LSTM |
 |---|---|---|---|---|---|
 | Abdominal | 0.651162 | 0.883720 | 0.651162 | 0.662790 | 0.569767 | 
@@ -70,4 +70,38 @@ Overall accuracy per criterion on the test set, compared with the baseline, a ma
 | Makes-decisions | 0.906976 | 0.965116 | 0.965116 | 0.965116 | 0.965116 |
 | Mi-6mos | 0.906976 | 0.965116 | 0.930232 | 0.767441 | 0.965116 |
 | Overall (micro) | 0.764758 | 0.912343 | 0.809481 | 0.808586 | 0.7495527 |
-| Overall (macro) | 0.764758 | 0.91234 | 0.809481 | 0.808586 |
+| Overall (macro) | 0.764758 | 0.91234 | 0.809481 | 0.808586 | - |
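+
+For reference, here is a minimal sketch of how per-criterion and overall scores of this shape can be computed with scikit-learn. The toy `gold`/`pred` labels are hypothetical, and scoring each criterion as the unweighted mean of the MET and NOT MET class F1 scores is an assumption, not code taken from this repository:
+
+```python
+from sklearn.metrics import accuracy_score, f1_score
+
+# Hypothetical toy labels: one "met"/"not met" decision per test record,
+# keyed by criterion. A real run would load these from the model outputs.
+gold = {
+    "Abdominal":     ["met", "not met", "met", "not met"],
+    "Alcohol-abuse": ["not met", "not met", "not met", "met"],
+}
+pred = {
+    "Abdominal":     ["met", "not met", "not met", "not met"],
+    "Alcohol-abuse": ["not met", "not met", "not met", "not met"],
+}
+
+per_criterion_f1 = {}
+for criterion in gold:
+    # Per-criterion F1: unweighted mean of the F1 of the "met" and
+    # "not met" classes (average="macro" over the two label values).
+    per_criterion_f1[criterion] = f1_score(
+        gold[criterion], pred[criterion], average="macro", zero_division=0
+    )
+    print(criterion,
+          round(per_criterion_f1[criterion], 4),
+          round(accuracy_score(gold[criterion], pred[criterion]), 4))
+
+# Overall (macro): unweighted average of the per-criterion scores.
+print("Overall (macro):",
+      round(sum(per_criterion_f1.values()) / len(per_criterion_f1), 4))
+```
+
+Overall (micro) would instead pool the decisions for all criteria into a single pair of lists before scoring; since every criterion appears to be judged on the same test records, micro- and macro-averaged accuracy coincide, which would explain why the last two rows above match.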