diff --git a/emnlp2015.tex b/emnlp2015.tex
index 21ab4e1b235ab2a37a0eea32f50ea624448ebda5..b5bf6ed8e6aa57f5140cd1f24a266a8a7ccc1018 100644
--- a/emnlp2015.tex
+++ b/emnlp2015.tex
@@ -130,7 +130,7 @@ We present a latent alignment algorithm that gives state-of-the-art results on t
 \section{Experiments and Results}
 \input{experiments}
 
-\section{Analyzing the Learned Weights}
+\section{Analysis of the Learned Weights}
 %\section{Weight Vector Analysis}
 %\input{discussion}
 As mentioned previously, our model is interpretable. To explore this further, we trained models with only word conjunction features on both the textual entailment and STS tasks and examined the weight vectors of the models that performed best on their respective dev sets. We found that the model trained for the STS task puts its highest weights on identity word pairs and on word pairs that are related in some way: synonyms like {\it cut} and {\it slice}, lemma matches, or topically related pairs like {\it orange} and {\it slice}. Its most negative weights fell on unrelated word pairs like {\it play} and {\it woman} and on antonyms such as {\it man} and {\it woman}. In contrast, the entailment model placed its highest weights on mapping stop words like {\it a}, {\it which}, and {\it who} to the NULL token. Its most negative weights, not surprisingly, involved negation words like {\it not}, {\it no}, and {\it nobody}, as well as unrelated word pairs like {\it play} and {\it there}.
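The weight inspection described above can be sketched as follows. This is a minimal, hypothetical illustration: the feature representation (word pairs mapped to scalar weights) and all weight values are assumed for the example and are not the paper's actual learned parameters.

```python
# Hypothetical sketch of the analysis: rank learned word-conjunction
# feature weights and read off the most positive / most negative pairs.
# The dictionary below is illustrative data, not the trained model.

def extreme_features(weights, k=2):
    """Return the k highest- and k lowest-weighted features."""
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k], ranked[-k:]

# Assumed toy weight vector over (source word, target word) pairs;
# "NULL" marks a word aligned to nothing, as in the paper's setup.
weights = {
    ("man", "man"): 2.5,       # identity pair -> high weight
    ("cut", "slice"): 2.1,     # synonym pair -> high weight
    ("orange", "slice"): 1.3,  # topically related pair
    ("a", "NULL"): 0.2,        # stop word mapped to NULL
    ("man", "woman"): -1.5,    # antonym pair -> negative weight
    ("play", "woman"): -1.8,   # unrelated pair -> most negative
}

top, bottom = extreme_features(weights, k=2)
print(top)     # two highest-weighted pairs
print(bottom)  # two lowest-weighted pairs
```

On this toy data, the identity and synonym pairs surface at the top and the unrelated/antonym pairs at the bottom, mirroring the qualitative pattern reported for the STS model.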