Commit c3c6b8c0 authored by wieting2's avatar wieting2

fixed section header

parent 7eccbea2
@@ -130,7 +130,7 @@ We present a latent alignment algorithm that gives state-of-the-art results on t
 \section{Experiments and Results}
 \input{experiments}
-\section{Analyzing the Learned Weights}
+\section{Analysis of the Learned Weights}
 %\section{Weight Vector Analysis}
 %\input{discussion}
 As mentioned previously, our model is interpretable. To explore this further, we trained models with only word conjunction features on both the textual entailment and STS tasks and examined the weight vectors of the best-performing models on their respective dev sets. We found that the model trained for the STS task puts its highest weights on identity word pairs and on word pairs that are related in some way: synonyms like {\it cut} and {\it slice}, shared lemmas, or topically related words like {\it orange} and {\it slice}. The most negative weights fall on unrelated word pairs like {\it play} and {\it woman} and on antonyms such as {\it man} and {\it woman}. In contrast, the entailment model places its highest weights on mapping stop words like {\it a}, {\it which}, and {\it who} to the NULL token. Its most negative weights, not surprisingly, involve negation words like {\it not}, {\it no}, and {\it nobody}, as well as unrelated word pairs like {\it play} and {\it there}.
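The weight inspection described in the paragraph above can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes a linear model whose word-conjunction features are exposed as a mapping from word pairs to learned weights, and the example weights are hypothetical.

```python
def top_features(weights, k=5):
    """Return the k most positive and k most negative feature weights,
    each as a list of (feature, weight) pairs sorted by weight."""
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k], ranked[-k:]

# Hypothetical word-conjunction weights from an STS-style model;
# identity and related pairs positive, unrelated pairs negative.
weights = {
    ("dog", "dog"): 2.8,
    ("cut", "slice"): 2.1,
    ("orange", "slice"): 1.4,
    ("man", "woman"): -1.2,
    ("play", "woman"): -1.9,
}
top, bottom = top_features(weights, k=2)
```

Here `top` holds the identity and synonym pairs and `bottom` the unrelated/antonym pairs, mirroring the qualitative pattern reported for the STS model.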