diff --git a/docs/final-report/report.pdf b/docs/final-report/report.pdf
index 3a715fd67b41aef56d4e5390e7a2cee813264867..194397258c578c5adda8c2e7cf8e9c7277382331 100644
Binary files a/docs/final-report/report.pdf and b/docs/final-report/report.pdf differ
diff --git a/docs/final-report/report.tex b/docs/final-report/report.tex
index c4a836cd549b223a397f1ca31d7c57895630cfea..fcc92f6610857287761d30ba373d9d6a3dcd0470 100644
--- a/docs/final-report/report.tex
+++ b/docs/final-report/report.tex
@@ -98,7 +98,7 @@
     While Wikipedia does have a \textit{Physicists} category\footnote{\url{https://en.wikipedia.org/wiki/Category:Physicists}},
     it is fragmented into somewhat arbitrary subcategories and thus not optimal to use as a
     collection.
-    However, Wikipedia also features a "List of physicists" which contains 981 articles
+    However, Wikipedia also features a ``List of physicists''\footnote{\url{https://en.wikipedia.org/wiki/List_of_physicists}} which contains 981 articles
     that were used to build the corpus. \par
     Data scraping was done using the R package \textit{WikipediR} which is a wrapper around the Wikipedia
     API.
@@ -202,7 +202,7 @@ training examples. It was possible to configure the bot to meet our needs withou
 restrictions. \par
 Wikipedia articles are particularly well suited to information extraction,
 because they are generally structured consistently. The different levels of detail, and therefore of information,
-were an issue when dealing in using these articles. \par
+were an issue when using these articles. \par
 Concluding the text-mining part of our project, we can assess that the functions
 relying mainly on NER tags (get\_awards.R and get\_university.R) have high recall and relatively low
 precision. The function get\_spouses.R, which works with pattern matching, has low recall