Lexical Diversity: Improving Writing Through Technology
How the linguistic metric of lexical diversity can help improve writing.
While interning in the engineering department at Scripted.com, I had plenty of questions: "How does that work?" "What's the best way to do this?" "Why do we do it that way?" At some point, a different question entered my head: "Is there any way we can automatically and objectively determine the quality of a given piece of writing?"
Currently, Scripted relies on our team of freelance editors and our own in-house copy editors to pore over all of the writing that passes through our system, and they do a fantastic job. The problem, though, is that two humans rarely agree on something as subjective as writing quality, a problem psychologists call poor inter-rater reliability. My time spent studying psychology made me certain that this kind of question had been considered before, and, after some preliminary research, I discovered that it has!
Measuring Quality
As it turns out, researchers have been working to develop more objective measurements of writing for more than half a century! Of course, the quality of writing cannot be encapsulated in a single number. Instead, researchers have constructed measures that attempt to capture specific elements of writing known to correlate with human ratings of quality. Some highly correlated measures include:
* Syntactic complexity (more complex writing is rated higher)
* Word frequency (documents with high use of uncommon words are rated higher)
* Text length (in general, longer documents are rated higher)
* Lexical diversity (writing with more varied and broad vocabulary is rated higher)
Software packages that compute these measures and more already exist (Coh-Metrix, for example), but many are restricted to academic and research use only, or are too large to suit our needs. With that in mind, I decided to try implementing some of these measures myself in Ruby. After some experimentation, I settled on lexical diversity because it does not require any heavy natural language processing, and there were already a number of established methods for measuring it.
Lexical Diversity Basics
As I mentioned before, a lexical diversity score is a measurement of the breadth and variety of the vocabulary used in a piece of writing. The most basic lexical diversity measurement is called type-token ratio, or TTR. Take this sentence:
The dog jumped over the other dog.
This sentence contains 5 "types" ("the," "dog," "jumped," "over," "other"), and 7 "tokens" (or total words). So the TTR for this sentence is 5/7 or 0.714.
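To make that concrete, here is a minimal Ruby sketch of a TTR calculation. The method name and the crude regex tokenizer are mine, purely for illustration, not code from the actual implementation:

```ruby
# Minimal type-token ratio (TTR) sketch: tokens are lowercased words with
# punctuation stripped; types are the unique tokens among them.
def type_token_ratio(text)
  tokens = text.downcase.scan(/[a-z']+/)   # crude tokenizer, for illustration only
  return 0.0 if tokens.empty?

  tokens.uniq.length.to_f / tokens.length
end

type_token_ratio("The dog jumped over the other dog.") # => 5/7, about 0.714
```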
Unfortunately, TTR has a major problem: it is highly sensitive to text length. The longer the document, the lower the chance that a new token will also be a new type, causing the TTR to drop as more words are added. Fortunately, several other lexical diversity measures have been created specifically to combat this issue.
MTLD, HD-D, and Yule's I
In the end, I implemented three separate lexical diversity measures: the Measure of Textual Lexical Diversity (MTLD), the Hypergeometric Distribution D (HD-D), and Yule's I.
* MTLD, as described by Philip McCarthy and Scott Jarvis (2010), turns TTR's sensitivity to length to its advantage: rather than computing a single ratio, it measures how many words it takes, on average, for a running TTR to fall below a given threshold (a simplified sketch follows this list).
* HD-D is McCarthy and Jarvis' (2007) improvement of vocd-D, another lexical diversity measure. HD-D uses probability to evaluate the contribution of each word in the text to the overall lexical diversity.
* Yule's I is the inverse of Yule's Characteristic K, which was first described by statistical pioneer G. U. Yule in his 1944 book The Statistical Study of Literary Vocabulary. It is computed from the frequency distribution of the words in a text and is specifically designed to be insensitive to text length.
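As a taste of how the first of these works, here is a simplified, forward-only MTLD sketch in Ruby. The published measure also runs the text in reverse and averages the two results, and the tokenizer and method name here are my own illustrative choices rather than the code in the repo; 0.72 is the default threshold from McCarthy and Jarvis (2010):

```ruby
# Simplified, forward-only MTLD sketch: count how many "factors" (stretches of
# text whose running TTR stays at or above the threshold) the document contains,
# then divide the total token count by that factor count.
def mtld_forward(text, threshold = 0.72)
  tokens = text.downcase.scan(/[a-z']+/)
  factors = 0.0
  types = {}
  count = 0

  tokens.each do |token|
    count += 1
    types[token] = true
    ttr = types.length.to_f / count
    if ttr < threshold
      factors += 1        # running TTR fell below the threshold: one full factor
      types.clear
      count = 0
    end
  end

  # Credit the leftover partial factor, as described by McCarthy and Jarvis (2010)
  if count > 0
    ttr = types.length.to_f / count
    factors += (1.0 - ttr) / (1.0 - threshold)
  end

  factors.zero? ? 0.0 : tokens.length / factors
end
```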
It's conventional for research papers to go into extreme detail about their methodology, which was great for me, because I could easily follow along and translate the steps into Ruby. To view the actual code and a more in-depth explanation of the function of each measure, take a look at the GitHub repo.
Early Results
Once the measures were fully implemented, I ran them on several thousand existing documents in the Scripted database to see whether they were working properly. The results were promising: all three measures produce score distributions shaped like classic bell curves, and the scores correlate highly with one another, which tells me that:
* The different measures are actually measuring something.
* They are all measuring the same thing.
Next, I wanted to see if these lexical diversity scores are in any way related to the quality of the writing. Every document in the Scripted system that has gone through the editing process already has a quality rating. So the natural next step was to run a regression analysis between those quality ratings and each of the lexical diversity scores. Despite my hopes, the results were not significant.
Even though I had lost confidence in using lexical diversity as a precise scoring tool, I found that the extreme ends of the scoring range are predictive, especially on the low end. When I focused on documents with remarkably low lexical diversity scores, I finally found a pattern: nearly all of them were what I would consider below our standards for quality. Of course, there are some false positives, and there are certain to be some false negatives, but this is a great sign. We can use this knowledge to detect writing that does not meet our standards.
Lex-D
I soon got to work on implementing this "lexical filter." Instead of slapping this functionality onto Scripted directly, I decided to create a standalone service that computes a single lexical diversity score. Then, whenever a score is needed by the Scripted application, it can easily send a document to the service and get a score back. I call the service "Lex-D" (for lexical diversity).
The score that Lex-D computes is a combination of MTLD, HD-D, and Yule's I. Each score is calculated individually, scaled based on the means and standard deviations from the Scripted database, and finally averaged together to give a single score.
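That combination is essentially an average of standardized scores. Here is a minimal sketch of the idea; the baseline numbers below are placeholders, not the actual means and standard deviations Lex-D uses:

```ruby
# Placeholder reference statistics; the real values come from the
# Scripted document database, not from these illustrative numbers.
BASELINES = {
  mtld:    { mean: 0.0, sd: 1.0 },
  hdd:     { mean: 0.0, sd: 1.0 },
  yules_i: { mean: 0.0, sd: 1.0 }
}

# Standardize each raw measure against its baseline (a z-score),
# then average the standardized values into one combined score.
def combined_score(raw_scores)
  zs = raw_scores.map do |measure, value|
    stats = BASELINES.fetch(measure)
    (value - stats[:mean]) / stats[:sd]
  end
  zs.sum / zs.length
end

combined_score(mtld: 0.8, hdd: 1.1, yules_i: -0.2) # => a single averaged score
```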
Lex-D is built in Ruby using the Sinatra framework. Sinatra is incredibly small and lightweight, and it suited my needs perfectly; I was able to get a first iteration up and running quickly. The code and an explanation of how to interface with Lex-D are available on GitHub.
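The actual interface is documented in the GitHub repo. Purely as an illustration of how little Sinatra code a scoring service needs, a hypothetical endpoint might look something like this (the route, parameter handling, and compute_combined_score helper are my assumptions, not Lex-D's real API):

```ruby
require 'sinatra'
require 'json'

# Hypothetical scoring endpoint: accept a document in the request body and
# return a combined lexical diversity score as JSON.
post '/score' do
  text = request.body.read
  halt 400, { error: 'empty document' }.to_json if text.strip.empty?

  content_type :json
  { score: compute_combined_score(text) }.to_json
end
```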
Lexical Diversity and Scripted
At this point, Lex-D is fully functional (try it out!), but it is not hooked into Scripted just yet. Soon, every applicable document that passes through Scripted will be scored for lexical diversity, and if the score falls below a set threshold, the document will be flagged and sent to a human for review.
I have really enjoyed working at Scripted this summer, and I've learned so much. Hopefully my work with lexical diversity will help Scripted get closer to achieving its mission of improving writing on the Internet.
What do you think? Share your thoughts with us below.
More on Writing & Engineering:
How to Teach a Computer to Read