The Hiller Communication Vagueness Scales An Applied Researcher’s Perspective

John Ford
Public Sector Research Psychologist
The author can be reached at

This is the second of two blog posts on the Hiller Dictionary. The first is titled Hiller Vagueness Dictionary: A Content Analysis Dictionary that Keeps on Giving.

The Hiller Communication Vagueness scales are useful measures of document clarity because they rely on vocabulary indicators that differ from other commonly available measures. Spelling and grammar checkers look for aspects of written communication that are clearly incorrect and suggest specific corrections. While this is useful, it does not address technically correct use of language that can still make a document difficult to understand. Readability measures are also useful, but they depend heavily on structural features of writing, such as sentence and word length, that make text difficult for less experienced readers, and it is often unclear what specific changes should be made to improve a substandard global readability score (1).

The Communication Vagueness scales are free from these limitations and address aspects of writing quality that spell checking and readability analysis do not detect. They have been useful in several projects; two representative examples follow.

One early research effort used the Hiller scales to examine banks of multiple-choice test questions being written for a professional human resources certification program (2). Test question review is a specialized type of content analysis conducted to identify and correct test question flaws early in the test development process. A WordStat implementation of Hiller’s Communication Vagueness scales was used to examine 576 multiple-choice test items before and after test question review and revision by experienced question editors. Scores for the final question bank were lower than for the initial question bank, indicating that vagueness was decreased by editing. The strongest differences were for the Multiplicity and Anaphora subscales. This finding helped to establish the value of question editing to human resources content specialists invited to write test questions and to their management, who were protective of their time.
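To make the scoring idea concrete, here is a minimal Python sketch of dictionary-based vagueness scoring in the spirit of a WordStat-style implementation. The word lists below are invented placeholders, not the actual Hiller dictionary entries, and a real implementation would also handle multi-word patterns and word-sense disambiguation.

```python
import re

# Hypothetical subscale word lists; the real Hiller dictionary is far larger
# and its entries differ from these illustrative examples.
VAGUENESS_DICT = {
    "multiplicity": {"aspects", "factors", "kinds", "sorts", "things"},
    "anaphora": {"it", "they", "this", "that", "these", "those"},
}

def vagueness_scores(text):
    """Return dictionary matches per 100 words for each subscale."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty text
    return {
        scale: 100.0 * sum(w in terms for w in words) / total
        for scale, terms in VAGUENESS_DICT.items()
    }

draft   = "There are things about these factors that make it hard."
revised = "Ambiguous stems and overlapping options make an item hard."
print(vagueness_scores(draft))    # higher rates on both subscales
print(vagueness_scores(revised))  # lower rates after editing
```

Comparing the per-100-word rates for a question bank before and after editing, as in the certification study, is then a matter of averaging these scores across items in each bank.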

In a more recent project (not yet published) the Hiller scales are being used as part of an ensemble of text measures to examine the clarity of Federal job vacancy announcements. These announcements form an applicant’s first—and sometimes only—impression of the Federal government as an employer. Barriers such as poorly written announcements may more strongly affect socially and economically disadvantaged applicants (3) and those with certain job-irrelevant personality characteristics (4). Well-qualified applicants with other employment options are also less likely to spend time deciphering a cryptic vacancy announcement (5). The importance of clarity and readability is apparent.

A 2003 review found that Federal vacancy announcements at that time were difficult to read and decipher (6). The research study in progress will evaluate the last several years of vacancy announcements to determine whether this situation has changed. Because the 2003 study depended on manual review, it had to be conducted on a relatively small representative sample of the available vacancy announcements. Thanks to the electronic format of more recent job announcements and tools such as the Hiller Communication Vagueness scales, a more comprehensive, multi-year review is now possible. More than a million vacancy announcements will be included, far beyond what could be accomplished by human raters, even a small army of underpaid graduate students.

Two thoughts for other users of Hiller's scales. First, it is useful to consider the nature of the documents being evaluated before applying the scales and to decide, for each scale, whether it represents verbal behavior writers should avoid entirely ("Never") or verbal behavior writers should merely minimize ("Not Much"). The answer will help researchers understand their document collection and interpret the results of applying the Communication Vagueness scales. Your "Never" and "Not Much" subscale sets may differ from the author's, and may vary from project to project across document types.
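One way to operationalize the "Never" versus "Not Much" distinction is sketched below. The subscale assignments and the threshold are illustrative choices for a hypothetical project, not recommendations from Hiller or from this post; each project should set its own.

```python
# Hypothetical policy: which subscales a project treats as "Never"
# (any occurrence is flagged) vs. "Not Much" (tolerated up to a rate).
NEVER = {"multiplicity"}          # illustrative assignment only
NOT_MUCH = {"anaphora": 5.0}      # limit in matches per 100 words

def review(scores):
    """Return human-readable flags for one document's subscale scores."""
    flags = []
    for scale in NEVER:
        if scores.get(scale, 0) > 0:
            flags.append(f"{scale}: present, but policy is 'never'")
    for scale, limit in NOT_MUCH.items():
        if scores.get(scale, 0) > limit:
            flags.append(f"{scale}: {scores[scale]:.1f} exceeds limit of {limit}")
    return flags

print(review({"multiplicity": 1.2, "anaphora": 3.0}))
# flags multiplicity only; anaphora is under its limit
```

The point of the exercise is less the code than the decision it forces: sorting the subscales into these two sets requires thinking carefully about what vague language means for the particular document collection at hand.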

And an additional thought for all of us in the Hiller scales' user community. It would be useful to examine a wide variety of documents with these scales and develop a set of norms, or expected scores, that we could all use. In a personal communication several years ago, the scales' author, Jack Hiller, indicated that he saw value in such an effort. As a community we might consider working together to create these norms. Such norms have proven to be valuable aids to interpretation and explanation of findings by researchers using other text-based scales (7). We might learn even more about the value of Hiller's Communication Vagueness scales by working together on this effort.
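If such norms existed, using them could be as simple as standardizing a new document's subscale score against the community baseline. A sketch with invented numbers, assuming norms are published as per-100-word rates:

```python
from statistics import mean, stdev

# Invented corpus of anaphora rates (matches per 100 words) standing in
# for a community-built set of norms; real norms would come from a large,
# documented sample of each document type.
corpus_anaphora = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5]
mu, sigma = mean(corpus_anaphora), stdev(corpus_anaphora)

def z(score):
    """Standard score of a new document relative to the corpus norm."""
    return (score - mu) / sigma

print(round(z(4.5), 2))  # a new document well above the corpus norm
```

With norms in hand, a researcher could report that a document scores, say, two standard deviations above typical for its genre, which is far easier to explain to stakeholders than a raw dictionary-match rate.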


(1) Bailin, A. & Grafstein, A. (2016). Readability: Text and context. New York, NY: Palgrave Macmillan.

(2) Ford, J., Stetz, T., Bott, M. & O’Leary, B. (2000). Automated content analysis of test item banks. Social Science Computer Review, 18(3), 258-271.

(3) Constant, A., Kahanec, M., Rinne, U. & Zimmermann, K. (2011). Ethnicity, Job Search and Labor Market Reintegration of the Unemployed. International Journal of Manpower, 32(7), 753-776.

(4) Caliendo, M., Cobb-Clark, D. & Uhlendorff, A. (2015). Locus of Control and Job Search Strategies. Review of Economics and Statistics, 97(1), 88-103.

(4) Kanfer, R., Wanberg, C. & Kantrowitz, T. (2001). Job search and employment: A personality–motivational analysis and meta-analytic review. Journal of Applied Psychology, 86(5), 837-855.

(5) Van Hooft, E., Born, M., Taris, T., Van der Flier, H. & Blonk, R. (2004). Predictors of Job Search Behavior Among Employed and Unemployed People. Personnel Psychology, 57, 25-59.

(6) U.S. Merit Systems Protection Board (2003). Help Wanted: A Review of Federal Vacancy Announcements, Washington, DC.

(7) Hart, R. (2001). Redeveloping Diction: Theoretical Considerations. In M. D. West (Ed.), Theory, method, and practice in computer content analysis (pp. 43-60). London: Ablex Publishing.