EPISODE 8

Diction in Evaluation Questions

CONTRIBUTOR

Hannah Skiff

>> Hi, my name is Hannah, and I'm in my third semester at the University of Central Florida. I'm a graduate student pursuing dual Master of Public Administration and Master of Nonprofit Management degrees. Currently, I'm taking Public Program Evaluation Techniques, and today I want to talk to you about diction and evaluation question creation.

 

Let's break this down. You might be wondering what diction is, or it might sound familiar from an English class you took in high school or undergrad. Diction is simply the choice and use of words and phrases in speech or writing. Evaluation questions are used in evaluation proposals for programs or certain interventions.

 

These questions are used to highlight an idea or issue that will be addressed during the evaluation. So what? Why does diction matter when we're writing questions? Well, word choice guides a question's tone, signals what its purpose might be, and highlights the direction the question is going in.

 

Also, the vocabulary and level of detail you use in a question help you establish credibility in your questioning. So let's take a look at some examples. First, we're going to look at outcome evaluation questions. These questions have two main purposes: to determine how well a program or intervention has achieved the outcomes it set, or to determine the effects a certain program or intervention has on the lives of its participants.

 

So let's look at some examples. The first example of an outcome evaluation question is, "Is the program doing well?" This question, although pretty direct, is actually really vague. We don't know what the criteria for "doing well" are. Are there standards? Are there objectives for this program that we could compare or measure the results against?

 

A better question, instead of "Is the program doing well?", might be "Is the program meeting its stated objectives? Is it fulfilling the performance measures it has set?" Similarly, another poor example of an outcome evaluation question is, "Given that enrollment has increased, is the program doing well?" While it may be great that enrollment for, let's say, an academic program has increased, this may not be a comprehensive indicator of whether the overall program is worthwhile or meeting its objectives.

 

Enrollment may be only a small component among other, more important factors that could help evaluators make a determination. So again, we'd want to focus this question and ask, "Is the program meeting its stated objectives?" In a different sense, we can look at another kind of poor outcome evaluation question: "Is it the best it can be and doing everything it can?"

 

While this question sounds very optimistic, it is unfortunately really vague and is going to be difficult to answer during an evaluation. Rather, we could ask, "Is the program meeting industry standards for X, Y, or Z? Is it meeting national requirements? Is it competitive with regard to blank?" These are much more specific questions and can help outcome evaluators determine whether a program is reaching its objectives.

 

We can think of another kind of evaluation question and look at some examples of how diction plays a role in question creation. If we think about cost-benefit evaluation questions, we know that these questions are used to assess the relationship between program costs and program outcomes. So let's think of some questions.

 

The first one is, "Should program funding be cut?" Right off the bat, this may make an evaluator feel a little nervous; when we talk about budget or funding cuts, people are often hesitant to say, "Yes, let's remove funding from these possibly essential programs." So we want to make sure we maintain a neutral tone when asking evaluation questions so that our findings aren't biased and we're getting true measures of what we're looking for.

 

So a better way to ask a cost-benefit evaluation question could be, "Are there any activities being duplicated within the program?" Asking about redundancies and duplication may lead evaluators to recommend budget cuts or merging programs, whereas asking up front whether funding should be cut could result in a very quick, non-evidence-based response.

 

Given these examples, we can create some rules and a checklist to help ensure that the evaluation questions we're creating are strong and that the diction is on point. The first rule: be simple but specific. In your evaluation questions, you want to focus on one topic per question.

 

You want to write with brevity and in plain language, and you want to provide relevant details. So we want to specify our pronouns, avoid acronyms or jargon that an evaluator may not be familiar with, and make sure we're focusing on a specific topic. Our second rule: avoid framing and biases.

 

Just like the question about cutting program funding, we don't want a question to elicit a specific emotional response. We want it to be neutral, so that the findings and evidence used to answer the question do so without any sort of external influence. If we already have some questions written, we can now use a checklist to double-check that those questions are strong and that the diction is meaningful.

 

First, underline each topic in your evaluation question. How many topics are in the question? If there's more than one, you most likely want to break it into multiple questions so that your results can focus on the specific needs of each one. Next, circle any acronyms or industry-specific terms in your question.

 

Do you explain this jargon, or is context available elsewhere in the question for an outside reader to understand what it means? You may want to write out acronyms or industry-specific terms so that anyone can understand what, specifically, you are asking about. The third check is to ask yourself this question:

 

Is there a simpler way to ask this question? Wordy phrasing or fancy terms can often distract from what the evaluation question is trying to get at, and it's easier to conduct an evaluation when the questions are straightforward and simple. Lastly, does the diction incite a positive or negative response?

 

Again, we want our questions to be neutral, and we want the evidence, not an emotional response, to influence the answers. Overall, word choice should always be considered when evaluators are trying to create strong and meaningful evaluation questions. You can reflect on these rules and this checklist to ensure that the evaluation questions you create are meaningful and that the diction is strong.

 

Thank you.
