
The Quality Problem

  • Writer: Clinton Key
  • Apr 29
  • 2 min read

Man being reproduced by a reproducing machine. Image from the cover art of HUP's edition of Walter Benjamin's The Work of Art in the Age of Mechanical Reproduction.

A friend recently told me how commodified they felt at work: valued for the units they produced, not the quality. Counting matters. But it often leads to operational disappointment and leaves impact on the table.


Many types of work were commodified long ago: above some threshold, the skill of the particular worker doesn't affect the quality of the product much. For lots of stuff that is good! I'd hate for the engine to fall out of my car because Jim installed the engine mount instead of Sue, who is better at it. Most (I had "all" here, but hedged) work is quantifiable. But I worry that quantification often backfires, particularly for work where human inputs matter and vary.


It is simple to see IF a thing was done (though a surprising number of big, expensive evaluation studies don't check that the intervention was actually delivered). It is easy enough to measure the quantity (and speed) of work produced, and to assess the gap between the services offered and the services received (again, a huge blind spot in too many studies).


Often, especially on the path to scale, and even more so when there is evidence that a thing works, organizations become hyper-focused on IF the thing was done. Did we hold the meeting/deliver the curriculum/send the email? I call it the check-box approach: if the box is checked, we never need to think critically about it again.


Measuring and documenting that the thing exists sometimes satisfies the funder or c-suite: "we have integrated the AI chatbot [or wrap-around supportive services, or AI-driven supportive services]; leave us alone, please." But doing something never guarantees the outcome that motivated the doing in the first place.


It is much trickier to assess and distinguish the quality of the work. Quality varies both within and across the people doing it. It is squishy and (sometimes) costly to measure. And so it often isn't measured. And then we wonder why the impacts we were told to expect don't materialize. We dismiss the thing (or, even worse, keep doing the thing that isn't delivering).


I love finding creative, user-centered approaches to establishing and measuring the quality of work done. It is the perfect venue for careful mixed-methods research that is inclusive of staff and participants. It is also a great chance to learn from frontline staff (who should shudder whenever someone brings up continuous quality improvement) what they want and need to grow and develop as professionals, and to craft a balance between structure and agency/autonomy.


For human-powered systems to work, we have to pay as much attention to how WELL a thing is done as we do to IF a thing is done. Then we can use the data to drive innovation as we build the capacity, systems, and processes that ensure clients get the level and quality of service that will produce the outcomes and impact the organization wants.

 
 
 

© 2025 by Key Evidence & Insights, LLC
