On the Turn of a Phrase
The power to express solutions, and the implications in a prompt-driven world
Over the years, there is an idea I find myself increasingly coming back to. It feels like it should have been a quotation from somebody, but I’ve never found a match. More likely I’ve heard a few things akin to it, and the mind compressed them into a single memory engram much the way a Large Language Model (LLM) might.
We cannot think thoughts for which we lack the language.
I believe there is also a corollary to this, with slightly different emphasis:
The language we use frames the limitations in our thinking.
The point shared between these is that, in a world of hard problems, we shouldn’t be surprised when those problems remain unsolved while we are fundamentally compromised in our ability to discuss them. If you can’t even pull off a high-quality conversation with yourself in a mirror over something, you’re not going to create, or advocate for, or prompt-engineer workable solutions. Even worse, you could very well be trapped in a hamster wheel where some little pool of existing thoughts, drawn from insufficient language, just replays on a mental loop to no effect.
If those hamster-thoughts were the right words, the right ideas, the obvious solutions, then circumstances would already have changed. You would have “done the thing”, or the obviousness of the solution would have been shared by a billion other people and resulted collectively in the required change. Much like the Zen parable of the overflowing tea cup, a lot of what we hold on to is blockage. It distracts us from the value of just clearing the slate, using fresh eyes, and reconceiving the situation anew.
Brainstorming
The difficulty, and the potential, of thinking up something new was brought home to me years ago when I was working in Academic Computing Services at MIT. The broader Information Systems department that we fit within had gone through years of various attempts to reconceive and improve upon the mission. We used to joke that it was time to start asking the VP’s executive assistant to hide his books when he’d go on vacation, because when the VP returned it would always be time for the next methodology incursion to be trained in. Unfortunately, we’d always just end up trying to fit the old work — and thus the old thinking — into the new scheme. Sure, a few bits of vocabulary changed and we added some soft skills, but it never rose to the level of a language that spanned all the players and moved activity (and the economics thereof) in a new direction.
The moment I really, viscerally, understood what the problem was came from an activity we conducted with 7 or 8 faculty members. It was a brainstorming exercise where we asked them to go off and write a description of what they wished they saw as a future educational experience benefiting from technology. The instructions were explicit. They weren’t being asked to talk about routinely applying what we were already supplying them. They were being given the opportunity to “think out of the box” and provide the vision we might entirely lack to drive good long-term planning.
All of the educators in question were experienced teachers whose courses were well-known and well-regarded. They varied by age, department, tenure, and so on. In spite of their deep domain expertise, coupled with substantial hands-on pedagogical application, only two provided the missing vision. Everything else described was an obvious extension of where we already were; those two people were able to set all of that aside and articulate broader aspiration instead of just characterizing the results of existing momentum. The other responses, while professional, could have been emailed in as one-liners saying “increase budget, then do more of the same.”
Literal Shackles
A sound bite that gets increasingly recycled about the literacy rate in the US is “54% of adults read below a sixth grade level.” Like many such pithy phrases, it gets game-of-telephoned and the original context gets lost. It is worth developing the habit of digging a little deeper to get accurate context.
Sidebar: Apologies to global readers not versed in the dynamics of American education and politics, but I need to segue here to government data for which I have more context. If anybody ever wants to collaborate on showing similarities or differences in other parts of the globe, I’m open to that. Anyway, editorial sidebar concluded, on with the story.
As best as I can determine, the number apparently originates from an earlier version of a 2022 article by the American Public Media Research Lab, which misconstrued PIAAC literacy levels and 2017 data in terms of school grades. They have since removed that wording from the article. You may see claims that the number is relevant to 2025, but from my digging it appears that is just because the National Literacy Institute has a web page which mentions those older numbers. Also, as is often the case in our politics-sensitive world, the summaries of the results get administration spin (from either party) to make them sound more palatable than they arguably should be. The spin version was “79% of adults are literate.” What that practically meant was “79% of adults would not fail at ordering a cheeseburger instead of a bacon burger from a menu without food pictures”; the worst two literacy categories were the remaining 21%. With the hearsay data origin story clarified, let’s move on from the stale information.
The PIAAC definitions, I think, provide a clearer picture. PIAAC reporting works in levels (click the link and expand the Cycle 2 Literacy Proficiency Levels), not school grades, and these definitions appear stronger for considering the tasks a person might routinely perform adequately in daily life. There are five levels, plus a sixth “below Level 1” catch-all for anybody with less literacy than the defined five. The descriptions below are my mulling on the implications, not what PIAAC states, organized into a few bands that I find useful for current purposes.
Below Level 1, and Level 1: not participating in the information economy, and likely very minimal consumers of any technology or media that depends on reading or writing text. Doesn’t read books, likely doesn’t own any.
The literacy band unlikely to use AI in a meaningful way, even accidentally.

Levels 2 and 3: routine light consumers of the information economy. The text-based technology interactions are kept modest, but used as needed. Might read simple books, or may just remember having had to do so in school. Could have a few coffee-table books and some pulp fiction or a religious text.

The literacy band that would accidentally use AI because a vendor wired NLP (natural language processing) into their UX, but would have limited awareness of AI otherwise.

Levels 4 and 5: active participants in the information economy to varying degrees. The level of technology use would depend upon the career domain, but technology and media consumption are likely routine both on and off the job. Both reads and writes text for an assortment of reasons. Probably owns multiple books, some of which could be career-related.
The only literacy band that could or would intentionally use AI, as interacting with an LLM creatively necessitates a level of comfort with the written word.
Particularly interesting are the differences between Levels 4 and 5, which I will quote verbatim:
Adults above Level 4 may be able to reason about the task itself, setting up reading goals based on complex and implicit requests. They can presumably search for and integrate information across multiple, dense texts containing distracting information in prominent positions. They are able to construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence-based arguments and the reliability of unfamiliar information sources. Tasks above Level 4 may also require the application and evaluation of abstract ideas and relationships. Evaluating reliability of evidentiary sources and selecting not just topically relevant but also trustworthy information may be key to achievement.
If you consider the statement in a world of AI, what it really describes is those people with the capacity to be competent at constructing LLM prompts and evaluating the generated results. This is the level of language necessary to “think thoughts” at a meta-level in order to orchestrate a GenAI process, determine whether what comes back has utility, and iterate correctively until the task is complete. Anybody below Level 5 is increasingly at a disadvantage, and I would argue that the reduced facility with language makes that disadvantage very difficult to avoid unless you ascribe near-mythic powers to LLMs. If LLMs indeed had such powers, it would raise the question of why low-skill human interactions were of value to task execution in the first place. More likely is people in Level 4 feeling pressure to up their game.
Take a moment to line that up with AI vendor marketing on how it will democratize knowledge and skill. Add in all the people who make online boasts about what LLMs enable for them. Some of it will indeed be true, so for the sake of argument let’s stipulate to all of that potential and be full-throated in our optimism. Imagine a big, warm, fuzzy percentage of humanity that will experience this brave, new, ever-expanding world of possibility… and hold my beer. I’m going to go and fetch the literacy data.
The Data
I’ve set up a Git repository to provide two Excel workbooks that I downloaded via the PIAAC Data Explorer:
PIAAC-2012-2023-by-sex-and-age.xlsx: The data over three reporting periods, with some breakdown by sex or by 10-year age bands.
PIAAC-2023-aggregate.xlsx: Summarized numbers from the most recent reporting period, without any demographic breakdown.
The key findings from that second file are:
Year/Study: PIAAC 2023
Jurisdiction: U.S. Household (16-74 years old)

Proficiency Percentage
Below Level 1 11.99
Level 1 16.74
Level 2 29.36
Level 3 29.81
Level 4 10.88
Level 5 1.22
There is the reality. That third band combining Levels 4 and 5, representing people who would have some hope of participating meaningfully in an AI-heavy economy: roughly 12 percent. And of that 12 percent, only a little over 1 percent of adults even have the language potential to excel at it, and mere language proficiency alone would never be enough. Of the people who have hope, roughly 1 in 12 start with some minimal skill suggesting they might warrant that hope, and the other 11 are scrambling to play catch-up from a position of measurable but possibly repairable deficit. The other 88 don’t even know the race track exists.
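As a sanity check, the band arithmetic above can be reproduced directly from the published table. A minimal sketch (the level percentages are as reported in PIAAC-2023-aggregate.xlsx; the three-band grouping is this article’s, not PIAAC’s):

```python
# PIAAC 2023 U.S. household literacy percentages, as shown in the table above.
levels = {
    "Below Level 1": 11.99,
    "Level 1": 16.74,
    "Level 2": 29.36,
    "Level 3": 29.81,
    "Level 4": 10.88,
    "Level 5": 1.22,
}

# The three bands used in this article (a grouping of my own, not PIAAC's).
bands = {
    "Below Level 1 + Level 1": levels["Below Level 1"] + levels["Level 1"],
    "Levels 2 + 3": levels["Level 2"] + levels["Level 3"],
    "Levels 4 + 5": levels["Level 4"] + levels["Level 5"],
}

for name, pct in bands.items():
    print(f"{name}: {pct:.2f}%")
# Levels 4 + 5 come to 12.10%, the "12 percent" discussed above,
# and the six levels together account for 100.00% of respondents.
```

Nothing sophisticated is happening here; the point is simply that the 12/88 split falls straight out of the published numbers.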
Let that rattle around the brainpan for a bit. For those holding the most optimistic view on GenAI, if current literacy patterns continue, then we could face an outcome where 12 percent of the population go in one direction, and 88 percent go in an entirely different one. The only thing that would easily cast the situation in a less dire light would be if something close to 88% of the population currently worked in jobs without the potential for being cannibalized by AI, such as some blue-collar skilled trades and most unskilled labor.
Unfortunately, according to the AFL-CIO, something like 58% of the US workforce is estimated to currently be white-collar.
That mismatch is the basis for a painful transition into a future combined economic and political divide that will make the current landscape seem a cakewalk by comparison.
This is one of several reasons why I’ll keep saying that we need better options. The search is and will be time-consuming. If you’re finding the material at all compelling, you can vote for more with your dollars on Substack!
The Experimentalist : On the Turn of a Phrase © 2025 by Reid M. Pinchback is licensed under CC BY-SA 4.0