The Structure of Thinking: AI as a Cognitive Tool & Workflow Partner

By Charles Palmer, Ph.D., Interim Dean, School of Applied Media & Innovation at Harrisburg University of Science & Technology

When people first talk about artificial intelligence, they usually frame it either as a threat or as a magical answer: something that takes jobs, or something that hands out shortcuts. But I think the more interesting and realistic way to think about AI today is as a cognitive tool: like a calculator or a writing partner or a drafting assistant. Something that doesn’t replace your brain but amplifies it. And the amplification isn’t always in the ways people assume.

For example, the explanation people give most often is “AI helps you write faster” or “AI helps you summarize” or “AI helps you make content.” That’s true, but it’s shallow. The more profound shift is not about quantity or speed; it’s about how the tool changes the structure of thinking. Because if you look closely, the big change is in how people formulate intent, describe context, choose lenses, set parameters. It forces a very different approach to what communication even is.

This is why prompt engineering has been such a weird cultural moment. It started as a joke meme (people posting ridiculous, super-long prompts) but then people realized that prompting is actually instructional design. You are giving context, constraints, roles, examples, intended outcomes, audience assumptions, and emotional parameters. If you strip away the trendiness and just look at the mechanics, prompting is basically literacy in asking for what you want. And that is not something most people are actually trained to do. Most people are trained to react, not to articulate intentions with precision.

The Workflow Shift: Persona + Context + Task

One way I talk about it, because it makes people understand the structure fast, is “persona + context + task.” “Persona” means: Who do you want the AI to be? Is it a research analyst? A hiring manager? A customer? A professor? A therapist? “Context” means: What do they know and not know? What is the situation? What domain are we in? “Task” means: What are we asking for, exactly?

When you stack persona + context + task, you get much better output: not just because the AI is “smarter,” but because the human is now thinking clearly. This is what I mean when I talk about an “intellectual transformation.” The shift is not that the machine is thinking for us. It’s that the machine requires us to think in a more structured and transparent way.
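The persona + context + task structure can be sketched as a simple template. This is a hypothetical illustration, not any product’s actual API; the function name, field layout, and example values are my own.

```python
def build_prompt(persona: str, context: str, task: str) -> str:
    """Compose a structured prompt from the three components.

    persona: who the model should act as
    context: what it should assume about the situation
    task:    what, exactly, is being asked for
    """
    return (
        f"You are {persona}.\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}"
    )

# Example use (all details invented for illustration):
prompt = build_prompt(
    persona="a hiring manager at a mid-sized software firm",
    context="we are screening cover letters for a junior data analyst role; "
            "candidates are recent graduates with little industry experience",
    task="list the three strongest signals in the attached letter and the one "
         "biggest gap, in plain language",
)
print(prompt)
```

The point of the template is not the code; it is that filling in each slot forces the human to answer the three questions before the model ever runs.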

There’s also a post-processing phase. People get hung up on the first output and complain when it’s generic. But the first output is never the point. The first output is scaffolding. It’s like throwing the first lump of clay on the wheel. You don’t serve the clay; you sculpt it. Humans edit, clarify, challenge, contradict, remove fluff, add insight, and make the thing theirs. So: the workflow isn’t “AI gives you a finished product.” It’s iterative co-drafting.

And this is culturally very unfamiliar to people because they are used to one of two writing modes: (1) purely human writing from scratch, or (2) copy/paste plagiarism. AI introduces a third mode: (3) iterative transformation. And that third mode is much closer to how people actually work in corporate environments anyway. You’re constantly taking drafts, frameworks, templates, and revising them. Nobody writes corporate policy or grant reports or RFPs from a blank page. They start with a previous version or a template or last year’s submission. AI just standardized what was already quietly happening.

From Blank Page to Editing Mindset

When the blank page disappears, a lot of anxiety disappears with it. Writing is not actually a single activity. It’s a stack of different activities that most people pretend are one thing: thinking, structuring, drafting, revising, editing, and proofreading. And most people are bad at the first three and decent at the last three. AI lets people start at the part they’re already good at (shaping and improving) instead of the part they dread, which is staring into the abyss of nothing and having to invent something from dust.

But here’s the twist: once people get comfortable editing AI output, their original thinking often gets sharper. They start noticing blind spots and missing context. They start asking themselves, “Who is the audience for this?” or “What data would strengthen this claim?” or “How do I want this to sound?” They develop editorial intelligence. This actually represents a real improvement in cognitive discipline.

I’ve watched people who were terrified of writing gain confidence simply because they no longer had to be the origin point. The mental model changed from “writing is creating” to “writing is shaping.” And shaping is much more natural to how humans think.

The Skilled User vs. the Casual User

There’s also a gap emerging between what I call the casual user and the skilled user. The casual user uses AI like a vending machine. They type: “Write me a marketing email about our new product.” And they accept whatever comes out. But the skilled user treats AI like an assistant. They give briefing documents, audience information, tone guidelines, old samples, internal style sheets. They iterate. They say “Here are three examples; analyze the linguistic patterns. Now rewrite our draft in that pattern.”
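The skilled-user pattern described above, briefing material and examples first, then a targeted rewrite request, can be sketched as a two-step exchange. Everything here is illustrative; a real workflow would send each step to whatever chat interface you actually use.

```python
def briefing_prompts(samples: list[str], draft: str) -> list[str]:
    """Build the two-step skilled-user workflow:
    step 1 analyzes patterns in the examples, step 2 requests the rewrite."""
    step_1 = (
        "Here are three examples of our house style:\n\n"
        + "\n---\n".join(samples)
        + "\n\nAnalyze the linguistic patterns: sentence length, tone, "
          "vocabulary, and structure."
    )
    step_2 = "Now rewrite the following draft in that pattern:\n\n" + draft
    return [step_1, step_2]
```

The vending-machine user skips step 1 entirely, which is exactly why their output is generic: the model never sees what “good” looks like in that context.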

This is why it’s incorrect to say that AI “automates thinking.” It automates the part of thinking that is mechanical, not the part that is conceptual or strategic. And the people who learn to exploit that division of labor are going to have an enormous advantage. It’s not even theoretical; it’s already visible in organizations. You can see who is producing significantly more output at measurably higher quality just because they know how to wield the tool.

And here’s the part people don’t like to admit: AI is raising the floor and raising the ceiling at the same time. The floor rises because mediocre output becomes acceptable output. The ceiling rises because people who were already good become exceptional through leverage. The group that suffers is the middle: the group that used to differentiate itself through competent but not exceptional execution. Though it is worth noting that the picture is more nuanced than a clean division between experts and novices: the Brynjolfsson et al. (2023) study found that the largest productivity gains accrued to newer, lower-skilled workers – not the most experienced. AI appears to encode the tacit knowledge of top performers and make it accessible to everyone climbing the curve. The ceiling may be rising, but not only for those already near it.

Human-in-the-Loop as a Permanent Pattern

There’s also an argument floating around that AI will eventually eliminate human-in-the-loop. I don’t buy that at all. Not because AI won’t get better (it will!) but because communication is not purely about correctness; it is about intention, accountability, risk, and relationship. In environments where those matter, someone always needs to be responsible for what gets said or what gets shipped.

If you’re writing grant proposals or medical guidance or employee policy or audited financial disclosures, you don’t want an unaccountable model deciding your language. Even if the model is 99% accurate, the 1% error in those domains is existential. So, it’s not that AI will replace humans; it’s that AI will progressively replace the parts of human work that are low-value, low-context, and low-risk. And humans will increasingly govern the parts that are high-value, high-context, and high-risk.
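The division of labor in this paragraph can be sketched as a simple routing rule. The domain list and labels are invented for illustration; any real governance gate would be far richer than a lookup.

```python
# Hypothetical set of high-value, high-context, high-risk domains,
# drawn from the examples in the text.
HIGH_RISK_DOMAINS = {
    "grant proposal",
    "medical guidance",
    "employee policy",
    "financial disclosure",
}

def route_draft(domain: str) -> str:
    """Route an AI-assisted draft: high-risk work always gets a named
    human owner; low-risk, low-context work can flow through lighter checks."""
    if domain in HIGH_RISK_DOMAINS:
        return "human sign-off required"
    return "automated checks sufficient"
```

The rule encodes the asymmetry in the argument: even a 99%-accurate model is routed to a human in the domains where the 1% error is existential.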

In other words: collaboration, not substitution.

Learning to Think in Systems

The more people use AI seriously, the more system design thinking becomes a default skill. Because every serious use case involves systems questions: What data does the model have? What data do we need to supply? What is the feedback loop? How does the draft get reviewed? Who signs off? What are the failure modes? AI basically forces process literacy.

This is why the best users of AI right now are not necessarily the best writers. They are the best system thinkers. They understand that the tool is not just generating sentences; it is participating in a workflow. And workflows are about constraints, handoffs, accountability, and objectives.

The Future of Work: Tool Literacy as Meta-Skill

If you ask me where the real transformation is happening, it’s not that AI is making things easier. It’s that AI is making the invisible visible. It is making the meta-skills of communication explicit: intent, audience, structure, revision, operational context. Before AI, those skills were implicit and unevenly distributed. But now, they’re becoming teachable and scalable.

The people who learn them early will not only work faster; they will think more clearly. And in a world where work is increasingly about coordination and complexity, clarity is power.

This is not an abstract argument; it is a call to act on several concrete fronts.

First, audit your current workflows for the “blank page” moments: briefs, program narratives, specification drafts, and project post-mortems. Experiment with AI as a scaffolding tool for those specific tasks before extending it further.

Second, invest in prompt literacy as a studio skill, not a personal hobby. The gap between casual and skilled AI users is already producing measurable differences in output quality; firms that train their teams deliberately will compound that advantage over those that leave it to individual initiative.

Third, identify your high-accountability touchpoints and establish explicit human review protocols for anything AI assists in producing. The goal is not to slow AI adoption but to govern it with the same rigor you would apply to any other strategic decision.

Finally, treat AI fluency as a dimension of professional development worth tracking, alongside technical skills and project experience. The meta-skills this technology makes visible (intent, audience, structure, revision) have always been the core of good design thinking. Now, for the first time, they are measurable, teachable, and consequential in ways that firms and institutions can act on.