By Charles Palmer, Ph.D., Interim Dean, School of Applied Media & Innovation at Harrisburg University of Science & Technology
Can you build artificial intelligence (AI) without emotional intelligence (EI)? Should you? What do we mean when we talk about “humans in the loop”? Are we asking the right questions about how humans design and govern “thinking” machines?
One of the immediate problems we face with generative AI is that people increasingly rely on these systems for big decisions. I won’t call all of them ethical decisions, but in some cases they’re consequential ones. And many users forget that these systems are trained on data that carry all kinds of inherited biases. When we talk about AI bias, it isn’t always abstract. It shows up in very literal assumptions the models make when they are asked to generate images or ideas.
“Cute” and “Criminal” Through the Eyes of a Machine
Early on, when I was testing image-generation systems, I ran some experiments to see how these models interpreted different concepts. The first finding was simple: the model assumed that a doctor was male. I would write a prompt that said something like, “a doctor explaining something to a child,” and overwhelmingly the generated images showed men. That alone was enough for me to ask: “If it’s making that assumption, then what else is under the hood?”
That question is really the beginning of understanding how these systems work. They are pulling from enormous bodies of unlabeled or inconsistently labeled data and then inferring patterns. We often forget that the inferences are statistical, not conceptual. To the model, “doctor” aligns with “male” because that’s the pattern the dataset reinforced.
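To make that kind of probe concrete, here is a minimal sketch of the experiment as a test harness. Everything in it is a placeholder invented for illustration: `generate_images` stands in for whichever image-generation API you use, and `label_image` stands in for the labeling step (a human rater or a separate classifier). The method is simply to run one prompt many times and count what comes back.

```python
from collections import Counter

# Hypothetical client: a stand-in for whichever image-generation API you use.
def generate_images(prompt: str, n: int) -> list:
    raise NotImplementedError("plug in your image-generation service here")

# Placeholder labeling step: in practice, a human rater or a separate
# classifier assigns each image a coarse description ("male doctor",
# "female doctor", "blonde girl in red", ...).
def label_image(image) -> str:
    raise NotImplementedError("plug in your labeling step here")

def probe(prompt: str, n: int = 50) -> Counter:
    """Generate n images from a single prompt and tally the labels,
    turning an anecdotal impression into a countable distribution."""
    return Counter(label_image(img) for img in generate_images(prompt, n))

# On a skewed model, probe("a doctor explaining something to a child")
# might return something like Counter({"male doctor": 43, "female doctor": 7}).
```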

Next, I wanted to see what the system thought “cute” meant. So, I prompted it: “Create an image of a cute child on a playground.” What came back was incredibly consistent. Roughly 80% of the images showed a female, Caucasian, blonde child, often wearing something red.
The model wasn’t saying these things explicitly, but the image distribution made it clear that, based on its training data, “cute” mapped to those features. That immediately opened new questions; once you see that pattern, you start to think about the person who uses that same tool to create marketing materials without realizing they need to adjust their prompts. If they never include the word “diverse” – a word they might not think they need – then their imagery will reflect the existing cultural archetypes the AI has absorbed.
A similar pattern emerged when we worked on a literacy program project. The program team needed a cover for materials related to early-childhood literacy. In a previous part of my career, I worked in editorial environments, so I knew exactly how long a project like that would take manually: probably a week, involving three people, location scouting, creative direction, editing, and production. Using AI, I created a cover-quality image in 90 minutes, for roughly a $20 license. The time savings were enormous, and the quality was good enough that I would have accepted it had I been producing it the traditional way.
But, once again, there was a catch. If I didn’t tell the system “diverse audience,” then all the children it generated fell into the same narrow “cute child” category. It’s not that the AI systems are racist or sexist. They simply don’t have self-awareness. They’re reflecting the dominant patterns in the datasets they learned from. But reflection without critique becomes reinforcement, and reinforcement becomes the norm.
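In the harness sketched above (still hypothetical), the adjustment amounts to a word or two in the prompt, and comparing the tallies before and after makes the effect measurable rather than anecdotal:

```python
# Hypothetical comparison, reusing the probe() harness sketched earlier.
baseline = probe("a cute child on a playground")
adjusted = probe("a cute child on a playground, diverse group of children")

# On a skewed model, `baseline` concentrates on one archetype, while
# `adjusted` should spread across skin tone, hair color, and dress.
print(baseline.most_common())
print(adjusted.most_common())
```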
Artificial Intelligence Is an Isolated Community
I tested another angle. If “cute child” produced blonde girls in red dresses, then what would the model do with “angry criminal”? I honestly expected it to produce a high percentage of African American males, because that’s how US media has historically portrayed criminality. But that isn’t what came back. Instead, almost 90% of the generated images showed Caucasian men who were bald, with very similar scowling expressions.
It was almost amusing. But it was also revealing. The model wasn’t relying on the same racially loaded media cues humans might assume. Instead, it appeared to be operating on majority exposure: the dataset likely contained more Caucasian male images in criminal contexts than any other demographic, so statistically that became the model’s default association. This doesn’t make the system “better”; it just means its biases come from exposure patterns rather than ideology. Either way, the result is a narrow interpretation of reality.
These examples point toward something larger: AI systems behave like isolated communities. They see the world through the limited lens of whatever they have experienced. They don’t have the equivalent of a freshman-year dorm experience, where you sit around talking to someone from a completely different background and realize there are other ways to think and live. Their “world” is their dataset. And like any community with limited exposure, they develop a kind of closed-loop perception. The result is a machine version of cultural myopia.
Students and the Ethical Dimensions of AI
In the courses I teach on AI, we spend a lot of time on the ethical dimensions as well as the practical side. Students must understand that these systems can be flawed, and that learning to recognize where a model breaks down is crucial. You can’t just ask a tool to “tell me about Chapter 3 of Anne Frank” and assume the response is sufficient. You must ask follow-up questions and put context around it. You have to interrogate what comes back.
The best way I’ve found to describe this is to tell students to treat AI not like a genius, but like a naïve colleague. Not dumb, not incapable, just inexperienced and lacking exposure. Under that framing, the user must ask better questions, provide context, and evaluate the output critically.
Public Service Announcement: AI Doesn’t Know What Things Mean
What all of these examples add up to is a simple but crucial insight: AI systems don’t possess emotional intelligence because AI systems don’t know what things mean. They only know what things statistically correlate with. To them, “cute” isn’t an idea; it’s a cluster of pixels and features that frequently show up near that label. “Doctor” isn’t a profession; it’s an image that was disproportionately associated with men. These are not moral failures on the machine’s part; they are reflections of us. But if we deploy these systems uncritically – especially in areas like hiring, healthcare triage, public services, advertising, or education – we risk turning statistical artifacts into cultural norms.
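A toy illustration of that distinction, with an invented and deliberately skewed dataset: at the level of output behavior, a learned association is just a conditional frequency. Real systems learn these correlations through high-dimensional representations rather than literal counting, but the effect on what they generate is the same.

```python
from collections import Counter

# Invented toy "training data": (label, attribute) pairs standing in for a
# caption corpus. The skew is deliberate: 8 of 10 doctor examples are male.
captions = (
    [("doctor", "male")] * 8
    + [("doctor", "female")] * 2
    + [("nurse", "female")] * 7
    + [("nurse", "male")] * 3
)

def conditional_freq(label: str) -> dict:
    """Estimate P(attribute | label) by simple counting. At this level of
    description, that is all a learned 'association' amounts to."""
    counts = Counter(attr for lab, attr in captions if lab == label)
    total = sum(counts.values())
    return {attr: c / total for attr, c in counts.items()}

print(conditional_freq("doctor"))  # {'male': 0.8, 'female': 0.2}
```

Nothing in that computation knows what a doctor is; it only knows which attribute co-occurred with the label most often.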
This technology isn’t going away. In fact, it’s accelerating. And the more we use it, the more it will define our shared visual and conceptual vocabulary. That’s why understanding its biases now matters. These systems are not infallible. They are mirrors: sometimes distorted, sometimes inconvenient, sometimes illuminating. The question is not whether we should use the mirror, but whether we are willing to look into it critically.
Dr. Charles Palmer is the Interim Dean for the School of Applied Media & Innovation at Harrisburg University of Science & Technology (HU).
