Stevens: Humans, AI Need Cognitive Alignment
Researcher says effective human-AI partnerships require shared expectations and trust, not just raw intelligence.
Mar. 19, 2026 at 6:04am
In a new paper, Assistant Professor Bei Yan of the Stevens School of Business argues that deploying AI successfully alongside humans in the workplace requires the two to develop "hybrid cognitive alignment" - a shared understanding of the AI's capabilities, its limitations, and when human judgment should take precedence. Yan says AI failures often stem not from the technology itself but from a mismatch in how humans and machines work together, leading to over-trust, misuse, and wasted effort. She suggests that companies pay closer attention to how tasks and roles are divided between people and AI, and allow teams time to adapt, rather than treating AI as a "plug-and-play" solution.
Why it matters
As AI becomes more prevalent in daily life and the workplace, it's critical that humans and machines learn to work together effectively. Misalignment between the two can lead to AI failures, wasted effort, and even disastrous outcomes, as seen in examples like high-frequency trading algorithms. Developing shared expectations and trust is key to unlocking the full potential of human-AI collaboration.
The details
Yan's research highlights how the differing ways humans and AI approach tasks can be complementary, but only when well-coordinated. People rely on experience, judgment, and social cues, while AI draws on statistical patterns learned from data. When these differences aren't aligned, users may over-trust AI outputs, misuse systems, or waste time correcting them. Yan argues that "hybrid cognitive alignment" - the gradual development of shared expectations about the AI's purpose, its proper use, and when human judgment should take over - is essential for effective partnerships. This alignment doesn't happen automatically; it must be cultivated over time as people learn how the AI behaves and adapt their interactions accordingly.
- Yan's new paper was published on March 18, 2026 in the Academy of Management journal.
The players
Bei Yan
An assistant professor at the Stevens School of Business who studies human-machine teamwork.
Stevens Institute of Technology
A private research university in Hoboken, New Jersey, that has focused on technological innovation since its founding in 1870.
What they’re saying
“Companies are using AI alongside people, but it's hard for them to work well together. People think differently than AI. People use experience, judgment, and social cues. AI uses statistical patterns learned from data.”
— Bei Yan, Assistant Professor
“AI failures happen because humans and machines are not aligned in how they understand tasks, roles and responsibilities.”
— Bei Yan, Assistant Professor
“Treating AI as a 'plug-and-play' solution often backfires; treating it as a new collaborator yields better results.”
— Bei Yan, Assistant Professor
What’s next
Yan's research points to the need for companies to focus more on how tasks and roles are divided between people and AI, and to allow teams time to adapt and develop shared expectations. AI developers, she argues, should also design systems not just for raw performance but for effective collaboration with humans.
The takeaway
Successful human-AI partnerships require more than powerful technology - they need "hybrid cognitive alignment," in which people and machines develop a shared understanding of the AI's capabilities, its limitations, and when human judgment should take precedence. This alignment doesn't happen automatically; it must be cultivated over time through training, adaptation, and building trust.