Walberg Calls AI Labor Concerns a 'Halloween Scare' During House Hearing

House committee chairman dismisses warnings about AI's impact on workers as speculative rather than based on real-world cases.

Published on Feb. 5, 2026

During a House Education and Workforce Committee hearing on artificial intelligence, Chairman Tim Walberg characterized concerns about AI's impact on workers as a 'Halloween scare,' signaling agreement with testimony that such warnings are speculative rather than grounded in documented enforcement cases.

Why it matters

As AI adoption expands nationwide, there are growing concerns about its potential to displace workers or be used in ways that undermine labor protections. Walberg's comments suggest some lawmakers may be resistant to new guardrails to address these issues, despite calls from labor advocates for stronger regulations.

The details

The hearing examined whether existing labor laws are sufficient to address AI's impact on workers or whether new rules are needed. During questioning, Walberg agreed with testimony from labor attorney Bradford Kelly that a 2022 National Labor Relations Board memo on employer use of AI was an overly broad 'Halloween scare' not based on actual case law.

  • The House Education and Workforce Committee hearing took place on February 5, 2026.

The players

Tim Walberg

The Republican representative for Michigan's 5th congressional district and chairman of the House Education and Workforce Committee.

Bradford Kelly

A labor attorney who testified at the House hearing, criticizing a 2022 NLRB memo on employer use of AI as overly broad and not grounded in case law.

The takeaway

Walberg's dismissal of concerns about AI's labor impacts as a 'Halloween scare' suggests some lawmakers may be resistant to new regulations, even as worker advocates push for stronger protections in the face of rapidly expanding AI adoption.