Security Experts Warn Against Using AI Without Rules
Cybersecurity firm Armor says companies rushing to adopt AI without clear policies are creating dangerous blind spots.
Jan. 27, 2026 at 2:15pm
According to a new warning from the cybersecurity firm Armor, companies that are quickly adopting artificial intelligence (AI) tools without strict policies and guidelines are creating dangerous security vulnerabilities. The firm says these blind spots leave organizations open to data theft, lawsuits, and other unforeseen threats. Armor's Chief Security Officer Chris Stouff emphasized the urgency for companies to develop and enforce AI usage policies, as traditional security measures were not built to handle these new technologies.
Why it matters
As more businesses and industries turn to AI to boost efficiency, the lack of clear rules around AI usage is creating compliance liabilities and operational risks. These include sensitive data being shared with public AI chatbots, as well as "shadow AI," where departments use unapproved tools without IT or security oversight. Healthcare providers face particular challenges in balancing AI adoption with strict patient privacy laws.
The details
Armor, which protects more than 1,700 businesses worldwide, says the core problem is that traditional security measures were not built to handle AI tools. Without a clear game plan, employees may inadvertently share sensitive information with public AI chatbots, and individual departments may adopt their own unapproved AI tools without the IT team's knowledge.
- Armor released a new five-part framework in January 2026 to help businesses address the transparency and security gaps around AI usage.
The players
Armor
A cybersecurity firm that protects more than 1,700 businesses worldwide.
Chris Stouff
The Chief Security Officer at Armor.
What they’re saying
“If your organization is not actively developing and enforcing policies around AI usage, you are already behind.”
— Chris Stouff, Chief Security Officer (tampafp.com)
What’s next
Armor's new framework aims to help businesses classify AI tools by risk level, set clear boundaries on sensitive data usage, and train employees to understand safe AI practices as part of their job responsibilities.
The takeaway
As AI adoption accelerates, companies that fail to establish comprehensive policies and controls around these new technologies are exposing themselves to significant security, compliance, and operational risks. Developing a strategic, proactive approach to AI governance is crucial for businesses to harness the benefits of AI while mitigating the dangers.