Published to clients: November 28, 2025 ID: TBW2098
Published to Readers: December 1, 2025
Public Release Date: April 13, 2026
Analyst(s): Dr. Doreen Galli
Photojournalist(s): Dr. Doreen Galli
Abstract
This Whisper Report reveals nine overlooked AI risks in HR—from loss of human connection and identity challenges to compliance, data quality, and black-box concerns. Insights from HRTech2025 experts stress the need for ethical design, integrated systems, and AI literacy to safeguard trust and organizational resilience.
Target Audience Titles:
Chief Human Resources Officer (CHRO), Chief People Officer (CPO), Chief Technology Officer (CTO), Chief Information Officer (CIO), Chief Data & Analytics Officer (CDAO)
VP of HR Technology, VP of Talent Management, Director of HRIS (Human Resource Information Systems), Director of Data Privacy & Compliance
HR Technology Manager, HRIS Analyst, Data Scientist (HR Analytics), AI Ethics Specialist
Key Takeaways
Keep humans in HR: Overreliance on AI erodes trust and relationships—HR must preserve human touchpoints for employee engagement.
Protect identity and ethics: AI adoption impacts employee identity; embed responsible AI design and ethical standards from the start.
Secure and integrate systems: Data security lapses and fragmented AI tools increase risk—prioritize compliance and cohesive integration.
Invest in AI literacy: Lack of training leads to misuse; HR teams need prompt engineering and clear goals for effective AI use.
We took the most frequently asked and most urgent technology questions straight to the human resource technology professionals gathered at HRTech2025 in Las Vegas. This Whisper Report addresses the question: What is the biggest AI risk in HR that no one talks about? Figure 1 displays the nine risks we will now discuss.
Figure 1. Nine Hidden AI Risks in HR No One Talks About
Human resources is all about managing the employees of an organization; it is one of the most critical relationships an organization has. Fountain’s Bastian Botella raises one very concerning risk: “It’s the loss of trust between employees and the company. AI is everywhere, from the hiring phase down to retention and communication tools. At some point, and I think it’s going to be sooner rather than later, all employees will figure out that the human has been removed from all processes. Removed from interviews, removed from communication, removed from any touch points that they have with their employer.”

BambooHR’s Paul Swenson is on the same page: “[The biggest risk] I see in HR is the overreliance on AI. HR is all about people, right? Interacting with people, and AI can sometimes pull you away from that. So HR needs to stay close to the people, build relationships with the people that they work with in their companies. But sometimes I think an overreliance on AI can lead to people not doing as much of that, which is really the bread and butter of what HR is good at and what they excel at. So as we use AI, we need to make sure that we’re remaining consistent with our relationships with the people at our companies and providing great employee experiences for our people.” In other words, let’s keep the humans in human resource management!
HRTech2025’s opening keynote speaker, FranklinCovey Leadership’s Patrick Leddin, observed, “A lot of people in the organization find a lot of value in the work they’re doing. It isn’t just about replacing a task and giving somebody a new task, or saying this is going to be something that generative AI is going to do and you’ll be able to do more analysis. It’s recognizing that people’s sense of self is oftentimes connected to their work, and if you take that away from them, how are you going to help them find their new identity?” Given that one of the first questions a stranger asks, outside of your name, is your profession, it is easy to understand how one’s profession is tied to one’s identity. What is a software engineer who no longer writes code but monitors the AI writing code?
Our next risk came from Eightfold AI’s Michael Dunne: “[A] great concern that attention should be given to is responsible AI by design.” Many of the critical aspects of a solution need to be thought through from the very beginning. TBW Advisors LLC repeatedly reminds clients that security, privacy, and accessibility cannot be an afterthought; ethical AI sits right in the center and is a critical part of the predesign work. Michael continued, “You have this bloom of hype around AI and the possibilities. There’s a lot of excitement, but one must always take into account how a system was built from the start. What I like to say is people should look at their providers and see: has this been done by design? That means, have they developed an understanding of managing the data, what are called feature sets, and how it goes in for recommendations? Also understand whether the right certifications have been done around data privacy, data residency, and controls around the use of AI, both for developing applications and for being consumers of applications and the use of that data. And you’ll see that now with a number of standards that have come out; a lot of people pay attention to the EU AI Act, and there’s also ISO 42001.” Thus the organization’s ethical stance on how to use data and AI should be defined in conjunction with its security and privacy policies.
With AI comes a lot of data and information. Darwinbox’s Eli Kameron warns that “people are sending their data all over the place without even thinking about security. This was a problem already with APIs, and it is going to explode with agentic AI, particularly for folks using MCP servers. A lot of folks are not thinking about the risks and the compliance risks that they are exposing themselves to when they send data everywhere.” Just because an application will take your data doesn’t mean you should share it. Even lower-tier paid models do not provide the privacy many enterprises expect.
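One practical response to this warning is to scrub obvious identifiers before any payload leaves the enterprise boundary. Below is a minimal Python sketch, assuming simple regex-based redaction; the patterns and the `redact` helper are illustrative, and a real deployment would use a vetted DLP tool rather than hand-rolled expressions.

```python
import re

# Illustrative patterns only -- not a production-grade PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with typed placeholders before the text
    is sent to an external AI API or MCP server."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# → Reach Jane at [EMAIL] or [PHONE], SSN [SSN].
```

The point is not the specific patterns but the boundary: redaction happens before the data leaves, so compliance does not depend on the downstream vendor's promises.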
Risk number five comes to us from Benifex’s Joe Sears: “All these different AI agents are out there with different functionality, but each of these companies has their own thing that they’re doing, and we need to keep that message joined up; all of the different AIs need to talk to one another. If we can integrate our AI capabilities with the wider AI capabilities that are going on, then that’s going to be the best experience for the employee.” In other words, much like what we saw with commercial UAVs in the enterprise, AI systems are popping up function by function within organizations. Enterprises should take a cohesive, desired-solutions approach to achieve the best ROI on their AI investments. If AI and your data are becoming siloed in your organization, be sure to schedule an inquiry with your TBW Advisors LLC analyst. We can provide guidance based on firsthand experience that is sure to make the difference, even if the work is outsourced.
One concerning risk was highlighted by Paychex’s Nathan Shapiro: “Overreliance on AI, and furthermore, folks outside HR trying to practice it while lacking the expertise, can lead to dangerous things. The democratization and proliferation of AI is fantastic and is going to really change the way we work. But lacking that expertise can run you into some significant challenges and liability. Just think about asking AI for guidance on a termination scenario with an employee and lacking the expertise to know that their age is really critical for discrimination law. What jurisdiction is going to rule on that, and what liability could it create?” As long as workers have rights and the AI isn’t trained on the complexities and nuances of those rights, it may be best to keep seasoned professionals as the humans in the loop!
A tool is only useful if it is used, and used properly. As Attensi’s Joanna Akar noted, a huge risk in “AI is actually not having the knowledge on how to use it. If you don’t know how to prompt engineer or use AI or Gen AI or whatever type of AI you’re using within your day-to-day, you risk not being able to follow the trend, not being able to be more efficient within the learning environment. So it’s super important that HR people are trained in how to prompt AI, or prompt engineering, to make sure that they’re utilizing it in the best way possible to get the most return on investment that they can get out of their people.” Lollipop’s Jonathan Ferrell shared very similar concerns: “Lack of understanding of what AI is and what it isn’t. I think a lot of people recognize how quickly it’s able to solve immediate tasks, and maybe that makes it feel like it can handle a more complex task, but what really matters is what you’re trying to accomplish. And if you don’t know upfront what you’re trying to accomplish, you could really go in the wrong direction.” Thus, to minimize this risk, start with the problem and learn how to communicate with the specific AI you are using for best results!
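The advice to start with the problem can be made concrete as a goal-first prompt template: the objective is stated before any instructions, so both the model and the team know what success looks like. A minimal sketch; the field names and template structure here are illustrative assumptions, not an industry standard.

```python
# A goal-first prompt builder: the objective comes before any instructions.
def build_prompt(goal: str, context: str, constraints: list[str]) -> str:
    """Assemble a prompt that states the objective first, then context,
    then explicit constraints, so ambiguity surfaces early."""
    lines = [
        f"Goal: {goal}",
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "If the goal is ambiguous, ask a clarifying question before answering.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    goal="Draft a neutral job posting for a payroll analyst",
    context="Mid-size US employer; posting will be screened for biased language",
    constraints=["No age-coded or gendered wording", "Under 250 words"],
)
print(prompt)
```

The template itself matters less than the habit it enforces: if you cannot fill in the `goal` field, you are not ready to prompt.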
One Model’s Phil Schrader reports our next risk: “Data quality. The AI is going to be able to answer questions in new ways for organizations. But if you don’t have a quality data model to feed into it, or quality, reliable tools for it to use, it is going to generate noise; it is going to generate nonsense that actually moves you backward.” Or, as previously highlighted in Whisper Report: What are the biggest challenges of Using Gen AI in Logistics?, garbage data in, garbage out. Without quality data, it is not possible to get reliable answers.
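One way to keep the garbage out is to gate records before they ever feed a model. The sketch below, assuming a generic HR record with hypothetical field names (`employee_id`, `hire_date`, `salary`), flags rows that would only add noise downstream.

```python
from datetime import date

# Illustrative rules on an assumed generic HR record, not a real HRIS schema.
def quality_issues(record: dict) -> list[str]:
    """Return a list of reasons this record would feed noise to a model."""
    issues = []
    if not record.get("employee_id"):
        issues.append("missing employee_id")
    if record.get("hire_date") and record["hire_date"] > date.today():
        issues.append("hire_date in the future")
    if record.get("salary") is not None and record["salary"] <= 0:
        issues.append("non-positive salary")
    return issues

bad = {"employee_id": "", "hire_date": date(2099, 1, 1), "salary": -1}
print(quality_issues(bad))
# → ['missing employee_id', 'hire_date in the future', 'non-positive salary']
```

In practice the rule set grows with the domain, but the pattern is the same: quarantine records that fail checks instead of letting the model hallucinate around them.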
The final risk should come as no surprise to anyone but is always important to remember. Aptia USA’s Jeff Williams reminds us all, “AI is a black box, and it’s permeating everything we do on an everyday basis. Think about how little each of us really understands about what AI is, how it’s generating the answers it’s generating, the advice it’s dispensing, and the actions that are being taken as a result. We are lumping AI together for things as simple as a chatbot and things as complex as fully generative large language models. Lumping all that together, calling it AI, and expecting it to solve all of our problems without really knowing what’s feeding it underneath is, I think, a big undiscussed risk that we really need to address.” Clients will recall a similar warning in Whisper Report: What’s the biggest Cybersecurity Myth in 2025? One of the biggest requirements for shining a light on these black boxes is logging. Let’s make 2026 the year all AI systems are required to provide immutable logs.
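What "immutable" can mean in practice is tamper evidence: each log entry commits cryptographically to the one before it, so editing any historical entry breaks the chain. A minimal hash-chain sketch follows; the function and field names are illustrative, not a product API, and a production system would also anchor the chain externally.

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an AI decision event, chaining its hash to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev_hash, "event": entry["event"]},
                             sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"model": "resume-screener", "decision": "advance", "candidate": "C-1042"})
append_entry(log, {"model": "resume-screener", "decision": "reject", "candidate": "C-1043"})
print(verify(log))                        # → True
log[0]["event"]["decision"] = "reject"    # tamper with history
print(verify(log))                        # → False
```

Even this toy version shows why auditors ask for hash-chained logs: a quiet after-the-fact edit to an AI decision record becomes detectable instead of invisible.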