

Expert Webinar: AI in HR: Top 5 Risks and Opportunities

Type: Live Webinar

Date: June 12, 2025

Time: 4–5 p.m. ET

Speakers: Eli Makus, Oliver McKinstry


Quick takeaways
Opportunities:

  • Chatbots trim Tier-1 HR tickets by ≈40%.
  • AI transcripts tag evidence and cut investigation time by 30%.
  • Predictive analytics flag benefit-cost spikes early.
  • AI search surfaces documents 30% faster.
  • LLMs draft data-rich performance reviews in minutes.

Risks:

  • Algorithmic decisions can discriminate and spark lawsuits.
  • Hiring models may amplify bias.
  • Pre-trained data embeds systemic bias.
  • Over-reliance erodes HR craft skills.
  • Leaders may devalue HR if HR looks fully automatable.


Artificial intelligence (AI) is reshaping the HR landscape, driving new efficiencies and challenges across talent acquisition, employee engagement, investigations and performance management.

On June 12, HR Acuity offered an expert-led session, AI in HR: Top 5 Risks and Opportunities. Eli Makus, a leading expert in AI workplace investigations, and Oliver McKinstry, Vice President and Associate General Counsel at DaVita, packed the session with actionable insights designed specifically for HR professionals.

By watching the webinar, you'll learn:

  • How AI can enhance key HR service lines while addressing critical concerns like bias, data privacy and compliance.
  • Strategies to harness AI responsibly to drive better business and ethical outcomes.
  • Practical, implementable insights for small to mid-size employers navigating the evolving digital workplace.

Watch the webinar today!

Webinar Highlights, Takeaways, and Analysis

Why This AI in HR Webinar Happened Now

A perfect storm of technology, regulation and reputational risk is hitting HR leaders in 2025: the EEOC’s new guidance on AI discrimination, the imminent enforcement phase of the EU AI Act and headline-grabbing lawsuits over biased algorithms. “Just because we must be cautious doesn’t mean we can’t innovate,” insisted Deb Muller, CEO of HR Acuity, as she opened the session.

AI should augment the investigator, not replace them. — Deb Muller

The webinar brought together three seasoned experts:

  • Deb Muller: CEO, HR Acuity. Connect with Deb on LinkedIn.
  • Eli Makus: Managing Partner, Van Dermyden Makus Law Corporation, immediate past president of AWI. Connect with Eli on LinkedIn.
  • Oliver McKinstry: VP & Associate GC, Labor, Employment & Investigations, DaVita; incoming AWI president. Connect with Oliver on LinkedIn.

Who Should Watch (and Read This Recap)

This recap is for HR leaders who want to supercharge their processes with AI, and to do so thoughtfully and ethically.

That includes: 

  • Chief People Officers & CHROs under board pressure to “get smart on GenAI.”
  • Employee Relations (ER) Leaders juggling case load, compliance and culture.
  • HRBPs & Generalists who field “quick-question” policy pings all day long.
  • Corporate Counsel tasked with preventing the next AI-bias lawsuit.
  • HR Tech Owners & People Analytics Pros evaluating new AI vendors.

If any of the above describe you, the following recap delivers the distilled highlights, quotes and next steps.


What Is the Current State of AI in Employee Relations?

What does the Ninth Annual Employee Relations Benchmark Study say about the state of AI in employee relations?

Drawing on data from HR Acuity’s Ninth Annual Employee Relations Benchmark Study (8.7 million employees, 280+ organizations):

  • 44% report no AI use in ER.
  • 35% are piloting limited solutions.
  • 28% use GenAI purely for writing assistance.
  • Fewer than 10% leverage AI for policy recommendations or case analysis.

ER deals with the messiest, most legally fraught issues at work. No wonder adoption lags. — Deb Muller

Why the adoption gap?

  1. Confidentiality Concerns: Harassment and retaliation cases can’t live in public LLMs.
  2. Legal Worries: Anything created may be subpoenaed.
  3. Ethical Nuance: Emotional intelligence still trumps probability scores.
  4. Patchwork Laws: State, federal and EU rules evolve monthly.

How is AI fundamentally different from traditional HR tech?

Deterministic systems return the same output every time (“6 × 8 = 48”). Probabilistic AI returns its best guess based on patterns in training data; the same prompt run five minutes apart may produce a different answer.

That variance is both the magic and the menace of AI. — Eli Makus

Implementation takeaway: Build a human verification loop around every probabilistic tool that touches important decisions.
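To make the deterministic-versus-probabilistic distinction concrete, here is a minimal Python sketch. All function names, policy text and numbers are illustrative assumptions, not from the webinar; the probabilistic function simply samples from canned completions to stand in for an LLM.

```python
import random

# Deterministic: a traditional HR calculation. Same inputs, same output, every time.
def accrued_pto_days(months_worked, accrual_rate=1.25):
    return months_worked * accrual_rate

# Probabilistic stand-in for an LLM: samples one of several plausible completions,
# so two identical prompts can return different answers.
def draft_policy_answer(prompt, rng=random):
    completions = [
        "You accrue 1.25 PTO days per month; see the leave policy.",
        "PTO accrual is 15 days per year, prorated monthly.",
        "Please check with your HRBP for your exact accrual rate.",
    ]
    return rng.choice(completions)

# Human verification loop: the AI drafts, a person approves before anything ships.
def answer_with_review(prompt, human_approves):
    draft = draft_policy_answer(prompt)
    if human_approves(draft):
        return draft
    return "Escalated to an HR professional."
```

The design point is the last function: the probabilistic draft never reaches an employee until a human reviewer signs off, which is exactly the verification loop the takeaway calls for.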


Who is already using AI at work?

Each generation brings a different mental model of AI, with its own implication for HR policy:

  • Baby Boomers & Gen X see AI as an “advanced search engine”: provide clear usage guardrails; they expect definitive answers.
  • Millennials see AI as a “life advisor”: they expect recommendations and demand speed.
  • Gen Z see AI as an “operating system”: they assume AI integration everywhere and will self-serve first.

Stat of concern: Internal polling shows 68% of knowledge workers have pasted sensitive data into public AI tools. Policy urgency is real.


What Are the Top 5 Opportunities for AI in HR?

Below each opportunity, you’ll find: mini-case, implementation framework and guardrails.

Opportunity #1 – Can AI act as a first-line HR generalist?

Answer Box: Yes—chatbots trained on verified policy content cut Tier-1 tickets by up to 40% while surfacing gaps in documentation.

  • Mini-Case: DaVita deployed an internal bot that fields PTO balance, leave eligibility and benefit questions. Ticket volume fell 37 % in three months, freeing HRBPs for strategic tasks.
  • Implementation Framework: audit → single source of truth → embed citations → 30-day review cadence.
  • Guardrails: No open-text PII uploads, version-controlled policy library, human escalation path.

Opportunity #2 – How can AI streamline conversation capture & triage?

Answer Box: AI transcription plus smart tagging saves 2–3 hours per investigation and improves searchability for future cases.

  • Mini-Case: Van Dermyden Makus built a private LLM that ingests interview audio. Investigators query, “Show every mention of the June 3rd staff meeting.” Accuracy: 95%+.
  • Implementation Framework: Use lossless audio, immediate human review, attach transcripts as exhibits.
  • Guardrails: Maintain raw audio; never rely on summary alone; be litigation-ready.

Opportunity #3 – Where does AI shine in benefits administration?

Answer Box: Pattern detection across claims data personalizes offerings and flags cost spikes before renewal.

  • Mini-Case: A Fortune 100 retailer used AI to predict a 12% increase in diabetes-related claims and renegotiated plan design, saving $2.1M.
  • Implementation Framework: BAA-compliant vendor → de-identify data → predictive models → finance/HR joint review.
  • Guardrails: HIPAA compliance, explicit employee notice, opt-out for sensitive analytics.

Opportunity #4 – Can AI accelerate employee relations investigations?

Answer Box: Used judiciously, AI slashes evidence-gathering time by 30% but must never issue findings.

  • Mini-Case: One tech firm queried, “List all emails mentioning ‘Project Atlas overtime.’” AI surfaced 142 hits in seconds, work that previously took an analyst a day.
  • Implementation Framework: Limited read-only corp-docs → investigator prompts → tagged output → human interviews.
  • Guardrails: Human makes credibility determinations; bias audit each quarter.

Opportunity #5 – Is AI helpful in performance management?

Answer Box: AI can unify disparate metrics and draft review summaries, but context & empathy remain human territory.

  • Mini-Case: A 12-store retail chain fed POS data, absence records and customer scores into an LLM that produced first-draft reviews, cutting prep time by 60 %.
  • Implementation Framework: Objective data only → manager review → calibration meeting.
  • Guardrails: No auto-generated ratings; managers must document context; compliance review for adverse impact.

What Are the Top 5 Risks of Using AI in HR?

Risk #1 – Why is delegating employment decisions to AI so dangerous?

Answer Box: Because discriminatory outputs trigger legal liability and erode trust. The EEOC has already settled six-figure cases.

  • Legal Examples: EEOC v. iTutorGroup (age bias); Workday class action (screening bias).
  • Mitigation: Human-in-the-loop review, documented rationale, quarterly model audits.

Risk #2 – How can AI amplify bias in hiring & talent acquisition?

Answer Box: AI learns from historical patterns; if past hires skewed, the model replicates inequity.

  • Data Point: Blind-name experiments still show 40% callback gaps.
  • Mitigation: Diverse training data, bias testing, structured interviews and an override option.

Risk #3 – Why is bias inevitable in large language models?

Answer Box: Training data is human-generated with baked-in assumptions; the model embeds them at scale.

  • Boardroom Example: AI described every board member as a white male, reflecting historic representation.
  • Mitigation: Fairness constraints, continuous re-training, transparent metrics.

Risk #4 – How might over-reliance on AI erode core HR skills?

Answer Box: Active listening, pattern recognition and empathy dwindle if junior staff skip foundational practice.

  • Mitigation: Rotation programs, mentorship, AI-free drills.

Risk #5 – Could AI devalue HR’s strategic role?

Answer Box: If leaders view HR as an automatable admin, budget and influence shrink.

  • Mitigation: Use AI time-savings to tackle culture, analytics and board-level risk strategy.

Essential Governance Framework for Responsible AI in HR

  1. Policy Development: Acceptable use, data classification, decision boundaries, vendor selection.
  2. Training & Literacy: AI basics, bias detection, prompt engineering, ethical frameworks.
  3. Risk Management: Legal review, bias audits, incident response, metrics dashboard.
  4. Stakeholder Engagement: Employee notice, union consultation, executive alignment, IT partnership.
  5. Continuous Monitoring: Monthly KPI review, quarterly model retraining, annual governance audit.

Treat AI like any high-risk vendor: trust and verify—on a schedule. — Deb Muller


Three Immediate Action Steps (Do Them This Week)

  1. Assess Current Usage: Survey shadow AI in your org; map risk hotspots.
  2. Launch a Low-Risk Pilot: Think meeting transcription for non-sensitive calls.
  3. Join a Learning Network: Start with HR Acuity’s empowER community with 6,000+ employee relations peers; schedule weekly “AI office hours.”

Glossary of Key AI × HR Terms (Quick Reference)

Large Language Model (LLM)
Neural-network model trained on billions of tokens; predicts next word.
Deterministic System
Software that returns the same output for the same input every time.
Probabilistic System
Software that returns a “most likely” output; results can vary.
Bias Audit
Statistical test of AI outputs across protected classes.
Human-in-the-Loop
Process where humans review and approve AI suggestions before action.
Inference
Real-time generation of AI output from a trained model.

FAQ: Your Top AI × HR Questions Answered

Is AI legal for hiring decisions?

Bottom-line: Yes, with stringent human oversight and anti-discrimination controls.

The EEOC mandates human review, bias audits and candidate disclosure. Several states (NYC, CA, IL) require independent bias testing before deployment.

How do we reduce bias in AI recruiting?

Bottom-line: Diverse training data + structured interviews + quarterly audits.

Pair technical fixes (fairness constraints) with process safeguards (blind resume review, diverse panels).

What AI tools are “safe” for HR today?

Bottom-line: Private meeting transcription, policy chatbots and trend analytics—if data never leaves your environment.

Avoid tools that auto-decide terminations, compensation or final ratings. Plus, humans should always be making the final call—not AI. 

How much will AI implementation cost my HR team?

Bottom-line: $10K–$500K per year, depending on licenses, integration and audits; expect ROI in 12–18 months.

What’s the biggest mistake HR makes with AI?

Bottom-line: Uploading confidential data into public LLMs and skipping bias audits.

How should we train HR staff?

Bottom-line: Blend AI literacy, prompt labs, ethics case studies and mentorship.

Must we tell employees we’re using AI?

Bottom-line: Yes—transparency is increasingly required by law and critical for trust.

Will AI eliminate HR jobs?

Bottom-line: No. Tasks will shift; strategic, human-centric work grows.

How do we measure ROI?

Bottom-line: Track time saved, error reduction, satisfaction scores and cost per case.

What questions should we ask AI vendors?

Bottom-line: Bias-test results, data-security posture, explainability and HR-specific references.

Make sure vendors are not using your data to train models and you have complete visibility into how they operate their AI. 

Is more regulation coming?

Bottom-line: Absolutely—federal frameworks and EU enforcement arrive in 2025; prepare now with audits and documentation.

Conclusion: Embrace AI — Intentionally & Responsibly

“AI can help the human make decisions—but the human must remain the investigator.” Those closing words from Oliver McKinstry sum up the webinar’s single biggest lesson. The smartest HR teams in 2025 will marry the speed of AI with the empathy and judgment of seasoned professionals.
