Exploring AI and Human Coaching: Insights from a 30-Day Pilot Study

AC Greater China, Singapore & Malaysia

 August 2025

Abstract

The rapid rise of generative artificial intelligence (AI) has spurred interest in using chatbots as digital coaches. While AI systems can provide structured guidance and on-demand support, concerns remain about their ability to build the rapport and trust that underpin effective coaching. To explore these questions, the Association for Coaching (AC) Greater China, Singapore and Malaysia regions partnered with TreeholeHK to run a 30-day pilot study involving two coaching modalities: AI-enhanced (human coach + AI) and AI-only. Post-coaching questionnaires and a post-program focus group were used to evaluate satisfaction, goal progress, and user experience. The eight participants, most from coaching-related professions, were evenly split between the AI-only and AI-enhanced options. Post-program responses show that human coaches deliver higher satisfaction, trust, and perceived progress, whereas AI coaches provide convenience and anonymity but lack emotional depth. The hybrid model emerged as the most balanced approach, combining the empathy of human coaches with the structure and availability of AI. This article summarizes the study design, ethical considerations, survey and focus-group findings, and practical recommendations for the future of AI coaching.

Introduction and Research Rationale

AI’s entry into coaching

Generative AI systems such as ChatGPT exploded onto the public stage in late 2022, prompting vendors to embed chatbots into learning and development tools. At the same time, the coaching profession has been growing rapidly, with more organizations and individuals adopting coaching worldwide. Commercial vendors now offer AI tools that support aspects of coaching such as goal setting, progress monitoring, and session analysis. Some commentators even predict that within a decade, AI could automate much of what coaches do. These developments raise important questions about how AI can complement or compete with human coaches.

Objectives of the pilot study

The pilot project reported here had two objectives:

1.      Evaluate the MindForest app—an upgraded mobile platform developed by TreeholeHK—by assessing its functionality, user experience, and benefits through measurable metrics. 

2.      Document the integration of AI and human coaching by capturing experiences and insights for an AC article and sharing practical recommendations.

Study Design and Methodology

Participant profile and recruitment

Eight participants were evenly split by gender (four male, four female) and mostly came from coaching-related professions (seven coaching professionals, one non-coaching professional). Most had more than ten years of work experience, and seven had previously received coaching; however, only two had used AI coaching before. Participants rated their familiarity with coaching on a 1–5 scale, and responses clustered around 4–5, indicating a generally well-informed sample.

 

Group assignment

Participants self-selected into the experimental groups described in the project brief. The pilot originally envisioned two conditions: Group A (human coach + AI) and Group B (AI-only). Specifically:

·       AI-enhanced group (Group A) – four respondents opted to work with a human coach (one session per week) and to use the AI app between sessions.

·       AI-only group (Group B) – four respondents chose to interact exclusively with the MindForest app for 30 days.

 

Measures and data collection

The study used a mixed-methods approach:

·       Post-coaching questionnaire: assessed overall satisfaction, willingness to recommend the program, perceived goal attainment, confidence in sustaining progress, and detailed perceptions of human vs. AI coaching (where applicable). Items were scored on a 1–6 scale.

·       Focus group discussion: after the program, participants joined a virtual focus group to elaborate on their experiences. Their comments were consolidated into a qualitative report.

 

Timeline

The coaching phase ran from early May to mid-June 2025 within a structured timeline: recruitment and training in April, coaching sessions and data collection in May, data analysis in early June, and the final article and presentation by the end of June.

 

About TreeholeHK

TreeholeHK is a Hong Kong organization that offers a range of psychological services, including courses, corporate programmes, counseling and therapy, membership subscriptions, and technology initiatives such as the MindForest App. It aims to make psychological knowledge widely accessible and integrates business, innovation, and technology into its work.

 

Ethical Considerations

AI coaching raises ethical questions around privacy, bias, transparency, and the risk of replacing human connection. In response, professional bodies have started developing AI coaching frameworks to ensure that digital tools respect client privacy, promote trust, and minimize bias. Such guidance emphasizes:

·       Ethics and transparency: AI systems should be fair and accountable, with clear explanations of how they use data and why they generate specific responses.

·       Client relationship and trust: Clients must give informed consent and know when they are interacting with an AI, with the option to switch to a human coach at any time.

·       Data security: AI platforms must use encryption and strict access controls to protect sensitive information.

·       Bias mitigation: Developers need to assess and reduce algorithmic biases to ensure equitable coaching recommendations.

Ethical challenges also include the dehumanization of services when AI chatbots are presented as substitutes for human coaches. The project team holds that AI cannot form genuine relationships, make moral judgments, or adapt to the unpredictable nature of coaching conversations, and that replacing humans with AI may compromise the purpose and quality of coaching. This pilot therefore framed AI as a supplementary tool rather than a replacement, and participants were informed about data use and confidentiality. We refrained from letting AI handle mental-health crises or topics requiring clinical intervention.

 

Survey Response Analysis

Overall satisfaction and effectiveness

A total of four responses to the post-coaching questionnaire were received, and the results showed a marked difference between the two groups, AI-enhanced and AI-only:

| Metric | AI-enhanced (avg. score out of 6) | AI-only (avg. score out of 6) | Key insight |
| --- | --- | --- | --- |
| Comfortable/trustworthy environment | ≈ 5.25 | ≈ 3.5 | Participants in the hybrid group described a more comfortable and trustworthy environment, while AI-only users valued privacy but found emotional connection lacking. |
| Questions help gain insights | ≈ 4.75 | ≈ 3.25 | Well-crafted questions, especially in the hybrid model, effectively supported self-reflection and insight; AI-only prompts were seen as less impactful without human interpretation. |
| Goal attainment | ≈ 4.5 | ≈ 3.0 | Participants felt human coaching helped them progress toward goals; AI-only users reported more limited progress. |
| Confidence to sustain progress | ≈ 4.5 | ≈ 3.5 | Human-coached participants felt moderately confident about maintaining gains; AI-only participants were less sure. |

Open comments suggest that the AI was perceived as a safe, judgement-free space, but that its recommendations were repetitive and lacked depth. Participants valued the human coach’s empathy, active listening, and cultural sensitivity. One respondent noted that the AI “felt like an encyclopedia” rather than an interactive partner, while another described the conversation as “dry” and lacking emotion.
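The group scores in the table above are simple per-group means of the 1–6 questionnaire ratings. As a minimal sketch of that aggregation, assuming responses are stored as (group, metric, score) records and using placeholder scores rather than the actual pilot data:

```python
from collections import defaultdict

# Illustrative records only: (group, metric, score on the 1-6 scale).
# These values are placeholders, not the pilot's actual responses.
responses = [
    ("AI-enhanced", "Comfortable/trustworthy environment", 5),
    ("AI-enhanced", "Comfortable/trustworthy environment", 6),
    ("AI-only", "Comfortable/trustworthy environment", 3),
    ("AI-only", "Comfortable/trustworthy environment", 4),
]

# Collect scores per (group, metric) pair, then report the mean of each.
totals = defaultdict(list)
for group, metric, score in responses:
    totals[(group, metric)].append(score)

for (group, metric), scores in sorted(totals.items()):
    print(f"{group} | {metric}: {sum(scores) / len(scores):.2f}")
```

With a sample this small (four questionnaire responses), the averages are descriptive only; no statistical inference is implied.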

Focus Group Feedback

The focus group discussions provided richer context to the quantitative findings. Participants highlighted several strengths of the AI app:

·       Convenience and accessibility: Users could engage without appointments or geographic constraints.

·       Informational support: The app offered structured prompts, reflective questions, and tips, which some found helpful for brainstorming and action planning.

·       Stress-free interaction: Interacting with AI felt less judgmental, allowing some participants to express themselves more freely.

They also identified key challenges:

·       Repetitiveness and cognitive overload: Responses often contained multiple questions and long explanations, leaving users feeling overwhelmed and disengaged.

·       Lack of follow-up: The AI did not proactively check in or remind users to continue the conversation, weakening accountability for goal-related actions.

·       Limited personalization: Suggestions were generic and did not adapt to individual circumstances; psychological explanations were sometimes incomplete.

·       Inhuman feel: Participants missed non‑verbal cues, empathy, and nuance. Some suggested adding visuals, emojis, or shorter, more conversational responses.

Feedback on the hybrid experience underscored these points. One HR professional used the AI to brainstorm approaches for a difficult workplace conversation and found the structured suggestions useful. However, she noted that the app lacked the adaptive questioning and emotional support of her human coach. Another coachee remarked that the AI mimicked a counselor-like tone but provided generic self-care tips without tailoring them to her situation. Both hybrid users noted a lack of data sharing between the app and the coach — no progress summaries or session transcripts were provided — limiting the potential for seamless integration.

 

Results & Discussion

Human vs. AI coaching

The data consistently show that human coaches outperformed AI across almost every metric of satisfaction, engagement, and perceived goal attainment. This aligns with coaching research that emphasizes the importance of trust. Human coaches can adapt the coaching process to the client’s emotional state, cultural background, and learning style. They create a safe environment through empathy, active listening, and nonverbal cues, and they encourage accountability by summarizing actions and checking in on progress. These qualities are difficult for current AI systems to emulate.

 

Strengths and limitations of AI coaching

AI coaching demonstrated notable advantages:

·       24/7 Availability

AI is accessible at any time, removing the need for scheduling and allowing users to engage on their own terms.

·       Structured Guidance

The app provides reflective prompts, goal-setting frameworks, and psychoeducational content that support brainstorming and self-coaching.

·       Judgment-Free Interaction

Some participants felt more comfortable disclosing sensitive thoughts to an AI, perceiving it as less intimidating or judgmental than a human coach.

·       Useful Between Human Sessions

Hybrid users found the AI helpful for practising breathing exercises, capturing ideas, and preparing for upcoming coaching conversations.

 

However, limitations were pronounced:

·       Cognitive Overload

AI responses were often too long or included multiple questions, overwhelming users. One participant noted receiving ten questions in a single response.

·       Lack of Proactive Follow-up

The AI did not initiate check-ins or remind users about their goals, weakening accountability and habit formation.

·       Limited Responsiveness and Adaptability

AI struggled to handle changes in topics or shifting goals and could not challenge inconsistencies in a coachee’s thinking.

·       Generic Advice and Insufficient Personalisation

Suggestions were not tailored to users’ individual contexts; psychological explanations were sometimes superficial or incomplete.

·       Inhuman Feel and Absence of Empathy

Participants missed emotional warmth, non-verbal cues, and conversational nuance. Suggestions for improvement included the use of visuals, emojis, or a more casual tone.

·       No Integration with Human Coaches

In the hybrid model, AI conversations were not shared with human coaches, limiting synergy. Users expressed a desire for synchronized data or progress summaries.

 

Hybrid coaching as a balanced model

Hybrid coaching emerged as the most effective model. Combining human empathy with AI’s structure and availability provided a more comprehensive experience. Quantitative results showed the hybrid group achieving the highest satisfaction and goal progress, while qualitative feedback highlighted the AI’s usefulness for preparation and reflection. The key to unlocking hybrid’s full potential lies in integrating data (e.g., sharing summaries or progress scores with the coach) while maintaining confidentiality and complying with accepted ethical standards.

 

Expectations & Future Directions for AI Coaching

Enhancing AI coaching

The study and focus group suggest several avenues for improvement:

1.      Action planning and accountability: Introduce step‑by‑step frameworks (e.g., SMART goals) and progress‑tracking widgets, and ensure reliable reminders for check‑ins. Participants noted that the AI’s failure to send notifications undermined habit‑building.

2.      Conversation design: Shorten responses, ask one question at a time, and use a more conversational tone. Begin with empathetic reflections before offering advice.

3.      Personalization: Tailor suggestions to the coachee’s context; incorporate deeper psychological explanations when appropriate.

4.      Integration with human coaches: Enable secure sharing of conversation summaries or progress metrics so that coaches can monitor and adapt sessions accordingly.

5.      Adherence to ethical standards: Developers should follow the ethical guidelines issued by professional coaching bodies and regulators to ensure privacy, transparency, and bias mitigation. Clients should always have the option to connect with a human coach.

 

Future research

Empirical research on AI coaching is still limited. While some studies indicate that participants can build a moderate working alliance with AI coaches, the long-term impact on behavioural change remains unclear. More rigorous, randomized controlled trials comparing AI, hybrid, and human coaching over multiple sessions are needed, as is further examination of how cultural attitudes, trust in technology, and ethical oversight influence adoption and outcomes.

When to Use Each Coaching Model

| Coaching model | When it works best | When it falls short |
| --- | --- | --- |
| AI-only coaching | Suitable for self-directed individuals seeking convenience, anonymity, and basic guidance; useful for between-session reflection or brainstorming; accessible across time zones and schedules. | Ineffective for clients needing emotional support, nuanced exploration, or personalized feedback; limited accountability and adaptability. |
| Human-only coaching | Optimal for complex or emotionally charged topics requiring empathy, cultural sensitivity, and deep listening; allows flexible pacing and adaptive questioning. | Less convenient due to scheduling and cost; may lack structured tools or immediate access between sessions. |
| Hybrid coaching | Combines human empathy with AI’s availability and structure; ideal for most coachees aiming for comprehensive development; AI can support practice, reflection, and accountability between human sessions. | Requires careful integration of data and clear boundaries; quality depends on how well the AI complements, rather than distracts from, the human coach’s role. |

 


Recommendations & Practical Implications

For coaches:

·       Embrace AI as a support tool: Use AI to generate questions, monitor habits, and provide psychoeducational resources, freeing up time to focus on relational and contextual aspects of coaching.

·       Maintain ethical vigilance: Follow the ethical frameworks developed by professional coaching associations and data‑privacy regulators to ensure transparency, informed consent, and data protection.

·       Train to integrate AI: Develop competencies in using AI data (e.g., progress summaries) to tailor sessions while retaining human warmth and adaptability.

 

For organizations and developers:

·       Invest in hybrid solutions: Build platforms that allow seamless switching between AI and human coaches, with secure data sharing and privacy controls.

·       Improve AI design: Simplify conversational flows, incorporate empathy prompts and personalized advice, and ensure reliable habit‑tracking features.

·       Monitor user experience: Collect feedback on convenience, emotional engagement, and outcomes, and iterate based on user needs.

For clients:

·       Choose the right fit: Consider your goals, preferences, and emotional needs when selecting AI, human, or hybrid coaching. Self‑motivated clients may benefit from AI tools for quick reflections, while those facing complex challenges should prioritize human support.

·       Stay mindful of ethics: Ask providers about data use, privacy, and the role of AI in the coaching relationship. Seek out platforms adhering to recognized standards.

Conclusion

This pilot study reinforces that AI can significantly enhance the accessibility and structure of coaching, particularly when integrated into a hybrid model. However, human coaches remain irreplaceable in providing empathy, adaptability, and deeper relational insight. The findings highlight the potential of AI as a powerful partner in coaching, not as a replacement but as an augmentation. For coaching to evolve with integrity, ongoing attention must be paid to the emotional quality, safety, and cultural resonance of AI tools.

The Association for Coaching recognizes the importance of preparing coaches for a rapidly transforming digital landscape. Through initiatives like the AC AI Interest Group, as well as digital learning events, webinars, and curated resources, the AC actively supports coaches in understanding, evaluating, and adopting AI in a thoughtful and ethical manner. These efforts are grounded in the AC's commitment to professional excellence and to upholding human-centred values in every stage of coaching practice.

About the Author

Keith Ko Yuk Wing

·       Over 20 years of experience in the IT industry, with a focus on AI education and coaching.

·       Delivered more than 5,000 hours of training since 2016, including AI workshops for corporate, education, and government sectors.

·       Certified LEGO® SERIOUS PLAY® Facilitator, NLP Master Coach, and Distinguished Toastmaster (DTM).

·       Actively contributes to the Association for Coaching in the Greater China region, supporting Professional Development and Events.

Posted by: Zobia Erum
