Biology of Sport
eISSN: 2083-1862
ISSN: 0860-021X
Original paper

Reproducibility and quality of hypertrophy-related training plans generated by GPT-4 and Google Gemini as evaluated by coaching experts

Tim Havers 1,2, Lukas Masur 3, Eduard Isenmann 1,4, Stephan Geisler 1, Christoph Zinner 5, Billy Sperlich 6, Peter Düking 3
  1. Department of Fitness and Health, IST University of Applied Sciences, Düsseldorf, Germany
  2. Faculty of Sport and Health Sciences, Technical University of Munich, Munich, Germany
  3. Department of Sports Science and Movement Pedagogy, Technische Universität Braunschweig, Braunschweig, Germany
  4. Department of Molecular and Cellular Sports Medicine, Institute for Cardiovascular Research and Sports Medicine, German Sport University Cologne, Cologne, Germany
  5. Department of Sport, University of Applied Sciences for Police and Administration of Hesse, Wiesbaden, Germany
  6. Integrative and Experimental Exercise Science and Training, Institute of Sport Science, University of Würzburg, Würzburg, Germany
Biol Sport. 2025;42(2):289–329
Online publish date: 2024/12/18
abstract:
Large Language Models (LLMs) are increasingly utilized in various domains, including the generation of training plans. However, the reproducibility and quality of training plans produced by different LLMs have not been studied extensively. This study aims to: i) investigate and compare the quality of muscle hypertrophy-related resistance training (RT) plans generated by Google Gemini (GG) and GPT-4, and ii) assess the reproducibility of the RT plans when the same prompts are provided multiple times concomitantly. Two distinct prompts were used, one providing little information about the training plan requirements and the other providing detailed information. These prompts were input into GG and GPT-4 by two different individuals, resulting in eight RT plans. These plans were evaluated by 12 coaching experts on a 5-point Likert scale, using quality criteria derived from the literature. The results indicated a high degree of reproducibility, as judged by the coaching experts, when the same prompts were provided multiple times to the LLMs of interest, with 27 out of 28 items showing no differences (p > 0.05). Overall, GPT-4 was rated higher on several RT quality criteria (p < 0.001–0.043). Additionally, prompts with higher information density resulted in higher-rated RT quality than prompts with little information (p < 0.001–0.037). Our findings show that RT plans of consistent quality can be generated reproducibly when the same prompts are used. Furthermore, quality improves with more detailed input, and GPT-4 outperformed GG in generating higher-quality plans. These results suggest that detailed information input is crucial for LLM performance.
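
The item-wise comparisons summarized above can be illustrated with a short Python sketch. This is a hypothetical illustration only: the synthetic ratings, the pairing of two plans generated from the same prompt, and the choice of a Wilcoxon signed-rank test (scipy.stats.wilcoxon) are assumptions made here for demonstration; the study's actual data and statistical procedures are reported in the full text.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 5-point Likert ratings from 12 coaching experts on 28 quality
# items, for two RT plans generated from the same prompt (values are synthetic).
rng = np.random.default_rng(42)
plan_a = rng.integers(3, 6, size=(12, 28))  # experts x items, ratings 3-5
plan_b = rng.integers(3, 6, size=(12, 28))

alpha = 0.05
differing_items = 0
for item in range(plan_a.shape[1]):
    a, b = plan_a[:, item], plan_b[:, item]
    if np.all(a == b):
        # wilcoxon() requires at least one non-zero paired difference
        continue
    _, p = wilcoxon(a, b)
    if p < alpha:
        differing_items += 1

print(f"Items rated differently between the two plans: {differing_items} of {plan_a.shape[1]}")

A comparison of this kind, run per quality item, would correspond to the abstract's report that 27 of 28 items showed no significant difference between plans generated from the same prompt.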
keywords:

Artificial intelligence, Chatbots, Digital health, Digital training, Innovation, mHealth, Technology

 