Applications of Language Modelling for a Cycling Aerodynamics Coach
Keywords:
positions, bike fit, cycling aerodynamics, Body Rocket, machine learning, language modelling
Abstract
This study investigates the application of language modelling to cycling aerodynamics. A novel ground truth is created by recruiting a cohort of experts in cycling aerodynamics, bike fit and biomechanics and taking the collective expert consensus as that ground truth. Nine Large Language Models and one Large Reasoning Model were tested; seven of the Large Language Models were open-source models from Google, Meta, Microsoft and Alibaba, and the closed-source models were from OpenAI. Each model was tested without a system prompt, with a system prompt, with Retrieval Augmented Generation over an enthusiast-level knowledge base, and with Retrieval Augmented Generation over a more technical knowledge base. The best performing model in this study was OpenAI's GPT-4o, with an average mark of ()%, and the best performing open-source model was Alibaba's Qwen2.5:32b with a system prompt and the technical knowledge base, which achieved an average score of (). The results show that it is possible to develop a model that performs at a level similar to that of a human expert within the domain of aerodynamics, bike fit and biomechanics in cycling. Additionally, this study proposes a method to experimentally quantify the improvements an athlete can make with the assistance of a domain-specific Large Language Model.
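To make the evaluation setup concrete, the sketch below shows one way such a Retrieval Augmented Generation test harness could be assembled: embed the knowledge-base chunks once, retrieve the most similar chunks for each expert-marked question, and pass them as context to a locally hosted open-source model. This is a minimal illustration, not the authors' pipeline; it assumes a local Ollama server at its default endpoint hosting qwen2.5:32b and an embedding model such as mxbai-embed-large, and the file name, system prompt and question are hypothetical.

import json
import math
import requests

OLLAMA = "http://localhost:11434"  # default Ollama endpoint (assumption)

def embed(text):
    """Embed a chunk of text with a local embedding model."""
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "mxbai-embed-large", "prompt": text})
    return r.json()["embedding"]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(question, chunks, embeddings, k=3):
    """Return the k knowledge-base chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(zip(chunks, embeddings),
                    key=lambda ce: cosine(q, ce[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(question, context, system_prompt):
    """Query the model with retrieved context prepended to the question."""
    prompt = "Context:\n" + "\n---\n".join(context) + f"\n\nQuestion: {question}"
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": "qwen2.5:32b", "system": system_prompt,
                            "prompt": prompt, "stream": False})
    return r.json()["response"]

# Usage: chunk a knowledge base (enthusiast-level or technical), embed it
# once, then answer each question for later marking against the expert
# consensus. "technical_kb.txt" and the question below are placeholders.
chunks = open("technical_kb.txt").read().split("\n\n")
embeddings = [embed(c) for c in chunks]
system_prompt = "You are an expert cycling aerodynamics and bike-fit coach."
question = "How should saddle height be adjusted for a time-trial position?"
print(answer(question, retrieve(question, chunks, embeddings), system_prompt))

Swapping the knowledge-base file and toggling the system prompt reproduces the four test conditions described in the abstract; scoring the generated answers against the expert consensus would then quantify each configuration's performance.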