1.
Tajkia Rahman Toma; Balreet Grewal; Cor-Paul Bezemer
Answering User Questions about Machine Learning Models through Standardized Model Cards Inproceedings
International Conference on Software Engineering (ICSE), 2025.
@inproceedings{Toma_UserQuestions,
title = {Answering User Questions about Machine Learning Models through Standardized Model Cards},
author = {Tajkia Rahman Toma and Balreet Grewal and Cor-Paul Bezemer},
year = {2025},
date = {2025-04-27},
booktitle = {International Conference on Software Engineering (ICSE)},
abstract = {Reusing pre-trained machine learning models is
becoming very popular due to model hubs such as Hugging Face
(HF). However, similar to when reusing software, many issues
may arise when reusing an ML model. In many cases, users
resort to asking questions on discussion forums such as the HF
community forum. In this paper, we study how we can reduce the
community’s workload in answering these questions and increase
the likelihood that questions receive a quick answer. We analyze
11,278 discussions from the HF model community that contain
user questions about ML models. We focus on the effort spent
handling questions, the high-level topics of discussions, and the
potential for standardizing responses in model cards based on
a model card template. Our findings indicate that there is not
much effort involved in responding to user questions, however,
40.1% of the questions remain open without any response. A
topic analysis shows that discussions are more centered around
technical details on model development and troubleshooting,
indicating that more input from model providers is required. We
show that 42.5% of the questions could have been answered if the
model provider followed a standard model card template for the
model card. Based on our analysis, we recommend that model
providers add more development-related details on the model’s
architecture, algorithm, data preprocessing and training code in
existing documentation (sub)sections and add new (sub)sections
to the template to address common questions about model usage
and hardware requirements.},
keywords = {Hugging Face, Q&A communities, Q&A websites, SE4AI, SE4FM, SE4ML},
pubstate = {published},
tppubtype = {inproceedings}
}