The 1st Workshop on Human-Centered AI for SE

"Where AI4SE Meets Human Insight"

HumanAISE Workshop (co-located with FSE'25 and ISSTA'25 in Trondheim, Norway).

Organizing Committee

Yu Huang, Vanderbilt University, USA: yuhuang-lab.github.io

Tianyi Zhang, Purdue University, USA: tianyi-zhang.github.io

John Grundy, Monash University, Australia: sites.google.com/site/johncgrundy

David Lo, Singapore Management University, Singapore: www.mysmu.edu/faculty/davidlo

Daniel Russo, Aalborg University, Denmark: www.danielrusso.org

Thomas Zimmermann, University of California, Irvine, USA: thomas-zimmermann.com

Our Mission

Our mission is to foster a human-centered approach to AI in software engineering, creating tools and methods that enhance, rather than replace, human creativity and decision-making. We are dedicated to advancing AI that aligns with human values, builds trust, and integrates seamlessly into developers' workflows.

Through collaboration among leading researchers, industry experts, and practitioners, we aim to explore how AI can support software engineers in a way that respects transparency, fairness, and ethical standards. Our focus is on developing AI techniques that not only improve productivity but also uphold the integrity of human involvement.

Our workshop is a platform for innovative discussions, where participants can share insights, address challenges, and envision AI advancements that place humans at the heart of software engineering. Together, we strive to shape a future where AI and human expertise work hand in hand to build responsible, effective SE practices.

Sponsors

Call for Papers

We are pleased to announce the Call for Papers for the 1st Workshop on Human-Centered AI for Software Engineering (HumanAISE 2025), a platform for researchers and practitioners to present groundbreaking ideas on integrating human-centered AI into SE practices. Topics include explainable AI, ethical considerations, human-AI collaboration, and more. Submissions will undergo a rigorous review process, and accepted papers will be published in the ACM Digital Library. Please see below for details on submission types, guidelines, and deadlines.

Call for Papers: HumanAISE 2025

The 1st Workshop on Human-Centered AI for Software Engineering (HumanAISE 2025) invites submissions to explore how AI can enhance human capabilities while respecting workflows, building trust, and ensuring fairness in software engineering.

Topics of interest include, but are not limited to:

  • Knowledge Transfer and Human-Guided AI for SE: Exploring how AI learns from human expertise and empowers developers with insights.
  • Human-AI Interaction and Collaboration: Developing models, workflows, and interfaces to enhance collaboration between developers and AI.
  • Explainable AI for SE: Improving AI transparency and interpretability for software engineers.
  • Ethics, Fairness, and Bias in AI: Addressing biases in AI-driven tools and promoting ethical practices in SE.
  • SE Education with AI: Leveraging AI-driven tools to enhance SE learning and education.
  • SE Practices for AI: Enhancing the development and maintenance processes of AI systems.
  • Evaluation of Human-AI Systems: Designing evaluation methods to assess the long-term impact of Human-AI systems.

Submission Guidelines

We welcome three types of submissions:

  • Full Papers: Up to 8 pages, plus 2 additional pages for references, presenting completed research with significant findings.
  • Short Papers: Up to 4 pages, plus 1 additional page for references, highlighting early-stage or ongoing research with innovative ideas.
  • Position Papers: Up to 2 pages, including references, proposing bold, high-risk, high-reward ideas.

Submission Link: https://humanaise2025.hotcrp.com/. Submissions must adhere to the FSE 2025 two-column industry track format; detailed formatting guidelines can be found on the FSE 2025 "How to Submit" page. References must be included within the page limits stated above, and submissions will undergo a double-blind review process by at least three program committee members.

Accepted papers will be published in the ACM Digital Library as part of the FSE 2025 companion proceedings. Authors must ensure their submissions comply with ACM formatting guidelines. At least one author of each accepted paper must register and present at the workshop.

Key Deadlines:

  • Submission Deadline: March 1, 2025 (AoE), extended from February 15, 2025
  • Notification of Acceptance: April 17, 2025
  • Camera-Ready Deadline: April 24, 2025
  • Workshop Date: June 27, 2025

Special Journal Issue

We are excited to partner with Empirical Software Engineering (EMSE) for a special issue on Human-Centered AI for Software Engineering (HumanAISE). This issue focuses on using AI to enhance developer creativity and decision-making, while bridging advanced research with real-world practice.

We invite concise contributions that examine socio-technical aspects of AI integration, innovative techniques for human-centric SE, and strategies for building ethical, trustworthy solutions.

Editors: Yu Huang, Tianyi Zhang, John Grundy, David Lo, Daniel Russo, and Thomas Zimmermann. Authors of selected papers will be invited to submit extended versions. Submissions are due December 10, 2025. For details, visit the EMSE Special Issue page.

EMSE Journal Special Issue

Program Committee

We are honored to have the following experts serve on our Program Committee:

Kevin Leach, Assistant Professor at Vanderbilt University, USA — kjl.name

Christoph Treude, Associate Professor at Singapore Management University — ctreude.ca

Yintong Huo, Assistant Professor at Singapore Management University — yintonghuo.github.io

Hong Jin Kang, Lecturer / Assistant Professor at University of Sydney — kanghj.github.io

Tingting Bi, Lecturer / Assistant Professor at The University of Melbourne — drtingtingbi.github.io

Neil Ernst, Associate Professor at University of Victoria — neilernst.net

Kevin Moran, Assistant Professor at University of Central Florida — kpmoran.com

Reid Holmes, Associate Professor at University of British Columbia — cs.ubc.ca/~rtholmes

Ting Zhang, Lecturer (Assistant Professor) at Monash University (Upcoming) — happygirlzt.com/academic

Bianca Trinkenreich, Assistant Professor at Colorado State University — biancatrink.github.io

Qinghua Lu, Principal Research Scientist at CSIRO’s Data61 — people.csiro.au/L/Q/Qinghua-Lu

Jie Zhang, Lecturer / Assistant Professor at King’s College London — kcl.ac.uk/people/jie-zhang

Jin Guo, Associate Professor at McGill University — cs.mcgill.ca/~jguo/

Toby Jia-Jun Li, Assistant Professor at University of Notre Dame — toby.li

Kexin Pei, Assistant Professor at University of Chicago — sites.google.com/site/kexinpeisite/

Brittany Johnson-Matthews, Assistant Professor at George Mason University — cs.gmu.edu/~johnsonb/

Daye Nam, Assistant Professor at UC Irvine — dayenam.com

Thomas Fritz, Professor at University of Zurich — www.ifi.uzh.ch/en/hasel/people/fritz.html

Chenglong Wang, Senior Researcher at Microsoft Research — microsoft.com/en-us/research/people/chenwang/

Vincent Hellendoorn, Assistant Professor at CMU/Google — vhellendoorn.github.io

Prem Devanbu, Professor at UC Davis — web.cs.ucdavis.edu/~devanbu

Web Chair & Publicity Chair

Responsible for overseeing the workshop’s online presence and external outreach:

Yifan Zhang (Web Chair & Publicity Chair), Ph.D. Student at Vanderbilt University, USA — coderdoge.com

Zihan Fang (Publicity Chair), Ph.D. Student at Vanderbilt University, USA — littlehousezh.github.io

Accepted Papers

  • Training Large Language Models to Comprehend LLVM IR via Feedback-Driven Optimization. Yifan Zhang (Vanderbilt University), Kevin Leach (Vanderbilt University)
  • "I Would Have Written My Code Differently": Beginners Struggle to Understand LLM-Generated Code. Yangtian Zi (Northeastern University), Luisa Li (Northeastern University), Arjun Guha (Northeastern University), Carolyn Anderson (Wellesley College), Molly Q Feldman (Oberlin College)
  • An Investigation into Maintenance Support for Neural Networks. Fatema Tuz Zohra (George Mason University), Brittany Johnson-Matthews (George Mason University)
  • The Evolution of Information Seeking in Software Development: Understanding the Role and Impact of AI Assistants. Ebtesam Al Haque (George Mason University), Chris Brown (Virginia Tech), Thomas D. LaToza (George Mason University), Brittany Johnson (George Mason University)
  • Get on the Train or be Left on the Station: Using LLMs for Software Engineering Research. Bianca Trinkenreich (Colorado State University), Fabio Calefato (University of Bari), Geir Hanssen (SINTEF), Kelly Blincoe (University of Auckland), Marcos Kalinowski (Pontifical Catholic University of Rio de Janeiro (PUC-Rio)), Mauro Pezzè (USI Università della Svizzera Italiana & SIT Schaffhausen Institute of Technology), Paolo Tell (IT University of Copenhagen), Margaret-Anne "Peggy" Storey (University of Victoria)
  • Why Do Software Practitioners Use ChatGPT for Software Development Tasks? Fairuz Nawer Meem (George Mason University), Justin Smith (Lafayette College), Brittany Johnson-Matthews (George Mason University)
  • Clash of Requirements: Users First vs. Model First. Tor Sporsem (SINTEF), Rasmus Ulfsnes (SINTEF), Morten Hatling (SINTEF), Inga Strümke (NTNU)
  • Human-Centric Hybrid-AI for No-Code Development. Thiago Rocha Silva (The Maersk Mc-Kinney Moller Institute, University of Southern Denmark), Thomas Troels Hildebrandt (Department of Computer Science, University of Copenhagen)
  • AI Coding Tools in Bilingual Software Development: A Survey of Spanish-Speaking Developers. Miguel Botto-Tobar (Eindhoven University of Technology), Alexander Serebrenik (Eindhoven University of Technology), M.G.J. van den Brand (Eindhoven University of Technology)
  • Attitudes Towards LLM Use Among Software Engineering Researchers: Results From A Two-Phase Survey Study. Viggo Tellefsen Wivestad (SINTEF Digital), Astri Barbala (SINTEF Digital)
  • Leveraging Human Insights for Enhanced LLM-based Code Repair. Yifan Zhang (Vanderbilt University), Kevin Leach (Vanderbilt University)
  • Towards a LLM-Based System for Generating and Validating Product Requirements. Evan Krueger (Vanderbilt University), Taylor Carpenter (Vanderbilt University), Kevin Leach (Vanderbilt University), James Weimer (Vanderbilt University)
  • An Empirical Study on the Impact of Gender Diversity on Code Quality in AI Systems. Shamse Tasnim Cynthia (University of Saskatchewan), Banani Roy (University of Saskatchewan)
  • CodeNoCode Predictive Modeling: Co-Development with LLMs for Transparency and Inclusivity. Felix Dobslaw (Mid Sweden University), Leif Sundberg (Umeå University)

HumanAISE Workshop Updates

Explore the latest insights on human-centered AI in software engineering, featuring articles on ethical AI, collaboration tools, and advancements in AI4Code.

Reflections on 'Norwegian Wood' and FSE 2025 in Trondheim

Exploring the connections between Haruki Murakami's 'Norwegian Wood' and the upcoming FSE 2025 conference in Trondheim, Norway.

HumanAISE Workshop Team
Website Launch Announcement

We are thrilled to announce that the HumanAISE Workshop website is now live! Explore our mission, ongoing projects, and upcoming events focused on advancing human-centered AI in software engineering.

HumanAISE Workshop Team
Introduction to HumanAI4SE

Explore the intersection of AI and Software Engineering, focusing on human-centered approaches to enhance developer productivity and uphold ethical standards.

HumanAISE Workshop Team