Building SAFER-AI: Safety and Accessibility in Fundamental Engineering for Responsible AI
Published in US Research Software Engineers Conference 2025 (USRSE25), 2025
Recommended citation: Butler, D., & Begel, A. (2025). Building SAFER-AI: Safety and Accessibility in Fundamental Engineering for Responsible AI. US Research Software Engineers Conference 2025 (USRSE25), Philadelphia, PA. Zenodo. https://doi.org/10.5281/zenodo.17268161
Poster: http://darrendbutler.github.io/files/USRSE25-SAFER-AI-POSTER.pdf
Abstract: Modern software teams are being asked to build AI while navigating bias, fairness, and risk – work that depends on psychological safety, the shared belief that critique is welcomed. SAFER-AI is a design-based research agenda that partners with engineers in professional workplaces and training environments to (1) understand lived experiences of safety in real team communication; (2) co-design AI-augmented interactions (for videoconferencing and shared workspaces) that surface misunderstandings and scaffold inclusive critique; and (3) evaluate conversational-AI styles (e.g., coach, peacekeeper, devil’s advocate) in team experiments. Across all three strands, we collect meeting transcripts, collaboration logs, and short surveys, and analyze participation patterns, repair moves, and trust. Early insights suggest developers want support for process transparency (e.g., articulating processes, not just answers) and lightweight prompts that support communication across social differences. Anticipated outcomes include a Safety Audit Toolkit for team retrospectives, design requirements for collaborative software development, and a pluggable AI Safety Agent for meetings that helps teams anticipate and bridge communication gaps. For RSEs, these deliverables translate directly into reusable practices and software that make meetings sharper, learning deeper, and systems safer.