
XR for Knowledge Work and Workplace Enhancement: Human-Centered, Productive, and Ethical XR Workspaces

Extended Reality (XR), including Virtual, Augmented, and Mixed Reality (V/A/MR), is rapidly maturing into a platform for the next generation of productivity tools. While entertainment and training have dominated early adoption, advances in display fidelity, tracking, and AI-assisted interaction now enable immersive workspaces where multiple applications and data streams coexist, persist, and adapt dynamically to user intent. These environments have the potential to transform knowledge work and workplace operations, moving beyond two-dimensional windows to create new paradigms of multitasking, collaboration, and spatial organization. This special issue brings together two ISMAR 2025 workshops, xrWORKS (Extended Reality for Knowledge Work) and WE-XR (Working Enhancement with Extended Reality), to examine how XR reshapes knowledge and operational work.

We seek contributions that advance methods, systems, and evidence for productive, safe, and inclusive XR at work, including:

● Intelligent & adaptive XR workspaces: semantic spatial organization and persistence; context- and intent-aware orchestration; multimodal interaction (gaze, gesture, voice, pen, eye/hand tracking); embodied/AI copilots with transparency, controllability, and auditability.
● Workplace enhancement & collaboration: real deployments in engineering, healthcare, operations, education, and creative work; hybrid/remote workflows; cross-device continuity (HMD–desktop–mobile); integration with physical tools, digital twins, IoT, and live data; architectural/design patterns and safety.
● Cognitive ergonomics, well-being & ethics: fatigue and cybersickness mitigation; attention and interruption management; inclusive and accessible interaction; privacy, security, and data governance for sensor-rich XR (eye, hand, biosignal, and environmental data).
● Foresight, adoption & standards: “metaverse office” models; ROI/TCO and organizational barriers; upskilling and change management; synergy with AI, IoT, wearables, and edge computing; interoperability and standards (e.g., OpenXR, WebXR, USD) and shared evaluation protocols and benchmarks.

We especially welcome system contributions, empirical studies (lab and field), design frameworks, toolkits, benchmarks, and datasets. Submissions should critically engage with both technical and human factors, addressing real-world deployment challenges as well as forward-looking opportunities at the intersection of XR, AI, and workplace transformation.

This Special Issue will offer a unique synthesis of two research frontiers, knowledge work productivity on the one hand and workplace enhancement and organizational adoption on the other, that have so far evolved in parallel. By merging these perspectives, the collection will establish a holistic research agenda for XR at work that goes beyond technical novelty, directly addressing the human-centered, organizational, and ethical dimensions often overlooked in prior XR-focused issues. Key new contributions include:

● Cross-domain integration: Bridging productivity-oriented XR systems with deployment case studies in engineering, healthcare, education, and creative industries, ensuring that innovations are grounded in real-world organizational contexts.
● Well-being and ergonomics as primary design goals: Moving beyond task efficiency to include fatigue mitigation, stress management, inclusivity, and cognitive ergonomics as central outcomes for XR workplaces.
● Responsible and ethical frameworks: Explicit focus on privacy, security, accessibility, and fairness in sensor-rich XR environments, offering governance and design principles for sustainable adoption.
● Forward-looking perspective on AI-assisted XR: Advancing research on embodied agents, intelligent copilots, and adaptive spatial workspaces that combine transparency, controllability, and evaluative case studies, paving the way for metaverse-style offices grounded in evidence.
● Exploration of “metaverse workplaces”: Connecting concrete deployments with foresight studies on digital workspaces, hybrid collaboration, and standards (OpenXR, WebXR, USD) to shape interoperable, future-proof XR ecosystems.
● Diverse contribution formats: Encouraging not only systems and toolkits but also datasets, benchmarks, replication studies, and negative results, thereby broadening methodological rigor and reproducibility in the XR workplace domain.

By uniting technical advances with human-centered and organizational insights, this Special Issue will provide a comprehensive, interdisciplinary contribution that positions XR as a credible, ethical, and impactful enabler of the future of work.

Participating journal

Submit your manuscript to this collection through the participating journal.

Editors

  • Daniele Giunchi

    Daniele Giunchi is the Lead Guest Editor of this collection and a Research Associate in the Virtual Environments and Computer Graphics (VECG) group at University College London (UCL). His research spans computer graphics, human-computer interaction, machine learning, computer vision, and large language models, exploring new paradigms of individual and collaborative interaction to improve user experience in VR/AR/XR. Before joining UCL, he obtained a degree in Astronomy from the University of Bologna and had a long career in industry. From May 2025, he will hold a Lecturer position in Computer Science at the University of Birmingham.
  • Pasquale Cascarano

    Researcher at the University of Bologna since July 2023, focusing on the application of computer science across the creative industries (cinema, art, and fashion), as well as in medicine and biology. His research interests are directed towards the study, development, and application of Artificial Intelligence and Extended Reality paradigms. He is engaged in research projects in collaboration with various national and international public and private institutions.
  • Esen K. Tütüncü

    Researcher at the Event Lab, University of Barcelona, where she is also a member of the Institute of Neurosciences. Her research focuses on the development and evaluation of AI agents in shared virtual environments, tackling themes such as social harmony, conflict resolution, and emergent group dynamics. She is also a part-time lecturer at ELISAVA, teaching Creating Immersive Narratives.
  • Riccardo Bovo

    Researcher at the University of Greenwich. His research interests lie in making AI more interactive and personalized, working at the intersection of AI and virtual/augmented reality (VR/AR) to explore the affordances these devices provide to users, such as sensing and scene awareness. During his PhD he conducted research in VR/AR to train and evaluate behavioural inference models aimed at powering intelligent user interfaces and supporting collaboration and productivity in VR/AR.
  • Michele Gattullo

    Associate Professor of Mechanical Engineering at the Polytechnic University of Bari, where he leads the IMRLab (Industrial Mixed Reality Laboratory) and is a key member of the VR3Lab. He earned his M.S. (2012) and Ph.D. (2016) at the same university, later serving as research fellow (2016–2022) and assistant professor (2022–2025). His research focuses on guidelines for the efficient use of Extended Reality (XR) technologies in industry, with over 60 international publications. He also teaches innovative courses such as Industrial Augmented Reality for M.S. students. Prof. Gattullo contributes to the XR community as a member of the ISMAR International Program Committee (since 2024), and previously served as Web Chair (2021) and Workshop & Tutorial Chair (2022).
  • Dooyoung Kim

    Researcher at the KI-ITC Augmented Reality Research Center at KAIST. He holds a Ph.D. in Culture Technology (AR/VR) from the KAIST UVR Lab and a Bachelor’s degree in Mechanical Engineering from KAIST. With deep expertise in AR, VR, spatial computing, and human-computer interaction, his research focuses on AR/VR telepresence, mutual space generation, spatial memory, and locomotion, shaping the future of immersive connectivity.
  • Mar Gonzalez-Franco

    Computer Scientist and Neuroscientist, currently serving as a Research Scientist Manager at Google, where she leads the Blended Interactions and Devices Research Lab focused on immersive technologies, generative AI, and input interactions. Her team has shaped multimodal and multidevice interactions and unified input vocabularies for the Android XR OS, an effort that won a SIGCHI Special Recognition Award. Prior to this, she was a Principal Researcher at Microsoft Research and has contributed to widely used technologies such as HoloLens, Xbox, and Microsoft Teams, efforts recognized by Time’s Invention of the Year 2022. With a research background spanning VR/AR, AI, and computer vision, she has published in top scientific venues and studied at world-leading institutions including MIT, Tsinghua, UCL, and the University of Barcelona. She remains engaged with civil society as an expert advisor to global organizations including the United Nations and the European Commission.
  • Jens Grubert

    Professor of Human-Computer Interaction in the Internet of Things at Coburg University. He heads the laboratory for Augmented and Virtual Reality, the laboratory for Reality Capture, and the laboratory for Multimodal Human-Computer Interaction. Jens serves as executive spokesman for the Center for Responsible Artificial Intelligence at Coburg University, and as scientific director of the Technology Transfer Center Upper Franconia: Digital Intelligence in Lichtenfels. His research interests include, amongst others, XR for supporting knowledge work, multimodal XR, and generative AI for XR.
  • Verena Biener

    Postdoctoral researcher at the Visualization Research Center at the University of Stuttgart. She received her PhD from Coburg University and the University of Bayreuth in 2024. Her research interests lie in the area of XR and human-computer interaction, specifically focusing on exploring how knowledge workers can benefit from using XR.

Articles