Authors
Yongquan Hu, Mingyue Yuan, Kaiqi Xian, Don Samitha Elvitigala, Aaron Quigley
Publication date
2023/3/29
Journal
arXiv preprint arXiv:2303.16593
Description
Images and text are the two most common types of content displayed in Augmented Reality (AR), and their creation has traditionally required manual human effort. However, thanks to rapid advances in Artificial Intelligence (AI), such media content can now be generated automatically by software. The ever-improving quality of AI-generated content (AIGC) has opened up new scenarios for employing such content, and AR is expected to be one of them. In this paper, we explore the design space for projecting AI-generated images and text into an AR display. Specifically, we perform an exploratory study and propose a "user-function-environment" design-thinking framework, informed by a preliminary prototype and focus groups conducted around it. Based on the early insights presented, we outline the design space and potential applications for combining AIGC and AR.