I am an Assistant Professor in CS at FSU. My research lies at the intersection of Human-Computer Interaction, Cyber-Physical Systems, and AI Technologies. My career goal is to build sustainable, scalable, and intelligent devices that enable the creation of smart environments on a large scale. I have developed smart everyday materials that can 1) seamlessly sense user activities and contexts, 2) be used with established methods to create a smart environment, and 3) operate without embedded batteries or silicon-based integrated circuits. In addition, I have worked on other HCI topics, such as wearable sensing technology, text entry systems, and Human-AI interaction in VR/AR. My research is generally published in top HCI venues like CHI and UIST, and I also contribute to top AI conferences such as ICLR. My work has attracted considerable public interest through online news outlets (e.g., Engadget, Times). Currently, I direct the MakeX Lab at FSU Love 008. I am looking for PhD students who are highly motivated and interested in Human-Computer Interaction research. A background in hardware is advantageous but not mandatory. If you're interested in working with me, please send an email with your CV and a one-page research statement outlining the topic you wish to explore.
iWood is interactive plywood that senses vibration based on the triboelectric effect. As a material, iWood survives common woodworking operations, such as sawing, screwing, and nailing, and can be used to create furniture and artifacts.
This paper presents an investigation of body-centric interactions between NFC device users and their surroundings, an accessible method for fabricating flexible, extensible, and scalable NFC extenders on clothing pieces, and an easy-to-use toolkit that helps designers realize these interactive experiences.
Project Tasca presents a pocket-based textile sensor that detects user input and recognizes everyday objects usually carried in the pockets of a pair of pants (e.g., keys, coins, electronic devices, or plastic items). By creating a new fabric-based sensor capable of detecting in-pocket touch and pressure, and recognizing metallic, non-metallic, and tagged objects inside the pocket, we enable a rich variety of subtle, eyes-free, and always-available input, as well as context-driven interactions in wearable scenarios.
Capacitivo is a contact-based object recognition technique developed for interactive fabrics, using capacitive sensing. Unlike prior work that has focused on metallic objects, our technique recognizes non-metallic objects such as food, different types of fruit, liquids, and other objects often found around a home or workplace.
We propose a new sensing technique for one-dimensional touch input that works on an interactive thread less than 0.4 mm thick. Our technique locates up to two touches using impedance sensing, with a spacing resolution unachievable by existing methods.
This paper explores the possibilities of interaction with ubiquitous zipper-bearing objects, with a focus on opportunities for foreground and background interactions. Based on our findings, we built a self-contained prototype, Zippro, which can replace a common zipper slider.
We present a bimanual text input method on a miniature fingertip keyboard that resides invisibly on the first segment of the index finger of each hand.
Best Paper Award
In this paper, we propose and investigate a new text entry technique using micro thumb-tip gestures. Our technique features a miniature QWERTY keyboard residing invisibly on the first segment of the user’s index finger. Text entry can be carried out using the thumb-tip to tap the tip of the index finger.
In this paper, we propose designs for low-cost, 3D-printable add-on components that adapt existing breadboards, circuit components, and electronics tools for blind or low vision (BLV) users.
Honorable Mention Award
We present a novel haptic and audio feedback device that allows blind and visually impaired (BVI) users to understand circuit diagrams. TangibleCircuits lets users interact with a 3D-printed tangible model of a circuit, which provides audio tutorial directions as it is touched.
In this paper, we propose blending the virtual and physical worlds for prototyping circuits using physical proxies. With physical proxies, real-world components (e.g., a motor or a light sensor) can be used alongside their virtual counterparts in a circuit designed in software.
We present CurrentViz, a system that can sense and visualize the electric current flowing through a circuit, which helps users quickly understand otherwise invisible circuit behavior.
CircuitSense is a system that automatically recognizes the wires and electronic components placed on breadboards.
CircuitStack is a system that combines the flexibility of breadboarding with the correctness of printed circuits, enabling rapid and extensible circuit construction.
We present Mind’s Eye, a paradigm to ground language model reasoning in the physical world. Given a physical reasoning question, we use a computational physics engine (DeepMind’s MuJoCo) to simulate the possible outcomes, and then use the simulation results as part of the input, which enables language models to perform reasoning.
Xuhai Xu, Mengjie Yu, Tanya Jonker, Kashyap Todi, Feiyu Lu, Xun Qian, João Belo, Tianyi Wang, Michelle Li, Aran Mun, Te-Yen Wu, Junxiao Shen, Ting Zhang, Narine Kokhlikyan, Fulton Wang, Paul Sorenson, Sophie Kahyun Kim, Hrvoje Benko (CHI 2023)
[Video] [DOI] [PDF]
We propose XAIR, a design framework that addresses when, what, and how to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users’ preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR.
In this paper, we interviewed and co-designed with eight deaf and hard-of-hearing (DHH) participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers.
We present an Augmented Reality (AR) direct-manipulation interface that lets users plan an aerial video by physically moving their mobile devices around a miniature 3D model of the scene, shown via AR.
NFCStack is a physical building block system that can support stacking and frictionless interaction based on near-field communication (NFC).
We present ActiveErgo, the first active approach to improving ergonomics by combining sensing and actuation of motorized furniture. It provides automatic and personalized ergonomics for computer workspaces in accordance with recommended ergonomics guidelines.
Conference Organizing Committee: UIST'23
Associate Chairs: CHI'24
Conference Review: CHI'19 - '23, UIST'19 - '23, ISS'20, CSCW'21, TEI'20 - '21, MobileHCI'22
Journal Review: IMWUT'22, Nature Communications'23