Lai Jiun-Ting was born in 1994 in Taipei. A new media artist and the founder of Cognitive Savages, a new media interactive team, he focuses on the field of Human-Computer Interaction, in which he is now working on a futuristic creation: the Orient Cyborg. In order to create new sensory abilities and at the same time expand existing ones, he is assessing the possibility of cooperation between the human brain and artificial intelligence. He won First Prize in the 13th Taiwan Influential Rookie Award, the 2018 VISION GET WILD AWARD, and the Knowledge Taiwan Creativity Award, was a finalist in the Virtual Art category of the 14th Laguna Award, and participated in the 2020 Ars Electronica .ART Gallery. He also pays attention to the impact of surveillance capitalism and to how technology is changing our lives today; beyond that, he is searching for ways for humans to approach our world differently. He is currently taking part in artist-in-residence programs at the Taiwan Industrial Technology Research Institute.
AI & Human-Computer Interaction
Could exploring the limits of consciousness become a mode of cultivating oneself? Understand_ V.T.S is a sensory-substitution installation that invites exploration and reflection through that process of cultivation. The work conducts an experiment assessing the possibility of cooperation between natural and artificial algorithms as an approach to human enhancement; that is, it tests how well our brains (natural) work with AI (man-made). Neuroplasticity allows our senses to perceive the world in various ways: we might see not with our eyes but with our skin, or listen not through our ears but through our taste buds, to name but a few. Skin vision in general relies on the brain parsing pieces of information and shaping cognition from them. To this end, I introduced an object-recognition system, YOLOv3: on one side, the results given by YOLOv3 are converted into Braille delivered to the skin of the thigh; on the other side, the camera image is converted directly into a tactile image rendered by motors on the back. You can control a robot that wanders about your surroundings. The signal received by its left eye is run through object detection and the result is translated into Braille and delivered to your leg, while the signal from its right eye is converted into a tactile image on your back. Eventually your brain learns to comprehend the meaning of these signals, unlocking a new tactile cognition shared between human and AI.
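The detection-to-Braille step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the artwork's actual code: it assumes the YOLOv3 stage has already produced a class label as a plain string, and it maps each letter to the standard 6-dot Braille cell (dots 1-3 down the left column, 4-6 down the right), yielding an on/off state per dot that a vibration-motor array against the thigh could render.

```python
# Standard 6-dot Braille letter patterns: each letter maps to its set of raised dots.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5}, "k": {1, 3}, "l": {1, 2, 3}, "m": {1, 3, 4},
    "n": {1, 3, 4, 5}, "o": {1, 3, 5}, "p": {1, 2, 3, 4},
    "q": {1, 2, 3, 4, 5}, "r": {1, 2, 3, 5}, "s": {2, 3, 4},
    "t": {2, 3, 4, 5}, "u": {1, 3, 6}, "v": {1, 2, 3, 6},
    "w": {2, 4, 5, 6}, "x": {1, 3, 4, 6}, "y": {1, 3, 4, 5, 6},
    "z": {1, 3, 5, 6},
}

def label_to_cells(label):
    """Translate a detected object label into per-letter motor states:
    one 6-element on/off list (dots 1..6) per Braille cell."""
    cells = []
    for ch in label.lower():
        dots = BRAILLE_DOTS.get(ch, set())  # unknown characters become a blank cell
        cells.append([1 if d in dots else 0 for d in range(1, 7)])
    return cells

# Example: a detection of "cat" becomes three Braille cells,
# each a list of six motor on/off states.
print(label_to_cells("cat"))
```

In the installation, each cell would be pulsed to the actuators in sequence, so the wearer reads the detected object letter by letter through the skin.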
Link to artwork: