{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T07:50:46Z","timestamp":1772265046327,"version":"3.50.1"},"posted":{"date-parts":[[2015,8,10]]},"group-title":"PeerJ PrePrints","reference-count":0,"publisher":"PeerJ","license":[{"start":{"date-parts":[[2015,8,10]],"date-time":"2015-08-10T00:00:00Z","timestamp":1439164800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/2.zoppoz.workers.dev:443\/http\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"abstract":"<jats:p>Loss of vision is a severe impairment to the dominant sensory system. It often has a catastrophic effect upon the sufferer, with knock-on effects to their standard of living, their ability to support themselves, and their care-givers lives. Research into visual impairments is multi-faceted, focusing on the causes of these debilitating conditions as well as attempting to alleviate the daily lives of affected individuals. One of the methods is through the usage of sensory substitution device. Our proposed system, Luminophonics, focuses on visual to auditory cross modalities information conversions. A visual to audio sensory substitution device a type of system that obtains a continual stream visual inputs which it converts into corresponding auditory soundscape. Ultimately, this device allows the visually impaired to visualize the surrounding environment by only listening to the generated soundscape. Even though there is a huge potential for this kind of devices, public usage is still minimal (Loomis, 2010). In order to promote the adoption from the visually impaired, the overall performance of these devices need to be improved in terms of soundscape interpretability, information preservation and listening comfort amongst other factors. 
Luminophonics has developed three types of prototypes, which we have used to explore different ideas pertaining to visual-to-audio sensory substitution. In addition, one of the prototypes has been extended to include depth information using a time-of-flight camera. Previously, an automated measurement method was used to evaluate the performance of the three prototypes (Tan, 2013). The results of those measurements cover effectiveness in terms of interpretability and information preservation. The main purpose of the experiment reported herein was to test the prototypes on human subjects in order to gain greater insight into how they perform in real-life situations.<\/jats:p>","DOI":"10.7287\/peerj.preprints.1289v2","type":"posted-content","created":{"date-parts":[[2018,1,12]],"date-time":"2018-01-12T17:15:58Z","timestamp":1515777358000},"source":"Crossref","is-referenced-by-count":0,"title":["Luminophonics experiment: A user study on visual sensory substitution device"],"prefix":"10.7287","author":[{"given":"Shern Shiou","family":"Tan","sequence":"first","affiliation":[{"name":"School of Computer Science, University of Nottingham, Greater Kuala Lumpur, Malaysia"}]},{"given":"Tomas","family":"Maul","sequence":"additional","affiliation":[{"name":"School of Computer Science, University of Nottingham, Greater Kuala Lumpur, Malaysia"}]},{"given":"Neil","family":"Mennie","sequence":"additional","affiliation":[{"name":"School of Psychology, University of Nottingham, Greater Kuala Lumpur, Malaysia"}]}],
"member":"4443","container-title":[],"original-title":[],"link":[{"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/peerj.com\/preprints\/1289v2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/peerj.com\/preprints\/1289v2.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/peerj.com\/preprints\/1289v2.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/peerj.com\/preprints\/1289v2.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2019,12,23]],"date-time":"2019-12-23T13:44:55Z","timestamp":1577108695000},"score":1,"resource":{"primary":{"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/peerj.com\/preprints\/1289v2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2015,8,10]]},"references-count":0,"aliases":["10.7287\/peerj.preprints.1289"],"URL":"https:\/\/2.zoppoz.workers.dev:443\/https\/doi.org\/10.7287\/peerj.preprints.1289v2","relation":{"references":[{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-1","asserted-by":"subject"},{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-2","asserted-by":"subject"},{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-3","asserted-by":"subject"},{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-3","asserted-by":"object"},{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-2","asserted-by":"object"},{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v2\/supp-1","asserted-by":"object"}],"replaces":[{"id-type":"doi","id":"10.7287\/peerj.preprints.1289v1","asserted-by":"object"}]},"subject":[],"published":{"date-parts":[[2015,8,10]]},"subtype":"preprint"}}