I am bored! I am thinking about the future. I may as well plan my final project.
I know that I want to do something in tangible computing, but I don’t have links to any resources explaining what I mean by “tangible computing”.
I’ve read a lot of studies about tangible computing enabling accessible interfaces, but I never thought to save any of those links…
9.5.24
I know that the tangible computing labs at MIT and Stanford have produced papers I’ve read before.
I think I remember reading “shapeCAD: An Accessible 3D Modelling Workflow for the Blind and Visually-Impaired Via 2.5D Shape Displays” and being very impressed! This is the kind of interface I have in mind for the interactive fiction game console.
Apparently what I’m interested in is called tactile graphics: interfaces that allow you to feel an image.
I was thinking the interactive fiction console should allow you to read by touching tactile braille “images,” but it seems that braille is not always…accessible? I’m not sure what the text format should be. I could have interpreted that study incorrectly.
I could take inspiration from existing refreshable braille displays instead of starting from scratch. That wiki article says that speech synthesis is often used in combination with the braille display.
I wonder if self-voicing could be added to Twine games? Ren’Py relies on the operating system for speech synthesis, but Twine runs in the browser and may not have access to that part of the OS. Ren’Py’s self-voicing isn’t available on Chrome OS, for instance.
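Actually, browsers expose their own speech interface: the Web Speech API (window.speechSynthesis), which doesn’t go through the OS directly. If Twine can reach that, self-voicing might be possible after all. Here’s a rough sketch of what a helper could look like, assuming a browser that supports the API (speakPassage is just a name I made up, and the SugarCube event in the comment is story-format-specific):

```typescript
// Minimal self-voicing sketch for a browser-based game like Twine.
// Assumes the browser implements the Web Speech API; support varies,
// so feature-detect before speaking. speakPassage is a hypothetical
// helper name, not part of Twine itself.
function speakPassage(text: string): void {
  if (!("speechSynthesis" in window)) {
    console.warn("Speech synthesis not available in this browser");
    return;
  }
  // Cancel anything still being read so passages don't overlap.
  window.speechSynthesis.cancel();
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0; // normal speed; could be user-adjustable
  window.speechSynthesis.speak(utterance);
}

// Example: in SugarCube's Story JavaScript, you could hook the
// ":passagedisplay" event (SugarCube-specific) to read each new passage:
// $(document).on(":passagedisplay", () => {
//   speakPassage(document.querySelector(".passage")?.textContent ?? "");
// });
```

The voices still come from whatever the browser and platform provide, so this would be worth testing on Chrome OS specifically.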
The Microsoft Speech API is a possibility, but I assume it costs money. There’s also the option of running some TensorFlow/ONNX model myself, but that feels sketchy to me. It would probably be reasonably easy in TensorFlow, but I don’t feel good about it.
I found an ONNX model for text-to-speech. Piper is another option.
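From what I can tell, Piper runs as a command-line tool: you pipe text in on stdin and it writes a WAV file using a downloaded voice model. A rough sketch of driving it from Node (the synthesize helper name and file paths are mine; it assumes the piper binary and the published en_US-lessac-medium voice are installed):

```typescript
// Rough sketch of calling the Piper TTS CLI from Node.
// Assumes `piper` is on PATH and a voice model (here the published
// en_US-lessac-medium voice) has been downloaded; paths are assumptions.
import { spawn } from "node:child_process";

function synthesize(text: string, outFile: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const piper = spawn("piper", [
      "--model", "en_US-lessac-medium.onnx",
      "--output_file", outFile,
    ]);
    piper.stdin.write(text); // Piper reads the text to speak from stdin
    piper.stdin.end();
    piper.on("close", (code) =>
      code === 0 ? resolve() : reject(new Error(`piper exited with ${code}`))
    );
    piper.on("error", reject); // e.g. piper binary not found
  });
}

// Example usage:
// synthesize("You are standing in an open field.", "line.wav")
//   .then(() => console.log("wrote line.wav"));
```

Everything stays local that way, with no cloud API involved.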
This project actually seems like it could be…quite easy to execute…