EchoGuide — Accessible Museum Companion
EchoGuide explores how museums can become more inclusive for visitors with visual impairments and other access needs. The project started from a case about digital info screens at a Viking ship museum and grew into a broader concept for an audio-based companion app that reads exhibits aloud and supports more independent visits.
Context
The original brief asked for a digital info screen showing the daily programme at the Viking Ship Museum in Roskilde. Rather than simply digitising the analogue magnet boards, our group reframed the task: what if we designed first from the perspective of visitors who cannot rely on vision?
This led to a series of iterations, from a speculative “Smart Cane” to an app-based solution, all centred on experiencing museum information through sound and touch rather than static text on screens, and on letting visitors explore independently.
My role
I contributed across concept development, interaction design and documentation:
- Co-developed the EchoGuide concept and overall user journeys.
- Worked on interface and interaction decisions for the app prototype.
- Helped shape the iterations from the speculative “Smart Cane” to the app-based solution.
- Contributed substantially to the written report and visual framing (e.g. cover).
- Participated in testing and reflection on accessibility and universal design.
Iteration 1 — “What if 40% of the population were blind?”
The first experiment used speculative design to imagine a world where a large part of the population is born blind. We designed a Smart Cane with tactile feedback and Braille translation as a primary interface to the museum.
- Explored user stories and storyboards around independence and reliance on technology.
- Built a physical prototype to test dimensions, grip and feedback.
- Revealed both possibilities and practical limitations of fully hardware-based solutions.
This iteration gave us empathy for blind visitors and highlighted how much museum information is locked behind visual text.
Iteration 2 — Design sprint & first app prototype
In the second iteration we ran a five-day design sprint and shifted towards a smartphone-based solution. An interview with Dansk Blindesamfund (the Danish Association of the Blind) confirmed the need for independent access to the text normally printed on museum labels.
- Built an early EchoGuide prototype in p5.js using a Teachable Machine image model.
- Used camera-based object recognition to trigger text-to-speech for specific exhibits (see the sketch after this list).
- Tested the prototype on doors and Viking ships to understand robustness in real settings.
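The core loop of that prototype is small enough to show as code. Below is a minimal sketch of the pattern, assuming ml5.js (0.x callback style) to load a Teachable Machine model and the browser's built-in Web Speech API for speech; the model URL, class labels and exhibit scripts are hypothetical placeholders, not the project's actual data.

```javascript
// Minimal p5.js sketch of the iteration-2 pattern: classify the camera
// feed with a Teachable Machine model and read the matching exhibit
// script aloud. Assumes p5.js plus ml5.js 0.x; the model URL, labels
// and scripts below are hypothetical placeholders.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXX/";

// Exhibit scripts keyed by the model's class labels (assumed names).
const exhibitScripts = {
  "Skuldelev 2": "This warship was built of oak near Dublin around 1042.",
  "Door": "You are at the entrance to the boat hall.",
};

let classifier;
let video;

function preload() {
  classifier = ml5.imageClassifier(MODEL_URL + "model.json");
}

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  classifier.classify(video, gotResult); // start the recognition loop
}

function draw() {
  image(video, 0, 0); // show the live camera view
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  const top = results[0]; // highest-confidence class
  const script = exhibitScripts[top.label];
  // Only speak confident matches, and never talk over ourselves.
  if (script && top.confidence > 0.9 && !speechSynthesis.speaking) {
    speechSynthesis.speak(new SpeechSynthesisUtterance(script));
  }
  classifier.classify(video, gotResult); // classify the next frame
}
```

Gating speech on both a confidence threshold and `speechSynthesis.speaking` keeps the guide from stuttering over itself while the camera hovers between exhibits.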
The sprint showed that smartphones are less conspicuous than custom hardware and easier to scale across institutions, but also surfaced usability issues and training demands for the ML model.
Iteration 3 — Towards a universal museum companion
In the final iteration we broadened the target group beyond visual impairment and focused on usability and scalability.
- Introduced a login screen so each museum can host its own tailored EchoGuide model (see the sketch after this list).
- Refined the camera view and confidence feedback to make scanning clearer.
- Added more everyday, “human” audio scripts and ambient sound to support atmosphere.
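One plausible shape for the per-museum setup, assuming the same ml5.js stack as above: the login screen resolves a museum code to that institution's hosted model, and a small helper turns raw confidence into plain-language scanning feedback. All codes, URLs and thresholds below are illustrative, not the project's actual configuration.

```javascript
// Illustrative sketch of iteration 3's per-museum setup: the login
// screen maps a museum code to that institution's own Teachable
// Machine model, and a confidence threshold drives the scanning
// feedback. Codes, URLs and thresholds are hypothetical.
const museumModels = {
  vikingeskibsmuseet: "https://teachablemachine.withgoogle.com/models/AAAA/",
  // ...one entry per participating institution
};

function loadMuseumModel(museumCode, onReady) {
  const baseURL = museumModels[museumCode];
  if (!baseURL) {
    throw new Error("Unknown museum code: " + museumCode);
  }
  // ml5 calls onReady once the museum's model has loaded.
  return ml5.imageClassifier(baseURL + "model.json", onReady);
}

// Turn raw confidence into the kind of plain-language feedback the
// refined camera view gives while the visitor scans.
function scanFeedback(confidence) {
  if (confidence >= 0.9) return "Exhibit found";
  if (confidence >= 0.6) return "Almost there, move a little closer";
  return "Keep scanning";
}
```

Keeping one model per institution also means each museum trains on its own exhibits and lighting conditions, which speaks to the data-collection concern noted in the learnings below.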
The result is a concept that can support blind visitors, but also children, elderly visitors and people with dyslexia — anyone who benefits from audio-based access and simpler language.
Outcome & learnings
- Accessibility is strongest when treated as a core UX concern, not an add-on.
- Combining speculative first iterations with concrete prototypes helps open the design space before converging on feasible solutions.
- Image recognition and ML are powerful, but require careful data collection and consideration of lighting and context in real museums.
EchoGuide strengthened my interest in accessibility and in designing technologies that can travel across institutions while still respecting local content and visitors’ different ways of engaging with culture.