Open Exhibits - Blog



The Hard Problem of Connecting Mobile Apps to Touch Tables

Omeka Everywhere
The role of mobile applications in the museum field has been a matter of discussion since the debut of the iPhone a decade ago. Since then, many museums have developed mobile apps, explored wayfinding, and experimented with other uses for these ubiquitous devices. Five years ago, we developed an experimental application called Heist that connected mobile devices to digital collections found on touch tables using a captive portal and HTML5. Ahead of its time, Heist was difficult to scale and implement broadly, but we hung on to the idea, wrote another grant (with our partners), and have since developed a new Heist-like system called The Omeka Everywhere Collections Viewer.

Omeka Everywhere is an IMLS-funded project that has brought together Open Exhibits and Omeka to make collections more accessible to the public in a variety of settings. The Omeka Everywhere project is a collaboration between the Roy Rosenzweig Center for History and New Media at George Mason University, Ideum, and the University of Connecticut's Digital Media & Design Department. The software we’ve developed allows museum visitors to pair their mobile devices with a collections viewer application optimized for a multitouch table or a touch wall. Visitors can then favorite collection items and share them on their preferred social media platforms.

As the video demonstrates, we used a mobile app and a simple numeric code in the table software to pair devices with stations on the touch table. It is a simple and highly reliable way to connect the applications. The advantage of a full mobile application (as opposed to the HTML5 captive portal page used with Heist) is that the mobile application travels with visitors after their museum experience ends. The challenge may be getting museum-goers to take the time to download the application in the first place. A possible solution would be to make it easier to download the application at the museum itself, through a captive portal, which may increase adoption. We will soon see how museums use this software and how many visitors opt to participate.
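The pairing flow described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Omeka Everywhere implementation: the table issues a short numeric code for each station, and the mobile app redeems that code to link the device to the station.

```python
import secrets


class PairingService:
    """Minimal sketch of numeric-code pairing between a mobile device
    and a touch-table station (hypothetical names and logic)."""

    CODE_LENGTH = 4

    def __init__(self):
        self._codes = {}  # outstanding code -> station id
        self._pairs = {}  # station id -> paired device id

    def code_for_station(self, station_id):
        """Table side: issue a short numeric code for one station."""
        code = "".join(secrets.choice("0123456789")
                       for _ in range(self.CODE_LENGTH))
        self._codes[code] = station_id
        return code

    def pair(self, code, device_id):
        """Mobile side: redeem a code to link this device to a station.

        Each code is single-use; a wrong or already-used code returns None.
        """
        station_id = self._codes.pop(code, None)
        if station_id is None:
            return None
        self._pairs[station_id] = device_id
        return station_id
```

Because the code is short-lived and only displayed at one physical station, typing it in also serves as lightweight proof that the visitor is standing at the table.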

At the moment, there isn’t a simple way to connect to people’s personal devices in museums. Visitors bring their iOS and Android phones with different hardware specs and various OS versions, and sharing between devices in a public setting isn’t seamless. Along with hardware and software fragmentation, general concerns about privacy and security are real, so these types of experiences will rely on imperfect methods for the foreseeable future. Still, for those visitors who do participate, our usability testing strongly suggests that they will have an enhanced experience at the museum and will take the collection (and their favorites) with them as they leave to share, study, and re-experience on their own terms.

The Omeka Collections Viewer and its mobile application companion will be available later this summer to museums, cultural organizations, and others via Open Exhibits and Omeka. The applications will be free and open. Attendees of this year’s American Library Association Annual Conference and Exhibition in Chicago will have a chance to see the Omeka Collections Viewer in person at Ideum’s exhibition booth 5237.

This project was made possible in part by the Institute of Museum and Library Services [award number MG-30-0037-1].



by Jim Spadaccini on May 31, 2017

Universal Design Guidelines for Computer Interactives

At the Museum of Science, Boston, we have been reviewing our software development and design process, and we have compiled our findings from the last decade into a list of guidelines to consider. This list is an updated version of the table found in Universal Design of Computer Interactives for Museum Exhibitions (Reich, 2006). Although these are not strict rules, we hope they provide a foundation on which to build during development of a universally designed computer interactive.

This list is organized by development area and each guideline is followed by a code, indicating which audiences benefit the most from these considerations. The key for these codes can be found at the bottom of this post.

Overall exhibition

  • Minimize background noise (D, HH, DYS)
  • Minimize visual noise (DYS, ADD, LV)
  • Stools for seating (LV, YC, OA, LM)
  • Consistency in interaction design throughout exhibition (B, LV, ID, LD, NCU)

Content development

  • Multisensory activities for framing (ALL)
  • Use of the clearest, simplest text that is free of jargon (ER, ID, LD, ESL)
  • Screen text that makes sense when heard and not viewed (B, LV, ID, ER, ESL)
  • A short description of the activity’s goals presented through images, audio, and text (ADD, NCU)
  • Clear, simple directions that provide a literal and precise indication of what to do and the exact order for doing it (B, LV, ID, LD, NCU)
  • Make as many options as possible visible to maximize discoverability (ALL)
  • Minimize number of actions required to accomplish a given task (ADD, ASD, ID, NCU)


  • A tactile or gestural interface, such as buttons, for navigating choices and making selections (B, LV, LM)
  • Care should be taken when combining multiple modes of interaction (B, LV, D, CD)
  • Tactile elements that do not require a lot of strength or dexterity (LM, YC)
  • Input mechanisms are within reach for all visitors (ideally limited to a 10” depth at a 33” height) (WC, EXH, YC, LM)
  • Monitors, overlays, and lighting are designed to reduce screen glare (SV, LV, EXH)
  • Usable controls within reach of the edge of the table (LM, WC, LV, YC)
  • 27-29 inches of clearance beneath the kiosk, with a depth of at least 19 inches (WC)

Software development

  • Connect to existing standards or everyday uses of technology (FCU)
  • Minimized use of flickering and quick-moving images or lights (SZ, ASD)
  • User control over pace of feedback (HH, ASD, ID, B, LV)
  • Control over the pace of interaction, including when a computer “times out” (D, B, LV, LM, DYS, LD)
  • A limited number of choices presented at one time (5-7) (B, LV, ID, ADD)
  • Minimized screen scrolling (LV, ID, NCU)
  • Limit unintentional input by providing tolerance for error (B, LV, LM, NCU)
  • Provide easy methods to recover in the event errors are made (B, NCU, LD)
  • Adjustments of a control should produce noticeable feedback (ALL)
  • Ensure feedback is as close to real-time as possible (B, LV, CD, D, NCU)
  • Ensure dynamic elements indicate current status (e.g., active vs. inactive, selected vs. unselected) (ALL)


  • Auditory feedback for what is happening on the screen (ALL)
  • Audio descriptions for videos, images, and other visual-based information (B, LV, ID)
  • Screen text that is read aloud (B, LV, ID, LD, ESL, ER)
  • Open captions for videos and non-text based audio (D, HH, OA)
  • User control over volume (HH, ASD)


  • Clearly labeled audio/video components that are also presented visually, through open captions or images (D, HH, OA)
  • Text with a large font, clear typeface, capital and lower case letters and ample space between lettering and text lines (test on final screen or device to ensure legibility) (LV, OA, DYS, EXH)
  • High contrast (at least 70%) images and text (LV, OA, CB)
  • Alternatives to color-coded cues (LV, OA, CB)
  • A non-text visual indication of what to do and the activity’s content (ER, LD, DYS, ESL)
  • A clear, consistent and repetitive layout for presenting information (B, LV, LD, NCU)
  • Clear mapping between the buttons and screen images (SV)
  • Screen design should be intuitive and should not draw attention away from the learning goals (ALL)


Key for audience members who benefit:

ADD – visitors who have Attention Deficit Disorder

ALL – all visitors

ASD – visitors affected by Autism Spectrum Disorder

B – visitors who are blind

CB – visitors who are color blind

D – visitors who are d/Deaf

DYS – visitors with dyslexia

ER – visitors who are early readers or are learning to read

ESL – visitors whose first language is not English (including American Sign Language users)

EXH – visitors at extreme heights (low and high)

FCU – visitors who are frequent computer users

HH – visitors who are hard of hearing

ID – visitors with intellectual disabilities

LD – visitors with learning disabilities

LM – visitors with limited mobility

LV – visitors with low vision

NCU – visitors who are new or infrequent computer users

OA – visitors who are older adults

SV – visitors who are sighted

SZ – visitors who are subject to seizures

WC – visitors who use wheelchairs

YC – visitors who are young children


by Emily O'Hara on Jun 16, 2015

New Accessibility Feature Enhances Open Exhibits Experience

Video tour of enhanced Solar System Exhibit

Enhanced Solar System Exhibit

Ideum (the lead organization of Open Exhibits) has made significant progress in multitouch accessibility in the process of developing three prototypes for the Creating Museum Media for Everyone (CMME) National Science Foundation-funded project. The third prototype, a new version of our Open Exhibits Solar System Exhibit, incorporates improvements based on usability test results and suggestions from the Museum of Science, Boston, the National Center for Interactive Learning, WGBH, and advisor Sina Bahram. The major new feature in the current version is an accessibility layer designed for visually impaired users on large touch screen devices. This new CMME software will be released February 6, 2015.

Opening screen of Enhanced Solar System Exhibit with accessibility layer

Enhanced Solar System Exhibit with accessibility layer

Accessibility Layer

The main component of the accessibility layer is the information menu browser. To activate the menu browser, a user holds down three fingers for two seconds. The activation gesture can be swapped for most of the hundreds of gestures available in the Open Exhibits framework. During the hold, the user receives audio feedback letting them know the accessibility layer is activating. Once the menu is active, the user can swipe left or right to move between choices on the menu, in this case the different planets in the solar system. The text that normally appears on the screen when an item is chosen from the visual menu is automatically narrated aloud. Using a simple set of gestures, the user can control the menu and the content to be read.
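The three-fingers-for-two-seconds activation described above can be sketched as a small state machine fed by touch events. This is an illustrative sketch only; the Open Exhibits framework defines its own gesture system, and the names here are hypothetical.

```python
import time


class HoldGestureDetector:
    """Sketch: report activation once exactly three fingers have stayed
    down for two continuous seconds (hypothetical, not the framework API)."""

    REQUIRED_FINGERS = 3
    HOLD_SECONDS = 2.0

    def __init__(self, clock=time.monotonic):
        self._clock = clock          # injectable clock for testing
        self._hold_started = None    # when the 3-finger hold began

    def update(self, finger_count):
        """Feed the current finger count; returns True once the hold completes."""
        now = self._clock()
        if finger_count != self.REQUIRED_FINGERS:
            # Lifting or adding a finger cancels the hold.
            self._hold_started = None
            return False
        if self._hold_started is None:
            self._hold_started = now
        return (now - self._hold_started) >= self.HOLD_SECONDS
```

A real implementation would also fire the "activating" audio cue as soon as the hold begins, so the user knows the gesture has been recognized before the two seconds elapse.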

User enables accessibility layer with 3-finger gesture
User enables accessibility layer

Future Steps

In the current version, the accessibility layer is intended for one user, and that one user controls what content is active for the entire screen. We are currently working on a multi-user version that will incorporate multiple “spheres of influence” to allow users to control a range of space from a small area. Using these “spheres of influence,” multiple visually impaired and/or sighted users can interact with the exhibit simultaneously. The multi-user version’s audio will be multidirectional: it can be split so that users on different sides of the table can listen to different parts of the content at the same time. Our next step is to develop visual elements that will play along with the audio narration for those who have limited sight, are hard of hearing, or are learning English.


by Stacy Hasselbacher on Jan 6, 2015

CMME Exhibit Resource Overview

We have finished posting about the Museum of Science's portion of the Creating Museum Media for Everyone (CMME) project. In case you missed any of the posts, you can find direct links to each of them below.

Background: These posts include resources and thinking that jumpstarted our exhibit development process.

Final Exhibit Component: These posts detail the final exhibit, which is a part of the Catching the Wind exhibition, at the Museum of Science.

Exhibit Development Toolkit: These posts include specifications for the software programming, design, and text we used in the final exhibit. Feel free to repurpose any of the resources in these posts for your own exhibit development.

Paths Not Taken: These posts dive deeper into multi-sensory techniques we tried that did not work for our exhibit, but may be useful in other applications.


by Emily O'Hara on Dec 31, 2014

CMME: Audio Toolkit

Audio is a major feature of the final exhibit for the Museum of Science’s portion of the Creating Museum Media for Everyone (CMME) project. The audio components help guide visitors through their interaction with the exhibit. We found that many of the audio components were important for almost all visitors, not only those who had low or no vision. Audio is also useful for visitors who are dyslexic or who have other cognitive disabilities that affect the ability to read. This post outlines the final audio components, including text and audio files, we included in the exhibit. The findings that led us to most of our audio decisions are outlined in a previous post summarizing the formative evaluation of the CMME exhibit.

In this exhibit we used audio in three distinct ways:

  • Audio phone text
  • Broadcast text audio
  • Broadcast sonified audio

Audio phone text

Audio phone text accompanies almost all of the exhibits at the Museum of Science. This audio gives an overview of the exhibit component, including the physical layout, label copy text, image descriptions, and potential interactions visitors may have at an exhibit. This audio is typically accessed through an audio phone handset and visitors can advance through the audio files by pressing the buttons mounted on the exhibit near the handset holder.


Exhibit Component Drawing

This drawing of the CMME exhibit shows the audio phone handset on the front left edge of the exhibit component. There are two buttons mounted on the slanted surface above the handset that trigger the audio files to play when they are pressed.

The audio phone used for this exhibit has two buttons. The square button audio file contains a physical description of the exhibit so that visitors can orient themselves. The round button contains five audio files that articulate the text, images, and a brief introduction to possible visitor interaction at the exhibit. A file with the full audio phone text can be viewed and downloaded by clicking here. You can also listen to a sample audio file from the audio phone by clicking here (this matches the "Square button" section in the full audio phone text document).


Broadcast text audio

Broadcast text audio provides live feedback in response to a visitor’s action, such as touching the touch screen or pushing a button. This feedback often gives details about the visitor's selection and provides additional information about how they might interact with the exhibit. A file with the full broadcast audio text can be viewed and downloaded by clicking here. You can listen to sample audio files from the broadcast audio by clicking on the following links for the button instructions, the introduction to the graph, and a graph title (these match the text in the corresponding sections of the full broadcast audio text document).

The dynamic nature of the audio feedback meant some of the phrases and instructions were recorded in separate files and then pieced together in real time through the programming. For example, if a visitor holds their finger on one point on the graph, they will hear seven audio files strung together to describe the data in that area: “Turbine produced - 756 - watts in winds of - 25 - miles per hour - 4 - data points.” We chose not to use any computer-generated vocalizations for the text and recorded all of the audio with the same human voice.
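The real-time stitching described above amounts to building a playlist of pre-recorded clips for each data point. A minimal sketch, mirroring the seven-clip example phrase; the clip file names are hypothetical, not the exhibit's actual assets:

```python
def describe_point(watts, wind_mph, n_points):
    """Sketch: assemble the ordered list of pre-recorded clips for one
    graph point, e.g. "Turbine produced - 756 - watts in winds of - 25 -
    miles per hour - 4 - data points". File names are illustrative only.
    """
    return [
        "turbine_produced.wav",
        f"num_{watts}.wav",          # assumes a recording exists per value
        "watts_in_winds_of.wav",
        f"num_{wind_mph}.wav",
        "miles_per_hour.wav",
        f"num_{n_points}.wav",
        "data_points.wav",
    ]
```

Keeping the phrase fragments in fixed positions is what lets a single human voice cover every possible data point without re-recording whole sentences.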

Some exhibits at the Museum of Science have the broadcast audio as an “opt in” feature, and visitors have the option to turn the audio on by pressing a button. For this exhibit, we found the introduction to the graph was so important to visitor understanding that we decided to leave the broadcast audio on all of the time. This choice improves understanding for many visitors, but it may also limit interactions by visitors who do not want to listen to the audio or who may become overwhelmed by too much auditory stimulation. This concern led us to edit the amount of information we readily broadcast. Additional broadcast audio instructions can be accessed through a “More Audio” button located near the audio phone handset.

CMME audio phone and more info button

Picture of the front left corner of the CMME exhibit. The audio phone handset and corresponding control buttons are on the far left. The “More Audio” button is a few inches to the right and the cutout holes in the surface, where the speaker is mounted into the tabletop for the broadcast audio, are visible next to the buttons.

Although our feedback was dynamic, we were unable to expand it to encompass audio hints. These would have added dynamic direction about the next available options for visitors during idle time. For example, if a visitor explored touching the screen in the area of the graph, then after a brief period of inactivity the exhibit might prompt them to, “Try holding your finger in one place on the graph for a more detailed description of data at that point.” This approach divides instructions into more digestible pieces, delivered when a visitor is ready for them. This kind of dynamic feedback also involves an additional layer of instruction writing and programming that the scope of our project did not include.
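Though the project did not build audio hints, the idle-time behavior described above is straightforward to sketch: track the last touch, and after enough inactivity surface the next unheard hint. All names here are hypothetical.

```python
class IdleHintPrompter:
    """Sketch of the idle-time audio hints the post describes (a feature
    the CMME exhibit did not include). Hints play one at a time, in
    order, only after a stretch of visitor inactivity."""

    def __init__(self, hints, idle_seconds=10.0):
        self._hints = list(hints)
        self._idle_seconds = idle_seconds
        self._last_activity = 0.0
        self._next = 0

    def touch(self, now):
        """Record visitor activity (any touch resets the idle timer)."""
        self._last_activity = now

    def poll(self, now):
        """Return the next hint to play if the visitor has been idle
        long enough, else None."""
        if self._next >= len(self._hints):
            return None  # all hints already delivered
        if now - self._last_activity < self._idle_seconds:
            return None
        self._last_activity = now  # avoid repeating immediately
        hint = self._hints[self._next]
        self._next += 1
        return hint
```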

Broadcast sonified audio

In addition to the broadcast text audio, this exhibit also includes sonified audio: tones that represent data values on the graphs. Like the broadcast audio feedback, the sonified audio is dynamic and changes based on the data currently shown in the graph. The exhibit sonifies the trend lines in the data and sonifies individual data points as a visitor moves their finger over them on the touch screen. Below are two videos showing the sonified data. We used static to represent areas of the graph in which no data is present.

This video shows when a graph is first selected. As the trend line slider moves across the screen, audio feedback plays out the values, with higher pitches representing higher values in the data. This graph goes from low to high and then plays static for the second half of the graph where no data is present.

This video shows a person moving their finger around within the graph area on the touch screen. Each tone that is played represents one data point and the pitch corresponds to its value. Static is played when the user moves her finger into an area of the graph where no data points are present.
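The mapping behind both videos is the same: higher data values become higher pitches, and the absence of data becomes static. A minimal sketch; the frequency range is an assumption for illustration, not taken from the exhibit:

```python
def tone_for_value(value, v_min, v_max, f_low=220.0, f_high=880.0):
    """Sketch: map a data value to a pitch in Hz, with None (no data)
    mapped to static. The 220-880 Hz range is an assumed default."""
    if value is None:
        return "static"
    span = v_max - v_min
    # Linear interpolation from the data range to the frequency range.
    fraction = (value - v_min) / span if span else 0.0
    return f_low + fraction * (f_high - f_low)
```

Sweeping this function along the trend line reproduces the first video's low-to-high glide followed by static; calling it per data point under the finger reproduces the second.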

Our decision to include dynamic audio feedback allows a wider range of visitors to interact with the graphs in this exhibit and understand the wind turbine data being presented, but we had to be very judicious in our decisions about where to use audio. There were a few areas in which we had to remove audio feedback because it was causing confusion.

Originally, the buttons announced which option they represented when they were touched, before they were even pushed. This led to visitors accidentally triggering the audio while they were interacting with another part of the exhibit and led to confusion about which feedback corresponded with their actions. Additionally, the names of the turbines were often confusing in themselves, so having them repeated was not helpful. We added “wind turbine” to each of the brand names to reinforce the exhibit topic.

At first, we also played the broadcast audio introduction after each graph button was pushed. Some visitors felt this was repetitive, many did not listen, and some felt it was too complex to understand. Additionally, some visitors didn't realize the same audio was being repeated and felt obliged to listen even if they already understood what to do from using the prior graph. This led us to play the introduction audio and animation only the first time a graph is chosen by a visitor; during the introduction, interaction is locked out to reinforce understanding of the instructions. For each subsequent graph choice, visitors move straight to interacting with the graph. If a visitor does want the introduction content, a more detailed explanation is available through the “More Audio” button. Once a visitor stops interacting with the exhibit, it times out and returns to the idle screen, and any further interaction once again triggers the introduction to play.
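The play-once-then-reset behavior described above reduces to a small piece of state. A hedged sketch of the logic (not the exhibit's actual code):

```python
class GraphIntroController:
    """Sketch: the locked introduction plays only for the first graph a
    visitor selects; an idle timeout resets the exhibit so the next
    visitor hears it again."""

    def __init__(self):
        self._intro_played = False

    def select_graph(self):
        """Called when a graph button is pushed. Returns True if the
        introduction (with interaction locked out) should play."""
        if self._intro_played:
            return False  # subsequent graphs go straight to interaction
        self._intro_played = True
        return True

    def timeout(self):
        """Idle timeout: return to the attract screen and reset, so the
        next interaction triggers the introduction again."""
        self._intro_played = False
```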


We would like to note that visitors who are deaf can often feel the vibration of the audio and know that auditory information is being shared. If they feel confused by the interactive, they may think they are missing out on critical information. All audio directions in this exhibit are therefore also reinforced with visual text and images, so that they are accessible to visitors who are deaf.



by Emily O'Hara on Dec 31, 2014