When choreographing the sounds that play in your product, each sound should reflect its level of importance in the UI’s hierarchy. A sound’s prominence and personality should be appropriate to its level, and sounds of the same type (such as hero sounds) should share the same level of the hierarchy.
High in the hierarchy
Sounds that are higher in the hierarchy are important representations of a brand or product.
In a user flow, sounds that follow or precede one another should have related attributes (like timbre, melody, or envelope).
Types of sounds
Alerts and notifications
Ringtones and alarms
Primary UX sounds
Secondary UX sounds
Sounds that share attributes are unified as a group.
Key signatures are a defining characteristic of tonal sounds. They help build harmonic relationships between interactions.
Sounds that are played in close proximity to one another should use the same or complementary key signatures, unless a specific use case requires otherwise.
Earcons in a product should use complementary key signatures to create a relationship between them.
Don’t create earcons with unrelated key signatures, as doing so doesn’t express a unified product sound experience.
Expressing sound relationships
Show how states are related to one another by using motifs to express that connection. For example, the sound for an “on” state can relate to the sound for an “off” state.
Each sound plays in a direction, and that direction reverses depending on whether the switch is toggled on or off. This indicates that the two states are related, while performing opposite functions.
Don’t express opposite states with notes that have an ambiguous relationship.
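As a concrete illustration, an on/off toggle can play two assets that share a motif but run in opposite directions (for example, ascending versus descending). The sketch below is a minimal Kotlin example using Android’s SoundPool; the R.raw.toggle_on and R.raw.toggle_off resources are hypothetical assets, not part of the platform.

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.SoundPool

// Sketch: play directionally related sounds for a toggle's two states.
// R.raw.toggle_on and R.raw.toggle_off are hypothetical assets sharing
// one motif, authored to play in opposite directions.
class ToggleSounds(context: Context) {
    private val soundPool = SoundPool.Builder()
        .setMaxStreams(2)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build()
        )
        .build()

    private val onSoundId = soundPool.load(context, R.raw.toggle_on, 1)
    private val offSoundId = soundPool.load(context, R.raw.toggle_off, 1)

    fun play(isOn: Boolean) {
        val soundId = if (isOn) onSoundId else offSoundId
        // Full volume, no loop, normal playback rate.
        soundPool.play(soundId, 1f, 1f, /* priority = */ 1, /* loop = */ 0, /* rate = */ 1f)
    }
}
```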
Interaction sounds that occur regularly, such as those for typing, swiping, scrolling, or navigation, can benefit from small changes. These interactions should include minor variations in sound timbre to mimic the variance of sounds in real-world experiences.
When swiped, each item triggers a sound effect that includes minor variations in sound characteristics.
Each tap on the same UI element triggers a slightly different sound that contains subtle variations.
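One way to produce these variations at playback time, rather than authoring many separate assets, is to nudge the playback rate and volume slightly on each trigger. This is a minimal sketch assuming a SoundPool and a loaded tapSoundId as in the earlier toggle example; the variation ranges are illustrative, not prescriptive.

```kotlin
import android.media.SoundPool
import kotlin.random.Random

// Sketch: add subtle per-tap variation by nudging playback rate and volume.
fun playTapWithVariation(soundPool: SoundPool, tapSoundId: Int) {
    val rate = 1f + (Random.nextFloat() - 0.5f) * 0.06f   // ±3%: 0.97..1.03
    val volume = 0.9f + Random.nextFloat() * 0.1f         // 0.90..1.00
    soundPool.play(tapSoundId, volume, volume, /* priority = */ 1, /* loop = */ 0, rate)
}
```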
Mixing is the art of combining different sound sources into one audio stream. It involves adjusting each sound’s volume, frequency, spatial positioning, and more to create a rich, cohesive sound.
Different sound sources can be mixed to vary the emotion, intent, or character of the final sound. You can also adjust a sound’s focal point.
- This mix feels more open, making high frequencies more prominent.
- This mix feels more closed, putting the focus on the trill and reducing high-frequency content.
UX sounds should be balanced to accommodate other sounds in the UI and the physical environment. Treatments that isolate, duck, mix, and balance some sounds at specific moments can help focus user attention properly, so that the intent behind a sound comes across.
When a notification sound occurs while music is playing, the system temporarily gives the notification prominence. The sound priority moves away from the music until the notification is swiped away.
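On Android, this behavior is typically requested through the audio focus API rather than implemented by hand: a short sound asks for transient focus that permits ducking, and the system (or the music app) lowers the music while the sound plays. A minimal sketch, assuming API level 26+ and that playSound() is the caller’s own playback routine:

```kotlin
import android.media.AudioAttributes
import android.media.AudioFocusRequest
import android.media.AudioManager

// Sketch: request transient audio focus with ducking so music is lowered
// while a notification sound plays (requires API 26+).
fun playWithDucking(audioManager: AudioManager, playSound: () -> Unit) {
    val focusRequest = AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_NOTIFICATION)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build()
        )
        .build()

    if (audioManager.requestAudioFocus(focusRequest) == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        playSound()
        // In a real app, abandon focus when playback actually completes,
        // not immediately after starting it.
        audioManager.abandonAudioFocusRequest(focusRequest)
    }
}
```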
Sound mixing is nuanced and depends on the overall experience being designed. Consider these factors in determining how sounds should interact:
- Priority: Assign sounds to the appropriate priority and category based on user intention and system requirements. Give the highest priority sound the most prominence.
- Fading: When audio streams overlap, one stream’s volume may be temporarily reduced, muted, or turned off. Each sound may fade in or out either gradually or abruptly (a volume-fade sketch follows this list).
- Accessories: The audio accessories used, such as a headset or car speaker, may affect which sounds are played and how they are blended.
- Volume: To emphasize a particular sound in a stream, it’s better to reduce other, surrounding sounds rather than to amplify the volume of a single sound.
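A gradual fade can be implemented as a short volume ramp. The following is a minimal sketch, assuming a playing MediaPlayer; the 300 ms default duration is an illustrative choice, not a recommendation from this guidance.

```kotlin
import android.animation.ValueAnimator
import android.media.MediaPlayer

// Sketch: ramp a stream's volume from one level to another over a short
// duration, e.g., to fade music down while another sound takes priority.
fun fadeTo(player: MediaPlayer, from: Float, to: Float, durationMs: Long = 300L) {
    ValueAnimator.ofFloat(from, to).apply {
        duration = durationMs
        addUpdateListener { animator ->
            val level = animator.animatedValue as Float
            player.setVolume(level, level)
        }
        start()
    }
}
```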
Other device sounds
Multiple sounds can occur at the same time, both from user-generated activities and system sounds. For example, sounds from incoming notifications may occur while a user listens to music.
Sound for the user’s environment
To optimize a sound, audition it by testing on real devices and in real-world environments. Listening to sounds in real-world conditions (accounting for the software, hardware, environmental noise, acoustics, and other factors of an environment) lets you adjust them to play well across a wider range of conditions.
A sound’s attributes (such as timbre) can also be changed through processes such as composition rewrites, re-orchestration, melodic variation, and equalization.
Equalization (EQ) is an effect that enhances or reduces specific frequencies. EQ should be adjusted for the range of devices on which playback is designed to occur.
- This sound is equalized for full fidelity playback.
- This sound is equalized to reduce low-end frequencies and amplify high frequencies.
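As a concrete illustration of the second treatment, Android exposes a platform Equalizer effect that can be attached to a player’s audio session. The sketch below cuts the lowest band and boosts the highest; the band indices and gain levels are illustrative, since the actual band layout varies by device.

```kotlin
import android.media.MediaPlayer
import android.media.audiofx.Equalizer

// Sketch: reduce low-end frequencies and lift high frequencies, e.g., for
// playback on small speakers. Levels are in millibels; the device reports
// its own supported range and number of bands.
fun applySmallSpeakerEq(player: MediaPlayer): Equalizer {
    val eq = Equalizer(/* priority = */ 0, player.audioSessionId)
    val minLevel = eq.bandLevelRange[0]
    val maxLevel = eq.bandLevelRange[1]
    eq.setBandLevel(0.toShort(), minLevel)                       // cut lowest band
    eq.setBandLevel((eq.numberOfBands - 1).toShort(), maxLevel)  // boost highest band
    eq.setEnabled(true)
    return eq
}
```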
Sounds should play at a consistent level of loudness depending on their position in the sound hierarchy (determined by a sound’s priority level and category). For example, sound from a ringtone alert can be louder than sound from UI feedback, as it has higher priority in the moment it occurs.
When measuring loudness through specific hardware, take into account “perceived loudness” (measured in A-weighted decibels, or dB(A)), rather than relying solely on the direct peak meter level.
Volume controls should reflect how people hear sound, rather than what’s mechanically possible. Volume steps should therefore increase logarithmically rather than linearly.
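One common way to do this is to map the slider’s linear position onto a decibel range and convert to linear gain. A minimal sketch, where the 60 dB dynamic range is an assumed, illustrative value:

```kotlin
import kotlin.math.pow

// Sketch: map a linear slider position (0.0..1.0) to a perceptually even
// gain curve. Position 1.0 maps to 0 dB (full volume); positions near 0
// approach -dynamicRangeDb, and 0 itself is treated as silence.
fun sliderToGain(position: Float, dynamicRangeDb: Float = 60f): Float {
    if (position <= 0f) return 0f
    val db = (position - 1f) * dynamicRangeDb
    return 10f.pow(db / 20f)
}
```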
For more information on loudness, see the Actions on Google Audio Loudness guidelines.
The final audio file playback may change depending on a product’s hardware and software limitations.
To reduce file size (with minimal degradations to quality):
- Apply lossy compression (such as MP3 or Ogg Vorbis) up to the point just before artifacts become audible
- Lower the bit depth and sample rate until just before artifacts become audible (see the sketch after this list)
- Trim any unnecessary silence at the beginning or the end of the file
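To audition how far bit depth can be lowered, you can quantize the samples and listen for artifacts. The sketch below is purely illustrative, operating on raw 16-bit PCM; a real conversion pipeline would also apply dither and proper resampling.

```kotlin
// Sketch: simulate a lower bit depth on 16-bit PCM samples by discarding
// low-order bits, so the result can be auditioned for quantization noise.
fun quantize(samples: ShortArray, targetBits: Int): ShortArray {
    require(targetBits in 1..16) { "targetBits must be between 1 and 16" }
    val shift = 16 - targetBits
    return ShortArray(samples.size) { i ->
        ((samples[i].toInt() shr shift) shl shift).toShort()
    }
}
```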
Don’t degrade or compress a sound to the point that artifacts become noticeable (such as noise, distortion, or stray frequencies that can arise from file compression). It’s better to design a new sound than to ship one with audible artifacts.
- Uncompressed audio
- The lowered bit depth and sample rate have introduced a noticeable degradation in quality.
File format recommendations
The final format of the audio depends on system-level implementation and restrictions. Choose the highest-fidelity (ideally lossless) format your system allows, especially for key sounds in your user experience.
For more information on supported file formats, visit the Android Developers supported media documentation.