
Mobile Accessibility

Eitan Isaacson

Feb 21, 2012, 11:02:13 PM
to dev-pl...@lists.mozilla.org
Hi Mozilla friends!

I shared this with folks on the mobile team last week; I think it is
time for wider distribution.

The main task I have been working on in the last few months is putting
together an accessibility solution for our mobile browser on Android. I
have been slow to communicate this work to the wider community, so I
hope this mail will outline what I believe would be a fantastic way
forward. I have spent most of my time implementing the disparate
components, and it is now finally taking form.

tl;dr
=====
Keep reading!

Competitor Analysis
===================
Google and Apple, the two main mobile players, have taken very
different approaches in their accessibility user experience.

iOS Accessibility
-----------------
Apple's user experience is very Apple. It is clean, robust, and simple,
with limited customization options. The simplicity helps create
seductive products that are probably the most popular gadgets among
blind users. The limited customization forces users to compromise their
needs at best and excludes users with different disabilities at worst.

A user can activate VoiceOver (the screen reader) without any sighted
assistance by triple-pressing the home button. When a user is first
introduced to VoiceOver, they only need to learn a handful of swipe
gestures to master the device. The entire platform conforms to the same
gestures: browsing the web is as simple as setting the alarm clock.
Through progressive disclosure the user can discover new gestures and
features that make iOS web browsing rival many desktop solutions.

Android Accessibility
---------------------
Before Ice Cream Sandwich

Android's accessibility answer has been to create very specific
solutions for different challenges. The platform accessibility layer,
along with the screen reader, has been de-emphasized and underdeveloped
in favor of per-case specialized solutions. This has led to overall
fragmentation in the user experience and varying degrees of success on
each front. Some examples:
- The Eyes-Free Shell is an alternative home screen with an audible
interface that provides easy access to Android phone features through a
"Stroke Dialer".
- The TalkBack keyboard is an audible on-screen keyboard and "virtual
D-Pad". It is an input method meant for devices that lack directional
controllers or hardware keyboards. It provides features like
alternative home/menu/search/back buttons, and gives haptic and voice
feedback in certain circumstances.
- Depending on the vendor and carrier, a different set of packages will
be available on the phone to provide accessibility; this leads to some
bizarre scenarios:
* The Google screen reader solution is split into three apps:
SoundBack, TalkBack, and KickBack, each providing a different kind of
feedback: "earcons", speech, and haptics, respectively. If they are not
preinstalled on the phone, the user needs to install each of them
separately, with no appropriate description in the Android Market as to
what each package does.
* Carriers try to fill the void by providing their own accessibility
solutions. AT&T, for example, ships phones with "AT&T Mobile
Accessibility Lite", which is a suite of 11 applications for blind
users, such as phone, contacts, and calendar. The full version of the
suite is available on the Android Market for $99.
- Even with the overall improvements in ICS, the new web browser has a
separate accessibility solution from the rest of the platform, and it
is "self-voicing", meaning that its only form of output is speech and
it ignores any system settings the user may have configured for their
screen reader.

Ice Cream Sandwich

The latest Android release greatly improves the platform accessibility
API and introduces some user-visible features such as explore by touch.
As far as I can tell, the bulk of the API additions has not been put to
use yet. Piecemeal solutions have not been entirely abandoned (see the
self-voicing browser above), but this seems to be the start of a
unified experience, or at least a step towards it.

Where We Should Be
==================
Android's and iOS's solutions are radically different. I believe there
is a giant gap to fill that not only answers users' needs but also goes
hand in hand with Mozilla and Firefox's mission. We could be simple and
robust like iOS, while being feature rich and specialized for users'
needs.

It Is An Android's World
------------------------
We need to be accessible in Android, a very imperfect platform when it
comes to accessibility UX. Since the platform's accessibility API and
screen reader are so sparse in features, we have been encouraged to
create a stand-alone solution in the form of a self-voicing extension.
But then I think it is important to ask: will we become complicit in
the complex and fragmented Android experience? An accessibility
solution that requires the user to download extra components, learn how
to use them, and troubleshoot and configure them defeats the whole
purpose of accessibility, and renders the well-meaning solution
inaccessible.

So the ironic bit is that if we want to provide a truly accessible
solution in Android, we need to *conform* to the poor Android feature
set. This will allow the user to install Firefox and have it *just
work*, with no extra configuration steps and no extra learning. The
user is accustomed to navigating with the directional controller, and
we should allow them to do that out of the box.

More Features
-------------
Being Firefox means we encourage and thrive on extensions. Navigating to
objects on a web page with the directional controller is far from the
end of the story. The accessibility experience on mobile Firefox could
be very rich and diverse for different disabilities and needs via
extensions that augment the limited feature set of Android
accessibility. To start, we could have one that would provide all of the
navigation features that one expects of a screen reader: jump to
headers, links, landmarks, top, bottom, controls, etc. We could
introduce a set of gestures and keyboard input that would satisfy most
users. A user who installs such extensions has enough buy-in, and could
be expected to learn how to use them.

The Solution
============
Unlike desktop accessibility, our mission on mobile extends to areas
that are traditionally handled by the screen reader, specifically
because the screen reader in Android is little more than a presenter
that listens for accessibility events and outputs speech. Here are the
three main additions we need. Most of this has already been
implemented, and some of it is in mozilla-central. The parts that are
not will soon be up for review.

1. Virtual Cursor
-----------------
Screen readers often keep an external state of a specific position
within a web page in Firefox. This is sometimes called a "virtual
cursor"; it behaves something like a caret or tab focus. It can jump
to different parts of the page depending on the user's input; it can
find links, headers, landmarks, etc. Since the platform we are working
in does not have this, we need to implement it ourselves. And we did;
it is in mozilla-central. Given a special rule object, a document's
virtual cursor can jump to different parts of the page. An event is
emitted and a state is retained. This lives in the accessibility
module.
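
To make this concrete, here is a minimal sketch of what driving the
virtual cursor with a rule object might look like from chrome JS. Every
name below (getDocAccessible, virtualCursor, moveNext, the rule shape)
is a hypothetical stand-in, not the exact interface in mozilla-central:

  // Illustrative sketch only; names are hypothetical stand-ins for
  // the real virtual cursor interface in the accessibility module.
  const ROLE_HEADING = "heading"; // stand-in for the Gecko role constant

  // A rule object tells the cursor which accessibles it may land on.
  let headerRule = {
    match: function (aAccessible) {
      return aAccessible.role === ROLE_HEADING;
    }
  };

  // Advance the document's virtual cursor to the next match. On
  // success an event is emitted and the new position is retained.
  let docAcc = getDocAccessible(content.document); // hypothetical helper
  docAcc.virtualCursor.moveNext(headerRule);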

2. Android Accessibility API Wrapper
------------------------------------
Just like on any other platform, this is the part where we wrap our
Gecko nsIAccessible interface and provide it to the system. In the
pre-ICS Android case this really just means wrapping events. The tricky
part here is that it is not enough to simply proxy Gecko events, since
Android's API has no support for roles (links, buttons, paragraphs,
etc.), and pre-ICS it does not support accessible object hierarchies.
So when we dispatch these events to the system, we need to provide
localized role and state strings along with any useful navigation
context; for example, if we just entered a menu bar, we should inform
the user. Again, ordinarily this is done on the screen reader's side,
but these are extraordinary circumstances :)
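
To illustrate, here is a rough sketch of baking the role and state into
the event text; all of the helper names below are made up for
illustration and are not the actual wrapper code:

  // Sketch: flatten the presentation into the event text itself,
  // since pre-ICS Android events carry no role or state fields.
  // localizedRole, localizedStates, sendAndroidEvent and currentAcc
  // are hypothetical.
  function eventTextFor(aAccessible) {
    // Localized role string, e.g. "link" or "button"; Android will
    // not speak a role on its own, so we must include it.
    let parts = [localizedRole(aAccessible.role)];
    if (aAccessible.name)
      parts.push(aAccessible.name);
    // Localized state strings, e.g. "checked" or "expanded".
    parts = parts.concat(localizedStates(aAccessible.states));
    return parts.join(" ");
  }

  // The dispatched text carries the whole presentation; it is all
  // that the system screen reader will speak.
  sendAndroidEvent({ type: "focused", text: eventTextFor(currentAcc) });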

3. Chrome JS Additions
----------------------
The virtual cursor above needs a view and a controller. These are
implemented in the chrome JS. By default these code paths are dormant.
If we observe an accessible-event notification, we initialize the
virtual cursor controller and listen for directional key events. When
the virtual cursor moves, we scroll to its position and highlight it
with a rectangle.
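
In outline, the dormant code path could look something like this; the
observer wiring and helper names are a sketch, not the code that is up
for review:

  // Sketch: stay dormant until the first accessibility event is
  // observed, then translate directional key input into virtual
  // cursor movement. moveForward and moveBackward are hypothetical
  // helpers built on the rule-based cursor from part 1.
  let VirtualCursorController = {
    _initialized: false,

    observe: function (aSubject, aTopic, aData) {
      // Called for accessibility events; initialize exactly once.
      if (this._initialized)
        return;
      this._initialized = true;
      window.addEventListener("keydown", this, true);
    },

    handleEvent: function (aEvent) {
      switch (aEvent.keyCode) {
        case KeyEvent.DOM_VK_DOWN:
          moveForward();  // also scrolls to and highlights the position
          break;
        case KeyEvent.DOM_VK_UP:
          moveBackward();
          break;
      }
    }
  };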

Extensions
----------
Extensions could manipulate the virtual cursor and provide more complex
traversal rules for it. For example, an extension could map a
two-finger swipe down to navigating to the next header, as sketched
below. The JavaScript would be minimal, since most of the heavy lifting
is already implemented in the core. Extensions should also be able to
emit specialized Android accessibility events; this should be easy to
wire through the java-gecko bridge's handleGeckoMessage call.
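
For instance, a "next header" extension would only need to supply a
traversal rule and an input binding; something along these lines, with
the gesture hook and cursor names as hypothetical as before:

  // Sketch of an extension adding header navigation on top of the
  // core. onGesture and getDocAccessible are hypothetical plumbing;
  // the rule shape matches the virtual cursor sketch above.
  const ROLE_HEADING = "heading"; // stand-in for the Gecko role constant

  let headerRule = {
    match: function (aAccessible) {
      return aAccessible.role === ROLE_HEADING;
    }
  };

  onGesture("two-finger-swipe-down", function () {
    let docAcc = getDocAccessible(content.document);
    // The core emits the event, speaks through Android, and draws
    // the highlight; the extension supplies only rule and binding.
    docAcc.virtualCursor.moveNext(headerRule);
  });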

Conclusion - The Future
=======================
We, in the accessibility team, need to step outside our module in order
to provide a positive experience for our users on mobile. I hope the
rationale I laid out above helps folks understand where we want to be,
and I hope we can get people's support and input in order to get there.
Patches coming your way!

Besides Android, I hope the bits that we are putting in place now will
help in B2G as well. Certain things, like the virtual cursor, might end
up in one form or another on the desktop too.
