Use AWS AI and ML services to foster accessibility and inclusion of people with a visual or communication impairment
AWS offers a broad set of artificial intelligence (AI) and machine learning (ML) services, including a suite of pre-trained, ready-to-use services for developers with no prior ML experience. In this post, we demonstrate how to use such services to build an application that fosters the inclusion of people with a visual or communication impairment, which includes difficulties in seeing, reading, hearing, speaking, or having a conversation in a foreign language. With services such as Amazon Transcribe, Amazon Polly, Amazon Translate, Amazon Rekognition, and Amazon Textract, you can add features to your projects such as live transcription, text to speech, translation, object detection, and text extraction from images.
According to the World Health Organization, over 1 billion people—about 15% of the global population—live with some form of disability, and this number is likely to grow because of population ageing and an increase in the prevalence of some chronic diseases. For people with a speech, hearing, or visual impairment, everyday tasks such as listening to a speech or a TV program, expressing a feeling or a need, looking around, or reading a book can feel like impossible challenges. A wide body of research highlights the importance of assistive technologies for the inclusion of people with disabilities in society. According to research by the European Parliamentary Research Service, mainstream technologies such as smartphones provide more and more capabilities suitable for addressing the needs of people with disabilities. In addition, when you design for people with disabilities, you tend to build features that improve the experience for everyone; this is known as the curb-cut effect.
AWS AugmentAbility is powered by five AWS AI services: Amazon Transcribe, Amazon Translate, Amazon Polly, Amazon Rekognition, and Amazon Textract. It also uses Amazon Cognito user pools and identity pools for managing authentication and authorization of users.
After deploying the web app, you will be able to access the following features:
- Live transcription and text to speech – The app transcribes conversations and speeches for you in real time using Amazon Transcribe, an automatic speech recognition service. Type what you want to say, and the app says it for you by using Amazon Polly text-to-speech capabilities. This feature also integrates with Amazon Transcribe automatic language identification for streaming transcriptions—with a minimum of 3 seconds of audio, the service can automatically detect the dominant language and generate a transcript without you having to specify the spoken language.
- Live transcription and text to speech with translation – The app transcribes and translates conversations and speeches for you, in real time. Type what you want to say, and the app translates and says it for you. Translation is available in the over 75 languages currently supported by Amazon Translate.
- Real-time conversation translation – Select a target language, speak in your language, and the app translates what you said in your target language by combining Amazon Transcribe, Amazon Translate, and Amazon Polly capabilities.
- Object detection – Take a picture with your smartphone, and the app describes the objects around you by using Amazon Rekognition label detection features.
- Text recognition for labels, signs, and documents – Take a picture with your smartphone of any label, sign, or document, and the app reads it out loud for you. This feature is powered by Amazon Rekognition and Amazon Textract text extraction capabilities. AugmentAbility can also translate the text into over 75 languages, or make it more readable for users with dyslexia by using the OpenDyslexic font.
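The translation-related features above combine services in a pipeline: transcribe the speech, translate the text, then synthesize it. As a minimal sketch, the helpers below (illustrative names, not taken from the AugmentAbility code base) build the request parameters such a pipeline would pass to Amazon Translate and Amazon Polly:

```javascript
// Illustrative helpers (not from the AugmentAbility repo): build the request
// parameters for a translate-then-speak step. Passing "auto" as the source
// language asks Amazon Translate to detect the language of the input text.
function buildTranslateParams(text, targetLanguageCode) {
  return {
    Text: text,
    SourceLanguageCode: "auto",
    TargetLanguageCode: targetLanguageCode,
  };
}

// Parameters for the Amazon Polly SynthesizeSpeech operation; the voice ID
// would be chosen to match the target language.
function buildSpeechParams(text, voiceId) {
  return {
    OutputFormat: "mp3",
    Text: text,
    VoiceId: voiceId,
  };
}
```

Using "auto" as the source language defers language detection to the service, which is why the Amazon Comprehend permission mentioned later in this post is needed.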
Live transcription, text to speech, and real-time conversation translation features are currently available in Chinese, English, French, German, Italian, Japanese, Korean, Brazilian Portuguese, and Spanish. Text recognition features are currently available in Arabic, English, French, German, Italian, Portuguese, Russian, and Spanish. An updated list of the languages supported by each feature is available on the AugmentAbility GitHub repo.
You can build and deploy AugmentAbility locally on your computer or in your AWS account by using AWS Amplify Hosting, a fully managed CI/CD and static web hosting service for fast, secure, and reliable static and server-side rendered apps.
The following diagram illustrates the architecture of the application, assuming that it’s deployed in the cloud using AWS Amplify Hosting.
The solution workflow includes the following steps:
- The user signs in by entering a user name and a password. Authentication is performed against the Amazon Cognito user pool. After a successful login, the Amazon Cognito identity pool is used to provide the user with the temporary AWS credentials required to access app features.
- While the user explores the different features of the app, the mobile browser interacts with Amazon Transcribe (StartStreamTranscriptionWebSocket operation), Amazon Translate (TranslateText operation), Amazon Polly (SynthesizeSpeech operation), Amazon Rekognition (DetectLabels and DetectText operations) and Amazon Textract (DetectDocumentText operation).
These service calls are made directly from the browser by using the AWS SDK for JavaScript, loaded through a script tag that references the hosted SDK package. A custom browser SDK was built with a specified set of services (for instructions, refer to Building the SDK for Browser).
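As a minimal sketch of this browser-side pattern, the functions below assume the AWS SDK for JavaScript (v2) is available globally as AWS, as it is when loaded from a script tag; they are illustrative, not taken from the repository:

```javascript
// Sketch of the browser-side pattern (assumes the AWS SDK for JavaScript v2
// is loaded globally as AWS by a script tag). Identifiers are placeholders.
function configureCredentials(region, identityPoolId) {
  AWS.config.region = region;
  // Exchange the Cognito identity for temporary AWS credentials
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: identityPoolId,
  });
}

// Wraps the Amazon Rekognition DetectLabels operation that backs the object
// detection feature; imageBytes is the picture taken with the smartphone.
function detectLabels(imageBytes, callback) {
  var rekognition = new AWS.Rekognition();
  rekognition.detectLabels(
    { Image: { Bytes: imageBytes }, MaxLabels: 10 },
    callback
  );
}
```

In this sketch, configureCredentials would run once after sign-in, with the values from the app configuration, and the other service operations would be wrapped in the same style as detectLabels.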
The following walkthrough shows how to deploy AugmentAbility by using AWS Amplify Hosting; it includes the following steps:
- Create the Amazon Cognito user pool and identity pool, and grant permissions for accessing AWS AI services.
- Clone the GitHub repository and edit the configuration file.
- Deploy the mobile web app to the AWS Amplify console.
- Use the mobile web app.
Create the Amazon Cognito user pool and identity pool, and grant permissions for accessing AWS AI services
The first step required for deploying the app consists of creating an Amazon Cognito user pool with the Hosted UI enabled, creating an Amazon Cognito identity pool, integrating the two pools, and finally granting permissions for accessing AWS services to the AWS Identity and Access Management (IAM) role associated with the identity pool. You can either complete this step by manually working on each task, or by deploying an AWS CloudFormation template.
The CloudFormation template automatically provisions and configures the necessary resources, including the Amazon Cognito pools, IAM roles, and IAM policies.
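The way the template wires these resources together can be pictured with the following simplified fragment; the logical names and omitted properties are assumptions for illustration, not the actual template shipped with the project:

```yaml
# Simplified, illustrative sketch of the resources the template provisions
# (not the actual AugmentAbility template; trust and access policies omitted).
Resources:
  UserPool:
    Type: AWS::Cognito::UserPool
  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      UserPoolId: !Ref UserPool
  IdentityPool:
    Type: AWS::Cognito::IdentityPool
    Properties:
      AllowUnauthenticatedIdentities: false
      CognitoIdentityProviders:
        - ClientId: !Ref UserPoolClient
          ProviderName: !GetAtt UserPool.ProviderName
  AuthenticatedRole:
    Type: AWS::IAM::Role   # granted access to the AI services in scope
  RoleAttachment:
    Type: AWS::Cognito::IdentityPoolRoleAttachment
    Properties:
      IdentityPoolId: !Ref IdentityPool
      Roles:
        authenticated: !GetAtt AuthenticatedRole.Arn
```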
- Sign in to the AWS Management Console and launch the CloudFormation template by choosing Launch Stack:
The template launches in the EU West (Ireland) AWS Region by default. To launch the solution in a different Region, use the Region selector in the console navigation bar. Make sure to select a Region in which all the AWS services in scope (Amazon Cognito, AWS Amplify, Amazon Transcribe, Amazon Polly, Amazon Translate, Amazon Rekognition, and Amazon Textract) are available.
- Choose Next.
- For Region, enter the identifier of the Region you want to use (among the supported ones).
- For Username, enter the user name you want to use to access the app.
- For Email, enter the email address to which the temporary password for your first sign-in should be sent.
- Choose Next.
- On the Configure stack options page, choose Next.
- On the Review page, review and confirm the settings.
- Select the check box acknowledging that the template will create IAM resources and may require an AWS CloudFormation capability.
- Choose Create stack to deploy the stack.
You can view the status of the stack on the AWS CloudFormation console in the Status column. You should see a CREATE_COMPLETE status in a couple of minutes.
As part of the template deployment, the IAM role that is assumed by the authenticated user is granted permissions for the Amazon Transcribe, Amazon Translate, Amazon Polly, Amazon Rekognition, Amazon Textract, and Amazon Comprehend actions used by the app.
Even though Amazon Comprehend is not explicitly used in this web application, permissions are granted for the action comprehend:DetectDominantLanguage. Amazon Translate may automatically invoke Amazon Comprehend to determine the language of the text to be translated if a language code isn’t specified.
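Putting together the operations listed earlier, the policy attached to the authenticated role can be expected to allow actions similar to the following; this is an illustrative sketch, not the exact policy created by the template:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "transcribe:StartStreamTranscriptionWebSocket",
        "translate:TranslateText",
        "polly:SynthesizeSpeech",
        "rekognition:DetectLabels",
        "rekognition:DetectText",
        "textract:DetectDocumentText",
        "comprehend:DetectDominantLanguage"
      ],
      "Resource": "*"
    }
  ]
}
```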
Clone the GitHub repository and edit the configuration file
Now that access to AWS AI services has been configured, you’re ready to clone the GitHub repository and edit the configuration file.
- In the AWS AugmentAbility GitHub repo, choose Code, then choose Download ZIP.
You’re either prompted to choose a location on your computer where the ZIP file should be saved, or the file is automatically saved to your default downloads folder.
- After you download the file, unzip it and delete the ZIP file.
You should have obtained a folder named aws-augmentability-main with some files and subfolders in it.
- Create a file named config.js with any text editor, and enter the following content in it:
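The exact contents are defined in the repository's README; as a sketch of the expected shape, config.js declares four placeholder values (the variable names below are assumptions if they differ from the README):

```javascript
// config.js: sketch of the expected shape (the repo README has the
// authoritative version). Each INSERT_ placeholder is replaced in the
// next step with a value from the CloudFormation stack outputs.
var identityPoolId = "INSERT_COGNITO_IDENTITY_POOL_ID";
var region = "INSERT_AWS_REGION_ID";
var userPoolId = "INSERT_COGNITO_USER_POOL_ID";
var userPoolClientId = "INSERT_COGNITO_USER_POOL_CLIENT_ID";
```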
- In the config.js file you created, replace the four INSERT_ strings with the Amazon Cognito identity pool ID, the identifier of your Region of choice, the Amazon Cognito user pool ID, and the user pool client ID.
You can retrieve these values by opening the AWS CloudFormation console, choosing the stack named augmentability-stack, and choosing the Outputs tab.
- Save the config.js file in the aws-augmentability-main folder, and zip the folder to obtain a new aws-augmentability-main.zip file.
Deploy the mobile web app to the Amplify console
Now that you have downloaded and edited the AugmentAbility project files, you’re ready to build and deploy the mobile web app using the Amplify console.
- Open the AWS Amplify console and, on the Get started with Amplify Hosting page, choose Deploy without Git provider.
- Choose Continue.
- In the Start a manual deployment section, for App name, enter the name of your app.
- For Environment name, enter a meaningful name for the environment.
- For Method, choose Drag and drop.
- Either drag and drop the aws-augmentability-main.zip file from your computer onto the drop zone, or use Choose files to select it from your computer.
- Choose Save and deploy, and wait for the message Deployment successfully completed.
Use the mobile web app
The mobile web app should now be deployed. Before accessing the app for the first time, you have to set a new password for the user that was automatically created during Step 1. You can find the link to the temporary login screen on the Outputs tab for the CloudFormation stack (field UserPoolLoginUrl). For this first sign-in, use the user name you set up and the temporary password you received via email.
After you set your new password, you’re ready to test the mobile web app.
In the General section of the Amplify console, you should be able to find a link to the app under the Production branch URL label. Open it or send it to your smartphone, then sign in with your new credentials, and start playing with AugmentAbility.
If you want to make changes to the mobile web app, you can work on the files cloned from the repository, locally build the mobile web app (as explained in the README file), and then redeploy the app by uploading the updated ZIP file via the Amplify console. As an alternative, you can create a GitHub, Bitbucket, GitLab, or AWS CodeCommit repository to store your project files, and connect it to Amplify to benefit from automatic builds on every code commit. To learn more about this approach, refer to Getting started with existing code. If you follow this tutorial, make sure to replace the command npm run build with npm run-script build at Step 2a.
To create additional users on the Amazon Cognito console, refer to Creating a new user in the AWS Management Console. If you need to recover the password for a user, use the temporary login screen you used for changing the temporary password. You can find the link on the Outputs tab of the CloudFormation stack (field UserPoolLoginUrl).
When you’re done with your tests, to avoid incurring future charges, delete the resources created during this walkthrough.
- On the AWS CloudFormation console, choose Stacks in the navigation pane.
- Choose the stack augmentability-stack.
- Choose Delete and confirm deletion when prompted.
- On the Amplify console, select the app you created.
- On the Actions menu, choose Delete app and confirm deletion when prompted.
In this post, I showed you how to deploy a code sample that uses AWS AI and ML services to put features such as live transcription, text to speech, object detection, or text recognition in the hands of everyone. Knowing how to build applications that can be used by people with a wide range of abilities and disabilities is key for creating more inclusive and accessible products.
To get started with AugmentAbility, clone or fork the GitHub repository and start experimenting with the mobile web app. If you want to experiment with AugmentAbility before deploying resources in your AWS account, you can check out the live demo.
About the Author
Luca Guida is a Solutions Architect at AWS; he is based in Milan and supports Italian ISVs in their cloud journey. With an academic background in computer science and engineering, he started developing his AI/ML passion at university; as a member of the natural language processing (NLP) community within AWS, Luca helps customers be successful while adopting AI/ML services.