Apple rolls out software features in support of people with disabilities


AssistiveTouch on Apple Watch relies on the Watch’s gyroscope, accelerometer, and heart rate sensor, which together let the device detect subtle differences in muscle movement and tendon activity.

The post Apple rolls out software features in support of people with disabilities appeared first on Pocketnow.


Smart Speakers could bring contactless health monitoring by detecting abnormal heart rhythms

According to new research from the University of Washington, ordinary smart speakers could offer a contactless way to screen for irregular heartbeats. The researchers developed an AI-powered system that relies on sonar technology to pick up vibrations caused by nearby chest wall movements. If it ever makes it into products, it has the potential to change how doctors conduct telemedicine appointments by providing data that would otherwise require wearables, health hardware, or an in-person checkup.

“We have Google and Alexa in our homes all around us. We predominantly use them to wake us up in the morning or play music,” said Shyam Gollakota, a UW computer science professor and co-author of the report. “The question we’ve been asking is, can we use the smart speaker for something more useful?” Smart speaker makers could integrate the technology into existing products via software updates, the researchers say.

The researchers say their goal was to find a way to use devices people already own to edge cardiology and health monitoring into the future. The system requires no chest-mounted hardware; to get a reading, you simply sit within two feet of the speaker.

It works by emitting audio signals into the room at a volume humans can’t hear. The pulses bounce back to the speaker, and an algorithm identifies the beating patterns generated by a person’s chest wall. A second algorithm then measures the amount of time between consecutive heartbeats. These inter-beat intervals could allow doctors to gauge how well the heart is functioning.
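To make the last step concrete, here is a toy Python sketch, not the researchers' algorithm, showing the arithmetic of deriving inter-beat intervals from detected beat timestamps and flagging high variability, one simple proxy for an irregular rhythm (the threshold below is an arbitrary illustrative choice):

```python
# Toy illustration of the final processing step described above:
# turn detected beat timestamps (in seconds) into inter-beat
# intervals (IBIs), then flag unusually high variability.

def inter_beat_intervals(beat_times):
    """Return the time gaps between consecutive detected beats."""
    return [b - a for a, b in zip(beat_times, beat_times[1:])]

def is_irregular(ibis, cv_threshold=0.15):
    """Flag a rhythm whose coefficient of variation (std/mean of the
    intervals) exceeds a chosen threshold -- a crude irregularity proxy."""
    mean = sum(ibis) / len(ibis)
    variance = sum((x - mean) ** 2 for x in ibis) / len(ibis)
    cv = (variance ** 0.5) / mean
    return cv > cv_threshold

# A steady ~75 bpm rhythm (about 0.8 s between beats)...
steady = [0.0, 0.8, 1.6, 2.4, 3.2]
# ...versus one with erratic gaps between beats.
erratic = [0.0, 0.8, 1.1, 2.3, 2.6]

print(is_irregular(inter_beat_intervals(steady)))   # False
print(is_irregular(inter_beat_intervals(erratic)))  # True
```

A real system would first have to recover the beat timestamps from the reflected sonar signal, which is the hard part the researchers' AI handles.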

This data was compared to results from medical-grade ECG monitors. Surprisingly, the smart speakers’ readings turned out to be relatively accurate, deviating from the ECG readings only by an amount that “wasn’t medically relevant,” the researchers say. The tests were run on a developer version of Alexa with a low-quality speaker; the more powerful speakers in mainstream devices could enable readings from farther away.

Via: The Washington Post

The post Smart Speakers could bring contactless health monitoring by detecting abnormal heart rhythms appeared first on Pocketnow.

Apple, Google training their voice assistants to understand people with speech disabilities

According to the National Institute on Deafness and Other Communication Disorders, approximately 7.5 million people in the U.S. have trouble using their voices. This group is at risk of being left behind by voice-recognition technology. But in 2021, the push is to make technology accessible to everyone, and tech firms including Apple and Google are working to improve their voice assistants’ handling of atypical speech, training them to understand everyone.

“For someone who has cerebral palsy and is in a wheelchair, being able to control their environment with their voice could be super useful to them,” said Julie Cattiau, a product manager at Google. The company is collecting atypical speech data as part of an initiative to train its voice-recognition tools. Training assistants like Siri and Google Assistant could improve the voice-recognition experience for a number of groups, including seniors with degenerative diseases.

Apple is working to help Siri automatically detect if someone speaks with a stutter

Apple debuted its Hold to Talk feature on handheld devices in 2015. It gives users control over how long they want Siri to listen, preventing the assistant from interrupting users who stutter before they have finished speaking. Now, Apple is working to help Siri automatically detect whether someone speaks with a stutter. The company has built a bank of 28,000 audio clips from podcasts featuring stuttering to help its assistant recognize atypical speech.

Google’s Project Euphonia is the company’s initiative to test a prototype app that lets people with atypical speech communicate with Google Assistant and smart Google Home products. It aims to train the software to understand unique speech patterns, and the company hopes these snippets will help train its artificial intelligence on the full spectrum of speech.

Amazon isn’t far behind with its Alexa voice assistant. The company announced Alexa integration with Voiceitt, which lets people with speech impairments train an algorithm to recognize their own unique vocal patterns.

Source: WSJ

The post Apple, Google training their voice assistants to understand people with speech disabilities appeared first on Pocketnow.

Zoom is making its automatic closed captioning feature free for all users

Zoom today announced that it is extending the platform’s Live Transcription capability to all users, both free and paid. The company, which recorded a massive boost in its user base as work and education shifted to remote collaboration during the pandemic, says Live Transcription will roll out to all users on its free service tier in the fall.

Coming in the fall season, but you can request an early preview

However, if you want to try the feature ahead of its wider rollout, you can request early access by filling out a form. Until now, Live Transcription has been exclusive to paid Pro, Business, Education, and Enterprise accounts, as well as approved K-12 accounts, on both the desktop and mobile clients. For the moment, the feature supports only English.

READ MORE: Google is at work to improve the Meet and Zoom experience on Chromebooks

Zoom has also highlighted a few prerequisites for Live Transcription to work properly. The company says the performance of its real-time automatic transcription depends on factors such as the level of background noise, how loud and clear the speaker’s voice is, and the speaker’s proficiency in English. Regional dialects and vocabulary might prove limiting as well.

Factors such as background noise and your English proficiency will affect performance

And if Live Transcription proves unhelpful due to any of these limitations, Zoom already offers a manual captioning feature to all users. A meeting’s host can either handle closed captioning of the ongoing interaction themselves or assign the duty to an attendee of their choice. Zoom also allows users to rely on a third-party closed captioning service.

READ MORE: Zoom’s ambitions expand to email and calendar service after pandemic surge: Report

Zoom itself recommends a manual captioner over its AI-based solution for a higher degree of accuracy, since the latter’s performance depends on a variety of external factors. Live Transcription is currently available on version 5.0.2 or later of Zoom for Windows, macOS, Android, and iOS.

The post Zoom is making its automatic closed captioning feature free for all users appeared first on Pocketnow.

Facebook is getting better at providing more details to visually impaired users

Facebook is improving its Automatic Alternative Text (AAT) technology to better use object recognition to generate descriptions of photos on demand, helping blind and visually impaired users understand what’s in their News Feed. For context, AAT was introduced back in 2016; the improved version recognizes over 1,200 concepts, more than 10x the original.

Each photo posted on Facebook and Instagram gets evaluated by an image-analysis AI (the AAT technology) in order to create a caption. The result is added as alt text, a field in an image’s metadata that describes its contents: “a dog standing in a field” or “a person playing football.” This lets visually impaired people understand the images in their News Feed. Because most people don’t add these descriptions themselves, Facebook is training its AI to make the platform more accessible.

Facebook AAT

The latest iteration of AAT can detect and identify more than 10x as many concepts in a photo, which in turn means fewer photos go without a description. It can now identify activities, landmarks, types of animals, and so forth. For example, a photo’s description might read, “May be a selfie of 2 people, outdoors, the Leaning Tower of Pisa.”

Facebook says it is the first in the industry to include information about the positional location and relative size of elements in a photo. For instance, instead of saying “May be a photo of 5 people,” the AI can specify that there are two people in the center of the photo and three others scattered toward the fringes, implying that the two in the center are the focus. Facebook also says it trained the models to predict the locations and semantic labels of the objects within an image.
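As a rough illustration of how positional information could feed into such a caption, here is a hypothetical Python sketch, not Facebook's actual model or API, that groups detections (label plus normalized bounding-box center) into "center" and "fringe" regions and phrases a hedged description:

```python
# Hypothetical sketch of turning object detections into a
# position-aware caption, in the spirit of the AAT behavior
# described above. NOT Facebook's actual model or API.

def describe(detections):
    """Group detections into image center vs. fringes and phrase a
    hedged caption starting with "May be"."""
    center, fringe = [], []
    for label, x, y in detections:
        # Treat the middle third of the frame (on both axes) as "center";
        # coordinates are normalized to the range 0..1.
        if 1/3 <= x <= 2/3 and 1/3 <= y <= 2/3:
            center.append(label)
        else:
            fringe.append(label)
    parts = []
    if center:
        parts.append(f"{len(center)} people in the center")
    if fringe:
        parts.append(f"{len(fringe)} toward the edges")
    return "May be a photo of " + " and ".join(parts)

detections = [
    ("person", 0.48, 0.52), ("person", 0.55, 0.50),   # central pair
    ("person", 0.10, 0.20), ("person", 0.85, 0.15),   # scattered
    ("person", 0.90, 0.80),
]
print(describe(detections))
# May be a photo of 2 people in the center and 3 toward the edges
```

The real system presumably weighs object size and many more classes; this sketch only shows how bounding-box positions can translate into the center-versus-fringe phrasing the article describes.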

AAT is now more improved than ever

For this latest iteration of AAT, the company leveraged a model trained on weakly supervised data in the form of billions of public Instagram images and their hashtags. It fine-tuned the data across all geographies and evaluated concepts along gender, skin tone, and age axes. As a result, AAT is now more accurate and more culturally and demographically inclusive. For example, it can now identify weddings around the world based (in part) on traditional apparel.

Facebook asked users who depend on screen readers how much information they want to hear and when they want to hear it. It concluded that people want more detail when an image comes from friends or family, and less when it doesn’t. Hence, the new AAT provides a succinct description for every photo by default, along with an easy way to request a more comprehensive description for photos of particular interest.

It’s not poetic, but it is highly functional

AAT uses simple phrasing for its default description rather than a long, flowing sentence. Every description begins with “May be” because there is a margin for error, though “we’ve set the bar very high,” the company says. AAT alt-text descriptions are available in 45 languages, so they can serve people around the world.

Source

The post Facebook is getting better at providing more details to visually impaired users appeared first on Pocketnow.

Nokia 8, Galaxy Note 8, accessibility rate | #PNWeekly 266 (LIVE at 3pm Eastern)

The Nokia 8 has just been launched, the Galaxy Note 8 needs launching, and we'll launch into accessibility issues on our show this week!

The post Nokia 8, Galaxy Note 8, accessibility rate | #PNWeekly 266 (LIVE at 3pm Eastern) appeared first on Pocketnow.

iOS 10.2 public beta includes new emoji and tweaks to UI

iOS 10.1 is out and the Portrait Camera mode is, too. What does Apple have to work towards now? iOS 10.2. The developer betas for iOS 10.2, watchOS 3.1.1, tvOS 10.1 and macOS 10.12.2 are out, but we’ve got a public beta for the mobile software package today.

According to 9to5Mac, there are now 72 new emoji, implemented as part of the Unicode 9.0 emoji update. Three new wallpapers for the iPhone 7 and iPhone 7 Plus are included. Other notable UI tweaks include a new Preserve Camera setting, an accessibility option to press and hold the home button to speak, a new star-ratings display in settings for Apple Music, and a new status bar icon for Bluetooth audio devices. iMessage also has a new “Celebration” effect.

You can sign up for the beta if you aren’t already at our source link.

The post iOS 10.2 public beta includes new emoji and tweaks to UI appeared first on Pocketnow.

Apple opens up October event with focus on accessibility, iPhone 7 photos, Apple TV

“When technology is designed for anyone, it lets everyone do what they love,” said a film editor who uses a wheelchair. Apple CEO Tim Cook led the company’s October event with a refreshed accessibility website, making sure that people know what resources an iPhone or a MacBook can provide to everyone who could do with just a little help to do what they want to do in life. Cook went on to a laudatory update on the iPhone 7, with 400 million Memories already logged, and a burn on Android fragmentation as ...

Continue reading »

The post Apple opens up October event with focus on accessibility, iPhone 7 photos, Apple TV appeared first on Pocketnow.

If you need assistive technologies, Windows 10 will still be free

As the clock ticks down on that $119 discount on a Windows 10 upgrade, if you need programs like Narrator or Magnifier to help you with your everyday computing, fear not: that free upgrade is going nowhere. Daniel Hubbell announced on the Microsoft Accessibility Blog that for those who use assistive technologies on Windows, the offer will continue to be free after July 29, and that details on how to take advantage of it will come soon enough. While this may prompt some indecisive cheaters to ...

Continue reading »

The post If you need assistive technologies, Windows 10 will still be free appeared first on Pocketnow.

Android goes hands-free as Google opens Voice Access testing

Google’s already brought powerful voice control to Android, letting users interact with apps and system settings with just a few spoken commands. And while that may work really well when you want Google to set a timer, or help you draft a text message, voice control runs out of steam when you bring in random apps for which voice support was never designed. At least, that used to be the case, but now Google’s inviting users to test out its new Voice Access system that brings ...

Continue reading »

The post Android goes hands-free as Google opens Voice Access testing appeared first on Pocketnow.

Pocketnow Weekly 070: smartphones through the eyes of the blind

Not everyone sees the world in the same way. Of course that’s plainly obvious if you’ve ever spent time in the dungeons that are the internet’s comment sections, but it’s easy to forget that it’s also true in a literal sense. Not everyone sees or hears the same way you might … and that’s as true for smartphone users as for anyone else. So what kind of considerations go into buying and using today’s mobile technology products when you’re visually impaired? What kind of brand wars exist in the world of the blind? Are we going in the right direction with ...

Continue reading »

The post Pocketnow Weekly 070: smartphones through the eyes of the blind appeared first on Pocketnow.