This guide lists twelve keyboard case options to consider buying for your iPad Air 2.
The post Best Keyboard Cases for iPad Air 2 appeared first on Pocketnow.
Check out the latest leak suggesting several new features in the upcoming iOS 15, iPadOS 15, and other Apple software updates.
The post iOS 15, iPadOS 15 and new macOS last minute leaks are good and boring appeared first on Pocketnow.
AssistiveTouch on Apple Watch is aided by the Watch’s gyroscope, accelerometer, and heart rate sensor. It will allow the device to detect subtle differences in muscle movement and tendon activity.
The post Apple rolls out software features in support of people with disabilities appeared first on Pocketnow.
According to new research from the University of Washington, ordinary smart speakers could be used as a contactless way to screen for irregular heartbeats. The researchers came up with an AI-powered system that relies on sonar technology to pick up vibrations caused by nearby chest wall movements. If it ever comes to fruition, it has the potential to change how doctors conduct telemedicine appointments by providing data that would otherwise require wearables, dedicated health hardware, or an in-person checkup.
“We have Google and Alexa in our homes all around us. We predominantly use them to wake us up in the morning or play music,” said Shyam Gollakota, a UW computer science professor and co-author of the report. “The question we’ve been asking is, can we use the smart speaker for something more useful?” Smart speaker makers could integrate the technology into existing products via software updates, the researchers say.
As per the researchers, their goal was to find a way to use devices people already own to bring cardiology and health monitoring into the future. Unlike a chest-mounted monitor, the system is contactless: if you want a reading, you just have to sit within two feet of the speaker for it to work.
It works by emitting audio signals into the room at a volume humans can’t hear. The pulses bounce back to the speaker, and an algorithm works to identify beating patterns generated from a human’s chest wall. Another algorithm is then applied to determine the amount of time between two heartbeats. These inter-beat intervals could allow doctors to gauge how well your heart is functioning.
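The second step described above, turning detected beats into inter-beat intervals, can be sketched in a few lines. This is a minimal illustration, not the researchers' actual system: the function names are invented, and the beat timestamps are assumed to have already been extracted from the reflected sonar signal by the first algorithm.

```python
def inter_beat_intervals(beat_times_ms):
    """Return the gaps (in milliseconds) between consecutive detected heartbeats."""
    return [b - a for a, b in zip(beat_times_ms, beat_times_ms[1:])]

def mean_heart_rate_bpm(beat_times_ms):
    """Estimate heart rate in beats per minute from those intervals."""
    intervals = inter_beat_intervals(beat_times_ms)
    if not intervals:
        return 0.0
    mean_interval = sum(intervals) / len(intervals)
    return 60_000.0 / mean_interval  # 60,000 ms per minute

# Beats detected 800 ms apart correspond to a steady 75 bpm.
beats = [0, 800, 1600, 2400, 3200]
print(inter_beat_intervals(beats))  # [800, 800, 800, 800]
print(mean_heart_rate_bpm(beats))   # 75.0
```

Doctors look at the variation between these intervals, not just the average, which is why the irregular rhythms mentioned above are detectable at all.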
The researchers compared this data to readings from medical-grade ECG monitors. The smart speakers’ readings turned out to be relatively accurate, deviating from the ECG readings only by an amount that “wasn’t medically relevant,” the researchers say. The tests were run on a developer version of Alexa with a low-quality speaker; the speakers in mainstream devices could be more powerful, which could enable readings from farther away.
Via: The Washington Post
The post Smart Speakers could bring contactless health monitoring by detecting abnormal heart rhythms appeared first on Pocketnow.
According to the National Institute on Deafness and Other Communication Disorders, approximately 7.5 million people in the U.S. have trouble using their voices. This group is at risk of being left behind by voice-recognition technology. But it is 2021, an era of making technology more accessible to everyone, and tech firms including Apple and Google are working on improving their voice assistants to understand atypical speech. In other words, they are trying to train voice assistants to understand everyone.
“For someone who has cerebral palsy and is in a wheelchair, being able to control their environment with their voice could be super useful to them,” said Julie Cattiau, a product manager at Google. The company is collecting atypical speech data as part of an initiative to train its voice-recognition tools. Training voice assistants like Siri and Google Assistant on such data could improve the voice-recognition experience for a number of groups, including seniors with degenerative diseases.
Apple debuted its Hold to Talk feature on hand-held devices in 2015. It gives users control over how long they want the voice assistant Siri to listen to them, preventing the assistant from interrupting users who stutter before they have finished speaking. Now, Apple is working to help Siri automatically detect whether someone speaks with a stutter. The company has built a bank of 28,000 audio clips from podcasts featuring stuttering to help its assistant recognize atypical speech.
Google’s Project Euphonia is an initiative in which the company is testing a prototype app that lets people with atypical speech communicate with Google Assistant and Google Home smart products. It aims to train the software to understand unique speech patterns, and the company hopes the collected speech snippets will help train its artificial intelligence on the full spectrum of speech.
Amazon isn’t far behind with its Alexa voice assistant. The company announced Alexa integration with Voiceitt, which lets people with speech impairments train an algorithm to recognize their own unique vocal patterns.
Source: WSJ
The post Apple, Google training their voice assistants to understand people with speech disabilities appeared first on Pocketnow.
Zoom has today announced that it is extending the platform’s Live Transcription capability to all users, both free and paid. The company – which recorded a massive boost in its user base as work and education shifted to a remote collaboration format in the pandemic era – has announced that Live Transcriptions will be rolled out for all users on its free service tier in the fall season.
However, if you want to try the feature prior to its wider rollout, you can request early access by filling out a form. Until now, Live Transcription has been exclusive to the paid Pro, Business, Education, and Enterprise accounts, as well as approved K-12 accounts, on both the desktop and mobile clients. Note that the accessibility feature currently supports only the English language.
Zoom has also highlighted a few prerequisites for the Live Transcription feature to work properly. The company says that the performance of its real-time automatic transcription depends on factors such as the level of background noise, how loud and clear the speaker’s voice is, and whether the speaker is proficient in English. Regional dialects and localized vocabulary might prove limiting as well.
And in case the Live Transcription feature does not prove particularly useful due to any of the aforementioned limitations, Zoom already offers a manual captioning feature to all users. A meeting’s host can either take on the responsibility of closed captioning the ongoing interaction or assign the duty to an attendee of their choice. Zoom also allows users to rely on a third-party closed captioning service.
Zoom itself recommends a manual captioner for a higher degree of accuracy over its AI-based solution, whose efficiency depends on a variety of external factors. The Live Transcription feature is currently available on v5.0.2 (or later) of Zoom for Windows, macOS, Android, and iOS.
The post Zoom is making its automatic closed captioning feature free for all users appeared first on Pocketnow.
Facebook is improving its Automatic Alternative Text (AAT) technology to better utilize object recognition to generate descriptions of photos on demand. It will enable blind and visually impaired individuals to better understand what’s on their News Feed. For context, AAT was introduced back in 2016; the new version recognizes over 1,200 concepts, more than a tenfold improvement.
Each photo you post on Facebook and Instagram gets evaluated by an image analysis AI (the AAT technology) in order to create a caption. It adds the information to alt text, a field in an image’s metadata that describes its contents: “a dog standing in a field” or “a person playing football.” This allows visually impaired people using screen readers to understand the images in their News Feed. However, most people don’t bother adding these descriptions to their images themselves, so Facebook is training its AI to make its platforms more accessible.
The latest iteration of AAT can detect and identify more than 10x as many concepts in a photo, which in turn means fewer photos without a description. It can now identify activities, landmarks, types of animals, and so forth. For example, a photo’s description might read, “May be a selfie of 2 people, outdoors, the Leaning Tower of Pisa.”
Facebook says it is the first in the industry to include information about the positional location and relative size of elements in a photo. For instance, instead of saying “Maybe a photo of 5 people,” the AI can analyze and specify that there are two people in the center of the photo and three others scattered toward the fringes, implying that the two in the center are the focus. Facebook also added that it trained the models to predict locations and semantic labels of the objects within an image.
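The positional idea above can be illustrated with a small sketch. This is not Facebook's actual model or API; it is a hypothetical example that assumes an object detector has already produced labels with normalized (x, y) centers in [0, 1], and simply buckets detections into "center" versus "fringe" to compose a description.

```python
def describe_positions(detections):
    """Compose a rough 'May be a photo of ...' string noting which detected
    people sit in the center of the frame versus toward the fringes."""
    # Treat the middle third of the frame (both axes) as the "center".
    center = [d for d in detections
              if 0.33 <= d["x"] <= 0.67 and 0.33 <= d["y"] <= 0.67]
    fringe = [d for d in detections if d not in center]
    parts = []
    if center:
        parts.append(f"{len(center)} people in the center")
    if fringe:
        parts.append(f"{len(fringe)} toward the fringes")
    return "May be a photo of " + " and ".join(parts)

detections = [
    {"label": "person", "x": 0.48, "y": 0.52},
    {"label": "person", "x": 0.55, "y": 0.47},
    {"label": "person", "x": 0.10, "y": 0.20},
    {"label": "person", "x": 0.90, "y": 0.80},
    {"label": "person", "x": 0.85, "y": 0.15},
]
print(describe_positions(detections))
# May be a photo of 2 people in the center and 3 toward the fringes
```

A screen reader speaking that richer string lets a listener infer, as the article notes, that the two centered people are the likely focus of the shot.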
The company leveraged a model trained on weakly supervised data, in the form of billions of public Instagram images and their hashtags, for the latest iteration of AAT. It fine-tuned the data across all geographies and evaluated concepts along gender, skin tone, and age axes. As a result, AAT is now more accurate and more culturally and demographically inclusive. For example, it can now identify weddings around the world based (in part) on traditional apparel.
Facebook asked users who depend on screen readers how much information they want to hear and when they want to hear it. It concluded that people want more information when an image is from friends or family, and less when it isn’t. Hence, the new AAT provides a succinct description for all photos by default, alongside an easy way to request a more detailed description of any photo of specific interest. Selecting the latter option displays a more comprehensive description of a photo’s contents.
AAT uses simple phrasing for its default description rather than a long, flowy sentence. It begins every description with “May be,” because there is a margin for error but “we’ve set the bar very high,” says the company. The AAT alt text descriptions are available in 45 different languages and can be used by people around the world.
The post Facebook is getting better at providing more details to visually impaired users appeared first on Pocketnow.
You'll probably use it to check the time in Braille for the most part, but you might also be able to teach yourself Braille with it.
The post Dot Watch brings Braille to the wrist for $359 appeared first on Pocketnow.
The Nokia 8 has just been launched, the Galaxy Note 8 needs launching and we'll launch into accessibility issues in our show this week!
The post Nokia 8, Galaxy Note 8, accessibility rate | #PNWeekly 266 (LIVE at 3pm Eastern) appeared first on Pocketnow.
Another entry in the X series seems keen on charging, as its name suggests. Will this mid-range phone end up in the US soon?
The post LG X charge gets picked apart at FCC for accessibility appeared first on Pocketnow.
“When technology is designed for everyone, it lets anyone do what they love,” said a film editor who uses a wheelchair. Apple CEO Tim Cook opened the company’s October event with a refreshed accessibility website, making sure people know what resources an iPhone or a MacBook can offer anyone who needs just a little help to do what they want to do in life. Cook went on to a laudatory update on the iPhone 7, with 400 million Memories already logged, and a jab at Android fragmentation ...
The post Apple opens up October event with focus on accessibility, iPhone 7 photos, Apple TV appeared first on Pocketnow.
As the clock ticks down on that $119 discount on a Windows 10 upgrade: if you need programs like Narrator or Magnifier to help you with your everyday computing, fear not, that free upgrade is going nowhere. Daniel Hubbell announced on the Microsoft Accessibility Blog that for those who use assistive technologies on Windows, the offer will continue to be free after July 29, and that details on how to take advantage of it will come soon enough. While this may prompt some indecisive cheaters to ...
The post If you need assistive technologies, Windows 10 will still be free appeared first on Pocketnow.
Google’s already brought powerful voice control to Android, letting users interact with apps and system settings with just a few spoken commands. And while that may work really well when you want Google to set a timer, or help you draft a text message, voice control runs out of steam when you bring in random apps for which voice support was never designed. At least, that used to be the case, but now Google’s inviting users to test out its new Voice Access system that brings ...
The post Android goes hands-free as Google opens Voice Access testing appeared first on Pocketnow.
Not everyone sees the world in the same way. Of course that’s plainly obvious if you’ve ever spent time in the dungeons that are the comment sections of the internet, but it’s easy to forget that it’s also true in a literal sense. Not everyone sees or hears the same way you might … and that’s as true for smartphone users as for anyone else. So what kind of considerations go into buying and using today’s mobile technology products when you’re visually impaired? What kind of brand wars exist in the world of the blind? Are we going in the right direction with ...
The post Pocketnow Weekly 070: smartphones through the eyes of the blind appeared first on Pocketnow.