NONVISUAL STRATEGIES: AUDITORY AND TACTILE INFORMATION
The most effective intervention for someone with vision impairment (VI) is usually the optimum use of residual vision, and several ways of achieving this have already been described, including magnifiers, increased contrast and improved illumination (‘bigger, bolder, brighter’). An alternative, though usually more limited, approach is ‘sensory substitution’: the use of a nonvisual alternative sense (hearing or touch) as a means of obtaining information from the environment. Of course, for an individual patient, it is not an all-or-nothing choice: use may be made of both visual and nonvisual strategies, depending on the circumstances. For example, the patient may use a magnifier for reading their mail, but for leisure ‘reading’ prefer to listen to audio books. The systematic use of taste or smell as alternative senses has not been explored, although the patient may get a useful clue about their location in the high street from the smell of fresh bread at the bakery!
THE ROLE OF SMARTPHONES, DIGITAL ASSISTANTS AND APPS
Historically, devices designed to use sensory substitution were very much identified as ‘equipment for the blind’. They were useful only to a small population, and were often designed by enthusiasts who identified a perceived need and addressed it with great ingenuity. Spreading the use of such a device to a wider population was difficult, however. Often the technology was costly, needed extensive training to use, and was difficult to maintain and service, so commercial companies in this field found it difficult to survive. The unfamiliar appearance of the device, and the public perception of it as ‘for the disabled’, were further barriers.
A major development in mainstream technology, the smartphone, which can be used without vision, overcame many of these difficulties, and it has the potential to be life changing for people with VI. It should be noted that the device also has many ‘vision enhancement’ features. Smartphones suit VI users for several reasons: they offer a whole range of accessibility options for converting text to speech (e.g. VoiceOver, Speak Selection) and for speech control (Siri, Dictation). They have a camera, so the user can take a picture and magnify it to see the image more clearly, or simply to record information that they cannot write down. There is also a wide range of applications (apps): general apps that are useful for people with VI, and specific apps written for people with VI. Some of these apps allow the person to recognise an object, colours or barcodes; some work as electronic magnifiers; and others assist with orientation and location. Appendix 1 includes an up-to-date list of apps.
A smartphone also has a torch, which can in itself be a very useful tool. But perhaps one of its greatest assets is its acceptability: the vast majority of the population uses smartphones, so the individual with VI is not going to feel singled out by using one as well. Not everybody has a smartphone, however, and ownership varies markedly with age. Table 15.1 shows the difference in media use by age. Currently 86% of UK adults use a smartphone ( ): this figure was almost zero prior to 2010.
| 16–24 Years Old | 66+ Years Old |
|---|---|
| 96% use a smartphone | 55% use a smartphone |
| 12% only use a smartphone to go online | 2% only use a smartphone to go online |
| 45% watch on-demand or streamed content | 22% watch on-demand or streamed content |
| 88% have a social media profile | 59% have a social media profile |
| 54% correctly identify advertising on Google (amongst search engine users) | 58% correctly identify advertising on Google (amongst search engine users) |
| 28% are aware of all four surveyed ways in which companies can collect personal data online (amongst internet users) | 39% are aware of all four surveyed ways in which companies can collect personal data online (amongst internet users) |
| 1% do not use the internet | 51% do not use the internet |
However, people who are blind or visually impaired are not as confident with technology as the general population. Table 15.2 shows that the VI and blind population lags behind in personal use of different devices: they are more likely to have a simple mobile phone, and less likely to use a smartphone, a computer or the internet ( ).
| | Vision Impaired/Blind | Nondisabled |
|---|---|---|
| Landline | 57% | 56% |
| Internet | 63% | 92% |
| Games console | 12% | 24% |
| Computer (including PC, laptop and tablet) | 53% | 77% |
| Smartphone | 46% | 75% |
| Other phone | 25% | 18% |
However, most people with visual impairment or blindness who do use these technologies or services appear to do so without limitation, although a minority report being limited, or prevented altogether, by their disability ( Table 15.3 ) ( ).
| | Personally Use | Limited by Disability | Prevented by Disability |
|---|---|---|---|
| Television | 76% | 20% | 13% |
| Landline | 57% | 6% | 4% |
| Internet | 63% | 9% | 3% |
| Computer (including PC, laptop and tablet) | 53% | 10% | 8% |
| Smartphone | 46% | 8% | 5% |
| Other phone | 25% | 6% | 7% |
| Games console | 12% | 3% | 3% |
Individuals with visual impairment are just as likely to benefit from ‘mainstream’ apps as any other user, and as mentioned earlier, there are also specific apps that have been written for people with VI. Although smartphones have immense potential for users with VI, it can be very difficult to break through the barrier to reach new users: they may not be used to using technology, or may have (incorrect) preconceived ideas about it. It is important to bear in mind that although many of these apps are free, smartphones are not, and some individuals may be resistant to investing in one, particularly if they are unsure whether it will suit them. In addition, some of the apps are not available for all phone types (or older models). It is important not to assume that traditional sensory substitution technology is no longer useful, or that ‘stand-alone’ devices are not as good as a phone. However, it may be that, increasingly, very specialised devices ‘for the blind’ are discontinued in favour of apps.
It is not a good idea to have a friend or relative who is very familiar and proficient with the device show the new user how to use it: specialist training is really needed to introduce the accessibility features in the right way and to give helpful instruction. Local societies and Social Services departments may have specialist staff (e.g. a Digital Inclusion Officer) to offer this service. Some devices may simply be too sophisticated and have too many functions, in which case a good option is the Synapptic phone ( www.synapptic.com ). Synapptic phones and tablets have been specifically designed for people with visual impairment and are very intuitive and simple to use.
In addition to using voice commands to control smartphones, there are also stand-alone digital assistants which are voice controlled: the Amazon Echo and Google Home are examples. These devices have the advantage that there are no controls to manipulate, and all the information is delivered in auditory form. To interact with the device, the user starts their request with a key word—in the case of the Amazon Echo, this is ‘Alexa, …’. The device does require an initial set-up, however, and this is likely to need sighted assistance.
A digital assistant uses natural language processing to recognise what is said, and then natural language understanding to separate it into identifiable questions it can answer, or tasks it can perform. It can access multiple sources via the internet to answer questions, but (as with any internet search) it can also pick up misinformation and advertising at the top of its list of hits. Digital assistants can have specific extra programmes written for them to perform certain tasks: in the case of the Amazon Echo, these are called ‘skills’. There are several thousand skills available, and the Royal National Institute of Blind People (RNIB) have produced several of these to give verified information (e.g., ‘ Alexa , how do I register as sight impaired?’) or to connect to services (e.g. ‘ Alexa , open RNIB Talking Books’; ‘ Alexa , call RNIB Helpline’). Digital assistants can also control ‘smart’ household appliances such as heating or lighting, give reminders for taking medication, keep an appointment diary, check transport timetables, order food, make shopping lists, play music and many other activities.
The three major fields in which sensory substitution is used are personal communication (reading and writing), other activities of daily living (home, work, leisure and sport) and mobility and orientation.
Personal Communication
Tactile Methods
Braille and Moon Languages
The best known sensory substitution method is the tactile Braille alphabet. This is a written language which was invented by the Frenchman Louis Braille in 1824, but it was not universally accepted until after his death many years later. There are 63 symbols in the English version which can substitute for letters of the alphabet (with an additional symbol used to indicate a capital letter), punctuation marks and numbers. Each symbol is produced by particular combinations of up to six raised dots arranged as with the number six on a dice. Grade 1 Braille is the basic code with a substitution of the print letter for a Braille symbol, but contracted Braille (previously known as Grade 2 Braille) uses symbols for frequently recurring groups of letters or words ( Fig. 15.1A ). Braille is approximately 150× the bulk of inkprint, but the use of contracted Braille can reduce this by one-quarter. Space can also be saved by printing Braille in ‘interpoint’ style—character dots on one side of the paper between the dots on the reverse—as opposed to ‘interline’, where the symbols on opposite sides of the paper are on separate lines. The latter will be easier to read because the separation between successive lines of symbols will be greater. There are obviously many different foreign languages in Braille, and also some specialist international languages such as those for mathematics or music.
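For the technically minded, the arithmetic of the braille cell can be illustrated with a short Python sketch. The dot numbering (1–3 down the left column, 4–6 down the right) and the Grade 1 letter assignments shown are standard, but the text rendering format and the small letter set are invented here for illustration only:

```python
# A braille cell has six dots, so there are 2**6 - 1 = 63 non-empty
# dot patterns available as symbols. Dots are numbered 1-3 down the
# left column and 4-6 down the right.

# Standard Grade 1 assignments for the first few letters (a small,
# illustrative subset only).
LETTERS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
}

def cell(dots):
    """Render a dot pattern as a 3x2 text grid ('o' = raised dot)."""
    rows = []
    for left, right in ((1, 4), (2, 5), (3, 6)):
        rows.append(("o" if left in dots else ".") +
                    ("o" if right in dots else "."))
    return "\n".join(rows)

print(2 ** 6 - 1)            # 63 possible non-empty patterns
print(cell(LETTERS["c"]))    # 'c' = dots 1 and 4:
# oo
# ..
# ..
```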
There are relatively few active Braille users in the UK: less than 10% of the blind population can write Braille, with double that number reading books or magazines. Most users have learnt Braille at school and are congenitally blind, but it can be learnt by people of any age or by parents of children with VI.
Learning Braille from an early age assists with literacy, as Braille is a much better medium than audio for learning punctuation, grammar and spelling ( ).
There are distance-learning courses, self-teaching audio-recordings, books and computer programmes available from the RNIB, there are free online training courses (Unified English Braille [UEB] Online), and it can be taught by Braille teachers and some rehabilitation officers, or in adult education classes. A difficult stage in learning Braille is the development of sufficient sensitivity in the fingertips, and ‘jumbo’ Braille can be used in the early stages. Decreased tactile sensitivity is often a complication of diabetes, and such patients may find the development of sufficiently sensitive touch difficult.
A machine is needed to write Braille, but this can be quite cheap and simple. A Braille Writing Frame holds the paper in position whilst a pointed metal ‘dotter’ or stylus is used to punch indentations through from the back. These form the raised Braille dots on the opposite side, so writing is backwards, from right to left across the page, with the symbols reversed: although the technique is very slow, the reversal does not seem to cause undue difficulties. A Braille Writing Machine (the most common of which is the Perkins Brailler) is the equivalent of a typewriter, and electronic versions are available. Such machines have six keys, each of which corresponds to one of the dots of the Braille symbol. They do not require writing in reverse, but in some the sheet produced cannot be checked until it is taken out of the machine; they are also often too noisy to be used by a pupil in class, for example.
For labelling there are Braille Dymo embossing machines, and the adhesive tape which is used can also be embossed in a writing frame.
Between 500 and 800 Braille books are published each year in the UK. The RNIB Library is the major source of books, magazines and journals for adults and children. The ClearVision project based at Linden Lodge School, London, has developed a series of children’s inkprint books with the standard printed pages interleaved with clear plastic sheets embossed with Braille: these allow sighted and blind siblings to share the same story, or blind parents to read to sighted children. There are a number of transcription services which can convert inkprint letters, documents, or books into Braille, but there is a time delay in getting access to material in this way. Bank statements and utility bills can be provided in Braille on request, and medicine labels (and some food packets) are now labelled in Braille.
Compared to a ‘normal’ visual reading speed of 200 to 300 words per minute (wpm), a good braillist is likely to achieve about 100 wpm. Even if Braille reading is too slow or requires too much effort to read for pleasure, it can still be useful in, for example, labelling, writing lists and messages, and marking dials on household appliances. There are Braille playing cards, dice and dominoes, knitting patterns and puzzles. Braille clocks and watches are also available. These usually have a hinged cover glass over the watch face. To tell the time the cover is opened, and the position of the strengthened hands is felt: the numbers are indicated by one dot on each hour, two dots at the quarter-hours, and three dots at the ‘12’ position.
Moon is another embossed, tactile reading system, invented by Dr William Moon in 1845 ( Fig. 15.1B ). It has a Grade 1 form which does not abbreviate any of the words, but Grade 2 (which is usually used for books) has 45 common-sense contractions. Although it is simpler to learn than Braille (because the symbol shapes resemble those of simplified letters) and easier to feel, it has never been widely adopted. There are currently around 240 active readers borrowing Moon titles from the RNIB’s National Library Service. Moon can be written on a handframe using a stylus rather like a pen or with the Moonwriter (similar to a typewriter): in both, the symbols are embossed onto plastic sheets rather than paper. Moon is not easy to use for labelling: it is difficult to make the labels at home, and some indication needs to be present to show which way is up (a Moon comma at the end of the word is suggested). The RNIB do not currently produce Moon; however, some Moon materials are still produced in the UK for children by schools and the ClearVision project. The RNIB have decided to focus on teaching and promoting Braille but are not planning to return to active production and promotion of Moon, in line with other organisations world-wide ( ). However, the RNIB remains committed to supplying existing users with books and other publications free of charge. Currently there are 1750 titles in Moon, including leisure reading and reference books. Around 20 of these are children’s titles ( ).
Auditory Methods
Audio Reading
Here, the auditory sense is the substitute for vision, with the information delivered in verbal form. Patients who have difficulty writing can be encouraged to record letters to their family, for example, and many public information leaflets (such as guidance on eligibility for benefits, crime prevention, or health information) are available in this format on request. Commercial audio-transcription services can produce recordings from written documents if required. Popular messaging services have made audio delivery of information far easier.
Smartphones have free apps that can be downloaded to record conversations or the user's own voice—a podcast, thoughts, a shopping list, or a letter to family or friends. Communication apps such as WhatsApp or Instagram allow people to send each other audio (and even video) messages rather than text, and this is currently a very common way for sighted people to communicate too.
The RNIB Talking Book Service was made free to access in 2015 for anyone in the UK registered as severely sight impaired (SSI) or sight impaired (SI). Today, there are over 30,000 SSI and SI adults and children using Talking Books. The RNIB library is the largest of its kind in Europe and contains 60,000 accessible items, including 23,000 Talking Books. Readers can access the audio books on Daisy CD, USB or as a digital download, so that they can listen to them however they choose, nowadays increasingly ‘on-the-go’. Daisy CDs have the advantage of being able to search for, and bookmark, specific sections in the text, but they do require a special player or specific computer software. Anyone who is registered as SSI or SI can now borrow up to six Talking Books at any time, completely free of charge.
Calibre is a free postal lending library which is open to anyone with reading difficulties (such as dyslexia) and not just people with vision impairment. There is no fee for membership if the inability to read print is certified by a GP or ophthalmologist, or if the patient is registered. The library also lends books internationally to countries that have ratified the Marrakesh Treaty (the 2013 international copyright exception for people with visual impairment or who are otherwise print disabled). Again, these books can be borrowed by members in different formats: a memory stick is lent for a period of 3 months, with a maximum of five memory sticks borrowed at one time. The books can be streamed or downloaded to a smartphone, laptop or tablet, and they can be listened to on a Google Home speaker or via Alexa.
The RNIB Newsagent service offers the full text of the major national daily and weekend papers. The full text is available electronically and can be read using a screen reader. Alternatively audio selections from newspapers and magazines (both general interest and specialist publications) are available in various formats such as CD, Daisy CD, USB stick, or digital download. There are currently about 200 titles, available on subscription.
The Talking News Federation (TNF) supports over 300 local Talking Newspapers delivering local news and information in audio to people who are registered as SI or SSI, or are print disabled. The newspapers can be listened to on CD or memory stick, or streamed through a website on a smartphone, computer or tablet. If the person does not have their own player, their local Talking Newspaper may be able to lend them one. There is also a Talking Newspaper app.
Audiobooks are available for sale in bookshops or online (on CD, USB or as a digital download), and many libraries now lend audiobooks and e-books using an app such as OverDrive. Audiobooks are read by a professional actor, or even the author themselves, whereas e-books are usually rendered in synthesised electronic speech.
Reading Machines and Desktop Scanners
Talking books offer a very useful service for leisure reading, but they are less useful for accessing technical literature or textbooks, and do not allow the patient to read their own letters, for example. This would require the use of a reading machine. Such a machine converts an image of inkprint text into synthesised speech. The input stage is a photograph or scan of the text, captured by the camera on the device at very high resolution (about 300 pixels per inch) so that print as small as N5 can be recognised.
It is, however, in the processing stage that the success of such devices rests, as the reading machine must perform effective optical character recognition (OCR) ( ). This is typically a five-stage process:
1. Preprocessing—designed to optimise the image to compensate for poor quality, or for features in the original image which will make recognition more difficult. In the latter category, any small imperfections which do not appear to be part of the letters are removed, as is any underlining of the text which might cause letters to appear joined. The image is digitised into black and white areas, but the threshold for this may be set individually for each small area of the image if there are variations in image contrast, such as in a newspaper.
2. Layout analysis—to distinguish the areas which cannot be read (such as diagrams and photographs) from those which can (text), and to arrange the text sections into a logical sequence. Photographs, for example, are often identified by the high percentage of consecutive black pixels, with very few white pixels between them.
3. Segmentation—the sections of text are now divided into individual characters. If this operates perfectly, each segment should contain one whole letter, although if letters are poorly printed and touching, two may be joined; alternatively, a letter may be erroneously split in two.
4. Character recognition—there are several alternative strategies for this. Template matching is one common method, in which the unknown character is compared to all possible alternative characters, pixel by pixel. Each time a pixel matches (e.g. it is black in the unknown and black in the test), the similarity rating increases, and each time it does not match, the similarity diminishes. After all possible templates have been tried, the unknown is identified as the character to which it is most similar. Alternatively, feature or structural analysis can be used: each character is described in terms of the number and orientation of strokes, holes, arcs (concavities), cross points and end points, and this analysis is unique to that particular character.
5. Ambiguity resolution—the character recognition is tested for feasibility, and possible uncertainties are resolved. This involves checking words for their appearance in the dictionary, and their spelling. For example, if there was uncertainty in segmentation about whether a character was a single ‘m’ or the two characters ‘rn’ which had become joined, then looking up the words ‘harnstring’ and ‘hamstring’ would resolve the issue. Context can also assist: if there was confusion about whether a character was ‘5’ or ‘S’, then considering the characters around it and comparing ‘£S00’ and ‘£500’ would show that the latter was more likely.
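To make the template-matching and dictionary look-up stages concrete, they can be sketched in a few lines of Python. The tiny 3×3 glyph bitmaps and the one-word dictionary below are invented for illustration; a real OCR engine works on much larger images and lexicons:

```python
# Toy sketch of OCR stages 4-5: template matching on tiny 3x3 glyphs
# (1 = black pixel, 0 = white), then dictionary-based ambiguity
# resolution. Glyphs and word list are invented for illustration.

TEMPLATES = {
    "I": (0, 1, 0,
          0, 1, 0,
          0, 1, 0),
    "L": (1, 0, 0,
          1, 0, 0,
          1, 1, 1),
    "T": (1, 1, 1,
          0, 1, 0,
          0, 1, 0),
}

def recognise(unknown):
    """Score each template pixel by pixel: a match raises the
    similarity rating, a mismatch lowers it; return the best match."""
    def similarity(template):
        return sum(1 if a == b else -1 for a, b in zip(unknown, template))
    return max(TEMPLATES, key=lambda ch: similarity(TEMPLATES[ch]))

DICTIONARY = {"hamstring"}  # stand-in for a full spelling dictionary

def resolve(candidates):
    """Return the first candidate spelling found in the dictionary,
    falling back to the most likely segmentation otherwise."""
    for word in candidates:
        if word in DICTIONARY:
            return word
    return candidates[0]

print(recognise((1, 1, 1, 0, 1, 0, 0, 1, 0)))    # -> T
print(resolve(["harnstring", "hamstring"]))      # -> hamstring
```

In practice the similarity scoring runs over thousands of pixels per character, and the dictionary check is weighted against the recogniser's own confidence, but the principle is the same.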
OCR systems for print reading are designed to be omnifont devices, able to handle any printed text, but they are not as successful with handwritten samples. This is partly due to the difficulty in segmenting letters which are joined together. OCR systems which recognise handwriting do exist in other fields (e.g. postcode recognition in mail-sorting operations) but the methods used are different and not transferable.
Having performed these operations, the signal is passed to the output stage , which uses synthesised speech to process the identified letters into word sounds. Suffixes and prefixes are identified so that, for example, the word ‘re-sort’ would be distinguishable from ‘resort’. The word is then compared to a dictionary to identify those words whose overall sound is not simply a combination of the individual letter sounds: if a special pronunciation guide is not found, the word will be pronounced phonetically. If the user has difficulty interpreting the speech, the words can be repeated, or spelt out letter-by-letter.
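The pronunciation look-up in the output stage can be sketched as follows. The exceptions dictionary and the one-sound-per-letter fallback are drastically simplified, invented stand-ins for a real text-to-speech front end:

```python
# Toy sketch of the speech-output stage: consult a dictionary of
# special pronunciations first, and fall back to a naive phonetic
# (letter-by-letter) rendering otherwise. All entries are invented.

EXCEPTIONS = {"colonel": "KER-nul", "one": "WUN"}

PHONEMES = {"c": "k", "a": "a", "t": "t"}  # tiny illustrative subset

def pronounce(word):
    """Return a pronunciation string for a word."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    # Phonetic fallback: one sound per letter.
    return "-".join(PHONEMES.get(ch, ch) for ch in word)

print(pronounce("colonel"))  # KER-nul (dictionary hit)
print(pronounce("cat"))      # k-a-t (phonetic fallback)
```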
The earliest and most famous of these devices, introduced in 1974, was the Kurzweil Reading Machine ( ). Early versions had a high error rate in OCR, but that was soon rectified, although successful handling of newsprint, the availability of a portable version with a handheld camera, and foreign-language capability all took a little longer to develop. Original costs were very high (the Mark IV version of the 1980s cost over £30,000), and these machines were physically very large and almost exclusively based in public libraries. Current systems are dramatically more portable and available for a fraction of this price. It is also possible to add a desktop scanner accessory to a personal computer. The advantage of the reading machine is the instant access it permits to any kind of literature, including technical documents and private letters. It is usual for the document to be scanned first and then read back at a speed (up to several hundred wpm) and with a voice (several female and male versions) which can be chosen by the user. Although some users dislike listening to synthesised speech, or find it difficult to interpret, this is normally overcome with increasing exposure. Output can be to a speech synthesiser, recorded for storage as an audio file, or sent to a Braille printer. It is now increasingly common for high-end desktop electronic vision enhancement systems (EVES) to have an additional text-to-speech alternative so that those who have enough remaining vision to read visually can switch to synthesised speech if they get tired.
The OrCam is a wearable (cordless) device with a smart camera and, in some versions, an LED light which can illuminate the text as the patient reads. It responds to a point of the patient's finger, a tap, or the press of a button to read books, documents, letters, newspapers, magazines, etc. The device is also capable of recognising objects, people and colours, reading numbers, telling the date and time, and scanning barcodes, amongst other features. It is very light, attaches to the patient's spectacles, and costs between £2500 and £3500.
Similar functions are available using a variety of smartphone apps such as SeeingAI (available for Apple devices), Speak! (for Android devices) and EnvisionAI (for Apple and Android). SeeingAI is a free app which includes a suite of applications, all of which use the camera on a smartphone or tablet. These functions include reading printed text and handwriting, describing scenes, recognising people, identifying banknotes and colours, and identifying products using barcode recognition.
Writing is another communication problem for the visually impaired, and typing is the most effective way to produce printed documents. Vision is not needed to touch-type: once the location of four ‘home keys’ has been identified, the location of the other keys is known. The sighted typist would find these four keys using vision, but the blind person normally uses the keys F and J which contain some tactile marking that allows them to reset their fingers at the home row. When in a classroom or workplace, typing notes on a tablet or laptop is very practical: output from this could be inkprint, synthesised speech or Braille, if required. Smartphones and computers now allow users to dictate documents (speech-to-text) and full voice control of computers can be performed easily.
Computer Access via Speech and Braille
An excellent guide to current equipment is published by the which produces regular reports comparing the features of similar products.
It is possible to obtain an electronic Braille display that connects to the user’s phone, tablet or computer and which converts typing from either a Braille or QWERTY keyboard into Braille symbols. Also known as soft, paperless or refreshable Braille, these displays are light, compact and quiet. If built into a stand-alone computer, the tactile display is placed next to the computer keyboard to enable the user to read the contents of the screen (if present) by touch ( ). Each ‘cell’ on the display line corresponds to a character on the screen and contains small plastic pins corresponding to the dots of the Braille symbol. These are moved up or down to correspond to whether a dot is present or absent in that position. To have access to the full screen simultaneously would require approximately 2000 characters and this is impractical, so the usual choice is a 40- or 20-character linear display. The half-line 40-character display requires approximately the same extent of movement as when reading a Braille book. These displays allow users to read books or textbooks, communicate with anyone, do the grocery shopping online or take notes when they are in class. The user is able to text, send emails, transcribe music or do anything their sighted peers would normally do. For this reason, some people talk of a renaissance in Braille use.
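The cell-and-pin arrangement of a refreshable display can be sketched in Python. The dot assignments below cover only a few letters, and the pin interface is hypothetical; a real display driver would send these states to the hardware rather than return them:

```python
# Toy sketch of a 40-cell refreshable braille display line: each cell
# has six pins, and showing a character means raising the pins that
# correspond to the dots of its braille symbol. The dot assignments
# cover only a few letters, for illustration.

DOTS = {"a": {1}, "b": {1, 2}, "c": {1, 4}, " ": set()}

def refresh_line(text, cells=40):
    """Return one pin-state tuple per cell (True = pin raised),
    padding short text with blank cells."""
    line = []
    for ch in text[:cells].ljust(cells):
        dots = DOTS.get(ch, set())
        line.append(tuple(d in dots for d in range(1, 7)))
    return line

pins = refresh_line("abc")
print(pins[2])  # pins for 'c': dots 1 and 4 raised
```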
Alternatively (or sometimes additionally), the characters typed on the screen can be read using a screenreader feeding into a speech synthesiser. The screenreader is used to select which part of the screen will be spoken: keys are usually spoken as they are pressed, and words can be spelt letter-by-letter, and individual words, lines or pages read, depending on the requirements. The equipment required for speech synthesis is usually cheaper than the Braille display, and the ‘reading’ is quicker which may be very important for long documents. The presence of noncharacter information (such as screen colour, highlights and underlining) can be vocalised, and the device can be set to read the status bar whenever it changes. This is much more difficult in the Braille display: the status bar may only appear for a few seconds, or the user may be unaware that it has altered. The user can carry on with another task whilst listening to the speech output, and could even be away from the computer. Whilst auditory access can be extremely useful, it is not inevitably the best option. Braille is particularly helpful for checking mathematical symbols which are difficult to convert into speech and is silent in operation: the user could be talking on the telephone whilst referring to items on the screen. As noted previously, it also has the advantage that the user can immediately check the spelling, punctuation, grammar and layout of the screen information in addition to just the words: it is only through this use of a written language that the blind person can develop literacy. An inkprint or Braille printer/embosser can be attached to the computer for appropriate output.
Speech synthesis is part of all modern operating systems and additional software is not always needed. It could be argued that, with effective speech synthesis, Braille has been made redundant by modern technology. This is, however, equivalent to a sighted person saying that they will never need to use a pen and paper again: even though it seems unsophisticated, the handwriting frame and stylus are still used by many blind people to make labels or jot down lists ( ). It is also possible, using a Braille printer, to get access to Braille much more easily and quickly than was previously the case, thus enhancing the use and popularity of the medium. In 2011, the chair of the Braille Authority of North America (BANA) highlighted some of the reasons why she thought the use of Braille was declining: decisions about Braille and Braille instruction in schools often being made by administrators and others who have misconceptions about Braille being expensive, bulky and slow in coming; the general belief that Braille is complicated, outdated and read by only a few blind people; and misleading statistics on Braille readers, which have the effect of discouraging manufacturers ( ). New technology has also been suggested as a reason for the decline in Braille use, along with the increasing age and later onset of visual impairment ( ), the overuse of audio books, and the growing number of children with multiple disabilities who need to use their remaining vision ( ). Braille is no doubt the way to go for children who are blind or have very poor remaining vision; it is a crucial tool that enables children to grow in literacy, self-esteem and personal independence ( ). Learning Braille from a young age has a great impact on future academic success and employment opportunities ( ), and the earlier a child is introduced to Braille, the more likely they are to become a fluent reader ( ).
Graphical user interfaces (GUIs) in computer applications present considerable challenges in adaptation for people with visual impairment. When the interface between the computer and the user is based on text input, display and output, those text characters can be converted when necessary to synthesised speech or Braille. In a GUI, however, the information is often presented in the form of pictures, and the user issues commands by clicking on menu items, or dragging objects around the screen with a mouse. This is not feasible for the blind user, and the Commission of the European Union funded a project to consider possible solutions. A report was issued in 1995 ( ) detailing the situation, but this will be an area of considerable change in the next few years. The GUIB uses a new input/output device, called GUIDE, which integrates vertical and horizontal Braille displays, two loudspeakers and a touch-sensitive tablet. This device allows blind users to experiment with direct manipulation and two-dimensional (2D) spatial sound presentation. Another interdisciplinary research project focused on solving the GUI problem is Mercator for X Windows. Mercator replaces the spatial graphical display with a hierarchical auditory interface, adding a speech synthesis system to the standard desktop configuration. Both approaches provide a way of translating a graphical interface into a nonvisual medium that will satisfy different types of users ( ).
Desktop accessibility describes the hardware and software technologies that help people with visual impairment to use a computer: some are based on vision enhancement and some on sensory substitution. Microsoft Windows and Apple macOS have built-in accessibility available on desktop and laptop computers. The content of the computer screen can be accessed by a screen reader which provides speech output (Windows Narrator and VoiceOver), by magnification (Windows Magnifier and Zoom), or by changing the colour of how things appear on screen (i.e. high contrast, inverted colours). Other elements of the display can also be changed to suit personal preferences, such as the size, shape and texture of the cursor, and the reduction of animations. Virtual assistants (e.g. Cortana, Siri) allow the user to use their voice to perform tasks like sending emails, conducting web searches and opening applications and files. Voice recognition or dictation can also be used to compose emails and documents ( ). All these features make day-to-day activities such as shopping, emailing, web browsing, accessing documents, banking or navigating the device accessible to people with severe sight impairment or sight impairment. Other speech-recognition software is available that can be installed on the user's computer. One example is Dragon NaturallySpeaking ( https://www.nuance.com/en-gb/dragon.html ), which was not developed or marketed specifically for people with VI; although these systems can be helpful, they need training before reaching optimum effectiveness, and the set-up might require useful vision.
Table 15.4 shows a comparison of the advantages and disadvantages of audio versus Braille access.