It’s amazing how far we’ve come. As Disability Service Providers, we have more tools at our disposal now than we ever dreamed of. The AT lab has gone mobile, accessible document production can be semi-automated, and captions are available at the click of a button (and they’re getting more accurate every day!). What used to be a highly specialized niche in our field is now pocket-sized, user-friendly, and used by far wider audiences than students with disabilities. This is a good thing for the work we do in providing access and accommodations.
Even as recently as 10 years ago, when the pieces and potential were there, assistive applications like accurate transcription of a lecturer, conversion of math into readable text, and on-the-fly captions were pie-in-the-sky propositions. Now that we have tools that can do these things, and integrate them seamlessly into our existing teaching infrastructure, it’s our job as DSPs to be crafty, to always be thinking of how we can use these tools to provide access to our students. Even when the tools are packaged as assistive tech products, we can use them in creative ways.
Take, for example, Glean (previously Sonocent Audio Notetaker). Its main function of allowing students to “mark up” a recording on their laptop is revolutionary. It built upon, and in a lot of ways improved, what smartpens (like Livescribe pens) could do, by allowing students to focus on coding their active listening, instead of processing the content of a lecture to make their own meaning in real time. Coupled with Artificial Intelligence (AI) voice recognition capability, we now have a tool that can provide that accurate transcription of a lecture, and our students can integrate it into the notes they take.
We can also leverage it in clever ways. Voice recognition has been around, and in the mainstream, for quite some time, built into operating systems or in third party applications. Glean’s transcription tool can be used by students to write papers and dictate their responses to exams, if you set it up the right way, using a platform they are familiar with. And having an AI engine running the recognition means 1) less time teaching students how to use the application and 2) less time teaching the application to recognize the student.
So what is it about AI that DSPs need to know? Well, AI is being integrated into just about everything we do in higher ed, but it seems a bit daunting. There’s a lot of chatter about how AI will impact higher ed, and whether it will compromise the academic integrity of our courses. (The thinking is that students will use platforms like ChatGPT to generate their papers, research, and so on.) While the potential is certainly there, academic experts are exploring ways to build AI competencies for students into our curriculum. There are definite benefits to accessibility, and as DSPs, we can add to the narrative that 1) AI is here to stay and 2) it can be leveraged to provide better access for students with disabilities.
Ten years ago, we had voice recognition, and now we have automated captions and accurate transcriptions. Today we have AI applications that can recognize when a face is in frame so a user who has low vision can take more accurate photographs with their smartphone. There are similar applications that can identify colors and objects. Apps like Lookout and BeMyEyes help those with visual impairments identify objects and understand their surroundings. We’re not far off from having applications at our fingertips that can provide good, accurate, concise, and academic text descriptions of images. In fact, those applications exist to a certain degree already. At the rate AI is advancing, we won’t have to wait much longer before those features are integrated into our day-to-day software. We’ll have the ability to apply those tools to retrofit academic content for students with disabilities with more speed and efficiency, and less human error.
Does this put us out of a job? Far from it! In fact, our work is as important as ever. We need to strongly advocate on our campuses for the awareness and effective use of the tools we currently have, as well as the AI tools on the horizon. Our work is cut out for us as we learn how to use and adapt these AI tools, and also teach our campus partners why accessibility and inclusivity are so essential. In addition, we must ensure that the concept of Universal Design for Learning is infused on this new ground floor. Now more than ever (i.e., in a post-pandemic world), we can say with confidence that our campuses are primed to use learning technology and our faculty’s ability to create dynamic content is supercharged. I’ve always believed that our job as DSPs is not to be technology gurus or experts, but rather champions who ensure access for all. Let’s go out and be those champions. Pull together the right people (your tech experts, media experts, innovative faculty, course designers, academic leadership, etc.) and lay that foundation. Assistive technology is no longer specialized, but the need for access has never changed. With a little time and patience, and a lot of advocacy, we can bring our field out of the niche. If we share the ‘why’ with those who know the ‘how,’ we can make our education more accessible than ever before.