
Many processes are so routinely deployed within universities that they can easily acquire a veneer of innate fairness. However, a process designed with careful consideration for the circumstances of its time and then continually re-used can favour those who established it and disadvantage others. Feedback and funding are two such processes that frequently disadvantage both disabled students and staff. Yet they can also easily be made fairer.
I’m sure it is stating the obvious to say that universities regularly request anonymous feedback. Anonymous surveys are often presented as a problem-free method for discovering what a cohort really thinks about a topic because, in theory, everyone is unidentifiable. In practice, surveys are only anonymous for those whose feedback is similar to everyone else’s. Because these processes are designed for nondisabled people, their feedback tends to contain no information that can link it to a specific person. For a disabled person, however, providing feedback often necessitates disclosing information that is specific to their impairment and/or access needs, which makes them identifiable.
For example, as a blind person I use a screen-reading program on my computer. It takes the information the computer displays on the screen and provides it in speech and braille. However, if websites or computer programs are not designed properly, this limits the information I have access to. I explain all of this to say that, because so many university services are mediated by computers in one way or another (registration, evaluation, teaching, research), it is rare that I can provide feedback about them in a survey without mentioning that I use a screen reader. And the fact that I use a screen reader makes me immediately identifiable within a department, and sometimes within a whole institution. The same is true for many disabled people who interact with university services in non-standard ways. These differences do not matter when the survey does not claim to be anonymous, because everyone in the target group is identifiable and knows it. However, surveys that claim to be anonymous effectively divide the target group into those who are nondisabled, whose anonymity is assured, and those who are disabled, who have to choose between being identifiable when everyone else is anonymous or not responding. Often this means that surveys privilege feedback from nondisabled people.
Clearly some surveys require anonymity, but not all do. Either way, the results would benefit if the requirement for anonymity were part of the survey decision tree rather than the assumed default. And if anonymity really is necessary, it should at least be acknowledged that anonymity limits the results.
Another situation that disadvantages disabled people is applying for funding, especially small amounts of internal funding for a specific purpose such as professional development or staging an event. When access-related funding is assessed as a component of the total request rather than separately, disabled people’s funding requests are higher, which often equates to being less competitive. When the funding is for people, such as graduate students, who are allocated a specific amount of skill development and/or travel funding per year, the result is that disabled students have fewer opportunities. And when the funding is to stage an event, this approach incentivises event organisers not to provide access.
A solution to this problem is to separate access funding from the rest of the application, either within the application itself or at the point of assessment. Another solution, particularly if the funding is for staging an event, is to expect that everyone will make their event accessible, and access then becomes a standard budget item.
Each of these processes compounds and multiplies the disadvantage that a disabled person experiences. For example, a disabled graduate student who has had fewer professional development opportunities will be less likely to obtain a position open to recent graduates, will then seem less accomplished as an early career researcher, and so on.
Similarly, the effects of each individual process combine into cycles. For example, it is easy to imagine a less accessible event being funded over a more accessible one. Disabled members of the university community are then excluded from the event itself, and subsequently from providing feedback about it. The event organiser and the funder will therefore not be aware that potential audience members were excluded, and might conclude that the event was more effective than it really was.
Dr Amanda Tink is a Postdoctoral Research Fellow at UniSA.