
Algorithms can decide your marks, work prospects and financial security, but are they fair?

It is important to know how algorithms work and the kinds of trade-offs that are made in decision-making using algorithms
Last Updated 22 November 2021, 06:30 IST

By Kalervo Gulson, Teresa Swist, Claire Benn and Kirsty Kitto for The Conversation

Algorithms are becoming commonplace. They can determine employment prospects, financial security and more. Their use can also be controversial: one example is robodebt, the name given to the Australian government's flawed online welfare compliance system.

Algorithms are increasingly being used to make decisions that have a lasting impact on our current and future lives. Some of the greatest impacts of algorithmic decision-making are in education.

So what sort of decisions might involve algorithms? Some decisions will involve the next question for school students to answer on a test, such as the online provision of NAPLAN. Some algorithms support human decision-making in universities, such as identifying students at risk of failing a subject. Others take the human out of the loop, like some forms of online exam supervision.

How do algorithms work?

Despite their pervasive impacts on our lives, it is often difficult to understand how algorithms work, why they have been designed, and why they are used. As algorithms become a key part of decision-making in education — and many other aspects of our lives — it becomes important to know how algorithms work and the kinds of trade-offs that are made in decision-making using algorithms.

To explore these two issues, researchers designed an algorithm game using participatory methodologies that involve diverse stakeholders. The process becomes a form of collective experimentation intended to encourage new perspectives and insights into an issue.

The algorithm game is based on the UK exam controversy of 2020. During Covid-19 lockdowns, an algorithm was used to determine grades for students wishing to attend university. The algorithm predicted grades for some students that were far lower than expected. In the face of protests, the algorithm was eventually scrapped.

The Conversation's interdisciplinary team co-designed the UK exam algorithm game over a series of two workshops and multiple meetings this year. The workshops included students, data scientists, ethicists and social scientists. Such interdisciplinary perspectives are vital to understand the range of social, ethical and technical implications of algorithms in education.

Algorithms make trade-offs, so transparency is needed

The UK example highlights key issues with using algorithms in society, including issues of transparency and bias in data.

The team designed the algorithm game to help people develop the tools to have more of a say in shaping the world algorithms are creating. Algorithm 'games' invite people to play with and learn about the parameters of how an algorithm operates. Examples include games that show people how algorithms are used in criminal sentencing, or how they can help predict fire risk in buildings.

There is growing public awareness that algorithms, especially those used in forms of artificial intelligence, need to be understood because of mounting concerns about fairness. But while everyone may have a vernacular understanding of what is fair or unfair, using algorithms involves numerous trade-offs.

In the algorithm game research, people were taken through a series of problems where the solution to a fairness problem simply introduces a new one. For example, the UK algorithm did not work very well for predicting the grades of students in schools where smaller numbers of students took up certain subjects. This was unfair for these students.

The solution meant the algorithm was not used for these often very privileged schools; their students instead received grades predicted by their teachers. But these grades were mostly higher than the algorithm-generated grades received by students in larger schools, which were more often government comprehensive schools. So the decision was fair for students in small schools but unfair for those in larger schools, whose grades were allocated by the algorithm.
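The trade-off described above can be sketched as a toy model. This is purely illustrative: the function, its inputs and the cohort-size cutoff are hypothetical simplifications, not the actual UK (Ofqual) algorithm, which was far more complex.

```python
# Toy model of the fairness trade-off: small cohorts get the teacher's
# prediction; large cohorts are standardised towards the school's
# historical average. All names and numbers here are hypothetical.

def assign_grade(teacher_prediction, school_average, cohort_size,
                 small_cohort_cutoff=5):
    """Return a grade (0-100) under a simplified two-rule policy."""
    if cohort_size < small_cohort_cutoff:
        # Small subject cohort: trust the teacher outright.
        return teacher_prediction
    # Large cohort: pull the prediction towards the school's history.
    return round((teacher_prediction + school_average) / 2)

# A student predicted 90 in a class of 3 keeps their grade...
print(assign_grade(90, 60, cohort_size=3))
# ...while an identical student in a class of 30 is pulled down.
print(assign_grade(90, 60, cohort_size=30))
```

Even this crude sketch makes the point: the same student receives different grades depending only on cohort size, so fixing unreliability for small cohorts creates a new unfairness for large ones.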

The point of the game was to show that a perfect outcome is not possible: neither humans nor algorithms can make a set of choices that is fair for everyone. This means we have to decide which values matter when we use algorithms.

Public must have a say to balance the power of EdTech

While the algorithm game focuses on the use of an algorithm developed by a government, algorithms in education are commonly introduced as part of educational technology. The EdTech industry is expanding rapidly in Australia. Companies are seeking to dominate all stages of education: enrolment, learning design, learning experience and lifelong learning.

Alongside these developments, Covid-19 has accelerated the use of algorithmic decision-making in education and beyond.

While these innovations open up amazing possibilities, algorithms also bring with them a set of challenges we must face as a society. Examples like the UK exam algorithm expose us to how such algorithms work and the kinds of decisions that have to be made when designing them. We are then forced to answer deep questions of which values we will choose to prioritise and what roadmap for research we take forward.

Our choices will shape our future and the future of generations to come.

(Gulson, Swist and Kitto are associated with the University of Sydney; Benn is associated with the Australian National University)

(Published 22 November 2021, 05:45 IST)
