Technical Democracy Presentations

Australian Association for Research in Education annual conference 2021

On 29 November 2021, members of the Education Futures Studio (University of Sydney) and colleagues from the University of Technology Sydney, the University of Wollongong, and Australian Catholic University presented works in progress on the topic:

Educational datafication and automated decision-making: Concepts, tools, and encounters for technical democracy and data justice

Session overview:

This symposium explored the potential of ‘technical democracy’ (Callon, Lascoumes & Barthe, 2001) and ‘data justice’ (Dencik et al., 2019) for education policy and practice. As datafication and automation trends permeate the education sector, we highlight the urgent need for novel forms of collective experimentation and learning. Algorithms, data and artificial intelligence are increasingly embedded and questioned across educational systems: exam grading, student monitoring, learning analytics, school allocation. These technical innovations are rapidly reshaping educational policy networks and institutional formations. Yet strategies for examining such educational change and controversies in democratic ways remain nascent.

Our aim was for participants to learn a range of concepts, tools, and encounters to explore ‘hybrid forums’ (Callon et al., 2001) with heterogeneous groups – alongside ways to “trace the conceptual and empirical horizons of how social justice can be advanced in a datafied society” (Dencik et al., 2019, p. 880). The symposium offered participants a repertoire of vocabulary, resources, and ideas, to spark democracy and justice in their own education contexts.

Abstracts and slides from the symposium are below.

Data journeys, lives, and justice: Making visible the datafication of educational policy, teaching and learning

Authors: Sarah K. Howard, Dragan Gasevic, Simon Knight, Teresa Swist, Kathryn Bartimote, Kalervo N. Gulson, Tiffani Apps, Juliana Peloche, Nathanael Hutchison and Neil Selwyn

Data is shaping educational practice and being used as a mechanism of school reform and improvement. These practices raise a number of questions that need to be addressed, such as the appropriateness of data for certain tasks (e.g. Winne, 2017), the centralization of educational policy and its detachment from learning contexts (e.g. Ball et al., 2012), and stakeholder data literacy (e.g. Alhadad et al., 2018). We argue that developing a better understanding and increased visibility of the life and journey of data, identifying where data is used appropriately or inappropriately and the spaces where expectations of data as evidence may be problematic, can help us better understand the datafication of education, improve data-informed policy, and support stakeholder decision making.

The aim of this paper is to employ Bates et al.'s (2016) data journeys methodology as a novel way to explore the complexity of educational data production, processing, and distribution. The 'data journey' (see Bates et al., 2016) focuses on how data is repurposed through production, processing and various activities. In the current analysis, this approach is used to consider how standardized testing data from the National Assessment Program Literacy and Numeracy (NAPLAN), as explicated in a mandatory NSW school improvement policy, is employed by stakeholders and for what purposes over four years.

The original purpose of NAPLAN data was to follow student progress. The scope of usage has since extended. Current policy explains that school leadership and teachers can select pre-processed NAPLAN data, made available through a controlled departmental data platform, to use as evidence of school improvement. The data may then be linked with other data sources. In a new form and linked, possibly 1-3 years later, it may be used to draw conclusions and support school decision making. At the end of the process, data is returned to the Department in another form, as evidence of progress on school improvement, via a second departmental online platform. In this paper, we will present a full data journey and its implications for stakeholders’ work in schools and education policy. Ultimately, the intention of this analysis is not to limit the use of educational data, but rather to use the data journeys approach to surface ‘data justice’ concerns and, in doing so, highlight the “societal transformations that are associated with datafication and the implications these have on people’s lives” (Dencik et al., 2019, p. 876).

Covid controversies, automated decision-making, and technical democracy in education

Authors: Kevin Witzenberger, Kalervo N. Gulson, Teresa Swist, & Greg Thompson

During the COVID-19 pandemic in 2020/21, schools across the world were closed, and students were forced to undertake education remotely at home. As governments struggled to come to grips with what to do about education, the EdTech sector was quick to provide solutions that, globally, ministries of education accepted with open arms (Williamson et al., 2021). The educational response to Covid involved ‘the development of new multi-sector networks, public-private partnerships and outsourcing contracts dedicated to promoting educational technologies’ (Williamson et al., 2021, p. 119). Education policy debates that had often played out among established interest groups – teacher unions, think tanks, corporations, and so forth – were now unfolding across the polity. As such, Williamson et al. characterise education policy responses to Covid as ‘controversies’ that ‘[involve] diverse expert communities, political actors, regulators, financial funders, and various publics’ (p. 118). Many of the EdTech solutions provided were AI-embedded systems, such as Google Classroom, supporting an intensified uptake of automated decision-making (ADM) technology in education (Perrotta et al., 2020). Automated decision-making refers to decisions made by technological means without human intervention. Education systems, teachers, parents and students were grappling with questions not only of access to these technologies, but also of fairness and transparency in decision-making, data privacy, the role of corporations in education, and the intelligibility of proprietary and ‘black-boxed’ algorithms.

This paper aims to show how controversies about automated decision-making technology in education during COVID-19 can be examined, understood and responded to through the idea of technical democracy. Technical democracy is a way of examining and understanding socio-technical controversies that focuses on collective experimentation and learning (Callon et al., 2011). We test a novel way of exploring this approach, with lines of inquiry focused on mapping a timeline of the ‘technical objects’ and ‘heterogeneous elements’ (Akrich, 1992) of controversies specific to ADM since the pandemic started. These objects and elements include policy and public responses, the links between health data and education policy responses (e.g., school closures), the new sites of automation in schooling (e.g., Google Classroom), and the policy role of interest groups and new actors such as technology corporations. By visualising the ADM technical objects of Covid controversies over time and location, we sketch a novel, pragmatic method for situating collective dialogue to advance technical democracy.

Pedagogic encounters with algorithmic system controversies: A technical democracy research apparatus

Authors: Teresa Swist, Kalervo N. Gulson, & Justine Humphry

Institutions around the world are striving for ways to create computational systems which benefit constituents and mitigate potential harms. With the exponential rise and reach of algorithms, artificial intelligence, and data, the concept of ‘technical democracy’ (Callon, Lascoumes & Barthe, 2011) invites people with diverse expertise to tackle sociotechnical controversies via collective learning and experimentation. To explore an assemblage approach to technical democracy, we bring together thinking about algorithmic system controversies, methodological innovation, and public pedagogy. We advance this line of inquiry by introducing a research apparatus framework composed of several pedagogical encounters for reinventing algorithmic systems: deliberation, visualisation, exploration, design, examination, narration, speculation, and refusal. This framework is discussed as a way to open up the ethical, epistemic, and creative possibilities of ‘democratic interventions’ (Feenberg, 1999) with diverse publics in a datafied society. We conclude with implications for future research attentive to pedagogical encounters with algorithmic system controversies, characterised by political struggle, knowledge, and reinvention.

Technical democracy, fairness and the UK exam algorithm: Making a ‘design Thing’ to explore bias in automated grading systems

Authors: Kalervo N. Gulson, Teresa Swist, Simon Knight & Kirsty Kitto

In 2020, during the height of the COVID-19 pandemic, students in the UK could not sit their final matriculation exams, which are necessary for entry to university. Instead of using teacher assessments of students, the UK exam regulator used a simple sorting algorithm to predict students’ A-level grades. The A-level algorithm can be considered a simple form of Artificial Intelligence (AI), specifically an automated decision-making tool. While it was designed to ensure fairness through standardisation, the tool ultimately penalised students who performed better than might be expected based on their school context and on student and school performance in previous years. The algorithmically produced grades were withdrawn after widespread student protests.

The UK exam algorithm controversy hurtled the topic of an automated grading system into the mainstream media spotlight and sparked heated public debate amongst diverse publics/communities: students, parents, teachers’ unions, statisticians, exam boards, and education bodies. Suddenly, a range of intricate concepts (such as algorithms, fairness, bias, standardisation, prediction) became objects of collective attention and concern. While widespread media coverage and in-depth reports from different sectors offered detailed insights and analysis, the potential of educational/pedagogical tools for collective experimentation and learning about such socio-technical controversies remains underexplored.

In this presentation we examine the participatory design process of developing an interface/data visualisation of the UK exam algorithm with a team of interdisciplinary researchers (including social, learning and computer scientists) and students. We propose that this ‘design Thing’ (Björgvinsson et al., 2012) opens up possibilities for technical democracy: to navigate the multidimensional bias and opacity of automated grading systems, and to negotiate fairer and more inclusive alternatives with diverse publics/communities.

Deliberative Democracy as a Strategy for Co-designing University Ethics Around Analytics and AI in Education

Author: Simon Buckingham Shum

Universities can see an increasing range of student and staff activity as it becomes digitally visible in their platform ecosystems. The fields of Learning Analytics and AI in Education have demonstrated the significant benefits that ethically responsible, pedagogically informed analysis of student activity data can bring, but such services are only possible because they are undeniably a form of “surveillance”, raising legitimate questions about how the use of such tools should be governed.

Our prior work has drawn on the rich concepts and methods developed in human-centred system design, and participatory/co-design, to design, deploy and validate practical tools that give a voice to non-technical stakeholders (e.g. educators; students) in shaping such systems. We are now expanding the depth and breadth of engagement that we seek, looking to the Deliberative Democracy movement for inspiration. This is a response to the crisis in confidence in how typical democratic systems engage citizens in decision making. A hallmark is the convening of a Deliberative Mini-Public (DMP) which may work at different scales (organisation; community; region; nation) and can take diverse forms (e.g. Citizens’ Juries; Citizens’ Assemblies; Consensus Conferences; Planning Cells; Deliberative Polls).

A DMP’s combination of stratified random sampling to ensure authentic representation, neutrally facilitated workshops, balanced expert briefings, and real support from organisational leaders has been shown to cultivate high-quality dialogue in sometimes highly conflicted settings, leading to a strong sense of ownership of the DMP’s final outputs (e.g. policy recommendations).

This symposium contribution will describe how the DMP model is informing university-wide consultation on the ethical principles that should govern the use of analytics and AI around teaching and learning data.

Confronting the datafication of schooling via technical democracy: Problematising the agonistic and pluralistic im/possibilities of ‘hybrid forums’

Authors: Steven Lewis, Jessica Holloway, Sarah Langman

Schooling has become a key target of the ‘data deluge’ (Anderson, 2008; Kitchin, 2014) and is now increasingly subject to modes of accountability that both rely upon and produce new forms of digital data. Within this thoroughly ‘datafied’ world (Lupton, 2016; Smith, 2016), the datafication of schooling has led to a series of ontological and epistemological shifts concerning who and what schools, teachers and students are, as well as how they are known and understood (cf. Wyatt-Smith, Lingard & Heck, 2021; Lewis & Holloway, 2019). Arguably, the space of public education is rapidly being ceded to obscured algorithms and technical data specialists, both of which are frequently beyond the purview – or even the comprehension – of the teaching profession and the public.

Despite the building critique of datafication, there remains a palpable lack of consensus about how society should best respond to the growing prevalence of data and, relatedly, to the lack of transparency in how such data shape our public and private lives. One promising strategy draws on what Callon, Lascoumes and Barthe (2001) describe as technical democracy. Here, ‘hybrid forums’ of citizens and specialists help to penetrate the otherwise closed space of expertise by fostering a more democratic dialogue between technical expertise and social concern. This purportedly bridges the divide between expert and ‘lay’ ways of knowing and acting, and thereby challenges the delegation of authority to experts.

We would suggest technical democracy aligns with democratic thinkers who see ‘deliberative democracy’ as the ideal means for both eliminating conflict and achieving consensus-derived resolution (cf. Habermas, 1996; Rawls, 1997). However, the work of political theorists like Mouffe (1999) and Connolly (2005) provides a compelling lens for problematising the very premise that consensus should be a virtue of democracy. Rather, they argue that democratic politics should embrace conflict as a productive means for pursuing pluralistic values, perspectives and identities.

Thinking with theory (Jackson & Mazzei, 2013), this paper seeks to problematise the hybrid forum in terms of producing agonistic and pluralistic im/possibilities. Drawing on vignettes developed from our previous respective research (e.g., interviews with teachers, school leaders, policymakers, technical specialists etc.), we produce a series of ‘contrast models’ (Connolly, 2005) for simulating what hybrid forums might offer towards realising democratic practices and outcomes.