Referenda Case

All Student Vote (Summer 2026)

Warwick SU to lobby the university for an ethical stance on AI

This proposal outlines a draft stance for Warwick Students’ Union on the use of Artificial Intelligence (AI) and Large Language Models (LLMs). It brings together evidence on the environmental, social, labour, and academic impacts of these technologies, including their energy use, data practices, and potential effects on teaching, learning, and work.

 

The motion proposes that the SU lobby the University of Warwick to take a cautious and regulated approach to AI. This includes limiting the University’s use of AI in specific areas such as governance documents, marketing and promotional materials, and assessment design, as well as introducing clear expectations around transparency and oversight.

 

The motion does not call for a total ban on AI use by the SU. Instead, it supports an opt-in approach, meaning AI would only be used where actively chosen, alongside clear transparency and declaration requirements when it is used.

 

Overall, the proposal aims to ensure that both the University’s approach to AI and the SU’s own practices align with sustainability commitments, academic integrity, ethical standards, and meaningful staff and student involvement.

This Union notes:

  1. That the popularity of Artificial Intelligence (AI)/Large Language Models (LLMs) has risen over the past three years, prompting their rapid integration into institutional systems and individual workflows. At the same time, regulation of AI/LLM development and use has lagged behind.
  2. That this rapid, uncontrolled integration is impacting the livelihoods of students and staff, contributing to the ecological crisis, and disturbing long-standing social and academic norms and conventions.

Environmental Impacts

  1. That ChatGPT queries use ten times as much energy as Google searches, with more complex queries using even greater amounts (EPRI, 2024).
  2. That the number of data centres needed to support increased use of AI/LLMs in the UK is set to increase by almost a fifth, including one in Herefordshire that will be built on green belt land (Kleinman & Shveda, 2025).
  3. That AI/LLMs emit the equivalent of more than 8% of aviation’s annual global emissions and consume energy equivalent to the entire consumption of New York City (Dodds, 2025).

Social and Labour Impacts

  1. That AI/LLMs are trained on big data (extremely large data sets), which are increasingly sourced through illegal and extralegal means. Scores of literary and visual artworks have been scraped from the internet by AI/LLM developers, often without explicit permission from, or compensation to, their authors. The problem is worse for small-scale artists, who often lack the resources to pursue copyright infringement claims against multi-billion-dollar developers (The Authors Guild, 2023; Grynbaum & Mac, 2023; Reisner, 2025).
  2. That AI/LLMs are also increasingly used to outright replace human work in managerial and academic spaces, in the name of efficiency and productivity. Yet reviews of AI/LLM integration increasingly suggest that it does not actually reduce the burden of work on workers, especially when integration is followed by layoffs (Niederhoffer et al., 2025; Walther, 2025; Rogelberg, 2026).
  3. That AI/LLM output still needs to be checked, verified, and augmented by humans. Workers can therefore end up with more work: not only must they do their own jobs, they must also check the AI’s output to ensure accuracy and reliability. In one case, such work was outsourced to prisoners, with extremely low pay and non-existent worker protections (Meaker, 2023).
  4. That higher-powered AI/LLMs are often paywalled behind costly subscriptions, reinforcing existing socio-economic inequalities.
  5. That continuous and uncritical use of AI/LLMs has been linked to cognitive decline (Gerlich, 2025).

 

This Union believes: 

  1. That the University, which aims to be sustainable, should not use or promote tools that increase carbon emissions and drive green belt development.
  2. That AI/LLMs, as currently designed and operated, profit from the stolen work and labour of artists, authors, and academics.
  3. That the University, as an academic institution, should not endorse or engage in business practices that devalue the work of artists, authors, and academics through the scraping, replication, and imitation of their work by AI/LLMs.
  4. That the University, as an academic institution, should not seek to replace or otherwise devalue the innately human work of academia by enthusiastically integrating AI/LLMs into academic and managerial work without due process.
  5. That the University, as an academic institution, should seek to protect the process of critical thinking and other essential academic skills (such as referencing and note-taking), and should therefore not promote short-cut measures such as AI/LLM tools.

This Union resolves:

Management

  1. That the resolutions of this stance fall primarily under the remit of the VP Education Officer and the VP Democracy and Development Officer, and where possible the Environment and Ethics Officer, as well as in committees where AI is discussed.
  2. To lobby the University not to actively encourage or promote AI/LLM tools for use by students and staff.
  3. To lobby the University not to use AI/LLM tools to generate images, videos, or music for marketing and promotional purposes, nor to otherwise use AI/LLM-generated images, videos, or music for such purposes.
  4. To lobby the University not to use AI/LLM tools that are based on, or that replicate, imitate, or use, the personality or likeness of any person.
  5. To lobby the University not to use AI/LLM tools in the creation or amendment, in part or in full, of governance documents, including byelaws and regulations.
  6. To lobby the University to put in place an ‘opt-in’ policy for AI/LLM tools, under which staff must actively opt in to the use of any AI tools for their work.
  7. To lobby the University to require a declaration of AI use when and where AI is used.
  8. To lobby the University to incorporate into future sustainability policy an assessment of AI/LLM impacts on its Scope 2 and 3 emissions, and of how it will mitigate the environmental impacts of AI/LLMs.
  9. To lobby the University to include at least one staff and one student sustainability representative in all AI/LLM consultations, committees, and policy-making.
  10. To lobby the University not to accept sponsorships or business partnerships that promote the use of AI/LLMs.

Academic

  1. To lobby the University not to encourage the creation of AI/LLM-only assessments, or assessments in which AI/LLMs form an integral part.
  2. To lobby the University to require a declaration of AI/LLM use in teaching and learning on the module approval form for staff, including where and how AI/LLM tools are to be used.
  3. To lobby the University to require students to declare where and how they have used AI in all assignments (see the example declaration from the PAIS Department).
  4. To lobby the University to require staff to declare where and how they have used AI in all assignments, including in the creation of questions and in marking/grading.
  5. To lobby the University to ensure that all departments publicise where and how AI/LLMs can and cannot be used in assignments.
  6. To lobby for a stronger framework for assessing academic misconduct regarding the use of AI in assignments and exams.

 

References

The Authors Guild (2023). https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/

Dodds, I. (2025). https://www.independent.co.uk/tech/ai-data-center-emissions-environment-b2887454.html

EPRI (2024). https://www.epri.com/research/products/3002028905

Grynbaum, M. & Mac, R. (2023). https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html

Kleinman, Z. & Shveda, K. (2025). https://www.bbc.co.uk/news/articles/clyr9nx0jrzo

Meaker, M. (2023). https://www.wired.com/story/prisoners-training-ai-finland/

Niederhoffer, G. Kellerman, A. Lee, A. Liebscher, K. Rapuano & J. T. (2025). https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity

Reisner, A. (2025). https://www.theatlantic.com/technology/archive/2025/03/libgen-meta-openai/682093/

Rogelberg, S. (2026). https://fortune.com/article/does-ai-increase-workplace-productivity-experiment-software-developers-task-took-longer/

Walther, C. (2025). https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/