
AI-ASSISTED TOOLS

What do you need?

Planning a party? → GENERATIVE AI: ChatGPT, Gemini

Need summaries of REAL research papers? → AI RESEARCH ASSISTANTS: Elicit, Scite.ai, Statista

Need to find full-text REAL papers? → RESEARCH DATABASES: FIND, JSTOR, EBSCO, ProQuest

Combine the power of GenAI with AI Research Assistants to discover real researchers, real people, and real data.

What's next? Harness your data and workflow.

A workflow is essential to help you manage your information effectively.

Below is a practical workflow for GenAI research tools, adapted from NoodleTools, designed to help you manage information and avoid unintended plagiarism. If you’d like to use this template, please click here.


Philip Williams, 2024. Adapted from NoodleTools to include a GenAI workflow.

See https://my.noodletools.com/

AI AND ETHICS

Ethics live in our actions, our interactions, and the decisions we make. As AI changes rapidly, knowing how to engage with it ethically is challenging. Recognising where ethical considerations arise and bringing these issues into our conversations allows us, as a learning community, to adapt and respond to the opportunities and challenges of AI.

Whose data was used to train AI?

However big the data set used to train an AI tool, the tool will carry forward the societal biases embedded in that data, perpetuating stereotypes, toxic attitudes, and discrimination. AI companies frequently do not disclose where the data came from or who owns it. Because AI cannot identify the source of its information, it cannot effectively cite sources. Even content governed by Creative Commons licenses was likely used without regard for the creators’ license terms. The output from AI is only as good as the data it was trained on. AI cannot make these ethical judgements and decisions itself; humans are doing that.

Is AI biased? ChatGPT says YES

“Educators can help students understand bias and think critically by showing how certain questions lead to biased responses”.

In what ways are bias, abusive content and discrimination being addressed by AI companies?

Ever wonder why, despite being trained on content scraped from the World Wide Web, AI doesn’t provide abusive, violent, and toxic responses? AI is not making this call; humans moderate AI output. Tech companies employ vast numbers of human moderators from regions where pay is low, worker rights are limited, and minimal support is provided for workers who are faced with the most violent and toxic products of human online content.

 

Engineers and researchers continue to develop ways to evaluate and improve ‘fairness’, using techniques such as ‘re-weighting’, ‘adversarial debiasing’, and ‘algorithmic adjustments’, so humans are still making these decisions. (A short sketch of how re-weighting works follows the links below.)

Content moderation by low-paid and exploited workers

“‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models” - The Guardian

Amazon’s Mechanical Turk crowdsourcing work

FAIR: Fair adversarial instance re-weighting
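
To make one of these techniques concrete: the sketch below is not taken from the FAIR paper or any specific tool; it is a minimal, hypothetical example of instance re-weighting, where examples from an under-represented group are given larger weights so that each group contributes equally during training.

from collections import Counter

def reweight(groups):
    """Return one weight per example so that every group contributes
    equally to training, however often it appears in the data.
    `groups` is a list of group labels (e.g. a protected attribute)."""
    counts = Counter(groups)
    n_total = len(groups)
    n_groups = len(counts)
    # A rare group is up-weighted and a common group down-weighted;
    # each group's weights sum to n_total / n_groups.
    return [n_total / (n_groups * counts[g]) for g in groups]

# Hypothetical data: 8 examples from group "A", 2 from group "B".
groups = ["A"] * 8 + ["B"] * 2
weights = reweight(groups)
print(weights[0], weights[-1])  # "A" examples get 0.625, "B" examples get 2.5

Weights like these could then be passed to a model’s training routine (for example, the sample_weight argument accepted by many scikit-learn estimators). The key point for the ethics discussion is that a human still chooses which groups to balance and what counts as ‘fair’.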

Who owns the AI tools that we use?

Private companies own and develop these AI tools. These companies are neither democratic nor transparent, and they operate within governance structures that were not designed to address the complexities of AI.

Generating income for shareholders is the main driving force for these owners, and we have seen that users and vulnerable people are treated as less important than profits.

Amnesty International’s report on the ways that Facebook’s systems promoted violence against the Rohingya

In what ways could the use of AI in education be considered a high-risk situation?


The European Commission has undertaken considerable work to create a legal framework for AI. As part of this process, they defined four levels of risk for AI systems. Why do you think education is considered a high-risk context for implementing AI?

European Commission AI Act

EU guidelines on ethics in AI: context & implementation

“Human agency and oversight” are emphasised as essential in implementing AI in the education setting.

What is at risk when AI gets it wrong?

AI chatbots distort and mislead with confidence, and they are poor at telling you when they do not know how to answer your question accurately. If you use the paid version, the AI gets even more confident. Humans and the peer-review process for academic writing are also flawed, but is AI helping or hindering? Is AI to blame, or human over-reliance on AI? AI companies keep making improvements, but over-reliance on AI still places students, researchers, writers, decision makers and policy makers in a highly vulnerable position.


We know AI gets it wrong, but just how wrong? The Columbia Journalism Review found that the answer is wrong about 60% of the time. Does this worry you? (summary article from TechSpot)


CASE STUDY


In short:

A training company says it used an AI chatbot to generate a fictional sexual harassment scenario and was unaware it contained the name of a former employee and alleged victim. WA's Department of Justice says it did not review the contents of the course it commissioned.

What's next?
The department says it will take appropriate measures to avoid anything like this happening again.

What questions does this raise for you about the ethical use of generative AI?

McArthur, Bridget. “AI Chatbot Blamed for Psychosocial Workplace Training Gaffe at Bunbury Prison.” ABC News, 20 Aug. 2024, https://www.abc.net.au/news/2024-08-21/ai-chatbot-psychosocial-training-bunbury-regional-prison/104230980.


What risks are we exposed to if we off-load our work to AI?

It could cost you your job.

Ortiz, Aimee. “Wyoming Reporter Resigns after Using A.I. To Fabricate Quotes.” The New York Times, 14 Aug. 2024, www.nytimes.com/2024/08/14/business/media/wyoming-cody-enterprise-ai.html.
