AI has significantly transformed the landscape of the arts, blurring the lines between human creativity and machine intelligence. In visual arts, algorithms generate mesmerizing paintings and sculptures, challenging traditional notions of authorship. Music composition witnesses a fusion of human emotions and AI precision, producing harmonies that push the boundaries of conventional melodies. AI has also found its place in literature, generating compelling narratives and even collaborating with human authors. While some fear a loss of the human touch, others see AI as a powerful muse, inspiring new possibilities and pushing artists to explore uncharted territories. The effects of AI in the arts are both provocative and promising, opening up avenues for innovation while sparking essential conversations about the nature of creativity.
Today's session proved to be a bit of an eye-opener. The topic of conversation centred around the development and potential use of Artificial Intelligence in the arts, with some consideration of our own practices. It's a controversial topic driving some interesting debates around concerns such as copyright, emotion, aesthetics and ethics, and not just amongst artists. Thing is, we already interact with AI in one form or another; my own experiences range from the frustrating bot that doesn't quite understand some of my London vernacular, or my partner's lisp, to the infuriating ePassport gate at Gatwick Airport which insists that I and the person on my passport are not the same! How realistic, after all, would an encounter with a system designed to mimic human behaviour be without some inbuilt human bias?
The "Experiments & Play" element of this session took the form of an introduction to online image generators such as Leonardo AI. I entered two prompts into Leonardo; "Collage of eternal happiness" and "Collage of poverty misery". The results confirmed what I already expected to see so no surprises there. But it remains disappointing that such negative associations continue to permeate the machine, even in its relative infantry (can we not just get it right before it’s too late?).
There are systems and software that incorporate AI to my advantage; Photoshop, for example, uses algorithms in its Remove Tool (to replace unwanted areas in an image with similar content), Curvature Pen Tool (which enables anyone to draw straight lines and smooth curves effortlessly), Sky Replacement Tool (allowing for a change of mood in the sky in a few clicks) and the Object Selection and Refine Edge Tool (which selects the main object within an image and allows for precise masking). These tools make my work as a digital-based artist less time-consuming. I've also been testing the waters with ChatGPT, which (on the surface) seems like a remarkable tool bursting with knowledge; however, I remain wary of the fact that what comes out needs to be verified, and the process of doing so could take as long as conducting the research myself from scratch. Incidentally (and as an example of the capabilities of AI), the first paragraph of this blog entry (in bold italics) was generated by asking ChatGPT to "write a paragraph on the effects of AI in the arts". The rest is me, promise!
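For the curious, here's roughly how that same request could be made programmatically rather than through the chat window. This is only a sketch using OpenAI's Python client; the model name and setup are my assumptions, and it isn't how the opening paragraph was actually produced (that was typed straight into ChatGPT).

```python
# A minimal sketch of asking OpenAI's API for the same paragraph I requested
# via the ChatGPT web interface. Assumes the `openai` Python package is
# installed and an OPENAI_API_KEY environment variable is set; the model
# name below is illustrative, not the one behind the ChatGPT website.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption: any chat-capable model would do
    messages=[
        {"role": "user",
         "content": "write a paragraph on the effects of AI in the arts"},
    ],
)

print(response.choices[0].message.content)
```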
In 2016 Microsoft Corporation unleashed a monster: a bot which started its life as an "innocent" virtual companion, programmed to mimic the language patterns of a teenage girl. Tay (which stood for "thinking about you") was Microsoft's attempt to mimic the success of its older Chinese cousin XiaoIce, which, according to the corporation, was being used by some 40 million people in the region. Within hours of its launch, Tay began posting offensive tweets after being targeted by online trolls who exploited its learning capabilities by feeding it racist, sexist and otherwise inappropriate content. Tay's responses quickly devolved into hate speech and offensive language, prompting Microsoft to call time-out on the unruly teen. Tay survived only 24 hours, but its behaviour served as an almost laughable illustration of the old adage "garbage in, garbage out". The embarrassing episode taught Microsoft (and the industry) a valuable lesson in the perils of releasing AI systems without rigorous safeguards in place. Corporate Vice President of Healthcare at Microsoft, Peter Lee, was forced to issue a statement of apology, one in which he attempted to distance himself and the corporation from their own baby by stating: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for".
According to Microsoft, Tay was only meant to be "a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes"; had all gone to plan, it might have lived to be just that. But there are other flawed systems in use today, some of which appear to be riddled with bias and whose shortcomings have more immediate and concerning effects on the people who use them. Systems like the one implemented by HM Passport Office (HMPO) have a great deal of room for improvement; Joshua Bada (pictured below) fell victim to the shortcomings of the government agency's automatic passport photo checker when he tried uploading a high-resolution image to the system, only to be told that "it looks like your mouth is open" despite that not being the case. I myself have suffered similar irritations recently when trying to assist with the renewal of my partner's parents' passports (tongue twister not intended), the system complaining in this case that there wasn't enough light. Questions need to be answered regarding the possible causes of what can only be described as built-in bias within these systems; surely they are designed to make tasks like passport renewals easier? Where in society do algorithms that discriminate belong, and what good is an industry that claims to benefit us all yet fails black and brown people? It's clear that the data used to train platforms like HMPO's passport checker is flawed; in fact, HMPO admitted as much in response to a freedom of information request raised by New Scientist magazine in 2019. HMPO's response read: "User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph, however, the overall performance was judged sufficient to deploy."
Just as with Kodak and their Shirley Card fiasco (post to follow), black faces are not being given the same attention as their white counterparts. And if we dig deeper, we'll no doubt find a lack of black faces within the industry and in positions to influence the decision-making process. One sure way of levelling out the AI experience for all, and avoiding situations like the one Joshua Bada encountered, is to ensure that the training data used to develop AI models is representative of the entire user base. But that's not all: a more inclusive design process is needed, along with robust means of testing the data in realistic and diverse environments. An honest commitment to engaging communities affected by AI biases could ensure that systems are developed and deployed in ways that promote equality and justice. And we must also ensure that rigorous monitoring of AI systems is prioritised; the carnage caused by Microsoft's short-lived Tay chatbot serves as a stark reminder of what can happen when systems are allowed to mix with the wrong company and learn how to hate.
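To make the point about training data a little more concrete, a very basic first step might look something like the sketch below: before training, count how each group is represented in a dataset's metadata and flag anything badly skewed. The file name, column name and threshold here are hypothetical, and real fairness auditing (and testing in realistic, diverse environments) goes far beyond a head count.

```python
# A hypothetical sketch of a pre-training representativeness check.
# The CSV path, column name and threshold are made up for illustration;
# real fairness auditing is considerably more involved than this.
import csv
from collections import Counter

def representation_report(metadata_csv: str, group_column: str,
                          min_share: float = 0.10) -> None:
    """Count how often each group appears and flag under-represented ones."""
    with open(metadata_csv, newline="") as f:
        groups = [row[group_column] for row in csv.DictReader(f)]

    counts = Counter(groups)
    total = sum(counts.values())

    for group, count in counts.most_common():
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{group:20s} {count:6d} ({share:.1%}){flag}")

# Example call (hypothetical file and column):
# representation_report("training_metadata.csv", "skin_tone")
```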
Ultimately, we have little option but to form relationships with this emerging technology; it's here to stay, and in honesty the IT guy in me actually welcomes some aspects of it. But I am also aware that there remains a lot to be done in the name of fairness and accuracy. Platforms like ChatGPT take on human-like characteristics, including the apparent ability to correct themselves (see conversation below), but unfortunately they also seem to have embraced human-like biases. Needless to say, my approach to AI will (for now) remain one of cautious curiosity.
References:
ChatGPT
Chatbot
Considering human imagination the last piece of wilderness, do you think AI will ever be able to write a good song?
Article / Peter Ljubljana / The Red Hand Files
Craiyon
AI image generator
'I was a bit annoyed': Black man's lips flagged by passport checker as open mouth
Article / Zamira Rahim / Independent
Learning from Tay’s introduction
Blog / Peter Lee
Leonardo AI
AI image generator
Race After Technology
Book / Ruha Benjamin
The 12 Days Of AI
Eventbrite
The Politics of Images in Machine Learning Training Sets
Article / Kate Crawford & Trevor Paglen / Excavating AI