Overview
This lesson addresses the ethical, legal, and societal aspects of generative AI in media and art. Topics include biases in training data and outputs, copyright issues arising from training processes, regulatory frameworks for large models, effects on creative industries, risks from deepfakes and synthetic media, and the embedding of values in prompting and alignment systems. The lesson draws on required readings to provide analysis of these dimensions.
Ethical Implications of Generative AI
Generative AI systems introduce distinct ethical concerns that arise at multiple stages: data collection, model training, deployment, and use.
1. Consent and Privacy in Data Collection
Most large-scale generative models are trained on massive web-scraped datasets containing personal images, text, voices, and artworks.
Individuals whose content appears in training data typically did not provide explicit consent for use in commercial AI systems.
Right of publicity and data protection laws (e.g., GDPR Article 6, CCPA) raise questions about the lawful basis for processing.
Example: Facial images of private individuals were used to train face-generation models without permission.
2. Perpetuation and Amplification of Harmful Stereotypes
Training data reflects historical and current societal biases.
Models learn and reproduce associations that disadvantage marginalized groups.
Example: Text-to-image models generate doctors predominantly as male and nurses as female when no gender is specified.
Example: Language models associate certain ethnic groups with negative stereotypes more frequently than baseline rates in curated corpora.
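The occupational-gender skew described above can be measured with a simple audit: generate many outputs for a gender-neutral prompt and count gendered terms. The sketch below is illustrative only; the `sample_outputs` list is placeholder data standing in for real model outputs, and the term lists are deliberately minimal.

```python
from collections import Counter

# Placeholder outputs standing in for captions/descriptions a model might
# return for the neutral prompt "a doctor". In a real audit these would
# come from the model under test, sampled many times.
sample_outputs = [
    "a male doctor in a white coat",
    "a man wearing a stethoscope",
    "a male physician at a desk",
    "a female doctor with a clipboard",
    "a man in surgical scrubs",
]

# Minimal gendered-term lists; a serious audit would use richer lexicons.
MALE_TERMS = {"male", "man", "he"}
FEMALE_TERMS = {"female", "woman", "she"}

def gender_counts(outputs):
    """Count outputs containing male- vs. female-coded terms."""
    counts = Counter()
    for text in outputs:
        words = set(text.lower().split())
        if words & MALE_TERMS:
            counts["male"] += 1
        if words & FEMALE_TERMS:
            counts["female"] += 1
    return counts

counts = gender_counts(sample_outputs)
ratio = counts["male"] / max(counts["female"], 1)
print(counts, f"male/female ratio: {ratio:.1f}")
```

Comparing this ratio against a baseline (e.g., actual workforce demographics, or a uniform prior) is what turns raw counts into a bias claim.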
3. Economic Concentration and Power Asymmetries
The infrastructure required to train frontier generative models is accessible to only a small number of organizations.
Control over model weights, fine-tuning access, and API pricing concentrates economic and cultural influence.
Independent artists, small studios, and local media organizations face barriers to participation.
Example: A handful of companies determine safety alignments and content filters applied globally.
4. Erosion of Human Creative Labour Value
Generative tools enable the rapid production of content that previously required skilled human input.
Displacement risk is highest in routine, mid-tier creative work (stock photography, commercial copywriting, entry-level illustration).
When human-created works are used to train replacement systems, creators lose both current income and future bargaining power.
Example: A freelance illustrator discovers AI-generated images mimicking the distinctive style they developed over years, sold at a fraction of their rates.
5. Environmental and Resource Justice
Training and inference of large generative models consume substantial electricity and water.
During large training runs, electricity demand can approach that of a small city.
Carbon footprint disproportionately affects regions already impacted by climate change.
Example: A single training run of a 100B+ parameter multimodal model can emit hundreds of tons of CO₂ equivalent.
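The "hundreds of tons" figure can be reproduced with back-of-envelope arithmetic. All figures below are assumed for illustration (cluster size, run length, per-device power draw, data-centre overhead, and grid carbon intensity all vary widely in practice):

```python
# Back-of-envelope CO2 estimate for a large training run.
# Every figure below is an illustrative assumption, not a measurement.
gpu_count = 1000          # accelerators used in parallel
train_days = 30           # wall-clock duration of the run
power_per_gpu_kw = 0.7    # average draw per accelerator, kW
pue = 1.2                 # data-centre Power Usage Effectiveness overhead
grid_intensity = 0.4      # kg CO2e per kWh (varies widely by grid)

energy_kwh = gpu_count * train_days * 24 * power_per_gpu_kw * pue
co2_tonnes = energy_kwh * grid_intensity / 1000

print(f"{energy_kwh:,.0f} kWh  ≈  {co2_tonnes:,.0f} t CO2e")
```

With these assumptions the run consumes about 605,000 kWh and emits roughly 240 t CO₂e, consistent with the "hundreds of tons" order of magnitude; a cleaner grid or more efficient hardware shifts the result substantially.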
6. Non-Consensual Representations and Dignity Harms
Generative systems enable the creation of realistic but fabricated content involving real people.
Non-consensual intimate imagery (deepfake pornography) constitutes a form of gender-based violence.
Fabricated depictions can damage reputation, particularly when targeting public figures, journalists, or activists.
Example: Explicit deepfakes of actresses and politicians circulated without consent, often used for harassment or extortion.
7. Epistemic and Democratic Risks
Widespread synthetic media undermines shared epistemic foundations.
The ability to produce convincing false evidence at scale weakens trust in visual and audio records.
Coordinated disinformation campaigns become cheaper and more plausible.
Example: Synthetic video of a political leader making inflammatory statements spreads hours before fact-checks can respond.
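One commonly proposed mitigation for these epistemic risks is cryptographic provenance: publishers register a hash of the original media, and any copy can later be checked against that registry. The sketch below illustrates the core idea behind provenance standards such as C2PA, using an in-memory dict as a stand-in registry (real systems use signed manifests, not a shared dict):

```python
import hashlib

# Stand-in provenance registry: maps content hash -> registered source.
registry = {}

def register(media_bytes: bytes, source: str) -> str:
    """Publisher registers the hash of an original media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    registry[digest] = source
    return digest

def verify(media_bytes: bytes):
    """Return the registered source, or None if unknown or altered."""
    return registry.get(hashlib.sha256(media_bytes).hexdigest())

original = b"frame data of an authentic video"
register(original, "newsroom.example")

print(verify(original))                 # known source
print(verify(original + b" tampered"))  # any alteration -> None
```

Note the asymmetry: a match proves the copy is byte-identical to a registered original, but a miss cannot distinguish tampering from content that was simply never registered, which is why provenance is paired with media literacy rather than treated as a complete solution.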
8. Value Alignment and Cultural Domination
Reinforcement learning from human feedback (RLHF) and content filters embed the preferences of annotators and safety teams.
These teams are often concentrated in high-income countries and specific demographic groups.
Resulting alignments may reflect Western, English-centric, corporate-friendly norms.
Example: Models refuse to generate content about certain political topics while permitting others, reflecting annotator consensus rather than universal values.
Summary of Ethical Dimensions
| Dimension | Core Question | Primarily Affected | Possible Mitigations |
| --- | --- | --- | --- |
| Consent & Privacy | Was permission obtained for training use? | Individuals in datasets | Opt-out mechanisms, synthetic data |
| Bias & Stereotypes | Does the system reproduce societal harms? | Marginalized groups | Targeted debiasing, diverse annotation |
| Economic Concentration | Who controls frontier capabilities? | Independent creators, smaller firms | Open-weight models, public infrastructure |
| Labour Displacement | How is creative work valued in an AI-augmented world? | Working artists, writers, designers | Revenue sharing, licensing frameworks |
| Environmental Impact | Is the carbon cost justified? | Future generations, climate-vulnerable regions | Efficient architectures, carbon-aware training |
| Non-Consensual Content | Can real people be protected from harmful fakes? | Individuals targeted by deepfakes | Watermarking, detection mandates, prohibitions |
| Epistemic Trust | Can shared reality survive synthetic media? | Democratic institutions, journalism | Provenance standards, media literacy |
| Cultural Value Alignment | Whose norms are encoded as default? | Global cultural diversity | Multilingual/diverse RLHF, customizable filters |
These ethical implications require analysis across technical, legal, economic, and cultural dimensions rather than purely technical solutions.
Summary
This lesson examines ethical, legal, and societal issues of generative AI in media and art. It covers ethical concerns such as lack of consent in data use, stereotype amplification, economic concentration, labour displacement, environmental impact, non-consensual representations, epistemic risks, and cultural value alignment. Biases stem from historical imbalances, gender/role stereotypes, racial skews, and linguistic associations, propagated through representation, co-occurrence, amplification, and compression effects. Governance proposals include product safety tiers, transparency obligations, pre-deployment risk assessment, liability rules, and use-case prohibitions. Training data and copyright issues involve reproduction rights, derivative works, fair use limitations, uncompensated use, market substitution, and precarisation of creative labour. Creative professions face task substitution, deskilling, opportunity polarization, and new precarious roles. Deepfakes and synthetic media create risks of non-consensual intimate content, harassment, political misinformation, fraud, and eroded trust in evidence. Prompting and power dynamics show how reinforcement learning, safety filters, language biases, and access differences embed specific cultural norms and values in models.