
Philosophy of AI & Emerging Technologies Working Group

This working group consists of weekly discussions aimed at developing our understanding of the foundations of contemporary AI and emerging technologies both for its own sake and to interrogate the legitimating ideas most closely associated with contemporary technological culture. We will also discuss chapter and paper drafts of participants, as we plan for this to be a vehicle for collaborative projects. Topics include, but are not limited to:

  • Computational theory of mind
  • Revolution in military affairs
  • Algorithmic thinking and Bayesianism
  • Gebru and Torres' "TESCREAL": Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism
  • "Bay Area Rationalism," and the widespread use of science-fiction and trolley-problem-style gedankenexperiments
  • Accelerationism/Libertarianism/Neo-Feudalism
  • AI architectures and facets of human cognition

Meetings are held each Thursday at 12pm in 208 Coates Hall.

Each week, we will start our meeting with a brief introduction about the day's reading followed by an open discussion facilitated by rotating members. Occasionally, we will invite speakers to address the group.

The group is open to faculty, staff, administrators, graduate students, and (by invitation) undergraduates.

To receive weekly readings and announcements, email Carrie Powell at cpowell3@lsu.edu.

Hannah Fry, Hello World: Being Human in the Age of Algorithms (2018)

Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (2019)

Shannon Vallor, The AI Mirror (2024)

David J. Gunkel and Mark Coeckelbergh, Communicative AI: A Critical Introduction to Large Language Models (2025)

Alan Turing, "Computing Machinery and Intelligence" (1950)

Samuel Butler, Erewhon, "The Book of Machines" (1872)


In Spring 2026, we will begin by reading Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (2023).

Dr. Michael Ardoline

Assistant Professor of Philosophy

michaelardoline@lsu.edu 


Dr. Jon Cogburn

Professor of Philosophy and Chair of the Department of Philosophy and Religious Studies

jcogbu1@lsu.edu


Dr. Lauren Horn Griffin

Assistant Professor of Religious Studies

lhgriffin@lsu.edu 

  • Hubert Dreyfus - What Computers Still Can't Do: A Critique of Artificial Reason (1992)
  • Matthew Stewart - The Management Myth: Debunking Modern Business Philosophy (2010)
  • Scott Aaronson - "Why Philosophers Should Care About Computational Complexity" (2011)
  • John Ralston Saul - Voltaire's Bastards: The Dictatorship of Reason in the West (2013)
  • Andy Clark - Mindware: An Introduction to Cognitive Science (2013)
  • OpenAI - "Concrete Problems in AI Safety" (2016)
  • Vaswani et al. - "Attention Is All You Need" (2017)
  • OpenAI - "AI and Compute" (2018)
  • Hannah Fry - Hello World: Being Human in the Age of Algorithms (2018)
  • Melanie Mitchell - Artificial Intelligence: A Guide for Thinking Humans (2019)
  • Janelle Shane - You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place (2019)
  • Rich Sutton - "The Bitter Lesson" (2019)
  • Brown et al. - "Language Models are Few-Shot Learners" (GPT-3 paper) (2020)
  • Kate Crawford - Atlas of AI (2021)
  • Timnit Gebru et al. - "On the Dangers of Stochastic Parrots" (2021)
  • Brian Christian - The Alignment Problem: Machine Learning and Human Values (2021)
  • Dan McQuillan - Resisting AI (2022)
  • Luciano Floridi - The Ethics of Artificial Intelligence (2023)
  • David Chalmers - "Could a Large Language Model Be Conscious?" (2023)
  • Andy Clark - The Experience Machine (2023)
  • Timnit Gebru, Émile P. Torres et al. - AI Con: The Great AI Swindle (2024)
  • Michael Townsen Hicks, James Humphries, & Joe Slater - "ChatGPT is Bullshit" (2024)
  • Jakob Stenseke - "On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines" (2024)
  • Quinn Slobodian - Hayek's Bastards (2025)
  • Stanford HAI - AI Index Report (updated yearly)
  • Air Street Capital - State of AI Report (yearly)

Philosophy

  • Can AI be conscious or sentient? Can machines experience subjective consciousness or emotions?
  • To what extent is thinking computation? What does the development of machines that "think" reveal about human and animal thinking? Can non-algorithmic behavior emerge in systems whose parts behave in ways that can be described algorithmically? Is this what is going on with humans? With AI systems of sufficient complexity?
  • What does it mean to be human in the age of AI? When machines match or surpass human abilities, what sets humans apart? Are AI systems likely to surpass the human ability (such as it is) to produce non-slop? Or are there philosophical and computational reasons to think that this will not happen?
  • At what point should a machine be granted moral or legal status? Should advanced AI be granted rights, responsibilities, or personhood? If we are not there yet, why not?
  • Who is responsible for AI actions? When AI causes harm, who is accountable: the developer, the user, the company, or the AI itself?
  • Can AI make ethical decisions? Can we embed moral reasoning into AI, and whose morality should it follow?
  • How does AI affect human autonomy and agency? What is human autonomy, after all? Are humans losing control over their decisions as AI systems increasingly shape choices?
  • What are the long-term risks of superintelligence? Could a future AI become vastly more intelligent than humans and act in ways that are harmful? Is this plausible? If not, what is the political/economic purpose of worrying about it? What do these questions teach us about human and animal intelligence?

Psychology

  • Do interactions with AI tools influence attention, decision-making, or cognitive biases?
  • What is the psychological impact of anthropomorphic design in AI (e.g., voice assistants, chatbots)?
  • How does AI-assisted cognition (e.g., autocomplete, summarization) reshape human problem-solving or creativity?
  • What social or emotional roles are people willing to assign to AI companions or assistants?
  • Does prolonged exposure to AI-generated content (e.g., recommendations, deepfakes) affect belief formation, polarization, or group identity?
  • How does AI shape interpersonal dynamics, such as conflict resolution, persuasion, or empathy in mediated communication? How do children understand and relate to AI entities compared to adults?
  • What developmental impacts might AI-based toys, tutors, or caregivers have on attention, language, or empathy?
  • How effective are AI tools (e.g., chatbots, digital therapists) in mental health screening or intervention?
  • What are the psychological risks of replacing human care with AI in therapy, counseling, or crisis situations?
  • Can AI help detect early signs of psychological disorders through behavioral or linguistic analysis? How do people react to algorithmic decision-making in high-stakes domains (e.g., hiring, healthcare, criminal justice)?

Computability Theory

  • P vs NP Problem - Whether every problem whose solution can be verified quickly (in polynomial time) can also be solved quickly. If P ≠ NP (as most believe), heuristics and approximations are not just pragmatic but necessary.
  • Exponential Growth of Search Trees - The combinatorial explosion in the number of possible states or actions in AI planning, game playing, and decision-making tasks. In pre-LLM AI, this motivated techniques like Monte Carlo Tree Search, pruning, approximate inference, and deep learning as function approximation. Often a less stupid algorithm does not suffer the same explosion (such as proof systems versus truth tables in propositional logic). Is something similar going on with the explosion of data center needs for running and training LLMs?
  • The Halting Problem - There is no general algorithm that can decide whether an arbitrary program halts on a given input. AI alignment and interpretability research must grapple with the fact that some behaviors (including formal verification and safety checking) or failure modes may be fundamentally unpredictable.
  • No Algorithm for Determining Consistency of First-Order Theories - By Gödel's incompleteness theorems and Church's work, there's no algorithm that can determine whether arbitrary first-order (or stronger) logical systems are consistent. This poses a fundamental problem with respect to AI hallucinations. AI systems that attempt formal reasoning (e.g., symbolic AI, theorem provers) can generate contradictions or meaningless statements that they cannot detect as such. This impacts AI reasoning in formal domains, such as law, mathematics, and even AI safety, where self-reference may arise.
  • Undecidability and Semi-Decidability in Logic-Based AI - Problems like logical entailment, satisfiability in certain theories, or general program verification are undecidable or only semi-decidable. Trade-offs between expressivity and tractability are central in knowledge-based AI. In AI alignment, undecidability limits our ability to fully specify or prove the safety of general-purpose agents.
  • Kolmogorov Complexity and the Limits of Compression - The shortest program that can describe a string is incomputable; there's no general algorithm to determine the minimal description length. This impacts theories of intelligence based on compression or minimal description length (e.g., Solomonoff induction, universal AI), suggests limits to prediction, explanation, and generalization (key goals in AI and machine learning), and limits our ability to evaluate when an AI's learned model is "simple" or "optimal" in any universal sense.
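The contrast above between exhaustive truth tables and smarter proof procedures can be made concrete. The following is a minimal illustrative sketch (not drawn from the group's readings): a brute-force satisfiability check that enumerates all 2^n truth assignments, versus a bare-bones DPLL procedure whose unit propagation often prunes most of the search. The DIMACS-style clause encoding and the work counters are assumptions chosen for the demo.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of signed ints,
# e.g. [1, -2] means (x1 OR NOT x2), following the DIMACS convention.

def brute_force_sat(cnf, n):
    """Enumerate all 2^n assignments -- the combinatorial explosion."""
    tried = 0
    for bits in product([False, True], repeat=n):
        tried += 1
        # An assignment satisfies the CNF if every clause has a true literal.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf):
            return True, tried
    return False, tried

def dpll(cnf, branches=0):
    """Basic DPLL: unit propagation plus branching on one literal."""
    cnf = [c[:] for c in cnf]
    while True:  # unit propagation: forced assignments cost no search
        units = [c[0] for c in cnf if len(c) == 1]
        if not units:
            break
        u = units[0]
        new = []
        for c in cnf:
            if u in c:
                continue                      # clause already satisfied
            c = [l for l in c if l != -u]     # drop the falsified literal
            if not c:
                return False, branches        # empty clause: contradiction
            new.append(c)
        cnf = new
    if not cnf:
        return True, branches                 # all clauses satisfied
    v = cnf[0][0]
    for lit in (v, -v):                       # branch: try v true, then false
        sat, branches = dpll(cnf + [[lit]], branches + 1)
        if sat:
            return True, branches
    return False, branches
```

For an unsatisfiable formula over n variables, the brute-force check always inspects all 2^n rows, while unit propagation lets DPLL cut off whole subtrees after a single branch, which is the sense in which "a less stupid algorithm does not suffer the same explosion."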

Public Policy

  • Hype/Bubbles - To what extent do economic and political factors driving investment in emerging technologies produce widespread false beliefs and unreasonable hopes and fears about those technologies?
  • Algorithmic Culture - What are the social effects of offloading tasks to algorithms, and of conceiving of thinking and expertise in terms of procedures that can be implemented in an algorithm?
  • Algorithmic Bias and Fairness - How do AI systems perpetuate or exacerbate existing social inequalities? What frameworks (technical and legal) exist to measure or mitigate bias? Who is accountable for harm when it results from automated decisions?
  • AI Governance and Regulation - What are the appropriate roles of national governments vs international bodies? How should emerging regulations (e.g. EU AI Act, U.S. Executive Orders) be evaluated? What policy tools (mandates, standards, audits) are most effective?
  • Surveillance, Privacy, and Civil Liberties - How should AI-enabled surveillance (e.g. facial recognition, predictive policing) be regulated? What rights do individuals have against automated monitoring? What constitutes meaningful informed consent in data-driven AI?
  • Labor and the Future of Work - What are the likely impacts of AI on job displacement, deskilling, and wage inequality? What policies can support equitable adaptation (e.g., UBI, reskilling, labor protections)? Should there be limits on automation in certain sectors?
  • AI in Critical Sectors (Healthcare, Finance, Justice) - How can we ensure safe, equitable deployment of AI in sensitive domains? What standards of accuracy, explainability, or auditability should be enforced? How can public trust be maintained in automated decision-making?
  • Misinformation and Democratic Integrity - How does AI (e.g., generative models, deepfakes) threaten truth, elections, and public discourse? What are the limits of content moderation and speech regulation? Should synthetic media be labeled or restricted?
  • Accountability, Transparency, and Explainability - How do we ensure AI systems are understandable and contestable to those affected? What legal and ethical frameworks support a "right to explanation"? What are the challenges of governing black-box systems?
  • Global Power and AI Geopolitics - How does AI influence global power dynamics (e.g. U.S.-China rivalry)? What are the risks of militarized AI or arms races? Can international norms or treaties for AI safety be achieved?