RhetAI and Center for Humane Technology

Annual Research Challenges

Our Research

Each year, our coalition convenes and organizes international working groups whose sole focus is to respond to a set of challenges proposed to us by the Center for Humane Technology. The Research Challenges focus on how AI persuades and shapes human behavior, and they respond to specific, urgent contingencies or events emerging from AI technology and its persuasive capabilities. At the end of each one-year challenge period, the School of Communication and Journalism at Stony Brook University, in cooperation with members of the coalition, will release an annual report on our findings, including recommendations for the AI industry. Those reports and supporting documents will be freely available.

Members of our 2025 Research Challenge Working Groups come from more than a dozen countries and more than thirty organizations, representing a wide range of industries and university disciplines, including national defense, energy, higher education, entertainment, and neuroscience. We maintain an ongoing commitment to broad participation, believing that widely diverse backgrounds and experiences provide the most compelling pathway to groundbreaking results. We welcome requests to join a Research Challenge Working Group. Contact Roger Thompson for more information.

Challenge Question 1

How should we balance Freedom of Speech (viz., the right, as claimed by tech companies, to be free of government regulation of content moderation or product design) with Freedom of Thought in the age of social media, persuasive AI, and neurotech?

  • Identify examples or case studies that might provide a model or pathway for balancing technological capabilities for persuasion with freedom of speech.

  • What technologies may have a type of persuasive force that could limit or constrain freedom of thought? What models, either existing or proposed, might provide for more ethically persuasive technology?

Challenge Question 2

What new forms of deception and persuasion are possible using new technologies? What new terms might need to be developed to begin to discuss these kinds of vulnerabilities (e.g., “astroturfing” had to be coined to describe a fake “grass-roots” movement organized online)?

Challenge Question 3

How do emerging technologies alter the ethics of persuasion and deception

  • in politics?

  • in advertising?

  • in human health and relationships?

  • What intervention(s) may provide a remedy for unethical persuasion?

Challenge Question 4

Where do the existing legal and social protections against deception (e.g., fraud, libel, slander, perjury, false advertising, forgery) fail to offer substantive protection from emerging forms of tech- and AI-mediated deception?

  • Where do existing protections need to be expanded or reworked?

  • What are the risks of changing them?

  • What are the key trade-offs that need to be grappled with?

Challenge Question 5

How do human relationships with emerging technologies affect individual mental health and the way we relate to each other, and what consequences does this have for policy and the ethics of design?

  • How might an ethics of design center AI’s persuasive force in order to ensure that emerging technologies are safe for individual mental health and human relationships?

  • How might designing products to promote individual mental health improve societal outcomes?

Challenge Question 6

How are emerging technologies and concentrations of power impacting institutions and the political order?

  • What rhetorical traits of emerging technologies like AI lead to concentrations of power? How do their persuasive capabilities facilitate these concentrations of power and impact political institutions?

Challenge Question 7

How can technology foster trust and collaboration rather than polarization and violence?

  • What persuasive strategies or tactics can be built into AI to foster trust?

  • What models, either existing or proposed, might foster collaboration while limiting polarization?

 

Research Priorities

AI, ML, and Persuasion

Persuasion and Human Behavior

NeuroAI and Neuro Rhetorics

Rhetoric and Rhetorical Theory

Technology and Human Behavior

Persuasion and National Defense

AI, ML, and Marketing

Persuasion and Entertainment

Persuasion and Cognition