February 29th, 2024 • Irvine, CA

Training AlphaPersuade: A Framework for Ethical Persuasion in AI

About the Summit

This is AlphaPersuade, and it will be better at persuasion than any human.
— Aza Raskin

Responding to the call from industry to address AI’s persuasive ability, the AlphaPersuade summit presents a framework for ethical persuasion that can be applied across AI technologies. Built from long-standing principles of democratic rhetoric, the framework offers a solution to what Tristan Harris and Aza Raskin have described as the emergence of “AlphaPersuade,” AI’s unprecedented capacity to change human behavior. A fully integrated rhetorical system of ethical persuasion reduces the need for external regulation and forges more durable ethical outcomes for AI.

The framework presented at the summit will be especially useful for professionals in risk management and for companies seeking to give end-users confidence and comfort. By illuminating how persuasion actually works on the human mind and body, our framework provides organizations with a tool to limit disinformation, reduce the impact of hallucinations, contain and eliminate bias, and empower customers in novel uses of AI.


Pre-Summit Workshop

Our invitation-only Pre-Summit Workshop brings together academic and industry leaders to tackle the mounting force of so-called “AlphaPersuade”—AI’s almost unlimited ability to persuade humans to act in certain ways. An intensive two days will conclude with the presentation of a meaningful framework for the AI industry at our summit.

RESEARCH TEAM FEATURING

Steven Mailloux

Stephanie Dinkins

Galen Buckwalter



Keynote

Sessions

Olaf Kramer 

Center for Rhetorical Science Communication Research on Artificial Intelligence

“Persuasion in Artificial Intelligence”

A swift rhetoric primer and exploration of case studies intended for AI specialists seeking to implement ethical persuasion in their technologies

Tiera Tanksley

UCLA

“Exploring Bias in AI”

How exploring bias and hallucinations provides solutions to some of the most intractable problems in human use of AI

Casey Mock

Center for Humane Technology

“On Deception in AI”

Deceit, the flip side of persuasion, has far-reaching effects on users. How can we intervene and build technology that minimizes its capacity for deceit?