While the dangers of online hate have affected society since the early days of dial-up internet, the past 25 years have demonstrated that waiting until after harms occur to implement safeguards fails to protect users. The emergence of Generative Artificial Intelligence (GAI) lends unprecedented urgency to these concerns, as the technology is outpacing the regulations we have in place to keep the internet safe. GAI is gaining widespread adoption; we must therefore heed the lessons learned from internet governance, or the lack thereof, and implement proactive measures to address its potential adverse effects.
GAI refers to a subset of artificial intelligence systems that can produce new or original content. Using machine learning and neural networks, GAI creates audio, code, images, text, simulations, videos, and other content in ways that resemble human creativity and decision-making. Although GAI holds promise in scientific, medical, artistic, and linguistic domains, it could also intensify the spread and prevalence of online hate, harassment, and extremism.