
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

Type: Paper
Year: 2021
Tags: ethics

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). https://doi.org/10.1145/3442188.3445922

Overview - What makes it notable?

A paper summarizing the risks, such as embedded biases, that become more pronounced as language models grow ever larger.

The publication of this paper led to one of its co-authors, Timnit Gebru, being fired from Google, where she worked at the time, sparking a major debate about the relationship between AI, ethics, and corporations.

Abstract

The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.

Motivation

Architecture

Results

Further Thoughts

  • First time I have seen an emoji used in a paper title

Links