Regulation Priorities for Artificial Intelligence Foundation Models
Matthew R. Gaske | 26 Vand. J. Ent. & Tech. L. 1 (2023)
This Article responds to the call in technology law literature for high-level frameworks to guide regulation of the development and use of artificial intelligence (AI) technologies. It adapts a generalized form of the fintech Innovation Trilemma to argue that a regulatory scheme can prioritize only two of three aims in AI oversight: (1) promoting innovation, (2) mitigating systemic risk, and (3) providing clear regulatory requirements. Specifically, this Article expressly connects legal scholarship to research in other fields on foundation model AI systems and explores the implications of such systems for regulation priorities in both geopolitical and commercial competitive contexts. Foundation models are so named because, unlike prior AI technologies, a single model can readily be applied across a broad variety of use cases. Such systems, including OpenAI’s ChatGPT and Alphabet’s LaMDA, have recently rocketed to popularity and have the potential to fundamentally change many areas of life. Yet legal scholarship examining AI has insufficiently recognized the role of international and corporate competition in such a transformational field. Considering that competitive context alongside the Trilemma, this Article argues, from a descriptive perspective, that only one policy prioritization choice remains: whether to emphasize systemic risk mitigation or clear requirements, because prioritizing innovation is effectively a given for many governmental and private actors. It then argues that regulation should prioritize systemic risk mitigation over clarity because foundation models substantively change both the potential for, and the nature of, systemic disruption. Finally, the Article considers ways to mitigate the resulting lack of legal clarity. In light of the Trilemma’s application, it argues for a sliding scale of harm-based liability for AI providers in cases where reasonably implementable, known technological advances could have prevented the injury. This tradeoff thus promotes innovation while mitigating the systemic risk posed by foundation models.