Abstract
The paper examines today’s debate on the legal governance of AI. Scholars have recommended models of monitored self-regulation, new internal accountability structures for the industry, independent monitoring and transparency efforts, and new forms of co-regulation, such as the model of data governance set up by EU legislators with the 2016 General Data Protection Regulation (GDPR). As current regulations on self-driving cars, drones, e-health and the like show, however, most legal systems already govern the field of AI in a context-dependent way. The aim of this paper is to stress that such context-dependency does not preclude an all-embracing structure of legal regulation. The adaptability, modularity and flexibility of the regulatory system suggest a middle ground between traditional top-down approaches and bottom-up solutions, between legislators and stakeholders. By fleshing out the legal constraints that bear on every model of AI governance, the context-dependency of the law clarifies some of the features that such models should ultimately incorporate.