I write as a founding co-director of the Stanford Institute for Human-Centered AI (HAI) and co-founder and CEO of World Labs. Artificial intelligence is making exciting, rapid strides: processes that once took days now finish in minutes. And while training remains costly, these systems will become more efficient as developers refine their approaches. AI is not just the future; it is the present.
If you work in this space, none of this should catch you off guard. It is the fruit of computer scientists’ relentless efforts and of businesses that have pushed the boundaries of innovation for years. What does raise eyebrows, however, is the lack of a comprehensive framework for governing AI. With the technology advancing so swiftly, it is essential that it serves all of humanity.
As someone immersed in both technology and education, I firmly believe that everyone in the global AI community has a duty to push the technological envelope while maintaining a human-centric perspective. This is no small feat; it calls for structured guidelines to steer us in the right direction. Ahead of the forthcoming AI Action Summit in Paris, I have outlined three key principles to guide AI policymaking.
The first principle is to lean on science, not science fiction. Science thrives on empirical data and meticulous research — a method just as crucial in managing AI governance. While futuristic visions of utopias or catastrophes enthrall us, policymaking must pivot on a realistic understanding of current capabilities.
We have seen tremendous achievements in image recognition and natural language processing. Chatbots and software assistants are transforming how we work through advanced data analytics and pattern recognition. But these tools are not sentient beings with intentions or consciousness. Grasping this distinction helps us steer clear of the distraction posed by far-fetched scenarios and focus on more pressing challenges.
Navigating AI’s complexity means understanding the bridge from scientific advances to practical applications. We need informed, real-time insights into what AI can truly accomplish. Esteemed organizations like the US National Institute of Standards and Technology can illuminate the real-world impacts of AI, laying the foundation for policies that are grounded in the actual technical landscape.
Next, let’s be pragmatic, not ideological. Despite AI’s swift evolution, it remains a young field, its most significant contributions yet to come. So, any rules on what can or cannot be developed need to be pragmatic, minimizing unintended fallout while spurring invention.
Consider AI’s role in diagnosing illness more accurately. It could rapidly democratise access to high-quality healthcare. But it also risks entrenching existing biases in healthcare systems if not properly guided.
Building AI systems is a formidable task, and even a well-intentioned model can be misused. Policy should therefore focus on minimising such risks while encouraging responsible innovation. Thoughtful liability rules are essential: they should discourage deliberate misuse without unduly punishing good-faith work.
Lastly, it’s important to empower the AI ecosystem. AI is inspirational — it holds the potential to transform education, healthcare for the elderly, and clean energy solutions. These breakthroughs thrive on collaboration, underscoring the need for policies that support the whole AI community, including open-source groups and academia.
Access to AI models and computational tools is vital for continued progress. Curtailing that access stifles innovation, especially for academic researchers, who typically have fewer resources than their counterparts in the private sector. The consequences extend beyond academia: if today’s computer science students cannot study the most advanced models, they will struggle to understand these systems when they enter industry or start their own ventures — a concerning gap indeed.
The era of AI is not approaching; it’s already here, and I find that thrilling. There’s a profound opportunity to enhance human life in an AI-enabled world. However, achieving this aspiration demands governance that’s evidence-based, collaborative, and firmly anchored in human-centered ethics.