The Mindful AI Manifesto
Nathalie H
About three years ago I watched a TED Talk on AI, and the one thing I can still recall vividly is a statement made by a mature and grounded scientist: “time is running out and we must develop a greater sense of urgency in embedding human values into AI before it is too late!” His concern was that AI was evolving very rapidly, yet nobody was seriously investing in AI ethics, governance or regulation back then. It made a lasting impression on me.
Fast forward a few years, and today AI ethics is becoming more mainstream, discussed on social media and at occasional meet-ups. However, in my humble opinion, we are still far behind in taking the actions needed to minimize the potential risk to humanity’s wellbeing, given the incredible pace at which AI learns and improves. It has been reported several times that programmers do not know or understand why an AI is doing what it is doing (e.g. AlphaGo, from DeepMind). Machines often make decisions within a black box, using highly complex mathematical models beyond their own developers’ comprehension. This is already happening right now.
Don’t get me wrong: I am a big believer in the promise of technology as a force for good. I worked in the tech industry for 15 years, 10 of them in the data analytics space. Like most people, I am a fan of AI and its benefits, including of course Spotify, Google Maps and Amazon! I am aware that AI has enabled amazing initiatives, including noble projects that enhance the lives of children with special needs such as autism.
However, due to a lack of deeper understanding of the technology, I believe leaders (including CIOs and CTOs) in many public and private organizations are oblivious to the risk ungoverned AI could pose to humanity in the not-so-distant future. The recently released documentary “Coded Bias”, from filmmaker Shalini Kantayya, is an excellent wake-up call for those who remain skeptical. The issue is complex, and government agencies can take a long time to define policies and establish AI best practices. On governance, Stuart Russell highlights in his book Human Compatible that reports on AI ethics and governance are “multiplying like rats”, yet there are still no implementable recommendations for handling the problem of control.
According to Professor Stuart Russell (University of California, Berkeley), one of the most respected and renowned AI thought leaders and researchers of our times, AI governance is now a hot topic; yet although the World Economic Forum has identified over 300 separate efforts (boards, summits, etc.) to address AI ethics, we are still far from translating intentions, standards and guidelines into actual functional applications. In his book Human Compatible, published in 2019, he proposes a new basis for human-machine behavior and interaction, suggesting that by creating beneficial AI systems we might be able to eliminate the risk of losing control over superintelligent machines. He shares a well-balanced view of the future: an optimistic scenario, where AI allows humanity to enter a golden age, and a less optimistic alternative, where we lose control, knowledge and skills to intelligent machines. Needless to say how detrimental that would be to humanity’s autonomy in the medium to long term.
THE TECHNICAL SOLUTION
Russell proposes the idea of beneficial machines, designed to achieve our (human) objectives, which remain external to the machines rather than embedded in them. Such an approach would force machines to defer critical decisions to humans: they would ask for permission, act cautiously when guidance is unclear and, very importantly, allow themselves to be switched off. A key concern he flags is that while the extensive collaboration and research in academia takes the best interests of humanity into account, a lot of research is now done by corporations that do not necessarily share those interests, given their desire to take control of AI systems as they become more powerful. The leading players in 2019 were Google (DeepMind), Facebook, Amazon, Microsoft, IBM, Tencent and Alibaba. As might be expected, the primary focus of those organisations is to profit from the rapid implementation of AI.
The intention of this article is to be a call to action. My wish is to ignite conversations about the role and responsibilities of AI professionals in this digital age. My humble view is that you (since I have retired!), data scientists, machine learning experts and data analytics gurus, who are on the front line and have a real understanding of the issue, have a duty: first, to better educate yourselves on it, and second, to raise awareness and flag to your leaders and decision-makers, who have a limited understanding of the technical intricacies, the potentially dramatic consequences of AI being developed without a proper governance framework. My hope is that you will start a conversation with your peers and that this will naturally evolve into a grassroots movement. I also hope AI consumers (everyone!) will wake up, acknowledge their own responsibility and power in this scenario, and be inspired to learn more about AI’s implications for society and vote with their money.
I invite all conscious and mindful humans working in the AI field to reflect on the following questions and start a dialog within your community:
- What foundational work could we do within our circle of authority?
- How could we influence decision-makers and minimise the risks (e.g. bias) of the algorithms we write or manage?
- If we were to write a “Mindful AI Manifesto”, what would it look like?
Imagine if we could design routines and algorithms to embed the behaviors and mindset of core universal human values in every piece of AI ever built. Utopia? Perhaps.
Yes, cultural differences are a barrier to establishing AI standards across countries; however, I trust there are core values universal to humans regardless of cultural background.
A FINAL QUOTE FROM SUNDAR PICHAI, GOOGLE CEO
Not that I agree Google (or any other tech industry player) should lead AI regulation.
“Artificial intelligence needs to be regulated. History is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread. These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.”
Thanks for reading. I look forward to hearing your thoughts, especially if you disagree with the perspective shared above. I am sincerely open and keen to explore this topic, so do reach out if you are willing to have a constructive and healthy discussion.🙏
RESOURCES FOR AI EXPERTS & CONSUMERS:
Stuart Russell – Professor at UC Berkeley:
- 3 Principles for Creating Safer AI (2017) – https://www.youtube.com/watch?v=EBK-a94IFHY
- The future of AI and Human Race (2015) – https://www.youtube.com/watch?v=tjrZQIcU4aQ
- Podcast | Foundations, Benefits, and Possible Existential Threat of AI (2020) – https://soundcloud.com/futureoflife/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-threat-of-ai#t=0:00
Sam Harris – Neuroscientist and philosopher:
- Can we build AI without losing control over it? – https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it
Elon Musk – Tesla CEO:
- Elon Musk on AI – https://www.youtube.com/watch?v=H15uuDMqDK0
- It’s already too late – https://www.youtube.com/watch?v=z3EQqjn-ELs
Stephen Hawking – Physicist, Cosmologist & author of A Brief History of Time:
- AI could spell the end of the human race – https://www.youtube.com/watch?v=fFLVyWBDTfo (5min)
Scientific Papers | AI Ethics & Governance
- The Moral Machine Experiment – https://search.proquest.com/docview/2136863808?accountid=12528&rfr_id=info%3Axri%2Fsid%3Aprimo
- The Global Landscape of AI Ethics Guidelines – https://www.nature.com/articles/s42256-019-0088-2
- Don’t let Industry write rules for AI – https://search.proquest.com/docview/2227406141?accountid=12528&rfr_id=info%3Axri%2Fsid%3Aprimo
- Life 3.0 by Max Tegmark – https://www.amazon.com.au/Life-3-0-Being-Artificial-Intelligence/dp/1101946598
- Association for the Advancement of Artificial Intelligence (AAAI) – https://www.aaai.org/
- Algorithmic Justice League – https://www.ajl.org/
- Institute of Electrical and Electronics Engineers (IEEE) – https://www.ieee.org/
- EU High Level Expert Group on AI – https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
- AI for Good UK – https://www.aiforgood.co.uk/
- AI for Good Global Summit – https://aiforgood.itu.int/
- TED Talk | How to keep human bias out of AI – https://www.youtube.com/watch?time_continue=30&v=BRRNeBKwvNM&feature=emb_logo