I must admit that it was particularly satisfying that some of the most repeated words at the event included sustainability, inclusion, respect, security, wellbeing, fairness, and trust. I think it reflects the positive impact technology can have on society, citizens' awareness of that potential (and also of its risks and challenges), and therefore how large organizations are becoming more and more aware of the need to use technology not only for purely commercial interests but also to improve their contribution to society in a way that is both lawful and ethical.
Of the different technologies discussed in the panels, AI is probably the disruptive technology with the greatest potential to redefine human activity in the coming years. AI-driven initiatives are already impacting practically all sectors of business and society. Organizations need to build end-to-end AI strategies that generate both business and social value, and bring the benefits of intelligence and automation, at scale, to the communities they serve.
In the final roundtable of the event, we had the pleasure of having Richard Benjamins, Chief AI & Data Strategist, Telefónica, and Kay Firth-Butterfield, Head of Artificial Intelligence & Executive Committee Member, World Economic Forum, join Rich Karlgaard, Global Futurist & Editor-At-Large, Forbes, to discuss these topics.
While I am not going to reproduce Richard and Kay's interventions, as they are available for you to watch here, I would like to touch upon some of the main points discussed during the roundtable. The first one is: why now? Why is AI getting such attention now that it is being called "the new electricity"? The answer, as Richard pointed out, is the combination of several factors, namely the availability of data, infrastructure, and advanced algorithms (Deep Learning), which have led to real business results and therefore attracted more investment, generating a virtuous circle.
But these advancements and this investment have also brought ethical and social-impact concerns around AI, as Kay mentioned. While AI has proven to help in creating and deploying greener energy, measuring climate change, and fighting health crises like the one created by COVID-19, it has also demonstrated that it can amplify social discrimination and injustice at scale.
Within this context, Kay made a great point by stating that many organizations are struggling to move from PoCs to AI at scale. This does not come as a surprise, as it requires several key aspects to be solved and combined into a solid AI governance, which is something few organizations have achieved, as the poll launched during the event showed us.
This is one of the main challenges we are helping our customers with at everis. Not that long ago, the key challenge for organizations was to design a data strategy, define a data governance framework, and integrate Big Data capabilities across the organization. Now, however, the time to talk about AI governance has come, since extracting business value from data demands sound orchestration across a variety of domains and areas of expertise.
There are some key aspects to consider when developing a solid AI Governance:
- AI Strategy: every organization aiming to lead the competitive race should design and rely on a sound alignment of business and technology strategy, identifying business opportunities and assessing risks.
- Organization: the AI-driven organization requires fostering hybrid capabilities, and expanding the AI culture through AI literacy, as well as defining a myriad of roles and responsibilities.
- AI Lifecycle: transparency, reproducibility, and explainability of the processes are essential to properly manage AI at scale; it is paramount to identify all requirements across the AI stages, from business opportunity to development, deployment, and monitoring.
These three levers should help organizations continuously innovate by enabling fast experimentation, and foster AI-driven initiatives and go-to-market solutions with differentiated value.
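To make the lifecycle lever more concrete, here is a minimal sketch of how an organization might track an AI initiative through those stages so that transitions stay transparent and auditable. All names (the stage list, `ModelRecord`, `advance`) are illustrative assumptions for this sketch, not a real governance framework or product.

```python
from dataclasses import dataclass, field
from datetime import date

# Lifecycle stages as described above: from business opportunity
# through development and deployment to monitoring.
STAGES = ["business_opportunity", "development", "deployment", "monitoring"]

@dataclass
class ModelRecord:
    """Hypothetical registry entry for one AI initiative."""
    name: str
    owner: str
    stage: str = STAGES[0]
    history: list = field(default_factory=list)

    def advance(self) -> str:
        """Move to the next lifecycle stage, logging the transition
        so the process remains transparent and reproducible."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("initiative is already in its final stage")
        self.history.append((self.stage, str(date.today())))
        self.stage = STAGES[i + 1]
        return self.stage

record = ModelRecord(name="churn-predictor", owner="data-science-team")
record.advance()  # moves from business_opportunity to development
```

The point of such a record is not the code itself but the discipline it encodes: every stage change leaves a trace that governance reviews can inspect.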
Critical challenges arise for organizations when designing and implementing successful AI governance. A key challenge is developing AI initiatives compliant with Ethical Principles such as Human Oversight, Transparency, Fairness, Diversity and Non-discrimination, Robustness of the systems, and Privacy.
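Principles like Fairness and Non-discrimination only become enforceable when translated into measurable checks. As one hedged illustration, the sketch below computes a simple demographic-parity gap, the difference in positive-prediction rates between groups. The function names, the sample data, and the 0.1 threshold are assumptions for the example, not a standard or a real library API; real fairness auditing involves several complementary metrics.

```python
# Illustrative check of one fairness notion (demographic parity):
# compare the rate of positive predictions across groups.
def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any
    two groups; 0.0 means perfectly equal rates."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups.
preds = {
    "group_a": [1, 1, 0, 1, 0],  # 60% positive
    "group_b": [1, 0, 0, 0, 0],  # 20% positive
}
gap = demographic_parity_gap(preds)
# A governance process might flag gaps above an agreed threshold:
flagged = gap > 0.1
```

Embedding checks like this into the AI lifecycle is what turns abstract Ethical Principles into auditable requirements.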
In short, a positive impact of AI should create both business value and societal benefit. Organizations need to guarantee the development of responsible AI, complying with regulation and trustworthy-AI principles, so that they contribute to the development of the communities where they operate and deliver a positive impact to individuals and society.
Let me finish by recalling some words by Kay and Richard as a final call to action. When discussing AI regulation, she said that "doubt is stultifying. Doubt is killing innovation, not regulation," to which Richard added, "regulation makes sense if you know what things you want to avoid, and I think we are not yet there with AI." As our CEO Fritz Hoderlein mentioned in his speech, creating a competitive advantage is about speed, determination, and continuous adjustment. AI is no exception, and organizations should be determined to move from a PoC scenario to a solid, scalable implementation while continuously learning and adjusting along the way.