Why AI governance matters and how to put it into practice
Exploring why AI governance is so important, and how we can begin to put it into action in the next phase of AI development.
Most would agree that some kind of AI governance is necessary. What many can't agree on is what that governance will look like, how it will be implemented, and who will oversee it.
One of the biggest challenges facing us at the moment is how to transform a fairly nebulous concept like governance into something more tangible. And as the technology is developing rapidly, this process needs to happen rapidly too.
In this article, we're going to look at why AI governance is so important and how we can begin to put this into action in the next phase of AI development.
Whenever we talk about AI, there's always a trade-off involved. On one hand, it's exciting to explore the innovative possibilities of artificially intelligent technology. On the other, we can't do this without recognising the real challenges and, to be frank, dangers involved.
This does not mean being alarmist. In December 2024, computer scientist Geoffrey Hinton shocked the world when he said there was a "10% to 20%" chance that AI would wipe out humanity in the next three decades. This kind of talk runs the risk of derailing the debate around governance, sending everyone into a panic.
Instead, we need to confront the real impacts AI is already having on our world and on society. We have to work to make sure these impacts are positive.
For instance, there's no point in pretending that AI cannot be harmful. It can be, and it will be if it is not governed. Left to its own devices, AI will create jobs, but only four or five of them, for the team that maintains the system, while destroying the jobs of the hundreds or even thousands of other people it replaces.
Similarly, AI will create wealth. But if it is not governed, that wealth will be concentrated in the hands of the few and will not benefit the wider populace.
If AI is going to be a force for good in the world, which it certainly can be, it needs to be controlled. And this is not going to happen by itself.
One of the age-old arguments against control is that it restricts innovation. This argument suggests that media censorship, for example, stops an artist from expressing their true vision or that industrial regulation stifles the creativity of product designers and developers.
And this is true, to a certain extent. But unless you want a slew of offensive and harmful materials and products that put public health at risk, some degree of control and structure is necessary.
It's about balance. We do need to adopt measures that bring out the best in AI, helping us leverage this technology to its full potential. But at the same time, we must make sure this does not have a negative impact on society and on our planet.
In an article for Forbes, Rohit Anabheri describes how we need to move away from the view that innovation and governance are at odds with each other. He suggests adopting governance frameworks that "embed ethical considerations into the process of innovation itself" and advises building ethical impact assessments into the lifecycle of the AI project, so that governance does not need to play catch-up with technology that has already been deployed.
In 2024, the World Economic Forum (WEF) outlined how they envisaged achieving balance between innovation and governance, based around three core pillars.
The WEF's first pillar involves working with existing regulatory frameworks. We already have a significant network of regulatory and legal governance in place around the world. Just because these frameworks are not quite in step with the new capabilities of AI does not necessarily mean we need to do away with them completely.
If we can modify and repurpose existing frameworks, we may be able to arrive at a robust system of governance far more quickly.
With their second pillar, the WEF recommends "fostering multi-stakeholder collaboration." As governments cannot oversee AI by themselves, we must bring about a "whole of society" approach to governance. This is echoed by Rohit Anabheri in his Forbes article and by James Winters of Deeper Insights, who describes how business stakeholders, legal stakeholders, and technical stakeholders must come together to bring about new forms of governance.
In practice, governance will probably need to include other stakeholders too. Consumers and members of society who are directly impacted by AI — essentially all of us — will need to have their say as well. CSIRO includes entities like AI users, impacted subjects, and consumers in their own governance stakeholder model.
The third and final pillar is all about preparing for the future evolution of AI. It's clear that we haven't yet reached some kind of zenith for AI. In fact, we're only just beginning. AI governance frameworks need to not only meet the challenges of the present day but also remain fit for purpose long into the future as new technological capabilities and new use cases for AI emerge.
Writing for IBM, Tim Mucci and Cole Stryker offered their own principles of responsible AI governance. They argued for the need to audit and stress test training data to eliminate biases, the need for "clarity and openness" around how AI algorithms operate, and the need for accountability from organisations that use AI.
They also offered another principle — empathy. Mucci and Stryker stated that "organisations should understand the societal implications of AI, not just the technological and financial aspects."
At the end of the day, it's this principle that's going to be key to creating AI governance that meets both present and ongoing needs. The development of AI cannot simply be driven by financial gain and competitive advantage in business; this is a recipe for disaster. Instead, the broader societal benefits of AI must be the driving force in its development.
By ensuring that AI moves in this direction — social good instead of monetary gain — governance can actively support innovation rather than hinder it.