AI applications that are human-centred, unbiased, fair

Even fairly cautious predictions suggest that artificial intelligence, or AI, will reshape our workforces, redesign business processes and give rise to new services in a way that we are only starting to imagine. At Accenture, we believe the societal impact of AI will be huge, but if deployed responsibly, it will be overwhelmingly beneficial, too.

The core of responsible AI is a recognition by businesses, governments and technology leaders that with the benefits of AI comes a duty of care to manage any adverse consequences. Responsible AI prioritizes ethics above all else in the development of AI applications. Economic growth is, of course, an end goal of AI, but it must be pursued in a way that empowers humans and ensures communities thrive. To meet this objective, transparency is critical, and AI must comply fully with all relevant regulations in the locations where it is deployed.

The negative consequences of mismanaging the transition to our AI-enabled future are hard to overstate. Even in our relatively immature era of AI deployment, we have seen how badly things can go wrong when AI applications are not sufficiently trained or tested: self-driving cars cause accidents, AI bots develop gender and racial bias and insensitivity, and rogue algorithms create chaos.

How can we ensure that AI decision-making is valid, safe, reliable and, above all, ethical? One of the key methods will be establishing a robust framework for teaching and training AI applications. Just as parents need to educate their children on societal norms, values and behaviours, so, too, will AI applications need to be trained and validated to ensure they are aligned with our values and societal goals—the applications will not know these things implicitly. Businesses and their technology partners, therefore, need to build a robust test framework, bespoke to AI applications, to guarantee decision-making is transparent, explainable, fair and non-discriminatory.

It is important not to underestimate the challenges of building such a test regime. AI software is made up of many components, some of which change over time. Software engineers need to consider, for example, how best to process and verify the vast amounts of structured and unstructured data that fuel AI applications. Additionally, engineers will need to select which AI or machine learning algorithm to use, and then evaluate the accuracy and performance of the learning models to ensure ethical and unbiased decisioning and regulatory compliance. Engineers will also need to build new test and monitoring processes that account for the data-dependent nature of AI systems.

One way to simplify this task is to divide it into two. In this approach, engineers first carry out a “teach” stage: processes to train the system to produce outputs by learning from training data. This stage tests the performance of various algorithms, allowing engineers to select the best-performing model to be deployed in production. Next is the “test” phase. Here, engineers check the accuracy and performance of the system by validating outputs on both test and production data.
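The two stages can be sketched in a few lines of Python. This is only an illustration of the pattern described above, not Accenture's actual framework: the toy 1-D dataset, the threshold "models" and the accuracy metric are all assumptions made for the example.

```python
import random

# Hypothetical toy dataset: a single feature in [0, 1), labelled 1 when it
# exceeds 0.5. Real AI systems would use far richer training data.
random.seed(0)
labeled = [((x := random.random(),), 1 if x > 0.5 else 0) for _ in range(200)]

# Split the data into a "teach" (training) portion and a held-out "test" portion.
split = int(0.8 * len(labeled))
train, test = labeled[:split], labeled[split:]

def threshold_model(t):
    """A trivial candidate model: predict 1 when the feature exceeds t."""
    return lambda features: 1 if features[0] > t else 0

def accuracy(model, rows):
    """Share of rows the model labels correctly."""
    return sum(model(x) == y for x, y in rows) / len(rows)

# "Teach" stage: compare candidate models on the training data and keep
# the best performer for deployment.
candidates = {f"threshold_{t}": threshold_model(t) for t in (0.3, 0.5, 0.7)}
best_name = max(candidates, key=lambda n: accuracy(candidates[n], train))
best = candidates[best_name]

# "Test" stage: validate the selected model on held-out data, a stand-in
# for the checks that would also run against production data.
print(best_name, accuracy(best, test))
```

In practice the candidate set would contain genuinely different learning algorithms, and the test stage would also monitor the model after deployment, since the data feeding an AI system changes over time.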

What’s important across the teach-and-test framework is that businesses and their technology partners should not only assess whether the AI application is effective in meeting a given business goal (enhancing customer experiences, for example), but also that the data used to train the system is unbiased, high-quality and accurate. Importantly, the framework also needs to include a mechanism to test the ability of an AI system to explain its decisions logically.
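One basic check of the kind described above is verifying that training labels are reasonably balanced before teaching begins. The sketch below is a hypothetical example: the sentiment labels, the sample data and the uniform-balance tolerance are assumptions, not part of any specific framework.

```python
from collections import Counter

# Hypothetical labelled training examples: (text, sentiment) pairs.
training_data = [
    ("great service", "positive"),
    ("terrible delay", "negative"),
    ("okay experience", "neutral"),
    ("loved it", "positive"),
    ("not bad", "neutral"),
    ("awful support", "negative"),
]

def label_balance(rows):
    """Return each label's share of the training set."""
    counts = Counter(label for _, label in rows)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def flag_imbalance(rows, tolerance=0.1):
    """List labels whose share deviates from a uniform split by more than tolerance."""
    shares = label_balance(rows)
    expected = 1 / len(shares)
    return [label for label, share in shares.items()
            if abs(share - expected) > tolerance]

print(label_balance(training_data))
print(flag_imbalance(training_data))  # empty list: the labels are balanced
```

A real pipeline would apply checks like this across many dimensions of the data, not just the target label, and would pair them with tests that the deployed model can explain its individual decisions.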

At Accenture, for example, we developed our own teach-and-test framework and applied it when we built a sentiment analysis solution. The framework was critical in allowing us to develop unbiased training data, which guaranteed the application had the right balance of different sentiments across the social media, news and other data sources it would be expected to analyse. Significantly, the framework also allowed us to accelerate the model training time by an astounding 50%.

The most important work that lies ahead of us now is to ensure that the AI revolution works for everyone and not just for a few. Accenture believes in building AI applications that are human-centred, unbiased and fair. And the best way to do that is to apply a rigorous teach-and-test framework while developing new applications, so that ethical outcomes are as certain as good business outcomes.

Bhaskar Ghosh is group chief executive at Accenture Technology Services.
