How will AI Ethics affect the tech field?
AI is a growing field, but where will ethics fit into how organisations deliver their products? We'll explore AI's common problems, some solutions, and how AI ethics can help organisations make a lasting impact.
What is AI Ethics?
Artificial Intelligence (or AI) is the new gold rush in technology. While generative AI is the big new name on the street, AI has many applications, such as:
- predictive analytics
- natural language processing
- autonomous systems
- anomaly detection
- generative AI
- symbolic AI
- search and optimisation algorithms, or heuristics
AI Ethics is the exploration of the risks that using AI brings, its short- and long-term impacts, and how to control them.
It focuses on optimising the beneficial impacts of AI while managing the risks and adverse outcomes. Organisations are increasingly aware of the risks of AI, such as hallucinations, algorithmic biases, and privacy and confidentiality concerns. Public sector organisations especially have a lot on the line, as public trust is fundamental.
Following AI ethics principles reduces the risk of harm to both your users and your organisation.
What are the most common issues with AI?
There are a number of risks that require careful consideration when creating and using AI, such as:
- causing misinformation through hallucinations or the retrieval of incorrect or outdated information
- causing security or privacy breaches
- exacerbating biases and discrimination
- the sustainability of AI systems, both financial and environmental, and the long-term impact of the systems on users and society
- where accountability lies for any issues that arise from using AI
- using unexplainable AI, which limits an organisation’s ability to predict or verify the outputs of AI
Unexplainable AI is a particularly complex issue. The less you can explain about the AI you're using, and how and why it reaches its conclusions, the less you can manage the risks associated with it. All the other problems associated with AI, as outlined above, are magnified. Unexplainable AI can cause reluctance to adopt the technology, limit the technical capabilities of the AI, and carries a high risk to an organisation's reputation if something were to go wrong.
How can we address these challenges?
Transparency
There are a few ways that transparency can help you manage risks when using AI. First, by being transparent about which sources of information are being used, an organisation can better understand why an AI performed the way it did in a specific context. By citing which sources of information the AI used to generate a response, any misinformation that occurs can be traced back to its source more easily. This reduces the risks of hallucination and misinformation, as issues can be picked up more easily both in development and later on. You can also use transparent sources of information to identify other possible issues, such as algorithmic biases and a lack of suitable data sets.
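To make this concrete, the sketch below shows how a system might record which documents informed an answer, so that misinformation can be traced back to its source. The retrieval logic, document names, and data structures are simplified placeholders rather than any specific product's API.

```python
# A minimal sketch of attaching source citations to AI-generated answers.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list[str]  # identifiers of the documents the answer drew on

def answer_with_citations(question: str, knowledge_base: dict[str, str]) -> Answer:
    """Build an answer and record which documents informed it."""
    question_words = set(question.lower().replace("?", "").split())
    # Toy retrieval: pick documents that share words with the question.
    relevant = [
        doc_id for doc_id, text in knowledge_base.items()
        if question_words & set(text.lower().rstrip(".").split())
    ]
    # A real system would pass the retrieved passages to a language model;
    # here we simply stitch them together.
    summary = " ".join(knowledge_base[doc_id] for doc_id in relevant)
    return Answer(text=summary, sources=relevant)

kb = {
    "policy-2024.pdf": "Annual leave requests must be approved by a manager.",
    "faq.md": "Leave is requested through the HR portal.",
}
result = answer_with_citations("How do I request annual leave?", kb)
print(result.text)
print("Sources:", result.sources)  # any misinformation can be traced back here
```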
Being transparent about how AI is being used in your organisation is a good way to foster public trust. Many organisations are already following this approach by beginning to publish AI statements, which are often similar to privacy statements disclosing how and why you might be using user data. Where possible, it's also worth considering whether your AI might offer an opt-in/opt-out option, so that those who don't want their information to be processed have a way to decline.
Finally, in some cases it can be useful to be transparent about the capabilities and limitations of the AI you're using. For example, if there's a chance your AI makes small inaccuracies when labelling data, then flagging this to users fosters trust, understandability, and accountability, and opens the process up to other controls, such as letting users flag inaccuracies or diversifying your data set.
Including humans in the loop
One of the least risky ways to adopt AI into your organisation is to use AI to augment your current systems while including humans in the loop of decision-making. This can be done by using AI to quickly analyse large data sets, with a human agent using those analyses to aid their decision-making, or by having AI create outputs that human agents then check for accuracy and enrich with nuance, context, and common sense.
Another benefit of including humans in AI-augmented decision-making is that it can be easier for a human to explain why a decision was made than for an AI, as AI often involves some level of 'black box' processing, with little to no understanding of how it gets from inputs to outputs. As much AI relies on machine learning, it's not always easy to point to why a decision was made. This is especially important in circumstances that are unprecedented. AI systems are typically designed to handle specific tasks based on historical data, but when an unprecedented event or condition occurs, a human agent can make a decision, justify it, and take accountability.
You can also conduct stakeholder impact assessments in the discovery process before implementing AI in your organisation, to better understand how many people will adopt the new way of working, how sustainable the system is in the long term, and what the risks are and what measures you can put in place against them.
Including humans in the loop can:
- build trust with your users
- ensure fairness and accuracy by helping find and correct AI errors
- add context and common sense to AI outputs
- ensure there is a line of accountability for the decisions made
- handle unforeseen issues that an AI isn’t trained to handle
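As a simple illustration of how several of these points can be implemented at once, the sketch below routes low-confidence AI outputs to a human reviewer. The threshold and the stubbed-out model are hypothetical stand-ins, not a recommendation for any particular tool.

```python
# A minimal sketch of a human-in-the-loop gate: outputs the model is
# unsure about are escalated to a person rather than acted on blindly.
CONFIDENCE_THRESHOLD = 0.85

def classify(document: str) -> tuple[str, float]:
    """Stand-in for a real model call: returns (label, confidence)."""
    return ("invoice", 0.62)  # pretend the model is unsure about this one

def process(document: str) -> str:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-labelled as {label!r} ({confidence:.0%} confidence)"
    # Below the threshold, a human agent decides and their decision is
    # recorded, preserving a clear line of accountability.
    return "escalated to a human reviewer for a final decision"

print(process("scanned_document.pdf"))
```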
Restricting information flow
Restricting what information an AI is fed, and where that information originates, is an effective way of reducing the risk of hallucinations and security breaches. Any inputs used in a machine-learning AI can become part of its knowledge base. By carefully choosing the types of information sources an AI has access to, you can reduce the risk of hallucination and make sure that the origin of the AI's outputs can be explained.
Also, when users input sensitive or confidential information into an AI as prompts or for data retrieval, there is a risk of that information also becoming part of the knowledge base. Being aware of what inputs you're feeding an AI can reduce the possibility of security risks arising.
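As a rough illustration, the sketch below scrubs sensitive details from prompts before they leave your systems. The patterns are simplified examples; a real deployment would rely on a vetted PII-detection tool rather than hand-written rules.

```python
# A minimal sketch of redacting sensitive details from prompts before
# they reach an external AI service.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b0\d{3}\s?\d{7}\b"),  # simplified UK format
    "national_insurance": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a tag."""
    for name, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{name.upper()} REDACTED]", prompt)
    return prompt

raw = "Contact jane.doe@example.com or 0131 4961234 about case AB 12 34 56 C."
print(redact(raw))  # the original details never leave your systems
```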
Diversity
Increasing diversity can come in many forms, from increasing the diversity of backgrounds and ways of thinking within an organisation's teams, to making sure the data sets you use are inclusive, such as covering various dialects and accents in audio-labelling use cases, or a range of skin tones in image recognition. Diversifying your teams may help:
- identify potential risks early in the processes
- increase trust in your systems
- improve performance across diverse contexts
- build more adaptability in AI solutions
To aid these goals, you can also conduct bias and unconscious-bias training in teams developing and using AI, and ensure you are following accessibility standards when creating user interfaces for AI. These approaches reduce the harm that could be caused by badly trained algorithms.
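Returning to the data set point above, the sketch below checks whether a hypothetical audio-labelling data set covers a set of expected accents. The field names and accent groups are invented for the example.

```python
# A minimal sketch of auditing a data set for coverage of expected groups.
from collections import Counter

samples = [
    {"audio_id": 1, "accent": "Scottish"},
    {"audio_id": 2, "accent": "Scottish"},
    {"audio_id": 3, "accent": "Received Pronunciation"},
    # ...thousands more rows in a real data set
]
EXPECTED_ACCENTS = {"Scottish", "Received Pronunciation", "Welsh", "Geordie"}

counts = Counter(row["accent"] for row in samples)
for accent in sorted(EXPECTED_ACCENTS):
    n = counts.get(accent, 0)
    status = "OK" if n > 0 else "MISSING - collect more data"
    print(f"{accent:25s} {n:5d}  {status}")
```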
Regular auditing or review
In many cases, it may be worth reviewing and auditing your AI processes on a regular basis, such as annually. Regular reviews can identify problems early on and catch possible issues before they develop. Ways in which you can review these processes include monitoring the performance of an AI tool and assessing how well it matches your KPIs, or identifying possible security vulnerabilities that an AI could introduce or exacerbate in your systems. Reviewing how you use AI can increase the reliability of your systems by identifying where unexpected outputs may occur and surfacing issues so that they can be addressed quickly.
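As an illustration of such a review, the sketch below compares an AI tool's measured accuracy against a KPI and flags degradation. The KPI value and evaluation data are invented for the example; in practice the audit set would be periodically sampled and human-verified.

```python
# A minimal sketch of a recurring KPI check for an AI labelling tool.
KPI_ACCURACY = 0.95  # e.g. "the labeller must be right 95% of the time"

def evaluate(predictions: list[str], ground_truth: list[str]) -> float:
    """Fraction of predictions that match the human-verified labels."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Stand-in data; a live system would pull these from an audit sample.
predictions  = ["invoice", "receipt", "invoice", "contract"]
ground_truth = ["invoice", "receipt", "contract", "contract"]

accuracy = evaluate(predictions, ground_truth)
if accuracy < KPI_ACCURACY:
    print(f"ALERT: accuracy {accuracy:.0%} is below the {KPI_ACCURACY:.0%} KPI;"
          " trigger a review of recent inputs and outputs")
```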
Identifying problems with current processes
As Professor Shannon Vallor, one of the judges on the AI Challenge we collaborated on with Futurescot, said, "AI is an accelerant". AI can do many things faster than people working manually, through pattern identification and amplification. However, this means AI can also amplify any pre-existing negative patterns. One of the best ways to understand the risks of introducing AI to your processes is to look at existing issues in your organisation's processes or data sets. Are there any patterns that would cause issues when replicated at a large scale?
How will AI Ethics affect the tech field?
AI Ethics will continue to be included in public sector guidance, such as The Alan Turing Institute's contributions to UK public policy, but it will likely feature in private sector decision-making as well. Risk management is a core principle of project planning and management, and the risks associated with AI are well publicised and becoming public knowledge. To have a lasting influence in AI, then, following AI Ethics principles will become commonplace in the tech field. Common ways these principles will be applied include increasing transparency about how AI is used in organisations through AI and privacy statements, and restricting information flows, both when training an AI and on an ongoing basis in the case of generative AI, where prompts will likely be restricted for confidentiality. Diversity in data sets and in development teams may continue to be emphasised, and companies may start to consider how their auditing processes can be improved, or put in place.
We can help you prepare your organisation for the adoption of AI and find the best use cases to innovate responsibly. We offer AI consulting at every stage, from discoveries and impact assessments to proofs of concept, helping you find new ways to use your data and data sets, and giving you confidence to implement AI responsibly.
The approach with the least risk when using AI is to continue to include human supervision in the decision-making process, using AI to augment current processes. This can take the form of chatbots that hand off to human agents when they don't know the answer, or AI that summarises or labels content while leaving decisions to human agents. However, the biggest focus for organisations that want to set themselves up for the future with AI is building robust data sets, diversifying their teams and feedback opportunities, and building transparency into their AI processes from the outset. Combined with appropriate training in how to use AI responsibly and how to continuously review and improve an AI against your KPIs, this prepares your organisation to use AI in more innovative ways while continuing to promote trust in your technology.
Sources
AI Ethics and Governance in Practice | The Alan Turing Institute
Lessons from the AI Mirror by Shannon Vallor
Get AI Ready: Action plan for IT Leaders | Gartner
Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance