Are councils clear on the ethics and governance challenges of AI?

  • Local authorities are increasingly using artificial intelligence (AI) to drive efficiencies and support decision-making
  • Yet there remain reservations about how and when it is appropriate to use AI, particularly in relation to vulnerable groups
  • Councils should be aware of the ethical and governance risks and build a framework to address them

Local authorities are responsible for delivering services that support some of the most vulnerable people in society. At the same time, they have a legal duty to balance their budgets amid mounting financial pressure.

Many local authorities have enthusiastically embraced the opportunities afforded by AI and other digital technologies to achieve significant savings and make more effective use of limited resources.

A recent example is Bristol City Council’s use of risk profiling algorithms to help inform decisions about where social workers and family support workers would be best deployed.

Yet AI also presents a challenging set of risks for the public sector, including concerns about ethics and governance. The Guardian recently reported that one in three councils is using AI systems to provide automated guidance on welfare matters. However, concerns have been raised about the reliability of these systems, and about whether there is sufficient human oversight and a genuine ability to challenge automated decisions.

The Local Government Association says data should only ever be used by councils “to inform decisions and not to make decisions”.
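
The distinction between informing and making decisions has a direct counterpart in system design. As a purely illustrative sketch (the function names, risk factors, and weights below are hypothetical, not drawn from any council's actual system), a decision-support tool can be built so that it only ever produces a recommendation with an explanation, and so that no outcome can be recorded without a named human decision-maker and a rationale:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """Machine output: informs a decision, never makes one."""
    case_id: str
    risk_score: float               # 0.0 (low) to 1.0 (high)
    explanation: list[str]          # human-readable reasons for the score

@dataclass
class Decision:
    """Final outcome, always attributed to a named person."""
    case_id: str
    action: str
    decided_by: str                 # named officer, for accountability
    rationale: str                  # why the officer agreed or departed from the advice
    recommendation: Recommendation  # what the system suggested at the time
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def score_case(case: dict) -> Recommendation:
    """Hypothetical risk profile: a weighted sum of illustrative factors."""
    factors = {
        "missed_appointments": 0.3,
        "prior_referrals": 0.5,
        "housing_instability": 0.2,
    }
    present = {f: w for f, w in factors.items() if case.get(f)}
    score = min(sum(present.values()), 1.0)
    explanation = [f"{f} present (weight {w})" for f, w in present.items()]
    return Recommendation(case_id=case["id"], risk_score=score, explanation=explanation)

def record_decision(rec: Recommendation, action: str, officer: str,
                    rationale: str, audit_log: list) -> Decision:
    """The human decision step: refuses to proceed without an officer and a rationale."""
    if not officer or not rationale:
        raise ValueError("A named decision-maker and a rationale are required")
    decision = Decision(rec.case_id, action, officer, rationale, rec)
    audit_log.append(decision)      # retained so decisions can later be challenged
    return decision
```

Keeping the recommendation, the named decision-maker, and their rationale together in one audit record is what makes later challenge possible: anyone reviewing a case can see both what the system suggested and why a person agreed with it or overrode it.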

Public concern about AI accountability

The UK’s Office for Artificial Intelligence (OAI) has identified many of the potential benefits of AI for the public sector. These include generating more accurate information, forecasts and predictions; simulating the impact of policies before they are implemented; and automating simple manual tasks to free up staff time.

Yet our latest whitepaper, Artificial intelligence in the public sector: the future is here, reveals that the public has significant reservations about AI being used in relation to service users. Nearly half (45%) are uncomfortable with decisions that affect them being made without human involvement. Just 16% approve of AI being used in elderly care, and even fewer (9%) support its use with vulnerable children. A quarter also worry about reduced human accountability and an emphasis on logical rather than ethical decision-making.

Before they can reap the benefits of AI, local authorities must first assess its potential impact on service users, particularly the most vulnerable, and they should be ready to explain their processes. No matter how sophisticated – or complicated – the technological tools they use, local authorities still have a responsibility to be transparent and accountable.

Taking a considered, long-term view on AI

Rod Penman, Head of Public Services, Zurich Municipal, says: “It is important that local authorities look beyond the short-term financial savings and efficiency benefits of introducing AI. What could be the impact of any AI failure on your most vulnerable service users?”

As an insurer, we’re keen to support local authorities in managing the risks involved in embracing AI technologies.

Our specialist risk consultants are happy to discuss the challenges your organisation could face. We can help audit innovations in development, review your ethical and governance frameworks, check compliance with relevant legislation, and address other key challenges.

For more information, download our new report, Artificial intelligence in the public sector.