Futurescot AI Challenge Submissions Guidance: Part one


By Stewart Cruickshank

03 July 2024

Part one: becoming familiar with what AI can and can’t do.

This is the first in a series of three articles that provide some practical considerations for organisations submitting an application to the Futurescot AI Challenge. This article focuses on becoming familiar with what AI can and can’t do.

Defining Artificial Intelligence

Artificial Intelligence (AI) is a broad discipline and the following definition, used by UK Parliament, is sufficient for our purposes: “Machines that perform tasks normally performed by human intelligence, especially when the machines learn from data how to do those tasks.”

For example, traditional automation methods, such as Robotic Process Automation (RPA), are often mistaken for AI. However, AI recognises patterns in data and learns over time, mimicking human intelligence, whereas RPA follows a process defined by an end user.

During the 2000s, the rise of cloud service providers like Azure and AWS enabled widespread access to machine learning (ML) services for enterprises by offering scalable, cost-effective, and user-friendly solutions. The NHS, for example, has been at the forefront of integrating ML technologies to improve decision-making and diagnostics.

ML services, such as those available via APIs from the large cloud service providers, can perform very narrow tasks such as Natural Language Processing (NLP), text analytics or computer vision, but lack the versatility of Large Language Models.


Large Language Models

OpenAI’s ChatGPT, a type of Large Language Model (LLM), launched in late 2022 to much fanfare, signalling a paradigm shift in AI. This leap forward in how the public sector can benefit from AI is attributed to several factors:

  • Versatility – LLMs are trained on vast amounts of data, so they can perform tasks they were not specifically trained for.
  • Natural language comprehension – LLMs are tailored to handle the complexities of human language understanding and unstructured data.
  • Creativity and content generation – LLMs have shown the ability not only to summarise content but also to produce novel content, including text, audio, images, video and code, opening up new opportunities for innovation in organisations.
  • Data analysis – LLMs can support better decision-making by analysing vast datasets, uncovering hidden patterns, and providing actionable recommendations.

We recommend pre-trained LLMs, such as OpenAI’s ChatGPT available via Azure cloud services, as a good starting point for public sector organisations considering AI use cases, given their immediate availability and versatility of application.


How can a public sector organisation use LLMs now?

Even at this early stage in the story, LLMs are already performing a wide range of useful office-based functions. The use of AI is evolving quickly, but some promising examples of use cases for the public sector are outlined below. In practice, a real-world problem is likely to involve several tasks and business processes, and so will cut across many of these use cases.

Better citizen support

  • Offering natural language interfaces to users, spanning one or more modalities such as text, audio or image, for fast, convenient assistance.
  • Speeding up delivery of services by helping users retrieve and search for online content and data.
  • Routing citizens’ digital queries, received via email or form correspondence, to the right parts of the business.
  • Providing personalised digital interfaces tailored to each user’s informational needs, potentially aggregating information from external sources.

Improving Accessibility

  • Making information more accessible to users, such as by rephrasing complex content into simpler language.
  • Automating content translation into multiple languages.
  • Converting text to speech and vice versa, aiding individuals with visual or hearing impairments.
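As a concrete illustration of the first accessibility use case, rephrasing complex content into simpler language can be done with a single call to a hosted LLM. The following is a minimal Python sketch, assuming the `openai` package and your own Azure OpenAI resource; the endpoint, key and deployment name shown are placeholders, not real services.

```python
import os


def build_plain_language_prompt(text: str) -> list:
    """Build a chat prompt asking the model to rewrite text in plain language."""
    return [
        {"role": "system",
         "content": "You rewrite public sector text into plain, accessible language."},
        {"role": "user",
         "content": f"Rewrite the following in plain language:\n\n{text}"},
    ]


if __name__ == "__main__":
    # Requires the `openai` package and an Azure OpenAI resource;
    # the environment variables and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    response = client.chat.completions.create(
        model="my-gpt-deployment",  # placeholder deployment name
        messages=build_plain_language_prompt(
            "The applicant must furnish documentary evidence of residency."
        ),
    )
    print(response.choices[0].message.content)
```

Keeping the prompt construction in a small, testable function separates organisational wording standards from the API plumbing, which makes it easier to review and adjust as policies evolve.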

Speeding up paperwork

  • Documenting and recording information.
  • Reviewing and extracting key information from various documents and across different internal systems.
  • Automating the assessment of inbound applications or requests for information from citizens.
  • Organising unstructured data from different sources into structured formats.

Augmenting decision-making

  • Triaging and summarising pertinent information to support decision making.

Automating internal workflows

  • Automating internal processes such as data entry, autonomously interacting with internal databases and applications, and inter-departmental communication.

Data analysis

  • Performing data analysis across large datasets to uncover patterns, generate new insights, and gather evidence or make recommendations.

Content generation

  • Generating content to support first drafts of documents, automated summaries, or report generation.
  • Generating audio, images or video from text, and vice versa.


Public Sector constraints using LLMs

LLMs offer significant benefits for the public sector and its users. However, there are some use cases where it is not yet appropriate to use LLMs in the public sector, and these should be avoided. The list below is not exhaustive but gives a general guide; it should be considered alongside any internal policies around AI, as well as the responsible AI principles outlined in Scotland’s National AI Strategy and AI Playbook.

  • High stakes decision making - fully automated decision-making involving significant decisions, such as those involving someone’s health or safety, should not be made by LLMs alone.
  • High-accuracy results: generative AI is optimised for plausibility rather than accuracy, and should not be relied on as a sole source of truth, without additional measures to ensure accuracy such as human intervention.
  • High-explainability contexts: the inner workings of an LLM solution may be difficult to explain, meaning that it should not be used where it is essential to explain every step in a decision.
  • Limited data contexts: Unless specifically trained on specialist data, LLMs are not true domain experts. On their own, they are not a substitute for professional advice, especially in legal, medical, or other critical areas where precise and contextually relevant information is essential.

The information above should provide enough understanding to generate AI use cases. You may also wish to consult internal data and AI subject matter experts, or the Scottish Government AI Playbook, for further information.