The Algorithmic Transparency Recording Standard

A standardised way of recording and sharing information about how the public sector uses algorithmic tools.

Details

You must complete both tiers of the template.

  • Tier 1: provide a short non-technical description of your algorithmic tool, including what the tool is and why it's being used
  • Tier 2: provide more detailed technical information, such as how your tool works and the data it uses

The numbers in brackets correspond to the field numbers in the Algorithmic Transparency Recording Standard.
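
If it helps to draft the content before completing the official template, the two-tier structure and bracketed field numbers can be sketched as plain structured data. A minimal sketch in Python follows; the key names and placeholder values are illustrative assumptions, not part of the standard.

    # Illustrative sketch only: one hypothetical way to draft a record before
    # filling in the official template. The bracketed field numbers in the
    # comments follow the standard; the key names and values are invented.
    record = {
        "tier_1": {
            # Field 1.2: how and why the algorithmic tool is used
            "how_and_why_used": "short non-technical description of the tool",
            # Fields 1.3 and 1.4: how people can find out more or ask a question
            "more_information": "contact details and offline options",
        },
        "tier_2": {
            "owner_and_responsibility": {},          # Section 1, fields 2.1.x
            "how_the_tool_works_and_rationale": {},  # Section 2, fields 2.2.x
            "decision_making_process": {},           # Section 3, fields 2.3.x
            "technical_specification_and_data": {},  # Section 4, fields 2.4.x
            "risks_mitigations_impact_assessments": {},  # Section 5, fields 2.5.x
        },
    }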

Tier 1 information

How and why you’re using the algorithmic tool (1.2)

Explain:

  • how your tool works
  • how your tool is incorporated into your decision making process
  • what problem you’re aiming to solve using the tool, and how it’s solving the problem
  • your justification or rationale for using the tool

How people can find out more information (1.3 and 1.4)

Explain how people can find out more about the tool or ask a question - including offline options and a contact email address for the responsible organisation, team or contact person

Tier 2 information

Section 1: Who owns and has responsibility for the algorithmic tool

List who’s accountable for deploying your tool, including:

  • your organisation (2.1.1)
  • the team responsible for the tool (2.1.2)
  • the senior responsible owner (2.1.3)
  • whether an external supplier is involved and if the tool has been developed externally (2.1.4)
  • the name of your supplier (2.1.4.1) and the Companies House number (2.1.4.2)
  • the role of the external supplier (2.1.4.3)
  • the procurement procedure type (2.1.4.4)
  • the terms of their access to any government data (2.1.4.5)

Section 2: How the tool works in detail and the rationale for its use

Provide a detailed description of the tool. Compared to the high-level description in Tier 1, you should provide an explanation at a more granular and technical level, including the main rules and criteria used by the algorithm/algorithms. (2.2.1)

Expand your rationale for using the tool:

  • describe what the tool has been designed for and not designed for, including purposes people may wrongly think the tool will be used for (2.2.2)
  • provide a list of benefits, such as value for money, efficiency or ease for the individual (2.2.3)
  • describe the decision making process that took place prior to the deployment of the tool, where applicable (2.2.4)
  • list non-algorithmic alternatives you considered, if this applies to your project (2.2.5)

Section 3: How the tool is integrated into the wider decision making process

Explain how the tool is integrated into the wider decision making process, and what influence it has on that process. (2.3.1)

Explain how humans have oversight of the tool, including:

  • how much information the tool provides to the decision maker, and what the information is (2.3.2)
  • the decisions humans take in the overall process, including options for humans reviewing the tool (2.3.3)
  • training that people deploying and using the tool must take, if this applies to your project (2.3.4)

Explain your appeal and review process. Describe how you’re letting members of the public review or appeal a decision. (2.3.5)

Section 4: Technical specification and data

List the tool’s technical specifications, including:

  • the type of method, for example an expert system or deep neural network (2.4.1)
  • how regularly the tool is used - for example the number of decisions made per month, or the number of citizens interacting with the tool (2.4.2)
  • the phase - whether the tool is in the idea, design, development, beta/pilot, production, or retired stage, including the date and time it was created and any updates (2.4.3)
  • the maintenance and review schedule, for example information on how often and in what way the tool is being reviewed post-deployment, and how it is being maintained if further development is needed (2.4.4)
  • the model performance, for example accuracy metrics (such as precision, recall, F1 scores), metrics related to privacy, and metrics related to computational efficiency (2.4.5) - a sketch showing how such metrics might be calculated follows this list
  • the system architecture (2.4.6)
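
To illustrate the kind of accuracy figures field 2.4.5 refers to, the sketch below computes precision, recall and an F1 score in Python from a small set of invented test labels and predictions. It is an assumption about how you might derive such numbers, not a requirement of the standard.

    # Hypothetical sketch: computing the accuracy metrics mentioned for field
    # 2.4.5 (precision, recall, F1) from invented test labels and predictions.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth outcomes (invented)
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # tool's predictions (invented)

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")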

List and describe:

  • the datasets you’ve used to train the model
  • the datasets you’ve used to test the model
  • the datasets the model is or will be deployed on

Add links to the datasets if you can.

Include:

  • the names of the datasets you used, if applicable (2.4.7)
  • an overview of the data used to train, test and run the tool, including a description of the types of variable used for training, testing or operating the model - for example ‘age’, ‘address’ and so on (2.4.8)
  • the URL for the datasets you’ve used, if available (2.4.9)
  • how and why you collect data, or how and why data was originally collected by someone else (2.4.10)
  • how you or the supplier cleaned or pre-processed the data (2.4.11)
  • details on how representative and complete the data is, including missing data (2.4.12) - a minimal completeness check is sketched after this list
  • the data sharing agreements you have in place (2.4.13)
  • details on who has or will have access to this data, how long the data is stored for, and under what circumstances (2.4.14)
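
As an example of the kind of completeness check that could support field 2.4.12, here is a minimal sketch using pandas on a small invented dataset; the column names and values are hypothetical.

    # Minimal sketch for field 2.4.12: measuring how complete a dataset is.
    # The data frame below is invented purely for illustration.
    import pandas as pd

    df = pd.DataFrame(
        {
            "age": [34, 41, None, 29, 55],
            "postcode_area": ["SW1", "M1", "LS2", None, "B3"],
            "outcome": [1, 0, 0, 1, None],
        }
    )

    # Share of missing values per variable - figures like these can help
    # describe how representative and complete the data is.
    missing_rate = df.isna().mean().round(2)
    print(missing_rate)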

Section 5: Risks, mitigations and impact assessments

List the impact assessments you’ve completed. For each assessment, provide its name, a short overview, the date of completion and, if possible, a summary of the findings. If available, provide a publicly accessible link. (2.5.1)

Examples of relevant impact assessments are:

  • data protection impact assessment
  • algorithmic impact assessment
  • ethical assessment
  • equality impact assessment

Provide a detailed description of the common risks for your tool, including the name and a description of each identified risk, and a detailed description of the actions you’ve taken to mitigate those risks. (2.5.2)