Overrepresentation of Black, Asian and minority ethnic solicitors in reports to the SRA: Decision making at the assessment stage

1. Introduction

The universities of York, Lancaster, and Cardiff were commissioned by the Solicitors Regulation Authority (SRA) to understand the reasons why there is overrepresentation of Black, Asian and minority ethnic solicitors in reports to the SRA. There are two main components to the research. The first looks at the factors, present in the legal sector and wider society, which may explain the overrepresentation in complaints of potential misconduct made to the SRA. The second looks at decision making at the assessment stage, when the SRA decides which complaints to progress for investigation. The reason for this focus is that the overrepresentation is particularly evident at these two early stages of the SRA's processes. It is present in the complaints received and increases further at the assessment stage. The research uses multiple complementary research methods, including both quantitative and qualitative analyses, to shed further light on this subject.

The overall findings from the research, including an overview of the component parts of the project, are published separately. This supporting report covers the second component of the project and sets out the findings from our review of decision making at the assessment stage of the SRA's enforcement process.


Our approach to this part of the research was to examine whether the SRA is consistently applying its decision-making criteria at this early stage of the enforcement process and whether anything in that process contributes to the overrepresentation of Black, Asian and minority ethnic solicitors in cases progressed for investigation.

We completed an onsite visit to the SRA. This was followed by a desk-based analysis of induction, training and guidance materials given to staff who make initial assessment decisions about whether reports received should be progressed to investigation. The materials included the criteria which guide this decision making, referred to as the Assessment Threshold Test (which we will refer to as the assessment test), and the supporting guidance materials used by staff, including the overarching Enforcement Strategy. The assessment test, how it is applied and how it relates to the SRA's Enforcement Strategy are published on the SRA's website. We also reviewed two modules from the SRA's Equality, Diversity and Inclusion (EDI) e-learning suite. Finally, we completed fourteen online semi-structured interviews with Investigation Officers (IOs) and Investigation Managers (IMs) working within the Assessment and Early Resolution Team (which we will refer to as the assessment team). IOs and IMs decide whether to progress reports received for investigation.

This report does not examine the investigation stage of the process, which takes place if there is a decision to progress the report for investigation.

The remainder of the report is structured as follows:

  • Section 3 focuses on the training, guidance, and decision-making criteria, and starts by introducing the role of the assessment team together with the nature of the assessment test. This is followed by an analysis of the training provided to the assessment team, including the IO induction, the online knowledge portals, and standard operating procedures, as well as the EDI training modules provided.
  • Section 4 is dedicated to the semi-structured interviews with members of the assessment team. Findings are discussed under core headings including decision making and networks, personal responsibility, and confidence in outcomes.
  • Section 5 summarises the key findings and
  • Section 6 discusses key insights drawn from the analysis.

Introduction to the assessment team

The assessment team is responsible for decision making at the first stage of the enforcement process for any reports received by the SRA. The assessment team is part of the Investigation and Supervision Directorate and reports to the SRA's Director of Investigations and Supervision.

The assessment decisions are made by a core group of staff. When joining the assessment team, staff start in one of the training teams and on average will stay in the training team for around 9 to 12 months. Staff in the training teams have enhanced supervision and will not be able to make an assessment decision without it being checked by a manager. When the IM is satisfied that they are fully trained, staff will join one of the core assessment teams.

Reports can be made by members of the public, solicitors, and other external stakeholders, such as the courts, the police or government departments. Solicitors and law firms may also refer themselves to the SRA or a matter could be referred by SRA staff, for example if potential misconduct is identified on a visit to a firm or seen in the media. Regardless of their source, all reports must pass through the assessment process. When a report is received, the assessment team assesses it against a three-stage assessment test. This test involves dedicated IOs assessing the following three limbs:

  1. Whether there is a potential breach of the SRA's standards or requirements, based on the allegations made.
  2. Whether that breach, if proved, is sufficiently serious to be capable of resulting in regulatory action.
  3. Whether that breach is capable of proof.

The results of the assessment test determine whether a report proceeds to the investigation stage. If the report does not meet all three limbs of the assessment test, the SRA will advise the complainant that the report will not be progressed for investigation and explain why not. If an IO decides that the report does meet the assessment criteria and should be referred for investigation, this decision has to be approved by an IM. Once the report is passed to the investigation team, there is a further check by a manager in that team. If the investigation team manager takes a different view, the report may be passed back for the decision to be reconsidered.

The team deals with 800 to 1,000 reports per month, with around 16 percent of these (roughly 130 to 160 reports) meeting the assessment test and being passed on for investigation.

Research methodology

A review of the SRA's training and guidance materials was undertaken via a visit to SRA headquarters in Birmingham on 20th March 2023. The visit was led by the SRA's Investigation Manager and the Head of Investigations (who has responsibility for the whole assessment team).

The university research team was shown the induction schedule which all staff joining the assessment team are required to complete. The team also spent the day learning how IOs are trained and worked through a series of practical scenarios demonstrating how the assessment test is applied. Finally, the research team was taken through the guidance available to IOs, including the role of aggravating and mitigating factors in determining degrees of seriousness, which forms part of the second of the three limbs of the assessment test that determines whether cases are progressed for investigation.

Further online access was granted to allow a desk-based analysis of the guidance that staff have available to them when making decisions. This suite of materials is collectively referred to here as the training materials, guidance and standard operating procedures and included:  

  • SRA Enforcement Strategy
  • Guidance on making decisions to investigate concerns
  • Internal Seriousness Table
  • Core SRA standards and regulations
  • Access to a suite of work instructions including further guidance on making enquiries, regulatory checks, and red flags
  • Guidance on completing the three limbs (questions) which comprise the assessment test.

Two modules from the SRA's wider online EDI training suite were then reviewed over a period of five days during August 2023. The review assessed the nature and quality of the training and the extent to which it might be used to inform day-to-day decision making by members of the assessment team. The materials were worked through sequentially and notes were taken throughout, with a particular focus on:

  • the nature of the content
  • the robustness of the content
  • the accessibility of the content and
  • the way in which the material could be operationalised.

The two EDI modules were compared to the Chartered Institute of Personnel and Development's (CIPD) standards for good practice.

The first module reviewed, called 'Conscious Inclusion – equality, diversity and inclusion in action', is an online module which the SRA requires all staff to undertake in their first week of induction. Its stated aim is to set out what the SRA does to make its workplace inclusive and to encourage staff to play an active role in its approach. It covers a range of topics including why an inclusive workplace is important, how staff can share their views (including through its staff networks), its approach to equality impact assessment, and reasonable adjustments. It covers the Equality Act and the SRA's EDI policy and provides an overview of unconscious bias. This induction module is supported by a range of 'spotlight' modules which focus in more detail on various EDI topics, such as carrying out an equality impact assessment, providing reasonable adjustments, being an ally and trans and non-binary inclusion.

The research team was provided with the spotlight module thought to be most relevant to this research, the 'Spotlight on Unconscious Bias.' This module provides an overview of unconscious bias and covers how individuals can minimise the impact of bias, as well as the controls and processes the SRA has in place to support an objective approach to decision and policy making.

Training overview

Investigation Officer induction

The IO induction lasts for one week and covers all aspects of the assessment process, including an introduction to the SRA's Enforcement Strategy, good decision making, and the Principles and Codes of Conduct for legal firms and individuals. IOs are introduced to the assessment of seriousness table, a variety of topic guides and 'red flag' issues. The week is interspersed with a series of case studies in which IOs are encouraged to discuss and evaluate a variety of complaints varying in complexity.

The induction training offers IOs the chance to 'sense check their knowledge' and to understand and navigate other knowledge repositories within the organisation. The case studies are not 'packaged scenarios' – they require IOs to ask questions and to make sense of the data they have. This approach is taken because members of the team are frequently required to make further enquiries to make sense of the incoming reports they handle, given that it may not be possible to gather the full facts from the incoming report. These enquiries may need to be made with the complainant in the first instance or with the reported party. The purpose is to understand and analyse the report so that the IO fully understands the facts before deciding whether to progress a report for investigation.

The induction training is comprehensive and once complete the new IOs are kept in the training team for nine months to one year, with some remaining in training for up to one and a half years. This ensures that new IOs have each of their decisions reviewed by IMs. During the training period, individuals are encouraged to reach out to the different investigative units within the SRA for further assistance and are encouraged to make contact with the various parties involved when they feel they need more information.

Training managers said that the SRA is committed to a continuous improvement approach with the provision of more training to IOs on consistency and how to complete the requisite forms. IOs are encouraged to use work instructions as exemplars, check the Standards and Regulations (STaRs) guides and access e-learning via the internal learning hub to complement their on-the-job training.

Guidance and standard operating procedures

The SRA's suite of guidance and standard operating procedures provides ongoing support for the assessment team, including a series of work instructions and scenarios which support the decision-making process. The Enforcement Strategy supports the second limb of the assessment test (is the breach sufficiently serious to justify action?) and states:

'All of our decision makers are required to exercise their judgement on the facts of each case, on the basis of the guidance set out in this document and our suite of decision-making guidance…Our assessment of seriousness will necessarily involve looking at past conduct and behaviour. However, our assessment of any future risk will look forward as well as back…We will take into account all the circumstances, including any aggravating and mitigating factors.'

The above highlights the centrality of the training provided, and the importance of scenario and case study exercises included in the induction week. Further, the flexibility of the training phase (between 9 and 12 months) ensures that those who need additional training are kept within the training pool as required. It was observed as part of the review that the scenarios provided were relatively straightforward, something we return to in our conclusions as a possible area for change.

The internally developed Seriousness Table used during the induction and made available to all IOs when making decisions, distinguishes between a range of actions associated with one-off incidents and patterns of repeated and persistent behaviour, and one-off incidents of sufficient or utmost seriousness. The Enforcement Strategy provides further guidance on what affects the SRA's view of seriousness, outlining the areas where judgement is required including system and human error, environmental issues (i.e. the environment in which the events took place), personal mitigation, personal intent, experience and seniority, regulatory history, and patterns of behaviour. The parameters of what constitutes aggravating and mitigating circumstances are therefore clearly defined for IO decision making purposes.

There are over 40 work instructions available to the assessment team, comprising a range of standard operating procedures that take the individual IO through the entire assessment process, including the triage process, events assessment, identification of case categories, links to the Legal Ombudsman and specialist topics such as Abandonment of practice, Cybercrime and Insolvency. Furthermore, the procedures outline the process of making enquiries with the law firm or individual following a report, conducting regulatory checks, and identifying 'red flags.' They also cover how to write closure letters, with useful paragraphs, for cases which do not pass the assessment test, how to pass out cases (referring the case to another team within the SRA for further investigation), and what to include in the decision summary.

By far the most detailed and comprehensive guide for an IO working within the assessment team was a document called 'Completing the limbs.' This contained detailed work guidance on:

  • how to complete the assessment test, focusing on its purpose
  • how to answer each limb, or question in the assessment test, understanding what information to include and the reasons for each one
  • how to ensure a consistent approach to the application of the assessment test and
  • understanding the link between the test and the final outcome.

The documented procedure represents a clear step-by-step guide that appears to be a vital aid to everyone within the assessment team, regardless of prior experience. It incorporates a range of reflection points that encourage the IO to step back and review their evidence and thought processes. This guidance emphasises that all IOs must provide explanations for their decisions throughout the process. The guidance references the Enforcement Strategy and the seriousness table and reinforces that 'the decision-making process must be clear to any internal person reading it through and the [assessment test] should explain what happens to each issue of the conduct reported.'

The EDI training modules

Overall, the EDI training modules are well-developed, comprehensive and in line with CIPD best practice.

The Conscious Inclusion module provides guidance to employees on what the SRA can do to make the workplace an inclusive one: encouraging a deep sense of belonging, sharing openly and freely, providing opportunities for development and career flexibility, celebrating success and making people feel appreciated, and developing strong leaders who are keen to learn and earn respect.

The SRA highlights its use of engagement surveys and the agreement of plans and actions designed to drive improvements. The number of staff networks within the organisation is commendable. Networks promote racial and gender equality, LGBTQ+ inclusion and wellbeing. They also support carers, working parents and green initiatives, among others.

The EDI module also offers links to policies such as the SRA's Equality, Diversity and Inclusion Policy 2021, the Stress and Wellbeing Policy 2021, and the Transitioning at Work Policy 2021.

SRA employees are encouraged to challenge bias, think critically and practise empathy to create an environment where others feel safe and heard. They are encouraged to monitor decisions and trends over time and to make fair decisions. Such an approach has the potential to take people out of their comfort zones and encourage them to take an outsider's perspective (Noon and Ogbonna, 2021).

SRA staff are also encouraged to seek out diverse views, respect alternative experiences, and demonstrate a willingness to learn and develop. The training encourages staff to be the best, authentic version of themselves and to challenge bias, providing details on how the brain processes information and creates shortcuts. Guidance is also provided on the Equality Act 2010 and on how the SRA regulates solicitors' firms in a way that encourages equality, diversity and inclusion within those firms.

The Spotlight on Unconscious Bias is well-developed and emphasises that individual decision making needs to be complemented by other measures including effective quality assurance, monitoring decisions to observe trends over time, setting guidelines for decision makers, publishing regulatory decisions, involving others in decision making and engaging independent reviewers. Staff are made aware via the training of situations where unconscious bias is likely to occur including 'when evaluating the conduct of solicitors who have been reported to us' and 'when considering various applications made to us by solicitor firms.' This is further reinforced by the commitment to 'introduce higher levels of accountability and transparency to…[the] organisation's processes' and to base individual decisions on facts and information.

In this part of the report, the research methodology for the semi-structured interviews is explained and the findings from the interviews with members of the assessment team are discussed.

Research methodology

Semi-structured interviews, based around a set of questions agreed in advance with the SRA, were conducted in September 2023. The purpose of these interviews was to explore how staff interpret and make sense of the guidance they are given and how they use their judgement as assessment decisions are made.

The majority of the questions were focused on the training and materials the team are given to make the assessment decisions. The aim was to see if the assessment test and guidance were being applied as they were intended to be applied, and to tease out any areas of potential concern or positive aspects of the approach. For example, how are IOs applying their judgement? Are the training and materials helping to manage the exercise of judgement when it is needed? A final broad question on the SRA's EDI training was included and deliberately saved until the end of the interview, because it is separate training but nonetheless relevant to understanding how staff think about EDI and the SRA's expectations when decisions are being made.

The SRA engaged with members of the assessment team and identified a list of staff who were willing to be interviewed. A list of 15 names was passed to the researchers by the Head of Investigations. Protocols were in place to protect the identity of the staff involved so far as possible, and they were given assurances by the researchers that comments would not be attributed. Each person on the list was sent a research brief outlining the purpose of the study and the proposed arrangements for confidentiality, and was invited to take part in the interviews. Once an individual had agreed to take part, a consent form providing further details of the study and the participant's role in the study's outcomes was sent to them for agreement. Once signed, online interview dates were scheduled.

In all, fourteen people responded and fourteen semi-structured interviews, with eleven IOs and three IMs, were conducted within a two-week period. Each interview lasted approximately one hour and was recorded and later transcribed. Data were analysed thematically. The details of those interviewed are not included in this report to ensure the anonymity of those involved. Because the small number of interviews risks the identification of participants, the numbered respondents are not identified by gender or position within the assessment team.

Areas of analysis

Decision-making and networks

Each of the participants agreed that the assessment test served as a key decision-making tool that helped frame their actions. Some believed that the assessment test ensured greater objectivity and prevented IOs from interpreting data differently, removing human judgement and creating a more definable boundary between right and wrong.

'In some ways, it removes the element of individual judgement, and, to some extent, moral responsibility that we have. As we are very low in this process. At the same time, it's almost the first time where questions of right and wrong ought to come in. Moving to that, as it were, very black-and-white process, it becomes much more a question of, does this fit the template? I see here, there is misconduct that fits this template. Therefore, I will become part of an administrative process, rather than a process of human judgement.' (No. 5)
'I mean the assessment threshold test is the bread-and-butter guidance, that's pretty much what I do day in, day out, is decide whether that has been met, whether we can meet it or whether, based on what the information we have, it's probably not going to be met and we need to close it. So pretty much everything I do is framed by the assessment threshold test. So yes, I use that a lot.' (No. 1)

While each of the participants suggested that the assessment test provided the scaffolding necessary to support decision making, some suggested that, as they moved along the experience curve, IOs relied less on the test because the process eventually became embedded.

'The more cases you assess, the more cases you analyse, it just comes as second nature really, because you know exactly what you're looking for.' (No. 7)

Further, issues were raised by some about the level of detail entered into the system by IOs, even where the assessment threshold test (ATT) had been applied in full.

'Sometimes you can see where the corners get cut a bit in terms of the detail that they're putting in there. They're still carrying out an assessment…they're still doing all three limbs, but you do question, you know, not all ATTs are created equal, let's put it like that.' (No. 13)

The process of decision making eventually becomes 'second nature' – but the EDI training that each of the IOs and IMs has access to highlights the danger of the brain creating shortcuts and therefore increasing the potential for bias. The participant below highlighted the danger of patterns of assessment influencing decision making, suggesting that more experienced IOs could go on 'auto pilot', which could lead to the potential risk of unintentional bias.

'I was shocked to hear that we do have this imbalance, in terms of our treatment of people from different ethnic groups. I'm afraid to say, I can believe it, because of that sense of pattern, and that sense of almost informal assessment that we do make.' (No. 5)

One of the most difficult judgements that an IO is required to make is about the seriousness of a breach and how to take account of the aggravating and mitigating circumstances. While IOs work with subject specific guides and have access to a range of other support networks within the SRA, IMs suggest that there is more leeway and flexibility in the decision-making process and therefore some room for personal judgement.

'There is that lateral thinking, thinking outside the box. That is, again, encouraged. Yes, you have a set of set criteria, but that criteria isn't necessarily set in stone. It gives a good basis to ensure consistency and ensure that cases are dealt with fairly, but, yes, think of it holistically, think of it laterally.' (No. 6)

It is clear from the guidance, and emphasised in the Enforcement Strategy, that applying the assessment test does involve the exercise of judgement, which introduces an element of subjectivity. The assessment decision will involve considering factors such as the IO's understanding of the case, whether claims are substantiated and corroborated, and the regulatory history of the individual or firm involved. These judgements come into play in relation to the second limb of the test, dealing with seriousness, where IOs assess a variety of factors which are set out in the Enforcement Strategy and the internal guidance (including the Seriousness Table) covered above.

The IOs are encouraged to think laterally when assessing seriousness, using the guidance provided. They need to distinguish between 'contextual mitigation' and 'personal mitigation' – the latter referring to the 'background, character and circumstances of the individual or firm' (although the Enforcement Strategy makes it clear this is usually more relevant to sanction). Further, the SRA Enforcement Strategy highlights the need to recognise the 'stressful circumstances in which many solicitors and firms are working' as well as the health of the individuals at the time of the event, both of which may have a bearing on the nature and seriousness of the alleged breach. It also emphasises the importance of intent or motivation, with the seriousness of a breach potentially being 'dependent on the intention behind it.' Finally, the Enforcement Strategy references the importance of considering the 'role, experience and seniority of solicitors' as well as 'the culture of an organisation and pressure from peers and managers.'

It is important to note that IOs do not have access to any diversity data about individuals named in the reports they are assessing, but names, titles and pronouns in the papers they see may lead to inferences being made about gender and ethnicity. Some IOs stated that they did not look at the names of the solicitors concerned or assume specific demographics based on details in the report.

'Naturally, yes, when you're profiling a firm you'll look at… I can't say I look at where they qualified. I will always look at when they qualified. I think it's always important to look at, are they senior or are they newly qualified? I think it's only fair, if you've got a newly-qualified solicitor that might've done something, well, it probably explains why they're newly qualified, right.' (No. 14)
'I would say for decision-making, it's on the facts of the case. The actual background of the individual or whatever is irrelevant to this situation because we're looking at the cold, hard facts. What's the allegation? What's the evidence? Is there reg history? It's not like, where do you live and what's your name, and let's just have a look…' (No. 9)

The quotations above suggest that while there is no consideration of personal factors that might shape an IO's perception of the facts, details of when a solicitor qualified are considered. The consideration of aggravating and mitigating circumstances therefore incorporates a degree of human judgement, with some recognition that this may be problematic. This highlights the importance of support structures and additional knowledge sources.

'There isn't a set of 50 factors and we just pile up however many of each, we can think around it all. That does I suppose give a possibility that one particular IO might think of a certain something outside of the box, as it were.' (No. 13)
'My judgement call on a Monday, it may be different from my judgement call on Friday. So, having a group of senior investigators, to which we can take cases on that borderline cases forum, allows us to be more consistent, I think it gives us the opportunity to step outside our feelings on the day. It allows us to be less, ironically, the phrase that popped into my mind was, it allows us to be less individual. I think it takes the risk of excess individuality in these decisions away.' (No. 5)
'We do have senior investigation officers at hand, which basically entails sending them an email. They're very good at responding as well…we have the support of the legal team as well at hand. There's a rota for a legal representative that will be on every day… we have team meetings every week where we discuss cases, and we share ideas.' (No. 4)

IOs recognise that each case is different and that they are continuously learning, but also that their judgement could be flawed. The IOs therefore use additional knowledge sources in the form of informal and formal social networks of support within the SRA. A borderline case forum also provides a further useful opportunity for advice and support. These networks were perceived as central in ensuring greater degrees of consistency.

The perceived impacts of EDI training on decision making were, however, limited. Indeed, participants felt that EDI had more to do with working within the SRA environment and helped to create a supportive workplace culture.

'No, I don't think it (EDI) does play a part. I think it's just a nice place to work and they've embraced the EDI stuff to it, but I don't think that... No, I don't think it actually plays a part in my decision-making at all, no.' (No. 10)

Personal responsibility

Those interviewed were asked where in the SRA they felt accountability for decisions lay. In all cases, participants felt that they were personally accountable, but each had a different explanation as to what that accountability meant. For some, the inclusion of their contact details on correspondence made them instantly accountable; for others there was a sense of a duty of care and a need to provide a good service to each complainant. These responses highlighted the vulnerability of complainants who were often experiencing some form of trauma in their lives. This prompted some IOs to think carefully about their communication with the complainant, sometimes going the extra mile to ensure a degree of sensitivity around the engagement process.

Others felt that the purpose of their training was to increase that sense of accountability and the need to make the right decisions based on the right evidence. IOs are expected to explain their decisions when communicating to complainants and sometimes complainants challenge this, usually when told their complaint would not be taken forward. Some of the IOs expressed degrees of disappointment when feeding back lengthy responses to complainants, only to have those complainants feel that the process constituted a waste of time because the outcome was not what they expected.

'….at the end of the day it's me that's living and breathing this case. I know the nuances of it, whereas when I send it to my manager they might review the correspondence that we've had, the original report that we've had, and some of the evidence…. well, all of the evidence. They are looking at it objectively from an outside perspective. I do feel accountable for it, and it does hurt sometimes when you've put a lot of time and effort into something, and like this morning, you've written a really detailed response with why we're not taking it any further, send it out, and then all of a sudden you get, 'Well, this was a waste of time. You're useless, you're toothless,' all of this. You try not to take it personally.' (No. 12)

Confidence in outcomes

When asked 'how confident are you that assessment decisions are fair, proportionate and transparent?', all the participants felt confident in the decisions that were made and considered them to be fair. They explained this with reference to there being a logical procedure to follow, decisions not being rushed, and all relevant information being captured as a result.

Participants agreed that their assessment decisions were proportionate, in that the purpose of the process was to identify only the serious cases to pass on for investigation. One participant reflected further about the question of proportionality, recognising the impact on a solicitor if a complaint about them was taken forward for investigation:

'I feel that fair and proportionate, are, interestingly, in tension with one another. To me, a fair decision focuses on the facts of the individual case. A proportionate decision focuses on our response. Is it proportionate to put a formal investigation in place that may take two or more years, and put a solicitor's life on hold sometimes, with some of these sole practitioners?' (No. 5)

This demonstrated an empathy and an awareness of process beyond the assessment decision.

The quote suggests that 'fair' is based on the facts of the case and that the assessment test enables IOs to focus on the facts. However, 'proportionate' relates to the SRA's overall response, and Participant No. 5 shows that whilst an IO makes decisions based on the assessment test, they may have more personal views about the extent to which the precise nature and duration of the subsequent investigation could then be considered proportionate.

The Enforcement Strategy does encourage a degree of flexibility around the factors that might be considered pivotal to any given case when considering issues of proportionality. When applying the test of seriousness in the second limb of the assessment test, complex judgements must be made about what is proportionate in any context. Here, consideration of mitigating and aggravating factors is crucial but challenging. The above quotation suggests that the current process may result in a fair response, but the subsequent impacts of any further investigation deriving from that initial response may have a disproportionate impact on the day-to-day operations of the legal firm in question.

Participants felt that their decision making was transparent, and it is clear that the process itself and the factors taken into account in coming to a decision are published on the SRA's website. But some participants spoke about how difficult it was to explain the reasons for their decision, in particular to complainants who were disappointed the SRA was not taking their complaint forward for investigation. This echoed the response of participant No. 12 in the section above. In relation to the challenge of communicating their decisions to complainants, one participant said:

'I think the decisions that we come to are fair. I think we could probably be more transparent' (No. 14).

Although guidance is available to the assessment team on writing final decision letters, with a range of standard paragraphs to help with the phrasing of these communications, participant No. 14 and others felt that the explanation and level of detail provided in letters to complainants could be improved.

This report outlines research conducted to review the SRA's guidance for initial assessment decision making and to evaluate the extent to which the SRA is consistently applying its standard operating approaches.

The methods involved a review of the SRA's training and induction materials, including a one-day visit to the SRA's headquarters, desk-based analysis of a range of documents and procedures, and fourteen semi-structured interviews with Investigation Officers and their managers. The different methods revealed several key findings.

The training materials, online guidance and standard operating procedures are, for the most part, as robust as they can be. IOs also have access to a range of networks including a borderline case forum which considers any complexities and where they can gain further clarification on decision making.

Many of the resources available to the assessment team include a range of training scenarios, and it is impossible to cover every possible scenario given the diversity and multiplicity of the reports received. Nonetheless, it might be useful to augment the portfolio of complex case scenarios and decision guides. At no point was there a complex example in the training materials examined, although the 'Completing the limbs' resource does provide more nuance about the complexities associated with decision making.

The participants each observed that decision making within the assessment team is based on 'objectively assessing evidence.' The assessment test structured the decision-making process and this structure supported IOs in focusing on the facts of the case. IOs and IMs suggested that the materials they have access to about applying the assessment test (including the Enforcement Strategy) are 'bread and butter guidance' that provides structure to decision making.

The EDI training modules are comprehensive and can be considered a 'best practice' tool aligned to expectations of an employer of choice. The modules encourage authenticity and accountability inside the organisation, and to a large degree the semi-structured interviews confirmed a deep sense of personal accountability among those working in the assessment team, with a duty of care and a desire to humanise contact wherever possible.

While application of the assessment test is broadly an objective process, the second limb addresses the issue of seriousness and therefore connects to the Enforcement Strategy which requires IOs to think laterally as well as reach out to broader knowledge networks. Those networks within the SRA are multifarious and each of the participants in the interviews highlighted the importance of their direct team and those more specialist internal networks. It was apparent that those networks were particularly important, and that everyone knew where to go and who to ask for further advice, support, and information. This suggests adequate socialisation, information, and network support within the SRA for the assessment team members. The social/informational support mechanisms available to IOs in the team are robust enough to ensure that all staff can sense-check and obtain advice in a timely manner. Supports are both informal and formal and include peer groups, in-house legal and subject matter experts, access to senior IOs and IMs as well as a series of developed topic guides. Participants suggested that these supports helped to ensure consistency of decision making.

Further, a range of factors are used in assessing the seriousness of any breach. IOs need to select the contextual variables that apply and consider the extent to which these can be treated as aggravating or mitigating circumstances. In examining context, IOs may refer to a wide range of factors, including the experience of the firm or individual solicitor, the size of the firm, the regulatory history and how long the firm has been trading. When such factors are considered, they are recorded in the case file and used to support final decisions. Each of the participants was clear about which factors needed to be considered and suggested that the various standard operating procedures and work instructions are clear and were being followed consistently. But because they were required to make their own judgement on the various factors available to them, it was recognised that different IOs could arrive at the same conclusion even if the precise route by which they got there was slightly different.

Each of the participants stated that they did not look at names but instead focused on the facts of the case: 'the actual background of the individual is irrelevant to this situation because we're looking at the cold, hard facts.'

While work has been done to ensure that assessment team members justify their decisions via the assessment test proforma, IMs suggested that there remain instances where insufficient detail is provided by the IO. Therefore, more work could be done to ensure the quality and detail of the case decision notes. For those interviewed, this represented the weakest area of the assessment team's operations.

Members of the assessment team each felt that they were personally accountable for the decisions they made: 'It's up to me to make the right decision for the right reason…obviously I want to make sure that I am doing the right thing.' This was further reinforced by the 'hands-on' and personalised nature of evidence gathering and the emphasis placed on team members to show how they came to any given decision.

Overall, participants believed that the decisions made by the assessment team were 'fair, proportionate and transparent.' They felt that fairness was assured by the application of a logical process which captured all relevant information. They felt that the seriousness test itself helped make sure their decisions were proportionate, albeit with one participant reflecting on the impact on the solicitor concerned if the case proceeded for investigation. And participants agreed the decision making was transparent but there was room for improvement in explaining the reasons for their decisions, in particular, to complainants whose reports were not being taken forward for investigation.

Details of our suggestions for further consideration by the SRA can be found in the main report.

Noon, M., & Ogbonna, E. (2021). Controlling management to deliver diversity and inclusion: Prospects and limits. Human Resource Management Journal, 31(3), 619-638.