
Stay Secure: How to Engage with AI Tools Safely

13 June 2024

Data is the new currency of today’s world, and just like money it must be protected and safeguarded. When making financial investments on behalf of your clients, it’s important that you understand where their money is sitting, how it’s being invested and used, and how sound the firms involved are. The same goes for their data!

If you’ve been reading our newsletters and following our work, you already know that we strongly encourage firms to engage with AI tools to supercharge their work and to improve both productivity and efficiency. However, this must be done safely, and due consideration should be given before adopting any AI tool.

AI Policy

Before you do anything, you should have an AI policy in place that clearly outlines how AI tools may be used in your firm: what’s allowed and, most importantly, what’s not allowed. This will help you avoid embarrassments such as Samsung employees leaking confidential source code by entering it into ChatGPT, or the other data breaches reported worldwide that involved entering client data into ChatGPT.

The UK government has set out an ambitious plan to become a “global AI superpower”. According to The Global AI Index, the UK currently ranks fourth (see image below), losing ground on infrastructure (24th) and operating environment (40th). As detailed on their website, the Global AI Index is underpinned by 111 indicators, collected from 28 different public and private data sources and 62 governments. These are split across seven sub-pillars: Talent, Infrastructure, Operating Environment, Research, Development, Government Strategy and Commercial.

Despite the UK government’s plans and the relatively high ranking, there is surprisingly little official guidance to help firms develop the policies that would make AI adoption easier. More can be found at the EU level, and from the ICO on data protection and AI specifically.

AI Governance and Regulation Timeline

The UK’s principles-based regulatory framework for AI

In its principles-based regulatory framework for AI, the UK government outlined five principles that it expects UK regulators to interpret and apply within their remit:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

In April, the FCA and the Bank of England responded to this. See the FCA’s update and the Bank of England’s update for more details.

The EU AI Act risk-based approach

The EU Parliament’s priority was to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, while still boosting AI innovation. The AI Act aims to ensure that AI systems are overseen by people, rather than by automation alone, to prevent harmful outcomes. The EU adopted a risk-based approach to AI with corresponding regulation (see image below).

As you will note from the above, the main difference between the two regulatory approaches is that the UK’s principles-based approach takes the form of regulatory guidance, while the EU’s risk-based approach is prescriptive legislation. However, the UK hasn’t completely ruled out legislating for specific AI use cases in the future.

What do I need to do based on the UK’s principles-based regulatory framework?

Here are some suggestions on what you can do to help you safely engage with AI:

  1. The leadership team should drive the use of AI in your business with the appropriate people and support in place. (See more about this in the section on ‘Create an AI project team’ below).
  2. Have in place an up-to-date risk assessment on the use of AI in your business (including risks related to AI used by your service suppliers and outsourced team(s)) and adopt the appropriate levels of risk mitigation. You can use the FS-ISAC Generative AI Vendor Risk Assessment Guide.
  3. Have in place an up-to-date AI Policy. Implement it with appropriate systems, processes and controls, and document your ongoing governance to meet your legal requirements (e.g. GDPR) and the regulatory guidelines (e.g. The UK’s principles-based regulatory framework for AI).
  4. Build/adopt safe, secure, client centric, and robust AI systems.
  5. Understand, and be able to explain, how AI is being used in your business. Be transparent in communicating this information. Understand how your outsourced service providers and any collaborators with whom you share data are using AI too.
  6. Know and understand your legal and regulatory obligations regarding the existing/proposed uses of AI in your business. 
  7. Understand how you and your staff are using AI, and what systems, processes and data your AI tools can access. Ensure all client and business data is protected.
  8. Ensure fair and ethical use of AI in your business.
  9. Make sure you have clear information, documentation and processes to deal with any challenges to your business regarding harmful outcomes or decisions generated as a result of your use of AI.
  10. Do your due diligence before adopting AI tool(s) and service(s) in your business.

“The industry should aim for a rapid adoption of AI tools to deliver efficiency, a better customer experience, and a more robust sector. This will require all involved, from senior management to technology and product teams in financial institutions, and their counterparts in regulation and technology to get up to speed quickly on existing and emerging risks to be managed”.

The Impact of AI in Financial Services: Opportunities, Risks and Policy Considerations, UK Finance (Oliver Wyman)

Do your homework – due diligence, due diligence, due diligence

There is a lot to investigate when choosing AI tools to use. To this end, we have created a list of due diligence questions that we believe you should be considering. You can download your copy here.

We appreciate that it’s hard to know everything about AI; however, we’d recommend you find out as much as possible about the tools you are considering adopting, as you have a responsibility to know how the tools you use work. In fact, if you expose a tool to sensitive data or conversations, you need a clear idea of how that data is then stored and used, and what the risks involved are.

For example, have you ever considered the right to be forgotten? Would it cause a problem if a user wanted themselves removed from all datasets? If third-party solutions are used as part of the tool, are there any limitations in the software licences or patents involved? Does the tool provide audit trails and logs for compliance monitoring? Does it comply with GDPR (not all of them do!)?

We appreciate that our list of questions is extensive and that you are far more likely to get answers from firms that are sector specific; however, we encourage you to give our document some thought and obtain as much information as possible to minimise the risks.

Create an AI project team

Experiment safely by creating a cross-functional AI team, made up of individuals who represent ops, tech, financial planning, paraplanning, administration, client servicing, and marketing, to ensure diversity of thought and perspectives. If you have a very small team, you may want to involve everyone and engage some external experts in the field (e.g. a Chief AI Officer or Chief Technology Officer). Take an Agile project-management approach and start with something small but scalable.

An effective AI team includes technical experts, business leaders, and professionals who understand the ethical, regulatory, and business implications of AI technology. Equally, a single individual can often cover several of these roles, and individuals can be brought onto the team on a project basis to fulfil a particular role when needed. Note that demand for AI roles is increasing, while the number of people with the expert knowledge is limited, and fewer still combine that knowledge with financial services experience.

Who would be on your AI team to enable the business to handle the unique challenges and opportunities presented by AI?

Is the tool worth the investment?

There are thousands of tools currently available and new ones are launched daily. Directories such as https://www.futuretools.io/ or https://www.toolify.ai/ are a great resource if you want to see what’s available across various business functions and sectors. There are also multiple sector-specific tools already available in the UK, designed specifically for financial planning firms. The tech space is fast moving, so we expect to see a lot more sector-specific tools by the end of the year.

So, how can you decide which tool to use? After all, many are start-ups, and not all of them are going to succeed. Before you invest both time and money, do your best to assess both the tool and the supplier. In addition to using our due diligence questions, which will quickly rule out some of the companies, you may want to use an evaluation matrix to assess the AI tool’s effectiveness, compliance, and strategic alignment with your business goals. In selecting the tool that best meets your needs, you could consider several of the following elements:

1. Compliance and Regulatory Adherence

  • GDPR and Data Protection: Evaluation of AI tools’ compliance with GDPR and other data protection legislation.
  • FCA Guidelines: Assessment of adherence to the Financial Conduct Authority’s regulations and guidelines. Remember the UK Government’s five principles for AI regulation: 1) safety, security and robustness; 2) appropriate transparency and explainability; 3) fairness; 4) accountability and governance; and 5) contestability and redress.
  • Ethical Standards: Review of the tool’s alignment with ethical AI frameworks and standards.

2. Technical Performance

  • Accuracy: Measurement of the tool’s precision and accuracy.
  • Scalability: Assessment of the tool’s ability to scale operations without a loss in performance.
  • Features and Functionality: Interfaces, functionalities, and capabilities. Does it do the job for you? Is it easy to use? Use the MoSCoW method to check that the AI tool meets your needs.
  • Transparency: Openness and clarity of AI systems, particularly regarding their operations, decision-making processes, and underlying algorithms. Can you explain how it works?
  • Innovation Potential: Will the AI tool continue to be relevant in the future? What makes it future-proof?

3. Risk Management

  • Security and Privacy Risks: Analysis of the tool’s security features, its ability to protect against cyber threats, and its handling of sensitive data.
  • Operational Risks: Evaluation of the tool’s robustness and reliability in various operational scenarios.
  • Vendor Risks: Assessment of risks associated with the AI tool’s vendor, including financial stability and reputation.

4. Strategic Impact

  • Competitive Advantage: Evaluation of the tool’s potential to provide a competitive edge through innovation or cost savings.
  • Customer Experience: Assessment of how the tool improves customer engagement and satisfaction.
  • Market Adaptability: Review of the tool’s flexibility to adapt to changing market conditions and customer needs.

5. Integration and Implementation

  • Compatibility: Assessment of the AI tool’s compatibility with existing systems and infrastructure. Is the tool a fit with your current tech stack?
  • Implementation Complexity: Evaluation of the ease or difficulty of integrating the AI tool into current workflows.
  • Training and Support: Review of the training and support provided by the vendor for effective implementation. Is there an active user community? Responsive support (SLA)? Comprehensive, accessible and up-to-date documentation?

6. Cost-Benefit Analysis

  • ROI Estimation: Calculation of the return on investment over a specified period (7% to 10% per annum minimum).
  • Total Cost of Ownership: Comprehensive analysis of all costs associated with the deployment and maintenance of the AI tool, including licensing fees, number of individual users with access, licence usage limits, etc.
  • Break-even Analysis: Determination of the point at which the benefits of the AI tool outweigh the costs.
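To make the break-even and ROI ideas above concrete, here is a minimal sketch in Python. Every figure (licence cost, hours saved, hourly staff cost) is a hypothetical assumption for illustration only; substitute your own estimates.

```python
# Hypothetical figures for illustration only - substitute your own estimates.
annual_licence_cost = 6_000     # team-wide subscription (GBP per year)
implementation_cost = 4_000     # one-off setup, integration and training (GBP)
hours_saved_per_month = 40      # estimated time saved across the team
hourly_cost = 35                # blended staff cost per hour (GBP)

monthly_benefit = hours_saved_per_month * hourly_cost   # value of time saved
monthly_cost = annual_licence_cost / 12                 # ongoing subscription
net_monthly_benefit = monthly_benefit - monthly_cost

# Break-even: months until net benefits repay the one-off implementation cost
break_even_months = implementation_cost / net_monthly_benefit
print(f"Break-even after {break_even_months:.1f} months")

# Simple first-year ROI: net gain over total first-year spend
annual_net_benefit = net_monthly_benefit * 12
roi = (annual_net_benefit - implementation_cost) / (
    annual_licence_cost + implementation_cost
)
print(f"First-year ROI: {roi:.0%}")
```

With these illustrative numbers the tool pays for itself in under five months; if your own estimates push break-even beyond the licence term, that is a strong signal to renegotiate or walk away.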

7. User Feedback and Satisfaction

  • Internal User Feedback: Collection and analysis of feedback from the team members who interact with the AI tool.
  • Client Feedback: Gathering and evaluating client opinions and satisfaction levels regarding services enhanced by the AI tool.
  • Net Promoter Score (NPS): Measurement of the likelihood of users recommending the AI tool to others.

This evaluation matrix is best used in conjunction with the list of due diligence questions. As the AI landscape in financial services evolves both can be regularly updated to reflect the changes. It’s also important to note that the specific metrics and methods of assessment will vary depending on the particular AI tool and its intended use within the organisation.
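One simple way to apply an evaluation matrix like this is as a weighted scoring model: rate each tool 1 to 5 against each category, weight the categories by importance, and compare totals. The weights, category names, and tool scores below are hypothetical assumptions; adjust them to reflect your firm’s own priorities.

```python
# Hypothetical weights (must total 1.0) and 1-5 scores, for illustration only.
weights = {
    "compliance": 0.25,
    "technical_performance": 0.20,
    "risk_management": 0.20,
    "strategic_impact": 0.15,
    "integration": 0.10,
    "cost_benefit": 0.05,
    "user_feedback": 0.05,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should sum to 100%

tools = {
    "Tool A": {"compliance": 5, "technical_performance": 4, "risk_management": 4,
               "strategic_impact": 3, "integration": 4, "cost_benefit": 3,
               "user_feedback": 4},
    "Tool B": {"compliance": 3, "technical_performance": 5, "risk_management": 3,
               "strategic_impact": 5, "integration": 3, "cost_benefit": 4,
               "user_feedback": 4},
}

def weighted_score(scores: dict) -> float:
    """Sum each category score multiplied by its weight."""
    return sum(weights[category] * scores[category] for category in weights)

# Rank the candidate tools, highest weighted score first
for name, scores in sorted(tools.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Note how the weighting changes the outcome: Tool B scores higher on raw capability, but because compliance and risk management carry the most weight here, Tool A comes out ahead.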

The Fourth Industrial Revolution

“There are four main effects that the Fourth Industrial Revolution has on business—on customer expectations, on product enhancement, on collaborative innovation, and on organizational forms”. 

The Fourth Industrial Revolution: what it means, how to respond – World Economic Forum

Are you keeping up with the scale, scope, complexity and velocity of AI and the Fourth Industrial Revolution? Hopefully we are helping you and your financial services business adapt faster, safely, and securely.


