Artificial intelligence (AI) has spread to nearly every industry as companies and consumers both aim to leverage its efficiencies of scale. Tasks like data analysis, transcription, customer support, and everything in between can be performed using AI to reduce time to results by orders of magnitude. Financial planning is no exception.
According to a survey from global wealth management firms F2 Strategy, over half of their compatriots have an AI project underway already. They’re interested in predictive analytics on market conditions and securities changes over time, optical character recognition to analyse documents, workflow automation, chatbots, and more. The potential is clear – AI can reduce the human time spent on these tasks by as much as 90%. At the same time, more than 60% of firms say they need more education on AI. So while the upside is undeniable, the value-to-risk ratio is less clear.
This dynamic is particularly important for financial planning, where the stakes are higher – families’ and individuals’ money is directly at risk. While bespoke wealth management services typically cater to higher-net-worth individuals, AI makes it possible to offer these services to a broader group of people. Advisors can develop customer profiles and deliver personalised plans based on age, assets, risks, goals, and needs in a fraction of the time, which means firms can offer it to more people. This represents a new market for wealth managers, but also a larger risk pool.
Threat actors use AI too
The cost of cybercrime reached $8 trillion in 2023, according to a report from Cybersecurity Ventures. Clearly, this isn’t a niche threat. It can reasonably be considered among the primary threats every business faces today, and proactive security is therefore a foundation for doing business at all.
We must always remember that threat actors are using AI too. AI offers attackers the same benefits – it’s a force multiplier that allows them to increase the scale and effectiveness of their campaigns. They can even poison the AI model to reveal sensitive information or deliver malicious results. Moreover, employees who are not adequately trained can inadvertently expose sensitive information through the information they input into AI tools, which subsequently incorporate it into their machine-learning activities. We’ve already seen instances of this invalidating intellectual property claims.
Security controls therefore have to be integrated into the entire AI lifecycle, including employee training. Before using any AI tool, organisations must understand the privacy classification of all the data that might be input, the source of the data used to train the AI tools, and the specifics of the security protocols in place to protect sensitive information. This must be part of the AI rollout from day one. Open AI systems carry even more risk, as they’re designed to be accessible to the public, which enables them to learn from a much larger dataset, but also allows manipulation by bad actors.
Closed systems are more secure, but require more hands-on management and model training. Employees should be given in-depth training about the tool, how it works, and how to use it safely – emphasising which data can be used and which should never be exposed to a large language model (LLM) like the kind that power generative AI applications.
Understand the scope and restrict data access
When implementing an AI-based solution, it’s important to identify the scope of the tool and restrict its data access to what’s absolutely necessary to train it. Develop a comprehensive understanding of the privacy of the information, the source of the model’s data, and the native security mechanisms built in. Many AI tools have built-in defences to protect against unethical use – a good example is ChatGPT’s rules that seek to prevent people from using it for nefarious purposes, like building malware. However, it’s also clear that these rules can be bypassed through cleverly worded prompts that obscure the intent of the user. This is one type of prompt injection attack, which is a category of threats unique to AI-based systems. Strong controls must be in place to prevent these attacks before they happen. Broadly, these controls fall under the scope of zero-trust cybersecurity strategies.
AI tools, especially the LLMs that enable generative AI, should not be treated as typical software tools. They are more like a hybrid between a tool and a user. Zero trust programs limit access to resources based on a person’s job function, scope, and needs. This limits the damage an attacker can do by compromising a single employee because it limits the range of lateral movement. We must remember that adding any software tool also increases the attack surface by offering more entry points to an attacker.
Compromising a tool – like an AI tool – that has unlimited access to personally identifiable information, company secrets, proprietary tools, strategic forecasting, competitive analysis, and more could be catastrophic. Preventing this kind of breach must be at the forefront of the strategy-level discussion to implement AI tools from the very beginning. After a cyber security incident, it’s often too late.
Take care to tailor your AI to specific needs.
While most AI tools come with built-in security, organisations must tailor these to their specific needs. They must also go beyond them. Despite similarities, each organisation will have unique use cases, and calibrating defences to match these dynamics is table stakes for cybersecurity in 2024.
When we talk about AI, security is even more important. AI won’t replace financial advisors, but it will take the industry to its next stage of
evolution, and that means new threats. The scale of the models and the data they ingest expand the attack surface exponentially, and one breach can negate any gains a company makes by leveraging AI. Cyber security analysis and control, under a zero trust model, is indispensable for unlocking the full potential of any AI-based tool.
About Lionel Dartnell Lionel has over 25 years’ experience in the ICT industry specialising in Networking, Unified Communications and Cyber Security. From 2014 he has led sales and technical teams among Original Equipment Manufacturers (OEMs), Resellers and Distributors across Sub Saharan Africa. As Check Point’s Security Engineering Manager for the SADC region he leads his team to identify cyber vulnerabilities and deficiencies within customers’ environments, consults on counter measures, best practice and threat prevention; ultimately ensuring they can conduct business safely in the digital world. Lionel is a father of two, an avid rugby supporter and motorcycle enthusiast. |
About Check Point Software Technologies Ltd.
Check Point Software Technologies Ltd. (www.checkpoint.com) is a leading AI-powered, cloud-delivered cyber security platform provider protecting over 100,000 organizations worldwide. Check Point leverages the power of AI everywhere to enhance cyber security efficiency and accuracy through its Infinity Platform, with industry-leading catch rates enabling proactive threat anticipation and smarter, faster response times. The comprehensive platform includes cloud-delivered technologies consisting of Check Point Harmony to secure the workspace, Check Point CloudGuard to secure the cloud, Check Point Quantum to secure the network, and Check Point Infinity Core Services for collaborative security operations and services.