Interview about responsible AI and cybersecurity governance with Brad Lin (Deloitte)

This week's interview is with Brad Lin, Partner, Technology & Transformation, at Deloitte.


Could you give a quick introduction of yourself and your role at Deloitte?

I'm Brad Lin, one of the cyber partners based in Hong Kong. We provide services across the cyber space, including cyber strategy, data privacy, data protection, identity, cloud security, AI, and governance, risk and compliance (GRC). I've been working at Deloitte for roughly six and a half years now. Before that, I worked for another Big Four firm and in-house in several IT assurance functions. I started my career in IT consultancy in the Netherlands.

Could you explain how AI or cyber security frameworks are implemented in a company?

Every framework, whether for AI, cybersecurity, or technology risk management, has several key components. These include governance (how is it organized?), operations (how is the technology managed?), and the technology itself. At Deloitte, we call this structure governance, process, people, and technology.

A significant portion of our work in the AI GRC domain focuses on the governance part: helping organizations structure their teams and decision-making processes, from operational staff up to supervisors and senior management, and across various streams, such as data and cloud. Risk management and security control also play a crucial role in governance. Organizations must focus on monitoring, metrics, and reporting to risk or technology risk committees. The next step is to define operational procedures: which rules, processes, and procedures the organization needs to comply with, as well as implementation guidelines.

In what kind of sectors are most companies active that come to Deloitte to ask for advice on (Gen) AI? 

All sectors are very active, with more and more initiatives in the AI space. Most of our discussions are with retail, banking, and insurance: companies that hold large amounts of data and need to make decisions and predictions, including through automation, data analytics, and data processing. More and more companies are pioneering the use of generative AI in highly advanced ways; marketing is a typical area.

Do you encounter shortcomings in organizations regarding the implementation of AI? 

AI presents complex challenges because it is accessible across the whole organization, impacting departments such as HR and legal as well as business processes. Unlike other disciplines that follow more standardized frameworks, the implementation and management of AI depend on specific use cases, making a one-size-fits-all approach difficult.

For example, insurance companies could use AI to automate claims processes to a certain extent. This would bring more speed and efficiency to claims processing, but human experts would still be needed to manage exceptions and to verify results. Implementing AI in this case doesn't just alter the process but also everything around it, including business continuity, since the AI system becomes a key operational component. It is therefore crucial for businesses to keep updating their governance and related arrangements.

How does AI impact cybersecurity strategies and how should companies start using AI?

AI is already used in many security solutions, including at our company. It can identify threats earlier through faster pattern recognition, analysis of historical events, and faster correlation of events. AI can also be used to develop and improve security testing.
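Pattern recognition of this kind often starts from a simple statistical baseline. The sketch below is purely illustrative (not a Deloitte tool): it flags unusual spikes in a stream of security-event counts using a z-score, with a hypothetical threshold and made-up data.

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, threshold=2.0):
    """Return indices of time windows whose event count deviates strongly
    from the baseline (z-score above `threshold` standard deviations)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:  # perfectly uniform history: nothing stands out
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Hypothetical hourly counts of failed logins; the spike at index 6
# could indicate a brute-force attempt.
counts = [12, 9, 11, 10, 13, 11, 240, 12]
print(flag_anomalies(counts))  # → [6]
```

Real security products combine many such signals and learned models; the value of even a toy baseline is that it turns "faster pattern recognition" into something measurable.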

Not just for cybersecurity, but in any application of AI, the most important thing is to start with a clear, well-defined business case. Narrowing your problem down to something solvable is crucial.

How would you advise companies to navigate the continuous regulatory developments around AI, and how do you stay updated?

In Hong Kong, there are no formal AI regulations yet, only guidelines from the HKMA and the PCPD. However, China and the EU have already introduced some regulations. What we normally do, not only for AI but also for cybersecurity and data privacy, is monitor these requirements and translate them into controls that are integrated into a company's governance structure. I expect Hong Kong's regulatory framework on AI to change in the near future, so it is crucial to stay up to date on these developments.

We have access to regulatory information via our broader network. Our AI and GRC leadership is based in Australia, and we have a community that keeps track of regulatory developments. We also stay updated on cybersecurity developments through industry associations; Deloitte is involved in several of these and maintains strong connections with them. Additionally, we keep a close watch on industry news and developments.

What are the biggest risks of generative AI? And how should people navigate around this? 

From a security and compliance angle, AI makes attackers' lives easier. For example, phishing emails can become more realistic, and creating deepfakes by generating or manipulating images or videos has become much easier with AI technology.

Another concern we have been discussing with companies is ensuring that AI is used in a trustworthy way. Prioritise educating employees: teach them how to use AI and which risks to be mindful of. At this moment, if users blindly rely on AI output, it poses a big risk, because AI can produce wrong output, which could lead to unintended but disastrous outcomes. To safeguard ourselves from misinformation, we need to be diligent in verifying information sources, and organizations should teach their workforce to use AI properly. The key question is: how do you use AI without becoming overly dependent on it?

As a company, another key responsibility is to test AI outputs for reliability, whether you build an internal AI solution or buy one externally. A good question to ask is: if I didn't use AI, would I reach the same conclusions? We need to acknowledge and understand each of the risk domains related to AI, particularly in the context of the specific use case, and assess them each time.

How do you test AI outputs for reliability?

We have many discussions with clients on how much testing is sufficient. The answer depends on the use case and the application of the AI model. AI models are trained on specific data sets, and companies expect them to behave in a certain way. There is a parallel between AI testing and traditional system development: when deploying a new system, companies often run it alongside the existing one to ensure that it meets expectations, or at least matches the old system's output.

The challenge of AI testing lies in determining the number of tests required. There is no fixed answer; whether 100 or 1,000 samples are enough is determined case by case. It is not so different from implementing a new automated system, because in both cases you must test the accuracy of the output.
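The parallel-run idea described above can be sketched in a few lines. Everything below is an illustrative assumption (a toy claims-approval rule and a synthetic sample), not an actual client setup: the new model and the legacy system are given the same inputs, and the agreement rate over a chosen sample size becomes the metric to judge.

```python
import random

def parallel_run_agreement(new_model, legacy_model, samples):
    """Run the new AI model and the legacy system on the same inputs and
    return the fraction of cases where their outputs match."""
    matches = sum(1 for s in samples if new_model(s) == legacy_model(s))
    return matches / len(samples)

# Hypothetical stand-ins: a legacy rule that auto-approves small claims,
# and an "AI model" that mostly agrees with it.
def legacy(claim):
    return claim["amount"] < 1000

def ai_model(claim):
    return claim["amount"] < 1100  # disagrees only on a narrow band of claims

random.seed(0)
test_claims = [{"amount": random.randint(1, 5000)} for _ in range(1000)]
agreement = parallel_run_agreement(ai_model, legacy, test_claims)
print(f"agreement: {agreement:.1%}")
```

The open question from the interview, how many samples are enough, shows up here as the length of `test_claims`: a larger sample narrows the uncertainty around the measured agreement rate, and what counts as acceptable is a case-by-case decision.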

Another difficulty concerns testing generative AI, because it creates content that didn't exist before rather than making predefined predictions. When testing generative AI, the tester needs a deep understanding of the subject matter.

What are future challenges of AI adoption?

AI will continue to shape organizations by influencing governance, risk, and compliance. A key framework guiding AI adoption is Trustworthy AI, which evaluates the adoption of AI on multiple aspects such as accountability, bias, security, and privacy.

One of the challenges in AI governance is ensuring that these aspects are covered in the development life cycle. Companies that build AI solutions must assess whether these aspects are sufficiently covered and whether they comply with global standards, such as those from ISO.

How do you use AI in your daily life? 

I enjoy using AI in my daily life, as I believe it is very useful and can speed up a lot of processes. For example, I use it to gain quicker access to information, although of course it still requires human oversight and critical thinking. At Deloitte, we have also developed various AI-based tools to make our work easier and faster. For example, we have a search tool for reference cases, credentials, and past work that helps in creating proposals, and we have document search and querying using natural language. However, we're still learning how to use AI optimally and how to balance automation with human expertise to prevent us from becoming over-reliant.

Do you have anything else that you want to mention? 

AI is a promising development, but its success depends on how we use it. We need to stay educated and cautious to ensure AI is applied correctly. I don't believe it's there to replace people; it's there to make things easier and faster. A useful analogy is the development of factories: when automation was introduced, many people feared being replaced by machines, yet no factory operates without people on the floor today. Automation led to faster production, more efficiency, and a shift toward more skilled workers with broader expertise. Similarly, AI allows employees to focus on higher-value tasks rather than eliminating them from the workforce.