
In a rapidly evolving technological landscape, the use of artificial intelligence (AI) has opened up new horizons for businesses worldwide. However, along with its creative potential, AI also brings considerable destructive capabilities, as highlighted in Mustafa Suleyman's book, "The Coming Wave." The Indian Computer Emergency Response Team (CERT-In) has recently issued a warning about the adversarial threats posed by AI-based language applications like ChatGPT and Bard. Furthermore, the "Cost of a Data Breach Report 2023" by IBM underlines the significant financial repercussions of cyber attacks, pegging the average cost of a breach at US$4.45 million globally and US$2.18 million in India.
In this blog post, we will explore key regulatory areas and measures that businesses can adopt to protect themselves against potential cyber attacks in the AI age.
Under Section 43A of the Information Technology Act, 2000, businesses that handle "sensitive personal data or information" must implement and maintain "reasonable security practices and procedures." Failure to do so may result in liability to compensate affected parties. To comply with this requirement, it is highly recommended that businesses in India pursue ISO/IEC 27001 certification, a standard for information security management systems (ISMS). This certification helps ensure that security practices and standards are commensurate with the nature of the business and the information assets being protected. While the Digital Personal Data Protection Act, 2023, is expected to replace this provision, there is currently no statute specifying security standards for non-personal data.
Additionally, various sectors have specific obligations for data protection, such as the Cyber Security Framework for banks issued by the Reserve Bank of India and guidelines for Critical Information Infrastructure (CII) protection. These obligations extend to stock exchanges, clearing corporations, insurers, and listed entities.
The use of AI by businesses introduces data security and privacy concerns. To mitigate these risks, organizations should consider implementing AI usage policies. These policies should encompass filtration and moderation techniques to prevent the dissemination of malicious AI-generated content, frequent security audits, and multi-factor authentication (MFA) to regulate employee access to AI-based tools. Employees should also be sensitized to AI ethics and best practices, and AI usage should be monitored and audited regularly so that potential threats can be identified and rectified promptly.
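The filtration and moderation step described above can be sketched in a few lines of code. The following is a minimal, illustrative Python example, not a production moderation system: the pattern list and function name are hypothetical, and a real deployment would rely on a dedicated moderation service or trained classifier rather than regular expressions alone.

```python
import re

# Hypothetical blocklist for illustration only; real policies would use
# a dedicated moderation service or classifier, not regexes alone.
BLOCKED_PATTERNS = [
    # Credential-like material that should never leave an AI tool
    re.compile(r"(?i)\b(password|api[_ ]?key|secret[_ ]?token)\s*[:=]"),
    # Identifiers treated as sensitive personal data in India
    re.compile(r"(?i)\b(aadhaar|pan)\s*(number|no\.?)\b"),
]

def moderate_ai_output(text: str) -> tuple[bool, str]:
    """Screen AI-generated text before it is disseminated.

    Returns (allowed, screened_text); matched spans are redacted so
    that flagged output can still be logged and reviewed safely.
    """
    screened = text
    flagged = False
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(screened):
            flagged = True
            screened = pattern.sub("[REDACTED]", screened)
    return (not flagged, screened)
```

In practice, a check like this would sit between the AI tool and any channel that publishes or stores its output, with flagged items routed to the security audit trail rather than silently discarded.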
Businesses should ensure that their contracts with data custodians and clients incorporate well-tailored clauses related to data protection. Inadequate cyber protections by data custodians could result in substantial liability for businesses in the event of data breaches. To mitigate these risks, businesses must secure robust insurance and indemnity provisions.
The age of AI brings both promise and peril to businesses. Safeguarding your organization against the ever-present threat of cyber attacks is crucial. By complying with security standards, implementing AI usage policies, and fortifying contractual protections, businesses can significantly reduce their vulnerability and minimize the potential damage caused by cyber threats. In this rapidly evolving landscape, a proactive approach to cybersecurity is essential to protect your business and customer data.