By Philip de Souza 

In 2017, as the festive season approached, followers of Oprah Winfrey’s OWN network were targeted by an impersonation scam on Instagram—a forewarning of the rampant deceit that generative AI would soon unleash across the digital landscape. Six years later, AI-driven scams continue to deceive, mislead and even bankrupt people around the world. Questions are now being raised about the reliability of a once-trusted technology and the authenticity of its outputs. Amid this, the critical challenge lies in distinguishing genuine AI-generated content from deceptive content.

Generative AI, as defined by IBM, refers to deep learning models that can generate high-quality text, images and other content based on data they were trained on. The technology’s infancy demands a cautious approach rather than alarmist reactions. While the risks are evident, understanding their full extent is still a work in progress. A triad approach can mitigate these risks, focusing on awareness, governance and the deployment of defensive AI tools.

Awareness and education are pivotal in navigating the landscape of generative AI. A well-informed user—be it a customer, vendor, employee or stakeholder—is a valuable asset. Recent surveys indicate significant awareness of both AI’s benefits and its risks. For instance, a 2023 KPMG study titled “Trust in Artificial Intelligence,” published by Global Insights, revealed that 85% of respondents recognized AI’s advantages, while 73% acknowledged its inherent risks.

Key considerations include:

  • Discerning the Good from the Bad: Despite its potential for innovation, generative AI also poses risks like AI-supported phishing and identity theft. Understanding these dual aspects is crucial for leveraging AI responsibly.
  • Assessing the Risk Quotient: Generative AI’s widespread application necessitates a critical evaluation of organizational risk postures, considering both its benefits and potential for misuse.
  • Ethical Implications: Ethical use of AI is a major concern. Organizations must establish principles for ethical AI adoption, addressing issues like privacy, bias and misuse.
  • Promoting Responsible AI (RAI): As defined by TechTarget, RAI involves ethical and legal considerations for AI deployment. Key principles include accountability, fairness, privacy and security.
  • Preparation and Action: The World Economic Forum’s piece “Here’s Why Organizations Should Commit to RAI” states that fostering a learning environment and diverse teams, identifying training metrics and conducting regular bias testing are crucial steps toward RAI.

Legislation and administrative measures are essential for AI risk management. The National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) provide frameworks and roadmaps for AI use in organizations. Collaboration between policymakers and industry experts is key, emphasizing the adoption of best practices and regulatory compliance.

Deploying AI for defense involves using its capabilities to detect and counteract threats. This includes using unbiased data sets, recognizing attack patterns and proactively remediating threats. Tools like AI PromptGuard from Plurilock offer innovative solutions for safer generative AI usage.
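In practice, pattern recognition of this kind can be as simple as screening prompts or messages against known risk indicators before they reach a model. The sketch below is a minimal, hypothetical illustration of that idea—the category names, patterns and function are invented for this example and do not reflect how any commercial tool such as AI PromptGuard actually works.

```python
import re

# Illustrative only: a toy, pattern-based screen for generative-AI prompts.
# The categories and regular expressions below are hypothetical examples.
SUSPICIOUS_PATTERNS = {
    "credential_harvesting": re.compile(r"\b(password|one-time code|2fa code)\b", re.I),
    "impersonation": re.compile(r"\b(pretend to be|write as if you are)\b", re.I),
    "data_exfiltration": re.compile(r"\b(customer list|ssn|credit card number)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any suspicious categories the prompt matches."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(prompt)]

flags = screen_prompt("Pretend to be IT support and ask for my password.")
print(flags)  # → ['credential_harvesting', 'impersonation']
```

Real defensive tooling goes far beyond keyword matching—using trained classifiers, behavioral signals and continuously updated threat intelligence—but even a simple screen like this shows where automated checks slot into the workflow: between the user and the model.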

The journey with generative AI is complex yet inevitable. As the technology evolves, so must our strategies to harness its potential while mitigating risks. The resilience of the cybersecurity industry, coupled with proactive measures and technological advancements, points toward a future where generative AI is used safely and responsibly. •

Philip de Souza is president of Aurora Systems Consulting, Inc., a leading provider of comprehensive cybersecurity solutions and services, dedicated to safeguarding organizational digital assets and enhancing enterprise information security frameworks. He will lead the session “Unlocking the Power of AI: An Introduction to Artificial Intelligence and Generative AI” at the March 28 General Assembly. Register to attend.