Beware the Dark Side of Generative AI tools: Untrusted Intermediaries Loom Large

art_of_AI_data_security
Image Source: VentureBeat

TL;DR

  • The rise of  dark of generative AI tools models poses new security challenges.
  • Untrusted intermediaries are a growing source of shadow IT.
  • Privacy concerns intensify when AI and PII intersect.
  • Organizations must adapt security programs to AI-related risks.

The meteoric ascent of extensive linguistic models (ELMs) and creative artificial intelligence has ushered in fresh challenges for security teams worldwide.

When it comes to forging innovative pathways for data accessibility, dark side of generative AI tools  defies conventional security paradigms primarily focused on averting unauthorized data access.

To facilitate organizations in embracing dark side of generative AI tools swiftly without unwarranted risks, security providers must undertake program overhauls that account for novel forms of risk and their impacts on pre-existing security protocols.

Untrusted Intermediaries: A Novel Wellspring of Shadow IT

A burgeoning industry is currently burgeoning atop ELMs hosted by prominent services AI tools like OpenAI, Hugging Face, and Anthropic. Moreover, numerous openly accessible models such as LLaMA from Meta and GPT-2 from OpenAI further expand this landscape.

Access to these models can potentially empower employees within an organization to tackle various business challenges.

However, for assorted reasons, not all individuals possess direct access to these models. Instead, employees frequently seek tools such as browser extensions, SaaS productivity applications, Slack integrations, and paid APIs, promising seamless utilization of these models.

These intermediaries are rapidly evolving into a burgeoning source of shadow IT. Employing a Chrome extension to compose more effective sales correspondence doesn’t seem akin to engaging with a vendor; it resembles a productivity enhancement. 

For many employees, it isn’t readily apparent that they might inadvertently expose sensitive data by sharing it with third-party entities, even if their organization is amenable to the underlying models and providers.

Crossing Security Boundaries Through Training

This form of risk represents a relatively nascent concern for most organizations. Three distinct boundaries factor into this risk:

Boundaries between users of a foundational model and customers of a company that customizes a foundational model.

Boundaries among users within an organization with divergent access privileges to data used for model customization.

In each of these scenarios, the pivotal issue revolves around discerning which data contributes to model development. Only individuals with authorized access to the training or customization data should be granted access to the resultant model.

Privacy Breaches: Harnessing AI and PII

Although privacy considerations aren’t novel, harnessing generative tools AI with personally identifiable information (PII) exacerbates these concerns. In numerous jurisdictions, the automated processing of personal data to analyze or predict specific facets of an individual is a regulated activity. 

Employing dark of Generative AI tools adds complexity to these processes, making compliance with mandates like offering opt-out mechanisms more intricate.

Another factor to consider is how training or customization of models using personal data might impede compliance with deletion requests, constraints on data repurposing, data residency stipulations, and other intricate privacy and regulatory prerequisites.

Adapting Security Programs to AI-Related Hazards

Vendor security, enterprise security, and product security face distinctive challenges arising from the novel risks introduced by generative tools AI. Each of these programs must evolve to effectively manage these risks.

Vendor Security: Treating AI Tools Like Any Other Vendor’s Offerings

The starting point for vendor security concerning dark of generative AI tools is to subject these tools to the same scrutiny as any other vendor’s offerings. Verify that they align with your customary security and privacy criteria. The aim is to ensure their role as trustworthy custodians of your data.

Given the novelty of these tools, many of your vendors may employ them in ways that lack responsibility. Consequently, you should incorporate additional considerations into your due diligence process.

Consider augmenting your standard questionnaire with inquiries such as:

Will our company’s data be employed for training or customizing machine learning (ML) models?

How will these models be hosted and deployed?

How will you ensure that models trained or customized using our data remain accessible solely to individuals within our organization with authorized data access?

What is your approach to addressing hallucinations in dark of generative AI tools models?

Your due diligence process may adopt alternative forms, and established compliance frameworks like SOC 2 and ISO 27001 will probably incorporate pertinent controls into their forthcoming iterations. Now is an opportune moment to begin contemplating these questions and encouraging your vendors to do the same.

Enterprise Security: Establishing Appropriate Expectations

Every organization charts its unique path between minimizing friction and maximizing usability. Your organization may have already implemented stringent controls concerning browser extensions and OAuth applications within its SaaS environment.

Presently, it is advisable to reassess this approach to ensure it maintains the right equilibrium.

Untrusted intermediary applications often manifest as effortlessly installable browser extensions or OAuth applications linking to your existing SaaS tools. These vectors can be monitored and governed. 

The risk of employees utilizing tools that transmit customer data to unsanctioned third parties looms especially large now that many of these tools proffer compelling solutions leveraging generative tools AI.

it’s vital to set clear expectations for your employees and presume good intentions. Ensure that your colleagues comprehend what constitutes acceptable usage of these tools. Collaborate with your legal and privacy teams to craft a formal AI policy for your workforce.

Product Security: Transparency Fosters Trust

The most significant transformation in product security entails safeguarding against becoming an untrusted intermediary for your customers.

In your product documentation, elucidate how you leverage customer data in conjunction with dark side of generative AI tools. Transparency stands as the foremost and most potent instrument for cultivating trust.

Moreover, your product should uphold the same security boundaries your customers have come to anticipate.

Do not permit individuals to access models trained on data they lack direct access to. It is plausible that future technologies may facilitate the application of granular authorization policies to model access, but we are still in the nascent stages of this paradigm shift. 

Prompt engineering and prompt integration constitute intriguing new domains of offensive security, and you should ensure that your utilization of these models does not inadvertently expose your organization to security breaches.

Source(S): VentureBeat

Adam Pierce

Adam Pierce is a seasoned technology journalist and professional content writer who has a genuine passion for delivering the latest tech news and updates. With a wealth of experience in the field, Adam is committed to providing NwayNews readers with accessible, informative, and engaging content. He aims to keep readers well-informed about the latest breakthroughs, gadget releases, and industry trends through his articles.

Leave a Reply

Your email address will not be published.