AI Increased Our Productivity. It Also Increased Our Risk.

By: Oscar Gutierrez, Security Engineer

Since the introduction of ChatGPT in November 2022, the adoption of Large Language Models (LLMs) has accelerated across industries. As organizations embed these technologies into customer-facing applications and internal workflows, the associated security implications warrant deliberate evaluation. As threat actors probe and exploit web applications that incorporate LLMs, organizations must understand and actively manage the resulting attack surface. 

Attack Surface Discovery

Managing your business’s attack surface starts with understanding where LLMs are used across your company. 

LLMs are commonly integrated through API that support use cases such as: 

  • Chat support and virtual assistants 
  • Document and PDF generation 
  • Data analytics and reporting 
  • Image and audio processing 
  • Workflow automation 

In some cases, LLM usage is visible through embedded chatbots. However, attackers often take a more systematic approach, crawling application pages and analyzing network traffic to identify third-party integrations. Requests for these services may indicate LLM usage: 

  • api-iam.intercom.io 
  • js.intercomcdn.com 
  • api.chatbot.com 
  • api.anthropic.com 
  • api.openai.com 

Identifying the presence of an LLM is the first step. Attackers then seek to determine what data sources, APIs, or internal systems the model can access. In enterprise environments, LLM functionality may be embedded in high-value workflows such as financial reporting, analytics pipelines, document processing, or customer data systems. When LLMs are connected to internal databases or privileged APIs, the potential impact of compromise increases significantly. 

Threat actors may attempt to extract information by crafting deceptive prompts designed to manipulate the model’s behavior. For example, an attacker might impersonate internal personnel and request troubleshooting information to elicit system-level details. 

As LLM integrations expand across enterprise systems, so does the attack surface. 

Common Web LLM Attacks

LLMs introduce a distinct set of risks that differ from traditional application vulnerabilities. The severity of exposure depends on what the model can access and how tightly it is integrated with sensitive systems. 

Prompt Injection  

One of the most prevalent and impactful LLM-specific vulnerabilities is prompt injection. 

Prompt injection occurs when an attacker manipulates input to cause the model to override or ignore its intended instructions. Because LLMs process developer-provided instructions and user input within the same context, improperly designed systems may struggle to reliably distinguish between the two. 

Successful prompt injection can result in: 

  • Exposure of sensitive or confidential information 
  • Generation of unauthorized responses 
  • Manipulation of downstream systems 
  • Execution of unintended actions 

A straightforward example involves instructing the model to ignore previous instructions and follow newly supplied directives. If appropriate safeguards are not in place, the model may comply. 

Indirect Prompt Injection 

Prompt injection does not always occur through direct user interaction. 

In indirect prompt injection scenarios, an attacker may instruct the LLM to retrieve and analyze content from an external website. Hidden malicious instructions embedded within HTML comments, images, metadata, or invisible characters can then influence the model’s output. 

This technique is frequently used to bypass direct prompt filtering controls and demonstrates how LLM risk can extend beyond traditional user input fields. 

LLMs as Intermediaries to Sensitive Systems

Beyond model-specific vulnerabilities, LLMs may serve as intermediaries through which attackers interact with other sensitive components. 

For example, an LLM integrated with internal APIs, databases, or automation systems can unintentionally provide a pathway for attackers to query or manipulate those resources. In such cases, the model effectively becomes a proxy. 

For this reason, all user-supplied input should be treated as untrusted, and any downstream system interaction should include proper validation, access control, and monitoring. 

Mitigating LLM-Related Risk

There is no single control that fully eliminates the risks associated with LLM-enabled applications. As generative AI capabilities evolve and threat actors refine their techniques, mitigation requires a layered and adaptive approach. 

Organizations should consider implementing the following safeguards: 

  • Limit data exposure: Provide LLMs access to the minimum data necessary to perform their intended function. Avoid broad or unnecessary connectivity to internal systems. 
  • Validate and sanitize input: Treat all user-supplied content as untrusted. Implement input validation and filtering before allowing interaction with downstream services. 
  • Restrict external interactions: Limit the model’s ability to retrieve or process external web content unless explicitly required by the application’s design. 
  • Conduct human-led security testing: Perform regular adversarial assessments focused on prompt injection and abuse scenarios. Organizations should not rely solely on automated or AI-driven tools to identify vulnerabilities. 

By combining architectural controls, disciplined access management, and continuous testing, organizations can significantly reduce the risk associated with LLM deployments while continuing to leverage their operational benefits. 

Moving Forward

LLM-enabled applications can deliver meaningful operational efficiencies and enhanced customer engagement. However, without deliberate security design, they can introduce complex and evolving attack vectors. 

Organizations adopting LLM technologies should ensure that security architecture, governance controls, and continuous testing mature in parallel with innovation. As generative AI capabilities expand, treating AI security as an ongoing risk management discipline, rather than a one-time implementation effort, will be essential. 

Share
Related Articles
Why Cybersecurity Readiness Matters More Than Ever: Tips for Protecting Your Business and Data from BlueHammer
Why Leaders Shouldn’t Have Full Access to Company Systems

Looking for support or have a question?

Contact us to speak with one of our advisors.

Search

Sign up for our newsletter