
Leading the Way in Responsible AI for Cybersecurity


Tuesday, December 17, 2024

Spencer Engleson

Director of Product

Spoiler alert: We, like you, see and use AI a lot these days. Over one million models are now available on Hugging Face, and more new frameworks are released every day than we care to count. While fears of a full-blown Skynet situation have mostly abated, we share folks’ concerns about user privacy, customer data security, copyright infringement, and job security. Unfortunately, we also see plenty of AI usage that doesn’t meet a high standard for trust and safety, and that hurts the general perception of AI. But at Ghost, we see an overwhelming number of opportunities where AI, applied responsibly, can genuinely ease the workloads and everyday toil of small AppSec teams charged with defending large environments.

When it comes to building a platform that uses AI responsibly, we believe trust is obtained through transparency and adherence to a set of core values. So, we wanted to share three of the core tenets that we follow when developing and incorporating LLMs/AI in the Ghost platform:


  1. Private by design.

  2. Play to its strengths.

  3. It must deliver.

Private by Design

Protecting customer data is at the heart of everything we do and factors into every design decision. For example, we determined that having a label on every application to denote its operating environment (dev/stage/prod) helps AppSec teams with context and decision-making. We carefully considered what data to feed into our categorization AI agent. Is it publicly available information? Yes. Is there a chance of sensitive data reaching the LLM? No. Great! We can safely use AI to help with this nebulous task.

We also implemented a mechanism to accept user feedback that enhances our categorization agent’s accuracy. By allowing users to change the environment categorization in the rare event that the AI agent gets it wrong, we can dynamically update the retrieval-augmented generation (RAG) context for that organization. This feedback loop enables our categorization agent to adapt to each customer’s unique environment naming conventions. This approach maintains data privacy and allows users to tune the agent specifically for their environment.
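To make the feedback loop concrete, here is a minimal sketch of how user corrections could be stored per organization and surfaced as retrieval context on later categorization calls. All names are hypothetical, and the similarity check is a deliberately simple token match so the sketch stays runnable; a real system would use embeddings and an LLM call.

```python
from dataclasses import dataclass, field

def _shared_token(a: str, b: str) -> bool:
    # Toy similarity: do the two app names share any hyphen-separated token?
    return bool(set(a.lower().split("-")) & set(b.lower().split("-")))

@dataclass
class EnvironmentCategorizer:
    """Hypothetical per-organization feedback store: user corrections become
    retrieval examples for future categorization calls (illustrative only)."""
    corrections: dict = field(default_factory=dict)  # app name -> corrected env label

    def record_feedback(self, app_name: str, corrected_env: str) -> None:
        # A user override is kept only for this organization's context.
        self.corrections[app_name] = corrected_env

    def build_context(self, app_name: str) -> list:
        # Retrieve prior corrections that resemble the new app name, to be
        # injected as few-shot grounding for the categorization agent.
        return [
            f"{name} -> {env}"
            for name, env in self.corrections.items()
            if _shared_token(name, app_name)
        ]

categorizer = EnvironmentCategorizer()
categorizer.record_feedback("payments-prd", "prod")  # user fixed a miscategorization
context = categorizer.build_context("billing-prd")   # shares the "prd" token
```

The key design point is that feedback stays scoped to one organization’s retrieval store, so tuning never leaks one customer’s naming conventions to another.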

There is another key aspect of our Private by Design philosophy. We never send sensitive customer data to public models where it could be incorporated into training datasets. Instead, we use private models that do not allow sensitive user data to be fed back into training sets to keep our customers’ data secure and under our control. 

Play to its Strengths

Despite all the hype about the capabilities of today’s impressive LLMs, they are not always the best, most accurate, or most efficient solution to a problem. For certain problem classes, no amount of prompt engineering or RAG for context and grounding will deliver consistent results that customers can trust and depend on to drive their security programs.

When designing a solution, we start by focusing on the customer outcome. Typically, this means focusing on traditional methods and flows first. Only when we run into specific and measurable tasks that are challenging to solve with traditional approaches, align with the problem space that LLMs excel in, and meet our requirements (security, confidence, cost, and latency) do we decide to leverage AI.

It Must Deliver

Put simply: if AI doesn’t deliver better results in real-world situations, we won’t use it. For nearly every problem of meaningful complexity suited to AI, we’ve found that you have to go many levels beyond the “Hello World” examples used in tutorials to reach that high bar of quality and reliability. We run all our AI prototypes in a dedicated environment with realistic data until we achieve the desired results. We routinely iterate, validate, and measure overall system performance before incorporating anything into the Ghost platform. Sometimes it doesn’t work, and that’s OK. We’ll rethink and retry until it does.

To achieve positive and consistent results, we typically use an agentic AI pattern: multiple agents work together to break a complex task down into discrete units of work that each agent can handle reliably. As we evaluate the performance of our prompts and models, we focus heavily on giving each agent the best and most accurate context through a multi-step RAG process, and we continuously validate results and feed them back into the system as reinforcement. Our expert AppSec staff carefully curates the data we supply to that retrieval step because it is vital to the accuracy of responses. The “garbage in, garbage out” axiom still applies.
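The agentic decomposition above can be sketched as a simple pipeline where each stage is one agent with a narrow responsibility. The stage functions and data shapes here are hypothetical stand-ins; in a real system each stage would wrap an LLM call grounded with retrieved context.

```python
from typing import Callable, Dict, List

# One "agent" = one function that takes the task state and enriches it.
Agent = Callable[[Dict], Dict]

def retrieve_context(task: Dict) -> Dict:
    # Stage 1: multi-step retrieval would gather curated org data here.
    task["context"] = ["app inventory", "environment labels"]
    return task

def categorize(task: Dict) -> Dict:
    # Stage 2: a categorization agent decides using the retrieved context
    # (a trivial substring rule stands in for the model call).
    task["category"] = "prod" if "prod" in task["app"] else "unknown"
    return task

def validate(task: Dict) -> Dict:
    # Stage 3: a validation agent checks the output before it is accepted,
    # so low-confidence results can be rerouted instead of shipped.
    task["valid"] = task["category"] != "unknown"
    return task

def run_pipeline(task: Dict, agents: List[Agent]) -> Dict:
    for agent in agents:  # each agent handles one discrete unit of work
        task = agent(task)
    return task

result = run_pipeline({"app": "checkout-prod"}, [retrieve_context, categorize, validate])
```

Keeping each stage small is what makes the overall system measurable: you can evaluate and tune one agent’s accuracy without retesting the whole flow.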

We’re always on the lookout for foundation model improvements, too. The pace of innovation in new large language models is exciting, so we constantly evaluate new releases from Google, Meta, Anthropic, and of course, OpenAI. That said, we’re less focused on impressive synthetic benchmark scores and laser-focused on optimizing overall price-to-performance efficiency for each use case. Once we have a system working well, we seek out the smallest model that can do the job, for both environmental and cost reasons.
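That selection rule, the cheapest model that still clears the use case’s quality bar rather than the top benchmark scorer, can be sketched in a few lines. The model names, costs, and eval scores below are invented purely for illustration.

```python
from typing import Dict, List

# Hypothetical candidates with made-up per-use-case eval scores and pricing.
candidates = [
    {"name": "small-model",  "cost_per_1k_tokens": 0.10, "eval_score": 0.87},
    {"name": "medium-model", "cost_per_1k_tokens": 0.50, "eval_score": 0.92},
    {"name": "large-model",  "cost_per_1k_tokens": 2.00, "eval_score": 0.95},
]

def pick_model(candidates: List[Dict], min_score: float) -> Dict:
    # Keep only models that meet this use case's quality requirement,
    # then take the cheapest qualifier instead of the highest scorer.
    qualified = [m for m in candidates if m["eval_score"] >= min_score]
    return min(qualified, key=lambda m: m["cost_per_1k_tokens"])

choice = pick_model(candidates, min_score=0.90)  # medium-model: qualifies and is cheapest
```

The quality threshold comes from the per-use-case evaluation run, which is why the same candidate pool can yield different winners for different tasks.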

Driving Innovation with Responsible AI Development

We are committed to harnessing the power of AI responsibly and effectively to enable AppSec teams to scale with their environment. By adhering to our core tenets of being private by design, playing to AI's strengths, and ensuring it delivers, we aim to build trust with our users and lead by example in the industry. Our dedication to responsible AI development enables us to meet today's challenges and anticipate tomorrow's opportunities.

Step Into The Underworld Of
Autonomous AppSec


Ghost Security provides autonomous app security with Agentic AI, enabling teams to discover, test, and mitigate risks in real time across complex digital environments.

Join our E-mail list

Join the Ghost Security email list—where we haunt vulnerabilities and banish breaches!

© 2024 Ghost Security. All rights reserved
