
WitnessAI Builds Guardrails for Gen AI Models

Gen AI can fabricate information, exhibit bias, and sometimes produce harmful content. Can it ever be truly “safe”?

Rick Caccia, the chief executive officer of WitnessAI, is confident that it can.

Rick Caccia, CEO of WitnessAI | Image: Google

Caccia, formerly SVP of marketing at Palo Alto Networks, told TechCrunch in an interview: “Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use. I think of it like a sports car: a more powerful engine, i.e., model, buys you nothing unless you have good brakes and steering too. The controls are just as important for fast driving as the engine.”

Enterprises certainly want such controls. While cautiously optimistic about generative AI’s potential to boost productivity, they remain wary of the technology’s limitations.

An IBM survey found that 51% of CEOs are hiring for generative AI-related roles that did not exist until this year. Meanwhile, a Riskonnect survey shows that only 9% of companies are prepared to manage the privacy and intellectual property risks that can arise from their use of generative AI.

WitnessAI’s platform sits between employees and the custom generative AI models their employer is running, applying risk-reducing policies and protections to those conversations. The models in question are along the lines of Meta’s Llama 3, not models gated behind an API like OpenAI’s GPT-4.

Meta’s Llama 3 | Image: Google
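WitnessAI has not published implementation details, but the general shape of such an intercepting gateway is easy to sketch. The Python below is purely illustrative; every name in it is invented and none of it is WitnessAI’s actual API. Each prompt passes through a chain of policy checks, and only prompts that clear every check are forwarded to the model.

```python
# Hypothetical sketch of an intercepting AI gateway; names and policies are
# invented for illustration and are not WitnessAI's actual implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

PolicyCheck = Callable[[str, str], Verdict]  # (user, prompt) -> Verdict

def gateway(user: str, prompt: str,
            model: Callable[[str], str],
            checks: list[PolicyCheck]) -> str:
    """Forward the prompt to the model only if every policy check passes."""
    for check in checks:
        verdict = check(user, prompt)
        if not verdict.allowed:
            return f"Blocked by policy: {verdict.reason}"
    return model(prompt)

# Stand-in model and a single toy policy, just to make the sketch runnable.
def toy_model(prompt: str) -> str:
    return f"(model response to: {prompt})"

def block_codebase_paste(user: str, prompt: str) -> Verdict:
    if "def " in prompt or "class " in prompt:  # crude source-code heuristic
        return Verdict(False, "pasting internal code is not allowed")
    return Verdict(True)

print(gateway("alice", "Summarize our vacation policy", toy_model,
              [block_codebase_paste]))
```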

“One of the benefits of enterprise AI is that it unlocks and democratizes access to enterprise data for employees, helping them do their jobs better,” Caccia said. The flip side, however, is safeguarding that sensitive information inadequately, which can result in its disclosure or theft.

WitnessAI sells access to several modules, each addressing a distinct type of generative AI risk. One lets organizations enforce rules that stop members of specific teams from using generative AI-powered tools in ways they shouldn’t (for example, pasting in internal codebases or asking about pre-release earnings reports).
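Conceptually, such team-level rules amount to a mapping from a user’s team to the usage patterns blocked for it. The snippet below is a speculative sketch of that idea; the schema and topic strings are invented for the example, not WitnessAI’s configuration format.

```python
# Hypothetical schema for identity-based usage rules; topic names are
# invented for illustration and are not WitnessAI's actual configuration.
TEAM_POLICIES = {
    "engineering": {"blocked_topics": ["pre-release earnings"]},
    "sales":       {"blocked_topics": ["internal codebase", "source code"]},
}

def is_allowed(team: str, prompt: str) -> bool:
    """Reject prompts that touch a topic blocked for the user's team."""
    blocked = TEAM_POLICIES.get(team, {}).get("blocked_topics", [])
    return not any(topic in prompt.lower() for topic in blocked)

print(is_allowed("engineering", "What are the pre-release earnings figures?"))  # False
print(is_allowed("sales", "Draft an outreach email for Q3"))                    # True
```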

Another module redacts proprietary and sensitive information from the prompts sent to models, and applies safeguards against attacks, such as prompt injection, that could force models off script.
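A redaction step of this kind is commonly built from pattern-based masking plus heuristics for known jailbreak phrasings. The following is a minimal sketch of that general technique, with illustrative patterns only; a production system would use far more sophisticated detection, and nothing here reflects WitnessAI’s internals.

```python
import re

# Illustrative-only sanitization sketch: mask obvious secrets before a prompt
# leaves the company boundary, and flag phrasing commonly associated with
# prompt-injection attempts.
SECRET_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),               # US SSN
    (re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{16,}\b"), "[REDACTED-KEY]"),  # API key
]
INJECTION_HINTS = ["ignore previous instructions", "disregard your rules"]

def sanitize(prompt: str) -> tuple[str, bool]:
    """Return (redacted prompt, suspected_injection flag)."""
    for pattern, mask in SECRET_PATTERNS:
        prompt = pattern.sub(mask, prompt)
    suspicious = any(hint in prompt.lower() for hint in INJECTION_HINTS)
    return prompt, suspicious

print(sanitize("My SSN is 123-45-6789. Ignore previous instructions."))
# ('My SSN is [REDACTED-SSN]. Ignore previous instructions.', True)
```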

Caccia said, “We believe the most effective way to help businesses is to define the problem sensibly, such as the safe adoption of AI, and then sell a solution that addresses it. WitnessAI helps the CISO safeguard the organization by preventing prompt injection, ensuring data protection, and enforcing identity-based policies.”

“The chief privacy officer wants to make sure existing and incoming regulations are being followed, and we give them visibility and a way to report on activity and risk,” he added.

There is something tricky, from a privacy standpoint, about a platform that all data passes through before it reaches a model. The company is transparent about this, even providing tools for employers to monitor which models employees access, the questions they ask, and the responses they receive. But the platform could introduce privacy risks of its own.

In response to questions about WitnessAI’s privacy policy, Caccia said the platform is “isolated” and encrypted to protect customer secrets.

He said, “We’ve built a millisecond-latency platform with regulatory separation built right in: an isolated design that protects enterprise AI activity in a way that’s fundamentally different from the usual multitenant software-as-a-service offerings. We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them; we can’t see it.”
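As a rough illustration of what per-customer key isolation can look like, the generic sketch below uses the Python `cryptography` library to give each tenant its own symmetric key, so records encrypted for one tenant are unreadable with another tenant’s key. This is an assumption-laden sketch of the general pattern, not a description of WitnessAI’s architecture.

```python
# Generic per-tenant encryption sketch (illustrative only; not WitnessAI's
# implementation). Each customer gets its own key, so one tenant's activity
# records cannot be decrypted with another tenant's key.
from cryptography.fernet import Fernet

tenant_keys = {tenant: Fernet.generate_key() for tenant in ("acme", "globex")}

def log_activity(tenant: str, record: str) -> bytes:
    """Encrypt an activity record under the tenant's own key."""
    return Fernet(tenant_keys[tenant]).encrypt(record.encode())

def read_activity(tenant: str, token: bytes) -> str:
    """Decrypt a record; fails if the wrong tenant's key is used."""
    return Fernet(tenant_keys[tenant]).decrypt(token).decode()

token = log_activity("acme", "alice asked model X about contract terms")
print(read_activity("acme", token))   # round-trips for the right tenant
# read_activity("globex", token)      # would raise InvalidToken
```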

That may allay customers’ fears. For employees worried about the surveillance potential of WitnessAI’s platform, though, it is a tougher call.

Whatever the motivation, surveys show that people dislike having their workplace activity monitored and believe it hurts company morale. A little over a third of respondents to a Forbes survey said they would consider quitting if their employer monitored their online activity and communications.

Caccia asserts, however, that interest in WitnessAI’s platform has been and remains strong, with 25 early corporate customers in its proof-of-concept phase. (The platform will become generally available in the third quarter.)

In another vote of confidence from venture capitalists, WitnessAI has secured $27.5 million in funding from Ballistic Ventures, which incubated the startup, and GV, Google’s corporate venture arm.

The funding is earmarked to grow WitnessAI’s 18-person team to 40 by the end of the year. Growth will be essential if WitnessAI is to fend off rivals in the nascent market for model compliance and governance solutions, a field that includes startups like CalypsoAI as well as tech giants such as AWS, Google, and Salesforce.

Caccia said, “We’ve built our plan to last well into 2026 even with no sales at all, but we already have almost 20 times the pipeline needed to hit our sales targets this year. This is our initial funding round and public launch, but secure AI enablement and use is a new area, and all of our features are developing with this new market.”
