Security: What to Look for in Any AI Software Evaluation

What should be non-negotiable on your evaluation checklist, and what can you keep as a nice-to-have? Today, we’re sharing some tips to help you understand the most important Security points to keep in mind when evaluating AI software.

Ian Wright
Writer

When it comes to vetting new software, security is always top-of mind – but it’s particularly important in the space of artificial intelligence (AI)  at this particular moment, given the novelty of the technology, the data involved in using it, and the still-to-be-determined norms and standards around how to responsibly use, deploy, and manage AI.

But what should be non-negotiable on your evaluation checklist, and what can you keep as a nice-to-have? Today, we’re sharing some tips to help you understand the most important Security points to keep in mind when evaluating AI software.

Understanding AI Software Security

When it comes to AI software security, there are the base concerns that come with any software package — encryption, user authentication, compliance with industry standards — but because AI software can involve training machine learning models on large volumes of data, there are many additional privacy issues to consider, especially around training. Businesses should be wary not only with the potential for proprietary data to be compromised, but also have an explicit understanding to what extent their proprietary data may be used to train other models.

AI also opens up new avenues of cyberattack in the form of prompt injections or data poisoning. In the former case, an attacker creates an input for a large language model (LLM) that’s designed to make it behave in an unintended way; in the latter, an attacker corrupts the data used to train a model, causing it to produce undesirable outcomes.

One example of a recently uncovered vulnerability due to the potential for prompt injection comes from the API service and Google Chrome plugin, EmailGPT. According to the Synopsys Cybersecurity Research Center (CyRC), via Cybersecurity News, exploiting this vulnerability could lead to intellectual property leakage, denial-of-service, and even direct financial loss if attackers make repeated requests to the AI provider’s API, which are pay-per-use. This simple case demonstrates why security in AI software is so important.

Key Security Considerations in AI Software

In addition to the basic considerations that apply to any software package, there are four essential points on security that need to be kept in mind when evaluating AI software.

  1. Encryption: This is non-negotiable, and any vendor that is serious has this built in. Look for data to be encrypted using industry-leading standards - in transit at TLS 1.2/1.3 and at rest in AES 256-bit encryption.
  2. Data Privacy: A critical point to evaluate in any AI software is the provider’s policies and ability to manage data privacy. This can include features such as the automatic redaction of personally identifying information (PII), which is particularly relevant given the European Union’s General Data Protection Regulation (GDPR) and similar legislation. In addition to anonymization, the software should always include AES-256 encryption to ensure any client or proprietary data is protected. This is table stakes.
  3. Indexing Controls: Taking this one step further, leading AI vendors will also offer indexing controls around the data you integrate. This means you can filter and restrict what data is actively integrated into the AI software tool – and the best AI software tools even enable you to drill down to individual fields for each software tool. For example, you might choose to pass on Customer ID into the AI software, but leave First Name and Last Name within your existing CRM. This allows you to ensure that the AI software you deploy only has access to the data that it needs, further reducing risk and helping you practice stronger information governance. 
  4. Compliance Standards: Given the novelty of AI software and the sheer number of offerings in the market, it’s crucial that any AI software you adopt meets (and ideally exceeds) the current AI security, privacy and compliance standards. These include SOC 2 Type II compliance, which covers a service provider’s internal controls, ISO 27001:2022, and GDPR compliance. If it’s helpful, here’s Ask-AI’s own Responsible AI policy, which meets and exceeds industry standards. 
  5. User Authentication and Permission: Strong user authentication and permission controls, such as Single Sign-on (SSO) and multi-factor authentication (MFA) are also a critical way to ensure compliance and security within your systems. SSO and MFA helps ensure only authenticated and approved users can log in to your systems, while user permission controls allow you to further refine who within your organization can access certain data, folders, information, or full databases. These features help ensure that only authorized users can access the system and reduce the chance of data breaches. Additionally, being able to implement a role-based access control (RBAC) approach can further protect your data by restricting actions, such as API calls, to specific resources.

    Look also into whether the AI software you’re evaluating can inherit the user permissions of any underlying software tools they’re integrating or searching through. For instance, if User A can access any file in your organization’s Google Drive, but only a select amount of data in Salesforce, it’s important that your AI software only surface results that match what User A should be able to see in Google Drive and Salesforce when they log in – even if the AI software has indexed more results in Salesforce that a user with higher administrative privileges might be able to see.
  6. Operating in a Secure Cloud Environment: The vast quantities of data required to train and deploy machine learning models necessitates processing in a cloud environment, rather than on any local machine. This point is – once again – table stakes, but look for any AI software operating on the cloud to be housed in a trusted, secure environment, such as Microsoft Azure or Amazon Web Services, In such cases, you can be assured that client or proprietary data used by your AI software will enjoy the same protections as any data hosted by a major cloud provider.

    It is also important to know what nuances or guardrails, philosophically, your organization prefers to have. For instance, some businesses may avoid choosing a particular cloud computing platform because an arm of its parent organization competes with their core business. This may impact what cloud environments you or your AI vendors may choose to use.

All in all, it’s not difficult to find examples of software providers failing to address these essential points when it comes to AI security. Even large companies at the forefront of AI development can make critical missteps when rolling out new integrations. Consider Microsoft’s recent debacle with the upcoming Recall feature in Windows 11: 

The initial version saved screenshots and a large plaintext database tracking everything users do on their PCs. The idea was to make it easier for the Windows AI to assist users, but the potential for all of that data to be compromised quickly raised alarm bells among cybersecurity experts. Microsoft has since scaled back its ambitions for Recall amidst the criticism.

Slack faced a similar backlash earlier this year for its new AI training policy, which granted the use of customer data to train what the company referred to as “global models” that are used for channel and emoji recommendations, as well as search results. And, like Microsoft, Slack has (somewhat) backed down in response to public outcry.

The moral here is that security cannot be an afterthought when it comes to AI software. As the examples from Microsoft and Slack demonstrate, failing to keep these essential considerations in mind can erode user trust and confidence, in addition to the potential for introducing new vulnerabilities into your technology stack.

Evaluating AI Software Security

Based on the preceding considerations, the following checklist of security features should be helpful in evaluating the security of your AI software:

  • Enterprise-Grade Certifications
    • SOC 2 Type II
    • ISO 27001
    • GDPR Compliance
  • Indexing Controls
    • Automatic PII redaction
    • Restricted AI data access, including fields
    • Dedicated data tenancy
  • No Third-Party Training
    • Data not shared with third-party vendors
    • No separate LLM training

In addition to the checklist above, here are additional important questions you can ask any AI software vendor when evaluating the security of AI software:

  • Which LLMs are being used in this software?
  • Do you use third-party models and, if so, how?
  • How do your models use customer data?
  • How is customer data stored?
  • How do you handle personally identifiable information (PII)?
  • What cloud environment does the software use?
  • What are your user access controls?
  • What certifications do you have?

Conclusion

Security is never more important than when deploying new technologies, and that’s why it’s crucial to consider factors such as data privacy, compliance standards, user authentication and permissions, and the security of the cloud environment in which the software operates. It’s easy to be distracted by flashy demos and big promises when it comes to AI, but, as with any software, enterprises should always consider security first and foremost.

Questions about AI software security? We’re here to help. 

Reach out and request a demo of Ask-AI’s AI assistant solution today, or reference our Responsible AI page to draw inspiration for your AI security and compliance policies.

More from Ask-AI

Case-study

AI-Powered Knowledge Management: A Practical Guide for GTM Leaders

Become a go-to resource for how to implement AI across customer-facing teams

Team Ask-AI
Read more
Case-study

How to Choose the Right AI Solution for Your Company

Deciding which AI solution is right for your company can be difficult. We made the decision easier to assess.

Team Ask-AI
Read more
Case-study

Should We Really Build Our Own Knowledge Management Platform In-house?

How to avoid the common pitfalls of building an in-house knowledge management solution.

Antasha Durbin
Writer
Read more
Case-study

What’s an AI Committee and Why Do You Need One?

Forming committees for strategic AI adoption helps companies stay organized and make well-informed decisions. In this article, we’ll explore what an AI steering committee is, why your organization needs one, and the steps to take to form one.

Antasha Durbin
Writer
Read more
Case-study

Yotpo and Ask-AI: Improving Case Handling Efficiency by 30% and Minimizing Internal Tickets

Yotpo agents who use Ask-AI's assistant on a daily basis to search for answers are able to achieve a 30.2% reduction in ticket handling time.

Team Ask-AI
Read more
Case-study

monday.com uses Ask-AI to Enhance the Customer Experience

Ask-AI has had an immediate impact on ticket handling time at monday.com, with teams seeing a 13.5% reduction in ticket handling time among its most-active Ask-AI users – a nearly tenfold jump compared to a 1.4% reduction for non-active users.

Team Ask-AI
Read more
Case-study

Embrace the Knowledge Chaos

Once we let go of the idea of a single perfect knowledge base, our actual knowledge creation and consumption skyrocketed – we saw a 10x increase compared to the traditional method of hiring technical writers.

Alon Talmor
Founder and CEO
Read more
Case-study

Conductor deploys Ask-AI to slash agent ramp times, boost collaboration, and improve customer satisfaction

With nine different software tools and multiple ticketing systems, Conductor’s tech stack was always going to be complex to bring together. Other AI vendors quoted a 2-3 month integration process before Conductor could start using their tools. To Joe’s surprise, with Ask-AI the entirety of Conductor’s org-wide data was ready to use in just three weeks.

Team Ask-AI
Read more
Case-study

3 Things Companies Get Wrong About Workplace AI

Here's the three most common mistakes that organizations make when implementing AI – and the steps you can take to avoid them.

Amita Parikh
Writer
Read more