3 Things Companies Get Wrong About Workplace AI

Here are the three most common mistakes that organizations make when implementing AI – and the steps you can take to avoid them.

Amita Parikh
Writer

If you're reading this post in 2024, your organization is almost certainly talking about, or at least investigating, how generative AI will disrupt your workplace in the near future. Some organizations are cutting edge and ready to start today, while others are taking a more cautious wait-and-see approach – but nearly all will be evaluating workplace AI software, whether today or in the coming months and years.

But what's the right process to follow? What factors should organizations consider, and what mistakes should they avoid? It's not all so straightforward. Many organizations are rightfully cautious about how they implement AI because they've heard horror stories of implementations gone wrong, or have valid concerns about data security and the implementation process.

Through implementing our workplace AI assistant with customers, we've witnessed plenty of missteps and challenges in workplace AI adoption. Today, we'll tell you about the three most common mistakes organizations make – and we hope that by sharing these hard-won lessons, we can help you avoid them.

Mistake #1: Involving Legal and Security too late in the process

The number one mistake that companies make? You guessed it – waiting too long to involve their partners in Legal and Security in the evaluation process. Too many companies in 2024 make it halfway (or more!) through their software evaluation or implementation process before finally looping in Legal and Security. Brought in that late, Legal and Security can derail the entire initiative, or at minimum delay the implementation timeline. And don't blame them – they're just doing their jobs.

So what do you do about this? Loop in Legal and Security early in the process, understand their concerns and guardrails in advance, and help steer the conversation on what standards your organization will apply to the use of AI internally and with the external vendors you bring on.

Some more details here:

  1. Involve Legal and Security teams early on: Start talking to your colleagues in Legal and Security early in your evaluation process. Understand what concerns or questions they will have about any vendor you bring on, and ask what they are working on in parallel around AI standards and security requirements. If they're like many of the Legal and Security teams we talk to, they're already hard at work thinking about this – and will appreciate being looped in. Everyone wins when you lead with transparency.
  2. Get internal alignment on the responsible AI standards of your organization: Because generative AI is still in its infancy, many privacy and legal concerns surround it, and many organizations are still developing the standards they'll adopt for themselves and the vendors they use. Ask your Legal and Security colleagues for the internal guidelines on the appropriate and fair use of AI and LLMs at your organization. Need some guidance or advice on standards that might make sense? We'd be happy to share our expertise.
  3. Get internal alignment on the security requirements of any AI vendor: While standards and norms are still evolving, at minimum, be sure to know how any software or AI vendor you evaluate handles the following:

    1. Encryption: is data encrypted using industry-leading standards (TLS 1.2/1.3 in transit and AES 256-bit encryption at rest)? For a quick way to spot-check the in-transit piece yourself, see the sketch after this list.
    2. Disclosing use of models: which large language models is the software vendor using?
    3. Disclosing any training on your data: is this vendor using your data to train their LLMs in any way? 
    4. Indexing controls: can you filter and restrict what data is shared with your vendors – down to individual fields?
  4. Define the scope of a pilot or proof-of-concept with Legal and Security: It's important to find value quickly in your AI initiatives, and many of the best rollouts we've seen use those early wins to make the business case for more permanent or organization-wide rollouts. Work with Legal and Security early on to understand what requirements the organization has for pilots or connecting data sources, so you know how to prepare your teams for integrating data and adding new systems.
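
As promised above, here is a minimal Python sketch for spot-checking encryption in transit: it reports which TLS version a vendor's endpoint negotiates. The hostname is a placeholder, not a real vendor URL, and this only covers data in transit – encryption at rest can't be probed from the outside and should be verified through the vendor's security documentation or compliance reports.

```python
import socket
import ssl

def negotiated_tls_version(hostname: str, port: int = 443) -> str:
    """Connect to an endpoint and report the TLS version it negotiates."""
    context = ssl.create_default_context()  # sensible defaults, verifies certificates
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# "vendor.example.com" is a placeholder -- point this at the vendor's actual endpoint.
print(negotiated_tls_version("vendor.example.com"))
```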

Pro tip: Many Legal and Security teams will move faster to approve a "proof of concept" or "proof of value" pilot – compared to the heavy lift of thinking through an entire org-wide rollout upfront. We often recommend starting here, as it is a simpler evaluation process and helps Legal and Security teams get comfortable with a vendor's contracts and procurement process. Legal and Security's core concerns around data and security will still need to be addressed, even in a pilot – but it's often a more straightforward conversation than discussing a full rollout. Plus, if you're speaking to a best-in-class AI vendor, they should be able to answer your data security questions with ease. Make it simple for Legal and Security to approve your vendor, and everyone wins!

For a full list of things to consider when evaluating workplace AI solutions, check out our page on responsible AI.

Mistake #2: Having unrealistic expectations of workplace AI's capabilities

Another common mistake is having unrealistic expectations of what AI can actually accomplish in your organization today. Executives see the latest Google Gemini or OpenAI demo and think their organizations can replicate the same results by next week. The reality is that these demos are often showcased in squeaky-clean lab environments with perfect data – and wouldn't stand up to how most organizations' CRMs, emails, and data actually look. You know what we're talking about, right?

So here are a few points to consider when managing expectations for workplace AI in 2024:

  1. Have a healthy skepticism of what you're shown in demos: Plenty of vendors will have slick presentations and promises that only really work in the demo environment. And it's not all vaporware – the bigger issue is that many tools only work well when the underlying data is perfect, or when users follow the vendor's prescribed steps in an exact order. That may be great for a demo, but it's not practical for how your teams work day to day.

  2. Start with a proof-of-concept pilot: A great way to separate the flashy demos from real substance is to set up a pilot where your teams can test the software. You'll quickly see which vendors can actually accelerate your teams' work tomorrow, and which ones aren't quite ready for primetime yet.

  3. Align on how to measure success in your workplace AI initiative: It can be hard to determine whether your AI initiative is successful without internal alignment on what you're measuring and what improvement you're looking to see. Be sure to define which metrics or KPIs your organization is striving to improve, and ensure that the pilots or vendors you work with can provide quantifiable results against those metrics (a simple example follows this list).

  4. Strong data inputs = stronger AI assistants: Having the right data inputs can make a world of difference in how useful your AI assistants become. If you don't have clean data or the right information to start, an AI assistant can't magically solve that. Some things to keep in mind during the evaluation process:
    1. Are you able to access or export the data that would be useful to power an AI assistant?
    2. Can the software vendor you’re evaluating integrate with the tools your organization uses? 
    3. How capable is the AI vendor at handling conflicting information across different data sources – where Source A says X, and Source B says Y? This is worth keeping in mind in your evaluation and pilot process. (Note: We’ve been able to successfully crack this through a proprietary approach – so if this is a concern, we’ve got you covered!)
    4. Minimize hallucinations: look for AI assistants that will tell you, when there's no good answer, that there's no good answer. You'd be surprised how many AI vendors still can't do this.
  5. Human intervention still matters: At this stage, AI assistants aren't replacing humans. Human intervention is still very important – even critical – to benefiting from a well-functioning AI assistant.

    To leverage workplace AI to its fullest, teams need to look at AI-generated results with a critical eye and interact with them: flagging poor answers to improve future results, rewarding the good ones so the assistant can improve, and editing answers with tips or information that only someone on the job would know. By feeding this back to the AI assistant, you ensure that every future search benefits from that tribal knowledge.
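
As for measuring success (item 3 above), here is a minimal Python sketch of the kind of before-and-after KPI comparison a pilot should produce. The handling times below are made-up numbers for illustration – substitute whichever metric your organization has aligned on (ticket handling time, first-response time, repeat questions, and so on).

```python
from statistics import mean

# Hypothetical per-ticket handling times (minutes), before and during the pilot.
baseline_times = [42, 38, 51, 47, 44, 39]
pilot_times = [31, 29, 36, 33, 30, 28]

def percent_reduction(before: list[float], after: list[float]) -> float:
    """Percent reduction in the average of a lower-is-better KPI."""
    b, a = mean(before), mean(after)
    return (b - a) / b * 100

print(f"Average handling time reduced by "
      f"{percent_reduction(baseline_times, pilot_times):.1f}%")
```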

Mistake #3: Insufficient change management 

The final thing we often see organizations get wrong about integrating AI into the workplace is failing to devote enough time and energy to change management and adoption. 

It will come as no surprise that not everyone in your organization will be as ready or excited to embrace AI as you are. There are a lot of valid concerns around AI and automation, and there's no point in minimizing employees' feelings.

As a starting point, ask your employees which tasks they don’t enjoy doing. In an ideal world, what would they remove – or at least minimize – from their workday? What would they rather spend their time working on? If you can find AI assistants to automate part or all of the tasks they find cumbersome, this will increase employee productivity and satisfaction.

Companies also come up short when they assume there will be equal adoption across the entire organization. The needs of a marketing team are very different from those of a finance team. Not every department is going to be able to use assistants in the same way or to the same degree, and those championing the use of AI should be mindful of this.

Start with one team to pilot your AI assistant.

We've found that organizations are more successful when they choose one department and run a pilot with it first. Ideally, pick a department with a healthy amount of good data and complex processes – one that is metrics-driven and willing to embrace new tools.

The results of the pilot will help your organization make any necessary changes and drive company-wide adoption down the road. Remember – you don't have to roll out AI across the organization all at once. Trying to do so will often bog you down in evaluation purgatory. Just start with a pilot in one department. By breaking things down and learning as you go, you're setting your company and employees up for long-term, lasting success.

Rejigging Internal Processes 

Adding in a workplace AI tool means you’ll have to reshape internal processes and procedures. Here are 3 steps to doing this effectively:

  1. Identify your early adopters: These will be your internal champions who may have expressed interest in using AI tools, or who are naturally excited to help reshape the traditional workday with the use of an AI assistant. Decide on a timeframe for how long you’ll let them trial tools. 
  2. Set up weekly or bi-weekly check-ins internally, and with your AI vendor: Use these check-ins to see how early adopters are making their days more productive or efficient with the AI assistant. It's a good idea to document progress so that at the end of the pilot, you'll have a good idea of what works and what doesn't.
  3. Establish an internal cross-functional AI council: This council will be dedicated to evaluating new software and initiatives and, once tools are brought in, to sharing best practices and learnings on how teams are adopting AI to drive efficiency and productivity.

Mini case study: How we reduced repeat questions by 85% using an AI-powered assistant

At Ask-AI, we found ourselves overwhelmed by product-related Slack messages that only a handful of our team could answer. Not only was this time-consuming, it also led to delayed responses to customers and opportunities. We tried creating and directing people to "ask-product" and "ask-support" channels for a while, but those channels soon got overwhelmed too – a band-aid solution at best.

So, we pivoted and came up with a thoughtful way to use our universal AI assistant. 

Here’s how we did it: 

After integrating the Ask-AI assistant, we developed a new process for all employees for asking product- or customer-related questions (sketched in code below):

  1. We encouraged team members to consult the AI assistant for answers first.
  2. If the AI assistant didn’t have the answer, they could ask the AI assistant to send the question to Slack.
  3. Once a human expert with the right knowledge provided an answer, the Ask-AI assistant automatically created a knowledge card for all future queries. 

While it was challenging to break the habit of defaulting to Slack, eventually everyone caught on. The more we used the AI assistant, the more it improved. Implementing this new internal process resulted in a 30% drop in Slack questions and an 85% drop in repeated queries.
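
For the technically inclined, here is a minimal sketch of the escalation loop described above. The helper functions (ask_assistant, post_question_to_slack, save_knowledge_card) are hypothetical stand-ins, not Ask-AI's actual API – the point is the shape of the process: assistant first, human expert second, and every expert answer captured for future queries.

```python
from typing import Optional

# Hypothetical stand-ins for your assistant and Slack integrations --
# not Ask-AI's actual API.
def ask_assistant(question: str) -> Optional[str]:
    """Return an answer if the assistant is confident, else None."""
    return None  # stubbed: pretend no good answer was found

def post_question_to_slack(question: str) -> str:
    """Escalate to a human expert channel and return their answer."""
    return "Answer from a human expert."

def save_knowledge_card(question: str, answer: str) -> None:
    """Persist the expert's answer so future queries are self-serve."""
    print(f"Saved knowledge card: {question!r} -> {answer!r}")

def handle_question(question: str) -> str:
    # Step 1: consult the AI assistant first.
    answer = ask_assistant(question)
    if answer is not None:
        return answer
    # Step 2: no good answer -- escalate to a human expert via Slack.
    answer = post_question_to_slack(question)
    # Step 3: capture the expert's answer for every future query.
    save_knowledge_card(question, answer)
    return answer

print(handle_question("Does the product support SSO?"))
```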

Conclusion

We all know that AI is here to stay and is already changing the future of the workplace considerably. The organizations that emerge as winners will be the ones that take the time to thoughtfully plan their workplace AI strategies before pressing 'go', and then continually refine their approach as new information comes to light. Hopefully this article helps you avoid the three most common mistakes we see organizations make – and sets you up for success in your AI adoption journey.

Interested in learning more about how Ask AI can help your organization save time? Let’s talk.
