Resist the Temptation to Implement AI for the Sake of It

How to balance the pursuit of innovative generative AI-driven features with practical, user-friendly product design.

AI, particularly generative AI, is a hot topic captivating everyone’s imagination from LinkedIn to Friday night bars. Companies seem to be in an arms race to deploy AI features, but unfortunately, many of these rushed products fail or have to be rolled back. Moreover, models trained on copyrighted or biased data can lead to lawsuits and even business collapse. Therefore, it is crucial for companies to remain grounded in solving real user problems and delivering value.

By focusing on core user needs, such as solving an existing pain point, improving productivity, rethinking an existing experience, or offering uniquely personalised interactions, AI can significantly enhance product functionality. However, it is equally important to recognise AI's limitations and potential pitfalls, such as ethical concerns, technical challenges like hallucinations, data biases, and the high cost of implementation.

When to Use Generative AI:

  • Focus on core user needs: AI is a tool, not a goal. Start by identifying real human problems and assessing whether AI can solve them better than existing solutions. The goal is to make users more efficient and effective. For example, meeting summarisation features in apps like Teams and Zoom save time by providing key takeaways and action items, eliminating the need to rewatch long recordings (see the sketch after this list).
  • Improve productivity of existing tasks: AI excels at automating repetitive tasks, freeing users for work requiring human expertise. GitHub Copilot assists with coding, saving developers time for complex problem-solving. A GitHub study found 88% of developers felt more productive and satisfied using Copilot.
  • Rethink existing experiences: AI can create significant value by improving existing experiences. Perplexity AI provides direct answers to questions with citations, offering a more user-friendly search experience than traditional search engines. It summarises information with annotated links and suggests follow-up questions to refine searches.
  • Personalised experiences: Generative AI can tailor content to individual preferences. Netflix famously customises content thumbnails based on user preferences: when users browse Netflix, they are not shown generic images but thumbnails chosen or generated based on their viewing history. This makes it easier for users to find something appealing to watch, driving higher engagement and product satisfaction.
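To make the summarisation example concrete, here is a minimal sketch of how such a feature might be wired up. It assumes the OpenAI Python client and a transcript string produced elsewhere; the model name and prompt are illustrative and are not a description of how Teams or Zoom actually implement their features.

```python
# Minimal sketch: summarise a meeting transcript into takeaways and action items.
# Assumes `pip install openai`, an OPENAI_API_KEY in the environment, and a
# `transcript` string produced by a speech-to-text pipeline (not shown here).
from openai import OpenAI

client = OpenAI()

def summarise_meeting(transcript: str) -> str:
    """Return key takeaways and action items for a meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarise the meeting transcript into (1) key takeaways "
                        "and (2) action items with owners, as short bullet points."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# Example usage (hypothetical file name):
# print(summarise_meeting(open("standup_transcript.txt").read()))
```

The value here comes from removing a real chore (rewatching recordings), not from the model itself; the same pattern applies to any "condense this for me" feature.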

When to Avoid Generative AI:

  • Maslow’s Hammer: Avoid the “hammer-everything” approach: just because you have a new tool (AI) doesn’t mean it’s the right solution for every problem. For instance, Air Canada’s AI chatbot misrepresented the airline’s bereavement policy, leading to a lawsuit. While a customer service chatbot powered by a large language model (LLM) sounds appealing, LLMs can generate inaccurate outputs (hallucinations). In this case, a traditional chatbot with guardrails would have been a better solution (see the sketch after this list).
  • Inappropriate training data: The rapid pace of AI development can lead to models being trained on data they should not use. For example, Getty Images sued Stability AI (creators of Stable Diffusion) for producing images with Getty watermarks, because the model was trained on unlicensed images scraped from the internet. Similarly, biased or poorly curated data can lead to problematic outputs, as seen when Google’s Gemini generated racially diverse depictions of Nazi-era soldiers.
  • No clear ROI: Despite decreasing costs, generative AI remains computationally expensive. As Goldman Sachs’ research put it, there is currently too much spend on generative AI for too little benefit. Companies must carefully assess the return on investment (ROI) before deploying it: start small, experiment to quantify value, and determine whether that value is durable and sticky.
  • Ethical or legal concerns: Avoid AI in situations where outcomes can have serious ethical or legal implications. For example, using AI for policing or legal services could lead to wrongful arrests because of model bias or faulty facial recognition. Likewise, unreliable medical advice from AI can lead to misdiagnosis or inappropriate treatment, causing harm to patients.
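To illustrate the guardrail point from the first bullet above, here is a minimal sketch of a “traditional” support bot: it only ever answers with vetted policy text and escalates to a human when nothing matches, so it cannot improvise a bereavement policy the way an unconstrained LLM can. The policy snippets and the keyword matching are purely illustrative, not Air Canada’s actual system.

```python
# Minimal sketch of a guardrailed support bot: answers come only from a
# curated knowledge base of approved policy text; anything else is escalated
# to a human agent instead of being improvised by a generative model.
APPROVED_ANSWERS = {
    "bereavement fare": (
        "Bereavement fares must be requested before travel and cannot be "
        "applied retroactively. Please contact an agent to confirm eligibility."
    ),
    "baggage allowance": (
        "Economy tickets include one carry-on bag and one personal item."
    ),
}

ESCALATION = "I can't answer that reliably. Connecting you to a human agent."

def answer(user_message: str) -> str:
    """Return an approved policy answer, or escalate if nothing matches."""
    text = user_message.lower()
    for topic, approved_text in APPROVED_ANSWERS.items():
        if topic in text:          # naive keyword match, for illustration only
            return approved_text   # verbatim, vetted wording (no hallucination)
    return ESCALATION

print(answer("Can I get a refund under the bereavement fare policy after my trip?"))
print(answer("What's your policy on emotional support peacocks?"))
```

The trade-off is obvious: this bot is far less flexible than an LLM, but every sentence it sends has been approved by a human, which is exactly what you want when the answer carries legal weight.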

In the pursuit of generative AI, the true measure of success will not be the technology itself but its meaningful impact on users’ lives. Resist the temptation to implement AI for the sake of it. Instead, focus on enhancing the user experience by addressing genuine user needs in ways that truly resonate. At the same time, navigate the ethical, technical, legal and economic challenges carefully. Don’t get lost in the generative AI hype! So the next time you think of implementing generative AI, ask yourself: what user problem are we solving, and why is AI the best solution compared to a non-AI alternative?


As a thought exercise, I asked ChatGPT this very question. Who do you think did better?
