
Challenges of Adopting AI in Business

Updated: Jun 26

June 06, 2024 | Artificial Intelligence | Adoption | Transformation | CIO | By Priyanka Nagpal


Introduction

As organizations increasingly integrate artificial intelligence (AI) into various aspects of the business, the need to balance innovation with trust and security becomes paramount. Blind trust in AI capabilities can lead to unintended consequences.


AI’s potential to transform business operations, from customer service to data analysis, is well-documented. Yet, the path to successful AI adoption is fraught with obstacles that can deter even the most tech-savvy leaders. In this article, we share challenges of adopting AI in business that leaders can anticipate and how they can address them proactively.


Challenges of Adopting AI in Business


Challenge of Cybersecurity



AI in cybersecurity is a double-edged sword. While it offers unprecedented opportunities to enhance security measures and operational efficiency, it also introduces new risks that require careful management. From detecting malicious activities to predicting future threats, AI empowers security teams to stay one step ahead of cyber adversaries. However, ensuring the reliability and accuracy of AI outputs remains a critical challenge, necessitating continuous validation and oversight.


One practical application of AI in cybersecurity is in endpoint protection. Companies are using AI-driven solutions to monitor and analyze endpoint data in real-time, identifying threats and responding to them swiftly. Moreover, the concept of AI bills of materials (BOMs) is gaining traction. AI BOMs help track and trace the origin of data and algorithms, ensuring transparency and security in AI applications.
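To make the AI BOM idea concrete, here is a minimal sketch of what such an inventory might look like in code. The structure, field names, and example entries are illustrative assumptions, not an established AI BOM standard:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AIBomComponent:
    """One entry in an AI bill of materials: a dataset, model, or library."""
    name: str
    component_type: str          # e.g. "dataset", "model", "library"
    version: str
    origin: str                  # where the artifact came from
    license: str = "unknown"

@dataclass
class AIBom:
    """A minimal AI BOM: a named inventory of the pieces behind an AI system."""
    system: str
    components: list = field(default_factory=list)

    def add(self, component: AIBomComponent) -> None:
        self.components.append(component)

    def to_json(self) -> str:
        # Serialize for auditing or sharing with security reviewers.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example: record where a model's data and code came from.
bom = AIBom(system="fraud-detector")
bom.add(AIBomComponent("transactions-2023", "dataset", "v3", "internal warehouse"))
bom.add(AIBomComponent("xgboost", "library", "2.0.3", "PyPI", license="Apache-2.0"))
print(bom.to_json())
```

Even a simple inventory like this gives security teams a starting point for answering "where did this model's data come from?" during an audit.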

"AI enables us to understand endpoint, user, server, and network behaviors, facilitating proactive threat detection and incident response," says Fahim Siddiqui, EVP and CIO at The Home Depot.

This ability to identify deviations from the norm helps in quickly flagging potential threats, thereby improving the overall security posture of an organization. By leveraging machine learning algorithms, organizations can analyze vast amounts of data to discern normal patterns and detect anomalies indicative of potential security breaches.
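The core idea, learn what "normal" looks like and flag deviations, can be illustrated with a toy example. This z-score check is a deliberately simplified stand-in for the machine-learning anomaly detection described above; the data and threshold are made up for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Flag points whose z-score exceeds the threshold.

    A toy stand-in for ML-based anomaly detection: learn the baseline
    (mean and spread), then flag deviations from it.
    """
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Baseline of hourly login counts, with one suspicious spike at the end.
logins = [12, 14, 11, 13, 12, 15, 13, 12, 14, 180]
print(flag_anomalies(logins))   # → [9]
```

Production systems replace the z-score with trained models over endpoint, user, and network features, but the pattern — baseline, then deviation — is the same.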

"AI is a tool and an opportunity. We should not allow paranoid security concerns to overshadow its potential benefits," says Jeffrey Wheatman, Senior Vice President, Cyber Risk Evangelist at Black Kite.

Indeed, while AI offers tremendous advantages, blind trust in its outputs can lead to vulnerabilities. AI-related security and privacy issues are prevalent, with 41% of organizations experiencing problems. According to a Gartner survey, insider threats are a major concern, responsible for 60% of AI-related breaches. It is important to address the security implications of AI integration: as AI systems become more sophisticated, so do the tactics of malicious actors.


"Attackers can exploit AI's learning mechanisms, leading to data poisoning and widened security vulnerabilities," warns Jeffrey Wheatman, Senior Vice President, Cyber Risk Evangelist at Black Kite.



When integrating AI into business operations, cybersecurity training and education become indispensable. AI systems, while powerful and transformative, bring with them a unique set of vulnerabilities that can be exploited if not properly managed. Educating employees on cybersecurity best practices ensures that they are aware of potential threats and know how to mitigate them. For instance, employees trained in spotting phishing attempts and unauthorized access can act as the first line of defense against cyber attacks.

"Without a well-informed workforce, even the most advanced AI systems can become liabilities," says Mary Coleman, a cybersecurity expert.

Therefore, consistent and comprehensive cybersecurity education is essential to protect AI investments and secure organizational assets from cyber threats. By harnessing the power of AI while upholding ethical standards, organizations can fortify their defenses and stay resilient against evolving cyber threats.


Tips to address challenges of adopting AI in business:

  • Ensure that access to AI systems and sensitive data is protected by robust authentication processes. Multi-factor authentication (MFA) adds an extra layer of security, making it more challenging for unauthorized users to gain access.

  • Training programs that focus on the specific security requirements of AI systems can help in safeguarding sensitive data and maintaining system integrity.



The challenge of understanding and measuring AI value


There is often a lack of understanding of AI's benefits, and measuring its value is difficult, which hampers broader adoption. Ensuring a clear understanding and alignment of AI initiatives with business goals is crucial. Without a strategic vision, AI projects can become disjointed and fail to deliver expected outcomes. "We need to get in front of our board and C-level colleagues to understand their expectations and plans," says Jeffrey Wheatman, Senior Vice President, Cyber Risk Evangelist at Black Kite.


Scaling AI solutions from pilot projects to full-scale operations is a common challenge. Organizations often find it difficult to replicate success across different departments or use cases.

According to a Gartner survey, a substantial issue is moving AI models from pilot to production. On average, only 54% of AI models make this transition, indicating a persistent gap that hinders broader adoption.

This issue is partly due to a lack of alignment with business value and inadequate governance structures to manage the complexity of thousands of deployed models.


Tips to address challenges of adopting AI in business:

  • Use clear metrics and KPIs to measure the impact of AI on business performance and make data-driven decisions.

  • Continuous tracking and analysis can help identify areas for improvement and inform future AI investments.
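A KPI can be as simple as return on investment for a pilot. The formula below and the dollar figures in the example are illustrative, not drawn from any survey cited in this article:

```python
def ai_roi(benefit: float, cost: float) -> float:
    """Simple ROI: net benefit over total cost, as a percentage."""
    return (benefit - cost) / cost * 100

# Hypothetical pilot: $250k in measured savings against $100k total cost.
print(f"{ai_roi(250_000, 100_000):.0f}%")   # → 150%
```

Tracking even a crude metric like this per project makes the pilot-to-production decision a data-driven one rather than a judgment call.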



The AI challenge of trust


Lack of trust is a significant barrier to AI adoption. Many business leaders are skeptical about the reliability and accuracy of AI algorithms. A Deloitte survey found that only 42% of executives have "a high level of trust" in their organization's own use of AI technology, citing concerns such as reliability, bias, and security. This skepticism in AI adoption often stems from a lack of transparency from AI vendors and a limited understanding of how AI works and the potential risks involved.


"Vendors are using AI whether they're telling you about it or not. You need to figure out what the implication is for your businesses. Establishing trust requires transparency from AI vendors and continuous validation of AI outputs," says Jeffrey Wheatman, Senior Vice President, Cyber Risk Evangelist at Black Kite.

Wheatman emphasizes the need to push vendors to address biases in AI algorithms and promote responsible AI practices. Building trust requires transparency and open communication about the capabilities and limitations of AI systems, even when vendors are not forthcoming about their usage. It also involves addressing data privacy concerns and providing assurance that AI is used ethically and responsibly.


Trust is vital to AI adoption, but addressing bias remains an open challenge, underscoring the importance of responsible technology use.


Tips to overcome the challenge of trust when adopting AI: 

  • Establish AI governance frameworks that include ethical guidelines and compliance measures.

  • Be transparent and urge vendors to be transparent about data usage and AI decision-making processes.

  • Engage with stakeholders, including employees and customers, to address their concerns and build trust. Consider partnering with experienced AI vendors and investing in explainable AI technologies that provide insights into how decisions are made.

  • Ongoing testing and validation of AI systems can also help build trust within the organization.

  • Invest in educating your teams about AI technologies and promote a culture of transparency and accountability.



The challenge of addressing bias and ethical concerns


AI adoption brings forth numerous ethical and legal questions. Issues related to data privacy, algorithmic bias, and the potential for job displacement are significant challenges for both businesses and regulators. AI systems are only as good as the data they are trained on. Biases inherent in training data can lead to unethical outcomes, perpetuate existing inequalities, and distort decision-making.


In 2018, Amazon scrapped an AI recruiting tool that showed bias against women. The system, trained on predominantly male resumes, consistently favored male candidates, highlighting the need for ongoing vigilance in AI development.

Bias in AI algorithms is a well-documented issue; without scrutiny and oversight, these biases can perpetuate inequalities and distort decision-making.

"The biases inherent in AI algorithms pose significant challenges. Attackers can exploit these biases, leading to data poisoning and wider security vulnerabilities. Ethical AI is a real thing. Governance for ethical AI can only be done at a use-case-by-use-case level," says Fahim Siddiqui, EVP and CIO at The Home Depot.


To combat this, organizations must implement rigorous oversight and governance. By scrutinizing each AI application individually, companies can better manage the risks associated with bias and ensure that their AI systems operate fairly and transparently. Failure to address the bias and ethical challenges can lead to damaging consequences, including legal repercussions and reputational damage.
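Scrutinizing a use case for bias can start with simple fairness checks. The sketch below computes the disparate-impact ratio between groups — the "four-fifths rule" threshold of 0.8 is a common heuristic from employment-selection guidance, and the decision data here is entirely made up:

```python
def selection_rates(outcomes):
    """Selection rate per group: fraction of positive decisions (1s)."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact(outcomes):
    """Ratio of lowest to highest selection rate.

    Values below ~0.8 are a common red flag (the 'four-fifths rule').
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# 1 = hired, 0 = rejected, grouped by a protected attribute (toy data).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # selection rate 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # selection rate 0.375
}
print(disparate_impact(decisions))   # → 0.5, well below the 0.8 heuristic
```

A check like this would have surfaced the skew in a system like Amazon's recruiting tool before deployment; real fairness audits use richer metrics, but per-group comparison is the starting point.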



Tips to address challenges of adopting AI in business:

  • Implement comprehensive governance frameworks to scrutinize AI algorithms and ensure ethical AI practices.

  • Scrutinize each AI use case individually for ethical AI use.



Addressing AI Risks: Governance and Testing


AI introduces new risks that need to be managed. From data poisoning to the exploitation of AI systems, businesses must be vigilant. Effective governance and robust testing are essential to mitigate the risks associated with AI.

Robust governance frameworks are necessary to manage AI-related risks. These frameworks should include policies for data management, ethical guidelines, and protocols for continuous monitoring and testing of AI systems.


"It's crucial to have a clear policy around the use of AI. Companies should ask their AI vendors detailed questions about how they use AI to ensure alignment with their governance standards," says Jeffrey Wheatman, Senior Vice President, Cyber Risk Evangelist at Black Kite.

This proactive approach helps organizations understand the potential risks and establish necessary controls.


A meticulous focus on the modularity of AI components can significantly reduce risks and improve the overall reliability and performance of AI solutions.

"We keep our software isolated with loose coupling, ensuring we understand the behavior of individual components before integrating them into the system," says Fahim Siddiqui, EVP and CIO at The Home Depot.

By keeping software components isolated with loose coupling, organizations can ensure that the behavior of each component is thoroughly understood before it is integrated into the larger system. This approach allows for more precise troubleshooting, enhanced security, and easier updates without disrupting the entire system.
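The loose-coupling pattern can be sketched in a few lines: downstream code depends on a narrow interface, so a component can be tested in isolation and swapped without touching its callers. The interface and classes here are hypothetical illustrations, not The Home Depot's actual architecture:

```python
from typing import Protocol

class ThreatScorer(Protocol):
    """Narrow interface: any scorer implementing it can be swapped in."""
    def score(self, event: dict) -> float: ...

class RuleBasedScorer:
    """One interchangeable component, testable entirely on its own."""
    def score(self, event: dict) -> float:
        return 0.9 if event.get("failed_logins", 0) > 5 else 0.1

class Pipeline:
    """Depends only on the ThreatScorer interface, not a concrete model,
    so an ML-based scorer can replace the rules without changing triage()."""
    def __init__(self, scorer: ThreatScorer):
        self.scorer = scorer

    def triage(self, event: dict) -> str:
        return "alert" if self.scorer.score(event) > 0.5 else "ok"

pipeline = Pipeline(RuleBasedScorer())
print(pipeline.triage({"failed_logins": 8}))   # → alert
```

Because `Pipeline` never imports a concrete scorer, each component's behavior can be verified before integration, which is exactly the isolation the quote describes.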


Tips to address challenges of adopting AI in business:

  • Develop and implement robust governance frameworks and continuously test AI systems to identify and mitigate risks.

  • Adopt a modular approach to AI development, emphasizing loose coupling and isolation of components.

  • Conduct rigorous testing of individual components to ensure they function correctly before integration. This strategy will help manage complexity, minimize errors, and streamline the AI adoption process.




The challenge of skills and training


Consider the case of a major bank that inadvertently uploaded sensitive financial data into an AI analysis engine, exposing proprietary information to potential breaches. How can organizations navigate the delicate balance of accelerating adoption and filling the skill and training gap?


The rapid adoption of AI also necessitates a shift in skill sets. Employees need to be trained not only on how to use AI-based solutions but also on how to validate the outputs the AI solutions generate.

"Train people on: here is the tool, here's how it might help you, but here's how you use it responsibly," says one cybersecurity expert, emphasizing the need for comprehensive training programs.

The demand for AI experts far exceeds the supply. Finding professionals with the right mix of skills in data science, machine learning, and AI development is a significant challenge. Furthermore, retaining such talent can be difficult due to the competitive job market. According to Gartner, a significant challenge in AI adoption is the shortage of skilled professionals. Organizations struggle to find individuals with the expertise required for developing, deploying, and managing AI systems.

Establishing comprehensive governance frameworks and policies around AI use is essential. By scrutinizing AI algorithms, ensuring transparency from vendors, and investing in employee training, organizations can harness AI's potential while mitigating security risks.


Tips to address challenges of adopting AI in business:

  • Invest in training and upskilling existing employees.

  • Partner with educational institutions to create internship and co-op programs that can serve as talent pipelines.

  • Consider remote work options to tap into a broader talent pool.

  • Train people on how to use AI responsibly, validate its outputs, and write good prompts. This ensures that AI enhances their jobs rather than replaces them.



The challenge of high cost


Organizations struggle with the substantial investment associated with AI implementation. The costs associated with purchasing AI software, data management, hardware, and other necessary infrastructure can be prohibitive for many organizations. Additionally, there is the financial burden of hiring skilled professionals who can develop, manage, and maintain AI systems.


Effective strategies to mitigate these costs include starting with small pilot programs, gradually scaling up, and partnering with AI vendors to leverage pre-built solutions. However, dependence on specific AI vendors for tools and platforms can create lock-in situations, limiting flexibility and increasing long-term costs.


Tips to address challenges of adopting AI in business:

  • To mitigate these costs, businesses can start with small-scale AI projects that offer quick wins and clear ROI.

  • Leveraging cloud-based AI solutions can also reduce upfront expenses, as they often use subscription-based pricing.



Addressing resistance to change


Adopting AI often requires a cultural shift within the organization. Employees may fear job loss or be resistant to changing long-established practices. Overcoming this resistance is crucial for successful AI implementation.


Research indicates that in successful AI adoptions, for every dollar spent on developing AI technology, three dollars are spent on change management efforts, including training and driving adoption. This investment aims to ensure that employees are equipped to leverage AI effectively while maintaining a critical eye on its outputs.




Tips to address challenges of adopting AI in business:

  • Foster a culture of innovation and continuous learning.

  • Communicate the benefits of AI clearly and involve employees in the adoption process.

  • Providing training programs and communicating how AI can augment rather than replace human roles can help alleviate fears.



The challenge of data quality, integration, and data management


Data is the lifeblood of AI systems, but many organizations struggle with data silos, inconsistent data formats, and poor data quality. AI systems rely on vast amounts of high-quality data to function effectively. Poor data quality and limited data availability can lead to inaccurate AI outputs that distort decision-making, while integration challenges can delay AI projects.


Therefore, investing in data management is crucial for successful AI integration. Ensuring high-quality data and integrating diverse data sources are major hurdles, so organizations must prioritize data governance to maintain high data quality standards and ensure data availability for AI training and testing.

Data is the new core capability of digital business

With increased reliance on data, organizations must prioritize data privacy and security. AI models are only as reliable as the data they are trained on. Organizations must adhere to strict privacy policies and security measures to protect sensitive data from cyber threats and potential breaches. The implementation of strict policies and procedures for collecting, storing, and handling sensitive data is crucial. Additionally, regular audits and updates to these protocols are essential in the constantly evolving landscape of data privacy laws.


Tips to address challenges of adopting AI in business:

  • Even if your data isn't in the best shape, and you lack a formal data quality process, start small. Focus on cleaning and standardizing key data sets and gradually expand your efforts.

  • Implement data governance protocols and invest in data quality tools to help maintain high-quality data.

  • Implement robust data management practices.

  • Establish a centralized data repository and ensure data consistency across the organization.

  • Investing in data cleaning and preprocessing tools can also enhance data quality.
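"Start small" can mean a cleaning pass as basic as the sketch below: standardize casing and whitespace, drop records missing required fields, and deduplicate. The field names and sample records are invented for illustration:

```python
def clean_records(records):
    """Minimal cleaning pass: normalize casing/whitespace, drop rows
    missing required fields, and deduplicate on email."""
    seen, cleaned = set(), []
    for r in records:
        email = (r.get("email") or "").strip().lower()
        name = (r.get("name") or "").strip().title()
        if not email or not name or email in seen:
            continue  # incomplete or duplicate record
        seen.add(email)
        cleaned.append({"name": name, "email": email})
    return cleaned

raw = [
    {"name": "  ada lovelace ", "email": "ADA@example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com"},   # duplicate
    {"name": "", "email": "grace@example.com"},             # missing name
]
print(clean_records(raw))   # → one clean, deduplicated record
```

Rules like these, applied first to the handful of data sets an AI pilot actually depends on, deliver quick quality wins before any formal governance program exists.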


Are there unintended consequences?

While AI promises immense benefits, it also comes with risks. Unintended consequences are often the result of complex algorithms that operate in ways even their creators cannot fully predict. We share some use cases where the adoption of AI led to unintended consequences.


  • Amazon's Recruiting Tool: Amazon developed an AI recruiting tool that was found to be biased against women. The tool favored resumes that included male-oriented language, leading to discriminatory hiring practices. This highlights the challenge of bias in AI systems.

  • IBM Watson for Oncology: IBM Watson for Oncology faced criticism for sometimes providing incorrect or unsafe treatment recommendations. This underscores the importance of rigorous validation and human oversight in AI applications in healthcare.

  • Uber's Autonomous Vehicles: Uber's autonomous vehicle program experienced a setback when one of its self-driving cars was involved in a fatal accident. This incident raised concerns about the safety and readiness of AI technologies for real-world deployment.

AI solutions must undergo rigorous testing and validation to ensure AI systems function as intended without causing harm. It also raises ethical considerations, such as ensuring AI operates fairly and transparently.


Conclusion

The challenges of adopting AI in modern business are indisputable. However, by addressing unintended consequences, building trust, addressing bias and ethical concerns, investing in skills and training, implementing robust governance frameworks, understanding and measuring value, considering costs, and managing change effectively, organizations can overcome these hurdles and unlock the full potential of AI. The key lies in taking a measured, informed approach that balances innovation with practicality.

In the words of Fahim Siddiqui, EVP and CIO at The Home Depot, "Engage actively but don’t burn yourself." This approach will help companies harness the full potential of AI while safeguarding against its inherent risks.

Navigating the balance between innovation and risk management becomes crucial as AI technologies become more integrated into organizational frameworks.


References

AI Barbarians at the Gate: The New Battleground of Cybersecurity and Threat Intelligence

Gartner


About Us


Our story began with the deep desire to drive tangible, visible, and measurable outcomes for clients. With that as our guiding beacon, we launched Gravitas Consulting – a boutique consulting firm specializing in bringing Insight to Oversight.


We help our clients scale and improve their businesses by the thoughtful application of Intelligent Information to guide decisions and actions. We leverage our data analytics and visualization, enterprise program and change management, and customer experience design expertise to provide leaders with the intelligence they need to do what they do best, even better.


At Gravitas, we measure success by only one metric: each client’s satisfaction with our ability to drive Outcomes that matter. We stand behind this belief by putting a portion of our fees at risk if we do not meet the commitments we promise.


Our promise to clients is simple: we drive outcomes that matter.

