After decades of research and development, mostly confined to academia and projects in large organizations, artificial intelligence (AI) and machine learning (ML) are advancing into every corner of the modern enterprise, from chatbots to tractors, and financial markets to medical research. But companies are struggling to move from individual use cases to organization-wide adoption for several reasons, including inadequate or inappropriate data, talent gaps, unclear value propositions, and concerns about risk and responsibility.

This MIT Technology Review Insights report, commissioned by and produced in association with JPMorgan Chase, draws on a survey of 300 executives and interviews with seven experts from finance, health care, academia, and technology to chart the enablers of and barriers to AI/ML deployment.

The following are the report’s key findings:

Businesses buy into AI/ML, but struggle to scale across the organization. The vast majority (93%) of respondents have several experimental or in-use AI/ML projects, with larger companies likely to have greater deployment. A majority (82%) say ML investment will increase during the next 18 months, and they closely tie AI and ML to revenue goals. Yet scaling remains a major challenge, as are hiring skilled workers, finding appropriate use cases, and demonstrating value.

Deployment success requires a talent and skills strategy. The challenge goes further than attracting core data scientists. Firms need hybrid and translator talent to guide AI/ML design, testing, and governance, and a workforce strategy to ensure all users play a role in technology development. To compete for this talent, companies should offer workers clear opportunities, career progression, and meaningful impact. For the broader workforce, upskilling and engagement are key to supporting AI/ML innovation.


Centers of excellence (CoEs) provide a foundation for broad deployment, balancing technology-sharing with tailored solutions. Companies with mature capabilities, usually larger companies, tend to develop systems in-house. A CoE provides a hub-and-spoke model, with a core ML team consulting across divisions to develop widely deployable solutions alongside bespoke tools. ML teams should be incentivized to stay abreast of rapidly evolving AI/ML data science developments.

AI/ML governance requires robust model operations, including data transparency and provenance, regulatory foresight, and responsible AI. The interaction of multiple automated systems can amplify the risks of advanced data science tools, including cybersecurity issues, unlawful discrimination, and macro volatility. Regulators and civil society groups are scrutinizing AI that affects citizens and governments, with special attention to systemically important sectors. Companies need a responsible AI strategy based on full data provenance, risk assessment, and checks and controls. This requires technical interventions, such as automated flagging of AI/ML model faults or risks, as well as social, cultural, and other business reforms.

Download the report

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
