Market contemplates AI standards amidst regulatory pressure

By Emma Olsson | 29 January 2020

An industry-led artificial intelligence (AI) standard may be forthcoming, according to Bill Wardwell, vice president of strategy and product at Bottomline Technologies.

“I think looking at AI and machine learning at this point from a technology provider standpoint, what you’re going to continue to see is probably more of an industry framework related to guidance around AI,” says Wardwell.

“We’ve seen other firms release plans or information around their approaches to AI and a lot of it is centred on being focused on engaging with your customer and understanding where AI and machine learning can help their experience in using products, and being transparent with customers around how that technology might make a financial application easier for their users … right now the view seems to be more along the lines of an industry approach and framework and ideas for technology companies to leverage.”

Bottomline’s AFP 2019 Conference Survey, published in December 2019, found that 23 percent of financial professionals noted regulatory or legal uncertainty as the biggest issue preventing increased adoption of AI. Whether tech companies will adopt common standards or operate individually is unclear.

“I think we’ll see a mix. You’ll probably see some loose industry consortium that will help provide guidance and information to technology companies that they can look to leverage and insert in their businesses, and then you’ll have organisations that will adopt standards or other models that are appropriate for their technology and their customers.”

At the World Economic Forum in Davos, Switzerland, last week, big tech players such as Google, Microsoft, and IBM discussed AI standardisation and regulation. Both Google and Microsoft's chief executives, Sundar Pichai and Satya Nadella, called for a form of global regulation of AI, emphasising a collaborative effort between tech companies and regulators.

On January 21, the day before the panel, IBM published a list of recommendations for companies using AI, which included hiring an ethics officer and ensuring AI system explainability. The company stated that achieving ethical and explainable AI could be done “without creating new and potentially cumbersome AI-specific regulatory requirements, but rather by adhering to a set of agreed-upon definitions, best practices, and global standards.”

These recommendations follow a leaked European Union whitepaper proposing stricter AI regulations, obtained by Belgian publication EURACTIV and published on January 17. The whitepaper marks a significant departure from past regulation of tech companies, which has mainly focused on data privacy through measures such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

“What we’ve seen mostly is that regulations at this point have been focused around data privacy versus specific regulatory guidance around AI or machine learning. So I can’t really comment on a regulatory approach because we haven’t seen a lot of guidance or information,” says Wardwell.

According to Wardwell, a centralised approach to AI and machine learning would help in implementing an industry framework.
