The past decade has been a productive time for AI. Periods of rapid advancement like these tend to result from just the right blend of environmental conditions. In the case of AI, several factors have played a part, including:
- important theoretical breakthroughs by AI researchers such as Geoffrey Hinton, whose work on deep neural networks transformed image classification
- business investment in AI across all sectors
- the democratisation of development tools via platforms such as Google’s TensorFlow
- the onward march of Moore’s law and the availability of cheap, cloud computing power
- an abundance of data, powered in part by the advancement of mobile technologies
While each element has contributed its share, it’s hard to overstate the significance of the last point: data. In terms of abundance, quality and relevance to our daily lives, data has perhaps played the biggest part in fuelling these advances. Alongside methods for optimising the initial conditions and configuration of a neural network, improving the scale and richness of the datasets used to train deep learning systems is currently the most common way to improve their performance. So far, no alternative has been found that can better sate the appetite of these algorithms.
With this in mind, I was interested to read a report last week that the European Commission is considering a ban on facial recognition technology in public spaces. It’s the latest in a long line of signposts pointing to the trade-offs between individual privacy rights on the one hand, and the utility, convenience and opportunities afforded by advances in technology on the other. And it’s prompting questions about the extent to which concerns over privacy might hamper further progress and innovation in AI.
One BBC article covering the report highlights the differences in attitudes between nations and cultures when it comes to balancing privacy and utility. In western nations, privacy scandals such as the NSA’s global surveillance programmes exposed by Edward Snowden, or Cambridge Analytica’s non-consented use of Facebook users’ personal data, have left the public deeply suspicious of the motivations of governments and big tech when it comes to personal information. In China, by contrast, there is far greater acceptance of the technology’s incursion into private life in exchange for the benefits it can bring.
In his book AI Superpowers: China, Silicon Valley, and the New World Order, the venture capitalist Kai-Fu Lee brilliantly explains the competitive advantage these attitudes can bring to Chinese businesses developing AI. He and others point out that China has a distinct edge when it comes to data ecosystems, both in volume and quality. A more relaxed public attitude to privacy is undoubtedly a major factor behind this. It has enabled businesses like Baidu and Tencent to harvest vast quantities of data at the boundary between the online and offline worlds, painting a rich picture of individual preferences, habits, social interactions and political inclinations that makes the tracking activities of Facebook and Google seem positively unobtrusive by comparison.
The EU’s GDPR singles out automated decision-making as a principal consideration in its provisions for individual rights. It requires that businesses give data subjects clear information about how such processing works, allow human intervention in those decisions if a subject requests it, and demonstrate that regular checks are performed to make sure the process is working as it should. To western ears, these measures sound eminently sensible. But they contrast with the techno-utilitarian approach typical in China, where certain trade-offs are accepted as inevitable in support of technological, social and economic progress; measures like these would be unlikely to wash with many businesses there. So far, the rapid commercialisation of AI technology in the Chinese market suggests consumers there are happy with their side of the current bargain.
That said, in response to regulatory requirements like those of the GDPR, big tech has begun moving in some new and interesting directions. Explainable AI (XAI), for example, aims to overcome the problem that trained deep learning models are still something of a black box. Traditionally, we have had difficulty explaining why a given image classification model has become good at telling the difference between a Highland Terrier and a Border Collie. We just know that it can, having been exposed to a large number of labelled training examples. Answering such questions is an important step towards improving control and could bring many other benefits to businesses.
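One family of XAI techniques treats the trained model as a black box and probes it from the outside. As a minimal illustrative sketch (the toy model, features and data below are invented for the example, not a real trained network), permutation importance measures how much a model’s accuracy drops when one input feature is shuffled across examples — a large drop suggests the model leans heavily on that feature:

```python
import random

# A toy "black-box" model: in practice this would be a trained deep
# network whose internals we cannot easily inspect.
def black_box_model(features):
    return 1 if features[0] * 2 + features[1] * 0.1 > 1.0 else 0

def permutation_importance(model, rows, labels, feature_index, seed=0):
    """Drop in accuracy when one feature's values are shuffled across rows."""
    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    shuffled_col = [r[feature_index] for r in rows]
    random.Random(seed).shuffle(shuffled_col)
    permuted = [
        r[:feature_index] + [v] + r[feature_index + 1:]
        for r, v in zip(rows, shuffled_col)
    ]
    return baseline - accuracy(permuted)

# Invented example data; labels match the model's own predictions.
rows = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.2], [0.1, 0.8]]
labels = [black_box_model(r) for r in rows]

# The heavily weighted feature 0 should matter more than feature 1.
print(permutation_importance(black_box_model, rows, labels, 0))
print(permutation_importance(black_box_model, rows, labels, 1))
```

Production XAI goes much further than this (saliency maps, SHAP values and the like), but the underlying idea is the same: interrogate the model’s behaviour rather than its weights.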
But western policy-makers must be mindful of the potential side-effects of regulation. At a talk on AI in financial services that I attended recently, one speaker referred to EU laws that no longer permit discrimination between male and female drivers when it comes to setting car insurance premiums. Preparation for the changes consumed vast time and resources within the affected businesses. But the speaker went on to point out that the only difference now is that artifacts emerge in the underwriting data that serve as proxies for gender, ultimately leading to much the same decisions that would have been made before the law was changed.
Take the use of AI in making decisions about an individual’s credit-worthiness as a further example. Behavioural economists have long told us that while we humans believe we are the ultimate authority when it comes to quality decision-making based on imprecise, real-world inputs such as those needed here, research shows time and again that this confidence is misguided. Factors like the availability of recent instances of similar situations, confirmation bias and even the length of time since our last meal have a far greater effect on the quality and consistency of our decisions than we realise. Unlike most AI today, we humans are just better practised at justifying our conclusions post hoc. We must therefore be mindful of regulation taking us down a route that ultimately kowtows to the supposed superiority of human decisions, with all their fallibility, as a backstop.
And while western businesses dedicate substantial time and resources to matters of privacy and data, such matters will remain a lesser consideration for Chinese firms. When it comes to data, China already has the edge in a key part of the AI ecosystem. One can only foresee this gap widening as the combined characteristics of scale, real-world proximity and more relaxed public attitudes towards the privacy-utility trade-off continue to power the Chinese AI juggernaut onward.