The EU's stifling regulation of AI is a huge opportunity for Brexit Britain


Blackleaf
#1
With the UK free now to diverge on EU regulations, we can forge our own path forward in the world of AI...

James Lawson
21 February 2020
The Telegraph


A third of all European AI startups are based in the UK, more than twice as many as France or Germany. Credit: Andrew MacColl/REX


The European Commission has just unveiled its artificial intelligence (AI) strategy, one that it hopes will turn Europe into the global leader via a regulatory framework “based on excellence and trust”. These are laudable aims – the only problem for Brussels is that its plans risk having the opposite effect.

AI’s use is booming. Businesses are increasingly employing this technology, which allows computers to perform tasks that ordinarily require human intelligence, to automate work and to make more informed decisions. More than ten thousand AI companies have been founded since 2015, with over $37 billion of private investment. A global arms race has already begun as countries battle to lead this technology revolution.

The EU’s proposals encourage many best practices. AI should learn from good data. Decisions should not be made by inexplicable “black boxes” but easily understood. There should be human oversight over decisions. Companies should document how they developed their AI. Citizens should know when they are interacting with an AI. These are sensible expectations that help ensure the technology performs effectively and ethically.

The European Commission's broader plans are more confused. The EU is hinting at much tougher regulations, which risk unintended consequences: hindering Europe's progress towards AI leadership and limiting people's ability to benefit from the technology.

The Commission wants to introduce a single approach across all of Europe, to “reach sufficient scale” and “avoid the fragmentation of the single market”. However, technology changes quickly. Regulators may not get things right the first time, and by promoting uniformity they leave little room for diversity or experimentation.

While details remain thin, European Commission President Ursula von der Leyen is encouraging a much more precautionary approach. This stance means that implementations of AI could prove too expensive, or be abandoned by risk-averse entrepreneurs altogether.

Treating technology as guilty until proven innocent does not provide fertile ground for innovation. The EU is seeking to restrict a wide range of AI applications deemed "high risk", but the definition is so unclear that potentially any AI application could meet the threshold.

Brussels has defined this circularly as “areas where, generally speaking, risks are deemed most likely to occur” and provides the examples of healthcare, transport, energy and government. Instead of providing regulatory certainty and a clear focus, these examples cover more than half the European economy.

By holding new technology to a higher standard than what came before, the EU demands that entrepreneurs and scientists only ever come up with plans for problem-free innovation. This is not only an impossible goal, it is undesirable too, because it refuses to take account of the problems with the status quo which might be solved by innovation.

For example, among the projects run by my company, DataRobot, we have used AI to detect forest fires more quickly, enhance water supplies in Africa, identify sepsis in hospitals, fight money laundering in banks, prevent fraud for insurers and cut food waste in supermarkets. AI innovations can raise many legitimate ethical concerns but there is also a risk that regulation gets in the way; we could lose life-enriching and life-saving innovations. A balanced risk assessment has to consider both sides.

The EU has also proposed that after an application is updated, compliance “must be reassessed”. This could have far-reaching implications if it means that every tweak requires another costly regulatory review: the parameters of AI models are constantly being adjusted and reassessed.

This all contrasts with the lighter-touch approach of regulators in the United States. The US guidance says agencies must not “needlessly hamper” AI innovations, and should “avoid a precautionary approach” that holds AI to an “impossibly high standard”. The US government recommends weighing the benefits and costs of AI, and comparing them against the systems being complemented or replaced.

Becoming the global leader in AI is a nice ambition, but easier said than done. Europe was a laggard in the internet revolution. It punched below its economic and cultural weight. Today none of the top 10 global tech companies are European. Productivity improvements have been relatively slow, with companies cautious about implementing new technology. It is unclear how the EU plans to reverse these trends. The current timid approach is an unlikely path to the top.

For Britain this opens up a huge leadership opportunity. With war hero Alan Turing having fathered modern computer science in the 1940s, today the UK remains the European AI leader. Aside from the US, we publish the most research (over a thousand articles in top journals annually) and house the most startups (nine per cent of the world total). A third of all European AI startups are based in the UK, more than twice as many as France or Germany. New talent is emerging, with computer science our second-fastest-growing degree subject.

With the UK free to diverge on regulations, we can forge our own path forward on AI. The combination of our robust legal system, top talent and a more welcoming AI policy would place us in a prime position to reap the benefits of this technological revolution.

James Lawson is the AI evangelist at DataRobot

https://www.telegraph.co.uk/politics...rexit-britain/
 
QAbones
#2
The EU are economic worker and military worker.