Europe is mulling tough new limits on the use of facial recognition and AI
The European Union is considering new legally binding requirements for developers of artificial intelligence in an effort to ensure modern technology is developed and used in an ethical way.
The EU’s executive arm is set to propose that the new rules apply to “high-risk sectors,” such as healthcare and transport, and to suggest the bloc update its safety and liability laws, according to a draft of a so-called “white paper” on artificial intelligence obtained by Bloomberg. The European Commission is due to unveil the paper in mid-February, and the final version is likely to change.
The paper is part of the EU’s broader effort to catch up to the U.S. and China on advancements in AI, but in a way that promotes European values such as user privacy. While some critics have long argued that stringent data protection laws like the EU’s could hinder innovation around AI, EU officials say harmonizing rules across the region will boost development.
European Commission President Ursula von der Leyen has pledged that her team will present a new legislative approach on artificial intelligence within the first 100 days of her mandate, which began Dec. 1. She has handed the task of coordinating the effort to the EU’s digital chief, Margrethe Vestager.
A spokesman for the Brussels-based Commission declined to comment on leaks but added: “To maximize the benefits and address the challenges of Artificial Intelligence, Europe has to act as one and will define its own way, a human way. Trust and security of EU citizens will therefore be at the center of the EU’s strategy.”
The EU is also considering new obligations for public authorities around the deployment of facial recognition technology and more detailed rules on the use of such systems in public spaces. However, the provision on facial recognition isn’t among the three policy options officials recommend that the commission pursue.
That provision would prohibit the use of facial recognition by public and private actors in public spaces for several years, to allow time to assess the risks of the technology.
“Such a ban would be a far-reaching measure that might hamper the development and uptake of this technology,” the Commission says in the document, adding that it’s therefore preferable to focus on implementing relevant provisions in the EU’s existing data protection laws.
As part of the recommended policy measures, the EU also wants to urge its member states to appoint authorities to monitor the enforcement of any future rules governing the use of AI, according to the document.
In the draft, the EU defines high-risk applications as “applications of artificial intelligence which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity.”
Artificial intelligence is already subject to a variety of European regulations, including fundamental-rights rules on privacy and non-discrimination, as well as product safety and liability laws. But those rules may not fully cover all the specific risks posed by new technologies, the Commission says in the document. For instance, product safety laws currently wouldn’t apply to services based on AI.
The EU’s AI strategy will build on previous work coordinated by the Commission, including reports published in the past year by a committee of academics, experts, and executives. EU rules often reverberate across the globe, as companies don’t want to build software or hardware that would be banned from the bloc’s vast market.
One of the reports outlined seven key requirements that AI systems should meet in order to be deemed trustworthy, including human oversight, respect for privacy, traceability, and the avoidance of unfair bias in decisions taken by the systems.
The other report outlined policy and investment recommendations for the EU and its member states. The experts said unnecessarily prescriptive regulation should be avoided but that governments should restrict the development of automated lethal weapons and consider new rules around unjustified tracking through facial recognition or other biometric technologies.
Alphabet Inc.’s Chief Executive Officer Sundar Pichai will also make a rare public appearance in Brussels next week to give a speech at a think-tank about the development of responsible AI ahead of the EU’s February announcement.