According to reports, a draft “white paper” on artificial intelligence shows that the EU is considering new regulations. The rules would place restrictions on AI developers and ensure that these technologies are developed ethically. The European Commission plans to apply the new regulations to “high-risk sectors”, including health care and transportation. EU member states would also be able to update their laws on safety and liability. The EU is expected to publish the relevant documents in mid-February this year.
European Union to ban face recognition in public places
In addition, the European Union is considering rules for government authorities and public agencies that deploy face recognition technology, as well as restrictions on the use of systems such as face recognition in public places. Under these provisions, public and private organizations would have limited access to artificial intelligence technologies, which would help the union assess the risks associated with them.
The “white paper” is also part of the European Union’s effort to catch up with China and the US, both of which have made significant progress in artificial intelligence. At the same time, it aims to uphold European values, such as user privacy.
The EU document states: “On the basis of these existing regulations, the future regulatory framework could go a step further. This includes a time-limited ban on the use of face recognition technology in public places.” In the meantime, “a sound methodology for assessing the impact of this technology and possible risk management measures could be identified and developed.”
The European Union noted that artificial intelligence is already subject to various European regulations. The union is examining provisions on privacy, fundamental rights such as non-discrimination, product safety, and liability, among others. However, these regulations may not fully cover all the specific risks posed by these new technologies. For example, current product safety regulations do not apply to AI-based services.