EU regulators lean on existing laws to control AI as new rules are years from taking effect
The rapid development of artificial intelligence (AI) services like ChatGPT has pushed regulators to rely on existing laws to govern a technology that could transform societies and businesses. The European Union is drafting new AI rules to address the privacy and safety concerns raised by generative AI such as OpenAI’s ChatGPT, but that legislation will take several years to come into force.
In the meantime, governments are applying existing rules to protect personal data and ensure public safety. In April, Europe’s national privacy watchdogs set up a task force to tackle issues with ChatGPT, after Italy’s data protection authority, Garante, temporarily suspended the service and accused OpenAI of breaching the EU’s General Data Protection Regulation (GDPR). The service was reinstated once OpenAI agreed to add age-verification features and to let European users block their information from being used to train its model.
Generative AI models are known for making mistakes, or “hallucinations,” which can spread misinformation. This could have serious consequences if banks or government departments use AI to expedite decision-making, potentially resulting in individuals being unfairly denied loans or benefit payments. In response, regulators are focusing on applying existing rules covering issues such as copyright and data privacy.
We’re sharing details on our approach to safety https://t.co/IRSfIyVxyA
— OpenAI (@OpenAI) April 5, 2023
In the EU, the draft AI Act would require companies like OpenAI to disclose any copyrighted material used to train their models, potentially exposing them to legal challenges. However, proving copyright infringement may be difficult, according to Sergey Lagodinsky, one of the politicians involved in drafting the EU proposals.
French data regulator CNIL is exploring how existing laws might apply to AI, with a focus on data protection and privacy. The organisation is considering using a provision of GDPR that protects individuals from automated decision-making, although it remains uncertain whether this will be legally sufficient.
In the UK, the Financial Conduct Authority is among several state regulators tasked with creating new guidelines for AI. The authority is consulting with the Alan Turing Institute in London and other legal and academic institutions to better understand the technology.
As regulators adapt to the rapid pace of technological change, some industry insiders are calling for greater engagement with corporate leaders. Harry Borovick, general counsel at startup Luminance, told Channel News Asia that dialogue between regulators and companies has been “limited” so far, and voiced concern about striking the right balance between consumer protection and business growth.