Artificial Intelligence (AI) is developing at a rapid pace.
For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data.
Ever since OpenAI released ChatGPT into the wild in late 2022, the world has been abuzz with talk of generative artificial intelligence and the future it could create.
What is Artificial Intelligence?
Artificial Intelligence (AI) can take many forms. As such, there is no agreed single definition of what it encompasses.
In general terms, it can be regarded as the theory and development of computer systems able to perform tasks that normally require human intelligence.
According to IBM, the current real-world applications of AI include:
Speaking what has been written (text-to-speech, natural language processing).
Generally looking for patterns in large amounts of data (machine learning; a minimal sketch follows this list).
Extracting information from pictures (computer vision).
Pulling insights and patterns out of written text (natural language understanding).
Moving autonomously through physical spaces based on sensor input (robotics).
Transcribing or understanding spoken words (speech-to-text and natural language processing).
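To make the machine-learning item above concrete, here is a minimal, illustrative sketch in Python using scikit-learn: a k-means model finds two groups in toy data without the grouping rule ever being programmed explicitly. The dataset and parameters are invented purely for illustration.

```python
# Illustrative sketch: "looking for patterns in large amounts of data"
# with a simple k-means clustering model from scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

# Toy dataset: 2-D points forming two rough groups (a stand-in for real data).
rng = np.random.default_rng(seed=0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),  # group A
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),  # group B
])

# The model learns the pattern (two clusters) from the examples alone,
# without the grouping rule being explicitly programmed.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.cluster_centers_)  # approximate centres of the two groups
```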
The House of Commons Library, drawing on research from Stanford University and other sources, offers the following definitions:
• Artificial General Intelligence (AGI) is an AI system that can undertake any intellectual task/problem that a human can. AGI is a system that can reason, analyse, and achieve a level of understanding that is on a par with humans; something that has yet to be achieved by AI.
• Narrow AI is designed to perform a specific task (such as speech recognition), using information from specific datasets, and cannot adapt to perform another task.
• Machine learning is a method that can be used to achieve narrow AI; it allows a system to learn and improve from examples, without all its instructions being explicitly programmed.
• Deep learning is a type of machine learning whose design has been informed by the structure and function of the human brain and the way it transmits information (a toy example follows below).
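As a toy illustration of that last definition (not part of the Library's briefing), the sketch below builds a tiny layered network in PyTorch. The layer sizes are arbitrary choices for demonstration; what matters is the stacking of layers, which is what makes the learning "deep".

```python
# Illustrative sketch: a tiny "deep" neural network in PyTorch,
# loosely mirroring layered, brain-inspired information flow.
import torch
import torch.nn as nn

# Two stacked layers of artificial "neurons" with a non-linear activation;
# depth (multiple layers) is what distinguishes deep learning.
net = nn.Sequential(
    nn.Linear(4, 8),  # input layer: 4 features -> 8 hidden units
    nn.ReLU(),        # non-linearity, loosely analogous to a neuron "firing"
    nn.Linear(8, 2),  # output layer: scores for 2 classes
)

x = torch.randn(1, 4)  # one example with 4 features
print(net(x))          # raw scores for the 2 classes
```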
As part of its digital strategy, the EU wants to regulate artificial intelligence to ensure better conditions for the development and use of this innovative technology.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposal, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users, and the different risk levels will mean more or less regulation (a schematic sketch of this tiering follows). Once approved, these will be the world’s first rules on AI.
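One schematic way to picture the proposal's risk-based approach is a simple tier-to-obligation mapping. The sketch below assumes the four tiers described in the Commission's proposal (unacceptable, high, limited, minimal risk); the obligation strings are loose paraphrases for illustration, not legal text.

```python
# Schematic sketch of the proposal's risk-based approach: a system's tier
# determines how much regulation applies. Obligation summaries are
# simplified paraphrases, not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry (assessment, oversight)"
    LIMITED = "transparency obligations (e.g. disclose that AI is used)"
    MINIMAL = "largely unregulated"

def obligation(tier: RiskTier) -> str:
    """Return the simplified obligation attached to a risk tier."""
    return tier.value

print(obligation(RiskTier.LIMITED))
```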
Risks of AI
The risks of AI include:
a. Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children.
b. Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics.
c. Real-time and remote biometric identification systems, such as facial recognition.
Generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements (a toy illustration of the first requirement follows this list):
Disclosing that the content was generated by AI.
Designing the model to prevent it from generating illegal content.
Publishing summaries of copyrighted data used for training.
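A hypothetical sketch of the first requirement might look like the following: a wrapper that attaches a human- and machine-readable disclosure to generated content. The generate function here is a stand-in placeholder, not a real model API.

```python
# Hypothetical sketch: attaching a clear disclosure to AI-generated content.
# `generate` is a placeholder for any text-generation call, not a real API.

def generate(prompt: str) -> str:
    return f"(model output for: {prompt})"  # placeholder generation

def generate_with_disclosure(prompt: str) -> dict:
    """Wrap generated text with human- and machine-readable disclosures."""
    return {
        "content": generate(prompt),
        "disclosure": "This content was generated by an AI system.",
        "ai_generated": True,  # machine-readable flag for downstream tools
    }

print(generate_with_disclosure("Summarise the EU AI Act"))
```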
That said, there are also reasons why regulating AI is difficult, and arguments against doing so. They include:
I. Complex and Challenging Implementation: Regulations relating to world-changing technologies can often be too vague or broad to be applicable. This can make them difficult to implement and enforce across different jurisdictions. This is particularly true when accounting for the lack of clear standards in the field.
II. Stifling Innovation and Progress: The case could be made that regulation will slow down AI advancements and breakthroughs, and that not allowing companies to test and learn will make them less competitive internationally. However, we have yet to see definitive proof that this is true.
III. Potential for Overregulation and Unintended Consequences: Regulation often fails to adapt to the fast-paced nature of technology. AI is a rapidly evolving field, with new techniques and applications emerging regularly; new challenges, risks and opportunities continuously arise, and regulators need to remain agile and flexible enough to deal with them.
Why Generative Artificial Intelligence needs to be regulated
A. Safeguarding Human Rights and Safety.
B. Ensuring Ethical Use of Artificial Intelligence.
C. Mitigating Social and Economic Impact.
Conclusion
Governments should cooperate to establish broad regulatory frameworks while promoting knowledge sharing and interdisciplinary collaboration. Diverse stakeholders need to be involved in regulatory discussions, and the public should be engaged in AI policy decisions.