However, this lack of regulation in the use of artificial intelligence worries governments. At the same time, the developers of these technologies do not want to take responsibility for how users use their services.
In that sense, the European Union is taking steps to set standards that companies such as OpenAI must implement immediately if they want to continue operating in its territory. Now, the United States is also considering joining this initiative. Both political blocs are working on drafting a code of conduct, but doubts and questions remain about the proposals and how they will be implemented.
10 questions about the future of artificial intelligence
When will the code of conduct come into force?
A meeting between Margrethe Vestager, Vice President of the European Commission, and Antony Blinken, the United States Secretary of State, marks the starting point for the AI regulation agenda. However, the code of conduct may take up to three years to come into force.
Does this measure represent the end of AI?
Far from being the end, these collaborations between political blocs show that the world fears the rapid development of artificial intelligence could lead to human extinction. That is why they are moving to regulate AI systems so they can continue to be used as tools.
What do the developers think?
Sam Altman, CEO of OpenAI, was initially put off by the prospect of complying with regulations in Europe. However, with the incorporation of the United States into the initiative, he has become open to governments regulating AI.
What does the rest of the world think about AI?
Each country operates under its own laws. In Latin America, for example, countries tend to follow the rules set by the Western powers.
Does China have its own AI?
The Asian giant has developed its own artificial intelligence systems, which will certainly not follow the code of conduct if it does not align with Beijing's ideas.
How do you plan to mitigate the risks, keeping in mind that the code could be implemented in six months?
The wishes of developers such as Sam Altman are the starting point for introducing self-regulation before an official regulation exists. Even so, the risk would remain high given the huge number of systems that exist today.
Would a code of conduct really limit AI’s capabilities?
It might, where ordinary users are concerned. But machine learning mechanisms already exist, and it is hard to imagine a way to restrict them.
ChatGPT: What does AI think of all this?
“The regulation of artificial intelligence is a complex topic under constant debate in society. Some argue that regulation is necessary to guarantee safety, ethics and transparency in the development and deployment of artificial intelligence systems. Others are concerned about the impact of regulation on innovation and technological progress.”
What is the opinion of Bard AI?
“I understand there has been some discussion about regulating large language models like me as an AI mechanism. I believe regulation can be beneficial in some ways, such as ensuring that I am safe and used responsibly. However, I also recognize that it is important to strike a balance between regulation and innovation. I am still developing, and I believe that regulation should not affect my ability to learn and grow.”
Employment: AI is expected to create 69 million new jobs across 45 economies comprising 673 million workers.
What is the worst unchecked AI risk according to ChatGPT?
Excessive reliance on unregulated AI systems can undermine human autonomy and responsibility. Decision-making algorithms can be opaque and difficult to understand, which weakens accountability and hinders people’s ability to make informed decisions.