OpenAI’s New Team to Tackle ‘Superintelligent’ AI Systems

OpenAI, one of the largest artificial intelligence companies, is setting up a team dedicated to managing AI systems that could cause harm to users.

OpenAI, the creator of the widely used chatbot ChatGPT, has been working on the risks posed by artificial intelligence systems that could harm users. As the world delves deeper into artificial intelligence, more and more systems are being built that can be put to criminal use.

On July 5, OpenAI announced on its blog that it is creating a new team to steer and control AI systems much smarter than humans.

OpenAI believes that superintelligence could be the most impactful technology humanity has ever invented, but that its vast power could also be very dangerous, potentially leading to the disempowerment or even the extinction of humanity. If these systems are not managed from the start, the consequences could be dire.

Furthermore, OpenAI believes superintelligent systems could arrive within this decade, a prospect it describes as exciting and unsettling at the same time.

OpenAI’s approach is to build a roughly human-level automated alignment researcher: an AI system that helps keep other AI systems aligned with human values and intent. The team plans to build this system on three main pillars: training AI systems using human feedback, training systems to assist human evaluation, and training AI systems to do alignment research themselves.
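The first of these pillars, training with human feedback, is the same basic idea behind how ChatGPT itself was refined. As a rough illustration only (not OpenAI’s actual implementation), the toy sketch below fits a simple linear reward model to pairwise human preferences, the kind of signal used in reinforcement learning from human feedback; all data, dimensions, and parameters here are invented for the example.

```python
# Illustrative sketch: a toy reward model trained on pairwise human
# preferences (a Bradley-Terry objective, as used in RLHF-style pipelines).
# Data and dimensions are made up purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

dim = 8          # toy "response feature" size; in practice features come from a language model
n_pairs = 500
chosen = rng.normal(0.5, 1.0, size=(n_pairs, dim))     # responses humans preferred
rejected = rng.normal(-0.5, 1.0, size=(n_pairs, dim))  # responses humans rejected

w = np.zeros(dim)  # linear reward model parameters
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(200):
    # Reward scores for each response in a preference pair.
    r_chosen = chosen @ w
    r_rejected = rejected @ w
    # Bradley-Terry loss: -log p(chosen is preferred over rejected).
    p = sigmoid(r_chosen - r_rejected)
    loss = -np.mean(np.log(p + 1e-9))
    # Gradient of the loss with respect to w, then a gradient-descent step.
    grad = -((1 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.3f}")

# The learned reward model can then score new responses, providing the
# feedback signal used to fine-tune a policy model.
```

In a real pipeline the reward model would be a neural network scoring full model outputs, but the preference-comparison training signal is the same.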

OpenAI aims to dedicate 20% of the compute it has secured to date to building this human-level automated alignment researcher. It has appointed its chief scientist Ilya Sutskever and Jan Leike as co-leads of the project, and it has also announced open positions for machine learning researchers and engineers to join the team.

The announcement comes as governments around the world are considering measures to regulate artificial intelligence systems.

On June 14, the European Parliament voted to adopt the EU AI Act, which aims to prohibit certain types of artificial intelligence while imposing restrictions and obligations on others. The European Union is among the first to introduce legislation to regulate AI systems.

According to a press release, the European Parliament stated,

“The rules aim to promote the uptake of human-centric and trustworthy AI and also protect the health, safety, fundamental rights and democracy from its harmful effects.”

The AI practices banned under the act include biometric surveillance, social scoring systems, predictive policing, and certain emotion recognition and untargeted facial recognition systems. Generative AI systems such as Google Bard and OpenAI’s ChatGPT would be allowed to operate on the condition that their output is clearly labeled as AI-generated.

The act classifies any AI system that poses a significant risk to health, safety, fundamental rights, or the environment as high risk. It also classifies AI systems designed to influence voters and the outcome of elections as high risk.

The AI Act follows another major EU framework, the Markets in Crypto-Assets (MiCA) regulation, signed into law on May 31. MiCA defines a crypto asset as a “digital representation of value or rights that can be electronically transferred and stored,” which means cryptocurrencies, tokens, and other digital assets that meet these criteria fall under the scope of MiCA regulation.

In addition, MiCA sets transparency standards, requiring issuers to disclose all relevant information about the crypto assets they issue.

The AI Act, by contrast, is still under discussion and has yet to take effect, and AI developers have expressed concerns about how it will be implemented.

OpenAI CEO Sam Altman traveled to Brussels to discuss the potential drawbacks of over-regulation with EU regulators.

Lawmakers in the US have introduced the National AI Commission Act, which would create a commission to review the nation’s approach to regulating AI technology.

On June 30, Senator Michael Bennet sent a letter urging technology companies to label AI-generated content as such.

Disclaimer: This article was created for informational purposes only and should not be taken as investment advice. An asset’s past performance does not predict its future returns. Before making an investment, please conduct your own research, as digital assets like cryptocurrencies are highly risky and volatile financial instruments.

Author: Puskar Pande
