OpenAI building new team to stop superintelligent AI going rogue

When experts at the forefront of artificial intelligence warn about the potentially disastrous effects of highly sophisticated AI systems, it pays to listen closely.

Geoffrey Hinton, regarded as one of the “godfathers” of AI for his ground-breaking contributions to the field, has said that given the speed at which AI is being developed, it is “not inconceivable” that superintelligent AI — AI that surpasses human intelligence — could endanger civilization. That warning, issued only a few months ago, underscores the gravity of the situation.

Sam Altman, CEO of OpenAI, the company behind the widely popular ChatGPT chatbot, has openly admitted to feeling “a little bit scared” about the potential impact of advanced AI systems on society.

Those worries have now translated into concrete action. OpenAI recently created a new division, dubbed Superalignment, tasked with heading off the turmoil — or worse — that superintelligent AI could cause.

In announcing the initiative, OpenAI stressed that while superintelligence could help solve some of humanity’s most pressing problems, it also carries enormous power that could prove dangerous. The company is aware that a mismanaged superintelligent AI could leave humanity disempowered or even extinct. Superalignment’s goal is to ensure that this transformative technology is developed and used ethically and beneficially.

Although OpenAI concedes that superintelligent AI may still seem a long way off, it believes such systems could arrive as early as 2030. And no technology currently exists that can reliably steer or control a superintelligent AI and prevent it from behaving in undesirable ways.

To tackle this challenge, OpenAI aims to build a “roughly human-level automated alignment researcher” that would carry out safety checks on superintelligent systems. Managing the risks of the technology will also require new governance institutions and a solution to the core problem of superintelligence alignment.

For Superalignment to have an impact, OpenAI is committed to assembling a team of top machine learning researchers and engineers, whose expertise will be crucial to the project’s success.

The company openly acknowledges the scale of the undertaking, calling it an “incredibly ambitious goal” while conceding that success is not guaranteed. Still, it remains optimistic that a focused, concerted effort can solve the problem.

Even before superintelligence arrives, experts expect ground-breaking AI tools such as OpenAI’s ChatGPT and Google’s Bard to bring significant changes to the workplace and society at large.

Governments around the world are racing to pass legislation for the fast-developing AI sector in an effort to ensure the technology is used safely and ethically. But with each nation holding its own views on AI, the lack of a unified strategy could produce drastically different policies and outcomes. And it is these divergent approaches that will make Superalignment’s goal all the harder to achieve.
