
A very good read from a respected source!

INDEPENDENT. OpenAI may have made a ‘dangerous’ artificial intelligence discovery that led to chaos, Elon Musk says.

Musk, who co-founded the company, said he would ‘guess’ that something scared its board.

By Andrew Griffin

OpenAI may have discovered “something dangerous” that caused chaos at the company, Elon Musk has said.

Recent days have seen ChatGPT creator OpenAI fire and then re-hire its chief executive, Sam Altman. Many of the circumstances of that decision remain mysterious, and it is still not clear why OpenAI’s board removed Mr Altman.

Elon Musk co-founded OpenAI in part as a response to concerns that artificial intelligence could prove dangerous to humanity. But he has been critical of its recent direction, including its turn towards operating for profit and its move away from open-sourcing its work.

During the New York Times’s Dealbook conference, Mr Musk said that he had attempted to find out what happened behind the scenes at OpenAI, but had failed to do so. He had reached out to numerous people working at the company, including Ilya Sutskever, the OpenAI chief scientist and board member who is believed to have led the rebellion against Mr Altman, but had not heard anything.

But he suggested that the company may have found “something dangerous” that had worried Mr Sutskever. He said the most likely scenario was a worrying breakthrough that had led the company to try to avoid the danger.

He was asked by journalist Andrew Ross Sorkin whether he meant that he thought something dangerous had been discovered within the company. Mr Musk said that would be his guess.

In the same interview, Mr Musk once again criticised OpenAI’s move away from the open-source and non-profit principles on which it had been founded.

He also suggested that artificial intelligence companies were lying if they claimed their systems were not trained on people’s data. But he said that any lawsuits over the issue would not be settled before we have a “digital god”, and would therefore be irrelevant.
