There is growing concern over how AI is being pushed by big tech companies
GETTY IMAGES

AI pioneer quits Google over “scary” pace of change

 

Tuesday, May 02, 2023

By Mark Sellman, Technology Correspondent

Reprinted from The Times [London]

 

A pioneering British computer scientist who is known as the “godfather of AI” has quit his role at Google so he can warn about the technology’s dangers.

Geoffrey Hinton, who developed the foundations of modern machine learning, said that the speed of change in AI was “scary” and needed to be regulated.

“Look at how [AI] was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.”

Hinton revealed his resignation as a vice-president and engineering fellow at Google in an interview with The New York Times.

He later tweeted: “I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.”

Hinton said he had changed his view that it would be some time before AI was more intelligent than human beings.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the newspaper. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Geoffrey Hinton developed the foundations of modern machine learning
DANIEL EHRENWORTH/GOOGLE/AP

He added: “Maybe what is going on in these systems . . . is actually a lot better than what is going on in the brain.”

His move adds a significant voice to the growing concern over the pace of change in AI that is being driven by big tech companies.

Microsoft is the main commercial backer of OpenAI, which developed ChatGPT, and Google is pushing forward to release AI products as it fears losing its dominance in the search market.

Meta and Amazon have also started to pivot their priorities towards the technology.

Hinton, a winner of the prestigious Turing Award, backed a recent call from leading researchers and tech executives for companies to pause their most advanced AI development.

“I don’t think they should scale this up more until they have understood whether they can control it,” he said. He added that leading scientists should collaborate to control the technology.

In March an open letter from the Future of Life Institute, signed by other leading figures in the field, called for a six-month halt in work on the most powerful AI.

Hinton said that his immediate fears over AI focused on its appropriation by “bad actors” and the impact of job losses.

Large language models such as ChatGPT are already at the stage where they can write copy and computer code quite fluently. Champions of AI believe that this will enable people to increase productivity and outsource menial tasks to the technology.

A recent paper by researchers at Stanford and MIT, assessing call-centre workers using AI, found that it could increase the productivity of lower-skilled staff by up to 35 per cent.

Hinton acknowledged that it “takes away the drudge work” but added in the interview that “it might take away more than that”, in a reference to job losses.

He said that it would be hard to “prevent the bad actors from using it for bad things”, especially in propagating disinformation, where people would “not be able to know what is true anymore”. In addition to text generators such as ChatGPT, there are now sophisticated image, audio and video AI tools that can produce high-quality content from simple text prompts.

Hinton’s work on neural networks, a key element in modern AI, has earned him legendary status in the community. He developed a technique that enables the networks to learn and created an AI that can recognise images, both of which are considered foundational.

Ilya Sutskever, one of his Ph.D. students with whom he created the image recognition AI, is now chief scientist at OpenAI.

He appeared wistful about these breakthroughs in his interview, saying: “I console myself with the normal excuse: if I hadn’t done it, somebody else would have.”

Hinton was born in London and gained his Ph.D. from the University of Edinburgh before moving to Canada, where he is a professor of computer science at the University of Toronto.

Jeff Dean, Google’s chief scientist, said: “Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google. I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Ian Hogarth, co-author of the State of AI report, said: “Hinton’s resignation is a watershed moment: he is arguably the most influential scientist in the entire field of deep learning and now explicitly argues that we should stop making AI systems more powerful until we understand how we can control them.

“I imagine we will see more leading AI scientists start to speak out in the coming months as the danger of private companies racing towards God-like AI with no oversight becomes more apparent.”


RELATED:

Paedophiles using AI to create child abuse images

 

Monday, May 01, 2023

By Mark Sellman, Technology Correspondent

Reprinted from The Times [London]

 

A popular new AI program is being used to transform pictures of children from the internet into sexualised and abusive images.

Parents are being warned to be careful about posting pictures of their children online after The Times found that some users of Midjourney, an AI image generator, are creating a large volume of sexualised images of women, children and celebrities.

Some users are employing real pictures to generate the images, which can be paedophilic in nature.

Midjourney, which is used by millions of people to create images from simple text or image prompts, has recently helped to create viral fake images of Donald Trump and the Pope.

The program has also been used to generate provocative or explicit deepfake images of female celebrities, including Jennifer Lawrence, Kate Upton and Kim Kardashian. One account holder has generated more than 57,000 such images of Upton alone since October.

The program’s popularity has grown in recent weeks after the release of a new version, which has increased the photorealism of its images.

Users sign up and prompt the AI generator on Discord, a communication platform that was used to share the recent Pentagon leaks. Their creations are then uploaded onto the Midjourney website in a public gallery.

There are more than 14 million members of Midjourney’s community on Discord. It is the largest single group on the social platform.

The company’s guidelines say that content should be “PG-13 and family friendly”. But it also warns that the technology is new and “does not always work as expected”, with no guarantees given as to its suitability for customers.

The images discovered by The Times breach Midjourney’s and Discord’s terms of use and could, in some cases, be illegal in England and Wales, which ban content of this nature, known as non-photographic imagery (NPI). Virtual child sexual abuse material is not illegal in the U.S.

Deepfake pornography is also set to be criminalised in England and Wales through the Online Safety Bill, which is expected to be fully introduced in 2024.

Richard Collard, associate head of child safety online policy at the NSPCC, said: “It is completely unacceptable that Discord and Midjourney are actively facilitating the creation and hosting of degrading, abusive and sexualised depictions of children.

“In some cases, this material would be illegal under U.K. law and by hosting child abuse content they are putting children at a very real risk of harm.”

Midjourney has attempted to stem the flow of sexualised content on its website by banning a range of words and even the peach emoji, which is visual slang for buttocks, from being used as prompts.

David Holz, Midjourney’s founder and CEO, recently confessed that the company was struggling to create content rules “as the images get more and more realistic and as the tools get more and more powerful”.

In response to The Times’s findings, Midjourney said it would ban users who had violated its rules. Holz added in an email: “Over the last few months, we’ve been working on a scalable AI moderator, which we began testing with the user base this last week.”

Baroness Kidron, chairwoman of the 5Rights Foundation, which campaigns for child safety online, is proposing changes to the Online Safety Bill that would bring AI and machine-generated content within the legislation. She said: “I am very fearful of the speed at which this is happening. It can take less than five seconds to create a child abuse image by just typing in a prompt. This is an overwhelming wave.”

A spokesman for Discord said: “Discord has a zero-tolerance policy for the promotion and sharing of non-consensual sexual materials, including sexual deepfakes and child sexual abuse material.”
