Top AI Expert Warns of Extinction Level Threat from Artificial Intelligence | The InnerView
Discover why Connor Leahy, a renowned AI expert, believes humanity faces an existential crisis due to the rise of AI. Learn how his startup, Conjecture AI, is striving to align AI with human values to prevent a future dominated by intelligent non-human entities.
Video Transcript
Connor Leahy is one of the world's leading minds in artificial intelligence.
He is a hacker who sees the rise of AI as an existential threat to humanity.
He dedicates his life to making sure its success doesn't spell our doom.
There will be intelligent creatures on this planet that are not human.
This is not normal.
And there will be no going back.
And if we don't control them, then the future will belong to them, not to us.
Leahy is the CEO of Conjecture AI, a startup that tries to understand how AI systems think
with the aim of aligning them to human values.
He speaks to The InnerView about why he believes the end is near
and explains how he's trying to stop it.
Leahy joins us now on The InnerView.
He's the CEO of Conjecture.
He's in our London studio.
Good to see you there. Good to have you on the program, Connor.
You're something of an AI guru.
And you're also one of those voices saying we need to be very, very careful right now.
And a lot of people don't quite have the knowledge, or the vocabulary,
or the deeper understanding as to why they should be worried.
They just feel some sort of sense of doom, but they can't quite map it out.
So maybe you can help us along that path.
Why should we be worried about AGI and tell me the difference between AGI and what is widely perceived as AI right now?
So I'll answer the second question first just to get some definitions out of the way.
The truth is that there's really no true definition of the word AGI, and people use it to mean all kinds of different things.
When I talk about the word AGI, usually what I mean by this is AI systems or computer systems
that are more capable than humans at all tasks that they could do.
So this involves any scientific task, programming, remote work, science, business, politics,
anything.
And these are systems that do not currently exist, but that people are actively attempting to build.
There are many people working on building these systems, and many experts believe these systems
are close.
And as for why these systems are going to be a problem, well, I actually think that a
lot of people have the right intuition here.
The intuition here is just, well, if you build something that is more competent than you,
it's smarter than you, and all the people, you know, and all the people in the world,
it is better at business, politics, manipulation, deception, science, weapons development, everything,
and you don't control those things, which we currently do not know how to do,
well, why would you expect that to go well?
It reminds me a little bit about the debate about whether we should be looking for
life in the universe beyond our solar system.
Stephen Hawking said, be careful.
Look at the history of the world.
Anytime you sort of invite in a stronger power, a more competent power,
they might come and destroy you.
But then the counter to that is that you're mapping human behavior,
human desires, passions, needs, wants onto this thing.
Is this natural to do and fair to do because humans created it?
Humans created the parameters for it.
So it's actually worse than that in that it's really important to understand.
When we talk about AI, it's easy to imagine it to be software.