It will be a long time, if ever, before robots are turned into anything remotely resembling humans. From Mark E. Jeftovic at bombthrower.com:
Ruminating over our robot overlords and the missing scenario
Now that ChatGPT has exploded onto the stage, there is renewed hype around Artificial Intelligence (AI). Whenever AI captures the public imagination, we are subjected to unrestrained conjecture about how it will inevitably take over the future and change our lives.
We’re led to believe that AI will usher in an era of hyper-intelligent overlords, so far advanced beyond our own coarse and analog cognitive skills that the existential questions of the future will center on:
- how much power or rights do we confer on these beings?
- will they act benevolently or malevolently toward us?
But these questions presuppose a core assumption about AI that everybody agrees isn’t true now but will inevitably become true in the future – after a few more iterations of Moore’s Law…
That’s the idea that AI will achieve artificial general intelligence, and with that some degree of sentience is implied (otherwise there is nothing to give any rights to).
The Newsweek piece on the right in the images above is by the transhumanist futurist Zoltan Istvan. He describes how AI ethicists are divided on the matter of whether future hyper-intelligent robots should be granted rights.
On one hand are those who argue that by not affording human rights to robots possessing AGI (general intelligence on par with humans), we are committing a “civil rights error” that we will regret in the future.
This is opposed by those who assert that robots are machines and will never require rights, because they aren’t sentient (this is where I land on it, and I’ll tell you why below).