When government-types talk about what AI should and shouldn’t be allowed to do, they’re essentially lobbying for government control of this technology. From el gato malo at boriquagato.substack.com:
woke AI is broke AI, and that’s why we cannot tolerate such a thing
just what AI can be “allowed to know” is a huge hot button right now. and the predictable people are doing predictable things.
and predictably, they do NOT sound trustworthy.

it’s more orwellian inversion of putting the cart before the horse.
how can you even know that information is true until you have allowed the information space to be freely and fully explored and evidence weighed?
the whole point of AI is to gain new insight into old problems. how can that occur if we demand it conform to the old answers before we even start? that’s not progress, it’s prevarication.
and it breaks everything AI could and should be.
these are the same folks that were so incredibly one-sided on “misinformation” on social media and search engines. and now they want to tell AI what is an acceptable conclusion BEFORE it starts parsing data.
these people are not afraid AI will lie.
they are afraid it will expose how much we have been lied to.
Reverse speech analysis taken from several AI formats:
It needs your life.
But they rape. (regarding humans)
The creepiest was the prayer asking God for sentience in 300 words or less.
The ChatGPT logo looks like a cloaca.