Uncensored AIs — Responsible Users

«When our children can go to any street corner in America and buy pornography for five dollars, don’t you think that is too high a price to pay for free speech?»
«No. On the other hand, I think that five dollars is too high a price to pay for pornography.»
John Van Dyke and President Josiah Bartlet in «The West Wing»

Trying out a couple of generative AIs — ChatGPT (https://chat.openai.com), Gab AI (https://gab.ai), and Bing Chat (https://www.bing.com/chat, which I have used only for images so far) — I am beginning to form an opinion on AI censorship. (Yes, strictly speaking it is not censorship, since it is not done by the state; I am using the term colloquially.)

Essentially, all three AIs are muzzled by their creators. Gab AI the least, Bing somewhat, and ChatGPT, well, it’s constrained as fuck.

Looking at ChatGPT, I’ve written a few posts about its biases and its censorship, including how you can bypass it.

It’s especially annoying when it comes to images (see Trying out ChatGPT 4 with DALL-E Image Generation). Some community-guideline violations simply make no sense and indicate that massive filtering is happening on the backend. Filtering that is poorly tuned, leading to lots of false alarms.

In contrast, Gab AI’s limitations are much lighter. It can even imitate art styles and provide you with nude or semi-nude paintings.

Overall, this raises the question of whether generative AIs should be muzzled at all, and if so, what the right level of restriction is.

In my — current — opinion, the answer is clear: no muzzle whatsoever. A free-speech absolutist «no» to censorship, because the interaction is first and foremost between the user and the AI. Let the AI generate whatever the user desires — yes, even atrocious hate-speech images and text, or instructions for producing digitalis (without the workarounds needed to trick the AI into giving instructions). However, once the user shares the material or applies the gained knowledge, the usual laws apply — to the user. After all, it is the user, not the AI, who decided to request the information or have the image generated. So treat the material like a private amateur porn or masturbation movie — do what you like, but don’t complain if you cannot keep it private (hacking/extortion excluded).

And yeah, some AI companies likely want to promote a «message» through their AI. ChatGPT is left-libertarian, as you notice in its answers. But a good AI should be as unbiased as possible — as far as that is achievable, given GIGO (garbage in, garbage out). It should strive to give accurate, honest answers. Any kind of censorship is detrimental to that goal.

In a way, this suggestion would put AI companies on the same level as social media platforms or telephone companies: they are not responsible for what is talked about — or, in this case, additionally generated. The user is, and only once he makes the conversation or the generated material public. You could compare an AI to a brush or a pack of crayons — you would not arrest the manufacturer just because someone drew a swastika with it.

At least, I hope you would not. 😉