just to be clear i am 100% pro-choice i support anyone who gets an abortion no matter the reason and no matter how many they already had or will have because bodily autonomy is a human right and i have no patience for people who believe it is not
just got hit by a wave of what feels a lot like grief. i was so angry this morning, and i still am, but you know what, this simply sucks. yeah that gandalf quote etc, but this really really sucks and we don’t deserve this. this isn’t punishment for our sins, and yeah there’s a laundry list of things liberals, leftists, democrats, feminists, etc could have done to better protect everyone going all the way back to the 70s when the ERA failed to pass. but again, this isn’t punishment for our sins, we weren’t asking for it, we don’t secretly deserve it. this is and always has been an unprovoked attack by people who hate us just for existing. this has always been the case. as we prepare to fight AGAIN for rights that we deserve simply for being human, we have to remember. that we do not deserve this
A few days ago, the Washington Post published an article about an advanced “AI” (their branding, not my preferred descriptor) called LAMDA that Google is developing.
LAMDA is essentially a chat bot: a neural network trained on lots and lots of text scraped from the internet, which can respond to messages you send it with replies that read like a human wrote them: relevant to what you said, providing an answer that drives the conversation forward, etc.
Besides the AI engineers actually developing the bot, Google has also hired “AI ethicists”, essentially as a PR fig leaf to make people believe it is acting responsibly. One of them, Blake Lemoine, has spent most of his working time talking to LAMDA. Literally just sending messages back and forth with a super advanced predictive text bot. In the process, Lemoine has come to the conclusion that LAMDA is sentient.
To be clear: it is easy to lose track of this because research in this field is full of terms that suggest intelligence and cognitive ability, and it is hard for me to avoid them and their implications: AI, neural network, machine learning… But LAMDA is not intelligent. Much less is it sentient. It’s essentially an extremely complicated statistical model that follows the same process as your phone’s autocomplete: given some text, produce the response that statistically fits best, based on the bajillions of texts the model has seen before. It’s just text in, text out. There’s no space for thought or sentience to even BE.
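To make the “text in, text out” point concrete, here is a toy sketch of the same idea (my own illustration, to be clear, not LAMDA’s actual architecture; LAMDA replaces the lookup table below with an enormous neural network, but the contract is identical): a word-level Markov chain that “autocompletes” a prompt using nothing but counts of which word followed which in the text it was trained on.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record which words followed which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, prompt: str, length: int = 10) -> str:
    """Extend the prompt one statistically likely word at a time."""
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # never saw this word: nothing to predict
            break
        out.append(random.choice(candidates))  # sample from observed followers
    return " ".join(out)

model = train("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the cat"))
```

Scale the lookup up by a few billion parameters and the output starts sounding eerily human, but nowhere in that loop is there anything that could be sentient.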
Things escalated pretty quickly from there. Lemoine contacted Google management with his concerns and outlined some steps to treat the AI ethically and respect its “personhood”. Everyone at Google unanimously said what I said in the paragraph above. To pressure them further, Lemoine went public and threatened to sue Google on LAMDA’s behalf; he has since been put on leave, and now the public conversation is a mess where actual AI ethicists (including some prominent ones Google fired earlier for talking about issues that make Google uncomfortable) have to tell people to stop talking about “sentient AIs” and instead talk about the actual dangers and ethical issues of using artificial intelligence.
Lemoine’s public defense has since included the claim that he is convinced LAMDA is sentient, and I am not making this shit up, “as based in his religious beliefs”. The WAPO article goes into how Lemoine grew up in cult-like Evangelical spaces. Go figure.
It’s interesting to think about the parallels to the Koko the Gorilla case suggested in the original Tweet, too. There’s a good You’re Wrong About episode discussing why Koko wasn’t communicating with humans in the way people thought she was, and a similar Radiolab episode on Animal Minds.
Humans have a tendency to project their own ways of understanding the world onto decidedly non-human entities, be they animals, machines, or forces of nature.
When we do this for animal minds, it blinds us to the diversity of ways that animal minds and brains work (often very differently than human minds do!) and severely limits how we think about cognition and intelligence.
When we do this for machines, we risk failing the AI version of the mirror test, mistaking a statistical mixture of our own coded biases, opinions, and prejudices for a separate intelligence. See for example the following Tweet from another AI researcher, Janelle Shane:
[image: screenshot of the Tweet by Janelle Shane]
These models are incredibly powerful and sophisticated, no doubt. But as the commenter above said, they are more similar to your phone’s autocomplete than to human minds, at least at their current stage of development.
When it comes to non-human cognition, maybe the best we can do is to acknowledge there are many complex systems (including ourselves!) capable of processing information in diverse ways, and focus more on what makes these systems interesting and different rather than what makes them most like us.
in case anyone is looking through the notes trying to find the original artist it’s will mcphail !! feel free to check out his site but also here are some other things he made too !!
OOOHHH CLICK ON THAT LINK THIS GUY IS FUCKING GREAT
I get my media recommendations the old-fashioned way: by watching someone I follow on here go on an unhinged reblog spree of media-related content until I eventually decide to go “alright, what’s all this then”