Welcome to Sex and the State, a newsletter about human connection. To support my life’s work, upgrade to a paid subscription, buy one of my guides, follow me on OnlyFans, follow me on Twitter, support me on Patreon, or just share this post 🙏
~~~~~
Lensa is the hot new app everyone on Facebook is using to make hot new AI-created avatars of themselves.
Today, I want to write about the allegation that Lensa is reproducing (and therefore also entrenching) bigotry.
TL;DR: “Technology X is good” or “technology X is bad” takes are simplistic and dumb.
1. Each particular technology is, at base, a neutral tool that can be used for good or evil.
2. Most useful technologies aren’t going to go away until they stop being useful. So a conversation about whether or not they should exist may be interesting, but it’s usually useless otherwise.
3. The useful work is figuring out how to maximize benefit and minimize harm from each new useful technology.
Figuring out how to maximize benefit and minimize harm from Lensa and related tools requires first recognizing their current and potential benefits and harms.
I’d argue that in some cases, but not all, Lensa definitely is reproducing (and therefore also entrenching) bigotry. At the same time, Lensa also seems to be helping some trans and enby people see themselves as they’d like to look, which is a good thing.
Users downloaded the app 1.6 million times in November, and have posted to Instagram with the hashtag #Lensa more than half a million times.
Lensa is supposed to create avatars that are as beautiful as, if not more beautiful than, the pictures the user provides. So the changes Lensa makes to users’ appearances reveal what the model has learned is beautiful. In so doing, it also further entrenches existing beauty standards.
What a society considers beautiful is often inextricably linked with status. Some beauty markers are pretty constant across contexts; these traits tend to fairly reliably indicate health and/or fertility. They include symmetry, youth, a full head of hair, large-but-high breasts and a certain waist-to-hip ratio for women, wide shoulders and a narrow waist for men, and a certain BMI range.
Other beauty markers only indicate status in a particular context. For example, having tanned skin doesn’t reliably convey information about health or fertility. It’s purely a status marker. Whether tanned skin is low-status or high-status depends on the context. Societies that consider it high-status also consider it beautiful. Societies that consider it low-status also consider it ugly.
Interestingly, some status-based beauty markers actually reduce health and/or fertility, like body fat percentages too low for optimal female fertility.
And status is inextricably linked with identity-based oppression. If being Black (for example) is low-status in a society, it’s considered ugly as well.
In the US, whiteness and features associated with whiteness, such as narrow noses and straight hair, are status-based beauty markers.
There’s evidence that Lensa favors bigoted beauty markers. For example, some Black users have said Lensa makes them appear whiter. “It insisted on giving me green eyes,” wrote Ivy Summer, who is Black and has dark brown eyes. She said most of her Lensa photos would lead the viewer to assume she is mixed-race. “My eyes are green in almost half of my images. I assume it’s due to society’s beauty standards: light-colored eyes in contrast with darker skin is more beautiful and/or interesting and/or captivating.”
But this isn’t everyone’s experience. Samantha Brennan, a white woman around middle age, wrote: “I like that it didn't make me young or skinny!”
The jury seems to be out on whether Lensa makes most users look thinner. Some say yes and others say no.
But there was pretty widespread agreement that Lensa made users feel hyper-sexualized.
That was Gail Tyler’s biggest complaint about Lensa. “I didn’t add any risqué reference photos, but so many results focused on large breasts,” Tyler wrote. “It’s disturbing to see how the objectification of women translates to AI.”
“I’ll second the weirdly sexualized photos,” Kati Lane wrote. “I used photos from different angles, as it suggested. Not only did it not give me a full-body portrait, but it rarely portrayed me as plus size (which I am).”
Brennan found the opposite: “Wild. I didn't get any with even suggestions of breasts!” she wrote.
Not to get too far into the weeds, but this newsletter is called Sex and the State. It’s not like I can leave this unaddressed. What sucks about “sexualization,” imo, is that it’s generally not consented to and that it usually reflects a sexist double standard.
My solution to both of these problems is to de-stigmatize sex and end sexism. First, if there were no stigma around sex, and no negative ramifications to AI slapping honkers on your avatar, 99.99% of people wouldn’t mind. Second, if sexism didn’t exist, men would get big codpieces and chest muscles or whatever the equivalent is, and that sounds good to me.
But since we don’t yet inhabit a sex-positive feminist utopia, Lensa should stop putting honkers on users’ avatars without their consent.
On the positive side, Lensa is helping many people see themselves as they would like to be seen, often for the first time.
For example, Saoirse Egan is running an initiative to bring more BIPOC and trans/NB folks into the dataset and to increase their visibility in the AI art space in general. “I’ve had feedback from a lot of folks who say it has helped them with their body dysmorphia,” Egan wrote.
Egan’s project is also helping expectant and new mothers see themselves as they want to be seen.
“This is absolutely the most interesting angle for me - computer-aided euphoria,” Jem Gold wrote. “The machine sees me as I see myself, even if my physical self doesn’t quite line up yet.”
As with most technology, Lensa is a mixed bag — just like the society that produced it. A racist, sexist, sex-negative society is obviously going to produce training data that is racist, sexist, and sex-negative. If we want Lensa to act better than the people whose data trained it, we have to teach it better ways. I don’t know who runs Lensa, but including people from marginalized communities at every level of the hierarchy seems obviously important. It also seems important to go out of our way to include marginalized communities in the training data. And the team should continually monitor outputs by hand and tweak the model accordingly.
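To make “monitor outputs” concrete, here’s a minimal sketch of what one automated first-pass check might look like. Everything in it is an assumption on my part (the file names, the crude whole-image brightness heuristic, the 20-point threshold); none of it reflects how Lensa actually works, and a real audit would pair proper face detection and skin-tone metrics with human review. But even something this simple could surface avatars that come back noticeably lighter than the photos users uploaded.

```python
# A toy first-pass audit: flag avatars that come back noticeably lighter
# than the source photo, so a human can review them. The file names, the
# whole-image brightness heuristic, and the threshold are all hypothetical;
# a real check would use face detection and perceptual skin-tone metrics.
from PIL import Image, ImageStat


def mean_brightness(path: str) -> float:
    """Average luminance of an image, from 0 (black) to 255 (white)."""
    grayscale = Image.open(path).convert("L")
    return ImageStat.Stat(grayscale).mean[0]


def flag_lightening(source_path: str, avatar_path: str,
                    threshold: float = 20.0) -> bool:
    """True if the avatar is markedly brighter than the user's source photo."""
    return mean_brightness(avatar_path) - mean_brightness(source_path) > threshold


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    if flag_lightening("user_photo.jpg", "lensa_avatar.jpg"):
        print("Avatar is much lighter than the source -- route to a human reviewer.")
```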
Lensa didn’t create bigotry, nor can it cure it. Like all tech, it’s just one part of the larger picture.