I completely understand why some people might want to use something like 11labs to preserve the voice of a deceased loved one for their own use, but did anyone check with Tunehead's family, for instance, to find out if they're OK with his being reduced to a voice print for mass consumption? That struck me as disturbing in a way I'm having trouble defining, and I knew him for years. And it demonstrates why we need to be having robust ethics discussions around this kind of technology.

We come to bury ChatGPT, not to praise it. by Dan McQuillan
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in 'The cat sat on the [BLANK]'). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
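The "computational guessing game" McQuillan describes can be made concrete with a deliberately crude sketch. The snippet below is nothing like a transformer — it's a toy trigram model over a made-up corpus — but it shows the core idea he's pointing at: filling in the blank purely from word-frequency statistics, with no understanding involved. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; a real LLM trains on billions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat slept on the mat ."
).split()

# Count which word follows each two-word context (a trigram model).
counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def predict(a, b):
    """Return the statistically most likely next word after (a, b)."""
    following = counts[(a, b)]
    return following.most_common(1)[0][0] if following else None

# 'The cat sat on the [BLANK]' -> the model's best statistical guess
print(predict("on", "the"))
```

The model "knows" that "mat" is the likeliest continuation only because it saw that sequence most often — which is exactly the sense in which the output can pass a reader's sense-making filter without the system having any idea what a mat is.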
Excavating AI: The Politics of Images in Machine Learning Training Sets
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning. Something is wrong with this picture. Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems? In short, how did we get here?
It's Not Yesterday Anymore by Dan Sinker
My hope is that we won't simply replace one monolithic platform with another. That we'll take this disruption in routine as an opportunity to further disrupt a status quo that has needed disruption for some time. That we'll try new things, build new things, find new ways to connect that don't simply replicate the patterns of the past but instead move toward a future that feels better for everyone.