I completely understand why some people might want to use something like 11labs to preserve the voice of a deceased loved one for their own use, but did anyone check with Tunehead's family, for instance, to find out if they're OK with his being reduced to a voice print for mass consumption? That struck me as disturbing in a way I'm having trouble defining, and I knew him for years. And it demonstrates why we need to be having robust ethics discussions around this kind of technology.

We come to bury ChatGPT, not to praise it. by Dan McQuillan
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in 'The cat sat on the [BLANK]'). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
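For the curious, here's roughly what that guessing game looks like in practice. This is a minimal sketch of my own, not McQuillan's, using the Hugging Face transformers library and a small masked language model (distilbert-base-uncased is an arbitrary demo choice; GPT-family models guess the next token rather than a masked one, but the statistical principle is the same):

    from transformers import pipeline

    # Toy illustration: ask a small masked language model to fill in
    # the blank in McQuillan's example sentence.
    fill = pipeline("fill-mask", model="distilbert-base-uncased")

    # Each candidate is scored purely on statistical fit; the model has
    # no concept of cats, mats, or sitting.
    for guess in fill("The cat sat on the [MASK]."):
        print(f"{guess['token_str']!r}  p={guess['score']:.3f}")

The top guesses are simply the statistically likeliest fillers for that slot, with no grounding in what any of the words mean, which is exactly the point the article is making.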