Over the last few days, I've been seeing tweets via email from a somewhat vocal contingent of blind people on Twitter proclaiming that Twitter killing its API is not a big deal because the native Twitter app is fine. I wonder if it's worth my time to do a write-up explaining in detail why this just isn't the case, or if I should just leave them to it.

OK, this thing where techbros like Elon Musk and Sam Bankman-Fried are defended as "just kids": we are not doing this. They are not getting the benefit of the doubt as if they just made oopsies and get to fail forward like nothing happened. We're just not. You do not get to screw up people's communities and/or their lives and get a pat on the head with a "better luck next time, sweetie". FFS, these are adults.

It's Not Yesterday Anymore by Dan Sinker
My hope is that we won't simply replace one monolithic platform with another. That we'll take this disruption in routine as an opportunity to further disrupt a status quo that has needed disruption for some time. That we'll try new things, build new things, find new ways to connect that don't simply replicate the patterns of the past but instead move toward a future that feels better for everyone.
Excavating AI: The Politics of Images in Machine Learning Training Sets
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning. Something is wrong with this picture. Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems? In short, how did we get here?
We come to bury ChatGPT, not to praise it. by Dan McQuillan
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in 'The cat sat on the [BLANK]'). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
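You can see that "guessing game" for yourself with an off-the-shelf model. The sketch below is mine, not McQuillan's: strictly speaking, GPT-style models predict the next token rather than an arbitrary blank, but a BERT-style fill-mask model matches his '[BLANK]' example and makes the same point. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, and just prints the model's highest-probability fillers for the missing word.

```python
# Minimal sketch (my illustration, not from McQuillan's essay) of the
# "predict the missing word" objective, using Hugging Face's fill-mask
# pipeline. Model and library choice are assumptions for illustration.
from transformers import pipeline

# BERT-style models are trained to guess the token hidden behind [MASK];
# the pipeline returns the highest-scoring candidates with their scores.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for guess in fill_mask("The cat sat on the [MASK]."):
    # Each guess is simply a token the model finds statistically likely
    # in this position -- learned co-occurrence, not understanding.
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```

Typically the top guesses are things like "floor", "bed", or "couch": plausible completions ranked by probability, which is exactly the sense-making filter trick McQuillan is describing.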