We come to bury ChatGPT, not to praise it. by Dan McQuillan
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in 'The cat sat on the [BLANK]'). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
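To make the 'guessing game' concrete, here is a toy sketch of the same underlying idea: fit word-transition statistics to a corpus, then emit the statistically most likely continuation. This is not OpenAI's code, and it is orders of magnitude simpler than a transformer (which learns billions of parameters over subword tokens), but the objective is the same flavour of statistical guessing; the corpus and function name are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most probable next word; no meaning or knowledge involved."""
    if word not in transitions:
        return "?"  # the model has literally nothing to say
    return transitions[word].most_common(1)[0][0]

print(guess_next("sat"))  # -> 'on' (seen twice, so it wins)
print(guess_next("the"))  # -> 'cat' (all counts tie; insertion order breaks it)
```

The model never knows what a cat or a mat is; it only knows which strings tend to follow which. Scaling that up does not change the nature of the operation, which is McQuillan's point.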
Excavating AI: The Politics of Images in Machine Learning Training Sets by Kate Crawford and Trevor Paglen
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning. Something is wrong with this picture. Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems? In short, how did we get here?
It's Not Yesterday Anymore by Dan Sinker
My hope is that we won't simply replace one monolithic platform with another. That we'll take this disruption in routine as an opportunity to further disrupt a status quo that has needed disruption for some time. That we'll try new things, build new things, find new ways to connect that don't simply replicate the patterns of the past but instead move toward a future that feels better for everyone.
Elon Musk Is Running Scared From Mastodon; Cuts Off The Best Tool For Finding Your Twitter Followers There by Mike Masnick

People keep claiming that Mastodon isn’t scaring Elon Musk, but it’s pretty clear that he’s worried about the exodus of people from Twitter. His bizarrely short-sighted decision to end free access to the Twitter API is driving developers over to Mastodon, and some people realized that the various tools that people use to find their Twitter […]

Technology: What about “Log in with Twitter”? by Adam Chandler

Via Twitter’s Developer Site: “Use Log in with Twitter, also known as Sign in with Twitter, to place a button on your site or application which allows Twitter users to enjoy the benefits of a registered user account in as little as one click. This works on websites, iOS, mobile, and desktop applications.” “Access tokens …
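The quoted docs describe Twitter's three-legged OAuth 1.0a flow, which is what a "Log in with Twitter" button sets in motion. Here is a rough sketch using the requests_oauthlib library; the consumer key, secret, and callback URL are placeholders, not Twitter's actual sample code.

```python
from requests_oauthlib import OAuth1Session

# Placeholder app credentials from the Twitter developer portal (hypothetical values).
CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
CALLBACK_URL = "https://example.com/twitter/callback"  # hypothetical endpoint

# Leg 1: obtain a temporary request token.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri=CALLBACK_URL)
oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

# Leg 2: send the user to Twitter to approve — this is the "one click".
print("Redirect user to:",
      oauth.authorization_url("https://api.twitter.com/oauth/authenticate"))

# Leg 3: after Twitter redirects back with oauth_token and oauth_verifier,
# exchange them for the long-lived access tokens the excerpt mentions.
redirect_response = input("Paste the full callback URL here: ")
oauth.parse_authorization_response(redirect_response)
tokens = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token")
print(tokens["oauth_token"], tokens["oauth_token_secret"])
```

Presumably the worry behind Chandler's question is that if free API access goes away, the request-token call in leg 1 is the first thing to fail, and every site that outsourced its login to Twitter fails with it.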