Shavuah tov everybody, I really hope you all have a week full of prosperity and blessing, whatever that means for you. #MazelDon
I just ran into my first need-to-edit-a-post situation on the fediverse, and I love that I have the ability to do this.
I was this minute years old when I realized that the WordCamp in Alabama going on this weekend, whose name my screen reader is pronouncing "WP-Isle," is actually WPYall, and this, kids, is why you should camel-case hashtags.
Over the last few days, I've been seeing tweets via email from a somewhat vocal contingent of blind people on Twitter proclaiming that Twitter killing its API is not a big deal because the native Twitter app is fine. I wonder whether it's worth my time to do a write-up explaining in detail why that just isn't the case, or whether I should just leave them to it.
OK, this thing where techbros like Elon Musk and Sam Bankman-Fried are defended as "just kids": we are not doing this. They are not getting the benefit of the doubt as if they just made oopsies and get to fail forward like nothing happened. We're just not. You do not get to screw up people's communities and/or their lives and get a pat on the head with a "better luck next time, sweetie." FFS, these are adults.
Good morning everybody and welcome to Sunday. I'm still drinking coffee and I plan to be lazy today.
And Republicans should stop treating them as ordinary violence.
My hope is that we won't simply replace one monolithic platform with another. That we'll take this disruption in routine as an opportunity to further disrupt a status quo that has needed disruption for some time. That we'll try new things, build new things, find new ways to connect that don't simply replicate the patterns of the past but instead move toward a future that feels better for everyone.
Waiting on lunch to get here. I have a serious case of the hungries.
I ended up taking a shortish afternoon nap and now I'm winding down the rest of the day.
Good morning everybody and welcome to Monday. I fell asleep on the love-seat and when I finally woke up it was time to get up and start the day. I plan to spend tonight in actual bed.
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning. Something is wrong with this picture. Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems? In short, how did we get here?
A walkthrough of Copilot and me attempting to add some content to the post-publish panel in the block editor.
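For context, here's a minimal sketch of the kind of code involved, assuming the block editor's PluginPostPublishPanel slot and standard plugin registration; the plugin name, panel title, and copy below are placeholders for illustration, not what Copilot and I actually ended up with.

```tsx
import { registerPlugin } from '@wordpress/plugins';
import { PluginPostPublishPanel } from '@wordpress/edit-post';

// Register a panel that shows up in the sidebar displayed right after you hit Publish.
// The plugin name, title, and copy are placeholders for illustration.
registerPlugin( 'my-post-publish-panel', {
	render: () => (
		<PluginPostPublishPanel title="After you publish" initialOpen={ true }>
			<p>Extra content for the post-publish panel goes here.</p>
		</PluginPostPublishPanel>
	),
} );
```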
A GOP celebration of a mass-killing machine on the House floor is on-brand for a nihilistic party that prizes deadly individualism over problem-solving.
Plus: Elon traffics in Russian propaganda
Police profanity isn’t just impolite—it poisons the relationship with the public.
Managing our colors can truly help people to access our content. In this article, Brecht de Ruyte takes a deep dive into how we can create a high-contrast system while maintaining a balance between designing something accessible and respecting the look and feel of a brand.
Large language models (LLMs) like the GPT family learn the statistical structure of language by optimising their ability to predict missing words in sentences (as in 'The cat sat on the [BLANK]'). Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it's still a computational guessing game. ChatGPT is, in technical terms, a 'bullshit generator'. If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it's talking about because it has no idea about anything at all. It's more of a bullshitter than the most egregious egoist you'll ever meet, producing baseless assertions with unfailing confidence because that's what it's designed to do. It's a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.
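To make the guessing-game point concrete, here's a deliberately tiny sketch: not a transformer and not how GPT is built, just a frequency table that fills in a blank by picking the statistically most common word to follow a two-word context. Every sentence and name in it is made up for illustration.

```ts
// Toy "missing word" guesser: pure statistics over a tiny corpus, zero understanding.
const corpus = [
  "the cat sat on the mat",
  "the dog sat on the rug",
  "the cat slept on the mat",
];

// Count which word follows each two-word context.
const counts = new Map<string, Map<string, number>>();
for (const sentence of corpus) {
  const words = sentence.split(" ");
  for (let i = 2; i < words.length; i++) {
    const context = `${words[i - 2]} ${words[i - 1]}`;
    const followers = counts.get(context) ?? new Map<string, number>();
    followers.set(words[i], (followers.get(words[i]) ?? 0) + 1);
    counts.set(context, followers);
  }
}

// "Predict" the blank: return the most frequent follower seen for this context.
function guessBlank(context: string): string | undefined {
  const followers = counts.get(context);
  if (!followers) return undefined;
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(guessBlank("on the")); // "mat" -- a plausible guess, but only counting, no meaning
```

The excerpt's point is that transformers and billions of parameters make the guesses vastly better, but the objective is still the same kind of guessing.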