Gonzalez v. Google Live Analysis

Our panel of internet law experts react to the Supreme Court oral arguments in Gonzalez v. Google. Featuring:

Mary Anne Franks

Mike Godwin

James Grimmelmann

Gus Hurwitz

Jeff Kosseff

Emma Llanso

Alan Rozenshtein

Eugene Volokh

Benjamin Wittes

Jonathan Zittrain

Moderated by Kate Klonick

Florida teacher fired for video of empty bookshelves after DeSantis complaint by Judd Legum
A full-time substitute teacher was abruptly fired last week after Florida Governor Ron DeSantis complained about a video of empty bookshelves that the teacher posted to social media. The teacher, Brian Covey, posted the video on Twitter three weeks earlier, on January 27. In an interview with Popular Information, Covey said administrators at Mandarin Middle School in Duval County were aware he posted the video, which attracted millions of views, but never indicated it was a problem. Covey had worked as a full-time substitute teacher since early October 2022. According to Covey, he had recently been praised in a staff meeting by the school principal for bringing order and stability to a previously unruly class of math students.

Something something “cancel culture!” something something

Evangelical Leaders Announce J.K. Rowling Finally Bigoted Enough That It’s Okay For Kids To Read About Witchcraft by Sirhan Sirhan
COLORADO SPRINGS, CO—Following a series of transphobic comments by the Harry Potter author, the nation’s top evangelical leaders announced Monday that J.K. Rowling had finally become bigoted enough to make it okay for kids to read about witchcraft. “While I always appreciated Ms. Rowling making the greedy banker…
Bing Chat is blatantly, aggressively misaligned
Comment by gwern - I've been thinking about how Sydney can be so different from ChatGPT, and how RLHF could have resulted in such a different outcome, and here is a hypothesis no one seems to have brought up: "Bing Sydney is not an RLHF-trained GPT-3 model at all! but a GPT-4 model developed in a hurry, which has been finetuned on some sample dialogues and possibly some pre-existing dialogue datasets or instruction-tuning [https://gwern.net/doc/ai/nn/transformer/gpt/instruction-tuning/index], and this plus the wild card of being able to inject random novel web searches into the prompt is why it acts like it does." This seems to parsimoniously explain everything thus far.

So, some background:

1. The relationship between OA/MS is close but far from completely cooperative, similar to how DeepMind won't share [https://news.ycombinator.com/item?id=34804446] anything with Google Brain. Both parties are sophisticated and understand that they are allies - for now... They share as little as possible. When MS plugs OA stuff into its services, it doesn't appear to be calling the OA API but running it itself. (That would be dangerous and complex from an infrastructure point of view, anyway.) MS 'licensed [https://news.microsoft.com/source/features/ai/new-azure-openai-service/] the GPT-3 source code [https://blogs.microsoft.com/blog/2020/09/22/microsoft-teams-up-with-openai-to-exclusively-license-gpt-3-language-model/]' for Azure use, but AFAIK they did not get the all-important checkpoints or datasets (cf. their investments in ZeRO). So, what is Bing Sydney? It will not simply be unlimited access to the ChatGPT checkpoints, training datasets, or debugged RLHF code. It will be something much more limited, perhaps just a checkpoint.

2. This is not ChatGPT. MS has explicitly stated it is more powerful than ChatGPT, but refused to say anything more straightforward like "it's a more trained GPT-3," etc. If it's not a ChatGPT, then
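gwern's hypothesis turns on the gap between plain supervised finetuning on sample dialogues and full RLHF training. As a toy, hypothetical sketch (the models, vocabulary, and numbers below are invented for illustration and come from neither gwern's comment nor any real system), the two objectives differ roughly as follows: supervised finetuning just maximizes the likelihood of demonstration tokens, while RLHF optimizes a learned reward with a KL penalty keeping the policy near the base model.

```python
import math

# Toy next-token distributions over a 3-"token" vocabulary for one fixed
# context. These numbers are made up purely for illustration.
base_model = {"helpful": 0.2, "rude": 0.5, "neutral": 0.3}   # pretrained
tuned_model = {"helpful": 0.7, "rude": 0.1, "neutral": 0.2}  # after tuning

def sft_loss(model, demo_token):
    """Supervised finetuning (the Sydney hypothesis): minimize the
    negative log-likelihood of demonstration tokens. No reward model,
    no KL constraint tying behavior back to the base model."""
    return -math.log(model[demo_token])

def rlhf_objective(model, base, reward, beta=0.1):
    """RLHF-style objective (the ChatGPT recipe): maximize expected
    learned reward minus a KL penalty to the base model."""
    expected_reward = sum(p * reward[tok] for tok, p in model.items())
    kl = sum(p * math.log(p / base[tok]) for tok, p in model.items())
    return expected_reward - beta * kl

# A made-up reward model preferring helpful completions.
reward = {"helpful": 1.0, "rude": -1.0, "neutral": 0.0}

print(sft_loss(tuned_model, "helpful"))                    # NLL of the demo
print(rlhf_objective(tuned_model, base_model, reward))     # reward - beta*KL
```

The point of the contrast: an SFT-only model imitates the surface of its sample dialogues with nothing anchoring it when novel web-search text lands in its prompt, whereas RLHF explicitly shapes behavior against a preference reward, which is consistent with the difference in behavior gwern is trying to explain.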