@SarahBreau @delong LLMs don’t abstract; humans have to classify the training set for them. That this was done by underpaid people in the third world has been one of the reasons to suppose the corporate actors pushing LLMs are perhaps not as ethically reliable as the rest of us ought to require.

There’s a shedload of harm happening now. It’s happening because greed is a sin, not because of some spectral possibility of an inhuman intelligence.

@graydon
