@SarahBreau @delong LLMs don’t abstract; humans have to classify the training set for them. That this was done by underpaid people in the third world has been one of the reasons to suppose the corporate actors pushing LLMs are perhaps not at a level of ethical reliability the rest of us ought to be OK with.

There’s a shedload of harm happening now. It’s happening because greed is a sin, not because of some spectral possibility of an inhuman intelligence.

@graydon
