The more I see gushing, uncritical praise of machine learning, or worse, completely fabricated accounts of what people think goes on inside machine learning models, the more I worry about what increasingly looks like a literacy problem on the part of users. When people don't understand a technology, they start making things up that sound plausible and fit their existing biases. What's that gonna look like going forward with stuff like LLMs?