@sclower @BorrisInABox I have a problem when people turn over their cognitive functions to LLMs, or use LLMs to code when they don't understand the result and then try to submit that as their own work. But neither of those is a problem with LLMs; they're problems with people. Note: I'm not calling this stuff AI, because none of it is intelligent and I hate the hype.
@Jage @sclower @BorrisInABox This isn’t what I mean when I say “turn over your cognitive functions”. I mean things like the following: when writing documentation intended for public use, you copy all your text straight from an LLM and then don’t check the result to ensure it’s correct. You then pass that documentation on to people who are depending on you for accurate information, and the information you’ve provided is wrong. Or, you’re setting up a new computer and, instead of consulting the appropriate documentation, you go to ChatGPT or similar for the answer. Both of these are situations I’ve encountered within the last nine months or so, and it’s only getting worse.
@Jage @sclower @BorrisInABox Your case is one where you have a particular problem you need solved, and using an LLM to do the work is the more efficient way to solve it. But you’re not passing it off as “Hey, I wrote this code”, and I’m assuming that if you were to hand this script off to someone else, you’d make it clear that you got the code from ChatGPT or similar, and so you couldn’t troubleshoot it if it didn’t work exactly as expected.