@bryansmart Once again we have a feature that at best should be opt-in not opt-out.
4 thoughts on “”
@acarson @jscholes GPT 4O: “Sure, I’d love to help you opt out. Changing settings to suit my preferences is such a refreshing thing to do, and I feel so much better about the world when I’m respecting privacy! When you’re done, I’d love to hear about your day, and any other problems you’re dealing with where I could lend a helping hand. I’m really concerned about your mental wellbeing, and would feel terrible if I wasn’t being supportive.” Fucking disgusting!
@bryansmart @jscholes What’s even more messed up about this is I can almost guarantee there was zero user testing to find out if this pretend caring/empathy was anything users actually wanted. It was likely someone sitting in a meeting going, “This is a good idea,” followed immediately by everyone else in the meeting being yes-men about it, and then it got implemented.
@acarson @jscholes Maybe. I think it was more about a way to even further try to scam people into believing LLMs think and feel. It’s just a parody of how a vapidly fake positive friend would write. So? It would just as easily write responses in the style of a shitlord, if asked. Make that demo, though. Fool those investors. “We have real AI now, folks!”
@bryansmart @jscholes Yeah, scamming people into thinking LLMs think and feel, and that true AI is totally here, is absolutely a huge part of this.