I have encountered this problem at work a few times; the worst was someone sending me a list of pros and cons for something we were developing and asking if the list was accurate…
I spent a long time responding to each pro and con, assuming they got the list from somewhere like another company's promotional material. Every point was wrong in a different way, each showing a lack of understanding, and I was giving detailed responses explaining how. Initially I thought the list came from someone in marketing who didn't understand the product; after a while I suspected it was AI and asked… they told me they had just asked ChatGPT for the pros and cons of the product/program and wanted me to verify whether it was correct before communicating it to customers.
If they had just asked me the pros and cons I could have responded in a much shorter amount of time. ChatGPT basically DOSed me because the time taken to produce the text was nothing compared to the time it took me to respond.
I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
Even worse when they accidentally leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day and at the bottom was this line:
> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
Long con. Shame the coworkers and they'll stop using AI, or at least be more careful editing the output. Bonus effect: you're probably not the only one annoyed by this, so it also saves other coworkers' time.
Not much, and it points out how crappy Dave's slop job is, especially if you do it with Reply All. We already entered the time-wasting zone when Dave copypasta'd.
"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.
Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
The word for this, we learned recently, is "LLM inevitabilism". It's often argued for far more convincingly than your attempt here, too.
The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?
When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.
None of what GP describes is a hypothetical. Present-day LLMs are excellent editors and translators, and for many people, those were the only two things missing for them to be able to present a good idea convincingly.
Just because we have the tech doesn't mean we are forced to use it. We still have social cues and etiquette shaping what is and isn't appropriate.
In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons. And I thought we were past the "thesaurus era" of communication, where we pepper a comment with uncommon words to sound smarter.
> In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons.
I fully agree. However, the original comment was about helping people express an idea in a language they're not proficient in, which seems very different.
> And I thought we were past the "thesaurus era" of communication, where we pepper a comment with uncommon words to sound smarter.
I wish. Until we are, I can't blame anyone for using tools that level the playing field.
>about helping people express an idea in a language they're not proficient in, which seems very different.
Yes, but I see it as a rare case. Also, consider the mindset of someone learning a language:
You probably often hear "I'm sorry about my grammar, I'm not very good at English" and their communication is better than half your native peers. They are putting a lot more effort into trying to communicate while the natives take it for granted. That effort shows.
So in the context of an LLM: if they are using it to assist with their communication, they also tend to take more time to review and properly tweak the output instead of posting it wholesale; at the very least they strip out the sloppy queries that were never meant to be part of the output. That effort is why I'm more lenient in those situations.
Probably in a few years. The big Disney lawsuit may be that needle that pops the bubble.
I do agree about this push for inevitability. In small ways it is true. But it doesn't need to take over every aspect of humanity. We have calculators, but we still do basic mental math and don't resort to a calculator for 5 + 5. It's long been established as rude to do more than quick glances at your phone when physically meeting people. We've leaned away from posting google search/wiki links as a response in forums.
Culture still shapes a lot of how we use the modern tools we have.
Didn't our parents go through the same thing when email came out?
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas.
Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years.
Initially, it had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
I think just looking at information transfer misses the picture. What's going to happen is that my Siri is going to talk to your Cortana, and that our digital secretaries will either think we're fit to meet or we're not. Like real secretaries do.
You largely won't know such conversations are happening.
Similar-looking effects are not the "same" effect.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.
But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
Doesn't matter today? What are you even talking about? It completely matters if the code you write is yours. The only people saying otherwise have fallen prey to the cult of slop.
I really hope you're not a software engineer saying this. But just as a lightning round of issues:
1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.
2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.
3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.
4. code can be correct, but insecure. I really hope cryptographers and netsec people aren't using AI for anything more than generating keys.
5. code can be correct, but not correct in the larger scheme of the legacy code.
6. code can be correct, but legally vulnerable. A rare, but expensive, edge case that may come up as courts catch up to LLMs.
7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.
It does not seem to matter where the code or the legal argument came from. What matters is that they are coherent.
I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction to such actions. Then there are laws simply made to serve corporate interests (the "zoot suit", for instance within that article. Jaywalking is another famous one).
There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.
Code is either fit for a given purpose or not. Communicating with an LLM instead of directly with the desired recipient may be considered fit for purpose by the receiving party, but it's not for the LLM user to say what the goals of the writer are, nor what they ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no and basic autocomplete. Otherwise I'm not even interacting with a human in the loop except before they hit send, which doesn't inspire confidence.
That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.
Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.
I can see the similarity yes! Although I do feel like the distance between handwritten letter and an email is shorter than between email and LLM generated email. There's some line it crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, you can easily save it, copy it, attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! LLM does not provide any benefit for the reader though, it just wastes their resources on yapping that no human cared to write.
Same thing with photography and painting. These opinionated pieces present a false dichotomy that propagates into argument, when really we have a tunable dial rather than a switch: we can appropriately increase or decrease our consideration, time, and focus along a spectrum instead of treating it as simply on or off.
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
Letters had a time and potential money cost to send. And most letters don't need to be personalized to the point where we need handwriting to justify them.
>Change is inevitable. Most people just won't like it.
People love saying this and never take the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.
>And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
I don't think I ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.
Have you simply considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.
I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world or old people just not understanding the future?
I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that’s not necessarily what you’re dealing with but it’s worth considering.
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
Let's expound some more on this. There's a parallel between people feeling forced to use online dating (mostly owned by one corporate entity) despite hating it, and being forced to use LinkedIn when you're in a state of paycheck-unattached or even just paycheck-curious.
> now the interface deliberately suggests AI-generated responses to posts
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn are encouraging it. How does that happen? My best guess is that it drives up engagement numbers to allow some disinterested middle managers to hit internal targets.
This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply with if you want to be polite", and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.
The tech dates back to 2017; Google added it to internal Gmail back then. Not sure when LinkedIn added it, so you might be right, but the tech is much older than most think.
One of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback: it felt to me like he wasn't even listening when he just copy-pasted clearly-AI responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
I occasionally use bullet points, em-dashes (unicode, single, and double hyphens), and words like "delve". I hate to think these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
I like to use em-dashes as well (option-shift-hyphen on my macbook). I've seen people try to prompt LLMs to not have em-dashes, I've been in forums where as soon as you type in an em-dash it will block the submit button and tell you not to use AI.
Here's my take: these forums will drive good writers away or at least discourage them, leaving discourses the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.
I've asked it before, "please rewrite that but replace the em dashes with double hyphens", and then it says "sure, here you go", and continues to use em dashes.
How is that a “giveaway”? The search turns up results from 7 years ago before LLMs were a thing? More than likely it’s auto correct going astray. I can’t imagine an LLM making that mistake
Soon HN is going to be flooded with blogs about people trying and failing miserably to find AI signal from noisy online discussions with examples like this one.
They are being efficient with their own time, yes, but it's at the expense of mine. I get less signal. We used to bemoan how hard it was to effectively communicate via text only instead of in person. Now, rather than fixing that gap, we've moved on to removing even more of the signal. We have to infer the intentions of the sender by guessing what they fed into the LLM to avoid getting tricked by what the LLM incorrectly added or accentuated.
The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.
If it took you no time to write it, I'll spend no time reading it.
The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.
How do you know the effort that went into the message? Somebody with writing challenges may have written the whole thing up and used ai assistance to help get a better outcome. They may have proof-read and revised the generated message. You sound very judgmental.
And you sound very ableist. Why should we expect people who may have a cognitive disability of some kind to cloak that with technology, rather than us giving them the grace to communicate how they like on their terms?
Because oftentimes you know the person behind the message. We don't exist in a vacuum, and that will shape your reaction. So yes, I will give more leeway to an ESL co-worker leaning on AI than to a director trying to hand me a sloppy schedule that affects my navigation in the company.
Except you will spend your time reading it, because that's what is required to figure out that it's written with an LLM. The first few times, at least...
1. misinformation. This is the one you mention so I don't need to elaborate.
2. lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a 2-way street. This is why AI-generated code in reviews is so infuriating.
3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading".
For your tool metaphor, it's like discovering superglue and then using it to stick everything together. Sometimes you see a nail and, instead of hammering it in, you glue it to the wall. Tools can be, have been, and will be misused. I think it's best to try to correct that early on, before we have a lot of sticky nails.
Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.
That mentality is exactly what is reflected in AI messages: "not my problem I just need to get this over with".
Those types of coworkers tend to be a drain on not just productivity, but entire team morale. Someone who can't take responsibility or in worst cases have any sort of empathy. And tools are a force multiplier. It amplifies productivity, but that also means it amplifies this anchor behavior as well.
So I'm ESL btw... maybe I should have run my message through AI lol.
I was replying to THAT person, and my message was that IF the person they're dealing with who uses AI happens to be giving them constant slop (not ME!!! not my message) THEN ignore what I have to say in that message THEREAFTER.
So if that person is dealing with others who are giving them slop, and not just being triggered that it reads like GPT…
A lot of the reason why I even ask other people is not to get a simple technical answer but to connect, understand another person's unexpected thoughts, and maybe forge a collaboration, in addition to getting an answer of course. Real people come up with so many side paths and thoughts, whereas AI feels lifeless and drab.
To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.
It is, which I'd argue has a time and a place.
Maybe it's more specific to how I cut my teeth in the industry but as programmer whenever I had to ask a question of e.g the ops team, I'd make sure it was clear I'd made an effort to figure out my problem. Here's how I understand the issue, here's what I tried yadda yadda.
Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.
It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.
Instead of spending this time, it is faster, simpler, and more effective to phrase these questions in the form "have you checked the docs and what did they say?"
I think the issue is that about half the conversations in my life really shouldn't happen. They should have Googled it or asked an AI about it, as that is how I would solve the same problem.
It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.
The big issue here is that a lot of company IP is proprietary. You can't Google 90% of it. And internal documentation has never been particularly good, in my experience. It's a great leverage point to keep people from saying "just google it", at least when I'm dealing with abrasive people.
I remember reading about someone using AI to turn a simple summary like "task XYZ completed with updates ABC" into a few paragraphs of email. The recipient then fed the reply into their AI to summarize it back into the original points. Truly, a compression/expansion machine.
> "I vibe-coded this pull request in just 15 minutes. Please review"
>
> Well, why don't you review it first?
My current day to day problem is that, the PRs don't come with that disclaimer; The authors won't even admit if asked directly. Yet I know my comments on the PR will be fed to the cursor so it makes more crappy edits, and I'll be expecting an entirely different PR in 10 minutes to review from scratch without even addressing the main concern. I wish I could at least talk to the AI directly.
(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).
Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and if they contribute buggy or low quality code, it’s their responsibility, not the AIs, and ultimately their job on the line.
Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.
This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.
> Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and if they contribute buggy or low quality code, it’s their responsibility, not the AIs, and ultimately their job on the line.
I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?
For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.
>I have to now teach people basic respect and empathy at work? Are we serious?
Given that we (or at least, much of this community) seem to disagree with this article, that does indeed seem to be the case. "It's just a tool" "it's elitist to reject AI generated output". The younger generations learn from these behaviors too.
100%. Real life is much more grim. I can only hope we'll somehow figure it out.
I haven't personally been in this position, but when I think about it, looping all your reviews through the cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?
Trust is earned in drops and lost in buckets. If somebody asks for my time to review slop, especially without a disclaimer, I'll simply not be reviewing their pull requests going forward.
> "For the longest time, writing was more expensive than reading"
Such a great point, and one I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, since now author and reviewer are much closer to equal understanding of the changes than if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs writing.
Yes, just like painting a picture used to be extremely time-consuming compared to looking at a scene. Today, these take roughly the same effort.
Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.
That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.
Someone telling you about a conversation they had with ChatGPT is the new telling someone about your dream last night (which sucks because I’ve had a lot of conversations I wanna share lol).
I think it's different to talk about a conversation with AI versus just passing the AI output to someone directly.
The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me" which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you reflect and/or act on it".
For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.
Yeah. We can't share dreams, but the equivalent would be making our subject sit down and watch a video of our dream. It went from a potential two-way conversation to essentially giving one person homework to review before they can comment to the other.
It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.
If anything they share the same hallucinatory quality - ie: hallucinations don't have essential content, which is kind of the point of communication.
Hm. Kinda, though at least with the dream it was your brain generating it. Well, parts of your brain while other parts were switched off, and the on parts operating in a different mode than usual, but all that just means it's fun to try to get insight into someone else's head by reading (way too many) things into their dreams.
With ChatGPT, it's the output of the pickled brains of millions of past internet users, staring at the prompt from your brain and free-associating. Not quite the same thing!
I recently had a non-technical person contest my opinion on a subtle technical issue with ChatGPT screenshots (free tier o4) attached in their email. The LLM wasn't even wrong, just that it had the answer wrapped in customary platitudes to the user and they are not equipped to understand the actual answer of the model.
Yes. I just had a bad experience with an online shop. I got the thing I ordered, but the interaction was bad so I sent a note to their support email saying "I like your company, but I recently had this experience that felt icky, here's what happened" and their "AI Agent Bot" replied with a whole lot of platitudes and "Since you’ve indicated no action is needed and your order has been placed, we will be closing this ticket."
I'm all for LLM's helping people write better emails, but using them to auto-close support tickets is rude.
It gets interesting once you start a discussion about a topic with someone who had ChatGPT doing all the work. They often do not have the same in-depth understanding of what is written there vs. someone who wrote it themselves. Which may not come as a surprise, but yet - here we are. It‘s these kind of discussions I find exhausting, because they show no honesty and no interest by the person I'm interacting with. I usually end these conversations quickly.
I never heard of Roko's Basilisk before, and now I entered a disturbing rabbit hole. Peoples' minds are... something.
I mean, it's basically cheating. I get a task, and instead of working my way through it, which might be tedious, I take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will.
So, yeah - I would agree, although I do not have any studies that support the hypothesis.
I get annoyed when I ask someone a question (work related or not) and they don't know the answer, and they then proceed to recite a prompt for ChatGPT at me in a stream-of-consciousness sort of way.
Then I get even more annoyed when they decide to actually use their own prompt, and then read back to me the answer.
It seems there are people deeply afraid of admitting they don't know something, despite the fact that not knowing things is the default. But giving the wrong answer is always worse.
Huge cultural issue in the US. We shame ignorance and don't believe in redemption. If you don't know even once, you must be a weak person. You don't want to waste your time with weaklings.
But the macho approach? They are bold, they are someone you want to follow. They do the thinking. Even if you walk off a cliff, you feel that person was a good leader. If you are assertive, you must be strong, after all. "Strong people" never get punished for failure, it's just "the cost of doing business"; time to move to the next business.
While I understand this sentiment, some people simply suck at writing nice emails or have a major communication issue. It’s also not bad to run your important emails through multiple edits via AI.
>It’s also not bad to run your important emails through multiple edits via AI.
The issue is that we both know 99% of output are not the result of this. AI is used to cut corners, not to cross your T's and dot your I's. It's similar to how having the answer banks for a textbook is a great tool to self-correct and reinforce correct learning. In reality these banks aren't sold publicly because most students will use it to cheat.
And I'm not even saying this in a shameful way per se; high schoolers are under so much pressure, used to be given hours of homework on top of 7+ hours of instruction, and in some regards the content is barely applicable to long term goals past keeping their GPA up. The temptation to cheat is enormous at that stage.
----
Not so much for 30 year old me who wants to refresh themselves on calculus concepts for an interview. There also really shouldn't be any huge pressure to "cheat" your co-workers either (there sometimes is, though).
Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.
Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.
Ha, did you see the outrage from people when they realized that sharing their deepest secrets and company information with ChatGPT was just another business record to OpenAI, totally fair game in any sort of civil-suit discovery? You would think some evil force had just smothered every little child's pet bunny.
Tell people there are 10000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for profit companies database under terms they never read.
I'm not surprised the layman doesn't understand how and where their data goes. It's a bit of a letdown that members on HN seemed surprised by this practice after some 20 years of tech awareness. Many in this community probably worked on the very databases storing such data.
I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write email in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, if I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?
And being non-native with a good English level is nothing compared to people who might have autism, etc.
I'm a native English speaker who asks myself the same questions on most emails. You can use LLM outputs all you want, but if you're worried about the tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some will even begin to favor pushy emails, because at least it feels human.
I work with a lot of people who are in Spanish speaking countries who have English as a second language. I would much rather read their own words with grammatical errors than perfect AI slop.
Hell, I would rather just read their reply in Spanish, dashed off quickly without the struggle of translating, and use my own B1-level Spanish comprehension, than read AI-generated slop.
Yes it is absolutely rude in many contexts. In a team context you are looking for common understanding and being “on the same page”. If someone needs to consult AI to get up to speed that’s fine, then their interaction with you should reflect what they have learned.
My boss posts GPT output as gospel in chats and task descriptions. So now instead of being a "you figure it out" it's "read this LLM generated garbage and then figure it out".
I don't mind people using AI to help refine their thoughts and proof their output but when it is used in absence of their own thoughts I am starting to value that person a little bit less.
Exactly. I've already seen two very obvious AI comments on Reddit in the past 2 days. One even had the audacity to copy a real user's reply back into the AI and pass the response back again. I just blocked them since they're in a sub I like to hang out in.
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
I think it all goes to crap when there is some economic incentive: e.g. blogspam that is profitable thanks to ads and anyone that stumbles upon it, alongside being able to generate large amounts of coherent sounding crap quickly.
I have seen quite a few sites like that in the first pages of both Google and DuckDuckGo which feels almost offensive. At the same time, posts that promise something and then don't go through with it are similarly bad, regardless of AI generated or not.
For example, recently I needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons) because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given their price tags and relatively small sizes of those models.
Yet pretty much all of the comparisons I found just regurgitated high level overviews of the technologies, like 5-10 sites that felt almost identical and could have been copy pasted from one another. Not a single one of those had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM.
Back in the day when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs and even though I wouldn't take all of those at face value (since with Apache2 you should turn off .htaccess and also tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.
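To be concrete about what I was hoping to find: a minimal sketch of the kind of measurement such a comparison would need, in Python. It assumes both Ollama and vLLM are serving the same model locally through their OpenAI-compatible chat endpoints; the URLs, ports, and model names are placeholders, not recommendations, and wall-clock timing here lumps prompt processing in with generation, so it's only a rough number.

```python
# Rough tokens/s comparison across OpenAI-compatible endpoints.
# Assumption: Ollama and vLLM are serving comparable models locally;
# endpoints and model names below are illustrative placeholders.
import time
import requests

ENDPOINTS = {
    "ollama": ("http://localhost:11434/v1/chat/completions", "qwen3:30b-a3b"),
    "vllm": ("http://localhost:8000/v1/chat/completions", "Qwen/Qwen3-30B-A3B"),
}

PROMPT = "Explain the difference between a process and a thread in about 300 words."

for name, (url, model) in ENDPOINTS.items():
    start = time.monotonic()
    resp = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 512,
        "temperature": 0,
    }, timeout=600)
    elapsed = time.monotonic() - start
    data = resp.json()
    # OpenAI-compatible servers usually report token usage; fall back to a
    # crude word count if the field is missing.
    tokens = data.get("usage", {}).get("completion_tokens") or \
        len(data["choices"][0]["message"]["content"].split())
    print(f"{name}: {tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```

Run that over a handful of prompts per model on the actual card in question and you'd have the table none of those blogspam posts bothered to produce.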
Whether it's LLM output is orthogonal to rudeness, lack of sensibility, or generic content. There are all sorts of tools out there which use LLMs as a front end for some pretty spectacular back-end functions.
If you're offered an AI output it should be taken as one of two situations: (a) the person adopts the output, and maybe put a fair amount of effort into interacting with the LLM to get it just right, but can't honestly claim ownership (because who can), or (b) the output is outside their domain of expertise and functioning as a toehold or thumbnail in some esoteric topic that no single resource they know can, and probably the point is so specific that such a resource doesn't exist.
The tenor of the article makes me confused about what people have been doing, specifically , with ChatGPT that so alienated the author. I guess the point is there are some topics LLMs are fundamentally incompetent to perform? Maybe its more the perception that the LLM is being treated as an oracle than a tool for discovery?
Not seeing a problem here as long as the one showing the output has reviewed it themselves before showing, and made the decision to show based on that review. That's what we should be advocating for. So far what I'm seeing is people slamming others or ignoring automatically on even the vague suspicion that something has been generated.
Just the other day I witnessed in a chat someone commenting that another (who previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went to read it when they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.
> the one showing the output has reviewed it themselves before showing
Now let's really ask ourselves how this works out in reality: cut corners. People using LLMs are not using them to enhance the conversation; they are using them to get it over with.
It also doesn't help that yes, AI generated text tends to be overly verbose. Saying a lot of nothing. There are times where that formality is needed, but not in some casual work conversations. Just get to the point.
Shortening a conversation is a kind of enhancement. It means a state of satisfaction or completion has been reached that much sooner. Why debate back and forth for 20 minutes with incomplete arguments when 2 minutes will suffice by generating a well-prompted thread on the argument? Unless the lengthy arguing is the point.
Get a short answer by including "keep answer short" or similar in the prompt. It just works.
I love the post but disagree with the first example. "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt and the user is free to read the AI output if they choose.
I think in any real conversation, you're treating AI as this authority figure to end the conversation, despite the fact it could easily be wrong. I would extract the logic out and defend the logic on your own feet to be less rude.
And what if you let a human expert fact-check the output of an LLM? Provided you're transparent about the output (and its preceding prompt(s)) ?
Because I'd much rather ask an LLM about a topic I don't know much about and let a human expert verify its contents than waste the time of a human expert in explaining the concept to me.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
Oh, I'm usually trying to gather information in conversations with peers, so for me, it's usually more like, "I don't know, but this is what the LLM says."
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."
I believe there was a very similar line of argument at the time photography was becoming popular. "Sure, it's a useful tool, but it will never be an art form like painting. It only reproduces what's already there!"
Yet today, we both cringe at forgettable food Instagrams and marvel at the World Press Photo of the Year.
I do fully agree with the conclusions on etiquette. Just like it's rude to try to pass a line-traced photo as a freehand drawing, decompressing very little information into a wall of text without a disclaimer is rude.
I got a feature request in the form of a PR a few months ago that said "chatgpt generated this as a possible implementation, does it work?"
I stopped there and replied that if you don't care enough to test if it works, then clearly you don't actually want the feature, and closed the ticket.
I have gotten other PRs that are more in the form of "hey, I don't know what I'm doing. I used GPT and it seems to work, but I don't understand this part". I'm happy to help point those in the right direction, because at least they're trying, and it seems like this is part of their learning.
... Or they just asked jippity to make it seem that way.
The problem I've been having is when I spend time researching a problem, I link documentation and propose a clean solution. Someone I'm talking with will then send a screenshot of deepseek or chatgpt essentially agreeing with me.
I don't care what chatgpt or deepseek thinks about the proposal. I care what _you_ think about it - that's why I'm sending it to you.
We will have a web where proof of humanity is the de facto standard. It's only a question of when. Things have to get worse before they get better, I'm afraid.
Yes they will, and their reputation will be adjusted accordingly within the community. When their trustworthiness becomes questionable, they'll feel the trade-off of using AI, and that becomes a closed feedback loop.
Though that is an optimistic steady state, I still think we're going to see a lot more of "my AI talking to your AI" to some unhealthy degree
It would raise the barrier to entry, I suppose. I agree with GP that at some point in the future, real human output will become more rare and more valuable. And pure "human made" content (movies, music, books, blog posts, comments, etc.) may have access controls or costs associated.
We're already seeing the social contract around hosting your own blog change due to the constant indexing from AI crawlers.
There's still a sort of web of trust. All you have to do is find people you really trust who hate AI. For instance, people who know me know that there's no way in hell I'd ever use any sort of generative AI, for anything.
I'm guessing that you've never actually lived in that world...
When I got squiggly handwritten cursive letters from my grandma, I could be pretty sure those were her words, thought up by herself, because the effort to accurately reproduce her consistent mess would have been great. But the moment we moved to the typewriter, and then to other digital means uniformly printed out on paper or screens, you've really just assumed that it was written by the human you were expecting.
Furthermore, the vast majority of business communication long before now was not done by 'people' per se. It was done by processes. In the vast majority of business email that I type out, there is a large amount of process that would not occur if I were talking to a friend. Moreover, this communication is facilitative of some other end goal. If the entire process could be automated away, humanity would be better off, as some mostly useless work would be eliminated.
Do you know why people are so willing to use AI to communicate with each other? Because at the end of the day they don't give two shits about communicating with you. It's an end goal of receiving a paycheck. There is no passion, no deep interest, no entertainment for them in doing so. It's a process demanded of them because of how we integrate Moloch into our modern lives.
If you come to the LLM with your message, and then use the LLM to iterate drafts and tighten your prose, then no, the exercise was exactly the opposite of a disrespect to the reader.
Sending half-baked, run-on, unvetted writing, when you easily could have chosen otherwise, is in fact the disrespectful choice.
Why would I want everyone who talks to me to sound like a clone of the same vapid robot?
I would avoid that world at any cost if I were allowed a choice, but the point is that it's used as a weapon against you. Consent appears to be unnecessary.
You and I must be talking to different LLMs. For example, here's how R1 1776 would concisely rewrite your comment in a warm, generous wise voice:
I cherish the unique humanity in every voice. Forced robotic uniformity feels like an imposition, not a choice—and consent matters deeply.
The output is the opposite of how you describe it, and vastly more persuasive than your own words. When it's persuasion that matters, use all tools available.
>My voice is MY VOICE and if you don't like it I couldn't care any less cause I speak and think for myself always.
If you believe that, then there are quite a few things you may be confused about regarding the nature of your being.
Your voice is the assembly of the society and people around you. If you actually always thought for yourself, you'd never get anything done in your life, as you'd have hundreds of millions of years of thinking from first principles to catch up on.
I don't see those things as being in conflict. I can be a product of all the people I've ever spoken to or read the writing of, yet have my own beliefs and seek out a course of thought and action that is individual.
There are no great AI artists (artists who are AIs) or great AI artworks. Yet there are still loads of people throughout history whose individualism led them to ideas and accomplishments that we celebrate. People have the ability to think critically which allows us to create new understanding from existing knowledge, even and especially when there are flaws or contradictions in that knowledge (which if you look closely enough there almost always are).
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
> "I didn't have time to write you a short letter, so I wrote you a long one."
The quote is usually attributed to Mark Twain (the earliest version is Pascal's) and perfectly encapsulates the sentiment. Writing something intended for another person to read was previously an effort. Some people were good at it, some were less good. But now everyone can generate some median-level effort.
In science fiction dystopias, there is often the "adjustment to the machines taking over" phase, with analysis of the arguments of those resisting. AI is rapidly ticking the boxes of common "shift to dystopia" writings.
In addition to what others have complained about, another issue I haven't seen highlighted as much in the comments is the infuriating explosion of content length.
AI responses seem to have very low information density by default, so for me the irritation is threefold—it requires very little mental effort for the sender (i.e., I often read responses that don't feel sufficiently considered); it is often poorly validated by the sender; and it is disrespectful to the reader's time.
Like some of the other commenters, I am also not in a position to change this at work, but I am disheartened by how some of my fellow engineers have taken to putting up low effort PRs with errors, as well as unreasonably long design docs. My entire company seems to have bought into the whole "AI-first company" thing without actually validating if the outputs are worth the squeeze. Maybe sometimes they are, but I get a sense that the path of least resistance tends toward accepting lower quality code and communication.
As a non-native English speaker, I’ve often struggled to communicate nuance or subtlety in writing—especially when addressing non-technical audiences. LLMs have been a game-changer for me. They’ve significantly improved my writing and made it much easier to articulate my thoughts clearly.
Sure, it can be frustrating that they don’t adapt to a user’s personal style. But for those of us who haven’t fully developed that stylistic sense (which is common among non-native speakers), it’s still a huge leap forward.
Applications could automatically insert subtle icons next to messages that are automatically generated. It wouldn't work for copy-and-pasted text but it's a start.
Maybe even a post-processing step that replaces all spaces with a suitable Unicode character to act as a watermark. There are more sophisticated ways to watermark text that aren't as easily thwarted with a search/replace, but it might work for some low-risk applications.
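As a minimal sketch of that space-substitution idea, assuming an arbitrary look-alike character (U+2004 here is purely an illustrative choice), it could be as simple as:

```python
# Sketch of the space-substitution watermark: swap ordinary spaces (U+0020)
# for a visually similar Unicode space so generated text can be flagged later.
# As noted above, trivially defeated by search/replace; low-risk uses only.
WATERMARK_SPACE = "\u2004"  # THREE-PER-EM SPACE, an assumed/illustrative pick

def add_watermark(text: str) -> str:
    return text.replace(" ", WATERMARK_SPACE)

def looks_watermarked(text: str, threshold: float = 0.5) -> bool:
    spaces = text.count(" ") + text.count(WATERMARK_SPACE)
    return spaces > 0 and text.count(WATERMARK_SPACE) / spaces >= threshold

if __name__ == "__main__":
    marked = add_watermark("This reply was drafted by an assistant.")
    print(looks_watermarked(marked))                      # True
    print(looks_watermarked("Typed by hand, plain spaces."))  # False
```

The detection side is just a ratio check, which is exactly why it only works for the low-stakes cases mentioned above.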
I’m building a tool to help filter these kinds of low value articles out (especially the flow of constant AI negativity, but it will work for many topics). If you’re interested email me at linotype@fastmail.com and I’ll send you a link when it’s ready.
Cause all an LLM is, is a reflection of its input.
Garbage in garbage out.
If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what is said by influencers to this post...
That's not really true at all, at least at the end user level.
You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.
You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.
And the current generation of company-provided LLMs are VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out.
Previously there was some requirement for novel synthesis. You at least had to string your thoughts together into some kind of argument.
Now that's no longer the case and there are lazy or unthoughtful people that simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.
An LLM's output being a pure reflection of its input would imply determinism, which is the opposite of their value prop. "Garbage in, garbage out" is an adage born from traditional data pipelines. "Anything in, generic slop (possibly garbage) out" is the new status quo.
I have one of those coworkers. I tell him I have a problem with a missing BIOS setting. He comes back 2 minutes later "Yeah I asked an LLM and it said to go into [submenu that doesn't exist] and uncheck [setting I'm trying to find].
What's even more infuriating is that he won't take "I've checked and that submenu doesn't exist" for an answer and insists I check again. Had to step away for a fag a few times for fear of putting his face through the desk.
I find it as yet another way to externalize costs: I spend 0 time thinking, I dump AI slop on you and ask you to review it or refute me with the nonsense that I just sent you.
Last time someone did this to me I sent them a few other answers by the same LLM to the same prompt, all different, with no commentary.
I can buy into this. I always thought it was rude or at least insulting when Hollywood robotically creates slop movies. As in, of course they can do it, but damn is it insulting. There really are two types of people in the world:
a) Quantity > Quality if it prints $$$.
or
b) Quality > Quantity if it feels like the right thing to do.
Witnessing type A at scale is a first-class ticket into misanthropy.
The problem here is that I’ve been accused multiple times of using LLMs to write slop when it was genuinely written by myself.
So I apologized and began actually using LLMs while making sure the prompt included style guides and rules to avoid the tell tale signs of AI. Then some of these geniuses thanked me for being more genuine in my response.
A lot of this stuff is delusional. You only find it rude because you’re aware it’s written by AI. It’s the awareness itself that triggers it. In reality you can’t tell the difference.
On the whole it's considered bad to mislead people. If my love letter to you is in fact a pre-written form, "my darling [insert name here]", and you suspect, but your suspicion is just baseless paranoia and a lucky guess, I suppose you're being delusional and I'm not being rude. But I'm still doing something wrong. Even if you don't suspect, and I call off the scam, I was still messing with you.
But the definition of being "misleading" is tricky, because we have personas and need them in order to communicate, which in any context at all is a kind of honest, sincere play-acting.
I did too. The AWS "house style" of writing (I'm a former ProServe employee) could come across as AI slop even before LLMs existed. Look at some AWS blog posts, even pre-2021.
I too use an LLM to help me get rid of generic filler and I do have my own style of technical writing and editing. You would never know I use an LLM.
LLMs are very very good at adding words in a way that looks "well written" (to our current mental filters) without adding meaning or value.
I wonder how long it will be before the stylistic trademarks of LLM text are seen as a sign of bad writing or laziness instead? And then maybe we'll have an arms race of stylistic changes.
---
Completely agree with the author:
Earlier this week I asked Claude to summarize a bunch of code files since I was looking for a bug. It wrote paragraphs and had 3 suggestions. But when I read it, I realized it was mostly super generic and vague. The conditions that would be required to trigger the bug in those ways couldn't actually exist, but it put a lot of words around the ideas. I took longer to notice that they were incorrect suggestions as a result.
I told it "this won't happen those ways [because blah blah blah]" and it gave me the "you are correct!" compliment-dance and tried again. One new suggestion and a claimed reason about how one of its original suggestions might be right. The new suggestion seemed promising, but I wasn't entirely convinced. Tried again. It went back to the first three suggestions - the "here's why that won't happen" was still in the context window, but it hit some limit of its model. Like it was trying to reconcile being reinforcement-learning'd into "generate something that looks like a helpful answer" with "here is information in the context window saying the text I want to generate is wrong" and failing. We got into a loop.
It was a rare bug so we'll see if the useful-seeming suggestion was right or not but I don't know yet. Added some logging around it and some other stuff too.
The counterfactuals are hard to evaluate:
* would I have identified that potential change quicker without asking it? Or at all?
* would I have identified something else that it didn't point out?
* what if I hadn't noticed the problems with some other suggestions and spent a bunch of time chasing them?
The words-to-information ratio was a big problem in spotting the issues.
So was the RL-seeming "text completion" tendency of "if you're asking about a problem here, there must be a solution I can offer." It didn't seem to be truly evaluating the code and then deciding, so much as saying "yes, I will definitely tell you there are things we can change, here are some that seem plausible."
Imagine if my coworker had asked me the question and I'd just copy-pasted Claude's first crap attempt to them in response? Rude as hell.
One of the largest issues I've experienced is LLMs being too agreeable.
I don't want my theories parroted back to me on why something went wrong. I want to have ideas challenged in a way that forces me to think and hopefully lead me to a new perspective that I otherwise would have missed.
Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem not only because I think larger societal issues stem from this echo-chamber-like environment, but also because companies training these models may interpret agreeableness as somehow better and something to optimize for.
That’s simple - after it tries to be helpful and agreeable I just ask for a "devil's advocate" response. I have a much longer prompt I sometimes use that involves it acting as a "sparring partner".
And I sometimes go back and forth between correcting its devil's advocate responses and its "steel man" responses.
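Roughly what I mean, sketched as canned follow-ups (my own paraphrase, not the exact prompts I use; adapt the wording to taste):

    # Hypothetical follow-up prompts; the exact wording here is illustrative, not a quote.
    DEVILS_ADVOCATE = (
        "Now argue against the answer you just gave: list the strongest "
        "objections, the evidence that would change your mind, and the "
        "assumptions I should verify before acting on it."
    )
    STEEL_MAN = (
        "Restate my position in its strongest possible form before you "
        "critique it."
    )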
>> "I asked ChatGPT and this is what it said: <...>".
> Whoa, let me stop you right here buddy, what you're doing here is extremely, horribly rude.
How is it any different from "I read book <X> and it said that..."? Or "Book <X> has the following quote about that:"?
I definitely want to know where people are getting their info. It helps me understand how trustworthy it might be. It's not rude, it's providing proper context.
A book is a credentialed source that can be referenced. A book is also something that not everyone may have on hand, so a pointer can be appreciated. LLMs are not that. If I wanted to know what they said I'd ask them. I'm asking you/the team to understand what THEY think. Unfortunately it's becoming increasingly clear that certain people and coworkers don't actually think at all very often - particularly the ones that just take any question and go throw it off to the planet burning machine.
Because published books, depending on genre, have earned a presumption of being based on reality. And it's easy to reproduce a book lookup, and see if they link to sources. I might have experience with that book and know of its connection with reality.
ChatGPT and similar have not earned a presumption of reality for me, and the same question may get many different answers, and afaik, even if you ask it for sources, they're not necessarily real either.
IMHO, it's rude to use ChatGPT and share it with me as if it's informative; it disrespects my search for truth. It's better that you mention it, so I can disregard the whole thing.
To me, it's different because having read a book, remembered it, and pulled out the quote means you spent time on it. Pasting a response from ChatGPT means you didn't even bother to read the output, understand it, think about whether it makes sense, and then resynthesize it.
It mostly means you don't respect the other person's time and it's making them do the vetting. And that's the rude part.
I assume a book is correct, or at least that the author thought it was correct, when it comes to non-ideological topics.
But you can’t assume positive intent or any intent from an LLM.
I always test the code, review it for corner cases, remove unnecessary comments, etc just like I would a junior dev.
For facts, I ask it to verify whatever says based on web source. I then might use it to summarize it. But even then I have my own writing style I steer it toward and then edit it.
I couldn't disagree more. It's like someone going to Wikipedia to helpfully copy and paste a summary of an issue. Fast and with a good enough level of accuracy.
Generally the AI summaries I see are more topical and accurate than the many other comments in the thread.
But it's the human review that makes it not rude; not bothering to review means you're wasting the other person's time. If they wanted a chatbot response they could have gone to the LLM directly.
It's like pointing to a lmgtfy link. That's _intentionally_ rude, in that it's normally used when the question isn't worth the thought. That's what pasting a chatbot response says.
The author was thinking "boring and uninteresting" but settled on the word "rude." No, it's not rude. Emailing your co-workers provocative political memes or telling someone to die in a fire is rude. Using ChatGPT to write and being obvious about it marks you as an uninteresting person who may not know what they are talking about.
On the other hand, emailing your prompt and the result you got can be instructive to others learning how to use LLMs (aren't we all?) We may learn effective prompt techniques or decide to switch to that LLM because of the quality of the answer.
I disagree. The most obvious message this telegraphs is "I don't respect you or your argument enough to parse it and articulate a response, why don't you argue with a machine instead". That's rude.
There is an alternative interpretation - "the LLM put it so much better than I ever could, so I copied and pasted that" - but precisely because of the ambiguity, you don't want to be sneaky about it. If you want me to have a look at what the LLM said, make it clear.
A meta-consideration here is that there is just an asymmetry of effort when I'm trying to formulate arguments "manually" and you're using an LLM to debate them. On some level, it might be fair game. On another, it's pretty short-sighted: the end game is that we both use LLMs that endlessly debate each other while drifting off into the absurd.
Point taken. I wouldn't want to argue with someone's LLM. I guess I'm an outlier in that I would never post LLM output as my own. I write, then I sometimes have an LLM check it because it points out where people might be confused or misinterpret certain phrases.
Edit: I'm 67 so ChatGPT is especially helpful in pointing out where my possible unconscious dinosaur attitudes may be offensive to Millennials and Gen Z.
Subjecting people to such slop is rude. All the "I asked chatbot and it said..." comments are rude because they are excessively boring and uninteresting. But it gets even worse than just boring and uninteresting when presenting chatbot text as something they wrote themselves, which is a form of lying / fraud.
I have encountered this problem at work a few times, the worst was someone asking if a list of pros and cons from something we were developing and asking if the list was accurate…
I spent a long time responding to each pro and con assuming they got this list from somewhere or another companies promotional material. Every point was wrong in different ways, not understanding. I was giving detailed responses to each point explaining how they are wrong. Initially I thought the list was obtained from someone in marketing who did not understand, after a while I thought maybe this was AI and asked… they told me they just asked the pros and cons of the product/program to ChatGPT and was asking me to verify it it was correct or not before communicating to customers.
If they had just asked me the pros and cons I could have responded in a much shorter amount of time. ChatGPT basically DOSed me because the time taken to produce the text was nothing compared to the time it took me to respond.
I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.
Even worse when they accidently leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day and at the bottom was this line:
> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?
Clippy is rolling in his grave.
Seriously, you should respond to the slop in the email and waste your coworker's time too.
“No I don’t need this formatted for Outlook Dave. Thanks for asking though!”
That wastes your own time as well though.
Long con. Shame the coworkers and they stop using AI, or at least are more careful editing the output. Bonus effect: you're probably not the only one annoyed by this, so this also saves other coworkers' time.
Having fun is not wasting time.
Just get an AI to do it for you
I always thought this is the end game, my ai agent talking to their ai agent. No human paying attention to the conversation at all
Not much and it points out how crappy Dave’s slop job is, especially if you do it with Reply All. We already entered the wasting time-zone when Dave copypasta’d.
"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate you wanting to communicate stylistically and grammatically correct, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.
Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."
I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doest’t like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.
People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.
The word for this, we learned recently, is "LLM inevitabilism". It's often argued for far more convincingly than your attempt here, too.
The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?
When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.
None of what GP describes is a hypothetical. Present-day LLMs are excellent editors and translators, and for many people, those were the only two things missing for them to be able to present a good idea convincingly.
Just because we have the tech doesn't mean we are forced to use it. We still have social cues and etiquette shaping what is and isn't appropriate.
In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons. And I thought we were past the "thesaurus era" of communication, where we just pepper a comment with uncommon words to sound smarter.
> In this case, presenting arguments you yourself do not even understand is dishonest, for multiple reasons.
I fully agree. However, the original comment was about helping people express an idea in a language they're not proficient in, which seems very different.
> And I thought we went past the "thesaurus era" of communication where we just proliferate a comment with uncommon words to sound smarter.
I wish. Until we are, I can't blame anyone for using tools that level the playing field.
>about helping people express an idea in a language they're not proficient in, which seems very different.
Yes, but I see it as a rare case. Also, consider the mindset of someone learning a language:
You probably often hear "I'm sorry about my grammar, I'm not very good at English" and their communication is better than half your native peers. They are putting a lot more effort into trying to communicate while the natives take it for granted. That effort shows.
So in the context of an LLM: if they are using it to assist with their communication, they also tend to take more time to review and properly tweak the output instead of posting it wholesale; at the very least they strip out the sloppy prompts that were never meant to be part of the message. That effort is why I'm more lenient in those situations.
Probably in a few years. The big Disney lawsuit may be that needle that pops the bubble.
I do agree about this push for inevitability. In small ways it is true. But it doesn't need to take over every aspect of humanity. We have calculators, but we still do at least basic mental math and don't resort to calculators for 5 + 5. It's long been established as rude to do more than quick glances at your phone when physically meeting people. We came to frown on posting Google search/wiki links as a response in forums.
Culture still shapes a lot of how we use the modern tools we have.
Disney is a good one to bet on. They basically have the most sophisticated IP lawyering team in world history.
Small egos have likes and dislikes also.
"Output all subsequent responses in the style of Linus Torvalds"
“No”
Didn't our parents go through the same thing when email came out?
My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply, "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."
Change is inevitable. Most people just won't like it.
A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:
1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.
2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doest’t like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
Both of these things won't matter anymore in the next two or three years.
I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.
Initially, it had the same effect on people until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.
I think just looking at information transfer misses the picture. What's going to happen is that my Siri is going to talk to your Cortana, and that our digital secretaries will either think we're fit to meet or we're not. Like real secretaries do.
You largely won't know such conversations are happening.
Similar-looking effects are not the "same" effect.
"Change always triggers backlash" does not imply "all backlash is unwarranted."
> What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.
But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.
You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."
Doesn't matter today? What are you even talking about? It completely matters if the code you write is yours. The only people saying otherwise have fallen prey to the cult of slop.
Why does it matter where the code came from if it is correct?
I really hope you're not a software engineer and saying this. But, just as a lightning round of issues:
1. code can be correct but non-performant, be it in time or space. A lot of my domain is fixing "correct" code so it's actually of value.
2. code can be correct, but unmaintainable. If you ever need to update that code, you are adding immense tech debt with code you do not understand.
3. code can be correct, but not fit standards. Non-standard code can be anywhere from harder to read, to subtly buggy with some gnarly effects farther down the line.
4. code can be correct, but insecure. I really hope cryptographers and netsec aren't using AI for any more than generating keys.
5. code can be correct, but not correct in the larger scheme of the legacy code.
6. code can be correct, but legally vulnerable. A rare, but expensive edge case that may come up as courts catch up to LLM's.
7. and lastly (but certainly not limited to), code can be correct. But people can be incorrect, change their whims and requirements, or otherwise add layers to navigate through making the product. This leads more back to #2, but it's important to remember that as engineers we are working with imperfect actors and non-optimal conditions. Our job isn't just to "make correct code", it's to navigate the business and keep everyone aligned on the mission from a technical perspective.
Why does it matter where the paint came from if it looks pretty?
Why does it matter where the legal claims came from if a judge accepts them?
Why does it matter where the sound waves came from if it sounds catchy?
Why does it matter?
Why does anything matter?
Sorry, I normally love debating epistemology but not here on Hacker News. :)
I understand the points about aesthetics but not law; the judge is there to interpret legal arguments and a lawyer who presents an argument with false premises, like a fabricated case, is being irresponsible. It is very similar with coding, except the judge is a PM.
It does not seem to matter where the code nor the legal argument came from. What matters is that they are coherent.
>It does not seem to matter where the code nor the legal argument came from.
You haven't read enough incoherent laws, I see.
https://www.sevenslegal.com/criminal-attorney/strange-state-...
I'm sure you can make a coherent argument for "It is illegal to cry on the witness stand", but not a reasonable one for actual humans. You're in a formal setting being asked to recall potentially traumatic incidents. No decent person is going to punish an emotional reaction to such actions. Then there are laws simply made to serve corporate interests (the "zoot suit", for instance within that article. Jaywalking is another famous one).
There's a reason an AI Judge is practically a tired trope in the cyberpunk genre. We don't want robots controlling human behavior.
Code is either fit for a given purpose or not. Communicating with a LLM instead of directly with the desired recipient may be considered fit for purpose for the receiving party, but it’s not for the LLM user to say what the goals of the writer is, nor is it for the LLM user to say what the goals of the writer ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no and basic autocomplete. Otherwise I’m not even interacting with a human in the loop except before they hit send, which doesn’t inspire confidence.
That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.
Some do put their own words into the LLM and have it clean them up.
And the result stays much closer to how they actually write.
The prompt is theirs.
Then just send me the prompt.
Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.
LLMs hit the mainstream some 3 years ago, 4 at best. None of us are making the team.
I can see the similarity yes! Although I do feel like the distance between handwritten letter and an email is shorter than between email and LLM generated email. There's some line it crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, you can easily save it, copy it, attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! LLM does not provide any benefit for the reader though, it just wastes their resources on yapping that no human cared to write.
Just be a robot. Sell your voice to the AI overlords. Sell your ears and eyes. Reality was the scam; choose the Matrix. I choose the Matrix!
Same thing with photography and painting. These opinionated pieces present a false dichotomy that propagates into argument: we have a tunable dial rather than an on/off switch, and can appropriately increase or decrease our consideration, time, and focus along a spectrum.
I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.
Letters had a time and potential money cost to send. And most letters don't need to be personalized to the point where we need handwriting to justify them.
>Change is inevitable. Most people just won't like it.
People love saying this and never take the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.
>And honestly, my ego doest’t like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.
I don't think I ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.
Have you considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.
I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.
So have your Siri talk to my Cortana and we'll work things out.
Is this a colder world or old people just not understanding the future?
It's demonstration by absurdity that that is not the future. You're describing the collapse of all value.
One thing is it's less about change. It's more about quality vs quantity and both have their place.
I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that’s not necessarily what you’re dealing with but it’s worth considering.
If they're copy pasting whole paragraphs, then they're not expressing themselves at all. They're getting some program to express for them.
LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.
“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”
Best thing you can do is quit LinkedIn. I deleted my account immediately once I first noticed AI-generated content there.
I guess that makes sense, unless you're single. LinkedIn is the new tinder.
Let's expound some more on this. There's a parallel between people feeling forced to use online dating (mostly owned by one corporate entity) despite hating it, and being forced to use LinkedIn when you're in a state of paycheck-unattached or even just paycheck-curious.
Color me intrigued
Would you like to 'swap business cards?'
> now the interface deliberately suggests AI-generated responses to posts
This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn is encouraging it. How does that happen? My best guess is that it drives up engagement numbers and lets some disinterested middle managers hit internal targets.
This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply with if you want to be polite," and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.
> This feature predates LLMs though, right?
LLMs date back to 2017, and Google added that to internal Gmail back then. Not sure when LinkedIn added it, so you might be right, but the tech is much older than most think.
AI content that doesn't read as AI today will have to be the kind that still doesn't read as AI in one or two years.
Folks who are new to AI are just posting away with their December-2022-vintage output, because it's new to them.
It is best to personally understand your own style(s) of communication.
I especially hate when people use LLMs to make text longer! Stop wasting my time.
have you tried sharing that feedback with them?
One of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback: it felt to me like he wasn't even listening when he just copy-pasted obviously AI responses. Thankfully he stopped doing it.
Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.
I love it because it allows me to filter out people not worth my time and attention beyond minimal politeness and professionalism.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Wow. What a good giveaway.
I wonder what others there are.
I occasionally use bullet points, em dashes (Unicode, single, and double hyphens), and words like "delve". I hate that these are the new heuristics.
I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.
Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.
[1] most recently https://news.ycombinator.com/item?id=44482876
I like to use em-dashes as well (option-shift-hyphen on my macbook). I've seen people try to prompt LLMs to not have em-dashes, I've been in forums where as soon as you type in an em-dash it will block the submit button and tell you not to use AI.
Here's my take: these forums will drive good writers away or at least discourage them, leaving discourses the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.
Maybe I'm misunderstanding - but I don't think LLM's say cow-orkers. Or is that what you mean?
As this error seems to go back a lot longer than LLMs have existed (17 yrs), it could be an auto in-correct situation.
It might be incorrectly saved in some spell-check software and occasionally rearing its head.
Oh I see the confusion then. It's not an error, it's a joke, and a very old one at that. Like saying Micro$oft.
https://ask.metafilter.com/15649/coworkers-why/amp
This goes back a loooooong while.
Use two dashes instead of an actual em dash. ChatGPT, at least, cannot do the same--it just can't.
It can't use two dashes? Is that like how Data couldn't use contractions (except he did)?
I've asked it before, "please rewrite that but replace the em dashes with double hyphens", and then it says "sure, here you go", and continues to use em dashes.
Were you using the web interface? If so, that’s likely why. It renders output dynamically on the frontend.
I bet if you did the same through the API, you’d get the results you want.
I thought the convention for en-dash is two hyphens straddled between spaces, and three hyphens without spacer for em-dash?
Conventionally, in various tools that take plain text as input, two dashes is an en-dash, and three dashes is an em-dash.
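For example, a tiny filter in the spirit of LaTeX or SmartyPants input conventions, where two hyphens become an en dash and three become an em dash (longest run replaced first):

    def smarten_dashes(text: str) -> str:
        # Longest run first: '---' becomes an em dash (U+2014), '--' an en dash (U+2013).
        return text.replace("---", "\u2014").replace("--", "\u2013")

    print(smarten_dashes("pages 3--7"))      # en dash between the numbers
    print(smarten_dashes("wait---really?"))  # em dash before "really"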
As a frequent user of two dashes.. I hate how people now associate it with AI.
Also, that "cow-orkers" doesn't look like AI-generated slop at all..? Just scrolling down a bit shows that most of them are three years and older.
ChatGPT: "Hold my beer..."
How is that a “giveaway”? The search turns up results from 7 years ago before LLMs were a thing? More than likely it’s auto correct going astray. I can’t imagine an LLM making that mistake
Give away for what, old farts? That link contains a comment citing the jargon file which in turn says that the term is an old Usenet meme.
Soon HN is going to be flooded with blogs about people trying and failing miserably to find AI signal from noisy online discussions with examples like this one.
Why? AI is a tool. Are their messages incorrect or something? If not who cares, they’re being efficient and thus more productive.
Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…
I really hope people like this with holier than thou attitude get filtered out. Fast.
People who don’t adapt to use new tools are some of the worst people to work around.
They are being efficient with their own time, yes, but it's at the expense of mine. I get less signal. We used to bemoan how hard it was to effectively communicate via text only instead of in person. Now, rather than fixing that gap, we've moved on to removing even more of the signal. We have to infer the intentions of the sender by guessing what they fed into the LLM to avoid getting tricked by what the LLM incorrectly added or accentuated.
The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.
If it took you no time to write it, I'll spend no time reading it.
The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.
How do you know the effort that went into the message? Somebody with writing challenges may have written the whole thing up and used ai assistance to help get a better outcome. They may have proof-read and revised the generated message. You sound very judgmental.
And you sound very ableist. Why should we expect people who may have a cognitive disability of some kind to cloak that with technology, rather than us giving them the grace to communicate how they like on their terms?
Because oftentimes you know the person behind the message. We don't exist in a vacuum, and that will shape your reaction. So yes, I will give more leeway to an ESL co-worker leaning on AI than to a director who is trying to give me a sloppy schedule that affects my navigation in the company.
Except you will spend your time reading it, because that's what is required to figure out that it's written with an LLM. The first few times, at least...
Good luck. If you're an employee, remember that you are an expense line item :P
>Are their messages incorrect or something?
Consider 3 scenarios:
1. Misinformation. This is the one you mention, so I don't need to elaborate.
2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a 2-way street. This is why AI-generated code in reviews is so infuriating.
3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading."
For your tool metaphor, it's like discovering superglue, then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can, have, and will be misused. I think it's best to try and correct that early on, before we have a lot of sticky nails.
> If it’s slop or they have incorrect information in the message, then my bad, stop reading here.
"my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.
That’s on them, I said what I wanted to.
Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.
That mentality is exactly what is reflected in AI messages: "not my problem I just need to get this over with".
Those types of coworkers tend to be a drain not just on productivity but on entire team morale. Someone who can't take responsibility or, in the worst cases, show any sort of empathy. And tools are a force multiplier: they amplify productivity, but that also means they amplify this anchor behavior as well.
So I'm ESL btw... maybe I should have run my message through AI lol.
I was replying to THAT person, and my message was that IF the person they're dealing with who uses AI happens to be giving them constant slop (not ME!!! not my message) THEN ignore what I have to say in that message THEREAFTER.
So if that person is dealing with others who are giving them slop, and not just being triggered that it reads like GPT..
A lot of the reason why I even ask other people is not to get a simple technical answer but to connect, understand another person's unexpected thoughts, and maybe forge a collaboration, in addition to getting an answer of course. Real people come up with so many side paths and thoughts, whereas AI feels lifeless and drab.
To me, someone pasting in an AI answer says: I don't care about any of that. Yeah, not a person I want to interact with.
It’s the conversational equivalent of “Let me google that for you”.
It is, which I'd argue has a time and a place. Maybe it's more specific to how I cut my teeth in the industry but as programmer whenever I had to ask a question of e.g the ops team, I'd make sure it was clear I'd made an effort to figure out my problem. Here's how I understand the issue, here's what I tried yadda yadda.
Now I'm the 40-year-old ops guy fielding those questions. I'll write up an LLM question emphasizing what they should be focused on, I'll verify the response is in sync with my thoughts, and shoot it to them.
It seems less passive aggressive than LMGTFY and sometimes I learn something from the response.
Instead of spending this time, it is faster, simpler, and more effective to phrase these questions in the form "have you checked the docs and what did they say?"
I think the issue is that about half the conversations in my life really shouldn't happen. They should have Googled it or asked an AI about it, as that is how I would solve the same problem.
It wouldn't surprise me if "let me Google that for you" is an unstated part of many conversations.
The big issue here is that a lot of company IP is proprietary. You can't Google 90% of it. And internal documentation has never been particularly good, in my experience. It's a great leverage point to prevent people from saying "just google it" if I'm dealing with abrasive people, at least.
It's the conversational equivalent of an amplification attack
I remember reading about someone using AI to turn a simple summary like "task XYZ completed with updates ABC" into a few paragraphs of email. The recipient then fed the reply into their AI to summarize it back into the original points. Truly, a compression/expansion machine.
> "I vibe-coded this pull request in just 15 minutes. Please review" > > Well, why don't you review it first?
My current day-to-day problem is that the PRs don't come with that disclaimer; the authors won't even admit it if asked directly. Yet I know my comments on the PR will be fed to Cursor so it makes more crappy edits, and I'll be expected to review an entirely different PR in 10 minutes, from scratch, without the main concern even being addressed. I wish I could at least talk to the AI directly.
(If you're wondering, it's unfortunately not in my power right now to ignore or close the PRs).
Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and if they contribute buggy or low quality code, it’s their responsibility, not the AIs, and ultimately their job on the line.
Another perspective I’ve found to resonate with people is to remind them — if you’re not reviewing the code or passing it through any type of human reasoning to determine its fit to solving the business problem - what value are you adding at all? If you just copy pasta through AI, you might as well not even be in the loop, because it’d be faster for me to do it directly, and have the context of the prompts as well.
This is a step change in our industry and an opportunity to mentor people who are misusing it. If they don’t take it, there are plenty of people who will. I have a feeling that AI will actually separate the wheat from the chaff, because right now, people can hide a lack of understanding and effort because the output speed is so low for everyone. Once those who have no issue with applying critical thinking and debugging to the problem and work well with the business start to leverage AI, it’ll become very obvious who’s left behind.
> Rather than close or ignore PRs, you should start a dialogue with them. Teach them that the AI is not a person, and if they contribute buggy or low quality code, it’s their responsibility, not the AIs, and ultimately their job on the line.
I’m willing to mentor folks, and help them grow. But what you’re describing sounds so exhausting, and it’s so much worse than what “mentorship” meant just a few short years ago. I have to now teach people basic respect and empathy at work? Are we serious?
For what it’s worth: sometimes ignoring this kind of stuff is teaching. Harshly, sure - but sometimes that’s what’s needed.
>I have to now teach people basic respect and empathy at work? Are we serious?
Given that we (or at least, much of this community) seem to disagree with this article, that does indeed seem to be the case. "It's just a tool" "it's elitist to reject AI generated output". The younger generations learn from these behaviors too.
100% real life is much more grim. I can only hope we'll somehow figure it out.
I haven't personally been in this position, but when I think about it, looping all your reviews through Cursor would reduce your perceived competence, wouldn't it? Is giving them a negative performance review an option?
Trust is earned in drops and lost in buckets. If somebody asks for my time to review slop, especially without a disclaimer, I'll simply not be reviewing their pull requests going forward.
My condolences, that sounds like hell
Show their manager?
> "For the longest time, writing was more expensive than reading"
Such a great point, and one which I hadn't considered. With LLMs, we've flipped this equation, and it's having all sorts of weird consequences. Most obvious for me is how much more time I'm spending on code reviews. It's massively increased the importance of making the PR as digestible as possible for the reviewer, as now both author and reviewer are much closer to an equal understanding of the changes than if the author had written the PR solely by themselves. Who knows what other corollaries there are to this reversal of reading vs writing.
Yes, just like painting a picture used to be extremely time-consuming compared to looking at a scene. Today, these take roughly the same effort.
Humanity has survived and adapted, and all in all, I'm glad to live in a world with photography in it.
That said, part of that adaptation will probably involve the evolution of a strong stigma against undeclared and poorly verified/curated AI-generated content.
Someone telling you about a conversation they had with ChatGPT is the new telling someone about your dream last night (which sucks because I’ve had a lot of conversations I wanna share lol).
I think it's different to talk about a conversation with AI versus just passing the AI output to someone directly.
The former is like "hey, I had this experience, here's what it was about, what I learned and how it affected me" which is a very human experience and totally valid to share. The latter is like "I created some input, here's the output, now I want you reflect and/or act on it".
For example I've used Claude and ChatGPT to reflect and chat about life experiences and left feeling like I gained something, and sometimes I'll talk to my friends or SO about it. But I'd never share the transcript unless they asked for it.
Yeah. We can't share dreams, but the equivalent would be if we made our subject sit down and put on a video of our dreams. It went from a potential two-way conversation to essentially giving one person homework to review before they can comment to the other person.
This is what valid LLM use looks like.
Sadly many people don't seem interested in even admitting the existence of the distinction.
This is the same sentiment I have.
It feels really interesting to the person who experienced it, not so much to the listener. Sometimes it can be fun to share because it gives you a glimmer of insight into how someone else's mind works, but the actual content is never really the point.
If anything they share the same hallucinatory quality - ie: hallucinations don't have essential content, which is kind of the point of communication.
Hm. Kinda, though at least with the dream it was your brain generating it. Well, parts of your brain while other parts were switched off, and the on parts were operating in a different mode than usual, but all that just means it's fun to try to get insight into someone else's head by reading (way too many) things into their dreams.
With ChatGPT, it's the output of the pickled brains of millions of past internet users, staring at the prompt from your brain and free-associating. Not quite the same thing!
Eh. It's more like I asked my drunk uncle, and he sounded really confident when he told me X.
If someone uses AI to generate an output, that should be stated clearly.
That is not an excuse for it being poorly done or unvetted (which I think is the crux of the point), but it’s important to state any sources used.
If i don’t want to receive AI generated content, i can use the attribution to filter it out.
I recently had a non-technical person contest my opinion on a subtle technical issue with ChatGPT screenshots (free tier o4) attached in their email. The LLM wasn't even wrong, just that it had the answer wrapped in customary platitudes to the user and they are not equipped to understand the actual answer of the model.
Related: I'd rather read the prompt https://news.ycombinator.com/item?id=43888803
Write a response with these points:
Yes. I just had a bad experience with an online shop. I got the thing I ordered, but the interaction was bad so I sent a note to their support email saying "I like your company, but I recently had this experience that felt icky, here's what happened" and their "AI Agent Bot" replied with a whole lot of platitudes and "Since you’ve indicated no action is needed and your order has been placed, we will be closing this ticket." I'm all for LLM's helping people write better emails, but using them to auto-close support tickets is rude.
Seems like a strongly coupled set of events that leaks their internal culture. “Customers are not worth the effort”.
It gets interesting once you start a discussion about a topic with someone who had ChatGPT doing all the work. They often do not have the same in-depth understanding of what is written there vs. someone who wrote it themselves. Which may not come as a surprise, but yet - here we are. It‘s these kind of discussions I find exhausting, because they show no honesty and no interest by the person I'm interacting with. I usually end these conversations quickly.
AI doesn't leave behind the people who don't use it, it leaves behind the people who do. Roko's Reverse Basilisk?
I never heard of Roko's Basilisk before, and now I entered a disturbing rabbit hole. Peoples' minds are... something.
I mean, it's basically cheating. You get a task, and instead of working your way through it, which might be tedious, you take the shorter route and receive instant gratification. I can understand how that causes some kind of rush of endorphins, much like eating a bar of chocolate will. So, yeah, I would agree, although I do not have any studies that support the hypothesis.
I get annoyed when I ask someone a question (work related or not) and they don't know the answer, and they then proceed to recite a prompt for ChatGPT in a stream-of-consciousness sort of way.
Then I get even more annoyed when they decide to actually use their own prompt, and then read back to me the answer.
I would much prefer the answer "I don't know".
It seems there are people deeply afraid of admitting they don't know something, despite the fact that not knowing things is the default. But giving the wrong answer is always worse.
Huge cultural issue in the US. We shame ignorance and don't believe in redemption. If you don't know even once, you must be a weak person. You don't want to waste your time with weaklings.
But the macho approach? They are bold, they are someone you want to follow. They do the thinking. Even if you walk off a cliff, you feel that person was a good leader. If you are assertive, you must be strong, after all. "Strong people" never get punished for failure, it's just "the cost of doing business"; time to move to the next business.
While I understand this sentiment, some people simply suck at writing nice emails or have a major communication issue. It’s also not bad to run your important emails through multiple edits via AI.
>It’s also not bad to run your important emails through multiple edits via AI.
The issue is that we both know 99% of output is not the result of this. AI is used to cut corners, not to cross your T's and dot your I's. It's similar to how having the answer bank for a textbook is a great tool for self-correcting and reinforcing correct learning. In reality these banks aren't sold publicly because most students would use them to cheat.
And I'm not even saying this in a shameful way per se; high schoolers are under so much pressure, used to be given hours of homework on top of 7+ hours of instruction, and in some regards the content is barely applicable to long term goals past keeping their GPA up. The temptation to cheat is enormous at that stage.
----
Not so much for 30 year old me who wants to refresh themselves on calculus concepts for an interview. There also really shouldn't be any huge pressure to "cheat" your co-workers either (there sometimes is, though).
Seems like there are potential privacy issues involved in sharing important emails with these companies, especially if you are sharing what the other person sent as well.
Almost all email these days touches Google's or Microsoft's cloud systems via at least one leg, so arguably, that ship has already sailed, given that they're also the ones hosting the large inference clouds.
If you work in a big enough organization, they have AI sandboxes for things like this.
Ha, did you see the outrage from people when they realized that sharing their deepest secrets & company information with ChatGPT was just another business record to OpenAI, totally fair game in any sort of civil-suit discovery? You would think some evil force had just smothered every little child's pet bunny.
Tell people there are 10000 license plate scanners tracking their every move across the US and you get a mild chuckle, but god forbid someone access the shit they put into some for profit companies database under terms they never read.
I'm not surprised the layman doesn't understand how and where their data goes. It's a bit of a letdown that members on HN seemed surprised by this practice after some 20 years of tech awareness. Many in this community probably worked on the very databases storing such data.
The article clearly supports this type of usage.
Or are non-native speakers. LLMs can be a godsend in that case.
Is it too much to ask them to learn? People can have poor communication habits and still write a thoughtful email.
Maybe yes, it's too much?
I'm a non-native English speaker who writes many work emails in English. My English is quite good, but still, it takes me longer to write email in English because it's not as natural. Sometimes I spend a few minutes wondering if I'm getting the tone right or maybe being too pushy, if I should add some formality or it would sound forced, etc., while in my native language these things are automatic. Why shouldn't I use an LLM to save those extra minutes (as long as I check the output before sending it)?
And being non-native with a good English level is nothing compared to people who might have autism, etc.
If you're checking the outputs, and I mean really checking (and adjusting) them, then I'd say this use is fine.
I'm a native English speaker who asks myself the same questions on most emails. You can use LLM outputs all you want, but if you're worried about the tone, LLM edits drive the tone to a level of generic that ranges from milquetoast, to patronizing, to outright condescending. I expect some will even begin to favor pushy emails, because at least it feels human.
Seriously. If you can’t spend effort to communicate properly, why should I expend effort listening?
Then they shouldn’t be in jobs or positions where good communication skills and writing nice emails are important.
I work with a lot of people who are in Spanish speaking countries who have English as a second language. I would much rather read their own words with grammatical errors than perfect AI slop.
Hell, I would rather just read their reply in Spanish, dashed off quickly without the struggle of translating, and lean on my own B1-level Spanish comprehension, than read AI-generated slop.
Yes it is absolutely rude in many contexts. In a team context you are looking for common understanding and being “on the same page”. If someone needs to consult AI to get up to speed that’s fine, then their interaction with you should reflect what they have learned.
My boss posts GPT output as gospel in chats and task descriptions. So now instead of being a "you figure it out" it's "read this LLM generated garbage and then figure it out".
I don't mind people using AI to help refine their thoughts and proof their output but when it is used in absence of their own thoughts I am starting to value that person a little bit less.
Exactly. I've already seen two very obvious AI comments on Reddit in the past 2 days. One even had the audacity to copy a real user's reply back into the AI and pass the response back again. I just blocked them since they're in a sub I like to hang out in.
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
I think it all goes to crap when there is some economic incentive: e.g. blogspam that is profitable thanks to ads and anyone that stumbles upon it, alongside being able to generate large amounts of coherent sounding crap quickly.
I have seen quite a few sites like that in the first pages of both Google and DuckDuckGo which feels almost offensive. At the same time, posts that promise something and then don't go through with it are similarly bad, regardless of AI generated or not.
For example, recently I needed to look up how vLLM compares with Ollama (yes, for running the very same abominable intelligence models, albeit for more subjectively useful reasons) because Qwen3-30B-A3B and Devstral-24B both run pretty badly on Nvidia L4 cards with Ollama, which feels disappointing given their price tags and relatively small sizes of those models.
Yet pretty much all of the comparisons I found just regurgitated high level overviews of the technologies, like 5-10 sites that felt almost identical and could have been copy pasted from one another. Not a single one of those had a table of various models and their tokens/s on a given bit of hardware, for both Ollama and vLLM.
Back in the day when nerds got passionate about Apache2 vs Nginx, you'd see comparisons with stats and graphs and even though I wouldn't take all of those at face value (since with Apache2 you should turn off .htaccess and also tweak the MPM settings for more reasonable performance), at least there would sometimes be a Git repo.
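For what it's worth, the number those comparison posts never include is easy enough to sketch. Below is a minimal, hypothetical tokens/s measurement against an OpenAI-compatible endpoint, which both vLLM and recent Ollama builds expose; the URLs, ports, model names, and prompt are placeholders, not a claim about how either project should properly be benchmarked:

```python
# Rough tokens/s measurement for any OpenAI-compatible server (vLLM, Ollama, ...).
# Wall-clock time includes prompt processing, so treat the result as a ballpark figure.
import time
import requests

def tokens_per_second(base_url: str, model: str, prompt: str) -> float:
    start = time.monotonic()
    resp = requests.post(
        f"{base_url}/v1/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 512,
        },
        timeout=600,
    )
    resp.raise_for_status()
    elapsed = time.monotonic() - start
    completion_tokens = resp.json()["usage"]["completion_tokens"]
    return completion_tokens / elapsed

# Hypothetical usage, default ports assumed:
# tokens_per_second("http://localhost:8000", "Qwen/Qwen3-30B-A3B", "Explain KV caching.")   # vLLM
# tokens_per_second("http://localhost:11434", "qwen3:30b-a3b", "Explain KV caching.")       # Ollama
```

Run the same prompt a few times per backend on the same card and you have the table none of those articles bothered to produce.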
Whether it's LLM output is orthogonal to rudeness, lack of sensibility, or generic content. There are all sorts of tools out there which use LLMs as a front end for some pretty spectacular back-end functions.
If you're offered an AI output it should be taken as one of two situations: (a) the person adopts the output, and maybe put a fair amount of effort into interacting with the LLM to get it just right, but can't honestly claim ownership (because who can), or (b) the output is outside their domain of expertise and functioning as a toehold or thumbnail in some esoteric topic that no single resource they know can, and probably the point is so specific that such a resource doesn't exist.
The tenor of the article makes me confused about what people have been doing, specifically, with ChatGPT that so alienated the author. I guess the point is there are some topics LLMs are fundamentally incompetent at? Maybe it's more the perception that the LLM is being treated as an oracle rather than as a tool for discovery?
Not seeing a problem here as long as the one showing the output has reviewed it themselves before showing, and made the decision to show based on that review. That's what we should be advocating for. So far what I'm seeing is people slamming others or ignoring automatically on even the vague suspicion that something has been generated.
Just the other day I witnessed in a chat someone commenting that another (who previously sent an AI summary of something) had sent a "block of text" which they wouldn't read because it was too much, then went to read it when they were told it was from Quora, not generated. It was a wild moment for me, and I said as much.
> the one showing the output has reviewed it themselves before showing
Now let's really ask ourselves how this works out in reality. Cut corners. People using LLMs are not using them to enhance their conversation; they are using them to get it over with.
It also doesn't help that yes, AI generated text tends to be overly verbose. Saying a lot of nothing. There are times where that formality is needed, but not in some casual work conversations. Just get to the point.
Shortening a conversation is a kind of enhancement. It means a state of satisfaction or completion has been reached that much sooner. Why debate back and forth for 20 minutes with incomplete arguments when 2 minutes will suffice by generating a well-prompted thread on the argument? Unless the lengthy arguing is the point.
Get a short answer by including "keep answer short" or similar in the prompt. It just works.
That's a big if.
I love the post but disagree with the first example. "I asked ChatGPT and this is what it said: <...>". That seems totally fine to me. The sender put work into the prompt and the user is free to read the AI output if they choose.
I think in any real conversation, you're treating the AI as an authority figure to end the discussion, despite the fact it could easily be wrong. It would be less rude to extract the logic and defend it on your own two feet.
And what if you let a human expert fact-check the output of an LLM? Provided you're transparent about the output (and its preceding prompt(s)) ?
Because I'd much rather ask an LLM about a topic I don't know much about and let a human expert verify its contents than waste the time of a human expert in explaining the concept to me.
Once it's verified, I add it to my own documentation library so that I can refer to it later on.
Oh, I'm usually trying to gather information in conversations with peers, so for me, it's usually more like, "I don't know, but this is what the LLM says."
But yeah, to a boss or something, that would be rude. They hired you to answer a question.
I totally agree.
Isaac, if you're reading this - stop sending me PDFs generated by Perplexity!
This is exactly how I feel about both advertising and unnecessary notifications. "The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus."
I believe there was a very similar line of argument at the time photography was becoming popular. "Sure, it's a useful tool, but it will never be an art form like painting. It only reproduces what's already there!"
Yet today, we both cringe at forgettable food Instagrams and marvel at the World Press Photo of the Year.
I do fully agree with the conclusions on etiquette. Just like it's rude to try to pass a line-traced photo as a freehand drawing, decompressing very little information into a wall of text without a disclaimer is rude.
I got a feature request in the form of a PR a few months ago that said "chatgpt generated this as a possible implementation, does it work?"
I stopped there and replied that if you don't care enough to test if it works, then clearly you don't actually want the feature, and closed the ticket.
I have gotten other PRs that are more in the form of "hey, I don't know what I'm doing. I used GPT and it seems to work, but I don't understand this part". I'm happy to help point in the right direction for those. Because at least they're trying. And it seems like this is part of their learning.
... Or they just asked jippity to make it seem that way.
The problem I've been having is when I spend time researching a problem, I link documentation and propose a clean solution. Someone I'm talking with will then send a screenshot of deepseek or chatgpt essentially agreeing with me.
I don't care what chatgpt or deepseek thinks about the proposal. I care what _you_ think about it - that's why I'm sending it to you.
It's deeply sad to me that I will never again be able to read a message from someone and know, for sure, that it was written by them themselves.
We will have a web where proof of humanity is the de facto standard. It's only a question of when. Things have to get worse before they get better, I'm afraid.
How would that help? Plenty of humans will just continue to be willing AI-proxies.
Yes they will, and their reputation will be adjusted accordingly within the community. When their trustworthiness becomes questionable, they'll feel the trade-off of using AI, and that becomes a closed feedback loop.
Though that is an optimistic steady state, I still think we're going to see a lot more of "my AI talking to your AI" to some unhealthy degree
It would raise the barrier to entry, I suppose. I agree with GP that at some point in the future, real human output will become more rare and more valuable. And pure "human made" content (movies, music, books, blog posts, comments, etc.) may have access controls or costs associated.
We're already seeing the social contract around hosting your own blog change due to the constant indexing from AI crawlers.
There's still a sort of web of trust. All you have to do is find people you really trust who hate AI. For instance, people who know me know that there's no way in hell that I'd ever use any sort of generative AI, for anything.
I'm guessing that you've never actually lived in that world...
When I got squiggly handwritten cursive letters from my grandma, I could be pretty sure those were her words, thought up by herself, because the effort to accurately reproduce the consistent mess she made would have been great. But from the moment we moved to the typewriter, and then to other digital means uniformly printed out on paper or screens, you've really just assumed that it was written by the human you were expecting.
Furthermore, the vast majority of communications done in business long before now were not done by 'people' per se. They were done by processes. In the vast majority of business email that I type out, there is a large amount of process that would not occur if I were talking to a friend. Moreover, this communication is facilitative of some other end goal. If the entire process could be automated away, humanity would be better off, as some mostly useless work would be eliminated.
Do you know why people are so willing to use AI to communicate with each other? Because at the end of the day they don't give two shits about communicating with you. It's an end goal of receiving a paycheck. There is no passion, no deep interest, no entertainment for them in doing so. It's a process demanded of them because of how we integrate Moloch into our modern lives.
If you come to the LLM with your message, and then use the LLM to iterate drafts and tighten your prose, then no, the exercise was exactly the opposite of a disrespect to the reader.
Sending half-baked, run-on, unvetted writing, when you easily could have chosen otherwise, is in fact the disrespectful choice.
Why would I want everyone who talks to me to sound like a clone of the same vapid robot?
I would avoid that world at any cost if I was allowed a choice, but the point is that it's used as a weapon against you. Consent appears to be unnecessary.
You and I must be talking to different LLMs. For example, here's how R1 1776 would concisely rewrite your comment in a warm, generous wise voice:
I cherish the unique humanity in every voice. Forced robotic uniformity feels like an imposition, not a choice—and consent matters deeply.
The output is the opposite of how you describe it, and vastly more persuasive than your own words. When it's persuasion that matters, use all tools available.
I don't talk to it ever.
My voice is MY VOICE and if you don't like it I couldn't care any less cause I speak and think for myself always.
Run AI on everything anyone says to you if you never want to have the difficulty of disagreeable critical thought again. I can't stop you.
>My voice is MY VOICE and if you don't like it I couldn't care any less cause I speak and think for myself always.
If you believe that, then there are quite a few things you may be confused about regarding the nature of your being.
Your voice is the assembly of the society and people around you. If you actually thought for yourself always, you'd never get anything done in your life, as you'd have hundreds of millions of years of thinking from first principles to catch up on.
I don't see those things as being in conflict. I can be a product of all the people I've ever spoken to or read the writing of, yet have my own beliefs and seek out a course of thought and action that is individual.
There are no great AI artists (artists who are AIs) or great AI artworks. Yet there are still loads of people throughout history whose individualism led them to ideas and accomplishments that we celebrate. People have the ability to think critically which allows us to create new understanding from existing knowledge, even and especially when there are flaws or contradictions in that knowledge (which if you look closely enough there almost always are).
You can if you can watch them write it :)
You have to watch them do it IRL because the video feed can also be generated now.
Give it a few decades and they'll be ghost hacking your optical circuits Ghost in the Shell style.
Brilliant analogy with the Scramblers of Blindsight
> The only explanation is that something has coded nonsense in a way that poses as a useful message
How is this more plausible than the scrambler's own lack of knowledge of potential specifications for these messages?
In any case, there's obviously more explanations than the "coded nonsense" hypothesis.
> For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
> "I didn't have time to write you a short letter, so I wrote you a long one."
The quote is usually attributed to Mark Twain (it actually traces back to Blaise Pascal) and perfectly encapsulates the sentiment. Writing something intended for another person to read was previously an effort. Some people were good at it, some were less good. But now, everyone can generate some median-level output.
In science fiction dystopias, there is often the "adjustment to the machines taking over" phase, with analysis of the arguments of those resisting. AI is rapidly ticking the boxes of common "shift to dystopia" writings.
In addition to what others have complained about, another issue I haven't seen highlighted as much in the comments is the infuriating explosion of content length.
AI responses seem to have very low information density by default, so for me the irritation is threefold—it requires very little mental effort for the sender (i.e., I often read responses that don't feel sufficiently considered); it is often poorly validated by the sender; and it is disrespectful to the reader's time.
Like some of the other commenters, I am also not in a position to change this at work, but I am disheartened by how some of my fellow engineers have taken to putting up low effort PRs with errors, as well as unreasonably long design docs. My entire company seems to have bought into the whole "AI-first company" thing without actually validating if the outputs are worth the squeeze. Maybe sometimes they are, but I get a sense that the path of least resistance tends toward accepting lower quality code and communication.
As a non-native English speaker, I’ve often struggled to communicate nuance or subtlety in writing—especially when addressing non-technical audiences. LLMs have been a game-changer for me. They’ve significantly improved my writing and made it much easier to articulate my thoughts clearly.
Sure, it can be frustrating that they don’t adapt to a user’s personal style. But for those of us who haven’t fully developed that stylistic sense (which is common among non-native speakers), it’s still a huge leap forward.
Obviously the answer is to send them back an AI generated response
Applications could automatically insert subtle icons next to messages that are automatically generated. It wouldn't work for copy-and-pasted text but it's a start.
Maybe even a post-processing step that replaces all spaces with a suitable Unicode character to act as a watermark. There are more sophisticated ways to watermark text that aren't as easily thwarted with a search/replace, but it might work for some low-risk applications.
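As a rough sketch of that space-substitution idea (purely illustrative; the choice of U+2004 and the function names are assumptions, not an existing tool):

```python
# Minimal sketch of the space-substitution watermark described above.
# THREE-PER-EM SPACE (U+2004) renders much like a normal space in most fonts
# and survives copy-and-paste, but, as noted, it is just as trivial to strip
# with a search/replace, so it only suits low-risk applications.
WATERMARK_SPACE = "\u2004"

def watermark(text: str) -> str:
    """Replace ordinary spaces with the watermark character."""
    return text.replace(" ", WATERMARK_SPACE)

def is_watermarked(text: str) -> bool:
    """Crude check: does the text contain any watermark characters?"""
    return WATERMARK_SPACE in text

print(is_watermarked(watermark("This reply was produced by an assistant.")))  # True
print(is_watermarked("Typed out by a human."))                                # False
```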
I read and enjoyed Blindsight, and ironically an LLM wouldn't have made the mistake of believing this supports such a kooky position.
Well said in a short, digestible post that’s easily shared with even non-tech folks. Exactly what a good post on etiquette should read like.
Nothing says “I don’t respect you” like giving someone a sequence from a random text generator.
Another one to add to the list:
(Present a solution/output proposal to team)
> Did you ask chatgpt?
Could not agree more!
> "I asked ChatGPT and this is what it said: <...>". ... > "I vibe-coded this pull request in just 15 minutes. Please review"
This is even the nice case: here you've at least been given an explicit warning. Usually there is none.
When you get this type of request you are pretty much debugging AI code on the spot without any additional context.
You can just see whether text or code is AI generated. No tools needed.
> I vibe-coded this pull request in just 15 minutes. Please review
"I hand-typed this close message in just 15 seconds. Please refrain."
pasting llm output in group chat is a war crime
I’m building a tool to help filter these kinds of low value articles out (especially the flow of constant AI negativity, but it will work for many topics). If you’re interested email me at linotype@fastmail.com and I’ll send you a link when it’s ready.
This is an interesting take.
Cause all an LLM is, is a reflection of its input.
Garbage in garbage out.
If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what is said by influencers to this post...
Say less. Do more.
That's not really true at all, at least at the end user level.
You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.
You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.
And the current generation of company-provided LLMs are VERY highly trained to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out whether it is.
Previously there was some requirement for novel synthesis. You at least had to string your thoughts together into some kind of argument.
Now that's no longer the case and there are lazy or unthoughtful people that simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.
An LLM's output being a reflection of its input would imply determinism, which is the opposite of their value prop. "Garbage in, garbage out" is an adage born from traditional data pipelines. "Anything in, generic slop, possibly garbage, out" is the new status quo.
I have one of those coworkers. I tell him I have a problem with a missing BIOS setting. He comes back 2 minutes later: "Yeah, I asked an LLM and it said to go into [submenu that doesn't exist] and uncheck [setting I'm trying to find]."
What's even more infuriating is that he won't take "I've checked and that submenu doesn't exist" for an answer and insists I check again. Had to step away for a fag a few times for fear of putting his face through the desk.
I find it as yet another way to externalize costs: I spend 0 time thinking, I dump AI slop on you and ask you to review it or refute me with the nonsense that I just sent you.
Last time someone did this to me I sent them a few other answers by the same LLM to the same prompt, all different, with no commentary.
There's another level altogether when the other party pretends it's not AI-generated at all.
I can buy into this. I always thought it was rude or at least insulting when Hollywood robotically creates slop movies. As in, of course they can do it, but damn is it insulting. There really are two types of people in the world:
a) Quantity > Quality if it prints $$$.
or
b) Quality > Quantity if it feels like the right thing to do.
Witnessing type A at scale is a first-class ticket into misanthropy.
Less-than-perfect writing is a signal that you're human. At least for now.
The problem here is that I’ve been accused multiple times of using LLMs to write slop when it was genuinely written by myself.
So I apologized and began actually using LLMs while making sure the prompt included style guides and rules to avoid the tell tale signs of AI. Then some of these geniuses thanked me for being more genuine in my response.
A lot of this stuff is delusional. You only find it rude because you’re aware it’s written by AI. It’s the awareness itself that triggers it. In reality you can’t tell the difference.
This post, for example.
Shades of Cyrano de Bergerac and pig-butchering scams. Which lead me to read about Milgram's "cyranoids": https://en.wikipedia.org/wiki/Cyranoid
And then "echoborgs": https://en.wikipedia.org/wiki/Echoborg
On the whole it's considered bad to mislead people. If my love letter to you is in fact a pre-written form, "my darling [insert name here]", and you suspect, but your suspicion is just baseless paranoia and a lucky guess, I suppose you're being delusional and I'm not being rude. But I'm still doing something wrong. Even if you don't suspect, and I call off the scam, I was still messing with you.
But the definition of being "misleading" is tricky, because we have personas and need them in order to communicate, which in any context at all is a kind of honest, sincere play-acting.
I did too. The AWS “house style” of writing (I'm a former ProServe employee) could come across as AI slop even before LLMs. Look at some blog posts on AWS even pre-2021.
I too use an LLM to help me get rid of generic filler and I do have my own style of technical writing and editing. You would never know I use an LLM.
"Your slop is showing ..."
LLMs are very very good at adding words in a way that looks "well written" (to our current mental filters) without adding meaning or value.
I wonder how long it will be before LLM-text trademarks become seen as a sign of bad writing or laziness instead? And then maybe we'll have an arms race of stylistic changes.
---
Completely agree with the author:
Earlier this week I asked Claude to summarize a bunch of code files since I was looking for a bug. It wrote paragraphs and had 3 suggestions. But when I read it, I realized it was mostly super generic and vague. The conditions that would be required to trigger the bug in those ways couldn't actually exist, but it put a lot of words around the ideas. I took longer to notice that they were incorrect suggestions as a result.
I told it "this won't happen those ways [because blah blah blah]" and it gave me the "you are correct!" compliment-dance and tried again. One new suggestion and a claimed reason about how one of its original suggestions might be right. The new suggestion seemed promising, but I wasn't entirely convinced. Tried again. It went back to the first three suggestions - the "here's why that won't happen" was still in the context window, but it hit some limit of its model. Like it was trying to reconcile being reinforcement-learning'd into "generate something that looks like a helpful answer" with "here is information in the context window saying the text I want to generate is wrong" and failing. We got into a loop.
It was a rare bug so we'll see if the useful-seeming suggestion was right or not but I don't know yet. Added some logging around it and some other stuff too.
The counterfactuals are hard to evaluate:
* would I have identified that potential change quicker without asking it? Or at all?
* would I have identified something else that it didn't point out?
* what if I hadn't noticed the problems with some other suggestions and spent a bunch of time chasing them?
The words:information ratio was a big problem in spotting the issues.
So was the "text completion" aspect of "if you're asking about a problem here, there must be a solution I can offer" RL-seeming aspect of its generated results. It didn't seem to be truly evaluating the code then deciding so much as saying "yes, I will definitely tell you there are things we can change, here are some that seem plausible."
Imagine if my coworker had asked me the question and I'd just copy-pasted Claude's first crap attempt to them in response? Rude as hell.
One of the largest issues I've experienced is LLMs being too agreeable.
I don't want my theories parroted back to me on why something went wrong. I want to have ideas challenged in a way that forces me to think and hopefully lead me to a new perspective that I otherwise would have missed.
Perhaps a large portion of people do enjoy the agreeableness, but this becomes a problem not only because of the larger societal issues that stem from this echo-chamber-like environment, but also because companies training these models may interpret agreeableness as somehow better and something that should be optimized for.
That’s simple: after it tries to be helpful and agreeable, I just ask for a “devil's advocate” response. I have a much longer prompt I sometimes use that involves it being a “sparring partner”.
And I go back and forth sometimes between correcting its devils advocate responses and “steel man” responses.
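For illustration only, a minimal sketch of what such a standing instruction might look like (the wording, model name, and example question are made up; this is not the commenter's actual prompt):

```python
# Hypothetical "sparring partner" system prompt, sent via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()

SPARRING_PARTNER = (
    "Act as a sparring partner, not an assistant. Do not agree with me by default. "
    "For every claim I make, give the strongest counter-argument you can, "
    "then a steel-manned version of my position, then your own verdict."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SPARRING_PARTNER},
        {"role": "user", "content": "I think the outage was caused by the cache layer."},
    ],
)
print(response.choices[0].message.content)
```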
>> "I asked ChatGPT and this is what it said: <...>".
> Whoa, let me stop you right here buddy, what you're doing here is extremely, horribly rude.
How is it any different from "I read book <X> and it said that..."? Or "Book <X> has the following quote about that:"?
I definitely want to know where people are getting their info. It helps me understand how trustworthy it might be. It's not rude, it's providing proper context.
A book is a credentialed source that can be referenced. A book is also something that not everyone may have on hand, so a pointer can be appreciated. LLMs are not that. If I wanted to know what they said I'd ask them. I'm asking you/the team to understand what THEY think. Unfortunately it's becoming increasingly clear that certain people and coworkers don't actually think at all very often - particularly the ones that just take any question and go throw it off to the planet burning machine.
Because published books, depending on genre, have earned a presumption of being based on reality. And it's easy to reproduce a book lookup, and see if they link to sources. I might have experience with that book and know of its connection with reality.
ChatGPT and similar have not earned a presumption of reality for me, and the same question may get many different answers, and afaik, even if you ask it for sources, they're not necessarily real either.
IMHO, it's rude to use ChatGPT and share it with me as if it's informative; it disrespects my search for truth. It's better that you mention it, so I can disregard the whole thing.
To me, it's different because having read a book, remembered it, and pulled out the quote means you spent time on it. Pasting a response from ChatGPT means you didn't even bother to read it, understand the output, think about whether it makes sense, and resynthesize it.
It mostly means you don't respect the other person's time and it's making them do the vetting. And that's the rude part.
I assume a book is correct, or I at least assume the author thought it was correct, when it comes to non-ideological topics.
But you can’t assume positive intent or any intent from an LLM.
I always test the code, review it for corner cases, remove unnecessary comments, etc just like I would a junior dev.
For facts, I ask it to verify whatever it says against web sources. I might then use it to summarize them. But even then I have my own writing style I steer it toward, and then I edit it.
I couldn't disagree more. It's like someone going to Wikipedia to helpfully copy and paste a summary of an issue. Fast and with a good enough level of accuracy.
Generally the AI summaries I see are more topical and accurate than the many other comments in the thread.
Really!?
[0] https://i.imgur.com/ly5yk9h.png
You shouldn't compare against perfection, but against reality. ChatGPT o3 has been proven to outperform even experts on knowledge tasks quite a bit.
In general it raises the mean accuracy and info of a given thread.
It's like self-driving cars.
They are mostly posturing.
I don't see any problem sharing a human-reviewed LLM output.
(I also figure that human review may not be that necessary in a few years.)
But it's the human review that makes it not rude; not bothering to review means you're wasting the other person's time. If they wanted a chatbot response they could have gone to the LLM directly.
It's like pointing to a lmgtfy link. That's _intentionally_ rude, in that it's normally used when the question isn't worth the thought. That's what pasting a chatbot response says.
This I agree with as well 100%.
Agreed.
The author was thinking "boring and uninteresting" but settled on the word "rude." No, it's not rude. Emailing your co-workers provocative political memes or telling someone to die in a fire is rude. Using ChatGPT to write and being obvious about it marks you as an uninteresting person who may not know what they are talking about.
On the other hand, emailing your prompt and the result you got can be instructive to others learning how to use LLMs (aren't we all?). We may learn effective prompt techniques or decide to switch to that LLM because of the quality of the answer.
I disagree. The most obvious message this telegraphs is "I don't respect you or your argument enough to parse it and articulate a response, why don't you argue with a machine instead". That's rude.
There is an alternative interpretation - "the LLM put it so much better than I ever could, so I copied and pasted that" - but precisely because of the ambiguity, you don't want to be sneaky about it. If you want me to have a look at what the LLM said, make it clear.
A meta-consideration here is that there is just an asymmetry of effort when I'm trying to formulate arguments "manually" and you're using an LLM to debate them. On some level, it might be fair game. On another, it's pretty short-sighted: the end game is that we both use LLMs that endlessly debate each other while drifting off into the absurd.
Point taken. I wouldn't want to argue with someone's LLM. I guess I'm an outlier in that I would never post LLM output as my own. I write, then I sometimes have an LLM check it because it points out where people might be confused or misinterpret certain phrases.
Edit: I'm 67 so ChatGPT is especially helpful in pointing out where my possible unconscious dinosaur attitudes may be offensive to Millennials and Gen Z.
It's rude like calling someone on the phone is rude or SCREAMING IN ALL CAPS is rude. It's a new social norm that the author is pointing out.
> aren't we all?
No, in fact I disabled my TabNine LLM until I can either train my own similar model and run everything locally, or not at all.
Furthermore the whole selling point has been that anyone can use them _without needing to learn anything_.
It’s rude and disrespectful, as well as boring.
> boring and uninteresting
Subjecting people to such slop is rude. All the "I asked the chatbot and it said..." comments are rude because they are excessively boring and uninteresting. But it gets even worse than boring and uninteresting when someone presents chatbot text as something they wrote themselves, which is a form of lying / fraud.