I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out into the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.
Was anyone else confused by this title?
I thought it was saying "a letter to those who fired tech writers because they were caught using AI," not "a letter to those who fired tech writers to replace them with AI."
The whole article felt imprecise in its language. To be honest, it made me feel LESS confident in human writers, not more.
I was having flashbacks to all of the confusing docs I've encountered over the years, tightly controlled by teams of bad writers promoted from random positions within the company, or coming from outside but having a poor understanding of our tech or how to write well.
I'm writing this as someone who majored in English Lit and CS, taught writing to PhD candidates for several years, and maintains most of my own company's documentation.
AI works well for one kind of documentation.
The kind of documentation no one reads, that is just there to please some manager or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually, they are named something like QQF-FFT-44388-IssueD.doc, and they are completely outdated with regard to the thing they document despite having seen several revisions, as evidenced by the inconsistent style.
Common features are:
- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many
- References to documents you don't have access to
- UML diagrams, not matching the code of course
- Signatures by people who left the project long ago and are nowhere to be seen
- A bunch of screenshots, all with different UIs taken at different stages of development, that would be of great value to archeologists
- Wildly inconsistent formatting: some people realize that Word has styles and can generate a table of contents, others don't, and few care
Of course, no one reads them, besides maybe a depressed QA manager.
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
This was at #1 on the front page like an hour ago, and now after almost 100 new comments it’s off the front page at #40. What happened?
And here I am in 2026, and one of my goals for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion that is what we will be losing from people: the upskilling will be lost for sure, and that human upskilling is the real loss.
I like the post, but we can learn from insurance companies.
They have AI finding reasons to reject totally valid claims.
They are arguing in court that this is a software bug and that they should not be liable.
That will be the standard excuse. I hope it does not work.
None of the ten or so staff tech writers I have worked closely with over the years have honestly been great. This has been disappointing.
Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.
The failure mode isn't just hallucinations; it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand.
A somewhat related anecdote:
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context can be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (rather than the description of this fact) into the subtle differences in attention and emphasis we use in written text to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't make a decision based on it.
It can, even more so now, produce a very close simulation of this, because the relative importance of things will have been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say that in one of your positions you improved a model that detected malignancy in a certain kind of tumor image, improving its false negative rate to something like 0.001%, and that in the same position you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant shoelace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So you end up with bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.
Good points.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there's the possibility that AI could bring back editing.
I have not fired a technical writer, but writing documentation that understands and maintains users' focus is hard, even with an LLM. I am trying to write documentation for my startup, and it is harder than I expected, even with an LLM.
Kudos to all the technical writers who made my job as a software engineer easier.
But you might not need 5 tech writers anymore. Just 1 who controls an LLM.
While I agree with the article, the reduction in the number of technical writers, due to the belief that their absence can be compensated for by AI, is just the most recent step in a continuous degradation of technical documentation over the last three decades.
During the 1990s I was still naive enough to believe that the great improvements in technology, i.e. the widespread availability of powerful word processors and of the Internet for extremely cheap distribution, would lead to an improvement in the quality of technical documentation and to easy access to it for everybody.
The reverse has happened: the quality of technical documentation has become worse and worse, with very rare exceptions, and access to much of what has remained has become very restricted, either by requiring NDAs or by charging very high prices (e.g. big annual membership fees for some industry standards organizations).
A likely explanation for this decline is the reduction in the number of professional technical writers.
It is very obvious that the current management of most big companies does not understand at all the value of competent technical writers and of good product documentation, not only for their customers and potential customers, but also for their internal R&D and customer support teams.
I have worked for several decades at many companies, very big and very small, on several continents, but unfortunately only at one of them was the importance of technical documentation well understood by management, so the hardware and software developers had an adequate amount of time for writing documentation planned into their product development schedules. Despite the fact that the project schedules at that company appeared to allocate much more time for "non-productive tasks" like documentation than at other places, in reality it was there that R&D projects were completed the fastest and with the smallest overruns of the initially estimated completion time. One important factor was that every developer understood very well what had to be done in the future, what had already been done, and why.
Nice read after the earlier post saying fire all your tech writers. Good post.
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
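A minimal sketch of that failure mode, with entirely made-up names: the code below reads its real endpoint from an environment variable, so anything that only reads the repo, whether an LLM or a new hire, would document the localhost default and miss the actual production behavior.

    import os

    # Hypothetical client code. The default below is what anyone (or any
    # LLM) reading only the repo would see and document; the value actually
    # used in production is set outside the code, in the deployment
    # environment, and never appears in version control.
    API_BASE = os.environ.get("PAYMENTS_API_BASE", "http://localhost:8080")
    TIMEOUT_S = float(os.environ.get("PAYMENTS_TIMEOUT_S", "5"))

    def charge(amount_cents: int) -> str:
        # Pretend call; which endpoint this hits depends entirely on
        # configuration that lives outside this file.
        return f"POST {API_BASE}/charge amount={amount_cents} timeout={TIMEOUT_S}s"

    if __name__ == "__main__":
        print(charge(1999))  # "works on my machine": prints localhost here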
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
How is this for an ordering:
Good human written docs > AI written docs > no docs > bad human written docs
A lot of this applies to programming as well. And pretty much everything people are using GenAI for.
If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.
I will share my experience; hopefully it answers some questions for tech writers.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers, and they created nice documentation pages.
The cycle was (reasonably takes a week, depending on tech writer workload):
1. prepare context
2. create a ticket for the tech writers, wait until they respond
3. discuss messaging over a call
4. a couple of days later I get the first draft
5. iterate on the draft, then finally publish it
Today it's different:
1. I prepare all the context and a style guide, then feed them into an LLM
1.1. the context is extracted directly from code by coding agents
2. I proofread it and in 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer-consumable content
3. Done. Less than 20 minutes.
Tech writers were doing an amazing job of course, but I can get 90-95% of the quality in 1% of the time spent on that work.
I'm on the engineering side. We are in the same boat.
Writers become more productive = fewer writers needed. Not zero, but fewer.
That's the current step. Now, if Cursor's promise of completely automating multi-week system work is fulfilled, all the internal docs become AI-driven.
So the only exception is external docs. But… if all software is written by machines, there are no readers.
This is obviously a vector, not a current state :( Very dark and gloomy.
"Productivity gains are real when you understand that augmentation is better than replacing humans..." Isn't this where the job losses happen? For example, previously you needed 5 tech writers but now you only need 4 to do the same work. Hopefully it just means that the 5th person finds more work to do, but it isn't clear to me that Jevons paradox kicks in for all cases.
I'm not kidding when I say the tone feels AI-generated.
It's obviously not AI-generated, but I'm speaking more to the tonality of the latest GPT. It's now extremely hard to tell the difference.
I agree with the core concern, but I think the right model is smaller, not zero. One or two strong technical writers using AI as a leverage tool can easily outperform a large writing team or pure AI output. The value is still in judgment, context, and asking the right questions. AI just accelerates the mechanics.
First, we've fallen into a nomenclature trap, as so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI," since the appropriate acronym has already been used.
But when we use the "AI" acronym, our brains still register the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call it "artificial intelligence," and we're back where we started.
However, the writing is on the wall: AI will completely replace technical writers.
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.
It’s not so much that AI is replacing “tech writers”; with all due respect to the individuals in those roles, it was never a good title to identify as.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
I think using AI for tech documentation is great for people who don't really give a shit about their tech documentation. If you were going to half-ass it anyway, you can save a lot of money half-assing it with AI.
I remember the days when every large concern employed technical writers and didn't expect us programmers and engineers to write for the end users. But that stopped decades ago in most places at least as far as in house applications are concerned, long before AI could be used as an excuse for firing technical writers.
Documentation needs to be tested.
Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.
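For the code samples at least, part of that brain-off pass can be automated. A sketch using Python's standard-library doctest (the slugify() helper here is hypothetical): the examples embedded in the docs are executed exactly as written, and `python -m doctest quickstart.py` exits nonzero wherever the docs and the code disagree, so CI catches the drift.

    """Quickstart for a hypothetical slugify() helper.

    The usage examples below are executable: running
    `python -m doctest quickstart.py` follows them exactly as written
    and reports every place the docs disagree with the code.

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  multiple   spaces  ")
    'multiple-spaces'
    """

    import re

    def slugify(text: str) -> str:
        # Lowercase, drop punctuation, join the remaining words with hyphens.
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()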
>> liability doesn’t vanish just because AI wrote it
I think this is going to be a defining theme this year.
All valid points but I fear our brave new world cares not
Meh. A bit too touchy-feely for my taste, and not much in the way of good arguments. Some of the things touched on in the article are either extreme romanticizations of the craft or rather naive takes (docs are product truth? Really?! That hasn't been the case in ages, even for docs for multi-billion-dollar solutions written by highly paid, grass-fed, you-won't-believe-they're-not-humans!)...
The parts about hallucinations and processes are also a bit dated. We're either at, or very close to the point where "agentic" stuff works in a "GAN" kind of way to "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back, that will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (at least for 80% of projects, where current human written docs are horrendous. ymmv. artisan projects not included)
What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.
Hey Claude, summarise this letter for me and write a response.
This was sooo well written (that’s the point I guess)
No! No! I want all the companies to go all in on AI and completely destroy the last of the professional respect given to writers.
Why?
Because the legal catastrophe that will follow will entertain me so very very much.
With every job replaced by AI, the best people will be doing a better job than the AI, and it'll be very frustrating to be replaced by people who can't tell the difference.
But most people aren't that great at their jobs.
The fired writers should get together and start their own publications.
AI can’t generate insights far beyond what it’s trained on.
Their writing will be a different moat.
> So here’s my request for you: Reconsider
Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, as in any profession, most writers are... mediocre. Especially when you hire someone on contract. I have had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.
I can chip in like $20 to pay some "good writer" who "observes, listens and understands" to write documentation on something, and we can compare it with an LLM-made one.
"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Let's compare!
Getting emotional about it won't work. Companies only see results. If replacing your job with AI works, your job will be replaced.
Answer me this: how does a technical writer think that formatted markdown is a clear way of relaying information?
Hopefully they used AI to write this.
https://daniel.feldroy.com/
I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.
I'm currently in the middle of restructuring our website. 95% of the work is being done by codex: content writing, design work, implementation work, etc. It's still a lot of work for me, because I am critical about things like wording/phrasing and about not hallucinating things we don't actually do. But that work is editorial, not writing or programming, and codex is doing a pretty great job. Having a static website with a site generator means I can make lots of changes quickly via agentic coding.
My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow; the AI tools can work with that a lot better. You can't afford to be doing things manually that an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting the things that need correcting or perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value.
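As one concrete example of what code-centric buys you, a minimal sketch (the docs/ layout is an assumption, not a standard): docs live as markdown in the repo, and a check like this runs in CI on every change, so AI-drafted docs are held to the same bar as the code.

    #!/usr/bin/env python3
    # A tiny docs-as-code check: verify that relative links in docs/*.md
    # point at files that actually exist. Runs in CI next to the code,
    # like any other test.
    import re
    import sys
    from pathlib import Path

    LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")  # captures [text](target)

    def broken_links(docs_dir: Path) -> list[str]:
        problems = []
        for md in sorted(docs_dir.rglob("*.md")):
            for target in LINK_RE.findall(md.read_text(encoding="utf-8")):
                if target.startswith(("http://", "https://", "mailto:")):
                    continue  # external links need a separate, networked check
                if not (md.parent / target).exists():
                    problems.append(f"{md}: broken link -> {target}")
        return problems

    if __name__ == "__main__":
        issues = broken_links(Path("docs"))
        print("\n".join(issues) or "all internal links resolve")
        sys.exit(1 if issues else 0)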
Another bit of insight is that a lot of technical documentation now has AIs as the main consumer. A friend of mine who runs a small SaaS has been complaining that nobody actually reads his documentation (which is pretty decent); instead, they rely on LLMs to read it for them. The more documentation you have, the less of it people will read. Or any of it.
But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.