Thanks for posting this, it's a very interesting case study. Considering that the thing they seem to excel at is this type of writing, it's interesting that they still seem to be only ok at it if you're trying to produce a serious, genuinely useful output. This fits with my experience, though yours is much more extensive and thorough. In particular I fully concur with the voice/tone, and the need to verify everything (always the case anyway), and "Never abdicate your role as the human mind in charge" -- sometimes the suggestions it makes are just not that good.
Question is, do you think this process was faster using the various LLMs? Could two (or N) sufficiently motivated people produce the same thing in the same time? (and if so, what is N). I'm wondering if the caveats and limitations end up costing as much time as they save. Maybe you're 2x faster, if so that would be significant and good to know.
In the abstract, this is similar to my experience with AI-produced code. Except for very simple, contained code, you ultimately need to read and understand it well enough to make sure that it's doing all the things you want and not producing bugs. I'm not sure this saves me much time.
Nice. I leverage the strengths of AI in a way that affirms the human element in the collaboration. AI as it exists in LLMs is a powerful source of potentially meaningful language but at this point LLMs don't have a consistent conscious mind that exists over time like humans do. So it's more like summoning a djinn to perform some task and then it disappears back into the ether. We of course can interweave these disparate tasks into a meaningful structure and it sounds like you have some good strategies for how to do this.
I have found that using an LLM to critique your writing is a helpful way of getting free, generic-but-specific feedback. I find this route more interesting than the copy-pasta AI-voiced stuff. Suggesting that the AI embody a specific type of character, such as a pirate, can make the answers more interesting than just finding the median answer, and adds some flavor to the white bread.
One of the things I found helpful for getting out of the specific/formulaic feedback was asking the LLM to ask me questions. At one point I asked a fresh LLM to read the book and then ask me questions. It showed me where there were narrative gaps and confusing elements that a reader would run into, but didn't rely on a specific "answer" from the LLM itself.
I also had a bunch of personal stories interwoven in, and it told me I was being "indulgent," which was harsh but ultimately accurate.
That's a great approach. I find LLMs work really well as Socratic sounding boards and can lead you as the writer to explore avenues you might have otherwise not even noticed.
Given that humans are 'wired for story', perhaps you should consider indulging. Those stories could be what makes the book stand out, after all.