I applaud topics like this that get at the banality and dehumanization bound up in the promises of an AI future. To me, if AI fulfills even some of its promises, it puts us in a rather high-stakes standoff between the people who make up society and those who govern the productivity machines.
My first instinct is to say that when people lose certain trusts society grants, society historically tends to come down hard. A common idea in political discourse today is that no hope for a brighter future means a lot of young people looking to trash the system. So, y'know, treat people with kindness, respect, and dignity, lest the opposite be visited upon you.
Don’t underestimate the anger a stolen future creates.
Easy to do, and not a bad idea. You don't need to emit structured output and accept structured input: in the end, an LLM works with any readable text. A tool is just a way for an LLM to ask a question of a certain type and wait for the answer. For example, I wonder whether certain flows could be improved by an "ask_clarification_question" tool that simply displays the question in the chat and returns the answer; see the sketch below.
I understand this is not exactly in the spirit of your question but, well, a tool is just this.
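To make that concrete, here's a minimal, framework-agnostic sketch. The tool name and handler are my own illustration (the schema just follows the common JSON-schema tool-definition shape), not any particular vendor's API:

    # Hypothetical tool definition in the common JSON-schema shape.
    ASK_CLARIFICATION_TOOL = {
        "name": "ask_clarification_question",
        "description": "Ask the user a clarifying question and wait for the reply.",
        "parameters": {
            "type": "object",
            "properties": {
                "question": {
                    "type": "string",
                    "description": "The question to show the user, in plain text.",
                },
            },
            "required": ["question"],
        },
    }

    def handle_tool_call(name: str, arguments: dict) -> str:
        # The "tool result" handed back to the model is just whatever
        # readable text the human typed -- no structure required.
        if name == "ask_clarification_question":
            print(f"\n[assistant asks] {arguments['question']}")
            return input("> ")  # blocks until the user answers
        raise ValueError(f"unknown tool: {name}")

The loop pauses on input() exactly as it would on any other slow tool, and the model never sees anything but text.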
Do you work for Peter Thiel and are you tasked with validating his wet dream?
This seems like the inevitable outcome of our current trajectory for a significant portion of society. All the blather about an AI utopia and a workless UBI system, funded by the boundless productivity gains of AI-everything, simply has no historical basis. A realistic reading of history points more toward this outcome.
Coincidentally, I've been sketching out a TV sitcom that tangentially touches on this idea of humans as real-time inputs an AI agent calls on, except the humans are a collective of characters never actually portrayed on screen.
We will never know how many brain-chipped humans are already captive in his catacombs. Perhaps the brain-in-a-jar wetware is further along than we would like to know.
Now that I think about it... I still do think I'm sitting in my chair. Hmm...
Check out Mrs. Davis (2023)
Now the computer decides what it needs, and we bid our time lower and lower to accomplish the task... :/
Maybe I'll write a bot that answers Fiverr requests at the lowest price possible. We can all race to the bottom.
Part of my role is designing assessments for online courses and technical certifications. This is exactly what we want to build into our assessment development process: the LLM monitors the training content and creates draft questions and exercises that are vetted by humans. It's maybe a classic "human in the middle" design for content development, but the more we can put humans in at the right points and use LLMs for the other parts, the more robust and up-to-date a training and assessment system we get. Roughly, the loop looks like the sketch below.
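A minimal sketch of that loop, where draft_questions() stands in for whatever LLM call you use - all the names are illustrative, not a real assessment-platform API:

    from dataclasses import dataclass

    @dataclass
    class DraftQuestion:
        source_section: str      # training content the draft came from
        text: str                # the generated question
        status: str = "pending"  # pending -> approved / rejected
        reviewer_notes: str = ""

    def draft_questions(section_text: str) -> list[DraftQuestion]:
        # Placeholder: call your LLM here and parse its output into drafts.
        return [DraftQuestion(source_section=section_text[:40], text="...")]

    def human_review(draft: DraftQuestion, approve: bool, notes: str = "") -> None:
        draft.status = "approved" if approve else "rejected"
        draft.reviewer_notes = notes

    drafts = draft_questions("Section 3: TLS handshake basics ...")
    for d in drafts:
        human_review(d, approve=True, notes="accurate, keep")

    # Only human-approved drafts ever reach the live question bank.
    question_bank = [d for d in drafts if d.status == "approved"]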
When LLMs become better than humans at the following:
1. Knowing what you don't know
2. Knowing who is likely to know
3. Asking the question in a way that the other human, with their limited attention and context window, can give a meaningful answer
It's probably already done, but in some third-world country and hidden behind NDAs.
Amazon Mechanical Turk?
My thought as well - the infra already exists through MTurk, as do the ethical and societal questions. You can already pay people pennies per task to do an arbitrary thing, chain that into some kind of consensus if you want to make it harder for individuals to fudge the results, offer more to get your tasks picked up faster, etc. The consensus part is sketched below.
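A minimal majority-vote sketch, assuming you assign the same task to several workers (the 0.6 quorum is arbitrary, and none of this is MTurk's actual API - the real one lives in boto3's "mturk" client):

    from collections import Counter

    def consensus(answers: list[str], quorum: float = 0.6) -> str | None:
        # Accept the majority answer only if it clears the quorum;
        # otherwise re-post the task or escalate to a trusted reviewer.
        if not answers:
            return None
        answer, votes = Counter(answers).most_common(1)[0]
        return answer if votes / len(answers) >= quorum else None

    print(consensus(["cat", "cat", "dog"]))  # cat  (2/3 clears 0.6)
    print(consensus(["cat", "dog"]))         # None (no clear majority)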
I know nothing about this other than that I thought it was a joke at first, but I think it's the same idea: https://github.com/RapidataAI/human-use
That talks about getting some kind of feedback out of the human for free.
Now we have to find the next level and condition the human to pay to respond to questions.
It seems like an idea bad enough to pay $10 to downvote. Or should that be good enough?
Wouldn't this be better than pretending humans are fully automatable?
There are products that do this; LangChain itself has a tool for it. A from-memory sketch is below.
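If I remember right, the "human as a tool" integration looks roughly like this - treat the import path and class name as assumptions, since the package layout has moved between langchain and langchain_community over time:

    from langchain_community.tools import HumanInputRun

    human = HumanInputRun()  # prompts on stdin by default
    answer = human.run("Which environment should I deploy to?")
    print(answer)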
You can reinvent Scale AI and get YC funding before selling out to one of the FAANGs.
Honestly wouldn't mind more competition in this sector. This one doesn't seem optional for the rest of us in the future and I don't like the idea of Scale AI being in charge.
If the business thinks I'm expensive now, just wait until on-call goes from an optional rotation to a machine-induced hell.
https://en.wikipedia.org/wiki/Manna_(novel)
Exactly this. OP, this is basically where this book goes - AI management that directs humans around as automata.