All of these tools that are not controlled by the user, trained on datasets they do not own or understand, will inevitably be subject to manipulation. I do not necessarily believe that Canva went in and specifically trained their AI models to do this, but that's almost worse because they become the face of what somebody else has decided their model should be doing.
Anybody using AI tools should be extremely cautious about what is being produced.
You can see it a lot if you ask the different AI models anything remotely political... in some places you can definitely see the hand-editing/overrides as well.
It's hard to get around these kinds of issues, which definitely leads me to avoid them for non-technical questions.
> All of these tools ... will inevitably be subject to manipulation.
I have often wondered about the legality of such manipulation. As AI becomes used for increasingly important things, it becomes increasingly valuable to make a system serve the needs of someone other than its owner.
Yes, these models apply their knowledge non-deterministically. We need to be aware of and ready to handle their 'behaviours', but that doesn't mean they are not useful - I feel like anti-AI advocates are rushing to find issues.
It reminds me of the early internet days, when everyone made a big deal about the anonymity of internet forums and safety... sure, it is an issue.
We have to stop acting like these things "think"; it leads to really weird misinterpretations of the output as "meaning" things.
For example, they will occasionally replace "colour" with "color". Why? Because both occur in the training data in the "same role" but "color" is, apparently, more common[1]. You can also trick them into replacing things like "sardines" with "anchovies" (on pizza) and "head of lettuce" with "cabbage" in the context of rowboats.
They are lossy text compressing parrots and we are all suffering from a massive madness-of-crowds scale Eliza Effect.
[1] Yep. https://books.google.com/ngrams/graph?content=color%2C+colou...
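To make the "more common wins" mechanism concrete, here is a toy sketch - not any real model's decoder, and the counts are made up for illustration: under greedy decoding, whichever spelling was more frequent in the training data always fills the slot.

    # Made-up counts standing in for how often each spelling
    # filled this slot in the training data.
    next_token_counts = {"color": 920, "colour": 310}

    def greedy_pick(counts):
        # Greedy decoding: always emit the single most probable candidate.
        total = sum(counts.values())
        probs = {tok: c / total for tok, c in counts.items()}
        return max(probs, key=probs.get)

    print(greedy_pick(next_token_counts))  # always prints "color"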
This feels very different, because there is no powerful political force trying to squelch discussion of colour or sardines. But there are lots of powerful folks trying to avoid discussions about Gaza or Palestine and related things. It's to their advantage to have tools hide those words.
When a company packages this tool up and makes it part of their product, they are taking on some of that responsibility. The end user isn't supposed to need to know what an LLM is or how it works; that's what they're paying Canva for.
There are trillions of dollars riding on the claim that they do in fact think, and a bunch of people here have their lottery tickets tied up in that, so good luck with that.
Don’t worry, goalpost shifting will ensure that no matter how useful LLMs get, there will always be a large contingent of people who insist that anything non-human is not thinking, just sparkling cognition.
LLMs are not, and will never be, thinking, though, no matter how good they get. You could potentially argue that there is some level of cognition during the training phases (as long as that isn't being outsourced to humans, anyway), but generation of output is stochastic selection of the most common (or most highly ranked, if tuned) following patterns. They cannot learn things outside of training, nor do they actually "know" things. To use the parrot example from above, a parrot doesn't "know" what the words it's been taught to mimic are, nor does an LLM "know" what the concept of love is; it's just been trained to regurgitate the words that humans use to describe such a thing. This isn't a criticism of LLMs - that's what they're supposed to do - but it's certainly not cognition.
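As a rough sketch of what "stochastic selection of highly ranked following patterns" means in practice (the scores below are invented, not taken from any real model): sample the next token from a softmax over the model's scores, with temperature controlling how close that gets to a plain argmax.

    import math, random

    # Hypothetical next-token scores (logits), invented for illustration.
    scores = {"color": 4.1, "colour": 3.0, "hue": 1.2}

    def sample_next(scores, temperature=1.0):
        # Softmax with temperature: lower T concentrates mass on the top token.
        exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
        total = sum(exps.values())
        r = random.random()
        cum = 0.0
        for tok, e in exps.items():
            cum += e / total
            if r <= cum:
                return tok
        return tok  # guard against floating-point rounding

    print(sample_next(scores, temperature=0.7))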
They factorize the distribution they are trained on, which is essentially generalization.
https://arxiv.org/abs/2602.02385
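For reference, "factorize the distribution" presumably refers to the standard autoregressive chain-rule decomposition these models are trained to approximate:

    p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})

i.e. the model fits the conditional next-token distributions; whether fitting those conditionals amounts to "knowing" or generalizing is exactly the point under dispute in this thread.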