Instead of asking me, a senior developer, how to solve a problem so I could quickly explain the best approach, they go to ChatGPT, ask it, and then come to me to audit whether ChatGPT's response was valuable or not. Often it's not, and every time it's a big time waster, since the right approach could have been communicated easily without my having to vet a long generated message full of flaws, or places where the details simply don't make sense.
How do you mitigate this situation?
Specifically for ChatGPT: lots and lots of real estate investors use it to judge markets, investments, and strategies, and landlords use it to create lease agreements. The information they want exists, but it used to cost money or time; now they get it for less, with a corresponding reduction in accuracy.
My better half uses various AI services (paid, mostly) either to stub out documents or presentations or to distill and summarize gobs of information. They're fully aware of the shortcomings, and we've seen some absolutely hilarious misfires, but they're generally finding it to be a useful time saver.
To any Googlers/Xooglers in the crowd: there's an awesome opportunity to take the improvements made to Moma search over the past decade vis-à-vis security authorization / authentication / access controls and combine them with "AI" analysis of enterprise information and documentation.
I've only personally seen it used to aggregate and summarize datasets: Google search results, Amazon questions about a product. It was decent for Google search, but now they have cranked up their anti-bot detection, and since I am a bot to them, I have blocked all their domains.
For me personally, the big-data LLMs on most sites are missing:
- debugging: showing me where it found the data and how it arrived at an answer, including which sites it scraped
- the ability to say "I don't know," "no conclusive results," or "too many conflicting results"
- a displayed confidence level: "45% chance I am high or hallucinating, a.k.a. guessing"
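The wishlist above amounts to a response format that carries provenance and confidence, and refuses rather than guesses when confidence is low. A minimal sketch of what that could look like, assuming a model-reported confidence score is available (all names, fields, and the 0.5 threshold here are hypothetical illustrations, not any vendor's actual API):

```python
from dataclasses import dataclass

@dataclass
class LLMAnswer:
    text: str             # the generated answer
    sources: list         # URLs/documents the answer was grounded in ("what it scraped")
    confidence: float     # model-reported probability the answer is correct, 0.0-1.0

def present(ans: LLMAnswer, threshold: float = 0.5) -> str:
    # Refuse rather than guess: no citations or low confidence means no answer.
    if ans.confidence < threshold or not ans.sources:
        return "No conclusive results."
    cites = ", ".join(ans.sources)
    return f"{ans.text} (confidence {ans.confidence:.0%}; sources: {cites})"
```

So an ungrounded guess like `present(LLMAnswer("...", [], 0.45))` would render as "No conclusive results." instead of a confident-sounding hallucination, while a well-sourced answer would display its confidence and citations inline.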
Most of the people I know who are not in tech avoid it. Whether because it was built on other people's IP, the energy burn to run it, or just the 'ick' factor of AI-generated content, especially with the Ghibli copying coming up recently... my social circle is not interested.
Tried generating a 6-month plan to upskill in VLSI programming. It was pretty useful and helped lay out a comprehensive roadmap using different sources and projects to try. Also tried generating a 6-month plan to upskill my guitar playing; that wasn't as useful.
It's a huge search engine, or basically another "Google" but with more insights and ideas. People I know usually go to GPT to clarify ideas, thoughts, or help with more mundane things they want done quickly.
I see it used daily by many to write spam on LinkedIn describing software dev in mellow, inoffensive terms.
I saw one post on LI with an image. It ended with a ChatGPT-style question like, "Would you like me to explain X in more detail?"
Nobody noticed. It seems they only see the image and read the first paragraph.
Most people who don't understand technology use it as a search engine.