08-10-2024, 07:31 PM
I believe "AI", as they incorrectly call it, is a snake eating its tail. I'm already reading articles where LLMs are regurgitating content created by other LLMs causing very strange results. I just read an article a few days ago that explained how ChatGPT 4o went completely off the rails when voice recognition was released.
A few reasons why I believe LLMs are doomed:
- It's too damned expensive to operate. The power consumption and raw compute power needed outweigh any profits that might be made.
- Hallucinating will never go away. There's just too much misinformation to digest.
- Copyright holders will eventually win their day in court (see the first entry above).
- Companies with trade secrets will not allow their employees to use it. This includes software companies.
- An LLM helping you do your job is of no use if you are not already proficient at said job. Correcting LLM errors just eats into your productivity.
As Kernelpanic pointed out above, we are really no further along than we were 50 years ago. The difference today is that the compute power at our disposal makes LLMs "appear" to work better than previous attempts at machine learning. Which brings me to my next point: LLMs are not actually learning anything; they are mimicking the input they receive.
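To make the mimicry point concrete, here's a toy sketch of a bigram Markov chain. It has nothing to do with how any real LLM is actually built (the corpus and code are just made up for illustration), but it shows the basic idea: text gets "generated" by replaying statistical patterns from the input, with zero understanding of what any of it means.

```python
# Toy bigram "language model": not how any real LLM works internally,
# just an illustration of mimicking input rather than learning meaning.
import random
from collections import defaultdict

def train(corpus):
    # Count which word follows which -- pure bookkeeping, no "understanding".
    table = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def generate(table, start, length=12):
    # "Generate" text by replaying patterns seen in the training input.
    word, output = start, [start]
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))
# Prints plausible-looking strings like "the cat sat on the rug" --
# pure mimicry of the training text, nothing learned about cats or rugs.
```

Scale that up by a few billion parameters and you get something that looks a lot smarter, but the underlying trick is still pattern replay.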
Eventually, when all the hype and investor money dries up, I'm sure the LLM craze will subside. Hell, Pete's point about Google is a perfect example. Google is doing everything in their power to try and profit from this instead of actually making it work to better their service. It's a snake eating its tail.
So, will hobbyist programmers be around in 10 years? Yes, but only those that don't try to use an LLM to learn a language in the first place. The ones who do won't have a skill set to draw from, just a magic box to wonder about (and to wonder why the damn code never runs correctly).