>There isn’t a single day where I don’t have to deal with software that’s broken but no one cares to fix
Since when does this have anything to do with AI? Commercial/enterprise software has always been this way. If it's not going to cost the company in some measurable way, issues can get ignored for years. This kind of stuff was occurring before the internet existed. It boomed with the massive growth of personal computers. It continues today. GenAI has almost nothing to do with it.
I think the point the author is trying to make is that there are many problems in plain sight we could be spending our efforts on, and instead we are chasing illusory profits by putting so many resources into developing AI features. AI is not the source of the issues, but rather a distraction of great magnitude.
> Commercial/enterprise software has always been this way
All software is this way. The only way something gets fixed is if someone decides it's a priority to fix it over all the other things they could be doing. Plenty of open source projects have tons of issues. In both commercial and open source software, issues don't get fixed because the stack of things to do is larger than the amount of time there is to do them.
It's worth pointing out that the "priority" in both open source and closed isn't just "business priority".
Things that are easy, fun, or "cool" are done before other things, no matter what kind of software it is.
All hardware eventually fails, all software eventually works.
Interesting take.
It would also be interesting to compare the quality between them.
In my experience, software has a much, much bigger probability of ending up eventually working but not solving the problem it was set out to solve in the first place, aka "building the right thing vs building it right". Which I guess is somewhat related to OP's dilemma.
Thought exercise: has any of the money Apple has spent integrating AI features produced as much customer good-will as fixing iOS text entry would? One reason for paying attention to quality is that if you don't, over time it tarnishes your brand and makes it easier for competitors to start cutting into your core business.
Apple's photo search has been an outstanding application of on-device machine learning models for almost a decade at this point.
FaceID has proven a pretty popular tool.
Text entry has been mostly fixed with AI: dictate, transcribe, and clean up with AI works well for many use cases, especially larger texts.
The point is that money that is going into GenAI, or into adding GenAI-related features to software, should be going to fix existing broken software.
Then you missed the point of my post. That money never did. It went back into the hands of the investors, the investors that are now putting money into genAI.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
This is a bit like the question "what if we spent our time developing technology to help people rather than developing weapons for war?"
The answer is that the only reason you were able to get so many people working on the same thing at once was the pressing need at hand (that "need" could be real or merely perceived). Without that, everyone would have their own various ideas about what projects are the best use of their time, and would be progressing in much smaller steps in a bunch of different directions.
To put it another way: instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But those homes wouldn't still be around and remembered millennia later.
> instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But those homes wouldn't still be around and remembered millennia later.
They would have been better off. Those pyramids are epitomes of white elephants.
Consider the income from tourists coming to see the pyramids. People have traveled to Giza for this purpose for millennia, too.
This was not the original intent of the construction, though.
Trickle-down economics does work. It just takes millennia to feel the effects.
> But, those homes wouldn't still be around and remembered millennia later.
Yes, but they'd have homes. Who's to say if a massive monument is better than ten thousand happy families?
> Who's to say if a massive monument is better than ten thousand happy families?
It's not. The pyramids have never been of any use to anyone (except as a tourist attraction).
I'm referring merely to the magnitude of the project, not to whether it was good for mankind.
I wonder about the world where, instead of investing in AI, everyone invested in API.
Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.
Instead we're trying to train systems to move a mouse in a browser and praying it doesn't accidentally send 60 pairs of shoes to a random address in Topeka.
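To make the "fixed rules, limited set of capabilities" idea concrete, here is a minimal sketch; all action names, limits, and addresses are hypothetical, not from any real product:

```python
# Sketch of a capability-limited agent: it can only execute actions from a
# fixed allowlist, with hard limits baked in as plain, auditable rules.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    quantity: int
    address: str

ALLOWED_ACTIONS = {"check_price", "place_order"}   # nothing else is callable
MAX_QUANTITY = 2                                   # fixed rule, not a prompt
APPROVED_ADDRESSES = {"123 Home St, Springfield"}  # no surprises in Topeka

def run_action(action: str, order: Order) -> str:
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not in the allowlist")
    if order.quantity > MAX_QUANTITY:
        raise ValueError(f"quantity {order.quantity} exceeds the fixed limit")
    if order.address not in APPROVED_ADDRESSES:
        raise ValueError(f"address {order.address!r} is not pre-approved")
    return f"{action} ok: {order.quantity} x {order.item} -> {order.address}"

print(run_action("place_order", Order("shoes", 1, "123 Home St, Springfield")))
```

Whatever an LLM upstream might propose, every side effect still has to pass these deterministic checks.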
LLMs offer the single biggest advance in interoperability I've ever seen.
We don't need to figure out the one true perfect design for standardized APIs for a given domain any more.
Instead, we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
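As an illustration, here is a minimal sketch of the kind of glue code an LLM can write from a short doc page plus one example request; the endpoints, auth scheme, and field names are hypothetical:

```python
# Hypothetical glue: pull orders from one service, push them to another.
import requests

SOURCE = "https://api.source.example/v1/orders"
SINK = "https://api.sink.example/v1/shipments"

def sync_orders(token_src: str, token_sink: str) -> int:
    resp = requests.get(SOURCE, headers={"Authorization": f"Bearer {token_src}"})
    resp.raise_for_status()
    shipped = 0
    for order in resp.json()["orders"]:
        # Field-name mapping is exactly the boring part a model is good at.
        payload = {
            "order_ref": order["id"],
            "destination": order["shipping_address"],
            "items": [{"sku": i["sku"], "qty": i["quantity"]} for i in order["items"]],
        }
        out = requests.post(SINK, json=payload,
                            headers={"Authorization": f"Bearer {token_sink}"})
        out.raise_for_status()
        shipped += 1
    return shipped
```

The LLM writes this once, a human reviews it, and from then on it runs deterministically; the model is not in the request path.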
The problem with LLMs as interoperability is that they work less than 100% of the time. Yes, they help, but the point of the article is: what if we spent $100 billion on APIs? We absolutely could build something way more interoperable, and 100% accurate.
I think about code generation in this space a lot because I've been writing Gleam. The LSP code actions are incredible. There's no "oh sorry, I meant to do it the other way" like you get with LLMs, because everything is strongly typed. What if we spent $100 billion on a programming language?
We've now spent many hundreds of billions on tools which are powerful, but we've also chosen to ignore many other ways to spend that money.
If you gave me $100 billion to spend on API interoperability, knowing what I know today, I would spend that money inventing LLMs.
For $100 billion you could get public standards for APIs of all kinds implemented. I don't think people understand just how much money that is. We're talking solve-extreme-hunger-and-then-create-international-API-standards money.
Having lurked around the edges of various standardization processes for 20+ years, I don't think this is a problem that gets fixed by money.
You can spend an enormous amount of money building out a standard like SOAP, which might then turn out not to have nearly as much staying power as the specification authors expected.
That's totally fair. Money, though, is a representation of desire, and the reality is that people don't have much interest in solving these problems. We live in a society where there's much more interest in creating something that might be god than in solving other problems. And that's really the main point of the article.
But even if the W3C had spent $10M a year for the 10 years SOAP was being actively developed (according to Wikipedia), that would still be 1/1000th of the $100 billion we're talking about. So we really have no idea what this sort of money could do if mobilized in other ways.
Yeah. Much of that money is going to physically building data centers, in the middle of an affordable housing crisis. "Look, I just need a few billion, to build a server farm, to build the machine god, who will tell us how to solve the homelessness and housing insecurity." If it works? That'd be neat. Right now, it sounds like crackhead logic.
Agree. I often prefer to screen scrape even when an API is available because the API might contain limited data or other restrictions (e.g. authentication) that web pages do not. If you don't depend on an API, you'll never be reliant on an API.
> LLMs offer the single biggest advance in interoperability I've ever seen.
> ... we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
If a developer relies on client code generated by an LLM to use an API, how would they know if what was generated is a proper use of said API? Also, what about when lesser used API functionality should be used instead of more often used ones for a given use-case?
If the answer is "unit/integration tests certify the production code", then how would those be made if the developer is reliant upon LLM for code generation? By having an LLM generate the test suite?
And if the answer is "developers need to write tests themselves to verify the LLM generated code", then that implies the developer understands what correct and incorrect API usage is beforehand.
Which begs the question: why bother using an LLM to "spit out the glue code", other than as a way to save some keystrokes which have to be understood anyway?
The developer has the LLM write the test suite, then the developer reviews those tests.
This pattern works really well.
"other than as a way to save some keystrokes which have to be understood anyway?"
It's exactly that. You can save so many keystrokes this way.
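Sketching what the "LLM writes the tests, the human reviews them" loop can look like; the helper under test and its cases are hypothetical:

```python
# LLM-drafted tests for a hypothetical field-mapping helper; the human reviews
# these assertions against the API docs, which is faster than typing them out.
import pytest

def map_order(order: dict) -> dict:
    # The glue code under test (normally also LLM-drafted, human-reviewed).
    return {"order_ref": order["id"], "qty": sum(i["quantity"] for i in order["items"])}

def test_maps_id_and_sums_quantities():
    order = {"id": "o-1", "items": [{"quantity": 2}, {"quantity": 3}]}
    assert map_order(order) == {"order_ref": "o-1", "qty": 5}

def test_rejects_missing_id():
    with pytest.raises(KeyError):
        map_order({"items": []})
```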
As if the challenges in writing software are how to hook APIs together.
I get that in the webdev space, that is true to a much larger degree than has been true in the past. But it's still not really the central problem there, and is almost peripheral when it comes to desktop/native/embedded.
Today I compiled Javadocs for a few thousand classes in 0.978 seconds. I was so impressed: with a build that runs over 2 minutes, it feels like each byte of code we write takes a second to execute, yet computing is actually lightning fast; it's only slow when the software is awfully written.
Time to execute bytecode << REST API call << launching a full JVM for each file you want to compile << launching an LLM to call an API (each << is more than 10x).
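Putting rough numbers on that ladder makes the gap vivid; these magnitudes are illustrative assumptions, not measurements:

```python
# Order-of-magnitude latencies, in seconds; illustrative assumptions only.
ladder = {
    "execute a bytecode op": 1e-9,
    "REST API call": 1e-2,
    "cold JVM launch per file": 1e0,
    "LLM round-trip to drive an API": 1e1,
}
base = ladder["execute a bytecode op"]
for step, secs in ladder.items():
    print(f"{step}: ~{secs / base:.0e}x a bytecode op")
```

Each rung is orders of magnitude, which is why "keep an LLM in the loop" and "generate code once, then run it" are such different propositions.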
The point is that you call the LLM to generate the code that lets you talk to the API, rather than writing that glue code yourself. Not that you call the LLM to talk to that API every time.
Exactly.
Basically the opposite has happened. Nearly every API has either been removed or restricted, and every company is investing a lot of resources in making its platform impossible to automate, even with browser automation tools.
Mix of open platforms facing immense abuse from bad actors, and companies realising their platform has more value closed. Reddit for example doesn't want you scraping their site to train AIs when they could sell you that data. And they certainly don't want bots spamming up the platform when they could sell you ad space.
We work with American health insurance companies and their portals are the only API you’re going to get. They have negative incentive to build a true API.
LLMs are 10x better than the existing state of the art (scraping with hardcoded selectors). LLMs making voice calls are at least that much better than the existing state of the art (humans sitting on hold).
The beauty of LLMs is that they can (can! not perfectly!) turn something without an API into one.
I’m 100% with you that an API would be better. But they’re not going to make one.
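For a sense of what "turning something without an API into one" looks like, a minimal sketch; `call_llm` is a placeholder for whatever model client you use, and the portal fields are hypothetical:

```python
# Extract structured claim data from a scraped portal page. Instead of
# hardcoded CSS selectors that break on every redesign, ask a model to
# map messy HTML onto a fixed schema, then validate the result strictly.
import json

REQUIRED_FIELDS = {"claim_id", "status", "amount_billed"}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client call; returns the model's text."""
    raise NotImplementedError

def extract_claim(html: str) -> dict:
    prompt = (
        "Extract JSON with keys claim_id, status, amount_billed "
        "from this insurance portal page. Reply with JSON only.\n\n" + html
    )
    data = json.loads(call_llm(prompt))
    missing = REQUIRED_FIELDS - data.keys()
    if missing:  # the "can! not perfectly!" part: validate, don't trust
        raise ValueError(f"model omitted fields: {missing}")
    return data
```

The strict validation step is what makes the "not perfectly" caveat tolerable in practice.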
I feel like it’s not technically difficult to achieve this outcome… but the incentives just aren’t there to make this interoperable dream a reality.
Like, we already had a perfectly reasonable decentralized protocol with the internet itself. But ultimately businesses with a profit motive made it such that the internet became a handful of giant silos, none of which play nice with each other.
The appeal of AI to investors is precisely that it's anti-API/access.
While I'm somewhat sympathetic to this view, there's another angle here too. The largesse of investment in a vague idea means that lots of other ideas get funding, incidentally.
Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM; in reality early traction is all about solving that annoying and stupid problem your customers hate doing but that you can do for them. The disconnect between the extraordinary pitch and the mundane shipped solution is the core of so much business.
That same disconnect also means that a lot of real and good problems will be solved with money that was meant for AGI but ends up developing other, good technology.
My biggest fear is that we are not investing in the basic, atoms-based tech that the US needs to avoid being left behind in the cheap-energy future: batteries, solar, and wind are being gutted right now due to chaotic government behavior, the actions of madmen who are incapable of understanding the economy today, much less where tech will take it in 5-10 years. We are also underinvesting in basics like housing and construction tech. Hopefully some of the AI money goes to fixing those gaping holes in the country's capital allocation.
It would be much better if we invested in meaningful things directly. So much time and effort is being put into making things AI-shaped for investors.
The elephant in the room is that capital would likely be better directed if it were less concentrated.
If a million families each had $1,000 to invest in new businesses, how would you envision the money being invested collectively? What would be the process?
The money would be distributed mostly as people purchasing things rather than as upfront investment (although it's far from unheard of for startup capital to come from people's local communities, where those communities have the resources to enable this). It would be harder to start a business, but easier to maintain a sustainable business model built on actual demand.
It's peculiar, because I love to use ChatGPT to fill my knowledge gaps as I work through solutions to building and energy problems that I want to solve. I wonder how many people are doing something similar; although I haven't read through all the comments, I doubt much is being said about that simple but potentially profound idea, let alone credence given to it. Learning, amplified.
> Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM
A surface-to-air missile?
As funny as that would be, maybe you should define your terms before you try to use them.
The reply defining terms from busterarm was flagged, so I'm repeating them here:
> TAM or Total Available Market is the total market demand for a product or service. SAM or Serviceable Available Market is the segment of the TAM targeted by your products and services which is within your geographical reach. SOM or Serviceable Obtainable Market is the portion of SAM that you can capture.
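A toy funnel with made-up numbers shows how the three nest:

```python
# Hypothetical market-sizing funnel, in dollars of annual demand.
tam = 10_000_000_000   # total demand for the product category: $10B
sam = tam * 0.20       # the segment you can actually reach and serve: $2B
som = sam * 0.05       # the share you can realistically capture: $100M
print(f"TAM ${tam:,.0f} -> SAM ${sam:,.0f} -> SOM ${som:,.0f}")
```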
Hacker News may be hosted as part of the Y Combinator website, but as the name suggests, the primary audience is hackers, not entrepreneurs. Your answer is good, but could have done without the condescension.
Reading the original context, missiles don't even make sense. I agree condescension isn't helpful though.
That's probably how the poster knew it was wrong.
By all means; I also work at a start-up. That doesn't mean that everyone here does, or is interested in doing so, or will have the necessary background. All I ask of you is to present information in the spirit of the XKCD 10,000: https://xkcd.com/1053/
perfect
Stack Overflow is that way, sir. This is Hacker News.
> most of us are hackers employed at startups
Bold claim, should we do a poll? How long should we let it run for, a week, two weeks?
Or you're not interested in finding out and just downvote, cool, cool.
Been here for years (across many different accounts), and this is the first time I've heard of these terms. I am here for programming content, not business.
Why should I expect anyone who openly admits to having multiple accounts on the site in clear opposition to the site rules to have any level of awareness?
I'm sure dang will come and ding me for this one, but I'm sitting here having my points undermined by literal sockpuppets.
Weird ad hominem flex. And "across" doesn't imply multiple accounts at once. People who register with their company email may be forced to create a new account when they leave the company (perhaps with sudden and unexpected loss of access to the account).
Here's the actual guideline (not rule):
"Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to."
People have commented both appreciating your clear definitions and calling you out for the condescension, with a perfect xkcd suggesting an attitude change. It's up to you how you react to such feedback.
Your definitions provided immediate clarity. Thank you!
More or less, you could say similar things about most of the crypto space too. I think maybe it's because we're at the point where a lot of the things that tech can do, it's more than capable of doing, but they're just not easy to do out of a dorm room and without a lot of domain knowledge.
There is still so much one can build and do in a dorm room. The hardest part is still the hardest part in every business, which is getting sufficient money to get sufficient runway for things to be self sufficient.
>What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
I think we'd still be talking about Web 3.0 DeFi.
The author doesn't seem to appreciate that investors aren't incompetent, but malicious.
Investing in 100 years of open-source Blender does not give them any fraction of a monopoly.
Even if scientists present hundreds of proposals for computation (optical, semiconductor, ...), investors will specifically fund technologies that are hard to decentralize: growing monocrystalline ingots, processes reliant on dangerous chemicals, and so on. If a way of manufacturing processors were easily decentralized, it could easily be duplicated, so there would be no money in it; proposals to pursue it would basically be equivalent to begging investors to become philanthropists. Quite a naive position.
It's in the interest of the group to have quality software, manufacturing technologies, and so on, so the onus is on the representatives of the group of taxpayers to invest in areas where investors would prefer to see no investment at all (even by someone else). Perhaps those "representatives" are inept, or malicious, or both.
There is real value being created by creating interactive summaries of the human corpus. While it is taking time to unlock the value, it will definitely come.
I've been watching this my whole life: UML, SOA, Mongo, cloud, blockchain, now LLMs, and probably 10 others in between. When a tool is new, there's a collective mania among VCs, execs, and engineers that this tool, unlike literally every other one, doesn't have trade-offs that make it an appropriate choice only in some situations. Sometimes the trade-offs aren't discoverable in the nascent stage, and a lot of it is monkey-see-monkey-do, which is the case even today with React and cloud-as-default, IMHO. LLMs are great, but they're just a tool.
The big difference is that LLMs are as big as social media and Google in pop culture, but with a promise of automation and job replacement. My 70-year-old parents use them every day for work and general stuff (while generally understanding the limitations), and they're not even that tech savvy.
We haven’t mapped the hard limitations of LLMs yet but they’re energy bound like everything else. Their context capacity is a fraction of a human’s. What they’ll replace isn’t known yet. Probabilistic answers are unacceptable in many domains. They’re going to remain amazingly helpful for a certain class of tasks but marketing is way ahead of the engineering, again.
IoT wasn't exactly a waste of money. If anything, the problem was that companies didn't spend enough doing it properly or securely. People genuinely do want their security cameras online with an app they can view away from home. It just needs to be done securely and privately.
I want a Wireguard-like solution - preferably with an open source Home Assistant plugin - rather than yet-another-subscriber-lockin-on-company-servers.
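For reference, a minimal sketch of the client side of such a setup; the keys are placeholders, the DDNS hostname is hypothetical, and the home end is assumed to be a WireGuard-capable router or server:

```ini
# wg0.conf on the phone/laptop: tunnel home, reach the cameras directly,
# no vendor cloud account in the path.
[Interface]
PrivateKey = <client-private-key>      # placeholder
Address = 10.8.0.2/32

[Peer]
PublicKey = <home-server-public-key>   # placeholder
Endpoint = home.example.net:51820      # hypothetical DDNS hostname
AllowedIPs = 192.168.1.0/24            # route only the camera subnet via the tunnel
PersistentKeepalive = 25               # keep NAT mappings alive
```

With something like this, the cameras stay on a LAN that is only reachable through the tunnel.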
Synology has Surveillance Station, Synology supports WireGuard (Tailscale, too), and there are iOS and Android Surveillance Station apps.
The software that the Synology uses on the backend is open source, so you could set this all up with Proxmox or a Debian server or something, too.
You need to ensure your cameras support either direct access inside the network, or ONVIF or something like that. IDK, I don't use it anymore, but I did for a good long while, with WiFi and wired IP cameras. My Synology had a "license" for 12 cameras, but lightning took it out (something about a bunch of Ethernet cables in trees), and my new Synology doesn't have enough licenses to bother with.
Anyhow, just thought you should know: "software" NVRs are available, and have been for over a decade.
I have 4 cameras, a home security system, a remotely monitored smoke detector, a smart plug, 4 leak sensors, smart bulbs, a car whose location and state of charge I can track remotely, a smart garage door opener, a smart doorbell, and 7 smart speakers.
The amount of money that's been spent on AI related investments over the past 2-5 years really has been astonishing - like single digit percentage points of GDP astonishing.
I think it's clear that there are productivity boosts to be had from applying this technology to fields like programming. If you completely disagree with that statement, I have a hunch that nothing could convince you otherwise at this point.
But at what point could those productivity boosts offset the overall spend? (If we assume we don't get to some weird AGI that upturns all forms of economics.)
Two points of comparison. Open source has been credibly estimated to have provided over 8 trillion dollars of value to the global economy over the past few decades: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148 - could AI-assisted programming provide a similar boost over the next decade or so?
The buildout of railways in the 1800s took resources comparable to the AI buildout of the past few years and lost a lot of investors a lot of money, but are regarded as a huge economic boost despite those losses.
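One way to frame the "at what point could those productivity boosts offset the overall spend?" question is a back-of-envelope calculation; every number below is a rough assumption for illustration, not a sourced figure:

```python
# What yearly programmer-productivity gain would offset AI capex?
# All inputs are illustrative assumptions.
ai_capex_per_year = 300e9        # assume ~$300B/yr of AI-related investment
developers = 30e6                # assume ~30M professional developers worldwide
cost_per_developer = 100_000     # assume ~$100k/yr fully loaded, averaged globally

dev_labor_pool = developers * cost_per_developer       # ~$3T/yr
breakeven_gain = ai_capex_per_year / dev_labor_pool
print(f"Break-even productivity gain: {breakeven_gain:.0%}")  # ~10%
```

Under those (debatable) assumptions, a sustained ~10% productivity lift across the profession would cover the spend, before counting any value outside programming.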
Green energy, including generation but also storage, transmission, EV chargers, smart-grid technology, etc., would be the obvious thing to invest in that I would expect to have a much higher payoff.
Are they really? Is this one of those "just because people say so" beliefs?
The countries adopting these the most are declining economies. It's places that are looking for something to do after there's no more oil left to drill up and export.
You know where fossil fuel use is booming? Emerging (i.e., growing) economies. Future scarcity of such resources will only make them more valuable and more profitable.
Yes, this is a dim view on the world, but until those alternatives are fundamentally more attractive than petrochemicals, these efforts will always be charity/subsidy.
If you're expecting that to be the area of strong and safe returns on investment, I've got some dodgy property to sell you.
Isn't it China that is adopting renewables more rapidly than anywhere else, while also having a booming economy? Although they're also investing in non-renewable energy sources.
My understanding is that "green" investment portfolios, which were intended as "ethics over return on investment", have actually outperformed petrochemical stocks for years now, and it's more ideology than economics that's preventing further investment (hence why you see so much renewable energy in Texas, which is famously money-driven).
They are bringing nearly one new coal power plant online per day. To the extent they're building any renewable energy capacity, it's nuclear power, and some PV farms with their excess solar panel production that can't be sold internationally. The Chinese are smart: they want reliable, cheap energy, and know what it takes to get it.
Renewable energy in the form of wind and solar are direct subsidies for the oil and gas industry. Wind and solar are intermittent and you have to build a proportional amount of natural gas power plants to maintain a stable grid. Every new PV farm creates demand for another natural gas power plant.
The IRA subsidies for renewables were extremely lucrative. That’s why you see renewable energy projects in Texas. Government money spends just as good as profits from oil and gas.
“Renewable” energy exists where the subsidies are, and in some rare niche cases like geothermal power in Iceland. Many of these wind farms won’t produce more energy than it took to create them.
Railways only lost investor money because everyone was investing to become the national monopoly, so when we did get the monopoly, everyone else lost everything. Sounds like a skill problem. Plenty of value was created and remained in use for decades; completely different from the slop today.
Not to mention that rail only got better as more was built out. With LLMs the more you allow them to create, to scrape, and to replace deterministic platforms that can do the same thing better and faster - the further down the rabbit hole we all go.
I look around and the only people that are shilling for AI seem to be selling it. There are those that are also in a bubble and that's all they hear day in and out. We keep hearing how far the 'intelligence' of these models has come (models aren't intelligent). There are some low hanging fruit edge cases, but just again today I spent an extra hour thinking I could shortcut a PoC by having LLMs bang out the framework. I leveraged all the latest versions of Opus, Kimi, GLM and Grok. For a very specific ask (happened to be building a quick testing setup for PaddleOCR) none of them got it right. Even when asking for very specific aspects of the solution I had in mind Opus was off the rails and "optimizing" within a turn or two.
I probably ended up using about 20% of the structure it gave me - but I could have easily gone back to another project that I've done where that framework actually had more thought put into it.
I really wish the state of the art was better. I don't use LLMs for searching much as I believe it's a waste of resources. But the polarization from the spin pieces by C-levels on top of the poor performance by general models for very specific asks looks nothing like the age of rail.
Do I believe that there are good use cases for small targeted models built on rich training data? I do. But that's not the look and feel from most of what we're seeing out there today. The bulk of it is prompt engineering on top of general models. And the AI slop from the frontier players is so recognizable and overused now that I can't believe anyone still isn't looking at any of this and immediately second guessing the validity. And these are not hallucinations we're seeing because these LLMs are not intelligent. They lack cognition - they are not truly thinking or reasoning.
Again - if LLMs were capable of mass replacement of workers today OpenAI wouldn't be selling anyone a $20/month subscription, or even a $200 one. They'd be selling directly to those C-levels the utopia of white collar replacements that doesn't exist today.
Humans are fundamentally irrational. Not devoid of rationality, but not limited by it. Many social phenomena are downstream from that fact.
Humans have fashions. If something is considered cool, many people start doing that thing, because it automatically gives them a bit of appreciation from most other people. It is often rational to follow a fashion and reap the social benefits it brings.
People are bad at estimating probabilities. They heavily discount the future, and want everything now, hence FOMO. At the same time, they tend to believe in glowing future prospects uncritically, because it helps build social cohesion and power structures.
This is why fads periodically flush all over our industry, and our society, and the whole civilization. And again, it becomes rational to follow the trend and ride the wave. Say the magic word (OOP, XML, Agile, Social, Mobile, Cloud, SaaS, ML, more to come), and it becomes easier to get a job, press coverage, conference invites, investments.
Then the bubble deflates, the useful parts remain (often quite a bit), the fascination, hype, attention, and FOMO find a new worthy object.
So companies add "AI features" partly because it's cool (news coverage, promotions), partly because of FOMO (uncertainty is high, but what if we'd be missing a billion-dollar opportunity?), and partly because of social cohesion (following fashion is natural; being a contrarian may be respectable, but looking ignorant is unacceptable). It's not about carefully calculated material returns on a carefully measured investment. It may look inane, but it's not always stupidity, much like sacrificing some far-future prospects in exchange for stock growth this quarter is not about stupidity.
I've been using an app recently that added a bunch of AI features, but the basic search is still slow and often doesn't work. Every time I open it, I kind of brace myself, but it still disappoints me.
It feels like more and more products are focused on looking impressive, when all I really want is for the everyday features to just work well.
You could have a society where there's one single spreadsheet package made by a team of 20 people, a few operating systems, a new set of 50 video games every year (with graphics that are good enough but nothing groundbreaking so they'll run on old hardware) created according to quota by state-run enterprises, Soviet style.
This would be very efficient in avoiding duplication; the entire industry would probably only need a few thousand developers. It would also save material resources and energy. But I think that even if the software these companies produced was entirely reliable and bug-free, it would still be massively outcompeted by the flashy, trend-chasing free-market companies which produce a ton of duplicated outputs (Monday.com, Trello, Notion, Asana, Basecamp: all of these do basically the same thing).
It's the same with AI, or any other trend like tablets, the internet, smartphones - people wanted these and companies put their money into jumping aboard. If ChatGPT really was entirely useless and had <10,000 users then it would be business as usual - but execs can see how massive the demand is. Of course plenty are going to mess it up and probably go broke, but sometimes jumping on trends is the right move if you want a sustainable business in the future. Sears and Blockbuster could've perfected their traditional business models and customer experience without getting on the internet, and they would have still gone broke as customers moved there.
A lot of how I use AI is to assist me in building the software manually. I focus on one function and ask it to fix or implement it. That's a good way to use AI. But if you mean using AI to improve existing systems, I also think that's being done a lot. For instance, you know Krita, the KDE drawing program? They naturally added a way to prompt image generation based on your initial doodles, which makes a lot of sense.
What could have been if, instead of crypto, trillions were invested in something actually useful? What about the housing bubble, from which we learned nothing, as we are falling into it again?
There is a lot of stinky garbage in AI, but at least you can rescue some value from it; in fact, that could be most of the activity out there, but you only notice what stinks.
TFA kind of assumes that the companies involved would have improved their software in a world in which those resources weren't spent on AI. Since much software contained long-unfixed bugs well before the GenAI boom, I'm not convinced.
Maybe the epitome of shoving AI into everything is Gemini showing up in my Gmail to help me write (which I do not need) but their spam filters still allowing obvious phishing emails through.
I understand organizationally how this happens, and the incentives that build such a monstrosity but it’s still objectively a shame.
The recent Stack Overflow survey said that only 25% of developers are actually happy at work: https://survey.stackoverflow.co/2025/work#job-satisfaction-j... . Gallup says only 33% of employees in the economy in general are engaged. Not everyone gets to go to conferences and network for the Godot game engine like the author; most are doing super repetitive jobs. I definitely want AI to automate as many of those as possible, ASAP.
2. Delivering genuinely upgraded complexity and incoherence, under the cloak of new feature theatre.
3. In a context where customers just wish they could keep using what they have already paid for. Without paying again.
4. But they have to, because of artificially introduced cross-user new-version incompatibilities and strategically scheduled bit rot.
5. All so a company can keep extracting money from people it is no longer actually serving.
This is very similar to enshittification. Customers invest (time, money, data), in something they love. Their common investment has been transmogrified into a dairy, er, prison. And now they are all milked by all the dark patterns and milk pumps their supplier/master can insert between the users and water, users and grass, users and the midden, er, I mean into every remaining useful part of the service. While they shamelessly and unconvincingly claim to still be their hope, light and beloved benefactor.
Encrapification? No...
Fucked-up-grades? Choose which ever silla-bic em-fassis you prefer. (Sorry for the saucy stout boldness, to you of youth and gentle heart. But the point of crass terms is to not let targets off the verbial hook, with a weak or cute euphemism.)
What's the alternative? Churning out new software for the same use cases until Kingdom come because maintaining and improving older software is somehow icky?
Let's make a startup that exists just to send PRs for long-standing ignored issues that really would fix things.
It would be philanthropy (not-for-profit), but I imagine if you got a few big names behind it (some from around here, some not), you could probably assemble a fair group of developers, translators, UI experts, API experts, hardware bug experts, etc. Like a think tank that outputs bugfixes and incremental improvements to software. The sort that reduce electricity usage; you could get some government funding out of that, I bet. How much did Britain pay for that "delete old emails to save water" marketeering?
I guess the downside is Meta, Google, Microsoft, et al. will benefit the most, but whatever.
Also, someone the other day mentioned something like this, where users could subscribe for $50 to pursue legal avenues. Sorry, I don't have the link, but:
> What we actually need is a Consumer Protection Alliance that is made by and funded by people who want protection from this and are willing to pay for the lawyers needed to run all of the cases and bring these cases before a judge over and over and over again until they win.
> This would mean people like you and me and a million others of us paying $20-$50/month out of pocket to hire people to sue companies that do this [...]
This is another genre of AI article that annoys me: the one where the author starts by agreeing with you that AI is definitely a bubble, and we're gonna just "know" that for the rest of the article, no argument necessary.
I don’t feel like this article is trying to start a conversation, it wants to end the conversation so we can have dessert (aka, catastrophizing about the outcome of the thing “we know” is bad).
The basic assumption of the article is that there is a bubble. You want an entirely different article then about whether there is a bubble. You just don't like the conversation it's starting because you also disagree with its base assumption.
ZIRP -> AI/enshittification is the one-two punch combo that I think is going to devastate our economy for 50 years or more. We have an entire generation of executives, financiers, and government officials that have only ever operated in an era of free money.
They've never had to generate a real return, create a product of real value, etc. This wave-of/gamble-on AI slop just shows that they don't even know what value looks like. We've operated for ~40 years on a promise of...something.
> organizations such as Blender, Godot, or Ladybird and...
So you want an open source project to really succeed? It's not money, but real passion for the work.
Write better documentation (with realistic examples!) and fix the critical bugs users have been screaming about for over a decade.
Sure fine pay a few people real wages to work on it full time, but that level of funding has to deliver something more than barely documented functionality.
Yeah, nah... passion only sustains a person for 3 days max before they expire.
My theory is that open source boomed in the last few decades because developers had enough income and free time from their day jobs to moonlight as contributors.
With the gravy train ending, I suspect open source will suffer greatly. Maybe LLMs can cover what was lost, or maybe corporations will pay their engineers to contribute directly (even more so than what they do now), but there will definitely be some losses here.
Unlike "value demand," which is genuine demand arising from customer needs, failure demand is demand caused by failures such as errors, defects, inefficiencies, or poor service delivery. For example, if a service does not fulfill a customer's need properly, the customer must come back, creating more demand that is essentially avoidable. Failure demand leads to inefficiency, additional costs, and deteriorated customer and employee experiences.
So you're implying that ChatGPT and similar are so popular because of "errors, defects, inefficiencies, or poor service delivery", how does this make any sense in the context?
Perhaps not ironically, the careless distribution of incorrect information, combined with a dismissal of human endeavor, is such a perfect encapsulation of why so many people absolutely despise everything surrounding LLM hype.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
The implied answer to this question really just misunderstands the tradeoffs of the world. We had plenty of money and effort going into our technology before AI, and we got... B2B SaaS, mostly.
I don't disagree that the world would be better off if all of the money going into so many things (SaaS, crypto, social media, AI, etc.) was better allocated to things that made the world better, but in order for that to happen, we would have to be in a very different system of resource allocation than capitalism. The issue there is that capitalism has been absolutely core to the many, many advances in technology that have been hugely beneficial to society, and if you want to allocate resources differently than the way capitalism does, you lose all of those benefits and probably end up worse off as a result (see the many failures of communism).
> So I ask: Why is adding AI the priority here? What could have been if the investment went into making these apps better?
> I’m not naive. What motivates people to include AI everywhere is the promise of profit. What motivates most AI startups or initiatives is just that. A promise.
I would honestly call this more arrogant than naive. Doesn't sound like OP has worked at any of the companies that make these apps, but he feels comfortable coming in here and presuming to know why they haven't spent their resources working on the things he thinks are most important.
He's saying that they're not fixing issues with core functionality but instead implementing AI because they want to make profit, but generally the sorts of very severe issues with core functionality that he's describing are pretty damaging to the revenue prospects of a company. I don't know if those issues are much less severe than he's describing or if there's something else going on with prioritization. I don't know if the whole AI implementation was competitive with fixing those - maybe it was just an intern given a project, and that's why it sucks.
I have no idea why they've prioritized the things they have, and neither does the author. But just deciding that they're not fixing the right things because they implemented an AI feature that he doesn't like is not a particularly valid leap of logic.
> Tech executives are robbing every investor blind.
They are not. Again, guy with a blog here is deciding that he knows more than the investors about the things they're investing in. Come on. The investors want AI! Whether that's right or wrong, it's ridiculous to suggest they're being robbed blind.
> Unfortunately, people making decisions (if there are any) only chase ghosts and short term profits. They don’t think that they are crippling their companies and dooming their long term profitability.
If there are any? Again, come on. And chasing short term profits? That is obviously and demonstrably incorrect - in the short term, Meta, Anthropic, OpenAI and everybody else is losing money on AI. In the long term, I'm going to trust that Mark Zuckerberg and Sam Altman, whether you like them or hate them, have a whole lot better idea of whether or not they're going to be profitable in the long term than the author.
This reads like somebody who's mad that the things he wants to be funded aren't being funded and is blaming it on the big technology of the day then trying to back into a justification for that blame.
It’s not just the AI bubble. Think of all the public services, rights, and scientific and medical research being destroyed by rightwing extremists and their billionaire enablers. It will take years, decades perhaps, to undo the damage they’ve already done, in just over 6 months in power.
I disagree. The hype is wearing people down and making them think it's a waste of time, but LLMs just came out a couple of years back, and even the trendline from the past decade (pre-LLMs) is up, up, up.
The amount of interest to explore this opportunity is worth it. The bubble is worth it. I don't think it's lost years, and even if it is, the technology is compelling enough to make the gamble worth it.
The fatigue of reading the same shit over and over again makes people forget that it's only been a couple of years. It also makes people forget how groundbreaking and paradigm-shifting this technology was. People are complaining about how stupid LLMs are when, possibly just 5 years back, no one could even predict that such levels of intelligence in machines were possible.
Of course fairly quick progress was made - a truly astounding amount of money was poured into this industry in a short timeframe. The thing is, now it’s clear that AI isn’t really valuable enough to justify investment on the same scale anymore.
It feels like after people were still flush with cash at the end of the pandemic, reality hit and as people were profit taking from the market, LLMs seemed to emerge from the æther as the next best thing to glom on to. So now the hive mind dumped all their money into that and we are riding an incredible bubble.
So cheap gaming hardware in the future (similar to when telecoms over invested in transcontinental undersea fiber-optic cables)? What's the hangover gonna look like after this? What's the next grift?
I'm a massive AI skeptic, and I think the amount of money being spent is astonishing, but I really don't want to go back to searching the web the old way.
Asking Gemini _is_ just much better at finding you the answers you need, _and_ providing links for you to verify that information.
It will be a sad day when they start injecting ads, I really hope the foss alternatives catch up.
It's not really all that different from trusting some random post on stack overflow. You always needed to be a little skeptical.
I'm asking the AI things that are easy to verify, and often ask it to provide web references. It's working well for me.
I don't ask it about niche topics. I occasionally ask it about myself or my games and it's always funny. It's a good reminder how wrong the AIs can be.
>There isn’t a single day where I don’t have to deal with software that’s broken but no one cares to fix
Since when does this have anything to do with AI? Commercial/enterprise software has always been this way. If it's not going to cost the company in some measurable way issues can get ignored for years. This kind of stuff was occurring before the internet exists. It boomed with the massive growth of personal computers. It continues to today.
GenAI has almost nothing to do with it.
I think the point the author is trying to make is that there are many problems in plain sight we could be spending our efforts on, and instead we are chasing illusory profits by putting so many resources into developing AI features. AI is not the source of the issues, but rather a distraction of great magnitude.
> Commercial/enterprise software has always been this way
All software is this way. The only way something gets fixed is if someone decides it's a priority to fix it over all the other things they could be doing. Plenty of open source project have tons of issues. In both commercial and open source software they don't get fixed because the stack of things to do is larger than the amount of time there is to do them.
It's worth pointing it that the "priority" in both open source and closed isn't just "business priority".
Things that are easy, fun, or "cool" are done before other things no matter what kind of software it is.
All hardware eventually fails, all software eventually works.
Interesting take.
Also interesting would be to compare the qualities between them.
From my experience software has much much bigger probability of ending as eventually working, but not fixing the problem it was set out to do in the first place aka "building the right thing vs building it right". Which I guess is somewhat related to OP's dilemma.
Thought exercise: has any of the money Apple has spent integrating AI features produced as much customer good-will as fixing iOS text entry would? One reason for paying attention to quality is that if you don't, over time it tarnishes your brand and makes it easier for competitors to start cutting into your core business.
Apple's photo search has been an outstanding application of on-device machine learning models for almost a decade at this point.
FaceID has proven pretty popular tools.
Text entry has been mostly fixed with AI: dictate, transcribe and cleanup with Ai works well for many use cases, especially larger texts.
The point is that money that is going into GenAI or adding GenAI-related features to software should be going to fix existing broken software.
Then you missed the point of my post. That money never did. It went back into the hands of the investors, the investors that are now putting money into genAI.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
This is a bit like the question "what if we spent our time developing technology to help people rather than developing weapons for war?"
The answer is that, the only reason you were able to get so many people working on the same thing at once, was because of the pressing need at hand (that "need" could be real or merely perceived). Without that, everyone would have their own various ideas about what projects are the best use of their time, and would be progressing in much smaller steps in a bunch of different directions.
To put it another way - instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But, those homes wouldn't still be around and remembered millenia later.
> instead of building the Great Pyramids, those thousands of workers (likely slaves) could have all individually spent that time building homes for their families. But, those homes wouldn't still be around and remembered millenia later.
They would have been better off. Those pyramids are epitomes of white elephants.
Consider the income from tourists coming to see the pyramids. People traveled to Giza for this purpose for millennia, too.
This was not the original intent of the construction though.
Trickle down economics does work. It just takes millenia to feel the effects.
> But, those homes wouldn't still be around and remembered millenia later.
Yes, but they'd have homes. Who's to say if a massive monument is better than ten thousand happy families?
> Who's to say if a massive monument is better than ten thousand happy families?
It's not. The pyramids have never been of any use to anyone (except as a tourist attraction).
I'm referring merely to the magnitude of the project, not to whether it was good for mankind.
I wonder about the world where, instead of investing in AI, everyone invested in API.
Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.
Instead we're trying to train systems to move a mouse in a browser and praying it doesn't accidentally send 60 pairs of shoes to a random address in Topeka.
LLMs offer the single biggest advance in interoperability I've ever seen.
We don't need to figure out the one true perfect design for standardized APIs for a given domain any more.
Instead, we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
The problem with LLMs as interoperability is they only work sub 100% of the time. Yes they help but the point of the article is what if we spent 100billion on APIs? We absolutely could build something way more interoperable and that’s 100% accurate.
I think about code generation in this space a lot because I’ve been writing Gleam. The LSP code actions are incredible. There’s no “oh sorry I meant to do it the other way” you get with LLMs because everything is strongly typed. What if we spent 100billion on a programming language?
We’ve now spent many hundreds of billions on tools which are powerful but we’ve also chosen to ignore many other ways to spend that money.
If you gave me $100 billion to spend on API interoperability, knowing what I know today, I would spend that money inventing LLMs.
For $100 billion you could get public standards for APIs of all kinds implemented. I don’t think people understand just how much money that is. We’re talking solve extreme hunger and create international api standards afterwards money.
Having lurked around in the edges of various standardization processes for 20+ years I don't think this is a problem that gets fixed by money.
You can spend an enormous amount of money building out a standard like SOAP which might then turn out not to have nearly as much long-running as the specification authors expected.
That’s totally fair. Money though is a representation of desire and the reality is people don’t have interest in solving these problems. We live in a society where there’s much more interest in creating something that might be god than solving other problems. And that’s really the main point of the article.
But also even if the W3C spent $10m a year for the 10 years SOAP was being actively developed according to Wikipedia that would still be 1/1000 of the 100billion we’re talking about. So we really have no idea what this sort of money could do if mobilized in other ways.
Yeah. Much of that money is going to physically building data centers, in the middle of an affordable housing crisis. "Look, I just need a few billion, to build a server farm, to build the machine god, who will tell us how to solve the homelessness and housing insecurity." If it works? That'd be neat. Right now, it sounds like crackhead logic.
Agree. I often prefer to screen scrape even when an API is available because the API might contain limited data or other restrictions (e.g. authentication) that web pages do not. If you don't depend on an API, you'll never be reliant on an API.
> LLMs offer the single biggest advance in interoperability I've ever seen.
> ... we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
If a developer relies on client code generated by an LLM to use an API, how would they know if what was generated is a proper use of said API? Also, what about when lesser used API functionality should be used instead of more often used ones for a given use-case?
If the answer is "unit/integration tests certify the production code", then how would those be made if the developer is reliant upon LLM for code generation? By having an LLM generate the test suite?
And if the answer is "developers need to write tests themselves to verify the LLM generated code", then that implies the developer understands what correct and incorrect API usage is beforehand.
Which begs the question; why bother using an LLM to "spit out the glue code" other than as a way to save some keystrokes which have to be understood anyway?
The developer has the LLM write the test suite, then the developer reviews those tests.
This pattern works really well.
"other than as a way to save some keystrokes which have to be understood anyway?"
It's exactly that. You can save so many keystrokes this way.
As if the challenges in writing software are how to hook APIs together.
I get that in the webdev space, that is true to a much larger degree than has been true in the past. But it's still not really the central problem there, and is almost peripheral when it comes to desktop/native/embedded.
Today I’ve compiled a few thousand classes of Javadocs in .978 second. I was so impressed, with a build over 2 minutes, each byte of code we write takes a second to execute, computing is actually lightening fast, just now when it’s awfully written.
Time of executing bytecode << REST APIs << launching a full JVM for each file you want to compile << launching an LLM to call an API (each << is above x10).
The point is that you call the LLM to generate the code that lets you talk to the API, rather than writing that glue code yourself. Not that you call the LLM to talk to that API every time.
Exactly.
Basically the opposite has happened. Not only has every API either been removed or restricted. Every company is investing a lot of resources in making their platforms impossible to automate even with browser automation tools.
Mix of open platforms facing immense abuse from bad actors, and companies realising their platform has more value closed. Reddit for example doesn't want you scraping their site to train AIs when they could sell you that data. And they certainly don't want bots spamming up the platform when they could sell you ad space.
We work with American health insurance companies and their portals are the only API you’re going to get. They have negative incentive to build a true API.
LLMs are 10x better than the existing state of the art (scraping with hardcodes selectors). LLMs making voice calls are at least that compared to the existing state of the art (humans sitting on hold.)
The beauty of LLMs is that they can (can! not perfectly!) turn something without an API into one.
I’m 100% with you that an API would be better. But they’re not going to make one.
I feel like it’s not technically difficult to achieve this outcome… but the incentives just aren’t there to make this interoperable dream a reality.
Like, we already had a perfectly reasonable decentralized protocol with the internet itself. But ultimately businesses with a profit motive made it such that the internet became a handful of giant silos, none of which play nice with each other.
the appeal of investors to AI is anti API/access.
While I'm somewhat sympathetic to this view, there's another angle here too. The largesse of investment on a vague idea means that lots of other ideas get funding, incidentally.
Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM; in reality early traction is all about solving that annoying and stupid problem your customers hate doing but that you can do for them. The disconnect between the extraordinary pitch and the mundane shipped solution is the core of so much business.
That same disconnect also means that a lot of real and good problems will be solved with money that was meant for AGI but ends up developing other, good technology.
My biggest fear is that we are not investing in the basic, atoms-based tech that we need in the US to not be left behind in the cheap energy future: batteries, solar, and wind is being gutted right now due to chaotic government behavior, the actions of madmen that are incapable of understanding the economy today, much less where tech will take it in 5-10 years. We are also underinvesting in basics like housing, or construction tech. Hopefully some of the AI money goes to fixing those gaping holes in the country's capital allocation.
It would be much better if we invested in meaningful things directly. So much time and effort is being put into making things AI shaped for investors.
The elephant in the room is that capital would likely be better directed if it was less concentrated.
If a million families each has a $1,000 to invest in new business, how would you envision the money to be invested collectively? what would be the process?
The money would distributed mostly as people purchasing things rather than as upfront investment (although it's far from unheard of for startup capital to come from people's local communities where those communities have the resources to enable this). It would be harder to start a business, but easier to maintain a sustainable business model built on actual demand.
it’s peculiar because i love to use chat gpt to fill my knowledge gaps as i work through solutions to building and energy problems that i want to solve. i wonder how many people are doing something similar and, although i haven’t* read through all the comments, i doubt much is being said let alone giving credence to that simple but potentially profound idea. learning amplified.
> Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM
A surface-to-air missile?
As funny as that would be, maybe you should define your terms before you try to use them.
The reply defining terms from busterarm was flagged, so I'm repeating them here:
> TAM or Total Available Market is the total market demand for a product or service. SAM or Serviceable Available Market is the segment of the TAM targeted by your products and services which is within your geographical reach. SOM or Serviceable Obtainable Market is the portion of SAM that you can capture.
[flagged]
Hacker News may be hosted as part of the Y Combinator website, but as the name suggests, the primary audience is hackers, not entrepreneurs. Your answer is good, but could have done without the condescension.
Reading the original context, missiles don’t even make sense. I agree condescension isn’t helpful though.
That's probably how the poster knew it was wrong.
[flagged]
By all means; I also work at a start-up. That doesn't mean that everyone here does, or is interested in doing so, or will have the necessary background. All I ask of you is to present information in the spirit of the XKCD 10,000: https://xkcd.com/1053/
perfect
[flagged]
Stack Overflow is that way, sir. This is Hacker News.
> most of us are hackers employed at startups
Bold claim, should we do a poll? How long should we let it run for, a week, two weeks?
Or you're not interested in finding out and just downvote, cool, cool.
Been here for years (across many different accounts), and this is the first time I've heard of these terms. I am here for programming content, not business.
Why should I expect anyone who openly admits to having multiple accounts on the site in clear opposition to the site rules to have any level of awareness?
I'm sure dang will come and ding me for this one, but I'm sitting here having my points undermined by literal sockpuppets.
Weird ad hominem flex. And "across" doesn't imply multiple accounts at once. People who register with their company email may be forced to create a new account when they leave the company (perhaps with sudden and unexpected loss of access to the account).
Here's the actual guideline (not rule):
"Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to."
People have commented both appreciating your clear definitions and calling you out for the condescension, with a perfect xkcd suggesting an attitude change. It's up to you how you react to such feedback.
Your definitions provided immediate clarity. Thank you!
More or less, you could say similar things about most of the crypto space too. I think maybe it's because we're at the point where tech is more than capable of doing a lot of things; they're just not easy to do out of a dorm room and without a lot of domain knowledge.
There is still so much one can build and do in a dorm room. The hardest part is still the hardest part in every business, which is getting sufficient money to get sufficient runway for things to be self sufficient.
>What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
I think we'd still be talking about Web 3.0 DeFi.
The author doesn't seem to appreciate that investors aren't incompetent, but malicious.
Investing in 100 years of open-source Blender does not give them any fraction of a monopoly.
Even if scientists present hundreds of proposals for computation (optical, semiconductor, ...), they will specifically invest in technologies that are hard to decentralize: growing monocrystalline ingots, processes reliant on dangerous chemicals, and so on. If there is no money in easily decentralizable processor manufacture because it could easily be duplicated, then proposals to pursue it are basically equivalent to begging investors to become philanthropists. Quite a naive position.
It's in the interest of the group to have quality software, manufacturing technologies, and so on, so the onus is on the representatives of the group of taxpayers to invest in areas investors would prefer to see no investment in (even by someone else). Perhaps those "representatives" are inept, or malicious, or both.
There is real value being created by creating interactive summaries of the human corpus. While it is taking time to unlock the value, it will definitely come.
I've been watching this my whole life. UML, SOA, Mongo, cloud, blockchain, now LLMs, probably 10 others in between. When tools are new there's a collective mania between VCs, execs, and engineers that this tool unlike literally every other one doesn't have trade offs that make it only an appropriate choice in some situations. Sometimes the trade offs aren't discoverable in the nascent stage, a lot of it is monkey-see-monkey-do which is the case even today with React and cloud as default IMHO. LLMs are great but they're just a tool.
The big difference is that LLMs are as big as social media and Google in pop culture, but with a promise of automation and job replacement. My 70-year-old parents use it every day for work and general stuff (generally understanding the limitations), and they're not even that tech savvy.
We haven’t mapped the hard limitations of LLMs yet but they’re energy bound like everything else. Their context capacity is a fraction of a human’s. What they’ll replace isn’t known yet. Probabilistic answers are unacceptable in many domains. They’re going to remain amazingly helpful for a certain class of tasks but marketing is way ahead of the engineering, again.
you forgot IoT
IoT wasn't exactly a waste of money. If anything, the problem was that companies didn't spend enough doing it properly or securely. People genuinely do want their security cameras online with an app they can view away from home. It just needs to be done securely and privately.
I want a Wireguard-like solution - preferably with an open source Home Assistant plugin - rather than yet-another-subscriber-lockin-on-company-servers.
Investors want otherwise.
Synology has Surveillance Station, Synology supports WireGuard (Tailscale, too), and there are iOS and Android Surveillance Station apps.
The software that the Synology uses on the backend is open source, so you could set this all up with Proxmox or a Debian server or something, too.
You need to ensure your cameras support either direct access inside the network, or ONVIF or something like that. I don't use it anymore, but I did for a good long while, with WiFi and wired IP cameras. My Synology had a "license" for 12 cameras, but lightning took it out (something about a bunch of ethernet cables in trees), and my new Synology doesn't have enough licenses to bother with.
Anyhow, just thought you should know: software NVRs are available, and have been for over a decade.
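For the WireGuard piece, a minimal client config on your phone is enough to reach a self-hosted NVR with no vendor cloud in the middle. A sketch, where the keys, addresses, and hostname are all placeholders you'd substitute for your own setup:

    [Interface]
    PrivateKey = <phone-private-key>
    Address = 10.8.0.2/32

    [Peer]
    PublicKey = <home-server-public-key>
    Endpoint = home.example.com:51820
    AllowedIPs = 192.168.1.0/24   # route only the home LAN, not all traffic
    PersistentKeepalive = 25      # keeps the NAT mapping alive

Then you point the surveillance app at the camera server's LAN address, and it works the same away from home as it does on the couch.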
I have 4 cameras, a home security system, a remotely monitored smoke detector, a smart plug, 4 leak sensors, smart bulbs, a car whose location and state of charge I can track remotely, a smart garage door opener, a smart doorbell, and 7 smart speakers.
I think IoT was more than just hype.
Wait until the kids find out about LAMP
This raises an interesting question.
The amount of money that's been spent on AI related investments over the past 2-5 years really has been astonishing - like single digit percentage points of GDP astonishing.
I think it's clear by now that there are productivity boosts to be had from applying this technology to fields like programming. If you completely disagree with that statement, I have a hunch that nothing could convince you otherwise at this point.
But at what point could those productivity boosts offset the overall spend? (If we assume we don't get to some weird AGI that upturns all forms of economics.)
Two points of comparison. Open source has been credibly estimated to have provided over 8 trillion dollars of value to the global economy over the past few decades: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148 - could AI-assisted programming provide a similar boost over the next decade or so?
The buildout of railways in the 1800s took resources comparable to the AI buildout of the past few years and lost a lot of investors a lot of money, but are regarded as a huge economic boost despite those losses.
Green energy, including generation but also storage, transmission, EV chargers, smart-grid technology, etc., would be the obvious thing to invest in that I would expect to have a much higher payoff.
Are they really? Is this one of those "just because people say so" beliefs?
The countries adopting these the most are declining economies. It's places that are looking for something to do after there's no more oil left to drill up and export.
You know where fossil fuel use is booming? Emerging (i.e., growing) economies. Future scarcity of such resources will only make them more valuable and more profitable.
Yes, this is a dim view on the world, but until those alternatives are fundamentally more attractive than petrochemicals, these efforts will always be charity/subsidy.
If you're expecting that to be the area of strong and safe returns on investment, I've got some dodgy property to sell you.
Isn't it China that's adopting renewables more rapidly than anywhere else, while also having a booming economy? Although they're also investing in non-renewable energy sources.
My understanding is that "green" investment portfolios, which were intended as "ethics over return on investment," have actually outperformed petrochemical stocks for years now, and that it's more ideology than economics that's preventing further investment (hence why you see so much renewable energy in Texas, which is famously money-driven).
They are bringing nearly one new coal power plant online per day. To the extent they're building any renewable energy capacity, it's nuclear power, and some PV farms with their excess solar panel production that can't be sold internationally. The Chinese are smart: they want reliable, cheap energy, and know what it takes to get it.
Renewable energy in the form of wind and solar are direct subsidies for the oil and gas industry. Wind and solar are intermittent and you have to build a proportional amount of natural gas power plants to maintain a stable grid. Every new PV farm creates demand for another natural gas power plant.
The IRA subsidies for renewables were extremely lucrative. That’s why you see renewable energy projects in Texas. Government money spends just as good as profits from oil and gas.
“Renewable” energy exists where the subsidies are, and in some rare niche cases like geothermal power in Iceland. Many of these wind farms won’t produce more energy than it took to create them.
Railways only lost investor money because everyone was investing in a national monopoly, so when we did get the monopoly, everyone else lost everything. Sounds like a skill problem. Plenty of value was created and remained in use for decades, completely different from the slop today.
Not to mention that rail only got better as more was built out. With LLMs the more you allow them to create, to scrape, and to replace deterministic platforms that can do the same thing better and faster - the further down the rabbit hole we all go.
I look around, and the only people shilling for AI seem to be selling it. There are also those in a bubble, and that's all they hear day in and day out. We keep hearing how far the 'intelligence' of these models has come (models aren't intelligent). There are some low-hanging-fruit edge cases, but just today I spent an extra hour thinking I could shortcut a PoC by having LLMs bang out the framework. I leveraged the latest versions of Opus, Kimi, GLM, and Grok. For a very specific ask (it happened to be building a quick testing setup for PaddleOCR), none of them got it right. Even when asking for very specific aspects of the solution I had in mind, Opus was off the rails and "optimizing" within a turn or two.
I probably ended up using about 20% of the structure it gave me - but I could have easily gone back to another project that I've done where that framework actually had more thought put into it.
I really wish the state of the art was better. I don't use LLMs for searching much as I believe it's a waste of resources. But the polarization from the spin pieces by C-levels on top of the poor performance by general models for very specific asks looks nothing like the age of rail.
Do I believe that there are good use cases for small targeted models built on rich training data? I do. But that's not the look and feel from most of what we're seeing out there today. The bulk of it is prompt engineering on top of general models. And the AI slop from the frontier players is so recognizable and overused now that I can't believe anyone still isn't looking at any of this and immediately second guessing the validity. And these are not hallucinations we're seeing because these LLMs are not intelligent. They lack cognition - they are not truly thinking or reasoning.
Again - if LLMs were capable of mass replacement of workers today OpenAI wouldn't be selling anyone a $20/month subscription, or even a $200 one. They'd be selling directly to those C-levels the utopia of white collar replacements that doesn't exist today.
Humans are fundamentally irrational. Not devoid of rationality, but not limited by it. Many social phenomena are downstream from that fact.
Humans have fashions. If something is considered cool, many people start doing that thing, because it automatically gives them a bit of appreciation from most other people. It is often rational to follow a fashion and reap the social benefits it brings.
People are bad at estimating probabilities. They heavily discount the future, and want everything now, hence FOMO. At the same time, they tend to believe in glowing future prospects uncritically, because it helps build social cohesion and power structures.
This is why fads periodically wash over our industry, and our society, and the whole civilization. And again, it becomes rational to follow the trend and ride the wave. Say the magic word (OOP, XML, Agile, Social, Mobile, Cloud, SaaS, ML, more to come), and it becomes easier to get a job, press coverage, conference invites, investments.
Then the bubble deflates, the useful parts remain (often quite a bit), the fascination, hype, attention, and FOMO find a new worthy object.
So companies add "AI features" partly because it's cool (news coverage, promotions), partly because of FOMO (uncertainty is high, but what if we'd be missing a billion-dollar opportunity?), and partly because of social cohesion (following fashion is natural; being a contrarian may be respectable, but looking ignorant is unacceptable). It's not about carefully calculated material returns on a carefully measured investment. It may look inane, but it's not always stupidity, much like sacrificing some far-future prospects in exchange for stock growth this quarter is not about stupidity.
I've been using an app recently that added a bunch of AI features, but the basic search is still slow and often doesn't work. Every time I open it, I kind of brace myself, but it still disappoints me.
It feels like more and more products are focused on looking impressive, when all I really want is for the everyday features to just work well.
More than anything, I want to get back the era when the users were the customers, not the product.
Very few people are willing to pay cash for the things that they currently pay for in attention and data.
That equilibrium is changing as more people discover the high costs of paying in attention.
You could have a society where there's one single spreadsheet package made by a team of 20 people, a few operating systems, a new set of 50 video games every year (with graphics that are good enough but nothing groundbreaking so they'll run on old hardware) created according to quota by state-run enterprises, Soviet style.
This would be very efficient in avoiding duplication; the entire industry would probably only need a few thousand developers. It would also save material resources and energy. But I think that even if the software these companies produced was entirely reliable and bug-free, it would still be massively outcompeted by the flashy, trend-chasing free-market companies which produce a ton of duplicated outputs (Monday.com, Trello, Notion, Asana, Basecamp all do basically the same thing).
It's the same with AI, or any other trend like tablets, the internet, smartphones - people wanted these and companies put their money into jumping aboard. If ChatGPT really was entirely useless and had <10,000 users then it would be business as usual - but execs can see how massive the demand is. Of course plenty are going to mess it up and probably go broke, but sometimes jumping on trends is the right move if you want a sustainable business in the future. Sears and Blockbuster could've perfected their traditional business models and customer experience without getting on the internet, and they would have still gone broke as customers moved there.
A lot of how I use AI is to assist me in building the software manually. I focus on one function and ask it to fix or implement it. That's a good way to use AI. But if you mean using AI to improve existing systems, I also think that's being done a lot. For instance, you know Krita, the KDE drawing program? They, naturally, added a way to prompt image generation based on your initial doodles, which makes a lot of sense.
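To make the one-function-at-a-time workflow concrete, here's a minimal sketch assuming the official OpenAI Python client; the buggy function is invented for illustration:

    import inspect
    from openai import OpenAI

    def dedupe_keep_order(items):
        # deliberately buggy: the range stops early and drops the last element
        seen, out = set(), []
        for i in range(len(items) - 1):
            if items[i] not in seen:
                seen.add(items[i])
                out.append(items[i])
        return out

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "This should deduplicate while preserving order, but "
                       "it drops the last element. Return only corrected "
                       "code:\n\n" + inspect.getsource(dedupe_keep_order),
        }],
    )
    print(response.choices[0].message.content)  # review by hand before pasting back

Scoping the prompt to a single function keeps you in the loop: the diff stays small enough to actually read.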
Greenfield development is infinitely easier than brownfield.
It's damn hard work to dig in, uncover what's wrong, and fix something broken - especially if someone's workflow depends on the breakage.
Flashy AI features get attention and even if they piss you off, they make you believe the thing is fresh. Sorry but you're human.
What could have been if, instead of crypto, trillions were invested in something actually useful? What about the housing bubble, from which we learned nothing, as we are falling into it again?
There is a lot of stinky garbage in AI, but at least you can rescue some value from it; in fact, that could be most of the activity out there, but you only notice what stinks.
TFA kind of assumes that the companies involved would have improved their software in a world in which those resources weren't spent on AI. Since much software contained long-unfixed bugs well before the GenAI boom, I'm not convinced.
Maybe the epitome of shoving AI into everything is Gemini showing up in my Gmail to help me write (which I do not need) but their spam filters still allowing obvious phishing emails through.
I understand organizationally how this happens, and the incentives that build such a monstrosity but it’s still objectively a shame.
Hopefully users can start banding together and paying for the features they want directly instead of having all the income be funneled from ads.
The recent Stack Overflow survey said that only 25% of developers are actually happy at work (https://survey.stackoverflow.co/2025/work#job-satisfaction-j...). Gallup says only 33% of employees in the economy in general are engaged. Not everyone gets to go to conferences and network for the Godot game engine like the author; most are doing super repetitive jobs. I definitely want AI to automate as many of those as possible, ASAP.
Let's see.
1. Unwanted but coercive upgrades.
2. Delivering genuinely upgraded complexity and incoherence, under the cloak of new feature theatre.
3. In a context where customers just wish they could keep using what they have already paid for. Without paying again.
4. But they have to, because of artificially introduced cross-user new-version incompatibilities and strategically scheduled bit rot.
5. All so a company can keep extracting money from people it is no longer actually serving.
This is very similar to enshittification. Customers invest (time, money, data), in something they love. Their common investment has been transmogrified into a dairy, er, prison. And now they are all milked by all the dark patterns and milk pumps their supplier/master can insert between the users and water, users and grass, users and the midden, er, I mean into every remaining useful part of the service. While they shamelessly and unconvincingly claim to still be their hope, light and beloved benefactor.
Encrapification? No...
Fucked-up-grades? Choose whichever silla-bic em-fassis you prefer. (Sorry for the saucy stout boldness, to you of youth and gentle heart. But the point of crass terms is to not let targets off the verbial hook with a weak or cute euphemism.)
Quickbooks, anyone? The list is long.
Is more maintenance work on old software really the highest aspiration of the tech industry?
What's the alternative? Churning out new software for the same use cases until Kingdom come because maintaining and improving older software is somehow icky?
Let's make a startup whose whole job is doing PRs for long-standing ignored issues, the kind that really would fix things.
It would be philanthropy (not-for-profit), but I imagine if you got a few big names behind it (some around here, some not), you could probably assemble a fair group of developers, translators, UI experts, API experts, hardware bug experts, etc. Like a think tank that outputs bugfixes and incremental improvements to software, the sort that reduce electricity usage; you could get some government funding out of that, I bet. How much did Britain pay for that "delete old emails to save water" marketing?
I guess the downside is that Meta, Google, Microsoft, et al. will benefit the most, but whatever.
Also, someone the other day mentioned something like this, where users could subscribe for $50 to pursue legal avenues. Sorry, I don't have the link, but:
> What we actually need is a Consumer Protection Alliance that is made by and funded by people who want protection from this and are willing to pay for the lawyers needed to run all of the cases and bring these cases before a judge over and over and over again until they win.
> This would mean people like you and me and a million others of us paying $20-$50/month out of pocket to hire people to sue companies that do this [...]
This is another genre of AI article that annoys me: The one where the author starts by agreeing with you that AI is a definitely a bubble, and we’re gonna just “know” that for the rest of the article, no argument necessary.
I don’t feel like this article is trying to start a conversation, it wants to end the conversation so we can have dessert (aka, catastrophizing about the outcome of the thing “we know” is bad).
The basic assumption of the article is that there is a bubble. You want an entirely different article then about whether there is a bubble. You just don't like the conversation it's starting because you also disagree with its base assumption.
> You just don't like the conversation it's starting because you also disagree with its base assumption.
Yes! That’s correct!
ZIRP->AI/enshittification is the one-two punch combo that I think is going to devastate our economy for 50 years or more. We have an entire generation of executives, financiers, and government officials that have only ever operated in an era of free money.
They've never had to generate a real return, create a product of real value, etc. This wave-of/gamble-on AI slop just shows that they don't even know what value looks like. We've operated for ~40 years on a promise of...something.
> organizations such as Blender, Godot, or Ladybird and...
So you want an open source project to really succeed? It's not money, but real passion for the work.
Write better documentation (with realistic examples!) and fix the critical bugs users have been screaming about since over a decade ago.
Sure, fine, pay a few people real wages to work on it full time, but that level of funding has to deliver something more than barely documented functionality.
> It's not money, but real passion for the work.
Yeah, nah... passion only sustains a person for 3 days max before they expire.
My theory is that open source boomed in the last few decades because developers had enough income and free time from their day jobs to moonlight as contributors. With the gravy train ending, I suspect open source will suffer greatly. Maybe LLMs can cover what was lost, or maybe corporations will pay their engineers to contribute directly (even more so than what they do now), but there will definitely be some losses here.
>"while delivering absolutely nothing of value"
Well maybe for you and not the millions of people that use this technology daily.
Unlike "value demand," which is genuine demand arising from customer needs, failure demand is demand caused by failures such as errors, defects, inefficiencies, or poor service delivery. For example, if a service does not fulfill a customer's need properly, the customer must come back, creating more demand that is essentially avoidable. Failure demand leads to inefficiency, additional costs, and deteriorated customer and employee experiences.
So you're implying that ChatGPT and similar are so popular because of "errors, defects, inefficiencies, or poor service delivery", how does this make any sense in the context?
Yet another article that could be losslessly translated to a single sentence.
That would be lossy translation...
Perhaps not ironically, the careless distribution of incorrect information, combined with a dismissal of human endeavor, is such a perfect encapsulation of why so many people absolutely despise everything surrounding LLM hype.
> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?
The implied answer to this question really just misunderstands the tradeoffs of the world. We had plenty of money and effort going into our technology before AI, and we got... B2B SaaS, mostly.
I don't disagree that the world would be better off if all of the money going into so many things (SaaS, crypto, social media, AI, etc.) was better allocated to things that made the world better, but in order for that to happen, we would have to be in a very different system of resource allocation than capitalism. The issue there is that capitalism has been absolutely core to the many, many advances in technology that have been hugely beneficial to society, and if you want to allocate resources differently than the way capitalism does, you lose all of those benefits and probably end up worse off as a result (see the many failures of communism).
> So I ask: Why is adding AI the priority here? What could have been if the investment went into making these apps better?
> I’m not naive. What motivates people to include AI everywhere is the promise of profit. What motivates most AI startups or initiatives is just that. A promise.
I would honestly call this more arrogant than naive. Doesn't sound like OP has worked at any of the companies that make these apps, but he feels comfortable coming in here and presuming to know why they haven't spent their resources working on the things he thinks are most important.
He's saying that they're not fixing issues with core functionality but instead implementing AI because they want to make profit, but generally the sorts of very severe issues with core functionality that he's describing are pretty damaging to the revenue prospects of a company. I don't know if those issues are much less severe than he's describing or if there's something else going on with prioritization. I don't know if the whole AI implementation was competitive with fixing those - maybe it was just an intern given a project, and that's why it sucks.
I have no idea why they've prioritized the things they have, and neither does the author. But just deciding that they're not fixing the right things because they implemented an AI feature that he doesn't like is not a particularly valid leap of logic.
> Tech executives are robbing every investor blind.
They are not. Again, guy with a blog here is deciding that he knows more than the investors about the things they're investing in. Come on. The investors want AI! Whether that's right or wrong, it's ridiculous to suggest they're being robbed blind.
> Unfortunately, people making decisions (if there are any) only chase ghosts and short term profits. They don’t think that they are crippling their companies and dooming their long term profitability.
If there are any? Again, come on. And chasing short term profits? That is obviously and demonstrably incorrect - in the short term, Meta, Anthropic, OpenAI and everybody else is losing money on AI. In the long term, I'm going to trust that Mark Zuckerberg and Sam Altman, whether you like them or hate them, have a whole lot better idea of whether or not they're going to be profitable in the long term than the author.
This reads like somebody who's mad that the things he wants to be funded aren't being funded and is blaming it on the big technology of the day then trying to back into a justification for that blame.
It’s not just the AI bubble. Think of all the public services, rights, and scientific and medical research being destroyed by rightwing extremists and their billionaire enablers. It will take years, decades perhaps, to undo the damage they’ve already done, in just over 6 months in power.
I disagree. The hype is wearing people down and making them think it's a waste of time, but LLMs just came out a couple of years back, and even the trendline from the past decade (pre-LLMs) is up, up, up.
The amount of interest to explore this opportunity is worth it. The bubble is worth it. I don't think it's lost years, and even if it is, the technology is compelling enough to make the gamble worth it.
The fatigue of reading the same shit over and over again makes people forget that it's only been a couple of years. It also makes people forget how groundbreaking and paradigm-shifting this technology was. People are complaining about how stupid LLMs are when, possibly just 5 years back, no one could even predict that such levels of intelligence in machines were possible.
Of course fairly quick progress was made - a truly astounding amount of money was poured into this industry in a short timeframe. The thing is, now it’s clear that AI isn’t really valuable enough to justify investment on the same scale anymore.
It feels like after people were still flush with cash at the end of the pandemic, reality hit and as people were profit taking from the market, LLMs seemed to emerge from the æther as the next best thing to glom on to. So now the hive mind dumped all their money into that and we are riding an incredible bubble.
So cheap gaming hardware in the future (similar to when telecoms over invested in transcontinental undersea fiber-optic cables)? What's the hangover gonna look like after this? What's the next grift?
Quantum computing?
/agree
We are in the infancy of LLM technology.
I'm a massive AI skeptic, and I think the amount of money being spent is astonishing, but I really don't want to go back to searching the web the old way.
Asking Gemini _is_ just much better at finding the answers you need, _and_ it provides links for you to verify that information.
It will be a sad day when they start injecting ads, I really hope the foss alternatives catch up.
I still don't trust the non-determinism of current LLMs. I feel like I can't trust the results unless they are very simple ones.
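The best I've managed is to pin the sampling knobs down and treat it as reduced variance, not determinism. A minimal mitigation sketch, assuming the OpenAI Python client (temperature=0 makes decoding near-greedy; seed is documented as best-effort only):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Name three prime numbers."}],
        temperature=0,  # near-greedy decoding
        seed=42,        # best-effort reproducibility, not a guarantee
    )
    print(response.choices[0].message.content)
    print(response.system_fingerprint)  # if this changes, outputs may change too

Even pinned down like this, it's mitigation rather than the kind of reliability I'd actually trust.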
Yeah. Everyone keeps comparing it to higher level languages.
But higher level languages were deterministic and reliable, allowing users to offload the cognitive load of the lower levels to the computer.
With LLMs you can’t fully offload the load, you need to keep a close eye on it which kind of defeats the purpose (IMO)
It's not really all that different from trusting some random post on Stack Overflow. You always needed to be a little skeptical.
I'm asking the AI things that are easy to verify, and often ask it to provide web references. It's working well for me.
I don't ask it about niche topics. I occasionally ask it about myself or my games and it's always funny. It's a good reminder how wrong the AIs can be.