Nevermark 15 hours ago

Well-run large companies often waste a lot in order to (1) hedge the risk of being left behind, (2) ensure they have options in possible future growth or new efficiency areas, and (3) start on the long learning curves for skills and capabilities that appear likely to be a baseline necessity in the long run.

Bonfires of money.

Predictably. Because all three of those concerns require highly speculative action to properly address.

That doesn't make those reasons invalid. Failures are expected, especially in the early days, and are not a sign of spurious bets or of being starry-eyed about industry upheavals. The minimal return is still experience gained and a ramped-up institutional focus.

How many of us here speed up our overall development by coding early on new projects before we have complete clarity? Writing code we will often throw away?

  • jimmaswell 15 hours ago

    Agreed. Smug dismissal of new ideas is such a lazy shortcut to trying to look smart. I'd much rather talk to someone enthusiastic about something than someone who does nothing but sit there and say "this thing sucks" every time something happens, even if person #2 is incidentally right a lot of the time.

    • jqpabc123 5 hours ago

      Smug acceptance of new ideas is such a lazy shortcut to trying to look smart. I'd much rather talk to someone who has objectively analyzed the concept rather than someone who does nothing but sit there and say "this thing is great" for no real reason other than "everyone is using it".

      Incidentally, "everyone" is wrong a lot of the time.

    • etrautmann 14 hours ago

      While I agree in principle, someone has to make decisions about resource allocation and decide that some ideas are better than others. That takes a finely tuned BS detector and the ability to estimate technical feasibility, which I would argue is a real skill. Being an accurate predictor of success is thus subtly different from being a default cynic if 95% of ideas aren’t going to work.

      • Nevermark 14 hours ago

        > if 95% of ideas aren’t going to work

        Well, if only 95% of our ideas don't work, with a little hard work and sacrifice, we are livin' in an optimish paradise.

  • Gazoche 10 hours ago

    Or (4), because a sales team convinced them there was some magic wand they could buy that would triple their productivity.

  • nazgul17 13 hours ago

    You put my thoughts into words better than I could.

    This reminds me of the exploration-exploitation trade-off in reinforcement learning: you want to maximise your long-term profits but, since your knowledge is incomplete, you must acquire new knowledge, which companies do by trying stuff. Prematurely dismissing GenAI could mean missing out on new efficiencies, which take time to be identified.
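
    To make that concrete, here is a toy epsilon-greedy bandit in Python (a sketch of my own; the arms and payoff numbers are made up for illustration): with small probability you explore a random option, otherwise you exploit the best-known one.

      import random

      def epsilon_greedy(pulls, rewards, epsilon=0.1):
          # explore with probability epsilon, or if an arm is still untried
          if random.random() < epsilon or 0 in pulls:
              return random.randrange(len(pulls))
          # otherwise exploit the arm with the best observed mean reward
          means = [r / n for r, n in zip(rewards, pulls)]
          return max(range(len(means)), key=means.__getitem__)

      # toy simulation: arm 1 secretly pays off more often
      true_payoff = [0.3, 0.5]
      pulls, rewards = [0, 0], [0.0, 0.0]
      for _ in range(1000):
          arm = epsilon_greedy(pulls, rewards)
          pulls[arm] += 1
          rewards[arm] += 1.0 if random.random() < true_payoff[arm] else 0.0
      print(pulls)  # most pulls go to arm 1, but arm 0 keeps being sampled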

  • feoren 2 hours ago

    You're giving company executives way too much credit in general. I'm sure there are unicorns out there where conscientious stewards of the company's long-term health are making measured, rational choices that may pay off in a decade, but it's a tiny minority of companies. Most are run by narcissistic short-term corporate raiders whose first priority is looting the company for their own profit and second priority is cosplaying as a once-in-a-generation genius "thought leader" in a never-ending battle to beat away the nagging (and correct) thought that they're nepo-babies who have no clue what they're doing.

    These morons are burning money because they are stupid and/or because it benefits them and their buddies in the short-term. They burned money on blockchain bullshit, they burned money on Web 2.0 bullshit, they are burning money on AI, and they will be burning money on the next fad too. The fact that AI might actually turn out to be something real is complete serendipity; it has nothing to do with their insight or foresight.

    The only reason they ever look smart is because they're great at taking credit for every win, everyone else immediately forgets all their losses, and op-ed writers and internet simps all compete to write the most sycophantic adulations of their brilliance. They could start finger-painting their office windows with their own feces and the Wall Street Journal would pump out op-eds saying "Here's why smearing poop on your windows is actually a Really Brilliant Business Move made by Really Good-Looking Business Winners!" Just go back and re-read your comment but think "blockchain" instead of "AI" and you'll see clearly how silly and sycophantic it really is.

    • Nevermark 2 hours ago

      > Well-run large companies [...]

      Yes, of course. Incompetent leaders do incompetent things.

      No argument or surprise.

      The point I made was less obvious. Competent leaders can also/often appear to throw money away, but for solid reasons.

  • epistasis 15 hours ago

    Similarly, VC sets barrels of money on fire.

    And depending on how you look at it, science itself is experimentation, but at least it mostly results in publications in the end, which may or may not be read, but which serve as records of areas explored.

    • Nevermark 13 hours ago

      I think your take covers science too.

      Scientists and mathematicians often burn barrels of time and unpublished ideas, not to mention following their curiosities into random pursuits, which give their subconscious the free space to crystallize slippery insights.

      With their publishable work somehow gelling out of all that.

kayodelycaon 20 hours ago

Well, at least AI is going to be better than the blockchain hype. No one knew what “blockchain” was, how it worked, or what it could be used for.

I had a very hard time explaining that once you put something in the chain, you can't easily pull it back out. If you want to verify documents, all you have to do is put a hash in a database table. Which we already had.
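
Something like this sketch is all it takes (the table and function names are made up for illustration):

  import hashlib, sqlite3

  db = sqlite3.connect("documents.db")
  db.execute("CREATE TABLE IF NOT EXISTS doc_hashes (name TEXT, sha256 TEXT)")

  def register(name, data):
      # store a fingerprint of the document bytes
      digest = hashlib.sha256(data).hexdigest()
      db.execute("INSERT INTO doc_hashes VALUES (?, ?)", (name, digest))

  def verify(name, data):
      # recompute and compare; any change to the bytes changes the hash
      row = db.execute("SELECT sha256 FROM doc_hashes WHERE name = ?",
                       (name,)).fetchone()
      return row is not None and row[0] == hashlib.sha256(data).hexdigest()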

It has exactly one purpose: prevent any single entity from controlling the contents. That includes governments, business executives, lawyers, judges, and hackers. The only good thing is that every single piece of data can be pulled out into a different data structure once you realize your mistake.

Note, I’m greatly oversimplifying all the details and I’m not referring to cryptocurrency.

  • Terr_ 19 hours ago

    > has exactly one purpose: prevent any single entity from controlling the contents.

    I'd like to propose a different characterization: "Blockchain" is when you want unrestricted membership and participation.

    Allowing anybody to spin up any number of new nodes they desire is the fundamental requirement which causes a cascade of other design decisions and feedback systems. (Mining, proof-of-X, etc.)

    In contrast, deterring one entity from taking over can also be done with a regular distributed database, where the nodes--and which entities operate them--are determined in advance.

    • johnecheck 15 hours ago

      Sure, blockchain development has always been deeply tied to ideas of open membership and participation. I like those ideas too.

      But that's a poor definition of a blockchain. A blockchain is merely a distributed ledger with certain properties from cryptography.

      If you spin up a private bitcoin network, it's a blockchain even if nobody else knows or cares about it. Now, are non-open blockchains at all useful? I suspect so, but I don't know of any great examples.

      The wide space between 'membership is determined in advance' and 'literally anyone can make a million identities at a whim' is worth exploring, IMO.

      • Terr_ 14 hours ago

        > A blockchain is merely a distributed ledger with certain properties from cryptography.

        If we charitably assume "blockchain" has some engineering meaning (and it isn't purely a word for marketing/scamming) then there's some new aspect which sets it apart from the distributed databases we've been able to make just fine for decades.

        Uncontrolled participation is that key aspect. Without that linchpin, almost all the other new stuff becomes weirdly moot or actively detrimental.

        > If you spin up a private bitcoin network, it's a blockchain even if nobody else knows or cares about it.

        That's practically a contradiction in terms. It may describe the ancestry of the project, but it doesn't describe what/how it's being used.

        Compare: "If you make a version of Napster/Gnutella with all the networking code disabled, it's still a Peer-to-Peer file sharing client even when only one person uses it."

        • johnecheck 5 hours ago

          You misunderstand. The analogy wouldn't be Napster with networking code disabled. The analogy is Napster on a LAN. Only those on the LAN can access it so it's not open to the world, but nonetheless you've still got a p2p file-sharing client.

          And yes. I'm using the engineering definition. I don't believe in letting a gaggle of marketers and scammers define my terms. A blockchain is a specific technology. It doesn't mean 'whatever scam is happening this week', even if said scam involves a blockchain.

          I don't blame you for associating blockchains with scams and fully open projects, that's undeniably what we've seen it used for. But that's not what defines a blockchain.

          "A scalpel can only be used for surgery"

          "If you use a scalpel to cut a steak, it's still a scalpel."

          "There must be some new aspect to scalpels! We've been able to make steak knives for decades!"

        • mritterhoff 14 hours ago

          I interpreted "private" from the comment above yours to mean membership determined by some authority. So your example doesn't hold well, because networking would still be enabled in the file sharing fork, but on a private network rather than the open internet.

  • quantum_state 18 hours ago

    let's use the term xype for anything being hyped without a good reason .. :-)

  • Analemma_ 20 hours ago

    My monkey jpegs are surely going back up! Just have to keep hodling.

    • gerdesj 19 hours ago

      I've lost track of my memes. Should I be locking up the green crayons or my daughters?

      • krapp 18 hours ago
        • gerdesj 18 hours ago

          "monkey jpegs": NFT

          "hodl" - I think r/WallStreetBets came up with that. Perhaps not - its too ubiquitous, similar to teh. r/WSB did originate the eating green crayons thing along with smooth brain monkeys etc

          I simply mixed and matched and riffed on an old school "lock up your daughters" meme.

          Sorry, I don't have a Spongebob illustration. I do words.

      • bradgessler 14 hours ago

        You should be driving your Lambo on the moon.

  • inkyoto 16 hours ago

    > No one knew what “blockchain” was, how it worked, or what it could be used for.

    Not the blockchain itself, but the concept of an immutable, append-only, tamper-proof ledger underpinning it is a very useful one in many contexts where the question of the authenticity of datasets arises – the blockchain has given us the ledger database.

    The ledger database is more than just a hash: it also provides cryptographic proof that the data is authentic via hash chaining, no ability to delete or modify a record, and the entire change history of any record. All these properties make ledger databases very useful in many contexts, especially ones where official documents are involved.
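
    The hash chaining itself is simple enough to sketch in a few lines of Python (a toy illustration, not how any particular ledger database is implemented):

      import hashlib, json

      def append(ledger, record):
          # each entry commits to the hash of the previous entry
          prev = ledger[-1]["hash"] if ledger else "0" * 64
          body = json.dumps(record, sort_keys=True)
          digest = hashlib.sha256((prev + body).encode()).hexdigest()
          ledger.append({"prev": prev, "record": record, "hash": digest})

      def verify(ledger):
          # recompute the chain; editing or deleting any entry
          # invalidates every hash after it
          prev = "0" * 64
          for e in ledger:
              body = json.dumps(e["record"], sort_keys=True)
              digest = hashlib.sha256((prev + body).encode()).hexdigest()
              if e["prev"] != prev or e["hash"] != digest:
                  return False
              prev = e["hash"]
          return True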

    • Ekaros 6 hours ago

      I often feel that immutability is very much overrated and goes against the real world. A lot of the legal system is built on reverting things, so things being harder to revert is not actually a desirable property.

      • inkyoto 2 hours ago

        No. Official and legal records detest in-place changes, love a full history of changes, and also love to walk the entire history of changes back. You do not have to convince me; you can talk to the state auditors instead to get their perspective.

        Consider two simple and common scenarios (there are more): citizenship certificates and the recognition of prior learning.

        1. Citizenship – the person with the name of Joanna Doe in the birth certificate was issued with a citizenship certificate in the same name. As time goes by, Joanna Doe changes the legal name to Joanna Smith; the citizenship certificate is reissued with the new legal name. We do not want to truly update the existing record in the citizenship database and simply change Doe -> Smith as it will create a mismatch with the name in the birth certificate. If we use a ledger database, an update operation will create a new revision of the same record and all subsequent simple query operations will return the latest updated revision with the new name. The first revision, however, will still be retained in the table's cryptographically verifiable audit/history log.

        Why should we care? Because Joanna Smith can accidentally throw their new citizenship certificate away and later they will want to renew their passport (or the driver's licence). The citizenship certificate may be restored[0] by presenting the birth certificate in the original name and the current or expired passport, but the passport is in the new name. From a random system's point of view, Joanna Doe and Joanna Smith are two distinct individuals with no apparent link between them. However, the ledger database can provide proof that it is the same person indeed because the unabridged history of name changes is available, it can be queried and relied upon.

        2. Recognition of prior learning – a person has been awarded a degree at institution A. Credits from Institution A contributed to a degree at Institution B. The degree at B is revoked due to issues with source evidence (i.e. Institution A). The ledger database makes such ripple effects deterministic – a revocation event at B triggers rules that re-evaluate dependent awards and enrolments at partners, with a verifiable trail of who was notified and when. If Institution A later corrects its own records, Institution B and downstream bodies can attach a superseding record rather than overwrite, preserving full lineage. The entire drama unfolded will always be available.

        2½. Recognition of prior learning (bonus) – an employer verified the degree on the hiring date. Months later it is revoked. The employer can present a ledger proof that, on the hiring date, the credential existed and was valid. It reduces dispute risk and supports fair-use decisions such as probation reviews rather than immediate termination.

        All this stuff is very real and the right tech stack (i.e. the ledger DB) reduces the complexity tremendously.

        [0] Different jurisdictions have different rules but the procedure is more or less similar amongst them.

    • kstrauser 16 hours ago

      For all that stuff, I like the blockchain implementation known as "git".

      • inkyoto 15 hours ago

        I have heard an argument for git before, and it is a viable fit for some use cases.

        The problem, however, stems from the fact that the git commit history can be modified, which automatically disqualifies git in many other use cases, e.g. official government-issued documents, financial records, recognition of prior learning, and similar – anywhere where the entire, unabridged history of records is required.

        • kstrauser 15 hours ago

          It can't be modified without destroying every subsequent commit's digest, unless you find a way to generate commits with identical SHA digests.

          • inkyoto 14 hours ago

            There is no such thing as «subsequent commits» in git.

            Commits in git are non-linear and form a tree[0][1]. A commit can be easily deleted without affecting the rest of the tree. If the git commit represents a subtree with branches dangling off it, deleting such a commit will delete the entire subtree. Commits can also be moved around, detached and re-attached to other commits.

            [0] https://www.baeldung.com/ops/git-objects-empty-directory#2-g...

            [1] https://www.baeldung.com/ops/git-trees-commit-tree-navigatio...

            • kstrauser 14 hours ago

              True but irrelevant. You can't remove a commit in a way that someone else with a copy of the repo can't detect, in exactly the same way and for the same reason that you can't remove a blockchain entry without it being instantly obvious in later items.

              • inkyoto 13 hours ago

                Very much relevant. The definition of the ledger database includes immutability of datasets as a hard constraint[0][1]. The mere fact that the git commit history can be altered automatically disqualifies git from being an acceptable alternative to the ledger database in highly regulated environments.

                If strict compliance is not a hard requirement (open source projects are a prime example), git can be used to prove provenance, provided you trust the signer’s public key or allowed signers file.

                [0] https://www.techtarget.com/searchCIO/definition/ledger-datab...

                [1] https://www.moderntreasury.com/learn/ledger-database

                • kstrauser 11 hours ago

                  It is immutable in exactly the same way. The git commit history cannot be altered, except in the same sense that you could manually edit the backing store of a blockchain to alter the data, and with the same consequence that the alteration would be instantly noticeable in either case.

                  • inkyoto 2 hours ago

                    I respectfully disagree.

                    Consider a leaf commit (or a leaf which is a subtree of commits). I am a person with nefarious intentions, and I delete the leaf commit, forcefully expire the reflogs, or force garbage collect them. At that point, there is no remaining evidence in git that the commit ever existed. If git were used to record a history of criminal offences, I would be able to single-handedly delete the last offence by running «git reflog expire --expire=now --all» followed by «git gc --aggressive --prune=now».

                    Ledger databases, on the other hand, do not have the «delete» operation. The «update» operation does not factually update the document/record and creates a new revision instead (just as git does), whilst retaining a full history of updates to the document/record. This is the fundamental difference.
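
                    A toy sketch of those update semantics in Python (illustrative only, not any particular product's API):

                      class LedgerTable:
                          def __init__(self):
                              self._log = []  # append-only; no delete exists

                          def update(self, doc_id, fields):
                              # "update" appends a new revision; old ones remain
                              rev = len(self.history(doc_id))
                              self._log.append((doc_id, rev, dict(fields)))

                          def get(self, doc_id):
                              # simple queries return the latest revision
                              revs = self.history(doc_id)
                              return revs[-1] if revs else None

                          def history(self, doc_id):
                              # the full audit trail stays queryable forever
                              return [e for e in self._log if e[0] == doc_id]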

    • Blockingto 8 hours ago

      Blockchain did not give us that.

      It only moved the goal post.

      As long as you can't guarantee that the data you put onto a blockchain is trustworthy in the first place, whatever you put on a blockchain is not 'tamper-proof'.

      Therefore the ONLY thing you can handle 'tamper-proof' on a blockchain is stuff that exists only on the blockchain itself. Which means basically nothing.

      And there is a second goal post which was moved: the ignorance about a blockchain being 'tamper-proof'. 51% attacks are real, you don't know if a country just owns and controls a lot of nodes, and the latest rumor is that the NSA was involved in blockchain creation. You don't know if something is hidden in the system which gives one entity an edge over others.

    • phonon 15 hours ago

      Yes, that kind of database was so in demand that the AWS version, "Amazon Quantum Ledger Database", was hugely successful. Oh wait, it was a flop and is being shut down....

      • inkyoto 15 hours ago

        No snark required. Whatever the reason for discontinuing QLDB was, it has been replaced with the pgAudit[0] Postgres extension in AWS. I did look into using QLDB for a project involving the management of financial records about 4 years ago and found it to be a somewhat unwieldy document database to use.

        Demand for ledger databases is strong in government and elsewhere where compliance is non-negotiable. Microsoft have their own on offer[1] as well.

        [0] https://www.pgaudit.org/

        [1] https://learn.microsoft.com/en-us/sql/relational-databases/s...

  • xyst 19 hours ago

    [flagged]

fancyfredbot 20 hours ago

The article doesn't understand that cutting "between five and 20 percent of support and admin processing" is really valuable; instead it seems to want to dismiss that as a dull failure.

Business process outsourcing companies are valued at $300bn according to the BPO Wikipedia page, so 5%-20% of that is $15-60bn. Even if we're valuing all the other GenAI impact at zero, the impact on admin and support alone could plausibly justify this investment.

  • kurthr 17 hours ago

    What is missing, even in this article, is the install base and expected failure rate of the dominant GB300 servers. Numbers I heard were "~15% annual failure, and it's not worth trying to swap/repair". That means in 5 years these entire installs are down by more than half. Of course they can install the NEW GX500turbo servers, which are 4x the compute but 2x more power hungry. How much will that cost? What is the hyperscaler write-down, ~$200B/yr? Better have some income to make that up. They've got only 3 years to get there.

    That still means all-new data centers. They aren't being built for this now, and so the old ones'll have to get ripped out and rebuilt (in place?) before they get the new servers. I do think they've planned the external power delivery, but not cooling or IP infra. It's a CF.

    • fancyfredbot 10 hours ago

      The article is right to focus on the end customer and not on the hyperscalers.

      The hyperscalers are not the ones having trouble generating income. They have plenty of paying customers. They certainly understand capital depreciation and the need to refresh hardware. Premature hardware failure will be charged back to Nvidia, who are not exactly struggling for cash either.

  • myhf 16 hours ago

    But business process outsourcing companies actually perform the services they get paid for, so it doesn't make sense to compare them to GenAI.

  • johnsmith1840 15 hours ago

    BPO is a growing industry that isn't shrinking; call centers have expanded as well.

    If it were true that these were replacing anything, it would be very clear in those sectors, and it isn't. The real effect of end-to-end automation from LLMs is small to negligible. The entire "boring" industry is still chugging along and growing as it did before.

  • weakfish 19 hours ago

    I suppose the question that I'm too tired to napkin-math on is whether those benefits justify the sometimes astronomical cost of the tooling or integration.

    • fancyfredbot 18 hours ago

      The article author was also too tired to do any napkin math, but unfortunately that does not seem to have stopped them from confidently declaring that $40bn has been lit on fire.

    • __loam 19 hours ago

      Or the reputational risk.

  • croes 19 hours ago

    But that's only the company side and not the customer side.

    Klarna also cut costs by replacing support with AI. It didn't work well, so they had to rehire.

  • palmotea 10 hours ago

    > The article doesn't understand that cutting "between five and 20 percent of support and admin processing" is really valuable; instead it seems to want to dismiss that as a dull failure.

    To whom? From the customer perspective, it sounds like a shittier level of service is coming, which is a kind of failure.

    • simianwords 10 hours ago

      Massive cost savings vs slightly more shitty customer service means it was probably worth the trade off. Nothing usually comes without tradeoffs.

jsight 20 hours ago

I'm not shocked. It reminds me a bit of the way some people talk about their personal investing. They'll talk about wins (often exaggerated) and leave out the travails and failures. Next thing you know, your friend is telling you about their new day trading plan. :)

Unfortunately, the same thing is playing out here. Nobody likes being the guy that points out the gains are incremental when everyone is bragging about their 100x gains.

And everyone in the management side starts getting, understandably, afraid that their company will miss out on these magical gains.

It is all a recipe for wild overspending on the wrong things.

andy99 19 hours ago

$40B seems very low. I wouldn't be surprised to find the annual corporate churn on innovation / transformation / new IT/IM initiatives, whatever you call it, is way higher than that. There's some subset of corporate spend that's just chasing new stuff and keeping up on what the hot topics are.

I think there is a bubble, but if it's really just $40B, maybe I'm wrong.

DiscourseFan 19 hours ago

Yes and no. It's clear from the article that there is industrial integration. But only workers who are very highly skilled at utilizing the technology--and there are very few of those--are seeing benefits, and only managers who have experience effectively utilizing it are adopting it well. Time will tell, but yes, most projects aren't going anywhere, because they make the fundamental error of thinking that it's equivalent to a human in terms of intelligence.

throwmeaway222 19 hours ago

There is no real source for this data other than "executives" who only think in numbers, and of course those are the types that collude with their CFOs to come up with great ways to get giant tax write-offs. I would imagine they are not "burning billions"; they are coming up with new ways to describe how they ALREADY burned billions.

  • compiler-guy 18 hours ago

    Do you have reason to believe that the MIT researchers didn't interview who they say they did? Or that those people don't have the credentials the researchers claim they do?

    It's possible the study is flawed, or is more limited than the claims being made. But some evidence is necessary to get there.

  • 0cf8612b2e1e 18 hours ago

    I can just speak for myself, but my F500 has been setting money on fire chasing AI. Truly terrible ideas are being pursued just for the sake of movement. Were “AI” not part of the pitch, it would have been immediately tossed in the bin as a waste of time.

    Ideas which are not terrible instead have awful ROIs. Nobody has a use case beyond generating text, so there are lots of ideas about automating some text generation in certain niches, not appreciating that those bits represent 0.1% of the business ventures. Yet they are technically feasible, so full steam ahead.

    • JSR_FDED 17 hours ago

      My F150 set itself on fire the other day

    • tokioyoyo 16 hours ago

      I know first hand companies that have replaced parts of CS with Elevenlabs with measurable wins in customer acquisition and satisfaction. Generally speaking, I agree that a lot of people are chasing something that doesn't exist, but there are real use cases in the current environment.

    • bongodongobob 18 hours ago

      Same here. We are apparently obsessed with chatbots that no one asked for. If I brought up the same idea minus the AI a couple years ago, management would have been very confused as to why I wanted to build things no one asked for.

      The funniest thing is that management has no idea how AI works, so the bots are pretty much just Copilot Agents with a couple of docs and a prompt. It's the most laughable shit I've ever seen. Management is so proud while we're all just shaking our heads hoping this trend passes.

      Don't get me wrong, AI definitely has its use cases, but tossing a doc about company benefits into a bot is about the most worthless thing you could do with this tech.

      • toraway 17 hours ago

        > tossing a doc about company benefits into a bot is about the most worthless thing you could do with this tech.

        Hahaha, my company has spent half a year pursuing the exact same thing to the letter after one of our VPs got the idea in his head at some AI conference. Rollout kept getting pushed back because of hallucinations in testing. I'm not 100% sure at this point if he forgot he made it a top internal priority and it was quietly shelved or if it's still limping along with no one in HR/upper management willing to give it the green light for release.

        (using throwaway because my HN profile is linked to my real identity).

        • bongodongobob 15 hours ago

          Ha, same story here. We had a little demo for some people in IT and it couldn't get some of the most basic background of our company correct.

IT4MD 38 minutes ago

All while paying less taxes, at least in the US, because reasons!

woeirua 20 hours ago

It's 1999 again.

  • jonas21 20 hours ago

    So you're saying they're basically right, but 5 or 10 years too optimistic on the timeline?

  • snerbles 19 hours ago

    It'll be interesting to see what sort of Chewy-equivalents emerge after the pets.com-analogues collapse.

  • mensetmanusman 19 hours ago

    1999 was triggered by fraud: on paper there was far more demand for high-bandwidth applications than was actually justified, hence all the fiber layout and over-investment in Internet technologies.

    The difference today is that every piece of capital is immediately 100% utilized once it is plugged in.

    • ModernMech 18 hours ago

      Announcing to the world "AGI has been achieved internally" when that's not true seems like fraud to me.

      • fzzzy 17 hours ago

        Who did that?

        • bigtunacan 7 hours ago

          Sam Altman made a post on Reddit in 2023 implying OpenAI had achieved AGI internally. He later “clarified” that it was a joke; however, there was some speculation that he didn’t understand what AGI actually meant at the time.

    • woeirua 18 hours ago

      Tell me you don't understand why the dot com crash happened without telling me you don't understand why the dot com crash happened.

      • ac29 17 hours ago

        Calling the whole bubble fraud is wrong, but Enron, Worldcom and Microstrategy all blew up due to fraud in the 2000-2001 time frame.

  • j45 20 hours ago

    Truly, except now everyone's online and spending time online and spending cash online.

    • pixl97 20 hours ago

      And, the .com bomb killed some companies and slowed down spending for a bit. It did not kill the internet, and now companies like Amazon(.com) are absolutely enormous.

      Some people think the GenAI bomb is going to kill GenAI, but I think it's just going to weed out those with too-high expenses and no way to evolve the compute to be cheaper over time.

      • sech8420 19 hours ago

        "Some people think the GenAI bomb is going to kill GenAI,"

        Sure, a very very small percentage of people who know hardly anything about GenAI might think this.

        • __loam 18 hours ago

          It's the opposite. The more I learn about the financial state of these companies the worse my opinion gets.

      • j45 16 hours ago

        There was barely anyone using the internet then, let alone doing e-commerce.

        It’s worth reading up on it to see what’s actually comparable.

roxolotl 19 hours ago

The veracity of the article’s numbers is obviously an issue, but the interesting thing is that this number isn’t about investment in AI infrastructure or models. It’s money spent on consuming and using the models. Which puts the infra spend in an even starker light. If people don’t feel like they are getting value out of this spend at the peak of the hype, that’s not a great sign.

cc62cf4a4f20 12 hours ago

Aren't executives just responding rationally to the current environment? Right or wrong, broadly speaking, the current thinking is that GenAI will be super impactful. Which means there is a lot of risk in being seen as underinvesting in GenAI, even when the ROI isn't there. Until the hype dies down and there is a broad, practical understanding of the value of GenAI, I don't see how it could work any other way.

JCM9 20 hours ago

There’s no question we’re in a “GenAI” bubble, the only question is how big of a smoking crater in the ground is going to be created when this thing pops.

The tech isn’t going away, and is cool/useful, but from a business standpoint this whole thing looks like a dumpster fire burning next to a gasoline truck. Right now VC FOMO is the only thing stopping all that from blowing up. When that slows down buckle up.

  • jandrese 20 hours ago

    CEOs are just salivating at the prospect of firing most of their staff and replacing them with AIs. And then you have hype men at these AI companies blowing a burning coal mine of smoke up their asses. And every month the products become even more expensive for even more incremental gains. The hype is unsustainable.

    How many "we will have AGI by X/X/201X" predictions have we blown past already?

    • arcanemachiner 20 hours ago

      > How many "we will have AGI by X/X/201X" predictions have we blown past already?

      Just imagine how many predictions we'll have in six months, or even a year from now!

      • ed_elliott_asc 19 hours ago

        We don’t need to imagine, we can ask ChatGPT

        • Terr_ 18 hours ago

          Yeah, everyone knows LLMs excel at providing mathematically-sound answers to novel questions. :p

    • sebastiennight 15 hours ago

      > How many "we will have AGI by X/X/201X" predictions have we blown past already?

      This seems wildly inaccurate.

      Can you find any single such claim from any credible source? Anybody hyping up an AGI timeframe within the 2010s?

  • OtherShrezzing 19 hours ago

    The VCs are the small players in this bubble now. Big traditional finance is helping Microsoft, Google, Meta, and Amazon build out their datacenters, and the infrastructure to power them. VCs have a lot of money compared to your startup, but they can’t finance a trillion dollars in construction projects.

    They’re all so highly levered up that they can’t afford for the bubble to pop. If this goes on for another couple of years before the pop, we may see “too big to fail” wheeled out to justify a bailout of Google or Microsoft.

    • impossiblefork 19 hours ago

      I don't think Microsoft has a lot of corporate debt relative to its profits etc., and Google has even less.

      I'm sure there will be losers, but I'm not quite sure who.

      • OtherShrezzing 10 hours ago

        Their recent filings show that they’re planning $100bn/yr in AI expenditures as early as 2027. They’re raising debt, rather than spending from revenues, because they get a better multiplier there.

        They’re also acting as a guarantor to lots of infrastructure projects - meaning the debt is their responsibility, but not on their books.

        If the creditworthiness of any of the hyperscalers slips, even a tiny amount, the tech and banking sectors are in some hot water.

        • impossiblefork 7 hours ago

          Ah, I see.

          But 100 billion is still on the order of the current profit of each. I suppose with interest, if it's sustained over time it could be a problem though.

    • HDThoreaun 15 hours ago

      msft and google both made $100 billion in profit last year. They're nowhere near a bailout.

  • pier25 20 hours ago

    > Right now VC FOMO is the only thing stopping all that from blowing up

    That was probably 2-3 years ago.

    I'd be surprised if VCs hadn't already figured out they're in a bubble. They're probably playing a game of chicken to see who will remain to capture the actually profitable use cases.

    There's also a number of lawsuits going on against AI companies regarding piracy and copyright. It's already an established fact in the courts that these companies have downloaded ebooks, music, videos, and images illegally to train their models.

  • sema4hacker 17 hours ago

    I like your phrases "a smoking crater in the ground" and "a dumpster fire burning next to a gasoline truck" enough to steal them for use in my own critiques.

  • j45 20 hours ago

    It would only be a bubble if the tech or audience weren't there and capabilities weren't improving weekly.

    There are definitely people who don't understand the tech talking about applying it, which increases the failure rate of software projects.

    • throwawayoldie 20 hours ago

      > It would only be a bubble if the tech or audience weren't there and capabilities weren't improving weekly.

      But...is it and are they? Gen AI boosters tend to make assertions like this as if they're unassailable facts, but never seem interested in backing them up. I guess it's easier than thinking.

      • wiml 18 hours ago

        The whole mindset of LLM boosterism is that thinking is obsolete anyway — all you need is pattern matching on existing data — so it's not surprising that they don't want to indulge in it excessively.

  • mrbluecoat 20 hours ago

    It'll certainly be lonely on HN without pages of AI articles to read daily..

    /s

dehugger 18 hours ago

There is a lot of organizational inertia to overcome, but it is happening. The first big wave of savings we are seeing in my org (which deals in physical goods, not software) is replacing very expensive enterprise software with in-house developed replacements.

  • tamersalama 18 hours ago

    Would love to hear more (private is OK)

exasperaited 14 hours ago

In short: we have seen no real evidence it will work, we don’t believe it will work, but we’re definitely going to hire fewer people because of it.

It’s like the New York Jewish joke about the terrible food and the too-small portions.

marenkay 6 hours ago

The wonderful thing about money is: its roots are only in belief, not in fact. If you get a sufficient number of money users to believe that pink-colored dog poop has more value than a diamond, then it has.

The entire shtick is made up. People just tend to forget this too quickly.

  • FooBarBizBazz 4 hours ago

    Moreover, money is not destroyed by spending, but rather circulates. Right now it's flowing around faster -- a manic phase for a portion of the economy.

    In other eras, everyone got excited and went to tent revivals.

    Granted there could be an opportunity cost -- the real effort and electricity could be used elsewhere -- but only if it were possible to create a similar amount of excitement about something useful, like putting solar panels everywhere. But that takes different people and different skills, so maybe this costs nothing?

    (Money can be created and destroyed -- it doesn't just circulate -- but that destruction happens when loans are repaid within a fractional reserve system. Which is kind of a scam, but money itself is, so, whatever.)

    Once you internalize that this is all sort of a scam, does that change your behavior? Maybe you start making NFTs or minting shitcoins.

at-fates-hands 20 hours ago

Feels like 2000 all over again.

The arms race to throw money at anything that has "AI" in its business name is the same thing I saw back in 2000. No business plan, just some idea to somehow monetize the internet, and VCs were doing the exact same thing: throwing tons of good money after bad.

Although you can make an argument that this is different, in a lot of ways it just feels like the same thing. The same energy, the same half-baked ideas trying to get a few million to get something off the ground.

  • ebiester 20 hours ago

    I would say it's a bit more like 1997. I was young but kept thinking, "That which is good is not novel, and that which is novel isn't good." That said, we had the giant crash, but based on the Dow Jones, the low in 2002 was the same as in August 1998; you can really look at the lost decade to see the true impact of the bubble.

    The question isn't if there will be a crash - there will - but there are always crashes. And there are always recoveries. It's all about "how long." And what happens to the companies that blow the money and then find out they can't fire all their white collar workers?

    (Or, what happens if they find out they can?)

  • throwawaysleep 20 hours ago

    I imagine it is the same, but 2000 was hardly a dead end. So there will be lots burned, but they'll keep at it, as the tech will revolutionize the world.

    • kalleboo 14 hours ago

      Yeah the market of 2000 crashed and burned but we were left with a bunch of dark fiber and developed tech that laid the foundation for the next decade.

mrsilencedogood 20 hours ago

I think vibe coding will get good enough that things like vercel's "0 to POC" thing are going to stick around.

I think AI-powered IDE features will stick around. One notable head-and-shoulders-above-non-AI-competitor feature I've seen is "very very fuzzy search". I can ask AI "I think there's something in the code that inserts MyMessage into `my.kafka.topic`. But the gosh darn codebase is so convoluted that I literally can't find it. I suspect "my", "kafka", and "topic" all get constructed somewhere to produce that topic name because it doesn't show up in the code as a literal. I also think there's so much indirection between the producer setup and where the "event" actually first gets emitted that MyMessage might not look very much like the actual origination point. Where's the initial origin point?"

Previously, that was "ctrl-shift-F my.kafka.topic" and then ask a staff engineer and hope to God they know off-hand, and if they don't, go read the entire codebase/framework for 16 hours straight until you figure it out.

Now, LLMs have a decent shot at figuring it out.

I also think things like "is this chest Xray cancer?" are going to be hugely impactful.

But anyone expecting anything like Gen AI (being able to replace a real software engineer, or quality customer support rep, etc) is going to be disappointed.

I also think AI will generally eviscerate the bottoms of industries (expect generic gacha girl-collection games to get a lot of AI art) but also leave people valuing the tops of industries a lot more (lovingly crafted indie games, etc). So now this compute-expensive AI is targeting the already low-margin bottoms of industries. Probably not what VCs want. They want to replace software engineers, not make a slop gacha game cost 1/10th of its already low cost.

  • kgwgk 20 hours ago

    > I also think things like "is this chest Xray cancer?" are going to be hugely impactful.

    Yes, but https://radiologybusiness.com/topics/artificial-intelligence...

    Nine years ago, scientist Geoffrey Hinton famously said, “People should stop training radiologists now,” believing it was “completely obvious” AI would outperform human rads within five years.

    • borroka 17 hours ago

      One problem is considering a solution effective only if it, at launch, completely solves the problem, for example, in the case of AI and LLMs, by coding an entire application without any human intervention, retiring radiologists, or driving autonomously in the five boroughs of New York City.

      If we expect a technology to completely solve a problem as soon as it is launched, only a few in history could be considered a success. Can you imagine what it would be like if the first radios were considered a failure because you couldn't listen to music?

      • npilk 4 hours ago

        Agree. And then people anchor on what the technology was like when it launched, and don't notice or account for the additional improvements and iterations that happen over time.

        E.g. - I was considering a 3D printer but I had heard they were expensive, messy, complicated, it was hard to get prints to come out right, etc. But it turned out I was anchored on ~2016 era technology. I got a simple modern printer for a few hundred dollars and it (mostly) just works.

    • HDThoreaun 15 hours ago

      AI does outperform radiologists right now. The issues are liability and the radiologist lobby (which you linked to) throwing a fit.

    • Eisenstein 18 hours ago

      If you want to go back in history you will find people confidently claiming things in either direction of what eventually happened.

  • chaboud 16 hours ago

    I've been quite happy with thinking of agentic IDE operation as being akin to a highly energetic intern. It's prone to spiraling off into the weeds, makes silly mistakes, occasionally mangles whole repos (commit early, and often), and needs very crisp instruction and guidance. That said, I get my answers back in minutes/hours rather than days/weeks. For the cost, for things that would otherwise be delivered by an intern or college-hire SDE, it's a pretty solid value vs. paying a salary and keeping a desk available.

    What it isn't, at present, is an investment in the future. I'm not making these virtual interns better coders, more thoughtful about architecture, or more autonomous in the future. Those aspects of development of new hires are vastly more valuable than the code output I'm getting in my IDE. So I'm hoping that we land in a place where we're fostering both rather than hoping that someone else is going to do the hard work of closing the agentic coding gap and growing maturity. Pulling an Indiana Jones style swap could be a really destructive move if we try to pull the human pipeline out of the system too early.

    Just paying attention to near-term savings runs a real risk of falling into that trap.

  • strange_quark 14 hours ago

    Agree with this, the "find this thing in my spaghetti codebase" is far and away the best use of LLMs I've seen. Fill in the rest of this switch statement, populate this struct from this database call, etc. also work pretty well. I would love if I could get a small model that ran locally that was able to pull off those 2 tricks. Explaining code works sometimes, but even the biggest models are still prone to getting confused and/or making stuff up that isn't there. I don't like the agentic features at all and expect these to mostly die because they're expensive and, IMO, only provide the illusion of productivity.

m0llusk 15 hours ago

Looking back at recent technology advancements it seems pretty clear that first movers end up getting buried by the second wave. This may be a sign that AI is dramatically increasing the scale of the errors we make.

dade_ 15 hours ago

It's strange to advise companies that can't figure out how to deploy AI not to use a BPO, when obviously they can't figure it out themselves. Also, I am looking at a BPO RFP right now for a huge organization, and what they mostly care about is how the vendor is going to leverage AI to reduce their price over the term of the contract.

linotype 19 hours ago

Wait until someone tells them the ratio of public cloud vs self-hosted.

jstummbillig 20 hours ago

What an arrogant and cynical way of looking at business operations. You don't know how anything interesting turns out in the future. You make educated guesses, and those cost money. When you run a business, everything costs money. But that is not "lighting money on fire"; that is how all learning happens.

If anyone thinks they have figured it all out, stop blabbering around. Short the market.

  • compiler-guy 18 hours ago

    "The market can remain irrational longer than you can remain insolvent." --John Maynard Keynes

    Lots of people lost their shirts shorting the housing market prior to the 2008 crash. (_The Big Short_ highlights those who were successful, but plenty of people weren't.) But it was undoubtedly a bubble and there was a world-wide recession when it popped.

    • sebastiennight 15 hours ago

      nitpick:

      longer than you can remain "*solvent", not "insolvent"

    • jstummbillig 9 hours ago

      If someone claims to be certain of the future with much more clarity than the rest of us, yet is unable to find ANY way to turn that advantage into leverage, I certainly would not trust the person on that issue.

    • rvz 16 hours ago

      Maybe we will get another one soon if we don’t realize quick enough that the promise of AGI is a complete scam.

  • mangamadaiyan 18 hours ago

    The market can stay irrational longer than the investor can stay solvent.