This is amazing. My entire web browser session state for every private and personal website I sign onto every day will be used for training data. It's great! I love this. This is exactly the direction humans should be going in to not self-destruct. The future is looking bright, while the light in our brains dims to eventual darkness. Slowly. Tragically. And for what purpose exactly. So cool.
When I ask 20-somethings whether they’ve seen The Matrix, the answer is usually ‘no’. They have little idea what they’re working towards, but are happy to be doing it so they have something to eat.
Yet they have seen Black Mirror and the like, which also portrays the future we’re heading towards. I’d argue even better, because The Matrix is still far off.
But also, it’s not the 20-somethings building this; the people making the decisions are in their 40s and 50s.
Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
Hate to be the dum-dum, but what's leading to humanity's self-destruction here? Loss of privacy? Outsized corporate power? Or, is this an extreme mix of hyperbole and sarcasm?
I think it's more like: investors are permanently unhappy because they were promised ownership of God, and now that we're built out they're getting a few percent a year at best instead. Squeeze extra hard this quarter to get them off the Board's backs for another couple of months.
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Yeah, I think there are profound security issues, but I think many folks dug into the prompt injection nightmare scenarios with the first round of “AI browsers”, so I didn’t belabor that here; I wanted to focus on what I felt was less covered.
No, I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever-present risk that they suck your data in for training, the threat of prompt injection or model inversion being used to steal secrets or execute transactions without your knowledge is extreme.
I think the charitable interpretation is that the author is referring to particular use-cases which stopped being served by CLIs.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
echo "I think the..." | post_to_hn --reply-to=45742461
If we take the total number of computer users globally and look at who uses GUIs vs. CLIs, the latter will be a teeny tiny fraction.
But most of those will likely be developers, who use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
It's actually an interesting example, because unlike Warp that tries to be a CLI with AI, Claude defaults to the AI (unless you prefix with an exclamation mark). Maybe it says more about me, but I now find myself asking Claude to write for me even relatively short sed/awk invocations that would have been faster to type by hand. The uncharitable interpretation is that I'm lazy, but the charitable one I tell myself is that I don't want to context-switch and prefer to keep my working memory at the higher level problem.
In any case, Claude Code is not really CLI, but rather a conversational interface.
Claude Code is a TUI (with "text"), not a CLI (with "command line"). The very point of CC is that you can replace a command line with human-readable texts.
Also, to be clear, I’m mostly goofing about CLIs, and — as I mentioned in the piece — I use one every day. But yes, there are four or five billion internet users who don’t and never will. And CLIs are a poor user interface for 99+% of the tasks that people accomplish on computing devices, or with browsers, which is pertinent for the point I was making.
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
The comparison with Zork and the later comment about having to "guess" what to input to get a CLI to work were also bizarre. He's obviously stretching really hard to make the analogy work.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Also, most of the Infocom games of the 80s were big improvements over Zork.
In the 90s, amateur Inform6 games basically made the Infocom catalog look like the amateur work, because a lot of them were outstanding and they could run on underpowered 16-bit machines like nothing.
Ask your non-dev peers if they know what the command line is and if they have ever used or even seen it, especially when most people use the web on their smartphone.
I recently demonstrated ripgrep-all with fzf to search through thousands of PDFs in seconds on the command line and watched my colleague’s mind implode.
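For anyone curious, the usual shape of that setup is a small shell function that feeds rga matches into fzf with a live preview. This is a sketch adapted from the interactive-search example in the ripgrep-all README; exact flag names may vary between rga/fzf versions:

```shell
# Fuzzy full-text search over file contents (PDFs included) with rga + fzf.
# Sketch adapted from the ripgrep-all README; flags may differ by version.
rga-fzf() {
    local RG_PREFIX="rga --files-with-matches"
    local file
    file="$(
        FZF_DEFAULT_COMMAND="$RG_PREFIX '$1'" \
            fzf --sort \
                --preview="[[ -n {} ]] && rga --pretty --context 5 {q} {}" \
                --phony -q "$1" \
                --bind "change:reload:$RG_PREFIX {q}" \
                --preview-window="70%:wrap"
    )" && xdg-open "$file"
}
```

Drop it in your shell rc, run `rga-fzf searchterm` in a directory full of PDFs, and the list of matching files narrows as you type.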
> We left command-line interfaces behind 40 years ago for a reason
Man I still love command-line so much
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first. [...] guess what secret spell they had to type into their computer to get actual work done
The games he is talking about deliberately didn't have docs or help because that WAS the game, to guess.
I think the same applies here: while there are docs ("please show me the correct way to do X"), the surface area is so large that the analogy still holds up, in that you might as well just be guessing for the right command.
The Atlas implementation isn't great, but I'll pick something that tries to represent my interests every time. The modern commercial web is an adversarial network of attention theft and annoyance. Users need something working on their behalf to mine through the garbage to pull out the useful bits. An AI browser is the next logical step after uBlock.
It seems naive to expect a product by a company that desperately needs a lot of revenue to cover even a tiny part of the investor money it burned—where said product offers unprecedented opportunity to productize users in ways never possible before, and said company has previously demonstrated its disregard for ethics—to represent users’ interests.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Completely agree. Consumers won’t pay for anything online, which means every business model is user-hostile. Use the web for five minutes without an ad blocker and it’s obvious.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
Completely unrelated to what you actually wrote about, but... this is the second time in a week that I've heard "dreck" used in English. Before, I never noticed it anywhere. First was the "singing chemist" scene in Breaking Bad, and now in your writing. I wasn't aware English had adopted this word from German as well. Weird that I never heard it until just now, while the scene I watched is already 15 years or so old...
As another commenter noted, it was loaned from Yiddish rather than German, although the two languages are very closely related. There are many Yiddish words in English that have come from the Jewish diaspora. Common ones I can think of are schlep (carry something heavy) and kvetch (complain loudly). Since this is Hacker News, the Yiddish word I think we use the most is glitch. Of course there are also words from Hebrew origin entering English the same way, like behemoth, kosher, messiah...
Well, while I can never be sure about my own biases, I submit this particular instance is/was not an "illusion". I truly never heard the word "dreck" used in English before. Why do I know? I am a German native speaker, and I submit that I would have noticed if I had ever heard it earlier, simply because I know its meaning in my native language, and that would have made spotting it rather easy.
I also believe noticing Baader-Meinhof in the 90s is rather unsurprising, since the RAF was just "a few years" back. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think my noticing it just recently is a bias; rather, a true coincidence.
One reason I now often go to ChatGPT instead of many Google queries is that the experience is ad-free, quick and responsive.
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. That $20 a month won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ad-free
Free of ads where ChatGPT was paid to deliver them. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (macOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
Google is heavily SEO'd. And while "apartment Seville" is a subset of queries where Google is probably very good, for many things it gives me very bad results, e.g. searching for an affordable haircut always gives me a Yelp link (there is a Yelp link for every adjective + physical-storefront SMB).
That being said, I've never really come across good general ways to get Google to give me good results.
I know some tricks, e.g. filetype:pdf, using Scholar for academic search, or "site:" queries, something like "site:reddit.com/r/Washington quiet cafe" for most things people would want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, LLMs generally give me a lot of lines of inquiry (be careful of X, and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
I use Google and ChatGPT for totally different reasons: ChatGPT is generally far better for unknown topics, while Google is better if I know exactly what I’m after.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “Can you explain what a cone brake is and how it works, in the context of 4WD winches?”, while Google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
This sentiment has been rolling around in my head for a while. I assume one day I'll be using some hosted model to solve a problem, and suddenly I won't be able to get anything out beyond "it would actually work a lot better if you redeployed your application on Azure infra with a bunch of licensed Windows server instances. Here's 15 paragraphs about why.."
It's a lot more than that. The U.S. online ad market is something like $400-500 billion, so that's about $100/mo per person. The problem is that some people are worth a lot more to advertise to than others. Someone who uses the internet a lot and has a lot of disposable income might be more like $500+ a month.
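As a sanity check on those figures, the back-of-the-envelope arithmetic (assuming a ~$450B market and ~330M people; both are rough estimates, not sourced numbers):

```python
# Rough per-person value of the US online ad market.
us_ad_market = 450e9       # midpoint of the ~$400-500B estimate
us_population = 330e6      # approximate US population

per_person_per_month = us_ad_market / us_population / 12
print(f"${per_person_per_month:.0f}/month")  # on the order of $100/month
```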
Where do I cut the $10/month? No like seriously, I'd easily pay $10/month to never see another ad, cookie banner, dark pattern, or have my information resold again. As long as that $10 is promised to never increase, other than adjustments for inflation.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
Meta publishes some interesting data along these lines in their quarterly reports.
I think the most telling is the breakdown of Average Revenue Per User per region for Facebook specifically [1]. The average user brought in about $11 per quarter while the average US/CA user brought in about $57 per quarter during 2023.
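Converted to monthly numbers, the gap in those quarterly figures looks like this:

```python
# Meta's 2023 Facebook ARPU figures quoted above, in USD per quarter.
arpu_q_worldwide = 11
arpu_q_us_ca = 57

print(arpu_q_worldwide / 3)             # ~$3.7 per user per month
print(arpu_q_us_ca / 3)                 # $19 per user per month
print(arpu_q_us_ca / arpu_q_worldwide)  # a US/CA user is worth ~5x the average
```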
Setting up Kagi is as big an improvement to search as an ad blocker is to your general internet experience. After about a week you forget how bad the bare experience is, and after a month you'll never go back.
I'm definitely behind some of my peers on adopting LLMs for general knowledge questions and web search, and I wonder if this is why. Kagi does have AI tools, but their search is ad free and good enough that I can usually find what I'm looking for with little fuss.
Ads are not the only problem with the modern web. Accessibility (or the lack thereof) is more of an issue for me. 20 years ago, we were still hopeful the Internet would bring opportunities to blind people. These days, I am convinced the war has been lost, and modern web devs and their willingness to adopt every new nonsense are the people who hurt me most in current society.
Ublock Origin allows me to control what I see while that information is still in its original context so that I can take that into account when doing research, making decisions, etc.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
uBlock just takes stuff off of a page that shouldn't be there in the first place. All the content that should be there is still there, unchanged.
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
There are two dimensions to it: determinism and discoverability.
In the Adventure example, the UX is fully deterministic but not discoverable. Unless you know what the correct incantation is, there is no inherent way to discover it besides trial and error. Most CLIs are like that (and IMHO phones with 'gestures' are even worse). That does not make a CLI inefficient, unproductive or bad. I use CLIs all the time, as I'm sure Anil does; it just makes them more difficult to approach for the novice.
But the second aspect of Atlas is its non-determinism. There is no 'command' that predictably always 'works'. You can engineer towards phrasings that are more often successful, but you can't reach full fidelity.
This leeway is not without merit. In theory the system is thus free to 'improve' over time without the user needing to. That is something you might find desirable or not.
Wow! Amazing post! You really nailed the complexities of AI browsers in ways that most people don't think about. I think there's also a doom paradox: if more people search with AI, it disincentivizes people from posting on their own blogs and websites, where ad revenue usually helps support them. If AI is crawling and then spitting back information from your blog (and you get no revenue), is there a point in posting at all?
> If you post for ad revenue, I truly feel sorry for you.
I think this is a bit dismissive towards people who create content because they enjoy doing it but also could not do it to this level without the revenue, like many indie Youtubers.
I tested Google Search, Google Gemini and Claude with "Taylor Swift showgirl". Gemini and Claude gave me a description plus some links. Both were organized better than the Google search page. If I didn't like the description that Claude or Gemini gave me, I could click on the Wikipedia link. Claude gave me a link to Spotify to listen, while Gemini gave me a link to YouTube to watch and listen.
The complaint about the OpenAI browser seems to be it didn't show any links. I agree, that is a big problem. If you are just getting error prone AI output then it's pretty worthless.
This, and the new device OpenAI is working on, are part of a general strategy to build a bigger moat by having more of an ecosystem, so that people will keep their subscriptions and also get Pro.
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Am I missing something here? I used it a few days ago and it does actually act like a web browser and give me the link. This seems to be a UI expectation issue rather than a "real philosophy".
Tangent, but related: if only Google Search would make a serious comeback instead of finding nothing anymore, we would have a tool to compare AI to. Sure, Gemini integration might still be a thing, but with actual working search results.
>I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.
Sounds like the browser did you a favor. Wonder if she'll be suing.
> There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I can barely stomach it when John Oliver does it, but reading this sort of snark without hearing a British voice is too much for me.
Also, re: "a tiny handful of incredible nerds" - page 20 of this [0] document lists the sales figures for Infocom titles from 1981 to 1986: it sums up to over 2 million shipped units.
Granted, that number does not equal the number "nerds" who played the games because the same player will probably have bought multiple titles if they enjoyed interactive fiction.
However, also keep in mind that some of the games in that table were only available after 1981, i.e., at a later point during the 1981-1986 time frame. Also, the 80s were a prime decade for pirating games, so more people will have played Infocom titles than the sales figures suggest - the document itself mentions this because they sold hint books for some titles separately.
'Modern' Z-Machine games (the v5 version, compared to the original v3 one from Infocom) will allow you to do that and far more.
By 'modern' I meant from the early 90's and up.
Even more with v8 games.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
The original v3 Z-Machine parser (the raw one) was pretty limited compared to the v5 one, even more so against games made with Inform6 and the Inform6 English library targeting the v5 version.
Go try it yourself. The original Zork from MIT (Dungeon), converted into a v5 Z-Machine game:
Spot the differences.
For instance, you could both say 'take the rock' and, later, say 'drop it'.
>take mat
Taken.
>drop it
Dropped.
>take the mat
Taken.
>drop mat
Dropped.
>open mailbox
You open the mailbox, revealing a small leaflet.
>take leaftlet
You can't see any such thing.
>take leaflet
Taken.
>drop it
Dropped.
Now, v5 games are from the late 80's/early 90's. There are Curses, Jigsaw, Spider and Web... v8 games are like Anchorhead, pretty advanced for its time:
You can either download the Z8 file and play it with Frotz (Linux/BSD), WinFrotz, or Lectrote under Android and anything else, or play online with the web interpreter.
Now, the 'excuses' about the terseness of the original Z3 parser are nearly void, because with PunyInform a lot of Z3-targeting games (for 8086 DOS PCs, C64s, Spectrums, MSXs...) have a slightly improved parser compared to the original Zork game.
This article is deep, important, and easily misinterpreted. The TL;DR is that a plausible business model for AI companies is centered around surveillance advertising and content gating like Google or Meta, but in a much more insidious and invasive form.
I found the article no more than ranting about something they are just projecting. The browser may not be for everyone, but I think there's a lot of value in an AI tool that helps you find what you're looking for without shoving as many ads as possible down your throat, while summarizing content to your needs. Supposing OpenAI is not the monster that is trying to kill the web and lock you up, can't you see how that may be a useful tool?
The SV playbook is to create a product, make it indispensable and monopolise it. Microsoft did it with office software. Payment companies want to be monopolies. Social media are of course the ultimate monopolies - network effects mean there is only one winner.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog - just prompt your BlogLLM with an idea. Why comment on blogs - your agent will do it for you. All while avoiding child porn with 97% accuracy - something human-curated content surely cannot be trusted to do.
It's really crazy that there is an entire AI-generated internet. I have zero clue what the benefit of using this would be to me. Even if we argue that it has fewer ads and such, that would only last until they garner enough users to start pushing charges. Probably through even more obtrusive ads.
I also have to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
The purpose is total control. You never leave their platform, there are no links out. You get all of your information and entertainment from their platform.
I normally don't waste a lot of energy on politics.
But this feels truly dystopian. We here on HN are all in our bubble, we know that AI responses are very prone to error and just great in mimicking. We can differentiate when to use and when not (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as the all-knowing factual instance.
They have no idea of the conscious and unconscious biases in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after $64 billion in funding, regularly flirting with the US president, it seems only logical to me that exactly what the author described is the goal. No matter the cost.
As we all feel AI progress is stagnating, with mainly the cost of producing AI responses going down, this almost seems like the only way for OpenAI to win.
Atlas confuses me. Firefox already puts Claude or ChatGPT in my sidebar and has integrations so I can have it analyze or summarize content or help me with something on the page. Atlas looks like yet another Chromium fork that should have been a browser extension, not a revolutionary product that will secure OpenAI's market dominance.
I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do. This meant learning all commands, flags etc.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has got to a point where our computers now can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for the future where we evolve interfaces to things we previously only dreamt of before.
It’s less rigid than a command line but much less predictable than either a CLI or a GUI, with the slightest variation in phrasing sometimes producing very different results even on the same model.
Particularly when you throw in agentic capabilities where it can feel like a roll of the dice if the LLM decides to use a special purpose tool or just wings it and spits out its probabilistic best guess.
That being said, asking ChatGPT to do research in 30 seconds that might otherwise require me to set aside an hour or two is causing me to make decisions about where to tinker and which ideas to chase down much faster.
It’s not so much a conspiracy theory as it is a perfect alignment of market forces. Which is to say, you don’t need a cackling evil mastermind to get conspiracy-like outcomes, just the proper set of deleterious incentives.
Me too, and as the number and maturity of my projects have grown, improving and maintaining them all together has become harder by a factor I haven’t encountered before
Every professional involved in saas, web , online content creation thinks the web is a beautiful thing.
In reality, the fact of social media means the web failed a long time ago, and it only serves a void not taken by mobile apps, and now LLM agents.
Why do I need to read everything about Taylor Swift on her website if I don’t know a single song of hers? (I actually do.)
I don’t want a screaming website telling me about her best new album ever and her tours if the LLM knows I don’t like pop music. The other way around: if you like her, you’d want a different set of information. No website can do that for you.
OpenAI should be 100% required to rev share with content creators (just like radio stations pay via compulsory licenses for the music they play), but this is a weird complaint:
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
At this point, my adoption of AI tools is motivated by fear of missing out or being left behind. I’m a self-taught programmer running my own SaaS.
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
Not sure why this got downvoted, but to clarify what I meant:
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.
Seeing people work tirelessly to make The Matrix a reality is great. I can't wait!
When I ask 20-somethings whether they’ve seen the matrix the answer is ‘no’ usually. They have little idea what they’re working towards, but are happy to be doing it so they have something to eat.
Yet they have seen Black Mirror and the like, which also portrays the future we're heading towards. I'd argue it does so even better, since The Matrix is still far off.
But also, it's not the 20-somethings building this; the people making the decisions are in their 40s and 50s.
Nobody likes the Torment Nexus [0] but everyone has to use it because that's all the eyeballs are. Sometimes still attached.
[0] https://knowyourmeme.com/memes/torment-nexus
You're absolutely right!
> and for what purpose exactly.
The end goal for AI companies has always been to insert themselves into every data flow in the world.
They also need an outlet for all the garbage they generate, hence the transformation of Sora into a shitty social network.
Knowing this is the direction things were headed, I have been trying to get Firefox and Google to create a feature that archives your browser history and pipes a stream of it in real time so that open-source personal AI engines can ingest it and index it.
https://connect.mozilla.org/t5/ideas/archive-your-browser-hi...
Why not Chrome Devtools MCP?
I have no plans to download Atlas either, but I think your browsing isn't used for training unless you opt in.
> By default, we don’t use the content you browse to train our models. If you choose to opt-in this content, you can enable “include web browsing” in your data controls settings. Note, even if you opt into training, webpages that opt out of GPTBot, will not be trained on.
https://openai.com/index/introducing-chatgpt-atlas/
Until the next update, when they conveniently have a "bug" that enables it by default
Hate to be the dum-dum, but what's leading to humanity's self-destruction here? Loss of privacy? Outsized corporate power? Or, is this an extreme mix of hyperbole and sarcasm?
Creating an impermeable barrier between truth and real-time slop generation.
How can you know what you're reading is true when you can't verify what's happening out there?
This is true from global events to a pasta recipe.
Option 1: Training data
Option 2: Ad data
Option 3: None of the above
I'm going with the first two because I like to contribute my data to help out a trillion dollar company that doesn't even have customer support :)
Option 3 is probably "engagement numbers go up -> investors happy"
I think it's more like: investors permanently unhappy because they were promised ownership of God and now we're built out they're getting a few percent a year instead at best. Squeeze extra hard this quarter to get them off the Board's backs for another couple of months.
Investors are never happy long term because even if you have a fantastic quarter, they'll throw a tantrum if you don't do it even better every single time.
"You're absolutely right!"
Being "anti-web" is the least of its problems.
This thing is an absolute security nightmare. The concept of opening up the full context of your authenticated sessions in your email, financial, healthcare or other web sites to ChatGPT is downright reckless. Aside from personal harm, the way they are pushing this is going to cause large scale data breaches at companies that harbour sensitive information. I've been the one pushing against hard blocking AI tools at my org so far but this may have turned me around for OpenAI at least.
Let’s make a not-for-profit, we can make rainbows and happiness.
Yay!! Let’s all make a not-for-profit!!
Oh, but hold on a minute, look at all the fun things we can do with lots of money!
Ooooh!!
Yeah, I think there are profound security issues, but I think many folks dug into the prompt injection nightmare scenarios with the first round of “AI browsers”, so I didn’t belabor that here; I wanted to focus on what I felt was less covered.
Is this the security flaw thingy that stores OAuth or Auth0 tokens in an SQLite database with overly permissive read privileges?
No, I'm talking about the general concept of having ChatGPT passively able to read sensitive data / browser session state. Apart from the ever-present risk that they suck your data in for training, the threat of prompt injection or model inversion being used to steal secrets or execute transactions without your knowledge is extreme.
Right, the software is inherently a flaming security risk even if the vendor were perfectly trustworthy and moral.
Well, unless the scenario is moot because such a vendor would never have released it in the first place.
I mean... Edge has had Copilot integrated for years, and Edge actually has users, unlike Atlas. Not sure why people are getting shocked now...
Microsoft calls everything copilot. It is unclear what they had back then under that name, and what they will have under it.
"This bad no good thing is already happening, so why are you complaining"
>We left command-line interfaces behind 40 years ago for a reason
No we didnt.
I think the charitable interpretation is that the author is referring to particular use-cases which stopped being served by CLIs.
Heck, just look at what's happening at this very moment: I'm using a textarea with a submit button. Even as a developer/power-user, I have zero interest in replacing that with:
A TUI client (with some embedded CLI) would work perfectly for HN.
That's still not a command-line interface. An ed-based mailer is a command-line interface: what you're describing sounds more like *shudder* vi.
If we take the total number of computer users globally and look at who uses a GUI vs a CLI, the latter will be a teeny tiny fraction.
But most of those will likely be developers, who use the CLI in a very particular way.
If we now subdivide further, and look at the people that use the CLI for things like browsing the web, that's going to be an even smaller number of people. Negligible, in the big picture.
The web wasn't made for cli. Gopher was.
If anything, Claude Code's success disproves this.
It's actually an interesting example, because unlike Warp that tries to be a CLI with AI, Claude defaults to the AI (unless you prefix with an exclamation mark). Maybe it says more about me, but I now find myself asking Claude to write for me even relatively short sed/awk invocations that would have been faster to type by hand. The uncharitable interpretation is that I'm lazy, but the charitable one I tell myself is that I don't want to context-switch and prefer to keep my working memory at the higher level problem.
In any case, Claude Code is not really CLI, but rather a conversational interface.
Claude Code is a TUI (with "text"), not a CLI (with "command line"). The very point of CC is that you can replace a command line with human-readable texts.
Claude Code is a Terminal User Interface, not a Command Line Interface.
You know of many people who browse the web using CLI?
I think there is a misunderstanding who is meant by "we"
I mean, it's clear he means for the majority of users and OSes... not the HN crowd specifically.
Also, to be clear, I'm mostly goofing about CLIs, and — as I mentioned in the piece — I use one every day. But yes, there are four or five billion internet users who don't and never will. And CLIs are a poor user interface for 99+% of the tasks that people accomplish on computing devices, or with browsers, which is pertinent to the point I was making.
If I’d anticipated breaching containment and heading towards the orange site, I may not have risked the combination of humor and anything that’s not completely literal in its language. Alas.
Long live Doug McIlroy!
Came here to say this. As a software dev I'm deeply offended lol
Exactly, the whole world runs on CLI based software.
The comparison with Zork, and the later comment about having to "guess" what to input to get a CLI to work, were also bizarre. He's obviously stretching really hard to make the analogy work.
Don't get me wrong, I'm not arguing that the expansion of GUI-based interfaces wasn't a good thing. There are plenty of things I prefer to interact with that way, and the majority of people wouldn't use computers if CLIs were still the default method. But what he's describing is literally not how anyone ever used the command line.
Also, most of the Infocom games in the 80s were big improvements over Zork. And in the 90s, the amateur Inform6 games were often so outstanding that they made the Infocom originals look like the amateur ones, and they could run on underpowered 16-bit machines like nothing.
Ask your non-dev peers if they know what the command line is and whether they have ever used or even seen it, especially when most people use the web on their smartphone.
Network Engineers, Systems Engineers, Devops.
Anyone who deals with any kind of machine with a console port.
CLIs are current technology, that receive active development alongside GUI for a large range of purposes.
Heck Windows currently ships with 3 implementations. Command Prompt, Powershell AND Terminal.
I recently demonstrated ripgrep-all with fzf to search through thousands of PDFs in seconds on the command line and watched my colleague’s mind implode.
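For the curious, the setup is roughly this. A sketch only: it assumes ripgrep-all (rga) and fzf are installed, the flags are standard rga/fzf options, and the `rga_fzf` wrapper name is my own invention, not part of either tool.

```shell
# Hypothetical wrapper around rga + fzf for full-text search across PDFs.
rga_fzf() {
  local query="$1"
  # rga lists files (PDFs included) whose contents match the query;
  # fzf narrows the list interactively, previewing matches for the
  # currently highlighted file with a few lines of context.
  rga --files-with-matches "$query" . \
    | fzf --preview "rga --pretty --context 3 '$query' {}"
}

# Usage: rga_fzf "neural network"
```

The speed comes from rga caching extracted text after the first run, so subsequent searches over the same corpus are near-instant.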
Run iomenu instead of fzf, it might run faster than the Go binary.
I'm aware. Just having a bit of fun. Obviously the vast majority of computer users don't even know what a command line is.
I don't think the CLI one is a good analogy.
> We left command-line interfaces behind 40 years ago for a reason
Man I still love command-line so much
> And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first. [...] guess what secret spell they had to type into their computer to get actual work done
Well... docs and the "-h" do a pretty good job.
The games he is talking about deliberately didn't have docs or help because that WAS the game, to guess.
I think the same applies here: while there are docs ("please show me the correct way to do X"), the surface area is so large that the analogy still holds up, in that you might as well just be guessing at the right command.
The Atlas implementation isn't great, but I'll pick something that tries to represent my interests every time. The modern commercial web is an adversarial network of attention theft and annoyance. Users need something working on their behalf to mine through the garbage to pull out the useful bits. An AI browser is the next logical step after uBlock.
It seems naive to expect such a product to represent users' interests, when the company behind it desperately needs a lot of revenue to cover even a tiny part of the investor money it has burned, the product offers unprecedented opportunities to productize users in ways never possible before, and the company has previously demonstrated its disregard for ethics.
It’s unlikely LLM operators can break even by charging per use, and it should be expected that they’ll race to capture the market by offering “free” products that in reality are ad serving machines, a time-tested business model that has served Meta and friends very well. The fact that Atlas browser is (and they don’t even hide it) a way to work around usage limits of ChatGPT should ring alarm bells.
Completely agree. Consumers won’t pay for anything online, which means every business model is user-hostile. Use the web for five minutes without an ad blocker and it’s obvious.
Atlas may not be the solution but I love the idea of an LLM that sits between me and the dreck that is today’s web.
Completely unrelated to what you actually wrote about, but... This is the second time in a week that I've heard "dreck" used in English. Before, I never noticed it anywhere. First was the "singing chemist" scene in Breaking Bad, and now in your writing. I wasn't aware English had adopted this word from German as well. Weird that I never heard it until just now, while the scene I watched is already 15 years or so old...
As another commenter noted, it was loaned from Yiddish rather than German, although the two languages are very closely related. There are many Yiddish words in English that have come from the Jewish diaspora. Common ones I can think of are schlep (carry something heavy) and kvetch (complain loudly). Since this is Hacker News, the Yiddish word I think we use the most is glitch. Of course there are also words from Hebrew origin entering English the same way, like behemoth, kosher, messiah...
Very interesting, thanks! "Quetschen" in German actually means "to squeeze". So while apparently related, the meaning has shifted.
Merriam Webster dates the English word "dreck" to 1922, though it seems to come from the Yiddish drek and is therefore much older.
Baader-Meinhof phenomenon
Well, while I can never be sure about my own biases, I submit this particular instance is/was not an "illusion". I truly never heard the word "dreck" used in English before. Why do I know? I am a German native speaker, and I submit that I would have noticed if I had ever heard it earlier, simply because I know its meaning in my native language, which would have made spotting it rather easy.
I also believe noticing Baader-Meinhof in the 90s is rather unsurprising, since the RAF was just "a few years" prior. However, "dreck", as someone else noted, has been documented since the early 20th century. So I don't think my noticing this just recently is a bias; rather, a true coincidence.
https://en.wikipedia.org/wiki/Frequency_illusion
So you believe this browser is attempting to represent your interests, and work on your behalf?
One reason I now often go to ChatGPT instead of many Google queries is that the experience is ads free, quick and responsive.
That said, don't be lured: you know they're already working on ways to put ads and trackers and whatnot inside ChatGPT and Atlas. That $20 a month surely won't be enough to recoup all that investment and cost and maximize profits.
So I think we should be careful what we wish for here.
> quick and responsive
This is kind of surprising, because those are precisely the ways I would say that a Web search is better than ChatGPT. Google is generally sub second to get to results, and quite frequently either page 1 or 2 will have some relevant results.
With ChatGPT, I get to watch as it processes for an unpredictable amount of time, then I get to watch it "type".
> ads free
Free of ads where ChatGPT was paid to deliver them. Because it was trained on the public Internet, it is full of advertising content.
Update: Example query I just did for "apartment Seville". Google completed in under a second. All the results above the fold are organic, with sponsored way down. Notably the results include purchase, long-term and vacation rental sites. The first 3 are listing sites. There's an interactive map in case I know where I want to go; apartments on the map include links to their websites. To see more links, I click "Next."
ChatGPT (MacOS native app) took ~9 seconds and recommended a single agency, to which it does not link. Below that, it has bullet points that link to some relevant sites, but the links do not include vacation rentals. There are 4 links to apartment sites, plus a link to a Guardian article about Seville cracking down on illegal vacation rentals. To see more links, I type a request to see more.
For all the talk about Google burying the organic links under a flood of ads, ChatGPT shows me far fewer links. As a person who happily pays for and uses ChatGPT daily, I think it's smart to be honest about its strengths and shortcomings.
Google is SEO'd a lot. And while "apartment Seville" is a niche where Google is probably very good, for many things it gives me very bad results, e.g. searching for an affordable haircut always gives me a Yelp link (there is a Yelp link for every adjective + physical-storefront SMB).
That being said, I've never really come across good general techniques to get Google to give me good results.
I know some tricks, e.g. filetype:PDF, using Scholar for academic search, and using "site:...". Something like "site:reddit.com/r/Washington quiet cafe" works for most things people want to do in a city, because people generally ask about those things on community forums.
But I have a poor time with dev-related queries, because half the time it's SEO'd content, and when I don't know enough about a subject, an LLM generally gives me lines of inquiry (be careful of X, and also consider Y) that I would not have thought to ask about, because I don't know what I don't know.
I use google and ChatGPT for totally different reasons - ChatGPT is generally far better for unknown topics, which google is better if I know exactly what I’m after.
If I’m trying to learn about a topic (for example, how a cone brake works in a 4WD winch), then ChatGPT gives me a great overview with “ Can you explain what a cone brake is and how it works, in the context of 4WD winches?” while google, with the search “4wd winch cone brake function explained”, turns up a handful of videos covering winches (not specifically cone brakes) and some pages that mention them without detailing their function. ChatGPT wins here.
If I were trying to book a flight I’d never dream of even trying to use ChatGPT. That sort of use case is a non-starter for me.
> on ways to put ads
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
(https://www.youtube.com/watch?v=MzKSQrhX7BM&t=0m13s)
I’d probably agree with you if I didn’t have Kagi.
As it is, I find there are some things LLMs are genuinely better for but many where a search is still far more useful.
As bad as AI experiences often are, I speculate that we are actually in a golden age before they are fully enshittified.
This sentiment has been rolling around in my head for a while. I assume one day I'll be using some hosted model to solve a problem, and suddenly I won't be able to get anything out beyond "it would actually work a lot better if you redeployed your application on Azure infra with a bunch of licensed Windows server instances. Here's 15 paragraphs about why.."
I found myself avoiding Google lately because of the AI responses at the top. But you can block those, and now Google is much nicer.
>> The modern commercial web is an adversarial network of attention theft and annoyance
It feels like $10 / month would be sufficient to solve this problem. Yet, we've all insisted that everything must be free.
I now pay for:
- Kagi
- YouTube Premium
- Spotify Premium
- Meta ad-free
- A bunch of substacks and online news publications
- Twitter Pro or whatever it’s called
On top of that I aggressively ad-block with extensions and at DNS level and refuse to use any app with ads. I have most notifications disabled, too.
It is a lot better, but it’s more like N * $10 than $10 per month.
I'm not familiar with YouTube or Spotify premium, so this may be a dumb question.
But, doesn't Youtube Premium include Youtube Music? So why pay for Spotify premium too?
It's a lot more than that. The U.S. online ad market is something like $400-500 billion, so that's about $100/mo per person. The problem is that some people are worth a lot more to advertise to than others. Someone who uses the internet a lot and has a lot of disposable income might be more like $500+ a month.
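Back-of-the-envelope, using the figures above (the $400-500 billion market size is the parent's claim, not verified; the population figure is approximate):

```python
# Rough arithmetic behind the "about $100/mo per person" estimate.
ad_market_usd = 450e9   # midpoint of the claimed $400-500B U.S. online ad market
us_population = 335e6   # approximate U.S. population
per_person_monthly = ad_market_usd / us_population / 12
print(f"${per_person_monthly:.0f}/month")
```

which lands a bit over $100 a month, averaged across everyone; the point about high-value users being worth several times that follows from the same spread the Meta ARPU numbers show.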
Where do I cut the $10/month? No like seriously, I'd easily pay $10/month to never see another ad, cookie banner, dark pattern, or have my information resold again. As long as that $10 is promised to never increase, other than adjustments for inflation.
But I can't actually make that payment - except maybe by purchasing a paid adblocker - where ironically the best open source option (uBlock Origin) doesn't even accept donations.
You'd need to pay a lot more, because advertisers pay way more than $10 per month per user; you'd have to outpay the advertisers.
How much do advertisers pay per customer, and where can I find this analysis?
Meta publishes some interesting data along these lines in their quarterly reports.
I think the most telling is the breakdown of Average Revenue Per User per region for Facebook specifically [1]. The average user brought in about $11 per quarter while the average US/CA user brought in about $57 per quarter during 2023.
[1] https://s21.q4cdn.com/399680738/files/doc_financials/2023/q4... (page 15)
Now that Meta does paid ad-free for my private Instagram account, I feel like my online world is pretty close to ad-free.
It’s closer to $100 than $10 though, for all the services I pay for to avoid ads, and you still need ad blockers for the rest of the internet.
Kagi.com
Setting up Kagi is as big an improvement to search as an ad blocker is to your general internet experience. After about a week you forget how bad the bare experience is, and after a month you'll never go back.
I'm definitely behind some of my peers on adopting LLMs for general knowledge questions and web search, and I wonder if this is why. Kagi does have AI tools, but their search is ad free and good enough that I can usually find what I'm looking for with little fuss.
Add actual accessibility on top, and I'd happily pay 20 EUR/month.
Yes please!
Ads are not the only problem with the modern web. Accessibility (or the lack thereof) is more of an issue for me. 20 years ago, we were still hopeful the Internet would bring opportunities to blind people. These days, I am convinced the war has been lost, and modern web devs with their willingness to adopt every new nonsense are the people who hurt me most in current society.
Oh sweet summer lamb
Gopher. Gemini (the protocol not the AI). IRC.
Ublock Origin allows me to control what I see while that information is still in its original context so that I can take that into account when doing research, making decisions, etc.
But isn't this, instead, letting a third party strip that context away and give it its own context so that you can't make those choices and decisions properly? Information without context is, to me, nearly worthless.
And even if you believe they are neutral parties and have your own interests at heart (which, quite frankly, I think is naive), once companies like that know everything about you, you don't think they'll abuse that knowledge?
uBlock just takes stuff off of a page that shouldn't be there in the first place. All the content that should be there is still there, unchanged.
An AI browser is choosing to send all the stuff you browse, to a third party without a demonstrated interest in keeping it all private, and getting back stuff that might or might not be true to the original content. Or maybe not even true at all.
Oh and - Atlas will represent your interests, right up until OpenAI decides it's not in their financial interest to do so. What do you do when the entire web browser UI gets enshittified?
not sure about that. I'll be happy with ublock thanks
People are misinterpreting the gui/cli thing.
There's 2 dimensions to it: determinism and discoverability.
In the Adventure example, the UX is fully deterministic but not discoverable. Unless you know the correct incantation, there is no inherent way to discover it besides trial and error. Most CLIs are like that (and IMHO phones with "gestures" are even worse). That does not make a CLI inefficient, unproductive or bad. I use CLIs all the time, as I'm sure Anil does; they are just more difficult to approach for the novice.
But the second aspect of Atlas is its non-determinism. There is no "command" that predictably always "works". You can engineer towards phrasings that are more often successful, but you can't reach fidelity.
This leeway is not without merit. In theory the system is thus free to "improve" over time without the user needing to. That is something you might find desirable or not.
> In theory the system is thus free to 'improve' over time without the user needing to.
It could just as well degrade, improvement is not the only path.
Wow! Amazing post! You really nailed the complexities of AI browsers in ways most people don't think about. I think there's also a doom paradox: if more people search with AI, it disincentivizes people from posting on their own blogs and websites, where ads usually help support them. If AI is crawling and then spitting back information from your blog (and you get no revenue), is there a point to posting at all?
The point to posting anything is to share with your fellow kind new knowledge that lifts them, entertains them, or teaches them.
If you post for ad revenue, I truly feel sorry for you. How sad.
> If you post for ad revenue, I truly feel sorry for you.
I think this is a bit dismissive towards people who create content because they enjoy doing it but also could not do it to this level without the revenue, like many indie Youtubers.
I tested Google Search, Google Gemini and Claude with "Taylor Swift showgirl". Gemini and Claude gave me a description plus some links, both organized better than the Google search page. If I didn't like the description that Claude or Gemini gave me, I could click on the Wikipedia link. Claude gave me a link to Spotify to listen, while Gemini gave me a link to YouTube to watch and listen.
The complaint about the OpenAI browser seems to be it didn't show any links. I agree, that is a big problem. If you are just getting error prone AI output then it's pretty worthless.
Google search is an ad platform. Wait until the honeymoon days of the "AI" LLMs are over for the enshittification to ensue.
This, and the new device OpenAI is working on, are part of a general strategy to build a bigger moat by having more of an ecosystem, so that people will keep their subscriptions and also get Pro.
Atlas strategy:
- Atlas slurps the web to get more training data, bypassing Reddit blocks, Cloudflare blocks, paywalls, etc. It probably enriches the data with additional user signals that are useful.
- Atlas is an attempt to build a sticky product that users won't switch away from. An LLM or image model doesn't really have sticky attachment, but if it starts storing all of your history and data, the switching costs could become immense. (Assuming it actually provides value and isn't a gimmick.)
- Build pillars of an interconnected platform. Key "panes of glass" for digital lives, commerce, sales intent, etc. in the platformization strategy. The hardware play, the social network play -- OpenAI is trying to mint itself as a new "Mag 7", and Atlas could be a major piece in the puzzle.
- Eat into precious Google revenue. Every Atlas user is a decrease in Google search/ads revenue.
Y Combinator application: Replace the entire World Wide Web with my own WWW
Response: Already achieved by OpenAI!
https://stockanalysis.com/list/magnificent-seven/
I guess Mag 7 is the new FAANG, not the mag-7 shotgun
Am I missing something here? I used it a few days ago and it does actually act like a web browser and give me the link. This seems to be a UI expectation issue rather than a "real philosophy".
Reposting on Bluesky.
I like anti-web phrase. I think it will be a next phase after all those web 2.0 and web x.0 things.
https://bsky.app/profile/kkarpieszuk.bsky.social/post/3m4cxf...
Tangent, but related: if only Google search would make a serious comeback instead of never finding anything anymore, we would have a tool to compare AI against. Sure, Gemini integration might still be a thing, but with actual working search results.
You should try Kagi for this experience.
They gotta do this.
If they don't put AI in every tool, they won't get new training data.
The thing about command lines is off base, but overall the article is right that the ickiness of this thing is exceeded only by its evil.
>I had typed "Taylor Swift" in a browser, and the response had literally zero links to Taylor Swift's actual website. If you stayed within what Atlas generated, you would have no way of knowing that Taylor Swift has a website at all.
Sounds like the browser did you a favor. Wonder if she'll be suing.
> There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
:skull:
It really lost me at
>There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet.
I can barely stomach it when John Oliver does it, but reading this sort of snark without hearing a British voice is too much for me.
Also, re: "a tiny handful of incredible nerds" - page 20 of this [0] document lists the sales figures for Infocom titles from 1981 to 1986: it sums up to over 2 million shipped units.
Granted, that number does not equal the number "nerds" who played the games because the same player will probably have bought multiple titles if they enjoyed interactive fiction.
However, also keep in mind that some of the games in that table were only available after 1981, i.e., at a later point during the 1981-1986 time frame. Also, the 80s were a prime decade for pirating games, so more people will have played Infocom titles than the sales figures suggest - the document itself mentions this because they sold hint books for some titles separately.
[0] https://ia601302.us.archive.org/1/items/InfocomCabinetMiscSa...
>Sorry, I can't do that.
'Modern' Z-Machine games (the v5 format, compared to Infocom's original v3) will allow you to do that and far more. By 'modern' I mean from the early 90s and up. Even more with v8 games.
>This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
The original (raw) v3 Z-Machine parser was pretty limited compared to the v5 one, even more so against games made with Inform6 and the Inform6 English library targeting the v5 format.
Go try it yourself. The original Zork from MIT (Dungeon), converted into a v5 Z-Machine game:
https://iplayif.com/?story=https%3A%2F%2Fifarchive.org%2Fif-...
Spot the differences. For instance, you could both say 'take the rock' and, later, say 'drop it'.
Now, v5 games are from the late 80s/early 90s. There's Curses, Jigsaw, Spider and Web... v8 games are like Anchorhead, pretty advanced for its time: https://ifdb.org/viewgame?id=op0uw1gn1tjqmjt7
You can download the Z8 file and play it with Frotz (Linux/BSD), WinFrotz, or Lectrote on Android and anything else, or play it online with the web interpreter.
Now, the 'excuses' about the terseness of the original Z3 parser are nearly void, because with PunyInform a lot of Z3-targeting games (for 8086 DOS PCs, C64s, Spectrums, MSXs...) have a slightly improved parser compared to the original Zork game.
This article is deep, important, and easily misinterpreted. The TL;DR is that a plausible business model for AI companies is centered around surveillance advertising and content gating like Google or Meta, but in a much more insidious and invasive form.
Worth reading to the end.
I found the article to be no more than a rant about something the author is just projecting. The browser may not be for everyone, but I think there's a lot of value in an AI tool that helps you find what you're looking for without shoving as many ads as possible down your throat, while summarizing content to your needs. Supposing OpenAI is not a monster trying to kill the web and lock you up, can't you see how that may be a useful tool?
Atlas feels more like a task tool than a browser. It’s fast, but we might lose the open web experience for convenience.
The SV playbook is to create a product, make it indispensable and monopolise it. Microsoft did it with office software. Payment companies want to be monopolies. Social media are of course the ultimate monopolies - network effects mean there is only one winner.
So I guess the only logical next step for Big AI is to destroy the web, once they have squeezed every last bit out of it. Or at least make it dependent on them. Who needs news sites when OpenAI can do it? Why blog when you can just prompt your BlogLLM with an idea? Why comment on blogs when your agent will do it for you? All while avoiding child porn with 97% accuracy, something human-curated content surely cannot be trusted to do.
So I am 0% surprised.
It's really crazy that there is an entire AI-generated internet. I have zero clue what the benefit of using this would be to me. Even if we argue that it has fewer ads and such, that would only last until they garner enough users to start pushing charges. Probably through even more obtrusive ads.
I also need to laugh. Wasn't OpenAI just crying about people copying them not so long ago?
The purpose is total control. You never leave their platform, there are no links out. You get all of your information and entertainment from their platform.
It’s also a classic tactic of emotional abuse:
https://www.womenslaw.org/about-abuse/forms-abuse/emotional-...
I normally don't waste a lot of energy on politics.
But this feels truly dystopian. We here on HN are all in our bubble, we know that AI responses are very prone to error and just great in mimicking. We can differentiate when to use and when not (more or less), but when I talk to non-tech people in a normal city not close to a tech hub, most of them treat ChatGPT as the all-knowing factual instance.
They have no idea of the conscious and unconscious bias in the responses, based on how we ask the questions.
Unfortunately I think these are the majority of the people.
If you combine all that with a shady Silicon Valley CEO under historic pressure to make OpenAI profitable after $64 billion in funding, regularly flirting with the US president, it seems only consequential to me that exactly what the author described is the goal. No matter the cost.
As we all feel like AI progress is stagnating and mainly the production cost to get AI responses is going down, this almost seems like the only out for OpenAI to win.
1.0 - algorithmic ranking of real content, with direct links
2.0 - algorithmic feeds of real content with no outbound links - stay in the wall
3.0 - slop infects rankings and feeds, real content gets sublimated
4.0 - algorithmic feeds become only slop
5.0 - no more feeds or rankings, but on demand generative streams of slop within different walled slop gardens
6.0 - 4D slop that feeds itself, continuously turning in on itself and regenerating
Atlas confuses me. Firefox already puts Claude or ChatGPT in my sidebar and has integrations so I can have it analyze or summarize content or help me with something on the page. Atlas looks like yet another Chromium fork that should have been a browser extension, not a revolutionary product that will secure OpenAI's market dominance.
Yep. I was playing around with both Atlas and Comet and, security and privacy issues aside, I can’t figure out what they’re for or what the point is.
Except one: it gives them the default search engine and doesn’t let you change it.
I asked Atlas about this and it told me that’s true, the AI features are just a hook, this is about lock in.
Make of that what you will.
I think the idea of "we're returning to the command line" is astute tbh, I've felt that subconsciously and I think the author put it into words for me.
The article does taste a bit "conspiracy theory" for me though
I think we're returning to CLIs mostly because typing remains one of the fastest ways we can communicate with our computers. The traditional limitation was that CLIs required users to know exactly what they wanted the computer to do. This meant learning all commands, flags etc.
GUIs emerged to make things easier for users to tell their computers what to do. You could just look at the screen and know that File > Save would save the file instead of remembering :w or :wq. They minimized friction and were polished to no end by companies like MSFT and AAPL.
Now that technology has gotten to a point where our computers can bridge the gap between what we said and what we meant reasonably well, we can go back to CLIs. We keep the speed and expressiveness of typing but without the old rigidity. I honestly can't wait for a future where we evolve interfaces into things we previously only dreamt of.
It’s less rigid than a command line but much less predictable than either a CLI or a GUI, with the slightest variation in phrasing sometimes producing very different results even on the same model.
Particularly when you throw in agentic capabilities, where it can feel like a roll of the dice whether the LLM decides to use a special-purpose tool or just wings it and spits out its probabilistic best guess.
that being said, asking chatgpt to do research in 30 seconds for me that might require me to set aside an hour or two is causing me to make decisions about where to tinker and ideas to chase down much faster
can never go back
It’s not so much a conspiracy theory as it is a perfect alignment of market forces. Which is to say, you don’t need a cackling evil mastermind to get conspiracy-like outcomes, just the proper set of deleterious incentives.
Another edition of “if it’s free you are their product”…
It’s not entirely free though, agent mode and a few other features are paid. I’m paying $200/mo to OpenAI for my subscription
this website is best viewed on Atlas Browser 25A362 in 1920x1080 resolution
What now remains is, after hearing glowing feedback, Satya making this the default browser in Windows as part of Microsoft and OpenAI's next chapter.
Eh I use ChatGPT for so many things I realize how many projects I used to just let go by.
Me too, and as the number and maturity of my projects have grown, improving and maintaining them all together has become harder by a factor I haven’t encountered before
1 - nobody cares about being "pro-web" or "anti-web"
2 - we didn't leave command-line interfaces behind 40 years ago
1 - I care
2 - That's an entirely different situation and you know it.
Every professional involved in SaaS, web, or online content creation thinks the web is a beautiful thing.
In reality, the fact of social media means the web failed a long time ago, and it only fills a void not taken by mobile apps, and now LLM agents.
Why do I need to read everything about Taylor Swift on her web site if I don't know a single song of hers? (I actually do.)
I don't want a screaming website telling me about her best new album ever and her tours if the LLM knows I don't like pop music. And the other way around: if you like her, you'd get a different set of information. A website can't do that for you.
OpenAI should be 100% required to rev share with content creators (just like radio stations pay via compulsory licenses for the music they play), but this is a weird complaint:
> “sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box”
If a human wrote that same article about Taylor Swift, would you say it completely fabricates content? Most “articles” on the web are just rewrites of someone else’s articles anyway and nobody goes after them as bad actors (they should).
At this point, my adoption of AI tools is motivated by fear of missing out or being left behind. I’m a self-taught programmer running my own SaaS.
I have memory and training enabled. What I can objectively say about Atlas is that I’ve been using it and I’m hooked. It’s made me roughly twice as productive — I solved a particular problem in half the time because Atlas made it easy to discover relevant information and make it actionable. That said, affording so much control to a single company does make me uneasy.
Not sure why this got downvoted, but to clarify what I meant:
With my repo connected via the GitHub app, I asked Atlas about a problem I was facing. After a few back-and-forth messages, it pointed me to a fork I might eventually have found myself — but only after a lot more time and trial-and-error. Maybe it was luck, but being able to attach files, link context from connected apps, and search across notes and docs in one place has cut a lot of friction for me.