jauntywundrkind 2 days ago

Having smaller scaling units is a convenience. Yes. The mainframe world has tended to require massive upgrades to scale, yes.

Yet, in many ways, the world we are in today is actually intensely terminal based. When you are on Facebook, on Google, ask yourself where the compute is happening. Ask yourself what you think the split is, what the ratio of client : server compute is. My money is that Facebook and Google spend far more compute than I do when I use their services. And that's with these companies spending probably billions of dollars carefully optimizing their server side architectures to relentlessly reduce cost.

The hyperscalers use lots of powerful, dense, small servers. It could be argued that these servers more closely resemble a shared personal-computer model, since there are many dozens of them in a rack rather than one big system. But still, each of these systems hosts many dozens of requests at any given point in time, and is being used as a shared system with many terminals connected to it.

Last, I do think we have mis-optimized like hell, have almost no data points about how incredibly bad a job we are doing, and have almost no idea what a thin-client / terminal-based world could look like. Much of the reason for personal computers is that individuals had power over these systems, that it didn't take departmental IT approval for each use case: you have your computer and do with it as you will. That organizational complexity pushed us away from the terminal model long, long, long ago.

But today, there are enormously popular container options. There are incredibly featureful container-based development options, with web and GUI options abounding. Toolbox is a great example. https://github.com/containers/toolbox I guess it could be argued that this brings a personal computer model to a shared terminal system, hybridizing the two?

But I long to see better, richer shared systems that strive to create interconnected multi-user systems that we terminal into. There is so much power on tap now, so many amazing many-core chips that we could be sharing very effectively and offloading work to, while sparing our batteries & drastically reducing our work:idle compute ratios.

  • mananaysiempre 8 hours ago

    > Yet, in many ways, the world we are in today is actually intensely terminal based.

    At the same time, I feel that in many cases that’s because it’s easier to bill users as a provider of terminals instead of a provider of personal computers. Depending on interest rates, being billed for connecting by terminal can also be more attractive to a business.

    For the absolute overwhelming majority of businesses, there is zero reason to deal with the failure modes and resulting tremendously complex engineering that comes with running the infrastructure for Amazon. (It’s not that, for example, the small number of computers they need can’t fail, it’s that, unlike Amazon, they’d need to plug a lot of other much more probable disaster-planning holes before the probability of such failure rises above the noise.) Yet many do. For the absolute overwhelming majority of individuals, their file-sharing needs do not require a distributed storage system sized for Google. Yet many use one. And then they have to deal with the fact that they are small enough that Amazon’s and Google’s infrastructure could literally never work for them and still not cause even the slightest dent in the overall uptime percentage[1].

    [1] http://rachelbythebay.com/w/2019/07/15/giant/

  • toast0 8 hours ago

    > My money is that Facebook and Google spend far more compute than I do when I use their services.

    It kind of depends. UI rendering and painting is actually a lot of work. TLS handshaking with elliptic-curve keys is close to balanced (it may be more work for the client). TCP connections are usually harder for the client than the server. Giant services have giant compute farms, but they're also doing work for millions of users at any one time.
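
    A rough way to poke at that balance (just a sketch, using Python's cryptography package; it ignores certificate-chain verification and all of the symmetric crypto): in an ECDHE + ECDSA handshake, both sides do one key agreement, the server signs, and the client verifies.

        import time
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import ec

        # Rough sketch of the per-side public-key work in an ECDHE + ECDSA handshake:
        # both sides do one ECDH agreement; the server signs, the client verifies.
        server = ec.generate_private_key(ec.SECP256R1())
        client = ec.generate_private_key(ec.SECP256R1())
        transcript = b"stand-in for the handshake transcript hash"
        sig = server.sign(transcript, ec.ECDSA(hashes.SHA256()))

        def bench(label, fn, n=2000):
            t0 = time.perf_counter()
            for _ in range(n):
                fn()
            print(f"{label}: {(time.perf_counter() - t0) / n * 1e6:.1f} us/op")

        bench("ECDH agreement (both sides)", lambda: client.exchange(ec.ECDH(), server.public_key()))
        bench("ECDSA sign (server)", lambda: server.sign(transcript, ec.ECDSA(hashes.SHA256())))
        bench("ECDSA verify (client)", lambda: server.public_key().verify(sig, transcript, ec.ECDSA(hashes.SHA256())))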

    • jauntywundrkind 6 hours ago

      I don't have the direct insight to know, but I'd put money on running the personalized recommendation algorithms (like Google and Facebook do) taking a good deal more effort than UI / render work.

      Walking down the rabbit hole, we get into interesting realms when we talk about UI: there's layout compute, then actual render compute. The latter is gonna land partly on the GPU, and is intense compute, but it's done on this absurdly efficient, ultra-parallel, highly tuned computer: the amount of compute is high, but it's got all this hardware tuned specifically for rendering. There's a nearly philosophical question of how we want to regard that compute; significant if measured absolutely, small-ish if measured in terms of energy used.

      Keeping the page alive, loaded, and scrolling is going to drain some compute. Maybe we deduct the compute required to keep the OS up, or an empty app or page running? Discard the parts that aren't the workload.

      My understanding is that when a request comes in to Google Search and many other systems, there's often a fan-out to hundreds of systems, with unimaginably many little bits of work and telemetry flying around. And then double that again for ad-buy auctions! I'd love a resource that explains this better than I'm doing here, but I tend to think there's just a wild amount of work and data shuffling around as requests come in, and that that's why these hyperscalers need millions upon millions of servers.
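
      To make the shape of that concrete, here's a toy sketch of the fan-out pattern (my own illustration with made-up names, not Google's actual architecture): one frontend request queries many backend shards concurrently, and the slowest shard sets the tail latency of the whole request.

          import asyncio
          import random

          async def query_shard(shard_id, query):
              # stand-in for an RPC to one backend "shard"
              await asyncio.sleep(random.uniform(0.005, 0.050))
              return [f"shard{shard_id}: hit for {query!r}"]

          async def handle_request(query, num_shards=200):
              # fan out to every shard in parallel, then merge the partial results
              partials = await asyncio.gather(
                  *(query_shard(i, query) for i in range(num_shards))
              )
              return [hit for part in partials for hit in part][:10]

          print(asyncio.run(handle_request("terminals")))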

ggm 2 days ago

I think when commodity NAS arrived and home users had some expectation of reliable storage at home, combined with ubiquitous cloud storage, centrally managed systems became less interesting, to me at least. Once I realised I didn't run CPU-intensive code, the benefits were a managed, reliable filestore and access to other people's data.

What I'm left with is old timer regret for time passing. The ozone rich, oily smell of a well managed machine room. A display of lights showing intensity of work on a CPU. Tapes stuttering. Well.. romance is fine. We got work to do old timer.

trebligdivad 8 hours ago

Another roll of the 'wheel of reincarnation' http://www.catb.org/jargon/html/W/wheel-of-reincarnation.htm...

as we move back and forth between terminals and big machines over time.

  • ethan_smith 7 hours ago

    The wheel of reincarnation perfectly captures this pattern - we've cycled from mainframes to PCs to cloud and now edge computing is pushing processing back to endpoints again, driven primarily by latency requirements and data sovereignty concerns.

  • xeonmc 7 hours ago

    Similarly: pocket watches -> wristwatches -> phones -> smartwatches

socalgal2 6 hours ago

Is that dichotomy really appropriate anymore? To me a terminal really has no power. Everything has to be rendered on the server. Photoshop or Blender over a "terminal" would suck.

Google Maps doesn't render on the server. It sends the data to your device and your device renders it, in 3D if you want.

https://postimg.cc/LnBtLfWf

That seems far removed from "terminals" to me. You could say it's just a smarter terminal, but then is multiplayer Elden Ring just a smarter terminal that happens to cache some data locally?

  • smadge 40 minutes ago

    A telnet terminal isn’t a terminal by this logic because the text isn’t rendered on the server. A telnet server sends data to your device and your device renders it.

bensandcastle 7 hours ago

The scaling issue the author identifies is exactly what we've built Strong Compute to solve: dynamically scalable supercomputing.

"In this environment the problem with a single big unit is that significant updates to it are likely to be quite expensive and you probably simply won't have that money all in one lump at once."

This is true if you approach it as a long-term lease or a fixed purchase, which is often a requirement given that even with on-demand or spot instances there's still a lot of setup work to do. Our focus is to crush this issue: zero code change from a small VM to a large cluster.

tacticus 8 hours ago

One aspect missed about why VDI-style solutions have become popular is that they do reduce some security overheads (kinda, in some ways) while making the vendors happy with the additional licenses everyone needs.

nixpulvis 7 hours ago

This seems at odds with the success of cloud computing. Though I generally believe personal computing is more empowering and therefore should be preferred.

  • michaelmior 6 hours ago

    One big difference (at least for big providers such as Azure, AWS, and Google Cloud) is that they have so many customers, they can still scale incrementally, just with much larger increments than a personal computer.

rbc 7 hours ago

Another way to think about it is that the terminal has become software, like many other things. Web browsers have become a terminal of sorts too.

superkuh 8 hours ago

It's a sad day when a text blog goes $bleedingedgebrowser only. Apparently I'm using a suspiciously old browser and I don't get to see the text.

  • stockerta 3 hours ago

    In an older blog post he said that he will block some browser user agents because AI scrapers are using them to identify themselves.

  • o11c 7 hours ago

    Er, what? It works fine for me even on `lynx`. And I don't see any scripts, so unless it's user-agent sniffing or doing something based on IP reputation, I'm not sure what you're seeing.