jbreckmckye 21 hours ago

About a year ago I was looking at Crash Bandicoot timer systems and I found that Crash 3 has a constantly incrementing int32. It only resets if you die.

Left for 2.26 years, it will overflow.

When it does finally overflow, we get "minus" time and the game breaks in funny ways. I did a video about it: https://youtu.be/f7ZzoyVLu58
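For the curious, the 2.26-year figure falls out of simple arithmetic. A quick sketch, assuming the counter ticks once per frame at 30 Hz (my assumption; NTSC game logic often runs at 30 fps):

```python
# Time for a 32-bit signed frame counter to overflow, assuming a 30 Hz tick.
FRAME_HZ = 30
INT32_MAX = 2**31 - 1              # 2,147,483,647
seconds = (INT32_MAX + 1) / FRAME_HZ
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2f} years")        # ~2.27 years of continuous play
```

At 60 Hz it would halve to about 1.13 years, so the figure is consistent with a 30 Hz tick.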

  • jsheard 21 hours ago

    There's a weapon in Final Fantasy 9 which can only be obtained by reaching a lategame area in less than 12 hours of play time, or 10 hours on the PAL version due to an oversight. Alternatively you can just leave the game running for two years until the timer wraps around. Slow and steady wins the race.

    https://finalfantasy.fandom.com/wiki/Excalibur_II_(Final_Fan...

    • Gravityloss 9 hours ago

      I'm reminded of this quote from Ferdinand Porsche:

      "The perfect racing car crosses the finish line first and subsequently falls into its component parts."

      Games fit this philosophy, unlike many other pieces of software that are expected to be long-lived and to receive a lot of maintenance and change as they evolve.

      • WJW 9 hours ago

        The Porsche quote reflects a wider design philosophy that says "Ideally, all components of a system last as long as the design life of the entire system, and there should be no component that lives significantly longer. If there is such a component, it has been overengineered, and thus the system will be more expensive to the end consumer than it needs to be." It kinda skips over maintenance, but overall most people find it unobjectionable when stated like this.

        But plenty of people will complain when they try to drive their car beyond its design specs and more or less everything starts failing at once.

        • creaturemachine 7 hours ago

          Porsche was talking about racing, where the primary focus is reaching the finish line faster than anyone else, and over-engineering can easily get in the way of that goal. Back in the real world, no race team would agree that their cars should disintegrate after one race.

          • AzN1337c0d3r 6 hours ago

            > Back in the real world, no race team would agree that their cars should disintegrate after one race.

            Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?

            • jperras 5 hours ago

              If you go back further than that, teams used to destroy entire engines for a single qualifying session.

              The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.

              They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.

              After which the engine was basically unusable, and so they'd put in a new one for the race.

              • gnatolf 38 minutes ago

                A current example would be drag racing cars, whose motors are designed and run in such a way that they only survive about 800 total revolutions.

            • creaturemachine 5 hours ago

              Yup, cigarette money enabled all kinds of shenanigans. Engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year that specified engines must last the entire race weekend and introduced penalties for swaps.

              • lostlogin 4 hours ago

                > cigarette money enabled all kinds of shenanigans.

                It still does. New Zealand has a crop of tobacco funded politicians.

                • lawlessone 3 hours ago

                  >New Zealand has a crop of tobacco funded politicians.

                  when they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?

              • TylerE 5 hours ago

                F1 income is way way higher than the 80s.

            • kllrnohj 5 hours ago

              Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.

              You can really see this on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.

            • pfdietz 4 hours ago

              Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.

        • ortusdux 7 hours ago

          Anyone can build a bridge, but it takes an engineer to barely build a bridge.

          • mikepurvis 6 hours ago

            Alan Weisman's lovely book The World Without Us speculates a bit about this, basically saying that more recently built structures would be the first to collapse because they've all been engineered so close to the line. Meanwhile, stuff that's already been standing for 100+ years, like the Brooklyn Bridge, will probably still be there in another 100 years even without any maintenance, just on account of how overbuilt it all had to be in an era before finite element analysis.

          • The_Fox 2 hours ago

            This is a great quote for the topic, but the quote is normally about a bridge that barely stands.

            I'm chuckling at the thought of barely building something. (All in good fun, thank you.)

        • signalToNose 8 hours ago

          Consumer protection laws prevent businesses from following this to its extreme. For many businesses the ideal would be to sell stuff that breaks down as soon as it's sold; at that point it has fulfilled its purpose from their point of view.

          • delichon 8 hours ago

            I run sous vide cookers 24*7, and they uniformly break within 90 days. But they don't like to admit their limited duty cycle, so they don't, and keep sending me warranty replacements instead. I keep buying different brands looking for one with a longer life. I'll bet most people do that when their gadgets die, and purposely making products that die as soon as sold isn't often a successful business model.

            • rlander 7 hours ago

              That’s not a small cycle count for a normal household. 90 × 24 = 2,160 total hours.

              I sous vide now and then, about twice a week for 6 hours each, so around 12 hours a week. That works out to roughly 15 years of usable machine time for the average person.

              Not bad at all.

              • josephg 7 hours ago

                Photography is the same way. Most SLR / DSLR / mirrorless cameras have a mechanical shutter which is expected to last around 200k-1m activations. I've had a camera for a bit over a year. I've used it quite heavily, and my shutter count is at about 13k photos. At this rate, the shutter will probably last for 20+ years - which seems fine. If I'm still using the camera by then, spending a few hundred dollars to replace the shutter mechanism sounds totally reasonable.

              • plywoodShadow 7 hours ago

                2160/12 is 180 weeks, or roughly 3.5 years, not 15 years

              • 47282847 7 hours ago

                Assuming linearity, which I doubt is the case.

              • account42 7 hours ago

                You think a measly 360 uses at your 6 hours typical operation is even remotely acceptable for a glorified heating element?

                And yes, 15 years is bad. I don't want to replace my entire household every 15 years FFS.

            • hnuser123456 7 hours ago

              Are there not industrial ones meant to last longer? Maybe you can buy a used but good condition one of those.

              • WJW 7 hours ago

                There are, and if you really have the workload that you need to cook stuff 24/7 (what in god's name is OP cooking, btw?) then you should definitely get one of those. Maybe not even secondhand but just a new one. The cheap consumer grade ones are meant for people who use them once or twice a year.

                This is a fine example of what I meant about people complaining when they use products beyond their design parameters.

                • tracker1 6 hours ago

                  I got one that seems to be kind of in the middle, it's better built than most of the consumer models but not quite as "industrial" feeling as some of the commercial models. I use it a few times a week for a few hours each.

                  I'm on a mostly carnivore, mostly ruminant meat diet and for costs tend to do a lot of ground beef... I sous vide a bunch of burgers in 1/2lb ring molds, refrigerate and sear off when hungry. This lets me have safer burgers that aren't overcooked. I do 133F for 2.5+ hours.

                  I also do steaks about once or twice a week. I have to say it's probably the best kitchen investment I could have made in terms of impact on the output quality.

                • elzbardico 7 hours ago

                  It is easy to end up running a bunch of sous vide cookers 24/7 if you have a small restaurant or food delivery business.

                  • compiler-guy 5 hours ago

                    In which case one shouldn't be using consumer-grade kitchen equipment.

                • lawlessone 3 hours ago

                  If the manufacturers keep replacing the machines because they're within warranty isn't this cheaper for OP?

              • mattkrause 6 hours ago

                Definitely -- get something meant for a lab. I worked in one that had a 150F water bath running day and night.

            • cestith 7 hours ago

              A friend of mine gets new headphones/headsets every six to eighteen months, and hasn’t bought a pair entirely out of pocket in years. For him it’s all down to buying the Microcenter protection plan every time they’re replaced. They fail, he takes them back, he gets store credit for the purchase price, and he buys a new set and a new plan. He doesn’t even care about the manufacturer’s warranty anymore.

              Personally, for most of my headphones I look for metal mechanical connections instead of plastic, and I buy refurbished when I can. I think I pay about as much as he does or less, but we haven't really hashed out the numbers together. I'm typing this while wearing a HyperX gaming headset I bought refurbished that's old enough that I've replaced the earpads while everything else continues to work.

              Computers and computer parts often have, in my experience, a better reliability record competently refurbished than when they first leave the factory too. I wonder if sous vide cookers would.

            • account42 7 hours ago

              Well from an evil business perspective their options are either

              - the product doesn't break and you don't buy a replacement from them because you still have a working product

              - the product breaks and there is a greater than 0% chance that you will buy a replacement product from them

              Of course in practice it's more complicated but I wouldn't be so quick to declare that the math doesn't work out.

            • muzani 6 hours ago

              What do you sous vide 24*7? It sounds like it would be party grounds for bacteria. Also curious if the bags and other components break as well.

              • delichon 6 hours ago

                Beef, lamb, sometimes pork. I have a daily meal of a cheap, tough cut of meat cooked for 48 hours at 150F.

                Sous vide is generally not a bacterial growth risk above 140F. At 150F throughout, you get decent pasteurization in under two minutes. Two days of that is such extreme overkill that I'm concerned about the nutritional effect of overcooking.

                The Food Saver style vacuum sealers fail fast for me, so I bought a $400 chamber sealer, and I'm on year 5 with it.

                • Nathanael_M 6 hours ago

                  I think I love you? This is great. Do you have them running in arrays of 3? What's your favourite cut? What's the best cost:deliciousness cut? What bags do you use to minimize plastic leaching?

                  • delichon 5 hours ago

                    It's just me, so I only need one running at a time. Every day I take one serving out and put another one in. I clean the tank about once per week, or if something breaks. My favorite is short ribs, my daily drivers are chuck roast or shank. The prices have skyrocketed in the last few years. I buy in bulk on sale and portion it into bags with a chamber style vacuum sealer. It goes straight from the freezer into the tank.

                    • Nathanael_M 5 hours ago

                      Do you take pride in knowing that you eat cooler than anyone else, because you should.

                      Short rib is shocking where I am. Even chuck is pushing past $15 a pound.

                      What are you doing for sides/sauce? Generally when I think braise/sous-vide I think some rich, flavourful sauce, but that seems impractical for daily consumption.

                      • delichon 5 hours ago

                        Chuck on sale is now $8 a pound, more than double since Covid started. I am eating less of it and more ground beef, pork and eggs.

                        I crisp it up in an air fryer before serving. Here's the full ingredient list: meat, butter, salt. After five years I still look forward to every repeat.

                        I just replaced an air fryer that lasted two years of daily use, a personal record. I was ready to replace it anyway, because they accumulate grease where you can't clean, and the smell gets interesting.

        • doubled112 8 hours ago

          When the design spec seems to be a 3 year long lease I can see why people get bothered.

      • aleks224 4 hours ago

        There's a quote in the bible that says something similar:

        "Verily, verily, I say unto you, Except a corn of wheat fall into the ground and die, it abideth alone: but if it die, it bringeth forth much fruit.”

        (John 12:24)

    • lelandfe 20 hours ago

      So the invisible 12h timer runs during cutscenes. During Excalibur 2 runs, I used to open and close the PS1 disc tray to skip (normally unskippable) cutscenes. Never knew why that worked.

      (I also never managed to get it)

      • jonhohle 19 hours ago

        I'm going to wager that the cutscenes are all XA audio/video DMA'd from the disc. Opening the tray kills the DMA, and the error recovery is just to end the cutscene and continue. The program is in RAM, so a little interruption in reading doesn't hurt, unless it happens to coincide with reading the file for the next section of gameplay.

        • ad133 11 hours ago

          This is significantly better handling than the previous game (Final Fantasy VIII). My disc 1 (it had four discs) got scratched over time (I was a child, after all), and the failure mode was just to crash, so the game was unplayable. The game had a lot of cutscenes.

        • Insanity 19 hours ago

          That’s a solid guess. And if that’s the case, that’s actually pretty good error handling!

          • Jare 14 hours ago

            I recall that handling disc eject was an explicit part of the Tech Requirements Doc (things the console manufacturer requires you to comply with). They'd typically check while playing, while loading and while streaming.

      • p1necone 20 hours ago

        > Never knew why that worked.

        I'm guessing the game probably streams FMV cutscenes off the disc as they play, and the fallback behaviour if it can't find them is to skip rather than crash.

    • jbreckmckye 21 hours ago

      Oh yeah. The sword you pick up in Memoria. The problem there is that the PAL version runs slower; the way PSX games "translated" between the two video systems was just to have longer VSync pauses for PAL. So the game is actually slower, not interpolated

      • reactordev 20 hours ago

        Longer vsync pauses but larger frame time deltas so it’s basically the same speed of play. The only thing that was even noticeable was the UI lag.

        • fredoralive 15 hours ago

          Erm. No, like lots of games during the era quite a lot of stuff is tied to the frame rate, so the 50Hz region game just runs slower than the 60Hz one as next to nobody bothers to adjust for it. The clock for the hidden weapon does run at the same rate for both unfortunately, hence it being harder to get in 50Hz regions.

          • reactordev 13 hours ago

            Incorrect. I’m looking at the source code. It’s not perfect but it’s not just “slowed down to 50hz” like people claim.

            • jbreckmckye 11 hours ago

              When you say looking at the source code, what do you mean here?

              AFAIK the source for FF9 PSX (and all the PSX ff games) has been lost as Square just used short term archives

              Also, FF9 does not run at a constant framerate. Like all the PSX FF games it runs at various rates, sometimes multiple at a time (example: model animations are 15fps vs 30 for the UI)

              In terms of timers, the bios does grant you access to root timers, but these are largely modulated by a hardware oscillator

              (Incidentally, the hardware timing component is the reason a chipped PAL console cannot produce good NTSC video. Only a Yaroze can support full multiregion play)

              • reactordev 9 hours ago

                It’s definitely not lost…

                • jbreckmckye 9 hours ago

                  What code are you looking at?

                  FFIX for PSX would have been written in C (or possibly C++) with PSY-Q. It will not be one program - those games were composed of multiple overlays that are banked in / out over the PlayStation's limited memory.

                  From what I know the PC release was a port to a new framework, which supports the same script engines, but otherwise is fresh code. This is how it can support mobile, widescreen, Steam achievements etc.

              • anthk 9 hours ago

                FF VII-IX were reimplemented under a custom engine.

                • reactordev 5 hours ago

                  Except I’m looking at the original source, not the remake, the crappy C/C++ Square engine. Not C# unity code.

                  There are a number of timers and things used. But the claim that it runs slower is absolutely false. It’s just perceived that way because it’s “drawn” slower.

                  • jbreckmckye 4 hours ago

                    Firstly, could you elaborate what code you're looking at? Square have never shared the source code for these titles and were not even practicing real version control at this time (see: Eidos FF7/8 debacle)

                    Secondly, it absolutely will run slower. Animations will take longer to complete; FMVs will play at a different rate; controller sampling will be reduced.

                    My scepticism isn't coming from hearsay or ignorance: I have written PlayStation software, and PSX software is not parallelised, even though it can support threading and cooperative concurrency. The control flow of the title is very locked into the VSync loop, from your first ResetGraph(0) right to your final DrawOTag(p).

                    In addition, I have done a bunch of reversing work on the other two PSX games, and they are not monolithic programs. They can't be because there simply isn't enough RAM to store the .TEXT of the entire thing at once. So when you say "the source code", I'm inclined to ask - for which module? The kernel or one of the overlays?

          • mungoman2 15 hours ago

            Wouldn't a slower tick make it easier, as you get more wall time to do the same challenge?

            • fredoralive 15 hours ago

              No? Wall time (that the challenge runs on) is unchanged, game time (Vsync) is running at 83% of full speed (50Hz vs 60Hz), so if something tied to frame rate (animation, walking speed etc.) takes 1 second to do on NTSC, it'll take 1.2 seconds to do on PAL etc.
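              A tiny sketch of that arithmetic, using a hypothetical one-second animation authored for NTSC:

```python
# Anything tied to the frame counter stretches by the ratio of refresh rates.
NTSC_HZ, PAL_HZ = 60, 50
frames = 60                 # a one-second animation authored for NTSC
print(frames / NTSC_HZ)     # 1.0 s on NTSC
print(frames / PAL_HZ)      # 1.2 s on PAL
print(PAL_HZ / NTSC_HZ)     # 0.833...: PAL runs at ~83% of NTSC speed
```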

    • elcritch 7 hours ago

      We should rally together to force game companies to use 32 bit timers rather than 64bit ones so we can keep finding these fun little glitches. The time to protect overflows is now! ;)

    • BolexNOLA 9 hours ago

      Lord have mercy fandom really has become unbearable with the ads and pop ups.

      • coldpie 8 hours ago

        Install an ad blocker.

        • BolexNOLA 8 hours ago

          I opened this on an iPhone which has fewer adblock options. Desktop is better locked down.

          Regardless I can still complain about how intrusive the ads are.

          • coldpie 7 hours ago

            There are many ad block options on iPhone. I currently use Wipr 2, but in the past I've used both 1Blocker and AdBlock Pro with success.

          • account42 7 hours ago

            Don't accept devices that limit your ad blocker options.

            • BolexNOLA 6 hours ago

              Does this discussion strike you as one where I’m deliberating whether or not to chuck my smartphone and buy into a new ecosystem to avoid ads on fandom?

              These types of comments are always very unhelpful.

              • ogurechny 4 hours ago

                No, that's just a reminder that you had a choice, and chose empty talk about “ecosystems” over ability to control what you can see on “your” screen. You've stepped on a rake once, you got some experience, why repeat it over and over again?

                • BolexNOLA 3 hours ago

                  Or another option: we could remember that the ultimate offender here is Fandom.

                  My choice of device is irrelevant when assessing their crappy site.

          • JustExAWS 7 hours ago

            I just opened this on my iPhone with 1Blocker installed. I saw no ads. It's been around since iOS 8.

            • BolexNOLA 6 hours ago

              Never heard of it, appreciate the recc!

              Edit: ah only works on safari

              • mrguyorama 6 hours ago

                You are on iOS. There is only safari. Any other "web browser" is just a skin over safari

                • BolexNOLA 5 hours ago

                  Yes I know everything is wrapped around safari. But I like having Firefox syncing across devices.

                  Edit: ah forgot my vpn was off, usually clears all that up for me. Much better now

    • debo_ 20 hours ago

      So that's why it's called Excalibur 2!

  • stevage 20 hours ago

    You really managed to make the whole video without making a single "crash" pun? (Those freezes come close enough that you could call them crashes...)

  • xhrpost 7 hours ago

    Is it common to default to a signed integer for tracking a timer? I realize being unsigned it would still overflow but at least you'd get twice the time, no?

    • jbreckmckye an hour ago

      Some C programmers take the view that unsigneds have too many disadvantages: wraparound that silently hides bugs (signed overflow is undefined behaviour, which tools can at least flag), and weird integer promotion rules. So they try to avoid uints.
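      To make the overflow behaviour concrete, here's a small Python sketch (Python ints don't overflow, so the masking below simulates 32-bit storage; the 30 Hz tick rate mentioned is an assumption):

```python
U32_MASK = 0xFFFFFFFF

def as_s32(x):
    """Reinterpret the low 32 bits of x as a two's-complement signed int."""
    x &= U32_MASK
    return x - 0x1_0000_0000 if x & 0x8000_0000 else x

timer = 0x7FFFFFFF               # int32 max, i.e. roughly 2.26 years of 30 Hz ticks
timer = (timer + 1) & U32_MASK   # one more tick
print(as_s32(timer))             # -2147483648: the "minus time" a signed timer sees
print(timer)                     # 2147483648: an unsigned timer just keeps counting
```

      This is why the unsigned version buys you double the time before anything strange happens, at the cost of the wrap being silent.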

    • aidenn0 5 hours ago

      If you get to right before where you need to be (taking as long as you want), then wait until the overflow, you still have 12h to do the last tiny part if it's unsigned.

  • jonhohle 20 hours ago

    I think many games were that way. SotN definitely has a global timer. On a native 32-bit system it makes sense, especially when the life of a game was a few months to a few years on the retail shelf. No player is going to leave their system running for 2.27 years, so what's the point of even testing it?

    Who knew at the time they were creating games that would be disassembled, deconstructed, reverse engineered. Do any of us think about that regarding any program we write?

    • rybosome 30 minutes ago

      It’s a totally reasonable choice in that context.

      I wonder if any sense that this is criticism (or any actual criticism) comes from implementers of SaaS, who have it so deeply ingrained that "haha, what if the users of this software did this really extreme thing" reads more like "oh shit, what if the users of this software did this really extreme thing".

      When I worked on Google Cloud Storage, I once shipped a feature that briefly broke single-shot uploads of more than 2 GB. I didn't consider this use case because it was so absurd: anything larger than 2 MB is recommended to go through a resumable/retryable flow, not a one-shot that either sends it all correctly the first time or fails. Client libraries enforced this, but not the APIs! It was an easy fix with that knowledge, but the lesson remained with me that whatever extreme behaviors you allow in your API will be found, so you have to be very paranoid about what you allow if you don't want to support it indefinitely (which we tried to do; it was hard).

      Anyway, in this case that level of paranoia would make no sense. The programmers of this age made amazing, highly choreographed programs that ran exactly as intended on the right hardware and timing.

    • Gamemaster1379 18 hours ago

      It can be more than timers, too. There's a funny one in Paper Mario where a block can technically be hit so many times it'll reset and award items again. Hit it enough times and it'll eventually crash. Of course it'd take around 30 years for the first rollover and 400 or so for the crash. https://n64squid.com/paper-mario-reward-block-glitch/

    • account42 7 hours ago

      For some games the timer is stored in save files, so it doesn't even have to be continuous play time. 2 years is still longer than anyone is expected to spend on a game.

    • technion 13 hours ago

      Let's say you're pedantic with code. I've been trying to be lately: clippy has an overflow lint for Rust I try to use.

      Error: game running for two years, rebooting so you can't cheese a timer.

      Does this make the bug any better handled? Bugs like this annoy me because they aren't easily answered.

      • account42 6 hours ago

        There are always limits to what a program can do. The only fix is to choose large enough integers (and appropriate units) that you can represent the longest times / largest sizes / etc. that anyone could reasonably encounter. What sizes make sense also depends on how they impact performance, and for a game from the 32-bit era, a crash (controlled abort or not) after over two years is probably a better choice than slowing everything down by using a 64-bit integer.
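        The units matter as much as the width; a quick sketch of how far 32 bits stretches under two common tick choices (the 60 Hz frame tick is my assumption):

```python
# Range of an unsigned 32-bit counter under different tick units.
MS_PER_DAY = 24 * 3600 * 1000
print(2**32 / MS_PER_DAY)          # ~49.7 days for a millisecond tick
print(2**32 / 60 / (24 * 3600))    # ~828 days (~2.27 years) for a 60 Hz frame tick
```

        The 49.7-day figure is the classic wrap time for millisecond uptime counters.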

    • jraph 15 hours ago

      Isn't this common in the computer game scene? Shouldn't you assume your game will be disassembled, deconstructed, reverse engineered?

      Although for old games released before the internet was widespread in the general population, it might not have been as obvious.

      • sim7c00 12 hours ago

        As long as it doesn't lead to online cheats, having such code is fine. If someone wants to reverse the game, find an obscure, almost untriggerable bug, and then trigger it or play with it, so be it. A 2.26-year game session is crazy if it's not a server, and if it is a server, that's still really crazy even for some open-world, open-ended game... it's a long time to keep a server up w/o restarts or anything (updates?).

        Looking at the various comments, there might even be some kind of weird appeal to leaving such things in your game :D for people to find and chuckle about. It doesn't really disrupt the game normally, does it?

        • lstodd 3 hours ago

          > if its a server, thats still really crazy even for some open-world open-ended game... its a long time to keep a server up w/o restarts or anything (updates?).

          Pretty much doable even without resorting to VM migrations or ksplice. My last one had uptime in the 1700s (days). Basically I leased it, put Debian on it, and that was that until I didn't need it anymore.

    • lentil_soup 13 hours ago

      They're still made like this. Just now I made a frame counter that just increments every frame on an int64. It would eventually wrap around, but I doubt anyone will still be around to see it happen :|
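      Back of the envelope, assuming a 60 Hz tick (my assumption):

```python
# Wrap time for a signed 64-bit frame counter at 60 ticks per second.
FRAME_HZ = 60
INT64_MAX = 2**63 - 1
years = (INT64_MAX + 1) / FRAME_HZ / (365.25 * 24 * 3600)
print(f"{years:.1e} years")   # ~4.9e+09 years, roughly the Sun's remaining lifetime
```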

  • teeray 6 hours ago

    The true Time Twister unlocked

Insanity 19 hours ago

Literally unplayable, someone should fix that.

Doom is actually such a good game, I always go back to it every few years. The 2016 reboot is also pretty fun, but the later two in the series didn’t do it for me.

  • jama211 16 hours ago

    Same. Something about the metroidvania design with the home hub of the later ones didn’t give the same feeling. It should be run, kill, find secrets, end, next level.

    • lapetitejort 28 minutes ago

      > find secrets

      I'll be honest, I don't like this part. I'm a rabid collector. If the game gives a metric to an item, I must have all of the items. I end up killing the flow by scouring the level looking for secrets. This is entirely my fault of course

    • jeffwask 7 hours ago

      I just finished RoboCop: Rogue City and it was exactly this: a linear, level-by-level shooter that felt like a pure RoboCop power-fantasy movie. I played New Game Plus, it was so much fun, and I never do that.

      It's like the game industry got a fake memo saying no one wanted linear story-based games anymore. I ended up buying two more Teyon games because I was so happy with their formula, and they are playable in a dozen or so hours. Tight, compact, linear, fun story and gameplay... no MTX or always-online BS, and they don't waste my time with busy work.

    • Insanity 6 hours ago

      This is exactly how I want my FPS games to be. Just linear, run & gun. TBH, I can even do without weapon upgrades or any "RPG" style elements.

      It's even worse in multiplayer games like COD and BF. As soon as I need to figure out combinations of 5x attachments to guns I lose all my interest in playing the game. That's why I'm still on CS I guess lol.

    • bombela 5 hours ago

      The latest DOOM: Dark Ages ditched the home hub. I think it's a really great DOOM game.

      • Insanity 5 hours ago

        I was quite excited for it, despite not enjoying Eternal as much. But after about ~2 hours of playing it, I lost interest. I'm happy you're enjoying it, sadly it didn't click for me.

        Especially the 'mech scale' stuff was just boring. I don't remember what they call it in-universe, but essentially the parts of the game where you're playing from a giant robot and just walking over tanks and fighting supersized demons.

  • xmonkee 19 hours ago

    Same. And love those brutality mods.

  • bitwize 18 hours ago

    Fun fact: Doom is now a Microsoft property, along with Quake, StarCraft, WarCraft, Overwatch, all of the adventure games from Infocom and Sierra, and of course Halo. Microsoft pretty much owns most of PC gaming. Which is what they've wanted since 1996 or so.

    • kodarna 15 hours ago

      They own the past of PC gaming, as well as Call of Duty but that is more popular on consoles than PC nowadays. Those listed are small time compared to Counter-Strike 2, Dota 2, League of Legends, Valorant, Roblox, Apex Legends, Marvel Rivals and a number of hard-hitting games every year such as Witcher 3, Elden Ring, Baldur's Gate 3 etc.

      • account42 6 hours ago

        So in other words, they own the part of PC gaming that's actually good.

    • Novosell 12 hours ago

      They own Minecraft as well.

    • nurettin 16 hours ago

      > Microsoft pretty much owns most of PC gaming.

      So valve next?

      • Lightkey 14 hours ago

        They missed that window when Sierra was still the publisher for Half-Life. Besides, Valve is not a publicly traded company and Gabe Newell as former manager at Microsoft has no interest in getting back together. Valve is betting everything on Linux right now to be more independent from Microsoft.

        • account42 6 hours ago

          All the more reason for Microsoft to make a play now while Valve still at least somewhat depends on them.

          And Gabe won't be around forever and the guy is already over sixty. Statistically he's got about two decades left to live and not all of that will be at a level where he can lead Valve.

        • lukan 10 hours ago

          "Valve is betting everything on Linux right now"

          Not everything, but they do invest in it.

        • simoncion 6 hours ago

          > Valve is betting everything on Linux right now...

          They've been working on Linux support since at least around the time that Microsoft introduced the Windows Store... so for the last twelve years or so.

          And, man, a couple of months ago I figured out how to run Steam as a separate user on my Xorg system. Not-at-all-coincidentally, I haven't booted into Windows in a couple of months. Not every game runs [0], but nearly every game in my library does.

          I'm really gladdened by the effort put in to making this work.

          [0] Aside from the obvious ones with worryingly-intrusive kernel-level anticheat, sometimes there are weird failures like Highfleet just detonating on startup.

          • Insanity 5 hours ago

            I used to game on Linux back in the late 2000s through Wine. And I always found the mouse support to be jarring, even if I could get support to a decent level, for some reason the mouse input was never quite as fluid as it should have been.

            And now I'm reluctant to move back to Linux for gaming, even though they've clearly come so far. I guess I should just go ahead and give it another shot.

            • Spoom 30 minutes ago

              Stating my bias up front, I've been using Linux since Windows Vista, and I'm a fan. That said, I have experienced the same things you did whenever I needed to run Wine for... well, anything. It was clunky as hell.

              You should absolutely revisit. Proton has changed the game. Literally the only game I've tried that was remotely difficult to play in SteamOS is Minecraft, likely because Microsoft owns it now. But I was able to get that working too (if anyone's wondering: you want Minecraft Bedrock Launcher, which is in the Discover store if you're on the Steam Deck and here[1] if you're somewhere else; basically it downloads and runs the Android version of Minecraft through a small translation layer, which is essentially identical to the Windows version).

              Speed is also greatly improved over previous solutions. Games played through Proton are often very close in performance to playing them natively.

            • jerf 4 hours ago

              It has come lightyears.

              ProtonDB has a feature where you can give it access to your Steam account for reading and it'll give you a full report based on your personal library: https://www.protondb.com/profile

              And I find, if anything, it tends to be conservative. I've encountered a few things where it was overoptimistic, but it's outweighed by the stuff that was supported even better than ProtonDB said.

              In the late 2000s, I played a few things, but I went in with the assumption it either wouldn't work, or wouldn't work without tweaking. Now I go in with the assumption that it will work unless otherwise indicated. Except multiplayer shooters and VR.

      • tomwojcik 15 hours ago

        As long as Gabe is alive, no way.

        • HeckFeck 12 hours ago

          We must find a way to extend his life indefinitely.

        • account42 6 hours ago

          *in control of Valve

          Old age can make him give that up before death.

  • shpongled 17 hours ago

    2016 remains one of the greatest single-player FPS games I've played (Titanfall 2 is the other)

  • pizza234 11 hours ago

    I'm under the impression that since Doom Eternal (the first after Doom 2016), the gameplay has considerably shifted to an "interconnected arenas" style, and with more sophisticated combat mechanics. Many games have started adopting this design, for example, Shadow Warrior 3.

    I also dislike this trend. As a sibling comment noted, boomer shooters are generally closer to the old-school Doom gameplay, although some are adopting the newer design too.

    • billyp-rva 7 hours ago

      The enemy cap all but forces the arena-style gameplay. Doom 2016 tried to hide it more, but it still felt very stifling.

shultays 12 hours ago

Does that hardware trap overflows or something?

  I had read an article about how DOOM's engine works and noticed how a variable for tracking the demo kept being incremented even after the next demo started. This variable was compared with a second one storing its previous value

Doesn't sound like something that would crash. I wonder what the actual crash was.

  • Sharlin 10 hours ago

    Signed overflow is undefined behavior in C, so pretty much anything could happen. Though this crash seems to be deterministic between platforms and compilers, so probably not about that. TFA says the variable is being compared to its previous value, and that comparison presumably assumes new < old cannot happen. And when it does, it could easily lead to e.g. stack corruption. C after all happily goes to UB land if, for example, some execution path doesn’t return a value in a function that’s supposed to return a value.

    • account42 6 hours ago

      Just because the language standard allows for anything to happen doesn't mean that actually anything can happen with real compilers. It's still a good question to think about how it could actually lead to a crash.

      • Sharlin 3 hours ago

        That’s what I said? It’s easy to come up with scenarios where signed overflow breaks a program in a crashy way if the optimizer, for example, optimizes out a check for said overflow because it’s allowed to assume that `++i < 0` can never happen if i is initialized to >= 0. That’s something that very real optimizers take advantage of in the very real world, not just on paper. For example, GCC needs -fwrapv to give you guaranteed wrapping behavior (there’s actually -ftrapv, which raises a SIGFPE on overflow – that’s likely the easiest way to cause this crash!)

        But I specifically said that it doesn’t look like SOUB in this particular case, and proposed an alternative mechanism for crashing. What’s almost certain is that some type of UB is involved because "crashing" is not any behavior defined by the standard, except if it was something like an assertion failing, leading to an intentional `abort`.

    • phkahler 9 hours ago

      That doesn't make sense. If new < old can't happen, there is no need to make a comparison. Stack corruption? Nah, it's a counter, not an index or pointer, or it would fail sooner. But then what is the failure? IDK

      • jraph 8 hours ago

        Assuming new > old doesn't mean you actually make the comparison, but rather that the code is written with the belief that new > old. This code behaves correctly under this assumption, but might be doing something very bad that leads to a crash if new < old.

        An actual analysis would be needed to understand the actual cause of the crash.

      • Sharlin 7 hours ago

        Um, there are the cases new == old and new > old. And all the more specific cases new == old + n. I haven’t seen the code so this is just speculation, but there are plenty of ways an unexpected, "can never happen" comparison result can cause immediate UB because there’s no execution path to handle it, causing garbage to be returned from a function (and if that garbage was supposed to be a pointer, well…) or even execution never hitting a `ret` and just proceeding to execute whatever is next in memory.

        Another super easy way to enter UB land by assuming an integer is nonnegative is array indexing.

          int foo[5] = { … }
          foo[i % 5] = bar;
        
        Everything is fine as long as i isn’t negative. But if it is… (note that negative % positive is negative, or zero, in C)

        • account42 6 hours ago

          Dividing by a difference that is suddenly zero is another possibility.

          • ogurechny 5 hours ago

            The error states that the window can't be created. It might be the problem with parameters to the window creation function (that should not depend on game state), or maybe the system is out of memory. Resources allocated in memory are never cleaned up because cleanup time overflows?

            Doom4CE (this port) was based on WinDoom, which only creates the program window once at startup, then switches the graphical mode, and proceeds to draw on screen independently, processing the keyboard and mouse input messages. I'm not sure, but maybe Windows CE memory management forced the programmer to drop everything and start from scratch at the load of each level? Then why do we see the old window?

            There are various 32 bit integer counters in Doom code. I find it quite strange that the author neither names the specific one, nor what it does, nor tries to debug what happens by simply initialising it with some big value.

            Moreover, 2^32 divided by 60 frames per second, then by 60 seconds, 60 minutes, 24 hours, 30 days, and 12 months gives us a little less than 2.5 years. However, Doom gameplay tick (or “tic”), on which everything else is based, famously happens only 35 times a second, and is detached from frame rendering rate on both systems that are too slow (many computers at the time of release), or too fast (most systems that appeared afterwards). 2^32 divided by 35, 60 seconds, etc. gives us about 4 years until overflow.

            Would be hilarious if it really is such an easy mistake.

            • BearOso 3 hours ago

              The VGA 320x200 mode, either 13h or "Mode Y", ran at 70.086 Hz, so that adding up to ~2.5 years is just coincidental.

              It's a shame the source code for doom isn't available, and that the author couldn't just link directly to a specific line in a gitweb repository. /s

spjt 18 hours ago

Just be glad you knew what the bug was before you started. After 2.5 years... "Shit, I forgot to enable debug logging"

jraph 15 hours ago

Notably, DOOM crashed before Windows CE.

  • chatmasta 2 hours ago

    Seriously… I’m most impressed that this PDA kept an application running for 2.5 years. I’d be shocked if any modern hardware could do this, even while disconnected from the Internet.

    • jraph an hour ago

      I'd be more impressed by current software not crashing for 2.5 years than by the hardware, but that might be because I'm a software developer, not a hardware developer :-)

  • wingi 13 hours ago

    Yes, great achievement!

JoshGlazebrook 21 hours ago

2038 is going to be a fun year.

  • kevin_thibedeau 19 hours ago

    Everybody is sleeping on 2036 for NTP. That's when the fun begins.

    • wiredpancake 18 hours ago

      Assuming a correct implementation of the NTP spec and proper handling of its "eras", NTP should be resistant to this failure in 2036.

      The problem is that many microcontrollers and non-interfaceable or cheaply designed computers/devices/machines might not follow the standard and would therefore be susceptible, although your iPhone, laptop and fridge should all be fine.

  • jonhohle 20 hours ago

    That seems much closer than it did in y2k.

  • cestith 6 hours ago

    You have 13 years to upgrade to 64-bit ints or switch to a long long for time_t. Lots of embedded stuff or unsupported closed-source stuff is going to need special attention or to be replaced.

    I know the OpenFirmware in my old SunServer 600MP had the issue. Unfortunately I don’t have to worry about that.

    • chatmasta 2 hours ago

      You’ve got 13 years to update unless any of your code includes dates in the future. Just stay away from anything related to mortgages, insurance policies, eight year PhD programs, retirement accounts…

    • account42 6 hours ago

      Most 32-bit games won't be updated, we'll have to resort to faking the time to play many of them.

      • cestith 2 hours ago

        Most 32-bit games written for some form of Unix will use the system time_t if they care about time. The ones written properly, anyway. Modern Unix systems have a 64-bit time_t, even on 32-bit hardware and OS. If it’s on some other OS and uses the Unix epoch on a signed 32-bit integer that’s another design flaw.

  • pjc50 11 hours ago

    Fixing that is my retirement plan.

cestith 6 hours ago

Once upon a time, Windows NT 4 had a similar bug. Their counter was high precision, though, and tracked the uptime of the system. Back before Service Pack 3 (or was it SP2?) we had a scheduled task reboot the system on the first of the month. Otherwise it would crash after about 49.7 days of uptime (a 32-bit millisecond counter overflowing), because apparently nobody at Microsoft tested their own server OS to run for that long.

Zobat 14 hours ago

This is a level of testing that exceeds what the testers I know commit to. I myself was annoyed yesterday, the five or so times we had to sit and wait for a 30-second timeout to check the error handling in the system I work on.

glitchc 7 hours ago

I love the post, but your blurry text is hurting my eyes. Looks like it's intentionally blurry but I can't figure out why. This can't be a holdover from older systems, they had razor-sharp text rendering on CRTs.

  • jraph 6 hours ago

    Looks crisp on my setup, but I block fonts and scripts. Reader mode is your friend :-)

piker 12 hours ago

Props again to the id team. No doubt something like that engineered by most folks today would have died long before the 2 year mark due to memory fragmentation if not outright leaks.

qiine 7 hours ago

In games I worked on, I used time to pan textures for animated FX.

After a few hours precision errors accumulate and the texture becomes stretched and noisy, but since explosions are generally short-lived it's never a problem.

Yet this keeps bothering me...

ustad 14 hours ago

Was this specific to the PDA port or the core doom code?

@ID_AA_Carmack Are you going to write a patch to fix this?

patchtopic 7 hours ago

I haven't opened my DOOM software box, it's still in the shrinkwrap. I guess I can take it back and ask for a refund now?

0cf8612b2e1e a day ago

I am going to need to see this replicated before I can believe it.

otikik 11 hours ago

Quick! John Carmack needs to be brought into this immediately.

serf 21 hours ago

The easy way to e-Nostradamus predictions:

"See this crash?

I predicted it years ago.

Don't ask me how, I couldn't tell you."

p.s. I had an old iPaq that I wouldn't have trusted to run for longer than a day and stay stable, kudos for that at the very minimum.

  • prmoustache 14 hours ago

    I had an iPaq for a while and I don't remember seeing OS/hardware crashes.

jeffrallen 16 hours ago

This headline gave me a heart attack... I misread the site's name as Lenovo, and as I'm responsible for a whole lot of their servers running for years in a critical role... heart attack.

Maybe I need my morning coffee. :)

  • minki_the_avali 14 hours ago

    I mean I wouldn't mind getting a subdomain there but I do like lenowo more :3

ranger_danger 21 hours ago

Seems to be a PocketPC port of Doom, with no source given or even a snippet of the relevant code/variable name/etc. shown at all.

  • unixhero 21 hours ago

    Yes. It seems like it was the OS that overflowed, and not Doom, in this case.

    • jama211 16 hours ago

      They explained it was in the game code though?

      • unixhero 13 hours ago

        To me, that error message was caused by some panic, after which the OS began gracefully shutting down the application, in this case DooM - which the program would not have done itself. Therefore I conclude it was the OS.

        I am not an OS developer, so I take my own conclusion with a grain of salt.

EbNar 7 hours ago

Love the look of that board :-)

DeathArrow 16 hours ago

It's good it didn't take a billion years to overflow. That would have been quite a long wait.

casey2 19 hours ago

Has this ever come up in a TAS of custom levels?

moomin 14 hours ago

Literally unplayable.

ZsoltT 17 hours ago

glitchless?

sunrunner 21 hours ago

Not a comment on the post, but I sure wish Jira would load even half as quickly as this site.

  • antsar 21 hours ago

    It takes serious hardware investment [0] to pull that off.

    [0] https://lenowo.org/viewtopic.php?t=28

    • fifteen1506 12 hours ago

      Meta-Meta-Meta:

      Update:

      After the recent hacker news "invasion", I have now determined that the page can handle up to 1536 users before running out of RAM, meaning that the IP camera surprisingly is fully sufficient for its purpose. In other words, I will not be moving the forum in the near future as 32 MB of RAM seem to be enough to run it

      Source: https://lenowo.org/viewtopic.php?t=28

    • skilled 14 hours ago

      > Host it on the Fritzbox 7950 instead?

      It's a router... oh my god, that made me laugh

  • stevage 20 hours ago

    It's not loading for me at all.

  • 9dev 18 hours ago

    We recently moved to Linear and couldn’t be happier, can recommend!

  • hughes 21 hours ago

    Is this a joke because the site isn't loading at all?

    • sunrunner 20 hours ago

      At the time of writing the comment it was practically instantaneous for me and the comment was genuine. Now it seems to be having trouble and I'm choosing to retroactively make the comment a joke about Jira ;)

    • SpicyUme 20 hours ago

      Came back to check this since the tab never loaded. I'm guessing traffic caused some issues?

      • minki_the_avali 14 hours ago

        You folks overflowed the 32 MB of RAM that my forum is running on and caused it to restart a few times due to the high amount of simultaneous connections. It has recovered now though

      • Insanity 19 hours ago

        I’m guessing HN hug of death. Probably smarter than auto-scaling to handle any surge traffic and then getting swamped by crawlers & higher bills.

shadowgovt 5 hours ago

"I hope someone got fired for that blunder." /s