mlsu 3 hours ago

My niece weighed 3 kg one year ago. Now, she weighs 8.9 kg. By my modeling, she will weigh more than the moon in approximately 50 years. I've analyzed the errors in my model; regrettably, the conclusion is always the same: it will certainly happen within our lifetimes.

Everyone needs to be planning for this -- all of this urgent talk of "AI" (let alone "climate change" or "holocene extinction") is of positively no consequence compared to the prospect I've outlined here: a mass of HUMAN FLESH the size of THE MOON growing on the surface of our planet!
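
For the skeptics, a minimal sketch of the modeling, assuming the same roughly 3x annual growth factor holds forever and taking the Moon's mass as about 7.35e22 kg:

```python
import math

m0, m1 = 3.0, 8.9        # niece's mass in kg, measured one year apart
r = m1 / m0              # ~2.97x growth per year
moon_mass = 7.35e22      # approximate mass of the Moon, in kg

# years until 8.9 kg, multiplied by r every year, exceeds the Moon's mass
years = math.log(moon_mass / m1) / math.log(r)
print(f"niece outweighs the Moon in ~{years:.0f} years")  # ~46 years
```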

  • Workaccount2 3 hours ago

    We have watched many humans grow, so we have a pretty good idea of the curve. A better analogy is an alien blob that appeared one day and went from 3 kg to 9 kg in a year. We have never seen one of these before, so we don't know what its growth curve looks like. But it keeps eating food and keeps getting bigger.

    • mlsu 3 hours ago

      Mine's different. She's cuter.

      On a more serious note: have these AI doom guys ever dealt with one of these cutting-edge models on out-of-distribution data? They suck so, so bad. There's only so much data available, and the models have basically slurped it all.

      Let alone the basic thermodynamics of it. There's only so much entropy out there in cyberspace to harvest; at some point you run into a wall, and then you have to build real robots to go collect more in the real world. And how's that going for them?

      Also I can't help remarking: the metaphor you chose is science fiction.

    • HarHarVeryFunny an hour ago

      Yeah, but we're not talking about alien blobs; we're talking about pre-trained transformers. I'm 100% certain that if you make them bigger and better, then all you will have is a bigger, better pre-trained transformer.

      "Scale it up (and sprinkle some magic fairy dust on it?) and it'll become sentient" seems to be the thought process. It didn't work for CYC, and it's not going to work here either. We need architecture, not scale or efficiency or bells and whistles. Get rid of pre-training, design an architecture and learning algorithm that will learn continuously and incrementally from its own actions and mistakes (i.e. prediction failures), and we'll start to get somewhere.

  • habinero 3 hours ago

    LOL, exactly. All of the weird AGI/doomer/whatever-we're-calling-it bullshit feels like exactly this: people who think they're too smart to fall prey to groupthink and confirmation bias, and yet who predictably fall prey to groupthink and confirmation bias.

    • mlsu 2 hours ago

      I have more fun reading it as a kind of collaborative real-time sci-fi story. Reads right out of a Lem novel.

yodon 6 hours ago

So... both authors predict superhuman intelligence, defined as AI that can complete tasks that would take humans hundreds of hours, to be a thing "sometime in the next few years", both authors predict "probably not before 2027, but maybe" and both authors predict "probably not longer than 2032, but maybe", and one author seems to think their estimates are wildly better than those of the other author.

That's not quite the level of disagreement I was expecting given the title.

  • LegionMammal978 5 hours ago

    As far as I can tell, the author of the critique specifically avoids espousing a timeline of his own. Indeed, he dislikes how these sorts of timeline models are used in general:

    > I’m not against people making shoddy toy models, and I think they can be a useful intellectual exercise. I’m not against people sketching out hypothetical sci-fi short stories, I’ve done that myself. I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral. What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author's general worldview.

    In particular, I wouldn't describe the author's position as "probably not longer than 2032" (give or take the usual quibbles over what tasks are a necessary part of "superhuman intelligence"). Indeed, he rates social issues from AI as a more plausible near-term threat than dangerous AGI takeoff [0], and he is very skeptical about how well any software-based AI can revolutionize the physical sciences [1].

    [0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...

    [1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...

    • ysofunny 5 hours ago

      but what is the difference between a shoddy toy model and real, professional "rigorous research"?

      it's like asking about the difference between amateur toy audio gear and real pro-level audio gear... (which is not a simple thing, given that "prosumer" products dominate the landscape)

      the only point in betting on when "real AGI" will happen boils down to the payouts from gambling on it. are such gambles a zero-sum game? does that depend on who escrows the bet??

      what do I get if I am correct? how should the incorrect lose?

      • LegionMammal978 5 hours ago

        If you believe that there's any plausible chance of AGI causing a major catastrophe short of the immediate end of the world, then its precise nature can have all sorts of effects on how the catastrophe could unfold and how people should respond to it.

  • sweezyjeezy 4 hours ago

    I don't think the author of this article is making any strong prediction; in fact, I think a lot of the article is a critique of whether such an extrapolation can be done meaningfully at all.

    Most of these models predict superhuman coders in the near term, within the next ten years. This is because most of them share the assumption that a) current trends will continue for the foreseeable future, b) that “superhuman coding” is possible to achieve in the near future, and c) that the METR time horizons are a reasonable metric for AI progress. I don’t agree with all these assumptions, but I understand why people that do think superhuman coders are coming soon.

    Personally I think any model that puts zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau, is not a good model.

    • XorNot 4 hours ago

      The primary question is always whether they'd have made those sorts of predictions based on the results they were seeing in the field the same amount of time in the past.

      Pre-ChatGPT, I very much doubt the bullish predictions on AI would've been made the way they are now.

  • vonneumannstan 5 hours ago

    For rationalists this is about as bad as disagreements can get...

  • TimPC 4 hours ago

    He predicts it might be possible based on the model math, but he doesn't actually say what his own prediction is. He also argues it's possible we are on an s-curve that levels out before superhuman intelligence.

  • jollyllama 5 hours ago

    That's not very investor of you

boznz 4 hours ago

I expect the predictions for fusion back in the 1950s and 1960s generated similar essays: they hadn't gotten to ignition, but the science was solid. The 'science' of moving from AGI to ASI is not really that solid, and we have yet to achieve 'AI ignition' even in the lab. (Any AIs that have achieved consciousness, feel free to disagree.)

  • fasthands9 3 hours ago

    I do agree generally with this, but AI 2027 and other writings have moved my concern from 0% to 10%.

    I know I sound crazy writing it out, but many of the really bad scenarios don't require consciousness or anything like that. They just require that the systems be self-replicating and able to operate without humans shutting them off.

ed 4 hours ago

Anyone old enough to remember EPIC 2014? It was a viral flash video, released in 2004, about the future of Google and news reporting. I imagine 2027 will age similarly well.

https://youtu.be/LZXwdRBxZ0U

  • laidoffamazon 2 hours ago

    I somehow hadn't seen this before! A fun watch.

    - Google buying TiVo is very funny, but ended up being accurate

    - Google GRID is an interesting concept, but we did functionally get this with Google Drive

    - MSN Newsbotster did end up happening, except it was Facebook circa ~2013+

    - GoogleZon is very funny, given they both built this functionality separately

    - Predicting summarized news is at least 13 years too early, but it's still correct

    - NYT vs GoogleZon also remarkably prescient, though about 13 years too early as well

    - EPIC pretty accurately predicts the TikTok and Twitter revenue share, though, again, about 12 years too early

    - NYT still hasn't gone offline; it was bolstered by viewership during the first Trump term, though print subscriptions are the lowest they've ever been

    Really great video - it does seem like they predicted 2024 more than 2014, a year when people unironically thought Haitians were eating dogs and that food prices had gone up 200% because of what they saw on TikTok, and elected a wannabe tyrant as a result

stephc_int13 an hour ago

The reality is that almost every curve looking like an exponential is a logistic curve (sigmoid) in disguise.

If your model relies on exponential (geometric) growth, it is very likely wrong.
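
A minimal sketch of the point, with purely illustrative parameters: an exponential matched to a logistic curve's early growth rate tracks it almost perfectly until you get near the inflection point, so early data alone can't tell you which regime you're in.

```python
import math

K, x0, k = 1000.0, 10.0, 1.0   # carrying capacity, midpoint, steepness (illustrative)

def logistic(t):
    return K / (1.0 + math.exp(-k * (t - x0)))

def matched_exponential(t):
    # for t well below x0, logistic(t) ≈ K * exp(-k * x0) * exp(k * t)
    return K * math.exp(-k * x0) * math.exp(k * t)

for t in range(0, 13, 2):
    y_log, y_exp = logistic(t), matched_exponential(t)
    print(f"t={t:2d}  logistic={y_log:8.2f}  exponential={y_exp:10.2f}  ratio={y_exp / y_log:5.2f}")
```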

lubujackson 5 hours ago

These predictions seem wildly reductive in any case, and extrapolating AI's ability to complete tasks that would take a human 30 seconds -> 10 minutes is far different from going from 10 minutes to 5 years. For one thing, a 5-year task generally requires much more input and intent than a 10-minute task. Already we have ramped up from "enter a paragraph" to complicated Cursor rules and rich context prompts to get to where we are today. This is completely overlooked in these simple "graphs go up" predictions.
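
For scale, here is a back-of-the-envelope version of the "graphs go up" extrapolation being criticized, assuming METR's reported ~7-month doubling time for task horizons keeps holding and treating a "5-year task" as roughly 5 x 2000 working hours (both assumptions are doing a lot of work here):

```python
import math

doubling_months = 7.0              # assumed doubling time for task horizons
start_minutes = 10.0               # a 10-minute task
target_minutes = 5 * 2000 * 60.0   # a "5-year task" as 5 years of 2000 working hours

doublings = math.log2(target_minutes / start_minutes)
years = doublings * doubling_months / 12.0
print(f"~{doublings:.0f} doublings, i.e. ~{years:.0f} more years if the trend never bends")
```

Whether the trend actually survives sixteen more doublings is exactly what these simple extrapolations assume away.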

  • echelon 5 hours ago

    I'm also interested in error rates multiplying for simple tasks.

    A human can do a long sequence of easy tasks without error - or can easily correct the errors they do make. Can a model do the same?

    • kingstnap 4 hours ago

      The recent Apple "LLMs can't reason yet" paper was exactly this. They just tested whether models could execute an exponentially growing number of steps.

      Of course, they gave it a terrible clickbait title and framed the question and graphs incorrectly. But had they done the study better, it would have been "How long a sequence of algorithmic steps can LLMs execute before making a mistake or giving up?"
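
      A minimal sketch of why that question has a brutal answer, assuming (purely for illustration) a fixed per-step accuracy, independent errors, and no recovery:

```python
# illustrative per-step accuracies, not measurements of any particular model
for p in (0.99, 0.999, 0.9999):
    for n in (10, 100, 1000, 10000):
        success = p ** n   # chance of completing n sequential steps with zero mistakes
        print(f"per-step accuracy={p}  steps={n:>5}  P(no error)={success:.3f}")
```

      Even 99.99% per-step reliability gives only about a 37% chance of getting through 10,000 steps cleanly.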

TimPC 6 hours ago

This critique is fairly strong and offers a lot of insight into the critical thinking behind it. The parts of the math I've looked at do check out.

sensanaty 2 hours ago

Everyone discussing some AGI superbeing a la Skynet is falling for the hype pushed hard by AI companies hook, line and sinker.

These things are dangerous not because of some sci-fi event that might or might not happen X years from now; they're dangerous now, for perfectly predictable reasons stemming primarily from executive and VC greed. They won't have to be hyperintelligent systems that are actually as good as or better than a human at everything; you just need to sell enough CEOs on the idea that they're good enough now to reach a problematic state of the world. Hell, the current "agents" they're shoving out are terrible, but the danger here stems from idiots hooking these things up to actual real-world production systems.

We already have AI systems deciding who does or doesn't get a job, who gets fines and tickets from blurry imagery where they fill in the gaps, and who gets banned off monopolistic digital platforms. Hell, all the grifters and scammers are already using these systems, because what they care about is quantity, not quality. Yet instead of discussing the actual real dangers happening right now and what we can do about them, we're focusing on some amusing but ultimately irrelevant sci-fi scenarios that exist purely as a form of viral marketing from AI CEOs who have gigantic vested interests in making it seem as if the black boxes they're pushing out into the world are anything like the impressive hyperintelligences you see in sci-fi media.

I'm as big a fan of Philip K. Dick as anyone else, and maybe there is some validity to worrying a bit about this hypothetical Skynet/Bladerunner/Butlerian Jihad future, but how about we shift more of our focus to the here and now, where real dangers already exist?

  • cwillu an hour ago

    “Everyone discussing some fission superbomb is falling for the hype.

    Nuclear reactors are dangerous not because of some sci-fi chain reaction that might or might not happen, they're dangerous now for perfectly predictable reasons stemming primarily from radiation and radioactive waste.”

    The straightforward mitigation for the hypothetical situation is to halt development; this is not what the AI companies are pushing for, so I'm not convinced that this line of thinking can be meaningfully attributed to the marketing strategy of AI companies.

Animats 2 hours ago

Will the number of bugs increase exponentially over time, or faster than exponentially?

How do all those bugs get removed?

staunton 4 hours ago

This is a lot of text, details and hair splitting just to say "modeling things like this is bullshit". It's engaging "seriously" and "on the merits" with something that from the very start was just marketing fluff packaged as some kind of prediction.

I'm not sure if the author did anyone a favor with this write-up. More than anything, it buries the main point ("this kind of forecasting is fundamentally bullshit") under a bunch of complicated-sounding details that lend credibility to the original predictions, which the original authors now get to argue about and thank people for pointing out "minor issues which we have now addressed in the updated version".

kypro 5 hours ago

As someone in the P(doom) > 90% category, I think that, in general, making overly precise predictions is a really bad way to highlight AI risks (assuming that was the goal of AI 2027).

Making predictions that are too specific just opens you up to pushback from people who are more interested in critiquing the exact details of your softer predictions (such as those around timelines) rather than your hard predictions about likely outcomes. And while I think articles like this are valuable to refine timeline predictions, I find a lot of people use them as evidence to dismiss the stronger predictions made about the risks of ASI.

I think people like Nick Bostrom make much more convincing arguments about AI risk because they don't depend on overly detailed predictions that can be easily nit-picked, but are instead much more general and focus more on the unique nature of the risks AI presents.

For me the risk of timelines is that they're unknowable due to the unpredictable nature of ASI. The fact that we are rapidly developing a technology which most people would accept comes with at least some existential risk, whose progress curve we can't predict, and where solutions would come with significant coordination problems, should concern people without having to say it will happen in x number of years.

I think AI 2027 is interesting as a piece of science fiction about potential futures we could be heading towards, but that's really it.

The problem with being an AI doomer is that you can't say "I told you so" if you're right, so any personal predictions you make have close to no expected pay-out, either socially or economically. This is different from other risks, which you can still benefit from if you predict them accurately when others don't.

I have no meaningful voice in this space, so I'll just keep saying we're fucked, because what does it matter what I think. But I wish there were more people with influence out there who were seriously thinking about how they can best use that influence, rather than stroking their own egos with future predictions which, even if I happen to agree with them, do next to nothing to improve the distribution of outcomes.

  • siddboots 4 hours ago

    I think both approaches are useful. AI2027 presents a specific timeline in which a) the trajectory of tech is at least somewhat empirically grounded, and b) each step of the plot arc is plausible. There's a chance of it being convincing to a skeptic who had otherwise thought of the whole "rogue AI" scenario as a kind of magical thinking.

    • kypro 3 hours ago

      I agree, but I think you're assuming a certain type of person who understands that a detailed prediction can be both wrong and right simultaneously. And that it's not so much about getting all the details right, but being in the right ballpark.

      Unfortunately there's a huge number of people who get obsessed with details and then nit-pick. I see this with Eliezer Yudkowsky all the time: 90% of the criticism of his views is just nit-picking of the weaker predictions he makes, while ignoring his stronger predictions regarding the core risks which could result in those bad things happening. I think Yudkowsky opens himself up to this, though, because he often makes very detailed predictions about how things might play out, and this is largely why he's so controversial, in my opinion.

      I really liked AI 2027 personally. I thought the tabletop exercises specifically were a nice heuristic for predicting how actors might behave in certain scenarios. I also agree that it presented a plausible narrative for how things could play out. I'm also glad they didn't wimp out with the bad ending. Another problem I have with people who are concerned about AI risk is that they shy away from speaking plainly about the fact that, if things go poorly, your loved ones will in a few years probably be either dead, in suspended animation on a memory chip, or in a literal digital hell.

  • Fraterkes 5 hours ago

    I’m not trying to be disingenuous, but in what ways have you changed your life now that you believe there's a >90% chance of an end to civilization/humanity? Are you living like a terminal cancer patient?

    (I'm sorry, I know it's a crass question)

    • allturtles 5 hours ago

      The person you're replying to said "For me the risk of timelines is that they're unknowable due to the unpredictable nature of ASI." So they are predicting >90% chance of doom, but not when that will happen. Given that there is already a 100% chance of death at some unknown point in the future, why would this cause GP to start living like a terminal cancer patient (presumably defined as someone with a >99% chance of death in the next year)?

      • kypro an hour ago

        To be clear, I currently think it's likely we'll have ASI within the next 10 years. But I think it's also quite possible that we won't have ASI for 20-30 years, and it's quite possible that moderate advancements in AI will be destabilising enough that we never reach ASI.

        I think a lot of people who talk about AI risk underweight the fairly likely scenario that highly capable narrow AIs are leveraged in ways that lead to civilisational collapse. Humans getting to ASI assumes that prior advancements are not destabilising, or that if they are, the advancements happen quickly enough that it doesn't matter.

        That said, I think ASI is more likely than not. And I think ASI within 5-10 years is very likely.

      • lava_pidgeon 5 hours ago

        I like to point out that the existence of AGI in the future does change my potential future planning. So I am 35. Do I need to save for a pension? Does it make sense to start a family? These aren't one-year questions but 20-years-ahead questions...

        • amarcheschi 4 hours ago

          If you're so terrified of AI that you won't start a family despite wanting one and being able to, it must be miserable to eventually live through the years (if you do) the way everyone who tried to predict the end of the world has (except for those who died of other causes before the predicted end)

    • kypro an hour ago

      No – I like this. Most people who talk about AI risk do not take it seriously, in my opinion. I've been having nightmares about AIs since I was a kid, and they have only become more frequent and more realistic the older I've gotten. I studied AI at university because I was always scared of, and equally fascinated by, AI. Even today most of my friends are friends because they're the only people I've been able to find who are willing to talk to me about AI risk for hours on end. Everyone else thinks I'm crazy, but I take this super seriously, and I get frustrated by people who treat it purely as a critical thinking exercise, given that in all likelihood their family and children will be dead or worse in a few years.

      To answer your question at a very high level: I'm pretty depressed, so I'm not bothered at all about dying, which helps with the emotional side of this. I'm not bothered that I think I'll be dead soon – and a quick death is my highest-probability positive outcome for what is coming for me personally.

      My primary concern, and the thing that keeps me up at night, is the risk of new forms of torture so inconceivably bad that any positive outcomes from ASI would never be worth the risk. In a few years, if things go "well" and we get ASI, it should be completely feasible to 3D-print a torture helmet which will simulate the feeling of being burnt to death for 100,000+ years. Assuming this is an experience that's physically possible to have, then signalling the brain to experience it is just an engineering problem – and likely a fairly trivial one for an ASI. Again, death should be seen as a very positive outcome. ASI will create hell in the literal sense. The question here isn't whether ASI could create hells, or whether someone out there might be messed up enough to send someone to hell, but whether humanity will create ASI. If you think we will, then we will create hell and we will banish people to it.

      So if there's even a 0.1% chance of something this bad happening to me I plan to check out early. That said, I see this as a low-probability risk on an individual basis – I think it's much much more likely I'll die of a designer virus or something similar. But should this not happen then we should assume many people alive today will be subjected to unimaginable horrors. Use cases of new forms of torture are very obvious and will be rapidly adopted in totalitarian states, but I'd also question whether democracies will end up as totalitarian states in a world where ASI exists – I think this depends on whether you believe ASI is likely to concentrate power (which I suspect it probably will).

      I appreciate that what I'm saying sounds crazy today, but magical sticks which go "bang" and immediately kill the people they're pointed at also sounded crazy to most humans who ever lived. We always underestimate how much technology can alter the boundaries of fiction, and ASI will likely change those boundaries faster and more significantly than we can possibly imagine today. Even I, as an AI doomer, am probably understating the extent of the horror which could be coming to you and your family. Your kids may suffer unimaginably for millions of years, trillions even.

      More likely though is that ASI introduces several new civilisational risks in rapid succession and one or two of them kill most humans alive today. I'd consider this a mostly neutral outcome.

      Other low-probability, slightly bad outcomes include scenarios where AI advancements are fundamentally destabilising and where events like mass job loss, rioting and war result in a halting of civilisational progress and mass death – largely from famine. I'm preparing for these outcomes as much as I possibly can, since these are the only outcomes I have any agency over. I grow a lot of food, I have chickens and keep years of food supplies (needed for nuclear winter scenarios). I'm in the process of fitting a water butt in my garden so I'm not dependent on the public water supply. I keep a large stash of firewood and heating fuel for cooking and heat if energy is cut off. I have been amassing tools for a few years now so I can repair things and can produce various items (including weapons). I will continue preparing for these scenarios for as long as I have left.

      Realistically though, there's very little I can do in most scenarios, and I'm nowhere near as prepared as I'd like to be. Really I just hope to avoid suffering, and I must accept that most outcomes are ones I have very little agency over. I just hope to die quickly or have the strength to check out while I have the chance.

      I guess the sad truth is I'd consider being diagnosed with terminal cancer right now a positive improvement in my expected life outcome. It's quite hard to overstate how concerned I am. The end of humanity isn't the bad outcome, imo; it's neutral from the perspective of human suffering, and it dramatically understates the risks coming.

      But hey, hopefully I'm wrong =)

      • manugo4 an hour ago

        Holy shit dude. I wish I hadn't read that

brcmthrowaway 6 hours ago

So much bikeshedding and armchair expertise displayed in this field.

  • goatlover 6 hours ago

    Which would that be, the arguments for ASI being near and how that could be apocalyptic, or the push back on those timelines and doomsday (or utopian) proclamations?

    • evilantnie 5 hours ago

      I don’t think the real divide is “doom tomorrow” vs “nothing to worry about.” The crux is a pretty straightforward philosophical question: what does it even mean to generalize intelligence and agency, and how much can scaling laws tell us about that?

      The back-and-forth over σ²’s and growth exponents feels like theatrics that bury the actual debate.

      • vonneumannstan 5 hours ago

        >The crux is a pretty straightforward philosophical question: what does it even mean to generalize intelligence and agency, and how much can scaling laws tell us about that?

        Truly a bizarre take. I'm sure the Dinosaurs also debated the possible smell and taste of the asteroid that was about to hit them. The real debate. lol.

        • evilantnie 2 hours ago

          The dinosaurs didn't create the asteroid that hit them, so they never had the chance for a real debate.

  • ysofunny 5 hours ago

    with all that signaling... it's almost like they're trying to communicate!!! who would've thought!?