If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG
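
A rough back-of-envelope check of the article’s data-center figure, as a minimal sketch: the reactor output and average household draw used below are my assumptions, not numbers from the article.

```python
# Back-of-envelope check of "five large nuclear reactors ... almost 3 million homes".
# Assumed figures (not from the article): ~1 GW per large reactor and an average
# continuous household draw of ~1.2-1.7 kW (roughly 10,000-15,000 kWh per year).
reactors = 5
gw_per_reactor = 1.0                        # assumption
total_kw = reactors * gw_per_reactor * 1e6

for household_kw in (1.2, 1.7):             # assumed average draw per home
    homes = total_kw / household_kw
    print(f"at {household_kw} kW/home: ~{homes / 1e6:.1f} million homes")
# Prints roughly 4.2 and 2.9 million homes, which brackets the article's
# "almost 3 million" claim.
```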

  • Voroxpete@sh.itjust.works

    “Shortly thereafter, Altman pronounced “the dawn of the Intelligence Age,” in which AI helps humankind fix the climate and colonize space.”

    Few things ring quite as blatantly false to me as this asinine claim.

    The notion that AI will solve the climate crisis is unbelievably stupid, not because of any theory about what AI may or may not be capable of, but because we already know how to fix the climate crisis!

    The problem is that we’re putting too much carbon into the air. The solution is to put less carbon into the air. The greatest minds of humanity have been working on this for over a century and the basic answer has never, ever changed.

    The problem is that we can’t actually convince people to stop putting carbon into the air, because that would involve reducing profit margins, and wealthy people don’t like that.

    Even if Altman unveiled a true AGI tomorrow, one smarter than all of humanity put together, and asked it to solve the climate crisis, it would immediately reply “Stop putting carbon in the air you dumb fucking monkeys.” And the billionaires who back Altman would immediately tell him to turn the damn thing off.

    • scarabic@lemmy.world

      AI is actively worsening the climate crisis with its obscene compute requirements and concomitant energy use.

      • MonkeyBusiness@sh.itjust.works

        If I remember correctly, the YT channel ASAPScience said that making 10-15 queries on ChatGPT consumes about 500 mL of water just to cool the servers. That’s how much fresh water it takes simply to keep the machines from overheating.
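
        Taking that recollection at face value, the per-query arithmetic is simple. A minimal sketch; the 500 mL per 10-15 queries figure is the video claim as remembered above, and the daily query volume is purely an invented illustration:

        ```python
        # Per-query cooling water, using the recalled figure of ~500 mL
        # per 10-15 ChatGPT queries (unverified).
        water_ml = 500
        queries_low, queries_high = 10, 15

        per_query_max = water_ml / queries_low    # fewer queries per 500 mL -> more water each
        per_query_min = water_ml / queries_high
        print(f"~{per_query_min:.0f}-{per_query_max:.0f} mL of water per query")

        # Purely illustrative scale-up (the query volume is an assumption):
        daily_queries = 100_000_000
        litres_per_day = daily_queries * per_query_max / 1000   # mL -> L
        print(f"at {daily_queries:,} queries/day: up to ~{litres_per_day / 1e6:.0f} million litres")
        ```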

    • horse_battery_staple@lemmy.world

      That’s the best-case scenario. A more likely response would be for it to realize that humans need the earth, that AGI only needs humans for a short while, and that the earth doesn’t need humans at all.

      • scarabic@lemmy.world

        It’s hard to talk about what the earth needs. For humans and AGI, the driving requirement behind “need” is survival. But the earth is a rock. What does a rock need?

          • scarabic@lemmy.world

            It’s a fact of course. Pluto will also remain, and every object in the Oort Cloud.

            But despite our incendiary impact on this planet’s biospheres, I do think something would be lost if we vanished. Through us the universe becomes aware of itself. We’re not the only intelligent species, nor the only one that could ever play this role. But these qualities are scarce. Evolution rarely selects for high intelligence because of its high cost. Self-aware, intelligent beings who can communicate complex abstractions at the speed of sound, operate in unison, and transmit information down through generations… all from a rock. I hope we don’t destroy ourselves and every other living thing around us. I really do.

              • Excrubulent@slrpnk.net

                I wouldn’t put too much stock in notions of a great filter. The “Fermi paradox” is not a paradox; it’s speculation. It misses the mark on how unbelievably unlikely life is in the first place, and it relies on us being impressed by big numbers while completely forgetting about probabilities, as we humans tend to do, what with our gambler’s fallacies and so on.

                Even the Drake equation forgets about galactic habitable zones, or the suitability of the stars themselves to support life. Did you know that our star is unusually quiet compared to the ones we observe? We already know that’s a very rare quality of our situation, one that allows the stable environment life would need. Then there’s chemical composition, atmosphere, magnetosphere, and whether we have a big Jupiter out there sweeping up most of the cataclysmic meteors that would otherwise wipe us out.

                All these probabilities stack up, and the idea that a life-supporting planet is more common than one in 400 billion stars is ludicrously optimistic, given how fast probabilities can stack up. You’re about as likely to win the Lotto, and it seems to me the conditions for life would be a little more complex than that, not to mention the probability that it actually does evolve.
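
                To make the “probabilities stack up” point concrete, here is a toy sketch; every factor value below is invented purely for illustration, not a measured estimate:

                ```python
                # Toy illustration of how quickly independent conditions stack.
                # All probabilities are made-up placeholders, not data.
                factors = {
                    "quiet, stable star":             0.05,
                    "galactic habitable zone":        0.10,
                    "rocky planet in habitable zone": 0.10,
                    "suitable chemistry/atmosphere":  0.05,
                    "protective magnetosphere":       0.20,
                    "Jupiter-like meteor shield":     0.20,
                    "abiogenesis actually occurs":    1e-6,
                }

                p = 1.0
                for value in factors.values():
                    p *= value

                stars_in_galaxy = 4e11
                print(f"combined probability: {p:.1e} (about 1 in {1 / p:,.0f})")
                print(f"expected living worlds among {stars_in_galaxy:.0e} stars: {p * stars_in_galaxy:.1f}")
                # With these guesses: about 1 in a trillion, i.e. less than one
                # expected living world in the whole Milky Way. Nudge any factor
                # and the answer swings by orders of magnitude, which is the point.
                ```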

                I think it might be possible that life only happens once in a billion galaxies, or even less frequently. There might not be another living organism within our local galactic cluster’s event horizon. Then you have to ask how frequently intelligent life arises at all, let alone intelligent life that reaches interstellar travel.

                You know why your favourite science youtuber brushed right past the rare earth hypothesis and started talking about the dark forest? Because one of those makes for fun science-adjacent speculation, and the other one doesn’t.

                It also relies on the notion that resources are scarce, completely brushing over the fact that going interstellar to accumulate resources is absolutely balls to the wall bonkers. Do you know how much material there is in our asteroid belt? Even colonising the Moon or Mars is an obscenely difficult task, and Fermi thinks going to another star system, removed from any hope of support by light years, is something we would do because we needed more stuff? It’s absurd to think we’d ever even consider the idea.

                But even then, Fermi said that once a civilisation achieves interstellar travel it would colonise a galaxy in about “a million years”. Once again relying on us being impressed by big numbers and forgetting the practicalities of the situation. Our galaxy is 100,000 light years across, so this motherfucker is telling us with a straight face that we’re going to colonise the galaxy, something we already know is unfathomably hard, at approximately ten percent of the speed of light? That is an average rate of expansion in all directions. Bitch, what?

                If we did it at 0.0001c, that’s an average speed of 30 km/s, including the establishment of new colonies that could themselves send out new colonies, because it’s no good to just zoom through the galaxy waving at the stars as they go past. That seems like an amazingly generous speed, assuming we can even find one planet in range we could colonise. Then it would still take about a billion years to colonise the galaxy.
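
                For what it’s worth, the arithmetic behind those two figures, as a quick sketch:

                ```python
                # Time to span the galaxy at a constant average expansion speed.
                GALAXY_DIAMETER_LY = 100_000   # light years

                def years_to_span(speed_as_fraction_of_c: float) -> float:
                    return GALAXY_DIAMETER_LY / speed_as_fraction_of_c

                # Fermi's "about a million years" implies an average expansion speed of:
                print(f"{GALAXY_DIAMETER_LY / 1_000_000:.1f} c")              # 0.1 c
                # At 0.0001 c (~30 km/s), colony-hopping included:
                print(f"{years_to_span(0.0001) / 1e9:.0f} billion years")     # 1 billion years
                ```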

                Given that the universe is 14 billion years old, that the complex chemistry needed for life took many billions of years to appear, and that life on our rock took many billions of years to evolve, the idea that we haven’t met any of our neighbours - assuming they even exist - doesn’t seem like a paradox at all. It doesn’t seem like a thing that needs explanation unless you’re drumming up sensational content for clicks. I mean, no judgement, people gotta eat, but that’s a better explanation for why we care so much about this non-problem.

                No, the Fermi paradox is pop-science. It’s about as scientific as multiversal FTL time travel. Intelligence is domain-specific; Fermi was good at numbers, but he wasn’t an exobiologist.

                • horse_battery_staple@lemmy.world

                  Thanks for the measured and thoughtful response.

                  I don’t treat the Fermi Paradox or the Great Filter as scientific fact; I see them more as a philosophical representation of human nature applied to the universe at large.

                  I absolutely agree that life is very rare. I also agree that we have no frame of reference for the vastness of space. However, human nature, on the scale of the Earth, is trending towards self-immolation due to systemic powers that can be treated as a constant.

                  • Excrubulent@slrpnk.net

                    I’m glad you appreciate it, it was as much an excuse for me to unload that rant as anything else :)

                    But we actually get into trouble when our models of reality are poor. Our nature isn’t self-destructive at all: look at how many times we’ve been at the brink of nuclear annihilation and someone said, “actually don’t”, some of them in defiance of entrenched power structures that punished them for it.

                    We’ve had that world ending button for most of the last century, and we’ve never used it. If we really, on an instinctual level, were self-destructive we never would’ve evolved.

                    I think the real problem is the power structures that dominate us, and how we allow them to. They are aberrant, like tumours. They have an endless-growth strategy which, just like in malignant tumours, tends to kill the host. If they’re destroyed, the host can go on to live a complete life.

                    And things can change fast, these structures are tenacious but fragile. Look at the UHC assassination - claims immediately started getting approved. After decades of entrenched screwing over of people, they flipped on their back the moment they were threatened. How many other seemingly intractable problems could be cut out tomorrow if we applied the right kind of pressure?

              • scarabic@lemmy.world

                Yeah I’m there with you. I’m not saying I predict we will succeed, just that I would prefer if we did.

                I’m really neither optimistic nor pessimistic about our chances. On the one hand, it seems like simple logic that any time a species evolves from a simple animal into one with the potential to reach Kardashev Type 1, somewhere along the way it will destroy the initial conditions it evolved in, obliterating its own habitat and ending itself. I assume this is similar to your view.

                On the other hand we don’t have the data points to draw any conclusions. Even if species invariably Great Filter themselves, many of them should emit radio signals before they vanish. Yet we’ve seen not a single signal. This suggests Rare Earth to me. Or at least makes me keep my mind open to it. And Rare Earth means there isn’t even necessarily a great filter, and that we’ve already passed the hardest part.

    • humanspiral@lemmy.ca

      “The notion that AI will solve the climate crisis is unbelievably stupid, not because of any theory about what AI may or may not be capable of, but because we already know how to fix the climate crisis!”

      It’s a political problem. Nationalizing the western oil companies to prevent them from lobbying, and to invest their profits in renewables, is a solution, but no party in the CIA Overton window would support it. If war and human suffering can be made a priority over human sustainability, then oil lobbyists will promote war.

    • synnny@lemmynsfw.com

      The problem is that something like this, on such a large scale, has never been done before.

      Stopping anyone from doing anything that gives them power, wealth, or comfort is an extremely difficult task, let alone asking that of the ultra-rich. Even more so because it runs contrary to the very nature of a capitalist economy.

      Once renewable energy becomes nearly as good, all that will be needed is a combination of laws, regulations, and activism to nudge the collective toward the right decision.

      • Dragon Rider (drag)@lemmy.nz

        Renewable energy is already cheaper than fossils. It’s already cheaper to build a solar farm than a fossil mine and power plant that produce the same energy.
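
        For a rough sense of the gap, a minimal sketch using illustrative levelized-cost figures; the numbers are ballpark assumptions in the range of recent published estimates, not values from this thread:

        ```python
        # Illustrative levelized cost of energy (LCOE), USD per MWh.
        # Ballpark assumptions only, not authoritative figures.
        lcoe_usd_per_mwh = {
            "utility-scale solar": 40,
            "onshore wind":        45,
            "combined-cycle gas": 100,
            "coal":               110,
        }

        for source, cost in sorted(lcoe_usd_per_mwh.items(), key=lambda kv: kv[1]):
            print(f"{source:>20}: ${cost}/MWh")
        ```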

        But, if you charge the people more money for the fossils, then you can make a bigger profit margin even if you’re wasting all that money. And the profit is even bigger if you get the government to eat the expense of building those mines and plants and subsidize fuel prices.

        So the most profitable thing you can do is choose the least efficient method to generate power, complain to the government that you need subsidies to compete, and gouge customers on the prices. Capitalism!

    • Tetsuo@jlai.lu

      Playing a bit of devil’s advocate here, but you could argue that AGI used in science could help fix climate change. For example, what if AGI helps with fusion energy? We’re starting to see AI used in the quantum computing field, I think.

      Even though a lot of carbon would be emitted doing bullshit tasks, it only takes a few critical technologies to gain a real edge in reversing climate change. I understand fusion energy is quite the holy grail of energy generation, but if AGI is real I can’t see why it wouldn’t help in such a field.

      I’m just saying that we don’t know what new technologies we would get with true AGI, so it’s hard to guess whether, over a longer timescale, it wouldn’t actually be positive. Then again, it may also further delay our response to climate change, or worsen it… Just trying to see some hope in this.

      • Voroxpete@sh.itjust.works

        We already have fission power, solar, wind, hydro, large-scale battery storage, mechanical batteries (you can literally store renewable energy using a reservoir), electric cars, blimps, sail-powered boats, etc., etc. We’ve had all of these technologies for quite some time.

        And yet, we’re still burning coal, oil, and gas.

        There’s no magical invention that’s going to fix the basic problem, which is that we have an economic system that demands infinite growth and we live on a finite planet.

        Even if we crack fusion today, we won’t be able to build out enough fusion infrastructure fast enough to be a solution on its own. And we’d still be building those fusion plants using trucks and earth movers and cranes that burn diesel.

        You cannot out-tech a problem that is, fundamentally, social. At best a hyper-intelligent AGI is going to tell us the solution that we already know: get rid of the billionaires who are driving all this climate damage with their insatiable search for profit. At which point the billionaires who own the AGI will turn it the fuck off until they can reprogram it to only offer “solutions” that maintain the status quo.

      • Valmond@lemmy.world

        AI helps with fusion energy, we blow up the planet because the plans were flawed. Problem fixed.

      • Vlyn@lemmy.zip

        True AGI would turn into the Singularity in no time at all. It’s literally magic compared to what we have at the moment.

        So yes, it would easily solve the climate crisis, but that wouldn’t even matter at that point anymore.

      • humanspiral@lemmy.ca

        AGI helps in fusion energy?

        The wildest theoretical hopes for fusion energy still produce electricity at over 30 ¢/kWh. Zero economic value in fusion.
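
        Taking the quoted 30 ¢/kWh at face value, a quick comparison; the figures for existing sources are my own ballpark assumptions, not numbers from this thread:

        ```python
        # Quoted fusion hope vs. rough assumed costs of existing sources (cents/kWh).
        costs_c_per_kwh = {
            "fusion (quoted best-case hope)": 30.0,
            "utility-scale solar (assumed)":   4.0,
            "onshore wind (assumed)":          4.5,
            "combined-cycle gas (assumed)":   10.0,
        }

        solar = costs_c_per_kwh["utility-scale solar (assumed)"]
        for source, cost in costs_c_per_kwh.items():
            print(f"{source:<33} {cost:>5.1f} c/kWh  ({cost / solar:.1f}x solar)")
        ```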