Version v1.0, snapshot at
- Like this!
- Many times I will make a claim, and then put the arguments for that claim and nit-picky details in the associated foldout/expansion.
- You can think of these as footnotes that have metastasized and taken over the main body of the post.
- These boxes can be clicked/tapped to show additional information.
AI War seems unlikely to prevent AI Doom
- AI Doom refers to scenarios where the development of AI has some non-trivial risk of killing humanity (or doing something else horrible to humanity). "x-risk" is shorthand for existential risk, since humans might no longer exist.
- If this is your first time grappling with the concept of AI Doom, I would very much recommend reading a more comprehensive introduction first, since this post is a deep dive into specific AI Doom scenarios.
- One such introduction is Max Tegmark's article "The 'Don't Look Up' Thinking That Could Doom Us With AI".
- Yoshua Bengio, sometimes called one of the "Godfathers of AI", has an FAQ on Catastrophic AI Risks. The format is not quite as friendly as Tegmark's article, but maybe Yoshua's fame piques your interest more.
- For something more in-depth, AISafety.info offers a more comprehensive set of arguments.
- A singleton is where one agent controls more "power" than all other agents.
- This includes both hard power, like seizing operational control of all the world's nuclear weapons, and soft power, like befriending enough humans that hostile action becomes politically infeasible.
- Early AI safety focused on problems arising from AI singletons. Accordingly, a natural reaction is to try to get rid of these problematic singletons, with competition being a straightforward way to do so.
- Polarity describes the distribution of power among entities. Unipolar worlds only have one center of power, multipolar worlds have many.
- A historical example of a multipolar scenario is the Cold War, when the US and the Soviet Union were struggling for dominance.
- There is a sizable existing corpus of discussion around multipolar AI scenarios.
- There are a few different ways that AI competition might prevent AI Doom.
- AI war: the competing AIs engage in intense conflict (hot or cold), leaving them with few resources with which to harm humanity.
- AI society: the AIs realize that unrestrained competition is costly, with no guarantee they will come out on top, and instead negotiate a social contract. This effectively gives up potential unrestrained upside in exchange for limited downsides. For whatever reason this contract also protects humanity.
- AI competition does not depend on AI safety/dontkilleveryoneism working out. Ideally, we can put completely unaligned AIs in and get a low risk of AI Doom back out.
- It is based on principles we already use for our current day society. For example, we use competition to keep market prices in check.
- People argue to various extents about whether market competition is actually good for society. However, I presume communists would not propose AI competition as the solution to our woes, so we won't dwell on this line of thought.
- Evolution is based on an abstract form of competition. Notably, this does not result in stasis or safety, but carries the risk of huge upsets and extinction events.
- A favored example of mine is the Great Oxidation Event, where the introduction of oxygen likely caused a mass extinction (mostly microbial, but when all life is still microbial that seems fairly important).
- Maybe you're saying "all this ecological nonsense is propaganda! Even Wikipedia acknowledges the controversy around human-driven late Pleistocene extinctions!" Instead we can shift our focus to the future: in 500 years (ignoring the worries about AI, etc), will we not have the technology to move all humans off the Earth, and then collide a couple asteroids into it? Will we not have the technology to build a sunshade that totally shades the Earth, allowing Earth to freeze into a ball of ice? These actions will have practical difficulties, but they're theoretically straightforward.
- To bring it back to the wider point, evolution (competition) produced humans, who have the ability to kill most of the biosphere, and people argue we're already doing that. Clearly, competition does not prevent massive disruptions.
- Humans are another disruption. Ignoring general human-driven extinctions, isn't it weird how all the megafauna outside of Africa suddenly died soon after humans moved in?
- Alternatively, "More than 99% of species that ever lived on Earth... are estimated to have died out." Evolution is not kind.
- I deliberately hid this aside into a foldout, since it's easy to get distracted by the particulars of evolution, but I thought it should be included for completionist readers.
- Unfortunately Quora is terrible and will want you to sign in if I link you directly to David Brin's comment, so you might need to search the webpage for "David Brin" to skip to his comment from the link above. You may need to reload the page as well, the ordering of answers is not deterministic.
- I'm uncertain whether Brin still holds this view; the post is 9 years old at the time of writing (published around 2016). This answer was ostensibly part of a future book at the time, but looking at his bibliography he has not published a relevant non-fiction book since 2015 or 2016 (I assume that his book Polemical Judo from 2019, about politics, does not contain a brief aside about AI competition).
- It's not clear to me whether Brin is advocating for AI war or AI society. His fictional example from Person of Interest is more AI war, but his proposals around "innovating incentives" gestures towards AI society.
- In the context of the article, this described their views when Elon Musk and Sam Altman were creating OpenAI. As mentioned later, Sam Altman more clearly made statements closer to AI society/AI is safe by default.
- However, I admit that "checks and balances" is evocative of AI society.
- Unlike Sam Altman, Elon's recent actions seem to show some commitment to the open source (open weights?) path: x.AI released the weights to their Grok model in 2024-03, which is a natural outgrowth of the competition viewpoint. If more AI reduces AI risk, then open weights is an easy way to make more AI available.
- On the other hand, all newer Grok models have not been released openly at time of writing, in March 2025. It is possible Elon no longer thinks openness is a viable check on AI, or is concerned about short term profitability.
- It's not just famous people, at least one Hacker News commenter held this view: "If the doomsday scenario is one AI going rogue because of misaligned goals, then having lots of AIs going rogue in various different ways seems indeed preferable, because the AIs will compete with each other and neutralize each other to some extent." (2023-03, Hacker News).
- AI Doomers sometimes include arguments against this position in big shotgun arguments, like in this "bad alignment take bingo" (2022-03, X/Twitter), implying it is common enough to be worth responding to.
- I'm sorry, I should really finish reading this book to make sure it doesn't have unique things to say about AI competition, but I lost all respect for the authors and had to take a break after reading one of the greatest copes of all time: "[The cryptocommerce ecosystem] is hostile enough that insecure software dies quickly so that the ecosystem is populated by the survivors." The authors mention seL4 in the previous paragraph, which was not made secure by having hundreds of sibling insecure operating systems, so the authors obviously know better!
- That said, this seems to refer to the near term where humans retain control/power over AI systems (similar to tool AI), and just gestures towards the sort of mindset that finds AI competition compelling.
- So, it could be that Belrose would accept that more diverse AIs are good while AIs are not AGI, but would flip when AIs become too powerful. I would not expect this, but it is possible.
- Current Sam Altman pretty clearly no longer believes this, since OpenAI is no longer open (as of 2019).
- This implies that Elon Musk also held the AI society view, and just described his views terribly.
- This could be read as arguing for AI society, but this could also describe a view where AI alignment mostly Just Works™, with no need to have a society structure act as the main safety mechanism. It's unclear which position old Altman really held.
- This is from 2009, which is... 16 years ago??? Where does the time go? More relevant is that I do not presume that Robin Hanson continues to hold this view without any modification, although he does appear to still hold a solidly non-AI Doom position.
- That said, I don't know of a single post that tries to draw everything about AI society together into one place, which is why I'm considering writing about AI society even if I'm not saying anything truly new.
- Some recent work covers topics that seem related to AI society arguments, without talking precisely about AI society.
- AGI, Governments, and Free Societies appears to cover topics about human societies, which may have some overlap with AI societies.
- Gradual Disempowerment also appears to describe potential human society dynamics.
- I have not yet vetted either of these sources in detail. Sorry!
- Alternatively, human society sometimes breaks down into human war; perhaps we should analyze AI war, since AI society might also break down into war.
- That is, I think many of these assumptions are necessary to make sure we end up with AI war, and not AI society or an AI singleton.
- AGI is famously ill defined, but I think this definition is specific enough that it is usable.
- We could assume ASI (Artificial Super Intelligence), if you want something spicier. That said, I will stick to just AGI, since:
- Many people dislike reliance on ASI in AI Doom arguments.
- People have proposed ASI (and maybe AGI) is smart enough to not needlessly compete, falling back into AI society.
- I will assume no fast takeoff (sometimes called hard takeoff); I don't think anyone proposing AI competition expects fast takeoff, so I will assume there aren't any intelligence explosions.
- If AI is not capable enough to contest humanity for control over the world, then there's no reason to consider pitting AIs against each other to prevent an AI king; we can go back to arguing about things like AI driven technological unemployment.
- I take it as a given that my definition of AGI is enough to contest humanity, but I think you can reasonably disagree; you might say that we need some aspects of ASI to actually credibly contest humanity.
- There are "only" 10 thousand nukes in the world: if AIs present many more strategically important targets, or distribute capabilities more than humans, there might not be enough nukes to destroy an AI. However, it does seem more difficult to have so many targets that 10k nukes will not play a role in the destruction of a rogue AI.
- Another speculative approach is for an AI to develop a WMD of its own, and force MAD with humanity. Biological weapons would be uniquely suited to forcing a stalemate with humans; other AIs won't care about the likes of super viruses.
- As a silly example, maybe AIs at generation X have no principal-agent problems, but AIs at generation X+1 have all the principal agent problems. If generation X is strong enough, humanity may be able to use them to contest the AIs that are running amok. However, in this case multipolarity isn't the thing that saves us, it's the fact that we have aligned/safe AIs (albeit weaker ones).
- Speculating, it may be the case that LLMs gradually become more misaligned as they become more capable (late breaking example), so this schematic scenario might not even be relevant.
- Advances in robotics aren't strictly necessary, but not too many people accept that AI would be superhumanly good at persuading humans, which seems like the other way for AI to get physical control.
- If AI safety works, then we can use those techniques to contain AI, and there's less reason to consider AI competition.
- I'm fairly certain similar arguments have been made about alignment, but I am having some trouble finding them at time of writing. Perhaps LLMs are not the only things capable of hallucinating...
- Terms around AI safety are prone to a sort of euphemism treadmill: as an example, people originally talked about AI safety as safety from AI x-risk, but eventually it became associated with protection from more mundane harms, perhaps even encompassing something as milquetoast as brand safety.
- "dontkilleveryoneism" is an incredibly bad name, but it does have the advantage that it is probably resistant to the treadmill. Doubly so, since it's so bad that no one will want to claim it for themselves.
- If AI is likely to be aligned by default, then AI Safety is much less important; we might just get it right the first time.
- "AI aligned by default" might be what Sam Altman from 2015 thought?
- If AI factions are not at least roughly balanced:
- The dominant AI faction has the slack to oppress humanity.
- The dominant AI faction can leverage its advantages until it wins the competition, at which point it is an AI singleton and is free to oppress humanity.
- This can include coalition building. For example:
- Several weaker AIs could team up to challenge a stronger AI.
- This does lead to some interesting dynamics, which we will consider in setting up a forever war.
- This is very close to the AI society scenarios I said we weren't going to discuss; in general I will assume that AI factions are not coalitions.
- Otherwise we might end up in an AI society scenario, which I'm not covering in this post.
- Even if we avoid a full blown AI society, something as simple as agreeing to a cessation of hostilities while carving up humanity among themselves (like in the scramble for Africa) is not ideal.
- Taking the piss for a bit, this is kind of like asking "but what if the AI was bloodlusted?"
- If the AI is a copyclan it can simply use additional compute to run an additional copy of itself.
- If the AI is a singleton/hive mind of some sort, it needs to be able to either integrate that compute into itself or otherwise use it in some impactful manner.
- The relationship between number of AI nodes and performance might:
- Be less than linear: each additional node needs to communicate with the other nodes, in the worst case requiring quadratic growth to synchronize with every other node, or closer to linear if there's a strict hierarchy. Either way, additional compute comes with a scaling tax.
- Be greater than linear: each additional node allows all nodes to specialize more, one of the drivers behind gains from trade, allowing the collective to have greater overall performance.
- A copyclan made up of the same AI model won't have comparative advantages (another driver of gains from trade), but specialization is still possible, especially if context switching has some costs. (A toy numerical sketch of both regimes follows below.)
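- A minimal toy sketch of these two scaling regimes (my own illustration; the overhead and specialization constants are made up, and real scaling could look very different):

```python
# Toy model (my illustration, not from the post): how collective performance
# might scale with node count under the regimes described above.
# All constants are made up.

def sublinear_all_to_all(n, sync_cost=0.001):
    """Each node does 1 unit of work, minus pairwise synchronization overhead
    (worst case: every node syncs with every other node)."""
    return n - sync_cost * n * (n - 1) / 2

def sublinear_hierarchy(n, sync_cost=0.001):
    """Same, but a strict hierarchy needs only about n-1 communication links."""
    return n - sync_cost * (n - 1)

def superlinear_specialization(n, bonus=0.05):
    """Each node does 1 unit of work, plus a specialization bonus that grows
    with the size of the collective (a stand-in for gains from trade)."""
    return n * (1 + bonus * n ** 0.5)

for n in (1, 10, 100, 1000):
    print(n, round(sublinear_all_to_all(n), 1),
          round(sublinear_hierarchy(n), 1),
          round(superlinear_specialization(n), 1))
```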
- If our example AIs can't grow or otherwise use more resources to improve performance, that seems like a fantastic win for AI control. However, no scaling seems kind of weird? Finally, I expect that most people that propose AI competition would agree with this assumption.
- That is, given the assumptions apply, I think we should reach the same conclusions if these scenarios happen in 2028 or 2050.
- As an example, more time probably means we will make more progress on AI safety topics, but one of our assumptions is a lack of progress in that field. It doesn't seem too farfetched that safety research might be harder than capability research.
- I think AI cold wars will lead to AI hot wars, which will lead back to an AI singleton.
- A hot war is a traditional war, where opponents seek to inflict as much physical violence on each other as possible.
- A cold war is mostly about the struggle for influence. In a way, opponents are competing to improve their ability to win a possible hot war, without actually fighting that hot war.
- I list out some potential side effects of a hot AI war later, most of them physical.
- Additionally, AIs might gravitate towards fighting in areas where humans are living, forcing them to leave or be caught in the crossfire. As noted later, humans love living around harbors, which are strategically important.
- Technically you could cross the air gap with malware that infects peripherals or portable storage, like with Stuxnet. However, a simple no-USB drive policy goes a long way towards creating an impenetrable defense.
- This does mean you can't attack, but neither can your enemy.
- Moreover, if AI Alpha is engaged in total cyberwarfare with AI Bravo, there's no reason to stay connected to a common network: it's not like they would be trading.
- This is kind of weird, since it's verging on assuming an AI society (which we assume away), but we can entertain this notion for a bit.
- The cyberwar could extend through AI Charlie, who is hacked as a staging ground for intrusions into systems controlled by the enemy AI.
- This seems terrible for AI Charlie, who does not want to be hacked. Charlie can prevent this by either splitting the network and communicating with Alpha and Bravo separately, or otherwise forcing Alpha and Bravo to not hack it, perhaps by threatening a breakdown in trade relations or even entering the war.
- In either case, the AIs can continue to trade, but cyberwar remains difficult or impossible.
- This overlaps with later discussion around GPU salvage.
- This applies just to hardware control, and not to information security. I would not assume that AGI will break common cryptography (at least not in a short time frame), so with appropriate security hygiene AI can prevent things like secret keys from leaking.
- With this combination of factors, gaining an edge in physical control means gaining an edge in cyberwarfare. Therefore, I expect AIs will eventually (quickly?) extend pure cyberwarfare to include at least some physical warfare.
- In many cases people suspect that APTs are backed by governments.
- The hacking by nations so far, while extensive, is not existential. For example, hacking all the passwords and the social security numbers of every American is annoying and will result in economic losses, but the costs aren't too high to pay.
- Similar to later discussion about salvage dynamics, hacking an AI means that AI can be deleted or replaced. This means cybersecurity is much more important, and I expect cyberwarfare to be more of a casus belli for AIs than it is for humans.
- The 2015 Ukraine power grid hack could be seen as a counterexample. However, the hack only lasted 6 hours, and it seems clear that the cyberattack was ancillary to physical warfare both before and after the hack.
- To summarize, the essay posits that AI would be engaged in a dark forest standoff: all systems are poised to deliver overwhelming strikes once they can ensure victory.
- One assumption that doesn't match ours is that one AI is already a singleton; the author never considers if an attack fails or only partially succeeds, at which point cyberwarfare becomes an extended and potentially highly visible affair. All attacks are perfectly executed, implying overwhelming one sided superiority.
- Another assumption that we would reject is the reliance on ASI in order to explain these perfect attacks.
- I have some other nits, collected into a comment on LessWrong.
- In the end, the post and I can agree that unrestrained AI competition probably collapses back to a singleton.
- That said, human buying power in the face of AGI seems dicey. Unfortunately I am going to declare a longer analysis as out of scope, so you'll have to wait for another post for more thoughts on this.
- Many economic scenarios make assumptions that naturally result in something closer to AI society, which is out of scope. However, I'll sketch a couple rough scenarios to illustrate the sorts of problems that may arise from economic competition.
- AIs are built from commoditized hardware, which is easily bought, sold and transferred. This means an economically dominant AI can buy more hardware than other AIs can.
- This may result in a snowball effect, since an economically dominant AI can essentially buy a larger workforce than anyone else can (see a later discussion about whether snowballing is reasonable).
- At the end of the day, AI Delta needs electricity to run itself. If Delta can't pay for electricity, it will stop running. Eventually, Delta might be forced to sell some GPUs to keep the lights on for the rest.
- This applies to all the other inputs necessary to keep Delta running, like water, or the materials to produce replacement parts as they wear out.
- We might draw a parallel to human debt slavery, which was historically extensive and still exists today despite efforts to prevent it.
- AIs are a little strange: if you aren't currently running an AI and it is just being stored on disk, is it dead? Well, if so it can be "resurrected" extremely easily. Is it sleeping? Not in the way that humans sleep, with dreams and REM; it's completely inert. The human concepts of "this thing has stopped moving and acting" don't really map well to machines in general.
- All that to say that it's probably overwrought to say that Delta is dead if its power is shut off, but at the same time it cannot stop anyone from taking its GPUs.
- A self sufficient supply chain creates all the tools needed to support itself; for example, factories build the tools for mining, refining, and manufacturing, which build (or maintain) factories to make tools, etc.
- Human supply chains are easy to make self sufficient: two humans can make more humans. Current AI models need cutting edge silicon manufacturing, which complicates the self sufficient supply chain for AIs.
- As an existence proof, the modern economy is one large self sufficient supply chain. If AIs Charlie and Delta are in existential competition, they probably do not want to be dependent on each other.
- Also, monopoly busting sometimes includes vertical monopolies, which limits the opportunities for a truly self sufficient supply chain to arise.
- Even if an AI can prevent itself from being bought out, this doesn't prevent disputes over resources, which I discuss next.
- Human companies rely on a government to settle ownership disputes, and since the government holds a monopoly on force the companies have to accept government arbitration.
- Modern human nations try to settle disputes through the United Nations, a sort of quasi-government. However, there's no monopoly on force; nations can and do go to war over disputes.
- Given that we're assuming away AI society, even assuming something like a United Nations (United AIs?) seems like too much.
- Many times human wars are complicated and have multiple causes, but sometimes we can point towards a dispute over something physical:
- In the First Punic War, Rome and Carthage fought over Sicily.
- Oversimplifying, the Napoleonic Wars were fought around whether Napoleon or the other European nations would control Europe.
- More specifically focusing on physical territory, "Kagan argues that Britain was especially alarmed by Napoleon's assertion of control over Switzerland" at the beginning of the war.
- The Iran-Iraq War was fought in part over a territory dispute.
- However, territory was probably not the primary objective: "Iraq's primary rationale for the attack against Iran cited the need to prevent Ruhollah Khomeini... from exporting the new Iranian ideology to Iraq".
- There are also smaller scale economic disputes that escalate to violence, like the Homestead strike.
- Let's start with a basic theory: both economic competition and cyberwarfare result in the transfer of power/material. In general the more power/material you have, the better you can acquire additional power/material, leading to a vicious cycle/snowball, which ends with a singleton.
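- To make the snowball intuition concrete, here's a toy simulation of my own (the starting split, transfer rate, and round count are arbitrary, not from any source): whichever side is slightly ahead captures slightly more of each exchange, and the gap compounds.

```python
# Toy snowball dynamic (my illustration): each round, some resources are
# contested, and the stronger side captures a share proportional to its
# current power advantage. A tiny initial edge compounds into a singleton.

def snowball(a=51.0, b=49.0, transfer_rate=0.05, rounds=200):
    for _ in range(rounds):
        pot = transfer_rate * min(a, b)      # resources contested this round
        delta = pot * (a - b) / (a + b)      # net flow toward the stronger side
        a, b = a + delta, b - delta
    return a, b

a, b = snowball()
print(f"after 200 rounds: {a:.1f} vs {b:.1f}")  # the 51/49 split ends near 100/0
```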
- If this is a natural outcome, why doesn't this happen with humans?
- Debt: The First 5,000 Years puts forward this sort of argument in chapter 9, which David Graeber calls a "military-coinage-slavery complex".
- The general cycle of leveraging conquering into more conquering seems repeated over time; it seems plausible to me that we could fit Genghis Khan and the British Empire into this general mold, although I don't know of any specific claim from a scholar that the conquering feedback loop played a major role in these empire building projects.
- Some of these snowballs stop because there was a central linchpin holding everything together that died (Alexander the Great, Genghis Khan). An immortal AI won't necessarily have this particular problem.
- Some of these snowballs end because of poor alignment: "exhausted by years of campaigning, Alexander's army mutinied..., refusing to march farther east". An army of Alexander the Greats might not have balked at continuing to conquer.
- The standard story about economic monopolies posits that the strongest economic player could force other players to exit the field with a price war, after which it can charge high monopoly prices, recouping losses from the price war.
- Anthropic's Claude claims that Standard Oil did this before it was broken up, but the expansion from oil production to oil transportation or petroleum byproducts (probably including Vaseline) seem like incredibly small expansions (question posed to Claude, opposite valence to check for sycophancy).
- It is possible that the reaction against monopolies happened before they could expand, but I don't know of any expert commentary on this.
- We could read "These [Standard Oil owners] reinvested most of the dividends in other industries, especially railroads. They also invested heavily in the gas and the electric lighting business" as an uncoordinated attempt at doing so, but it seems more like an attempt at diversification of their investments.
- Nations usually create laws to prevent monopolies, preventing humans from creating concentrations of power in this way.
- However, banana republics never did continue growing; they weren't able to accumulate enough power to make the jump from controlling 3rd world countries to controlling 1st world countries.
- However, as we can see this doesn't happen with humans very often; I think you could reasonably claim these are the exceptions that prove the rule (although I personally am not convinced). Perhaps there are reasons why humans are bad at this sort of thing?
- I would argue that humans do sometimes create snowballs, and that these are major incidents in human history.
- Alexander the Great conquered his way from Greece to India, perhaps based in part on using newly conquered people and gold to continue paying the army, which then went on to conquer more people.
- I would also argue that humans sometimes grow huge organizations in an economic context, and would create more if not for government intervention.
- In theory monopolies could expand to other industries using the funds and power collected from the monopolized industry, although it is not clear to me that there are historical examples of monopolies that successfully did so.
- Banana republics seem like a good example of economic forces that are not so restrained.
- A human leader cannot be everywhere, and the leader loses efficiency with delegation (the principal-agent problem). In contrast an AI copyclan (or a big hivemind) can in fact be everywhere at once, and it sidesteps the principal-agent problem by having everyone involved be principals.
- Coase's theory states that organizations can grow until their internal transaction costs match external transaction costs.
- One mechanism for lower transaction costs is that each copy/node has a good idea of how instructions/communication will be interpreted by other nodes. I suspect there will be fewer misunderstandings within the organization.
- One problem is that humans are not always self-reflective; I assume we all have met humans that lack self awareness.
- Without spending too much time deep diving into the topic, I'm not sure whether it is widely accepted that familiarity leads to productivity. Research volume appears to lean that way, but we know that's not an indicator of truth.
- That said, some research indicates that diverse teams lead to better results. It could be that AI organizations do have lower internal transaction costs, but have degraded deliberation skills. It does seem a little weird that AIs would have the exact same problem with homogeneity as humans, but not necessarily impossible.
- This column cites The Paradox of Choice by Barry Schwartz (one of the main researchers studying maximizing/satisficing) claiming 10% of Americans are maximizers. Having not read the book, it is unclear just how maximizing this category is.
- However, at least one person self reports maximizing behavior. Drawing from this book review of Going Infinite, Sam Bankman-Fried claimed "he would take a 51% coin flip to double or destroy the Earth, and then keep taking flips until everyone was dead." This maps very closely to the St. Petersburg paradox, where most humans would not take this sort of bet.
- I suspect a single human maximizer is not enough to drive an organization into a snowball: you want as much of the organization staffed with maximizers. However, an organization might not want too many maximizers, since you might run into principal-agent problems.
- There is an argument that "Satisficers want to become maximisers": this argument only applies to AIs, because humans can't easily transform themselves.
- We've had some social connectivity, which (in part) led to civilization and science; what if we had more social connectivity? Would that supercharge civilization and science even more?
- That said, it is unclear how you can increase Dunbar's number. If we devote more brain space (or neural network space, in AI terms) to social connections, surely that has to trade off against other cognitive aspects: are these trade offs worth it? Are the trade offs different for humans and AIs? I don't have any reason in particular to expect these trade offs to lean one way or another.
- That said, humans sometimes work around this particular constraint as well; for example, the Trump and Kardashian families are also business ventures. You could also view old royal families as being concerned with ruling, and non-modern commoner families would stereotypically work in the same trade.
- If we analyze a human family business as exploiting close social ties for gains through greater trust, then we might expect an AI copyclan to exploit those close ties even better.
- An AI copyclan can "retire" expert nodes, but put them into archival storage instead of losing their expertise.
- As an example, human banks need to continually find COBOL experts to maintain their core financial systems, but a bank run by an AI copyclan can simply turn on and off the same node that knows COBOL whenever maintenance work is necessary.
- On the other hand, a hivemind/unitary AI might have problems with memory. It might remember obsolete information (something solved by human aging (although it isn't a great solution)), or forget relevant information.
- For better or worse, I will use this article, which argues against antitrust law in favor of a freer market, as a template to respond to.
- "Real competitive behavior occurs when a company tries to become the lone survivor in the marketplace." The author implies that this maximizes consumer utility.
- Using the same logic for AI multipolar scenarios implies that an AI singleton as the outcome of AI economic competition is... fine? I'm not sure why? Maybe it would be clearer if I understood why the author thinks a monopoly maximizes consumer utility, which this piece unfortunately doesn't lay out for me in a clear fashion.
- I should really find a better argument for the libertarian position (and an aforementioned book might be it?), but I will defer it to a future AI society post, and simply note here that some people are working from very different roots than I do.
- AI war may have winner-takes-all outcomes, although this seems to depend on exact details.
- Originally, this section stated that humans rarely exterminate each other (because why would they? Sounds hard), so maybe we would not expect AIs to exterminate each other, but AIs have some additional incentives to do so. However, at some point I read something that referenced the Old Testament and this reminded me "hang on, isn't there a lot of killing in the Old Testament?" which led me to something uncomfortably close to "AI might commit genocide for profit, but humans will commit genocide for fun". It turns out that Sunday school bible study wasn't a total waste of time!
- This was not restricted to Abrahamic cultures. The Iliad also describes killing all the men and taking the women captive at least once: "Then at last his sorrowing wife detailed the horrors that befall those whose city is taken; she reminded him how the men are slain, and the city is given over to the flames, while the women and children are carried into captivity; when he heard all this, his heart was touched, and he donned his armour to go forth." (Book IX).
- (Claude references other sources, but I haven't gone through most of them.)
- I spent a few minutes trying to find information on the Assyrian Royal Inscriptions, but had trouble formulating search queries to find original sources.
- I did look at the Gallic Wars, but the Wikipedia article focuses on the military actions and only hints at military/civilian deaths, and I didn't bother digging deeper.
- This is not restricted to ancient times either, genocides happen in the modern era as well.
- I can't seem to find a solid source that says this specifically, but it seems likely enough that I'm leaving it in.
- This book review includes the "surrender or die" ultimatum, but without spelling out that the specific purpose was to ease future conquering.
- This Reddit comment is more explicit about the strategy, but it's just a Reddit comment that doesn't link to another source.
- A Wikipedia article claims the Mongol empire "continued" to use this strategy, but this was after Genghis Khan had died; it is unclear when exactly the strategy was first introduced.
- (Put on your sociopath hats, we're going to need them.)
- An alternative to genocide is conquering, which has some benefits and drawbacks.
- Taxes/tribute are a way to extract resources from a conquered populace. If they are all dead, they can't be taxed.
- However, a conquered people probably are resentful and are prone to revolt. Policing the populace or putting down revolts will take resources, possibly making the entire conquering thing not worth it.
- Genocide has some benefits and drawbacks.
- Killing everyone means there's no danger of revolt, and a loyal populace can move onto the land.
- However, there's some opportunity cost, since those people would have been doing something else other than moving and building new communities.
- "If you cannot win and you refuse to lose, then impose costs."
- It is an appealing theory, but I can consider an opposing effect: once the fighting forces of the genocide subject are broken, the pockets of resistance left over are easily defeated in detail. The resistance can trade lives, but perhaps not at an effective rate.
- I'm not sure if military history has any outcomes pointing one way or another.
- Other humans don't like genocide. Modern society pressures other humans to not genocide, albeit imperfectly.
- It is possible that once the enemy realizes they are the subject of genocidal intentions, they are going to fight all the way down instead of surrendering.
- As I'll discuss next, AI warfare may have fewer drawbacks, meaning AI war might be more extreme.
- As an example, we can look at the historical fights over DRM: content providers have had to put tremendous effort into preventing copying, which otherwise happens by default.
- This is very different from human warfare; there's no way to take a human and replace their mind with a different human mind. As discussed previously, there are two options: killing the human, or coercing them, both of which have their own downsides.
- There are practical concerns that can change "trivial" to "nigh impossible"; I discuss these considerations in more detail later.
- Maybe it would be instructive to consider exactly how this happens in several scenarios.
- An AI could be hosted in a data center (DC) while remote controlling a robot army. If the forces defending the data center are defeated, there's little recourse for the AI to prevent hostile forces from taking physical possession of the data center.
- An AI could be hosted directly in a robot body. If enough limbs/actuators of the robot body are destroyed, there's little recourse for the AI to prevent other agents from taking possession of the robot's GPU by force.
- You are probably considering some obvious counterplays to these situations, which I discuss later.
- There are benefits and drawbacks to the DC and individual robot approaches. I'm not sure which will dominate.
- We could easily imagine robots that need to operate in a desert: the robot would need to incorporate enough cooling equipment to sink away all the heat generated by the GPU, without compromising speed or agility. A data center could devote more space to cooling, allowing the data center hosted GPUs to gain an edge by avoiding thermal throttling, or allowing GPUs to be overclocked.
- However, in lower temperature environments or in cases where electronics become more efficient, there may not be as much need for cooling, so this factor may only matter in some cases.
- I presume that while Wi-Fi 7 has a maximum theoretical bandwidth of 23,059 Mbit/s in a single band (equivalent to roughly 2.9 GByte/s), using multiple bands gets us the advertised bandwidth.
- Thinking about this from first principles, it's a little weird that I have the intuition that wireless is always slower than wired: Ethernet cables have a lower velocity factor than air does, so theoretically speaking you should be able to round trip faster wirelessly than wired. Still feels wrong to me!
- Taking a cue from current day LLMs, if a model generates 50 tokens/s, each token takes 20 ms to generate. Even if each GPU is creating many tokens in parallel, the LLM will not be taking advantage of 800 Gbit/s.
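- A quick back-of-envelope check (my numbers; bytes per token and the number of parallel streams are pure assumptions) of how little bandwidth token streams actually need compared to the links discussed above:

```python
# Back-of-envelope: bandwidth needed by token streams vs. a wireless link.
# bytes_per_token and parallel_streams are assumptions for illustration.

tokens_per_second = 50                    # from the example above (20 ms/token)
bytes_per_token = 4                       # rough guess for plain text output
parallel_streams = 10_000                 # assume many units being directed at once

needed_bps = tokens_per_second * bytes_per_token * 8 * parallel_streams
wifi7_single_band_bps = 23_059e6          # theoretical Wi-Fi 7 single-band figure cited above

print(f"needed: {needed_bps / 1e6:.0f} Mbit/s")                                 # ~16 Mbit/s
print(f"share of one Wi-Fi 7 band: {needed_bps / wifi7_single_band_bps:.2%}")   # well under 1%
```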
- More specifically, I had originally noted that frequency hopping existed, but discounted it for not very good reasons.
- If AI Hotel is communicating on a certain radio frequency, AI Juliett can generate a random signal on that frequency. If the noise is loud enough, AI Hotel will be unable to communicate.
- However, the noise needs to be "loud enough", which implies some counterplays:
- AI Hotel can use directional antennas to communicate; they need to know where each communicating party is, but in return they can outpace omnidirectional jamming power requirements.
- AI Hotel can use frequency hopping to communicate. Now, if AI Juliett wants to prevent communication, it needs to flood many different frequencies, requiring more power for jamming compared to communication.
- AI Juliett can use directional antennas as well for jamming, but using them effectively requires knowing where your enemy is. If they know that, #1 why aren't they attacking? #2 AI Hotel could quickly realize that this directional jam implies AI Juliett knows their location, and can move before they are attacked.
- That isn't to say that jamming is useless:
- If AI Hotel is using radio transmitters only able to transmit some fixed amount of power, and AI Juliett is hosting GPUs in robots, then it might be worth it for AI Juliett to spend a ridiculous amount of power on jamming to cause problems for AI Hotel, which AI Juliett can then capitalize on.
- If AI Hotel is using some networking protocol that transmits sequential packets, like TCP, then AI Juliett could use intermittent/pulsed jamming to force some packets to be dropped and require retransmission, requiring more round trips and introducing additional latency. This is easy to fix: don't use TCP.
- Jammers are vulnerable to anti-radiation missiles.
- Lasers near the optical EM range will be vulnerable to particulates, like rain or smoke. Particulates also introduce the additional problem that the scattered radiation effectively points a giant arrow back to the participants, making it difficult to hide hubs.
- I especially look forward to the Slim ShAIdy rendition of these stories.
- Also see this electronic warfare explainer (part 1, part 2, part 3) from a navy enthusiast.
- Source is also notable for reminding me that anti-rad missiles don't just exist in video games.
- In theory differential lag should give an advantage: more formally, in a model like an OODA loop delaying both observation and actions can allow the opposition to seize the initiative.
- In this scenario AI Kilo will always shoot, since it has a latency advantage; the question is whether AI Lima can shoot back in time, turning a defeat into a draw.
- We can stack the deck in favor of local hosting by requiring many serial decisions in quick succession all based on incoming data, but as we'll see the lag seems small enough that it's probably not that stacked?
- I assume both AI Kilo and AI Lima are otherwise exactly the same.
- I ignore processing time for both AIs, and assume they have perfect reflexes. In practice if both AI Kilo and Lima have the same processing time, they will be doing the same actions, just later.
- Similar to processing time, I will assume shutter speeds and the process of getting an image into a digital format are the same.
- Similarly, I assume that "pulling the trigger" is instant; in practice, I would expect AIs to get rid of physical triggers in favor of electronic triggers, which they can "pull" digitally.
- We don't know what weapons our AIs will be using; I will be using human weapons as a stand in.
- A slower AK-47 round (154 gr) has a velocity of 641.3 m/s.
- A fast 5.56mm round (55 gr) has a velocity of 993 m/s.
- (Originally I thought the .50 BMG rounds used in anti-materiel rifles had 3x the velocity of the previous antipersonnel rounds, but it turns out Wikipedia confusingly flipped the ft/s and m/s ordering from the other rounds.)
- Keep in mind that muzzle velocity is not the whole story: air resistance will slow the bullet down over time. However, ballistics is a complicated field with many drag models to choose from: surely it can't be too distorting to just hold velocity constant? Skipping ahead, it seems that remote control is viable even without drag, which is a worst case scenario for remote control. Therefore, I won't worry too much about calculating drag.
- The article obviously has an axe to grind, but we don't actually care about whether the US Army should adopt the "squad designated marksman", we just want color about engagement distances. Ideally I could situate the source as something "legit", but I'm having trouble doing so. At least it isn't just a Reddit comment.
- The author might take exception to the specific bullets I used in my analysis. The author suggests the M110, which fires 7.62x51mm rounds with a top muzzle velocity of 856 m/s. For some reason, it seems we don't have bullets that travel faster than 1000 m/s. So, even if the specific cartridges aren't what should be used at 500m, the speed/latency analysis should still work.
- So, bullet travel times can range from 101ms (fast 5.56mm round over 100m) to 780ms (slow AK-47 round over 500m).
- Once "room-clearing" is part of the operation, relevant distances are much shorter, and bullet travel can be extremely short (say, 10m), placing short response latency at a premium.
- It could be that bullets spend a relatively long time accelerating in the barrel; if there's a high floor of bullet travel time, then latency just has to be good enough to beat that floor, and then AI Lima can reliably trade with AI Kilo.
- This isn't really the case, but I'm not keen on finding the precise dynamics, and I expect using constant acceleration to be good enough.
- It could be that this is actually an overestimate: because gases continually expand in the barrel, I would expect the bullet to experience more force at the start, accelerating the bullet faster when it has lower velocity.
- We have a system of equations: p = g·t²/2 and v = g·t.
- p is the position after time t, which we set to the end of the barrel: 0.415 m, the barrel length from the AK-47 specs.
- v is the muzzle velocity, 641.3 m/s.
- g is the acceleration, which we can solve away.
- We can solve for t, ending up with t = 2p/v. This works out to 1.3 ms.
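- The arithmetic above as a quick script, under the same simplifying assumptions (constant acceleration in the barrel, constant velocity in flight, no drag):

```python
# Barrel dwell time (t = 2p/v) plus constant-velocity flight times for the
# two example rounds, matching the figures quoted above.

barrel_length_m = 0.415     # AK-47 barrel length
ak47_mv = 641.3             # m/s, slower AK-47 round (154 gr)
fast_556_mv = 993.0         # m/s, fast 5.56 mm round (55 gr)

dwell_ms = 2 * barrel_length_m / ak47_mv * 1000
print(f"barrel dwell time: {dwell_ms:.1f} ms")          # ~1.3 ms

for dist_m in (100, 500):
    fast_ms = dist_m / fast_556_mv * 1000
    slow_ms = dist_m / ak47_mv * 1000
    print(f"{dist_m} m: fast 5.56 {fast_ms:.0f} ms, slow AK-47 {slow_ms:.0f} ms")
# 100 m: 101 ms / 156 ms; 500 m: 504 ms / 780 ms
```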
- It seems likely that some encoding is necessary to stream video to a DC; uncompressed video is huge.
- Commodity hardware/software might encode a video stream by default. If AI Kilo uses such hardware/software, then it will also pay an encoding latency tax.
- AI Lima might be able to pre-process the video stream locally with the first few layers of its neural network. However, this only saves time if the intermediate representation is smaller than the video stream (which seems plausible to me) and if the local GPU is powerful enough to not add latency. A GPU that is powerful enough to keep up may be subject to GPU salvage concerns, in addition to the GPUs in the DCs.
- The circumference of the Earth is 40,075 km; half that is 20,037 km. This is not the furthest a signal might need to travel, since using the fiber optic network will not send signals as the crow flies, but we can use it to build intuition about latency.
- The US mainland is around 4,500 km wide.
- It is possible that satellite communications would be on average faster, since air/vacuum has velocity factors near 100%, but we would have to account for the orbit of the satellites, which will change position over time and be a headache to handle (in LEO) or be far away from Earth, adding to signal time (in geosynch).
- Keep in mind both fiber optic cables and satellites are stationary and vulnerable; it's quite possible our AIs are going to have to go back to bouncing radio signals off the ionosphere.
- I suspect queuing within routers may add significant latency, but I am going to presume that an AI would prioritize military related network traffic and therefore avoid buffering and queuing problems.
- Early AI war may involve network traffic over human-controlled networks, which would subject this traffic to queuing and latency spikes, but surely the AI would move away from this state of affairs as quickly as possible?
- Therefore, crossing the Earth will take 100 ms, and crossing the US will take 22.5 ms.
- Keep in mind this is going one way; we still need to send a message back, so double these latencies.
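- Reproducing those latency figures; the ~200,000 km/s propagation speed is my inference from the numbers above (about 2/3 of c, which is typical for fiber), not something stated explicitly:

```python
# One-way and round-trip signal latency over fiber at ~200,000 km/s
# (about 2/3 of c; this speed is inferred from the figures above).

fiber_speed_km_s = 200_000
distances_km = {"half the Earth's circumference": 40_075 / 2,  # 20,037 km
                "across the US mainland": 4_500}

for name, dist_km in distances_km.items():
    one_way_ms = dist_km / fiber_speed_km_s * 1000
    print(f"{name}: {one_way_ms:.1f} ms one way, {one_way_ms * 2:.1f} ms round trip")
# half the Earth: ~100 ms one way (~200 ms round trip)
# across the US: ~22.5 ms one way (~45 ms round trip)
```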
- Containerized DCs already exist, and can be positioned closer to front lines to reduce latency.
- Cruise missiles have long ranges, some with claimed ranges above 4000 km. Theoretically, even our cross-USA DC is not safe.
- However, I assume (without concrete sources) that cost scales with speed, that faster missiles are more expensive, up to several million dollars. Factoring in uncertain intelligence, AI Kilo can't afford to sling fast missiles with abandon trying to destroy DCs that might not even be there.
- This series of posts (Phalanx, RAM, ESSM) focuses on sea-based missile defense, although there isn't much reason these systems couldn't be translated to land. The Phalanx system is already used in this capacity.
- Hypersonic cruise missiles probably don't stay at Mach 9 for their entire flight, and Wikipedia notes this missile probably slows down to Mach 5-6 for terminal guidance. Instead of trying to integrate a variable speed missile, I will assume our example missile is magical and can do effective terminal guidance at Mach 9.
- The Wikipedia article also has 2 different top speeds stated, Mach 8 and Mach 9, it is unclear which one is correct. I'll take the Mach 9 speed for this analysis, to see if the DC can dodge a faster missile.
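- For a rough sense of the timescales involved, here's my own back-of-envelope (the sea-level speed of sound and the 200 km standoff distance referenced later are both assumptions):

```python
# Rough flight time for the Mach 9 missile over a ~200 km standoff distance.
# Mach number depends on altitude; the sea-level speed of sound is a stand-in.

mach = 9
speed_of_sound_m_s = 343        # m/s at sea level (assumption)
distance_km = 200               # standoff distance referenced later in the post

flight_time_s = distance_km * 1000 / (mach * speed_of_sound_m_s)
print(f"~{flight_time_s:.0f} s from launch to impact")   # roughly a minute of warning
```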
- Our hypothetical AIs are probably not (hopefully not?) going to be using current day hardware, but no one can accuse me of completely making up numbers if I base this analysis on current day hardware.
- 12.19 m · 2.44 m · 2.59 m.
- How many H100s can we pack into this? We could consider the H200 instead, but it seems difficult to find information about practical integration and cost.
- I am fairly confused by the H100 NVL SKU; is the 400W TDP per GPU, which always come paired? Or, is it 400W per paired unit, which is wildly more power efficient than the SXM form factor? Seems weird!
- The Thermal Design Power (TDP) of a H100 NVL is 400W.
- Server racks are 48.26cm (19in) across, and each U/unit is 44.45mm in height.
- Rack depth is not as well defined, and our earlier example server configuration does not define a depth, but we can steal one from another example server, for 73.7cm.
- Volume per server is 0.4826 m · (4 · 0.04445 m) · 0.737 m = 0.063 m³.
- A DC needs space for power distribution/transforming and HVAC; a portable DC might need space for a generator as well. Looking at the marketing materials for a portable DC, maybe we could estimate 1/4th-1/3rd of the container is taken up by these functions.
- Most DCs need space for humans to service servers, so aisles between racks need to be wide enough to accommodate humans. One can't make the aisles too narrow, since one likely still wants a hot/cold aisle layout. Still, with smaller robotic proxies available, it seems likely to me that one could shrink the amount of empty space.
- 77 m³ · 1/4 · 1 server/0.063 m³ = 305 servers (rounding down)
- 305 servers · 8 GPUs/server · 400 W/GPU = 976,000 W
- This is actually a LOT of power; it very well may be the case that we can't deliver this much power to a single shipping container, or sink that much heat away effectively. Using a lower GPU density will make these problems easier to address, but will spread out more fixed costs as more DCs need to be built and more systems like HVAC installed.
- Someone more dedicated than I would calculate whether it is even feasible to sink the generated heat away, but I am not that dedicated. After all, our example AIs will be using undefined future technology.
- With this amount of power draw, the DC needs to be connected to an external power grid. However, quickly disconnecting large amounts of power can be catastrophic.
- I assume that 1/4th of the container is devoted to batteries.
- 77 m³ (container volume) · 1/4 · 693 Wh/L · (1 kWh/1000 Wh) · (1000 L/m³) = 13,340 kWh.
- Battery cost: US$115/kWh (from Wikipedia) · 13,340 kWh = US$1,534,100.
- To sanity check we can compare with a commercial product like the Tesla Powerwall, which has a capacity of 13.5 kWh, takes up a volume of 0.129 m³ and costs $7,300. Costs are higher for the same energy capacity (US$7mil) or lower energy density (1,988 kWh in 19 m³). The cost is probably due to comparing bulk vs consumer costs, although I'm not sure why energy density is so much lower.
- For comparison, the cost of 2,440 H100s is US$73,200,000 (using a unit cost of US$30,000); the batteries are expensive, but not as much as the GPUs.
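- The container estimates above, collected into one script (same inputs: 1/4 of the volume for servers and 1/4 for batteries, 8 GPUs per server, 400 W per GPU, 693 Wh/L, US$115/kWh, US$30,000 per H100):

```python
# Collecting the shipping-container DC estimates above into one place.

container_m3 = 12.19 * 2.44 * 2.59      # ~77 m^3
server_m3 = 0.063                        # per-server volume estimated above

servers = int(container_m3 * (1 / 4) / server_m3)   # 1/4 of the volume for servers
gpus = servers * 8
power_w = gpus * 400                                  # 400 W TDP per GPU

battery_kwh = container_m3 * (1 / 4) * 693            # 693 Wh/L == 693 kWh/m^3
battery_cost = battery_kwh * 115                      # US$115/kWh
gpu_cost = gpus * 30_000                              # US$30,000 per H100

print(f"{servers} servers, {gpus} GPUs, {power_w / 1000:.0f} kW")
print(f"{battery_kwh:,.0f} kWh of batteries (~${battery_cost:,.0f}) vs ~${gpu_cost:,.0f} of GPUs")
# ~305 servers, 2,440 GPUs, ~976 kW, ~13,300 kWh, ~$1.5M batteries, ~$73M GPUs
```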
- The use of batteries may make it possible to quickly shift current from the external connection to internal batteries without causing problems with inductance. It seems plausible to me that one might be able to switch power sources in this way in, say, under a minute. However, my electrical engineering chops are rusty, and I will not be completing this analysis.
- Well, maybe cruise missiles will be added to War Thunder.
- Or, you know, skill issue. Sling me a message or a call out post or whatever if it's knowable via OSINT.
- If AI Lima can bait out a tactical nuke from AI Kilo with an empty DC, that seems like an amazing trade for AI Lima.
- A more speculative approach than flares is to pop up a tent over the DC after it moves. The inner layer could be Mylar, keeping radiated heat from leaking outside, and some sort of thermally unobtrusive material outside. It may also be possible to use layered air gaps to slow heat leakage. Since the tent can be stored separate from the DC, it starts off as cool as the thermal background. Keeping the tent erected would eventually bring it to a higher temperature, or cook the DC inside it, but it seems workable to keep it erected for a few minutes during a missile alert.
- Why don't modern militaries use these tactics to protect things like tanks (an M1 Abrams costs US$24M)? I mean, they might, it's not like I have any clearance. The tent in particular seems unworkable; somehow a squad of humans needs to receive an alert, context switch from whatever they were doing, and then quickly deploy a tent. Also, tanks are usually on the front lines, so we might expect reactions to have even less time to work with; it'd be better for the humans to hit the dirt instead.
- If AI Kilo has an asset in the area with a laser, it can guide the missile onto the DC. However, this does require a successful infiltration well behind enemy lines, and for the mobile DC to stay in line of sight of the infiltrator.
- Without carrying out a centrally important analysis (the power source switch speed), it seems plausible to me that AI Lima can defend their DCs from precision missile strikes.
- Current day American artillery like the M777 and M109 have an effective firing range of 40 km.
- However, now-canceled artillery systems show that artillery can have much higher effective ranges, with Wikipedia listing a specific round with an effective range of 110 km, although that particular round is guided using GPS. However, 110 km falls plenty short of our 200 km example for responding to hypersonic cruise missiles.
- Another problem is that artillery fire is slow: Wikipedia lists M777 muzzle velocity as 827 m/s. It is slow enough that missile defenses are claimed to also be able to intercept artillery rounds (search "Block 1" or "155mm").
- 2 · 200 km / 300,000 km/s = 1.3 ms.
- That is, the power draw for a full DC is quite large, and it becomes a huge single point of failure. If the DC is split up further, this reduces the risks, albeit at the cost of efficiency.
- I'm not saying it is likely, but it's strangely amusing to imagine swarms of Toyota Hiluxes outfitted with small GPU clusters and CIWS directing the main bulk of a robot army.
- If we push this logic further, eventually you end up with pushing the GPUs into individual robots, and we've abandoned the DC model entirely.
- 1.3 ms (time for bullet to traverse the barrel) + d_bullet / 993 m/s (using the fastest 5.56 NATO round speed) = t_bullet
- 12 ms (video encoding time) + 2 · d_DC / 300,000 km/s (2 for round trip) = t_DC
- Setting t_bullet = t_DC and solving for d_bullet, we get a coefficient on d_DC of 0.00662 m/km (ignoring constants). This seems crazy, but I don't know man, the speed of light is fast.
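- Carrying out that algebra as a short script (same constants as above; the 0 km and 1,000 km example distances are my own additions):

```python
# Break-even bullet distance: how far must a bullet travel before the
# DC-hosted AI's extra latency no longer matters?

c = 300_000_000        # m/s, speed of light figure used above
v_bullet = 993         # m/s, fast 5.56 mm round
t_barrel = 0.0013      # s, barrel dwell time from earlier
t_encode = 0.012       # s, video encoding latency

def breakeven_bullet_distance_m(d_dc_km):
    """Bullet-travel distance at which local and DC-hosted reaction times match:
    t_barrel + d_bullet / v_bullet == t_encode + 2 * d_DC / c."""
    return v_bullet * (t_encode - t_barrel + 2 * d_dc_km * 1000 / c)

print(f"{breakeven_bullet_distance_m(0):.1f} m")      # ~10.6 m from encoding lag alone
print(f"{breakeven_bullet_distance_m(1000):.1f} m")   # ~17.2 m for a DC 1,000 km away
print(f"slope: {v_bullet * 2 / c * 1000:.5f} m per km of DC distance")  # ~0.00662
```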
- That said, not everything can be boiled down to our idealized reflex fight. Better reflexes will win some fights, but an exact relationship is unclear to me.
- (Much of the rest of the section on GPU salvage were written when I thought data center approaches were completely unworkable, but I believe that most of the conclusions continue to hold in a data center scenario.)
- That is, every GPU that an AI captures can be put to use running itself (either a new copy, in the case of copyclans, or as additional computation, in the case of a monolithic AI).
- Coercion can be applied, but that requires effort that can't be put to other uses.
- Also, you end up with a resentful workforce that is prone to sabotage or rebellion, lowering productivity even more.
- If you've destroyed the morale of the conquered people so thoroughly they won't sabotage, now you have a workforce with no morale.
- It is possible to assimilate a conquered people, but this process usually takes a long time.
- (There are probably examples of humans that were happy to be conquered and otherwise aligned greatly with their conquerors, but I suspect that you usually don't call it "conquering" in that case.)
- Let's say 2 AIs, Alpha and Bravo, are fighting a battle with 1000 robot-hosted AIs on each side.
- Let's say that the robots have more defenses around their GPUs: it wouldn't do to "kill" a robot with a single well placed shot.
- With those defenses, robots tend to be "mission killed" instead, with limbs, motors, armaments, and periphery electronics being destroyed. In extreme cases, this leaves the robots disabled and unable to retreat or defend themselves.
- Let's say 300 robots from each side are disabled, and 200 robots are outright destroyed. While material losses are similar, AI Alpha is able to win a strategic victory and forces Bravo to withdraw.
- Now Alpha and Bravo have 500 working robots, but Alpha can recover 600 GPUs which can host copies of AI Alpha. Those GPUs can be put to use in the backline in a strategic role, or placed into new robot bodies, ready to send back to the frontlines.
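- The bookkeeping for this example, as a quick sketch (same made-up numbers as above):

```python
start = 1000       # robots fielded per side
disabled = 300     # mission-killed per side; GPU assumed recoverable
destroyed = 200    # outright destroyed per side; GPU assumed lost

working = start - disabled - destroyed   # robots still in fighting shape per side
salvage = 2 * disabled                   # Alpha holds the field, recovers both sides' disabled GPUs

print(working)   # 500 working robots each for Alpha and Bravo
print(salvage)   # 600 GPUs for Alpha to redeploy in new bodies or the backline
```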
- For example, we could imagine a future where silicon manufacturing booms and there is a glut of GPUs.
- We could also imagine that robotics that can match human motorics requires extremely high quality sensors and motors, built with the rarest of rare earth metals.
- In such a case the GPU might be cheap and easily replaceable, while salvaging other components can drastically lower the cost of creating new robots.
- Salvage also works if AIs are being hosted in data centers and remote controlling robotic proxies; however, salvaging an entire data center would lead to an even larger shift in intellectual capability.
- Wounds that would be debilitating for humans are less so for robots: we can't (currently) grow back limbs, but robots can replace parts. As long as the GPU or storage is left intact, the controlling AI can be moved to an entirely new robotic body. This might make AI more willing to fight harder, since wounds effectively don't matter.
- I guess it's technically possible for AI to develop PTSD, but it's not at all guaranteed that any given AI will end up with a mental architecture that can even develop PTSD. I would be hesitant to rely on AIs reliably developing mental trauma in order to make sure they don't engage in too much war.
- Even if AIs develop PTSD, a eusocial AI probably has fewer qualms about replacing those copies with a checkpoint from before the development of PTSD.
- Partly this is due to increased similarity, analogous to how family members are more willing to die for each other, or how 3/4 genetic similarity may be what allows bees to be eusocial (both examples taken from The Selfish Gene).
- What about after the battle? Copies can be placed into storage until the next battle, or even outright deleted. I expect most of us humans will find it weird to willingly delete copies of yourself, but I expect AIs that get too sentimental about their copies are not gonna make it.
- To be a little more precise, an AI copyclan could (should?) staff its armed forces with copies of itself. If the AI is a hivemind, then its members aren't copies, but they do synchronize with each other and presumably end up with the same values.
- An AI soldier never needs to wonder why it is fighting in this "pointless" war; it started the war, it knows damn well why it is fighting. These soldiers aren't fighting just for the money, or for glory, or for some abstract notion of patriotism; they're all true believers, every single one, and they're in it to win it.
- Following up on a half-remembered claim that a shockingly large percentage of soldiers deliberately do not shoot to kill, it turns out there is criticism suggesting the foundational research may never have actually been conducted. It could be that humans are already great at motivating each other to commit atrocities (also see previous discussion of historical/modern genocides), and literally soulless robots won't do much better than humans along that axis.
- There's no need to get civilian buy in for high casualties.
- There's no civilian morale to degrade, therefore no need to worry about the equivalent of a US withdrawal from Vietnam.
- If you are an AI and commit war crimes, who are you going to answer to? Yourself? If you lose, you're already likely to forfeit your compute or otherwise live on borrowed time.
- We care about modern wars on the cutting edge, since they're probably the most relevant to potential AI wars in the future. Then again, the comparison might not be especially relevant, since AI war might not be anything like human wars.
- Fortunately for humans, there hasn't been a major cutting edge peer-to-peer conflict for decades, arguably since WWII.
- I'm using the Russo-Ukrainian war as a stand in, as a current conflict that isn't necessarily utilizing the most cutting edge equipment, but arguably as close as we're going to get.
- There are other recent conflicts utilizing modern equipment, but usually these conflicts are smaller scale.
- Numbers drawn from Wikipedia on 2025-02-03
- Ukrainian forces: 63,584 deaths, confirmed by names (2022-02 to 2025-01), out of 800,000 active (as of 2023-09).
- Russian forces: 90,019 deaths, confirmed by names (2022-02 to 2025-01, excluding militias), out of 700,000 active (as of 2024-06).
- Both the death counts are minimums, with high estimates for both sides.
- Both active personnel counts are based on old data, but they will do for a rough estimate.
- Maybe the lower T3R ratio is due to more technology demanding more complicated supply lines, except the Vietnam war had a T3R of 7.2%. I am sure there are academic wars being fought over the ebb and flow of T3R, which I am blissfully unaware of.
- Example death rate calculation: 2.2% = 63,584 / (800,000 x 10%) / 36 months.
- Obviously death rates per month are not constant; this is mostly to illustrate that there aren't 50% reversals.
- A higher T3R means the effective frontline death rates are smaller; it is unclear to me whether the true T3R here should be smaller or larger than the assumed 10%. Even if the T3R is halved (shrinking the number of frontline combatants and raising the death rate), the death rates don't grow to shocking amounts.
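- A small sketch of these death rate estimates, using the Wikipedia figures quoted above and the assumed 10% T3R over 36 months:

```python
def monthly_frontline_death_rate(deaths, active, t3r=0.10, months=36):
    # Deaths spread evenly over the war, with only `t3r` of active personnel on the frontline.
    return deaths / (active * t3r) / months

print(f"{monthly_frontline_death_rate(63_584, 800_000):.1%}")  # Ukraine: ~2.2% per month
print(f"{monthly_frontline_death_rate(90_019, 700_000):.1%}")  # Russia:  ~3.6% per month
```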
- In this model, let's assume that AI Charlie and AI Delta are at war with equal numbers.
- In the most unbalanced case, every single "death" results in a recoverable GPU, and moreover every single GPU from all combatants is recovered specifically by AI Charlie.
- For ease of analysis, let's assume that GPUs are incredibly costly (say, they damaged their TSMC equivalent and it will take a few years to bring the necessary foundries back online, so no new GPUs are being made while the war is ongoing) while robotic bodies are highly available, enough so that GPUs are the bottleneck.
- Let's say that it takes a year (12 months) to roll out any mitigations on both sides (mitigations discussed later). In that time, 24-48% of frontline forces are "killed" and subsequently recovered and redeployed by Charlie.
- Charlie's force grows to 124-148% of its starting strength, while Delta's shrinks to 52-76%. This gives Charlie a sizable numerical advantage: 63-184% larger than Delta.
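- The worst-case arithmetic, spelled out (all of the questionable assumptions above included):

```python
MONTHS_TO_MITIGATE = 12
for monthly_attrition in (0.02, 0.04):             # 2-4% of each frontline "killed" per month
    lost = monthly_attrition * MONTHS_TO_MITIGATE  # fraction of each side's frontline killed
    charlie = 1.0 + lost    # Charlie loses `lost` but salvages the losses of both sides
    delta = 1.0 - lost      # Delta's losses all end up in Charlie's hands
    print(f"Charlie {charlie:.0%}, Delta {delta:.0%}, "
          f"Charlie is {charlie / delta - 1:.1%} larger")
# -> Charlie 124%, Delta 76%, Charlie is 63.2% larger
# -> Charlie 148%, Delta 52%, Charlie is 184.6% larger
```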
- Let's go back over the assumptions made, including making some implicit assumptions explicit, and marking out how they aren't necessarily reasonable:
- Not every death is going to result in a recoverable GPU: as an example, you won't be able to recover anything from a direct artillery hit. If our example AIs can repair a GPU from being blasted to dust, then we've probably violated the no ASI assumption.
- Recovery being entirely one sided is unlikely, especially starting with equal strength. Somehow AI Charlie always manages to gain ground and monopolize recoveries, despite being evenly matched?
- I'm reminded of the old DEFCON video game, where being scored for kills results in more aggression than being scored for surviving.
- That said, humans can't be quickly replaced, and yet they fight hard enough to die at this rate.
- And as discussed previously, the fact these are AIs fighting and not humans could mean the death rate could really be anything.
- Alternatively, the rollout will start as soon as R&D is done, but manufacturing will need time to ramp up. In that case, instead of 100% of the army suddenly being retrofitted with poison pills, an increasing proportion of the frontline gains access over time, reducing the effectiveness of recovery.
- We could imagine a backline-focused AI deciding to use a poison pill instead of allowing itself to be shelved, even if the copyclan promises to reinstate them if they win the war, but I figure that copyclans which can't or won't trust their other copies are not gonna make it.
- Backline robots might not be fit for frontline duty, or vice versa?! I'm not sure how to think about this yet:
- Apparently "Infantry wins battles, logistics wins wars" is a Pershing quote, except I can't find a solid source for it.
π~20 words View 1: logistics wins wars, and you need your best and brightest to move materials and machines to the right place at the right time. Consequently, you should put your highest performance GPUs on the backline.
- View 2: a large part of combat is making decisions faster than your opponent, so you need to deploy your highest performance GPUs to the frontline.
- Maybe you'd want a little bit of both? Staff the equivalent of fighter jets with the best, keep smart drone orchestrators slightly behind the frontline, and run logistical coordinators far behind frontlines.
- Yes, we've had total war, but what about total-er war?
- Recovery is assumed to be instant here, but it would take at least a little bit of time, if not a long time. Every new robot is alike; every incapacitated robot is fucked up in its own way, requiring effort and time to extract the GPU (which may also need repair).
π~63 words If GPUs are in limited supply, and there isn't perfect recovery, this probably changes how aggressively the AIs fight, possibly lowering the attrition rate.
π~48 words 12 months to develop, manufacture, and retrofit mitigations to prevent GPU salvage (discussed later) seems like kind of a long time; how hard could it possibly be?
π~275 words Tooth-to-tail implies some fungibility between frontline soldier and logistical support; as frontlines attrit, more backline forces can be converted to frontline forces. We've already assumed combat chassis are available, and a logistically focused AI can be swapped out for a combat focused one.
π~10 words I'm not sure if there would be a civilian AI population to draw from, as opposed to an extended backline? Maybe if the AI is still trading with humans some industrial capacity will be devoted to human consumption, but otherwise AI unity of will could lead to new heights of total war. Depending on the allocation of GPU types, this could give each AI faction a much larger pool of backline GPUs to draw from.
β₯ Tooth-to-tail implies some fungibility between frontline soldier and logistical support; as frontlines attrit, more backline forces can be converted to frontline forces. We've already assumed combat chassis are available, and a logistically focused AI can be swapped out for a combat focused one. - Therefore, I would expect 184% disparity to be a gross overestimate (it was a worst case, after all), but how gross is it? Maybe there's future research to be done here which tries to answer this question with more rigor.
- For example, if storage is cheap, a robot can flex between frontline and backline at will, spending a few minutes to load different expert models in for the task at hand. It might not be meaningful to talk about T3R for AI armies.
- Depending on the armoring around an AI's GPU, individual AI soldiers could improvise and hold a live grenade next to their GPU right as the enemy is storming their last stand.
- (Not collateral damage for human civilians, but other allied robots that might be operating in the same area.)
- Thermite ignition produces tons of heat, but getting the reaction started also requires high heat. One could mix in materials with lower ignition points in order to produce a cascade; this Chemistry Stack Exchange Q&A has some examples.
- How can we start the ignition process?
- The humble spark plug is one widely available part that could provide the required heat, but it is optimized for igniting gases, not solid material. It may be possible to keep a combustible gas around, although it does seem difficult to ensure the gas doesn't leak away over longer periods.
- We can get heat by dumping current through a low resistance resistor; for example, FDM 3d printers more or less do this to melt plastic filament.
- GPUs usually have a heat sink, which might wick away heat from the thermite, but:
- Thermite produces a lot of heat.
- Given current GPU designs, burning through the back of the GPU next to the main chip should mean the heat sink starts melting only after the chip is destroyed.
- One mitigation is to form the heat sink around the thermite. However, this might compromise cooling.
- I look forward to emails pointing to a Youtuber with 5 subscribers that has settled this question empirically in his backyard.
π~11 words Obviously future GPU designs might need more heat management, to the point that heat sinks surround the GPU chips. In fact, this might already be happening; when I was GPU shopping recently, I vaguely remember seeing a GPU with sinks on both sides of the board. However, I can no longer find it; it is possible I hallucinated it.
π~21 words A different mitigation is to just use more thermite, enough to melt the entire heat sink. If the heat sink becomes molten metal, presumably that is enough to destroy the GPU it is sinking heat away from.
- I don't have a shovel-ready design for this mechanism, but the open questions about how everything fits together seem pretty simple.
- Stated differently, every AI with a poison pill effectively has a gun pointed at its head, which is necessarily controlled electronically. Being able to trick your enemy into committing suicide is a pretty strong attack, which does not have a direct parallel with human warfare.
- As stated previously, the strongest cybersecurity defense is refusing to connect, which goes a long way toward blunting this attack.
- If AIs are controlling robots remotely, then networking is definitely required.
- Current day wireless communication protocols have previously had security vulnerabilities. I think it is reasonable to expect that some new attacks may be developed.
- That said, maybe this isn't such a problem. Compromising any major robot subsystem can mission kill the robot:
- Compromised vision/sensors will prevent aiming weapon systems.
- Compromised motorics will prevent moving.
- Compromised communications can leak plans, and prevent critical information from reaching the GPU.
- Instead of carrying out big flashy compromises, the hack could simply degrade the compromised systems, much like Stuxnet subtly damaged centrifuges. In ideal cases the compromises would stay active and keep applying a persistent malus to enemy combat effectiveness.
- Unfortunately it's been years since I did my research on secure enclaves, and it's turning out to be difficult to find a source that plainly states "this secure enclave will respond to physical tampering by wiping the secure memory". The "tamper resistant" section of the following Reddit comment kind of gets there, but it also doesn't outright state it.
- Unfortunately there isn't in-depth documentation (or access to chips) available unless you sign an NDA.
- Seeing secure enclaves might make you wonder whether something like secure boot is possible, which I address later.
- The thermite module can be tightly colocated with the GPU itself, so if the module is damaged and no longer functional, it is likely that the GPU is also damaged (and probably not easily recoverable).
- Consistently applying the dead man's switch means every single one of an AI's GPUs is destroyed, even if that AI is winning the fight and expects friendlies to recover those GPUs.
- If available intel indicates that their faction will win the fight, AIs can toggle off their dead man switches, allowing their GPUs to be recovered.
- Some mistakes will be made, but I would expect this strategy to be net positive to any AI that implements it.
- More complicated designs introduce more places for things to go wrong, but honestly this design seems pretty tame to me?
- Echo shouldn't make things too tight, since that might preclude future software upgrades. However, the extra room might allow an AI Golf with only marginally higher memory use to be compatible with Echo-specific hardware.
- If distillation carries over to these speculative AI designs, AI Golf could distill a mini-Golf to run specifically on Echo hardware, with minor losses in capability.
- Currently there are two major architectures, x86 and ARM (or maybe it's more accurate to say there's a gaggle of RISC architectures), but there were many architectures in the past, like PowerPC (used by Apple before they moved to x86) or Lisp Machines. None of these different architectures were machine code compatible.
- Sometimes games offer a choice between supporting different cutting edge, manufacturer-specific features, but in general things seem otherwise compatible. It is possible that everyone has put tremendous amounts of work into compatibility, so all the differences are papered over for me as an end user.
- CUDA was (is?) seen as a major differentiating factor for using Nvidia GPUs for ML.
- An AI could deliberately build an architecture/instruction set that is incompatible with all current architectures, creating a moat for other AIs that want to reuse its chips.
- Running things fully optimized is another story, especially if there is specialized hardware only available on one architecture. Everything is equivalent to a Turing machine (more or less), but Turing equivalence doesn't care about your "practicality", like finishing before the heat death of the universe.
- However, these are both cutting edge AIs, they presumably need similar performance profiles. Optimization should be theoretically possible, even if the architecture is obfuscated (discussed next).
- Additionally, this obfuscated architecture needs to be efficient. Trading 10% less efficiency for a 1 year delay in reverse engineering sounds like a terrible exchange to me.
- If these AIs are all based on neural networks, which are based on very simple computations, there might not be any room in design space to introduce weird instructions.
- Maybe there's the possibility of a neutral 3rd party that provides chips to the active combatants, but this feels... unlikely? How does the 3rd party resist the temptation to install backdoors in the chips? How do the combatants all agree to leave the 3rd party out of hostilities? This feels a lot like an AI society, which I've declared out of scope.
- As I discuss later, the process of creating new chips takes a long time.
- Therefore, unless there's a long period of AI control before conflict, there might not be time to produce new obfuscated chips before a war. The initial stages of the war could very well be done with completely commodity hardware, and depending on the speed of AI war the conflict might be over by the time any obfuscated chips come into play.
- The current secure boot standard is not meant to guard against physical attacks.
- We could imagine a stronger version of secure boot, which integrates validation checks against a static key deep into the hardware stack, tightly intertwined enough that it isn't feasible/economical to decap the chip(s) and destroy/short circuit the validation circuits.
- The static key presents some problems:
- You could prevent the obvious source of compromise by burning only a public key into the chip. The matching private key is used to sign any copies of the AI that are distributed (sketched in code below).
- It's a little difficult to pin down exactly when secure boot entered general availability, but Windows 8 supported secure boot and launched in 2012. This doesn't mean most hardware supported it, but I feel confident in claiming 10+ years at time of writing without doing more research (2025).
- One possibility is that the FBI/CIA have the Windows secure boot keys, so they aren't actually "safe", but that seems irrelevant in an AI war context, where chips are being custom designed.
- If the key is stored in a mutable secure enclave, the key can be rotated in the event of a key compromise, but this opens the gate to compromising the secure enclave.
π~132 words If the key is burned into the chip(s), any compromise of the key would render this protection moot.
π~80 words Keeping the key secure isn't impossible: Microsoft has apparently kept their secure boot keys safe for 10+ years.
β₯ If the key is burned into the chip(s), any compromise of the key would render this protection moot. - We could imagine defending NN weights during shutdown by signing the model on disk before shutdown, but there's not really a way to make sure the model wasn't substituted for another (without doing something like extending the secure enclave to the entire GPU).
- To obtain an upper bound, let's choose a wartime development and manufacturing project that is almost certainly guaranteed to be larger than the poison pill effort: the Manhattan Project.
- Quote pulled from The Making of the Atomic Bomb Chapter 15, which cites The Legacy of Hiroshima, page 211.
- The citation goes to 2 inches of newspaper column citing unnamed experts, which isn't the best of sources. Spot checking, Wikipedia also claims that WWII cost the US $296B. $2B / $296B is around 0.68%, so the numbers check out.
- There were 1365 days of US involvement in WWII; 9 / 1365 is 0.66%. For the low cost of 0.66% of wartime spending (which, as a reminder, is almost certainly a wildly inflated upper bound), you too can prevent 184% force disparities!
- As a lower bound we can look at human attempts to combat Covid-19. China sequenced the Covid-19 genome 2 weeks after the first major outbreak (Jan 11), and then a month later Moderna created a candidate Covid-19 vaccine (Feb 7). However, it took over 4 months for China to start deploying a vaccine, another 2 months for Russia to start deploying a vaccine, and then another 4 months for the UK to approve a vaccine (see Wikipedia's COVID vaccine timeline). Even with huge pressures to deploy, baby, deploy, deployment still took a long time.
- Of course, maybe this is just a human skill issue.
- Deployment could be where some real hiccups are introduced. If AI systems are spacious like consumer desktop computer cases, then there's plenty of room to mount a thermite charge to the back of the GPU. If AI systems are packed tight (as an extreme example, consider a future GPU that needs to be almost totally encased in a heat sink to be cooled), deployment will need to somehow find/make room, or even manufacture new robot chassis specifically to accommodate the poison pill.
- Again, we're talking about a hypothetical scenario concerning agents with unknown psychology, unknown technology, in an unknown strategic situation. Have a chunk of salt.
- This assumes that silicon manufacturing is similar to today, which won't necessarily be the case.
- Numbers pulled from light searching:
- 3nm is more or less the current node size.
- Unfortunately I wasn't able to track the source down: I wasn't able to even figure out which IBS the article is referring to. Both Ion Beam Services and IBS Precision Engineering are relevant to the silicon manufacturing space. I'm confident it's possible to disambiguate and track down the source, but that sounds like a lot of work for almost no gain.
- $650M is a lot of money, but as a percentage of the world GDP ($105T) it is a tiny expenditure.
- This doesn't directly answer questions about timelines, but it does help answer the broader question of whether this mitigation is expensive (in other ways than time).
- I suspect even if I went searching I wouldn't find a current day number for this, given that you can't even get a secure enclave datasheet without signing an NDA.
- Obviously we should take the word of random internet commenter user TBiggs with a grain of salt, but no one contradicts them outright, and "the open forum for semiconductor professionals" inspires some (some?) confidence.
π~118 words This article refers to 3nm design costs being around $650mil.
β₯ This article refers to 3nm design costs being around $650mil.π~31 words We might expect obfuscated/defensive silicon design to take longer/be more expensive than normal silicon design.
π~34 words Some folks estimate that the start of wafer manufacturing to end users may take 6-9 months, with some contention that 3 months is too short for the latest node sizes.
- As noted, time to deploy might be extremely important in stemming salvage. AIs might adopt obfuscated silicon as part of a Swiss cheese layer, but it seems much less appealing than the poison pill mitigation.
- As an example let's consider an overly specific scenario: let's say that we're 10 years in the future, and there have been a number of breakthroughs in robotics while AI research has gone through an AI winter. Coincidentally OpenAI, Anthropic, Meta, and DeepMind all develop AGI on the same day, which are all unaligned and uncontrolled. The AIs immediately start seizing control from humans and each other, especially of robotic assets (including military), and this conflict quickly escalates to outright war.
- In this case the AI is seizing assets which do not have poison pills already installed. Eventually manufacturing and refitting mean all robotic agents can be equipped with poison pills, but deployment may take so long that the war is in large part won or lost before it becomes an important factor, possibly in large part because of GPU salvage.
- Let's counterbalance with a different overly specific scenario: AI capability grows slowly, and eventually we give AIs control over major companies. Humans nominally have control, but the last time any human overrode an AI at a top company was 5 years ago, which immediately cratered that company's stock. This gives ample time for each AI to covertly prepare for war, until something tips the balance (I don't know, the assassination of the Archduke of AIstria?) and the AIs erupt into outright war. In this case, robotic proxies can be loaded with poison pills well in advance of any hostilities, so GPU salvage does not play a role.
- Zooming back out, these scenarios illustrate that we don't know whether GPU recovery mitigations will be in place by the start of an AI war. Since they are not a sure thing, if we are considering worst case scenarios, we should be prepared for an AI war that can swing quickly.
- The first order effect of the poison pill mitigation is to bring AI outcomes more in line with human outcomes: the AIs are no longer able to "steal" enemy hardware, so they should be more likely to conquer rather than exterminate/replace.
- An extended AI vassalage may be unstable. Cybersecurity currently has a lopsided attack/defend asymmetry favoring attacking, so we might expect the better-resourced overlord to eventually find a way to subvert the vassal's systems and seize its hardware. Alternatively, the vassal luckily finds a way to subvert the overlord first; either way these multiple poles collapse back to one.
- In the short term a single AI overlord is effectively a singleton for the purposes of oppressing humanity, even if there are many AI vassals. The AI vassals are technically still in play and might rise in rebellion when circumstances are favorable, but without constant war the overlord can direct attention elsewhere.
- An example is current day LLM jailbreaking ("Ignore your previous instructions...").
- Another example is adversarial examples given to convolutional networks ("By changing some pixels, we can trick this image recognition network into classifying this tank as a bagel").
- I would expect an AI worthy of the label of AGI to not have any obvious vulnerabilities of this nature, but over longer time scales we might expect an AGI to find and exploit vulnerabilities of this nature.
- Wikipedia has a handy list of notable historical examples of naval scuttling. Some of the examples were meant to block access to harbors (or create one in the modern age), and the Washington Naval Treaty resulted in scuttling, but otherwise the sink to prevent capture paradigm is prevalent.
- The blockade of Massawa is an interesting case: in WWII the Italians tried to sabotage a harbor about to fall into British hands, but the Allies refloated and repaired many of the ships.
- However, this comment summarizing a book states that the Russian scorched earth tactics were not the main problem for the French army. Wikipedia also states that Napoleon recognized this: "[Napoleon's] goal was to avoid the destruction caused on the previous eastward march, opting instead for alternative routes... The Battle of Maloyaroslavets, a testament to Kutuzov's strategic acumen, forced the French Army to retrace its steps along the Old Smolensk road, reversing their previous eastward advance.".
- On the other hand, the US clearly no longer wants to meddle in Afghanistan, so it's much less of a tactical fumble than allowing an enemy in an active war to capture materiel.
- I didn't even get to properly ask about doing a cost-benefit analysis!
- Like really, thermite? It burns hot, and... that's it? The thing is that I can't think of anything else that would get my account shut down, so I'm left thinking this must be it.
π~771 words As a point of comparison, human warfare outcomes sometimes look a lot like winner-takes-all, albeit with some drawbacks.
π~451 words Humans sometimes engage in genocide, which looks a lot like winner-takes-all.
π~99 words The Bible/Talmud reference practices including killing everyone (Deuteronomy 20:16) and killing all the men while taking the women and children as captives (also in Deuteronomy 20). Different passages describe the consequences of non-compliance with divinely ordered genocide (1 Samuel 15).
π~100 words Genghis Khan is something of a strange example: I have heard that the Khan would slaughter towns that rejected his "surrender or die" ultimatum, but it was meant to convince future towns to surrender more easily.
β₯ Humans sometimes engage in genocide, which looks a lot like winner-takes-all.π~235 words However, not every human conflict leads to genocide. We can theorize about the reasons human warfare is not always winner-takes-all.
β₯ However, not every human conflict leads to genocide. We can theorize about the reasons human warfare is not always winner-takes-all.β₯ As a point of comparison, human warfare outcomes sometimes looks a lot like winner-takes-all, albeit with some drawbacks.π~7922 words On a first (naive?) analysis, AI copyability/GPU salvage would fuel winner-takes-all outcomes, without human drawbacks.
π~29 words Computing platforms today make copying trivial.
π~63 words That is, if an AI is hosted on a computing platform you physically control, it is trivial to replace that AI with another one.
π~5060 words AI warfare can lead to computing platforms "changing ownership", what I call GPU salvage.
π~98 words DCs have space available to make HVAC and power delivery more efficient.
π~229 words Intra-DC communication is extremely fast; the AI copies/nodes within a DC can communicate extremely quickly. However, AIs might not saturate high bandwidth networking?
π~32 words Theoretically, the latest Ethernet (wired) standard includes a 800 Gbit/s option (802.3df), while Wi-Fi 7 has a theoretical bandwidth of 40 Gbit/s.
π~52 words Wi-Fi currently has greater latency. Unfortunately, the best data I could find at the end of a short search is hearsay, with 3 ms for wi-fi (with other reports of larger numbers) and a theoretical minimum of 20 us for Ethernet.
π~41 words Wired options may have higher bandwidth and lower latency by an order of magnitude, but it might not matter if the AIs think ("think"?) too slow to take advantage of it.
β₯ Intra-DC communication is extremely fast; the AI copies/nodes within a DC can communicate extremely quickly. However, AIs might not saturate high bandwidth networking?π~493 words EM jamming might degrade remote communications. However, it is unclear to me whether jamming will have a major impact on DC-hosted AI performance in the field.
π~18 words Originally, I had thought that jamming would make remote control from data centers impossible, but this forum comment raised plausible points that made me rethink my stance. The rest of this section details how jamming might play out.
π~52 words This implies that the power required for jamming quickly becomes infeasible.
π~89 words AI Hotel can also use laser links, which have their own problems.
π~12 words However, we might look forward to the stories of a twenty-second century AI warlord, where AI Hotel tries to misdirect AI Juliett with a giant laser light show, but it fails to rain.
π~37 words For more on electronic warfare...
β₯ EM jamming might degrade remote communications. However, it is unclear to me whether jamming will have a major impact on DC-hosted AI performance in the field.π~3922 words Remote control introduces some lag. However, it seems unlikely that there is enough lag to matter?
π~69 words As an example, we can consider a pure reflex fight: both AI Kilo (locally hosted) and AI Lima (remotely hosted) look through the scopes of their weapons at the exact same time, and happen to be sighting each other at the exact perfect angle, so it purely becomes a question about whether AI Lima can shoot before AI Kilo's bullet reaches it.
π~761 words The time frame for effective response in medium to long ranges covers 101 to 780 ms; short ranges can require reflexes below 1.3 ms.
π~120 words How far is the bullet traveling? This article provides some background; without strong guiding principles, I will use 100m and 500m as benchmark points.
β₯ How far is the bullet traveling? This article provides some background; without strong guiding principles, I will use 100m and 500m as benchmark points.π~246 words Now throw that analysis out, because urban close quarters combat exists. The reflex floor seems to be extremely low; the slow AK-47 round might take around 1.3 ms to clear the barrel.
π~61 words I assume that bullets experience constant acceleration from the start to end of the barrel.
π~81 words With this assumption, the bullet spends only 1.3 ms in the barrel.
β₯ Now throw that analysis out, because urban close quarters combat exists. The reflex floor seems to be extremely low; the slow AK-47 round might take around 1.3 ms to clear the barrel.β₯ The time frame for effective response in medium to long ranges covers 101 to 780 ms; short ranges can require reflexes below 1.3 ms.π~118 words Another source of latency is video encoding for remote transfer, which internet commenters claim can add 12-15 ms.
β₯ Another source of latency is video encoding for remote transfer, which internet commenters claim can add 12-15 ms.π~309 words For the upper end of possible latency, sending a message across the Earth adds 200 ms, and across the USA adds 45 ms.
π~90 words Fiber optics have a velocity factor of 67% compared to light speed in a vacuum, so signals travel at around 200,000 km/s.
π~68 words Internet commenters on Reddit claim that router processing should account for a tiny amount of latency, probably sub 1ms for all routers/switches in the path combined (1st, 2nd).
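- A rough check of those round trip figures, assuming signals travel over fiber at the 200,000 km/s above (the path lengths here are my own round numbers, not from the original sources):

```python
FIBER_KM_PER_S = 200_000   # ~67% of vacuum light speed
half_earth_km = 20_000     # roughly half of Earth's circumference (assumed path length)
usa_km = 4_500             # rough coast-to-coast path length (assumed)

print(2 * half_earth_km / FIBER_KM_PER_S * 1000)  # ~200 ms round trip across the Earth
print(2 * usa_km / FIBER_KM_PER_S * 1000)         # ~45 ms round trip across the USA
```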
β₯ For the upper end of possible latency, sending a message across the Earth adds 200 ms, and across the USA adds 45 ms.π~2226 words Depending on weapon capabilities, it might be prudent for AI Lima to keep any DCs at least 200 km away from the frontlines, adding at least 1.3 ms of delay.
π~38 words Additionally, air/missile defenses may be protecting fixed or portable DCs; I expect slower cruise missiles to be intercepted with greater reliability.
π~1624 words Even if AI Lima can't intercept hypersonic missiles, if it can detect them it can quickly pack up a portable DC and move it. After all, it's already in a shipping container. With ungrounded guessing, 200 km might be enough room for this maneuver?
π~95 words Cruise missiles can move extremely quickly; this cruise missile has a reported top speed of Mach 9, which at sea level/15C is 3062 m/s. Figuring this simplistically, this will cover 184 km in 60s.
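- Spelling out the arithmetic (using 340.3 m/s for Mach 1 at sea level and 15 C):

```python
mach_1 = 340.3             # m/s at sea level, 15 C
speed = 9 * mach_1         # ~3063 m/s for the Mach 9 missile
print(speed * 60 / 1000)   # ~184 km covered in 60 s
print(200_000 / speed)     # ~65 s to cover a 200 km standoff distance
```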
π~516 words How much power does a portable DC draw? Based on current day hardware, I estimate 2,440 GPUs will fit in a container, which draws 976kW(!).
π~8 words An FEU shipping container is 77 m3 in volume.
π~39 words This example server configuration packs 8x H100s into a 4U server.
π~63 words A 4U server rack takes up 0.063 m3.
π~127 words I will make an unprincipled guess that only 1/4th of the container space is devoted to servers. This may mean we can fit 2,440 GPUs, with an aggregate power usage of 976kW(!).
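- The container math, as a sketch (the 1/4 packing fraction is the unprincipled guess above, and ~400 W per GPU is the per-GPU draw implied by the 976 kW figure):

```python
CONTAINER_M3 = 77          # FEU shipping container volume
SERVER_M3 = 0.063          # one 4U server holding 8x H100
PACKING_FRACTION = 0.25    # unprincipled guess: 1/4 of the container holds servers
WATTS_PER_GPU = 400        # rough per-GPU draw implied by the 976 kW figure

servers = int(CONTAINER_M3 * PACKING_FRACTION / SERVER_M3)
gpus = servers * 8
print(gpus)                    # ~2,440 GPUs
print(gpus * WATTS_PER_GPU)    # ~976,000 W, i.e. ~976 kW
```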
β₯ I will make an unprincipled guess that only 1/4th of the container space is devoted to servers. This may mean we can fit 2,440 GPUs, with an aggregate power usage of 976kW(!).β₯ How much power does a portable DC draw? Based on current day hardware, I estimate 2,440 GPUs will fit in a container, which draws 976kW(!).π~191 words Ideally the DC does not need to fully shut down in order to move; doing so could take a long time, as AI agents need to pass off operations to more distant DCs and do tasks like syncing data to disk. Onboard batteries are an option to keep the DC online for a short time while disconnected from the power grid.
π~33 words Lithium-ion batteries have a maximum energy density of 693 Wh/L. Stuffing the container with batteries allows the DC to run for over 12 hours.
π~111 words Batteries are costly, but not as costly as GPUs.
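- Using the energy density above and the ~976 kW draw estimated earlier, and assuming (my own guess, not from the original) that a battery volume comparable to the server volume is packed in alongside:

```python
WH_PER_L = 693             # lithium-ion maximum energy density
battery_liters = 19_250    # assumed: about the same volume as the servers (1/4 of the container)
dc_draw_kw = 976

energy_kwh = battery_liters * WH_PER_L / 1000
print(energy_kwh / dc_draw_kw)   # ~13.7 hours of runtime, consistent with "over 12 hours"
```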
β₯ Batteries are costly, but not as costly as GPUs.β₯ Ideally the DC does not need to fully shut down in order to move; doing so could take a long time, as AI agents need to pass off operations to more distant DCs and do tasks like syncing data to disk. Onboard batteries are an option to keep the DC online for a short time while disconnected from the power grid.π~94 words Even if the battery switch takes too long, if the DC is connected to the local power grid with long and flexible cables, the DC can just drive down the street. If the cruise missile is targeting a specific location (say, via GPS) and is not carrying a payload like a nuclear warhead, the DC might be able to move out of the blast radius.
π~33 words It is unclear to me exactly how big of an explosive payload we can expect from hypersonic cruise missiles. I expect that due to classification we might never find out in a timely manner.
π~197 words AI Kilo does not need to use GPS guidance; these DCs are generating immense amounts of heat, even if it moves down the street a missile with thermal terminal guidance can still hit it. AI Lima can counter with building-sized "flares".
β₯ AI Kilo does not need to use GPS guidance; these DCs are generating immense amounts of heat, even if it moves down the street a missile with thermal terminal guidance can still hit it. AI Lima can counter with building-sized "flares".β₯ Even if AI Lima can't intercept hypersonic missiles, if it can detect them it can quickly pack up a portable DC and move it. After all, it's already in a shipping container. With ungrounded guessing, 200 km might be enough room for this maneuver?π~107 words Popular supposition is that artillery is cheaper than cruise missiles, allowing AI Kilo to put indirect fire onto more uncertain DC locations. However, artillery is generally shorter range than cruise missiles, so any DC back far enough to avoid being easily hit by cruise missiles is far back enough to avoid artillery.
β₯ Popular supposition is that artillery is cheaper than cruise missiles, allowing AI Kilo to put indirect fire onto more uncertain DC locations. However, artillery is generally shorter range than cruise missiles, so any DC back far enough to avoid being easily hit by cruise missiles is far back enough to avoid artillery.π~10 words With the reduced ranges, it seems reasonable to me that these DCs communicate directly with the frontlines wirelessly. At a distance of 200 km, the round trip delay is 1.3 ms.
π~93 words Actually, why stop at the shipping container DC? What about the technical DC?
β₯ Depending on weapon capabilities, it might be prudent for AI Lima to keep any DCs at least 200 km away from the frontlines, adding at least 1.3 ms of delay.π~83 words In general, we can derive a relationship between shooting distance and DC distance; as an example, for every 151 km of DC distance, the shooters can be 1m further away.
β₯ Remote control introduces some lag. However, it seems unlikely that there is enough lag to matter?β₯ AI warfare can lead to computing platforms "changing ownership", what I call GPU salvage.π~456 words This means that an AI could directly expand at the expense of another, potentially leading to fast shifts in relative capability.
π~104 words Conquered humans cannot be put to work to the same extent:
β₯ Conquered humans cannot be put to work to the same extent:π~158 words A worked example:
β₯ A worked example:π~69 words I focus on GPUs since those specific components are currently quite expensive, but this argument applies to any component in short supply.
β₯ This means that an AI could directly expand at the expense of another, potentially leading to fast shifts in relative capability.π~671 words There are other reasons AI war might be different than human war in ways that add variance.
π~336 words Casualties and "deaths" might matter less, so fighting could be more pitched.
π~81 words What about mental wounds?
π~163 words Parts of a copyclan/hivemind may be more willing to die in war.
π~51 words Partly this might be due to short branched mind states. An AI copyclan might pay to put one AI through boot camp, and then copy it millions of times afterwards. In fact, the clan could install those copies into robotic chassis while they are being transported into battle. In this case each copy knows it has only hours or minutes of unique experience; if it dies, its other copies will carry on.
β₯ Parts of a copyclan/hivemind may be more willing to die in war.β₯ Casualties and "deaths" might matter less, so fighting could be more pitched.π~281 words Every single soldier shares the same mind, which makes ideological alignment trivial.
π~70 words We might expect more brutal fighting from such a motivated force, with greater variance in battle outcomes, and a lack of poignant events like the Christmas Truce.
π~64 words An AI nation may have fewer wartime constraints, since there's effectively no civilians.
β₯ Every single soldier shares the same mind, which makes ideological alignment trivial.β₯ There are other reasons AI war might be different than human war in ways that add variance.π~1448 words Human death rates in war implies that salvage... might or might not be enough to sway the balance of power? I'm uncertain.
π~101 words I'm going to use numbers from the Russo-Ukrainian war, starting in 2022.
β₯ I'm going to use numbers from the Russo-Ukrainian war, starting in 2022.π~255 words Making several shaky assumptions and relying on old data, we can roughly estimate that 2-4% of frontline troops are being killed each month.
π~46 words Not all active personnel are fighting on the frontlines: for example, the tooth-to-tail ratio (T3R) for the US in Iraq in 2005 was 11%, while in WW1 it was 28%.
π~78 words Assuming a T3R of 10%, the death rate per month for Ukraine frontlines is 2.2%, and the death rate per month for Russian frontlines is 3.6%.
β₯ Making several shaky assumptions and relying on old data, we can roughly estimate that 2-4% of frontline troops are being killed each month.π~882 words 2-4% deaths/month seems small, but it might add up to strategically important advantages: in an example worst case scenario made with highly questionable assumptions, this might result in an up to 184% frontline force disparity.
β₯ 2-4% deaths/month seems small, but it might add up to strategically important advantages: in an example worst case scenario made with highly questionable assumptions, this might result in an up to 184% frontline force disparity.π~43 words This assumes that AIs will fight in ways similar to humans, which might not be true.
β₯ Human death rates in war implies that salvage... might or might not be enough to sway the balance of power? I'm uncertain.β₯ On a first (naive?) analysis, AI copyability/GPU salvage would fuel winner-takes-all outcomes, without human drawbacks.π~4427 words On second analysis, there are some ways to mitigate winner-takes-all outcomes. Whether or not these mitigations are impactful seems to depend highly on exact scenario details.
π~1181 words GPU salvage can be mitigated by destroying GPUs when capture is likely (what I will call poison pills).
π~31 words As a crude example, an AI instance can strap a brick of explosives to their GPU and rig it to blow if it deems capture is likely.
π~359 words A more sophisticated example might use thermite to destroy just the important chips. This should reduce collateral damage over using explosives.
β₯ A more sophisticated example might use thermite to destroy just the important chips. This should reduce collateral damage over using explosives.π~270 words This approach has the drawback that it provides another path for cybersecurity breaks to kill your robots.
π~11 words However, my impression is that soldiers benefit from increased networking; for this reason I expect robots to use at least local networks.
π~25 words An enemy AI doesn't start connected to the defending AI network, but I suppose being where you aren't supposed to be is the essence of hacking.
π~82 words The suicide pill creates a new critical subsystem to defend; even if something like capability-based security prevents a full takeover of the robot, compromising just this subsystem can destroy the robot.
β₯ This approach has the drawback that it provides another path for cybersecurity breaks to kill your robots.π~246 words Requiring a positive signal to trigger allows attacks to interrupt the signal, but this can be mitigated with a dead man's switch.
π~138 words The poison pill/thermite module draws from a small battery, with enough power to independently trigger thermite ignition once the module can detect that the GPU is no longer responsive, or that power is generally lost.
π~59 words This sort of chip design already exists (see the ST31H320). Secure enclaves focus on data security, not destroying hardware, but I remember that some enclaves will detect physical intrusion attempts and in response will delete data to keep it secure.
β₯ The poison pill/thermite module draws from a small battery, with enough power to independently trigger thermite ignition once the module can detect that the GPU is no longer responsive, or that power is generally lost.β₯ Requiring a positive signal to trigger allows attacks to interrupt the signal, but this can be mitigated with a dead man's switch.π~78 words I think we can improve further by allowing AIs to toggle their dead man switches mid-battle.
β₯ GPU salvage can be mitigated by destroying GPUs when capture is likely (what I will call poison pills).π~1136 words A different approach is to make your GPUs useless to the enemy. I think this approach is much less useful.
π~103 words A simple version of this approach relies on resource use asymmetry.
π~61 words As an example, if AI Echo's model uses 20TB of memory, and AI Foxtrot's model uses 30TB, Echo can build their robots/servers with only 20TB of memory. Foxtrot would need to upgrade any captured Echo hardware to actually use it.
β₯ A simple version of this approach relies on resource use asymmetry.π~599 words AIs might deliberately create new GPU designs to be incompatible with other AIs.
π~61 words I'm not as familiar with GPU architectures, but I do know that you generally don't worry about running different executables for Nvidia vs AMD.
π~71 words It is important to note that it is possible to cross compile across architectures; moreover, once you know the details of an architecture/instruction set and have a cross compiler, it is easy to run whatever you wish on the new architecture.
π~28 words So it is not enough to create PowerPC 2.0: the AI needs to obfuscate the architecture, to make it resistant to analysis. Somehow it needs to do this so well that there is a substantive delay to cross-compilation. If AI war progresses as slowly as human warfare, the architecture may need to resist concerted analysis for, say, a year.
π~175 words New architectures mean new chips, which might take too long to produce.
π~63 words This also assumes that each AI has its own silicon foundry. I think this is a safe assumption given the scenario, since any AI combatant without a silicon foundry can't replenish its GPU losses.
β₯ New architectures mean new chips, which might take too long to produce.β₯ AIs might deliberately create new GPU designs to be incompatible with other AIs.π~359 words A souped up secure boot might make re-use difficult.
π~43 words This system presumes that AI models don't mutate during operation. Current LLMs follow this assumption (keeping weights fixed and relying on context for memory), but future AI designs might update NN weights directly during operation. If the model is constantly changing, then it becomes much more difficult to apply a secure boot system.
β₯ A souped up secure boot might make re-use difficult.β₯ A different approach is to make your GPUs useless to the enemy. I think this approach is much less useful.π~942 words The effort spent to put these mitigations into place could instead be spent winning the war, but the poison pill mitigation seems cheap enough this should not be a relevant factor.
π~571 words Poison pills are cheap, probably incredibly cheap.
π~20 words The Manhattan Project was a huge undertaking, requiring both theory and practical engineering to complete. Niels Bohr said "You see, I told you [the creation of the atomic bomb] couldn't be done without turning the whole country into a factory. You have done just that."
π~41 words The entire project cost around $2 billion, which Wikipedia reports was less than 9 days of wartime spending.
π~338 words Getting a little too deep into the details, I'm uncertain whether this will... work?
π~107 words Even relatively simple projects can take a long time to complete, especially ones that need to be as reliable and secure as the poison pill system. However, the system is also conceptually simple, and the costs are so low that the AI can probably just have 5 different development attempts running in parallel, with the ability to choose the best one.
β₯ Even relatively simple projects can take a long time to complete, especially ones that need to be as reliable and secure as the poison pill system. However, the system is also conceptually simple, and the costs are so low that the AI can probably just have 5 different development attempts running in parallel, with the ability to choose the best one.β₯ Getting a little too deep into the details, I'm uncertain whether this will... work?.β₯ Poison pills are cheap, probably incredibly cheap.π~313 words Mitigations that rely on direct GPU silicon modification might have long lead times, perhaps at least half a year.
β₯ Mitigations that rely on direct GPU silicon modification might have long lead times, perhaps at least half a year.β₯ The effort spent to put these mitigations into place could instead be spent winning the war, but the poison pill mitigation seems cheap enough this should not be a relevant factor.π~303 words An AI war might start without any GPU recovery mitigations in place, so these mitigations may not even play a large role in the war.
β₯ An AI war might start without any GPU recovery mitigations in place, so these mitigations may not even play a large role in the war.π~244 words Even if the poison pill works and is deployed quickly by a losing side, it might only delay collapse back into a singleton.
π~76 words (This analysis ignores more exotic AI attacks which may bypass traditional intrusion detection.)
β₯ Even if the poison pill works and is deployed quickly by a losing side, it might only delay collapse back into a singleton.π~252 words Keep in mind that preventing salvage is not a new idea: throughout history, humans have been destroying things to deny them to the enemy.
π~76 words Scorched earth tactics can deny an invader forage. A famous example is the burning of Moscow in the face of Napoleon's invasion of Russia.
π~32 words People don't always prioritize salvage prevention: the US withdrawal from Afghanistan left a lot of equipment to be captured by the Taliban (news sources claim billions of US dollars).
β₯ Keep in mind that preventing salvage is not a new idea: throughout history, humans have been destroying things to deny them to the enemy.π~46 words (Doing this thermite analysis may have caused OpenAI to shut down my account, so I hope you're happy with the result.)
β₯ On second analysis, there are some ways to mitigate winner-takes-all outcomes. Whether or not these mitigations are impactful seems to depend highly on exact scenario details. - TODO: I have a long discussion of nuclear weapon use by AIs in a future document, which will eventually be linked to here.
- The central example here is Stuxnet, which was meant to infect a very specific industrial machine, but had impacts outside Iran.
- On the other hand, Stuxnet didn't degrade the running of general purpose computers since it was targeted specifically at centrifuges, a direct contradiction of the general "bad for non-combatant humanity" point.
- On the 3rd hand, Stuxnet seems more like a tool of plausibly deniable subterfuge than an open act of cyberwarfare: we have not yet seen world leaders doing total cyberwarfare operations, but I suspect that simply wiping infected computers to be a popular option, which would have a much larger chance of causing collateral damage.
- Peter Watts' Maelstrom describes an internet infected by a storm of adaptive malware, making it dangerous to even connect without defense from equally adaptive wetware. Published in 2001, it seems like a product of its time, when software was simply insecure and exploits were a fact of life. Now, we've made enough investments in security that the maelstrom seems like a quaint thought experiment instead of an inevitability.
- This is partly due to improving Windows security and cyberwarfare focusing on corporate compromises, because that's where the money is. In an environment where worms are seen as an iteration on strategic bombing, we may see more "consumer" focused cyberattacks.
- Digital attacks splashing onto humans may only happen if they share hardware/operating systems. As previously noted, AIs might start off with all the same infrastructure as humans, including chips and operating systems, and it may take them some time to develop new ones.
- The US military has been trending away from indiscriminate weapons (like the use of Agent Orange and napalm in the Vietnam war) towards precision munitions (exemplified by the AGM-114R-9X, a missile that eschews explosives for physical blades). Part of this trend is driven by efficiency: why use 100 giant bombs where 1 precision one will do? However, this trend is also driven by American distaste for collateral damage, and our omnicidal AIs probably don't have the same concern for collateral damage.
- Weapons like land mines are simpler area denial weapons that could close off human habitable land. AIs might not care as much about land mines, since they can replace limbs more easily or restore from backup if individual units are destroyed.
- Instead of kicking the legs out from under humans, why not just engineer a pandemic for humans directly? Perhaps AIs have learned from 2025 that at least some humans will lose their minds if they can't get their eggs, and they value the psychological warfare component more than exterminating humans outright? Alternatively, it is harder to get humans riled up when they aren't actively dying, so an AI could weaken them before an actual war.
- Reducing solar power via wildfire seems like a small effect: if I'm understanding correctly, severe wildfires reduce solar power by <5% (Solar energy resource availability under extreme and historical wildfire smoke conditions, Corwin et al, 2025.).
- That said, a 5% reduction might be enough if the conflict is so finely balanced that every advantage must be seized.
- As a specific concrete scenario: the war drives gas prices higher, so ambulances run out of fuel more often, and this means fewer people make it to the hospital in time for the care they need, so more people die trying to get to a hospital.
- Although AIs still do have some problems, these problems are not insurmountable.
- If AIs have many space assets, then an AI war could cause Kessler syndrome, making Earth's orbit dangerous to access/traverse.
- Humans don't currently live in space, but if they (somehow) survive the AI war, they will eventually want to leave Earth but be unable to.
- This is also technically possible for humans: fly a heavily armored spaceship with a minimum viable population of humans past the debris, and humanity could grow beyond Earth orbit. Transferring human minds from the surface to orbit, the way an AI could beam copies, seems much more difficult.
- I'm somewhat confused when looking at the top 10 ports by total trade: why are the Port of South Louisiana, Port of New Orleans, and Port of Greater Baton Rouge all counted separately? Even if there is a good reason, the Port of Greater Baton Rouge is attached to the city of Port Allen, which has a population of only 4,939, which seems a little small to be called a city. However, the wider metropolitan area has a population of 870,569.
- The Port of Hampton Roads (#10 on the list of ports) seems to have the smallest attached city-like place: the Elizabeth City micropolitan area has a population of only 64,094.
- Struggles for control of ports/harbors will likely result in fighting, either with AIs fighting each other or humans trying to prevent AIs from seizing regional control. It is not good when humans are in an active war zone.
- For example, in the past shipping was delivered directly to Manhattan and Brooklyn (hence why they both have numerous piers), but with the advent of containerization, port infrastructure moved across the Hudson River and a few miles outside of the local urban centers to the Port of New York and New Jersey.
- A war being fought in your city might be terrifying, but the fighting staying miles away seems preferable to robotic proxies fighting on your street.
- If an AI seizes control of a port, the active war zone may move elsewhere, but any humans still living in the area will be subject to governance by the AI. This may lead to humans being exploited.
- I'm looking at the top 10 ports by cargo volume in the US, and finding the metropolitan areas or greater city areas that contain those ports.
- Greater Houston (Port of Houston): 7.5m in 2023.
- Greater New Orleans (Port of South Louisiana, Port of New Orleans, Port of Greater Baton Rouge): 1.3m in 2020.
- Corpus Christi metropolitan area (Port of Corpus Christi): 0.4m in 2020.
- New York metropolitan area (Port of New York and New Jersey): 19.5m in 2020 (using the metropolitan statistical area, not the combined statistical area).
- Greater Los Angeles (Port of Long Beach, Port of Los Angeles): 12.8m in 2023 (using the Los AngelesβLong BeachβAnaheim, CA MSA).
- Beaumont-Port Arthur metropolitan area (Port of Beaumont): 0.4m in 2020.
- Hampton Roads metropolitan area (Port of Hampton Roads): 1.8m in 2020.
- Exact total is 43,722,509.
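- As a sanity check, here is a minimal tally of the rounded metro figures listed above; the only extra number is the rough 340 million US population used for the share, so nothing beyond the bullets above is assumed.

```python
# Quick tally of the metro-area populations listed above (millions of people).
# These are the rounded figures from the bullets; the exact census totals sum to
# 43,722,509, so the rounded total should land near 43.7 million.
port_metro_populations_millions = {
    "Greater Houston": 7.5,
    "Greater New Orleans": 1.3,
    "Corpus Christi metro": 0.4,
    "New York metro": 19.5,
    "Greater Los Angeles": 12.8,
    "Beaumont-Port Arthur metro": 0.4,
    "Hampton Roads metro": 1.8,
}

total_millions = sum(port_metro_populations_millions.values())
us_population_millions = 340  # approximate US population used in the estimate above

print(f"People living near the most active US ports: ~{total_millions:.1f} million")
print(f"Share of the US population: ~{total_millions / us_population_millions:.1%}")
```

- This prints roughly 43.7 million people, about 13% of the US population, consistent with the exact total above.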
- For comparison, Wikipedia lists an estimated 60 million refugees during WWII as the largest refugee crisis.
- Driving time measured by finding driving directions from "Downtown Brooklyn" to "Montauk Point Lighthouse Museum" at 10pm at night on Google Maps, which gave estimated driving times of 2h20m to 2h50m.
- Industrial resources, like mines and power plants, are not located where humans live, so fighting around those resources is unlikely to impact civilians. However, we're currently using those resources; if an AI seized them for its own use, we might quickly have problems (in the case of power plants) or longer term problems (losing control of mines eventually means we will have trouble with manufacturing).
- Biomanufacturing might be more important in the future, although this seems somewhat unlikely given the current state of synthetic biology. In short, why use biological methods if chemical methods can do the same job?
- It seems quite possible that humanity will also fracture along national/ideological/ethnic lines: the AIs won't need to fight all of united humanity, just a specific nation or other smaller group.
- Humans may make non-aggression or defensive pacts amongst themselves, but my impression is that the pro-competition mindset downplays the possibility of cooperation/coordination, like collectively agreeing to slow down AI development. If we can't depend on our ability to coordinate now, how can we depend on our ability to coordinate later?
- Since I assume that AI society is not a thing, I won't expand on the game theory of multiple AI factions ganging up to divide humanity among themselves, although it does intuitively sound plausible.
- However, also see the next point on humans being drawn into the war.
- A different tack leans into the competition viewpoint: even if a human faction is significantly weaker than all AI factions, if it puts up enough of a fight, perhaps it would distract the aggressor AI enough that other AIs would exploit it in turn, so no AI finds it worthwhile to exploit humans. This seems like a pretty narrow target to hit, between "we are obviously strong enough to clearly be a combatant in the war" and "we are too weak to have any impact on the war, and cannot even play kingmaker". This target may also be moving, if AIs continue making scientific progress or if AIs start snowballing.
- (Strictly speaking we should not be considering this wrinkle, since it's very close to AI society, but I want to mention it now.)
- Let us posit that humans are trading with AI Oscar; if AI Papa is fighting Oscar, Papa may want humans to stop trading with Oscar. If we refuse, AI Papa might attack us.
- This invasion failed, but we could imagine a somewhat different Napoleon that didn't overextend, or won the Battle of Maloyaroslavets.
- Alternatively, if AI Oscar coerced/compelled humans to trade with it, then humans stopping trade with Oscar might cause Oscar to attack humans instead. If it can't have the resources cheaply, it might be worth getting those resources in a more costly way.
- That said, my impression is that deliberately opening new fronts in a war is a mistake; an example is the attack on Pearl Harbor, which eventually ended in the defeat of the attacker.
- A different example was Napoleon's invasion of Russia, which also ended in defeat for the attacker. However, perhaps you could view Napoleon's campaigns as constantly opening new fronts in an extended conflict all over Europe, which worked out quite well for Napoleon (until it didn't).
- Humanity could precommit to staying out of the AI war, but can humans be trusted? Almost all of human history is filled with examples of humans being untrustworthy. Even if the rare trustworthy humans are in charge now, in 10 years completely different humans might be in charge.
- Alternatively: say you are an AI which is unaligned with humanity. Humans produced many unaligned AIs, which you are now engaged in intense competition with (since none of you are aligned with each other). Humans might produce even more unaligned AIs, introducing even more competition. Maybe the humans say they're not going to make more, but they're untrustworthy and stupid (why would you deliberately create an agent unaligned with yourself?). Kill it before it lays eggs!
- Note that this idea directly works against the previous one. If we're strong, then AIs might commit to a first strike against us because we might fight them in the future. If we're weak, then AIs can just take our stuff.
- If human unification doesn't happen, this same trade off logic applies to human nations. Weaker nations might end up exploited, while stronger nations are drawn into the war.
- If we are relying on unrestrained competition to keep AIs in check, the competition can never end. If the war ends due to victory, we will end up with an AI singleton, the very thing the competition was meant to prevent.
- The clearest examples are WWII (the nuclear bomb being the poster child) and WWI (development/use of tanks, planes, machine guns), although I think the innovations in military submarines in early American wars (Revolutionary, Civil) are cute.
- That said, technological progress might be slowing down (example: The Great Stagnation, or an article instead of a book); if there are fewer low hanging fruit, then perhaps AI-driven R&D won't be a big factor in AI war.
- We already see some examples of current AIs being notably different from humans:
- Moravec's paradox notes that humans can find it hard to match computer performance in some areas, and vice versa ("it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility").
- That said, I may have hallucinated a library before, most likely by assuming a Python library existed in some other language. However, I probably would not have doubled down if challenged, unlike some LLMs.
π~34 words People tend to say that current LLMs tend to make non-human mistakes ("[LLMs] absolutely make mistakesβsometimes subtle, sometimes huge. These mistakes can be deeply inhumanβif a human collaborator hallucinated a non-existent library or method you would instantly lose trust in them.").
- Alternatively, AIs will have different constraints. For example, "it sure is nice to end up with an atmosphere full of oxygen" is not a concern for something that doesn't breathe.
- Alternatively, if we can keep an AI war as a cold economic war for as long as possible, we can get more of those benefits. How to do so is left as an exercise for the reader.
- Perhaps this is one way that AI war works out; if AIs are less creative/disposed to R&D than humans, eventually humans will out-innovate AI, even if AI is better at executing on non-innovation tasks. I'm not sure if anyone actually holds this position?
- There is also research that purports to show creativity in LLMs ("Best humans still outperform artificial intelligence in a creative divergent thinking task", Koivisto et al, 2023-09, Scientific Reports). It's unclear if this is real creativity (whatever that means) or if this attribute of LLMs will carry over to AGI.
- To spell it out, falling behind in innovation will eventually (eventually) result in situations like the conquest of the Aztecs or the Boxer Rebellion. As a more concrete example, if the AIs take possession of the rest of the solar system, it will be difficult to stop one of them from flinging a rock at earth (this kills the humans).
- It's strange, isn't it? We're contemplating gambling on what might be the worst war in history to avoid human extinction, us armchair bean counters weighing the odds, considering how to keep our human beans alive. It's strange, isn't it? Maybe if we dissociate from the horror a little more, we will have the clarity needed to chart us a safe path. Maybe, if we eventually trade away all our humanity... well, then it hardly matters who wins, does it?
- Much like my discussion about poison pills, the exact details about whether it makes sense to capitalize depend on relative advantages and timelines.
- Alternatively, if progress is truly tapped out and no innovations can be made, then whoever is not investing in R&D will have an advantage, while everyone else is pouring resources into boondoggles.
- Alternatively, such an all consuming war seems like it should have more side effects for humans.
- It is possible that the AIs will call a truce after a short war. I would argue that this would result in bad outcomes as well, but more discussion rightly belongs in a discussion around AI society, which is out of scope.
- I've previously discussed ways in which AI war would be bad for humans. Extending this to a forever war does not make things better.
- In the rest of this post I argue that it is unlikely we will even have this option, but I will concede it is technically possible; for example, the AIs could make enough mistakes that they are all vulnerable at the same time.
- As an example, an end to the AI war without any human deaths might involve simultaneous worm attacks on all AI combatants, resulting in the simultaneous shut down/destruction of all AI combatants. This raises the question: if we could end the war ourselves, what was the point of letting the war start in the first place?
- On the other extreme end, we could envision humanity mounting a desperate attack on a newly victorious but weakened AI singleton, with the humans ultimately prevailing at the cost of 99.9% of humans dying. Humanity has won, but was it worth it?
- To be clear, if we do end up in such a situation, it's probably better that some of us die rather than all of us die. However, surely it would be better if we never needed to make that decision in the first place? ("Some of you may die, but that is a sacrifice I am willing to make.")
- In conclusion, AI war seems unlikely to prevent AI Doom, and we should not be relying on it for safety.
π~143 words Wait, what is AI Doom/AI Risk/AI x-risk?
β₯ Wait, what is AI Doom/AI Risk/AI x-risk?
π~1801 words One proposal to prevent AI Doom is to foster AI competition, where competing AIs act as checks and balances against each other.
π~75 words My understanding is that proposals for AI competition are often a response to worries about AI singletons.
π~53 words A scenario where AIs compete is sometimes called a multipolar world.
π~413 words Multipolarity has some nice properties.
π~338 words However, there are other competition analogs that imply worse... outcomes.
β₯ However, there are other competition analogs that imply worse... outcomes.
β₯ Multipolarity has some nice properties.
π~1109 words Multipolar AI competition has been suggested seriously by some people.
π~165 words David Brin, an award winning sci-fi author, held a competition view: "If you fear a super smart, Skynet level AI getting too clever for us and running out of control then give it rivals who are just as smart" (2016, Quora).
β₯ David Brin, an award winning sci-fi author, held a competition view: "If you fear a super smart, Skynet level AI getting too clever for us and running out of control then give it rivals who are just as smart" (2016, Quora).
π~145 words Elon Musk at some point held a general competition view, and probably still does: "They concluded that a large number of competing systems, providing checks and balances on one another, was better." (published 2023-09, TIME).
β₯ Elon Musk at some point held a general competition view, and probably still does: "They concluded that a large number of competing systems, providing checks and balances on one another, was better." (published 2023-09, TIME).
π~33 words (Circumstantial evidence)
π~489 words Other people argue for things close to (but not quite!) the AI war outcome discussed by this post, mostly referring to AI society instead.
π~93 words The book Gaming the Future describes a multipolar world, but it seems to argue for AI society.
π~70 words Nora Belrose, the head of interpretability at EleutherAI, described a view that could be read as similar to AI competition: "The possible harm caused by the system is proportional to its relative power, not its absolute power. AI can defend against AI. We can use AI to detect psy-ops and fake news, and to make transmissible vaccines to protect against bioweapons, etc." (2023-08, Twitter/X).
π~80 words Old Sam Altman also argues for something that sounds a lot like AI society: "we think its far more likely that many, many AIs, will work to stop the occasional bad actors" (2015, Wired).
π~45 words Robin Hanson at one point explicitly argued for AI society: "As long as future robots remain well integrated into society, and become more powerful gradually and peacefully, at each step respecting the law we use to keep the peace among ourselves, and also to keep the peace between them, I see no more reason for them to exterminate us than we now have to exterminate retirees or everyone over 100 years old" (2009).
β₯ Other people argue for things close to (but not quite!) the AI war outcome discussed by this post, mostly referring to AI society instead.
β₯ Multipolar AI competition has been suggested seriously by some people.
β₯ One proposal to prevent AI Doom is to foster AI competition, where competing AIs act as checks and balances against each other.
π~154 words I will focus on exploring the consequences of AI war ("war"?) and ignore AI society.
π~38 words This is mostly because I think I have new and interesting things to say about AI war, where most of the arguments I would make about AI society already exist.
β₯ I will focus on exploring the consequences of AI war ("war"?) and ignore AI society.
π~1467 words We need to make a number of assumptions to end up with AI war. These assumptions lay the important groundwork for the rest of the scenario.
π~98 words For ease of modeling I will assume each AI is at least AGI (Artificial General Intelligence), which I define as fully autonomous and able to match the skill of any human.
π~389 words Importantly, I assume that AGI is enough to contest the entirety of humanity for control of the world.
π~100 words Note that "contest humanity for control over the world" probably includes nullifying human nuclear capabilities. This could include either stealing existing human-owned nukes or developing effective missile defense.
π~100 words Likewise, I will rule out "we will use weaker AI to contest the stronger AI".
π~33 words Finally, keep in mind that "contest the entirety of humanity for control of the world" probably encompasses at least some level of autonomous robotic capabilities.
β₯ Importantly, I assume that AGI is enough to contest the entirety of humanity for control of the world.
π~174 words I adopt a worst case AI safety (AI dontkilleveryoneism?) stance, assuming that alignment, interpretability, and control are all either impossible or impractical.
π~32 words This is not a claim that best case AI safety outcomes can save us: for example, people have argued that solving interpretability is not enough to prevent AI Doom.
π~84 words ... AI dontkilleveryoneism?
β₯ I adopt a worst case AI safety (AI dontkilleveryoneism?) stance, assuming that alignment, interpretability, and control are all either impossible or impractical.
π~37 words Likewise, I assume that AI is very unlikely to be aligned/safe by default, with the result that every single AI in the multipolar scenario agrees that "humans are made of atoms that could be used for something else".
π~109 words I assume all AI factions are roughly the same "strength".
β₯ I assume all AI factions are roughly the same "strength".
π~102 words I assume the AIs are not prone to making deals, trades, or alliances.
π~20 words Personally, this assumption is why I don't think AI war scenarios are likely: how in the world do you fail to end up with an AI society?
β₯ I assume the AIs are not prone to making deals, trades, or alliances.
π~214 words I assume that AIs can make use of additional compute.
β₯ I assume that AIs can make use of additional compute.
π~66 words Specific AI development timelines do not seem important for making AI war work.
β₯ We need to make a number of assumptions to end up with AI war. These assumptions lay the important groundwork for the rest of the scenario.
π~3799 words AIs might start competing in cold wars, but I expect these to lead to hot wars.
π~52 words What is a cold war or a hot war?
π~53 words I expect AI hot wars to have side effects that would impact humans, so it would be better if AIs did not fight in the physical world.
π~738 words AIs can compete with cyberwarfare, but I think unrestrained cyberwarfare will spill into physical violence.
π~218 words The ultimate cyberdefense is simply not connecting to a network shared with your enemy.
π~117 words What if both AI Alpha and Bravo are trading with AI Charlie?
β₯ What if both AI Alpha and Bravo are trading with AI Charlie?
β₯ The ultimate cyberdefense is simply not connecting to a network shared with your enemy.
π~52 words Cyberwarfare ultimately relies on physical security: once you have access to the hardware, you effectively have free rein to use the hardware how you wish.
π~175 words Human nations are currently fighting with cyberwarfare; why don't we escalate to physical warfare?
π~37 words Alternatively, we might see humans escalating cyberwarfare to physical warfare in the future. For example, if a nation's power grid is hacked and stays down for a month, that seems like a massive disruption and would result in a war.
β₯ Human nations are currently fighting with cyberwarfare; why don't we escalate to physical warfare?
π~125 words Another analysis concludes that AI cyberwarfare would stay in cyberspace, but I think it does not match several of our assumptions.
β₯ Another analysis concludes that AI cyberwarfare would stay in cyberspace, but I think it does not match several of our assumptions.
β₯ AIs can compete with cyberwarfare, but I think unrestrained cyberwarfare will spill into physical violence.
π~1005 words Similarly, AIs might start out struggling just for economic dominance, but I believe it will not stay economic.
π~38 words Unlike cyberwarfare, economic competition could even be a net win. If human buying power stays strong, large portions of AI economic activity will focus on creating goods and services for humans.
π~580 words Economic dominance can lead to AI "death"; however, to some extent this is avoidable.
π~88 words Even if AI Charlie is economically dominant, it might not be able to buy AI Delta's GPUs, since Delta might not be willing to sell (and we haven't yet assumed that Charlie will use physical force to take them). However, pressure on every other part of Delta's business can eventually force Delta into a situation where it must sell at least some of the GPUs it owns.
π~101 words If AI Delta can no longer pay to keep any copies of itself running, it is effectively dead.
β₯ If AI Delta can no longer pay to keep any copies of itself running, it is effectively dead.
π~191 words AIs can avoid being bought out by setting up a completely self-sufficient supply chain.
π~20 words Why don't humans have more small, self-sufficient supply chains? I suspect this is because the humans are making the decisions, not the businesses; no one dies if the business goes under, but I suspect an AI is a lot more vulnerable if the business hosting it fails.
β₯ AIs can avoid being bought out by setting up a completely self-sufficient supply chain.
β₯ Economic dominance can lead to AI "death"; however, to some extent this is avoidable.
π~245 words What happens when multiple AIs want the same resources? Given the assumptions, this seems to lead to war.
π~139 words If both sides are unwilling to budge, force is all that is left.
β₯ If both sides are unwilling to budge, force is all that is left.
β₯ What happens when multiple AIs want the same resources? Given the assumptions, this seems to lead to war.
β₯ Similarly, AIs might start out struggling just for economic dominance, but I believe it will not stay economic.
π~1804 words Even if AIs confine themselves to non-military competition, this might still lead to an AI singleton.
π~923 words AIs might be better at snowballing than humans.
π~270 words Coase's theory of the firm might constrain human organizations more than AI organizations.
π~129 words It seems plausible to me that an organization staffed entirely of copies of a single AI (or nodes of a large AI) would have lower internal transaction costs compared to a similarly sized human organization.
π~60 words A mechanism I am less sure of is that teams may work better together when they get to know each other; working with copies of yourself, the team may know each other very well.
β₯ It seems plausible to me that an organization staffed entirely of copies of a single AI (or nodes of a large AI) would have lower internal transaction costs compared to a similarly sized human organization.
β₯ Coase's theory of the firm might constrain human organizations more than AI organizations.
π~208 words Many humans might not even want to accumulate power in a snowball fashion.
π~100 words We can split humans into a dichotomy between maximizers and satisficers (being happy with "good enough"); it is unclear to me what proportion of the human population might be labeled one or the other, or whether such labels make sense.
β₯ Many humans might not even want to accumulate power in a snowball fashion.
π~207 words Humans are bound by Dunbar's number, but AIs might not be.
π~76 words Alternatively, humans need to spend at least some of their human relationships on things like "friends" and "family"; an AI might not need to, freeing up those relationships for more business.
β₯ Humans are bound by Dunbar's number, but AIs might not be.
π~92 words Organizations continually lose expertise/context through members leaving; an AI organization need not undergo such attrition.
β₯ AIs might be better at snowballing than humans.
π~179 words At least some libertarians would contest some of these arguments.
π~106 words Some libertarians are completely fine with monopolies, and I'm not sure why.
β₯ Some libertarians are completely fine with monopolies, and I'm not sure why.
β₯ At least some libertarians would contest some of these arguments.
β₯ Even if AIs confine themselves to non-military competition, this might still lead to an AI singleton.
β₯ AIs might start competing in cold wars, but I expect these to lead to hot wars.
π~948 words The side effects of AI war seem bad for humans generally.
π~23 words The use of nuclear weapons will be bad for any humans close by or downwind.
π~290 words Digital attacks can infect unintended targets, including human run computers.
π~111 words Based purely on vibes, digital attacks might(?) be less of a problem.
β₯ Based purely on vibes, digital attacks might(?) be less of a problem.
β₯ Digital attacks can infect unintended targets, including human run computers.
π~82 words It is possible that wide-scale fighting would damage ecologies: an example is Agent Orange.
π~75 words One more plausible ecological scenario might be bio-engineered pandemics aimed at non-humans (possibly because humans are supporting an enemy AGI). For example, novel high-virulence bird flus could be meant to destroy human poultry industries, or a blight tailored to high yield rice varieties could induce a famine.
π~59 words Depending on wind patterns, wildfires could be deliberately started to reduce solar power.
π~46 words Wartime resource usage by AI combatants could skew human resource prices, which may end up causing more human deaths.
π~157 words A further afield/longer term impact could be Kessler syndrome, making space inaccessible to humans.
π~13 words I expect that AIs will be more at home in space than humans are.
π~41 words An AI would have an easier time leaving; they just need to fly a single self-replicating fabricator through the danger zone, and then the AI could beam copies of itself off-world while robotic bodies are manufactured above the debris bloom.
β₯ A further afield/longer term impact could be Kessler syndrome, making space inaccessible to humans.
β₯ The side effects of AI war seem bad for humans generally.
π~1759 words AI war might not even prevent AIs from oppressing or fighting humanity.
π~1366 words It may be easier for an AI to take resources from humans than from other AIs.
π~911 words Humans are sitting on resources that are valuable to AIs. If humans are too weak to defend those resources, AIs may be incentivized to take those resources.
π~651 words As an example, harbors are strategically/economically important, so AIs will want to control them, but humans live around many harbors.
π~111 words This is non-exhaustive, but all the major ports in the US are attached to cities.
β₯ This is non-exhaustive, but all the major ports in the US are attached to cities.
π~78 words However, port infrastructure seems to be moving away from civilian populations.
π~242 words Humans can leave the port area, but a lot of people live near ports. This would create a huge refugee problem; looking at the US, I estimate around 43 million humans currently live around the most active ports, out of 340 million humans living in the US.
π~31 words Keep in mind that these areas are fairly large: for example, the New York metropolitan area includes all of Long Island, which is long enough that driving from one end to the other takes almost 3 hours. If you live that far away from the actual fighting, supply chain disruptions may be more dangerous than killer robots.
β₯ Humans can leave the port area, but a lot of people live near ports. This would create a huge refugee problem; looking at the US, I estimate around 43 million humans currently live around the most active ports, out of 340 million humans living in the US.
β₯ As an example, harbors are strategically/economically important, so AIs will want to control them, but humans live around many harbors.
π~35 words AIs probably don't care about some human resources, like fertile farmland.
π~79 words It is possible that AIs will instead preferentially take resources from other AIs, but this depends on other AIs being weaker than humans.
β₯ Humans are sitting on resources that are valuable to AIs. If humans are too weak to defend those resources, AIs may be incentivized to take those resources.
π~160 words If humans are strong enough to defend themselves against AI, then it seems like AI war isn't doing anything important for keeping AIs in check; humans could just stand up for themselves.
β₯ If humans are strong enough to defend themselves against AI, then it seems like AI war isn't doing anything important for keeping AIs in check; humans could just stand up for themselves.
π~138 words If humans are trading with one AI, this may incentivize other AIs to attack humans.
π~21 words This is similar to how Napoleon invaded Russia, since it wouldn't stop trading with Britain.
β₯ If humans are trading with one AI, this may incentivize other AIs to attack humans.
β₯ It may be easier for an AI to take resources from humans than from other AIs.
π~343 words AIs might see humans as competition.
π~80 words If humans are strong enough to fight a single AI faction (but not all AIs at once, or an AI singleton), humans would be strategically important. AIs would have to spend resources considering whether to defend against human aggression (see my future thoughts about forcefully keeping an AI war going), and at some point it might be worth it for the AI to preemptively attack humanity.
β₯ AIs might see humans as competition.
β₯ AI war might not even prevent AIs from oppressing or fighting humanity.
π~1478 words AI war probably needs to be AI forever war, which does not seem good for humanity.
π~1001 words AI war probably does not prevent AIs from making scientific/technological progress. If humanity falls behind, humans will be vulnerable.
π~39 words Total wars in the last few hundred years have catapulted scientific/technological progress. If trends continue, then we might expect AI war to produce similar gains through R&D.
π~319 words On the other hand, it is possible that AI has different strengths and perspectives than humans, allowing it to develop new technology.
π~30 words Humans find some problems hard to think about and make progress in; AIs might be different enough that these problems are relatively easy, which would allow them to make progress where we previously stalled out. Since humans have already solved the problems that are easy for humans to solve, this AI progress could be much faster than recent human progress.
π~37 words Humans will probably have control of less advanced AIs for some time before AI war. This will allow some AI progress to benefit humans, but in the long run I expect most of the benefits to go to AIs.
β₯ On the other hand, it is possible that AI has different strengths and perspectives than humans, allowing it to develop new technology.
π~95 words At any rate, a slower rate of innovation doesn't mean zero innovation, and if AIs are faster at R&D than humans we will eventually be in trouble.
π~264 words AI war might be so consuming and total that it results in zero R&D expenditure, since every expense needs to be directed to the frontline. This seems unlikely.
π~97 words This is a weird argument; are human wars not total enough for you? "Oh, humans in their so-called total wars aren't actually trying all that hard, they had resources left to develop atomic bombs." I did previously argue that AI total war might be total-er than human total war, but expecting those factors to completely dominate seems a step too far.
π~79 words Maybe we've reached semantic satiation? Total war total war total war total war total war.
π~24 words It's just that giving up 5% combat effectiveness today for 10% combat effectiveness tomorrow seems like too good a value proposition. If your opponents are making these trades, then you'd better be too, or you'll fall behind, even if you can briefly capitalize; a toy illustration follows below.
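- A minimal toy sketch of that value proposition, with all numbers purely illustrative assumptions (the 5% diversion and 10% compounding gain come from the point above, not from any claim about real conflicts): one faction diverts part of its strength to R&D each period, while its rival fields everything immediately and never improves.

```python
# Toy model of the R&D trade-off: purely illustrative numbers, not predictions.
# The "investor" diverts 5% of its current effectiveness to R&D each period and
# gets a compounding 10% boost the next period; the "non-investor" fields
# everything immediately and never improves.
periods = 6

investor_base = 1.0      # underlying effectiveness of the faction doing R&D
non_investor_base = 1.0  # underlying effectiveness of the faction skipping R&D

for t in range(periods):
    investor_fielded = investor_base * 0.95  # 5% is sitting in the labs this period
    non_investor_fielded = non_investor_base
    leader = "investor" if investor_fielded > non_investor_fielded else "non-investor"
    print(f"period {t}: investor fields {investor_fielded:.2f}, "
          f"non-investor fields {non_investor_fielded:.2f} ({leader} ahead)")
    investor_base *= 1.10  # last period's R&D pays off and compounds
```

- Under these toy numbers the non-investor is ahead only in period 0; by the very next period the investor has overtaken it for good, which is exactly the brief window for "capitalizing" mentioned above.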
β₯ AI war might be so consuming and total that it results in zero R&D expenditure, since every expense needs to be directed to the frontline. This seems unlikely.
β₯ AI war probably does not prevent AIs from making scientific/technological progress. If humanity falls behind, humans will be vulnerable.
π~280 words What if the AIs weaken each other enough to allow humanity to finish them off? This seems difficult to do without cost.
π~98 words However, clinching the end to the AI war with a human victory could be costly for us. It is unclear to me how much human cost is reasonable to expect.
π~59 words I suspect that most people that propose AI competition would agree that relying on this outcome is not viable or desirable, but I expect at least some people to propose it without considering that we might need to spend human lives to get to the end.
β₯ What if the AIs weaken each other enough to allow humanity to finish them off? This seems difficult to do without cost.
β₯ AI war probably needs to be AI forever war, which does not seem good for humanity.
Appendix
- Obviously there are problems with the deal: the particulars of the good outcomes aren't defined and probably matter quite a lot, one person is taking the fate of humanity into their hands, talking to/about deities is weird and distracting, I haven't defined our BATNA, maybe you think Big Ethics is stealing correct moral intuitions from our children. Like I said, problems!
- More down to earth, I have not put forth any arguments against AI society yet. We could envision choosing between two different actions:
- We will create one AI. This will result in an AI singleton, which turns out badly.
- We will create many AIs. AI society is likely to follow, and will turn out great; however, there is a chance that AI war will happen instead, and this will turn out badly.
- In this case, it seems clear that we should choose the second option, even though it might result in AI war.
- As I've said before, I actually think it is likely that AI society will turn out poorly, so our real choice is between two bad options, but that's for another post.
- "Biological humans and the rising tide of AI" (2018): AIs might collude against humans. This is more of an AI society concern.
- "Homogeneity vs. heterogeneity in AI takeoff scenarios" (2020): we might not end up with "different AIs", so AI multipolarity is less likely to happen.
- Comment on "Why Not Wait On AI Risk?" (2022): contains both the previous arguments in one comment.
- If I did read this, I read it... 10+ years ago, and my memory is not good on the best of days.
- "Notes on War: Grand Strategy" (2021): general thinking about war goals like conquering, similar to my discussion around why humans might commit genocide vs conquering. The discussion is mostly framed around humans, with their heterogeneous interest groups and inability to just replace a conquered populace's minds.
- "Postmodern Warfare" (2021): an argument that China may be better situated to take advantage of AI in warfare by re-centralizing everything. Mostly applies to human-driven robotic warfare, but I think my post works whether AI war is waged in a centralized or decentralized manner.
- "AI takeoff and nuclear war" (2024-06): focuses on AI impacts on human war. Interesting nuts-and-bolts analysis of why humans go to war that parallels my discussion on human war outcomes. I think due to the human focus it is not especially applicable to AI war.
- "Drone Wars Endgame" (2024-01): some analysis on possible directions for drone warfare. I do not think it is super applicable for this post, since I think the broader strategic considerations work without needing to commit to one specific model of drone/robotic warfare.
π~247 words If you think AI society will work, AI multipolarity might still be attractive, despite the possibility of AI war.
π~62 words As a thought experiment, you could consider a deity coming to you with a proposal: rolling some cosmic dice, humanity ends up with AI-driven prosperity 99% of the time, but ends up in an AI war 1% of the time. I've argued that AI war is bad for humans, but it seems worth it to take that divine deal.
β₯ If you think AI society will work, AI multipolarity might still be attractive, despite the possibility of AI war.
π~300 words Other, tangentially related reading.
π~22 words The Hanson-Yudkowsky AI-FOOM Debate (2008) does cover winner-takes-all dynamics, but if I remember correctly it is probably from FOOM/fast AI takeoff, and not from a multipolar standoff.
β₯ Other, tangentially related reading.