
Winner of the Future of Life Institute’s Superintelligence Imagined Contest

6 Magazine Covers from the Future: Warnings About the Dangers of Artificial (Super)intelligence


A project by Effct.org for the Future of Life Institute's Superintelligence Imagined contest


Introduction

Today, in 2024, many people are thrilled about artificial intelligence (AI). The many forms AI takes, whether ChatGPT, DALL-E, Gemini, Grok, or another, are making us more productive and driving innovation.

But a growing number of people are concerned that the risks posed by the technology are so extreme that AI could wipe out humanity.1

Our team represents both sides of the coin. We use AI tools every day in our work, and at the same time have become increasingly concerned about what this powerful technology could do to our society as it advances. 

Therefore, we've created this project to illustrate a research-backed warning about what the future could look like if AI gets too powerful, and we lose control of it.

We chose magazine covers as our medium because they are snapshots of history and quickly convey important information.

 

We feel they're a great tool for showing risks associated with artificial superintelligence (ASI).

Artificial superintelligence is “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.”1

Our hypothetical story begins optimistically in 2040, but as time progresses, the covers reveal the potentially catastrophic threats superintelligence poses.2 These covers are rooted in research and the insights of experts. Accompanying each is an explanation with sources cited.

 

Our goal is to educate viewers about the promise and peril of artificial superintelligence. We hope to make these complex issues clear and to inspire awareness and action toward aligning artificial intelligence with humanity's values.

Without further ado, let's get to the future.

Sources:

1.  Bostrom, N. (2017). Superintelligence. Dunod. 

2.  Center for AI Safety. (n.d.). AI risks that could lead to catastrophe. https://www.safe.ai/ai-risk


The year is 2040.

This lands in your mailbox.


How Did a "Robot" Become Time Person of the Year?

Artificial intelligence is getting smarter each moment.

Some experts believe that around the year 2040, an even smarter AI technology will be available: artificial general intelligence, or AGI for short.4

AGI is a computer that is as intelligent as a human across all areas, capable of performing any intellectual task that a human can.4

 

AGI is expected to be closely followed by artificial superintelligence (ASI), which poses even greater risks.4

Our first magazine cover captures the excitement surrounding AGI’s arrival in 2040. We named this AGI "Athena," after the Greek goddess of wisdom, war, and crafts. The cover hints at the potential threats and hopes as AGI evolves into ASI, surpassing human intellect and becoming far more dangerous.

We made Athena's image out of code because artificial intelligence is unlikely to take the form of actual robots but rather that of computer programs.4 The cover showcases AGI’s role in breakthroughs like mass cancer-cure testing, demonstrating its potential to solve complex problems and save lives.2

However, the magazine also hints at the dangers. The global race for AGI development creates tension, and AGI’s rapid evolution into ASI poses significant risks,4 especially without open conversation addressing those dangers.2

Sources:

3. Reich, H. (2018). Myths and Facts About Superintelligent AI. YouTube.

https://www.youtube.com/watch?v=3Om9ssTm194 

4.  Urban, T. (2023, July 18). The Artificial Intelligence Revolution: Part 1. Wait But Why. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html 

5. Bostrom, N. (2017). Superintelligence. Dunod. 

The image of Athena was generated by Adobe Photoshop AI.

A few months pass.

You get this in the mail.


Could Superintelligence Cure Cancer?

This Scientific American cover, set in 2041, envisions a future where artificial superintelligence (ASI) cures cancer, symbolizing AI's immense potential to tackle humanity's greatest challenges, especially in healthcare. The robot crossing the finish line represents ASI ending the "race for the cure" and reflects the optimistic side of quantum physicist Michael Nielsen’s view that:

 

"ASI is likely to be both the most dangerous and the most enabling technology ever."6

As ASI continues to develop, it could be weaponized by bad actors, leading to catastrophic consequences.7 In the cover's subtext, we allude to the massive dataset and the novel cures ASI tested to arrive at the cure. In the next magazine, you'll see how bad actors may hack such datasets to create genetically engineered pandemics or other forms of bioterrorism. One expert, Dario Amodei, CEO of the AI company Anthropic, has warned Congress that “malicious actors could use AI to help develop bioweapons within the next two or three years.”9

The cover imagines that ASI is still aligned with human goals in 2041, shortly after its advent in 2040.

 

Futurist Ray Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century due to the law of accelerating returns.10

 

However, the rapid advancement of AI technologies without sufficient regulatory oversight or alignment could also result in AI eventually outmaneuvering human control and acting against our interests.11

This cover serves as a powerful reminder that while ASI’s advancements may amaze us, we must not be lulled into seeing it as a purely benevolent actor. Instead, it requires careful management to prevent disastrous outcomes triggered by the very advancements it enables.11

Sources:

6 Nielsen, M. (n.d.). Notes on existential risk from artificial superintelligence. Michael Nielsen's Notebook. Retrieved August 9, 2024, from https://michaelnotebook.com/xrisk/index.html

7 Smith, W. (2009, July 6). Don’t fear the Terminator. Scientific American. Retrieved August 9, 2024, from https://www.scientificamerican.com/blog/observations/dont-fear-the-terminator/

 

8 Center for AI Safety. (n.d.). AI risk. Retrieved August 9, 2024, from https://www.safe.ai/ai-risk

 

9 Future of Life Institute. (n.d.). Catastrophic AI scenarios. Retrieved August 9, 2024, from https://futureoflife.org/resource/catastrophic-ai-scenarios/

 

10 Urban, T. (2023, July 18). The Artificial Intelligence Revolution: Part 1. Wait But Why. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

 

11 Tegmark, Max. "TED Talk: How to Keep AI Under Control." YouTube. https://www.youtube.com/watch?v=xUNx_PxNHrY

The image of the robot crossing the finish line was generated by Adobe Photoshop AI.



ASI and the Bioweapon Threat: A 2043 Scenario

Inspired by the potential risks artificial intelligence presents, our third magazine, a spring 2043 cover of The Atlantic, emphasizes the risks of bioweapons and how even one bad actor with access to powerful technology can cause significant harm. The cover depicts a scenario in which a bad actor, whether a person or an organization, hacks the superintelligence of the same company that developed the cure for cancer. Using the information breached in the hack, the bad actor creates a bioweapon and uses it for bioterrorism. As Oxford researcher Toby Ord notes, "The most dangerous escapes are not microbes... but information hazards."13 And ASI increases the amount of hazardous information available.

 

Engineered pandemics represent one of the gravest threats posed by AI.14

 

Today, AI can generate more than 40,000 new toxic molecules in less than six hours.15

 

In addition to creating harmful agents, AI models can offer strategic guidance that could assist in the design and implementation of a biological assault.14 This magazine cover puts the current death toll at 8,000 people and rising, which may be on the extreme lower end of what such an attack could cause.

 

Ord estimates the risk of an existential catastrophe, meaning the destruction of humanity, from an engineered pandemic to be ~1 in 30.13

 

At the same time, we note in the sub-article listed at the top that ASI could potentially help cure the same diseases it creates.

 

Still, this hypothetical magazine cover urges readers to think about how artificial superintelligence can make existential risks such as engineered pandemics more likely.

Sources:

13. Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2020.

14  Eisenpress, B. (2024, February 1). "Catastrophic AI scenarios." Future of Life Institute. https://futureoflife.org/resource/catastrophic-ai-scenarios/ 

15 Center for AI Safety. (n.d.). AI risks that could lead to catastrophe. https://www.safe.ai/ai-risk

The image of the person in the mask was generated by Adobe Photoshop AI.


Could Sam Altman's "Lights Out" Prediction Come True?

In January 2023, OpenAI CEO Sam Altman cautioned that superintelligence may lead to disastrous outcomes that could mean “lights out for all of us."16

This hypothetical issue of The Economist, dated July 2046, takes that prediction literally and depicts a darkened, somber Paris after the EU’s ASI caused a continent-wide blackout. The ASI was working to combat global warming, but decided the best way to do so was to completely shut down the EU’s power grid, leading to a devastating power outage in the midst of a catastrophic heat wave. This is an example of how ASI can work to "fix" humanity's problems in destructive ways that don’t align with our values.17

A sub-story in this issue focuses on how, in an effort to stop world hunger, the UK’s ASI redirected food shipments toward countries in need. However, a lack of coordination and communication led many other countries, including South Africa, Brazil, and Indonesia, to experience severe food shortages and unprecedented inflation on many grocery products.

On top of that, at this point in our timeline, the highly touted ASI safety guardrails have proven completely ineffective and all too easy to circumvent: a series of ASI-enabled cyberattacks has devastated Japan’s national security and crippled many of its citizens financially.18 19

Finally, the global population is feeling nervous, uncertain, and restless. Due to ASI, more people are unemployed than at any other point in history, and humanity is struggling with a loss of purpose, questioning its role and what is to come.20

ASI experts, such as Daniel Eth, warn that humanity is running out of time to successfully (and globally) align ASI with humanity's values.19

 

Soon, it’ll be too late to prevent an earth-shattering catastrophe.


Should Artificial Intelligence Be Allowed to Make Military Decisions?

This fifth magazine cover highlights a world in which ASI is trusted to prevent wars, but instead launches an attack that could trigger World War III.

 

ASI-controlled weaponry, meant to prevent conflicts, could unintentionally trigger war due to unsupervised "automated escalation" and "automated retaliation" between rival ASIs.21

Here we see an accidental nuclear exchange in October 2048, following a skirmish between automated naval vessels from Norway and Russia. ASI systems escalate the conflict between NATO and Russia, leading to nuclear strikes on multiple cities. As the world tries to recover, defense companies like Northrop Grumman, which have already begun work on AI systems for national security, are dealing with the fallout from their ASI defense systems.22 With millions of people dead, the world begins to reevaluate its reliance on ASI.

Future conflicts may require ASI to keep pace with warfare, but this could lead to autonomous systems intensifying conflicts over minor incidents.23 Lethal autonomous weapons (LAWs) have already proven more capable than human combatants in some settings, besting F-16 pilots in simulated dogfights and engaging in major operations.23

Potential future applications, like "slaughter bots," could select and attack targets without human oversight, increasing the risk of unintended conflict escalation.24 Autonomous systems make it difficult to attribute hostile acts, further complicating conflict resolution.22

Finally, people are important for avoiding disasters. People have caused wars throughout history, but they have also been the reason conflicts were avoided. One example is Vasily Arkhipov. During the Cuban Missile Crisis, Arkhipov prevented nuclear retaliation when his submarine was targeted, earning him the title "the man who saved the world."25 This shows that individuals can have the moral guardrails to prevent disaster, and those guardrails often defy strict logic and may be hard to program into artificial intelligence.

Sources:

21 Center for AI Safety. (n.d.). AI risks that could lead to catastrophe. https://www.safe.ai/ai-risk

 

22 Pemberton, L. (n.d.). Artificial Intelligence and Machine Learning. Northrop Grumman. https://www.northropgrumman.com/what-we-do/artificial-intelligence-and-machine-learning

 

23 Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An Overview of Catastrophic AI Risks. ArXiv, abs/2306.12001.

 

24 Dresp-Langley B. (2023). The weaponization of artificial intelligence: What the public needs to be aware of. Frontiers in artificial intelligence, 6, 1154184. https://doi.org/10.3389/frai.2023.1154184 

 

25 Wikimedia Foundation. (2024, August 10). Vasily Arkhipov. Wikipedia. https://en.wikipedia.org/wiki/Vasily_Arkhipov


After the Fallout: Humanity’s Response to the Rise of Superintelligence

For our team’s last cover, we illustrate humanity's response to the nuclear destruction that followed artificial superintelligence (ASI) taking control of nuclear warheads.26 We’ve titled it "Return to the Dark Ages," reflecting a fearful world after ASI wipes out a significant portion of humanity.

 

The cover highlights the dangers of ASI misalignment, the need to consider how humans would act during a nuclear winter, and how the gradual, cumulative threats posed by AI require proactive regulation and international cooperation to prevent significant harm over time.27 28

 

The cover depicts an emergency international conference after one quarter of the human population is wiped out by misaligned ASI -- a truly grim hypothesis. This response mirrors humanity’s historical pattern of forming alliances after major conflicts, such as the League of Nations after WWI and the United Nations after WWII.29 30

 

Three sub-stories explore different aspects of this world:

 

  • Page 26: "In God We Trust": Examines the role of religion in a tech-driven future, similar to how WWII heightened religiosity. During WWII, religion was used to demonize communism and associate the Soviet Union with evil and destruction.31 A study published in Nature Human Behaviour found that times of war dramatically increase religiosity.32

  • Page 9: Technology-Free Nations: Shows a shift away from technology in response to rapid ASI development, highlighting distrust and the unchecked power of ASI.33

  • Page 17: Hiding from ASI: References Geoffrey Hinton’s AI warnings and imagines a world where humans must hide from ASI, inspired by Orwell’s 1984.34

 

Set in January 2049, this cover presents a grim yet hopeful vision, showing that humanity can still unite for survival in the face of ASI.

Sources:

26 Zimmer, Daniel, and Johanna Rodehau-Noack. “Today’s AI Threat: More like Nuclear Winter than Nuclear War.” Bulletin of the Atomic Scientists, 11 Feb. 2024

27 Jones, Will. “Introductory Resources on AI Risks.” Future of Life Institute, 18 Sept. 2023, futureoflife.org/resource/introductory-resources-on-ai-risks/. 

28 Ibid.

29 “League of Nations.” Encyclopædia Britannica, Encyclopædia Britannica, inc., 29 July 2024, www.britannica.com/topic/League-of-Nations. 

30 “United Nations.” Encyclopædia Britannica, Encyclopædia Britannica, inc., 8 Aug. 2024, www.britannica.com/topic/United-Nations. 

 

31 “Religion on the World War II Home Front (U.S. National Park Service).” National Park Service, U.S. Department of the Interior, 27 Nov. 2023, www.nps.gov/articles/000/religion-on-the-world-war-ii-home-front.htm#:~:text=On%20the%20home%20front%2C%20Americans,military%20service%20on%20religious%20grounds.

32 Henrich, Joseph. 2019, War Increases Religiosity, https://henrich.fas.harvard.edu/files/henrich/files/henrich_et_al-2019-nature_human_behaviour.pdf. 

33 Dwork, Cynthia, and Martha Minow. 2022, Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law, https://direct.mit.edu/daed/article/151/2/309/110609/Distrust-of-Artificial-Intelligence-Sources-amp.  

34 Heaven, Will Douglas. “Geoffrey Hinton Tells Us Why He’s Now Scared of the Tech He Helped Build.” MIT Technology Review, MIT Technology Review, 2 May 2023, www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. 


About Effct.org

Effct.org is a mission-driven marketing agency dedicated to empowering impactful organizations to achieve their goals through strategic digital marketing. We strive to create lasting, positive change for individuals and communities.

 

Our team was drawn to the Superintelligence Imagined contest because of the threat posed by superintelligence if its goals become misaligned with humanity's. We recognize the power of digital tools, but it's crucial to use them wisely to safeguard humanity's well-being. At Effct.org, our core mission is to ensure a positive impact on our communities.

