AGI FAQ

Comprehensive answers to frequently asked questions about Artificial General Intelligence, AI safety, timeline predictions, and preparation strategies

Frequently Asked Questions
By Independent Research & Analysis
Published Jun 15, 2025 · Updated Jul 27, 2025

This analysis is a synthesis of expert opinions and publicly available research. The author is not a credentialed AI researcher but aims to accurately aggregate expert consensus.


