Wayback Machine
The Wayback Machine is a digital archive of the World Wide Web founded by the Internet Archive, an American nonprofit organization based in San Francisco, California. Launched for public access in 2001, the service allows users to go "back in time" to see how websites looked in the past. Founders Brewster Kahle and Bruce Gilliat developed the Wayback Machine to provide "universal access to all knowledge" by preserving archived copies of defunct web pages.
The name is a reference to the fictional time-traveling device of the same name from the animated cartoon The Bullwinkle Show from the 1960s. In a segment of the cartoon entitled "Peabody's Improbable History", the characters Mister Peabody and Sherman use the "Wayback Machine" to travel back in time to witness and participate in famous historical events.
The Wayback Machine's earliest archives go back at least to 1995, and by the end of 2009, more than 38.2 billion webpages had been saved. As of 2025, the Wayback Machine has archived more than 1 trillion web pages and well over 99 petabytes of data.
History
The Internet Archive has been archiving cached web pages since at least 1995. One of the earliest known pages was archived on May 8, 1995. Internet Archive founders Brewster Kahle and Bruce Gilliat launched the Wayback Machine in San Francisco, California, in October 2001, primarily to address the problem of web content vanishing when it is changed or when a website is shut down. The service enables users to see archived versions of web pages across time, which the archive calls a "three-dimensional index". Kahle and Gilliat created the machine hoping to archive the entire Internet and provide "universal access to all knowledge".
From 1996 to 2001, the information was kept on digital tape, with Kahle occasionally allowing researchers and scientists to tap into the "clunky" database. When the archive reached its fifth anniversary in 2001, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. By the time the Wayback Machine launched, it already contained over 10 billion archived pages. The data is stored on the Internet Archive's large cluster of Linux nodes. The service periodically revisits websites and archives new versions of them. Sites can also be captured manually by entering a website's URL into the search box, provided that the website allows the Wayback Machine to "crawl" it and save the data.
The Internet Archive migrated its customized storage architecture to Sun Open Storage in 2009, and hosts a new data center in a Sun Modular Datacenter on Sun Microsystems' California campus.
A new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing in 2011. In the new interface, captures appear in a calendar layout, with circles whose width visualizes the number of crawls on each day; unlike the classic interface, it does not mark duplicates with asterisks or offer an advanced search page. A top toolbar was added to facilitate navigating between captures, and a bar chart visualizes the frequency of captures per month over the years. Features such as "Changes", "Summary", and a graphical site map were added subsequently.
In October 2013, the Wayback Machine introduced the "Save Page Now" feature, which allows any Internet user to archive the contents of a URL and, unlike the preceding liveweb feature, quickly generates a permanent link.
On October 30, 2020, the Wayback Machine began fact-checking content. As of January 2022, ad server domains are excluded from capture.
In May 2021, for Internet Archive's 25th anniversary, the Wayback Machine introduced the "Wayforward Machine", which allows users to "travel to the Internet in 2046, where knowledge is under siege".
On July 24, 2025, Senator Alex Padilla designated the Internet Archive as a federal depository library.
Technical information
The Wayback Machine's software has been developed to "crawl" the Web and download all publicly accessible information and data files on webpages, the Gopher hierarchy, the Netnews bulletin board system, and software. The information collected by these 'crawlers' does not include all the content available on the Internet, since much of the data is restricted by the publisher or stored in databases that are not accessible. To overcome inconsistencies in partially cached websites, Archive-It.org was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily harvest and preserve collections of digital content and create digital archives.
Crawls are contributed from various sources, some imported from third parties and others generated internally by the Archive. For example, content comes from crawls contributed by the Sloan Foundation and Alexa, crawls run by the Internet Archive on behalf of NARA and the Internet Memory Foundation, webpages archived by Archive Team, and mirrors of Common Crawl. The "Worldwide Web Crawls" have been running since 2010 and capture the global Web. In September 2020, the Internet Archive announced a partnership with Cloudflare – an American content delivery network service provider – to automatically index websites served via its "Always Online" services.
Documents and resources are stored with timestamp URLs such as https://web.archive.org/web/20090101123456/http://example.com/, where the 14-digit timestamp encodes the date and time of the capture. A page's individual resources, such as images, style sheets, and scripts, as well as its outgoing hyperlinks, are linked to with the timestamp of the currently viewed page, so they are automatically redirected to their own captures that are the closest in time.
The frequency of snapshot captures varies per website. Websites in the "Worldwide Web Crawls" are included in a "crawl list", with the site archived once per crawl. A crawl can take months or even years to complete, depending on size. For example, "Wide Crawl Number 13" started on January 9, 2015, and was completed on July 11, 2016. However, there may be multiple crawls ongoing at any one time, and a site might be included in more than one crawl list, so how often a site is crawled varies widely.
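The redirection behavior can be observed with a short script. The following is a minimal sketch in Python (standard library only) that requests a capture at an arbitrary timestamp and prints the URL of the capture actually served; the target URL and timestamp are illustrative placeholders rather than values taken from the archive.
    import urllib.request

    # Request a capture at an arbitrary (possibly nonexistent) timestamp.
    requested = "https://web.archive.org/web/20090101000000/http://example.com/"

    with urllib.request.urlopen(requested) as response:
        # The Wayback Machine redirects to the capture closest in time;
        # geturl() returns the final URL, whose 14-digit timestamp
        # (YYYYMMDDhhmmss) identifies the capture that was served.
        print("Requested:", requested)
        print("Served:   ", response.geturl())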
Starting in October 2019, users were limited to 15 archival requests and retrievals per minute.
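A client that wants to stay within this limit can simply space out its calls. The sketch below, in Python, paces an arbitrary fetch function so it never exceeds 15 requests per minute; the fetch callable and URL list are hypothetical placeholders, and only the per-minute limit itself comes from the figure above.
    import time

    REQUESTS_PER_MINUTE = 15                     # limit stated above
    MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE    # seconds between requests

    def throttled_fetch(urls, fetch):
        """Yield fetch(url) for each URL, pacing calls to respect the limit."""
        last_call = 0.0
        for url in urls:
            wait = MIN_INTERVAL - (time.monotonic() - last_call)
            if wait > 0:
                time.sleep(wait)
            last_call = time.monotonic()
            yield fetch(url)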
Storage capacity and growth
As technology has developed over the years, the storage capacity of the Wayback Machine has grown. In 2003, after only two years of public access, the Wayback Machine was growing at a rate of 12 terabytes per month. The data is stored on PetaBox rack systems custom designed by Internet Archive staff. The first 100 TB rack became fully operational in June 2004, although it soon became clear that much more storage would be needed. As of 2011, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month.
In March that year, it was said on the Wayback Machine forum that "the Beta of the new Wayback Machine has a more complete and up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned, as it will be phased out this year." Also in 2011, the Internet Archive installed its sixth pair of PetaBox racks, which increased the Wayback Machine's storage capacity by 700 terabytes.
In January 2013, the Internet Archive announced a milestone of 240 billion archived URLs.
In December 2014, the Wayback Machine contained 435 billion web pages (almost nine petabytes of data) and was growing at about 20 terabytes a week.
In July 2016, the Wayback Machine reportedly contained around 15 petabytes of data. In October 2016, it was announced that the way web pages are counted would be changed, resulting in a decrease in the displayed counts of archived pages. Embedded objects such as pictures, videos, style sheets, and JavaScript files are no longer counted as "web pages", whereas HTML, PDF, and plain text documents remain counted.
In September 2018, the Wayback Machine contained over 25 petabytes of data. As of December 2020, the Wayback Machine contained over 70 petabytes of data.
In 2025, the Wayback Machine reached one trillion archived web pages, with a series of events scheduled throughout October to celebrate the milestone.
Wayback Machine APIs
The Wayback Machine offers three public APIs: SavePageNow, Availability, and CDX. SavePageNow can be used to archive web pages on demand. The Availability API checks whether an archived capture of a given web page exists. The CDX API supports complex querying, filtering, and analysis of capture data.
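As a rough illustration, the following Python sketch (standard library only) queries the Availability API for the capture closest to a requested date and lists a few captures through the CDX API; the endpoint paths follow their publicly documented forms, while the example URL and parameters are placeholders.
    import json
    import urllib.request

    # Availability API: is there an archived capture of the page, and which
    # capture is closest to the requested timestamp?
    avail = "https://archive.org/wayback/available?url=example.com&timestamp=20200101"
    with urllib.request.urlopen(avail) as resp:
        print(json.load(resp).get("archived_snapshots"))

    # CDX API: enumerate captures of a URL for filtering and analysis.
    cdx = "https://web.archive.org/cdx/search/cdx?url=example.com&output=json&limit=5"
    with urllib.request.urlopen(cdx) as resp:
        rows = json.load(resp)      # first row holds the field names
        for row in rows:
            print(row)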
Website exclusion policy
Historically, the Wayback Machine has respected the robots exclusion standard in determining whether a website would be crawled and, if it had already been crawled, whether its archives would be publicly viewable. Website owners had the option to opt out of the Wayback Machine through the use of robots.txt, and the rules were applied retroactively: if a site blocked the Internet Archive, any previously archived pages from the domain were immediately rendered unavailable as well. The Internet Archive also stated, "Sometimes, a website owner will contact us directly and ask us to stop crawling or archiving a site. We comply with these requests." In addition, the website says: "The Internet Archive is not interested in preserving or offering access to Web sites or other internet documents of persons who do not want their materials in the collection."
On April 17, 2017, reports surfaced of sites that had gone defunct and become parked domains that used robots.txt to exclude themselves from search engines, which inadvertently excluded them from the Wayback Machine as well. Following this, the Internet Archive changed the policy to require an explicit exclusion request to remove a site from the Wayback Machine.
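How a robots.txt check of this kind works can be sketched with Python's standard robotparser module. The crawler user-agent string below, "ia_archiver", is the one historically associated with Internet Archive crawling, and the domain is a placeholder; the snippet only illustrates the general mechanism of the robots exclusion standard, not the Archive's actual crawler code.
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("http://example.com/robots.txt")
    rp.read()

    # If robots.txt disallows this user agent, the page would not be crawled,
    # and under the old retroactive policy any existing captures from the
    # domain would also have been hidden from public view.
    print(rp.can_fetch("ia_archiver", "http://example.com/somepage.html"))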
The Oakland Archive Policy
Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity, known as The Oakland Archive Policy, published in 2002 by the School of Information Management and Systems at the University of California, Berkeley, which gives a website owner the right to block access to the site's archives. Wayback has complied with this policy to help avoid expensive litigation.
The retroactive exclusion policy began to relax in 2017, when the Wayback Machine stopped honoring robots.txt on U.S. government and military websites for both crawling and displaying web pages. As of April 2017, Wayback is ignoring robots.txt more broadly, not just for U.S. government websites.