Overclocking


In computing, overclocking is the practice of increasing the clock rate of a semiconductor device, such as a processor, beyond its rated speed, potentially increasing its performance. Overclocked devices, however, may have shorter lifespans, become unstable and unreliable, and in extreme cases, be permanently damaged. Many manufacturers do not cover damage from overclocking in their warranties, while some allow it inside a predefined safety margin.

Overview

A semiconductor device's processing speed depends on a variety of factors, including, but not limited to, its clock speed, its microarchitecture, the software it is running, and the bandwidth, latency, and size of each level of its memory hierarchy. All else being equal, a device with a higher clock rate will generally, though not necessarily, perform faster. Operating voltage is often increased to maintain a component's stability at accelerated speeds. Operating at higher frequencies and voltages increases power consumption and heat output. Overclocking a device introduces additional risks of failure, for example by overheating when the increased heat load is not removed, or by the device drawing more power than its power supply can provide.
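The relationship between clock rate, voltage, and power described above can be sketched with the standard dynamic-power approximation for CMOS logic, P ∝ C·V²·f. The 20% overclock and 10% voltage bump below are illustrative assumptions, not figures for any particular part:

```python
# Sketch of the dynamic-power relationship P ≈ C * V^2 * f for CMOS logic.
# Scaling factors are relative to stock; the capacitance term cancels out.

def relative_power(freq_scale: float, voltage_scale: float) -> float:
    """Return power draw relative to stock, per P proportional to V^2 * f."""
    return voltage_scale ** 2 * freq_scale

stock = relative_power(1.0, 1.0)          # 1.0 (baseline)
overclocked = relative_power(1.20, 1.10)  # hypothetical +20% clock, +10% Vcore

print(f"Power vs. stock: {overclocked / stock:.3f}x")  # ~1.452x
```

Note that the voltage term is squared: a modest voltage increase raises power, and thus heat, faster than the frequency increase it enables.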

Underclocking

Underclocking or downclocking is the practice of lowering a device's clock rate below its default. An underclocked device trades lowered performance for reductions in power consumption and heat output. Such a device can often make do with a smaller heatsink, or with slower-spinning fans (if any) for quieter operation. For battery-powered devices, e.g., smartphones and laptops, underclocking can be used to lower power consumption and extend battery life; some devices underclock themselves automatically when running on battery power.
Underclocking and undervolting may be attempted on a desktop system to make it operate silently while potentially offering higher performance than contemporary low-voltage processor offerings. The builder takes a "standard-voltage" part and attempts to run it at lower voltages to meet an acceptable performance/noise target for the build. This is also attractive because using a "standard-voltage" processor in a "low-voltage" application avoids the traditional price premium for an officially certified low-voltage version. However, as with overclocking, there is no guarantee of success, and the builder's time spent researching given system/processor combinations, and especially the time and tedium of many iterations of stability testing, must be considered. The usefulness of underclocking depends on the processor offerings, prices, and availability at the time of the build. Underclocking is also sometimes used when troubleshooting.
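The battery-life benefit of underclocking can be sketched with the same P ∝ V²·f scaling. All numbers below (battery capacity, CPU and platform power, voltage/frequency scaling) are illustrative assumptions, not measurements of any real system:

```python
# Rough battery-life estimate for a hypothetical underclocked laptop.
# CPU power scales as V^2 * f; platform power (screen, RAM, etc.) is fixed.

def battery_hours(capacity_wh: float, cpu_watts: float, platform_watts: float) -> float:
    """Runtime in hours given CPU power plus fixed platform power."""
    return capacity_wh / (cpu_watts + platform_watts)

stock_cpu = 15.0  # W, hypothetical stock CPU draw
# Underclock to 80% frequency at 90% voltage: power scales by 0.9^2 * 0.8 = 0.648
underclocked_cpu = stock_cpu * 0.9 ** 2 * 0.8

print(f"Stock:        {battery_hours(56.0, stock_cpu, 8.0):.1f} h")
print(f"Underclocked: {battery_hours(56.0, underclocked_cpu, 8.0):.1f} h")
```

Because platform power does not shrink with the CPU clock, the runtime gain is real but smaller than the CPU power reduction alone would suggest.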

Enthusiast culture

Overclocking has become more accessible, with motherboard makers offering overclocking as a marketing feature on their mainstream product lines. However, the practice is embraced more by enthusiasts than by professional users, as overclocking carries a risk of reduced reliability and accuracy, and of damage to data and equipment. Additionally, most manufacturer warranties and service agreements do not cover overclocked components or any incidental damage caused by their use. While overclocking can still be an option for increasing personal computing capacity, and thus workflow productivity, for professional users, the importance of thoroughly stability-testing components before deploying them in a production environment cannot be overstated.
Overclocking offers several draws for enthusiasts. It allows testing components at speeds not currently offered by the manufacturer, or at speeds only officially offered on specialized, higher-priced versions of a product. A general trend in the computing industry is that new technologies debut in the high-end market first, then trickle down to the performance and mainstream markets. If a high-end part differs only in its clock speed, an enthusiast can attempt to overclock a mainstream part to simulate the high-end offering. This can give insight into how over-the-horizon technologies will perform before they are officially available on the mainstream market, which can be helpful for users deciding whether to plan a purchase or upgrade around the new feature when it is officially released.
Some hobbyists enjoy building, tuning, and "hot-rodding" their systems for competitive benchmarking, competing with like-minded users for high scores in standardized computer benchmark suites. Others purchase a low-cost model in a given product line and attempt to overclock it to match a more expensive model's stock performance. Another approach is overclocking older components to keep pace with rising system requirements, extending the useful service life of the older part or at least delaying a hardware purchase made solely for performance reasons. A further rationale for overclocking older equipment is that, even if overclocking stresses it to the point of early failure, little is lost: the hardware is already depreciated and would have needed to be replaced in any case.

Factors

Cooling

Stock cooling systems are commonly designed for the heat produced during non-overclocked use and may not be adequate for overclocked parts, which often require upgraded cooling. Upgrades may include additional or more powerful fans, larger and more efficient heat sinks, heat pipes, or water cooling.

Heat sinks

Heat sinks are passive heat exchangers designed to carry away excess heat generated by the device they are in physical contact with. They are commonly made of copper or aluminum; copper has higher thermal conductivity, while aluminum is less efficient but cheaper. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to balance performance and cost.
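Whether a given heatsink is adequate can be estimated with the usual thermal-resistance model, T_junction ≈ T_ambient + P · θ_total, where θ is in °C/W. The θ values and power figures below are illustrative assumptions, not specifications of real coolers:

```python
# Back-of-envelope junction-temperature estimate from thermal resistance:
#   T_junction ~= T_ambient + P * (theta_sink + theta_interface)

def junction_temp(ambient_c: float, power_w: float,
                  theta_sink: float, theta_interface: float = 0.2) -> float:
    """Estimate die temperature (deg C) given thermal resistances in deg C/W."""
    return ambient_c + power_w * (theta_sink + theta_interface)

# A hypothetical 0.35 deg C/W copper tower cooler vs. a 0.60 deg C/W aluminum
# stock sink, with the CPU pushed to 140 W by an overclock:
for label, theta in (("copper tower  ", 0.35), ("aluminum stock", 0.60)):
    print(f"{label}: {junction_temp(25.0, 140.0, theta):.1f} deg C")
```

The model makes the trade-off concrete: at overclocked power levels, the difference in sink resistance translates directly into tens of degrees at the die.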
[Image: Liquid nitrogen may be used to cool an overclocked system when extreme cooling is needed.]
Other cooling methods include forced convection and phase-change cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases, such as record-setting attempts or one-off experiments, rather than for cooling an everyday system. In June 2006, IBM and the Georgia Institute of Technology jointly announced a new record silicon-based chip clock rate above 500 GHz, achieved by cooling the chip with liquid helium. The current CPU frequency world record is 9,130.33 MHz, achieved in August 2025 with an Intel Core i9-14900KF. Such extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components. Moreover, silicon-based junction gate field-effect transistors degrade at very low temperatures and eventually cease to function, or "freeze out", as the silicon ceases to be semiconducting, so using extremely cold coolants may cause devices to fail. A blowtorch is sometimes used to briefly raise the temperature of a component when over-cooling becomes a problem.
Submersion cooling, used by the Cray-2 supercomputer, involves immersing part of the computer system directly in a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components. A good submersion liquid is 3M's Fluorinert, which is expensive. A cheaper option is mineral oil, but impurities such as water may cause it to conduct electricity and damage components via short circuits.
Amateur overclocking enthusiasts have used a mixture of dry ice and a solvent with a low freezing point, such as acetone or isopropyl alcohol. This cooling bath, often used in laboratories, reaches a temperature of about −78 °C.

Stability and reliability

As an overclocked component operates outside the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption from undetected errors. Such failures may never be correctly diagnosed and may instead be wrongly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may also permanently damage components enough to make them misbehave without rendering them totally unusable.
A large-scale 2011 field study of hardware faults causing system crashes in consumer PCs and laptops showed a 4- to 20-fold increase in system crashes due to CPU failure for overclocked computers over an eight-month period.
In general, overclockers claim that testing can ensure an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for a private individual to thoroughly test the full functionality of a processor.
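The idea behind such stability testing can be sketched as a consistency check: run the same deterministic computation repeatedly and compare the results, since any mismatch indicates silent computation errors. This toy sketch only illustrates the principle; real stress-testing tools exercise the hardware far more thoroughly:

```python
# Minimal consistency-style stability check: a deterministic workload must
# produce byte-identical results on every run. A mismatch on overclocked
# hardware would suggest silent computation errors.

import hashlib

def workload() -> bytes:
    """Deterministic floating-point-heavy workload; result should never vary."""
    acc = 0.0
    for i in range(1, 200_000):
        acc += (i * 1.0000001) ** 0.5
    return repr(acc).encode()

def stable(runs: int = 5) -> bool:
    """Return True if every run of the workload hashes to the same digest."""
    digests = {hashlib.sha256(workload()).hexdigest() for _ in range(runs)}
    return len(digests) == 1

print("consistent" if stable() else "INCONSISTENT: possible silent errors")
```

Even many passing runs only build confidence; as the text notes, no finite amount of such testing can exhaustively verify a processor.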
Some semiconductor manufacturing techniques, such as silicon on insulator (SOI), produce devices with hysteretic behavior: a circuit's performance is affected by past events, so without carefully targeted tests, a particular sequence of state changes may work at overclocked rates in one situation but not in another, even at the same voltage and temperature. Such a system may pass stress tests yet experience instability in other programs.

Factors impacting overclocking potential

Overclockability arises in part from the economics of manufacturing CPUs and other components. In many cases, components with different ratings are manufactured by the same process and tested after manufacture to determine their actual maximum capabilities. Components are then marked with a rating chosen to fit the semiconductor manufacturer's market needs. If manufacturing yield is high, more higher-rated components may be produced than the market requires, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In some cases, the true maximum capability of a component may exceed even that of the highest-rated component sold. Many devices sold with a lower rating may thus behave in every way like higher-rated ones, while in the worst case, operation at the higher rating may be problematic.
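This "binning" process can be illustrated with a toy simulation in which parts from one process vary in tested maximum frequency and are sold at the rating of the bin they fall into. The frequencies, bin thresholds, and distribution below are invented for illustration:

```python
# Toy simulation of binning: chips from one process line are tested and
# assigned the highest rating whose threshold they meet. When demand shifts,
# chips that tested higher may be sold under a lower rating -- those are the
# parts with overclocking headroom.

import random

random.seed(42)  # deterministic illustration
BINS = [(5.0, "flagship"), (4.6, "performance"), (4.2, "mainstream")]

def rate(tested_max_ghz: float) -> str:
    """Return the sales rating for a chip's tested maximum frequency."""
    for threshold, label in BINS:
        if tested_max_ghz >= threshold:
            return label
    return "reject"

# Simulated tested maxima, normally distributed around 4.8 GHz:
chips = [random.gauss(4.8, 0.3) for _ in range(1000)]
for _, label in BINS:
    print(label, sum(rate(c) == label for c in chips))
```

The simulation shows why lower-rated parts often have headroom: the bin threshold is a floor on tested capability, not a ceiling.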
Notably, higher clock rates always mean greater waste heat generation, since transistors switch, dumping charge to ground, more often. In some cases, the chief drawback of an overclocked part is that it dissipates far more heat than the maximum published by the manufacturer. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".