In the United States, quarters were historically struck in silver to guarantee the value and durability of the coin. However, as the price of silver rose and supplies dwindled, Congress passed the Coinage Act of 1965, which eliminated silver from the quarter. The final year for silver quarters was 1964; after that, the composition changed to a combination of copper and nickel that is still used to this day.
Once the decision to eliminate silver from quarters was made, the U.S. Mint had to manage the transition carefully. The 1964 quarters were struck in 90% silver, and they quickly became highly sought after by collectors and investors. That demand, combined with the rising price of silver, prompted widespread hoarding and melting, which in turn created a shortage of quarters in circulation.
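A rough melt-value calculation shows why hoarding and melting made economic sense. The sketch below uses the Mint's specifications for a pre-1965 quarter (6.25 g total weight, 90% fine silver); the spot price is a hypothetical figure chosen for illustration, not a historical quote.

```python
# Back-of-the-envelope melt value of a 90% silver quarter.
# Coin figures are the U.S. Mint's specifications for pre-1965 quarters;
# the spot price is a hypothetical illustration, not a market quote.

COIN_WEIGHT_G = 6.25         # total weight of a 90% silver quarter, in grams
SILVER_FINENESS = 0.90       # 90% silver, 10% copper
GRAMS_PER_TROY_OZ = 31.1035  # grams in one troy ounce

SPOT_PRICE_USD = 25.00       # hypothetical silver price, USD per troy ounce

silver_oz = COIN_WEIGHT_G * SILVER_FINENESS / GRAMS_PER_TROY_OZ
melt_value = silver_oz * SPOT_PRICE_USD

print(f"Silver content: {silver_oz:.4f} troy oz")  # ~0.1808 troy oz
print(f"Melt value:     ${melt_value:.2f} vs. $0.25 face value")
```

At any silver price above roughly $1.38 per troy ounce (0.25 / 0.1808), the metal in a quarter is worth more than its face value, and the rising prices of the mid-1960s were closing in on exactly that threshold.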
To counter this, the U.S. Mint phased silver out of the quarter and other denominations. In 1965, the quarter was changed to a copper-nickel clad composition: outer layers of 75% copper and 25% nickel bonded to a core of pure copper. The change was aimed at cutting the coins' intrinsic value and removing the incentive to hoard them.
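The clad structure also determines the coin's overall composition by simple mass accounting. The U.S. Mint lists clad quarters as 8.33% nickel and 91.67% copper overall; the sketch below, assuming that spec along with the 75/25 outer layers and pure-copper core described above, backs out how much of the coin's weight the cladding accounts for.

```python
# Back out the cladding's share of a clad quarter's weight from the
# Mint's published overall composition (8.33% nickel, 91.67% copper).
# Mass balance: overall_ni = clad_share * CLAD_NICKEL + (1 - clad_share) * CORE_NICKEL

OVERALL_NICKEL = 0.0833  # nickel fraction of the whole coin (Mint spec)
CLAD_NICKEL = 0.25       # nickel fraction of the outer layers
CORE_NICKEL = 0.0        # the core is pure copper

clad_share = (OVERALL_NICKEL - CORE_NICKEL) / (CLAD_NICKEL - CORE_NICKEL)
print(f"Cladding is ~{clad_share:.1%} of the coin's weight")  # ~33.3%
```

In other words, the two outer layers together make up about a third of the coin's mass, with the copper core supplying the rest.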
Though the discontinuation of silver in quarters had practical reasons, it also marked the end of an era for coinage in the United States. Silver coins had been a staple of American currency since the early days of the republic, and the shift to base-metal alloys was a significant departure from tradition. Many collectors and coin enthusiasts consider the silver Washington quarters minted from 1932 through 1964 particularly valuable and collectible, as they represent a bygone era of precious-metal coinage.
Today, quarters remain an essential part of everyday commerce, but their composition has changed markedly since the days of silver. While modern quarters carry little intrinsic metal value, they retain both their purchasing power and their cultural significance.