Bambu runcing
A bambu runcing spear in the collection of the Monumen Yogya Kembali Museum.
Photograph showing prisoners in Surabaya carrying bambu runcing, 10 November 1945.
Bambu runcing (sharpened bamboo) is a weapon made from bamboo whose end has been cut to a point. The weapon is said to have been used by the Indonesian people as a means of resistance against Dutch colonial rule.
Today the bambu runcing emblem is widely used by regions across Indonesia to symbolize courage and sacrifice in the struggle for independence.
One of its prominent figures was K.H. Subchi of Parakan, Temanggung, known by the title "General of the Bambu Runcing". He served as adviser to the BMT (Barisan Muslimin Temanggung), which later became known as the Barisan Bambu Runcing.
History
Who first initiated the struggle with the bambu runcing as a mass, national weapon of resistance is still not entirely clear. Sharpened bamboo spears had been used in Seinendan military training during the Japanese occupation, but the specific practice of deploying the bambu runcing with prayers and the infusion of inner power can be firmly said to have begun in Parakan, Temanggung. Views differ on which kiai were involved, but all converge on the key figures in Parakan, namely K.H. Subkhi (Subuki) and K.H.R. Sumo Gunardo, together with other kiai in Parakan and Temanggung such as K.H. M. Ali (head of the oldest pesantren in Parakan), K.H. Abdurrahman, K.H. Nawawi and K.H. Istakhori, and later K.H. Mandzur of Temanggung and various kiai of NU Temanggung, particularly the MWC Parakan.
The bambu runcing was adopted as an instrument of struggle out of sheer necessity: the available weapons of war were lacking, yet the fight had to continue, especially after Indonesia declared independence. After the proclamation, Indonesia faced numerous and powerful enemies: the Japanese who still remained, the Dutch who wanted to rule again, and the Allies who stood to take over from both. Weapons were therefore urgently needed, and the bambu runcing and other traditional arms offered a cheap alternative that could be produced on a mass scale. The power of prayer was held to be the main source of these traditional weapons' strength.
In historical reality, fighting with the bambu runcing took place on nearly every battlefield. People's militias such as the BKR, AMRI, Hizbullah, Sabilillah and others, engaged in battles across many episodes, used the bambu runcing as their main weapon until they were able to capture weapons from the enemy.
Historical relics of the bambu runcing, particularly those connected with the Parakan movement, can be traced to the places and the kiai involved in its various episodes. The house of K.H. Subkhi still stands together with many of his belongings; the house of K.H.R. Sumo Gunardo also survives with several of his relics, some of which are kept at the Monjali Museum (Monumen Jogja Kembali); and the pesantren of K.H. M. Ali still stands and continues to grow. The former BMT office and the blessing (penyepuhan) centre, although altered, still show their traces, and the well whose water was drawn to consecrate the bamboo spears also still exists. In Temanggung, the place of Kiai Mandzur, known as Mujahidin, remains a centre of tarekat activity to this day.
The armed struggle in which the bambu runcing was wielded by the various people's militias during the war of independence is clear and well attested. In the period after the Proclamation of Independence, when the main enemies were Japan, the Netherlands and the Allies and the Indonesian people did not yet possess enough firearms, the bambu runcing became the mass weapon of the Indonesian people. Modern weapons came into the people's hands only after being captured from the enemy, chiefly from the Japanese, who had surrendered.
See also
Barisan Bambu Runcing
Sources
Kiai dan Bambu Runcing: Mengungkap Peran Sejarah Kiai dan Bambu Runcing pada Masa Perang Kemerdekaan (Kajian Sejarah Lesan)
A view of the break-action of a typical side-by-side double-barreled shotgun, with the Anson & Deeley boxlock action open and the extractor visible. The opening lever and the safety catch can also be clearly seen.
A double-barreled shotgun (often simply called a "double") is a shotgun with two parallel barrels, allowing two shots to be fired in quick succession.
Construction
Modern double-barreled shotguns, often known as doubles, are almost universally break open actions, with the barrels tilting up at the rear to expose the breech ends of the barrels for unloading and reloading. Since there is no reciprocating action needed to eject and reload the shells, doubles are more compact than repeating designs such as pump action or lever-action shotguns.
Barrel configuration
See also: coach gun and sawed-off shotgun
Double-barreled shotguns come in two basic configurations: the side by side shotgun (SxS) and the over/under shotgun ("over and under", O/U, etc.), indicating the arrangement of barrels. The original double-barreled guns were nearly all SxS designs, which was a more practical design in the days of muzzle-loading firearms. Early cartridge shotguns also used the SxS action, because they kept the exposed hammers of the earlier muzzle-loading shotguns they evolved from. When hammerless designs started to become common, the O/U design was introduced, and most modern sporting doubles are O/U designs.[1]
One significant advantage that doubles have over single barrel repeating shotguns is the ability to provide access to more than one choke at a time. Some shotgun sports, such as skeet, use crossing targets presented in a narrow range of distance, and only require one level of choke. Others, like sporting clays, give the shooter targets at differing ranges, and targets that might approach or recede from the shooter, and so must be engaged at differing ranges. Having two barrels lets the shooter use a more open choke for near targets, and a tighter choke for distant targets, providing the optimal shot pattern for each distance.
Their disadvantage lies in the fact that the barrels of a double-barreled shotgun, whether O/U or SxS, are not parallel, but slightly angled, so that shots from the barrels converge, usually at "40 yards out". For the SxS configuration, the shotstring continues on its path to the opposite side of the rib after the converging point; for example, the left barrel's discharge travels on the left of the rib till it hits dead center at 40 yards out, after that, the discharge continues on to the right. In the O/U configuration with a parallel rib, both barrels' discharges will keep to the dead center, but the discharge from the "under" barrel will shoot higher than the discharge from the "over" barrel after 40 yards. Thus, double-barreled shotguns are accurate only at practical shotgun ranges, though the range of their ammunition easily exceeds four to six times that range.
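The convergence geometry can be illustrated with a short calculation. The sketch below is only a minimal illustration, assuming a hypothetical 40 mm centre-to-centre barrel spacing and the 40-yard convergence distance mentioned above; it computes how far each barrel's line of discharge sits from the rib centreline as range increases on a SxS gun.

```python
# Minimal sketch: lateral drift of each barrel's line of fire on a SxS double,
# assuming a hypothetical 40 mm barrel spacing and a 40-yard convergence point.

YARD_M = 0.9144
BARREL_SPACING_M = 0.040      # assumed centre-to-centre spacing (hypothetical)
CONVERGENCE_M = 40 * YARD_M   # barrels regulated to cross at 40 yards

def lateral_offset(range_m: float, side: str) -> float:
    """Offset (m) of the shot path from the rib centreline at a given range.

    Each barrel starts half the spacing off-centre and is angled so its path
    crosses the centreline at the convergence distance, then carries on to
    the opposite side, as described in the text above.
    """
    start = BARREL_SPACING_M / 2 * (-1 if side == "left" else 1)
    # Straight-line path from the muzzle offset to zero at the convergence point.
    return start * (1 - range_m / CONVERGENCE_M)

if __name__ == "__main__":
    for yards in (10, 20, 40, 60, 80):
        off = lateral_offset(yards * YARD_M, "left")
        print(f"{yards:2d} yd: left barrel path {off*1000:+.1f} mm from centreline")
```

With these assumed numbers the left barrel's path sits about 15 mm left of centre at 10 yards, crosses dead centre at 40 yards, and is about 20 mm right of centre at 80 yards.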
SxS shotguns are often more expensive, and may take more practice to aim effectively than an O/U. The off-center nature of the recoil in a SxS gun may make shooting the body-side barrel slightly more painful by comparison to an O/U, single-shot, or pump/lever action shotgun. Gas-operated, and to a lesser extent recoil-operated, designs will recoil less than either. More SxS than O/U guns have traditional 'cast-off' stocks, where the end of the buttstock veers to the right, allowing a right-handed user to point the gun more easily.[1]
Trigger mechanism
The early doubles used two triggers, one for each barrel. These were located front to back inside the trigger guard, the index finger being used to pull either trigger, as having two fingers inside the trigger guard can cause a recoil induced double-discharge. Double trigger designs are typically set up for right-handed users.[1] In double trigger designs, it is often possible to pull both triggers at once, firing both barrels simultaneously, though this is generally not recommended as it doubles the recoil, battering both shooter and shotgun. Discharging both barrels at the same time has long been a hunting trick employed by hunters using 8 gauge "elephant" shotguns, firing the two two-ounce slugs for sheer stopping power at close range.
Later models use a single trigger that alternately fires both barrels, called a single selective trigger or SST. The SST does not allow firing both barrels at once, since the single trigger must be pulled twice in order to fire both barrels. The change from one barrel to the other may be done by a clockwork type system, where a cam alternates between barrels, or by an inertial system where the recoil of firing the first barrel toggles the trigger to the next barrel. A double-barreled shotgun with an inertial trigger works best with full power shotshells; shooting low recoil shotshells often will not reliably toggle the inertial trigger, causing an apparent failure to fire occasionally when attempting to depress the trigger a second time to fire the second barrel. Generally there is a method of selecting the order in which the barrels of an SST shotgun fire; commonly this is done through manipulation of the safety, pushing to one side to select top barrel first and the other side to select bottom barrel first. In the event that an inertial trigger does not toggle to the second barrel when firing low recoil shotshells, manually selecting the order to the second barrel will enable the second barrel to fire when the trigger is depressed again.
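As a rough illustration of the toggling behaviour described above, the following sketch models an inertial single selective trigger as a tiny state machine. The recoil threshold and barrel names are hypothetical placeholders, not values from any real mechanism; the point is only that a light-recoiling shell may fail to toggle the mechanism, so the second trigger pull does nothing until the firing order is manually reselected.

```python
# Minimal sketch of an inertial single selective trigger (SST).
# The recoil threshold value is an assumption; real mechanisms differ.

RECOIL_TOGGLE_THRESHOLD = 0.6   # assumed fraction of full-power recoil needed to toggle

class InertialSST:
    def __init__(self, first_barrel="bottom"):
        self.next_barrel = first_barrel        # selected via the safety in many designs
        self.cocked = {"bottom": True, "top": True}

    def select_order(self, first_barrel):
        """Manually select which barrel fires next (e.g. by pushing the safety sideways)."""
        self.next_barrel = first_barrel

    def pull_trigger(self, recoil_fraction=1.0):
        """Fire the selected barrel; toggle to the other barrel only if recoil suffices."""
        barrel = self.next_barrel
        if not self.cocked[barrel]:
            return f"click ({barrel} barrel already fired)"   # apparent failure to fire
        self.cocked[barrel] = False
        if recoil_fraction >= RECOIL_TOGGLE_THRESHOLD:
            self.next_barrel = "top" if barrel == "bottom" else "bottom"
        return f"bang ({barrel} barrel)"

gun = InertialSST()
print(gun.pull_trigger(recoil_fraction=0.4))  # low-recoil shell: fires, but does not toggle
print(gun.pull_trigger())                     # click: still pointing at the fired barrel
gun.select_order("top")                       # manually select the second barrel
print(gun.pull_trigger())                     # bang (top barrel)
```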
One of the advantages of the double, with double triggers or SST, is that a second shot can be taken almost immediately after the first, utilizing different chokes for the two shots. (Assuming, of course, that full power shotshells are fired, at least for a double-barreled shotgun with an inertial type SST, as needed to toggle the inertial trigger.)
Regulation
Regulation is a term used for multi-barreled firearms that indicates how close to the same point of aim the barrels will shoot. Regulation is very important, because a poorly regulated gun may hit consistently with one barrel, but miss consistently with the other, making the gun nearly useless for anything requiring two shots. Fortunately, the short ranges and spread of shot provide a significant overlap, so a small error in regulation in a double will often be too small to be noticed. Generally the shotguns are regulated to hit the point of aim at a given distance, usually the maximum expected range since that is the range at which a full choke would be used, and where precise regulation matters most.
Regional use
The double-barreled shotgun is seen as a weapon of prestige and authority in rural parts of India,[citation needed] where it is known as dunali[2] (literally "two pipes"). It is especially common in Bihar, Purvanchal, Uttar Pradesh, Haryana and Punjab.[citation needed]
Sabre
French sabre of the sailors of the Guard, First Empire.
Type: Sword
Service history
Wars: Napoleonic Wars, American Revolution, American Civil War, Franco-Prussian War, World War I, Polish–Soviet War
Production history
Produced: c. 1800 – present
Specifications
Blade type: Single-edged or double-edged, curved or straight blade, pointed tip
The sabre or saber (see spelling differences) is a type of backsword, usually with a curved, single-edged blade and a rather large hand guard, covering the knuckles of the hand as well as the thumb and forefinger.
Ultimately based on a medieval type of single-edged weapon, the sabre was adopted as the weapon of heavy cavalry in Early Modern warfare. Although sabres are typically thought of as curved-bladed slashing weapons, those used by the heavy cavalry of the 17th to 19th centuries often had straight and even double-edged blades more suitable for thrusting. The length of sabres varied, and most were carried in a scabbard hanging from a shoulder belt known as a baldric or from a waist-mounted sword belt, usually with slings of differing lengths to permit the scabbard to hang below the rider's waist level. The last sabre issued to US cavalry was the Patton saber of 1913, designed to be mounted to the cavalryman's saddle.
History
Origins
Medieval (12th century) Eastern European szabla blade.
Sabre-like curved backswords have been in use in Europe since the early medieval period (some early examples include the falchion and the Byzantine paramērion). The oldest well-documented "sabres" are those found in 9th and 10th century graves of Magyars (Hungarians) who entered the Carpathian Basin at this time.[1] These oldest sabres had a slight curve, short, down-turned quillons, the grip facing the opposite direction to the blade and a sharp point with the top third of the reverse edge sharpened.[2]
The introduction of the sabre proper in Western Europe, along with the term sabre itself, dates to the 17th century, via influence of the Eastern European szabla type ultimately derived from these medieval backswords.[3] The adoption of the term is connected to the employment of Hungarian "Hussar" (huszár) cavalry by Western armies at the time.[4] Hungarian hussars were employed as light cavalry, with the role of harassing enemy skirmishers, overrunning artillery positions, and pursuing fleeing troops. In the late 17th and 18th centuries, many Hungarian hussars fled to other Central and Western European countries and became the core of light cavalry formations created there.[5] The Hungarian term szablya is ultimately traced to the Northwestern Turkic selebe, with contamination from the Hungarian verb szab "to cut".[6]
Early modern use
The original type of Szabla, or Polish sabre, was used as a cavalry weapon, possibly inspired by Hungarian or wider Turco-Mongol warfare. The Karabela was a type of szabla popular in the late 17th century, worn by the Polish, Lithuanian, and Ukrainian nobility class, the Szlachta. While designed as a cavalry weapon, it also came to replace various types of straight-bladed swords used by infantry.[7] The Swiss sabre originates as a regular sword with a single-edged blade in the early 16th century, but by the 17th century begins to exhibit specialized hilt types.
Modern use
A British Hussar general with a scabbarded kilij of Turkish manufacture (1812)
The briquet, typical infantry sabre of the Napoleonic Wars.
French Navy sabre of the 19th Century, "boarding sabre".
Lieutenant Colonel Teófilo Marxuach's M1902 Officer's Sabre and Scabbard at the National Historic Trust site at Castillo San Cristobal in San Juan, Puerto Rico
The sabre saw extensive military use in the early 19th century, particularly in the Napoleonic Wars, during which Napoleon used heavy cavalry charges to great effect against his enemies. Shorter versions of the sabre were also used as sidearms by dismounted units, although these were gradually replaced by fascine knives and sword bayonets as the century went on. Although there was extensive debate over the effectiveness of weapons such as the sabre and lance, the sabre remained the standard weapon of cavalry for mounted action in most armies until World War I. Thereafter it was gradually relegated to the status of a ceremonial weapon, and most horse cavalry was replaced by armoured cavalry from 1930 on.
Napoleonic era
The elegant but effective 1803 pattern sword that the British Government authorized for use by infantry officers during the wars against Napoleon featured a curved sabre blade which was often blued and engraved by the owner in accordance with his personal taste. Europeans rekindled their interest in sabres inspired by the Mameluke sword, a type of Middle Eastern scimitar, encountered due to their confrontations with the Mamelukes in the late 18th century and early 19th century. The Mamluks were originally of Turkish descent; the Egyptians bore Turkish sabres for hundreds of years. During the Napoleonic Wars, the French conquest of Egypt brought these beautiful and functional swords to the attention of Europeans. This type of sabre became very popular for light cavalry officers, in both France and Britain, and became a fashionable weapon for senior officers to wear. In 1831, the "Mamaluke" sword became a regulation pattern for British general officers (and is still in use today).
Russian Empire
In the Polish–Lithuanian Commonwealth (16th–18th centuries) a specific type of sabre-like melee weapon, the szabla, was used. The Don Cossacks used the shashka (originating from Circassian "sashho" - big knife) and sablja (from Circassian "sa" - knife and "blja" - snake), which also saw military and police use in the Russian Empire and early Soviet Union.
United States
The American victory over the rebellious forces in the citadel of Tripoli in 1805, during the First Barbary War, led to the presentation of bejewelled examples of these swords to the senior officers of the US Marines. Officers of the US Marine Corps still use a mameluke-pattern dress sword. Although some genuine Turkish kilij sabres were used by Westerners, most "mameluke sabres" were manufactured in Europe; although their hilts were very similar in form to the Ottoman prototype, their blades, even when an expanded yelman was incorporated, tended to be longer, narrower and less curved than those of the true kilij.
In the American Civil War, the sabre was used infrequently as a weapon, but saw notable deployment in the Battle of Brandy Station and at East Cavalry Field at the Battle of Gettysburg in 1863. Many cavalrymen—particularly on the Confederate side—eventually abandoned the long, heavy weapons in favour of revolvers and carbines.
Police
During the 19th and into the early 20th century, sabres were also used by both mounted and dismounted personnel in some European police forces. When the sabre was used by mounted police against crowds, the results could be appalling, as portrayed in a key scene in Doctor Zhivago. The sabre was later phased out in favour of the baton, or nightstick, for both practical and humanitarian reasons. The Gendarmerie of Belgium used them until at least 1950,[8] and the Swedish police forces until 1965.
Contemporary dress uniform
Further information: Dress uniform, ceremonial sword and Color guard
Swords with sabre blades remain a component of the dress uniforms worn by most national Army, Navy, Air Force, Marine and Coast Guard officers. Some militaries also issue ceremonial swords to their highest-ranking non-commissioned officers; this is seen as an honour since, typically, non-commissioned, enlisted/other-rank military service members are instead issued a cutlass blade rather than a sabre. Sword deployments in the modern military are no longer intended for use as weapons, and now serve primarily in ornamental or ceremonial functions. As such, they are typically made of stainless steel, a material which keeps its shine bright but is much too brittle for direct impacts, let alone full blade-on-blade combat, and may shatter if such usage is attempted. One distinctive ceremonial function a sabre serves in modern times is the Wedding Arch or Sabre Arch, performed for servicemen or women getting married.
Modern sport fencing
Main article: Sabre (fencing)
The modern fencing sabre bears little resemblance to the cavalry sabre, having a thin, 88 cm (35 in) long straight blade. One of the three weapons used in the sport of fencing, it is a very fast-paced weapon with bouts characterized by quick footwork and cutting with the edge. The only allowed target area is from the waist up - the region a mounted man could reach on a foe on the ground.
The concept of attacking above the waist only is a 20th-century change to the sport; previously sabreurs used to pad their legs against cutting slashes from their opponents. The reason for the above waist rule is unknown[9] as the sport is based on the use of infantry sabres and not cavalry sabres.
Powered exoskeleton
The exhibit "future soldier", designed by the US Army
A powered exoskeleton, also known as powered armor, exoframe, or exosuit, is a mobile machine consisting primarily of an outer framework (akin to an insect's exoskeleton) worn by a person, and powered by a system of motors, hydraulics, or pneumatics that delivers at least part of the energy for limb movement.
The main function of a powered exoskeleton is to assist the wearer by boosting their strength and endurance. They are commonly designed for military use, to help soldiers carry heavy loads both in and out of combat. In civilian areas, similar exoskeletons could be used to help firefighters and other rescue workers survive dangerous environments.[1] The medical field is another prime area for exoskeleton technology, where it can be used for enhanced precision during surgery,[citation needed] or as an assist to allow nurses to move heavy patients.[2]
Working prototypes of powered exoskeletons, including XOS[3] by Sarcos, and HULC[4] by Lockheed Martin (both meant for military use), have been constructed but have not yet been deployed in the field. Several companies have also created exosuits for medical use,[5] including the HAL 5 by Cyberdyne Inc.
An electric powered leg exoskeleton developed at MIT reduces the metabolic energy used when walking and carrying a load.[6] The exoskeleton augments human walking by providing mechanical power to the ankle joints.
Ekso Bionics is currently developing and manufacturing intelligently powered exoskeleton bionic devices that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers and paraplegics.
Various problems remain to be solved, the most daunting being the creation of a compact power supply powerful enough to allow an exoskeleton to operate for extended periods without being plugged into external power.[7]
History
The earliest exoskeleton-like device was a set of walking, jumping and running assisted apparatus developed in 1890 by a Russian named Nicholas Yagn. As a unit, the apparatus used compressed gas bags to store energy that would assist with movements, although it was passive in operation and required human power.[8] In 1917, US inventor Leslie C. Kelley developed what he called a pedomotor, which operated on steam power with artificial ligaments acting in parallel to the wearer's movements.[9] With the pedomotor, energy could be generated apart from the user.
The first true exoskeleton in the sense of being a mobile machine integrated with human movements was co-developed by General Electric and the United States military in the 1960s. The suit was named Hardiman, and made lifting 250 pounds (110 kg) feel like lifting 10 pounds (4.5 kg). Powered by hydraulics and electricity, the suit allowed the wearer to amplify their strength by a factor of 25, so that lifting 25 pounds was as easy as lifting one pound without the suit. A feature dubbed force feedback enabled the wearer to feel the forces and objects being manipulated.
While the general idea sounded somewhat promising, the actual Hardiman had major limitations.[10] It was impractical due to its 1,500-pound (680 kg) weight. Another issue was that it was a master-slave system: the operator wore a master suit inside the slave suit, which responded to the master and took on the work load. This multi-layer arrangement can work, but it responds more slowly than a single layer, and response time matters when the goal is physical enhancement. Its slow walking speed of 2.5 ft/s further limited practical uses. The project was not successful. Any attempt to use the full exoskeleton resulted in violent uncontrolled motion, and as a result it was never tested with a human inside. Further research concentrated on one arm. Although it could lift its specified load of 750 pounds (340 kg), it weighed three quarters of a ton, just over twice the liftable load. Without getting all the components to work together, the practical uses for the Hardiman project were limited.[11]
Exoskeleton being developed by DARPA
Los Alamos Laboratories worked on an exoskeleton project in the 1960s called Project Pitman. In 1986, an exoskeleton prototype called the LIFESUIT was created by Monty Reed, a US Army Ranger who had broken his back in a parachute accident.[12] While recovering in the hospital, he read Robert Heinlein's Starship Troopers and from Heinlein's description of Mobile Infantry Power Suits, he designed the LIFESUIT, and wrote letters to the military about his plans for the LIFESUIT. In 2001 LIFESUIT One (LSI) was built. In 2003 LS6 was able to record and play back a human gait. In 2005 LS12 was worn in a foot race known as the Saint Patrick's Day Dash in Seattle, Washington. Monty Reed and LIFESUIT XII set the Land Speed Distance Record for walking in robot suits. LS12 completed the 3-mile race in 90 minutes. The current LIFESUIT prototype 14 can walk one mile on a full charge and lift 92 kg (203 lb) for the wearer.[citation needed]
In January 2007, Newsweek magazine reported that the Pentagon had granted development funds to The University of Texas at Dallas' nanotechnologist Ray Baughman to develop military-grade artificial electroactive polymers. These electrically contractive fibers are intended to increase the strength-to-weight ratio of movement systems in military powered armor.[13]
Applications
Steve Jurvetson with a Hybrid Assistive Limb powered exoskeleton suit, commercially available in Japan.
One of the proposed main uses for an exoskeleton would be enabling a soldier to carry heavy objects (80–300 kg) while running or climbing stairs. Not only could a soldier potentially carry more weight, he could presumably wield heavier armor and weapons. Most models use a hydraulic system controlled by an on-board computer. They could be powered by an internal combustion engine, batteries or potentially fuel cells. Another area of application could be medical care, nursing in particular. Faced with the impending shortage of medical professionals and the increasing number of people in elderly care, several teams of Japanese engineers have developed exoskeletons designed to help nurses lift and carry patients.
Exoskeletons could also be applied in the rehabilitation of stroke or spinal cord injury patients. Such exoskeletons are sometimes also called step rehabilitation robots. An exoskeleton could reduce the number of therapists needed by allowing even the most impaired patient to be trained by one therapist, whereas several are currently needed. Training could also be more uniform, easier to analyze retrospectively, and specifically customized for each patient. At this time there are several projects designing training aids for rehabilitation centers (the LOPES exoskeleton, Lokomat, ALTACRO, CAPIO, the gait trainer, and HAL 5).
Exoskeletons could also be regarded as wearable robots: A wearable robot is a mechatronic system that is designed around the shape and function of the human body, with segments and joints corresponding to those of the person it is externally coupled with. German Research Centre for Artificial Intelligence developed two general purpose powered exoskeletons CAPIO and VI-Bot. They also considered human force sensitivities in the design and operation phases.[14] Teleoperation and power amplification were said to be the first applications, but after recent technological advances the range of application fields is said to have widened. Increasing recognition from the scientific community means that this technology is now employed in telemanipulation, man-amplification, neuromotor control research and rehabilitation, and to assist with impaired human motor control (Wearable Robots: Biomechatronic Exoskeletons).[15]
Current products
ReWalk: ReWalk features powered hip and knee motion to enable those with lower limb disabilities, including paraplegia as a result of spinal cord injury (SCI), to perform self-initiated standing, walking, and stair ascending/ descending.
Sarcos/Raytheon XOS Exoskeleton arms/legs. For use in the military, weighs 68 kg (150 lb) and allows the wearer to lift 90 kg (200 lb) with little or no effort.[16] In 2010, the XOS 2 was unveiled, which featured more fluid movement, increase in power output and decrease in power consumption.[3]
Ekso Bionics/Lockheed Martin HULC (Human Universal Load Carrier) legs, the primary competitor to Sarcos/Raytheon. Weighs 24 kg (53 lb)[17] and allows the user to carry up to 91 kg (201 lb) on a backpack attached to the exoskeleton independent of the user.[18] A modified version of HULC is also in development for medical use, to help patients walk.[19]
Ekso Bionics eLEGS: a hydraulically powered exoskeleton system allowing paraplegics to stand and walk with crutches or a walker
Cyberdyne's HAL 5 arms/legs. The first cyborg-type wearable robot allows the wearer to lift 10 times as much as they normally could.[20] HAL 5 is currently in use in Japanese hospitals, and was given global safety certification in 2013.[21]
Honda Exoskeleton Legs. Weighs 6.5 kg (14 lb) and features a seat for the wearer.[22]
M.I.T. Media Lab's Biomechatronics Group legs. Weighs 11.7 kg (26 lb).[23]
Parker Hannifin Indego Exoskeleton: an electrically powered system for paraplegics to walk with crutches.[24]
European Space Agency Series of ergonomic exoskeletons for robotic teleoperation. The EXARM, X-Arm-2 and SAM exoskeletons of the ESA Telerobotics & Haptics Laboratory.[25]
Ghent University exoskeleton: "WALL-X".[26] In 2013 this became the first exoskeleton to reduce the metabolic cost of walking below that of normal walking. This result was achieved by optimizing the controls based on study of the biomechanics of the human-exoskeleton interaction.[27]
Under development
European Commission's MINDWALKER:[28] a mind-controlled exoskeleton for disabled people
Vrije Universiteit Brussel's Altacro: an exoskeleton for disabled people[29]
ExoAtlet Med: an exoskeleton for disabled people with locomotion disorders[30]
Future Soldier 2030 Initiative by the US Army
The Vrije Universiteit Brussel - K.U.Leuven joint research program MIRAD:[31] an exoskeleton for people with limited mobility
Limitations and design issues
Engineers of powered exoskeletons face a number of large technological challenges to build a suit that is capable of quick and agile movements, yet is also safe to operate without extensive training.
Power supply
One of the largest problems facing designers of powered exoskeletons is the power supply.[32] There are currently few power sources of sufficient energy density to sustain a full-body powered exoskeleton for more than a few hours.
Non-rechargeable primary cells tend to have more energy density and store it longer than rechargeable secondary cells, but replacement cells must then be transported into the field for use when the primary cells are depleted, and these may be of a special and uncommon type. Rechargeable cells can be reused but may require transporting a charging system into the field, which either must recharge rapidly or the depleted cells need to be swapped out in the field and replaced with cells that have been slowly charging.[33]
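To put the energy-density problem in concrete terms, the short sketch below estimates the battery mass needed for a sustained mission, under purely illustrative assumptions (a 1.5 kW average draw, an 8-hour mission, a 250 Wh/kg lithium-ion pack, and 80% usable energy); none of these figures come from any specific exoskeleton.

```python
# Back-of-the-envelope battery sizing for a full-body powered exoskeleton.
# All numbers are illustrative assumptions, not figures for any real design.

average_draw_w = 1500                 # assumed average electrical load (W)
mission_hours = 8                     # assumed time between recharges/swaps
pack_energy_density_wh_per_kg = 250   # typical order of magnitude for Li-ion cells
usable_fraction = 0.8                 # depth-of-discharge / conversion losses (assumed)

energy_needed_wh = average_draw_w * mission_hours
pack_mass_kg = energy_needed_wh / (pack_energy_density_wh_per_kg * usable_fraction)

print(f"Energy required: {energy_needed_wh / 1000:.1f} kWh")
print(f"Battery mass at {pack_energy_density_wh_per_kg} Wh/kg: {pack_mass_kg:.0f} kg")
# Roughly 12 kWh and 60 kg of cells under these assumptions - a large fraction of
# the exoskeleton's own weight, which is why most research designs stay tethered.
```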
Internal combustion engine power supplies offer high energy output, but they also typically idle, or continue to operate at a low power level sufficient to keep the engine running, when not actively in use which continuously consumes fuel. Battery-based power sources are better at providing instantaneous and modulated power; stored chemical energy is conserved when load requirements cease. Engines which do not idle are possible, but require energy storage for a starting system capable of rapidly accelerating the engine to full operating speed, and the engine must be extremely reliable and never fail to begin running immediately.
Small and lightweight engines typically must operate at high speed to extract sufficient energy from a small engine cylinder volume, which both can be difficult to silence and induces vibrations into the overall system. Internal combustion engines can also get extremely hot, which may require additional weight from cooling systems or heat shielding.
Electrochemical fuel cells such as solid oxide fuel cells (SOFC) are also being considered as a power source since they can produce instantaneous energy like batteries and conserve the fuel source when not needed. They can also easily be refueled in the field with liquid fuels such as methanol. However they require high temperatures to function; 600 °C is considered a low operating temperature for SOFCs.
Most research designs are tethered to a much larger separate power source. For a powered exoskeleton that will not need to be used in completely standalone situations such as a battlefield soldier, this limitation may be acceptable, and the suit may be designed to be used with a permanent power umbilical.
Skeleton
(Section reference[34])
Initial exoskeleton experiments are commonly done using inexpensive and easy to mold materials such as steel and aluminum. However steel is heavy and the powered exoskeleton must work harder to overcome its own weight in order to assist the wearer, reducing efficiency. The aluminium alloys used are lightweight, but fail through fatigue quickly; it would be unacceptable for the exoskeleton to fail catastrophically in a high-load condition by "folding up" on itself and injuring the wearer.
As the design moves past the initial exploratory steps, the engineers move to progressively more expensive and strong but lightweight materials such as titanium, and use more complex component construction methods, such as molded carbon-fiber plates.
Actuators
The powerful but lightweight design issues are also true of the joint actuators. Standard hydraulic cylinders are powerful and capable of being precise, but they are also heavy due to the fluid-filled hoses and actuator cylinders, and the fluid has the potential to leak onto the user. Pneumatics are generally too unpredictable for precise movement since the compressed gas is springy, and the length of travel will vary with the gas compression and the reactive forces pushing against the actuator.
Pressurized hydraulic fluid leaks can be dangerous to humans. A jet squirting from a pinhole leak can penetrate skin at pressures as low as 100 PSI / 6.9 bar.[35] If the injected fluid is not surgically removed, gangrene and poisoning can occur.
Generally electronic servomotors are more efficient and power-dense, utilizing high-gauss permanent magnets and step-down gearing to provide high torque and responsive movement in a small package. Geared servomotors can also utilize electronic braking to hold in a steady position while consuming minimal power.
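The trade-off behind step-down gearing can be shown with a one-line calculation. The sketch below uses made-up motor figures (0.5 N·m at 6,000 rpm, a 100:1 gearbox at 85% efficiency) purely to illustrate why a small, fast motor plus gearing yields joint-level torque at human-scale joint speeds.

```python
# Illustrative torque/speed trade-off for a geared electric joint actuator.
# Motor and gearbox figures are assumptions chosen for the example only.

motor_torque_nm = 0.5        # continuous torque of a small brushless motor (assumed)
motor_speed_rpm = 6000       # motor speed (assumed)
gear_ratio = 100             # step-down ratio (assumed)
gear_efficiency = 0.85       # rough gearbox efficiency (assumed)

joint_torque_nm = motor_torque_nm * gear_ratio * gear_efficiency
joint_speed_rpm = motor_speed_rpm / gear_ratio

print(f"Joint torque: {joint_torque_nm:.1f} N*m")   # about 42.5 N*m at the joint
print(f"Joint speed:  {joint_speed_rpm:.0f} rpm")   # 60 rpm, i.e. one revolution per second
```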
Additionally, new series elastic actuators and other deformable actuators are being proposed for use in robotic exoskeletons based on the ideas of control of stiffness in human limbs.
Joint flexibility
Flexibility of the human anatomy is another design issue, one which also affects the design of unpowered hard-shell space suits. Several human joints such as the hips and shoulders are ball and socket joints, with the center of rotation inside the body. It is difficult for an exoskeleton to exactly match the motions of this ball joint using a series of external single-axis hinge points, limiting flexibility of the wearer.
A separate exterior ball joint can be used alongside the shoulder or hip, but this then forms a series of parallel rods in combination with the wearer's bones. As the external ball joint is rotated through its range of motion, the positional length of the knee/elbow joint will lengthen and shorten, causing joint misalignment with the wearer's body. This slip in suit alignment with the wearer can be permitted, or the suit limbs can be designed to lengthen and shorten under power assist as the wearer moves, to keep the knee/elbow joints in alignment.
A partial solution for more accurate free-axis movement is a hollow spherical ball joint that encloses the human joint, with the human joint as the center of rotation for the hollow sphere. Rotation around this joint may still be limited unless the spherical joint is composed of several plates that can either fan out or stack up onto themselves as the human ball joint moves through its full range of motion.
Spinal flexibility is another challenge since the spine is effectively a stack of limited-motion ball joints. There is no simple combination of external single-axis hinges that can easily match the full range of motion of the human spine. A chain of external ball joints behind the spine can perform a close approximation, though it is again the parallel-bar length problem. Leaning forward from the waist, the suit shoulder joints would press down into the wearer's body. Leaning back from the waist, the suit shoulder joints would lift off the wearer's body. Again, this alignment slop with the wearer's body can be permitted, or the suit can be designed to rapidly lengthen or shorten the exoskeleton spine under power assist as the wearer moves.
NASA AX-5 hard shell space suit
The NASA Ames research center experimental AX-5 hard-shell space suit (1988) had a flexibility rating of 95%, compared to the movements possible while not wearing the suit. It is composed of gasketed hard shell sections joined with free-rotating mechanical bearings that spin around as the person moves.
However, the free-rotating hard sections have no limit on rotation and can potentially move outside the bounds of joint limits. It requires high precision manufacturing of the bearing surfaces to prevent binding, and the bearings may jam if exposed to lunar dust.[36]
Power control and modulation
Control and modulation of excessive and unwanted movement is a third large problem. It is not enough to build a simple single-speed assist motor, with forward/hold/reverse position controls and no on-board computer control. Such a mechanism can be too fast for the user's desired motion, with the assisted motion overshooting the desired position. If the wearer's body is enclosed with simple contact surfaces that trigger suit motion, the overshoot can result in the wearer's body lagging behind the suit limb position, making contact with a position sensor that moves the exoskeleton in the opposite direction. This lagging of the wearer's body can lead to an uncontrolled high-speed oscillatory motion, and a powerful assist mechanism can batter or injure the operator unless shut down remotely. (An underdamped servo typically exhibits oscillations like this.)[37]
A single-speed assist mechanism which is slowed down to prevent oscillation is then restrictive on the agility of the wearer. Sudden unexpected movements such as tripping or being pushed over requires fast precise movements to recover and prevent falling over, but a slow assist mechanism may simply collapse and injure the user inside. (This is known as an overdamped servo.)[37]
Fast and accurate assistive positioning is typically done using a range of speeds controlled using computer position sensing of both the exoskeleton and the wearer, so that the assistive motion only moves as fast or as far as the motion of the wearer and does not overshoot or undershoot. (This is called a critically damped servo.)[37] This may involve rapidly accelerating and decelerating the motion of the suit to match the wearer, so that their limbs slightly press against the interior of the suit and then it moves out of the way to match the wearer's motion. The computer control also needs to be able to detect unwanted oscillatory motions and shut down in a safe manner if damage to the overall system occurs.
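The under-, over- and critically damped behaviours referred to above are the standard second-order servo responses. The following sketch simulates a simple PD-controlled joint with three hypothetical gain settings to show, respectively, the oscillating response, the fast non-overshooting response, and the sluggish response; the mass, gains and timings are arbitrary illustrative numbers, not parameters of any real exoskeleton controller.

```python
# Second-order servo responses for a PD-controlled joint (illustrative numbers only).

def simulate(kp, kd, mass=1.0, target=1.0, dt=0.001, t_end=3.0):
    """Euler-integrate m*x'' = kp*(target - x) - kd*x' and report final/peak position."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        a = (kp * (target - x) - kd * v) / mass   # PD control force -> acceleration
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return x, peak

for label, kp, kd in [
    ("underdamped (oscillates, overshoots)", 100.0, 2.0),
    ("critically damped (fast, no overshoot)", 100.0, 20.0),
    ("overdamped (sluggish)", 100.0, 80.0),
]:
    final, peak = simulate(kp, kd)
    print(f"{label}: final position {final:.3f}, peak {peak:.3f}")
```

With these gains the underdamped case overshoots the target by roughly 70%, the critically damped case settles quickly without overshoot, and the overdamped case has still not fully reached the target after three seconds, mirroring the three behaviours described in the text.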
Detection of unsafe/invalid motions
A fourth issue is detection and prevention of invalid or unsafe motions, which is managed by an on-board realtime computational Self-Collision Detection System.[38]
It would be unacceptable for an exoskeleton to be able to move in a manner that exceeds the range of motion of the human body and tear muscle ligaments or dislocate joints. This problem can be partially solved using designed limits on hinge motion, such as not allowing the knee or elbow joints to flex backwards onto themselves.
However, the wearer of a powered exoskeleton can additionally damage themselves or the suit by moving the hinge joints through a series of combined and otherwise valid movements which together cause the suit to collide with itself or the wearer.
A powered exoskeleton would need to be able to computationally track limb positions and limit movement so that the wearer does not casually injure themselves through unintended assistive motions, such as when coughing, sneezing, when startled, or if experiencing a sudden uncontrolled seizure or muscle spasm.
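A very simplified version of such a check is sketched below: before applying an assistive motion, the controller clamps each commanded joint angle to an anatomical limit and flags anything outside the allowed range. The joint names and limit values are hypothetical placeholders, not data from any real system, and a real self-collision check would also have to consider combinations of joint angles rather than each joint in isolation.

```python
# Minimal sketch of a joint-limit safety check before applying assistive motion.
# Joint names and limits are hypothetical, not anatomical reference data.

JOINT_LIMITS_DEG = {
    "knee_flexion": (0.0, 135.0),
    "elbow_flexion": (0.0, 145.0),
    "hip_flexion": (-20.0, 120.0),
}

def validate_command(command_deg: dict) -> dict:
    """Return a safe command: clamp each joint to its allowed range and warn on violations."""
    safe = {}
    for joint, angle in command_deg.items():
        lo, hi = JOINT_LIMITS_DEG[joint]
        clamped = min(max(angle, lo), hi)
        if clamped != angle:
            print(f"warning: {joint} command {angle:.1f} deg outside [{lo}, {hi}], clamped")
        safe[joint] = clamped
    return safe

# A sneeze-induced spike asking the knee to hyperextend gets clamped back to 0 degrees.
print(validate_command({"knee_flexion": -15.0, "elbow_flexion": 90.0}))
```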
Pinching and joint fouling
An exoskeleton is typically constructed of very strong and hard materials, while the human body is much softer than the alloys and hard plastics used in the exoskeleton. An exoskeleton typically cannot be worn directly in contact with bare skin due to the potential for skin pinching where the exoskeleton plates and servos slide across each other. Instead the wearer may be enclosed in a heavy fabric suit to protect them from joint pinch hazards.
Current exoskeleton joints themselves are also prone to environmental fouling from sand and grit, and may need protection from the elements to keep operating effectively. A traditional way of handling this is with seals and gaskets around rotating parts, but can also be accomplished by enclosing the exoskeleton mechanics in a tough fabric suit separate from the user, which functions as a protective "skin" for the exoskeleton. This enclosing suit around the exoskeleton can also protect the wearer from pinch hazards.
Adaptation to user size variations
Most exoskeletons pictured in this article show a fixed length distance between joints. But humans exhibit a wide range of physical size differences and skeletal bone lengths, so a one-size-fits-all fixed-size exoskeleton would not work. Although military use would generally involve only larger adult sizes, civilian use may extend across all size ranges, including physically disabled babies and small children.
There are several possible solutions to this problem:
A wide range of fixed-sized exoskeletons can be constructed, stored, and issued to each differently sized user. This is materially expensive due to the wide variety of different sizes of users, but may be feasible where only one person is ever expected to use the exoskeleton, such as when one is issued to a physically disabled person for their personal mobility. Exoskeletons in a wartime service would be custom sized to the user and not sharable, making it difficult to supply the wide range of repair parts needed for the many different possible model sizes.
The users can be required to be of a specific physical size in order to be issued an exoskeleton. Physical body size restrictions already occur in the military for jobs such as aircraft pilots, due to the problems of fitting seats and controls to very large and very small people.[39]
Adjustable-length exoskeleton limbs and frames can be constructed, allowing size flexibility across a range of users. Due to the large variety of potential user bone lengths, it may still be necessary to have several adjustable exoskeleton models each covering certain size ranges, such as one model only for people that are 5' - 7' tall.
A further difficulty is that not only is there variation in bone lengths, but also limb girth due to bone geometry, muscle build, fat, and any user clothing layering such as insulation for extreme cold or hot environments. An exoskeleton will generally need to fit the user's limb girth snugly so that their arms and legs are not loose inside and flopping around an oversized exoskeleton cavity, or so tight that the user's skin is lesioned from abrasion from a too-small exoskeleton cavity.
Again, this can be handled in a military environment by requiring certain degrees of muscle density and body fitness of the potential users, so that exoskeletons designed for a particular limb girth will fit the majority of soldiers. Many people would be excluded due to incompatibly thin or thick bodies, even if they are within the correct height range.
A rigid shell exoskeleton may be able to use an adjustable suspension harness within the shell. The rigid outer shell still imposes a maximum girth but may be able to accommodate many smaller girths inside.
A fully enclosing flexible armored exoskeleton using small overlapping sectioned sliding plates could dynamically expand and contract the overlap distance of its many outer plates, both to adapt to the wearer's limb length and girth, and as the plates move in coordination with the wearer's body in general use.
Popular culture
Powered armor has appeared in a wide variety of media, beginning with E. E. Smith's Lensman series in 1937. Since then, it has featured in science fiction movies and literature, comic books, video games, and tabletop role-playing games. One of the most famous early appearances was in Robert A. Heinlein's 1959 novel Starship Troopers, which can be seen as spawning the entire subgenre concept of military powered armor.
In some powered armor, the suit is not much larger than a human. These depictions can be described as a battlesuit with mechanical and electronic mechanisms designed to augment the wearer's abilities.
In the Fallout series of video games, powered armor is portrayed as a hulking armor-plated mechanism, offering nearly complete protection against ballistic weaponry and advanced resistance to directed energy weapons of various types; the armor worn by the Marvel Comics characters Iron Man and Doctor Doom also fits this profile.
Other examples of powered armor suits include the 10-foot tall MJOLNIR suits worn by the Spartan supersoldiers featured in the Halo video game series, the symbiotic 'nanosuits' of the Crysis series, and the Space Marine and Tau battlesuits of the Warhammer 40,000 series, which enhance the soldier's strength, protection, senses, and communications.[40]
The PlanetSide computer game series features battlesuits called MAXes (Mechanized Assault Exo-Suit) in every release of the game, as well as larger bipedal vehicles (in PlanetSide: Aftershock; they are currently not implemented in PlanetSide 2), which fulfill a tank-like role rather than being enhanced infantry units. These kinds of vehicles are usually not referred to as battlesuits but rather as mecha, from the Japanese "メカ" (meka), an adaptation of the English "mechanical". Popular game representations include such titles as Battletech, Steel Battalion, and more recently, Hawken and Titanfall.
The line between mecha and powered armor is necessarily vague. The usual distinction is that powered armor is form-fitting and worn, whereas mecha have cockpits and are driven by pilots.[41] Some define the distinction as powered exoskeletons augmenting the user's natural abilities, while mecha replace them entirely. However, the line between the two can be difficult to determine at times, especially considering that force feedback systems are often included for delicate maneuvers. Even in a larger mecha meant to be driven like a walking tank rather than worn, the control system could be cybernetic or based on motion capture. Certain works allow powered armor to be integrated into mecha. In part 2 of Batman: The Dark Knight Returns, Batman dons a powered exoskeleton for his fight with Superman, which allows him to lift the Batmobile one-handed with ease and fight on equal terms with Superman. In The LEGO Movie, Emmet builds a large exoskeleton mech and pilots it through the city. In Ninjago, Cole and Kai each have mechs. In BIONICLE, the Toa Mata wear suits of "Exo Toa Armor" to battle the Bohrok Queens. Exoskeletons are also used in the game Call of Duty: Advanced Warfare.
Powered exoskeletons are also commonly found in science fiction films. Examples include Edge of Tomorrow and Elysium.
History of nuclear weapons
A nuclear fireball lights up the night in the United States nuclear test Upshot-Knothole Badger on April 18, 1953.
Nuclear weapons possess enormous destructive power derived from nuclear fission or combined fission and fusion reactions. Starting with scientific breakthroughs made during the 1930s, the United States, the United Kingdom and Canada collaborated during World War II in what was called the Manhattan Project to counter the suspected Nazi German atomic bomb project. In August 1945, two fission bombs were dropped on Japan, ending the Pacific War. The Soviet Union started development shortly thereafter with their own atomic bomb project, and not long after that both countries developed even more powerful fusion weapons known as "hydrogen bombs."
Physics and politics in the 1930s and 1940s
See also: Birth of Modern Physics
In nuclear fission, the nucleus of a fissile atom (in this case, enriched uranium) absorbs a thermal neutron, becomes unstable and splits into two new atoms, releasing some energy and between one and three new neutrons, which can perpetuate the process.
In the first decades of the 20th century, physics was revolutionised with developments in the understanding of the nature of atoms. In 1898, Pierre and Marie Curie discovered that pitchblende, an ore of uranium, contained a substance—which they named radium—that emitted large amounts of radioactivity. Ernest Rutherford and Frederick Soddy identified that atoms were breaking down and turning into different elements. Hopes were raised among scientists and laymen that the elements around us could contain tremendous amounts of unseen energy, waiting to be harnessed.
H. G. Wells was inspired to write about atomic weapons in a 1914 novel, The World Set Free, which appeared shortly before the First World War. In a 1924 article, Winston Churchill speculated about the possible military implications: "Might not a bomb no bigger than an orange be found to possess a secret power to destroy a whole block of buildings—nay to concentrate the force of a thousand tons of cordite and blast a township at a stroke?"[1]
In January 1933, Adolf Hitler was appointed Chancellor of Germany and it quickly became unsafe for Jewish scientists to remain in the country. Leó Szilárd fled to London where he proposed, and in 1934 patented, the idea of a nuclear chain reaction via neutrons. The patent also introduced the term critical mass to describe the minimum amount of material required to sustain the chain reaction and its potential to cause an explosion. (British patent 630,726.) He subsequently assigned the patent to the British Admiralty so that it could be covered by the Official Secrets Act.[2] In this academic sense, Szilárd can be regarded as the father of the atomic bomb. Also in 1934, Irène and Frédéric Joliot-Curie discovered that artificial radioactivity could be induced in stable elements by bombarding them with alpha particles; Enrico Fermi reported similar results when bombarding uranium with neutrons.
In December 1938, Otto Hahn and Fritz Strassmann sent a manuscript to Naturwissenschaften reporting that they had detected the element barium after bombarding uranium with neutrons.[3] Lise Meitner and her nephew Otto Robert Frisch correctly interpreted these results as being due to the splitting of the uranium atom. (Frisch confirmed this experimentally on January 13, 1939.[4]) They gave the process the name "fission" because of its similarity to the splitting of a cell into two new cells.[5] Even before it was published, news of Meitner’s and Frisch’s interpretation crossed the Atlantic.[6] Scientists at Columbia University decided to replicate the experiment and on January 25, 1939, conducted the first nuclear fission experiment in the United States[7] in the basement of Pupin Hall.[8] The following year, they identified the active component of uranium as being the rare isotope uranium-235.[9]
Uranium appears in nature primarily in two isotopes: uranium-238 and uranium-235. When the nucleus of uranium-235 absorbs a neutron, it undergoes nuclear fission, releasing energy and, on average, 2.5 neutrons. Because uranium-235 releases more neutrons than it absorbs, it can support a chain reaction and so is described as fissile. Uranium-238, on the other hand, is not fissile as it does not normally undergo fission when it absorbs a neutron.
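The significance of releasing more neutrons than are absorbed can be seen from a toy generation-by-generation count. The sketch below is purely illustrative: it assumes a hypothetical effective multiplication factor k (the average number of neutrons from each fission that go on to cause another fission) and simply raises it to the power of the generation number; the chosen values of k are not properties of any material.

```python
# Toy illustration of neutron multiplication in a chain reaction.
# k is the effective multiplication factor: neutrons per fission that go on to
# cause another fission. The values below are illustrative, not material data.

def neutron_population(k: float, generations: int, start: int = 1) -> float:
    """Neutron count after a number of generations: N = start * k**generations."""
    return start * k ** generations

for k in (0.9, 1.0, 2.0):
    print(f"k = {k}: after 50 generations, N = {neutron_population(k, 50):.3g}")
# k < 1 dies out (subcritical), k = 1 stays steady (critical, as in a reactor),
# k > 1 grows exponentially (supercritical).
```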
By the time Nazi Germany invaded Poland in 1939, beginning World War II, many of Europe's top scientists had already fled the imminent conflict. Physicists on both sides were well aware of the possibility of utilizing nuclear fission as a weapon, but no one was quite sure how it could be done. In August 1939, concerned that Germany might have its own project to develop fission-based weapons, Albert Einstein signed a letter to U.S. President Franklin D. Roosevelt warning him of the threat.[10] Roosevelt responded by setting up the Uranium Committee under Lyman James Briggs but, with little initial funding ($6,000), progress was slow. It was not until the Japanese attack on Pearl Harbor in December, 1941, that the U.S. decided to commit the necessary resources.[11]
Organized research first began in Britain as part of the Tube Alloys project. The Maud Committee was set up following the work of Frisch and Rudolf Peierls, who calculated uranium-235's critical mass and found it to be much smaller than previously thought, which meant that a deliverable bomb should be possible.[12] In the February 1940 Frisch–Peierls memorandum they stated that: "The energy liberated in the explosion of such a super-bomb...will, for an instant, produce a temperature comparable to that of the interior of the sun. The blast from such an explosion would destroy life in a wide area. The size of this area is difficult to estimate, but it will probably cover the centre of a big city."
Edgar Sengier, a director of Shinkolobwe Mine which produced by far the highest quality uranium ore in the world, had become aware of uranium's possible use in a bomb. In late 1940, fearful of its seizure by the Germans, he shipped the mine's entire stockpile of ore to a warehouse on Staten Island.[13]
For 18 months British research outpaced the American, but by mid-1942 it became apparent that the industrial effort required was beyond Britain's already stretched wartime economy.[14]:204 In September 1942, General Leslie Groves was appointed to lead the U.S. project, which became known as the Manhattan Project. Two of his first acts were to obtain authorization to assign the highest priority AAA rating on necessary procurements, and to put in train the purchase of all 1,250 tons of the Shinkolobwe ore.[13][15] The Tube Alloys project was quickly overtaken by the U.S. effort,[14] and after Roosevelt and Churchill signed the Quebec Agreement in 1943, it was relocated and amalgamated into the Manhattan Project.
From Los Alamos to Hiroshima[edit]
Main article: Manhattan Project
UC Berkeley physicist J. Robert Oppenheimer led the Allied scientific effort at Los Alamos.
Proportions of uranium-238 (blue) and uranium-235 (red) found naturally versus grades that are enriched by separating the two isotopes atom-by-atom using various methods that all require a massive investment in time and money.
With a scientific team led by J. Robert Oppenheimer, the Manhattan Project brought together some of the top scientific minds of the day, including many exiles from Europe, with the production power of American industry for the goal of producing fission-based explosive devices before Germany. Britain and the U.S. agreed to pool their resources and information for the project, but the other Allied power, the Soviet Union (USSR), was not informed. The U.S. made an unprecedented investment in the project, which at the time was the largest industrial enterprise ever seen,[14] spread across more than 30 sites in the U.S. and Canada. Scientific development was centralized in a secret laboratory at Los Alamos.
For a fission weapon to operate, there must be sufficient fissile material to support a chain reaction, a critical mass. To separate the fissile uranium-235 isotope from the non-fissile uranium-238, two methods were developed which took advantage of the fact that uranium-238 has a slightly greater atomic mass: electromagnetic separation and gaseous diffusion. Another secret site was erected at rural Oak Ridge, Tennessee, for the large-scale production and purification of the rare isotope, which required considerable investment. At the time, K-25, one of the Oak Ridge facilities, was the world's largest factory under one roof. The Oak Ridge site employed tens of thousands of people at its peak, most of whom had no idea what they were working on.
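To give a sense of why the diffusion plant had to be so large, the ideal separation factor of a single gaseous-diffusion stage for uranium hexafluoride (the process gas used in gaseous diffusion, a detail not spelled out above) depends only on the ratio of molecular masses; this is a standard result rather than a figure taken from this article:

\[
\alpha = \sqrt{\frac{M(^{238}\mathrm{UF}_6)}{M(^{235}\mathrm{UF}_6)}} = \sqrt{\frac{352}{349}} \approx 1.0043,
\]

so each stage enriches the gas by well under one percent, and thousands of stages had to be connected in cascade at K-25.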
Electromagnetic U235 separation plant at Oak Ridge, Tenn. Massive new physics machines were assembled at secret installations around the United States for the production of enriched uranium and plutonium.
Although uranium-238 cannot be used for the initial stage of an atomic bomb, when it absorbs a neutron it becomes uranium-239, which decays into neptunium-239 and finally the relatively stable plutonium-239, an element that does not occur naturally on Earth in significant quantities but is fissile like uranium-235. After Fermi achieved the world's first sustained and controlled nuclear chain reaction with the creation of the first atomic pile, massive reactors were secretly constructed at what is now known as the Hanford Site to transform uranium-238 into plutonium for a bomb.
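The breeding sequence described above can be written out explicitly; the half-lives are standard nuclear data, quoted approximately, rather than figures from this article:

\[
^{238}\mathrm{U} + n \longrightarrow {}^{239}\mathrm{U} \xrightarrow{\ \beta^-,\ \approx 23.5\ \text{min}\ } {}^{239}\mathrm{Np} \xrightarrow{\ \beta^-,\ \approx 2.4\ \text{days}\ } {}^{239}\mathrm{Pu}.
\]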
The simplest form of nuclear weapon is a gun-type fission weapon, where a sub-critical mass would be shot at another sub-critical mass. The result would be a super-critical mass and an uncontrolled chain reaction that would create the desired explosion. The weapons envisaged in 1942 were the two gun-type weapons, Little Boy (uranium) and Thin Man (plutonium), and the Fat Man plutonium implosion bomb.
In early 1943 Oppenheimer determined that two projects should proceed forward: the Thin Man project (plutonium gun) and the Fat Man project (plutonium implosion). The plutonium gun was to receive the bulk of the research effort, as it was the project with the most uncertainty involved. It was assumed that the uranium gun-type bomb could then be adapted from it.
In December 1943 the British mission of 19 scientists arrived in Los Alamos. Hans Bethe became head of the Theoretical Division.
The two fission bomb assembly methods.
In April 1944 it was found by Emilio Segrè that the plutonium-239 produced by the Hanford reactors had too high a level of background neutron radiation, and underwent spontaneous fission to a very small extent, due to the unexpected presence of plutonium-240 impurities. If such plutonium were used in a gun-type design, the chain reaction would start in the split second before the critical mass was fully assembled, blowing the weapon apart with a much lower yield than expected, in what is known as a fizzle.
As a result, development of Fat Man was given high priority. Chemical explosives were used to implode a sub-critical sphere of plutonium, thus increasing its density and making it into a critical mass. The difficulties with implosion centered on the problem of making the chemical explosives deliver a perfectly uniform shock wave upon the plutonium sphere— if it were even slightly asymmetric, the weapon would fizzle. This problem was solved by the use of explosive lenses which would focus the blast waves inside the imploding sphere, akin to the way in which an optical lens focuses light rays.[16]
After D-Day, General Groves ordered a team of scientists to follow eastward-moving victorious Allied troops into Europe to assess the status of the German nuclear program (and to prevent the westward-moving Soviets from gaining any materials or scientific manpower). They concluded that, while Germany had an atomic bomb program headed by Werner Heisenberg, the government had not made a significant investment in the project, and it had been nowhere near success.
Historians claim to have found a rough schematic showing a Nazi nuclear bomb.[17] In March 1945, a German scientific team was directed by the physicist Kurt Diebner to develop a primitive nuclear device in Ohrdruf, Thuringia.[17][18] Last ditch research was conducted in an experimental nuclear reactor at Haigerloch.
On April 12, after Roosevelt's death, Vice-President Harry S Truman assumed the presidency. At the time of the unconditional surrender of Germany on May 8, 1945, the Manhattan Project was still months away from producing a working weapon.
Because of the difficulties in making a working plutonium bomb, it was decided that there should be a test of the weapon. On July 16, 1945, in the desert north of Alamogordo, New Mexico, the first nuclear test took place, code-named "Trinity", using a device nicknamed "the gadget." The test, of a plutonium implosion-type device, released energy equivalent to 19 kilotons of TNT, far more powerful than any weapon ever used before. The news of the test's success was rushed to Truman at the Potsdam Conference, where Churchill was briefed and Soviet Premier Joseph Stalin was informed of the new weapon. On July 26, the Potsdam Declaration was issued containing an ultimatum for Japan: either surrender or suffer "prompt and utter destruction", although nuclear weapons were not mentioned.[14]
The atomic bombings of Hiroshima and Nagasaki killed tens of thousands of Japanese combatants and non-combatants and destroyed dozens of military bases and supply depots as well as hundreds (or thousands) of factories producing war materials.
After hearing arguments from scientists and military officers over the possible use of nuclear weapons against Japan (though some recommended using them as demonstrations in unpopulated areas, most recommended using them against built up targets, a euphemistic term for populated cities), Truman ordered the use of the weapons on Japanese cities, hoping it would send a strong message that would end in the capitulation of the Japanese leadership and avoid a lengthy invasion of the islands. On May 10–11, 1945, the Target Committee at Los Alamos, led by Oppenheimer, recommended Kyoto, Hiroshima, Yokohama, and Kokura as possible targets. Concerns about Kyoto's cultural heritage led to it being replaced by Nagasaki.
Hiroshima: burns from the intense thermal effect of the atomic bomb.
On August 6, 1945, a uranium-based weapon, Little Boy, was detonated above the Japanese city of Hiroshima. Three days later, a plutonium-based weapon, Fat Man, was detonated above the Japanese city of Nagasaki. The atomic bombing raids killed at least one hundred thousand Japanese civilians and military personnel outright, with the heat, radiation, and blast effects. Many tens of thousands would later die of radiation sickness and related cancers.[19][20] Truman promised a "rain of ruin" if Japan did not surrender immediately, threatening to systematically eliminate their ability to wage war.[21] On August 15, Emperor Hirohito announced Japan's surrender.[22]
Soviet atomic bomb project[edit]
Main article: Soviet atomic bomb project
The Soviet Union was not invited to share in the new weapons developed by the United States and the other Allies. During the war, information had been pouring in from a number of volunteer spies involved with the Manhattan Project (known in Soviet cables under the code-name of Enormoz), and the Soviet nuclear physicist Igor Kurchatov was carefully watching the Allied weapons development. It came as no surprise to Stalin when Truman informed him at the Potsdam Conference that he had a "powerful new weapon." Truman was shocked at Stalin's lack of interest.
The Soviet spies in the U.S. project were all volunteers and none were Soviet citizens. One of the most valuable, Klaus Fuchs, was a German émigré theoretical physicist who had been part of the early British nuclear efforts and the UK mission to Los Alamos. Fuchs had been intimately involved in the development of the implosion weapon, and passed on detailed cross-sections of the Trinity device to his Soviet contacts. Other Los Alamos spies—none of whom knew each other—included Theodore Hall and David Greenglass. The information was kept but not acted upon, as the Soviet Union was still too busy fighting the war in Europe to devote resources to this new project.
In the years immediately after World War II, the issue of who should control atomic weapons became a major international point of contention. Many of the Los Alamos scientists who had built the bomb began to call for "international control of atomic energy," often advocating either control by transnational organizations or the purposeful distribution of weapons information to all superpowers. Because of a deep distrust of the intentions of the Soviet Union, both in postwar Europe and in general, the policy-makers of the United States instead attempted to secure an American nuclear monopoly.
A half-hearted plan for international control was proposed at the newly formed United Nations by Bernard Baruch (the Baruch Plan), but it was clear to both American commentators and the Soviets that it was an attempt primarily to stymie Soviet nuclear efforts. The Soviets vetoed the plan, effectively ending any immediate postwar negotiations on atomic energy, and made overtures towards banning the use of atomic weapons in general.
The Soviets had put their full industrial might and manpower into the development of their own atomic weapons. The initial problem for the Soviets was primarily one of resources—they had not scouted out uranium resources in the Soviet Union and the U.S. had made deals to monopolise the largest known (and high purity) reserves in the Belgian Congo. The USSR used penal labour to mine the old deposits in Czechoslovakia—now an area under their control—and searched for other domestic deposits (which were eventually found).
Two days after the bombing of Nagasaki, the U.S. government released an official technical history of the Manhattan Project, authored by Princeton physicist Henry DeWolf Smyth, known colloquially as the Smyth Report. The sanitized summary of the wartime effort focused primarily on the production facilities and scale of investment, written in part to justify the wartime expenditure to the American public.
The Soviet program, under the suspicious watch of former NKVD chief Lavrenty Beria (a participant and victor in Stalin's Great Purge of the 1930s), would use the Report as a blueprint, seeking to duplicate as much as possible the American effort. The "secret cities" used for the Soviet equivalents of Hanford and Oak Ridge literally vanished from the maps for decades to come.
At the Soviet equivalent of Los Alamos, Arzamas-16, physicist Yuli Khariton led the scientific effort to develop the weapon. Beria distrusted his scientists, however, and he distrusted the carefully collected espionage information. As such, Beria assigned multiple teams of scientists to the same task without informing each team of the other's existence. If they arrived at different conclusions, Beria would bring them together for the first time and have them debate with their newfound counterparts. Beria used the espionage information as a way to double-check the progress of his scientists, and in his effort for duplication of the American project even rejected more efficient bomb designs in favor of ones that more closely mimicked the tried-and-true Fat Man bomb used by the U.S. against Nagasaki.
Working under a stubborn and scientifically ignorant administrator, the Soviet scientists struggled on. On August 29, 1949, the effort bore fruit when the USSR tested its first fission bomb, dubbed "Joe-1" by the U.S., years ahead of American predictions. The news of the first Soviet bomb was announced to the world first by the United States, which had detected the nuclear fallout drifting from the test site in Kazakhstan.
The loss of the American monopoly on nuclear weapons marked the first tit-for-tat of the nuclear arms race. The response in the U.S. was one of apprehension, fear, and scapegoating, which would lead eventually into the Red-baiting tactics of McCarthyism. Yet recent information from declassified Venona intercepts and the opening of the KGB archives after the fall of the Soviet Union shows that the USSR had useful spies that helped their program, although none were identified by McCarthy.[citation needed] Before this, though, President Truman announced a decision to begin a crash program that would develop a far more powerful weapon than those the U.S. used against Japan: the hydrogen bomb.
American developments after World War II[edit]
In 1946 Congress established the civilian Atomic Energy Commission (AEC) to take over the development of nuclear weapons from the military, and to develop nuclear power. The AEC made use of many private companies in processing uranium and thorium and in other urgent tasks related to the development of bombs. Many of these companies had very lax safety measures and employees were sometimes exposed to radiation levels far above what was allowed then or now.[23] In 1974, the Formerly Utilized Sites Remedial Action Program (FUSRAP) of the Army Corps of Engineers was set up to deal with contaminated sites left over from these operations.[24]
The first thermonuclear weapons[edit]
Main article: History of the Teller-Ulam design
Hungarian physicist Edward Teller toiled for years trying to discover a way to make a fusion bomb.
The notion of using a fission weapon to ignite a process of nuclear fusion can be dated back to 1942. At the first major theoretical conference on the development of an atomic bomb hosted by J. Robert Oppenheimer at the University of California, Berkeley, participant Edward Teller directed the majority of the discussion towards Enrico Fermi's idea of a "Super" bomb that would use the same reactions that powered the Sun itself.
It was thought at the time that a fission weapon would be quite simple to develop and that perhaps work on a hydrogen bomb (thermonuclear weapon) would be possible to complete before the end of the Second World War. However, in reality the problem of building a regular atomic bomb was large enough to preoccupy the scientists for the next few years, let alone the more speculative "Super" bomb. Only Teller continued working on the project—against the will of project leaders Oppenheimer and Hans Bethe.
After the atomic bombings of Japan, many scientists at Los Alamos rebelled against the notion of creating a weapon thousands of times more powerful than the first atomic bombs. For the scientists the question was in part technical—the weapon design was still quite uncertain and unworkable—and in part moral: such a weapon, they argued, could only be used against large civilian populations, and could thus only be used as a weapon of genocide.
Many scientists, such as Bethe, urged that the United States should not develop such weapons and set an example towards the Soviet Union. Promoters of the weapon, including Teller, Ernest Lawrence, and Luis Alvarez, argued that such a development was inevitable, and to deny such protection to the people of the United States—especially when the Soviet Union was likely to create such a weapon themselves—was itself an immoral and unwise act.
Oppenheimer, who was now head of the General Advisory Committee of the successor to the Manhattan Project, the Atomic Energy Commission, presided over a recommendation against the development of the weapon. The reasons were in part because the success of the technology seemed limited at the time (and not worth the investment of resources to confirm whether this was so), and because Oppenheimer believed that the atomic forces of the United States would be more effective if they consisted of many large fission weapons (of which multiple bombs could be dropped on the same targets) rather than the large and unwieldy super bombs, for which there was a relatively limited number of targets of sufficient size to warrant such a development.
Furthermore, were such weapons developed by both the U.S. and the USSR, they would be more effectively used against the U.S. than by it, as the U.S. had far more regions of dense industrial and civilian activity as targets for large weapons than the Soviet Union.
The "Mike" shot in 1952 inaugurated the age of fusion weapons.
In the end, President Truman made the final decision, looking for a proper response to the first Soviet atomic bomb test in 1949. On January 31, 1950, Truman announced a crash program to develop the hydrogen (fusion) bomb. At this point, however, the exact mechanism was still not known: the classical hydrogen bomb, whereby the heat of the fission bomb would be used to ignite the fusion material, seemed highly unworkable. However, an insight by Los Alamos mathematician Stanislaw Ulam showed that the fission bomb and the fusion fuel could be placed in separate parts of the bomb, and that radiation from the fission bomb could be used to compress the fusion material before igniting it.
Teller pushed the notion further, and used the results of the "George" test (a boosted-fission device using a small amount of fusion fuel to boost the yield of a fission bomb) to confirm the fusion of heavy hydrogen isotopes before preparing for the first true multi-stage, Teller-Ulam hydrogen bomb test. Many scientists initially against the weapon, such as Oppenheimer and Bethe, changed their previous opinions, seeing the development as being unstoppable.
The first fusion bomb was tested by the United States in Operation Ivy on November 1, 1952, on Elugelab Island in the Enewetak (or Eniwetok) Atoll of the Marshall Islands, code-named "Mike." Mike used liquid deuterium as its fusion fuel and a large fission weapon as its trigger. The device was a prototype design and not a deliverable weapon: standing over 20 ft (6 m) high and weighing at least 140,000 lb (64 t) (its refrigeration equipment added a further 24,000 lb (11,000 kg)), it could not have been dropped from even the largest planes.
Its explosion yielded energy equivalent to 10.4 megatons of TNT—over 450 times the power of the bomb dropped onto Nagasaki— and obliterated Elugelab, leaving an underwater crater 6240 ft (1.9 km) wide and 164 ft (50 m) deep where the island had once been. Truman had initially tried to create a media blackout about the test—hoping it would not become an issue in the upcoming presidential election—but on January 7, 1953, Truman announced the development of the hydrogen bomb to the world as hints and speculations of it were already beginning to emerge in the press.
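As a rough check of the comparison above, taking the Nagasaki bomb's yield to be about 21 kilotons (a commonly cited figure that the article itself does not state):

\[
\frac{10.4\ \text{Mt}}{21\ \text{kt}} = \frac{10{,}400\ \text{kt}}{21\ \text{kt}} \approx 500,
\]

which is consistent with the "over 450 times" figure quoted above.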
Not to be outdone, the Soviet Union exploded its first thermonuclear device, designed by the physicist Andrei Sakharov, on August 12, 1953, labeled "Joe-4" by the West. This created concern within the U.S. government and military, because, unlike Mike, the Soviet device was a deliverable weapon, which the U.S. did not yet have. This first device, though, was arguably not a true hydrogen bomb, and could only reach explosive yields in the hundreds of kilotons (never reaching the megaton range of a staged weapon). Still, it was a powerful propaganda tool for the Soviet Union, and the technical differences were largely opaque to the American public and politicians.
Following the Mike blast by less than a year, Joe-4 seemed to validate claims that the bombs were inevitable and vindicate those who had supported the development of the fusion program. Coming at the height of McCarthyism, it had a pronounced effect on the security hearings in early 1954, which revoked former Los Alamos director Robert Oppenheimer's security clearance on the grounds that he was unreliable, had not supported the American hydrogen bomb program, and had long-standing left-wing ties dating to the 1930s. Edward Teller participated in the hearing as the only major scientist to testify against Oppenheimer, a stance that resulted in Teller's virtual expulsion from the physics community.
On March 1, 1954, the U.S. detonated its first practical thermonuclear weapon (which used isotopes of lithium as its fusion fuel), known as the "Shrimp" device of the Castle Bravo test, at Bikini Atoll, Marshall Islands. The device yielded 15 megatons, more than twice its expected yield, and became the worst radiological disaster in U.S. history. The combination of the unexpectedly large blast and poor weather conditions caused a cloud of radioactive nuclear fallout to contaminate over 7,000 square miles (18,000 km2). A total of 239 Marshallese islanders and 28 Americans were exposed to significant amounts of radiation, resulting in elevated levels of cancer and birth defects in the years to come.[25]
The crew of the Japanese tuna-fishing boat Lucky Dragon 5, who had been fishing just outside the exclusion zone, returned to port suffering from radiation sickness and skin burns; one crew member was terminally ill. Efforts were made to recover the cargo of contaminated fish, but at least two large tuna were probably sold and eaten. A further 75 tons of tuna caught between March and December were found to be unfit for human consumption. When the crew member died and the full results of the contamination were made public by the U.S., Japanese concerns about the hazards of radiation were reignited.[26]
The hydrogen bomb age had a profound effect on the thoughts of nuclear war in the popular and military mind. With only fission bombs, nuclear war was something that possibly could be limited. Because fission bombs were dropped by planes and could destroy only the most built-up areas of major cities, it was possible for many to regard them as a technological extension of large-scale conventional bombing, such as the extensive firebombing of Japan and Germany during World War II. Proponents brushed aside as grave exaggeration claims that such weapons could lead to worldwide death or harm.
Even in the decades before fission weapons, there had been speculation about the possibility for human beings to end all life on the planet, either by accident or purposeful maliciousness—but technology had not provided the capacity for such action. The great power of hydrogen bombs made world-wide annihilation possible.
The Castle Bravo incident itself raised a number of questions about the survivability of a nuclear war. Government scientists in both the U.S. and the USSR had insisted that fusion weapons, unlike fission weapons, were cleaner, as fusion reactions did not produce the dangerously radioactive by-products of fission reactions. While technically true, this hid a more gruesome point: the last stage of a multi-staged hydrogen bomb often used the neutrons produced by the fusion reactions to induce fissioning in a jacket of natural uranium, and provided around half of the yield of the device itself.
This fission stage made fusion weapons considerably dirtier than they were made out to be. This was evident in the towering cloud of deadly fallout that followed the Bravo test. When the Soviet Union tested its first megaton device in 1955, the possibility of a limited nuclear war seemed even more remote in the public and political mind. Even cities and countries that were not direct targets would suffer fallout contamination. Extremely harmful fission products would disperse via normal weather patterns and embed in soil and water around the planet.
Speculation began to run towards what fallout and dust from a full-scale nuclear exchange would do to the world as a whole, rather than just cities and countries directly involved. In this way, the fate of the world was now tied to the fate of the bomb-wielding superpowers.
Deterrence and brinkmanship[edit]
Main articles: Nuclear testing, Nuclear strategy and Nuclear warfare
November 1951 nuclear test at the Nevada Test Site, from Operation Buster, with a yield of 21 kilotons. It was the first U.S. nuclear field exercise conducted on land; troops shown are 6 mi (9.7 km) from the blast.
Throughout the 1950s and the early 1960s the U.S. and the USSR both endeavored, in a tit-for-tat approach, to prevent the other power from acquiring nuclear supremacy. This had massive political and cultural effects during the Cold War.
The first atomic bombs dropped on Hiroshima and Nagasaki were large, custom-made devices, requiring highly trained personnel for their arming and deployment. They could be dropped only from the largest bomber planes—at the time the B-29 Superfortress—and each plane could only carry a single bomb in its hold.
The first hydrogen bombs were similarly massive and complicated. This ratio of one plane to one bomb was still fairly impressive in comparison with conventional, non-nuclear weapons, but against other nuclear-armed countries it was considered a grave danger. In the immediate postwar years, the U.S. expended much effort on making the bombs "G.I.-proof"—capable of being used and deployed by members of the U.S. Army, rather than Nobel Prize–winning scientists. In the 1950s, the U.S. undertook a nuclear testing program to improve the nuclear arsenal.
Starting in 1951, the Nevada Test Site (in the Nevada desert) became the primary location for all U.S. nuclear testing (in the USSR, Semipalatinsk Test Site in Kazakhstan served a similar role). Tests were divided into two primary categories: "weapons related" (verifying that a new weapon worked or looking at exactly how it worked) and "weapons effects" (looking at how weapons behaved under various conditions or how structures behaved when subjected to weapons).
In the beginning, almost all nuclear tests were either atmospheric (conducted above ground, in the atmosphere) or underwater (such as some of the tests done in the Marshall Islands). Testing was used as a sign of both national and technological strength, but also raised questions about the safety of the tests, which released nuclear fallout into the atmosphere (most dramatically with the Castle Bravo test in 1954, but in more limited amounts with almost all atmospheric nuclear testing).
Because testing was seen as a sign of technological development (the ability to design usable weapons without some form of testing was considered dubious), halts on testing were often called for as stand-ins for halts in the nuclear arms race itself, and many prominent scientists and statesmen lobbied for a ban on nuclear testing. In 1958, the U.S., USSR, and the United Kingdom (a new nuclear power) declared a temporary testing moratorium for both political and health reasons, but by 1961 the Soviet Union had broken the moratorium and both the USSR and the U.S. began testing with great frequency.
As a show of political strength, the Soviet Union tested the largest-ever nuclear weapon in October 1961, the massive Tsar Bomba, which was tested in a reduced state with a yield of around 50 megatons—in its full state it was estimated to have been around 100 Mt. The weapon was largely impractical for actual military use, but was hot enough to induce third-degree burns at a distance of 62 mi (100 km). In its full, dirty design it would have increased the amount of worldwide fallout since 1945 by 25%.
In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground tests.
Most tests were considerably more modest, and worked for direct technical purposes as well as their potential political overtones. Weapons improvements took on two primary forms. One was an increase in efficiency and power, and within only a few years fission bombs were developed that were many times more powerful than the ones created during World War II. The other was a program of miniaturization, reducing the size of the nuclear weapons.
Smaller bombs meant that bombers could carry more of them, and also that they could be carried on the new generation of rockets in development in the 1950s and 1960s. U.S. rocket science received a large boost in the postwar years, largely with the help of engineers acquired from the Nazi rocketry program. These included scientists such as Wernher von Braun, who had helped design the V-2 rockets the Nazis launched across the English Channel. An American program, Project Paperclip, had endeavored to move German scientists into American hands (and away from Soviet hands) and put them to work for the U.S.
Weapons improvement[edit]
The introduction of nuclear-tipped rockets, like the MGR-1 Honest John, reflected a change in both nuclear technology and strategy.
Long-range bomber aircraft, such as the B-52 Stratofortress, allowed deployment of a wide range of strategic nuclear weapons.
Early nuclear-tipped rockets—such as the MGR-1 Honest John, first deployed by the U.S. in 1953—were surface-to-surface missiles with relatively short ranges (around 15 mi/25 km maximum) and yields around twice those of the first fission weapons. The limited range meant they could only be used in certain types of military situations. U.S. rockets could not, for example, threaten Moscow with an immediate strike, and could only be used as tactical weapons (that is, in small-scale, localized engagements).
Strategic weapons—weapons that could threaten an entire country—relied, for the time being, on long-range bombers that could penetrate deep into enemy territory. In the U.S., this requirement led, in 1946, to the creation of the Strategic Air Command—a system of bombers headed by General Curtis LeMay (who had previously presided over the firebombing of Japan during WWII). In operations like Chrome Dome, SAC kept nuclear-armed planes in the air 24 hours a day, ready for an order to attack Moscow.
These technological possibilities enabled nuclear strategy to develop a logic considerably different from previous military thinking. Because the threat of nuclear warfare was so awful, it was first thought that it might make any war of the future impossible. President Dwight D. Eisenhower's doctrine of "massive retaliation" in the early years of the Cold War was a message to the USSR, saying that if the Red Army attempted to invade the parts of Europe not given to the Eastern bloc during the Potsdam Conference (such as West Germany), nuclear weapons would be used against the Soviet troops and potentially the Soviet leaders.
With the development of more rapid-response technologies (such as rockets and long-range bombers), this policy began to shift. If the Soviet Union also had nuclear weapons and a policy of "massive retaliation" was carried out, it was reasoned, then any Soviet forces not killed in the initial attack, or launched while the attack was ongoing, would be able to deliver their own nuclear retaliation against the U.S. Recognizing that this was an undesirable outcome, military officers and game theorists at the RAND think tank developed a nuclear warfare strategy that was eventually called Mutually Assured Destruction (MAD).
MAD divided potential nuclear war into two stages: first strike and second strike. First strike meant the first use of nuclear weapons by one nuclear-equipped nation against another nuclear-equipped nation. If the attacking nation did not prevent the attacked nation from a nuclear response, the attacked nation would respond with a second strike against the attacking nation. In this situation, whether the U.S. first attacked the USSR or the USSR first attacked the U.S., the end result would be that both nations would be damaged to the point of utter social collapse.
According to game theory, because starting a nuclear war was suicidal, no logical country would shoot first. However, if a country could launch a first strike that utterly destroyed the target country's ability to respond, that might give that country the confidence to initiate a nuclear war. The object of a country operating under the MAD doctrine was therefore to deny the opposing country this first-strike capability.
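The second-strike logic can be sketched as a toy two-player game. The payoff numbers below are purely hypothetical and are chosen only to illustrate the structure described above: as long as the other side retains an assured second strike, striking first is never better than restraint.

# Toy illustration (hypothetical payoffs) of the MAD second-strike logic described above.
# Higher numbers are better outcomes for that side.
PAYOFFS = {
    ("restrain", "restrain"): (0, 0),        # uneasy peace
    ("strike",   "restrain"): (-100, -100),  # B's assured second strike still destroys A
    ("restrain", "strike"):   (-100, -100),  # A's assured second strike still destroys B
    ("strike",   "strike"):   (-100, -100),  # mutual destruction
}

def best_response(player, other_choice):
    """Return the move that maximizes this player's payoff, given the other's move."""
    idx = 0 if player == "A" else 1
    def payoff(choice):
        pair = (choice, other_choice) if player == "A" else (other_choice, choice)
        return PAYOFFS[pair][idx]
    return max(("restrain", "strike"), key=payoff)

# With retaliation assured, "restrain" is a best response to anything the other side does,
# so neither "logical" player shoots first; a disarming first strike would change these payoffs.
for other in ("restrain", "strike"):
    print("A's best response if B chooses", other, "->", best_response("A", other))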
MAD played on two seemingly opposed modes of thought: cold logic and emotional fear. The English phrase by which MAD was often known, "nuclear deterrence," was rendered by the French as "dissuasion" and by the Soviets as "terrorization." This apparent paradox of nuclear war was summed up by British Prime Minister Winston Churchill as "the worse things get, the better they are"—the greater the threat of mutual destruction, the safer the world would be.
This philosophy made a number of technological and political demands on participating nations. For one thing, each side had to assume that the enemy might be trying to acquire first-strike capability, which had to be prevented at all costs. In American politics this translated into demands to avoid "bomber gaps" and "missile gaps" where the Soviet Union could potentially outshoot the Americans. It also encouraged the production of thousands of nuclear weapons by both the U.S. and the USSR, far more than needed to simply destroy the major civilian and military infrastructures of the opposing country. These policies and strategies were satirized in the 1964 Stanley Kubrick film Dr. Strangelove, in which the Soviets, unable to keep up with the US's first-strike capability, instead plan for MAD by building a Doomsday Machine, and thus, after a (literally) mad US General orders a nuclear attack on the USSR, the end of the world is brought about.
With early warning systems, it was thought that the strikes of nuclear war would come from dark rooms filled with computers, not the battlefield of the wars of old.
The policy also encouraged the development of the first early warning systems. Conventional war, even at its fastest, was fought over days and weeks. With long-range bombers, from the start of a nuclear attack to its conclusion was mere hours. Rockets could reduce a conflict to minutes. Planners reasoned that conventional command and control systems could not adequately react to a nuclear attack, so great lengths were taken to develop computer systems that could look for enemy attacks and direct rapid responses.
The U.S. poured massive funding into the development of SAGE, a system that could track and intercept enemy bomber aircraft using information from remote radar stations. It was the first computer system to feature real-time processing, multiplexing, and display devices, and it was a direct predecessor of modern networked computers.
Emergence of the anti-nuclear movement[edit]
Main article: History of the anti-nuclear movement
Women Strike for Peace during the Cuban Missile Crisis
The atomic bombings of Hiroshima and Nagasaki and the end of World War II quickly followed the 1945 Trinity nuclear test, and the Little Boy device was detonated over the Japanese city of Hiroshima on 6 August 1945. Exploding with a yield equivalent to 12,500 tonnes of TNT, the blast and thermal wave of the bomb destroyed nearly 50,000 buildings and killed approximately 75,000 people.[27] Subsequently, the world’s nuclear weapons stockpiles grew.[28]
Operation Crossroads was a series of nuclear weapon tests conducted by the United States at Bikini Atoll in the Pacific Ocean in the summer of 1946. Its purpose was to test the effect of nuclear weapons on naval ships. To prepare the Bikini atoll for the nuclear tests, Bikini's native residents were evicted from their homes and resettled on smaller, uninhabited islands where they were unable to sustain themselves.[29]
National leaders debated the impact of nuclear weapons on domestic and foreign policy. Also involved in the debate about nuclear weapons policy was the scientific community, through professional associations such as the Federation of Atomic Scientists and the Pugwash Conference on Science and World Affairs.[30] Radioactive fallout from nuclear weapons testing was first drawn to public attention in 1954 when a Hydrogen bomb test in the Pacific contaminated the crew of the Japanese fishing boat Lucky Dragon.[31] One of the fishermen died in Japan seven months later. The incident caused widespread concern around the world and "provided a decisive impetus for the emergence of the anti-nuclear weapons movement in many countries".[31] The anti-nuclear weapons movement grew rapidly because for many people the atomic bomb "encapsulated the very worst direction in which society was moving".[32]
Peace movements emerged in Japan and in 1954 they converged to form a unified "Japanese Council Against Atomic and Hydrogen Bombs". Japanese opposition to the Pacific nuclear weapons tests was widespread, and "an estimated 35 million signatures were collected on petitions calling for bans on nuclear weapons".[32] The Russell–Einstein Manifesto was issued in London on July 9, 1955 by Bertrand Russell in the midst of the Cold War. It highlighted the dangers posed by nuclear weapons and called for world leaders to seek peaceful resolutions to international conflict. The signatories included eleven pre-eminent intellectuals and scientists, including Albert Einstein, who signed it just days before his death on April 18, 1955. A few days after the release, philanthropist Cyrus S. Eaton offered to sponsor a conference—called for in the manifesto—in Pugwash, Nova Scotia, Eaton's birthplace. This conference was to be the first of the Pugwash Conferences on Science and World Affairs, held in July 1957.
In the United Kingdom, the first Aldermaston March organised by the Campaign for Nuclear Disarmament took place at Easter 1958, when several thousand people marched for four days from Trafalgar Square, London, to the Atomic Weapons Research Establishment close to Aldermaston in Berkshire, England, to demonstrate their opposition to nuclear weapons.[33][34] The Aldermaston marches continued into the late 1960s when tens of thousands of people took part in the four-day marches.[32]
In 1959, a letter in the Bulletin of Atomic Scientists was the start of a successful campaign to stop the Atomic Energy Commission dumping radioactive waste in the sea 19 kilometres from Boston.[35] On November 1, 1961, at the height of the Cold War, about 50,000 women brought together by Women Strike for Peace marched in 60 cities in the United States to demonstrate against nuclear weapons. It was the largest national women's peace protest of the 20th century.[36][37]
In 1958, Linus Pauling and his wife presented the United Nations with a petition signed by more than 11,000 scientists calling for an end to nuclear-weapon testing. The "Baby Tooth Survey," headed by Dr. Louise Reiss, demonstrated conclusively in 1961 that above-ground nuclear testing posed significant public health risks in the form of radioactive fallout spread primarily via milk from cows that had ingested contaminated grass.[38][39][40] Public pressure and the research results subsequently led to a moratorium on above-ground nuclear weapons testing, followed by the Partial Test Ban Treaty, signed in 1963 by John F. Kennedy and Nikita Khrushchev.[30][41][42]
Cuban Missile Crisis[edit]
Main article: Cuban Missile Crisis
U-2 photographs revealed that the Soviet Union was stationing nuclear missiles on the island of Cuba in 1962, beginning the Cuban Missile Crisis.
Submarine-launched ballistic missiles with multiple warheads made defending against nuclear attack impractical.
Bombers and short-range rockets were not reliable: planes could be shot down, and earlier nuclear missiles could cover only a limited range— for example, the first Soviet rockets' range limited them to targets in Europe. However, by the 1960s, both the United States and the Soviet Union had developed intercontinental ballistic missiles, which could be launched from extremely remote areas far away from their target. They had also developed submarine-launched ballistic missiles, which had less range but could be launched from submarines very close to the target without any radar warning. This made any national protection from nuclear missiles increasingly impractical.
The military realities made for a precarious diplomatic situation. The international politics of brinkmanship led leaders to proclaim their willingness to participate in a nuclear war rather than concede any advantage to their opponents, feeding public fears that their generation might be the last. Civil defense programs undertaken by both superpowers, exemplified by the construction of fallout shelters and official assurances about the survivability of nuclear war, did little to ease public concerns.
The climax of brinkmanship came in October 1962, when an American U-2 spy plane photographed a series of launch sites for medium-range ballistic missiles being constructed on the island of Cuba, just off the coast of the southern United States, beginning what became known as the Cuban Missile Crisis. The U.S. administration of John F. Kennedy concluded that the Soviet Union, then led by Nikita Khrushchev, was planning to station Soviet nuclear missiles on the island, which was under the control of communist Fidel Castro. On October 22, Kennedy announced the discoveries in a televised address. He announced a naval blockade around Cuba that would turn back Soviet nuclear shipments, and warned that the military was prepared "for any eventualities." The missiles had a range of 2,400 miles (4,000 km), and would allow the Soviet Union to quickly destroy many major American cities on the Eastern Seaboard if a nuclear war began.
The leaders of the two superpowers stood nose to nose, seemingly poised over the beginnings of a third world war. Khrushchev's ambitions for putting the weapons on the island were motivated in part by the fact that the U.S. had stationed similar weapons in Britain, Italy, and nearby Turkey, and had sponsored a failed invasion of Cuba the year before at the Bay of Pigs. On October 26, Khrushchev sent a message to Kennedy offering to withdraw all missiles if Kennedy committed to a policy of no future invasions of Cuba. Khrushchev worded the threat of assured destruction eloquently:
"You and I should not now pull on the ends of the rope in which you have tied a knot of war, because the harder you and I pull, the tighter the knot will become. And a time may come when this knot is tied so tight that the person who tied it is no longer capable of untying it, and then the knot will have to be cut. What that would mean I need not explain to you, because you yourself understand perfectly what dreaded forces our two countries possess."
A day later, however, the Soviets sent another message, this time demanding that the U.S. remove its missiles from Turkey before any missiles were withdrawn from Cuba. On the same day, a U-2 plane was shot down over Cuba and another almost intercepted over the Soviet Union, as Soviet merchant ships neared the quarantine zone. Kennedy responded by accepting the first deal publicly, and sending his brother Robert to the Soviet embassy to accept the second deal privately. On October 28, the Soviet ships stopped at the quarantine line and, after some hesitation, turned back towards the Soviet Union. Khrushchev announced that he had ordered the removal of all missiles in Cuba, and U.S. Secretary of State Dean Rusk was moved to comment, "We went eyeball to eyeball, and the other fellow just blinked."
The Crisis was later seen as the closest the U.S. and the USSR ever came to nuclear war, one that was narrowly averted by last-minute compromise on both sides. Fears of communication difficulties led to the installation of the first hotline, a direct link between the superpowers that allowed them to more easily discuss future military activities and political maneuverings. It had been made clear that missiles, bombers, submarines, and computerized firing systems made escalating any situation to Armageddon far easier than anybody desired.
After stepping so close to the brink, both the U.S. and the USSR worked to reduce their nuclear tensions in the years immediately following. The most immediate culmination of this work was the signing of the Partial Test Ban Treaty in 1963, in which the U.S. and USSR agreed to no longer test nuclear weapons in the atmosphere, underwater, or in outer space. Testing underground continued, allowing for further weapons development, but the worldwide fallout risks were purposefully reduced, and the era of using massive nuclear tests as a form of saber-rattling ended.
In December 1979, NATO decided to deploy cruise and Pershing II missiles in Western Europe in response to Soviet deployment of intermediate range mobile missiles, and in the early 1980s, a "dangerous Soviet-US nuclear confrontation" arose.[43] In New York on June 12, 1982, one million people gathered to protest against nuclear weapons and to support the second UN Special Session on Disarmament.[44][45] As the nuclear abolitionist movement grew, there were many protests at the Nevada Test Site. For example, on February 6, 1987, nearly 2,000 demonstrators, including six members of Congress, protested against nuclear weapons testing and more than 400 people were arrested.[46] Four of the significant groups organizing this renewal of anti-nuclear activism were Greenpeace, the American Peace Test, the Western Shoshone, and the Nevada Desert Experience.
There have been at least four major false alarms, the most recent in 1995, that resulted in the activation of nuclear attack early warning protocols. They include the accidental loading of a training tape into the American early-warning computers; a computer chip failure that appeared to show a random number of attacking missiles; a rare alignment of the Sun, the U.S. missile fields and a Soviet early-warning satellite that caused it to confuse high-altitude clouds with missile launches; and the launch of a Norwegian research rocket, which resulted in President Yeltsin activating his nuclear briefcase for the first time.[47]
Initial proliferation[edit]
In the fifties and sixties, three more countries joined the "nuclear club." The United Kingdom had been an integral part of the Manhattan Project following the Quebec Agreement in 1943. The passing of the McMahon Act by the United States in 1946 unilaterally broke this partnership and prevented the passage of any further information to the United Kingdom. The British Government, under Clement Attlee, determined that a British Bomb was essential. Because of British involvement in the Manhattan Project, Britain had extensive knowledge in some areas, but not in others.
An improved version of 'Fat Man' was developed, and on 26 February 1952, Prime Minister Winston Churchill announced that the United Kingdom also had an atomic bomb; a successful test took place on 3 October 1952. At first these were free-fall bombs, intended for use by the V Force of jet bombers. A Vickers Valiant dropped the first UK nuclear weapon on 11 October 1956 at Maralinga, South Australia. Later came a missile, Blue Steel, intended for carriage by the V Force bombers, and then the Blue Streak medium-range ballistic missile (later canceled). Anglo-American cooperation on nuclear weapons was restored by the 1958 US-UK Mutual Defence Agreement. As a result of this and the Polaris Sales Agreement, the United Kingdom has bought United States designs for submarine missiles and fitted its own warheads. It retains full independent control over the use of the missiles. It no longer possesses any free-fall bombs.
France had been heavily involved in nuclear research before World War II through the work of the Joliot-Curies. This was discontinued after the war because of the instability of the Fourth Republic and lack of finances.[48] However, in the 1950s, France launched a civil nuclear research program, which produced plutonium as a byproduct.
In 1956, France formed a secret Committee for the Military Applications of Atomic Energy and a development program for delivery vehicles. With the return of Charles de Gaulle to power in 1958, final decisions to build a bomb were made, which led to a successful test in 1960. Since then, France has developed and maintained its own nuclear deterrent independent of NATO.
In 1951, China and the Soviet Union signed an agreement whereby China supplied uranium ore in exchange for technical assistance in producing nuclear weapons. In 1953, China established a research program under the guise of civilian nuclear energy. Throughout the 1950s the Soviet Union provided large amounts of equipment. But as relations between the two countries worsened, the Soviets reduced their assistance and, in 1959, refused to provide a model bomb for copying. Despite this, the Chinese made rapid progress and tested an atomic bomb on October 16, 1964, at Lop Nur. They tested a nuclear missile on October 25, 1966, and a hydrogen bomb on June 14, 1967.
Chinese nuclear warheads were produced from 1968 and thermonuclear warheads from 1974.[49] It is also thought that Chinese warheads have been successfully miniaturised from 2200 kg to 700 kg through the use of designs obtained by espionage from the United States. The current number of weapons is unknown owing to strict secrecy, but it is thought that up to 2000 warheads may have been produced,[citation needed] though far fewer may be available for use. China is the only nuclear weapons state to have guaranteed the non-first use of nuclear weapons.
Cold War[edit]
Main article: Cold War
ICBMs, like the American Minuteman missile, allowed nations to deliver nuclear weapons thousands of miles away with relative ease.
On 12 December 1982, 30,000 women held hands around the 6 miles (9.7 km) perimeter of the RAF Greenham Common base, in protest against the decision to site American cruise missiles there.
After World War II, the balance of power between the Eastern and Western blocs and the fear of global destruction prevented the further military use of atomic bombs. This fear was even a central part of Cold War strategy, referred to as the doctrine of Mutually Assured Destruction. So important was this balance to international political stability that a treaty, the Anti-Ballistic Missile Treaty (or ABM treaty), was signed by the U.S. and the USSR in 1972 to curtail the development of defenses against nuclear weapons and the ballistic missiles that carry them. This doctrine resulted in a large increase in the number of nuclear weapons, as each side sought to ensure it possessed the firepower to destroy the opposition in all possible scenarios.
Early delivery systems for nuclear devices were primarily bombers like the United States B-29 Superfortress and Convair B-36, and later the B-52 Stratofortress. Ballistic missile systems, based on Wernher von Braun's World War II designs (specifically the V-2 rocket), were developed by both United States and Soviet Union teams; in the case of the U.S., the effort drew heavily on German scientists and engineers, while the Soviet Union also made extensive use of captured German scientists, engineers, and technical data.
These systems were used to launch satellites, such as Sputnik, and to propel the Space Race, but they were primarily developed to create Intercontinental Ballistic Missiles (ICBMs) that could deliver nuclear weapons anywhere on the globe. Development of these systems continued throughout the Cold War—though plans and treaties, beginning with the Strategic Arms Limitation Treaty (SALT I), restricted deployment of these systems until, after the fall of the Soviet Union, system development essentially halted, and many weapons were disabled and destroyed. On January 27, 1967, more than 60 nations signed the Outer Space Treaty, banning nuclear weapons in space.
There have been a number of potential nuclear disasters. Following air accidents U.S. nuclear weapons have been lost near Atlantic City, New Jersey (1957); Savannah, Georgia (1958) (see Tybee Bomb); Goldsboro, North Carolina (1961); off the coast of Okinawa (1965); in the sea near Palomares, Spain (1966) (see 1966 Palomares B-52 crash); and near Thule, Greenland (1968) (see 1968 Thule Air Base B-52 crash). Most of the lost weapons were recovered, the Spanish device after three months' effort by the DSV Alvin and DSV Aluminaut.
The Soviet Union was less forthcoming about such incidents, but the environmental group Greenpeace believes that there are around forty non-U.S. nuclear devices that have been lost and not recovered, compared to eleven lost by America, mostly in submarine disasters. The U.S. has tried to recover Soviet devices, notably in the 1974 Project Azorian, which used the specialist salvage vessel Hughes Glomar Explorer to raise a Soviet submarine. After news of the operation leaked, the CIA coined what became its standard phrase for refusing to disclose sensitive information, known as glomarization: "We can neither confirm nor deny the existence of the information requested but, hypothetically, if such data were to exist, the subject matter would be classified, and could not be disclosed."[50]
The collapse of the Soviet Union in 1991 essentially ended the Cold War. However, the end of the Cold War failed to end the threat of nuclear weapon use, although global fears of nuclear war reduced substantially. In a major move of symbolic de-escalation, Boris Yeltsin, on January 26, 1992, announced that Russia planned to stop targeting United States cities with nuclear weapons.
Cost[edit]
Designing, testing, producing, deploying, and defending against nuclear weapons constitutes one of the largest expenditures for the nations that possess them. In the United States during the Cold War years, between "one quarter to one third of all military spending since World War II [was] devoted to nuclear weapons and their infrastructure."[51] According to a retrospective Brookings Institution study published in 1998 by the Nuclear Weapons Cost Study Committee (formed in 1993 by the W. Alton Jones Foundation), the total expenditure for U.S. nuclear weapons from 1940 to 1998 was $5.5 trillion in 1996 dollars.[52] The total public debt at the end of fiscal year 1998 was $5,478,189,000,000 in 1998 dollars,[53] or about $5.3 trillion in 1996 dollars. The entire public debt in 1998 was therefore roughly equal to the cost of research, development, and deployment of U.S. nuclear weapons and nuclear weapons-related programs during the Cold War.[51][52][54]
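Using only the figures quoted above, the comparison amounts to deflating the 1998 debt figure back to 1996 dollars:

\[
\frac{\$5.3\ \text{trillion (1996 dollars)}}{\$5.478\ \text{trillion (1998 dollars)}} \approx 0.97,
\]

that is, the conversion implies a price-level rise of roughly 3 percent between 1996 and 1998, after which the debt figure of about $5.3 trillion sits close to the estimated $5.5 trillion (1996 dollars) spent on nuclear weapons programs.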
Second nuclear age[edit]
See also: List of states with nuclear weapons
The second nuclear age can be regarded as the proliferation of nuclear weapons among lesser powers, and for reasons other than the American-Soviet-Chinese rivalry.
India embarked relatively early on a program aimed at nuclear weapons capability, but apparently accelerated it after its 1962 border war with China. India's first atomic-test explosion was in 1974 with Smiling Buddha, which it described as a "peaceful nuclear explosion."
After the collapse of its Eastern Military High Command and the disintegration of Pakistan as a result of the 1971 war with India, Zulfikar Ali Bhutto launched a scientific research programme on nuclear weapons. The Indian test spurred Pakistan's programme, and the ISI conducted successful espionage operations in the Netherlands, while the programme was also developed indigenously. India tested fission and perhaps fusion devices in 1998, and Pakistan successfully tested fission devices that same year, raising concerns that they would use nuclear weapons on each other.
All of the former Soviet bloc countries with nuclear weapons (Belarus, Ukraine, and Kazakhstan) returned their warheads to Russia by 1996.
South Africa also had an active program to develop uranium-based nuclear weapons, but dismantled its nuclear weapon program in the 1990s. Experts do not believe it actually tested such a weapon, though it later claimed it constructed several crude devices that it eventually dismantled. In the late 1970s American spy satellites detected a "brief, intense, double flash of light near the southern tip of Africa."[55] Known as the Vela Incident, it was speculated to have been a South African or possibly Israeli nuclear weapons test, though some feel that it may have been caused by natural events or a detector malfunction.
Israel is widely believed to possess an arsenal of up to several hundred nuclear warheads, but this has never been officially confirmed or denied (though the existence of their Dimona nuclear facility was confirmed by Mordechai Vanunu in 1986).
In January 2004, Dr A. Q. Khan of Pakistan's programme confessed to having been a key mover in "proliferation activities",[56] seen as part of an international proliferation network of materials, knowledge, and machines from Pakistan to Libya, Iran, and North Korea.
North Korea announced in 2003 that it also had several nuclear explosives, though this was not confirmed and the claim's validity has been a subject of scrutiny among weapons experts. The first claimed detonation of a nuclear weapon by the Democratic People's Republic of Korea was the 2006 North Korean nuclear test, conducted on October 9, 2006. On May 25, 2009, North Korea conducted a second nuclear test, violating United Nations Security Council Resolution 1718. A third test was conducted on 13 February 2013.
In Iran, Ayatollah Ali Khamenei issued a fatwa forbidding the production, stockpiling and use of nuclear weapons on August 9, 2005. The full text of the fatwa was released in an official statement at the meeting of the International Atomic Energy Agency (IAEA) in Vienna.[57] Despite this, however, there is mounting concern in many nations about Iran's refusal to halt its nuclear power program, which many (including some members of the US government) fear is a cover for weapons development.