In Episode 15 of Under The Knife, I explore the horrible reality behind dental practices from the past, including how dentures used to be made from the teeth of executed criminals, exhumed bodies, and sometimes even slaves.
If you visit the Gordon Museum at Guy’s Hospital in London, you’ll see a small bladder stone—no bigger than 3 centimetres across. Besides the fact that it has been sliced open to reveal concentric circles within, it is entirely unremarkable in appearance. Yet, this tiny stone was the source of enormous pain for 53-year-old Stephen Pollard, who agreed to undergo surgery to remove it in 1828.
People frequently suffered from bladder stones in earlier periods due to poor diet, which often consisted of lots of meat and alcohol, and very few vegetables. The oldest bladder stone on record was discovered in an Egyptian grave dating from 4,800 B.C. The problem was so common that itinerant healers traveled from village to village offering a vast array of services and potions that promised to cure those suffering from the condition. Depending on their size, these stones could block the flow of urine into the bladder from the kidneys, or prevent the flow of urine out of the bladder through the urethra. Either situation was potentially lethal. In the first instance, the kidney is slowly destroyed by pressure from the urine; in the second, the bladder swells and eventually bursts, leading to infection and finally death.
Like today, bladder stones were unimaginably painful for those who suffered from them in the past. The stones themselves were often enormous. Some measured as large as a tennis ball. The afflicted often acted in desperation, going to great lengths to rid themselves of the agony. In the early 18th century, one man reportedly drove a nail through his penis and then used a blacksmith’s hammer to break the stone apart until the pieces were small enough to pass through his urethra. It’s not a surprise, then, that many sufferers chose to submit to the surgeon’s knife despite a very real risk of dying during or immediately after the procedure from shock or infection. Although the operation itself lasted only a matter of minutes, lithotomic procedures were incredibly painful and dangerous—not to mention humiliating.
The patient—naked from the waist down—was bound in such a way as to ensure an unobstructed view of his genitals and anus [see illustration below]. First, the surgeon passed a curved metal tube up the patient’s penis and into the bladder. He then slid a finger into the man’s rectum, feeling for the stone. Once he had located it, his assistant removed the metal tube and replaced it with a wooden staff. This staff acted as a guide so that the surgeon did not fatally rupture the patient’s rectum or intestines as he began cutting deeper into the bladder. Once the staff was in place, the surgeon cut diagonally through the fibrous muscle of the scrotum until he reached the wooden staff. Next, he used a probe to widen the hole, ripping open the prostate gland in the process. At this point, the wooden staff was removed and the surgeon used forceps to extract the stone from the bladder.
Unfortunately for Stephen Pollard, what should have lasted 5 minutes ended up lasting 55 under the gaze of 200 spectators at Guy’s Hospital in London. The surgeon Bransby Cooper fumbled and panicked, cursing the patient loudly for having “a very deep perineum,” while the patient, in turn, cried: “Oh! let it go; pray, let it keep in!” The surgeon reportedly used every tool at his disposal before he finally reached into the gaping wound with his bare fingers. During this time, several of the spectators walked out of the operating theater, unable to bear witness to the patient’s agony any longer. Eventually, Cooper located the stone with a pair of forceps. He held it up for his audience, who clapped unenthusiastically at the sight.
Sadly, Pollard survived the surgery only to die the next day. His autopsy revealed that it was indeed the incompetence of his surgeon, and not his alleged “abnormal anatomy,” which was the cause of his death.
But the story didn’t end there. Word quickly got out about the botched operation. When Thomas Wakley [left]—the editor of The Lancet—heard of this medical disaster, he accused Cooper of incompetence and implied that the surgeon had only been appointed to Guy’s Hospital because he was nephew to one of the senior surgeons on staff. Outraged by the allegation, Cooper sued Wakley for libel and sought £2000 in damages. Wakley used the trial to attack what he believed to be corruption within the hospitals due to rampant nepotism. The jury reluctantly sided with the surgeon, but awarded him only £100. Wakley had raised more than that through a defence fund campaign, and after the trial he gave the surplus to Pollard’s widow.
Bransby Cooper’s reputation, like his patient, never did recover.
If you’re interested in the history of pre-anesthetic and pre-antiseptic surgery, you can pre-order my book The Butchering Art in the US (click here) and in the UK (click here). Information on foreign editions to come!
1. Druin Burch, Digging up the Dead: Uncovering the Life and Times of an Extraordinary Surgeon (2007), p. 26. I am greatly indebted to his work for bringing this story to my attention.
2. Thomas Wakley, A Report of the Trial of Cooper v. Wakley (1829), pp. 4-5.
The word “hysteria” conjures up an array of images, none of which probably include a nomadic uterus wandering aimlessly around the female body. Yet that is precisely what medical practitioners in the past believed was the cause behind this mysterious disorder. The very word “hysteria” comes from the Greek word hystera, meaning “womb,” and arises from medical misunderstandings of basic female anatomy.
Today, hysteria is regarded as a physical expression of a mental conflict and can affect anyone regardless of age or gender.  Centuries ago, however, it was attributed only to women, and believed to be physiological (not psychological) in nature.
For instance, Plato believed that the womb—especially one which was barren—could become vexed and begin wandering throughout the body, blocking respiratory channels and causing bizarre behavior. This belief was ubiquitous in ancient Greece. The physician Aretaeus of Cappadocia went so far as to consider the womb “an animal within an animal,” an organ that “moved of itself hither and thither in the flanks.” The uterus could move upwards, downwards, left or right. It could even collide with the liver or spleen. Depending on its direction, a wandering womb could cause all kinds of hell. One that traveled upwards might cause sluggishness, lack of strength, and vertigo in a patient, while a womb that moved downwards could cause a person to feel as if she were choking. So worrisome was the prospect of a wandering womb during this period that some women wore amulets to protect themselves against it.
The womb continued to hold a mystical place in medical texts for centuries, and was often used to explain away an array of female complaints. The 17th-century physician William Harvey, famed for his theory on the circulation of the blood, perpetuated the belief that women were slaves to their own biology. He described the uterus as “insatiable, ferocious, animal-like,” and drew parallels between “bitches in heat and hysterical women.” When a woman named Mary Glover accused her neighbor Elizabeth Jackson of cursing her in 1602, the physician Edward Jorden argued that the erratic behavior that drove Mary to make such an accusation was actually caused by noxious vapors in her womb, which he believed were slowly suffocating her. (The courts disagreed, and Elizabeth Jackson was convicted of witchcraft shortly thereafter.)
So what could be done for hysteria in the past?
Physicians prescribed all kinds of treatments for a wayward womb. These included sweet-smelling vaginal suppositories and fumigations used to tempt the uterus back to its rightful place. The Greek physician Aretaeus wrote that the womb “delights…in fragrant smells and advances towards them; and it has an aversion to foetid smells, and flees from them.” Women were also advised to ingest disgusting substances—sometimes containing repulsive ingredients such as human or animal excrement—in order to coax the womb away from the lungs and heart. In some cases, physical force was used to correct the position of a wandering womb (see image, right). For the single woman suffering from hysteria, the cure was simple: marriage, followed by children. Lots and lots of children.
Today, wombs are no longer thought to wander; however, medicine still tends to pathologize the vagaries of the female reproductive system. Over the course of several thousand years, the womb has become less of a way to explain physical ailments, and more of a way to explain psychological dysfunction—often being cited as the reason behind irrationality and mood swings in women. Has the ever-elusive hysteria brought on by roving uteri simply been replaced by the equally intangible yet mysterious PMS? I’ll let you decide.
You can now pre-order my book THE BUTCHERING ART by clicking here. THE BUTCHERING ART follows the story of Joseph Lister as he attempts to revolutionize the brutal world of Victorian surgery through antisepsis. Pre-orders are incredibly helpful to new authors. Info on how to order foreign editions coming soon. Your support is greatly appreciated.
1. Mark J Adair, “Plato’s View of the ‘Wandering Uterus,’” The Classical Journal 91:2 (1996), p. 153.
2. G. S. Rousseau, “‘A Strange Pathology:’ Hysteria in the Early Modern World, 1500-1800” in Hysteria Beyond Freud (1993), p.104. Originally qtd in Heather Meek, “Of Wandering Wombs and Wrongs of Women: Evolving Concepts of Hysteria in the Age of Reason,” English Studies in Canada 35:2-3 (June/September 2009), p.109.
3. Quoted in Matt Simon, “Fantastically Wrong: The Theory of the Wandering Wombs that Drove Women to Madness,” Wired (7 May 2014).
4. Robert K. Ritner, “A Uterine Amulet in the Oriental Institute Collection,” Journal of Near Eastern Studies 45:3 (Jul. 1984), pp.209-221. For more on the fascinating subject of magical amulets, see Tom Blaen, Medical Jewels, Magical Gems: Precious Stones in Early Modern Britain (2012).
5. Rousseau, “A Strange Pathology,” p. 132.
6. Mary Lefkowitz, “Medical Notes: The Wandering Womb,” The New Yorker (26 February 1996).
The following blog post relates to my forthcoming book THE BUTCHERING ART, which you can pre-order here.
Today, we think of the hospital as an exemplar of sanitation. However, during the first half of the nineteenth century, hospitals were anything but hygienic. They were breeding grounds for infection and provided only the most primitive facilities for the sick and dying, many of whom were housed on wards with little ventilation or access to clean water. As a result of this squalor, hospitals became known as “Houses of Death.”
The best that can be said about Victorian hospitals is that they were a slight improvement over their Georgian predecessors. That’s hardly a ringing endorsement when one considers that a hospital’s “Chief Bug-Catcher”—whose job it was to rid the mattresses of lice—was paid more than its surgeons in the eighteenth century. In fact, bed bugs were so common that the “Bug Destroyer” Andrew Cooke [see image, left] claimed to have cleared upwards of 20,000 beds of insects during the course of his career.
In spite of token efforts to make them cleaner, most hospitals remained overcrowded, grimy, and poorly managed. The assistant surgeon at St. Thomas’s Hospital in London was expected to examine over 200 patients in a single day. The sick often languished in filth for long periods before they received medical attention, because most hospitals were disastrously understaffed. In 1825, visitors to St. George’s Hospital discovered mushrooms and wriggling maggots thriving in the damp, soiled sheets of a patient with a compound fracture. The afflicted man, believing this to be the norm, had not complained about the conditions, nor had any of his fellow convalescents thought the squalor especially noteworthy.
Worst of all was the fact that a sickening odor permeated every hospital ward. The air was thick with the stench of piss, shit, and vomit. The smell was so offensive that the staff sometimes walked around with handkerchiefs pressed to their noses. Doctors didn’t exactly smell like rose beds, either. Berkeley Moynihan—one of the first surgeons in England to use rubber gloves—recalled how he and his colleagues used to throw off their own jackets when entering the operating theater and don ancient frocks that were often stiff with dried blood and pus. They had belonged to retired members of staff and were worn as badges of honor by their proud successors, as were many items of surgical clothing.
The operating theaters within these hospitals were just as dirty as the surgeons working in them. In the early decades of the nineteenth century, it was safer to have surgery at home than it was in a hospital, where mortality rates were three to five times higher than they were in domestic settings. Those who went under the knife did so as a last resort, and so were usually mortally ill. Very few surgical patients recovered without incident. Many either died or fought their way back to only partial health. Those unlucky enough to find themselves hospitalized during this period would frequently fall prey to a host of infections, most of which were fatal in a pre-antibiotic era.
In addition to the foul smells, fear permeated the atmosphere of the Victorian hospital. The surgeon John Bell wrote that it was easy to imagine the mental anguish of the hospital patient awaiting surgery. He would hear regularly “the cries of those under operation which he is preparing to undergo,” and see his “fellow-sufferer conveyed to that scene of trial,” only to be “carried back in solemnity and silence to his bed.” Lastly, he was subjected to the sound of their dying groans as they suffered the final throes of what was almost certainly their end.
As horrible as these hospitals were, it was not easy gaining entry to one. Throughout the nineteenth century, almost all the hospitals in London except the Royal Free controlled inpatient admission through a system of ticketing. One could obtain a ticket from one of the hospital’s “subscribers,” who had paid an annual fee in exchange for the right to recommend patients to the hospital and vote in elections of medical staff. Securing a ticket required tireless soliciting on the part of potential patients, who might spend days waiting and calling on the servants of subscribers and begging their way into the hospital. Some hospitals only admitted patients who brought with them money to cover their almost inevitable burial. Others, like St. Thomas’s in London, charged double if the person in question was deemed “foul” by the admissions officer.
Before germs and antisepsis were fully understood, remedies for hospital squalor were hard to come by. The obstetrician James Y. Simpson suggested an almost-fatalistic approach to the problem. If cross-contamination could not be controlled, he argued, then hospitals should be periodically destroyed and built anew. Another surgeon voiced a similar view. “Once a hospital has become incurably pyemia-stricken, it is impossible to disinfect it by any known hygienic means, as it would be to disinfect an old cheese of the maggots which have been generated in it,” he wrote. There was only one solution: the wholesale “demolition of the infected fabric.”
To read more about 19th-century hospitals and Joseph Lister’s antiseptic revolution, pre-order my book THE BUTCHERING ART by clicking here. Pre-orders are incredibly helpful to new authors. Info on how to order foreign editions coming soon. Your support is greatly appreciated.
1. Adrian Teal, The Gin Lane Gazette (London: Unbound, 2014).
2. F. B. Smith, The People’s Health 1830-1910 (London: Croom Helm, 1979), 262.
3. John Bell, The Principles of Surgery, Vol. III (1808), 293.
4. Elisabeth Bennion, Antique Medical Instruments (Berkeley: University of California Press, 1979), 13.
5. John Eric Erichsen, On Hospitalism and the Causes of Death after Operations (London: Longmans, Green, and Co., 1874), 98.
The surgical revolution began with an American dentist and a curiously sweet-smelling liquid known as ether.
Officially, ether had been discovered in 1275, but it was not synthesized until 1540, when the German botanist and chemist Valerius Cordus created a revolutionary formula that involved adding sulfuric acid to ethyl alcohol. His contemporary Paracelsus experimented with ether on chickens, noting that when the birds drank the liquid, they would fall into a prolonged sleep and awake unharmed. He concluded that the substance “quiets all suffering without any harm and relieves all pain, and quenches all fevers, and prevents complications in all disease.” Yet inexplicably, it would be several hundred years before it was tested on humans.
That moment finally arrived in 1842, when Crawford Williamson Long became the first to use ether as a general anesthetic, removing a tumor from a patient’s neck in Jefferson, Georgia. Unfortunately, Long didn’t publish the results of his experiments until 1848. By that time, the Boston dentist William T. G. Morton had won fame by using ether while extracting a tooth painlessly from a patient on September 30, 1846 [see Morton’s inhaler for administering ether, right]. An account of this successful procedure was published in a newspaper, prompting a notable surgeon to ask Morton to assist him in an operation removing a large tumor from a patient’s lower jaw at Massachusetts General Hospital. After the demonstration, someone nicknamed the surgical amphitheater the “Ether Dome,” and it has been known by this name ever since.
It was an incredible breakthrough. Up until that point, surgery had been brutally painful. The patient, fully awake, would be restrained while the surgeon cut through skin, tissue, muscle, and bone. Surgeons were lauded for their brute strength and quick hands. A capable surgeon could remove a leg in under a minute. But with the discovery of ether, the need for speed in the operating theater had now vanished.
On November 18, 1846, Dr. Henry Jacob Bigelow wrote about this groundbreaking moment in The Boston Medical and Surgical Journal. He described how Morton had administered what he called “Letheon” to the patient before the operation commenced. The gas was named after the River Lethe of classical mythology, whose waters made the souls of the dead forget their lives on earth. Morton, who had patented the composition of the gas shortly after the operation, kept its ingredients secret, even from the surgeons. Bigelow revealed, however, that he could detect the sickly sweet smell of ether in it. News of the miraculous substance that could render patients unconscious during surgery spread quickly around the world as surgeons rushed to test its effects on their own patients.
The term “etherization” was coined, and the use of ether in surgery was celebrated in newspapers. “The history of Medicine has presented no parallel to the perfect success that has attended the use of ether,” a writer at the Exeter Flying Post proclaimed.  Another journalist declared: “Oh, what delight for every feeling heart… the announcement of this noble discovery of the power to still the sense of pain, and veil the eye and memory from all the horrors of an operation…WE HAVE CONQUERED PAIN!” 
A curious by-product of all this was the ether parties that sprang up all over the world. Thomas Lint, a medical student at St. Bartholomew’s Hospital in London, confessed: “We sit round a table and suck [on an inhaling apparatus], like many nabobs with their hookahs. It’s glorious, as you will see from this analysis of a quarter of an hour’s jolly good suck.”  He then went on to describe several “ethereal” experiences he and his fellow classmates had while under the influence of the newly discovered substance.
Ether wasn’t just inhaled. It was also drunk, like alcohol. In Ireland, the substance replaced whiskey for a while, due to its low cost (a penny a draught). After drinking a glass of water, “ethermaniacs” would take a drop of the drug on their tongues while pinching their noses and chasing it with another glass of water. Taken this way, ether hit the user hard and fast. Dr. Ernest Hart wrote that “the immediate effects of drinking ether are similar to those produced by alcohol, but everything takes place more rapidly.”  Recovery was just as swift. Those taken into custody for drunken disorderliness were often completely sober by the time they reached the police station, with the bonus that they also suffered no hangover. In this way, 19th-century revelers could take draughts of ether several times a day, with little consequence. 
Today, the “Ether Dome” at Massachusetts General Hospital has become a national historic landmark [pictured below], visited by thousands of members of the public each year. Although surgeons haven’t operated there for well over a hundred years, the room is still used for meetings and lectures at the hospital. The Ether Dome looks more or less like it did 165 years ago. Display cases at either end of the room contain surgical instruments from Morton’s day, their blades dull and rusted with age. At the front of the room an Egyptian mummy lords over the phantom audience. One can almost detect the sweet smell of ether in the air from so long ago.
If you enjoy my blog, please consider supporting my content by clicking HERE.
1. Quoted in Steve Parker, Kill or Cure: An Illustrated History of Medicine (London: DK, 2013), 174.
2. “Etherization in Surgery,” Exeter Flying Post, 24 June, 1847, 4.
3. London People’s Journal, 9 January, 1847.
4. Punch, or The London Charivari (December 1847), 259.
5. Quoted in David J. Linden, Pleasure: How Our Brains Make Junk Food, Exercise, Marijuana, Generosity & Gambling Feel So Good (Viking, 2011), 31.
6. Sterling Haynes, “Ethermaniacs,” BC Medical Journal (June 2014), Vol. 56 (No.5), 254-3.
I’m excited to announce that I’ve just finished filming the first episode of my new YouTube series, Under The Knife, and will be releasing it very soon (please subscribe to my channel for video updates). Unsurprisingly, that got me thinking about, well, knives. Here’s a list of some rather terrifying knives from our medical past.
- VALENTIN KNIFE, 1838. This knife was one of the few able to cut slices of organs and soft tissues for microscopic examination. The double-bladed knife worked best when the blades were wet – best of all when submerged in water. Named after its inventor, Professor Gabriel Valentin (1810-1883), a German-Swiss physiologist, the knife was invented in 1838. This example, however, dates from 1890.
- BISTOURY CACHÉ, c.1850. Invented in the mid-19th century, bistoury caché literally translates from the French as ‘hidden knife’. The device was used to cut internal organs or to open cavities, particularly during the surgical removal of a bladder or kidney stone – a practice known as lithotomy.
- CIRCUMCISION KNIFE, c.1775. Circumcision – the removal of the foreskin of the penis – is practised across the world often for cultural and religious reasons. In some countries it is also promoted for reasons of hygiene and health. This knife dates from the late 18th century.
- CATARACT KNIFE & NEEDLE, 1805. Georg Joseph Beer (1763-1821), an Austrian professor of ophthalmology, invented this cataract knife and needle around 1805. Cataracts cause blurred vision as the lens becomes cloudy and if left untreated can cause blindness. These instruments allowed for the surgical removal of some of the cloudy mass and, if necessary, part or all of the lens itself. Prior to effective anaesthetics, this was an excruciatingly painful process. This particular example dates from 1820.
- ORTHOPEDIC KNIFE, 1855. William Adams (1820-1900), an English surgeon, invented this type of knife for his new procedure called periosteotomy in 1855. This involved un-fusing the bones of the hip joint by cutting the neck of the femur (upper leg bone). He affectionately called it ‘my little thaw’, because the knife was used to cut through and ‘melt’ fused bones.
- LISTON KNIFE, c.1830. Robert Liston (1797-1847), a Scottish surgeon renowned for his speed and precision in surgery, invented this double-edged amputation knife in the 1830s. This particular example is made of steel with a nickel-plated handle. Nickel plating was introduced in the 1890s and meant that the knife could be boiled without it rusting and was therefore ideal for aseptic surgery. It was made by Down Bros, a leading surgical instrument maker, in the 1920s.
- SYRIAN SURGICAL KNIFE, c.900 AD. Most of the blade of this ancient surgical knife is rusty and part of it is broken. The steel blade is slotted into a brass handle. The loop at the end may have been used as a finger hole for gripping. This knife dates to a period when the Islamic world became a major centre for medical study and practice.
- PLAGUE LANCET, c.1600. Plague epidemics ravaged Marseilles in France throughout the 17th and 18th centuries. Lancets, such as the copy shown here, were used to open buboes in order to relieve pressure and also remove poisons from the body – an unsuccessful attempt to cure the patient. The lancet would have been stored in a brass case.
- DOUBLE BLADED LITHOTOME, 1812. This object was used to cut the bladder in order to remove stones – a practice known as lithotomy. Baron Guillaume Dupuytren (1777-1835), a French surgeon and pathologist, invented this double bladed lithotome for the bi-lateral lithotomy procedure he developed in 1812. This procedure became widely used from the 1850s onwards, and this example dates from 1825.
- FALCIFORM AMPUTATION KNIFE, c.1700. The curved shape of this amputation knife was common in the early 1700s. Amputation knives became straighter once the practice of leaving a flap of skin to cover the limb stump became the preferred amputation method. Ebony was a common material for handles as it is a hard-wearing wood. This knife was probably made by Eberle in Germany, as indicated by the inscription on the silver blade.
Hans Christian Andersen, The Little Mermaid, 1837.
Mermaids have teased our imagination for thousands of years. One of the earliest tales originated in ancient Assyria, where the goddess Atargatis transformed herself into a mermaid out of shame for accidentally killing her human lover. Homer called them sirens in the Odyssey, and described them as beautiful singing creatures who lure sailors to their deaths. Throughout history, these seductive beings have been associated with floods, storms, shipwrecks and drownings. They have been depicted in countless forms: in Etrurian sculptures, in Greek jewelry, and in bas-relief on ancient Roman tombs. Christopher Columbus even reported seeing these mythical creatures on his voyage to the Caribbean in 1493.
But could our concept of what a mermaid looks like actually have originated from a real medical disorder?
Sirenomelia is a lethal condition characterised by rotation and fusion of the legs, resulting in what often looks like a fish tail (left). It occurs when the umbilical cord fails to form two arteries, thus preventing a sufficient blood supply from reaching the fetus. As a result, the single artery steals the blood and nutrition from the lower body and diverts it back up to the placenta. Due to malnutrition, the fetus fails to develop two separate limbs.
Sirenomelia, also known as ‘Mermaid Syndrome’, is extremely rare. It affects 1 in 100,000 babies and is 100 times more likely to occur in identical twins. Usually, those born with this condition die within days.
Over the course of my research, I’ve found very little about the disorder’s history. There are snippets here and there which claim that fetuses born with sirenomelia were sometimes preserved in jars and put on display in ‘freak shows’ during the 19th century—but these sources are frustratingly vague. There is brief mention of the condition in a four-volume atlas published in 1891 titled Human Monstrosities, but nothing that hints at how medical practitioners understood sirenomelia in earlier periods.
Perhaps because the disorder is so rare, it’s also been hard for me to locate specimens in anatomical collections. My search in the Hunterian Museum at the Royal College of Surgeons in London came up empty. I did, however, find an early 20th-century example at the National Museum of Health & Medicine in Washington D.C. There are also three fetuses in the Anatomical Museum of the Second University of Naples, which have undergone 3D bone reconstructions (two pictured below).
By far the largest number of fetuses comes from the Vrolik Museum in Amsterdam, whose collection consists of more than 5,000 specimens of human and animal anatomy, embryology, pathology and congenital anomalies. The collection was founded by Gerardus Vrolik (1775 – 1859) and his son, Willem Vrolik (1801 – 1863), both of whom wrote extensively on anatomical deformities in the 18th and 19th centuries. The Vrolik Museum has both wet preparations and skeletal remains, all of which are on display to the public today.
Unlike the first disorder I examined in this series—Harlequin Ichthyosis—sirenomelia is almost invariably fatal. There are no accounts of anyone with this condition surviving in the past. Most died within days of birth due to kidney and bladder failure. Even today, the odds are against those with sirenomelia, though there are a handful of examples of children living past infancy.
In 1988, Tiffany Yorks underwent surgery to separate her legs before her first birthday. She continues to suffer from mobility issues due to her fragile leg bones, and compensates by using crutches or a wheelchair to move around. At the age of 26, she is the longest-surviving sirenomelia patient to date.
If you enjoy reading my articles, please consider becoming a patron of The Chirurgeon’s Apprentice. Support my content by clicking HERE.