Wednesday 4 September 2024

Alain Delon was an Enigmatic Anti-Hero and France’s Most Beautiful Male Movie Star

Alain Delon as psychopath Tom Ripley in René Clément’s Plein Soleil/Purple Noon (1960)
By Ben McCann

Alain Delon’s death at the age of 88 brings down the curtain on one of postwar European cinema’s most important film stars.

Known for his striking “movie star” look – chiselled features, piercing blue eyes – and magnetic screen presence, Delon portrayed characters who seemed on the surface to be effortless and suave.

He was often described as feline. But this outward gracefulness masked a morally dubious, anti-hero persona. Beneath the sharp suits lay icy steel.

A breakthrough role

Born in 1935 in Sceaux, a wealthy Paris suburb, Delon had a difficult childhood, marked by his parents’ divorce, a disrupted schooling and an unhappy stint in the French Navy.

After being spotted by a talent scout at the 1957 Cannes Film Festival, Delon’s breakthrough came in 1960 with the French film Purple Noon, directed by René Clément.

In this adaptation of Patricia Highsmith’s novel The Talented Mr Ripley, Delon played the role of Tom Ripley, a charming but morally ambiguous forger and identity thief.

Setting the standard for future screen versions of Ripley (played by the likes of Matt Damon and Andrew Scott), Delon’s performance was widely acclaimed and established him as a rising star.

Rarely had audiences seen such a cool, enigmatic and morally compromised character. Highsmith was particularly impressed.

A 1960s icon

What followed was a glittering range of roles.

He collaborated twice with the great Italian director Luchino Visconti on Rocco and His Brothers (1960) and The Leopard (1963). Both films were critical successes, further solidifying Delon’s reputation as a leading actor.

In The Leopard, Delon plays Tancredi Falconeri, a young and charming Sicilian nobleman. His chemistry with Claudia Cardinale is one of the film’s highlights.

He moved effortlessly between genres, from crime dramas and thrillers to romantic films and period pieces. In the psychological thriller La Piscine (The Swimming Pool, 1969), Delon starred alongside Romy Schneider.

A year later came Borsalino, a popular gangster film in which Delon starred alongside his great friend Jean-Paul Belmondo.

While deeply rooted in French culture, Delon’s appeal transcended national borders. He became a global star, beloved not only in Europe but also in places like Japan, where he had a huge fan base.

The anti-hero archetype

But Delon’s most remarkable performance came in Le Samouraï (1967). Directed by Jean-Pierre Melville, the film cast Delon as Jef Costello, a stoic, methodical hitman, in a performance that became a benchmark for the “cool” anti-hero archetype in cinema.

It is widely regarded as a masterpiece of minimalist cinema and has had a significant influence on the crime and thriller genres.

Michael Fassbender in The Killer (2023), Forest Whitaker in Ghost Dog: The Way of the Samurai (1999) and Ryan Gosling in Drive (2011) all owe a debt to Delon’s “angel of death” portrayal of the silent hitman. Delon wore a trench coat and fedora in the film: his costume has been endlessly analysed and much imitated.

Delon worked again with Melville in Le Cercle Rouge (The Red Circle, 1970) and Un Flic (A Cop, 1972), plus other great European auteurs like Michelangelo Antonioni, Louis Malle and Jean-Luc Godard, for whom he played twins in Nouvelle Vague (New Wave, 1990).

And don’t forget his role as Klein in Joseph Losey’s gripping Mr Klein (1976), a film set in wartime Paris with Delon playing an art dealer who begins to realise there is another Klein, a Jewish man and a target of the Gestapo. The police begin to investigate him, suspecting he might be the man they are looking for.

It was the clinching proof, wrote film critic David Thomson, that Alain Delon “matters” as an actor.

His final role in 2008 was a memorable one: Julius Caesar in Asterix at the Olympic Games.

He received an honorary Palme d'Or at the Cannes Film Festival in 2019, recognising his contributions to cinema over several decades. After suffering a stroke in 2019, Delon withdrew from public life.

A complicated off-screen life

That withdrawal only fuelled press gossip over his complicated personal life.

Delon’s relationship with Austrian actress Romy Schneider had captivated the public’s imagination. The two met on the set of the film Christine in 1958 and became engaged. Their breakup in 1963 reportedly devastated Schneider.

He later had relationships with the French singer Dalida and Swedish star Ann-Margret, before settling down with French actress Mireille Darc. She was his companion and occasional co-star from 1968 to 1982.

His outspoken political views often scandalised France (he once said he supported France’s far-right party).

More recently, his personal life was marked by controversies, including legal issues involving his four children. His son Anthony (also an actor) spoke publicly about the difficulties he faced growing up in his father’s shadow.

Another son, Alain-Fabien, also had a troubled relationship with his father, including a long estrangement. Delon’s final years were beset by squabbles and accusations among the family; at one point, his children accused Delon’s assistant of abuse and harassment.

What will endure is the “Delon style”, both on and off the screen. He influenced fashion, cultural attitudes and the concept of the “modern man” during the 1960s and 1970s.

Back in April, the New Yorker posed a rhetorical question about Delon: can a film star ever be too good-looking? Look at the towering achievement of Delon’s films and you’ll have your answer.

The Conversation

Ben McCann, Associate Professor of French Studies, University of Adelaide

Subscribe to support our independent and original journalism, photography, artwork and film.

Tuesday 27 August 2024

Who was Hannibal? How One Brilliant General Almost Brought Ancient Rome to its Knees

Hannibal Crossing the Alps; detail from a fresco by Jacopo Ripanda, ca 1510, Palazzo dei Conservatori (Capitoline Museum), Rome


By Darius von Guttner Sporzynski, Australian Catholic University

He lived and died more than 2,000 years ago but Hannibal is remembered as one of history’s most formidable military commanders and as “Rome’s greatest enemy”.

His daring crossing of the Alps, with an army that included war elephants, stands as testament to his tactical brilliance.

The Carthaginian general’s innovative military strategies in his struggle against Rome give us a glimpse into why his fame endures.

An early hostility toward Rome

Hannibal Barca was born in 247 BCE in Carthage, an ancient city in Northern Africa, in what is now Tunisia.

His father is credited with instilling in Hannibal a hostility towards Rome, a deep-seated drive that would shape much of his military career.

Hannibal’s leadership qualities and understanding of military tactics were honed through his experiences in the Carthaginian army.

Hannibal first came to prominence in 219 BCE when the Carthaginian army under his command attacked the city of Saguntum (in modern Spain), triggering the Second Punic War with Rome.

Then came his cunning stratagem that brought his army into Italy all the way from Spain. Hannibal led his troops through the Alps in 218 BCE, catching the Romans off guard.

What’s more, he brought a contingent of war elephants ready for battle.

Hannibal forced Rome to rethink its military strategies. Gilmanshin/Shutterstock

These elephants were trained to instil fear in the enemy during combat.

In the series of battles with the Romans, Hannibal proved he was capable of undertaking seemingly impossible feats to achieve strategic advantages.

In the Battle of the Trebia (218 BCE) Hannibal lured the Romans into an ambush on the Trebia River.

More victories soon followed. In both the Battle of Lake Trasimene and the Battle of Cannae Hannibal’s army inflicted devastating casualties on the significantly larger Roman forces.

Hannibal’s threat to Rome stemmed from his innovative tactics, psychological warfare, and his ability to exploit Roman leadership’s overconfidence, their rigid adherence to established war tactics, and their initial tendency to underestimate the power and speed of Hannibal’s cavalry.

Hannibal forced Rome to rethink its military strategies and adapt in ways that would ultimately shape the future of the Roman Empire.

Master of strategy

Hannibal’s tactical acumen was unparalleled. He consistently outmanoeuvred Roman armies, employing strategies that took advantage of the terrain and the element of surprise.

His victory at the Battle of Cannae (216 BCE) is illustrative of his tactical genius.

By executing a double envelopment manoeuvre, Hannibal managed to encircle and annihilate the Romans. Hannibal’s clever use of the cavalry allowed him to outflank the Roman infantry.

A year earlier, at the Battle of Lake Trasimene (217 BCE), he had adapted his strategy to a different terrain, using fog to conceal his troops. The effect was a devastating ambush of the Romans.

The news of massive casualties delivered a profound psychological blow to Rome.

Psychological warfare

Hannibal understood the power of psychological warfare.

He knew fear undermined the confidence of Roman soldiers and their leaders.

The phrase “Hannibal is at the gates” became a Roman proverb, reflecting the pervasive horror he instilled in his opponents.

Hannibal’s use of psychological strategies extended to his own troops as well.

To maintain high morale and discipline, he ensured his soldiers were well fed, and he shared in their hardships, sleeping on the ground wrapped in a blanket.

His leadership proved inspirational.

Exploiting Roman weaknesses

Hannibal was adept at identifying the weaknesses in Roman military and political structures. The Roman practice of alternating command between two consuls proved to be a vulnerability that Hannibal exploited.

On several occasions, he timed his attacks to coincide with the command of the consul less experienced in the field, leading to disastrous defeats for Rome.

Hannibal employed spies and gathered intelligence paid for by silver from Carthaginian-controlled mines in Spain. The information allowed him to anticipate Roman movements and counter their strategies.

Hannibal’s campaigns had lasting effects on Rome. His prolonged presence in Italy, despite never capturing Rome itself, forced the Romans to adapt their military strategies and organisation of their armies.

The Roman military became more flexible and began to place greater emphasis on cavalry and intelligence gathering. They learned from the very tactics that had caused them so much trouble. This led to Rome’s eventual victory in the Second Punic War.

Hannibal’s legacy

Hannibal’s legacy extends beyond his immediate impact on Rome. His military strategies and tactics continue to be studied in military academies around the world.

His ability to conduct successful campaigns with limited resources and his innovative use of terrain and psychological warfare remain relevant for military leaders today.

Commanders such as Julius Caesar, Napoleon, and George S. Patton drew inspiration from Hannibal’s methods, demonstrating the timeless nature of his military genius.

An engraving by Dutch artist Cornelis Cort depicts the battle between Scipio and Hannibal at Zama. The Metropolitan Museum

Hannibal’s downfall

Despite his victories against the Romans, Hannibal did not conquer the city of Rome, allowing the Romans to regroup. His position was weakened because his troops lacked reinforcements and supplies from Carthage.

When the Romans adopted a strategy of attrition, avoiding large-scale battles with the Carthaginian general, Hannibal’s army was cut off from supply lines.

At the Battle of Zama in modern-day Tunisia (in 202 BCE) Hannibal was defeated by the young Roman general Scipio Africanus. Scipio used Hannibal’s own tactics against him, marking the end of the Second Punic War.

Hannibal’s career never recovered. He took his own life in 183 BCE to avoid capture by the Romans.

A long legacy

Hannibal remains a towering figure in military history, not only for his bold campaigns and tactical brilliance but also for his ability to challenge and adapt to the formidable Roman war machine.

His fame as a master strategist continues to captivate and inspire today.

The Conversation

Darius von Guttner Sporzynski, Historian, Australian Catholic University


Subscribe to support our independent and original journalism, photography, artwork and film.

Sunday 25 August 2024

AI Pioneers Want Bots to Replace Human Teachers – Here’s Why that’s Unlikely

History shows technological solutions in education often fall flat. Alexander Sikov via Getty


By Annette Vee, University of Pittsburgh

OpenAI co-founder Andrej Karpathy envisions a world in which artificial intelligence bots can be made into subject matter experts that are “deeply passionate, great at teaching, infinitely patient and fluent in all of the world’s languages.” Through this vision, the bots would be available to “personally tutor all 8 billion of us on demand.”

The embodiment of that idea is his latest venture, Eureka Labs, which is merely the newest prominent example of how tech entrepreneurs are seeking to use AI to revolutionize education.

Karpathy believes AI can solve a long-standing challenge: the scarcity of good teachers who are also subject experts.

And he’s not alone. OpenAI CEO Sam Altman, Khan Academy CEO Sal Khan, venture capitalist Marc Andreessen and University of California, Berkeley computer scientist Stuart Russell also dream of bots becoming on-demand tutors, guidance counselors and perhaps even replacements for human teachers.

Andrej Karpathy, founder of Eureka Labs and a co-founder of OpenAI, in 2020. https://karpathy.ai/

As a researcher focused on AI and other new writing technologies, I’ve seen many cases of high-tech “solutions” for teaching problems that fizzled. AI certainly may enhance aspects of education, but history shows that bots probably won’t be an effective substitute for humans. That’s because students have long shown resistance to machines, however sophisticated, and a natural preference to connect with and be inspired by fellow humans.

The costly challenge of teaching writing to the masses

As the director of the English Composition program at the University of Pittsburgh, I oversee instruction for some 7,000 students a year. Programs like mine have long wrestled with how to teach writing efficiently and effectively to so many people at once.

The best answer so far is to keep class sizes to no more than 15 students. Research shows that students learn writing better in smaller classes because they are more engaged.

Yet small classes require more instructors, and that can get expensive for school districts and colleges.

Resuscitating dead scholars

Enter AI. Imagine, Karpathy posits, that the great theoretical physicist Richard Feynman, who has been dead for over 35 years, could be brought back to life as a bot to tutor students.

For Karpathy, an ideal learning experience would be working through physics material “together with Feynman, who is there to guide you every step of the way.” Feynman, renowned for his accessible way of presenting theoretical physics, could work with an unlimited number of students at the same time.

In this vision, human teachers still design course materials, but they are supported by an AI teaching assistant. This teacher-AI team “could run an entire curriculum of courses on a common platform,” Karpathy wrote. “If we are successful, it will be easy for anyone to learn anything,” whether it be a lot of people learning about one subject, or one person learning about many subjects.

Other efforts to personalize learning fall short

Yet technologies for personal learning aren’t new. Exactly 100 years ago, at the 1924 meeting of the American Psychological Association, inventor Sidney Pressey unveiled an “automatic teacher” made out of typewriter parts that asked multiple-choice questions.

In the 1950s, the psychologist B. F. Skinner designed “teaching machines.” If a student answered a question correctly, the machine advanced to ask about the problem’s next step. If not, the student stayed on that step of the problem until they solved it.

In both cases, students received positive feedback for correct answers. This gave them confidence as well as skills in the subject. The problem was that students didn’t learn much – they also found these nonhuman approaches boring, education writer Audrey Watters documents in “Teaching Machines.”

More recently, the world of education saw the rise and fall of “massive open online courses,” or MOOCs. These classes, which delivered video and quizzes, were heralded by The New York Times and others for their promise of democratizing education. Again, students lost interest and logged off.

Other web-based efforts have popped up, including course platforms like Coursera and Outlier. But the same problem persists: There’s no genuine interactivity to keep students engaged. One of the latest casualties in online learning was 2U, which acquired leading MOOC company edX in 2021 and in July 2024 filed for bankruptcy restructuring to reduce its US$945 million debt load. The culprit: falling demand for services.

Now comes the proliferation of AI-fueled platforms. Khanmigo deploys AI tutors to, as Sal Khan writes in his latest book, “personalize and customize coaching, as well as adapt to an individual’s needs while hovering beside our learners as they work.”

The educational publisher Pearson, too, is integrating AI into its educational materials. More than 1,000 universities are adopting these materials for fall 2024.

AI in education isn’t just coming; it’s here. The question is how effective it will be.

Drawbacks in AI learning

Some tech leaders believe bots can customize teaching and replace human teachers and tutors, but they’re likely to face the same problem as these earlier attempts: Students may not like it.

There are important reasons why, too. Students are unlikely to be inspired and excited the way they can be by a live instructor. Students in crisis often turn to trusted adults like teachers and coaches for help. Would they do the same with a bot? And what would the bot do if they did? We don’t know yet.

A lack of data privacy and security can also be a deterrent. These platforms collect volumes of information on students and their academic performance that can be misused or sold. Legislation may try to prevent this, but some popular platforms are based in China, out of reach of U.S. law.

Finally, there are concerns even if AI tutors and teachers become popular. If a bot teaches millions of students at once, we may lose diversity of thought. Where does originality come from when everyone receives the same teachings, especially if “academic success” relies on regurgitating what the AI instructor says?

The idea of an AI tutor in every pocket sounds exciting. I would love to learn physics from Richard Feynman or writing from Maya Angelou or astronomy from Carl Sagan. But history reminds us to be cautious and keep a close eye on whether students are actually learning. The promises of personalized learning are no guarantee of positive results.

The Conversation

Annette Vee, Associate Professor of English, University of Pittsburgh

Subscribe to support our independent and original journalism, photography, artwork and film.

Thursday 15 August 2024

Singapore Fashion Now: A Celebration of Craft, Culture, and Innovation



Explore the vibrant world of Singaporean fashion with a captivating short film by award-winning director Franco Di Chiera. This cinematic overview offers a sneak peek into the Singapore Fashion Now: Runway exhibition, currently on display at the Asian Civilisations Museum. With editing by Paul James McDonnell and a dynamic score by Ben Sound, the film beautifully encapsulates the essence of this final and largest edition of the exhibition series. 

The exhibition, at the Asian Civilisations Museum, includes 28 designers.

The special exhibition #SGFashionNow: Runway Singapore showcases the works of established and emerging designers. This edition emphasizes sustainable exhibition design, utilizing upcycled materials to create a truly innovative and eco-conscious display.

The show itself is a celebration of Singapore’s multicultural heritage, exploring the rich themes of craftsmanship, innovation in tradition, and urban styles. 

It takes visitors on a journey from the bespoke tailoring of the 1930s to the modern-day intricacies of labels like Thomas Wee and Laichan. The show also highlights designers who blend traditional techniques with cutting-edge technologies, reflecting Singapore's dual identity as both a cultural hub and a technological powerhouse. 

For those fascinated by the ever-evolving landscape of streetwear, the exhibition’s Urbanite section is a must-see. It delves into the rise of streetwear in Singapore and its impact on the global stage, featuring edgy labels like Youths in Balaclava and The Salvages. Singapore Fashion Now is more than just an exhibition; it’s a testament to the dynamic, multifaceted nature of the city’s fashion scene.

Don’t miss this chance to explore the city’s sartorial journey: catch the exhibition at the Asian Civilisations Museum before it closes on 1 September 2024.
 
For more details, visit the Asian Civilisations Museum, 1 Empress Place, Singapore, or contact them at +65 6332 7798. Open daily 10am to 7pm, and until 9pm on Fridays.

Subscribe to support our independent and original journalism, photography, artwork and film.

Wednesday 14 August 2024

Copenhagen Fashion Week Spring/Summer 2025: Streetstyle Photographed by Andrea Heinsohn



















Subscribe to support our independent and original journalism, photography, artwork and film.

ChatGPT and the Movie ‘Her’ are Just the Latest Example of the ‘Sci-Fi Feedback Loop’

ChatGPT and the films 'Her' and 'Blade Runner 2049' all pull from one another as they develop the concept of a virtual assistant. Warner Bros

By Rizwan Virk, Arizona State University

In May 2024, OpenAI CEO Sam Altman sparked a firestorm by referencing the 2013 movie “Her” to highlight the novelty of the latest iteration of ChatGPT.

Within days, actor Scarlett Johansson, who played the voice of Samantha, the AI girlfriend of the protagonist in the movie “Her,” accused the company of improperly imitating her voice after she had spurned its offer to make her the voice of ChatGPT’s new virtual assistant. Johansson went on to retain legal counsel and has been invited to testify before Congress.

This tiff highlights a broader interchange between Hollywood and Silicon Valley that’s called the “sci-fi feedback loop.” The loop, which is the subject of my doctoral research, describes how science fiction and technological innovation feed off each other. This dynamic is bidirectional and can sometimes play out over many decades, resulting in an ongoing cycle.

Fiction sparks dreams of Moon travel

One of the most famous examples of this loop is Moon travel.

Jules Verne’s 1865 novel “From the Earth to the Moon” and the fiction of H.G. Wells inspired one of the first films to visualize such a journey, 1902’s “A Trip to the Moon.”

The fiction of Verne and Wells also influenced future rocket scientists such as Robert Goddard, Hermann Oberth and Oberth’s better-known protégé, Wernher von Braun. The innovations of these men – including the V-2 rocket built by von Braun during World War II – inspired works of science fiction, such as the 1950 film “Destination Moon,” which included a rocket that looked just like the V-2.

Films like “Destination Moon” would then go on to bolster public support for lavish government spending on the space program.

The 1902 silent short ‘A Trip to the Moon.’

Creative symbiosis

The sci-fi feedback loop generally follows the same cycle.

First, the technological climate of a given era will shape that period’s science fiction. For example, the personal computing revolution of the 1970s and 1980s directly inspired the works of cyberpunk writers Neal Stephenson and William Gibson.

Then the sci-fi that emerges will go on to inspire real-world technological innovation. In his 1992 classic “Snow Crash,” Stephenson coined the term “metaverse” to describe a 3-D, video game-like world accessed through virtual reality goggles.

Silicon Valley entrepreneurs and innovators have been trying to build a version of this metaverse ever since. The virtual world of the video game Second Life, released in 2003, took a stab at this: Players lived in virtual homes, went to virtual dance clubs and virtual concerts with virtual girlfriends and boyfriends, and were even paid virtual dollars for showing up at virtual jobs.

This technology seeded yet more fiction; in my research, I discovered that sci-fi novelist Ernest Cline had spent a lot of time playing Second Life, and it inspired the metaverse of his bestselling novel “Ready Player One.”

The cycle continued: Employees of Oculus VR – now known as Meta Reality Labs – were given copies of “Ready Player One” to read as they developed the company’s virtual reality headsets. When Facebook changed its name to Meta in 2021, it did so in the hopes of being at the forefront of building the metaverse, though the company’s grand ambitions have tempered somewhat.

Metaverse Fashion Week, the first virtual fashion week, was hosted by the Decentraland virtual world in 2022. Vittorio Zunino Celotto/Getty Images

Another sci-fi franchise that has its fingerprints all over this loop is “Star Trek,” which first aired in 1966, right in the middle of the space race.

Steve Perlman, the inventor of Apple’s QuickTime media format and player, said he was inspired by an episode of “Star Trek: The Next Generation,” in which Lt. Commander Data, an android, sifts through multiple streams of audio and video files. And Rob Haitani, the designer of the Palm Pilot’s operating system, has said that the bridge on the Enterprise influenced its interface.

In my research, I also discovered that the show’s Holodeck – a room that could simulate any environment – influenced both the name and the development of Microsoft’s HoloLens augmented reality glasses.

From ALICE to ‘Her’

Which brings us back to OpenAI and “Her.”

In the movie, the protagonist, Theodore, played by Joaquin Phoenix, acquires an AI assistant, “Samantha,” voiced by Johansson. He begins to develop feelings for Samantha – so much so that he starts to consider her his girlfriend.

ChatGPT-4o, the latest version of the generative AI software, seems to be able to cultivate a similar relationship between user and machine. Not only can ChatGPT-4o speak to you and “understand” you, but it can also do so sympathetically, as a romantic partner would.

There’s little doubt that the depiction of AI in “Her” influenced OpenAI’s developers. In addition to Altman’s tweet, the company’s promotional videos for ChatGPT-4o feature a chatbot speaking with a job candidate before his interview, propping him up and encouraging him – as, well, an AI girlfriend would. The AI featured in the clips, Ars Technica observed, was “disarmingly lifelike,” and willing “to laugh at your jokes and your dumb hat.”

But you might be surprised to learn that a previous generation of chatbots inspired Spike Jonze, the director and screenwriter of “Her,” to write the screenplay in the first place. Nearly a decade before the film’s release, Jonze had interacted with a version of the ALICE chatbot, which was one of the first chatbots to have a defined personality – in ALICE’s case, that of a young woman.

Filmmaker Spike Jonze won the Oscar for best original screenplay for ‘Her’ in 2014. Kevork Djansezian/Getty Images

The ALICE chatbot won the Loebner Prize three times, which was awarded annually until 2019 to the AI software that came closest to passing the Turing Test, long seen as a threshold for determining whether artificial intelligence has become indistinguishable from human intelligence.

The sci-fi feedback loop has no expiration date. AI’s ability to form relationships with humans is a theme that continues to be explored in fiction and real life.

A few years after “Her,” “Blade Runner 2049” featured a virtual girlfriend, Joi, with a holographic body. Well before the latest drama with OpenAI, companies had started developing and pitching virtual girlfriends, a process that will no doubt continue. As science fiction writer and social media critic Cory Doctorow wrote in 2017, “Science fiction does something better than predict the future: It influences it.”

The Conversation

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University


Subscribe to support our independent and original journalism, photography, artwork and film.

Thursday 8 August 2024

‘An Engineering and Biological Miracle’ – How I Fell for the Science, and the Poetry, of the Eye

recep kart/Shutterstock
By Hessom Razavi, The University of Western Australia

My first encounter came as a medical student. Under high magnification, I examined a colleague’s iris, the coloured part of their eye encircling the pupil.

I watched as the muscle fibres moved rhythmically, undulating between dilation and constriction. It looked like an underwater plant, swaying in a current.

Mesmerising. But in a busy university curriculum the experience quickly faded, to be replaced by the next clinical rotation. I forgot ophthalmology; “maybe orthopaedic surgery or emergency medicine are for me”, I thought.

But eyes returned, this time while I was a junior doctor in residency. Assisting in surgery, I observed a patient’s retina through an operating microscope. Here was a cinematic view of the orb, as if viewed from a spacecraft over a Martian landscape.

The internal surface glowed blood orange (the colour once ascribed to its rich blood supply, now attributed to a layer of underlying pigmented cells). Within this landscape ran red rivulets, a network of branching blood vessels.

The Greek anatomist Herophilus thought this pattern resembled a casting net, leading to “retiform” (meaning reticular or netlike), which became “retina” in contemporary language (the light-sensitive film at the back of the eye). I was struck by the intricacy of this secret globe, this gallery of miniature art.

The term “beauty is in the eye of the beholder” took on new connotations, and I turned to pursuing ophthalmology. Aside from the organ’s intrinsic appeal, I was struck by the technicality of eye surgery, and the apparent mystique of ophthalmologists themselves.

These unruffled surgeons appeared to float above the general fray, waltzing around the hospital with fancy eye equipment and clever jargon. No one really knew what they did, but they looked cool.

Acceptance into ophthalmology specialist training was notoriously competitive, with only one or two doctors accepted each year in our state. “Why not,” I thought, and went for it, planning my campaign for eligibility. Among other things, this included me experiencing blindness for 24 hours, by using a blindfold as part of a fundraising event, and conducting research on childhood eye disease in Iran, my country of origin.

Nine years later I was a qualified ophthalmologist, having learned the eye’s workings in both health and when diseased. I had come to view the eye as an engineering and biological miracle.

Hessom Razavi examining a patient’s eye. Photographer: Frances Andrijich, CC BY

Mammals with seeing brains

A wonderfully elastic ball, the eye can be thought of as housing a camera at the front. This camera focuses incoming light through compound lenses (the cornea and crystalline lens), which are separated by an aperture (the pupil), to form a fine beam.

This beam travels towards a transducer (an electronic device turning one form of energy into another) at the back of the eye (the retina). The transducer converts photons into electrical signals at a rate of around 8.8 megabits (million bits) per second, just shy of a decent ethernet connection.

Carried in an insulated cable (the optic nerve), this electrical current runs backwards through the brain, to the visual cortex. This is the part of the brain that sits just in front of the bony bulge at the back of your skull.

Here is vision’s supercomputer, where incoming, semi-processed data is organised into a final experience of shapes, colours, contrast and movement. Et voila, there you have it: high-definition, stereoscopic human vision.

The front part of the eye is composed mainly of water. kei907/shutterstock

While the front part of the eye is mainly composed of water, the back is nature’s version of bioelectronics (the convergence of biology and electronics).

Eyesight, then, is an interplay of light, water and electricity, in networks both elemental and complex. This exquisite balance may be further demonstrated through two examples.

First, consider the structure of the cornea. This is the clear “windscreen” at the front of the eye, the only completely transparent tissue in the human body. Its living cells are a plexus of water and collagen, glass-like and liquid enough to permit light, sturdy enough to withstand trauma. Here again lies a balance between the eye’s delicacy and its resilience.

Second, let’s look at the eyes’ development, as direct extensions of our brains. When we are embryos from weeks three to ten, two folds appear on our forebrains (the forward-most portion of our brain). These folds extend forwards, becoming stalks. In turn, they are capped with little cups that turn into globes, later encased in eyelids and lashes.

The brain stretches forward, in other words, to form its own eyes. It’s brain tissue, then, that watches the world – we are the mammals with seeing brains.

Photographs revert to negatives

The descriptions above could perhaps be characterised as a meeting of science and lyricism. This is no accident. While ophthalmology concerns itself with optics, a mathematical affair, I was the schoolkid who loved English class.

Whether writing short stories, or nodding to hip-hop’s street poetry, I was drawn to language. These days I’m predominantly a doctor and family man, and only a dilettante as a writer. Still, I seek language out in the micro-gaps of a day, predawn before the kids wake, or on train rides to and from work.

There’s nothing glamorous about this, and nor is it special. Doctor-writers are far from rare – think of history’s Anton Chekhov or William Carlos Williams, the US’s Atul Gawande, or our own Karen Hitchcock or Michelle Johnston. So far, I’m the only writerly eye surgeon I know of (any others out there – shout!).

Author Margaret Lobenstine believes this sort of “renaissance soul” resides in all of us; after all, we have two cerebral hemispheres, one for reason and one for art (in truth though, the hemispheres cooperate on most tasks).

Let’s pivot fully from eyeballs to writing then, and specifically to poetry, my favoured sandpit.

Robert Frost said, “to be a poet is a condition, not a profession”. Most poets write, I believe, because they must, not because it’s fun or easy (although occasionally it’s both). Sometimes we write to understand or at least to name something, to gather up the events and emotions that move us, dangling like threads to be spooled up into something resembling sense.

In a medical day, I am periodically struck by a patient encounter that leaves me reeling. Perhaps it’s an unexpected confession, or a scrap of a life story. Either way, it’s the emotional charge that, like a vein of gold, points towards a buried poem.

Let’s take a real-life example from my practice, an elderly lady whom we shall call Iris (pun intended).

Iris presents to me with failing vision. Examining her eyes, I see “geographic atrophy”, little islands of missing retinal tissue worn away over time. This is a form of incurable, age-related, macular degeneration. It results in permanent loss of central vision, with peripheral vision remaining intact.

It’s not good news; my stomach tightens as I prepare to deliver it.

Iris replies, tearily, that she just lost her husband of 60 years. She’s now alone and becoming blind. I’m taken aback – what can one honestly say to this?

Sure, there are visual magnifiers, home modifications, other practical aids that may guardrail her physical safety. But her anguish goes beyond this; she’s on the edge of a personal precipice, and teetering. There’s electricity in the consult room, a lightning-rod moment for sure.

How might a poet view this scene? Placing Iris in the centre, let’s start with her appearance – her auburn-dyed hair, her knobbly walking stick, her potpourri perfume – enough to make her real. In addition to portraiture, poetry deals in metaphors; what are some for Iris’s grief?

How about:

Colour photographs revert
to their negatives, old-fashioned film
stark and inverting reality,
her life recognisable
yet draining of hue.

Or this:

Turned over, her hourglass
clumps onto the table,
sand trickling away
from having had, towards loss, the two bulbs
painfully, inextricably linked.

Good poetry must go further, seeking the patterns beneath the surface. What precisely is it about Iris that moves me so? She is losing things, important things. Witnessing this touches my deepest fears, knowing that, like an unwelcome house guest, loss visits us all, sometimes staying for good.

As my Persian countryman Rumi wrote, “this human being is a guest house”. Losing our own physical abilities or our loved ones, what would become of us?

Distilling this further, what exactly is loss, its weight and texture?

Inversions,
your cherished glass of shiraz shatters
on the tiles, your laden table
upended. Warmth whistles
out through the cracks, cold rises up.
Midnight:
your reasons for living dwindle,
walking out the door
one by one.

Dark, heavy material no doubt; well, welcome to medicine, and to real life. No wonder Iris’s visit rattles me. The poet must face this discomfort, exploring the interplay between the minuscule and the panoramic, the worldly and the transcendent.

Tasked with creating visions for life, from its mundane to its profoundest moments, poets, then, are our seers.

Anger and solace

I’m now in my 18th year working exclusively with eyes, the latter half as a qualified consultant ophthalmologist. These days, the toughest conditions I face are diseases without a cure, such as Iris’s geographic atrophy, or vision loss that could have been prevented, such as solar retinopathy.

In other scenarios, there are eye diseases caused by modern living. An example of this is diabetic eye disease, which disproportionately affects Indigenous people. When compared with non-Indigenous people, Indigenous Australians suffer three times the rate of vision loss from diabetes.

The reasons for this are manifold, and include the easy access to sugar-laden beverages in many Indigenous communities. As ophthalmologists, we deal with the downstream effects of high blood sugar levels. This manifests as “diabetic macular oedema”, where a swelling at the back of the eye leads to loss of vision.

Fortunately, we have good treatments for this condition. But prevention is far better than cure. As one measure, why don’t we impose a sugar tax, as more than 100 other countries have done? By introducing refined sugars into a healthy traditional diet, modern Australia has arguably created this problem. By corollary, we have a duty to solve it.

This is an opportunity for resistance and empowerment.

Hauled over on ships,
white crystals in barrels -
dispossession’s sweetener - now
sat on shelves, bright bottles
singing cheap songs
to thirsty eyes.
We’ll brand you yet:
mark your barrels ‘poison’.

Conditions like this, where modern society harms people – for astronomical corporate profits, mind you – are infuriating.

Thankfully, there is solace in my ongoing fascination with the eye. There are moments of sheer beauty; images of fluorescein angiography, for example, where the retina’s blood vessels are highlighted with a fluorescent dye as a diagnostic tool.

These angiograms remind me of lightning storms in our state’s northwest, where cloud-to-cloud and sheet lightning flash in the night sky in split-second forks and streams. Much as power and charge flow in the sky, so blood is distributed in the back of the eye.

Also spurring me on are patients’ success stories, where sight is restored or blindness prevented.

Twenty years in a Thai refugee camp,
now sat in front of me,
grandma from Myanmar.
Twenty years to lose light –
this cataract surgery won’t
return your nation, grandma, but at least
it’s restored your sight.

These stories abound, such is the privilege of my profession.

A race between science and time

There may even be hope for Iris. Her condition, geographic atrophy, is caused in part by her immune system, and its complement proteins. This network of proteins marks selected entities (typically pathogens or tumour cells) for destruction by immune cells such as lymphocytes, phagocytes and macrophages.

For reasons including localised inflammation and reduced oxygen delivery, this response can, in ageing, be misdirected towards healthy retinal tissue, leading to its destruction – a process akin to friendly fire in battle.

For Iris, the cavalry may be cresting the hill. In 2023, two new medications were approved for the treatment of geographic atrophy in the US. Both block targets within our complement system and, while not curative, have been shown to slow (although not reverse or stop) the disease. By late 2024, we should know whether one of these drugs, pegcetacoplan, is approved in Australia.

Starter’s pistol fires! A race afoot
between science and time.
Do the molecules work
and – as the clock chimes –
will they cross the line
to save sight?
The Conversation

Hessom Razavi, Associate professor, The University of Western Australia


Subscribe to support our independent and original journalism, photography, artwork and film.