Wednesday, 14 August 2024

ChatGPT and the Movie ‘Her’ are Just the Latest Example of the ‘Sci-Fi Feedback Loop’

ChatGPT and the films 'Her' and 'Blade Runner 2049' all pull from one another as they develop the concept of a virtual assistant. Warner Bros

By Rizwan Virk, Arizona State University

In May 2024, OpenAI CEO Sam Altman sparked a firestorm by referencing the 2013 movie “Her” to highlight the novelty of the latest iteration of ChatGPT.

Within days, actor Scarlett Johansson, who played the voice of Samantha, the AI girlfriend of the protagonist in the movie “Her,” accused the company of improperly using her voice after she had spurned their offer to make her the voice of ChatGPT’s new virtual assistant. Johansson ended up hiring legal counsel and has been invited to testify before Congress.

This tiff highlights a broader interchange between Hollywood and Silicon Valley that’s called the “sci-fi feedback loop.” The subject of my doctoral research, the sci-fi feedback loop describes how science fiction and technological innovation feed off each other. This dynamic is bidirectional and can sometimes play out over many decades, resulting in an ongoing loop.

Fiction sparks dreams of Moon travel

One of the most famous examples of this loop is Moon travel.

Jules Verne’s 1865 novel “From the Earth to the Moon” and the fiction of H.G. Wells inspired one of the first films to visualize such a journey, 1902’s “A Trip to the Moon.”

The fiction of Verne and Wells also influenced future rocket scientists such as Robert Goddard, Hermann Oberth and Oberth’s better-known protégé, Wernher von Braun. The innovations of these men – including the V-2 rocket built by von Braun during World War II – inspired works of science fiction, such as the 1950 film “Destination Moon,” which included a rocket that looked just like the V-2.

Films like “Destination Moon” would then go on to bolster public support for lavish government spending on the space program.

The 1902 silent short ‘A Trip to the Moon.’

Creative symbiosis

The sci-fi feedback loop generally follows the same cycle.

First, the technological climate of a given era will shape that period’s science fiction. For example, the personal computing revolution of the 1970s and 1980s directly inspired the works of cyberpunk writers Neal Stephenson and William Gibson.

Then the sci-fi that emerges will go on to inspire real-world technological innovation. In his 1992 classic “Snow Crash,” Stephenson coined the term “metaverse” to describe a 3-D, video game-like world accessed through virtual reality goggles.

Silicon Valley entrepreneurs and innovators have been trying to build a version of this metaverse ever since. The virtual world of the video game Second Life, released in 2003, took a stab at this: Players lived in virtual homes, went to virtual dance clubs and virtual concerts with virtual girlfriends and boyfriends, and were even paid virtual dollars for showing up at virtual jobs.

This technology seeded yet more fiction; in my research, I discovered that sci-fi novelist Ernest Cline had spent a lot of time playing Second Life, and it inspired the metaverse of his bestselling novel “Ready Player One.”

The cycle continued: Employees of Oculus VR – now known as Meta Reality Labs – were given copies of “Ready Player One” to read as they developed the company’s virtual reality headsets. When Facebook changed its name to Meta in 2021, it did so in the hopes of being at the forefront of building the metaverse, though the company’s grand ambitions have tempered somewhat.

Digitally rendered woman wearing pink outfit strolls along a runway.
Metaverse Fashion Week, the first virtual fashion week, was hosted by the Decentraland virtual world in 2022. Vittorio Zunino Celotto/Getty Images

Another sci-fi franchise that has its fingerprints all over this loop is “Star Trek,” which first aired in 1966, right in the middle of the space race.

Steve Perlman, the inventor of Apple’s QuickTime media format and player, said he was inspired by an episode of “Star Trek: The Next Generation,” in which Lt. Commander Data, an android, sifts through multiple streams of audio and video files. And Rob Haitani, the designer of the Palm Pilot’s operating system, has said that the bridge on the Enterprise influenced its interface.

In my research, I also discovered that the show’s Holodeck – a room that could simulate any environment – influenced both the name and the development of Microsoft’s HoloLens augmented reality glasses.

From ALICE to ‘Her’

Which brings us back to OpenAI and “Her.”

In the movie, the protagonist, Theodore, played by Joaquin Phoenix, acquires an AI assistant, “Samantha,” voiced by Johansson. He begins to develop feelings for Samantha – so much so that he starts to consider her his girlfriend.

ChatGPT-4o, the latest version of the generative AI software, seems to be able to cultivate a similar relationship between user and machine. Not only can ChatGPT-4o speak to you and “understand” you, but it can also do so sympathetically, as a romantic partner would.

There’s little doubt that the depiction of AI in “Her” influenced OpenAI’s developers. In addition to Altman’s tweet, the company’s promotional videos for ChatGPT-4o feature a chatbot speaking with a job candidate before his interview, propping him up and encouraging him – as, well, an AI girlfriend would. The AI featured in the clips, Ars Technica observed, was “disarmingly lifelike,” and willing “to laugh at your jokes and your dumb hat.”

But you might be surprised to learn that a previous generation of chatbots inspired Spike Jonze, the director and screenwriter of “Her,” to write the screenplay in the first place. Nearly a decade before the film’s release, Jonze had interacted with a version of the ALICE chatbot, which was one of the first chatbots to have a defined personality – in ALICE’s case, that of a young woman.

Young man wearing tuxedo smiles as he holds a gold statuette.
Filmmaker Spike Jonze won the Oscar for best original screenplay for ‘Her’ in 2014. Kevork Djansezian/Getty Images

The ALICE chatbot won the Loebner Prize three times. Awarded annually until 2019, the prize went to the AI software that came closest to passing the Turing Test, long seen as a threshold for determining whether artificial intelligence has become indistinguishable from human intelligence.

The sci-fi feedback loop has no expiration date. AI’s ability to form relationships with humans is a theme that continues to be explored in fiction and real life.

A few years after “Her,” “Blade Runner 2049” featured a virtual girlfriend, Joi, with a holographic body. Well before the latest drama with OpenAI, companies had started developing and pitching virtual girlfriends, a process that will no doubt continue. As science fiction writer and social media critic Cory Doctorow wrote in 2017, “Science fiction does something better than predict the future: It influences it.”

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University


Subscribe to support our independent and original journalism, photography, artwork and film.

Friday, 9 August 2024

‘An Engineering and Biological Miracle’ – How I Fell for the Science, and the Poetry, of the Eye

recep kart/Shutterstock
By Hessom Razavi, The University of Western Australia

My first encounter came as a medical student. Under high magnification, I examined a colleague’s iris, the coloured part of their eye encircling the pupil.

I watched as the muscle fibres moved rhythmically, undulating between dilation and constriction. It looked like an underwater plant, swaying in a current.

Mesmerising. But in a busy university curriculum the experience quickly faded, to be replaced by the next clinical rotation. I forgot ophthalmology; “maybe orthopaedic surgery or emergency medicine are for me”, I thought.

But eyes returned, this time while I was a junior doctor in residency. Assisting in surgery, I observed a patient’s retina through an operating microscope. Here was a cinematic view of the orb, as if viewed from a spacecraft over a Martian landscape.

The internal surface glowed blood orange (the colour once ascribed to its rich blood supply, now attributed to a layer of underlying pigmented cells). Within this landscape ran red rivulets, a network of branching blood vessels.

The Greek anatomist Herophilus thought this pattern resembled a casting net, leading to “retiform” (meaning reticular or netlike), which became “retina” in contemporary language (the light-sensitive film at the back of the eye). I was struck by the intricacy of this secret globe, this gallery of miniature art.

The term “beauty is in the eye of the beholder” took on new connotations, and I turned to pursuing ophthalmology. Aside from the organ’s intrinsic appeal, I was struck by the technicality of eye surgery, and the apparent mystique of ophthalmologists themselves.

These unruffled surgeons appeared to float above the general fray, waltzing around the hospital with fancy eye equipment and clever jargon. No one really knew what they did, but they looked cool.

Acceptance into ophthalmology specialist training was notoriously competitive, with only one or two doctors accepted each year in our state. “Why not,” I thought, and went for it, planning my campaign for eligibility. Among other things, this included me experiencing blindness for 24 hours, by using a blindfold as part of a fundraising event, and conducting research on childhood eye disease in Iran, my country of origin.

Nine years later I was a qualified ophthalmologist, having learned the eye’s workings in both health and when diseased. I had come to view the eye as an engineering and biological miracle.

A photo of a man (the author) looking into the eye of a female patient.
Hessom Razavi examining a patient’s eye. Photographer: Frances Andrijich, CC BY

Mammals with seeing brains

A wonderfully elastic ball, the eye can be thought of as housing a camera at the front. This camera focuses incoming light through compound lenses (the cornea and crystalline lens), which are separated by an aperture (the pupil), to form a fine beam.

This beam travels towards a transducer (an electronic device turning one form of energy into another) at the back of the eye (the retina). The transducer converts photons into electrical signals at a rate of around 8.8 megabits (million bits) per second, just shy of a decent ethernet connection.

Carried in an insulated cable (the optic nerve), this electrical current runs backwards through the brain, to the visual cortex. This is the part of the brain that sits just in front of the bony bulge at the back of your skull.

Here is vision’s supercomputer, where incoming, semi-processed data is organised into a final experience of shapes, colours, contrast and movement. Et voila, there you have it: high-definition, stereoscopic human vision.

A close-up of a young woman's eye.
The front part of the eye is composed mainly of water. kei907/shutterstock

While the front part of the eye is mainly composed of water, the back is nature’s version of bioelectronics (the convergence of biology and electronics).

Eyesight, then, is an interplay of light, water and electricity, in networks both elemental and complex. This exquisite balance may be further demonstrated through two examples.

First, consider the structure of the cornea. This is the clear “windscreen” at the front of the eye, the only completely transparent tissue in the human body. Its living cells are a plexus of water and collagen, glass-like and liquid enough to permit light, sturdy enough to withstand trauma. Here again lies a balance between the eye’s delicacy and its resilience.

Second, let’s look at the eyes’ development, as direct extensions of our brains. When we are embryos from weeks three to ten, two folds appear on our forebrains (the forward-most portion of our brain). These folds extend forwards, becoming stalks. In turn, they are capped with little cups that turn into globes, later encased in eyelids and lashes.

The brain stretches forward, in other words, to form its own eyes. It’s brain tissue, then, that watches the world – we are the mammals with seeing brains.

Photographs revert to negatives

The descriptions above could perhaps be characterised as a meeting of science and lyricism. This is no accident. While ophthalmology concerns itself with optics, a mathematical affair, I was the schoolkid who loved English class.

Whether writing short stories, or nodding to hip-hop’s street poetry, I was drawn to language. These days I’m predominantly a doctor and family man, and only a dilettante as a writer. Still, I seek language out in the micro-gaps of a day, predawn before the kids wake, or on train rides to and from work.

There’s nothing glamorous about this, and nor is it special. Doctor-writers are far from rare – think of history’s Anton Chekhov or William Carlos Williams, the US’s Atul Gawande, or our own Karen Hitchcock or Michelle Johnston. So far, I’m the only writerly eye surgeon I know of (any others out there – shout!).

Author Margaret Lobenstine believes this sort of “renaissance soul” resides in all of us; after all, we have two cerebral hemispheres, one for reason and one for art (in truth though, the hemispheres cooperate on most tasks).

Let’s pivot fully from eyeballs to writing then, and specifically to poetry, my favoured sandpit.

Robert Frost said, “to be a poet is a condition, not a profession”. Most poets write, I believe, because they must, not because it’s fun or easy (although occasionally it’s both). Sometimes we write to understand or at least to name something, to gather up the events and emotions that move us, dangling like threads to be spooled up into something resembling sense.

In a medical day, I am periodically struck by a patient encounter that leaves me reeling. Perhaps it’s an unexpected confession, or a scrap of a life story. Either way, it’s the emotional charge that, like a vein of gold, points towards a buried poem.

Let’s take a real-life example from my practice, an elderly lady whom we shall call Iris (pun intended).

Iris presents to me with failing vision. Examining her eyes, I see “geographic atrophy”, little islands of missing retinal tissue worn away over time. This is a form of incurable, age-related macular degeneration. It results in permanent loss of central vision, with peripheral vision remaining intact.

It’s not good news; my stomach tightens as I prepare to deliver it.

Iris replies, tearily, that she just lost her husband of 60 years. She’s now alone and becoming blind. I’m taken aback – what can one honestly say to this?

Sure, there are visual magnifiers, home modifications, other practical aids that may guardrail her physical safety. But her anguish goes beyond this; she’s on the edge of a personal precipice, and teetering. There’s electricity in the consult room, a lightning-rod moment for sure.

How might a poet view this scene? Placing Iris in the centre, let’s start with her appearance – her auburn-dyed hair, her knobbly walking stick, her potpourri perfume – enough to make her real. In addition to portraiture, poetry deals in metaphors; what are some for Iris’s grief?

How about:

Colour photographs revert
to their negatives, old-fashioned film
stark and inverting reality,
her life recognisable
yet draining of hue.

Or this:

Turned over, her hourglass
clumps onto the table,
sand trickling away
from having had, towards loss, the two bulbs
painfully, inextricably linked.

Good poetry must go further, seeking the patterns beneath the surface. What precisely is it about Iris that moves me so? She is losing things, important things. Witnessing this touches my deepest fears, knowing that, like an unwelcome house guest, loss visits us all, sometimes staying for good.

As my Persian countryman Rumi wrote, “this human being is a guest house”. Losing our own physical abilities or our loved ones, what would become of us?

Distilling this further, what exactly is loss, its weight and texture?

Inversions,
your cherished glass of shiraz shatters
on the tiles, your laden table
upended. Warmth whistles
out through the cracks, cold rises up.
Midnight:
your reasons for living dwindle,
walking out the door
one by one.

Dark, heavy material no doubt; well, welcome to medicine, and to real life. No wonder Iris’s visit rattles me. The poet must face this discomfort, exploring the interplay between the minuscule and the panoramic, the worldly and the transcendent.

Tasked with creating visions for life, from its mundane to its profoundest moments, poets, then, are our seers.

Anger and solace

I’m now in my 18th year working exclusively with eyes, the latter half as a qualified consultant ophthalmologist. These days, the toughest conditions I face are diseases without a cure, such as Iris’s geographic atrophy, or vision loss that could have been prevented, such as solar retinopathy.

In other scenarios, there are eye diseases caused by modern living. An example of this is diabetic eye disease, which disproportionately affects Indigenous people. When compared with non-Indigenous people, Indigenous Australians suffer three times the rate of vision loss from diabetes.

The reasons for this are manifold, and include the easy access to sugar-laden beverages in many Indigenous communities. As ophthalmologists, we deal with the downstream effects of high blood sugar levels. This manifests as “diabetic macular oedema”, where a swelling at the back of the eye leads to loss of vision.

Fortunately, we have good treatments for this condition. But prevention is far better than cure. As one measure, why don’t we impose a sugar tax, as more than 100 other countries have done? By introducing refined sugars into a healthy traditional diet, modern Australia has arguably created this problem. By corollary, we have a duty to solve it.

This is an opportunity for resistance and empowerment.

Hauled over on ships,
white crystals in barrels -
dispossession’s sweetener - now
sat on shelves, bright bottles
singing cheap songs
to thirsty eyes.
We’ll brand you yet:
mark your barrels ‘poison’.

Conditions like this, where modern society harms people – for astronomical corporate profits, mind you – are infuriating.

Thankfully, there is solace in my ongoing fascination with the eye. There are moments of sheer beauty; images of fluorescein angiography, for example, where the retina’s blood vessels are highlighted with a fluorescent dye as a diagnostic tool.

These angiograms remind me of lightning storms in our state’s northwest, where cloud-to-cloud and sheet lightning flash in the night sky in split-second forks and streams. Much as power and charge flow in the sky, so blood is distributed in the back of the eye.

Also spurring me on are patients’ success stories, where sight is restored or blindness prevented.

Twenty years in a Thai refugee camp,
now sat in front of me,
grandma from Myanmar.
Twenty years to lose light –
this cataract surgery won’t
return your nation, grandma, but at least
it’s restored your sight.

These stories abound, such is the privilege of my profession.

A race between science and time

There may even be hope for Iris. Her condition, geographic atrophy, is caused in part by her immune system, and its complement proteins. This network of proteins marks selected entities (typically pathogens or tumour cells) for destruction by immune cells such as lymphocytes, phagocytes and macrophages.

For reasons including localised inflammation and reduced oxygen delivery, this response can, in ageing, be misdirected towards healthy retinal tissue, leading to its destruction – a process akin to friendly fire in battle.

For Iris, the cavalry may be cresting the hill. In 2023, two new medications were approved for the treatment of geographic atrophy in the US. Both block targets within our complement system and, while not curative, have been shown to slow (although not reverse or stop) the disease. By late 2024, we should know whether one of these drugs, pegcetacoplan, is approved in Australia.

Starter’s pistol fires! A race afoot
between science and time.
Do the molecules work
and – as the clock chimes –
will they cross the line
to save sight?

Hessom Razavi, Associate professor, The University of Western Australia


Subscribe to support our independent and original journalism, photography, artwork and film.

Tuesday, 6 August 2024

Philosophy is Crucial in the Age of AI

 mapman/Shutterstock

By Anthony Grayling, Northeastern University London and Brian Ball, Northeastern University London

New scientific understanding and engineering techniques have always impressed and frightened. No doubt they will continue to. OpenAI recently announced that it anticipates “superintelligence” – AI surpassing human abilities – this decade. It is accordingly building a new team, and devoting 20% of its computing resources to ensuring that the behaviour of such AI systems will be aligned with human values.

It seems they don’t want rogue artificial superintelligences waging war on humanity, as in James Cameron’s 1984 science fiction thriller, The Terminator (ominously, Arnold Schwarzenegger’s terminator is sent back in time from 2029). OpenAI is calling for top machine-learning researchers and engineers to help them tackle the problem.

But might philosophers have something to contribute? More generally, what can be expected of the age-old discipline in the new technologically advanced era that is now emerging?

To begin to answer this, it is worth stressing that philosophy has been instrumental to AI since its inception. One of the first AI success stories was a 1956 computer program, dubbed the Logic Theorist, created by Allen Newell and Herbert Simon. Its job was to prove theorems using propositions from Principia Mathematica, a 1910 three-volume work by the philosophers Alfred North Whitehead and Bertrand Russell, aiming to reconstruct all of mathematics on one logical foundation.

Indeed, the early focus on logic in AI owed a great deal to the foundational debates pursued by mathematicians and philosophers.

One significant step was the German philosopher Gottlob Frege’s development of modern logic in the late 19th century. Frege introduced the use of quantifiable variables – rather than objects such as people – into logic. His approach made it possible to say not only, for example, “Joe Biden is president” but also to systematically express such general thoughts as that “there exists an X such that X is president”, where “there exists” is a quantifier, and “X” is a variable.
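In modern notation – a rough sketch, using a hypothetical one-place predicate President – the difference looks like this:

\mathrm{President}(\text{Joe Biden})      % a claim about one particular individual
\exists x\, \mathrm{President}(x)         % “there exists an x such that x is president”

The quantifier \exists ranges over a variable x rather than naming any particular person, which is what allows logic to express fully general claims.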

Other important contributors in the 1930s were the Austrian-born logician Kurt Gödel, whose completeness and incompleteness theorems concern the limits of what one can prove, and the Polish logician Alfred Tarski, whose “proof of the indefinability of truth” showed that “truth” in any standard formal system cannot be defined within that particular system, so that arithmetical truth, for example, cannot be defined within the system of arithmetic.

Finally, British pioneer Alan Turing’s abstract notion of a computing machine, set out in 1936, drew on these developments and had a huge impact on early AI.

It might be said, however, that even if such good old fashioned symbolic AI was indebted to high-level philosophy and logic, the “second-wave” AI, based on deep learning, derives more from the concrete engineering feats associated with processing vast quantities of data.

Still, philosophy has played a role here too. Take large language models, such as the one that powers ChatGPT, which produces conversational text. They are enormous models, with billions or even trillions of parameters, trained on vast datasets (typically comprising much of the internet). But at their heart, they track – and exploit – statistical patterns of language use. Something very much like this idea was articulated by the Austrian philosopher Ludwig Wittgenstein in the middle of the 20th century: “the meaning of a word”, he said, “is its use in the language”.
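To make the idea of “statistical patterns of language use” concrete, here is a minimal sketch – a hypothetical illustration in Python, not OpenAI’s code – of the simplest statistical language model: count which word tends to follow which in a corpus, then use those counts to guess a continuation. Real large language models learn vastly richer patterns with neural networks, but they too are built on usage statistics.

from collections import Counter, defaultdict
import random

# A tiny "corpus" standing in for the vast datasets LLMs are trained on.
corpus = "the meaning of a word is its use in the language".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Suggest a likely continuation based purely on observed usage."""
    options = following.get(word)
    if not options:
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # e.g. "meaning" or "language", depending on chance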

But contemporary philosophy, and not just its history, is relevant to AI and its development. Could an LLM truly understand the language it processes? Might it achieve consciousness? These are deeply philosophical questions.

Science has so far been unable to fully explain how consciousness arises from the cells in the human brain. Some philosophers even believe consciousness poses such a “hard problem” that it lies beyond the scope of science altogether, and may require a helping hand from philosophy.

In a similar vein, we can ask whether an image generating AI could be truly creative. Margaret Boden, a British cognitive scientist and philosopher of AI, argues that while AI will be able to produce new ideas, it will struggle to evaluate them as creative people do.

She also anticipates that only a hybrid (neural-symbolic) architecture – one that combines logic-based techniques with deep learning from data – will achieve artificial general intelligence.

Human values

To return to OpenAI’s announcement, when prompted with our question about the role of philosophy in the age of AI, ChatGPT suggested to us that (amongst other things) it “helps ensure that the development and use of AI are aligned with human values”.

In this spirit, perhaps we can be allowed to propose that, if AI alignment is the serious issue that OpenAI believes it to be, it is not just a technical problem to be solved by engineers or tech companies, but also a social one. That will require input from philosophers, but also social scientists, lawyers, policymakers, citizen users and others.

Apple Park is the corporate headquarters of Apple Inc in Silicon Valley.
Some philosophers are critical of the tech industry. iwonderTV/Shutterstock

Indeed, many people are worried about the rising power and influence of tech companies and their impact on democracy. Some argue we need a whole new way of thinking about AI – taking into account the underlying systems supporting the industry. The British barrister and author Jamie Susskind, for example, has argued it is time to build a “digital republic” – one which ultimately rejects the very political and economic system that has given tech companies so much influence.

Finally, let us briefly ask, how will AI affect philosophy? Formal logic in philosophy actually dates to Aristotle’s work in antiquity. In the 17th century, the German philosopher Gottfried Leibniz suggested that we may one day have a “calculus ratiocinator” – a calculating machine that would help us to derive answers to philosophical and scientific questions in a quasi-oracular fashion.

Perhaps we are now beginning to realise that vision, with some authors advocating a “computational philosophy” that literally encodes assumptions and derives consequences from them. This ultimately allows factual and/or value-oriented assessments of the outcomes.

For example, the PolyGraphs project simulates the effects of information sharing on social media. This can then be used to computationally address questions about how we ought to form our opinions.
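By way of illustration only – this is a toy sketch, not the PolyGraphs codebase – such simulations typically give each agent a degree of belief, let agents gather noisy evidence and share it with neighbours on a network, and then watch how opinions evolve under different sharing rules:

import random

# Toy opinion-formation simulation: agents estimate whether a coin is
# biased towards heads (the true bias is 0.6) by pooling evidence with
# neighbours on a simple ring-shaped social network.
TRUE_BIAS = 0.6
N_AGENTS = 10
ROUNDS = 50
FLIPS_PER_ROUND = 5

neighbours = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}
credence = {i: 0.5 for i in range(N_AGENTS)}  # everyone starts undecided

for _ in range(ROUNDS):
    # Each agent flips the coin a few times and reports the number of heads.
    reports = {i: sum(random.random() < TRUE_BIAS for _ in range(FLIPS_PER_ROUND))
               for i in range(N_AGENTS)}
    for i in range(N_AGENTS):
        # Pool your own evidence with what your neighbours shared...
        heads = reports[i] + sum(reports[j] for j in neighbours[i])
        total = FLIPS_PER_ROUND * (1 + len(neighbours[i]))
        # ...and nudge your credence towards the pooled frequency of heads.
        credence[i] = 0.9 * credence[i] + 0.1 * (heads / total)

print({i: round(c, 2) for i, c in credence.items()})

Changing the network shape or the sharing rule, and then asking which arrangements bring agents closest to the truth, is the kind of question such computational approaches can help address.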

Certainly, progress in AI has given philosophers plenty to think about; it may even have begun to provide some answers.

Anthony Grayling, Professor of Philosophy, Northeastern University London and Brian Ball, Associate Professor of Philosophy AI and Information Ethics, Northeastern University London


Subscribe to support our independent and original journalism, photography, artwork and film.

Friday, 2 August 2024

How to Win on Your Own Terms: Simone Biles Claims her Eighth Olympic Gold on her Paris 2024 ‘Redemption Tour’

Simone Biles, of the United States, competes on the uneven bars during the women's artistic gymnastics at the 2024 Summer Olympics, in Paris, France. 

By Vaughan Cruickshank, University of Tasmania; Brendon Hyndman, Charles Sturt University, and Carla Valerio, Southern Cross University

Simone Biles is the most decorated gymnast, male or female, in history. She won her first world championship all-around gold medal in 2013 and has not lost an all-around competition since.

She arrived in Paris with 37 medals from World Championships and Olympics, including 27 golds.

She has since added to this total, winning team gold – her eighth Olympic medal – and she looks set to increase her tally when she competes in the finals for all-around, beam, floor and vault.

Biles returns to the Olympics after a difficult experience at Tokyo 2020. The athlete we have seen perform so far at Paris is more relaxed, more mature and still giving us the performances of the best gymnast in the world.

A difficult second Olympics

At her first Olympics, Rio 2016, Biles won gold in the team all-around, vault and floor competitions, and bronze on beam.

She was expected to repeat the feat in Tokyo in 2021, but she was forced to withdraw from most events because of mental health concerns and the “twisties” – the name gymnasts give to the phenomenon of losing sense of where they are in the air, making performing complicated moves dangerous.

In the end, she contributed one vault to the team final, where the women from the United States took silver, and she received a bronze medal on beam – far from the multiple golds she was expected to take home.

Many elite gymnasts get the twisties. They just didn’t talk about them so openly. Since Biles first spoke about it, other elite gymnasts such as Joscelyn Roberson and Laurie Hernandez have spoken about sharing the experience.

Biles has said the twisties were caused by a combination of trauma related to abuse by a former USA team doctor, isolation during the COVID-affected Games and the weight of high expectations of success.

Changing the discussion around mental health

Biles’ decision to prioritise her mental health and not compete has changed perceptions of elite gymnasts and their mental health.

Many former elite gymnasts have spoken about how they did not have agency over their bodies and decision-making while training and competing, and were forced to compete while injured.

Biles speaking about her mental health, alongside athletes like basketballer Kevin Love and tennis player Naomi Osaka, has reduced stigma, and increased the number of athletes talking about their mental health.

This year, inspired by this discussion, the US Olympic and Paralympic committee has made sure athletes have access to more mental health resources while they compete in Paris.

Changing the sport

Biles is also a trailblazer in competition.

She has five unique moves named after her across floor, balance beam and vault.

Only one of these moves has ever been performed by another gymnast in an international competition, when Hillary Heron of Panama performed a double layout with a half-twist in the second flip on floor this week in Paris.

She is a role model for many young African American girls, who are increasingly enrolling in gymnastics clubs.

Women’s gymnastics has long been dominated by younger athletes. There has not been an Olympic all-around women’s champion in her 20s for over 50 years.

The 2024 US team is one of the oldest in the country’s history, with an average age of 22. By way of contrast, in 2012, the oldest member of the team was 18-year-old Aly Raisman. Other medal contenders, such as Brazil, have even older teams.

At 27, Biles could become the oldest woman to win Olympic all-around gold since 1952.

More experienced gymnasts competing at Olympic level reflects a change in recent years. Athletes have been able to maintain their elite abilities longer due to advances in sports medicine and training.

Her success as an older athlete also reflects her improved mental health and maturity.

Biles came close to walking away from gymnastics after Tokyo. She was in and out of the gym for over a year and a half as she built back up from occasional gentle trampoline and mat exercises to more complex skills and routines.

When she returned to competition in 2023, she won her sixth all-around world championship and ninth all-around US championship.

As she details in her recent Netflix documentary, she now has much more balance in her life, with new priorities outside the gym. She still wants to win, but not winning isn’t the end of the world.

Paris 2024

Biles has called Paris her “redemption tour”. Again, she arrived at the Olympics with the pressure of being the favourite. But this time she is noticeably more relaxed, regularly seen chatting and laughing with teammates.

So far in Paris, her physical health has been more of a concern. She has a heavily strapped lower leg and has been seen limping. Her coach has said the injury is minor, and she will still be able to compete in the rest of the competition.

In her documentary, Biles talks about the importance of ending her career on her terms.

She has already changed the sport, both inside and outside the gym. If she can complete her remarkable comeback with individual all-around gold in Paris, this will truly cement her legacy as the greatest gymnast of all time.

Vaughan Cruickshank, Senior Lecturer in Health and Physical Education, University of Tasmania; Brendon Hyndman, Associate Professor of Health & Physical Education (Adj.), Charles Sturt University, and Carla Valerio, Health and Physical Education Lecturer, Southern Cross University

Subscribe to support our independent and original journalism, photography, artwork and film.

Monday, 29 July 2024

Taming the Machine: Should the Technological Revolution be Regulated ~ and Can it Be?

Phonlamai Photo/Shutterstock

By Charles Barbour, Western Sydney University

Back in 2005 – before the rise of social media or smart phones, let alone blockchain, metadata and OpenAI – computer scientist and entrepreneur Ray Kurzweil published a breathlessly prophetic account of what he called “the singularity”.

Kurzweil meant a moment in the not-too-distant future when super-intelligent technology would suddenly exceed all imaginable human capacities, absorb humanity into its operations, and spread its mastery across nothing less than the universe itself. The Singularity is Near, his title ominously declared. And he was confident enough in his calculations to offer a precise date: 2045.

This year, almost exactly halfway between 2005 and 2045, Kurzweil released an update on his prophecy. It was essentially the same prognosis, but with a somewhat less ominous sounding title: The Singularity is Nearer.

To understand Kurzweil, and the techno-prophets who have followed his lead, it is worth thinking a little about the nature of prophecy itself. For even in its ancient and religious forms, the purpose of prophecy has never really been to predict the future. It has always been to influence the present – to convince people to live their lives differently today, in preparation for a tomorrow that can only ever be hypothetical.

In this context, it would be interesting to ask why so much of the discourse around emerging technologies has become so apocalyptic in tone. What exactly is such discourse likely to accomplish? Does predicting the impending eclipse of humanity give anyone a reason to act now or change any aspect of their lives? Or is the projected inevitability more likely to convince people that nothing they do could possibly have any consequence?

No doubt, there is something darkly appealing about declarations of the end of times. Their ubiquity throughout human history suggests as much. But there are more productive, more balanced – if less sensational – ways of thinking and speaking.

Without going all the way over to “the singularity”, can we construct a genuine account of what is singular about our contemporary experience and the way it is being shaped by the machines we build?

Marcus Smith’s new book Techno: Humans and Technology is among the more levelheaded approaches to the topic.

Of course, like everyone else working with this genre, Smith is quick to propose that the present moment is exceptional and unique. The very first sentence of his book reads: “We are living in the midst of a technological revolution.” References to the concept of “revolution” are scattered liberally throughout.

But the central argument of Techno is that we must regulate technology. More importantly, Smith argues that we can. An associate professor of law at Charles Sturt University, he suggests the law has more than enough resources at its disposal to place machines firmly under human control.

In fact, on Smith’s account, Australia is uniquely situated to lead the world in technological regulation, precisely because it is not home to the large tech corporations that dominate American and European society. That explains why Australia is, in Smith’s words, “punching above its weight” in the field.

The threat to democracy

Smith breaks his book down into three tightly structured sections that examine technology’s relation to government, the individual, and society.

In part one, he engages with large-scale political questions, such as human-created climate change, the application of AI to every aspect of public life, and the systems of social credit made possible by digital surveillance and big data.

Perhaps Smith’s most interesting argument here concerns the similarity between the notorious social credit system employed by the Chinese government and systems of social credit developed by commercial forces.

It is easy to criticise a government that uses a battery of technological methods to observe, evaluate and regulate the behaviour of its citizens. But don’t banks collect data and pass judgement on potential customers all the time – often with deeply discriminatory results? And don’t platforms like eBay, Uber and Airbnb employ reputational credit scores as part of their business model?

Marcus Smith. University of Queensland Press

For Smith, the question is not whether social credit systems should exist. It is almost inevitable that they will. He calls on us to think long and hard about how we will regulate such systems and ensure they are not allowed to override what he deems the “core values” of liberal democracy. Among these, Smith includes “freedom of speech, movement and assembly”, and “the rule of law, the separation of powers, the freedom of the press and the free market”.

Part two of Techno turns its attention to the individual and the threat emerging technologies represent to privacy rights. The main concern here is the enormous amount of data collected on each and every one of us every time we engage with the internet – which means, for most of us, more or less all the time.

As Smith points out, while this is clearly a global phenomenon, Australia has the dubious honour of leading the world’s liberal democracies in legislating governmental access to that data. Private technology companies in Australia are legally required to insert a back door to the encrypted activities of their clients. Law enforcement agencies have the power to take over accounts and disrupt those activities.

“The fact is that liberal-democratic governments act the same way as the authoritarian regimes they criticise,” Smith writes:

They may argue they only do it in specified and justified cases under warrant, but once a technology becomes available, it is likely that some government agency will push the envelope, believing its actions are justified by the benefits of their work for the community.

The emergence of big data thus inevitably “shifts liberal democracies towards a more authoritarian posture.” But, for Smith, the solution remains ready to hand:

If rights such as privacy and autonomy are to be maintained, then new regulations are essential to manage these new privacy, security and political concerns.

Practical difficulties

The final part of Techno focuses on the relationship between technology and society, by which Smith largely means economics, and markets in particular.

He provides a helpful overview of the blockchain technology used by crypto-currencies, which has promised to mitigate inequality and create growth by decentralising exchange. Here again Smith avoids taking either a triumphalist or a catastrophising approach. He asks sensible questions about how governments might mediate such activity and keep it within the bounds of the rule of law.

He points to the examples of China and the European Union as two possible models. The first emphasises the role of the state; the second is attempting to create the legislative conditions for digital markets. And while both have serious limitations, some combination of the two is probably the most likely to succeed.

But it is really at the very end of the book that Smith’s central concern – regulation – comes to the fore. He has no difficulty stating what he takes to be the significance of his work. “Technology regulation,” he writes, “is probably the most important public policy issue facing humanity today.”

Declaring that we need to regulate technology, however, is far simpler than explaining how we might do so.

Techno provides a very broad sketch of the latter. Smith suggests that it would require “involving the key actors” (including technicians, corporations and ethicists), “regulating with technology” (that is, using technological means to impose laws on technological systems), and establishing “a dedicated international agency” for coordinating regulatory processes.

But Smith does not really reflect on the complexity of implementing any of these recommendations in practice. Moreover, it is possible that, despite his considerable ambition, his approach stops short of capturing the true scale of the problem. As another Australian academic, Kate Crawford, has recently argued, we cannot understand intelligent technologies simply as objects or tools – a computer, a platform, a program. This is because they do not exist independently of fraught networks of relationships between humans and the world.

These networks extend to the lithium mines that extract the minerals that allow the technology to operate, the Amazon warehouses that ship components around the globe, and the digital piecework factories in which humans are paid a subsistence wage to produce the illusion of mechanical intelligence. All of this is wreaking havoc on the environment, reinforcing inequalities, and facilitating the demolition of democratic governance.

If the project of regulation was going to touch phenomena of this sort, it would have to be much more expansive and comprehensive than even Smith proposes. It might mean rethinking, rather than simply attempting to secure, some of what Smith calls our “core values”. It might require asking, for instance, whether our democracies have ever really been democratic, whether our societies have ever really pursued equality, and whether we can continue to place our faith in the so-called “free market”.

Asking these kinds of questions certainly wouldn’t amount to an apocalypse, but it could amount to a revolution.

Charles Barbour, Associate Professor, Philosophy, Western Sydney University

Subscribe to support our independent and original journalism, photography, artwork and film.

Tuesday, 23 July 2024

Eyes Wide Shut at 25: Why Stanley Kubrick’s Final Film Was Also His Greatest

Stanley Kubrick, Tom Cruise, Nicole Kidman and Sydney Pollack on the set of Eyes Wide Shut

By Nathan Abrams, Bangor University

Legendary filmmaker Stanley Kubrick spent a lifetime trying to make his final film, Eyes Wide Shut, a reality. He had been struggling to make it from the moment he began making feature films, some 75 years ago. When he finally did, 25 years ago in 1999, it killed him.

The plot centres on a physician (Tom Cruise) whose wife (Nicole Kidman) reveals that she had contemplated having an affair a year earlier. He becomes obsessed with having his own sexual encounter. When he discovers an underground sex group, he attends one of their masked orgies.

Having not made a film in 12 years since Full Metal Jacket in 1987, Eyes Wide Shut was hotly anticipated. Titillated by juicy rumours in the British tabloids, critics and fans who were expecting a steamy X-rated psychological thriller were inevitably disappointed. “Eyes Wide Shut turns out to be the dirtiest movie of 1958,” quipped one critic. Wait 12 years for anything and it won’t turn out to be quite so good as you imagined.

But where English-speaking audiences panned it, the film was warmly received in Latin and Mediterranean countries. In the long term, those audiences proved to be right, and the film has grown in stature since. Not everyone will agree that, as Kubrick claimed, it was his best work, but they should certainly see its merits today.

Kubrick adored the work of Arthur Schnitzler, the Austrian author of the 1926 text, Traumnovelle (translated into Dream Story in English), which became his source material. Once described as the greatest portrayer of adultery in German-language literature, Schnitzler wrote about themes of sex, marriage, betrayal and above all, jealousy. He even, it is rumoured, kept a diary of every orgasm he ever experienced.

Given that Kubrick discovered Traumnovelle in the early 1950s, it influenced almost every film he made. Consider the rapes in Fear and Desire (1952) and Killer’s Kiss (1955), the adultery and jealousy in The Killing (1956) and the attraction to younger women in Lolita (1962). Consider also the sexual violence in A Clockwork Orange (1971), the adultery in Barry Lyndon (1975), the marital troubles of The Shining (1980) and the toxic masculinity of Full Metal Jacket. They all culminated in Eyes Wide Shut.

This extends to the films Kubrick didn’t make, too: from his adaptation of Burning Secret, a Freudian tale by Schnitzler’s contemporary Stefan Zweig that was abandoned in 1956, through to Napoleon, a figure who intrigued Kubrick partly because he had, in Kubrick’s own words, a sex life worthy of Arthur Schnitzler.

Eyes Wide Shut (1999) official trailer.

Kubrick returned to Eyes Wide Shut time and again during his career. But it took until the mid-1990s, when Kubrick was in his 60s, before he was able to execute it.

He struggled with adapting the source material. How does a director who spent his career putting big themes like nuclear war, the space race and Vietnam on the big screen put the tiny intimate moments of marriage on there?

His wife, Christiane, kept stopping him, telling him they were too young. Or maybe it was because Kubrick was legendary for his pre-production research, so only with four decades of marriage under his belt did he feel he really understood the topic.

By the time it was eventually made, Kubrick was in a poor state of health. Already a ponderous filmmaker, he was slowing up. The production was long, arduous and still holds the record for the longest continuous shoot in cinema history.

When it finally wrapped on June 17, 1998, he was exhausted. Eyes Wide Shut had been filmed over 294 days, spread over 579 calendar days, including 19 for re-shooting with actress Marie Richardson, totalling slightly over a year and seven months. And post-production would last for a further nine months, only brought to a halt by Kubrick’s death.

With Kubrick not around to influence the marketing, audiences arrived expecting salaciousness where none existed. Their disappointment, in turn, shaped the poor critical reception and the film’s initial commercial failure in the US.

Many US and British critics felt the film was too long, the acting was unconvincing, the New York sets looked fake, the ideas were weak, and the eagerly anticipated orgy scene was ridiculous. They thought it was hermetic, too ordered and too closed off.

In the end, ironically, it was the highest grosser of any Kubrick film. It cost US$65 million (£40 million) to make with another US$30 million in publicity costs and eventually grossed US$162 million worldwide.

Influence

Similar to The Shining, Eyes Wide Shut became the source of any number of conspiracy theories. It has even been read as a warning about the predations of convicted US sex offenders Harvey Weinstein and Jeffrey Epstein.

Now, it is regarded as a classic, maybe not Kubrick’s best film, but one with enough layers to reward repeated viewing. And its influence is felt in wider popular culture.

Consider the explicit reference in Jordan Peele’s 2017 film Get Out, a director much influenced by Kubrick’s style, when one character says: “You in some Eyes Wide Shut situation. Leave, motherfucker.”

Todd Field, who played Nick Nightingale in Eyes Wide Shut, showed a Kubrickian influence in the image-making, pacing and almost dreamlike atmosphere of Tár, the film he directed in 2022. Jonathan Glazer’s Birth (2004) owes a huge debt to Eyes Wide Shut also.

In the final analysis, anyone who refuses to engage with Eyes Wide Shut is refusing to understand Kubrick as a filmmaker. He wanted to make it at the very point he began making feature films. It lurks behind every film he made.

Nathan Abrams, Professor of Film Studies, Bangor University

Subscribe to support our independent and original journalism, photography, artwork and film.

Monday, 15 July 2024

The Science Behind Ariana Grande’s Vocal Metamorphosis

Grande performs during the 2024 Met Gala on May 6th in New York City. Cover picture: Elli Ioannou


By Lydia Kruse, Purdue University

While promoting her role in the upcoming film adaptation of the Broadway hit “Wicked,” singer Ariana Grande made a podcast appearance that left many of her fans befuddled and concerned.

In the middle of the interview, the sound of her voice drastically changed, going from lower-pitched and slightly raspy to one that was much higher pitched, with a smooth, light texture to it.

Speculation ensued.

“THAT WAS SO SUDDEN HELP,” one netizen exclaimed. “It’s her alter ego Kitten programming,” quipped another fan. Others wondered whether Grande was getting stuck in the voice of Glinda, the character she plays in “Wicked,” who speaks with a softer intensity and higher pitch. (After Austin Butler played Elvis Presley in the 2022 film “Elvis,” the young actor continued speaking like the King of Rock and Roll long after the film’s premiere.)

Grande’s fans were perplexed by the singer’s vocal shift during a podcast interview.

Grande eventually responded to the confusion, explaining that she routinely and intentionally changes her “vocal placement” to preserve her vocal health.

For those unfamiliar with the science of voice production, Grande’s explanation may have prompted more – rather than less – confusion. But as a speech-language pathologist who specializes in voice disorders, I know how effective these techniques can be.

Singers and actors who routinely strain their vocal cords can damage them through what’s known as “phonotrauma,” or excessive and improper use of the voice.

The data shows that voice disorders can lead to loss of work for anyone, not just singers. But professional singers – whose livelihoods, like those of professional baseball pitchers, depend on a fully functional part of their body – are more likely to experience financial and emotional distress from a voice disorder.

Cords on a collision course

In order for you to speak or sing, your vocal cords – a delicate pair of thin, muscular strips shaped like a “v” in the throat – must come together and vibrate against one another as air from the lungs is pushed through.

When the tension and length of the vocal folds increase, they vibrate faster. This leads to pitch increases. Likewise, when the tension and length of the vocal folds decrease, they vibrate slower, which lowers the pitch.
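A rough way to see why – a back-of-the-envelope simplification rather than a full model of phonation – is to treat a vocal fold like a vibrating string, whose fundamental frequency is

f_0 = \frac{1}{2L}\sqrt{\frac{T}{\mu}}

where L is the vibrating length, T the tension and \mu the mass per unit length. In practice, stretching the folds raises their tension and thins them out, and those effects outweigh the added length, so the overall result is faster vibration and a higher pitch.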

The more a person uses their voice, the more times the vocal cords collide against each other. For instance, when Steven Tyler hits the high note at the end of “Dream On,” his vocal cords vibrate over 800 times per second. In comparison, a hummingbird flaps its wings roughly 70 times per second.

Many big-name performers go on extended tour with shows taking place night after night, often with little time for vocal rest and recovery. So it’s no wonder that many of them end up injuring their vocal cords. There are other habits and behaviors that can damage the delicate mechanism that creates a singer’s unique sound: poor diet, lack of sleep, screaming, smoking and drinking alcohol.

Surgery comes with risks

Grande is no stranger to the pain of losing her voice.

In 2013, she sustained a vocal fold hemorrhage, which occurs when a blood vessel in the vocal cords ruptures because of phonotrauma. Doctors put her on strict voice rest so she could recover.

However, injuries to vocal cords don’t always heal on their own. Surgery can be necessary, but this option often carries serious risks for singers.

Surgical interventions can lead to a loss of vocal range due to scarring. In 1997, Julie Andrews famously lost her crystal-clear singing voice, which once spanned four octaves, following a minor vocal cord procedure.

Black and white movie still of a young, short-haired woman frolicking in a field, singing, with her arms spread wide.
Julie Andrews went under the knife in 1997, and her voice never recovered. Screen Archives/Getty Images

Thankfully, not all vocal cord surgeries end in disaster: Grammy-winner Adele went under the knife in 2011 to remove a vocal cord polyp. More than a decade later, she continues to top the charts. In fact, there are many singers, actors, news anchors and talk show hosts who have suffered various vocal cord injuries and ailments and have been able to successfully resume their careers.

But performers who don’t change their habits and behaviors following an injury or successful surgery may end up right back where they started.

Prevention is the best medicine

With all this in mind, Grande’s attempt to mitigate the risk of a vocal cord injury that could derail her professional success is wise.

But how, exactly, is she achieving this by changing the sound of her voice as she speaks?

In her response to the speculation on social media, she pointed to the importance of altering her “vocal placement” to preserve her vocal health.

What she’s really talking about is the interaction between the vocal cords and the vocal tract, which includes the throat, nose and mouth. The vocal tract acts like a filter for the sound created by the vocal cords, causing some sound waves to be dampened while others are amplified. This interaction creates a person’s unique, recognizable voice.

When Grande focuses on lifting her voice higher up in her vocal tract – toward her nose – certain vibrations created by her vocal cords are amplified by the frontal air-filled cavities. This creates a brighter, higher sound that actually lessens the stress on the vocal cords themselves.

In the clip, Grande’s voice also sounds light and slightly breathy. She does this by decreasing the amount of force exerted on the vocal cords, so that they may not fully close while she speaks.

Creating a slight gap between the vocal cords keeps them from harshly colliding against one another, which, in turn, could prevent phonotrauma. This is not to be confused with whispering, which can also be harmful to the voice, since it can strain the vocal cords and throat muscles.

As with many health conditions, prevention is often the best medicine. Although behavioral change can be difficult, Grande seems to be embracing the challenge.

Lydia Kruse, Clinical Assistant Professor of Speech, Language and Hearing Sciences, Purdue University

Subscribe to support our independent and original journalism, photography, artwork and film.