Thursday, 19 February, 2026

Why dying can’t be left to machines alone

While AI and robotics show promise in enhancing end-of-life care, the prospect of fully autonomous machine-assisted dying raises profound ethical questions about accountability, human dignity, and whether death – our most fundamentally human experience – should ever be left entirely to algorithms, writes Chris Jones in News24.

As AI and robotics become increasingly embedded in medical practice, ethical attention has begun to turn toward their possible role in end-of-life care.

Robot- and AI-assisted dying currently refer to largely theoretical applications in evaluating eligibility, supporting clinical decision-making, or administering life-ending interventions. While prototypes like the Sarco capsule have moved these concepts into the public eye, they remain legally and ethically outside mainstream medical practice.

Mapped more specifically within end-of-life care and decision-making, a robotic system essentially carries out predefined actions. It could theoretically include an implementation component: after a decision has been made about a patient’s way forward, AI, through robotics, could perform a certain action, such as administering a prescribed drug that causes the person to die.

An AI system analyses data and can generate recommendations that may support or influence human decision-making. It can provide information about diagnosis and prognosis, and suggest the best treatment (understood as decision support) in specific situations.

In essence, data are uploaded, patterns are computed from those data, and AI then returns both general and case-specific information to clinicians.

Human moral agency

Presently, no jurisdiction permits AI-driven autonomous systems to perform assisted dying. Where assisted dying is legal, such as in the Netherlands, Belgium, Switzerland, Canada, Australia, Jersey, and parts of the United States, human physicians/clinicians are required to assess eligibility, secure informed consent, and remain morally and legally responsible for the act itself.

These contexts assume human moral agency at every stage, an assumption that remains ethically important. Machines cannot, morally or legally, be held accountable within these contexts.

The insistence on human responsibility reflects a deeper, uncompromising commitment to accountability, transparency, truthfulness, and respect for persons. At the same time, it does not rule out the use of AI or robotic systems in end-of-life care: predictive analytics, proactive symptom management, care co-ordination, patient and family support, and the theoretical possibility of administering a drug that causes death.

Alongside this, the challenges associated with the use of AI within end-of-life care cannot be ignored, such as privacy concerns, legal and ethical issues, patient trust and perception, and cultural sensitivity.

Arguments against and in favour of assisted dying

Opposition to assisted dying is often grounded in religious claims about the sanctity and absoluteness of life (only God gives and takes life), traditional views of medicine as life-preserving, and/or fears that vulnerable patients may (through coercion) be harmed or killed.

These concerns should not be dismissed lightly. However, I argue that individuals are entitled to make fundamental decisions about their own lives, provided adequate safeguards are in place.

A central argument in favour of assisted dying is respect for personal autonomy. A competent individual facing severe (terminal) illness should have the moral right to decide how and when their life ends, particularly when continued existence involves profound suffering or loss of dignity.

Denying this choice can amount to an unjustified intrusion into bodily integrity and self-determination.

Closely related is the commitment to reducing unnecessary suffering. When there is no realistic prospect of recovery, pressurising individuals to endure prolonged physical or psychological pain appears difficult to justify. In such cases, the role of medicine may justifiably shift from prolonging biological life at all costs to initiating a compassionate and dignified death.

The argument of moral equivalence further supports this position. Withdrawing life-sustaining treatment (passive euthanasia) and providing proportionate palliative medications that may foreseeably, but unintentionally, hasten death are widely accepted ethical and legal practices in many jurisdictions.

Although passive euthanasia is often described as an omission rather than an act, both passive and active forms of euthanasia involve deliberate decisions with foreseeable outcomes. The distinction between passive and active forms of euthanasia, therefore, does not provide a convincing moral basis for rejecting active forms of assisted dying.

AI, robotics and the scope of end-of-life care

Robot- and AI-assisted dying should not be understood, as the discussion above suggests, as a single practice, but rather as a spectrum of possible technological involvement and innovation.

This spectrum should be assessed not only in terms of efficiency and safety, but also in terms of its social consequences. AI tools could, in principle, contribute positively to end-of-life care, as indicated earlier. However, as machine autonomy increases (the system making its own, independent, self-adapting decisions without human regulation), so too do concerns about accountability, transparency, oversight and moral distance.

Key ethical concerns

• Autonomy and consent

While robot- and AI-assisted dying might appear to expand patient choice and support, meaningful autonomy requires an understanding of the rationale behind AI-driven recommendations, and ensuring that individuals facing end-of-life decisions remain free and self-determining participants in the process. Opaque or poorly explained algorithms risk undermining informed consent. They may also distance both patients and clinicians from the moral gravity of end-of-life decisions.

• Justice and equity

A further concern is the possibility that robot- and AI-assisted dying could reinforce existing inequalities in healthcare. There is a real risk that technologically mediated assisted dying might be disproportionately offered to marginalised populations, including people with disabilities, the elderly, or those facing economic hardship, as a cost-saving alternative to comprehensive care. These risks underscore the need for strong safeguards, anti-discrimination protections, and ongoing public accountability.

• Ethical frameworks 

Utilitarian approaches may emphasise robot- and AI-assisted dying’s potential to reduce suffering and/or increase access. However, such calculations risk overlooking relational, cultural, and symbolic dimensions of dying. Deontological concerns highlight the inability of machines to bear moral responsibility, reinforcing the necessity of human oversight.

Virtue-based perspectives further warn that excessive reliance on AI autonomy (operating without direct, constant professional intervention) could erode the moral attentiveness and empathy central to good medical practice.

• Care and disability ethics

Care ethics underscores that dying is a relational process, shaped by presence, recognition, and responsiveness. These qualities cannot be fully replicated by machines. Disability ethics raises additional concerns about algorithmic judgments of “quality of life,” which may encode ableist assumptions and undervalue lives that fall outside dominant norms. Together, these perspectives caution against reducing dying to a purely technical event.

• Drawing ethical boundaries

In my view, certain forms of technological assistance may enhance end-of-life care. However, full AI and robotic autonomy in life-ending practices crosses an important moral boundary. Given the irreversible nature of death, any ethical framework governing robot- and AI-assisted dying must include continuous human oversight and validation, clear accountability, and strong protections for vulnerable groups.

Ultimately, dying remains a deeply human experience that calls for presence, connection, empathy, trust, honesty, and recognition.

While technology can and should be harnessed to reduce suffering and expand equitable access to care, it cannot replace the relational and symbolic work (rituals, metaphors and meaningful actions that provide comfort, meaning and a sense of closure) involved in a humane death. An open ethical approach therefore supports responsibly governed AI and robot integration – one that strengthens, rather than diminishes, dignity, identity, justice, and compassion at the end of life.

International collaboration, in this respect, will be pivotal in navigating the way forward.

Chris Jones is an Emeritus Associate Professor in Systematic Theology and Ecclesiology at Stellenbosch University.

 

News24 article – OPINION | The human touch: Why dying can’t be left to machines alone (Restricted access)

 

See more from MedicalBrief archives:

 

Why SA needs both palliative care and assisted dying
Euthanasia activist says SA doctors support legalising assisted dying

 

Switzerland gives legal approval to suicide pod

 

SA woman has assisted suicide after winning medical negligence claim
