AI and Process Safety Ethics: What's a Human Life Worth?


In the Substack post AI and Process Safety Ethics, the author explores an ethical dimension of process safety that is often overlooked: the implicit valuation of human life embedded in consequence matrices used in risk assessment. As artificial intelligence (AI) becomes increasingly integrated into operational and safety systems, this latent ethical framework is being exposed and must be confronted directly rather than glossed over.

Consequence matrices like the one shown in the post are common tools in Process Hazard Analyses (PHAs). They organize four consequence categories—worker safety, public safety/health, environmental impact, and annual economic loss—into a unified severity scale for ranking hazards. At a casual glance, such matrices appear to be neutral technical tools. In reality, they encode value judgments. For example, in the matrix's highest severity category, a fatality or multiple serious injuries sit alongside an economic consequence of "≥ $10 million." That alignment implicitly treats a human life as having a monetary equivalent, an ethical judgment that practitioners rarely articulate.
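To make the structure concrete, here is a minimal sketch of how such a matrix can be represented as a data structure. The severity levels, descriptors, and dollar thresholds are illustrative assumptions, not the actual matrix shown in the post.

```python
# Illustrative sketch of a PHA-style consequence matrix as a data structure.
# Severity levels, descriptors, and dollar thresholds are hypothetical
# placeholders, not the matrix from the post.

CONSEQUENCE_MATRIX = {
    # level: (worker safety, public safety/health, environment, annual economic loss)
    1: ("First aid injury", "No public impact", "Contained on site", "< $100k"),
    2: ("Recordable injury", "Public complaint", "Minor reportable release", "$100k to $1M"),
    3: ("Lost-time injury", "Public medical treatment", "Significant off-site release", "$1M to $10M"),
    4: ("Fatality or multiple serious injuries", "Public hospitalization or fatality",
        "Major environmental damage", ">= $10M"),
}

def overall_severity(worker: int, public: int, environment: int, economic: int) -> int:
    """The hazard's severity is the worst (highest) level across the four
    categories -- which is exactly how a fatality and a >= $10M loss end up
    ranked as equivalent consequences."""
    return max(worker, public, environment, economic)

print(overall_severity(worker=4, public=1, environment=2, economic=1))  # 4
```

The single returned severity level is where the commensurability happens: once every category maps onto the same 1-to-4 scale, the matrix has already decided that a life and a dollar figure can occupy the same cell.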

The post explains that this valuation does not arise from a single principled source. Instead, it reflects a mix of regulatory precedents, corporate practices, industry norms, insurance and legal experience, and pragmatic needs. Regulatory agencies often use a “value of statistical life” for rulemaking; companies borrow values from peers and historical practice; insurance realities influence what level of risk is tolerable; and organizational decision-making ultimately drives how consequence categories are defined. None of these is morally neutral, yet process safety teams rarely interrogate their matrices’ embedded assumptions.
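For illustration, the cost-benefit logic behind a "value of statistical life" can be sketched as follows. The VSL figure and the scenario numbers are assumptions chosen for the example, not values from the post or any regulator.

```python
# Sketch of the implicit cost-benefit logic behind a "value of statistical
# life" (VSL). The VSL figure and scenario numbers are illustrative
# assumptions only.

VSL = 10_000_000  # assumed dollars per statistical life

def risk_reduction_justified(project_cost: float,
                             fatality_prob_reduction: float,
                             people_exposed: int) -> bool:
    """A safeguard 'pays for itself' if its cost is below the expected
    statistical lives saved, valued at the VSL."""
    expected_benefit = VSL * fatality_prob_reduction * people_exposed
    return project_cost <= expected_benefit

# Example: a $2M safeguard that cuts annual fatality probability by 1e-3
# for 500 exposed workers has an expected benefit of $5M, so it "passes".
print(risk_reduction_justified(2_000_000, 1e-3, 500))  # True
```

Whatever number is plugged in for VSL silently determines which safeguards clear the bar, which is the point the post makes about none of these sources being morally neutral.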

One reason for this avoidance is practical. PHAs are time-bounded exercises with multidisciplinary participants. Raising explicit questions about the worth of a human life can derail the discussion into philosophy, law, or theology—territory outside most participants’ immediate expertise or comfort. Instead, teams rely on qualitative descriptors like “catastrophic” or “unacceptable” and let the matrix do the comparative work silently.

The post argues that AI changes the stakes by making the implicit explicit. As AI systems are deployed in real-time decision support, alarm management, advanced controls, or other operational domains, they require defined objective functions, weights, and trade-offs. AI cannot operate on vague qualitative judgments; it requires specific numerical inputs and thresholds. Thus, organizations that embed AI into safety and control systems will be forced to formalize the values they have historically left implicit.
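A minimal sketch of what this formalization can look like: a decision-support scorer needs explicit numeric weights across the consequence categories, and those weights are the value judgment. The weights and candidate actions below are hypothetical, not anything proposed in the post.

```python
# Minimal sketch of why AI decision support forces trade-offs into the open:
# a scoring function needs numeric weights for each consequence category.
# All weights and candidate actions are hypothetical.

WEIGHTS = {
    "worker_safety_risk": 0.4,   # these numbers ARE the value judgment
    "public_risk": 0.3,
    "environmental_risk": 0.2,
    "economic_loss_normalized": 0.1,
}

def score(action: dict) -> float:
    """Lower is better: weighted sum of normalized (0-1) risk estimates."""
    return sum(WEIGHTS[k] * action[k] for k in WEIGHTS)

candidate_actions = [
    {"name": "keep running",
     "worker_safety_risk": 0.6, "public_risk": 0.2,
     "environmental_risk": 0.3, "economic_loss_normalized": 0.1},
    {"name": "controlled shutdown",
     "worker_safety_risk": 0.1, "public_risk": 0.05,
     "environmental_risk": 0.1, "economic_loss_normalized": 0.8},
]

best = min(candidate_actions, key=score)
print(best["name"])  # which action wins depends entirely on the weights
```

There is no way to deploy such a scorer without someone choosing the weights, which is precisely the moment the historically implicit valuation becomes explicit.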

This shift reveals an uncomfortable truth: organizations already make decisions every day that treat human life as commensurable with economic value. AI will not introduce this; it will expose it. The post distinguishes three concepts that are often conflated: the intrinsic moral worth of a human life, the legal or regulatory assignment of a statistical value of life for policy decisions, and the practical organizational decisions about how much to invest in risk reduction. Ethical failure occurs when these are not transparently examined and justified.

Finally, the post suggests that AI's exposure of these assumptions presents an opportunity. Rather than avoid uncomfortable questions, organizations must articulate who sets these values, on what basis, and how accountability will be maintained when encoded systems make consequential decisions. This transparency, the author argues, is essential if process safety is to remain both effective and ethically grounded in an AI-augmented future.