Via Negativa
George Lewin-Smith and Mark Titmarsh
Sep 1, 2024
Via negativa is an ancient theological concept that seeks to define something by what it is not. The term comes from the Latin for 'the way of negation', and the principle is used to understand or define something so complex (such as God, or AI) that it lies beyond full human comprehension. The approach enables decisions to be made whilst acknowledging the limits of human understanding and language in the face of complexity. Via negativa has rigorous practical applications in one's personal life, decision making, religion and insurance underwriting.
The approach can be exemplified by comparing regular reasoning with via negativa:
Regular reasoning:
God is good (vague and broad; unclear what this means)
God is love (vague and broad; unclear what this means)
Via negativa:
God is not evil (can be exemplified in the real world)
God is not hateful (can be exemplified in the real world)
Via negativa is a powerful approach as it:
challenges typical positive descriptions (via positiva) and provides real-world examples of what something is not, or what not to do
recognises, with humility, the incomprehensibility or complexity of what it describes
prevents attributing human-like qualities to what it describes
challenges the idea that we can have definitive or quantifiable knowledge of things, grounding descriptions in observable actions rather than subjective intellectual assertions
defines the factual, real-world essence of something highly complex through inversion
Good decision making involves avoiding bad decisions, and this approach is key to Charlie Munger and Berkshire Hathaway's success. Munger famously said,
"All I want to know is where I'm going to die, so I'll never go there."
Applying via negativa to generative AI insurance
The risks of generative AI remain largely speculative. The complexity of AI systems, their hyper-scale deployment and the pace of innovation make the risks associated with this technology difficult to evaluate.
At Testudo, we provide insurance for losses arising from AI systems. We approach underwriting by excluding known areas of accumulated AI risk identified in our loss data. Given that AI is a moving target, relying on historical data alone will not be sufficient, so we have developed a technology platform to help our clients actively track and mitigate identifiable risks.
Our approach of removing known risk is more robust and practical than forecasting where losses may come from. Whilst we appreciate that the entire AI ecosystem needs to consider systemic risk, theorising tends to value complexity over simplicity and can lead to obvious risks being missed.
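As a loose illustration of this exclusion-first mindset (a minimal sketch, not a description of Testudo's actual underwriting model), the Python example below screens a hypothetical book of AI exposures by rejecting anything that touches a known loss category, rather than scoring forecasts of where new losses might arise. The Exposure class, the category names and the example book are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical categories of accumulated AI loss, as might be
# identified in historical claims data. Names are illustrative only.
KNOWN_LOSS_CATEGORIES = {
    "copyright_infringement",
    "hallucinated_professional_advice",
    "training_data_privacy_breach",
}

@dataclass
class Exposure:
    name: str
    categories: set[str]  # risk categories the use case touches

def insurable(exposure: Exposure) -> bool:
    """Via negativa screen: accept a risk only if it avoids every
    known loss category. No attempt is made to forecast where
    new losses might come from."""
    return not (exposure.categories & KNOWN_LOSS_CATEGORIES)

# A hypothetical book of AI exposures.
book = [
    Exposure("customer-support chatbot", {"hallucinated_professional_advice"}),
    Exposure("internal document search", {"training_data_privacy_breach"}),
    Exposure("code-completion assistant", {"open_source_licensing"}),
]

for e in book:
    print(e.name, "->", "retain" if insurable(e) else "exclude")
```

The design mirrors the via negativa principle: the screen only ever shrinks the set of acceptable risks, and learning from a new claim means adding a category to the exclusion set rather than re-estimating a predictive model.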
Insurance is a way for a market to self-regulate: it puts a price (an insurance premium) on assurance, regulatory standards and frameworks. Financial claims allow us to understand AI risks quantifiably as they occur in the real world, not in simulated evaluation environments. Acting on unsubstantiated claims of risk in areas that are neither systemic nor ruinous may do the industry more harm than good (iatrogenics: harm caused by the intervention itself).
George and Mark