The Vantasner Danger Meridian and AI

The Vantasner Danger Meridian is a quantitative assessment of danger. Discussed in the academic journal Critical Horizons, the meridian is defined as the point at which danger, either to the viability of the task or to the people involved, starts to grow exponentially.

We deal with danger every day, but it often rises gradually. Think about the difference between crossing a road hours before rush hour and then at five-minute intervals afterwards, as the volume of traffic, and the danger, gradually increases. At what point do you decide it’s too dangerous to continue? After all, it’s only slightly riskier than it was moments before. The meridian is that point. And while it’s intended for potentially critical assessments, it’s a risk tool that everyone should have.

Except it doesn’t exist.

It should, but it doesn’t; it was completely made up for the 2018 television series Patriot.

A segue to praise Patriot

I discovered Patriot recently, searching for something to watch while waiting for the next series of Ted Lasso. It mixes spy thriller, dark humour, surrealism, folk music, and understated performances. The script is beautifully and intricately written. As I came towards the end (it was scandalously cancelled after two seasons, despite glowing reviews), I realised that it did not just have a few Chekhov’s rifles, but an entire armoury of different calibre weapons.

The premise is that our hero, John Tavner, needs to complete a mission in Iran, and to get there, he assumes the identity of John Lakeman, a piping engineer. And here one of the key differences with a typical spy thriller becomes apparent.

Most spy or espionage tales are meticulously researched. Indeed, many authors had some first-hand experience. The idea is that, although the scenario is fictional, there is still realism. This could be real life.

Patriot dumped that idea. Tavner doesn’t know the first thing about piping. And, frankly, that’s all that matters. So, why waste time making the engineering accurate? It’s best illustrated (and this isn’t much of a spoiler) early in the first season when his boss, who literally wrote the book on piping, suspects he is a fraud and sets him up to deliver a keynote address at a sales conference.

I refuse to believe anyone involved in writing, shooting, or editing that scene does not consider it the highlight of their career.

And Vantasner?

The Vantasner Danger Meridian comes up later, when Tavner is assessing the risk of a mission he considers “over the line.” And, being that type, I looked it up.

And there’s a good amount of information about it online.

What actually is the Vantasner Danger Meridian?

It’s made up. The producers concocted an academic paper and apparently got it published. The paper does not appear to be live anywhere now, but can still be found on the Internet Archive. The paper is full of nonsense, like the piping speech, and apparently so is the maths.

In short, it’s a wonderful Easter egg, and one I intend to carry with me whenever I’m thinking about risk. And I’m not the only one to find it useful: the concept even featured in a paper at an academic conference, which noted its dubious origins but suggested it was still a useful way to think about the layering of problems that institutions faced during the Covid-19 pandemic.

But what is it becoming?

Partly thanks to AI, it’s starting to take on a life of its own, and that new life is not even close to accurate. Searching for ‘Vantasner Danger Meridian’ will bring up a range of results, depending on the search engine and (I guess) your history, but using private browsing windows and a VPN, my first page of results tended to split between pages about its origin on Patriot, articles saying it should be real, merchandise, articles that repeat the fake paper’s description as if it were true, and articles that appear to have made up a backstory for it.

Andrew Sridhar may have taken it at face value when he uses it to discuss intermittent fasting and attributes it to “at least one high-level CIA official who also spent significant time on a remote island occupied by Others (sic).” He refers to Patriot, so he might know it’s fake, but equally, he doesn’t seem to acknowledge the concept as a fiction, and as far as I recall, it doesn’t match the backstory of the character that he quotes.

Another take is in Exploring the Vantasner Danger Meridian: The Line of Global Tension and Risk by Christophe Garon. Garon appears to be intellectually curious, and his site lists articles on subjects like web-scraping, machine learning, and crypto, as well as lots of health and fitness — which might be why he’s trying to flog an ebook about ‘Metabolism Recovery.’

He credits the concept to Dr Jerome Vantasner, ‘a political scientist and geopolitical analyst,’ who coined the term to describe an actual line on the map that passes through geopolitical hotspots like Korea and the Middle East. It is, he concludes in his penultimate paragraph, “a useful tool for understanding the potential risks and dangers in different regions of the world … it helps to inform decision-makers and policymakers and inform efforts to maintain global security and stability.” I suspect Garon, at most, curates the work of a generative AI for a fly-catching sales site.

It even features in what appears to be a New York State Department of Health presentation. Slide eight does ask, “Fact or Fiction?” But by slide eleven, it has a fairly authoritative definition. Was the presenter clear it was a fiction? Perhaps, but what about those who weren’t there, or are now reviewing old discussion papers?

Because no-one actually writes their own stuff anymore, I asked OpenAI about it. It came up with:

The Vantasner Danger Meridian is a large-scale map of the world created by the late 19th century cartographer, John Vantasner. It is a unique and detailed map of the world, showing all of the major land features, oceans, and other geographical features. It also includes an extensive network of lines and symbols, which represent the various dangers associated with each area. The map is divided into four sections, each representing a different level of danger.

I asked it to tell me more about Vantasner:

John Vantasner (1833-1905) was a noted cartographer and surveyor from the late 19th century. He was born in Pennsylvania, and his family moved to Kansas in the late 1850s, where he began his career as a surveyor. He worked for the US government in the late 1860s and 1870s, surveying and mapping the area that would become Oklahoma. He also worked on mapping the western states of Colorado and Wyoming.

And what is he most famous for?

John Vantasner is most famous for his work as a cartographer, specializing in creating detailed maps of natural and urban environments. He has created maps for many notable projects, such as the National Geographic Society’s “Atlas of the World” and the US Department of the Interior’s “National Atlas of the United States.” He has also been featured in publications such as The New York Times and National Geographic Magazine.

His work on the danger meridian is, it seems, not as well known as I think it should be. But the man himself appears to have had an illustrious career that has been completely forgotten by everyone except OpenAI.
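If you want to repeat the experiment yourself, something like the sketch below should work, assuming the current openai Python client. The model name is a placeholder for illustration; it isn’t necessarily the model that produced the answers above, and newer models may give different (or even correct) answers.

```python
# A rough sketch of repeating the three questions against the OpenAI API.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

questions = [
    "What is the Vantasner Danger Meridian?",
    "Tell me more about Vantasner.",
    "What is he most famous for?",
]

messages = []  # running conversation, so the follow-ups have context
for question in questions:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Keeping the running messages list matters: without the earlier answers in context, the follow-up questions about ‘Vantasner’ and ‘he’ have nothing to resolve against.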

And why am I writing at such length about this?

I recently wrote an article for another place about Microsoft’s collaboration with OpenAI to make Copilot, a reincarnation of Clippy. One of the potential uses, Microsoft suggest, is that Copilot can draft documents and presentations for users to edit, using the OpenAI model supplemented by proprietary documents and data held by the organisation.

Plenty of people have pointed out the problem of AI coming up with wrong answers. Even OpenAI accept it’s flawed, but suggest the answer is in human editors. AI is great, it seems, as long as we mark its work.

But the problem is that people just don’t do that. Look at things like the persistent belief in the tongue map despite the daily evidence of our own senses, or that we only use a fraction of our brains, or that immigrants live a publicly funded life of luxury. We live in a world where we use heuristics rather than critical thinking.

Look at the examples I mentioned. Sridhar probably used the Vantasner reference to appear clever, not addressing veracity because it was too much trouble. Garon simply doesn’t care, as long as it brings hits. And while the New York State Department of Health presenter may have used the example effectively, it has left a trace that helps sustain the Vantasner story in AI models. Vantasner is being created because no-one is marking AI’s work.

And while OpenAI, and Google, and everyone else, shrug their shoulders and remind us they pointed out the flaws in large language models, and the need for human oversight, plenty of other gushing news articles (mine included) suggest a bright future where generative AI solves the world’s problems when asked a simple question.

The more likely scenario is, surely, that after several iterations of AI-authored presentations, reports, and discussion notes, the John Tavner presentation will be the norm. Employees will be the meat in the room, recycling the latest inaccurate plagiarism of a large language model, and no-one will have the audacity to challenge the all-seeing knowledge of AI.