The Perpetual Quest for a Truth Machine
🌈 Abstract
The article explores the historical attempts to create "truth machines": devices or systems that could automatically generate or prove universal truths, from the 13th-century philosopher Ramon Llull's "Ars Magna" to modern language models like ChatGPT. It traces the evolution of this utopian dream of automated certainty through the work of thinkers like Leibniz and Boole, and discusses how current language models fall short of this goal.
🙋 Q&A
[01] Ramon Llull and the "Ars Magna"
1. What was Ramon Llull's goal in creating the "Ars Magna"? Llull's goal was to create a book or "mechanical missionary" that could truthfully answer any question about faith and convert unbelievers to Christianity, not through violence but through logical argument.
2. How did Llull's "Ars Magna" work? The "Ars Magna" was a logic machine that combined different divine attributes on rotating paper discs to generate logically true statements (a minimal sketch of the mechanism follows this list). Llull believed this could prove the existence of the Christian God.
3. What was the ultimate outcome of Llull's efforts? Despite Llull's belief that his logic machine would gain new Christian converts, he was ultimately unsuccessful. There are reports that he was stoned to death while on a missionary trip to Tunisia.
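A minimal sketch of that combinatorial mechanism, with the nine attributes and the statement template chosen here for illustration rather than taken from the article:

```python
from itertools import combinations

# Llull's nine "dignities" (divine attributes); the figures of the Ars Magna
# paired them by rotating concentric paper discs against one another.
DIGNITIES = [
    "goodness", "greatness", "eternity", "power", "wisdom",
    "will", "virtue", "truth", "glory",
]

def ars_magna_pairings(attributes):
    """Enumerate every unordered pairing, as one full turn of the discs would,
    and render each pairing as a statement."""
    for a, b in combinations(attributes, 2):
        yield f"{a.capitalize()} is {b}."

for statement in ars_magna_pairings(DIGNITIES):
    print(statement)
# 36 statements in all: mechanical and exhaustive, but "true" only if one
# already grants the theology packed into the terms.
```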
[02] Gottfried Wilhelm Leibniz and the Search for a "Divine Language"
1. What was Leibniz's goal in creating a mechanical logic machine? Leibniz wanted to create a "divine language" that could perfectly represent the relationships among human thoughts, in order to discover the fundamental "alphabet of human thought" and achieve certainty in any realm.
2. How did Leibniz's approach differ from Llull's? Leibniz found the basic concepts of Llull's "Ars Magna" too arbitrary, and instead proposed that all concepts could be described as combinations of simpler, more fundamental concepts (see the sketch after this list).
3. How was Leibniz's idea received? While Leibniz believed his logic machine would usher in utopia, accelerate science, and perfect theology, the idea was ridiculed by figures like Jonathan Swift, who portrayed it as producing nothing but meaningless combinations of words.
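One encoding Leibniz is known to have experimented with assigns each primitive concept a prime "characteristic number" and gives a compound concept the product of its parts; the particular concepts below are illustrative stand-ins, not drawn from the article:

```python
# Primitive concepts get prime characteristic numbers (the assignment is arbitrary).
PRIMITIVE = {"animal": 2, "rational": 3, "mortal": 5}

def characteristic(*parts):
    """Characteristic number of a concept composed of primitive parts."""
    n = 1
    for part in parts:
        n *= PRIMITIVE[part]
    return n

human = characteristic("animal", "rational", "mortal")  # 2 * 3 * 5 = 30

def every(x_number, y_number):
    """'Every X is Y' holds exactly when Y's number divides X's number."""
    return x_number % y_number == 0

print(every(human, PRIMITIVE["rational"]))                 # True: every human is rational
print(every(human, characteristic("animal", "rational")))  # True: every human is a rational animal
print(every(PRIMITIVE["animal"], human))                   # False: not every animal is human
```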
[03] George Boole and the Formalization of Logic
1. What was George Boole's key insight in developing his system of logic? Boole had the insight that algebraic variables could stand for ideas rather than just numbers, allowing him to perform algebra on ideas to calculate whether they are true or false (a minimal sketch follows this list).
2. How did Boole's work build on the ideas of Llull and Leibniz? Like Llull and Leibniz, Boole was obsessed with creating a system of language that could put disagreements to rest and calculate truth with mathematical certainty.
3. What was the initial reception and impact of Boole's work? While Boole was delighted to learn of Leibniz's similar efforts, his own work failed to capture the interest of the mathematicians of his day and was relegated to philosophy departments.
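A minimal sketch of that move, in a modern rendering rather than Boole's own notation: restrict variables to the values 0 and 1, read them as propositions, and ordinary algebra does logical work:

```python
# Variables take only the values 0 (false) and 1 (true). Then multiplication is
# AND, 1 - x is NOT, and x + y - x*y is inclusive OR.
def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y
def NOT(x):    return 1 - x

# Boole's fundamental law x**2 == x singles out exactly the values 0 and 1,
# and logical identities can be settled by calculation:
for x in (0, 1):
    for y in (0, 1):
        # De Morgan's law: NOT(x AND y) == (NOT x) OR (NOT y)
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
print("The identity holds for every assignment of truth values.")
```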
[04] Claude Shannon and the Foundations of Modern Computing
1. How did Claude Shannon build on the work of Boole? As a graduate student, Shannon realized the full potential of Boole's ideas, demonstrating how Boolean logic could optimize the routing of telephone switches and laying the foundation for modern computing (see the first sketch after this list).
2. What was the significance of Shannon's insights? By giving rise to the digital realm of zeros and ones, Shannon's work promised to finally transmute messy human thought into the organized language of logic, perhaps even in pursuit of truth.
3. How did Shannon's language models relate to the earlier efforts of Llull and Leibniz? Shannon's early language models, which mimicked the statistical patterns of natural language, can be seen as an evolution of the earlier attempts by Llull and Leibniz to create mechanical devices that could generate or prove truths (see the second sketch after this list).
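First, a sketch of the switching-circuit insight, using a made-up network: contacts wired in series behave like AND, contacts wired in parallel behave like OR, so simplifying the Boolean expression removes hardware:

```python
from itertools import product

def original_network(a, b):
    # (a AND b) OR (a AND NOT b): four switch contacts in the hypothetical circuit
    return (a and b) or (a and not b)

def simplified_network(a, b):
    # Boolean algebra reduces the expression to plain a, so one contact suffices
    return a

# Exhaustive check that the cheaper network routes identically.
assert all(
    original_network(a, b) == simplified_network(a, b)
    for a, b in product([False, True], repeat=2)
)
print("One contact does the work of four.")
```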
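Second, a sketch of the language-model side: a toy word-level bigram model in the spirit of Shannon's "approximations to English"; the corpus is an invented stand-in:

```python
import random
from collections import defaultdict

corpus = "the machine proves the truth and the truth frees the mind".split()

# Record which word follows which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=8, seed=1):
    """Babble by repeatedly sampling a statistically plausible next word."""
    random.seed(seed)
    words = [start]
    while len(words) < length and follows[words[-1]]:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate())  # fluent-sounding and statistically plausible, checked against nothing
```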
[05] The Limitations of Modern Language Models
1. How do current large language models like ChatGPT differ from the earlier "truth machines"? Unlike the earlier efforts to create machines that could derive or prove universal truths, current language models are trained simply to mimic the statistics of human language, without any concern for the truthfulness of the output (illustrated in the sketch after this list).
2. What are the key limitations of these modern language models? The article argues that these language models are "missing something even Llull and Leibniz believed was essential to their machines: reason." They make many of the same reasoning mistakes as humans and cannot self-correct to arrive at better answers.
3. How does the article characterize the current state of "truth machines"? The article suggests that modern language models have not progressed much beyond Llull's "Ars Magna," and have in fact "automated the uncertainty" rather than achieving the elusive goal of automated truth.
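A toy illustration of that training objective, assuming nothing beyond standard next-token prediction; the probabilities are invented for the example:

```python
import math

# A language model is trained to assign high probability to whatever token
# actually came next in the training text. The loss below is minimized by
# matching corpus statistics; nothing in it asks whether the sentence is true.
next_token_probs = {"Paris": 0.6, "Lyon": 0.3, "Atlantis": 0.1}  # P(token | "The capital of France is")

def cross_entropy(observed_token):
    return -math.log(next_token_probs[observed_token])

print(cross_entropy("Paris"))     # low loss: a common continuation in the data
print(cross_entropy("Atlantis"))  # higher loss only because it is rare, not because it is false
```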