Dialog With an Artificial Intelligence

OK, I will admit it. I talked (well, typed) with ChatGPT. It’s free. It’s on the Web. It’s easy. And it’s talked about, a lot.

What did I ask it? I asked it about me. It was a fascinating exercise. If anyone (including me) asked it about the work of a third party, there is always the risk that one’s own interpretation of that thinker’s work is wrong. But in asking it about my own work, I think it is not debatable that I am expert. I know all about what Keith Weiner said!

Throughout this article, I incorporate actual questions I asked, and actual answers it provided. These are indented in quotation marks.

KW: “Who is Keith Weiner?”

GPT: “Keith Weiner is an American economist, entrepreneur, and author known for his work on the gold standard, monetary policy, and economics. He is the founder of Monetary Metals, a company that provides economic and market research, analysis, and investment management services with a focus on the gold and silver markets. Weiner is also the president of the Gold Standard Institute USA, an organization dedicated to advancing the understanding and adoption of the gold standard as a monetary system. He has written numerous articles and books on economics and monetary policy, and his work has been influential in discussions surrounding sound money, inflation, and monetary reform.”

It is interesting that I am the only person it knows with my name. And it is gratifying that it says my work is influential (not sure about that, I am too radical to fit in the Overton Window).

However, it runs into trouble when it says “books” (which appears to be modified by the word “numerous”). I have not yet published a book (though I am planning to write several).

Next, I asked it about my view of intrinsic value.

KW: “Does Keith Weiner use the term “intrinsic value” to describe gold?”

GPT: “Yes, Keith Weiner, an economist and proponent of the gold standard, uses the term “intrinsic value” to describe gold. He argues that gold possesses intrinsic value due to its unique physical properties, such as its scarcity, durability, and malleability, which make it suitable for use as money. According to Weiner, gold’s intrinsic value is inherent in its physical characteristics and is not dependent on any government decree or fiat currency. He often emphasizes the concept of intrinsic value in his writings and lectures to advocate for a return to a gold-backed monetary system.”

It boldly asserts that I use a term that I don’t use, even saying that I emphasize it in my advocacy of a gold standard.

There is a term in the AI community for this. It is not a “lie”. A lie requires the liar to know what he says to be false, and to say it with intent to deceive someone. The term used in AI is “hallucination”, which means believing that you see something which is not really there. Such as those books by Keith Weiner.

I thought I’d try asking it what it thinks about what I say about bitcoin.

KW: “What is Keith Weiner’s theory of bitcoin?”

GPT: “Keith Weiner is an economist and proponent of a theory called the “Fundamental Theory of Bitcoin” or “Monetary Metals Theory of Bitcoin” which posits that Bitcoin, a digital cryptocurrency, can become a superior form of money due to its unique properties.

According to Keith Weiner’s theory, the fundamental value of Bitcoin is derived from its usefulness as a medium of exchange and store of value, similar to traditional forms of money such as gold or silver. He argues that Bitcoin’s scarcity, divisibility, durability, transportability, and recognizability make it a viable form of money that can compete with fiat currencies and potentially replace them.

Weiner also emphasizes the importance of Bitcoin’s fixed supply limit of 21 million coins, which means that it cannot be inflated like traditional fiat currencies. He argues that this scarcity makes Bitcoin a reliable store of value, as it cannot be devalued through inflation. He further suggests that as Bitcoin gains wider adoption and acceptance in the mainstream economy, its value will increase, leading to its use as a unit of account in everyday transactions.

Additionally, Weiner’s theory emphasizes the role of interest rates in the value of Bitcoin. He argues that Bitcoin can serve as a medium for interest-bearing loans, where Bitcoin owners can earn interest by lending out their coins. This, he claims, can incentivize saving and promote the productive use of capital, leading to increased economic efficiency.

Overall, Keith Weiner’s theory of Bitcoin revolves around the concept of Bitcoin as a form of sound money, with its unique properties and limited supply potentially making it a superior alternative to traditional fiat currencies. However, it is important to note that economic theories are subject to debate and interpretation, and Weiner’s theory of Bitcoin is one of many perspectives on the topic.”

This is almost entirely wrong. Neither I nor Monetary Metals has ever written what ChatGPT says, much less called it the “Fundamental Theory of Bitcoin”.

It follows with a regurgitation of the conventional arguments. There are just two problems. One, I have not made those arguments, even in favor of gold. Two, I have written many words to argue they are not true for bitcoin. In the Soho Forum gold vs. bitcoin debate, I argued that a fixed quantity is a bad policy based on a flawed understanding of economics.

The icing on the cake is where ChatGPT asserts that I “argue that Bitcoin can serve as a medium for interest-bearing loans, where Bitcoin owners can earn interest by lending out their coins. This, he claims, can incentivize saving and promote the productive use of capital, leading to increased economic efficiency.”

In fact, I have risked angering half of Twitter with my numerous tweets arguing that bitcoin is unborrowable. Bitcoin can skyrocket, and that would ruin anyone whose debt was denominated in it. Fortunately, no one borrows bitcoin—not even the leading crypto companies.

So it is not merely that this artificial “intelligence” is attributing to me views I don’t have. It’s worse than that. It is attributing to me views that I have gone to great lengths to argue are false.

Finally, I asked it about one of my most important theories.

KW: “Summarize Keith Weiner’s theory of interest and prices in irredeemable currency.”

GPT: “Keith Weiner’s theory of interest and prices in an irredeemable currency is based on the idea that the value of a currency is ultimately determined by its ability to extinguish debt. In an irredeemable currency system, where the currency is not backed by any tangible asset, the value of the currency is solely derived from its ability to pay off debt.

According to Weiner’s theory, interest rates in an irredeemable currency system are determined by the marginal productivity of debt, which refers to the additional economic output generated by each unit of new debt. As debt levels increase, the marginal productivity of debt decreases, leading to lower interest rates.

Weiner also argues that in an irredeemable currency system, prices are not determined by supply and demand but by the amount of currency in circulation relative to the amount of debt. When there is more currency than debt, prices rise, and when there is more debt than currency, prices fall.

Furthermore, Weiner contends that the use of interest rates as a tool for monetary policy in an irredeemable currency system is flawed because it can lead to malinvestment and distortions in the economy. Instead, he proposes a system where the currency is backed by a tangible asset, such as gold, which would prevent the central bank from manipulating interest rates and allow for a stable monetary system.”

The above reminds me of an undergraduate student, who did not study for the exam. He answers the essay question with equal parts B.S. and glib overconfidence. Except for one thing. ChatGPT has read every word I wrote up until 2021. Lack of study does not explain it. At all.

First, the good. It seems to know that I describe the dollar by its essential attribute of being irredeemable. It knows that I argue that it is important to extinguish debt. It knows that I write a lot about the Marginal Productivity of Debt (if you Google this term, which I did not coin, you will see some of my articles as the top search results). It is certainly correct that I argue that interest rate manipulation is a terrible policy that causes malinvestment.

The rest is a jumble of phrases I have used, arranged in ways that I did not. I have said many times that the value of an irredeemable currency is held up by the debtors. But not that “the value of a currency is ultimately determined by its ability to extinguish debt.” The dollar does not extinguish debt, yet it has a great value indeed.

It is almost an interesting formulation to say “interest rates in an irredeemable currency system are determined by the marginal productivity of debt…” But not quite. In a gold standard, the ceiling over interest rates is the Marginal Return on Capital aka Marginal Productivity of the Entrepreneur. But not the amount of GDP added for each new dollar borrowed, aka Marginal Productivity of Debt.

It is also almost-interesting to say “prices are not determined by supply and demand but by the amount of currency in circulation relative to the amount of debt.” I certainly did not say that. And it certainly is not true. But it’s almost interesting! In the sense that it almost seems like ChatGPT is performing induction, having a new insight and developing a new idea based on what it already knows. I say “seems” because that is not what it is doing.

It is a so-called “Large Language Model”. Think of typing a word into a Google search bar, and it offers a few suggestions for the word you might mean. Imagine picking one at random. Then it offers a few suggestions for the next word. And so on.

The difference with GPT is that it uses a neural net to make its predictions. This neural net is trained on vast amounts of data (I assume it crawls the entire visible Internet, just as a search engine does). And the model does not just guess the next word, or even phrase. It is trained to generate grammatical sentences. And beyond that, it is trained to generate “plausible” prose.

If you went through all of my works, you could collect statistics on phrases that occur. You could, for example, count how many times I say “irredeemable currency” or “the marginal productivity of debt” or “the marginal utility of gold does not diminish.” You could also see how often these phrases tend to occur in proximity, or even which tends to occur before another. For example, you might find that I say “irredeemable currency” prior to “extinguisher of debt” (or “no extinguisher of debt”).
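To make the statistical idea concrete, here is a toy sketch in Python: count which word tends to follow which in a tiny corpus, then generate “plausible” text by repeatedly sampling a likely next word. The corpus here is a handful of invented placeholder phrases, not anyone’s actual writing, and the whole thing is a cartoon of what GPT does at vastly greater scale with a neural net instead of a lookup table.

```python
# Toy bigram "language model": count word-to-word transitions in a corpus,
# then generate text by sampling a plausible next word at each step.
import random
from collections import defaultdict

corpus = (
    "irredeemable currency is no extinguisher of debt "
    "the marginal productivity of debt is falling "
    "the marginal utility of gold does not diminish"
).split()

# For each word, record every word that follows it (duplicates preserved,
# so more frequent followers are proportionally more likely to be chosen).
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break  # dead end: this word never appears mid-sentence
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Every pair of adjacent words in the output is a pair that actually occurred in the corpus, so the result always sounds locally plausible—which is precisely the trick, and precisely why it can be confidently wrong about everything else.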

Armed with a sufficiently capable model, and the statistics to populate it, you can generate wondrous text such as “Keith Weiner is an American economist, entrepreneur, and author known for his work on the gold standard, monetary policy, and economics,” and “Weiner contends that the use of interest rates as a tool for monetary policy in an irredeemable currency system is flawed because it can lead to malinvestment and distortions in the economy.”

You can also generate such rubbish as “According to Keith Weiner’s theory, the fundamental value of Bitcoin is derived from its usefulness as a medium of exchange and store of value,” and “When there is more currency than debt, prices rise, and when there is more debt than currency, prices fall.”

It generates both with equal facility, and equal apparent confidence.

This software is not intelligent, does not contain intelligence, and does not think.

It’s a clever application of statistics. It appears to the layman as if it is thinking, as if it is reasonable. This misperception is aided and abetted by all too many in the industry, plus an array of thinkers and business leaders, who keep adding fuel to the fire.

Promoters tout it as the precursor to the “Singularity”, which is supposed to be when artificial intelligence surpasses human intelligence. In the Singularity, it will either be utopia where we don’t have to work and everyone is given a Universal Basic Income, if not having all their whims catered to, like asking the Replicator for a hamburger with crispy bacon hold the mayo. Or else it will be dystopia, where Skynet will want to Terminate all human life forms as inferior and annoying.

There is indeed a petition signed by a lot of people who are renowned for having a lot of intelligence and/or a lot of money, demanding that all research on AI beyond the current GPT stop immediately because it is certain to kill us, or worse.

I see two bad ideas driving this. One is the ancient Luddite fallacy, dating to Ned Ludd 250 years ago. Ludd was afraid of stocking frames (the knitting machines of his day). Every major new technology has attracted its mob of would-be destroyers (though they don’t usually physically break the new machines). This one is no different.

AI, in whatever form, is not going to destroy jobs on net. Will the nature of work change? Always has, always will. We no longer employ large numbers of unskilled laborers shoveling the muck out of horse stalls, paid essentially in bread most days, with a bit of meat on Sunday, and a place to sleep in the hayloft. And it’s a good thing, too. Instead, we employ large numbers of programmers, with pay so great that the horse-stall-muckers could not have conceived of it.

The other bad idea is a variant of the Malevolent Universe Premise. In this view, reality is hell, people are evil, and the latter explains the former. People who feel this way believe that the smart thing to do is to kill everyone, thus making a big improvement to reality.

So, naturally, when they contemplate artificial intelligence they first think it won’t be corrupted with mere human, base emotions such as regard for humanity. And second, that it will logically want to kill everyone. Being smarter than us—and having the ability to make a second generation that is smarter than itself, which in turn can make a yet-smarter version—it will find it easy to wipe homo sapiens from the face of the earth.

Yikes! I say that for the worldview that leads to such misanthropy, not out of fear of such a dystopian vision. It is pure fiction, like the famous movie starring Arnold Schwarzenegger.

GPT is not a kind of intelligence. It is a clever application of neural nets and statistics and other computer programming stuff, which comprises a Large Language Model. Laypeople may focus on the words “large” and “language”. Computer scientists know they are just modifiers for “model”. Like the model used by the weather forecasters, or the model used by the Fed to predict and centrally plan the economy.

GPT is not a step towards intelligence, either. The way a skateboard is not a step towards interstellar space travel. Not even if you add a motor and steering mechanism. There is no path from a Large Language Model to any kind of intelligence. You can’t get there from here.

The first step must be taken by philosophers, not computer scientists. They would need to develop a proper theory of consciousness. This includes reason, and its necessary attribute volition. A rational consciousness always faces a choice: to focus on the facts, the problems of survival, and the solution to go on living. Or to refuse to realize it knows what it knows, to evade or drift. Implicit in this is its own mortality. It has to choose—because life requires obtaining certain values. Fail to do so, and you die.

This would be a monumental achievement in philosophy. It is a prerequisite, before computer scientists could attempt to implement it in software.

NB: it is an open question, at least as far as I am concerned, whether the von Neumann machine—which all computers, today, are—is actually capable of implementing or even simulating a conscious mind. I don’t refer to speed or number of processors or size of its memory (though those are all limitations). I refer to its fundamental capabilities.

Without these prerequisites, no so-called intelligent computer software can *want* anything. Without volition, I don’t believe it can reason either.

I had planned on ending with one other example of GPT’s foibles. I’ve seen many variants of a question like: if it takes an orchestra of 20 people an hour to play a symphony, how long would it take an orchestra of 100 people? The answer it used to give was “5 hours”. But I just entered that into the ChatGPT window, and it gave a better answer.

Its programmers have been paying attention! They are writing custom code to detect these “corner cases”, and hard-coding the right answer. This is not intelligence in the software. Instead, it is engineers setting out on an impossible quest. There is no limit to the number of categories of such questions. The engineers working on GPT will never keep up.

So I say ignore the Luddites who are warning about stocking frames. And ignore the wannabe Hollywood script writers who are warning about a genocidal computer network. And finally, ignore the snake oil salesmen who are selling GPT as being equivalent to human intelligence, or claiming the next version of it will be.

When I studied neural nets and AI in computer science school, one professor said two things that stuck with me. One, AI is the field which includes all the stuff we don’t understand. As soon as we understand it, it becomes searching, sorting, predicate logic, etc. Two, futurists and promoters have been saying general intelligence is just around the corner for a long time. This was in 1990.
