The Nature of A.I.
We start this discussion of A.I. with a simple Python computer
program, which gets at A.I.'s essence. If your utility function does not
include simple Python programming, skip ahead to the philosophy of
A.I., which, we think, is essential for determining A.I.'s long-term investment
implications and future.
This example of generative A.I. is taken from the
book "Make Your Own Neural Network" by Tariq Rashid (2016), who attended the
University of Cambridge and wrote an Amazon best seller on neural networks.
His simple 3-layer neural network recognizes hand-written digits, such as those on
checks, and converts them to computer-ready numeric form. It looks like this:
[Figure: Input Data → Hidden Layer(s) → Output Data]
We quote an A.I. excerpt from Google Search:
"In AI a hidden layer is an intermediate layer of artificial neurons in a neural
network located between input and output layers, responsible for learning
complex, non-linear patterns by transforming data via weights, biases, and
activation functions.
Key aspects of Hidden Layers in Neural Networks:
· Function: They process information extracted from the previous layer, allowing
the model to understand intricate patterns.
· Non-linearity: By using activation functions (e.g. …Sigmoid), hidden layers
allow neural networks to model complex, non-linear relationships, which linear
models cannot."
Hidden layers enable computers to handle the
substantial non-linearities of human nature and the discontinuities of history,
and to find patterns in them. Whether A.I. can handle enough of the above remains
to be seen.
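The activation function named in the excerpt above can be sketched in a few lines of Python. This is our own toy illustration, not code from Rashid's book: the sigmoid squashes any weighted sum into the range (0, 1), which is what gives a hidden layer its non-linearity.

```python
import numpy as np

def sigmoid(x):
    # squashes any real input into (0, 1), introducing non-linearity
    return 1.0 / (1.0 + np.exp(-x))

# a linear combination of inputs (weights times inputs, plus bias) ...
linear = np.array([-2.0, 0.0, 2.0])
# ... passed through the sigmoid becomes a non-linear response
print(sigmoid(linear))  # roughly [0.12, 0.5, 0.88]
```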
First, some simple background on A.I.
This is an example of a linear model:
y = ax + b
This is an example of a non-linear model:
y = ax² + bx + c
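The difference matters in practice: a straight line cannot follow a curve. Here is a small illustration we added (the coefficients a = 2, b = 1, c = 3 are arbitrary), fitting both forms to the same quadratic data with numpy:

```python
import numpy as np

x = np.linspace(-3, 3, 50)
y = 2 * x**2 + 1 * x + 3          # data generated by y = ax² + bx + c

# degree-1 (linear) fit vs. degree-2 (quadratic) fit
lin_err = np.abs(np.polyval(np.polyfit(x, y, 1), x) - y).max()
quad_err = np.abs(np.polyval(np.polyfit(x, y, 2), x) - y).max()

print(lin_err)   # large: the straight line cannot follow the curve
print(quad_err)  # essentially zero: the model matches the data's form
```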
Now make ax² millions of times more
complicated. That is the artificial intelligence that can be applied to
unstructured data such as documents, images, and events. Python is an
amazing computer language and a favorite for A.I. because it can
reduce complication back to simplicity; i.e., it can
easily create the matrices that computers handle. Back to our check example in
Python code: the numpy library name means
"numerical Python". The dot function calculates the
degree to which two vectors point in the same direction. self refers to the
particular instance on which the general function is called.
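The dot-product point can be checked directly. These toy vectors are our own, not from the book: a dot product is large and positive when two vectors point the same way, and zero when they share no direction.

```python
import numpy as np

a = np.array([1.0, 0.0])
same = np.array([2.0, 0.0])   # same direction as a
perp = np.array([0.0, 3.0])   # perpendicular to a

print(np.dot(a, same))  # 2.0 -> positive: vectors point the same way
print(np.dot(a, perp))  # 0.0 -> zero: no shared direction
```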
You can see from the code below how the main variables flow.
# input an assumed weight matrix wih (input layer -> hidden layer)
# calculate signals into the hidden layer
hidden_inputs = numpy.dot(self.wih, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# calculate signals into the final output layer
final_inputs = numpy.dot(self.who, hidden_outputs)
final_outputs = self.activation_function(final_inputs)
# calculate the error to be minimized
output_errors = targets - final_outputs
# the hidden layer error is the output_errors, split by weight,
# and recombined at the hidden nodes
hidden_errors = numpy.dot(self.who.T, output_errors)
# update the weights for the links between the input and hidden
# layers; this recalculates wih all over again
self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
Proceed iteratively as the algorithm progressively
reduces the error. That's it! To confirm how far A.I. has developed, Perplexity
is now able to reference the text on page 156 of the above source, and tell us
that the last term, numpy.transpose(inputs), resolves
a matrix-dimension mismatch in the multiplication.
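The fragments above can be assembled into a complete, runnable sketch. This is our own minimal version in the spirit of Rashid's three-layer network; the layer sizes, learning rate, and toy training pair below are our assumptions, not figures from the book.

```python
import numpy as np

class NeuralNetwork:
    """A minimal 3-layer network in the spirit of Rashid's example.
    Layer sizes, learning rate, and toy data are our assumptions."""

    def __init__(self, inodes, hnodes, onodes, lr):
        # small random weight matrices: input->hidden (wih), hidden->output (who)
        self.wih = np.random.normal(0.0, inodes ** -0.5, (hnodes, inodes))
        self.who = np.random.normal(0.0, hnodes ** -0.5, (onodes, hnodes))
        self.lr = lr
        self.activation_function = lambda x: 1.0 / (1.0 + np.exp(-x))

    def train(self, inputs_list, targets_list):
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T
        # forward pass through hidden and output layers
        hidden_outputs = self.activation_function(np.dot(self.wih, inputs))
        final_outputs = self.activation_function(np.dot(self.who, hidden_outputs))
        # output error, then split back to the hidden layer by weight
        output_errors = targets - final_outputs
        hidden_errors = np.dot(self.who.T, output_errors)
        # gradient-descent weight updates
        self.who += self.lr * np.dot(output_errors * final_outputs * (1.0 - final_outputs), hidden_outputs.T)
        self.wih += self.lr * np.dot(hidden_errors * hidden_outputs * (1.0 - hidden_outputs), inputs.T)

    def query(self, inputs_list):
        inputs = np.array(inputs_list, ndmin=2).T
        hidden_outputs = self.activation_function(np.dot(self.wih, inputs))
        return self.activation_function(np.dot(self.who, hidden_outputs))

# toy demonstration: the error shrinks as training repeats
np.random.seed(0)
net = NeuralNetwork(inodes=3, hnodes=5, onodes=1, lr=0.3)
x, t = [0.9, 0.1, 0.8], [0.99]
before = abs(t[0] - net.query(x)[0, 0])
for _ in range(200):
    net.train(x, t)
after = abs(t[0] - net.query(x)[0, 0])
print(before, after)  # the error after training is smaller
```

A real digit-recognition run would loop this train() call over thousands of labeled images rather than one toy pair.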
Generative A.I. is a subfield of A.I. The former
analyzes for patterns in unstructured data such as text or images; the latter
analyzes for patterns in data that are more structured.
As you can see even in a simple model, generative
A.I. has progressed very far, but it is very compute-intensive because of the very
large number of matrix operations. As current events illustrate, Wall Street
doesn't know how to deal with rapid A.I. progression. This also presents a
challenge, though a lesser one, for value investors.
To give you an idea of how much U.S. capital is
being poured into generative A.I.: it's massive. For the year 2026, four U.S.
companies are estimated to spend $600 billion on generative A.I. facilities,
including chips, computers, buildings, and power sources. "Consumption of fixed
capital" for the entire U.S. economy (including government) in 2025 was around $4.99 trillion (nominal GDP
was around $29 trillion). 1 Assume that economy-wide depreciation equaled
capital expenditures. In 2026, those four companies are thus expected to invest
around 12% of 2025 economy-wide capital expenditures in A.I. A.I., in turn, is going to have to
produce massive revenue improvements in order to justify those expenses, which
certainly favors capital rather than labor.
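The 12% figure follows from simple arithmetic on the numbers as stated above:

```python
ai_capex = 600e9            # estimated 2026 generative A.I. spend, four firms
economy_capex = 4.99e12     # 2025 U.S. "consumption of fixed capital"

share = ai_capex / economy_capex
print(round(share * 100))   # about 12 (percent)
```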
Wall Street is now also considering the employment
disruptions that A.I. will bring to the entire economy: notably packaged
software companies, private credit companies, video game makers, lawyers,
financial researchers, and so on. The effect of A.I. goes beyond present
employment. Computers have effectively unlimited ability to remember, thus precluding
future employment as well. Considering the incredible competition among A.I.
companies to produce the best product, we do not think that any companies
having "moats" (pricing power) will emerge soon. As value investors, we are
staying away.
What really gives us pause is what a university
professor said: "The students probably know more about A.I. than the
professors." Our experience with A.I. is that it is very useful, provided you
ask the right questions. We are now holding, as of 2/26, about 30% of our
portfolio in cash.
The following is a more philosophical view of A.I.
What A.I. does is find complex, non-linear patterns. Do past patterns imply
the future? Yes, partly, but only partly. Take an example found in Yuval Harari's
Nexus (2024): "How many Romans or Jews
in the days of Tiberius could have anticipated that a splinter Jewish sect
would eventually take over the Roman Empire….When Jesus was asked about paying
taxes to Tiberius's government and answered, 'Render unto Caesar the things
that are Caesar's and unto God the things that are God's'; nobody could imagine
the impact his response would have on the separation of church and state in the
American republic two millennia later." 2 But Harari and, separately,
Yudkowsky (2025), and many others, worry that A.I. can take over, and
eventually rule, humans because A.I. is "smarter."
What "smarter" means is a matter of definition.
Yudkowsky's definition is very specific. Compared with biological brains,
computer brains have:
· Speed. Computer brains can switch on and off (convey information) 10,000
times faster than humans.
· Artificial intelligence can replicate geniuses on demand.
· Capability for higher-quality thinking, not subject to systematic errors.
· A.I.s can make copies of their minds; A.I. can then improve those minds.
· There is a profit motive to make computer minds smarter and smarter,
not stopping at human levels. 3
What Harari and Yudkowsky worry about is the
"alignment problem". In the conventional A.I. that we learned, one tries to
maximize revenues or minimize costs using understood objective functions and
variables. In generative A.I., on the
other hand, we cannot at all assume the computer will have understandable,
human goals. This results from both the probabilistic nature of the computer
algorithms and their sheer complexity. For example, we learned that
"correlation is not causation"; you also have to refer to existing theory. In
generative A.I., if there is a present correlation between sunspots and stock
market prices, sunspots automatically go into the algorithm because "it
works."
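The sunspot point is easy to demonstrate. In this toy illustration of ours, two independent random walks (made-up "sunspots" and "stock prices") will often show a sizeable sample correlation even though there is, by construction, no causal link:

```python
import numpy as np

rng = np.random.default_rng(42)

# two independent random walks: "sunspots" and "stock prices"
sunspots = np.cumsum(rng.normal(size=500))
prices = np.cumsum(rng.normal(size=500))

r = np.corrcoef(sunspots, prices)[0, 1]
print(r)  # often far from zero, despite no causal link
```

An algorithm judging only "it works" would happily include such a variable; only outside theory says it should not.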
Harari writes, "…the new computer network will not
necessarily be either bad or good. All we know for sure is that it will be alien
and it will be fallible. We therefore need to build institutions that will be
able to check not just familiar human weaknesses like greed and hatred but also
radically alien errors. There is no technological solution to this
problem. It is, rather, a political problem." 4 In more engineering prose, Yudkowsky
writes, "Nobody knows how to engineer exact desires into AI, idealistic or not….The inner workings of batteries and rocket engines are
well understood, governed by known physics recorded in careful textbooks. AIs,
on the other hand, are grown; and no one understands their inner workings." 5
In more conventional terms, complex A.I. computer
networks have to be brought up well, in a balanced
manner. Tell us how to do that.
Furthermore, there is the setting of goals. U.S. firms
have their sights set on artificial general intelligence, the ultimate
intelligence that will exceed that of humans (and will solve all our problems).
The above suggests that this is impossible; the human brain (sculpted by evolution)
consists of around 3,000 molecular cell types. In contrast, the type of
intelligence that will result from computers won't be human and will likely
deal with its own problems (which could include pesky humans).
1. The key consideration in short-term computer development is whether it will be
humanly useful. Will A.I. increase human productivity, although it may take a
while to hit the bottom line? Will it increase the trend in real S&P 500
earnings beyond 2.0%, in spite of climate change?
2. Limits should be placed on artificial general intelligence. Really.