The Evolution of AI

 

                    There is unanimous agreement among the members on these fundamental points:

1.    Automation and technological progress are essential to the general welfare, the economic strength, and the defense of the Nation.

…

3.    Achievement of technological progress without sacrifice of human values requires a combination of private and governmental action, consonant with the principles of a free society.

                         President’s Advisory Committee on Labor-Management Policy, 1962 1

                                 John F. Kennedy

                          

The Stanford 2023-2024 Course Bulletin listed 149 courses in AI. The next edition listed 252. AI is clearly not going away and is beginning to permeate society. The market structure that fast-changing AI evolves will determine how AI is priced into the S&P 500, and thus we discuss why we value AI.

Dr. Fei-Fei Li is known as the “Godmother of Artificial Intelligence.” A Stanford University professor, she grew up in Chengdu, China, came to the U.S. at age 15, where her parents operated a laundry, graduated from Princeton with a degree in physics, and earned a PhD in electrical engineering from Caltech. She created the database ImageNet, for which she and volunteers labeled 3.2 million images; by giving researchers a common benchmark, the database aided the development of AI image recognition. The author of more than 300 peer-reviewed journal articles, Li founded the Stanford Institute for Human-Centered AI and is the recipient of the 2025 Queen Elizabeth Prize for Engineering. She teaches computer science courses at Stanford.

Dr. Daron Acemoglu is an economist at MIT. He is primarily concerned with the effect of globalization on developing economies and has written extensively on AI. His recent book, “Power and Progress” (2023), details the effects of science and technology upon England first, then the United States, Europe, and the developing world. One might think that the development of science and technology can only be to the good of all concerned. Acemoglu argues that the path science and technology take matters a great deal in determining who wins and who loses: “…machines can be used either to replace workers through automation or to increase workers’ marginal productivity.” 2

England

England invented the industrial economy: “…starting around 1750, there was fairly rapid productivity growth, especially in textiles. The earliest spinning machines increased output per hour of work nearly 400 times….But real incomes moved little, if at all. The spending power of an unskilled worker in the mid-1800s was about the same as it had been fifty or even one hundred years earlier.” 3 (Thus Karl Marx could rail against the inequities of the capitalist economy, which was destined to fail because it produced a return to capital (machinery) and only a subsistence return to labor.) What changed things in the second half of the nineteenth century was the railways. “But railways did more than just automate work. To start with, advances in railways generated many new tasks in the transport industry, and the jobs demanded a range of skills, from construction to ticket sales, maintenance, engineering, and management….More important were linkages from railways to other industries…The growth of railways increased the demand for a range of inputs, especially higher-quality iron products used in stronger metal rails and more powerful locomotives. Lowering the cost of moving coal (affected the price of other goods)…” 4

United States

The cotton commodity economy of the South required only unskilled labor. After the industrialized North won the Civil War, the American innovation of Ford Model T-style mass production, involving standardized parts and processes, and likewise the railroads, greatly changed the economy. “Backward and forward linkages to other industries were critical in improving the productive capacity of the economy.” 5 “…Yet it would be incorrect to think that postwar technology was preordained to go in a direction that created new tasks to compensate for the ones that were being rapidly automated away. The contest over the direction of technology heated up as an integral part of struggles between labor and management, and advances in worker-friendly technologies cannot be separated from the institutional setup that induced companies to move in this direction, especially because of the countervailing powers of the labor movement. The Wagner Act and trade unions’ critical role in the war effort strengthened labor…” 6

The international predominance of post-World War II U.S. manufacturing also enabled management to pay its workers high wages. That situation has, of course, changed by 2025. Tariffs will probably not increase overall U.S. manufacturing.

Europe

“…a direction of technology that sought to make best use of both skilled and unskilled workers spread from the United States to Europe. Many more countries thus started investing both in manufacturing and services for their growing mass markets…However, there was no uniformity in technological choices across countries. Each organized its economy in unique ways, and these choices naturally affected how new industrial knowledge was used and further developed. Whereas in Nordic countries technological investments were made in the context of the corporatist model (mandating collective bargaining by industry), German industry developed a distinctive system of apprenticeship training, which structured both labor-management relations and technology choices…” 7

A 6/1/25 article in the NYT cites a major problem in present trade negotiations between the Trump Administration and the E.U.: “The U.S. Right Loathes the E.U. How Are They Going to Negotiate Trade?”

Less Developed Countries

Poverty reduction and rapid economic growth in cases such as South Korea, Taiwan, and China did not just come from the import of Western production methods. Economic success resulted from new technologies enabling the human resources of these countries to be used more effectively.

But the author states, “The current trajectory of AI is precluding this pathway. Digital technologies, robotics, and other automation equipment have already increased the skill requirements of global production and started remaking the international division of labor, for example, contributing to a process of deindustrialization…” 8  AI here is understood to be “automation” that replaces workers.

But why has AI evolved in the direction of automation, surveillance and data collection?

Scaling and the Theoretical Problem with Bottom-up Analysis

Five companies are constructing massive data centers to run training models and to answer user queries. Why is scaling so important? Kaplan and McCandlish (2020) of the firm OpenAI published a paper quantifying the difference between a model’s predictions and the ground truth (the training loss), the quantity that guides the optimization process. The study showed that “Language modeling performance improves smoothly as we increase the model size, dataset size and the amount of compute used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship…”  The graph below shows the power-law relationship for all three variables. Therefore, the bigger the better.

[Graph: power-law relationship of language-model loss with model size, dataset size, and compute]
Current AI models can have more than 100 billion (sic) parameters. 9 NVIDIA happily supplies this market.
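To give readers a concrete sense of what a power law of this kind implies, here is a minimal Python sketch; the exponent and constant are hypothetical values chosen only for illustration, not figures taken from the paper.

    # Illustrative power-law scaling of loss with parameter count:
    # L(N) = (N_c / N) ** alpha. The constants below are hypothetical,
    # chosen only to show the shape of the relationship.
    def loss(n_params, alpha=0.08, n_c=1e14):
        return (n_c / n_params) ** alpha

    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} parameters -> loss {loss(n):.3f}")

    # Each tenfold increase in parameters removes roughly the same
    # fraction of the remaining loss, so further large gains require
    # enormously larger models, datasets, and compute.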

The core idea of generative AI is that words can be represented as vectors in a high-dimensional space, and vectors that lie close together have similar meanings. We think it is useful for some of our readers to get a sense of how generative AI actually works, and thus we reproduce a simple model in Exhibit I. The problem we have with generative modeling is that it is entirely bottom-up, predicting only the next word of an answer. This is less of a problem if the intent is to model existing practice, and we grant that there is a lot still to be applied. But it does not model totally new ideas, because it is not capable of seeing things anew, from top-down first principles.
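As a minimal illustration of the vector idea, the Python sketch below uses made-up three-dimensional vectors (real models learn vectors with hundreds or thousands of dimensions); words with related meanings end up with a high cosine similarity.

    import math

    # Toy word vectors (hypothetical values, for illustration only).
    vectors = {
        "king":  [0.9, 0.7, 0.1],
        "queen": [0.9, 0.8, 0.2],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine_similarity(a, b):
        # Semantic similarity as the cosine of the angle between two vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
    print(cosine_similarity(vectors["king"], vectors["apple"]))  # low: unrelated words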

Christopher Mims is the technology columnist for the Wall Street Journal. In a 4/25/25 article he writes, “We Now Know How AI ‘Thinks’ – and It’s Barely Thinking at All.”  Researchers have devised “new techniques for probing large language models, part of a growing field known as ‘mechanistic interpretability’, which probes how generative AI models ‘think’…In a series of recent essays, Mitchell (of the Santa Fe Institute) argued that a growing body of work shows that it seems possible models develop gigantic ‘bags of heuristics’, rather than create more efficient mental models of situations and then reasoning through the tasks at hand. (‘Heuristic’ is a fancy word for a problem-solving shortcut.)” Vafa of Harvard “…used as source material Manhattan’s dense network of streets and avenues. The (generative AI) result did not look anything like a street map of Manhattan. Close inspection revealed the AI had inferred all kinds of impossible maneuvers: routes that leap over Central Park or travel diagonally for many blocks. Yet the resulting model managed to give usable turn-by-turn directions between any two points in the borough with 99% accuracy. Even though its topsy-turvy map would drive any motorist mad, the model had essentially learned separate rules for navigating in a multitude of situations, from every possible starting point…All of this work suggests that under the hood, today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts…When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted.”

We also grant that total novelty is not our forte. This is the province of entrepreneurial business.

The goal of generative AI is ultimately artificial general intelligence, an AI that is “smarter” than humans. What “smarter” (like consciousness) means is subject to debate. If “smarter” means “able to store and manipulate data,” AI in the form of a simple calculator wins. If “smarter” means being fast and accurate in identifying the key changing variable, “intelligence” becomes a lot more complicated.

A 5/25/25 article by Cade Metz, a technology correspondent for the NYT, suggests that intelligent AI still lacks one variable. “All these systems are deployed into the world, humans tell them what to do and guide them through moments of novelty, change and uncertainty…‘A.I. needs us: living beings, producing constantly, feeding the above’, says Mateo Pasquinelli, a professor at Ca’ Foscari in Venice. ‘It needs the originality of our ideas and our lives.’ That is why many other scientists say no one will reach A.G.I. without a new idea – something beyond the powerful neural networks that merely find patterns in the data. That new idea could arrive tomorrow. But, even then, the industry will need years to develop it.”

That single idea is likely to be top-down. As the linguist Noam Chomsky wrote:

     “…the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among (vector) data points but to create explanations (What is going on here?)….When linguists seek to develop a theory for why a given language works as it does, they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program. Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution.” They don’t, like living animals (like Fido), know what goals to seek.

 

Diversity at the Demand Level

Unlike the Internet, which facilitated general communication, generative AI, Sam Altman of OpenAI predicted, would devolve into very specific uses. Subject to scaling at the supplier level, this is indeed happening.

According to the 4/26/25 WSJ, although 78% of all surveyed companies said that they used AI in at least one function, only 1% (sic) of all companies said they scaled their investments. Companies seem to apply AI at the department level but not at the company level, which requires that all the usable data be FAIR, that is, as Professor Vasseur of the Haas Business School says, it must be:

·      FINDABLE. It may be all over the place, in spreadsheets and notes.

·      ACCESSIBLE. It may sit in disparate computer systems.

·      INTEROPERABLE. It may not exist in a common data format.

·      REUSABLE. It can’t get lost.
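As a rough sketch of what auditing data against these four criteria might look like, the Python fragment below uses hypothetical dataset records and field names invented purely for illustration.

    # Hypothetical catalog entries; real firms would draw these from a data catalog.
    datasets = [
        {"name": "sales_2024", "registered_in_catalog": True, "api_accessible": True,
         "format": "parquet", "archived": True},
        {"name": "field_notes", "registered_in_catalog": False, "api_accessible": False,
         "format": "xlsx", "archived": False},
    ]

    COMMON_FORMATS = {"parquet", "csv", "json"}

    def fair_report(d):
        return {
            "findable": d["registered_in_catalog"],          # listed where people can find it
            "accessible": d["api_accessible"],               # reachable from other systems
            "interoperable": d["format"] in COMMON_FORMATS,  # stored in a common format
            "reusable": d["archived"],                       # preserved so it cannot get lost
        }

    for d in datasets:
        print(d["name"], fair_report(d))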

Firmwide data usability takes a lot of commitment and work, and as the economy changes the models have to be rerun. The four data criteria are more easily fulfilled at the departmental level. But, as a 6/1/25 FT article by Rana Foroohar indicates, “…new research showing higher youth unemployment may be linked to AI rollouts. We knew the disruption was here, but suddenly you can really feel it. Industries like finance, healthcare, software and media (also personnel) are at the epicentre of the change…But the speed and scale of AI disruption could also bring white-collar backlash; surveys show the public wants its deployment to slow down.”

Acemoglu’s book presents an alternative view of AI: “…technology should be steered in a direction that best suits a workforce’s skills, and education should…simultaneously adapt to new skill requirements.” 10

Human Purposes

There is nothing in technology that is inherently anti-democratic. Consider two ways of initializing a value in a Python program:

1) i = int(input("Enter i: "))   # the value is supplied by the user at run time

2) i = 7                         # the value is fixed inside the program

The first asks for user input and then calculates an answer. The second produces an answer without new data, based only upon data internal to the program. But democracy requires more than the people’s input. Facebook, for example, maximizes “user engagement,” and therefore ad revenues, by showing sites that excite negative (right-brained) emotions. Democracy also requires structure. “Wikipedia does not try to monopolize user attention because it does not finance itself by advertisements.” Anonymous volunteers can make any edit, but layers of administrators, promoted from the volunteers, can make maintenance or dispute-resolution edits, and so on. “Wikipedia’s experience suggests that the wisdom of the crowd, so dearly admired by early techno-optimists of social media, can work, but only when underpinned and monitored by the right organizational structure.” 11 There is a difference between a populist mob and people operating in a fact-based political culture, where all are heard.

We think that AI is now worth heeding, because its use is proliferating. But that said, AI can only reflect a partial consensus. Overall civil society must be based upon citizen values.

 

Footnotes