

Promises and Pitfalls of Technology
Josephine Wolff, "How Is Technology Changing the World, and How Should the World Change Technology?" Global Perspectives 2, no. 1 (1 February 2021): 27353. https://doi.org/10.1525/gp.2021.27353
Technologies are becoming increasingly complicated and increasingly interconnected. Cars, airplanes, medical devices, financial transactions, and electricity systems all rely on more computer software than they ever have before, making them seem both harder to understand and, in some cases, harder to control. Government and corporate surveillance of individuals and information processing relies largely on digital technologies and artificial intelligence, and therefore involves less human-to-human contact than ever before and more opportunities for biases to be embedded and codified in our technological systems in ways we may not even be able to identify or recognize. Bioengineering advances are opening up new terrain for challenging philosophical, political, and economic questions regarding human-natural relations. Additionally, the management of these large and small devices and systems is increasingly done through the cloud, so that control over them is both very remote and removed from direct human or social control. The study of how to make technologies like artificial intelligence or the Internet of Things “explainable” has become its own area of research because it is so difficult to understand how they work or what is at fault when something goes wrong (Gunning and Aha 2019).
This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions. This can seem like an impossible task in light of the rapid pace of technological change and the sense that its continued advancement is inevitable, but many countries around the world are only just beginning to take significant steps toward regulating computer technologies and are still in the process of radically rethinking the rules governing global data flows and exchange of technology across borders.
These are exciting times not just for technological development but also for technology policy—our technologies may be more advanced and complicated than ever but so, too, are our understandings of how they can best be leveraged, protected, and even constrained. The structures of technological systems are determined largely by government and institutional policies, and those structures have tremendous implications for social organization and agency, ranging from open-source, open systems that are highly distributed and decentralized to those that are tightly controlled and closed, structured according to stricter and more hierarchical models. And just as our understanding of the governance of technology is developing in new and interesting ways, so, too, is our understanding of the social, cultural, environmental, and political dimensions of emerging technologies. We are realizing both the challenges and the importance of mapping out the full range of ways that technology is changing our society, what we want those changes to look like, and what tools we have to try to influence and guide those shifts.
Technology can be a source of tremendous optimism. It can help overcome some of the greatest challenges our society faces, including climate change, famine, and disease. For those who believe in the power of innovation and the promise of creative destruction to advance economic development and lead to better quality of life, technology is a vital economic driver (Schumpeter 1942). But it can also be a tool of tremendous fear and oppression, embedding biases in automated decision-making processes and information-processing algorithms, exacerbating economic and social inequalities within and between countries to a staggering degree, or creating new weapons and avenues for attack unlike any we have had to face in the past. Scholars have even contended that the emergence of the term technology in the nineteenth and twentieth centuries marked a shift from viewing individual pieces of machinery as a means to achieving political and social progress to the more hazardous view that larger-scale, more complex technological systems were a semiautonomous form of progress in and of themselves (Marx 2010). More recently, technologists have sharply criticized what they view as a wave of new Luddites, people intent on slowing the development of technology and turning back the clock on innovation as a means of mitigating the societal impacts of technological change (Marlowe 1970).
At the heart of fights over new technologies and their resulting global changes are often two conflicting visions of technology: a fundamentally optimistic one, which holds that humans use technology as a tool to achieve greater goals, and a fundamentally pessimistic one, which holds that technological systems have reached a point beyond our control. Technology philosophers have argued that neither of these views is wholly accurate and that a purely optimistic or pessimistic view of technology is insufficient to capture the nuances and complexity of our relationship to technology (Oberdiek and Tiles 1995). Understanding technology and how we can make better decisions about designing, deploying, and refining it requires capturing that nuance and complexity through in-depth analysis of the impacts of different technological advancements and the ways they have played out in all their complicated and controversial messiness across the world.
These impacts are often unpredictable as technologies are adopted in new contexts and come to be used in ways that sometimes diverge significantly from the use cases envisioned by their designers. The internet, designed to help transmit information between computer networks, became a crucial vehicle for commerce, introducing unexpected avenues for crime and financial fraud. Social media platforms like Facebook and Twitter, designed to connect friends and families through sharing photographs and life updates, became focal points of election controversies and political influence. Cryptocurrencies, originally intended as a means of decentralized digital cash, have become a significant environmental hazard as more and more computing resources are devoted to mining these forms of virtual money. One of the crucial challenges in this area is therefore recognizing, documenting, and even anticipating some of these unexpected consequences and providing mechanisms to technologists for how to think through the impacts of their work, as well as possible other paths to different outcomes (Verbeek 2006). And just as technological innovations can cause unexpected harm, they can also bring about extraordinary benefits—new vaccines and medicines to address global pandemics and save thousands of lives, new sources of energy that can drastically reduce emissions and help combat climate change, new modes of education that can reach people who would otherwise have no access to schooling. Regulating technology therefore requires a careful balance of mitigating risks without overly restricting potentially beneficial innovations.
Nations around the world have taken very different approaches to governing emerging technologies and have adopted a range of different technologies themselves in pursuit of more modern governance structures and processes (Braman 2009). In Europe, the precautionary principle has guided much more anticipatory regulation aimed at addressing the risks presented by technologies even before they are fully realized. For instance, the European Union’s General Data Protection Regulation focuses on the responsibilities of data controllers and processors to provide individuals with access to their data and information about how that data is being used not just as a means of addressing existing security and privacy threats, such as data breaches, but also to protect against future developments and uses of that data for artificial intelligence and automated decision-making purposes. In Germany, Technische Überwachungsvereine, or TÜVs, perform regular tests and inspections of technological systems to assess and minimize risks over time, as the tech landscape evolves. In the United States, by contrast, there is much greater reliance on litigation and liability regimes to address safety and security failings after the fact. These different approaches reflect not just the different legal and regulatory mechanisms and philosophies of different nations but also the different ways those nations prioritize rapid development of the technology industry versus safety, security, and individual control. Typically, governance innovations move much more slowly than technological innovations, and regulations can lag years, or even decades, behind the technologies they aim to govern.
In addition to this varied set of national regulatory approaches, a variety of international and nongovernmental organizations also contribute to the process of developing standards, rules, and norms for new technologies, including the International Organization for Standardization and the International Telecommunication Union. These multilateral and NGO actors play an especially important role in trying to define appropriate boundaries for the use of new technologies by governments as instruments of control for the state.
At the same time that policymakers are under scrutiny both for their decisions about how to regulate technology and for their decisions about how and when to adopt technologies like facial recognition themselves, technology firms and designers have also come under increasing criticism. Growing recognition that the design of technologies can have far-reaching social and political implications means that there is more pressure on technologists to take into consideration the consequences of their decisions early on in the design process (Vincenti 1993; Winner 1980). The question of how technologists should incorporate these social dimensions into their design and development processes is an old one, and debate on these issues dates back to the 1970s, but it remains an urgent and often overlooked part of the puzzle because so many of the supposedly systematic mechanisms for assessing the impacts of new technologies in both the private and public sectors are primarily bureaucratic, symbolic processes rather than carrying any real weight or influence.
Technologists are often ill-equipped or unwilling to respond to the sorts of social problems that their creations have—often unwittingly—exacerbated, and instead point to governments and lawmakers to address those problems (Zuckerberg 2019). But governments often have few incentives to engage in this area. This is because setting clear standards and rules for an ever-evolving technological landscape can be extremely challenging, because enforcement of those rules can be a significant undertaking requiring considerable expertise, and because the tech sector is a major source of jobs and revenue for many countries that may fear losing those benefits if they constrain companies too much. This indicates not just a need for clearer incentives and better policies for both private- and public-sector entities but also a need for new mechanisms whereby the technology development and design process can be influenced and assessed by people with a wider range of experiences and expertise. If we want technologies to be designed with an eye to their impacts, who is responsible for predicting, measuring, and mitigating those impacts throughout the design process? Involving policymakers in that process in a more meaningful way will also require training them to have the analytic and technical capacity to more fully engage with technologists and understand more fully the implications of their decisions.
At the same time that tech companies seem unwilling or unable to rein in their creations, many also fear they wield too much power, in some cases all but replacing governments and international organizations in their ability to make decisions that affect millions of people worldwide and control access to information, platforms, and audiences (Kilovaty 2020). Regulators around the world have begun considering whether some of these companies have become so powerful that they violate the tenets of antitrust laws, but it can be difficult for governments to identify exactly what those violations are, especially in the context of an industry where the largest players often provide their customers with free services. And the platforms and services developed by tech companies are often wielded most powerfully and dangerously not directly by their private-sector creators and operators but instead by states themselves for widespread misinformation campaigns that serve political purposes (Nye 2018).
Since the largest private entities in the tech sector operate in many countries, they are often better poised to implement global changes to the technological ecosystem than individual states or regulatory bodies, creating new challenges to existing governance structures and hierarchies. Just as it can be challenging to provide oversight for government use of technologies, so, too, can oversight of the biggest tech companies, which have more resources, reach, and power than many nations, prove to be a daunting task. The rise of network forms of organization and the growing gig economy have added to these challenges, making it even harder for regulators to fully address the breadth of these companies’ operations (Powell 1990). The private-public partnerships that have emerged around energy, transportation, medical, and cyber technologies further complicate this picture, blurring the line between the public and private sectors and raising critical questions about the role of each in providing critical infrastructure, health care, and security. How can and should private tech companies operating in these different sectors be governed, and what types of influence do they exert over regulators? How feasible are different policy proposals aimed at technological innovation, and what potential unintended consequences might they have?
Conflict between countries has also spilled over significantly into the private sector in recent years, most notably in the case of tensions between the United States and China over which technologies developed in each country will be permitted by the other and which will be purchased by other customers, outside those two countries. Countries competing to develop the best technology is not a new phenomenon, but the current conflicts have major international ramifications and will influence the infrastructure that is installed and used around the world for years to come. Untangling the different factors that feed into these tussles as well as whom they benefit and whom they leave at a disadvantage is crucial for understanding how governments can most effectively foster technological innovation and invention domestically as well as the global consequences of those efforts. As much of the world is forced to choose between buying technology from the United States or from China, how should we understand the long-term impacts of those choices and the options available to people in countries without robust domestic tech industries? Does the global spread of technologies help fuel further innovation in countries with smaller tech markets, or does it reinforce the dominance of the states that are already most prominent in this sector? How can research universities maintain global collaborations and research communities in light of these national competitions, and what role does government research and development spending play in fostering innovation within its own borders and worldwide? How should intellectual property protections evolve to meet the demands of the technology industry, and how can those protections be enforced globally?
These conflicts between countries sometimes appear to challenge the feasibility of truly global technologies and networks that operate across all countries through standardized protocols and design features. Organizations like the International Organization for Standardization, the World Intellectual Property Organization, the United Nations Industrial Development Organization, and many others have tried to harmonize these policies and protocols across different countries for years, but have met with limited success when it comes to resolving the issues of greatest tension and disagreement among nations. For technology to operate in a global environment, there is a need for a much greater degree of coordination among countries and the development of common standards and norms, but governments continue to struggle to agree not just on those norms themselves but even the appropriate venue and processes for developing them. Without greater global cooperation, is it possible to maintain a global network like the internet or to promote the spread of new technologies around the world to address challenges of sustainability? What might help incentivize that cooperation moving forward, and what could new structures and processes for governance of global technologies look like? Why has the tech industry’s self-regulation culture persisted? Do the same traditional drivers for public policy, such as politics of harmonization and path dependency in policy-making, still sufficiently explain policy outcomes in this space? As new technologies and their applications spread across the globe in uneven ways, how and when do they create forces of change from unexpected places?
These are some of the questions that we hope to address in the Technology and Global Change section through articles that tackle new dimensions of the global landscape of designing, developing, deploying, and assessing new technologies to address major challenges the world faces. Understanding these processes requires synthesizing knowledge from a range of different fields, including sociology, political science, economics, and history, as well as technical fields such as engineering, climate science, and computer science. A crucial part of understanding how technology has created global change and, in turn, how global changes have influenced the development of new technologies is understanding the technologies themselves in all their richness and complexity—how they work, the limits of what they can do, what they were designed to do, how they are actually used. Just as technologies themselves are becoming more complicated, so are their embeddings and relationships to the larger social, political, and legal contexts in which they exist. Scholars across all disciplines are encouraged to join us in untangling those complexities.
Josephine Wolff is an associate professor of cybersecurity policy at the Fletcher School of Law and Diplomacy at Tufts University. Her book You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches was published by MIT Press in 2018.
Mapping the technology evolution path: a novel model for dynamic topic detection and tracking
- Open access
- Published: 14 September 2020
- Volume 125, pages 2043–2090 (2020)
- Huailan Liu, Zhiwang Chen, Jie Tang, Yuan Zhou (ORCID: 0000-0002-9198-6586) & Sheng Liu
Identifying the evolution path of a research field is essential to scientific and technological innovation. There have been many attempts to identify the technology evolution path based on topic models or social network analysis, but many of them have methodological deficiencies. First, many studies have considered only a single type of information (text or citation information) in the scientific literature, which may lead to incomplete technology path mapping. Second, the number of topics in each period cannot be determined automatically, making dynamic topic tracking difficult. Third, data mining methods are not effectively combined with visual analysis, which affects the efficiency and flexibility of mapping. In this study, we developed a method for mapping the technology evolution path using a novel non-parametric topic model, the citation-involved Hierarchical Dirichlet Process (CIHDP), to achieve better topic detection and tracking in scientific literature. To better present and analyze the path, D3.js is used to visualize the splitting and fusion of the evolutionary path. We used this novel model to map the artificial intelligence research domain; the successful mapping of its evolution path demonstrates the proposed method’s validity and merits. After incorporating citation information, we found that CIHDP can map a complete path evolution process and performs better than the Hierarchical Dirichlet Process and LDA. This method can be helpful for understanding and analyzing the development of technical topics, can be used to map the science or technology of an innovation ecosystem, and may interest technology evolution path researchers and policymakers.
Introduction
The technology evolution path describes the emergence, transition, and extinction of subjects in a research field, which can help researchers understand the history and current state of the field so that they can quickly identify research hotspots and gaps.
In the study of technology evolution paths, the discovery and presentation of topic information is a crucial problem. In recent years, an increasing number of researchers have begun to use machine-learning methods to identify the development of specific research domains based on literature data. Probabilistic topic models are useful for detecting different research topics and mining research hotspots. In particular, Probabilistic Latent Semantic Analysis (PLSA) (Hofmann 1999) and Latent Dirichlet Allocation (LDA) (Blei et al. 2003) have drawn much attention in the field of topic discovery because of their effectiveness in analyzing sparse, high-dimensional data such as literature data (Jeong and Min 2014; Yau et al. 2014).
There are usually two main questions when using topic models for technology mapping. First, is non-textual technical information useful for technology evolution analysis, and if so, how can it be added to the topic model? Second, can we dynamically identify and track technical topics across different periods? Doing so would let us discover the evolution path of technology more flexibly.
Most existing topic models consider only textual information. However, scientific literature contains textual information, citation information, co-author information, and so on. When topic models that use only textual information are applied to analyze scientific literature, many useful features of the literature are ignored. In particular, the citation relationship, which also carries robust technology evolution information, cannot be ignored when analyzing the development of specific research domains (Kajikawa et al. 2007; Zhou et al. 2016, 2019b, 2020).
Determining the number of topics is essential for technology evolution analysis, but it is usually a troublesome problem. Generally, topic models such as PLSA, LDA, and their extensions need a preset number of topics. Two strategies can be used to handle this problem. The first is comparing experimental results across multiple runs using quantitative indicators such as perplexity or Normalized Mutual Information (NMI) to determine the optimal number of topics; but this method requires a lot of experimentation, and the best result depends on the selected indicators. The second is setting a relatively large number of topics and then aggregating similar topics using Kullback–Leibler divergence, cosine similarity, or another measure; with this method, the topics finally extracted are usually hard to interpret (Griffiths and Steyvers 2004; Yao et al. 2011; Ding and Chen 2014).
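The first strategy can be sketched as a perplexity sweep over candidate topic counts; this minimal example uses scikit-learn's in-sample perplexity on a toy corpus (in practice one would score held-out data), and all names and values here are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in corpus; a real sweep would use thousands of abstracts.
docs = [
    "topic model latent dirichlet allocation inference",
    "dirichlet process nonparametric bayesian clustering",
    "citation network analysis main path bibliometrics",
    "bibliometrics co-citation coupling science mapping",
    "word embedding vector representation text mining",
    "text mining topic detection tracking evolution",
    "patent analysis technology forecasting trends",
    "technology evolution path mapping visualization",
]
X = CountVectorizer().fit_transform(docs)

candidate_ks = [2, 3, 4, 5]
perplexities = {}
for k in candidate_ks:
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    perplexities[k] = lda.perplexity(X)  # lower is better

# Pick the topic count with the lowest perplexity.
best_k = min(perplexities, key=perplexities.get)
```

This is exactly the "lot of experimentation" the text describes: one full model fit per candidate value of K.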
When performing technology mapping, we hope that the algorithm can automatically determine the number of topics according to the structure of the data itself. In this way, we not only gain good adaptability to different data but can also dynamically track changes in technical topics between different periods. Teh et al. (2006) introduced a model that handles this problem: by exploiting the Dirichlet process's ability to generate an unbounded number of clusters, the Hierarchical Dirichlet Process (HDP) can automatically determine the appropriate number of mixture components.
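To illustrate why a Dirichlet process sidesteps the preset-K problem, here is a minimal pure-Python simulation of the Chinese restaurant process, the clustering behavior that underlies the Dirichlet process; the concentration parameter and sample size are arbitrary illustration values, not settings from the paper.

```python
import random

def crp(n_items, alpha, seed=0):
    """Simulate cluster sizes under a Chinese restaurant process.

    Each new item joins an existing cluster with probability proportional
    to that cluster's size, or opens a new cluster with probability
    proportional to alpha. The number of clusters is therefore not fixed
    in advance; it grows slowly (roughly alpha * log n) with the data.
    """
    rng = random.Random(seed)
    cluster_sizes = []
    for i in range(n_items):
        r = rng.uniform(0, i + alpha)  # total mass: i customers + alpha
        if r < alpha:
            cluster_sizes.append(1)    # open a new cluster
        else:
            # Pick an existing cluster proportional to its size.
            r -= alpha
            acc = 0.0
            for j, size in enumerate(cluster_sizes):
                acc += size
                if r < acc:
                    cluster_sizes[j] += 1
                    break
    return cluster_sizes

sizes = crp(1000, alpha=1.0)
num_clusters = len(sizes)  # emerges from the data, not preset
```

HDP stacks two such processes (a corpus-level one shared by document-level ones) so that documents share a common, data-determined set of topics.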
In this paper, we combine textual information and citation information based on HDP to propose a new non-parametric topic model (one that does not require the number of topics to be preset) to better map the evolution path of technology. The novel non-parametric model, based on HDP, is named the citation-involved Hierarchical Dirichlet Process (CIHDP). Based on citation information, node2vec is used to convert the papers in the citation network into vector form. We then calculate the similarity of each pair of papers in the given data set to construct a similarity matrix. Unlike in HDP, the topic distribution of each document in CIHDP is influenced by all of the other documents to different degrees: the similarity of two articles in the citation network determines the degree of influence, so the less similar two papers are, the smaller the impact. As other researchers have done, we use Gibbs sampling inference to estimate the parameters of our model. Quantitative experiments show that CIHDP achieves better topic modeling performance than LDA, and case analysis shows that CIHDP finds more complete path evolution information than HDP.
Finally, the technology evolution path dynamically tracked by CIHDP is visualized with D3.js. For those who are not very familiar with technology mapping methods, visualization helps in adjusting the topic model (such as tuning parameters) and also facilitates understanding and discussing the technology path effectively.
The rest of this paper is organized as follows. The “Related work” section briefly reviews related work. The “Methodology” section presents the overall research process and introduces the method. The “Result and discussion” section conducts a case study in the field of AI research and evaluates the validity of the model. The “Conclusions” section lays out our key findings and future work. Finally, the Appendix provides details about the improved algorithm and the experimental results. The code of CIHDP and sample data are available on the GitHub repository.
Related work
Technology evolution path
As a powerful presentation of the development of technology, the technology evolution path can track historical development, explore knowledge diffusion, and predict future trends in technology (Adomavicius et al. 2007; Yu 2011; Huang et al. 2016; Huang et al. 2020). Given the explosive growth in the quantity of literature in the current research environment, analysis of the technology evolution path is usually based on data mining. Existing technology path research using literature data falls into two kinds: bibliometric methods and methods based on topic models.
Most bibliometric methods are based on citation analysis of the scientific and technological literature (Zhou and Minshall 2014; Li et al. 2015, 2016b; Zhou et al. 2018; Xu et al. 2017, 2020; Nordensvard et al. 2018; Pan et al. 2019; Wang et al. 2018; Liu et al. 2019; Miao et al. 2020). Some methods can be used to find basic information, such as keywords, influential authors, or core articles in a field's literature; analyzing changes in this information over time then reveals the evolution of the technology. These methods include co-word analysis (Callon et al. 1983), co-author analysis (Braun et al. 2001), bibliographic coupling (Kessler 1963), and co-citation analysis (Small 1973). Based on the citation network, some researchers also use main path analysis for path identification. For instance, Xiao et al. (2014) explore the knowledge diffusion path through an analysis of the main paths, and Kim and Shin (2018) identify the main path of high-voltage direct-current transmission technology. Recently, researchers have begun to use citation-network-based clustering methods, which can identify the major research communities in a field. Chen et al. (2013) found that fuel cell technology consisted of several communities/clusters by clustering a patent network, and these clusters were then used to detect and analyze technology evolution. However, the use of citation information alone is not convincing, and bibliometric methods fail to consider citation and text information together.
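Two of the citation-based measures named above, bibliographic coupling (Kessler 1963) and co-citation (Small 1973), can be computed directly from reference lists. The following is a minimal stdlib-only sketch on a hypothetical three-paper dataset.

```python
from itertools import combinations

# Hypothetical reference lists: citing paper -> set of cited works.
references = {
    "A": {"X", "Y", "Z"},
    "B": {"Y", "Z", "W"},
    "C": {"W", "V"},
}

# Bibliographic coupling: two papers are coupled by the number of
# references they share.
coupling = {
    (a, b): len(references[a] & references[b])
    for a, b in combinations(sorted(references), 2)
}

# Co-citation: two cited works are linked by the number of papers
# that cite both of them.
cocitation = {}
for refs in references.values():
    for u, v in combinations(sorted(refs), 2):
        cocitation[(u, v)] = cocitation.get((u, v), 0) + 1
```

Tracking how these pairwise strengths change over publication years is the basic move behind the citation-based evolution analyses cited in this paragraph.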
The approach based on topic models has gained more and more attention in recent years (Kong et al. 2017; Zhou et al. 2019a; Li et al. 2020). The content of the literature contains much information about technology development, and by analyzing the distribution of words in a corpus, topic models perform well in extracting the latent topics of documents. Of the topic models proposed in the early stage, TF-IDF, PLSA, and LDA are the most frequently used by researchers for mining topics in a corpus. Based on topic models, some researchers explore the change of technology topics in each period to analyze the development path of a technology. For example, using TF-IDF to cluster associated terms and phrases into meaningful technological topics, Zhang et al. (2016) forecast future developments; Xu (2020) explores an identification method for innovation paths based on the linkage of scientific and technological topics; and Wei et al. (2020) traced the evolution of 3D printing technology in China using LDA-based patent abstract mining.
Nevertheless, topic models like LDA have a methodological flaw: the number of topics must be set in advance. Because the HDP can automatically determine the number of topics in a given corpus, it has attracted growing attention from scholars. See the next section for an introduction to the topic modeling domain.
To sum up, of the two methods used for path recognition, bibliometric methods tend to use citation information, while topic model methods are good at exploiting large amounts of textual information. However, a path drawn using one type of information alone is not convincing, and there have been few attempts to combine citation information and text information. In this paper, we map the technology evolution path with a novel method that integrates citation information into the topic model.
Topic modeling
With the rapid increase in the amount of text data and the continuous improvement of machine learning, many latent topic discovery methods have been proposed (Hofmann 1999; Blei et al. 2003; Blei and Lafferty 2006; Teh et al. 2006; Chang and Blei 2010; Rosen-Zvi et al. 2012; Cheng et al. 2014; Fu et al. 2016; Chen et al. 2020). We first review the research context of topic modeling; among these methods, LDA and the HDP are two representative algorithms, which we then introduce.
Most topic models, like LDA and the HDP, treat the corpus simply as bags of words, yet many data sources contain additional information. For example, web-page text carries hyperlink information, comment text carries user information, and scientific data carry citation and author information. Because of the excellent modularity of LDA, PLSA, and the HDP, these models can be easily extended. For instance, BTM integrates word co-occurrence information into LDA to solve the problem of inferring topics from large-scale short texts (Cheng et al. 2014).
Similarly, On-Line LDA (Alsumait et al. 2008) and Dynamic Online HDP (Fu et al. 2016) integrate time information into LDA and the HDP to solve the problem of topic detection and tracking. Some researchers integrated author information into LDA, PLSA, or the HDP to mine author-topic distributions (Steyvers et al. 2004; Rosen-Zvi et al. 2012; Ming and Hsu 2016). Others integrated further information, such as recipient information (Mccallum et al. 2007) and conference information (Jie et al. 2008). For example, Dai and Storkey (2009) integrated author information into the HDP to solve the author disambiguation problem.
The above models have been shown to perform well on specific tasks and data. However, when these models are applied to scientific literature data, citation information is ignored, even though citations represent strong topical relevance between papers.
Several research advances have already incorporated citation information into topic modeling, and these works can be divided into two categories. The first treats a citation as an undirected link. For instance, the Relational Topic Model (RTM) (Chang and Blei 2010) uses LDA to model each document and uses a binary variable indicating whether a link exists between two documents to optimize the model parameters. The second treats a citation as a directed link. Based on PLSA and PHITS, Cohn and Hofmann (2000) proposed a joint probabilistic model, link-LDA, which generates terms and citations from a common set of underlying factors. Building on link-based LDA, pairwise-link LDA uses the Mixed Membership Stochastic Block (MMSB) model to generate citation relationships separately, modeling the topicality of citations explicitly. The link-PLSA-LDA method divides the data into a citing part and a cited part and uses the same global parameters to generate terms and citations (Nallapati et al. 2008); it models the cited part with PLSA and the citing part with LDA to reduce the computational cost of pairwise-link LDA. Kataria et al. (2010) proposed cited-LDA and cited-PLSA-LDA, which extend link-LDA and link-PLSA-LDA and explicitly model the propagation of word influence through citations. However, these two models treat whether a word belongs to a citation as observed information, whereas the Inheritance Topic Model (ITM) treats it as latent (He et al. 2009).
Specifically, LDA is a corpus-based topic model (Blei et al. 2003) that treats each document as a set of words. LDA assumes that a document contains only a limited number of hidden topics and that the number of topics in the corpus can be fixed to a constant; the number of topics therefore needs to be preset before the model is used. LDA extracts the latent topics in the corpus, each composed of a set of words with different weights, and also yields the probability of each topic in the corpus (which can be understood as the proportion of each topic in the corpus).
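To make the preset-topic-number limitation concrete, the following is a minimal collapsed Gibbs sampler for LDA in pure Python/NumPy. It is an illustrative sketch, not the implementation used in this study; note that the topic count K must be fixed before sampling begins.

```python
import numpy as np

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.
    docs: list of lists of word ids in [0, V); K: preset topic count."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))      # document-topic counts
    nkw = np.zeros((K, V))              # topic-word counts
    nk = np.zeros(K)                    # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]  # random initial assignments
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional for z_{di} given all other assignments
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)  # doc-topic
    phi = (nkw + beta) / (nkw.sum(1, keepdims=True) + V * beta)      # topic-word
    return theta, phi
```

The returned theta gives each document's topic proportions and phi gives each topic's word distribution, matching the description above.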
The HDP is a topic model that automatically determines the expected number of topics and can achieve dynamic topic mining. It does not depend on a preset number of topics: as the data change, the model adapts, learning its parameters and updating the number of clusters automatically. The model assumes that the number of topics in the corpus can be infinite and learns the optimal set of topics from the data. It introduces the Dirichlet process and builds a hierarchical Dirichlet process, which provides a way to share an infinite number of clusters among multiple documents. Like LDA, its modeling process mines the latent topics in the corpus and outputs the high-frequency words under each topic.
After this overview, we focus on two problems that must be solved to make the topic model better suited for technology path mapping: How can the number of topics be determined automatically? And how can citation information be used to mine more coherent topics? To solve these two problems, we propose a citation-involved topic model that automatically determines the number of topics (see "Methodology"). Since the HDP automatically determines the number of topics, we selected it as the benchmark model for the improved algorithm. Unlike the link-and-content-involved topic models mentioned above, we use citation information to calculate the similarity of each pair of documents; then, when sampling the topics of a specific document, the similarity information is used to adjust the influence of the topic distributions of other documents.
Path visualization
The process of technology path mapping can be divided into two parts: mining the path information based on the topic model and other methods, and visualizing the evolution path of the technology effectively.
Visualizing the path helps us understand and analyze the development of a technology more intuitively, and many visualization methods exist. CiteSpace (Chaomei 2006), developed in Java, can perform citation analysis and temporal network visualization, but it cannot show the whole path of technology evolution in a single graph. ThemeRiver (Havre et al. 2002) uses rivers of varying width to symbolize technical topics, with changes in width indicating changes in topic strength. This method is suitable for displaying continuously developing topics, but it struggles to represent isolated technical topics and the developmental relationships between topics. TextFlow is a more intuitive method that shows the splitting and fusion of topics through the confluence and diversion of rivers (Cui et al. 2011); based on semantic similarity calculation, topic associations yield the splitting and fusion information. However, when there are many topics, this approach makes the final evolution path look messy, so the information may be presented inappropriately. For particular data sets, more targeted visual models have been designed based on these two river-graph representations: TopicFlow (Malik et al. 2013) and OpinionFlow (Wu et al. 2014) can visualize Twitter data and can further be used to analyze the spread of public opinion. In addition, Guo et al. (2012) used a technology roadmap-style chart to represent evolution trends, but such charts have difficulty expressing complex information when the evolution path is complicated (e.g., crossing development paths or changes in topic intensity).
D3.js (Data-Driven Documents) is a JavaScript library (Bostock et al. 2011) for interactive and dynamic data visualization that offers great flexibility through programming. Based on D3.js, CellWhere displays local interaction networks organized into subcellular locations (Heberle et al. 2017), and SPV simplifies the visualization of biological signaling pathways (Calderone and Cesareni 2018).
Combining the advantages of ThemeRiver and TextFlow, we use D3.js to present the evolution path of technology: we visualize the technical topic information produced by the novel topic model and use changes in the rivers to represent the development of the technology. It is worth noting that the topic model and the visualization method are not isolated; together they serve to map the path of technology evolution.
Methodology
How do we map the technology evolution path? This section summarizes the overall research process and introduces a dynamic topic detection model that integrates citation information into the topic model. The process of mapping the evolution path is then explained in detail.
To make the mapping process of the technological evolution path clearer, we have drawn the overall methodological framework shown in Fig. 1. Firstly, the document collection is processed in two ways. On the one hand, we construct the citation network formed by the documents and embed the citation information of each document into a vector; we then calculate the similarity of the citation information for each pair of documents and construct a document similarity matrix. On the other hand, based on the year of publication, the documents are grouped into periods of uniform length. Secondly, we integrate the document content information with the document similarity information and use the Citation-Involved Hierarchical Dirichlet Process (CIHDP) to dynamically detect the topics of the documents in each period. Thirdly, based on the topic information of each period obtained by CIHDP, we conduct topic path tracking in two steps: first, we tag the topics of each period; second, we analyze the correlations between topics in adjacent periods to obtain the evolution path of the topics. Finally, D3.js is used to visualize the evolution path. From the mapped path, we can see the major technology branches of the field, as well as the splitting and fusion of technology evolution paths.
Fig. 1 A methodological framework for mapping the technology evolution path
Measuring the similarity between documents
A citation network is a graph in which each vertex carries information about a paper and each edge is a citation relationship between papers. Vertex attributes are details about the paper, such as id, publication year, abstract, keywords, and content. When paper \(P_{i}\) references paper \(P_{j}\), an arrow extends from the vertex representing \(P_{i}\) to the vertex representing \(P_{j}\). The citation network therefore has the following characteristics. (1) It is a directed graph in which each edge is an arrow going from one paper to another. (2) Citation arrows almost always point backward in time to older papers, so the citation graph is acyclic and reflects the development of the research field over time. (3) Most importantly, there is immediate topical relevance between a paper and the papers it cites.
In bibliometrics, a growing number of researchers use citation networks to identify and forecast developments in science and technology (Kajikawa et al. 2007). Accordingly, many algorithms have been proposed to compute the similarity of each pair of documents in a citation network, such as bibliographic coupling, co-citation, Amsler, SimRank, and P-Rank. Bibliographic coupling takes only out-links into account, so the similarity between two papers is computed from the number of papers directly cited by both of them (Kessler 1963). In contrast, co-citation considers only in-links, and the similarity between two papers depends on the number of papers that directly cite both of them (Small 1973). Amsler considers both direct in-links and out-links by combining the results of co-citation and bibliographic coupling (Amsler 1972). SimRank, a recursive version of co-citation, considers only in-links recursively, so the similarity between two papers is computed based on the papers that cite them (Jeh and Widom 2002). P-Rank, a recursive version of Amsler, considers both in-links and out-links recursively, so the similarity between two papers is computed based on the papers that cite them and the papers they cite (Zhao et al. 2009).
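These count-based measures can be read directly off the adjacency matrix of the citation network. The sketch below uses a hypothetical four-paper network (papers 0 and 3 both cite papers 1 and 2) to illustrate bibliographic coupling (shared out-links), co-citation (shared in-links), and an Amsler-style sum of the two:

```python
import numpy as np

# A[i, j] = 1 if paper i cites paper j (hypothetical directed citation network)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 1, 1, 0]])

coupling = A @ A.T              # bibliographic coupling: shared references
cocitation = A.T @ A            # co-citation: shared citing papers
amsler = coupling + cocitation  # simple Amsler-style combination
```

Here coupling[0, 3] = 2 because papers 0 and 3 share two references, and cocitation[1, 2] = 2 because papers 1 and 2 are cited together by two papers.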
However, because these models rely on co-citing and co-cited papers to calculate similarity, they do not perform well for document pairs that have a direct citation relationship.
As shown in Fig. 2, node2vec (Grover and Leskovec 2016) was chosen to calculate the similarity of each pair of documents. This model uses a random-walk procedure to capture similarity features between nodes and then embeds each node into a low-dimensional space. In this study, node2vec was first used to acquire the vector representation of each node in the citation network; cosine similarity (cosineSim) was then used to calculate the similarity of each pair of documents. To avoid negative similarity values, we finally use the following formula to calculate the document similarity (docSim), whose range is [0, 1]:

$$docSim\left( {doc_{i} ,doc_{j} } \right) = \frac{{cosineSim\left( {docVector_{i} ,docVector_{j} } \right) + 1}}{2}$$
where doc denotes a document, i and j denote the indices of documents (nodes) with i, j ∈ [1, N], docVector denotes the embedding vector of a node in the citation network, and cosineSim denotes the cosine similarity function.
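Assuming the node2vec embedding vectors are already available, the docSim computation can be sketched as follows; the shift of cosine similarity into [0, 1] reflects the range stated above:

```python
import numpy as np

def doc_sim(u, v):
    """docSim in [0, 1]: cosine similarity of two document embedding
    vectors, shifted and rescaled to avoid negative values."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return (cos + 1.0) / 2.0
```

Identical vectors give docSim = 1, and opposite vectors give docSim = 0.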
Fig. 2 Flow chart of document similarity calculation. Adapted from Perozzi et al. (2014)
Finally, an N × N matrix is used to adjust the degree of influence between the topics of different documents, where N is the number of documents in the given data set. In this way, the citation information is transformed into a document similarity matrix via graph embedding. The similarity matrix is then used to influence the topic allocation of each document, improving the quality of topic detection and tracking.
Citation-involved Hierarchical Dirichlet Process
In this section, citation information is introduced indirectly into the novel dynamic topic model (CIHDP) through the document similarity matrix. Documents are grouped by time, and the CIHDP algorithm dynamically extracts the topics corresponding to each period. This is the primary work of path mapping.
In most cases, when researchers extract topics from the scientific literature, only textual information (such as titles and abstracts) is used. In most topic models (such as LDA and the HDP), topics are considered distributions over words, and the corpus is considered batches of words. How, then, can citations be appropriately used in topic extraction? The main idea of our improved model is to use citation information to enhance the textual representation of documents and thereby discover more coherent technology evolution paths. Node2vec is used to construct a similarity matrix based on the citation network of the scientific literature. In our model, the topics of a paper are influenced, to different degrees, by all other papers in the corpus, and the similarity between papers determines the degree of influence: document similarity shapes the topic modeling process.
The directed graphical representation of CIHDP is shown in Fig. 3. In the directed graph, open circles represent variables, shaded circles represent observable measurements, rounded rectangles represent parameters or base distributions, and rectangular boxes represent iteration cycles; the number in the lower right corner of each rectangular box is the number of cycles. Here, \(G_{0}\) represents the global topic distribution over all documents, \(G_{j}\) represents the local topic distribution of the j-th document, and \(\theta_{ji}\) represents the distribution of words under a topic. \(x_{ji}\) represents an observed word in the document.
Fig. 3 Directed graphical representation of CIHDP
CIHDP is very similar to the HDP, but CIHDP has an additional influencing factor "c", namely the influence of document similarity. In simple terms, the topic distribution of each document is affected not only by the overall topic distribution but also by the topic distributions of similar documents. If two documents are more similar (as calculated from the citation network), their topic distributions tend to be more alike, so we can identify more coherent topics and achieve better topic modeling. Because the added citations are based on the global citation network, the evolution of topics becomes more coherent, which also improves topic tracking. To better explain our model, we provide a brief introduction to the HDP (see "Appendix A"). We also propose a metaphor for CIHDP, named the "USA Local Specialties Restaurant Franchise" (see "Appendix B").
To verify the effectiveness of the proposed algorithm in topic detection and topic tracking, we compared LDA, the HDP, and CIHDP. On the one hand, we compare the algorithms themselves; on the other, we compare their effectiveness in technology path mapping. In this article, we use the perplexity indicator to compare the models.
Perplexity is an important measurement in information theory and a common way of evaluating language models. The lower the perplexity, the better the model fits the dataset. The perplexity formula is as follows:

$$perplexity\left( D \right) = \exp \left( { - \frac{{\sum\nolimits_{d = 1}^{M} {\log p\left( {\omega_{d} } \right)} }}{{\sum\nolimits_{d = 1}^{M} {N_{d} } }}} \right)$$
where D denotes the data set, \(\sum\nolimits_{d = 1}^{M} {N_{d} }\) denotes the number of words in the data set, and \(p\left( {\omega_{d} } \right)\) denotes the probability that the model generates the words of document d.
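Given the per-document log-likelihoods \(\log p(\omega_{d})\) from a trained model, the perplexity computation reduces to a short function; the sketch below is illustrative:

```python
import numpy as np

def perplexity(doc_log_probs, total_words):
    """Corpus perplexity: exp of the negative average per-word log-likelihood.
    doc_log_probs: log p(w_d) for each document; total_words: sum of N_d."""
    return float(np.exp(-np.sum(doc_log_probs) / total_words))
```

For example, if every word is predicted with probability 1/10, the perplexity is 10.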
Dynamic topic tracking and path identification
To complete the technology path mapping, this section post-processes the topic modeling results to obtain the technology evolution path. We also design the path representation and use visual methods to present the path.
The process of dynamic topic detection and tracking with CIHDP is shown in Fig. 1. First, documents are grouped by period, and each group is modeled to detect the topics in that period. During topic modeling, the document similarity matrix calculated in the previous section influences the distribution of document topics. The name of each topic is determined by manually reading its high-frequency topic words.
Through the topic model, we obtain the distribution of each topic in each period (the probability of each word appearing under each topic). The tag of each topic is first determined by manual calibration, and then the correlations between topics in different periods are analyzed.
To determine the tag of each topic, we output the 25 words with the highest occurrence probability in the topic-word distribution; based on the tags given in the original data set, the tag of each topic is then determined through manual reading. Two issues are important in this process. (1) Tiny topics (those whose total word frequency is below 100) without highly indicative words are considered background topics and are filtered out. (2) If topics with the same tag appear in the same period, we consider that the domain represented by that tag has evolved into sub-domains within the period. Since our research does not address the technical hierarchy, we merge topics with the same tag in the same period (see "Appendices D and E"), retaining the one with the highest word frequency.
Topics with the same tag in two adjacent periods can be connected directly: the topic in the latter period is treated as a continuation of the topic in the previous period. For topics with different tags in the two periods, the association between topics is judged by their similarity. Here we use the Jensen-Shannon divergence to characterize the similarity of two topics:

$$JS\left( {T_{1} ||T_{2} } \right) = \frac{1}{2}{\text{KL}}\left( {T_{1} \left\| {\frac{{T_{1} + T_{2} }}{2}} \right.} \right) + \frac{1}{2}{\text{KL}}\left( {T_{2} \left\| {\frac{{T_{1} + T_{2} }}{2}} \right.} \right)$$
where \({\text{KL}}(T_{1} ||T_{2} )\) is the Kullback–Leibler divergence:

$${\text{KL}}\left( {T_{1} ||T_{2} } \right) = \sum\limits_{w} {T_{1} \left( w \right)\log_{2} \frac{{T_{1} \left( w \right)}}{{T_{2} \left( w \right)}}}$$
The value range of the JS divergence is [0, 1]; the smaller the JS divergence, the higher the similarity of the topics. After sorting the JS divergences, we set a similarity threshold (S) between topics, and only associations within this threshold are presented.
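The topic-linking step can be sketched as follows; base-2 logarithms are used so that the divergence falls in [0, 1] as stated above, and the topic-word distributions here are illustrative vectors rather than output of our model:

```python
import numpy as np

def kl_div(p, q):
    """Kullback-Leibler divergence (base 2), skipping zero-probability terms of p."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js_div(t1, t2):
    """Jensen-Shannon divergence between two topic-word distributions; range [0, 1]."""
    m = 0.5 * (t1 + t2)
    return 0.5 * kl_div(t1, m) + 0.5 * kl_div(t2, m)
```

Identical distributions give 0 and disjoint distributions give 1; pairs of topics in adjacent periods whose divergence falls below the threshold S are linked on the path.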
Based on D3.js, this paper presents the evolution path visually, which is convenient for understanding development trends in the technology field. The graph shows the tag of each topic, the intensity of the topic, and the relations between topics in adjacent periods. Each topic output by the topic model is a set of words, and topic intensity indicates the research heat of a topic; in this study, the number of words under a topic is used to measure its intensity. The topic intensity \(T_{i}\) is calculated as follows:
where, in the s-th period, \(n(T_{si})\) represents the total word frequency of the i-th topic, \(\sum\nolimits_{j \in s} {T_{sj} }\) represents the total word frequency of all topics, and \(n\left( {doc_{s} } \right)\) represents the total number of documents.
There are two elements in our visual design: points and lines. Points represent topics in a period, whereas lines (rivers) represent relationships between topics. Each river in the figure represents a technology evolution path, reflecting the intensity changes of the technical topic and the topic's start and end times.
Results and discussion
To verify the effectiveness and usefulness of the proposed methodology for technology path mapping, we select the field of artificial intelligence for a case study and conduct a comparison with similar methods.
In the first section of this part, three data sets were selected for model comparison and case study, and the data were described and preprocessed. In the second section, the model parameters are set. The third section compares the topic modeling performance of CIHDP, HDP, and LDA based on the perplexity index, and performs dynamic topic detection on the Aminer data set. The fourth section carries out path identification based on manual calibration and topic similarity calculation. The fifth section is based on D3.js to visualize the path and compare the technology path mapping capabilities of CIHDP and HDP.
Data collection
The data sets used in this paper are shown in Table 1. They include two benchmark data sets (Citeseer and Cora) and one real data set (Aminer). These data sets were used to verify the effectiveness of CIHDP, and we conducted a case study of an evolution path based on the Aminer data set. Details of the data are described below.
Citeseer: This data set contains 3312 scientific publications, each with a unique category label. There are 6 categories: Agents, Artificial Intelligence, Database, Human–Computer Interaction, Machine Learning, and Information Retrieval. The citation network consists of 4732 links, and the data set has 3703 unique words after stemming and stop-word removal. The data set does not include the spelling of the vocabulary words, and when a word was repeated multiple times in a paper, we counted it only once.
Cora: This data set contains 2708 scientific publications, each also with a unique category label. There are 7 categories: Neural Networks, Rule Learning, Reinforcement Learning, Probabilistic Methods, Theory, Genetic Algorithms, and Case-Based. The citation network consists of 5429 links, and the data set has 1433 unique words after stemming and stop-word removal. As with Citeseer, the Cora data set contains no spelling information for the words in the vocabulary, and words that appeared multiple times in the same paper were recorded only once.
Aminer: The papers in this data set come mainly from the Aminer team and cover about 10 research fields of artificial intelligence: "Data Mining/Association Rules" (DM/AR), "Web Services", "Bayesian Networks/Belief Function" (Bayesian networks), "Web Mining/Information Fusion" (web mining), "Semantic Web/Description Logics" (SW/DL), "Machine Learning", "Database Systems/XML Data" (DS/XD), "Information Retrieval", "Pattern Recognition/Image Analysis", and "Natural Language System/Statistical Machine Translation" (NLS/SMT). Since the raw data did not include abstracts, we used the Web of Science database to complete the abstract information and deleted records that could not be found there. In addition, papers from "Database Systems/XML Data" accounted for almost half of the total, so some papers from this category were deleted to avoid data skew. The final data set contains 1000 scientific publications and 1109 citations; after stemming and stop-word removal, we had 670 unique words. The year range of the final data is 1990–2007. As with Cora and Citeseer, words that appeared multiple times in the same paper were recorded only once.
Based on the Aminer data, we conducted a case study on mapping the technology evolution path in the field of artificial intelligence. The overall time span of our analysis is 1990–2007, divided into five periods of four consecutive years each. Because 2006 and 2007 contain only a small number of documents, these two years are included in the last period (T5). The final period settings and the corresponding numbers of papers are shown in Table 2.
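The period assignment described above amounts to a simple mapping from publication year to period index, with 2006–2007 folded into the last period; a sketch:

```python
def assign_period(year):
    """Map a publication year in 1990-2007 to a period index 1-5.
    Periods cover four consecutive years; 2006-2007 join the last period (T5)."""
    return min((year - 1990) // 4 + 1, 5)
```

So 1990–1993 map to T1, 1994–1997 to T2, and so on, with 2006 and 2007 joining 2002–2005's successor period T5.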
Parameter setting
When using the HDP and CIHDP models for topic modeling, determining the model parameters \(\left( {\beta ,\alpha ,\gamma } \right)\) is a tricky question. We applied the parameter combinations found in the previous literature to our data set, judging the results by the number of topics (since we use labeled data sets, the number of labels is known, and we hope the number of topics is close to the number of labels) and the perplexity; the results were not ideal. Therefore, we conducted orthogonal experiments to obtain the optimal parameter combination.
Combining the distributions of the parameters \(\left( {\alpha \sim \varGamma \left( {5,0.1} \right),\gamma \sim \varGamma \left( {0.1,0.1} \right)} \right)\) and the parameter values used in previous HDP models, we determined the value ranges of the three parameters, divided each range into 5 levels, and conducted orthogonal experiments. The parameter ranges and level divisions are shown in Table 3, and the results of the orthogonal experiments are given in the "Appendix".
When conducting orthogonal experiments, we need to consider the effect of different parameter combinations on both the number of topics and the perplexity. Ding and Chen (2014) designed an S value that accounts for the number of topics and the perplexity during parameter selection:
The goal of parameter selection is a sufficiently small perplexity without an unnecessarily large number of topics, so we chose the parameter combination that yields the lowest S value as the optimal one. Since the number of tags in our data sets is known, the number of topics determined by the S value may differ considerably from the actual number of tags. Therefore, for the labeled data sets, we assign different weights to the perplexity part and the topic-number part:
Under different weight assignments, the optimal parameters obtained from the S value are used in multiple repeated experiments to ensure that the number of topics finally produced by the topic model is about 10. In this way, we determine the weight assignment of the S value for each of the three data sets, as well as the optimal parameter combinations of the three data sets under the two topic models.
The algorithm comparison in this paper requires that the different algorithms identify roughly the same number of topics before the subsequent perplexity and path mapping comparisons can be made. For simplicity, if a parameter combination in the orthogonal experiment table meets our requirements, we directly adopt that combination as the model parameters.
From the orthogonal experiment table (see "Appendix C"), we find parameter combinations yielding 7 and 6 topics, and we set the corresponding combinations as the topic model parameters for the Cora and Citeseer data, respectively. Since the Aminer data are used for the path mapping comparison, we want a better parameter combination, so we use the weights a = 0.7 and b = 0.3 to set the parameter combination for the Aminer data. The parameter settings for the three data sets are shown in Table 5.
In our experiment, several parameters had to be set for CIHDP and the benchmark models LDA and HDP; see "Appendix C" for details. Since Aminer serves as the empirical test data, we use it as the example for parameter setting below.
For LDA, the parameters α, β, K, and the iteration count need to be set (see Table 4). To compare the three models, we try to make the number of topics generated by each model consistent with the number of data set categories; thus the number of topics was set to K = 10 for Aminer, the same as the number of categories. The Dirichlet prior parameters α and β influence model performance; we set α = 50/K and β = 0.01, values that have been shown to be effective for LDA (Heinrich 2005; Cheng et al. 2014; Li et al. 2016a; Liu et al. 2016). In all experiments, the number of Gibbs sampling iterations was set according to the perplexity index; because the perplexity of LDA converges slowly, its iteration count was set to 2000.
For CIHDP and the HDP, we used a symmetric Dirichlet distribution with parameter β for the prior H over the topic distribution, and concentration parameters γ and α that govern the hierarchical DP. For both models, we set β = 0.2, γ = 0.7, α = 0.5, and iteration = 150 (these two models reach good performance faster than LDA).
In our method, we had to mine the topical relevance between documents for CIHDP. Node2vec was used to capture network information via a biased random walk (Grover and Leskovec 2016), after which the similarity of each document pair can be calculated. Two parameters control the walk: the return parameter p and the in–out parameter q. Following Kim et al. (2018), and since the goal of our study was to identify nodes that are closely interconnected and belong to the same communities (homophily equivalence), we set p = 2 and q = 0.125. The other node2vec parameters were set to d = 128, r = 10, l = 10, and k = 10, where d, r, l, and k denote the embedding dimension, walks per node, walk length, and context size, respectively. These values were chosen for best performance according to the parameter-sensitivity analysis in the original paper (Grover and Leskovec 2016).
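The effect of p = 2 and q = 0.125 can be seen in the unnormalized second-order transition weights of the node2vec walk (a sketch following Grover and Leskovec (2016), not our full implementation; the adjacency structure in the test is hypothetical):

```python
def biased_weights(prev, curr, neighbors, p=2.0, q=0.125):
    """Unnormalized node2vec transition weights out of `curr`, given the
    previous node `prev`. With p = 2 and q = 0.125, returning is discouraged
    (weight 1/p) and moving outward is encouraged (weight 1/q), which favors
    exploring whole communities (homophily)."""
    weights = {}
    for nxt in neighbors[curr]:
        if nxt == prev:
            weights[nxt] = 1.0 / p      # step back to the previous node
        elif nxt in neighbors[prev]:
            weights[nxt] = 1.0          # stay at distance 1 from prev
        else:
            weights[nxt] = 1.0 / q      # move farther from prev
    return weights
```

Normalizing these weights gives the walk's transition probabilities; the resulting walks are the input to the skip-gram embedding step.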
This section compares the topic modeling performance of CIHDP, HDP, and LDA based on the perplexity index, and performs dynamic topic detection on the Aminer data set. The model parameters for CIHDP, HDP, and LDA were set separately, as described above. Taking the Citeseer data set as an example, we used the three topic model algorithms to model topics on the data set and recorded the perplexity during the modeling process. We conducted five repeated experiments for each algorithm and used the average perplexity to draw the perplexity change curve over the course of sampling.
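The perplexity index plotted in these curves is the usual exponentiated negative average log-likelihood per token; a minimal sketch of the computation (the function name is ours):

```python
import math

def perplexity(total_log_likelihood, num_tokens):
    """Corpus perplexity from the total log-likelihood (natural log)
    of all observed tokens: exp(-LL / N). Lower is better."""
    return math.exp(-total_log_likelihood / num_tokens)
```

For example, a model that assigns every token probability 0.5 has perplexity 2.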
We ran LDA topic modeling on Citeseer and plotted the perplexity against the number of iterations during topic detection, as shown in Fig. 4 . It can be seen from the figure that the perplexity of LDA changes slowly, and the model needs many iterations to achieve good results.
To facilitate comparison of the three algorithms’ performance, we plot their perplexity curves in the same coordinate system. Because the number of iterations of LDA is much greater than that of CIHDP and HDP, we use a secondary abscissa for LDA so that the three algorithms can be compared in the same graph (see Fig. 5 a).
It can be seen from the figure that the perplexity curves of CIHDP and HDP are similar, and their final convergence values are almost equal, indicating comparable performance on this index. The perplexity of LDA, however, decreases very slowly (2000 iterations are needed) and converges to a higher value. Thus, the performance of CIHDP and HDP on the perplexity index is better than that of LDA (Fig. 4 ).

Perplexity curve of LDA trained by Citeseer
Topic modeling on Cora and Aminer led to the same conclusion; the corresponding perplexity curves are shown in subplots (b) and (c) of Fig. 5 .

Comparison of the perplexity of different topic models (LDA, HDP, CIHDP)
As can be seen from the above, the performance difference between CIHDP and HDP is not apparent from perplexity alone. Since both algorithms can automatically determine the number of topics, and thus support dynamic topic detection and subsequent dynamic topic tracking, the following sections compare the advantages and disadvantages of the two algorithms in path mapping, with HDP serving as the benchmark model.
To compare the technology path mapping, we use CIHDP and HDP to model the topics of the Aminer data, identify the topics of each period, and obtain the word distribution under each topic (see “ Appendix E ”).
Path identification
Before topic calibration in each period, we first subjected the comprehensive data set to topic modeling and pre-calibration (see “ Appendix D ”). Pre-calibration makes the calibration work more directed and improves the efficiency of each calibration step. According to the topic modeling results, we first performed manual calibration (see “ Appendix E ”), and then performed path tracking on the calibration results.
We divide evolution paths into two categories: paths within the same topic and paths between different topics. In two adjacent time slices, topics with the same tag can be linked directly. For topics with different tags in the two periods, the association must be judged by semantic similarity. As mentioned above, if the semantic similarity between two topics on adjacent time slices is high, we consider there to be an evolutionary relationship between them. In this paper, JS divergence is used to measure semantic similarity: we calculate the JS divergence for all candidate associations between differently tagged topics in adjacent periods, rank the divergences, set a similarity threshold, and connect the topics with higher similarity into a path.
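The JS-divergence step can be implemented in a few lines (function names are ours; both topics are assumed to be word distributions over the same vocabulary):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two topic-word distributions
    (sequences of probabilities over the same vocabulary). 0 = identical."""
    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i = 0 contribute 0.
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

JS divergence is symmetric and bounded by ln 2, so a fixed threshold is comparable across topic pairs.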
Setting the similarity threshold to different values yields different evolution paths. In this paper, we set the similarity threshold (S) to 10%, 20%, and 30%, respectively, giving the six different evolution paths in Fig. 6 . Taking machine learning as an example, the red circles in Fig. 6 indicate the different paths obtained with different similarity thresholds. We analyzed the validity of the information in the paths under each threshold, and finally chose an appropriate threshold so that the resulting evolutionary paths present the most useful information.
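One reading of the ranking-and-threshold step is that S selects the top fraction of candidate links ranked by similarity; a hypothetical helper under that assumption (the function name and tuple layout are ours):

```python
def link_topics(candidate_links, s=0.20):
    """Rank candidate cross-tag topic links by ascending JS divergence
    (most similar first) and keep the top fraction s of them.

    candidate_links: list of (topic_a, topic_b, js_divergence) tuples.
    """
    ranked = sorted(candidate_links, key=lambda link: link[2])
    keep = max(1, int(len(ranked) * s))
    return ranked[:keep]
```

The kept pairs become the cross-topic edges of the evolution path.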

Comparison of technology path mapping using different methods and topic similarity threshold
Mapping the evolution path
Based on Aminer data, we conducted a case study of path identification in the field of artificial intelligence. We used CIHDP and HDP separately to map the evolution path in this field, and conducted a comparative analysis to demonstrate the usefulness and advantages of the proposed method (CIHDP). After performing topic detection and topic tracking on the Aminer data, we obtained the correlations among artificial intelligence technology topics from 1990 to 2007; that is, we obtained multiple paths of technological evolution.
To analyze the path evolution in this field intuitively, we used D3.js to visualize the evolution process. The previous section calculated the semantic similarity between differently tagged topics and connected every pair of topics whose similarity exceeded the threshold. In this paper, the similarity threshold for both CIHDP and HDP is set to S = 20%, because this setting yields the most informative path evolution.
The visualization results are shown in Fig. 7 . In the visual design, we use different rivers to describe the evolution paths of technology. (1) Each vertical line represents a period and is marked with a year interval; from left to right, the years approach the present. (2) The topics in each period are placed on the corresponding vertical line, with tagged red dots representing technical topics. (3) Related topics are connected with lines to form a series of rivers, and different rivers represent different technical evolution paths. (4) Different colors represent different paths, and color gradients indicate the fusion and splitting of paths. (5) The width of a river expresses topic intensity: the stronger the topic, the wider the river.
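The node/link structure behind such a river diagram can be assembled as below before handing it to a sankey-style renderer such as D3 (a hypothetical helper; the field names and input layout are ours):

```python
def build_river(topics_by_period, links):
    """Assemble a node/link structure for a sankey-style river diagram.

    topics_by_period: {period: [(topic_label, intensity), ...]}
    links: [(period_a, label_a, period_b, label_b, similarity), ...]
    Node ids combine period and label so the same topic in different
    periods becomes a distinct node on its own vertical line.
    """
    nodes = [{"id": f"{t}:{lab}", "period": t, "label": lab, "intensity": w}
             for t, topics in topics_by_period.items() for lab, w in topics]
    edges = [{"source": f"{t1}:{a}", "target": f"{t2}:{b}", "value": s}
             for t1, a, t2, b, s in links]
    return {"nodes": nodes, "links": edges}
```

Link `value` (the similarity) can drive river width, and node `intensity` the dot size.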

Mapping the technology evolution path of the artificial intelligence field. (S = 20%). a Mapping the technology evolution path using CIHDP. b Mapping the technology evolution path using HDP
After visualizing the path information, we can conveniently conduct technical evolution analysis, and at the same time, we can also compare the effectiveness of CIHDP and HDP in path mapping.
In sub-figure (a) of Fig. 7 , we can analyze the technology evolution path mapped using CIHDP. Over the whole time span, CIHDP identified a total of 10 types of topics, with a satisfactory overall recognition effect. We first analyze the development trend of each topic. The topics that appeared first represent basic and supporting research areas: “database systems and XML data” (DS/XD), “semantic web and description logic” (SW/DL), “natural language systems and statistical machine translation” (NLS/SMT), “bayesian network”, and “information retrieval”, of which DS/XD and SW/DL persist through almost the entire period.
The topics that emerged later were more application-oriented, higher-end R&D areas: “data mining”, “web mining”, “machine learning”, “pattern recognition”, and “web services”. Among them, in the T4 and T5 periods, the intensity of the machine learning topic increased dramatically; this research became hot and popular, occupying mainstream research status. In T5, the new topic “web service” appeared with low intensity, meaning that a new path had emerged and was in the initial stage of its evolution.
Next, with the help of the color gradient effect, we analyze the details of path evolution between different topics. Following the direction of time, we examine path splitting, fusion, emergence, and disappearance. (1) From T2 to T3, “database system”, “information retrieval”, and “data mining” merge to form “web mining”. (2) From T3 to T4, “web mining” continues to absorb DS/XD. (3) Given the development of related technologies such as database systems, information retrieval, and data mining, the emergence of the “web mining” path is reasonable. Similar path fusion and splitting occur elsewhere. (4) For example, DS/XD, as a basic technology, repeatedly splits during its development and merges with “data mining”, “web mining”, and SW/DL. (5) Likewise, the “bayesian network” path splits and merges with “pattern recognition” and “machine learning”. (6) The figure also reveals an interesting phenomenon: “bayesian network” and “machine learning” interactively fuse and split. From T2 to T3, part of “bayesian network” splits off and merges into “machine learning”; from T3 to T4, part of “machine learning” is integrated back into “bayesian network”, showing that these two research areas are closely related. The same holds between DS/XD and SW/DL.
We also searched for information on the development of artificial intelligence and learned some facts. In 1995, Corinna Cortes and Vladimir Vapnik proposed the support vector machine. In 1997, (a) IBM’s Deep Blue computer defeated Kasparov at chess; (b) long short-term memory (LSTM) was first proposed by Sepp Hochreiter and Jürgen Schmidhuber; and (c) AdaBoost was proposed and used to combine weak learners into a strong classifier. In 1998, Tim Berners-Lee proposed the semantic web, earlier research having been more biased toward description logic. In 2001, the conditional random field (CRF) was proposed by Lafferty et al. These critical events confirm that the above findings are in line with the facts, and show that these significant scientific advances promoted the emergence of new paths in T3 and T4.
Similarly, in sub-figure (b) of Fig. 7 , we can analyze the technology evolution path mapped using HDP. HDP successfully identified 10 types of topics and also obtained some valid technological evolution paths. However, many of the paths identified by HDP lack actual meaning; in other words, the paths tracked using HDP are less informative than those of CIHDP.
Since the technology path mapped by CIHDP has been analyzed in detail above, here we give only a brief analysis of the path traced by HDP. It can be seen from the HDP-based figure that little path information is identified between T1, T2, and T3, and subgraph (b) contains many unexplained correlations and evolution paths. In T3–T4 especially, there are many-to-many associations: for example, data mining, web mining, information retrieval, network services, and statistical machine translation are all integrated into the database system, covering almost all the topics of T3, so it is difficult to judge the core evolution path. Apart from the evolution process from T3 to T4, it is almost impossible to find useful information.
Combining the above analyses, we found that the technology evolution path we identified is consistent with the facts, indicating that the proposed method is valid for mapping the technology evolution path. The comparison of the paths produced by CIHDP and HDP shows that the former yields more complete and detailed path evolution information, which proves that CIHDP is better at mapping the path of technological evolution.
In this study, we developed a method of mapping the technology evolution path that uses a novel non-parametric topic model (CIHDP) to achieve better dynamic topic detection and tracking of scientific literature, and we performed a visual analysis of the evolution path based on D3.js. By incorporating literature citation information into the topic modeling process, the method combines textual and citation information. Using CIHDP, we successfully mapped technological evolution paths and obtained more detailed and complete information on path splitting and fusion.
The method proposed in this paper is general and suitable for technology path mapping. In principle, CIHDP is designed to mine and analyze data containing textual information (such as the titles and abstracts of literature data) and citation information, which includes commonly used paper and patent data. Therefore, it is also feasible to use CIHDP to process patent data. Considering the availability and standardization of the data, we selected paper data for the technology path analysis in this study.
It is worth mentioning that the method and process in this paper can also help to solve general technology management problems, such as (1) analyzing the overall development trend of technology in a field, (2) determining mainstream or emerging technologies in the process of technology evolution, or (3) evaluating the technology life cycle.
The key findings and contributions are as follows. First, this paper proposes a technology path mapping method based on an improved topic model and compares CIHDP with traditional methods (LDA and HDP). To evaluate the proposed method, we used three data sets to verify our model. Through algorithm comparisons and case studies, we found that the proposed method finds more detailed and complete technical evolution path information, and the identified evolution path is more interpretable than that of HDP.
Second, CIHDP makes full use of the information in literature data, taking into account both textual semantic information and citation information, and mines the literature from a more comprehensive perspective. This method can identify the path of technological evolution, and the experimental results indicate that it effectively avoids the information loss caused by single-perspective analysis.
Third, few previous studies have addressed how to set the parameters of non-parametric Bayesian models (such as HDP). In this paper, extensive orthogonal parameter experiments were carried out separately on three different data sets, providing reference values and process recommendations for users of the HDP and CIHDP models when setting optimal parameters. Moreover, traditional evaluation indexes of topic modeling algorithms (such as perplexity) are not sufficient to establish the pros and cons of a model; actual case verification should also be carried out.
Finally, this study conducted a visual analysis of the technology evolution path based on D3.js. We found that this visual method is suitable for analyzing complex evolution paths: data mining combined with visual analysis reveals the splitting and fusion of paths more efficiently.
However, this study still has limitations, which suggest future work. First, considering workload and time, the case study used the core literature of the AI research field organized by the Aminer team, and this data set is not large; we will consider other fields and larger data sets for further verification. Second, this paper conducted in-depth data mining on scientific and technological literature; to discover more complete path evolution information, fusion and analysis of different types of literature data may be performed in the future. Third, scientific literature is rich in information, including not only text and citation relationships but also author co-occurrence, journal quality, and other factors. Including these factors in the model also has the potential to mine more accurate path information.
https://github.com/scientometrics-special-issue-2020-ml .
https://s3.us-east-2.amazonaws.com/dgl.ai/dataset/citeseer.zip .
https://s3.us-east-2.amazonaws.com/dgl.ai/dataset/cora_raw.zip .
https://lfs.aminer.cn/lab-datasets/soinf .
Adomavicius, G., Bockstedt, J. C., Gupta, A., & Kauffman, R. J. (2007). Technology roles and paths of influence in an ecosystem model of technology evolution. Information Technology Management, 8 (2), 185–202.
Aldous, D. J. (1985). Exchangeability and related topics. Ecole Dete De Probabilites De Saint Flour, 1117 (3), 1–198.
Alsumait, L., Barbará, D., & Domeniconi, C. (2008). On-Line LDA: Adaptive topic models for mining text streams with applications to topic detection and tracking. In: Eighth IEEE international conference on data mining.
Amsler, R. A. (1972). Applications of citation-based automatic classification. Linguistics Research Center, University of Texas at Austin.
Blackwell, D., & Macqueen, J. B. (1973). Ferguson distributions via polya urn schemes. Annals of Statistics, 1 (2), 353–355.
Blei, D. M., & Lafferty, J. D. (2006). Dynamic topic models. In: Proceedings of the twenty - third international conference machine learning (ICML 2006)
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022.
Bostock, M., Ogievetsky, V., & Heer, J. (2011). D³: Data-driven documents. IEEE Transactions on Visualization Computer Graphics, 17 (12), 2301–2309.
Braun, T., Glänzel, W., & Schubert, A. (2001). Publication and cooperation patterns of the authors of neuroscience journals. Scientometrics, 51 (3), 499–510.
Calderone, A., & Cesareni, G. (2018). SPV: a javascript signaling pathway visualizer. Bioinformatics, 34 (15), 2684–2686.
Callon, M., Courtial, J. P., Turner, W. A., & Bauin, S. (1983). From translations to problematic networks: An introduction to co-word analysis. Social Science Information, 22 (2), 191–235.
Chang, J., & Blei, D. M. (2010). Hierarchical relational models for document networks. Annals of Applied Statistics, 4 (1), 124–150.
Chaomei, C. (2006). CiteSpace II: Detecting and visualizing emerging trends and transient patterns in scientific literature. Journal of the Association for Information Science Technology, 57 (3), 359–377.
Chen, J., Zhang, K., Zhou, Y., Chen, Z., Liu, Y., Tang, Z., et al. (2020). A novel topic model for documents by incorporating semantic relations between words. Soft Computing, 24 (15), 11407–11423.
Chen, S.-H., Huang, M.-H., & Chen, D.-Z. (2013). Exploring technology evolution and transition characteristics of leading countries: A case of fuel cell field. Advanced Engineering Informatics, 27 (3), 366–377.
Cheng, X., Yan, X., Lan, Y., & Guo, J. (2014). BTM: Topic modeling over short texts. IEEE Transactions on Knowledge and Data Engineering, 26 (12), 2928–2941.
Cohn, D., & Hofmann, T. (2000). The missing link: A probabilistic model of document content and hypertext connectivity. In: International conference on neural information processing systems
Cui, W., Liu, S., Tan, L., Shi, C., Song, Y., Gao, Z., et al. (2011). Textflow: Towards better understanding of evolving topics in text. IEEE Transactions on Visualization Computer Graphics, 17 (12), 2412–2421.
Dai, A. M., & Storkey, A. J. (2009). Author disambiguation: A nonparametric topic and co-authorship model. NIPS workshop on applications for topic models text and beyond.
Ding, W., & Chen, C. (2014). Dynamic topic detection and tracking: A comparison of HDP, C-word, and cocitation methods. Journal of the Association for Information Science Technology, 65 (10), 2084–2097.
Fu, X., Li, J., Yang, K., Cui, L., & Lei, Y. (2016). Dynamic Online HDP model for discovering evolutionary topics from Chinese social texts. Neurocomputing, 171, 412–424.
Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of National Academy of Sciences, 101 (Suppl 1), 5228–5235.
Grover, A., & Leskovec, J. (2016). node2vec: Scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining .
Guo, Y., Ma, T., Porter, A. L., & Huang, L. (2012). Text mining of information resources to inform forecasting innovation pathways. Technology Analysis & Strategic Management, 24 (8), 843–861.
Havre, S., Hetzler, E., Whitney, P., & Nowell, L. (2002). Themeriver: Visualizing thematic changes in large document collections. IEEE Transactions on Visualization Computer Graphics, 8 (1), 9–20.
He, Q., Chen, B., Pei, J., Qiu, B., Mitra, P., & Giles, L. (2009). Detecting topic evolution in scientific literature: how can citations help? In: Proceedings of the 18th ACM conference on Information and knowledge management .
Heberle, H., Carazzolle, M. F., Telles, G. P., Meirelles, G. V., & Minghim, R. (2017). CellNetVis: A web tool for visualization of biological networks using force-directed layout constrained by cellular components. BMC Bioinformatics, 18 (10), 395.
Heinrich, G. (2005). Parameter estimation for text analysis, Technical report.
Hofmann, T. (1999). Probabilistic latent semantic analysis. In: Fifteenth conference on uncertainty in artificial intelligence .
Huang, Y., Zhu, F., Guo, Y., Porter, A. L., Zhang, Y., & Zhu, D. (2016). Exploring technology evolution pathways to facilitate technology management: A study of dye-sensitized solar cells (DSSCs). In: 2016 Portland international conference on management of engineering and technology (PICMET) .
Huang, Y., Zhu, F., Porter, A. L., Zhang, Y., Zhu, D., & Guo, Y. (2020). Exploring technology evolution pathways to facilitate technology management: From a technology life cycle perspective. IEEE Transactions on Engineering Management, PP (99), 1–13.
Jeh, G., & Widom, J. (2002). SimRank: A measure of structural-context similarity. In: Eighth ACM Sigkdd international conference on knowledge discovery & data mining .
Jeong, D. H., & Min, S. (2014). Time gap analysis by the topic model-based temporal technique. Journal of Informetrics, 8 (3), 776–790.
Jie, T., Jing, Z., Yao, L., Li, J., Li, Z., & Zhong, S. (2008). ArnetMiner:extraction and mining of academic social networks. In: ACM Sigkdd intersnational conference on knowledge discovery & data mining .
Kajikawa, Y., Ohno, J., Takeda, Y., Matsushima, K., & Komiyama, H. (2007). Creating an academic landscape of sustainability science: An analysis of the citation network. Sustainability Science, 2 (2), 221–231.
Kataria, S., Mitra, P., & Bhatia, S. (2010). Utilizing context in generative Bayesian models for linked corpus. In: Twenty - fourth AAAI conference on artificial intelligence .
Kessler, M. M. (1963). Bibliographic coupling between scientific papers. American Documentation, 14 (1), 10–25.
Kim, M., Baek, S. H., & Song, M. (2018). Relation extraction for biological pathway construction using node2vec. BMC Bioinformatics, 19 (Suppl 8), 206.
Kim, J., & Shin, J. (2018). Mapping extended technological trajectories: Integration of main path, derivative paths, and technology junctures. Scientometrics, 116 (3), 1439–1459.
Kong, D., Zhou, Y., Liu, Y., & Xue, L. (2017). Using the data mining method to assess the innovation gap: A case of industrial robotics in a catching-up country. Technological Forecasting & Social Change, 119 .
Li, C., Wang, H., Zhang, Z., Sun, A., & Ma, Z. (2016a). Topic modeling for short texts with auxiliary word embeddings. In: Proceedings of the 39th international Acm sigir conference on research and development in information retrieval — SIGIR ‘16 , pp. 165–174
Li, X., Zhou, Y., Xue, L., & Huang, L. (2015). Integrating bibliometrics and roadmapping methods: A case of dye-sensitized solar cell technology-based industry in China. Technological Forecasting and Social Change, 97 , 205–222.
Li, X., Zhou, Y., Xue, L., & Huang, L. (2016b). Roadmapping for industrial emergence and innovation gaps to catch-up: A patent-based analysis of OLED industry in China. International Journal of Technology Management, 72 (1/2/3), 105.
Li, Y., Li, Y., Wang, J., & Sherratt, R. S. (2020). Sentiment analysis for E-commerce product reviews in Chinese based on sentiment lexicon and deep learning. IEEE Access, 8 (1), 23522–23530.
Liu, Y., Wang, J., & Jiang, Y. (2016). PT-LDA: A latent variable model to predict personality traits of social network users. Neurocomputing, 210, 155–163.
Liu, Y., Zhou, Y., Liu, X., Dong, F., Wang, C., & Wang, Z. (2019). Wasserstein gan-based small-sample augmentation for new-generation artificial intelligence: A case study of cancer-staging data in biology. Engineering , 2019 (5), 156–163.
Malik, S., Smith, A., Hawes, T., Papadatos, P., Li, J., Dunne, C., & Shneiderman, B. (2013). TopicFlow: Visualizing topic alignment of Twitter data over time. In: Proceedings of the 2013 IEEE/ACM international conference on advances in social networks analysis and mining .
Mccallum, A., Wang, X., & Corrada-Emmanuel, A. (2007). Topic and role discovery in social networks with experiments on enron and academic email. Journal of Artificial Intelligence Research, 30 (2), 249–272.
Miao, Z., Du, J., Dong, F., Liu, Y., & Wang, X. (2020). Identifying technology evolution pathways using topic variation detection based on patent data: A case study of 3D printing. Futures, 118 , 102530.
Ming, Y., & Hsu, W. H. (2016). HDPauthor: A new hybrid author-topic model using latent dirichlet allocation and hierarchical dirichlet processes. In: International conference companion on world wide web.
Nallapati, R. M., Ahmed, A., Xing, E. P., & Cohen, W. W. (2008). Joint latent topic models for text and citations. In: ACM Sigkdd international conference on knowledge discovery & data mining .
Nordensvard, J., Zhou, Y., & Zhang, X. (2018). Innovation core, innovation semi-periphery and technology transfer: The case of wind energy patents. Energy Policy , 120 , 213–227.
Pan, M., Zhou, Y., & Zhou, D. (2019). Comparing the innovation strategies of Chinese and European wind turbine firms through a patent lens. Environmental Innovation and Societal Transitions, 30 , 6–18.
Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). Deepwalk: Online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining , pp. 701–710
Rosen-Zvi, M., Griffiths, T. L., Steyvers, M., & Smyth, P. (2012). The author-topic model for authors and documents. In: Conference on uncertainty in artificial intelligence.
Small, H. (1973). Co-citation in the scientific literature: A new measure of the relationship between two documents. Journal of the American Society for information Science, 24 (4), 265–269.
Steyvers, M., Smyth, P., Rosen-Zvi, M. & Griffiths, T. (2004). Probabilistic author-topic models for information discovery. In: Tenth Acm Sigkdd international conference on knowledge discovery & data mining .
Teh, Y. W., Jordan, M. I., Beal, M. J., & Blei, D. M. (2006). Hierarchical dirichlet processes. Publications of the American Statistical Association, 101 (476), 1566–1581.
Wang, B., Liu, Y., Zhou, Y., & Wen, Z. (2018). Emerging nanogenerator technology in China: A review and forecast using integrating bibliometrics, patent analysis and technology roadmapping methods. Nano Energy, 46 , 322–330.
Wei, C., Chaoran, L., Chuanyun, L., Lingkai, K., & Zaoli, Y. (2020). Tracing the evolution of 3-D printing technology in China using LDA-based patent abstract mining. IEEE Transactions on Engineering Management, PP , 1–14.
Wu, Y., Liu, S., Yan, K., Liu, M., & Wu, F. (2014). Opinionflow: Visual analysis of opinion diffusion on social media. IEEE Transactions on Visualization Computer Graphics, 20 (12), 1763–1772.
Xiao, Y., Lu, L. Y., Liu, J. S., & Zhou, Z. (2014). Knowledge diffusion path analysis of data quality literature: A main path analysis. Journal of Informetrics, 8 (3), 594–605.
Xu, H. (2020). Topic-linked innovation paths in science and technology. Journal of Informetrics, 14 (2), 101014.
Xu, G., Hu, W., Qiao, Y., & Zhou, Y. (2020). Mapping an innovation ecosystem using network clustering and community identification: A multi-layered framework. Scientometrics, 124 , 2057–2081. https://doi.org/10.1007/s11192-020-03543-0 .
Xu, G., Wu, Y., Minshall, T., & Zhou, Y. (2017). Exploring the emerging ecosystem across science, technology and business: A case of 3D printing in China. Technological Forecasting and Social Change . https://doi.org/10.1016/j.techfore.2017.06.030 .
Yao, Q., Song, Z., & Peng, C. (2011). Research on text categorization based on LDA. Computer Engineering Applications, 47 (13), 150–153.
Yau, C. K., Porter, A., Newman, N., & Suominen, A. (2014). Clustering scientific documents with topic modeling. Scientometrics, 100 (3), 767–786.
Yu, J. (2011). From 3G to 4G: Technology evolution and path dynamics in China’s mobile telecommunication sector. Technology Analysis Strategic Management, 23 (10), 1079–1093.
Zhang, Y., Zhang, G., Chen, H., Porter, A. L., Zhu, D., & Lu, J. (2016). Topic analysis and forecasting for science, technology and innovation: Methodology with a case study focusing on big data research. Technological Forecasting Social Change, 105, 179–191.
Zhao, P., Han, J., & Sun, Y. (2009). P-Rank: A comprehensive structural similarity measure over information networks. In: ACM conference on information & knowledge management .
Zhou, Y., & Minshall, T. (2014). Building global products and competing in innovation: The role of Chinese university spin–outs and required innovation capabilities. International Journal of Technology Management, 64 (2), 180–209.
Zhou, Y., Dong, F., Kong, D., & Liu, Y. (2019b). Unfolding the convergence process of scientific knowledge for the early identification of emerging technologies. Technological Forecasting and Social Change, 144 (JUL.), 205–220.
Zhou, Y., Dong, F., Liu, Y., Li, Z., Du, J., & Zhang, L. (2020). Forecasting emerging technologies using data augmentation and deep learning. Scientometrics, 123 (1), 1–29.
Zhou, Y., Li, X., Lema, R., & Urban, F. (2016). Comparing the knowledge bases of wind turbine firms in Asia and Europe: Patent trajectories, networks, and globalisation. Science and Public Policy , 43 (4), 476–491. https://doi.org/10.1093/scipol/scv055 .
Zhou, Y., Lin, H., Liu, Y., & Ding, W. (2019a). A novel method to identify emerging technologies using a semi-supervised topic clustering model: A case of 3d printing industry. Scientometrics, 120 , 167.
Zhou, Y., Pan, M., & Urban, F. (2018). Comparing the international knowledge flow of china’s wind and solar photovoltaic (pv) industries: Patent analysis and implications for sustainable development. Sustainability, 10 (6), 1883.
This work was supported by the National Natural Science Foundation of China (Nos. 71974107, 91646102, L1824043, L1924058, L1824039, L1724034), the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (16JDGC011), the Construction Project of China Knowledge Center for Engineering Sciences and Technology (No. CKCEST-2020-2-5), the UK–China Industry Academia Partnership Program (UK-CIAPP/260) and Tsinghua University Project of Volvo-supported Green Economy and Sustainable Development (20153000181). The findings and observations contained in this paper are those of the authors and do not necessarily reflect the views of the National Natural Science Foundation.
Author information
Authors and affiliations.
School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China
Huailan Liu, Zhiwang Chen & Sheng Liu
Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
School of Public Policy and Management, Tsinghua University, Beijing, 100084, China
Corresponding author
Correspondence to Yuan Zhou .
Appendix A: Hierarchical Dirichlet Process
The Hierarchical Dirichlet Process (HDP) is a non-parametric Bayesian topic model that assumes the corpus contains infinitely many topics. To understand the HDP, we first review the Dirichlet Process (DP).
The DP is a stochastic process that generates probability distributions, parameterized by a concentration parameter \(\gamma\) and a base probability measure H. We denote it by \(G_{0} \sim DP\left( {\gamma ,H} \right)\). A useful perspective on the Dirichlet process is provided by the Chinese restaurant process (CRP) (Aldous 1985). Let \(\theta_{1} ,\theta_{2} , \ldots\) be a sequence of variables drawn independently and identically from \(G_{0}\). In this metaphor, \(\theta_{i}\) is a customer entering a restaurant with infinitely many tables, each serving a unique dish \(\phi_{k}\). Each arriving customer chooses an occupied table with probability proportional to the number of customers already sitting at it, denoted \(m_{k}\), and, with probability proportional to \(\gamma\), chooses a new, previously unoccupied table. This process can be expressed as follows (Blackwell and Macqueen 1973):

$$\theta_{i} \mid \theta_{1} , \ldots ,\theta_{i - 1} ,\gamma ,H \;\sim\; \sum_{k = 1}^{K} \frac{m_{k}}{i - 1 + \gamma}\,\delta_{\phi_{k}} \;+\; \frac{\gamma}{i - 1 + \gamma}\,H$$

where K is the current number of occupied tables and \(\delta_{\phi_{k}}\) is a probability measure concentrated at \(\phi_{k}\).
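As an illustrative sketch (ours, not from the paper), the CRP seating scheme can be simulated directly; `gamma` plays the role of \(\gamma\) and the returned list holds the table counts \(m_{k}\):

```python
import random

def crp_partition(n_customers, gamma, seed=0):
    """Simulate CRP seating: customer i joins existing table k with
    probability m_k / (i - 1 + gamma) and opens a new table with
    probability gamma / (i - 1 + gamma). Returns the table sizes m_k."""
    rng = random.Random(seed)
    tables = []  # tables[k] = m_k, customers seated at table k
    for i in range(1, n_customers + 1):
        r = rng.uniform(0, i - 1 + gamma)
        acc = 0.0
        for k, m_k in enumerate(tables):
            acc += m_k
            if r < acc:
                tables[k] += 1
                break
        else:  # residual mass gamma: open a new table
            tables.append(1)
    return tables

sizes = crp_partition(100, gamma=2.0)
```

With `gamma=2.0` and 100 customers, a few large tables and a long tail of small ones typically emerge, reflecting the rich-get-richer dynamic of the CRP.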
The Dirichlet process can be used to model grouped data, and the HDP links group-specific Dirichlet processes so that clusters can be shared among groups of data. The graphical model is shown in Fig. 8.
Fig. 8 The graphical representation for HDP
The HDP has a two-level DP structure. Each \(G_{j}\) is distributed as a DP corresponding to group j, with a concentration parameter, \(\alpha\), and a base distribution, \(G_{0}\). \(G_{0}\) is itself distributed as a DP with a concentration parameter, \(\gamma\), and a base distribution, H. Moreover, J is the number of observed groups, \(n_{j}\) is the number of observed variables in group j, \(x_{ji}\) is the \(i_{th}\) observed variable in group j, \(\theta_{ji}\) is the factor of \(x_{ji}\), and \(F\left( {\theta_{ji} } \right)\) is the distribution of \(x_{ji}\) given \(\theta_{ji}\). The generative model for the HDP is as follows:

$$G_{0} \sim DP\left( {\gamma ,H} \right), \qquad G_{j} \mid G_{0} \sim DP\left( {\alpha ,G_{0} } \right) \quad \left( {j = 1, \ldots ,J} \right),$$
$$\theta_{ji} \mid G_{j} \sim G_{j} , \qquad x_{ji} \mid \theta_{ji} \sim F\left( {\theta_{ji} } \right) \quad \left( {i = 1, \ldots ,n_{j} } \right).$$
To better understand the HDP, we explain it in terms of topic modeling over documents. In the HDP, the sampling order of \(\theta_{ji}\) is exchangeable, and so is that of \(G_{j}\). H is taken as a Dirichlet distribution whose dimension is the size of the vocabulary, i.e., a distribution over an uncountable number of term distributions. \(G_{0}\) is a distribution over a countably infinite number of topic-word distributions. For each document j, \(G_{j}\) is a distribution over a countably infinite number of categorical term distributions, i.e., the topic distribution of the document. Here, \(\theta_{ji}\) is a categorical distribution over terms, i.e., a topic, and \(x_{ji}\) is an observed word.
We can use the Chinese Restaurant Franchise (Teh et al. 2006) to understand the HDP. In this metaphor, many restaurants share the same menu, which offers an unlimited number of dishes. Each restaurant has an unlimited number of tables, and each table can seat an unlimited number of customers but serves only one dish. The dishes are chosen only after the customers of all the restaurants have chosen tables to sit at. As in the CRP, when the \(i_{th}\) customer \(\theta_{ji}\) enters the \(j_{th}\) restaurant, the customer chooses an occupied table or a new table according to the equation:

$$\theta_{ji} \mid \theta_{j1} , \ldots ,\theta_{j,i - 1} ,\alpha ,G_{0} \;\sim\; \sum_{t = 1}^{m_{j.}} \frac{n_{jt.}}{i - 1 + \alpha}\,\delta_{\psi_{jt}} \;+\; \frac{\alpha}{i - 1 + \alpha}\,G_{0}$$
where \(m_{j.}\) is the number of occupied tables in the \(j_{th}\) restaurant; \(n_{jt.}\) is the number of customers at the \(t_{th}\) table of the \(j_{th}\) restaurant; \(\psi_{jt}\) is the dish index of the \(t_{th}\) table of the \(j_{th}\) restaurant; and \(\delta_{{\psi_{jt} }}\) is a probability measure concentrated at \(\psi_{jt}\).
After all of the customers of all of the restaurants have chosen a table, the customers at each table pick one dish per table in turn. In each selection, customers do not know which dish is good, so they consider how many times each dish has already been selected: the more often a dish has been chosen, the more likely it is to be chosen again, while a new dish is chosen with a certain probability. This process follows the equation:

$$\psi_{jt} \mid \gamma ,H \;\sim\; \sum_{k = 1}^{K} \frac{m_{.k}}{m_{..} + \gamma}\,\delta_{\phi_{k}} \;+\; \frac{\gamma}{m_{..} + \gamma}\,H$$
where \(m_{.k}\) is the number of tables serving the \(k_{th}\) dish across all restaurants, \(m_{..}\) is the total number of occupied tables across all restaurants, and \(\phi_{k}\) is the \(k_{th}\) dish.
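The two-level CRF can likewise be sketched in code. This is our own illustrative simulation using the standard sequential construction, in which a newly opened table draws its dish immediately rather than after all customers are seated; all function and variable names are ours:

```python
import random

def crf_assign(n_restaurants, n_customers, alpha, gamma, seed=0):
    """Sequential Chinese Restaurant Franchise simulation.

    Within restaurant j a customer sits at table t with probability
    proportional to n_jt, or opens a new table (mass alpha). A newly
    opened table is served dish k with probability proportional to
    m_.k (tables serving k across ALL restaurants), or a new dish
    (mass gamma). Returns per-restaurant (table_sizes, table_dishes)
    and the global per-dish table counts m_.k."""
    rng = random.Random(seed)
    dish_tables = []  # m_.k over all restaurants
    franchise = []
    for j in range(n_restaurants):
        sizes, dishes = [], []  # n_jt and psi_jt for restaurant j
        for i in range(n_customers):
            weights = sizes + [alpha]
            t = rng.choices(range(len(weights)), weights=weights)[0]
            if t < len(sizes):
                sizes[t] += 1          # join an occupied table
            else:
                sizes.append(1)        # open a new table ...
                dw = dish_tables + [gamma]
                k = rng.choices(range(len(dw)), weights=dw)[0]
                if k == len(dish_tables):
                    dish_tables.append(0)  # ... possibly with a new dish
                dish_tables[k] += 1
                dishes.append(k)
        franchise.append((sizes, dishes))
    return franchise, dish_tables
```

Because the dish menu (`dish_tables`) is shared across restaurants, topics drawn in one document raise the probability of the same topic appearing in every other document, which is exactly the cluster-sharing property of the HDP.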
In this metaphor, each dish \(\phi\) corresponds to a global topic, and each restaurant corresponds to a document. Because each document's topics are drawn from the same measure \(G_{0}\), global topics can be shared by all documents. Moreover, because of the two levels of DP, each document contains words from different topics.
It is easy to see that, when choosing a dish (topic) for table t in restaurant j, the counts of the different dishes in all restaurants are taken into consideration to the same degree. That is to say, customers choosing a new dish in restaurant j are influenced equally by all the restaurants. In the corresponding topic model process, the topic distribution of a document is influenced to the same degree by all the documents, which is not reasonable. We think that integrating citation information into the topic model can handle this problem.
Appendix B: Citation-involved Hierarchical Dirichlet Process
In the CIHDP model, \(D = \left\{ {j_{1} ,j_{2} , \ldots } \right\}\) is a collection of scientific documents, J is the number of documents, each \(\varvec{x}_{j}\) consists of a series of words chosen from a vocabulary V as \(\varvec{x}_{j} = \left\{ {x_{j1} ,x_{j2} , \ldots } \right\}\), and \(G = \left( {V,E} \right)\) represents the citation network. Each node \(v \in V\) is the index of a document j, and each directed edge \(\left( {u,v} \right) \in E\) represents a citation. In our model, the directed citation graph G was not used directly. Instead, node2vec and cosine similarity were used to calculate the similarity of each pair of documents based on G. A similarity matrix, M, was generated in this process; the element \(sim\left( {j_{a} ,j_{b} } \right)\) in the \(a_{th}\) row and \(b_{th}\) column of M was the similarity of document a and document b.
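The similarity-matrix step can be sketched as follows (our code; it assumes the node embeddings of the citation graph, e.g. produced by node2vec, are already available as rows of an array):

```python
import numpy as np

def cosine_similarity_matrix(emb):
    """Pairwise cosine similarity of row vectors. `emb` is an
    (n_docs, d) array of citation-graph node embeddings (e.g. from
    node2vec); returns M with M[a, b] = sim(j_a, j_b)."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    unit = emb / np.clip(norms, 1e-12, None)  # avoid division by zero
    return unit @ unit.T

# Toy example: three 2-d embeddings
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M = cosine_similarity_matrix(emb)
```

The resulting M is symmetric with ones on the diagonal, so each document is most similar to itself, matching the intuition that a document's own topic counts should influence it most.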
Similar to the HDP, there were two levels of Dirichlet processes in our model. \(G_{0}\), the global topic measure, was drawn from a DP with a base distribution, H, and a concentration parameter, \(\gamma\). Additionally, a set of measures, \(\{ G_{j} \}_{j = 1}^{J}\), was drawn from the DP with a base distribution, \(G_{0}\), and a concentration parameter, \(\alpha\). However, unlike the other parameters, which play the same roles as in the HDP, the value of \(\psi\) was influenced by the similarity matrix, M.
That is, in the process of choosing dishes for each table, we took into consideration the differences in customer preferences caused by the differences in the geographical locations of the restaurants. This metaphor was named the USA Local Specialties Restaurant Franchise. As in the Chinese Restaurant Franchise (CRF) metaphor, the restaurants shared the same menu; the number of dishes, the number of tables in each restaurant, and the number of customers each table could seat were all infinite; and each customer chose a table in the same way as in the CRF. Nevertheless, these restaurants were located all over the USA, and in the process of picking dishes, a customer considered not only the number of times each dish had been selected, but also the geographical location of each restaurant.
As shown in Fig. 9, the purple blocks \(\phi_{1}\) represent lobster, the yellow blocks \(\phi_{2}\) represent boiled peanuts, and the orange blocks \(\phi_{3}\) represent mountain trout. We assumed that the franchise had only three restaurants and that no table in the restaurant located in Utah had ordered yet. Outside the Utah restaurant, the lobster had been ordered twice, the mountain trout twice, and the boiled peanuts twice. Thus, according to the CRF's ordering principle, these three dishes would be ordered with the same probability, which is clearly unreasonable: Utah people are known to like mountain trout and to have tastes similar to people in Colorado, and between boiled peanuts and lobster, Utah people are more likely to eat boiled peanuts.
Fig. 9 A depiction of the generation process of the USA Local Specialties Restaurant Franchise (USA map quoted from: www.mapsofworld.com/usa/)
However, in the CIHDP model, the geographical location of each restaurant was taken into consideration. Because Maine is far from Utah, the dishes ordered in the Maine restaurant would have less impact on the dishes ordered by customers at the Utah restaurant. On the contrary, because Colorado is adjacent to Utah, the dishes ordered in Colorado would have a greater impact on the dishes ordered at the Utah restaurant. Of course, the dishes ordered in the Utah restaurant itself would have the greatest impact. Different geographical locations thus affect the distribution of dishes over the tables.
In detail, the dish counts of the restaurant farthest from a given restaurant had the weakest influence on its customers' selection of dishes; the counts of nearby restaurants had a stronger influence; and, obviously, the influence of the restaurant itself was the strongest. The whole process was formalized by the following equation:

$$\psi_{jt} \mid \gamma ,H \;\sim\; \sum_{k = 1}^{K} \frac{m_{jk}^{*}}{m_{j.}^{*} + \gamma}\,\delta_{\phi_{k}} \;+\; \frac{\gamma}{m_{j.}^{*} + \gamma}\,H, \qquad m_{jk}^{*} = \sum_{l = 1}^{J} sim\left( {j,j_{l} } \right)m_{j_{l} k}$$
where \(j_{l}\) is the \(l_{th}\) restaurant.
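A minimal sketch (our own, hypothetical code, not the authors' implementation) of computing the similarity-weighted "influenced" table counts \(m_{jk}^{*}\) used in this appendix, under the reading \(m_{jk}^{*} = \sum_{l} sim(j,j_{l})\, m_{j_{l} k}\):

```python
import numpy as np

def influenced_counts(M, m):
    """Influenced table counts: m_star[j, k] = sum_l M[j, l] * m[l, k],
    i.e. the table counts of every restaurant/document l weighted by
    its citation-network similarity to document j. M is the (J, J)
    similarity matrix; m[l, k] is the number of tables serving dish k
    in restaurant l."""
    return M @ m

# Toy example with three documents and two dishes/topics
M = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
m = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])
m_star = influenced_counts(M, m)
```

Because the diagonal of M is 1, a restaurant's own counts enter at full weight, while distant (dissimilar) restaurants contribute only a fraction of theirs.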
In the topic modeling process of CIHDP, geography corresponds to the distance between two documents in the citation network, and the degree of the impact was determined by the similarity of the two documents, calculated by node2vec and cosine similarity based on the citation network. The generation of the words of each document is described in Table 6.
We used the Gibbs sampling method to obtain the index variables \(t_{ji}\) (associating customers/words with tables) and \(k_{jt}\) (associating tables with dishes/topics). Given these variables, we could reconstruct the distribution over topics for each document and the distribution over words for each topic. The sampling probability of table \(t_{ji}\) was as follows:

$$p\left( {t_{ji} = t \mid \varvec{t}^{\neg ji} ,\varvec{k}} \right) \propto \begin{cases} n_{jt}^{\neg ji}\, f_{k_{jt}}^{\neg x_{ji}}\left( {x_{ji} } \right), & t \text{ is an existing table} \\ \alpha \, p\left( {x_{ji} \mid \varvec{t}^{\neg ji} ,t_{ji} = t^{\text{new}} ,\varvec{k}} \right), & t = t^{\text{new}} \end{cases}$$
where \(n_{jt}^{\neg ji}\) is the count of customers at table t in restaurant j, \(\neg ji\) denoting that the count is calculated without considering customer i of restaurant j, and \(f_{k}^{{\neg x_{ji} }} \left( {x_{ji} } \right)\) is the likelihood of generating \(x_{ji}\) at an existing table t serving dish k, which could be calculated by:

$$f_{k}^{\neg x_{ji}}\left( {x_{ji} } \right) = \frac{\int f\left( {x_{ji} \mid \phi_{k} } \right)\prod_{j'i' \neq ji,\;k_{j't_{j'i'}} = k} f\left( {x_{j'i'} \mid \phi_{k} } \right)h\left( {\phi_{k} } \right)d\phi_{k}}{\int \prod_{j'i' \neq ji,\;k_{j't_{j'i'}} = k} f\left( {x_{j'i'} \mid \phi_{k} } \right)h\left( {\phi_{k} } \right)d\phi_{k}}$$
where \(k = k_{jt}\) is the dish served at table t in restaurant j. In addition, \(p(x_{ji} \mid \varvec{t}^{\neg ji} ,t_{ji} = t^{\text{new}} ,\varvec{k})\) is the conditional distribution of \(x_{ji}\) given \(t_{ji} = t^{\text{new}}\), which could be calculated by integrating out the possible values of \(k_{{jt^{\text{new}} }}\) as follows:

$$p\left( {x_{ji} \mid \varvec{t}^{\neg ji} ,t_{ji} = t^{\text{new}} ,\varvec{k}} \right) = \sum_{k = 1}^{K} \frac{m_{jk}^{*}}{m_{j.}^{*} + \gamma}\, f_{k}^{\neg x_{ji}}\left( {x_{ji} } \right) + \frac{\gamma}{m_{j.}^{*} + \gamma}\, f_{{k^{\text{new}} }}^{\neg x_{ji}}\left( {x_{ji} } \right)$$
where \(m_{jk}^{*}\) is the influenced number of tables assigned to dish k for restaurant j, and \(m_{j.}^{*}\) is the total influenced number of tables for restaurant j.
Also, \(f_{{k^{\text{new}} }}^{{\neg x_{ji} }} \left( {x_{ji} } \right) = \int f\left( {x_{ji} |\phi } \right)h\left( \phi \right)d\phi\) is the prior density of \(x_{ji}\). The prior probability that the new table \(t^{\text{new}}\) served a new dish \(k^{\text{new}}\) was proportional to \(\gamma\). If the sampled value of \(t_{ji}\) was equal to \(t^{\text{new}}\), we could obtain a sample of \(k_{{jt^{\text{new}} }}\) by sampling as follows:

$$p\left( {k_{{jt^{\text{new}} }} = k} \right) \propto \begin{cases} m_{jk}^{*}\, f_{k}^{\neg x_{ji}}\left( {x_{ji} } \right), & k \text{ is an existing dish} \\ \gamma \, f_{{k^{\text{new}} }}^{\neg x_{ji}}\left( {x_{ji} } \right), & k = k^{\text{new}} \end{cases}$$
In the process of Gibbs sampling, if the value of \(n_{jt}\) was reduced to 0, the probability that a later customer would choose table \(t_{ji}\) was 0; thus, we deleted the \(k_{jt}\) corresponding to \(t_{ji}\), and if dish k no longer corresponded to any table, the dish was removed as well. When the dish of an existing table t, with words \(\varvec{x}_{jt}\), had to be resampled, \(k_{jt}\) was updated as follows:

$$p\left( {k_{jt} = k} \right) \propto \begin{cases} m_{jk}^{*\neg jt}\, f_{k}^{\neg \varvec{x}_{jt}}\left( {\varvec{x}_{jt} } \right), & k \text{ is an existing dish} \\ \gamma \, f_{{k^{\text{new}} }}^{\neg \varvec{x}_{jt}}\left( {\varvec{x}_{jt} } \right), & k = k^{\text{new}} \end{cases}$$
Appendix C: Parameter setting and orthogonal test
Both CIHDP and HDP are non-parametric Bayesian models. Although they do not require the number of topics to be set in advance as LDA does, the parameter values of the models' prior distributions still have to be provided. The two models share the same parameters: β, γ, and α. In this study, a three-factor, five-level orthogonal experiment was performed on the parameters to determine the optimal parameter combination (Table 7).
To reduce the randomness of the test results, we conducted three repeated tests for each condition (that is, each set of parameters); the perplexity and the number of topics (K) reported are the averages of the three repeated runs. The number of model iterations was set to 150, under which the perplexity of the model converges. The results of the parameter orthogonal experiment are shown in Tables 8 and 9.
Appendix D: Topics in the overall time span
This section gives the experimental results of topic detection over the overall period in the case study. The "topic tag" is a label assigned to each topic after reviewing its 25 highest-frequency keywords. The probability value and frequency of each word in the topic are given after the word (word frequency in parentheses). The words marked in red are representative words of the corresponding topic (Tables 10, 11).
Appendix E: Topics in each period
This section gives the experimental results of topic tracking in the case study; the topic information for each period is given below. The "topic tag" is a label assigned to each topic after reviewing its 25 highest-frequency keywords. The probability value and frequency of each word in the topic are given after the word (word frequency in parentheses). The words marked in red are representative words of the corresponding topic (Tables 12, 13).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Liu, H., Chen, Z., Tang, J. et al. Mapping the technology evolution path: a novel model for dynamic topic detection and tracking. Scientometrics 125 , 2043–2090 (2020). https://doi.org/10.1007/s11192-020-03700-5
Received : 09 November 2019
Published : 14 September 2020
Issue Date : December 2020
DOI : https://doi.org/10.1007/s11192-020-03700-5
- Technology path mapping
- Topic model
- Dynamic topic detection
- Hierarchical Dirichlet Process (HDP)
- Path splitting and fusion
Please note you do not have access to teaching notes, evolutions and trends of artificial intelligence (ai): research, output, influence and competition.
Library Hi Tech
ISSN : 0737-8831
Article publication date: 22 July 2021
Issue publication date: 27 May 2022
Purpose
This paper sheds light on the nature of artificial intelligence (AI) development and serves as a starting point for advancing that development.
Design/methodology/approach
This work reveals the evolutions and trends of AI from four dimensions: research, output, influence and competition, by leveraging an academic knowledge graph with 130,750 AI scholars and 43,746 scholarly articles.
Findings
The authors find that the "research convergence" phenomenon has become more evident in current AI research, as scholars in different regions share highly similar research interests. They notice that Pareto's principle applies to AI scholars' output, which has been increasing at an explosive rate over the past two decades. They discover that top works dominate AI academia, attracting considerable attention. Finally, the authors delve into AI competition, which accelerates technology development, talent flow, and collaboration.
Originality/value
The work sheds light on the nature of AI development and serves as a starting point for advancing it. It supports a more comprehensive and profound understanding of the evolutions and trends of AI, bridging the gap between literature research and AI development and informing how AI development and its strategy formulation can be promoted.
- Science of science
- Artificial intelligence
- Knowledge graph
Acknowledgements
This article has been awarded by the National Natural Science Foundation of China (61941113, 82074580, 61806111), the Fundamental Research Fund for the Central Universities (30918015103, 30918012204), Nanjing Science and Technology Development Plan Project (201805036), China Academy of Engineering Consulting Research Project (2019-ZD-1-02-02), National Social Science Foundation (18BTQ073), NSFC for Distinguished Young Scholar under Grant No. 61825602 and National Key R&D Program of China under Grant No. 2020AAA010520002.
Shao, Z. , Yuan, S. , Wang, Y. and Xu, J. (2022), "Evolutions and trends of artificial intelligence (AI): research, output, influence and competition", Library Hi Tech , Vol. 40 No. 3, pp. 704-724. https://doi.org/10.1108/LHT-01-2021-0018
Emerald Publishing Limited
Copyright © 2021, Emerald Publishing Limited
The Effects of Technological Developments on Work and Their Implications for Continuous Vocational Education and Training: A Systematic Review
- Faculty of Human Sciences, University of Regensburg, Regensburg, Germany
Technology is changing the way organizations and their employees need to accomplish their work. Empirical evidence on this topic is scarce. The aim of this study is to provide an overview of the effects of technological developments on work characteristics and to derive the implications for work demands and continuous vocational education and training (CVET). The following research questions are answered: What are the effects of new technologies on work characteristics? What are the implications thereof for continuous vocational education and training? Technologies, defined as digital, electrical or mechanical tools that affect the accomplishment of work tasks, are considered in various disciplines, such as sociology or psychology. A theoretical framework based on theories from these disciplines (e.g., upskilling, task-based approach) was developed and statements on the relationships between technology and work characteristics, such as complexity, autonomy, or meaningfulness, were derived. A systematic literature review was conducted by searching databases from the fields of psychology, sociology, economics and educational science. Twenty-one studies met the inclusion criteria. Empirical evidence was extracted and its implications for work demands and CVET were derived by using a model that illustrates the components of learning environments. Evidence indicates an increase in complexity and mental work, especially while working with automated systems and robots. Manual work is reported to decrease on many occasions. Workload and workflow interruptions increase simultaneously with autonomy, especially with regard to digital communication devices. Role expectations and opportunities for development depend on how the profession and the technology relate to each other, especially when working with automated systems. 
The implications for the work demands necessary to deal with changes in work characteristics include knowledge about technology, openness toward change and technology, skills for self- and time management and for further professional and career development. Implications for the design of formal learning environments (i.e., the content, method, assessment, and guidance) include that the work demands mentioned must be part of the content of the trainings, the teachers/trainers must be equipped to promote those work demands, and that instruction models used for the learning environments must be flexible in their application.
Introduction
In the face of technology-driven disruptive changes in societal and organizational practices, continuous vocational education and training (CVET) lacks information on how the impact of technologies on work must be considered from an educational perspective ( Cascio and Montealegre, 2016 ). Research on workplace technologies, i.e., tools or systems that have the potential to replace or supplement work tasks, is typically concerned with one of two areas of interest: first, economic and sociological research repeatedly raises the question of technological mass unemployment and societal inequality as a result of technological advances ( Brynjolfsson and McAfee, 2014 ; Ford, 2015 ; Frey and Osborne, 2017 ); second, management literature questions the suitability of prevailing organizational structures in the face of the so-called "fourth industrial revolution" ( Schwab, 2017 ), taking visionary leaps into a fully automated future of digital value creation ( Roblek et al., 2016 ).
Many scholarly contributions discuss the enormous potential of new technologies for work and society at a hypothetical level, which has led to a large number of position papers. Moreover, the question of what consequences recent developments, such as working with robots, automated systems or artificial intelligence, will have for different professions remains largely unclear. By examining what workplace technologies actually "do" in the work environment, it was suggested that work tasks change because of technological developments ( Autor et al., 2003 ; Autor, 2015 ): technologies substitute different operations or entire tasks and thus leave room for other activities. Jobs are defined by their work tasks and the conditions under which these tasks have to be performed, which in turn define the necessary competences, that is, the potential capacity to carry out a job (e.g., Ellström, 1997 ). Therefore, CVET needs to be informed about the changes that technology causes in work tasks and the resulting characteristics of work. Only then is CVET able to derive the required competences of employees and organize learning environments that foster the acquisition of these competences. These insights can be used to determine the implications for the components of formal learning environments: content, didactics, trainer behavior, assessment, and resources (e.g., Mulder et al., 2015 ).
The aim of this systematic literature review is to gain insight into the effects of new technological developments on work characteristics, in order to derive the necessary work demands and their implications for the design of formal learning environments in CVET.
Therefore, the following research questions will be answered:
RQ 1 : What are the effects of new technologies on work characteristics?
RQ 2 : What are the implications thereof for continuous vocational education and training?
Theoretical considerations on the relationships between technology and work characteristics are presented before the methods for searching, selecting and analyzing suitable studies are described. Regarding the results section, the structure is based on the three main steps of analyzing the included studies: First, the variables identified within the selected studies are clustered and defined in terms of work characteristics. Second, a comprehensive overview of evidence on the relationships between technologies and work characteristics is displayed. Third, the evidence is evaluated regarding the work demands that result from technologies changing work characteristics. Finally, the implications for CVET and future research as well as the limitations of this study will be discussed.
Theoretical Framework
In this section, a conceptualization of technology and theoretical assumptions on relationships between technology and work characteristics will be outlined. Research within various disciplines, such as sociology, management, economics, educational science, and psychology was considered to inform us on the role of technology within work. Completing this section, an overview of the various components of learning environments is provided to be used as a basis for the analyses of the empirical evidence.
Outlining Technology and Recent Technological Developments
Studies often lack a clear definition of technology, which may be due to the fact that the word itself is an "equivoque" ( Weick, 1990 , p. 1) and a "repository of overlapping inconsistent meanings" ( McOmber, 1999 , p. 149). A suitable definition can be provided by analyzing what technologies actually "do" ( Autor et al., 2003 , p. 1,280). The primary goal of technology at work is to save or enhance labor in the form of work tasks, defined as "a unit of work activity that produces output" ( Autor, 2013 , p. 186). Technology can therefore be defined as mechanical or digital devices, tools or systems that are used to replace work tasks or complement their execution (e.g., McOmber, 1999 ; Autor et al., 2003 ). According to this view, technology is conceptualized according to "its status as a tool" ("instrumentality"; McOmber, 1999 , p. 141). Alternatively, technology is understood as "the product of a specific historical time and place," reflecting a stage of development within a predefined historical process ("industrialization"; McOmber, 1999 , p. 143), or as the "newest or latest instrumental products of human imagination" ("novelty"; McOmber, 1999 , p. 143), reflecting its nature of rapidly replacing and "outdating" its predecessors. The definition according to "instrumentality" is particularly suitable for this research, as the interest focuses on individual-level effects of technologies and their use for accomplishing work. Therefore, the technology needs to be mentioned explicitly (e.g., "robot" instead of "digital transformation") and described specifically in the form with which the employee is confronted at the workplace. Different definitions may reflect different perspectives on the role of technology for society and work. These perspectives, in the form of paradigmatic views ( Liker et al., 1999 ), include philosophical and cultural beliefs as well as ideas on organizational design and labor relations.
They differ with regard to the complexity with which the social context is believed to determine the impact of technology on society. Listed in order of increasing social complexity, the impact may be determined by technology itself (i.e., "technological determinism"), established power relations (i.e., "political interest"), managerial decisions (i.e., "management of technology"), or the interaction between technology and its social context (i.e., "interpretivist") ( Liker et al., 1999 ). Later research added an even more complex perspective, according to which the effects of technology on society and organizations are determined by the relations between the actors themselves (i.e., "sociomateriality"; Orlikowski and Scott, 2008 ). Paradigmatic views may guide research in terms of content, purpose and goals, which in turn is likely to affect the methods and approach to research and may be specific to disciplines: for instance, Marxist sociological research follows the view of "political interest," while research in information systems follows the view of "management of technology."
New technological developments are widely discussed in various disciplines. For instance, Ghobakhloo (2018) summarizes the expected areas of application of various technological concepts within the "smart factory" in the manufacturing industry: the internet of things as an umbrella term for the independent communication of physical objects, big data as a procedure to analyze enormous amounts of data in order to predict the consequences of operative, administrative, and strategic actions, blockchain as the basis for independent, transparent, secure, and trustworthy transactions executed by humans or machines, and cloud computing as an internet-based flexible infrastructure to manage all these processes simultaneously ( Cascio and Montealegre, 2016 ; Ghobakhloo, 2018 ). The central question guiding the next section is to what extent these new technologies, as well as well-established technologies such as information and communication technologies (ICT), which are constantly being expanded with new functions, could influence work characteristics on a theoretical basis.
Theories on the Relationships Between Technology and Work Characteristics
A central discussion on technology can be found in the sociological literature on deskilling vs. upgrading ( Heisig, 2009 ). The definition of "skill" in empirical studies on this subject varies regarding its content, describing either the level of complexity that an employee is faced with at work, or the level of autonomy that employees are able to make use of ( Spenner, 1990 ). Theories advocating the deskilling of work (e.g., labor process theory; Braverman, 1998 ) propose that technology is used to undermine workers' skill, sense of control, and freedom. Employees need to support a mechanized workflow under constant surveillance in order to maximize production efficiency ( Braverman, 1998 ). Other authors, advocating "upskilling" ( Blauner, 1967 ; Bell, 1976 ; Zuboff, 1988 ), propose the opposite by claiming that technology frees employees from strenuous tasks, leaving them with more challenging and fulfilling tasks ( Francis, 1986 ). In addition, issues of identity at work were raised by Blauner (1967) , who acknowledged that employees may feel "alienated" as soon as technologies change or substitute work that is meaningful to them, leaving them with a feeling of powerlessness, meaninglessness, or self-estrangement ( Shepard, 1977 ). In sum, sociological theories suggest that technology has an impact on the level of freedom, power and privacy of employees, determining their identity at work and the level of alienation they experience.
According to contingency theories ( Burns and Stalker, 1994 ; Liker et al., 1999 ), technology is a means for organizations to reduce uncertainty and increase competitiveness ( Parker et al., 2017 ). Therefore, the effects of technology on the employee depend on strategic decisions that best fit the organizational environment. When operational uncertainty is high, organizations become more competitive by using technology to enhance the flexibility of employees, enabling a self-organized adaptation to the changing environment ( Cherns, 1976 ). This increases employees' flexibility by allowing them to identify and decide on new ways to add value to the organization ("organic organization"; Burns and Stalker, 1994 ). When operational uncertainty is low, organizations formalize and standardize procedures in order to optimize the workflow and make outputs more calculable ("mechanistic organization"; Burns and Stalker, 1994 ). This leads to fewer opportunities for individual decision-making and less flexibility for employees. In sum, contingency theories suggest that the effects of technology depend on the uncertainty and competitiveness of the external environment and may increase or decrease employees' flexibility and opportunities for decision-making and self-organization.
Economic research following the task-based approach of Autor et al. (2003) suggests that technology substitutes routine tasks and complements complex (or “non-routine”) ones. Routine manual and cognitive tasks usually follow a defined set of explicit rules, which makes them susceptible to automation. By analyzing qualification requirements in relation to employment rates and wage development, it was argued that workplace automation substitutes routine and low-skill tasks and thus favors individuals who can carry out high-skilled complex work due to their education and cognitive abilities ( Card and DiNardo, 2002 ; Autor et al., 2003 ). This means that the accomplishment of tasks “demanding flexibility, creativity, generalized problem-solving, and complex communications” ( Autor et al., 2003 , p. 1284) becomes more important. Complex tasks have so far posed a challenge for automation because they require procedural and often implicit knowledge ( Polanyi, 1966 ; Autor, 2015 ). However, recent technological developments, such as machine learning, are capable of delivering heuristic responses to complex cognitive tasks by applying inductive thinking or big data analysis ( Autor, 2015 ). Regarding complex manual tasks, mobile robots are increasingly equipped with advanced sensors that enable them to navigate through dynamic environments and interactively collaborate with human employees ( Cascio and Montealegre, 2016 ). In sum, economic research following the task-based approach argues that technology affects the routineness and complexity of work by substituting routine tasks. However, new technologies may increasingly be able to substitute and complement not only routine tasks but complex tasks as well. According to these theories, this will again increase the complexity of work by creating new demands for problem-solving and for reviewing the technology's activity.
Useful insights can be gained from psychological theories that explicitly take the role of work characteristics into account. Work characteristics (e.g., autonomy and meaningfulness) are often mentioned by sociological theories, for instance, without the concepts being clearly defined. Particularly the job characteristics model of Hackman and Oldham (1975) and the job-demand-control model of Karasek (1979) and Karasek et al. (1998) are consulted to further clarify the meaning of autonomy and meaningfulness at work. With regard to autonomy, Hackman and Oldham's (1975) model conceptualizes autonomy as a work characteristic, defined as “the degree to which the job provides substantial freedom, independence, and discretion to the employee in scheduling the work and in determining the procedures to be used in carrying it out” ( Hackman and Oldham, 1975 , p. 162). According to the authors, autonomy facilitates various work outcomes, such as motivation and performance. In a similar vein, Karasek et al. (1998) stress the role of autonomy in the form of “decision authority,” which interacts with more demanding work characteristics, such as workload or frequent interruptions, and therefore enables a prediction of job strain and stress ( Karasek et al., 1998 ). With regard to meaningfulness, Hackman and Oldham (1975) clarify that different core job dimensions all enhance the perception of meaningfulness at work: the significance of one's own work results for the work and lives of other people, the direct contribution to a common goal with visible outcomes, and the employment of various skills, talents, and activities. In sum, psychological theories on employee motivation and stress clarify the concepts of autonomy and meaningfulness by illustrating the factors that contribute to their experience in relation to challenging and rewarding aspects of work.
Components of CVET
In order to formulate the implications of the studied effects of technology on work characteristics for CVET, a framework with the different components of CVET is needed. The objective of the VET system and of continuing education is to qualify people by supporting the acquisition of required competences, for instance by providing training. Competences refer to the potential capacity of an individual to successfully carry out work tasks ( Ellström, 1997 ). They contain various components, such as work-related knowledge and social skills (e.g., Sonntag, 1992 ). Competences are considered here as “the combination of knowledge, skills and attitude, in relation to one another and in relation to (future) jobs” ( Mulder and Baumann, 2005 , p. 106; e.g., Baartman and de Bruijn, 2011 ).
Participants in CVET enter the system with competences, such as prior knowledge, motivation, and expectations. It is argued that these have to be considered when designing learning environments for CVET. Next to distinguishing the different components of learning environments (content, guidance, method, and assessment), it is considered important that these components are coherent and consistent ( Mulder et al., 2015 ). For instance, the content of the training needs to fit the objectives and the background of the participants. The same goes for the method or didactics used (e.g., co-operative learning, frontal instruction) and the guidance by teachers, mentors, or trainers. In addition, assessment needs to be consistent with all these components. For instance, problem-based learning or competence-based training requires different forms of assessment than more classical, teacher-centered forms of didactics, making a classic multiple-choice test a poor fit ( Gulikers et al., 2004 ). Figure 1 contains an overview of the components of learning environments for CVET.

Figure 1 . Components of CVET learning environments (adapted from Mulder et al., 2015 , p. 501).
Three steps were necessary to answer the research questions: firstly, a systematic search and review of empirical studies reporting evidence on the direct relationships between new technologies and work characteristics; secondly, an analysis of the evidence with regard to its implications for work demands; and thirdly, the derivation of the work demands and their implications for CVET.
Systematic Search Strategy
Due to the interdisciplinary nature of our research, specific databases were selected for each of the disciplines involved: Business Source Premier (business and management research) and PsycArticles (psychology) were searched via EBSCOhost, while ERIC (educational science) and Sociological Abstracts (sociology) were searched via ProQuest.
Identifying suitable keywords for technological concepts is challenging due to the rapidly changing and inconsistent terminology and the nested nature of technological concepts ( Huang et al., 2015 ). Therefore, technological terms were systematically mapped using the thesauri provided by each of the chosen databases. After exploding a basic term within a thesaurus, the resulting narrower and related terms were documented and examined within the following procedure: (a) checking their compatibility with our definition of technology, reflecting its instrumentality; (b) adjusting keywords that were too broad or too narrow; (c) disassembling nested concepts. The procedure was repeated stepwise for each of the databases. Finally, 45 terms reflecting new technologies were documented and used for the database search.
Keywords reflecting work characteristics were derived from the theoretical conceptualizations previously outlined. Synonyms for the different concepts within the relevant theories were identified and included. In order to narrow the search results, operators restricting hits to empirical studies conducted in a workplace setting were additionally added.
In order to avoid unnecessary redundancy, asterisk truncation was used wherever the search results neither lost significantly in precision nor grew to an unmanageable number of studies. The final search string is shown in Table 1 .
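As an illustration of how such a search string is assembled, the sketch below joins OR-blocks of (possibly truncated) terms with AND operators. The terms shown are illustrative placeholders only, not the actual 45 technology terms or work-characteristic synonyms documented in Table 1.

```python
# Hypothetical sketch of boolean search-string assembly; the term lists below
# are placeholders, not the actual keywords from Table 1.
def or_block(terms):
    """Join terms into a parenthesized OR block, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

tech_terms = ["robot*", "automation", "information and communication technolog*"]
work_terms = ["autonomy", "workload", "job demand*"]
workplace_filters = ["employee*", "workplace", "work setting*"]

# One AND-joined block per concept group: technology, work characteristic, workplace setting
search_string = " AND ".join(
    or_block(block) for block in (tech_terms, work_terms, workplace_filters)
)
print(search_string)
```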

Table 1 . Final search string.
Eligibility Criteria and Study Selection
Technical criteria included methodological adequacy. This was ensured by only including studies published in peer-reviewed journals. In addition, the studies had to provide quantitative or qualitative data on relationships between technology and work characteristics. Only English-language studies were considered, because most studies are published in English, so the most complete overview of the existing knowledge on this topic can be obtained. This also enables as many readers as possible to access the original studies and analyze the findings themselves.
Concerning technology, variables had to express either a direct consequence of or interaction with a certain technology (e.g., the amount of computer use or experience with robots in the workplace), or indirect psychological states that conceptually resulted from the presence of the technology (e.g., a feeling of increased expectations concerning availability). Regarding work characteristics, variables had to describe work-related aspects associated with our conceptualization of work characteristics (e.g., a change in flexibility or the perception of complexity).
Regarding the direction of effects, only studies that focused on the implementation or use of technologies for work-related purposes were included. Studies were excluded if they (a) tested particular designs or features of technologies and evaluated them without considering effects on work characteristics, (b) regarded technology not as a specific tool but as an abstract process (e.g., “digital transformation”), (c) were published before 1990, because the usability and usefulness of technologies before that time were substantially limited compared to today (e.g., Gattiker et al., 1988 ), or (d) investigated the impact of technologies on society in general without a specific relation to professional contexts (e.g., McClure, 2018 ).
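A minimal sketch of how such inclusion and exclusion criteria translate into a screening step is given below. The record fields are assumptions for illustration, not the coding scheme actually used in this review.

```python
# Hypothetical screening sketch mirroring the eligibility criteria above;
# field names are illustrative assumptions, not the review's actual coding scheme.
from dataclasses import dataclass

@dataclass
class Record:
    peer_reviewed: bool
    language: str
    year: int
    reports_tech_work_data: bool   # data on technology-work characteristic links
    abstract_process_only: bool    # e.g., "digital transformation" without a concrete tool

def include(r: Record) -> bool:
    """Apply the inclusion/exclusion criteria to a candidate record."""
    return (r.peer_reviewed
            and r.language == "en"
            and r.year >= 1990
            and r.reports_tech_work_data
            and not r.abstract_process_only)

candidates = [
    Record(True, "en", 2016, True, False),   # kept
    Record(True, "en", 1988, True, False),   # excluded: published before 1990
    Record(True, "de", 2012, True, False),   # excluded: not English-language
]
kept = [r for r in candidates if include(r)]
print(len(kept))  # 1
```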
Studies that did not report empirical findings on the relationships between technology and work characteristics, but rather on the relationships between technology and work demands (e.g., specific knowledge or skills) or work outcomes (e.g., performance, job satisfaction), were documented. Since the aim of this study was to derive the work demands from the work characteristics in any case, the studies that reported a direct empirical relationship between technology and work demands were analyzed separately ( N = 7).
Data Extraction
The variables expressing technology and work characteristics were listed in a table, including the quantitative or qualitative data on their relationships. Pearson's r correlations were preferred over regression results to ensure comparability. For qualitative data, the relevant passages documenting the evidence were included. Finally, methodological information as well as sample characteristics and sample size were listed.
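Pearson's r, the effect-size metric preferred here for comparability, can be computed as in the following sketch (the data are made-up illustrative values, not results from the reviewed studies):

```python
# Minimal Pearson's r computation; the data below are made-up illustrative
# values, not figures extracted from the reviewed studies.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ict_use = [1, 2, 3, 4, 5]        # hypothetical ICT-use scores
workload = [2, 4, 6, 8, 10]      # hypothetical workload ratings
print(round(pearson_r(ict_use, workload), 2))  # 1.0 (perfect positive relationship)
```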
Analysis of the Results
Firstly, the variables containing work-related aspects were clustered thematically into a comprehensive final set of work characteristics. This was necessary to reduce complexity due to variations in naming, operationalization, and measurement, and to make any patterns in the data more visible. Deviations from the theoretically expected clusters were noted and discussed before the evidence was synthesized narratively in accordance with the research questions ( Rodgers et al., 2009 ). As proposed, the evidence on changing work characteristics was analyzed with respect to the resulting work demands in the sense of knowledge, skills, attitude, and behavior, which in turn were used to determine the implications for the different components of CVET.
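The thematic clustering step can be sketched as a keyword-based mapping from study variables to the final work characteristics. The mapping and variable names below are illustrative assumptions, not the actual coding used in the review.

```python
# Hypothetical sketch of thematic clustering: study variables are mapped to
# final work characteristics via keyword matching. Both the keyword mapping and
# the example variable names are illustrative, not the review's actual coding.
clusters = {
    "workload": ["workload", "time pressure", "job speed"],
    "autonomy": ["autonomy", "decision latitude", "flexibility"],
    "privacy": ["privacy", "surveillance", "monitoring"],
}

def assign_cluster(variable_name):
    """Return the first work characteristic whose keywords match the variable name."""
    name = variable_name.lower()
    for cluster, keywords in clusters.items():
        if any(k in name for k in keywords):
            return cluster
    return "unassigned"  # deviations from expected clusters are noted and discussed

print(assign_cluster("Perceived time pressure"))   # workload
print(assign_cluster("Invasion of privacy"))       # privacy
```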
Figure 2 depicts a flowchart documenting the literature search. In sum, 21 studies providing evidence on relationships between technology and work characteristics were included. In addition, seven supplementary studies containing empirical evidence on relationships between technology and specific work demands were identified. These studies are taken into account when deriving the work demands. Next, the descriptive characteristics of the included studies are reported. After that, the evidence on relationships between technologies and work characteristics from the 21 included studies is summarized, before finally the work demands are derived based on the evidence found.

Figure 2 . Flowchart of literature search process.
Characteristics of Studies
Table 2 contains an overview of the characteristics of selected studies. Most of the studies were published between 2015 and 2019 (52%). Nearly half of the studies were conducted in Europe (48%), followed by North America (33%). Most of the studies reported qualitative data collected with methods such as interviews (62%).

Table 2 . Characteristics of the studies.
The studies investigated a variety of technologies: computers (1, 7); various forms of information and communication technologies (ICTs; 2, 3, 17, 18, 21) in a broad sense, including specific examples of work-extending technologies and other tools for digital communication; information technology (IT) systems supporting information dissemination and retrieval within organizations (4, 9); automated systems supporting predominantly physical work procedures (5, 6, 11, 12, 13, 14, 20); robots (15, 19); social media enabling professional networking and participation in organizational and societal practices (8, 16); and more domain-specific technologies, such as clinical technology supporting professional decisions (9) and field technology for labor management (10).
Relationships Between Technology and Work Characteristics
In sum, nine work characteristics were identified and defined distinctively. Table 3 contains the operational definitions of the final work characteristics and the work-related aspects they consist of. The final work characteristics are: Workflow interruptions, workload, manual work, mental work, privacy, autonomy, complexity, role expectations, and opportunities for development.

Table 3 . Overview for final work characteristics and the exemplary work-related aspects assigned to them.
The complete overview of the selected studies and results for the relationships between technology and work characteristics is provided in Table 4 (for quantitative data) and Table 5 (for qualitative data). To further increase comprehensibility, the variables within the tables were labeled according to their function in the respective study (e.g., independent variable, mediating variable, dependent variable; see notes).

Table 4 . Studies providing quantitative evidence for the relationship between technology and work-related aspects.

Table 5 . Studies providing qualitative evidence for the relationship between technology and work-related aspects.
Complexity
There is quantitative evidence on positive relationships between IT system use and complexity reported by two studies (4, 9). On a similar note, qualitative evidence suggests lower situational awareness within automated systems, indicating an increase in complexity (12), and clinical technology being associated with an increase in complexity for nurses (9).
Autonomy
There is mixed quantitative evidence on the relationships between computer work and autonomy (1). The amount of computer work is positively related to autonomy, while technological pacing is negatively related to autonomy. Working within automated systems is negatively (5, 6) or not related (6) to different measures of autonomy. ICT use shows mixed relationships with job decision latitude (3), depending on ICT features that describe negative or positive effects of use. Evidence indicates a positive relationship between social media use and autonomy. Qualitative evidence suggests that ICT use increases autonomy (21) and flexibility (17, 18, 21).
Workload
Quantitative studies indicate strong positive relationships between computer work (1) as well as ICT use (2) and workload. These relationships are not consistent, because certain ICT features differ in their effects on workload: ICT characteristics such as presenteeism and pace of change are positively related to feelings of increasing workload, while a feeling of anonymity is negatively associated with workload. Further evidence indicates positive relationships between time or workload pressure and computer work (7), working in an automated system (5), and social media use (8), thus supporting positive relationships between various technologies and workload. Qualitative studies report similar outcomes: ICT use (18), automated systems (12, 13), and clinical technology (9) are reported to increase the workload.
Workflow Interruptions
Quantitative evidence indicates positive relationships between computer work and increasing levels of interruptions as well as an increasing demand for multitasking (7). Qualitative evidence suggests that ICT use is positively associated with an increased level of interruptions on the one hand and workflow support on the other hand (21). Further qualitative evidence suggests that robots at the workplace have positive effects on workflow support (19), and automated systems seem to increase the level of multitasking required in general (12).
Manual Work
Qualitative evidence suggests a decrease in the amount of physically demanding tasks when working with automated systems (11) and robots (15). In one study, qualitative evidence suggests an increase in manual work for technical jobs where automated systems are used (14).
Mental Work
Quantitative evidence indicates no relationships between work within automated systems and monitoring tasks or problem-solving demands for technical jobs (6). Qualitative evidence, however, suggests positive relationships between work within automated systems and various cognitive tasks and demands, such as problem-solving and monitoring (11, 13), while working with robots increases the amount of new and challenging mental tasks (15).
Privacy
Quantitative evidence indicates that different ICT characteristics show different relationships with invasion of privacy (2). Some features are negatively related to invasion of privacy (anonymity) and others are positively related to it (presenteeism, pace of change). Qualitative evidence suggests that IT systems are not related to the perception of managerial surveillance (9), while social media is positively related to peer-monitoring (16), and field technology is negatively related to employee data control (10).
Role Expectations
Quantitative evidence indicates that ICT use is inconsistently related to role ambiguity depending on specific characteristics of the technology (2). Regarding automated systems, quantitative evidence indicates no relationship between working in an automated system and opportunities for role expansion in the form of an increased perceived responsibility (6). Qualitative evidence suggests that ICT use increases the expectations for availability and connectivity (21), and social media positively affects networking pressure (16). Qualitative evidence suggests that IT systems (9) decrease meaningful job content and role expansion. Qualitative evidence suggests that automated systems vary with regard to enhancing meaningfulness at work, dependent on whether the work tasks are complemented by the system or revolve around maintaining the system (20).
Opportunities for Development
Qualitative evidence suggests that ICT use (12) as well as working with an automated system (17) increase the demands for continuing qualification. Qualitative evidence suggests that opportunities for learning and development are prevalent with clinical technology (9) and absent when working with robots (19). Mixed qualitative evidence regarding automated systems and learning opportunities suggests that the effects depend on the differences in work roles in relation to being supported by the system or supporting the system (20).
A comprehensive summary of the outcomes can be found in Table 6 . This table summarizes the evidence found for the different technologies and their relationships to work characteristics, more specifically to work-related aspects. Important distinctive characteristics, such as sample characteristics, are listed in Tables 4 and 5 .

Table 6 . Overview over identified relationships between technology and work characteristics.
Subsequently, these results are used as a basis for identifying the work demands that arise from the need to adapt to changes in work characteristics.
Relationships Between Technologies and Work Demands
Three sources are considered for the identification of work demands: work demands mentioned in the studies on technology and work characteristics, work demands mentioned in the supplementary studies found during the database search ( N = 7), and work demands analytically derived from the results.
Some studies that examined the effects of technology on work characteristics also reported concrete work demands. Regarding the increasing complexity and the associated mental work, qualitative evidence suggests an increasing demand for cognitive as well as digital skills (11) in automated systems. With regard to IT systems, quantitative evidence indicates positive relationships with computer literacy (9), and analytical skills (4). With regard to the increase in workflow interruptions and the role expectations for constant availability and connectivity, time and attention management strategies are proposed in order to cope with the intrusive features of technology (2). Other strategies mentioned in the studies include self-discipline for disengaging from the ubiquitous availability resulting from mobile communication devices (18, 8) as well as the need for reflecting on individual responsiveness when working overtime due to self-imposed pressure to be available at all times (18, 21). Concerning opportunities for development, the willingness and ability to learn and adapt to technological changes and the associated changes in work (15, 4, 12) is emphasized. Moreover, employability is facilitated by using technological tools for professional networking (16).
The supplementary studies provide evidence on the direct relationships between technologies and work demands without the mediating consideration of work characteristics. This evidence is listed in Table 7 .

Table 7 . Supplementary studies on the relationship between technology and work-related demands.
There is quantitative evidence for positive relationships between the perception of controllability and exploratory use of computers (22), between first-hand experience with robots and readiness for robotization (23, 24), and between perceived usefulness and positive attitudes toward telemedicine technology (25), blockchain technology (26), and IT systems in general (27). Further quantitative evidence indicates mixed effects of perceived ease of use: a positive relationship between perceived ease of use and perceived technological control with regard to telemedicine (25), no relationship between ease of use and attitude regarding blockchain technology (26), and a positive relationship between ease of use and attitude toward using IT systems (27). Quantitative evidence indicates that information processing enabled by technology is positively related to an increasing demand for cognitive skills (e.g., synthesizing and interpreting data) and interpersonal skills (e.g., coordinating and monitoring other people), but not related to an increasing demand for psychomotor skills (e.g., manual production and precise assembly) (28). The level of standardization of work is positively related to interpersonal skills, but not related to cognitive and psychomotor skills (28). A high variety of tasks is positively related to the demand for cognitive and interpersonal skills and not related to psychomotor skills (28).
By analyzing the evidence on relationships between technology and work characteristics, further work demands can be derived. Knowledge about the specific technology at hand may be useful to decrease the perception of complexity as new technologies are introduced. This seems evident when comparing the effects of a simple computer with the effects of work within an automated system: while evidence indicates no relationship between computer work and complexity (6), work within an automated system is suggested to be associated with increasing complexity (12). Moreover, problem-solving skills (13) and cognitive skills such as diagnosing and monitoring (11, 15) gain in importance when employees work within automated systems. Increasing autonomy suggests the need for personal skills for self-organization and self-management, due to the greater flexibility and the associated possibilities for structuring work in many ways, particularly when working with ICTs (18, 21). Workflow interruptions and an increasing workload also increase the importance of communication skills for explicating the boundaries of one's own engagement to colleagues and leaders (17, 18, 21). Furthermore, reflecting on one's professional role at work may be critical due to changes in role expectations; the example of the self-imposed need for availability underlines this argument (21). All this has implications for self-regulatory activities, such as reflection, and could benefit from experimenting with and monitoring one's own strategies for time and attention management.
Implications for CVET: Objectives and Characteristics
The aforementioned studies describe several required behavioral aspects that are considered important due to technology at work. In particular, the need for components related to organizing one's own work is emphasized, namely self-discipline and time and attention management.
The identified need for reflection on one's own professional actions, for experimentation, and for professional networking (for instance by using technological tools) can be seen as part of further professional development, undertaken by oneself or in interaction with others. In addition, the need for demonstrating employability is mentioned. From all these professional and career development aspects, it can be derived that problem-solving skills, self-regulation skills, and communication skills are required, as well as proactive work behavior and coping and reflection strategies.
Various relevant skills, such as psychomotor skills, analytical skills, management skills, and interpersonal skills, are mentioned. In addition, the need for diagnostic and monitoring skills as well as digital skills is emphasized. All these components relate to two explicitly mentioned needs: the ability to learn and computer literacy. The demand for generic and transferable skills is emphasized. As a basis for these skills, knowledge is required, for instance about the technology itself, although this is not explicitly discussed in the studies. In contrast, several components of attitude are explicitly mentioned and considered a requirement for the ability to deal with challenges caused by new technologies at work: firstly, the more generic willingness to learn, adaptability, and perceived behavioral control; and secondly, attitudes directly linked to technology, namely a positive attitude toward and trust in technology (e.g., robots), as well as technological readiness and acceptance.
Next to the opportunity of acquiring the mentioned components of competences at work, CVET can organize training interventions in the form of adequate learning environments to foster them. The ability of employees to carry out, develop, and use the mentioned behavioral aspects, skills, knowledge, and attitudes can be considered the required objectives of CVET and has concrete consequences for the characteristics of the learning environments.
As for the content of the learning environments, derived from the aforementioned requirements, it can be argued that attention should be paid to different categories of learning objectives: acquiring knowledge about technology and learning how to use it, how to manage work and oneself, and how to continue one's own professional development. In addition, the relevance of attitude shows that these components need to be fostered in training and therefore need to be part of the content of the learning environments as well.
In relation to the methods or didactics, only one study explicitly mentioned a suggestion, namely experience-based learning for fostering adaptability (12). In relation to the guidance by trainers or teachers, no suggestions are provided. The same goes for assessment, diagnosis, or monitoring, and for the coherence of the components of the learning environments.
This systematic literature review aimed at identifying effects of new technological developments on work characteristics, identifying associated work demands, and determining their implications for the design of formal CVET learning environments.
Effects of New Technologies on Work Characteristics and Work Demands
Based on a systematic review focusing on empirical evidence, several effects of technology on work characteristics were found, thus answering RQ 1. Evidence suggests that complexity and mental work increase with the ongoing automation and robotization of work, for instance because the automation of procedures “hides” certain processes from employees. The automation of tasks introduces new mental tasks, such as monitoring the machine's activities and solving problems. A decrease in manual work depends on the relation between the job and the technology in use (supporting vs. being supported).
Workload and workflow interruptions increase as a general consequence of the ubiquity of technology, mainly due to a higher level of job speed and the associated time and workload pressure. A higher level of autonomy seems to be associated with a higher workload and more workflow interruptions. This applies in particular to work with ICTs and domain-specific technologies, such as field technology.
Role expectations and opportunities for development depend on the relation between the job and the technology in use (supporting vs. being supported). With regard to role expectations, the need for being available or connected via digital devices and a new division of responsibilities between employees and technology are repeatedly mentioned in the studies. This applies particularly to work with automated systems, robots, and domain-specific technologies such as clinical technology.
With regard to work demands, employees need strategies to deal with higher levels of workload, autonomy, and complexity. Required skill demands contain mental, analytical, cognitive, and self-regulatory demands. In addition, opportunities for role expansion and learning, which do not seem to automatically result from the implementation and use of new technologies, need to be created (pro)actively by the employees. Employees need to take more responsibility with regard to their own development and professional work identity (for instance considering the pressure for constant availability). They need to be able to effectively deal with a high workload and number of interruptions, increasing flexibility, complexity, and autonomy, a demand for constant availability, changes in meaningfulness of tasks, changes in work roles, and the need to create and use learning opportunities. In the light of ongoing changes and challenges, skills to further develop and adapt one's own skills gain in importance. Regarding attitudes, the willingness to learn, adapt and experiment may be a central work demand.
Implications for the Practice of CVET
Various required objectives of CVET can be concluded from the reported results, for instance, developing the ability of employees to carry out the mentioned behaviors, as well as the skills, knowledge, and attitudes that are necessary for those behaviors. These objectives have consequences for the content of CVET learning environments. From the empirical studies on the relationships between technology and work, we derived the need for employees to organize their own work, for instance through time management. Furthermore, many issues relating to one's own professional development and career development are important, to be acquired individually and independently as well as through interaction with others. Ultimately, this refers to the skills of self-initiated learning and development. With regard to fostering helpful attitudes, raising awareness of the relevance of trust or training the social skills that promote trust in the workplace can be included in the content of CVET learning environments. In research on creating trust within organizations, regularly giving and receiving relevant information was shown to be important for creating trust toward co-workers, supervisors, and top management, which in turn fostered the perception of organizational openness and employee involvement ( Thomas et al., 2009 ). In research on creating trust in virtual teams, frequent interaction was shown to be important for developing trust on a cognitive as well as an affective level (e.g., Germain, 2011 ). These research results, however, need to be adapted to the context of technology at work.
Although the reviewed studies provide no information on the guidance of employees, informal guidance through leadership ( Bass and Avolio, 1994 ) as well as formal guidance by trainers and teachers during interventions offer possibilities for fostering the required competences. Attention should be paid not only to acquiring relevant knowledge (digital literacy) but also to the skills needed to apply that knowledge when dealing with technology. Even more challenging may be supporting attitude development (e.g., technology acceptance and openness to change), fostering the transfer of skills, and preparing employees for future developments. Future professional development, including the ability to learn in relation to current and future changes, deserves particular attention. Teachers, trainers, and mentors need to be equipped to foster these competences.
In relation to didactical methods, approaches are needed that do not merely focus on knowledge acquisition but also provide opportunities for skill acquisition and changes in attitude. For example, one study explicitly suggested experience-based learning for fostering the adaptability of employees faced with ongoing technological developments. Other instruction models that could provide a sound basis for learning environments may be found in more flexible approaches, for instance cognitive flexibility theory ( Spiro et al., 2003 ), in which learners find their own learning paths in ill-structured domains. Applying such models, which are often based on constructivist learning theories, in a coherent way may facilitate the development of strategies for self-organization and self-regulation.
Furthermore, the use of technology within learning environments may increase participants' interactions, which are central to, for instance, collaborative and cooperative learning ( Dillenbourg et al., 2009 ). Beyond increasing interaction and enabling cooperation, adequately designed technology in learning environments can also be used to foster the other required competences ( Vosniadou et al., 1996 ; Littlejohn and Margaryan, 2014 ).
Keeping in mind that the coherence of components is an important requirement for the design of learning environments ( Mulder et al., 2015 ), the assessment component needs further attention. There is evidence that the type of assessment affects how learning takes place ( Gulikers et al., 2004 ; Dolmans et al., 2005 ). Assessment can therefore be used to deliberately support and direct learning processes.
Only when all these aspects are considered can CVET interventions effectively and sustainably foster the objectives mentioned above, such as promoting a willingness to change in relation to technologies, the effective use of technology, and personal development in the context of technological change.
Limitations and Implications for Future Research
Regarding the search methods, the use of databases is challenging when investigating technologies ( Huang et al., 2015 ). Technological and technical terms are widespread outside the research in which they are the object of investigation, so searches return a large number of studies that touch on technology but pursue diverse research objectives, which can be difficult to sort. An interesting focus for future research would be the systematic mapping of journals dealing specifically with technology, in order to identify research that could complement the results of the present study and to account for specificities of the domains in which data are collected and the disciplines conducting the research. For instance, domain-specific databases from healthcare or manufacturing might provide additional insights into the effects of technology on work. Another limitation is the absence of innovative new technologies, such as artificial intelligence, blockchain, or the internet of things, as objects of investigation. Broad technological categories such as ICTs and social media have received some attention, especially in relation to questions beyond the scope of this review, but newer technological developments as discussed by Ghobakhloo (2018) are virtually absent from current research. This gap in empirical research needs to be filled. In addition, future research should not miss opportunities to examine the effects of these innovative technologies in detail, for instance by conducting case studies accompanying implementation processes. Research investigating changes over time in the use of technology and its effects is also needed; such research could capture the actual dynamics of change and development as they happen, in order to inform truly effective interventions in practice.
Moreover, a classification of technological characteristics according to their effects would be valuable, as it would enable more in-depth analysis of new technologies and their effects on specific groups of employees and different types of organizations. Such analyses would also allow effects to be broken down by job, hierarchy level, and level of qualification, which could be very important for organizations and employers seeking to adapt their CVET strategy to the demands of specific groups of employees. The present review takes a first step in this direction by identifying work characteristics that are affected by different technologies. In addition, future research could take into account non-English-language research, which might increase insight into, for instance, cultural differences in the use and effects of technology at work.
Regarding theory, some of the relevant theories on technology stem from sociology (e.g., Braverman, 1998 ) or economics ( Autor et al., 2003 ). For instance, the task-based approach ( Autor et al., 2003 ) showed some explanatory value by suggesting that complexity may increase as a consequence of technology and that this effect may depend on job specifics; these propositions are reflected in the empirical evidence reported above. Psychological theories on work characteristics, by contrast, do not conceptualize technology explicitly (e.g., Hackman and Oldham, 1975 ; Karasek, 1979 ). As the present study shows, the large variation in the concepts and variables derived from theory may limit the comparability of results. To foster systematic research, further theory development needs to consider the role of technology more explicitly at multiple levels (individual, team, and organizational) and with regard to the characteristics and demands of work. In this context, the paradigmatic views also deserve attention (e.g., Liker et al., 1999 ; Orlikowski and Scott, 2008 ). These views can be reflected in the subject of research, as exemplified by a study of field technologies and their effects on privacy from a managerial control and power perspective, potentially reflecting the political-interest view ( Tranvik and Bråten, 2017 ). Most studies, however, do not take a clear stand on what exactly they mean when they investigate technology. This complicates interdisciplinary inquiry and integration, as it is not always clear which understanding of technology is in play. We therefore encourage future research to define technology explicitly, for instance by using the framework of McOmber (1999) , as done in the present paper.
In doing so, characteristics of technology could be defined more clearly and distinctively, which in turn would enable the much-needed categorization of technologies proposed earlier.
Although there are theories and models on the use of technology in education (e.g., e-learning, technology-enhanced learning), they do not focus on fostering the competences required to deal with new technologies in a sustainable manner. The same gap needs to be filled for instruction and instructional design models, for instance regarding the promotion of attitude change and professional development. In addition, hardly any attention has yet been paid to the consequences of new technologies at work for CVET ( Harteis, 2017 ). All of this requires more systematic evaluation studies. The identified research gaps need to be filled in order to provide evidence-based support to employees in dealing with new technologies at work in a sustainable manner: taking charge of their own performance and health, and seeking and using opportunities for their own professional and career development.
Data Availability Statement
All datasets generated for this study are included in the article/supplementary material.

Author Contributions
PB and RM jointly developed the article, and both participated, to a greater or lesser extent, in all parts of the study (design, development of the theoretical framework, search, analyses, and writing). Both authors approved this version and take full responsibility for the originality of the research.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
* Amick, B. C., and Celentano, D. D. (1991). Structural determinants of the psychosocial work environment: introducing technology in the work stress framework. Ergonomics 34, 625–646. doi: 10.1080/00140139108967341
Autor, D. H. (2013). The “task approach” to labor markets: an overview. J. Labour Market Res. 46, 185–199. doi: 10.1007/s12651-013-0128-z
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. J. Econ. Perspect. 29, 3–30. doi: 10.1257/jep.29.3.3
Autor, D. H., Levy, F., and Murnane, R. J. (2003). The skill content of recent technological change: an empirical exploration. Q. J. Econ. 118, 1279–1333. doi: 10.1162/003355303322552801
* Ayyagari, R., Grover, V., and Purvis, R. (2011). Technostress: technological antecedents and implications. MIS Q. 35, 831–858. doi: 10.2307/41409963
Baartman, L. K. J., and de Bruijn, E. (2011). Integrating knowledge, skills and attitudes: conceptualising learning processes towards vocational competence. Educ. Res. Rev. 6, 125–134. doi: 10.1016/j.edurev.2011.03.001
Bass, B. M., and Avolio, B. J. (1994). Transformational leadership and organizational culture. Int. J. Public Administr. 17, 541–554. doi: 10.1080/01900699408524907
Bell, D. (1976). The coming of the post-industrial society. Educ. Forum 40, 574–579. doi: 10.1080/00131727609336501
Blauner, R. (1967). Alienation and Freedom: The Factory Worker and His Industry. Chicago, IL: University of Chicago Press.
* Bordi, L., Okkonen, J., Mäkiniemi, J.-P., and Heikkilä-Tammi, K. (2018). Communication in the digital work environment: implications for wellbeing at work. NJWLS 8, 29–48. doi: 10.18291/njwls.v8iS3.105275
Braverman, H. (1998). Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. New York, NY: Monthly Review Press.
Brynjolfsson, E., and McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: Norton.
Burns, T., and Stalker, G. M. (1994). The Management of Innovation. Oxford: Oxford University Press.
* Calitz, A. P., Poisat, P., and Cullen, M. (2017). The future African workplace: the use of collaborative robots in manufacturing. SA J. Hum. Resour. Manag. 15, 1–11. doi: 10.4102/sajhrm.v15i0.901
Card, D., and DiNardo, J. E. (2002). Skill-biased technological change and rising wage inequality: some problems and puzzles. J. Labor Econ. 20, 733–783. doi: 10.1086/342055
Cascio, W. F., and Montealegre, R. (2016). How technology is changing work and organizations. Annu. Rev. Organ. Psychol. Organ. Behav. 3, 349–375. doi: 10.1146/annurev-orgpsych-041015-062352
** Chau, P. Y.K., and Hu, P. J.-H. (2002). Investigating healthcare professionals' decisions to accept telemedicine technology: an empirical test of competing theories. Inform. Manage. 39, 297–311. doi: 10.1016/S0378-7206(01)00098-2
* Chen, W., and McDonald, S. (2015). Do networked workers have more control? The implications of teamwork, telework, ICTs, and social capital for job decision latitude. Am. Behav. Sci. 59, 492–507. doi: 10.1177/0002764214556808
Cherns, A. (1976). The principles of sociotechnical design. Hum. Relations 29, 783–792. doi: 10.1177/001872677602900806
* Chesley, N. (2014). Information and communication technology use, work intensification and employee strain and distress. Work Employ. Soc. 28, 589–610. doi: 10.1177/0950017013500112
** Chow, S. K., Chin, W.-Y., Lee, H.-Y., Leung, H.-C., and Tang, F.-H. (2012). Nurses' perceptions and attitudes towards computerisation in a private hospital. J. Clin. Nurs. 21, 1685–1696. doi: 10.1111/j.1365-2702.2011.03905.x
Dillenbourg, P., Järvela, S., and Fischer, F. (2009). “The evolution of research on computer-supported collaborative learning: from design to orchestration,” in Technology-Enhanced Learning , eds N. Balacheff, S. Ludvigsen, T. de Jong, A. Lazonder, and S. Barnes (Dordrecht: Springer Netherlands), 3–19.
Dolmans, D. H., De Grave, W., Wolfhagen, I. H., and van der Vleuten, C. P. (2005). Problem-based learning: future challenges for educational practice and research. Med. Educ. 39, 732–741. doi: 10.1111/j.1365-2929.2005.02205.x
* Dvash, A., and Mannheim, B. (2001). Technological coupling, job characteristics and operators' well-being as moderated by desirability of control. Behav. Inform. Technol. 20, 225–236. doi: 10.1080/01449290116930
Ellström, P.-E. (1997). The many meanings of occupational competence and qualification. J. Euro Industr. Train. 21, 266–273. doi: 10.1108/03090599710171567
* Findlay, P., Lindsay, C., McQuarrie, J., Bennie, M., Corcoran, E. D., and van der Meer, R. (2017). Employer choice and job quality. Work Occupat. 44, 113–136. doi: 10.1177/0730888416678038
Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. New York, NY: Basic Books.
Francis, A. (1986). New Technology at Work. Oxford: Clarendon Press.
Frey, C. B., and Osborne, M. A. (2017). The future of employment: how susceptible are jobs to computerisation? Technol. Forecast. Soc. Change 114, 254–280. doi: 10.1016/j.techfore.2016.08.019
Gattiker, U. E., Gutek, B. A., and Berger, D. E. (1988). Office technology and employee attitudes. Soc. Sci. Comp. Rev. 6, 327–340. doi: 10.1177/089443938800600301
* Gekara, V. O., and Thanh Nguyen, V.-X. (2018). New technologies and the transformation of work and skills: a study of computerisation and automation of Australian container terminals. N. Technol. Work Employ. 33, 219–233. doi: 10.1111/ntwe.12118
Germain, M.-L. (2011). Developing trust in virtual teams. Perf. Improvement Q. 24, 29–54. doi: 10.1002/piq.20119
** Ghani, J. A., and Deshpande, S. P. (1994). Task characteristics and the experience of optimal flow in human—computer interaction. J. Psychol. 128, 381–391. doi: 10.1080/00223980.1994.9712742
Ghobakhloo, M. (2018). The future of manufacturing industry: a strategic roadmap toward industry 4.0. J. Manuf. Tech. Manage. 29, 910–936. doi: 10.1108/JMTM-02-2018-0057
* Gough, R., Ballardie, R., and Brewer, P. (2014). New technology and nurses. Labour Industry 24, 9–25. doi: 10.1080/10301763.2013.877118
Gulikers, J. T. M., Bastiaens, T. J., and Kirschner, P. A. (2004). A five-dimensional framework for authentic assessment. ETR&D 52, 67–86. doi: 10.1007/BF02504676
Hackman, J. R., and Oldham, G. R. (1975). Development of the job diagnostic survey. J. Appl. Psychol. 60, 159–170. doi: 10.1037/h0076546
Harteis, C. (2017). “Machines, change and work: an educational view on the digitalization of work,” in The Impact Digitalization in the Workplace: An Educational View , ed C. Harteis (New York: Springer), 1–10.
Heisig, U. (2009). “The deskilling and upskilling debate,” in International Handbook of Education for the Changing World of Work: Bridging Academic and Vocational Learning , eds R. Maclean and D. Wilson (Dordrecht: Springer Netherlands), 1639–1651.
* Holden, R. J., Rivera-Rodriguez, A. J., Faye, H., Scanlon, M. C., and Karsh, B.-T. (2013). Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology. Cognition Technol. Work 15, 283–296. doi: 10.1007/s10111-012-0229-4
Huang, Y., Schuehle, J., Porter, A. L., and Youtie, J. (2015). A systematic method to create search strategies for emerging technologies based on the Web of Science: illustrated for ‘Big Data’. Scientometrics 105, 2005–2022. doi: 10.1007/s11192-015-1638-y
* James, K. L., Barlow, D., Bithell, A., Hiom, S., Lord, S., Oakley, P., et al. (2013). The impact of automation on pharmacy staff experience of workplace stressors. Int. J. Pharm. Pract. 21, 105–116. doi: 10.1111/j.2042-7174.2012.00231.x
** Kamble, S., Gunasekaran, A., and Arha, H. (2019). Understanding the Blockchain technology adoption in supply chains-Indian context. Int. J. Product. Res. 57, 2009–2033. doi: 10.1080/00207543.2018.1518610
Karasek, R., Brisson, C., Kawakami, N., Houtman, I., Bongers, P., and Amick, B. (1998). The Job Content Questionnaire (JCQ): an instrument for internationally comparative assessments of psychosocial job characteristics. J. Occupat. Health Psychol. 3, 322–355. doi: 10.1037/1076-8998.3.4.322
Karasek, R. A. (1979). Job demands, job decision latitude, and mental strain: implications for job redesign. Administr. Sci. Q. 24, 285–308. doi: 10.2307/2392498
* Körner, U., Müller-Thur, K., Lunau, T., Dragano, N., Angerer, P., and Buchner, A. (2019). Perceived stress in human-machine interaction in modern manufacturing environments-results of a qualitative interview study. Stress Health 35, 187–199. doi: 10.1002/smi.2853
* Kraan, K. O., Dhondt, S., Houtman, I. L.D., Batenburg, R. S., Kompier, M. A.J., and Taris, T. W. (2014). Computers and types of control in relation to work stress and learning. Behav. Inform. Technol. 33, 1013–1026. doi: 10.1080/0144929X.2014.916351
Liker, J. K., Haddad, C. J., and Karlin, J. (1999). Perspectives on technology and work organization. Annu. Rev. Sociol. 25, 575–596. doi: 10.1146/annurev.soc.25.1.575
Littlejohn, A., and Margaryan, A., (eds.). (2014). Technology-Enhanced Professional Learning: Processes, Practices and Tools. New York, NY: Routledge.
* Marler, J. H., and Liang, X. (2012). Information technology change, work complexity and service jobs: a contingent perspective. N. Technol. Work Employ. 27, 133–146. doi: 10.1111/j.1468-005X.2012.00280.x
McClure, P. K. (2018). ‘You're Fired,’ says the robot: the rise of automation in the workplace, technophobes, and fears of unemployment. Soc. Sci. Comput. Rev. 36, 139–156. doi: 10.1177/0894439317698637
McOmber, J. B. (1999). Technological autonomy and three definitions of technology. J. Commun. 49, 137–153. doi: 10.1111/j.1460-2466.1999.tb02809.x
Mulder, R. H., and Baumann, S. (2005). “Competencies of in-company trainers,” in Bridging Individual, Organisational, and Cultural Aspects of Professional Learning , eds H. Gruber, C. Harteis, R. H. Mulder, and M. Rehrl (Regensburg: S. Roderer Verlag), 105–109.
Mulder, R. H., Messmann, G., and König, C. (2015). Vocational education and training: researching the relationship between school and work. Eur. J. Educ. 50, 497–512. doi: 10.1111/ejed.12147
* Ninaus, K., Diehl, S., Terlutter, R., Chan, K., and Huang, A. (2015). Benefits and stressors - perceived effects of ICT use on employee health and work stress: an exploratory study from Austria and Hong Kong. Int. J. Qual. Stud. Health Well Being 10:28838. doi: 10.3402/qhw.v10.28838
Orlikowski, W. J., and Scott, S. V. (2008). Sociomateriality: challenging the separation of technology, work and organization. Acad. Manag. Ann. 2, 433–474. doi: 10.1080/19416520802211644
Parker, S. K., van den Broeck, A., and Holman, D. (2017). Work design influences: a synthesis of multilevel factors that affect the design of jobs. Acad. Manage. Ann. 11, 267–308. doi: 10.5465/annals.2014.0054
Polanyi, M. (1966). The Tacit Dimension. New York, NY: Doubleday.
Roblek, V., Meško, M., and Krapež, A. (2016). A complex view of industry 4.0. SAGE Open 6:215824401665398. doi: 10.1177/2158244016653987
Rodgers, M., Sowden, A., Petticrew, M., Arai, L., Roberts, H., Britten, N., and Popay, J. (2009). Testing methodological guidance on the conduct of narrative synthesis in systematic reviews. Evaluation 15, 49–73. doi: 10.1177/1356389008097871
Schwab, K. (2017). The Fourth Industrial Revolution. London: Portfolio Penguin.
Shepard, J. M. (1977). Technology, alienation, and job satisfaction. Ann. Rev. Sociol. 3, 1–21.
Sonntag, K. (1992). Personalentwicklung in Organisationen [Human Resource Development in Companies. Psychological Basics, Methods, and Strategies]. Göttingen: Verlag für Psychologie.
** Spell, C. S. (2001). Organizational technologies and human resource management. Hum. Relat. 54, 193–213. doi: 10.1177/0018726701542003
Spenner, K. I. (1990). Skill: meanings, methods, and measures. Work Occupat. 17, 399–421. doi: 10.1177/0730888490017004002
Spiro, R. J., Collins, B. P., Thota, J. J., and Feltovich, P. J. (2003). Cognitive flexibility theory: hypermedia for complex learning, adaptive knowledge application, and experience acceleration. Educ. Technol. 43, 5–10. Available online at: https://www.jstor.org/stable/44429454 (accessed April 28, 2020).
Thomas, G. F., Zolin, R., and Hartman, J. L. (2009). The central role of communication in developing trust and its effect on employee involvement. J. Bus. Commun. 46, 287–310. doi: 10.1177/0021943609333522
* Towers, I., Duxbury, L., Higgins, C., and Thomas, J. (2006). Time thieves and space invaders: technology, work and the organization. J. Org. Change Manage. 19, 593–618. doi: 10.1108/09534810610686076
* Tranvik, T., and Bråten, M. (2017). The visible employee - technological governance and control of the mobile workforce. Socio Econ. Stud. 28, 319–337. doi: 10.5771/0935-9915-2017-3-319
** Turja, T., and Oksanen, A. (2019). Robot acceptance at work: a multilevel analysis based on 27 EU countries. Int. J. Soc. Robot. 11, 679–689. doi: 10.1007/s12369-019-00526-x
** Turja, T., Taipale, S., Kaakinen, M., and Oksanen, A. (2019). Care workers' readiness for robotization: identifying psychological and socio-demographic determinants. Int. J. Soc. Robot. 4:67. doi: 10.1007/s12369-019-00544-9
* van Zoonen, W., and Rice, R. E. (2017). Paradoxical implications of personal social media use for work. N. Technol. Work Employ. 32, 228–246. doi: 10.1111/ntwe.12098
Vosniadou, S., Corte, E. de, Glaser, R., and Mandl, H., (eds.). (1996). International Perspectives on the Design of Technology-Supported Learning Environments. Mahwah, NJ: Erlbaum.
* Walden, J. A. (2016). Integrating social media into the workplace: a study of shifting technology use repertoires. J. Broadcast. Electr. Med. 60, 347–363. doi: 10.1080/08838151.2016.1164163
Weick, K. E. (1990). “Technology as equivoque,” in Technology and Organizations , eds P. S. Goodman and L. Sproull (San Francisco, CA: Jossey-Bass), 1–44.
Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York, NY: Basic Books.
* Zubrycki, I., and Granosik, G. (2016). Understanding therapists' needs and attitudes towards robotic support. The roboterapia project. Int. J. Soc. Robot. 8, 553–563. doi: 10.1007/s12369-016-0372-9
* Studies included in the systematic review.
** Supplementary studies.
Keywords: technology, work characteristics, continuous vocational education and training, automation, work demands, systematic review
Citation: Beer P and Mulder RH (2020) The Effects of Technological Developments on Work and Their Implications for Continuous Vocational Education and Training: A Systematic Review. Front. Psychol. 11:918. doi: 10.3389/fpsyg.2020.00918
Received: 14 February 2020; Accepted: 14 April 2020; Published: 08 May 2020.
Reviewed by:
Copyright © 2020 Beer and Mulder. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Patrick Beer, patrick.beer@ur.de
This article is part of the Research Topic
Continuous Vocational Education and Training in a Changing World - Requirements, Practices and Implementation Examples
How has technology changed - and changed us - in the past 20 years?

Madeleine Hillyer

Open Access
Peer-reviewed
Research Article
Rise of the war machines: Charting the evolution of military technologies from the Neolithic to the Industrial Revolution
Authors: Peter Turchin, Daniel Hoyer, Andrey Korotayev, Nikolay Kradin, Sergey Nefedov, Gary Feinman, Jill Levine, Jenny Reddish, Enrico Cioni

* E-mail: [email protected]

Affiliations: Complexity Science Hub, Vienna, Austria; University of Connecticut, Storrs, Connecticut, United States of America; University of Oxford, Oxford, England; Evolution Institute, Tampa, FL, United States of America; George Brown College, Toronto, Canada; HSE University, Moscow, Russia; Institute of Oriental Studies, Russian Academy of Sciences, Moscow, Russia; Institute of History, Archaeology and Ethnology, Far East Branch of the Russian Academy of Sciences, Vladivostok, Russia; Institute of History and Archeology of the Ural Branch of the Russian Academy of Sciences, Ural Federal University, Yekaterinburg, Russia; Field Museum of Natural History, Chicago, IL, United States of America; University of Washington, Seattle, Washington, United States of America
- Published: October 20, 2021
- https://doi.org/10.1371/journal.pone.0258161
What have been the causes and consequences of technological evolution in world history? In particular, what propels innovation and diffusion of military technologies, details of which are comparatively well preserved and which are often seen as drivers of broad socio-cultural processes? Here we analyze the evolution of key military technologies in a sample of pre-industrial societies world-wide covering almost 10,000 years of history using Seshat: Global History Databank . We empirically test previously speculative theories that proposed world population size, connectivity between geographical areas of innovation and adoption, and critical enabling technological advances, such as iron metallurgy and horse riding, as central drivers of military technological evolution. We find that all of these factors are strong predictors of change in military technology, whereas state-level factors such as polity population, territorial size, or governance sophistication play no major role. We discuss how our approach can be extended to explore technological change more generally, and how our results carry important ramifications for understanding major drivers of evolution of social complexity.
Citation: Turchin P, Hoyer D, Korotayev A, Kradin N, Nefedov S, Feinman G, et al. (2021) Rise of the war machines: Charting the evolution of military technologies from the Neolithic to the Industrial Revolution. PLoS ONE 16(10): e0258161. https://doi.org/10.1371/journal.pone.0258161
Editor: Olivier Morin, Max Planck Institute for the Science of Human History, GERMANY
Received: May 28, 2021; Accepted: August 18, 2021; Published: October 20, 2021
Copyright: © 2021 Turchin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files. Additionally, all data are available alongside a preprint publication of this article, accessible at: https://osf.io/mkhde/
Funding: This work was supported by: a John Templeton Foundation grant to the Evolution Institute, entitled "Axial-Age Religions and the Z-Curve of Human Egalitarianism" (HW, PF, PT); a Tricoastal Foundation grant to the Evolution Institute, entitled "The Deep Roots of the Modern World: The Cultural Evolution of Economic Growth and Political Stability" (PT); an Economic and Social Research Council Large Grant to the University of Oxford, entitled "Ritual, Community, and Conflict" (REF RES-060-25-0085) (HW); a grant from the European Union Horizon 2020 research and innovation programme (grant agreement No 644055 [ALIGNED, www.aligned-project.eu ]) (HW, PF); a European Research Council Advanced Grant to the University of Oxford, entitled “Ritual Modes: Divergent modes of ritual, social cohesion, prosociality, and conflict" (HW, PF); a grant from the Institute of Economics and Peace to develop a Historical Peace Index (HW, PF, PT, DH); and the program “Complexity Science,” which is supported by the Austrian Research Promotion Agency FFG under grant #873927 (PT).
Competing interests: The authors have declared that no competing interests exist.
Introduction
From simple sharpened stone projectiles in the Paleolithic to the weapons of mass destruction in the modern world, what have been the main factors driving the evolution of military technology? Many have argued that the evolution of military technologies is just one aspect of a much broader pattern of technological evolution driven by increasing size and interconnectedness among human societies [ 1 – 3 ]. Several cultural evolutionary theories, conversely, highlight military technologies as a special case, arguing that steep improvements in both offensive and defensive capabilities of technologies along with accompanying tactical and organizational innovations resulted in “Military Revolutions” (note the plural), which in turn had major ramifications on the rise and, of particular concern here, the spread of state formations globally [ 4 – 8 ] and the evolution of religion and other cultural phenomena [ 9 , 10 ]. But the evolutionary mechanisms underlying general technological innovation, adoption, and transmission (especially in pre-industrial societies) are not well understood. Moreover, available theories have drawn on evidence that is limited both in geographical scope and temporal depth and deployed in ways that are subject to selection bias. Here we explore a variety of factors that previous scholarship suggests may have played a role in the evolution of military technologies by systematically quantifying the effects of those factors for thousands of years of world history.
Earlier efforts to quantify levels of technological complexity in eastern and western ends of Eurasia [ 11 , 12 ] have been criticized for being unduly subjective [ 13 ], especially when it comes to measuring rates of innovation in military technology, and are obviously limited in spatial coverage. Here we propose an alternative methodology for quantifying technological evolution and expand the geographic scope from just these two broad regions to 35 “Natural Geographic Areas” across all ten major world regions, using Seshat: Global History Databank, a major resource for studying patterns of sociocultural evolution in world history (see Materials and Methods below).
This article has two related goals. The first is to establish broad spatio-temporal patterns in the evolution of military technologies in pre-industrial societies. By technological evolution we mean here the dynamics of uptake (and possible loss) of technologies used by societies at significant scale (rather than simply whether the technology was known at all), regardless of how that society came to acquire that technology (indigenous innovation or adoption from another culture). For those interested in the study of technological evolution in general, focusing specifically on military technologies in pre-industrial societies has many practical benefits. Warfare was one of the most intensive activities of human societies, leaving abundant traces in the archaeological and historical record.
The second goal is to explore why these important military technologies developed or were adopted in the places, at the times, and as part of the technological packages as we observe in the historical and archaeological record. There have been several theoretical conjectures (discussed below) about the main causal drivers of technological innovation that we test. As our approach will show, the pattern of military technological evolution shows great variation in time and space, with different regions assuming a leading role in innovation at different moments in time.
Delineating the possible causes and observed consequences of changes in levels of military technologies will have far-reaching implications for understanding the evolution of technology broadly. To encourage further progress towards that ultimate goal, we present here a detailed methodology for testing theories about technological change in human history. This paper serves as a crucial step along this path.
Theoretical background
Here, we review several competing theoretical perspectives on the evolution of technologies offered in the past. Technological change is one of the fundamental drivers in social and cultural evolution and of long-term economic growth [ 14 – 17 ]. Many have pointed to technology’s ramifying effects on warfare, state formation, and the development of information processing systems [ 1 , 18 – 22 ]. Military technologies and their widespread application in particular have been shown to foment rises in social complexity and to spur related ideological developments [ 23 – 27 ]. But what processes are responsible for the evolution–the development, spread, and cumulative adoption–of military technology globally and across time?
Following this link between military technologies and socio-cultural development, we might expect to find a positive feedback between technological innovation and population growth at the global scale [ 2 , 28 – 32 ]; see also [ 33 – 36 ]. Indeed, a well-known and much discussed theory proposed by economist Michael Kremer and expanded by others suggests exactly this causal link [ 2 ]. According to Kremer, “high population spurs technological change because it increases the number of potential inventors.” Kremer notes that “this implication flows naturally from the nonrivalry of technology… The cost of inventing a new technology is independent of the number of people who use it. Thus, holding constant the share of resources devoted to research, an increase in population leads to an increase in technological change. Thus, in a larger population there will be proportionally more people lucky or smart enough to come up with new ideas” [ 2 : 685]. This innovation, in turn, can spur further population growth, creating a positive feedback loop between technological and population growth; for instance, the proliferation of iron axes facilitated the clearing of agricultural land from forests [ 37 ], while the iron ploughshare improved the quality of plowing, allowing for increased productivity [ 38 ] and, hence, larger populations to develop further innovations. Note that what is described by Kremer is virtually identical to what David Christian calls “collective learning” [ 39 ].
In its simplest form, the Taagepera-Kremer model is written dT/dt = cNT, where c is a constant of proportionality. This equation states that the technological growth rate at a given moment in time ( dT / dt ) is proportional to the global population, N (the larger the population, the larger the number of potential inventors), and to the current technological level, T . The second factor is included in the model to reflect the assumption that the wider the existing technological base, the greater the number of inventions that can be made on its basis. This model explicitly refers to the global population level, rather than regional or localized populations of specific societies. To account both for the effect of global population size and the existing stock of technology, the Taagepera-Kremer model assumes that the rate of technology growth is proportional to the product of these two quantities. Taagepera and Kremer did not test this hypothesis empirically in a direct way. Empirical tests of this hypothesis performed by other researchers, however, have found support for it [ 34 , 40 ]. Note that Kremer observed that these new technologies would, in turn, likely generate population growth, suggesting a positive feedback between technological innovation and population. Here, however, we are concerned only with the effect of population on the evolution of technology, rather than the reverse.
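As a toy illustration of this growth law, dT/dt = cNT can be integrated numerically. The function name, the constant c, and the population series below are our own inventions for the sketch, not values from the paper:

```python
def simulate_kremer(T0, population, c=1e-6, dt=1.0):
    """Integrate dT/dt = c * N * T with forward Euler.

    T0:         initial technology level
    population: sequence of population sizes N, one per time step
    c:          proportionality constant (arbitrary here)
    dt:         time step
    Returns the trajectory of T, including the initial value.
    """
    T = T0
    trajectory = [T]
    for N in population:
        T += c * N * T * dt  # growth rate proportional to both N and T
        trajectory.append(T)
    return trajectory

# A tenfold larger population produces much faster technological growth.
small = simulate_kremer(1.0, [1e4] * 100)
large = simulate_kremer(1.0, [1e5] * 100)
```

The larger population's technology level pulls away from the smaller one's at an accelerating rate, which is the qualitative behavior the feedback argument predicts.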
A limitation of such population-focused theories, however, is the assumption that world population can be treated as having been an integrated information-exchanging system for many centuries, if not millennia. To address this problem, world-systems analysts have advanced an additional cluster of hypotheses. Chase-Dunn and Hall, for example, distinguish four types of networks of world-system communications: bulk good networks, political-military networks, prestige good networks, and information networks (IN) [ 41 ]. Korotayev et al. [ 42 – 44 ] explicitly focus on INs as technological innovation diffusion networks, proposing that a systematic diffusion of technological innovations within a certain set of societies is a sufficient condition to consider them a “world-system”. Thus, an as-yet unexplored synthesis of these ideas is that, while population may be one factor in the pace and location of technological evolution, membership in such an information diffusion network may play an additional role in facilitating the exchange of ideas and propensity for wide-spread adoption of new technologies.
One important advantage of the population-driven model advocated by Kremer, Taagepera, and others is that it explicitly includes the effect of the existing stock of technologies on technological growth rate. The greater the existing stock, the greater number of new technologies the model expects to be developed in the next time period. Although this is only one, relatively straightforward, way to model the impact of the existing technology stock, there is substantial historical evidence to make it a strong contender to be tested empirically. For example, the improvement of metallurgy and metal processing led not only to the emergence of new tools such as iron ploughs, but also to the proliferation of various types of weapons—starting with knives, daggers, swords, battle axes, up to the appearance of rifles and artillery. Nevertheless, the model assumes that the means and knowledge to adapt and improve upon existing technologies are readily accessible as well as the organizational capacity to deploy these technologies at large scales, which are open questions requiring further scrutiny.
Further, once a military technology had proven advantageous in inter-state competition, there arose an existential pressure on nearby societies to adopt that technology as well, so as not to be left behind. This sort of mimetic diffusion has been observed with respect to key technologies such as horse-mounted warfare that spread initially among nomadic confederations and nearby agrarian societies located along the central Eurasian Steppe [ 45 – 48 ]. Indeed, the domestication of the horse and its use in the civil and military sphere–including both the material components of horse-mounted archery as well as the tactical and organizational means to wield these weapons–appear to be of particular importance in the evolution of technologies and social complexity during the pre-industrial era, improving transportation, agriculture, and military capacities alike [ 47 ]. Further, the creation of new and more lethal weapons in one society could force people in their “strike zone” [ 27 ] to invent more sophisticated defenses while also often adopting the offensive technology themselves, prompting further technological advances. Following the invention of increasingly powerful, armor-piercing projectiles from bows and crossbows, for instance, we tend to see the means of protection improved as well to include chain mail, scaled armor, and plate armor.
Similarly, some work suggests that location is a critical factor in this process, as societies on the periphery, or semi-periphery [ 41 ], of larger, more complex imperial states will tend to be hotbeds of innovation, as they have both the incentive to increase (typically military) capability to compete with regional powers as well as the requisite flexibility to explore more radical innovation by being removed from the institutionalized practices and path-dependencies experienced by the larger societies “locked in” to the tools and habits that won them their hegemony [ 41 , 49 , 50 ].
Overall, previous theoretical work suggests that the evolution of military technologies depends on the total number of potential innovators involved in this process, the connectedness of distinct centers of innovation as well as of spheres of inter-state competition, and on the already existing stock of technologies, especially such fundamental developments as metal processing and transportation. In Materials and Methods below we discuss how we operationalize an empirical test of these hypotheses.
Materials and methods
A general approach to quantifying the evolution of pre-industrial societies.
This article follows the general philosophy and procedures that have been developed by the Seshat : Global History Databank project [ 51 – 54 ]. The Seshat Databank stores large volumes of historical and archaeological data on a growing number of variables for past polities going back to the late Neolithic. Supplementary Information ( SI ) contains a detailed description of the core methods and workflows underpinning the Seshat project, including how we incorporate differing levels of uncertainty and disagreement and data quality procedures involving experts and research assistants. We make the data used for the analyses presented here available online through a DataBrowser site ( seshatdatabank.info/databrowser ) and we encourage scholars to make use of and to augment our dataset.
The principal unit for data collection and analysis is a polity, defined as any independent political unit ranging from autonomous villages (local communities) through simple and complex chiefdoms to states and empires, regardless of degree of centralization [ 51 , 52 ]. Our sample of historic polities was developed using a stratified sample of the globe using the concept of ‘Natural Geographic Area’ (NGA). An NGA is a fixed spatial location of roughly 100 x 100 km delimited by naturally occurring geographical features (for example, a river basin, a coastal plain, a mountain valley, or an island). All polities that occupied the NGA, or part thereof, at a century mark (e.g. 200 CE), are included in our sample. This strategy avoids oversampling (redundantly repeating information across time points) while still capturing meaningful changes in the variables of interest. Although this granularity is relatively coarse, it is suitable for uncovering macro-level patterns in societal dynamics and exploring pathways of cultural evolution [ 24 , 53 ]. The data used in the analyses presented here come from 373 historic polities covering 35 NGAs.
Aggregation of military technology data into “Warfare Characteristics”
We quantified the sophistication of war-making capacity by encoding 46 binary variables indicating the presence or absence of different military technologies by a polity. These variables were aggregated into six general categories, termed Warfare Characteristics (WCs): Metals used in producing weapons and armor, the variety of Projectiles and hand-held Weapons, the sophistication of Armor, the use of transport Animals, and different kinds of Defensive Fortifications. Finally these WCs were aggregated into a composite, temporal MilTech variable for each NGA. See SI for details of the aggregation.
Of the six WCs, two (Metal and Animal) have a much broader area of application than specifically warfare. In some analyses below we investigate another measure (CoreMil) that focuses more narrowly on the sophistication of core military technologies by aggregating only the Projectiles, Weapon, Armor, and Defense WCs. As described below, we explore the impact of the spread of Iron and Cavalry in particular. Because Iron and Cavalry are correlated with the Metals and Animals WCs, analyzing CoreMil allows us to disentangle any potentially spurious effects of these WCs on overall military technology.
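The aggregation pipeline can be sketched in code. The grouping below is a toy illustration with made-up variable names and a simple-proportion aggregation; the actual scheme (46 variables, scaling details) is specified in the SI:

```python
# Hypothetical mapping of binary technology variables to Warfare
# Characteristics (WCs); the real Seshat coding uses 46 variables.
WC_GROUPS = {
    "Metal":       ["bronze_weapons", "iron_weapons", "steel_weapons"],
    "Projectiles": ["javelin", "self_bow", "crossbow"],
    "Weapon":      ["sword", "spear", "battle_axe"],
    "Armor":       ["shield", "chain_mail", "plate_armor"],
    "Animal":      ["horse", "camel", "elephant"],
    "Defense":     ["ditch", "stone_wall", "moat"],
}

def aggregate_miltech(polity):
    """Aggregate binary presence/absence codings into per-WC scores, a
    composite MilTech value, and the narrower CoreMil measure."""
    wc_scores = {
        wc: sum(polity.get(v, 0) for v in variables) / len(variables)
        for wc, variables in WC_GROUPS.items()
    }
    wc_scores["MilTech"] = sum(wc_scores.values()) / len(WC_GROUPS)
    # CoreMil excludes Metal and Animal, which have broad non-military uses.
    core = ["Projectiles", "Weapon", "Armor", "Defense"]
    wc_scores["CoreMil"] = sum(wc_scores[wc] for wc in core) / len(core)
    return wc_scores
```

A polity coded with, say, iron weapons, swords, and shields would score on the Metal, Weapon, and Armor WCs but zero on the others, and its composite values would reflect that mix.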
Hypotheses to be tested: Defining predictor variables
Our review in Theoretical Background suggested that the evolution of military technologies may be a function of the total number of potential innovators, the connectedness of innovation/adoption centers, and/or the existing stocks of technology. We measure these various potential explanatory factors in the following ways:
Following Taagepera and Kremer, we proxy the number of potential innovators with the world population (WorldPop, defined as the base-10 logarithm of the global population at time t ). We take data on the dynamics of world population during the Holocene from [ 55 ].
Connectedness is a harder variable to quantify. Here we build on the concept of IN used by Chase-Dunn and Hall and other world-systems theorists [ 34 , 43 ], who define the extent of any particular IN as the zone within which spatially and culturally distinct regions exchange information, so that technological innovations made in one society diffuse relatively rapidly (on the time scale of centuries) to all other societies within the system, more rapidly than to societies that may be close (spatially and culturally) but fall outside of the IN. As an example, the contacts between Western and Eastern Eurasia (mediated via Central Asia) in the third and especially the second millennia BCE led to the spread of multiple technological innovations between the western and eastern parts of Eurasia: wheat, cattle, horses, bronze metallurgy, and wheeled chariots, among others [ 44 , 56 ]. Here, we constructed a predictor variable proxying the Centrality of each region within the evolving (eventually global) IN by calculating the distance between each of our NGAs and the system of Silk Routes that connected East and West Eurasia for the majority of the period under study [ 57 – 60 ]. Our measure of Centrality is the inverse of the distance between an NGA and the nearest node on the Silk Route (see Fig 1 and SI for details).
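The Centrality measure can be sketched as follows. The node coordinates, the haversine distance metric, and the floor on very small distances are our assumptions for illustration; the SI gives the exact construction:

```python
import math

# Illustrative (lat, lon) positions of two Silk Route nodes.
SILK_ROUTE_NODES = [(40.0, 70.0), (35.0, 105.0)]

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in km."""
    R = 6371.0  # Earth radius, km
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def centrality(nga, nodes=SILK_ROUTE_NODES):
    """Inverse of the distance to the nearest Silk Route node.
    Distances under 1 km are floored to avoid division by zero for an
    NGA sitting on the route itself."""
    nearest = min(haversine_km(nga, node) for node in nodes)
    return 1.0 / max(nearest, 1.0)
```

An NGA near a node scores high; one far from every node (say, in western Europe relative to these two nodes) scores low.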
https://doi.org/10.1371/journal.pone.0258161.g001
https://doi.org/10.1371/journal.pone.0258161.t001
In addition to Centrality within the IN, we capture two additional kinds of connectivity, namely the possible influence of spatial proximity (Space) as well as cultural affinity (Phylogeny). These terms not only allow us to control for possible autocorrelations and phylogenetic effects in our response variable (see Dynamic Regression Analysis below), but can also carry important information about processes influencing the evolution of military technologies. In particular, Space captures the process by which technological innovations may travel between geographically proximate societies–separately from the possible mediating influence of an expanding IN described above–measuring the likelihood that neighboring regions will share similar levels of military technology. Phylogeny focuses on the cultural similarity between polities, regardless of their spatial proximity, proxied by the relatedness of their dominant languages.
Another possible factor in the evolution of technology identified in the theoretical review is the effect of the current technology stock. We measure this in two ways. First, we model MilTech as a temporal autoregressive process, in which past values of MilTech affect its future values (for the details of the statistical model, see the next section). Second, we focus on the potential effects of two specific fundamental technologies: horse-riding and iron smelting.
According to the Cavalry Revolution theory, the invention of effective horse-riding in the Pontic-Caspian steppes, combined with powerful recurved bows and iron-tipped arrows, triggered a process of military evolution that spread from the steppes south to the belt of farming societies over several centuries throughout the first millennia BCE and CE [ 8 , 47 , 61 ]. Specifically, the threat of nomadic warriors armed with this advanced (for the period) military technology spurred the development of counter-measures designed to mitigate the cavalry advantage, while also producing an incentive to adopt horse-riding and effective accompanying combat tactics in areas further and further away from the location of their initial invention within the Steppe. The history of the military use of the horse went through several stages: the use of the chariot, the development of riding, the formation of light auxiliary cavalry, the development of nomadic riding, the appearance of the hard saddle, armored cataphracts, stirrups and, finally, heavy cavalry—a major branch of troops across Afro-Eurasian societies between c. 550 and 1400 CE [ 62 ]. As a result, effective horse-riding had far-reaching consequences for the evolution of military technologies, and specifically armor, projectiles such as crossbows, and fortifications. We use the data from [ 63 ] to encode the Cavalry variable (see Fig 2 ).
Data from [ 63 ].
https://doi.org/10.1371/journal.pone.0258161.g002
The effect of Iron is similarly widespread. Multiple authors [ 64 – 66 ] have suggested that the availability of iron had a major impact on the evolution of technologies, as this strong and malleable material served as an input for a host of important technologies, military and otherwise, throughout the period under investigation here. We use data from [ 67 ] to encode the Iron variable (see Fig 3 ).
Data from [ 67 ].
https://doi.org/10.1371/journal.pone.0258161.g003
Note that these two variables, Cavalry and Iron, are highly correlated with each other (compare Figs 2 and 3 ) and it may be difficult to estimate their effects separately (the problem of collinearity). To address this potential issue we created a synthetic variable, IronCav, that combines the two effects (by adding Cavalry and Iron together). IronCav, thus, takes the maximum value for societies with both mounted warfare and iron weapons, intermediate value for societies having one characteristic and not the other, and minimum for societies with neither characteristic. We explored with dynamic regressions whether IronCav turns out to be a better predictor than either of its constituent variables, reported below.
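The collinearity check and the construction of IronCav can be sketched as follows, on made-up illustrative codings (the 0.7 cutoff below is our own, not a threshold from the paper):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative binary codings for eight polity-century observations.
iron    = [0, 0, 1, 1, 1, 1, 0, 1]
cavalry = [0, 0, 1, 1, 1, 0, 0, 1]

r = pearson_r(iron, cavalry)
if abs(r) > 0.7:  # strong collinearity: combine rather than compete
    # IronCav = 2 with both traits, 1 with exactly one, 0 with neither.
    iron_cav = [i + c for i, c in zip(iron, cavalry)]
```

Summing the two binaries preserves an ordering (neither < one < both) while sidestepping the problem of estimating two nearly redundant coefficients.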
In addition to the theoretically-motivated predictors–WorldPop, Centrality, Iron, and Cavalry, along with our autocorrelation terms Space and Phylogeny–we explore other potential polity- and NGA-specific predictors to proxy interesting subsidiary hypotheses, as explained below. These measures are taken from previously published analyses using Seshat data [ 68 ] and enable us to reduce the potential “hidden variable” (or omitted variable bias) problem, which arises when analysis implicates X as a causal factor for Y , while in reality the true cause could be Z , with which X is closely correlated [ 69 , 70 ]. The additional predictor variables include the following:
- Social scale (Scale) represents the first principal component (PC) of the Seshat variables polity population , polity territory , population of the largest settlement , and the number of hierarchical levels . The hypothesis here is that larger and more complexly organized and productive societies (in both population, territory) will have more resources to both generate new inventions and to implement them, or adopt them from elsewhere, especially the costly ones like sophisticated siege engines or elaborate fortifications. This measure also reflects having larger shares of the population not mainly engaged in primary production, proxied by the population of the largest settlement [ 71 , 72 ]. Further, more stratified and administratively complex societies–measured by the number of levels in administrative, military, and settlement hierarchies (combined here as one measure of hierarchical levels–see SI )–are hypothesized to be better equipped to implement useful technologies along with developing or adopting effective tactical and organizational models at scale. Thus, by this logic, increases in military technology should occur preferentially in larger scale societies. Previous analysis [ 53 ] reveals that these four dimensions are highly correlated within the Seshat sample and so represent an effective cross-cultural measure of societal scale to explore this hypothesis.
- SocSoph (“social sophistication”) represents the first PC of the Seshat variables governance , infrastructure , information systems , and money . This measure likewise derives from previous analysis of the dimensions of social complexity [ 53 ], capturing the important non-scale institutional and informational aspects. The hypothesis here is that societies with more sophisticated, pre-existing mechanisms for the exchange and implementation of ideas will generate and/or adopt innovations into widespread use at a faster pace.
- Agricultural productivity (Agri) is the estimated yield of different regions, measured as tonnes per hectare of the major carbohydrate source consumed in each of our NGAs. These data are taken from the analyses in [ 73 ]. The term is included here to test the possibility that productivity affects the amount of resources that are available for technological advances.
Social scale and productivity, thus, give us two complementary views of the resource base that may drive the evolution of technology. Agri tracks the underlying material resource base in a given geographical region (our NGAs) needed to support development, including technological evolution. Social scale, on the other hand, is a measure of the territorial and population size of specific historical polities. Larger polities can gather resources from a large territory, including the human energy from large populations, even where agricultural productivity is low. Separately SocSoph represents the sophistication of infrastructure and exchange media that could conceivably facilitate the flow of ideas from invention (whether within or outside of the society) to widespread adoption.
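Extracting a first principal component, as done for Scale and SocSoph, can be sketched as follows. The data, the standardization, and the power-iteration approach are illustrative assumptions, not the paper's exact procedure:

```python
def first_pc_scores(rows):
    """rows: list of equal-length observation vectors.
    Standardizes each column, finds the first principal component by
    power iteration on the covariance matrix, and returns the
    per-observation PC1 scores."""
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    sds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5
           for j in range(p)]
    Z = [[(r[j] - means[j]) / sds[j] for j in range(p)] for r in rows]
    cov = [[sum(Z[i][a] * Z[i][b] for i in range(n)) / n
            for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(200):  # power iteration toward the dominant eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(z[j] * v[j] for j in range(p)) for z in Z]

# Toy rows: four scale variables (e.g. log polity population, territory,
# largest settlement, hierarchical levels) for three observations.
data = [[4.0, 3.0, 3.5, 2], [6.0, 5.5, 5.0, 4], [5.0, 4.0, 4.2, 3]]
scale = first_pc_scores(data)
```

Because the four columns are strongly correlated, PC1 acts as a single "scale" axis: observations that are larger on every variable get higher scores.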
Statistical analysis
In addition to standard correlational statistical analyses of our response and predictor variables, we used a general non-linear dynamic regression model to investigate factors affecting the evolution of military technology. This dynamic regression analysis distinguishes correlation from causation by estimating the influence potential causal factors at a previous time have on the response variable at a later time (known generally as Wiener-Granger causality [ 74 , 75 ]). While an improvement over ‘static’ correlations, where causal direction remains ambiguous, this method is, nevertheless, insufficient for making absolute claims of causality. Further scrutiny will be required to provide additional support for the provisional causal interpretations suggested below.
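The dynamic regression model described term-by-term below can be written, in our reconstruction, as follows; the coefficient names b_τ and f, and the normalization of the two weighted-average terms, are notational assumptions on our part:

```latex
Y_{i,t} = a + \sum_{\tau \geq 1} b_{\tau}\, Y_{i,t-\tau}
        + c\, \frac{\sum_{j \neq i} e^{-\delta_{i,j}/d}\, Y_{j,t-1}}
                   {\sum_{j \neq i} e^{-\delta_{i,j}/d}}
        + f\, \frac{\sum_{j \neq i} w_{i,j}\, Y_{j,t-1}}
                   {\sum_{j \neq i} w_{i,j}}
        + \sum_{k} g_{k}\, X_{k,i,t-1}
        + \varepsilon_{i,t}
```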
Here Y i , t is the response variable (MilTech) for location (NGA) i at time t . We construct a spatio-temporal series for Seshat response and predictor variables by following polities (or quasi-polities, such as archaeologically attested cultures) that occupied a specific NGA at each century mark during the sampled period. Thus, the time step in the analysis is 100 years.
On the right-hand side, a is the regression constant (intercept). The next term captures the influences of past history (“autoregressive terms”), with τ = 1, 2, … indexing time-lagged values of Y (as time is measured in centuries, Y i , t – 1 refers to the value of MilTech 100 years before t ).
The third term represents potential effects resulting from geographic diffusion using our Space term. We used a negative-exponential form to relate the distance between location i and location j , δ i , j , to the influence of j on i . Unlike a linear kernel, the negative-exponential does not become negative at very large δ i , j , instead approaching 0 smoothly. The third term, thus, is a weighted average of the response variable values in the vicinity of location i at the previous time step, with weights falling off to 0 as distance from i increases. Parameter d measures how steeply the influence falls with distance, and parameter c is a regression coefficient measuring the importance of geographic diffusion. For an overview of potential effects resulting from geographic diffusion, see [ 69 , 76 ]; for a description of how we avoided the problem of endogeneity, see [ 70 ].
The fourth term detects autocorrelations due to any shared cultural history at location i with other regions j using our Phylogeny variable. Here w represents the weight applied to the phylogenetic (linguistic) distance between locations (set to 1 if locations i and j share the same language, 0.5 if they are in the same linguistic genus, and 0.25 if they are in the same linguistic family). Linguistic genera and families were taken from The World Atlas of Language Structures and Glottolog [ 77 ].
The next term on the right-hand side represents the effects of the main predictor variables X k [ 70 ]; g k are regression coefficients. These variables (described in the previous section) are of primary interest because they enable us to test various theories about the evolution of MilTech against each other. Finally, ε i , t is the error term. We also include quadratic versions of these terms at a time lag (not shown) in order to explore non-linear responses to response and predictor factors.
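The two autocorrelation weighting schemes just described can be sketched as follows. The distance scale d and the language codings are illustrative assumptions; the 1/0.5/0.25 phylogenetic weights are from the text:

```python
import math

def spatial_weight(delta_km, d=1000.0):
    """Negative-exponential distance kernel: always positive, decaying
    smoothly toward 0, unlike a linear kernel, which would turn negative
    at large distances. d sets how steeply influence falls off."""
    return math.exp(-delta_km / d)

def phylo_weight(lang_i, lang_j):
    """Phylogenetic (linguistic) weight: 1 for the same language, 0.5 for
    the same genus, 0.25 for the same family, else 0. Each argument is a
    (language, genus, family) triple."""
    if lang_i[0] == lang_j[0]:
        return 1.0
    if lang_i[1] == lang_j[1]:
        return 0.5
    if lang_i[2] == lang_j[2]:
        return 0.25
    return 0.0
```

For example, two NGAs 500 km apart contribute more weight to each other's diffusion term than NGAs 5,000 km apart, and Latin- and French-dominated polities (same genus) get weight 0.5.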
Model selection (choosing which terms to include in the regression model) was accomplished by exhaustive search: regressing the response variable on all possible linear combinations of predictor variables. The degree of fit was quantified by the Akaike Information Criterion (AIC). Standard diagnostic tests were performed for the best-fitting models [ 70 ].
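Exhaustive model selection by AIC can be sketched as follows. The OLS fit is a stand-in for the paper's dynamic regression, and the toy data and AIC bookkeeping (counting the intercept and error variance in k) are our assumptions:

```python
import math
from itertools import combinations

def fit_rss(y, X_cols):
    """OLS via the normal equations; returns the residual sum of squares.
    X_cols: list of predictor columns (an intercept is added)."""
    n = len(y)
    X = [[1.0] + [col[i] for col in X_cols] for i in range(n)]
    p = len(X[0])
    # Augmented system (X'X | X'y), solved by Gauss-Jordan elimination.
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
         + [sum(X[i][a] * y[i] for i in range(n))] for a in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(p):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * z for x, z in zip(A[r], A[col])]
    beta = [A[a][-1] / A[a][a] for a in range(p)]
    return sum((y[i] - sum(b * x for b, x in zip(beta, X[i]))) ** 2
               for i in range(n))

def exhaustive_aic(y, predictors):
    """Regress y on every subset of predictors; return the subset with
    the lowest AIC = n*ln(RSS/n) + 2k."""
    n = len(y)
    best_aic, best_subset = None, None
    for r in range(len(predictors) + 1):
        for subset in combinations(sorted(predictors), r):
            rss = fit_rss(y, [predictors[name] for name in subset])
            k = r + 2  # slope coefficients + intercept + error variance
            aic = n * math.log(max(rss, 1e-12) / n) + 2 * k
            if best_aic is None or aic < best_aic:
                best_aic, best_subset = aic, subset
    return best_subset

# Toy data: y depends on x1 only; the search should pick {"x1"}.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
y = [2.0 * v + 1.0 for v in x1]
chosen = exhaustive_aic(y, {"x1": x1, "x2": x2})
```

The AIC penalty of 2 per parameter is what lets the search drop the uninformative predictor even though adding it can never increase the raw fit.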
Missing values, estimate uncertainty, and expert disagreement in the predictors were dealt with by multiple imputation [ 78 , 79 ]. The response variable, MilTech, however, was not imputed, as that can result in biased estimates [ 76 ]. For details of the multiple imputation procedure see SI . Because diagnostic tests indicated that the distribution of residuals is not Gaussian, we used a nonparametric bootstrap to estimate the P -values associated with various regression terms (see the SI for details). Additional robustness checks are similarly detailed in the SI .
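A nonparametric bootstrap for a regression P-value can be sketched as follows. The simple-slope statistic, the case-resampling scheme, and the replicate count are illustrative assumptions, not the paper's exact procedure:

```python
import random

def slope(xs, ys):
    """OLS slope of ys on xs; 0.0 for a degenerate (constant-x) resample."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    den = sum((x - mx) ** 2 for x in xs)
    if den == 0.0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / den

def bootstrap_p(xs, ys, reps=2000, seed=0):
    """Two-sided bootstrap P-value for the slope: resample cases with
    replacement, refit, and double the fraction of replicate slopes that
    fall on the other side of zero. No normality assumption is made
    about the residuals."""
    rng = random.Random(seed)
    est = slope(xs, ys)
    n = len(xs)
    crossings = 0
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        if slope([xs[i] for i in idx], [ys[i] for i in idx]) * est <= 0.0:
            crossings += 1
    return min(1.0, 2.0 * crossings / reps)

# A strong, noise-free relationship yields an essentially zero P-value.
p_strong = bootstrap_p(list(range(10)), [2 * x for x in range(10)])
```

Because the resampling distribution is built empirically, skewed or heavy-tailed residuals do not invalidate the P-value the way they would for a normal-theory test.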
Spatio-temporal patterns
We first examined the frequency distributions of the variables of interest and the cross-correlations between WCs, overall MilTech, which combines all WCs, as well as calendar Time and the various aspects of social complexity and productivity. As expected, we find that all WCs are closely correlated with each other and with the overall MilTech variables. Plotting MilTech as a function of time for each NGA ( Fig 4 ), we observe that there is a general upward trend, as expected. However, there are also periods when some technologies are lost, for a time. Most importantly, there is a great amount of variation between different geographic regions in the timing of MilTech increases. Interestingly, all WCs are more strongly correlated with the two dimensions of social complexity specified here–Scale and SocSoph–rather than with Time, suggesting that key drivers of MilTech evolution go beyond merely the additive nature of technology through the ‘march of time’. The nature of any causality between complexity and MilTech is discussed below.
MilTech trajectories in Seshat NGAs, divided by major world region: (A) Europe and Africa; (B) Western Asia; (C) East and SE Asia; (D) Americas and Oceania.
https://doi.org/10.1371/journal.pone.0258161.g004
We next focus on "technology leaders": NGAs that at some point in their history had the highest value of MilTech available at the time. Fig 5 shows them, roughly in the order in which they achieved world leadership (note that this order is also affected by how far back in the past we have data). The hot spot of technological development, through either innovation or adoption, roughly coincides with the "Imperial Belt" of the Old World, located just south of the Great Eurasian Steppe (and in places impinging into it, as in Sogdiana), as can be seen from the locations of the 'leader' NGAs mapped in Fig 5.
https://doi.org/10.1371/journal.pone.0258161.g005
This same territory, of course, also corresponds roughly to the path of the overland silk routes used in our analyses (Fig 1). We return to this pattern below. Overall, most of the leading regions increase their MilTech levels roughly together and at a fairly regular, almost linear pace (after the 4th millennium BCE), with latecomers accelerating at various points to merge with the leaders. This is seen clearly in Fig 5 in the example of Sogdiana, but it is a general pattern discernible in the regional trajectories (Fig 4).
We explored the "similarity" between NGAs by calculating, at each time-step, the number of MilTech variables each NGA shared with other NGAs. As explained in the SI (see Similarity Analysis and S1 Fig in S1 File), we trace how NGAs join the expanding Eurasian (eventually global) IN by noting the time when they achieve a similarity index of 10, that is, when they share 10 or more specific MilTech variables with one or more other NGAs. As the histograms in S1 Fig in S1 File show, the first NGAs to reach this threshold appear between 3000 and 2500 BCE. As time progresses, more and more NGAs cross this threshold. Fig 6 maps the expansion of this IN, initially restricted to central Eurasia but growing eventually into a global network, by color-coding the date at which each NGA crosses the threshold. Thus, the similarity analysis reveals that different regions not only saw rapid increases in their overall level of MilTech, but also came increasingly to share specific technologies. A plausible interpretation of this pattern is that, as the IN expands, each new region accelerates its development of MilTech to reach the level achieved by the network leaders, until, eventually, regions in diverse areas around the globe adopt similar 'MilTech packages'. Future work is needed to disentangle occasions where these latecomer regions adopt or adapt existing technologies from cases where 'leader' societies simply take over others, imposing their technologies (along with a host of other socio-political and cultural traits) onto these new regions.
NGAs are binned into 6 categories according to the earliest time they share 10 or more MilTech variables with another NGA, displayed by color: dark red = 2500 BCE or before; orange = between 2500 and 1500 BCE; yellow = between 1500 and 500 BCE; green = between 500 BCE and 500 CE; blue = after 500 CE; grey = did not display any similarities during our sample period. Unfilled red circles indicate Silk Route Nodes as in Fig 3 .
https://doi.org/10.1371/journal.pone.0258161.g006
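The similarity analysis described above reduces to counting shared variables per time-step and recording when each region first crosses the threshold. A toy sketch (data and names are hypothetical, not Seshat codings):

```python
# Illustrative sketch of the similarity analysis (toy data; names are
# hypothetical). presence[nga][t] is the set of MilTech variables coded
# present in that NGA at century t.

def first_joining_time(presence, threshold=10):
    """Earliest time each NGA shares >= threshold variables with another NGA."""
    joined = {}
    times = sorted({t for series in presence.values() for t in series})
    for t in times:
        for a, ta in presence.items():
            if a in joined or t not in ta:
                continue
            for b, tb in presence.items():
                if b != a and t in tb and len(ta[t] & tb[t]) >= threshold:
                    joined[a] = t     # NGA a joins the information network at t
                    break
    return joined
```

Binning the resulting dates (as in Fig 6) then shows the network expanding outward from its earliest members.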
Dynamic regression results.
The best fitting model from our general dynamic regression analysis is shown in Table 1 .
https://doi.org/10.1371/journal.pone.0258161.t002
Our analysis identifies the following variables as having the strongest causal influence on MilTech:
- Autocatalytic effects (the value of MilTech in the previous time step).
- Global population size (WorldPop).
- Connection to an expanding (eventually global) Information Network (Centrality).
- Spread of Iron+Cavalry (IronCav), revealing both the importance of prior technology stock on continued technological evolution as well as the incentive that these advances placed on societies within connected information and competition spheres to adopt or develop additional technologies in response.
- Cultural similarity (Phylogeny), revealing that polities linguistically similar to polities with high MilTech are more likely to have high MilTech themselves. This effect could be a result of either common inheritance or easier diffusion of technology between culturally similar polities, or, most likely, both.
- Productivity of the resource base (Agri).
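The variable list above implies a lagged ("dynamic") regression structure: MilTech at century t is regressed on its own previous value plus the exogenous terms. The sketch below assembles such design rows from a toy panel; the panel layout and variable values are hypothetical, not the authors' implementation.

```python
# A hedged sketch of the dynamic regression structure implied by the list of
# predictors above (hypothetical panel layout, not the authors' code).

def build_design(panel, lag=1):
    """panel: {nga: {century: {variable: value}}}, with 'MilTech' among the
    variables. Returns (y, X) where each X row is
    [1, MilTech_{t-lag}, WorldPop, Centrality, IronCav, Phylogeny, Agri]
    for every century where the lagged value exists."""
    exog = ["WorldPop", "Centrality", "IronCav", "Phylogeny", "Agri"]
    y, X = [], []
    for nga, series in panel.items():
        for t, row in sorted(series.items()):
            prev = series.get(t - lag)
            if prev is None:          # no lagged observation: skip this century
                continue
            y.append(row["MilTech"])
            X.append([1.0, prev["MilTech"]] + [row[v] for v in exog])
    return y, X
```

Fitting these rows by least squares gives the autocatalytic (lag) coefficient alongside the exogenous effects.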
Investigation of the effects of Cavalry and Iron as separate predictor variables indicates that each, on its own, has a statistically significant effect of similar strength on the evolution of MilTech. The synthetic variable IronCav, however, is a better predictor than either of its constituents. For this reason, the results here are reported for IronCav only.
We estimated how location with respect to the system of Silk Routes affects the evolution of MilTech in each region. Our measure of Centrality (inverse distance to the nearest Silk Route node) finds strong empirical support, although we also ran analyses using alternative methods of proxying this type of spatial effect (see the SI for details). Overall, our best model predicts the level of military technology with a coefficient of determination (R²) of 0.96. While some of this high predictability results from strong temporal autocorrelation, rerunning the regression with all autocorrelation terms omitted nevertheless yields an R² of 0.72. Thus, more than 70% of the variation in MilTech is explained by WorldPop, Centrality, IronCav, and Agri.
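The Centrality proxy described above can be computed directly as the inverse of the great-circle distance to the nearest Silk Route node. A sketch, with placeholder coordinates (assumed for illustration, not Seshat values):

```python
# Sketch of the Centrality proxy: inverse great-circle distance (km) from an
# NGA to its nearest Silk Route node. Coordinates are placeholders.
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    la1, lo1 = (math.radians(v) for v in p)
    la2, lo2 = (math.radians(v) for v in q)
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def centrality(nga_coord, route_nodes, floor_km=1.0):
    """Inverse distance to the nearest node; floor_km avoids division by zero
    for an NGA sitting directly on a node."""
    d = min(haversine_km(nga_coord, node) for node in route_nodes)
    return 1.0 / max(d, floor_km)
```

An NGA on a node gets the maximum score, while distant regions (e.g. in the Americas) get values near zero, matching the spatial gradient discussed in the text.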
We performed several supplemental analyses and robustness checks to detect any biases in our results. Several of these checks are discussed below and detailed in the SI .
Table 2 shows a comparison between the best-fitting model and other models with ΔAIC ≤ 2. Strong effects are detected in these alternative models for all terms in the best model, including Agri, which, though its standardized coefficient is the smallest, remains statistically significant at the conventional P < 0.05 level. However, additional robustness tests using multiple datasets built by randomly sampling from among the different WCs comprising the MilTech variable indicate that Agri is not always supported (see the SI for details). Neither measure of social complexity, Scale or SocSoph, appears to have a consistent significant positive effect on MilTech evolution (they show up in several of the alternative models, but with small t-values, and with negative signs for Scale).
https://doi.org/10.1371/journal.pone.0258161.t003
Further checks indicate that these results are robust to the inclusion of additional spatial and temporal autocorrelation effects: Neither geographic diffusion (Space) nor higher temporal lags ( τ = 2 centuries or greater) are significant. In addition, as we discussed in Materials and Methods , because our measure of MilTech includes the Metals and Animal WCs, which might confound the effect of IronCav due to a potential circularity, we re-ran the analysis using CoreMil, our measure of military technologies that does not include these WCs. This analysis yields essentially identical results (see SI ), thus suggesting that the effect of IronCav is not spurious.
What is remarkable is that neither Scale nor SocSoph variables, which characterize polities, have any detectable effect on the level of MilTech. Overall, these results suggest that MilTech evolves almost entirely as an exogenous variable: it is little, if at all, affected by such polity characteristics as the population, territory size, the sophistication of information systems or administrative institutions, or provision of infrastructure and public goods.
As noted, our dynamic regression approach cannot offer a definitive demonstration that the factors in the best model are the central causal forces driving the evolution of military technologies. These variables may simply be highly correlated with the 'true' causal factors not included in our analyses, or the causal link may be indirect: both these factors and MilTech could be driven separately by additional factors whose effects were felt at different time-scales. We explored such possible 'hidden variable' bias as far as possible through supplemental analyses of several variables for which we had reliable information. As our findings remain robust to these various tests, we provisionally conclude that our results offer an appealing and parsimonious causal explanation for the long-run, global evolution of military technologies. Future research will need to scrutinize whether these results hold up to the inclusion of additional factors and to exploration with alternative statistical methods or mechanistic models, perhaps using agent-based modelling [24, 27, 80].
Our goals were to investigate the global spatio-temporal evolution of key pre-industrial military technologies to illuminate the major forces driving the evolution of these critical tools, whether by innovation, adoption and adaption, or a combination of these processes. Further, our approach to testing theoretically-informed hypotheses against a broad and diverse set of empirical historical data taken from Seshat : Global History Databank serves as an example of how more general patterns of technological evolution can be explored in future research, as well as more fine-grained analyses seeking to distinguish these different processes or explore the pathways taken by individual regions or societies. Here we surveyed various causal hypotheses, which together suggested that the evolution of military technology would be a function of some combination of global population size, connectedness to information exchange networks, involvement in inter-state competition networks, and prior histories of technological innovation and adoption (especially major breakthroughs such as iron metallurgy and horse riding), along with, perhaps, various properties of polities and their resource base. We set out to test these theories empirically against the evidence from world history, using a stratified sample of polities in Seshat , dating from the Neolithic to the Industrial Revolutions.
While we found some empirical support for each of these hypotheses, no one theory alone accounted for the observed dynamics of military technology as well as a combination of the factors suggested by these various proposals. Our results not only explain why these theories have found support in previous studies, but also why a general understanding of the evolution of military technology has proven elusive. Our robust historical sample and extensive dynamic analyses allowed us to compare and combine elements of the different theories proposed as critical drivers of military technology. Specifically, we found that global population size is a strong predictor of subsequent levels of MilTech. While this result supports the Kremer-Taagepera model, it does not rule out other possible causal explanations based on additional variables which, while correlated with global population, could turn out to be better predictors of MilTech. One such addition in future work would be to distinguish societies by their general affluence or social mobility [81], rather than treating populations as indistinguishable, since such factors may play a causal role in driving both population increases and technological evolution.
Our analysis found that the stock of prior technological innovations played an important role in the observed levels of military technology, not only through the autoregressive terms (again supporting the Kremer-Taagepera model) but, critically, because the combination of iron metallurgy and horse riding had a particularly strong effect on the innovation and adoption of military technologies in the periods under investigation here.
Importantly, we found that location within the central Eurasian IN was also a strong predictor of our response measure, in line with the insights of World Systems and cultural evolutionary theorists. This result supports the impact of being connected to other major centers of development and innovation, as well as being incorporated into spheres of inter-state competition.
However, it is noteworthy that geographic proximity between NGAs itself (proxied here by our Space measure) does not appear to be a strong predictor of the evolution of military technologies, contrary to what might be expected from certain cultural evolution theories and ideas of mimetic diffusion. This underscores the significance of the diffusion of iron and cavalry in particular, which have a strong effect on subsequent levels of MilTech. This finding supports previous work highlighting the unique role of the nomadic pastoralists of the Eurasian Steppe, early adopters of mounted archery tactics, in driving not only technological innovation among nearby agrarian populations, but also the expansion of social complexity and, relatedly, technological evolution throughout Afro-Eurasia [8, 24, 26, 27, 45–48, 82]. The development of iron-smelting, supplying the input material for so many valuable weapons, appears to play a similarly crucial role [64–67]. These findings suggest that iron and cavalry were particularly critical technologies, conferring an advantage important enough to foment widespread adoption and to spark 'arms races' among competitors involving a host of other, related technologies, as discussed above, which would explain the observed patterns.
This interpretation gains further support from our similarity analysis. Our main results indicate that the overall level of MilTech, measured with our aggregate MilTech score, generally rose over time (with some losses, noted above), with more and more regions coming to exhibit the same level of MilTech. Our similarity analysis unpacks this finding, demonstrating that regions not only increasingly exhibited the same overall MilTech score, but also came to share the same 'packages' of specific military technologies. Further, as expected, the regions with the highest combined similarity scores followed the same pattern as seen in the Centrality measure: the NGAs closest to a Silk Route node both appeared earliest as sharing MilTech variables with other NGAs and continued to display similar MilTech packages with the other NGAs that joined the IN over time, resulting in their larger combined scores (see Fig 4).
An interesting and somewhat surprising finding is that the properties of polities, including such seemingly important characteristics as their scale (population and territory) and sophistication (e.g., information systems), have no significant impact on the evolution of the military technologies wielded by the polity (with the partial exception of Phylogeny, discussed below). We expected both scale (Scale) and non-scale (SocSoph) aspects of social complexity to play a significant role in these processes, due to an increased availability of populations and resources to put towards technological development, as well as because developments in organizational and informational-exchange capacities could facilitate the adoption and adaptation of existing technologies from elsewhere. However, these terms display no significant effect on subsequent levels of MilTech, suggesting that the level of technology characterizing a particular polity (whether invented or adopted) depends not on the polity's characteristics, but rather on the characteristics of the inter-polity informational and competitive interaction spheres to which it belongs, along with the other factors identified above. The Arabian Peninsula, for example, despite being relatively low-scale in the early first millennium CE, adopted much of the 'military package' seen in other parts of Eurasia around 300 CE (see Fig 4 and the Similarity Analysis in the SI), as it became increasingly incorporated into Silk Route trade connections via the Persian and Roman imperial systems, before becoming its own seat of imperial power with the rise of Islam a few centuries later.
The only polity-related term included in the best regression model is Phylogeny, which can reflect the operation of one (or both) of two processes: inheritance of technological sophistication from a "common ancestor" (for example, Italy and France inheriting technologies from the Roman Empire), or the easier spread of innovations between culturally similar polities (such as between Romance-speaking Italy and France, or between Arabic-speaking Egypt and Mesopotamia). The latter process likely reflects the greater likelihood that an innovation developed in one polity will be more compatible with the existing institutional, social, cultural, and economic systems of a culturally similar polity than with those of a more distant one [83]. One major component of this effect might be that military technologies require a specific tactical and organizational apparatus to wield effectively. Cultural similarity could then not only facilitate the exchange of information about a new, useful technology across societies, but also ease the spread of knowledge of, and the ability to acquire, these more ephemeral aspects accompanying the material components of the new technology. Alternatively, linguistically similar polities might have engaged in more frequent and intense competition, which could produce a similar impact (likely correlated strongly with the IronCav and Centrality effects) on overall MilTech. This is less plausible than the other processes, however, as interstate competition has been shown to be most intense between culturally dissimilar polities [45, 47, 61]. Additional study is needed to clarify the possible causal forces driving this effect and to explore the role each of these potential processes plays in the overall development and spread of these key military technologies, and in technological evolution more generally.
Nevertheless, the finding that Phylogeny is a significant predictor of MilTech further speaks to the importance of connection-mediated information exchange, over and above closeness in space.
Lastly, we find that agricultural productivity, measured here as per-hectare tons of major carbohydrates, displays a significant effect on subsequent levels of MilTech. While we had no strong theoretical motivation for this idea, we included the term in our analyses to test the possibility that an increased resource base would affect technological development. Its inclusion in our best model suggests that a certain level of agricultural productivity may have been a necessary component in generating and adopting new technologies. Perhaps more efficient production was required to support sufficiently large populations not primarily employed in agriculture, or an expanded resource base and extractive capacity provided the raw materials and intermediate goods used in constructing key military technologies. As noted, however, this factor displays a much weaker effect than the others and is least robust to supplemental analyses. Thus, this result must remain tentative. Exploring more deeply the impact of agricultural productivity on the evolution of technology stands out as an important avenue for future research.
While these findings constitute an important first step towards identifying some of the major long-term drivers of technological evolution in general, and in the domain of military capacity in particular, and provide broad support for previously somewhat speculative theories, there is still much to be done to build on this line of research. First, it would be desirable to extend the geographical coverage beyond the stratified global sample used in the present study, particularly as regards the phylogenetic connections in the spread of existing technologies and the different processes that could produce this interesting effect. Second, it would be important to explore the downstream consequences of changes in military technology for other aspects of human life, including levels of peacefulness (or, alternatively, mortality rates due to violence), equality (e.g. distributions of wealth, rights of citizenry, levels of exploitation and oppression based on class or ethnicity), and public health (e.g. longevity, infant mortality, nutrition, infection rates). Third, our goal was to offer a preliminary exploration of some key causal forces proposed to drive the evolution of military technology, ignoring differences between the initial innovation of new technologies and their subsequent adoption by other societies. Future work is needed to pinpoint the source(s) of invention and to distinguish advances made by innovation from advances made by later spread, so as to assess whether the same or different factors drive each of these processes. Fourth, additional potential drivers of technological innovation should be explored, over and above the effects of population size, connectivity, and existing stocks of critical innovations, along with further analysis of the potential causal role played by rising agricultural productivity. These explorations would include factors affecting resource scarcity (e.g. due to drought, pestilence, and other natural disasters), more direct measures of intergroup competition (e.g. levels and intensities of external warfare, cultural distance between competitors, and other exogenous factors), and the identification of various regional INs that might (partially) overlap in time and space.
Finally, it is important for future studies to 'narrow in' on the details of some of the more macro-level processes suggested by the present study. In particular, it will be useful to explore the possible impact of regional-level factors along with a broader range of technological innovations within the polity (e.g. in the energy, construction, transportation, and information sectors). Seshat data is relatively coarse, resolved here to 100-year intervals. While this granularity is well suited to exploring broad, global-level dynamics over thousands of years, it likely misses some nuances and outlying patterns. Future efforts can hopefully generate more fine-grained temporal data allowing for meso- and even micro-level scrutiny of the pathways to technological evolution taken by different societies in various times and places. Alongside this, we require more qualitative investigation into the details of specific items, as well as into the less material, tactical and managerial aspects of technological development, in a host of specific historical cases.
Beyond the insights gained from these analyses of the development of military technologies over the very long term, we hope that the approach presented here, which tests likely causal theories against a wide body of empirical data gathered by the Seshat project, will provide a roadmap for these important future studies, allowing scholars to delve deeper not only into the critical 'Military Revolutions' throughout history, but into the evolution of technology generally.
Supporting information
S1 File. Supporting information text and figs.
https://doi.org/10.1371/journal.pone.0258161.s001
S1 Data. Compressed file containing data files and analysis scripts.
https://doi.org/10.1371/journal.pone.0258161.s002
Acknowledgments
The authors are grateful to Sergey Nefedov, who reviewed data and provided helpful comments. We also thank Christopher Chase-Dunn, Peter Grimes, Gene Anderson, and the SetPol project for their constructive critique of earlier versions of the manuscript, as well as Jennifer Larson and Alan Covey for helpful comments on previous drafts. We gratefully acknowledge the contributions of our team of research assistants, post-doctoral researchers, consultants, and experts. Additionally, we have received invaluable assistance from our collaborators. Please see the Seshat website ( www.seshatdatabank.info ) for a comprehensive list of private donors, partners, experts, and consultants and their respective areas of expertise.
- 1. McNeill WH. The pursuit of power. Chicago, IL: University of Chicago Press; 1982.
- 3. McNeill JR, McNeill WH. The Human Web: A Bird’s-Eye View of World History. New York: W. W. Norton; 2003.
- 4. McNeill WH. The Rise of the West. New York: New American Library; 1963.
- 5. Duffy M, editor. The military revolution and the state, 1500–1800. Exeter: University of Exeter Publications; 1980.
- 6. Downing B. The military revolution and political change. Princeton, NJ: Princeton University Press; 1992.
- 7. Parker G. The Military Revolution, 1500–1800: Military Innovation and the Rise of the West (2nd ed.). Cambridge: Cambridge University Press; 1996.
- 8. Turchin P. Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth. Chaplin, CT: Beresta Books; 2016.
- 10. Bellah RN. Religion in Human Evolution: From the Paleolithic to the Axial Age. Cambridge, MA: Harvard University Press; 2011.
- 11. Morris I. Why the West Rules—For Now: The Patterns of History, and What They Reveal About the Future. New York: Farrar, Straus and Giroux; 2010.
- 12. Morris I. The Measure of Civilization: How Social Development Decides the Fate of Nations. Princeton: Princeton University Press; 2013.
- 14. Schumpeter JA. Business Cycles. New York: McGraw-Hill; 1939.
- 15. Arthur WB. The Nature of Technology: What It Is and How It Evolves New York: Free Press; 2011.
- 16. Mokyr J. The Lever of Riches: Technological Creativity and Economic Progress. New York: Oxford University Press; 1992.
- 17. Mokyr J. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton, NJ: Princeton University Press; 2002.
- 19. White LA. The Evolution of Culture. New York: McGraw-Hill; 1959.
- 20. Cipolla CM. Guns, Sails, and Empires: Technological Innovation and the Early Phases of European Expansion, 1400–1700. New York: Pantheon Books; 1965.
- 21. Modelski G, Thompson WR. Leading sectors and world powers: the coevolution of global politics and economics. Columbia, SC: University of South Carolina Press; 1996.
- 31. Tsirel S. On the Possible Reasons for the Hyperexponential Growth of the Earth Population. In: Dmitriev MG, Petrov AP, editors. Mathematical Modeling of Social and Economic Dynamics. Moscow: Russian State Social University; 2004. p. 367–9.
- 34. Korotayev A, Malkov A, Khaltourina D. Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends. Moscow: URSS; 2006.
- 37. Salisbury RF. From stone to steel: economic consequences of a technological change in New Guinea. Cambridge: Cambridge University Press; 1962.
- 39. Christian D. Maps of Time: An Introduction to Big History. Berkeley: University of California Press; 2004.
- 40. Korotayev AV, LePoire DJ, editors. The 21st Century Singularity and Global Futures: A Big History Perspective. Cham, Switzerland: Springer; 2020.
- 41. Chase-Dunn CK, Hall TD. Rise and demise: comparing world-systems. Boulder, CO: Westview Press; 1997.
- 42. Korotayev A. The World System Urbanization Dynamics: A quantitative analysis. In: Turchin P, Grinin L, de Munck VC, Korotayev A, editors. History and Mathematics: Historical Dynamics and Development of Complex Societies. Moscow: URSS; 2006. p. 44–63.
- 43. Korotayev A. Compact mathematical models of world system development, and how they can help us to clarify our understanding of globalization processes. In: Modelski G, Devezas T, Thompson WR, editors. Globalization as Evolutionary Process: Modeling Global Change. London: Routledge; 2008. p. 133–60.
- 45. Barfield TJ. The perilous frontier: nomadic empires and China. Oxford, UK: Blackwell; 1989.
- 46. Kradin NN. Nomads, world-empires, and social evolution. In: Kradin NN, Korotayev AV, Bondarenko DM, Lynshi VA, editors. Alternative routes to civilization (in Russian). Moscow: Logos; 2000. p. 314–36.
- 48. Scheidel W. The Xiongnu and the comparative study of empire. In: Brosseder U, Miller BK, editors. Xiongnu archaeology: multidisciplinary perspectives on the first steppe empire in Inner Asia. Bonn: Rheinische Friedrich-Wilhelms-Universität Bonn; 2010. p. 111–20.
- 49. Collins R. Macrohistory: Essays in Sociology of the Long Run. Palo Alto, CA: Stanford University Press; 1999.
- 50. Hui VT-b. War and State Formation in Ancient China and Early Modern Europe. Cambridge: Cambridge University Press; 2005.
- 56. Zinkina J, Christian D, Grinin L, Ilyin I, Andreev A, Aleshkovski I, et al. A Big History of Globalization: The Emergence of a Global World System. Cham, Switzerland: Springer Nature; 2019.
- 58. Beckwith CI. Empires of the Silk Road: A History of Central Eurasia from the Bronze Age to the Present. Princeton: Princeton University Press; 2009.
- 59. Williams T. The Silk Roads: An Icomos Thematic Study. International Council of Monuments and Sites. 2014.
- 60. McLaughlin R. The Roman Empire and the Silk Routes: The Ancient World Economy & the Empires of Parthia, Central Asia & Han China: Pen and Sword; 2016.
- 61. Turchin P. War and Peace and War: The Life Cycles of Imperial Nations. NY: Pi Press; 2006.
- 62. Nefedov SA. War and Society (In Russian: Voyna i obschestvo. Faktornyi analiz istoricheskogo protsessa). Moscow: Publishing House "Territoriya buduschego"; 2009.
- 64. Drews R. The End of the Bronze Age: Changes in Warfare and the Catastrophe ca. 1200 BC. Princeton: Princeton University Press; 1993.
- 71. Johnson AW, Earle T. The evolution of human societies: from foraging group to agrarian state, 2nd edition. Stanford, CA: Stanford University Press; 2000.
- 72. Sanderson SK. Social Transformations: a General Theory for Historical Development. Lanham, MD: Rowman and Littlefield; 1999.
- 74. Wiener N. The theory of prediction. In: Beckenbach E, editor. Modern Mathematics for Engineers. New York: McGraw-Hill; 1956.
- 77. Dryer MS, Haspelmath M, editors. The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology; 2013.
- 78. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: Wiley; 1987.
- 79. Yuan YC. Multiple Imputation for Missing Data: Concepts and New Development (Version 9.0). Rockville, MD: SAS Institute; 2011.
- 80. Cioffi-Revilla C, Luke S, Parker DC, Rogers JD, Fitzhugh WW, Honeychurch W, et al. Computational Modeling Frontiers in International Politics: Agent-based Modeling and Simulation of Adaptive Behavior and Long-Term Change in Inner Asia. Proceedings of the First World Congress on Social Simulation; 2006.
Technology and Labor Displacement: Evidence from Linking Patents with Worker-Level Data
We develop measures of labor-saving and labor-augmenting technology exposure using textual analysis of patents and job tasks. Using US administrative data, we show that both measures negatively predict earnings growth of individual incumbent workers. While labor-saving technologies predict earnings declines and higher likelihood of job loss for all workers, labor-augmenting technologies primarily predict losses for older or highly-paid workers. However, we find positive effects of labor-augmenting technologies on occupation-level employment and wage bills. A model featuring labor-saving and labor-augmenting technologies with vintage-specific human capital quantitatively matches these patterns. We extend our analysis to predict the effect of AI on earnings.
We are grateful to Daron Acemoglu, Andy Atkeson, David Autor, Effi Benmelech, Nicholas Bloom, Julieta Caunedo, Martin Beraja, Carola Frydman, Tarek Hassan, Anders Humlum, Michael Peters, Pascual Restrepo, Jonathan Rothbaum, Miao Ben Zhang, among others, and seminar participants at University of Amsterdam, Boston University, Columbia GSB, FIRS, Johns Hopkins, HKUST, Labor and Finance Group, NBER (EFG, PRMP), MacroFinance Society, MIT Sloan, Michigan State, the Econometric Society, Rice University, University of Rochester, the Society of Economic Dynamics, University College London, University of Illinois at Urbana Champaign, University of Toronto, Tsinghua PBC, WFA, and Wharton for valuable discussions and feedback. We thank Carter Braxton, Will Cong, and Jonathan Rothbaum for generously sharing code. Huben Liu provided outstanding research support. The paper has been previously circulated under the titles “Technological Change and Occupations over the Long Run”, “Technology-Skill Complementarity and Labor Displacement: Evidence from Linking Two Centuries of Patents with Occupations,” and “Technology, Vintage-Specific Human Capital, and Labor Displacement: Evidence from Linking Patents with Occupations”. The Census Bureau has reviewed this data product to ensure appropriate access, use, and disclosure avoidance protection of the confidential source data used to produce this product (Data Management System (DMS) number: P-7503840, Disclosure Review Board (DRB) approval numbers: CBDRB-FY21-POP001-0176, CBDRBFY22-SEHSD003-006, CBDRB-FY22-SEHSD003-023, CBDRB-FY22-SEHSD003-028,CBDRB-FY23-SEHSD003-0350, CBDRB-FY23-SEHSD003-064). The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.
Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can.
The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.
Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.
The Eureka research, published today, includes a paper and the project’s AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”
AI Trains Robots
Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.
Robot arm taught by Eureka to open a drawer.
The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.
Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.
Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
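The generate-evaluate-summarize loop described above can be sketched in a few lines of Python. This is a toy illustration only: `llm_propose_rewards` and `evaluate_in_sim` are hypothetical stand-ins for the GPT-4 call and the GPU-accelerated Isaac Gym rollouts, not Eureka's actual code.

```python
import random

def llm_propose_rewards(task, feedback, n=4):
    """Hypothetical stand-in for the GPT-4 call: propose n candidate
    reward functions (here, random weightings of two rollout stats)."""
    candidates = []
    for _ in range(n):
        w_prog, w_eff = random.random(), random.random()
        candidates.append(
            lambda s, wp=w_prog, we=w_eff: wp * s["progress"] - we * s["effort"]
        )
    return candidates

def evaluate_in_sim(reward_fn):
    """Hypothetical stand-in for batched GPU rollouts: score a candidate
    reward function on a fixed set of simulated states."""
    states = [{"progress": p / 10, "effort": p / 20} for p in range(10)]
    return sum(reward_fn(s) for s in states)

def eureka_loop(task, iterations=3):
    best_fn, best_score, feedback = None, float("-inf"), ""
    for _ in range(iterations):
        # 1) the LLM writes a batch of reward-function candidates
        for fn in llm_propose_rewards(task, feedback):
            # 2) each candidate is scored in simulation
            score = evaluate_in_sim(fn)
            if score > best_score:
                best_score, best_fn = score, fn
        # 3) key stats are summarized and fed back, so the next
        #    generation of rewards can improve on this one
        feedback = f"best score so far: {best_score:.3f}"
    return best_fn, best_score

best_reward, best_score = eureka_loop("spin a pen")
```

The self-improvement comes entirely from step 3: the summary of training results becomes part of the next prompt, steering subsequent reward proposals.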
The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.
The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.
Humanoid robot learns a running gait via Eureka.
“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”
It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager , an AI agent built with GPT-4 that can autonomously play Minecraft .
NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
Learn more about Eureka and NVIDIA Research .
MIT News | Massachusetts Institute of Technology
Using AI to optimize for rapid neural imaging
Connectomics, the ambitious field of study that seeks to map the intricate network of animal brains, is undergoing a growth spurt. Within the span of a decade, it has journeyed from its nascent stages to a discipline that is poised to (hopefully) unlock the enigmas of cognition and the physical underpinning of neuropathologies such as in Alzheimer’s disease.
At its forefront is the use of powerful electron microscopes, which researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Samuel and Lichtman Labs of Harvard University bestowed with the analytical prowess of machine learning. Unlike traditional electron microscopy, the integrated AI serves as a “brain” that learns a specimen while acquiring the images, and intelligently focuses on the relevant pixels at nanoscale resolution similar to how animals inspect their worlds.
“ SmartEM ” assists connectomics in quickly examining and reconstructing the brain’s complex network of synapses and neurons with nanometer precision. Unlike traditional electron microscopy, its integrated AI opens new doors to understand the brain's intricate architecture.
The integration of hardware and software in the process is crucial. The team embedded a GPU into the support computer connected to their microscope. This enabled running machine-learning models on the images, helping the microscope beam be directed to areas deemed interesting by the AI. “This lets the microscope dwell longer in areas that are harder to understand until it captures what it needs,” says MIT professor and CSAIL principal investigator Nir Shavit. “This step helps in mirroring human eye control, enabling rapid understanding of the images.”
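In spirit, that beam-steering policy amounts to scoring image patches and letting high-scoring regions consume more of a fixed dwell-time budget. The sketch below is an invented illustration of the idea, not SmartEM's code: the contrast-based `interest_score` stands in for the learned model running on the GPU.

```python
def interest_score(patch):
    """Invented proxy for the learned model: local contrast stands in
    for 'harder to understand' regions worth a longer look."""
    return max(patch) - min(patch)

def allocate_dwell(patches, base_ms=1.0, budget_ms=20.0):
    """Give every patch a base dwell time, then spend the remaining
    beam-time budget in proportion to each patch's interest score."""
    scores = [interest_score(p) for p in patches]
    total = sum(scores) or 1.0
    spare = budget_ms - base_ms * len(patches)
    return [base_ms + spare * s / total for s in scores]

# A flat patch and a high-contrast patch: the beam lingers on the latter.
flat, busy = [5, 5, 5, 5], [0, 9, 1, 8]
dwell = allocate_dwell([flat, busy], base_ms=1.0, budget_ms=10.0)
```

With a 10 ms budget, the flat patch keeps only its 1 ms baseline while the high-contrast patch receives the remaining 9 ms, mirroring how the microscope "dwells longer in areas that are harder to understand."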
“When we look at a human face, our eyes swiftly navigate to the focal points that deliver vital cues for effective communication and comprehension,” says the lead architect of SmartEM, Yaron Meirovitch, a visiting scientist at MIT CSAIL who is also a former postdoc and current research associate neuroscientist at Harvard. “When we immerse ourselves in a book, we don't scan all of the empty space; rather, we direct our gaze towards the words and characters with ambiguity relative to our sentence expectations. This phenomenon within the human visual system has paved the way for the birth of the novel microscope concept.”
For the task of reconstructing a human brain segment of about 100,000 neurons, achieving this with a conventional microscope would necessitate a decade of continuous imaging and a prohibitive budget. However, with SmartEM, by investing in four of these innovative microscopes at less than $1 million each, the task could be completed in a mere three months.
Nobel Prizes and little worms
Over a century ago, Spanish neuroscientist Santiago Ramón y Cajal was heralded as being the first to characterize the structure of the nervous system. Employing the rudimentary light microscopes of his time, he embarked on leading explorations into neuroscience, laying the foundational understanding of neurons and sketching the initial outlines of this expansive and uncharted realm — a feat that earned him a Nobel Prize. He noted, on the topics of inspiration and discovery, that “As long as our brain is a mystery, the universe, the reflection of the structure of the brain will also be a mystery.”
Progressing from these early stages, the field has advanced dramatically, evidenced by efforts in the 1980s, mapping the relatively simpler connectome of C. elegans , small worms, to today’s endeavors probing into more intricate brains of organisms like zebrafish and mice. This evolution reflects not only enormous strides, but also escalating complexities and demands: mapping the mouse brain alone means managing a staggering thousand petabytes of data , a task that vastly eclipses the storage capabilities of any university, the team says.
Testing the waters
For their own work, Meirovitch and others from the research team studied 30-nanometer thick slices of octopus tissue that were mounted on tapes, put on wafers, and finally inserted into the electron microscopes. Each section of an octopus brain, comprising billions of pixels, was imaged, letting the scientists reconstruct the slices into a three-dimensional cube at nanometer resolution. This provided an ultra-detailed view of synapses. The chief aim? To colorize these images, identify each neuron, and understand their interrelationships, thereby creating a detailed map or “connectome” of the brain's circuitry.
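The end product of that colorize-and-identify step is naturally a weighted directed graph: neurons as nodes, synapse counts as edge weights. A toy sketch of such a structure (the class and method names are hypothetical, for illustration only):

```python
from collections import defaultdict

class Connectome:
    """Toy directed graph: neurons as nodes, synapse counts as weights."""

    def __init__(self):
        # (pre-synaptic neuron, post-synaptic neuron) -> synapse count
        self.synapses = defaultdict(int)

    def add_synapse(self, pre, post, count=1):
        self.synapses[(pre, post)] += count

    def outputs(self, neuron):
        """All neurons this one synapses onto, with counts."""
        return {post: n for (pre, post), n in self.synapses.items() if pre == neuron}

c = Connectome()
c.add_synapse("N1", "N2", 3)
c.add_synapse("N1", "N3")
c.add_synapse("N2", "N3", 2)
```

Real reconstructions attach far more to each edge (location, size, neurotransmitter type), but the circuit-analysis queries mentioned below ultimately reduce to traversals of a graph like this one.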
“SmartEM will cut the imaging time of such projects from two weeks to 1.5 days,” says Meirovitch. “Neuroscience labs that currently can't be engaged with expensive and long EM imaging will be able to do it now.” The method should also allow synapse-level circuit analysis in samples from patients with psychiatric and neurologic disorders.
Down the line, the team envisions a future where connectomics is both affordable and accessible. They hope that with tools like SmartEM, a wider spectrum of research institutions could contribute to neuroscience without relying on large partnerships, and that the method will soon be a standard pipeline in cases where biopsies from living patients are available. Additionally, they’re eager to apply the tech to understand pathologies, extending utility beyond just connectomics. “We are now endeavoring to introduce this to hospitals for large biopsies, utilizing electron microscopes, aiming to make pathology studies more efficient,” says Shavit.
Two other authors on the paper have MIT CSAIL ties: lead author Lu Mi MCS ’19, PhD ’22, who is now a postdoc at the Allen Institute for Brain Science, and Shashata Sawmya, an MIT graduate student in the lab. The other lead authors are Core Francisco Park and Pavel Potocek, while Harvard professors Jeff Lichtman and Aravi Samuel are additional senior authors. Their research was supported by the NIH BRAIN Initiative and was presented at the 2023 International Conference on Machine Learning (ICML) Workshop on Computational Biology. The work was done in collaboration with scientists from Thermo Fisher Scientific.
Microsoft Ignite 2023: AI transformation and the technology driving change
Nov 15, 2023 | Frank X. Shaw - Chief Communications Officer, Microsoft
As we reach the end of 2023, nearly every industry is undergoing a collective transformation – discovering entirely new ways of working due to AI advancements.
Microsoft Ignite is a showcase of the advances being developed to help customers, partners and developers achieve the total value of Microsoft’s technology and reshape the way work is done.
As we round out the year, there are strong signals of AI’s potential to transform work. Take our latest Work Trend Index . Eight months ago, we introduced Copilot for Microsoft 365 to reduce digital debt and increase productivity so people can focus on the work that is uniquely human. What everyone wants to know now is: Will Copilot really change work, and how? Our research, using a combination of surveys and experiments, shows the productivity gains are real:
- 70% of Copilot users said they were more productive, 68% said it improved the quality of their work, and 68% said it helped jumpstart the creative process.
- Overall, users were 29% faster at specific tasks (searching, writing and summarizing).
- Users caught up on a missed meeting nearly 4x faster.
- 64% of users said Copilot helps them spend less time processing email.
- 87% of users said Copilot makes it easier to get started on a first draft.
- 75% of users said Copilot “saves me time by finding whatever I need in my files.”
- 77% of users said once they use Copilot, they don’t want to give it up.
Today, we will make about 100 news announcements that touch on multiple layers of an AI-forward strategy, from adoption to productivity to security. We’ll zoom in on a few key areas of impact below.
Rethinking cloud infrastructure
Microsoft has led with groundbreaking advances like partnerships with OpenAI and the integration of ChatGPT capabilities into tools used to search, collaborate, work and learn. As we accelerate further into AI, Microsoft is rethinking cloud infrastructure to ensure optimization across every layer of the hardware and software stack.
At Ignite we are announcing new innovations across our datacenter fleet, including the latest AI optimized silicon from our industry partners and two new Microsoft-designed chips.
- Microsoft Azure Maia, an AI Accelerator chip designed to run cloud-based training and inferencing for AI workloads such as OpenAI models, Bing, GitHub Copilot and ChatGPT.
- Microsoft Azure Cobalt, a cloud-native chip based on Arm architecture optimized for performance, power efficiency and cost-effectiveness for general purpose workloads.
- Additionally, we are announcing the general availability of Azure Boost , a system that makes storage and networking faster by moving those processes off the host servers onto purpose-built hardware and software.
Complementing our custom silicon, we are expanding partnerships with our silicon providers to provide infrastructure options for customers.
- We’ll be adding AMD MI300X accelerated virtual machines (VMs) to Azure. The ND MI300 VMs are designed to accelerate the processing of AI workloads for high range AI model training and generative inferencing, and will feature AMD’s latest GPU, the AMD Instinct MI300X.
- The preview of the new NC H100 v5 Virtual Machine Series built for NVIDIA H100 Tensor Core GPUs, offering greater performance, reliability and efficiency for mid-range AI training and generative AI inferencing. We’re also announcing plans for the ND H200 v5 Virtual Machine Series, an AI-optimized VM featuring the upcoming NVIDIA H200 Tensor Core GPU.
Extending the Microsoft Copilot experience
Over the past year we have continued to refine our vision for Microsoft Copilot, a set of tools that help people achieve more using AI. To go beyond individual productivity, we are extending Microsoft Copilot offerings across solutions to transform productivity and business processes for every role and function – from office workers and front-line workers to developers and IT professionals.
Microsoft is the Copilot company, and we believe in the future there will be a Copilot for everyone and for everything you do. Some of our Copilot-related announcements and updates include:
- Microsoft Copilot for Microsoft 365: This month, Copilot for Microsoft 365 became generally available for enterprises. Already customers like Visa, BP, Honda and Pfizer and partners like Accenture, EY, KPMG, Kyndryl and PwC are using Copilot. We continue to bring new value, based on learnings from our Early Access Program and other research channels. The new Microsoft Copilot Dashboard shows customers how Copilot is impacting their organization – with insights like those found in our Work Trend Index. We’re introducing new personalization capabilities that help Copilot offer responses that are tailored to your unique preferences and role. To empower teamwork, new features for Copilot in Outlook help you prep for meetings, and during meetings, new whiteboarding and note-taking experiences for Copilot in Microsoft Teams keep everyone on the same page. And customers who need it can now use Copilot during a meeting without transcription retention. When you give Copilot a seat at the table, it goes beyond being your personal assistant to helping the entire team – check out the Microsoft 365 blog for updates across the suite including PowerPoint, Excel, Microsoft Viva and more.
- Microsoft Copilot Studio: AI transformation begins by tapping into an organization’s unique data and workflows. Microsoft Copilot Studio is a low-code tool designed to customize Microsoft Copilot for Microsoft 365 by integrating business-critical data and build custom copilots for internal or external use. Copilot Studio works with connectors, plugins and GPTs, allowing IT teams to steer Copilot to the best data sources for specific queries.
- Microsoft Copilot for Service: The newest copilot to provide role-based support helps businesses accelerate their AI transformation of customer service. Copilot for Service includes Microsoft Copilot for Microsoft 365 and helps extend existing contact centers with generative AI. In customer interactions, agents can ask Copilot for Service questions in natural language and receive relevant insights based on data sources from knowledge repositories, leading to faster and smarter resolutions.
- Copilot in Microsoft Dynamics 365 Guides: Combining the power of generative AI and mixed reality, this copilot helps frontline workers complete complex tasks and resolve issues faster without disrupting workflow. Available first on HoloLens 2, the hands-free copilot will help service industry professionals use natural language and human gestures to offer interactive guidance through content and holograms overlaid on the equipment.
- Microsoft Copilot for Azure: This is an AI companion for IT that simplifies day-to-day IT administration. More than just a tool, it is a unified chat experience that understands the user’s role and goals, and enhances the ability to design, operate and troubleshoot apps and infrastructure. Copilot for Azure helps IT teams gain new insights into their workloads, unlock untapped Azure functionality and orchestrate tasks across both cloud and edge.
- Bringing Copilot to everyone : Our efforts to simplify the user experience and make Copilot more accessible to everyone starts with Bing, our leading experience for the web. Bing Chat and Bing Chat Enterprise will now simply become Copilot. With these changes, when signed in with a Microsoft Entra ID, customers using Copilot in Bing, Edge and Windows will receive the benefit of commercial data protection. Over time, Microsoft will also expand the eligibility of Copilot with commercial data protection to even more Entra ID (formerly Azure Active Directory) users at no additional cost. Copilot (formerly Bing Chat and Bing Chat Enterprise) will be out of preview and become generally available starting Dec. 1. Learn more here .
Reinforcing the data and AI connection
AI is only as good as the data that fuels it. That’s why Microsoft is committed to creating an integrated, simplified experience to connect your data to our AI tools.
Microsoft Fabric is part of that solution. Available now, Microsoft Fabric reshapes how teams work with data by bringing everyone together on a single, AI-powered platform that unifies all those data estates on an enterprise-grade data foundation.
Copilot in Microsoft Fabric also integrates with Microsoft Office and Teams to foster a data culture to scale the power of data value creation throughout the organization. We’ve made more than 100 feature updates since Build and expanded our ecosystem with industry leading partners , and have over 25,000 customers including Milliman, Zeiss, London Stock Exchange and EY using it today.
Unlocking more value for developers with Azure AI
We continue to expand choice and flexibility in generative AI models to offer developers the most comprehensive selection. With Model-as-a-Service, a new feature in the model catalog we announced at Microsoft Build, pro developers will be able to easily integrate the latest AI models, such as Llama 2 from Meta and upcoming premium models from Mistral, and Jais from G42, as API endpoints to their applications. They can also customize these models with their own data without needing to worry about setting up and managing the GPU infrastructure, helping eliminate complexity.
With the preview of Azure AI Studio , there is now a unified and trusted platform to help organizations more easily explore, build, test and deploy AI apps – all in one place. With Azure AI Studio, you can build your own copilots, train your own, or ground other foundational and open models with data that you bring.
And Vector Search , a feature of Azure AI Search, is now generally available, so organizations can generate highly accurate experiences for every user in their generative AI applications.
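Conceptually, vector search ranks stored embeddings by their similarity to a query embedding, so semantically related items surface even without keyword overlap. The sketch below illustrates the core idea with plain cosine similarity over toy three-dimensional vectors; it is a generic illustration, not the Azure AI Search API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def vector_search(query, index, top_k=2):
    """Rank stored (id, embedding) pairs by similarity to the query."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy 3-dimensional "embeddings"; real systems use hundreds of dimensions
# and approximate nearest-neighbor indexes instead of a full sort.
index = [
    ("doc-cats", [0.9, 0.1, 0.0]),
    ("doc-dogs", [0.8, 0.2, 0.1]),
    ("doc-tax",  [0.0, 0.1, 0.9]),
]
hits = vector_search([1.0, 0.0, 0.0], index, top_k=2)
```

A query vector near the "animals" direction retrieves the two animal documents and skips the unrelated one, which is the behavior a production service scales up to millions of embeddings.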
The new GPT-3.5 Turbo model with a 16K token prompt length will be generally available and GPT-4 Turbo will be in public preview in Azure OpenAI Service at the end of November 2023. GPT-4 Turbo will enable customers to extend prompt length and bring even more control and efficiency to their generative AI applications.
GPT-4 Turbo with Vision is coming soon to preview and DALL · E 3 is now available in public preview in Azure OpenAI Service , helping fuel the next generation of enterprise solutions along with GPT-4, so organizations can pursue advanced functionalities with images. And when used with our Azure AI Vision service, GPT-4 Turbo with Vision even understands video for generating text outputs, furthering human creativity.
Enabling the responsible deployment of AI
Microsoft leads the industry in the safe and responsible use of AI. The company has set the standard with an industry-leading commitment to defend and indemnify commercial customers from lawsuits for copyright infringement – the Copilot Copyright Commitment (CCC).
Today, Microsoft takes its commitment one step further by announcing the expansion of the CCC to customers using Azure OpenAI Service. The new benefit will be called the Customer Copyright Commitment. As part of this expansion, Microsoft has published new documentation to help Azure OpenAI Service customers implement technical measures to mitigate the risk of infringing content. Customers will need to comply with the documentation to take advantage of the benefit.
And Azure AI Content Safety is now generally available, helping organizations detect and mitigate harmful content and create better online experiences. Customers can use Azure AI Content Safety as a built-in-safety system within Azure OpenAI Service, for open-source models as part of their prompt engineering in Azure Machine Learning, or as a standalone API service.
Introducing new experiences in Windows to empower employees, IT and developers
We continue to invest in and build Windows to empower people to navigate the platform shift to AI. We are thrilled to introduce new experiences in Windows 11 and Windows 365 for IT and employees that unlock new ways of working and make more AI accessible across any device. To further our mission of making Windows the home for developers and the best place for AI development, we announced a host of new AI and productivity tools for developers, including Windows AI Studio.
Announcing NVIDIA AI foundry service
Aimed at helping enterprises and startups supercharge the development, tuning and deployment of their own custom AI models on Microsoft Azure, NVIDIA will announce their AI foundry service running on Azure. The NVIDIA AI foundry service pulls together three elements – a collection of NVIDIA AI Foundation models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing and services – that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their models with NVIDIA AI Enterprise software on Azure to power generative AI applications, including intelligent search, summarization and content generation.
Strengthening defenses in the era of AI
The threat landscape has evolved dramatically in recent years, and at Microsoft Ignite we are introducing new technologies across Microsoft’s suite of security solutions to help defenders make the world a safer place.
Microsoft Sentinel and Microsoft Defender XDR (previously Microsoft 365 Defender) will be combined to create the industry’s first Unified Security Operations Platform, with embedded Security Copilot experiences. With built-in generative AI, it’s a single, powerful experience focused on protecting threats at machine speed and aiding defenders by simplifying the complexity of their environment.
Additionally, the expansion of Security Copilot embedded within Intune, Purview and Entra will help IT administrators, compliance units and identity teams simplify complex scenarios. In Entra, identity administrators can quickly troubleshoot identity access. In Purview, data security alerts deliver rich context to help resolve problems faster. In Intune, IT administrators can use “what if” analysis to keep business running while improving governance and compliance.
And that’s just a snapshot of what we’ll be announcing at Ignite. As a reminder, you can view keynote sessions from Satya Nadella, Rajesh Jha and Jared Spataro, Charlie Bell and Vasu Jakkal, and Scott Guthrie live or on-demand.
Plus, you can get more on all these announcements by exploring the Book of News , the official compendium of all today’s news, and the product blogs below.
Watch the keynotes and get all the latest photos, videos and more from Microsoft Ignite
The online event for Microsoft Ignite
With a systems approach to chips, Microsoft aims to tailor everything ‘from silicon to service’ to meet AI demand
Introducing new Copilot experiences to boost productivity and elevate customer experiences across the organization
Simplify IT management with Microsoft Copilot for Azure – save time and get answers fast
Introducing Microsoft Copilot Studio and new features in Copilot for Microsoft 365
Announcing general availability of vector search and semantic ranker in Azure AI Search
GPT-4 Turbo with Vision on Azure OpenAI Service
How Azure AI Content Safety helps protect users from the classroom to the chatroom
Elevating the developer experience on Windows with new AI tools and productivity tools
Microsoft unveils expansion of AI for security and security for AI at Microsoft Ignite
Tags: AI , Azure AI Content Safety , Azure AI Studio , Microsoft 365 , Microsoft Copilot , Microsoft Fabric , Microsoft Ignite 2023 , Microsoft Security Copilot , Model-as-a-Service
Evolution Research Paper

This sample evolution research paper features 9,900 words (approx. 33 pages), an outline, and a bibliography with 72 sources.
- Introduction
- Earliest Speculations on Evolution
- Age of Enlightenment and Evolution
- Jean-Baptiste de Lamarck
- Charles Darwin
- Herbert Spencer
- Ernst Haeckel
- Peter Kropotkin
- Friedrich Nietzsche
- Henri Bergson
- Bozidar Knezevic
- Alfred North Whitehead
- Pierre Teilhard de Chardin
- Marvin Farber
- The Neo-Darwinian Synthesis
- Sociobiology: Nature and/or Nurture
- Religious Creationism or Scientific Evolutionism
- Evolutionary Humanism, Transhumanism, and Posthumanism
- Exobiology and Exoevolution
- Bibliography
The fact of evolution pervades modern thought from astronomy to psychology. It is safe to assume that no academic discipline has escaped the influence of an evolutionary framework. Our present worldview is grounded in a serious consideration of time, change, and evolution; it is a remarkably different explanation for this universe, life-forms on earth, and our own emerging species than was given by natural philosophers only 2 centuries ago.
Rocks, fossils, artifacts, and genes offer compelling and sufficient evidence for a dynamic view of this planet and those organisms that have existed before and do now live on it (Coyne, 2009; Dawkins, 2009; Fortey, 1998; Mayr, 2001; Ridley, 2004). Yet facts do not interpret themselves. Consequently, interpretations of evolution vary greatly from materialism through vitalism and spiritualism to mysticism (Birx, 1984). In this arc of evolution (Birx, 2006a), there is a glaring difference between the materialist stance of Charles Darwin and the mystical outlook of Pierre Teilhard de Chardin (Birx, 1991). Each interpreter of evolution comes to the theory with a different set of ideas, issues, and values within a specific orientation. Perspectives range from a planetary focus to a cosmic approach.
Modern anthropology embraces the fact of evolution, viewing the recent appearance of humankind within a sweeping geological framework. Our biological structures and functions, as well as societies and cultures (Harris, 1968; Morgan, 1877/1963; Tylor, 1871/1958; White, 1949, 1959), have changed throughout time and will continue to do so. One fascinating prospect for our species is its future adaptation to and survival in outer space, whether for living on neighboring planets or elsewhere in this expanding universe.
The idea of evolution did not originate with the thoughts of Charles Darwin in the middle of the 19th century. Nor did this naturalist have the last word on his own theory of “descent with modification” (as he put it). Yet in terms of science and reason, the conceptual revolution of organic evolution received its factual foundation with Darwin’s pivotal writings on the history and diversity of life-forms on this planet.
In fact, the idea of evolution had been glimpsed by several natural philosophers in ancient Greece during the pre-Socratic Age (Whitlock, 2009): Thales, Anaximander, Heraclitus, Xenophanes, and Empedocles. They recognized the biological similarities between the human being and other animals and held to the dynamic history of this universe. One is tempted to refer to them as protoevolutionists, since they anticipated (to varying degrees) the thoughts of Darwin and Alfred Russel Wallace more than 2,000 years later.
The emerging concept of organic evolution received an unfortunate intellectual impediment with the philosophical writings of the ancient Greek thinker Aristotle (384–322 BCE), who taught that species are eternally fixed in the natural world. However, he did acknowledge the biological similarities among groups of animals, thereby fathering both comparative biology and a natural taxonomy. Even so, he ignored the biohistorical significance of fossils, referring to them as being merely chance aberrations in rock strata. Aristotle’s interpretation of life-forms as representing a static hierarchy of fixed species (his comprehensive concept of the great chain of being, or so-called ladder of nature) had an enormous influence on later naturalists, philosophers, and theologians; subsequently, these thinkers were not predisposed to accepting the mutability of species throughout earth history.
The Roman philosopher Lucretius (96–55 BCE) wrote that this planet itself, over time, had produced plants and animals. He also claimed that organisms, including intelligent beings, inhabit other worlds in this universe. But his anti-Aristotelian worldview was not taken seriously by those thinkers who dogmatically clung to the traditional interpretation of life-forms as fixed species.
During the Italian Renaissance, Leonardo da Vinci (1452–1519) did recognize both the biological and historical significance of fossils as the remains of once living organisms. He had discovered marine fossils embedded in the top rock strata of the Alps; three centuries later, Darwin would have a similar experience in the Andes. Unfortunately, Leonardo never recorded his own thoughts on the history of changing life-forms throughout the thousands of years of geological time; he thought our earth to be at least 200,000 years old. His genius may have imagined the mutability of species but, if so, he never wrote about this idea in his notebooks.
Age of Enlightenment and Evolution
Following the so-called Dark Ages and Middle Ages, the Enlightenment represented an exciting time for academic scholars during which serious thinkers criticized the dogmatic church and oppressive state in favor of science and reason (Cassirer, 1955). The courageous French philosophers of this time called for open inquiry and the extension of the scientific method from the natural sciences to the emerging social sciences. By taking a historical perspective, emphasizing the value of freedom and individualism, and anticipating ongoing progress in the special sciences (both natural and social), these enlightened thinkers established an intellectual atmosphere that paved the way for the coming of anthropology as a distinct discipline.
With the natural philosopher Denis Diderot (1713–1784) as its major editor, the Encyclopédie (1751–1772) represented a practical outcome in the devotion to both scientific research and critical thinking, and it was a project exemplary of this age. In fact, achieving the completion of this unique project was Diderot’s supreme accomplishment. With the publication of this multivolume work, extensive knowledge was now accessible for both academic scholars and general readers.
The nature-oriented thoughts of the Enlightenment gave a major impetus to the growth of several earth sciences: historical geology, comparative paleontology, and prehistoric archaeology (as well as ongoing advances in biology). Rocks, fossils, and artifacts were revealing an incredible explanation for life-forms on earth that was far different from the biblical story of Creation in Genesis. Furthermore, extensive travels by naturalists led to the discovery of other societies with different cultures, subsequently contributing to the need for a specific science of humankind itself.
Representative of the optimistic outlook during the Enlightenment is the future vision presented by Marquis de Condorcet (1743–1794), who is remembered primarily for his extraordinary book titled Sketch for a Historical Picture of the Progress of the Human Mind (1795/1980). As a result of astonishing advances in science and technology throughout the forthcoming centuries, Condorcet held that one practical consequence would be that human beings will eventually achieve and enjoy an indefinite life span.
Influential Scientists and Provocative Philosophers
Once the fact of evolution was established, it had an overwhelming influence on several major thinkers in science, philosophy, and theology. The pivotal writings of Charles Darwin represented a scientific revolution that seriously challenged those ideas, beliefs, and values that were embedded in the traditional, static worldview, an outlook that had stymied both creative and critical thought for centuries. A dynamic interpretation of nature now replaced the old conceptual framework grounded in fixity and permanence. Some naturalists were eager to consider the far-reaching implications of evolution for understanding life, our species, and this universe. Some philosophers and theologians were courageous enough to consider the startling consequences of evolution for appreciating reality itself.
In 1809, following the Enlightenment, the French natural philosopher Jean-Baptiste de Lamarck wrote the first serious work on organic evolution, titled Zoological Philosophy (1809/1984). This book appeared exactly 50 years before the publication of Charles Darwin’s major work, On the Origin of Species (1859). However, Lamarck’s interpretation of evolution was essentially conceptual and speculative, lacking the sufficient empirical evidence and the testable explanatory mechanism that Darwin would later offer to convince other biologists of the fact that species are mutable and have evolved throughout natural history.
To his lasting credit, Lamarck had studied the fossil record in rock strata. He correctly concluded that the sequence of remains in the geological column clearly demonstrated that life-forms have evolved during earth history. His idea that plants and animals are mutable and change over time challenged the entrenched concept of fixed species. Unfortunately, Lamarck was unable to persuade his contemporary naturalists that species have evolved throughout planetary time. His explanation for organic evolution in terms of the inheritance of acquired characteristics through use and disuse was not convincing; for example, his own idea that the long neck of a giraffe is directly due to the accumulated results of stretching, over countless generations, to reach the leaves of ever-higher trees remains a preposterous explanation in the history of biology. In addition, Lamarck’s vitalist orientation was not in step with the naturalism espoused by most biologists. Likewise, his ludicrous claim that complex animals, such as our human species, can actually will those biological changes that are needed by them to adapt and survive in changing environments has been verified neither by evidence nor by experience since his time.
In fact, at first, Darwin was reluctant to acknowledge the influence that Lamarck had had on the early development of his own evolution framework. Nevertheless, Lamarck had been brave enough to maintain the heretical idea that species change through time.
Charles Darwin (1809–1882) is referred to as “the father of evolution,” a designation he richly deserves for his lifelong dedication to science to substantiate the mutability of species (Birx, 2009a). With focused energies, he was able to amass overwhelming empirical evidence from various fields, thereby documenting the fact of evolution for other naturalists. His scientific theory of organic evolution and explanatory mechanism of natural selection represented a conceptual revolution in both science and philosophy, with devastating consequences for traditional theology.
As a young naturalist in England, Darwin was primarily interested in rocks and beetles; over the years, his research shifted from geology to biology. After university studies in medicine and theology, his comfortable life was altered dramatically when Captain Robert FitzRoy accepted him for the position of naturalist aboard the HMS Beagle; this survey ship would sail for 5 years (1831–1836) in the Southern Hemisphere, with its primary purpose being the mapping of the coastlines of South America. This extensive trip would prove to be a voyage of discovery for the emerging scientist (Darwin, 1839/2000; McCalman, 2009).
When Darwin boarded the Beagle, he was an amateur geologist who accepted both the then-taught fixity of species and the beliefs of Christianity. But his own worldview would change radically as a result of three fortuitous events: his critical study of Charles Lyell’s three-volume Principles of Geology (1830–1833), his unique experiences as an astute observer of nature in the Southern Hemisphere (especially during his 5-week visit to the Galapagos Islands), and his beneficial reading of Thomas Robert Malthus’s An Essay on the Principle of Population (1798/1803).
Questioning and then rejecting the story of Creation as presented in Genesis, Darwin began to envision a dynamic web of life-forms changing over space and throughout time. Lyell’s sweeping geological framework offered an immense period of planetary history within which Darwin could imagine the slow and continuous mutability of species. Furthermore, not only the fossil record in rock strata but also the geographical distribution of different organisms argued for the evolution of life-forms throughout biological time. In short, the earth is a massive graveyard of past species and a changing stage for the emergence of new ones, as well as a global museum of previous cultures and human activities. Finally, in 1838, Darwin’s reflections on Malthus’s vivid description of the living world as a “struggle for existence” gave to him his explanatory mechanism of natural selection. Thus, in merely 7 years, Darwin the geobiologist had become convinced that species either evolve or become extinct within changing environments throughout organic history. He referred to his evolution theory as “descent with modification,” but he had no immediate plan to get his disturbing interpretation of life into print.
With the luxury of time, Darwin’s ongoing scientific research in biology and critical reflection on dynamic nature included the rigorous study of worms, pigeons, orchids, and barnacles as well as numerous other species (Boulter, 2009). Suddenly, in 1858, his scientific life of isolated contentment was abruptly disrupted when he learned that the naturalist Alfred Russel Wallace, while living in Indonesia, had come forth with both a theory of evolution and the same explanatory mechanism of natural selection to account for the history of life on earth.
Consequently, in 1859, Darwin quickly published his major work, On the Origin of Species (Darwin, 1859), which secured his priority as the father of evolution. He was also fortunate to have three major naturalists defend his counterintuitive and most controversial theory: Thomas Huxley in England, Ernst Haeckel in Germany, and Asa Gray in the United States. Even so, Darwin had deliberately left out any consideration of the human animal. However, 12 years later, his The Descent of Man (1871) actually focused on our own species (Darwin, 1871).
Darwin’s materialist theory of organic evolution held incredible, if not disquieting, ramifications for viewing the place of our human species within earth history. As had Huxley and Haeckel, Darwin himself now wrote that our species is closest to the three great apes (orangutan, gorilla, and chimpanzee), with which the human animal shares a common ancestral origin. And he thought that the remains of this shared group would be found in the fossil record of Africa. Also, Darwin maintained that the human being differs merely in degree rather than in kind from these three great apes. This was not a claim that endeared him to those who believed that our species is unique and therefore occupies a special position in this universe. Nevertheless, Darwin’s evolution theory gave to the emerging discipline of anthropology a scientific foundation that is quintessential for understanding and appreciating the origin and history of humankind.
With dynamic integrity, Darwin clung to his materialist outlook, thereby giving an atheistic interpretation of organic evolution, while his cosmological perspective remained agnostic at best. He even reflected on the evolution of the human brain with its mental activity, as well as pondering the emergence of moral conduct from earlier ape behavior.
Following the pervasive and overwhelming influence of Darwin’s writings, the early anthropologists speculated on and searched for fossils and artifacts to document the biological and sociocultural evolution of the human animal, respectively. Other anthropologists wrote about the evolution of languages, kinship systems, political organizations, and magical-religious belief systems. Evolution research continues to enlighten and inspire the science of anthropology, with remarkable evidence discovered each year. One may eagerly anticipate new findings in genetics, paleontology (Brasier, 2009), primatology, and evolutionary psychology.
No doubt, during his frequent strolls down the Sandwalk behind Down House, the aging Charles Darwin reflected on his incredible experiences during his voyage on the HMS Beagle (especially his visit to the primeval-like Galapagos Archipelago). Yet one may argue that it was Lyell’s geological perspective that had had the greatest lasting influence on the young naturalist. It gave to Darwin in particular, and to anthropologists in general, a vast framework of time and change within which one could comprehend organic evolution and the recent appearance of humankind on planet earth.
Today, the English thinker Herbert Spencer (1820–1903) is primarily remembered for coining the famous expression “the survival of the fittest,” a phrase that Darwin himself later used in his own writings on organic evolution. But Spencer’s greatest achievement was authoring a 10-volume work titled Synthetic Philosophy (1862–1893), a comprehensive interpretation of reality that dealt with cosmology and biology as well as sociology, psychology, and ethics. This worldview is grounded in a universal force and a crucial distinction between the now knowable world of human experience and the forever unknowable realm of ultimate reality.
Taking time and change seriously, Spencer presented his cosmic perspective in First Principles (1862/1958), Volume 1 of the 10-volume grand synthesis (Spencer, 1862–1893). In it, he offers his evolutionary view of this dynamic universe. He speculates that the cosmos evolves from maximum simplicity (homogeneity) to maximum complexity (heterogeneity), as does the history of life on earth. Then, the cosmos and life devolute back to ultimate simplicity. He further speculated that there is an endless series of cosmic cycles, each finite cycle identical in structure but different in content.
Spencer likened the evolution of a society to the evolution of an organism, referring to a human society as the superorganic, which is distinct from nature itself but follows the same progressive process from simplicity to complexity and then devolutes back to simplicity. Thus, planetary evolution is from the inorganic through the organic to the superorganic. Spencer rejected religious creationism in favor of scientific evolutionism. In anthropology, he called for the empirical description and comparative study of societies and their cultures within an evolution framework. Ultimately, his ruthless individualism became the foundation for social Darwinism. Nevertheless, his ideas paved the way for the sociocultural evolutionists of the 20th century, for example, V. Gordon Childe, Marvin Harris, Julian H. Steward, and Leslie A. White (among others). No doubt, ongoing research in anthropology will provide an even clearer view of human evolution in all of its aspects.
Thomas Huxley
Referred to as “Darwin’s bulldog” in England because of his enthusiastic support for the fact of evolution, Thomas Henry Huxley (1825–1895) contributed to science through his own comparative research in anatomy and paleontology. He accepted the evolution framework, with its vast geologic perspective and compelling paleontological record. His scientific imagination could even see earth history represented in a piece of chalk, even though our present knowledge of rocks, fossils, and genes was not available to him. At a time when most naturalists still held to the fixity of species, Huxley boldly argued that organisms either evolved throughout earth history, or they became extinct. His writings and lectures greatly helped to spread the scientific theory of biological evolution to both academic specialists and the general public.
Huxley is best remembered for defending the scientific theory of biological evolution at the University of Oxford’s Museum of Natural History in the summer of 1860 (Darwin was conspicuously absent). The heated confrontation between biblical fundamentalist Samuel Wilberforce, Bishop of Oxford, and materialist evolutionist Thomas Huxley ended with a victory for science and reason over myopic religious beliefs. Nevertheless, the “battle” between religious creationists and scientific evolutionists continues, and it is as contentious today as it was during Darwin’s time.
In 1863, concerning our own species, Huxley presented his pithecometra hypothesis (1863/1959): The human animal differs merely in degree rather than in kind from the two African great apes (gorilla and chimpanzee), and, in turn, our species is closer to these great apes than they are to the two lesser apes (gibbon and siamang). This position was also maintained by Ernst Haeckel and several years later by Charles Darwin himself. No doubt, the disturbing claim that the human animal is closely related to the living apes through organic evolution has contributed significantly to the continuing outrage against evolutionary biology and biological anthropology.
To represent his own position of scientific naturalism, Huxley coined the term agnosticism, as he was not certain whether a personal God exists or not. Even so, Huxley never believed that the process of evolution represented a divine plan or intelligent design. The philosophical scientist Ernst Haeckel was a pantheist (God is nature), while Charles Darwin kept his atheism to himself.
Huxley’s interpretation of evolution differed from Darwin’s view. Influenced by Charles Lyell’s theory of uniformitarianism in historical geology, which held that geological structures change slowly over immense periods of time due to natural forces, Darwin’s support of gradualism in organic evolution held that species change slowly over vast periods of time due to biological variation and natural selection. However, doubting that natural selection alone could account for the transformation of species, Huxley thought that new species could have “suddenly” appeared as a result of periodic rapid changes in biological evolution. Considering the enormous age of this planet and the awesome number of species that have existed on it (almost all of them having become extinct), it seems reasonable to assume that different rates of evolutionary change are represented in the fossil record.
Known as “Germany’s Darwin” for daringly advocating and rigorously defending organic evolution, Ernst Haeckel (1834–1919) dedicated his research activities to many scientific areas, especially comparative embryology and marine biology. He not only contributed to the empirical evidence that supported organic evolution, but also seriously considered the far-reaching consequences of the evolutionary sciences for both philosophy and theology. His most successful book was The Riddle of the Universe (1899), in which he presented an evolutionary worldview that courageously challenged those traditional ideas and embedded beliefs that had pervaded Western thought for centuries (Haeckel, 1899).
Haeckel’s evolutionary philosophy is grounded in a process monism (his law of substance); this position claims that dynamic reality is essentially a cosmic unity. Therefore, he held that human existence is a product of and totally within material nature. Moreover, for him, the evolving universe itself is eternal in time and infinite in space.
Haeckel had no patience for those thinkers who ignored the fact of evolution and its atheistic consequences. He rejected the common earth-bound and human-centered view of reality, which had taught that our species holds a special place in cosmic immensity. Moreover, by extending the fact of evolution beyond earth, Haeckel speculated that life-forms, including intelligent beings, exist on other planets elsewhere in this universe. As such, he anticipated the new research area of exobiology.
Inspired by Charles Darwin’s On the Origin of Species (1859), Haeckel expanded the evolution theory to include the emergence and history of the human animal. He claimed that the evolution of our species could be traced back to a “missing link” represented by an ape-man without speech, Pithecanthropus alalus, whose fossil remains he thought would be found somewhere in Asia. (Darwin held Africa to be the cradle of humankind.) For Haeckel, this ape-man once existed between the earlier prehistoric apes of the alleged Asian landmass Lemuria (now vanished) and our own species of today. In the early 1890s, the naturalist Eugene Dubois discovered the hominid specimen Pithecanthropus erectus at the Trinil site on the island of Java in Indonesia. This remarkable find inspired other naturalists to search for similar fossil evidence in Africa and Asia. Haeckel also claimed that the human animal and the two African great apes (gorilla and chimpanzee) differ merely in degree rather than in kind.
In fact, as an artist in science, Haeckel drew the first tree of life diagram and, subsequently, many other illustrations that showed the evolutionary relationships among organisms as naturalists understood the historical web of life at that time. In general, Haeckel’s basic ideas remain in step with modern thought. Today, his rigorous evolutionism may be seen in the writings of Richard Dawkins and Daniel C. Dennett (among others).
In Russia, Prince Peter Alekseyevich Kropotkin (1842–1921) became known for his original research in geography, zoology, anthropology, and sociology. He spent time in Siberia, where he studied the influence of past glaciers on its environment. He also carefully observed the group behavior of tribal communities and wild animals, deriving an important generalization about the adaptation and survival of societies, that generalization being his concept of mutual aid (social cooperation).
Although an evolutionist, Kropotkin differed from Darwin in maintaining that the natural selection of individuals was necessary but not sufficient to account for the survival and therefore successful evolution of social animals, including our own species. Kropotkin stressed that mutual aid is also crucial for the adaptation and reproduction of species (Kropotkin, 1902/1914; Montagu, 1952). In fact, for him, mutual aid is the key to understanding and appreciating the evolution of the human being; in human social evolution, from bands and tribes to chiefdoms and states, mutual aid has played a crucial role in both protecting individuals and ensuring the survival of groups.
Kropotkin (1922/1968) even held that the biological origin of mutual aid was the foundation of a universal ethics for our own species. Therefore, he saw a sound anthropology resulting from the convergence of evolutionary science and a community ethics grounded in mutual aid. For him, collective thought and social action enhances the life, harmony, unity, and evolution of human communities. Extending his naturalism and humanism into politics, Kropotkin advocated communist anarchism.
In the 20th century, evolutionary biology in Russia received a devastating setback due to the politically motivated ideas concerning heredity defended by Trofim D. Lysenko (1898–1976), who sided with the philosophical views of Jean-Baptiste de Lamarck and Ivan Vladimirovich Michurin rather than the scientific discoveries of Gregor Johann Mendel and Hugo de Vries.
Yet it was the Russian biochemist A. I. Oparin who proposed a scientific explanation for the material appearance of life on this planet. In his groundbreaking book The Origin of Life (1923), he extended Darwin’s naturalist theory by arguing that inorganic development had paved the way for the emergence of organic evolution in terms of biochemical advances in the waters of a primordial earth billions of years ago. Oparin had rejected all nonmaterialist explanations for the origin and evolution of life on this planet, as well as the assumption that life on earth is unique in this dynamic universe.
One may argue that the German philosopher Friedrich Nietzsche (1844–1900) is the most influential thinker of the recent past. Yet it is not often realized that he was greatly influenced by the evolution theory of Charles Darwin (Birx, 2006c). Reminiscent of Heraclitus in ancient Greece, Nietzsche took time and change seriously, seeing our species as being totally within the flux of reality. And like the scientist Darwin, the philosopher Nietzsche presented a strictly naturalist worldview. Nevertheless, Nietzsche’s vitalistic interpretation of organic evolution is far removed from Darwin’s materialist explanation for life-forms on earth (including the human animal).
Nietzsche was deeply concerned with the cosmological implications, ethical ramifications, and religious consequences embedded in the fact of evolution (as he saw them). For him, “God is dead!” and, therefore, this dynamic world has no meaning or purpose other than those values that humankind creates for its existence (Nietzsche, 1883–1885/1993). Likewise, if everything changes, then ideas and beliefs and values also change throughout time. In fact, Nietzsche called for a rigorous reevaluation of all values to overcome the complacency and mediocrity that he held to be pervasive in modern civilization.
Darwin neither concerned himself with questions about the beginning of this universe and the origin of life nor speculated on the future of our species and the end of this cosmos. Instead, he focused his time and effort on demonstrating (as best he could) the fact of evolution in terms of empirical evidence and logical argumentation. In sharp contrast, however, Nietzsche was always eager to grapple with those metaphysical issues that the evolution framework posed for both philosophy and theology.
Nietzsche’s philosophical anthropology gives priority to no particular society or specific culture. His own position emphasizes the value of human creativity within the history of a creative universe in general and the process of creative evolution in particular.
Nietzsche’s worldview stresses three essential ideas that are compatible with the evolution theory as he interpreted it: the dynamic universe is ultimately a will to power; the further evolution of the human animal will bring about a superior form, the overbeing, which will be as intellectually advanced beyond our species of today as the human being is now biologically advanced beyond the lowly worm; and the eternal recurrence of this same universe is his all-encompassing conception of reality itself.
In his sweeping vision of the eternal recurrence, Nietzsche maintains that this finite cyclical universe will repeat itself forever. He argued that space and the amount of matter or energy in reality is finite, but time is eternal. Therefore, only a finite cosmic series of objects and events and relationships is possible. Consequently, this identical sequence repeats itself an infinite number of times; there was no first sequence and there will be no last sequence. Since each cosmic cycle is absolutely identical, there is no evolution from universe to universe within this endless repetition. As a result, Nietzsche himself and everything else in reality has a form of natural immortality.
The eternal recurrence remains an engaging idea in modern cosmology, especially in terms of an oscillating model for this dynamic universe.
Critical of Charles Darwin’s mechanistic and materialistic interpretation of organic evolution, the French philosopher Henri Bergson (1859–1941) offered a vitalistic explanation for biological history in his major work, Creative Evolution (1907/1998). Unlike the early scientists who defended Darwin’s naturalism, for example, Huxley and Haeckel, Bergson argued that only a philosophical interpretation of organic evolution would disclose the essential aspect of diverging life-forms on earth over countless millions of years and, furthermore, would reveal the unique value of the human being in terms of its immediate awareness of real time and creative evolution.
Bergson set forth his essential philosophical stance in his book An Introduction to Metaphysics (1903). To grasp the significance of his conceptual orientation, it is necessary to understand Bergson’s crucial distinction between science and metaphysics: Science is interested in a rational (mathematical and logical) analysis of the appearance of diverse and fixed material objects in external space; in sharp contrast, metaphysics is concerned with intuitively grasping the creative flux of events in the unity of reality as evolving consciousness in internal time or duration. Bergson gave preference to intuition over reason, that is, metaphysical insights over scientific information. He argued that it was only through intuition that a human being could appreciate both the flux of time and the creativity in evolution.
As a vitalist, Bergson (1907/1998) held that an invisible life force, or élan vital, causes the awesome creativity throughout organic evolution on our planet. He maintained that this metaphysical principle is needed to account for the emergence of an enormous diversity of species that has appeared over countless millions of years on earth. For him, the diverging evolution of life-forms has taken three major directions: plants with torpor, insects with instinct, and animals with consciousness. Bergson focused on the evolution of animals, which demonstrated (for him) a direction toward ever-increasing complexity and ever-increasing consciousness. So far, this direction has reached its peak in the human animal with its self-consciousness. In fact, in our own species, Bergson maintained that self-consciousness is the élan vital conscious of itself. He even envisioned, as human evolution continues, the emergence of a community of mystics.
Vitalism is not taken seriously by most modern evolutionists, who give priority to science and reason rather than to metaphysical speculations and mystical beliefs. Thus, neo-Darwinists interpret organic evolution within a strictly naturalistic framework.
Greatly influenced by the Darwinian theory in science, the American philosopher John Dewey (1859–1952) presented his own dynamic outlook as “instrumentalism,” a version of pragmatism. Having abandoned his early interest in Hegelian idealism, he wholeheartedly embraced the evolutionary paradigm with its far-reaching naturalistic implications for comprehending the place of humankind within this universe. Therefore, he saw our species within the organic history of this planet and earth within the cosmic history of this universe. His mature position gave no credence to idealism or spiritualism.
In his essay “The Influence of Darwin on Philosophy” (1910), Dewey called for philosophers to take the fact of evolution seriously (Dewey, 1965). Doing so, he argued, discredits the entrenched two-world interpretation of reality as matter and spirit, as well as the dualistic view of the human being as mortal body and immortal soul. Dewey best presented his own philosophy in Experience and Nature (1925/1958), a book that rigorously advocates the value of human experience and scientific inquiry.
Dewey understood the human animal as the recent product of biological evolution, a natural process within which there is always an interaction between organisms and their environments. He saw the discoveries in anthropology as being crucial for any sound interpretation of humankind within nature. Additionally, Dewey appreciated both the scientific method and the use of human concepts as means for solving problems in the natural and social worlds. For this philosopher, knowledge and wisdom come from experiencing nature itself; facts and concepts and values are derived from reflecting on experiences within nature. For Dewey, ideas and beliefs and hypotheses have adaptive value, as do critical thinking and social action. He claimed that advances in science and philosophy are only possible when there is an active community of free inquirers in a democratic society. Not surprisingly, Dewey completely rejected both Spencer’s social Darwinism and Nietzsche’s ruthless individualism.
John Dewey remains an inspiration for all naturalists and humanists, particularly those dedicated to education. For the scientific philosopher as active pragmatist, the evolutionary perspective allows for the ongoing transformation of our species in terms of adapting to and surviving in an endlessly changing universe. Thus, the enlightenment and fulfillment of humankind requires taking seriously both philosophical reflection and scientific research.
Božidar Knežević
In Serbia, the historian Božidar Knežević (1862–1905) developed a unique interpretation of evolution that grew out of the ideas of Charles Darwin and Herbert Spencer (among others). Although he adopted a cosmic vision, his bold speculations focused on the history and future of life on earth. Within the ascent and then descent of this immense universe, Knežević saw our species as being only a part of the evolution and then the devolution of organisms on this planet. As such, the naturalist taught that neither the planet earth nor the human animal is at the center of cosmic reality; consequently, he held that each is an ephemeral event in the material universe.
Knežević (1901/1980) saw both cosmic and planetary history as a semicircle of evolutionary ascent from an initial chaos followed by a devolutionary descent back to an ultimate chaos. He held that this universe is utterly indifferent to the fleeting incident of human existence, and in time, everything will disappear in the endless flux of cosmic reality.
Even so, Knežević was convinced that other planets, stars, galaxies, and universes exist and undergo this pervasive semicircular history within the infinity of superspace and the eternity of supertime. On earth, after the appearance of vertebrates from invertebrates, the fossil record shows the sequential emergence of these groups: fishes, amphibians, reptiles, birds, and mammals. Most recently, one sees the appearance of the human animal. Subsequently, when planetary devolution sets in, our species will be the first organism to vanish, followed by this series of extinctions in the remaining groups: mammals, birds, reptiles, amphibians, fishes, and lastly all of the invertebrates. This semicircular process will occur on other planets with life-forms, including intelligent beings superior to our species (in each case, the last form to appear is the first form to disappear).
Božidar Knežević was a brave spokesperson for science, reason, evolution, and open inquiry. He was a futurist who courageously advocated naturalism and humanism. His acknowledgement of the inevitable extinction of our own species and, in fact, of all that exists is a sobering but relevant reminder of the finitude of life-forms, which needs to be taken seriously in our modern worldview (particularly with the present growing concern for the environment).
With its emphasis on time and change, the evolution framework had a significant influence on 20th-century thought. This outlook inspired serious thinkers to see creativity in this world in terms of an expanding universe and emerging species; it also resulted in a deep concern for dynamic philosophy and process theology. This focus on pervasive change throughout cosmic time is exemplified in the impressive writings of Alfred North Whitehead (1861–1947), who was interested in not only scientific discoveries but also metaphysical speculations. He sought to incorporate the recent findings of both relativity physics and evolutionary biology into a comprehensive worldview that reflected their implications for understanding and appreciating the value of human experiences and feelings within an ever-changing universe. Whitehead taught first in England and then in the United States, distinguishing himself at the University of Cambridge and later at Harvard University. His academic life passed through three distinct stages; it moved from mathematics and logic, through a concern for education and the history of science, to natural philosophy and metaphysics (Whitehead, 1920/1964, 1925/1967, 1929/1969).
Whitehead’s major work is Process and Reality: An Essay in Cosmology (1929/1969). It is a systematic interpretation of change that aims to incorporate both the being of eternal objects and the becoming of actual occasions. This ongoing interaction between being and becoming results in the all-encompassing creativity of endless reality. In terms of pervasive experiences and feelings, all objects and events continuously interact in the evolutionary advance of this eternal and infinite universe. As such, there is an integrated and essential unity (through experiencing and feeling) of human perception and reality itself, that is, a unity of internal mental activity with external physical activity throughout the extensive continuum of this cosmic epoch.
As a panentheist, Whitehead merely distinguished between God and Nature (for him, they are neither separate nor identical entities); both are interacting forever, as there is no ultimate end or final goal to the creative process of an endless reality. However, there have been and will be other finite cosmic epochs, each with its own physical laws and unique creativity. In short, Whitehead’s dynamic cosmology clearly illustrates how extremely abstract an interpretation of evolving nature may become. Within this philosophy of organism, the experiencing human being is the concrescence of all its actual occasions within a continuously flowing space-time continuum.
There is a crucial distinction between the fact of evolution in science and those interpretations of evolution that exist in the philosophical literature. Evolutionary viewpoints range from materialism through vitalism and spiritualism to mysticism. Furthermore, for some thinkers, there is a serious need to synthesize science and theology into a comprehensive philosophical system that will embrace both established facts and personal beliefs. Such an audacious attempt was made by Pierre Teilhard de Chardin (1881–1955), an eminent French geopaleontologist and devout Jesuit priest, who accepted both the truth and challenge of evolution, despite the inevitable problems and tragic consequences his unique vision would cause him from some myopic religionists and his intolerant superiors (Birx, 2006d).
Because of his interest in both science and theology in terms of evolution, Teilhard was eventually silenced by the Roman Catholic Church for his unorthodox views on original sin. He was then exiled from France to China, where his geological research at Zhoukoudian, the significant fossil hominid site near Peking (now Beijing), and his subsequent scientific writings made him world famous (Aczel, 2007). Teilhard’s involvement as a geologist with this Sinanthropus pekinensis discovery resulted in his intense reflections on the meaning and purpose of human evolution within dynamic reality. Consequently, he authored his major but controversial philosophical book, The Phenomenon of Man (1975; written in 1938–1940 and 1947–1948, and first published in French in 1955). Unfortunately, the Vatican denied him permission to have it published. Quintessentially, the book argued for a teleological and mystical interpretation of human existence on earth based on theistic evolution (what today is referred to as an appeal to an intelligent design within the historical process of the natural world).
Teilhard worked with those geologists, paleontologists, and anthropologists who were dedicated to unearthing the remains of fossil hominids in the Eastern Hemisphere, from Africa to Indonesia. He himself spoke of an anthropogenesis, that is, the emergence and ongoing evolution of our species. He also called for an ultra-anthropology, that is, a rigorously comprehensive view of humankind within this evolving world. Of course, for many, evolution was a devastating challenge to traditional theologies and religious beliefs. It required a reinterpretation of God, personal immortality, human free will, and the divine destiny for our species. In their dynamic worldview of reality, both Teilhard and Whitehead were panentheists, seeing God and Nature as continuously interacting in an ongoing process of creative evolution.
Teilhard’s unique synthesis (1975) is based on four fundamental conceptual assumptions: (1) The unity of this process universe is ultimately grounded in spiritual energy; (2) cosmic evolution reveals the design of ever-increasing complexity and ever-centralizing consciousness; (3) organic evolution on the finite, spherical earth reveals three consecutive and essential layers (matter or the geosphere, life or the biosphere [Vernadsky, 1926/1998], and thought or the noosphere); and (4) the end goal of human evolution will occur on this planet with the formation of a theosphere. For this Jesuit scientist, converging and involuting human evolution will eventually form a collective consciousness at the Omega Point, which is the ultimate destiny for our species on the earth. Then, this collective consciousness will detach itself from this planet, transcend space and time, and unite itself with a personal God as a result of a final mystical synthesis.
In the last analysis, Teilhard’s cosmology (Heller, 2009) is actually a planetology. Incredible as his vision may seem, it is nevertheless to Teilhard’s lasting credit that he accepted the fact of evolution at a time when the worldwide religious community was either skeptical of it or rejected it outright. Actually, by foreseeing the future unity of our human world through converging advances in science and technology, Teilhard had glimpsed our age of the Internet.
In the history of philosophy, there has been a contentious debate between the objectivists who gave preference to the natural world and the subjectivists who gave preference to the human mind. This clash in metaphysics continues today; some philosophers claim that the material universe is the starting point for any sound cosmology, while others ground their worldview in the reflective ego as the alleged center of any true ontology. However, if philosophy takes the factual theory of organic evolution seriously, then any metaphysical framework must embrace both a dynamic universe independent of human thought and the recent emergence of our species within the sweeping history of life-forms on this planet.
As a distinguished American philosopher, Marvin Farber (1901–1980) devoted his academic activities to the intellectual defense of a cosmic naturalism over a myopic subjectivism (Farber, 1968a, 1968b). Although he studied and contributed to phenomenology as a method of inquiry, his own refreshing naturalist standpoint recognized the severe limitations of restricting philosophical investigations to merely the content of a human mind. Farber accepted the fact of evolution, realizing the far-reaching implications that this scientific theory holds for philosophical ideas and religious beliefs. Consequently, his unabashed atheism and pervasive naturalism were in stark contrast to all idealist positions in the philosophical literature and all theistic interpretations in religious thought.
Farber had been greatly influenced by the writings of Ludwig Feuerbach and Karl Marx (among others). He was indebted to the cosmic perspective of Giordano Bruno and the evolutionary framework of Ernst Haeckel. His inquiring mind was always open to crucial findings in the natural and social sciences, as well as advances in logic. He was particularly receptive to the ongoing discoveries in anthropology, a discipline he thought to be especially important to any sound understanding of and proper appreciation for human existence in terms of both science and philosophy. To him, the facts and concepts of scientific anthropology are indispensable for modern philosophy.
Incorporating the evolutionary perspective, Farber held that humankind is merely a newcomer in earth history, and its vulnerable existence is a fleeting event within the flux of cosmic reality. Therefore, one must come to grips with the ephemeral status of mental activity in this universe. Moreover, for him, the ongoing discoveries in paleoanthropology, as well as research in primatology and genetics, offer a striking confirmation of human evolution and the close relationship between our own species and the great apes.
Because of his commitment to the special sciences, uncompromising materialism, and sobering interpretation of human evolution, the wise Marvin Farber stood almost alone in modern philosophy. Nevertheless, his enlightened stance against ignorance and superstition would gladly welcome all forthcoming findings in scientific anthropology and evolutionary science. As Farber saw it, the goal of human research is to increase freedom, happiness, and longevity (with the issues in ethics taking priority over those themes that still surround epistemology and metaphysics).
One may anticipate a neo-Enlightenment with a renewed emphasis on science, reason, and humanism. For now, however, and with prudent courage, our species must have the will to evolve and fulfill itself on earth and later elsewhere in a godless universe.
Research and Speculation on Evolution
The ramifications of evolution open up new areas for scientific research, especially in anthropology with its focus on humankind. Although opposition to the fact of evolution continues, it does not stifle rational speculations on the awesome possibilities that evolution holds for both the future of our species and the probable existence of life-forms on other worlds.
At the beginning of the 20th century, scientists were divided into two distinct groups concerning the primary force behind organic evolution: One group argued that the explanatory mechanism of natural selection accounted for the emergence of new species over vast periods of time, while the other group maintained that genetic variation held the key to understanding and appreciating biological evolution. However, by 1959 it had become obvious that genetic variation and natural selection, taken together, explain the appearance of new species throughout the history of life-forms on earth. As a result, populations (or gene pools) became the focus of evolutionary research, particularly in terms of probability and statistics. As such, neo-Darwinism, or the so-called synthetic theory of organic evolution, now represents the scientific foundation for modern biology.
The writings of several scientists helped to popularize the emerging synthesis in evolution theory: Theodosius Dobzhansky, Sir Julian Huxley, Ernst Mayr, and George Gaylord Simpson (among others). Their informed books spread the facts and concepts of evolution theory, as well as defended evolutionary biology from the uninformed positions of dogmatic biblical fundamentalists and myopic religious creationists. Ongoing discoveries in paleoanthropology and human genetics, as well as improved dating techniques, gave greater empirical evidence to support the fact of human evolution (despite those attacks that still challenge the enormous age of this earth, the mutability of species, and the great antiquity of our own species). The recent completion of the Human Genome Project opens up new areas of research for the genetic engineering of species, including our own.
In 1975, the appearance of a groundbreaking book titled Sociobiology: The New Synthesis, by the American naturalist Edward O. Wilson, caused a major debate among anthropologists, as well as other scientists and philosophers (Wilson, 1975). A specialist in entomology who focused on the biology and behavior of ants, Wilson boldly extended organic evolution in order to include our own species, seeing human behavior as influenced by the inherited genetic makeup of the human animal. His position intensified the nature versus nurture controversy in the academic world, with Wilson himself giving priority to genetic inheritance over sociocultural influences. He has also rigorously advocated protecting and preserving the diversity of life-forms on earth (Wilson, 1992).
Since 1975, and especially with the mapping of the human genome, it is becoming clearer that genes play a substantial role in providing the propensity for causing favorable and unfavorable variations, for example, illness and disease, as well as both desirable and undesirable behavior in species (including in our own). Not surprisingly, some thinkers vehemently object to manipulating the human genome, despite those incredible advantages that this scientific breakthrough will offer for human existence and evolution. Admittedly, sociobiology holds great promises and serious perils. Of course, determining the biological characteristics and behavior patterns of the human being through genetic engineering necessitates that sociobiological research follow stringent ethical guidelines.
As with the origin of any science, there are those people who are at first skeptical of the value of a new field of inquiry and protest the emerging science. However, as time passes and the overwhelming benefits become obvious, the new science is accepted and eventually praised. One may assume that this change of attitude will be true for the emerging science of sociobiology, as well as evolutionary psychology and genetic engineering.
The human being is a complex product of both biology and culture. For the anthropologist, as well as the scientist and philosopher, the fact of biocultural evolution makes it clear that inherited and learned mental activity are grounded in the material brain and that the material organism (no matter how complex) is grounded in the DNA molecule. Consequently, all aspects of the human being are the result of evolution and, therefore, they are subject to scientific inquiry within a naturalist framework.
Anthropology: Facts, Concepts, and Perspectives
As the comprehensive study of evolving humankind, anthropology is that discipline that is devoted to research in those areas that are relevant to understanding and appreciating Homo sapiens sapiens within the natural world (Bollt, 2009; Hublin, 2006). These areas range from genetics, paleontology, and archaeology to sociology, psychology, and linguistics. The more anthropologists search, the more fossils and artifacts they find that shed light on the emergence of our species over several million years. Each discovery helps to complete the developing picture of hominid evolution (Birx, 1988; Shubin, 2009; Tattersall & Schwartz, 2000). Of particular significance are those discoveries in primatology that clearly show the undeniable similarities between our human species and the four great apes in terms of genetics and psychology. Research in cross-cultural studies reveals the astonishing diversity of human thought and behavior from society to society throughout history.
In paleoanthropology, three discoveries have been especially important: Ardipithecus ramidus (“Ardi”), Australopithecus afarensis (“Lucy”), and Homo floresiensis (“Hobbit”). Although interpretations of these three hominid species vary among anthropologists, who debate specific conclusions from the fossil specimens, there is no denying the empirical evidence itself. Today, it is exciting to speculate on what remarkable fossil specimens are still in the earth waiting to be discovered by future anthropologists.
A perplexing question still haunts some anthropologists: What is the uniqueness of our species? One answer offered was that the human animal is the only toolmaker—until it was discovered that chimpanzees make and use simple tools (as do a few other animals). A second reply was that only our species has self-consciousness that allows it to communicate through language—until ape studies showed that the pongids have self-awareness and are capable of learning symbolic communication. More recently, it has been argued that only humans stand erect and walk upright with a bipedal gait; that is, only humans are capable of sustained bipedality. However, chimpanzees and bonobos are able to walk erect for short distances. It seems that the only uniqueness of our species that separates us from the other living hominoids is about 6 million years of biological evolution (Rachels, 1999). Huxley, Haeckel, and Darwin himself got it right back in the 19th century: Man differs merely in degree rather than in kind from the great apes.
During the 19th century, two fundamental questions remained to be answered: What is the age of this planet? Have species always been fixed throughout earth history? As evidence accumulated in geology and paleontology, it became increasingly obvious to naturalists that our planet is millions (actually billions) of years old and that species have changed over time (with most species eventually becoming extinct). This emerging evolution framework held devastating consequences for all orthodox conceptions of earth, life-forms, and our species. In 1860 at the University of Oxford, England, the famous confrontation between Thomas Huxley and Samuel Wilberforce exemplified the intense conflict between the new evolution paradigm in science and an outmoded static worldview in religion.
The fact of evolution challenged not only traditional science and philosophy but also natural theology. Darwin himself was disturbed by the materialist implications of his own evolution theory for religious beliefs. In fact, his wife, Emma, even felt compelled to delete all of her husband’s views on theology and religion from his Autobiography, which was published posthumously in 1887; not until 1958 did an unexpurgated edition of Darwin’s life, written by himself in 1876, appear in print (Darwin, 1969).
In England, to reconcile evolutionary science with Christian faith, religious naturalist Philip Gosse argued that God had placed fossils in the earth in order to merely suggest that organic evolution had taken place, although in reality (so thought Gosse) species are fixed and earth had been suddenly created only about 6,000 years ago. Not surprisingly, his bizarre but provocative book Omphalos: An Attempt to Untie the Geological Knot (1857) convinced neither scientists nor theologians.
During the 20th century, reacting to the materialist ramifications of organic evolution, some religionists argued against the new dynamic outlook by first defending biblical fundamentalism and then advocating so-called scientific creationism (Isaak, 2007). Both viewpoints gave priority to beliefs rather than to facts. In 1925 at Dayton, Tennessee, the infamous John Scopes “Monkey Trial” best represented this ongoing clash between science and religion over the factual theory of organic evolution.
In an attempt to reconcile modern science with traditional theology, some religionists now maintain that the universe in general and evolution in particular manifest an intelligent design (Petto & Godfrey, 2007). Ultimately, this is a religious position not supported by scientific evidence. Despite all the ongoing attacks, continuing research in all areas of science (from genetics to paleontology) confirms the fact of evolution and the close biological relationship between our species and the great apes. In fact, an honest examination of human history clearly shows that even complex religious beliefs and theological systems have evolved, over thousands of years, from simplistic explanations for interpreting the natural world. No doubt, exciting discoveries in the future will further strengthen the evolution framework. Finally, in light of ongoing changes in human societies and their cultures, one wonders what the religious beliefs and theological systems of human beings will look like 2,000 years from now.
Grounded in science, reason, and an open-ended perspective, evolutionary humanism emphasizes the ongoing development of human beings within a strictly naturalistic framework. It maintains the unity of mental activity and the organic brain, and places our species totally within biological evolution. With optimism, evolutionary humanism argues for the improvement of our species in order to increase its health, happiness, and longevity (overcoming illness, disease, and physical disability). With the advances in science and technology since the middle of the 20th century, especially in genetics, the innovative ideas and pragmatic values of this movement for human enhancement would seem increasingly plausible for guiding our evolving species.
Extending the evolutionary framework, some scientists and philosophers see the human being as an unfinished species that will continue to change as a result of implementing nanotechnology and genetic engineering (Harris, 2007; Savulescu & Bostrom, 2009; Sorgner, 2006; Young, 2006). Both the ideas and values of transhumanism (going beyond the human of today) have been put forward by several visionary thinkers: Nick Bostrom, Fereidoun M. Esfandiary, Sir Julian S. Huxley, Michel Houellebecq, and Julian Savulescu (among others). Through human intervention, these thinkers argue, our species will be improved in its biological and psychological makeup, just as Homo sapiens of today is a biopsychological advance over Homo erectus of the distant past.
Reminiscent of Friedrich Nietzsche’s conception of the overbeing, some thinkers even speculate that the transhuman will be the “missing link” between the human of today and the posthuman of the remote future. In fact, the posthuman may even be a new species far beyond both humans and the following transhumans. Of course, one cannot imagine the nature of the posthumans. It is likely that these cosmic overbeings will travel to and live among the stars.
In 1836, near the end of his 5-year voyage on the HMS Beagle, Charles Darwin revisited the tropical Brazilian rainforest. He admired this lush environment and thought how great it would be, if it were ever possible, to experience the scenery on another planet. Therefore, at least once, the young naturalist glimpsed the forthcoming science of exobiology, or astrobiology, as the search for life-forms on other worlds (and, if they are found, their study).
In the history of philosophy, major thinkers like Giordano Bruno (1548–1600) and Immanuel Kant (1724–1804) envisioned living beings inhabiting other planets. Today, with advances in technology, scientists are seriously scanning the heavens in hopes of detecting indisputable evidence that organisms exist elsewhere in sidereal reality (Boss, 2009; Lamb, 2001). The size and age of this material universe, with its billions of galaxies each having billions of stars, argues for the existence of countless planets. If the same physical laws and chemical elements pervade this cosmos, then it seems reasonable to assume that earthlike worlds harbor life-forms among the stars, perhaps even sentient beings similar to or even advanced beyond ourselves.
In our own solar system, the earth has those necessary natural conditions that have allowed for the origin and evolution of biological forms over the past 4 billion years. Beyond this solar system, extrasolar planets may have similar life zones that permit the existence of organisms. Thus, planetology becomes cosmology as the probability of and interest in biological evolution are extended to include this entire universe. Likewise, exobiology implies exoevolution, that is, the evolution of life-forms on different worlds, where organisms are adapting to changing habitats far different from those environments on earth (Birx, 2006b). In the distant future, both exobiology and exoevolution may offer intriguing areas for scientific research.
Even if forms of life are never found elsewhere in this universe, it does not mean that they do not exist on worlds that will remain beyond the detection of our human species (Webb, 2002). Moreover, organisms may have existed in the remote past before the formation of the present galaxies or will emerge in the distant future in new galaxies. And there may have been, are, or will be other universes with life-forms very similar to or far different from those organisms that have inhabited or are now inhabiting earth. One can only speculate on what the consequences might be if our human species ever encounters superior intelligent beings evolving among the stars.
Since the convincing writings of Charles Darwin, interpretations of organic evolution have evolved from the narrow materialism of early evolutionists to the comprehensive naturalism of modern neo-Darwinists. Advances in those special sciences that support biological evolution include ongoing discoveries in paleontology, comparative biology, anthropology, and population genetics, as well as more accurate dating techniques in geology and biochemistry. Progress in these special sciences is an increasing challenge to vitalistic, spiritualistic, and mystical interpretations of our species and organic evolution.
Two exciting and promising but controversial areas in modern evolution research are transhumanism and exoevolution. With the rapid advances in nanotechnology and genetic engineering, an increasing ability to design the DNA molecule will allow humans to alter and improve species, including our own, and to design new organisms for specific purposes both on earth and in outer space; as such, one may speak of emerging teleology in terms of human intervention and technological manipulation. The successful journey of human beings into outer space will require our species to adapt to and survive in different environments, both artificial and natural. If life-forms are discovered elsewhere in this universe, then scientists and philosophers will be able to study the evolution of organisms on other worlds.
Quo vadis, Homo sapiens? In those countless centuries to come, the human being may even transform itself into a new species, Homo futurensis. Of course, designer evolution will require establishing ethical guidelines while promoting open inquiry. For now, the primary focus must be on those steps that need to be taken to ensure the continued biodiversity of life-forms on this planet, including the ongoing fulfillment of humans on this earth before they venture to the stars.
Room-Temperature Superconductor Discovery Is Retracted
It was the second paper led by Ranga P. Dias, a researcher at the University of Rochester, that the journal Nature has retracted.

By Kenneth Chang
Nature, one of the most prestigious journals in scientific publishing, on Tuesday retracted a high-profile paper it had published in March that claimed the discovery of a superconductor that worked at everyday temperatures.
It was the second superconductor paper involving Ranga P. Dias, a professor of mechanical engineering and physics at the University of Rochester in New York State, to be retracted by the journal in just over a year. It joined an unrelated paper retracted by another journal in which Dr. Dias was a key author.
Dr. Dias and his colleagues’ research is the latest in a long list of claims of room-temperature superconductors that have failed to pan out. But the retraction raised uncomfortable questions for Nature about why the journal’s editors publicized the research after they had already scrutinized and retracted an earlier paper from the same group.
A spokesman for Dr. Dias said that the scientist denied allegations of research misconduct. “Professor Dias intends to resubmit the scientific paper to a journal with a more independent editorial process,” the representative said.
First discovered in 1911, superconductors can seem almost magical — they conduct electricity without resistance. However, no known materials are superconductors in everyday conditions. Most require ultracold temperatures, and recent advances toward superconductors that function at higher temperatures require crushing pressures.
A superconductor that works at everyday temperatures and pressures could find use in M.R.I. scanners, novel electronic devices and levitating trains.
Superconductors unexpectedly became a viral topic on social networks over the summer when a different group of scientists, in South Korea, also claimed to have discovered a room-temperature superconductor, named LK-99. Within a couple of weeks, the excitement died away after other scientists were unable to confirm the superconductivity observations and came up with plausible alternative explanations.
Even though it was published in a high-profile journal, Dr. Dias’s claim of a room-temperature superconductor did not set off euphoria like LK-99 did because many scientists in the field already regarded his work with doubt.
In the Nature paper published in March, Dr. Dias and his colleagues reported that they had discovered a material — lutetium hydride with some nitrogen added — that was able to superconduct electricity at temperatures of up to 70 degrees Fahrenheit. It still required pressure of 145,000 pounds per square inch, which is not difficult to apply in a laboratory. The material took on a red hue when squeezed, leading Dr. Dias to nickname it “reddmatter” after a substance in a “Star Trek” movie.
Less than three years earlier, Nature published a paper from Dr. Dias and many of the same scientists. It described a different material that they said was also a superconductor although only at crushing pressures of nearly 40 million pounds per square inch. But other researchers questioned some of the data in the paper. After an investigation, Nature agreed, retracting the paper in September 2022 over the objections of the authors.
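Both papers quote pressures in pounds per square inch, while high-pressure physicists usually work in gigapascals. A minimal conversion sketch puts the two figures on that scale (the psi-to-pascal factor is the standard one; the helper function name is our own):

```python
# Convert pressures from pounds per square inch (psi) to gigapascals (GPa).
PSI_TO_PA = 6894.757  # pascals per psi (standard conversion factor)

def psi_to_gpa(psi: float) -> float:
    """Convert pounds per square inch to gigapascals."""
    return psi * PSI_TO_PA / 1e9

# The lutetium hydride claim: 145,000 psi is about 1 GPa.
print(round(psi_to_gpa(145_000), 2))
# The earlier retracted paper: nearly 40 million psi is roughly 276 GPa.
print(round(psi_to_gpa(40_000_000)))
```

On this scale the contrast is stark: the March paper claimed superconductivity near 1 GPa, achievable in an ordinary laboratory press, while the earlier retracted work required pressures a few hundred times higher.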
In August of this year, the journal Physical Review Letters retracted a 2021 paper by Dr. Dias that described intriguing electrical properties, although not superconductivity, in another chemical compound, manganese sulfide.
James Hamlin, a professor of physics at the University of Florida, told Physical Review Letters’ editors that the curves in one of the paper’s figures describing electrical resistance in manganese sulfide looked similar to graphs in Dr. Dias’s doctoral thesis that described the behavior of a different material.
Outside experts enlisted by the journal agreed that the data looked suspiciously similar, and the paper was retracted. Unlike the earlier Nature retraction, all nine of Dr. Dias’s co-authors agreed to the retraction. Dr. Dias was the lone holdout and maintained that the paper accurately portrayed the research findings.
In May, Dr. Hamlin and Brad J. Ramshaw, a professor of physics at Cornell University, sent editors at Nature their concerns about the lutetium hydride data in the March paper.
After the retraction by Physical Review Letters, most of the authors of the lutetium hydride paper concluded that the research from their paper was flawed too.
In a letter dated Sept. 8, eight of the 11 authors asked for the Nature paper to be retracted.
“Dr. Dias has not acted in good faith in regard to the preparation and submission of the manuscript,” they told the Nature editors.
The writers of the letter included five recent graduate students who worked in Dr. Dias’s lab, as well as Ashkan Salamat, a professor of physics at the University of Nevada, Las Vegas, who collaborated with Dr. Dias on the two earlier retracted papers. Dr. Dias and Dr. Salamat founded Unearthly Materials, a company that was meant to turn the superconducting discoveries into commercial products.
Dr. Salamat, who was the company’s president and chief executive, is no longer an employee there. He did not respond to a request for comment on the retraction.
In the retraction notice published on Tuesday, Nature said that the eight authors who wrote the letter in September expressed the view that “the published paper does not accurately reflect the provenance of the investigated materials, the experimental measurements undertaken and the data-processing protocols applied.”
The issues, those authors said, “undermine the integrity of the published paper.”
Dr. Dias and two other authors, former students of his, “have not stated whether they agree or disagree with this retraction,” the notice said. A Nature spokeswoman said they did not respond to the proposed retraction.
“This has been a deeply frustrating situation,” Karl Ziemelis, the chief editor for applied and physical sciences at Nature, said in a statement.
Mr. Ziemelis defended the journal’s handling of the paper. “Indeed, as is so often the case, the highly qualified expert reviewers we selected raised a number of questions about the original submission, which were largely resolved in later revisions,” he said. “This is how peer review works.”
He added, “What the peer-review process cannot detect is whether the paper as written accurately reflects the research as it was undertaken.”
For Dr. Ramshaw, the retraction provided validation. “When you are looking into someone else’s work, you always wonder whether you are just seeing things or overinterpreting,” he said.
The disappointments of LK-99 and Dr. Dias’s claims may not deter other scientists from investigating possible superconductors. Two decades ago, a scientist at Bell Labs, J. Hendrik Schön, published a series of striking findings, including novel superconductors. Investigations showed that he had made up most of his data.
That did not stymie later major superconductor discoveries. In 2014, a group led by Mikhail Eremets, of the Max Planck Institute for Chemistry in Germany, showed that hydrogen-containing compounds are superconductors at surprisingly warm temperatures when squeezed under ultrahigh pressures. Those findings are still broadly accepted.
Russell J. Hemley, a professor of physics and chemistry at the University of Illinois Chicago who followed up Dr. Eremets’s work with experiments that found another material that was also a superconductor at ultrahigh pressure conditions, continues to believe Dr. Dias’s lutetium hydride findings. In June, Dr. Hemley and his collaborators reported that they had also measured the apparent vanishing of electrical resistance in a sample that Dr. Dias had provided, and on Tuesday, Dr. Hemley said he remained confident that the findings would be reproduced by other scientists.
After the Physical Review Letters retraction, the University of Rochester confirmed that it had started a “comprehensive investigation” by experts not affiliated with the school. A university spokeswoman said that it had no plans to make the findings of the investigation public.
The University of Rochester has removed YouTube videos it produced in March that featured university officials lauding Dr. Dias’s research as a breakthrough.
Kenneth Chang has been at The Times since 2000, writing about physics, geology, chemistry, and the planets. Before becoming a science writer, he was a graduate student whose research involved the control of chaos. More about Kenneth Chang

This growing complexity makes it more difficult than ever—and more imperative than ever—for scholars to probe how technological advancements are altering life around the world in both positive and negative ways and what social, political, and legal tools are needed to help shape the development and design of technology in beneficial directions.