Tech Tally: Approaches to Assessing Technological Literacy (2006)

Chapter 7: Computer-Based Assessment Methods

The committee believes that assessments of technological literacy would benefit from—may even require—innovative approaches, especially for the capability dimension, for which test takers must demonstrate iterative problem-solving techniques typical of a design process. Even with thoughtfully developed paper-and-pencil assessments, it would be extremely difficult to assess this dimension. An alternative approach would be to present test takers with hands-on laboratory exercises, but the costs and complexities of developing, administering, and “grading” a truly hands-on design or problem-solving activity for a large sample of individuals would be prohibitive.

Social scientists, public opinion polling organizations, and others interested in assessing what out-of-school experiences contribute to technological literacy have few tools at their disposal. In national-scale surveys, for example, it is customary to contact participants by telephone using various forms of random-digit dialing. However, response rates have dropped significantly in recent years because of the number of research surveys, the exponential increase in cell phone use, and other factors, raising concerns about the reliability and validity of survey data. Free-choice learning environments, such as museums and science centers, are also struggling to find ways of measuring attitudinal changes and learning that result from exposure to exhibits and other programs.

The presentation strategies and analyses possible with computer-based methods would be impractical at best, and often out of the question, with traditional assessment methods. Computer-based methods could have several advantages over traditional methods. They could provide faster, more accurate scoring (Bahr and Bahr, 1997), reduce test-administration times (Shermis et al., 1996), and make possible relatively low-cost scaling to large numbers of test takers. They could also be designed to meet the needs of special populations, including people with physical disabilities and people from diverse cultural or linguistic backgrounds (Naglieri et al., 2004).

However, there are legitimate concerns about using computers in educational testing. A potential limitation, of course, is the lack of computer literacy of the test population. Test takers—children or adults—who do not have at least a basic familiarity with computers and computer keyboarding may not perform as well as those who have at least basic computer skills (Russell, 1999). In addition, requirements for computer memory and processing speeds, graphics quality, and bandwidth—for applications using the Internet—may pose significant cost and resource barriers.

Computer-based tests would be just as susceptible to cheating as traditional paper-and-pencil assessments, although the types of cheating and strategies for countering them may differ. For example, someone other than the registered examinee could take the test or help answer questions on an assessment administered remotely (online). To preclude this kind of cheating, authentication could be attempted using a biometric measure (e.g., a fingerprint or retina scan), or the test taker could be required to take a short, proctored confirmatory test (Segall, 2001).

It is important to keep in mind that although computer technology could potentially increase testing flexibility, authenticity, efficiency, and accuracy, computer-based assessments must still be subject to the same defensible standards as paper-and-pencil assessments, particularly if the results are used to make important decisions. The reference of choice is Standards for Educational and Psychological Testing (AERA et al., 1999).

The following discussion focuses on aspects of computer-based testing that offer significant potential benefits for the assessment of technological literacy.

Computer-Based Adaptive Assessments

Computer-based, flexi-level, branching, and stratified adaptive testing have been investigated for more than 30 years (Baker, 1989; Bunderson et al., 1989; Lord, 1971a,b,c; van der Linden, 1995; Weiss, 1983). Research has focused mostly on using interactive (computer) technology to select, in real time, specific items to present to individual examinees based on their responses to previous items. Incorrect responses evoke less difficult items in that dimension, whereas correct responses evoke increasingly difficult items, until the standard error of estimate for that dimension oscillates regularly—within preset confidence levels—around a particular value.
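
To make the item-selection and stopping logic concrete, the following minimal Python sketch implements an adaptive loop under a Rasch (one-parameter logistic) model. The item bank, stopping threshold, and simulated examinee are invented for illustration; operational programs use far larger calibrated item pools and more sophisticated selection and scoring procedures.

    import math
    import random

    def prob_correct(theta, difficulty):
        # Rasch model: probability that a person of ability theta
        # answers an item of the given difficulty correctly.
        return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

    def item_information(theta, difficulty):
        # Fisher information of one item at the current ability estimate.
        p = prob_correct(theta, difficulty)
        return p * (1.0 - p)

    def run_adaptive_test(item_bank, answer_fn, se_target=0.35, max_items=30):
        theta = 0.0        # current ability estimate
        administered = []  # (difficulty, was_correct) pairs
        remaining = list(item_bank)
        se = float("inf")
        while remaining and len(administered) < max_items:
            # Pick the unused item that is most informative at the current
            # estimate, i.e., whose difficulty is closest to theta.
            item = max(remaining, key=lambda d: item_information(theta, d))
            remaining.remove(item)
            administered.append((item, answer_fn(item)))
            # One Newton-Raphson step toward the maximum-likelihood estimate:
            # correct answers push theta up, incorrect answers push it down.
            info = sum(item_information(theta, d) for d, _ in administered)
            score = sum(r - prob_correct(theta, d) for d, r in administered)
            theta += score / info
            se = 1.0 / math.sqrt(info)  # standard error of the estimate
            if se <= se_target:         # stop once precision is acceptable
                break
        return theta, se

    # Hypothetical simulated examinee with true ability 1.0.
    bank = [d / 4.0 for d in range(-12, 13)]  # difficulties from -3.0 to +3.0
    theta, se = run_adaptive_test(bank, lambda d: random.random() < prob_correct(1.0, d))
    print(f"estimated ability: {theta:.2f} (SE {se:.2f})")

The stopping rule mirrors the behavior described above: the test ends once the standard error of the ability estimate falls below a preset level, rather than after a fixed number of items.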

Adaptive testing has been used by the U.S. Department of Defense in some high-profile areas. For example, a computerized version of the Armed Services Vocational Aptitude Battery (ASVAB) has been administered to thousands of recruits since 1998. ASVAB now uses computers for item writing, item banking, test construction, test administration, test scoring, item and test analyses, and score reporting (Baker, 1989). Overall, research findings and experience suggest that tests using adaptive techniques are shorter, more precise, and more reliable than tests using other techniques (Weiss, 2004). Therefore, it is reasonable to expect that adaptive testing would be effective for assessments of technological literacy.

However, computer-based adaptive testing has some shortcomings. Because of the nature of the algorithms used to select successive test questions, computer-adaptive items are usually presented only once. Thus, test takers do not have an opportunity to review and modify responses, which could be a disadvantage to some test takers who might improve their scores by changing responses on a traditional paper-and-pencil test.

In theory, each person who takes a computer-adaptive test is presented with a unique subset of the total pool of test items, which would seem to make it very difficult for cheaters to beat the system by memorizing individual items. However, this assumption was challenged in the mid-1990s when significant cheating was uncovered on the Educational Testing Service (ETS) computer-adaptive Graduate Record Exam (Fair Test Examiner, 1997), causing the company to withdraw this version of the exam. ETS has since made a number of changes, including enlarging the item pool, and the online test is now back on the market.

The two main costs of computer-adaptive testing are (1) the software coding necessary to create an adaptive test environment and (2) the creation of items. Although the cost varies depending on the nature of the assessment, it is not unusual for an assessment developer to spend $250,000 for software coding (D. Fletcher, Institute for Defense Analyses, personal communication, February 27, 2006). Per-item development costs are about the same for paper-and-pencil and computer-adaptive tests, but two to four times as many items may be required to support a computerized assessment. Nevertheless, computerized adaptive tests, such as the Renaissance Learning Star Reading Test (http://www.renlearn.com/starreading/), are being used in some K–12 settings. Some firms (e.g., Microsoft) are also using adaptive testing to certify an individual’s product knowledge.

Simulations

Rather than presenting a series of test items, even items adapted to an individual’s responses, assessments might be improved by immersing the test taker in simulations of real-life situations. This idea is particularly appealing for assessments of technological literacy, which necessarily emphasize capability, critical thinking, and decision making in addition to basic knowledge.

With simulated environments, performance and competence can be assessed in situations that cannot be attempted in the real world. Aircraft can be crashed, bridges can be tested with heavy loads, expensive equipment can be ruined, and lives can be risked in simulated environments in ways that would be impractical, or unthinkable, in the real world. Simulated environments can also make the invisible visible, compress or expand time, and repeatedly reproduce events, situations, and decision points.

The military has long used simulations to assess the readiness of individuals and groups for military operations (Andrews and Bell, 2000; Fletcher, 1999; Fletcher and Chatelier, 2000; Pohlman and Fletcher, 1999). Industry also uses simulation-based assessments for everything from device maintenance and social role-playing to planning marketing campaigns (Aldrich, 2004). In formal education, simulations and computer-based modeling are being investigated as tools for improving learning in biology, chemistry, and physics (e.g., Concord Consortium, 2005; TELS, 2005; Thinkertools, 2005).

Simulation can be used in a variety of ways: (1) in design, to describe the behavior of a system that does not yet exist; (2) in analysis, to describe the behavior of an existing system under various operating conditions; (3) in training, to shape the behavior of individuals and groups and prepare them for situations they may encounter on the job; and (4) in entertainment, to provide computer games (Smith, 2000). The quality of a simulation depends on its purpose—the question(s) it is expected to answer—and the accuracy with which it represents system components that are relevant to this purpose.

A simulation can be used to situate individuals in the system it represents and then compare their judgments about the operation of the system with those of the simulation. Simulations might represent a system with sufficient accuracy to allow individuals and groups to try to understand and apply technology, without delving into the scientific basis of the system’s operation.

Because simulation-based assessments have highly reactive and interactive capabilities, they can be more sophisticated and elaborate than paper-based tests and provide more comprehensive and more substantive measures of technological literacy. Simulations can not only provide opportunities for individuals or teams to demonstrate technological literacy through designing, building, and application capabilities, but can also review the results, assess the ability to correct errors (if any), apply probability techniques to infer understanding of actions, and “coach” and “supply hints” to improve partial solutions. One can imagine a number of simulated design-related tasks (Box 7-1) in which individuals could build and test their own systems and system components within a larger, simulated context that could assess their actions.

One concern about computer-based simulations is the cost of developing them. In some instances, the costs could even outweigh the value of using simulation in an assessment. But determining when simulation would be too expensive requires that one know the costs and benefits of assessment with and without simulation, and the committee was unable to find studies that address this issue.

Cost-benefit decisions would have to take into account the time-saving potential of so-called authoring tools (software designed to simplify the creation of simulations). A number of off-the-shelf products have been developed for this purpose, such as Macromedia Captivate (http://www.macromedia.com/software/captivate) and Vcommunicator Studio (http://www.vcom3d.com/vstuidio.htm). Other authoring tools have been developed with government funding by academic researchers (e.g., Munro and Pizzini, 1996; Pizzini and Munro, 1998).

One study describes the use of DIAG, a set of authoring tools developed by the Behavioral Technology Laboratories at the University of Southern California, to create a simulation-based instructional module for diagnosing faults in an aircraft power-distribution system (Towne, 1997). The module consists of 28 screen displays (including a fully operational front panel simulation), 13 operational circuit breakers, 11 connectors, 94 wires, and 21 other components that could be faulty. The system was capable of generating and diagnosing 19,110 fault conditions.

Using the authoring tool, Towne found that it required 22 person-days to develop the module with all of the control and logic necessary for its operation as an instructional system. Without DIAG, he estimated that the time required would be 168 days. Whether 22 days of a technician’s time is a reasonable cost for the development of a computer-based simulation for assessing technological literacy depends on the uses of the simulation and the decisions it is intended to inform. In any case, this study suggests that it is reasonable to expect that authoring tools will have a substantial impact on the costs of developing simulations.

Despite increasing use of simulations by industry, the military, and educators, the design, development, and use of simulations specifically for assessments is rarely discussed in the technical literature. In addition, the prospect of assessment via simulation has raised questions about measurement that are just being articulated and addressed by assessment specialists. For instance, O’Neil and colleagues have conducted empirical studies of psychometric properties, such as reliability, validity, and precision (e.g., O’Neil et al., 1997a,b).

After reviewing the potential of using simulation for assessment, the committee identified several questions for researchers (Box 7-2). With simulations, individuals (or groups) may be immersed in a system (or situation) that reacts to their decisions and allows them to achieve their goals, or not—providing feedback on their success or failure. However, sometimes test takers may take correct actions for the wrong reasons—in other words, they may be lucky rather than competent. This could also happen, of course, in any design problem or laboratory-based exercise. Sometimes, if an incorrect decision is made early in the running of a simulation, all subsequent actions, even if correct, may lead to failure at the end. Sometimes, an incorrect decision toward the end of a simulation may be inconsequential. In addition, simulations begin with a set of circumstances—a scenario. A change in any one of the circumstances could change the entire nature of the assessment.

Nevertheless, researchers are making progress in using simulations for assessing complex problem solving comparable to the skills required for technological literacy. For instance, one promising approach is based on evidence-centered design (ECD) (Mislevy et al., 2003). In this approach, capabilities are identified for a subject area and organized into a graphical framework. ECD then shows how to connect the responses of test takers working in a complex simulated environment to the framework. Bennett and colleagues (2003) have provided an example of how ECD might be used to assess scientific-inquiry skills in a simulated environment.
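
The basic idea can be sketched in code. In an ECD-style design, scoring rules turn raw events from the simulated task into observable variables, and an evidence model links each observable to one or more proficiency variables in the graphical framework. In the toy sketch below, every proficiency name, observable, and weight is invented for illustration, and the additive weighting stands in for the statistical models (such as Bayesian networks) that real ECD implementations use.

    from collections import defaultdict

    # Hypothetical evidence model: each observable extracted from the
    # simulation log contributes weighted evidence to one or more
    # proficiency variables in the framework.
    EVIDENCE_MODEL = {
        "chose_valid_test_plan":   [("inquiry_design", 1.0)],
        "controlled_one_variable": [("inquiry_design", 0.5), ("systems_thinking", 0.5)],
        "revised_after_failure":   [("iteration", 1.0)],
    }

    def score_log(event_log):
        # Accumulate evidence for each proficiency from (observable, success) events.
        evidence = defaultdict(float)
        for observable, succeeded in event_log:
            for proficiency, weight in EVIDENCE_MODEL.get(observable, []):
                evidence[proficiency] += weight if succeeded else -weight
        return dict(evidence)

    log = [("chose_valid_test_plan", True),
           ("controlled_one_variable", True),
           ("revised_after_failure", False)]
    print(score_log(log))
    # -> {'inquiry_design': 1.5, 'systems_thinking': 0.5, 'iteration': -1.0}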

Simulations can also be used in networked configurations to assess individuals or groups at any time and anywhere from remote locations. Both the military and the computer-games industry have made major investments in networked simulation. In the military, the focus is on team performance, rather than individual performance. The members of crews, teams, and units are assumed to be proficient in their individual specialties (they are expected to know how to drive tanks, read maps, fly airplanes, fire weapons) before they begin networked simulation exercises (Alluisi, 1991). Because some aspects of technological literacy also involve group coordination and communication, networked simulation may be useful for assessing these competencies. However, as noted, development costs may be higher than for more traditional test methods.

Computer-Based and Web-Based Games

Games, especially games available over the World Wide Web, may also be useful for assessing technological literacy. Most technology-based games incorporate simulations of real and/or imagined systems. Although most games emphasize entertainment over realism, well-designed games can provide both.

Some games are designed to be played by thousands of players. According to one estimate, there are some 5 million players of massively multiplayer online games (MMOGs) with at least 10,000 subscribers each (Woodcock, 2005). One might imagine an ongoing (continuous and unobtrusive) assessment of technological literacy based on an MMOG that collects data aggregated from the activities of hundreds of thousands of players, who could contribute minimal personal data without compromising their privacy. Provisions would have to be put in place to ensure that participation was voluntary.

One example of a game that might be adapted to assess technological literacy is “Monkey Wrench Conspiracy” (available from http://www.Games2train.com). In this game, which is actually a set of training modules for new users of another company’s computer-aided design/computer-aided manufacturing (CAD/CAM) software, the player (i.e., trainee) becomes an intergalactic secret agent who has to save a space station from attack by using CAD software to build tools, repair weapons, and defeat booby traps. The 30 tasks to be performed are presented in order of difficulty and keyed to increasing levels of technological capability. Because the game is modular, modified or new tasks can be added easily; thus, the concept of technological literacy could evolve with the technology.

Another useful feature of computer games is their capacity for motivation. Great numbers of people are motivated to play games, perhaps even games intended to assess technological literacy, for extended periods of time, which could increase the reliability and accuracy of the assessments they provide. A computer game that assesses technological literacy could serve as a national assessment instrument for identifiable segments of the population. If players allow their responses to be anonymously collected and pooled, a well-designed game that taps into technological knowledge and capability could become an unobtrusive, continuous, self-motivating, and inexpensive source of diagnostic information on the levels of technological literacy of different segments of the national population.

Considerable research has been done to identify and describe gender differences in game-seeking and game-playing behavior, whether on a personal computer, video arcade console, or online. In absolute numbers, at least as many women as men play games, including online games, but women prefer different types of games and different types of interactions (Crusoe, 2005; Robar and Steele, 2004). Women prefer quizzes, trivia games, and board and contest games, whereas men prefer action games. Women tend to enjoy the social aspects of online gaming and relationship-building in games. In contrast, men prefer strategy games, military games, and games that involve fighting or shooting. Both men and women seem to be interested in simulations (e.g., The Sims), racing games (e.g., Need for Speed Underground), and role-playing games (e.g., Everquest).

Male-female differences in online game-playing behavior suggest that assessments that rely on computer technology may also be skewed by gender (i.e., sample bias). Other potential sources of sample bias include socioeconomic status and age. Lower income individuals, for example, may have relatively infrequent access to computers and computer-game software and therefore may not have experience or interest in operating computers and engaging in computer-based simulation. Similarly, older adults who have not grown up in the digital age—a demographic Prensky dubs “digital immigrants”—may have varying degrees of difficulty adapting to and using digital technology (Prensky, 2001). They may also simply have less interest in interacting with computers. Whether or not one accepts Prensky’s characterization, assessment developers will have to ensure that the mode of assessment does not bias results based on test takers’ computer literacy skills (Haertel and Wiley, 2003).

Electronic Portfolios

Artists, dancers, musicians, actors, and photographers have used portfolios to demonstrate their competency and show examples of their work. In formal education, portfolios have been used in K–12 and undergraduate classrooms, as well as schools of education (Carroll et al., 1996). Portfolios typically document student projects, often detailing the iterative steps in the production of a finished product. Portfolios can provide information for both formative and summative assessments, as well as an opportunity for making accurate measurements of performance and self-reflection.

Traditional paper-based portfolios, which may include writing, drawing, photos, and other visual information and which have been used for decades by U.S. educators, have several limitations. Most important, they require large amounts of physical storage space, and their contents can be difficult to maintain and share. With the introduction of computers and online communication into educational settings in the early 1990s, digital, or electronic, portfolios could be created (Georgi and Crowe, 1998). Electronic portfolios can be used for many purposes, including marketing or employment (to highlight competencies), accountability (to show attainment of standards), and self-reflection (to foster learning); these purposes may sometimes be at odds with one another (Barrett and Carney, 2005).

To the committee’s knowledge, electronic portfolios have not been used in the United States to assess technological literacy as defined in this report. However, electronic portfolios appear to be excellent tools for documenting and exploring the process of technological design. A number of companies produce off-the-shelf portfolio software (e.g., HyperStudio, FolioLive [McGraw Hill]), and customized software is being developed by universities and researchers in other settings (e.g., Open Source Portfolio Initiative, http://www.osportfolio.org). The question of whether existing software could be adapted for assessments of technological literacy is a subject for further inquiry.

Traditional, paper-based portfolios have been an essential component of the design and technology curriculum in the United Kingdom for documenting and assessing student projects. The portfolios of some 500,000 16-year-olds are reviewed and graded every year. Assembling a portfolio is a learning tool as much as an assessment tool, and students typically report that they learn more from their major project—which may occupy them for as long as eight months of their final year—than from anything else in their design and technology program (R. Kimbell, professor, Technology Education Research Unit, Goldsmiths College, London, personal communication, May 5, 2005).

Recently, the British government funded a research group at Goldsmiths College to develop an electronic-portfolio examination system that enables students to develop design projects digitally, submit them digitally (via a secure website), and have them assessed digitally. In addition to computers and CAD software, other technologies that might enrich electronic portfolios are being considered, such as digital pens that can store what has been written and drawn with them; personal digital assistants that can store task-related data; and speech-to-text software that can enable sharing and analysis of design discussions. If the prototype system is successful, the research team will expand the electronic-portfolio system to four other areas of the curriculum: English, science, and two cross-curricular subjects.

Electronic Questionnaires

Adaptive testing, simulations, games, and portfolios could also be used in informal-education settings, such as museums and science centers. For example, portable devices, such as PC tablets and palm computers, might be used in museums, where people move from place to place. A questionnaire presented via these technologies could include logic branching and dynamic graphics, allowing a respondent to use visual as well as verbal resources in thinking about the question (Miller, 2004).
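
A minimal sketch of the logic branching mentioned above follows. The question texts and branch structure are hypothetical; a real instrument would add response validation, skip patterns, and the dynamic graphics described by Miller (2004).

    # Each question names the next question to ask, depending on the answer.
    QUESTIONS = {
        "q1": {"text": "Did you visit the energy exhibit?",
               "branch": {"yes": "q2", "no": "q3"}},
        "q2": {"text": "Which part of the exhibit did you spend the most time at?",
               "branch": {}},  # end of this path
        "q3": {"text": "Which exhibit did you visit instead?",
               "branch": {}},
    }

    def run_questionnaire(answer_fn, start="q1"):
        # Walk the branching questionnaire, collecting (question, answer) pairs.
        responses, current = [], start
        while current:
            question = QUESTIONS[current]
            answer = answer_fn(question["text"])
            responses.append((current, answer))
            current = question["branch"].get(answer)  # None ends the walk
        return responses

    # Example with a canned respondent who answers "yes" to the first question.
    print(run_questionnaire(lambda text: "yes" if "energy" in text else "done"))
    # -> [('q1', 'yes'), ('q2', 'done')]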

Very short questionnaires, consisting of only one or two questions, could be delivered as text messages on cell phones, a technique that some marketing companies now use to test consumer reactions to potential new products or product-related advertising. At least one polling organization used a similar technique to gauge young voters’ political leanings during the 2004 U.S. presidential election (Zogby International, 2004). Finally, considering that more than 70 percent of U.S. homes have Internet access (Duffy and Kirkley, 2004), informal-learning centers, survey researchers, and others interested in tapping into public knowledge and attitudes about technology could send follow-up questionnaires by e-mail or online. Several relatively inexpensive software packages are available for designing and conducting online surveys, and the resulting data usually cost less and are of higher quality than data from traditional printed questionnaires or telephone interviews.

AERA (American Educational Research Association), APA (American Psychological Association), and NCME (National Council on Measurement in Education). 1999. Standards for Educational and Psychological Testing. Washington, D.C.: AERA.

Aldrich, C. 2004. Simulations and the Future of Learning. San Francisco: Pfeiffer.

Alluisi, E.A. 1991. The development of technology for collective training: SIMNET, a case history. Human Factors 33(3): 343–362.

Andrews, D.H., and H.H. Bell. 2000. Simulation-Based Training. Pp. 357–384 in Training and Retraining: A Handbook for Business, Industry, Government, and the Military, edited by S. Tobias and J.D. Fletcher. New York: Macmillan Reference USA.

Bahr, M.W., and C.M. Bahr. 1997. Education assessment in the next millennium: contributions of technology. Preventing School Failure 4(Winter): 90–94.

Baker, F.B. 1989. Computer technology in test construction and processing. Pp. 409–428 in Educational Measurement, 3rd ed., edited by R.L. Linn. New York: Macmillan.

Barrett, H., and J. Carney. 2005. Conflicting paradigms and competing purposes in electronic portfolio development. Educational Assessment. Submitted for publication.

Bennett, R.E., F. Jenkins, H. Persky, and A. Weiss. 2003. Assessing complex problem-solving performances. Assessment in Education 10(3): 347–359.

Bunderson, C.V., D.K. Inouye, and J.B. Olson. 1989. The four generations of computerized educational measurement. Pp. 367–408 in Educational Measurement, 3rd ed., edited by R.L. Linn. New York: Macmillan.

Carroll, J., D. Potthoff, and T. Huber. 1996. Learning from three years of portfolio use in teacher education. Journal of Teacher Education 47(4): 253–262.

Concord Consortium. 2005. Molecular Logic Project. Available online at: http://molo.concord.org/ (October 19, 2005).

Crusoe, D. 2005. A discussion of gender diversity in computer-based assessment. Available online at: http://www.bitculture.org/storage/DHC_Gender_Div_EdDRvw0705.pdf (December 23, 2005).

Duffy, T.M., and J.R. Kirkley. 2004. Learning Theory and Pedagogy Applied in Distanced Learning: The Case of Cardean University. Pp. 107–141 in Learner Centered Theory and Practice in Distance Education: Cases from Higher Education, edited by T.M. Duffy and J.R. Kirkley. Mahwah, N.J.: Lawrence Erlbaum Associates.

Fair Test Examiner. 1997. ETS and test cheating. Available online at: http://www.fairtest.org/examarts/winter97/etscheat.htm (January 4, 2006).

Fletcher, J.D. 1999. Using networked simulation to assess problem solving by tactical teams. Computers in Human Behavior 15(May/July): 375–402.

Fletcher, J.D., and P.R. Chatelier. 2000. Military Training. Pp. 267–288 in Training and Retraining: A Handbook for Business, Industry, Government, and the Military, edited by S. Tobias and J.D. Fletcher. New York: Macmillan.

Georgi, D., and J. Crowe. 1998. Digital portfolios: a confluence of portfolio assessment and technology. Teacher Education Quarterly 25(1): 73–84.

Haertel, E., and D. Wiley. 2003. Comparability issues when scores are produced under varying test conditions. Paper presented at the Validity and Accommodations: Psychometric and Policy Perspectives Conference, August 4–5, College Park, Maryland.

Lord, F.M. 1971a. Robbins-Monro procedures for tailored testing. Educational and Psychological Measurement 31: 3–31.

Lord, F.M. 1971b. A theoretical study of the measurement effectiveness of flexilevel tests. Educational and Psychological Measurement 31: 805–813.

Lord, F.M. 1971c. The self-scoring flexilevel test. Educational and Psychological Measurement 31: 147–151.

Miller, J. 2004. The Evaluation of Adult Science Learning. Pp. 26–34 in Proceedings of NASA Office of Space Science Education and Public Outreach Conference 2002. ASP Conference Series 319. Washington, D.C.: National Aeronautics and Space Administration.

Mislevy, R.J., R.G. Almond, and J.F. Lukas. 2003. A Brief Introduction to Evidence-Centered Design. RR-03-16. Princeton, N.J.: Educational Testing Service.

Munro, A., and Q.A. Pizzini. 1996. RIDES Reference Manual. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

Naglieri, J.A., F. Drascow, M. Schmidt, L. Handler, A. Prifitera, A. Margolis, and R. Velasquez. 2004. Psychological testing on the Internet: new problems, old issues. American Psychologist 59(3): 150–162.

O’Neil, H.F., K. Allred, and R.A. Dennis. 1997a. Validation of a Computer Simulation for Assessment of Interpersonal Skill. Pp. 229–254 in Workplace Readiness: Competencies and Assessment, edited by H.F. O’Neil. Mahwah, N.J.: Lawrence Erlbaum Associates.

O’Neil, H.F., G.K.W.K. Chung, and R.S. Brown. 1997b. Use of Networked Simulations as a Context to Measure Team Competencies. Pp. 411–452 in Workplace Readiness: Competencies and Assessment, edited by H.F. O’Neil. Mahwah, N.J.: Lawrence Erlbaum Associates.

Pizzini, Q.A., and A. Munro. 1998. VIVIDS Authoring for Virtual Environments. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

Pohlman, D.L., and J.D. Fletcher. 1999. Aviation Personnel Selection and Training. Pp. 277–308 in Handbook of Aviation Human Factors, edited by D.J. Garland, J.A. Wise, and V.D. Hopkin. Mahwah, N.J.: Lawrence Erlbaum Associates.

Prensky, M. 2001. Digital natives, digital immigrants. On the Horizon 9(5). Available online at: http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf (January 4, 2006).

Robar, J., and A. Steele. 2004. Females and Games. Computer and Video Game Industry Research Study. March 2004. Issaquah, Washington: AisA Group.

Russell, M. 1999. Testing on Computers: A Follow-up Study Comparing Performance on Computer and on Paper. Available online at: http://epaa.asu.edu/epaa/v7n20/ (January 4, 2006).

Segall, D.O. 2001. ASVAB Testing via the Internet. Unpublished paper.

Shermis, M.D., P.M. Stemmer, and P.M. Webb. 1996. Computerized adaptive skill assessment in a statewide testing program. Journal of Research on Computing in Education 29(1): 49–67.

Smith, R.D. 2000. Simulation. Pp. 1578–1587 in Encyclopedia of Computer Science, 4th ed., edited by A. Ralston, E.D. Reilley, and D. Hemmendinger. New York: Grove’s Dictionaries.

TELS (Technology Enhanced Learning in Science). 2005. Web-based inquiry science environment. Available online at: http://wise.berkeley.edu/ (October 19, 2005).

Thinkertools. 2005. Force and motion. Available online at: http://thinkertools.soe.berkeley.edu/Pages/force.html (October 19, 2005).

Towne, D.M. 1997. An Intelligent Tutor for Diagnosing Faults in an Aircraft Power Distribution System. Technical Report 118. Los Angeles, Calif.: Behavioral Technology Laboratories, University of Southern California.

van der Linden, W.J. 1995. Advances in Computer Applications. Pp. 105–123 in International Perspectives on Academic Assessment, edited by T. Oakland and R.K. Hambleton. Boston: Kluwer Academic Publishers.

Weiss, D.J. 1983. Computer-Based Measurement of Intellectual Capabilities: Final Report. Minneapolis, Minn.: Computerized Adaptive Testing Laboratory, University of Minnesota.

Weiss, D.J. 2004. Computerized adaptive testing for effective and efficient measurement in counseling and education. Measurement and Evaluation in Counseling and Development 37(2): 70–84.

Woodcock, B.S. 2005. Total MMOG Active Subscriptions (Excluding Lineage, Lineage II, and Ragnorak Online). Available online at: http://mmogchart.com/ (August 22, 2005).

Zogby International. 2004. Young Mobile Voters Pick Kerry over Bush 55% to 40%, Rock the Vote/Zogby Poll Reveals: National Text-Message Poll Breaks New Ground. Press release dated October 31, 2004. Available online at: http://www.zogby.com/news/ReadNews.dbm?ID=919 (August 22, 2005).

In a broad sense, technology is any modification of the natural world made to fulfill human needs or desires. Although people tend to focus on the most recent technological inventions, technology includes a myriad of devices and systems that profoundly affect everyone in modern society. Technology is pervasive; an informed citizenship needs to know what technology is, how it works, how it is created, how it shapes our society, and how society influences technological development. This understanding depends in large part on an individual level of technological literacy.

Tech Tally: Approaches to Assessing Technological Literacy determines the most viable approaches to assessing technological literacy for students, teachers, and out-of-school adults. The book examines opportunities and obstacles to developing scientifically valid and broadly applicable assessment instruments for technological literacy in the three target populations. The book offers findings and 12 related recommendations that address five critical areas: instrument development, research on learning, computer-based assessment methods, framework development, and public perceptions of technology.

This book will be of special interest to individuals and groups promoting technological literacy in the United States, education and government policy makers in federal and state agencies, as well as the education research community.

What is a Computer Skills Assessment Test?

The National Skills Coalition recently compiled data from 43 million job postings, and 92% of the positions required at least minimal computer skills. This shows that more organizations are going digital and computer skills have become a “must have” instead of a “nice to have.”

Businesses now require applicants to have computer skills for positions that never needed them in the past. As cases in point, truck drivers document deliveries on iPads, and restaurant workers use them to take orders. Warehouse employees use computers to check inventory, and store cashiers use point-of-sale systems.

Organizations must evaluate essential skills like typing speed, MS Office® proficiency, knowledge of specific applications, and expertise with in-demand technologies. Many have implemented an industry-leading assessment solution like the eSkill Talent Assessment Platform™ and use computer skills assessment tests to screen applicants.

What is a Computer Skills Assessment Test?

Employers worldwide have added employment testing to their hiring process, but some HR professionals still ask, “What is a computer skills assessment test?”

A computer skills assessment test may be a basic online computer test to recruit candidates with strong Word®, Excel®, and PowerPoint® abilities. It can also be an assessment that evaluates applicants’ knowledge of an application or process, such as Java programming or web development. Computer skills assessment tests are also used to assess specialized IT expertise such as business intelligence, artificial intelligence (AI), and DevOps.

How to Use Basic Online Computer Tests for Hiring

HR professionals use computer skills assessment tests to screen applicants for entry- and mid-level positions. Using a basic computer test online is more efficient because HR teams often receive hundreds of applications for a single job posting, and manual review is not cost-effective.

By reviewing computer skills assessment test results, HR leaders can instantly identify top applicants and eliminate those who do not meet their requirements. This saves time and enables them to focus on the best-qualified candidates.

HR teams can choose from hundreds of eSkill’s validated job- and subject-based skills tests or build customized computer skills assessment tests by selecting questions from various skills tests. Popular assessments for entry- and mid-level positions include the General Typing, MS Office®, Form Fill Data, and Data Entry Operator skills tests.

Candidates’ computer skills assessment test results show who has the skills to do a job but do not confirm that applicants know how to apply them. So, hiring teams include simulations in basic online computer tests and observe candidates’ performance in job-related situations. The eSkill Talent Assessment Platform™ offers simulations for Multitasking, Chat, all MS Office applications, and a Digital Literacy simulation to assess overall computer, Internet, and social media proficiency.

HR teams can use basic online computer tests as-is or build custom computer skills assessment tests using questions from multiple skills tests. For instance, if you need to hire an executive assistant, you can create an assessment using questions from eSkill’s Executive Assistant and MS Office skills tests and add questions from the Personal Assistant and Administrative Coordinator skills tests.

Computer Skills Assessment Tests for Technical Hires

HR professionals rely on computer skills assessment tests when recruiting programmers and software engineers because the eSkill Talent Assessment Platform™ includes assessments to measure programming and software testing skills.

Popular basic online computer tests include the Java, C#, Python, and PHP skills tests and the SharePoint and Salesforce Developer assessments. HR professionals also use the Software Testing and Quality Assurance assessments because developers and programmers must understand basic testing methods and QC/QA best practices for troubleshooting, bug fixes, and alpha/beta testing.

Online Basic Computer Tests That Assess Technology Expertise

When a new technology gains in popularity, related skills are in demand. Hot areas for the foreseeable future include cybersecurity, artificial intelligence (AI), networking/DevOps, and business intelligence/data science.

Demand for cybersecurity professionals is increasing because all organizations are vulnerable to security breaches. The Application Security, Information System Security Engineer, and Application Security Engineer assessments are essential when recruiting cybersecurity experts.

Competition to recruit artificial intelligence experts is intense because only 10% of all workers have AI experience. Hiring teams should use the Basic Artificial Intelligence Knowledge Assessment and include questions from the Machine Learning and Data Analytics skills tests to identify top candidates.

Demand for networking and DevOps engineers continues to increase as more organizations embark on digital transformation initiatives. HR teams recruiting networking and DevOps engineers will find the Network Engineer and DevOps Engineer computer skills assessment tests and the General IT Infrastructure and Networking Essentials assessments essential.

The market for business intelligence and data science is projected to exceed $322.9 billion by 2026. HR teams looking for experts in these areas can use eSkill’s Data Science and Machine Learning Engineer assessments and add questions from the Data Analytics and Quantitative Analysis computer skills assessment tests.

Get Started with Computer Skills Assessment Tests

Businesses that adopt a computer skills assessment testing solution like the eSkill Talent Assessment Platform™ see an immediate improvement in hiring outcomes. They also reduce recruiting costs and decrease time-to-hire. Many eSkill clients have seen around a 70% decrease in hiring costs and almost a 60% reduction in time-to-hire.

Are you ready to learn how administering basic computer tests online can help you hire candidates with the computer and technical expertise you need? Contact us to request a demo.

Unit 1. Basic knowledge and common uses of computers

Unit 1: Self-test

Find answers to this self-test at the back of the book: Answers for Self-Tests

True or false

  • A system unit includes the motherboard, CPU, RAM, hard drive, expansion cards, power supply, etc.
  • A hard drive is a computer’s central structure that connects the different parts of a computer together.
  • All information that was stored in RAM is erased when the computer is turned on.
  • A keyboard is an output device that allows a user to enter characters into a computer.
  • Blogs are written and updated by bloggers. They write about their opinions and thoughts.
  • Computers can help businesses start, run, manage, and grow.

Fill in the blank

  • The [blank] is the main part of a desktop computer.
  • CPU is the abbreviation for [blank].
  • RAM is a type of data storage used in computers that [blank] stores programs and data.
  • Computer [blank] is a display screen used to display information processed by a computer.
  • USB stands for [blank].
  • When you save data or install programs on your computer, the information is written to the [blank] disk.
  • [Blank] is a telephone connection over the Internet. It allows users to make calls over the Internet.
  • A disk drive is hardware that stores and retrieves information, data, files, programs, etc. that are used by your computer. The drive is often referred to by the [blank].

Multiple choice

  • motherboard
  • system unit
  • central processor
  • microprocessor
  • all of the above
  • flash drive
  • simulating experiments
  • patient monitoring
  • diagnostic databases

Key Concepts of Computer Studies by Meizhong Wang is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Basic Computer Skills Test

Overview of the Basic Computer Skills Test

The Basic computer skills test is used as a pre-hire assessment tool to evaluate the computer skills of potential job candidates.

Skills measured

  • Computer Hardware
  • Operating System
  • Software Applications

About the Basic Computer Skills test

The Pre-Hire Basic Computer Skills Test is an assessment tool employed to gauge the computer aptitude of prospective job applicants.

Incorporating a basic computer skills test into the pre-assessment process is important for employers who want to appraise candidates’ competency in fundamental computer operations.

Here are a few reasons why:

  • Ensures job requirements are met: Most jobs today require at least some basic computer skills, such as using Microsoft Office, navigating the internet, and sending emails. A computer skill test can ensure that candidates meet these minimum requirements before being hired.
  • Saves time and resources: Hiring candidates who lack basic computer skills can be costly for businesses. Employers may have to spend time and resources training the employee or may need to hire additional staff to fill in gaps in the candidate’s knowledge. A computer skill test can help identify candidates who are already proficient, saving time and resources in the long run.
  • Identifies potential candidates: A computer skill test can help identify candidates who possess a higher level of proficiency, indicating that they may be a good fit for more advanced roles that require more technical skills.
  • Provides a fair evaluation: By using a standardized computer skill test, employers can evaluate candidates fairly and objectively, ensuring that each candidate is assessed on the same set of skills.

Overall, conducting a basic computer skill test during pre-assessment is an effective way for employers to understand a candidate’s proficiency with essential computer skills and can help ensure that the candidate is a good fit for the job requirements.

Relevant for

  • Academic Editor
  • Administrative Assistant
  • Budget Analyst
  • Business Development Manager
  • Customer Experience Specialist
  • Customer Service Representative
  • Customer Service Team Lead
  • Inbound Sales Account Executive
  • Executive Assistant
  • Account Executive

Computer hardware refers to the physical components of a computer system, such as the central processing unit (CPU), motherboard, and memory. The sub-skill of computer hardware is crucial to assess in the Basic Computer skill assessment because it helps organizations evaluate candidates’ ability to troubleshoot and maintain computer hardware, which is essential for efficient and effective computer usage.

An operating system (OS) is software that manages computer hardware and provides common services for computer programs. Examples of operating systems include Windows, MacOS, and Linux. The sub-skill of operating systems is crucial to assess in the Basic Computer skill assessment because it helps organizations evaluate candidates’ ability to use and navigate different operating systems, which is essential for efficient and effective computer usage.

Software applications refer to computer programs designed to perform specific tasks, such as word processing, spreadsheet analysis, and video editing. The sub-skill of software applications is crucial to assess in the Basic Computer skill assessment because it helps organizations evaluate candidates’ ability to use and navigate different software applications, which is essential for efficient and effective computer usage.

Security refers to the measures taken to protect computer systems and data from unauthorized access, use, disclosure, disruption, modification, or destruction. The sub-skill of security is crucial to assess in the Basic Computer skill assessment because it helps organizations evaluate candidates’ ability to identify and respond to security threats, which is essential for protecting sensitive information and maintaining the integrity of computer systems.

The Basic Computer Skills test is created by a subject-matter expert

Testlify’s skill tests are designed by experienced SMEs (subject matter experts). We evaluate these experts based on specific metrics such as expertise, capability, and their market reputation. Prior to being published, each skill test is peer-reviewed by other experts and then calibrated based on insights derived from a significant number of test-takers who are well-versed in that skill area. Our inherent feedback systems and built-in algorithms enable our SMEs to refine our tests continually.

Key features

  • White label
  • Typing test
  • ATS integrations
  • Custom questions
  • Live coding tests
  • Multilingual support
  • Psychometric tests

Why choose Testlify?

Elevate your recruitment process with Testlify, the finest talent assessment tool. With a diverse test library boasting 1,000+ tests and features such as custom questions, typing tests, live coding challenges, Google Suite questions, and psychometric tests, finding the perfect candidate is effortless.

Enjoy seamless ATS integrations, white label features and multilingual support, all in one platform. Simplify candidate skill evaluation and make informed hiring decisions with Testlify.

Top five hard skills interview questions for Basic Computer Skills

Here are the top five hard-skill interview questions tailored specifically for Basic Computer Skills. These questions are designed to assess candidates’ expertise and suitability for the role, along with skill assessments.

1. How would you rename a file on your computer?

Why this matters:

File management is a fundamental computer skill, and renaming files demonstrates basic proficiency in navigating and organizing digital content.

What to listen for:

Listen for the candidate to describe the process of selecting a file, right-clicking to access the context menu, choosing the “Rename” option, and typing in the new name. Pay attention to their understanding of the steps and their ability to articulate them clearly.

2. Can you explain what a web browser is and how you would open a new tab?

Understanding web browsers and tabs is essential for internet navigation and research

What to listen for:

Candidates should define a web browser as a software application used to access websites. They should describe how to open a new tab (e.g., clicking the “+” icon or using keyboard shortcuts) and emphasize the purpose of tabs in multitasking and browsing efficiency.

3. How do you create a bulleted list in a word processing document?

Why this matters:

Basic document formatting skills are important for creating organized and readable content.

What to listen for:

Candidates should explain how to access the bullet-list feature (usually found in the toolbar or ribbon), apply it to selected text, and note that each item in the list is automatically preceded by a bullet point. Look for clarity in their explanation and their ability to demonstrate the action.
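As a programmatic counterpart, the same formatting can be applied with the third-party python-docx library; a minimal sketch (the output file name is hypothetical):

```python
from docx import Document  # pip install python-docx

doc = Document()
doc.add_paragraph("Bulleted lists improve readability.")

# The built-in "List Bullet" style prefixes each paragraph with a bullet,
# mirroring the toolbar/ribbon bullet-list button in a word processor.
for item in ["First point", "Second point", "Third point"]:
    doc.add_paragraph(item, style="List Bullet")

doc.save("bullet_demo.docx")  # hypothetical output file
```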

4. What is the purpose of copy and paste, and how would you use it to transfer text between different documents?

Why this matters:

Copy and paste is a core computer skill for transferring information across applications and documents.

What to listen for:

Candidates should articulate that copy and paste allows text duplication and transfer. They should describe selecting the text and using keyboard shortcuts (Ctrl+C and Ctrl+V) or right-click options. Pay attention to their understanding of the concept and their ability to explain the steps accurately.
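For illustration, clipboard copy and paste can also be driven from code; a minimal sketch using the third-party pyperclip library (the sample text is hypothetical):

```python
import pyperclip  # pip install pyperclip

# Equivalent of selecting text and pressing Ctrl+C.
pyperclip.copy("Text to transfer between documents")

# Equivalent of pressing Ctrl+V at the destination.
transferred = pyperclip.paste()
print(transferred)
```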

5. Explain how you would insert an image into a presentation slide.

Why this matters:

Basic image insertion is crucial for enhancing visual content in documents and presentations.

What to listen for:

Candidates should mention accessing the image insertion feature (typically found in the Insert menu or toolbar), selecting the image file from their computer, and placing it on the slide. Look for their understanding of the process and their ability to provide a coherent description.
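The same action can be reproduced programmatically with the third-party python-pptx library; a minimal sketch (the image and output file names are hypothetical):

```python
from pptx import Presentation  # pip install python-pptx
from pptx.util import Inches

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[6])  # blank layout

# Mirrors Insert > Picture: choose an image file and place it on the slide.
slide.shapes.add_picture("logo.png", left=Inches(1), top=Inches(1),
                         width=Inches(3))

prs.save("demo_deck.pptx")  # hypothetical output file
```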

Frequently Asked Questions for Basic Computer Skills

What is a Basic Computer Skills assessment?

A Basic Computer Skills assessment is a test designed to evaluate an individual’s proficiency in using a computer and basic digital tools like email, word processing software, and web browsing. The assessment typically includes questions and tasks that test the individual’s ability to perform common computer-related tasks.

How to use Basic Computer Skills assessment for hiring?

A Basic Computer Skills assessment can be used as part of the hiring process to evaluate the technical skills of job candidates. By assessing candidates’ basic computer skills, you can ensure that they have the skills needed to perform the job successfully and minimize the need for additional training.

What roles can I use the Basic Computer Skills assessment for?

What topics can be covered in a Basic Computer Skills assessment?

Why is a Basic Computer Skills assessment important?

A Basic Computer Skills assessment is important because it ensures that job candidates have the necessary technical skills to perform their job duties. In addition, the assessment can help identify any knowledge gaps that may need to be addressed through training or additional support. Overall, a Basic Computer Skills assessment can help improve the efficiency and productivity of employees and the organization as a whole.

Frequently asked questions (FAQs)

Want to know more about Testlify? Here are answers to the most commonly asked questions about our company.

Can I try a sample test before attempting the actual test?

Yes, Testlify offers a free trial for you to try out our platform and get a hands-on experience of our talent assessment tests. Sign up for our free trial and see how our platform can simplify your recruitment process.

How can I select the tests I want from the Test Library?

To select the tests you want from the Test Library, go to the Test Library page and browse tests by categories like role-specific tests, language tests, programming tests, software skills tests, cognitive ability tests, situational judgment tests, and more. You can also search for specific tests by name.

What are ready-to-go tests?

Ready-to-go tests are pre-built assessments that are ready for immediate use, without the need for customization. Testlify offers a wide range of ready-to-go tests across different categories like language tests (22 tests), programming tests (57 tests), software skills tests (101 tests), cognitive ability tests (245 tests), situational judgment tests (12 tests), and more.

Can you integrate with our existing ATS?

Yes, Testlify offers seamless integration with many popular Applicant Tracking Systems (ATS). We have integrations with ATS platforms such as Lever, BambooHR, Greenhouse, JazzHR, and more. If you have a specific ATS that you would like to integrate with Testlify, please contact our support team for more information.

What are the basic technical requirements needed to take your tests?

Testlify is a web-based platform, so all you need is a computer or mobile device with a stable internet connection and a web browser. For optimal performance, we recommend using the latest version of the web browser you’re using. Testlify’s tests are designed to be accessible and user-friendly, with clear instructions and intuitive interfaces.

Are your tests valid and reliable?

Yes, our tests are created by industry subject matter experts and go through an extensive QA process by I/O psychologists and industry experts to make sure that the tests have good reliability and validity and give accurate results.

Hire with Facts, not Fiction.

Resumes don’t tell you everything! Testlify gives you the insights you need to hire the right people with skills assessments that are accurate, automated, and unbiased.


Test library request

These are upcoming tests. If you wish to prioritize this test request, we can curate it for you at an additional cost.

Relationship Manager (Event Management – Wedding)

The Relationship Manager test is an essential tool in the hiring process, offering a comprehensive evaluation of key skills like empathy, customer centricity, and stress management.

Stakeholder Management

The Stakeholder Management test identifies candidates skilled in managing diverse stakeholder relationships, crucial for roles requiring negotiation and collaboration.

Change Leadership

The Change Leadership test identifies candidates adept at leading through transformation, focusing on strategic visioning, effective communication, adaptability and collaborative problem-solving.

16 PF Personality

The 16 PF Personality test assesses 16 primary personality factors, aiding employers in evaluating candidates beyond qualifications.

Enneagram Personality

The Enneagram Personality Test offers deep insights into candidates’ motivations and behaviors, aiding in hiring decisions. It assesses work style, communication and leadership potential.

Organizing Skills for Managers

The Organizing Skills for Managers test evaluates a candidate’s ability to strategize, prioritize, and manage resources effectively.

Construction Assistant

The Construction Assistant test effectively assesses technical skills, safety knowledge, and teamwork, streamlining hiring for construction roles.

Cashier Aptitude

The Cashier Aptitude assessment evaluates a candidate’s ability to handle transactions, process payments, manage cash, and interact with customers.

SEO Specialist (Beginner)

The SEO Specialist test evaluates a candidate’s ability to analyze traffic, develop keyword strategies, and optimize content for search engines.

SEO Specialist (Intermediate)

The SEO Specialist test evaluates a candidate’s ability to analyze web traffic, devise keyword strategies, and optimize content for search engines.

Oracle PeopleSoft

The Oracle PeopleSoft Test assesses expertise in PeopleSoft applications, vital for roles in HR, finance, and supply chain management.

Workday Software

The Workday Software Test evaluates proficiency in Workday solutions, essential for roles involving HR and financial management systems.

Microsoft Windows Server

Microsoft Windows Server is an enterprise-level operating system designed to manage and serve network resources, making it a cornerstone of IT infrastructure.

Adobe Creative Cloud software

Adobe Creative Cloud is a comprehensive suite of creative tools and software that enables professionals to create, design, and innovate.

Autodesk Revit

The Autodesk Revit test assesses candidates’ proficiency in industry-standard software for architecture, engineering, and construction roles, ensuring they can create accurate 3D models.

Epic Systems

The Epic Systems Test assesses candidates’ proficiency in healthcare IT, focusing on software solutions, system analysis, and user interface design.

Microsoft Outlook

Microsoft Outlook is an email and calendar management tool, offering robust features for communication, scheduling, and task organization.

Shell script

Assessing shell scripting skills, this test measures a candidate’s proficiency in automating tasks, managing system operations, and scripting in Unix/Linux environments.

Microsoft Teams

The Microsoft Teams test evaluates proficiency in using Teams for collaboration, communication, and productivity in workplace settings.

Zoom

The Zoom test assesses candidates’ proficiency in using Zoom for effective communication and collaboration in a remote work environment.

Google workspace

The Google Workspace assessment tests are designed to evaluate a candidate’s proficiency in using the various tools and functionalities offered by Google Workspace.

Atlassian Bamboo

The Atlassian Bamboo Test assesses knowledge in Bamboo CI/CD processes, integration with development tools, and automation skills for software deployment.

Objective C

The Objective C Test assesses proficiency in Objective C programming, focusing on language syntax, object-oriented principles, and iOS development.

Marketo Marketing Automation

The Marketo Marketing Automation Test evaluates proficiency in Marketo tools, understanding marketing automation strategies, and the ability to leverage Marketo for effective marketing campaigns.

LinkedIn

The LinkedIn Test evaluates proficiency in LinkedIn usage for networking, personal branding, and digital marketing strategies.

Microsoft Access

The Microsoft Access test assesses candidates’ Access skills for data management and analysis roles. Identify proficient individuals efficiently.

SAP Software

The SAP Software test assesses candidates’ proficiency in SAP software, including its modules and functionalities, relevant to various business processes.

Teradata Database

The Teradata Database Test evaluates expertise in Teradata for data warehousing and analytics, essential for roles in database management and analysis.

Oracle Java

The Oracle Java test assesses candidates’ expertise in Java programming, focusing on language features, object-oriented concepts, and Java application development.

Microsoft Office software

Microsoft Office is a suite of productivity tools, including Word, Excel, PowerPoint, and Outlook, designed for document creation, data analysis, presentations, and email management.

Talk to our product advisor

Schedule a product demo meeting, and we’ll show you Testlify in action.



Computer Assessment


Computer Assessment Information

As you prepare for a rewarding career, it’s important to be proficient in a variety of technological skills and computer programs. As part of this, you will take a computer assessment test to determine your proficiency level.

The computer assessment consists of four different tests, each focusing on a different concept: Computer Concepts, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint.

  • There is no charge for the computer assessment.
  • The passing score for each test is 70%.
  • The Computer Assessment must be taken before, or at the beginning of, your first semester at Liberty.

If you do not pass two or more of the tests, you will need to complete either Information Technology (INFT) 110 or 111.

  • INFT 110 — users with Windows devices
  • INFT 111 — users with Apple devices

If you do not pass just one of the tests, you will need to take the course for that particular concept (a sketch of the full placement logic follows the list below).

  • Computer Concepts — INFT 000
  • Microsoft PowerPoint — INFT 102
  • Microsoft Excel — INFT 103
  • Microsoft Word — INFT 104
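To summarise the placement logic above, here is a minimal sketch in Python; the function and data structure are illustrative, not an official Liberty tool:

```python
# Course codes come from the page above; the function itself is hypothetical.
REQUIRED_COURSE = {
    "Computer Concepts": "INFT 000",
    "Microsoft PowerPoint": "INFT 102",
    "Microsoft Excel": "INFT 103",
    "Microsoft Word": "INFT 104",
}

def courses_required(scores: dict[str, float], windows_device: bool = True) -> list[str]:
    """Return the course(s) a student must take, given percent scores."""
    failed = [test for test, score in scores.items() if score < 70]
    if len(failed) >= 2:
        # Two or more failures: the general course for the student's device.
        return ["INFT 110" if windows_device else "INFT 111"]
    # Exactly one failure: the course for that particular concept.
    return [REQUIRED_COURSE[test] for test in failed]

print(courses_required({"Computer Concepts": 65, "Microsoft Word": 80,
                        "Microsoft Excel": 72, "Microsoft PowerPoint": 90}))
# -> ['INFT 000']
```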

Transfer Students

Transfer students who have transferred in a course that substitutes for INFT 110 have satisfied the technology competency and do not need to take the assessment.

Computer Assessment Schedule

If you choose to take the Computer Assessment, make sure to take it before beginning your first semester at Liberty. It is available online during the Summer and the first few weeks of the Spring Semester, with time allotted for both practice questions and the actual test. You can access the assessment through your INFT-000 section in Canvas, which will link you to a Cengage account.

Please note: If you do not take the assessment during the Summer or the first few weeks of the Spring Semester before attending Liberty University, you will need to take INFT 110 in the following Spring/Fall semester.

Skills Training

Please refer to the skills and concepts page at www.liberty.edu/computerassessment.

While taking the assessment, you may not obtain help from other individuals; the assessment must be taken on your own. We, the students, faculty, and staff of Liberty University, have a responsibility to uphold the moral and ethical standards of this institution and personally confront those who do not.

During this assessment, you may not share or obtain any information from another individual. This is considered cheating. Any student who is caught cheating will be disciplined appropriately. We strictly adhere to The Liberty Way.

If you need help or think you are supposed to take the INFT Assessment and do not have a Canvas course, please contact [email protected] .


Assessment Computing Y2: What is a computer?

Assessment quiz and Knowledge catcher for use at the start and/or end of the unit to assess pupil progress.


Assessment resources

Quiz presentation - Y2: What is a computer?

Quiz - Pupil answer sheet

Knowledge catcher



Computer Revision Test - Grade 1


15 questions


A keyboard is an _____________________ device.

The buttons on the keyboard are called ________________

There are ____________ keys on most keyboards.

Letter keys are used for ___________ letters and words.

There are ____________ letter keys on the keyboard.

Number keys are also called ______________ keys

There are ten number keys from __________

The _______________ is a small blinking line on the computer screen.

How many types of arrow keys are there on keyboard?

You can type CAPITAL letters by pressing the _________ key

_______________ is the longest key on the keyboard.

Space Bar key

Backspace key is used to erase anything typed on the ______ of the cursor.

The __________ key helps us to start a new line.

A computer is an ____________________ machine.

Calculating

The title of Chapter 3 is _________________.

Know Your Computer

Parts of a computer

Keyboard and keys


  • Open access
  • Published: 14 February 2024

Enhancing surgical performance in cardiothoracic surgery with innovations from computer vision and artificial intelligence: a narrative review

Merryn D. Constable, Hubert P. H. Shum & Stephen Clark

Journal of Cardiothoracic Surgery, volume 19, Article number: 94 (2024)


When technical requirements are high, and patient outcomes are critical, opportunities for monitoring and improving surgical skills via objective motion analysis feedback may be particularly beneficial. This narrative review synthesises work on technical and non-technical surgical skills, collaborative task performance, and pose estimation to illustrate new opportunities to advance cardiothoracic surgical performance with innovations from computer vision and artificial intelligence. These technological innovations are critically evaluated in terms of the benefits they could offer the cardiothoracic surgical community, and any barriers to the uptake of the technology are elaborated upon. Like some other specialities, cardiothoracic surgery has relatively few opportunities to benefit from tools with data capture technology embedded within them (as is possible with robotic-assisted laparoscopic surgery, for example). In such cases, pose estimation techniques that allow for movement tracking across a conventional operating field without using specialist equipment or markers offer considerable potential. With video data from either simulated or real surgical procedures, these tools can (1) provide insight into the development of expertise and surgical performance over a surgeon’s career, (2) provide feedback to trainee surgeons regarding areas for improvement, (3) provide the opportunity to investigate what aspects of skill may be linked to patient outcomes which can (4) inform the aspects of surgical skill which should be focused on within training or mentoring programmes. Classifier or assessment algorithms that use artificial intelligence to ‘learn’ what expertise is from expert surgical evaluators could further assist educators in determining if trainees meet competency thresholds. With collaborative efforts between surgical teams, medical institutions, computer scientists and researchers to ensure this technology is developed with usability and ethics in mind, the developed feedback tools could improve cardiothoracic surgical practice in a data-driven way.


Introduction

Cardiothoracic surgical performance depends on integrating an extensive body of knowledge with, often complex and nuanced, technical and non-technical skills [ 1 ]. Given that surgery occurs within the context of individual patients and environmental factors, understanding surgical expertise and performance in a meaningful way that informs patient care and surgical training is a particularly challenging problem. Investment in tools to objectively track and analyse human movement is commonplace in elite sports [ 2 ], and similar tools could be used in surgical environments to enhance performance and patient outcomes [ 3 ]. Movement tracking in a surgical setting is also not unusual, with performance metrics available through surgical techniques and procedures that support data capture [ 4 , 5 ]. However, it is relatively rare to analyse movement and technical expertise in open surgical procedures due to difficulties in extracting the required metrics. Yet, a data-driven approach that provides analytics linking intra-operative clinical and technical processes to patient outcomes may provide a means of targeted improvement in surgical care [ 6 , 7 ]. Recent computer vision innovations may open doors to similar tools that benefit the pursuit of cardiothoracic surgical excellence, which relies less heavily, in relative terms, on robotic and thoracoscopic technology than many other specialities. Thus, the present review intends first to provide background on how objective kinematic parameters link with technical and non-technical skills. The potential for innovations from artificial intelligence and computer vision to track technical and nontechnical skills in real and simulated settings will then be evaluated, and last, the benefits and barriers to the uptake of such technology for the cardiothoracic community will be discussed.

Innovations in machine learning that render the analysis of real-time cardiothoracic surgery accessible even without specialist motion-tracking equipment provide a solution to understanding how the surgical environment shapes operative performance by providing objective measures of technical and co-ordinative skill in the operating theatre. Such analyses could also provide a valuable performance feedback tool for surgeons throughout their careers as they evolve or provide immediate objective indicators of factors that can impact surgical performance, such as fatigue. In terms of training, integrating such data-driven approaches with empirically validated teamwork theory may give trainees or those surgeons being mentored more substantial opportunities to develop the technical and professional skills to excel in their practice.

Surgical expertise

Much work has already been done to delineate markers of surgical expertise and provide measures that assess surgical skills. Whilst it is beyond the scope of the present paper to provide a comprehensive overview (reviews: [ 1 , 8 ]), surgical expertise is thought to comprise both technical and non-technical skills. Technical skill refers to direct psychomotor ability as governed by visuomotor aptitude, economy of movement and co-ordination [ 1 ], whereas non-technical skills encompass a broad range of abilities that support the surgical task [ 1 , 9 , 10 , 11 ]. Specifically, individual markers of expertise beyond technical skills include declarative knowledge, interpersonal skills, situational awareness, and cognitive flexibility (see Footnote 1).

Surgical performance depends not just on a surgeon’s technical and non-technical skills but also on the skills of the surgical team and the administrative, managerial and organisational policies and procedures that support them. Thus, it is crucial to consider that surgical performance occurs within a broader context. For example, better team communication is associated with higher non-technical skill performance overall [ 12 ], and fewer miscommunications occur in teams that have a high degree of familiarity [ 13 ]. These effects on non-technical skills translate further: situational awareness in the surgical team demonstrates a strong negative relationship to the frequency of technical errors [ 14 , 15 ], and high team familiarity appears to be associated with lower rates of postoperative morbidity [ 16 ]. Thus, the path to cardiothoracic surgical excellence requires a holistic and integrative approach; considering context can provide insights beyond direct technical performance [ 9 ].

From general cognitive research and theory into joint [ 17 ] and individual co-ordination [ 18 ], as well as applied surgical investigations [ 8 ], we know that factors beyond technical skill contribute to psychomotor performance and thus may also be observed in movement execution. Non-technical skills, group-related factors and organisational factors can all influence psychomotor performance. Consider how a cardiothoracic surgeon executes an action plan and how the surgeon must adapt their movements due to an unforeseen event such as major unexpected bleeding or a perfusion issue such as air embolism — the speed at which the surgeon adjusts quickly to new circumstances might be influenced by any number of non-technical skills (e.g. mental readiness, fatigue, anticipatory ability, cognitive flexibility or situational awareness). Such individual factors would also depend on team-level factors; for example, working with new team members may mean attention is diverted from the task towards managing new team dynamics and developing confidence and trust in those around them. If attention is directed towards managing new team dynamics, the cognitive resources available for action planning are reduced, and movement quality may suffer. Conversely, a familiar team may allow for more cognitive flexibility or ease with which to adapt to changing patient circumstances. Specifically, the surgeon does not need to monitor team members to the same extent when they understand their behavioural preferences and abilities and have an innate feeling of trust and confidence. Additionally, the tendency to monitor team members may be lower when the surgeon expects that team members will anticipate their needs [ 19 ]. Similarly, managerial or organisational level factors such as policy, established procedures or culture can influence movement by providing formalised mutual understanding among team members, which, in turn, provides a scaffolding for behaviour and minimises the cognitive resources required to coordinate.

Kinematic analysis of surgical performance and expertise

A range of kinematic metrics have been shown to discriminate surgical expertise (see Table  1 ). For example, increased experience is associated with lower trajectory displacement during suturing, lower acceleration with non-dominant hands, and higher velocity while tying sutures [ 20 ]. Similar results have been obtained in live settings [ 21 ]. Early increases in expertise as a trainee surgeon are related to increases in psychomotor performance (e.g. increased velocity and precision of movements). In contrast, later expertise developments are characterised by physical efficiency gains [ 20 , 22 , 23 ]. Although most work in this area is not considered specialty-specific, it is important to consider the generalisability of the findings. Speaking directly to the cardiothoracic speciality, recent work indicates that the expertise effects concerning speed and physical efficiency gains are also evident whilst performing a simulated graft anastomosis [ 24 ].
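To make such metrics concrete, here is a minimal sketch in Python with NumPy; the sampling rate and the trajectory itself are illustrative assumptions, not data from the cited studies:

```python
import numpy as np

# Illustrative input: an (n_frames, 3) array of x, y, z positions for one
# hand or instrument tip, sampled at an assumed 60 Hz.
fs = 60.0
positions = np.cumsum(np.random.randn(300, 3) * 0.001, axis=0)

dt = 1.0 / fs
velocity = np.gradient(positions, dt, axis=0)   # per-axis velocity (m/s)
speed = np.linalg.norm(velocity, axis=1)        # scalar speed per frame
acceleration = np.gradient(velocity, dt, axis=0)

# Path length (total trajectory displacement) and summary statistics.
path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
print(f"path length: {path_length:.3f} m, mean speed: {speed.mean():.3f} m/s")
print(f"mean |acceleration|: "
      f"{np.linalg.norm(acceleration, axis=1).mean():.3f} m/s^2")
```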

Cardiothoracic surgeons and surgical teams must be both experts on a technical level and expert co-ordinators. Yet, little research investigates group-level coordination during surgery using objective kinematic measurements. Recent research on general surgical trainees indicates that expertise is linked with more robust mental representations [ 29 ]. A consequence of stronger mental representations is easier and faster retrieval from memory, which can lead to better action planning and execution, as evident in the objective measurement of movement trajectory and speed of movement initiation and execution.

Research from cognitive psychology in ‘joint action’ also strongly emphasises the importance of mental representations. Humans tend to form joint ‘internal predictive models’ - models for action that team members jointly represent. These shared representations promote smooth and effective co-ordination [ 30 , 31 ] because they allow team members to predict the actions of co-ordinative partners, anticipate their needs and adapt their movements to accommodate those co-ordinative needs, ultimately maximising physical efficiency [ 32 , 33 ], and potentially cognitive efficiency [ 34 ].

Co-ordinative agents can also facilitate shared representation via communicative movements; for example, movements can be exaggerated spatially or temporally to indicate intention [ 35 ] or call attention to important parts of a movement for teaching purposes [ 36 , 37 ]. In attending to the movements’ communicative aspects, the observer can enhance their own internal models and respond more effectively to the signaller. Kinematic analysis of surgery on the co-ordinative level will provide insight into how surgeons optimally engage co-ordinative mechanisms, including communicative movement, to facilitate and enhance task performance.

Marker-less movement tracking for the evaluation of surgical performance

Kinematic analyses have historically been performed in simulated settings [ 38 ] due to the need for specialist motion-tracking equipment that would not typically be present during live cardiothoracic surgical procedures. It is possible to use traditional markered motion capture in live surgery using tools equipped with motion capture markers or by attaching them to a surgeon’s hands. However, it should be noted that any changes to an instrument or a surgeon’s hands may influence their movement patterns. Therefore, extensive pre-testing or experience with the new equipment should be undertaken to ensure that the introduction of markers would not impact technical performance. Further, the practice is not well adopted beyond simulated settings and minimally invasive surgery due to concerns over introducing motion capture sensors and markers into sterile environments in open surgery [ 39 ].

Assessing surgical skill in simulated settings provides only a limited picture of surgical performance, and objective measures in simulated settings cannot be directly linked to patient outcomes. Marker-less tracking techniques using custom software algorithms have demonstrated success for kinematic analysis of surgical skills in both live open surgery [ 40 , 41 ] and simulated settings [ 23 , 42 ]. Such video review is possible using only a video camera.

In computer vision, markerless motion tracking is called ‘pose estimation’; here we focus on innovations in deep learning techniques. Deep learning for pose estimation involves training an artificial neural network on annotated pose data sets [ 43 , 44 ] so that it can ‘learn’ to recognise poses. When provided with new videos of operations, the network identifies the poses within that video. Kinematic parameters can then be extracted using spatial and temporal data. Although deep learning requires more powerful hardware than a standard computer, acquiring and setting up appropriate equipment for markerless motion capture is more accessible than traditional markered motion capture because conventional video cameras alone may be used.
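To ground the workflow, here is a minimal sketch of markerless tracking on video, using the open-source MediaPipe hand-tracking toolbox purely as a stand-in for the purpose-built networks discussed in this review; the video file name is hypothetical:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture("simulated_anastomosis.mp4")  # hypothetical recording

wrist_track = []  # per-frame normalised (x, y) of one landmark
with mp_hands.Hands(static_image_mode=False, max_num_hands=2,
                    min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The network was trained on annotated hand poses; here it only
        # performs inference on each new frame.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            wrist = results.multi_hand_landmarks[0].landmark[
                mp_hands.HandLandmark.WRIST]
            wrist_track.append((wrist.x, wrist.y))
cap.release()
# wrist_track now holds spatio-temporal data from which kinematic
# parameters (velocity, path length, etc.) can be derived.
```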

Pose estimation algorithms [ 45 , 46 , 47 ] and toolboxes [ 48 ] have advanced considerably recently. Further, the speed with which a network can identify poses has markedly increased, making deep learning for pose estimation a viable tool in applied kinematic analysis to provide feedback to surgeons on their performance and link psychomotor metrics to patient outcomes. Such techniques can precisely track extremely delicate procedures requiring microprecision instruments, such as retinal microsurgery [ 49 ]. They thus may even be useful to analyse procedures such as coronary artery bypass surgery or in paediatric heart surgery. Further, innovations in terms of multi-person pose estimation [ 46 ] and multi-instrument pose estimation [ 50 ] provide opportunities to analyse group co-ordination, which is an under-studied area [ 38 ].

Typically, using multiple cameras to enhance 3D spatial precision is optimal. Nevertheless, cameras designed to measure depth without using markers or additional cameras have been developed; for example, Microsoft’s Kinect [ 51 ]. Like some optical marker trackers, these cameras emit infrared light and read depth information from the reflected light. High-speed pose recognition can be achieved through traditional machine learning, allowing for real-time interaction [ 52 ]. Depth cameras have been used for the 3D pose estimation of medical instruments [ 53 ].

Pose estimation techniques do have limitations. For example, occluded points are either not estimated or are estimated with lower accuracy. Many standard methods using markers also suffer from this limitation (see Footnote 2); multiple cameras in markered and markerless [ 56 ] approaches may help alleviate occlusion issues, and gap-filling techniques may be employed post hoc to deal with short durations of missing data. As an alternative to simple gap-filling techniques, and to prioritise accuracy, artificial neural networks can be used to estimate the missing information [ 57 ]. Alternatively, a hybrid approach that incorporates wearable sensors may assist. Flexible sensors (accelerometers and gyroscopes) that can be unobtrusively worn under gloves may provide useful information when the line of sight is obstructed. These sensors have recently been shown to produce measures that differentiate experts and novices during a simulated graft anastomosis [ 24 ].
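For the simple post hoc gap filling mentioned above, a minimal sketch with pandas (the coordinate values and the gap-length limit are illustrative):

```python
import numpy as np
import pandas as pd

# Illustrative 1-D coordinate track with short occlusion gaps (NaNs).
x = pd.Series([0.10, 0.12, np.nan, np.nan, 0.19, 0.21, np.nan, 0.25])

# Linear interpolation restricted to short interior gaps, so long
# occlusions are left missing rather than invented.
filled = x.interpolate(method="linear", limit=2, limit_area="inside")
print(filled.to_list())
```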

Training a network to perform pose estimation adequately takes substantial computational time, often days, even on hardware with sizeable computational power. By contrast, some forms of markered motion capture are fast enough to provide feedback within milliseconds on appropriate computer hardware. Nevertheless, a pre-trained network can perform pose estimation extremely quickly, reaching real-time processing speeds (10-30 Hz). Last, purpose-built algorithms may be required depending on the goals of the analysis. There have been recent initiatives that aim to collate surgical tool or surgical procedure data sets [ 58 , 59 ], which can be used for developing pose estimation models.

Pose estimation data and other forms of kinematic data can also be fed into artificial neural networks designed to classify a cardiothoracic surgeon’s skill level [ 60 , 61 , 62 ]. Most demonstrations have been performed in the context of robotic or laparoscopic surgery as more data is available. However, with appropriate videos, surgical skills in open cardiac or thoracic surgery could be assessed using similar approaches. To build such classifiers, neural networks are trained on data (e.g. kinematic data, videos) taken from surgeons at all skill levels. The neural network then extracts parameters common to each group and then classifies new videos based on how well they match the typical parameters of a given skill level. These parameters, however, may not always represent meaningful skill differences between levels but may be arbitrary parameters that co-occur with skill differences. Thus, cardiothoracic surgeons with unique approaches may be disadvantaged even though such technique differences may not be meaningful for patient outcomes. Further, surgical adaptations associated with environmental factors or personal characteristics may not be accounted for sufficiently. For example, classifier algorithms are known to be biased as a function of their inputs and often disadvantage under-represented groups [ 63 ]. Thus, for such skill assessment algorithms to be developed, comprehensive data sets for a given procedure that demonstrate sufficient variation in action execution and skill level are needed from surgical teams and hospitals.
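To show the shape of such a pipeline, here is a minimal sketch using scikit-learn; the cited studies use various network architectures, whereas a random forest over synthetic kinematic features is used here purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative stand-in data: one row of kinematic summary features per
# recorded task (e.g. path length, mean speed, idle time, jerk), with a
# skill label per row (0 = novice, 1 = intermediate, 2 = expert).
X = rng.normal(size=(90, 4))
y = np.repeat([0, 1, 2], 30)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Feature importances hint at which kinematic parameters drive the
# classification, a simple step toward explainability.
clf.fit(X, y)
print(clf.feature_importances_)
```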

Whereas deep learning for kinematic analysis is unlikely to raise concerns over bias because it directly measures an event, a classifier takes the decision out of human hands and can make decisional factors opaque. Thus, developers should make every effort to ensure that the training data is not biased. Ideally, any classification should be accompanied by a meaningful description of the decisional parameters (Explainable Artificial Intelligence [ 64 ]). For example, classifiers can also be designed to provide feedback on what data components are most predictive of the skill classification [ 60 ].

A recent systematic review indicated that reported machine learning methods developed to classify expertise typically achieved over 80% accuracy [ 65 ]. With further accuracy gains likely, the technology could facilitate assessment in competency-based education. With limitations in mind, we believe such methods could be most usefully employed as a formative tool that aids surgeons in developing their technical expertise, supplementing more human resource-intensive means of feedback. Indeed, a recent systematic review highlighted that personalised feedback supported by Artificial Intelligence is well accepted and considered beneficial by users [ 66 ]. However, it is noted that there is a need for rigorous experimental studies that contrast traditional pedagogical interventions with those from artificial intelligence to evaluate any potential learning gains.

Understanding the impact of technical skill on patient outcomes via kinematic data

Patient outcomes can be linked to technical skill. For example, higher-rated technical performance is linked with better post-operative outcomes in neonatal cardiac surgery [ 67 ]. High skill levels are also linked to operating time [ 20 ], which may consequently influence patient outcomes. For example, prolonged femoral-popliteal bypass procedures in vascular surgery are associated with increased surgical site infection and extended post-operative stays [ 68 ]. As we operate in an age of big data, opportunities to understand the factors contributing to patient outcomes within and between hospitals will become more accessible through data mining techniques. Using deep learning to tap into kinematic data is a particularly exciting innovation that will contribute even further to the standard information that is commonly extracted.

First, it is essential to look for objective measurements of expertise linked to patient outcomes in real surgery to identify the most critical aspects of technical skill. However, a necessary corollary to this work is understanding the nuance in technical expertise between surgeons. Nuanced kinematic variations [ 69 ], for example, may represent differences in technique that surgeons have adapted to their own biomechanical and cognitive needs or the surgical procedure. Further, surgeons are heavily influenced by their training, resulting in different techniques for the same procedure. Yet, international and national registry and audit data show similar outcomes between surgeons, hospitals and nations, indicating that variations in approach are not necessarily meaningful. Overall, a better understanding of which markers of technical skill are related to patient outcomes will inform cardiothoracic surgical training and development or decline over a surgeon’s whole career. As with elite athletes, surgical trainees should be supported with movement feedback to explore what works for them [ 70 ]. Identifying if, how and when technical skill impacts patient outcomes will require exceptional interdisciplinary cooperation from surgeons, surgical team members, hospitals, and researchers. Large amounts of data will be required given that many other factors can influence patient outcomes within and outside the operating room.

Understanding the routes by which nontechnical skills influence patient outcomes via kinematic data

One route by which non-technical skills influence patient outcomes particularly pertinent to kinematic analysis is their capacity to feed into the technical execution of surgical actions [ 8 ]. For example, an overall assessment of non-technical skills [ 71 ] measuring communication and interaction, situation awareness and vigilance, co-operation and team skills, leadership and managerial skills, and decision-making was linked to the technical performance of surgeons performing carotid endarterectomy. The same is likely to be true in cardiothoracic surgery more generally.

Non-technical skills are not only a modulator of technical skills but may also influence patient outcomes directly. In fact, a video evaluation study assessing the surgical skills of surgeons (including 83 cardiac and cardiothoracic surgeons) demonstrated that increased scores for non-technical skills, independent of technical skill, were related to higher patient safety ratings [ 72 ]. This direct influence is particularly evident in crisis settings in the operating room, where non-technical skills drop considerably for all expertise levels [ 73 ], whereas changes in technical performance are less pronounced or negligible for highly experienced surgeons [ 74 ]. A similar pattern can be observed in response to fatigue [ 28 ]. If situational awareness drops, for example, the cardiothoracic surgeon will be less able to monitor all aspects of the surgery well and may not make the most informed patient care decisions.

Measuring non-technical skills objectively via video-based data is less straightforward than measuring movement execution. Nevertheless, it is achievable. An investigation of nursing students showed that video-based feedback on gaze allowed the trainee to develop situational awareness [ 75 ]. In addition, recent advances show that human attention can be tracked within a task space by modelling head pose and orientation. Of course, this approach is less precise than using eye-tracking technology. However, such modelling helps understand various factors contributing to situational awareness, such as concentration loss, collaborative attention and stress levels more generally whilst engaging in collaborative tasks [ 76 ]. Further, as mentioned earlier, communicative gestures can be differentiated from goal-directed gestures [ 35 , 36 , 37 ] in terms of their kinematic features, and collaborative responses could be indexed by reaction times to requests. Although it is theoretically achievable to objectively measure some non-technical skills from video data as indicated by work in other fields, such an approach would need to be empirically validated within a cardiothoracic surgical setting.

Analysing co-ordinative kinematics via deep learning techniques in real crises may explain how and why performance changes. Such analyses will also provide insight into how the organisation and the wider surgical team can optimally support the operating surgeon.

Implications and implementation

Understanding markers of expertise (and learning trajectories) for various surgical tasks, how those markers relate to cognitive mechanisms that optimise performance, and subsequently, how performance impacts patient outcomes would help design targeted training programmes for cardiothoracic trainees and established surgeons at all levels wishing to enhance their own motor expertise. Understanding the aspects of expertise linked to surgical outcomes would allow a surgeon to target the most beneficial areas of improvement at any given time during their learning curve as this changes throughout their career.

Traditional methods of teaching and development focus on repeated practice under the supervision of a more senior surgeon. With surgical coaching and mentorship, the surgeon engages in an ongoing process of performance reflection and adjustment under the guidance of a surgical coach. Although research investigating the efficacy of surgical coaching is still new [ 77 ], current evidence suggests it is highly effective for skill acquisition and development, and participants receive it well [ 78 , 79 , 80 ].

Deliberate individualised practice is essential to surgical skill acquisition [ 81 ] and is also an essential part of the surgical coaching process. Tracking metrics over time allows a surgeon to engage in deliberate practice by measuring improvement and using targeted feedback. Thus, kinematic tracking and feedback tools could provide further targeted guidance to complement the feedback and reflection provided in surgical coaching. For example, a user could submit a video of their own performance to software that they wish to get feedback on. The software would then analyse kinematic parameters known to reflect expertise against collective benchmarks or their own previous performance and provide targeted recommendations to improve performance. Feedback could also be provided online during simulations if such feedback was helpful.
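A minimal sketch of how such a feedback step might compare a session’s metrics against benchmarks; all values and the flagging threshold are hypothetical:

```python
import numpy as np

# Hypothetical benchmark distributions (mean, sd) for two kinematic
# parameters, e.g. pooled from expert recordings of the same task.
benchmarks = {"path_length_m": (1.8, 0.3), "mean_speed_mps": (0.12, 0.02)}

# Hypothetical values extracted from the submitted video.
session = {"path_length_m": 2.6, "mean_speed_mps": 0.09}

for name, value in session.items():
    mean, sd = benchmarks[name]
    z = (value - mean) / sd
    if abs(z) > 1.5:  # illustrative threshold for flagging feedback
        print(f"{name}: {value} is {z:+.1f} SD from the expert benchmark")
```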

Whilst it is important to note that engaging in facilitated reflection and problem solving with a surgical coaching tool based on artificial intelligence is unlikely to provide the same experience as with a human coach or mentor given the lack of authentic empathy and emotional intelligence, very recent research suggests good outcomes from AI coaches that aim to assist with goal attainment more generally [ 82 ]. Given that surgical training and coaching are extremely human resource intensive, increasing the availability of tools that provide opportunities to gain additional feedback without needing a human expert may assist in accelerating the development of surgical skills [ 83 ] to complement or facilitate the more resource-intensive visual assessment used in surgical coaching [ 84 ]. Furthermore, surgical coaches could use the technology to assist decision-making or feedback (human in the loop, [ 85 ]). Indeed, using audio-visual technology to review performance in coaching contexts has offered additive benefits over in-person observation alone [ 86 ].

This technology also offers the potential to monitor the surgical team to provide real-time feedback that may benefit decision-making processes during surgery. For example, given that fatigue is related to technical errors [ 8 ], when a surgical team member shows signs of fatigue during long surgeries, an alert could notify the team to take a break or alter roles. Monitoring surgical skills using pose estimation methods could also assist offline in determining how a cardiothoracic surgeon’s role may need to alter toward the end of their career if objective measures of intra-operative performance begin to decline and affect patient outcomes.

Competency-based education and certification are highly labour-intensive and thus could also benefit from these technologies. Pose estimation algorithms could be used to assess if a trainee surgeon meets the competency requirements. Of course, it would be essential to assess trust and acceptance of the technology for this purpose in addition to the accuracy of the algorithm. A hybrid approach could reduce labour if trust and acceptance require human oversight (human in the loop, [ 85 ]). For example, algorithms could be used throughout training to track progression against milestones and flag when the trainee meets the competency threshold to be formally assessed by a human.

Beyond training and monitoring interventions, correlative ecological studies investigating already recorded operations could help further understand what factors link with patient outcomes. Indeed, early investigations examining intra-operative performance in laparoscopic and robotic-assisted procedures demonstrate a link to short-term patient outcomes [ 4 ]. This new understanding would provide valuable information that can be used to develop data-driven policies, procedures and environments that support the optimal performance of surgeons. By training artificial neural networks to predict and track a surgeon’s movement, large-scale investigations evaluating kinematic data from recorded surgeries are possible. This research could be additionally informative for competency-based educational frameworks as there is currently no evidence to suggest that a trainee’s progression through milestones links with patient outcomes [ 87 ]. By understanding more clearly what aspects of skill relate to patient outcomes, it may be possible to determine competency-based thresholds informed by empirical evidence.

Of course, implementing any performance monitoring or feedback tool should be done in such a way as to foster trust between users (cardiothoracic surgeons and their teams) and the institution. Installing cameras to monitor operations is becoming more commonplace with the use of ‘operating room black boxes’, but it is possible that such initiatives could result in resistance. In a cross-sectional survey of Danish healthcare professionals [ 88 ], on average, opinions toward using a black box were neutral or positive, with little concern over data safety. Conversely, in a similar study conducted in Canada [ 89 ], there were more significant concerns over data safety and the potential for litigation, highlighting the importance of considering any concerns within a societal, cultural and legislative context. Irrespective of perception, video data most often supports healthcare professionals from a legal perspective [ 90 ] and thus is more likely to offer protection than be a threat. Ultimately, the success of the technology elaborated upon within this review will rely on fostering a culture of trust and engagement with users and institutions to ensure that any concerns are addressed and there are strong institutional policies to protect and support the interests of the observed cardiothoracic surgical team.

Ethical and legal considerations

A solid institutional policy should be developed to ensure that video footage (both from simulated and live procedures) is recorded, stored and used ethically and legally. Potential concerns and methods for addressing those concerns have been summarised in detail elsewhere [ 91 ]. It is common to raise legal fears concerning the recording and storage of footage. However, as mentioned already, these fears are likely unfounded: typically, video data protects healthcare professionals rather than puts them at risk from a legal perspective [ 90 ]. Further, it is not considered necessary for video data to be added to a patient’s medical record if the video is collected solely for quality improvement because it is not in any way used for the patient’s care [ 90 ]. Nevertheless, there may be variations in patient consent requirements across institutions and regions. Confidentiality and anonymity should be carefully considered as, in some cases, the nature of the research would require analysis of identifiable or sensitive personal information.

Conclusions

New means of analysing surgical performance open doors to understanding surgical excellence in the cardiothoracic specialty. Other disciplines have traditionally benefitted from technological innovations around training and the objective measurement of performance; the fields of computer vision and Artificial Intelligence now offer opportunities that are ideal for use in the cardiothoracic surgical environment. Further, these tools are feasible to use within the operating room, which will assist in understanding how technical and non-technical skills influence patient outcomes. However, the technology is still in the early stages, and thus, further innovation will require commitment and partnership from hospitals and cardiothoracic surgeons to provide (1) data that can be used to develop feedback tools and (2) constructive direction to ensure that any tool used for applied purposes has been developed to meet the needs of the user adequately. With data from simulated and real surgical settings, research aimed at understanding how expertise relates to the cognitive mechanisms that support psychomotor performance within the context of surgery will further help design targeted training interventions and surgical environments more optimally to enhance surgical outcomes. With big data generated across many institutions, it may be possible to develop data-driven guidelines for task execution and team coordination that reduce the surgeon and team’s physical and cognitive load [ 6 ].

Data availability

Not applicable.

Notes

Footnote 1: Some theoretical conceptualisations of surgical skill consider cognitive skills to be distinct from non-technical skills [ 1 ].

Footnote 2: Electromagnetic tracking [ 54 ] avoids issues of visual occlusion, but in addition to the requirement of markers it requires that there be no magnetic interference present. Varieties of tracking techniques, and their suitability for given tasks, have been summarised elsewhere [ 55 ].

References

Madani A, Vassiliou MC, Watanabe Y, Al-Halabi B, Al-Rowais MS, Deckelbaum DL, et al. What are the principles that guide behaviors in the operating room? Creating a framework to define and measure performance. Ann Surg. 2017;265:255–67. https://doi.org/10.1097/SLA.0000000000001962.


Petancevski EL, Inns J, Fransen J, Impellizzeri FM. The effect of augmented feedback on the performance and learning of gross motor and sport-specific skills: a systematic review. Psychol Sport Exerc. 2022;63:102277. https://doi.org/10.1016/j.psychsport.2022.102277 .


Yule S, Janda A, Likosky DS. Surgical Sabermetrics: applying athletics data science to enhance operative performance. Ann Surg Open. 2021;2:e054. https://doi.org/10.1097/AS9.0000000000000054.

Balvardi S, Kammili A, Hanson M, Mueller C, Vassiliou M, Lee L, et al. The association between video-based assessment of intraoperative technical performance and patient outcomes: a systematic review. Surg Endosc. 2022;36:7938–48. https://doi.org/10.1007/s00464-022-09296-6 .

Mazer L, Varban O, Montgomery JR, Awad MM, Schulman A. Video is better: why aren’t we using it? A mixed-methods study of the barriers to routine procedural video recording and case review. Surg Endosc. 2022;36:1090–7. https://doi.org/10.1007/s00464-021-08375-4 .

Chadebecq F, Vasconcelos F, Mazomenos E, Stoyanov D. Computer Vision in the Surgical operating room. VIS. 2020;36:456–62. https://doi.org/10.1159/000511934 .

Vedula SS, Hager GD. Surgical data science: the new knowledge domain. Innov Surg Sci. 2017;2:109–21. https://doi.org/10.1515/iss-2017-0004 .


Hull L, Arora S, Aggarwal R, Darzi A, Vincent C, Sevdalis N. The Impact of Nontechnical Skills on technical performance in surgery: a systematic review. J Am Coll Surg. 2012;214:214–30. https://doi.org/10.1016/j.jamcollsurg.2011.10.016 .

Carthey J, de Leval MR, Wright DJ, Farewell VT, Reason JT. Behavioural markers of surgical excellence. Saf Sci. 2003;41:409–25. https://doi.org/10.1016/S0925-7535(01)00076-5 .

Crossley J, Marriott J, Purdie H, Beard JD. Prospective observational study to evaluate NOTSS (non-technical skills for surgeons) for assessing trainees’ non-technical performance in the operating theatre. Br J Surg. 2011;98:1010–20. https://doi.org/10.1002/bjs.7478 .

Yule S, Flin R, Paterson-Brown S, Maran N. Non-technical skills for surgeons in the operating room: a review of the literature. Surgery. 2006;139:140–9. https://doi.org/10.1016/j.surg.2005.06.017 .

Gillespie BM, Harbeck E, Kang E, Steel C, Fairweather N, Chaboyer W. Correlates of non-technical skills in surgery: a prospective study. BMJ Open. 2017;7:e014480. https://doi.org/10.1136/bmjopen-2016-014480 .

Gillespie BM, Chaboyer W, Fairweather N. Interruptions and miscommunications in surgery: an observational study. AORN J. 2012;95:576–90. https://doi.org/10.1016/j.aorn.2012.02.012 .

Siu J, Maran N, Paterson-Brown S. Observation of behavioural markers of non-technical skills in the operating room and their relationship to intra-operative incidents. Surgeon. 2016;14:119–28. https://doi.org/10.1016/j.surge.2014.06.005 .

McCulloch P, Mishra A, Handa A, Dale T, Hirst G, Catchpole K. The effects of aviation-style non-technical skills training on technical performance and outcome in the operating theatre. Qual Saf Health Care. 2009;18:109–15. https://doi.org/10.1136/qshc.2008.032045 .

Kurmann A, Keller S, Tschan-Semmer F, Seelandt J, Semmer NK, Candinas D, et al. Impact of team familiarity in the operating room on surgical complications. World J Surg. 2014;38:3047–52. https://doi.org/10.1007/s00268-014-2680-2 .

Sebanz N, Knoblich G. Progress in joint-action research. Curr Dir Psychol Sci. 2021:0963721420984425. https://doi.org/10.1177/0963721420984425 .

Jeannerod M. Motor cognition: what actions tell the self. Oxford: Oxford University Press; 2006.

Frasier LL, Pavuluri Quamme SR, Ma Y, Wiegmann D, Leverson G, DuGoff EH, et al. Familiarity and communication in the operating room. J Surg Res. 2019;235:395–403. https://doi.org/10.1016/j.jss.2018.09.079 .

D’Angelo A-LD, Rutherford DN, Ray RD, Laufer S, Kwan C, Cohen ER, et al. Idle time: an underdeveloped performance metric for assessing surgical skill. Am J Surg. 2015;209:645–51. https://doi.org/10.1016/j.amjsurg.2014.12.013 .

D’Angelo A-LD, Rutherford DN, Ray RD, Laufer S, Mason A, Pugh CM. Working volume: validity evidence for a motion-based metric of surgical efficiency. Am J Surg. 2016;211:445–50. https://doi.org/10.1016/j.amjsurg.2015.10.005 .

Glarner CE, Hu Y-Y, Chen C-H, Radwin RG, Zhao Q, Craven MW, et al. Quantifying technical skills during open operations using video-based motion analysis. Surgery. 2014;156:729–34. https://doi.org/10.1016/j.surg.2014.04.054 .

Azari D, Miller BL, Le BV, Greenberg CC, Radwin RG. Quantifying surgeon maneuvers across experience levels through marker-less hand motion kinematics of simulated surgical tasks. Appl Ergon. 2020;87:103136. https://doi.org/10.1016/j.apergo.2020.103136 .

Boyajian GP, Zulbaran-Rojas A, Najafi B, Atique MMU, Loor G, Gilani R, et al. Development of a sensor technology to objectively measure dexterity for cardiac surgical proficiency. Ann Thorac Surg. 2023. https://doi.org/10.1016/j.athoracsur.2023.07.013 .

Sharon Y, Jarc AM, Lendvay TS, Nisky I. Rate of orientation change as a new metric for robot-assisted and open surgical skill evaluation. IEEE Trans Med Robot Bionics. 2021;3:414–25. https://doi.org/10.1109/TMRB.2021.3073209 .

Bann SD, Khan MS, Darzi AW. Measurement of surgical dexterity using motion analysis of simple bench tasks. World J Surg. 2003;27:390–4. https://doi.org/10.1007/s00268-002-6769-7 .

Xeroulis GJ, Park J, Moulton C-A, Reznick RK, Leblanc V, Dubrowski A. Teaching suturing and knot-tying skills to medical students: a randomized controlled study comparing computer-based video instruction and (concurrent and summary) expert feedback. Surgery. 2007;141:442–9. https://doi.org/10.1016/j.surg.2006.09.012 .

Kahol K, Leyba MJ, Deka M, Deka V, Mayes S, Smith M, et al. Effect of fatigue on psychomotor and cognitive skills. Am J Surg. 2008;195:195–204. https://doi.org/10.1016/j.amjsurg.2007.10.004 .

Yeh VJ-H, Mukhtar F, Yudkowsky R, Baloul MS, Farley DR, Cook DA. Response process validity evidence for Video Commentary Assessment in surgery: a qualitative study. J Surg Educ. 2022;79:1270–81. https://doi.org/10.1016/j.jsurg.2022.05.006 .

Curioni A, Vesper C, Knoblich G, Sebanz N. Reciprocal information flow and role distribution support joint action coordination. Cognition. 2019;187:21–31. https://doi.org/10.1016/j.cognition.2019.02.006 .

Wolpert DM, Doya K, Kawato M. A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci. 2003;358:593–602. https://doi.org/10.1098/rstb.2002.1238 .

Török G, Pomiechowska B, Csibra G, Sebanz N. Rationality in Joint Action: maximizing Coefficiency in Coordination. Psychol Sci. 2019;30:930–41. https://doi.org/10.1177/0956797619842550 .

Constable MD, Bayliss AP, Tipper SP, Spaniol AP, Pratt J, Welsh TN. Ownership status influences the degree of joint facilitatory behavior. Psychol Sci. 2016;27:1371–8. https://doi.org/10.1177/0956797616661544 .

Lagomarsino M, Lorenzini M, Constable MD, De Momi E, Becchio C, Ajoudani A. Maximising Coefficiency of Human-Robot handovers through reinforcement learning. IEEE Rob Autom Lett. 2023;8:4378–85. https://doi.org/10.1109/LRA.2023.3280752 .

Pezzulo G, Donnarumma F, Dindo H. Human sensorimotor communication: a theory of signaling in online social interactions. PLoS ONE. 2013;8:e79876. https://doi.org/10.1371/journal.pone.0079876 .

Strachan JWA, Curioni A, Constable MD, Charbonneau M. A methodology for distinguishing copying and reconstruction in cultural transmission episodes. In: Proceedings of the 42nd Annual Meeting of the Cognitive Science Society; 2020.

Strachan JWA, Curioni A, Constable MD, Knoblich G, Charbonneau M. Evaluating the relative contributions of copying and reconstruction processes in cultural transmission episodes. PLoS ONE. 2021;16:e0256901. https://doi.org/10.1371/journal.pone.0256901 .

Mitchell EL, Arora S, Moneta GL, Kret MR, Dargon PT, Landry GJ, et al. A systematic review of assessment of skill acquisition and operative competency in vascular surgical training. J Vasc Surg. 2014;59:1440–55. https://doi.org/10.1016/j.jvs.2014.02.018 .

Reiley CE, Lin HC, Yuh DD, Hager GD. Review of methods for objective surgical skill evaluation. Surg Endosc. 2011;25:356–66. https://doi.org/10.1007/s00464-010-1190-z .

Frasier LL, Azari DP, Ma Y, Quamme SRP, Radwin RG, Pugh CM, et al. A marker-less technique for measuring kinematics in the operating room. Surgery. 2016;160:1400–13. https://doi.org/10.1016/j.surg.2016.05.004 .

Kadkhodamohammadi A, Gangi A, de Mathelin M, Padoy N. A multi-view RGB-D approach for human pose estimation in operating rooms. 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), 2017, p. 363–72. https://doi.org/10.1109/WACV.2017.47 .

Casy T, Tronchot A, Thomazeau H, Morandi X, Jannin P, Huaulmé A. Stand-up straight! Human pose estimation to evaluate postural skills during orthopedic surgery simulations. Int J Comput Assist Radiol Surg. 2023;18:279–88. https://doi.org/10.1007/s11548-022-02762-5 .

Ionescu C, Papava D, Olaru V, Sminchisescu C. Human3.6M: large-scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans Pattern Anal Mach Intell. 2014;36:1325–39. https://doi.org/10.1109/TPAMI.2013.248 .

Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision – ECCV 2014. Volume 8693. Cham: Springer International Publishing; 2014. pp. 740–55. https://doi.org/10.1007/978-3-319-10602-1_48 .

Cao Z, Simon T, Wei S-E, Sheikh Y. Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI: IEEE; 2017, p. 1302–10. https://doi.org/10.1109/CVPR.2017.143 .

Huang Y, Shum HPH, Ho ESL, Aslam N. High-speed multi-person pose estimation with deep feature transfer. Comput Vis Image Underst. 2020;197–198:103010. https://doi.org/10.1016/j.cviu.2020.103010 .

Insafutdinov E, Pishchulin L, Andres B, Andriluka M, Schiele B. DeeperCut: a deeper, stronger, and faster multi-person pose estimation model. In: Leibe B, Matas J, Sebe N, Welling M, editors. Computer vision – ECCV 2016. Cham: Springer International Publishing; 2016. pp. 34–50. https://doi.org/10.1007/978-3-319-46466-4_3 .

Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, et al. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci. 2018;21:1281–9. https://doi.org/10.1038/s41593-018-0209-y .

Alsheakhali M, Eslami A, Roodaki H, Navab N. CRF-Based model for instrument detection and pose estimation in Retinal Microsurgery. Comput Math Methods Med. 2016;2016:1–10. https://doi.org/10.1155/2016/1067509 .

Kurmann T, Marquez Neila P, Du X, Fua P, Stoyanov D, Wolf S, et al. Simultaneous Recognition and Pose Estimation of Instruments in minimally invasive surgery. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer-assisted Intervention – MICCAI 2017. Cham: Springer International Publishing; 2017. pp. 505–13. https://doi.org/10.1007/978-3-319-66185-8_57 .

Microsoft Corp. Kinect for Xbox 360. n.d.

Shotton J, Fitzgibbon A, Cook M, Sharp T, Finocchio M, Moore R et al. Real-time human pose recognition in parts from single depth images. CVPR 2011, 2011, p. 1297–304. https://doi.org/10.1109/CVPR.2011.5995316 .

Liu J, Tateyama T, Iwamoto Y, Chen Y-W. A preliminary study of Kinect-based real-time hand gesture interaction systems for touchless visualizations of hepatic structures in surgery. Medical Imaging and Information Sciences. 2019;36:128–35. https://doi.org/10.11318/mii.36.128 .

Polhemus. n.d.

Rutherford DN, D’Angelo A-LD, Law KE, Pugh CM. Advanced Engineering Technology for Measuring Performance. Surg Clin North Am. 2015;95:813–26. https://doi.org/10.1016/j.suc.2015.04.005 .

Kocabas M, Karagoz S, Akbas E. Self-supervised learning of 3D human pose using multi-view geometry. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA: IEEE; 2019, p. 1077–86. https://doi.org/10.1109/CVPR.2019.00117 .

Kanazawa A, Black MJ, Jacobs DW, Malik J. End-to-End Recovery of Human Shape and Pose. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, p. 7122–31. https://doi.org/10.1109/CVPR.2018.00744 .

Bouget D, Allan M, Stoyanov D, Jannin P. Vision-based and marker-less surgical tool detection and tracking: a review of the literature. Med Image Anal. 2017;35:633–54. https://doi.org/10.1016/j.media.2016.09.003 .

Srivastav V, Issenhuth T, Kadkhodamohammadi A, de Mathelin M, Gangi A, Padoy N. MVOR: a multi-view RGB-D operating room dataset for 2D and 3D human pose estimation. arXiv 2018. https://arxiv.org/abs/1808.08180v3 (accessed October 8, 2023).

Ismail Fawaz H, Forestier G, Weber J, Idoumghar L, Muller P-A. Evaluating surgical skills from kinematic data using convolutional neural networks. In: Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C, Fichtinger G, editors. Medical image computing and computer assisted intervention – MICCAI 2018. Cham: Springer International Publishing; 2018. pp. 214–21. https://doi.org/10.1007/978-3-030-00937-3_25 .

Khalid S, Goldenberg M, Grantcharov T, Taati B, Rudzicz F. Evaluation of deep learning models for identifying surgical actions and measuring performance. JAMA Netw Open. 2020;3:e201664. https://doi.org/10.1001/jamanetworkopen.2020.1664 .

Levin M, McKechnie T, Khalid S, Grantcharov TP, Goldenberg M. Automated methods of technical skill assessment in surgery: a systematic review. J Surg Educ. 2019;76:1629–39. https://doi.org/10.1016/j.jsurg.2019.06.011 .

Holstein K, Wortman Vaughan J, Daumé H, Dudik M, Wallach H. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: Association for Computing Machinery; 2019, p. 1–16. https://doi.org/10.1145/3290605.3300830 .

Taylor JET, Taylor GW. Artificial cognition: how experimental psychology can help generate explainable artificial intelligence. Psychon Bull Rev. 2020. https://doi.org/10.3758/s13423-020-01825-5 .

Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, et al. Machine learning for technical skill assessment in surgery: a systematic review. Npj Digit Med. 2022;5:1–16. https://doi.org/10.1038/s41746-022-00566-0 .

Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: a systematic scoping review of interventions. J Surg Educ. 2022;79:500–15. https://doi.org/10.1016/j.jsurg.2021.09.012 .

Nathan M, Karamichalis JM, Liu H, del Nido P, Pigula F, Thiagarajan R, et al. Intraoperative adverse events can be compensated by technical performance in neonates and infants after cardiac surgery: a prospective study. J Thorac Cardiovasc Surg. 2011;142:1098–1107e5. https://doi.org/10.1016/j.jtcvs.2011.07.003 .

Tan T-W, Kalish JA, Hamburg NM, Rybin D, Doros G, Eberhardt RT, et al. Shorter duration of femoral-popliteal bypass is associated with decreased surgical site infection and shorter hospital length of stay. J Am Coll Surg. 2012;215:512–8. https://doi.org/10.1016/j.jamcollsurg.2012.06.007 .

Datta V, Mackay S, Mandalia M, Darzi A. The use of electromagnetic motion tracking analysis to objectively measure open surgical skill in the laboratory-based model. J Am Coll Surg. 2001;193:479–85. https://doi.org/10.1016/s1072-7515(01)01041-9 .

Glazier PS. Beyond animated skeletons: how can biomechanical feedback be used to enhance sports performance? J Biomech. 2021;129:110686. https://doi.org/10.1016/j.jbiomech.2021.110686 .

Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A, Vincent CA. Reliability of a revised NOTECHS scale for use in surgical teams. Am J Surg. 2008;196:184–90. https://doi.org/10.1016/j.amjsurg.2007.08.070 .

Yule S, Gupta A, Gazarian D, Geraghty A, Smink DS, Beard J, et al. Construct and criterion validity testing of the non-technical skills for surgeons (NOTSS) behaviour assessment tool using videos of simulated operations. Br J Surg. 2018;105:719–27. https://doi.org/10.1002/bjs.10779 .

Black SA, Nestel DF, Kneebone RL, Wolfe JHN. Assessment of surgical competence at carotid endarterectomy under local anaesthesia in a simulated operating theatre. Br J Surg. 2010;97:511–6. https://doi.org/10.1002/bjs.6938 .

Wetzel CM, Black SA, Hanna GB, Athanasiou T, Kneebone RL, Nestel D, et al. The effects of stress and coping on Surgical Performance during simulations. Ann Surg. 2010;251:171–6. https://doi.org/10.1097/SLA.0b013e3181b3b2be .

O’Meara P, Munro G, Williams B, Cooper S, Bogossian F, Ross L, et al. Developing situation awareness amongst nursing and paramedicine students utilizing eye tracking technology and video debriefing techniques: a proof of concept paper. Int Emerg Nurs. 2015;23:94–9. https://doi.org/10.1016/j.ienj.2014.11.001 .

Lagomarsino M, Lorenzini M, Balatti P, Momi ED, Ajoudani A. Pick the right co-worker: online assessment of cognitive ergonomics in human-robot collaborative assembly. IEEE Trans Cogn Dev Syst. 2022. https://doi.org/10.1109/TCDS.2022.3182811 .

Skinner SC, Mazza S, Carty MJ, Lifante J-C, Duclos A. Coaching for surgeons: a scoping review of the quantitative evidence. Ann Surg Open. 2022;3:e179. https://doi.org/10.1097/AS9.0000000000000179 .

Bonrath EM, Dedy NJ, Gordon LE, Grantcharov TP. Comprehensive surgical coaching enhances surgical skill in the operating room: a randomized controlled trial. Ann Surg. 2015;262:205–12. https://doi.org/10.1097/SLA.0000000000001214 .

Gagnon L-H, Abbasi N. Systematic review of randomized controlled trials on the role of coaching in surgery to improve learner outcomes. Am J Surg. 2018;216:140–6. https://doi.org/10.1016/j.amjsurg.2017.05.003 .

Greenberg CC, Byrnes ME, Engler TA, Quamme SP, Thumma JR, Dimick JB. Association of a Statewide Surgical Coaching Program with clinical outcomes and surgeon perceptions. Ann Surg. 2021;273:1034–9. https://doi.org/10.1097/SLA.0000000000004800 .

Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28. https://doi.org/10.1080/01421590500046924 .

Terblanche N, Molyn J, de Haan E, Nilsson VO. Comparing artificial intelligence and human coaching goal attainment efficacy. PLoS ONE. 2022;17:e0270255. https://doi.org/10.1371/journal.pone.0270255 .

Safir O, Williams CK, Dubrowski A, Backstein D, Carnahan H. Self-directed practice schedule enhances learning of suturing skills. Can J Surg. 2013;56:E142–7. https://doi.org/10.1503/cjs.019512 .

Hu Y-Y, Peyre SE, Arriaga AF, Osteen RT, Corso KA, Weiser TG, et al. Postgame analysis: using video-based coaching for continuous professional development. J Am Coll Surg. 2012;214:115–24. https://doi.org/10.1016/j.jamcollsurg.2011.10.009 .

Enarsson T, Enqvist L, Naarttijärvi M. Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts. Inform Commun Technol Law. 2022;31:123–53. https://doi.org/10.1080/13600834.2021.1958860 .

Gunn EGM, Ambler OC, Nallapati SC, Smink DS, Tambyraja AL, Yule S. Coaching with audiovisual technology in acute-care hospital settings: systematic review. BJS Open. 2023;7:zrad017. https://doi.org/10.1093/bjsopen/zrad017 .

Kendrick DE, Thelen AE, Chen X, Gupta T, Yamazaki K, Krumm AE, et al. Association of Surgical Resident competency ratings with patient outcomes. Acad Med. 2023;98:813–20. https://doi.org/10.1097/ACM.0000000000005157 .

Strandbygaard J, Dose N, Moeller KE, Gordon L, Shore E, Rosthøj S, et al. Healthcare professionals’ perception of safety culture and the operating room (OR) Black Box technology before clinical implementation: a cross-sectional survey. BMJ Open Qual. 2022;11:e001819. https://doi.org/10.1136/bmjoq-2022-001819 .

Gordon L, Reed C, Sorensen JL, Schulthess P, Strandbygaard J, Mcloone M, et al. Perceptions of safety culture and recording in the operating room: understanding barriers to video data capture. Surg Endosc. 2022;36:3789–97. https://doi.org/10.1007/s00464-021-08695-5 .

van Dalen ASHM, Legemaate J, Schlack WS, Legemate DA, Schijven MP. Legal perspectives on black box recording devices in the operating environment. Br J Surg. 2019;106:1433–41. https://doi.org/10.1002/bjs.11198 .

Xiao Y, Schimpff S, Mackenzie C, Merrell R, Entin E, Voigt R, et al. Video technology to advance safety in the operating room and perioperative environment. Surg Innov. 2007;14:52–61. https://doi.org/10.1177/1553350607299777 .

Acknowledgements

Author information

Authors and affiliations

Department of Psychology, Northumbria University, Newcastle-upon-Tyne, UK

Merryn D. Constable

Department of Computer Science, Durham University, Durham, UK

Hubert P. H. Shum

Department of Applied Sciences, Northumbria University, Newcastle-upon-Tyne, UK

Stephen Clark

Consultant Cardiothoracic and Transplant Surgeon, Freeman Hospital, Newcastle upon Tyne, UK

Contributions

Conceptualisation: SC, MDC. Writing – Original Draft: MDC. Writing – Revising and Editing: SC, HPHS, MDC. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Merryn D. Constable.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Constable, M.D., Shum, H.P.H. & Clark, S. Enhancing surgical performance in cardiothoracic surgery with innovations from computer vision and artificial intelligence: a narrative review. J Cardiothorac Surg 19 , 94 (2024). https://doi.org/10.1186/s13019-024-02558-5

Received: 01 July 2023

Accepted: 30 January 2024

Published: 14 February 2024

DOI: https://doi.org/10.1186/s13019-024-02558-5

Keywords

  • Deep learning
  • Pose estimation
  • Psychomotor ability
  • Surgical skills
  • Markerless motion tracking
  • Surgical kinematics
  • Surgical performance
  • Surgical education
  • Surgical training

Journal of Cardiothoracic Surgery

ISSN: 1749-8090
