📚 A curated list of papers for Software Engineers
facundoolano/software-papers
Papers for Software Engineers
A curated list of papers that may be of interest to Software Engineering students or professionals. See the sources and selection criteria below.
Computer History; Early Programming
- Von Neumann's First Computer Program. Knuth (1970).
- The Education of a Computer. Hopper (1952).
- Recursive Programming. Dijkstra (1960).
- Programming Considered as a Human Activity. Dijkstra (1965).
- Goto Statement Considered Harmful. Dijkstra (1968).
- Program development by stepwise refinement. Wirth (1971).
- The Humble Programmer. Dijkstra (1972).
- Computer Programming as an Art. Knuth (1974).
- The paradigms of programming. Floyd (1979).
- Literate Programming. Knuth (1984).
Early Artificial Intelligence
- Computing Machinery and Intelligence. Turing (1950).
- Some Moral and Technical Consequences of Automation. Wiener (1960).
- Steps towards Artificial Intelligence. Minsky (1960).
- ELIZA—a computer program for the study of natural language communication between man and machine. Weizenbaum (1966).
- A Theory of the Learnable. Valiant (1984).
Information Theory
- A Method for the Construction of Minimum-Redundancy Codes. Huffman (1952).
- A Universal Algorithm for Sequential Data Compression. Ziv, Lempel (1977).
- Fifty Years of Shannon Theory. Verdú (1998).
Data Structures; Algorithms
- Engineering a Sort Function. Bentley, McIlroy (1993).
- On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem. Kruskal (1956).
- A Note on Two Problems in Connexion with Graphs. Dijkstra (1959).
- Quicksort. Hoare (1962).
- Space/Time Trade-offs in Hash Coding with Allowable Errors. Bloom (1970).
- The Ubiquitous B-Tree. Comer (1979).
- Programming pearls: Algorithm design techniques. Bentley (1984).
- Programming pearls: The back of the envelope. Bentley (1984).
- Making data structures persistent. Driscoll et al (1986).
Software Design
- A Design Methodology for Reliable Software Systems. Liskov (1972).
- On the Criteria To Be Used in Decomposing Systems into Modules. Parnas (1971).
- Information Distribution Aspects of Design Methodology. Parnas (1972).
- Designing Software for Ease of Extension and Contraction. Parnas (1979).
- Programming as Theory Building. Naur (1985).
- Software Aging. Parnas (1994).
- Towards a Theory of Conceptual Design for Software. Jackson (2015).
Abstract Data Types; Object-Oriented Programming
- Programming with Abstract Data Types. Liskov, Zilles (1974).
- The Smalltalk-76 Programming System Design and Implementation. Ingalls (1978).
- A Theory of Type Polymorphism in Programming. Milner (1978).
- On understanding types, data abstraction, and polymorphism. Cardelli, Wegner (1985).
- SELF: The Power of Simplicity. Ungar, Smith (1991).
Functional Programming
- Why Functional Programming Matters. Hughes (1990).
- Recursive Functions of Symbolic Expressions and Their Computation by Machine. McCarthy (1960).
- The Semantics of Predicate Logic as a Programming Language. Van Emden, Kowalski (1976).
- Can Programming Be Liberated from the von Neumann Style? Backus (1978).
- The Semantic Elegance of Applicative Languages. Turner (1981).
- The essence of functional programming. Wadler (1992).
- QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs. Claessen, Hughes (2000).
- Church's Thesis and Functional Programming. Turner (2006).
Language Design; Compilers
- An Incremental Approach to Compiler Construction. Ghuloum (2006).
- The Next 700 Programming Languages. Landin (1966).
- Programming pearls: little languages. Bentley (1986).
- The Essence of Compiling with Continuations. Flanagan et al (1993).
- A Brief History of Just-In-Time. Aycock (2003).
- LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation. Lattner, Adve (2004).
- A Unified Theory of Garbage Collection. Bacon, Cheng, Rajan (2004).
- A Nanopass Framework for Compiler Education. Sarkar, Waddell, Dybvig (2005).
- Bringing the Web up to Speed with WebAssembly. Haas (2017).
Software Engineering; Project Management
- No Silver Bullet: Essence and Accidents of Software Engineering. Brooks (1987).
- How do committees invent? Conway (1968).
- Managing the Development of Large Software Systems. Royce (1970).
- The Mythical Man Month. Brooks (1975).
- On Building Systems That Will Fail. Corbató (1991).
- The Cathedral and the Bazaar. Raymond (1998).
- Out of the Tar Pit. Moseley, Marks (2006).
Concurrency
- Communicating sequential processes. Hoare (1978).
- Solution of a Problem in Concurrent Programming Control. Dijkstra (1965).
- Monitors: An operating system structuring concept. Hoare (1974).
- On the Duality of Operating System Structures. Lauer, Needham (1978).
- Software Transactional Memory. Shavit, Touitou (1997).
Operating Systems
- The UNIX Time-Sharing System. Ritchie, Thompson (1974).
- An Experimental Time-Sharing System. Corbató, Merwin Daggett, Daley (1962).
- The Structure of the "THE"-Multiprogramming System. Dijkstra (1968).
- The nucleus of a multiprogramming system. Hansen (1970).
- Reflections on Trusting Trust. Thompson (1984).
- The Design and Implementation of a Log-Structured File System. Rosenblum, Ousterhout (1991).
Databases
- A Relational Model of Data for Large Shared Data Banks. Codd (1970).
- Granularity of Locks and Degrees of Consistency in a Shared Data Base. Gray et al (1975).
- Access Path Selection in a Relational Database Management System. Selinger et al (1979).
- The Transaction Concept: Virtues and Limitations. Gray (1981).
- The design of POSTGRES. Stonebraker, Rowe (1986).
- Rules of Thumb in Data Engineering. Gray, Shenoy (1999).
Networking
- A Protocol for Packet Network Intercommunication. Cerf, Kahn (1974).
- Ethernet: Distributed packet switching for local computer networks. Metcalfe, Boggs (1978).
- End-To-End Arguments in System Design. Saltzer, Reed, Clark (1984).
- An algorithm for distributed computation of a Spanning Tree in an Extended LAN. Perlman (1985).
- The Design Philosophy of the DARPA Internet Protocols. Clark (1988).
- TOR: The second generation onion router. Dingledine et al (2004).
- Why the Internet only just works. Handley (2006).
- The Network is Reliable. Bailis, Kingsbury (2014).
Cryptography
- New Directions in Cryptography. Diffie, Hellman (1976).
- A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Rivest, Shamir, Adleman (1978).
- How To Share A Secret. Shamir (1979).
- A Digital Signature Based on a Conventional Encryption Function. Merkle (1987).
- The Salsa20 family of stream ciphers. Bernstein (2007).
Distributed Systems
- Time, Clocks, and the Ordering of Events in a Distributed System. Lamport (1978).
- Self-stabilizing systems in spite of distributed control. Dijkstra (1974).
- The Byzantine Generals Problem. Lamport, Shostak, Pease (1982).
- Impossibility of Distributed Consensus With One Faulty Process. Fischer, Lynch, Paterson (1985).
- Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial. Schneider (1990).
- Practical Byzantine Fault Tolerance. Castro, Liskov (1999).
- Paxos made simple. Lamport (2001).
- Paxos made live - An Engineering Perspective. Chandra, Griesemer, Redstone (2007).
- In Search of an Understandable Consensus Algorithm. Ongaro, Ousterhout (2014).
Human-Computer Interaction; User Interfaces
- Designing for Usability: Key Principles and What Designers Think. Gould, Lewis (1985).
- As We May Think. Bush (1945).
- Man-Computer Symbiosis. Licklider (1960).
- Some Thoughts About the Social Implications of Accessible Computing. David, Fano (1965).
- Tutorials for the First-Time Computer User. Al-Awar, Chapanis, Ford (1981).
- The star user interface: an overview. Smith, Irby, Kimball (1982).
- Design Principles for Human-Computer Interfaces. Norman (1983).
- Human-Computer Interaction: Psychology as a Science of Design. Carroll (1997).
Information Retrieval; World-Wide Web
- The anatomy of a large-scale hypertextual Web search engine. Brin, Page (1998).
- A Statistical Interpretation of Term Specificity in Retrieval. Spärck Jones (1972).
- World-Wide Web: Information Universe. Berners-Lee et al (1992).
- The PageRank Citation Ranking: Bringing Order to the Web. Page, Brin, Motwani (1998).
Internet Scale Data Systems
- Dynamo: Amazon’s Highly Available Key-value Store. DeCandia et al (2007).
- The Google File System. Ghemawat, Gobioff, Leung (2003).
- MapReduce: Simplified Data Processing on Large Clusters. Dean, Ghemawat (2004).
- Bigtable: A Distributed Storage System for Structured Data. Chang et al (2006).
- ZooKeeper: wait-free coordination for internet scale systems. Hunt et al (2010).
- The Hadoop Distributed File System. Shvachko et al (2010).
- Kafka: a Distributed Messaging System for Log Processing. Kreps, Narkhede, Rao (2011).
- CAP Twelve Years Later: How the "Rules" Have Changed. Brewer (2012).
- Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases. Verbitski et al (2017).
Operations; Reliability; Fault-tolerance
- On Designing and Deploying Internet Scale Services. Hamilton (2007).
- Ironies of Automation. Bainbridge (1983).
- Why do computers stop and what can be done about it? Gray (1985).
- Recovery Oriented Computing (ROC): Motivation, Definition, Techniques, and Case Studies. Patterson et al (2002).
- Crash-Only Software. Candea, Fox (2003).
- Building on Quicksand. Helland, Campbell (2009).
Performance
- Thinking Methodically about Performance. Gregg (2012).
- Performance Anti-Patterns. Smaalders (2006).
- Thinking Clearly about Performance. Millsap (2010).
Cryptocurrencies
- Bitcoin: A Peer-to-Peer Electronic Cash System. Nakamoto (2008).
- Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform. Buterin (2014).
Machine Learning
- A Few Useful Things to Know About Machine Learning. Domingos (2012).
- Statistical Modeling: The Two Cultures. Breiman (2001).
- The Unreasonable Effectiveness of Data. Halevy, Norvig, Pereira (2009).
- ImageNet Classification with Deep Convolutional Neural Networks. Krizhevsky, Sutskever, Hinton (2012).
- Playing Atari with Deep Reinforcement Learning. Mnih et al (2013).
- Generative Adversarial Nets. Goodfellow et al (2014).
- Deep Learning. LeCun, Bengio, Hinton (2015).
- Attention Is All You Need. Vaswani et al (2017).
This list was inspired by (and draws from) several books and paper collections:
- Papers We Love
- Ideas That Created the Future
- The Innovators
- The morning paper
- Distributed systems for fun and profit
- Readings in Database Systems (the Red Book)
- Fermat's Library
- Classics in Human-Computer Interaction
- Awesome Compilers
- Distributed Consensus Reading List
- The Decade of Deep Learning
A few interesting resources about reading papers from Papers We Love and elsewhere:
- Should I read papers?
- How to Read an Academic Article
- How to Read a Paper. Keshav (2007).
- Efficient Reading of Papers in Science and Technology. Hanson (1999).
- On ICSE’s “Most Influential Papers”. Parnas (1995).
Selection criteria
- The idea is not to include every interesting paper that I come across but rather to keep a representative list that's possible to read from start to finish with a similar level of effort as reading a technical book from cover to cover.
- I tried to include one paper per major topic and author. Since in the process I found many noteworthy alternatives and related or follow-up papers that I wanted to keep track of, I included them as sublist items.
- The papers shouldn't be too long. For the same reasons as the previous item, I try to avoid papers longer than 20 or 30 pages.
- They should be self-contained and readable enough to be approachable by the casual technical reader.
- They should be freely available online.
- Examples of this are classic works by Von Neumann, Turing and Shannon.
- That being said, where possible I preferred the original paper on each subject over modern updates or survey papers.
- Similarly, I tended to skip more theoretical papers, those focusing on mathematical foundations for Computer Science, electronic aspects of hardware, etc.
- I sorted the list by a mix of relatedness of topics and a vague chronological relevance, such that it makes sense to read it in the suggested order. For example, historical and seminal topics go first, contemporary internet-era developments last, networking precedes distributed systems, etc.
Software Engineering: Recently Published Documents
Identifying Non-Technical Skill Gaps in Software Engineering Education: What Experts Expect But Students Don’t Learn
As the importance of non-technical skills in the software engineering industry increases, the skill sets of graduates match less and less with industry expectations. A growing body of research exists that attempts to identify this skill gap. However, only a few studies so far explicitly compare opinions of the industry with what is currently being taught in academia. By aggregating data from three previous works, we identify the three biggest non-technical skill gaps between industry and academia for the field of software engineering: devoting oneself to continuous learning, being creative by approaching a problem from different angles, and thinking in a solution-oriented way by favoring outcome over ego. Eight follow-up interviews were conducted to further explore how the industry perceives these skill gaps, yielding 26 sub-themes grouped into six bigger themes: stimulating continuous learning, stimulating creativity, creative techniques, addressing the gap in education, skill requirements in industry, and the industry selection process. With this work, we hope to inspire educators to give the necessary attention to the uncovered skills, further mitigating the gap between the industry and the academic world.
Opportunities and Challenges in Code Search Tools
Code search is a core software engineering task. Effective code search tools can help developers substantially improve their software development efficiency and effectiveness. In recent years, many code search studies have leveraged different techniques, such as deep learning and information retrieval approaches, to retrieve expected code from a large-scale codebase. However, there is a lack of a comprehensive comparative summary of existing code search approaches. To understand the research trends in existing code search studies, we systematically reviewed 81 relevant studies. We investigated the publication trends of code search studies, analyzed key components, such as the codebase, query, and modeling technique used to build code search tools, and classified existing tools according to the seven different search tasks they support. Based on our findings, we identified a set of outstanding challenges in existing studies and a research roadmap for future code search research.
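As an aside for readers new to the area, the retrieval step these tools share can be sketched in a few lines: rank candidate snippets against a natural-language query by lexical overlap. This is only a toy illustration (real tools use IR weighting or deep learning, as the survey above discusses); the snippet corpus and scoring function are invented for the example.

```python
# Toy code-search sketch: rank snippets by Jaccard overlap with the query.
# Real code search tools use far richer models; this only shows the shape.
def tokens(text: str) -> set:
    return set(text.lower().replace("_", " ").split())

codebase = {  # hypothetical snippet index: name -> searchable text
    "read_csv_rows": "def read_csv_rows(path): open csv file and yield parsed rows",
    "send_email": "def send_email(to, subject, body): connect to smtp and send message",
    "parse_json_file": "def parse_json_file(path): open file and load json data",
}
query = "open a file and parse its contents"

def score(query: str, snippet: str) -> float:
    q, s = tokens(query), tokens(snippet)
    return len(q & s) / len(q | s)  # Jaccard similarity

for name, text in sorted(codebase.items(), key=lambda kv: score(query, kv[1]), reverse=True):
    print(f"{score(query, text):.2f}  {name}")
```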
Psychometrics in Behavioral Software Engineering: A Methodological Introduction with Guidelines
A meaningful and deep understanding of the human aspects of software engineering (SE) requires psychological constructs to be considered. Psychology theory can facilitate the systematic and sound development as well as the adoption of instruments (e.g., psychological tests, questionnaires) to assess these constructs. In particular, to ensure high quality, the psychometric properties of instruments need evaluation. In this article, we provide an introduction to psychometric theory for the evaluation of measurement instruments for SE researchers. We present guidelines that enable using existing instruments and developing new ones adequately. We conducted a comprehensive review of the psychology literature framed by the Standards for Educational and Psychological Testing. We detail activities used when operationalizing new psychological constructs, such as item pooling, item review, pilot testing, item analysis, factor analysis, statistical property of items, reliability, validity, and fairness in testing and test bias. We provide an openly available example of a psychometric evaluation based on our guideline. We hope to encourage a culture change in SE research towards the adoption of established methods from psychology. To improve the quality of behavioral research in SE, studies focusing on introducing, validating, and then using psychometric instruments need to be more common.
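To make one of the psychometric properties mentioned above concrete, the sketch below computes Cronbach's alpha, a standard internal-consistency reliability estimate, for a small matrix of questionnaire responses. The data is hypothetical and the function is a minimal illustration, not the instrument-evaluation procedure proposed in the article.

```python
import numpy as np

def cronbach_alpha(item_scores) -> float:
    """Internal-consistency reliability for a (respondents x items) score matrix."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 participants answering a 4-item Likert questionnaire.
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```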
Towards an Anatomy of Software Craftsmanship
Context: The concept of software craftsmanship has early roots in computing, and in 2009, the Manifesto for Software Craftsmanship was formulated as a reaction to how the Agile methods were practiced and taught. But software craftsmanship has seldom been studied from a software engineering perspective. Objective: The objective of this article is to systematize an anatomy of software craftsmanship through literature studies and a longitudinal case study. Method: We performed a snowballing literature review based on an initial set of nine papers, resulting in 18 papers and 11 books. We also performed a case study following seven years of software development of a product for the financial market, eliciting qualitative and quantitative results. We used thematic coding to synthesize the results into categories. Results: The resulting anatomy is centered around four themes, containing 17 principles and 47 hierarchical practices connected to the principles. We present the identified practices based on the experiences gathered from the case study, triangulating with the literature results. Conclusion: We provide our systematically derived anatomy of software craftsmanship with the goal of inspiring more research into the principles and practices of software craftsmanship and how these relate to other principles within software engineering in general.
On the Reproducibility and Replicability of Deep Learning in Software Engineering
Context: Deep learning (DL) techniques have gained significant popularity among software engineering (SE) researchers in recent years. This is because they can often solve many SE challenges without enormous manual feature engineering effort and complex domain knowledge. Objective: Although many DL studies have reported substantial advantages over other state-of-the-art models on effectiveness, they often ignore two factors: (1) reproducibility, whether the reported experimental results can be obtained by other researchers using the authors’ artifacts (i.e., source code and datasets) with the same experimental setup; and (2) replicability, whether the reported experimental results can be obtained by other researchers using their re-implemented artifacts with a different experimental setup. We observed that DL studies commonly overlook these two factors and declare them as minor threats or leave them for future work. This is mainly due to high model complexity with many manually set parameters and the time-consuming optimization process, unlike classical supervised machine learning (ML) methods (e.g., random forest). This study aims to investigate the urgency and importance of reproducibility and replicability for DL studies on SE tasks. Method: In this study, we conducted a literature review on 147 DL studies recently published in 20 SE venues and 20 AI (Artificial Intelligence) venues to investigate these issues. We also re-ran four representative DL models in SE to investigate important factors that may strongly affect the reproducibility and replicability of a study. Results: Our statistics show the urgency of investigating these two factors in SE, where only 10.2% of the studies investigate any research question to show that their models can address at least one issue of replicability and/or reproducibility. More than 62.6% of the studies do not even share high-quality source code or complete data to support the reproducibility of their complex models. Meanwhile, our experimental results show the importance of reproducibility and replicability, where the reported performance of a DL model could not be reproduced due to an unstable optimization process. Replicability could be substantially compromised if the model training is not convergent, or if performance is sensitive to the size of vocabulary and testing data. Conclusion: It is urgent for the SE community to provide a long-lasting link to a high-quality reproduction package, enhance DL-based solution stability and convergence, and avoid performance sensitivity on different sampled data.
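A small, uncontroversial part of the reproducibility problem described above is pinning random seeds. The sketch below shows the usual Python-level seeds; the framework-specific calls in the comment (e.g. torch or TensorFlow seeds) are noted as assumptions and still do not guarantee bitwise-identical GPU results.

```python
import os
import random
import numpy as np

def set_global_seeds(seed: int = 42) -> None:
    """Pin the random sources a typical experiment depends on."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # If a DL framework is used, its own generator must be seeded too, e.g.
    # torch.manual_seed(seed) or tf.random.set_seed(seed); GPU kernels may
    # still introduce nondeterminism, which is part of the problem the study raises.

set_global_seeds(42)
print(np.random.rand(3))  # same output on every run with the same seed
```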
Predictive Software Engineering: Transform Custom Software Development into Effective Business Solutions
The paper examines the principles of the Predictive Software Engineering (PSE) framework. The authors examine how PSE enables custom software development companies to offer transparent services and products while staying within the intended budget and a guaranteed budget. The paper will cover all 7 principles of PSE: (1) Meaningful Customer Care, (2) Transparent End-to-End Control, (3) Proven Productivity, (4) Efficient Distributed Teams, (5) Disciplined Agile Delivery Process, (6) Measurable Quality Management and Technical Debt Reduction, and (7) Sound Human Development.
Software—A New Open Access Journal on Software Engineering
Software (ISSN: 2674-113X) [...]
Improving bioinformatics software quality through incorporation of software engineering practices
Background: Bioinformatics software is developed for collecting, analyzing, integrating, and interpreting life science datasets that are often enormous. Bioinformatics engineers often lack the software engineering skills necessary for developing robust, maintainable, reusable software. This study presents a review and discussion of the findings and efforts made to improve the quality of bioinformatics software. Methodology: A systematic review was conducted of related literature that identifies core software engineering concepts for improving bioinformatics software development: requirements gathering, documentation, testing, and integration. The findings are presented with the aim of illuminating trends within the research that could lead to viable solutions to the struggles faced by bioinformatics engineers when developing scientific software. Results: The findings suggest that bioinformatics engineers could significantly benefit from the incorporation of software engineering principles into their development efforts. This leads to suggestion of both cultural changes within bioinformatics research communities as well as adoption of software engineering disciplines into the formal education of bioinformatics engineers. Open management of scientific bioinformatics development projects can result in improved software quality through collaboration amongst both bioinformatics engineers and software engineers. Conclusions: While strides have been made both in identification and solution of issues of particular import to bioinformatics software development, there is still room for improvement in terms of shifts in both the formal education of bioinformatics engineers as well as the culture and approaches of managing scientific bioinformatics research and development efforts.
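As a minimal illustration of the "testing" practice listed above, the sketch below unit-tests a tiny, hypothetical GC-content function with pytest. It is not taken from the reviewed literature; it only shows how little code is needed to start testing scientific routines.

```python
# test_gc_content.py -- run with `pytest` (assumed to be installed)
import pytest

def gc_content(sequence: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    sequence = sequence.upper()
    if not sequence:
        raise ValueError("empty sequence")
    return sum(base in "GC" for base in sequence) / len(sequence)

def test_gc_content_balanced():
    assert gc_content("ATGC") == 0.5

def test_gc_content_rejects_empty_input():
    with pytest.raises(ValueError):
        gc_content("")
```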
Inter-team communication in large-scale co-located software engineering: a case study
Large-scale software engineering is a collaborative effort where teams need to communicate to develop software products. Managers face the challenge of how to organise work to facilitate necessary communication between teams and individuals. This includes a range of decisions from distributing work over teams located in multiple buildings and sites, through work processes and tools for coordinating work, to softer issues including ensuring well-functioning teams. In this case study, we focus on inter-team communication by considering geographical, cognitive and psychological distances between teams, and factors and strategies that can affect this communication. Data was collected for ten test teams within a large development organisation, in two main phases: (1) measuring cognitive and psychological distance between teams using interactive posters, and (2) five focus group sessions where the obtained distance measurements were discussed. We present ten factors and five strategies, and how these relate to inter-team communication. We see three types of arenas that facilitate inter-team communication, namely physical, virtual and organisational arenas. Our findings can support managers in assessing and improving communication within large development organisations. In addition, the findings can provide insights into factors that may explain the challenges of scaling development organisations, in particular agile organisations that place a large emphasis on direct communication over written documentation.
Aligning Software Engineering and Artificial Intelligence With Transdisciplinary
This study examined AI and SE transdisciplinarity to find ways of aligning them and enabling the development of an AI-SE transdisciplinary theory. A literature review and analysis method was used. The findings are that AI and SE transdisciplinarity is tacit, with islands within and between the two fields that can be linked to accelerate their transdisciplinary orientation through codification, internal development, and external borrowing and adaptation of transdisciplinary theories. Lack of theory has been identified as the major barrier towards maturing the two disciplines as engineering disciplines; creating an AI and SE transdisciplinary theory would contribute to that maturation. The implications of the study are that transdisciplinary theory can support mode 2 and mode 3 AI and SE innovations and provide an alternative path for maturing the two disciplines. The study's originality is that it is the first in SE, AI, or their intersection.
Metric-centered and technology-independent architectural views for software comprehension
The maintenance of applications is a crucial activity in the software industry. The high cost of this process is due to the effort invested in software comprehension since, in most cases, there is no up-to-...
Back to the future: origins and directions of the “Agile Manifesto” – views of the originators
In 2001, seventeen professionals set up the manifesto for agile software development. They wanted to define values and basic principles for better software development. On top of being brought into focus, the ...
Investigating the effectiveness of peer code review in distributed software development based on objective and subjective data
Code review is a potential means of improving software quality. To be effective, it depends on different factors, and many have been investigated in the literature to identify the scenarios in which it adds qu...
On the benefits and challenges of using kanban in software engineering: a structured synthesis study
Kanban is increasingly being used in diverse software organizations. There is extensive research regarding its benefits and challenges in Software Engineering, reported in both primary and secondary studies. H...
Challenges on applying genetic improvement in JavaScript using a high-performance computer
Genetic Improvement is an area of Search Based Software Engineering that aims to apply evolutionary computing operators to the software source code to improve it according to one or more quality metrics. This ...
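To give a flavor of what "applying evolutionary computing operators to the source code" means, the sketch below mutates a program's AST by swapping one operator and then runs the variant. It is written in Python (the study above targets JavaScript) and omits the search loop and fitness function; everything here is illustrative.

```python
# One toy genetic-improvement operator: swap '*' for '+' in the AST, then run
# the variant. A real GI loop would keep only variants that pass the tests and
# improve a measured quality metric.
import ast

source = "def scale(x):\n    return x * 3\n"

class SwapMulAdd(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Mult):
            node.op = ast.Add()  # the mutation
        return node

tree = SwapMulAdd().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))          # Python 3.9+: shows the mutated source

namespace = {}
exec(compile(tree, "<variant>", "exec"), namespace)
print("variant scale(4) =", namespace["scale"](4))  # 7 instead of 12
```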
Actor’s social complexity: a proposal for managing the iStar model
Complex systems are inherent to modern society, in which individuals, organizations, and computational elements relate with each other to achieve a predefined purpose, which transcends individual goals. In thi...
Investigating measures for applying statistical process control in software organizations
The growing interest in improving software processes has led organizations to aim for high maturity, where statistical process control (SPC) is required. SPC makes it possible to analyze process behavior, pred...
An approach for applying Test-Driven Development (TDD) in the development of randomized algorithms
TDD is a technique traditionally applied in applications with deterministic algorithms, in which the input and the expected result are known. However, the application of TDD with randomized algorithms has bee...
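The usual ways around the missing "expected result" are hinted at in the sketch below: either seed the random number generator so the run is repeatable, or assert properties that must hold for any seed. The function under test and the tests are hypothetical, written test-first in the TDD spirit; run them with pytest.

```python
import random

def shuffle_copy(items, rng=None):
    """Return a shuffled copy of items; `rng` lets tests inject a seeded generator."""
    rng = rng or random.Random()
    result = list(items)
    rng.shuffle(result)
    return result

def test_seeded_run_is_deterministic():
    # Fixing the seed turns the randomized algorithm into a repeatable one.
    assert shuffle_copy([1, 2, 3, 4], random.Random(7)) == shuffle_copy([1, 2, 3, 4], random.Random(7))

def test_output_is_a_permutation_of_the_input():
    # A property that must hold for every seed, not one hard-coded expected value.
    data = list(range(20))
    assert sorted(shuffle_copy(data)) == data
```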
Supporting governance of mobile application developers from mining and analyzing technical questions in stack overflow
There is a need to improve the direct communication between large organizations that maintain mobile platforms (e.g. Apple, Google, and Microsoft) and third-party developers to solve technical questions that e...
Working software over comprehensive documentation – Rationales of agile teams for artefacts usage
Agile software development (ASD) promotes working software over comprehensive documentation. Still, recent research has shown agile teams to use quite a number of artefacts. Whereas some artefacts may be adopt...
Development as a journey: factors supporting the adoption and use of software frameworks
From the point of view of the software framework owner, attracting new and supporting existing application developers is crucial for the long-term success of the framework. This mixed-methods study explores th...
Applying user-centered techniques to analyze and design a mobile application
Techniques that help in understanding and designing user needs are increasingly being used in Software Engineering to improve the acceptance of applications. Among these techniques we can cite personas, scenar...
A measurement model to analyze the effect of agile enterprise architecture on geographically distributed agile development
Efficient and effective communication (active communication) among stakeholders is thought to be central to agile development. However, in geographically distributed agile development (GDAD) environments, it c...
A survey of search-based refactoring for software maintenance
This survey reviews published materials related to the specific area of Search-Based Software Engineering that concerns software maintenance and, in particular, refactoring. The survey aims to give a comprehen...
Guest editorial foreword for the special issue on automated software testing: trends and evidence
Similarity testing for role-based access control systems.
Access control systems demand rigorous verification and validation approaches, otherwise, they can end up with security breaches. Finite state machines based testing has been successfully applied to RBAC syste...
An algorithm for combinatorial interaction testing: definitions and rigorous evaluations
Combinatorial Interaction Testing (CIT) approaches have drawn attention of the software testing community to generate sets of smaller, efficient, and effective test cases where they have been successful in det...
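A toy version of the idea helps make the teaser concrete: instead of running every combination of configuration values, cover every pair of values with far fewer tests. The greedy selection below is a sketch of 2-way (pairwise) coverage over made-up factors, not a production CIT generator.

```python
from itertools import combinations, product

factors = {  # hypothetical configuration model
    "browser": ["firefox", "chrome"],
    "os": ["linux", "windows", "macos"],
    "locale": ["en", "pt"],
}
names = list(factors)

# Every (factor=value, factor=value) pair that must appear in some test.
required_pairs = {((a, va), (b, vb))
                  for a, b in combinations(names, 2)
                  for va in factors[a] for vb in factors[b]}

def pairs_of(row):
    return {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}

all_rows = [dict(zip(names, values)) for values in product(*factors.values())]
suite, uncovered = [], set(required_pairs)
while uncovered:  # greedily pick the row covering the most uncovered pairs
    best = max(all_rows, key=lambda row: len(pairs_of(row) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} tests cover all {len(required_pairs)} pairs; "
      f"exhaustive testing would need {len(all_rows)}")
```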
How diverse is your team? Investigating gender and nationality diversity in GitHub teams
Building an effective team of developers is a complex task faced by both software companies and open source communities. The problem of forming a “dream”
Investigating factors that affect the human perception on god class detection: an analysis based on a family of four controlled experiments
Evaluation of design problems in object oriented systems, which we call code smells, is mostly a human-based task. Several studies have investigated the impact of code smells in practice. Studies focusing on h...
On the evaluation of code smells and detection tools
Code smells refer to any symptom in the source code of a program that possibly indicates a deeper problem, hindering software maintenance and evolution. Detection of code smells is challenging for developers a...

On the influence of program constructs on bug localization effectiveness
Software projects often reach hundreds or thousands of files. Therefore, manually searching for code elements that should be changed to fix a failure is a difficult task. Static bug localization techniques pro...
DyeVC: an approach for monitoring and visualizing distributed repositories
Software development using distributed version control systems has become more frequent recently. Such systems bring more flexibility, but also greater complexity to manage and monitor multiple existing reposi...
A genetic algorithm based framework for software effort prediction
Several prediction models have been proposed in the literature using different techniques obtaining different results in different contexts. The need for accurate effort predictions for projects is one of the ...
Elaboration of software requirements documents by means of patterns instantiation
Studies show that problems associated with the requirements specifications are widely recognized for affecting software quality and impacting effectiveness of its development process. The reuse of knowledge ob...
ArchReco: a software tool to assist software design based on context aware recommendations of design patterns
This work describes the design, development and evaluation of a software Prototype, named ArchReco, an educational tool that employs two types of Context-aware Recommendations of Design Patterns, to support us...
On multi-language software development, cross-language links and accompanying tools: a survey of professional software developers
Non-trivial software systems are written using multiple (programming) languages, which are connected by cross-language links. The existence of such links may lead to various problems during software developmen...
SoftCoDeR approach: promoting Software Engineering Academia-Industry partnership using CMD, DSR and ESE
The Academia-Industry partnership has been increasingly encouraged in the software development field. The main focus of the initiatives is driven by the collaborative work where the scientific research work me...
Issues on developing interoperable cloud applications: definitions, concepts, approaches, requirements, characteristics and evaluation models
Among research opportunities in software engineering for cloud computing model, interoperability stands out. We found that the dynamic nature of cloud technologies and the battle for market domination make clo...
Game development software engineering process life cycle: a systematic review
Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisc...
Correlating automatic static analysis and mutation testing: towards incremental strategies
Traditionally, mutation testing is used as test set generation and/or test evaluation criteria once it is considered a good fault model. This paper uses mutation testing for evaluating an automated static anal...
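For readers unfamiliar with the technique, the sketch below shows the core idea the teaser relies on: a mutant is a small seeded fault, and a test is judged by whether it "kills" (fails on) the mutant. The function and test are invented for illustration.

```python
def price_with_discount(price: float, rate: float) -> float:
    return price - price * rate            # original code

def price_with_discount_mutant(price: float, rate: float) -> float:
    return price + price * rate            # mutant: '-' replaced by '+'

def test_passes_on(impl) -> bool:
    """A single example-based test, parameterized by the implementation under test."""
    return impl(200.0, 0.5) == 100.0

assert test_passes_on(price_with_discount)             # passes on the original...
assert not test_passes_on(price_with_discount_mutant)  # ...and kills the mutant
print("mutant killed: the test detects this seeded fault")
```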
A multi-objective test data generation approach for mutation testing of feature models
Mutation approaches have been recently applied for feature testing of Software Product Lines (SPLs). The idea is to select products, associated to mutation operators that describe possible faults in the Featur...
An extended global software engineering taxonomy
In Global Software Engineering (GSE), the need for a common terminology and knowledge classification has been identified to facilitate the sharing and combination of knowledge by GSE researchers and practition...
A systematic process for obtaining the behavior of context-sensitive systems
Context-sensitive systems use contextual information in order to adapt to the user’s current needs or requirements failure. Therefore, they need to dynamically adapt their behavior. It is of paramount importan...
Distinguishing extended finite state machine configurations using predicate abstraction
Extended Finite State Machines (EFSMs) provide a powerful model for the derivation of functional tests for software systems and protocols. Many EFSM based testing problems, such as mutation testing, fault diag...
Extending statecharts to model system interactions
Statecharts are diagrams comprised of visual elements that can improve the modeling of reactive system behaviors. They extend conventional state diagrams with the notions of hierarchy, concurrency and communic...
On the relationship of code-anomaly agglomerations and architectural problems
Several projects have been discontinued in the history of the software industry due to the presence of software architecture problems. The identification of such problems in source code is often required in re...
An approach based on feature models and quality criteria for adapting component-based systems
Feature modeling has been widely used in domain engineering for the development and configuration of software product lines. A feature model represents the set of possible products or configurations to apply i...
Patch rejection in Firefox: negative reviews, backouts, and issue reopening
Writing patches to fix bugs or implement new features is an important software development task, as it contributes to raise the quality of a software system. Not all patches are accepted in the first attempt, ...
Investigating probabilistic sampling approaches for large-scale surveys in software engineering
Establishing representative samples for Software Engineering surveys is still considered a challenge. Specialized literature often presents limitations on interpreting surveys’ results, mainly due to the use o...
Characterising the state of the practice in software testing through a TMMi-based process
The software testing phase, despite its importance, is usually compromised by the lack of planning and resources in industry. This can risk the quality of the derived products. The identification of mandatory ...
Self-adaptation by coordination-targeted reconfigurations
A software system is self-adaptive when it is able to dynamically and autonomously respond to changes detected either in its internal components or in its deployment environment. This response is expected to ensu...
Templates for textual use cases of software product lines: results from a systematic mapping study and a controlled experiment
Use case templates can be used to describe functional requirements of a Software Product Line. However, to the best of our knowledge, no efforts have been made to collect and summarize these existing templates...
F3T: a tool to support the F3 approach on the development and reuse of frameworks
Frameworks are used to enhance the quality of applications and the productivity of the development process, since applications may be designed and implemented by reusing framework classes. However, frameworks ...
NextBug: a Bugzilla extension for recommending similar bugs
Due to the characteristics of the maintenance process followed in open source systems, developers are usually overwhelmed with a great amount of bugs. For instance, in 2012, approximately 7,600 bugs/month were...
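The recommendation idea behind such a tool can be sketched as plain textual similarity between bug summaries; the bag-of-words cosine below illustrates that idea only, it is not NextBug's actual algorithm, and the bug reports are made up.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

open_bugs = {  # hypothetical open bug reports
    1001: "crash when saving file with unicode name",
    1002: "toolbar icons misaligned on high dpi screens",
    1003: "saving large file causes crash and data loss",
}
new_bug = "application crash on save of file"

# Recommend existing bugs most similar to the newly filed one.
for bug_id, summary in sorted(open_bugs.items(),
                              key=lambda kv: cosine(bow(new_bug), bow(kv[1])),
                              reverse=True):
    print(bug_id, f"{cosine(bow(new_bug), bow(summary)):.2f}", summary)
```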
Assessing the benefits of search-based approaches when designing self-adaptive systems: a controlled experiment
The well-orchestrated use of distilled experience, domain-specific knowledge, and well-informed trade-off decisions is imperative if we are to design effective architectures for complex software-intensive syst...
Revealing influence of model structure and test case profile on the prioritization of test cases in the context of model-based testing
Test case prioritization techniques aim at defining an order of test cases that favor the achievement of a goal during test execution, such as revealing failures as earlier as possible. A number of techniques ...
A metrics suite for JUnit test code: a multiple case study on open source software
The code of JUnit test cases is commonly used to characterize software testing effort. Different metrics have been proposed in literature to measure various perspectives of the size of JUnit test cases. Unfort...
Designing fault-tolerant SOA based on design diversity
Over recent years, software developers have been evaluating the benefits of both Service-Oriented Architecture (SOA) and software fault tolerance techniques based on design diversity. This is achieved by creat...
Method-level code clone detection through LWH (Light Weight Hybrid) approach
Many researchers have investigated different techniques to automatically detect duplicate code in programs exceeding thousand lines of code. These techniques have limitations in finding either the structural o...
The problem of conceptualization in god class detection: agreement, strategies and decision drivers
The concept of code smells is widespread in Software Engineering. Despite the empirical studies addressing the topic, the set of context-dependent issues that impacts the human perception of what is a code sme...
Journal of Software Engineering Research and Development

Current Issue
Research articles:
- Extending the Docstone to enable a blockchain-based service for customizable assets and blockchain types
- Modeling software processes from different domains using SPEM and BPMN notations: an experience report of teaching software processes
- Test smell refactoring revisited: what can internal quality attributes and developers’ experience tell us
- Investigating the point of view of project management practitioners on technical debt - a study on Stack Exchange
- Simulation-supported development for cooperative multi-UAV systems with the Mysterio framework
- Identifying and mitigating risks in estimation process: a case study applying action research
- Software architectural practices: influences on the open source ecosystem health
- Education, innovation and software production: the contributions of the reflective practice in a software studio
- Insights from the application of exploratory tests in the daily life of distributed teams: an experience report
- Naming practices in object-oriented programming: an empirical study
- An evaluation of ranking-to-learn approaches for test case prioritization in continuous integration
- Investigating the relationship between technical debt management and software development issues
- OSS in software engineering education: mapping characteristics of Brazilian instructors
- Technical debt guild: managing technical debt from code up to build
- Identification and management of technical debt: a systematic mapping study update

Classified as Qualis A4 by the Brazilian federal agency CAPES (Coordination for the Improvement of Higher Education Personnel) for the 2017-2020 four-year evaluation period.


Top Software Engineering Research articles of 2020
FROM QUALITY ASSURANCE TO QUALITY ENGINEERING FOR DIGITAL TRANSFORMATION
Kiran Kumaar CNK, Capgemini India Private Limited, India
Defects are one of the seven prominent wastes in the lean process, arising when a product or functionality fails to meet customer expectations. These defects, in turn, can cause rework and redeployment of that product or functionality, which costs valuable time, effort, and money. As per the survey, most clients invest much time, energy, and money in fixing production defects. This paper provides information about ways to move from quality assurance into quality engineering for digital transformation through diagnostic, predictive, and prescriptive approaches. It also outlines the overall increase in quality observed when QA shifts left and delivery is continuous through Agile, with the integration of analytics and a toolbox.
Diagnostic, Predictive & Prescriptive approaches, continuous delivery through Agile.
Full Paper https://aircconline.com/csit/csit1002.pdf
Volume Link : http://airccse.org/csit/V10N02.html
DESIGN OF SOFTWARE TRUSTED TOOL BASED ON SEMANTIC ANALYSIS
Guofengli, Beijing University of Technology, Beijing, China
At present, research on software trustworthiness focuses on two areas: behavioral trustworthiness and trusted computing. Trusted computing research has reached the active-immunity stage of Trusted Computing 3.0. Behavioral trustworthiness focuses on detecting and monitoring software behavior trajectories: abnormal behaviors are found by monitoring program call sequences by scene and by hierarchy, in order to restrict sensitive and dangerous software behavior.
Current work on behavioral trust mainly uses XML to configure behavior declarations that constrain sensitive and dangerous software behaviors, and is mostly applied to software trust testing. XML behavior declaration files are usually configured manually from a set of sensitive behaviors and defined behavior paths, with the emphasis on formulating declarations and generating test cases from them; there is little research on the trustworthiness of behavior semantics. Because declaration files are configured manually from behavior sets, the process is complicated and time consuming, and the behavior sets are often incomplete. This paper uses a trusted tool based on semantic analysis to address the completeness of the behavior set and to generate trusted declaration files efficiently.
The main idea of this paper is to model requirements with semantic analysis, both dynamic and static. UML models are used to automatically generate XML code, behavioral semantics are analyzed and modeled, and non-functional requirements are formally modeled, so as to ensure the credibility of the developed trusted tool and of the automatically generated XML files. The approach rests on formal modeling of non-functional requirements: state diagrams and the function layer are analyzed semantically, an activity diagram built with a model-driven method generates the XML trusted-behavior declaration file, and semantic analysis finally produces functional semantic sets and functional semantic trees to ensure the completeness of the software's behavior set. The behavior set is turned into an XML-format behavior declaration file by the trusted tool, and trusted computing is used to verify the tool's credibility.
Behavior declaration, behavior semantic analysis, trusted tool design, functional semantic set.
For More Details : https://aircconline.com/csit/csit1002.pdf
DOCPRO: A FRAMEWORK FOR BUILDING DOCUMENT PROCESSING SYSTEMS
Ming-Jen Huang, Chun-Fang Huang, Chiching Wei, Foxit Software Inc., Albrae Street, Fremont, USA
With the recent advances in deep neural networks, we observe new applications of natural language processing (NLP) and computer vision (CV) technologies. Especially when applying them to document processing, NLP and CV tasks are usually treated individually in research work and open source libraries. However, designing a real-world document processing system requires weaving NLP and CV tasks, and the information they generate, together. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. This paper introduces a framework to fulfil this need. The framework includes a representation model definition for holding the generated information and specifications defining the coordination between the NLP and CV tasks.
Document Processing, Framework, Formal definition, Machine Learning.
For More Details : https://aircconline.com/csit/csit1009.pdf
Volume Link : http://airccse.org/csit/V10N09.html
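A rough sense of what such a "representation model" might hold is sketched below: one object that carries both the CV output (layout boxes) and the NLP output (text and entities) for each element. The class and field names are assumptions for illustration, not DocPro's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Element:
    kind: str                                   # e.g. "paragraph", "table", "figure"
    bbox: Tuple[float, float, float, float]     # from the CV layout model
    text: str = ""                              # from OCR / text extraction
    entities: List[Tuple[str, str]] = field(default_factory=list)  # from the NLP model

@dataclass
class Document:
    pages: List[List[Element]] = field(default_factory=list)

doc = Document(pages=[[
    Element("paragraph", (72, 90, 520, 140),
            text="Invoice issued by Acme Corp on 2020-03-01.",
            entities=[("ORG", "Acme Corp"), ("DATE", "2020-03-01")]),
]])
print(doc.pages[0][0].entities)
```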
MODERATION EFFECT OF SOFTWARE ENGINEERS’ EMOTIONAL INTELLIGENCE (EQ) BETWEEN THEIR WORK ETHICS AND THEIR WORK PERFORMANCE
Shafia Khatun and Norsaremah Salleh, International Islamic University Malaysia (IIUM), Kuala Lumpur, Malaysia
In today’s world, software is being used in every sector, be it education, healthcare, security, transportation, or finance. As software engineers greatly affect society, if they do not behave ethically, they could cause widespread damage, as in the Facebook-Cambridge Analytica scandal in 2018. Therefore, investigating the ethics of software engineers and the relationships it has with other interpersonal variables, such as work performance, is important for understanding what could be done to improve the situation. Software engineers work in rapidly changing business environments, which leads to a lot of stress. Their emotions are important for dealing with this and can impact their ethical decision-making. In this quantitative study, the researcher investigates whether Emotional Intelligence (EQ) moderates the relationship between the work ethics of software engineers and their work performance, using hierarchical multiple regression analysis in SPSS. The findings show that EQ does significantly moderate the relationship between work ethics and work performance. These findings provide valuable information for improving the ethical behavior of software engineers.
Software engineers, emotional intelligence, work ethics, work performance, quantitative study.
For More Details : https://aircconline.com/csit/csit1014.pdf
Volume Link : http://airccse.org/csit/V10N14.html
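The statistical move described in the abstract above, testing moderation by adding an interaction term in a hierarchical regression, can be sketched as below. The data is synthetic and the column names are invented; it only mirrors the shape of the analysis (run in SPSS by the authors) using pandas and statsmodels.

```python
# A minimal moderation sketch: the ethics x EQ product term captures whether EQ
# moderates the ethics -> performance link. Requires numpy, pandas, statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
ethics = rng.normal(size=n)
eq = rng.normal(size=n)
# Synthetic outcome with a built-in interaction, for illustration only.
performance = 0.4 * ethics + 0.3 * eq + 0.5 * ethics * eq + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"ethics": ethics, "eq": eq, "performance": performance})

step1 = smf.ols("performance ~ ethics + eq", data=df).fit()   # main effects only
step2 = smf.ols("performance ~ ethics * eq", data=df).fit()   # adds the ethics:eq term
print("R^2 without interaction:", round(step1.rsquared, 3))
print("R^2 with interaction:   ", round(step2.rsquared, 3))
print("interaction coefficient:", round(step2.params["ethics:eq"], 3))
```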
UNIQUE SOFTWARE ENGINEERING TECHNIQUES: PANACEA FOR THREAT COMPLEXITIES IN SECURE MULTIPARTY COMPUTATION (MPC) WITH BIG DATA
Uchechukwu Emejeamara (IEEE Computer Society, Connecticut Section, USA), Udochukwu Nwoduh and Andrew Madu (Federal Polytechnic Nekede, Nigeria)
Most large corporations with big data have adopted more privacy measures in handling their sensitive/private data; as a result, employing analytic tools that run across multiple sources has become ineffective. Joint computation across multiple parties is enabled through secure multi-party computation (MPC). The practicality of MPC is impaired when dealing with large datasets, as many of its algorithms scale poorly with data size. Despite its limitations, MPC continues to attract increasing attention from industry players who view it as a better approach to exploiting big data. Secure MPC is, however, faced with complexities that often overwhelm its handlers, hence the need for special software engineering techniques for resolving these threat complexities. This research presents cryptographic data security measures, the garbled circuits protocol, circuit optimization, and protocol execution techniques as some of the special techniques for resolving threat complexities associated with MPC. Honest majority, asymmetric trust, covert security, and trading off leakage are some of the experimental outcomes of implementing these special techniques. This paper also reveals that an essential step in developing suitable mitigation strategies is knowing the type of adversary.
Cryptographic Data Security, Garbled Circuits, Optimizing Circuits, Protocol Execution, Honest Majority, Asymmetric Trust, Covert Security, Trading Off Leakage.
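As a taste of what "joint computation across multiple parties" means, the sketch below uses additive secret sharing, one MPC building block, rather than the garbled-circuit protocols discussed in the paper: two inputs are split into random shares, the shares are summed locally, and only the total is ever reconstructed.

```python
# Minimal additive secret sharing over a prime field; illustrative only.
import secrets

P = 2**61 - 1  # public prime modulus

def share(secret: int, n_parties: int = 3):
    """Split a value into n random shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

salary_a, salary_b = 52_000, 61_000
shares_a, shares_b = share(salary_a), share(salary_b)
# Each party adds its own shares locally; no party ever sees the raw inputs.
sum_shares = [(a + b) % P for a, b in zip(shares_a, shares_b)]
assert reconstruct(sum_shares) == salary_a + salary_b
print("joint sum computed without revealing either input:", reconstruct(sum_shares))
```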
NETWORK DEFENSE IN AN END-TO-END PARADIGM
William R. Simpson and Kevin E. Foltz The Institute for Defense Analyses (IDA), Alexandria, Virginia, USA
Network defense implies a comprehensive set of software tools to preclude malicious entities from conducting nefarious activities. For most enterprises at this time, that defense builds upon the fortress approach: many of the requirements are based on inspection and reporting prior to delivery of the communication to the intended target. These inspections require decryption of packets when they are encrypted, which implies that the defensive suite has access to the private keys of the servers that are the target of communication. This is in contrast to an end-to-end paradigm, where known good entities can communicate directly with each other. In an end-to-end paradigm, confidentiality is maintained through unbroken end-to-end encryption; the private key resides only with the holder-of-key in the communication, and inspection and reporting are computed in a distributed fashion. This paper examines a formulation that is pertinent to the Enterprise Level Security (ELS) framework.
Appliance, end-to-end security model, ELS, network defenses, web server handlers.
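The contrast the paper draws can be seen in a few lines: with end-to-end public-key encryption, only the endpoints hold private keys, so an in-path inspection appliance never has the material needed to decrypt. The sketch assumes the PyNaCl package (`pip install pynacl`) and is illustrative, not an ELS implementation.

```python
from nacl.public import PrivateKey, Box

sender_sk = PrivateKey.generate()      # private keys stay with the endpoints
receiver_sk = PrivateKey.generate()

ciphertext = Box(sender_sk, receiver_sk.public_key).encrypt(b"quarterly report")
# Any middlebox on the path sees only ciphertext; it holds no private key.
plaintext = Box(receiver_sk, sender_sk.public_key).decrypt(ciphertext)
assert plaintext == b"quarterly report"
```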
QUALITY MODEL BASED ON PLAYABILITY FOR THE UNDERSTANDABILITY AND USABILITY COMPONENTS IN SERIOUS VIDEO GAMES
Iván Humberto Fuentes Chab, Damián Uriel Rosado Castellanos, Olivia Graciela Fragoso Diaz and Ivette Stephany Pacheco Farfán, Instituto Tecnológico Superior de Escárcega (ITSE), Escárcega, México
A serious video game is an easy and practical way to get the player to learn about a complex subject, such as performing integrals, applying first aid, or even getting children to learn to read and write in their native language or another language. To develop a serious video game, therefore, one must have a guide containing the basic and necessary elements of its software components. This research presents a quality model to evaluate playability, taking the attributes of usability and understandability at the level of software components. The model can serve as a set of parameters to measure the quality of the software product of a serious video game before and during its development, providing a margin with the primordial elements that a serious video game must have so that players reach the desired objective of learning while playing. The experimental results show that a score of 88.045% is obtained against the proposed quality model for the serious video game used in the test case, a margin that can vary according to the needs of the implemented video game.
Quality Model, Serious Video Games, Playability Metrics.
For More Details : https://aircconline.com/csit/papers/vol10/csit101912.pdf
Volume Link : http://airccse.org/csit/V10N19.html
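To hint at how such a quality model turns attribute measurements into a single playability figure, the sketch below rolls illustrative 0-100 metric scores up through equal component weights. The attribute names, weights, and scores are assumptions, not the model or the 88.045% result reported above.

```python
# Hedged sketch of a roll-up scoring scheme for a playability quality model.
weights = {"understandability": 0.5, "usability": 0.5}   # illustrative component weights
scores = {  # attribute scores for a hypothetical serious game, each on a 0-100 scale
    "understandability": {"learnability": 92, "clarity_of_goals": 85},
    "usability": {"controls": 88, "feedback": 83},
}

def component_score(metric_scores: dict) -> float:
    """Average the metric scores belonging to one quality component."""
    return sum(metric_scores.values()) / len(metric_scores)

playability = sum(weights[c] * component_score(scores[c]) for c in weights)
print(f"playability estimate: {playability:.3f}%")
```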
