๐Ÿ“ Analysis Bots

The Story

Our paper on analysis bots has been accepted to the ICSE ERA track. The paper is co-written with Ivan Beschastnikh from the University of British Columbia and Yanyan Zhuang from the University of Colorado.

The paper is written from the perspective of software engineering researchers and for a software engineering audience. It argues that we must implement a platform where analysis bots compete to provide the most meaningful analysis results to practitioners.

However, I think there is a more powerful and general perspective trying to emerge from the paper. In particular, I find it fascinating to think about introducing automated agents into a workflow designed for people.

Or maybe this is simply automation catching up with us software engineers?

The Paper

The preprint (pdf) is available online. It’s not too long and makes for an easy read ;)

The BibTex

@inproceedings{Best:2017,
  author = {Beschastnikh, Ivan and Lungu, Mircea and Zhuang, Yanyan},
  title = {{Accelerating Software Engineering Research Adoption with Analysis Bots}},
  booktitle = {Proceedings of the 39th International Conference on Software Engineering},
  series = {ICSE 2017},
  year = {2017},
  location = {Buenos Aires, Argentina},
  pages = {},
  url = {},
  doi = {},
  acmid = {},
  publisher = {ACM},
  address = {New York, NY, USA},
}

The Abstract

An important part of software engineering (SE) research is to develop new analysis techniques and to integrate these techniques into software development practice. However, since access to developers is non-trivial and research tool adoption is slow, new analyses are typically evaluated as follows: a prototype tool that embeds the analysis is implemented, a set of projects is identified, their revisions are selected, and the tool is run in a controlled environment, rarely involving the developers of the software. As a result, research artifacts are brittle and it is unclear if an analysis tool would actually be adopted.

In this paper, we envision harnessing the rich interfaces provided by popular social coding platforms for automated deployment and evaluation of SE research analysis. We propose that SE analyses can be deployed as analysis bots. We focus on two specific benefits of such an approach: 
    (1) analysis bots can help evaluate analysis techniques in a less controlled, and more realistic context, and 
    (2) analysis bots provide an interface for developers to “subscribe” to new research techniques without needing to trust the implementation or the developer of the new tool, or to install the analysis tool locally.

We outline basic requirements for an analysis bots platform, and present research challenges that would need to be resolved for bots to flourish.
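
To make the idea a little more concrete, here is a minimal sketch (mine, not from the paper) of what an analysis bot could look like on GitHub: a small web service that receives pull-request webhooks, runs an analysis on the changed revision, and posts its findings back as a comment. The GitHub endpoints used are the standard webhook and issue-comment APIs; the run_analysis function, the /webhook route, and the GITHUB_TOKEN environment variable are placeholders I introduce only for illustration.

# Minimal sketch of an "analysis bot" as a GitHub webhook service.
# Assumptions: Flask and requests are installed, and GITHUB_TOKEN holds a
# token for the bot account that subscribing repositories have authorized.
import os

import requests
from flask import Flask, request

app = Flask(__name__)
GITHUB_API = "https://api.github.com"
TOKEN = os.environ.get("GITHUB_TOKEN", "")  # token of the bot account (placeholder)


def run_analysis(repo_full_name, pr_number):
    """Placeholder for the actual research analysis (e.g. a static checker).

    A real bot would fetch the revision under review, run the analysis tool
    on it, and summarize the findings for the developer.
    """
    return "Analysis bot: no issues found in {} PR #{}.".format(repo_full_name, pr_number)


@app.route("/webhook", methods=["POST"])
def on_pull_request():
    event = request.headers.get("X-GitHub-Event")
    payload = request.get_json()
    if event == "pull_request" and payload.get("action") in ("opened", "synchronize"):
        repo = payload["repository"]["full_name"]
        number = payload["pull_request"]["number"]
        report = run_analysis(repo, number)
        # Post the result back as a pull-request comment
        # (pull-request comments go through the issues API).
        requests.post(
            "{}/repos/{}/issues/{}/comments".format(GITHUB_API, repo, number),
            headers={"Authorization": "token {}".format(TOKEN)},
            json={"body": report},
            timeout=10,
        )
    return "", 204


if __name__ == "__main__":
    app.run(port=8080)

The point of the sketch is the "subscription" model from the abstract: a project owner only installs a webhook pointing at the bot, and never has to install or trust the analysis tool locally.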