“CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning” accepted at MSR 2022!

Mohammad Reza’s paper “CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning” was accepted for publication at the Mining Software Repositories (MSR) conference 2022! Super congrats Mohammad Reza and co-author Finlay!

Abstract:
“Gameplay videos contain rich information about how players interact with the game and how the game responds. Sharing gameplay videos on social media platforms, such as Reddit, has become a common practice for many players. Often, players will share gameplay videos that showcase video game bugs. Such gameplay videos are software artifacts that can be utilized for game testing, as they provide insight for bug analysis. Although large repositories of gameplay videos exist, parsing and mining them in an effective and structured fashion remains a big challenge. In this paper, we propose a search method that accepts any English text query as input to retrieve relevant videos from large repositories of gameplay videos. Our approach does not rely on any external information (such as video metadata); it works solely based on the content of the video. By leveraging the zero-shot transfer capabilities of the Contrastive Language-Image Pre-Training (CLIP) model, our approach does not require any data labeling or training. To evaluate our approach, we present the GamePhysics dataset, consisting of 26,954 videos from 1,873 games that were collected from the GamePhysics section on the Reddit website. Our approach shows promising results in our extensive analysis of simple queries, compound queries, and bug queries, indicating that our approach is useful for object and event detection in gameplay videos. An example application of our approach is as a gameplay video search engine to aid in reproducing video game bugs. Please visit the following link for the code and the data: https://asgaardlab.github.io/CLIPxGamePhysics/.”
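At its core, the retrieval idea the abstract describes boils down to ranking video frames by their similarity to a text query in a shared embedding space. The sketch below illustrates that ranking step only; it is not the paper's code, and the embeddings are random placeholders standing in for what CLIP's image and text encoders would produce.

```python
# Minimal sketch of zero-shot text-to-video retrieval by embedding
# similarity. NOT the paper's implementation: real frame and query
# embeddings would come from CLIP's encoders; here they are random
# placeholders with CLIP's typical 512-dim size.
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # L2-normalize along the last axis so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Pretend corpus: 5 videos, 10 sampled frames each, 512-dim embeddings
video_frame_embs = normalize(rng.normal(size=(5, 10, 512)))
# Pretend embedding of an English text query, e.g. "a horse in the air"
query_emb = normalize(rng.normal(size=(512,)))

# Cosine similarity of every frame to the query, shape (5, 10)
sims = video_frame_embs @ query_emb
# Score each video by its best-matching frame (max pooling over frames)
video_scores = sims.max(axis=1)
# Rank videos from most to least relevant
ranking = np.argsort(-video_scores)
print(ranking)
```

With real CLIP embeddings in place of the placeholders, the same max-over-frames ranking returns the videos whose content best matches the query, with no labels or fine-tuning involved.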

See our Publications or arXiv for the full paper.

“A Case Study on the Stability of Performance Tests for Serverless Applications” accepted in JSS!

Simon’s paper “A Case Study on the Stability of Performance Tests for Serverless Applications” was accepted for publication in the Journal of Systems and Software (JSS)! This paper was a collaboration with Diego Costa, Lizhi Liao, Weiyi Shang, Andre van Hoorn and Samuel Kounev through the SPEC RG DevOps Performance Working Group.

Abstract:
“Context. While in serverless computing, application resource management and operational concerns are generally delegated to the cloud provider, ensuring that serverless applications meet their performance requirements is still a responsibility of the developers. Performance testing is a commonly used performance assessment practice; however, it traditionally requires visibility of the resource environment.
Objective. In this study, we investigate whether performance tests of serverless applications are stable, that is, if their results are reproducible, and what implications the serverless paradigm has for performance tests.
Method. We conduct a case study where we collect two datasets of performance test results: (a) repetitions of performance tests for varying memory size and load intensities and (b) three repetitions of the same performance test every day for ten months.
Results. We find that performance tests of serverless applications are comparatively stable if conducted on the same day. However, we also observe short-term performance variations and frequent long-term performance changes.
Conclusion. Performance tests for serverless applications can be stable; however, the serverless model impacts the planning, execution, and analysis of performance tests.”

See our Publications for the full paper.

“How are Solidity smart contracts tested in open source projects? An exploratory study” accepted at AST 2022!

Luisa’s paper “How are Solidity smart contracts tested in open source projects? An exploratory study” was accepted for publication at AST 2022! Super congrats Luisa!

Abstract:
“Smart contracts are self-executing programs that are stored on the blockchain. Once a smart contract is compiled and deployed on the blockchain, it cannot be modified. Therefore, having a bug-free smart contract is vital. To ensure a bug-free smart contract, it must be tested thoroughly. However, little is known about how developers test smart contracts in practice. Our study explores 139 open source smart contract projects that are written in Solidity to investigate the state of smart contract testing from three dimensions: (1) the developers working on the tests, (2) the used testing frameworks and testnets, and (3) the type of tests that are conducted. We found that mostly the core developers of a project are responsible for testing the contracts. Second, developers typically use only functional testing frameworks to test a smart contract, with Truffle being the most popular one. Finally, our results show that functional testing is conducted in most of the studied projects (93%), security testing is only performed in a few projects (9.4%), and traditional performance testing is conducted in none. In addition, we found 25 projects that mentioned or published external audit reports.”

See our Publications for the full paper.