“GlitchBench: Can large multimodal models detect video game glitches?” accepted at CVPR 2024!

Mohammad Reza’s paper “GlitchBench: Can large multimodal models detect video game glitches?” was accepted for publication at the CVPR 2024 conference! Super congrats Mohammad Reza and co-author Tianjun! This was a collaboration with Dr. Anh Nguyen from Auburn University.

Abstract: “Large multimodal models (LMMs) have evolved from large language models (LLMs) to integrate multiple input modalities, such as visual inputs. This integration augments the capacity of LLMs for tasks requiring visual comprehension and reasoning. However, the extent and limitations of their enhanced abilities are not fully understood, especially when it comes to real-world tasks. To address this gap, we introduce GlitchBench, a novel benchmark derived from video game quality assurance tasks, to test and evaluate the reasoning capabilities of LMMs. Our benchmark is curated from a variety of unusual and glitched scenarios from video games and aims to challenge both the visual and linguistic reasoning powers of LMMs in detecting and interpreting out-of-the-ordinary events. We evaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents a new challenge for these models. Code and data are available at: https://glitchbench.github.io/.”
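To make the evaluation setup from the abstract concrete, here is a minimal sketch of what a glitch-detection benchmark loop could look like: show a model a game screenshot, ask it to flag anything unusual, and score the answer against a ground-truth glitch description. Everything here is a hypothetical illustration, not the paper’s actual pipeline: the `query_lmm` stub, the keyword-overlap `judge`, and the sample entries are all stand-ins.

```python
# Toy sketch of an LMM glitch-detection evaluation loop. The model call and
# the keyword-overlap "judge" are illustrative stand-ins, not GlitchBench's
# actual implementation.

def query_lmm(image_path: str, prompt: str) -> str:
    """Placeholder for a real LMM call (an API or a local model).
    This stub returns a canned answer so the sketch runs end to end."""
    return "The character's arm is clipping through the wall geometry."

def judge(response: str, ground_truth: str) -> bool:
    """Toy judge: counts a response as correct if it shares enough content
    words with the ground-truth glitch description. A real setup would use
    a far more robust comparison (e.g., an LLM-based judge)."""
    stop = {"the", "a", "an", "is", "of", "in", "through", "with"}
    resp_words = {w.strip(".,").lower() for w in response.split()} - stop
    truth_words = {w.strip(".,").lower() for w in ground_truth.split()} - stop
    return len(resp_words & truth_words) / max(len(truth_words), 1) >= 0.5

# Hypothetical benchmark entries: (screenshot path, glitch description).
samples = [
    ("screenshots/clipping_001.png", "arm clipping through wall"),
]

correct = 0
for image_path, ground_truth in samples:
    response = query_lmm(image_path, "What is wrong or unusual in this image?")
    correct += judge(response, ground_truth)

print(f"Glitch detection accuracy: {correct / len(samples):.2%}")
```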

A preprint of the paper is available here.

“Micro-FL: A Fault-Tolerant Scalable Microservice-Based Platform for Federated Learning” accepted in Future Internet!

Mikael’s paper “Micro-FL: A Fault-Tolerant Scalable Microservice-Based Platform for Federated Learning” was accepted for publication in the Future Internet journal! Super congrats Mikael!

Abstract: “As the number of machine learning applications increases, growing concerns about data privacy expose the limitations of traditional cloud-based machine learning methods that rely on centralized data collection and processing. Federated learning emerges as a promising alternative, offering a novel approach to training machine learning models that safeguards data privacy. Federated learning facilitates collaborative model training across various entities. In this approach, each user trains models locally and shares only the local model parameters with a central server, which then generates a global model based on these individual updates. This approach ensures data privacy since the training data itself is never directly shared with a central entity. However, existing federated machine learning frameworks are not without challenges. In terms of server design, these frameworks exhibit limited scalability with an increasing number of clients and are highly vulnerable to system faults, particularly as the central server becomes a single point of failure. This paper introduces Micro-FL, a federated learning framework that uses a microservices architecture to implement the federated learning system. It demonstrates that the framework is fault-tolerant and scalable, showing its ability to handle an increasing number of clients. A comprehensive performance evaluation confirms that Micro-FL proficiently handles component faults, enabling a smooth and uninterrupted operation.”
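The abstract describes the standard federated averaging pattern: each client trains on its own data, ships only model parameters to a server, and the server aggregates them into a global model. Below is a minimal sketch of that pattern under simplifying assumptions (plain parameter vectors, a toy one-step “training” rule, synthetic client data); Micro-FL’s actual microservices-based, fault-tolerant server design is considerably more involved.

```python
import numpy as np

# Minimal federated-averaging sketch: clients train locally, share only
# parameters, and the server computes a weighted average. All names and the
# toy local-training step are illustrative assumptions, not Micro-FL's code.

rng = np.random.default_rng(0)

def local_update(global_params: np.ndarray, client_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """Toy local training: one gradient step toward the client's data mean.
    Raw data never leaves this function -- only parameters are returned."""
    grad = global_params - client_data.mean(axis=0)
    return global_params - lr * grad

def aggregate(updates: list[np.ndarray], weights: list[int]) -> np.ndarray:
    """Server-side step: dataset-size-weighted average of client parameters."""
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, updates)) / total

# Hypothetical setup: 5 clients, each holding a private synthetic dataset.
client_datasets = [rng.normal(loc=i, size=(100, 3)) for i in range(5)]
global_params = np.zeros(3)

for round_ in range(20):
    # Each client trains locally and shares only its updated parameters.
    updates = [local_update(global_params, data) for data in client_datasets]
    weights = [len(data) for data in client_datasets]
    global_params = aggregate(updates, weights)

print("Global model parameters after 20 rounds:", global_params)
```

The central server in this loop is exactly the single point of failure the paper targets: if the `aggregate` step dies, training halts, which motivates decomposing the server into fault-tolerant, independently scalable microservices.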

A preprint of the paper is available here.