Recommender systems have become increasingly important in the age of information overload. By recommending items that match a user's interests and preferences, they make finding relevant and useful information easier and more efficient, while benefiting sellers and producers through increased exposure and potential profits. However, the power of recommender systems to decide what information is delivered to users raises important questions about fairness.
Despite significant progress in fairness research for recommendations, numerous challenges remain unresolved. One is that existing studies overlook the lack of negative feedback in implicit feedback data, which can lead to biased recommendations, especially for items with little representation in the data. To address this issue, we proposed a Generative Adversarial Network (GAN)-based learning algorithm, called FairGAN, which maps fairness issues in recommendation to the problem of missing negative feedback in implicit feedback data. Rather than treating unobserved interactions as negative, FairGAN generates fairness signals to search for optimal rankings that fairly allocate exposure to items while maintaining high user utility. Extensive experiments demonstrate the effectiveness of FairGAN over state-of-the-art algorithms.
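The notion of fairly allocating exposure across items can be made concrete with a small sketch. The geometric position-discount model, the helper names, and the max-min unfairness score below are illustrative assumptions for exposition, not FairGAN's actual architecture or objective:

```python
import numpy as np

def position_exposure(ranking_length, gamma=0.9):
    """Position-based exposure: higher-ranked positions receive more
    attention. A geometric discount is one common choice (an assumption
    here, not FairGAN's exact exposure model)."""
    return gamma ** np.arange(ranking_length)

def item_exposure(rankings, n_items, gamma=0.9):
    """Accumulate the exposure each item receives over a set of per-user
    rankings (lists of item indices, best first)."""
    exposure = np.zeros(n_items)
    for ranking in rankings:
        exposure[ranking] += position_exposure(len(ranking), gamma)
    return exposure

def exposure_unfairness(exposure):
    """Gap between the most- and least-exposed item, as a simple
    illustrative unfairness score (lower is fairer)."""
    return float(exposure.max() - exposure.min())
```

For example, with rankings `[[0, 1, 2], [0, 2, 1]]` item 0 accumulates exposure 2.0 while items 1 and 2 each accumulate 1.71; a fairness-aware ranker would try to shrink that gap without sacrificing too much user utility.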
Another issue that has received limited attention is fairness in session-based recommender systems (SBRSs), which provide recommendations based on a user's recent interactions. Existing research on SBRSs has predominantly focused on maximizing session utility, with far less attention paid to an equally critical facet: fairness. Two dimensions of fairness are often overlooked in these studies. The first is global fairness, the principle that items across all sessions should receive a similar degree of exposure. This aspect is essential because it prevents certain items from being continuously recommended and thereby dominating the platform while others remain consistently under-represented, which could inadvertently skew customers' choices and preferences over time. The second, fairness within each session, is also largely neglected. It refers to the idea that items within a single session should have an equal chance of being exposed to customers, mitigating the risk that a single item or group of items is repeatedly recommended within the same session and thereby limits customers' exposure to the variety of items available. Both dimensions of fairness, across all sessions and within each session, are vital to the overall effectiveness and balance of SBRSs. Ignoring them could significantly reduce the diversity of recommended items and, in turn, distort customers' decision-making. Hence, this study seeks to close these gaps and make SBRSs not only more effective but also more equitable. To this end, we propose the novel concept of session-oriented fairness, which requires individual items to accumulate the same exposure within each single session.
We devise a Session-Oriented Fairness-Aware (SOFA) algorithm to achieve global fairness by maximizing session-oriented fairness while maintaining high session utility. Extensive experiments on real-world datasets demonstrate that SOFA outperforms state-of-the-art approaches in terms of both utility and fairness.
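The fairness-utility trade-off underlying session-oriented fairness can be sketched as a greedy re-ranker that penalizes items by the exposure they have already accumulated in the current session. The function name, the linear penalty `lam`, and the geometric position discount are all hypothetical illustrations, not SOFA's actual optimization:

```python
import numpy as np

def fair_rerank(scores, exposure, slate_size, lam=0.5):
    """Greedy slate construction for one session step (a sketch, not
    SOFA itself): pick items by relevance score minus a penalty on the
    exposure they have already accumulated within this session."""
    scores = np.asarray(scores, dtype=float)
    exposure = exposure.copy()
    slate = []
    for rank in range(slate_size):
        adjusted = scores - lam * exposure
        adjusted[slate] = -np.inf          # no repeats within the slate
        pick = int(np.argmax(adjusted))
        slate.append(pick)
        exposure[pick] += 0.9 ** rank      # position-discounted exposure
    return slate, exposure
```

Calling this repeatedly within one session shifts later slates toward items that have received less exposure so far, which is the intuition behind equalizing accumulated exposure within each session.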
Although the growing use of recommender systems has drawn increasing attention to fairness from academia, industry, and society, the diagnosis of fairness in recommendations has not been widely researched. Current research on fairness explanation either employs knowledge graphs to provide explainable diversity in recommendations and identify items of interest, or proposes counterfactual frameworks for feature-based explanations built on the counterfactual reasoning paradigm. However, these approaches do not consider the relationship between fairness and recommendations from the perspective of individual users and items, the fundamental components of recommender systems. In most modern recommender systems, each user's interactions with items, and each item's interactions with users, influence the recommendations delivered to all other users and items to varying degrees. By understanding how fairness relates to individual users and items, researchers and practitioners can identify sources of unfairness in the system. For example, if a particular group of users consistently receives lower-quality recommendations, researchers may investigate which users or items contribute to this unfairness. Similarly, if a particular group of items is repeatedly recommended instead of a diverse range of options, decision-makers may investigate whether particular users or items are the cause. Therefore, we finally investigated how to explain recommendation fairness from the perspective of users and items, and proposed a solution named Adding-based Counterfactual Fairness Reasoning (ACFR). Unlike traditional erasing-based and feature-based counterfactual analyses, ACFR adopts an adding-based strategy to provide fairness explanations from the perspective of interactions between users and items.
The main application of ACFR is to identify the key users and items related to recommendation fairness, whether user fairness or item fairness. Experimental results verify the superiority of the proposed solution over baseline methods in terms of recommendation fairness on benchmark datasets.
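The adding-based counterfactual idea can be illustrated with a toy sketch: for each candidate interaction one could add, recompute a fairness metric on the counterfactual interaction matrix and rank candidates by the resulting gain. The binary interaction matrix, the std-of-counts fairness proxy, and the function names are assumptions for exposition only; ACFR itself operates on a learned recommendation model rather than raw counts:

```python
import numpy as np

def item_fairness(interactions):
    """Illustrative item-side fairness proxy: negative std. dev. of item
    interaction counts (closer to 0 = more even, i.e. fairer)."""
    counts = interactions.sum(axis=0)
    return -float(np.std(counts))

def adding_based_explanation(interactions, candidates):
    """Adding-based counterfactual sketch (not the full ACFR model):
    for each candidate (user, item) interaction to ADD, measure how much
    the fairness proxy would improve, and return the candidates ranked
    by that counterfactual gain."""
    base = item_fairness(interactions)
    gains = []
    for (u, i) in candidates:
        counterfactual = interactions.copy()
        counterfactual[u, i] = 1           # add the interaction
        gains.append(((u, i), item_fairness(counterfactual) - base))
    return sorted(gains, key=lambda pair: -pair[1])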
To conclude, our research addressed various fairness issues in recommender systems that previous works ignored. For implicit feedback data-based recommendation, we proposed a GAN-based learning algorithm named FairGAN that extracts fairness signals from implicit feedback and does not consider unobserved interactions as negative. For session-based recommendations, we introduced a new concept of session-oriented fairness and proposed an algorithm, SOFA, to achieve global fairness by maximizing the proposed session-oriented fairness. Finally, we explained how fairness is related to individual users and items in recommender systems by proposing ACFR, a solution that can provide explanations for fairness by adding interactions to users or items based on counterfactual reasoning analysis.