Recently, recommender systems have played an important role in improving web user experience and increasing profits. Recommender systems exploit users' behavioral history (i.e., feedback on items) to build models. This feedback typically includes explicit feedback (e.g., ratings) and implicit feedback (e.g., browsing history, click logs), both of which are useful for improving recommendations. However, to the best of our knowledge, no existing work integrates explicit feedback and multiple kinds of implicit feedback simultaneously. We therefore propose a unified and flexible model, named MFPR, that makes full use of multiple types of feedback within a personalized ranking framework. To train MFPR, we design an algorithm that generates ordered item pairs as labeled data, taking into account both rating scores and multiple implicit feedback signals. Extensive experiments on two real-world datasets validate the effectiveness of the MFPR model: with the integration of multiple types of feedback, MFPR significantly improves recommendation performance.
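The abstract above mentions generating ordered item pairs as labeled training data from rating scores and multiple implicit feedback signals. A minimal sketch of how such pair generation might look for a single user; the ordering rules and function name below are illustrative assumptions, not the paper's actual algorithm:

```python
def generate_ordered_pairs(ratings, implicit):
    """Yield (preferred, less_preferred) item pairs for one user.

    ratings:  dict item -> explicit rating score (e.g. 1-5)
    implicit: dict item -> count of implicit signals (clicks, views)

    Illustrative ordering rules (assumptions, not the paper's method):
      1. a higher-rated item is preferred over a lower-rated one
      2. any rated item is preferred over an implicit-only item
      3. among implicit-only items, more signals means more preferred
    """
    pairs = []
    rated = list(ratings)
    only_implicit = [i for i in implicit if i not in ratings]
    # Rule 1: compare rated items by explicit score
    for a in rated:
        for b in rated:
            if ratings[a] > ratings[b]:
                pairs.append((a, b))
    # Rule 2: explicit feedback dominates implicit-only feedback
    for a in rated:
        for b in only_implicit:
            pairs.append((a, b))
    # Rule 3: compare implicit-only items by signal count
    for a in only_implicit:
        for b in only_implicit:
            if implicit[a] > implicit[b]:
                pairs.append((a, b))
    return pairs
```

Each resulting pair can then serve as one labeled training instance for a pairwise ranking objective, with the first item required to score higher than the second.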
Collective socialization involves introducing new members to an organization as a group or cohort. In traditional offline organizations, collective socialization is a standard and effective socialization strategy. This paper investigates the impact of collective socialization on newcomers' motivation and learning in an online community, and in particular its effect on how newcomers react to feedback from the community. One observational field study and two random-assignment experiments involving editing Wikipedia show that collective socialization altered the way newcomers responded to feedback from the community. The observational study of students editing Wikipedia articles as part of a classroom assignment found that those who worked relatively independently, without peer support, made more edits in response to critical, negative feedback, presumably to fix errors, whereas students who had peer support did not. Two experiments in which Mechanical Turk workers edited Wikipedia articles independently or in a group found that working in a group diffused the impact of both positive and negative feedback. We discuss these findings as well as design considerations for implementing collective socialization online.
It has been observed that different media outlets exert bias in the way they report the news, which subtly shapes readers' knowledge by filtering what they read. Understanding bias in news media is therefore fundamental to obtaining a holistic view of a news story. Traditional work has focused on bias in the form of "agenda setting," in which more attention is allocated to stories that fit an outlet's biased narrative. Detecting this kind of bias is straightforward, since it can be measured by counting the occurrences of different stories or themes within documents. However, such methods are not applicable to bias that is implicit in wording, namely "framing" bias. According to framing theory, biased communicators select and emphasize certain facts and interpretations over others when telling a story; by focusing on facts and interpretations that conform to their bias, they can tell the story in a way that suits their narrative. Automatic detection of framing bias is challenging, since nuances in wording can change the interpretation of a story. In this work, we investigate how the subtle patterns hidden in a news agency's language use can be discovered and leveraged to detect frames. In particular, we aim to identify the type and polarity of the frame in a sentence. Extensive experiments are conducted on real-world data from different countries, and a case study is further provided to reveal possible applications of the proposed method.
The use of game elements within virtual citizen science is increasingly common, promising to bring increased user activity, motivation, and engagement to large-scale scientific projects. However, there is an ongoing debate about whether gamifying such systems actually increases motivation and engagement in the long term. While gamification itself is receiving a large amount of attention, there has been little work beyond individual studies to assess its suitability or success for citizen science; similarly, while frameworks exist for assessing citizen science performance, they tend to lack any appreciation of the effects that game elements might have had. We therefore review the literature to determine the trends regarding the performance of particular game elements or characteristics in citizen science, and survey existing projects to assess how popular different game features are. Investigating this phenomenon further, we then present the results of a series of interviews carried out with the EyeWire citizen science project team to understand more about how gamification elements are introduced, monitored, and assessed in a live project. Our findings suggest that projects use a range of game elements, with points and leaderboards being the most popular, particularly in projects that describe themselves as `games'. Currently, gamification appears to be effective in citizen science for maintaining engagement with existing communities, but shows limited impact for attracting new players.