
GB2572230A - Credibility score - Google Patents

Credibility score

Info

Publication number
GB2572230A
GB2572230A
Authority
GB
United Kingdom
Prior art keywords
users
content
credibility
determining
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1810079.2A
Other versions
GB201810079D0 (en)
Inventor
Ghulati Dhruv
Vincent Emmanuel
Robbins Martin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Factmata Ltd
Original Assignee
Factmata Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Factmata Ltd filed Critical Factmata Ltd
Publication of GB201810079D0 publication Critical patent/GB201810079D0/en
Priority to PCT/GB2019/050693 priority Critical patent/WO2019175571A1/en
Publication of GB2572230A publication Critical patent/GB2572230A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25Integrating or interfacing systems involving database management systems
    • G06F16/254Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Determining a score indicative of credibility of one or more users, the method comprising the steps of: receiving metadata in relation to each of said one or more users; receiving content generated by said one or more users; determining one or more scores in relation to said content generated by said one or more users; and determining the score indicative of credibility for each of the one or more users based on said one or more scores in relation to said content generated by the one or more users and said metadata in relation to each of said one or more users. There may also be a step of determining a score indicative of credibility of each piece of content generated by each of the one or more users. The metadata may comprise any one or more of gender, age, socio-economic status, socio-economic background, accreditations, or expertise, among others.

Description

The present invention relates to a method of determining credibility scores for users based on extrinsic signals. More particularly, the present invention relates to a method of determining a credibility score for users based on user metadata and content generated by the users.
Background
Current social credit scoring applications, where they incorporate user profiles, account for age, gender and other personal metadata, as well as a limited range of online content signals such as use of pornography or which blogs or articles the user reads. In the process of credibility scoring, very little account is taken of what a user's generated content actually entails.
Existing technologies based on the credibility of content are domain-specific, being implemented on articles, blog posts or tweets, for example. However, these technologies are not capable of analysing content on a semantic level, or of analysing a combination of different content types, and can only determine a general, high-level credibility value or score rather than the credibility of an author or journalist with respect to the specific topic of the content item in question. The credibility analysis of content is currently based on user endorsements such as likes, shares and clicks, and falls short of assessing the credibility of the actual text of comments, of the comments themselves, and of the authors of those comments.
Present scoring systems for assessing the credibility of content categorise it as, for example, true, mostly true, mostly false or false. There is a lack of insight into the output of such systems, and a need for more informative credibility outputs for comments such as “I think this is interesting”. Rather than labelling the comment, the ability to imply a quality score, for example 67%, may inform a user regarding the credibility of the content based on the credibility of the comment, and more particularly of the author of the comment.
Informative scoring of credibility may allow authors, journalists and online users to be scored regardless of their reputations or their biased ways of appealing to a certain audience. When combined with filtering systems, a new way of credibility scoring may help prevent abuse and toxicity within online platforms. Further example applications include determining a user's financial credit score and building an online resume for employers based on online content.
Summary of Invention
Aspects and/or embodiments seek to provide a method of determining a score indicative of credibility of one or more users.
According to a first aspect, there is provided a method of determining a score indicative of credibility of one or more users, the method comprising the steps of: receiving metadata in relation to each of said one or more users; receiving content generated by said one or more users; determining one or more scores in relation to said content generated by said one or more users; and determining the score indicative of credibility for each of the one or more users based on said one or more scores in relation to said content generated by the one or more users and said metadata in relation to each of said one or more users.
The method of determining a score indicative of credibility of one or more users may result in greater user engagement and serve to increase the level of content quality generated in online platforms thus reducing toxicity.
Optionally, the method further comprises a step of determining a score indicative of credibility of each piece of content generated by each of the one or more users. Optionally, the method further comprises a step of determining a score indicative of credibility of all content generated by each of the one or more users.
Credibility scores of content generated by each of the one or more users can contribute to adjusting the credibility scores of those users.
Optionally, the metadata in relation to the one or more users comprises any one or more of: gender; age; socio-economic status; socio-economic background; accreditations; financial interests; expertise; verification status; and/or other external user data.
Metadata in relation to the one or more users may further adjust the credibility score of the one or more users.
Optionally, the one or more scores comprise one or more automated scores and/or one or more user input scores. Optionally, the one or more automated scores comprise one or more scores indicative of any one or more of: contentious content; content/user bias; content/user quality; content/user credibility; and/or true/false content.
One or more automated scores and/or one or more user input scores may add to the weighting of the score indicative of credibility of the one or more users and may also enable pre-scoring of content prior to comments and endorsements being made.
Optionally, the one or more user input scores are input by highly credible users. Optionally, the step of assessing data reflective of the credibility of the one or more users comprises a step of determining any one or more of: professional affiliations; relationships with other users; interactions with other users; quality of content produced by the one or more users; quality of content associated with the one or more users; credibility of content produced by the one or more users; and/or credibility of content associated with the one or more users.
Determining any one or more of: professional affiliations; relationships with other users; interactions with other users; quality of content produced by the one or more users; quality of content associated with the one or more users; credibility of content produced by the one or more users; and/or credibility of content associated with the one or more users, can impact the credibility of one or more users through acknowledgment.
Optionally, the step of determining a score indicative of the credibility of the content generated by the one or more users further comprises a step of determining one or more genres and/or one or more topics implied in the content: optionally wherein the one or more genres and/or one or more topics implied in the content is compared against one or more genres and/or one or more topics implied in one or more directly related contents.
The step of determining one or more genres and/or one or more topics implied in the content can serve to identify the relevance of the content within a context.
Optionally, the score indicative of the credibility of the content generated by the one or more users is further determined by the combination of the score indicative of the credibility of the content and the score indicative of the credibility of the user.
Optionally, further comprising the step of generating a financial credit score for at least one of the one or more users.
According to a second aspect, there is provided an apparatus operable to perform the method of any preceding feature.
According to a third aspect, there is provided a system operable to perform the method of any preceding feature.
According to a fourth aspect, there is provided a computer program operable to perform the method and/or apparatus and/or system of any preceding feature.
Brief Description of Drawings
Embodiments will now be described, by way of example only and with reference to the accompanying drawings having like-reference numerals, in which:
Figure 1 shows a flow diagram of user-user, content-user and content-content relationships;
Figure 2 shows a flow diagram linking authors, contents and annotations depicting a credibility score for each of the authors, contents and annotations; and
Figure 3 shows tags in relation to a content and comments linked with the content with indication of content and author credibility scores.
Figure 4 shows examples of reputation functions obtained.
Specific Description
Referring to Figures 1, 2 and 3, example embodiments for a method of determining a score indicative of credibility of one or more users online will now be described.
In an embodiment, a method and system are provided for assessing the quality of content generated by a user, and the user's position within a credibility graph, in order to generate a reliable credibility score. The credibility score may be determined for a person, organisation, brand or piece of content by calculation using a combination of extrinsic signals, content signals, and position within the credibility graph. The method may further be capable of determining a credibility score for the user who generated the content by combining the score indicative of the credibility of the content and the score indicative of the credibility of the user.
Thus, a credibility score is built through a combination of data mining, automated scoring and endorsements by other credible agents as will be described herein.
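As an illustration, the combination of data mining, automated scoring and endorsements might be sketched as a weighted sum of normalised signals. The signal names and weights below are hypothetical; the document does not specify a particular combination function.

```python
def credibility_score(metadata_score, content_scores, endorsement_score,
                      weights=(0.2, 0.5, 0.3)):
    """Combine extrinsic (metadata), content-quality, and endorsement signals
    into a single credibility score in [0, 1].

    The three-way split and the weights are illustrative assumptions, not
    the method as claimed.
    """
    w_meta, w_content, w_endorse = weights
    # Average the per-item content scores (e.g. bias, quality, credibility);
    # fall back to a neutral 0.5 when the user has produced no content yet.
    avg_content = sum(content_scores) / len(content_scores) if content_scores else 0.5
    score = (w_meta * metadata_score
             + w_content * avg_content
             + w_endorse * endorsement_score)
    return round(score, 3)
```

With the default weights, a user with strong metadata (0.8), two content scores (0.6 and 0.9) and an endorsement signal of 0.7 would score 0.745.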
In an embodiment, extrinsic signals may include metadata in relation to the one or more users comprising: gender; age; socio-economic status; socio-economic background; accreditations; financial interests; expertise; verification status; and/or other external user data.
In this embodiment, user or author credibility, as shown as 204 in Figure 2, can be based on the examples as follows:
• The author's expertise
• The author is famous for independence and/or bravery
• The author is followed by respectable people
• The author is followed by someone deeply respectable
• Credibility of the author
• Financial interests of the author
• Platforms and key sources used by the author
• Reputation of the author
• Credentials of the author
• The author or source is trusted by people I respect/trust
• Verifiable author
• The author is someone I trust to be conscientious and meticulous, or who has a good track record in terms of generated content
• Name of the author
• A particular subject the author has written about
• Errors or bias in content written by the author
Author and content credibility scores are based on an endorsement network (the ‘credibility graph’), external credentials, and feedback on evaluations of online content.
In an embodiment, content signals may include one or more automated scores indicative of a number of factors such as: contentious content; content/user bias; content/user quality; content/user credibility; and/or true/false content. Content signals may also include manually input credibility scores, which may be input by users of highly credible status.
In this embodiment, the credibility feedback, shown as 202 in Figure 2, may be derived from an assessment of the quality of user-generated content through neural networks and other algorithms detecting, for example, hate speech, hyper-partisanship or false claims, and through other forms of quality and credibility scoring systems.
In an embodiment, the position of a user within a credibility graph may be determined by analysing and assessing data reflective of the user's credibility. Figure 1 (100) shows an example flow of user-user, content-user and content-content interactions online. The user's position may be determined based on various factors such as: the user's professional affiliations; relationships and/or interactions the user has with other users; the quality of content produced by the user; the quality of content associated with the user; the credibility of content produced by the user; and the credibility of other content associated with the user.
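One way to realise a user's "position within the credibility graph" is an iterative propagation of credibility over the endorsement graph, in the spirit of PageRank. The update rule, damping factor and data structures below are illustrative assumptions, not the document's specified algorithm.

```python
def propagate_credibility(endorsements, base_scores, damping=0.85, iters=50):
    """Propagate credibility over an endorsement graph.

    `endorsements` maps each user to the set of users they endorse;
    `base_scores` gives each user's intrinsic score from metadata and
    content signals. Each user's final score mixes their intrinsic score
    with credibility flowing in from endorsers (hypothetical sketch).
    """
    users = list(base_scores)
    scores = dict(base_scores)
    for _ in range(iters):
        new = {}
        for u in users:
            # Credibility flows in from each endorser, split evenly
            # across everyone that endorser endorses.
            inflow = sum(scores[v] / len(endorsements[v])
                         for v in users if u in endorsements.get(v, ()))
            new[u] = (1 - damping) * base_scores[u] + damping * inflow
        scores = new
    return scores
```

For example, a user endorsed by two others ends up with a higher score than an otherwise identical user whom nobody endorses.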
In other embodiments, additional factors can contribute to the overall credibility score of content and users. One example, shown as 300 in Figure 3, is analysing the genre or the specific topic embedded within the content, whether explicitly stated or implicitly mentioned. In this example, the genre or topic within the content is compared against related content, such as comments on a blog post. The comparison may indicate the level of relevance of the comment in relation to that blog post.
In some embodiments, bias assessment of content, which may contribute to author credibility, may be carried out using crowdsourced bias assessments. For example, articles may be drawn from a pilot study representing a corpus of, for example, 1,000 articles on which ads had been displayed for the account of a customer; these thus form a sample of highly visited news articles from mainstream media as well as from more partisan, blog-like “news” sources. Platforms such as Crowdflower may be used to present these articles to participants, who are asked to read each article's webpage and answer the question “Overall, how biased is this article?”, providing one answer from the following bias scale, or any other bias scale: 1. Unbiased; 2. Fairly unbiased; 3. Somewhat biased; 4. Biased; 5. Extremely biased.
In order to guide crowdsourced assessments, contributors may be provided with more details regarding how to classify articles, in the form of a general definition of a biased article as well as examples of articles with their expected classification. An example instruction template is provided as follows.
“Definition: Biased articles provide an unbalanced point of view in describing events; they are either strongly opposed to or strongly in favour of a person, a party, a country... Very often the bias is about politics (e.g. the article is strongly biased in favour of Republicans or Democrats), but it can be about other entities (e.g. anti-science bias, pro-Brexit bias, bias against a country, a religion...). A biased article supports a particular position, political view, person or organization with overly suggestive support or opposition and with disregard for accuracy, often omitting valid information that would run counter to its narrative. Often, extremely biased articles attempt to inflame emotion using loaded language and offensive words to target and belittle the people, institutions, or political affiliations they dislike.

Rules and Tips: Rate the article on the “bias scale” following these instructions:
• Provide a rating of 1 if the article is not biased at all; the article might discuss cooking, movies, lifestyle... or talk about politics in a neutral and factual way.
• Provide a rating of 2 if the article is fairly unbiased; the article might talk about contentious topics, like politics, but remains fairly neutral.
• Provide a rating of 3 if the article is somewhat biased or if it is impossible to determine its bias, or the article is ambivalent (i.e. biased both for and against the same entity).
• Provide a rating of 4 if the article is clearly biased; it overtly favors or denigrates a side, typically an opinion piece with little fairness.
• Provide a rating of 5 if the article is extremely biased / hyper-partisan; it overtly favors a side in emphatic terms and/or belittles the other ‘side’, with disregard for accuracy, and attempts to incite an action or emotion in the reader.
Please do not include your own personal political opinion on the subject of the article or the website itself. If you agree with the bias of the article, you should still tag it as biased. Try to remove any sense of your personal political beliefs, and critically examine the language and the way the article has been written.
Please do not pay attention to other information on the webpage (page layout, other articles, advertising etc.). Only the content of the article is relevant here: text, hyperlinks in it, photos and videos within the text of the article. Also, do not look at the title of the website, its name, or how it looks - just examine the article in front of you and its text.
Do not answer randomly, submissions may be rejected if there is evidence that a worker is providing spam responses. Do not skip the rating, providing an overall bias is required.”
A suitable bias scale may be chosen to allow contributors to express their degree of certainty, for example leaving the central value on the scale (3) for when they are unsure about the article bias while the values 1 and 2 or 4 and 5 represent higher confidence that the article is respectively unbiased or biased to a more (1 and 5) or less (2 and 4) marked extent. Fifty participants contributed to the labelling and five to fifteen contributors assessed each article.
In an embodiment, to assess the reliability of contributors within a crowdsourced platform, one or more expert annotators (such as a journalist and a fact-checker) may be asked to estimate which bias ratings should be counted as acceptable for a number of articles within the dataset.
For each article in this particular or ‘gold’ dataset, the values provided by the two experts are merged. Two values are typically found to be acceptable for an article (most often 1 and 2, or 4 and 5); sometimes three values are deemed acceptable, and less often only one value: typically when both experts agree that the article is either clearly extremely biased or not biased at all (e.g. because it covers a trivial and non-confrontational topic in the latter case). When the experts disagree on the nature of the bias, with the set of acceptable ratings strictly greater than three for one expert and strictly lower than three for the other, the article is not included in the ‘gold’ dataset.
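The merging rule described above can be sketched as follows. The data structures (a per-article set of acceptable ratings for each expert) are assumptions for illustration.

```python
def build_gold(expert1, expert2):
    """Merge two experts' sets of acceptable bias ratings per article.

    An article is dropped from the 'gold' dataset when the experts disagree
    on the direction of the bias (one expert's acceptable ratings are all
    strictly greater than 3 while the other's are all strictly lower than 3).
    Otherwise the union of the two sets is kept. Hypothetical sketch.
    """
    gold = {}
    for article in expert1.keys() & expert2.keys():
        a, b = expert1[article], expert2[article]
        if (min(a) > 3 and max(b) < 3) or (min(b) > 3 and max(a) < 3):
            continue  # experts disagree on the nature of the bias
        gold[article] = a | b  # e.g. {4} and {4, 5} merge to {4, 5}
    return gold
```

For instance, ratings {4, 5} and {4} merge into an acceptable set {4, 5}, while {1, 2} against {4, 5} excludes the article.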
In one approach to assessing the quality of data collected through crowdsourcing, contributors' ratings may be compared against the ‘gold’ dataset ratings. Building on the “Beta reputation system” framework (Ismail and Josang 2002), users' reliability can be represented in the form of a beta probability density function. The beta distribution β(p|α,β) can be expressed using the gamma function Γ as:

β(p|α,β) = Γ(α + β) / (Γ(α)·Γ(β)) · p^(α−1) · (1 − p)^(β−1)    (1)

where p is the probability that a contributor will provide an acceptable rating, and α and β are the numbers of ‘correct’ (respectively ‘incorrect’) answers as compared to the gold. In order to account for the fact that not all incorrect answers are equally far from the gold, the incorrect answers may be weighted as follows: an incorrect answer is weighted by a factor of 1, 2, 5 or 10 respectively if its shortest distance to an acceptable answer is 1, 2, 3 or 4. So β is incremented by 10 (resp. 2) for a contributor providing a rating of 1 (resp. 4) while the gold is 5 (resp. 2), for example. In embodiments, the expectation value of the beta distribution, R = α / (α + β), may be used as a simple measure of the reliability of each contributor. Figure 4 shows examples of reputation functions obtained for (a) a user with few verified reviews, (b) a contributor of low reliability and (c) a user of high reliability.
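The reliability estimate and distance-based weighting can be sketched as follows, assuming each rating is compared against a per-article set of acceptable ‘gold’ values.

```python
# Weight applied to an incorrect answer as a function of its shortest
# distance to an acceptable gold rating (1, 2, 5 or 10, per the description).
DISTANCE_WEIGHT = {1: 1, 2: 2, 3: 5, 4: 10}

def reliability(ratings):
    """Estimate a contributor's reliability R = alpha / (alpha + beta).

    `ratings` is a list of (given_rating, acceptable_ratings) pairs against
    the 'gold' dataset. Correct answers increment alpha by 1; incorrect
    answers increment beta by the distance weight above.
    """
    alpha = beta = 0
    for given, acceptable in ratings:
        dist = min(abs(given - a) for a in acceptable)
        if dist == 0:
            alpha += 1
        else:
            beta += DISTANCE_WEIGHT[dist]
    # With no observations, return a neutral prior of 0.5 (an assumption).
    return alpha / (alpha + beta) if (alpha + beta) else 0.5
```

So a contributor who rates 1 while the gold is 5 has β incremented by 10, matching the example in the text.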
In an embodiment, the goal may be to determine each article's bias and a degree of confidence in that classification based on signals provided by the crowd. A straightforward way to obtain an overall rating is simply to take each assessment as a ‘vote’ and average these to obtain a single value for the article. However, to try to get closer to an objective assessment of the article's bias, an approach of weighting each rating by the reliability of the contributor may be tested. In some embodiments, a ‘linear’ weight is used, for which a user's rating is weighted by its reliability R, as well as a more aggressive ‘exponential’ weight, for which a user's rating is weighted exponentially in R, so that an absolutely reliable (R = 1) contributor's rating would weigh a hundred times more than that of a contributor of reliability R = 0.5.
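The three aggregation schemes can be sketched as below. The exact exponential function is not specified in the text; the form 10^(4R) used here is one assumption that satisfies the stated property that an R = 1 contributor weighs 100 times more than an R = 0.5 contributor.

```python
def aggregate_bias(ratings, reliabilities, scheme="linear"):
    """Aggregate crowd bias ratings into one value for an article.

    `ratings` is a list of (user, rating) pairs; `reliabilities` maps each
    user to their reliability R in [0, 1]. Schemes: 'vote' (plain average),
    'linear' (weight = R), 'exponential' (weight = 10**(4*R), an assumed
    form matching the described 100x ratio between R=1 and R=0.5).
    """
    def weight(r):
        if scheme == "vote":
            return 1.0
        if scheme == "linear":
            return r
        return 10 ** (4 * r)  # exponential scheme

    total = sum(weight(reliabilities[u]) * rating for u, rating in ratings)
    norm = sum(weight(reliabilities[u]) for u, _ in ratings)
    return total / norm
```

Under the exponential scheme, the rating of a fully reliable contributor almost entirely dominates that of an R = 0.5 contributor, whereas a plain vote treats them equally.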
Using a probabilistic framework allows for the estimation of the confidence of users' reliability scores. Weighting users' contributions by their reliability score increases the clarity of the data and allows identification of the articles that have been confidently classified by the consensus of high-reliability users, for training one or more machine learning algorithms. In practice, high-reliability contributors may disagree on the bias rating for about a third of the articles; these articles may be used to train one or more machine learning models to recognize uncategorizable articles in addition to biased and unbiased ones.
In some embodiments, an important next step may be to learn about potential contributors' bias from the pattern of their article ratings: for instance, a contributor might systematically provide more “left-leaning” or “right-leaning” ratings than others, which could be taken into account as an additional way to generate objective classifications. Another avenue of research is to mitigate possible bias in the gold dataset. This can be achieved by broadening the set of experts providing acceptable classifications and/or by also calculating a reliability score for the experts, who would start with a high prior reliability but have their reliability decrease if their ratings diverge from a classification on which other users reach consensus.
In an embodiment, the method of determining the credibility score of users may further generate a financial credit score for users more particularly based on the combination of user credibility and content credibility.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure.
Any feature in one aspect may be applied to other aspects, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects can be implemented and/or supplied and/or used independently.

Claims (12)

CLAIMS:
1. A method of determining a score indicative of credibility of one or more users, the method comprising the steps of:
receiving metadata in relation to each of said one or more users;
receiving content generated by said one or more users;
determining one or more scores in relation to said content generated by said one or more users; and
determining the score indicative of credibility for each of the one or more users based on said one or more scores in relation to said content generated by the one or more users and said metadata in relation to each of said one or more users.
2. The method of Claim 1 further comprising a step of determining a score indicative of credibility of each piece of content generated by each of the one or more users.
3. The method of any preceding claim further comprising a step of determining a score indicative of credibility of all content generated by each of the one or more users.
4. The method of any preceding claim wherein the metadata in relation to the one or more users comprises any one or more of: gender; age; socio-economic status; socio-economic background; accreditations; financial interests; expertise; verification status; and/or other external user data.
5. The method of any preceding claim wherein the one or more scores comprise one or more automated scores and/or one or more user input scores.
6. The method of any preceding claim wherein the one or more automated scores comprise one or more scores indicative of any one or more of: contentious content; content/user bias; content/user quality; content/user credibility; and/or true/false content.
7. The method of any preceding claim optionally wherein the one or more user input scores are input by highly credible users.
8. The method of any preceding claim wherein the step of assessing data reflective of the credibility of the one or more users comprises a step of determining any one or more of: professional affiliations; relationships with other users; interactions with other users; quality of content produced by the one or more users; quality of content associated with the one or more users; credibility of content produced by the one or more users; and/or credibility of content associated with the one or more users.
9. The method of any preceding claim wherein the step of determining a score indicative of the credibility of the content generated by the one or more users further comprises a step of determining one or more genres and/or one or more topics implied in the content; optionally wherein the one or more genres and/or one or more topics implied in the content is compared against one or more genres and/or one or more topics implied in one or more directly related contents.
10. The method of any preceding claim wherein the score indicative of the credibility of the content generated by the one or more users is further determined by the combination of the score indicative of the credibility of the content and the score indicative of the credibility of the user.
11. The method of any preceding claim further comprising the step of generating a financial credit score for at least one of the one or more users.
12. A computer program product comprising software code and/or a computer readable medium for carrying out the method, system or functional requirements of any preceding claim.
GB1810079.2A 2018-03-12 2018-06-19 Credibility score Withdrawn GB2572230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2019/050693 WO2019175571A1 (en) 2018-03-12 2019-03-12 Combined methods and systems for online media content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB1804295.2A GB201804295D0 (en) 2018-03-16 2018-03-16 Credibility score

Publications (2)

Publication Number Publication Date
GB201810079D0 GB201810079D0 (en) 2018-08-08
GB2572230A true GB2572230A (en) 2019-09-25

Family

ID=62017934

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB1804295.2A Ceased GB201804295D0 (en) 2018-03-12 2018-03-16 Credibility score
GB1810079.2A Withdrawn GB2572230A (en) 2018-03-12 2018-06-19 Credibility score

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB1804295.2A Ceased GB201804295D0 (en) 2018-03-12 2018-03-16 Credibility score

Country Status (1)

Country Link
GB (2) GB201804295D0 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10747837B2 (en) 2013-03-11 2020-08-18 Creopoint, Inc. Containing disinformation spread using customizable intelligence channels

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100042928A1 (en) * 2008-08-12 2010-02-18 Peter Rinearson Systems and methods for calculating and presenting a user-contributor rating index
US20160179805A1 (en) * 2014-12-17 2016-06-23 International Business Machines Corporation Calculating expertise confidence based on content and social proximity

Also Published As

Publication number Publication date
GB201804295D0 (en) 2018-05-02
GB201810079D0 (en) 2018-08-08


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)