8 things a reputation system must have

Masked PAY
4 min read · Jun 5, 2021

In this post we cover the building blocks of reputation systems. At Verify, we’re building a reputation protocol on the Binance Smart Chain, and this series is our effort to share what we’ve learned with the wider crypto community.

Vavilis et al.* proposed a reference model for analyzing reputation systems: after surveying existing systems, they identified a set of key properties a reputation system should possess. In this post we explore those properties.

First, let us establish a definition of reputation. We use the one proposed by Witkowski et al.**: reputation is a measure of a user’s trustworthiness based on their past behavior. Given enough data, past behavior can be used to predict future outcomes.

How a reputation system functions matters to end users. Knowing how the system assesses entities and transactions ultimately leads to greater end-user trust in it.

Users’ experiences are key to establishing a reputation system, so experiences should be carefully recorded and rated. For sellers, a good reputation can mean being paid in advance; for buyers, it means greater confidence in the transaction.

A reputation system should have these 8 things:

1- There should be ratings that accurately capture one user’s judgment of another, based on a specific transaction. Ratings help identify desired behavior: if a user is unsatisfied with another user or with an item, the rating should clearly signal that trust is shaky.

2- There should be reputation values. A reputation value should accurately represent a user’s behavior, spanning the entire range from desired (good) to undesired (bad). Note also that reputation values should not be expressed relative to other users, i.e. as a ranking; ranking users by reputation may give a misleading picture of their behavior.

3- There should be a mechanism to identify and combat inaccurate ratings. This is a key component of assessing reputation values: incorrect ratings fed into the calculation produce false reputation. Incorrect ratings can arise in several ways. A user might accidentally submit a low rating for a transaction; an attacker might deliberately give sellers low ratings; or users might down-rate others just for the sake of winning some competition. Attackers may also create fake accounts to attack other users or to inflate their own reputation through self-promoting attacks. One way to identify incorrect ratings is to trace the origin of each rating: if ratings from different accounts originate from the same user or machine, they should undergo further analysis, and if the outcome of that analysis is suspicious, the ratings can be held for further verification.
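The origin check described above can be sketched as follows. This is a minimal illustration, not Verify’s implementation; the field names (`rater`, `target`, `score`, `origin`) and the idea of using an IP or device fingerprint as the origin are assumptions for the example.

```python
from collections import defaultdict

def flag_suspicious_ratings(ratings):
    """Flag ratings whose origin (e.g. an IP address or device
    fingerprint) is shared by more than one rater account -- a
    possible signal of fake accounts or collusion."""
    accounts_by_origin = defaultdict(set)
    for r in ratings:
        accounts_by_origin[r["origin"]].add(r["rater"])

    # An origin used by several distinct rater accounts is suspicious.
    suspicious_origins = {
        origin for origin, accounts in accounts_by_origin.items()
        if len(accounts) > 1
    }
    return [r for r in ratings if r["origin"] in suspicious_origins]

ratings = [
    {"rater": "alice",    "target": "bob", "score": 5, "origin": "ip-1"},
    {"rater": "mallory1", "target": "bob", "score": 1, "origin": "ip-2"},
    {"rater": "mallory2", "target": "bob", "score": 1, "origin": "ip-2"},
]
held = flag_suspicious_ratings(ratings)  # the two ratings from ip-2
```

Ratings returned by the function would then be held for the further verification step described above rather than entering the reputation calculation directly.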

4- Self-rating should be prohibited. Rating oneself invites self-promoting attacks, which ultimately produce invalid reputation values that pervade the system. By forbidding self-rating, you combat a whole class of incorrect ratings and end up with higher-quality reputation values.

Self-rating is typically prevented with an authentication mechanism that stops a user from holding multiple accounts, or from using a single account to self-promote. As in #3 above, identifying the origin of a rating can also help detect self-rating.
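A sketch of that check, assuming the authentication layer can resolve each account to a verified identity (the `owner_of` mapping here is hypothetical):

```python
def is_self_rating(rater, target, owner_of):
    """Return True when the rater and target accounts resolve to the
    same underlying user, covering both a direct self-rating and a
    sock-puppet account controlled by the same person."""
    return owner_of.get(rater) == owner_of.get(target)

# acct1 and acct2 belong to the same verified identity.
owner_of = {"acct1": "user-A", "acct2": "user-A", "acct3": "user-B"}

is_self_rating("acct1", "acct1", owner_of)  # True: direct self-rating
is_self_rating("acct1", "acct2", owner_of)  # True: sock puppet
is_self_rating("acct1", "acct3", owner_of)  # False: distinct users
```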

5- When computing the actual reputation value, various factors (including the rating itself) are aggregated into a single value. There must be sufficient, meaningful information to compute an accurate and meaningful reputation value. In an eCommerce system, for instance, there are low-value and high-value transactions. A seller could sell many cheap items to build a reputation and later use that reputation to commit high-value fraud. The reputation value should address this attack vector by ensuring the transaction value is factored into the calculation.
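One simple way to factor transaction value into the aggregate is a value-weighted average. This is an illustrative scheme, not the formula from the referenced paper; ratings here are assumed to be normalized to [0, 1].

```python
def weighted_reputation(transactions):
    """Weight each rating by its transaction value so that many cheap
    sales cannot outweigh a single high-value transaction."""
    total_value = sum(t["value"] for t in transactions)
    if total_value == 0:
        return 0.0
    return sum(t["rating"] * t["value"] for t in transactions) / total_value

# 9 perfect low-value sales, then 1 fraudulent high-value sale:
txs = [{"rating": 1.0, "value": 10}] * 9 + [{"rating": 0.0, "value": 910}]
weighted_reputation(txs)  # 0.09 -- reputation collapses despite 9 good ratings
```

An unweighted average over the same history would be 0.9, which is exactly the misleading picture the attacker is counting on.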

6- Past interactions are vital to building a reputation. User behavior can change over time, so it is important to consider both past and current interactions when calculating a reputation value. For example, a user can behave compliantly for a period to build a reputation, only to change their behavior afterwards. Keep track of the timestamp of each transaction so that changes in behavior are captured as transactions accumulate.
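One common way to make recent behavior dominate is exponential time decay on the rating weights. The half-life below is a tunable assumption for illustration, not a value from the post or the referenced paper.

```python
import time

def decayed_reputation(transactions, now=None, half_life_days=90.0):
    """Average ratings with exponentially decaying weights, so a rating
    half_life_days old counts half as much as one made just now."""
    now = time.time() if now is None else now
    total_weight = 0.0
    weighted_sum = 0.0
    for t in transactions:
        age_days = (now - t["timestamp"]) / 86400  # seconds -> days
        weight = 0.5 ** (age_days / half_life_days)
        total_weight += weight
        weighted_sum += t["rating"] * weight
    return weighted_sum / total_weight if total_weight else 0.0

DAY = 86400
now = 1_700_000_000
txs = ([{"rating": 1.0, "timestamp": now - 400 * DAY}] * 5   # compliant past
       + [{"rating": 0.0, "timestamp": now - 1 * DAY}] * 2)  # recent misbehavior
decayed_reputation(txs, now=now)  # well below the raw mean of ~0.71
```

With decay, the five old compliant transactions carry little weight, so the reputation reflects the recent change in behavior.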

7- New users are critical to the reputation system. When they first participate, their behavior is unknown, and they should not be treated as bad users; there should be a clear distinction between a bad user and a new user. At the same time, the system needs a mechanism to prevent abuse of new-user status: someone with a bad reputation can always create a new account and start over (a technique known as white-washing). Several countermeasures can be taken. A clear distinction can be drawn by labeling new users with a “new” label, and a fee on account creation can discourage bad users from re-entering as new. Further measures can be employed to detect when a user with a bad reputation attempts to re-enter the system.
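The “new” label can be as simple as withholding a numeric score until a user has enough history. The threshold here is an assumed parameter, purely for illustration.

```python
MIN_HISTORY = 5  # assumed threshold before a numeric score is shown

def display_reputation(user):
    """Show 'new' for users with too little history to judge, so an
    unknown newcomer is never confused with a low-scoring bad user."""
    if user["num_transactions"] < MIN_HISTORY:
        return "new"
    return f'{user["score"]:.2f}'

display_reputation({"num_transactions": 2,  "score": 0.0})   # "new"
display_reputation({"num_transactions": 40, "score": 0.87})  # "0.87"
```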

8- Users should not be able to modify ratings or reputation values beyond a certain timeout period; the window exists only so that a user who accidentally submitted a rating can correct it. Ratings and reputation values should be encrypted to prevent unauthorized changes. This is trivial in centralized systems, but in decentralized systems the implementation must be carefully planned.
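The timeout rule can be enforced with a simple check at modification time. The 24-hour window below is an assumed value; the post does not specify one.

```python
import time

EDIT_WINDOW_SECONDS = 24 * 3600  # assumed grace period for corrections

def try_modify_rating(rating, new_score, now=None):
    """Allow a rater to correct an accidental rating only within a
    fixed window after submission; afterwards it is immutable."""
    now = time.time() if now is None else now
    if now - rating["submitted_at"] > EDIT_WINDOW_SECONDS:
        raise PermissionError("rating is immutable after the edit window")
    rating["score"] = new_score
    return rating

r = {"submitted_at": 1_000.0, "score": 5}
try_modify_rating(r, 4, now=1_000.0 + 3600)  # within window: allowed
# try_modify_rating(r, 1, now=1_000.0 + 2 * 86400) would raise PermissionError
```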

References:
* S. Vavilis, M. Petkovic, N. Zannone. A reference model for reputation systems. Decision Support Systems 61 (2014), pp. 147–154.
** M. Witkowski, A. Artikis, J. Pitt. Experiments in building experiential trust in a society of objective-trust based agents. Trust in Cyber-societies, LNCS 2246, Springer, 2001, pp. 111–132.
