Messaging Identity Reputation

Goal

Describe identity, reputation, and sybil challenges in the network. Propose a simple option for boosting a messaging identity’s reputation through identity verification carried out by messaging peers.

Background and relevant context

Identity in XMTP

XMTP contains a social graph made up of messaging nodes connected to one another through edges. Each node is represented by an identity. An identity is described by both stated attributes and reputation.

Reputation

An identity’s reputation changes with time. It is determined by the identity’s activities and inactivity, behaviors, and performance. Critically, reputation depends on others’ perception of those specific activities and inactivity, behaviors, and performance.

A node’s reputation can be upgraded or degraded by its activities and inactivity, behaviors, and performance, and by the perception of such by others.

All newly created, unfunded EVM addresses begin life with a neutral reputation.

Attacker motivation

Attackers do not have unlimited resources. They must choose where to deploy resources based on the expected value and cost of the full set of available opportunities. Attackers choose opportunities with the highest return on investment (ROI).

The influx of new users to XMTP has increased the set of positive ROI opportunities for attackers in the form of phishing attacks.

Post delivery filtering and indexing defense

Honest network participants would like to make phishing attacks a negative ROI opportunity. Available strategies include transaction fee mechanisms and post delivery filtering and indexing algorithms (see Spam defense classification). The focus of this proposal is post delivery filtering and indexing algorithms.

These algorithms are developed and maintained by user inbox app providers. They attempt to promote messages from neutral-to-positive reputation messaging identities, and to demote or filter out messages from negative reputation messaging identities. The possibility that phishing messages never reach the user’s attention lowers the attacker’s expected value, and can result in a negative ROI.
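
As a rough illustration, a filtering step of this kind might look like the sketch below. The reputation lookup, types, and triage rule are assumptions made for illustration only; they are not part of any existing XMTP API.

```typescript
// Hypothetical reputation-based inbox triage. The Reputation type, the
// reputationOf() lookup, and the promote/demote rule are illustrative
// assumptions, not an existing XMTP interface.
type Reputation = "negative" | "neutral" | "positive";

interface InboundMessage {
  senderAddress: string;
  content: string;
}

// Placeholder: a real inbox app would consult attestations, on-chain
// history, user block lists, etc.
function reputationOf(senderAddress: string): Reputation {
  return "neutral";
}

function triageInbox(messages: InboundMessage[]): {
  promoted: InboundMessage[];
  demoted: InboundMessage[];
} {
  const promoted: InboundMessage[] = [];
  const demoted: InboundMessage[] = [];
  for (const message of messages) {
    // Promote neutral-to-positive senders, demote negative ones.
    if (reputationOf(message.senderAddress) === "negative") {
      demoted.push(message);
    } else {
      promoted.push(message);
    }
  }
  return { promoted, demoted };
}
```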

Sybil

A single attacker can control more than one messaging identity in the network. This actor can bypass filtering algorithms by using addresses that have not yet accrued negative reputation.

Motivation

An attacker’s ability to create many new sybil addresses with a neutral reputation increases the potential for positive ROI attacks. This, in turn, leads to more spam in the network.

Proposal

Verifiable identity providers can use the XMTP messaging interface to create identity verification workflows. Honest users can upgrade their reputation by successfully completing the provider’s workflow.

In the above workflow, Alice invites a captcha bot provider to a conversation. The captcha bot joins the conversation with Alice and sends a challenge via conversation message. Alice responds to the challenge. The provider verifies Alice’s response and attests to Alice’s XMTP contact ID. Future message recipients can now independently verify the attestation to Alice’s contact (i.e., Alice’s reputation has been upgraded, e.g., from neutral to positive).
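
To make the flow concrete, here is a rough sketch of the provider side of such a workflow. The Conversation interface and the attestation shape are illustrative placeholders; the real transport would be XMTP conversation messages, and the attestation format is one of the open questions below.

```typescript
// Hypothetical captcha-provider bot. The Conversation interface and the
// Attestation shape are stand-ins for illustration, not SDK types.
interface Conversation {
  peerAddress: string;
  send(text: string): Promise<void>;
  nextMessage(): Promise<string>;
}

interface Attestation {
  subject: string;   // Alice's XMTP contact ID
  issuer: string;    // the provider's identity
  claim: string;     // e.g. "passed-captcha"
  issuedAt: number;
}

async function runCaptchaChallenge(convo: Conversation): Promise<Attestation | null> {
  // 1. The bot joins the conversation and sends a challenge.
  const expectedAnswer = "7";
  await convo.send("To verify you are human, what is 3 + 4?");

  // 2. Alice responds; the provider verifies the response.
  const response = await convo.nextMessage();
  if (response.trim() !== expectedAnswer) {
    await convo.send("Verification failed.");
    return null;
  }

  // 3. The provider attests to Alice's contact ID so that future
  //    recipients can independently verify it (reputation upgrade).
  await convo.send("Verified! An attestation has been issued for your contact.");
  return {
    subject: convo.peerAddress,
    issuer: "captcha-bot.example",
    claim: "passed-captcha",
    issuedAt: Date.now(),
  };
}
```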

Provider workflows may range from simple captcha challenges to requiring sensitive information for identity verification.

Spam mitigation

A user that has received an attestation from a provider has effectively created a connection with the provider. A graph of ecosystem messaging identities would look like the below diagram. The blue dots have one or more connections with known identity providers. Blue dots may be considered positive reputation identities by inbox apps, and their invite messages may be promoted in recipient inboxes. Red identities have yet to connect with a provider. Inbox apps may treat red dots similarly to negative reputation identities, and demote their invite messages in recipient inboxes.

XMTP inbox app providers can require proof-of-personhood for new users by simply requiring that a user’s first XMTP contact is an identity provider. This is similar to Discord servers requiring verification to join. This reduces bot activity, as an attacker running their own client would be easily distinguishable from users that have joined via inbox app provider workflows that require identity provider verification.
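
A minimal sketch of how an inbox app could enforce the “first contact must be an identity provider” rule, assuming it keeps an allow-list of known provider addresses (the addresses and data shapes below are placeholders):

```typescript
// Hypothetical onboarding gate: before enabling general messaging, the
// inbox app checks that the user's earliest conversation is with a
// known identity provider. The allow-list is illustrative only.
const KNOWN_PROVIDERS = new Set<string>([
  "0xCaptchaProviderAddress",   // placeholder addresses
  "0xPassportProviderAddress",
]);

interface ConversationSummary {
  peerAddress: string;
  createdAt: Date;
}

function hasCompletedOnboarding(conversations: ConversationSummary[]): boolean {
  if (conversations.length === 0) return false;
  // Find the user's first conversation and check its peer.
  const first = [...conversations].sort(
    (a, b) => a.createdAt.getTime() - b.createdAt.getTime()
  )[0];
  return KNOWN_PROVIDERS.has(first.peerAddress);
}
```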

In future versions of the protocol, a recipient user may be able to specify that senders must present proof-of-personhood in order for invite messages to be delivered to their inbox.

The benefits of a message-based workflow

  • Identity providers can accelerate distribution by leveraging the XMTP network user base.
  • Friction is minimized for users: message-based proof-of-personhood workflows are more convenient than dedicated web interfaces.
  • Multiple identity providers can integrate their services into XMTP, thus providing choice for inbox app developers and users.

Questions

  • How is attestation represented?
  • Where do recipient inbox apps retrieve attestation state? Is the state included with the message, or must it be looked up somewhere?
    • Should attestation state be made public or kept private?
    • If it should remain private, how is the information accessible to recipient inbox apps?
  • Can identity providers build a message-based workflow that is capable of validating personhood?
    • Would this need to be assisted by the protocol in some way?
  • How will inbox apps and users safely distinguish between reputable and fraudulent providers?
    • What risks does this present to users?

Thanks for putting this together. You’re making absolutely the right assumptions: that an identity’s reputation changes over time, and that an attacker has to have a positive ROI for a spam attack to be worth considering.

The benefits of such a system:

There’s no need to do verification every time. Once a verification task is completed, other apps can leverage that data.

Reputation providers like captcha services provide an easy way for users to start building on-chain history and to effectively block Sybils at scale.

Compatibility Concerns in Verification Systems:

Different clients may adopt varying verification protocols. This could lead to potential issues such as:

  • If Client A uses CAPTCHA and Client B mandates a passport for verification, a message from a Client A user might be treated as spam by Client B due to these differing requirements. This divergence can confuse users about which verification method they should prioritize.

Users are left with two choices:

  1. Undergo all available personhood checks to ensure broad compatibility.
  2. Limit their communication to recipients on the same client, sidestepping the need for additional verifications.

Furthermore, unique filtering criteria by individual inbox apps could inadvertently block users with insufficient on-chain activity or history. This presents a UX challenge.

Possible Solution:

A viable middle ground might be establishing a consensus on a standard set of verification practices. This way, individual client apps could use these universal signals as a foundation while tailoring their unique spam-detection algorithms. Neutral reputation accounts would be incentivized to participate in as many reputation systems as they’re comfortable with.

On the Importance of Differentiating Reputation Providers:

While the original post didn’t specify, it’s crucial to understand that not all reputation providers are equal.

  • Reputation solutions often have to make a tradeoff between Sybil resistance and good user experience.
  • If reputation providers are weighted equally, then malicious users will invariably opt for the path of least resistance.

The challenge then arises: How are reputation providers selected? If the system allows for any entity to become a provider, the onus is on the client to discern which providers are trustworthy and which aren’t. This echoes some of the questions you brought up in the discussion.

It’s worth exploring looking at the reputation of the nodes, and which providers they have used, to determine the weighting of the reputation providers.
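
As a sketch of that idea, a provider’s weight could be derived from the observed behavior of the identities it has attested to, so that providers which rubber-stamp Sybils lose influence over time. The data shapes and the averaging rule below are purely illustrative assumptions:

```typescript
// Hypothetical provider weighting: a provider's weight is the average
// observed reputation score of the identities it has attested to.
interface AttestedIdentity {
  address: string;
  observedScore: number; // e.g. -1 (spammy) .. +1 (good standing)
}

function providerWeight(attested: AttestedIdentity[]): number {
  if (attested.length === 0) return 0; // unknown provider carries no weight
  const total = attested.reduce((sum, id) => sum + id.observedScore, 0);
  return total / attested.length;
}

// A sender's composite score could then combine the weights of all the
// providers that have attested to them.
function senderScore(weightsOfAttestingProviders: number[]): number {
  return weightsOfAttestingProviders.reduce((sum, w) => sum + w, 0);
}
```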

Hi @gama266! Thanks for this.

establishing a consensus on a standard set of verification practices.

Agreed there is a requirement for social consensus around providers/practices. What do you have in mind here?

It’s worth exploring looking at the reputation of the nodes, and which providers they have used, to determine the weighting of the reputation providers.

Are you recommending a scoring system for nodes that have received attestation from a provider? Almost like a third party auditor. This would be a good input metric for any potential social consensus framework, as alluded to above.

As for next steps, I haven’t had a chance to pair with a dev to prototype something like this. I suspect it would be fairly basic; potentially a good hackathon project. Looking forward to someone taking this on.

This is a great discussion! Thanks for writing it up, @trevor!

How is attestation represented?

Ideally, we would use a standard Verifiable Credential format for this (e.g., a W3C VC).
The advantage of using a VC is that once it is issued to the client, the client has agency over when and how to present it. Also, leveraging a standard for this means adoption can happen across ecosystems.
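
For reference, a proof-of-personhood credential in the W3C VC data model might look roughly like the sketch below. The credential type, DIDs, and proof fields are illustrative placeholders, not a finalized schema:

```typescript
// Illustrative W3C-style Verifiable Credential for proof of personhood.
// The credential type, the example DIDs, and the proof values are
// placeholders, not a finalized XMTP schema.
const exampleCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "ProofOfPersonhoodCredential"],
  issuer: "did:example:captcha-provider",
  issuanceDate: "2023-09-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:alice-xmtp-contact", // Alice's contact ID
    verificationMethod: "captcha",
  },
  proof: {
    type: "Ed25519Signature2020",
    created: "2023-09-01T00:00:00Z",
    verificationMethod: "did:example:captcha-provider#key-1",
    proofValue: "z3Fa...", // signature over the credential, elided
  },
};
```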

Where do recipient inbox apps retrieve attestation state? Is the state included with the message, or must it be looked up somewhere?

When making a new invitation to a recipient, the Proof-of-personhood/identity VC could be sent to the recipient as part of the message using an identity codec.
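
A sketch of what that could look like, assuming a custom identity content type/codec registered with the client (the codec interface and content type name here are hypothetical, not an existing XMTP content type):

```typescript
// Hypothetical "identity" content type: the first message of a new
// conversation carries both the invite text and the sender's VC, and
// the recipient's inbox app verifies the credential before triage.
interface IdentityPayload {
  text: string;
  credential: unknown; // the Verifiable Credential, e.g. the object above
}

interface IdentityCodec {
  contentType: string; // e.g. "example.org/identity:1.0" (placeholder)
  encode(payload: IdentityPayload): Uint8Array;
  decode(bytes: Uint8Array): IdentityPayload;
}

// Sender side (sketch): bundle the invite text with the credential.
function buildInvite(codec: IdentityCodec, text: string, credential: unknown): Uint8Array {
  return codec.encode({ text, credential });
}

// Recipient side (sketch): decode, verify the credential, and decide
// whether to promote or demote the conversation request.
function handleInvite(
  codec: IdentityCodec,
  bytes: Uint8Array,
  verify: (vc: unknown) => boolean
): boolean {
  const { credential } = codec.decode(bytes);
  return verify(credential);
}
```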

Should attestation state be made public or kept private?

If done as a VC it would be pseudo-private. Only the Issuer, Holder, and recipients in new conversations would ever see it.

Can identity providers build a message-based workflow that is capable of validating personhood?

I would think so. In addition to this, there may also be other ways a person could obtain proof-of-personhood VCs.

How will inbox apps and users safely distinguish between reputable and fraudulent providers?

This is a good question, and not easy to answer. Providers themselves need to be strong, well-known brands with good reputations. Ultimately, this is probably up to the client/app to decide.

What risks does this present to users?

IMO, the biggest risk is maintaining the privacy of the participants.
