Multi-Level Antifake System (MLAFS)

The first level of the system covers registration and login verification. If an account is logged in from another device, the previous device is disconnected. During login the system also reads the device fingerprint, which the later levels rely on.
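
A minimal sketch of how Level 1's single-active-device rule and fingerprint capture could work. The attribute set, the in-memory session store, and the names `device_fingerprint`, `login`, and `revoke_session` are illustrative assumptions, not the production design.

```python
import hashlib
import secrets

# Hypothetical in-memory store; a real deployment would use persistent storage.
active_sessions: dict[str, dict] = {}   # account_id -> {"session_id", "fingerprint"}

def device_fingerprint(user_agent: str, screen: str, timezone: str) -> str:
    """Derive a stable fingerprint from device attributes (illustrative set only)."""
    raw = "|".join([user_agent, screen, timezone])
    return hashlib.sha256(raw.encode()).hexdigest()

def login(account_id: str, user_agent: str, screen: str, timezone: str) -> dict:
    """Authenticate and enforce a single active device per account."""
    fp = device_fingerprint(user_agent, screen, timezone)

    previous = active_sessions.get(account_id)
    if previous and previous["fingerprint"] != fp:
        # Account logged in on another device: disconnect the previous session.
        revoke_session(previous["session_id"])

    session = {"session_id": secrets.token_hex(16), "fingerprint": fp}
    active_sessions[account_id] = session
    return session  # the fingerprint is reused by later MLAFS levels

def revoke_session(session_id: str) -> None:
    # Placeholder: in production this would invalidate the session token server-side.
    print(f"session {session_id} revoked")
```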

Digital watermarks (see the embedding sketch after this list):

  • Secure digital signatures.

  • Unique content digests.

  • Embedded digests in content.

  • Metadata storage for images and videos.

  • Non-visible pixel patterns.

  • Enhanced trust for signed visual content.

  • Biometric layer of security.
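
The digest-and-pixel-pattern idea from the list above could be sketched roughly as follows. The keyed SHA-256 digest, the least-significant-bit embedding, and the flat pixel buffer are simplifying assumptions; a production watermark would use proper image handling and signing keys.

```python
import hashlib

def content_digest(content: bytes, secret_key: bytes) -> bytes:
    """Unique digest binding the content to a platform key (keyed-hash sketch)."""
    return hashlib.sha256(secret_key + content).digest()

def embed_digest_lsb(pixels: list[int], digest: bytes) -> list[int]:
    """Hide the digest in the least significant bits of pixel values
    (a non-visible pixel pattern, illustrative only)."""
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    stamped = pixels.copy()
    for idx, bit in enumerate(bits):
        stamped[idx] = (stamped[idx] & ~1) | bit   # overwrite the lowest bit
    return stamped

def extract_digest_lsb(pixels: list[int], length: int = 32) -> bytes:
    """Recover the embedded digest for later verification."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << i for i, bit in enumerate(bits[n * 8:(n + 1) * 8]))
        for n in range(length)
    )

# Example: flat grayscale pixel buffer; real images would come from the media pipeline.
pixels = list(range(256)) * 4
digest = content_digest(b"review photo bytes", b"platform-secret")
stamped = embed_digest_lsb(pixels, digest)
assert extract_digest_lsb(stamped) == digest
```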

Level 2 uses the device fingerprint collected at Level 1 and analyzes user behavior; it is a supervised learning system for anomaly detection. The system accumulates an extensive collection of user-related data, and group behavior is analyzed to detect attempts to manipulate reviews and ratings.
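
As an illustration of the supervised anomaly-detection step, here is a toy sketch using scikit-learn. The behavioral features, labels, and model choice are assumptions made for the example; the real feature set and training data are not described here.

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative behavioral features per session (the actual feature set is internal):
# [reviews_per_hour, avg_review_length, rating_variance, fingerprint_reuse_count]
X_train = [
    [0.2, 340, 1.1, 1],   # organic behavior
    [0.5, 220, 0.9, 1],
    [9.0,  35, 0.0, 14],  # burst of near-identical reviews from one device
    [7.5,  42, 0.1, 11],
]
y_train = [0, 0, 1, 1]    # labels from moderator decisions (0 = genuine, 1 = fraud)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a new session; a high probability feeds the red-check pipeline.
new_session = [[8.2, 40, 0.05, 12]]
fraud_probability = model.predict_proba(new_session)[0][1]
print(f"fraud probability: {fraud_probability:.2f}")
```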

At Levels 3 and 4, verification is performed by synchronous and asynchronous algorithms. This verification pipeline lets us detect fake reviews and assess the quality of the remaining content.
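
A rough sketch of how such a pipeline could combine fast synchronous checks with heavier asynchronous ones. The specific checks and the quality formula are invented for illustration.

```python
import asyncio

def sync_checks(review: dict) -> list[str]:
    """Fast checks run inline before the review is published."""
    flags = []
    if len(review["text"]) < 10:
        flags.append("too_short")
    if review["rating"] not in range(1, 6):
        flags.append("invalid_rating")
    return flags

async def async_checks(review: dict) -> list[str]:
    """Heavier checks (ML scoring, duplicate search) run in the background."""
    await asyncio.sleep(0)          # stand-in for model inference / database lookups
    flags = []
    if review["text"].lower().count("best product ever") > 1:
        flags.append("template_spam")
    return flags

async def verify(review: dict) -> dict:
    flags = sync_checks(review)
    flags += await async_checks(review)
    # Quality score: more flags means lower quality.
    return {"flags": flags, "quality": max(0.0, 1.0 - 0.5 * len(flags))}

result = asyncio.run(verify({"text": "Solid build quality, fast delivery.", "rating": 5}))
print(result)   # {'flags': [], 'quality': 1.0}
```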

After that, the verification subsystems come into play to issue a red or green flag on the content.

Green checks are actions or verification measures that boost a content piece's trust value, such as digital watermarking or biometric verification.

On the other hand, red checks aim to identify content that may be fraudulent or malicious, using tools such as machine learning for authenticity analysis and pattern recognition to spot likely fraud.
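
One way the green and red checks could be combined into a single flag is a weighted score, sketched below. The check names, weights, and threshold are assumptions, not the system's actual scoring rules.

```python
# Illustrative weights; the actual weighting of checks is a product decision.
GREEN_CHECKS = {"digital_watermark": 0.3, "biometric_verified": 0.4, "signed_metadata": 0.2}
RED_CHECKS   = {"ml_fraud_score_high": 0.5, "pattern_anomaly": 0.3, "reported_by_users": 0.2}

def trust_flag(passed_green: set[str], triggered_red: set[str]) -> str:
    trust = sum(GREEN_CHECKS[c] for c in passed_green)
    risk  = sum(RED_CHECKS[c] for c in triggered_red)
    return "green" if trust - risk >= 0.3 else "red"   # assumed threshold

print(trust_flag({"digital_watermark", "biometric_verified"}, set()))                 # green
print(trust_flag({"signed_metadata"}, {"ml_fraud_score_high", "pattern_anomaly"}))    # red
```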

The blockchain and DAO act as Levels 4-5. Community members who take on the moderator role additionally filter the feedback. Finally, the verified information is written to the blockchain.
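
A minimal sketch of the record that could be assembled after DAO moderation and serialized for an on-chain transaction. In this sketch only a content digest goes on-chain while the full text stays off-chain; the field names and hashing scheme are assumptions.

```python
import hashlib
import json
import time

def moderated_review_record(review_id: str, text: str, moderator_ids: list[str]) -> dict:
    """Build the record that would be written to the chain after DAO moderation."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    return {
        "review_id": review_id,
        "content_digest": digest,          # only the digest is stored on-chain
        "moderators": moderator_ids,       # DAO members who approved the review
        "timestamp": int(time.time()),
    }

record = moderated_review_record("rev-42", "Great service, verified purchase.", ["mod-7", "mod-13"])
payload = json.dumps(record, sort_keys=True)   # serialized payload for the on-chain transaction
print(payload)
```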

A cornerstone of our system's approach to preserving the integrity of reviews is the active engagement of our community. The Feedback and Reporting Mechanism is designed to empower users, giving them a direct role in safeguarding the trustworthiness of our platform. This subsystem combines user-friendly interfaces with sophisticated automated processes to ensure that suspicious content is swiftly identified, reported, and reviewed.

Empowering users with intuitive reporting tools

Understanding that the effectiveness of this mechanism relies on the active participation of our users, we've developed an intuitive interface that makes reporting suspicious reviews straightforward and accessible. This interface is seamlessly integrated into our platform, allowing users to flag content that appears fraudulent, misleading, or inappropriate with just a few clicks. By simplifying the reporting process, we encourage a proactive community vigilant against dishonest practices.
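
The reporting flow could expose a payload along these lines. The report categories, field names, and the `submit_report` helper are hypothetical and only illustrate the shape of a flagging request.

```python
from dataclasses import dataclass, asdict
from enum import Enum

class ReportReason(Enum):            # assumed categories, mirroring the flagging UI
    FRAUDULENT = "fraudulent"
    MISLEADING = "misleading"
    INAPPROPRIATE = "inappropriate"

@dataclass
class ReviewReport:
    review_id: str
    reporter_id: str
    reason: ReportReason
    comment: str = ""

def submit_report(report: ReviewReport) -> dict:
    """Validate the report and build the payload handed to the automated review process."""
    if not report.review_id or not report.reporter_id:
        raise ValueError("review_id and reporter_id are required")
    payload = asdict(report)
    payload["reason"] = report.reason.value
    # In production this would be pushed to a queue; here we just return the payload.
    return payload

print(submit_report(ReviewReport("rev-42", "user-9", ReportReason.MISLEADING, "Rating looks inflated")))
```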

Automated review and escalation

Upon receiving a report, our system initiates an automated review process. This process employs a series of algorithms designed to assess the credibility of the reported content based on predefined criteria, such as the review's deviation from typical patterns, the reporting user's history, and the context of the report. Content that triggers red flags is then escalated to our team of moderators for further investigation.
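
A simplified sketch of scoring a report against the criteria mentioned above (deviation from typical patterns, the reporter's history, and the report's context) and escalating when the score is high. The weights and threshold are illustrative assumptions.

```python
# Assumed criteria and weights; the real scoring rules are internal.
def report_credibility(report: dict, review_stats: dict, reporter_stats: dict) -> float:
    """Score a report against the predefined criteria described above."""
    score = 0.0
    if review_stats["deviation_from_typical"] > 2.0:      # e.g. z-score of the rating pattern
        score += 0.4
    if reporter_stats["accepted_reports_ratio"] > 0.7:    # reporting user's track record
        score += 0.3
    if report["reason"] in {"fraudulent", "misleading"}:  # context of the report
        score += 0.2
    return score

def handle_report(report: dict, review_stats: dict, reporter_stats: dict) -> str:
    score = report_credibility(report, review_stats, reporter_stats)
    return "escalate_to_moderators" if score >= 0.5 else "keep_monitoring"

print(handle_report(
    {"reason": "fraudulent"},
    {"deviation_from_typical": 2.8},
    {"accepted_reports_ratio": 0.9},
))   # escalate_to_moderators
```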

Leveraging community insights for continuous improvement

The insights gathered from reported content play a crucial role in the continuous evolution of our detection algorithms. By analyzing patterns in reported reviews, our system learns to better identify fraudulent behavior, enhancing its ability to autonomously detect suspicious content. This feedback loop not only improves the efficiency of our review process but also strengthens the overall security of our platform.

Data-driven model refinement

Information from user reports and moderation outcomes is critical for refining our ML models. This includes the ML Captcha and behavior-analysis systems, where such data informs supervised learning, allowing for nuanced detection of fraudulent behaviors. Continuous input from the community and moderation actions form a feedback loop, systematically enhancing model sensitivity to new and emerging fraud patterns.
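
A sketch of how resolved moderation outcomes could feed back into model retraining. The logistic-regression model, features, and labels are toy assumptions standing in for the actual pipeline.

```python
from sklearn.linear_model import LogisticRegression

def retrain_from_moderation(feature_rows: list[list[float]], moderator_labels: list[int]):
    """Periodic retraining: moderation outcomes become supervised labels
    (1 = confirmed fraud, 0 = cleared), sharpening sensitivity to new patterns."""
    model = LogisticRegression(max_iter=1000)
    model.fit(feature_rows, moderator_labels)
    return model

# Toy batch built from resolved reports; real batches come from the moderation queue.
features = [[0.1, 300.0], [0.2, 280.0], [8.5, 40.0], [9.1, 35.0]]
labels   = [0, 0, 1, 1]
model = retrain_from_moderation(features, labels)
print(model.predict([[7.9, 38.0]]))   # expected [1] -> flagged as likely fraud
```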
