Changes at "Use content classification systems for better SPAM detection"

Body

  • -["

    **Is your feature request related to a problem? Please describe.**

    SPAM users are becoming a bigger and bigger problem for all Decidim instances. They register profiles to put advertisements in their profile bio or a SPAM link in their personal URL, and they flood the comments section with SPAM.

    This is a real issue that causes a lot of extra work for the platform moderators. We should apply some automation to help with their work.

    **Describe the solution you'd like**

    There is a gem named Classifier Reborn that provides two alternative content classification algorithms:

    • Bayes - The system is trained with a predefined set of sentences labeled as either good or bad. When classifying content, it runs a word density search for the new content against this predefined database and returns a probability of whether the new content is good or bad.
    • Latent Semantic Indexer (LSI) - Works with similar logic as above but adds semantic indexing to the equation. Slower but more flexible.

    More information available from:
    https://jekyll.github.io/classifier-reborn/
    https://github.com/jekyll/classifier-reborn

    Based on one of these algorithms, we could calculate a SPAM probability score for any content the user enters, as well as for the user profile itself when it is updated, because in recent years we have seen many users create SPAM profiles to get a backlink to their site for improved SEO scores.

    **Describe alternatives you've considered**

    • Manually moderating all users/content that is considered SPAM - very labor intensive
    • Using 3rd party APIs to detect SPAM, but they are likely not any better than what is suggested above, and they come with a cost (or, alternatively, a privacy impact)

    **Additional context**

    The suggested content classification systems with the predefined databases are likely to work only for English. I haven't dug deeper into whether such databases are available for other languages.

    But in our experience, most SPAM users are spamming in English, so I think such classification systems could solve the problem at least for English SPAM.

    If the classification needs to be applied to other languages as well, there could be a way to train the system further with other datasets. By default, it could be trained only in English to get rid of most of the SPAM users.

    **Could this issue impact users' private data?**

    No.

    **Funded by**

    Not funding available.

    But once the spammers cause a big enough amount of needless work, I'll refer back to this issue to see whether we could find someone to fund the development.

    "]
  • +["

    **Is your feature request related to a problem? Please describe.**

    SPAM users are becoming a bigger and bigger problem for all Decidim instances. They register profiles to put advertisements in their profile bio or a SPAM link in their personal URL, and they flood the comments section with SPAM.

    This is a real issue that causes a lot of extra work for the platform moderators. We should apply some automation to help with their work.

    **Describe the solution you'd like**

    There is a gem named Classifier Reborn that provides two alternative content classification algorithms:

    • Bayes - The system is trained with a predefined set of sentences labeled as either good or bad. When classifying content, it runs a word density search for the new content against this predefined database and returns a probability of whether the new content is good or bad.
    • Latent Semantic Indexer (LSI) - Works with similar logic as above but adds semantic indexing to the equation. Slower but more flexible.

    More information available from:
    https://jekyll.github.io/classifier-reborn/
    https://github.com/jekyll/classifier-reborn

    Based on one of these algorithms, we could calculate a SPAM probability score for any content the user enters, as well as for the user profile itself when it is updated, because in recent years we have seen many users create SPAM profiles to get a backlink to their site for improved SEO scores.
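
    To illustrate the idea, here is a minimal sketch of how the Bayes backend of classifier-reborn could be used to score a piece of content. The category names, training sentences, and the notion of comparing the score against a configurable threshold are placeholders for illustration, not a finished design.

    ```ruby
    # Minimal sketch: Bayes classification with classifier-reborn.
    # The training sentences below are placeholder data; a real
    # integration would load a larger labeled dataset.
    require "classifier-reborn"

    classifier = ClassifierReborn::Bayes.new("Ham", "Spam")

    # Train with a handful of labeled examples.
    classifier.train("Ham", "I would like to propose a new participatory budgeting process for our district.")
    classifier.train("Ham", "Could we add more details to the meeting minutes from last week?")
    classifier.train("Spam", "Buy cheap watches now, best prices, click here")
    classifier.train("Spam", "Earn money fast from home, visit my site for amazing deals")

    # Classify new content and get the winning category with its score.
    category, score = classifier.classify_with_score(
      "Check out my site for cheap deals and fast money"
    )
    puts "#{category} (score: #{score})"

    # The score could then be compared against a configurable threshold
    # to decide whether to flag the content for moderator review.
    ```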

    **Describe alternatives you've considered**

    • Manually moderating all users/content that is considered SPAM - very labor intensive
    • Using 3rd party APIs to detect SPAM, but they are likely not any better than what is suggested above, and they come with a cost (or, alternatively, a privacy impact)

    **Additional context**

    The suggested content classification systems with the predefined databases are likely to work only for English. I haven't dug deeper into whether such databases are available for other languages.

    But in our experience, most SPAM users are spamming in English, so I think such classification systems could solve the problem at least for English SPAM.

    If the classification needs to be applied to other languages as well, there could be a way to train the system further with other datasets. By default, it could be trained only in English to get rid of most of the SPAM users.
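
    As a rough sketch of what that further training could look like, additional labeled example sentences for another language could simply be fed to the same classifier. The CSV file name and its label/text columns below are hypothetical.

    ```ruby
    # Sketch: extending the training with another (hypothetical) dataset.
    require "csv"
    require "classifier-reborn"

    classifier = ClassifierReborn::Bayes.new("Ham", "Spam")

    # Each row is expected to carry a "label" ("Ham"/"Spam") and a "text" column.
    CSV.foreach("training/finnish_examples.csv", headers: true) do |row|
      classifier.train(row["label"], row["text"])
    end
    ```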

    **Could this issue impact users' private data?**

    No.

    **Funded by**

    No funding available.

    But once the spammers cause a big enough amount of needless work, I'll refer back to this issue to see whether we could find someone to fund the development.

    "]
