Detect the use of spam bots and ban non-compliant users
**Is your feature request related to a problem? Please describe.**
We need to fight disinformation, spamming and trolls. At the moment, if administrators do not set a per-user proposal limit, it is easy for a malicious user to create one account and use a tool like Selenium to publish hundreds of contributions. Furthermore, administrators are not able to ban users.
**Describe the solution you'd like**
Implement a way to report users
Just as we have a way to flag a contribution for moderation, a similar mechanism can be implemented to flag users and give moderators the ability to block them. Everyone (admins, moderators, users) can participate in this reporting and flag users based on their harmful behaviour towards the debate or the content they post on their public profile (avatar, biography, personal website).
- Add a flag to report users on their public profile;
- In the admin panel, add a column to the participants table that displays the number of times a user has been reported, and make it sortable so admins can see the most-reported users first and take action (block) if needed;
- Send a notification to moderators and admins when a user is reported.
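The reporting tally and sortable column described above can be sketched in plain Ruby (`Report`, `report_counts` and `by_most_reported` are illustrative names, not Decidim's actual moderation API):

```ruby
# Illustrative sketch of the reporting tally; in Decidim this would live
# on the moderation models, not on plain structs.
Report = Struct.new(:user_id, :reason)

# Tally how many times each user has been reported.
def report_counts(reports)
  reports.each_with_object(Hash.new(0)) { |r, tally| tally[r.user_id] += 1 }
end

# The "sortable column": order user ids so the most-reported come first.
def by_most_reported(reports)
  report_counts(reports).sort_by { |_id, count| -count }.map(&:first)
end
```

An admin view could then render `by_most_reported` at the top of the participants table and attach the block action to each row.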
Allow administrators to ban non-compliant users
Administrators should be able to ban users, for example when someone repeatedly attacks the debate. This ban should be transparent.
Add a “ban” action button in the Participants panel.
- Admins can unban users
- Users will be banned at the Decidim Identities level, meaning they cannot access the website with another provider through EU Login. (Example: I connect with Twitter and get banned; I then cannot connect using Facebook if it has the same email or is associated with my EU Login id.)
When a user is banned:
- an attribute (e.g. blocked) is added to their profile, which makes it impossible for them to log in;
- their avatar is replaced by the default one;
- their pseudonym is replaced by “Banned user”;
- their profile page is rendered inaccessible to non-admin users (admins keep access to facilitate moderation based on the contribution history);
- all contributions remain visible.
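A minimal, framework-free sketch of the masking rules above (the `blocked` flag and the helper names are assumptions; in Decidim these checks would sit on the `User` model):

```ruby
# Pure-Ruby sketch of how a banned user's public profile could be masked.
DEFAULT_AVATAR = "default-avatar.png"

PublicProfile = Struct.new(:nickname, :avatar, :blocked, keyword_init: true) do
  # The pseudonym is replaced by "Banned user" once the flag is set.
  def display_name
    blocked ? "Banned user" : nickname
  end

  # The avatar falls back to the platform default.
  def display_avatar
    blocked ? DEFAULT_AVATAR : avatar
  end

  # A blocked profile can no longer authenticate (Devise's
  # `active_for_authentication?` hook is the natural place for this check).
  def can_sign_in?
    !blocked
  end
end
```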
Automate the ban of spamming users
In order to detect those users, we need to define the behaviours we want to prevent. For example, we can consider that more than ten messages published from one account in less than one minute justifies the system automatically blocking the user.
- An asynchronous job could check the database every minute, searching for such behaviour, and report or block the user.
- The detailed list of behaviours in question should be made public and the code open-sourced.
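The detection rule above (more than ten messages in under a minute) can be sketched as a pure function; in practice it would run inside the scheduled job, querying the database for recent contributions:

```ruby
require "time"

# Thresholds taken from the example above: more than ten messages
# within one minute triggers a flag.
WINDOW_SECONDS = 60
MAX_MESSAGES_PER_WINDOW = 10

# messages: array of hashes like { user_id: 1, created_at: Time }.
# Returns the ids of users who posted more than the allowed number
# of messages within the last WINDOW_SECONDS.
def flag_rapid_posters(messages, now: Time.now)
  recent = messages.select { |m| now - m[:created_at] <= WINDOW_SECONDS }
  recent.group_by { |m| m[:user_id] }
        .select { |_id, msgs| msgs.size > MAX_MESSAGES_PER_WINDOW }
        .keys
end
```

The flagged ids would then feed the report-or-block step, depending on the configured policy.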
**Describe alternatives you've considered**
The measures above are open to selection and discussion.
**Additional context**
We've seen these behaviours in our latest deployments when we scaled to a few tens of thousands of users: automated user creation, automated content creation, coordinated mass posting.
**Could this issue impact users' private data?**
No
**Funded by**
EU Commission
Conversation with Hadrien Froger
Hello!
We have been working on spam for some time now :)
Here are my inputs/learnings:
1- It is not clear that a custom classifier is needed for participation data. Results from Kaggle contests suggest we can use training data from other sources and it would still work (labelling training data is half the work of creating a good classifier). (https://github.com/mohitgupta-omg/Kaggle-SMS-Spam-Collection-Dataset-/blob/master/spam.csv)
2- With our naive spam implementation (word triggers + blacklisted domain TLDs), we got an immediate 40% improvement. So the spam we find is quite dumb.
3- Our simplistic gem https://github.com/octree-gva/decidim-module-spam_signal will implement external detection with https://github.com/spamscanner/spamscanner (open source & self-hosted). We bet we will have good results with this one, without reinventing the wheel.
Another note: the Devise gem allows "locking" a user, meaning:
1. the user is logged out;
2. instructions for unlocking are sent;
3. once back, they can continue.
In our module, we added support for Devise locking (https://github.com/octree-gva/decidim-module-spam_signal/blob/main/app/controllers/concerns/decidim/user_blocked_checker.rb), and instead of blocking a spamming user, we forbid saving the spam content and "lock" them. This way, we avoid blocking legitimate users.
You could then have a routine that removes users who have been locked for more than six months. Quite proud of this small hack haha.
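The cleanup routine suggested here could look like the following sketch (pure Ruby; in the module it would be an ActiveRecord scope driven by Devise's `locked_at` column, plus a scheduled job doing the deletion):

```ruby
require "time"

SIX_MONTHS_IN_SECONDS = 6 * 30 * 24 * 3600 # rough approximation for the sketch

# users: array of hashes like { id: 1, locked_at: Time or nil }.
# Returns the users whose lock is older than six months, i.e. the
# candidates for removal.
def stale_locked_users(users, now: Time.now)
  users.select { |u| u[:locked_at] && now - u[:locked_at] > SIX_MONTHS_IN_SECONDS }
end
```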
I think we _should_ join forces on this subject, since we also made a spam detection module here at OSP: https://github.com/OpenSourcePolitics/decidim-spam_detection/.
The module works in a different way: it is based on two components, the Decidim module itself, which handles the blocking/reporting automation, and a serverless container, which computes whether or not a user is a spammer.
1) The Decidim module runs as a Sidekiq job, scheduled to process all newly registered users. It sends login information (for now) to the Python container and receives a spam probability P between 0 and 1.
If P > 0.99, the user is automatically blocked.
If 0.7 < P < 0.99, the user is reported and an admin decides whether or not to block them.
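The two thresholds translate into a simple decision rule; the constant names below are illustrative, not the module's actual configuration:

```ruby
# Decide what to do with a user given the spam probability P
# returned by the serverless container.
BLOCK_THRESHOLD  = 0.99
REPORT_THRESHOLD = 0.7

def spam_action(probability)
  if probability > BLOCK_THRESHOLD
    :block   # near-certain spam: block automatically
  elsif probability > REPORT_THRESHOLD
    :report  # suspicious: report, an admin decides
  else
    :none
  end
end
```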
The module is configurable to be report-only or report-and-block.
2) The serverless container is based on an already-trained model. It is a simple API that receives information about a user as JSON and returns the computed spam probability.
While this division of work seems more complex, it has allowed us to improve the model and the module quite flexibly.
Whatever the final form of this module may be, we could manage to have a common base that could benefit from our different perspectives.