Mar 5, 17 / Ari 08, 01 17:10 UTC

Persistent abuse - Moderator / Admin attention required  

I would like to draw your attention to the following:

[MOD EDIT] [DELETED SPAM LINKS] Alan Player 7 March 2017 @ 23:18 AEDT

Notice any similarities? It took them a little while to get it right with the markdown, but it became more uniform. I would not be surprised if that's the same person/team - there is a clear trend in the similarities. There are more in the dead-thread bin, but those are the easiest to group. This indicates that banning isn't an overly effective method. The persistence suggests that, so far, the message you've sent is: easy target.

Continuing to do nothing about this will not solve the problem; unless you act soon, it will get a lot worse. Ideally there are changes on the system side that can make this easier for you, removing a lot of it before it can be posted and flagging anything the system is "not sure" about so you don't have to dig for it. But minimally, for now, some activity on your part is required.

Yes, removing this content from public view and banning the user are both good moves - but this should not be the extent of the activity. Especially with a persistent offender it will be insufficient. In the realm of fighting software or teams, part-time moderation/admin will lose; constant and vigilant coverage is required, and you people have lives to attend to - a bot is relentless. Luckily, automated systems have not yet been adapted to your registration and posting process. It will not be much longer, I assure you. However, should the current staff more aggressively pursue such events - clear signs warranting such a response would be clearly criminal activity, and an obvious lack of intent to use Asgardia as anything other than an advertising platform - then it's possible to put the message out there that attempts to leverage our citizens as a harvestable source of income will result in cost instead.

This is achieved by taking away their toys - urinate in their cornflakes. In the case of residential services, this represents a direct inconvenience. In the case of rented services, it can get expensive when their $350/month server is ripped from under them with no refund two days into its use, plus additional charges of $120 for dealing with the abuse report and $80 per spam message that left their systems. Diligence in this action can make it economically unfeasible to pursue using this as a spam platform, and any attempt to do so will translate directly into removing money from a criminal enterprise.

I suspect you will be ill-equipped, in terms of access rights, to get at the system logs - and these are the preferred citation of user abuse for the people running the services these criminals and those like them use, as they can then be confirmed against their own logs - but timestamps and a timezone can sometimes be enough if they know what protocol they are looking for and have a target address (HTTPS, https://asgardia.space/). I generally assume you have access to the user's IP, but I would not be surprised if such simple moderator/admin functionality is absent. If you do not have access to this, then gaining access - to both signup IP and posting IP - I suggest be made a priority.

The IP should be able to tell you who the service provider is. If you don't have an operating system that comes with something like jwhois, or are unable or unwilling to install such basic admin tools, then your favorite search engine can provide. If one were to paste: whois 8.8.8.8 into your terminal emulator or search engine of choice, it should return that the address belongs to Google (specifically, it's their public DNS). Of interest in the whois results would be OrgAbuseEmail: network-abuse@google.com, as this is the department specifically set up to handle abuse of their systems - and to stop it from their end. If you review the company's ToS and AUP (almost all have these prominently published), you'll find they have a de facto responsibility to stop abuse from their end, and this is reflected in the ToS.
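The lookup-and-extract step above is easy to script. A minimal sketch, assuming a whois binary is on the PATH; the field names checked are the ARIN OrgAbuseEmail shown above plus RIPE's abuse-mailbox, since different registries label the abuse contact differently:

```python
import re
import subprocess

def parse_abuse_contact(whois_text):
    """Pull the abuse contact out of raw whois output.

    OrgAbuseEmail is ARIN's field name; RIPE-region records use
    abuse-mailbox instead, so both are checked.
    """
    match = re.search(r"(?:OrgAbuseEmail|abuse-mailbox):\s*(\S+)", whois_text)
    return match.group(1) if match else None

def abuse_contact(ip):
    """Shell out to the system whois client (assumed installed) and parse."""
    out = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
    return parse_abuse_contact(out)
```

Splitting the parsing from the network call means the regex can be tested against saved whois output without touching the network.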

Should abusive posts have arrived from there, an email to that address with log evidence - or a copy and paste of the post, with timestamps (and an indication of timezone) - indicating it arrived via HTTPS to asgardia.space, citing which clauses in their ToS/AUP have been violated, should result in rapid termination of the offender's service.
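Such reports can also be templated so each one is a fill-in-the-blanks job. A sketch using the standard library, with illustrative field names - the actual values would come from your logs and the offending post:

```python
from email.message import EmailMessage

def build_abuse_report(abuse_addr, offender_ip, timestamps_utc,
                       post_body, tos_clauses):
    """Assemble a minimal abuse-report email.

    All parameters are placeholders to be filled from real evidence:
    the provider's abuse address, the source IP, UTC timestamps of the
    posts, a verbatim copy of the post, and the ToS/AUP clauses violated.
    """
    msg = EmailMessage()
    msg["To"] = abuse_addr
    msg["Subject"] = f"Abuse report: spam posted to asgardia.space from {offender_ip}"
    body_lines = [
        f"Source IP: {offender_ip}",
        "Protocol: HTTPS, destination https://asgardia.space/",
        "Timestamps (UTC): " + ", ".join(timestamps_utc),
        "Violated clauses: " + ", ".join(tos_clauses),
        "",
        "Verbatim copy of the offending post follows:",
        post_body,
    ]
    msg.set_content("\n".join(body_lines))
    return msg
```

The resulting message object can be handed to smtplib, or the text dumped into whatever mail client the moderators already use.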

In the cases highlighted in the opening paragraph, specific criminal activity is offered. Even better, parts of that network appear to be operating in jurisdictions whose laws can apply most heavily to these crimes. Being responsible, this cannot simply be ignored - the outcome of that is they simply try again with slight variation. Each attempt maximises exposure and increases the chance a citizen will mistakenly think it's something trustworthy to click on.

The body of the message offers passports for a lot of countries - these should all have offices that deal specifically with the problem of fraudulent documents in circulation, per ICAO guidelines - and building good relations with these bodies is an incredibly good idea, because when we have such a body ourselves we would want them telling us if they found someone producing our passports. The same goes for currencies and other fraudulent materials. Other crimes have their own specific departments; which LEA you'd need to direct a report to would be decided case by case, and the area the crime was committed from is usually a clue as to where it should go.

Otherwise, the body of the message has other clues - other things that map to IDs on other services, such as the email address and the phone number. The ToS/AUP on those services likely do not condone or tolerate illegal activities either, and on suspicion of such they will revoke service access pending investigation. More things can be made awkward for them; wherever possible, cost them the most money you can. Once you cost them thirty grand a week and they never get anything back, they tend to try fishing in another pool.

If you wanted to be extra fancy, you should get yourselves a copy of Maltego. You should be able to collaborate on a file, simply inputting the source of the attack and several features that define it. You can then link the attack to the service, and over time various features of various attacks will form patterns that allow them to be grouped, so a larger picture of what you're up against can be drawn. Something like: http://morrigan.armed.me.uk/Linked_attacks.png or http://morrigan.armed.me.uk/Linked_attacks1.png (a slightly zoomed-in image of the same picture). As Maltego is an OSINT tool, it should also allow for gaining more information as you start getting nearer valid identities - and maybe enough to reach identities if they are careless with their information. If you look at the first picture, I only really input the IP address (yellow), username (turquoise), email (cyan) and spam body (purple) per "entity". The software found the rest of the data for me, with a single click.

  Last edited by:  Alan Player (Asgardian)  on Mar 7, 17 / Ari 10, 01 12:19 UTC, Total number of edits: 5 times
Reason: typo. removed links to spam posts

Mar 5, 17 / Ari 08, 01 17:51 UTC

You are the relevant people, though.

You are on the frontline dealing with it. You are supposed to be what stands between these criminals and the citizens, keeping good order on the service. I have no doubt you are aware of and attempting to mitigate the spam - I was actually trying to highlight that a certain percentage of it is a single person/team, and stopping them will significantly lessen your workload. Approach it with enough discrimination and others will not dare attempt.

There are ways to make things like abuse reporting easier - a lot can be templated, much can be scripted. Your operations are already convoluted and I can understand that additional layers and activities are likely unappreciated - but the methods I highlight are incredibly effective and are a de facto standard employed at multiple scales of operation.

I personally think what you'd need is a panel of tools to assist in your moderation/admin activities. This isn't something that should be a chore for you; you shouldn't have to jump through hoops. For example, the act of sanitising a post should automatically collect evidence for you (as opposed to a manual screenshot, which is easily doctored with the most basic of Photoshop skills) while prompting you for other appropriate details - keeping it all in the same place and easy to operate. But then, making things easy is just how I do things, because I'm lazy.

  Updated  on Mar 5, 17 / Ari 08, 01 17:55 UTC, Total number of edits: 2 times
Reason: typo

Mar 5, 17 / Ari 08, 01 21:46 UTC

Hello MODS,

I suggest that you have someone peruse the infrastructure and accommodations topic, the eliminate the monetary system and here's why topic, and the punish corruption with death by law topic. I have been the target of another member's personal attacks, and the fact that not one MOD has even warned this person to stop their abusive behavior is quite disturbing.

Mar 6, 17 / Ari 09, 01 15:16 UTC

Interesting that Brandon7 felt it pertinent to add that here, and now, rather than open up some specific post that would call direct attention to the "problem" when the "problem" occurred.

And yes, expect more instances of such advertisements of fraudulent and criminal activity - until you actually start doing things to prevent it.

The above spiel deliberately avoids requiring the ability to edit the existing input system or to apply filters there, and focuses on what you can (and should) be doing to make sure this doesn't happen again. Failure to act in this regard once automated systems have been adapted will not only impact us but leave the hardware in place and operating to affect others - a concerted effort between impacted systems can limit the global damage potential. There are concerted efforts already in place, such as shared blacklists - participation in, and use of, such projects is eventually advised (however, you can start reporting abuse today).
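For reference, shared blacklists of the DNSBL variety are queried by reversing the IP's octets and looking the result up under the list's zone. A sketch - the zen.spamhaus.org zone is just a well-known example, substitute whichever list you participate in:

```python
import ipaddress
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the reversed-octet hostname used for a DNSBL lookup.

    For 127.0.0.2 and the example zone this yields
    2.0.0.127.zen.spamhaus.org. The zone is an assumption - use the
    blacklist your operation actually subscribes to.
    """
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the DNSBL returns any record for the IP (needs network access)."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

Checking signup IPs against such a list at registration time would catch a fair share of known spam sources before they ever post.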

  Updated  on Mar 6, 17 / Ari 09, 01 16:21 UTC, Total number of edits: 2 times
Reason: Additional data, typo

Mar 7, 17 / Ari 10, 01 11:48 UTC

I understand where you're coming from - and agree with the general supposition that we're "both as bad as each other" - which is why I'm genuinely surprised he felt the need to draw attention to it; if I'm "guilty", so is he. Remind you of children in a playground much?

Truly, I could have handled it "better", but honestly I feel no "remorse" at the way I did handle it. I do not possess infinite patience, and he's a big boy - even dresses himself in the morning - and he's had far worse previously; I'd even dare say someone had a chuckle in there somewhere. I do genuinely place great effort into responses that are civilised on the whole.

Nearing 1000 posts now... how many can you "fault"? The ratio's good.

  Updated  on Mar 7, 17 / Ari 10, 01 11:50 UTC, Total number of edits: 2 times
Reason: grammar, additional data

Mar 7, 17 / Ari 10, 01 12:24 UTC

Perhaps it would be best if both of you 'listened' to those around you, since you both seem incapable of realizing when you have gone too far.

Warning Signs, Jeff Foxworthy style:

  1. When the only two people who regularly post in the thread are yourself and a person with which you are disagreeing, you have probably gone too far.
  2. When you are no longer attacking their idea, but their personality, temperament, or intellectual or sexual ability, you have probably gone too far.
  3. When others not involved in the dispute are telling you that your words are inappropriate, you have probably gone too far.
  4. When your posts exceed three paragraphs, or 1000 words, you have probably gone too far.
  5. When a moderator tells you it's time to cool it off, you have definitely gone too far.

Hope these help.

Mar 7, 17 / Ari 10, 01 14:05 UTC

Lmao, thank you Phicksur. That has made my morning so much better after a graveyard shift.

Mar 7, 17 / Ari 10, 01 14:28 UTC

I was actually being serious.

Those are real warning signs. After writing them I realized they sounded similar to Jeff Foxworthy's old "You might be a redneck" lines, and re-wrote it to add that humor in an effort to blunt the possible misinterpretations by those who do not know me very well.

EDIT: Also, I believe EyeR knows me well enough to know that I do not engage in personal attacks.
At least, not unless I am very stressed out. I prefer to vent my frustrations in other ways.

  Updated  on Mar 7, 17 / Ari 10, 01 14:43 UTC, Total number of edits: 2 times
Reason: adding clarification

Mar 7, 17 / Ari 10, 01 15:42 UTC

To return to the original topic of discussion: employing a captcha - especially one already being exploited by automated services - isn't likely to be an effective step in mitigating abuse.

Assuming the offender cannot exploit it directly, the captcha will quite simply be forwarded to another criminal enterprise of theirs - they will have users attempting to download pirated media etc. fill out the captcha on behalf of the automation.

What will be effective is taking away the hardware they are using to do it. Every time they attempt it.

And regardless of personal knowledge, EyeR isn't concerned about facile things like "personal attacks". I'm incredibly difficult to offend on a personal level. Everyone has a right to an opinion, and this includes opinions of people. They are not always going to be favourable. I don't care about favour; I care about results. Everything I do tends to be aimed at providing them.

  Updated  on Mar 7, 17 / Ari 10, 01 15:45 UTC, Total number of edits: 1 time
Reason: Additional data

Mar 7, 17 / Ari 10, 01 16:09 UTC

I created a test account myself just to see how easy it would be. It was horribly easy. Before the captcha-per-post was implemented, a person could easily create an account manually, then let a bot run through with it and generate hundreds of posts.

I put a link to the software package we are using to produce these forums here: https://asgardia.space/en/forum/forum/feedback-11/topic/bots-on-asgardiaspace-3486/

As I stated there, there just aren't a lot of anti-spam packages available for the software we are running. Unless someone feels like writing some custom Python, we are going to be stuck doing things manually.

Mar 7, 17 / Ari 10, 01 16:34 UTC

We suffered 300+ spam posts made by 20+ accounts over this twelve-hour period.

And this is still mostly manual. It's going to get a lot worse. 20+ accounts an hour is not unreasonable to expect.

Anti spam is "easy" really.

As previously suggested, modifying the input system to detect post speed and compare against previously posted content (say, the last five posts) for "key features" would stop quite a lot, and slow the rest down to an easily manageable level. The IDS that should already be running should be trainable to fire on "suspect behaviour" too. Additionally, tools like fail2ban can patrol logs and act on other patterns.
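To make the fail2ban suggestion concrete, a rate-based jail of roughly this shape would ban any IP that posts to the forum more than a set number of times in a short window. The log path, URL pattern and numbers below are all assumptions to be adjusted for the actual server:

```ini
# /etc/fail2ban/filter.d/forum-spam.conf
# Illustrative filter: match POSTs to the forum in the web server
# access log. The log format and path are assumptions.
[Definition]
failregex = ^<HOST> .* "POST /en/forum/.* HTTP/1\.[01]" 200

# /etc/fail2ban/jail.local
# Ban for an hour any IP making more than 10 forum POSTs in 60 seconds.
# These numbers are starting points, not gospel.
[forum-spam]
enabled  = true
filter   = forum-spam
logpath  = /var/log/nginx/access.log
findtime = 60
maxretry = 10
bantime  = 3600
```

In practice the two snippets live in the two separate files named in the comments; fail2ban then watches the log and inserts firewall rules automatically, which covers the "patrol logs and act on patterns" part with no per-incident human effort.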

Assuming the admin team know how updates work, I see the "best" way to get the features we would like into this software right now being to contribute to the project it leeches from. Custom Python isn't beyond the scope of possibility, though I suspect there are those within our ranks more comfortable with it than I.

Mar 7, 17 / Ari 10, 01 16:58 UTC

+1 fail2ban saved my server a lot, in the past.

Mar 7, 17 / Ari 10, 01 17:11 UTC

I don't know python, but I do know programming and process flow. If I was able to program such a spam filter, here's how I'd do it. If someone is proficient in python and can make this happen, it should resolve the problems.

Required fields added to each user account:

  1. average_characters_per_second (avgcps) as a float
  2. total_posts_tracked (totpost) as a byte (0-64)
  3. abuse_threshold (at) as a char (0-255)

Instance variables:

  1. characters_per_second (cps) as a float
  2. post_length as a long

Logic Tree, activated each time the user posts:

Is Someone Abusing?

  1. Check time since last post.
    1. Calculate cps by dividing the number of characters in the post and title by the time since the last post was made.
      If cps > avgcps and cps / avgcps > 1.2 then increase abuse_threshold by 2
      If cps > avgcps and cps / avgcps > 1.5 then increase abuse_threshold by 5
      If cps > avgcps and cps / avgcps > 1.7 then increase abuse_threshold by 10
      Else reduce abuse_threshold by 1
    2. recalculate avgcps
      avgcps = ( (avgcps*totpost) + cps ) / (totpost + 1)
    3. increment totpost
      if totpost > 60, reduce totpost to 10
  2. Check title of last post.
    1. Check title of post against previous post made by user for character strings in common.
      If more than 5 sequential characters are in common then increase abuse_threshold by 2
      If more than 10 sequential characters are in common then increase abuse_threshold by 5
      If more than 15 sequential characters are in common then increase abuse_threshold by 10
      Else reduce abuse_threshold by 2
  3. Check content of last post.
    1. Calculate character length of post (post_length).
      If post_length > 1000, increase abuse_threshold by 2
      If post_length > 5000, increase abuse_threshold by 5
      If post_length > 10000, increase abuse_threshold by 10
      Else reduce abuse_threshold by 5
    2. Look for duplicate posts.
    3. Choose a random point in the post, based on character length of post, and select the next 20 characters.
      1. If that exact string is found in the last post, increase abuse_threshold by 15.
      2. Else reduce abuse_threshold by 1
  4. Each minute, reduce abuse_threshold by 1 for all users, to a minimum of 0.

What do I do about it?

  1. abuse_threshold > 20
    1. New posts, and the previous two posts, are automatically marked as requiring moderator approval before being visible to users.
  2. abuse_threshold > 40
    1. User is prevented from posting for 1 minute, a message is sent to moderator group for research.
  3. abuse_threshold > 60
    1. User is prevented from posting for 10 minutes, a message is sent to moderator group for research.
  4. abuse_threshold > 100
    1. User is prevented from posting for 60 minutes, a message is sent to moderator group for research.
  5. abuse_threshold > 200
    1. User is prevented from posting for 24 hours, a message is sent to moderator group for research.

Using this tree, the spambots would be caught for sure, and regular users would never notice, as the spam posts would be flagged too quickly to ever be seen.

Some of our more 'wordy' posters would be OK, unless they were copy/pasting from multiple posts, or accidentally duplicated a post. The highest any normal person should be able to get is 10 from post_length, 2 from sequential characters in the title, maybe 2 from characters-per-second, for a total of 14. Maybe, on a severely random chance, they might get flagged if they use the same 20 characters, in a row, in two sequential posts, marking their posts as requiring approval, but that'd be really rare.

It shouldn't use too much processing overhead, either, and keep us all from seeing the spam.
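The logic tree above translates fairly directly into Python. A sketch under the same thresholds, kept per-user in one object rather than as database fields; the per-minute global decay from step 4 is omitted for brevity, and the class and method names are inventions for illustration:

```python
import random
import time

class AbuseScorer:
    """Per-user spam scoring, mirroring the thresholds in the tree above.

    A real deployment would persist these fields per account and run
    the minute-by-minute decay pass; this sketch scores one user in memory.
    """

    def __init__(self):
        self.avgcps = 0.0          # running average characters per second
        self.totpost = 0           # posts tracked, capped per the tree
        self.abuse_threshold = 0   # running suspicion score
        self.last_post_time = None
        self.last_title = ""
        self.last_body = ""

    @staticmethod
    def _bump(value, tiers):
        """Penalty for the highest tier exceeded, else None."""
        penalty = None
        for limit, points in tiers:
            if value > limit:
                penalty = points
        return penalty

    @staticmethod
    def _longest_common(a, b):
        """Length of the longest substring shared by a and b (simple O(n*m))."""
        best = 0
        for i in range(len(a)):
            for j in range(len(b)):
                k = 0
                while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                    k += 1
                best = max(best, k)
        return best

    def score_post(self, title, body, now=None):
        now = now if now is not None else time.time()
        # 1. posting speed vs. the user's running average
        if self.last_post_time is not None:
            elapsed = max(now - self.last_post_time, 0.001)
            cps = (len(title) + len(body)) / elapsed
            bump = None
            if self.avgcps > 0 and cps > self.avgcps:
                bump = self._bump(cps / self.avgcps, [(1.2, 2), (1.5, 5), (1.7, 10)])
            self.abuse_threshold += bump if bump else -1
            self.avgcps = ((self.avgcps * self.totpost) + cps) / (self.totpost + 1)
            self.totpost += 1
            if self.totpost > 60:
                self.totpost = 10
        self.last_post_time = now
        # 2. title similarity with the previous post
        common = self._longest_common(title, self.last_title)
        bump = self._bump(common, [(5, 2), (10, 5), (15, 10)])
        self.abuse_threshold += bump if bump else -2
        self.last_title = title
        # 3. body length, plus a random 20-character duplicate sample
        bump = self._bump(len(body), [(1000, 2), (5000, 5), (10000, 10)])
        self.abuse_threshold += bump if bump else -5
        if len(body) > 20:
            start = random.randrange(len(body) - 20)
            sample = body[start:start + 20]
            self.abuse_threshold += 15 if sample in self.last_body else -1
        self.last_body = body
        self.abuse_threshold = max(self.abuse_threshold, 0)
        return self.abuse_threshold
```

Response handling (hold for approval above 20, timed posting bans above 40/60/100/200) would then just be a lookup on the returned score.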

Mar 7, 17 / Ari 10, 01 17:35 UTC

Similarly, I'm not overly conversant in Python, but I can follow logical flow. What I know of Python might be enough to make that happen.

If someone is more proficient than I (not difficult), then it's more sensible they attempt it.

From what I understand of Python, you'd not need to worry about things like float, byte, long or char - it should figure that out itself as it executes. You'd not need to "add fields" to each user account; the characters in the post can be sampled from the post itself, previous posts are already indexed and so referenceable, and the "abuse threshold" can live on the script's side.

I cannot fault the logic tree, at a casual glance, and it does appear to solve the majority of instances. My only adjustment here would be to also apply filters from blacklists of specific phrases etc. The response tree, IMHO, would require a little work, and seems to rely on functionality that might not currently be present.

  Updated  on Mar 7, 17 / Ari 10, 01 17:37 UTC, Total number of edits: 1 time
Reason: Additional data

Mar 7, 17 / Ari 10, 01 17:43 UTC

The reason I suggested a sample length of 20 was that most 'common phrases' are fewer than 20 characters. Because of this, even common phrases should be surrounded by other text, which would prevent accidental flagging.

Even in the event of accidental flagging by the 20 characters subroutine, that's still only 10 points, which is not enough, alone, to flag anything. A post would have to have multiple reasons to flag to actually cause posts to not display.
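That sampling check is small enough to show in isolation. A sketch, with a hypothetical function name; it picks one random 20-character slice of the new post and looks for it verbatim in the previous one:

```python
import random

def random_sample_match(post, previous_post, sample_len=20):
    """Pick one random sample_len-character slice of `post` and report
    whether it appears verbatim in `previous_post`.

    A duplicate post always matches (every slice exists in the copy);
    a short common phrase embedded in otherwise different text rarely
    does, because the slice usually straddles the surrounding words.
    """
    if len(post) <= sample_len:
        return post in previous_post
    start = random.randrange(len(post) - sample_len)
    return post[start:start + sample_len] in previous_post
```

A verbatim repost trips this on every call, while two genuinely different posts sharing only a stock phrase will almost never produce a matching slice - which is why the check can carry a heavy penalty without punishing normal posters.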