Dec 25, 16 / Cap 24, 00 10:43 UTC
Asgardia exclusive email addresses
Request for Asgardia-exclusive email accounts provided by the government, using our Asgardian IDs. E.g. FirstNameLastNameCountryOfOrigin@asgardia.space would be useful.
Dec 27, 16 / Cap 26, 00 03:10 UTC
I have to agree with this here, this could be handy - not exactly beneficial but can be extremely useful.
Dec 27, 16 / Cap 26, 00 03:22 UTC
500k+ Email Boxes with 5 GB = 5 PetaByte Storage - No cheap thing.
i would suggest another pattern: [alias-of-own-choice]@[earth-country].asgardia.mail
Example: in my case -> nihylum@germany.asgardia.mail
The asgardia.space domain should be reserved for official addresses (Ministries, Projects, ...).
Dec 28, 16 / Cap 27, 00 05:25 UTC
Think you might want to address your math.
(((572,845 x 5) / 1024) / 1024) = 2.731537819 - for the sake of argument, 2.73PB.
That's still significant, but about half of what you'd quoted.
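The correction above is easy to check. A minimal sketch, assuming the thread's figures of 572,845 citizens and a 5 GB inbox each, converting GB to PB in two 1024 steps (the function name is hypothetical, for illustration only):

```python
def total_storage_pb(users: int, quota_gb: float) -> float:
    """Total mailbox storage in petabytes (1024-based, as used in the thread)."""
    return users * quota_gb / 1024 / 1024

# 572,845 users x 5 GB each: ~2.73 PB, roughly half of the 5 PB first quoted
print(round(total_storage_pb(572_845, 5), 2))  # 2.73
```

The same function also checks the later 1 GB suggestion: `total_storage_pb(572_845, 1)` comes out at roughly 0.55 PB, i.e. hundreds of terabytes.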
It's also a poor idea to include names, ID numbers, or country of origin in the email address.
Dec 28, 16 / Cap 27, 00 05:52 UTC
I agree with EyeR.
They could even limit the email inbox to 1GB or so, which would bring it down to terabytes of storage - still costly, but more doable.
Dec 29, 16 / Cap 28, 00 02:05 UTC
Yes, mail of the form @asgardia.space is good, but it carries a cost... perhaps use only POP3/SMTP, without storing received mail on the server?
Dec 29, 16 / Cap 28, 00 14:43 UTC
cost? incredibly minor, on the scale of it.
We already have the domain. Subdomains can be spawned at no cost (e.g. mail.asgardia.space could host something like Roundcube, giving those without, or who don't use, dedicated email clients access via a web portal), and the existing server should be able to handle the sorting/delivery load with ease. I can say that without even knowing its specs; email servers really don't eat many resources at all.
The one resource it does eat (and cannot recycle) is space for users' inboxes. Emails themselves tend to be incredibly small; even a maximum attachment size of 200MB is pretty small in terms of today's storage. However, it's the number of users that really racks it up: a 5GB inbox per user brings this to nearly 3PB.
Assume the existing server powering this is (as it should be) Asgardian-owned hardware, and further assume it's a standard 4U blade; then it should be possible to wedge about 24 HDs in there. Populating that capacity with 8TB HGST drives yields about 192TB of storage - less than 10% of the requirement - costing about $6,000USD. Meeting the full requirement would need another fourteen blades provisioned thusly, so multiply that number by fifteen, plus the blades themselves, hosting costs, etc. This doesn't leave much room for future expansion, either, and doesn't take into account inevitable drive failures and replacements, which over a pool of 360 HDs will be something requiring attention. But if the citizens are required to bear the cost themselves, it works out to about $0.16/head for the HDs. The blades to put them in, the cost of having this hosted, etc. will obviously be additional.
People still use POP3? Ewww, /me shudders. POP3 does store on the server, by the way, but (with the right client) it optionally removes the stored mail from the server as users collect it. This is unsuitable for users of multiple devices, as it inevitably leads to discrepancies across personal infrastructure, and local hardware failure, network disruptions, etc. can then result in loss of mail the user has never read.
Dec 29, 16 / Cap 28, 00 20:04 UTC
Yep, a specific (official-like) email address would be useful as a public address that isn't our personal one and therefore wouldn't leak our identity. But indeed it will cost some money.
Jan 2, 17 / Aqu 02, 01 03:09 UTC
@EyeR, I think it all depends initially on what infrastructure we have and what we hope to have in the future.
I have been searching in this subforum and in others, and I didn't find information on what we have at the moment. Is there official information somewhere on the location and characteristics of our hardware and software? (I don't participate in Facebook, so if the information is there I apologize in advance.)
I think it makes no sense to think of solutions on premises we don't know - at least that is my opinion, in a project that could transform into something as big as a new nation.
Jan 3, 17 / Aqu 03, 01 03:59 UTC
I've not obtained specifics on the systems - and I have asked. Odd that this hasn't happened.
I'm not particularly interested in the hardware right now, just that the software is secured. It's not really possible to buy secure hardware anymore, but we might be lucky and it's a colo of some ancient Xeon beasty or a SPARC V8 dinosaur. Ideally, those most qualified should be looking into things like OpenPiton and getting busy designing us a 100+ core CPU without backdoors. Eventually I'll have the equipment to build the equipment that will allow me to do this, but I'll be learning a lot as I go, and I don't predict it will evolve rapidly.
If I had to take an educated guess at what they're running, I'd suggest a Skylake quad-core i7-6700 or a Skylake Xeon E3-1275 v5, between 32GB and 64GB of RAM, and 2x 4TB HDs pushed through a 1Gbps pipe. This is just a guess.
If the information is on Facebook, we might as well throw in the towel now, and start again. This time with a little attention to what shady firms do with the data they collect.
From a software point of view, hardware isn't really that important - even desktop hardware could handle what we're asking. Desktop users are likely to be amazed at how much functionality you can cram into one box; mostly because of how poorly their OS performs, they mistakenly think the computer is lacking. A common rule of thumb when deploying a new VM to host a service is to provide it with minimally 512MB of RAM per service operated within; that should be good for most uses. As previously mentioned, the most restrictive factor for deploying email on this system would likely be storage space for the users' inboxes, which, due to the number of users, will be extensive.
For the current software we can assume (maybe slightly more than assume) this CMS is Drupal 8, backed by nginx, with no load balancer in front of it and no squid. It probably isn't even in a VM. They couldn't be bothered to set the rDNS either.
Sure we don't know the base, but that really doesn't matter. It can do it. Assured.
Jan 3, 17 / Aqu 03, 01 10:59 UTC
Well, you don't need it. If you had a need for secure email services, you'd already have done something about it long ago; email is nothing new. It would, however, be nice to have.
As previously specified, the largest barrier will be storage space for the users' inboxes. At current population levels, and using the 8TB HGST drives because they're slightly cheaper than the 10TB ones (and better tested for reliability), we're still looking at over 350 of them to provide just a poxy 5GB of inbox per user - about 2.75PB (petabytes) of storage.
Assume the HDs sit in standard 4U blades, and select a model that can cram 60 HDs into a blade (more common to see numbers like 24); that's another 6 nodes that need hosting: the purchase of 350 HDs, 6 blade chassis, 6 motherboards, potentially 6 sets of networking hardware, likely additional RAID cards per unit as a one-off fee, and then the monthly cost of power and connectivity. Assume each chassis costs about $200USD (likely more), each motherboard and CPU about $250USD, and about the same for each RAID card (if the front panel splits into backplanes fitting 5 SAS drives to a single cable, and each RAID card has five ports, you'd need three RAID cards per 60-bay chassis, 18 total). That makes the cost of the HDs alone $122,500USD, and about $129,700USD total just to buy the hardware. Then there are likely setup costs at the datacentre (we could do with one of our own, too, ideally more than one) and of course fees to keep it all powered, cool, and connected.
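Itemising that estimate makes the total easier to audit. A sketch using only the post's assumed prices (none of these are real vendor quotes):

```python
# Hardware bill of materials, per the assumptions quoted in the post.
drives     = 350 * 350   # 350 x 8 TB HGST drives at ~$350 each
chassis    = 6 * 200     # six 60-bay 4U chassis at ~$200
boards     = 6 * 250     # motherboard + CPU per node at ~$250
raid_cards = 18 * 250    # three 5-port RAID cards per 60-bay node

total = drives + chassis + boards + raid_cards
print(total)                         # 129700
print(round(total / 572_845, 2))     # 0.23 (USD per citizen)
```

The drives dominate: at these prices they are about 94% of the hardware spend, which is why the inbox quota is the main lever on cost.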
That is a significant startup fee. However, we have a lot of citizens; dividing it equally across all of them reduces it to about $0.23/head, which is much more manageable. This of course doesn't cover the (likely monthly; I prefer to pay yearly where companies support it) hosting fees. Hetzner, the folks hosting the server used for this site, could host them for €167.23/month each (plus a €167.23 one-off setup fee), which would be the largest headache. So for about $0.50USD/head we could buy all the hardware, manage the setup fee, and keep the hardware online for just over 12 months. I'm not sure if these racks would be in a secure cage (ideally the potential for this should be monitored, to indicate if it's being cut into to bypass the lock, and on detection, nuke the system contents; should be able to build something that does this for about £10/unit, likely less).
A one-off fee is possibly something many would happily sign up for, but regular expenditure is another thing. As the yearly running costs are less than $1/head, it's likely most would agree to this too - but it's starting to hit the realm of requiring some form of (even voluntary) taxation, something I would be eager to avoid if possible. It also doesn't cover events like hardware failure (though the HGST 8TB drives have a good failure rate, like any other drive they do fail; large stocks recommended). Nor does it cover backing up this much data, which it would be irresponsible not to do: that requires at least three times as many HDs (to follow the industry-standard grandfather-father-son rotation pattern), three times that to rotate in a daily/weekly/monthly pattern, and three times that again if you'd sensibly store backups in a local firesafe and two geographically remote firesafes - and it will be a time-consuming process just swapping drives to write backups to and putting the written ones into the firesafe.
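The stacked "three times" multipliers above compound quickly. A sketch of that arithmetic, taking the post's reasoning at face value (the 27x multiplier is the post's own scheme, not a recommendation):

```python
# Backup drive count under the grandfather-father-son scheme described
# above: 3 GFS sets, rotated daily/weekly/monthly, stored in 3 locations.
PRIMARY_DRIVES = 350   # drives in the live pool, per the earlier estimate

gfs_sets  = 3   # grandfather / father / son
rotations = 3   # daily / weekly / monthly cycles
locations = 3   # local firesafe + two geographically remote firesafes

backup_drives = PRIMARY_DRIVES * gfs_sets * rotations * locations
print(backup_drives)  # 9450
```

That is 27 backup drives for every live drive, which is why backup cost, not the primary pool, would dominate any serious budget under this scheme.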
Also, right now we are not desperate for email services. What we are in more desperate need of (IMHO) is collaboration tools. The same hardware could be used to give users a 5GB storage quota and provide us with "multiplayer software" so we can collectively work on the same part of the same project at the same time. Of course, there's nothing to say this hardware cannot fulfil both roles, sharing the 5GB per user between their email inbox and their filespace.