Thermal transfer in space obviously features neither convection nor conduction, since there is no physical medium for either to take place in. That is why radiation, infrared in particular, gets constant mention. You may, had you paid attention to the topic of the thread (or not, as evidence suggests), have noticed it specifically says "in space", perhaps for precisely this reason. You may further have noticed, had you bothered to read it in its entirety, repeated mention of the systems currently deployed on the ISS (the liquid water and NH3 loops, or the Triol and polymethylsiloxane coolants, which bleed IR through radiator panels), and of their limited efficiency given the methods used. Which is exactly why this needs to be a "topic" at all: simply using the most effective of what already exists is not good enough - we need to improve upon it.

A mass habitation facility will likely need to dissipate gigawatts, if not terawatts, plus the organic (metabolic) input, which is likely in the megawatts. The EATCS on the US side of the ISS can reject around 70 kW of heat, by transferring heat from the station into liquid ammonia which is then circulated through external panelling via twin independent loops. Honestly, the real issue is dissipating heat; generating power is trivial, and doesn't necessarily need to happen locally. We need technologies that are more effective and less fragile - simply scaling existing ones will rapidly and repeatedly generate issues.
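To put rough numbers on that scaling problem, here's a back-of-the-envelope sketch using the Stefan-Boltzmann law. The emissivity, panel temperature and sink temperature are my own assumptions for illustration, not EATCS design figures:

```python
# Rough radiator sizing via the Stefan-Boltzmann law.
# Assumptions (mine, not ISS specs): emissivity 0.9, panels at 300 K,
# an effective sink near 3 K, radiating from both faces of each panel.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, emissivity=0.9, t_panel=300.0, t_sink=3.0, faces=2):
    """Panel area (m^2) needed to reject power_w by radiation alone."""
    flux = emissivity * SIGMA * (t_panel**4 - t_sink**4)  # W per m^2 per face
    return power_w / (flux * faces)

for label, watts in [("ISS EATCS-class load (70 kW)", 70e3),
                     ("1 GW habitat", 1e9),
                     ("1 TW habitat", 1e12)]:
    print(f"{label}: ~{radiator_area(watts):,.0f} m^2 of panel")
```

Even with generous assumptions, a gigawatt-class facility comes out needing panel area on the order of a square kilometre, and a terawatt-class one over a thousand - which is the scaling problem in a nutshell.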
The raw ideas are all we'd need at this stage. The magnetocaloric effect is a good one, albeit difficult in the short term; obtaining resources from around the solar system will not only make building habitation facilities viable, it will also solve problems like the "rare" materials required to leverage such effects. Actually building even short-term, temporary habitation facilities of our own is decades away - I couldn't roadmap even getting materials to begin construction on long-term mass habitation facilities until closer to 2050/2060. That's a lot of time to find better ideas and better techniques. It's likely the station(s) themselves wouldn't even reach final designs for decades. Of course, numbers can be useful, and after identifying "interesting" things we have essentially created a list of potential areas of research to begin experimenting with (I suspect few of us will have access to hard vacuum, so most work will naturally be simulated).
As for i2p - and general computer usage techniques and methods - another place would probably suit a dedicated topic better (when we have a wiki, this desperately needs a page there), and doing it here simply dilutes the thread's original purpose. But as a "competitor" in the same marketplace as Tor, it deserves equal attention to mitigation. The most "secure" methods I know of for tunnelling data are either a VPN or IP forwarding over SSH. I lean towards the VPN. Trust in this, like anything else, extends to trust in the operator: you'd want one that logs nothing, preferably (beyond debug data). Free services are unlikely to be free out of generosity - you are the product, and they will be selling it. The only way to be assured of anything is to run these services yourself. And no, I can't fit you all on mine.

Maybe if access were charged for - at no more than operational cost (perhaps $20 USD/yr per head, ballpark guess) - the whole thing could be largely automated: revoking access for non-payment, obtaining or shedding hardware as usage/subscriptions rise and fall. I could afford to obtain the globally distributed hardware required to act as relays and effectively distribute our collective load. However, I would feel uncomfortable doing this as a personal initiative, and would rather an Asgardian initiative formed around it. I would desperately want to avoid any impression of leveraging Asgardians as part of a get-rich-quick scheme, and I would not expect an entire nation to place its faith and trust in me simply on my say-so - I would expect some communal agreement about what precisely is operated, as much as how it is operated.

I specifically use and suggest OpenVPN: free, open source, etc. It's possible to have users authenticate with certificates without the private key ever leaving their machine, so implemented correctly, the connection is as secure as the user's ability to keep their key secure, and a breach of that key should not impact other users' security. I mentioned somewhere else embedding PKCS#11 or X.509 credentials in the digital section of passports, with card readers supplied alongside the passport. These could be used to securely authenticate for access to services, and could potentially provide individual access to a Citizen-only network - a private VPN creating a secure tunnel to Asgardian services, giving access to a secure area (the services hosting that content would only answer requests from the VPN; the internet would be isolated from that software), which is where access to things like collaboration tools (when we get them) should sensibly reside. IMHO. But we've got a long way to go before even passports.
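For what it's worth, here is a minimal sketch of the certificate-based mutual authentication that OpenVPN builds on, shown with Python's standard ssl module rather than OpenVPN itself; the hostname and file names are placeholders. The point is simply that only the public certificate crosses the wire, while client.key stays on the user's machine:

```python
import socket
import ssl

# Client trusts only our own CA, and presents its own certificate/key pair.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.crt")
ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

with socket.create_connection(("vpn.example.org", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="vpn.example.org") as tls:
        # The server verifies client.crt against the same CA; the private
        # key never leaves this machine, only the public certificate does.
        print("Negotiated:", tls.version(), tls.cipher())
```

A compromised client key can then be revoked on its own, without touching anyone else's credentials, which is the property described above.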
The "best" method(s) is likely the realms of personal opinion but some effective ones are to boot into a live-disc. I'd also suggested putting one on the card reader, so can boot from USB. The "live-disc" should unfold itself to RAM, so not impact any existing software or configurations. Built correctly, the livedisc would provide a known-good OS free from any unsafe user habits and all "tools" to get further, and done "right" the user wouldn't even be aware of them operating. Operating in RAM this also has the added advantage of evading digital forensics as short of freezing RAM with liquid O2 and performing a cold boot attack user activities are lost with the power. From there I'd spawn a VM(Virtual machine), and boot another(potentially the same) live-disc inside - This sectionalises "online" activities from "bare metal" hardware. Work inside the VM, most malware of modern ages detect VM and don't operate to avoid detection and observation of their operation. In the worst case senario of infection, you break the OS etc, you just reboot the VM - fixed. Same for the host OS, should that somehow fail, but as you should be really avoiding using that, it should never fail it's not doing anything.