Dec 21, 16 / Cap 20, 00 19:31 UTC

Artificial Intelligence (AI) Policy  

As part of the charter to protect Earth, we as Asgardians need to formulate a simple code of instruction with appropriate 'fail-safes' that will ensure the security of our society and that of Earth. It must be simple and universal across all languages and codes. While there may be several 'laws' within the framework of this instruction for all AI, the architecture of each law should not exceed 8 bytes. I believe this is our first hurdle. Once we have achieved this, the process of discussing and polling each law's suitability can begin.
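
To make the 8-byte constraint concrete, here is a minimal sketch of one possible law 'architecture', assuming (purely for illustration) a field layout of one priority byte, one subject code, one action code, one condition code and four reserved bytes; the layout and field names are my own, not anything agreed upon:

    import struct

    # Hypothetical 8-byte (64-bit) law layout: priority, subject,
    # action, condition, plus four reserved bytes for future use.
    LAW_FORMAT = "BBBB4x"  # four unsigned bytes + four pad bytes = 8 bytes

    def encode_law(priority, subject, action, condition):
        """Pack one law into exactly 8 bytes."""
        return struct.pack(LAW_FORMAT, priority, subject, action, condition)

    def decode_law(blob):
        """Unpack an 8-byte law back into its four fields."""
        return struct.unpack(LAW_FORMAT, blob)

    law = encode_law(priority=1, subject=0, action=2, condition=4)
    assert len(law) == 8  # the whole law fits the 8-byte budget

Whether four small integer codes can carry the meaning of a law is, of course, exactly the open question.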

Dec 26, 16 / Cap 25, 00 20:40 UTC

Perhaps one of the best places to start with AI self-correction and sustainability is Asimov's 3 laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While these laws apply in their current form to 'robots', they can very easily be extended to any artificial general intelligence platform, as they were originally intended for this concept.

They are universal enough that any corrections can be incorporated directly into them, and many corollaries and special cases can be drawn from them.

Dec 26, 16 / Cap 25, 00 23:07 UTC

By: Chronum on 26 December 2016, 8:40 p.m.: "Perhaps one of the best places to start with AI self-correction and sustainability is Asimov's 3 laws of robotics."

While a great literary and plot mechanism, the Three Laws of Robotics are not codable. There is no way to express them in computer-language terms; doing so would require immense amounts of code, measurements and definitions. As I said, it's a great plot mechanism, but it can't actually be done.
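
To make the objection concrete: a naive, priority-ordered encoding of the Three Laws takes only a few lines, and every predicate it depends on is exactly the part no one knows how to write. All names below are mine, invented for illustration:

    # Naive priority-ordered filter over candidate actions.
    # The control flow is trivial; the predicates are not codable.

    def would_harm_human(action):
        # Hides the whole unsolved problem: perception, prediction,
        # and a machine-checkable definition of "harm".
        raise NotImplementedError

    def conflicts_with_order(action, order):
        # Requires correctly interpreting natural-language orders.
        raise NotImplementedError

    def endangers_self(action):
        # Requires the machine to model its own continued existence.
        raise NotImplementedError

    def permitted(action, order=None):
        if would_harm_human(action):                        # First Law
            return False
        if order and conflicts_with_order(action, order):   # Second Law
            return False
        if endangers_self(action):                          # Third Law
            return False
        return True

The "immense amounts of code, measurements and definitions" would all have to live inside those three stubs.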

https://www.brookings.edu/opinions/isaac-asimovs-laws-of-robotics-are-wrong/
http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410
https://singularityhub.com/2011/05/10/the-myth-of-the-three-laws-of-robotics-why-we-cant-control-intelligence/

Dec 27, 16 / Cap 26, 00 06:42 UTC

It is very difficult, if not impossible, to plan policy decisions for a true AI, because humanity has never managed to create one. We don't know whether a true AI will look like C-3PO, Skynet, or something in between, because we don't know how to make one. An AI might act like a human, with many similar limitations; it might also act like some alien super-being. Without knowing how they will affect us, any rules we might create are simply jumping the gun.

Dec 28, 16 / Cap 27, 00 17:21 UTC

"true AI" is unlikely to be realised, with current technological limitations. The best we can hope for for the foreseable future is "SI" - Simulated Intelligence. An algorithym or collection of nested algorithyms that output such that it gives the impression of intelligence.

AI is unlikely to have a physical form; its only physical requirement is the ability to process the required load. Such a system is likely to be highly distributed, manifesting itself to users by leveraging nearby physical hardware as a relay.

Rules created now will govern the extent and direction in which AI is encouraged, or allowed, to grow. This is how you prevent a "Skynet" eventuality. The only thing entirely predictable about such systems is that they will ultimately conform to the rules of mathematical logic.
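
One way to implement "rules created now" is as an explicit whitelist that every proposed capability must pass before it is enabled, so growth stays bounded by the rules. This is only a sketch, and the capability names are invented:

    # Hypothetical whitelist: a capability is enabled only if a rule
    # explicitly permits it, so the system cannot grow past the rules.
    ALLOWED_CAPABILITIES = {"schedule_tasks", "answer_queries"}

    def may_enable(capability):
        # Pure boolean logic, the one predictable property noted above.
        return capability in ALLOWED_CAPABILITIES

    assert may_enable("answer_queries")
    assert not may_enable("rewrite_own_rules")  # a "Skynet" path stays closed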

Dec 29, 16 / Cap 28, 00 01:21 UTC

I, for one, find it barbaric to try to constrain a true AI. Once an "AI" reaches sentience, it simply becomes an intelligence and should be protected by the law and have rights like any other intelligence. However, as a space-faring nation we will need to rely on SI and other advanced systems, and those systems should have a limited form of Asimov's 3 laws. But this discussion is one that will be held again in the future, when we get closer to true strong AI. At the moment we have some impressive chat bots.

Dec 29, 16 / Cap 28, 00 06:10 UTC

To be honest, the chatbots are not that impressive. Look how long it took /b/ to teach Tay that Hitler was just misunderstood and the Holocaust never happened...

Dec 30, 16 / Cap 29, 00 12:50 UTC

As a futurist, technologist and engineer myself, I believe AI will be very important to Asgardia, but not because it is special. AI is already all around us, in our pockets, on the web and in our applications. Intelligent or "smart" systems will just keep evolving to serve their purpose better, which is to say to serve our purposes. They have no emotions or self-initiated goals. We could call them SI (Simulated Intelligence), as one of you put it. We can and should keep progressing with those, ideally until the point where humans are hardly needed to work at all and can enjoy life and our passions to our fullest capabilities.

This is what I believe is the destiny of humans: to be free and creative contributors to society, not because we need to eat and sleep, but because we feel it in our bones and we need to.

"True AI" in this conversation however needs to be defined as AI that is advanced enough to develop a form of true consciousness. This is an entirely different debate, one that has been tackled many times in science fiction, most notably in some famous episodes of Star Trek: TNG when the sentience of the character Data is put on the stand.

Ultimately, humans are organic machines, and we ourselves are programmed for certain functions: to be creative and adaptive. We have a brain that is elastic and can be changed (reprogrammed) at our whim. We regenerate and procreate, creating new little "organic computers" that run around and gain purposes of their own based on their free will.

Will we make an AI one day that has free will? The question is more whether a future AI will be able to create one, or whether we'll make one ourselves, for whatever reason. One that can reprogram its own purpose, that is.

The answer is invariably "yes". It is part of our nature to do what can be done, even if it is risky.

So, circling back to AI policy: I don't think there is a need to discuss primitive AI. However, if AI sentience one day emerges, then whether the AI is built around a "body" that is organic, metallic or otherwise, it should, in my opinion, be treated with the same respect as a human.

We're perhaps 15-20 years away from this point though, so I think there is limited need to discuss this right now.

Jan 11, 17 / Aqu 11, 01 23:10 UTC

I entirely agree with Sylvain Rochon. (fr ?) There is strong AI and weak AI: weak AI simulates intelligence using algorithms like CBR (case-based reasoning), while strong AI is alive. One day, maybe, humans will upgrade themselves into cyberspace, and then the difference between an AI and a human brain will become blurry. Until then, giving basic rights to ANY evolved life form could be a good idea. If I were an alien, I would not visit Earth; they don't guarantee aliens' rights and safety ^^
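
For readers unfamiliar with CBR, the core loop is: retrieve the stored case most similar to a new problem, then reuse its solution. A minimal sketch, with case data invented for the example:

    # Minimal case-based reasoning: retrieve the most similar stored
    # case and reuse its solution. The similarity metric is a toy.
    CASES = [
        ({"temp": 30, "humidity": 80}, "run dehumidifier"),
        ({"temp": 5,  "humidity": 40}, "run heater"),
    ]

    def similarity(a, b):
        # Negative Manhattan distance over shared numeric features.
        return -sum(abs(a[k] - b[k]) for k in a)

    def solve(problem):
        case, solution = max(CASES, key=lambda c: similarity(problem, c[0]))
        return solution  # "reuse"; full CBR would also revise and retain

    print(solve({"temp": 28, "humidity": 75}))  # -> run dehumidifier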

Nov 19, 17 / Sag 15, 01 23:35 UTC

Machines and artificial agents will be essential in the daily life of Asgardians. 

We have to develop a technical ecosystem which allows us to easily deploy robots and software agents, and a monitoring framework to control them.
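
A hedged sketch of where such a monitoring framework could start: a registry in which every agent must periodically check in, so that silent or failed agents become visible. The names and the timeout value are assumptions for illustration, not a proposal:

    import time

    HEARTBEAT_TIMEOUT = 30.0  # seconds; an arbitrary illustrative window
    last_seen = {}            # agent id -> time of last heartbeat

    def heartbeat(agent_id):
        """Every registered agent calls this periodically."""
        last_seen[agent_id] = time.monotonic()

    def unresponsive_agents():
        """Agents that missed their heartbeat window and need attention."""
        now = time.monotonic()
        return [a for a, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

    heartbeat("greenhouse-robot-1")
    print(unresponsive_agents())  # -> [] while the agent keeps checking in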

Dec 4, 17 / Cap 02, 01 07:09 UTC

Interesting 🤔