Apr 12, 17 / Tau 18, 01 12:02 UTC

Humanity is already losing control of artificial intelligence  

So I wake up this morning to this article: https://www.thesun.co.uk/tech/3306890/humanity-is-already-losing-control-of-artificial-intelligence-and-it-could-spell-disaster-for-our-species/

And I have to say, it is very upsetting. In short, the article describes how these "geniuses" are creating neural networks to identify diseases and automate driving -- among other things that will make you, human, obsolete.

They are finding that the neural networks are performing exceptionally well. The problem? The networks are so complex that researchers have no idea how they are learning, or what the process behind the AI's decision-making from its unsupervised learning (I'm assuming it is unsupervised...) actually is. I mean, holy c**p! Bro, if you don't know what you're doing, don't let it loose... which seems to be what they are doing. This, coupled with that "terminator" thing being developed by Boston Dynamics... I mean, I hate to be a pessimist, but...

How can we have such strong confidence in artificial intelligence when we don't even know what's happening under the hood? Strange times ahead, folks.

Apr 12, 17 / Tau 18, 01 12:14 UTC

Human beings cannot even understand how their own brains work. Why would they have any clue as to why artificial brains work? Like our ancient ancestors and fire, they knew they could make it, and how to make it, but not how it worked. I expect that expertise would come in time.

The artificial intelligences presently being produced are not general-purpose AIs. As such, they can only do one thing, and they do that one thing really well. They are even more limited than an autistic savant, so they pose even less of a threat than an autistic savant, which is minimal.


Apr 12, 17 / Tau 18, 01 12:21 UTC

Man, that^ argument doesn't make me feel any more confident. 

Why would they have any clue as to why artificial brains work? I mean, a computer is an artificial brain of sorts, and its creators understood how it worked before unleashing it into the wild. Why would AI be any different?

Elon Musk needs to hurry up with that neural lace.

  Last edited by:  Yoevelyn Rodriguez (Asgardian, Comm Assistant)  on Apr 12, 17 / Tau 18, 01 12:22 UTC, Total number of edits: 1 time
Reason: Grammar

Apr 12, 17 / Tau 18, 01 14:22 UTC

Ok, let me explain AI programming to you a bit, as far as I am able to understand it myself.

An AI is a set of rules, weights, and measures. It is given a list of 'things' it can do, and it is given an outcome that is satisfactory. Sometimes there are degrees of satisfaction and penalties for failure, but let us keep this simple.

The AI then assembles its 'things' into a series of combinations to achieve its pre-programmed outcome. Combinations that fail are weighted poorly, while those that succeed are weighted favorably. It continues to weigh and re-weigh all combinations of its 'things' until it reaches an outcome of maximum favorability. In cases of competition, it keeps a pool of favorable combinations and then applies the most favorable one to the circumstances it encounters. Sometimes the AI will think up a solution that the humans never considered, because AIs are not hindered by things like negative experiences, morality, or social pressures. They are only interested in the results.
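The weighting process described above can be sketched in a few lines of Python. This is a toy illustration only (the moves, the target, and every name here are invented for the example): the 'things' are numeric moves, the pre-programmed outcome is a target sum, and each random combination is scored by how close it gets, with the most favorable one kept.

```python
import random

MOVES = [1, 2, 3, 5, 8]   # the list of 'things' the AI can do
TARGET = 12               # the outcome deemed satisfactory

def score(combo):
    """Weight a combination: higher is better, 0 means maximum favorability."""
    return -abs(sum(combo) - TARGET)

def search(trials=2000, seed=0):
    """Try random combinations of moves, keeping the most favorable one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        combo = [rng.choice(MOVES) for _ in range(rng.randint(1, 5))]
        s = score(combo)
        if s > best_score:   # keep the most favorable combination so far
            best, best_score = combo, s
    return best, best_score

best, best_score = search()
print(best, best_score)
```

Nothing here is specific to neural networks; it is just the weigh-and-keep-the-best loop from the post, which is the general shape of this kind of search.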

When you think about it, human beings' thought processes work the same way. We figure out what works, and what doesn't work, and then use what works best for us in whatever situation we are in. If we encounter a new experience, we explore it, test what works and what doesn't work, and then choose what works best.


Apr 12, 17 / Tau 18, 01 17:31 UTC

That is an interesting process, and it doesn't sound as complicated as the article made it seem. Actually, deep learning doesn't sound all that out there. It's pretty much statistical analysis. Sounds terribly close to Bayesian probability. Or maybe I'm misunderstanding it.
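For what it's worth, the Bayesian flavor mentioned above fits the disease-prediction example from the article. A toy Bayes' rule calculation (all numbers invented for illustration) shows why a "positive" from even a very accurate test can still be more likely wrong than right when the disease is rare:

```python
# Invented numbers: a test with 99% sensitivity and a 5%
# false-positive rate, for a disease affecting 1% of people.
prior = 0.01            # P(disease)
sensitivity = 0.99      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of testing positive, then Bayes' rule.
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive   # P(disease | positive)

print(round(posterior, 3))
```

With these made-up numbers, a positive result only means about a 17% chance of actually having the disease, which is the kind of statistical reasoning the post is gesturing at.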

Apr 13, 17 / Tau 19, 01 00:34 UTC

Speak for yourself. Some of us have attempted to gain both.

Of course, the price of wisdom is pain, and most try to avoid it.


Apr 13, 17 / Tau 19, 01 12:05 UTC

I have written some fairly effective programs that might be classified as artificially intelligent. I programmed them to make guesses based on factors, but with very limited 'learning' ability. I am a programming dabbler. I have no degrees or formal instruction in programming; I just needed to know it, so I taught it to myself.


Apr 13, 17 / Tau 19, 01 22:17 UTC

OK. So let me ask you guys this question: do you think that what Elon Musk is proposing (the neural lace) is an adequate solution to avoid our demise at the hands of AI?

Apr 14, 17 / Tau 20, 01 12:04 UTC

Aaaaaaactually, I applied for a job at that company. When I first started college (20 years ago) I wanted to major in cybernetics. My counselor had no idea what that was and shoved me into Physics instead. :/ Thus, I just went into business and taught myself.

I do not believe that our end will come at the hands of AI. I believe our end will come at our own hands due to willful ignorance. We are only one piece in an incredibly complicated world and we keep throwing things off-balance because we can. One of these days, things will be thrown so far off balance that they will not be able to remain stable for humanity. I expect some other life forms will survive, but not humanity. Unless we increase our collective wisdom, as a species, we will doom ourselves.


Apr 18, 17 / Tau 24, 01 17:24 UTC


Scary but not surprising, we humans do have a bad habit of reaching for more than what we can grasp after all. 

It will actually be humanity's arrogance and greed that destroy it. It is as if humanity as a species has a god complex and thinks its interference is always needed to solve problems. Anyway, more than increasing its collective wisdom is needed for our species to survive; the understanding of that wisdom is also needed.

Apr 20, 17 / Tau 26, 01 00:30 UTC

Like many have said, humans always remember the question of whether they are capable of doing something, and they forget the most important question - should they do it?

Apr 20, 17 / Tau 26, 01 02:17 UTC

Well, guys, it's already happening. It's too late for philosophical musings. It's a practical problem we have at hand, and who's to say that the level of intelligence that could threaten our very existence isn't in the works already? We already have D-Wave, so couple quantum computing with artificial superintelligence and we create a god.

Apr 20, 17 / Tau 26, 01 10:39 UTC

The "news" article is sensationalist nonsense. 

"Computers are already performing incredible feats – such as driving cars and predicting diseases – but their makers say they aren’t entirely in control of their creations."

First off, handling driving variables and computing diseases are a far cry from actual human-like intelligence. And secondly, no tool is entirely under its creator's control. Do you have control over the resistance of the coils inside your electric dryer? As you drive, do you have constant, direct control over the bolts and nuts of the chassis?

Artificial intelligence does not exist in an efficient form on Earth at this time. 

You have two kinds of "AI". Chatbot-type AIs are pre-programmed to give a specific response to a specific user input. For instance, I can program a logic gate so that if (userInput == "hello"), response = "hi John".
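That pre-programmed kind can be sketched as a simple lookup table in Python (the inputs and responses here are invented for illustration): the program maps known inputs to canned responses and has no answer at all for anything outside its table.

```python
# A minimal sketch of a pre-programmed "chatbot": every response is
# hard-coded in advance; nothing is learned.
RESPONSES = {
    "hello": "hi John",
    "how are you": "I am fine, thank you",
}

def chatbot(user_input):
    """Return the canned response for a known input, else a fallback."""
    return RESPONSES.get(user_input.strip().lower(), "I do not understand")

print(chatbot("Hello"))
print(chatbot("what is love"))
```

The fallback line is the giveaway: faced with any input its author didn't anticipate, this kind of "AI" can only shrug.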

The second kind of AI, true AI, is an AI that can learn new responses when faced with unknown inputs. These AIs are often portrayed as super spectacular in movies and in sensationalist media, but in actual life they are extremely impractical. First off, the AI has to have enough storage space to accumulate the memory of all its experiences. This means all the video data, sound data, and possibly all the tactile data it has recorded over its entire existence. Then, the AI must be able to access all this data at will - not in a linear fashion, but with the ability to reach any random string of data right away, which means the data has to be stored in RAM, not ROM. And finally, the AI won't necessarily be smart. In order to solve a problem, the AI will need to try random solutions, but these solutions will often be completely rubbish or illogical - much like a child's. The mistakes and failures it makes (which must also all be stored in memory, and some of which might accidentally end the AI's own existence) are the only way a true AI may somehow find the solution to a problem.

AI is a danger to mankind only if mankind is foolish enough to assume that AI is smart. AIs out there are either man-directed (to give the illusion of knowledge) or highly inefficient. 

  Last edited by:  John Skieswanne (Asgardian)  on Apr 20, 17 / Tau 26, 01 10:47 UTC, Total number of edits: 1 time

Apr 20, 17 / Tau 26, 01 17:54 UTC


I have to disagree with your assessment. It's obvious to me that the state of computing is ever-changing. The landscape is especially promising for this sort of thing with the introduction of quantum computing. All of those constraints you cite as barriers will dissolve as quantum computing becomes more reliable, which, in effect, is already happening. It's just a matter of time.

Maybe I don't know how the components of my washing machine work, but I'm sure that the mechanical engineers who put the machine together clearly understand how and why it works. Same with a car engine. There is no doubt in my mind.

What I take away from this article is that artificial superintelligence's performance is far superior to what developers anticipated (or at least really freaking efficient), without them knowing why or how it is happening.

I suppose the lesson here is to stop and understand, really understand, what we are doing before committing to a future full of dangerous unknowns.

Apr 20, 17 / Tau 26, 01 17:57 UTC

There are people smarter than I am.

There are people dumber than I am.

So far none of us have managed to eliminate the rest of us from the gene pool.

I expect the same will be true for artificial intelligence.