After Google got its hands on DeepMind, the internet has been going wild with articles about AI. Everything from “should they be allowed to vote” to “are they human” has passed through my feeds.

My thoughts are more grim, so let’s start at the beginning. Current AI is not even close to what most people expect it to be. It can’t yet think like you or me, or make for a valuable, smart companion. Sure, you can ask something like IBM’s Watson a question and it will dig around its database to provide you the answer, but that answer will be an existing fact, not a collection of its experiences formed over a lifetime of interactions and imperfections.

Let’s start with “thoughts”, what are they?

Thought can refer to the ideas or arrangements of ideas that result from thinking, the act of producing thoughts, or the process of producing thoughts. Despite the fact that thought is a fundamental human activity familiar to everyone, there is no generally accepted agreement as to what thought is or how it is created. Thoughts are the result or product of spontaneous acts of thinking.
http://en.wikipedia.org/wiki/Thought

Short answer: we don’t know exactly. So my question is: how do we expect to recreate, in a radically different way from our own, something we don’t even fully understand yet?

We have made machines that mimic thoughts, or what we consider thinking. Using sets of rules for behavior, we can make machines learn how to walk, for example. We can calculate exactly how a robot should walk to move forward optimally, but making the machine figure it out itself is far more useful, especially when it has to walk over uneven terrain and adapt accordingly.
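As a toy illustration of that kind of trial-and-error learning (the reward function and all numbers here are invented for the sketch, not taken from any real robot), a machine can tune its own step length instead of being handed the optimal one:

```python
import random

def reward(step_length):
    # Hypothetical reward: distance covered, penalized when overly long
    # steps trip the robot on uneven ground. The peak is at 0.5.
    return step_length - step_length ** 2

def learn_gait(trials=1000, seed=42):
    """Simple hill climbing: try small variations, keep what walks further."""
    random.seed(seed)
    best = random.random()                            # random starting step length
    for _ in range(trials):
        candidate = best + random.uniform(-0.1, 0.1)  # try a small variation
        if reward(candidate) > reward(best):          # keep it if it works better
            best = candidate
    return best

print(round(learn_gait(), 2))  # converges near the optimum of 0.5
```

Nobody told the machine that 0.5 was the right answer; it only ever compared “did that step work better than the last one?”, which is also why the same loop keeps working when the terrain (the reward) changes under it.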

Still, this is only one small part of AI, and it comes nowhere close to anything supremely complex and able to create new goals for itself.

Currently, much of the smart AI is a collection of neural networks that resemble the human brain in how they recognize patterns. This is great for digging through large amounts of data and finding relations within it. So it seems that most AI today is applied to the big-data dilemma: data sets with so many variables that people can’t comprehend what to do with them or draw accurate conclusions.
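As a minimal sketch of what such a network does at its smallest scale (a single artificial neuron, far simpler than anything deployed in practice, with the training data made up for the example), here is a perceptron that learns the OR pattern from examples rather than being programmed with the rule:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Adjust two weights and a bias whenever the neuron's guess is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1       # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The OR pattern, given only as input/output examples.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x, _ in samples])  # → [0, 1, 1, 1]
```

The pattern was never written into the code; it was extracted from the data. Scale the same idea up to millions of weights and you get the networks that dig through big data for relations.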

Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it… (Dan Ariely)

This is great for companies like Google, Facebook and Microsoft, and it will certainly fuel the development of new AI technology. Most of these developments will likely stay within those companies, though a movement toward the broader public is starting to take shape, with SkyMind for example.

But these are all only small parts that will one day have to work together in order to make our own “HAL 9000” that will attempt to brutally murder us all.

The systems we have are the means to communicate, to learn to interact with the environment (when applied to a mobile robot), and to process data. Communication and data processing already kind of work together: think of Siri for voice communication, or just any service that gives you the information you want, like Google. These are crude versions of AI that we already use every day.

To make these systems truly useful, we have to make them smarter than us. Teach them who we are, how they can convince us to accept certain facts, or how to give advice on what to do with our lives now that our technology already does everything better than we ever will.

I’m clearly drifting off into a dangerous utopia where humans are done improving and might as well be extinct…

How do we make an AI come up with its own solutions and think about subjects without humans telling it what to do? This is the big philosophical question that drives many, but nobody knows how yet, or whether we ever will.

First we’d have to find out how we ourselves work, or at least how a lab mouse works: isolating all of its components, transferring them into algorithms, linking those algorithms to work together, and somehow speeding up the process of evolution. Give it proper outside input it can work with, so it is forced to evolve in the ways we want it to. Should it process data in ways valuable to itself, or only to others? From an evolutionary standpoint it’s difficult to see how evolving oneself only to benefit someone else would work.
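The sped-up-evolution idea can be sketched with a toy genetic algorithm. Everything here is invented for illustration: the “genome” is a bit string, and the fitness goal (all ones) stands in for whatever outside input we would use to push the system in the direction we want:

```python
import random

def evolve(length=20, pop_size=30, generations=100, seed=1):
    """Evolve bit-string genomes toward a fitness goal we define."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    fitness = sum                                  # fitness = number of ones
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection pressure
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # crossover of two parents
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:              # occasional mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # close to the maximum of 20: the population adapted to our goal
```

The point of the sketch is the last paragraph’s problem in miniature: the population only ever “improves” against a fitness function that we chose, not one it chose for itself.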

And another question: would we even want to interact with it?

It will be hard for an AI to be as error-driven as a human. Even in studies where humans had to work with an AI to solve a problem, the AIs that made mistakes and slowly worked out suggestions together with the human were generally more accepted than AIs that bluntly told the human how to do it.

If an AI were aware, it would need a special program to keep it from being annoyed by humans who keep introducing critical errors into its system. I believe we are far from ready to let something that has little to no flaws guide us.

An AI that is eventually self-improving, learning and aware would be best off working only on our biggest scientific problems, coming up with solutions to pretty much everything from an isolated area. Only serving us while thinking it’s serving itself?

There is much we have to learn. I for one look forward to the day conscious AI becomes a reality. (This is mostly in my head, and some things might not be scientifically grounded or may already be solved; feel free to change my mind.)