Engineering Consciousness

Something a bit different – an engineer’s view of consciousness.

Every so often I see articles on the Internet explaining consciousness in terms of vibrations or quantum theory. However, I think these overcomplicate the issue. If we approach the problem from an engineering viewpoint, I think the reason for consciousness emerges.

This isn’t an idea I’m claiming for my own – I came across it when researching robotics in the 1990s. However, I haven’t seen it explained anywhere recently so it is probably worth writing down. If you can spot holes in the idea then please comment below.

TL;DR: consciousness is a necessary part of creatures that learn how to interact with the world.

A simple problem with a simple robot

Imagine a simple mobile robot. It has:

  • Not much weight – it doesn’t matter if it bumps into things
  • Two motors driving two wheels
  • Two touch sensors – one at the front left and one at the front right.
  • Some kind of neural network that connects the touch sensors to the motor controls.
  • Programming that tells the robot it must keep moving forward – if it stops, it must reprogram its neural network until it can move forward again.
[Image: Small, light robot]

We put this robot into a maze and see what happens.

If we’ve got it right (and there are no dead ends – discussed later) then the robot learns to turn left when the right touch sensor hits something, and to turn right when the left sensor hits something.

[Image: Robot learns to turn right when the left bump sensor hits something]

If there are dead ends then the robot will probably end up learning to always turn the same way – for example, turning left when either sensor gets hit.
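
To make that concrete, here is a minimal sketch in Python of what the robot might converge on. It is an illustration only: a lookup table stands in for the neural network, and drawing a fresh random table stands in for ‘reprogramming’.

    import random

    # A lookup table stands in for the neural network: each (left_hit, right_hit)
    # sensor state maps to (left_motor, right_motor) speeds from -1.0 to 1.0.
    SPEEDS = [-1.0, 0.0, 1.0]

    def random_policy():
        return {(l, r): (random.choice(SPEEDS), random.choice(SPEEDS))
                for l in (False, True) for r in (False, True)}

    policy = random_policy()

    def step(left_hit, right_hit):
        """One control step: read the sensors, drive the motors, reprogram if stuck."""
        global policy
        left_motor, right_motor = policy[(left_hit, right_hit)]
        if left_motor <= 0 and right_motor <= 0:   # no longer moving forward
            policy = random_policy()               # "reprogram the neural network"
        return left_motor, right_motor

    # A policy the maze would eventually select for:
    #   (False, True) -> (0.0, 1.0)  right sensor hit: right wheel drives, turn left
    #   (True, False) -> (1.0, 0.0)  left sensor hit: left wheel drives, turn right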

Easy, huh? No need for consciousness. Let’s try a harder problem.

A bigger, heavier, faster robot

[Image: A bigger robot. Picture by Jiuguang Wang – own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4060527]

OK – now we’ve got a robot weighing 20 kg or more, with big motors and batteries. It can move fast, and if it hits something at high speed it may damage both itself and whatever it hits. Can we use the same learning algorithm?

Clearly touch sensors alone won’t cut it here – by the time the robot hits them it has already caused damage. We need to add some longer-range sensors. Let’s spend some money and use LIDAR, which will give us an accurate 2D map of the environment in front of the robot. We could do something cheaper and less accurate, but to avoid any problems we’re just going to spend the money.

This gives the robot a map something like this:

[Image: Results of LIDAR scan – robot is at the bottom, red lines indicate the limits of the scan]

What should the robot do with this information?

  1. It could turn immediately.
  2. It could turn at the last moment.
  3. It could do something between 1 and 2.
  4. It could slow down.
  5. It could speed up, although this might not be a helpful strategy.

With the simple robot there is no element of time – everything is instantaneous. The heavy robot, though, has a longer-range view, and there are lots of things it could do over a wide range of times. In addition, we may get wheel slip, which makes everything more complex.
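
As an illustration of that time element (my own sketch – the braking figure and thresholds are invented parameters), a simple policy might compare the nearest LIDAR range against the stopping distance v²/(2a):

    def choose_action(distance_m, speed_m_s, brake_decel_m_s2=1.0):
        """Pick a reaction from the distance to the nearest LIDAR obstacle."""
        # Distance needed to stop from the current speed: v^2 / (2a).
        stopping_distance = speed_m_s ** 2 / (2 * brake_decel_m_s2)
        # Time until impact if we carry straight on at this speed.
        time_to_contact = distance_m / speed_m_s if speed_m_s > 0 else float("inf")
        if distance_m < 1.5 * stopping_distance:   # 1.5 is an arbitrary safety margin
            return "slow down"
        if time_to_contact < 2.0:                  # seconds - another tunable choice
            return "turn"
        return "keep going"

Every constant in there is a judgement call that depends on the robot’s mass, speed and grip – exactly the kind of thing that would otherwise have to be learned by crashing.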

We could use a trial-and-error algorithm:

  1. Drive along until the touch sensors hit something;
  2. Use the touch-sensor input as ‘pain’, prompting the robot to try a different neural-network mapping from the LIDAR input to the motor controls;
  3. Try again.

However, there are an awful lot of possible mappings, so a blind search will need a huge number of tests. Multiply that by the time for each test run, plus the time to get back to the starting position, and it adds up to a very long time to learn something simple.
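
To see why: if the network’s behaviour is set by, say, 20 connections with 10 possible weights each, there are already 10^20 candidate mappings. In pseudo-Python the blind search looks like this (random_mapping, reset_to_start and run_trial_in_maze are hypothetical stand-ins for real hardware):

    def learn_by_pain(max_trials):
        """Blind trial and error: keep rolling new mappings until one avoids pain."""
        for trial in range(max_trials):
            mapping = random_mapping()         # a fresh LIDAR -> motor mapping
            reset_to_start()                   # slow, and hard to do accurately
            pain = run_trial_in_maze(mapping)  # True if a touch sensor fired
            if not pain:
                return mapping                 # keep the first mapping that works
        return None                            # gave up before learning anything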

There are other problems:

  • It is very hard to get back to the starting position with sufficient accuracy, particularly for a robot that is trying to learn how to move around.
  • The environment may have changed by the time we do the next test run. If we’re trying to recognise and avoid a cat, the cat may get bored and wander off.

Complex learning

Learning in a real environment is hard. One way around the problem is for the robot to simulate the world and use that simulation to work out what to do. A computer simulation isn’t bound by the rules of the real world – the number of test runs is limited only by the power of the computer, not by physics. Once the simulation has come up with a reasonable approach to the problem, the robot can try it out and feed the results back into the simulation.

  • The simulation needs to model the robot itself, which may change over time (e.g. humans grow while they are learning; the robot’s payload, and thus its weight, may change);
  • The simulation needs to take account of some of the robot’s internal state – e.g. how much charge is in the batteries;
  • The simulation needs to include all relevant sensor data over time;
  • The objective is to try lots of possible mappings of the neural network to see which ones avoid hitting the touch sensors.

We can run thousands of simulations per second on the robot. This gives us a realistic chance of finding a reasonable neural-network mapping, so the robot can avoid obstacles at longer range.
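
A hedged sketch of that inner loop – world_model, rollout and the scoring are invented names for illustration, not a real API:

    def plan_in_simulation(world_model, robot_state, candidate_mappings, horizon):
        """Score candidate mappings in simulation; only the winner runs for real."""
        best_mapping, best_score = None, float("-inf")
        for mapping in candidate_mappings:
            sim = world_model.copy()               # simulated runs are nearly free
            sim.set_robot_state(robot_state)       # internal state, e.g. battery charge
            score = sim.rollout(mapping, horizon)  # penalise simulated sensor hits
            if score > best_score:
                best_mapping, best_score = mapping, score
        return best_mapping

    # The real robot then tries best_mapping once, and whatever actually happens
    # is fed back into world_model so the next round of simulations is more accurate.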

So?

The hypothesis is that consciousness is the bit of our brains that lets us simulate the world to work out what to do. The reason that I have an impression of ‘me’ doing the thinking is that I am a vital part of the simulation. The robot needs to take account of its internal state before making a decision, and so do I. I need to consider:

  • How hungry I am – both whether I should be seeking food and how much energy I have available
  • Any injuries
  • Confidence level
  • Anyone around I need to impress
  • Memories of similar occasions

and so on. Feed all that into a simulation, along with vision, hearing, smell, etc., and try out a few scenarios in our heads. Pick one and try it out. If it works, then :-). If it doesn’t and I survive, then I have learned something which I can use in my simulation next time.

The other way we learn about the world is via evolution – creatures that die young don’t have many offspring. This works where the creature can be pre-programmed at birth to cope with everything it will encounter – e.g. an ant or a mosquito. However, creatures that can learn to change their behaviour have an advantage, provided the learning process doesn’t kill them. Hence there is evolutionary pressure towards consciousness as a better way to plan.

Really?

I think some kind of consciousness is necessary for machines interacting with the real world and trying to learn from it. I also think that this looks similar to what consciousness allows me to do. However, it is my consciousness that is trying to figure all this out from inside the box 🙂

What do you think? Does this make sense?

1 thought on “Engineering Consciousness”

  1. I’ve recently read “The Hidden Spring” by Mark Solms, who has reached a similar conclusion, albeit by a more complex route. In (massively over-simplified) essence, ‘consciousness’ evolved as a means of dealing with uncertainty.

    ‘Self-consciousness’ though is a whole other kettle of robots. Susan Blackmore in “The Meme Machine” makes a good case for the ‘self’ being a not entirely benign parasitic memeplex. Gregg Henriques in “A New Synthesis for Solving the Problem of Psychology” (a much more significant work than that title implies) uses his Unified Theory of Knowledge (UTOK) to thoroughly ground the topic, through a series of emergent “planes of existence”, in pure physics.

    Next on my reading list is “Sentience” by Nicholas Humphrey, chosen after watching his Ri lecture https://youtu.be/9QWaZp_2I1k?si=sX8Yhu8Y1mqgYYO3
