Sunday 1 July 2012

Ioannis Rekleitis - Three Basic Questions in Robotics: New Directions


      Abstract: Three basic questions have generated most of the robotics research interest to date: Where am I? (Localization) What does the world look like? (Mapping) How do I go from A to B? (Path planning)
    I will examine several answers, identifying common themes. Where? and What? concern understanding the world and the robot's place in it: the Simultaneous Localization and Mapping (SLAM) problem. Localization generalizes to knowing about oneself, while mapping generalizes to knowledge representation, touching several fields. Solutions are based on both parametric and sample-based strategies. Path planning is interesting both theoretically and experimentally. I will review analytical solutions and randomized strategies from a historical perspective, together with examples of current systems.
    Until recently, robotics was trying to understand the world. Current and future research is more concerned with changing it. The problem of manipulation and grasping has gained prominence in the last few years. In the past, robots were concerned with moving through the environment, avoiding contact, and constructing models. Today, robots approach objects, use contact, and moderate forces to understand and modify the world.
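
    For the localization question, the "sample-based strategies" mentioned in the abstract typically take the form of a particle filter (Monte Carlo localization): the robot's pose is maintained as a cloud of weighted samples rather than as the single Gaussian of the parametric (Kalman-filter) approach. As a rough illustration only, here is a minimal Python sketch; the corridor map, sensor model, and noise values are invented for the example and do not come from the talk.

```python
import random

# Toy Monte Carlo localization on a circular 1-D corridor.
# 'd' = door, 'w' = wall; one cell per metre. The map, noise levels,
# and sensor model are made up for illustration.
CORRIDOR = "wwdwwwdwwwwdww"
N = 500  # number of particles (pose hypotheses)

# Start fully uncertain: particles spread uniformly along the corridor.
particles = [random.uniform(0, len(CORRIDOR)) for _ in range(N)]

def likelihood(x, observed):
    """Probability of seeing `observed` ('d' or 'w') from position x."""
    expected = CORRIDOR[int(x) % len(CORRIDOR)]
    return 0.9 if expected == observed else 0.1

def step(move, observed):
    """One predict / weight / resample cycle of the particle filter."""
    global particles
    # Predict: apply the motion command plus Gaussian motion noise.
    particles = [(x + move + random.gauss(0, 0.2)) % len(CORRIDOR)
                 for x in particles]
    # Weight: score each hypothesis against the sensor reading.
    weights = [likelihood(x, observed) for x in particles]
    # Resample: keep hypotheses in proportion to their weights.
    particles = random.choices(particles, weights=weights, k=N)

# The robot advances 1 m per step and reports what it senses.
for obs in ["d", "w", "w", "d"]:
    step(1.0, obs)

# Crude point estimate (ignoring wrap-around) from the particle cloud.
print(f"estimated position: {sum(particles) / N:.1f} m")
```

    A parametric solution would instead track a single mean and covariance with a Kalman filter; the sample-based version is preferred when the belief is multimodal, e.g. when several identical-looking doors could explain the same reading.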

    Dupuis, E, L'Archeveque, R, Allard, P, Rekleitis, I & Martin, E (2005) Toward Fully Autonomous Robotics Operation Framework. In: Ayanna Howard and Eddie Tunstel (eds.) Intelligence for Space Robotics. TSI Press, pp 217-234
    Rekleitis, I, Dudek, G & Milios, E (2001) Multi-Robot Collaboration for Robust Exploration. Annals of Mathematics and Artificial Intelligence 31: 7-40 http://www.cim.mcgill.ca/~yiannis/Publications/journal.pdf

Comments invited

10 comments:

  1. I enjoyed Dr. Rekleitis' talk, and although his research seems to center on robots as tools, specifically for environmental monitoring, rather than as a way of understanding ourselves, I appreciated his advancing opinions on the possible functions and evolution of consciousness in relation to this (especially in the end-of-day discussion). I also liked the idea that these simpler robots could be used as stepping stones toward the parts of a fully conscious robot (as he mentioned, he can reproduce, in separate robots, almost all of the capacities Dr. Harnad had listed in his talk as equal to feeling). But even if we put all these 'parts' together, will this robot have consciousness? Would this robot have a feeling of what it is like to be a robot? I guess we would then have a T3 and all the issues/questions about identifying consciousness that go with it. It seems there is no escaping this hard problem!

    Izabo Deschênes

    Replies
    1. I find that many definitions have different meanings for different people. Understanding is a good example. What does it mean to understand something? If I encode a set of rules that fully describe and predict a phenomenon, does the resulting robot understand this phenomenon? How about understanding the rules of traffic? Many conscious people clearly do not understand them; would a driverless car with the rules encoded in it really understand them?
      Interesting questions; I am really enjoying participating in the school.

    2. I really love this example, because when someone is just following the rules, we say that he/she acts like a robot (even if this person has consciousness). I think it really means that they are not flexible with this knowledge. And I guess there is the way "understanding" feels, like grasping that I now make sense of all these rules, which would mean having consciousness (like in comics, when there is a light bulb over the person's head), which robots don't have for now, I guess.

    3. We often say that the best soldiers are those who act like robots because (1) they obey commands and (2) they do not seek to understand the rationale behind those commands. Those who are always questioning decisions would end up morally conflicted. So I was wondering to what extent an army of robots would be an actual solution to these moral conflicts: behaving without necessarily understanding the behaviours being carried out. We're nearer to the "army of robots taking over the world" joke I. Rekleitis made during his talk! :D

  2. Maybe to say that the robot "understands" its environment is a bit misleading. We tend to ascribe mental states to things, but that does not mean these things really are in those states. As John Searle says, there is a gap between first-person intentionality (the intentionality exhibited by humans) and derived intentionality (the kind we attribute to objects and animals).

    Replies
    1. As I said above, a difficult question. However, many humans also seem to have trouble understanding their surroundings.

  3. I wonder how the frame problem as it is studied in philosophy relates to the kind of work that Dr. Rekleitis does in robotics. The philosophers' frame problem (which, from what I understand, is different from the frame problem as it was originally defined in AI) is the problem of determining how an intelligent being must update its beliefs as it interacts in and with its environment. It is a fundamental problem, it is argued, because there is no known algorithm that could determine which beliefs should be updated once the robot entertains complex enough representations of the world in its database. I would be tempted to think that the frame problem does not appear in any guise in Rekleitis' work, because the sensory systems of the robots he showed us only have to carry very specific types of representations of their environment -- such as (in the case of the lake robot) their orientation, their speed, and the location of various rocks below them. The robots did not have to entertain anything like global representations of their environment as an integrated whole. If I am right, then it seems that this type of research is still a long way from answering the three fundamental questions of robotics that Rekleitis raised in his talk. (The lake robot would be completely lost in situations for which its sensory systems were not designed -- for instance, if an earthquake suddenly occurred and the rocks at the bottom of the lake were completely shuffled in less than a few seconds.)

  4. I recall the beginning of Dr. Rekleitis' talk, when he spoke about a robot's ability to 'feel' a pinch, a handshake, a rainbow, etc. Of course, this goes against much of what was implicitly held true at the summer institute, namely that a robot that is 'doing' is not 'feeling'. Discussions on the nature of pain asked the following: is an animal 'feeling' pain, or is an animal ONLY reacting to nociceptive input? Considerations about robots frame the same question differently: to what extent is 'feeling' related to 'sensing', as is unarguably done by all sorts of machinery? We all agree that it is possible to 'do' (in the very restrictive sense of carrying out a particular action in the world) without 'feeling'. But is it possible to 'feel' without sensing -- thus without 'doing'? I may be misunderstanding something here, but I know this last question has been put forth in slightly different form (I believe in the Damasio discussion).

    Replies
    1. FEELING WITHOUT DOING

      Well, feeling a migraine is feeling without doing. But of course there will always be neural doing behind every feeling.

  5. COPIED FROM FACEBOOK

    piereric:

    When the lobster encountered the robot (hmmm, what's this?), did he realize: oh shit... this thing can't be conscious... gtfo!

    Well, likely.
    I think the problem is that he did not sense any reaction in the robot, which appeared inert to his interaction attempt. This means the robot could not even have displayed an expression that could be interpreted as hostile to the lobster or to anybody.
    Indeed, the robot is not wired to interact or to display positive/associative or negative/hostile reactions in any way. It might as well be a rock or a floating branch to the lobster. So why did the lobster become defensive and not pursue his exploratory behavior, like climbing on the robot? Unlike the mouse that climbed on the cat, there is nothing in the lobster's nature or experience/nurture that suggests he should hide or get away defensively. Unless he thinks (like a caveman) and is scared of things he doesn't understand. (?)

    Let's step away from the remote-controlled robot.
    For me, robots might reach apparent "consciousness" when they are able to put to use, in an autonomous fashion, enough affordances in their environment to ensure autonomous functioning. (My reasoning could go further, toward feeling basic physiological and psychological needs and being able to attend to them autonomously (please don't use the baby as a counter-argument).)

    From that point, somebody with a minimally rigorous criterion (say, the Turing test) could say "here, this is conscious".

    Obviously, this is only a partial view of the problem, and it's an attempt at making sense of this thingamabob as an apprentice social psychologist (self-determination...).
    So can someone tell me how the perspective I provided CAN work? Who agrees, who differs, who's doing it wrong?


    Claudia Polevoy:

    At least the lobster had some kind of a reaction (good or bad, whatever!), but the robot just stayed still. It could not react since it had no introspection. What should I do in that kind of situation? Is it good or bad to meet a lobster? It can't tell, since it had not experienced this before and had no memory of a similar situation. I think the lobster had an adaptive reaction (protecting itself) compared to the robot. Or maybe this is only a demonstration of «fight or flight»?!


    Ioannis Rekleitis:

    The robot at the time was teleoperated. The human operator decided to hold still to see what would happen. Sorry for the confusion.

    Pier-Éric Chamberland:

    Hi Mr. Rekleitis. I found your presentation very interesting.
    I did guess that the operator held still, hence the rest of my reasoning. My apologies for stepping out of the "robot" topic and moving to the "animal consciousness" topic. So many questions were asked yesterday that were not answered, so these still linger in my mind.

    Claudia, I still don't understand why the lobster chose fight and/or flight. Maybe the only inert (but previously moving) life forms the lobster had encountered were those killer fish that snap at you faster than you can see?
