Cyborg Astrobiologist Interview & FAQ


This interview was given by Patrick McGuire to David Geer in early May 2005, for his "GeerHead" column in SERVO magazine, which is a monthly magazine for robotics enthusiasts. The questions for this interview are also suitable for a FAQ (Frequently Asked Questions), so we reproduce this interview in its entirety here.

Copyright ©: Patrick McGuire & David Geer 2005.
Last updated: 7 June 2005.


Interview Questions & FAQ

DG1) What is the ultimate goal of the cyborg astrobiologist project, both in scientific and then in layperson's terms?

DG2) I understand that the cyborg astrobiologist is "a wearable computer system for testing computer vision algorithms at geological and astrobiological field-sites here on the Earth". When we think of cyborgs, we think part man, part robot. Please detail what the cyborg astrobiologist is and what parts of it are robotic and how they make the wearer a cyborg?

DG3) How does the cyborg equipment do the work of a human geologist?

DG4) How is the technology related to that used to drive NASA's Mars rovers?

DG5) How does the technology behind the cyborg astrobiologist aim to let a robot think like a geologist?

DG6) Can you please describe the computer headgear?

DG7) How does it operate like a virtual reality headset?

DG8) How is the camera linked to it?

DG9) How is the camera robotic?

DG10) How does it determine an interesting feature?

DG11) How does it analyze the [interest point]? Where is the computer software? What kind is it?

DG12) When and how do you expect to add shape recognition, mineral identification and "keeping track of outcrops" features?

DG13) How many decades yet into the future are things like robot geologists and robot paleontologists?

DG14) How do you use the wearable computer system to "proof computer vision algorithms for future use in cyborg or robotic astrobiological study on other planets/moons"?

DG15) Can you tell me about real-time photo analysis?

DG16) Over what time periods might we see what degrees of autonomy for [robotic] astrobiologists [or robotic geologists]?

DG17) In what equipment might your cyborg astrobiologist show up? Would there be a robotic camera on some piece of space equipment on a planet's surface that would be connected to wirelessly by a human operator wearing the head gear on earth or what would a good practical scenario be?

DG18) Please provide some interesting anecdotes about the development or use of the cyborg astrobiologist whether humorous or otherwise interesting.

(bonus question) 19) I believe that many people would think, when they hear a description of your work, that "I could have done that!". Image-segmentation algorithms are commonly available. What is important about your work? Where's the beef?


Interview & FAQ Answers

DG1) What is the ultimate goal of the cyborg astrobiologist project, both in scientific and then in layperson's terms?

PCM1) The ultimate goal of the Cyborg Astrobiologist (CA) project is to develop software for "scientific autonomy". We expect that this scientific autonomy software will eventually be used for astrobiological and geological exploration by robotic rovers, orbiting satellites, and augmented human astronauts on the surfaces or subsurfaces of other planets or moons in our solar system, such as the Moon, Mars, or Europa.

When a system has scientific autonomy, it has the capability to make scientific decisions for itself:

  • (PCM1a) to decide what is scientifically interesting in a geological scene, and then
  • (PCM1b) to ask questions in order to better understand the scene and its science, and then
  • (PCM1c) to get answers to those questions by whichever means are necessary and available.
In 10 years or more, we envision that we or others can develop a system along the same principles of the Cyborg Astrobiologist which could be tuned or biased to the scientific task at hand. We are dreaming of a system that has enough training and enough of a database that we can tune a few knobs in the software, and "voila": the system is now a "sedimentologist", and capable of studying and understanding the geological layers of sedimentation from aqueous processes. Or a few other knobs and "voila": the system is now a volcanologist, and capable of studying and understanding the formation and structures of igneous rocks. Or a few other knobs and "voila": the system is now an astrobiologist, and capable of studying and understanding the relations between water and mineral deposition or capable of finding signs of tell-tale organic chemicals in a special class of sediments.

    DG2) I understand that the cyborg astrobiologist is "a wearable computer system for testing computer vision algorithms at geological and astrobiological field-sites here on the Earth". When we think of cyborgs, we think part man, part robot. Please detail what the cyborg astrobiologist is and what parts of it are robotic and how they make the wearer a cyborg?

    PCM2) The Cyborg Astrobiologist is indeed part human and part machine. But we have not yet taken (and perhaps never will take) the truly gung-ho step of inserting machine parts into a human body in order to have a one-piece "CYBernetic ORGanism". Rather, we augment a human with a wearable computer and a robotic camera. We can take off the wearable computer and put the robotic camera in its box after the geological mission is over.

    The wearable computer that we use is from ViA Computers in Minnesota, and has a Transmeta Crusoe 667 MHz processor in it, with about 112MB of memory. The Transmeta Crusoe processor is power-saving, so we don't have to carry a lot of heavy batteries. The human operator wears the wearable computer on his/her belt.

    The robotic camera is carried on a lightweight tripod. The robotic camera is connected to the wearable computer by a Firewire/IEEE1394 cable and also by a serial-port cable. The main reason we use a tripod for the camera is so that we can point the camera at a given area-of-interest, without all the motion jitter that we would encounter if the camera was mounted on the human's shoulders or head.

    DG3) How does the cyborg equipment do the work of a human geologist?

    PM3) It doesn't do the work of a human geologist... Yet... We are trying to write computer vision software that will eventually be competitive with the vision capabilities of a human geologist. That way this software could be sent on a robotic rover or robotic worm to Mars in order to search for possible signs of current or fossilized life on the Martian surface or subsurface.

    Or the Cyborg Astrobiologist's computer vision software could one day be sent in the spacesuits of astronauts who might be going to the Moon or to Mars. Thus the smart spacesuits could help an astronaut who may not know all about geology or who may be distracted by other things (like staying alive, or like getting back to the lander for more oxygen or for dinner).

    We are teaching the cyborg to have the eyes of a human geologist.

    DG4) How is the technology related to that used to drive NASA's Mars rovers?

    PM4) The Mars MER rovers (Spirit & Opportunity) have a remarkable ability to drive autonomously for more than 100 meters in one day without crashing into rocks. This uses a technique called Autonomous Hazard Avoidance. This is computer vision software for "engineering autonomy", as opposed to the computer vision software for "scientific autonomy" that we are interested in.

    There is little on-board "scientific autonomy" software on the MER rovers now. Practically all of the scientific decisions are made by a large team of 50-100 scientists and engineers here on the Earth during the Martian night. It is rather amazing that the scientists and engineers here on the Earth can structure a sequence of tens or hundreds of commands for the Mars rovers to follow for the whole next Martian day. The rovers are NOT driven in joystick mode with new commands being sent and received every 5-10 minutes. ALL of the commands are sent to the rover before it wakes up.

    Maybe in 6-12 years, future Mars robots will be exploring Mars with scientific autonomy software on board, with the capability to make scientific decisions by themselves. Maybe these future rovers will be able to do more and better science in a given time period because of their on-board scientific autonomy software. Maybe the scientists and engineers and politicians and taxpayers here on the Earth will have enough confidence in the future Mars robots and their scientific autonomy to let that scientific autonomy be developed and deployed to the degree that is possible and to the degree that is wise.

    Right now, one reason for having so many engineers and scientists controlling the robots on Mars is that these robots are such a valued and costly investment. We want to make sure that we get as many scientific results as we can for the money invested, without the robots getting stuck too often or wasting too much time.

    But as we develop and improve measures for scientific autonomy, perhaps in the future we can augment all of this human decision-making and remove some of the burden from the large team of human operators who have been driving these rovers and doing science for about 15 Earth-months.

    We envision, for example, the rovers being in "search mode" all the time: if the Mars rovers in 10 years detect something sufficiently interesting (another "Bounce rock"? some weird layering in some sediments? some carbonates?), they will automatically start to study the interesting thing.

    This can save 1 or more days of decision-making time on the Earth by the human operators. It can also give the rovers the ability to keep their eyes open for science, and to discover things that they could not discover if their eyes were closed while following the human operator's command to "move to Waypoint X, which is 900 meters away."

    DG5) How does the technology behind the cyborg astrobiologist aim to let a robot think like a geologist?

    PM5) Thus far, we have tested some simple ways to approach items PCM1a, PCM1b, and PCM1c. We have developed software for our Cyborg Astrobiologist system that takes an image (or a mosaic of images) and decides what is interesting in that image, and then acts upon those decisions to find out more information. See:

  • McGuire, Ormö, et al., "The Cyborg Astrobiologist: First Field Experience", International Journal of Astrobiology, vol. 3, issue 3, pp. 189-207 (2004). An open-access preprint version is available.
  • McGuire, Díaz Martínez, et al., "The Cyborg Astrobiologist: Scouting Red Beds for Uncommon Features with Geological Significance", International Journal of Astrobiology, vol. 4, issue 2, pp. 101-113 (2005). An open-access preprint version is available.
  • McGuire, Gomez-Elvira, et al., "Field Geology with a Wearable Computer: First Results of the Cyborg Astrobiologist System", presented at ICINCO'2005 (International Conference on Informatics in Control, Automation and Robotics), Barcelona, Spain, vol. 3, pp. 283-291 (September 2005).
  • http://www.cab.inta.es/~CYBORG/cyborg.summary.html.
    The simplest and perhaps most basic way to look for interesting things in a scene is to look for those areas of a scene that are different than the rest of the scene, those areas that are the most "uncommon" in the scene. We believe that this is the way the vision system works for young children, and perhaps for young geology students.

    Once the Cyborg Astrobiologist has found the interesting area of a scene, it is capable of "asking a question": "if I study the interesting area in more detail, what do I find?". Then it "gets an answer": the Cyborg Astrobiologist can point its camera automatically at the interesting area, and then take a much higher resolution color picture of that area.

    Naturally, our simple approach needs much refinement by us, other scientists & engineers, and perhaps some of your readers, in order to be truly valuable in astrobiological exploration by robotic rovers, orbiting satellites or augmented human astronauts.

    DG6) Can you please describe the computer headgear?

    PM6) The computer headgear is a monocular head-mounted display SV-6 from Tekgear in Virginia. It has native pixel dimensions of 640 by 480 and works well in bright sunlight.

    It is not a see-through device with overlays, but it blocks only 20-35% of the field of view of one of the human operator's eyes. This is important so that the human operator can see any rocks that might cause him or her to trip and fall. The display can also be swiveled out of the human's visual field of view.

    The depth of focus for the SV-6 can be adjusted very easily in the field in order to be at "infinity", so that the human's eyes don't get so tired.

    Sometimes when we want higher resolution or if we want to work together in a group, we turn off the computer, disconnect the SV-6 from the computer, and then connect the indoor-outdoor tablet display to the computer. This is better for sharing the experience and the results between the field team.

    DG7) How does it operate like a virtual reality headset?

    PM7) It is not so much like virtual reality. The user does not have the sense of being in another environment.

    The headset is used to display the MS Windows environment, together with the results from the real-time image analysis. In the field, the human user can hit a few switches in the computer vision program in order to study how the wearable computer came to the conclusions that it did about a particular image of a scene.

    This ability for the human to ask the computer "how?" or "why?" is particularly useful since it allows the human to make adjustments to the system in the field, in order to enhance performance.

    DG8) How is the camera linked to it?

    PM8) The SONY handycam video camera (model DCR-TRV620E-PAL) is linked to the wearable computer by a Firewire/IEEE1394 cable. The Firewire cable talks to the computer through a Firewire PCMCIA card that goes into the appropriate slot in the wearable computer.

    Images are sent from the camera over this Firewire cable into the computer.

    The wearable computer is also linked to the camera by a serial cable, which goes from a mini-serial port on the wearable computer to the controller for the Pan-Tilt Unit of the camera. The Pan-Tilt Unit was acquired from Directed Perception in California, and is model# PTU-46-70W.

    DG9) How is the camera robotic?

    PM9) The camera is robotic because the computer can control the pointing angle of the camera automatically: it can pan and tilt in order to survey a scene and to acquire a set of images of an entire scene.

    This set of images over the entire scene can be combined together mathematically in order to form a single large mosaicked image, which is a composite of all the individual images. We have not yet put the mathematical mosaicking into the system, but we have been able to make "quasi-mosaics", simply by butting the edges of the individual images together. These large quasi-mosaics are used in the field to get a broad general view of the scene.
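
    The quasi-mosaic construction itself is simple. Here is a minimal sketch of the butting-together step, assuming all frames share the same pixel dimensions and were acquired row by row as the pan-tilt unit swept the scene (an illustration in Python/NumPy, not the NEO code running on the wearable computer):

      import numpy as np

      def quasi_mosaic(frames, n_rows, n_cols):
          """Butt individual frames edge-to-edge into one large image.

          `frames` is a list of n_rows * n_cols equally sized HxWx3 arrays,
          ordered row by row as the pan-tilt unit swept the scene. No
          geometric registration is done -- hence "quasi"-mosaic.
          """
          rows = [np.hstack(frames[r * n_cols:(r + 1) * n_cols])
                  for r in range(n_rows)]
          return np.vstack(rows)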

    We hope to make the camera even more robotic in the future by giving the computer the ability to control the magnification level of the zoom lens in the Sony Handycam. We have started this work, and we have acquired an interface board from a supplier in Germany (IRDEO) in order to talk from the computer's serial port to the LANC port of the camera. We started to work on the low-level software for IRDEO control of the zoom lens through the LANC port, but this is an unfinished project, to which one of your readers could contribute.

    Without the automation of the zoom lens, there is only one magnification setting that is repeatably accessible. We cannot repeatedly zoom in and zoom out by some amount, so this kind of limits the level of autonomy that we can put into the system.

    Right now, the easiest way to zoom in is to leave the zoom lens alone and have the human operator carry the system twice as close.

    In the future, with zoom-lens automation, the human can set the camera's tripod at some point in the field, and the computer will automatically control the camera, surveying the scene, and finding the top interest points in the scene at low-magnification. The computer could then command the camera to point at those interest points and zoom in to medium-magnification. At medium-magnification, the system would find a new set of interest points for each of the low-magnification interest points. And then the process would be repeated at high-magnification, resulting in a tree of interest-points from low-magnification to high-magnification.
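
    None of this zoom automation exists yet, but the control flow is easy to sketch. The following Python fragment only illustrates the recursive low-to-high magnification survey described above; the camera object and the find_interest_points() routine are hypothetical placeholders for the real pan-tilt, zoom, and vision code:

      def survey_tree(camera, find_interest_points, direction,
                      zoom_levels=("low", "medium", "high"), level=0):
          """Return a tree of (direction, children) interest points,
          from low magnification down to high magnification."""
          camera.point_at(direction)            # hypothetical pan/tilt call
          camera.set_zoom(zoom_levels[level])   # hypothetical zoom call
          image = camera.survey()               # acquire an image or a mosaic
          children = []
          if level + 1 < len(zoom_levels):
              for p in find_interest_points(image):   # e.g. the top three points
                  children.append(survey_tree(camera, find_interest_points,
                                              p, zoom_levels, level + 1))
          return (direction, children)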

    DG10) How does it determine an interesting feature?

    PM10) We are using a "simple" technique for searching for interest points. Yet at the same time, this simple technique is both robust and elegant. We search for "uncommon" regions of the image.

    Our implementation (a rough code sketch follows this list):

  • 1) We break the RGB color image down into its color components (we choose Hue Saturation and Intensity for these color components -- see any book on image processing).
  • 2) We use a technique called "image segmentation" to cluster similar pixels together into different groups. For example, all the bright pixels would be in one group and all the dark pixels would be in another group. We do this image segmentation separately for Hue, Saturation, and Intensity.
  • 3) We look for those groups that have the smallest number of pixels, and we call those pixels "uncommon".
  • 4) We add together the uncommon maps from Hue, Saturation & Intensity, in order to form an "interest map".
  • 5) We blur the interest map, so that features which are too close together merge into a single broad peak.
  • 6) We look for peaks in the blurred interest map. The top three peaks are called the top 3 "interest points".
    In our most recent field test in Riba de Santiuste, the computer's interest points "agreed" with a human geologist's interest points a little better than 50% of the time. This is pretty good!! It's not perfect. But it's better than 0%.
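
    To make steps 1-6 concrete, here is a minimal Python/NumPy sketch of the whole pipeline. It is not the NEO code that runs on the wearable computer: the segmentation step is replaced by a simple quantization of each channel (a stand-in for the Haralick and Girona segmenters), the uncommon map is approximated by the inverse frequency of each pixel's class, and the blur width and number of classes are arbitrary choices:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def interest_points(rgb, n_classes=8, blur_sigma=15, n_points=3):
          """Return (row, col) coordinates of the top interest points."""
          rgb = rgb.astype(float) / 255.0
          # 1) Decompose the image into rough hue/saturation/intensity channels.
          mx, mn = rgb.max(axis=2), rgb.min(axis=2)
          intensity = rgb.mean(axis=2)
          saturation = (mx - mn) / (mx + 1e-9)
          hue = np.arctan2(np.sqrt(3.0) * (rgb[..., 1] - rgb[..., 2]),
                           2.0 * rgb[..., 0] - rgb[..., 1] - rgb[..., 2])
          interest = np.zeros(intensity.shape)
          for channel in (hue, saturation, intensity):
              # 2) Stand-in "segmentation": quantize the channel into n_classes bins.
              lo, hi = channel.min(), channel.max()
              labels = np.minimum(((channel - lo) / (hi - lo + 1e-9)
                                   * n_classes).astype(int), n_classes - 1)
              # 3) Pixels belonging to small classes are "uncommon".
              counts = np.bincount(labels.ravel(), minlength=n_classes)
              uncommon = 1.0 / counts[labels]
              # 4) Add the per-channel uncommon maps into one interest map.
              interest += uncommon / uncommon.max()
          # 5) Blur the interest map so that nearby features merge into one peak.
          interest = gaussian_filter(interest, sigma=blur_sigma)
          # 6) Greedily pick the top peaks, suppressing a window around each one.
          points, work, half = [], interest.copy(), 3 * blur_sigma
          for _ in range(n_points):
              r, c = np.unravel_index(np.argmax(work), work.shape)
              points.append((r, c))
              work[max(0, r - half):r + half, max(0, c - half):c + half] = -np.inf
          return points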

    We can use this 50% result as a basis for future improvements of the system. Right now the system has "zero" low-level knowledge of what a geologist knows. It does not know the colors of the rocks or their textures or where they tend to be found.

    It does have one piece of high-level geologist knowledge: my geologist collaborators and I decided early on that we needed to implement "image segmentation". Such image segmentation is essential for determining the borders between geological regions (such geological borders are known as "contacts" in geology lingo). Geological borders tend to carry a lot of information that can tell a geologist how the neighboring rocks were formed.

    We speculate that as we improve our current implementation of the Cyborg Astrobiologist's computer vision system (which lacks geologist knowledge), the 50% result can be improved to maybe 60-80%. To get beyond 60-80%, we might need to start adding low-level geologist knowledge.

    DG11) How does it analyze the [interest point]? Where is the computer software? What kind is it?

    PM11) With the current software, after we determine the top three interest points in an image or in a mosaic, the wearable computer is capable of studying each interest point in more detail, by automatically repointing and taking a high-resolution color picture of each of the top three interest points. These high-resolution color images are stored to disk for post-mission analysis.

    This acquisition of high-resolution color images for each of the interest points is supposed to simulate using more advanced instruments (i.e. a thermal spectrometer or Raman spectrometer or tomographic imager) to acquire more information. It is also supposed to simulate the SELECTIVE storage of important information, for telemetry back to Earth. Telemetry bits are a valuable resource, and computer memory and data storage on a Mars rover are valuable and limited resources.

    We currently do not do any further software analysis of the interest points after they are determined and after the high-resolution color images are stored to disk. However, we do use human mobility and human knowledge in order to decide whether or not the Cyborg Astrobiologist should approach the interest point and study it further.
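
    The repointing step is mostly geometry: convert the interest point's offset from the image center into pan and tilt angles, slew the camera, and grab a picture. A rough sketch follows; the field-of-view numbers are assumed values, and ptu_move() and grab_high_res() are hypothetical stand-ins for the serial commands to the PTU-46-70W and the Firewire image capture:

      H_FOV_DEG = 45.0   # assumed horizontal field of view at survey zoom
      V_FOV_DEG = 34.0   # assumed vertical field of view at survey zoom

      def repoint_and_capture(points, image_shape, ptu_move, grab_high_res):
          """Center the camera on each (row, col) interest point and grab an image."""
          height, width = image_shape[:2]
          snapshots = []
          for row, col in points:
              # Offset of the interest point from the image center, in degrees.
              pan = (col - width / 2.0) / width * H_FOV_DEG
              tilt = -(row - height / 2.0) / height * V_FOV_DEG
              ptu_move(pan, tilt)            # relative move toward the point
              snapshots.append(grab_high_res())
          return snapshots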

    The computer software is inside the wearable computer. It is not at some control station far away.

    The computer software is code that I have written using a graphical programming language called "NEO", which was developed by Helge Ritter, Joerg Ontrup, Markus Oesker, Robert Haschke, Joerg Walter, and a number of other colleagues at the University of Bielefeld in Germany. I have also become a developer of the source code for this NEO graphical programming language. I have chosen to become a NEO developer because:

  • 1) I like the NEO graphical programming language -- it's the best thing I have ever seen;
  • 2) I want to make the NEO graphical programming language even better, so that I can program the Cyborg Astrobiologist even better and faster;
  • 3) I like the people in the Ritter group in Bielefeld.

    The computer software contains:

  • 1) image-segmentation routines that I wrote, which are based upon Haralick co-occurrence 2D histogram analysis (a short sketch of a co-occurrence histogram follows this list). This is a classic image-segmentation technique developed by Haralick in the early 1970s. I am now working with computer vision experts at the University of Girona in Catalonia (NE Spain), Jordi Freixenet et al., in order to test image-segmentation subroutines and techniques for inclusion in the Cyborg Astrobiologist. The image-segmentation techniques from Girona are capable of full-color segmentation, texture-based segmentation, and simultaneous full-color & texture-based segmentation.
  • 2) uncommon-map routines;
  • 3) robotic-camera control routines; quasi-mosaic image combination;
  • 4) code adapted from Microsoft DirectX for acquiring the images from the Firewire port.
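
    For readers who have not met Haralick's technique, the core data structure is a 2D co-occurrence histogram: a count of how often a pixel with one quantized value sits next to a pixel with another quantized value. A minimal NumPy sketch of that histogram (only the first step of the full segmentation, which then clusters the histogram to define pixel classes) is:

      import numpy as np

      def cooccurrence_histogram(channel, n_levels=32, offset=(0, 1)):
          """2D co-occurrence histogram of one image channel.

          Counts how often a pixel with quantized value i has a neighbour,
          at the given (row, column) offset, with quantized value j. The
          clustering of this histogram into pixel classes is not shown.
          """
          scale = float(max(channel.max(), 1))
          q = np.minimum((channel.astype(float) / scale * n_levels).astype(int),
                         n_levels - 1)
          dr, dc = offset
          a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()
          b = q[dr:, dc:].ravel()
          hist = np.zeros((n_levels, n_levels), dtype=int)
          np.add.at(hist, (a, b), 1)
          return hist
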
    DG12) When and how do you expect to add shape recognition, mineral identification and "keeping track of outcrops" features?

    PM12) Shape recognition is not a priority right now. For the broad panoramic views that we obtain with the video camera, color analysis and texture analysis are more important. We are not trying to recognize fossil bones or life right now, which have particular shapes. We are only trying to recognize when something is different than its surroundings. Shape may be important for this in the long run, but we need to walk before we can fly. So we are concentrating on color and texture analysis.

    Maybe after we start using our new field microscope with the wearable computer, we will find that at a microscopic scale, the analysis of shapes will become more important than it has been at the macroscopic scale. One way to do shape analysis is by "template matching", based upon templates that have been memorized by a computer from laboratory imagery.

    We will need a spectrometer in order to do mineral identification. We don't have one right now. And even if we did have a spectrometer right now, we have our hands full with the current project of looking for things with three-band (RGB) color imagery that are "different". There are several other groups in the world that have been working on mineral identification based upon spectra (i.e. NASA/Ames, Washington University in St. Louis, Brown University, and a number of others). They apply such techniques as "Spectral-Angle Matching (SAM)" in order to find a mineral sample's closest matching mineral in large databases of mineral spectra that have been acquired in the laboratory.
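
    Spectral-Angle Matching itself is a simple idea: treat each spectrum as a vector and score a candidate mineral by the angle between the measured vector and a library vector, which makes the comparison insensitive to overall brightness. A small sketch (the library dictionary here is purely illustrative):

      import numpy as np

      def spectral_angle(measured, reference):
          """Spectral angle (radians) between two spectra of equal length."""
          m = np.asarray(measured, dtype=float)
          r = np.asarray(reference, dtype=float)
          cos = np.dot(m, r) / (np.linalg.norm(m) * np.linalg.norm(r))
          return float(np.arccos(np.clip(cos, -1.0, 1.0)))

      def best_match(measured, library):
          """Return the name of the library spectrum with the smallest angle."""
          return min(library, key=lambda name: spectral_angle(measured, library[name]))

      # e.g. best_match(sample_spectrum, {"hematite": hem_spec, "gypsum": gyp_spec})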

    The feature "keeping track of outcrops" that you ask about can refer to:

  • 1) "natural scene understanding" of a single outcrop, using computer reasoning to understand the relationships between different areas of a segmented image:
  • 2) "developing a temporal memory" for the Cyborg Astrobiologist, so that it can remember the characteristics of the images it has seen already and of the different types of regions it has seen already in its images.
  • I hope to develop item#2 this summer, and to do field tests of this "temporal memory" technique this summer. Right now the Cyborg Astrobiologist does not have such a temporal memory. It just analyzes each image for "spatially" uncommon areas of that image, without any memory of the things it has seen before. This summer, I want to give the Cyborg Astrobiologist a memory of color and texture. This will enhance its capabilities considerably.
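
    Nothing like this temporal memory exists in the system yet, so the following is only a sketch of one way it could work, assuming each segmented region gets summarized by a small feature vector (for example, its mean hue, saturation, intensity and a texture statistic). A region would be "temporally uncommon" if it is far from every region type remembered from previous images:

      import numpy as np

      class TemporalMemory:
          """Sketch of a temporal memory of region types (not yet in the system)."""

          def __init__(self, novelty_threshold=0.25):
              self.prototypes = []      # feature vectors of region types seen so far
              self.threshold = novelty_threshold

          def novelty(self, features):
              """Distance to the nearest remembered region type (large = novel)."""
              if not self.prototypes:
                  return float("inf")
              f = np.asarray(features, dtype=float)
              return min(np.linalg.norm(f - p) for p in self.prototypes)

          def observe(self, features):
              """Score a region's novelty, then remember it for future images."""
              score = self.novelty(features)
              if score > self.threshold:
                  self.prototypes.append(np.asarray(features, dtype=float))
              return score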

    DG13) How many decades yet into the future are things like robot geologists and robot paleontologists?

    PM13) If I have anything to say about it, 1-1.5 decades. We are talking about robotic geologists on the Moon or Mars that are capable of making scientific decisions for themselves. Carnegie Mellon and NASA/Ames have demonstrated the robots NOMAD and "Zoe", this year and in recent years, in Antarctica and in the Atacama desert in the Chilean Andes. They used NOMAD to autonomously search for and find meteorites on the surface of the ice of Antarctica. From what I have read so far, they used Zoe as a remote-control robot in the Atacama desert to squirt fluorescent dyes at opportune places in the desert sands; with Zoe's cameras they then judged whether the dyed areas contained DNA or proteins or other cellular life signals. Some of the dyed areas did have such signals of life!

    A robotic paleontologist will probably be limited, in the next decades, to helping human paleontologists in their searches for macrofossils in the deserts here on the Earth. There are probably not too many macrofossils on the Moon or on Mars. The most useful things for augmenting desert paleontology are:

  • 1) finding a good place to dig for a big fossil. I believe that paleontologists currently use subsurface radar imaging techniques to find a good place. The scanning process for covering many square kilometers of barren desert could be automated by a robotic vehicle with on-board radar. The subtle analysis of the radar-imagery could be better done by writing good software for "scientific autonomy". This software could be deployed in real-time, in the robotic paleontologist, in order to better home in on dinosaur or human fossils.
  • 2) doing the digging more efficiently without harming the fossil. Developing a robotic fossil-digger with "surgeon-like" hands for doing the digging would really be welcomed by human paleontologists. Some techniques that are currently being used for doing robot-augmented surgery in hospitals could be adapted for robotic fossil digging.
    When it comes to microfossils, maybe there are some on Mars. It is possible that there once was bacterial life on Mars. With NASA/Ames (Carol Stoker is the PI) and with my research institute in Spain (the Centro de Astrobiologia outside of Madrid), we are developing a robotic "practice" mission called "MARTE" (Mars-Analog Research & Technology Experiment). "MARTE" also means "Mars" in Spanish. The last of the field tests at the Rio Tinto in Andalucia (SW Spain) will be held this September (2005). In September, we will deploy a robotic system at the Rio Tinto that will drill a hole 5-20 meters in depth in the rock near the river. NASA/Ames and Oklahoma University and other institutes have developed a robotic system that will autonomously analyze the rocky cores that are obtained from the bore hole. The Centro de Astrobiologia has developed a "robotic worm" (Borehole Inspection System) that will go down into the hole and study the borehole's walls with cameras, a Raman Spectrometer, and a magnetic susceptibility probe.

    Yesterday (May 4, 2005), I delivered software to our team for the robotic worm. This software is the same software that I described above for the Cyborg Astrobiologist. We are doing a "brain transplant" from the Cyborg Astrobiologist's wearable computer to the robotic worm's brain. Right now this software uses image segmentation and uncommon mapping to look for regions of the borehole that are "different", and the interest points will be used by the robotic worm for some limited "scientific autonomy". The software I moved from the Cyborg Astrobiologist to the robotic MARTE worm will use imagery to automatically decide where to point the Raman spectrometer as the robot goes down the borehole.

    DG14) How do you use the wearable computer system to "proof computer vision algorithms for future use in cyborg or robotic astrobiological study on other planets/moons"?

    PM14) Together with my geologist collaborators Enrique Diaz-Martinez (Instituto Geológico y Minero de España, IGME) and Jens Ormo (Centro de Astrobiologia), I have deployed the Cyborg Astrobiologist at two different field sites in central Spain, one at some gypsum-bearing cliffs near a suburb of Madrid (Rivas Vaciamadrid), and the other in some red sandstone formations in the northern part of the province of Guadalajara (Riba de Santiuste).

    At the first site, the Cyborg Astrobiologist's computer vision system (using image segmentation and uncommon maps) autonomously found two dark spots in the predominantly white-colored gypsum cliffs to be interesting. These dark spots turned out to be water leaking out of the cliff due to some rainstorms in previous months. Water is of high interest to astrobiologists, so we were pleased with the performance of the Cyborg Astrobiologist.

    At the second site, the Cyborg Astrobiologist's computer vision system found some grapefruit-sized white spots and some dark-red pea-sized nodules to be interesting in the predominantly red-colored sandstone. These white spots were areas where the red stains of the rusty iron impurities (hematite) in the sand had been removed. This removal was probably by chemical reduction of the oxidized iron, which makes the iron mobile and hence removable. It is possible that the chemical reductant was of biological origin. The dark-red pea-sized nodules are where the iron that was removed from the white spots had been concentrated and reoxidized. These processes may be similar to what formed the hematite blueberries on the plain of Terra Meridiani, where MER Opportunity is exploring on Mars right now.

    DG15) Can you tell me about real-time photo analysis?

    PM15) Real-time image analysis takes images from a camera and analyzes them inside the camera automatically in a short time. Such a camera is called a "smart camera", and it often has a CPU inside of the camera, together with a computer operating system and image processing software. The same thing can be done by having the CPU outside the camera but directly connecting to the camera, like we have done with the wearable computer.
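
    In the Cyborg Astrobiologist, the frames actually arrive over Firewire through DirectX code on the wearable computer. Purely as an illustration of what a real-time analysis loop looks like, here is a sketch using OpenCV's generic capture interface (not the actual system code; analyze() stands for any per-frame routine, such as the interest-map pipeline sketched earlier):

      import cv2  # OpenCV, used here only as a generic frame-grabbing stand-in

      def realtime_loop(analyze, device=0):
          """Grab frames from a camera and analyze each one as it arrives."""
          cap = cv2.VideoCapture(device)
          try:
              while True:
                  ok, frame = cap.read()
                  if not ok:
                      break
                  result = analyze(frame)    # the real-time image-analysis step
                  print(result)
          finally:
              cap.release()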

    DG16) Over what time periods might we see what degrees of autonomy for [robotic] astrobiologists [or robotic geologists]?

    PM16) Now, the MER robots Spirit & Opportunity have significant "engineering autonomy", in that the human controllers on the Earth can tell the robot to go to some point 100-200 meters away, and the robot can go there. The MER robot can get to within 3-10 meters of that human-chosen interest point without detailed instructions from the humans, and without getting stuck too often. It uses "hazard avoidance" software to do this.

    By 2009-2011, the robotic Mars Science Laboratory will have even better "engineering autonomy", perhaps going 500-2000 meters or further without human instructions and without driving over a cliff. The Mars Science Laboratory should also have much better software on Earth-based computers, which will offload much of the heavy burden that the Earth-based human controllers now carry when it comes to the daily grind of planning rover operations for the next day.

    By 2012-2025, I hope that some of my software techniques for scientific autonomy will be incorporated into future robotic and cyborg astrobiologists that will be exploring the Moon and Mars. This could include a robotic worm robot that could be part of a future drilling mission to Mars.

    By 2025-2040, I would expect that humanity will have autonomous submarine robots exploring the oceans underneath the ice of Europa (one of Jupiter's four large moons), searching for signs of complex lifeforms feeding off of volcanic vents at the bottom of the Europan ocean and in the cracks of the ice at the top of the Europan ocean.

    DG17) In what equipment might your cyborg astrobiologist show up? Would there be a robotic camera on some piece of space equipment on a planet's surface that would be connected to wirelessly by a human operator wearing the head gear on earth or what would a good practical scenario be?

    PM17) I am thinking more about robotic autonomy now than I am thinking about human telepresence. Michael McGreevy at NASA/Ames is the authority about human telepresence for planetary exploration. I think that with the time delays between here and Mars (5 minutes or more), human telepresence would get pretty dull pretty fast. Human telepresence might be useful for exploring the Moon, where the time delay to the Earth is only about 1 second. Human telepresence could additionally be used on the Moon by astronauts working in the relative comfort of a lunar base, who could be controlling field-exploration robots by telepresence with head gear like you suggest. Exploring the Moon could be useful for astrobiology, since there might be some meteorites from the early Earth on the surface of the Moon. These meteorites from the early Earth could tell us more about how life arose here on the Earth; much of this information has been erased from the Earth by the active geology of the Earth and by weathering processes.

    Nothing is scheduled or planned yet beyond the MARTE drilling mission this fall. However, my software for the Cyborg Astrobiologist could later end up in the brain of a robotic worm which could be exploring the subsurface of Mars in the next 10-15 years, as part of a Mars drilling mission. The software could end up in some rovers or aerobots that may be exploring Mars in the same time frame. On a shorter time frame (1-7 years), techniques based partly upon the Cyborg Astrobiologist software could end up in the data-analysis routines used by a number of human researchers to study the geology and astrobiology of Mars with some of the upcoming missions, such as the imagers and spectrometers on: Mars Reconnaissance Orbiter (planned 2006 orbit) and the PHOENIX lander (planned 2007 launch), ...

    DG18) Please provide some interesting anecdotes about the development or use of the cyborg astrobiologist whether humorous or otherwise interesting.

    PM18) The anecdote that comes to mind is:

    INVENTION OF THE CYBORG ASTROBIOLOGIST: As I was being hired by the Centro de Astrobiologia, away from Helge Ritter's robotics laboratory at the University of Bielefeld, the robotics group leader at the Centro de Astrobiologia (Javier Gomez-Elvira) suggested to me that I could concentrate on developing computer vision techniques for scientific autonomy for geology and astrobiology. He suggested that I work on developing ways to determine what is interesting in a geological scene. I thought this was a good project for my main research project at the Centro de Astrobiologia.

    Soon afterwards, as I was leaving Germany, I visited my former home in Tucson, Arizona, in order to pack up the last of my things in Arizona prior to my longer-term move to Spain. During that January 2002 trip, I visited an astrobiology expert, Jonathan Lunine, at the Lunar & Planetary Laboratory at the University of Arizona. Jonathan and I were talking about my new project in Spain on computer vision and scientific autonomy, and he was telling me about the history of lunar exploration, about the big debate as to whether the astronauts in the Apollo missions to the Moon should be geologists who received astronaut training, or whether they should be astronauts with geologist training. Jonathan and I were also looking out his office window at the palm trees and cars and mountain scenery and other buildings, and trying to decide what was interesting visually, and "why it was interesting".

    Right after my appointment with Jonathan, as I was driving back to my friend's house to get my sports shoes for a Friday afternoon campus ultimate frisbee match with some former colleagues and friends, I had a "Eureka!" moment, and decided to use a wearable computer to develop the computer vision algorithms for scientific autonomy for astrobiology.

    That way, with the wearable computer, I could keep doing robotics-like "hardware" things like I had been doing in Germany, but I would not need to wait for a robot to get finished before I could start the computer vision work. I would not be restricted to analyzing image data on a computer from robotic systems that I could not touch myself.

    Furthermore, I could work with geologists in the field, and do something "interdisciplinary", which is one of the important things about Astrobiology and about the interdisciplinary Centro de Astrobiologia where I was to start working in March 2002.

    I came up with a name soon afterwards: "the Cyborg Astrobiologist". This name was to emphasize the part robot, part human aspect of the astrobiologist exploration system that I have spent the last 3 years developing. Thus, the Cyborg Astrobiologist was born...

    (bonus question) 19) I believe that many people would think, when they hear a description of your work, that "I could have done that!". Image-segmentation algorithms are commonly available. What is important about your work? Where's the beef?

    initial response:
    I was able to do this image segmentation on a wearable computer and I was able to work with geologists to gain insight from them. I was able to demonstrate that I can develop a simple algorithm which is useful for geological exploration. This demonstration opens the door to more advanced algorithms.

    better response:
    "I could have done that":
    Yes, I agree with you that many people may think that. My algorithm, which combines image segmentation with uncommon mapping, is rather simple. I also believe it is elegant and robust. I had not found anything similar in the literature prior to my publication of these results last year (Int'l J. Astrobiology). This system for determining interest points agrees with a human geologist's judgement more than 50% of the time!! More complex algorithms may not have been so robust.

    It is possible that many people could have done this, but as far as I know they have not. I spent 1-2 years getting this working, and trying it in the field. I don't think other people have put the effort into something like this in order to demonstrate success.

    I have enough experience to know that it takes some wisdom to keep a system simple if you want success & robustness. Maybe a large fraction of other people do not have this wisdom or experience. The system worked at two different field sites with different types of imagery, without a single change in the algorithm and without any change in parameters. I would think that only a limited number of other people could have developed such a robust system.

    "Where's the beef?" (summary):
    Systems Integration & Platform for Advanced Future Work (see answer1),
    Demonstrated Success in Solving the Problem Given to us (see answer2), and
    Invention(?) of the Uncommon Map (see answer3).

    "Where's the beef?" (answer #1):
    Systems Integration & Platform for Advanced Future Work
    Thus far, this work has been more of a systems integration of several different components: image segmentation, wearable computing, geologists' feedback, uncommon mapping, image mosaicking... So the beef thus far is the systems integration of these components, not any one component. Systems integration is an underappreciated talent.

    We want to put state-of-the-art image segmentation into the system this year, in collaboration with the Girona computer vision people. The image segmentation that I have thus far implemented for the wearable computer is the classic Haralick image segmentation technique from 30 years ago. The addition of Girona's image-segmentation knowledge should beef up the system from an algorithmic point of view.

    I also plan to give the system a temporal memory, looking for things that are uncommon not only in space in one image, but also in time. The image segmentation technique serves as a basis for geological understanding. With only the segmentation and the spatial uncommon maps, we have taken a decisive first step towards natural scene understanding. Further enhancements can come through progressive development and deployment of the following items:

  • a) the temporal uncommon memory,
  • b) other interest features (to increase the number of true positives),
  • c) filters based on geologist expert knowledge (to filter out false positives), and
  • d) a higher-order natural-scene understanding inference engine.

    I am not aware of any image segmentation algorithms that are being used autonomously by rovers or orbiters. There may be some systems that autonomously use segmentation. In early May 2005, I completed a software delivery: a transplant of the image segmentation and uncommon mapping from the wearable computer to the robotic worm "borehole inspection system" that will be deployed in the Rio Tinto simulated Mars mission (MARTE) later this year. I believe that this will be successful, and that this will demonstrate some of the benefits of science autonomy.

    "Where's the beef?" (answer #2):
    Demonstrated Success in Solving the Problem Given to us
    My boss, Javier Gomez-Elvira, suggested to me prior to my arrival at the Centro de Astrobiologia in March 2002, that a good project for me to work on was to develop computer vision techniques to find scientifically interesting things in an image, which could be used for autonomous guidance of a robotic rover on Mars.

    One way to do this is to develop a huge database of the spectra and colors and textures and shapes of all the interesting geological and astrobiological structures that one might expect to see on Mars, and then program the computer to keep an eye out for all of those structures. This task is daunting, but some groups around the world have begun such an approach (i.e., the Geologist's Field Assistant project at NASA/Ames). Furthermore, this approach can be biased, because it depends on what the designer puts into the database. Therefore, we chose a simpler, less biased approach (see also answer #3), and this simpler approach has already yielded promising results...

    I developed a simple computer vision system using image segmentation and uncommon mapping, which reports three interest points per image. At one geological field site, this computer vision system matched a human geologist's judgement as to the interesting areas of the image (true positives) for about TP=70% of all the positives. The number of false positives was at about the FP=30% level. The number of false negatives corresponded to about FN=30% of the total positive rate. The true-positive rate was therefore about TP/(TP+FP+FN)=54%, compared with a combined false rate of (FP+FN)/(TP+FP+FN)=46%. (It is not straightforward to estimate TN, the true negatives.) The system performed approximately as well at this field site as it did at the prior field site, without tuning of the parameters of the software and without learning.
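
    Spelling out the arithmetic behind those percentages (the numbers are the fractions quoted above, not raw counts of interest points):

      TP, FP, FN = 0.70, 0.30, 0.30                 # fractions quoted above
      true_positive_rate = TP / (TP + FP + FN)      # 0.70 / 1.30 = 0.538... ~ 54%
      false_rate = (FP + FN) / (TP + FP + FN)       # 0.60 / 1.30 = 0.461... ~ 46%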

    All of this was done with very little a priori geologist knowledge. The only a priori geologist knowledge was 'very high level' -- through discussion with geologists we decided that the first thing we should develop was image segmentation, so that we can determine the different regions in the image, and the edges or 'geological contacts' between those regions. There was no low-level geologist knowledge in the system, like colors or spectra or textures or shapes of specific 'rocks or outcrops or geological formations'.

    With low-level 'a priori' geologist knowledge, perhaps we can improve upon the 54% rate. With a better image segmentation algorithm, we can also improve upon the 54% rate. With more advanced image analysis ('natural scene understanding'), we can improve upon the 54% true positive rate. These are 3 of the different ways we can improve our system. Given the lack of a priori geologist information, it is VERY encouraging that we can achieve better than 50% true positives. This is much better than 0%. But it sure would be nice to get up to 80-100%.

    Probably 90-100% of the interest points as determined by the computer vision system at our two field sites were 'uncommon' areas of the images, and hence interesting from a computer vision point of view. It's just that geologists apply further filtering to search for truly interesting things from a geological point of view. Perhaps with the three techniques suggested above, we can get our computer vision system to match a geologist's vision system. Perhaps we can even tune the computer vision system towards the biases of a particular geologist, that is to say, tune a few knobs in the computer vision system to make it a sedimentologist or a volcanologist or even an astrobiologist.

    "Where's the beef?" (answer #3):
    Invention(?) of the Uncommon Map
    excerpt from: McGuire, Díaz Martínez, et al (2005).

    "With human vision, a geologist, in an unbiased approach to an outcrop (or scene):

  • Firstly, tends to pay attention to those areas of a scene which are most unlike the other areas of the scene; and then,
  • Secondly, attempts to find the relation between the different areas of the scene, in order to understand the geological history of the outcrop. (This concept can be compared to regular geological base-mapping.)

    The first step in this prototypical thought process of a geologist was our motivation for inventing the concept of uncommon maps. See McGuire, Ormö, et al. (2004) for an introduction to the concept of an uncommon map, and our implementation of it. We have not yet attempted to solve the second step in this prototypical thought process of a geologist, but it is evident from the formulation of the second step that human geologists do not immediately ignore the common areas of the scene. Instead, human geologists catalog the common areas and put them in the back of their minds for "higher-level analysis of the scene", or in other words, for determining explanations for the relations of the uncommon areas of the scene with the common areas of the scene. The concept of an "uncommon map" is our invention, though it indubitably has been independently invented by other authors, since it is somewhat useful.

    [Note in proofs: News reports in 2005 (i.e., "Chemical Guidebook May Help Mars Rover Track Extraterrestrial Life", http://www.sciencedaily.com/releases/2005/05/050504180149.htm) brought the work at Idaho National Laboratory to our attention, in which the Idaho researchers use a mass spectrometer in raster mode on a sample, in order to make an "image", within which they search for uncommon areas. They also do higher-level fuzzy-logic processing with a Spectral IDentification Inference Engine (SIDIE) of these "hyperspectral images" of mass spectra. They have capabilities to blast more deeply into their samples, autonomously, if their inference engine suggests that it would be useful. See Scott, McJunkin, & Tremblay (Journal of the Association for Laboratory Automation, 2003) and Scott & Tremblay (Review of Scientific Instruments, 2002) for the status of their system as of a couple of years ago.]

    In our implementation, the uncommon map algorithm takes the top 8 pixel classes determined by the image segmentation algorithm, and ranks each pixel class according to how many pixels there are in each class. The pixels in the pixel class with the greatest number of pixel members are numerically labelled as "common", and the pixels in the pixel class with the least number of pixel members are numerically labelled as "uncommon". The "uncommonness" hence ranges from 1 for a common pixel to 8 for an uncommon pixel, and we can therefore construct an uncommon map given any image segmentation map. Rare pixels that belong to a pixel class of 9 or greater are usually noise pixels in our tests thus far, and are currently ignored."
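
    That ranking rule translates almost directly into code. Here is a NumPy sketch of constructing an uncommon map from a segmentation label image (an illustration only; the deployed code is written in NEO):

      import numpy as np

      def uncommon_map(labels, n_classes=8):
          """Uncommonness map from a segmentation label image.

          The largest of the top `n_classes` pixel classes gets uncommonness 1,
          the smallest gets uncommonness `n_classes`; pixels in any rarer class
          (class rank 9 or greater) are treated as noise and left at 0.
          """
          classes, counts = np.unique(labels, return_counts=True)
          order = np.argsort(counts)[::-1]              # biggest class first
          kept = classes[order][:n_classes]
          out = np.zeros(labels.shape, dtype=int)
          for rank, c in enumerate(kept, start=1):      # 1 = most common ... 8 = least
              out[labels == c] = rank
          return out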