2018
Zelinsky A, 'Welcome Message from the General Chair', Brisbane, QLD (2018)

2014
Heinzmann J, Zelinsky A, 'Automotive safety solutions through technology and human-factors innovation', Springer Tracts in Advanced Robotics (2014)
Advanced Driver Assistance Systems (ADAS) provide warnings and in some cases autonomous actions to increase driver and passenger safety by combining sensor technologies and situation awareness. In the last 10 years ADAS have progressed from prototype demonstrators to full product deployment in motor vehicles. Early ADAS examples include Lane Departure Warning (LDW) and Forward Collision Warning (FCW) systems, which were developed to warn drivers of potentially dangerous situations. More recently, driver inattention systems have made their debut. These systems tackle one of the major causes of fatalities on roads: drowsiness and distraction. This paper describes DSS, a driver inattention warning system developed by Seeing Machines for commercial applications, with an initial focus on heavy vehicle fleet management applications. A case study reporting a year-long real-world deployment of DSS is presented. The study showed the effectiveness of the DSS technology in mitigating driver inattention in a sustained manner.

2012
Victor T, Blomberg O, Zelinsky A, 'Automating the Measurement of Driver Visual Behaviour Using Passive Stereo Vision', Vision in Vehicles IX. Proceedings of the 9th International Conference on Vision in Vehicles, Brisbane, Queensland (2012)

2009
Dankers A, Barnes N, Bischof WF, Zelinsky A, 'Humanoid Vision Resembles Primate Archetype', EXPERIMENTAL ROBOTICS, Athens, GREECE (2009)

2009
Zelinsky A, 'Dependable autonomous systems', Zhuhai, China (2009)

2008
Fletcher L, Zelinsky A, 'Context sensitive driver assistance based on gaze - Road scene correlation', EXPERIMENTAL ROBOTICS, Rio de Janeiro, BRAZIL (2008)

2007
Dankers A, Barnes N, Zelinsky A, 'A Reactive Vision System: Active-Dynamic Saliency', Proceedings of the 5th International Conference on Computer Vision Systems (ICVS 2007), Bielefeld, DE (2007)

2007
Zelinsky A, 'Message from international advisory committee chair', ISCIT 2007 - 2007 International Symposium on Communications and Information Technologies Proceedings (2007)

2007
Fletcher L, Zelinsky A, 'Driver state monitoring to mitigate distraction', Distracted Driving. International Conference on Distracted Driving, Sydney, Australia (2007)

2006
Dankers A, Barnes N, Zelinsky A, 'Bimodal active stereo vision', FIELD AND SERVICE ROBOTICS, Port Douglas, AUSTRALIA (2006)

2006
Petersson L, Fletcher L, Zelinsky A, Barnes N, Arnell F, 'Towards safer roads by integration of road scene monitoring and vehicle control', INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, Mt Fuji, JAPAN (2006)

2005
Atienza R, Zelinsky A, 'Intuitive interface through active 3D gaze tracking', Proceedings of the 2005 International Conference on Active Media Technology, AMT 2005 (2005)
Our interaction with machines is always severely constrained by unnatural interfaces such as the mouse, keyboard and joystick. Such interfaces make it difficult for us to convey our ideas in a form that computers can understand so that they can perform our intended tasks. In this research, our aim is to build systems that use natural human actions as interfaces. In particular we exploit gaze to track a person's focus of attention. We present an active gaze tracking system that enables a user to instruct a robot arm to pick up and hand over an object placed arbitrarily in 3D space. Our system determines the precise 3D position of an object of unknown size, shape and color by following the person's steady gaze.

2005
Fletcher L, Petersson L, Barnes N, Austin D, Zelinsky A, 'A sign reading driver assistance system using eye gaze', 2005 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-4, Barcelona, SPAIN (2005)

2005
Fletcher L, Petersson L, Zelinsky A, 'Road scene monotony detection in a Fatigue Management Driver Assistance System', 2005 IEEE INTELLIGENT VEHICLES SYMPOSIUM PROCEEDINGS, Las Vegas, NV (2005)

2005
Dankers A, Barnes N, Zelinsky A, 'Active vision for road scene awareness', 2005 IEEE Intelligent Vehicles Symposium Proceedings, Las Vegas, NV (2005)

2005
Petersson L, Fletcher L, Zelinsky A, 'A framework for driver-in-the-loop driver assistance systems', 2005 IEEE Intelligent Transportation Systems Conference (ITSC), Vienna, AUSTRIA (2005)

2005
Fletcher L, Loy G, Barnes N, Zelinsky A, 'Correlating driver gaze with the road scene for driver assistance systems', ROBOTICS AND AUTONOMOUS SYSTEMS, Sendai, JAPAN (2005)

2004
Barnes N, Zelinsky A, 'Real-time radial symmetry for speed sign detection', 2004 IEEE INTELLIGENT VEHICLES SYMPOSIUM, Parma, ITALY (2004)

2004
Fletcher L, Zelinsky A, 'Super-resolving Signs for Classification', Canberra, ACT (2004)

2004
Petersson L, Fletcher L, Barnes N, Zelinsky A, 'An interactive driver assistance system monitoring the scene in and out of the vehicle', 2004 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-5, PROCEEDINGS, New Orleans, LA (2004)

2004
Grubb G, Zelinsky A, Nilsson L, Rilbe M, '3D vision sensing for improved pedestrian safety', 2004 IEEE INTELLIGENT VEHICLES SYMPOSIUM, Parma, ITALY (2004)

2004
Matuszyk L, Zelinsky A, Nilsson L, Rilbe M, 'Stereo panoramic vision for monitoring vehicle blind-spots', 2004 IEEE INTELLIGENT VEHICLES SYMPOSIUM, Parma, ITALY (2004)

2004
Apostoloff N, Zelinsky A, 'Vision in and out of vehicles: Integrated driver and road scene monitoring', INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, SANT ANGELO, ITALY (2004)

2004
Dankers A, Zelinsky A, 'CeDAR: A real-world vision system - Mechanism, control and visual processing', MACHINE VISION AND APPLICATIONS, British Machine Vis Assoc, Cambridge, ENGLAND (2004)

2004
Dankers A, Barnes N, Zelinsky A, 'Active vision-rectification and depth mapping', ACRA 2004. Australasian Conference on Robotics and Automation 2004, Canberra, Australia (2004)

2003
Atienza R, Zelinsky A, 'Interactive skills using active gaze tracking', ICMI'03: Fifth International Conference on Multimodal Interfaces (2003)
We have incorporated interactive skills into an active gaze tracking system. Our active gaze tracking system can identify an object in a cluttered scene that a person is looking at. By following the user's 3-D gaze direction together with a zero-disparity filter, we can determine the object's position. Our active vision system also directs attention to a user by tracking anything with both motion and skin color. A particle filter fuses skin color with motion from optical flow to locate a hand or a face in an image. The active vision system then uses stereo camera geometry, Kalman filtering, and position and velocity controllers to track the feature in real time. These skills are integrated so that they cooperate with each other to track the user's face and gaze at all times. Results and video demos provide interesting insights into how active gaze tracking can be utilized and improved to make human-friendly user interfaces.

2003
Halme A, Prassler E, Zelinsky A, 'Editorial: Special issue on the 3rd International Conference on Field and Service Robotics', International Journal of Robotics Research (2003)

2003
Petersson L, Apostoloff N, Zelinsky A, 'Driver assistance: An integration of vehicle monitoring and control', 2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, TAIPEI, TAIWAN (2003)

2003
Gaskett C, Brown P, Cheng G, Zelinsky A, 'Learning implicit models during target pursuit', 2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, TAIPEI, TAIWAN (2003)

2003
Fletcher L, Petersson L, Zelinsky A, 'Driver assistance systems based on vision in and out of vehicles', IEEE IV2003: INTELLIGENT VEHICLES SYMPOSIUM, PROCEEDINGS, COLUMBUS, OH (2003)

2003
Zelinsky A, 'Toward smart cars with computer vision for integrated driver and road scene monitoring', STUDIES IN PERCEPTION AND ACTION VII, GOLD COAST, AUSTRALIA (2003)

2003
Apostoloff N, Zelinsky A, 'Robust vision based lane tracking using multiple cues and particle filtering', IEEE Intelligent Vehicles Symposium, Proceedings (2003)
One of the more startling effects of road-related accidents is the economic and social burden they cause. Between 750,000 and 880,000 people died globally in road-related accidents in 1999 alone, with an estimated cost of US$518 billion. One way of combating this problem is to develop Intelligent Vehicles that are self-aware and act to increase the safety of the transportation system. This paper presents the development and application of a novel multiple-cue visual lane tracking system for research into Intelligent Vehicles (IV). Particle filtering and cue fusion technologies form the basis of the lane tracking system, which robustly handles several of the problems faced by previous lane tracking systems, such as shadows on the road, unreliable lane markings, dramatic lighting changes and discontinuous changes in road characteristics and types. Experimental results of the lane tracking system running at 15 Hz are discussed, focusing on the particle filter and cue fusion technology used.

2003
Thompson S, Zelinsky A, 'Accurate vision based position tracking between places in a topological map', Proceedings of IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA (2003)
This paper presents a method for accurately tracking the position of a mobile robot as it moves between places in a previously learned topological map. Places in the map are represented by sets of visual landmarks extracted from panoramic images. Probabilistic localisation methods and the landmark representation enable position tracking within places. A sensor model is presented which improves the accuracy of local position estimates and is robust in the presence of occlusion and data association errors. Position tracking between places requires the recognition of place transition events and the passing of local position estimates between places. This paper presents such a system and reports real-world position tracking results from paths through topological maps.

2003
Apostoloff N, Zelinsky A, 'Vision in and out of vehicles: Integrated driver and road scene monitoring', EXPERIMENTAL ROBOTICS VIII, SANT ANGELO, ITALY (2003)

2003
Dankers A, Zelinsky A, 'A real-world vision system: Mechanism, control, and vision processing', COMPUTER VISION SYSTEMS, PROCEEDINGS, GRAZ, AUSTRIA (2003)

2003
Heinzmann J, Zelinsky A, 'Quantitative safety guarantees for physical human-robot interaction', INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH (2003)

2003
Dankers A, Fletcher L, Petersson L, Zelinsky A, 'Driver assistance: Contemporary road safety', ACRA 2003. Australasian Conference on Robotics and Automation 2003, Brisbane, Australia (2003)

2002
Loy G, Fletcher L, Apostoloff N, Zelinsky A, 'An adaptive fusion architecture for target tracking', FIFTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, PROCEEDINGS, WASHINGTON, D.C. (2002)

2002
Atienza R, Zelinsky A, 'Active gaze tracking for human-robot interaction', FOURTH IEEE INTERNATIONAL CONFERENCE ON MULTIMODAL INTERFACES, PROCEEDINGS, PITTSBURGH, PA (2002)

2002
Bianco GM, Zelinsky A, 'The convergence property of goal-based visual navigation', IEEE International Conference on Intelligent Robots and Systems (2002)
The use of landmarks is a natural and instinctive method for determining the whereabouts of a location or for proceeding to a particular location. Results provided in this paper indicate that landmark-based navigation possesses a corrective, feedback-like trait that produces a convergence bound on the movements to the goal position, in contrast to odometry-based movements, which drift between successive navigation movements. Experiments show that the vector field approach can be used to explain the convergence property of landmark-based guidance tasks. The experiments were carried out with a Nomad mobile robot equipped with a real-time visual landmark tracking system.

2002
Petersson L, Apostoloff N, Zelinsky A, 'Driver assistance based on vehicle monitoring and control', Proceedings of the 2002 Australasian Conference on Robotics and Automation, Auckland, NZ (2002)

2001
Atienza R, Zelinsky A, 'A practical zoom camera calibration technique: An application of active vision for human-robot interaction', ACRA 2001. Australian Conference on Robotics and Automation 2001, Sydney, Australia (2001)

2001
Göcke R, Millar JB, Zelinsky A, Robert-Ribes J, 'Stereo Vision Lip-Tracking for Audio-Video Speech Processing', Salt Lake City (2001)

2001
Chen JR, Zelinsky A, 'Generating a configuration space representation for assembly tasks from demonstration', 2001 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, SEOUL, SOUTH KOREA (2001)

2001
Chen JR, Zelinsky A, 'Programming by demonstration: Removing suboptimal actions in a partially known configuration space', 2001 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS I-IV, PROCEEDINGS, SEOUL, SOUTH KOREA (2001)

2001
Austin D, Fletcher L, Zelinsky A, 'Mobile robotics in the long term - Exploring the fourth dimension', IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, MAUI, HI (2001)

2001
Silpa-Anan C, Brinsmead T, Abdallah S, Zelinsky A, 'Preliminary experiments in visual servo control for autonomous underwater vehicle', IROS 2001: PROCEEDINGS OF THE 2001 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, MAUI, HI (2001)

2001
Sutherland O, Truong H, Rougeaux S, Zelinsky A, 'Advancing active vision systems by improved design and control', EXPERIMENTAL ROBOTICS VII, WAIKIKI, HAWAII (2001)

2001
Heinzmann J, Zelinsky A, 'Visual human-robot interaction', 2001 INTERNATIONAL WORKSHOP ON BIO-ROBOTICS AND TELEOPERATION, PROCEEDINGS, BEIJING INST TECHNOL, BEIJING, PEOPLES R CHINA (2001)

2001
Silpa-Anan C, Zelinsky A, 'Kambara: past, present and future', ACRA 2001: Proceedings of the 2001 Australian Conference on Robotics and Automation, Sydney, NSW (2001)

2001
Goecke R, Millar JB, Zelinsky A, Robert-Ribes J, 'Analysis of audio-video correlation in vowels in Australian English', AVSP 2001 International Conference on Auditory-Visual Speech Processing, Aalborg, Denmark (2001)

2001
Fletcher L, Apostoloff N, Chen J, Zelinsky A, 'Computer vision for vehicle monitoring and control', Proceedings 2001 Australian Conference on Robotics and Automation, Sydney, NSW (2001)

2000
Truong H, Abdallah S, Rougeaux S, Zelinsky A, 'A novel mechanism for stereo active vision', ACRA 2000. Australian Conference on Robotics and Automation 2000, Sydney, Australia (2000)

2000
Bianco G, Zelinsky A, 'Real time analysis of the robustness of the navigation strategy of a visually guided mobile robot', Intelligent Autonomous Systems 6, Venice, Italy (2000)

2000
Cvetanovski J, Abdallah S, Brinsmead T, Zelinsky A, Wettergreen D, 'A state estimation system for an autonomous underwater vehicle', ACRA 2000. Australian Conference on Robotics and Automation 2000, Melbourne, Australia (2000)

2000
Gaskett C, Fletcher L, Zelinsky A, 'Reinforcement learning for visual servoing of a mobile robot', ACRA 2000. Australian Conference on Robotics and Automation 2000, Melbourne, Australia (2000)

2000
Bryant M, Wettergreen D, Abdallah S, Zelinsky A, 'Robust camera calibration for an autonomous underwater vehicle', ACRA 2000. Australian Conference on Robotics and Automation 2000, Melbourne, Australia (2000)

2000
Loy G, Goecke R, Rougeaux S, Zelinsky A, 'Stereo 3D Lip Tracking', Proceedings of the 6th International Conference on Control, Automation, Robotics and Vision ICARCV2000, Singapore (2000)

2000
Sutherland O, Rougeaux S, Abdallah S, Zelinsky A, 'Tracking with hybrid-drive active vision', Melbourne, Vic (2000)

2000
Goecke R, Tran QN, Zelinsky A, Millar JB, Robert-Ribes J, 'Validation of an automatic lip-tracking algorithm and design of a database for audio-video speech processing', Canberra, ACT (2000)

2000
Gaskett C, Fletcher L, Zelinsky A, 'Reinforcement learning for a vision based mobile robot', 2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, KAGAWA UNIV, TAKAMATSU, JAPAN (2000)

2000
O'Hagan R, Zelinsky A, 'Visual gesture interfaces for virtual environments', Proceedings - 1st Australasian User Interface Conference, AUIC 2000 (2000)
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining and uncomfortable to use. A natural, more intuitive method of interaction would be to allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision in implementing a natural interface based on hand gestures. A framework for a gesture recognition system is introduced, along with results of experiments in colour segmentation, feature extraction and template matching for finger and hand tracking and hand pose recognition. Progress in the implementation of a gesture interface for navigation and object manipulation in virtual environments is discussed.

2000
Newman R, Matsumoto Y, Rougeaux S, Zelinsky A, 'Real-time stereo tracking for head pose and gaze estimation', Proceedings - 4th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2000 (2000)
Computer systems which analyse human face/head motion have attracted significant attention recently as there are a number of interesting and useful applications. Not least among these is the goal of tracking the head in real time. A useful extension of this problem is to estimate the subject's gaze point in addition to his/her head pose. This paper describes a real-time stereo vision system which determines the head pose and gaze direction of a human subject. Its accuracy makes it useful for a number of applications including human/computer interaction, consumer research and ergonomic assessment.

2000
Matsumoto Y, Zelinsky A, 'An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement', Proceedings - 4th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2000 (2000)
To build smart human interfaces, it is necessary for a system to know a user's intention and point of attention. Since the motion of a person's head pose and gaze direction are deeply related to his/her intention and attention, detection of such information can be utilized to build natural and intuitive interfaces. We describe our real-time stereo face tracking and gaze detection system, which measures head pose and gaze direction simultaneously. The key aspect of our system is the use of real-time stereo vision together with a simple algorithm which is suitable for real-time processing. Since the 3D coordinates of the features on a face can be directly measured in our system, we can significantly simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head, compared with conventional systems that use a monocular camera. Consequently we achieved a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction.

2000
Zelinsky A, Matsumoto Y, Heinzmann J, Newman R, 'Towards human friendly robots: Vision-based interfaces and safe mechanisms', EXPERIMENTAL ROBOTICS VI, SYDNEY, AUSTRALIA (2000)

2000
Heinzmann J, Zelinsky A, 'Building human-friendly robot systems', ROBOTICS RESEARCH, SNOWBIRD, UT (2000)

2000
Matsumoto Y, Ogasawara T, Zelinsky A, 'Behavior recognition based on head pose and gaze direction measurement', IEEE International Conference on Intelligent Robots and Systems (2000)
To build smart human interfaces, it is necessary for a system to know a user's intention and point of attention. Since the motion of a person's head pose and gaze direction are deeply related to his/her intention and attention, detection of such information can be utilized to build natural and intuitive interfaces. In this paper, we describe a behavior recognition system based on our real-time stereo face tracking and gaze detection system, which measures head pose and gaze direction simultaneously. The key aspect of our system is the use of real-time stereo vision together with a simple algorithm which is suitable for real-time processing. Since the 3D coordinates of the features on a face can be directly measured in our system, we can significantly simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head, compared with conventional systems that use a monocular camera. Consequently we achieved a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction. The recognition of a person's attention and gestures is demonstrated in the experiments.

2000
Göcke R, Millar JB, Zelinsky A, Robert-Ribes J, 'Automatic extraction of lip feature points', ACRA 2000. Australian Conference on Robotics and Automation 2000, Melbourne, Australia (2000)

2000
Oh S, Zelinsky A, Taylor K, 'Autonomous battery recharging for indoor mobile robots', ACRA 2000. Australian Conference on Robotics and Automation 2000, Melbourne, Australia (2000)

2000
Thompson S, Matsui T, Zelinsky A, 'Localisation using automatically selected landmarks from panoramic images', Proceedings of Australian Conference on Robotics and Automation (ACRA2000), Melbourne, Australia (2000)

1999
Rahman S, Zelinsky A, 'Mobile robot navigation based on localisation using hidden Markov models', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Sotelo MÁ, Zelinsky A, Rodríguez FJ, Bergasa LM, 'Real-time Road Tracking using Templates Matching', IIA/SOCO 1999: Proceedings of the Third ICSC Symposia on Intelligent Industrial Automation (IIA'99) and Soft Computing (SOCO'99), Genoa, Italy (1999)

1999
Gaskett C, Wettergreen D, Zelinsky A, 'Reinforcement learning applied to the control of an autonomous underwater vehicle', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Wettergreen D, Gaskett C, Zelinsky A, 'Reinforcement learning for a visually-guided autonomous underwater vehicle', Proceedings of the 11th International Symposium on Unmanned Untethered Submersible Technology, Durham, New Hampshire (1999)

1999
Cheng G, Zelinsky A, 'Supervised Autonomy: a framework for human robot systems development', Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (1999)
In this paper we present a paradigm for robot control, Supervised Autonomy. Supervised Autonomy is a framework which facilitates the development of human-robot systems. Each of its components has been devised to augment users in accomplishing their task. Experimental results of applying this framework to a teleoperation system are presented, together with our current progress and planned future work.

1999
Hara I, Zelinsky A, Matsui T, Asoh H, Kurita T, Tanaka M, Hotta K, 'Communicative functions to support human robot cooperation', IEEE International Conference on Intelligent Robots and Systems (1999)
We have been developing an autonomous robotic agent that helps people in a real-world environment, such as an office. When a robotic agent works by cooperating with a person in a real-world environment, it must manage a lot of information and deal with the knowledge and language that people usually use. It is therefore important for the agent to recognize what people request as soon as possible. To realize natural communication with people, the agent should provide robust communicative functions for obtaining information from people. In this paper, we discuss the communicative functions of our robotic agent called Jijo-2. In particular, we focus on the problem of detecting human faces, and discuss how robust detection of a human face can be achieved.

1999
Matsumoto Y, Zelinsky A, 'Real-time stereo face tracking system for visual human interfaces', Proceedings - International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, RATFG-RTS 1999 (1999)
When a person instructs operations to a robot, or performs a cooperative task with a robot, it is necessary to inform the robot of the person's intention and attention. Since the motion of a person's face and the direction of the gaze are deeply related to the person's intention and attention, detection of such motions can be utilized as a natural way of communication in human-robot interaction. In this paper, we describe our real-time stereo face tracking system. The key to our system is the use of stereo vision. Since the 3D coordinates of the features on the face can be directly measured in our system, we can drastically simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head, compared with conventional systems that use a monocular camera. Consequently we achieved a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction.

1999
Gaskett C, Wettergreen D, Zelinsky A, 'Q-learning in continuous state and action spaces', ADVANCED TOPICS IN ARTIFICIAL INTELLIGENCE, UNIV NEW S WALES, SYDNEY, AUSTRALIA (1999)

1999
Zelinsky A, 'Visual human-machine interaction', ADVANCED TOPICS IN ARTIFICIAL INTELLIGENCE, UNIV NEW S WALES, SYDNEY, AUSTRALIA (1999)

1999
Matsumoto Y, Zelinsky A, 'Real-time face tracking system for human-robot interaction', Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (1999)
When a person instructs operations to a robot, or performs a cooperative task with a robot, it is necessary to inform the robot of the person's intention and attention. Since the motion of a person's face and the direction of the gaze are deeply related to the person's intention and attention, detection of such motions can be utilized as a natural way of communication in human-robot interaction. In this paper, we describe our real-time stereo face tracking system. The key to our system is the use of stereo vision. Since the 3D coordinates of the features on the face can be directly measured in our system, we can drastically simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head, compared with conventional systems that use a monocular camera. Consequently we achieved a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction.

1999
Koshizen T, Bartlett P, Zelinsky A, 'Sensor fusion of odometry and sonar sensors by the Gaussian Mixture Bayes' technique in mobile robot position estimation', Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (1999)
Modelling and reducing uncertainty are two essential problems in mobile robot localisation. Previously we developed a robot localisation system, the Gaussian Mixture of Bayes with Regularised Expectation Maximisation (GMB-REM), using sonar sensors. GMB-REM allows a robot's position to be modelled as a probability distribution, and uses Bayes' theorem to reduce the uncertainty of its location. In this paper, a new system for performing sensor fusion is introduced, namely an enhanced form of GMB-REM. Empirical results show the new system outperforms GMB-REM using sonar alone. More specifically, it is able to constrain the error in the robot's position even when sonar signals are noisy.

1999
Zelinsky A, 'Advances in robot vision: Mechanisms and algorithms', First Asian Symposium on Industrial Automation and Robotics, Bangkok, Thailand (1999)

1999
Bianco G, Zelinsky A, 'Biologically-inspired visual landmark learning and navigation for mobile robots', IEEE International Conference on Intelligent Robots and Systems (1999)
This paper presents a biologically-inspired method for navigating using visual landmarks which have been self-selected within natural environments. A landmark is a region of the grabbed image chosen according to its reliability, measured through a phase (Turn Back and Look, TBL) that mimics the behavior of some social insects. From the self-chosen landmarks, suitable navigation information can be extracted following a well-known model introduced in biology to explain the bee's navigation behavior. The landmark selection phase affects the conservativeness of the navigation vector field, allowing us to explain the navigation model in terms of a visual potential function which drives the navigation to the goal. The experiments were performed using a Nomad200 mobile robot equipped with monocular color vision.

1999
Jung D, Zelinsky A, 'Integrating spatial and topological navigation in a behaviour-based multi-robot application', IEEE International Conference on Intelligent Robots and Systems (1999)
According to the behaviour-based philosophy, the structure of an agent's internal representations of the environment should not be explicitly imposed by the designer; they should be grounded in its sensor-action space. This paper presents a scheme in which the agent's action selection mechanism gives rise to an integrated spatial and topological navigation and mapping capability. The navigation behaviour emerges from the notion of location feature detectors and homogeneous action selection. The scheme is demonstrated using two autonomous mobile robots in a multi-robot cooperation scenario.

1999
Heinzmann J, Zelinsky A, 'Safe control of human-friendly robots', IEEE International Conference on Intelligent Robots and Systems (1999)
This paper introduces a new approach to the control of robot manipulators in a way that is safe for humans in the robot's workspace. Conceptually the robot is viewed as a tool with limited autonomy. The limited perception capabilities of automatic systems prohibit the construction of failsafe robots with the capabilities of people. Instead, the goal of our control scheme is to make the interaction with a robot manipulator safe by making the robot's actions predictable and understandable to the human operator. At the same time, the forces the robot applies with any part of its body to its environment have to be controllable and limited. Experimental results are presented of a human-friendly robot controller that is under development for a Barrett Whole Arm Manipulator robot.

1999
Ward K, Zelinsky A, McKerrow P, 'Learning to avoid objects and dock with a mobile robot', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Newman R, Zelinsky A, 'Error analysis of head pose and gaze direction from stereo vision', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Thompson S, Zelinsky A, Srinivasan M, 'Automatic landmark selection for navigation with panoramic vision', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Truong SN, Kieffer J, Zelinsky A, 'A cable-driven pan-tilt mechanism for active vision', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Heinzmann J, Zelinsky A, 'Bounding Errors for Improved 3D Face Tracking in Visual Interfaces', ACRA 1999. Australian Conference on Robotics and Automation 1999, Brisbane, Australia (1999)

1999
Loy G, Newman R, Zelinsky A, Moore J, 'An Alternative Approach to Recovering 3D Pose Information from 2D Data', Proceedings of Australian Conference on Robotics and Automation ACRA'99, Brisbane, Australia (1999)

1999
Wettergreen D, Gaskett C, Zelinsky A, 'Autonomous guidance and control for an underwater robotic vehicle', Proceedings of the International Conference on Field and Service Robotics (FSR'99), Pittsburgh, USA (1999)

1998
Jung D, Cheng G, Zelinsky A, 'Robot cleaning: An application of distributed planning and real-time vision', Field and Service Robotics, Canberra, Australia (1998)

1998
Cheng G, Zelinsky A, 'Real-time vision processing for a soccer playing mobile robot', RoboCup-97: Robot Soccer World Cup I, Nagoya, Japan (1998)

1998
Ward K, Zelinsky A, 'Acquiring mobile robot behaviors by learning trajectory velocities with multiple FAM matrices', 1998 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, KATHOLIEKE UNIV LEUVEN, LEUVEN, BELGIUM (1998)

1998
Jung D, Heinzmann J, Zelinsky A, 'Range and pose estimation for visual servoing of a mobile robot', 1998 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, KATHOLIEKE UNIV LEUVEN, LEUVEN, BELGIUM (1998)

1998
Cheng G, Zelinsky A, 'Goal-oriented behaviour-based visual navigation', 1998 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, KATHOLIEKE UNIV LEUVEN, LEUVEN, BELGIUM (1998)

1998
Brooks A, Abdallah S, Zelinsky A, Kieffer J, 'A multimodal approach to real-time active vision', 1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - PROCEEDINGS, VOLS 1-3, VICTORIA, CANADA (1998)

1998
Wettergreen D, Gaskett C, Zelinsky A, 'Development of a visually-guided autonomous underwater vehicle', OCEANS'98 - CONFERENCE PROCEEDINGS, VOLS 1-3, NICE, FRANCE (1998)

1998
Heinzmann J, Zelinsky A, '3-D facial pose and gaze point estimation using a robust real-time tracking paradigm', AUTOMATIC FACE AND GESTURE RECOGNITION - THIRD IEEE INTERNATIONAL CONFERENCE PROCEEDINGS, NARA, JAPAN (1998)

1998
Jung D, Cheng G, Zelinsky A, 'Experiments in realising cooperation between autonomous mobile robots', EXPERIMENTAL ROBOTICS V, UNIV POLITECNICA CATALUNYA, BARCELONA, SPAIN (1998)

1998
Ward K, Zelinsky A, 'Genetically Evolving Robot Perception to Effectively Learn Multiple Behaviours Simultaneously', Second IEEE International Conference on Intelligent Processing Systems, Gold Coast, Australia (1998)

1997
Heinzmann J, Zelinsky A, 'Robust real-time face tracking and gesture recognition', IJCAI-97 - PROCEEDINGS OF THE FIFTEENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOLS 1 AND 2, NAGOYA, JAPAN (1997)

1997
O'Hagan R, Zelinsky A, 'Finger track - A robust and real-time gesture interface', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (1997)
Real-time computer vision combined with robust gesture recognition provides a natural alternative to traditional computer interfaces. Human users have plenty of experience with actions and the manipulation of objects requiring finger movement. In place of a mouse, users could use their hands to select and manipulate data. This paper presents a first step in this approach, using a finger as a pointing and selection device. A major feature of a successful tracking system is robustness. The system must be able to acquire tracked features upon startup, and reacquire them if lost during tracking. Reacquisition should be fast and accurate (i.e. it should pick up the correct feature). Intelligent search algorithms are needed for speedy, accurate acquisition of lost features within the frame. The prototype interface presented in this paper is based on finger tracking as a means of input to applications. The focus of the discussion is how the system can be made to perform robustly in real time. Dynamically distributed search windows are defined for searching within the frame. The location and number of search windows depend on the confidence in the tracking of features. Experimental results showing the effectiveness of these techniques are presented.

1996
Cheng G, Zelinsky A, 'Supervised autonomy: A paradigm for teleoperating mobile robots', IROS '97 - PROCEEDINGS OF THE 1997 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS: INNOVATIVE ROBOTICS FOR REAL-WORLD APPLICATIONS, VOLS 1-3, GRENOBLE, FRANCE (1996)

1996
Jung D, Zelinsky A, 'Whisker based mobile robot navigation', IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING WITH DYNAMIC WORLDS, VOLS 1-3, SENRI LIFE SCI CTR, OSAKA, JAPAN (1996)

1996
Cheng G, Zelinsky A, 'Real-time visual behaviours for navigating a mobile robot', IROS 96 - PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - ROBOTIC INTELLIGENCE INTERACTING WITH DYNAMIC WORLDS, VOLS 1-3, SENRI LIFE SCI CTR, OSAKA, JAPAN (1996)

1996
Zelinsky A, Heinzmann J, 'Human-robot interaction using facial gesture recognition', RO-MAN '96 - 5TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, AIST TSUKUBA RES CTR, TSUKUBA, JAPAN (1996)

1996
Zelinsky A, Heinzmann J, 'Real-time visual recognition of facial gestures for human-computer interaction', PROCEEDINGS OF THE SECOND INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, KILLINGTON, VT (1996)

1995
Cheng G, Zelinsky A, 'A physically grounded search in a behaviour based robot', AI '95 Proceedings of the Eighth Australian Joint Conference on Artificial Intelligence, Canberra, Australia (1995)

1995
Zelinsky A, Kuniyoshi Y, Suehiro T, Tsukune H, 'Using an augmentable resource to robustly and purposefully navigate a robot', Proceedings - IEEE International Conference on Robotics and Automation (1995)
We present a scheme for specifying and executing purposive navigation tasks for a behaviour-based mobile robot. A user specifies the robot's navigation task in general and qualitative terms using a graphical resource called the Purposive Map (PM). The robot navigates using the incomplete and approximate information stored in the PM as an aid to achieving the specified mission. The robot is able to augment the knowledge provided in the PM with environment information that it learns. Using the augmented PM, the robot learns how to perform efficient obstacle avoidance. We present experimental results using a real robot to show that our scheme is robust. Our robot can escape from dead-ends, can deduce that goals are unreachable, and can withstand disturbances to the environment between missions.

1994
Zelinsky A, Kuniyoshi Y, Tsukune H, 'Monitoring and co-ordinating behaviours for purposive robot navigation', IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (1994)
This paper presents a new scheme for purposive navigation for mobile agents. The new scheme is robust, qualitative and provides a mechanism for combining mapping, planning and mission execution for a mobile agent into a single data structure called the Purposive Map (PM). The agent can navigate using the incomplete and approximate information stored in the PM. We present a novel approach to obstacle avoidance for a behaviour-based robot, based on using a physically grounded search while monitoring and co-ordinating behaviours. The physically grounded search exploits stagnation points (local minima) to guide the search for the shortest path to a target. This scheme enables our robot to escape from dead-end situations and allows it to deduce that a target location is unreachable. Simulation results are presented.

1993
Zelinsky A, Yuta S, 'A unified approach to planning, sensing and navigation for mobile robots', Experimental Robotics III, The 3rd International Symposium, Kyoto, Japan, October 28-30, 1993. Lecture Notes in Control and Information Sciences 200, Kyoto, Japan (1993)

1993
Zelinsky A, Jarvis RA, Byrne JC, Yuta S, 'Planning Paths of Complete Coverage of an Unstructured Environment by a Mobile Robot', Proceedings of International Conference on Advanced Robotics, Tsukuba, Japan (1993)

1992
Zelinsky A, 'Mobile robot navigation - Combining local obstacle avoidance and global path planning', AI '92. Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Hobart, Tasmania (1992)

1992
Zelinsky A, Dowson I, 'Continuous smooth path execution for an autonomous guided vehicle (AGV)', Melbourne, Vic (1992)

1992
Zelinsky A, 'A navigation algorithm for industrial mobile robots', Melbourne, Vic (1992)

1990
Zelinsky A, 'A Mobile Robot Control System based on a Transputer Network', The Transputer in Australasia. ATOUG-3. Proceedings of the 3rd Australian Transputer and OCCAM User Group Conference, Sydney, Australia (1990)

1990
Zelinsky A, 'Environment mapping with a mobile robot using sonar', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (1990)
This paper describes a method of producing high resolution maps of an indoor environment with an autonomous mobile robot equipped with sonar range finding sensors. This method is based upon investigating obstacles in the near vicinity of a mobile robot. The mobile robot examines the straight line segments extracted from the sonar range data describing obstacles near the robot. The mobile robot then moves parallel to the straight line sonar segments, in close proximity to the obstacles, continually applying the sonar barrier test. The sonar barrier test exploits the physical constraints of sonar data, and eliminates noisy data. This test determines whether or not a sonar line segment is a true obstacle edge or a false reflection. Low resolution sonar sensors can be used with the described method. The performance of the algorithm is demonstrated using a Denning Corp. mobile robot, equipped with a ring of Polaroid Corp. ultrasonic rangefinders.

1989
Zelinsky A, 'Navigation By Learning', Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems (IROS '89): The Autonomous Mobile Robots and Its Applications, Tsukuba, Japan (1989)

1988
Zelinsky A, 'Robot navigation with learning', Australian Computer Science Communications, St Lucia, QLD (1988)