Classification Level:
Index Number:
Serial Number:
Grade:

Undergraduate Graduation Design (Thesis)
Foreign Literature Translation

Original Title: Multi-degree of freedom walking robot
Translated Title: Multi-degree of freedom walking robot (多自由度步行機器人)
Department: School of Mechanical and Electrical Engineering
Major: Mechanical Design, Manufacturing and Automation
Class: B13113
Author: Li Ming
Student Number: 20134011327
Advisor: Chen Ming
Advisor Title: Professor
Completed: March 2017

Prepared by the Academic Affairs Office, North China Institute of Aerospace Engineering
Translated Title: Multi-degree of freedom walking robot (多自由度步行機器人)
Original Title: Multi-degree of freedom walking robot
Author: Masayuki INABA
Translated Name: 美男雅之
Nationality: Japan
Source: University of Tokyo

Translation:

Multi-degree of freedom walking robot
Abstract: Focusing on flexibility and intelligent reactivity in the real world, it is more important to build not a robot that never falls down, but a robot that can get up after it falls. This paper presents a two-armed bipedal robot, an ape-like robot, which can walk, roll over and stand up. The robot consists of a head, two arms and two legs. The control system of the biped robot is designed on the remote-brained approach, in which the robot does not carry its own brain within the body but talks to it over radio links. This remote-brained approach gives the robot both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep its balance while standing by using tracking vision, detect whether it has fallen by a set of vertical sensors, and perform getting-up motions by coordinating its two arms and two legs. The developed system and experimental results are described with real examples.
1 Introduction

As human children show, the capability of getting up is indispensable for learning biped locomotion. To build a robot that can learn biped walking automatically, the design must include sensors that tell whether the robot is standing or lying down.

Research on biped walking robots has mainly focused on dynamic walking, treating it as an advanced control problem [3][4][5]. However, focusing on intelligent reactivity in the real world, it is more important to build not a robot that never falls down, but a robot that can get up when it does fall.

To build a robot that can both fall down and get up, the robot needs a sensing system that tells whether it has fallen or not. Although vision is one of the most important sensing functions of a robot, building a powerful vision system onto the robot's own body is difficult because of the size and power limitations of vision hardware. If we want to advance research on vision-based robot behaviors that require dynamic reactions and intelligent reasoning based on experience, the robot body must be light enough to react quickly and must have many degrees of freedom in actuation to exhibit a variety of intelligent behaviors. As for legged robots [6][7][8], there is only a little research on vision-based behaviors [9]. The difficulty in advancing experimental research on vision-based legged robots comes from the limitations of the vision hardware.

It is hard to keep developing advanced vision software on limited hardware. To solve these problems and advance the study of vision-based behaviors, we adopted the approach of building remote-brained robots. The body and the brain are connected by wireless links, using wireless cameras and remote-controlled actuators. Since the body needs no on-board computers, it becomes much easier to build a lightweight body with many degrees of freedom in actuation.

In this research, we developed a two-armed bipedal robot in the remote-brained robot environment and made it perform vision-based balancing and getting-up motions through the cooperation of its arms and legs. The system and experimental results are described below.
Figure 1: Hardware configuration of the remote-brained system
Figure 2: Structure of the two-armed bipedal robot
2 The Remote-Brained System

The remote-brained robot does not carry its own brain within the body. It leaves the brain in the mother environment and communicates with it over radio links. This allows us to build a robot with a free body and a heavy brain. The link between body and brain defines the interface between software and hardware. Bodies are designed to suit each research project and task, which lets us carry out research with a variety of real robot systems [10].

A major advantage of remote-brained robots is that the robot can have a large and heavy brain based on super-parallel computers. Although hardware technology has advanced far enough to produce powerful compact vision systems, the hardware is still large. The wireless connection between camera and vision processor has become a research tool. The remote-brained approach lets us make progress on a variety of experimental issues in vision-based robotics.

Another advantage of the remote-brained approach is that the robot body can be lightweight. This opens up the possibility of working with legged mobile robots. As with animals, a robot with four limbs can walk. We focus on the vision-based adaptive behaviors of four-limbed robots, mechanical animals, experimenting in a field that has not yet been much studied.

The brain is raised in the mother environment, inherited over generations. The brain and the mother environment can be shared by newly designed robots, so a developer using the environment can concentrate on the functional design of the brain. A robot whose brain is raised in a mother environment benefits directly from the mother's 'evolution': the software gains power easily whenever the mother is upgraded to a more powerful computer.

Figure 1 shows the configuration of the remote-brained system, which consists of the brain base, the robot body and the brain-body interface. In the remote-brained approach, the design and performance of the interface between brain and body are the key. Our current implementation is fully remote-brained, meaning that the body carries no computer at all. The current system consists of the vision subsystems, the non-vision sensor subsystem and the motion control subsystem. A brain block can receive video signals from the cameras on the robot bodies; the vision subsystems are parallel sets, each consisting of eight vision boards.

A body has only a receiver for motion instruction signals and a transmitter for sensor signals. The sensor information is sent through a video transmitter; other sensor information, such as touch and servo error, can also be transmitted by integrating the signals into a video image [11]. Each actuator is a geared module that contains an analog servo circuit and receives its position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators over 13 radio bands and sends reference values to all actuators every 20 msec.
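As a back-of-the-envelope check of the numbers above, the addressing works out to 13 bands × 8 reference values = 104 actuators. The sketch below illustrates this scheme in Python; the frame format and function names are illustrative assumptions, not the paper's actual radio protocol.

```python
NUM_BANDS = 13       # radio wave bands available to the motion control subsystem
REFS_PER_BAND = 8    # position reference values encoded in one control signal
PERIOD_MS = 20       # all reference values are resent every 20 msec

def pack_frame(references):
    """Pack eight 0-255 position references into one control frame for a band.

    Hypothetical framing: one byte per reference, in joint order."""
    if len(references) != REFS_PER_BAND:
        raise ValueError("one frame carries exactly 8 references")
    return bytes(references)

# 13 bands x 8 references per band = 104 addressable actuators
print(NUM_BANDS * REFS_PER_BAND)    # -> 104
print(len(pack_frame([128] * 8)))   # -> 8 (bytes per band, sent every 20 ms)
```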
3 The Two-Armed Bipedal Robot

Figure 2 shows the structure of the two-armed bipedal robot. The main electric components of the robot are joint servo actuators, control signal receivers, an orientation sensor with a transmitter, a battery set for the actuators and sensors, and a camera with a video transmitter; there is no computer on board. Each servo actuator packages a geared motor and an analog servo circuit in one box, and the control signal to each servo module is a position reference. The available servo modules cover torques from 2 kg·cm to 14 kg·cm at a speed of about 0.2 sec/60 deg. The control signal transmitted over a radio link encodes eight reference values; the robot in Figure 2 carries two receiver modules to control its 16 actuators.

Figure 3 explains the orientation sensor, which uses a set of vertical switches. Each vertical switch is a mercury switch: when the switch shown in (a) is tilted, the drop of mercury closes the contact between its two electrodes. The orientation sensor mounts two mercury switches, as shown in (b). The switches provide a two-bit signal that distinguishes four orientations of the sensor, as shown in (c). The robot carries this sensor on its chest and can thus distinguish four orientations: face up, face down, standing, and upside down.

The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented Lisp, EusLisp, which lets us describe the geometric solid model and the window interface for behavior design.
Figure 3: The orientation sensor with two mercury switches
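The two-bit orientation signal can be decoded as follows. This is a minimal sketch: the paper only states that two mercury switches give two bits distinguishing four orientations, so the particular bit-to-orientation mapping here is an assumption for illustration.

```python
# Hypothetical mapping of the two mercury-switch bits to body orientations.
ORIENTATIONS = {
    (0, 0): "standing",
    (0, 1): "face up",
    (1, 0): "face down",
    (1, 1): "upside down",
}

def decode_orientation(switch_a, switch_b):
    """Return the body orientation for a pair of mercury-switch states."""
    return ORIENTATIONS[(switch_a, switch_b)]

print(decode_orientation(0, 0))  # -> standing
print(decode_orientation(1, 1))  # -> upside down
```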
Figure 4 shows some of the classes in the programming environment for remote-brained robots. The class hierarchy provides a rich platform for extending the development of various robots.
4 Vision-Based Balancing

The robot can stand on two legs. Since it can shift the center of gravity of its body by controlling the ankle angles, it can perform static bipedal walking. If the ground is not flat or stable, the robot has to control its body balance during static walking.

Vision-based balancing requires a high-speed vision system that keeps observing the moving scene. We have developed a tracking vision board using a correlation chip [13]. The board consists of a transputer augmented with a special LSI chip (MEP [14]: Motion Estimation Processor) that performs local image block matching.
Figure 4: The class hierarchy
Figure 5: The balancing experiment
The inputs to the MEP are an image used as the reference block and an image used as the search window. The reference block is up to 16×16 pixels; the search window size depends on the reference block size and is usually up to 32×32 pixels, so that it contains 16×16 possible matches. The processor computes the 256 SAD (sum of absolute differences) values between the reference block and the 256 candidate blocks in the search window, and finds the best matching block, that is, the one with the minimum SAD value.

Block matching is very powerful when the target moves only in translation, but the ordinary block matching method cannot track a target that rotates. To overcome this difficulty, we developed a new method that follows the rotation of the target with candidate templates. The rotated template method first generates all rotated images of the target in advance; several adequate candidate reference templates are then selected and matched while tracking the scene in the front view.

Figure 5 shows a balancing experiment. In this experiment the robot stands on a tilted board while visually tracking the scene in front of it. It remembers the vertical orientation of an object as the reference for visual tracking and generates several rotated images of that reference. By tracking the reference object with the rotated images, the vision system can measure the rotation of the body. To keep its balance, the robot feedback-controls its body rotation so as to control the center of gravity of the body. The rotational visual tracker [15] runs at video rate.
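The block matching step can be sketched in plain Python as follows. This is a software illustration of SAD matching only (the MEP chip evaluates its 256 candidates in hardware); the scan below simply tries every offset of a 16×16 reference block inside a 32×32 window.

```python
def sad(reference, window, oy, ox):
    """Sum of absolute differences between `reference` and the equally sized
    block of `window` whose top-left corner is at offset (oy, ox)."""
    return sum(
        abs(reference[i][j] - window[oy + i][ox + j])
        for i in range(len(reference))
        for j in range(len(reference[0]))
    )

def best_match(reference, window):
    """Return the offset whose block has the minimum SAD value."""
    bh, bw = len(reference), len(reference[0])
    offsets = [(y, x)
               for y in range(len(window) - bh + 1)
               for x in range(len(window[0]) - bw + 1)]
    return min(offsets, key=lambda o: sad(reference, window, o[0], o[1]))

# Cutting the reference block out of the window itself, the best match
# must be the block's own position, where the SAD is exactly 0.
window = [[(13 * y + 17 * x) % 251 for x in range(32)] for y in range(32)]
reference = [row[7:23] for row in window[5:21]]
print(best_match(reference, window))  # -> (5, 7)
```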
Figure 6: The biped walking gait
Figure 7: Biped walking experiments

5 Biped Walking

If a bipedal robot can control its center of gravity freely, it can perform a biped walk. Since the robot shown in Figure 2 has degrees of freedom toward the left and right at the ankles, it can perform bipedal walking in a static way. The motion sequence of one walking cycle consists of eight phases, as shown in Figure 6. One step consists of four phases: move-gravity-center-on-foot, lift-leg, move-forward-leg, and place-leg. Since the body is described by a solid model, the robot can generate a body configuration for move-gravity-center-on-foot from the parameter giving the height of the center of gravity. After this movement, the robot can lift the other leg and move it forward. While lifting a leg, the robot has to control its configuration to keep the center of gravity above the supporting foot. Since the stability of the balance depends on the height of the center of gravity, the robot selects suitable knee angles. Figure 7 shows a sequence of biped walking experiments.
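The eight-phase cycle can be written down as a simple schedule; the phase names follow the text, while the pairing of supporting foot with swing leg is an assumption of the usual alternating pattern.

```python
# The four phases of one step, executed once per leg to give the
# eight phases of a full walking cycle.
STEP_PHASES = (
    "move-gravity-center-on-foot",
    "lift-leg",
    "move-forward-leg",
    "place-leg",
)

def walking_cycle():
    """Yield (supporting_foot, phase) pairs for one eight-phase cycle."""
    for support in ("right", "left"):  # swing the left leg first, then the right
        for phase in STEP_PHASES:
            yield support, phase

cycle = list(walking_cycle())
print(len(cycle))   # -> 8
print(cycle[0])     # -> ('right', 'move-gravity-center-on-foot')
```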
6 Rolling Over and Standing Up

Figure 8 shows the sequence of rolling over, sitting and standing up. This motion requires coordination between the arms and the legs.

Since each foot of the robot contains a battery, the robot can use the weight of the batteries for the roll-over motion. When the robot throws up its left leg, moves its left arm back, and moves its right arm forward, it gains a rotary moment around the body. Once the body starts turning, the right leg moves back and the left foot returns to its position so that the robot lies face down. The roll-over motion thus changes the body orientation from face up to face down, which can be verified with the orientation sensor.

After reaching the face-down orientation, the robot moves its arms down to sit back on its feet. This motion causes the hands to slip along the ground. If the arms are not long enough to carry the body's center of gravity onto the feet, the sitting motion requires a dynamic push by the arms. The standing motion is then controlled so as to keep the balance.
Figure 8: The sequence of rolling over and standing up
Figure 9: State transitions of the biped robot with getting-up capability
7 Integration by Building a Sensor-Based Transition Net

To integrate the basic actions described above, we adopted a method of describing a sensor-based transition network, in which transitions are taken according to sensor status. Figure 9 shows the state transition diagram of the robot, which integrates the basic actions: biped walking, rolling over, sitting and standing up. This integration gives the robot the ability to keep walking even when it falls down. The ordinary biped walk is composed of two states taken successively: left-leg-fore and right-leg-fore. The poses in 'Lie on the Back' and 'Lie on the Face' are the same as the pose in 'Stand'; that is, the shape of the robot body is the same but its orientation is different.

The robot can detect whether it is lying on its back or on its face using the orientation sensor. When it detects that it has fallen, it moves to a neutral pose and so enters the 'Lie on the Back' or 'Lie on the Face' state. To get up from 'Lie on the Back', a motion sequence is planned that executes the roll-over, sit and stand-up motions. If the state is 'Lie on the Face', the robot skips the roll-over and instead moves its arms to perform the sitting motion directly.
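The transition net of Figure 9 can be sketched as a lookup table keyed by (state, sensed orientation). The state and event names below paraphrase those in the text, and the table is a simplified assumption that covers only the falling and getting-up paths, not the full walking cycle.

```python
# Simplified sensor-based transition net for falling and getting up.
# Keys are (current state, orientation sensor reading); values are next states.
TRANSITIONS = {
    ("walking", "face up"):       "lie-on-back",
    ("walking", "face down"):     "lie-on-face",
    ("lie-on-back", "face up"):   "roll-over",   # roll over only from the back
    ("roll-over", "face down"):   "sit",
    ("lie-on-face", "face down"): "sit",         # face down: sit directly
    ("sit", "standing"):          "stand",
    ("stand", "standing"):        "walking",
}

def step(state, orientation):
    """Advance one transition; stay in the current state if no edge matches."""
    return TRANSITIONS.get((state, orientation), state)

# A fall onto the back leads through roll-over, sit and stand back to walking.
state = "walking"
for sensed in ("face up", "face up", "face down", "standing", "standing"):
    state = step(state, sensed)
print(state)  # -> walking
```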
8 Concluding Remarks

This paper has presented a two-armed bipedal robot that can perform statically stable biped walking, rolling over and standing up. The key to building such behaviors is the remote-brained approach. As the experiments show, the wireless links allow the body to move freely. The approach also seems to change the way we conceptualize robots; it has given our laboratory a new research environment, better suited to robotics and real-world artificial intelligence.

The robot presented here is a legged robot. Our vision system is based on high-speed block matching implemented in a motion-estimation LSI. The vision system gives the body the dynamics and adaptability needed to interact with people. The mechanical dog has shown adaptive behaviors built on vision-based tracking, and the mechanical ape has shown tracking and memory-based visual functions and their integration into interactive behaviors.

Research on two-armed bipedal robots offers a new field for intelligent robot research, because a flexible body makes a wide variety of behaviors possible. The remote-brained approach will also support research on learning-based behaviors. Our next research tasks include how to learn from human behavior and how a robot can improve its behaviors by itself.
Original Text
Multi-degree of freedom walking robot
Masayuki INABA, Fumio KANEHIRO
Satoshi KAGAMI, Hirochika INOUE
Department of Mechano-Informatics
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, 113 Tokyo, JAPAN
Abstract: Focusing attention on flexibility and intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down. This paper presents research on a two-armed bipedal robot, an apelike robot, which can perform biped walking, rolling over and standing up. The robot consists of a head, two arms, and two legs. The control system of the biped robot is designed based on the remote-brained approach, in which a robot does not bring its own brain within the body and talks with it by radio links. This remote-brained approach enables a robot to have both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep balance in standing using tracking vision, detect whether it falls down or not by a set of vertical sensors, and perform getting-up motion collaborating two arms and two legs. The developed system and experimental results are described with illustrated real examples.
1 Introduction
As human children show, it is indispensable to have the capability of getting-up motion in order to learn biped locomotion. In order to build a robot which tries to learn biped walking automatically, the body should be designed to have structures to support getting up as well as sensors to know whether it lies down or not.
When a biped robot has arms, it can perform various behaviors as well as walking. Research on biped walking robots has been presented with realization [1][2][3]. It has mainly focused on the dynamics in walking, treating it as an advanced problem in control [3][4][5]. However, focusing attention on the intelligent reactivity in the real world, it is more important to build, not a robot that won't fall down, but a robot that can get up if it does fall down.
In order to build a robot that can get up if it falls down, the robot needs sensing system to keep the body balance and to know whether it falls down or not. Although vision is one of the most important sensing functions of a robot, it is hard to build a robot with a powerful vision system on its own body because of the size and power limitation of a vision system. If we want to advance research on vision-based robot behaviors requiring dynamic reactions and intelligent reasoning based on experience, the robot body has to be lightweight enough to react quickly and have many DOFS in actuation to show a variety of intelligent behaviors.
As for legged robots [6][7][8], there is only a little research on vision-based behaviors [9]. The difficulties in advancing experimental research for vision-based legged robots are caused by the limitation of the vision hardware. It is hard to keep developing advanced vision software on limited hardware. In order to solve the problems and advance the study of vision-based behaviors, we have adopted a new approach through building remote-brained robots. The body and the brain are connected by wireless links, using wireless cameras and remote-controlled actuators. As a robot body does not need computers on board, it becomes easier to build a lightweight body with many DOFs in actuation.
In this research, we developed a two-armed bipedal robot using the remote-brained robot environment and made it to perform balancing based on vision and getting up through cooperating arms and legs. The system and experimental results are described below.
2 The Remote-Brained System
The remote-brained robot does not bring its own brain within the body. It leaves the brain in the mother environment and communicates with it by radio links. This allows us to build a robot with a free body and a heavy brain. The connection link between the body and the brain defines the interface between software and hardware. Bodies are designed to suit each research project and task. This enables us to advance in performing research with a variety of real robot systems [10].
A major advantage of remote-brained robots is that the robot can have a large and heavy brain based on super parallel computers. Although hardware technology for vision has advanced and produced powerful compact vision systems, the size of the hardware is still large. Wireless connection between the camera and the vision processor has been a research tool. The remote-brained approach allows us to progress in the study of a variety of experimental issues in vision-based robotics.
Another advantage of the remote-brained approach is that the robot bodies can be lightweight. This opens up the possibility of working with legged mobile robots. As with animals, if a robot has 4 limbs it can walk. We are focusing on vision-based adaptive behaviors of 4-limbed robots, mechanical animals, experimenting in a field as yet not much studied.
The brain is raised in the mother environment inherited over generations. The brain and the mother environment can be shared with newly designed robots. A developer using the environment can concentrate on the functional design of a brain. For robots where the brain is raised in a mother environment, it can benefit directly from the mother's 'evolution', meaning that the software gains power easily when the mother is upgraded to a more powerful computer. Figure 1 shows the configuration of the remote-brained system which consists of brain base, robot body and brain-body interface.
In the remote-brained approach the design and the performance of the interface between brain and body is the key. Our current implementation adopts a fully remotely brained approach, which means the body has no computer onboard. The current system consists of the vision subsystems, the non-vision sensor subsystem and the motion control subsystem. A block can receive video signals from cameras on robot bodies. The vision subsystems are parallel sets each consisting of eight vision boards.
A body just has a receiver for motion instruction signals and a transmitter for sensor signals. The sensor information is transmitted from a video transmitter. It is possible to transmit other sensor information such as touch and servo error through the video transmitter by integrating the signals into a video image[11]. The actuator is a geared module which includes an analog servo circuit and receives a position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators through 13 wave bands and send the reference values to all the actuators every 20msec.
3 The Two-Armed Bipedal Robot
Figure 2 shows the structure of the two-armed bipedal robot. The main electric components of the robot are joint servo actuators, control signal receivers, an orientation sensor with transmitter, a battery set for actuators and sensors, and a camera with video transmitter. There is no computer on board. A servo actuator includes a geared motor and analog servo circuit in the box. The control signal to each servo module is a position reference. The torques of the available servo modules cover 2Kgcm - 14Kgcm with a speed of about 0.2sec/60deg. The control signal transmitted on the radio link encodes eight reference values. The robot in figure 2 has two receiver modules onboard to control 16 actuators.
Figure 3 explains the orientation sensor using a set of vertical switches. The vertical switch is a mercury switch. When the mercury switch (a) is tilted, the drop of mercury closes the contact between the two electrodes. The orientation sensor mounts two mercury switches as shown in (b). The switches provide a two-bit signal to detect four orientations of the sensor as shown in (c). The robot has this sensor at its chest and it can distinguish four orientations; face up, face down, standing and upside down.
The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented Lisp, EusLisp, which has enabled us to describe the geometric solid model and window interface for behavior design.
Figure 4 shows some of the classes in the programming environment for remote-brained robots written in EusLisp. The hierarchy in the classes provides us with rich facilities for extending development of various robots.
4 Vision-Based Balancing
The robot can stand up on two legs. As it can change the gravity center of its body by controlling the ankle angles, it can perform static bipedal walks. During static walking the robot has to control its body balance if the ground is not flat and stable.
In order to perform vision-based balancing it is required to have a high-speed vision system to keep observing the moving scene. We have developed a tracking vision board using a correlation chip [13]. The vision board consists of a transputer augmented with a special LSI chip (MEP [14]: Motion Estimation Processor) which performs local image block matching.
The inputs to the processor MEP are an image as a reference block and an image for a search window. The size of the reference block is up to 16 by 16 pixels. The size of the search window depends on the size of the reference block and is usually up to 32 by 32 pixels so that it can include 16 * 16 possible matches. The processor calculates 256 values of SAD (sum of absolute difference) between the reference block and 256 blocks in the search window and also finds the best matching block, that is, the one which has the minimum SAD value.
Block matching is very powerful when the target moves only in translation. However, the ordinary block matching method cannot track the target when it rotates. In order to overcome this difficulty, we developed a new method which follows up the candidate templates to real rotation of the target. The rotated template method first generates all the rotated target images in advance, and several adequate candidates of the reference template are selected and matched while tracking the scene in the front view. Figure 5 shows a balancing experiment in which the robot stands on a tilted board and the vision is tracking the scene in the front view. It remembers the vertical orientation of an object as the reference for visual tracking and generates several rotated images of the reference image. If the vision tracks the reference object using the rotated images, it can measure the body rotation. In order to keep the body balance, the robot feedback-controls its body rotation to control the center of the body gravity. The rotational visual tracker [15] can track the image at video rate.
5 Biped Walking
If a bipedal robot can control the center of gravity freely, it can perform a biped walk. As the robot shown in Figure 2 has degrees of freedom in the left and right directions at the ankle position, it can perform bipedal walking in a static way.
The motion sequence of one cycle in biped walking consists of eight phases as shown in Figure 6. One step consists of four phases: move-gravity-center-on-foot, lift-leg, move-forward-leg, place-leg. As the body is described in a solid model, the robot can generate a body configuration for move-gravity-center-on-foot according to the parameter of the height of the gravity center. After this movement, the robot can lift the other leg and move it forward. In lifting a leg, the robot has to control the configuration in order to keep the center of gravity above the supporting foot. As the stability in balance depends on the height of the gravity center, the robot selects suitable angles of the knees. Figure 7 shows a sequence of experiments of the robot in biped walking.
6 Rolling Over and Standing Up
Figure 8 shows the sequence of rolling over, sitting and standing up. This motion requires coordination between arms and legs. As the robot foot consists of a battery, the robot can make use of the weight of the battery for the roll-over motion. When the robot throws up the left leg and moves the left arm back and the right arm forward, it can get rotary moment around the body. If the body starts turning, the right leg moves back and the left foot returns its position to lie on the face. This rollover motion changes the body orientation from face up to face down. It can be verified by the orientation sensor.
After getting face down orientation, the robot moves the arms down to sit on two feet. This motion causes slip movement between hands and the ground. If the length of the arm is not enough to carry the center of gravity of the body onto feet, this sitting motion requires dynamic pushing motion by arms. The standing motion is controlled in order to keep the balance.
7 Integration through Building Sensor-Based Transition Net
In order to integrate the basic actions described above, we adopted a method to describe a sensor-based transition network in which transition is considered according to sensor status. Figure 9 shows a state transition diagram of the robot which integrates basic actions: biped walking, rolling over, sitting, and standing up. This integration provides the robot with capability of keeping walking even when it falls down.
The ordinary biped walk is composed by taking two states, Left-leg Fore and Right-leg Fore, successively. The poses in 'Lie on the Back' and 'Lie on the Face' are the same as the one in 'Stand'. That is, the shape of the robot body is the same but the orientation is different.
The robot can detect whether the robot lies on the back or the face using the orientation sensor. When the robot detects that it has fallen down, it changes the state to 'Lie on the Back' or 'Lie on the Front' by moving to the neutral pose. If the robot gets up from 'Lie on the Back', the motion sequence is planned to execute Roll-over, Sit and Stand-up motions. If the state is 'Lie on the Face', it does not execute Roll-over but moves arms up to perform the sitting motion.
8 Concluding Remarks
This paper has presented a two-armed bipedal robot which can perform statically biped walk, rolling over and standing up motions. The key to build such behaviors is the remote-brained approach. As the expe