Robots are learning to fold clothes and socks, fly helicopters and sew, but the robotic butler, pilot and surgeon are still beyond the reach of today's machines. That was the conclusion of a keynote by a leading artificial-intelligence researcher at the recent Embedded Vision Summit.
"There are a lot of limitations in robotics research, and a lot of problems still to be resolved," said Pieter Abbeel, an assistant professor of engineering at the University of California at Berkeley, after showing videos of the major advances his team has made so far. "Now we guide robots through demonstrations, but it would be better to let them learn on their own from YouTube videos."
Researchers today typically teach robots tasks lasting about 20 to 30 seconds. Their goal is a hierarchical planning system that lets a robot break a large job down into smaller tasks and complete them one by one. Further advances are needed in how robots perceive objects without resorting to learned classes, and in how they use probability to map the space in which they act.
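The article does not describe how such probabilistic mapping works in Abbeel's system; as a rough illustration of the general idea, the Python sketch below applies Bayes' rule to one cell of a toy occupancy grid. The grid size and the sensor-model values P_HIT and P_FALSE are invented for the example.

```python
import numpy as np

# Toy occupancy-grid update: each cell holds the probability that it is
# occupied, refined with Bayes' rule as (simulated) range readings come
# in. Sensor accuracy and grid size are made-up illustrative values.

P_HIT = 0.9          # assumed P(sensor reports "occupied" | cell occupied)
P_FALSE = 0.2        # assumed P(sensor reports "occupied" | cell free)

grid = np.full((10, 10), 0.5)   # start every cell maximally uncertain

def update_cell(prior, detected):
    """Bayesian update of one cell's occupancy probability."""
    if detected:
        likelihood_occ, likelihood_free = P_HIT, P_FALSE
    else:
        likelihood_occ, likelihood_free = 1 - P_HIT, 1 - P_FALSE
    return likelihood_occ * prior / (
        likelihood_occ * prior + likelihood_free * (1 - prior))

# Three consecutive "occupied" readings push one cell toward certainty.
p = grid[4, 4]
for _ in range(3):
    p = update_cell(p, detected=True)
grid[4, 4] = p
print(f"P(occupied) after 3 hits: {p:.3f}")   # ≈ 0.989
```

The point of the toy example is that repeated, consistent evidence drives a cell's occupancy probability toward certainty, which is how a robot's map of its workspace firms up as readings accumulate.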
Despite the limits, Abbeel showed robotic helicopters executing an impressive series of difficult maneuvers, including stationary flips and rolls. "Anything a pilot can do, our system can learn, and it can go beyond a pilot in being more accurate and repeatable," he said.
Abbeel's team used flight data from multiple human pilots to build models for the robots, then applied hidden Markov models (HMMs) and Kalman filters to refine those models. Even so, the robots had to learn each maneuver individually in separate short sessions.
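The keynote did not detail the team's filter design, but as a minimal sketch of what Kalman filtering contributes here, the Python example below smooths one noisy axis of a demonstration trajectory with a 1-D constant-velocity filter. The motion model and noise covariances are illustrative placeholders, not the team's actual parameters.

```python
import numpy as np

dt = 0.05                                  # sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 1e-3 * np.eye(2)                       # process noise covariance (guess)
R = np.array([[0.1]])                      # measurement noise covariance (guess)

x = np.zeros((2, 1))                       # state estimate [pos, vel]
P = np.eye(2)                              # estimate covariance

def kalman_step(z, x, P):
    """One predict/update cycle for a scalar position measurement z."""
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed in a noisy demonstration trace and keep the smoothed positions.
rng = np.random.default_rng(0)
true_pos = np.sin(np.linspace(0, 2 * np.pi, 200))
smoothed = []
for z in true_pos + rng.normal(0, 0.3, size=200):
    x, P = kalman_step(np.array([[z]]), x, P)
    smoothed.append(float(x[0, 0]))
```

In a pipeline like the one described, the smoothed demonstration trajectories, rather than the raw sensor traces, would be what feeds the learned maneuver models.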
In response to a colleague's challenge to apply artificial intelligence to robotic surgery, the team also developed methods to teach robots to tie a wide variety of knots and sew a few stitches on large fabrics.
U.S. surgeons guide robotic systems to tie more than 300,000 knots a year, but so far experiments in autonomous suturing succeed only about half the time. "That's not bad, but not the kind of thing you want to go into surgery with," Abbeel joked.
Abbeel praised the Willow Garage PR2 robots he uses in his lab: "They are amazing because they are so reliable and so good to do experiments with."

Doing the laundry: Abbeel's team showed robots folding towels and socks
Compiled from EE Times with permission; all rights reserved, reproduction prohibited.
Compiled by: Susan Hong
English original: DESIGN West: Robots study for butler role, by Rick Merritt
Related reading:
• How to build a robotic video surveillance system with the i.mx27
• Virginia Tech develops underwater surveillance robots for the U.S. Navy
• Open-source hardware is appealing, but is its business model viable?
Editor: Quentin