Aasemoon'z Cluster: Group items tagged vision


Autonomous Satellite Chasers Can Use Robotic Vision to Capture Orbiting Satellites | Po... - 0 views

  • ASIROV, the Acoplamiento y Agarre de Satélites mediante Sistemas Robóticos basado en Visión (docking and capture of satellites using vision-based robotic systems), would use computer vision to autonomously chase down satellites in orbit for repair or removal. [Image of UC3M's ASIROV robotic satellite chaser prototype, courtesy of Universidad Carlos III de Madrid.] Spanish robotics engineers have devised a new weapon in the battle against zombie sats and space junk: an automated robotics system that employs computer vision technology and algorithmic wizardry to allow unmanned space vehicles to autonomously chase down, capture, and even repair satellites in orbit. Scientists at the Universidad Carlos III de Madrid (UC3M) created the system to allow the removal of rogue satellites from low Earth orbit, or the maintenance of satellites nearing the end of their service lives, prolonging their usefulness (and extending the value of large investments in satellite tech). Through a complex set of algorithms, space vehicles known as "chasers" could be placed into orbit with the mission of policing LEO, chasing down satellites that are damaged or have gone "zombie" and dealing with them appropriately.

Make Computers See with SimpleCV - The Open Source Framework for Vision - 0 views

  • So after all that you are probably asking, "What is SimpleCV?" It is an open source computer vision framework that lowers the barrier to entry for people across the globe to learn, develop, and use computer vision. A few open source vision libraries already exist, but their downside is that you have to be quite the domain expert, knowledgeable about vision systems, and fluent in cryptic lower-level languages like C. Where SimpleCV differs is that it is "simple". It has been designed with a web browser interface, which is familiar to Internet users everywhere. It talks to your webcam (which most computers and smartphones have built in) automatically. It works cross-platform (Windows, Mac, Linux, etc.). It uses the programming language Python rather than C, which greatly lowers the learning curve. It sacrifices some complexity for simplicity, which is needed for mass adoption of any new technology.
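As a taste of the API described above, here is a minimal sketch of SimpleCV's webcam-to-blobs workflow. It assumes SimpleCV is installed and a webcam is attached; exact behavior may vary between SimpleCV releases.

    from SimpleCV import Camera

    cam = Camera()              # finds the default webcam automatically
    img = cam.getImage()        # capture a single frame
    blobs = img.findBlobs()     # segment the frame into connected regions
    if blobs:                   # findBlobs() returns None when nothing is found
        blobs.draw()            # outline the detected regions on the frame
    img.show()                  # display the annotated frame

Note that capture, segmentation, and display each take a single line, which is the "simple" the framework is named for.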

TechOnline | Video and Vision Solutions Guide - 0 views

  • "Texas Instruments (TI) has a 25+ year history covering the video market from one end of the video chain to the other. Customers can leverage TI's expertise in video to launch differentiated products quickly and cost-effectively in any number of market segments. This comprehensive guide is a useful resource for developers of various video and vision products."

Oh, Those Robot Eyes! | h+ Magazine - 0 views

  • Willow Garage is organizing a workshop at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2010 in San Francisco to discuss the intersection of computer vision and human-robot interaction. Willow Garage is the hardware and open-source software organization behind the Robot Operating System (ROS) and the PR2 robot development platform. Here's a recent video from Willow Garage of work done at the University of Illinois on how robots can be taught to perceive images:

How computers can mimic human 3-D vision | KurzweilAI - 1 views

  • Researchers at Purdue University have developed two new computer-vision techniques, heat mapping and heat distribution, that mimic how humans perceive three-dimensional shapes. The techniques apply mathematical methods that let machines recognize three-dimensional objects no matter how they are twisted or bent, an advance that could help machines see more like people.
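The excerpt gives no implementation details, but the intuition behind a heat-based shape descriptor is easy to sketch: diffuse heat across the shape's mesh graph, and because diffusion follows the surface rather than straight-line distance in space, the result changes little when the shape is bent. The toy Python below is our own illustration of that idea, not Purdue's code.

    import numpy as np

    def heat_map(adjacency, source, steps=200, dt=0.01):
        """Diffuse unit heat from vertex `source` via du/dt = -L u."""
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency        # graph Laplacian of the mesh
        u = np.zeros(len(adjacency))
        u[source] = 1.0                       # all heat starts at the source
        for _ in range(steps):
            u = u - dt * (laplacian @ u)      # explicit Euler diffusion step
        return u                              # the "heat map" descriptor

    # Toy 4-vertex "mesh": a path graph 0-1-2-3. Bending the path in space
    # would not change this adjacency, so the heat map is pose-invariant.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(heat_map(A, source=0))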

Module aids Camera Link FPGA image processing | Industrial Control Designline - 0 views

  • National Instruments has released a vision module for the PXI platform that provides a high-performance parallel processing architecture for hardware-defined timing, control, and image pre-processing. The NI 1483 Camera Link adapter module, in combination with an NI FlexRIO field-programmable gate array (FPGA) board, offers a solution for embedding vision and control algorithms directly on FPGAs, which process and analyse an image in real time with little to no CPU intervention. The FPGAs can perform operations by pixel, line, and region of interest, and can implement many image-processing algorithms that are inherently parallel, including fast Fourier transforms (FFTs), thresholding, and filtering.
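The algorithms named above are good FPGA candidates precisely because each output depends only on one pixel or a small fixed neighbourhood. A hedged software sketch of the same three steps, using NumPy/SciPy in place of FPGA fabric, just to show their data-parallel shape:

    import numpy as np
    from scipy.ndimage import uniform_filter

    # Stand-in for a Camera Link frame (real data would come from the NI 1483).
    frame = np.random.randint(0, 256, (480, 640)).astype(np.float32)

    mask = frame > 128                        # thresholding: one compare per pixel
    smoothed = uniform_filter(frame, size=3)  # filtering: one 3x3 window per pixel
    spectrum = np.fft.fft2(frame)             # FFT: rows/columns process in parallel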

robots.net - New Model Mimics Human Vision Tasks - 1 views

  • Researchers at MIT's McGovern Institute for Brain Research are working on a new mathematical model that mimics the human brain's ability to identify objects. The model can predict human performance on certain visual-perception tasks, suggesting it is a good indication of what is actually happening in the brain. The researchers hope the new findings will make their way into future object-recognition systems for automation, mobile robotics, and other applications.

Vigilant camera eye - Research News 09-2010-Topic 6 - Fraunhofer-Gesellschaft - 0 views

  • An innovative camera system could in future enhance security in public areas and buildings. Smart Eyes works much like the human eye: the system analyzes recorded data in real time and immediately flags salient features and unusual scenes. "Goal, goal, goal!" The fans in the stadium are absolutely ecstatic and the uproar is enormous, so it's hardly surprising that security personnel fail to spot a brawl going on between a few spectators. Separating jubilant fans from scuffling hooligans is virtually impossible in such a situation. Special surveillance cameras that immediately spot anything untoward and identify anything out of the ordinary could provide a solution. Researchers from the Fraunhofer Institute for Applied Information Technology FIT in Sankt Augustin have now developed such a device as part of the EU project SEARISE ("Smart Eyes: Attending and Recognizing Instances of Salient Events"). The automatic camera system is designed to replicate human-like capabilities in identifying and processing moving images.
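SEARISE's attention model is far more elaborate, but the basic notion of flagging an "unusual scene" in real time can be illustrated with plain frame differencing. The OpenCV sketch below is our own illustration of the principle, not the Fraunhofer pipeline:

    import cv2

    cap = cv2.VideoCapture(0)                    # webcam as a stand-in video feed
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    for _ in range(300):                         # watch ~300 frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)           # pixels changed since last frame
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(motion) > 0.05 * motion.size:
            print("salient event: a large part of the scene is moving")
        prev = gray
    cap.release()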

Artificial Intelligence and Robotics: Bing augmented reality maps demo - 0 views

  • Microsoft Research, which brought us wonderful technologies such as the incredible Photosynth, continues to impress with a much improved web mapping application integrated with the company's new Bing search engine. During the TED 2010 conference, Microsoft engineer Blaise Aguera y Arcas demoed the new Bing augmented reality maps, showing real-time registration of video taken with a smartphone against street-view-style maps. He showed how live video can be overlaid on the static images and how additional information about the area can be accessed via a web interface. Much of this is made possible by the advanced computer vision technology developed over the past decade at Microsoft Research; the Seadragon technology is the back end that makes it possible to manipulate such vast amounts of data in real time. Microsoft has also integrated Photosynth and the WorldWide Telescope into its maps product. You are probably wondering what this has to do with robotics, other than being a very impressive application. I can imagine robots using Bing maps to stay localized within a city. One of the most difficult and important problems in robotics is Simultaneous Localization and Mapping (SLAM): Bing maps solve the mapping half, and the new vision techniques (with a bit of help from GPS) can be used to solve the localization half. The registered video could let a robot localize itself when it goes out to buy your weekly groceries. You can watch the 10-minute demo below; I bet it won't be long before Microsoft makes these new features available to us all for free.

Artificial Intelligence and Robotics: LuminAR to shine a light on the future - 0 views

  • You might think that some devices in the modern age have reached their maximum development level, such as the common desk lamp, but you would be wrong. Natan Linder, a student at the Massachusetts Institute of Technology (MIT), has created a robotic version that can not only light your room but also project internet pages onto your desk. It is an upgrade of the AUR lamp from 2007, which tracked movements around a desk or table and could alter the color, focus, and strength of its light to suit the user's needs. The LuminAR comes with those abilities and much more. The robotic arm can move about on its own, and combines a vision system with a pico projector, wireless computer, and camera. When turned on, the projector looks for a flat space around your room on which to display images. Since it can project more than one internet window, you can check your email and browse another website at the same time.

IEEE Spectrum: RoboCup Kicks Off in Singapore This Week - 1 views

  • Humans aren't the only ones playing soccer right now. In just two days, robots from world-renowned universities will compete in Singapore for RoboCup 2010. This is the other World Cup, where players range from 15-centimeter-tall Wall-E-like bots to adult-sized advanced humanoids. RoboCup, now in its 14th edition, is the world's largest robotics and artificial intelligence competition, with more than 400 teams from dozens of countries. The idea is to use the soccer bots to advance research in machine vision, multi-agent collaboration, real-time reasoning, sensor fusion, and other areas of robotics and AI. But its participants also aim to develop autonomous soccer-playing robots that will one day be able to play against humans. The RoboCup's mission statement:

robots.net - Robots: Programmable Matter - 0 views

  • The latest episode of the Robots Podcast looks at the following scenario: imagine being able to throw a handful of smart matter into a tank full of liquid and then pull out a ready-to-use wrench once the matter has assembled. This is the vision of this episode's guests, Michael Tolley and Jonas Neubert from the Computational Synthesis Laboratory run by Hod Lipson at Cornell University, NY. Tolley and Neubert give an introduction to programmable matter and then present their research on stochastic assembly of matter in fluid, including both simulation (see video above) and real-world implementation. Read on or tune in!

IEEE Spectrum: Japanese Snake Robot Goes Where Humans Can't - 0 views

  • Japanese robotics company HiBot has unveiled a nimble snake bot capable of moving inside air ducts and other narrow places where people can't, or don't want to, go. The ACM-R4H robot, designed for remote inspection and surveillance in confined environments, uses small wheels to move, but it can also slither, undulate, and even raise its head like a cobra. The new robot, which is half a meter long and weighs in at 4.5 kilograms, carries a camera and LEDs on its head for image acquisition and can be fitted with other end-effectors such as mechanical grippers or thermal/infrared vision systems. Despite its seemingly complex motion capabilities, "the control of the robot is quite simple and doesn't require too much training," says robotics engineer and HiBot cofounder Michele Guarnieri.

・ARMAR-III - 0 views

  • Continuing its work on a humanoid helper robot called ARMAR, the Collaborative Research Center 588 (Humanoid Robots) at the University of Karlsruhe began planning ARMAR-IIIa (blue) in 2006. It has 43 degrees of freedom (torso x3, 2 arms x7, 2 hands x8, head x7) and is equipped with position, velocity, and force sensors. The upper body has a modular design based on the average dimensions of a person, with 14 tactile sensors per hand. Like the previous versions, it moves on a mobile platform. In 2008 they built a slightly upgraded version of the robot called ARMAR-IIIb (red). Both robots use the Karlsruhe Humanoid Head, which has 2 cameras per eye (for near and far vision). The head has a total of 7 degrees of freedom (neck x4, eyes x3), 6 microphones, and a 6D inertial sensor.

Taking movies beyond Avatar - for under £100 - 1 views

  • A new virtual camera developed at the University of Abertay Dundee extends the pioneering work of James Cameron's blockbuster Avatar using a Nintendo Wii-like motion controller, all for less than £100. Avatar, the highest-grossing film of all time, used several completely new filming techniques to bring its ultra-realistic 3D action to life. Now computer games researchers have found a way of taking those techniques further using home computers and motion controllers. James Cameron invented a new way of filming called Simul-cam, in which the recorded image is processed in real time before it reaches the director's monitor screen. This allows actors in motion-capture suits to be instantly seen as the blue Na'vi characters, without days spent creating computer-generated images.

TechOnline | Introduction to NI LabVIEW Robotics - 0 views

  • NI LabVIEW Robotics is a software package that provides a complete suite of tools to help you rapidly design sophisticated robotics systems for medical, agricultural, automotive, research, and military applications. The LabVIEW Robotics Software Bundle includes all of the functionality you need, from multicore real-time and FPGA design capabilities to vision, motion, control design, and simulation. Watch an introduction and demonstration of LabVIEW Robotics.

PRODUCT HOW TO - Embedding multicore PCs for Robotics & Industrial Control | Industrial... - 0 views

  • PC-compatible industrial computers are increasing in computing power at a rapid rate thanks to the availability of multi-core microprocessor chips, and Microsoft Windows has become the de facto software platform for implementing human-machine interfaces (HMIs). PCs are also becoming more reliable. Given these trends, the practice of building robotic systems as complex multi-architecture, multi-platform systems is being challenged: it is now becoming possible to integrate all the functions of machine control and the HMI into a single platform without sacrificing processing performance or reliability. Through new developments in software, we are seeing industrial systems evolve to better integrate Windows with real-time functionality such as machine vision and motion control. Software support to simplify motion-control algorithm implementation already exists for the Intel processor architecture.

robots.net - Robotic Maid Makes Breakfast - 1 views

  • Mahru-Z is the robotic maid that can make breakfast! Given certain voice commands, the robot can perform functions such as working a microwave, delivering toast, and other tasks such as washing clothes. The robots see with stereoscopic vision, can identify objects, and can even decide what jobs need to be done with those objects. In the video, one robot appears to be tethered and the other does not, making me wonder if they are really self-contained. Also, one is wearing a dress and the other is not, so are they both maids, or is one a butler? Shouldn't they just be called robotic servants, or is that redundant? Regardless, although not apparently sentient, these do appear to be advanced robots. I only wonder if they washed their hands before and after handling the food.
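The post does not describe Mahru-Z's vision pipeline, but "seeing with stereoscopic vision" generally means recovering depth from the disparity between two camera views. A minimal OpenCV sketch of that step (left.png and right.png are placeholder filenames for a rectified stereo pair):

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)   # larger disparity = closer object

    # Scale to 8 bits so the depth map can be saved and inspected.
    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite("disparity.png", vis.astype("uint8"))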

・NAMO - 1 views

  • NAMO (Novel Articulated MObile platform) is a humanoid robot built by the Institute of Field Robotics (FIBO) at King Mongkut's University of Technology Thonburi in Thailand. FIBO is active in the RoboCup scene and has developed a wide range of robot types, including an experimental biped. NAMO was unveiled on March 29th, 2010, serving as FIBO's mascot as part of the university's 50th-anniversary celebrations. NAMO will be used to welcome people to the university and may be deployed at museums. Given its friendly appearance and functionality, it could be used to research human-robot interaction and communication. NAMO is 130 cm (4′3″) tall and has 16 degrees of freedom. It moves on a stable three-wheeled omnidirectional base and is equipped with a Blackfin camera for its vision system. It is capable of simple gesture recognition, automatically tracks humans or objects of interest visually, and can speak a few phrases in a child-like voice (in Thai).
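The post does not say how NAMO's tracker works; one common, lightweight approach on embedded cameras such as the Blackfin is color-blob tracking. An illustrative OpenCV sketch, not FIBO's implementation (the HSV range below is an arbitrary blue-ish target, purely a placeholder):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)
    for _ in range(300):                       # track for ~300 frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([100, 150, 50]),   # lower HSV bound
                                np.array([130, 255, 255]))  # upper HSV bound
        m = cv2.moments(mask)
        if m["m00"] > 0:                       # any target-colored pixels found?
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("target centroid at", cx, cy)  # steer the head toward this point
    cap.release()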

IEEE Spectrum: Smart Grid Proof? - 0 views

  • The vision of a smarter grid is of course a lovely thing to behold: an electric power system that's much more interactive, interoperable, reliable, and robust, even "self-healing". That's why so much excitement attended the news this time last year that the U.S. stimulus bill would contain billions of dollars in new funding to support smart grid construction, and the news six months later that the National Institute of Standards and Technology was issuing draft standards and a roadmap for completing standardization of the smart grid (the Framework and Roadmap for Smart Grid Interoperability Standards, issued in final form in January). And it is the reason, too, why such high expectations ride on the avalanche of smart meter installation projects launched in the last year.