
Group items tagged: machine vision


otakuhacks

Data annotation - 0 views


data-science data annotations annotation machine-learning

started by otakuhacks on 10 Nov 20; no follow-up yet
Aasemoon =)

Autonomous Satellite Chasers Can Use Robotic Vision to Capture Orbiting Satellites | Po... - 0 views

  • ASIROV, the Acoplamiento y Agarre de Satélites mediante Sistemas Robóticos basado en Visión (Docking and Capture of Satellites through Vision-Based Robotic Systems), is UC3M's robotic satellite-chaser prototype. It would use computer vision technology to autonomously chase down satellites in orbit for repair or removal. (Image courtesy of Universidad Carlos III de Madrid.) Spanish robotics engineers have devised a new weapon in the battle against zombie-sats and space junk: an automated robotics system that employs computer vision technology and algorithmic wizardry to allow unmanned space vehicles to autonomously chase down, capture, and even repair satellites in orbit. Scientists at the Universidad Carlos III de Madrid (UC3M) created the system to allow for the removal of rogue satellites from low Earth orbit, or for the maintenance of satellites nearing the end of their lives, prolonging their service (and extending the value of large investments in satellite tech). Through a complex set of algorithms, space vehicles known as “chasers” could be placed into orbit with the mission of policing LEO, chasing down satellites that are damaged or have gone “zombie” and dealing with them appropriately.
Aasemoon =)

Make Computers See with SimpleCV - The Open Source Framework for Vision - 0 views

  • So after all that you are probably asking, “What is SimpleCV?” It is an open source computer vision framework that lowers the barriers to entry for people across the globe to learn, develop, and use it. There are currently a few open source vision libraries in existence, but their downside is that you have to be quite the domain expert, knowledgeable about vision systems, and versed in cryptic programming languages like C. Where SimpleCV differs is that it is “simple”. It has been designed with a web-browser interface, which is familiar to Internet users everywhere. It talks to your webcam (which most computers and smartphones have built in) automatically. It works cross-platform (Windows, Mac, Linux, etc.). It uses the programming language Python rather than C, which greatly lowers the software's learning curve. It sacrifices some complexity for simplicity, which is needed for mass adoption of any new technology.
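
As a rough illustration of that simplicity, here is a minimal SimpleCV sketch in Python that grabs a frame from the default webcam and looks for blobs; the calls are from SimpleCV's published API, though the input device and output are whatever your machine provides.

```python
# Minimal SimpleCV sketch: grab a webcam frame and highlight blobs.
# Assumes a working default camera is attached.
from SimpleCV import Camera

cam = Camera()            # open the default webcam
img = cam.getImage()      # capture a single frame
blobs = img.findBlobs()   # segment the frame and find connected regions
if blobs:
    blobs.draw()          # outline detected blobs on the frame
img.show()                # display the annotated frame
```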
Aasemoon =)

Oh, Those Robot Eyes! | h+ Magazine - 0 views

  • Willow Garage is organizing a workshop at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2010 in San Francisco to discuss the intersection of computer vision with human-robot interaction. Willow Garage is the hardware and open-source software organization behind the Robot Operating System (ROS) and the PR2 robot development platform. Here's a recent video from Willow Garage of work done at the University of Illinois on how robots can be taught to perceive images:
Aasemoon =)

Graspy PR2 robot learns to read | Computer Vision Central - 0 views

  • Researchers at the University of Pennsylvania are developing algorithms that enable robots to learn to read like a human toddler. Using a Willow Garage PR2 robot (nicknamed Graspy), the researchers demonstrate a robot's ability to learn to read anything from simple signs to full-length warnings. Graspy recognizes the shapes of letters and associates them with sounds. Part of the computer-vision challenge is reading hundreds of different fonts. More information is available in a PhysOrg article and on the ROS website.
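
Graspy's own reading pipeline isn't included in this excerpt; purely as a point of reference, the sketch below shows off-the-shelf character recognition in Python using the pytesseract wrapper for Tesseract OCR — an assumed stand-in for illustration, not the UPenn system.

```python
# Off-the-shelf OCR sketch (pytesseract + Tesseract), shown only as a
# rough analogue of the reading task; this is NOT the UPenn/Graspy system.
from PIL import Image
import pytesseract

img = Image.open("sign.jpg")              # hypothetical photo of a sign
text = pytesseract.image_to_string(img)   # recognize characters in the image
print(text)
```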
Aasemoon =)

ICT Results - Computers to read your body language? - 0 views

  • Can a computer read your body language? A consortium of European researchers thinks so, and has developed a range of innovative applications, from escalator safety to online marketing. The keyboard and mouse are no longer the only means of communicating with computers. Modern consumer devices will respond to the touch of a finger and even the spoken word, but can we go further still? Can a computer learn to make sense of how we walk and stand, to understand our gestures, and even to read our facial expressions? The EU-funded MIAUCE project set out to do just that. “The motivation of the project is to put humans in the loop of interaction between the computer and their environment,” explains project coordinator Chaabane Djeraba of CNRS in Lille. “We would like to have a form of ambient intelligence where computers are completely hidden,” he says. “This means a multimodal interface so people can interact with their environment. The computer sees their behaviour and then extracts information useful for the user.”
Aasemoon =)

PRODUCT HOW TO - Embedding multicore PCs for Robotics & Industrial Control | Industrial... - 0 views

  • PC-compatible industrial computers are rapidly increasing in computing power thanks to the availability of multicore microprocessor chips, and Microsoft Windows has become the de facto software platform for implementing human-machine interfaces (HMIs). PCs are also becoming more reliable. These trends are challenging the practice of building robotic systems as complex multi-architecture, multi-platform systems. It is now becoming possible to integrate all machine-control and HMI functions into a single platform without sacrificing processing performance or reliability. Through new developments in software, we are seeing industrial systems evolve to better integrate Windows with real-time functionality such as machine vision and motion control. Software support to simplify the implementation of motion control algorithms already exists for the Intel processor architecture.
Aasemoon =)

robots.net - New Model Mimics Human Vision Tasks - 0 views

  • Researchers at MIT's McGovern Institute for Brain Research are working on a new mathematical model that mimics the human brain's ability to identify objects. The model can predict human performance on certain visual-perception tasks, suggesting it is a good indication of what actually happens in the brain. The researchers hope the new findings will make their way into future object-recognition systems for automation, mobile robotics, and other applications.
otakuhacks

Transformers in NLP: Creating a Translator Model from Scratch | Lionbridge AI - 0 views

  • Transformers have now become the de facto standard for NLP tasks. Originally developed for sequence transduction tasks such as speech recognition, translation, and text-to-speech, transformers work by replacing recurrence and convolutions with attention mechanisms, making them much more parallelizable and efficient than previous architectures. And although transformers were developed for NLP, they've also been applied in computer vision and music generation. For all their wide and varied uses, however, transformers are still very difficult to understand, which is why I wrote a detailed post describing how they work at a basic level. It covers the encoder and decoder architecture and the whole dataflow through the different pieces of the neural network. In this post, we'll dig deeper into transformers by implementing our own English-to-German language translator.
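
At the heart of every transformer layer is scaled dot-product attention. The post's own code isn't reproduced in this excerpt, but a minimal sketch of that one operation, assuming PyTorch as the framework, looks like this:

```python
# Scaled dot-product attention, the core operation of a transformer layer.
# Minimal PyTorch sketch; tensor shapes are (batch, heads, seq_len, d_k).
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    d_k = q.size(-1)
    # Similarity of every query with every key, scaled to stabilize softmax.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        # Block disallowed positions (e.g. future tokens in the decoder).
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                   # weighted sum of the values
```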
Aasemoon =)

Scalable Object Recognition | Willow Garage - 0 views

  • Marius Muja from the University of British Columbia returned to Willow Garage this summer to continue his work on object recognition. In addition to working on an object detector that can scale to a large number of objects, he has also been designing a general object-recognition infrastructure. One problem with many object detectors is that they get slower as they learn new objects. Ideally, we want a robot that goes into an environment and is capable of collecting data and learning new objects by itself; in doing this, however, we don't want the robot to get progressively slower as it learns. Marius worked on an object detector called Binarized Gradient Grid Pyramid (BiGGPy), which uses the gradient information from an image to match it against a set of learned object templates. The templates are organized into a template pyramid. This tree structure has low-resolution templates at the root and higher-resolution templates at each lower level. During detection, only a fraction of this tree must be explored, which yields big speedups and allows the detector to scale to a large number of objects.
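
The BiGGPy detector itself lives in the Willow Garage/ROS codebase; purely to illustrate the coarse-to-fine idea described above, here is a hypothetical Python sketch of searching a template tree while pruning subtrees whose low-resolution template already matches poorly. All names and the similarity measure are assumptions for the example, not BiGGPy's actual implementation.

```python
# Coarse-to-fine search over a template tree: low-resolution templates at
# the root, finer refinements below. Illustrative only -- not BiGGPy itself.
import numpy as np

def similarity(template, patch):
    # Normalized cross-correlation as a stand-in similarity measure.
    t, p = template - template.mean(), patch - patch.mean()
    denom = np.linalg.norm(t) * np.linalg.norm(p)
    return float((t * p).sum() / denom) if denom else 0.0

class TemplateNode:
    def __init__(self, template, children=()):
        self.template = template        # template at this resolution level
        self.children = list(children)  # higher-resolution refinements

def search(node, pyramid, level=0, threshold=0.8):
    """pyramid[level] is the image patch at the matching resolution.
    Subtrees are pruned as soon as the coarse match is too weak, so only
    a small fraction of the tree is ever explored."""
    if similarity(node.template, pyramid[level]) < threshold:
        return []          # prune the whole subtree
    if not node.children:
        return [node]      # leaf: a concrete detection hypothesis
    hits = []
    for child in node.children:
        hits += search(child, pyramid, level + 1, threshold)
    return hits
```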
Aasemoon =)

3D Point Cloud Based Object Recognition System | Willow Garage - 0 views

  • The main focus of Bastian's work was the feature-extraction process for 3D data. One of his contributions was a novel interest-keypoint extraction method that operates on range images generated from arbitrary 3D point clouds. The method explicitly considers the borders of objects, identified by transitions from foreground to background. Bastian also developed a new feature-descriptor type, called NARF (Normal Aligned Radial Features), that takes the same information into account. Based on these feature matches, Bastian then worked on a process to create a set of potential object poses, adding spatial-verification steps to ensure the observations fit the sensor data.
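
To make the border idea concrete: a toy version of finding foreground-to-background transitions in a range image simply flags large depth jumps between neighboring pixels. This is a deliberate simplification of NARF's actual border extraction, with an assumed jump threshold.

```python
# Toy border detection in a range image: flag pixels where depth changes
# sharply between neighbors, i.e. foreground-to-background transitions.
# A simplification of NARF's border handling, not the real algorithm.
import numpy as np

def border_mask(range_image, jump=0.3):
    dz_x = np.abs(np.diff(range_image, axis=1))  # depth change along rows
    dz_y = np.abs(np.diff(range_image, axis=0))  # depth change along columns
    mask = np.zeros(range_image.shape, dtype=bool)
    mask[:, :-1] |= dz_x > jump
    mask[:-1, :] |= dz_y > jump
    return mask
```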
Aasemoon =)

Artificial Intelligence and Robotics: LuminAR to shine a light on the future - 0 views

  • You might think that some devices in the modern age, such as the common desk lamp, have reached their maximum level of development, but you would be wrong. Natan Linder, a student at the Massachusetts Institute of Technology (MIT), has created a robotic version that can not only light your room but also project internet pages onto your desk. It is an upgrade of the AUR lamp from 2007, which tracks movements around a desk or table and can alter the color, focus, and strength of its light to suit the user's needs. The LuminAR comes with those abilities, and much more. The robotic arm can move about on its own and combines a vision system with a pico projector, wireless computer, and camera. When turned on, the projector looks for a flat space in your room on which to display images. Since it can project more than one internet window, you can check your email and browse another website at the same time.
Aasemoon =)

IEEE Spectrum: RoboCup Kicks Off in Singapore This Week - 1 views

  • Humans aren't the only ones playing soccer right now. In just two days, robots from world-renowned universities will compete in Singapore for RoboCup 2010. This is the other World Cup, where players range from 15-centimeter-tall Wall-E-like bots to adult-sized advanced humanoids. The RoboCup, now in its 14th edition, is the world's largest robotics and artificial intelligence competition, with more than 400 teams from dozens of countries. The idea is to use the soccer bots to advance research in machine vision, multi-agent collaboration, real-time reasoning, sensor fusion, and other areas of robotics and AI. But its participants also aim to develop autonomous soccer-playing robots that will one day be able to play against humans. The RoboCup's mission statement: