Group items tagged: computer vision

Aasemoon =)

Make Computers See with SimpleCV - The Open Source Framework for Vision - 0 views

  • So after all that you are probably asking, “What is SimpleCV?” It is an open source computer vision framework that lowers the barrier to entry for people around the globe to learn, develop, and use vision systems. A few open source vision libraries already exist, but the downside is that you have to be quite the domain expert, knowledgeable about vision systems, and comfortable with cryptic programming languages like C. Where SimpleCV differs is that it is “simple”. It has been designed with a web browser interface, which is familiar to Internet users everywhere. It talks to your webcam (which most computers and smartphones have built in) automatically. It works cross-platform (Windows, Mac, Linux, etc.). It uses the programming language Python rather than C, greatly lowering the learning curve. It trades some complexity for simplicity, which is needed for mass adoption of any new technology.
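As a taste of the simplicity the excerpt describes, here is a minimal sketch of a SimpleCV script, assuming the SimpleCV package is installed and a webcam is attached; it grabs one frame, finds blobs, and displays the result.

    from SimpleCV import Camera

    cam = Camera()            # auto-detects the default webcam
    img = cam.getImage()      # grab a single frame
    blobs = img.findBlobs()   # segment connected regions in the frame
    if blobs:
        blobs.draw()          # outline the detected blobs on the frame
    img.show()                # pop up the (annotated) frame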
Aasemoon =)

Autonomous Satellite Chasers Can Use Robotic Vision to Capture Orbiting Satellites | Po... - 0 views

  • [Image: UC3M's ASIROV robotic satellite chaser prototype. Image courtesy of Universidad Carlos III de Madrid.] ASIROV, the Acoplamiento y Agarre de Satélites mediante Sistemas Robóticos basado en Visión (Docking and Capture of Satellites using Vision-based Robotic Systems), would use computer vision technology to autonomously chase down satellites in orbit for repair or removal. Spanish robotics engineers have devised a new weapon in the battle against zombie-sats and space junk: an automated robotics system that employs computer vision technology and algorithmic wizardry to allow unmanned space vehicles to autonomously chase down, capture, and even repair satellites in orbit. Scientists at the Universidad Carlos III de Madrid (UC3M) created the system to allow for the removal of rogue satellites from low Earth orbit, or the maintenance of satellites nearing the end of their lives, prolonging their service (and extending the value of large investments in satellite tech). Through a complex set of algorithms, space vehicles known as “chasers” could be placed into orbit with the mission of policing LEO, chasing down satellites that are damaged or have gone “zombie” and dealing with them appropriately.
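The article does not publish UC3M's algorithms, but the core idea of a vision-guided chase can be illustrated with a generic, hypothetical sketch: a pinhole-camera measurement of the target's offset in the image is turned into a proportional lateral velocity command. All names and numbers below are illustrative assumptions, not ASIROV's code.

    import numpy as np

    def lateral_command(offset_px, focal_px, range_m, gain=0.5):
        # Pinhole model: pixel offset -> angular bearing -> metric
        # lateral error at the estimated range, then a proportional
        # velocity command that steers the target toward image centre.
        offset = np.asarray(offset_px, dtype=float)
        lateral_error_m = range_m * offset / focal_px
        return -gain * lateral_error_m   # commanded velocity, m/s

    # Target seen 40 px right and 10 px above centre, roughly 25 m away.
    print(lateral_command((40.0, -10.0), focal_px=800.0, range_m=25.0))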
Aasemoon =)

ICT Results - Computers to read your body language? - 0 views

  • Can a computer read your body language? A consortium of European researchers thinks so, and has developed a range of innovative solutions, from escalator safety to online marketing. The keyboard and mouse are no longer the only means of communicating with computers. Modern consumer devices will respond to the touch of a finger and even the spoken word, but can we go further still? Can a computer learn to make sense of how we walk and stand, to understand our gestures, and even to read our facial expressions? The EU-funded MIAUCE project set out to do just that. "The motivation of the project is to put humans in the loop of interaction between the computer and their environment,” explains project coordinator Chaabane Djeraba of CNRS in Lille. “We would like to have a form of ambient intelligence where computers are completely hidden,” he says. “This means a multimodal interface so people can interact with their environment. The computer sees their behaviour and then extracts information useful for the user."
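MIAUCE's own software is not shown in the excerpt, but the first step of any such system is locating people in the camera image. A minimal sketch using OpenCV's stock Haar-cascade face detector (the image path is a placeholder):

    import cv2

    # Load the frontal-face detector that ships with OpenCV.
    cascade = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade)

    frame = cv2.imread("scene.jpg")                  # placeholder input
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"{len(faces)} face(s) found")

Gesture and expression analysis would build on such detections, but that is far beyond this sketch.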
Aasemoon =)

Oh, Those Robot Eyes! | h+ Magazine - 0 views

  • Willow Garage is organizing a workshop at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2010 in San Francisco to discuss the intersection of computer vision with human-robot interaction. Willow Garage is the hardware and open source software organization behind the Robot Operating System (ROS) and the PR2 robot development platform. Here’s a recent video from Willow Garage of work done at the University of Illinois on how robots can be taught to perceive images.
Aasemoon =)

Graspy PR2 robot learns to read | Computer Vision Central - 0 views

  • Researchers at the University of Pennsylvania are developing algorithms to enable robots to learn to read like a human toddler. Using a Willow Garage PR2 robot (nicknamed Graspy), the researchers demonstrate a robot's ability to learn to read anything from simple signs to full-length warnings. Graspy recognizes the shapes of letters and associates them with sounds. Part of the computer vision challenge is reading hundreds of different fonts. More information is available in a PhysOrg article and from the ROS website.
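The excerpt does not include UPenn's code; as a rough stand-in for the character-recognition step, here is a sketch using the off-the-shelf Tesseract engine via pytesseract (the image path is a placeholder, and this sidesteps the letters-to-sounds learning that Graspy performs):

    from PIL import Image
    import pytesseract  # requires the Tesseract OCR engine installed

    # Read a photo of a sign and extract whatever text Tesseract finds.
    sign = Image.open("sign.jpg")            # placeholder image path
    text = pytesseract.image_to_string(sign)
    print(text.strip())

Handling hundreds of fonts is exactly what makes general OCR engines like Tesseract non-trivial to build.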
otakuhacks

Transformers in NLP: Creating a Translator Model from Scratch | Lionbridge AI - 0 views

  • Transformers have now become the de facto standard for NLP tasks. Originally developed for sequence transduction tasks such as speech recognition, translation, and text-to-speech, transformers rely on attention mechanisms rather than the recurrent or convolutional architectures that preceded them, making them much more efficient than previous approaches. And although transformers were developed for NLP, they've also been applied in fields such as computer vision and music generation. However, for all their wide and varied uses, transformers are still very difficult to understand, which is why I wrote a detailed post describing how they work on a basic level. It covers the encoder and decoder architecture and the whole dataflow through the different pieces of the neural network. In this post, we'll dig deeper into transformers by implementing our own English-to-German language translator; a miniature sketch of such a model follows below.
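To make the encoder-decoder architecture concrete, here is a toy model built on PyTorch's stock nn.Transformer; it is not the post's actual translator. Vocabulary sizes and hyperparameters are illustrative assumptions, and positional encodings are omitted for brevity.

    import torch
    import torch.nn as nn

    class ToyTranslator(nn.Module):
        """Minimal encoder-decoder translator around nn.Transformer."""
        def __init__(self, src_vocab=8000, tgt_vocab=8000, d_model=512):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, d_model)
            self.tgt_embed = nn.Embedding(tgt_vocab, d_model)
            self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                              num_encoder_layers=6,
                                              num_decoder_layers=6)
            self.generator = nn.Linear(d_model, tgt_vocab)

        def forward(self, src, tgt):
            # Causal mask: each target position attends only to the past.
            tgt_mask = self.transformer.generate_square_subsequent_mask(
                tgt.size(0))
            # nn.Transformer expects (seq_len, batch, d_model) by default.
            out = self.transformer(self.src_embed(src),
                                   self.tgt_embed(tgt), tgt_mask=tgt_mask)
            return self.generator(out)   # logits over the target vocab

    model = ToyTranslator()
    src = torch.randint(0, 8000, (12, 2))   # 12 English tokens, batch of 2
    tgt = torch.randint(0, 8000, (10, 2))   # 10 German tokens so far
    logits = model(src, tgt)                # shape: (10, 2, 8000)

Training would add a cross-entropy loss between these logits and the target tokens shifted by one position.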
Aasemoon =)

PRODUCT HOW TO - Embedding multicore PCs for Robotics & Industrial Control | Industrial... - 0 views

  • PC-compatible industrial computers are increasing in computing power at a rapid rate thanks to multi-core microprocessor chips, and Microsoft Windows has become the de facto software platform for implementing human-machine interfaces (HMIs). PCs are also becoming more reliable. These trends are challenging the practice of building robotic systems as complex multi-architecture, multi-platform systems. It is now becoming possible to integrate all the functions of machine control and HMI into a single platform without sacrificing processing performance or reliability. Through new developments in software, we are seeing industrial systems evolve to better integrate Windows with real-time functionality such as machine vision and motion control. Software support to simplify the implementation of motion control algorithms already exists for the Intel processor architecture.
Aasemoon =)

IEEE Spectrum: Autonomous Vehicle Driving from Italy to China - 0 views

  • The Russian policeman waved at the orange van zigzagging around the empty plaza, ordering it to stop. The van didn’t, so the officer stepped closer to address the driver. But there was no driver. The van is an autonomous vehicle developed at the University of Parma’s Artificial Vision and Intelligent Systems Laboratory, known as VisLab. Crammed with computers, cameras, and sensors, the vehicle is capable of detecting cars, lanes, and obstacles, and of driving itself. The VisLab researchers, tired of testing their vehicles in laboratory conditions, decided to set out on a real-world test drive: a 13,000-kilometer, three-month intercontinental journey from Parma to Shanghai. The group is now about halfway through the trip, which started in July and will end in late October at the 2010 World Expo in China. (See real-time location and live video.)
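VisLab's actual pipeline is not described in the excerpt, but the classic textbook approach to lane detection gives a feel for the problem: edge detection followed by a probabilistic Hough transform. A sketch with OpenCV (the frame path is a placeholder):

    import cv2
    import numpy as np

    frame = cv2.imread("road.jpg")                  # placeholder frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                # binary edge map

    # Probabilistic Hough transform: fit line segments to the edges.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)

A production system would add perspective correction, temporal filtering, and lane-model fitting on top of raw segments like these.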
Aasemoon =)

Artificial Intelligence and Robotics: LuminAR to shine a light on the future - 0 views

  • You might think that some devices in the modern age have reached their maximum development level, such as the common desk lamp, but you would be wrong. Natan Linder, a student at the Massachusetts Institute of Technology (MIT), has created a robotic version that can not only light your room but also project internet pages onto your desk. It is an upgrade of the AUR lamp from 2007, which tracks movements around a desk or table and can alter the color, focus, and strength of its light to suit the user’s needs. The LuminAR comes with those abilities, and much more. The robotic arm can move about on its own and combines a vision system with a pico projector, wireless computer, and camera. When turned on, the projector will look for a flat space in your room on which to display images. Since it can project more than one internet window, you can check your email and browse another website at the same time.
Aasemoon =)

robots.net - Robots: Programmable Matter - 0 views

  • The latest episode of the Robots Podcast looks at the following scenario: imagine being able to throw a handful of smart matter into a tank full of liquid and then pulling out a ready-to-use wrench once the matter has assembled. This is the vision of this episode's guests, Michael Tolley and Jonas Neubert from the Computational Synthesis Laboratory run by Hod Lipson at Cornell University, NY. Tolley and Neubert give an introduction to programmable matter and then present their research on stochastic assembly of matter in fluid, including both simulation and real-world implementation. Read on or tune in!
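Their actual simulator is not reproduced in the episode notes; as a toy illustration of stochastic assembly, the sketch below lets parts diffuse randomly on a grid and bond irreversibly when they touch a growing seed structure. Everything here is an invented miniature, not the Cornell model.

    import random

    # Toy stochastic assembly: parts diffuse in a 2-D "fluid" and bond
    # irreversibly when they drift next to the growing structure.
    SIZE, PARTS, STEPS = 21, 40, 50000
    structure = {(SIZE // 2, SIZE // 2)}           # seed component
    free = [(random.randrange(SIZE), random.randrange(SIZE))
            for _ in range(PARTS)]

    for _ in range(STEPS):
        if not free:
            break                                   # everything bonded
        i = random.randrange(len(free))
        x, y = free[i]
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % SIZE, (y + dy) % SIZE     # one diffusion step
        if (x, y) in structure:
            continue                                # cell already solid
        if {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)} & structure:
            structure.add((x, y))                   # bond on contact
            free.pop(i)
        else:
            free[i] = (x, y)

    print(f"assembled {len(structure)} of {PARTS + 1} components")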
Aasemoon =)

Kinect-enabled robotic telepresence | Computer Vision Central - 0 views

  • Taylor Veltrop used a Kinect to read his arm movements, which were then carried out by a robot. The robot was programmed using Willow Garage's open-source Robot Operating System (ROS). As Kit Eaton suggests, this quick experiment illustrates the path toward robotic avatars.
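No code accompanies the bookmark, but the plumbing of such a teleoperation demo in rospy might look like the sketch below. The topic names, message types, and the hand-height-to-joint-angle mapping are all hypothetical assumptions, not Veltrop's setup.

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import PoseStamped
    from std_msgs.msg import Float64

    def hand_callback(msg):
        # Map tracked hand height (metres) onto a clamped joint angle.
        angle = max(-1.0, min(1.0, msg.pose.position.z - 1.0))
        shoulder_pub.publish(Float64(angle))

    rospy.init_node("kinect_teleop")
    # Hypothetical topics: real names depend on the skeleton-tracking
    # driver and the robot's controllers.
    shoulder_pub = rospy.Publisher("/arm/shoulder_cmd", Float64,
                                   queue_size=1)
    rospy.Subscriber("/skeleton/left_hand", PoseStamped, hand_callback)
    rospy.spin()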
Aasemoon =)

Urus Project - 0 views

  • In this project we want to analyze and test the idea of incorporating a network of robots (robots, intelligent sensors, devices, and communications) to improve quality of life in urban areas. The URUS project focuses on designing a network of robots that cooperatively interact with human beings and the environment for tasks of assistance, transportation of goods, and surveillance in urban areas. Specifically, our objective is to design and develop a cognitive network robot architecture that integrates cooperating urban robots, intelligent sensors, intelligent devices, and communications.
Aasemoon =)

Robots with a human touch - A*STAR Research - 0 views

  • In recent years, ‘social’ robots—cleaning robots, nursing-care robots, robot pets and the like—have started to penetrate people’s everyday lives. Saerbeck and other robotics researchers are now scrambling to develop more sophisticated robotic capabilities that can reduce the ‘strangeness’ of robot interaction. “When robots come to live in a human space, we need to take care of many more things than for manufacturing robots installed on the factory floor,” says Haizhou Li, head of the Human Language Technology Department at the A*STAR Institute for Infocomm Research. “Everything from design to the cognitive process needs to be considered.”
otakuhacks

Data annotation - 0 views

data-science data annotations annotation machine-learning

started by otakuhacks on 10 Nov 20 no follow-up yet