Advanced Concepts Team / Group items tagged: algorithms


Thijs Versloot

HealthMap software flagged Ebola 9 days before outbreak announced - 0 views

  •  
    HealthMap uses algorithms to scour tens of thousands of social media sites, local news, government websites, infectious-disease physicians' social networks and other sources to detect and track disease outbreaks. Sophisticated software filters irrelevant data, classifies the relevant information, identifies diseases and maps their locations with the help of experts.
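  •  
    For the curious, a minimal sketch of the "filter irrelevant data, then classify" idea in Python with scikit-learn. The headlines and labels below are invented for illustration; HealthMap's actual pipeline and sources are of course far more elaborate.

    # Toy filter-and-classify stage: score scraped text items by how
    # likely they are to be disease-related. All data here is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reports = [
        "hemorrhagic fever cases reported near border clinic",
        "hospital sees spike in acute respiratory infections",
        "local football team wins regional championship",
        "new shopping mall opens downtown next week",
    ]
    labels = [1, 1, 0, 0]  # 1 = disease-related, 0 = irrelevant noise

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(reports, labels)

    # Score a fresh item scraped from a news feed.
    print(classifier.predict_proba(
        ["clinic reports unexplained fever deaths in rural district"])[:, 1])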
johannessimon81

Google combines skycrane, VTOL and lifting wing to make drone deliveries - 6 views

  •  
    Nice video featuring the technology. Plus it comes with a good soundtrack! Google's project wing uses a lifting wing concept (more fuel efficient than normal airplane layouts and MUCH more efficient than quadrocopters) but it equips the plane with engines strong enough to hover in a nose up position, allowing vertical landing and takeoff. For the delivery of packages the drone does not even need to land - it can lower them on a wire - much like the skycrane concept used to deliver the Curiosity rover on Mars. Not sure if the skycrane is really necessary but it is certainly cool. Anyways, the video is great for its soundtrack alone! ;-P
  • ...4 more comments...
  •  
    could we just use genetic algorithms to evolve these shapes and layouts? :P
  •  
    > Not sure if the skycrane is really necessary but it is certainly cool. I think apart from coolness using a skycrane helps keep the rotating knives away from the recipient...
  •  
    Honest question: are we ever going to see this in practice? I mean, besides some niche application somewhere, isn't it fundamentally flawed? Or do I need to keep my window open on the 3rd floor without a balcony when I order something from DX? It's pretty cool, yes, but practical?
  •  
    Package delivery is indeed more complicated than it may seem at first sight, although solutions are possible, for instance by restricting delivery to distribution centers. What we really need, of course, is some really efficient and robust AI to navigate without any problems in urban areas : ) The hybrid is interesting since it combines the advantages of Vertical Takeoff and Landing (and hover) with a wing for more efficient forward flight. Challenges lie in controlling the vehicle at any angle, and in all that this entails for the higher levels of control. Our lab first used this concept a few years ago for the DARPA UAVforge challenge, and we had two hybrids in our entry last year for the IMAV 2013 (for some shaky images: https://www.youtube.com/watch?v=Z7XgRK7pMoU ).
  •  
    Fair enough, but even assuming advanced/robust/efficient AI, why would you use a drone? Do we envision hundreds of drones above our heads in the street instead of UPS vans or postmen? Delivering letters might be more easily achievable, but I am not so sure personal delivery will take this route. On the other hand, if the system worked smoothly, I can imagine being sent a mail asking whether I'm home (or they might know already from my personal GPS tracker) and then being notified that they are launching my DVD and it will come crashing into my door in 5 min.
  •  
    I'm more curious how they're planning to keep people from stealing the drones. I could do with a drone army myself, and cheap Amazon or Google drones flying about sounds like a decent source.
dejanpetkow

Metamaterials + Genetic algorithm - 3 views

  •  
    ...and this is what comes out of this combination.
Athanasia Nikolaou

Neural Networks (!) in OLCI - ocean colour sensor onboard Sentinel 3 - 3 views

  •  
    Not an easily digestible piece of ESA documentation, but it proves Paul's point. And yes, they have already planned to train neural networks on a database of different water types, so that the satellite figures out, from the combined retrieval of backscattering and absorption = f(λ), which type of water it is looking at. The type of water relates to the optical clarity of the water, a variable called turbidity. We could do this as well for mapping iron-fertilization locations if we find their spectral signature. Lab time?????
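  •  
    To make the idea concrete, a toy sketch (not the OLCI processor!) of training a small neural network to map spectral features to a water-type class. The data is synthetic and the class structure is invented.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_bands, n_types = 8, 3   # e.g. 8 wavelengths; clear/moderate/turbid

    # Fake spectra: each water type gets a distinct mean spectral level.
    X = np.vstack([rng.normal(loc=t, scale=0.3, size=(200, n_bands))
                   for t in range(n_types)])
    y = np.repeat(np.arange(n_types), 200)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0)
    net.fit(X, y)
    print("training accuracy:", net.score(X, y))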
jcunha

Training and operation of an integrated neural network based on memristors - 0 views

  •  
    Almost in time for the workshop last week! This new Nature paper (e-mail me for the full paper) claims training and use of a neural network implemented with metal-oxide memristors, without selector CMOS. They used it to implement a delta-rule algorithm for classification of 3x3-pixel black-and-white letters (a minimal software sketch of the delta rule follows these comments). Very impressive work!!!!
  •  
    For those not that much into the topic, see Nature's News & Views piece www.nature.com/nature/journal/v521/n7550/full/521037a.html?WT.ec_id=NATURE-20150507 where they feature this article.
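  •  
    As promised above, a software-only sketch of the delta rule (Widrow-Hoff) on 3x3 binary patterns. The letter bitmaps are illustrative stand-ins (the paper's exact patterns may differ), and in the paper the weight matrix lives in memristor conductances rather than a NumPy array.

    import numpy as np

    # 3x3 bitmaps (flattened row by row), one prototype per class.
    letters = {
        "z": [1, 1, 1, 0, 1, 0, 1, 1, 1],
        "v": [1, 0, 1, 1, 0, 1, 0, 1, 0],
        "n": [1, 0, 1, 1, 1, 1, 1, 0, 1],
    }
    X = np.array(list(letters.values()), dtype=float)
    T = np.eye(len(letters))                  # one-hot targets
    W = np.zeros((X.shape[1], len(letters)))  # in hardware: conductances
    eta = 0.1                                 # learning rate

    for _ in range(100):
        for x, t in zip(X, T):
            y = x @ W                         # linear output
            W += eta * np.outer(x, t - y)     # delta rule: dW = eta*x*(t-y)

    print("predicted classes:", np.argmax(X @ W, axis=1))  # expect [0 1 2]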
jcunha

Maze-solving automatons can repair broken circuits - 1 views

  •  
    Researchers in Bangalore, India, together with the Indian Space Research Organisation, came up with an intelligent self-healing algorithm that can locate open-circuit faults and repair them in real time. They used an insulating silicone oil containing conductive particles: whenever a fault occurs, an electric field develops across it, causing the fluid to move, 'thermodynamic automaton' style, and repair the fault. The researchers make clear this could be an advantage for electronics in harsh environments, such as space satellites.
Thijs Versloot

Bicycle airbag #howitworks - 2 views

  •  
    Thousands of cycling accidents were re-enacted using stunt riders and crash-test dummies to collect the specific movement patterns of cyclists in accidents. In parallel, normal cycling data was collected using test cyclists wearing Hövding in everyday cycling. Based on this collected data, they have developed an algorithm that can distinguish normal cycling from accidents. As you don't want the £399 device to inflate when taking a sharp corner...
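  •  
    Hövding's trained algorithm is proprietary, but the gist can be sketched: flag an accident only when inertial readings stay outside the normal-cycling envelope for several consecutive samples, so a single pothole jolt doesn't fire the airbag. The thresholds and data below are all invented.

    import numpy as np

    def detect_accident(acc_magnitude, threshold_g=4.0, min_samples=5):
        """acc_magnitude: 1-D array of acceleration magnitudes in g at a
        fixed sample rate. Returns True only if `min_samples` consecutive
        readings exceed the threshold."""
        run = 0
        for above in acc_magnitude > threshold_g:
            run = run + 1 if above else 0
            if run >= min_samples:
                return True
        return False

    normal_ride = np.abs(np.random.default_rng(1).normal(1.0, 0.5, 200))
    crash = np.concatenate([normal_ride[:100], np.full(10, 6.0)])
    print(detect_accident(normal_ride), detect_accident(crash))  # False True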
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could for example teach it what the nuclear reactors, missiles or other weapons you don't want around look like on satellite pictures, and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically (see the sketch after this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, there was a big trend of CUDA GPU vs FPGA for hardware-accelerated image processing. Most of it orbited around discussing who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio-telescope institute in the Netherlands about the solution they were working on (GPU CUDA). I gathered that NVIDIA GPUs suit best applications that do not rely on custom hardware, having the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of rad-hardness of NVIDIA's GPUs is... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes). I think that for a specific purpose like the one you mentioned, this FPGA vs GPU question should be assessed before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion on how to make FPGA hardware-acceleration solutions easier to use for the layman is starting, btw: http://reconfigurablecomputing4themasses.net/.
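  •  
    To give the seal-counting idea above some shape, a hedged PyTorch sketch: slice the scene into tiles, train a small CNN to label each tile seal/no-seal, and count the positives. The random tensors stand in for real GeoEye imagery and annotations, which we obviously don't have here.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 8 * 8, 2),        # two classes: background / seal
    )

    tiles = torch.randn(64, 1, 32, 32)   # 64 fake 32x32 grayscale tiles
    labels = torch.randint(0, 2, (64,))  # fake annotations

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(10):                  # a few toy training steps
        optimizer.zero_grad()
        loss = loss_fn(model(tiles), labels)
        loss.backward()
        optimizer.step()

    predicted = model(tiles).argmax(dim=1)
    print("estimated seal count in this scene:", int(predicted.sum()))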
jcunha

Where Life Meets Light: Bio-Inspired Photonics - 0 views

  •  
    Octopus and optoelectronics camouflage, light bugs and LEDs, or spider webs and touch screens, ... a whole cool bunch of biomimetic stuff
  •  
    See also the referenced work, "Light-extraction enhancement for light-emitting diodes: a firefly-inspired structure refined by the genetic algorithm" - quite cool! https://pure.fundp.ac.be/portal/files/11946897/paper89.pdf
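  •  
    For readers who have never met one: below is a minimal genetic algorithm of the kind the paper uses, evolving a vector of structure parameters (think heights of a corrugated LED coating) against a toy fitness function. The paper's real fitness was simulated light-extraction efficiency, not this stand-in.

    import random

    def fitness(genome):
        # Stand-in objective; peaks when every parameter equals 0.7.
        return -sum((g - 0.7) ** 2 for g in genome)

    def evolve(pop_size=30, genome_len=10, generations=100,
               mutation_rate=0.1, elite=2):
        pop = [[random.random() for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            next_pop = pop[:elite]                          # elitism
            while len(next_pop) < pop_size:
                a, b = random.sample(pop[:pop_size // 2], 2)  # parents
                cut = random.randrange(1, genome_len)         # crossover
                child = a[:cut] + b[cut:]
                for i in range(genome_len):                   # mutation
                    if random.random() < mutation_rate:
                        child[i] = random.random()
                next_pop.append(child)
            pop = next_pop
        return max(pop, key=fitness)

    best = evolve()
    print("best genome:", [round(g, 2) for g in best])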
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research a year ago; we dropped it as no one was really considering it... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
  • ...6 more comments...
  •  
    Q1: For the time being, for what purposes are computers mainly used on-board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies on the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is and what can be achieved with it (like: the result is probabilistic but falls within a defined range of precision) - a toy simulation of the idea follows this thread. Did they build a complete chip, or at least a sub-circuit, or still only logic gates?
  •  
    Satellites use old CPUs also because, with the trend towards higher power, modern CPUs are not very convenient from a system-design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation too), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at the level of (say) 10^-6. All in all you are right: a first study should assess which applications this would be useful for at all... and at what precision / power levels.
  •  
    The interest of this could be a high fault tolerance for some math operations, ... which would have the effect of simplifying the coders' job! I don't think this is a good idea regarding CPU power consumption (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance, a LEON SPARC (e.g. used on some ESA platforms) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings on a usual 10-15 kW spacecraft.
  •  
    What about speed then? Seven times faster could allow real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s... would a higher velocity be possible with faster processors?). Another issue is the radiation tolerance of the technology... if PCMOS is more tolerant to radiation it could be space-qualified more easily.
  •  
    I don't remember the speed factor, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also include the 'hardened' phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does this technology need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
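  •  
    As promised above, a back-of-the-envelope simulation of the probabilistic-computing trade-off: a 16-bit ripple-carry adder whose gates flip their output with probability p, with the mean relative error measured over random inputs. The gate-flip failure model is my own crude assumption, not the actual PCMOS design.

    import random

    def noisy_bit(b, p):
        return b ^ (random.random() < p)

    def noisy_add(a, b, bits=16, p=0.01):
        carry, result = 0, 0
        for i in range(bits):
            x, y = (a >> i) & 1, (b >> i) & 1
            s = noisy_bit(x ^ y ^ carry, p)                    # sum gate
            carry = noisy_bit((x & y) | (carry & (x | y)), p)  # carry gate
            result |= s << i
        return result

    random.seed(0)
    for p in (0.0, 0.001, 0.01, 0.05):
        errs = []
        for _ in range(2000):
            a, b = random.randrange(1 << 15), random.randrange(1 << 15)
            errs.append(abs(noisy_add(a, b, p=p) - (a + b)) / (a + b + 1))
        print(f"p={p}: mean relative error = {sum(errs) / len(errs):.4f}")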
Joris _

SPACE.com -- Venus Probe's Problems May Cause Japan to Scale Back - 0 views

  • "We have to be more conservative to plan our next planetary mission, so it will never fail in any aspect."
  • the probe's initial failure will have a big impact on how JAXA plans future planetary missions
  • hew to more conservative ideas in the near future
  •  
    what a shame! ambition and innovation have not been fairly rewarded ...
  • ...1 more comment...
  •  
    Did you try to run your algorithm on their problem as Dario suggested? I'm very curious!
  •  
    I didn't have time yet. But formulating the failure with an MTBF or a FIT rate, you can easily imagine a more robust solution. Instead of one single burn, you would make several smaller burns - it will take more time and require more fuel, though (a toy reliability model follows this thread). Another "robust" approach is to consider weak-stability-boundary capture. Again it takes time, but the chances of failure are lessened.
  •  
    would be a pity indeed!
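  •  
    The toy reliability model mentioned above, with assumptions that are mine and not JAXA's: the engine fails at a constant rate per minute of burn (an MTBF-style model), a failure aborts only the current burn, and a missed smaller burn can be re-attempted once on a later orbit at the cost of extra time and fuel. The numbers (lam, total_burn) are invented.

    import math

    lam = 0.01          # assumed failure rate per minute of burn
    total_burn = 12.0   # minutes of burn needed for the total delta-v

    def p_burn_ok(minutes):
        return math.exp(-lam * minutes)   # survival of one burn

    # N smaller burns, each re-attemptable once if it fails.
    for n in (1, 3, 6):
        p_one = p_burn_ok(total_burn / n)
        p_with_retry = 1 - (1 - p_one) ** 2      # succeed on try 1 or 2
        print(f"{n} burns: P(capture) = {p_with_retry ** n:.4f}")

    # One critical burn, no recovery (the Akatsuki situation).
    print(f"single un-retryable burn: P(capture) = {p_burn_ok(total_burn):.4f}")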
Luís F. Simões

Pattern | CLiPS - 2 views

  • Pattern is a web mining module for the Python programming language. It bundles tools for data retrieval (Google + Twitter + Wikipedia API, web spider, HTML DOM parser), text analysis (rule-based shallow parser, WordNet interface, syntactical + semantical n-gram search algorithm, tf-idf + cosine similarity + LSA metrics) and data visualization (graph networks).
  •  
    Intuitive, well documented, and very powerful. A library to keep an eye on. Check the example Belgian elections, June 13, 2010 - Twitter opinion mining
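  •  
    Rather than guess at Pattern's exact calls, here is the tf-idf + cosine-similarity machinery the module bundles, written out directly for three toy documents.

    import math
    from collections import Counter

    docs = ["the quick brown fox", "the lazy brown dog", "quick quick fox"]
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for doc in tokenized for w in doc})

    def tfidf(doc):
        tf = Counter(doc)
        return [
            (tf[w] / len(doc)) *
            math.log(len(tokenized) / sum(w in d for d in tokenized))
            for w in vocab
        ]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (math.sqrt(sum(a * a for a in u)) *
                math.sqrt(sum(b * b for b in v)))
        return dot / norm if norm else 0.0

    vectors = [tfidf(d) for d in tokenized]
    print("sim(doc0, doc1) =", cosine(vectors[0], vectors[1]))
    print("sim(doc0, doc2) =", cosine(vectors[0], vectors[2]))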
Joris _

Tracking Whale Sharks With Astronomical Algorithms | Wired Science | Wired.com - 3 views

  • The equations were developed for astronomers using the Hubble telescope; Holmberg's crew adapted them for biologists studying Earth's biggest fishes
  • Holmberg also hopes that other programmers will follow his lead and lend their coding skills to worthy projects. "Pick the species or concern you're most passionate about, pick the researchers who are working on it, and identify their technical needs," he said. "I'm not even a great programmer. I'm underqualified but highly productive."
Luís F. Simões

New algorithm offers ability to influence systems such as living cells or social networks - 3 views

  • a new computational model that can analyze any type of complex network -- biological, social or electronic -- and reveal the critical points that can be used to control the entire system.
  • Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.
  • Yang-Yu Liu, Jean-Jacques Slotine, Albert-László Barabási. Controllability of complex networks. Nature, 2011; 473 (7346): 167 DOI: 10.1038/nature10011
  •  
    Sounds too super to be true, no?
  • ...3 more comments...
  •  
    cover story in the May 12 issue of Nature
  •  
    For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system.
  •  
    > Sounds too super to be true, no? Yeah, how else may it sound, being a combination of high-quality (I assume) research targeted at attracting funding, raised to the power of Science Daily's pop-pseudo-scientific journalists' bu****it? The original article starts with a cool sentence too: > The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. ...a good starting point for a never-ending philosophers' debate... Now seriously, because of the big name behind the study, I'm very curious to read the original article (the matching construction at its heart is sketched after this thread). Although I expect the conclusion to be that in practical cases (i.e. the cases of "networks" you *would like to* "control"), you need to control all nodes or something equally impractical...
  •  
    then I am looking forward to reading your conclusions here after you will have actually read the paper
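  •  
    For reference while reading: the paper's structural-controllability recipe reduces to a maximum matching. A sketch on a toy directed network (my own example graph) using networkx: the minimum number of driver nodes is N minus the size of a maximum matching of the bipartite representation, and the unmatched nodes are the drivers.

    import networkx as nx
    from networkx.algorithms import bipartite

    edges = [(0, 1), (1, 2), (2, 3), (1, 4)]   # toy directed network
    nodes = sorted({u for e in edges for u in e})

    # Bipartite representation: an "out" copy and an "in" copy per node.
    B = nx.Graph()
    B.add_nodes_from((f"out{u}" for u in nodes), bipartite=0)
    B.add_nodes_from((f"in{v}" for v in nodes), bipartite=1)
    B.add_edges_from((f"out{u}", f"in{v}") for u, v in edges)

    matching = bipartite.maximum_matching(
        B, top_nodes=[f"out{u}" for u in nodes])
    matched_targets = {m for m in matching if m.startswith("in")}
    drivers = [v for v in nodes if f"in{v}" not in matched_targets]

    print("driver nodes:", drivers or nodes[:1])  # at least one driver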
LeopoldS

Traders Profit With Computers Set at High Speed - NYTimes.com - 0 views

  •  
    following Francesco's post from Slashdot, this is now the original article from the NYT ... enjoy
  •  
    completely crazy, these high-speed transactions... they should rather put their efforts into science...!
Joris _

Animal personalities: Unnatural selection | The Economist - 0 views

  • the first time that differences in personality have been shown in wild birds
  • These analyses are based on the assumption that the animals collected represent a randomly selected, and thus representative, sample of the population.
  • Instead it looks as if such trapping studies are selecting the bravest individuals.
  •  
    What if heuristic algorithms (PSO, DE, ...) do not actually simulate flocking-bird or fish-schooling behaviour?!
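  •  
    For context, a minimal particle swarm optimizer. As the code makes plain, the "flocking" is only a loose metaphor of velocity updates pulled towards personal and global bests, which is rather the point of the comment above.

    import random

    def pso(f, dim=2, particles=20, steps=100, w=0.7, c1=1.5, c2=1.5):
        pos = [[random.uniform(-5, 5) for _ in range(dim)]
               for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]            # personal bests
        gbest = min(pbest, key=f)              # global best
        for _ in range(steps):
            for i in range(particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=f)
        return gbest

    sphere = lambda x: sum(v * v for v in x)
    print("minimum found near:", [round(v, 3) for v in pso(sphere)])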
ESA ACT

Slashdot | Text Compressor 1% Away From AI Threshold - 0 views

  •  
    "Alexander Ratushnyak compressed the first 100,000,000 bytes of Wikipedia to a record-small 16,481,655 bytes (including decompression program), thereby not only winning the second payout of The Hutter Prize for Compression of Human Knowledge, but also bri
ESA ACT

Wag the Robot? Brown Scientists Build Robot That Responds to Human Gestures | Brown Uni... - 0 views

  •  
    does not sound overly advanced to me - but am not a roboticist ...
ESA ACT

Wolfram|Alpha: Searching for Truth | h+ Magazine - 0 views

  •  
    interesting article and interview - for our computer guys to read: Francesco, Marek - but maybe even Tobias, for the bio-inspiration .... (LS)