
Advanced Concepts Team: group items tagged "pattern"



Pattern | CLiPS - 2 views

  • Pattern is a web mining module for the Python programming language. It bundles tools for data retrieval (Google + Twitter + Wikipedia API, web spider, HTML DOM parser), text analysis (rule-based shallow parser, WordNet interface, syntactical + semantical n-gram search algorithm, tf-idf + cosine similarity + LSA metrics) and data visualization (graph networks).
  •  
    Intuitive, well documented, and very powerful. A library to keep an eye on. Check the example: "Belgian elections, June 13, 2010 - Twitter opinion mining".
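    The tf-idf and cosine-similarity metrics Pattern bundles can be sketched in a few lines of plain Python. This is a toy illustration of the underlying math, not Pattern's actual API:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute sparse tf-idf vectors for a list of tokenised documents."""
    n = len(docs)
    # document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

    Documents sharing rare terms score high; documents with no terms in common score exactly zero.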

Not a scratch - 7 views

shared by pandomilla on 12 Apr 12
  •  
    I hate scorpions, but this could be a nice subject for a future Ariadna study! This north African desert scorpion doesn't dig burrows to protect itself from the sand-laden wind (as other scorpions do). When the sand whips by at speeds that would strip paint off steel, the scorpion is able to scurry off without apparent damage.
  •  
    Nice research, though they have done almost all the work that we could do in an Ariadna, didn't they? "To check, they took further photographs. In particular, they used a laser scanning system to make a three-dimensional map of the armour and then plugged the result into a computer program that blasted the virtual armour with virtual sand grains at various angles of attack. This process revealed that the granules were disturbing the air flow near the skeleton's surface in ways that appeared to be reducing the erosion rate. Their model suggested that if scorpion exoskeletons were smooth, they would experience almost twice the erosion rate that they actually do. Having tried things out in a computer, the team then tried them for real. They placed samples of steel in a wind tunnel and fired grains of sand at them using compressed air. One piece of steel was smooth, but the others had grooves of different heights, widths and separations, inspired by scorpion exoskeleton, etched onto their surfaces. Each sample was exposed to the lab-generated sandstorm for five minutes and then weighed to find out how badly it had been eroded. The upshot was that the pattern most resembling scorpion armour - with grooves that were 2mm apart, 5mm wide and 4mm high - proved best able to withstand the assault. Though not as good as the computer model suggested real scorpion geometry is, such grooving nevertheless cut erosion by a fifth, compared with a smooth steel surface. The lesson for aircraft makers, Dr Han suggests, is that a little surface irregularity might help to prolong the active lives of planes and helicopters, as well as those of scorpions."
  •  
    What bugs me (pardon the pun) is that the dimensions of the pattern they used were scaled up by many orders of magnitude, while the "grains of sand" with which the surface was bombarded apparently were not... Not being a specialist in the field, I would nevertheless expect that the size of the surface pattern *in relation to* the size of the particles used for bombarding would be crucial.

Stochastic Pattern Recognition Dramatically Outperforms Conventional Techniques - Techn... - 2 views

  • A stochastic computer, designed to help an autonomous vehicle navigate, outperforms a conventional computer by three orders of magnitude, say computer scientists.
  • These guys have applied stochastic computing to the process of pattern recognition. The problem here is to compare an input signal with a reference signal to determine whether they match. In the real world, of course, input signals are always noisy, so a system that can cope with noise has an obvious advantage. Canals and co use their technique to help an autonomous vehicle navigate its way through a simple environment for which it has an internal map. For this task, it has to measure the distance to the walls around it and work out where it is on the map. It then computes a trajectory taking it to its destination.
  • Although the idea of stochastic computing has been around for half a century, attempts to exploit it have only just begun. Clearly there's much work to be done. And since one line of thought is that the brain might be a stochastic computer, at least in part, there could be exciting times ahead.
  • Ref: arxiv.org/abs/1202.4495: Stochastic-Based Pattern Recognition Analysis
  •  
    hey! This is essentially the Probabilistic Computing Ariadna
  •  
    The link is there, but my understanding of our purpose is different from what I understood from the abstract. In any case, the authors are from Palma de Mallorca, Balears, Spain. "Somebody" should somehow make them aware of the Ariadna study ... e.g. somebody no longer in the team :-)
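    The basic trick of stochastic computing, which the paper builds on, is to encode a number in [0, 1] as the probability of 1s in a random bitstream; arithmetic then reduces to cheap bitwise logic, and a few flipped bits (noise) barely move the result. A minimal sketch of the idea, not the authors' actual hardware:

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a bitstream whose density of 1s is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode: the represented value is the fraction of 1s."""
    return sum(bits) / len(bits)

def multiply(sa, sb):
    """Multiplying two independent streams costs one AND gate per bit pair."""
    return [a & b for a, b in zip(sa, sb)]

rng = random.Random(0)
a = to_stream(0.6, 10_000, rng)
b = to_stream(0.5, 10_000, rng)
est = from_stream(multiply(a, b))  # close to 0.6 * 0.5 = 0.30
```

    Longer streams trade speed for precision, which is why stochastic hardware can be so small and noise-tolerant.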

Light bends itself round corners - physicsworld.com - 1 views

  •  
    The Florida team generated a specially shaped laser beam that could self-accelerate, or bend, sideways.
  •  
    very nice!!! read this e.g. "In addition to this self-bending, the beam's intensity pattern also has a couple of other intriguing characteristics. One is that it is non-diffracting, which means that the width of each intensity region does not appreciably increase as the beam travels forwards. This is unlike a normal beam - even a tightly collimated laser beam - which spreads as it propagates. The other unusual property is that of self-healing. This means that if part of the beam is blocked by opaque objects, then any disruptions to the beam's intensity pattern could gradually recover as the beam travels forward."
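    The self-bending beam described above is an Airy beam: in dimensionless paraxial units its intensity profile is |Ai(x - (z/2)^2)|^2, so the main lobe drifts along a parabola while keeping its shape (non-diffracting). A rough numerical illustration of that property, not the Florida group's actual setup:

```python
import numpy as np
from scipy.special import airy

def airy_intensity(x, z):
    """|Ai(x - (z/2)**2)|**2: the ideal (infinite-energy) Airy beam keeps
    its transverse shape while its peak shifts parabolically with z."""
    ai, _, _, _ = airy(x - (z / 2.0) ** 2)
    return ai ** 2

x = np.linspace(-10.0, 10.0, 4001)
peak0 = x[np.argmax(airy_intensity(x, 0.0))]  # main lobe at z = 0
peak4 = x[np.argmax(airy_intensity(x, 4.0))]  # main lobe after propagating
# the lobe has moved sideways by (4/2)**2 = 4 without changing shape
```

    A real, finite-energy beam only approximates this over a limited range, which is also why the self-healing is gradual rather than instantaneous.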

NASA's GRAIL Mission Solves Mystery of Moon's Surface Gravity - 1 views

  •  
    Uneven gravity patterns on moon probably caused by asteroid impacts.
  •  
    Nooo, it's TMA-1!

Mutations in DMRT3 affect locomotion in horses and spinal circuit function in mice : Na... - 0 views

  •  
    isn't it strange that a single gene mutation can enable or disable such a complex behavioural pattern? Anything to take advantage of in our gait study (Guido?)

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting here. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to which extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However that doesn't get lost. Basic, low-processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we've arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.

Gamers beat algorithms at finding protein structures - 0 views

  • Foldit takes a hybrid approach. The Rosetta algorithm is used to create some potential starting structures, but users are then given a set of controls that let them poke and prod the protein's structure in three dimensions; displays provide live feedback on the energy of a configuration. 
  • By tracing the actions of the best players, the authors were able to figure out how the humans' excellent pattern recognition abilities gave them an edge over the computer.
  • Humans turn out to be really bad at starting from a simple linear chain of proteins; they need a rough idea of what the protein might look like before they can recognize patterns to optimize. Given a set of 10 potential structures produced by Rosetta, however, the best players were very adept at picking the one closest to the optimal configuration.
  • The authors also note that different players tended to have different strengths. Some were better at making the big adjustments needed to get near an energy minimum, while others enjoyed the fine-scale tweaking needed to fully optimize the structure. That's where Foldit's ability to enable team competitions, where different team members could handle the parts of the task most suited to their interests and abilities, really paid off.
  •  
    Some interesting ideas for our crowdsourcing game in here.

Inferring individual rules from collective behavior - 2 views

  •  
    "We fit data to zonal interaction models and characterize which individual interaction forces suffice to explain observed spatial patterns." You can get the paper from the first author's website: http://people.stfx.ca/rlukeman/research.htm
  •  
    PNAS? Didn't strike me as something very new though... We should refer to it in the roots study though: "Social organisms form striking aggregation patterns, displaying cohesion, polarization, and collective intelligence. Determining how they do so in nature is challenging; a plethora of simulation studies displaying life-like swarm behavior lack rigorous comparison with actual data because collecting field data of sufficient quality has been a bottleneck." For roots it is NO bottleneck :) Tobias was right :)
  •  
    Here they assume all relevant variables influencing behaviour are being observed, namely the relative positions and orientations of all ducks in the swarm. So they make movies of the swarm's movements, process them, and then fit the models to that data. For the roots, though we can observe the complete final structure, or even obtain time-lapse movies showing how that structure came to be, getting measurements of all relevant soil variables (nitrogen, phosphorus, ...) throughout the soil, and over time, would be extremely difficult. So I guess a replication of the kind of work they did, but for roots, would be hard. Nice reference though.
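    A zonal interaction model of the kind they fit can be sketched in a few lines; the zone radii and the update rule below are hypothetical choices for illustration, not the ones estimated in the paper:

```python
import numpy as np

def zonal_step(pos, vel, r_rep=1.0, r_ali=3.0, r_att=6.0, dt=0.1):
    """One update of a Couzin-style zonal model: each agent repels from
    very close neighbours, aligns with mid-range ones, and is attracted
    to distant ones (hypothetical zone radii)."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]                          # offsets to all agents
        dist = np.linalg.norm(d, axis=1)
        rep = dist < r_rep
        rep[i] = False                            # ignore self
        ali = (dist >= r_rep) & (dist < r_ali)
        att = (dist >= r_ali) & (dist < r_att)
        if rep.any():                             # repulsion dominates
            desired = -d[rep].sum(axis=0)
        elif ali.any() or att.any():              # align + attract
            desired = vel[ali].sum(axis=0) + d[att].sum(axis=0)
        else:
            desired = vel[i]
        norm = np.linalg.norm(desired)
        if norm > 0:
            new_vel[i] = desired / norm
    return pos + new_vel * dt, new_vel
```

    Fitting then means finding the zone radii (and weights) that best reproduce the observed spatial patterns, which is exactly what requires the high-quality field data the authors mention.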

New Pattern Found in Prime Numbers - 0 views

  •  
    wow - how come that after so many years there are still some hidden "patterns" in the primes? It does not, however, seem to help with predicting where to discover new, larger primes, correct?
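    The kind of statistical pattern the article discusses (a Benford-like bias in the leading digits of primes) is easy to probe yourself. A rough illustration, not the paper's actual analysis:

```python
def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def leading_digit_counts(nums):
    """Histogram of first digits. For primes below a power of ten the
    counts are not uniform: smaller leading digits occur more often,
    because prime density ~1/ln(x) decreases within each decade."""
    counts = [0] * 10
    for p in nums:
        counts[int(str(p)[0])] += 1
    return counts
```

    As the quoted commenter suspects, this says nothing about *where* the next large prime sits; it is a distributional pattern, not a predictive one.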

Flexible screen based on thermochromic effect: Paperlike thermochromic display - 0 views

  •  
    The authors report the design and implementation of a paperlike, thermally activated display fabricated from thermochromic composite and embedded conductive wiring patterns, shaped from a mixture of metallic nanoparticles in polydimethylsiloxane using soft l

Method and apparatus for verifying an individual's identity - US Patent 4805222 - 0 views

  •  
    pattern recognition for identity

Martian landscapes - The Big Picture - Boston.com - 2 views

  •  
    not an expert, but pattern recognition seems a bit tricky here?

Rat Neurons Grown On A Computer Chip Fly A Simulated Aircraft - 1 views

  •  
    This could become quite relevant in future control systems if the setup can be made easy to keep alive and stable. I was doing some follow-up on a story about people controlling aircraft with their brainwaves (through EEG) when I ran into this really cool story. The idea of growing the neurons in patterns is incidentally very similar to the Physarum slime-mold stuff that Dario and I were curious about a little while ago.
  •  
    I think we already had a discussion on this during a wednesday meeting :P
  •  
    Oh, I thought that was on the little robot that was controlled by rat neurons and bumped into EVERYTHING. The interesting thing here is that they add a surface patterning (with some kind of nutrient) to control the growth of cells. (Maybe that is not new either, though.)

Aroma: Using ML for code recommendation - 2 views

  •  
    A simple, but neat helper for coding: ML gives idiomatic usage patterns to semi-automate the daily development work.
  •  
    Machine learning to write better machine learning code...count me in haha

UCR Today: Flight Patterns Reveal How Mosquitoes Find Hosts to Transmit Deadly Diseases - 0 views

  •  
    since I'm desperately fighting them here in Jerusalem ... and relevant to our Ariadna study (even if a bit late - Luke?)

The Fold-and-Cut Problem (Erik Demaine) - 3 views

  •  
    How many shapes can be obtained by folding a paper and applying just one straight cut? You'll be surprised...
  •  
    "The theorem is that every pattern (plane graph) of straight-line cuts can be made by folding and one complete straight cut. Thus it is possible to make single polygons (possibly nonconvex), multiple disjoint polygons, nested polygons, adjoining polygons, and even floating line segments and points." - So you can cut any assembly of polygons, but not a single curved edge (with a finite number of folds; points don't count): useless. :-P

Convolutional networks start to rule the world! - 2 views

  •  
    Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach halved the error from ~30% to ~15%! Key changes that made this happen: weight-sharing to reduce the search space, and training on massive GPUs. (See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html) This should please Francisco :)
  •  
    where is Francisco when one needs him ...
  •  
    ...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one can somehow consider easier than (say) predicting a financial crisis ... still they get 15% of errors .... reminds me of a comic we saw once ... cat http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
  •  
    I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire for this..?
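    The weight-sharing point above can be made concrete with a toy convolution, where one small kernel is reused at every image position. A minimal numpy sketch, unrelated to the actual ImageNet network:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most deep-learning
    frameworks): the same small kernel is slid over every image position,
    so the parameter count does not grow with the image size."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# weight sharing in numbers: a 3x3 kernel has 9 parameters whatever the
# image size, while a dense layer mapping 224*224 pixels to as many
# outputs would need (224*224)**2, roughly 2.5 billion, weights
```

    This reuse of weights, plus GPU training, is what made the 60-million-parameter network feasible at all; a fully dense equivalent would have been orders of magnitude larger.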

Mathematicians Predict the Future With Data From the Past - 6 views

  •  
    Asimov's Foundation meets ACT's Tipping Point Prediction?
  •  
    Good luck to them!!
  •  
    "Mathematicians Predict the Future With Data From the Past". GREAT! And physicists probably predict the past with data from the future?!? "scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future". Big deal! That's what any scientist does anyway... "cliodynamics"!? Give me a break!
  •  
    still, some interesting thoughts in there ... "Then you have the 50-year cycles of violence. Turchin describes these as the building up and then the release of pressure. Each time, social inequality creeps up over the decades, then reaches a breaking point. Reforms are made, but over time, those reforms are reversed, leading back to a state of increasing social inequality. The graph above shows how regular these spikes are - though there's one missing in the early 19th century, which Turchin attributes to the relative prosperity that characterized the time. He also notes that the severity of the spikes can vary depending on how governments respond to the problem. Turchin says that the United States was in a pre-revolutionary state in the 1910s, but there was a steep drop-off in violence after the 1920s because of the progressive era. The governing class made decisions to rein in corporations and allowed workers to air grievances. These policies reduced the pressure, he says, and prevented revolution. The United Kingdom was also able to avoid revolution through reforms in the 19th century, according to Turchin. But the most common way for these things to resolve themselves is through violence. Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops - just like in ecology. "In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks," he writes. "As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats." There are competing theories as well. A group of researchers at the New England Complex Systems Institute - who practice a discipline called econophysics - have built their own model of political violence and
  •  
    It's not the scientific activity described in the article that is uninteresting, on the contrary! But the way it is described is just a bad joke. Once again the results themselves are seemingly not sexy enough, and so something is sold as the big revolution, though it's just the application of the oldest scientific principles in a slightly different way than before.