
Advanced Concepts Team / Group items tagged: extreme


LeopoldS

Schumpeter: More than just a game | The Economist - 3 views

  •  
    remember the discussion I tried to trigger in the team a few weeks ago ...
  • ...5 more comments...
  •  
    main quote I take from the article: "gamification is really a cover for cynically exploiting human psychology for profit"
  •  
    I would say that it applies to management in general :-)
  •  
    which is exactly why it will never work .... and surprisingly "managers" fail to understand this very simple fact.
  •  
    ... "gamification is really a cover for cynically exploiting human psychology for profit" --> "Why Are Half a Million People Poking This Giant Cube?" http://www.wired.com/gamelife/2012/11/curiosity/
  •  
    I think the "essence" of the game is its uselessness... workers need exactly the inverse: to find meaning in what they do!
  •  
    I love the linked article provided by Johannes! It expresses very elegantly why I still fail to understand how even extremely smart and busy people are, in my view, apparently wasting their time playing computer games - but I recognise that there is something in games that we apparently need / that gives us something we cherish .... "In fact, half a million players so far have registered to help destroy the 64 billion tiny blocks that compose that one gigantic cube, all working in tandem toward a singular goal: discovering the secret that Curiosity's creator says awaits one lucky player inside. That's right: After millions of man-hours of work, only one player will ever see the center of the cube. Curiosity is the first release from 22Cans, an independent game studio founded earlier this year by Peter Molyneux, a longtime game designer known for ambitious projects like Populous, Black & White and Fable. Players can carve important messages (or shameless self-promotion) onto the face of the cube as they whittle it to nothing. Image: Wired Molyneux is equally famous for his tendency to overpromise and under-deliver on his games. In 2008, he said that his upcoming game would be "such a significant scientific achievement that it will be on the cover of Wired." That game turned out to be Milo & Kate, a Kinect tech demo that went nowhere and was canceled. Following this, Molyneux left Microsoft to go indie and form 22Cans. Not held back by the past, the Molyneux hype train is going full speed ahead with Curiosity, which the studio grandiosely promises will be merely the first of 22 similar "experiments." Somehow, it is wildly popular. The biggest challenge facing players of Curiosity isn't how to blast through the 2,000 layers of the cube, but rather successfully connecting to 22Cans' servers. So many players are attempting to log in that the server cannot handle it. Some players go for utter efficiency, tapping rapidly to rack up combo multipliers and get more
  •  
    Why are video games so different from collecting stamps or spotting birds or planes? One could say they are all just hobbies.
Marcus Maertens

World's first telescopic contact lens gives you Superman-like vision | ExtremeTech - 1 views

  •  
    Now we just need an x-ray mode and we are done.
jmlloren

Cheap and easy-to-make perovskite films rival silicon for efficiency. - 11 views

I just wanted to put another paper in this context: http://science.sciencemag.org/content/324/5923/63.short - solar cells based on oxides, in particular BiFeO3. The key point here is that while hali...

solar cells technology

started by fichbio on 09 Mar 16 1 follow-up, last by jmlloren on 11 Mar 16
jcunha liked it
Dario Izzo

Bold title ..... - 3 views

  •  
    I got a fever. And the only prescription is more cat faces! ...../\_/\ ...(=^_^) ..\\(___) The article sounds quite interesting, though. I think the idea of a "fake" agent that tries to trick the classifier while both co-evolve is nice, as it allows the classifier to first cope with the lower order complexity of the problem. As the fake agent mimics the real agent better and better, the classifier has time to add complexity to itself instead of trying to do it all at once. It would be interesting if this is later reflected in the neural net's structure, i.e. having core regions that deal with lower order approximation / classification and peripheral regions (added at a later stage) that deal with nuances as they become apparent. Also this approach will develop not just a classifier for agent behavior but at the same time a model of the same. The latter may be useful in itself and might in some cases be the actual goal of the "researcher". I suspect, however, that the problem of producing / evolving the "fake agent" model might in most cases be at least as hard as producing a working classifier...
  •  
    This paper from 2014 seems to describe something pretty similar (except for not using physical robots, etc...): https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
  •  
    Yes, this IS basically adversarial learning. Except that the generator part, instead of being a neural net, is some kind of swarm parametrization. I just love how they rebranded it, though. :))
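A minimal sketch of the adversarial setup discussed above (in the spirit of the Goodfellow et al. 2014 paper linked in the thread), assuming PyTorch is available; the toy 1-D data, network sizes and hyperparameters are illustrative only, and the generator here is a small neural net rather than the swarm parametrization used in the bookmarked work.

```python
# Toy adversarial training loop: G tries to mimic "real" 1-D Gaussian data,
# while D tries to tell real samples from generated ones (all names illustrative).
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0          # "real agent" behaviour: N(2, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator update: push real samples towards label 1, generated ones towards 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: try to make D label generated samples as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())  # should drift towards 2.0 / 0.5
```

The co-evolution point made above maps onto the alternating updates: each side only ever has to improve against the other's current level, rather than solving the full problem in one go.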
jcunha

Corporate culture spreads to Scandinavian universities - 1 views

  •  
    "University of Copenhagen fired seismologist Hans Thybo, president of the European Geosciences Union. The official explanation for Thybo's dismissal - his alleged use of private e-mail for work, and telling a postdoc that it is legitimate to openly criticize university management - seems petty in the extreme."
Lionel Jacques

Cosmic-ray theory gets the cold shoulder - 0 views

  •  
    "One of the leading theories describing how the most energetic cosmic rays are produced may need a rethink in light of a new study by physicists at the IceCube Neutrino Observatory in Antarctica. The team had set out to detect the extremely energetic neutrinos that are expected to be produced alongside high-energy cosmic rays in the violent explosions that mark the deaths of massive stars - but after looking at hundreds of these explosions, no such neutrinos have been found. "
Lionel Jacques

CERN to announce Higgs boson observation at LHC - 1 views

  •  
    Tomorrow, at 9am EST, scientists at the Large Hadron Collider (LHC) at CERN in Switzerland are expected to announce, with fairly strong certainty, that they have observed the Higgs boson "God" particle at a mass-energy of 125 GeV. For just over a week, rumors have been rife that observations with 2.5 to 3.5 sigma certainty (96% to 99.9%) have been made.
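As a back-of-the-envelope check on the sigma levels quoted above, here is how sigma values map to confidence percentages, assuming a normal distribution and a two-sided interval (conventions differ, so the article's quoted 96% to 99.9% may use a different one; this is only a rough sketch, not the CERN analysis). It assumes scipy is available.

```python
# Map "sigma" significance levels to two-sided confidence percentages.
from scipy.stats import norm

for sigma in (2.5, 3.0, 3.5, 5.0):
    confidence = 2 * norm.cdf(sigma) - 1          # probability of falling within +/- sigma
    print(f"{sigma} sigma -> {100 * confidence:.4f}%")
# 5 sigma (~99.99994%) is the conventional threshold for claiming a discovery.
```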
Paul N

Sugar battery promises 10 times the energy density of lithium - 1 views

  •  
    intriguing but of little interest for space it seems to me
dejanpetkow

Photonic calculus with analog computer - 5 views

  •  
    Weird.
  •  
    This reminds me of a 2013 paper on how to perform derivatives, integrals and even time reversal in optical fibres: http://www.nature.com/srep/2013/130403/srep01594/full/srep01594.html "The manipulation of dynamic Brillouin gratings in optical fibers is demonstrated to be an extremely flexible technique to achieve, with a single experimental setup, several all-optical signal processing functions. In particular, all-optical time differentiation, time integration and true time reversal are theoretically predicted, and then numerically and experimentally demonstrated." (A toy numerical sketch of these frequency-domain operations follows this thread.)
  •  
    Would this kind of computer be more resistant to the space environment?
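The toy sketch mentioned above: differentiation and integration applied as spectral filters (multiplying or dividing the spectrum by i*omega), which is conceptually what the optical transfer functions in the cited papers realise. This is a plain numpy sketch of the mathematics, not a model of the photonic hardware.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.exp(-((t - 0.5) / 0.05) ** 2)                 # a Gaussian pulse as the test signal

omega = 2 * np.pi * np.fft.fftfreq(t.size, d=t[1] - t[0])
X = np.fft.fft(x)

dx_dt = np.fft.ifft(1j * omega * X).real             # differentiator: H(omega) = i*omega

H_int = np.zeros_like(X)                             # integrator: H(omega) = 1/(i*omega)
H_int[omega != 0] = 1.0 / (1j * omega[omega != 0])   # (DC component left at zero)
x_int = np.fft.ifft(H_int * X).real

# Sanity check against a finite-difference derivative; the error printed here is
# small compared with the ~17 peak value of dx/dt for this pulse.
print(np.max(np.abs(dx_dt - np.gradient(x, t))))
```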
johannessimon81

A Different Form of Color Vision in Mantis Shrimp - 4 views

  •  
    Mantis shrimp seem to have 12 types of photo-receptive sensors - but this does not really improve their ability to discriminate between colors. Speculation is that they serve as a form of pre-processing for visual information: the brain does not need to decode full color information from just a few channels, which would allow for a smaller brain. I guess technologically the two extremes of light detection would be RGB cameras, which are like our eyes and offer good spatial resolution, and spectrometers, which have a large number of color channels but at the cost of spatial resolution. It seems the mantis shrimp uses something that is somewhere between RGB cameras and spectrometers. Could there be a use for this in space? (A toy channel-sampling sketch follows this thread.)
  •  
    > RGB cameras which are like our eyes ...apart from the fact that the spectral response of the eyes is completely different from "RGB" cameras (http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg) ... and that the eyes have 4 types of light-sensitive cells, not three (http://en.wikipedia.org/wiki/File:Cone-response.svg) ... and that, unlike cameras, the human eye is precise only in a very narrow centre region (http://en.wikipedia.org/wiki/Fovea) ...hmm, apart from relying on tri-stimulus colour perception it seems human eyes are in fact completely different from "RGB cameras" :-) OK sorry for picking on this - that's just the colour science geek in me :-) Now seriously, on one hand the article abstract sounds very interesting, but on the other the statement "Why use 12 color channels when three or four are sufficient for fine color discrimination?" reveals so much ignorance of the very basics of colour science that I'm completely puzzled - in the end, it's a Science article so it should be reasonably scientifically sound, right? Pity I can't access the full text... The interesting thing is that more channels mean more information and therefore should require *more* power to process - which is exactly the opposite of their theory (as far as I can tell from the abstract...). So the key is to understand *what* information about light these mantises are collecting and why - definitely it's not "colour" in the sense of human perceptual experience. But in any case - yes, spectrometry has its uses in space :-)
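The sketch referred to in the first comment: one incoming spectrum sampled through N broad, overlapping filter bands, with N = 3 (RGB-camera-like), 12 (mantis-shrimp-like) and 100 (spectrometer-like). A conceptual numpy toy with made-up filter shapes, not a model of real photoreceptors; the point is only that more channels capture more spectral detail per spatial sample.

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 301)                  # wavelength grid in nm
# A made-up test spectrum with two narrow peaks.
spectrum = np.exp(-((wl - 520) / 15) ** 2) + 0.6 * np.exp(-((wl - 610) / 10) ** 2)

def channel_responses(n_channels):
    """Integrate the spectrum through n Gaussian filter bands spread over 400-700 nm."""
    centers = np.linspace(420, 680, n_channels)
    width = 300.0 / n_channels                       # narrower bands as the channel count grows
    filters = np.exp(-((wl[None, :] - centers[:, None]) / width) ** 2)
    return filters @ spectrum                        # one number per channel

for n in (3, 12, 100):
    r = channel_responses(n)
    print(f"{n:3d} channels -> first responses: {np.round(r[:6], 1)}")
```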
Robert Musters

Optimize work efficiency using music - 2 views

focus@will is a new neuroscience based music service that helps you focus, reduce distractions and retain information when working, studying, writing and reading. The technology is based on hard sc...

science sound focus attention

started by Robert Musters on 10 Feb 14 no follow-up yet
Thijs Versloot

The risk of geoengineering (or when abruptly stopping..) - 2 views

  •  
    The researchers used a global climate model to show that if an extreme emissions pathway -- RCP8.5 -- is followed up until 2035, allowing temperatures to rise 1°C above the 1970-1999 mean, and then SRM (Solar Radiation Management) is implemented for 25 years and suddenly stopped, global temperatures could increase by 4°C in the following decades.
  •  
    Nice quantitative study. They treat the problem within the full uncertainty range of the climate sensitivity parameter (which is highly uncertain), so it is quite complete. However, after SRM ceases, following an initial positive spike in radiative forcing, the rate of warming seems to return to the rates predicted for the non-geoengineering case: "The 20-year temperature trends following SRM cessation are 0.2−0.6 °C/decade for the range of climate sensitivities (figure 5), comparable to those trends that occur under the RCP8.5 scenario without any SRM." I am actually working on a similar idea for deliberate Mars terraforming: cooling the planet down before we introduce a positive temperature-raising feedback with greenhouse gases might be more efficient than warming alone.
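To illustrate the "termination shock" mechanism discussed above, here is a zero-dimensional energy-balance toy, C dT/dt = F(t) - lambda*T, with steadily rising greenhouse forcing that is fully offset by SRM for 25 years and then switched off. All parameter values are illustrative guesses, not those of the study's global climate model; it only reproduces the qualitative shape of the response, not the paper's numbers.

```python
import numpy as np

C = 8.0        # effective heat capacity (W yr m^-2 K^-1), ocean mixed-layer ballpark
lam = 1.2      # climate feedback parameter (W m^-2 K^-1); smaller lam = higher sensitivity
dt = 0.1       # time step in years

years = np.arange(2000.0, 2100.0, dt)
T = np.zeros_like(years)

for i in range(1, len(years)):
    F_ghg = 0.05 * (years[i] - 2000.0)                      # steadily rising GHG forcing (W m^-2)
    F_srm = -F_ghg if 2035.0 <= years[i] < 2060.0 else 0.0  # SRM offsets it for 25 years, then stops
    T[i] = T[i - 1] + dt * (F_ghg + F_srm - lam * T[i - 1]) / C

post = (years >= 2060.0) & (years < 2080.0)
print("average warming rate after SRM stops:",
      10 * np.polyfit(years[post], T[post], 1)[0], "K/decade")
```

Even in this crude toy, the temperature rebounds towards the no-SRM trajectory far faster than the underlying forcing-driven trend once SRM stops; how large and how long that spike is in practice is exactly what the paper quantifies across its range of climate sensitivities.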
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late 80s / early 90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, the rest does not get lost. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every step deviation from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally it's as good as all other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and what level of detail you perceive in all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Thijs Versloot

Distorted GPS signals reveal hurricane wind speeds - 0 views

  •  
    A nice side effect of extreme precision: even the perturbations of the signal carry useful information.
  •  
    They say it will not replace current wind speed dropsondes which have a ten times better wind speed accuracy, but they could definitely provide a much more extensive view of cyclone wind speeds - at a very low cost!
  •  
    It could be interesting to compare the accuracy and spatial resolution of the GPS method compared to what can be achieved using L-Band SAR data (e.g. SMOS mission)...
Marcus Maertens

Gadget Genius - nanotechnology breakthrough is big deal for electronics : The Universit... - 2 views

  •  
    Quote: "This is exactly what we are pursuing - self-assembling materials that organize at smaller sizes, say, less than 20 or even 10 nanometers"
  •  
    Directed Self-Assembly (DSA) is one of the competitors for next-generation lithography, together with direct-write via electron beam and the more traditional extreme UV (EUV) lithography. Although there are huge benefits to using DSA, the technology does have some drawbacks when it comes to line edge roughness. It seems, however, particularly good for repetitive structures that are used in memory chips. As long as EUV is struggling to get working, DSA definitely has a fighting chance to enter the market one day.
Aurelie Heritier

Have Harvard Scientists Created A Real Lightsaber? Kind Of. - 0 views

  •  
    A joint Harvard-MIT research program led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic has created a new state of matter the two describe as extremely similar to the lightsabers seen in "Star Wars."
  •  
    "Photonic molecules"? Intriguing..
Thijs Versloot

Most Amazing Exoplanets #ifls - 1 views

  •  
    The most astounding fact about Kepler-78b is that it shouldn't even exist, according to our current knowledge of planetary formation. It is extremely close to its star at only 550,000 miles (900,000 kilometers). As a comparison, Mercury only gets within 28.5 million miles (45.9 million kilometers) of the Sun at the nearest point of its orbit. At that proximity it isn't clear how the planet could have formed, since the star was much larger when the planet formed. At its current distance, that would mean it formed inside the star, which is impossible as far as we know.