Advanced Concepts Team - Group items tagged: knowledge

LeopoldS

Lorentz Center - Core Knowledge, Language and Culture from 29 May 2012 through 1 Jun 2012 - 2 views

  •  
    interesting workshop on language here in Leiden ... Luis, Giusi??
  •  
    interesting indeed, but the lectures I'd be interested in already took place
  •  
    No good - 80% of these people are among the most cited in my research area... I really should have been there. Evidently the organisers need to work on their advertising.
Nicholas Lan

Google Seeks To Plant Antenna Farm in Iowa » Data Center Knowledge - 0 views

  •  
    Google subsidiary Google Fiber, Inc. is seeking permission to place satellite antennas on land near its data center in Council Bluffs, Iowa. The antennas could be used to receive content feeds from broadcast networks that could be bundled with a high-speed fiber service.
  •  
    an oddly similar story: http://venturebeat.com/2012/02/20/apple-is-solar-friendly/ - Apple to build the largest solar array in the US
LeopoldS

The Cost of Knowledge - 1 views

  •  
    interesting initiative, already with some high-profile names from the space sector such as Martin Rees, Baumjohann, and two people from NASA ...
Thijs Versloot

Wolfram Language - 11 views

That looks pretty awesome indeed. Some of those functions would be very helpful right now :)

Tags: knowledge, model, everything

Thijs Versloot

The big data brain drain - 3 views

  •  
    Echoing this, in 2009 Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira penned an article under the title The Unreasonable Effectiveness of Data. In it, they describe the surprising insight that given enough data, the choice of mathematical model often stops being as important - that particularly for their task of automated language translation, "simple models and a lot of data trump more elaborate models based on less data." If we make the leap and assume that this insight can be at least partially extended to fields beyond natural language processing, what we can expect is a situation in which domain knowledge is increasingly trumped by "mere" data-mining skills. I would argue that this prediction has already begun to pan out: in a wide array of academic fields, the ability to effectively process data is superseding other more classical modes of research.
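    A minimal sketch of the kind of head-to-head comparison behind that claim, assuming Python with scikit-learn (neither appears in the article); the synthetic dataset, the model pairing, and the sample sizes are illustrative assumptions only:

        # Toy comparison (not from the article): a simple model trained on a
        # lot of data vs. a more elaborate model trained on little data.
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a "real" task; 50k training points in a pool.
        X, y = make_classification(n_samples=60000, n_features=40,
                                   n_informative=30, random_state=0)
        X_pool, X_test, y_pool, y_test = train_test_split(
            X, y, test_size=10000, random_state=0)

        # Elaborate model, little data: an RBF-kernel SVM on 500 examples.
        elaborate = SVC(kernel="rbf").fit(X_pool[:500], y_pool[:500])

        # Simple model, lots of data: logistic regression on all 50000.
        simple = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)

        print("elaborate model, 500 samples :", elaborate.score(X_test, y_test))
        print("simple model, 50000 samples  :", simple.score(X_test, y_test))

    On many datasets the second score matches or beats the first, which is the Halevy et al. point in miniature; on others it won't - the sketch only shows how one would test the claim, not that it always holds.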
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" at which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul: While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. That data doesn't get lost, however. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensorimotor system that puts your legs in motion, actuating every muscle and correcting every deviation of each step from its desired trajectory using the complicated system of nerve endings and motor commands - reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn...
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, an absolute statement like "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any alternative assumption you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form can be), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Thijs Versloot

Computer as smart as a 4-year-old? Researchers IQ test new artificial intelligence system - 0 views

  •  
    Artificial and natural knowledge researchers at the University of Illinois at Chicago have IQ-tested one of the best available artificial intelligence systems to see how intelligent it really is. Turns out it's about as smart as the average 4-year-old, they will report July 17 at the U.S. Artificial Intelligence Conference in Bellevue, Wash.
Thijs Versloot

Most Amazing Exoplanets #ifls - 1 views

  •  
    The most astounding fact about Kepler-78b is that it shouldn't even exist, according to our current knowledge of planetary formation. It is extremely close to its star, at only 550,000 miles (900,000 kilometers). As a comparison, Mercury only gets within 28.5 million miles (45.9 million kilometers) of the sun at the nearest point of its orbit. Given that proximity, it isn't clear how the planet could have formed: the star was much larger when the planet formed, so at its current distance the planet would have had to form inside the star, which is impossible as far as we know.
Athanasia Nikolaou

Nature Paper: Rivers and streams release more CO2 than previously believed - 6 views

  •  
    Another underestimated source of CO2 is turbulent water. "The stronger the turbulences at the water's surface, the more CO2 is released into the atmosphere. The combination of maps and data revealed that, while the CO2 emissions from lakes and reservoirs are lower than assumed, those from rivers and streams are three times as high as previously believed." Altogether the emitted CO2 equates to roughly one-fifth of the emissions caused by humans. Yet more stuff to model...
  • ...10 more comments...
  •  
    This could also be a mechanism to counter human CO2 emissions ... the more we emit, the fewer turbulent rivers and streams, the less CO2 is emitted there ... makes sense?
  •  
    I guess there is a natural equilibrium there. Once the climate warms up enough for all rivers and streams to evaporate they will not contribute CO2 anymore - which stops their contribution to global warming. So the problem is also the solution (as always).
  •  
    "The source of inland water CO2 is still not known with certainty and new studies are needed to research the mechanisms controlling CO2 evasion globally." It is another source of CO2 this one, and the turbulence in the rivers is independent of our emissions in CO2 and just facilitates the process of releasing CO2 waters. Dario, if I understood correct you have in mind a finite quantity of CO2 that the atmosphere can accomodate, and to my knowledge this does not happen, so I cannot find a relevant feedback there. Johannes, H2O is a powerful greenhouse gas :-)
  •  
    Nasia, I think you did not get my point (a joke, really, that Johannes continued) .... by emitting more CO2 we warm up the planet, thus drying up rivers and lakes, which will in turn emit less CO2 :) No finite quantity of CO2 in the atmosphere is needed to close this loop ... as for the H2O, it could just go into non-turbulent waters rather than staying in the atmosphere ...
  •  
    Really awkward joke explanation: I got Johannes' joke, but maybe you did not get mine: by warming up the planet to get rid of the rivers and their problems, the water of the rivers will be accommodated in the atmosphere - hence the greenhouse gas of water.
  •  
    from my previous post: "... as for the H2O, it could just go into non-turbulent waters rather than staying in the atmosphere ..."
  •  
    I guess the emphasis is on "could"... ;-) Also, everybody knows that rain is cold - so more water in the atmosphere makes the climate colder.
  •  
    do you have the Nature paper too? It looks like very nice, meticulous, typically German research lasting over 10 years, with painstakingly many researchers from all over the world involved .... and while important, the total is still only 20% of human emissions ... so a variation in it does not seem to change the overall picture
  •  
    here is the nature paper: http://www.nature.com/nature/journal/v503/n7476/full/nature12760.html I appreciate Johannes' and Dario's jokes, since climate is common ground on which all of us can have an opinion, drawing confidence from experiencing the weather. But if I tried to make jokes about materials science or A.I., I would run a high risk of failing(!) :-S Water is a greenhouse gas; rain rather releases latent heat to the environment in order to form. Johannes, nice trolling effort ;-) Between this and the jokes to come, I would suggest taking a look here, provided you have 10 minutes: how/where rain forms http://www.scribd.com/doc/58033704/Tephigrams-for-Dummies
  •  
    omg
  •  
    Nasia, I thought about your statement carefully - and I cannot agree with you. Water is not a greenhouse gas. It is instead a liquid. Also, I can't believe you keep feeding the troll! :-P But on a more topical note: I think it is an over-simplification to call water a greenhouse gas - water is one of the most important mechanisms in the way Earth handles heat input from the sun. The latent heat that you mention actually cools Earth: solar energy that would otherwise heat Earth's surface is ABSORBED as latent heat by water, which consequently evaporates - the same water condenses into rain drops at high altitudes and releases this stored heat. In effect the water cycle is a mechanism of heat transport from low altitude to high altitude, where the chance of infrared radiation escaping into space is much higher due to the much thinner layer of atmosphere above (including the smaller abundance of greenhouse gases). Also, as I know you are well aware, the cloud cover that results from water condensation in the troposphere dramatically increases albedo, which has a cooling effect on climate. Furthermore, the heat capacity of wet air ("humid heat") is much larger than that of dry air - so any advective heat transfer due to air currents is more efficient in wet air, transporting heat from warm areas to a natural heat sink, e.g. polar regions. Of course there are also climate-heating effects of water, like the absorption of IR radiation. But I stand by my statement (as defended in the above) that rain cools the atmosphere. Oh, and also some nice reading material on the complexities related to climate feedback due to sea surface temperature: http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(1993)006%3C2049%3ALSEOTR%3E2.0.CO%3B2
  •  
    I enjoy trolling conversations when there is a gain for both sides at the end :-) I had to check up on some of the facts in order to explain myself properly. The IPCC report lists the greenhouse gases here, and water vapour is included: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html Honestly, I read only the abstract of the article you posted, which is a very interesting hypothesis on the mechanism regulating sea surface temperature, but it is very localized to the tropics (vivid convection, storms), a region in which I have very little expertise and which is difficult to study because it has non-hydrostatic dynamics. The only thing I can comment on there is that the authors define constant relative humidity for the bottom layer, supplied by the oceanic surface, which limits the implementation of the concept in other Earth regions. Also, we may be confusing in this conversation the greenhouse gas with the radiative forcing of each greenhouse gas: I see your point about the latent heat trapped in the water vapour, and I agree, but the effect of the water is that it traps, even as latent heat, an amount of IR that would otherwise escape back to space. That is the greenhouse-gas identity; and here is an image showing the absorption bands in the atmosphere and how important water is, without vain authority-based arguments that miss the explanation in the end: http://www.google.nl/imgres?imgurl=http://www.solarchords.com/uploaded/82/87-33833-450015_44absorbspec.gif&imgrefurl=http://www.solarchords.com/agw-science/4/greenhouse--1-radiation/33784/&h=468&w=458&sz=28&tbnid=x2NtfKh5OPM7lM:&tbnh=98&tbnw=96&zoom=1&usg=__KldteWbV19nVPbbsC4jsOgzCK6E=&docid=cMRZ9f22jbtYPM&sa=X&ei=SwynUq2TMqiS0QXVq4C4Aw&ved=0CDkQ9QEwAw
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you, F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum ... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open-source developers have been organizing themselves with email since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open-source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-lines-of-code range. My personal opinion of why these open models are having so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open-science journals vs traditional journals, conservatism of the science community compared to programmers (to me one of the reasons for this might be the average age of the two groups, which is probably more than 10 years apart ...), and then email vs other collaboration tools .... will have to look at the paper more carefully now ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize" - what does that mean, indeed? In the case of Linux, what makes the project work are the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None is related to emails, wikipedia, or forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be and should be a team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. But hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it through their own personal attitude or through pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured, team work in which we actually manage the hierarchy and meritocracy and perform the project review (to use Joris's words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this cannot be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for 15-member team work, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings - hate, anger, envy, sympathy, love, etc. - towards the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but the same group of people cannot. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, here is why it worked: only brains, and not people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...); there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given the variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that...
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, whose names go on the paper). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need (scientific) credit, and to have credit you need papers with your name on them... so in my opinion this model of research does not fit with the way research is governed. Anyway, we had a discussion on the Ariadnet about how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; the jury will then count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
Thijs Versloot

Time 'Emerges' from #Quantum Entanglement #arXiv - 1 views

  •  
    Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it.
  • ...5 more comments...
  •  
    I always feel like people make too big a deal out of entanglement. In my opinion it is just a combination of a conserved quantity and an initial lack of knowledge. Imagine that I had a machine that always creates one blue and one red ping-pong ball at the same time (|b> and |r> respectively). The machine now puts both balls into identical packages (so I cannot observe them) and one of the packages is sent to Tokyo. I do not know which ball was sent to Tokyo and which stayed with me - they are in a superposition (|br> + |rb>), meaning that either the blue ball is with me and the red one in Tokyo or vice versa - they are entangled. (A formal sketch of this state follows at the end of this thread.) So far no magic has happened. Now I call my friend in Tokyo who got the ball: "What color was the ball you received in that package?" He replies: "The ball that I got was blue. Why did you send me a ball in the first place?" Now, the fact that he told me makes the superposition wavefunction collapse (yes, that is what the Copenhagen interpretation would tell us). As a result I know without opening my box that it contains a red ball. But this is really because there is an underlying conservation law and because now I know the other state. I don't see how just looking at the conserved quantity puts me in a timeless state outside of the 'universe' - this is just one way of interpreting it. By the way, the wavefunction for my box with the undetermined ball does not collapse when the other ball is observed by my friend in Tokyo. Only when he tells me does the wavefunction collapse - he did not even know that I had a complementary ball. On the other hand, if he knew about the way the experiment was conducted then he would have known that I had to have a red ball - the wavefunction collapses as soon as he observed his ball. For him it is determined that my ball must be red. For me, however, the superposition is intact until he tells me. ;-)
  •  
    Sorry, Johannes, you have just developed a simple hidden-parameters theory, and it has been experimentally shown that these don't work. Entangled states are neither the blue nor the red ball; they are really bluered (or redblue) until the moment the measurement is done.
  •  
    Hm, to me this looks like a bad joke... The "emergent time" concept used is still the old proposal by Page and Wootters, where time emerges from something fundamentally unobservable (the wave function of the Universe). That's as good as claiming that time emerges from God. If I understand correctly, the paper now deals with the situation where a finite system is taken as a "Mini-Universe" and the experimentalist in the lab can play "God of the Mini-Universe". This works, of course, but it doesn't really tell us anything about emergent time, does it?
  •  
    Actually, it has not been proven conclusively that hidden-variable theories don't work - although this is the opinion of most physicists these days. But a non-local hidden variable would still be allowed - I don't see why that could not be equivalent to a conserved quantity within the system. As far as the two balls go, it is fine to say they are undetermined instead of saying they are in a bluered or redblue state - for all intents and purposes it does not affect us (because if it did, the wavefunction would have collapsed), so we can't say anything about it in the first place.
  •  
    Non-local hidden variables may work, but in my opinion they don't add anything to the picture. The (at least to non-physicists) counterintuitive fact that there cannot be a variable that determines ab initio the color of the ball going to Tokyo will remain (in your example this may not even be true, since the example is too simple...).
  •  
    I guess I tentatively agree with you on both points. In the end there might anyway be surprisingly little overlap between the way that we describe what nature does and HOW it does it... :-D
  •  
    Congratulations! 100% agree.
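    For reference, the two-ball example from the first comment can be written out explicitly; a minimal sketch in standard bra-ket notation (the subsystem labels A for here and B for Tokyo are assumptions of the sketch, not from the thread):

        % The two-ball state of the thread, subsystems A (here) and B (Tokyo):
        \[
          |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl( |b\rangle_A |r\rangle_B + |r\rangle_A |b\rangle_B \bigr)
        \]
        % A measurement at B yielding "blue" (probability 1/2) projects the state,
        \[
          |\psi\rangle \;\longrightarrow\; |r\rangle_A |b\rangle_B ,
        \]
        % so the ball at A is red with certainty.

    Written this way, the objection raised in the second comment is also visible: for a single fixed observable (ball color), these statistics are reproduced by a classical model in which the colors were fixed at the source. Bell-type experiments exclude such local hidden-variable models only when non-commuting observables can be measured, which colored balls do not provide.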
Thijs Versloot

How Einstein Thought: Why "Combinatory Play" Is the Secret of Genius - 0 views

  •  
    by Maria Popova "Combinatory play seems to be the essential feature in productive thought." For as long as I can remember - and certainly long before I had the term for it - I've believed that creativity is combinatorial: Alive and awake to the world, we amass a collection of cross-disciplinary building blocks - knowledge, memories, bits of information, sparks of inspiration, and other existing ideas - that we then combine and recombine, mostly unconsciously, into something "new."
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like in satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one can train one of those image recognition algorithms to do it automatically (a toy sketch of the idea follows at the end of this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, there was a big debate on CUDA GPUs vs FPGAs for hardware-accelerated image processing. Most of it orbited around who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (GPU/CUDA). I gathered that NVIDIA GPUs best suit applications that do not rely on custom hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of radiation hardness in NVIDIA's GPUs is... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy or other types of telescopes). For a specific purpose like the one you mentioned, this FPGA vs GPU question should be assessed before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task.... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting btw http://reconfigurablecomputing4themasses.net/.
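    As a toy follow-up to the seal-counting idea above: before reaching for a trained network or GPU/FPGA hardware, one could baseline the problem with classical blob counting. A minimal sketch in Python, where the brightness threshold and the seal-plausible size bounds are pure assumptions:

        # Count seal-sized bright blobs in a b/w satellite image via
        # thresholding and connected components - a classical baseline,
        # not the deep-learning pipeline discussed in the article.
        import numpy as np
        from scipy import ndimage

        def count_blobs(image, threshold=0.7, min_px=4, max_px=50):
            """Count connected bright regions with a seal-plausible area."""
            mask = image > threshold             # bright objects on dark ice/sea
            labels, n = ndimage.label(mask)      # 4-connected components
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            return int(np.sum((sizes >= min_px) & (sizes <= max_px)))

        # Synthetic self-test: a 200x200 "image" with three 3x3 bright blobs.
        img = np.zeros((200, 200))
        for r, c in [(20, 30), (100, 150), (180, 60)]:
            img[r:r+3, c:c+3] = 1.0
        print(count_blobs(img))  # -> 3

    On real imagery the threshold and size bounds would need calibration against ground truth, and a learned detector would likely win; the sketch only shows the cheapest possible starting point.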
LeopoldS

From Sci-Mate to Mendeley - a brief history of reference managers - Trading knowledge B... - 1 views

  •  
    very nice comparison of reference managers ...
Ma Ru

Test your knowledge of biomimicry - 5 views

  •  
    How much remained in your head? Don't tell Tobias, but I scored only 5/10 :-(
  •  
    There's one question missing: how many of these designs are claimed to be based on biomimicry just because this sells?? Tobi, please!
Tobias Seidl

Wombats detected from space - 4 views

  •  
    Demonstrates how useful space technology can be.
  • ...4 more comments...
  •  
    Also, this reminds me of a poem that once sprung out on my Linux console login: The wombat lives across the seas, Among the far Antipodes. He may exist on nuts and berries, Or then again, on missionaries; His distant habitat precludes Conclusive knowledge of his moods, But I would not engage the wombat In any form of mortal combat.
  •  
    sprung out of your console????? my mac never talks like this to me ....
  •  
    See? Even a console can be user-friendly ;-) If I remember well, it was Slackware Linux, and at every console start-up the fortune program was launched: http://linux.die.net/man/6/fortune
  •  
    so you are still not convinced about macs being superior after working for a year with martin?
  •  
    Apparently not - I just got a brand new sexy Sony Vaio S :-)
  •  
    I am sorry for you ... :-)
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. Amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting - will it survive the short hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other, and so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (I don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TIs; however, I got lost in the quantum field jargon. Yesterday, I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early '90s the mirror charge of what is now called a topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola in recent years discussed engineering aspects of PEMCs (perfect electromagnetic conductors), which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no consideration of electron spin, etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Also merons and skyrmions are topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems1, 2 contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." wow, this paper by Stern reads really weired ... any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    just looked up the Wikipedia entry on axions .... at least they show some creativity in naming: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thanks, Leopold. Sorry, Luzi, for being ironic about the triviality of the axions. Now Leo has confirmed to me that it is indeed a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it?? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" ones, and since string theory anyway is far too complicated to be solved, people just consider purely topological theories, then claiming that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at, since electrodynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of these quantum field effects are great. Being able to sustain two spin-polarized currents is a technological breakthrough.
  •  
    not sure how much I can contribute to your apparently educated debate here, but if I remember well from my master's work, these non-Abelian theories were anything but "simple", as Luzi puts it ... and from a different perspective: to me, the fact that such non-Abelian systems can be described so nicely indicates that they should in one way or another also appear in Nature (I would be very surprised if not) - though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!) But in topological insulators the point of view is slightly different since one always has the insulator (topological theory), its surrounding (propagating theory) and most importantly the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
Joris _

Making space exploration pay with asteroid mining - 1 views

  • Asteroids happen to be particularly rich in platinum group metals
  • a motive for space travel beyond "the pursuit of knowledge"
  • So to those despairing about the recent cutting of space budgets across the world, invest your savings in asteroid mining
Joris _

American Institute of Aeronautics and Astronautics - Space and the Biological Economy - 0 views

  • the U.S. space program has a robust life science program that is diligently working to innovate new approaches, research and technologies in the fields of biotechnology and bio-nanotechnology science, which are providing new solutions for old problems – including food security, medical needs and energy needs
  • more money be allocated to develop environmentally sound and energy efficient engine programs for commercial and private aviation
  • waste water program
  • ...3 more annotations...
  • we lack fundamental knowledge about the entire effect of the photosynthesis system on food growth, and that space-based research could provide vital clues to scientists on how to streamline the process to spur more efficient food growth
  • From the start of the space age until 2010 only around 500 people have journeyed into space, but with the advent of private space travel in the next 24 months another 500 people are expected to go into space
  • Wagner identified prize systems that award monetary prizes to companies or individuals as an effective way to spur innovation and creativity, and urged the Congressional staffers present to consider creating more prize systems to stimulate needed innovation
  •  
    a bunch of ideas, initiatives, and good points about upcoming changes in space ...
Francesco Biscani

The End Of Gravity As a Fundamental Force - 6 views

  •  
    "At a symposium at the Dutch Spinoza-instituut on 8 December, 2009, string theorist Erik Verlinde introduced a theory that derives Newton's classical mechanics. In his theory, gravity exists because of a difference in concentration of information in the empty space between two masses and its surroundings. He does not consider gravity as fundamental, but as an emergent phenomenon that arises from a deeper microscropic reality. A relativistic extension of his argument leads directly to Einstein's equations."
  • ...8 more comments...
  •  
    Difficult for me to fully understand / believe in the holographic principle at macroscopic scales ... though it potentially looks like a revolutionary idea .....
  •  
    never heard about it... seems interesting. At first sight it seems to be based on a fundamental principle that could lead to a new phenomenology, so it could be tested. Perhaps Luzi knows more about this? Did we ever work on this concept?
  •  
    The paper is quite long and I don't have the time right now to read it in detail. Just a few comments:
    * We (ACT) definitely never did anything in this direction. But: is there a new phenomenology? I'm not sure; if the aim is just to get Einstein's theory as an emergent theory, then GR should not change (or only change in extreme conditions).
    * Emergent gravity is not new; Erik admits that too. The claim to have found a solution appears quite frequently, but most proposals actually are not emergent at all. At least, I have the impression that Erik is aware of the relevant steps to be performed.
    * It's very difficult to judge from a short glance at the paper up to which point the claims are serious and where it just starts to be advertisement. Section 6 is pretty much a collection of self-praise.
    * Most importantly: I don't understand how exactly space and time should be emergent. I think it's not new to observe that space is related to special canonical variables in thermodynamics. If anybody can see anything "emergent" in the first paragraphs of section 3, then please explain it to me. For me, this is not emergent space, but space introduced with a sledgehammer. Time anyway seems to be a precondition, else there is nothing like energy and nothing like dynamics.
    * Finally, holography appears to be a precondition; to my knowledge no proof exists that normal (non-supersymmetric, non-stringy, non-whatever) GR has a holographic dual.
  •  
    Update: meanwhile I understood roughly what this should be about. It's well known that BH physics follows the laws of thermodynamics, suggesting the existence of underlying microstates. But if this is true, shouldn't the gravitational force then be emergent from these microstates in the same way as any thermodynamic effect is emergent from the behavior of its constituents (e.g. a gas)? If this can be proven, then indeed gravity is emergent. Problem: one has to prove that *any* configuration in GR may be interpreted thermodynamically, not just BHs. That's probably where holography comes into play. To me this smells pretty much like N=4 SYM vs. QCD. The former is not QCD, but can be solved, so all the stringy people study just that one and claim to learn something about QCD. Here, we look at holographic models; GR is not holographic, but who cares... Engineering problems...
  •  
    is there any experimental or observational evidence that points to this "solution"?
  •  
    Are you joking??? :D
  •  
    I was a bit fast to say it could be tested... apparently we don't even know a theory that is holographic, perhaps a string theory (see http://arxiv.org/abs/hep-th/9409089v2). So very far from any test...
  •  
    Luzi, I miss you!!!
  •  
    Leo, do you mean you liked my comment on your question more than Pacome's? Well, the ACT has to evolve and fledge, so no bullshitting anymore, but serious and calculating answers... :-) Sorry Pacome, nothing against you!! I just LOVE this Diigo because it gives me the opportunity for a happy revival of my ACT mood.
  •  
    haha, today would have been great to show your mood... we had a talk on the connection between mind and matter!!