
Advanced Concepts Team - Group items tagged "state"


santecarloni

[1111.3328] The quantum state cannot be interpreted statistically - 1 views

  •  
    Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of the system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory....
Joris _

Physicists Propose Scheme for Teleporting Light Beams - 0 views

  • it’s possible that a physical object (e.g. a quantum field) in one location could emerge at another location in the same quantum state
  •  
    Have to be careful, this article is really badly written. Teleportation is not matter that disappears from somewhere to reappear somewhere else... Usually we talk about entangled particles (by its essence quantum theory is non-local), and teleportation means that if you force (with a measurement) one particle to take a particular quantum state, then the other one will have the same quantum state. Here it seems that the same can be done with a quantum field, which is the second quantization of quantum theory: the state of a quantum field can contain a mixture of n-particle states (meaning that the number of particles is not defined, only its mean value). If you measure the number of particles, you force the quantum field to "choose" a particular quantum state (which can be the vacuum). Teleportation means that you will obtain exactly the same measurement on another quantum field, which is entangled with your first one.
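The "force one particle, the other shows the same state" correlation described in this comment can be sketched numerically. This is only a toy illustration of the measurement correlations of an entangled pair (here a Bell state), not the teleportation protocol itself; it assumes numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) in the computational basis 00,01,10,11.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Born rule: probabilities of the four joint outcomes.
probs = np.abs(phi_plus) ** 2

# Sample many joint measurements; each draw picks one of the four outcomes.
outcomes = rng.choice(4, size=10000, p=probs)
a = outcomes // 2   # result on the first particle (0 or 1)
b = outcomes % 2    # result on the second particle (0 or 1)

# Measuring one particle fixes the other: the outcomes always agree.
print(np.all(a == b))  # True
```

The individual outcomes are random, but the joint statistics are perfectly correlated, which is exactly the point the comment makes about forcing a state via measurement.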
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10 - Cached
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting; will it survive the short hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other, and so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (I don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TIs; however, I got lost in the quantum field jargon. Yesterday I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early 90's the mirror charge of what is now called a topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola in recent years discussed engineering aspects of PEMCs (perfect electromagnetic conductors), which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no consideration of electron spin, etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Also merons and skyrmions are topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems [1, 2] contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." Wow, this paper by Stern reads really weird ... any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    just looked up the Wikipedia entry on axions .... at least they have some creativity in naming: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thanks Leopold. Sorry Luzi for being ironic concerning the triviality of the axions. Now Leo has confirmed to me that it is indeed a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it?? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" ones, and since string theory is anyway far too complicated to be solved, people just consider purely topological theories, then claim that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at, since electrodynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of these quantum field effects for light are great. The fact of being able to sustain two spin-polarized currents is a technological breakthrough.
  •  
    not sure how much I can contribute to your apparently educated debate here, but if I remember well from my work for the master's, these non-Abelian theories were all but "simple", as Luzi puts it ... and from a different perspective: to me, the very fact that we can describe such non-Abelian systems nicely indicates that they should in one way or another also appear in Nature (I would be very surprised if not) - though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!) But in topological insulators the point of view is slightly different since one always has the insulator (topological theory), its surrounding (propagating theory) and most importantly the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
santecarloni

[1101.6015] Radio beam vorticity and orbital angular momentum - 1 views

  • It has been known for a century that electromagnetic fields can transport not only energy and linear momentum but also angular momentum. However, it was not until twenty years ago, with the discovery in laser optics of experimental techniques for the generation, detection and manipulation of photons in well-defined, pure orbital angular momentum (OAM) states, that twisted light and its pertinent optical vorticity and phase singularities began to come into widespread use in science and technology. We have now shown experimentally how OAM and vorticity can be readily imparted onto radio beams. Our results extend those of earlier experiments on angular momentum and vorticity in radio in that we used a single antenna and reflector to directly generate twisted radio beams and verified that their topological properties agree with theoretical predictions. This opens the possibility to work with photon OAM at frequencies low enough to allow the use of antennas and digital signal processing, thus enabling software controlled experimentation also with first-order quantities, and not only second (and higher) order quantities as in optics-type experiments. Since the OAM state space is infinite, our findings provide new tools for achieving high efficiency in radio communications and radar technology.
  •  
    and how can we use this?
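One way to make the abstract's "twisted radio beam" concrete: an OAM-carrying beam has an azimuthal phase exp(i*l*phi), so a circular antenna array can impart it by feeding each element with a phase proportional to its azimuth. A minimal sketch; the element count and mode index are assumptions for illustration, not from the paper.

```python
import numpy as np

l = 1                                                  # OAM mode index (assumed)
n_elements = 8                                         # hypothetical circular array
phi = 2 * np.pi * np.arange(n_elements) / n_elements   # element azimuths

# A beam carrying orbital angular momentum has azimuthal phase exp(i*l*phi),
# so each array element is fed with a phase proportional to its azimuth.
feed_phases = (l * phi) % (2 * np.pi)

# Going once around the array, the phase winds by 2*pi*l in total; that
# winding number is the beam's vorticity (topological charge).
field = np.exp(1j * feed_phases)
step = np.angle(np.roll(field, -1) / field)   # phase step between neighbours
winding = np.sum(step) / (2 * np.pi)          # topological charge recovered
print(round(winding))  # 1, matching l
```

Since l can be any integer, the mode space is unbounded, which is the "infinite OAM state space" the abstract invokes for communication capacity.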
jcunha

Synthetic Landau levels for photons - 1 views

  •  
    Very nice experiment on the verge of condensed matter physics! The presence of Landau levels is a necessary condition to obtain a quantum Hall state. Quantum Hall states first appeared in 2D electron gases when a perpendicular magnetic field is applied, inducing a new topological state of the "electronic gas". This new topological state is believed to "protect" some parameters of the system, such as conductance, making it possible to measure fundamental constants with very high precision even in imperfect experimental conditions. In this fundamental experiment, a synthetic magnetic field was created that acts on continuum photons, producing "an integer quantum Hall system in curved space, a long-standing challenge in condensed matter physics".
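For reference, the Landau levels mentioned above are the equally spaced energies E_n = hbar * omega_c * (n + 1/2) of a charge in a magnetic field, with cyclotron frequency omega_c = eB/m. A quick numerical check (the field strength is an arbitrary illustrative value):

```python
import numpy as np

# Physical constants (SI)
hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

B = 10.0                          # tesla, an assumed field strength
omega_c = e * B / m_e             # cyclotron frequency
n = np.arange(5)
E_n = hbar * omega_c * (n + 0.5)  # Landau level energies

# The spacing hbar*omega_c is constant: the hallmark of Landau quantization.
spacings = np.diff(E_n)
print(np.allclose(spacings, hbar * omega_c))  # True
```

In the photonic experiment a synthetic gauge field plays the role of B, but the level structure being emulated is the same.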
santecarloni

NC State News :: NC State News and Information » Researcher Finds Faster, Che... - 0 views

  •  
    A North Carolina State University researcher has developed a more efficient, less expensive way of cooling electronic devices - particularly devices that generate a lot of heat, such as lasers and power devices.
santecarloni

Coherent Schrödinger's cat still confounds - physicsworld.com - 1 views

  •  
    The famous paradox of Schrödinger's cat starts from principles of quantum physics and ends with the bizarre conclusion that a cat can be simultaneously in two physical states - one in which the cat is alive and the other in which it is dead. In real life, however, large objects such as cats clearly don't exist in a superposition of two or more states and this paradox is usually resolved in terms of quantum decoherence. But now physicists in Canada and Switzerland argue that even if decoherence could be prevented, the difficulty of making perfect measurements would stop us from confirming the cat's superposition.
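The decoherence resolution mentioned above can be sketched on a two-level "cat": environmental coupling exponentially damps the off-diagonal terms of the density matrix, leaving a classical 50/50 mixture with no superposition. The decay rate and time below are illustrative numbers, not from the article.

```python
import numpy as np

# Cat state (|alive> + |dead>)/sqrt(2) as a 2x2 density matrix.
psi = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # diagonals 0.5, off-diagonals 0.5 (coherence)

# Decoherence exponentially suppresses the off-diagonal "coherence" terms
# while leaving the populations untouched (rate and time are illustrative).
gamma, t = 1.0, 10.0
decohered = rho.copy()
decohered[0, 1] *= np.exp(-gamma * t)
decohered[1, 0] *= np.exp(-gamma * t)

# What is left is effectively a classical mixture: 50% alive, 50% dead.
print(abs(decohered[0, 1]) < 1e-4)  # True: the coherence is gone
```

The article's point is the converse case: even if those off-diagonals could be kept alive, imperfect measurements would stop us from ever certifying them.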
LeopoldS

An optical lattice clock with accuracy and stability at the 10-18 level : Nature : Natu... - 0 views

  •  
    Progress in atomic, optical and quantum science [1, 2] has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard [3-5]. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks [6, 7], their accuracy has remained 16 times worse [8-10]. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10−18, which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard: stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units [11], the search for time variation of fundamental constants [12], clock-based geodesy [13] and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering [14] (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.
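To put the quoted fractional accuracy of 6.4 × 10−18 in perspective, a back-of-the-envelope calculation of how long such a clock would take to drift by one second:

```python
frac_uncertainty = 6.4e-18           # reported fractional accuracy
seconds_per_year = 365.25 * 24 * 3600

# Time for the clock to accumulate one second of error at this accuracy.
seconds_to_drift_1s = 1 / frac_uncertainty
years = seconds_to_drift_1s / seconds_per_year
print(f"{years:.2e}")  # 4.95e+09 - about five billion years
```

That is roughly the age of the Earth per second of drift, which is what makes the clock interesting for geodesy and tests of fundamental constants.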
LeopoldS

David Miranda, schedule 7 and the danger that all reporters now face | Alan Rusbridger ... - 0 views

  •  
    During one of these meetings I asked directly whether the government would move to close down the Guardian's reporting through a legal route - by going to court to force the surrender of the material on which we were working. The official confirmed that, in the absence of handover or destruction, this was indeed the government's intention. Prior restraint, near impossible in the US, was now explicitly and imminently on the table in the UK. But my experience over WikiLeaks - the thumb drive and the first amendment - had already prepared me for this moment. I explained to the man from Whitehall about the nature of international collaborations and the way in which, these days, media organisations could take advantage of the most permissive legal environments. Bluntly, we did not have to do our reporting from London. Already most of the NSA stories were being reported and edited out of New York. And had it occurred to him that Greenwald lived in Brazil?

    The man was unmoved. And so one of the more bizarre moments in the Guardian's long history occurred - with two GCHQ security experts overseeing the destruction of hard drives in the Guardian's basement just to make sure there was nothing in the mangled bits of metal which could possibly be of any interest to passing Chinese agents. "We can call off the black helicopters," joked one as we swept up the remains of a MacBook Pro.

    Whitehall was satisfied, but it felt like a peculiarly pointless piece of symbolism that understood nothing about the digital age. We will continue to do patient, painstaking reporting on the Snowden documents, we just won't do it in London. The seizure of Miranda's laptop, phones, hard drives and camera will similarly have no effect on Greenwald's work.

    The state that is building such a formidable apparatus of surveillance will do its best to prevent journalists from reporting on it. Most journalists can see that. But I wonder how many have truly understood
  •  
    Sarah Harrison is a lawyer who has been staying with Snowden in Hong Kong and Moscow. She is a UK citizen and her family is there. After the Miranda case, where the boyfriend of the reporter was detained at the airport, can Sarah return home safely? Will her family be pressured by the secret service? http://www.bbc.co.uk/news/world-latin-america-23759834
Thijs Versloot

New Quantum Theory to explain flow of time - 2 views

  •  
    Basically quantum entanglement, or more accurately the dispersal and expansion of mixed quantum states, results in an apparent flow of time. Quantum information leaks out and the result is the move from a pure state (hot coffee) to a mixed state (cooled down) in which equilibrium is reached. Theoretically it is possible to get back to a pure state (coffee spontaneously heating up), but this statistical unlikelihood gives the appearance of irreversibility and hence a flow of time. I think an interesting question is then: how much useful work can you extract from this system? (http://arxiv.org/abs/1302.2811) For macroscopic thermodynamic systems it should lead to the Carnot cycle, but on smaller scales it might be possible to formulate a more general expression. Anybody interested to look into it? Anna, Jo? :)
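The pure-to-mixed transition described above can be made concrete with the von Neumann entropy of a subsystem: a global pure state whose parts are entangled looks mixed (high entropy) to anyone who only sees one part. A minimal two-qubit sketch, assuming numpy:

```python
import numpy as np

def subsystem_entropy(state):
    """Von Neumann entropy (in bits) of qubit A for a two-qubit pure state."""
    psi = state.reshape(2, 2)               # indices: (A, B)
    rho_a = psi @ psi.conj().T              # reduced density matrix of A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    # Entropy is nonnegative; max() guards against a -0.0 rounding artifact.
    return max(0.0, float(-np.sum(evals * np.log2(evals))))

# Product (unentangled) state |00>: subsystem A is still pure -> zero entropy.
product = np.array([1, 0, 0, 0], dtype=complex)

# Maximally entangled Bell state: A alone looks maximally mixed -> one bit.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

print(round(subsystem_entropy(product), 10))  # 0.0
print(round(subsystem_entropy(bell), 10))     # 1.0
```

The "hot coffee to cooled coffee" story is this entropy rising as the system entangles with its surroundings.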
  •  
    What you propose is called Maxwell's demon: http://en.wikipedia.org/wiki/Maxwell%27s_demon Unfortunately (or maybe fortunately) thermodynamics is VERY robust. I guess if you really only want to harness AND USE the energy in a microscopic system you might have some chance of beating Carnot. But any way of transferring harvested energy to a macroscopic system seems to be limited by it (AFAIK).
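For reference, the Carnot bound invoked above is simple to evaluate for the coffee-cup example; the temperatures are assumed for illustration.

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work between two reservoirs (kelvin)."""
    return 1 - t_cold / t_hot

# Hot coffee (~350 K) against room temperature (~293 K): only about 16% of
# the heat flowing out is even in principle available as work.
print(round(carnot_efficiency(350.0, 293.0), 3))  # 0.163
```

The small temperature gap is why harvesting work from a cooling coffee cup is hopeless at macroscopic scale, robustly so, as the comment says.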
Thijs Versloot

Time 'Emerges' from #Quantum Entanglement #arXiv - 1 views

  •  
    Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it.
  • ...5 more comments...
  •  
    I always feel like people make too big a deal out of entanglement. In my opinion it is just a combination of a conserved quantity and an initial lack of knowledge. Imagine that I had a machine that always creates one blue and one red ping-pong ball at the same time (|b > and |r > respectively). The machine now puts both balls into identical packages (so I cannot observe them) and one of the packages is sent to Tokyo. I do not know which ball was sent to Tokyo and which stayed with me - they are in a superposition (|br >+|rb >), meaning that either the blue ball is with me and the red one in Tokyo or vice versa - they are entangled. So far no magic has happened. Now I call my friend in Tokyo who got the ball: "What color was the ball you received in that package?" He replies: "The ball that I got was blue. Why did you send me a ball in the first place?" Now, the fact that he told me makes the superposition wavefunction collapse (yes, that is what the Copenhagen interpretation would tell us). As a result I know without opening my box that it contains a red ball. But this is really because there is an underlying conservation law and because now I know the other state. I don't see how just looking at the conserved quantity puts me in a timeless state outside of the 'universe' - this is just one way of interpreting it. By the way, the wavefunction for my box with the undetermined ball does not collapse when the other ball is observed by my friend in Tokyo. Only when he tells me does the wavefunction collapse - he did not even know that I had a complementary ball. On the other hand, if he knew about the way the experiment was conducted then he would have known that I had to have a red ball - the wavefunction collapses as soon as he observes his ball. For him it is determined that my ball must be red. For me however the superposition is intact until he tells me. ;-)
  •  
    Sorry, Johannes, you just developed a simple hidden-variable theory, and it has been experimentally proven that these don't work. Entangled states are neither the blue nor the red ball; they are really bluered (or redblue) until the point the measurement is done.
  •  
    Hm, to me this looks like a bad joke... The "emergent time" concept used is still the old proposal by Page and Wootters, where time emerges from something fundamentally unobservable (the wavefunction of the Universe). That's as good as claiming that time emerges from God. If I understand correctly, the paper now deals with the situation where a finite system is taken as a "Mini-Universe" and the experimentalist in the lab can play "God of the Mini-Universe". This works, of course, but it doesn't really tell us anything about emergent time, does it?
  •  
    Actually, it has not been proven conclusively that hidden variable theories don't work - although this is the opinion of most physicists these days. But a non-local hidden variable would still be allowed - I don't see why that could not be equivalent to a conserved quantity within the system. As far as the two balls go, it is fine to say they are undetermined instead of saying they are in a bluered or redblue state - for all intents and purposes it does not affect us (because if it did, the wavefunction would have collapsed), so we can't say anything about it in the first place.
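The experimental case against local hidden variables debated here is usually phrased via the CHSH inequality: any local hidden-variable model obeys |S| <= 2, while quantum mechanics reaches 2*sqrt(2). A small numerical sketch, using the textbook singlet-state correlation E(a,b) = -cos(a-b) and the standard angle choices for maximal violation:

```python
import numpy as np

def E(a, b):
    """Quantum correlation of spin measurements at angles a, b on a singlet pair."""
    return -np.cos(a - b)

# Standard CHSH angle choices (assumed here for maximal violation).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Local hidden variables: |S| <= 2. Quantum mechanics: |S| = 2*sqrt(2).
print(abs(S) > 2)                          # True: the classical bound is violated
print(np.isclose(abs(S), 2 * np.sqrt(2)))  # True
```

This is why the ping-pong-ball picture (a local hidden variable) cannot reproduce all quantum correlations, while a non-local hidden variable, as the comment notes, is not excluded by this argument.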
  •  
    Non-local hidden variables may work, but in my opinion they don't add anything to the picture. The (at least to non-physicists) counterintuitive fact that there cannot be a variable that determines ab initio the color of the ball going to Tokyo will remain (in your example this may not even be true, since the example is too simple...).
  •  
    I guess I tentatively agree with you on both points. In the end there might anyway be surprisingly little overlap between the way that we describe what nature does and HOW it does it... :-D
  •  
    Congratulations! 100% agree.
Tobias Seidl

Global Futures Studies & Research by the MILLENNIUM PROJECT - 0 views

  •  
    The Millennium Project is a global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, NGOs, and universities. The Millennium Project manages a coherent and cumulative process that collects and assesses judgements from its several hundred participants to produce the annual "State of the Future", "Futures Research Methodology" series, and special studies such as the State of the Future Index, Future Scenarios for Africa, Lessons of History, Environmental Security, Applications of Futures Research to Policy, and a 700+ annotated scenarios bibliography.
  •  
    very nice page - we should use some of its resources!!
LeopoldS

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 4 views

  •  
    interesting story with many juicy details on how they proceed ... (similarly interesting nickname for the "operation" chosen by our british friends) "The spies used the IP addresses they had associated with the engineers as search terms to sift through their surveillance troves, and were quickly able to find what they needed to confirm the employees' identities and target them individually with malware. The confirmation came in the form of Google, Yahoo, and LinkedIn "cookies," tiny unique files that are automatically placed on computers to identify and sometimes track people browsing the Internet, often for advertising purposes. GCHQ maintains a huge repository named MUTANT BROTH that stores billions of these intercepted cookies, which it uses to correlate with IP addresses to determine the identity of a person. GCHQ refers to cookies internally as "target detection identifiers." Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack. The Intercept has confirmed the identities of the men, and contacted each of them prior to the publication of this story; all three declined comment and requested that their identities not be disclosed. GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called "Quantum Insert" attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace. In this case, the documents indicate that GCHQ set up a malicious page that looked like LinkedIn to trick the Belgacom engineers. (The NSA also uses Quantum Inserts to target people, as The Intercept has previously reported.) A GCHQ document reviewing operations conducted between January and March 2011 noted that the hack on Belgacom was successful, and stated that the agency had obtained access to the company's
  •  
    I knew I wasn't using TOR often enough...
  •  
    Cool! It seems that after all it is best to restrict employees' internet access to work-critical areas only... @Paul: TOR works at the network level, so it would not help much here, as cookies (application level) were exploited.
johannessimon81

Mathematicians Predict the Future With Data From the Past - 6 views

  •  
    Asimov's Foundation meets ACT's Tipping Point Prediction?
  • ...2 more comments...
  •  
    Good luck to them!!
  •  
    "Mathematicians Predict the Future With Data From the Past". GREAT! And physicists probably predict the past with data from the future?!? "scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future". Big deal! That's what any scientist does anyway... "cliodynamics"!? Give me a break!
  •  
    still, some interesting thoughts in there ... "Then you have the 50-year cycles of violence. Turchin describes these as the building up and then the release of pressure. Each time, social inequality creeps up over the decades, then reaches a breaking point. Reforms are made, but over time, those reforms are reversed, leading back to a state of increasing social inequality. The graph above shows how regular these spikes are - though there's one missing in the early 19th century, which Turchin attributes to the relative prosperity that characterized the time. He also notes that the severity of the spikes can vary depending on how governments respond to the problem. Turchin says that the United States was in a pre-revolutionary state in the 1910s, but there was a steep drop-off in violence after the 1920s because of the progressive era. The governing class made decisions to rein in corporations and allowed workers to air grievances. These policies reduced the pressure, he says, and prevented revolution. The United Kingdom was also able to avoid revolution through reforms in the 19th century, according to Turchin. But the most common way for these things to resolve themselves is through violence. Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops - just like in ecology. "In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks," he writes. "As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats." There are competing theories as well. A group of researchers at the New England Complex Systems Institute - who practice a discipline called econophysics - have built their own model of political violence and
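Turchin's mice-and-weasels analogy is the classic Lotka-Volterra predator-prey feedback loop, where booms and busts arise with no external clock. A minimal Euler-integration sketch; the parameters and populations are illustrative, not fitted to any data:

```python
# Minimal predator-prey (Lotka-Volterra) feedback loop, as in the
# mice/weasels analogy. All parameters are illustrative.
alpha, beta, delta, gamma = 1.0, 0.1, 0.075, 1.5   # growth/interaction rates
prey, pred = 10.0, 5.0                             # initial populations
dt, steps = 0.001, 40000                           # simple Euler integration

history = []
for _ in range(steps):
    dprey = (alpha * prey - beta * prey * pred) * dt
    dpred = (delta * prey * pred - gamma * pred) * dt
    prey, pred = prey + dprey, pred + dpred
    history.append(prey)

# The populations neither settle nor explode: they cycle, driven purely by
# the internal feedback between the two species.
print(min(history) > 0 and max(history) < 1000)  # True: bounded oscillation
```

The cliodynamics claim is essentially that inequality and unrest form a similar coupled feedback pair on a 50-year timescale.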
  •  
    It's not the scientific activity described in the article that is uninteresting - on the contrary! But the way it is described is just a bad joke. Once again the results themselves are seemingly not sexy enough, and thus something is sold as the big revolution, though it's just the application of the oldest scientific principles in a slightly different way than before.
Nina Nadine Ridder

Material could harvest sunlight by day, release heat on demand hours or days later - 5 views

  •  
    Imagine if your clothing could, on demand, release just enough heat to keep you warm and cozy, allowing you to dial back on your thermostat settings and stay comfortable in a cooler room. Or, picture a car windshield that stores the sun's energy and then releases it as a burst of heat to melt away a layer of ice.
  •  
    interesting indeed: Such chemically-based storage materials, known as solar thermal fuels (STF), have been developed before, including in previous work by Grossman and his team. But those earlier efforts "had limited utility in solid-state applications" because they were designed to be used in liquid solutions and not capable of making durable solid-state films, Zhitomirsky says. The new approach is the first based on a solid-state material, in this case a polymer, and the first based on inexpensive materials and widespread manufacturing technology. Read more at: http://phys.org/news/2016-01-material-harvest-sunlight-day-demand.html#jCp
Thijs Versloot

Quantum entanglement at ambient conditions in a macroscopic solid-state spin ensemble - 1 views

  •  
    Quoted from one of the authors in a separate interview: "We know that the spin states of atomic nuclei associated with semiconductor defects have excellent quantum properties at room temperature," said Awschalom, Liew Family Professor in Molecular Engineering and a senior scientist at Argonne National Laboratory. "They are coherent, long-lived and controllable with photonics and electronics. Given these quantum 'pieces,' creating entangled quantum states seemed like an attainable goal." Bringing the quantum world to the macroscopic scale could see some interesting applications in sensors, or generally entanglement-enhanced applications.
  •  
    They were previously working on the same concept in N-V centers in diamond (as a semiconductor). Here the advantage is that SiC could in principle be integrated with Si or Ge. Anyway, it's all about controlling coherence. In the next 10 years some breakthroughs are expected in the field of semiconductor spintronics, but quantum computing in this way still lies on the horizon.
LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres, CERN, EMBL and ESA in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely on the Large Hadron Collider through the large-scale, international ATLAS experiment, as well as with the molecular biology and earth observation. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
LeopoldS

Google and NASA Launch Quantum Computing AI Lab | MIT Technology Review - 0 views

  •  
    Any idea if what the Canadians claim to sell is closer to a quantum computer than what they did in 2011? (I remember Luzi's comment back then that it had nothing to do with a quantum computer.) Canada being a member state of ESA ... should we start getting interested?
santecarloni

Rydberg atom simulates Trojan asteroids - physicsworld.com - 3 views

  •  
    The atom may not be a planetary system, but under specific circumstances it can behave like one. That is the curious finding of physicists in Austria and the US, who have confirmed a 1994 prediction that, in the presence of an applied electromagnetic field, electrons in highly excited atomic states should behave like the Trojan asteroids of Jupiter.
  •  
    Bohr's model finally not so wrong?
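    For context, the Trojan analogy refers to the L4/L5 Lagrange points of the restricted three-body problem. A minimal sketch (textbook geometry, assumed Jupiter mass fraction) locates the Sun-Jupiter L4 point and checks the Routh criterion under which Trojan-like librating states are stable:

    ```python
    import math

    # Units: Sun-Jupiter separation = 1; mu is Jupiter's mass fraction
    # (assumed approximate value)
    mu = 9.54e-4

    # Barycentric rotating frame: Sun at (-mu, 0), Jupiter at (1 - mu, 0)
    sun = (-mu, 0.0)
    jupiter = (1.0 - mu, 0.0)

    # L4 forms an equilateral triangle with the two primaries
    l4 = (0.5 - mu, math.sqrt(3) / 2)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    print(round(dist(l4, sun), 9), round(dist(l4, jupiter), 9))  # 1.0 1.0

    # Routh criterion: L4/L5 are linearly stable when mu < (1 - sqrt(23/27))/2
    mu_crit = (1 - math.sqrt(23 / 27)) / 2
    print(mu < mu_crit)  # True: Jupiter's Trojans (and the analogy) are stable
    ```

    The 1994 prediction maps this geometry onto a Rydberg electron driven by a circularly polarized field, with the field playing the role of the second primary.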
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late 80s / early 90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
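    The "recognition through action" idea can be caricatured in a few lines. The toy below is an invented example (not Beer's or Aloimonos' actual setup; the objects and views are hypothetical): two objects share an ambiguous front view, so a passive classifier stays undecided, while an agent that can act (look from the side) resolves the ambiguity.

    ```python
    # Hypothetical viewpoint-dependent appearances of two objects
    VIEWS = {
        "mug":  {"front": "round", "side": "handle"},
        "bowl": {"front": "round", "side": "smooth"},
    }

    def passive_guess(obj):
        """Classify from the front view only (no action taken)."""
        front = VIEWS[obj]["front"]
        return [name for name, v in VIEWS.items() if v["front"] == front]

    def active_guess(obj):
        """If the front view is ambiguous, act: look from the side."""
        candidates = passive_guess(obj)
        if len(candidates) > 1:
            side = VIEWS[obj]["side"]
            candidates = [n for n in candidates if VIEWS[n]["side"] == side]
        return candidates

    print(passive_guess("mug"))  # ['mug', 'bowl'] -- cannot decide
    print(active_guess("mug"))   # ['mug'] -- the action resolved the ambiguity
    ```

    Of course this hard-codes the "moment" of recognition that Beer's agents avoid; it only illustrates the cheaper point that acting can substitute for richer perception.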
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain with electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands - reflexes built up over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.