
Advanced Concepts Team - Group items tagged: predictability


johannessimon81

Mathematicians Predict the Future With Data From the Past - 6 views

  •  
    Asimov's Foundation meets ACT's Tipping Point Prediction?
  •  
    Good luck to them!!
  •  
    "Mathematicians Predict the Future With Data From the Past". GREAT! And physicists probably predict the past with data from the future?!? "scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future". Big deal! That's what any scientist does anyway... "cliodynamics"!? Give me a break!
  •  
    still, some interesting thoughts in there ...

    "Then you have the 50-year cycles of violence. Turchin describes these as the building up and then the release of pressure. Each time, social inequality creeps up over the decades, then reaches a breaking point. Reforms are made, but over time, those reforms are reversed, leading back to a state of increasing social inequality. The graph above shows how regular these spikes are - though there's one missing in the early 19th century, which Turchin attributes to the relative prosperity that characterized the time. He also notes that the severity of the spikes can vary depending on how governments respond to the problem.

    Turchin says that the United States was in a pre-revolutionary state in the 1910s, but there was a steep drop-off in violence after the 1920s because of the progressive era. The governing class made decisions to rein in corporations and allowed workers to air grievances. These policies reduced the pressure, he says, and prevented revolution. The United Kingdom was also able to avoid revolution through reforms in the 19th century, according to Turchin. But the most common way for these things to resolve themselves is through violence.

    Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops - just like in ecology. "In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks," he writes. "As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats."

    There are competing theories as well. A group of researchers at the New England Complex Systems Institute - who practice a discipline called econophysics - have built their own model of political violence and …"
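The predator-prey feedback Turchin borrows from ecology is the classic Lotka-Volterra system, and the boom-and-bust behaviour he describes falls out of a few lines of simulation. A minimal sketch, with purely illustrative rates and populations (nothing here is fitted to Turchin's historical data):

```python
import numpy as np

# Lotka-Volterra predator-prey toy model: prey (mice) grow on their own,
# predators (weasels) grow by eating prey, and the coupling alone produces
# periodic booms and busts - no external clock required.
alpha, beta = 1.0, 0.1      # prey growth rate, predation rate (assumed)
gamma, delta = 1.5, 0.075   # predator death rate, conversion rate (assumed)
prey, pred = 10.0, 5.0      # illustrative starting populations
dt = 0.01

history = []
for _ in range(5000):       # 50 time units of crude Euler integration
    dprey = (alpha * prey - beta * prey * pred) * dt
    dpred = (delta * prey * pred - gamma * pred) * dt
    prey, pred = prey + dprey, pred + dpred
    history.append((prey, pred))

# `history` traces repeated cycles: prey peaks first, predators peak shortly
# after, both crash, and the cycle restarts - the feedback loop in the quote.
```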
  •  
    It's not the scientific activity described in the article that is uninteresting, on the contrary! But the way it is described is just a bad joke. Once again the results themselves are seemingly not sexy enough, and thus something is sold as the big revolution even though it's just the application of the oldest scientific principles in a slightly different way than before.
pacome delva

Can Google Predict the Stock Market? - ScienceNOW - 2 views

  •  
    in related news: Twitter Mood Predicts The Stock Market "An analysis of almost 10 million tweets from 2008 shows how they can be used to predict stock market movements up to 6 days in advance" http://www.technologyreview.com/blog/arxiv/25900/ http://arxiv.org/abs/1010.3003
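The arXiv paper behind that claim (Bollen et al.) looked for correlations between a daily Twitter "mood" index and stock movements several days later. A minimal sketch of the underlying lagged-correlation idea, on synthetic data (the actual study used a Granger-causality analysis of DJIA closes; here a 3-day lag is simply baked into the fake data):

```python
import numpy as np

# Synthetic stand-ins: a daily "mood" index and market returns that secretly
# follow the mood with a 3-day lag plus noise.
rng = np.random.default_rng(0)
mood = rng.normal(size=400)
returns = 0.4 * np.roll(mood, 3) + 0.9 * rng.normal(size=400)

# Scan lags of 1..6 days; the correlation should peak at the true 3-day lag.
for lag in range(1, 7):
    r = np.corrcoef(mood[:-lag], returns[lag:])[0, 1]
    print(f"mood vs returns {lag} day(s) later: corr = {r:+.2f}")
```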
  •  
    not overly impressive: "The Google data could not predict the weekly fluctuations in stock prices. However, the team found a strong correlation between Internet searches for a company's name and its trade volume, the total number of times the stock changed hands over a given week."
  •  
    Likewise, I can predict the statistical properties of white noise :-)
  •  
    the problem is that usually the google search queries and the twitter updates happen after a crisis, for example. I don't really think that people all over the world suddenly realised that Lehman would collapse and started googling it like crazy before it collapsed. More likely they did it afterwards.
Athanasia Nikolaou

Measuring the predictability of life outcomes with a scientific mass collaboration | PNAS - 3 views

  •  
    This is a social sciences paper trying to make use of ML. Quote from text: "Social scientists studying the life course must find a way to reconcile a widespread belief that understanding has been generated by these data - as demonstrated by more than 750 published journal articles using the Fragile Families data (10) - with the fact that the very same data could not yield accurate predictions of these important outcomes." "(...) In other words, the submissions were much better at predicting each other than at predicting the truth."
  •  
    an important message to learn from
Thijs Versloot

The World's Fair 2014 - Isaac Asimov's predictions 40 years ago - 3 views

  •  
    Isaac Asimov's predictions of the year 2014, made back in 1964. Truly amazing to read how close his sharp mind turned out to be at that time (the Cold War; Yuri Gagarin had just gone into space; Fortran had first appeared 7 years earlier). The last prediction also came true I think, however the solution was not psychiatry... instead we invented Facebook, Twitter and Instagram
  •  
    Also, he predicted that solar power stations would power the places on Earth where neither solar power nor fission (?) would be available... Not there yet
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact an absolute statement like "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Francesco Biscani

Slashdot Science Story | String Theory Predicts Behavior of Superfluids - 0 views

  •  
    "String theory" and "predict" in the same sentence?!?
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most: I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is, the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
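For reference, the sensitivity being debated here is easy to demonstrate numerically: two Lorenz trajectories started a billionth apart separate exponentially fast, even though the attractor itself (the statistics) stays the same. A minimal sketch with the standard parameters and a crude Euler integrator:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system, the classic chaotic attractor."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # almost identical initial condition

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.1f}   separation = {np.linalg.norm(a - b):.2e}")

# The separation explodes from 1e-9 to order one: individual trajectories are
# unpredictable, while the attractor's overall shape and statistics stay stable.
```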
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting them to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small, tiny functional unit, adding neurons and complexity slowly and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
Luís F. Simões

Why Is It So Hard to Predict the Future? - The Atlantic - 1 views

  • The Peculiar Blindness of Experts: Credentialed authorities are comically bad at predicting the future. But reliable forecasting is possible.
  • The result: The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference. They were bad at short-term forecasting and bad at long-term forecasting. They were bad at forecasting in every domain. When experts declared that future events were impossible or nearly impossible, 15 percent of them occurred nonetheless. When they declared events to be a sure thing, more than one-quarter of them failed to transpire. As the Danish proverb warns, “It is difficult to make predictions, especially about the future.”
  • Tetlock and Mellers found that not only were the best forecasters foxy as individuals, but they tended to have qualities that made them particularly effective collaborators. They were “curious about, well, really everything,” as one of the top forecasters told me. They crossed disciplines, and viewed their teammates as sources for learning, rather than peers to be convinced. When those foxes were later grouped into much smaller teams—12 members each—they became even more accurate. They outperformed—by a lot—a group of experienced intelligence analysts with access to classified data.
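Tetlock's tournaments scored forecasters with the Brier score: the mean squared error between stated probabilities and what actually happened. A minimal sketch tying it to the 15-percent figure quoted above (the numbers come from the quote; the scoring code is just the textbook formula):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is perfect; always answering 50% scores 0.25; 1.0 is perfectly wrong."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    return float(np.mean((probs - outcomes) ** 2))

# 100 events an expert called "impossible" (p = 0), of which 15 happened anyway:
outcomes = [1] * 15 + [0] * 85
print(brier_score([0.00] * 100, outcomes))   # 0.15   - the overconfident expert
print(brier_score([0.15] * 100, outcomes))   # 0.1275 - a calibrated forecaster
```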
  • This article is adapted from David Epstein’s book Range: Why Generalists Triumph in a Specialized World.
jcunha

Accelerated search for materials with targeted properties by adaptive design - 0 views

  •  
    There has been much recent interest in accelerating materials discovery. High-throughput calculations and combinatorial experiments have been the approaches of choice to narrow the search space. The emphasis has largely been on feature or descriptor selection or the use of regression tools, such as least squares, to predict properties. The regression studies have been hampered by small data sets, large model or prediction uncertainties and extrapolation to a vast unexplored chemical space with little or no experimental feedback to validate the predictions. Thus, they are prone to be suboptimal. Here an adaptive design approach is used that provides a robust, guided basis for the selection of the next material for experimental measurements by using uncertainties and maximizing the 'expected improvement' from the best-so-far material in an iterative loop with feedback from experiments. It balances the goal of searching materials likely to have the best property (exploitation) with the need to explore parts of the search space with fewer sampling points and greater uncertainty.
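The 'expected improvement' criterion in the abstract has a standard closed form when the regressor returns a Gaussian mean and uncertainty for each candidate. A minimal sketch with toy numbers (assuming a larger property value is better; nothing here comes from the paper's actual data):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far):
    """EI acquisition function: expected amount by which a candidate beats the
    best material measured so far, balancing predicted mean (exploitation)
    against model uncertainty (exploration)."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero uncertainty
    z = (mu - best_so_far) / sigma
    return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy candidate pool: regression predictions (mu) and their uncertainties.
mu = np.array([1.2, 1.5, 0.9, 1.4])
sigma = np.array([0.05, 0.30, 0.80, 0.10])
print(expected_improvement(mu, sigma, best_so_far=1.3))
# Note how the low-mean but very uncertain candidate outranks the safe one
# with the slightly better mean - the exploitation/exploration trade-off.
```

In the iterative loop the abstract describes, the highest-EI candidate is synthesized and measured, the model is refit with the new data point, and the cycle repeats.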
pacome delva

Special relativity passes key test - 2 views

  • Granot and colleagues studied the radiation from a gamma-ray burst – associated with a highly energetic explosion in a distant galaxy – that was spotted by NASA's Fermi Gamma-ray Space Telescope on 10 May this year. They analysed the radiation at different wavelengths to see whether there were any signs that photons with different energies arrived at Fermi's detectors at different times.
  • According to Granot, these results "strongly disfavour" quantum-gravity theories in which the speed of light varies linearly with photon energy, which might include some variations of string theory or loop quantum gravity. "I would not use the term 'rule out'," he says, "as most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance. However, our observational requirement that such an energy scale would be well above the Planck energy makes such models unnatural."
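To see the scale of the test: in the linear models being disfavoured, two photons whose energies differ by dE arrive separated by roughly dt ~ (dE / E_QG) x (D / c). A back-of-envelope sketch with illustrative numbers (not the burst's actual parameters):

```python
# Linear Lorentz-invariance violation: dt ~ (dE / E_QG) * (D / c).
# All numbers below are rough, for scale only.
E_PLANCK_GEV = 1.22e19          # Planck energy in GeV
D_METERS = 7.0e25               # assumed distance to a far gamma-ray burst
C_MS = 3.0e8                    # speed of light, m/s
dE_GEV = 30.0                   # assumed energy gap between two photons

for e_qg in (0.1 * E_PLANCK_GEV, 1.0 * E_PLANCK_GEV, 10.0 * E_PLANCK_GEV):
    dt = (dE_GEV / e_qg) * (D_METERS / C_MS)
    print(f"E_QG = {e_qg / E_PLANCK_GEV:4.1f} x E_Planck -> delay ~ {dt:.3f} s")

# Fermi saw high- and low-energy photons arrive essentially together, so any
# such E_QG must sit above the Planck energy - hence "strongly disfavour".
```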
  •  
    essentially they made an experiment that does not prove or disprove anything - big deal - ... what is the scientific value of "strongly disfavour"??? I also like the sentence "most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance" ... but if this is true, WHAT IS THE POINT OF THE EXPERIMENT!!!! God, physics is in trouble ....
  •  
    hum, null-result experiments are not useless!!! there is always the hope of finding "something wrong", which would lead to a great discovery. On the state of theoretical physics (the "no exact predictions" quote), I totally agree that physics is in trouble... That's what happens when physicists don't care anymore about experiments...! All you can do now is draw "nice" graphs with upper bounds on some parameters of an all-tunable weird theory!
ESA ACT

Prediction of human errors by maladaptive changes in event-related brain networks -- Ei... - 0 views

  •  
    Predicting when humans are about to make errors - that would be great for EVA, piloting, ground control, etc.
santecarloni

[1101.6015] Radio beam vorticity and orbital angular momentum - 1 views

  • It has been known for a century that electromagnetic fields can transport not only energy and linear momentum but also angular momentum. However, it was not until twenty years ago, with the discovery in laser optics of experimental techniques for the generation, detection and manipulation of photons in well-defined, pure orbital angular momentum (OAM) states, that twisted light and its pertinent optical vorticity and phase singularities began to come into widespread use in science and technology. We have now shown experimentally how OAM and vorticity can be readily imparted onto radio beams. Our results extend those of earlier experiments on angular momentum and vorticity in radio in that we used a single antenna and reflector to directly generate twisted radio beams and verified that their topological properties agree with theoretical predictions. This opens the possibility to work with photon OAM at frequencies low enough to allow the use of antennas and digital signal processing, thus enabling software controlled experimentation also with first-order quantities, and not only second (and higher) order quantities as in optics-type experiments. Since the OAM state space is infinite, our findings provide new tools for achieving high efficiency in radio communications and radar technology.
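To make the "infinite state space" concrete: an OAM mode of order l corresponds to an azimuthal phase ramp exp(i*l*phi) across the radiating aperture. The paper generated it with a single antenna and reflector; the sketch below shows the equivalent digitally fed circular-array recipe its conclusions point towards (element count and mode number are arbitrary):

```python
import numpy as np

# Feed weights for a circular array of N elements that imprint OAM mode l:
# element n at azimuth phi_n is driven with phase l * phi_n, so the radiated
# wavefront twists l times per wavelength along the beam axis.
N = 8                                   # antenna elements on a circle (assumed)
l = 1                                   # OAM mode number (topological charge)
phi = 2 * np.pi * np.arange(N) / N      # element azimuth angles
weights = np.exp(1j * l * phi)          # complex excitation per element

for n, w in enumerate(weights):
    print(f"element {n}: phase = {np.degrees(np.angle(w)):7.1f} deg")

# Beams with different integer l are mutually orthogonal, and l is unbounded -
# this is the extra multiplexing capacity the abstract claims for radio links.
```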
  •  
    and how can we use this?
mkisantal

Reinforcement Learning with Prediction-Based Rewards - 3 views

  •  
    Prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity (reward for unfamiliar states). Learns some games without any extrinsic reward!
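The method in the post is Random Network Distillation: a predictor network is trained to match a fixed, randomly initialized target embedding of each observation, and its prediction error is paid out as intrinsic reward, so rarely visited states score high. A minimal linear sketch (dimensions, learning rate and the SGD update are illustrative, not the post's actual networks):

```python
import numpy as np

rng = np.random.default_rng(0)
D_OBS, D_EMB, LR = 16, 8, 0.01

W_target = rng.normal(size=(D_OBS, D_EMB))   # fixed random "target" network
W_pred = np.zeros((D_OBS, D_EMB))            # trained "predictor" network

def intrinsic_reward(obs):
    """Novelty bonus = predictor's error at matching the random embedding."""
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= LR * np.outer(obs, err)        # one SGD step toward the target
    return float(np.mean(err ** 2))

obs = rng.normal(size=D_OBS)
print([round(intrinsic_reward(obs), 3) for _ in range(5)])
# The same observation pays less and less - familiarity kills the reward,
# which is exactly what pushes the agent toward unfamiliar states.
```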
  •  
    Fun failure case: agent gets stuck in front of TV.
  •  
    Haven't read this article, but on a related note: curiosity and various metrics for it have been explored for some time in robotics (outside of RL) as a framework for exploring (partially) unfamiliar environments. I came across some papers on this topic applied to UAVs when prep'ing for a PhD app. This one (http://www.cim.mcgill.ca/~yogesh/publications/crv2014.pdf) comes to mind - which used a topic-modelling approach.
jaihobah

Machine Learning's 'Amazing' Ability to Predict Chaos - 2 views

  •  
    Researchers have used machine learning to predict the chaotic evolution of a model flame front.
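The technique behind the headline is reservoir computing: a large, fixed random recurrent network is driven by the chaotic signal and only a linear readout is trained. A minimal echo-state sketch on a stand-in chaotic series (the logistic map instead of the paper's Kuramoto-Sivashinsky flame front; all sizes and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, RHO, RIDGE, T = 300, 0.9, 1e-6, 3000

# Stand-in chaotic signal: the logistic map in its chaotic regime.
x = np.empty(T); x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

# Fixed random reservoir, rescaled to spectral radius RHO ("echo" condition).
W = rng.normal(size=(N, N))
W *= RHO / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

# Drive the reservoir with the signal, recording its states.
states = np.zeros((T, N)); r = np.zeros(N)
for t in range(T - 1):
    r = np.tanh(W @ r + w_in * x[t])
    states[t + 1] = r

# Train only the linear readout (ridge regression): state at t -> x[t].
S, y = states[500:T - 1], x[500:T - 1]
w_out = np.linalg.solve(S.T @ S + RIDGE * np.eye(N), S.T @ y)

# Closed loop: feed predictions back in and see how long the forecast tracks.
pred = float(r @ w_out)
for step in range(1, 6):
    r = np.tanh(W @ r + w_in * pred)
    pred = float(r @ w_out)
    print(f"{step} step(s) ahead: {pred:.4f}")
# It tracks well for a while, then diverges: chaos caps the horizon, but the
# usable window is what made the result "amazing".
```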
Dario Izzo

Kaggle: making data science a sport - 2 views

  •  
    Old post from Luis brought back from the graveyard... At least two good ideas to put there: 1) tipping-point prediction; 2) planetary phases for trajectory transfer; and probably many more if we think about it a bit more
LeopoldS

Meteorite Crashes In Russia, Panic Spreads (Updating) - 5 views

  •  
    Latest update: the European Space Agency says their experts "confirm there is no link between the meteor incidents in Russia and asteroid 2012DA14 flyby tonight". How did they find this? As they did not see this one coming, how could they come to that conclusion so early?
  •  
    As you can see from the videos of this meteorite it is coming in from an east to south-east direction (i.e. the direction of the sunrise, more or less). 2012DA14 is coming from due south as you can see here: http://www.wired.com/wiredscience/2013/02/how-to-watch-asteroid-2012-da14/ So the two objects seem to be coming from different directions - at least that would be my explanation.
  •  
    My point is that if you want to come to such a conclusion (that it is not rubble) you need to be able to reconstruct the orbits of both objects. 2012DA14 has been observed for one year only, but that is well enough. When was the meteor observed for the first time, so that we knew its orbit? Had it been observed before? If yes, why was the impact not predicted?
  •  
    If you can show that they come from different directions you know that they are not associated, even if you don't reconstruct their orbits.
  •  
    I don't think so. If both objects were part of the same body, they would be on different but intersecting orbits anyway, hence different directions. Anyway, I am not knowledgeable in atmospheric entry... But, with so little information about the object, I am surprised they are 100% certain it is not related to DA14. I think science requires more caution... With only the direction they are 100% sure, while the probability of such an event is itself extremely small; I am amazed... They can't even predict with 100% certainty where a piece of space debris will fall... plus, nobody considers the object being part of a bigger one that broke up during early entry (which has not been observed)... so many uncertainties and possible hypotheses... and I am not the only one :) http://www.infowars.com/russian-meteor-linked-to-da14-asteroid/
  •  
    it was not that evident to me either, but apparently with the right understanding it was quite clear; I was also amazed how quickly NASA published the likely trajectory of the Russian object - have a look at it: quite evident that these are not coming from the same body
  •  
    yes, now I get my 100% certainty with the reconstructed orbits, nothing else (http://wiki.nasa.gov/cm/blog/Watch%20the%20Skies/posts/post_1361037562855.html)... I still think that the ESA announcement was highly premature, but with a high probability of being right...
  •  
    Some more results on the topic (link to an arxiv article inside): http://www.bbc.co.uk/news/science-environment-21579422
santecarloni

[1111.3328] The quantum state cannot be interpreted statistically - 1 views

  •  
    Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory....
LeopoldS

IBM's Five Predictions for the Next Five Years - Businessweek - 1 views

  •  
    nothing revolutionary but coming from IBM ....
santecarloni

Was a metamaterial lurking in the primordial universe? - physicsworld.com - 1 views

  •  
    A scientist in the US is arguing that the vacuum should behave as a metamaterial at high magnetic fields. Such magnetic fields were probably present in the early universe, and therefore he suggests that it may be possible to test the prediction by observing the cosmic microwave background (CMB) radiation - a relic of the early universe that can be observed today.