Advanced Concepts Team - Group items tagged: data

Beniamino Abis

Northern and southern hemisphere climates follow the beat of different drummers - 0 views

  •  
    Over the last 1000 years, temperature differences between the Northern and Southern Hemispheres were larger than previously thought. Using new data from the Southern Hemisphere, researchers have shown that climate model simulations overestimate the links between climate variations across the Earth, with implications for regional predictions.
Thijs Versloot

The Port - Hackathon at CERN - apply now - 3 views

  •  
    Interdisciplinary teams of handpicked individuals, chosen for their field-leading expertise and innovative minds, combine humanitarian questions with state-of-the-art science, cutting-edge technology and endless imagination. Organised by THE Port Association, hosted by CERN (IdeaSquare tbc) and with partners from other non-governmental organisations, a three-day problem-solving hackathon will be devoted to humanitarian, social and public interest topics. Interdisciplinary teams of selected participants will work together in the fields of: communication - transport - health - science - learning - work - culture - data
jcunha

Artificial Intelligence to predict solar flares - 0 views

  •  
    A team from Stanford shows the possibility of predicting solar flares using AI techniques and data from NASA's SDO observatory.
Christophe Praz

Scientific method: Defend the integrity of physics - 2 views

  •  
    Interesting article about theoretical physics theories vs. experimental verification. Can we state that a theory can be so good that its existence supplants the need for data and testing? If a theory is proved to be untestable experimentally, can we still say that it is a scientific theory? (Not in my opinion.)
  •  
    There is an interesting approach by Feynman that it does not make sense to describe something of which we cannot measure the consequences. So a theory that is so removed from experiment that it cannot be backed by it is pointless and of no consequence. It is a bit like the statement "if a tree falls in the forest and nobody is there to hear it, does it make a sound?". We would typically extrapolate to say that it does make a sound. But actually nobody knows - you would have to take some kind of measurement. But even more fundamentally, it does not make any difference! For all intents and purposes there is no point in forcing a prediction that you cannot measure and that therefore does not reflect any event in your world.
  •  
    "Mathematics is the model of the universe, not the other way round" - M. R.
Thijs Versloot

Scikit-learn is an open-source machine learning library for Python. Give it a try here! - 5 views

  •  
    Browsing Kaggle...
  •  
    Very nice library, we actually use it for GTOC7.
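For anyone who wants to try it, here is a minimal sketch of what using scikit-learn looks like (the bundled Iris dataset and a random forest are just illustrative choices, not what was used for GTOC7):

```python
# Minimal scikit-learn example: train a classifier on a bundled toy
# dataset and report its accuracy on a held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same fit/predict interface applies across the library's estimators, which makes it easy to swap models in and out.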
annaheffernan

Highly accurate quantum accelerometers - 5 views

  •  
    Their accuracy is orders of magnitude better than what is currently being used; however, at the moment it sounds like quite a large setup -> they're working on getting it down to 1 m^3 :o. Still, any gravity-mapping instruments could benefit from these in the future.
  •  
    Actually GPS is much more accurate, but as it doesn't work under water, the only alternative (without building an underwater GPS equivalent using probes) is to use cumulative accelerometer data. But as this is prone to drifting over time (see the sketch after this thread), quantum systems like this can help improve the accuracy significantly.
  •  
    Very true :). I was thinking, though, that when you want to remove 'noise' from any gravity-mapping experiment, highly accurate accelerometers are required, like those used in GOCE.
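To make the drift argument above concrete, here is a minimal sketch (the bias and noise values are hypothetical): a tiny constant accelerometer bias, integrated twice for dead reckoning, produces a position error that grows quadratically with time.

```python
# Dead-reckoning drift: a small constant accelerometer bias integrates
# into a position error that grows roughly as 0.5 * bias * t^2.
import numpy as np

dt = 0.1                            # sampling interval [s]
t = np.arange(0.0, 3600.0, dt)      # one hour of navigation
bias = 1e-4                         # hypothetical accelerometer bias [m/s^2]
noise = 1e-3                        # hypothetical white noise level [m/s^2]

rng = np.random.default_rng(0)
a_err = bias + rng.normal(0.0, noise, t.size)   # acceleration error signal

v_err = np.cumsum(a_err) * dt       # integrate once -> velocity error
p_err = np.cumsum(v_err) * dt       # integrate twice -> position error

print("position error after 1 h: %.0f m" % abs(p_err[-1]))
# approx. 0.5 * 1e-4 * 3600**2 ~ 650 m, despite a bias of only 1e-4 m/s^2
```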
Ma Ru

Ambition - 0 views

shared by Ma Ru on 15 Mar 13
  •  
    Today we released the Astro Drone app. People who have the Parrot AR drone can freely download the game. While they fly their drone in the real world, they are trying to dock to the ISS in the virtual world. But the app is more than a game. Players can choose to participate in a scientific crowdsourcing experiment that aims to improve autonomous capabilities of space probes, such as landing, obstacle avoidance, and docking. If they participate, the app extracts visually salient features from the images made by the drone's camera. The features are then combined with estimates of the drone's state and uploaded. The data are then used in research aiming to improve robot navigation.
  •  
    Visit the main ESA website and you'll be greeted with a 6-minute Rosetta promo movie by a kickass Polish artist... P.S. You can also find the video here. P.P.S It seems I've just discovered a way to hijack old diigo entries ;-)
jcunha

First Terahertz Amplifier "Goes to 11" - 2 views

  •  
    Guinness World Record breaking: "first radio amplifier operating at terahertz frequencies could lead to communications systems with much higher data rates, better radar, high-resolution imaging that could penetrate smoke and fog, and better ways of identifying dangerous substances, say the researchers who built it". Built from HEMTs (High Electron Mobility Transistors) made of InP (Indium Phosphide), this is a new milestone on the road to THz applications.
Thijs Versloot

Scotland's Renewable Sector Generated Over 100% of Electricity Needs In October - 0 views

  •  
    The Scottish renewable energy sector is one of the world's best performing, and new data from WeatherEnergy have shown that October was a "bumper month" for the country, generating more than enough electricity from renewable sources to power the country (Clean Power, November 5th, 2014).
Thijs Versloot

Laser link offers high-speed delivery - 0 views

  •  
    New technology demonstration of a laser communication link to offer high-speed, near-real-time delivery of data from space... Oh, that's us... :)
Thijs Versloot

Advanced AI May Be Coming to Smartphones | MIT Technology Review - 2 views

  •  
    Software that roughly mimics the way the brain works could give smartphones new smarts, leading to more accurate and sophisticated apps for tracking everything from workouts to emotions. The software exploits an artificial-intelligence technique known as deep learning, which uses simulated neurons and synapses to process data.
Dario Izzo

Critique of 'Debunking the climate hiatus', by Rajaratnam, Romano, Tsiang, and Diffenba... - 8 views

  •  
    Hilarious critique of a quite important paper from Stanford trying to push the agenda of global warming... "You might therefore be surprised that, as I will discuss below, this paper is completely wrong. Nothing in it is correct. It fails in every imaginable respect."
  • ...4 more comments...
  •  
    To quote Francisco: "If at first you don't succeed, use another statistical test." A wiser man shall never walk the earth.
  •  
    why is this just put on a blog and not published properly?
  •  
    If you read the comments it's because the guy doesn't want to put in the effort. Also because I suspect the politics behind climate science favor only a particular kind of result.
  •  
    Just a footnote here: the climate warming aspect is not driven by an agenda of presenting the world with evil. If one looks at big journals with high outreach, it is not uncommon to find articles presenting climate warming as something not bringing the doom that extremists are promoting with marketing strategies. Here is a recent article in Science: http://www.ncbi.nlm.nih.gov/pubmed/26612836 Science's role is to look at the phenomenon and notice what is observed. And here is one saying that the acidification of the ocean due to the increase of CO2 (an observed phenomenon) is not advancing destructively for coccolithophores (a key type of plankton that builds its shell out of carbonates), as we were expecting, but rather fertilises them! Good news in principle! It could as well be argued by the sceptics with high "doubting inertia" that 'it could be because CO2 is not rising in the first place', but one must not forget that one can doubt the global increase in T with statistical analyses, because it is a complex variable, but not the CO2 increase compared to preindustrial levels. In either case: case 1: agenda for 'the world is warming' => [put random big energy company here] sells renewable energies; case 2: agenda for 'the world is fine' => [put random big energy company here] sells oil as usual. The fact that in both cases someone is going to make profits does not correlate (still no adequate statistical test found for it?) with the fact that the science needs to be more and more scrutinised. The blog of the statistics professor at Univ. Toronto looks like an interesting approach (I have not understood all the details), and the paper above is from JPL authors, among others.
Luís F. Simões

Nature's special issue on Interdisciplinarity - 2 views

  • Nature’s special issue probes how scientists and social scientists are coming together to solve the grand challenges of energy, food, water, climate and health. This special scrutinizes the data on interdisciplinary work and looks at its history, meaning and funding. A case study and a reappraisal of the Victorian explorer Richard Francis Burton explore the rewards of breaking down boundaries. Meanwhile, a sustainability institute shares its principles for researchers who work across disciplines. Thus inspired, we invite readers to test their polymathy in our lighthearted quiz.
Paul N

Computers Learn How to Paint Whatever You Tell Them To - 3 views

  •  
    Most self-respecting artists wouldn't agree to paint a portrait of a toilet in the middle of a field. Fortunately, advancements in artificial intelligence have given computers the ability to imagine just about any scenario, no matter how bizarre, and illustrate it. Take a look at this image.
  •  
    Those are some creepy faces among them... This is also just completely random, isn't it?
  •  
    Well, it is biased towards the data it was trained on. Computing a net is pretty deterministic. But not everything is perfectly correlated yet. Still, nice progress.
Marcus Maertens

AI at Google: our principles - 4 views

  •  
    Google is taking a position here, but can they live up to their own standards?
  •  
    " Avoid creating or reinforcing unfair bias." Thats the very definition of the AI used today. If you learn from a dataset, you are biased to that data set. No escape from it.
Marion Nachon

Frontier Development Lab (FDL): AI technologies to space science - 3 views

Applications might be of interest to some: https://frontierdevelopmentlab.org/blog/2019/3/1/application-deadline-extended-cftt4?fbclid=IwAR0gqMsHJCJx5DeoObv0GSESaP6VGjNKnHCPfmzKuvhFLDpkLSrcaCwmY_c ...

technology AI space science

started by Marion Nachon on 08 Apr 19 no follow-up yet
domineo

Rocking puts adults to sleep faster and makes slumber deeper | Science News - 2 views

  •  
    First really strong evidence that the vestibular system affects sleep architecture, sleep stability and sleep spindles. If there is an effect due to a changing acceleration there might also be an effect of no gravity vector. We'll find out when I get the space shuttle data.
Ma Ru

Map of all geo-tagged articles on Wikipedia - 4 views

  •  
    I know you like these... [Edit] And by the way, this website also contains more practical stuff, like this
  • ...1 more comment...
  •  
    they must have tricked the data in favour of Poland ...
  •  
    of course, "they" being Polish Wikipedia contributors who geo-tag like mad...
  •  
    Have you had a look at Japan? It looks like they just geo-tagged all their train stations.
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather how to program them well enough... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as to say the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (And research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most, I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is, the article is just too short and at a too popular level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called "theory" that we can put on a computer and run in a finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; this will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor (see the sketch at the end of this thread). In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as any brain gets started :) that is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
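On the chaos point raised earlier in this thread (the Lorenz attractor), here is a minimal sketch of sensitivity to initial conditions: two trajectories of the standard Lorenz system, started 1e-9 apart, end up macroscopically far from each other after a short integration time.

```python
# Sensitivity to initial conditions in the Lorenz system: two
# trajectories starting 1e-9 apart diverge by many orders of magnitude.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4000)

sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
sol_b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval)

separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
print("initial separation:", separation[0])
print("final separation:  ", separation[-1])
```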